
TOPIC: Reshade regression

Reshade regression 2 weeks 10 hours ago #1

Hey Crosire,

There appears to be a regression in ReShade's processing of AdaptiveSharpen.fx. I read a few posts about this and had been noticing it myself, so I ran a quick test with a prior version. I picked an older version at random, and indeed performance returned.

Screenshots (I can't capture the ReShade UI, but you can see the FPS counter):

Reshade 3.3.2 - 102 fps


Reshade 4.2.1 and 4.3.0 (tested both, same result) - 49 fps


Note that I use a central shared directory for ReShade shaders, so it's the same AdaptiveSharpen.fx in both tests. It's also the latest version, as found in the ReShade GitHub shaders repository.

So somewhere between 3.3.2 and 4.2.1 something changed.

Edit: I just happened to be testing with Wolfenstein, which is OpenGL, and I had been noticing it in another game (Star Traders: Frontiers), also OpenGL. And this chap here sees it with No Man's Sky (also OpenGL): reshade.me/forum/shader-troubleshooting/...en-reshade-4-0#30947

Testing with a DX game (Dying Light), there's no performance loss. So this applies to OpenGL only.
Last Edit: 2 weeks 10 hours ago by Martigen.
The administrator has disabled public write access.

Reshade regression 1 week 6 days ago #2

AMD or NVIDIA? ReShade 4.0+ has a new compiler that is blazing fast, but at the cost of offloading optimization work to the driver compiler. It also does not handle arrays very well, which that particular shader makes heavy use of. Combine the two and you have to hope that the driver does a decent job of optimizing the code. NVIDIA usually is very good at that, AMD more often than not relies on the developer to optimize the shader manually instead, which in this case is bad. The DX compiler does a great job too and is vendor-independent, which is why you see no performance loss there.
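To illustrate the array point with a hedged sketch (in C, not the actual shader or compiler code; the function names and the 9-tap layout are made up for this example): an array-indexed kernel, similar in shape to the neighborhood loops in AdaptiveSharpen.fx, leaves the loop and indexed loads for the driver's compiler to unroll and promote to registers, while a hand-scalarized version of the same math needs almost no backend optimization at all.

```c
#include <assert.h>

/* Illustrative only: a 9-tap weighted sum, similar in shape to the
   neighborhood loops a sharpening shader runs over sampled pixels. */

/* Array-indexed form: the emitted code keeps the arrays and the loop,
   so the driver's compiler must unroll it and promote the values to
   registers itself -- the work the post-4.0 compiler offloads. */
float weighted_sum_indexed(const float px[9], const float w[9]) {
    float acc = 0.0f;
    for (int i = 0; i < 9; ++i)
        acc += px[i] * w[i];
    return acc;
}

/* Hand-scalarized form: the same math with no indexed accesses, so
   even a backend that optimizes little emits straight arithmetic. */
float weighted_sum_scalar(const float px[9], const float w[9]) {
    return px[0] * w[0] + px[1] * w[1] + px[2] * w[2]
         + px[3] * w[3] + px[4] * w[4] + px[5] * w[5]
         + px[6] * w[6] + px[7] * w[7] + px[8] * w[8];
}
```

Both forms compute the identical value in the same order; the only difference is how much optimization work is left for the driver, which is exactly where the vendors diverge.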
Cheers, crosire =)

Reshade regression 1 week 6 days ago #3

@crosire
Shouldn't performance (on every platform and API) have a higher priority than compile time? If the new compiler leads to bad situations like this, I would prefer the pre-4.0 compiler.

Reshade regression 1 week 6 days ago #4

The new compiler is absolutely necessary for DX12 and Vulkan support and has no performance deficit on DX, and none on OGL for the majority of shaders. The exception apparently being a select few, and most likely only on select vendors too.

I decided having support for DX12 and Vulkan is more important. And I stand by that decision. It also stopped the "ReShade loads soooo slow" cries, so that was a plus.

And besides that, there is now an OpenGL extension that adds support for loading SPIR-V into OpenGL, which quite possibly would bring performance back to the same levels (since the new compiler can generate SPIR-V instead of GLSL, and AMD/NVIDIA are mainly investing in the SPIR-V pipeline now because of Vulkan). But driver support for that was buggy on NVIDIA until recently, so I haven't activated that feature yet (although it is already implemented).
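For context, a hedged sketch (not ReShade code): the extension in question is, as far as I know, ARB_gl_spirv (core in OpenGL 4.6), which lets a program hand the driver a precompiled SPIR-V binary via glShaderBinary and glSpecializeShader instead of GLSL source. A SPIR-V module is a stream of 32-bit words whose first word is the fixed magic number 0x07230203, so a loader can cheaply check which kind of input it has. The helper below is hypothetical.

```c
#include <stddef.h>
#include <stdint.h>

/* SPIR-V modules start with a fixed 5-word header; word 0 is the magic
   number 0x07230203 and word 1 encodes the version (0x00010000 = 1.0). */
#define SPIRV_MAGIC 0x07230203u

/* Hypothetical helper: returns 1 if the blob looks like a SPIR-V module
   (the format consumed by glShaderBinary + glSpecializeShader under
   ARB_gl_spirv), 0 if it is presumably GLSL text or something else. */
int looks_like_spirv(const uint32_t *words, size_t word_count) {
    return word_count >= 5 && words[0] == SPIRV_MAGIC;
}
```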
Cheers, crosire =)
Last Edit: 1 week 5 days ago by crosire.
The following user(s) said Thank You: brussell, seri14

Reshade regression 1 week 6 days ago #5

crosire wrote:
AMD or NVIDIA? ReShade 4.0+ has a new compiler that is blazing fast, but at the cost of offloading optimization work to the driver compiler. It also does not handle arrays very well, which that particular shader makes heavy use of. Combine the two and you have to hope that the driver does a decent job of optimizing the code. NVIDIA usually is very good at that, AMD more often than not relies on the developer to optimize the shader manually instead, which in this case is bad. The DX compiler does a great job too and is vendor-independent, which is why you see no performance loss there.
Nvidia, 1080 Ti. Both CPU and GPU ms peak massively in ReShade's stats when AdaptiveSharpen is enabled, but neither CPU nor GPU is being limited (I checked this to be sure), not that I'd expect that either, but I was curious. The CPU is a 6-core 3970X @ 4.6GHz.

I figure perhaps there was a bug in the compiler with some operation, but if that's not the case, it sounds like the shader might need to be rewritten. Or is there another solution?

It's unfortunate, as AdaptiveSharpen is easily the best sharpener in the repository, though luckily this only affects OpenGL, otherwise I imagine it would have come up sooner.

crosire wrote:
I decided having support for DX12 and Vulkan is more important. And I stand by that decision. It also stopped the "ReShade loads soooo slow" cries, so that was a plus.
I agree! DX12/Vulkan is more important, and the compiler speed is amazing :) Thank you.

Just need to come up with a solution for this, or for any other shaders that might behave the same way in future.
Last Edit: 1 week 6 days ago by Martigen.