ReShade regression
- Martigen
Topic Author
There appears to be a regression in ReShade with respect to processing AdaptiveSharpen.fx. I read a few posts regarding this and had been noticing it myself, so I ran a quick test with a prior version. I chose an older version at random, and indeed the performance returned.
Screenshots (I can't capture the ReShade UI, but you can see the FPS counter):
ReShade 3.3.2 - 102 fps
ReShade 4.2.1 and 4.3.0 (tested both, same result) - 49 fps
Note that I use a central shared directory for ReShade shaders, so this is the same AdaptiveSharpen.fx both times. It's also the latest version from the ReShade GitHub shaders repository.
So somewhere between 3.3.2 and 4.2.1 something changed.
Edit: I just happened to be testing with Wolfenstein, which is OpenGL, and I had been noticing it in another game (Star Traders: Frontiers), also OpenGL. And this chap here sees it with No Man's Sky (also OpenGL): reshade.me/forum/shader-troubleshooting/...en-reshade-4-0#30947
Testing with a DX game (Dying Light), there's no performance loss. So this applies to OpenGL only.
- crosire
AMD or NVIDIA? ReShade 4.0+ has a new compiler that is blazing fast, but at the cost of offloading optimization work to the driver compiler. It also does not handle arrays very well, which that particular shader makes heavy use of. Combine the two and you have to hope that the driver does a decent job of optimizing the code. NVIDIA usually is very good at that; AMD more often than not relies on the developer to optimize the shader manually instead, which in this case is bad. The DX compiler does a great job too and is vendor-independent, which is why you see no performance loss there.
- brussell
Shouldn't performance (on every platform and API) have a higher priority than compile time? If the new compiler leads to bad situations like this, I would prefer the pre-4.0 compiler.
- crosire
I decided having support for DX12 and Vulkan is more important. And I stand by that decision. It also stopped the "ReShade loads soooo slow" cries, so that was a plus.
And besides that, there is now an OpenGL extension that adds support for loading SPIR-V into OpenGL, which could quite possibly bring performance back to the same levels (since the new compiler can generate SPIR-V instead of GLSL, and AMD/NVIDIA are mainly investing in the SPIR-V pipeline now because of Vulkan). But driver support for that was buggy on NVIDIA until recently, so I haven't activated that feature yet (although it is already implemented).
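For context, the extension crosire is referring to is GL_ARB_gl_spirv (core in OpenGL 4.6): the application hands the driver a pre-compiled SPIR-V module via glShaderBinary and glSpecializeShader instead of GLSL source text. A minimal C++ sketch of that application-side path, assuming spirv already holds a SPIR-V binary; the helper itself is illustrative, not ReShade's actual code:
[code]
#include <cstdint>
#include <vector>
// Assumes an OpenGL 4.6 function loader (glad, GLEW, ...) is initialized.

GLuint load_spirv_fragment_shader(const std::vector<uint32_t>& spirv)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);

    // Hand the driver the SPIR-V module directly, bypassing the GLSL
    // front end; the IR can arrive already optimized.
    glShaderBinary(1, &shader, GL_SHADER_BINARY_FORMAT_SPIR_V,
                   spirv.data(),
                   static_cast<GLsizei>(spirv.size() * sizeof(uint32_t)));

    // Select the entry point; for SPIR-V this replaces glCompileShader.
    glSpecializeShader(shader, "main", 0, nullptr, nullptr);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE)
    {
        glDeleteShader(shader);  // query the info log here in real code
        return 0;
    }
    return shader;
}
[/code]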
- Martigen
Topic Author
crosire wrote: AMD or NVIDIA? ReShade 4.0+ has a new compiler that is blazing fast, but at the cost of offloading optimization work to the driver compiler. It also does not handle arrays very well, which that particular shader makes heavy use of. Combine the two and you have to hope that the driver does a decent job of optimizing the code. NVIDIA usually is very good at that; AMD more often than not relies on the developer to optimize the shader manually instead, which in this case is bad. The DX compiler does a great job too and is vendor-independent, which is why you see no performance loss there.

NVIDIA, 1080 Ti. Both CPU and GPU frame times peak massively in ReShade's stats when AdaptiveSharpen is enabled, but neither the CPU nor the GPU is being limited (I checked this to be sure; not that I'd expect it either, but I was curious). The CPU is a 6-core 3970X @ 4.6 GHz.
I figure perhaps there was a bug in the compiler with some operation, but if that's not the case, it sounds like the shader might need to be rewritten. Or is there another solution?
It's unfortunate, as AdaptiveSharpen is easily the best sharpener in the repository, though luckily this only affects OpenGL; otherwise I imagine it would have come up sooner.
crosire wrote: I decided having support for DX12 and Vulkan is more important. And I stand by that decision. It also stopped the "ReShade loads soooo slow" cries, so that was a plus.

I agree! DX12/Vulkan support is more important, and the compiler speed is amazing.

Just need to come up with a solution for this or any other shaders that might work the same way in future.
- Martigen
Topic Author
So is there a programmatic solution to this, such as enabling the older code path when a flag is set in a shader? (Of course, I say this realising you've probably thrown the old system away completely.) I don't know if other shaders are affected by the slowdown; none of the dozen or so I usually use are.
But this is easily the best sharpening filter. Do we just need to get it rewritten to bypass whatever is causing the slowdown? (Or would that fundamentally break how the shader works?)
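For what it's worth, the rewrite being asked about is fairly mechanical: replace a local array that is written and read through an index with individually named scalars, so that even a driver compiler doing little optimization can keep every value in a register instead of spilling the array to local memory. A toy sketch of that transformation, in C++ rather than the shader's actual HLSL, illustrating the pattern crosire described rather than AdaptiveSharpen's real internals:
[code]
// A host compiler optimizes both forms identically; the point is what a
// weak *shader* compiler would see when fed equivalent GLSL.
#include <cstdio>

// Array form: the local 'ring[]' is written and read via an index, which a
// naive compiler may place in (slow) local memory.
float blend_array(const float tap[8])
{
    float ring[8];
    for (int i = 0; i < 8; ++i)
        ring[i] = tap[i] * 0.125f;
    float sum = 0.0f;
    for (int i = 0; i < 8; ++i)
        sum += ring[i];
    return sum;
}

// Unrolled form: every intermediate is a named scalar, trivially mapped to
// registers with no array storage at all.
float blend_unrolled(const float tap[8])
{
    const float r0 = tap[0] * 0.125f, r1 = tap[1] * 0.125f,
                r2 = tap[2] * 0.125f, r3 = tap[3] * 0.125f,
                r4 = tap[4] * 0.125f, r5 = tap[5] * 0.125f,
                r6 = tap[6] * 0.125f, r7 = tap[7] * 0.125f;
    return ((r0 + r1) + (r2 + r3)) + ((r4 + r5) + (r6 + r7));
}

int main()
{
    const float taps[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    std::printf("%f %f\n", blend_array(taps), blend_unrolled(taps));
    return 0;
}
[/code]
Whether doing this across AdaptiveSharpen's taps and weights actually recovers the lost OpenGL performance would need measuring, and it may become moot once the SPIR-V path mentioned above is enabled.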
- Diego0920
Edit: Using LumaSharpen with 1.2 strength and a 0.020–0.040 limit seems to do the job just as well.
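For anyone who wants to try that, the two numbers map onto LumaSharpen's sharp_strength and sharp_clamp uniforms. A hypothetical preset excerpt, with the clamp picked from the middle of the suggested range (the uniform names are from CeeJay's LumaSharpen.fx; the exact values are whatever suits your game):
[code]
; Hypothetical ReShadePreset.ini excerpt (not an official recommendation)
Techniques=LumaSharpen

[LumaSharpen.fx]
sharp_strength=1.200000
sharp_clamp=0.030000
[/code]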