Reshade regression

  • Martigen
  • Topic Author
4 years 10 months ago - 4 years 10 months ago #1 by Martigen Reshade regression was created by Martigen
Hey Crosire,

There appears to be a regression in ReShade with respect to processing AdaptiveSharpen.fx. I read a few posts about this and had been noticing it myself, so I ran a quick test with a prior version. I chose an older version at random, and performance indeed returned.

Screenshots (I can't capture the ReShade UI, but you can see the FPS counter):

Reshade 3.3.2 - 102 fps


Reshade 4.2.1 and 4.3.0 (tested both, same result) - 49 fps


Note that I use a central shared directory for ReShade shaders, so this is the same AdaptiveSharpen.fx both times. It's also the latest version, as found in the ReShade GitHub shaders repository.

So somewhere between 3.3.2 and 4.2.1 something changed.

Edit: I just happened to be testing with Wolfenstein, which is OpenGL, and I had been noticing it in another game (Star Traders: Frontiers), also OpenGL. And this chap here sees it with No Man's Sky (also OpenGL): reshade.me/forum/shader-troubleshooting/...en-reshade-4-0#30947

Testing with a DX game (Dying Light), there's no performance loss, so this appears to apply to OpenGL only.

  • crosire
4 years 10 months ago #2 by crosire Replied by crosire on topic Reshade regression
AMD or NVIDIA? ReShade 4.0+ has a new compiler that is blazing fast, but at the cost of offloading optimization work to the driver compiler. It also does not handle arrays very well, which that particular shader makes heavy use of. Combine the two and you have to hope that the driver does a decent job of optimizing the code. NVIDIA usually is very good at that, AMD more often than not relies on the developer to optimize the shader manually instead, which in this case is bad. The DX compiler does a great job too and is vendor-independent, which is why you see no performance loss there.
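For anyone curious what that means in practice, here is a minimal illustrative sketch, with hypothetical names and assuming the stock ReShade.fxh helpers (ReShade::BackBuffer, ReShade::PixelSize, PostProcessVS), of the kind of array-indexed gather pattern in question; it is not the actual AdaptiveSharpen.fx source. If the effect compiler emits the array largely as-is, it is left to the driver's GLSL compiler to unroll the loops and keep the array in registers:

```hlsl
// Illustrative only -- hypothetical effect, not AdaptiveSharpen.fx itself.
#include "ReShade.fxh"

float3 GatherExamplePS(float4 pos : SV_Position, float2 uv : TEXCOORD) : SV_Target
{
    // A 3x3 neighbourhood stored in an array and addressed through loop
    // variables. A driver that fails to promote n[] to registers ends up
    // spilling it to slower local memory, which is where the vendor
    // differences show up.
    float3 n[9];
    [unroll]
    for (int i = 0; i < 9; i++)
    {
        float2 offset = float2(i % 3 - 1, i / 3 - 1) * ReShade::PixelSize;
        n[i] = tex2D(ReShade::BackBuffer, uv + offset).rgb;
    }

    float3 sum = 0;
    [unroll]
    for (int j = 0; j < 9; j++)
        sum += n[j];

    // Crude unsharp-mask result, purely to make the sketch complete.
    return saturate(n[4] + (n[4] - sum / 9.0) * 2.0);
}

technique GatherExample
{
    pass
    {
        VertexShader = PostProcessVS;
        PixelShader = GatherExamplePS;
    }
}
```

Keeping an array like n[] in registers rather than spilling it is the kind of optimization work that, per the explanation above, is now left to the driver.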

  • brussell
4 years 10 months ago #3 by brussell Replied by brussell on topic Reshade regression
@crosire
Shouldn't performance (on every platform and API) have a higher priority than compile time? If the new compiler leads to bad situations like this, I would prefer the pre-4.0 compiler.

  • crosire
4 years 10 months ago - 4 years 10 months ago #4 by crosire Replied by crosire on topic Reshade regression
The new compiler is absolutely necessary for DX12 and Vulkan support, and it has no performance deficit on DX and none on OGL for the majority of shaders. The exception is apparently a select few shaders, and most likely only on select vendors too.

I decided having support for DX12 and Vulkan is more important. And I stand by that decision. It also stopped the "ReShade loads soooo slow" cries, so that was a plus.

And besides that, there is now an OpenGL extension that adds support for loading SPIR-V into OpenGL, which quite possibly would bring performance back to the same levels (since the new compiler can generate SPIR-V instead of GLSL, and AMD/NVIDIA are mainly investing in the SPIR-V pipeline now because of Vulkan). But driver support for that has been buggy on NVIDIA until recently, so I haven't activated that feature yet (although it is already implemented).

  • Martigen
  • Topic Author
4 years 10 months ago - 4 years 10 months ago #5 by Martigen Replied by Martigen on topic Reshade regression

crosire wrote: AMD or NVIDIA? ReShade 4.0+ has a new compiler that is blazing fast, but at the cost of offloading optimization work to the driver compiler. It also does not handle arrays very well, which that particular shader makes heavy use of. Combine the two and you have to hope that the driver does a decent job of optimizing the code. NVIDIA usually is very good at that, AMD more often than not relies on the developer to optimize the shader manually instead, which in this case is bad. The DX compiler does a great job too and is vendor-independent, which is why you see no performance loss there.

Nvidia, 1080 Ti. Both CPU and GPU ms peak massively in ReShade's stats when AdaptiveSharpen is enabled, but neither the CPU nor the GPU is being limited (I checked to be sure), not that I'd expect that either, but I was curious. The CPU is a 6-core 3970X @ 4.6 GHz.

I figure perhaps there was a bug in the compiler with some operation, but if that's not the case, it sounds like the shader might need to be rewritten. Or is there another solution?

It's unfortunate, as AdaptiveSharpen is easily the best sharpener in the repository, though luckily this only affects OpenGL; otherwise I imagine it would have come up sooner.

crosire wrote: I decided having support for DX12 and Vulkan is more important. And I stand by that decision. It also stopped the "ReShade loads soooo slow" cries, so that was a plus.

I agree! DX12/Vulkan is more important, and the compiler speed is amazing :) Thank you.

We just need to come up with a solution for this, and for any other shaders that might work the same way in future.

  • Martigen
  • Topic Author
4 years 9 months ago #6 by Martigen Replied by Martigen on topic Reshade regression
Hey Crosire,

So is there a programmatic solution to this, such as enabling the older code path if a flag is set in a shader? (Of course, I say this realising you've probably thrown away the old system completely.) I don't know if other shaders are affected by the slowdown; none of the dozen or so I usually use are.

But this is easily the best sharpening filter. Do we just need to get it rewritten to bypass whatever is causing the slowdown? (or would that fundamentally break how the shader works?)
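On the rewrite question, here is a hedged sketch of one possible direction, using hypothetical names and assuming the stock ReShade.fxh helpers; it is not a patch for the real AdaptiveSharpen.fx, which does considerably more than this. The idea is simply to replace runtime array indexing with individually named samples, so the generated GLSL contains no arrays for the driver to optimize:

```hlsl
// Illustrative only -- shows the "no arrays" structure, not AdaptiveSharpen's math.
#include "ReShade.fxh"

float3 FetchOffset(float2 uv, float dx, float dy)
{
    return tex2D(ReShade::BackBuffer, uv + float2(dx, dy) * ReShade::PixelSize).rgb;
}

float3 UnrolledExamplePS(float4 pos : SV_Position, float2 uv : TEXCOORD) : SV_Target
{
    // Every tap is its own local variable; there is no indexed storage
    // for the driver compiler to promote or spill.
    float3 c = FetchOffset(uv,  0,  0);
    float3 n = FetchOffset(uv,  0, -1);
    float3 s = FetchOffset(uv,  0,  1);
    float3 w = FetchOffset(uv, -1,  0);
    float3 e = FetchOffset(uv,  1,  0);

    float3 blur = (n + s + w + e + c) / 5.0;
    return saturate(c + (c - blur) * 2.0); // placeholder sharpen, illustration only
}

technique UnrolledExample
{
    pass
    {
        VertexShader = PostProcessVS;
        PixelShader = UnrolledExamplePS;
    }
}
```

Whether a structure like that can reproduce AdaptiveSharpen's actual look is a separate question, but it is the sort of change that would take the array handling out of the driver's hands.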

  • v00d00m4n
4 years 9 months ago #7 by v00d00m4n Replied by v00d00m4n on topic Reshade regression
Can you please add the old compiler code path back (with a switch between the two) as a compatibility option that can be enabled in the options and is enabled by default in OpenGL? I see this as the only solution for this issue.

  • Diego0920
4 years 7 months ago - 4 years 6 months ago #8 by Diego0920 Replied by Diego0920 on topic Reshade regression
Yeah, AdaptiveSharpen runs very slowly in OpenGL games. Doom 3 BFG, Exanima, Quake, Homeworld and others can't use it without losing most of my framerate, and I'm forced to switch to FilmicAnamorphSharpen or others, but they aren't the same.

Edit: Using LumaSharpen with 1.2 strength and a 0.020-0.040 limit seems to do the job just as well.
