Track depth textures for VR #205
base: main
Conversation
…etrieved by the depth addon
color and depth textures match
…f it is the same texture with different bounds. This will happen if a game uses a single large texture for both eyes, in which case it is sufficient to apply effects only once.
…first eye individually, if necessary
This is a great start! I'm not fully happy with the way presentation is handled yet though. I think it would be better to integrate stereo support more deeply and keep the separate present events for the left/right eye submits (and instead pass a value that indicates whether this is a mono present, or a left or right stereo present). That way in the future we can e.g. also add support for the GUI in VR by rendering it in stereo (since the runtime would be aware of whether it should render the left or right part in the current present).
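A value like that could be sketched as a small enum passed along with the present event; the names below are purely hypothetical illustrations, not ReShade's actual API:

```cpp
// Hypothetical indicator an add-on could receive with each present event,
// distinguishing a mono present from the two stereo submits.
enum class present_eye { mono, stereo_left, stereo_right };

// Example consumer: decide whether a GUI pass should cover the left half
// of a side-by-side overlay for this present.
inline bool renders_left_half(present_eye eye)
{
    return eye == present_eye::mono || eye == present_eye::stereo_left;
}
```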
Sounds good. I did actually briefly consider using the viewport/scissor approach, but I didn't want to go that deep without your approval, and I wasn't sure if my (lack of) knowledge of Vulkan and DX12 would even allow me to implement that properly :D Still, I think that approach could have merit, and it also has the advantage that shaders would not have to be specialized to deal with single vs two-eye views - if that makes a difference for them. On that topic, there is one potential issue that might be worth considering. There are some effects out there that accumulate information over multiple frames (e.g. TAA from AstrayFX) by storing copies of previous frames in separate textures. These effects are obviously not aware of the possibility that …
Went with a single present call per VR frame now (f986346), as you had envisioned, since it simplifies backwards compatibility with effects and avoids problems with effects that accumulate information (like you pointed out). Both OculusVR and OpenXR have a single submit call to submit both eyes in one go, so this is more future-proof too.
One disadvantage is that post-processing may bleed from one eye to the other at the seam between them. But that shouldn't be that big of a problem. I'll rebase the pull request later.
Seems like we've both been busy for a while :) I actually just looked into rebasing this PR, but the more I think about it, the more I feel that the work here is now largely unnecessary. The majority of it was to support different potential ways of how depth textures might be used, but with the changed submission process, we are "limited" to the single big texture case, anyway :) Given that that appears to be the vast majority of existing games, not a huge loss. I think all that's really needed now is a simple check in the generic_depth addon's …
Yeah, sorry, didn't forget about this, but didn't get to do much ReShade work in the last couple of weeks. The changed submission still supports games that submit individual eye textures (they are copied to a single big side-by-side texture in …). The problem with picking a runtime in the depth add-on is a more generic problem that needs to be solved anyway, I think. It currently falls apart whenever there is more than one runtime. E.g. in a racing-simulator set-up with 3 monitors, where each monitor gets a separate runtime, the depth add-on will only pick a single depth buffer, rather than three different ones (since presumably racing games render those in separate steps). In VR we have the same problem (except with just two potential depth buffers, and the additional problem that there is a non-VR runtime instance that is entirely unimportant for rendering). Simply only using the VR runtime if there is one would fix it for this case, but wouldn't help for the general case, so I will need to think about whether there is a more generic solution that could tackle them all.
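One conceivable direction for the multi-runtime case, sketched here with entirely hypothetical types (this is not ReShade's actual API), is to key the depth-buffer selection by runtime instance instead of keeping a single global pick:

```cpp
#include <unordered_map>

// Hypothetical sketch: one depth-buffer selection per runtime instance,
// so a triple-monitor setup (three runtimes) or VR (an extra runtime)
// can each resolve their own depth buffer.
struct depth_buffer_selection
{
    const void *texture = nullptr; // depth texture chosen for this runtime
};

class per_runtime_depth_state
{
public:
    // Returns the selection for this runtime, creating a fresh one the
    // first time the runtime is seen.
    depth_buffer_selection &for_runtime(const void *runtime)
    {
        return _selections[runtime];
    }

private:
    std::unordered_map<const void *, depth_buffer_selection> _selections;
};
```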
Yes, but for depth support to work, you'll need the depth texture to match the single side-by-side texture, because otherwise you'd have to similarly copy the individual depth textures, and that's unfortunately not so straightforward to do (at least not with D3D11). I suppose you could use the extracted …
Great work! Are there any updates on the progress of this PR? Have the changes since been merged into main?
What's the status of this PR?
This took a while, but I now have depth support working on a range of Unity and Unreal VR games, and Skyrim.
Disclaimer: many effects using depth are probably too expensive for VR, anyway, and are also prone to shimmering artefacts due to slight mismatches in calculation between the two eyes. Therefore, this feature is probably not as useful for VR as for flat games. So not supporting depth for VR games in favour of a simpler VR integration is definitely an option :)
Anyway, here is a rough rundown of what I had to do to get depth working:
Getting color and depth texture dimensions to match
The majority of games I encountered seem to use a single big texture for both eyes during submit, and consequently their depth texture is the same. So far, ReShade has copied the submitted region for each eye, processed the copies separately, and then copied each region back. To get a matching depth texture during post-processing, this would also require a regional copy of the depth texture. Unfortunately, D3D11's `CopySubresourceRegion` does not support that for depth/stencil textures, so you'd have to use a (compute) shader for the copy. I did originally plan to add a `copy_resource_region` function to the `command_list` interface, but I found no clean way to implement a compute shader call within `d3d11::device_context_impl`.

Instead, I now always give the full color texture to the runtime, so that the original depth texture will match. If the game sends the same texture for both eyes with different regions, I only call the runtime for the first submit, so that we don't process both eyes twice in a frame. It's a bit less elegant, but it also appears to be ever so slightly faster for games that use the single-big-texture approach.
(This also invalidates my statistics hack in the other PR, but that's probably a good thing :D )
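The "only call the runtime for the first submit" logic could be sketched roughly like this; the names are hypothetical and this is not the actual implementation:

```cpp
// Hypothetical sketch: when both eyes are submitted as regions of one
// shared texture, only the first submit of a frame invokes the runtime's
// present path, so effects run once over the full side-by-side texture.
// Separate per-eye textures still each trigger a present.
struct vr_submit_tracker
{
    const void *last_texture = nullptr;
    bool seen_submit_this_frame = false;

    // Returns true when the runtime should be invoked for this submit.
    bool on_eye_submit(const void *texture)
    {
        if (seen_submit_this_frame && texture == last_texture)
            return false; // second eye of the shared texture: already processed
        seen_submit_this_frame = true;
        last_texture = texture;
        return true;
    }

    void on_frame_end()
    {
        seen_submit_this_frame = false;
        last_texture = nullptr;
    }
};
```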
Communicating with the depth plugin
To properly track the depth textures for VR, the depth addon needs a few extra pieces of information from the runtime. For a start, it needs to even recognize that it's dealing with a VR runtime, and it also needs the current eye and the submitted region to differentiate between the possible depth setups in a VR renderer. I added a small struct with a `set_data` call on the runtime that the depth addon can retrieve.

Tracking the depth textures
Given that there can be up to two separate depth textures, the state in the `state_tracking_context` (selected texture, view, potential backup) needed to be replicated for the VR eyes, so I extracted it into its own struct. I also added a timestamp of the last draw call to the counters so that we can decide which depth texture belongs to which eye (if separate textures are used). Most games probably render the left eye first, so that's the default assumption, but I also added a config option to swap the eyes if necessary.

Dealing with the options for depth setup in VR
I've encountered a couple of different ways that depth textures may be used in VR games, and this is the way I deal with them: