Posts posted by woah
-
@STiFU Been a while, but I hope you don't mind me bugging you again on this topic. I lurk in a certain VR discord and saw this paper mentioned: https://iovs.arvojournals.org/article.aspx?articleid=2613346. It was hypothesized that the conclusions of this research may have significant implications for dynamic focus VR displays, and now I'm wondering about this as well. The general assumption of the, I guess, "XR community" is that various forms of artificial defocus alone (and not optical vergence) can serve as feedback to guide the eye's accommodation to a desired focus depth. Under that assumption, even VR displays capable of showing only a single varying focus depth (as opposed to multifocal displays) can function: as the user's eyes move from an in-focus part of the image to an out-of-focus part, the artificial defocus blur rendered in that part is considered sufficient feedback to trigger the eyes to accommodate to the new focus depth, even though the optical vergence of that part of the image is still incorrect.
But, assuming I'm understanding the paper correctly, without the presence of "genuine" optical vergence corresponding to each potential focus depth in the scene, there wouldn't be sufficient feedback for the eye to accommodate to that depth correctly. There are apparently many accommodation cues for the human eye, but the claim here seems to be that true optical vergence is necessary. And wouldn't that mean a multifocal display is necessary?
It seems interesting to me because, from what I've gathered over the past few years, the varifocal prototypes in various labs are said to "kind of" work for certain people and not work at all for others. So, even assuming eyetracking were perfect, if they are nonetheless unable to provide the required feedback to guide accommodation, then the very kind of dynamic focus display that everyone is banking on (for the near term) to address the major optical issue with modern VR headsets would seem to be a dead end. You would instead need something like the CREAL lightfield headset I mentioned a while back. And Facebook recently pushed their timeline for varifocal headsets out from 2022 to ~2030.
(again, assuming I'm even understanding this correctly)
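For context, the "artificial defocus" in question is usually approximated with a thin-lens blur model: render each pixel with a blur circle proportional to how far its depth is from the eye's assumed focus. Here's a minimal sketch of the quantity involved (my own illustration, not anything from the paper; the 4 mm pupil is just a typical assumed value):

```python
import math

def defocus_diopters(focus_m: float, object_m: float) -> float:
    # Magnitude of defocus between where the eye is focused and where
    # the object actually is, in diopters (1/m).
    return abs(1.0 / focus_m - 1.0 / object_m)

def blur_angle_arcmin(focus_m: float, object_m: float, pupil_mm: float = 4.0) -> float:
    # Small-angle approximation of the retinal blur-circle size:
    # angular blur ~ pupil diameter (m) * defocus (diopters).
    blur_rad = (pupil_mm / 1000.0) * defocus_diopters(focus_m, object_m)
    return math.degrees(blur_rad) * 60.0
```

E.g. an eye focused at 0.5 m looking at an object at 2 m sees ~1.5 D of defocus, around 20 arcmin of blur with a 4 mm pupil. But note this ideal blur is the same whether the object is in front of or behind the focal plane, which is exactly why the sign-of-defocus question matters.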
-
Right there with you with respect to that desire to have good black levels again.
Also looking forward to Micro-OLED displays that may be coming to VR headsets next year, assuming they've addressed mura and black smear. Unfortunately, I doubt they'll be dynamic focus displays.
-
https://www.youtube.com/watch?v=Cz8ObjoYhLQ
This is also a really neat Half-Life themed short
-
Continuing on from my previous posts about this topic, Carmack actually addressed the state of their varifocal technology in the above talk. I guess the much-coveted Half Dome varifocal prototypes they demonstrated (well, "demonstrated" as in "showed a through-the-lens video of it") still have lots of problems and didn't really work well outside of the lab. There were also problems with cost and with glasses. Unsurprisingly, varifocal that isn't "perfect" is worse than fixed focus. It seems quite premature to me, then, that Lanman claimed varifocal was "almost ready for prime time" 1.5 years or so ago. Carmack hopes they can collect a bunch of eyetracking data across wider populations with their next headset (to increase accuracy, I suppose), but this seems to confirm varifocal isn't going to be a feature from them any time soon. But just how accurate and robust does eyetracking have to be for this to work in a consumer product? E.g., if eyetracking running at >200 Hz screws up your focus once every minute, does that create an unacceptable user experience?
https://skarredghost.com/2021/10/22/creal-ar-vr-lightfields
Then there's this demo of CREAL which approximates a light field: "CREAL states that its innovation is in not calculating low-resolution lightfields for many positions, but few high-quality resolution lightfields in a few selected positions around the eye of the user". The impressions are very exciting to me because it sounds like it's another step along the path of addressing the primary issues I have with VR:
Quote: "the objects that fell in the lightfield region appeared more realistic than everything I have ever tried in VR in my life: the virtual elements felt so alive, crisp, and nuanced. I could change focus, and especially I could also focus on a little text that was written in the world, seeing it clearly and in focus, reading it very naturally. I usually have difficulties in reading text in VR, but in the lightfield region of this headset, the high resolution and the ability to focus on it made reading it incredibly easy."
However, there's no actual eyetracking and the lightfield portion of the display is limited to a mere 30 degrees (the display is foveated, there's a standard fixed focus display around the perimeter and the transition between the two is abrupt). I have to wonder if it's possible to use a similar kind of display but with eyetracking so the lightfield region follows your eye--sort of similar to what @STiFU mentioned a while back (though that was with a holographic display).
-
On 10/3/2021 at 10:48 PM, jaxa said:
I'm not sure what the performance boost will be in real systems, but I think it's going to be somewhere between 50% and 500%. At the absurd end of the range, you would have a tiny portion of the screen rendering at 8K or 16K for paracentral vision, and progressively lower resolutions for near/mid/far peripheral vision.
Beyond the performance boost, if it can lower the amount of data that needs to be sent to the headset, that could help wireless (not standalone) headsets. For example, 8K @ 240Hz is about 8 gigapixels per second. If you can lower the pixel count by 90%, it's closer to 4K @ 90Hz.
Ultimately, I think we want to see an ultra wide FOV of about 200 degrees horizontal, comparable to 8K or 16K resolution using microLED panels, at as many as 1000 FPS. That would be the end goal in 10 or 20 years.
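Sanity-checking the pixel-rate arithmetic in the quote above (assuming the standard 16:9 "8K" and "4K" resolutions):

```python
def gigapixels_per_sec(width: int, height: int, hz: int) -> float:
    # Raw pixel rate for a given resolution and refresh rate.
    return width * height * hz / 1e9

full_8k_240 = gigapixels_per_sec(7680, 4320, 240)  # ~8 Gpx/s, as stated
foveated = full_8k_240 * 0.10                      # after a 90% pixel cut
ref_4k_90 = gigapixels_per_sec(3840, 2160, 90)     # ~0.75 Gpx/s
```

The 90%-reduced figure does land within about 7% of the 4K @ 90Hz rate, so the comparison holds up.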
Heck, even 50% would be an incredible boost--that's like getting a new generation of GPUs. The numbers I've typically seen are between 30% and 200% but it will likely improve over time.
However, I don't think it's going to be anywhere close to the >1000% that Abrash was originally predicting. At the last Facebook (formerly Oculus) Connect, Carmack reined in those expectations. More recently he said this:
-
14 hours ago, jaxa said:
By variable focus you mean foveated rendering or varifocal adjustments (I forgot the term for it)? Because all headsets could use foveated rendering in the long run and it also requires eye tracking.
Varifocal adjustments, i.e. basically allowing each eye to correctly accommodate in a way that's matched to vergence. It should make VR much more comfortable, more immersive, and less limiting (especially when it comes to near-field interactions). Right now the focus depth is fixed and it sucks.
EDIT: Foveated rendering may be a thing as well but it seems to have been way over-hyped (in terms of realistic performance gains)
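To put rough numbers on the vergence-accommodation mismatch I mean (a sketch with assumed typical values: 63 mm IPD and a 1.4 m fixed focal distance, not any specific headset's spec):

```python
import math

def vergence_angle_deg(distance_m: float, ipd_m: float = 0.063) -> float:
    # Angle between the two eyes' lines of sight when converged on a
    # point at distance_m (63 mm is a typical interpupillary distance).
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

def va_conflict_diopters(object_m: float, fixed_focus_m: float = 1.4) -> float:
    # Mismatch between where the eyes converge and where the display
    # forces them to focus, in diopters; fixed_focus_m = 1.4 is an
    # assumed typical headset focal distance.
    return abs(1.0 / object_m - 1.0 / fixed_focus_m)
```

A near-field object at 0.3 m gives ~2.6 D of conflict against the assumed 1.4 m fixed focus, far beyond the few tenths of a diopter usually considered comfortable, which is why near-field interactions suffer the most.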
-
Curious things happening on the PCVR front. A new headset, codenamed "Deckard", was found in the SteamVR files and Arstechnica has confirmed that it's real. Also other things suggesting a split rendering system (mixing parts of rendering between a PC and processing within the headset), "inside out" tracking that also works with lighthouse tracking, wireless, and possibly standalone functionality. Seems awfully similar to Valve's patents for split rendering, a new kind of tracking sensor, and headstraps with integrated wireless and processing.
Most interesting to me, though, is that based on public filings from a 2020 lawsuit, it's been revealed that in 2018 Valve invested $10m in the company ImagineOptix. IO is an optics manufacturer that uses photolithographic processes to pattern thin-film (a few micrometers thick) liquid crystal optics that can be layered and electrically controlled. IO entered a contract with Valve which included a Master Supply Agreement and the construction of a factory to mass produce these optics for Valve. The factory was finished in mid-2020, and in early 2021 Valve filed ~10 patents describing the application of the technology to a variety of things important for VR.
From Valve's patents, they want to use the technology not just for variable focus but also for optical/eye correction in the headset (i.e. no glasses/contacts), optical blur and dynamic angular-resolution adjustment across different regions of the FOV, pancake-lens-like features, and many other things. The varifocal aspect is similar to Facebook's "Half Dome 3" prototype (which they showed in 2019 in a short video), but apparently Valve was already making moves to mass produce similar liquid crystal lenses a year prior. They also recently filed a patent for a form of eyetracking I haven't seen used much, which would be necessary for most of this stuff to work at all.
Of course, patents and leaks don't necessarily mean anything about actual products, Valve could cancel it, accurate eyetracking is hard, and if they actually release a VR headset that's this advanced it would fundamentally change the VR experience--it seems too good to be true. A solid-state lens that can be electrically controlled to perform so many dynamic optical functions is like something out of science fiction. On the other hand, they built a factory to mass produce these lenses and Arstechnica says speculations about these lenses are "on the right track", so it's somewhat tantalizing.
-
It definitely seems like a neat device. I personally have no interest in actually using something like it, so I see no reason to get one. However, there are two things that stick out to me:
(1) If I were a kid, this would be the perfect on-ramp to PC gaming. It's cheap (a decent GPU costs more than this) and has everything you need integrated (screen, battery, IO)--imagine if in the 90s you could get a fully capable PC for just $225 ($400 adjusted for inflation). It's powerful enough to play the latest games, and FSR will extend its life span. It's simple/streamlined through SteamOS, but you can hack around with it as you can any PC. You can upgrade the storage and bring it to a friend's house (which the kids love, I guess--mobility doesn't matter to me).
And when you're ready to take off the training wheels, you can connect a mouse, keyboard, and external monitor. And then longer term you'll be primed to buy a desktop PC. I see a lot of Valve enthusiasts lining up with reservations to buy this thing "just because it's a cool device" but I hope Valve goes out of their way to market this to kids--that's where this could be really successful (I'm thinking a lot of these existing Steam users won't actually use it much because ... why not just use your PC?)
(2) This will hopefully warm AMD up to the idea of making cheap PC gaming SoCs. Consoles are cheap not just due to subsidization but also due to the efficiencies that come with the tight integration of mass produced SoCs. And most PC gamers don't even upgrade their PCs--they just buy a whole new system all at once, so the direct benefits of modularity and specialization are lost on them. So if Valve could convince AMD to make SoCs with performance in line with the major consoles (that's the target games are designed around anyway), that could go a long way toward making decent PC gaming systems cheaper. Right now just a decent GPU costs as much as a modern console. From what Gabe has stated, they're probably even subsidizing this thing--which helps justify their 30% take if this is the direction they're going.
-
I never had a good experience when setting other people up with Google Photos. The desktop syncing applications were unreliable--they would constantly stall and need to be restarted--and also CPU intensive. The duplicate detection didn't really work. The behavioral documentation was cryptic--I had to go to 3rd-party websites just to get a good grasp of it. Support was practically nonexistent (as is typical for Google "products").
-
By the way, here is another company working on the VAC problem but using lightfields and it looks like they have some neat prototypes https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/
I guess coincidentally the company, "CREAL", is also pronounced "See Real"
-
1 hour ago, STiFU said:
A more descriptive term for pupil detection is probably pupil localization: You have a calibrated camera setup and try to locate the pupils in 3d space with that. This task can usually be performed fairly accurately.
Eye-tracking on the other hand usually refers to detecting what you are looking at, so you can control the mouse with your gaze, for example. Pupil detection is actually one of the required steps in optical eye tracking. After the pupil center has been detected, the eye ball center also has to be estimated. Then, a ray is shot from the eye ball center through the pupil center and intersected with the display plane to get the display coordinates you look at. As you can imagine, estimation errors add up in these various steps, which makes eye-tracking a rather inaccurate technology, especially when free head movement and uncalibrated scenarios are involved.
Thanks, that makes a lot of sense. I imagine determining the eye ball center is a very difficult problem, with the eye not being a rigid body.
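If I'm reading that right, the final ray-intersection step looks something like this (made-up coordinates in a headset frame where the display plane sits at z = display_z):

```python
def gaze_point_on_display(eye_center, pupil_center, display_z):
    # Ray from the estimated eyeball center through the detected pupil
    # center, intersected with the display plane z = display_z.
    ex, ey, ez = eye_center
    px, py, pz = pupil_center
    dx, dy, dz = px - ex, py - ey, pz - ez   # gaze direction
    t = (display_z - ez) / dz                # ray parameter at the plane
    return (ex + t * dx, ey + t * dy, display_z)
```

Note that t is large when the display plane is far relative to the eyeball-to-pupil distance, so small errors in either 3D estimate get multiplied by t at the display--presumably the error accumulation being described.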
-
On 12/21/2020 at 4:26 PM, STiFU said:
...Yes, you would still need pupil detection, but that is far more robust than eye-tracking...
Quick question, because I'm having trouble finding more information on this: What is the difference between eye tracking and pupil detection? Is one just trying to determine direction, while the other is trying to determine the complex deformation of the eye?
-
3 hours ago, STiFU said:
I left academia three years ago and started working in the industry, so I might've missed some of the most recent developments (especially related to VR as I absolutely cannot stomach it), but I will gladly listen to that talk.
So, the concepts shown up until minute 50:00 have all pretty much existed for quite some time. However, it was nice to see some working prototypes of them, especially the liquid crystal varifocal lens (dynamically adjustable focal length without using moving parts). Afterwards, the focal surface was introduced, which was totally new to me and looks really exciting, to be honest. Using a spatial light modulator to implement what is essentially per-pixel varying focal length is an ingenious idea!! At my previous institute, we had only used SLMs in the context of holography-based 2d-image-projection, which really serves no purpose at all except seeing if it works or not. The captured results of the focal surface looked somewhat off to me, though. I can't really put my finger on what it is, but the blur etc. didn't look right. Possibly non-linear distortions because multiple wavelengths pass through the spatial light modulator, which is really only designed for one singular wavelength? If it is that, it could easily be solved by using one SLM per color component and using laser light as an illuminator. If it is not, I hope they get it sorted, as the idea seems really promising.
I am not even sure one would still need eye-tracking with good focal surfaces. You are correct in your assessment that eye-tracking is extremely error-prone. For my experiments, I had to go to extreme lengths to get eye-tracking working properly for my first approach. I ended up carefully modeling all possible 3d-viewing eye movements in a Kalman filter to get rid of major tracking noise and augmented that with a realtime stereo-3d-disparity estimation (which I implemented in CUDA) to be able to tell what point in 3d-space the user most likely looked at. It worked fairly well, but might still not be accurate enough for vergence-driven accommodation control.
Considering the problems with eye-tracking, I still think that holography is the most promising approach. Researchers have long demonstrated that it is possible to create actual holographic displays, but the problem is the computational side of things. It is incredibly costly to calculate a hologram in real time. That company in Dresden I mentioned earlier had a brilliant idea to solve this problem. If you want to construct the full optical wavefront of a point in 3d-space, the resulting hologram occupies every pixel of your display. So, to render a full scene, you'd pretty much need to render a full frame (of extremely high resolution) for each 3d-point in your scene, and all those holograms are superimposed on each other to form the final result. However, what if you don't need the full optical wavefront? After all, the wavefront only has to be accurate where the pupils of the viewer are located. If you only calculate the holograms for a very narrow range of angles, the subholograms actually only span a significantly reduced number of pixels locally, reducing the computational complexity by a HUGE margin. Yes, you would still need pupil detection, but that is far more robust than eye-tracking.
So there you have it, my opinion on Facebook's new technology. Thanks for sharing that nice talk.
I gave a talk at Electronic Imaging in San Francisco myself in 2016 and had some nice holidays exploring the west coast afterwards. So, seeing something from Electronic Imaging always gives me some warm nostalgia.
Awesome, thanks so much for the impressions on it. I'm not going to pretend that I understand anything more than the high-level concepts, but I don't get to hear directly from actual researchers (former or otherwise) very often, so it's hard to ground myself. A few years ago the impression I got from Abrash was that we'd have varifocal by now or very soon, but every subsequent year he's pushed his predictions further into the future (and the latest is basically "I don't know when"). Sometimes I get the impression that his optimistic predictions are as much targeted at higher-ups in the company (who may not want to wait 15 years for a technology to develop) as they are at developers and some of the public. And I'm sure Valve is also working on something targeted at consumers (well, enthusiasts), but nobody can get a word out of them about anything.
However, now it seems that if there will be any short-term progress here, it will involve a major breakthrough in eyetracking, or something more radical/unexpected. In addition, it seems there will be many iterations on varifocal. If we could just get to the point where the visual comfort of a VR headset is comparable to an 800x600 CRT monitor from 1995, I'd be pretty satisfied and would feel good about the state of the tech. Honestly, most of the friends I've coaxed into buying headsets rarely ever use them anymore due to a variety of issues like this.
The thing about the company in Dresden is interesting. Do you know if they were approaching a computational complexity suitable for real time rendering?
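For my own intuition, here's a back-of-envelope version of the reduction you describe (the point count, SLM resolution, and 64x64 patch size are all invented numbers, just to see the scale):

```python
# Rough cost model of the sub-hologram idea: every 3d scene point
# contributes a fringe pattern. Full-wavefront holography writes it
# across the entire display; restricting it to a narrow angular range
# confines it to a small local patch of pixels.
def hologram_cost(num_points: int, pixels_per_point: int) -> int:
    return num_points * pixels_per_point

display_px = 3840 * 2160                       # hypothetical SLM resolution
full = hologram_cost(1_000_000, display_px)    # full wavefront per point
narrow = hologram_cost(1_000_000, 64 * 64)     # narrow-angle sub-holograms
speedup = full / narrow                        # ~2000x fewer pixel writes
```

Even with these made-up figures, confining each point's contribution to a small patch cuts the pixel updates by three orders of magnitude, which makes it plausible this is what brings real-time holography within reach.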
-
6 hours ago, STiFU said:
You can wait a long time for that. I've written a whole PhD thesis on this matter! There were some ideas to use lenses with adjustable focal length, but that approach is rather impractical. There were some advances in real-time holographic displays by a research facility in Dresden, but that company eventually lost its funding, which is a huge bummer: on the one hand, their approach was really promising, and on the other, holography is the only way the accommodation-vergence conflict can truly be solved. I have not heard of any other groups further researching realtime holography after that, apparently.
In my thesis, I developed two methods that tried to reduce the conflict in real time.
- I used an eye-tracker to detect where the user is looking and slowly shifted those contents back to the display where the conflict does not exist. Obviously, this approach has its limitations, and experimental results did not show a significant improvement.
- In approach 1, I slowly shifted convergence. In approach 2, I dropped the idea of eye tracking and solely investigated how shifting convergence can be improved. When applying a convergence shift, you implicitly also get a distortion of depth. So in this approach, I countered this distortion of depth by a dynamic adjustment of the camera-baseline (i.e. the distance between the virtual eyes). This approach yielded significant improvement over the regular method, but was way too complex to be adopted by the filming industry. (Fun fact: the image content I used in this experiment was generated by a modified version of Doom 3 BFG.  )
As it currently stands, you have to rely on content producers to design the content such that the AV conflict is minimized. This of course is incredibly difficult with games, where stuff regularly flies out of the screen towards the viewer, which is the most critical thing to do as far as visual fatigue is concerned.
By the way, accommodation-vergence conflict is the correct term for this, as accommodation already means focus adjustment, while vergence relates to the convergence of the eyes on a point of interest.
Further reading:
https://eldorado.tu-dortmund.de/bitstream/2003/36031/1/Dissertation.pdf
I'm curious then what you think about Facebook's approach to the problem as detailed in the video below. Not because I'm doubting you, but Facebook has been drumming up a ton of hype and claiming that varifocal is "almost ready for prime time" and such, and I want to hear a researcher's perspective on it. The gist I get from this is that eyetracking is the major thing holding this technology back. Granted, FB has a vested interest in creating the impression that this problem is on the verge of being solved, given its necessity for their enormous loss-leading investment into VR.
At the same time Michael Abrash has said that eyetracking may never be good enough for varifocal and Carmack recently expressed significant doubts about eyetracking being anywhere near as accurate or as robust as people are hoping for.
-
I think I'm going to quit VR until the fixed-focus / vergence-accommodation conflict is solved. Or maybe just take a long break. It seems to be getting more and more difficult for my eyes to tolerate it, perhaps because I'm ever more conscious of it. It's also harder to tolerate when I'm tired. There are times in games where I get immersed enough to tune it out a bit, but for me it's the #1 reason that flat gaming is 10x more comfortable than VR.
For a VR nut like me this is actually a big deal but I'm just tired of feeling like I'm crossing my eyes while playing games.
-
On 9/23/2020 at 10:29 AM, lowenz said:
Dark Descent and Machine for Pigs are now Open Source (the engine)
The interactions in this game almost seem like they were designed for VR but limited by the M&K interface. I hope someone makes a VR mod.
-
If you've got a VR headset and like electronic music, check out The Wave and, at times, VRChat.
Various art exhibitions are being held in VR this year, e.g. through The Museum Of Other Realities https://store.steampowered.com/app/613900/Museum_of_Other_Realities/ https://twitter.com/museumor
For future sporting events and such, Google has figured out how to stream lightfield videos over a 300Mbps connection. Essentially lightfield video gives you a volume (e.g. 70cm^3) in which you can move around your head and the image is rendered correctly from every position and orientation (e.g. even mirrors work). Once headsets have variable focus I could see there being a huge market for this https://uploadvr.com/google-lightfield-camera-compression/
Of course nothing is quite like actually being present but this is really the next best thing and, for some people, it's probably good enough.
-
Good talk on how to solve VR's last major visual issue
-
the psychology of US investors in the current market
-
I'll be happy to get just 100 Mbps fiber optic in the next few weeks. This will be up from an average of 50 kB/s through Verizon Wireless with a data cap. I've been trying to get them to install this for 4 years.
-
I doubt we'll see very many games at this fidelity for a while, but it's nonetheless quite exciting--especially for developers, assuming it's really that easy. Also, no cut on revenue up to $1 million is really cool for a fully open-source top-of-the-line game engine.
However, what I found comical was how we're being shown these incredible graphics while in terms of interactions we're still stuck at "Press X To Interact".
-
My review:
The game is quite amazing in terms of production quality, atmosphere, immersion and the mechanics that they have implemented. It's hard to convey without actually experiencing it but I've never felt so "in" a virtual world before--I've played plenty of VR games but none of them have done anything close to this. The best way I can describe it is "dense". The graphics are often near photorealistic and nearly everything is intricately detailed. The audio is like nothing I've experienced before--almost every sound is accurately mapped spatially and feels so "correct". The environments are fleshed out to an absurd degree, so if you're the type of gamer that likes to spend a lot of time exploring and getting immersed in an environment, this game is a dream come true. You can pick up and prod just about anything, the physics are more well behaved than anything I've seen before, and the hand/finger mapping is so good that it makes you want to reach out and "feel" things--the way the haptics respond and the way your virtual hand conforms to the surfaces kind of compels you to do this (and parts of the environment will respond, e.g. the Xen fauna is a delight to interact with). It's an extraordinary work of art.
In a recent interview Valve said that they had tried to do this in the past but what would happen on the desktop is that, with the exception of a small minority, gamers would just speed right past everything and never look back. That meant many hours of developer time just being wasted so it couldn't be justified. However with VR they realized people were spending a lot of time interacting with environments at a higher resolution and this actually persisted over time (they originally assumed it would subside with the VR spectacle). After playing the game this makes total sense and has me pretty excited about the future of gaming. If developers can justify adding more depth to their games and the medium itself motivates players to actually experience that depth, that is only a good thing.
The mechanics, interactions, and AI they have implemented are done very well. Everything feels very rewarding to use and interact with--it's all polished to an absurd degree. Especially the gravity gloves; they are a joy to use. The combat is less about scale and more about small encounters. Every shot you take feels consequential. It was already clear to me that e.g. aiming a gun in real life is much harder than on a desktop, but what they've tried to do is take advantage of this to add intensity to what would otherwise have seemed banal (so e.g. an encounter with a few headcrabs becomes a big deal). The AI itself is actually more reminiscent of the HL1 AI (e.g. the grunt AI), and this is a good thing. This failed on some accounts due to the teleportation focus and some design decisions that followed from it--which brings us to the downsides.
Where the game leaves much to be desired is in the variety of mechanics implemented, the forms of locomotion, and the teleport focus. Much of what they've done *is* impressive, but only for a teleport game. The game supports smooth locomotion (and the actual movement is not a bad implementation at all), but it's clearly a game designed around the constraints of teleportation. The interactions, the AI, the combat, and the types of locomotion are all within the confines of what is viable with teleportation.
E.g.: there is no melee combat in the game whatsoever--perhaps one of the most obvious affordances of VR--because any compelling melee combat requires movement (being able to quickly dodge and advance on the enemy). The AI is much less difficult than it should be because teleport gamers can't strafe and back-step, which means the AI moves slowly, does not overwhelm you, and gives you plenty of time to take aim at it. To make things more difficult for smooth-locomotion players, on the higher difficulties they just increased enemy hit points (Combine can take like 6+ shots to the head on the highest difficulty), but I still didn't find it difficult because--as a smooth-locomotion player--I can just strafe to cover and carefully take shots at enemies that give you plenty of time to aim. You can't climb, swim, jump, run, drive vehicles, or do anything that doesn't map well to teleportation. There is a sort of "teleport mantle"--which works--but it's still underwhelming.
Given the nature of this game--that tons of new VR gamers are going to be jumping straight into it with no prior VR experience--it does make sense to design the game this way. However, if they make Half-Life 3 VR (see Anderson's link if you want to be spoiled), they can't do it like this, because the majority of regular VR users get used to smooth movement in short order and then never look back. Rather, they need to take full advantage of what are now "standard" forms of comfortable smooth movement and then port back to teleport for the minority that can't adapt.
E.g.: adding near field interactions and melee combat with AIs (thankfully it seems likely for HL3VR if you've spoiled the ending of HLA). They need to increase AI difficulty and tension not through hit points but rather through their ability to take cover and maneuver about quickly (VR is actually much better suited to low TTK due to the higher difficulty of aiming--makes each shot feel more rewarding and consequential). They need to incorporate the common forms of locomotion we enjoy in flat gaming--climbing, swimming, platforming, vehicles--and then expand on them in ways that take advantage of VR input (for example, geometry and physics based climbing mechanics with motion controllers). If, after they've implemented these things, the teleport counterparts are lame or clunky then that's unfortunate for that minority that can't adapt, but it is much better than leaving so much of what makes gaming compelling off of the table.
Overall the game is an undeniably incredible experience. I can't imagine what it would be like to experience this as your first VR game--it would probably be akin to that magic feeling that HL1 and HL2 gave me as a kid. The game deserves its praise. However, for a VR gamer that's seen a lot of cool VR mechanics from indie developers and that is not locomotively gimped, it's quite limited mechanically and I don't think this approach will work for their next VR Half-Life game. HLA needs to serve as the introductory experience--a beautiful work of art at that--but what comes next should retain this production quality but show the value proposition of VR mechanically.
Share your Status / What's on your mind?
in Off-Topic
Posted
Hmm, I see. I guess #2 is what I'm wondering about, then. If the artificial defocus ends up being insufficient for robustly driving accommodation and you need the actual wavefront curvature (optical vergence) to be correct, wouldn't that mean varifocal headsets can't work reliably? Like, I'm imagining some object in the periphery with an unknown focus depth and the combination of focus cues being insufficient to reliably ascertain the sign of defocus. Or perhaps the eye can eventually ascertain the sign of defocus but has to find it by trial and error; or perhaps it can determine the sign of defocus in 95% of circumstances but that last 5% presents difficulties; or perhaps some people are more reliant on certain focus cues than others--ultimately leading to a technology that "works" but still provides a bad consumer experience. I can imagine anything unexpected happening with dynamic focus being worse than fixed focus (actually, this is something Carmack mentioned in a Q&A session last year).
I'm just bringing this up as I try to untangle where the challenge and uncertainty lie for varifocal headsets (again, the primary hypothesis of XR folks being that wavefront curvature is entirely unnecessary as a focus cue, that artificial defocus is sufficient, and that eyetracking just needs to be more accurate). If eyetracking really is the only barrier, then I would assume that at some point more niche VR hardware companies would release something even if it only works for some people and not others--because the lack of dynamic focus is just so vexing a problem. I mean, for example, you have VR hardware companies like Pimax releasing high-cost ultra-wide-FOV headsets despite those headsets making most people sick with their peripheral distortion.
Moreover, I was really surprised to read in that paper and others (e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6911823/ ) how there are still many questions on the mechanisms of accommodation. For example, one theory on the mechanism of detecting optical vergence is that different shadows are cast by blood vessels in the retina when light is focused in front of or behind them, and that this is a sort of subconscious cue for accommodation.
Of course, I don't expect you to give me a concrete answer about any of this. I guess it's just helpful to hear from someone knowledgeable who has some distance from the industry.