The Dark Mod Forums

woah

Member
  • Posts: 395
  • Days Won: 1

woah last won the day on July 18 2010

woah had the most liked content!

Reputation: 83 Excellent
1 Follower
3783 profile views
  1. Hmm, I see. I guess #2 is what I'm wondering about then. If the artificial defocus ends up being insufficient to robustly drive accommodation, and you need the actual wavefront curvature (optical vergence) to be correct, wouldn't that mean varifocal headsets can't work reliably? I'm imagining some object in the periphery with an unknown focus depth, and the combination of focus cues being insufficient to reliably ascertain the sign of defocus. Or perhaps the sign can eventually be ascertained, but only by a sort of trial and error; or it can be determined in 95% of circumstances, but that last 5% presents difficulties; or some people are more reliant on certain focus cues than others--ultimately leading to a technology that "works" but still provides a bad consumer experience. I can imagine that anything unexpected happening with dynamic focus would be worse than fixed focus (actually, this is something Carmack mentioned in a Q&A session last year). There's a toy illustration of the sign-ambiguity point after this post list.

     Just bringing this up as I try to untangle where the challenge and uncertainty lies for varifocal headsets (again, the primary hypothesis of XR folks being that wavefront curvature is entirely unnecessary as a focus cue, artificial defocus is sufficient, and eyetracking just needs to be more accurate). If eyetracking really is the only barrier, then I would assume that at some point the more niche VR hardware companies would release something even if it only works for some people and not others--because the lack of dynamic focus is such a vexing problem. For example, VR hardware companies like Pimax release high-cost, ultra-wide-FOV headsets despite those headsets making most people sick with their peripheral distortion.

     Moreover, I was really surprised to read in that paper and others (e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6911823/ ) how many open questions remain about the mechanisms of accommodation. For example, one theory on how the eye detects the sign of optical vergence is that blood vessels in the retina cast different shadows when light is focused in front of versus behind them, and that this serves as a sort of subconscious cue for accommodation. Of course I don't expect you to give me a concrete answer about any of this. It's just helpful to hear from someone knowledgeable who has some distance from the industry.
  2. @STiFU Been a while, but I hope you don't mind me bugging you again on this topic. I lurk in a certain VR discord and saw this paper mentioned: https://iovs.arvojournals.org/article.aspx?articleid=2613346. It was hypothesized that the conclusions of this research may have significant implications for dynamic focus VR displays, and now I'm kind of wondering about this as well.

     The general assumption of the, I guess, "XR community" is that various forms of artificial defocus (and not optical vergence) can serve as the sole feedback guiding the eye's accommodation to a desired focus depth, and thus even VR displays capable of showing only a single varying focus depth (as opposed to multifocal displays) can actually function: as the user's eyes move from the in-focus part of the image to an out-of-focus part, the artificial defocus blur generated in that part of the image is considered sufficient feedback to trigger the eyes to accommodate to the new focus depth (even though the optical vergence of that part of the image is still incorrect). I've put a small sketch of this single-plane situation after this post list. But, assuming I'm understanding the paper correctly, without "genuine" optical vergence corresponding to each potential focus depth in the scene, there wouldn't be sufficient feedback for the eye to accommodate to that depth correctly. There are apparently many accommodation cues for the human eye, but the claim here seems to be that true optical vergence is necessary. And wouldn't that mean a multifocal display is necessary?

     It seems interesting to me because, from what I've gathered over the past few years, the varifocal prototypes in various labs are said to "kind of" work for certain people and not work at all for others. So even assuming eyetracking were perfect, if these displays are nonetheless unable to provide the required feedback to guide accommodation, then the very kind of dynamic focus display that everyone is banking on (for the near term) to address the major optical issue with modern VR headsets would seem to be a dead end. You would instead need something like the CREAL lightfield headset I mentioned a while back. And Facebook recently pushed their timeline for varifocal headsets out from 2022 to ~2030. (Again, assuming I'm even understanding this correctly.)
  3. Right there with you on the desire to have good black levels again. Also looking forward to the Micro-OLED displays that may be coming to VR headsets next year, assuming they've addressed mura and black smear. Unfortunately, I doubt they'll be dynamic focus displays.
  4. https://www.youtube.com/watch?v=Cz8ObjoYhLQ This is also a really neat Half-Life themed short
  5. Really enjoyed this STALKER fan film https://www.youtube.com/watch?v=GvJ91D-N29g
  6. Continuing on from my previous posts about this topic: Carmack actually addressed the state of their varifocal technology in the above talk. I guess the much-coveted Half Dome varifocal prototypes they demonstrated (well, "demonstrated" as in "showed a through-the-lens video of") still have lots of problems and didn't really work well outside of the lab, plus there are problems with cost and with glasses. Unsurprisingly, varifocal that isn't "perfect" is worse than fixed focus. It seems quite premature to me, then, that Lanman claimed varifocal was "almost ready for prime time" a year and a half or so ago. Carmack hopes they can collect a bunch of eyetracking data across wider populations with their next headset (to increase accuracy, I suppose), but this seems to confirm varifocal isn't going to be a feature from them any time soon. But just how accurate and robust does eyetracking have to be for this to work in a consumer product? E.g. if eyetracking running at >200 Hz screws up your focus once every minute, does that create an unacceptable user experience? (There's a rough error-propagation sketch after this post list.)

     https://skarredghost.com/2021/10/22/creal-ar-vr-lightfields

     Then there's this demo from CREAL, which approximates a light field: "CREAL states that its innovation is in not calculating low-resolution lightfields for many positions, but few high-quality resolution lightfields in a few selected positions around the eye of the user". The impressions are very exciting to me because it sounds like another step along the path of addressing the primary issues I have with VR. However, there's no actual eyetracking, and the lightfield portion of the display is limited to a mere 30 degrees (the display is foveated: there's a standard fixed focus display around the perimeter, and the transition between the two is abrupt). I have to wonder if it's possible to use a similar kind of display but with eyetracking, so the lightfield region follows your eye--sort of similar to what @STiFU mentioned a while back (though that was with a holographic display).
  7. Heck, even 50% would be an incredible boost--that's like getting a new generation of GPUs. The numbers I've typically seen are between 30% and 200% (there's some toy pixel-budget arithmetic after this post list), but it will likely improve over time. However, I don't think it's going to be anywhere close to the >1000% that Abrash was originally predicting. At the last Oculus Facebook Connect, Carmack reined in those expectations. More recently he said this:
  8. Vari-focal adjustments, i.e. basically allowing each eye to correctly accommodate in a way that's matched to vergence. Should make VR much more comfortable, more immersive, and less limiting (especially when it comes to near-field interactions). Right now the focus depth is fixed and it sucks. EDIT: Foveated rendering may be a thing as well but it seems to have been way over-hyped (in terms of realistic performance gains)
  9. Curious things happening on the PCVR front. A new headset, codenamed "Deckard", was found in the SteamVR files, and Ars Technica has confirmed that it's real. There are also other things suggesting a split rendering system (mixing parts of rendering between a PC and processing within the headset), "inside-out" tracking that also works with lighthouse tracking, wireless, and possibly standalone functionality. Seems awfully similar to Valve's patents for split rendering, a new kind of tracking sensor, and headstraps with integrated wireless and processing.

     The most interesting thing to me, though, is that based on public filings from a 2020 lawsuit, it's been revealed that in 2018 Valve invested $10m in the company ImagineOptix. IO is an optics manufacturer that uses photolithographic processes to pattern thin-film (a few micrometers thick) liquid crystal optics that can be layered and electrically controlled. IO entered a contract with Valve which included a Master Supply Agreement and the construction of a factory to mass produce these optics for Valve. The factory was finished in mid-2020, and in early 2021 Valve filed ~10 patents describing the application of the technology to a variety of things important for VR. From Valve's patents, they want to use the technology not just for variable focus but also for optical/eye correction in the headset (i.e. no glasses/contacts), optical blur and dynamic angular resolution adjustment in different fields of the FOV, pancake-lens-like features, and many other things. The varifocal aspect is similar to Facebook's "Half Dome 3" prototype (which they showed in 2019 through a short video), but apparently Valve was already making moves to mass produce similar liquid crystal lenses a year prior. (There's a speculative sketch of how stacked switchable lenses add up after this post list.) They also recently filed a patent for a form of eyetracking I haven't seen used much, which would be necessary for most of this stuff to work at all.

     Of course, patents and leaks don't necessarily mean anything about actual products: Valve could cancel it, accurate eyetracking is hard, and if they actually released a VR headset this advanced it would fundamentally change the VR experience--it seems too good to be true. A solid state lens that can be electrically controlled to perform so many dynamic optical functions is like something out of science fiction. On the other hand, they built a factory to mass produce these lenses, and Ars Technica says speculations about these lenses are "on the right track", so it's somewhat tantalizing.
  10. It definitely seems like a neat device. I personally have no interest in actually using something like it, so I see no reason to get one. However, two things stick out to me:

     (1) If I were a kid, this would be the perfect on-ramp to PC gaming. It's cheap (a decent GPU costs more than this) and has everything you need integrated (screen, battery, IO)--imagine if in the 90s you could get a fully capable PC for just $225 ($400 adjusted for inflation). It's powerful enough to play the latest games, and FSR will extend its life span. It's simple/streamlined through SteamOS, but you can hack around with it as you can with any PC. You can upgrade the storage and bring it to a friend's house (which the kids love, I guess--mobility doesn't matter to me). And when you're ready to take off the training wheels, you can connect a mouse, keyboard, and external monitor--and then longer term you'll be primed to buy a desktop PC. I see a lot of Valve enthusiasts lining up with reservations to buy this thing "just because it's a cool device", but I hope Valve goes out of their way to market it to kids--that's where it could be really successful. (I'm thinking a lot of these existing Steam users won't actually use it much, because... why not just use your PC?)

     (2) This will hopefully warm AMD up to the idea of making cheap PC gaming SoCs. Consoles are cheap not just due to subsidization but also due to the efficiencies that come with the tight integration of mass-produced SoCs. And most PC gamers don't even upgrade their PCs--they just buy a whole new system all at once, so the direct benefits of modularity and specialization are lost on them. So if Valve could convince AMD to make SoCs with performance in line with the major consoles (that's the target games are designed around anyway), that could go a long way toward making decent PC gaming systems cheaper. Right now a decent GPU alone costs as much as a modern console. From what Gabe has stated, they're probably even subsidizing this thing--which helps justify their 30% take, if this is the direction they're going.
  11. I never had a good experience when setting other people up with Google Photos. The desktop syncing applications were unreliable--they would constantly stall and need to be restarted--and were also CPU-intensive. The duplicate detection didn't really work. The behavioral documentation was cryptic--I had to go to 3rd-party websites just to get a good grasp of it. And support was practically nonexistent (as is typical for Google "products").
  12. By the way, here is another company working on the VAC problem, but using lightfields--and it looks like they have some neat prototypes: https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/ I guess coincidentally, the company, "CREAL", is also pronounced "See Real".
  13. Thanks, that makes a lot of sense. I imagine determining the eyeball center is a very difficult problem, with the eye not being a rigid body.
  14. Quick question, because I'm having trouble finding more information on this: What is the difference between eye tracking and pupil detection? Is one just trying to determine direction, while the other is trying to determine the complex deformation of the eye?
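Sketch for post 1 -- a toy, back-of-the-envelope illustration (mine, not from the paper) of why the *sign* of defocus is the hard part: in a first-order geometric model, the blur circle depends only on the magnitude of the dioptric focus error, so blur size alone can't tell the eye which way to accommodate. The 17 mm reduced-eye focal length and the pupil/depth numbers are assumptions chosen purely for illustration.

```python
def blur_circle_diameter_mm(pupil_mm: float, eye_focus_D: float, object_D: float) -> float:
    """First-order geometric blur circle diameter on the retina.

    Small-angle model: the blur angle is pupil diameter times the dioptric
    focus error, projected through an assumed 17 mm reduced-eye focal length.
    """
    EYE_FOCAL_LENGTH_MM = 17.0            # reduced-eye model, an assumption
    defocus_D = eye_focus_D - object_D    # signed focus error in diopters
    # The diameter depends on |defocus|: the sign drops out of the magnitude.
    return abs(defocus_D) * (pupil_mm / 1000.0) * EYE_FOCAL_LENGTH_MM

# Eye focused at 2 D (0.5 m); an object 0.5 D nearer and one 0.5 D farther
# produce the *same* blur size, so blur magnitude alone can't indicate the
# direction to accommodate -- hence the interest in sign cues like the
# retinal blood-vessel shadows mentioned above.
print(blur_circle_diameter_mm(4.0, 2.0, 2.5))  # object nearer:  0.034 mm
print(blur_circle_diameter_mm(4.0, 2.0, 1.5))  # object farther: 0.034 mm
```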
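Sketch for post 2 -- a minimal model (my assumption of how a single-plane varifocal pipeline is commonly described, not any vendor's actual algorithm) of the gap the paper seems to point at: the renderer can paint blur proportional to dioptric distance from the gaze depth, but the light for every pixel still arrives with the wavefront curvature of the one display plane.

```python
def diopters(depth_m: float) -> float:
    """Optical vergence of a point at depth_m, in diopters (1/m)."""
    return 1.0 / depth_m

gaze_depth_m = 0.5                  # depth reported by eyetracking (assumed)
display_D = diopters(gaze_depth_m)  # the single varifocal plane driven there

for scene_depth_m in (0.4, 0.5, 1.0, 3.0):
    correct_D = diopters(scene_depth_m)  # curvature light *should* carry
    # The renderer can mimic retinal blur with this magnitude...
    painted_blur_D = abs(correct_D - display_D)
    # ...but the optics still deliver display_D everywhere, so off-gaze
    # content carries the wrong wavefront curvature by the same amount.
    curvature_error_D = correct_D - display_D
    print(f"{scene_depth_m:>4} m: painted blur {painted_blur_D:.2f} D, "
          f"true curvature error {curvature_error_D:+.2f} D")
```

If accommodation genuinely needs the per-depth curvature (as the paper appears to claim), no amount of painted blur closes that last column -- which is the multifocal/lightfield argument in the post.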
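Sketch for post 6 -- rough error propagation (all numbers assumed: a 63 mm IPD and a purely vergence-based depth estimate) for how gaze-angle error turns into focus-depth error in diopters. Under a small-angle model the vergence angle for a fixation at depth d is IPD/d radians, so an angular error maps to a dioptric error of about (error in radians)/IPD.

```python
import math

IPD_M = 0.063  # assumed interpupillary distance (63 mm)

def fixation_depth_error_D(gaze_error_deg: float) -> float:
    """Dioptric error in a vergence-estimated focus depth for a given total
    gaze-angle error, using the small-angle model D = angle / IPD."""
    return math.radians(gaze_error_deg) / IPD_M

for err_deg in (0.25, 0.5, 1.0):
    print(f"{err_deg:>4} deg gaze error -> "
          f"~{fixation_depth_error_D(err_deg):.2f} D focus error")
```

For scale, the eye's own depth of focus is often quoted at around ±0.3 D, so in this toy model even sub-degree tracking error eats much of that budget -- which would square with Carmack's emphasis on accuracy.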
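Sketch for post 7 -- toy pixel-budget arithmetic (all numbers assumed) for a simple two-zone foveation scheme: a full-resolution zone around the gaze and a uniformly downscaled periphery. This only bounds the *shading* work; shading is just one part of frame time, so end-to-end gains land lower, roughly consistent with the 30%-200% figures above.

```python
fov_deg = 100.0         # assumed field of view per axis
fovea_deg = 20.0        # assumed full-resolution zone around the gaze
peripheral_scale = 0.5  # assumed linear resolution scale outside it

# Treat pixel count as proportional to angular area (a simplification).
full_budget = fov_deg ** 2
foveated_budget = (fovea_deg ** 2
                   + (fov_deg ** 2 - fovea_deg ** 2) * peripheral_scale ** 2)

print(f"shaded pixels: {foveated_budget / full_budget:.0%} of full res")
print(f"best-case shading speedup: {full_budget / foveated_budget:.1f}x")
```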
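Sketch for post 9 -- a speculative model of how layered, binary-switchable liquid crystal elements are usually described in public write-ups of Half Dome 3 style optics (the specific element powers below are invented): thin elements in contact add their optical powers, so n on/off elements give up to 2**n discrete focal states.

```python
from itertools import product

# Invented per-element powers, in diopters; each element is either off
# (contributing 0 D) or on (contributing its fixed power).
element_powers_D = (0.25, 0.5, 1.0, 2.0)

# Thin lenses in contact: total power is the sum of the active elements,
# so enumerating all on/off combinations gives every reachable focal state.
focal_states_D = sorted({
    sum(power for power, on in zip(element_powers_D, bits) if on)
    for bits in product((0, 1), repeat=len(element_powers_D))
})
print(len(focal_states_D), "focal states:", focal_states_D)
# -> 16 focal states: [0.0, 0.25, 0.5, ..., 3.75]
```

With binary-weighted powers like these, a handful of solid-state layers spans a few diopters in fine steps -- one plausible reading of why a "factory for electrically controlled thin-film LC optics" is such a big deal for varifocal.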