I am a little dubious about some of the claims in that video too. My understanding is that there are two different Minecraft ray tracing demos: one is RTX, while the other is a screen-space ray tracing technique similar to what Reshade is currently doing with its advanced AO tech.
I have no doubt that AMD can do ray tracing on current hardware. They've shown ray tracing demos as far back as 2008, so it's not a matter of "capability" but of "performance". On the other hand, I would be pretty surprised to see Nvidia spend over 30% of their GPU die on something that doesn't accelerate ray tracing in any meaningful way.
The only thing I can fathom is that this might be analogous to how Tessellation came to fruition with Nvidia.
ATI/AMD had viable Tessellation tech as far back as DX8 hardware, but Nvidia couldn't implement an equivalent without infringing on ATI patents (patent minefield).
So Nvidia spent years claiming that AMD's solution was "wrong", "incomplete", etc. Then, after Nvidia lobbied Microsoft and the OpenGL ARB
to include features in the Tessellation specification that were friendlier to Nvidia's pipeline, Nvidia was happy to release their own Tessellation
solution and shit on AMD for "not being able to Tessellate".
I think that RTX is in a similar category to that.
It's a ray tracing implementation that allows Nvidia to keep using their existing OpenGL-based hardware approach rather
than trying to rebuild their tech to sit closer to DX12, which was designed by Microsoft and AMD.
Even if the PS5 has superior ray tracing to RTX, Nvidia will be able to offer "acceptable" PC support with the original RTX generation, and RTXv2
will be powerful enough to absorb whatever translation overhead is needed to map AMD's implementation onto Nvidia's hardware.
Unless AMD figures out how to match or beat Nvidia's "secret sauce" of performance per die size (cough, better deferred render hardware, cough),
Nvidia can keep "wasting" silicon on RTX acceleration to close the gap with AMD in ray tracing while still beating them in standard raster performance.
It's a chess game for sure, but it's probably Nvidia's only choice, since redesigning their hardware to map closer to DX12 would be another
patent minefield, along with being expensive and risky.
Still... I think that this is the real practical solution in the near future:
Battlefield 3 came up with the idea of doing a low-frequency ray tracing pass over simplified geometry and using the result as input
for GI probes. And it was doing this on the CPU! With GPU-accelerated ray tracing, the same approach could be done with much
more detail and still be 100x less expensive than full-scene ray tracing plus de-noising. It could also be done "now" on
current AMD hardware, so it's already a viable production strategy. Nvidia can simply tout it as their tech since they are providing
the end-to-end implementation strategy (which, of course, favours their hardware...).
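To make the probe idea concrete, here's a rough C++ sketch of what a CPU-side version might look like: cast a sparse set of rays from each probe position against simplified proxy geometry and average the incoming radiance. Everything in it (the sphere proxies, the constant sky term, the single fake bounce) is my own illustrative assumption, not DICE's actual Frostbite/Enlighten code:

```cpp
// Hypothetical sketch of the BF3-style idea: trace a sparse set of rays
// against *simplified* proxy geometry and bake the result into GI probes.
// Not DICE's actual code; real engines add bounces, SH encoding, filtering.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Proxy scene: a handful of spheres standing in for simplified level geometry.
struct Sphere { Vec3 center; float radius; Vec3 albedo; };

// Distance along a unit-length ray to the nearest hit, or negative on miss.
static float intersect(const Sphere& s, Vec3 origin, Vec3 dir) {
    Vec3 oc = sub(origin, s.center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return (t > 1e-3f) ? t : -1.0f;
}

int main() {
    std::vector<Sphere> proxy = {
        {{0, -100, 0}, 99.0f, {0.5f, 0.5f, 0.5f}},  // "floor"
        {{2, 1, 0}, 1.0f, {0.8f, 0.2f, 0.2f}},      // red prop
    };
    const Vec3 sky = {0.6f, 0.7f, 1.0f};  // constant sky radiance
    const int raysPerProbe = 256;         // "low frequency": few rays, few probes

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);

    // A couple of probes; a real system would lay a grid of them over the level.
    std::vector<Vec3> probePositions = {{0, 1, 0}, {4, 1, 0}};
    for (Vec3 p : probePositions) {
        Vec3 irradiance = {0, 0, 0};
        for (int i = 0; i < raysPerProbe; ++i) {
            // Uniform direction on the unit sphere via rejection sampling.
            Vec3 d;
            do { d = {uni(rng), uni(rng), uni(rng)}; }
            while (dot(d, d) > 1.0f || dot(d, d) < 1e-6f);
            d = mul(d, 1.0f / std::sqrt(dot(d, d)));

            // Nearest hit against the proxy scene.
            float tBest = -1.0f; const Sphere* hit = nullptr;
            for (const Sphere& s : proxy) {
                float t = intersect(s, p, d);
                if (t > 0.0f && (tBest < 0.0f || t < tBest)) { tBest = t; hit = &s; }
            }
            // Miss -> sky light; hit -> crude single bounce: albedo * sky.
            Vec3 L = hit ? Vec3{hit->albedo.x * sky.x, hit->albedo.y * sky.y,
                                hit->albedo.z * sky.z}
                         : sky;
            irradiance = add(irradiance, L);
        }
        irradiance = mul(irradiance, 1.0f / raysPerProbe);
        std::printf("probe (%.0f,%.0f,%.0f) irradiance ~ (%.2f, %.2f, %.2f)\n",
                    p.x, p.y, p.z, irradiance.x, irradiance.y, irradiance.z);
    }
    return 0;
}
```

On a GPU you'd presumably do the same thing with a compute shader or DXR and amortize probe updates over several frames, which is exactly why this is so much cheaper than tracing every screen pixel and then de-noising the result.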