The current Xbox One / PS4 console cycle is gonna be much longer than the previous ones due to the extension products (Xbox One X, PS4 Pro), so games are gonna be stuck targeting the original Xbox One specification.
Not as long as you might think. The mid-cycle "Pro" consoles basically put 4K resolution on the table, but won't extend the console cycle much past 7 years. Xbox One and PS4 came out in November 2013. Xbox Two "Scarlett" has been said to have a 2020 release date, and now a report from yesterday claims that Sony will try to sneak ahead with a December 2019 release for the PS5.
Thinking about it some more, now that these consoles use customized x86 CPUs and GPUs from AMD, you could see the console makers becoming even more aggressive with refreshes. Why not drop in a slightly improved chip every two years? Any new gains could go towards lowering power consumption and increasing frame rates at a given resolution (especially for VR).
At minimum, I expect the new console APUs to ditch Jaguar in favor of underclocked Ryzen cores, probably keeping the core count at 8 but doubling the thread count to 16 via SMT. Remember, the 7nm "Zen 2" Ryzen chips might double the maximum core count to 16, so octo-core would become the new "dual core" of the lineup, the ground level.
Nvidia can then leverage their mindshare in Ray Tracing to maybe get more console GPU contracts beyond Nintendo Switch, or at least offer something game-changing to Nintendo for the next iteration.
Look at the Tegra X1, Tegra X2, Xavier, and Orin (no details yet). The Tegra X1 was used in the Nintendo Switch. The Tegra X2 offers only modest improvements and will look pretty old soon. Xavier increases TDP to 20-30 W and is oriented towards automotive machine learning uses. Also note that the core configurations have changed between generations: the X1 pairs four Cortex-A57 cores with four lower-powered Cortex-A53 cores, and the SoC can swap between the two clusters (only one cluster runs at a time); the Tegra X2 has a 2+4 configuration (two Denver2 cores plus four A57s); and Xavier has 8 identical custom Carmel cores.
My point is that it's not clear which SoC you would drop into a new version of the Nintendo Switch. And you would want to see something newer get used, since quadrupling the GPU performance *could* allow for 4K gaming in docked mode, and possibly enable a VR headset mode, which Nintendo seems to be testing already. Obviously, the Switch's current 720p screen would have to be upgraded, to at least 1080p and maybe even 1440p. A 90 Hz refresh rate would be a good minimum for VR, but if you look at the Oculus Go, it only targets 60-72 Hz at 1440p. So maybe if the GPU performance of the Switch is quadrupled, you could have 4K or pseudo-4K in docked mode, and 1080p or 1440p at 75 Hz undocked.
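Some back-of-the-envelope pixel math to sanity-check that quadrupling claim (a quick Python sketch; the resolutions and refresh rates are the ones mentioned above, but the assumption that GPU load scales linearly with pixels per second is my own simplification, and real games won't scale that cleanly):

# Pixel-rate comparison, normalized to today's 720p60 handheld mode.
base = 1280 * 720 * 60  # Switch handheld today: 720p at 60 Hz
modes = {
    "docked 1080p60 (today)":        (1920, 1080, 60),
    "handheld 1080p75 (speculated)": (1920, 1080, 75),
    "handheld 1440p75 (speculated)": (2560, 1440, 75),
    "docked 4K60 (speculated)":      (3840, 2160, 60),
}
for name, (w, h, hz) in modes.items():
    print(f"{name}: {w * h * hz / base:.2f}x the 720p60 pixel rate")

That prints 2.25x, 2.81x, 5.00x, and 9.00x respectively. By this crude measure, 4K60 docked is exactly 4x today's 1080p60 docked load (9.00 / 2.25), so a quadrupled GPU roughly covers it, while 1440p75 handheld works out to 5x today's handheld load, which makes 1080p at 75 Hz the safer undocked target.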
Nintendo Switch only came out in March 2017, so you could see them targeting a late 2020 release for a 4K/VR refresh to coincide with the Xbox release. The chip used could be Nvidia's Orin, except it would probably be underclocked, and the die might need to be cut down relative to the versions being used in automobiles. I'm also not sure about "RT cores", since the automotive platform probably doesn't need them, and Nintendo might not be able to ride the raytracing wave within ~2 years, especially on something relatively underpowered like the Switch.
I came up with all this junk on my own, so I'd like to hear your opinions on this.
Ray Tracing has been the end goal of GPU manufacturers, and lithography is hitting the limit of how quickly things can shrink, so getting dedicated Ray Tracing tech into the GPU now sets Nvidia up to dedicate the bulk of the GPU die to it over the next two or three shrinks, before things hit a dead end. If other GPU vendors do not follow suit, they will need to radically redesign their GPUs on the last of the shrinks.
1. I don't think it's completely clear that the RT cores are "dedicated". It's just not clear what they are. Maybe the shader cores can be switched to raytracing and back as needed. Here is how Anandtech's article described them:
New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren’t fully informed on at this time, but serve as dedicated ray tracing processors.
Does "serve as dedicated ray tracing processors" really mean that "they are dedicated ray tracing processors"? Only Nvidia knows the answer.
2. The next 3 big shrinks will likely be 7nm, 5nm, and 3nm, although various roadmaps, particularly Samsung's, have included interim nodes such as 6nm and 4nm. At this point it's pretty clear that 5nm can be done, and the same can probably be said for 3nm, so that's 3 big shrinks. From there, there is promising news about even smaller nodes such as 2.5nm and 1.5nm. Keep in mind that many of the feature sizes, such as gate length, are nowhere near the figures those names suggest, so the node name is basically marketing in most respects. But each step will be an improvement nonetheless.
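To put rough numbers on what "three big shrinks" buys in die budget (another Python sketch; the ~1.8x density gain per full node is purely my assumption, actual gains vary by foundry and node):

# Hypothetical cumulative transistor-density gain over the next full nodes,
# assuming ~1.8x per shrink (my assumption, not a published figure).
nodes = ["12nm", "7nm", "5nm", "3nm"]
density = 1.0
for prev, nxt in zip(nodes, nodes[1:]):
    density *= 1.8
    print(f"{prev} -> {nxt}: ~{density:.1f}x the 12nm density")

Under that assumption you end up with roughly 5.8x the transistor budget by 3nm, which is the kind of headroom that would let RT hardware grow from a slice of the die to the bulk of it.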
What we need to start asking is whether we will see 3D chips by the "end" of Moore's law scaling: basically, the CPU equivalent of 3D NAND, with many layers of transistors and 1 or more cores per layer. Such designs would have massive heat dissipation challenges, but the industry has about 10 more years to figure that out. The ability to move data to adjacent cores directly above or below could have both benefits (shorter interconnects) and drawbacks (concentrated hotspots).
Nvidia can take the PR hit of having only a modest bump in Raster performance now, and then as the industry migrates to Ray Tracing they will already be in an advantageous spot, whilst AMD might have to explain to its clients why their new competing Ray Tracing GPU performs worse at Raster than the previous-gen products.
There is no PR hit. The 7nm node is not quite ready for prime time, so they are going with the 12nm node, like AMD has for the new Ryzen and Threadripper CPUs. There's also no incentive for them to push forward with 7nm GPUs, since AMD is not offering any real GPU competition. There may be a PR hit from the high prices, but they are "good" prices because:

1. Some people will be willing to pay them.
2. You might get as much bang for your buck as the GeForce 10 series, except you'll have to pay twice the buck for twice the bang.
3. They keep demand low so that Nvidia can continue to sell off the remaining GeForce 10 series GPUs.
4. They can just lower the prices to sane levels once AMD launches some competing GPUs.

The numbers reflect that Nvidia is unopposed at the high end. And even if the emphasis on raytracing is marketing PR to sell these half-node GPUs, I didn't really expect any games to come with support for real-time raytracing at launch, and yet they have themselves a short list of titles.