Everything posted by jaxa (2120 posts, 32 days won)
-
This was a great purchase for me. It covers all my needs and makes me effectively immune to any massive chip shortage resulting from geopolitics. It appears to be running a core at 4.2 GHz basically forever even with low usage, but is quiet most of the time. Clearly faster than the i5-6600T despite the years of quad-core stagnation; the higher TDP and hyperthreading really help. I doubt there's any noticeable improvement going from the HD 530 to the UHD 630 iGPU, but it does gain the better H.265/VP9 hardware decode that came immediately after Skylake.
-
Yeah no lol. It's all getting sucked in by the AI industry. How much does 24 GB of HBM cost anyway? It could be $600 or something, which doesn't sound like much when you consider the MSRP of an RTX 4090, but they are making a killing with those margins. Well, I've just managed to upgrade to an i3-10105 system, with the possibility of a future GPU upgrade (need to look for low-profile cards), for $75. And I'm sticking in 64 GB of RAM that I happened to have lying around. This is likely to be my new TDM system if everything works properly. And I bought not one but two of these things, with the other destined for media duty. I stuck the 8 GB of RAM from one into the other. I guess I could end up putting an 11th gen Rocket Lake chip in it, but I'm in no particular hurry to do that. INB4 I'm an unironic buyer of the RTX 3050 6 GB.
-
It should be around $400-600, a price bracket that was once not considered mid-range, delivering raster performance similar to the 7900 XT but likely with better raytracing performance. We can only assume RDNA4 tops out at 16 GB, but 32 GB would be a funny option if they go for it. I think you can create scenarios where games could use as much as or more than 24 GB at 4K, but it's obviously rare and largely unneeded. It would be a good amount of VRAM for AI stuff, though the sky's the limit there and 32 GB isn't going to be enough for some LLMs.

HBM is expensive to make and in huge demand for AI accelerators, enterprise GPUs, and other datacenter products (such as Intel Sapphire Rapids CPUs with HBM, aka Xeon Max). I think it's as much as 5x more expensive per gigabyte than GDDR6X/7. So while it would be great for consumer gaming GPUs, with major bandwidth and efficiency benefits, AMD and Nvidia are going to put it in $10,000 to $40,000 products instead. Years ago there was talk of making cheaper, less capable versions of HBM for the mass market, but it never materialized: https://www.tweaktown.com/news/53536/low-cost-hbm-way-hit-mass-market-soon/index.html

If the AI bubble pops, we might see some efforts to pivot back to consumer products. Aside from GPUs, probably every CPU should eventually be packing a big L4 cache built from HBM, DRAM, or bespoke 3D layers by the late 2030s.
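For what it's worth, those guesses are easy to turn into per-gigabyte numbers. A quick sanity check (the $600 and 5x figures here are rough thread guesses, not real bill-of-materials data):

```python
# Back-of-envelope check of the per-gigabyte figures quoted above.
# All prices are rough guesses from this thread, not real BOM data.
hbm_total_usd = 600      # guessed cost of 24 GB of HBM
hbm_capacity_gb = 24
hbm_per_gb = hbm_total_usd / hbm_capacity_gb   # dollars per gigabyte of HBM

multiplier = 5           # "as much as 5x more expensive per gigabyte"
implied_gddr_per_gb = hbm_per_gb / multiplier  # implied GDDR6X/7 price

print(f"HBM:  ${hbm_per_gb:.2f}/GB")
print(f"GDDR: ${implied_gddr_per_gb:.2f}/GB (implied)")
```

So the $600 guess works out to $25/GB for HBM, implying roughly $5/GB for GDDR — which is why nobody is putting HBM on a gaming card right now.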
-
Well, the 7600 XT is considered sus for putting 16 GB on a 128-bit bus, but it clearly works in some scenarios. Also, IIRC GDDR7 will have about +30% bandwidth over GDDR6X right out of the gate, rising to about +100% as the generation progresses. Big caches (Infinity Cache L3 for AMD, lots of L2 for Nvidia) have made smaller bus widths more viable, and I think improved compression techniques and other factors have also helped alleviate bandwidth demands over time. There's already a little bit of analysis of what we can expect to see in RDNA3+ and RDNA4, very technical though: https://chipsandcheese.com/2024/02/04/amd-rdna-3-5s-llvm-changes/ I am eager to see if AMD is bold enough to put (or allow AIBs to put) 32 GB on the top RDNA4 card, which has long been rumored to be slower than the 7900 XTX in raster but will hopefully beat it in raytracing and other areas such as AI/ML performance. And I think that card will normally have a 256-bit bus and 16 GB of memory.
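The bandwidth math behind that +30% figure is just pins times per-pin rate. A rough sketch — the 21 and 28 Gbps per-pin rates are my assumed ballpark figures for GDDR6X and launch GDDR7, not official specs:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: number of pins times per-pin
    data rate, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Assumed per-pin rates: ~21 Gbps GDDR6X vs ~28 Gbps early GDDR7.
g6x = bandwidth_gbs(256, 21)
g7 = bandwidth_gbs(256, 28)
print(f"GDDR6X 256-bit: {g6x} GB/s")
print(f"GDDR7  256-bit: {g7} GB/s (+{(g7 / g6x - 1) * 100:.0f}%)")
```

On the same 256-bit bus, that assumed rate bump alone gives roughly a third more bandwidth, before any cache or compression tricks are factored in.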
-
No, the 192-bit RTX 3060 12 GB came first. The cut down 8 GB model came over a year and a half later, and probably in small numbers because nobody talks about it much other than "don't get it, it's 20-30% slower". 3060 Ti had 8 GB from the start, and always has, although it looks like they made a GDDR6X version. They would have to change the bus width to accommodate 12 GB. There were rumors of products like 3070 16 GB, 3080 20 GB and so on, but they never materialized outside of engineering samples. If you think things are confusing now, just wait until 3 GB GDDR7 chips materialize within a couple of years. We could see 12 GB cards on a 128-bit bus, 9 GB on 96-bit, and so on.
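Those capacity combinations all fall out of simple arithmetic: each GDDR chip sits on a 32-bit channel, so total VRAM is (bus width / 32) times chip capacity. A sketch (the 3 GB GDDR7 configurations are hypothetical, per the rumors above):

```python
def vram_gb(bus_width_bits: int, chip_gb: int) -> int:
    """Total VRAM assuming one memory chip per 32-bit channel (no clamshell)."""
    return (bus_width_bits // 32) * chip_gb

print(vram_gb(192, 2))  # RTX 3060 12 GB: six 2 GB chips on 192-bit
print(vram_gb(128, 2))  # the cut-down 3060 8 GB: four chips on 128-bit
print(vram_gb(128, 3))  # hypothetical 128-bit card with 3 GB GDDR7 chips
print(vram_gb(96, 3))   # hypothetical 96-bit card with 3 GB chips
```

Clamshell mode (two chips sharing each channel) doubles the result, which is how the 7600 XT gets 16 GB onto a 128-bit bus.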
-
The 3060 has a 192-bit bus (cut to 128-bit for the maligned 8 GB model), and the gimped cards (like the 7600 XT with 16 GB on 128-bit) can definitely use the extra VRAM in some scenarios. https://en.wikipedia.org/wiki/GeForce_30_series#Desktop
-
It was a relatively strong mid-range card that obviously has less VRAM than it should. And it's still funny that the RTX 3060 packs 12 GB while the RTX 3080 copes with 10 GB. Game devs would like PC users to have 12-16 GB VRAM, but they'll support 8 GB and do little tricks like downgrading the textures automatically.
-
https://store.epicgames.com/en-US/free-games https://store.epicgames.com/en-US/p/thief-5bb95f THI4F is available, it's time to fulfill your destiny
-
Sad News 😢 (but fm release?) [MAKE BELIEVE/NOT REAL/FAKE]
jaxa replied to LePetit_Baguette_69's topic in Fan Missions
Rich Text continues to ruin everyone's day decades later.
-
Sad News 😢 (but fm release?) [MAKE BELIEVE/NOT REAL/FAKE]
jaxa replied to LePetit_Baguette_69's topic in Fan Missions
RIP in pepperonies.
-
Hammerting 100% off on GOG https://www.gog.com/en/game/hammerting Losing everything (anything not downloaded to your computer) would not be unlikely; best case scenario, the licenses get transferred over to Steam as part of some big deal. Whatever happens, you'd hear about it, and piracy is always on the menu if they screw it up. https://www.forbes.com/sites/paultassi/2023/11/07/tim-sweeneys-epic-games-store-is-still-losing-money-after-five-years/ Will it happen? Maybe. Not before 2027 I think, which is when it is expected to become profitable.
-
Thi4f will be free on Epic Games Store from April 4 to April 11, along with The Outer Worlds. So it's your chance to see what all the fuss is about for $0 and test one of the few AMD TrueAudio games in existence. Although from what I heard watching one of the autopsy videos, the implementation is wasted. https://store.epicgames.com/en-US/free-games
-
Book of Demons 100% off on GOG https://www.gog.com/en/game/book_of_demons https://www.gog.com/giveaway/claim
-
Epic Games Store time, because it's Deus Ex: https://store.epicgames.com/en-US/p/deus-ex-mankind-divided-4c6370 https://store.epicgames.com/en-US/p/the-bridge
-
The text has unwanted formatting on it.
-
Nomads of Driftland: The Forgotten Passage 100% off on GOG This is a DLC or expansion pack, but I was able to get the base game for free. https://www.gog.com/en/game/nomads_of_driftland https://www.gog.com/en/game/nomads_of_driftland_the_forgotten_passage
-
What am I supposed to be testing? What's going on?
-
I had a problem with not being able to sheathe weapons with the default tilde key after updating to 2.12 beta 6. Did anyone else run into anything like this?
-
I think the writing is on the wall. Advanced upscaling will be adopted as widely as possible as the free performance band-aid for the gaming industry, and the majority of players will probably run it automatically without even noticing. Recently we've seen rumors of Microsoft working on a Windows upscaler (which may be similar to AMD's RSR in that game developers don't need to touch it), and Sony may include an NPU in a PlayStation 5 Pro for their own bespoke console-level upscaling solution (not FSR 3/4, although those could still be supported).

The irony would be if Nvidia ended up killing the demand for gaming GPUs faster by marketing DLSS so hard that there's less "need" for new and top-end GPUs. But they won't care, because they prefer to chase more lucrative markets like AI, datacenter, and automotive. I say "faster" because there is some point in the future when additional hardware can't push the boundaries of graphics, or faster hardware can't be created.

We'll see an evolution of Unreal Engine 5's photorealism approach, adoption of 8K resolution, possibly 16K for VR, and a push to the 240-1000 FPS range. Generated frames could be used for a free doubling if not quadrupling of FPS to hit those high numbers, and upscaling tends to work better when your input/target resolutions are already very high. For VR specifically, foveated rendering can slash hardware requirements, possibly by 80% or more if the implementation is good enough.

On the hardware side, there's still free lunch to be had with a few additional node shrinks. Stacked L2/L3 cache could be extremely beneficial: think the 3D V-Cache version of Infinity Cache (Nvidia has gone with big L2 in Lovelace). We don't see adoption of High Bandwidth Memory in consumer GPUs because it is in such high demand for AI/enterprise products, but there's no technical reason it can't be used. We will see the blossoming of mega APUs this decade.
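The pixel-budget arithmetic behind frame generation and foveated rendering is straightforward to sketch. This uses my speculative figures from above (4x frame generation, ~80% foveation savings) rather than any measured numbers:

```python
def shaded_pixels_per_sec(width, height, display_fps,
                          gen_ratio=1, foveation_saving=0.0):
    """Pixels the GPU actually shades per second, assuming frame generation
    synthesizes all but one of every `gen_ratio` displayed frames, and
    foveated rendering skips `foveation_saving` of each rendered frame."""
    rendered_fps = display_fps / gen_ratio
    return width * height * rendered_fps * (1.0 - foveation_saving)

# 4K at 240 FPS: brute force vs. 4x frame gen plus 80% foveation savings
native = shaded_pixels_per_sec(3840, 2160, 240)
tricks = shaded_pixels_per_sec(3840, 2160, 240, gen_ratio=4,
                               foveation_saving=0.8)
print(f"shading workload reduced {native / tricks:.0f}x")
```

Under those (generous) assumptions the shading workload drops by roughly 20x, which is why a 240-1000 FPS target stops sounding absurd.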
-
FlatOut 100% off on GOG https://www.gog.com/en/game/flatout https://www.gog.com/giveaway/claim
-
Dead Island: Riptide Definitive Edition 100% off on Steam https://slickdeals.net/f/17292832-dead-island-riptide-definitive-edition-pc-digital-download-free https://store.steampowered.com/app/383180/Dead_Island_Riptide_Definitive_Edition/
-
Looking at the performance charts, I feel the move is to turn it on and cap the framerate at 60. Most of the minimums were around 60 with DLSS Performance on, so it would be flawless. But here I am at glorious 720p quality. Also, my main system is out of commission for some reason; I'm going to try updating the BIOS.
-
Now that I've finished The Black Parade, I'm definitely open to checking out more NewDark era missions. I want to see what other ideas are possible now that the Dark Engine has been unchained.