The Dark Mod Forums


Too bad uMatrix still can't protect against the privacy threats posed by the diverse browser fingerprinting techniques on the hand-selected sites I do allow to run some JavaScript...

 

You could use a "clean" browser (check with EFF's Panopticlick tool). Throw in a VPN and you're good. Using one but not the other would not be effective.

 

I'm pretty sure the Tor browser has a simplified configuration and makes specific recommendations that lessen the chance of being fingerprinted (like not maximizing the browser window).


 


I am browsing with Chromium on Gentoo Linux. If I enable enough JavaScript for Panopticlick to actually work, it identifies my browser as unique. Browsers are absurdly chatty when it comes to providing information. My canvas fingerprint alone is worth 20 bits (most likely the result of being up to date on a rolling-release distribution using an exotic custom-compiled browser).
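To put that "20 bits" in perspective: Panopticlick reports, for each trait, the fraction of observed browsers that share your value, and the identifying information is just the negative log2 of that fraction. A minimal sketch (the specific fractions below are made-up illustrations, not real measurements):

```python
import math

def surprisal_bits(matching_fraction: float) -> float:
    """Bits of identifying information in one browser trait,
    given the fraction of observed browsers sharing its value."""
    return -math.log2(matching_fraction)

# A canvas hash shared by ~1 in 2^20 browsers carries 20 bits:
canvas_bits = surprisal_bits(1 / 2**20)

# Independent traits add; ~33 bits are enough to single out one
# browser among 2^33 (~8.6 billion), more than there are people:
total_bits = canvas_bits + surprisal_bits(1 / 2**13)
```

Real traits aren't fully independent, so summing is an upper bound, but it shows why a handful of chatty APIs is enough to make a browser unique.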

 

But I surf over a VPN provider with multiple exits, and I've hardened my browser config, so without JavaScript, fingerprinting is probably a lot less reliable.


Not really CPU/GPU related, but hardware related.

 

https://www.newegg.com/Product/Product.aspx?Item=N82E16816132037&cm_re=hot_swap_rack-_-16-132-037-_-Product

 

This thing looks cool, but why the hell do they use molex power connectors? Surely any modern PSU is going to have more SATA connectors than molex, and what molexes there are are going to be used for things like case fans and supplementing GPU power. I really want to install one of these, but it would needlessly complicate wiring.


https://www.wired.com/story/amd-backdoor-cts-labs-backlash/

 

I did upgrade to Ryzen, but I also have an old FX-8350 box sitting here, and I plan to keep it. Unlike modern chips, it doesn't have one of these built-in "insecurity processors" (the Intel implementation has security problems too), so the FX-8350 is actually safer than new chips, simply because attacks that leverage the insecurity processor cannot work. You can't attack something that's not there!


That firm seems quite suspicious, so I'd hold off for a bit longer and wait for more details. All the exploits mentioned require admin privileges, which also allow you to modify signed drivers without Windows noticing (which is what the Pixel Clock Patcher does) or to flash manipulated hard-drive firmware. I doubt that many of these devices have proper validity/certificate checks; at least my DVD drive has none.

Since I have an unpatched 7200U Intel laptop, I think I should really take a deeper look at those Management Engine exploits. I wonder if these flaws have rendered Software Guard Extensions useless :P

 

Of course, every additional security issue makes it easier to find viable attack vectors, especially for attacks targeting larger groups of people/organisations. And with encrypted, undocumented code running at such privilege levels (and hardware in general, as Spectre and Meltdown have shown), users become more and more dependent on the 'goodwill' and competence of a couple of firms.

When asked about who their client is (without even asking specifically, simply what industry they were in):

 

"Guys I’m sorry we’re really going to need to jump off this call but feel free to follow up with any more questions."

Er, Intel are based in Israel; as I have said previously, this has their corp-espionage stink all over it.


Geforce Partner Program

 

Nvidia engaged in monopolistic, anti-competitive, anti-consumer bullshit, the same kind of BS that Intel got a multi-billion-dollar fine for -

 

"The GPP is actually a bit sinister, anti-competitive, and uses monopolistic tactics. If a company joins the GPP, its NVIDIA-equipped gaming brand(s) must be 100% aligned with NVIDIA GPUs. And according to documentation provided to Bennett, these partners will allegedly receive priority allocations of GPUs and channel discounts, despite the fact that the NVIDIA GPP blog asserts that product discounts are not provided"

 

https://www.hardocp.com/article/2018/03/07/geforce_partner_program_impacts_consumer_choice

Edited by Bikerdude


The next Metro game is supposed to be using it, but let's be clear: only parts of a given scene are raytraced, and even then it's mostly effects.

Full-spectrum real-time raytracing is still well beyond the power of our graphics hardware, so to begin with it's only going to exist as a supplemental, effects-based feature in games. You'll see raytraced shadows, ambient occlusion, and reflections being used initially.
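The "raytraced shadows" part is conceptually tiny: for each shaded point you cast one extra ray toward the light and check whether anything blocks it. A toy sketch with sphere occluders (pure Python, names of my own invention, not from any real engine):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Distance t > 0 at which the ray hits the sphere, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-6 else None

def in_shadow(point, light_pos, occluders):
    """Cast one shadow ray from a shaded point toward the light."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(d * d for d in to_light))
    direction = [d / dist for d in to_light]
    for center, radius in occluders:
        t = hit_sphere(point, direction, center, radius)
        if t is not None and t < dist:
            return True  # an occluder sits between point and light
    return False
```

The expensive part in a real renderer isn't this math; it's doing it per pixel per light against millions of triangles, which is why it ships as a supplemental effect first.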

Edited by Bikerdude


Well, a physically-correct version of ambient occlusion is really a good thing.

Edited by lowenz

Task is not so much to see what no one has yet seen but to think what nobody has yet thought about that which everybody see. - E.S.


That doesn't look good at all... The classroom example with SVGF looks terrible: the noise on the apples is different in each frame, so it's really noticeable. The shadows on the wall don't appear that accurate either.

The only good thing I see here is that it can be done really fast, so it's better than shadow maps. From what I can see, I actually prefer TDM shadows.


I always assumed I'd taste like boot leather.

 


TDM shadows are great. But I fear that I will someday see realtime raytraced games running smoothly on my 15-year-old gaming rig (though the GPU and SSD were upgraded a few years ago) while TDM keeps struggling to hold a constant 30 FPS in most missions...


It's a worst case scenario thing (proof of concept).

They are de-noising from a minimal sample set.

 

When the hardware can handle more samples, or if they only trace rays for part of the render, the quality can be improved substantially.
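The sample-count point is just Monte Carlo statistics: the noise (standard deviation) of an N-sample pixel estimate falls as 1/sqrt(N), so quadrupling the samples roughly halves the noise. A small simulation (illustrative only, with uniform random "light" samples standing in for path tracing):

```python
import random
import statistics

def render_pixel(samples: int, rng: random.Random) -> float:
    """Monte Carlo pixel estimate: average of noisy light samples."""
    return sum(rng.random() for _ in range(samples)) / samples

def pixel_noise(samples: int, trials: int = 2000) -> float:
    """Standard deviation of the estimate over many repeated renders."""
    rng = random.Random(0)
    return statistics.stdev(render_pixel(samples, rng) for _ in range(trials))

noise_1spp = pixel_noise(1)
noise_4spp = pixel_noise(4)  # roughly half of noise_1spp
```

This is why denoisers like SVGF exist: they try to buy back that missing sqrt(N) factor with filtering instead of extra rays.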

 

Still wish PowerVR was making desktop GPUs with their Wizard raytracing hardware, though.


Please visit TDM's IndieDB site and help promote the mod:

 

http://www.indiedb.com/mods/the-dark-mod

 

(Yeah, shameless promotion... but traffic is traffic folks...)


Right now they could double the samples to 2 ppp and drop performance to 50 fps in the same scenes, and it still wouldn't look that good.

I wonder what hardware they were running on; they did say it was a standard GPU, though.



 


TDM shadows are great. But I fear that I will someday see realtime raytraced games running smoothly on my 15-year-old gaming rig (though the GPU and SSD were upgraded a few years ago) while TDM keeps struggling to hold a constant 30 FPS in most missions...

 

While the TDM engine is limited (it probably uses something like 10% of your hardware's capabilities before dropping below 60 fps), there's still a lot of room for improvement. For example, most materials are underdeveloped: you can get a lot of lovely effects with the basic diffuse/specular/normal combo, which isn't hardware-intensive, but you need to know e.g. how and when to use color speculars.

Also, most framerate problems are caused by how assets were created. They clog the CPU-GPU pipeline, which is the weakest link regardless of your configuration or engine (I'd wager you'd see similar problems with UE3 or Unity). While raising the limits would always be welcome, I'd still argue that there's a lot of untapped potential here anyway.
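A back-of-the-envelope model of the "clogged pipeline" point: per-draw-call CPU submission overhead often dominates raw triangle cost, so batching assets into fewer draw calls helps far more than shrinking meshes. All the numbers below are invented for illustration, not measurements of TDM or any engine:

```python
def frame_time_ms(draw_calls: int, triangles: int,
                  ms_per_call: float = 0.05,
                  ms_per_mtri: float = 2.0) -> float:
    """Toy frame-cost model: CPU overhead per draw call plus GPU
    time per million triangles (coefficients are illustrative)."""
    return draw_calls * ms_per_call + (triangles / 1e6) * ms_per_mtri

# Identical geometry, unbatched vs batched into 10x fewer calls:
unbatched = frame_time_ms(draw_calls=2000, triangles=2_000_000)  # ~104 ms
batched = frame_time_ms(draw_calls=200, triangles=2_000_000)     # ~14 ms
```

Same triangle count, an order of magnitude in frame time: that's the sense in which assets, not the engine, set the floor.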


https://www.theregister.co.uk/2018/05/04/nvidia_gpp_axed/

 

Remember that time these guys passed off 3.5 GB video cards as 4 GB? Yes, the 970 does come with 4 GB, but the last segment of memory is crippled, making performance drop significantly when compared to a real 4 GB video card once a program dips into that memory region. So, labeling these as 4 GB cards was a blatant scam, since they won't match up to rival 4 GB cards when pushed to the limit.
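Roughly how the spill hurts, in numbers: reviews at the time measured the 970's first 3.5 GB partition at around 196 GB/s and the last 0.5 GB at around 28 GB/s. If a working set is streamed once across both segments, the effective bandwidth comes out as below (a simplified model; real access patterns are messier):

```python
def effective_bandwidth_gbs(alloc_gb: float,
                            fast_gb: float = 3.5, fast_bw: float = 196.0,
                            slow_gb: float = 0.5, slow_bw: float = 28.0) -> float:
    """Effective GB/s for a working set spanning the GTX 970's fast
    and slow memory segments (bandwidth figures are approximate,
    taken from published reviews of the card)."""
    fast = min(alloc_gb, fast_gb)
    slow = max(0.0, min(alloc_gb - fast_gb, slow_gb))
    seconds = fast / fast_bw + slow / slow_bw  # time to stream it once
    return alloc_gb / seconds

under = effective_bandwidth_gbs(3.0)  # ~196 GB/s, full speed
spill = effective_bandwidth_gbs(4.0)  # ~112 GB/s, a big drop
```

The last 0.5 GB is only 12.5% of the card's memory, but touching it costs over 40% of the effective bandwidth in this model, which is why the drop is so visible once a game crosses 3.5 GB.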



 

How is this a scam, when saying dual-GPU video cards come with 12GB of RAM isn't? Technically they do, but you only get 6GB per GPU.

The 970 has 4GB worth of memory modules, and you can use all of it, but you get some performance degradation in a few very specific situations. (I hate Nvidia, btw, not a fanboy.)

Three people at my work have 970s and none of them have ever noticed weird issues, and the benchmarks I've seen have struggled to show the problem in a meaningful way.

So when people need to create special benchmarks just to point out a design flaw, perhaps there's more hype than reality.



 


It was a scam because the 970 cards were advertised as featuring 4GB of memory that runs at a specified bandwidth, but the final segment does not meet this specification. This has been shown to impact Blender rendering, where, when scene complexity spills into the slow region of memory, the 970 loses out in rendering speed to older cards it shouldn't. You ordered a video card with 4GB of 256-bit memory at the specified bandwidth, so that's exactly what you should be getting. If we let Nvidia cut corners like this, things will only get worse.

 

As far as dual GPUs on one board and memory usage go, I'm not experienced with this type of card, but in theory you could run a game on one GPU and use the other for Blender rendering, or as the graphics head for a QEMU VM or something. At that point you could actually make use of the full 12GB across both GPUs, since game data isn't duplicated between them and they're doing separate tasks.

 

But nothing you do to a GTX 970 will make it perform like a real 4GB card. And if people had known that what they were getting with the 970 wasn't really 4GB of 256-bit memory, they would probably have chosen a different product. Video memory is important, and it only becomes more so as a card ages. Once games really start to tax 4GB cards, this will become more of an issue.

