The Dark Mod Forums

Showing results for tags 'machine learning'.

Found 3 results

  1. https://www.gamesradar.com/witcher-3-mod-uses-ai-to-create-new-voice-lines-without-geralts-original-voice-actor/ https://www.ibtimes.com/witcher-3-story-mod-stirs-controversy-over-ai-generated-voice-acting-3237250 https://www.inputmag.com/gaming/video-game-voice-ai-human-actors-witcher-3-mod-controversy

     My position: get over it, voice actors. The writing has been on the wall since Vocaloid was released. I think we've already seen games shipping with 10 to 30 gigabytes of voice data. You could imagine a game using procedurally generated text (like GPT-3 fueling NPC chatbots), team-written scripts, or crowdsourced scripts to reach the equivalent of a terabyte or more of lines. Powerful 8-core CPUs are becoming the minimum standard for gaming, and between the CPU and GPU there will be more than enough computational headroom to synthesize voice lines and other sounds, like object collisions, in real time. Assuming 500,000 pages per gigabyte and 1.5 minutes to voice a page, 1 gigabyte of scripts yields about 1.4 years of continuous text-to-speech.

     The legal issues are legitimate. I have no doubt that a court would side with voice actors whose voices are being "stolen", citing personality rights. At the same time, companies could mix the voice samples of hundreds of real people in the training data and adjust parameters to produce an effectively unlimited number of indistinct voices that can be used without paying anyone. Meanwhile, fan efforts can rip off real voices from Hollywood and professional voice actors, or specific performances like Stephen Russell as Garrett. Suing them isn't worth it, and they can organize pseudo-anonymously and distribute code via torrent sites if needed. In some cases an amateur will do the voice acting and a different voice style will be pasted over the original recording. We could also see pure text-to-speech with a markup language to add emphasis, vocal cadence, etc.

     Either way, an algorithm can certainly transfer or fake the "breathing" and pauses. The results are also likely to be considered art. Setting aside the fact that an "invisible sculpture" can count as art, there will be real creativity, or at least fine-tuning, in writing a script, working with an AI that generates scripts, and perfecting the voices.
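     That back-of-envelope estimate is easy to check with a short script. The 500,000 pages per gigabyte and 1.5 minutes per page figures are the rough assumptions from the post above, not measured values:

```python
# Rough estimate of how much spoken audio 1 GB of script text could drive,
# using the post's ballpark assumptions: ~500,000 pages per gigabyte of
# plain text and ~1.5 minutes of speech per page.

PAGES_PER_GB = 500_000    # assumed plain-text density
MINUTES_PER_PAGE = 1.5    # assumed speaking rate

def speech_years_per_gb(gigabytes: float) -> float:
    """Years of continuous text-to-speech output from `gigabytes` of scripts."""
    minutes = gigabytes * PAGES_PER_GB * MINUTES_PER_PAGE
    return minutes / (60 * 24 * 365)

print(f"{speech_years_per_gb(1):.1f} years per GB")  # ~1.4 years
```

     So even one gigabyte of text is already more dialogue than anyone could listen to; the bottleneck is writing and curating the scripts, not storing or synthesizing them.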
  2. https://www.youtube.com/user/keeroyz/videos Two Minute Papers is an excellent YouTube channel that covers research papers on machine learning and cutting-edge computer graphics techniques. The graphics-related videos have obvious relevance to the TDM community. And then there's fun stuff like these: One common refrain from Károly Zsolnai-Fehér, who runs the channel, is that this type of research moves so fast that any given ML/graphics technique can be expected to be obsolete within months. Just imagine what "deep fakes" will look like in 2028 after years of algorithmic and hardware improvements. We'll probably see completely synthesized videos that look genuine at first and second glance... rendered in real time. Trust No One.
  3. Nvidia has announced the upcoming launch of its RTX 2080 Ti ($1000-$1200, September 20), RTX 2080 ($700-$800, September 20), and RTX 2070 ($500-$600, October) GPUs. You didn't read that wrong: the 'R' is for ray tracing. The key feature they are touting is real-time ray tracing using "dedicated" ray tracing (RT) cores. The tensor cores for machine learning also assist ray tracing by denoising the result. Here is an example of how that can work: Nvidia's keynote at Gamescom 2018 included a demo of the Eidos Montreal game Shadow of the Tomb Raider using the real-time ray tracing technique, highlighting improvements to shadows. Earlier in the year, Microsoft announced the DirectX Raytracing API, and similar extensions are being added to Vulkan. There are a lot of questions raised here. How and when will AMD respond, for example? But most importantly, could/will a hybrid ray tracing technique ever be applied to TDM?
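     The trace-then-denoise idea from that post can be sketched in a toy form. This is not Nvidia's actual pipeline (their denoiser is a trained neural network running on tensor cores); here a few random shadow rays per pixel give a noisy visibility estimate, and a plain 3x3 box filter stands in for the ML denoiser. The scene, ray counts, and filter are all made up for illustration:

```python
import random

random.seed(42)

W = H = 16  # tiny "image"

# Ground truth: fraction of an area light visible from each pixel — a soft
# shadow edge ramping across the image. A real renderer would compute this
# by tracing many rays; here it is just a smooth ramp for demonstration.
truth = [[min(1.0, max(0.0, (x - 4) / 8)) for x in range(W)] for y in range(H)]

def trace_shadow(samples):
    """Monte Carlo visibility: each shadow ray reaches the light with
    probability equal to the true visible fraction. Real-time budgets
    allow only 1-2 rays per pixel, hence the heavy noise."""
    img = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            hits = sum(random.random() < truth[y][x] for _ in range(samples))
            img[y][x] = hits / samples
    return img

def box_denoise(img):
    """3x3 box filter standing in for the tensor-core ML denoiser."""
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(H, y + 2))
                    for i in range(max(0, x - 1), min(W, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def mean_abs_error(img):
    return sum(abs(img[y][x] - truth[y][x])
               for y in range(H) for x in range(W)) / (W * H)

noisy = trace_shadow(samples=2)
print("noisy error:   ", round(mean_abs_error(noisy), 3))
print("denoised error:", round(mean_abs_error(box_denoise(noisy)), 3))
```

     The denoised error comes out lower than the noisy one: averaging neighbors trades a little edge sharpness for a large reduction in per-pixel noise, which is the same bargain the RT-core-plus-tensor-core combination is making at vastly higher quality.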