The Dark Mod Forums

Facegen


oDDity


I now have a program called Customizer for FaceGen, which means I can make my own head mesh, UV layout, and textures, import them into FaceGen, and then generate faces based on morphs of my own meshes. This obviously means there is no way of them knowing we used the app, since all faces will be based on my own meshes, UVs, and textures.

http://www.facegen.com/customizer.htm

Civilisation will not attain perfection until the last stone, from the last church, falls on the last priest.

- Émile Zola

 

character models site


I've been exporting blendshape facial expression and lip sync vertex animation from Maya to Doom using a script that parsonsbear from the doom3world forums wrote. He's willing to configure and change it to best suit our needs. I'm basically beta testing it for him.

This means there are no problems on the animation side of lip sync and facial expression; it's up to you coding boffins to work out your end of the deal.


How do they usually do this, in general terms? Can we write an external app that takes .ogg speech files and translates them into a scripted sequence of lipsync animations? That seems like a pretty daunting task, though one that might be in high demand. I wonder if there's an open-source program that does this?


Here's one:

 

http://www.annosoft.com/sapi_lipsync/docs/

 

The company offers higher-end products at around $500, but this release is free, includes the source, and does basically what we need (it reads audio input and converts it to a script of phoneme timings).

 

"Microsoft Speech API (SAPI) 5.1 Engine to generate time-aligned phonetical information given Microsoft RIFF Wave input... The inputs to the system are a wave file and an optional text transcription... The output from the system is a newline delimited list of phoneme timings and word timings produced by SAPI."

 

We would need to figure out how to read this script of phoneme timings in order to play and blend the lip sync anims, but at least the hard part of speech analysis and timing relative to the audio file would be done for us.
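
For what it's worth, here's a minimal sketch of what reading such a script might look like. It assumes one phoneme per line in the form "startMs endMs label"; the actual Annosoft output format should be checked against their docs before relying on this.

```cpp
// Hypothetical reader for a newline-delimited phoneme timing script.
// The "startMs endMs label" line format is an assumption, not the
// documented Annosoft format.
#include <cstdio>
#include <string>
#include <vector>

struct PhonemeTiming {
	int         startMs;    // phoneme start, relative to the audio file
	int         endMs;      // phoneme end
	std::string phoneme;    // SAPI phoneme label, e.g. "aa" or "m"
};

std::vector<PhonemeTiming> LoadPhonemeScript( const char *path ) {
	std::vector<PhonemeTiming> timings;
	FILE *f = fopen( path, "r" );
	if ( !f ) {
		return timings;     // no script: caller falls back to no lipsync
	}
	char line[256];
	char label[64];
	PhonemeTiming t;
	while ( fgets( line, sizeof( line ), f ) ) {
		if ( sscanf( line, "%d %d %63s", &t.startMs, &t.endMs, label ) == 3 ) {
			t.phoneme = label;
			timings.push_back( t );
		}
	}
	fclose( f );
	return timings;
}
```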

 

I guess one idea would be to run this program outside the game, to generate a script for each phrase that must be entered. We could maybe convert this to an md5 animation that is played on the mouth/face when that particular phrase is uttered. (Or we could try to do it in realtime, blending md5 animations together based on the text "phoneme script", although that might be more difficult.)
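
To make the blending idea concrete, here's a rough sketch, assuming the phoneme script has been loaded as in the earlier snippet. The viseme set and the phoneme-to-viseme table are made up for illustration; a real table would cover the full SAPI phoneme set.

```cpp
// Illustrative only: collapse phonemes into a few mouth shapes (visemes)
// and blend toward the next shape over each phoneme's duration.
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct PhonemeTiming { int startMs; int endMs; std::string phoneme; }; // as above

enum Viseme { VIS_REST, VIS_OPEN, VIS_CLOSED, VIS_ROUND, VIS_TEETH };

Viseme PhonemeToViseme( const std::string &p ) {
	// tiny stand-in table; a real one would map every SAPI phoneme
	static const std::map<std::string, Viseme> table = {
		{ "aa", VIS_OPEN },   { "ae", VIS_OPEN },
		{ "m",  VIS_CLOSED }, { "b",  VIS_CLOSED }, { "p", VIS_CLOSED },
		{ "ow", VIS_ROUND },  { "uw", VIS_ROUND },
		{ "f",  VIS_TEETH },  { "v",  VIS_TEETH },
	};
	std::map<std::string, Viseme>::const_iterator it = table.find( p );
	return it != table.end() ? it->second : VIS_REST;
}

// Given the current playback time, find the active phoneme and how far
// through it we are, so the caller can blend current -> next.
void EvaluateLipSync( const std::vector<PhonemeTiming> &timings, int nowMs,
                      Viseme &current, Viseme &next, float &frac ) {
	current = next = VIS_REST;
	frac = 0.0f;
	for ( size_t i = 0; i < timings.size(); i++ ) {
		const PhonemeTiming &t = timings[i];
		if ( nowMs >= t.startMs && nowMs < t.endMs ) {
			current = PhonemeToViseme( t.phoneme );
			next = ( i + 1 < timings.size() )
			       ? PhonemeToViseme( timings[i + 1].phoneme ) : VIS_REST;
			frac = float( nowMs - t.startMs ) / float( t.endMs - t.startMs );
			return;
		}
	}
}
```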

 

[EDIT] In case anyone wants to try this out, here are the links:

 

http://www.annosoft.com/sapi_lipsync/latest_source.zip

 

You'll also need this, and you'll have to install it to a particular path unless you want to rebuild the C++ (see the docs).

MS Speech SDK (free):

http://www.microsoft.com/speech/download/sdk51/

 

I haven't tried this out myself yet, but maybe I will; it looks interesting.


The easiest way to do it would of course be to make an actual animation for every AI bark.

That isn't as outrageous as it sounds. They don't have to be perfect; what's good enough for a game like this is nowhere near what's good enough for a movie.

Coding in the facial expressions will be a lot simpler.

So if nothing better can be thought of, we'll fall back on doing this.


If we want to have real lipsyncing it should be done in realtime, but I don't see how we can do this, unless we use an alternative sound engine.

Why? Is the dialogue going to change in real time? No, it's not. Each sound file will have a phoneme file associated with it, which is played when the sound file is played.


Why would we have to use another sound engine when we have this source that directly processes the OGG/WAV files and generates phoneme timing scripts that can be used for lipsync? I have to agree: I think it'd be sufficient for our purposes if we wrote some code to generate a facial animation based on the phoneme timings, and then saved an animation per phrase.

 

Trying to do it in realtime would be much more involved (more so than we need to spend time on, IMO). We'd also have to read ahead in the stream, since from what Odd is saying, doing it correctly depends on what they're about to say in the future as well as what they're currently saying.


Why? Is the dialogue going to change in real time? No, it's not. Each sound file will have a phoneme file associated with it, which is played when the sound file is played.

 

Because, if we don't have access to the sound engine, we won't know when a sound is interrupted or overlaid with another one. So we have a guard saying something, then he is interrupted mid-sentence and suddenly starts to say something else, but the animation continues to play the old sync. I don't know about you, but every time I see a movie that is badly synchronized I get an itchy feeling, to the point that it becomes unbearable. It detracts from the picture and shatters immersion. And for this to happen, it only takes the lip movement and the speech being a split second apart.

Something like this is even worse than having no lipsyncing at all.

Gerhard


That will be easy to handle, Spar; there's no need to directly "see if a sound is playing".

 

Our AI decides what sound it will play at what time. When the AI is interrupted, it needs to decide what to do about it, and it is at that point that we can stop playing the sound, stop playing the phoneme animation, choose a new sound file, and start the phoneme anim that goes with it.
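
As a sketch of that flow (StopSound, StartSoundShader, SND_CHANNEL_VOICE and declManager are real Doom 3 SDK names; the phoneme-anim helpers are hypothetical placeholders for whatever we end up writing):

```cpp
// The AI owns both the sound and the mouth animation, so an interruption
// stops and restarts them in one place and they can't drift apart.
void OnBarkInterrupted( idAI *ai, const char *reactionBark ) {
	// cut the current line and its mouth animation together
	ai->StopSound( SND_CHANNEL_VOICE, false );
	StopPhonemeAnim( ai );                          // hypothetical helper

	// start the new bark and the phoneme anim generated for that file
	const idSoundShader *shader = declManager->FindSound( reactionBark );
	ai->StartSoundShader( shader, SND_CHANNEL_VOICE, 0, false, NULL );
	StartPhonemeAnim( ai, reactionBark );           // hypothetical helper
}
```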

 

In fact, asking the sound engine directly to see if a particular sound is playing would be a really roundabout and inefficient way of doing it. In all my game programming experience, the only time I needed to ask the system if a certain sound file was playing was when I was limited in how many sounds I could play at once and needed to prioritize which ones cancelled out others. And even then, I could have programmed my own sound management system by starting timers for each sound file, or something like that.
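
That timer idea is simple enough to sketch; the names here are purely illustrative:

```cpp
// Track when each AI's voice sound will end instead of querying the
// sound system. Illustrative sketch, not an existing class.
#include <map>

class VoiceTracker {
public:
	void OnSoundStarted( int entityNum, int nowMs, int durationMs ) {
		endTimeMs[entityNum] = nowMs + durationMs;
	}
	bool IsSpeaking( int entityNum, int nowMs ) const {
		std::map<int, int>::const_iterator it = endTimeMs.find( entityNum );
		return it != endTimeMs.end() && nowMs < it->second;
	}
private:
	std::map<int, int> endTimeMs;   // entity number -> when its bark ends
};
```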


The sound engine has enough exposed functions to avoid the problem you're talking about, Sparhawk. Domarius is right in the sense that we have access to all the calls that start a sound in the sound engine, and where the code presently stops all other sound on that channel before playing a new sound, we can add a line to stop all existing lipsync animation as well, before playing a new sound on the CHANNEL_VOICE channel.
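
In other words, something on the order of this one-line patch at the point where voice sounds are started (StopLipSync is a hypothetical helper; the SDK calls are real):

```cpp
// Sketch of the choke point: the code already stops the old sound on the
// channel before starting a new one, so stopping the old lipsync anim in
// the same spot keeps sound and animation in lockstep.
void PlayVoice( idEntity *ent, const idSoundShader *shader ) {
	ent->StopSound( SND_CHANNEL_VOICE, false );  // existing replace-on-channel behaviour
	StopLipSync( ent );                          // the one added line (hypothetical helper)
	ent->StartSoundShader( shader, SND_CHANNEL_VOICE, 0, false, NULL );
}
```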


  • 2 weeks later...

These are some tests using the vertex exporter, which converts the vertices of the model into bones.

Of course, this script can be used for other things, like cloth deformation.

 

[Animated GIF previews: taling.gif, talking_boned2.gif, seethru.gif]


BTW, any lip readers among you?

(I didn't actually animate it to any specific phrase, but it seems pretty obvious what he's saying when you read his lips; it's pure accident.)


