Quicktalk Lip Synch Addon

UPDATE: Since recording the above tutorial I’ve added an option to use bone-location rather than bone-rotation when building or plotting phonemes to the timeline, and I’ve also changed how it decides which bones to build a rig for: it now builds for shape-keys named “AI”, “O”, “E”, “U”, “ETC”, “L”, “WQ”, “MBP”, “FV” and anything beginning with “qt_”, rather than for everything except keys beginning with “!”.

I’ve written an addon for Blender intended to help you lip-synch a character from a simple text-file script describing what all the characters in the scene say.

You can find it here on Github:
https://github.com/revpriest/blenderquicktalk

Or just download the latest known-working version for Blender 2.7 from this website here:
http://tentacles.org.uk/media/scripts/QuickTalk.py

New Jan 2019: New version for Blender 2.8 here:
http://tentacles.org.uk/media/scripts/QuickTalk28.py

 

There’s a tutorial demonstration at the top of this page.

How’s it work?

You build shape-keys for your character’s mouth: 9 of them, one for each of the phonemes “AI”, “O”, “E”, “U”, “L”, “WQ”, “MBP”, “FV” and “ETC”

Next you apply an armature to that object, then click the “generate panel” button. This adds a panel of bones, acting as levers, to the armature. Each lever controls one shape-key, so rotating the “AI” lever increases the amount of the “AI” shape-key shown.
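
If you’re curious how a lever bone can end up controlling a shape-key at all, the usual Blender mechanism is a driver. The snippet below is only a minimal sketch of that general technique, with made-up object names (“Face”, “Rig”); it isn’t the addon’s own code, which may wire things up differently.

    # Minimal sketch: drive the "AI" shape-key from the "AI" lever bone's
    # local X rotation. Object names are examples, not the addon's own.
    import bpy

    face = bpy.data.objects["Face"]   # mesh holding the phoneme shape-keys
    rig = bpy.data.objects["Rig"]     # armature holding the lever bones

    fcurve = face.data.shape_keys.key_blocks["AI"].driver_add("value")
    driver = fcurve.driver
    driver.type = "SCRIPTED"

    var = driver.variables.new()
    var.name = "rot"
    var.type = "TRANSFORMS"
    var.targets[0].id = rig
    var.targets[0].bone_target = "AI"
    var.targets[0].transform_type = "ROT_X"
    var.targets[0].transform_space = "LOCAL_SPACE"

    # Map roughly 0-90 degrees of lever rotation onto 0-1 of shape-key value.
    driver.expression = "min(max(rot / radians(90), 0.0), 1.0)"

Swapping the transform type to “LOC_X” would drive the same shape-key from the lever’s X-translation instead of its rotation.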

Then select a script file, a .txt file which just contains the lines spoken by the characters. Every time the ‘speaker’ changes, have a line with that speaker’s name followed by a colon. Then just the words that they speak until the next person chimes in.
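
For example, a script file might look something like this (the character names and lines here are just made up):

    ALICE:
    Hello there. Have you seen my keys anywhere?
    I had them a minute ago.

    BOB:
    They are on the table, exactly where you left them.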

Clicking the “Guess Dialogue” button then puts markers in at roughly the places where the person speaking changes. You’ll need to adjust those markers so they are correct.

Then click the “Guess Lines” button, which does the same for every line within each speaker’s dialogue. Again, adjust those markers.

Then click the “Guess Words” button and new markers are added for every word in every line of every character’s speech. Adjust those too.
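
The guesses are only rough placements that you then fix up by hand. As a toy illustration of the general idea (even spacing only, not necessarily the addon’s actual rules), dropping one timeline marker per word might look like this:

    # Toy sketch: drop one timeline marker per word, evenly spaced between
    # two frames. The real addon's spacing rules may well differ.
    import bpy

    def guess_word_markers(words, start_frame, end_frame):
        scene = bpy.context.scene
        step = (end_frame - start_frame) / max(len(words), 1)
        for i, word in enumerate(words):
            scene.timeline_markers.new(word, frame=int(start_frame + i * step))

    guess_word_markers(["have", "you", "seen", "my", "keys"], 10, 60)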

Next select a dictionary file which tells the plugin which words are spoken with which phonemes. You can download a dictionary file from the CMU Pronouncing project here:
http://www.speech.cs.cmu.edu/cgi-bin/cmudict

Alternatively, if you have trouble there, the version I’ve been using is at:
http://tentacles.org.uk/media/scripts/standard_dictionary
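
Each entry in the dictionary is a word followed by its ARPAbet phonemes (for example “HELLO  HH AH0 L OW1”), and those phonemes get collapsed down to the mouth shapes listed above. The snippet below is only an illustrative guess at that kind of lookup; the grouping QuickTalk actually uses lives in the script itself.

    # Illustrative only: collapse a CMU-style dictionary line into the
    # addon's viseme names. This grouping is a guess, not QuickTalk's own.
    ARPABET_TO_VISEME = {
        "AA": "AI", "AE": "AI", "AH": "AI", "AY": "AI",
        "AO": "O",  "OW": "O",  "OY": "O",
        "EH": "E",  "EY": "E",  "IH": "E", "IY": "E",
        "UW": "U",  "UH": "U",
        "L":  "L",  "W":  "WQ",
        "M":  "MBP", "B": "MBP", "P": "MBP",
        "F":  "FV",  "V": "FV",
    }

    def word_to_visemes(dict_line):
        parts = dict_line.split()
        word, phonemes = parts[0], parts[1:]
        visemes = []
        for p in phonemes:
            p = p.rstrip("012")                               # strip stress digits
            visemes.append(ARPABET_TO_VISEME.get(p, "ETC"))   # unknown -> ETC
        return word, visemes

    print(word_to_visemes("HELLO  HH AH0 L OW1"))
    # -> ('HELLO', ['ETC', 'AI', 'L', 'O'])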

Pro Tip
If your characters say non-dictionary words, like proper nouns or just “Uggh!” or “Arghhgh!”, then the dictionary entry for those words obviously won’t exist.

Replace them in your script with dictionary words which sound or look the same. So, “Samuel” might become “Sam mule well” or “Arghgh” might become “Aren’t”.

Hope you find some use for this plugin, and do check out my cartoons and puppet shows here.

UPDATE

The script has been updated to allow x-translation instead of rotation for the bones if you like.

UPDATE 2

The script has been updated to allow the “TH” phoneme to be used. Just add a shape-key for it as before. If you don’t bother with the TH then it’ll just fall back to ETC which is what it was doing before.
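
In other words the extra shape-key is optional; the fallback amounts to something like this (illustrative only, not the addon’s actual code):

    # Illustrative: use the "TH" shape-key only if the mesh actually has one,
    # otherwise treat TH sounds as "ETC" as before.
    def th_viseme(shape_key_names):
        return "TH" if "TH" in shape_key_names else "ETC"

    print(th_viseme(["AI", "O", "E", "U", "ETC"]))        # -> ETC
    print(th_viseme(["AI", "O", "E", "U", "ETC", "TH"]))  # -> TH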

UPDATE 3

The script has now been updated to allow using MHX2-imported models created by MakeHuman. If the character has been exported from MakeHuman and imported into Blender with Thomas’s MakeHuman import script, and you pick “MHX2” as the format (rather than X-Rotation or X-Translation), then there’s no need to even make shape-keys or build bone-panels.

See this video tutorial for an example, which takes you through installing Thomas’s exporter/importer, building and exporting a MakeHuman character, importing it into Blender, and using QuickTalk to make it speak some stupid joke lines.

 

UPDATE Jan 2019

Blender 2.8 is out now and brings a few API incompatibilities, so there’s a new version of Quicktalk for Blender 2.8 here:
http://tentacles.org.uk/media/scripts/QuickTalk29.py

The “MakeHuman” compatibility of the Blender 2.8 version may or may not ever happen, but it certainly won’t happen until the MakeHuman importer for Blender is compatible with 2.8.

 

11 thoughts on “Quicktalk Lip Synch Addon”

  1. changing lines 320-322 to
    x = bpy.context.scene.cursor.location[0];
    y = bpy.context.scene.cursor.location[1];
    z = bpy.context.scene.cursor.location[2];

    works

  2. Away from my machine at the moment, but sounds like you’re suggesting that the 2.8 version of my Quicktalk plugin here has started failing coz 2.8 has removed a system it uses. Thanks for the tip! Will try and check into it next week 🙂

  3. Hey there! Your plugin is the best I’ve seen for Blender. I’ve got a question. Have you heard of the plugin for Daz Studio called Daz Importer? If you have, then what are your thoughts about creating an option for its visemes within your plugin, as you did with MakeHuman?

  4. Hello. Now that Blender 2.8 has officially rolled out, will there be any changes made to the core plugin, or will it continue to work as it did with the beta? (haven’t downloaded the stable release yet)

  5. Hey Adam. The addon works nicely. One question on the fly though. In the future, do you plan on adding more visemes like say, 12 total instead of 9, for more detailed human characters? What are your thoughts on this?

  6. I do not plan to add more visemes, no.

    You may be able to edit the script yourself to add any you particularly need, there’s a mapping between visemes and phonemes around line 543. You could experiment with adding new ones there, but you’d also have to create the shape-keys etc. of course, which is why there won’t be any extras added by default. The selection that’s there is the optimum for my purposes at least.
