UPDATE: Since recording the above tutorial I have added an option to use bone-location rather than bone-rotation when building or plotting phonemes to the timeline, and also changed the way it decides which bones to build a rig for: it now builds for shape keys called “AI”, “O”, “E”, “U”, “ETC”, “L”, “WQ”, “MBP”, “FV” and anything beginning with “qt_”, rather than for every shape key except those beginning with “!”.
I’ve written an addon for Blender intended to help lip-sync a character using a simple text script describing what all the characters in the scene say.
You can find it here on Github:
Or just download the latest known-working version for Blender 2.7 from this website here:
New Jan 2019: New version for Blender 2.8 here:
There’s a tutorial demonstration at the top of this page.
How’s it work?
You build shape-keys for your character’s mouth: nine of them, one for each of the phonemes “AI”, “O”, “E”, “U”, “L”, “WQ”, “MBP”, “FV” and “ETC”.
Next you apply an armature to that object, then click a “generate panel” button. This adds a panel of bones representing levers to the armature. Each lever controls one shape-key, so rotating the lever “AI” increases the amount of the “AI” shape-key shown.
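In Blender the lever-to-shape-key link would typically be done with a driver, but the underlying mapping is simple. Here is an illustrative sketch of that mapping in plain Python (the 30° full-throw angle is an assumption for illustration, not the addon’s actual value):

```python
import math

# Assumed lever rotation, in radians, at which the shape key is fully on.
FULL_THROW = math.radians(30.0)

def lever_to_shapekey(rotation_x):
    """Map a lever bone's X rotation (radians) to a shape-key value.

    Sketch only: clamps the result to the 0..1 range a shape key expects.
    """
    value = rotation_x / FULL_THROW
    return max(0.0, min(1.0, value))
```

So a lever at rest gives a value of 0, a fully rotated lever gives 1, and anything past the full throw is clamped.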
Then select a script file, a .txt file which just contains the lines spoken by the characters. Every time the ‘speaker’ changes, have a line with that speaker’s name followed by a colon. Then just the words that they speak until the next person chimes in.
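A script file in that format might look like this (the names and lines are made up for illustration):

```
ALICE:
Hello there.
Nice weather today.
BOB:
Indeed it is.
ALICE:
See you later.
```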
Clicking the “Guess Dialogue” button then puts markers in at around about the places where the person speaking changes. You’ll need to adjust those markers so they are correct.
Then click the “Guess lines” button, which does the same for every line within each speaker’s dialogue. Again, adjust those markers.
Then click the “Guess words” button and new markers are added in for every word in every line in every character’s speech. Adjust those too.
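One naive way to guess marker positions like this is to spread them evenly across a known frame range. This is only an illustrative sketch of that kind of heuristic, not the addon’s actual logic:

```python
def guess_marker_frames(start_frame, end_frame, count):
    """Naively spread `count` markers evenly between two frames.

    Illustrative guess at the heuristic: the real positions depend on
    speech timing, which is why the markers need manual adjustment.
    """
    step = (end_frame - start_frame) / count
    return [round(start_frame + i * step) for i in range(count)]
```

For example, four words guessed across frames 0–100 would land at frames 0, 25, 50 and 75, ready for you to nudge into place.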
Next select a dictionary file which tells the plugin which words are spoken with which phonemes. You can download a dictionary file from the CMU Pronouncing project here:
Alternatively, if you have trouble there, the version I’ve been using is at:
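Entries in the CMU dictionary are plain text: one word per line, followed by its phonemes in ARPAbet notation, with stress digits on the vowels and comment lines starting with “;;;”. Here is a minimal sketch of reading one entry (my own illustrative parser, not the addon’s code):

```python
def parse_cmu_line(line):
    """Return (word, [phonemes]) for one CMU dictionary line, or None.

    Sketch only: skips comment lines and strips the stress digits the
    dictionary appends to vowels, e.g. "AH0" -> "AH".
    """
    if line.startswith(";;;") or not line.strip():
        return None
    parts = line.split()
    word, phonemes = parts[0], parts[1:]
    phonemes = [p.rstrip("012") for p in phonemes]
    return word, phonemes

word, phones = parse_cmu_line("HELLO  HH AH0 L OW1")
# word is "HELLO", phones is ["HH", "AH", "L", "OW"]
```

The addon then maps each phoneme to one of the mouth shape-keys listed above.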
If your characters say non-dictionary words, like proper nouns or exclamations such as “Uggh!” or “Arghhgh!”, then the dictionary entries for those words obviously won’t exist.
Replace them in your script with dictionary words which sound or look the same. So, “Samuel” might become “Sam mule well” or “Arghgh” might become “Aren’t”.
Hope you find some use from this plugin. Do check out my cartoons and puppet shows here
The script has been updated to allow x-translation instead of rotation for the bones if you like.
The script has been updated to allow the “TH” phoneme to be used. Just add a shape-key for it as before. If you don’t bother with the TH then it’ll just fall back to ETC, which is what it was doing before.
The script has now been updated to allow using MHX2 imported models created by Makehuman. If the Makehuman characters have been exported from Makehuman and imported into Blender with Thomas’ MakeHuman import script, and you pick “MHX2” as the format (rather than X-Rotation or X-Translation), then there’s no need to even make shape-keys or build bone-panels.
See this video tutorial for an example, which takes you through installing Thomas’ exporter/importer, building and exporting a Makehuman character, importing it to Blender and using QuickTalk to make it speak some stupid joke lines.
UPDATE Jan 2019
Blender 2.8 is out now and brings a few API incompatibilities so there’s a new version of Quicktalk for Blender 2.8 here:
The “Makehuman” compatibility of the Blender 2.8 version may or may not ever happen, but it certainly won’t happen until the Makehuman importer for Blender is compatible with 2.8.