Starship Schrödinger’s Destiny V0.1 Devlog

With the release of “Forgettery” comes a new #devlog.

https://www.youtube.com/watch?v=abT-8QTQix4

Hopefully you’ve watched my version 0.1 mini-episode, “Forgettery”, and found it entertaining. It was fun to make.

Version 0.1 took longer than expected. It’s been five months since version 0.0, and I was aiming for two or three.

But that’s because the “Forgettery” story demanded things I hadn’t really planned to do as early as version 0.1.

So it’s good to have those things in a working state.

REMODELLED CHARACTERS

“Forgettery” starred Simon and Anxi because they’ve been redone from scratch this version, bringing them up to spec with the rest of the crew.

NEW SHIP

And the ship, the Starship Destiny herself, has been completely revamped too. New shape, new console, and lots of code to keep the console dials and sticks and knobs and VR goggles actually working.

VR GOGGLES

The command-console has VR goggles on it. These will be key: putting them on your face can transport you virtually to many other places and viewpoints.

They don’t work yet, but you can pick ’em up in the Vive version, and the code is mostly there to place them on the characters’ faces.

EXPANDED SPEC: WALKING, ENGINEERING DECK, ELEVATOR

And that was basically all I had planned for V0.1, but then the story I wrote for the mini-episode had all these extra demands:

Characters needed to be able to walk around, and the Engineering Decks needed to exist and be fitted out. The ship needed an elevator, and code to run it. Scenes needed to load and unload as you change floors.

That all took weeks.
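For the curious, the scene switching per floor boils down to something like this sketch (simplified, with placeholder scene names rather than the real decks):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Simplified deck switcher: load the destination deck additively so anything
// persistent stays alive, then unload the deck you just left once the new
// scene has finished loading.
public class DeckSwitcher : MonoBehaviour
{
    // Called when the elevator arrives at a floor.
    public void SwitchDeck(string fromScene, string toScene)
    {
        SceneManager.LoadSceneAsync(toScene, LoadSceneMode.Additive).completed +=
            _ => SceneManager.UnloadSceneAsync(fromScene);
    }
}
```

Additive loading is the natural fit for that sort of setup, since whatever needs to persist between floors (the crew, the lift itself) can sit in a scene that never unloads.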

VOICE ACTORS

I recruited actual voice actors to play the main characters, and the two I’ve used so far, Anna Lock and Matthew Mahoney, both did an excellent job. Recording was nice and easy, as was the audio edit.

SCRIPT EDITOR IMPROVEMENTS

For version 0.0 I’d used Audacity to mark up the sound-files for animation direction.

Individually, one by one, saving a separate markers-file for every audio-sample.

It was a pain, it took ages, and I hated it.

So that job needed more automation.

Now the script-editor is wave-sample aware: it displays the waveform and lets me mark it up, saving the data into the script file where it belongs.
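Conceptually the markup is just timestamped directions hanging off each line of dialogue, something in the vein of the sketch below (the field names are illustrative, not the exact format):

```csharp
using System;
using System.Collections.Generic;

// Illustrative shape of the per-line markup the script editor saves.
[Serializable]
public class AnimationMarker
{
    public float time;        // seconds into the audio sample
    public string direction;  // e.g. "turn to Anxi", "raise hand"
}

[Serializable]
public class DialogueLine
{
    public string character;  // who is speaking
    public string audioClip;  // which recorded sample
    public List<AnimationMarker> markers = new List<AnimationMarker>();
}
```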

360-3D VIDEO EDITION

Version 0.0 had no 360-3D video, and I wanted to have one for version 0.1. So I found a plugin that lets you render the game in Unity in 360 stereovision. But it takes something like 10 seconds to render each frame. WAY longer than the flat-screen.

This means the game runs very slowly while recording: about 30 hours or so to render “Forgettery”.
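As a rough sanity check, assuming 30 frames per second: every minute of footage is 1,800 frames, which at 10 seconds a frame is 5 hours of rendering, so a five-or-six-minute episode lands right around that 30-hour mark.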

And the first time I did that, it came out wrong 🙁

EXPANDED SPEC 2: MOUTH ANIMATION

The mouth-movement of the characters was wired up to just reflect the volume of the audio-source attached to them. And of course, during the 360-3D rendering, the sample plays far too quickly relative to the slowed-down frames to inform the mouth-movement. The first time, it rendered with everyone’s mouths resolutely shut, even as they spoke.
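Stripped down, that volume-driven approach amounts to something like this (the jaw bone name and scaling are simplified placeholders):

```csharp
using UnityEngine;

// Simplified volume-driven jaw: read the audio that is currently playing,
// take a rough RMS level, and open the jaw hinge in proportion to it.
[RequireComponent(typeof(AudioSource))]
public class VolumeDrivenJaw : MonoBehaviour
{
    public Transform jawHinge;        // placeholder for the character's jaw bone
    public float maxJawAngle = 20f;   // degrees open at full volume

    private AudioSource voice;
    private readonly float[] samples = new float[256];

    void Start()
    {
        voice = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Grab the samples the AudioSource is playing right now...
        voice.GetOutputData(samples, 0);

        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        float level = Mathf.Sqrt(sum / samples.Length);

        // ...and open the jaw to match. Because this reads the live audio,
        // it falls apart when each 360 frame takes ~10 seconds to render:
        // the speech has long since finished by the time a frame is captured.
        float open = Mathf.Clamp01(level * 10f);
        jawHinge.localRotation = Quaternion.Euler(open * maxJawAngle, 0f, 0f);
    }
}
```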

Version 0.2 is going to have better facial control, and the plan is to use the script-editor’s waveform-awareness to save the mouth movement’s full viseme animation data into the script file so it can be edited.

So the last job before re-rendering the 360 video was to build the bare bones of that.

NEXT VERSION

Which brings me nicely on to the things planned for version 0.2.

EMOTIVE FACES

The faces of the characters are basically entirely blank at the moment. They can’t even blink, let alone smile. They just have a single hinge for their jaw and swivelling eyes, like a ventriloquist’s dummy.

So the plan is to figure out a way to morph the faces with more finesse, and hook that up to some kind of emotional-state tracking system in each character’s controller code. Have them smile when they’re happy and frown when they’re angry, etc.
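The rough shape of that, assuming the new faces get Smile/Frown blend shapes (which they don’t have yet), would be something like:

```csharp
using UnityEngine;

// Sketch of the planned emotional-state hookup: the controller sets a mood
// value, and the face eases its (not-yet-existing) Smile/Frown blend shapes
// toward it each frame. Shape indices and ranges are placeholders.
public class EmotiveFace : MonoBehaviour
{
    public SkinnedMeshRenderer face;
    [Range(-1f, 1f)] public float mood;   // -1 = angry ... +1 = happy
    public int smileShape = 0;
    public int frownShape = 1;
    public float blendSpeed = 200f;       // blend-shape units per second

    void Update()
    {
        // Unity blend shape weights run from 0 to 100.
        float smileTarget = Mathf.Max(mood, 0f) * 100f;
        float frownTarget = Mathf.Max(-mood, 0f) * 100f;

        face.SetBlendShapeWeight(smileShape, Mathf.MoveTowards(
            face.GetBlendShapeWeight(smileShape), smileTarget, blendSpeed * Time.deltaTime));
        face.SetBlendShapeWeight(frownShape, Mathf.MoveTowards(
            face.GetBlendShapeWeight(frownShape), frownTarget, blendSpeed * Time.deltaTime));
    }
}
```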

VISEMES

Then, layered on top of that, something to move the lips and form the visual representation of the phonemes being spoken: the visemes system. That’ll need script-editor enhancements, building something a bit like Papagayo into the script editor to mark up the start of every word.
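However that data ends up stored, reading it back should be the easy part; something along these lines, with placeholder names, is all it takes to know which mouth shape should be showing at a given moment:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of looking up baked viseme data: walk the (time, viseme) entries the
// script editor saved and return whichever shape applies at a playback time.
// The caller decides where that time comes from.
public class VisemeTrack : MonoBehaviour
{
    [System.Serializable]
    public struct Entry { public float time; public string viseme; }

    public List<Entry> entries = new List<Entry>();   // assumed sorted by time

    public string VisemeAt(float playbackTime)
    {
        string current = "rest";                      // closed mouth by default
        foreach (var e in entries)
        {
            if (e.time > playbackTime) break;
            current = e.viseme;
        }
        return current;
    }
}
```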

MOTION CAPTURE EXPERIMENTS

You’ll have noticed that the animation in Forgettery is rather crappy. That’s because there was literally no point in polishing it at all; it’s all just scaffolding to be rebuilt more strongly.

Ideally, with a motion-capture system, so I can *act* the animation rather than having to laboriously construct it.

So I’ll spend a few weeks experimenting with this Kinect sensor and setting up a motion-capture system in my studio.

Hopefully that’ll be workable, but if not, then it’s a big sigh and a proper polish of the existing animation.

WRAP UP

Anyway, yeah. Version 0.2 should hopefully have motion-captured animation, an emotion and viseme system to animate the characters’ faces, and a whole new five-minute story.

Target: End of September.

Though these things rarely seem to come in on target.

Meantime, you can visit the ship in VR person and interact with the characters yourself, using the GearVR or Vive versions of this version 0.1 release.

Do that, or subscribe for more info, or volunteer to help out.
