
Making a short movie with MakeHuman and

Blender 2.5 in two weeks.

Thomas Larsson

November 24, 2010

1 Introduction

The purpose of this document is to describe MakeHuman and the new tools for
film-making in Blender that have been created during the past year: the MHX
format, the mocap tool and the lipsync tool. The best way to know what tools
are needed is to actually try to make a short movie, and the best way to describe
the tools is to describe the workflow that led there. The resulting movie can
be viewed at http://www.youtube.com/watch?v=VfWSSCUOjIA.

The main emphasis is on using the MHX rig with the mocap tool, but to make
this book self-contained it also contains some general information about the
film-making workflow in Blender. This general information is rather brief and
has been described better elsewhere. In particular I recommend Roland Hess’
book [3], which was my main source of information. It was written for Blender
2.49, but most of it translates directly to Blender 2.5x.

The movie has some technical flaws that will be discussed in the text. Some
of these flaws could have been fixed by polishing the movie further, but then
I could not have truthfully claimed that it is possible to make a short movie
with MakeHuman and Blender in two weeks. It is already stretching the truth
a little; the first pictures for the storyboard were created on Saturday, October
23, and the movie was uploaded to youtube on Saturday, November 6, which
means that it actually took fifteen days to complete. That it was possible to
finish the project in that time is an important point.

The time budget for this project was somewhat unusual for an animated film.
Very little time was spent on character modelling, texturing, rigging and
animation. Designing a character in MakeHuman is very quick (especially since I
reused two characters from earlier projects), and a rigged and textured
character can be brought into Blender within a minute with the MHX importer.
And since almost everything is animated with mocap (motion capture), the
animation itself only took the time needed to load the BVH file.

Instead, what did take most of the time was:

• Searching for good mocap files. I stayed within the mocap library from
CMU, but even so it took a lot of time to load and evaluate different
motions.
• Cleaning up animations. Mocap files are noisy, and sometimes the limbs
do not move as they should. Sometimes physical differences between the
mocap actor and the target character may lead to problems; e.g., the
female character had a tendency to penetrate her own breasts when boxing.
• Stitching animation together in the NLA editor.
• Searching the internet for other resources (3D models of buildings, music,
sound effects, textures).
• Setting up lights and shadows. I obviously have a lot to learn in this
department.

• Rendering. The full movie rendered overnight on my home PC, which is
a rather modern computer with 6 GB of RAM running 64-bit Ubuntu.

MakeHuman is an open-source project and as such it suffers from incomplete
documentation. There has been no user manual for using the mocap and lipsync
tools, and the MHX documentation on the MakeHuman website is somewhat
outdated. The only tutorial previously available for the mocap tool was the
video [2]. The present book can be seen as a replacement for a user manual,
where the various options are described together with the actual workflow. Some
of the text in this book may eventually be reused in a proper user manual.

2 Getting the software

The movie was made exclusively with open-source or at least free software,
which can be legally downloaded.

2.1 MakeHuman

MakeHuman can be found at http://www.makehuman.org, but the actual
download site is http://sites.google.com/site/makehumandocs/download.

The latest official release is alpha 5, but this is outdated and can not be used
with this text. Instead scroll down the page and download a nightly build that
suits your computer.

Figure 1: Download a nightly build

Alternatively, the most recent version of MakeHuman can be downloaded using
Subversion (svn); the svn site is:
http://code.google.com/p/makehuman/source/browse/#svn/trunk/makehuman.
Subversion is a program for managing large programming projects. To use it
you need an svn client. Good svn clients are TortoiseSVN for Windows
(http://tortoisesvn.tigris.org/) and RapidSVN for Linux
(http://rapidsvn.tigris.org/).

Figure 2: SVN browser

2.2 Blender

The main application used for creating the movie is Blender
(http://www.blender.org/), an open-source 3D package similar to Maya,
3ds Max or XSI. The movie was created with the most recent version, Blender 2.55,
which can be downloaded from
http://www.blender.org/development/release-logs/blender-254-beta/.
Later versions may have been released after this text was written.

The most recent Blender builds can be downloaded from
http://www.graphicall.org/.

The new Blender 2.5x is quite different from the previous 2.4x versions. It
has a new user interface, a new animation system, and much more. What is
important for us is that the Python API (application programming interface) has been
completely changed from 2.49. The MakeHuman tools are written for Blender
2.5x (more precisely, 2.55), and do not run under 2.4x. There used to be versions
of the MHX importer and lipsync tool for 2.49 as well, but they are not really
supported anymore. The mocap tool is written exclusively for Blender 2.5x.

2.3 Gimp

Gimp is a painting and photo-editing application, sometimes described as the
open-source alternative to Photoshop. Be that as it may. Gimp was used to
draw textures (mainly the masks that hide the characters’ bodies beneath their

clothes), and also to edit the illustrations in this book. Gimp can be downloaded
from http://www.gimp.org/.

2.4 Papagayo and Wine

Lipsyncing was done in Papagayo and then exported as a Moho file (.dat), which
can be imported by the lipsync tool. Papagayo is quite old software which is
not being developed anymore, but it offers a simple way to do lipsyncing. It is
not open source, but it can be downloaded for free from
http://www.lostmarble.com/papagayo/.

I have been using Papagayo under Windows XP for quite a while, but I have
not been able to make the Linux version work. There is a download link for
Linux on Papagayo’s home page, but it is only for 32-bit Linux (not surprising
considering the age of the software), and it does not run on my 64-bit Ubuntu
computer.

However, 32-bit Windows programs do run under Wine on 64-bit Linux. Wine is a
Windows emulator for Linux, which makes it possible to run Windows programs
under Linux. As far as I understand, there is no performance penalty for using
Wine. It emulates Windows in the same way as running a program in XP mode
under Windows 7 emulates Windows XP; the program runs at full speed on
the processor, but system calls are intercepted and redirected. Some Windows
programs can have problems running under Wine, but I did not experience any
problems with Papagayo. After installing Wine and Papagayo, the latter can
be accessed from the Applications > Wine > Program > Papagayo > Papagayo
menu.

Wine can be downloaded from http://www.winehq.org/.

2.5 Audacity

Audacity was used for editing the sound clips. It can be downloaded from
http://audacity.sourceforge.net/download/.

3 Planning the film

3.1 Script

Every movie starts with an idea, which is then developed into a script. The
script for Don’t call me babe was absolutely minimal, but here it is:

• A sexy woman is out walking when she is stalked by a villain.

• The woman hears something, and starts running.

• She is trapped in an alley.
• The villain, perhaps an ex-boyfriend, attacks the woman.

• There is a fight, which ends with the woman killing the villain.

Incidentally, the woman’s name is Alia and the villain is called Hunter.

This is admittedly not the most advanced plot in the world, and perhaps not a
very nice one either. However, it suited my purposes well:

• To illustrate MakeHuman’s ability to create and export a wide variety of
characters; in this case a muscular male and a large-breasted female.
• To develop the mocap tool and show off its capability. To this end, the
script needs some walking, running and fighting, which is pretty much all
there is in this story.

3.2 Storyboard

The next step is to develop a storyboard, i.e. drawings of the shots, with camera
angles, indications of movements, etc. This can be done with paper and pencil,
or in some drawing application on a computer. However, with MakeHuman and
MHX export making it so easy to design and pose characters, I actually made
the characters first and rendered pictures of them in different poses in Blender.
For somebody with limited drawing skills like myself, this turned out to be the
fastest solution.

Figure 3: The storyboard of Don’t call me babe

The final movie differs quite a lot from the storyboard towards the end. The
changes were not always artistically motivated.

• In the storyboard Alia kills Hunter in the alley, but in the movie they
move out to the street first. The reason is that it was difficult to control
the mocap animations in the narrow alley; the characters kept walking
into trashcans and walls.
• In the storyboard Alia kills Hunter with a stick, but in the movie she kicks
off his head. It was not so easy to find a fitting mocap animation for a
stab, and I didn’t want to take the time to hand animate it. But I did
find a nice animation of A Very Drunk Walk, which would be fitting for
somebody who has just lost his head.
• The storyboard describes a serious action movie. The actual film starts
out in that way, but then turns into something more bizarre.

• In some other scenes, the plot was driven by the available mocap data
rather than vice versa. E.g., Hunter starts laughing after he has kicked
Alia, mainly because I found a good laughing animation.

3.3 Animatic

The storyboard is then assembled into an animatic, i.e. a very preliminary
version of the movie. This takes place in Blender’s Video editor, also known as
the Sequencer. If you have not used the Video editor before, take some time to
familiarize yourself with it.

Figure 4: Select Add image

To load the storyboard pictures into the Video editor and make them into a
movie, we perform the following steps:

1. In the Video editing layout, select Add > Add image.

2. Navigate to the directory where the storyboard is, select all files, and press
Add strip.
3. The files are now loaded into the Video editor, but the animation is way
too short. Each image only takes up a single frame. Separate the strip
into single images. For starters, choose a suitable length for each shot, say
100 frames.

4. Now adjust the timing of the individual shots.

5. We could already add some sound at this stage.

Figure 5: Select all strips in the animatic directory

Figure 6: Separate the strip into individual pictures

Figure 7: Animatic after images have been separated

This is how things should be done, but I admit that I did not complete the two
final steps. An important purpose of the animatic is to get the timing right.
However, mocap data is largely self-timing. If you load mocap files with the
correct frame rate, the duration of a movement automatically becomes correct.
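Since mocap is self-timing, shot lengths follow from frame counts and the frame rate. As a rough planning aid for blocking out the animatic, here is a hypothetical sketch (the project frame rate is not stated in the text, so 24 fps is an assumption):

```python
def shot_frames(durations_s, fps=24, start=1):
    """Convert per-shot durations in seconds into consecutive
    (start, end) frame ranges for the animatic."""
    ranges = []
    frame = start
    for d in durations_s:
        n = round(d * fps)
        ranges.append((frame, frame + n - 1))
        frame += n
    return ranges

# Three shots of about four seconds each:
print(shot_frames([4.0, 4.0, 4.0]))  # [(1, 96), (97, 192), (193, 288)]
```

The same arithmetic also answers how long a 100-frame placeholder shot runs: 100 / 24 ≈ 4.2 seconds.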

4 Finding resources

The focus of Don’t call me babe was on animation and film-making. The
characters were designed in MakeHuman, but it takes a lot of other resources to
make an animated movie; 3D models of buildings and props, textures, dialog
and sound effects, music, etc. I only modelled very simple props myself; the
rest was found in various places on the internet. Remember that Google is your
friend.

4.1 3D models

Figure 8: Three downloaded resources

The set was built from three nice buildings and a lamppost which I found on
the internet:

• A house by ”Eon” at
http://blender-archi.tuxfamily.org. License: GPL.

• A church by ”Ric” at
http://e2-productions.com/repository/modules/PDdownloads/.
• A large building by Scopia Visual Interfaces Systems, s.l. (http://www.scopia.es)
at http://blender-archi.tuxfamily.org. License: CC-BY.

• A lamppost by Jean Montambault at
http://blender-archi.tuxfamily.org. License: GPL.

As far as I understand, the license conditions allow reuse.

Figure 9: A building. Note the license conditions to the right

4.2 Textures and materials

I found textures and materials at three locations:

• CG textures (http://www.cgtextures.com/)
• blender-materials.org (http://matrep.parastudios.de/)

• A Google image search for ”texture” came up with about 36,800,000
results in 0.15 seconds.

Figure 10: Scene with many downloaded textures

The scene above contains many downloaded textures and materials:

• Hunter’s trousers: jeans texture found by Google

• Hunter’s sweater: cloth texture found by Google

• Boxes: PlywoodNew0049_4_L.jpg texture from cgtextures.com

• Trashcan: MetalGalvanized0051_L.jpg texture from cgtextures.com
• Stones: rocky rock material from blendertextures.org
• Ground: asphalt texture from cgtextures.com

Where to find mocap data is discussed in section 11, and where to find sound
and music is discussed in section 15.

5 Linking

There is a blend file for each scene in Don’t call me babe, to make it possible
to adjust lighting, camera angles, etc. for each shot. However, every shot takes
place at the same location and involves the same characters, so it does not make
sense to duplicate this information. If we had made a copy of a character in

14
each shot file, and then realized that we wanted to improve her, the change would
have to be repeated in each file.

The solution to this situation is file linking, which is an essential part of the
film-making workflow. Each character, prop, building and set has its own blend
file, which is then linked into each shot file. Linking is recursive; a prop can
be linked into a set which in turn is linked into a scene. If we now improve a
character or a set, the changes propagate automatically to each scene that uses
the asset.

5.1 Directory structure

Even a short movie like Don’t call me babe makes use of some fifty files, not
counting the 6000+ rendered images that were created. Clearly these files must
be organized so we can find them. Moreover, we must decide on a good directory
structure before we start. One can of course add new subdirectories as they are
needed, but once we start linking files the asset files should not be moved. If an
asset file is moved and some file that uses the asset is then opened, the linked
libraries can not be found and the file is ruined, perhaps permanently. Therefore
we are typically stuck with the directory structure that we define at the beginning
of the project.

The following directory structure was created for Don’t call me babe.

animatic (The storyboard pictures.)
characters
    alia
        alia.blend
        textures
            texture.tif
            texture_ref.tif
            mask.png
            red_silk.png
    hunter
        hunter.blend
        texture.tif
        texture_ref.tif
        mask.png
        jeans.jpg
        Fabric016.jpg
    thomas
        thomas.blend
models
    Building.blend
    church.blend
    House.blend
props
    box.blend
    lamppost.blend
    pebble.blend
    trashcan.blend
    textures
        MetalGalvanized0051_L.jpg
        PlywoodNew0049_4_L.jpg
render
    010-alia_walks
        0150.png
        0151.png
        ...
    020-alia_seen_from_behind
        0180.png
        0181.png
        ...
scenes
    010-alia_walks.blend
    020-alia_seen_from_behind.blend
    ...
sets
    street.blend
    textures
        AsphaltCloseups0106_7_S.jpg
        Roads0108_22_L.jpg
sound
    brw-babe.wav
    cautious-path.wav
    heartbeat-speeding-up-02.wav
    ...
5.2 Filenames

The scene files start with a three-digit number, like 010-alia_walks.blend. This
number makes it easy to sort the shots in the correct order. Scenes were originally
numbered in steps of ten, which makes it easy to insert scenes between existing
ones if you find that extra scenes are needed.

The rendered pictures from each scene are stored in a folder with the same
name as the scene file, i.e. 010-alia_walks. Because the image folders are listed
alphabetically in file selectors, it is easy to load images in the correct order into
the Video sequencer.

Only use letters, digits, hyphens and underscores in your filenames. Blender
has no problem with spaces in filenames, but such problems can arise anyway.
Originally, the files in Don’t call me babe had spaces in them. When it was
time to render, I wrote a shell script to automate the task, but it did not work
because the filenames were broken up at the spaces. Linux gurus probably know
how to get around that, but why introduce a problem when it is just as easy to
use underscores in filenames?
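A small script like the following could have done the renaming in bulk; this is an illustrative sketch, not the shell script mentioned above:

```python
import os
import tempfile

def underscore_names(directory):
    """Rename every file in directory so spaces become underscores;
    returns the new names, sorted."""
    renamed = []
    for name in os.listdir(directory):
        if " " in name:
            new = name.replace(" ", "_")
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new))
            renamed.append(new)
    return sorted(renamed)

# Demo in a scratch directory:
d = tempfile.mkdtemp()
open(os.path.join(d, "010-alia walks.blend"), "w").close()
print(underscore_names(d))  # ['010-alia_walks.blend']
```

Note that renaming blend files after linking has started is exactly the kind of move that breaks links, so do this before production, not during.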

5.3 Props

I actually did model some very simple props for Don’t call me babe: a box, a
trashcan, and two different stones. All of these (in fact, several instances of
each) can be seen in Figure 10. I will not discuss how they were modelled; if you
need help with basic modelling in Blender, consult the books [3, 4, 7] or the
many good tutorials that can be found at blender.org or elsewhere on the internet.

However, we will discuss one step which is necessary to make linking work. Each
prop must be assigned to a group. Consider one of the boxes in the alley (only
one box was modelled). When we are done with the modelling, we create a new
group and assign the box to it in the object context. The outline changes from
orange to green. We should also give the group some meaningful name.

Figure 11: Create a new group and assign the box to it.

The downloaded buildings in section 4.1 were also assigned to groups that could
be linked into the set. Each building consists of many different objects, which
were all assigned to the same group. The groups were suitably named House,
Church and Building, respectively.

5.4 Linking

Let us start by linking the box group that we just created into another file.
The very first thing we need to do before linking is to save our file. By default,
Blender 2.5 uses relative filepaths, i.e. filepaths relative to the present file.
Hence the current file needs to have a filepath, i.e. it must be saved first.

Figure 12: First save the file. Then new groups can be linked into it.

We link a group, or anything else, from a blend file by selecting the File > Link
menu. Navigate to the asset file, and then inside the asset file to the Group
folder. A blend file is like a directory itself, and in the file selector we move
around both in the external file system and inside the blend file in the same
way. As we see in the figure above, the total path for the Box group is
/home/thomas/projects/chase/props/box.blend/Groups/box. This means
that the filepath of the box file is
/home/thomas/projects/chase/props/box.blend, and that we have navigated
to Groups/box inside the blend file.

All assets from a Blender file – animations, materials, meshes, armatures, etc.
– can be linked into other files, but normally you only want to link groups.
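The convention that a blend file acts like a directory can be illustrated with a small helper; `split_blend_path` is a hypothetical name for illustration, not part of Blender’s API:

```python
def split_blend_path(path):
    """Split a path such as '/a/b/box.blend/Groups/box' into the blend
    file on disk and the datablock path inside it."""
    marker = ".blend"
    i = path.find(marker)
    if i < 0:
        raise ValueError("not a path inside a .blend file")
    cut = i + len(marker)
    return path[:cut], path[cut:].lstrip("/")

print(split_blend_path(
    "/home/thomas/projects/chase/props/box.blend/Groups/box"))
# ('/home/thomas/projects/chase/props/box.blend', 'Groups/box')
```

Everything after the .blend part names a datablock category (Groups, Meshes, Materials, ...) and then the datablock itself.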

Figure 13: Two linked boxes. One has been translated.

The linked box appears in the viewport. New boxes can now be created from
the Add > Group instance menu. The second box appears at exactly the same
place as the first one, so we need to translate it to actually see it. We can now
place the linked object where we want. Two boxes were placed on the ground
in the alley, and a third one was put on top of the others.

Figure 14: The box is really an empty.

Note that the linked box is really an empty. It can be translated, rotated and
scaled, but it is not possible to modify its internal structure. The only place a
linked asset can be edited is in the asset file. If you change the box there, e.g.
by adding a texture to it, all three boxes in the linking file receive the same
update.

If you need a local, editable copy of the box, it must be appended to the file
rather than linked. Assets are appended in the same way as they are linked;
Append is just above Link in the File menu. There are situations where it is
desirable to append; e.g., the pebbles in the alley have the Rocky Rock material,
which was downloaded from the internet and then appended into pebble.blend.
However, if you append an asset, you lose the connection to the asset file, and
future updates of the asset will not appear in the files that use it.

Figure 15: Extra safety: first make all paths absolute, then make them relative
again.

One of the most annoying things that can happen is if a link is broken, and
Blender fails to find a linked asset. Hess [3] has some discussion about how
broken links can be restored, but it is best to avoid the situation altogether. To
ensure that the link is really there, I usually close the file after the assets have
been linked. If they are missing when the file is reopened, that’s it. You must
link the assets again, carefully avoiding any mistakes this time, but not a lot
of work has been lost.

As an extra precaution when a new file is created, I usually toggle between all
paths absolute and relative in the File menu, and save the file in between. This is
probably not necessary, but it has become a habit to do so anyway. Fortunately,
no links were broken in the production of Don’t call me babe.

5.5 Sets

The first version of the set was very rough: the ground was a plane and the
buildings were boxes. The animatic and the first animations were made using
this set as a background. Later on the set was replaced, part by part, as
better objects became ready. The final set looks like this.

Figure 16: The street set

The only objects that remained from the original rough set were the ground and
the wall at the end of the alley. The following groups were linked into the set
file:

• One church.
• Two buildings.

• Three houses.
• Many lampposts.
• Three boxes.
• Two trashcans.

• Many stones.

Each shot was lit separately, but two common sun lights were added to the set
to give all scenes a consistent mood. Since the story takes place at night or late
evening, the sun lights are really moon lights, with a blue tint.

Figure 17: All scenes have common sun (or moon) lights

The sun lights were placed on layer 1, and the various objects on layers 11-18.
All layers with objects on them were made visible. The reason for putting the
objects on different layers is that the layers can be turned on and off in the shot
file, cf. section 5.7.

5.6 Characters

The characters were modelled in MakeHuman and imported into Blender with
the MHX importer, where they were further tweaked. This process is described
in detail below.

Here we only focus on the part of character creation that is related to linking.
All objects that make up the character (the mesh, the rig, and the clothes)
were assigned to the same group, just as the props were assigned to a group in
section 5.3. After a new group has been created for the first object, the other
objects are assigned to the same group.

Figure 18: The objects in the Alia group. Note the green outline.

Figure 19: All objects to be used externally must be added to a group

5.7 Scenes

Each shot in the movie has its own .blend file. It is possible to have several scenes
in the same .blend, but Hess [3] recommends against it, so this is something I
have never tried. Before we start animating, the size of a scene file is quite small,
because all assets are linked into it from other files.

After the shot file has been saved in the scene directory (remember that it has
to be saved before we can link), we link the set just as we linked props. Just
navigate to the Scenes part of the set file and press Link.

Figure 20: Link set

The set is now linked. There are two scenes in the file, the original one called
Scene and the linked scene called Street. The linked scene has an L in front of it,
to indicate that it is linked.

Figure 21: The set has been linked

There is actually nothing that we can do in the linked scene, so we switch
back to our original empty scene again. In the Scene context, select Street as

a background scene. Suddenly the linked set appears in the viewport. This
time we can control which parts of the scene should be visible. Recall how
we put different parts of the set on different layers in section 5.5. Controlling
set visibility is very useful, e.g. to have a better view when animating; blocking
objects such as the wall at the end of the alley can be made invisible by turning
off visibility of the corresponding layer. We might also want to turn off some
lamps in the set file, to speed up rendering.

Figure 22: Use background scene. Layer visibility can be controlled.

Figure 23: Link a character. Alia is just an empty.

Next we want to link the characters into the scene. However, the linked Alia is
just a prop; she can be moved as an object but she can not be posed, because
she is just an empty. To make her into a posable character we must make a
proxy for the rig.

Select Alia and choose Make proxy from the Objects menu, or just hit
Ctrl-Alt-P on the keyboard. Blender now asks us which object in the Alia
group we want to proxy. We choose the rig object. A proxy object is now
created. Unlike the linked object Alia, which was just an empty, the proxy
object Alia_proxy is an armature which can be posed. Now we can pose Alia
just like any other armature.

Figure 24: Make a proxy for the rig only.

Figure 25: Alia can now be posed.

There are still some limitations on what we can do with the proxy armature. It
can be posed, but it is not possible to change the bone structure by going to Edit
mode. If we want to import an object and then edit it, it must be appended to
our file rather than linked. However, an appended object is local to the new
file, and does not change if the object in the asset file is modified.
The value of linked assets is that improvements in the asset file automatically
propagate to all files that use the asset.

This concludes our brief discussion of the general film-making workflow in
Blender. We now turn to specifics of using MakeHuman, MHX import, and
the various animation tools.

6 Modelling in MakeHuman

Start MakeHuman by double-clicking on the MakeHuman program
(makehuman.exe under Windows and just makehuman under Linux). Alternatively,
type makehuman (Windows) or ./makehuman (Linux) at the command prompt
in a console window. We are now presented with the start screen.

Figure 26: MakeHuman start screen

To design a character in MakeHuman is a very intuitive process. We design the
character by moving sliders and pulling on the character itself, and we change
the camera view with the mouse buttons:

• Left: Rotate.
• Middle: Scale.

• Right: Translate.

The next few figures illustrate the process.

Figure 27: In General, move the main sliders to get an overall design.

Figure 28: In Details, continue to drag sliders.

Figure 29: Switch to Face camera to get a better view of the head.

Figure 30: Rotate the view to see the character from all sides.

Figure 31: Finally move or scale individual regions.

Our heroine Alia had been created previously and saved in MakeHuman.
Saved characters are typically stored as .mhm files in the makehuman/models
subdirectory in your home directory. To load her, press the Files and Load
buttons and select among the characters appearing on the screen.

Figure 32: Loading a saved character

And here she has been loaded into MakeHuman.

Figure 33: Meet Alia

The MHX format allows us to export a rigged, textured and dressed character
to Blender. To export Alia as an MHX file, press the Files, Export buttons.
Choose the .mhx Export option; for our purpose, the .bvh and group options
are unimportant, nor does it matter whether Hair is exported as a mesh or as
curves. Type in the name of the character and press Save.

Figure 34: Save the character as MHX

Two new files are created in the MakeHuman export folder: alia-24.mhx and
alia-25.mhx. These files can be loaded with the MHX importer into Blender
versions 2.49b and 2.5x, respectively. The difference between these Blender
versions is dramatic, and it is not possible to define an import format which
suits both. Besides, the Blender 2.4x series is becoming obsolete, and MHX
export to it is not actively supported anymore.

Figure 35: Two MHX files are created in the exports folder

Quite a few objects can be included in the MHX file. Apart from the character
and her rig, it can contain various clothes, one or more low-poly proxy meshes,
a mesh-deform cage, and perhaps more in the future. These extra objects can
come in handy, but if you know that you never intend to use them, they are
just excess baggage which costs you export time, file size, and import time. The
file proxy.cfg, located in MakeHuman’s main program directory, controls which
extra objects to include in the MHX file. Here is the content of this file in the
current MakeHuman distribution; edit it to suit your own preferences.

#
# Proxy configuration file for makehuman
# MakeHuman will try to include the enabled proxy meshes in obj and mhx export.
#
# A copy of this file at ~/makehuman or C:\ takes precedence.
#
# Lines which start with @ are commands:
# @ Obj True/False: Export as obj files
# @ Mhx True/False: Include in mhx files
# @ Proxy level: Import depends on Proxy checkbox
# @ Cage level: Import depends on Cage checkbox
# @ Clothes level: Import depends on Clothes checkbox
#
# The cage must come first, because other proxies use it for deformation.
#

@ Obj False
@ Mhx True
@ Cage 3
./data/templates/cage.mhclo

@ Obj True
@ Proxy 2
# ~/makehuman/myproxy.proxy
#./data/templates/Rorkimaru.proxy
#./data/templates/forsaken.proxy
./data/templates/ascottk.proxy

@ Obj True
@ Clothes 4
./data/templates/sweater.mhclo
./data/templates/jeans.mhclo
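The comment block at the top of the file fully specifies the format: # lines are comments, @ lines are commands, and the remaining lines are proxy mesh paths governed by the commands above them. As an illustration of how such a file could be read (this is a sketch, not MakeHuman’s actual parser):

```python
def parse_proxy_cfg(text):
    """Parse proxy.cfg-style text: '#' lines are comments, '@' lines set
    commands, and other non-blank lines are proxy paths governed by the
    commands seen so far."""
    settings, entries = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("@"):
            key, _, value = line[1:].strip().partition(" ")
            settings[key] = value.strip()
        else:
            entries.append((dict(settings), line))
    return entries

sample = """\
@ Obj False
@ Mhx True
@ Cage 3
./data/templates/cage.mhclo
"""
print(parse_proxy_cfg(sample))
# [({'Obj': 'False', 'Mhx': 'True', 'Cage': '3'}, './data/templates/cage.mhclo')]
```

Each proxy path is thus paired with the Obj/Mhx/level settings in force at that point in the file, which matches how the cage, proxy and clothes sections above are grouped.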

A MakeHuman character is bald by default, but that does not make a very
attractive female. Fortunately, MakeHuman comes with a library of different
hairdos that we can choose from. Navigate to Libraries > Hair and browse through
the available hairstyles. Once you have found one that you like, double-click
on the head. Alia now has hair.

Figure 36: Select hair

Hair is not included in the MHX file. Instead it must be exported separately
as an obj file. This is actually a good thing, because we can export several
different hairstyles for the same character and choose between them in Blender.
Go to Files > Export, and select .obj and Export hair as curves. Alia’s name
should already appear in the text field, so all we have to do is press Save. A
file called hair-alia.obj is created in MakeHuman’s export folder.

Hair can be exported in two ways: as a mesh or as curves. Using a mesh to
represent hair may be useful for games, but in Blender particle hair is both
easier to handle and looks more realistic, so this is what we are aiming for. The
advantage of exporting hair as curves is that these can be used as guides for
particle hair. The script described in section 10.1.1 below imports the curves
and creates hair based on these.

Figure 37: Save hair as curves

Next we construct the other two protagonists. Hunter had also been designed
earlier; he is the star of the tutorial [2]. The other male character only plays a
small part, essentially limited to the dancing at the end. He is supposed to look
like myself and is thus called Thomas.

Figure 38: The other protagonists: Hunter and Thomas

7 MHX import into Blender

The MHX file created by MakeHuman must now be loaded into Blender. This is
done with the MHX importer, which is a Python script.

7.1 Copy files from MakeHuman to Blender

The MHX importer is an add-on in Blender 2.55, which comes with the Blender
distribution. If you are using Blender 2.55 beta, you don’t need to copy the
importer to Blender. However, Blender 2.5x has not yet reached a completely
stable state. Changes in the python API can affect the MHX importer and the
mocap and lipsync tools. The API was in complete flux between versions 2.53
and 2.54, when changes broke the MHX importer every other day. Things are
now becoming much more stable, but there are still occasional changes in the
API. In particular, a small change soon after Blender 2.55 beta was released
rendered the MHX importer that comes with Blender unusable. To fix this, you
need to copy the MHX importer from MakeHuman to Blender.

The MHX importer available from MakeHuman is always the most up-to-date.
It can be found either in the most recent nightly build or on the SVN site.
The file is called io_import_scene_mhx.py and is located in the
importers/MHX/blender25x subdirectory under the main MakeHuman pro-
gram directory. This file must be copied to the location where Blender keeps its
add-ons, which in version 2.55 is the 2.55/scripts/addons subfolder in the
Blender folder. The folder name 2.55 will of course change in future releases of
Blender.

Since a version of the MHX importer ships with Blender, the OS will warn you
that you are about to overwrite the file. This is OK. If you still want to keep the
existing importer around for some reason, move it to some other directory. It is
not a good idea to keep two different files which define different versions of the
same add-on in the addons folder, because they will compete. Also delete any
previous .pyc version of the MHX importer that may exist in the addons folder.

The mocap and lipsync tools are not distributed together with Blender. Per-
haps they will come with Blender in the future, but they have not yet reached
sufficient maturity. However, these files are also written as add-ons, and should
be copied between the same locations as the MHX importer.

Figure 39: Copy files from importers/MHX/blender25x to 2.55/scripts/addons.


Answer yes to replace the outdated importer. Remove any .pyc files.
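If you find yourself repeating this copy (e.g. after each nightly build), it can be scripted. The sketch below is my own illustration: the two directory arguments are placeholders for wherever MakeHuman and Blender live on your machine, and the 2.55 folder name changes with the Blender version, as noted above.

```python
import os
import shutil

def install_mhx_importer(makehuman_dir, blender_dir, version="2.55"):
    """Copy the MHX importer from the MakeHuman tree into Blender's
    addons folder, replacing any older copy, and delete a stale .pyc
    file that would otherwise compete with the new add-on."""
    src = os.path.join(makehuman_dir, "importers", "MHX", "blender25x",
                       "io_import_scene_mhx.py")
    addons = os.path.join(blender_dir, version, "scripts", "addons")
    shutil.copy(src, os.path.join(addons, "io_import_scene_mhx.py"))
    # Remove a compiled leftover that would shadow the new file.
    pyc = os.path.join(addons, "io_import_scene_mhx.pyc")
    if os.path.exists(pyc):
        os.remove(pyc)
```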

7.2 Enable add-on and import

To enable the MHX importer add-on, go to the File > User preferences menu. A
preferences window opens. Navigate to the Add-ons tab, and go to the
Import/Export section. Enable Import MakeHuman (.mhx) by checking the
checkbox in the upper-right corner. If you want to be able to import MHX files
every time you start Blender, press the Enable Add-On button at the bottom.

Figure 40: Enabling the MHX importer add-on

It is also a good idea to check the version information of the MHX importer.
The version that ships with the official Blender 2.55 beta is 1.0, and
your version should not be lower than that. However, this version of the MHX
importer does not work with Blender builds that are only slightly newer than
the official beta, due to a change in Blender's Python API.
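The version rule itself is simple and can be restated in a few lines of Python. This is only an illustration of the behaviour described here and under the Enforce version option in section 7.3, not the importer's actual code:

```python
def mhx_versions_compatible(importer_version, file_version, enforce=True):
    """An MHX file is usually incompatible with the importer if their
    version numbers differ.  By default a mismatch is an error; the
    'Enforce version' checkbox can override this at your own peril."""
    if importer_version == file_version:
        return True
    return not enforce
```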

Once the MHX importer has been enabled, you can import MHX files by going
to the File > Import > Import MHX menu. In the file selector that appears,
select the MHX file that we just exported from MakeHuman. Since we are using
version 2.55 of Blender, the file to load is alia-25.mhx.

Figure 41: File > Import > Import MHX

Figure 42: Select the correct MHX file and press Import

And after a while a rigged and dressed Alia appears in the viewport.

Figure 43: After a while the rigged character is loaded

The picture looks rather complicated, because several objects are loaded at the
same time. To have a clearer view, we select a single layer at a time.

Figure 44: The objects on the different layers

1. The high-poly character mesh.
2. The armature.
3. Low-poly proxy meshes.
4. A cage, intended for use with the mesh-deform modifier (experimental).
5. Clothes.
6. More clothes.
7. Reserved for even more clothes.
8. The last layer that is visible by default.

7.3 Import options


• Scale. MakeHuman uses decimeters internally, so scale 1.0 means that
1 b.u. = 1 dm. If your scene is made at another scale, the scale should
be changed accordingly, e.g. scale = 0.1 if your unit is meters, and
scale = 1/0.254 = 3.94 if it is inches. It is preferable to import with a
scale rather than importing at scale = 1 and then rescaling the character
in Blender, because the scale parameter affects settings that are not so
obvious, e.g. the SSS (subsurface scattering) scale, which needs to be adjusted
for plausible renders. The figure illustrates what happens if you rescale
the mesh in Blender without adjusting the SSS scale.

Figure 45: 1. Character imported with scale 0.1.
2. Character imported with scale 1.0, then scaled down in Blender.
3. Character imported with scale 1.0, then scaled up in Blender.
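The scale arithmetic is easy to get backwards, so here it is spelled out in Python. The unit table is my own illustration, not part of the importer:

```python
# Size of one scene unit in meters (1 inch = 0.0254 m).
UNIT_SIZE_M = {"decimeter": 0.1, "meter": 1.0, "inch": 0.0254}

def mhx_import_scale(scene_unit):
    """MakeHuman works in decimeters (scale 1.0 means 1 b.u. = 1 dm),
    so the import scale is one decimeter expressed in the scene unit."""
    return 0.1 / UNIT_SIZE_M[scene_unit]

print(mhx_import_scale("meter"))           # 0.1
print(round(mhx_import_scale("inch"), 2))  # 3.94
```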

• Enforce version. Both the MHX importer and MHX files have a version
number, whose main purpose is to keep them synchronized. An MHX file
is usually incompatible with the importer if their version numbers differ.
You can try to import the MHX file anyway, but it is not recommended.
A better alternative is to load the character into an updated version of
MakeHuman (you did save your character, right?) and re-export it to MHX
again.
By default it is an error to try to import an MHX file with a different ver-
sion number. You can override this default by unchecking this checkbox,
but you do so at your own peril.
• Proxies. As we saw in figure 44, low-poly versions of the character (zero or more,
depending on the settings in proxy.cfg) appeared on layer 3.
These proxy meshes could be useful for realtime game characters, or per-
haps for background characters in a movie. The storyboard in Figure 3
was made with proxy characters of the ascottk type (i.e. the default proxy
mesh was made by him). A tell-tale sign is the weird-looking eyebrows.
Evidently the texturing of proxy meshes does not work perfectly.
• Replace scene. Delete all meshes, armatures and empties presently in
the scene. Other objects, such as cameras and lamps, are not affected.
• Cage. With the right configuration in proxy.cfg, the MHX file can contain
a cage which encloses the mesh. This feature is intended to work with
Blender's mesh-deform modifier, and is further described in section 7.4.
• Clothes. As we saw in figure 44, Alia was dressed in the MHX file, provided
that the appropriate files were enabled in proxy.cfg. The clothes are very
simplistic and of a boring unisex type, but at least they allow you to put
your animations on youtube without being censored.
More significantly, the imported clothes can be a starting point for more
interesting garments. Whereas Hunter uses the default clothes with just
some added textures (and a new pair of shoes), and Thomas uses the default
clothes with a simple color change, Alia's clothes were edited in Blender.
In the illustrations in this text you will see Alia both wearing the edited
clothes and the default imported clothes (and bald!).
• Stretchy limbs. This is a feature most often seen in cartoony characters;
Elastigirl from The Incredibles is an extreme example. Since MakeHuman
characters are supposed to be realistic, this feature is disabled by default.

• Face shapes. The MHX mesh comes with shapekeys that can be used
to construct facial expressions and visemes, cf. the lipsync discussion in
section 16.2 below. If you know that your character will not need to
change facial expression, you can save some time and a lot of space by
disabling this feature.

• Body shapes. On several occasions I have tried to fix bad deformations
by making corrective shapekeys. This has never really worked out well,
especially not in the shoulder and groin regions. For now MHX files do
not contain any corrective shapes, so this option does nothing.
• Symmetric shapes. Many shapekeys are asymmetric and come in a left
and a right version. However, they are stored symmetrically and filtered
through the Left and Right vertex groups. For those who make shapekeys
for the MHX mesh (i.e. myself) it is useful to be able to import a single
symmetric shapekey to start with. If this option is checked, the character
is loaded e.g. with a single Smile shape, rather than with a Smile L and
a Smile R shape.

• Diamonds. The MakeHuman mesh has a number of little diamonds
which are used for placing joints. The animator is usually not interested
in seeing these diamonds, and therefore they are deleted by default when
the mesh is imported into Blender. The importer can easily recognize the
diamonds, because they consist of triangular faces, whereas the rest of the
MakeHuman mesh consists purely of quads. However, there are occasions when it is
necessary to include the diamonds during import, in order to maintain the
correct vertex numbers. Clothes and low-poly proxy meshes are defined
in terms of the vertices of the main mesh. The utility for making clothes
therefore only works if the main mesh is imported with the diamonds intact.

Figure 46: Joint diamonds used for placing joints.
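The recognition rule lends itself to a short sketch (an illustration of the principle, not the importer's actual code). Note that really deleting the triangles also renumbers the remaining vertices, which is exactly why the clothes utility needs the diamonds kept intact:

```python
def split_off_diamonds(faces):
    """Joint diamonds are the only triangular faces; the rest of the
    MakeHuman mesh consists purely of quads."""
    body = [f for f in faces if len(f) == 4]
    diamonds = [f for f in faces if len(f) == 3]
    return body, diamonds

# Two quads belonging to the body, two triangles belonging to diamonds.
faces = [(0, 1, 2, 3), (4, 5, 6), (3, 2, 7, 8), (5, 6, 9)]
body, diamonds = split_off_diamonds(faces)
print(len(body), len(diamonds))  # 2 2
```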

• Bend joints. In Blender, IK works best if the mesh is modelled with
slightly bent elbows and knees. However, the MakeHuman mesh was not
made in this way, and changing that is not an option. With this option
checked, the MHX importer bends the joints at load time, giving better
IK behavior.
Joint bending has not been tried for quite some time, and I doubt that it
still works. Joints should not be bent in a rig intended for use with
mocap, which is presently the case, so just leave this option unchecked.

7.4 Cage and mesh-deform

If MakeHuman is configured to export a cage and the Cage option is checked dur-
ing import, the character comes into Blender with a cage intended for use with
the mesh-deform modifier. This is an alternative method of skinning the mesh,
which gives very smooth deformations. Unfortunately the mesh-deform method
gives too smooth a deformation in areas where sharp creases are intended, which
is often the case in the armpits, elbows, knees and groin. Mesh-deform can be
combined with traditional skinning, which perhaps yields the best results. This
is still a very experimental feature, and it was not used in the creation of Don't
call me babe, but feel free to explore it if you are adventurous.

Figure 47: A caged character. Figure 48: Deformation could be better.

If the Cage option in the MHX importer was enabled, and the cage mesh was
enabled in the file proxy.cfg, things look a little different when the character has
been imported. Apart from the clothes and rig, the character is also surrounded
by a low-poly cage.

A character using a cage for deformation cannot be used out of the box. If we
nevertheless try to pose her as usual, only her arms move, as in figure 48. There
are now two modifiers: a mesh-deform modifier above an armature modifier.
The armature modifier is restricted by the vertex group Cage; lower weights
mean a smaller influence. Also note that the multi-modifier option is active.
This means that the armature modifier starts from the mesh state at the top of
the stack rather than from the state just above it.
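Conceptually, the multi-modifier setup amounts to a per-vertex blend between the two modifier outputs, weighted by the Cage vertex group. The following is a conceptual sketch of that blend, not Blender's actual implementation:

```python
def blend_multi_modifier(mesh_deform_out, armature_out, cage_weights):
    """Per-vertex blend: where the Cage weight is 1.0 the armature
    result wins, where it is 0.0 the mesh-deform result is kept."""
    blended = []
    for (xm, ym, zm), (xa, ya, za), w in zip(
            mesh_deform_out, armature_out, cage_weights):
        blended.append((xm + w * (xa - xm),
                        ym + w * (ya - ym),
                        zm + w * (za - zm)))
    return blended

out = blend_multi_modifier([(0, 0, 0)], [(1, 0, 0)], [0.25])
print(out)  # [(0.25, 0.0, 0.0)]
```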

Figure 49: Press Bind to bind the
mesh to the cage. Figure 50: The Cage vertex group.

The rest of the deformation should be handled by a mesh-deform modifier, but
first we must bind the mesh to the cage. To do so we simply press the Bind
button. Binding is a rather complex operation which takes several seconds to
complete. Other meshes using mesh-deform, in this case the sweater and the
jeans, must also be bound to the cage.

Figure 51: Deformation is still poor. Figure 52: Character peeks through her cage.

Now the character moves with the rig. Or at least part of the character, because
the deformation in figure 51 is still very poor. The reason is that the mesh-
deform modifier only works if the mesh is entirely enclosed by the cage; vertices
that peek out are not affected by the modifier. The cage was modelled to entirely
cover the default character, but its adaptation to other characters is not perfect.
In particular, Alia's prominent breasts differ considerably from the norm, and
hence it is not unexpected that problems arise in that area.

To fix these problems, the cage must be edited in Blender. However, before
we start to edit the cage, all meshes must be unbound from it; simply
press the Unbind button that has replaced the Bind button in the mesh-deform
modifier. We then edit the cage, and once it covers the entire mesh, we bind the
meshes to it again.

Unfortunately, it is still difficult to get a very good deformation with the Make-
Human mesh. The cage must everywhere surround the mesh, but at the same time it
must not self-intersect. It is difficult to satisfy both these conditions at the same
time, particularly in the groin area, because the left and right thighs are very
close, especially on the jeans. Perhaps one needs to deform the rest position
along the lines of section 12.4 below.

Figure 53: Cage edited to cover character entirely. Figure 54: Deformation is better, but not perfect.

8 The MHX rig

8.1 Bone layers

The MHX rig uses bone layers to group bones which belong together, to facilitate
the animator's work.

Figure 55: Layer 1-4

Figure 56: Layer 5-8

Figure 57: Layer 9-16

The bone layers are organized as follows.

1. The MasterFloor and Root (i.e. hip) bones.
2. The spine.
3. Arm IK.
4. Arm FK.
5. Leg IK.
6. Leg FK.
7. Finger control.
8. Individual fingers.
9. Face control.
10. Unused.
11. Head, jaw, eyes.
12. Unused.
13. Unused.
14. Unused.
15. Helper bones, not intended for the animator.
16. Deform bones, not intended for the animator.

8.2 Posing arms and legs

Figure 58: FK rig on layers 4 and 6 Figure 59: IK rig on layers 3 and 5

When the sliders above the face panel are dragged to the left, the arms and
legs are controlled by the FK (Forward Kinematics) rigs on layers 4 and 6,
respectively. We pose the bones by rotating the circles surrounding them
to the desired position. Since the FK bones can only be rotated, translation is
not a meaningful operation, and both the R key and the G key rotate the bones.

Forward Kinematics means that each bone is rotated into place, and all children
follow. This is very intuitive, and can lead to natural rotation arcs. Many people
use FK for limbs that swing freely, like the arms when walking. However, it is
difficult to use FK when the precise location of the chain endpoint matters. It
is important that the feet are planted exactly on the ground when walking, and
that the hands are at exactly the right positions when grabbing a rail.
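The FK principle is easy to demonstrate numerically. Below is a minimal planar two-bone sketch, my own illustration and unrelated to the MHX rig's internals: each bone's angle is relative to its parent, and every child follows its parent's rotation.

```python
import math

def fk_chain(lengths, angles):
    """Planar forward kinematics: accumulate each bone's rotation and
    return the joint positions, starting at the origin."""
    x = y = total = 0.0
    points = [(x, y)]
    for length, angle in zip(lengths, angles):
        total += angle                 # children inherit parent rotation
        x += length * math.cos(total)
        y += length * math.sin(total)
        points.append((x, y))
    return points

# Upper arm of length 3 raised 90 degrees, forearm of length 2 bent back 90.
pts = fk_chain([3.0, 2.0], [math.pi / 2, -math.pi / 2])
print([(round(px, 2), round(py, 2)) for px, py in pts])
# [(0.0, 0.0), (0.0, 3.0), (2.0, 3.0)]
```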

Inverse Kinematics (IK) was created to solve this problem. The IK rigs for the
arms and legs are found on layers 3 and 5, respectively. We pose an arm by
moving the hand into position. The upper and lower arms are then moved in
some way that Blender sees fit. To control the elbow location, we can move its
pole target, i.e. the little box connected to the elbow with a rubber band.
The leg IK rig is similar and is illustrated in the figure below.

Figure 60: Leg IK rig

Figure 61: Reverse foot IK rig

The IK foot rig deserves some attention. Known as the reverse foot setup, it
allows the foot to be rotated around three different pivot points. The ToeIK,
FootIK and LegIK bones rotate the foot around the toe tip, the ball and the
heel, respectively. The LegIK bone (the snowshoe shape) is also used to position
the foot.

8.3 FK/IK switch

Both FK and IK have their places, and sometimes one wants to use both in the
same action. Perhaps our character first walks, with leg IK and arm FK, then
picks up an object with arm IK on her right arm, and then starts to swim with
both leg and arm FK. The choice between FK and IK is controlled by the four
sliders above the face panel, as shown in figure 62. Since a slider is just a
bone in the armature, it can be keyed just like any other bone. This allows us to
dynamically switch between the FK and IK rigs in the same action.

Figure 62: FK/IK switch

8.4 The master bones

The root and the master bones are located on layer 1. The root, in the hip
area, is the parent of the entire rig. All other bones are children or grandchildren
of Root, except for the end effectors (HandIK and LegIK), the pole
targets (ElbowIK and KneeIK) and the face panel, which do not have parents.
All parent-less bones have a ChildOf constraint targeting the MasterFloor bone,
which is the big, round bone surrounding the character's feet. By moving
MasterFloor we can reposition the entire character. This is something that we will
use when stitching mocap actions together in the NLA editor.

Having the master bone on the floor is usually the best choice, but not always.
An acrobat swirling through the air normally rotates around his hips, and
Spiderman has his pivot point between his shoulders. For this reason there
are also MasterHips and MasterShoulder bones, and the parentless bones have
ChildOf constraints targeting them. They are normally hidden and the ChildOf
constraints have influence zero, but they can be unhidden and the influence can
be changed, should the need arise.

8.5 Fingers

The easiest way to control the fingers is with the control bones on layer 7. When
we rotate a finger control bone, each individual link rotates the same amount
around its local X axis. The first link can also be rotated around its Z axis to
some degree. This is the simplest way to control entire fingers, e.g. to create a
fist. For situations that require more control, the individual links are available
on layer 8. We can switch between the two modes of control (control bone and
individual links) with the sliders on both sides of the face panel in Figure 62.
There is one slider for each finger, to allow different fingers to be controlled
differently.

Figure 63: Finger rig on layers 7 and 8

8.6 Face

The expression on the character's face is controlled from the face panel on layer
9, cf. figure 62. The bones drive various shapekeys when they are moved along
their local X and Z axes. The area affected by each slider should be intuitive,
and at any rate it is easy to try them out.

Animation of facial expressions will be discussed further in Section 16.2, which
deals with the lipsync tool.

8.7 Head

On layer 11 we find bones to pose the head area: Neck, Head, Jaw, Tongue,
and Gaze. The jaw bone opens the mouth, just as one of the shapekeys does. The
difference is that the shapekey interpolates vertices linearly, whereas the bone
rotates them.

Three tongue bones control the movement of the tongue. To make the character
stick out her tongue, scale these bones along their local Y axes.

The eyes stare at the Gaze L and Gaze R bones, respectively. These bones
are parented to a single Gaze bone which controls where the eyes are looking.

8.8 Deform bones

All deform bones exist on layer 16. They should normally not be seen by the
animator, but are necessary when doing bone weighting. Since the MHX mesh
is already weighted, you usually don’t need to access these bones. However, if
you want to modify the weights you find the deform bones here.

8.9 Helper bones

The helper bones on layer 15 are important for the rig's inner workings, but
the animator does not need to see them. However, I want to point out some
peculiar helper bones: end effectors and pole targets for the FK rig. The bones
LegIK, ElbowIK and KneeIK have the FK counterparts LegFK, ElbowFK and
KneeFK. These bones have no function when posing the character, but are used
by the mocap tool to transfer mocap animations from the FK rig to the IK rig.

9 Hair

Hair import is currently broken. There seem to be serious problems with
Blender's hair system. In the version that I am currently working with, it is
not even possible to edit hair in the viewport; there is some message about the
point cache. This section thus describes hair import as it should be and as it
once was.

Hair import is implemented as a Blender add-on, but it is not yet distributed
with Blender. The file import_hair_obj.py must hence first be copied to
Blender's addons folder as described in section 7.1, and the add-on must then
be enabled in analogy with section 7.2.

Now select the character that should receive hair, and go to File > Import
> MakeHuman hair (.obj). In the file selector that appears, navigate to the
obj file that was exported from MakeHuman in Figure 37. It is located in
MakeHuman's export folder and is called hair-alia.obj.

Figure 64: Import hair, currently with some problems

Alia now has hair, but not quite in the place we intended. I believe that this
can be fixed once Blender’s particle system has become stable, but for the time
being we must model the hair ourselves in Blender.

Figure 65: Imported hair from the good old days

The import script will complain if the obj file contains a mesh (in that case hair
was probably exported as a mesh from MakeHuman). Hair created for another
character can be imported, but the result is unpredictable, especially if the
characters differ much in height.

10 Tweaking the characters

MHX import brings a rigged and dressed character into Blender. However,
all characters are bald and dressed in the same boring, indistinct clothes.
This could be enough if we wanted to animate an army of bald soldiers, but
usually we want to tweak the characters' appearance a little.

10.1 Alia

10.1.1 Hair

Since the hair importer is broken, we have to create hair directly in Blender.
Fortunately it is quite easy to make some hair that looks fairly good.

Alia's hair grows on her scalp. Therefore we create a Scalp vertex group and
assign vertices to it as shown in figure 66. Next go to the particles tab, create
a particle system and change the type from Emitter to Hair. The following
hair settings were used for Alia's hair.

Figure 66: Scalp vertex group.

Figure 67: Hair settings.

• Type: Hair. An emitter emits point-like particles, whereas hair emits
one-dimensional trajectories, which become hair strands.

• Emission: Emit 100 hairs from the mesh's faces. The amount can be
kept quite low, because we use child particles to fill up the skull.

• Render: The render type is set to Path, and the material number is set
to 3, which is the number of the hair material.

• Children: Each hair is rendered with 100 children scattered around it;
for speed, only 10 children are displayed in the viewport. Radius is quite a
handy parameter: it spreads the children around their parent, making
the hair more unkempt but also reducing the risk that the skull shines
through.

• Vertex groups: When Density is set to Scalp, hair only grows from the
scalp rather than from the whole body.

There are many other parameters that can be used to control the hair’s appear-
ance. Many of these are discussed in [3, 5].

Figure 68: Wild hairdo. Figure 69: Edited hair.

This leaves Alia with the rather wild hairdo in Figure 68. We now switch from
Object mode to Particle mode and give Alia a better hairstyle. I use the Comb,
Add, and Cut brushes most. Even though the Scalp group has higher weights
at the top of the skull, Blender tends to grow more hair further down, making
the character look like a monk. Adding more hairs at the top saves Alia from
bald spots. When we are happy with the hairstyle, we toggle back into Object
mode and the result looks something like Figure 69.

Figure 70: Comb hair in particle mode

10.1.2 Blouse and trousers

Alia's clothing was modelled using her default sweater and jeans as a starting
point. The modifications of the mesh and the materials were straightforward.
The only remarkable point concerns the bone weights. The clothes are rigged
meshes with vertex groups corresponding to the deform bones. Deleting vertices
from the mesh is no problem; information about the vertex group weights is
stored inside each vertex. However, if vertices are added to the mesh, we must
ensure that they have the right vertex weights. We must also check that vertices
which are moved still deform well when the rig is posed.

10.1.3 Masking

The test render in Figure 71 reveals a problem: parts of Alia's body are visible
through her clothes.

Figure 71: Alia with clothes on. Some body parts evidently peek through.

There are a number of methods to avoid this problem:

1. Delete vertices and faces that are entirely hidden under the clothes. This
would not work if the character is wearing semi-transparent clothes, but
a decent girl like Alia has no such garments. However, we must take care
not to delete too many faces close to the end of the clothes, lest the arms
and legs look detached from the rest of the body. Note that the limbs
should look attached in all relevant poses and from all relevant camera
angles.

2. Enlarge the clothes so they cover the character entirely, as we did with
the mesh-deform cage in section 7.4. Again, it is not enough just to cover
the body in the rest pose.
3. Paint a mask that makes the skin material invisible beneath the clothes.

4. All of the above.

Figure 72: The body becomes invisible where the mask is black.

For Alia a combination of items 2 and 3 was used. First I used Blender's
texture painting mode to paint a texture that is white where the skin is visible
and black where it is not. The tricky part is deciding where to draw the line. The
mask texture was improved in Gimp, because in the 3D viewport it is difficult to
cover everything, e.g. between the toes. Then a new texture was added to the
Skin-SSS material to make the skin invisible beneath the clothes. Here are the
settings for the mask texture.

Figure 73: Texture settings for the mask texture
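In effect, the mask acts as a per-pixel multiplier on the skin's visibility: black hides, white shows, grey is in between. A toy sketch of the idea (not how Blender evaluates texture influence internally):

```python
def apply_mask(skin_alpha, mask):
    """Per-pixel visibility: black mask pixels (0.0) hide the skin under
    the clothes, white ones (1.0) leave it fully visible."""
    return [a * m for a, m in zip(skin_alpha, mask)]

print(apply_mask([1.0, 1.0, 1.0], [1.0, 0.0, 0.5]))  # [1.0, 0.0, 0.5]
```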

10.1.4 Painting garment

One way to avoid masking problems altogether is to paint the clothes directly
onto the texture. This works reasonably well for clothes that fit very snugly to
the body. Here is an example with Alia in undergarments.

Figure 74: Alia with undergarments painted directly onto the texture. Figure 75: Knickers texture. Parts not covered are transparent.

The underwear textures come from an old project [2], but they could be reused
because all MakeHuman characters have the same UV layout. Actually, the
UV layout recently changed, which means that the clothes textures have to
be repainted if they are to be used in future projects. But the new textures
can then be used until the UV layout changes next time, which hopefully won't
happen in the near future.

Here are the settings for the knickers texture.

Figure 76: Texture settings for the knickers texture

One problem with painting textures is that we are still using the skin material
settings. In particular, painted clothes have the same amount of subsurface
scattering as the skin itself. Another problem is of course that painted clothes
have no thickness; they are just skin with another color.

10.2 Hunter

10.2.1 Shoes

A freshly imported character has some clothing but is barefoot. I therefore
had to hand-model some simple shoes for him. The shoes were textured and
parented to the armature with automatic weights, which were then tweaked in
weight-painting mode.

Figure 77: Hunter with and without clothes

10.2.2 Masking

Hiding Hunter's body under his clothes was done slightly differently from Alia.
Hidden vertices were deleted, except near the neck and hand areas. Since
Hunter's clothes cover him almost entirely, the problem that too many faces
close to the clothes boundaries may be deleted is less severe. To further control
visibility close to the boundaries, a mask was painted just as for Alia.

Figure 78: Hunter also has a mask. Figure 79: Posing and shapekeys still work after deleting verts.

Shapekeys are stored inside the mesh vertices just as vertex groups are. There-
fore posing and shapekeys still work after parts of the mesh have been deleted.
However, be careful if you intend to add new vertices to a mesh with shapekeys.

10.2.3 Splitting the mesh

At the end of the movie, Alia kicks off Hunter's head. The head flies away and
lands on the street, whereas the body keeps stumbling around. To implement
this we made two new copies of Hunter's mesh and deleted everything except the
head and the body (or what little was left of it below the clothes), respectively.
Together with the sweater, jeans and shoes, the head and body were added to
a new group, called HunterSplit. This group was linked into the scene where
the head is kicked off, instead of the Hunter group. Note that the clothing thus
belongs to two different groups.

Figure 80: A copy of Hunter

Once Hunter has been decapitated, the viewer can see the inside of his neck.
New faces were added to block this view. As mentioned in the previous section,
adding vertices to a mesh with shapekeys is tricky; vertices can only be safely
deleted. Fortunately, all shapekeys are localized to the head. After all shapekeys
for the body were deleted, which could be done without detrimental effects, the
body mesh could be edited.

All shapekeys and vertex groups were removed from the head, too. The head
does not need to show any emotion; it has been kicked off, remember.

Figure 81: New faces in the neck

10.2.4 A hair problem

At one point during the production Hunter grew a nice beard, and I rendered some
shots with it. Fortunately the images were only rendered with OpenGL, so
they did not represent a large time investment. When the head vertices were
then deleted from the body mesh, hair started to grow out of the hands. The
explanation is presumably that hair is stored in an external table indexed by
vertex numbers, unlike vertex groups and shapekeys, which are stored inside the
mesh vertices themselves. The conclusion is that you cannot edit a mesh with
hair. Since I did not wish to grow a beard on both Hunter and his separated
head, and to struggle with making the beards look identical, I decided that a
clean-shaven Hunter would look just as good. Or bad.
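The failure mode is easy to reproduce with plain Python lists: an external table of vertex indices silently goes stale when vertices are deleted and the survivors are renumbered. The names below are made up purely for illustration:

```python
def delete_vertices(verts, doomed):
    """Remove vertices by index; the survivors are renumbered, just as
    when vertices are deleted in Blender's edit mode."""
    return [v for i, v in enumerate(verts) if i not in doomed]

verts = ["head0", "head1", "hand0", "hand1"]
hair_roots = [0, 1]                      # external table: hair on the head
verts = delete_vertices(verts, {0, 1})   # delete the head vertices
print([verts[i] for i in hair_roots])    # ['hand0', 'hand1'] -- beard on the hands
```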

10.3 Thomas

Figure 82: Thomas mesh, with and without clothes.

The third character plays only a small part in the story; he does the Macarena
dance at the end, and also stands on the hotel balcony and overlooks the fight
between Alia and Hunter. Like Hunter, most of his body is covered by the
clothes and was removed. Incidentally, this character is modelled to look
somewhat like the present author.

Figure 83: Rendered version with some three-point lighting

11 Motion capture

Motion capture (mocap) is the process of recording the movements of a real
person and transferring them to an animated character. To record the action,
the actor is equipped with a special suit with optical or magnetic markers. The
markers are then filmed with many cameras from different angles, and from this
information the location of the markers can be reconstructed.

The movement can be stored in various file formats, e.g.

• c3d. This is a binary format which stores primary marker data, i.e. the
locations of the point cloud. See http://www.c3d.org/.

• vsk/v. The Vicon .v format is a binary format which stores the movement
as joint angles in a skeleton, cf.
http://mocap.cs.cmu.edu/mocap/ViconVFileFormat.pdf.

• asf/amc. This is an ASCII file format invented by the now defunct game
company Acclaim. It stores the movement as a skeleton and bone angles,
cf.
http://www.cs.wisc.edu/graphics/Courses/cs-838-1999/Jeff/ASF-AMC.html.

• bvh. This is also an ASCII format which stores a skeleton and joint angles.
This format is widely spread and can be read by almost every 3D package.
It is also the only format that MakeHuman's mocap tool understands.
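Since BVH is plain ASCII, it is easy to inspect programmatically. The toy parser below reads only joint names and channel counts from the HIERARCHY section; a real BVH reader must of course also handle offsets, end sites and the MOTION data.

```python
def parse_bvh_joints(text):
    """Extract joint names and channel counts from the HIERARCHY
    section of a BVH file (a tiny subset of the format)."""
    joints = {}
    current = None
    for line in text.splitlines():
        words = line.split()
        if not words:
            continue
        if words[0] in ("ROOT", "JOINT"):
            current = words[1]
        elif words[0] == "CHANNELS" and current is not None:
            joints[current] = int(words[1])
        elif words[0] == "MOTION":
            break
    return joints

sample = """HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Chest
    {
        OFFSET 0.0 1.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
    }
}
MOTION
"""
print(parse_bvh_joints(sample))  # {'Hips': 6, 'Chest': 3}
```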

11.1 Finding BVH files.

Mocap files can be bought from many commercial sources, but a large range of
mocap files are also available for free download.

The motions in Don't call me babe were constructed exclusively with mocap files
from the CMU Graphics Lab Motion Capture Database, hosted at Carnegie
Mellon University. It has a huge library of mocap files which can be downloaded
for free. The web address is http://mocap.cs.cmu.edu.

CMU hosts mocap files in three formats: tvd, c3d and amc. However, the
mocap tool can only read BVH files, so none of these formats can be used directly.
Fortunately, B. Hahne at cgspeed.com has converted the CMU files to BVH.
The converted files are located at
http://sites.google.com/a/cgspeed.com/cgspeed/motion-capture.

Another great source of free mocap files is the Advanced Computing Center for
the Arts and Design (ACCAD) at the Ohio State University. BVH files can be
downloaded from
http://accad.osu.edu/research/mocap/mocap_data.htm

mocapdata.com is a Japanese company that sells mocap data commercially, but
they also offer a huge number of motions for free. According to their homepage,
mocapdata.com provides 744 premium motion data and 4197 free motion data.
The only catch is that downloading requires registration. Not surprisingly, the
homepage of mocapdata.com has the address http://www.mocapdata.com/.

Free mocap data can also be found at the Trailer's Park,
http://www.thetrailerspark.com. This site does not offer original data, but
offers repacks of mocap data from other free sites for download. Free download
is limited to some five packs per day, so some patience is required here.

12 The Mocap tool

The mocap tool is implemented as a Blender add-on, but it is not yet dis-
tributed with Blender. The file space_view3d_mhx_mocap.py must hence first
be copied to Blender's addons folder as described in section 7.1, and the add-on
must then be enabled in analogy with section 7.2.

12.1 Load and retarget

Figure 84: Mocap tool is visible when the rig is active.

The mocap tool appears in the user interface panel (N-key) as soon as an MHX
rig is active. In fact, it appears even if some other kind of armature is selected,
but it only works properly with MHX rigs. A new layout now opens with many
options, as shown in Figure 84.

Figure 85: Select an ACCAD BVH file.

As a first example, we will load a BVH file from ACCAD. Their basic unit is
meters. Since the base unit of MakeHuman is decimeters, the scale should be
set to 0.1. Now press the "Load, retarget, simplify" button. A file selector
appears. Select the BVH file to load and press the button at the upper right.
After a little while, the animation has been loaded onto the character.

Figure 86: The animation has been loaded

The next figure shows the same character at different frames, without and with
the rig visible.

Figure 87: Alia’s motion at different keyframes

Next we load an animation from CMU; more precisely, the Motionbuilder-
friendly conversion from cgspeed.com. According to the documentation at the
CMU web site, their mocap data comes with a mysterious factor 0.45. Being
recorded in an American setting, this is relative to inches. Since one inch is
0.254 decimeters, the scale should be chosen as 0.254/0.45 ≈ 0.6. We have
found that a scale slightly larger than 0.6 works well.

Figure 88: Options for loading CMU-Motionbuilder-friendly files

Now navigate to a directory with Motionbuilder-friendly files and press the
"Load, retarget, simplify" button.

Figure 89: Select a CMU file

After a while, the animation should be loaded onto the MHX rig.

Figure 90: CMU animation has been loaded

Figure 91: Settings for mocapdata.com

As a final example, we wish to load an animation from mocapdata.com. This
is a Japanese site, which uses inches as its basic unit. Set Scale to 0.254, press
the "Load, retarget, simplify" button, and navigate to the file location.

Figure 92: Select a BVH file from mocapdata.com

Alia has now learned how to make a karate kick.

Figure 93: Alia has learned karate.

12.2 Mocap options

Let us describe the options in Figure 84 in more detail.

12.2.1

• Initialize. The settings in the mocap panel can occasionally be lost; all
are set to zero, including the Scale. The reason for this is unknown, but
if it happens the settings can be reinitialized with this button.
• Save defaults. Saves the current settings in a file. The default settings
are used when the tool is started in a new Blender session, and every time
it is reinitialized.
• Angle FK → IK. Transfer bone angles from the FK rig to the IK rig.
The mocap tool loads a movement to the FK bones, and then transfers
it to the IK bones. If you modify the F-curves for the FK rig afterwards,
you can copy the changes to the IK rig with this button.

12.2.2 Load section

• Scale. The Blender scene has a basic unit (1 b.u. = 1 dm if we imported
the character with scale 1), and the BVH file has another. To make things
match, the scale parameter should be set to the ratio of the two scales.
A simple way to determine the correct scale is described in section 12.4.
Load the BVH file with scale 1.0, rescale the loaded skeleton to match the
MHX rig, and take note of the scale factor.

The scale only affects the overall location of the Root bone, and not the
joint angles. It can be deliberately set to incorrect values to obtain cer-
tain artistic effects; e.g., an overly large scale in a run animation makes
the character take giant leaps. The same effect can be achieved by rescaling
the location F-curves in the F-curve editor afterwards.
• Start frame. All frames in the BVH file before the start frame are
ignored. Use this if you don't need the beginning of the animation. By
default the start frame is 1, so all frames are loaded.

• Last frame. All frames in the BVH file after the last frame are ignored.
Use this if you don't need the end of the animation. By default the last
frame is 32000, so all frames are loaded for any reasonably long animation.
• Subsample. Only every n:th frame in the BVH file is loaded, where n
is the value of the subsample parameter. All other frames are ignored.
There are several uses of the subsample parameter.
Some data sets are recorded at a different speed than our target. To correct
for this we set subsample to the ratio of the recorded and target speeds.
E.g., CMU data were recorded at 120 fps but the movie is rendered in 24
fps; subsample should then be set to 120/24 = 5.
Subsample can be used to achieve artistic effects, like slow motion. The
soccer kick in the tutorial [2] is a CMU file and thus recorded in 120 fps,
but Subsample = 1 (more precisely, the subsample parameter did not exist
when that animation was made).
By using a too high subsampling and then rescaling the F-curves after-
wards, we can reduce loading time dramatically and also reduce high-
frequency oscillations in the motions.
Loading large BVH files can take considerable time; I estimate that the
loading time grows quadratically with the file size. Loading the file at
subsample 5 followed by a rescaling with a factor 5 would then reduce
loading time by a factor 5² = 25. Another way to reduce loading time is
to prune the animation in the beginning or end with the Start frame and
End frame parameters.
• Use default subsample. If this option is checked, the value of the
Subsample parameter is ignored and replaced by the most fitting value,
derived from the frame-rate information stored in the BVH file. This
option is on by default.
• Rotate 90 deg. Most 3D applications (to my knowledge, all major apps
except Blender) use the convention that the Y axis points up, but in
Blender the Z axis points up instead. If this option is on, the coordinate
system in the loaded animation is changed to suit Blender’s convention.
This option is on by default, and is best left that way.
• Simplify FCurves. If this option is checked, automatic F-curve simpli-
fication is applied to loaded motions. See section 12.2.5 for details.
• Apply found fixes. The retarget algorithm only works well for similar-
looking rigs, but the rigs in different BVH files are different. If this option

is checked, the mocap tool attempts to correct for this. It is on by
default, and it seems to work quite well. In section 12.3 we describe the
problem in detail, and in section 12.4 we discuss the steps that you have
to go through if you choose not to apply found fixes.
• Load BVH files (.bvh). This button loads a BVH file without actually
retargeting the movement to the MHX rig. This is useful for evaluating
actions and for trouble-shooting. The names of the BVH rig and its action
are derived from the BVH filename.
• Retarget selected to MHX. Once a BVH skeleton has been imported,
its action can be retargeted to the MHX rig. First select the BVH rig,
then shift-select the MHX rig to make it active, and then press this button.
The MHX rig should now perform the same animation as the BVH rig.
We can retarget the actions from several BVH rigs to the MHX rig at
once. First select all the BVH rigs, and then shift-select the MHX rig to
make it active. If you look in the action editor, you see that several new
actions have been created, although only one can of course be active at
a time. The actions are named from the character’s name and the BVH
filename for easy reference.
• Load, retarget, simplify. This button performs the combined tasks
of loading a BVH file, retargeting to the MHX rig, and simplifying the
F-curves. It leaves no trace of the BVH skeleton or its action. Normally
this is the button that I use most.
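The Scale and Subsample choices described above boil down to two ratios. The following sketch collects the arithmetic in one place; the helper functions are my own illustration, not part of the mocap tool, and the unit factors are taken from the text.

```python
def mocap_scale(dm_per_bvh_unit, data_factor=1.0):
    """Blender units (1 b.u. = 1 dm) per BVH file unit.

    data_factor covers oddities like CMU's mysterious factor 0.45.
    """
    return dm_per_bvh_unit / data_factor

def subsample_rate(recorded_fps, target_fps):
    """Load every n-th frame to match the render frame rate."""
    return max(1, round(recorded_fps / target_fps))

# CMU Motionbuilder-friendly files: inches (0.254 dm) with a factor 0.45.
cmu_scale = mocap_scale(0.254, 0.45)   # about 0.56, rounded up to 0.6 in the text
# CMU data were recorded at 120 fps; the movie is rendered at 24 fps.
cmu_step = subsample_rate(120, 24)     # 5
```

The same helper reproduces the mocapdata.com value: `mocap_scale(0.254)` gives 0.254, matching the setting used above.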

12.2.3 Toggle section

The bones in the MHX rig cannot be positioned arbitrarily, because there are
restrictions in how much human joints can be bent. These restrictions are im-
plemented as IK limits and various Limit constraints (Limit location, Limit
rotation, Limit scale). However, the allowed limits differ between different in-
dividuals, and the values set in the MHX rig may be imperfect. Therefore the
mocap tool offers the possibility to quickly turn limits on and off.

Limits are more relevant for hand animation. Since mocap is performed by a
human actor, all joints should bend within reasonable limits, and it makes sense
to turn the limits off. Anyway, if your loaded animation has problems, try these
buttons. They may solve your problems immediately.

• Toggle pole targets. The pole targets are used to position the elbows
and knees in the IK rigs. The mocap tool creates F-curves for the ElbowIK
and KneeIK bones, but sometimes the animation becomes better if the
rotation of the arm and leg bones in the IK chain is used instead. This
button toggles pole targets on and off.
• Toggle IK limits. This button toggles IK limits on and off.
• Toggle Limit constraints. Limit location, limit rotation and limit scale
constraints are toggled on or off.

Figure 94: The effect of toggling pole targets, IK limits and Limit constraints
off. Note that the latter also affects the FK arms.

12.2.4 Plant section

As discussed in section 13.1 below, sliding feet is a common problem with mocap
animations. This group of tools are designed to assist in the removal of unwanted
sliding.

• Loc. Location F-curves are planted. This is on by default.

• Rot. Rotation F-curves are planted. This is off by default.

• Use current.

• Plant. Press this button to plant the F-curves of the active bone. This
means that all keyframe points between the selected markers are replaced
by their average value.

12.2.5 Simplify section

• Max loc error. Maximal error in Blender units allowed in location
F-curves.

• Max rot error. Maximal error in degrees allowed in rotation F-curves.

• Simplify FCurves. Press this button to simplify the F-curves of selected
bones.

Provided that the Simplify FCurves checkbox is enabled, a simplification pass
is also included when the Load, Retarget, Simplify button is pressed, using the
allowed errors in this section. So what is the optimal amount of simplification?
Too little can lead to oscillations and large file size, but too much leads to
sliding feet and indistinct movements. Experimentation is called for.

Figure 95:
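The tool's own simplification algorithm is not spelled out here, but this kind of keyframe reduction is commonly done with a Ramer-Douglas-Peucker scheme, sketched below as my own illustration: keep only the keyframe points needed so that the simplified curve stays within the allowed error of every original sample.

```python
def simplify(points, max_err):
    """points: list of (frame, value) pairs; returns a reduced list."""
    if len(points) <= 2:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    # Find the interior point farthest from the chord between the endpoints.
    worst, dist = 0, 0.0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        chord_y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        d = abs(y - chord_y)
        if d > dist:
            worst, dist = i, d
    if dist <= max_err:
        # Everything between the endpoints is within tolerance: drop it.
        return [points[0], points[-1]]
    # Otherwise keep the worst point and recurse on both halves.
    left = simplify(points[:worst + 1], max_err)
    right = simplify(points[worst:], max_err)
    return left[:-1] + right
```

With a small max_err a sharp spike survives while a straight run collapses to its two endpoints, which matches the trade-off described above: larger errors mean fewer keyframes but blunter motion.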

12.2.6 Batch conversion section

The tools in this section make a batch conversion of all BVH files with a given
prefix in a given directory, creating separate actions for each file. This could
be useful to create a library of mocap actions targeted for a specific character.
However, the usefulness of this feature is limited by the fact that the mocap
tool slows down considerably when many actions are converted.

• Directory. The directory where the BVH files are located.

• Prefix. Only convert files with filenames that start with this. The prefix
is also removed from the filename when the action name is constructed.
Since the size of action names in Blender is limited to some 20 characters,
we have a problem if the name of the mocap files is longer than that. By
removing the prefix, non-descriptive parts in the beginning of the filename
(like “Female1”) don’t take up valuable space in the action name.
• Batch run. Do the batch conversion and have a cup of coffee.

12.2.7 Manage actions section

In hand animation the creation of an action takes a major effort. To prevent
you from losing this work by mistake, Blender makes it very difficult to actually
delete an action. With mocap files the situation is different. You often load
and evaluate many actions, only to decide that they were not quite what you
wanted. This process leaves us with a pile of discarded actions, which are quite
difficult to remove from the blend file. Even if we quit and restart Blender, the
unused actions are still there.

To assist in the purging of unwanted actions, the mocap tool has some tools to
manage, and primarily delete, unwanted actions.

• Actions. A dynamic list of available actions.

• Select action. Make the current action in the action list above the action
of the active rig.

• Really delete action. A precaution to prevent you from deleting actions
by mistake.

• Delete action. Delete the current action in the action list above. Blender
will refuse to delete it if it still has real users. Fake users are removed,
however.

12.3 Retargeting from different-looking rigs

Figure 96: MHX rig

The MHX rig is shown in Figure 96. The character is standing in a T-pose,
with arms extended straight out, and the legs pointing straight down. This is
the target for the movements of the various BVH rigs.

Figure 97: ACCAD female and male rigs

The female rig from ACCAD looks quite similar to the MHX rig, and retargeting
is straightforward. In contrast, the male rig is just weird, and it cannot be used
with the mocap tool. Strangely enough, animations with the male rig look fine;
the weird rest pose is compensated by equally weird F-curves.

Figure 98: CMU motionbuilder-friendly rig and mocapdata.com rig.

The CMU rig differs from the MHX rig in that the legs are spread more widely.
The upper legs are rotated some 22.5 degrees outwards. The difference between
the MHX and mocapdata rigs is the arms. The mocapdata character's arms
are rotated 90 degrees down, and then rotated 90 degrees around their own axes.

If we were to naïvely retarget from the CMU rig to the MHX rig, the difference
in leg rotation would lead to problems like the one shown to the right in Figure
99. Fortunately, the mocap tool recognizes the CMU rig and knows how to
compensate for the different rest poses. If the animation is loaded with the Ap-
ply Found Fixes checkbox selected, the result looks as in the left figure instead.
Apply found fixes is on by default.

The mocap tool can also load data from mocapdata.com correctly.
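As a simplified illustration of what such a rest-pose fix involves (this is my own sketch, not the tool's actual algorithm, which is more involved): if a BVH bone's rest orientation differs from the MHX bone's by a fixed offset rotation R, a recorded local rotation Q can be re-expressed in the MHX bone's frame by conjugation, R⁻¹QR. Pure-Python quaternion helpers are used here because mathutils is only available inside Blender.

```python
import math

def quat_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    """Inverse of a unit quaternion."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def axis_angle(axis, deg):
    """Unit quaternion for a rotation of deg degrees about a unit axis."""
    h = math.radians(deg) / 2.0
    s = math.sin(h)
    return (math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s)

def compensate(q_bvh, q_offset):
    """Re-express a recorded rotation relative to an offset rest rotation."""
    return quat_mul(quat_conj(q_offset), quat_mul(q_bvh, q_offset))
```

For the CMU legs, q_offset would be something like a 22.5-degree rotation about the bone's twist axis; for a rotation about that same axis the conjugation changes nothing, which is why only the off-axis components of the motion need correcting.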

Figure 99: Fixes.

12.4 Preparing the armature

The Apply Found Fixes checkbox is quite new, and had not yet been imple-
mented when I made Don’t call me babe. Since I exclusively used MotionBuilder-
friendly BVH files from cgspeed.com, the rest pose of the MHX rig had to be
changed to fit the CMU armature.

A disadvantage with this approach is that you remain restricted to animations
from a single source; animations from ACCAD or mocapdata.com cannot be
loaded onto a MotionBuilder-friendly character. Alternatively, one could try the
Daz-friendly conversion of the CMU dataset. Initially I did this, but I found that
the Daz conversion had problems not present in the MotionBuilder conversion.

With the appearance of the Apply Found Fixes button, the content of this
section is somewhat obsolete, but it may be of interest anyway. Also, if you use
mocap data from a source not recognized by the mocap tool, this may remain
your only option.

Figure 100: Load BVH file with scale 1.0, and rescale the rig object in rest pose.

To change the rest pose, follow the instructions in the figure captions.

Figure 101: Pose the rig so it matches the loaded skeleton.

Figure 102: Don't forget the IK rig.

Figure 103: Apply the armature modifier as a shape.

Figure 104: Turn up the value of the Armature shape to 1.0.

Figure 105: Also apply the armature modifier to the jeans. Since the jeans don’t
have shapekeys, there is no need to apply it as a shape.

Figure 106: Apply pose as rest pose. Don't forget to move all sliders back first.

Figure 107: Finally add an armature modifier to the mesh and jeans.

13 Cleaning mocap data

It is very easy to load an animation with the mocap tool. However, the an-
imation is usually not exactly the way we want it. There may be animation
flaws due to errors in the recording process (missed markers) or during retar-
geting. Or perhaps we want to cut out portions of different animations and
stitch them together. In this section we address the first problem, and in the
next we describe how to combine animations in the NLA editor.

Figure 108: The animation screen.

Animation cleaning is done in the animation screen. On the top we have the
Dopesheet/Action editor. A dopesheet gives us an overview of the animation
of several objects at once. This is the default, but we want to use the Action
editor instead. To this end, simply change Dopesheet to Action editor at the
bottom of this window.

The F-curve editor below the Action editor allows us to modify individual F-
curves. This is the best place for tweaking mocap data. At first it may look very
confusing with tons of F-curves, but the visibility of F-curves can be toggled
off. In figure 108 only the F-curves for the UpArmFK_L bone are visible.

The timeline at the bottom has a yellow line at each frame with a set key. Notice
that keys are not set for all frames. A BVH file has a key on each frame, but
the mocap tool removes keys if the simplify option is selected.

13.1 Sliding feet

Figure 109: The left foot is sliding forward when it should be planted. Note
that the cursor is at the same position at the two frames.

One common problem with mocap data is that the feet often do not remain
fixed in space while planted on the ground. This is commonly known as sliding
or skating, and is a sign of bad mocap cleaning. Sliding can be seen in several
places in Don't call me babe. We see an example in Figure 109, where the left
foot hits the ground at frame 72 and leaves at frame 85. The heel should remain
at the same position relative to the 3D cursor, but it has slid forward quite a
bit.

Sliding manifests itself in the F-curve editor. The location of the LegIK bones
should remain constant during impact. Since the LegIK Y axis points forward,
the Y F-curve for this bone (the cyan curve in Figure 110) has a characteristic
staircase shape. If the individual steps are not flat but lean upwards, the
character's feet are sliding.

Figure 110: The F-curves should be flat between the selected markers.

The mocap tool has a means to automatically plant feet, and more generally
any bone.

1. Select the bone(s) that you want to plant, and make sure no other bones
are selected.
2. Create two markers on the timeline at the beginning and the end of the
bone’s impact period. Select both markers.
3. Press the Plant button on the Mocap tool.

The F-curve between the markers is now flat, because the keyframe points have
been replaced by their average value.
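The planting operation itself is simple to express. Here is a sketch over plain (frame, value) pairs; the real tool works on Blender F-curves through bpy.

```python
def plant(keyframes, start, end):
    """Replace all values between the start and end markers by their average."""
    inside = [v for f, v in keyframes if start <= f <= end]
    avg = sum(inside) / len(inside)
    return [(f, avg if start <= f <= end else v) for f, v in keyframes]
```

Keyframe points outside the marker range are untouched, so the F-curve leads smoothly into and out of the flat region.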

Figure 111: F-curves have been planted.

It is possible that the average position is not a good one, because the leg has
to stretch too much forward at impact, or too much back during take-off. This
can be fixed by moving the keypoints vertically. Move the LegIK bone to a
good position and set a location keyframe. The new keyframe is offset vertically
from the others.

Figure 112: Set a location key

Now select the other keyframes between the markers, and move them vertically
to the same position. It may also be a good idea to round off the F-curves at
the beginning and end of the flat regions, to make the animation smoother.

Figure 113: Keyframe points moved.

Figure 114: The foot remains fixed after planting.

13.2 Oscillations and spikes

Sometimes a bone oscillates rapidly from one frame to another, as can be
seen (although not very clearly) in figure 115. This problem manifests itself as
high-frequency oscillations in the corresponding F-curves, and it can be easily
fixed by removing the offending keyframe points in the F-curve editor. The
animation will now be smooth because Blender automatically takes care of the
interpolation.

Figure 115: Rapid oscillations of the foot angle

Figure 116: Delete offending keyframe points in F-curve

A related problem is that there can be spikes where a bone jumps to a completely
different position, caused by data loss during the recording. I have not seen
much of this phenomenon in the high-quality BVH files available from the sites
mentioned above, but if it happens one can deal with it in a similar fashion:
remove the problematic keyframe points in the F-curve editor.
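Such spike removal can also be automated. The sketch below is my own illustration rather than a tool feature: drop any keyframe point that jumps away from both of its neighbours by more than a threshold, and let Blender's interpolation bridge the gap.

```python
def remove_spikes(keyframes, threshold):
    """keyframes: list of (frame, value); returns the list with spikes removed."""
    cleaned = [keyframes[0]]
    for prev, cur, nxt in zip(keyframes, keyframes[1:], keyframes[2:]):
        if abs(cur[1] - prev[1]) > threshold and abs(cur[1] - nxt[1]) > threshold:
            continue                       # isolated spike: drop the point
        cleaned.append(cur)
    cleaned.append(keyframes[-1])
    return cleaned
```

Requiring the jump on both sides distinguishes a one-frame spike from a genuine fast movement, which changes value on one side only.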

13.3 Avoiding self-penetration

Animations recorded by an actor do not necessarily fit all 3D characters, due to
different body measurements. A common problem is that limbs can pass through
the body or each other. An example of this phenomenon is seen in figure 117.
Alia is boxing, but because her breasts are bigger than those of the original
actor, her right forearm is penetrating her right breast. We see from the
F-curves that the lower arm has the same rotation throughout the animation.
Moving the rotation F-curves for Z up and W down lowers the forearm enough
so it never intersects the breast.

Figure 117: Moving some F-curves vertically may avoid self-penetration.

It is rather unusual that the F-curves are as flat as in this example. However,
we can still fix self-penetration by moving some parts of the F-curve. Just make
sure that the transition between the moved and original parts is smooth.

14 Combining actions in the NLA editor

Blender's powerful Non-Linear Animation (NLA) editor makes it possible to
combine several actions into a more complex animation. In the backflip scene,
Alia makes four consecutive backflips to get out of the alley, cf. figure 119. All
backflips emanate from a single mocap action.

Figure 118: NLA strips for several consecutive backflips

The NLA editor is not present by default in the animation screen, nor in any
other screen for that matter. We access it by changing the type of some open
window, e.g. the F-curve editor. Figure 118 shows the NLA strips for the
backflip animation. It consists of four partially overlapping backflip strips and
a single master strip.

Figure 120 shows the information associated with the third backflip strip. The
track's name Flip3 is displayed to the left in the NLA editor. The strip starts
at frame 346 and ends at frame 474, and we use Auto blending to make the
transition between consecutive strips smooth. The strip uses the action Ali-
aBackFlip. However, not the entire action is used, but only the frames between
98 and 226. These frames were chosen because Alia is in similar poses at those
times. The character must be in similar poses at the end of one strip as in the
beginning of the next, because otherwise the transition will not be smooth.

The scale is the ratio between the strip length and the action length.
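For the Flip3 strip described above this can be checked directly, using the frame numbers given in the text:

```python
# Strip runs from frame 346 to 474; the action frames used are 98 to 226.
strip_len = 474 - 346       # 128 frames
action_len = 226 - 98       # 128 frames
scale = strip_len / action_len
# scale is 1.0: the action plays back at its original speed
```

A scale above 1 would stretch the action out (slower playback); below 1 would compress it.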

Figure 119: A backflip

In this picture we can also see a problem with the masking discussed in the
section on masking: Alia's right leg is not connected to her body. However,
few people in the audience will notice this flaw.

Figure 120: Strip settings

If an action is selected in the action editor, it is displayed as a red line in the
NLA editor. The F-curves of the active action take precedence over the F-
curves of the NLA strips. To create an NLA strip, press the snowflake-shaped
button to the right of the Action name.

Figure 121: The backflip action is selected

If no action is selected in the action editor, the NLA editor looks as in Figure
118.

Figure 122: The master action is selected

The NLA editor is very powerful because it smoothly interpolates between strips
in overlapping regions. Figure 123 shows two overlapping NLA strips. The third
flip ends at frame 474, but the fourth flip starts already at frame 468. In the
region 468 – 474 both strips are active; bone rotations are interpolated linearly
between the two strips.

Figure 123: Two NLA strips overlap between frames 468 and 474.

However, the problem with mocap data is that an action usually does not start
at the same position where the previous one ended. We must therefore move the
character into position, and this is why there is an action for the MasterFloor
bone. We manually key the MasterFloor bone at frames 468 and 474 so that the
Root bone remains at roughly the same position. If we had not inserted keys
for the MasterFloor bone, Alia would have instantaneously jumped back to her
position at frame 350 in the beginning of the new strip, which would look very
strange indeed.

Figure 124: Interpolation between two NLA strips

The MasterFloor F-curves should be set to linear interpolation.

Figure 125: MasterFloor F-curves

Another challenge is posed by the scene where Alia realizes that somebody is
following her. The animation consists of three actions:

1. Alia starts running. This action ends with her right foot down.
2. Alia runs for another step. This action starts with the right foot down
and ends with the left foot down.
3. Alia runs around the corner. This action starts with the left foot down.

Figure 126: Linear interpolation of MasterFloor rotation

However, Alia runs in opposite directions in the first and second actions. To
compensate for that, the MasterFloor bone must be rotated 180 degrees between
frames 253 and 257. But look at what happens in figure 126. Alia revolves
around the MasterFloor bone, but since she ran away from it in the first action,
it has become a very bad pivot point. She should really have rotated around
the Root bone, i.e. the hip, but the Root already has keyframes in the action
and cannot be keyed separately.

The solution is to key the MasterFloor bone at all overlap frames.

Figure 127: The MasterFloor bone has a key at every frame during the transi-
tion.

Now Alia does not wander around in circles during the transition. This is
much better, but the movement is still pretty jerky. This is a problem that I
have not yet figured out how to solve. Perhaps one should write a script that
rotates the F-curves for the Root bone during the transition instead of relying
on MasterFloor.

Figure 128: Keying MasterFloor for better transition.

15 Music and sound effects

15.1 Download

There are many sites on the internet where you can find .wav files; use Google
to find them. All of the music and most of the sound effects were downloaded

from SoundJay, www.soundjay.com. The three music clips are described by
SoundJay as follows:

• Cautious Path.
suspenseful, cautious, suspicious, dramatic, mystical, tension.
• Jungle Run.
fast, moving, primitive, energetic.
• Iron Man.
edgy, guitar, powerful.

Apart from having music that fits the plot of Don’t call me babe, SoundJay
explicitly states in the terms of use that “you are free to incorporate the music
tracks into your projects, be it for commercial or non-commercial use”.
The dialog in this short movie was quite limited, with only two lines:

• Hasta la vista, baby.
The classic Terminator clip comes with Papagayo.
• Don’t call me babe.
From the 1995 movie Barb Wire with Pamela Anderson.

An interesting alternative would be to use a text-to-speech voice synthesizer, like
Festival, http://www.cstr.ed.ac.uk/projects/festival. I have only tried
Festival very briefly, and don't have much to say about it.
Of course, if you actually have a life you can use your friends as voice talent
and make a recording of the soundtrack.

15.2 Audacity

The overall volume of a sound clip can be adjusted in Blender’s Video sequencer,
but you need a separate program to do more advanced sound editing than that.
The standard open-source sound editor is Audacity.

Figure 129: Audacity

My knowledge of Audacity is very rudimentary and its use in the short movie
was limited to two operations.

• Pruning clips. E.g., a cymbal beat was cut out of Jungle Run to appear
in the replay of the decapitation.

• Fade-out effect. The volume of the gun-shot in Don’t call me babe (the
sound-clip, not the short movie) was faded out.

The overall volume of the clips was not changed in Audacity, but was tuned
with the Video sequencer in Blender.

16 Lipsync

16.1 Papagayo

Start Papagayo and select Open file. We can load either a sound file (.wav)
or a Papagayo project (.pgo). Choose the sound file brw-babe.wav. The file
is loaded into Papagayo's main window.

Figure 130: Open the wav file

Type in the line in the text window: ”Don’t call me babe”. Make a phonetic
breakdown to English at the bottom left.

Figure 131: Type in the text

Papagayo now makes a suggestion for the timing, by breaking down the sentence
into phonemes and distributing them evenly over the time. In this case the
suggested timing is utterly poor, because when the sentence is finished half of
the time remains; the rest is filled with a gun shot and silence.

Figure 132: Papagayo suggests a really poor timing.

Figure 133: We do better.

Adjust the timing while listening to the sound. We first drag the orange words
into place, and then adjust the individual purple syllables within the words.
Finally we export the lipsync data to a Moho (.dat) file. This file can then
be read by MakeHuman's lipsync tool, making the character speak.

Figure 134: Export Moho (.dat) file

Make a note of the frame rate in the upper right corner of Figure 132. Papagayo
works with 24 fps, which is close to Blender's default 25 fps, but sufficiently
different to be very noticeable in scenes with talking characters. We could change
the frame rate in Papagayo, but I preferred to make the change in Blender
instead. Actually, I am not sure whether I remembered to make this change in
all shots. The difference between 24 fps and 25 fps really only matters in the
few shots with a talking character, and in the master file where sound and
images are integrated.

16.2 Lipsync tool

The lipsync tool is implemented as a Blender add-on, but it is not yet dis-
tributed with Blender. The file space_view3d_mhx_lipsync.py must hence first
be copied to Blender's addons folder as described in section 7.1, and the add-on
must then be enabled in analogy with section 7.2.

After the add-on has been enabled, the lipsync tool is available in the user
interface panel (N-key) as soon as an MHX rig is selected. Like the mocap tool
it is in fact available with any armature selected, but it is only meaningful to use
it with an MHX rig. To load the lipsync animation that we just exported from
Papagayo, press the button labelled Moho (.dat) and navigate to the relevant
file.

Figure 135: Lipsync tool. Load Moho file.

Figure 136: Navigate to the .dat file exported from Papagayo

The lip animation is instantly loaded into Blender. If you play the animation
(Alt-A) you see Alia saying “Don’t call me babe”.
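The Moho .dat file itself is a simple text format: as far as I can tell, a MohoSwitch1 header line followed by one frame/phoneme pair per line. A minimal reader might look like the sketch below (my own illustration, not the lipsync tool's code).

```python
def read_moho(text):
    """Return a list of (frame, phoneme) pairs from a Moho .dat file."""
    pairs = []
    for line in text.splitlines():
        words = line.split()
        # Data lines are "frame phoneme"; skip the header and anything malformed.
        if len(words) == 2 and words[0].isdigit():
            pairs.append((int(words[0]), words[1]))
    return pairs
```

Each phoneme name would then be mapped to a viseme pose, with a keyframe set at the given frame.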

The lipsync tool works by positioning the bones in the face panel. These bones
then drive the facial shapekeys. We can see in figure 137 how the bone at the
chin has been moved down to open the mouth, and how the bones at the mouth
corners have moved toward the center to create a rounder mouth.

Figure 137: The lip animation has been loaded

The button next to the Moho button loads Magpie files instead. However, this
is an untested feature, because I did not have access to any application that
exports Magpie files.

Instead of loading a finished lip animation from another application, we can
use the tool to do the lipsyncing in Blender. Figure 138 shows the visemes
corresponding to some of the buttons. It is clear that the shapes could be better.
The mouth is generally too wide; too much EE and too little OO. Improving
the shapes is a task for the future.

Figure 138: Some visemes

If you use the lipsync tool to build an action, location keyframes must be set
for the bones in the panel’s mouth area, unless the Autokey option is selected.
Blender’s own Autokey button, i.e. the red dot in the timeline area, does
not affect keys set by the lipsync tool.

The eyelids do not belong to the lips, but they can be posed with the lipsync
tool nonetheless. The Blink button shuts the eyes by moving the upper eyelids
down and the lower eyelids up. The Unblink button restores the eyelids to their
default state. A typical blink takes about a quarter of a second, i.e. six frames.
Animate it by setting an Unblink key at frame n, a Blink key at frame n + 3,
and another Unblink key at frame n + 6. The figure below shows an example.

Figure 139: A six-frame blink animation
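The blink pattern shown in the figure is regular enough to be generated programmatically. A small sketch in plain Python (not using the Blender API; the key labels mirror the tool's Blink and Unblink buttons):

```python
# Keyframe pattern for a quarter-second blink starting at frame n:
# Unblink at n, Blink at n + 3, Unblink at n + 6 (six frames at 24 fps).
def blink_keys(n, half=3):
    """Return the (frame, pose) keys for one blink."""
    return [(n, "Unblink"), (n + half, "Blink"), (n + 2 * half, "Unblink")]

print(blink_keys(100))   # [(100, 'Unblink'), (103, 'Blink'), (106, 'Unblink')]
```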

Facial poses can be included directly into the main action. Alternatively, they
can be collected into a separate face action which can be mixed with the main
action in the NLA editor. Since mocap files do not contain any facial animation
(at least none of the files I have found on the internet), the main and facial
actions can be mixed without interference.

17 Simulation

17.1 Particles

In the end Alia kicks off Hunter’s head. Poor Hunter’s body keeps staggering
(the mocap file was called A very drunk walk) while blood is gushing from his
neck.

The particle, material and texture settings used for the blood can be seen in
figures 140 and 141.

Figure 140: Blood settings

Figure 141: More blood settings

To make the blood particles stick to the ground rather than falling through it,
the ground was made into a collision object in the physics context. Note that
this had to be done in the set file, because the linked set can not be modified in
any way in the scene file. Alternatively, an invisible collision object could have
been modelled in the scene file.

Figure 142: Ground settings

Figure 143: Particles rendered as billboards and objects

Initially I tried to render the blood particles as icosphere objects. This worked
well in OpenGL rendering, but a real render simply took too much time. For
this reason I changed the render type to billboards instead. The difference can
be seen in Figure 143. Again, the picture does not include a real render as
objects, due to the long render time.

17.2 Softbody, cloth, fluids,...

Blender has several other cool simulation capabilities, like softbodies, cloth,
fluids, smoke, etc. I originally toyed with the idea of using a fluid simulation for
Hunter’s blood, but gave up the idea. A fluid simulation works best if it can be
contained in a small domain, but Hunter’s blood is pouring all over the street.

Another problem with simulations is the tweaking time needed to get it right.
Every parameter change must be rendered to see its effect. And that is a real,
time-consuming render, because OpenGL renders are not accurate enough. For
this reason, and because I was discouraged by the discussion in Hess [3], I used
no other simulations than hair and particles.

One thing that would definitely have improved the movie is hair physics. Hair
physics was turned off in the hair settings in section 10.1.1. This means that
Alia’s hair is following her head like a stiff helmet, rather than flowing naturally
like long hair does in reality. This is something that should change in my next
project.

But even if I did not find a use for the other simulators, maybe you will. It seems
particularly attractive to use cloth for skirts and long coats. A good reference
for simulations in Blender is Tony Mullen’s Bounce, Tumble, and Splash! [5].

18 Lighting and shadows

Lighting and shadows are an important part of the creation of a good-looking
shot. This is outside my expertise, which clearly shows in Don’t call me babe.
However, since much of the time was devoted to setting up lamps and shadows,
I will mention some observations that were made.

Figure 144: Sink or float

The most important purpose of shadows is to indicate spatial relationships. A foot
touching the ground leaves a shadow that is connected to the foot. If the shadow
starts some distance away from the foot, it looks like the character is levitating
above the ground rather than standing firmly on it. An example is the rendered
image in figure 144; Alia seems to levitate because her shadow starts away from
her foot.

The left image is an OpenGL render of the same scene. Here the foot instead
seems to sink through the ground. The reason for this difference is that the
rendered image was composited with Alia above (alpha-over) the background.
If the feet are visible in the shot, care should be taken to plant them exactly on
the ground. Both feet that sink through the ground and feet that float above it
look bad. Evidently I did not succeed in all scenes.

As mentioned in section 5.5, the set file has two sun lamps with a bluish tint,
to give a consistent illumination of all scenes. Each scene file also has one or
more local lights on layer 1; I exclusively used spotlights. The spotlights were
set to cast buffered shadows.

A spotlight can cast two types of shadows: buffered shadows and raytraced
shadows. There are several different types of buffered shadows, as illustrated
in the picture below. The default is classic-half with size 512, but that produces
quite jagged edges, unless the spot is narrowly focussed. Turning up the size to
2048 makes the edges better, but the jaggedness is still there. There is also a
performance hit for using large shadow buffers.

Irregular buffer shadows produce sharp edges with no notable performance
penalty. However, hair does not seem to be included in the calculation, so the
shadow is bald. This is presumably a temporary glitch in the current version
of Blender.

Figure 145: Spot options

Figure 146: Buffered types: Irregular, classic-half with size 512, classic-half with
size 2048, deep with size 512

Raytraced shadows require that raytracing is turned on, which is generally too
slow for my taste. Moreover, I had a surprise when I tried to render the scene
with raytraced shadows: only Alia’s clothes cast shadows, but not Alia herself.
For a material to cast ray shadows, the traceable option in the material settings
must be set. But even when that is done, the hair does not seem to be included
in the raytrace calculation.

Figure 147: Skin material must be set to traceable to be included in raytracing

Figure 148: Raytraced shadows, without and with the Traceable option set.

Another thing to watch with buffered shadows is shadow clipping. If Clip Start is
set too high, clipping starts too late, and Alia looks like her breasts are lactating.
If Clip End is set too low, clipping ends too early and the shadows are cut off
at the middle.

Figure 149: Shadow clipping: Good, starts too late, ends too early.

In reality images tend to be darker close to corners and edges, because there is
less bouncing light coming from those places. The renders in Don’t call me babe
are unrealistic because this effect is missing. This is clearly seen in the figure
above, where the wall and the ground have constant color instead of becoming
darker at the edge where they meet.

Darkening in corners can be simulated by aiming a spotlight with negative
energy there; this will suck out light. A simpler method to simulate darkening is
ambient occlusion, which is calculated based on the proximity to other objects.
Blender has an option for using ambient occlusion. This produces quite nice
renders, but in the end I decided that it took too long. Only some 15%
of the short movie was rendered overnight with ambient occlusion, compared to
100% without. I also decided that it was a bad idea to mix shots with and with-
out ambient occlusion, because that would have made the shots inconsistently
lit in a quite disturbing manner.

19 Rendering and compositing

19.1 Render settings

Figure 150: Render settings

Some things to notice:

• Resolution. Don’t call me babe was rendered using the TV PAL 4:3
preset. The settings are X : 720, Y : 576, 100%. It is very important that
you use the same settings in all scenes. Images with different resolutions
and formats can still be loaded into the Video sequencer, but the result
will look very bad. If you for some reason want to change the resolution
in the middle of the project, you should change it in every scene and
rerender everything. Depending on your other render settings, this can
mean a considerable time loss.
• Frame rate. Blender’s default is 25 fps (the game engine uses 60 fps), but
Don’t call me babe was rendered at 24 fps. The frame rate must match
that of the lipsync files from Papagayo, cf. section 16.1. The difference
between 24 fps and 25 fps does not really matter in most shots, but it is
very visible in scenes where the lips have to move in sync with sound.

• Shading. Textures, Shadows and Color management should be kept on.
Subsurface scattering is used by the skin of the characters, and should
also be kept on in shots involving bare skin. Environment and especially
Ray Tracing are very time-consuming and should be turned off. There are
also some problems with the MakeHuman mesh, involving the eyebrows
and eyes, which sometimes lead to very poor results.
• Start and End animation. Located below the timeline, here you define
at which frames the animation starts and stops. By default the animation
starts at frame 1 and ends at frame 250, which is equivalent to ten seconds
at 25 fps.

• Output. By default Blender puts rendered images in the /tmp folder, but we want
them to end up in one of the subfolders of the /render folder in our
project directory. Make sure that the full pathname appears in the upper
(directory) field. The lower field is a prefix that every rendered image will
begin with. The images are then numbered by the frame number.
In the output section we also specify the output format. Open-EXR is
probably the most comprehensive format, but Don’t call me babe was
rendered to PNG. Avoid JPG, since it is a lossy format, and we want to
keep all information in the renders at this stage. The final
movie will be compressed to save filesize when it is assembled from the
rendered images in section 20.

• OpenGL render and animate. To the right of the 3D view’s menu
bar are two buttons which look like a camera and a clapper board. They
give you a quick preview render with OpenGL, as seen e.g. in figure
151. The OpenGL animation outputs images to the same place as the real
animation. This is very handy because you can make test animations very
quickly; Don’t call me babe could be OpenGL rendered in its entirety in
a few minutes, whereas a full render took all night.
• Only Render. The Only Render checkbox is located under the Display
options in the user interface panel. When it is selected only renderable
objects, like meshes and particles, appear in the viewport, but not arma-
tures, lamps, and empties. Since OpenGL renders the objects that are
seen in the viewport, checking this option is a quick way to make OpenGL
animations a little prettier.

Figure 151: Three renders

19.2 Compositing

Rendering is only one half of producing the final image; the second half
is to use the compositor. Compositing in Blender has been discussed at great
length by Wickes [6]; also the discussion in the final chapter of the old book [7]
is quite good. The compositing in Don’t call me babe was very basic; a typical
compositing network is shown in figure 152.

Figure 152: A typical compositing layout

The characters and the background were rendered as separate render layers.
The renders were typically too bright for the night-time setting, so they were
color corrected. I used HSV nodes rather than RGB nodes, because I could
then achieve the right brightness simply by adjusting the Value slider. Finally
the two renders were combined into a single image with an alpha-over node.

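For readers who have not met the alpha-over node: per pixel it puts the foreground over the background, weighted by the foreground's alpha. With premultiplied alpha the operation reduces to the sketch below (my own function, shown only to illustrate what the node computes):

```python
# Alpha-over of one premultiplied RGBA pixel over a background pixel.
def alpha_over(fg, bg):
    """fg and bg are (r, g, b, a) tuples with premultiplied color."""
    inv = 1.0 - fg[3]                 # how much of the background shows through
    return tuple(f + b * inv for f, b in zip(fg, bg))

# A fully opaque foreground pixel completely hides the background:
print(alpha_over((1.0, 0.0, 0.0, 1.0), (0.2, 0.4, 0.6, 1.0)))   # (1.0, 0.0, 0.0, 1.0)
```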
The scene consists of two render layers, named Alia and Background, respec-
tively. Each render layer can be brought into the compositor as an input node.
The render layer settings are shown in figure 153.

• Scene. The layers included in the scene. The same information is dis-
played at the bottom of the 3D view.
• Layer. The layers included in this render-layer. This is a proper subset
of the Scene layers. Alia is located on layer 2 and only included in the
Alia render layer, whereas the street objects are located on layers 11 – 18
and are only included in the background.
• Mask Layers. These layers are not included in the render themselves,
but occlude objects on included layers. It was not necessary to use mask
layers in the short movie, because the background never blocked the view
of the characters.

Figure 153: Layers included in characters and background renderlayers

19.3 Shadow planes

In shots where the camera is not moving, there is a very simple way to speed
up rendering: render a single still image of the background once, and composite
the moving characters on top of it. However, this will look unnatural unless
the background can receive shadows. This can be fixed by adding a low-poly
version of the set, with the sole purpose of receiving shadows. The low-poly
set has a transparent material with alpha = 0 which is set to receive shadows.
Since it consists of very few polygons, it renders very quickly.

Figure 154: An unused low-poly version of the street, ready to receive shadows.

In the end I did not use the low-poly set, because even the high-poly set rendered
quite rapidly.

19.4 Batch renders

The movie consists of some twenty scenes and was rendered overnight. To get
up twenty times during the night to press the animate button would not be
practical, but fortunately there is a better solution. We can run Blender from
the console without a user interface, and tell it to render all frames and then
quit. The syntax is

blender -b filename -F PNG -x 1 -s startframe -e endframe -a

The options are

• -b: The name of the .blend file.
• -F: The image format to render to, in our case PNG.
• -x: Append the file extension, i.e. .png.

• -s: The first frame of the animation. If omitted, use the startframe in the
file.

• -e: The last frame of the animation. If omitted, use the endframe in the
file.
• -a: Tells Blender to animate.
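The same command line can also be assembled programmatically, which keeps the flags consistent across many scene files. A hedged Python sketch (the blender path and file names are just examples):

```python
# Build the background-render command for one .blend scene file.
BLENDER = "/home/thomas/apps/bf/blender"   # adjust to your installation

def render_command(blend_file, start=None, end=None):
    cmd = [BLENDER, "-b", blend_file, "-F", "PNG", "-x", "1"]
    if start is not None:
        cmd += ["-s", str(start)]              # override the file's start frame
    if end is not None:
        cmd += ["-e", str(end)]                # override the file's end frame
    cmd.append("-a")                           # animate; must come after -s/-e
    return " ".join(cmd)

print(render_command("010-alia_walks.blend"))
print(render_command("610-thomas_applause.blend", start=452))
```

Printing one such command per line into a text file reproduces exactly the kind of batch script shown below.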

We can then write a shell script (on Linux) or a DOS script (on Windows) to
batch render several files. Here is the batch script for Don’t call me babe:

#!/bin/bash

#/home/thomas/apps/bf/blender -b 010-alia_walks.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 020-alia_seen_from_behind.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 030-alia_hesitates.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 040-alia_looks_around.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 050-brisk_walk_around_corner.blend -F PNG -x 1 -a
/home/thomas/apps/bf/blender -b 055-alia_stop_in_alley.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 060-alia_caught_in_alley.blend -F PNG -x 1 -a
/home/thomas/apps/bf/blender -b 070-alia_sneaks_into_alley.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 080-hunter_enters_alley.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 090-hunter_finds_alia.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 095-hasta_la_vista.blend -F PNG -x 1 -a
/home/thomas/apps/bf/blender -b 100-hunter_kicks.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 110-alia_wakes_up.blend -F PNG -x 1 -a
/home/thomas/apps/bf/blender -b 120-alia_kicks.blend -F PNG -x 1 -a
/home/thomas/apps/bf/blender -b 130-karate_back_and_forth.blend -F PNG -x 1 -a
/home/thomas/apps/bf/blender -b 140-alia_backflips.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 150-alia_kicks.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 160-head_off.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 170-dont_call_me_babe.blend -F PNG -x 1 -a
#/home/thomas/apps/bf/blender -b 610-thomas_applause.blend -F PNG -x 1 -s 452 -a

Note that several commands are commented out. This is the batch file for the
final rerender. At this point I was satisfied with most renders, but some had to
be redone. Comment signs in the batch file are a simple way to control which
scenes to render.

20 Final assembly

When all images have been rendered, it is time to assemble them into the final
movie. In Figure 155 we see an overview of the Video editor when all strips and
sound effects have been loaded.

Figure 155: Video sequencer with all data loaded.

Let us have a closer look at some interesting places.

Figure 156: A still image
Figure 157: A normal strip

Figure 156 shows the opening screen. As we see in the information panel to the
right (N-key), the first strip is a single image which extends for 100 frames or
slightly above four seconds.

The first animation is in the second strip, cf. figure 157. The strip is named
after the first image, which is simply the first rendered frame in the scene file,
since the images were rendered to different directories without a prefix. In the
information field we can see when the strip starts and ends and its duration.

Also note the animation flaw in the second picture; Alia’s hands penetrate her
hips.

Images and image sequences are represented by violet strips in the Video Se-
quencer, whereas green strips correspond to sound. The background music in
the beginning of the movie is called Cautious Path and it is supposed to convey
a sense of suspense.

Figure 158: A sound effect
Figure 159: Silence

Figure 158 shows an example of a sound effect. In addition to the background
music, which is still there, we now hear that Alia’s heart starts to pound harder
and faster.

The strips were generally assembled back-to-back, with no fancy transition ef-
fects. Such effects should be used sparingly and only when it is artistically
motivated. However, there are times when a transition effect helps with the
story-telling. After Hunter kicks Alia she passes out for a moment. This is
conveyed by a black screen and silence, and corresponds to the gap between the
violet strips in Figure 159. The short sound clip just before Alia passes out is
the crack when Hunter’s foot hits Alia.

The second music strip starts a little after Alia has woken up. The tune ends
with a cymbal beat, close to the point where Hunter’s head comes off. This was
a pure coincidence, but now that it was there, I synchronized the cymbal with
the exact moment of decapitation. That the beginning moved did not really
matter.

Figure 160: Alpha over
Figure 161: Image to put on top

Figure 160 shows the replay of the decapitation scene. A still image with the
text REPLAY in the upper-left corner is put on top of the rendered image. The
image is transparent apart from the text which has a shadeless white material.
To ensure that it had the correct dimensions, the REPLAY image was rendered
in Blender with the same resolution settings as the other images. Alpha was set
to Premultiply and the output format was PNG and RGBA; since the image
has transparent parts it also has an alpha channel. This can be compared with
the render settings in figure 150.

Figure 162: Lipsync
Figure 163: Sound settings

All speech clips have to be carefully synchronized with the lip animations. In
figure 162 we see the scene where Alia says “Don’t call me babe”. The figure
shows the exact frame where she pronounces the first B in Babe. The green
sound clip must be moved so the sound matches the animation. Since we syn-
chronized the animation with Papagayo and the lipsync tool before, the entire
sound clip should now be synced with the animation. Provided, of course, that
Papagayo and Blender are using the same frame rate.
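Finding the frame that corresponds to a given point in the sound is simple arithmetic, but the frame rate matters. A throwaway helper (frame numbering starts at 1, as in Blender; the two-second example is hypothetical):

```python
# Map a time in seconds to a Blender frame number (frame 1 is at time 0).
def time_to_frame(seconds, fps):
    return int(round(seconds * fps)) + 1

print(time_to_frame(2.0, 24))   # frame 49
print(time_to_frame(2.0, 25))   # frame 51 -- two frames later at 25 fps
```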

Figure 163 shows the settings for a sound clip. The volume of the clip can be
set in the sequencer, to a value that sounds good. I forgot to reduce the volume
of the “Don’t call me babe” sound clip, which is irritatingly loud in the final
movie.

Finally it is time to produce the movie. Figure 164 shows the render settings
used to make Don’t call me babe. Here are some things to look out for.

• Frame rate. Make sure to use the correct frame rate, in this case 24 fps.

• Output. Again we must specify the filepath of the output file. Since we
are rendering to a movie we can not use still PNG images this time. A
good choice is the H.264 codec.

• Encoding. When we choose the H.264 codec in the Output section, a
new Encoding section appears, where we can specify details about the
encoding. I chose the AVI format, H.264 codec and MP3 audio codec.
The other fields were left at their default values. These settings produced
a good result.

• Post processing. Make sure that Sequencer is enabled. Hess [3] suggests
that one should set up another pass in the compositor for color correction
of the rendered images. I did not do so for Don’t call me babe, but if we
include a composition pass, Composition should be checked here.

Figure 164: Settings for final render

And then we just hit Ctrl-F12 and wait for the animation to render, which takes
a few minutes. Now it only remains to upload the short movie to youtube
or vimeo, where the world can watch it.

21 Discussion

This book describes how to use MakeHuman and Blender 2.5x, and some other
freely available software, to rapidly make a short animated movie. More cor-
rectly, it describes how I did it. This was my first attempt to make a short

movie, and I learned a lot from it, which can be utilized in future projects. The
lessons also helped improve the MakeHuman tools, especially the mocap tool.
E.g., neither the mocap tool nor the lipsync tool worked with linked Blender
files prior to this project.

The animation has a number of technical glitches. Some of the most annoying
ones are:

• The feet slide when they should be firmly planted on the ground.
• The characters sometimes levitate in free air; shadows do not start at the
feet.
• The animation is sometimes jerky, due to problems with combining actions
which do not match up properly.

The animation could have been polished further to reduce some of these prob-
lems, but I had grown tired of it when I put it on youtube.

The quality of the film also depends on the quality of the MakeHuman mesh
and the MHX rig. Whereas I believe that the MHX import script and the tools
work quite well from a programming point of view, there is a lot to be desired
artistically. The deformation could be improved by better weight painting, and
the rig itself could probably use some improvements too. One thing that I intend
to do myself soon is to improve the facial shapekeys, which were constructed
more than a year ago and badly need to be remodelled.

The MakeHuman team could use more good artists; programmers not so much.
You can find us at www.makehuman.org.

References
[1] Don’t call me babe, http://www.youtube.com/watch?v=VfWSSCUOjIA
[2] MakeHuman, Blender and the Mocap tool,
http://www.youtube.com/watch?v=LJXcFRHYjMI
[3] D. Roland Hess, Animating with Blender. How to create short animations
from start to finish, Focal Press 2008, ISBN 978-0-240-81079-9
[4] Tony Mullen, Introduction to character animation with Blender, Wiley, 2007,
ISBN 978-0-4701-0260-2
[5] Tony Mullen, Bounce, Tumble, and Splash!, Wiley, 2008, ISBN 978-0-470-
19280-1
[6] Roger D. Wickes, Foundation Blender compositing, Apress, 2009, ISBN 978-
1-4302-1976-7
[7] D. Roland Hess (ed), The Essential Blender: Guide to 3D Creation with the
Open Source Suite Blender, No Starch Press, 2007, ISBN 978-1-5932-7166-4
