
SynthEyes™ 2008 User Manual

©2003-2009 Andersson Technologies LLC

Welcome to SynthEyes™, our camera-tracking system and match-moving
application. With SynthEyes, you can process your film or video shot to
determine the motion and field of view of the camera taking the shot, or track an
object moving within it. You can combine feature locations to produce an object
mesh. After reviewing the match-moved shot, inserting some sample 3-D objects
and viewing the RAM playback, you can output the camera and/or object motions
to any of a large number of popular 3-D animation programs. Once in your 3-D
animation program, you can add 3-D effects to the live-action shot, ranging from
invisible alterations, such as virtual set extensions, to virtual product placements,
to over-the-top creature insertions. With the right setup, you can capture the
motion of an actor's body or face.
SynthEyes can also help you stabilize your shots, taking full advantage of
its two and three-dimensional tracking capabilities to help you generate rock-solid
moving-camera shots. The comprehensive stabilization feature set gives you full
directorial control over stabilization.
If you work with film images, especially for TV, the stabilization system can
also help you avoid some common mistakes in film workflows that compromise
3-D tracking, rendering, and effects work.
And if you are working on a 3D stereo movie (two cameras), SynthEyes
can not only help you add 3-D effects, but it can help you match the two shots
together to reduce eyestrain.
Unless you are using the demo version, you will need to follow the
registration and authorization procedure described towards the end of this
document and in the online tutorial.
To help provide the best user experience, SynthEyes has a Customer
Care center with automatic updates, messages from the factory, feature
suggestions, forum, and more. Be sure to take advantage of these capabilities,
available from SynthEyes's help menu.
If you are reading this document in HTML format, you can access the PDF
from within SynthEyes using the Help/Help PDF item. The PDF version has pre-
built bookmarks (table of contents) that make it easy to read. If you are using the
demo version, the PDF version is a separate download.
Be sure to check out the many video tutorials on the web site. We know
many of you are visual learners, and the subjects covered in this manual are
inherently 3-D, so a quick look at the tutorials can make the text much more
accessible.

DON'T PANIC. SynthEyes is designed to integrate into the most
sophisticated visual effects workflows on the planet. So it is only natural that you
will probably see some things in this manual you do not understand. Even if you
don't understand 3-D color LUTs or the 'half' floating point image format, you'll be
fine! (hello, google!) We have worked hard to make sure that when you do not
need that functionality, everything still works, nice and simple. As you learn more
and do more, you'll likely discover that when you want to do something, some
control that hadn't made sense to you before will suddenly be just what you need.
So jump on in, the water's fine!

Contents
Quick Start: Automatic Tracking
Quick Start: Supervised Tracking
Quick Start: Stabilization
Shooting Requirements for 3-D Effects
Basic Operation
Opening the Shot
Automatic Tracking
Supervised Tracking
Fine-Tuning the Trackers
Checking the Trackers
Setting Up Mixed-Tripod Shots
Lenses and Distortion
Running the 3-D Solver
3-D Review
Cleaning Up Trackers Quickly
Setting Up a Coordinate System
Zero-Weighted Trackers
Perspective Window
Exporting to Your Animation Package
Building Meshes from Tracker Positions
Optimizing for Real-Time Playback
Troubleshooting
Combining Automatic and Supervised Tracking
Stabilization
Rotoscoping and Alpha-Channel Mattes
Object Tracking
Joint Camera and Object Tracking
Multi-Shot Tracking
Stereo Movies
Motion Capture and Face Tracking
Finding Light Positions

Curve Tracking and Analysis in 3-D
Merging Files and Tracks
Batch File Processing
Reference Material:
System Requirements
Installation and Registration
Customer Care Features and Automatic Update
Menu Reference
Control Panel Reference
Additional Dialogs Reference
Viewport Features Reference
Perspective Window Reference
Overview of Standard Tool Scripts
Preferences and Scene Settings Reference
Keyboard Reference
Viewport Layout Manager
Support

Quick Start: Automatic Tracking
To get started quickly with SynthEyes, match-move the demo shot
FlyoverJPEG, downloaded from http://www.ssontech.com/download.htm. Unpack
the ZIP file into a folder containing the image sequence.
An overview of the tracking process looks like this:
 Open the shot and configure SynthEyes to match the source.
 Create 2-D trackers that follow individual features in the image, either
automatically or under user supervision.
 Analyze (solve) the 2-D tracks to create 3-D camera and tracker data.
 Set up constraints to align the solved scene in a way useful for adding
effects. This step can be done before solving as well.
 Export to your animation package.
Start SynthEyes from the shortcut on your desktop, and on the main menu
select File/New or File/Import/Shot. In the file-selection dialog that opens, select
the first frame, FLYOVER0000.JPG, of the shot from the folder you have
downloaded it to.
The shot settings panel will appear. Screenshots in this manual are from a
PC with the light-colored user interface option so that they print better; the OS X
and/or dark-colored interfaces are slightly different in appearance but not
function.


You can reset the frame rate from the 24 fps default for sequences to the
NTSC rate by hitting the NTSC button, though this is not critical. The
aspect ratio, 1.333, is correct for this shot. If your machine has enough RAM, the
queue length should already be 150 frames, enough to buffer the entire shot in
RAM for maximum speed.
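If you are curious how much memory a RAM queue of that length needs, the arithmetic is simple. The sketch below is purely illustrative (it assumes a 640x480 RGB source and 3 bytes per pixel; SynthEyes' actual cache bookkeeping is not documented here):

```python
def frames_ram_mb(width, height, frames, bytes_per_pixel=3):
    """Approximate RAM needed to hold an uncompressed RGB image queue."""
    return width * height * bytes_per_pixel * frames / (1024 * 1024)

# 150 frames of a hypothetical 640x480 shot:
print(round(frames_ram_mb(640, 480, 150)))  # about 132 MB
```

Higher resolutions or floating-point formats grow this figure quickly, which is why the queue length is reduced automatically on machines with less RAM.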

On the toolbar, verify that the summary panel button is selected.

On the summary panel, click Full Automatic. (An additional step
or two will later let you automatically fine-tune the trackers to further improve
accuracy.)
Time Saver: you can click Full Automatic as soon as you start SynthEyes,
and it will prompt you for the shot if you have not set one up already, so you do
not even have to do a File/New.


A series of message boxes will pop up showing the job being processed.
Wait for it to finish. This is where your computer's speed pays off. On a small
Core 2 Duo desktop, the shot takes about 8 seconds to process in total, about
twice as fast as shown in the following Pentium 4 capture.

Once you see Finished solving, hit OK to close this final dialog box.
SynthEyes will switch to a quad-viewport configuration (experienced users can
disable switching with a preferences setting).
Each tracker now has a small x (tracker point) to show its location in 3-D
space.
You can zoom in on any of the views, including the camera view, using the
middle mouse scroll and middle-mouse pan to see more detail. (You can also
right-drag for a smooth zoom.)
Mac OS X: SynthEyes uses middle-mouse-drag to pan, but OS X may
display the 'Dashboard' instead. To fix that, open the Exposé and Spaces
controls in the OS X System Preferences panel, and change the middle mouse
button from Dashboard to "-" (nothing). You'll still be able to access Dashboard
via F12.
The status line will show the zoom ratio in the camera view, or the world-
units size of any 3-D viewport. You can Control-HOME to re-center all four
viewports. See the Window Viewport Reference for more such information.


In the main viewports, look at the Left view in the lower left quadrant. The
green marks show the 3-D location of features that SynthEyes located. In the Left
view, they fall on a diagonal running top-left to lower-right. Since most of these
points are on the ground in the scene, we'd like them to fall on the ground plane
of the animation environment. SynthEyes provides tools to let you eyeball it into
place, but there's a much better way…
Switch to the Coordinate System control panel using the toolbar button
or the Windows menu (or F8). The coordinate system panel is used to align
and scale the solved scenes.
Refer to the picture below for the location of the 3 trackers labeled in red.
We will precisely align these 3 trackers to become the ground plane. Note that
the details of which trackers are present may change somewhat from version to
version.


Begin by clicking the *3 button at top right of the coordinate system
panel. Next, click on the tracker labeled 1 (above) in the viewport. On the control
panel, the tracker will automatically change from Unlocked to Origin.
In this example, we will use trackers (1 and 2) aligned front to back. The
coordinate system mini-wizard (*3 button) handles points aligned left to right or
front to back. By default, it is at LR, so click the *3 button, which currently reads
LR, to change it to FB.
Click the tracker labeled 2, causing it to change to Lock Point. The Y field
above it will change to 20. The full-screen capture (above) showed SynthEyes
right after completing this step.
Select the tracker labeled 3, slightly right of center. It will change from
Unlocked to On XY Plane (i.e., the ground plane).
Why are we doing all this? The choice of trackers to use, the overall size
(determined by the 20 value above), and the choice of axes is arbitrary, up to you
to make your subsequent effects easier. See Setting Up the Coordinate System
for more details on why and how to set up a coordinate system. Note that
SynthEyes' scene settings and preferences allow you to change how the axes
are oriented to match other programs such as Maya or Lightwave: i.e., a Z-up or
Y-up mode. This manual's examples are in Z-up mode unless otherwise noted;
the corresponding choices for one of the Y-up modes should be fairly evident.
After you click the third tracker you will be prompted ("Apply coordinate
system?") to determine whether the scene should be re-solved to apply your new
settings. Select Yes. Hit Go! and SynthEyes will recalculate the tracker and
camera positions in a flash. To do this SynthEyes changed the solving mode (on
the Solver control panel) from Automatic to Refine, so that it will update
the match-move, rather than recalculating from scratch.
Afterwards, the 3 trackers will be flat on the ground plane (XY plane) and
the camera path adjusted to match, as shown (after control-home):


You could have selected any three points to define the coordinate system
this way, as long as they aren't in a straight line or all bunched together. The
points you select should be based on how you want the scene to line up in your
animation package.

Switch to the 3-D control panel . Select the magic wand tool on
the panel. Change the mesh type drop-down at left of the wand to create a
Pyramid instead of a Box. Zoom in the Top viewport window so the tracker
points are spread out.
In the Top viewport, drag out the base of a rectangular pyramid. Then click
again and drag to set its height. Use the move , rotate , and scale
tools to make the box into a small pyramid located in the vacant field. Click on
the color swatch under the wand, and select a sandy pyramid color. Click
somewhere empty in the viewport to unselect the pyramid (bright red causes lag
in LCDs).
On the View menu, turn off Show Trackers and Show 3-D Points, and
switch to the camera viewport. You can do that by changing the selector
on the toolbar to Camera, or by clicking the tab at top right
of the camera view itself.


Hit Play. Note that there will appear to be some jitter because drawing
is not anti-aliased. It won't be present when you render in your 3-D application.
SynthEyes is not intended to be a rendering or modeling system; it operates in
conjunction with a separate 3-D animation application. (You can create anti-
aliased preview movies from the Perspective window.)

Hit Stop. If you see a delayed reaction to the stop button, open
Edit/Preferences and turn on the Enhance Tablet Responsiveness checkbox.
Rewind to the beginning of the shot (say with shift -A).
By far, the most common cause of "sliding" of an inserted object is that the
object has not been placed at the right altitude over the imagery. You should
compare the location of your insert to that of other nearby trackers, in 3-D,
adding a tracker at key locations if necessary. You will also think you have sliding
if you place a flat object onto a surface that is not truly flat. Normally we would
place the pyramid more carefully.
To make a preview movie, switch to the Perspective window. Right-click
and select Lock to Current Cam. Right-click again and select Preview Movie.


Click on … at upper right and change the saved file type to Quicktime
Movie. Enter a file name for the output movie in QuickTime format, typically in a
temporary scratch location. (If you don't have QuickTime installed, use one of the
sequenced file types and later use SynthEyes as a video playback program.)
Click on Compression Settings, and select Sorenson Video 3 at High Quality,
29.97 frames per second, leave the Key Frames checkbox on, and turn off the
Limit data rate checkbox. Click OK to close the compression settings. Back on
the Preview Movie Settings, turn off Show Grid, and hit Start. The preview movie
will be produced and played back in the Quicktime Player.
You can export to your animation package at this time, from the
File/Export menu item. Select the exporter for your animation package from the
(long) menu list. SynthEyes will prompt for the export location and file name; by
default a file with the same name as the currently-open file (flyover in this case),
but with an appropriate file extension, such as .ma for a Maya ASCII scene file.
This completes this initial example, which is the quickest, though not
necessarily always the best, way to go. You'll notice that SynthEyes presents
many additional views, controls, and displays for detecting and removing tracking
glitches, navigating in 3-D, handling temporarily obscured trackers, moving
objects and multiple shots, etc.
In particular, after auto-tracking and before exporting, you should always
check up on the trackers, especially using Clean Up Trackers and the graph
editor, to correct any glitches in tracking (which can result in little glitches in the
camera path), and to eliminate any trackers that are not stable. For example, in
the example flyover, the truck that is moving behind the trees might be tracked,
and it should be deleted and the solution refined (quickly recomputed).


The final scene is available from the web site as flyover_auto.sni.

Quick Start: Supervised Tracking
Sometimes you will need to "hand track" shots, add additional supervised
trackers to an automatic track, or add supervised trackers to help the automated
system with very jumpy shots. Although supervised tracking takes a bit more
knowledge, it can often be done relatively quickly, and produce results on shots
the automated method cannot handle.
To demonstrate, manually match-move the demo shot flyover. Start
SynthEyes and select File/New or File/Import/Shot. Open the flyover shot.
The shot settings panel will appear. If your computer's memory permits,
the Queue Length setting on the shot setup panel should be equal to the number
of frames in the shot (so the entire shot goes in RAM), as you will scrub back and
forth through the entire shot repeatedly.
The first major step is to create trackers, which will follow selected
features in the shot. We will track in the forward direction, from the beginning of
the shot to the end, so rewind to the beginning of the shot. On shots where
features approach from the distance, it is often more convenient to track
backwards instead.
Switch to the camera view and right-click the Create Trackers menu item.
It will bring up the tracking control panel and turn on the Create ( ) button.
Tip: You can create a tracker at any time by holding down the 'C' key and
left-clicking in the camera view.
Begin creating trackers at the locations in the image below, by putting the
cursor over the location, pushing and holding the left mouse button, and
adjusting the tracker position while the mouse button is down, looking at the
tracker "insides" window on the control panel to put the "point" of the feature at
the center. Look for distinctive white or black spots in the indicated locations.

After creating the first tracker, click the green swatch under the mini
tracker view window and change the color to a bright yellow to be more visible.
Or, to do this after creating trackers, control-A to select them all, then click the
swatch.
Tip: There are two layouts for the tracking control panel, a small one
recommended for laptops, and a larger one recommended for high-resolution
displays, selected by the Wider tracker-view panel checkbox in the
preferences. In between, take your pick!


Once the eleven trackers are placed, type control-A (command-A on Mac)
to select all the trackers. On the tracker control panel, find the spinner called
"Key Every" and raise it from zero to 20. This says you wish to
automatically re-key the tracker every 20 frames to accommodate changes in the
pattern.

Hit the Play button , and SynthEyes will track through the entire shot.
On this example, the trackers should stay on their features throughout the
entire shot without further intervention. You will notice that one has gone off-
screen and been shut down automatically. (Advanced feature hint: when the
image has black edges, you can adjust the Region-of-interest on the image
preprocessing panel to save storage and ensure that the trackers turn off when
they reach the out-of-bounds portion.) If necessary, you can reposition a tracker
on any frame, setting a key and teaching the tracker to follow the image from that
location subsequently.
After tracking, with all the trackers still selected (or hit Control/command-
A), click the Lock ( ) button to lock them, so they will not re-track as you
play around (…or get messed up).
Now you will align the coordinate system. This is the same as for
automatic tracking, except performed before any solving. See Setting Up the
Coordinate System for more details on why and how to set up a coordinate

system. Switch to the Coordinate System control panel using the toolbar.


This is a similar guide picture to that from auto-tracking, though the
trackers are in different locations. Click the *3 button, then click on tracker #1.
Click the *3 button, now reading LR, to change it to FB. Click tracker #2. Click
tracker #3.

Now switch to the Solve control panel. Hit the Go! button. A display
panel will pop up, and after about 3 seconds, it should say Finished solving. Hit
OK to close the popup. You could add some objects from the 3-D panel at this
time, as in the automatic tracking example.

You can add some additional trackers now to increase accuracy. Use
(or shift-F) to go to the end of the shot, and change to backward tracking by
clicking the big on the main toolbar. It will change to the backwards direction
. On the Tracker control panel, turn on the Create ( ) button.
Hint: When you 'play' the scene, SynthEyes updates the tracking data for
trackers that are set to the same direction as the playback itself.
Create additional trackers spread through the scene, for example only on
white spots. Switch their tracker type from a match tracker to a white-spot
tracker , using the type selection button on the tracker control panel. (Note
that the Key-every spinner does not affect spot-type trackers.)


Hit Play to track them out. The tracker on the rock pile gets off track in the
middle; you can correct it by dragging and re-tracking, but it will be
easiest for this one to keep it as a match-type tracker. Scrub through the shot
to verify they have stayed on track, then control-A to select them all, and turn on
the lock.

Switch to the Solver control panel , change the top drop-down box,
the solving mode, from Automatic to Refine, and hit Go! again.

Go to the 3-D Panel , click on the create wand , change the object
type from Box (or Pyramid) to Earthling, then drag in the Top view to place an
earthling to menace this setting. Click a second time to set its size. In the
following example, a tracker on the concrete pad was used to adjust the height of
the Earthling statue to prevent sliding. You can use pan-to-follow mode (hit the 5
key to turn it on or off) to zoom in on the tracker (and nearby feet) to monitor their
positioning as you scrub. The final scene is available from the web site as
flyover_sup.sni.


Typically, supervised tracking is performed more carefully, tracking a
single tracker at a time and monitoring it directly. SynthEyes generates keys
every 20 frames with the settings shown; normally such automatically-generated
keyframes are adjusted manually to prevent drifting. If you look at the individual
trackers in this example, you will see that some have drifted by the end of the
shot. Normally they are corrected, hence the term supervised tracking.
For more detailed information on supervised tracking, read the manual's
later write-up of supervised tracking, and see the online tutorial "Care and
Feeding of Supervised Trackers."

Quick Start: Stabilization
Adding 3-D effects generally requires a moving camera, but making a
smooth camera move can be hard, and a jiggly shot often cries out "Amateur!"
SynthEyes can help you stabilize your shots for a more professional look, though
like any tool it is not a magic wand: a more stable original shot is always better.
Stabilization will sacrifice some image quality. We'll discuss more costs and
benefits of SynthEyes stabilization in the later full section.
We'll begin by stabilizing the shot grnfield, available from the web site.
We will do this shot one particular way for illustration, though many other options
are possible. Note that this shot orbits a feature, which will be kept in place.
SynthEyes also can stabilize traveling shots, such as a forward-looking view from
a moving car, where there is no single point that stays in view.
File/New and open the shot using the standard 4:3 defaults. You can play
through it and see the bounciness: it was shot out a helicopter door with no
mechanical stabilizing equipment.

Click the Full Automatic button on the summary panel to track
and solve the shot. If we wanted, we could track without solving, and stick with 2-
D tracks, but we'll use the more stable and useful 3-D results here.
Select the Shot/Image Preparation menu item (or hit the P key).
In the image prep viewport, drag a lasso around the half-dozen trackers in
the field near the parking lot at left. We could stabilize using all the trackers, but
for illustration we‘ll stabilize this particular group, which would be typical if we
were adding a building into the field.

Click the stabilization tab, change the Translation stabilization-
axis drop-down to Peg, and the Rotation drop-down to Filter. Reduce the Cut
Frequency spinner to 0.5 Hz. This will attenuate rotation instability, without
eliminating it. You should have something like this:


The image prep window is showing the stabilized output, and large black
bands are present at the bottom and left of the image, because the image has
been shifted (in a 3-D way) so that it will be stable. To eliminate the bands, we
must effectively zoom in a bit, expanding the pixels.
Hit the Auto-Scale button and that is done, expanding by almost 30%, and
eliminating the black bars. This expansion is what reduces image quality, and it
should always be minimized to the extent possible.
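The arithmetic behind that expansion is worth understanding, since it governs how much quality you give up. As a rough sketch only (this is the geometric lower bound for a centered zoom, not SynthEyes' actual Auto-Scale computation, and the shift values are made up for illustration):

```python
def autoscale_zoom(max_shift_x, max_shift_y):
    """Smallest centered zoom factor that hides black borders when the
    frame has been shifted by at most (max_shift_x, max_shift_y), each
    expressed as a fraction of the frame width/height. Zooming by s
    leaves a margin of (1 - 1/s)/2 per side, which must cover the
    largest shift, giving s = 1 / (1 - 2 * shift)."""
    worst = max(max_shift_x, max_shift_y)
    return 1.0 / (1.0 - 2.0 * worst)

# A hypothetical 11.5% worst-case shift already demands a ~30% zoom:
print(round(autoscale_zoom(0.115, 0.08), 2))  # 1.3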

Use the horizontal spinner to the right of the frame number at
bottom center to scrub through the shot. The shot is stabilized around the purple
"point of interest" at left center.
You can see some remaining rotation. You may not always want to make
a shot completely stone solid. A little motion gives it some life. In this case,
merely attenuating the jitter frequency becomes ineffective because the shot is
not that long.
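To get a feel for what the Filter mode is doing, consider the simplest possible low-pass filter. This one-pole sketch is only a conceptual illustration (SynthEyes' actual filter design is not documented here; the sample data is invented):

```python
import math

def lowpass(samples, cut_hz, rate_hz):
    """One-pole low-pass filter: smooths a rotation (or position)
    channel sampled at rate_hz, attenuating jitter above roughly
    cut_hz while passing the slow, deliberate camera move through."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cut_hz / rate_hz)
    out, state = [], samples[0]
    for s in samples:
        state += alpha * (s - state)  # move part-way toward the input
        out.append(state)
    return out

# A perfectly steady channel is unchanged; only the jitter is removed.
print(lowpass([5.0] * 4, 0.5, 29.97))  # [5.0, 5.0, 5.0, 5.0]
```

Lowering the cut frequency makes alpha smaller, so the output follows the input more sluggishly; on a short shot there is simply not enough low-frequency content left to distinguish the intended move from the jitter, which is why filtering alone becomes ineffective here.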
To better show what we're going to do next, click the Final button at right,
turning it to Padded mode. Increase the Margin spinner, below it, to 0.125.
Instead of showing the final image, we're showing where the final image (the red
outline) is coming from within the original image. Scrub through the shot a little,
then go to the end (frame 178).
Now, change the Rotation mode to Peg also. Instead of low-pass-filtering
the rotation, we have locked the original rotation in place for the length of the
shot. But now, by the end of the shot the red rectangle has gone well off the
original imagery. If you temporarily click Padded to get back to the Final image,
there are two large black missing portions.


Hit Auto-Scale again, which shrinks the red source rectangle, expanding
the pixels further. Select the Adjust tab of the image preparation window, and
look at the Delta Zoom value. Each pixel is now about 160% of its original size,
reducing image quality. Click Undo to get back to the 129% value we had before.
Unthinkingly increasing the zoom factor is not good for images.
If you scrub through the shot a little (in Padded mode) you'll see that the
image-used region is being forced to rotate to compensate for the helicopter's
path, orbiting the building site.
For a nice solution, go to the end of the shot, turn on the make-key button
at lower right, then adjust the Delta Rot (rotation) spinner to rotate the red
rectangle back to horizontal as shown.

Scrub through the shot, and you'll see that the red rectangle stays
completely within the source image, which is good: there won't be any missing
parts. In fact, you can Auto-Scale again and drop the zoom to about 27%.
Click Padded to switch back to the Final display mode, and scrub through
to verify the shot again. Note that the black and white dashed box is the
boundary of the original image in Final mode.
You can reduce the slight blurring caused by resampling the image to
zoom it: click to the Rez tab, and switch the Interpolation method from Bi-Linear
to 2-Lanczos. You can see the effect of this especially in the parking lot.
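The "2-Lanczos" option refers to the standard Lanczos-windowed sinc kernel with a = 2, which weighs four source pixels per axis instead of bi-linear's two, preserving more sharpness. The kernel itself is a well-known formula; this sketch shows its definition, not SynthEyes' implementation:

```python
import math

def lanczos(x, a=2):
    """Lanczos kernel L(x) = a*sin(pi*x)*sin(pi*x/a)/(pi*x)^2 for
    |x| < a, 1 at x = 0, and 0 outside the window. With a=2 each
    output pixel blends the 4 nearest source pixels per axis."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

print(lanczos(0.0))              # 1.0 at the center tap
print(round(lanczos(1.0), 9))    # 0.0 at integer offsets
```

The small negative lobes between the zero crossings are what restore edge contrast that bi-linear interpolation blurs away.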
To playback at speed, hit OK on the Image Prep dialog. You will probably
receive a message about some (unstabilized) frames that need to be flushed
from the cache; hit OK.
You'll notice that the trackers are no longer in the "right" places: they are
in the right place for the original images, not the stabilized images. We'll later see
the button for this, but for now, right-click in the camera view and turn off
View/Show Trackers and View/Show 3-D Points.

Hit the main SynthEyes play button , and you will see a very nicely
stabilized version of the shot.
By adding the hand-animated "directorial" component of the stabilization,
we were able to achieve a very nice result, without requiring an excessive
amount of zoom. [By intentionally moving the point of interest, the required
zoom can be reduced further to under 15%.]
If you look carefully at the shot, you will notice some occasional
strangeness where things seem to go out of focus temporarily. This is motion
blur due to the camera's motion during shooting.
Important: To minimize motion blur when shooting footage that will
be stabilized, keep the camera’s shutter time as small as possible (a small
“shutter angle” for film cameras).
Doubtless you would now like to save the sequence out for later
compositing with final effects (or maybe a stabilized shot is all you needed). Hit P
to bring the image prep dialog back up, and select the Output tab . Click
the Save Sequence button.
Click the … button to select the output file type and name. Note that for
image sequences, you should include the number of zeroes and starting frame
number that you want in the first image sequence file name: seq001 or seq0000
for example. After setting any compression options, hit Start, and the sequence
will be saved.
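The padding rule works the same way in any numbering scheme: the digits you type in the first file name set both the pad width and the start frame. A minimal sketch of that naming convention (illustrative only; the `sequence_name` helper is hypothetical, not part of SynthEyes):

```python
def sequence_name(base, frame, digits, ext="jpg"):
    """Build a zero-padded image-sequence file name. Typing seq0000
    as the first name means base='seq', digits=4, starting at frame 0."""
    return f"{base}{frame:0{digits}d}.{ext}"

print(sequence_name("seq", 0, 4))  # seq0000.jpg
print(sequence_name("seq", 1, 3))  # seq001.jpg
```

Consistent zero-padding matters because compositing applications sort and detect sequences by those digit fields.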
A number of things have happened behind the scenes
during this quick start, where SynthEyes has taken advantage of the 3-D solve's
field of view and many trackers to produce better results than traditional
stabilizing software.
SynthEyes has plenty of additional controls affording you directorial
control, and the ability to combine some workflow operations that normally would
be separate, improving final image quality in the process. These are described
later in the Stabilization section of the manual.

Quick Start: The Works
As a final 'quick start,' we present a tripod shot with zoom; we will stabilize
the shot slightly.
Open the shot VFArchHalf from the web site, a representative
documentary-type shot at half-HD resolution, 960x540. (This is possibly the only
way to know what the arch actually says.) Select NTSC playback rate. If you look
carefully at the rest of the shot you will notice one thing that has been done to the
shot to make the cameraman's job easier.
Since the camera was mounted on a tripod and does not physically
translate during the shot, check the On Tripod checkbox on the Summary Panel

. The lens zooms, so check the Zoom lens checkbox.


Tip: Do not check the Zoom checkbox for all your shots, "just in case."
Zoom processing is noisier and less robust.
Click the Run Auto-tracker button to generate trackers, but not solve yet.
Scrub through the beginning of the shot, and you will see a few trackers
on the moving tree branches at left. Lasso-select them, then hit the Delete key.
Click Solve.
After solving, hit shift-C or Track/Clean Up Trackers. Click Fix to delete a
few high-error trackers.
To update a tripod-type shot, we must use the Refine Tripod mode on the

Solver panel . Change the mode from Tripod to Refine Tripod. Hit Go!
Look in the Top and Left views and notice how all of the trackers are
located a fixed distance away from the camera.
SynthEyes must do that because in a tripod shot, there is no perspective
available to estimate the distance. You can easily insert 3-D objects and get
them to stick, but aligning them will be more difficult. You can use SynthEyes's
single-frame alignment capability to help do that.

For illustration now, go to the 3-D control panel and use the create
tool to create a cylinder, box, or earthling in the top view. No matter where
you create it, it will stick if you scrub through the shot. You can reposition it using
the other 3-D tools, move , rotate , and scale , however you like.
Once you finish playing, delete all the meshes you have created.
If you have let the shot play at normal playback speed, you've probably
noticed that the camera work is not the best.


Hit the P key to bring up the image preprocessor. Use the frame spinner at
bottom to go to the end of the shot.
Lasso-select the visible trackers, those in and immediately surrounding
the text area.
Now, on the Stabilize tab, change the Translation and Rotation stabilize
modes to Filter. As you do this, SynthEyes records the selected trackers as the
source of the stabilization data. If you did this first, then remembered to select
some particular trackers, or later want to change the trackers used, you can hit
Get Tracks to reload the stabilization tracking data.
Decrease the Cut Freq(Hz) spinner to 1.0 Hz.
Click Auto-Scale. If you click over to the Adjust tab, you will see that the
required zoom is less than 5%.

Go to the Rez tab and experiment with the Interpolation setting if you like;
the default 2-Lanczos generally works well at a reasonable speed.
Hit OK to close the Image Preprocessor.
Switch to the Camera View.
Type J and control-J to turn off tracker display.
Select Normal Speed on the View menu.

For a 1:1 size, shift-click the reset-camera-zoom button.


Hit Play. You now have much smoother camera work, without being overly
robotic.

Use the Output tab on the image preprocessor to write the sequence back
out if you wish.
If you want to insert an object into the stabilized shot, you need to update
the trackers and then the camera solution. On the Image Preprocessor's Output
tab, click Apply to Trkers once. Close the image preprocessor, then go to the
Solver panel, make sure the solver is in Refine Tripod mode, and click Go!

Shooting Requirements for 3-D Effects
You've seen how to track a simple demo shot. How about your own
shots? Not every shot is suitable for match-moving. If you cannot look at the
shot and get a rough idea of where the camera went and where the objects are,
SynthEyes won't be able to either. It's helpful to understand what is needed to
get a good match-move, to know what can be done and what can't, and
sometimes to help a project's director or camera-person plan the shots for effects
insertion.
This list suggests what is necessary:
 The camera must physically change location: a simple pan, tilt, or zoom is
not enough for 3-D scene reconstruction.
 Depth of scene: the features cannot all be at the same distance from the
camera, nor very far away.
 Distinct trackable features in the shot (reflected highlights from lights do
not count and must be avoided).
 The trackable features should not all be in the same plane, for example,
they should not all be on a flat floor or green-screen on the back wall.
If the camera did not move, then either
 You must need only the motion of a single object that occupies much of
the screen while moving nontrivially in 3-D (maybe a few objects at film
resolution),
 Or, you must make do with a "2½-D" match-move, which will track the
camera's panning, tilting, and zooming, but cannot report the distance to
any point,
 Or, you must shoot some separate still or video imagery where the
camera does move, which can be used to determine the 3-D location of
features tracked in the primary shot.
For this second group of cases, if the camera spins around on a tripod, it
is IMPOSSIBLE, even in theory, to determine how far away anything is. This is
not a bug. SynthEyes' tripod tracking mode will help you insert 3-D objects in
such shots anyway. The axis alignment system will help you place 3-D objects in
the scene correctly. It can also solve pure lock-off shots.
If the camera was on a tripod, but shoots a single moving object, such as
a bus driving by, you may be able to recover the camera pan/tilt plus the 3-D
motion of the bus relative to the camera. This would let you insert a beast clawing
into the top of the bus, for example.
For visual examples, see the Tutorials section of our web site.

Basic Operation
Before describing the match-moving process in more detail, here is an
overview of the elements of the user interface, beginning with an annotated
image. Details on each element can be found in the reference sections.

Color Scheme
SynthEyes offers two default color schemes, a light version (shown) and a
dark version. The light version generally matches the operating system defaults
(and so is somewhat different on a PC and Mac), intended for a brighter office-
style environment. The darker user-interface scheme matches programs such as
Combustion, Fusion, Shake, etc, which are designed to be used in a darker
studio environment.
To switch schemes, select the Edit/Reset Preferences menu item and you
will be given a choice.
You can change virtually all of the colors in the user interface individually,
if you like. For example, you can change the default tracker color from green to
blue, if you are constantly handling green-screen shots. See Keeping Track of
the Trackers for more information.

Tool Bar
The tool bar runs across the top of the application, including normal
Windows icons, buttons to switch among the control panels, and several viewport


controls. SynthEyes includes full undo and redo support. Three buttons at right
control Customer Care Center functions such as messages and upgrades.

Control Panels
At any time, one of the control panels is displayed in the control panel
area, as selected by the toolbar buttons or some menu items. The control panel
can be floated by the Window/Float One Panel menu item. You can use a
control panel with any viewport.
SynthEyes uses control panels as a way to organize all the many
individual controls. Each control panel corresponds to a particular task, and while
that control panel is open, the mouse actions in the viewports, and the keyboard
accelerator keys, adapt to help accomplish a particular task. The buttons on the
toolbar are arranged so that you can start at the left and work to the right. This
approach has been used in a variety of older applications, and is making a
comeback in new applications as well because of its organizational ability.
By contrast, Adobe programs such as Photoshop and Illustrator have
many different palettes that can appear and disappear individually. If you are
more familiar with this style, or it is more convenient for a particular task, you can
select the Window/Many Floating Panels option, and have any number of
panels open at once. Keep in mind that only one panel is still primarily in charge,
and there may be unwanted interactions between panels in some combinations.
The primary panel is still marked in hot yellow, while the other panels are a
cooler blue.

Active Camera/Object versus Selection


At any point in time, one camera or moving object is considered active.
The list of cameras and objects may be found on the Shot menu; the active one
has a check mark and is listed in the button to the right of the viewport selection
on the toolbar (Camera01 in the screen capture above).
The active object (meaning a moving object or camera) will have its shot
shown in the Camera view, and its trackers visible and editable. The active
object, or all objects on its shot, will be exported, depending on the exporter.
Trackers, objects, mesh objects, cameras, and lights can all be selected,
for example by clicking on them or by name through the drop-down on the 3-D
panel. While any number of trackers on a single object can be selected at a
time, only a single other object can be selected at a given time.
In the perspective window, a single mesh object can be selected as the
"Edit Mesh," where its facets and vertices are exposed and subject to editing.
Note that a moving object can be active, but not selected, and vice versa.
Similarly, a mesh object can be selected but not the edit mesh, and vice versa.


Floating Camera View


The camera view can be floated with Window/Floating Camera; for
example, you can move it to a second monitor. While floated, the camera view
will be empty in all viewport layouts that would normally contain it, so using Quad
Perspective instead can be very handy.

Play Bar
The play bar normally appears at the top of most control panels, and
features play, stop, frame forward, etc. controls as well as the frame number
display. Frames are numbered from 0 unless you adjust the preferences.
You can move the playbar to the toolbar, if you have a high-resolution
monitor, using a setting on the preferences panel. This is especially useful if you
are using the Float Many Panels mode.
You can also float the playbar by itself, using the Window/Float Playbar
menu item.

Viewports
The main display area can show a single viewport, such as a Top or
Camera View, or several independent viewports simultaneously as part of a
layout, such as Quad.

Layouts
A layout consists of one or more viewports, shown simultaneously in a
particular arrangement. Select a layout with the drop-down list on the toolbar.
You can change any pane of a layout to a different type by clicking the tab
just above the upper-left corner of the pane, creating a Custom layout. The
single "Custom" layout is changed each time you do this. A tab at the top right of
each pane brings it full size, or back if it already is.
You can adjust the relative sizing of each pane by dragging the gutters
between panes.
To name your custom layouts and create different pane arrangements,
use the layout manager (see the Window menu). Some viewport types can
appear only once in a particular layout: you can't have two camera viewports in
one layout.
Your layouts are stored in the SynthEyes file; you can also set up your
own default configurations (preferences) using the layout manager.

Coordinate Systems
SynthEyes can operate in any of several different coordinate system
alignments, such as Z up, Y up, or Y up left-handed (Lightwave). The coordinate


axis setting is controlled from Edit/Scene Settings; the default setting is controlled
from the Edit Preferences.
The viewports show the directions of each coordinate axis: X in red, Y in
green, Z in blue. One axis is out of the plane of the screen, and is labeled with a
suffix t (towards) or a (away). For example, in the Top view in Z-up mode, the Z
axis is labeled Zt.
SynthEyes automatically adjusts the scene and user interface when you
change the coordinate system setting. If a point is at X/Y/Z = 0,0,10 in Z-up
mode, then if you change to Y-up mode, the point will be at 0,10,0. Effectively,
SynthEyes preserves the view from each direction (Top, Front, Left, etc), so that
the view from each direction never changes as you change the coordinate
system setting; the axes shift, and with them the coordinates of the points and
cameras.
Consequently, you can change the scene coordinate axis setting
whenever you like, and some exporters do it temporarily to match the target
application.
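As a concrete sketch of the worked example above: one common right-handed mapping between Z-up and Y-up coordinates simply moves the vertical axis. This is illustrative only (the manual does not spell out the exact transform SynthEyes applies to every axis), but it reproduces the 0,0,10 to 0,10,0 example.

```python
def z_up_to_y_up(point):
    """Map a point from Z-up to Y-up coordinates (both right-handed).

    Illustrative only: one standard mapping that keeps the Top/Front/Left
    views unchanged, not necessarily SynthEyes' exact internal code.
    """
    x, y, z = point
    return (x, z, -y)   # the old up-axis (Z) becomes the new up-axis (Y)

print(z_up_to_y_up((0, 0, 10)))  # -> (0, 10, 0), as in the text
```

Applying the inverse (or changing the setting back) returns the original coordinates, which is why the setting can be changed freely.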

Spinners
Spinners are the plus/minus buttons next to the numeric
edit fields. You can drag upwards and downwards within a spinner to adjust the
value rapidly, or click the plus or minus to change it a little at a time.
Some spinners show keyed frames with a red outline. You can remove a
key or reset a spinner to a default or initial value by right-clicking it. If you
shift-right-click a key, all following keys are truncated. If you control-right-click a
key, all keys are removed from the track.

Tooltips
Tooltips are helpful little boxes of text that pop up when you put the mouse
over an item for a little while. There are tooltips for the controls, to help explain
their function, and tooltips in the viewports to identify tracker and object names.
The tooltip of a tracker has a background color that shows whether it is an
automatically-generated tracker (lead gray), or supervised tracker (gold).

Status Line
Some mouse operations display current position information on the status
line at the bottom of the overall SynthEyes window, depending on what window
the mouse is in, and whether it is dragging. For example, zooming in the camera
view shows a relative zoom percentage, while zooming in a 3-D viewport shows
the viewport's width and height in 3-D units.


Keyboard Accelerators
SynthEyes offers keyboard accelerators, as listed in the reference section.
You can change the keyboard accelerators from the keyboard manager, initiated
with Edit/Edit Keyboard Map. Note that the tracker-related commands will work
only from within the camera view, so that you do not inadvertently corrupt a
tracker.
On a PC, you can also use Windows's ALT-key menu accelerators to
access the menu bar, such as ALT-F-X to exit.

Menus
When you see something like Shot/Edit Shot in this manual, it is referring
to the Edit Shot menu item within the Shot section of the main menu.
SynthEyes also has right-click menus that appear when you click the right
mouse button within a viewport. The menu that appears will depend on the
viewport you click in.
The menus also show the keyboard equivalent of each menu item, if one
is defined.

Click-on/Click-off Mode
Tracking can involve substantial sustained effort by your hands and wrists,
so proper ergonomics are important to your workstation setup, and you should
take regular breaks.
As another potential aid, SynthEyes offers click-on/click-off mode, which
replaces the usual dragging of items with a click-on/move/click-off
approach. In this mode, you do not have to hold the mouse buttons down so
much, especially as you move, so there should be less strain (though we cannot
offer a medical opinion on this; use it at your own risk and discretion).
You can set the click-on/click-off mode as a preference, and can switch it
on and off whenever convenient from the Window menu.
Click-on/click-off mode affects only the camera view, mini-tracker view, 3-
D viewports, perspective window, and spinners, and affects only the left and
middle mouse buttons, never the right. This captures the common needs, without
requiring an excess of clicking in other scenarios.

Scripts
SynthEyes has a scripting language, Sizzle, and uses Sizzle scripts to
implement exporters, some importers, and tool functions. While many scripts are
supplied with SynthEyes, you can change them as you see fit, or write new ones
to interface to your studio workflow.
You can find the importers on the File/Importers menu, exporters on the
File/Exporters menu, and tool scripts on the main Script menu.


On your machine, scripts are stored in two places: a central folder for all
SynthEyes users, and a personal folder for your own. Two menu items at the top
of the Script menu will quickly open either folder.
SynthEyes mirrors the folder structure to produce a matching sub-menu
structure. You can create your own "My Scripts" folder in your personal area and
place all your own scripts there, to be able to quickly find your scripts and
distinguish them from the standard system scripts. Similarly, a studio might have
an "Our Shared Scripts" folder in the shared SynthEyes scripts folder.

Script Bars (for Menus too!)


It can be convenient to be able to quickly run scripts or access menu
commands as you work, without having to search through the menus or
remember a keyboard command. You can create script bars for this purpose,
which are small floating windows with a column of buttons, one for each script or
menu item you would like to quickly access. Use the Script Bar Manager to
create and modify the script bars, and the Script bar submenu of the Scripts
menu to open them.
Script bars are automatically re-opened and re-positioned each time you
start SynthEyes, if they were open when SynthEyes last closed.
Important: Script bars are stored and deployed as small text files. When
you create a new script bar, you must store it in either one of the system-wide
script folders or your user script folder. To see the proper locations, use the
Script/User script folder or Script/System script folder menu items to open the
File Explorer or Finder to the corresponding folder.

Opening the Shot
To begin tracking a shot, select File/New or File/Import/Shot if you just
started SynthEyes. Select the desired AVI, QT Movie, or MPEG file, or the first
frame of a series of JPEG, TIFF, BMP, SGI RGB, Cineon, SMPTE DPX or Targa
files. On a Mac, file type will be determined automatically even without a file
extension, if it has been written properly (though OSX does require extensions, in
theory). On a PC or Mac, if you have image files with no extension or file type,
select Just Open It in the Open File dialog box so your files are visible, then
select the first one and SynthEyes will determine its type automatically.
WARNING: SynthEyes is intended for use on known imagery in a secure
professional environment. It is not intended or updated to combat viral threats
posed by images obtained from the Internet or other unknown sources. Such
images may cause SynthEyes or your computer to crash, or even to be taken
over by rogue software, perhaps surreptitiously.
SynthEyes will normally produce an IFL (image file list) file for each file
sequence, and write it into the same folder as the images. The IFL serves as a
reliable placeholder for the entire sequence and saves time re-opening the
sequence, especially on networks, because SynthEyes does not have to re-
check the entire sequence. If the IFL file conflicts with your image-management
system, or you frequently open the same image sequence from different
machines, producing a different file name for the images from each computer,
you can turn off the Write .IFL files for sequences preference.
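For reference, an IFL is just a plain text file naming each frame of the sequence in order, one filename per line (the common image-file-list convention; the frame names below are hypothetical). A minimal sketch of building one by hand:

```python
# Hypothetical frame names for illustration; SynthEyes writes the
# real .ifl automatically when it opens a sequence.
frames = ["shot01.%04d.tga" % n for n in range(1, 5)]

# An IFL body is simply the ordered filenames, one per line.
ifl_text = "\n".join(frames) + "\n"
print(ifl_text)
```

Because the IFL lists every frame explicitly, re-opening it avoids re-scanning the directory, which is the time savings the text describes.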
The Match image-sequence frame #'s preference tells SynthEyes to
generate extra frames at the beginning of a sequence, so that the SynthEyes
frame number will match up with the frame numbers of the files. (There's no need
for this if you already have the entire shot.) This setting can simplify interactions
with some other programs that always require this matching, and can also make
it easier to change the 'in' point of a shot if the edit changes. (But also
see the Prepend Additional Frames setting below.) This preference is not
necessarily compatible with all other applications or their exporters; be sure to
test this preference and understand its impact before using it.

Basic Open-Shot Settings


Adjust the following settings to match your shot. You can change these
settings later with Shot/Edit Shot. Don't be dismayed if you don't understand all
the settings to start; many are provided for advanced situations only. The Image
Aspect is the most important setting to get right. Maya users may want to use a
preset corresponding to one of the Maya presets.
Note that the Image Preprocessing button brings up another panel with
additional possibilities; we'll discuss those after the basic open-shot dialog.


Start Frame, End Frame: the range of frames to be examined. You can
adjust this from this panel, or by shift-dragging the end of the frame range in the
time bar.
Stereo Off/Left/Right. Sequences through the three choices to control the
setup for stereo shots. Leave at Off for normal (monocular) shots, change to Left
when opening the first, left, shot of a stereo pair. See the section on Stereoscopic
shots for more information.
Frame rate: usually 24, 25, or 29.97 frames per second. NTSC (29.97 fps)
is used in the US and Japan, PAL (25 fps) in Europe. Film is generally 24 fps, but
you can use the spinner for over- or under-cranked shots or multimedia projects
at other rates. Some software may have generated or require the rounded 25 or
30 fps; SynthEyes does not care whether you use the exact or approximate
values.
Interlacing: No for film or progressive-scan DV. Yes to stay at 25/30
fps, skipping every other field; this minimizes the amount of tracking required,
with some loss of ability to track rapid jitter. Use Yes, But for the same thing, but
keeping the other (odd) field instead. Use Starting Odd or Starting Even for
interlaced video, depending on the correct first field. Guessing is fine: once you
have opened the shot, step through a few frames, and if they go two steps
forward, one back, select the Shot/Edit Shot menu item and correct the setting.
Use Yes or No for source video compressed with a non-field-savvy codec such
as sequenced JPEG.


Channel Depths: Process. 8-bit/16-bit/Float. Radio buttons. Selects the bit
depth used while processing images in the image preprocessor. Note that
Half is intentionally omitted because it is slow to process; use Float for
processing, then store as Half. These are the same controls as on the Rez tab of
the Image Preprocessor.
Channel Depths: Store. 8-bit/16-bit/Half/Float. Radio buttons. Selects the bit
depth used to store images, after pre-processing. You may wish to
process as floats then store as Halfs, for example. A Half is a 16-bit
floating-point number, so it has enhanced range (though not as much as a float)
but takes only half the storage of a float.
Apply Preset: Click to drop down a list of different film formats; selecting
one of them will set the image aspect, back plate width, squeeze factor, and
indirectly, most of the other aspect and image size parameters. You can make,
change, and delete your own local set of presets using the Save As and Delete
entries at the end of the preset list.
Image aspect ratio: overall image width divided by height. Equals 1.333
for video, 1.777 for HDTV, 2.35 or other values for film. Note: this is normally the
aspect ratio input to the image preprocessor; the "final aspect" shown at lower
right is the aspect ratio coming out of the image preprocessor. If the image
preprocessor is set to Apply mode, applying distortion, this spinner is the output
aspect ratio, which was your original shot's aspect ratio; instead of reading "final
aspect" at lower right, the aspect ratio of the incoming imagery will appear,
labelled "source aspect."
Pixel aspect ratio: width to height ratio of each pixel in the overall image.
(The pixel aspect is for the final image, not the skinnier width of the pixel on an
anamorphic negative.)
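The relationship between the two aspect settings and the pixel dimensions can be sketched as follows (the example resolutions and the approximate PAL DV pixel aspect are assumptions for illustration, not values from the manual):

```python
def image_aspect(width_px, height_px, pixel_aspect=1.0):
    """Overall image aspect = (pixel columns / pixel rows) * pixel aspect."""
    return (width_px / height_px) * pixel_aspect

print(round(image_aspect(1920, 1080), 3))        # 1.778: the HDTV 1.777... value
print(round(image_aspect(768, 576), 3))          # 1.333: square-pixel 4:3
# Non-square pixels: 720x576 with a ~1.0667 pixel aspect also yields ~4:3.
print(round(image_aspect(720, 576, 1.0667), 3))  # 1.333
```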
Back Plate Width: Sets the width of the "film" of the virtual camera, which
determines the interpretation of the focal length. Note that the real values of focal
length and back plate width are always slightly different from the "book values"
for a given camera. Note: Maya is very picky about this value; use whatever it
uses for your shot.
Back Plate Height: the height of the film, calculated from the width, image
aspect, and squeeze.
Back Plate Units. Shows in for inches, mm for millimeters, click to
change the desired display units.
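Why the back plate width matters: together with the focal length, it fixes the horizontal field of view through the standard pinhole-camera relation. A sketch (the 36 mm plate and 24 mm lens numbers are just an assumed example):

```python
import math

def horizontal_fov_deg(back_plate_width, focal_length):
    """Pinhole relation: FOV = 2 * atan(plate_width / (2 * focal)).
    Both arguments must be in the same units (e.g. millimeters)."""
    return math.degrees(2.0 * math.atan(back_plate_width / (2.0 * focal_length)))

print(round(horizontal_fov_deg(36.0, 24.0), 1))  # 73.7 degrees
```

This is why a slightly wrong back plate width shows up as a slightly wrong solved focal length: only their ratio is observable in the image.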
Anamorphic Squeeze: when an anamorphic lens is used on a film
camera, it squeezes a wide-screen image down to a narrower negative. The
squeeze factor reflects how much squeezing is involved: a value of 2 means that
the final image is twice as wide as the negative. The squeeze is provided for
convenience; it is not needed in the overall SynthEyes scene.

37
OPENING THE SHOT

Negative’s Aspect: aspect ratio of the negative, which is the same as the
final image, unless an anamorphic squeeze is present. Calculated from the
image aspect and squeeze factor.
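The squeeze arithmetic in the two entries above can be checked directly: with a 2x anamorphic lens, a 2.35 final image corresponds to a 1.175 negative.

```python
def negatives_aspect(final_image_aspect, squeeze):
    """The negative is 'squeeze' times narrower than the final image."""
    return final_image_aspect / squeeze

print(negatives_aspect(2.35, 2.0))   # 1.175
print(negatives_aspect(1.777, 1.0))  # no squeeze: negative matches the image
```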
Prepend Extra Frames: enabled only during the Change Shot Imagery
menu item, this spinner lets you indicate that additional frames have been added
at the beginning of the shot, and that all the trackers, object paths, splines, etc,
should be shifted this much later into the shot.
F.-P. Range Adjustment: adjusts the shot to compensate for the range of
floating-point image types (OpenEXR, TIFF, DPX). Values should go from 0 to 1;
if they do not, use this control to increase or decrease the apparent shot
exposure by this many f-stops as the shot is read in. This is different from the
image preprocessor's exposure adjustment, because it affects the display and
tracking but not images written back to disk from the image preprocessor.
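Assuming the conventional photographic meaning of an f-stop as a factor of two in exposure, the adjustment behaves roughly like this sketch:

```python
def fstop_adjust(pixel_value, stops):
    """Scale a floating-point pixel value by 2**stops; positive stops
    brighten, negative stops darken (conventional f-stop arithmetic)."""
    return pixel_value * (2.0 ** stops)

print(fstop_adjust(0.25, 2))   # 1.0: two stops brighter
print(fstop_adjust(1.0, -1))   # 0.5: one stop darker
```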
HiRez: For your supervised trackers, sets the amount of image re-
sampling and the interpolation method between pixels. Larger values and fancier
types will give sharper images and possibly better supervised tracking data, at a
cost of somewhat slower tracking. The default Linear x 4 setting should be
suitable most of the time. The fancier types can be considered for high-quality
uncompressed source footage.
Queue Length: how many frames to store in RAM, preferably the whole
shot. The associated display shows how much memory is remaining on your
computer. Other RAM-hungry applications such as Photoshop or your 3-D
application may reduce the amount of memory cited. You can request a RAM
queue length that requires much of your machine's physical memory anyway, if
you don't mind having those other applications slowed down temporarily while
you run SynthEyes. Note that this memory aids playback: only a comparatively
small amount of memory is required for automated tracking except for shots with
a thousand or more frames or large, busy, film scans.
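A rough back-of-the-envelope version of the RAM estimate (uncompressed frames; the HD resolution and 300-frame length are assumed for illustration):

```python
def shot_ram_gb(width, height, frames, channels=3, bytes_per_channel=1):
    """Estimate the RAM queue size for uncompressed frames, in GB."""
    return width * height * channels * bytes_per_channel * frames / 1024.0 ** 3

print(round(shot_ram_gb(1920, 1080, 300), 2))                       # ~1.74 GB at 8 bits
print(round(shot_ram_gb(1920, 1080, 300, bytes_per_channel=2), 2))  # doubles at 16 bits
```

This makes concrete why reducing resolution, selecting a single channel, or dropping to 8 bits in the image preprocessor helps fit the whole shot into the queue.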
16 Bit/Channel: if the incoming files have 16 bits per channel, then this
checkbox controls whether they are stored as 16 bit images, or reduced to 8 bit
images. The 8 bit images are smaller and faster, though slightly less accurate.
Conversely 16 bit images are larger and slower to display, though more accurate.
You can run automatic tracking at 16 bits, then drop to 8 bits for the quickest
scrubbing, if you wish.
Keep Alpha: when checked, SynthEyes will keep the alpha channel when
opening files, even if there does not appear to be a use for it at present (i.e., for
rotoscoping). Turn this on when you want to feed images through the image
preprocessor for lens distortion or stabilization and then write them out, and want
the alpha channel to be processed and written also.
Image Preprocessing: brings up the image preprocessing (preparation)
dialog, allowing various image-level adjustments to make tracking easier (usually
more so for the human than the machine). Includes color, gamma, etc, but also


memory-saving options such as single-channel and region-of-interest processing.
This dialog also accesses SynthEyes' image stabilization features.
Memory Status: shows the image resolution, image size in RAM in
megabytes, shot length in frames, and an estimated total amount of memory
required for the sequence compared to the total still available on the machine.
Note that the last number is only a rough current estimate that will change
depending on what else you are doing on the machine. The memory required per
frame is for the first frame, so this can be very inaccurate if you have an
animated region-of-interest that changes size in the Image Preprocessing
system.
The final aspect ratio coming out of the image preprocessor is also shown
here; it reflects resampling, padding, and cropping performed by the
preprocessor.

After Loading
After you hit OK to load the shot, the image prefetch system begins to
bring it into your computer's RAM for quick access. You can use the playbar and
timebar to play and scrub through the shot.
Note: image prefetch puts a severe load on your processor by design; it
rushes to load everything as fast as possible, taking advantage of
high-throughput devices such as RAID disks. However, if the footage is located
on a low-bandwidth remote drive, prefetch may cause your machine to be
temporarily unresponsive as the operating system tries to acquire the data. If you
need to avoid this, turn on the "Read 1f at a time" option on the Shot menu. It is
a sticky preference. If that does not help enough, turn off prefetch on the Shot
menu, or turn off the prefetch preference to disable prefetch automatically at
each startup.
You can use the Image Preprocessing stage to help fit the imagery into
RAM, as will be described shortly.
Even if the shot does not fit in RAM, you can get RAM playback of
portions of the shot using the little green and red playback markers in the
timebar: you can drag them to the portion you want to loop.
Sometimes you will want to open an entire shot, but track and solve only a
portion of it. You can shift-drag the start or end of the shot in the timebar (you
may want to middle-drag the whole timebar left or right first to see the boundary).
Select the proper coordinate system type (for MAX, Maya, Lightwave, etc)
at this time. Adjust the scene setting (Edit/Edit Scene Settings), or the
preference setting, if desired.

Changing the Imagery


You may need to replace the imagery of a shot, for example, with lower-
or higher-resolution versions. Use the Shot/Change Shot Images menu item to


do this. The shot settings dialog will re-appear, so you can adjust or correct
settings such as the aspect ratio.
When activated as part of Change Shot Images, the shot settings dialog
also features a Prepend Extra Frames setting. If you have tracked a shot, but
suddenly the director wants to extend a shot with additional frames at the
beginning, use the Change Shot Images selection, re-select the shot with the
additional images, and set the Prepend Extra Frames setting to the number of
additional frames. This will shift all the trackers, splines, object paths, etc later in
the shot by that amount. You can extend the trackers or add additional ones, and
re-solve the shot.
Note that if frames from the beginning of the shot are no longer needed,
you should leave them in place, but change the shot start value by shift-dragging
it in the time bar.

Image Preprocessing Basics


The image preparation dialog provides a range of capabilities aimed at the
following primary issues:
 Stabilizing the images, reducing wobbles and jiggles in the source
imagery,
 Making features more visible, especially to you for supervised
tracking,
 Reducing the amount of memory required to store the shot in RAM,
to facilitate real-time playback,
 Correcting image geometry: distortion or the optic axis position.
You can activate the image preprocessing panel either from the Open-
Shot dialog, or from the Shot menu directly.
The individual controls of the image preprocessor are spread among
several tabbed subpanels, much like the main SynthEyes window. These include
Rez, Levels, Cropping, Stabilize, Lens, Adjust, Output, and ROI.


As you modify the image preprocessing controls, you can use the frame
spinner and assorted buttons to move through the shot to verify that the settings
are appropriate throughout it. Fetching and preprocessing the images can take a
while, especially with film-resolution images. You can control whether or not the
image updates as you change the frame# spinner, using the control button on the
right hand side of the image preprocessor.
The image preprocessing engine affects the shots as they are read from
disk, before they are stored in RAM for tracking and playback. The preprocessing
engine can change the image resolution, aspect ratio, and overall geometry.
Accordingly, you must take care if you change the image format: if
you change the image geometry, you may need to use the Apply to Trackers
button on the Output tab, or you will have to delete the trackers and do them
over, since their positions will no longer match the image currently being supplied
by the preprocessing engine.
The image preprocessor allows you to create presets within a scene, so
that you can use one preset for the entire scene, and a separate preset for a
small region around a moving object, for example.

Image Adjustments
As mentioned, the image adjustments allow you to fix up the image a bit to
make it easier for you and SynthEyes to see the features to be tracked. The
preprocessor‘s image adjustments encompass 3-D LUTs, saturation and hue,
level adjustments, and channel selection and/or bit depth.
Rez Tab.
You can change the processing and storage formats or reduce image
resolution here to save memory. Floating point format provides the most
accuracy, but takes much more time and space. Float processing with Half or
16-bit storage is a reasonable alternative much of the time. Most tracking activities
use 16 bit format internally; you may wish to use 8 or 16 bit while tracking for
speed and to maximize storage, then switch to float/float or float/half when you
render undistorted or re-distorted images, if you have high-dynamic-range half or
float input.
It may be worthwhile to use only one of the R, G, or B channels for
tracking, or perhaps the basic luminance, as obtained using the Channel setting.
(The Alpha channel can also be selected, mainly for a quick check of the alpha
channel.)
If you think selecting a single channel might be a good idea, be sure to
check them all. If you are tracking small colored trackers, especially on video,
you will find they often aren‘t very colorful. Rather than trying to increase the
saturation, use a different channel. For example, with small green markers for
face tracking, the red channel is probably the best choice. The blue channel is
usually substantially noisier than red or green.
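As a rough sketch of that trade-off, here is what single-channel selection amounts to. The Rec. 709 luma weights and the sample pixel values are illustrative assumptions, not SynthEyes internals:

```python
def extract_channel(r, g, b, channel="luma"):
    """Pick a single channel from an RGB pixel (values 0..1) for tracking.

    Using one channel cuts storage to a third. The Rec. 709 luma
    weights below are an assumption; SynthEyes' exact formula may differ.
    """
    if channel == "luma":
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    return {"R": r, "G": g, "B": b}[channel]

# Hypothetical face-tracking example: a small green tape marker on skin.
marker = (0.20, 0.55, 0.25)   # greenish marker
skin = (0.80, 0.60, 0.50)     # surrounding skin tone
contrast_red = abs(extract_channel(*skin, "R") - extract_channel(*marker, "R"))
contrast_blue = abs(extract_channel(*skin, "B") - extract_channel(*marker, "B"))
```

With these made-up values, the red channel separates the marker from the skin better than the blue channel does, and blue carries the most noise besides, which is why checking each channel is worthwhile.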
Levels Tab.
SynthEyes reads files "as is" by design; in particular, Cineon files are not
automatically gamma-corrected for display. This permits files to be "passed
through" with the highest accuracy, and also allows you to select the proper
image and display calibration if you like.
The level adjustments are the "simple way": they map the specified Low
level to blackest black (luma=0), and the specified High level to whitest white
(luma=1), so that you can select a portion of the dynamic range to examine. The
Mid level is mapped to 50% gray (luma=0.5) by performing a gamma-type
adjustment; the gamma value is displayed and can be modified. Be careful that,
in the interest of making the image look good on your monitor, you don't
compress the dynamic range into the upper end of brightness, which reduces the
actual contrast available for tracking.
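The mapping can be sketched as follows. This is the common "levels" formulation, in which the gamma is derived from the Mid level; it is an assumption about, not a specification of, SynthEyes' exact math:

```python
import math

def apply_levels(x, low, high, mid):
    """Map [low, high] to [0, 1], with a gamma chosen so mid maps to 0.5.

    Standard 'levels' formulation as found in most image tools; the
    SynthEyes implementation is assumed to be equivalent, not documented.
    """
    t = (x - low) / (high - low)
    t = min(max(t, 0.0), 1.0)              # out-of-range values clip
    tm = (mid - low) / (high - low)
    gamma = math.log(0.5) / math.log(tm)   # the displayed gamma value
    return t ** gamma

# Expand a low-contrast image occupying 0.25..0.75 to the full range:
apply_levels(0.25, 0.25, 0.75, 0.5)  # -> 0.0
apply_levels(0.75, 0.25, 0.75, 0.5)  # -> 1.0
```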
The level adjustments can be animated to adjust over the course of the
shot; see the section on animated shot setup below.
The hue adjustment can be used to tweak the color before the channel
selection; by making yellows red, you can have a virtual yellow channel, for
example.
The exposure control here does affect the processed images if you write
them back to disk. That is different from the F.-P. Range Adjustment setting on
the Shot Setup panel. See the section on using floating-point images.
Note that you can change the image adjustments in this section without
having to re-track or adjust the trackers, since the overall image geometry does
not change.


3-D Look-Up Tables (LUTs)


If you have them available, SynthEyes can read .3dl, .csp, or .cube 3-D
color lookup tables and use them to process your images. These allow rather
complex color manipulations to be performed, as well as potentially allowing you
to match your color monitor exactly. The .csp format is the most powerful and
flexible.
SynthEyes will not build these tables for you; it is not a color-correction
tool. You will need to obtain the tables from other sources, such as the film
scanning house. Since you can only apply one LUT at a time, there is a script for
combining LUTs, for example to combine a film LUT with a LUT for your own
monitor.
You can find additional tools for manipulating and converting LUTs online,
for example at digitalpraxis.net, including a tool for 'ripping' a LUT from a before
image and the desired after image. That permits you to adjust a sample image in
your favorite color-correction application, then burn what you did to it into a 3-D
LUT SynthEyes can use. (Their tools are commercial software, not freeware; we
have no relationship with them and cannot vouch for the tools in any fashion,
merely cite them as a potential example.)

Floating-Point Images
SynthEyes can handle floating-point images from EXR, TIFF, and DPX
image formats. Floating-point images offer the greatest accuracy and dynamic
range, at the expense of substantially greater memory requirement and
processing time. The 64-bit SynthEyes version is recommended for handling
floating-point images due to their large size. DPX images will offer the highest
performance.
Floating-point images may use 32-bit floats, or the 16-bit "half" format. The
half format does not have as much dynamic range, but it is almost always
enough for practical work, even using High-Dynamic-Range images. The good
news is that half-floats are half the size, only 16 bits. The bad news is that it
takes a substantial amount of time to translate between the half format and an
8-bit, 16-bit, or float format you can track or display.
Accordingly, SynthEyes offers separate bit-depth selections for processing
and for storage. If you need the extended range of a float (or 16-bit int) format,
you can use that for any processing (especially gamma correction and 3-D
LUTs), to reduce banding, then select a smaller storage format, Half, 16-bit, or 8-
bit. But keep in mind that additional processing time will be required.
Though a floating-point image—float or half—provides accuracy and
dynamic range, to track or display it, it must be converted to a standard 8-bit or
16-bit form, albeit temporarily. To understand the necessary controls, here are a
few details on how that is done (industry-wide).
Eight and sixteen bit (unsigned) integers are normally considered to range
from 0 to 255 or 65535. But to convert back and forth, the numbers are


considered to range from 0 to 1.0 (in steps of 1/255), or 0 to 1.0 (in steps of
1/65535).
Correspondingly, the most-used floating-point values also range from 0 to
1.0. With all the numbers ranging from 0 to 1, it is easy to convert back and
forth.
But the floating-point values do not have to range solely between 0 and 1.
With plenty of dynamic range in the original image, there may be highlights that
are much brighter, or details in the shadows that are much darker. The 0 to 1
range is the only portion that will be converted to or from 8- or 16-bit.
The F.-P. Range Adjustment (F.-P. for floating-point) on the Shot setup
dialog allows you to convert a larger or smaller range of floating-point numbers
into the 0 to 1 range where they can be inter-converted. The effect of this control
is to brighten or darken the displayed image, but it affects only the display and
tracking—not the values themselves.
You can adjust the F.P. Range Adjustment, and it will not affect the
floating-point images later written back to disk after lens distortion or
stabilization.
This is quite different than the Exposure control on the Levels tab. The
Exposure control changes the actual floating-point values that will be written back
to disk later. The two controls serve different purposes, though the end result
may appear the same at first glance.
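The distinction can be sketched like this; treating exposure as a simple linear scale is an assumption for illustration only:

```python
def display_value(stored, fp_range):
    """F.-P. Range Adjustment: rescales only the displayed/tracked copy.

    The stored floating-point value is untouched and is written back
    to disk as-is.
    """
    return min(max(stored / fp_range, 0.0), 1.0)

def apply_exposure(stored, exposure_scale):
    """Exposure (Levels tab): changes the value that is written to disk.

    A linear scale is assumed here for illustration.
    """
    return stored * exposure_scale

pixel = 2.0                  # an HDR highlight above 1.0
display_value(pixel, 4.0)    # -> 0.5 on screen; the file still holds 2.0
apply_exposure(pixel, 0.5)   # -> 1.0 actually stored
```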

Minimizing Grain
The grain in film images can perturb tracking somewhat. Use the Blur
setting on the image preparation panel to slightly filter the image, minimizing the
grain. This tactic can be effective for compression artifacts as well.
SynthEyes can stabilize the images, re-size them, or correct for lens
distortion. As it does that, it interpolates between the existing pixels. There are
several interpolation modes available. You can produce a sharper image when
you are re-sampling using the more advanced modes, but you increase the grain
as you do so.

Handling Strobes and Explosion Shots


Quickly varying lighting can cause problems for tracking, especially
supervised tracking. You can reduce the lighting variations by hi-pass filtering the
image with the Hi-Pass setting. The image will turn into a fairly monotonous gray
version (consider using only the Luma channel to save memory). The Hi-Pass
setting is also a Gaussian filter size, but it is generally much larger than a 2-pixel
blur to compensate for grain, say around 10 pixels. The larger the value, the
more the original image will "show through," which is not necessarily the
objective, and the longer it will take to process.


You can increase the hi-pass image contrast using the Levels settings, for
example low=0.25, high=0.75.
You can use a small blur for grain/compression in conjunction with the
high-pass filtering. It will also reduce any slight banding if you have used the
Levels to expand the range.
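Conceptually, hi-pass filtering subtracts a large-radius local average and recenters the result at mid-gray, so a constant lighting change cancels out. A minimal 1-D sketch, with a box average standing in for the Gaussian filter SynthEyes actually uses:

```python
def high_pass(signal, radius):
    """Remove slowly-varying lighting: subtract a local average and
    recenter at mid-gray (0.5). A box average stands in for the
    Gaussian blur; 1-D for brevity."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        local_avg = sum(signal[lo:hi]) / (hi - lo)
        out.append(signal[i] - local_avg + 0.5)
    return out

# A frame and the same frame under strobe lighting (+0.5 everywhere):
frame_a = [0.2, 0.2, 0.8, 0.2, 0.2]     # dark frame with one feature
frame_b = [v + 0.5 for v in frame_a]    # strobe-lit copy
# high_pass(frame_a, 2) equals high_pass(frame_b, 2): the global
# lighting change cancels, while the local feature at index 2 survives.
```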

Memory Reduction
It is much faster to track, and check tracking, when the shot is entirely in
the PC‘s RAM memory, as fetching each image from disk, and possibly
decompressing it, takes an appreciable amount of time. This is especially true for
film-resolution images, which take up more of the RAM, and take longer to load
from disk.
SynthEyes offers several ways to control RAM consumption, ranging from
blunt to scalpel-sharp.
Starting from the basic Open-Shot dialog, if your source images have
16-bit data, you can elect to reduce them to 8-bit storage by unchecking the
16-bit checkbox, reducing memory by a factor of two. Of course, this doesn't
help if the image is already 8-bit.
If you have a 2K or 4K resolution film image, you might be able to track at
a lower resolution. The DeRez control offers ½ and ¼ image-resolution
selections. If you reduce resolution by ½, the storage required drops to ¼ the
previous level, and a reduction by ¼ cuts the storage to 1/16th the prior
amount, since the resolution reduction affects both horizontal and vertical
directions. Note that by reducing the incoming image resolution, your tracks will
have a higher noise level, which may be unacceptable; that is your decision.
If you can track using only a single channel, such as R, G, or luma, you
obtain an easy factor of 3 reduction in storage required.
The most precise storage reduction tool is the Region Of Interest (ROI),
which preserves only a moving portion of the image that you specify, and makes
the rest black. The black portion does not require any RAM storage, so if the ROI
is only 1/8th the width and height of the image, a reduction by 1/64th of storage is
obtained.
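The arithmetic behind these reductions can be sketched as a quick RAM estimate; the 2K frame size and frame count are illustrative, and the formula is simply resolution × channels × bytes per sample × frames:

```python
def shot_ram_bytes(width, height, frames, channels=3, bytes_per_sample=1,
                   derez=1.0, roi_fraction=1.0):
    """Estimate RAM for a cached shot (a rough sketch, not SynthEyes'
    actual accounting). DeRez scales both axes; the ROI keeps only a
    fraction of each frame's area."""
    w = width * derez
    h = height * derez
    return int(w * h * roi_fraction * channels * bytes_per_sample * frames)

full = shot_ram_bytes(2048, 1556, 100)                 # 2K film, 8-bit RGB
half = shot_ram_bytes(2048, 1556, 100, derez=0.5)      # 1/4 the storage
roi = shot_ram_bytes(2048, 1556, 100,
                     roi_fraction=(1 / 8) ** 2)        # 1/64 the storage
```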
The region of interest is very useful with object-type shots, such as
tracking a face or head, a chestplate, a car driving by, etc, where the interesting
part is comparatively small. The ROI is also very useful in supervised tracking,
where the ROI can be set up for a region of trackers; once that region is tracked,
a different ROI can be configured for the next group. A time savings can be
achieved even though the next group will require an image sequence reload.
(See the section on presets, below, to be able to save such configurations.)
The ROI is controlled by dragging it with the left mouse button in the
Image Preprocessing dialog's viewport. Dragging the size-control box at the
lower right of the ROI will change the ROI size.


The next section describes animating the preprocessing level and ROI.
It can also be helpful to adjust the ROI controls when doing supervised
tracking of shots that contain a non-image border (an artifact of earlier
processing). This extra border can defeat the mechanism that turns off
supervised trackers when they reach the edge of the frame, because they run
out of image to track before reaching the actual edge. Once the ROI has been
decreased to exclude the image border, the trackers will shut off when they go
outside the usable image.
As with the image adjustments, changing the memory controls does not
require any re-tracking, since the image geometry does not change.

Animated Shot Setup


The Level, Saturation/Hue, lens Field of View, Distortion/Scale, stabilizer
adjustment, and Region of Interest controls may be animated, changing values
over the course of the shot.
Normally, when you alter the Level or ROI controls, a key at the first frame
of the shot is changed, setting a fixed value over the entire shot.

To animate the controls, turn on the Make Keys checkbox at the lower
right of the image prep dialog. Changes to the animated controls will now create
keys at the current frame, causing the spinners to light up with a red outline on
keyframes. You can delete a keyframe by right-clicking a spinner.
If you turn off Make Keys after creating multiple keys, subsequent
changes will affect only the keyframe at the start of the shot (frame zero), and not
subsequent keys, which will rarely be useful.
You can navigate within the shot using the next frame and previous frame
buttons, the next/previous key buttons, or the rewind and to-end buttons.

Temporarily Disabling Preprocessing


Especially when animating a ROI, it can be convenient to temporarily turn
off most of the image preprocessor, to help you find what you are looking for. The
enable button (a stoplight) at the lower right will do this.
The color modifications, level adjustment, blur, down-sampling, channel
selection, and ROI are all disabled by the enable button. The padding and lens
distortion are not affected, since they change the image geometry; if the
geometry changed, you could not place the ROI at the correct location.

Disabling Prefetch
SynthEyes reads your images into RAM using a sophisticated
multithreaded prefetch engine, which runs autonomously much of the time when
nothing else is going on. If you have a smaller machine, or are trying to
run some renders in the background, you can turn off the Shot/Enable prefetch
setting on the main menu.


Get Going! You don't have to wait for prefetch to finish after you open a
shot. It doesn't need courtesy. You can plough ahead with what you want to do;
the prefetcher is designed to work quietly in the background.

Correcting Lens Distortion


Most animation software assumes that the camera is perfect, with no lens
distortion, and that the camera's optic axis falls exactly in the center of the image.
Of course, the real world is not always so accommodating.
SynthEyes offers two methods to determine the lens distortion, either via a
manual process that examines the image curvature of lines that are straight in
the real world, or as a result of the solving process, if enough reliable trackers
are available.
SynthEyes accommodates the distortion, but your animation package
probably will not. As a consequence, a particular workflow is required that we will
introduce shortly and in the section on Lens Distortion.
The image preprocessing system lets distortion be removed, though after
doing so, any tracking must be repeated or corrected, making the manual
distortion determination more useful for this purpose.
The image preprocessing dialog offers a spinner to set the distortion to
the determined value. A Scale spinner allows the image to be scaled up or
down a bit, as needed to compensate for the effect of the distortion removal.
You can animate the distortion and scale to correct for varying distortion
during zoom sequences.

Image Centering
The camera's optic axis is the point about which the image expands or
contracts as objects move closer or further away. Lens distortion is also centered
about this point. By convention of SynthEyes and most animation and
compositing software, this point must fall at the exact center of the image.
Usually, the exact optic center location in the image does not greatly affect
the 3-D solving results, and for this reason, the optic center location is notoriously
difficult to determine from tracking data without a laboratory-grade camera and
lens calibration. Assuming that the optic axis falls in the center is good enough.
There are two primary exceptions: when an image has been cropped off-
center, or when the shot contains a lot of camera roll. If the camera rolls a lot, it
would be wise to make sure the optic axis is centered.
Images can be cropped off-center during the first stages of the editorial
process (when a 4:3 image is cropped to a usable 16:9 window), or if a film
camera is used that positions the optic axis to allow for a sound channel when
there is none, or vice versa (no sound channel is allowed for, but there is one).


Image stabilization or pan/scan-type operations can also destroy image
centering, which is why SynthEyes provides the tools to perform them itself, so
they can be done correctly.
Of course, shots will arrive that have already been seriously cropped. For
this reason, the image preprocessing stage allows images to be padded up to
their original size, putting the optic axis back at the correct location. Note that
padding up is what is required, not further cropping! It will be important to
identify the degree of earlier cropping, so that it can be corrected.
The Fix Cropping (Pad) controls have two sets of three spinners, three
each for horizontal and for vertical. Both directions operate the same way.
Suppose you have a film scan such that the original image, with the optic
axis centered, was 33 mm wide, but the left 3 mm were a sound track that has
been cropped. You would enter 3 mm into the Left Crop spinner, 30 mm into the
Width Used spinner, and 0 mm into the Right Crop spinner. The image will be
padded back up to compensate for the imagery lost during cropping.
The Width Used spinner is actually only a calculation convenience; if you
later reentered the image preprocessing dialog, you would see that the Left Crop
was 0.1 and the Width Used 1.0, i.e., that 10% of the used width was cropped
from the left.
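The padding arithmetic for that example can be sketched as follows; the 2048-pixel scan width is an illustrative assumption, and the rounding is ours, not necessarily SynthEyes':

```python
def pad_for_crop(left_crop, width_used, right_crop, used_pixels):
    """Compute padding that restores the optic axis to the image center.

    Crop values are in any consistent unit (mm here); used_pixels is
    the resolution of the cropped footage. Returns (left_pad, right_pad,
    padded_width) in pixels. A sketch of the arithmetic only.
    """
    px_per_unit = used_pixels / width_used
    left_pad = round(left_crop * px_per_unit)
    right_pad = round(right_crop * px_per_unit)
    return left_pad, right_pad, used_pixels + left_pad + right_pad

# 33 mm original, left 3 mm (sound track) cropped, scanned at 2048 px.
# The left crop is 3/30 = 0.1 of the used width, as the dialog shows.
pad_for_crop(3.0, 30.0, 0.0, 2048)   # -> (205, 0, 2253)
```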
The Fix Cropping (Pad) controls change the image aspect ratio (see
below) and image resolution values on the Open Shot dialog, since the image
now includes the padded regions. The padding region will not use extra RAM,
however.
It is often simpler to fix the image centering in a way that does not change
the image aspect ratio, so that you can stay with the official original aspect ratio
throughout your workflow. For example, if the original image is 16:9 HD, it is
easiest to stay with 16:9 throughout, rather than having the ratio change to 1.927
due to a particular camera's decentering. The Maintain original aspect
checkbox will permit you to update the image center coordinates, automatically
creating new padding values that keep the aspect ratio the same.

Image Preparation Preset Manager


It can be helpful to have several different sets of image preprocessor
settings, tailored to different regions of the image, or to different moving objects,
or different sections of the overall shot. A preset manager permits this; it appears
as a drop-down list at the center-bottom of the image preparation dialog.
You can create a preset by selecting the New Preset item from the list;
you will be prompted for the name (which you can later change via Rename).
The new preset is created with the current settings, your new preset name
appears and is selected in the preset manager listbox, and any changes you
make to the panel continue to update your new preset. (This means that when
you are creating several presets in a row, create each preset before modifying
the controls for that preset.)


Once you have created several presets, you can switch among them
using the preset manager list. All changes in the image preprocessor controls
update the preset active at that time.
If you want to play for a bit without affecting any of your existing presets,
switch to the Preset Mgr. setting, which acts as a catchall (it disconnects you
from all presets). If you then decide you want to keep the settings, create a new
preset.
To reset the image preprocessor controls (and any active preset) back to
the initial default conditions, which do nothing to the incoming image, select the
Reset item from the preset manager. When you are creating several presets, this
can be handy, allowing you to start a new preset from scratch if that is quicker.
Finally, you can delete the current preset by selecting the Delete item.

Rendering Sequences for Later Compositing


The tracking results provided by SynthEyes will not produce a match
within your animation or compositing package unless that package also uses the
same padded, stabilized, resampled, and undistorted footage that SynthEyes
tracked. This is also true of SynthEyes's perspective window.
Use the Save Sequence button on the Image Preparation dialog‘s Output
tab to save the processed sequence. If the source material is 16 bit, you can
save the results as 16 bit or 8 bit. You can also elect whether or not to save an
alpha channel, if present. If the source has an alpha channel, but you are not
given the option to save it, open the Edit Shot dialog and turn on the Keep Alpha
checkbox.
Output file formats include QuickTime, BMP, Cineon, DPX, JPEG,
OpenEXR, PNG, SGI, Targa, and TIFF (Mac only). Details of the supported
number of bits per channel and alpha availability vary with format and platform.
If you have stabilized the footage, you will want to use this stabilized
footage subsequently.
However, if you have only removed distortion, you have an additional
option that maximizes image quality and minimizes the changes made to the
original footage: you can take your rendered effects and run them back through
the image preprocessor (or perhaps your compositing package) to re-introduce
the distortion and cropping specified in the image preprocessing panel, using
the Apply It checkbox.
This redistorted footage can then be composited with the original footage,
preserving the match.
The complexity of this workflow is an excellent argument for using high-
quality lenses and avoiding excessively wide fields of view (short focal lengths).
You can also use the Save Sequence dialog to render an alpha mask
version of the roto-spline information and/or green-screen keys.

Automatic Tracking

Overall process
The automatic tracking process can be launched from the Summary panel
(Full Automatic or Run Auto-tracker), by the batch file processor, or controlled
manually. By breaking the overall process down into sub-steps, you can partially
re-run it with different settings, saving time. Though normally you can launch the
entire process with one click, the following write-up breaks it down for your
education, and sometimes you will want to run or re-run the steps yourself.
The automatic tracking process has four primary stages, as controlled by
the Feature panel:
1. Finding potential trackable points, called blips
2. Linking blips together to form paths
3. Selecting some blip paths to convert to trackers
4. Running the solving process to find the 3-D coordinates of the trackers, as
well as the camera path and field of view.
You can optionally include a Step 3.5: fine-tuning the trackers, which re-
analyzes the trackers using supervised tracking techniques to improve their
accuracy.
(After the automatic tracking process runs, you will still be cleaning up
trackers, setting up a coordinate system, and exporting, but those topics are
discussed separately and are the same for automatic and supervised tracking.)
Typically, blips are computed for the entire shot length with the Blips all
frames button. They can be (re)computed for a particular range by adjusting the
playback range, and computing blips over just that range. Or, the blips may be
computed for a single frame, to see what blips result before tracking all the
frames, or when changing blip parameters.
As the blips are calculated, they are linked to form paths from frame to
frame to frame.
Finally, complete automatic tracking by clicking Peel All, which will select
the best blip paths and create trackers for them. Only the blip paths of these
trackers will be used for the final camera/object solution.
You can tweak the automatic tracking process using the controls on the
Advanced Features panel, a floating dialog launched from the Feature control
panel.
You can delete bad automatically-generated trackers the same as you
would a supervised tracker; convert specific blip paths to trackers; or add


additional supervised trackers. See Combining Automatic and Supervised
Tracking for more information on this subject.
If you wish to completely redo the automated tracking process, first click
the Delete Leaden button to remove all automatic trackers (i.e., those with
lead-gray tooltip backgrounds), and then the Clear all blips button. After
changes to the roto splines, you may also need to click Link Frames; in most
cases you will be prompted for that.
Note that the calculated blips can require megabytes of disk space to
store. After blips have been calculated and converted to trackers, you may wish
to clear them to minimize storage space. The Clean Up Trackers dialog
encourages this. (There is also a preferences option to compress SynthEyes
scene files, though this takes some additional time when opening or saving files.)

Motion Profiles
SynthEyes offers a motion profile setting that allows a trade-off between
processing speed and the range of image motions (per frame) that can be
accommodated. If the image is changing little per frame, there is no point
searching all over the image for each feature. Additionally, a larger search area
increases the potential for a false match to a similar portion of the image.
The motion profile may be set from the summary or feature panels.
Presently, three primary settings are available:
 Normal Motion. A wider search, taking longer.
 Crash Pan. Use for rapidly panning shots, such as tripod shots. Not only a
broader search, but allows for shorter-lived trackers that spin rapidly across
the image.
 Low Detail. Use for green-screen shots where much of the image has very
little trackable detail.
There are several other modes from earlier SynthEyes versions which may be
useful on occasion.

Controlling the Trackable Region


When you run the automatic tracker, it will assign all the trackers it finds to
the camera track. Sometimes there will be unusable areas, such as where an
actor is moving around, or where trackers follow a moving object that is also
being tracked.
SynthEyes lets you control this with animated rotoscoping splines, or an
alpha channel. For more information, see the section Rotoscoping with animated
splines and the alpha channel.


Green-Screen Shots
Although SynthEyes is perfectly capable of tracking shots with no artificial
tracking marks, you may need to track blue- or green-screen shots, where the
monochromatic background must be replaced with a virtual set. The plain
background is often so clean that it has no trackable features at all. To prevent
that, green-screen shots requiring 3-D tracking must be shot with tracking marks
added onto the screen. Often, such marks take the form of an X or + made of
electrical or gaffing tape. However, a dot or small square is actually more useful
to SynthEyes over a wide range of angles. With a little searching, you can often
locate tape that is of a somewhat different hue or brightness than the background —
just different enough to be trackable, but sufficiently similar that it does not
interfere with keying the background.
You can tell SynthEyes to look for trackers only within the green- or blue-
screen region (or any other color, for that matter). By doing this, you will avoid
having to tell SynthEyes specifically how to avoid tracking the actors.
You can launch the green-screen control dialog from the Summary control
panel, using the Green Screen button.

When this dialog is active, the main camera view will show all keyed (trackable)
green-screen areas, with the selected areas set to the inverse of the key color,
making them easy to see. [You can also see this view from the Feature panel's
Advanced Feature Control dialog by selecting B/G Screen as the Camera
View Type.]
Upon opening this dialog, SynthEyes will analyze the current image to
detect the most-common hue. You may want to scrub through the shot for a
frame with a lot of color before opening the dialog. Or, use the Scrub Frame


control at lower right, and hit the Auto button (next to the Average Key Color
swatch) as needed.
After the Hue is set, you may need to adjust the Brightness and
Chrominance so that the entire keyed region is covered. Scrub through the shot
a little to verify the settings will be satisfactory for the entire shot.
The radius and coverage values should usually be satisfactory. The radius
reflects the minimum distance from a feature to the edge of the green-screen (or
actor), in pixels. The coverage is the amount of the area within the radius that
must match the keyed color. If you are trying to match solid non-key disks that go
as close as possible to an actor, you might want to reduce the radius and
coverage, for example.
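The radius/coverage test can be sketched like this; the square neighborhood and the mask representation are simplifying assumptions, not the actual SynthEyes implementation:

```python
def is_trackable(feature_px, key_mask, radius, coverage):
    """Decide if a candidate feature sits safely inside the keyed screen.

    key_mask[y][x] is True where the pixel matched the key color;
    'coverage' is the fraction of the neighborhood within 'radius' that
    must match. A square neighborhood stands in for the real test."""
    x0, y0 = feature_px
    matched = total = 0
    for y in range(y0 - radius, y0 + radius + 1):
        for x in range(x0 - radius, x0 + radius + 1):
            total += 1
            if 0 <= y < len(key_mask) and 0 <= x < len(key_mask[0]):
                matched += key_mask[y][x]
    return matched / total >= coverage
```

With a high coverage requirement, features near the screen edge or near an actor fail the test; lowering the radius and coverage lets features closer to non-key areas through, as described above.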
You should use the Low Detail motion hint setting at the top of the
Summary panel when tracking green-screen shots (it normally reads Normal).
SynthEyes's normal analysis looks for the motion of details in the imagery, but if
most of the image is a featureless screen, that process can break down.
With Low Detail selected, SynthEyes uses an alternate approach. SynthEyes will
configure the motion setting automatically the first time you open the
green-screen panel, as it turns on the green-screen enable. See also a technique
for altering the auto-tracker parameters to help green-screen shots.
The green-screen settings will be applied when the auto-track runs. Note
that it is undesirable to have all of the trackers on a distant flat back wall. You
need to have some trackers out in front to develop perspective. You might
achieve this with tracking marks on the floor or (stationary) props, or by rigidly
hanging trackable items from the ceiling or light stands. In these cases, you will
want to use supervised tracking for these additional non-keyed trackers.
Since the trackers default to a green color, if you are handling actual
green-screen shots (rather than blue), you will probably want to change the
tracker default color, or change the color of the trackers manually. See Keeping
Track of the Trackers for more information.
After green-screen tracking, you will often have several individual trackers
for a given tracking mark, due to frequent occlusions by the actors. As well as
being inconvenient, it does not give SynthEyes as much information as it would if
they were combined. You can use the Coalesce Nearby Trackers dialog to join
them together; be sure to see the Overall Strategy subsection.
You can write the green-screen key as an alpha-channel or RGB image
using the image preprocessor. Any roto-splines will be factored in as well. With a
little setup, you can use the roto-splines as garbage mattes, and use small roto
dots to repair the output matte to cover up tracking marks.

Promoting Blips to Trackers


The auto-tracker identifies many features (blips), and combines them into
trails, but only converts a fraction of them to trackers to be used in generating the


3-D solution. Some trails are too short, or crammed into an already densely-
populated area.
However, you may wish to have a tracker at a particular location to help
achieve an effect. You can create a supervised tracker if you like, but a quicker
alternative can be to convert an existing blip trail into a tracker—in SynthEyes-
speak, this is Peeling a trail.
To see this, open the flyover shot and auto-track it again. Switch to the
Feature panel and scrub into the middle of the shot. You'll see many little
squares (the blips) and red and blue lines representing the past and future paths
(the trails).

You can turn on the Peel button, then click on a blip, converting it to a full
tracker. Repeat as necessary.
Alternatively, you can use the Add Many Trackers dialog to do just that in
an intelligent fashion—after an initial shot solution has been obtained.

Keeping Track of the Trackers


After an auto-track, you will have hundreds or even thousands of trackers.
To help you see and keep track of them, SynthEyes allows you to assign a color
to them, typically so you can group together all the related trackers.


SynthEyes also provides default colors for trackers of different types.
Normally, the default color is a green. Separate default colors for supervised,
automatic, and zero-weighted trackers can be set from the Preferences panel.
You can change the defaults at any time, and every tracker will be updated
automatically—except those for which you have specifically set a color.

You can assign the color by clicking the swatch on the Tracker panel, or
by double-clicking the miniature swatch at the left of the tracker name in
the graph editor. If you have already created the trackers, lasso-select the
group, and shift-click to add to it. Then click the color swatch on the Tracker
panel to set the color. In the graph editor panel, if you have several selected,
double-click the swatch to cause the color of all the trackers in the group to be
set. Right-clicking the track panel swatch will set the color back to the default.
If you are creating a sequence of supervised trackers, once you set a
color, the same color will be used for each succeeding new tracker, until you
select an existing tracker with a different color, or right-click the swatch to get
back to the default.
You will almost certainly want to change the defaults, or set the colors
manually, if you are handling green-screen shots.
You will see the tracker colors in the camera view, perspective view, and
3-D viewports, as well as the miniature swatch in the graph editor.
If you have set up a group of trackers with a shared color, you can select
the entire group easily: select any tracker in the group, then click the Edit/Select
same color menu item or control-click the swatch in the graph editor.
To aid visibility, you can select the Thicker trackers option on the
preferences panel. This is particularly relevant for high-resolution displays, where
the pixel pitch may be quite small. The Thicker trackers option will turn on by
default for monitors over 1300 pixels horizontal resolution.
Note that there are some additional rules that may occasionally override
the color and width settings, with the aim of improving clarity and reducing clutter.

Advanced Feature Dialog Effects


The Advanced Feature Dialog controls a few of the internal technical
parameters of the auto-tracker. Here we present a few specific uses of the panel,
both revolving around situations where there are too many blips, degrading auto-
tracking.
Green Screen Shots
A green-screen shot has areas that are 'flat,' meaning that the RGB
values are largely constant over a large area of the screen. Normally, SynthEyes
works adaptively to locate trackable features distributed across the image, but
that can backfire on green-screen shots, because there are usually no features
on the screen, except for the comparatively few that you have provided.
SynthEyes then goes looking for video noise, film grain, small shadows, etc.
Some of the time, it is successful at tracking small defects in the screen.
You can reduce the number of blips generated on these shots by turning
down the Density/1K number in the Small column of the Advanced Feature
dialog, typically to 1%. Try it with Auto Re-blip turned on, then close the panel,
Clear All Blips and do a Blips All Frames.
Too-Busy High Resolution Shots
High-resolution shots can have a similar problem as green-screen shots—
too many blips—which can result in trackers with many mistakes, as the blips are
linked incorrectly. The high-resolution source images can contain too much
detail, even if it is legitimate detail.
In this case, it is appropriate to tweak the Feature Size numbers on the
Advanced Feature dialog. You can first raise the Small Feature Size to 15, and if
necessary, raise both to 25.
This should reduce the number of features and the chances that trackers
jump along rows of repeated features. However, larger feature size values will
slow down processing substantially.

Skip-Frame Track

The Features panel contains a skip-frame checkbox that causes a
particular frame to be ignored for automatic tracking and solving. Check it if the
frame is subject to a short-duration extreme motion blur (camera bump), an
explosion or strobe light, or if an actor suddenly blocks the camera.
The skip-frames checkbox must be applied to each individual frame to be
skipped. You should not skip more than 2-3 frames in a row, or too many frames
overall; otherwise you can make it more difficult to determine a camera solution,
or at least create a temporary slide.
You should set up the skip-frames track before autotracking. There is
some support for changing the skipped frames after blipping and before linking,
but this is not recommended; you may have to rerun the auto-tracking step.

Strengths and Limitations


The automatic tracker works best on relatively well-controlled shots with
plenty of consistent spot-type feature points, such as aerial and outdoor shots.
Very clean indoor sets with many line features can result in few trackable
features. A green-screen with no tracking marks is un-trackable, even if it
has an actor, since the (moving) actor does not contribute usable trackers.


Rapid feature motion can cause tracking problems, either causing loss of
continuity in blip tracks, or causing blips to have such a short lifetime that they
are ignored. Use the Crash Pan motion profile to address such shots.
Similarly, situations where the camera spins about its optic axis can
exceed SynthEyes expectations.
You can add supervised guide trackers to help SynthEyes determine the
frame-to-frame correspondence in difficult shots (in Low Detail mode). A typical
example would be a camera bump or explosion with several unusable frames,
disabled with the Skip Frames track. If the camera motion from before to after the
bump is so large that no trackers span the bump, adding guide trackers will
usually give SynthEyes enough information to reconnect the blip trails and
generate trackers that span the bump.

Supervised Tracking
Solving for the 3-D positions of your camera and elements of the scene
requires a collection of trackers tracked through some or all of the shot.
Depending on what happens in your shot, 7 or 8 may be sufficient (at least 6),
but a complex shot, with trackers becoming blocked or going off the edge of the
frame, can require substantially more. If the automated tracker is unable to
produce satisfactory trackers, you will need to add trackers directly. Or, you can
use the techniques here to improve automatically-generated ones. Specific
supervised trackers can be especially valuable to serve as references for
inserting objects, or for aligning the coordinate system as desired.
WARNING: Tracking, especially supervised tracking, can be stressful to
body parts such as your fingers, hands, wrists, eyes, and back, like any other
detail-oriented computer activity. Be sure to use an ergonomically sound
workstation setup and schedule frequent rest breaks. See Click-on/Click-off
mode.

To begin supervised tracking, select the Tracker control panel. Turn
on the Create button.
Tip: You can create a tracker at any time by holding down the 'C' key and
left-clicking in the camera view. Or, right-click in the camera view and select the
Create Trackers item. In either case you will be switched to the Tracker control
panel.

Rewind to the beginning of the shot.


Locate a feature to track: a corner or small spot in the image that you
could reach in and put your finger on. Do not select a reflective highlight that
moves depending on camera location. Left-click on the center of your feature,
and while the button is down, position the tracker accurately using the view
window on the command panel. The gain and brightness spinners located next to
the mini-tracker-view can make shadowed or blown-out features more visible.
Adjust the tracker size and aspect ratio to enclose the feature, and a little of the
region around it, using either the spinner or inner handle.
Adjust the Search size spinner or outer handle based on how uncertainly
the tracker moves each frame. This is a matter of experience. A smooth shot
permits a small search size even if the tracker accelerates to a higher rate.
Create any number of trackers before tracking them through the shot. It is
easier to do either one or 3-6 at a time.

To track them through the shot, hit the Play or frame forward button
or use the mouse scroll wheel inside the mini-tracker-view (scrubbing the time
bar does not cause tracking). Watch the trackers as you move through the shot.
If any get off track, back up a frame or two, and drag them in the image back to
the right location. The Play button will stop automatically if a tracker misbehaves,
already selected for easy correction.

Prediction Modes and Hand-Held Shots


SynthEyes predicts where the feature will appear in each new frame. It
has different ways to do this, depending on your shot. By default, in the Steady
camera mode, it assumes that the shot is smooth, from a steadi-cam, dolly, or
crane, and uses the previous history over a number of frames to predict its next
position.
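As an illustration of this kind of prediction, here is a minimal constant-velocity extrapolator in Python. It is a simplified sketch of the idea, not SynthEyes's actual predictor, which uses a longer history of frames.

```python
def predict_next(history):
    """Predict a tracker's next 2-D position from its recent path.

    Constant-velocity extrapolation from the last two frames; a
    simplified stand-in for the smooth-motion predictor described
    above, which considers more of the previous history.
    """
    if len(history) < 2:
        return history[-1]          # not enough history; stay put
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```

A smooth dolly or crane move is exactly the case where such an extrapolation stays accurate, which is why the Steady camera mode allows a small search size.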
If you have a hand-held shot, select Hand-Held: Use others on the Track
menu. In this mode, SynthEyes uses other, already-tracked, trackers to predict
the location of new ones. Start by tracking a few easy-to-track features that are
distributed around the image. You will usually need a large search area, and to
re-key fairly frequently if the shot is very choppy. But as you add trackers, you
can greatly reduce the search size and will need to set new keys only
occasionally as the pattern changes.
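The flavor of the Use others prediction can be sketched as follows. The assumption that a simple average displacement of the other trackers is used is ours, purely for illustration.

```python
def predict_from_others(prev_pos, others_prev, others_now):
    """Predict a tracker's position on a new frame from other trackers.

    Shifts the tracker by the average frame-to-frame displacement of
    already-tracked trackers, in the spirit of the Hand-Held: Use
    others mode; the real predictor is presumably more sophisticated.
    """
    n = len(others_now)
    dx = sum(b[0] - a[0] for a, b in zip(others_prev, others_now)) / n
    dy = sum(b[1] - a[1] for a, b in zip(others_prev, others_now)) / n
    return (prev_pos[0] + dx, prev_pos[1] + dy)
```

This is why the search size can shrink as you add trackers: once several trackers capture the camera's jitter, the remaining trackers are predicted accurately even on a choppy shot.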
Using the predict mode, you'll sometimes find that a tracker is suddenly
way out of position, that it isn't looking in the right place. If you check your other
trackers, you'll find that one of your previously-tracked trackers is off course on
either this or the previous frame. You should unlock that tracker, repair it, relock
it, and you'll see that the tracker you were originally working on is now in the
correct place (you may need to back up a frame and then track onto this frame
again).
If your shot and the individual trackers are very rough, especially as you
are tracking the first few trackers, you may find that the trackers aren't too
predictable, and you can set the mode to Hand-Held: Sticky, in which case
SynthEyes simply looks for the feature at its previous location (requiring a
comparatively large search region).
For some special situations, the Re-track at existing mode uses the
previously-tracked location, and looks again nearby (perhaps after a change in
some tracker settings). The search size can be kept small, and the tracker will
not make any large jumps to an incorrect location, if the track is basically correct
to begin with. SynthEyes uses this mode during fine-tuning. Note: on any frames
that were not previously tracked, Hand-Held: Sticky mode will be used.

Adjusting While Tracking


If a tracker goes off course, you can fix it several ways: by dragging it in
the camera view, by holding down the Z key and clicking and dragging in the
camera view, by dragging in the small tracker interior view, or by using the arrow
keys on the number pad. (Memo to lefties: use the apostrophe/double-quote key
('/") instead of Z.)
You can keep an eye on a tracker or a few trackers by turning on the Pan
to Follow item on the Track menu (keyboard: 5 key), and zooming in a bit on the
tracker, so you can see the surrounding context. When Pan To Follow is turned
on, dragging the tracker drags the image instead, so that the tracker remains
centered.
Or, the number-pad 5 key centers the selected tracker whenever you press
it.

Staying on Track and Smooth Keying


Help keep the trackers on course with the Key Every spinner, which
places a tracker key each time the specified number of frames elapses, adapting
to changes in the pattern being tracked. If the feature is changing significantly,
you may want to tweak the key location each time the key is added automatically.
Turn on the Stop on auto-key item on the Track menu to make this easier.
When you reposition a tracker, you create a slight "glitch" in its path that
can wind up causing a corresponding glitch in the camera path. To smooth the
glitches away, set the Key Smooth spinner to 3, say, to smooth it over the
preceding 3 frames. When you set a key, the preceding (3 etc.) frames need to be
re-tracked. If you turn on Pre-Roll by Key Smooth on the Track menu,
SynthEyes will automatically back up and retrack the appropriate frames when
you resume tracking (hit Play) after setting a key.
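The effect of Key Smooth can be pictured with a small sketch that ramps a key's correction in over the preceding frames. SynthEyes actually re-tracks those frames; this is only a model of why the glitch disappears.

```python
def smooth_in_key(track, key_frame, new_pos, key_smooth=3):
    """Blend a repositioned key into the preceding frames.

    Ramps the correction in linearly over the preceding key_smooth
    frames, so the fix doesn't itself become a sudden jump in the
    path. Illustrative only.
    """
    old = track[key_frame]
    dx, dy = new_pos[0] - old[0], new_pos[1] - old[1]
    out = list(track)
    for i in range(1, key_smooth + 1):
        f = key_frame - key_smooth + i
        if 0 <= f < key_frame:
            w = i / key_smooth      # ramps from 1/3 to 2/3 for key_smooth=3
            out[f] = (out[f][0] + w * dx, out[f][1] + w * dy)
    out[key_frame] = new_pos
    return out
```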
The combination of Stop on auto-key and Pre-roll by Key Smooth
makes for an efficient workflow. You can leave the mouse camped in the tracker
view window for rapid position tweaks, and use the space bar to restart tracking
to the next automatic key frame. See the web-site for a Flash movie example.
Warning: if SynthEyes is adding a key every 12 frames, and you want to
delete one of those keys because it is bad, it may appear very difficult. Each time
you delete it (by right-clicking in the tracker view, Now button, or position
spinners), a new key will immediately be created. You could just fix it. Or, you
should back up a few frames, create a key where the tracker went off-course,
then go forward to delete or fix the bad key.

Suspending or Finishing a Track


If an actor or other object permanently obscures a tracker, turn off its
enable button, disabling it for the rest of the shot, or until you re-enable it.
Trackers will turn off automatically at the edge of the image; turn them back on if
the image feature re-appears. (If the shot has a non-image border, use the
region-of-interest on the Image Preprocessing panel so that trackers will turn off
at the right location.)
You can also track backwards: go to the end of the shot, reverse the
playback direction, and play or single-step backwards.
You can change the tracking direction of a tracker at any time. For
example, you might create a tracker at frame 40 and track it to 100. Later, you
determine that you need additional frames before 40. Change the direction arrow
on the tracker panel (not the main playback direction, which will change to
match). Note that you introduce some stored inconsistency when you do this.
After you have switched the example tracker to backwards, the stored track from
frames 40-100 uses lower-numbered reference frames, but backwards trackers
use higher-numbered reference frames. If you retrack the entire tracker, the
tracking data in frames 40-100 will change, and the tracker could even become
lost in spots. If you retrack in the new direction, you should continue to monitor
the track as it is updated. If you have regularly-spaced keyframes, little trouble
should be encountered.
When you are finished with one or more trackers, select them, then click
the Lock button. This locks them in place so that they won't be re-tracked
while you track additional trackers.

Combining Trackers
You might discover that you have two or more trackers tracking the same
feature in different parts of the shot, or that are extremely close together, that you
would like to consolidate into a single tracker.
Select both trackers, using a lasso-select or by shift-selecting them in the
camera view or graph editor. Then select the Track/Combine trackers menu
item, or press Shift-7 (the ampersand, &). All selected trackers will be combined,
preserving associated constraint information.
If several of the trackers being combined are valid on the same frame,
their 2-D positions are averaged. Any data flagged as suspect is ignored, unless
it is the only data available. Similarly, the solved 3-D positions are averaged.
There is a small amount of intelligence to maintain the name and configuration of
the most-developed tracker.
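The averaging rule just described can be sketched in a few lines of Python (an illustration of the described behavior, not SynthEyes's actual merge code):

```python
def combine_frame(samples):
    """Average the 2-D positions of several trackers on one frame.

    Each sample is (u, v, suspect); data flagged suspect is ignored
    unless it is the only data available, per the behavior described
    above.
    """
    good = [(u, v) for u, v, suspect in samples if not suspect]
    use = good or [(u, v) for u, v, _ in samples]   # fall back to suspect data
    n = len(use)
    return (sum(u for u, _ in use) / n, sum(v for _, v in use) / n)
```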
Note: the camera view's lasso-select will select only trackers enabled on
the current frame, not the 3-D point of a tracker that is disabled on the present
frame. This is by design for the usual case when editing trackers. Control-lasso
to lasso both the 2-D trackers and the 3-D points, or shift-click to select 3-D
points.

Filtering and Filling Gaps in a Track


To produce even smoother final tracks, instead of Locking the trackers,
click the Finalize button. This brings up the Finalize dialog, which filters the
tracker path, fills small missing gaps, and Locks the tracker(s). Though filtering
can create smoother tracks, it is best used when the camera path is smooth, for
example, from a dolly or crane. If the camera was hand-held, smoothing the
tracker paths causes sliding, because the trackers will be smoother than the
camera!
If you have already begun solving and have a solved 3-D position for a
tracker, you can also fill small gaps or correct obvious tracking glitches by using
the Exact button on the tracker panel, which sets a key at the location of the
tracker's 3-D position (keyboard: X key, not shifted). You should do this with
some caution, since, if the tracking was bad, then the 3-D tracker position and
camera position are also somewhat wrong.

Pan To Follow
While tracking, it can be convenient to engage the automatic Pan To
Follow mode on the Track menu, which centers the selected tracker(s) in the
camera view, so you can zoom in to see some local context, without having to
constantly adjust the viewport positioning.
When pan to follow is turned on, when you start to drag a tracker, the
image will be moved instead, so that the tracker can remain centered. This may
be surprising to begin with.
Once you complete a tracker, you can scrub through the shot and see the
tracker crisply centered as the surroundings move around a bit. This is the best
way to review the stability of a track.

Skip-Frame Track
If a few frames are untrackable due to a rapid camera motion, explosion,
strobe, or actor blocking the camera, you can engage the Skip Frame checkbox
on the feature panel to cause the frame to be skipped. You should only skip a
few frames in a row, and not too many overall.
The Skip Frames track will not affect supervised tracking, but it affects
solving, causing all trackers on skipped frames to be ignored. After solving, the camera will have a
spline-interpolated motion on the resulting unsolved frames.
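The fill on skipped frames behaves like an ordinary smooth interpolating spline. As a generic illustration (the exact spline family SynthEyes uses is an internal detail), a Catmull-Rom segment interpolates a camera channel between two solved values p1 and p2 using their neighbors:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """One spline segment between p1 and p2, for t in [0, 1].

    A standard Catmull-Rom segment, shown as a generic example of
    spline interpolation across unsolved frames; not necessarily the
    spline SynthEyes itself uses.
    """
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

The segment passes exactly through p1 at t=0 and p2 at t=1, so the interpolated motion joins the solved frames without a jump.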
If you have a mixture of supervised and automatic tracking, see the
section on the Skip-Frame track in Automated Tracking as changing the track
after automated tracking can have adverse effects.

Fine-Tuning the Trackers
Supervised tracking can always produce the most accurate results by
definition, because a human can always look at an auto-track and find something
to improve. The accuracy of supervised tracking is also aided by the high
accuracy offered by the pattern-matching supervised tracking algorithms.
You can tell SynthEyes to make a second pass through the images, re-
tracking them using the pattern-matching supervised tracker. This fine-tuning
process can give you closer to the accuracy of a careful supervised track, though
it will take the computer a bit longer to process.
The fine-tuning workflow adds a step as follows:

1. Run Auto-tracker on the Summary panel.
2. Click the Fine-tune trackers item on the Track menu.
3. Check the parameters on the fine-tune panel, then hit Run.
4. Go to the Solver panel and click Go! to solve the shot.


You can turn on the Fine-tune during auto-track checkbox on the Fine-
tune trackers dialog or summary panel to have fine tuning done during auto-
tracking. Or, you can do an automatic track and solve, then decide to fine-tune
and refine the solve later: the work-flow is up to you.

Controlling Fine-Tuning
When you fine-tune, SynthEyes will modify each auto-tracker so that there
is only one key every 8 frames (by default), then run the supervised tracker at all
the intermediate frames.
There are several options you can control when starting fine-tuning:
 The spacing between keys
 The size of the trackers
 The aspect ratio of the trackers (usually square, 1.0)
 The horizontal and vertical search sizes
 The shot's current supervised-tracker filter interpolation mode.
 Whether all auto trackers will be tuned, or those that are currently
selected (whether they are automatic, or a previously-unlocked
automatic tracker, which would not otherwise be processed).
 Whether you want the trackers to remain auto-trackers, or be
changed to be considered "gold" supervised trackers.


You should set these parameters based on your experience at supervised
tracking. Very static and slowly changing shots can use a larger spacing between
keys; more dynamic shots, say with twists or lighting changes, should use closer-
together keys.
Since the supervised tracking will be starting from the known location of
the automatic tracker, the search size can be relatively small.
Note that if you leave the trackers as auto-trackers, then later convert
them to gold, the search size will be reset to a default value at that time. That is
not a significant concern; keeping them as automatic trackers is recommended.

Usage Suggestions
The fine-tuning process is not necessary on all shots. The automatic
tracker produces excellent results, and fine-tuning may produce results that are
indistinguishable from the original. Shots with a slow camera motion or with busy,
repeating, content (such as woods and shrubbery) may deserve special
attention.
You can do a quick test by selecting and fine-tuning a single tracker, then
comparing its track (using tracker trails) before and after fine-tuning using Undo
and Redo. (See the online tutorial.) If the fine-tuning is beneficial, then fine-tune
the remaining trackers.
After fine-tuning, be sure to check the tracker graphs in the graph editor
and look for isolated spikes. Occasional spikes are typical when a tracker is in a
region with a lot of repeating fine detail, such as a picket fence.
Keep in mind that though fine-tuning can help give you a very smooth
track, often there are other factors at play as well, especially film grain,
compression artifacts, or interlacing.

Pre-Solve Tracker Checking
When you are doing supervised tracking, you should check on the
trackers periodically before starting to solve the shot, to verify that you have
sufficient trackers distributed throughout the shot.
You can also check on the trackers after automatic tracking, before
beginning to solve the shot. (On simpler shots you can have the automatic
tracker proceed directly from tracking to solving.)
This section describes ways to examine your trackers before solving. It
introduces the SynthEyes Graph Editor. After solving, other techniques and
tools are available, including the tracker clean-up tool.
Tip: automatic tracker tooltips have gray backgrounds; supervised tracker
tooltips have gold backgrounds.

Checking the Tracker Trails


The following procedure has proven to be a good way to quickly identify
problematic trackers and situations, such as frames with too few useful trackers.
1. Go to the camera view
2. Turn off View/Show Image on the main or right-click menu.
3. Scrub through the shot using the time bar. Look for
 regions of the image without many trackers,
 sections of the shot where the entire image does not have many
trackers,
 trackers moving the wrong way from the others.
4. Turn on View/Show tracker trails on the main or right-click menu.
5. Scrub through the shot using the time bar. Look for
 funny hooks at the beginning or end of a track, especially at the
edges of the image,
 zig-zag discontinuities in the trails.
Your mind is good at analyzing motion paths without the images —
perhaps even better because it is not distracted by the images themselves. This
process is helpful in determining the nature of problematic shots, such as shots
with low perspective, shots that unexpectedly have been shot on a tripod, tripod
shots with some dolly or sway, and shots that have a little zoom somewhere in
the middle. Despite the best efforts and record-keeping of on-set supervisors,
such surprises are commonplace.


Checking Tracker Lifetimes


You can overview how many trackers are available on each frame
throughout the shot with the tracks view of the Graph Editor.

The graph editor can be a floating window, opened using its button on the
toolbar, or it can be embedded as a viewport by itself or as part
of other viewport configurations.

After you open the graph editor, make sure it is in the tracks view, if
you've been playing with it earlier. If the shot is supervised tracking, click the
sort order button to change from alphabetic sorting to sorting by time. If you have
resized the window, you may want to reset the horizontal scaling also.
Next, click the two buttons at lower right of the panel to select squish
mode, with no keys shown, and with the tracker-count background visible (it
starts out visible). The graph editor on one example shot looks like this:


Each bar corresponds to one of the trackers; Tracker4 is selected and
thicker. The color-coded background indicates that the number of trackers is
problematic at left, OK in the middle, and "safe" on the right.
You can configure the "safe" level on the preferences. Above this limit
(default 12), the background will be white (gray for the dark UI setting), but below
the safe limit, the background will be the safe color (configurable as a standard
preference), which is typically a light shade of green: the number of trackers is
OK, but not high enough to hit your desired safe limit.
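The three-band coloring amounts to a simple threshold test. Only the safe level is a documented preference; the boundary between "OK" and "problematic" below is a made-up value purely to illustrate the idea.

```python
def count_background(n, safe_level=12, ok_level=6):
    """Choose the tracks-view background from a frame's tracker count.

    safe_level matches the documented default preference; ok_level is
    a hypothetical threshold for illustration only.
    """
    if n >= safe_level:
        return "white"          # or gray, in the dark UI setting
    if n >= ok_level:
        return "safe color"     # typically light green: OK, below safe
    return "problem color"      # too few trackers on this frame
```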
This squished view gives an excellent quick look at how trackers are
distributed throughout the shot. The color coding varies for tripod-mode
shots and for shots with hold regions. Zero-weighted trackers do not count.

Hint: When the graph editor is in graph mode, you can look at a
direct graph of the number of valid trackers on each frame by turning on the
#Normal channel of the Active Trackers node.


If there are unavoidably too few trackers on some frames, you can use the
Skip Frames track on the Feature Control Panel to proceed.


The graph editor is divided into three main areas: a hierarchy area at top
left, a canvas area at top right, and a tool area at the bottom. You can change the
width of the hierarchy area by sliding the gutter on its right. You can partially or
completely close the tool area with the toolbox at left. A minimal view is
particularly handy when the graph editor is embedded in a viewport layout.
In the hierarchy area, you can select trackers by clicking their line. You
can control-click to toggle selections, or shift-drag to select a range. The scrollbar
at left scrolls the hierarchy area.
You can also select trackers in the canvas area in squish mode, using the
same mouse operations as in the hierarchy area.
The icons next to the tracker name provide quick control over the tracker
visibility, color, lock status, and enable.
Warning: you cannot change the enable, or much else, of a tracker while
it is locked!
The small green swatch shows the display color of a tracker or mesh.
Double-clicking brings up the color selection dialog so you can change the
display color. You can shift-click a color, and add all trackers of that color to the
current selection, control-click the swatch of an unselected tracker to select only
trackers of that color, or control-click the swatch on a selected tracker to unselect
the trackers of that color.
Jumping ahead, the graph editor hierarchy also shows any coordinate-
system lock settings for each tracker:
x, y, and z for the respective axis constraints;
l (lower-case L) when there is a linked tracker on the same object;
i for a linked tracker on a different object (an indirect link);
d for a distance constraint;
0 for a zero-weighted tracker;
p for a pegged tracker;
F for a tracker you specified to be far;
f for a tracker not requested to be far, but solved as far for cause.

Introduction to Tracker Graphs


The graph editor helps you find bad trackers and identify the bad portions
of their track. The graph editor has a very extensive feature set that we will begin
to overview; for full details see the graph editor reference. We won't get to the
process of how to find the worst ones until the end of the section, when you
understand the viewport.


To begin, open the graph editor and select the graphs mode.
Selecting a tracker, or exposing its contents, causes its graphs to appear.

In this example, a tracker suddenly started jumping along fence posts,
from pole to pole on three consecutive frames. The red curve is the horizontal U
velocity, the green is the vertical V velocity, and the purple curve is the tracker
figure-of-merit (for supervised trackers). You can see the channels listed under
Tracker15 at left. The green circles show which channels are shown; zoom,
pan, and color controls are adjacent. Double-clicking will turn on or off all the
related channels.
There are a variety of different curves available, not only for the trackers
but for other node types within SynthEyes.
The graph editor is a multi-curve editor—any number of completely
different kinds of curves can be displayed simultaneously. There is no single set
of coordinate values in the vertical direction because the zoom and pan can be
different for each kind of channel. To determine the numeric value at any
particular point on a curve, put the mouse over it and the tooltip will pop up with
the set of values.
The graph editor displays curves for each node that is exposed (its
channels are displayed; Enable, U. Vel, V. Vel, etc above).
The graph editor also displays curves for all selected nodes (trackers,
cameras, or moving objects) as long as the Draw Curves for Selected Nodes
button is turned on. This gives you quite a bit of quick control over what is
drawn, and enables you to compare a single tracker or camera‘s curves to any
other tracker as you run through them all, for example.


You zoom a channel by dragging the small zoom icon. The zoom
setting is shared between all channels of the same type. For example, the U
and V velocity channels are the same type, as are the X, Y, and Z position
channels of the camera. But the U velocity and U position are different types. If
you click on the small Zoom icon, the other zoom icons of the same type will
flash.
The zoom setting is also shared between nodes of the same type:
zooming or panning on one tracker affects the other trackers too. All related
channels will zoom also, so that the channels remain comparable to one another.
This saves time and helps prevent some incorrect thought patterns.
The pan setting is also shared between nodes, but not between
channels: the U velocity and V velocity can be separated out. When you pan,
you'll see a horizontal line that is the "zero level" of the channel. It will snap
slightly to horizontal grid lines, making it easier to make several different curves
line up to the same location. You can later check on the zero level by tapping the
zoom or pan icons.
There are two kinds of auto-zooms, activated by double-clicking the zoom
or pan icons. The zoom double-click auto-zooms, but makes all channels of the
same type have the same zero level. The pan double-click auto-zooms, but pans
the channels individually. As a result, the zoom double-click keeps the data
more organized and easier to follow, but the pan double-click allows for a higher
zoom factor, because the zero levels can be different.
For example, consider zooming an X position that runs 0 to 1, and a Y
position that runs 10 to 12.
If we pan double-click, the X curve will run full-screen from 0 to 2, and Y
will run full-screen from 10 to 12. Note that X is not 0 to 1, because it must have
the same zoom factor as Y. X will only occupy the bottom half of the screen.
If we zoom double-click, X will run from 0 to 12 full screen, and Y will run
from 0 to 12 full screen. The range and zero locations of both curves will be the
same, and we'll be better able to see the relationship between the two curves.
But if we want to see details, the pan-double-click is a better choice.
There is no option to have X run 0 to 1 and Y run 10 to 12, by design.
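The worked example above can be reproduced with a small numeric model of the two auto-zoom behaviors (a simplification for illustration; the actual display logic is internal to SynthEyes):

```python
def pan_double_click(ranges):
    """Shared zoom factor, individual pans: every channel is shown with
    the same span (the largest data span), anchored at its own minimum."""
    span = max(hi - lo for lo, hi in ranges)
    return [(lo, lo + span) for lo, hi in ranges]

def zoom_double_click(ranges):
    """Shared zoom and shared zero level: every channel is displayed
    over one common range that covers all of the data."""
    lo = min(l for l, h in ranges)
    hi = max(h for l, h in ranges)
    return [(lo, hi)] * len(ranges)
```

With X spanning 0 to 1 and Y spanning 10 to 12, pan_double_click gives displayed ranges (0, 2) and (10, 12), while zoom_double_click gives (0, 12) for both, matching the behavior described above.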
Both zoom and pan settings can be reset by right-clicking on the
respective icons.

Interpreting Figure of Merit


In this example, two trackers have been supervised-tracked with a Key
Every setting of 20 frames (but starting at different frames). The tracker Figure of
Merit (FOM) curve measures the amount of difference between the tracker's
reference pattern and what is found in the image. You see it drop down to zero
each time there is a key, because then the reference and image are the same.
One tracker has a small FOM value that stays mostly constant. The other
tracker has a much larger FOM, and in part of the shot it is much larger. In a
supervised shot, the reason for that should be investigated.
You can use this curve to help decide how often to place a key
automatically. The 20 frame value shown above is plenty for those features. If
you see the FOM rising steadily between keys instead, you should reduce the
spacing between keys.

You'll also be able to see the effect of the Key Smooth setting: the key
smoothing will flatten out a steadily increasing curve into a gently rounded hump,
which will reduce spikes in the final camera path.

Velocity Spikes
Here's an example of a velocity curve from the graph editor:

At frame 217, the tracker jumped about 3 pixels right, to a very similar
feature. At frame 218, it jumped back, resulting in the distinctive sawtooth pattern
the U velocity curve exhibits. If left as-is, this spike will result in a small glitch in
the camera path on frame 217.

You can repair it using the Tracker control panel in the main user
interface by going to frame 217. Jiggle back and forth a few frames with the S
and D keys to see what's happening, then unlock the tracker and drop down
a new key or two. Step around to re-track the surrounding frames with the new
keys (or rewind and play through the entire sequence, which is most reliable).
Deglitch Mode

You can also repair the glitch by switching to the Deglitch mode of the
graph editor, then clicking on the first (positive) peak of the U velocity at frame
217. SynthEyes will compute a new tracker location that is the average of the
prior and following locations. For most shots, this will eliminate the spike.
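The averaging step itself is simple enough to sketch (a hypothetical Python illustration; `track` here maps frame numbers to U,V position pairs, which is an assumption, not SynthEyes's internal representation):

```python
def deglitch(track, frame):
    # Replace the tracked position at `frame` with the average of the
    # positions on the neighboring frames, as Deglitch mode does for a
    # double (there-and-back) spike.
    (u0, v0), (u1, v1) = track[frame - 1], track[frame + 1]
    track[frame] = ((u0 + u1) / 2.0, (v0 + v1) / 2.0)

# A 0.03-unit jump on frame 217 (made-up U,V data, keyed by frame):
track = {216: (0.50, 0.40), 217: (0.53, 0.40), 218: (0.50, 0.40)}
deglitch(track, 217)
print(track[217])  # (0.5, 0.4)
```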
If you see a velocity spike in one direction only, it will be more difficult to
correct: it means that the tracker has jumped to a nearby feature, and not come
back. You will have to put it back in its correct location and then play (track)
through the rest of the shot.
The deglitch tool can also chop off the first or last frame of a tracker, which
can be corrupted when an object moves in front of the feature, or the feature is
moving offscreen.

Even if the last two or three frames are bad, you can click a few times and
quickly chop them off.
Finding Spikes Before Solving
Learn to recognize these velocity spikes directly. There are double spikes
when a tracker jumps off course and returns, single spikes when it jumps off
course to a similar feature and stays there, large sawtooth areas where it is
chattering between near-identical features (or needs a new position key for
reference), or big takeoff ramps where it gets lost and heads off into featureless
territory.

To help find these issues, the graph editor features the Isolate mode.
Left-click its icon to turn it on, then right-click it to select all the trackers (it does
not have to be on for right-clicking to work).
With all the trackers selected, you will usually see a common pattern for
most of the trackers, plus a few spots where things stick out. If you click the
mouse over the spikes that stick out, that tracker will be selected for further
investigation. You can press the left button, keep it down, and move around
investigating different curves before releasing it to select a particular one. It can
be quicker to delete extra automatic trackers, rather than repairing them.
After repairing each tracker, you can right-click the isolate button again,
and look for more. With two monitors, you can put the graph editor on one, and
the camera view on another. With only one monitor, it may be easiest to operate
the graph editor from the Camera & Graphs viewport configuration. Once you are
done, do a refine-mode solving cycle.

Hint: You can stay in Deglitch mode and temporarily isolate by holding
down the control key. This gives a quick workflow for finding and repairing
glitches.

Setting Up Mixed-Tripod Shots
In a tripod-mode shot (also known as nodal pan), the camera pans, tilts,
rolls, perhaps zooms—but does not translate. No 3-D range information can be
extracted. That is both a limitation and a benefit: without depth, these shots are
the domain of traditional 2-D compositing, amenable to a variety of tricks and
gimmicks such as the "Honey, I Shrunk the Kids" effect.
In the SynthEyes environment, "without range information" means that all
tripod-shot trackers are automatically tagged as "Far," meaning that they are
directions in space (like a directional light), not points in space (which
correspond to an omni light).
SynthEyes solves these shots for (only) the pan, tilt, and roll (optionally
zoom) using the Tripod solving mode. And it helps you orient the tripod-solve
scene into a 3-D workspace.

Introducing "Holds"
Some shots are more complex, however: they contain both sections
where the camera translates substantially, and where the camera pans
substantially without translation. For example, the camera dollies down a track,
looking to the left, reaches the end of the track, spins 180 degrees, then returns
down the track while looking to the right.
Such a shot is complex because none of the trackers visible in the first
section of the shot are visible in the third portion. During the second, panning-
tripod portion, all the trackers must be "Far" and can have no depths, because the
camera never translates during their lifetime. Taken literally (and of course we're
talking computers here), mathematically there is no way for SynthEyes to tell what
happened between the first and third sections of the shot—the camera could
have translated from here to Mars during the second section, and since the Far
points are infinitely far away, the tracking data would be the same.
Instead, we need to tell SynthEyes "the camera is not translating" during
the second section of the shot. We call this a "hold," and there is a button
for this on the Summary and Solver control panels. By animating the
Hold button, you can tell SynthEyes which range(s) of frames the camera is
panning but not translating. SynthEyes calculates a single XYZ camera position
for each section of frames where the hold button is continuously on—though it
continues to calculate separate pan, tilt, and roll (and optionally zoom) for each
frame. (Note: you do not have to set up a hold region if the camera comes to a
stop and pans, but only a little, so that most of the trackers are visible both before
and after the hold region. That can still be handled as a regular shot.)
The Hold button can be animated on and off in any pattern: off-on-off as
above; off-on, a shot with a final panning section; on-off, a shot with an initial
panning section followed by a translation; on-off-on, a pan at each end of a dolly;

off-on-off-on, a translation, a pan, another translation, and a final pan; etc. There
is no requirement on what happens during each pan or each translate, they can
all be different. In effect, you are building a path with beads in it, where each
bead is a panning "hold" section.

Preparing Trackers for Holds


It is crucial to maintain a smooth 3-D path in and out of a hold region—you
do not want a big jump. To achieve this requires careful control over whether
trackers are far or not. The operations and discussions that follow rely heavily on
the graph editor's tracks view of the world.
To begin with, a tracker must be configured as Far if the camera does not
translate within its lifetime (i.e., the tracker's lifetime is contained within a hold
region). A tracker with a lifetime solely outside the hold region will generally not
be far (unless it is in fact far, such as a point out on the horizon).
Trackers that exist both inside and outside the hold region present some
more interesting questions, yet they are common, since the auto-tracker rightfully
does not care about the camera motion—it is only concerned with tracking image
features.
If non-far trackers continue into a hold region, they will inevitably cause
the best XYZ position of the hold region to separate from the last XYZ position
before the start of the hold region. The additional tracking information will not
exactly match the prior data, and frequently the hold region contains a rapid pan
that tends to bias the tracking data (especially if the camera has a rolling
shutter). A jump in the path will result.
To prevent this, SynthEyes only pays attention to non-far trackers up to a
short transition region (see the Transition Frms. setting on the Solver panel).
Inside the transitions at each end of a hold region, non-Far trackers are
ignored; their weight in the solve is zero. This ensures that the path is smooth
into and out of the hold region.
This causes an apparent problem: if you take an auto-tracked shot and turn
on the hold button, then inside the hold region there will be no operating trackers
(and the Lifetimes panel will show those frames as reddish). There are no far
trackers, and no usable tracks in there! Your first instinct may be that SynthEyes
should treat the trackers as normal outside the hold region, and as far inside the
hold region—an instinct that is simple, understandable, and mathematically
impossible.
It turns out that the non-far and far versions of a tracker must be solved for
separately, and that the sensible approach is to split trackers cleverly into two
portions: a non-far portion, and one or more far portions. The lifetimes of the
trackers are manipulated to smoothly transition in and out of the hold region, and
smooth paths result.


Hold Tracker Preparation Tool


To easily configure the trackers appropriately, SynthEyes offers the Hold
Region Tracker Preparation Tool, opened from the Windows menu. This tool
gives you complete control over the process if you want, but will also run
automatically with default settings if you start automatic tracking from the

Summary Panel after having set up one or more hold regions.


The tool operates in a variety of modes. Only automatically-generated
tracks are affected. "Golden" and Far trackers are not affected.
The default Clone to Far mode takes each tracker and makes a clone. It
changes the clone to be a Far tracker, with a lifetime that covers the hold region,
plus a selectable number of frames before and after the hold region (Far
Overlap). The overlap can help maintain a stable camera pointing direction into
and out of the hold region, but you need to adjust this value based on how much
the camera is moving before and after the hold region. If it is moving rapidly,
keep Far Overlap at zero.
If the Combine checkbox is off, it makes a new tracker for each hold
region; if Combine is on, the same Far tracker covers all hold regions. For typical
situations, we recommend keeping Combine off.
The Clone to Far mode will "cover the holes" in coverage. The original
trackers will continue to appear active throughout the hold region. If you find this
confusing, you can run a Truncate operation from the Preparation tool: it will turn
off the trackers during the hold region. However, this will make it more difficult if
you later decide to change the hold region.
The Hold Preparation tool can also change trackers to Far with Make Far
(all of them, though usually you should tell it to do only some selected ones). It
will change them to Far, and shut them down outside the hold region (past the
specified overlap).
The Clone to Far operation creates many new trackers. If you already
have plenty, you may wish to use the Convert Some option. It will convert a
specified percentage of the trackers to Far (tightening up their range), and leave
the rest untouched. This will often give you adequate coverage at little cost,
though Clone is safer.

Usage Hints
You should play with the Hold Preparation tool a bit, setting up a few fake
hold regions, so you can see what the different modes do. The Undo button on
the Hold Preparation Tool is there for a reason! It will be easier to see what is
happening if you select a single tracker and switch to Selected mode, instead of
changing all the trackers.


After running the Hold Preparation operation (Apply button), you may want
to switch to the Sort by Time option in the graph editor.
If you need to change the hold region late in your workflow, it is helpful if
the entire tracking data is still available. If you have run a Truncate, the tracking
data for the interior of the hold regions will be gone and have to be re-tracked.
For that reason, the Truncate operation should be used sparingly, perhaps only
when first learning.
If you have done some tracker preparation, then other things, then need to
redo the preparation, use the Select By Type item on the Script menu to select
the Far trackers, then delete them. Make sure not to delete any Far trackers you
have created specially.
If you look back to the initial description of the hold feature, you will see
that the camera motion during a time of "Far" trackers is arbitrary… it could be to
Mars and back. We introduced the hold only as a useful and practical
interpretation of what likely happened during that time.
Sometimes, you will discover that this assumption was wrong, that during
that big pan, the camera was moving. It might be a bump, or a shift, etc. After
you have solved the shot with Holds, you can sequentially convert the holds to
camera locks, hand-animating whatever motion you believed took place during
the hold. You should do this late in the tracking process, because it requires you
to lock in particular coordinates during each motion. The key difference between
holds and locks is this: a hold says that the camera was stationary at
some coordinates still to be determined, while a lock forces you to declare
exactly which coordinates those are.
You may also need to use camera or tracker locks if you have exact
knowledge of the relationship between different sections of the path. For
example, if the camera traveled down a track, spun 90 degrees, then rose
straight up, the motion down the track and the vertical motion are unlikely to be
exactly perpendicular. You can use the locks to achieve the desired result,
though the details will vary with the situation.
The Hold Tracker Preparation Tool presents plenty of options, and it is
important to know what the whole issue is about. But, in practice the setup tool is
a snap to use and can be run automatically without your intervention if you set up
the hold region(s) before auto-tracking. You can also adjust the Hold Tracker
Preparation tool settings at that time, before tracking. The settings are saved in
the file for batch processing or later examination.

Lenses and Distortion
Match-moving aims to electronically replicate what happened during a
live-action film or video shoot. Not only must the camera motion be determined,
but also the field of view of the shot. This process requires certain
assumptions about what the camera is doing: at its simplest, that light travels in a
straight line, but also a number of nitty-gritty little details about the camera:
whether there is distortion in the lens, how big the film or sensor is, whether it is
centered on the axis of the lens, the timing of when pixels are imaged, and many
smaller issues. It is very easy to take these for granted, and under ideal
conditions they can be ignored. But often, they will contribute small systematic
errors, or even cause outright failure. When the problems become larger, it is
important to recognize them and be able to fix them, and SynthEyes provides the
tools to help do that.
You should always be on the lookout for lens distortion, and be ready to
correct it. Most zoom lenses will exhibit very substantial distortion when set to
their widest field of view (shortest focal length).
Similarly, you should be on the lookout for de-centering errors, especially
on long ―traveling‖ shots and on shots with substantial distortion that you are
correcting.

Focal Length vs Field Of View


Since cameras are involved, customers are often concerned with
focal length: they write values down, then try to decide whether a value from
SynthEyes is correct or not.
Important: A focal length value is useless 99% of the time — unless you
also know the plate width of the image (typically in millimeters, to the hundredth
of a millimeter). Unfortunately, this value is rarely available at all, let alone at a
sufficient degree of accuracy. It takes a careful calibration of the camera and lens
to get an accurate value. Sometimes an estimate can be better than nothing;
read on.
SynthEyes uses the field of view value (FOV) internally, which does not
depend on plate size. It provides a focal length only for illustrative purposes. Set
the (back plate) film width using the Shot Settings dialog. Do not obsess over the
exact values for focal length, because finding the exact back plate width is like
trying to find the 25" on an old 25" television set. It's not going to happen.

Zoom, Fixed, or Prime Lens?


During a single shot, the camera lens either zooms, or does not. Often,
even though the camera has a zoom lens, it did not zoom. You can get much
better tracking results if the camera did not zoom.

Select the Lens control panel. Click


 Fixed, Unknown if the camera did not zoom during the shot (even if it is a
zoom lens)
 Fixed, with Estimate if the camera did not zoom during the shot, and you
have a good estimate of the camera field of view, or both the focal length
and plate width.
 Zooming, Unknown if the camera did zoom
 Known if the camera field of view, fixed or zooming, has been previously
determined (more on this later).
If you are unsure if the camera zoomed or not, try the fixed-lens setting
first, and switch to zoom only if warranted. Generally, if you solve a zoom shot
with the fixed-lens setting, you will be able to see the zoom's effect on the
camera path: the camera will suddenly push back or in when it seems unlikely
that the real camera made that motion. Sometimes, this may be your only clue
that the lens zoomed a little bit.
Important: Never use "Known" mode solely because someone wrote
down the lens setting during shooting. Like the turn-signal of an oncoming car, it
is only a guess, not something you can count on. Do not set a Known focal
length unless it is truly necessary.
You may have the scribbled lens focal length from on-set production. If
you also know the plate size, you can use the Fixed, with Estimate setting to
speed up the beginning of solving a bit, and sometimes to help prevent spurious
incorrect solutions if the tracking data is marginal. The mode is also useful when
you are solving several shots in a row that have the same lens setting: you can
use the field of view value without worrying about plate size. In either case, you
should rewind to the beginning of the shot and either reset any existing solution,
or select View/Show Seed Path, then set the lens field of view or focal length to
the correct estimated value. SynthEyes will compute a more accurate value
during solving.
It can be worthwhile to use an estimated lens setting as a known lens
setting when the shot has very little perspective to begin with, as it will be difficult
to determine the exact lens setting. This is especially true of object-mode
tracking when the objects are small. The Known lens mode lets you animate the
field of view to accommodate a known, zooming lens, though this will be rare. For
the more common case where the lens value is fixed, be sure to rewind to the
beginning of the shot, so that your lens FOV key applies to the entire shot.
When a zoom occurs only for a portion of a shot, you may wish to use the
Filter Lens F.O.V. script to flatten out the field of view during the non-zooming
portions, then lock it. This eliminates zoom/translation coupling that causes
noisier camera paths for zoom shots. See the online tutorial for more details.


Introduction to Lens Distortion


SynthEyes has two main ways to deal with distortion: early, before
tracking, in the image preparation subsystem; and later, during solving. Each
approach has its own pros and cons.
The early approach, in image prep, is controlled from the Lens tab of the
image preparation dialog. It lets you set distortion coefficients to remove the
distortion from the source imagery (or later add it). But you must already know
the coefficients, or fiddle to find them.
The image preprocessor can also accept lens presets, if you have pre-
calibrated the lens or obtained a preset elsewhere. The presets can specify fish-
eye lenses or other complex distortion patterns.
The late approach to lens distortion, during solving, allows the solving
engine to determine a most likely distortion value. This approach uses a single
distortion parameter, appropriate only for moderate distortion, not severe
moustache distortion or fisheye lenses. The imagery you see will be the distorted
(original source) images, with the tracker locations made to match up in the
camera view, but not the perspective view. Usually you will want to produce
some undistorted footage once you determine the distortion, at least for
temporary use.
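For intuition, a common one-parameter radial model can be sketched as follows (an illustrative assumption; SynthEyes's actual distortion convention is not spelled out here and may differ):

```python
def distort(u, v, k):
    # One-parameter radial model: a point at radius r from the optic
    # center maps to radius r * (1 + k * r**2). With this ideal->distorted
    # direction, k < 0 pulls edge points inward (barrel-like); sign
    # conventions vary between packages.
    r2 = u * u + v * v
    scale = 1.0 + k * r2
    return u * scale, v * scale

# A point halfway out from the center moves outward slightly for k = 0.1:
print(round(distort(0.5, 0.0, 0.1)[0], 4))  # 0.5125
```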

Determining Distortion With Check Lines


If your scene has long, straight lines, check to see if they are truly straight
in the image: click Add Line at the bottom of the Lens panel and draw an
alignment line along the line in the image (select No Alignment). If the lines
match all along their length, the image is not distorted.
If the image is distorted, you can adjust the lens panel's Lens Distortion
spinner until the lines do match; add several lines if possible. Create lines near
the four edges of the image, but stay away from the corners, where there is more
complex distortion. You will also see a lens distortion grid for reference
(controlled by an item on the View menu).

Calculating Distortion While Solving


If your shot lacks straight lines to use as a reference, turn on the
Calculate Distortion checkbox on the lens panel and the distortion will be
computed during 3-D solving. Usually you should solve the shot without
calculating distortion (perhaps with just a guess at the value), then turn on
Calculate Distortion. When
calculating distortion, significantly more trackers will be necessary to distinguish
between distortion, zoom, and camera/object motion.


Distortion, Focal Length, and the Field of View


When using the image preprocessor to correct distortion, try to adjust the
scale so that the undistorted image is exactly the full width of the frame, if you
would like to be able to compare a SynthEyes focal length to an on-set focal
length. Note that this will not affect the match itself.

Cubic Distortion Correction

The basic distortion coefficient on the Lens panel and image
preprocessor's Lens tab can encompass a moderate amount of distortion.
However, with the wider aspect ratios of 16:9 and 2.35, higher-order (more
complex) distortion becomes significant, especially in the corners of the image. If
you shoot a lens distortion grid (see the web site) and correct the distortion at the
top middle and bottom middle of the image, you might see that the corners are
not corrected, due to the more complex distortion.
The image preprocessor has an additional "Cubic distortion" parameter
that you can use to tweak the corners into place after fixing the basic distortion.
You may have to go back and forth between the two parameters a few times to
do this. The cubic parameter will usually have the opposite sign of the main
distortion (i.e., one is positive, the other negative). It is also possible to write a
Sizzle script to compute the coefficients from a grid of tracker positions.
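As a hedged illustration of such a fit (in plain Python rather than Sizzle, assuming the simple polynomial model r' = r·(1 + a·r² + b·r³) and made-up sample data), a linear least-squares solve for the two coefficients might look like:

```python
def fit_distortion(samples):
    # samples: (ideal_radius, observed_radius) pairs, e.g. measured from
    # a shot of a distortion grid. Model: r_obs = r * (1 + a*r^2 + b*r^3),
    # so (r_obs/r - 1) is linear in the unknowns a and b.
    s11 = s12 = s22 = t1 = t2 = 0.0
    for r, r_obs in samples:
        y = r_obs / r - 1.0
        x1, x2 = r * r, r * r * r
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        t1 += x1 * y; t2 += x2 * y
    det = s11 * s22 - s12 * s12          # 2x2 normal-equation solve
    return ((s22 * t1 - s12 * t2) / det,
            (s11 * t2 - s12 * t1) / det)

# Synthetic check: data generated with a = 0.05, b = -0.02 is recovered.
demo = [(r, r * (1 + 0.05 * r * r - 0.02 * r ** 3))
        for r in (0.2, 0.4, 0.6, 0.8, 1.0)]
a, b = fit_distortion(demo)
print(round(a, 4), round(b, 4))  # 0.05 -0.02
```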

Lens Distortion Profiles


SynthEyes can use stored information about known, pre-calibrated lenses
from special files with the file extension ".lni" (lens information). These files are
stored in the Lens sub-folder of the scripts folder. (There are two of them: a
system set and a user-specific set.)
The lens information file contains a table of values mapping from the
the "correct" radius of any image point to the distorted radius. These tables can
generated by small scripts, including a default fish-eye lens generator (which has
already been run to produce the two default fisheye lens files), and a polynomial
generator, which accepts coefficients from Zeiss for their Ultra-Prime lens series.
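Conceptually, using such a table amounts to interpolating between its entries. A sketch with a made-up table (the actual .lni file format is not shown here):

```python
import bisect

def distorted_radius(table, r):
    # Piecewise-linear lookup in a table of (correct_radius,
    # distorted_radius) pairs, sorted by correct radius.
    xs = [c for c, _ in table]
    i = bisect.bisect_left(xs, r)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    t = (r - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# A made-up three-entry table describing a mild edge shrink:
table = [(0.0, 0.0), (0.5, 0.48), (1.0, 0.90)]
print(distorted_radius(table, 0.75))  # about 0.69
```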
These distortion maps can be either relative, where the distortion is
independent of the physical image size, or absolute, where the distortion is
described in terms of millimeters. The relative files are more useful for
camcorders with built-in lenses, the absolute files more useful for cameras with
removable prime lenses.
The absolute files require an accurate back-plate width in order to use the
table at all. Do not expect the lens calibration table to supply the value, because
the lens (i.e., a detachable prime lens) can be used with many different cameras!
For assembled camcorders, typically with relative files, the lens file can
supply optional nominal back-plate width and field of view values, displayed

immediately below the lens selection drop-down on the image preprocessor's
lens tab. You can apply those values as you see fit.
If you change an .lni file (by re-writing it with a script, for example), you
should hit the Reload button on the Lens tab while that lens file is selected. If
you add new files, or update several, use "Find New Scripts" on the main File
menu.

Lens De-Centering
What we would call a lens these days—whether a zoom, prime, or fisheye
lens—typically consists of 7-11 individual optical elements. Each of those
elements has been precisely manufactured and aligned, and by nature they are
all very round in order to work properly. They are stacked up in a tube, which is
again very round (along with gears and other mechanisms), to form the kind of
lens we buy in the store.
The important part of this explanation is that a lens is very round and
symmetric and has a single well-defined center right down the middle. You can
picture a laser beam right down the exact center of each individual lens of the
overall lens, shooting in the front and out the back towards the sensor chip or
film.
With an ideal camera, the center beam of the lens falls exactly in the
middle of the sensor chip or film. When that happens, parallel lines converge at
infinity at the exact center of the image, and as objects get farther away, they
gravitate towards the center of the image.
While that seems obvious, in fact it is rarely true. If you center something
at the exact center of the image, then zoom in, you'll find that the center goes
sliding off to a different location!
This is a result of lens de-centering. In a video camera, de-centering
results when the sensor chip is slightly off-center. That can be a result of the
manufacturer's design, but also because the sensor chip can be assembled in
slightly different positions within its mounting holes and sockets. In a film camera,
the centering (and image plate size) are determined solely by the film scanner!
So the details of the scanning process are important (and should be kept
repeatable).
De-centering creates systematic errors in the match-move when left
uncorrected. The errors will result in geometric distortion, or sliding. Most
rendering packages cannot render images with matching de-centering,
guaranteeing problems. And as in the zooming example earlier, a de-centered
lens can result in footage that doesn't look right.
It is fairly easy to determine the position of the lens center using a zoom
lens. See the de-centering tutorial on the web site. Even if you will use a prime
lens to shoot, you can use a zoom lens to locate the lens center, since the lenses
are repeatable, and the error is determined by the sensor/film scan.


Once the center location is determined, the image preprocessor can
restore proper centering. It does that by padding the sides of the image to
produce a new, larger, but centered image. That larger image is then subject to
lens distortion correction and possibly stabilization, then saved to disk. The CG
renders will match this centered footage. At the end, the padding will be
removed.
This means that your renders will be a little larger, but there does not have
to be anything in the padded portions, so they should not add much time. Higher
quality input that minimizes de-centering will reduce costs.
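The padding arithmetic can be sketched as follows (a hypothetical helper, not SynthEyes code; pixel coordinates, with the measured optic-axis position at (cx, cy)):

```python
def centering_pad(width, height, cx, cy):
    # Padded size and left/top padding needed so the optic-axis position
    # (cx, cy), in pixels, lands at the exact center of the new image.
    new_w = 2 * max(cx, width - cx)
    new_h = 2 * max(cy, height - cy)
    pad_left = new_w / 2 - cx   # zero when the axis is right of center
    pad_top = new_h / 2 - cy    # zero when the axis is below center
    return new_w, new_h, pad_left, pad_top

# Optic axis measured 10 px left of and 4 px above the frame center:
print(centering_pad(1920, 1080, 950, 536))  # (1940, 1088, 20.0, 8.0)
```

Note how small the size increase is for a modest de-centering, which is why the padded renders should not add much time.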
As a more advanced tactic, the image preprocessor can be used to
resample the image and eliminate the padding, but this can only be done after
initial tracking, when a field of view has been determined, and it is a slightly
skewed resample that will degrade image quality slightly (the image
preprocessor's Lanczos resampling can help minimize that).

What About 2-D Grid Corrections?


When you are correcting distortion on a shot, you may see an asymmetry,
where there is more distortion on one side of the shot, and less in another. If you
correct one side, the other goes out of whack.
You might think or hear about grid-based distortion correction, to rubber-
sheet morph the different parts of the image individually into their correct places.
This seems a simple approach to the problem, and it is! But it is WRONG.
The actual cause is de-centering—you are correcting the distortion using
the wrong center, which results in an apparent asymmetry. If you use a grid-type
correction, you will likely fix the images, but not the imaging geometry, and the
entire match-move will come out wrong. If you fix the centering, the distortion will
go away properly without the need for an asymmetric grid-based correction—and
the match-move will come out right in the end.

Match-moving with Lens Distortion


Merely knowing the amount of lens distortion and having a successful 3-D
track is generally not sufficient, because most animation and compositing
packages are not distortion-aware. Similarly, if you have configured some
correction for earlier image de-centering or cropping (i.e., padding) using the Image
Preprocessing system, your post-tracking workflow must also reflect this.
When distortion and cropping are present, in order to maintain exactly
matching 3-D tracks, you will need to have the following things be the same for
SynthEyes and the compositing or animation package:
 Undistorted shot footage, padded so the optic axis falls at the center,
 An overall image aspect ratio reflecting the effects of padding and
the pixel aspect ratio,
 A field of view that matches this undistorted, padded footage, or,

 A focal length and back plate width that matches this footage,
 3-D camera path and orientation trajectories, and
 3-D tracker locations.
If a shot lines up in SynthEyes, but not in your compositing or animation
software, checking these items is your first step.
Since SynthEyes preprocesses the images, or mathematically distorts the
tracker locations, the post-tracking software will generally not receive matching
imagery unless care is taken, as described below, to generate it.
SynthEyes has a script that simplifies workflow when dealing with
distorted imagery. It uses a simple approach to the setup. You can do whatever it
does manually or via your own (modified) script if you need to do something
different.

Lens Distortion Workflows


There are two fundamentally different approaches to dealing with distorted
imagery:
1) deliver undistorted imagery as the final shot (one pass)
2) deliver distorted imagery as the final shot (two pass)
Delivering undistorted imagery as a final result is almost certainly the way
to go if you are also stabilizing the shot, if you are working with
higher-resolution film or RED scans being down-converted for HD or SD
television, or if the distortion is a small inadvertent result of a
suboptimal lens.
Delivering distorted imagery is the way to go if the distortion is the
director's original desired look, or if the pixel resolution is already
marginal and the images must be re-sampled as little as possible to maximize
quality. It is called two-pass because the CG footage must be run back
through SynthEyes (or a different application) to apply the distortion to the
CG imagery.
Delivering Undistorted Imagery
After determining the lens calibration, you will use the image
preprocessing system to produce an undistorted version of the shot.
• Determine lens distortion via checklines or solving with Calculate
Distortion.
• Save, then Save As, to create a new version of the file (recommended).
• Click "Lens Workflow" on the Summary panel (or start the Lens/Lens
Workflow script).
• Select the "Final output" option "Undistorted(1)" and hit OK. The
script will zoom in slightly so that there are no uncovered black
areas in the output image.

• Click "Save Sequence" on the Summary panel or Output tab of the
image preprocessor (which lets you change resolution if you need
to). Write out a new version of the undistorted imagery.
• Save, then Save As, to create a new version of the file (recommended).
• On the edit menu, select Shot/Change Shot Images.
• Select the "Switch to saved footage" option on the panel that pops
up, and hit OK.
• You will now be set up to work with the undistorted (fixed) footage.
If you tracked and solved initially to determine the distortion (or
without realizing it was there), the trackers and solve have been
updated to compensate for the modified footage.
• You can track, solve, add effects, etc., all using the final
undistorted imagery.
If you need to do something different, or want to do more of the step
manually, here is what is happening behind the scenes.
The Lens Workflow script performs the following actions for you: transfers
any calculated distortion from the lens panel to the image preprocessor, turns off
the distortion calculation for future solves, changes the scale adjustment on the
image prep Adjust tab to remove black pixels, selects Lanczos interpolation,
updates the tracker locations (i.e., Apply to Trkers on the Output tab), adjusts the
field of view, and adjusts the back plate width (so focal length will be unchanged).
When you do the Change Shot Images with the "Switch to saved footage"
option, SynthEyes resets the image preprocessor to do nothing: if the lens
distortion and other corrections have already been applied to the modified
images, you do not want to perform them a second time once you switch to the
already-corrected images. The Clear button on the image preprocessor performs the same reset manually.
Delivering Distorted Imagery
In this workflow option, you create and track undistorted imagery,
generate CG effects, re-distort the effects, then composite the distorted
version back to the original imagery.
• Determine lens distortion via checklines or solving with Calculate
Distortion turned on.
• Save, then Save As, to create a new version of the file (recommended).
• Click "Lens Workflow" on the Summary panel (or start the Lens/Lens
Workflow script).
• Select the "Final output" option "Redistorted(2)" and hit OK. The
script will pad the image so that the output contains every input

pixel. The margin value will include a few extra for good measure;
adjust as desired.
• Click "Save Sequence" on the Summary panel or Output tab of the
image preprocessor (which lets you change resolution if you need
to). Write out a new version of the undistorted imagery.
• Important: Save, then Save As, to create a new version of the file;
call it Undistort for this discussion.
• On the edit menu, select Shot/Change Shot Images.
• Select the "Switch to saved footage" option, and hit OK.
• You will now be set up to work with the undistorted (fixed) footage.
If you tracked and solved initially to determine the distortion (or
without realizing it was there), the trackers and solve have been
updated to compensate for the modified footage.
• Track, solve, work in your 3-D app, etc., using the undistorted
imagery.
• Render 3-D effects from your 3-D app (which match the undistorted
imagery, not the original distorted images). You should render
against black with an alpha channel, not against the undistorted
images.
• Re-open the Undistort file you saved earlier in SynthEyes.
• Do a Shot/Change Shot Images, and select "Re-distort CGI" mode on
the panel that pops up; select the rendered shot as the new footage
to change to.
• Use Save Sequence on the Summary panel or Image preprocessor's
output tab to render a re-distorted version of the CG effect.
• Composite the re-distorted imagery with the original imagery.
Obviously this is a more complex workflow than delivering undistorted
images, but it is a consequence of the end product desired.

Working with Zooms and Distortion


Most zoom lenses have the most distortion at their widest setting. As you
zoom in, the distortion disappears and the lens becomes more linear. This poses
some interesting issues. It is not possible to reliably compute the distortion if it is
changing on every frame. Because of that, the lens distortion value computed
from the main SynthEyes lens panel is a single fixed value. If you apply the
distortion of the worst frames to the best frames, the best frames will be messed
up instead.

The image prep subsystem does allow you to create and remove
animated distortions. You will need to hand-animate a distortion profile by using a
value determined with the alignment line facility from the main Lens panel, and
taking into account the overall zoom profile of the shot. If the shot starts at
around a 60 deg field of view, then zooms in to a 20 degree field of view, you
could start with your initial distortion value, and animate it by hand down to zero
by the time the lens reaches around 40 deg. If there are some straight lines
available for the alignment line approach throughout, you can do something fairly
exact. Otherwise, you are going to need to cook something up, but you will have
some margin for error.
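If you do need to "cook something up," one simple approach is to ramp the
coefficient linearly with field of view between two anchor points. The 60-
and 40-degree anchors below follow the example in the text; the coefficient
value 0.08 is made up for illustration.

```python
# Hypothetical distortion ramp for a zooming shot: full correction at a
# 60-degree field of view, tapering to zero by 40 degrees. The value
# k_wide=0.08 is invented for illustration.
def distortion_for_fov(fov_deg, wide_fov=60.0, clean_fov=40.0, k_wide=0.08):
    """Linearly interpolate the distortion coefficient from k_wide at
    wide_fov down to 0.0 at clean_fov (and any narrower setting)."""
    if fov_deg >= wide_fov:
        return k_wide
    if fov_deg <= clean_fov:
        return 0.0
    t = (fov_deg - clean_fov) / (wide_fov - clean_fov)
    return k_wide * t

# Sample the ramp at a few fields of view as the lens zooms in.
for fov in (60.0, 50.0, 40.0, 20.0):
    print(fov, distortion_for_fov(fov))
```

In practice you would transfer values like these onto the animated distortion
channel in the image prep subsystem by hand, checking against alignment lines
wherever straight features are available.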
You can save the corrected sequence away and use it for subsequent
tracking and effects generation.
This capability will let you and your client look good, even if they never
realize the amount of trouble their shot plan and marginal lens caused.

Summary of Lens Issues


Lens distortion is a complex topic with no easy "make it go away" button
possible. Each individual camera's distortion is different, and distortion
varies with zoom, focal distance, iris setting, and details of the camera's
sensor technology. We can not supply you with lens calibration data to repair
your shoot, or the awful-looking shots your client has just given you. You
should carefully think through your workflow and understand the impact of
lens issues and what they mean for how you track and how you render.
SynthEyes provides the tools to enable you to handle a variety of complex
distortion-related tasks; calibrating for lens distortion and centering should be
kept on the to-do list during the shoot. Without that, analysis will be less accurate
and more difficult or even impossible. As always, an ounce of prevention is worth
a pound of cure.

Running the 3-D Solver
With trackers tracked, and coordinates and lens setting configured, you
are ready to obtain the 3-D solution.

Solving Modes

Switch to the Solve control panel. Select the solver mode as follows:
• Auto: the normal automatic 3-D mode for a moving camera, or a moving
object.
• Refine: after a successful Auto solution, use this to rapidly update the
solution after making minor changes to the trackers or coordinate system
settings.
• Tripod: camera was on a tripod; track pan/tilt/roll(/zoom) only.
• Refine Tripod: same as Refine, but for Tripod-mode tracking.
• From Seed Points: use six or more known 3-D tracker positions per
frame to begin solving (typically, when most trackers have existing
coordinates from a 3-D scan or architectural plan). You can use Place
mode in the perspective view to put seed points on the surface of an
imported mesh. Turn on the Seed button on the coordinate system panel
for such trackers. You will often make them locks as well.
• From Path: when the camera path has previously been tracked,
estimated, or imported from a motion-controlled camera. The seed
position, orientation, and field of view of the camera must be
approximately correct.
• Indirect: to estimate based on trackers linked to another shot, for
example, a narrow-angle DV shot linked to wide-angle digital camera stills.
See Multi-shot tracking.
• Individual: when the trackers are all individual objects buzzing around;
used for motion and facial capture with multiple cameras.
• Disabled: when the camera is stationary, and an object viewed through it
will be tracked.
The solving mode mostly controls how the solving process is started: what
data is considered to be valid, and what is not. The solving process then
proceeds pretty much the same way after that, subject to whatever constraints
have been set up.

Automatic-Mode Directional Hint


When the solver is in Automatic mode, a secondary drop-down list
activates: a hint to tell SynthEyes in which direction the camera moved,
specifically between the Begin and End frames on the Solver Panel. This
secondary dropdown is normally in Automatic mode also. However, on difficult
solves you can use the directional hint (Left, Right, Upwards, Downwards, Push

In, Pull Back) to tell SynthEyes where to concentrate its efforts in determining a
suitable solution. Here it has been changed:

World Size
Adjust the World Size on the solver panel to a value comparable to the
overall size of the 3-D set being tracked, including the position of the camera.
The exact value isn't important. If you are shooting in a room 20' across,
with trackers widely dispersed in it, use 20'. But if you are only shooting
items on a desktop from a few feet away, you might drop down to 10'.
Important: the world size does not control the size of the scene, that is the
job of the coordinate system setup.
The world size is used to stabilize some internal mathematics during
solving; essentially all the coordinates are divided by it internally, so that the
coordinates stay near 1 even if raised to a large power. Then after the
calculation, the world size is multiplied back in. This process improves your
computer's accuracy.
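The divide-then-multiply process described above can be sketched in a few
lines (an illustrative simplification, not SynthEyes's actual solver code):

```python
# Illustrative sketch of world-size normalization, not SynthEyes's actual
# solver: scale coordinates down before a numerically sensitive step, then
# scale the result back up into world units.
def normalized_solve(points, world_size, solve_step):
    """Run solve_step on coordinates divided by world_size, so the values
    stay near 1.0, then multiply the result back up."""
    scaled = [(x / world_size, y / world_size, z / world_size)
              for (x, y, z) in points]
    result = solve_step(scaled)
    return [(x * world_size, y * world_size, z * world_size)
            for (x, y, z) in result]

# With an identity "solver," the round trip leaves the points essentially
# unchanged; the benefit appears inside real calculations, where
# intermediate values would otherwise be raised to large powers.
pts = [(2500.0, -1200.0, 300.0), (4000.0, 900.0, -150.0)]
print(normalized_solve(pts, 5000.0, lambda p: p))
```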
Choose your coordinate system to keep the entire scene near the origin,
as measured in multiples of the world size. If all your trackers will be
1000 world-sizes from the origin (for example, near [1000000,0,0] with a
world size of 1000), accuracy might be affected. The Shift Constraints tool
can help move them all if needed.
As you see, the world size does not affect the calculation directly at all.
Yet a poorly chosen world size can sabotage a solution. If you have a marginal
solve, sometimes changing the world size a little can produce a different solution,
maybe even the right one.
The world size also is used to control the size of some things in the 3-D
views and during export: we might set the size of an object representing a tracker
to be 2% of the world size, for example.

Go!

You're ready, set, so hit Go! on the Solver panel. SynthEyes will pop
up a monitor window and begin calculating. Note that if you have multiple
cameras and objects tracked, they will all be solved simultaneously, taking
inter-object links into account. If you want to solve only one at a time,
disable the others.

The calculation time will depend on the number of trackers and frames,
the amount of error in the trackers, the amount of perspective in the shot, the
number of confoundingly wrong trackers, the phase of the moon, etc. For a 100-
frame shot with 120 trackers, a 2-second time might be typical. With hundreds or
thousands of trackers and frames, some minutes may be required, depending on
processor speed. Shots with several thousand frames can be solved, though it
may take some hours.
It is not possible to predict a specific number of iterations or time required
for solving a scene ahead of time, so the progress bar on the solving monitor
window reflects the fraction of the frames and trackers that are currently included
in the tentative solution it is working on. SynthEyes can be very busy even
though the progress bar is not changing, and the progress bar can be at 100%
and the job still is not done yet — though it will be once the current round of
iterations completes.

During Solving
If you are solving a lengthier shot where trackers come and go, and where
there may be some tracking issues, you can monitor the quality of the solving
from the messages displayed.
As it solves, SynthEyes is continually adjusting its tentative solution to
become better and better ("iterating"). As it iterates, SynthEyes displays
the field of view and total error on the main (longest) shot. You can monitor this
information to determine if success is likely, or if you should stop the iterations
and look for problems.
SynthEyes will also display the range of frames it is adding to the solution
as it goes along. This is invaluable when you are working on longer shots: if you
see the error suddenly increase when a range of frames is added, you can stop
the solve and check the tracking in that range of frames, then resume.
You can monitor the field of view to see if it is comparable to what you
think it should be — either an eyeballed guess, or if you have some data from an
on-set supervisor. If it does not seem good to start, you can turn on Slow but
sure and try again.
Also, you can watch for a common situation where the field of view starts
to decrease more and more until it gets down to one or two degrees. This can
happen if there are some very distant trackers which should be labeled Far or if
there are trackers on moving features, such as a highlight, actor, or automobile.
If the error suddenly increases, this usually indicates that the solver has
just begun solving a new range of frames that is problematic.
Your processor utilization is another source of information. When the
tracking data is ambiguous, usually only on long shots, you will see the
message "Warning: not a crisp solution, using safer algorithm" appear in the solving

window. When this happens, the processor utilization on multi-core machines
will drop, because the secondary algorithm is necessarily single-threaded.
If you haven't already, you should check for trackers that should be "far"
or for moving trackers.

After Solving
Though having a solution might seem to be the end of the process, in fact,
it's only the … middle. Here's a quick preview of things to do after solving,
which will be discussed in more detail in further sections.
• Check the overall errors.
• Look for spikes in tracker errors and the camera or object path.
• Examine the 3-D tracker positioning to ensure it corresponds to the
cinematic reality.
• Add, modify, and delete trackers to improve the solution.
• Add or modify the coordinate system alignment.
• Add and track additional moving objects in the shot.
• Insert 3-D primitives into the scene for checking or later use.
• Determine position or direction of lights.
• Convert computed tracker positions into meshes.
• Export to your animation or compositing package.
Once you have an initial camera solution, you can approximately solve
additional trackers as you track them, using Zero-Weighted Trackers (ZWTs).

RMS Errors

The solver control panel displays the root-mean-square (RMS) error
for the selected camera or object, which is how many pixels, on average, each
tracker is from where it should be in the image. [In more detail, the RMS
average is computed by taking a bunch of error numbers, squaring them,
dividing by the number of numbers to get the average square, then taking the
square root of that average. It's the usual way for measuring how big errors
are, when the error can be both positive and negative. A regular average
might come out to zero even if there was a lot of error!]
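The bracketed description can be written out directly; this is plain
arithmetic to make the definition concrete, not SynthEyes code:

```python
import math

def rms(errors):
    """Root-mean-square: square the errors, average, then square-root."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Errors that cancel in a plain average still show up in the RMS:
errs = [0.5, -0.5, 0.3, -0.3]
print(sum(errs) / len(errs))  # plain average: 0.0, hiding the error
print(rms(errs))              # RMS: about 0.41 pixels
```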
The RMS error should be under 1 pixel, preferably under 0.5 for well-
tracked features. Note that during solving, the popup will show an RMS error
that can be larger, because it contains contributions from any constraints
that have errors. Also, the error during solving is for ALL of the cameras
and objects combined; it is converted from internal format to human-readable pixel error

using the width of the longest shot being solved for. The field of view of that shot
is also displayed during solving.
There is an RMS error number for each tracker displayed on the
coordinate system and tracker panels. The tracker panel also displays the per-
frame error, which is the number being averaged.

Checking the Lens


You should immediately check the lens panel's field of view, to make sure
that there is a plausible value. A very small value generally indicates that there
are bad trackers, severe distortion, or that the shot has very little perspective (an
object-mode track of a distant object, say).

Solving Issues
If you encounter the message "Can't find suitable initial frames", it means
that there is limited perspective in the shot, or that the Constrain button is on, but
the constrained trackers are not simultaneously valid. Turn on the checkboxes
next to Begin and End frames on the Solver panel, and select two frames with
many trackers in common, where the camera or object rotates around 30
degrees between the two frames. You will see the number of trackers in common
between the two frames; you want this to be as high as possible. Make sure
the two frames have a large perspective change as well: a large number of
trackers will do no good if they do not also exhibit a perspective change.
Also, it is a good idea to turn on the "Slow but sure" checkbox.
You may encounter "size constraint hasn't been set up" under various
circumstances. If the solving process stops immediately, probably you have no
trackers set up for the camera or object cited. Note that if you are doing a moving
object shot, you need to set the camera's solving mode to Disabled if you are not
tracking it also, or you will get this message.
When you are tracking both a moving camera and a moving object, you
need to have a size constraint for the camera (one way or another), and a size
constraint for the object (one way or another). So you need TWO size
constraints. It isn't immediately obvious to many people why TWO size
constraints are needed. This is related to a well-known optical illusion, relied
on in shooting movies such as "Honey, I Shrunk the Kids". Basically, you can't
tell the difference between a little thing moving around a little, up close, and a big
thing moving around a lot, farther away. You need the two size constraints to set
the relative proportions of the foreground (object) and background (camera).
The related message "Had to add a size constraint, none provided" is
informational, and does not indicate a problem.
If you have SynthEyes scenes with multiple cameras linked to one
another, you should keep the solver panel's Constrain button turned on to
maintain proper common alignment.
See also the Troubleshooting section.

3-D Review
After SynthEyes has solved your scene, you'll want to check out the paths
in 3-D, and see what an inserted object looks like. SynthEyes offers several
ways to do this: traditional fixed 3-D views, including a Quad orthogonal
isometric view, camera-view overlays, a user-controlled 3-D perspective
window, preview movies, and velocity-vs-time curves.

Quad View
If you are not already in Quad view, switch to it now on the toolbar.
You will see the camera/object path and 3-D tracker locations in each view.
You can zoom and pan around using the middle mouse button and scroll wheel.
You can scrub or play the shot back in real-time (in sections, if there is
insufficient RAM). See the View menu for playback rate settings.

Camera View Overlay


To see how an inserted object will look, switch to the 3-D control panel.
Turn on the Create tool (magic wand). Select one of the built-in mesh
types, such as Box or Pyramid. Click and drag in a viewport to drag out an
object. Often, two drags will be required: first to set the position and
breadth, then a second drag to set the height or overall scale. A good
coordinate-system setup will make it easy to place objects. To adjust object
size after creating it, switch to the scaling tool. Dragging in the viewport,
or using the bottommost spinner, will adjust overall object size. Or, adjust
one of the three spinners for each coordinate axis size.
When you are tracking an object and wish to attach a test object onto it
(horns onto a head, say), switch the coordinate-system button on the 3-D
Panel from its world setting to its object setting.
Note: the camera-view overlay is quick and dirty, not anti-aliased like the
final render in your animation package will be (it has "jaggies"), so the
overlay appears to have more jitter than the final render will. You can
sometimes get a better idea by zooming in on the shot and overlay as it plays
back (use Pan-To-Follow).
Shortly, we'll show how to use the Perspective window to navigate around
in 3-D, and even render an antialiased preview movie.

Checking Tracker Coordinates


If SynthEyes finds any trackers that are further than 1000 times the world
size from the origin, it will not save them as "solved." You can use the
Script menu's Select By Type script to locate and select Unsolved trackers.
You can change them to Zero-weighted to see where they might fall in 3-D, and
prevent them from affecting future solves.

Frequently these trackers are distant horizon points that should be changed
to Far, or they should be corrected or deleted if they are on a moving object
or are the result of some image artifact. Such points can also arise when a
tracker is visible for only a short time while the camera is not moving. The
Clean up trackers dialog can do this automatically.
Note: the too-far-away test can cause trouble if you have a small world
size setting but are using measured GPS coordinates. You should offset the
scene towards the origin using the Shift Constraints script.
You should also look for trackers that are behind the camera, which can
occur on points that should be labeled Far, or when the tracking data is incorrect
or insufficient for a meaningful answer.
After repairing, deleting, or changing too-far-away or behind-camera
trackers, you should use the Refine mode on the Solver panel to update the
solution, or solve it from scratch. Eliminating such trackers will frequently
provide major improvements in scene geometry.

Checking Tracker Error Curves


After solving, the tracker 3-D error channel will be available in the graph
editor. It is important to understand the 3-D error: it is the distance, in
pixels, on each frame, between the tracker's 2-D position, and the position
in the image of the solved 3-D tracker position. Let's work this through.
The solver looks at the whole 2-D history of a tracker to arrive at a
location such as X=3.2, Y=4.5, Z=0.1 for that tracker. On each frame, knowing
the camera's position and field of view, we can predict where the tracker
should be, if it really is at the calculated XYZ. That's the position at
which the yellow X is displayed in the camera view after solving. The 3-D
error is the distance between where the tracker actually is, and the yellow X
where it should be. If the tracking is good, the distance is small, but if
the tracking has a problem, the tracker is away from where it should be, and
the 3-D error is larger. Obviously, given this definition, there's no 3-D
error display until after the scene has been solved.
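The prediction step described above is a perspective projection. Here is a
minimal sketch, assuming a simplified pinhole camera at the origin looking
down +Z with no rotation; these conventions are illustrative assumptions, not
SynthEyes's internal ones.

```python
import math

# Simplified pinhole projection for illustration (camera at the origin
# looking down +Z, no rotation); not SynthEyes's internal conventions.
def project(point, hfov_deg, image_width, image_height):
    """Project a 3-D point (x, y, z), z > 0, to pixel coordinates."""
    x, y, z = point
    # Focal length in pixels, derived from the horizontal field of view.
    f = (image_width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    u = image_width / 2.0 + f * x / z
    v = image_height / 2.0 - f * y / z
    return u, v

def error_3d(tracked_uv, solved_point, hfov_deg, w, h):
    """Pixel distance between the tracked 2-D position and the projection
    of the solved 3-D position: the '3-D error' for one frame."""
    pu, pv = project(solved_point, hfov_deg, w, h)
    tu, tv = tracked_uv
    return math.hypot(tu - pu, tv - pv)

# A point on the optic axis projects to the image center, so a tracker
# found exactly there has zero 3-D error on this frame.
print(error_3d((960.0, 540.0), (0.0, 0.0, 5.0), 60.0, 1920, 1080))
```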
You should check these error curves using the fundamentals described
earlier in Pre-Solve Tracker Checking, but looking at the Error channel.
Here we've used isolate mode to locate a rather large spike in the blue
error curve of one of the trackers of a shot.

This glitch was easy to pick out: so large that the U and V velocities had
to be moved out of the way to keep them clearly visible. The deglitch tool
easily fixes it.
You can look at the overall error for a tracker from the Coordinate System
panel. This is easiest after setting the main menu's View/Sort by Error,
unselecting all the trackers (control/command-D), then clicking the down
arrow on your keyboard to sequence through the trackers from worst towards
best. In addition to the curves in the graph editor, you can see the numeric
error at the bottom of the tracker panel: both the total error, and the error
on the current frame. You can watch the current error update as you move the
tracker, or set it to zero with the Exact button.
For comparison, following is a tracker graph that has a fairly large error; it
tracks a very low contrast feature with a faint moving highlight and changing
geometry during its lifespan. It never has a very large peak error or velocity, but
maintains a high error level during much of its lifespan, with some clearly visible
trends indicating the systematic errors it represents.

And finally, a decent tracker with a typical error level:

The vertical scale is the same in these last three graphs. (Note that in
the 3rd one, the current time is to the left, before frame 160 or so, hence
the blue arrow.)

You can sort the trackers within the graph editor's Active Trackers node
by changing Sort Alphabetic to Sort By Error.


Do not blindly correct apparent tracking errors. A spike suggesting a
tracking error might actually be due to a larger error on a different tracker that
has grossly thrown off the camera position, so look around.

Check for a Smooth Camera Path


You should also check that the camera or object path is satisfactorily
smooth, using the camera nodes in the graph editor. We've closed the Active
Trackers node, and exposed the Camera & Objects node and the Camera01 node
within it. We're looking at a subset of the velocities of the camera: the X,
Y, and Z translation velocities.

There's a spike around frame 215-220. To find it, expose the Active
Trackers, select them all (control/command-A), and use Isolate mode
around that range of frames. The result:

We've found the tracker that causes the spike, and can use the deglitch
tool, or switch back to the tracker control panel and camera viewport,
unlock the tracker, correct it, then re-lock it.
Tip: In the capture above, the selected tracker is not visible in the
hierarchy view. You can see where it is in the scroll bar, though: it is
located at the white spot inside the hierarchy view's scroll bar. Clicking
at that spot on the scroll bar will pan the hierarchy view to show that
selected tracker.

If that is the last glitch to be fixed, switch to the Solve control panel,
and re-solve the scene using Refine mode.
You can also use the Finalize tool on the tracker control panel to
smooth one or more trackers, though significant smoothing can cause sliding. If

your trackers are very noisy, check whether film grain or compression
artifacts are at fault (these can be addressed by image-preprocessor blur),
verify that the interlace setting is correct, or see if you should fine-tune
the trackers.
Alternatively, you can fix glitches in the object path by using the
deglitch tool directly on the camera or moving object's curves, because it
works on any changeable channel. You can also move the object using the 3-D
viewports and the tools on the 3-D control panel, by repositioning the object
on the offending frame.
Warning #1: If you fix the camera path, instead of the tracker data, then
later re-solve the scene, corrections made to the camera path will be lost, and
have to be repeated. It is always better to fix the cause of a problem, not the
result.
If you have worked on the trackers to reduce jitter, but still need a
smoother path (after checking in your animation package), you can turn up the
Filter Size control on the Solver panel. A filter size of 2 or 3 should make
substantial reductions in jitter. After adjusting the control, switch to Refine mode
and hit Go! again to apply the filtering.
Warning #2: filtering the path this way increases the real error, and
causes sliding. Remember that your objective is to produce a clean insert in the
image, not produce an artificially smooth camera trajectory that works poorly.

Cleaning Up Trackers Quickly
SynthEyes offers the Clean Up Trackers dialog (on the Track menu) to
quickly identify bad trackers of several types. The dialog helps improve
tracking, identifying elements that cause glitches or may be creating
systematic errors; it is not a way to 'rescue' bad tracking data. You can
only use the dialog after successfully solving the scene. It will not run
before the scene is solved, because it operates by analyzing both 2-D and
3-D information. If you run it on a grossly incorrect solution, it will get
things wrong, and may delete trackers that are correct and keep trackers
that are wrong.

This dialog has a generally systematic organization, with a few exceptions. Each
category of trackers has a horizontal row of controls, and the number of trackers
in that category is in parentheses after the category name. A tracker can be a
member of several categories.
Down the left edge, a column of checkboxes controls whether or not each
category of trackers will be fixed. Mostly, trackers are fixed by deleting
them, but after you have identified them, you can also adjust them manually
if that is appropriate.
When clicked on, the Select buttons in the middle select that category of
trackers in the viewport. They flash as they are selected, making them easier to
find. At the top of the panel, notice that the Clean-up dialog can work on all the
trackers, or only the selected ones. It records the selected trackers as you open
the panel, and they are not affected by selecting trackers with these buttons.
At right is a column of spinners that determine the thresholds for whether
a tracker is considered to be far-ish, short-lived, etc. The initial values
of these thresholds are good starting points, but not the last word.

Part of the fun of the clean-up trackers dialog is to select a category of
trackers, then change the threshold up and down and see how many trackers
are affected, and where they are. It's a quick way to learn more about your
shot.
The following sections provide some more information about how to
interpret and use the panel. For full details, see the tracker clean-up reference.
Bad Frames
The bad-frames category locates individual frames on each tracker where
the 3-D error is over the threshold, if the hpix radio button is selected, or it finds
the top 2% of errors or whatever, if the % radio button is selected.
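The two threshold modes can be sketched as follows; this is an illustration
of the idea, and the exact percentile convention SynthEyes uses is an
assumption.

```python
# Illustrative sketch of the two bad-frame selection modes; the exact
# percentile convention SynthEyes uses is an assumption.
def bad_frames_hpix(errors, threshold):
    """'hpix' mode: frames whose 3-D error exceeds an absolute pixel value."""
    return [f for f, e in enumerate(errors) if e > threshold]

def bad_frames_percent(errors, percent):
    """'%' mode: the worst `percent` of frames, ranked by 3-D error."""
    count = max(1, round(len(errors) * percent / 100.0))
    worst = sorted(range(len(errors)), key=lambda f: errors[f], reverse=True)
    return sorted(worst[:count])

# Per-frame 3-D errors for one tracker, with two obvious glitch frames.
errs = [0.2, 0.3, 4.1, 0.25, 0.2, 2.8, 0.3, 0.2, 0.25, 0.3]
print(bad_frames_hpix(errs, 1.0))    # frames over 1 hpix: [2, 5]
print(bad_frames_percent(errs, 20))  # worst 20% of 10 frames: [2, 5]
```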
If you click the Show button, SynthEyes clears out the tracking results for
each bad frame. The intent is that you can see the overall pattern of bad
frames, by having the graph editor open in tracks mode, with squish-no keys
active. Each bad frame will be marked in red.


If you click the bad-frame's Select button, the trackers with bad frames
are selected in the viewport. This makes the tracks thicker in the squish view,
which is also helpful.
If you turn on the Delete checkbox for Bad Frames, there are two choices
for how to handle that: Disable and Clear. The clear option does what happens
during Show: it clears out the tracking results so it looks like the frame was
tracked, but the feature was not found, resulting in the red section in the squish
view. The disable option re-keys the tracker's enable track, so that there is no
attempt to track on those previously-bad frames. There will be no track at all on
those frames in the squish view.
As a result, the Clear choice is better when you want to see where there
were problems, and potentially go back and fix the problems. The Disable option
is better when you want to permanently shut down the trackers on those spots.
Be aware that though a frame of a tracker may be bad, and you are better
off without it and the glitch it causes, having a missing frame can also create
glitches—the higher the overall error (and poorer the fit), the larger the glitch
caused by a missing frame. A missing frame on a fairly unique tracker close to
the camera will cause a bigger glitch than a missing frame on a tracker far from
the camera that is one of many similar trackers. Manually repairing the bad
frames will always produce the best results.
Far-ish Trackers
The tracker clean-up dialog detects trackers with too little parallax for an
adequate distance measurement. Consider a tracker 1 meter from the camera,
and the camera moving 10 cm to its right. The position of the tracker in the two
images from the camera will be different, tens or hundreds of pixels apart. What if
the tracker is 1 kilometer from the camera, and the camera moved the same 10

cm? The tracker may be located in exactly the same pixel in both images, and no
distance can be determined to the tracker 1 km away.
Accordingly, far-ish-ness (you will not find this in the dictionary) can be
measured in terms of the numbers of pixels of perspective, and the threshold is
the number of pixels of perspective change produced by the camera motion over
the lifespan of the tracker.
As you slowly increase the far-ish-ness threshold, you'll see trackers
further and further from the camera become labeled as far.
But, you may also find a few trackers close to the camera that are also
labeled Far-ish, even at a low threshold. What has happened, that these nearby
trackers are far-ish? Simple: either they are short-lived, or the camera was not
translating much during their lifetime. For example, there may be many far-ish
trackers where there is a tripod-type "hold" region during a shot.
Far-ish trackers can not really be fixed. If the same feature is tracked
earlier or later in the shot, a short-lived tracker might be combined with its longer-
lived siblings. But otherwise, they may only be made Far (in which case their
solve will only produce a direction), or they may be deleted.
High-Error Trackers
To be a high-error tracker, a tracker must meet either of two criteria:
 the percentage of a tracker's lifespan that consists of bad frames must be
over a threshold, or
 the average hpix RMS error over the lifespan of the tracker, as appears for
the tracker on the Coordinate panel, must be over a threshold.


The percentage threshold appears to the right of the high-error trackers
line as usual. The hpix error threshold appears underneath it, to the right of the
Unsolved/Behind category, an otherwise empty space because that category
requires no thresholds.
As an example of the first criterion, consider a tracker that is visible for 20
frames. However, 8 of those frames are "bad frames" as defined for that
category. The percentage of bad frames is 8 out of 20, or 40%, and at the
standard threshold of 30% the tracker would be considered high-error and
eligible for deletion. Typically such trackers have switched to an adjacent
feature for a substantial portion of their lifespan.
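The two criteria can be expressed as a small check. The 30% figure comes from the text above; the hpix threshold default here is just a placeholder for the panel's spinner value, not a SynthEyes default.

```python
def is_high_error(bad_frames, lifespan_frames, rms_hpix,
                  pct_threshold=30.0, hpix_threshold=1.0):
    """Flag a tracker as high-error if either criterion is met."""
    bad_pct = 100.0 * bad_frames / lifespan_frames
    return bad_pct > pct_threshold or rms_hpix > hpix_threshold

# The example from the text: 8 bad frames out of 20 is 40%, over 30%.
print(is_high_error(8, 20, rms_hpix=0.3))   # True
print(is_high_error(2, 20, rms_hpix=0.3))   # False
```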
Unlocking the User Interface
The clean-up trackers dialog is modal, meaning you can not adjust any
other controls while the dialog is displayed. However, it is often helpful to adjust
the user interface with the dialog open, for example to configure the graph editor
or to locate a tracker in the viewports.


The clean-up dialog does offer a frame spinner along the bottom row,
which allows you to rapidly scrub through the shot looking for particular trackers.

The dialog also offers the Unlock UI button, which temporarily makes the
dialog modeless, permitting you to adjust other user-interface controls, bring up
new panels, etc.
The keyboard accelerators do not work when Unlock UI is turned on. You
need to use the main menu controls instead.
The "selected trackers" list processed by Clean Up Trackers is reloaded
each time you turn off Unlock UI—if you are using the Selected Trackers option,
but need to change which trackers those are, you can unlock the user interface
and change them. But, you must have turned off all the Select buttons first, or
they will affect what happens.

Setting Up a Coordinate System
You should tell SynthEyes how to orient, position, and size the trackers
and camera path in 3-D. Historically, people learning tracking have had a hard
time with this because they do not understand what the problem is, or even that
there is a problem at all. If you do not understand what the problem is, what you
are trying to do, it is pretty unlikely you will understand the tools that let you solve
it. What follows is an attempt to give you a tangible explanation. It's silly, but
please read carefully! Please also be sure to check out the tutorials on the web
site about coordinate systems.

SynthEyes and the Coordinate Measuring Machine


Pretend SynthEyes is a 2D-to-3D-converting black box on your desk that
manufactures a little foam-filled architectural model of the scene filmed by your
shot. This little model even has a little camera on a track showing exactly where
the original camera went, and for each tracker, a little golf pole and flag with the
name of the tracker on it.
Obviously SynthEyes is a pretty nifty black box. One problem, though: the
foam-filled model is not in your computer yet. It fell out of the output hopper, and
is currently sitting upside down on your desk.
Fortunately, you have a nifty automatic coordinate measuring machine,
with a little robot arm that can zip around measuring coordinates and putting
them into your computer.
You open the front door of the coordinate measuring machine and see the
inside looks like the inside of a math teacher's microwave oven, with nice
graph-paper coordinate grids on the inside of the door, bottom, top, and sides,
and you can see through the glass sides if you look carefully. Those are the
coordinates measured by the machine, and where things will show up inside
your animation package. The origin is smack in the middle of the machine.
So you think "Great", casually throw your model, still upside-down, into the
measuring machine, and push the big red button labeled "Good enough!" The
machine whirs to life and seconds later, your animation package is showing a
great rendition of your scene—sitting cock-eyed upside down in the bottom of
your workspace. That is not what you wanted at all, but hey! That's what you got
just throwing your model into the measuring machine all upside down.
You open up the door, pull out your model, flip it over, put it back in, and
close the door. Looking at the machine a little more carefully, you see a green
button labeled "Listen up" and push it. Inside, a hundred little feet march out a
small door, crawl under the model, and lift it up from the bottom of the machine.
Since it is still pretty low, you shout "A little higher, please." The feet cringe
a little—maybe the shouting wasn't necessary—but the little feet lift your model a
bit higher. That's a good start, but now "More to the right. Even some more."
You're making progress, it looks like the model might wind up in a better place


now. You try "Spin around X" and sure enough the feet are pretty clever. After
about ten minutes of this, though the model is starting to have its ground plane
parallel to the bottom of the coordinate measuring machine, you've decided that
the machine is really a much better listener than you are a talker, and you have
learned why the red button is labeled "Good enough!" Giving up, you push it, and
you quickly have the model in your computer, just like you had positioned it in the
machine.
Hurrah! You've accomplished something, albeit tediously. This was an
example of Manual Alignment: it is usually too slow and not too accurate, though
it is perfectly feasible.
Perhaps you haven't given the little feet enough credit.
Vowing to do better, you try something trickier: "Feet, move Tracker37 to
the origin." Sure enough, they are smarter than you thought.
As you savor this success, you notice the feet starting to twiddle their toes.
Apparently they are getting bored. This definitely seems to be the case, as they
slowly start to push and spin your model around in all kinds of different directions.
All is not lost, though. It seems they have not forgotten what you told
them, because Tracker37 is still at the origin the entire time, even as the rest of
the model is moving and spinning enough to make a fish sea-sick. Because they
are all pushing and pulling in different directions, the model is even pulsing
bigger and smaller a bit like a jellyfish.
Hoping to put a stop to this madness, you bark "Put Tracker19 on the X
axis." This catches the feet off guard, but once they calm down, they sort it out
and push and pull Tracker19 onto the X axis.
The feet have done a good job, because they have managed to get
Tracker19 into place without messing up Tracker37, which is still camped at the
origin.
The feet still are not all on the same page yet, because the model is still
getting pushed and pulled. Tracker37 is still on the origin, and Tracker19 is on
the X axis, but the whole thing is pulsing bigger and smaller, with Tracker19
sliding along the axis.
This seems easy enough to fix: "Keep Tracker19 at X=20 on the X axis."
Sure enough, the pulsing stops, though the feet look a bit unhappy about it. [You
could say "Make Tracker23 and Tracker24 15 units apart" with the same effect,
but different overall size.]
Before you can blink twice, the feet have found some other trouble to get
into: now your model is spinning around the X axis like a shish-kebab on a
barbecue rotisserie. You've got to tell these guys everything!
As Tracker5 spins around near horizontal, you nail it shut: "Keep Tracker5
on the XY ground plane." The feet let it spin around one more time, and


grudgingly bring your model into place. They have done everything you told
them.
You push "Good enough" and this time it is really even better than good
enough. The coordinate-measuring arm zips around, and now the SynthEyes-
generated scene is sitting very accurately in your animation package, and it will
be easy to work with.
Because the feet seemed to be a bit itchy, why not have some fun with
them? Tracker7 is also near the ground plane, near Tracker5, so why not "Put
Tracker7 on the XY ground plane." Now you've already told them to put Tracker5
on the ground plane, so what will they do? The little feet shuffle the model back
and forth a few times, but when they are done, the ground plane falls in between
Tracker5 and Tracker7, which seems to make sense.
That was too easy, so now you add "Put Tracker9 at the origin." Tracker37
is already supposed to be at the origin, and now Tracker9 is supposed to be
there too? The two trackers are on opposite sides of the model! Now the feet
seem to be getting very agitated. The feet run rapidly back and forth, bumping
into each other. Eventually they get tired, and slow down somewhere in the
middle, though they still shuffle around a bit.
As you watch, you see small tendrils of smoke starting to come out of the
back of your coordinate measuring machine, and quickly you hit the Power
button.

Back to Reality
Though our story is far-fetched, it is quite a bit more accurate than you
might think. Though we'll skip the hundred marching feet, you will be telling
SynthEyes exactly how to position the model within the coordinate system.
And importantly, if you don't give SynthEyes enough information about
how to position the model, SynthEyes will take advantage of the lack of
information: it will do whatever it finds convenient for it, which rarely will be
convenient for you. If you give SynthEyes conflicting information, you will get an
averaged answer—but if the information is sufficiently conflicting, it might take a
long time to provide a result, or even throw up its hands and generate a result
that does not satisfy any of the constraints very well.
There are five main methods for setting up the coordinates, which we will
discuss in following sections:
 Manually
 Using the 3-point method
 Configuring trackers individually
 Alignment Lines
 Constrained camera path


The alignment line approach is used for tripod-mode and even single-
frame lock-off shots. The constrained camera path methods (for experts!) are
used when you have prior knowledge of how the shot was obtained from on-set
measurements.
One last point: you might wonder which trackers get selected to be
constrained: Tracker37, Tracker19, etc. You will pick the trackers to create the
coordinate system that you want to see in the animation/compositing package.
You must decide what you want! If the shot has a floor and you have
trackers on the floor, you probably want those trackers to be on the floor in your
chosen coordinate system. Your choice will depend on what you are planning to
do later in your animation or compositing package. It is very important to realize:
the coordinate system is what YOU want to make your job easier. There is no
correct answer, there is no coordinate system that SynthEyes should be picking if
only it was somehow smarter… They are all the same. The coordinate measuring
machine is happy to measure your scene for you, no matter where you put it!
You don't need to set up a coordinate system if you don't want to, and
SynthEyes will plough ahead happily. But picking one will usually make inserting
effects later on easier. You can do it either after tracking and before solving, or
after solving.
Hint: if you will be exporting to a compositing package, they often
measure everything, including 3-D coordinates, in terms of pixels, not inches,
meters, etc. Be sure to pick sizes for the scene that will work well in pixels. While
you might scale a scene for an actor 2m tall, if you export to a compositor and
the actor is two pixels tall that will rarely make sense.

Three-Point Method
Here's the simplest and most widely applicable way to set up a coordinate
system. It is strongly recommended unless there is a compelling reason for an
alternative. SynthEyes has a special button to help make it easy. We'll describe
how to use it and what it does, so that you can understand it and modify its
settings as needed.

Switch to the Coordinate System control panel. Click the *3 button; it will
now read Or. Pick one tracker to be the coordinate system origin (i.e. at X=0,
Y=0, Z=0). Select it in the camera view, 3-D viewport, or perspective window. On
the coordinate system panel, it will automatically be changed from Unlocked to
Origin. Again, any tracker can be made the origin, but some will make more
sense and be more convenient than others.
The *3 button will now read LR (for left/right). Pick a second tracker to fall
along the X axis, and select it. It will automatically be changed from Unlocked to
Lock Point; after the solution it will have the X/Y/Z coordinates listed in the three
spinners. Decide how far you want it to be from the origin tracker, depending on
how big you want the final scene to be. Again, this size is arbitrary as far as


SynthEyes is concerned. If you have a measurement from the set, and want a
physically-accurate scene, this might be the place to use the measurement. One
way or another, decide on the X axis position. You can guess if you want, or you
can use the default value, 20% of the world size from the Solver panel. Enter the
chosen X-axis coordinate into the X coordinate field on the control panel.
The *3 button now reads Pl. Pick a third point that should be on the
ground plane. Again, it could be any other tracker—except one on the line
between the origin and the X-axis tracker. Select the tracker, and it will be
changed from Unlocked to On XY Plane (if you are using a Z-Up coordinate
system, or On XZ Plane for Y-up coordinates). This completes the coordinate
system setup, so the *3 button will turn off.
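Geometrically, those three choices (origin, on-axis point, ground-plane point) pin down an orthonormal frame. Here is a minimal numeric sketch of that idea, assuming a Z-up system; this is an illustration only, not SynthEyes' actual solver code.

```python
import numpy as np

def three_point_frame(a, b, c):
    """Build an orthonormal frame from origin tracker a, on-X-axis
    tracker b, and ground-plane tracker c (Z-up convention)."""
    x = (b - a) / np.linalg.norm(b - a)
    # Remove the X component of (c - a); the remainder lies along +Y.
    y = (c - a) - np.dot(c - a, x) * x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)
    # Rows are the new axes, so p_new = R @ (p - a).
    return np.stack([x, y, z]), a

a = np.array([2.0, 1.0, 3.0])   # made-up tracker positions
b = np.array([5.0, 1.0, 3.0])
c = np.array([2.0, 4.0, 3.0])
R, origin = three_point_frame(a, b, c)
print(R @ (b - origin))   # b lands on the +X axis: [3. 0. 0.]
print(R @ (c - origin))   # c lands in the XY plane: [0. 3. 0.]
```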
The sequence above places the second point along the X axis, running
from left to right in the scene. If you wish to use two trackers aligned stage front
to stage back, you can click the button from LR (left/right) to FB (front/back)
before clicking the second tracker. In this case, you will adjust the Y or Z
coordinate value, depending on the coordinate system setting.
To provide the most accurate alignment, you should select trackers
spread out across the scene, not lumped in a particular corner.
Depending on your desired coordinate system, you might select other axis
and plane settings. You can align to a back wall, for example. For the more
complex setups, you will adjust the settings manually, instead of using *3.
You can lock multiple trackers to the floor or a wall, say if there are
tracking marks on a green-screen wall. This is especially helpful in long traveling
shots. If you are tracking objects on the floor, track the point where the object
meets the floor; otherwise you'll be tracking objects at different heights from the
floor (more on this in a little).

Size Constraints
As well as the position and orientation of your scene, you need to control
the size of the reconstructed scene. There are three general ways to do this:
1. Have two points that are locked to (different) xyz coordinates, such as an
origin (0,0,0) and a point at (20,0,0), as in the recommended method
described above, or,
2. With a distance (size) constraint between two points.
3. With an inter-ocular constraint for stereo shots.
If you want to use one collection of trackers to position and align the
coordinate system, but use an on-set measurement between two other trackers,
you can use a distance constraint.
You can set up the distance constraint as follows. Suppose you have two
trackers, A and B, and want them 20 units apart, for example. Open the
coordinate system control panel. Select tracker A, ALT-click (Mac:
Command-click) on tracker B to set it as the target of A. Set the distance (Dist.)
spinner to 20.
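The effect of a distance constraint on scale is simple arithmetic; here is a sketch with made-up coordinates (an illustration, not SynthEyes code):

```python
import math

def scale_for_distance(pa, pb, desired=20.0):
    """Uniform scale factor that puts trackers A and B exactly
    `desired` units apart (applied to the whole scene)."""
    return desired / math.dist(pa, pb)

pa = (0.0, 0.0, 0.0)             # tracker A (say, the origin tracker)
pb = (4.0, 3.0, 0.0)             # tracker B, currently 5 units away
s = scale_for_distance(pa, pb)   # scale factor 4.0
print(tuple(s * c for c in pb))  # B ends up at (16.0, 12.0, 0.0)
```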
Note: if you set up a distance constraint and have used the *3 tool, you
should select the second point, which is locked to 20,0,0, and change its mode
from Lock Point to On X Axis (On Y Axis for front/back setups). Otherwise, you
will have set up two size constraints simultaneously, and unless both are right,
you will be causing a conflict.

Configuring Constraints Directly


Each tracker has individual constraints that can be used to control the
coordinate system, accessed through the Coordinate System Control panel. The
*3 button automatically configures these controls for easy use, but you can
manually configure them to achieve a much wider variety of effects—if you keep
in mind the mental picture of the busy little feet in the coordinate measuring
machine. Those feet do whatever you tell them, but are happy to wreak havoc in
any axis you do not give them instructions for.
As examples of other effects you can achieve, you can use the Target
Point capability to constrain two points to be parallel to a coordinate axis, in the
same plane (parallel to XY, YZ, or XZ), or to be the same. For example, you can
set up two points to be parallel to the X axis, two other points to be parallel to the
floor, and a fifth point to be the origin.
Suppose you have three trackers that you want to define the back wall (Z
up coordinate system).
1) Go to the coordinate system control panel
2) If the three trackers are A, B, and C, select B, then hold down
ALT (Mac: Command) and click A.
3) Change the constraint type from Unlocked to Same XZ plane.
4) Select C, and ALT-click (Command) on A, and set it to Same XZ
Plane also.
This has nailed down the translation, but rotation only partially—the feet
will be busy. You also need to specify another rotation, since B and C can spin
freely around A so far (or around the Y axis about any point in the plane).
You might have two other trackers, D and E, that should stack up
vertically. Select E and Alt/Command-Click tracker D and set it to Parallel to Z
Axis (or X axis if they should be horizontal).
Details of Lock Modes
There are quite a few different constraint (lock) modes that can be
selected from the drop-down list. Despite the fair number of different cases, they
all can be broken down to answering two simple questions: (1) which coordinates
(X, Y, and/or Z) of the tracker should be locked, and (2) to what values.


The first question can have one of eight different answers: all the
combinations of whether or not each of the three coordinate axes is locked,
ranging from none (Unlocked) to all (Lock Point). Rather than listing each of the
combinations of which axes are locked, the list really talks about which axis is
NOT locked. For example, an X Axis lock really locks Y and Z, leaving X
unlocked. Locking to the XZ plane actually locks only Y. The naming addresses
WHAT you want to do, not HOW SynthEyes will achieve it.
The second question has three possible answers: (a) to zero, (b) to the
corresponding "Seed and Lock" spinner, or (c) the corresponding coordinate from
the tracker assigned as the Target Point. Answer (c) is automatically selected if a
target point is present, while (a) is selected for "On" lock types, and (b) for "Any"
lock types. Use the Any modes when you have some particular coordinates you
want to lock a tracker to, for example, if a tracker is to be placed 2 units above
the ground plane.
Watch Out! If you select several trackers, some with targets, some
without, the lock type list will be empty. Either select fewer trackers, or right-click
the Target button to clear the target tracker setting from all selected trackers.
Here's the total list:

Lock Mode        Axes Locked   To What
Unlocked         None          Nothing
Lock Point       X, Y, Z       Spinners
Origin           X, Y, Z       Zero
On X Axis        Y, Z          Zero
On Y Axis        X, Z          Zero
On Z Axis        X, Y          Zero
On XY Plane      Z             Zero
On XZ Plane      Y             Zero
On YZ Plane      X             Zero
Any X Axis       Y, Z          Spinners
Any Y Axis       X, Z          Spinners
Any Z Axis       X, Y          Spinners
Any XY Plane     Z             Spinners
Any XZ Plane     Y             Spinners
Any YZ Plane     X             Spinners
Identical Place  X, Y, Z       Target
|| X Axis        Y, Z          Target
|| Y Axis        X, Z          Target
|| Z Axis        X, Y          Target
Same XY Plane    Z             Target
Same XZ Plane    Y             Target
Same YZ Plane    X             Target
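The pattern behind the table (which axes get locked, and where the values come from) can be captured in a small lookup. This encoding is purely illustrative, not anything inside SynthEyes, and only a few representative modes are shown.

```python
# Illustrative encoding of a few lock modes: (axes locked, value source).
LOCK_MODES = {
    "Unlocked":      ((), None),
    "Lock Point":    (("X", "Y", "Z"), "spinners"),
    "Origin":        (("X", "Y", "Z"), "zero"),
    "On X Axis":     (("Y", "Z"), "zero"),
    "On XY Plane":   (("Z",), "zero"),
    "Any X Axis":    (("Y", "Z"), "spinners"),
    "|| X Axis":     (("Y", "Z"), "target"),
    "Same XY Plane": (("Z",), "target"),
}

# An "On X Axis" lock leaves X free and pins Y and Z to zero.
print(LOCK_MODES["On X Axis"])   # (('Y', 'Z'), 'zero')
```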

Configuring Constraints for Tripod-Mode Shots


When the camera is configured in tripod mode, a simpler coordinate-
system setup can be used. In tripod mode, no overall sizing is required, and no
origin is required or allowed. The calculated scene must only be aligned, though
even that is not always necessary.
The simplest tripod alignment scheme relies on finding two trackers on the
horizon, or at least ones that you'd like to make the horizon. Of the two, you assign
one to be the X axis, say, by setting it up as a Lock to the coordinates X=100,
Y=0, Z=0, for the normal World Size of 100. If the world size was 250, the lock
point would be (250, 0, 0): a Far tracker should always be locked to coordinates
where X² + Y² + Z² equals the world size squared. This is not necessary for the
constraint to work correctly, but it is needed for the tracker to be displayed
correctly.
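The rule above amounts to scaling the lock point's direction vector to world-size length; a quick sketch (illustrative only):

```python
import math

def far_lock_coordinates(direction, world_size=100.0):
    """Scale a direction vector so the lock point sits at world-size
    distance, i.e. X^2 + Y^2 + Z^2 equals the world size squared."""
    length = math.sqrt(sum(c * c for c in direction))
    return tuple(world_size * c / length for c in direction)

print(far_lock_coordinates((1.0, 0.0, 0.0)))          # (100.0, 0.0, 0.0)
print(far_lock_coordinates((1.0, 0.0, 0.0), 250.0))   # (250.0, 0.0, 0.0)
print(far_lock_coordinates((3.0, 4.0, 0.0)))          # (60.0, 80.0, 0.0)
```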
With one axis nailed down, the other tracker only needs to be labeled "On
XY plane," say (or XZ in Y-Up coordinates).
Tip: if you have a tripod shot that pans a large angle, 120 degrees or
more, small systematic errors in the camera, lens, and tracking can accumulate
to cause a banana-shaped path. To avoid this, set up a succession of trackers
along the horizon or another straight line, and peg them in place, or use a roll-
axis lock.
Constrained Points View
After you have set up your constraints, you should check your work using
the Constrained Points viewport layout, as shown here:


This is the view with the recommended (front/back-variant) constraint
setup in Z-Up coordinates, as applied to a typical shot, after solving. Only
trackers with constraints are listed, along with what they are locked to
(coordinates or another tracker). The solved position is shown, along with the 3-D
error of the constraint. For example, if a tracker is located at (1,0,0) but is locked
to (0,0,0), the 3-D error will be 1. It will have a completely different 2-D error in
hpix on the coordinate system panel.
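The 3-D error in this view is just the straight-line distance between the solved position and the locked coordinates, as in the (1,0,0) example above:

```python
import math

def constraint_error_3d(solved, locked):
    """Euclidean distance between the solved tracker position and
    the coordinates the constraint locks it to."""
    return math.dist(solved, locked)

print(constraint_error_3d((1.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # 1.0
```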
The constrained points view lets you check your constraints after solving,
giving you the resulting 3-D errors, or check your setup before solving, without
any error available yet. You can select the trackers directly from this view and
tweak them with the coordinate system panel displayed.
Upside-down Cameras: Selecting the Desired Solution
Many coordinate system setups can be satisfied in two or more different
ways: completely different camera and tracker positions that are equally good
camera matches, and satisfy the constraints.
To review, the most basic 3-point setup consists of a point locked to the
origin and a point locked to a specific coordinate on the X axis, plus a third point
locked to be somewhere on the ground plane (XY plane for Z-up). This setup
can be satisfied two different ways. If you start from one solution, you can get the
other by rotating the entire scene 180 degrees around the X axis. If the camera is
upright in the first solution, it will be upside-down in the second. The third point
will have a Y coordinate that is positive in one, and negative in the other (for Z-
Up coordinates).
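The two-solution ambiguity is easy to see numerically: rotating every point 180 degrees about the X axis leaves the origin and X-axis constraints satisfied while flipping the sign of the third point's Y (and Z). A small sketch with made-up coordinates:

```python
def rotate_180_about_x(p):
    """180-degree rotation about the X axis: X unchanged, Y and Z negated."""
    x, y, z = p
    return (x, -y, -z)

origin_pt = (0.0, 0.0, 0.0)     # still satisfies the origin lock
x_axis_pt = (20.0, 0.0, 0.0)    # still satisfies the X-axis lock
ground_pt = (5.0, 3.0, 0.0)     # the third point's Y flips sign

print(rotate_180_about_x(x_axis_pt))  # (20.0, -0.0, -0.0): same place
print(rotate_180_about_x(ground_pt))  # (5.0, -3.0, -0.0)
```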
If you take this basic 3-point setup, change the setup of the second
point from a lock to (20,0,0) to "On X Axis," and add a separate distance (scale)
constraint, there are now four different possible solutions: the different
combinations of the second point‘s X being positive or negative, and the third
point‘s Y coordinate being positive or negative.
SynthEyes offers two ways to control which solution is used. Without
specific instructions, SynthEyes uses the solution where the camera is upright,
not upside-down. That handles the most common case, but if you need the
camera upside-down, or have a setup with four solutions, you need to be more
specific.
SynthEyes lets you specify whether a coordinate should be positive or
negative (a polarity), for each coordinate of each constrained tracker. The
Coordinate System Control panel has buttons next to the X, Y, and Z spinners.


The X button, for example, sequences from X to X+ to X-, meaning that X can
have either polarity, that X must be positive, or that X must be negative.
If there are two solutions, you should set up a polarity for an axis of one
point; if there are four solutions, set the polarity for one axis of two points. For
example, set a polarity for Y of the 3rd tracker, and an X polarity for the 2nd (on-
axis) tracker.
Subtleties and Pitfalls
The locks between two trackers are inherently bidirectional. If you lock A
to B, do not lock B to A. Similarly, avoid loops, such as locking A to B, B to C,
and C to A.
If you want to lock A, B, C, and D all to be on the same ground plane with
the same height, say, it is enough to lock B, C, and D all to A.
When you choose coordinates, you should keep the scene near the origin.
If your scene is 2000 units across, but it is located 100000 units from the origin, it
will be inconvenient to work with, and runs the risk of numeric inaccuracy. This
can happen after importing scene coordinates based on GPS readings. You can
use the Track/Shift Constraints tool to offset the scene back towards the origin.

Alignment Versus Constraints


With a small, well-chosen set of constraints, there will be no conflict
among them: they can all be satisfied, no matter the details of the point
coordinates. This is the case for the 3-tracker recommended method.
However, this is not necessarily the case: you could assign two different
points to both be the origin. Depending on their relative positions, this may be
fine, or a mistake.
SynthEyes has two main ways to approach such conflicts: treating the
coordinate system constraints as suggestions, or as requirements, as controlled
by the Constrain checkbox on the Solver control panel.


For a more useful example, consider a collection of trackers for features
on the set floor. You can apply constraints telling SynthEyes that the trackers
should be on the floor, but some may be paint spots, and some may be pieces of
trash a small distance above the floor.
With the Constrain box off, SynthEyes solves the scene, ignoring the
constraints, then applies them at the end, only by spinning, rotating, and scaling
the scene. In the example of trackers on a floor, the trackers are brought onto an
average floor plane, without affecting their relative positions. The model is
fundamentally not changed by the constraints.
On the other hand, with the Constrain checkbox on, the constraints are
applied to each individual tracker during the solving process. Applied to trackers


on a floor, the vertical coordinate will be driven towards zero for each and every
such tracker, possibly causing internal conflict within the solving process.
If you have tracked 3 shadows on the floor, and the center of one tennis
ball sitting on the floor, you have a problem. The shadows really are on the floor,
but the ball is above it. If all four height values are crunched towards zero, they
will be in conflict with the image-based tracking data, which will be attempting to
place the tennis ball above the shadows.
You can add poorly chosen locks, or so many locks, that solving becomes
slower, due to additional iterations required, and may even make solving
impossible, especially with lens distortion or poor tracking. By definition, there will
always be larger apparent errors as you add more locks, because you are telling
SynthEyes that a tracker is in the wrong place. Not only are the tracker positions
affected, but the camera path and field of view are affected, trying to satisfy the
constraints. So don‘t add locks unless they are really necessary.
Generally, it will be safer to leave the Constrain checkbox off, so that
solving is not compromised by incorrectly configured constraints. You will want to
turn the checkbox on when using multiple-shot setups with the Indirectly solving
method, or if you are working from extensive on-set measurements. It must be
on to match a single frame.
Pegged Constraints
With the constraints checkbox on, SynthEyes attempts to force the
coordinate values to the desired values. It can sometimes be helpful to force the
coordinates to be exactly the specified value, by turning on the Peg button on
the tracker‘s Coordinate system panel.
Pegs are useful if you have a pre-existing scene model that must be
matched exactly, for example, from an architectural blueprint, a laser-rangefinder
scan, or from global positioning system (GPS) coordinates. Pegging GPS
coordinates is especially useful in long highway construction shots, where overall
survey accuracy must be maintained over the duration of the shot.
Pegs are active only when the Constrain checkbox is on, and you can only
peg to numeric coordinates or to a tracker on a different camera/object, if the
tracker's camera/object is Indirectly solved. You can not peg to a tracker on the
same camera/object; this will be silently ignored.
The 3-D error will be zero when you look at a pegged tracker in the
Constrained Points view. However, the error on the coordinate system or tracking
panel, as measured in horizontal pixels, will be larger! That is because the peg
has forced the point to be at a location different from what the image data would
suggest.
Constrain Mode Limitations and Workflow
The constrain mode has an important limitation, while initially solving a
shot in Automatic solving mode: enough constrained points must be visible on
the solving panel‘s Begin and End frames to fully constrain the shot in position

119
SETTING UP A COORDINATE SYSTEM

and orientation. It can not start solving the scene and align it with something it
can not see yet; that's impossible!
SynthEyes tries to pick Begin and End frames where the constrained
points are simultaneously visible, but often that's just not possible when a long
shot moves through an environment, such as driving down a road. The error
message "Can't locate satisfactory initial frames" will be produced, and solving
will stop.
In such cases, the Constrain mode (checkbox) must be turned off on the
solving panel, and a solution will easily be produced, since the alignment will be
performed on the completed 3-D tracker positions.
You can now switch to the Refine solving mode, turn on the Constrain
checkbox, and have your constraints and pegs enforced rigorously. As long as
the constraints aren't seriously erroneous, this refine stage should be quick and
reliable.
Here‘s a workflow for complex shots with measured coordinates to be
matched:
1. Do the 2-D tracking (supervised or automatic).
2. Set up your constraints (if you have a lot of coordinates, you can
read them from a file).
3. Do an initial solve, with Constrain off.
4. Examine the tracker graphs; assess and refine the tracking.
5. Examine the constrained points view to look for gross errors
between the calculated and measured 3-D locations, which are
usually typos, or associating the 3-D data with the wrong 2-D
tracker. Correct as necessary.
6. Change the solver to Refine mode.
7. Turn on the Constrain checkbox.
8. Solve again, and verify that it succeeded.
9. Turn on the Peg mode for tracker constraints that must be achieved
exactly.
10. Solve again.
11. Make final checks that pegs are pegged, etc.
With this approach, you can use Constrain mode even when constrained
trackers are few and far between, and you get a chance to examine the tracking
errors (in step 4) before your constraints have had a chance to affect the solution
(i.e., possibly messing it up, making it harder to separate bad tracking from bad
constraints).
Note: if you have survey data that you are matching to a single frame, you
must use Seed Points mode and you must turn on Constrain.


Tripod and Lock-off Shot Alignment


Tripod-mode shots present special issues for alignment, since by their
nature a full 3-D solution is not available. Tripod shot tracking provides the pan,
tilt, and roll of the camera versus time, and the direction to the trackers, but not
the distance to the trackers. So if you need to place objects in the shot in 3-D, it
can be difficult to know where to place them. The good news is that wherever
you put them, they will "stick," so the primary concern is to locate items so that
they match the perspective of the shot.

SynthEyes's Lens Control Panel contains a perspective-matching tool to help,
with the requirement that your shot contain several straight lines.
Depending on the situation, two or more must be parallel. Here's an example
(we'll tell how to set it up in a later section):

There are parallel lines under the eaves and window, configured to be
parallel to the X axis. Vertical (Z) lines delineate edges of the house and door
frame. The selected line by the door has been given a length to set the overall
scale.
The alignment tool gives you camera placements and FOV for completely
locked-off shots, even a single still photograph such as this.
What Lines Do I Need?
The alignment solver can be used after a shot has been solved and a lens
field of view (FOV) determined; it might be used without a solve, with a known
FOV; or it might be used to determine the lens FOV. In each case it will
determine the camera placement as well.
If the FOV is known, either from a solve or an on-set measurement, you
will need to set up at least two lines, which must be parallel to two different
coordinate axes in 3-D (X, Y, or Z). This means they must not be parallel to each
other (because then they would be parallel to the same axis). You may have any
number of additional lines.
When the FOV is not known, you must define at least three lines. Two of
them must be parallel to each other and to a coordinate-system axis. The third
line must be parallel to a different coordinate system axis. You may have
additional lines parallel to any of the three coordinate system axes.
Note: SynthEyes permits unoriented lines to be used to help find the lens
distortion. Unoriented lines do not have to be aligned with any of the desired
coordinate system axes—but do not count at all towards the count of lines
required for alignment.
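The line-count rules above can be summarized in a small sketch. This is a
hypothetical helper for thinking through the rules, not part of SynthEyes; it
assumes each line is labeled with the axis it is parallel to, or None if unoriented.

```python
def enough_alignment_lines(line_axes, fov_known):
    """Check the minimum-line rules for the alignment solver.

    line_axes: one entry per line: 'X', 'Y', or 'Z' for the axis the line
    is parallel to, or None for an unoriented (distortion-only) line.
    Unoriented lines never count toward the alignment requirements.
    """
    axes = [a for a in line_axes if a is not None]
    counts = {a: axes.count(a) for a in set(axes)}
    if len(counts) < 2:        # need lines on two different axes
        return False
    if fov_known:
        return True            # two lines on different axes suffice
    # FOV unknown: additionally need two lines parallel to the same axis
    return max(counts.values()) >= 2
```

For example, with an unknown FOV, two X-parallel lines plus one Z-parallel line
pass the check, but three lines on three different axes do not, because no two of
them are parallel to each other.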
Whether the FOV is known to start or not, two of the lines on different
axes must be labeled as on-axis, meaning that the scene will be moved around
until those lines fall along the respective axis. For example, you might label one
line as On X Axis and another as On Y Axis. If you do not have enough on-axis
lines, SynthEyes will assign some automatically, though you should review those
choices.
The intersection of the on-axis lines will be the origin of the coordinate
system. In the example above, the origin will be at the bottom-right corner of the
left-most of the two horizontal windows above the door. As with tracker-based
coordinate system setup, there is no "correct" assignment—the choice is up to
you to suit the task at hand.
To maximize accuracy, parallel lines should be spread out from one
another: two parallel lines that are right next to each other do not add much
independent information. If you bunch all the lines on a small object in a corner of
the image, you are unlikely to get any usable results. We can not save you from
a bad plan!
It is better if the lines are spread out, with parallel lines on opposing sides
of the image, and even better if they are not parallel to one another in the image.
For example, the classic image of railroad tracks converging at the horizon
provides plenty of information.
Also, be alert for situations where lines appear to be parallel or
perpendicular, but really are not. For example, wooden sets may not really be
geometrically accurate, as that is not normally a concern (they might even have
forced perspective by design!). Skyscrapers may have slight tapers in them for
structural reasons. The ground is usually not perfectly flat. Whenever possible,
resist the temptation to "eyeball" some lines into a shot. Though plenty of things
are pretty parallel or perpendicular, keep in mind that SynthEyes is using exact
geometry to determine camera placement, so if the lines are not truly right, the
camera will come out in a different location because of it.
Operating the Panel

To use the alignment system, switch to the Lens Control panel.
Alignment lines are displayed only when this panel is open.
Go to a frame in your sequence that nicely shows the lines you plan to use
for alignment. All the lines must be present on this single frame, and this frame
number will be recorded in the ―At nnnf‖ button at the lower-left of the lens panel.
You can later return to this frame just by clicking the button. If you later play with
some lines on a different frame, and need to change the recorded frame number,
right-click the button to set the frame number to the current frame.
Click on the Add Line button, then click, drag, and release in the camera
view to create a line in the image. When you release, a menu will appear,
allowing you to select the desired type of line: plain, parallel to one of the
coordinate axes, on one of the coordinate axes, or on an axis, with the length
specified. Specify the type desired, then continue adding lines as needed. Be
sure you check your current coordinate-axis setting in SynthEyes (Z-Up, Y-Up, or
Y-Up-Left), so that you can assign the line types correctly. You should make the
lines as long as possible to improve accuracy, as long as the image allows you to
place them accurately.
Lines that are on an axis must be drawn in the correct direction: from
the negative coordinate values to the positive coordinate values. For example,
with SynthEyes in Z-Up coordinate mode, a line specified as "On Z Axis" should
be drawn in the direction from below ground to above ground. There will be an
arrow at the above-ground end, and it should point upwards. But don't worry if
you get it wrong; you can click the swap-end button <-> to fix it instantly.
It does not matter in what direction you draw lines that are merely parallel
to an axis, not on it. The arrowhead is not drawn for lines parallel to the axis.
To control the overall sizing of the scene, you can designate a single on-
axis line to have a length. Again, this line must be on an axis, not merely parallel
to it. After creating the line, select one of the "on-axis with length" types. This will
activate the Length spinner, and you can dial in the desired length.
Before continuing to the solution, be sure to quickly zoom in on each of
the alignment lines' endpoints, to make sure they are placed as accurately as
possible. (Zooming into the middle will tell you if you need to engage the lens
distortion controls, which will complicate your workflow.) You can move either
endpoint or the whole line, and adjust the line type at any time.
After you have completed setting up the alignment lines, click the Align!
button. SynthEyes will calculate the camera position relative to the origin you
have specified, and if the scene is not already solved and parallel lines are
available, SynthEyes will also calculate the field of view.


A total alignment error will be listed on the status line at the bottom of the
SynthEyes window. The alignment error is measured in root-mean-square
horizontal pixels like the regular solver. A value of a pixel or two is typical. If you
do not have a good configuration of lines, an error of hundreds of pixels could
result, and you must re-think.
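For reference, the root-mean-square error is just the square root of the mean
squared residual. This is a minimal sketch of that standard formula, with a
hypothetical helper name, not SynthEyes code:

```python
import math

def rms_pixels(residuals):
    """RMS of per-line (or per-tracker) horizontal pixel errors."""
    return math.sqrt(sum(e * e for e in residuals) / len(residuals))
```

For instance, residuals of 1 and 2 pixels give an RMS of about 1.58 pixels, in the
typical "pixel or two" range; a few hundred-pixel residuals push the RMS into
re-think territory.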
SynthEyes will take the calculated alignment and apply it to an existing
solution, such that the camera and origin are at their computed locations on the
frame of reference (indicated in the At nnnf button).
Suppose you are working on, and have solved, a 100-frame tripod-mode shot.
You have built the alignment lines on frame 30. When you click Align!, SynthEyes
will alter the entire path, frames 0-99, so that the camera is in exactly the right
location on frame 30, without messing up the camera match before or after the
frame.
Most meshes will not be affected by the alignment, so that they can be
used as references. To make them move, turn on Whole affects meshes on the
3-D viewport or perspective-view's right-click menus.
You should switch to the Quad view and create an object or two to verify
that the solution is correct.
If the quality of the alignment lines you have specified is marginal, you
may find SynthEyes does not immediately find the right solution. To try
alternatives, control-click the Align! button. SynthEyes will give you the best
solution, then allow you to click through to try all the other (successively worse)
solutions. If your lines are only slightly off-kilter, you may find that the correct
solution is the second or maybe third one, with only a slightly higher RMS error.
Advanced Uses and Limitations
Since the line alignment system is pretty simple to understand and use,
you might be tempted to use it all the time, even to align regular full 3-D
camera tracking shots. And in fact, as its use on tripod-mode shots
suggests, we have made it usable on regular moving-camera and even moving-
object shots, which are an even more tempting use.
But even though it works fine, it probably is not going to turn out the way
you expect, or be a usable routine alternative to tracker constraints for 3-D shots.
First, there's the accuracy issue. A regular 3-D moving-camera shot is
based on hundreds of trackers over hundreds of frames, yielding many hundreds
of thousands of data points. By contrast, a line alignment is based on maybe ten
lines, hand-placed into one frame. There is no way whatsoever for the line-based
alignment to be as accurate as the tracker solutions. This is not a bug, or an
issue to be corrected next week. Garbage in, garbage out.
Consequently, after your line-based alignment, the camera will be at one
location relative to the origin, but the trackers will be in a different (more correct)
position relative to the camera, so the trackers will not be located at the origin
as you might expect. Since the trackers are the things that are locked properly to
the image, if you place objects as you expect into the alignment-determined
coordinate system, they will not stick in the image—unless you tweak the
inserted objects' positions to make them match the trackers better, not the
aligned coordinate system.
Second, there is the size issue. When you set up the size of the alignment
coordinate system, it will position the camera properly. But it will have nothing to say
about the size of the cloud of trackers. You can have the scene aligned nicely for
a 6-foot tall actor, but the cloud of trackers is unaffected, and still corresponds to
30 foot giants. To have any hope of success using alignment with 3-D solves,
you must still be sure to have at least a distance constraint on the trackers. This
is even more the case with moving-object shots, where the independent sizing of
the camera and object must be considered, as well as that of the alignment lines.
The whole reason that the alignment system works easily for tripod and
lock-off shots is that there is no size and no depth information, so the issue is
moot for those shots.
To summarize, the alignment subsystem is capable of operating on
moving-camera and moving-object shots, but this is useful only for experts, and
probably is not even a good idea for them. If you send your scene file to
tech support looking for help with that, we are going to tell you not to do it and to
use tracker constraints instead, end of story.
But, you should find the alignment subsystem very useful indeed for your
tripod-mode and lock-off shots!

Manual Alignment
You can manually align the camera and solved tracker locations if you
like. This technique is most useful for tripod-mode shots; it is generally better to
set up an accurate coordinate system using the methods above for normal shots.

To align manually, switch to the 3-D control panel and the Quad or
Quad Perspective view. Click on the camera (typically Camera01) in one of the
viewports to select it, so that it is listed in the dropdown on the 3-D control panel.
It will be easiest, though not strictly necessary, to turn on the selection-lock
button right underneath the dropdown.
Turn on the Whole button on the 3-D control panel, then use the move,
rotate, and scale tools to reposition the camera using the viewports.
As you do this, not only the camera will move, but also its entire trajectory and
the tracker locations.
By default, meshes will not be carried along, so that you can import a 3-D
model (such as a new building), then reposition the camera and trackers relative
to the building's (fixed) position. However, you can turn on Whole affects
meshes, on the 3-D viewport or perspective-view right-click menus, and meshes
will be moved.
You can use the same technique for moving-object shots, discussed later.
In that case, you will usually click the World button to change to Object
coordinates; you can then re-align the object's coordinate system relative to the
object's trackers (much like you move the pivot point in a 3-D model). As you do
this, the object path will change correspondingly to maintain the overall match.

Using 3-D Survey Data


Sometimes you may be supplied with exact 3-D coordinates for a number
of features in the shot, as a result of hand measurements, laser scans, or GPS
data for large outdoor scenes. You may also be supplied with a few ruler
measurements, which you can apply as size constraints; we won't discuss that
further here, but will focus on some aspects of handling 3-D coordinates. The full
details continue in following sections.
First, given a lot of 3-D coordinates, it can be convenient to read them in
automatically from a text file; see the manual's section on importing points.
SynthEyes gives you several options for how seriously the coordinate data
is going to be believed. Any 3-D data taken by hand with a measuring tape for an
entire room should be taken as a suggestion at best. At the other end of the
spectrum, coordinates from a 3-D model used to mill the object being tracked, or
laser-surveyed highway coordinates, ought to be interpreted literally.
Trackers with 3-D coordinates, entered manually or electronically, will be
set up as Lock Points on the Coordinate System panel, so that X, Y, and Z
will be matched. Trackers with very exact data will also be configured as Pegs,
as described later.
If the 3-D coordinates are measured from a 2-D map (for a highway or
architectural project), elevation data may not be available. You should configure
such trackers as Any Z (Z-up coordinates) or Any Y (Y-up coordinates), so that
the XY or XZ coordinates will be matched, and the elevation allowed to float.
If most of your trackers have 3-D coordinates available to start (six or
more per frame), you can use Seed Points solving mode on the Solver control
panel. Turn on the coordinate system panel's Seed button for the trackers
with 3-D coordinates. This will give a quick and reliable start to solving. You must
use Seed Points and Constrain modes on the solver panel if you are matching a
single frame from survey data.
For more information on how to configure SynthEyes for your survey data,
be sure to check the section on Alignment vs Constraints.


Constraining Camera Position and Motion


Sometimes you may already know some or all of the path of the camera,
for example,
 it may be available from a motion controlled camera,
 the camera motion may be mechanically constrained by a camera
dolly,
 you may have measured some on-set camera data to determine
overall scene sizing, or
 you may have already solved the camera path, then hand-edited it for
cleanup.
SynthEyes lets you take advantage of this information to improve a
solution or help set up the coordinate system, using the trajectory lock controls at

the bottom of the Solver Control Panel, the Hard and Soft Lock Control
dialog, and the camera‘s seed path information.
Warning: using camera position, orientation, and field of view locks is a
very advanced topic. You need to thoroughly understand SynthEyes and the
coordinate system setup process, and have excellent mental visualization skills,
before you are ready to consider camera locks. Under no circumstances should
they be considered a way to compensate for inadequate basic tracking skills.
Concept and Terminology
SynthEyes allows you to create locks on path (X, Y, and/or Z translation),
rotation (pan, tilt, and roll), and field of view. You can lock one or more channels,
and locks are animated, so they might apply to an entire shot, a range of frames,
or one frame.
Each lock forces the camera to, or towards, the camera's seed path. The
seed path is what you see before solving, or after clearing the solution. You can
see the seed path at any time using the View/Show seed path menu control, or
the button on the Hard and Soft Lock Control dialog.
Locks may be hard or soft. Hard locks force the camera to the specified
values exactly (except if Constrain is off), similar to pegged trackers. Soft locks
force the camera towards the specified value, but with a strength determined by
a weight value.

Locks are affected by the Constrain checkbox on the solver panel,
similar to what happens with trackers. With Constrain off, locks are applied after
solving, and do not warp the solution. All soft locks are treated as hard locks.
With Constrain on, locks are applied before and during solving, soft locks are
treated as such, and locks do warp the solution. Field of view locks are not
affected by Constrain, and are always applied.


Camera position locks are more useful than orientation locks; we'll
consider position locks separately to start with.
You can also constrain objects, but this is even more complex. A separate
subsystem, the stereo geometry panel, handles camera/camera constraints in
stereo shots.
Basic Operation
Set up generally proceeds as follows:
1. If you have not already attempted to solve the scene, go to step 5.

2. Go to the Solver Control Panel. Click the more button to bring up
the Hard and Soft Lock Control panel.
3. Position and animate the camera as desired, creating a key on each
frame where you want the position to be constrained. The Get buttons
can help with this.
4. Turn on the L/R, F/B, and/or U/D buttons as appropriate depending on
the axes to be constrained — these stand for left/right, front/back, and
up/down respectively.
5. Adjust the Constrain checkbox as needed. The camera position
constraints behave similarly to constraints on the trackers: if the
Constrain checkbox is on, they are enforced exactly during the solve, but
if the Constrain checkbox is off, they are enforced only loosely after the
completion of the solve. Loosely means that they are satisfied as best
as can be, without modifying the trajectory or overall RMS error of the
solution.
The result of this process is to make the camera match the X, Y, and/or Z
coordinates of the seed path at each key. This basic setup can be used to
accomplish a variety of effects, as described above and covered in more detail
below. At the end of the section, we'll show some even more exotic possibilities.
Using a Camera Height Measurement
Suppose the camera is sitting on a moving dolly in a studio, and you
measured the camera (lens's) height above the floor, and you have some
trackers that are (exactly) on that floor. You can use the height measurement to
set up the scene size as follows:
1. Show the seed path: View/Show Seed Path menu item
2. At frame 0, position the camera at the desired height above the
ground plane: 2 meters, 48 inches, whatever.
3. Turn on the U/D button on frame 0, turn it back off at frame 1.
4. Set up a main coordinate system using 3 or more trackers on the
floor. Make sure not to create a size constraint in the process: if
using the *3 button on the Coordinate system panel or the
Coord button on the Summary panel, select the 2nd (on-axis)
tracker, and in the Coordinate panel, change it from Lock Point (at
20,0,0) to On X Axis or On Y Axis.

5. Solve with Go! on the Solver panel.


Note that you can use whatever more complex setup you like in step 4, as
long as it completely constrains both the translation and rotation, but not the size.
WARNING: You might be tempted to think "Hmmm, the camera is on a
dolly, so the entire path must be exactly 43 inches off the floor, let me set that
up!" (by not turning U/D back off). But this is almost always a bad idea! The
obvious problem is that the dolly track is never really completely flat and free of
bumps. If the vertical field of view is 2 meters, and you are shooting 1080i/p
HDTV, then your track must be perfectly flat to roughly 1 millimeter to
have a sub-pixel impact. If your track is that flat, congratulations.
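The millimeter figure comes from simple arithmetic. A sketch of the estimate,
assuming the 2-meter vertical field of view quoted above:

```python
fov_mm = 2000.0                # vertical field of view at the subject, in mm
rows = 1080                    # vertical resolution of 1080i/p HDTV
mm_per_pixel = fov_mm / rows   # roughly 1.85 mm of set height per image row
# A bump in the track must stay well under one pixel's worth of height
# (about 1 mm here) for a locked-height constraint to hold sub-pixel.
```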
The conceptually more subtle, but bigger impact problem is this: a normal
tripod head puts the camera lens very far from the center of rotation of the
head—roughly 1 foot or 0.25 meter. As you tilt the head, the height of the
camera lens increases and decreases by up to that much! Unless your camera
does not tilt during the shot, or you have an extra-special nodal-pan head, the
camera height will change dramatically during the shot.
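To see how large the effect is, here is a quick estimate, assuming the lens sits
about 0.25 meter from the head's tilt axis, as described above:

```python
import math

r = 0.25                  # lens-to-tilt-axis distance, in meters
tilt = math.radians(20)   # a modest 20-degree tilt
dz = r * math.sin(tilt)   # vertical lens displacement, about 0.085 m
```

An 8-plus-centimeter height change utterly swamps the millimeter-level flatness
a locked-height path would assume.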
A Straight Dolly Track Setup
If your camera rides a straight dolly track, you can use the length of that
track to set the scale, and almost the entire coordinate system if desired. While
the camera height measurement setup discussed above is simpler, it is
appropriate mainly for a studio environment with a flat floor. The dolly track setup
here is useful when a dolly track is set up outdoors in an environment with no
clearly-defined ground plane—in front of a hillside, say.
For this setup, you should measure the distance traveled by the camera
head down the track, by a consistent point on the camera or tripod. For example,
if you have a 20' track, the camera might travel only 16' or so because there will
be a 2' dead zone at each end due to the width of the tripod and dolly. Measure
the starting/ending position of the right front wheel, say.
Next, clear any solved path (or click View/Show seed path), and animate
the camera motion, for example moving from 0,0,0 at the beginning of the shot to
16,0,0 at the end (or wherever it reaches the maximum, if it comes back).
You now have two main options: A) mostly tracker-based coordinate
setup, or B) mostly dolly-based coordinate setup, for side-of-hillside shots.


For setup A, turn on only the L/R camera axis constraint checkbox on the
first and last frames (only). The X values you have set up for the camera have
set up an X positioning for the scene, so when you set up constraints on the
trackers, they should constrain rotation fully, plus the front/back and up/down
directions—but not the L/R direction since that would duplicate and conflict with
the camera constraint (unless you are careful and lucky).
For setup B, turn on L/R, F/B, and U/D on the first and last frames (only).
You should take some more care in deciding exactly what coordinate values you
want to use for each axis of the animated camera path, because those will be
defining the coordinate system. [By setting keys only at the beginning and end of
the shot, you largely avoid problems with the camera tilting up and down—at
most it tilts the overall coordinate system from end to end, without causing
conflicting constraints.]
If the track is not level from end to end, you can adjust the beginning or
ending height coordinate of the tracker as appropriate. But usually we expect the
track to have been leveled from end to end.
With X, Y, and Z coordinates keyed at the beginning and end of the shot,
you have already completely constrained translation and scale, and have
constrained 2 of the 3 rotation axes. The only remaining unconstrained rotation
axis is a rotation around the dolly.
To constrain this remaining rotation requires only a single additional
tracker, and only its height measurement! On the set, you should measure the
relative height of a trackable feature compared to the track (usually this will be to
the base of the track, so you should also measure the height of the camera
versus the base). You can measure this height using a level line (a string and a
clip-on bubble level) and a ruler.

On the Coordinate System Control Panel, select the tracker and set
it to Any XY Plane and set the Z coordinate (for Z-up mode), or select Any XZ
Plane and set the Y coordinate (for Y-up mode).
Now you're ready to go! This setup is a valuable one for outdoor shots
where a true vertical reference is required, but the features being tracked are not
structured (rocks, bushes, etc).
Again, we recommend not trying to constrain the camera to be exactly
linear, though you can easily set this up by locking Y and Z to be fixed for the
duration of the shot, with single-frame locks on X at the beginning and end of the
shot. This setup forces the camera motion to be exactly straight, but moving in an
unknown fashion in X. Although the motion will be constrained, the setup will not
allow you to use fewer trackers for the solve.
Using a Supplied Camera Path
This section addresses the case where you have been supplied with an
existing camera translation path, either from a motion-controlled camera rig, or
as a result of hand-editing a previous camera solution, which can be useful in
marginal tracks where you have a good idea what the desired camera motion is.
After editing the path, you want to find the best orientation data for the given
path.
If you have an existing camera path in an external application (either from
a rig, or after editing in Maya or Max, for example), typically you will import it
using a standard or custom camera-path import script. Be sure that the solved
camera path is cleared first, so that the seed path is loaded.
If you have a solved camera path in SynthEyes, you can edit it directly.
First, select the camera, and hit the Blast button on the 3-D panel. This transfers
the path data from the solved path store into the seed path store. Clear the
solved path and edit the seed path.
Rewind and turn on all 3 camera axis locks: L/R, F/B, and U/D.
Next, configure the solver‘s seeding method. This requires some care.
You can use the Path Seeding method only if your existing path includes
correct orientation and field of view data. Otherwise, you can use the
Automatic method or maybe Seed Points. The Refine mode is not an option,
since you have already cleared the solution to load the seed path, and you don't
have orientation data anyway (or you'd use Path Seeding).
You can use Seed Points mode if you are editing the path in SynthEyes—
but be sure to hit the Set All button on the Coordinate System Setup Control
panel before clearing the prior solution, so that the points are set up properly as
seeds. You should probably not make them locks, unless you are confident of
the positions already.
With the camera path locked to a complex path (other than a straight line),
no further coordinate system setup is required, or it will be redundant.
You can solve the scene first with the Constrain checkbox off, then switch
to Refine mode, turn on Constrain, and solve again. This will make it apparent
during the second solve whether or not you have any problems in your constraint
setup, instead of having a solution fail unexpectedly due to conflicting constraints
the first time.
Camera-based Coordinate System Setup
The camera axis constraints can be used in small doses to set up the
coordinate system, as we've seen in the prior sections. Typically you will want to
use only 1 or 2 keys on the seed path; 3 or more keys will usually heavily
constrain the path and require exact knowledge of the camera move timing.
Roughly, each keyed frame is equivalent in effect to a constrained tracker
located at the same spot. You should keep that in mind as you plan your setup,
to avoid under- or over-constraining the coordinate system.


Soft Locks
So far we have described hard locks, which force the camera exactly to
the specified values. Soft locks pull more gently on the camera path, for example,
to add stability to a section of the track with marginal tracking. In either case, for
a lock to be active, the corresponding lock button (U/D, L/R, pan, etc) must be
on.
The weight value on the Hard and Soft Lock dialog controls whether a
lock is hard or soft. If the weight is zero (the default), it is a hard lock. A non-
zero weight value specifies a soft lock.
Weight values range from 1 to 120, with 60 a nominal neutral value.
However, we recommend that when creating soft locks, you start with a weight of
10, and work upwards through 20, 30, etc until the desired effect is obtained.
Weight values are in decibels, a logarithmic scale where 20 decibels is a
factor of 10, and 6 decibels is a factor of two. So 40 is 10 times stronger than 20,
and 26 is twice as strong as a weight of 20. (Decibels are commonly used for
sound level measurements.)
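For readers who want the arithmetic, the decibel-to-linear conversion can be sketched in a few lines of Python (illustrative only; exactly how the solver applies the resulting factor internally is not documented here):

```python
def weight_factor(weight_db):
    """Convert a soft-lock weight in decibels to a linear strength factor.

    20 dB is a factor of 10, so 6 dB is roughly a factor of 2.
    """
    return 10 ** (weight_db / 20.0)

# A weight of 40 pulls 10 times harder than a weight of 20,
# and 26 is about twice as strong as 20.
ratio = weight_factor(40) / weight_factor(20)
```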
A lock can switch from hard to soft on a frame-by-frame basis, i.e. frames
0-9 can be hard, and 10-14 soft. You may need to key the weight track carefully
to avoid slow transitions from 20 down to 0, for example.
Soft locks are treated as such only when the Constrain check box is on: it
is the solver that distinguishes between hard and soft locks. If Constrain is off,
the locks are applied only during the final alignment, which does not affect the
shape of the path at all but merely re-orients it, so soft locks are treated the
same as hard locks.
Note that the soft lock weight is not a path blending control. You might
naively be tempted to set up a nominal locked path, and try to animate the soft
lock weights expecting a smooth blend between the solved path and your
animated locked path. But that is not what will happen. The weight changes how
seriously SynthEyes takes your request that the camera should be located at the
specified position—but it will affect the tracker positions and everything else as
well.
Orientation Locks
You can apply Pan, Tilt, and Roll rotation locks as well as translational
locks. They can be used for path editing and, to a lesser extent, for coordinate
system setup.
For example, a roll-angle constraint can be used to keep the camera
forced upright. That can be handy on tripod shots with large pans: small amounts
of lens distortion can bend the path into a banana shape; the roll constraint can
flatten that back out.
If the camera looks in two different directions with the roll locked, it
constrains two degrees of freedom: only a single pan angle is undetermined! For
example, if it looks along the X axis and then along the Y axis, both with roll=0. You
might want to think about that for a minute.
The perspective window's local-coordinate-system and path-relative
handles can help make specific adjustments to the camera path.
Inherently, SynthEyes is not susceptible to "gimbal-lock" problems.
However, when you have orientation locks, you are using pan, tilt, and roll axes
that do define overall north and south poles, and you may encounter some
problems if you are trying to lock the camera almost straight up or down. If this is
the case, you may want to change your coordinate system so those views are
along the +Y and –Y axes, for example.
Object Tracking
You can also use locks on moving objects, in addition to cameras.
However, there are several restrictions on this, because moving objects are
solved relative to their hosting camera path, while the locks are world-coordinate-system
values.
If a moving object has path locks, then
1. the host camera must have been previously solved, pre-loaded, or pre-keyed,
and the camera solving mode set to Disabled,
2. the translation axis locks must either all be on, or all off, and
3. the rotation axis locks must either all be on, or all off.
Normally, when SynthEyes handles shots with a moving camera and
moving object, it solves camera and object simultaneously, optimizing them both
for the best overall solution. However, when object locks are present, SynthEyes
must be able to access the camera solution first, in order to be able to apply the
object locks.
With the camera path, SynthEyes changes the translation and rotation
axis lock values into a form usable for the object, but the individual axes are no
longer available, and either all must be constrained, or none. SynthEyes will
automatically turn all the enables on or off together if a moving object is active.
Object locks have impacts on the local coordinate system of the trackers
within the object that are very hard to reason about. Most object locks will wind
up over-constraining the object coordinate system.
We recommend that object locking be used only to work on the object
path, not to try to set up the object coordinate system.
Field of View/Focal Length Constraints
You can create constraints on the camera's field of view and focal length
in a similar fashion to path and orientation constraints. Field of view constraints
are enabled (and make sense) only when the camera lens is set to Zoom.
Warning 1: This topic is for experts. Do not use field of view constraints
on a shot unless you have a specific need encountered on that shot. Do not use
them just because focal length values were recorded during shooting. FOV/FL
values calculated by SynthEyes are more accurate by definition than recorded
values.
Warning 2: Do not use focal length values unless you have measured
and entered a very good value for the plate width. Use field of view values
instead.
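The reason the plate width is so critical is the standard pinhole relationship between focal length and horizontal field of view; any error in the entered plate width translates directly into a focal length misinterpretation. A minimal sketch (the function name is ours, not part of SynthEyes):

```python
import math

def horizontal_fov_deg(focal_length_mm, plate_width_mm):
    """Horizontal field of view implied by a focal length and plate width.

    Standard pinhole-camera relationship: fov = 2 * atan(width / (2 * f)).
    """
    return math.degrees(2.0 * math.atan(plate_width_mm / (2.0 * focal_length_mm)))

# e.g. a 24 mm lens on a ~24.9 mm Super-35 plate covers roughly 55 degrees.
fov = horizontal_fov_deg(24.0, 24.892)
```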
The Known lens mode can also be viewed as a simple form of field of
view constraint: one that allows arbitrary animation of the field of view, but that
requires that the exact field of view be known and keyed in for the entire length of
the shot. We will not discuss this mode further, except to note that the same
effect, and many more, can also be achieved with field of view constraints.
As with path constraints, field of view constraints are created with a seed
field of view track, animated lock enable, and lock weight. See the Lens panel,
Solver panel, and lock control dialog.
Both hard and soft locks operate at full effect all the time, regardless of the
state of the Constrain checkbox on the solver panel.
As with path constraints, field of view constraints affect the solution as a
whole. If you have a spike in the field of view track on a particular frame, adding
a constraint on that single frame will not do what you probably expect. All the
tracker locations will be affected, and you will have the same spike, but in a
slightly different location. This is not a bug. Instead, you need to also key
surrounding frames. In all cases, identifying and correcting the cause of the spike
will be a better approach if possible.
If the lens zooms intermittently, you can determine an average zoom value
for each stationary portion of the shot, and lock the field of view to that value.
You can repeat this for each stationary portion, producing a smoother field of
view track.
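Determining those per-segment lock values is simple averaging over each stationary range of frames. A hypothetical helper, assuming you have identified the stationary segments by eye:

```python
def segment_lock_values(fov_track, segments):
    """Average the solved field of view over each stationary (non-zooming)
    segment; the result is the value to lock that segment to.

    fov_track: one FOV value per frame; segments: (start, end) inclusive
    frame ranges. Illustrative helper, not part of SynthEyes.
    """
    return [sum(fov_track[s:e + 1]) / (e - s + 1) for s, e in segments]

# locks = segment_lock_values(solved_fov, [(0, 40), (75, 120)])
```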
Sometimes you may have a marginal zoom shot where you are given the
starting and ending zoom values (field of view or focal length), but you do not
know the exact details of the zoom in between. SynthEyes might report a zoom
from 60 to 120mm, but you know the actual values were 50 to 100mm. You can
address this by entering a one frame field of view constraint at the beginning and
end of the shot with the correct values. As long as your values are reasonably
correct in reality, the overall zoom curve should alter to match your values.
If only the endpoints change, while the interior of the curve keeps its
previous values, then SynthEyes has significant evidence contradicting your
values, which most likely indicates that the values are wrong, the plate width is
wrong, or that there is substantial uncorrected lens distortion.
Spinal Path Editing
Since it can be tedious to repeatedly change coordinate system setups,
SynthEyes can dynamically recompute portions of a solve as you change certain
values.
Warning: this is a really advanced topic. It can be used quickly and
easily, especially Align mode, but it can just as quickly reduce your solve to
rubble. We're not kidding, this thing is complicated!
First, what is "spinal editing" and why is it called that? Spinal editing is
designed to work on an already-solved track, where you have an existing camera
or object path to manipulate. The path is the spine that we edit. It is spinal
because you can think of the trackers as being attached to it like ribs. If you
manipulate the spine, the ribs move in response. You'll be working on the spine
to improve or reposition it. The perspective window's local-coordinate-system
and path-relative handles can help make specific adjustments to the camera
path.
After you have completed an initial solve producing a camera path, you
can initiate spinal editing by launching the control panel with the Window/Spinal
Editing menu item. This will open a small spinal control panel. You can also
enable spinal editing with the Edit/Spinal aligning and Edit/Spinal solving menu
items, though then you lose the feedback from the control panel.
There are two basic modes, controlled by the button at top left of the
spinal control panel: Align and Solve.
Note that the recalculations done by spinal editing are launched only in
response to a specific, relatively small set of operations:
• dragging the camera or object in a 3-D viewport or perspective view,
• dragging the "seed point" (lock coordinates) of a properly-configured
tracker in a 3-D viewport or perspective view,
• changing the field of view spinner on the lens control panel or soft-lock
panel, or
• changing the weight control on the spinal editing dialog.
In order for a tracker‘s seed point to be dragged and used for spinal
alignment, it must be set to Lock Point mode.
Spinal Align Mode
In Align mode, your path is moved around as the coordinate system is
repeatedly updated, but the shape of the path and the relationship to the trackers
is not affected. The RMS error of the solve is unchanged. This can be a nice way
to help get that specific coordinate system alignment you want; it allows a
mixture of object and tracker constraints.
You can use a combination of locks on the camera and on trackers in the
scene. As you drag the camera or tracker, the alignment will be repeatedly
recalculated. Use the figure of merit value to keep track of whether you have an
overconstrained setup: the value is normally very small, such as 0.000002. If it
rises much above that, you don't have a minimal set of constraints (typically it
reaches 0.020–0.050). That is not a problem—unless you begin solving with the
Constrain checkbox on.
Note that all camera and object locks are treated as hard locks by the
alignment software.
Spinal Solve Mode
In Solve mode, you are changing the solve itself, generally by adding
constraints on the camera path, then re-solving. The RMS error will always get
worse! But it lets you interactively repair weak spots in your solve.
The spinal solve performs a Refine operation on your existing solution,
meaning that it makes small changes to that solution. If the constraints you add
after the initial solve, either directly or by dragging with the spinal solve mode,
change the solution too much, then you will get a solution that is "the best
solution near the old solution" rather than the best overall solution, which you
would obtain by starting the solve from scratch (i.e. Automatic solving mode).
To maintain interactive response rates, the spinal solve panel allows you
to terminate the refine operation early—and while dragging you're just going to
be changing things again anyway. When you stop dragging, SynthEyes will
perform a last refine cycle to allow the refine to complete, although you can also
keep it from taking too long. After you've been moving around for a bit, especially
if your solves are not completing all the way, you can click the Finish button to
launch a final normal Refine cycle (Finish is the same as the Go button on the
solver panel).
Spinal editing might be used in especially subtle ways on long shots.
Match-moving inherently produces local "rate of change" measurements, and
small random errors (often amplified by small systematic effects such as lens
distortion or off-center optic axis) accumulate to produce small twists in the
geometry by the end of a long traveling shot. If you have GPS or survey data you
can easily fix this using a few locks. But survey data is not always available.
These accumulating errors can be particularly problematic when a long
shot loops back onto itself. Suppose a shot starts with a building site for a house,
showing the ground where it will be. The shot flies past the house, loops around,
then approaches from the side. However, the side view does not include the
ground, but only some other details not visible from the front. The inserted house
is now seen at an incorrect location, perhaps slanted a bit. The path needs to be
bent into shape, and spinal path editing can help you achieve that.
Please keep in mind that the results of these manipulated solves are
generally not the same result you would obtain if you started the solve again from
scratch in Automatic solving mode. You might consider re-starting the solve
periodically to make sure you're not doing a whole lot of work on a marginal
solution.
Using soft locks and spinal editing mode is a black art made available to
those who wish to use it, for whatever results can be obtained with it. It is a tool
that affects the solver in a certain way. There is no guarantee that it will do the
specific thing that you want at this moment. If it does not do what you think it
"should be doing," it is not a bug.
Avoid Constraint Overkill
To recap and quickly give a word of warning, keep your coordinate system
constraints as simple as possible, whether they are on trackers or camera path. It
is a common novice error to assign as many constraints as possible to things that
are remotely near the floor, a wall, the ceiling, etc, in the mistaken belief that the
constraints will rescue some bad tracking, or cure a distorted lens.
Consequently, the first thing we do with problematic scene files in
SynthEyes technical support is to remove all the customer‘s constraints, re-solve,
and look at the tracker graphs to locate bad tracks, which we usually delete.
Presto, very often the scene is now fine.
Stick with the recommended 3-point method until you have a decent
understanding of tracking, and a clear idea of why doing something else is
necessary to achieve the size, positioning, and orientation you need.
If you have a shot with no physical camera translation—a nodal tripod
shot—do not waste time trying to do a 3-D solve and coordinate system
alignment. Many of the shots we see with ―I can‘t get a coordinate system
alignment‖ are tripod shots erroneously being solved as full 3-D shots. Set the
solver to tripod mode, get a tripod solution, and use the line alignment tool to set
up coordinates.
Zero-Weighted Trackers
Suppose you had a visual feature you were so unsure of, you didn't want it
to affect the camera (or object) path and field of view at all. But you wanted to
track it anyway, and see what you got. You might have a whole bunch of leaves
on a tree, say, and hope to get a rough cloud for it.
You could take your tracker, and try bringing its Weight in the solution
down to zero. But that would fail, because the weight has a lower limit of 0.05. As
the weight drops and the tracker has less and less effect, there are some
undesirable side effects, so SynthEyes prevents it.
Instead, you can click the zero-weighted-tracker (ZWT) button on the
tracker panel, which will (internally) set the weight to zero. The undesirable side
effects will be side-stepped, and a new capability emerges.
ZWTs do not affect the solution (camera or object path and field of view,
and normal tracker locations), and can not be solved until after an initial solution
has been obtained. ZWTs are solved to produce their 3-D position at the
completion of normal solving.
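Conceptually, once the camera path and field of view are fixed, each frame of a ZWT's 2-D data defines a ray in space, and the ZWT's position is simply the point that best agrees with those rays. A minimal least-squares sketch of that idea (not SynthEyes's actual solver):

```python
import numpy as np

def triangulate(origins, directions):
    """Point minimizing the summed squared distance to a set of rays.

    origins: (N, 3) solved camera positions; directions: (N, 3) rays through
    the tracker's 2-D position on each frame. Since the rays come entirely
    from the existing solution, solving this point cannot change that solution.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```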
Tip: There is a separate preference color for ZWTs. Though it is normally
the same color as other trackers, you can change it if you want ZWTs to stand
out automatically.
Importantly, ZWTs are automatically re-solved whenever you change their
2-D tracking, the camera (or object) path, or the field of view. This is possible
because the ZWT solution will not affect the overall solution.
It makes possible a new post-solving workflow.
Solve As You Track

After solving, if you want to add a tracker, create it and change it to a
ZWT (use the W keyboard accelerator if you like). Keep the Quad view open.
Begin tracking. Watch as the 3-D point leaps into existence, wanders around as
you track, and hopefully converges to a stable location. As you track, you can
watch the per-frame and overall error numbers at the bottom of the tracker panel.
Hop over to the graph editor, and take a quick look at the error curve
for any spikes—since the position is already calculated, the error is valid.
Once you've completed tracking, change the tracker back to normal
mode. Repeat for additional new trackers as needed. You can use the same
approach modifying existing trackers, temporarily shifting them to ZWTs and
back.

When you do your next Refine cycle using the Solver panel, the
trackers will be solved normally, and influence the solution in the usual way. But,
you were able to use the ZWT capability to help do the tracking better and
quicker.
Juicy Details
ZWTs don't have to be only on a camera; they can be attached to a
moving object as well. You can also configure Far ZWTs.
The ZWT calculation respects the coordinate system constraints: you can
constrain Z=0 (with On XY Plane) to force a ZWT onto the floor in Z-up mode. A
ZWT can be partially linked to another tracker on the same camera or object. It
doesn't make sense to link to a tracker on a different object, since such links are
always in all 3 axes, overriding the ZWT calculation. Distance constraints are
ignored by ZWT processing.
If you have a long shot and a lot of ZWTs and must recalculate them often
(say by interactively editing the camera path), it is conceivable that the ZWT
recalculation might bog down the interactive update rate. You can temporarily
disable ZWT recalculation by turning off the Track/ZWT auto-calculation menu
item. They will all be recalculated when you turn it back on.
Adding Many More Trackers
After you have auto-tracked and solved a shot, you may want to add
additional trackers, either to improve accuracy in a particular area of the shot, or
to flesh out additional detail, perhaps before building a mesh from tracker
locations.
SynthEyes provides a way to do this efficiently in a controlled manner,
with the Add Many Trackers dialog. This dialog takes advantage of the already-
computed blips and the existing camera path to identify suitable trackers: it is the
same situation as Zero-Weighted-Trackers (ZWTs), and by default, the newly-
created trackers will be ZWTs—they do not have to be solved any further to
produce a 3-D position, since the 3-D position is already known.
Important: you must not have already hit Clear All Blips on the Feature
panel or the Clean Up Trackers dialog, since it is the blips that are analyzed to
produce additional trackers.
The Add many trackers dialog, below, provides a wide range of controls to
allow the best and most useful trackers to be created. You can run the dialog
repeatedly to address different issues.
You can also use the Coalesce Nearby Trackers dialog to join multiple
disjointed tracks together: the sum is greater than the parts!
When the dialog is launched from the Track menu, it may spend several
seconds busily calculating all the trackers that could be executed, and it saves
that list in a temporary store. The number of prospective trackers is listed as the
Available number, 2754 above. By adjusting the controls on the dialog, you
control which of these prospective trackers are added to the scene when you
push the Add button. At most, the Desired number of trackers will be added.
Basic Tracker Requirements
The prospective trackers must meet several basic requirements, as
described in the requirements section of the panel. These include a minimum
length (measured in frames), and an amplitude, plus average and peak errors.
The amplitude is a value between zero and one, describing the change in
brightness between the tracker center and background. Larger values will require
more pronounced trackers.
The error numbers measure the distance between the 2-D tracker
position and the computed 3-D position of the tracker, mapped back into the
image. The average error limits the noisiness and jitter in the trackers, while the
peak error limits the largest ―glitch‖ error. Notice that these controls do not
change any trackers, but instead select which of the prospective trackers are
actually selected for addition.
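The average and peak errors are just two statistics over the same per-frame reprojection distances. A sketch of the idea (the dialog's exact internal formula is an assumption):

```python
import math

def tracker_error_stats(observed, reprojected):
    """Per-tracker average and peak reprojection error, in pixels.

    observed / reprojected: lists of (x, y) image positions per frame: the
    2-D track versus the solved 3-D point mapped back into the image.
    """
    dists = [math.hypot(ox - rx, oy - ry)
             for (ox, oy), (rx, ry) in zip(observed, reprojected)]
    return sum(dists) / len(dists), max(dists)

# The average limits overall jitter; the peak catches a single large glitch.
avg, peak = tracker_error_stats([(0, 0), (3, 4)], [(0, 0), (0, 0)])
```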
To a Range of Frames
To add trackers in a specific range of frames in the shot, set up that region
in the Frame-Range Controls: from a starting frame to an ending frame. Then,
set a minimum overlap: how many frames each prospective tracker must be
valid, within this range of frames. For example, if you have only a limited number
of trackers between frames 130 and 155, you would set up those two as the
limits, and set the minimum overlap to 25 at most, perhaps 20.
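The minimum-overlap test is a simple intersection of frame ranges. A sketch, assuming inclusive frame numbers:

```python
def overlaps_enough(trk_start, trk_end, rng_start, rng_end, min_overlap):
    """True if a prospective tracker is valid on at least min_overlap frames
    within the frame range of interest. All frame numbers are inclusive."""
    overlap = min(trk_end, rng_end) - max(trk_start, rng_start) + 1
    return overlap >= min_overlap

# A tracker valid on frames 120-148 overlaps the range 130-155 by 19 frames,
# so it passes a minimum overlap of 19 but not 20.
```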
To an Area in 3-D Space
To add trackers in a particular 3-D area of the scene, open the camera
view, and go to a frame that makes the region needing trackers clearly visible.
Lasso the region of interest—it does not matter if there are any trackers there
already or not. The lassoed region will be saved. (Fine point: the frame number is
also saved, so it does not matter if you change frames afterwards.)
Open the Add many trackers dialog, and turn on the Only within last
Lasso checkbox. The only trackers selected will be those where the 3-D point
falls within the lassoed area, on the frame at which the lasso occurred.
Zero-Weighted vs Regular Trackers
Once all the criteria have been evaluated, and a suitable set of trackers
determined, hitting Add will add them into the scene. There are several options to
control this (which should be configured before hitting Add).
The most important decision to make is whether you want a ZWT or a
regular tracker. Intrinsically, the Add many trackers dialog produces ZWTs, since
it has already computed the XYZ coordinates as part of its sanity-checking
process. By using ZWTs, you can add many more trackers without appreciably
affecting the re-solve time if you later need to change the shot. So using ZWTs is
computationally very efficient, and is an easy way to go if you need more trackers
to build a mesh from.
On the other hand, if you need additional trackers to improve the quality of
the track, by adding more trackers in an under-populated region of 3-space or
range of frames, then adding ZWTs will not help, since they do not affect the
overall camera solution. Instead, check the Regular checkbox, and ordinary
trackers will be created, still pre-solved with their XYZ coordinates. You can solve
again using Refine mode, and the camera path will be updated taking into
account the new trackers.
If you add hundreds or thousands of regular trackers, the solve time will
increase substantially. Designed for the best camera tracking, SynthEyes is most
efficient for long shots, not for thousands of trackers. To see why this choice was
made, note that even if all the added trackers are of equal quality, the solution
accuracy increases much slower than the rate trackers are added. You can use
some of the trackers for the solve, and keep the rest as ZWTs.
Other New Tracker Properties
Normally, you will want the trackers to be selected after they are added,
as that makes it easy to change them, see which were added, etc. If you do not
want this, you can turn off the Selected checkbox.
Finally, you can specify a display color for the trackers being added by
selecting it with the color swatch, and turning on the Set color checkbox. That
will help you identify the newly-added trackers, and you can re-select them all
again later using the Select same color item on the Edit menu.
It may take several seconds to add the trackers, depending on the number
and length of trackers. Afterwards, you are free to add additional trackers to
address other issues if you like—the ones already added will not be duplicated.
Coalescing Nearby Trackers
Now that you know how to create many more trackers, you need a way to
combine them together intelligently. Whether you use the Add Many More
Trackers panel or not, after an autotrack (or even heavy supervised tracking) you
will often find that you have several trackers on the same feature, but covering
different ranges of frames. Tracker A may track the Red Rock for frames 0-50,
and Tracker B may also track Red Rock from frames 55-82. In frames 51-54,
perhaps an actor walked by, or maybe the rock got blurred out by camera motion
or image compression.
It is more than a convenience to combine trackers A and B. The combined
tracker gives SynthEyes more information than the two separately, and will result
in a more stable track, less geometric distortion in the scene, and a more
accurate field of view. (Exception: if there is much uncorrected lens distortion,
you are better off with consistently short-lived trackers.)
The Coalesce Nearby Trackers dialog, available on the Tracker menu, will
automatically identify all sets of trackers that should be coalesced, according to
criteria you control.
When you open the dialog, you can adjust the controls (described shortly)
and then click the Examine button.
SynthEyes will evaluate the trackers and select those to be coalesced, so
that you can see them in the viewports. The text field, reading "(click Examine)"
in the screen capture above, will display the number of trackers to be eliminated
and coalesced into other trackers.
At this point, you have several main possibilities:
1. click Coalesce to perform the operation and close the panel;
2. adjust the controls further, and Examine again;
3. close the dialog box with the close box (X) at top right (circle at top left on
Mac), then examine the to-be-coalesced trackers in more detail in the
viewports; or
4. Cancel the dialog, restoring the previous tracker selection set.
If you are unsure of the best control settings to use, option 3 will let you
examine the trackers to be coalesced carefully, zooming into the viewports. You
can then open the Coalesce Nearby Trackers dialog again, and either adjust the
parameters further, or simply click Coalesce if the settings are satisfactory.
What Does Nearby Mean?
The Distance, Sharpness, and Consistency controls all factor into the
decision whether two trackers are close enough to coalesce. It is a fairly complex
decision, taking into account both 2-D and 3-D locations, and is not particularly
amenable to human second-guessing. The controls are pretty straightforward,
though.
As an aside, it might seem that all that is needed is to measure the 3-D
distance between the computed tracker points, and coalesce them if the points
are within a certain distance measured in 3-D (not in pixels). However, this
simplistic approach would perform remarkably poorly, because the depth
uncertainty of a tracker is often much larger than the uncertainty in its horizontal
image-plane position. If the distance was large enough to coalesce the desired
trackers, it would be large enough to incorrectly coalesce other trackers.
Instead, SynthEyes uses a more sophisticated and compute-intensive
approach which is evaluated over all the active frames of the trackers.
The first and most important parameter is the Distance, measured in
horizontal pixels. It is the maximum distance between two trackers that can be
considered for coalescing. If they are further apart than this in all frames, they
will definitely not be coalesced. If they are closer, some of the time, they may be
coalesced, increasingly likely the closer they are.
The second most important parameter, the Consistency, controls how
much of the time the trackers must be sufficiently close, compared to their overall
lifetime. So very roughly, at 0.7 the trackers must be within the given distance on
70% of the frames. If a track is already geometrically accurate, the consistency
can be made higher, but if the solution is marginal, the consistency can be
reduced to permit matches even if the two trackers slide past one another.
The third parameter, Sharpness, controls the extent to which the exact
distance between trackers affects the result, versus the fact that they are within
the required Distance at all. If Sharpness is zero, the exact distance will not
matter at all, while at a sharpness of one (the maximum), if the trackers are at
almost the maximum distance, they might as well be past it.
Sharpness can be used to trade off some computer time versus quality of
result: a small distance and low sharpness will give a faster but less precise
result. Settings with a larger distance and larger sharpness will take longer to run
but produce a more carefully-thought-out result—though the two sets of results
may be very similar most of the time, because the larger sharpness will make the
larger distance nearly equivalent to the smaller distance and low sharpness.
If you are handling a shot with a lot of jitter in the trackers, due to large film
grain or severe compression artifacts, you should decrease the sharpness,
because those small differences in distance are in fact meaningless.
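Putting the three controls together, the per-pair scoring can be pictured roughly as follows. This is a hypothetical reconstruction for intuition only; the real test also weighs the 3-D positions and is, as noted, more sophisticated:

```python
def should_coalesce(frame_distances, max_distance, consistency, sharpness):
    """Decide whether two trackers are 'nearby' enough to coalesce.

    frame_distances: 2-D distance in pixels on each frame where both trackers
    are comparable. Each frame scores 0 beyond max_distance; within it, the
    score blends between 1.0 (sharpness 0: any match counts fully) and the
    normalized closeness (sharpness 1: near-limit matches count for little).
    """
    if not frame_distances:
        return False
    scores = []
    for d in frame_distances:
        if d > max_distance:
            scores.append(0.0)
        else:
            closeness = 1.0 - d / max_distance
            scores.append((1.0 - sharpness) + sharpness * closeness)
    # 'consistency' is the fraction of the lifetime the trackers must match
    return sum(scores) / len(scores) >= consistency
```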
What Trackers should be Coalesced?
Three checkboxes on the coalesce panel control what types of trackers
are eligible to be coalesced.
First, you can request that Only selected trackers be coalesced. This
allows you to lasso-select a region where coalescing is required. (Note: if you
only need 2 particular trackers coalesced, for sure, use Track/Combine Trackers
instead.)
Second, frequently you will only want to coalesce auto-trackers, or
trackers created by the Add Many Trackers dialog. By default, supervised non-
zero-weighted trackers are not eligible to be coalesced. This prevents your
carefully-constructed supervised trackers from inadvertently being changed.
However, you can turn on the Include supervised non-ZWT trackers checkbox
to make them eligible.
SynthEyes will also generally coalesce only trackers that are not
simultaneously active: for example, it might coalesce two trackers that are valid
on frames 0-10 and 15-25, respectively, but not two trackers that are valid on
frames 0-10 and 5-15. If both are autotrackers and simultaneously active,
they are not tracking the same thing. The exception to this is if they are a large
autotracker and a small one, or an autotracker and a supervised tracker. To
combine overlapping trackers, turn off the Only with non-overlapping frame
ranges checkbox.
A satisfactory approach might be to coalesce once with the checkbox on,
as is the default, then open the dialog again, turn the checkbox off, and Examine
the results to see if something worth coalescing turns up.
An Overall Strategy
Although we have talked as if SynthEyes only combines two trackers, in
fact SynthEyes considers all the trackers simultaneously, and can merge three or
more trackers together into a single result in one pass.
It is possible that coalescing immediately a second time may produce
additional results, but this is probably sufficiently rare to make it unnecessary in
routine use.
However, after you coalesce trackers, it will often be helpful to do a Refine
solving cycle, then coalesce again. After the first coalesce, the refine cycle will
have an improved geometric accuracy due to the longer tracker lifetimes. With
the improved geometry, additional trackers may now be stable enough to be
determined to be tracking the same feature, permitting a coalesce operation to
combine them together, and the cycle to repeat.
Viewing this pattern in reverse, observe that a broader distance
specification will be required initially, when trackers on the same feature may be
calculated at different 3-D positions.
This is particularly relevant to green-screen shots, where the
comparatively small number of trackable features and their frequently short
lifetime, due to occlusion by the actors, can result in higher-than-usual initial
geometric inaccuracy.
Because the green-screen tracking marks are generally widely separated,
there is little harm in increasing the allowable coalesce Distance. The features
can then be coalesced properly, and the Refine cycle will then rectify the
geometry. The process can be repeated as necessary.
If you are using Add Many Trackers and then Coalescing and refining, you
should turn on the Regular, not ZWT checkbox on the Add Many dialog, so that
the added trackers will affect the Refine solution.
Perspective Window
The perspective window allows you to go into the scene to view it from
any direction. Or, you can lock the perspective view to the tracked camera view.
You can build a collection of test or stand-in objects to evaluate the tracking.
Later, we'll see that it enables you to assemble tracker locations into object
models as well.
The perspective window is controlled by a right-click menu, where
different mouse modes can be selected. The middle mouse button can always be
used for general navigation using the control and ALT variations. The left mouse
button may be used instead, with the same variations, when the Navigation mode
is selected.
Image Overlay
The perspective window can be used to overlay inserted objects over the
live imagery, much like the camera view. Select Lock to Current Camera to lock
or release, or use the 'L' key. Note that when the view is locked to the camera,
you cannot move or rotate the camera, or adjust the field of view.
The perspective window is designed to work only with undistorted
imagery: the perspective view always shows the view with an ideal, undistorted
camera, even if SynthEyes has calculated a lens distortion value.
This reflects the essential difference between the camera and perspective
views: the camera view shows the source footage as is, and distorts all 3-D
geometry to match. The perspective view shows the 3-D world as is, and would
have to dynamically un-distort the footage in order to make everything line up.
Rather than trying to do that while you use the perspective window, it requires
that you un-distort the footage previously, if necessary.
Freeze Frame
Each perspective view can be independently disconnected from the main
user interface time slider, "frozen" on a particular frame. This can be useful to
view a shot from two different frames simultaneously (to link trackers from
different parts of the same shot), or to view two shots with different lengths
simultaneously and with some independent control. That is especially helpful for
multi-shot tracking, where the reference shot is only a few frames long.
See View/Freeze on this frame on the right-click menu. Using the normal
A, s, d, F, period, and comma accelerator keys within a frozen perspective
window will change the frozen frame, not the main user interface time. To
update the main user interface time from within the perspective window, use the
left and right arrows (or move outside the perspective window!). To re-set the
frozen time to the current time, hit View/Freeze on this frame again. To unfreeze,
use View/Unfreeze.
Stereo Display
SynthEyes can display anaglyph stereo images: on the right-click menu,
select View/Stereo Display. If it is a stereo shot, both images will be displayed
when enabled. If it is not a stereo shot, SynthEyes will artificially create
two views for the stereo display. See the settings on View/Perspective View
Settings. They include the inter-ocular distance and vergence distance, plus the
type of glasses you have. (Look for glasses that strongly reject the
unwanted colors; some paper glasses are best!)
Navigation
SynthEyes offers four basic ways to reposition the camera within the
perspective view: pan (truck), look, orbit, and dolly in/out. You select a motion
using the control and ALT keys, and make it happen at any time using the middle
mouse button, or with the left mouse button when you are in Navigation mode.
The idea behind doing it this way is that you can change the motion at any
time, as you are doing it, by pressing or releasing a key, while keeping the
mouse moving. That makes it a lot faster than switching back and forth between
various tools and then moving the camera.
When neither control nor ALT are pressed, dragging will pan the display
(truck the camera sideways or up and down). Control-dragging will cause the
camera to look around in different directions, without translating. Control-ALT-
dragging will dolly the camera forwards or backwards.
ALT-dragging will cause the camera to orbit. The center of the orbit will be
the center of any selected vertices in the current edit mesh (more on that later),
around a selected object, or around a point in space directly ahead of the
camera.
You can see what will happen by looking at the top-left of the perspective
window at the text display.
The shift key will create a finer motion of any of these types.
The mouse's scroll wheel will dolly in and out if the perspective window is
not locked to the camera; if it is locked, it will change the current time, and shift-
scrolling will zoom the time bar.
If you hold down the Z or '/" (apostrophe/double-quote) key when left-
clicking, the mouse mode will temporarily change to navigation mode; the
mode will switch back when the mouse button is released. You can also switch to
navigation mode using the 'N' key. So it is always quick and easy to navigate
without having to move the mouse to select a different mode.
Creating Objects
Create objects on the perspective window grid with the Create mesh
object mode. Use the 3-D panel or right-click menu to control what kind of object
is created. Selecting an object type from the right-click menu launches the
creation mode immediately. If the SynthEyes user interface is set so that a
moving object is active on the Shot menu, the created object will be attached to
that object.
The Duplicate Mesh script on the Script menu can clone and offset a
mesh, so you can create a row of fence posts quickly, for example.
Importing Objects
You can import OBJ or DXF meshes into SynthEyes using File/Import
Mesh. You can later discover the original source of the meshes using File/Scene
Information (meshes subsection).
If you later change the source mesh file, you can reload it inside the
SynthEyes scene, without having to reestablish its other settings, by selecting it
and clicking the Reload button on the 3-D Panel.
Opacity
It can be helpful to make one or more meshes partially transparent; this
can be achieved using the opacity spinner on the 3-D Panel, which ranges from
an opacity of zero (fully transparent and invisible) to an opacity of one (fully
opaque; the default).
The opacity setting affects the mesh in the perspective view and in the
camera view only if the OpenGL camera view is used. See View/OpenGL
Camera View and the preference Start with OpenGL camera view. The OpenGL
camera view is the default on Mac OS X.
Note that while in an ideal world the opacity setting would simulate turning
a solid mesh into an attenuating solid, in reality opacity is simulated using some
fast but surface-based alpha compositing features in OpenGL. Depending on the
situation, including other objects in the scene, the transparent view may differ
substantially from what a true attenuating solid would produce, but generally the
effect generated should be quite satisfactory for helping understand the scene.
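The difference between surface compositing and a true attenuating solid can be sketched numerically. This is an illustrative comparison (the function names and constants are hypothetical, not SynthEyes or OpenGL API calls): OpenGL-style blending darkens the background by a fixed fraction per surface, while a real attenuating solid absorbs more light where the material is thicker.

```python
import math

def over(src_rgb, alpha, dst_rgb):
    """Surface alpha compositing as OpenGL blending performs it:
    the surface scales what is behind it by (1 - alpha)."""
    return tuple(alpha * s + (1.0 - alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))

def attenuating_solid(dst_rgb, absorb, thickness):
    """What a true attenuating solid would do: absorption grows with
    the distance the ray travels through the material (Beer-Lambert),
    which surface-based compositing ignores."""
    t = math.exp(-absorb * thickness)
    return tuple(t * d for d in dst_rgb)

background = (0.8, 0.8, 0.8)
# A mesh at opacity 0.5 affects the background the same amount no
# matter how thick the mesh is along the view ray:
print(over((0.1, 0.6, 0.1), 0.5, background))
# A real solid would darken more where it is thicker:
print(attenuating_solid(background, 0.7, 0.5))
print(attenuating_solid(background, 0.7, 2.0))
```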
Moving and Rotating Objects


When an object is selected, handles appear. You can either drag the
handle to translate the object along the corresponding axis, or control-drag to
rotate around that axis.
The handles appear along the main coordinate system axes by default, so
for example, you can always drag an object vertically no matter what its
orientation.
However, if you select Local-coordinate handles on the right-click menu,
the handles will align with the object's coordinate system, so that you can
translate along a cylinder's axis, despite its orientation.
Additionally, for cameras or moving objects, you can select Path-relative
handles, so you can adjust along or perpendicular to the path.
Placing Seed Points and Objects


In the Place mode, you can slide the selected object around on the
surface of any existing mesh objects. For example, place a pyramid onto the top
of a cube to build a small house.
You can also use the place mode to put a tracker‘s seed/lock point onto
the surface of an imported reference head model, for example, to help set up
tracking for marginal shots.
For this latter workflow, set up trackers on the image and import the reference
model. Go to the Camera and Perspective viewport configuration. Set the
perspective view to Place mode. Select each tracker in the camera view, then
place its seed point on the reference mesh in the perspective view. You can
reposition the reference mesh however you like in the perspective view to make
this easy—it does not have to be locked to the source imagery to do this. This
work should go quite quickly.
If you need to place trackers (or meshes) at the vertices of the mesh, not
on the surface, hold the control key down as you use the place mode, and the
position will snap onto the vertices.
Grid Operations
The perspective window's grid is used for object creation and mesh
editing. It can be aligned with any of the walls of the set: floor, back, ceiling, etc.
A move-grid mode translates the grid, while maintaining the same orientation, to
give you a grid 1 meter above the floor, say.
A shared custom grid position can be matched to the location of several
vertices or trackers using the right-click|Grid|To Facets/Verts/Trackers menu
item. If 3 trackers (or vertices) are selected, the grid is moved into the plane
defined by the three. If two are selected, the grid is rotated to align the side-to-
side axis along the two. If one is selected, the grid slides to put that tracker at the
origin. So by repeatedly selecting some trackers (or vertices) and using this menu
command, the grid can be aligned as desired.
You can easily create an object on the plane defined by any 3 trackers by
selecting them, aligning the grid to the trackers, then creating the object, which
will be on the grid.
You can toggle the display of the grid using the Grid/Show Grid menu
item, or the 'G' key.
Shadows
The perspective window generates shadows to help show tracking quality
and preview how rendered shots will ultimately appear.
The 3-D panel includes control boxes for Cast Shadows and Catch
Shadows. Most objects (except for the Plane) will cast shadows by default when
they are created.
If there are no shadow-catching objects, shadows will be cast onto the
ground plane. This may be more or less useful, depending on your ground plane;
if the ground is very irregular or non-existent, this will be confusing.
If there are shadow-catching objects defined, shadows will be cast from
shadow-casting objects onto the shadow-catching objects. This can preview
complex effects such as a shadow cast onto a rough terrain.
Shadows may be disabled from the main View menu, and the shadow
black level may be set from the Preferences color settings. The shadow enable
status is "sticky" from one run to the next, so if you do not usually use shadows,
you will not have to turn them off each time you start SynthEyes.
Note that as with most OpenGL fast-shadow algorithms, there can be
shadow artifacts in some cases. Final shadowing should be generated in your 3-
D rendering application.
Note that the camera viewport does not display shadows by design.
Edit Mesh
The perspective window allows meshes to be constructed and edited,
which is discussed in Building Meshes from Tracker Positions. One mesh can be
selected as the edit mesh at any time: select a mesh, then right-click Set Edit
Mesh or hit the 'M' key. A mesh's status as the edit mesh is independent of its
status as a selected mesh. To not have any edit mesh, clear the overall selection
by clicking in empty space, then right-click Set Edit Mesh (ie to nothing).
Preview Movie
After you solve and add a few test objects, you can render a test
Quicktime movie (except on Win64) or a BMP, Cineon, DPX, JPEG, OpenEXR,
PNG, SGI, or Targa sequence (also TIFF on Mac). While the RAM-based
playback is limited by the amount of RAM, and has a simplified drawing scheme
to save time, the preview movie supports anti-aliasing. The movie playback can
later run at the full rate regardless of length.
Right-click in the perspective window to bring up the menu and select the
Preview Movie item to bring up a dialog allowing the output file name,
compression settings, and various display control settings to be set. Usually you
will want to select square pixel output for playback on computer monitors in
Quicktime; it will convert 720x480 source to 640x480, for example, so that the
preview will not be stretched horizontally.
If you are making a Quicktime movie, be sure to bring up the compression
settings and select a codec; Quicktime has no default and may crash if you do
not select one.
Also, different codecs will have their own parameters and requirements.
Important Tip: the H.264 codec requires that the Key every N frames
checkbox be off, and the limit data-rate to 90 kb/sec checkbox be off: otherwise
there will be only one frame.
Similarly, image files used in a sequence may have their own settings
dialog. Note that image sequences written from the Preview Movie are always 8
bit/channel with no alpha. You can re-write image sequences at 16 bit and
including an alpha channel using the Image Preprocessor (again depending on
details of the source and output file format).
Technical Controls
The Scene Settings dialog contains many numeric settings for the
perspective view, such as near and far camera planes, tracker and camera icon
sizes, etc. You can access the dialog either from the main Edit menu, or from
the perspective window's right-click menu.
By default, these items are sized proportionate to the current "world size"
on the solver control panel. Before you go nuts changing the perspective window
settings, consider whether you really need to adjust your world size instead!
Exporting to Your Animation Package
Once you are happy with the object paths and tracker positions, use the
Export menu items to save your scene.
The following options are currently available (note that this list is
constantly being expanded; check the web site):
 • 3ds max 4 or later (Maxscript). Should be usable for 3D Studio MAX 3 as
   well. Separate versions for 3dsmax 5 and earlier, and 3dsmax 6 and later.
 • After Effects (via a special maya file)
 • Bentley Microstation
 • Blender
 • Carrara
 • Cinema 4D (via Lightwave scene)
 • Combustion
 • ElectricImage (less integrated due to EI import limitations)
 • FLAIR motion control cameras (Mark Roberts Motion Control)
 • Flame (3-D)
 • Fusion 5
 • Hash Animation:Master. Hash 2001 or later.
 • Houdini
 • Inferno 3-D Scene
 • Lightwave LWS. Use for Lightwave, Cinema 4D
 • Maya scene file
 • Mistika
 • Modo
 • Motion – 2-D
 • Nuke (D2 Software, subsidiary of Digital Domain)
 • Particle Illusion
 • Poser
 • Realsoft 3D
 • Shake (several 2-D/2.5-D plus Maya for 3-D scenes)
 • SoftImage XSI, via a dotXSI file
 • Toxik (earlier versions, not updated for 2009)
 • trueSpace
 • Vue 5 and 6 Infinite
 • VIZ (via 3ds Max scene)
SynthEyes offers a scripting language, SIZZLE™, that makes it easy to
modify the exported files, or even add your own export type. See the separate
SIZZLE User Manual for more information. New export types are being added all
the time; check the export list in SynthEyes and the support site for the latest
packages or beta versions of forthcoming exporters.
General Procedures
You should already have saved the scene as a SynthEyes file before
exporting. Select the appropriate export from the list in the File/Exports area.
SynthEyes keeps a list of the last 3 exporters used on the top level of the File
menu as well.
Hint: SynthEyes has many exports. To simplify the list, click
Script/System Script Folder, create a new folder "Unused" in it, and move all
the scripts for applications you do not use into that folder. You will have to repeat
this process when you later install new builds, however.
There is also an export-again option, which repeats the last export
performed by this particular scene file, with the most-recently-used export
options, without bringing up the export-options dialog again, to save time for
repeated exports.
When you export, SynthEyes uses the file name, with the appropriate file
extension, as the initial file name. By default, the exported file will be placed in a
default export folder (as set using the preferences dialog).
In most cases, you can either open the exported file directly, or if it is a
script, run the script from your animation package. For your convenience,
SynthEyes puts the exported file name onto the clipboard, where you can paste it
(via control-V or command-V) into the open-file dialog of your application, if you
want. (You can disable this from the preferences panel if you want.)
Note that the detailed capabilities of each exporter can vary somewhat.
Some scripts offer popup export-control dialogs when they start, or small internal
settings at the beginning of each Sizzle script. For example, 3ds max does not
offer a way to set the units from a script before version 6, and the render settings
are different, so there are slightly different versions for 3dsmax 5 and 6+. Settings
in the Maya script control the re-mapping of the file name to make it more suitable
for Maya on Linux machines. If you edit the scripts, using a text editor such as
Windows' Notepad, you may want to write down any changes, as they must be
re-applied to subsequent upgraded versions.
Be aware that not all packages support all frame rates. Sometimes a
package may interpret a rate such as 23.98 as 24 fps, causing mismatches in
timing later in the shot. Or one package may produce 29.96 vs 29.97 in another.
Handle image sequences and use frame counts rather than AVIs, QTs, frame
times, or drop-frame time codes wherever possible.
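The cost of a misinterpreted frame rate accumulates quickly. For example, treating 23.976 fps (24000/1001) footage as exactly 24 fps slips a full frame in roughly a thousand frames, as this small calculation shows:

```python
# How a small frame-rate mismatch accumulates: a 23.976 fps shot
# interpreted as 24 fps drifts by about one full frame per 1000 frames.
true_rate = 24000 / 1001          # 23.976... fps (NTSC film rate)
assumed_rate = 24.0

frames = 1000
drift_seconds = frames / true_rate - frames / assumed_rate
drift_frames = drift_seconds * true_rate
print(f"after {frames} frames: {drift_frames:.2f} frames of drift")
```

That one-frame slip is exactly the kind of timing mismatch that frame counts (rather than frame times or time codes) avoid.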
The Coordinate System control panel offers an Exportable checkbox
that can be set for each tracker. By default, all trackers will be exported, but in
some cases, especially for compositors, it may be more convenient to export only
a few of the trackers. In this case, select the trackers you wish to export, hit
control-I to invert the selection, then turn off the checkbox. Note that particular
export scripts can choose to ignore this checkbox.
Setting the Units of an Export


SynthEyes uses generic units: a value of 10 might mean 10 feet, 10
meters, 10 miles, 10 parsecs—whatever you want. It does not matter to
SynthEyes. This works because match-moving never depends on the overall
scale of the scene.
SynthEyes generally tries to export the same way as well—sending its
numbers directly as-is to the selected animation or compositing package.
However, some software packages use an absolute measurement system
where, for instance, Lightwave requires that coordinates in a scene file always be
in meters. If you want something else inside Lightwave, it will automatically
convert the values.
For such software, SynthEyes needs to know what units you consider
yourself to be using within SynthEyes. It doesn't care, but it needs to tell the
downstream package the right thing, or pre-scale the values to match your
intention.
To set the SynthEyes units selection, use the Units setting on the
SynthEyes preferences panel. Changing this setting will not change any numbers
within SynthEyes; it will only affect certain exports.
The exports affected by the units setting are currently these:
 • After Effects (3-D)
 • Hash Animation Master
 • Lightwave
 • 3ds max
 • Maya
 • Poser
Before exporting to one of these packages, you should verify your units
setting. Alternatively, if you observe that your imported scene has different values
than in SynthEyes, you should check the units setting in SynthEyes.
Hint: if you will be exporting to a compositing package, they often
measure everything, including 3-D coordinates, in terms of pixels, not inches,
meters, etc. Be sure to pick sizes for the scene that will work well in pixels. While
you might scale a scene for an actor 2m tall, if you export to a compositor and
the actor is two pixels tall that will rarely make sense.
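The pre-scaling described above is plain unit conversion. As a sketch (the table below holds ordinary conversion factors, not SynthEyes settings), exporting to a package that stores coordinates in meters means multiplying every coordinate by the meters-per-unit factor for whatever unit you consider yourself to be using:

```python
# Pre-scaling generic SynthEyes units for a package that stores
# coordinates in meters (as Lightwave scene files do).
METERS_PER_UNIT = {"mm": 0.001, "cm": 0.01, "m": 1.0,
                   "in": 0.0254, "ft": 0.3048}

def to_meters(xyz, units):
    """Convert one coordinate triple from the chosen units to meters."""
    s = METERS_PER_UNIT[units]
    return tuple(s * v for v in xyz)

# If you considered yourself to be working in feet, a tracker at
# (10, 0, 5) must be written into the exported file in meters:
print(to_meters((10.0, 0.0, 5.0), "ft"))  # approximately (3.048, 0.0, 1.524)
```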
Image Sequences
Different software packages have different conventions and requirements
regarding the numbering of image sequences: whether they start at 0 or 1,
whether there are leading zeroes in the image number, and whether they handle
sequences that start at other numbers flexibly.
For example, if you have a shot that originally had frames img1.tif-
img456.tif, but you are using only images img100.tif-img150.tif of it, SynthEyes
will normally consider it as a 51 frame shot, starting with frame 0 (img100.tif) or,
with First frame is 1 preference on, as frame 1 at img100.tif.
Other software sometimes requires that their frame numbers match the file
number, so img100.tif must always be frame 100, no matter what frame# they
normally start at.
SynthEyes gives you the option to pad the beginning of IFLs with extra
copies of the first frame, so that the SynthEyes frame number matches the image
frame number, by turning on the Match frame#'s preference. While this sounds
simple, it will cause trouble for many of the exports. It is especially a problem if
you do not have the unused frames, as is often the case.
By being aware of these differences, you will be able to recognize when
your particular situation requires an adjustment to the settings—typically when
there is a shift between the camera path animation and the imagery.
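The padding idea can be sketched as follows (the file names and function are hypothetical, for illustration only): repeat the first real image at the head of the list so that an image's position in the list equals the number in its file name.

```python
def padded_ifl(prefix, first_used, last_used, ext="tif"):
    """Sketch of the Match frame#'s idea: pad the head of an image
    list with copies of the first frame so that the index in the
    list matches the number in the file name."""
    first_name = f"{prefix}{first_used}.{ext}"
    # entries 0 .. first_used-1 repeat the first real image
    pad = [first_name] * first_used
    real = [f"{prefix}{n}.{ext}" for n in range(first_used, last_used + 1)]
    return pad + real

frames = padded_ifl("img", 100, 150)
print(len(frames))    # 151 entries: indices 0..150
print(frames[0])      # img100.tif (padding copy)
print(frames[100])    # img100.tif, now at the matching index
print(frames[150])    # img150.tif
```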
Generic 2-D Tracker Exporters


There are a number of similar exporters that all output 2-D tracker paths to
various compositing packages. Why 2-D, you protest? For starters, SynthEyes
tracking capabilities can be faster and more accurate. But even more
interestingly, you can use the 2-D export scripts to achieve some effects you
could not with the compositing package alone.
For image stabilizing applications, the 2-D export scripts will average
together all the selected trackers within SynthEyes, to produce a synthetic very
stable tracker.
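Frame-by-frame averaging of the selected trackers can be pictured like this. It is an illustrative sketch, not the exporter's actual code; the coordinate values are made up:

```python
def synthetic_tracker(paths):
    """Average several 2-D tracker paths frame by frame to produce
    one steadier synthetic path, as the 2-D exports do for the
    selected trackers. `paths` maps tracker name -> list of (u, v)."""
    per_frame = zip(*paths.values())  # group positions by frame
    return [(sum(u for u, _ in pts) / len(pts),
             sum(v for _, v in pts) / len(pts)) for pts in per_frame]

# Two jittery trackers whose individual wobbles cancel out:
paths = {
    "A": [(0.25, 0.50), (0.375, 0.625)],
    "B": [(0.75, 0.50), (0.625, 0.375)],
}
print(synthetic_tracker(paths))  # [(0.5, 0.5), (0.5, 0.5)]
```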
For corner-pinning applications, you can have SynthEyes output not the 2-
D tracker location, but the re-projected location of the solved 3-D point. This
location can not only be smoother, but continues to be valid even if the tracker
goes off-screen. So suppose you need to insert a painting into an ornate
picture-frame using corner pinning, but one corner goes off-screen during part of
the shot. By outputting the re-projected 3-D point (Use solved 3-D points
checkbox), the corner pin can be applied over the entire shot without having to
guess any of the path.
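The re-projection of a solved 3-D point can be sketched with a simple pinhole model. The axis convention and function here are hypothetical (a real solver also applies the camera's rotation and lens model); the point is that the 2-D location is computed from geometry, so it remains defined even when the feature itself is off-screen:

```python
import math

def reproject(point, cam_pos, fov_h_deg, aspect):
    """Project a 3-D point to normalized 2-D screen coordinates
    through a pinhole camera looking down +Y (illustrative axes).
    Returns (u, v) with the image spanning -1..1 horizontally."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]   # depth along the view axis
    z = point[2] - cam_pos[2]
    f = 1.0 / math.tan(math.radians(fov_h_deg) / 2.0)
    return (f * x / y, f * aspect * z / y)

# A point dead ahead lands at image center, on every frame,
# even on frames where the 2-D tracker itself has slipped away:
print(reproject((0.0, 10.0, 0.0), (0.0, 0.0, 0.0), 60.0, 4 / 3))
```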
Taking this idea one step further, you can create an "extra" point in 3-D in
SynthEyes. Its re-projected 2-D position will be averaged with any selected
trackers; if there are none, its position will be output directly. So you can do a
four-corner pin even if one of the corners is completely blocked or off-screen.
By repeating this process several times, you can create any number of
synthetic trackers, doing a four-corner insert anywhere in the image, even where
there are no trackable features. Of course, you could do this using a 3-D
compositing environment, but that might not be simplest.
At present, there are compatible 2-D exporters for AfterEffects, Digital
Fusion, Discreet (Combustion/Inferno/Flame), Particle Illusion, and Shake. Note
that you will need to import the tracker data file (produced by the correct
SynthEyes exporter) into a particular existing tracker in your compositing
package.
There is also a 2-D exporter that exports all tracker paths into a single file,
with a variety of options to change frame numbers and u/v coordinates. A similar
importer can read the same file format back in. Consequently, you can use the
pair to achieve a variety of effects within SynthEyes, including transferring
trackers from SynthEyes file to SynthEyes file, as described in the section on
Merging Files and Tracks. This format can also be imported by Fusion.
Generic 3-D Exporters


There are several 3-D exports that produce plain text files. You can use
them for any software SynthEyes doesn't already support, for example, non-visual-
effects software. You can also use them as a way to manipulate data with small
shell, AWK, or Perl scripts, for example.
Importantly, you can also use them as a way to transfer data between
SynthEyes scene files, for example, to compute some tracker locations to be
used by a number of shots. There are several ways to do this, see the section on
Merging Files and Tracks.
The generic exports are Camera/Object Path for a path, Plain Trackers for
the 3-D coordinates of trackers and helper points, and corresponding importers.
You can import 3-D locations to create either helper points, or trackers. This
latter option is useful to bring in surveyed coordinates for tracking.
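As a sketch of the kind of script-level manipulation mentioned above, here is a small parser for a plain-text tracker export. The one-record-per-line "name x y z" layout is an assumption for illustration; check the file your exporter actually writes:

```python
def read_plain_trackers(lines):
    """Parse a plain-text tracker export with one 'name x y z'
    record per line (assumed layout). Returns {name: (x, y, z)};
    lines that do not have four fields are skipped."""
    points = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 4:
            name, x, y, z = parts
            points[name] = (float(x), float(y), float(z))
    return points

sample = ["Tracker1 1.5 0.0 2.25", "Tracker2 -3.0 4.0 0.5"]
print(read_plain_trackers(sample))
```

The same few lines, pointed at a real file, are all a shell or Perl pipeline needs to rescale, renumber, or re-export the data.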
After Effects 3-D Procedure


1. Export to After Effects in SynthEyes to produce a (special) .ma file.
2. In After Effects, do a File/Import File
3. Change "Files of Type" to All File Formats
4. Select the .ma file
5. Double-click the Composition (Square-whatever composition in older AE
versions).
6. Re-import the original footage
7. Click File/Interpret Footage/Main and be sure to check the exact frame
rate and pixel aspect. Be especially careful with 23.976 and 29.97,
entering 24 or 30 fps will cause subtle errors!
8. Rewind to the beginning of the shot
9. Drag the re-imported footage from the project window into the timeline as
the first layer
10. Tracker nulls have a top-left corner at the active point, instead of being
centered on the active point as in SynthEyes.
Important note: AfterEffects uses "pixels" as its unit within the 3-D
environment, not inches or feet (ie it does not convert units at all). The default
SynthEyes coordinate system setup keeps the world less than 100 units across.
As AE interprets that as pixels, your 3-D scene can appear to be quite small in
AE, as is the case in the tutorial on the web site, which is why we had to scale
down the object we created and inserted in AE. It is much easier to adjust the
coordinate system in SynthEyes first, so the 3-D world is bigger, for example by
changing the coordinates of the second point used in coordinate system setup
from 20,0,0 to be 1000,0,0, say.

After Effects 2-D Procedure


1. Select one or more trackers to be exported.
2. Export using the After Effects 2-D Clipboard. You can select either the 2-D
tracking data, or the 3-D position of tracker re-projected to 2-D.
3. Open the text file produced by the export
4. In the text editor, select all the text, using control-A or command-A.
5. Copy the text to the clipboard with control-C or command-C.
6. In After Effects, select a null to receive the path.
7. Paste the path into it with control-V or command-V.

Bentley MicroStation
You can export to Bentley's MicroStation V8 XM Edition by following
these directions.
Exporting from SynthEyes
1. MicroStation requires that animated backgrounds consist of a consecutive
sequence of numbered images, such as JPEG or Targa images. If
necessary, the Preview Movie capability in SynthEyes's Perspective
window can be used to convert AVIs or MOVs to image sequences.
2. Perform tracking, solving, and coordinate system alignment in SynthEyes.
(Exporting coordinates from MicroStation into SynthEyes may be helpful.)
3. File/Export/Bentley MicroStation to produce a MicroStation Animation
(.MSA) file. Save the file where it can be conveniently accessed from
MicroStation. The export parameters are listed below.
SynthEyes/MicroStation Export Parameters:
Target view number. The view number inside MicroStation to be
animated by this MSA file (usually 2)
Scaling. This is from MicroStation's Settings/DGN File Settings/Working
Units, in the Advanced subsection: the resolution. By default, it is listed as 10000
per distance meter, but if you have changed it for your DGN file, you must use
the same value here.
Relative near-clip. Controls the MicroStation near clipping-plane
distance. It is a "relative" value, because it is multiplied by the SynthEyes world
size setting. Objects closer than this to the camera will not be displayed in
MicroStation.
Relative view-size. Another option to adjust as needed if everything is
disappearing from view in MicroStation.
Relative far-clip. Controls the MicroStation far clipping-plane distance. It
is a "relative" value, because it is multiplied by the SynthEyes world size setting.
Objects farther than this from the camera will not be displayed in MicroStation.
Importing into MicroStation
1. Open your existing 3-D DGN file. Or, create a new one, typically based on
seed3d.dgn
2. Open the MicroStation Animation Producer from
Utilities/Render/Animation
3. File/Import .MSA the .msa file written by the SynthEyes exporter.
4. Set the View Size correctly—this is required to get a correct camera
match.
a. Settings/Rendering/View Size
b. Select the correct view # (typically 2)
c. Turn off Proportional Resize
d. Set X and Y sizes as follows. Multiply the height(Y) of your image,
in pixels, by the aspect ratio (usually 4:3 for standard video or 16:9
for HD) to get the width(X) value. For example, if your source
images are 720x480 with a 4:3 aspect ratio, the width is 480*4/3 =
640, so set the image size to X=640 and Y=480, either directly on
the panel or using the "Standard" drop-down menu. This process
prevents horizontal (aspect-ratio) distortion in your image.
e. Hit Apply
f. Turn Proportional Resize back on
g. Close the view size tool
5. On the View Attributes panel, turn on the Background checkbox.
6. Bring up the Animation toolbar (Tools/Visualization/Animation) and select
the Animation Preview tool. You can dock it at the bottom of MicroStation
if you like.
7. If you scrub the current time on the Animation Preview, you'll move
through your shot imagery, with synchronized camera motion. Unless you
have some 3-D objects in the scene, you won't really be able to see the
camera motion, however.
8. If desired, use the Tools/3-D Main/3-D Primitives toolbar to create some
test objects (as you probably did in SynthEyes).
9. To see the camera cone of the camera imported from SynthEyes, bring up
Tools/Visualization/Rendering, and select the Define Camera tool. Select
the view with the SynthEyes camera track as the active view in the Define
Camera tool, and turn on the Display View Cone checkbox.
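The view-size arithmetic in step 4d amounts to the following (the helper function is hypothetical, for illustration): multiply the image height by the display aspect ratio to get the square-pixel width.

```python
from fractions import Fraction

def view_size(height_px, aspect_w=4, aspect_h=3):
    """Derive the square-pixel width from the image height and the
    display aspect ratio, as in step 4d above."""
    width = height_px * Fraction(aspect_w, aspect_h)
    return int(width), height_px

print(view_size(480))          # (640, 480) for 4:3 DV footage
print(view_size(1080, 16, 9))  # (1920, 1080) for 16:9 HD
```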
Transferring 3-D Coordinates
If you would like to use within MicroStation the 3-D positions of the
trackers, as computed by SynthEyes, you can bring them into MicroStation as
follows.
1. You have the option of exporting only a subset of points from SynthEyes
to MicroStation. All trackers are exported by default; turn off the
Exportable checkbox on the coordinate system panel for those you don't
wish to export. You may find it convenient to select the ones you want,
then Edit/Invert Selection, then turn off the box.
2. In SynthEyes, File/Export/Plain Trackers with Set Names=none, Scale=1,
Coordinate System=Z Up. This export produces a .txt file listing all the
XYZ tracker coordinates.
3. In MicroStation, bring up the Tools/Annotation/XYZ Text toolbar.
4. Click the Import Coordinates tool. Select the .txt file exported from
SynthEyes in Step 2. Set Import=Point Element, Order=X Y Z, View=2 (or
whichever you are using).
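The exported .txt file is plain text, so it is easy to post-process. Below is a minimal Python sketch of a reader, assuming one name plus X, Y, Z values per line; the exact layout is an assumption, so check a sample export first:

```python
# Parse a Plain Trackers export: assumed layout is "name X Y Z" per line.
def read_trackers(lines):
    trackers = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip blank or malformed lines
        x, y, z = map(float, parts[1:])
        trackers[parts[0]] = (x, y, z)
    return trackers

sample = ["Tracker1 1.25 -0.50 0.00", "Tracker2 3.00 2.00 1.50"]
print(read_trackers(sample))
```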
Transferring Meshes
SynthEyes uses two types of meshes to help align and check camera
matches: mesh primitives, such as spheres, cubes, etc; and tracker meshes, built
from the computed 3-D tracker locations. The tracker meshes can be used to
model irregular areas, such as a contoured job site into which a model will be
inserted. Both types of models can be transferred as follows:
1. In SynthEyes, select the mesh to be exported, by clicking on it or selecting
it from the list on the 3-D panel.
2. Select the File/Export/STL Stereolithography export, and save the mesh to
a file.
3. In MicroStation, select File/Import STL and select the file written in step 2.
You can use the default settings.
4. Meshes will be placed in MicroStation at the same location as in
SynthEyes.
5. You can bring up its Element/Information and assign it a material.
To Record the Animation
1. Select the Record tool on the Animation toolbar
(Tools/Visualization/Animation)
2. Important: Be sure the correct (square pixels) output image size is
selected, the same one as the viewport size. For example, if your input is
4:3 720x480 DV footage, you MUST select 640x480 output to achieve 4:3
with square pixels (i.e., 640/480 = 4/3). MicroStation always outputs square
pixels. You can output images with any overall aspect you wish, as long
as the pixels are square (pixel aspect ratio is 1.0). Note that HD images
already have square pixels.
3. Don‘t clobber your input images! Be sure to select a different location for
your output footage than your input.
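The arithmetic behind the square-pixel output size is simple: multiply the stored width by the pixel aspect ratio. A quick Python check, using the 0.889 DV pixel aspect mentioned in the dotXSI notes:

```python
# Width after resampling to square pixels, preserving the overall aspect.
def square_pixel_width(width, pixel_aspect):
    return round(width * pixel_aspect)

# 720x480 DV with a 0.889 pixel aspect becomes 640x480 (and 640/480 = 4/3)
print(square_pixel_width(720, 0.889))
```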

Blender Directions
Blender has a tendency to change around frequently, so the details of
these directions might best be viewed more as a guide than the last word.

When working with image sequences and Blender, it is a good idea to
ensure that the overall frame number is the same as the number in the image file
name. Although you can adjust the offset, Blender incorrectly eliminates a frame
number of zero.
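As a quick sanity check on the file-name/frame-number correspondence, a small Python helper (illustrative only, not part of SynthEyes or Blender) can pull the trailing frame number out of an image name:

```python
import re

# Extract the trailing frame number from an image file name, so the
# Blender frame offset can be verified against the image numbering.
def frame_number(filename):
    m = re.search(r'(\d+)\.\w+$', filename)
    return int(m.group(1)) if m else None

print(frame_number("shot_0042.png"))  # 42
```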
1. In SynthEyes, export to Blender (Python)
2. Start Blender
3. Delete the default cube and light
4. Change one of the views to the blender Text Editor
5. In the text editor, open the blender script you exported in step 1.
6. Hit ALT-P to run the script
7. Select the camera (usually Camera01) in the 3-D Viewport
8. In a 3-D view, select Camera on the View menu to look through the
imported, animated, SynthEyes camera
9. Select View/Background image
10. Click Use Background Image
11. Select your image sequence or movie from the selection list.
12. Adjust the background image settings to match your image. Make sure the
shot length is adequate, and that Auto Refresh is on. If the images and
animation do not seem to be synced correctly, you probably have to adjust
the offset.
13. Decrease the "blend" value to zero, or you can go without the background,
and set up compositing within blender.
14. On the View Properties dialog, you might wish to turn off Relationship
Lines to reduce clutter.
15. Use a Timeline view to scrub through the shot.

Cinema 4D Procedure
1. Export from SynthEyes in Lightwave Scene format (.lws) — see below.
2. Start C4D, import the .lws file, yielding camera and tracking points.
3. To set up the background, add a Background using the Objects menu
4. Create a new Texture with File/New down below.
5. At right, click on ―…‖ next to the file name for texture.
6. Select your source file (jpeg sequence or movie).
7. Click on the right-facing triangle button next to the file name, select Edit.
8. Select the Animation panel
9. Click the Calculate button at the bottom.
10. Drag the new texture from the texture editor onto the ―Background‖ on the
object list. Background now appears in the viewport.

DotXSI Procedure
1. In SynthEyes, after completing tracking, do File/Export/dotXSI to create a
.xsi file somewhere.
2. Start Softimage, or do a File/New.
3. File/Import/dotXSI... of the new .xsi file from SynthEyes. The options may
vary with the XSI version, but you want to import everything.

4. Set the camera to Scene1.Camera01 (or whatever you called it in
SynthEyes).
5. Open the camera properties.
6. In the camera rotoscopy section, select New from Source and then the
source shot.
7. Make sure "Set Pixel Ratio to 1.0" is on.
8. Set "Use…" pixel ratio to "Camera Pixel Ratio" (should be the default)
9. In the Camera section, make sure that Field of View is set to Horizontal.
10. Make sure that the Pixel Aspect Ratio is correct. In SynthEyes, select
Shot/Edit Shot to see the pixel aspect ratio. Make sure that XSI has the
exact same value: 0.9 is not a substitute for 0.889, so fix it! Back story:
XSI does not have a setting for 720x480 DV, and 720x486 D1 causes
errors!
11. Close the camera properties page.
12. On the display mode control (Wireframe, etc), turn on Rotoscope.

ElectricImage
The ElectricImage importer relies on a somewhat higher level of user
activity than normal, in the absence of a scripting language for EI. You can export
either a camera or object path, and its associated trackers.
1. After you have completed tracking in SynthEyes, select the camera/object
you wish to export from the Shots menu, then select File/Export/Electric
Image. SynthEyes will produce two files, an .obm file containing the
trajectory, and an .obj file containing geometry marking the trackers.
2. In ElectricImage, make sure you have a camera/object that matches the
name used in SynthEyes. Create new cameras/objects as required. If you
have Camera01 in SynthEyes, your camera should be "Camera 1" in EI.
The zero is removed automatically by the SynthEyes exporter.
3. Go to the Animation pull-down menu and select the "Import Motion"
option.
4. In the open dialog box, select "All Files" from the Enable pop-up menu, so
that the .obm file will be visible.
5. Navigate to, and select, the .obm file produced by SynthEyes. This will
bring up the ElectricImage motion import dialog box which allows you to
override values for position, rotation, etc.

Normally, you will ignore all these options as it is simpler to parent the
camera/object to an effector later. The only value you might want to
change is the "start time" to offset when the camera move begins. Click
OK and you will get a warning dialog about the frame range.

This is a benign warning that sets the "range of frames" rendering option
to match the length of the incoming camera data. Hitting cancel will abort
the operation, so hit OK and the motion data will be applied to the camera.
6. Select "Import Object" from the Object pull-down menu.

7. Enable "All Files" in the pop-up menu.
8. Select the .obj file produced by SynthEyes.
9. Create a hierarchy by selecting one tracker as the parent, or bringing in all
trackers as separate objects.
10. If you are exporting an object path, parent the tracker object to the object
holding the path.
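Judging from the Camera01 to "Camera 1" example in step 2, the renaming the exporter performs appears to be: split off the trailing digits, drop any leading zeros, and insert a space. A Python sketch of that inferred rule (an assumption based on the example, not a documented spec):

```python
import re

# Map a SynthEyes-style name like "Camera01" to the EI-style "Camera 1".
def ei_name(name):
    m = re.match(r'([A-Za-z]+?)0*(\d+)$', name)
    return f"{m.group(1)} {m.group(2)}" if m else name

print(ei_name("Camera01"))  # "Camera 1"
```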

Fusion 5
There are several Fusion-compatible exporters. The main exporter is the
Fusion 5 composition export, which can be opened directly in Fusion.
The Tracker 2-D Paths export can write all the exportable trackers to a
text file, which can then be read in Fusion with the Import SynthEyes Trackers
script and assigned to any Point-type input on a node. Select a node and start
the Import script from its right-click menu. At present, it appears that you should
animate the desired control before importing, then tell the script to proceed
anyway when it notices that the control is already animated.
There is also a generic 2-D path exporter for Fusion.

Houdini Instructions:
1. File/New unless you are adding to your existing scene.
2. Open the script Textport
3. Type source "c:/shots/scenes/flyover.cmd" or equivalent.
4. Change back from COPs to OBJs.

Lightwave
The Lightwave exporter produces a lightwave scene file (.lws) with several
options, one of them crucial to maintaining proper synchronization.
As mentioned earlier, Lightwave requires a units setting when exporting
from SynthEyes. The SynthEyes numbers are unitless: by changing the units
setting in the lightwave exporter as you export, you can make that 24 in
SynthEyes mean 24 inches, 24 feet, 24 meters, etc. This is different than in
Lightwave, where changing the units from 24 inches would yield 2 feet, 0.61
meters, etc. This is the main setting that you may want to change from scene to
scene.
Lightwave has an obscure preferences-like setting on its Compositing
panel (on the Windows menu) named "Synchronize Image to Frame." The
available options are zero or one. Selecting one shifts the imagery one frame
later in time, and this is the Lightwave default. However, for SynthEyes, a setting
of zero will generally be more useful (unless the SynthEyes preference First
Frame is 1 is turned on). The Lightwave exporter from SynthEyes allows you to
select either 0 or 1. We recommend selecting zero, and adjusting Lightwave to
match. You will only have to do this once; Lightwave remembers it subsequently.
In all cases, you must have a matching value on the exporter UI and in
Lightwave, or you will cause a subtle velocity-dependent error in your camera
matches in Lightwave that will drive you nuts until you fix the setting.
The exporter also has a checkbox for using DirectShow. This checkbox
applies only for AVIs, and should be on for most AVIs that contain advanced
codecs such as DV or HD. If an AVI uses an older codec and is not opened
automatically within Lightwave, export again with this checkbox turned off.

Modo
The modo exporter handles normal shots, tripod shots, object shots,
zooms etc. It transfers any meshes you've made, including the UV coordinates if
you've frozen a UV map onto a tracker mesh.
The UI includes the units (you can override the SynthEyes preferences
setting); the scaling of the tracker widgets in Modo—this is a percentage value,
adjust to suit; plus there is an overall scaling value you can tweak if you want to
(better to set up the coordinates right instead).
Limitations
1. Only Image Sequences can be transferred to and displayed by Modo -- modo
does not support AVI or Quicktime backdrops.
2. Image sequences in modo MUST have a fixed number of digits: the first and
last frames must have the same number of digits (may require leading
zeroes). YES: img005..img150. NO: img2..img913. This may be a problem
for Quicktime-generated sequences.
3. Modo occasionally displays the wrong image on the first frame of the
sequence after you scrub around in Modo. Do not panic.
4. The export is set up to use Modo's default ZXY angle ordering.
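A quick Python illustration of building fixed-digit names, which sidesteps the renumbering problem for sequences like img2..img913:

```python
# Build fixed-digit image-sequence names: modo requires the first and
# last frames to have the same number of digits (leading zeroes).
def seq_name(prefix, frame, digits, ext="tga"):
    return f"{prefix}{frame:0{digits}d}.{ext}"

print(seq_name("img", 5, 3))    # img005.tga
print(seq_name("img", 150, 3))  # img150.tga
```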
Directions
1. Track, solve, etc, then export using the Modo Perl Script item (will produce a
file with a ".pl" extension). Be sure to select the correct modo version for the
export.
2. Start modo, on the System menu select Run Script and give it the file you
exported from SynthEyes.
3. To see the match, you may need to re-set the modo viewport to show the
exported camera, typically Camera01

Nuke
The nuke exporter produces a nuke file you can open directly. Be sure to
select the exporter appropriate to your version of Nuke—the files are notably
different between Nuke versions. The 5.0 exporters are substantially more
feature-rich than the Nuke 4 exporter, handling a wide variety of scene types.

The pop-up parameter panel lets you control a number of features. The
Nuke exporter will change SynthEyes meshes to Nuke built-ins where possible,
such as for boxes and spheres. It can export non-primitive meshes as OBJ files
and link them in automatically. If the 'other' meshes are not exported, they are
changed to bounding boxes in Nuke. Note that SynthEyes meshes can be scaled
asymmetrically; you can either burn the scaling into the OBJ file (especially
useful if you wish to use the OBJ elsewhere), or you can have the scaling
duplicated by the Nuke scene.
You can indicate if you have a slate frame at the start of the shot, or select
renderable or non-rendering tracker marks. The renderable marks are better for
tracking, the non-rendering marks better for adding objects within Nuke's 3-D
view. The size of the renderable tracker marks (spheres) can be controlled by a
knob on the enclosing group. You can ask for a sticky note showing the
SynthEyes scene file, or a popup message with the frame and tracker count.
Note that Nuke 5.1 and earlier require integer frame rates
throughout. SynthEyes will force the value appropriately, but you may need to
pay attention throughout your pipeline if you are using Nuke on 23.976 fps shots,
which is "24 fps" from an HD/HDV camera.

Poser
Poser struggles a little to be able to handle a match-moved camera, so the
process is a bit involved. Hopefully Curious Labs will improve the situation in
further releases.
The shot must have square pixels to be used properly by Poser; it doesn't
understand pixel aspect ratios. So if you have a 720x480 DV source, say, you
need to resample it in SynthEyes, AfterEffects or something to 640x480. Also,
the shot has to have a frame rate of exactly 30 fps. This is a drag since normal
video is 29.97 fps, and Poser thinks it is 29.00 fps, and trouble ensues. One way
to get the frame rate conversion without actually mucking up any of the frames is
to store the shot out as a frame sequence, then read it back in to your favorite
tool as a 30 fps sequence. Then you can save the 640x480 or other square-pixel
size.
Note that you can start with a nice 720x480 29.97 DV shot, track it in
SynthEyes, convert it as above for Poser, do your poser animation, render a
sequence out of Poser, then composite it back into the original 720x480.
One other thing you need to establish at this time: exactly how many
frames there are in your shot. If the shot ranges from 0 to 100, there are 101
frames; from 10 to 223, there are 214.
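The frame-count arithmetic above is just an inclusive range:

```python
# Inclusive frame count: a range of 0..100 contains 101 frames.
def frame_count(first, last):
    return last - first + 1

print(frame_count(0, 100))   # 101
print(frame_count(10, 223))  # 214
```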
1. After completing tracking in SynthEyes, export using the Poser Python
exporter.
2. Start Poser.

3. Set the number of frames of animation, at bottom center of the Poser
interface, to the correct number of frames. It is essential that you do this
now, before running the Python script.
4. File/Run Python Script on the python script output from SynthEyes.
5. The Poser Dolly camera will be selected and have the SynthEyes camera
animation on it. There are little objects for each tracker, and also
SynthEyes boxes, cones, etc. are brought over into Poser.
Open Question: How to render out of Poser with the animated movie
background. The best approach appears to be to render against black with an
alpha channel, then composite over the original shot externally.

Shake
SynthEyes offers three specific exporters for Shake, plus one generic one:
1. MatchMove Node.
2. Tracker Node
3. Tracking File format
4. 3-D Export via the ―AfterFX via .ma‖ or Maya ASCII exports.
The first two formats (Sizzle export scripts) produce shake scripts (.shk files); the
third format is a text file. The fourth option produces Maya scene files that Shake
reads and builds into a scene using its 3-D camera.
We‘ll start with the simplest, the tracking file format. Select one tracker
and export with the Shake Tracking File Format, and you will have a track that
can be loaded into a Shake tracker using the load option. You can use this to
bring a track from SynthEyes into existing Shake tracking setups.
Building on this basis, #2, Tracker Node, exports one or more selected
trackers from SynthEyes to create a single Tracker Node within Shake. There are
some fine points to this. First, you will be asked whether you want to export the
solved 3-D positions, or the tracked 2-D positions. These values are similar, but
not the same. If you have a 3-D solution in SynthEyes, you can select the solved
3-D positions, and the export will be the "ideal" tracked (predicted) coordinates,
with less jitter than the plain 2-D coordinates.
Also, since you might be exporting from a PC to a Mac or Linux machine,
the image source file(s) may be named differently: perhaps
X:\shots1\shot1_#.tga on the PC, and
//macmachine/Users/tom/shots1/shot1_#.tga on the Mac. The Shake export script's
dialog box has two fields, PC Drive and Mac Drive, that you can set to
automatically translate the PC file name into the Mac file name, so that the
Shake script will work immediately. In this example, you would set PC Drive to
"X:\" and Mac Drive to "//macmachine/Users/tom/".
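The drive-field translation amounts to a prefix swap plus backslash conversion; a Python sketch of the idea (the export script's exact behavior may differ):

```python
# Translate a PC-style path to the Mac/Linux equivalent by swapping the
# drive prefix and flipping backslashes, mirroring PC Drive/Mac Drive.
def translate_path(path, pc_drive, mac_drive):
    if path.startswith(pc_drive):
        path = mac_drive + path[len(pc_drive):]
    return path.replace("\\", "/")

print(translate_path(r"X:\shots1\shot1_#.tga", "X:\\", "//macmachine/Users/tom/"))
```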
Finally, the MatchMove node exporter looks not for trackers to export, but
for SynthEyes planes! Each plane (created from the 3-D panel) is exported to
Shake by creating four artificial trackers (in Shake) at the corners of the plane.
The matchmove export lets you insert a layer at any arbitrary position within the

3-D environment calculated by SynthEyes. For example, you can insert a matte
painting into a scene at a location where there is nothing to track. You can use a
collection of planes, positioned in SynthEyes, to obtain much of the effect of a 3-
D camera. The matchmove node export also provides PC to Mac/Linux file name
translation.

trueSpace Directions:
Warning: trueSpace has sometimes had problems executing the exported
script correctly. Hopefully Caligari will fix this soon.

1. In SynthEyes, export to trueSpace Python.


2. Open trueSpace.
3. Right-click the Play button in the trueSpace animation controls.
4. Set the correct BaseRate/PlayRate in the animation parameters to match
your source shot.
5. Open the Script Editor.
6. From inside the Script Editor, Open/Assign the python script you created
within SynthEyes.
7. Click Play (Time On) in the Script Manager.
8. When the Play button turns off, close the ScriptManager.
9. Open the Object Info panel.
10. Verify that the SynthEyes camera is selected (usually Camera01).
11. Change the Perspective view to be View from Object.
12. Select the Camera01Screen.
13. Open the Material Editor (paint palette).
14. Right click on Color shaders button.
15. Click on (Caligari) texture map, sending it to the Material Editor color
shader.
16. Open the subchannels of the Material Editor (Color, Bump, Reflectance).
17. On the Color channel of the Material Editor, right click on the "Get Texture
Map" button and select your source shot.
18. Check the Anim box.
19. Click the Paint Object button on the Material Editor.
20. Click on File/Display Options and change the texture resolution to
512x512.
21. You may want to set up a render background to overlay animated objects
on the background, or you can use an external compositing program.
Make the Camera01Screen object invisible before rendering.
22. In trueSpace, you need to pay special attention to get the video playback
synchronized with rest of the animation, and to get the render aspect ratio
to match the original. For example, you must add the texture map while
you are at frame zero, and you should set the pixel aspect ratio to match
the original (SynthEyes's shot panel will tell you what it is).

Vue 5 Infinite
The export to Vue Infinite requires a fair number of manual steps pending
further Vue enhancements. But with a little practice, they should only take a
minute or two.
1. Export from SynthEyes using the Vue 5 Infinite setting. The options
can be left at their default settings unless desired. You can save the
python script produced into any convenient location.
2. Start Vue Infinite or do a File/New in it.
3. Select the Main Camera
4. On its properties, turn OFF "Always keep level"
5. Go to the animation menu, turn ON the auto-keyframe option.
6. Select the Python/Run python script menu item, select the script
exported from SynthEyes, and run it.
7. In the main camera view, select the "Camera01 Screen" object (or
the equivalent if the SynthEyes camera was renamed)
8. In the material preview, right-click, select Edit Material.
9. The material editor appears, select Advanced Material Editor if not
already.
10. Change the material name to flyover or whatever the image shot
name is.
11. Select the Colors tab.
12. Select "Mapped picture"
13. Click the left-arrow "Load" icon under the black bitmap preview
area
14. In the "Please select a picture to load" dialog, click the Browse File
icon at the bottom (a left arrow superimposed on a folder)
15. Select your image file in the Open Files dialog. If it is an image
sequence, select the first image, then shift-select the last.
16. On the material editor, under the bitmap preview area, click the
clap-board animation icon to bring up the Animated Texture
Options dialog
17. Set the frame rate to the correct value.
18. Turn on "Mirror Y"
19. Hit OK on the Animated Texture dialog
20. On the drop-down at top right of the Advanced Material Editor,
select a Mapping of Object-Parametric

21. Turn off "Cast shadows" and "Receive shadows"
22. Back down below, click the Highlights tab
23. Turn Highlight global intensity down to zero.
24. Click on the Effects tab
25. Turn Diffuse down to zero
26. Click the Ambient data-entry field and enter 400
27. Hit OK to close the Advanced Material Editor
28. Select the Animation/Display Timeline menu item (or hit F11)
29. If this is the first time you have imported from SynthEyes to Vue
Infinite, you must perform the following steps:
a. Select File/Options menu item.
b. Click the Display Options tab
c. Turn off "Clip objects under first horizontal plane in main
view only", otherwise you will not be able to see the
background image.
d. Turn off "Clip objects under first horizontal plane (ground /
water)"
e. Turn off "Stop camera going below clipping plane (ground /
water)" if needed by your camera motion.
f. Hit OK
30. Delete the "Ground" object
31. If you are importing lights from SynthEyes, you can delete the Sun
Light as well, otherwise, spin the Sun Light around to point at the
camera screen, so that the image can be seen in the preview
window.
32. You may have to move the time bar before the image appears. Vue
Infinite only shows the first image of the sequence, so you can
verify alignment at frame zero.
33. You will later want to disable the rendering of the trackers, or delete
them outright.
34. Depending on what you are doing, you may ultimately wish to
delete or disable the camera screen as well, for example, if you will
composite an actor in front of your Vue Infinite landscape.
35. The import is complete; you can start working in Vue Infinite. You
should probably save a copy of the main camera settings so
that you can have a scratch camera available as you prepare the
scene in Vue Infinite.

Vue 6 Infinite
1. Export from SynthEyes using the Vue 6 Infinite option, producing a
maxscript file.
2. Import the maxscript file in Vue 6 Infinite
3. Adjust the aspect ratio of the backdrop to the correct overall aspect ratio
for your shot. This is important since Vue assumes square pixels, and if
they aren't (for all DV, say), the camera match will be off badly.

Building Meshes from Tracker Positions
It can be useful to be able to build a mesh from the solved tracker
positions. Meshes can serve to catch or cast shadows, act as front-projection
targets, etc. in your compositing or animation package, and these applications
can be previewed within SynthEyes. The perspective window allows you to do
so. You may want to increase the mesh density with the Track menu‘s Add many
trackers dialog, rapidly creating additional trackers after an initial auto-track and
solve has been performed.
At any time, SynthEyes can have an Edit Mesh, which is different than a
normally-selected mesh object. The Edit Mesh has its vertices and facets
exposed for editing.
If, in the perspective view, you select a cylinder, for example, and click
Set as Edit Mesh on the right-click menu, you'll see the vertices. Right-click the
Lasso Vertices mode and lasso-select some vertices, then right-click Mesh
Operations/Delete selected faces, and you've knocked a hole in the cylinder.
Right-click the Navigate mode.

Example: Ground Reconstruction


Next, with the solved flyover_auto.sni shot open and the perspective
window open, right-click Lock to current camera (keyboard: L), click anywhere
to deselect everything, then right-click Set Edit Mesh and Mesh
Operations/Convert to Mesh. All the trackers now are vertices in a new edit
mesh. (If you had selected a group of trackers, only those trackers would have
been converted.) Rewind to the beginning of the shot (shift-A), and right-click
Mesh Operations/Triangulate. Right-click Unlock from camera. Click one of the
vertices (not trackers) near the center, then control-middle-drag to rotate around
the new mesh. Note that the triangulation occurs with respect to a particular point
of view; a top-down view is preferable to a side-on one which will probably have
an interdigitated structure rather than what you likely want.
Lock the view back to the camera. Click on the tracker mesh to select it.

Select the 3-D control panel and click Catch Shadows. Select Cylinder as
the object-creation type on the 3-D panel, and create a cylinder in the middle
of the mesh object (it will be created on the ground plane). You will see the
shadow on the tracker mesh. Use the cylinder's handles to drag it around and the
shadow will move across the mesh appropriately. For more fun, right-click Place
mode and move the cylinder around on the mesh.
In your 3-D application, you will probably want to subdivide the mesh to a
smoother form, unless you already have many trackers. A smoother mesh will
prevent shadows from showing sharp bends due to the underlying mesh.

Front Projection
Next, with the cylinder casting an interesting shadow on an irregular
surface, right-click Texturing/Rolling Front Projection. The mesh apparently
disappears, but the irregular shadow remains. This continues even if you scrub
through the shot.
In short, the image has been "front projected" onto the mesh, so that it
appears invisible. But, it continues to serve as a shadow catcher.
In this "Rolling Front Projection" mode, new U,V coordinates are being
calculated on each frame to match the camera angle, and the current image is
being projected, ensuring invisibility.
Alternatively, the "Frozen Front Projection" mode calculates U,V
coordinates only once, when the mode is applied. Furthermore, the image from
that frame continues to be applied for the rest of the frames as well. This kind of
configuration is often used for 3-D Fix-It applications where a good frame is used
to patch up some other ones, where a truck drives by, for example.
Because the image is projected onto a 3-D surface, some parallax can be
developed as the shot evolves, often hiding the essentially 2-D nature of the fix. If
the mesh geometry is accurate enough, this amounts to texture-mapping it with a
live frame.
Furthermore, the U,V coordinates of the mesh can be exported and used
in other animation software, along with the source-image frame as a texture, in
the rare event it does not support camera mapping.
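Conceptually, rolling front projection recomputes each vertex's U,V every frame by projecting the vertex through the camera. A simplified pinhole sketch with the camera at the origin looking down -Z (all parameters illustrative, not SynthEyes's actual math):

```python
import math

# Project a camera-space point to normalized 0..1 UV coordinates with a
# simple pinhole model; a point on the camera axis lands at (0.5, 0.5).
def front_project(x, y, z, fov_deg, aspect):
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal scale from FOV
    u = 0.5 + 0.5 * (f * x / -z)            # horizontal, -z is depth ahead
    v = 0.5 + 0.5 * (f * y / -z) * aspect   # vertical, scaled by aspect
    return u, v

print(front_project(0.0, 0.0, -10.0, 60.0, 4 / 3))
```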

Changing Camera Path


If you have a well-chosen grid of trackers, you may be able to fly another
camera along a similar camera path to the original, with the original imagery re-
projected onto the mesh, to produce a new view. Usually you will have to model
some parts of the scene fairly carefully, however.

Practical Details
In practice, you will want to exercise much finer control over the building of
the mesh. The mesh built from the flyover trackers winds up with a lot of
bumpiness due to the trees and sparsity of sampling. SynthEyes provides tools
for building models more selectively.
The convert-to-mesh and triangulate tools operate only on selected
trackers or vertices, respectively. Usually you will want to select only a subset of
the trackers to triangulate. After doing so, you may find that you want to take out
some facets and re-triangulate them differently to better reflect the actual world
geometry or your planned use.
You can accomplish that by deleting the offending facets (after selecting
them by selecting all their vertices), and then selectively re-triangulating.

Often an outlying tracker may need to be removed from the mesh, for
example, the top of a phone pole that creates a "tent" in an otherwise mostly flat
landscape. You can select that vertex, and right-click Remove and Repair.
Removed vertices are not deleted, to give you the opportunity to reconnect them.
Use the Delete Unused Vertices operation to finally remove them.
Long triangles cause display problems in all animation packages, as
interpolation across them does not work accurately. SynthEyes allows you to
subdivide facets by placing a vertex at center, and converting the facet to three
new ones, or subdivide the edges by putting a vertex at the center of each edge
and converting each facet to four new ones.
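The first of those subdivision schemes can be sketched in a few lines of Python: insert a vertex at the facet's center and replace the facet with three new ones:

```python
# Subdivide a triangle by inserting its centroid and replacing the
# facet with three new facets that share the new vertex.
def subdivide(a, b, c):
    centroid = tuple((p + q + r) / 3.0 for p, q, r in zip(a, b, c))
    return [(a, b, centroid), (b, c, centroid), (c, a, centroid)]

tris = subdivide((0, 0, 0), (3, 0, 0), (0, 3, 0))
print(len(tris))  # 3 facets sharing the centroid (1.0, 1.0, 0.0)
```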
Of course, there may not necessarily be a tracker where you need one to
accurately present the geometry. Even if you used auto-tracking, and the Add
many trackers dialog, you will probably want to add additional supervised
trackers for particular locations. Use Convert to Mesh to add them to the existing
edit mesh.
Also, you can add vertices directly using the Add Vertices tool, or move
them around with the move tool. Both of these rely on the grid to establish the
basic positioning, typically using the Grid menu's Align to Trackers/Vertices
option. You can then add vertices on the grid, move them along it, or move them
perpendicular to it by shift-dragging. You can move multiple vertices by lasso-
selecting them, or shift-clicking them from Move mode.
After we get into object tracking, you will see that you can use the mesh
construction process to generate starting points for object modeling efforts as
well.

Depth Maps
With a mesh constructed from the tracker positions, you can generate a
depth map or movie to feed to 3-D compositing applications.
Once you have completed tracking and created the mesh, open the
perspective window and begin creating a Preview Movie. Select the Depth
channel to be written and select an output file name and format, either an
OpenEXR or BMP file sequence (BMPs are OK on a Mac). Unless the output is
OpenEXR, you must turn off the RGB data.
Click Start, and the depth map sequence will be produced. Note that you
may need to manipulate it in your compositing application if that application
interprets the depth data differently.
Optimizing for Real-Time Playback
SynthEyes can be used as a RAM player for real-time playback of source
sequences, source with temporary inserts, or final renders. This section will
discuss how to best configure SynthEyes for this purpose.
Note that SynthEyes leaves incoming Cineon and DPX files in their raw
format, so that they can be undistorted and saved with maximum accuracy. If you
want to use SynthEyes as a RAM player, you should use the image preprocessor
to color-correct the images for proper display. You might use the low, mid,
gamma, and high level controls, or a color LUT.

Image Storage
First, you want to get the shot into RAM. Clearly, having a lot of RAM will
help. If you are using a 32-bit system (XP/Vista-32 or OS X), you can only cache
about 2.5 GB of imagery in RAM at a time, regardless of how much RAM is in
your system, due to the nature of 32-bit addressing. In SynthEyes-64, running on
XP/Vista-64, you can use your entire RAM, except for about 1.5 GB.
If your shot does not fit, you have two primary options: using the small
playback-range markers on the SynthEyes time bar to play back a limited range
of the shot at a time, or to reduce the amount of memory by down-sampling the
images in the SynthEyes image preprocessor (or maybe drop to black/white). If
you have 4K film or RED scans and are playing back on a 2K monitor, you might
as well down-sample by 2x anyway.
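A quick back-of-the-envelope calculation tells you whether a shot will fit in the cache. This sketch (a hypothetical helper, not part of SynthEyes) also shows why 2x down-sampling cuts memory to one quarter:

```python
def shot_ram_gb(frames, width, height, bytes_per_pixel=3, downsample=1):
    """Rough RAM needed to cache a shot, in gigabytes.

    Down-sampling by N reduces both width and height, so memory
    drops by N*N (2x down-sampling -> one quarter the RAM).
    """
    w, h = width // downsample, height // downsample
    return frames * w * h * bytes_per_pixel / 2**30

# A 300-frame 4K film scan at 8 bits per RGB channel:
full = shot_ram_gb(300, 4096, 3112)                # roughly 10.7 GB
half = shot_ram_gb(300, 4096, 3112, downsample=2)  # one quarter of that
```

At these sizes, a full-resolution 4K shot of any length overwhelms a 32-bit cache, while the 2x down-sampled version often fits comfortably.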
If you have a RAID array on your computer, SynthEyes’s sophisticated
image prefetch system should let you pull large sequences rapidly from disk.

Refresh Rate Optimization


You want SynthEyes to play back the images at as rapid a rate as
possible. On a PC, that usually means the Camera view, in normal mode, not
OpenGL. On a Mac, use the Camera View in OpenGL mode.
All the other items being displayed also take up time and affect the display
rate. From the Window menu, turn off “Show Top Time bar” and select “No
Panel.” On the View menu, adjust Show Trackers and Show 3-D points
depending on the situation.
Select the Camera view. It will now fill the entire viewport area, with only
the menu and toolbar at top, the status line showing playback rate at the bottom,
and a small margin on the left and right. You can further reduce the items
displayed by selecting Window/Floating Camera. (There is no margin-less
“full-screen” mode.)

Actual-Speed Playback
Once you have your shot playing back as rapidly as possible, you
probably want it to play at the desired rate, typically 24, 25, or 29.97 fps.

You can tell SynthEyes to play back at full speed, half speed, quarter
speed, or double actual speed using the items on the View menu.
SynthEyes does not change your monitor display rate. It achieves your
desired frame rate by playing frames as rapidly as possible, duplicating or
dropping frames as appropriate (much like a film projector double-exposes
frames). The faster the display rate, the more accurately the target frame rate
can be achieved, with less jitter.
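The duplicate/drop scheme can be sketched with simple integer math (an illustration, not SynthEyes internals): for the n-th display refresh, show source frame floor(n × fps / refresh rate). On a 60 Hz display, 24 fps footage naturally falls into the familiar 3:2 cadence:

```python
def frame_for_refresh(n, fps, refresh_hz):
    """Source frame to show on the n-th display refresh.

    When the display refreshes faster than fps, frames repeat
    (duplication); when it is slower, frames are skipped (dropping),
    much like a film projector double-exposing frames.
    """
    return (n * fps) // refresh_hz

# 24 fps footage on a 60 Hz display: frames repeat 3, 2, 3, 2, ...
shown = [frame_for_refresh(n, 24, 60) for n in range(10)]
# shown == [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
```

The faster the refresh rate relative to the footage rate, the finer this scheduling can be, which is why a quicker display rate yields less jitter.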
With the control panel hidden, you should use the space bar to start and
stop playback, and shift-A to rewind to the beginning of the shot.

Safe Areas
You can enable one or more safe-area overlays from the safe area
submenu of the View menu.

Troubleshooting
Sliding. This is what you see when an object appears to be moving,
instead of sitting stationary on a floor, for example. This is almost always a
placement error, not a tracking problem: the inserted object has not been
located in exactly the right spot. Often, an object is inserted an inch or two above a
floor. Be sure you have tracked the right spot: to determine floor level, track
marks on the floor, not tennis balls sitting on it, which are effectively an inch or
two higher. If you have to work from the tennis balls, set up the floor coordinate
system taking the ball radius into account, or place the object the corresponding
amount below the apparent floor.
Also, place trackers near the location of the inserted object whenever
possible.
Another common cause of sliding: a tracker that jumps from one spot to
another at some frame during the track.
“It lines up in SynthEyes, but not in XXX.” The export scripts do what they
can to ensure that everything lines up just as nicely in your post-tracking
application as in SynthEyes, but life is never simple. Preferences may
differ, maybe you’re integrating into an existing setup, maybe you
didn’t think hitting xxx would matter, etc. The main causes of this problem have
been a mangled field of view (especially when people worry about
focal length instead, and have the wrong back plate width), and a post-
tracking application that turns out to be using a slightly different timing for the
images: one frame earlier or later, 29.97 vs 30 fps, or with or without some
cropping.
“Camera01: No trackers, please fix or set camera to disabled.” You
have created a scene with more than one camera by opening a new shot into an
existing file, and that camera has no trackers. The message is 100% correct. You need to
select the original camera on the Shot menu, then Shot/Remove object.
“Can’t locate satisfactory initial frame” when solving. When the
Constrain checkbox is on (Solver panel), the constrained trackers need to be
active on the begin and end frames. Consequently, keeping Constrain off is
preferable. Alternatively, the shot may lack sufficient parallax. Try setting the
Solver panel’s Begin and/or End frames manually. For example, set the range to
the entire shot, or to a long run of frames with many trackers in common. However,
keep the range short enough that the camera motion from beginning to end stays
around 30 degrees maximum rotation about any axis.
“I tried Tripod mode, and now nothing works” and you get “Can’t locate
satisfactory initial frame” or another error message. Tripod mode turns all the
trackers to Far, since they provide no distance data in tripod mode. Select all
the trackers, and turn Far back off (from the coordinate system control panel).

Bad Solution, very small field of view. Sometimes the final solution will
be very small, with a small field of view. Often this means that there is a problem
with one or more trackers, such as a tracker that switches from one feature to a
different one, which then follows a different trajectory. It might also mean an
impossible set of constraints, or sometimes an incomplete set of rotation
constraints. You might also consider flipping on the Slow but sure box, or giving a
hint for a specific camera motion, such as Left or Up. Eliminate inconsistent
constraints as a possibility by turning off the Constrain checkbox.
Object Mode Track Looks Good, but Path is Huge. If you’ve got an
object mode track that looks good (the tracker points are right on the tracker
boxes) but the object path is very large and flying all over the place, usually you
haven’t set up the object’s coordinate system, so by default it is at the camera
position, far from the object itself. Select one tracker to be the object origin, and
use two or more additional ones to set up a coordinate system, as if it were a
normal camera track.
Master Reset Does Not Work. By design, the master reset does not
affect objects or cameras in Refine or Refine Tripod mode: they will have to be
set back to their primary mode anyway, and this prevents inadvertent resets.
Can’t open an image file or movie. Image file formats leave room for
interpretation, and from time to time a particular program may output an image in
a way that SynthEyes is not prepared to read. SynthEyes is intended for RGB
formats with 8 or more bits per channel. Legacy or black and white formats will
probably not read. If you find a file you think should read, but does not, please
forward it to SynthEyes support. Such problems are generally quick to rectify,
once the problematic file can be examined in detail. In the meantime, try a
different file format, or different save options, in the originating program, if
possible, or use a file format converter if available. Also, make sure you can read
the image in a different program, preferably not the one that created it: some
images that SynthEyes “couldn’t read” have turned out to be corrupted
previously.
Can’t delete a key on a tracker (i.e., by right-clicking in the tracker view
window, or right-clicking the Now button). If the tracker is set to automatically key
every 12 frames, and this is one of those keys, deleting it will work, but
SynthEyes will immediately add a new key! Usually you want to back up a few
frames and add a correct key; then you can delete or correct the original one. Or,
increase the auto-key setting. Also, you cannot delete a key if the tracker is
locked.

Crashes
By far the largest source of SynthEyes crashes is running your machine
out of memory. Large auto-tracked HD scenes can do that on 32-bit systems. If
you suspect that may be a problem, turn the queue length down to 10 on the shot
setup dialog when you open the shot (or by doing a Shot/Edit Shot). It is also a
good idea to re-open SynthEyes if you have auto-tracked the same shot several
times, or to turn down the undo setting, because the amount of data per undo
can be very large.
In the event that SynthEyes detects an internal error, it will pop up an
Imminent Crash dialog box asking you if you wish to save a crash file. You
should take a screen capture with Print Screen on your keyboard, then respond
Yes. SynthEyes will save the current file to a special crash location, then pop up
another dialog box telling you that location (within your Documents and
Settings folder).
You should then open a paint program such as Photoshop, Microsoft
Paint, Paint Shop Pro, etc, and paste in the screen capture. Save the image to a
file, then e-mail the screen capture, the crash save file, and a short description of
what you were doing right before the crash, to SynthEyes technical support for
diagnosis, so that the problem can be fixed in future releases. If you have
Microsoft’s Dr. Watson turned on, forwarding that file would also be helpful.
The crash save file is your SynthEyes scene, right before it began the
operation that resulted in the crash. You should often be able to continue using
this file, especially if the crash occurred during solving. It is conceivable that the
file might be corrupted, so if you recently had saved the file, you may wish to go
back to that file for safety.

Combining Automated and Supervised Tracking
It can be helpful to combine automated tracking with some supervised
trackers, especially when you would like to use particular features in the image to
define the coordinate system, to help the automated tracker with problematic
camera motions, to aid scene modeling, or to stabilize effects insertion at a
particular location.

Guide Trackers
Guide Trackers are supervised trackers, added before automated
tracking. Pre-existing trackers are automatically used by the automated tracking
system to re-register frames as they move. With this guidance, the automated
tracking system can accommodate more, or crazier, motions than it would
normally expect.
Unless the overall feature motion is very slow, you should always add
multiple guide trackers distributed throughout the image, so that at any location in
the image, the closest guide tracker has a similar motion. [The main exception: if
you have a jittery hand-held shot where, if it was stabilized, the image features
actually move rather slowly, you can use only a single guide tracker.]
Note: guide trackers are rarely necessary, and are processed differently
than in previous versions of SynthEyes.

Supervised Trackers, After Automated Tracking


You can easily add supervised trackers after running the automated
tracker. Create the trackers from the Tracker panel, adjust the coordinate system
settings as needed, then, on the Solver Panel, switch to Refine mode and hit Go!

Converting Automatic Trackers to Supervised Trackers


Suppose you want to take an automatically-generated tracker and modify
it by hand. You may wish to improve it: perhaps to extend it earlier or later in the
shot, or to patch up a few frames where it gets off track.

From the Tracking Control Panel, select the automatically-generated
tracker(s) you want to work on, and unlock them. This converts them to
supervised trackers and sets up a default search region for them.

You can also use the To Golden button on the Feature Control Panel
to turn selected trackers from automatic to supervised without unlocking them
(and without setting up a search region).
Sometimes, you may wish to convert a number of automatic trackers to
supervised, possibly add some additional trackers, and then get rid of all the
other automatically-generated trackers, leaving you with a well-controlled group
of supervised trackers. The Delete Leaden button on the Feature Control Panel
will delete all trackers that have not been converted to golden.
You can also use the Combine trackers item on the Track Menu to
combine a supervised tracker with an automatically-generated one, if they are
tracking the same feature.
The Track/Fine-tune Trackers menu item re-tracks supervised trackers, to
improve accuracy on some imagery.

Stabilization
In this section, we’ll go into SynthEyes’ stabilization system in depth, and
describe some of the nifty things that can be done with it. If we wanted, we could
have a single button “Stabilize this!” that would quickly and reliably do a bad job
almost all the time. If that’s what you’re looking for, there are some other
software packages that will be happy to oblige. In SynthEyes, we have provided
a rich toolset to get outstanding results in a wide variety of situations.
You might wonder why we’ve buried such a wonderful and significant
capability quite so far into the manual. The answer is simple: in the hopes that
you’ve actually read some of the manual, because effectively using the stabilizer
will require that you know a number of SynthEyes concepts, and how to use the
SynthEyes tracking capabilities.
If this is the first section of the manual that you’re reading, great, thanks
for reading this, but you’ll probably need to check out some of the other sections
too. At the least, you have to read the Stabilization quick-start.
Also, be sure to check the web site for the latest tutorials on stabilization.
We apologize in advance for some of the rant content of the following
sections, but it’s really in your best interest!

Why SynthEyes Has a Stabilizer


The simple and ordinary need for stabilization arises when you are
presented with a shot that is bouncing all over the place, and you need to clean it
up into a solid professional-looking shot. That may be all that is needed, or you
might need to track it and add 3-D effects also. Moving-camera shots can be
challenging to shoot, so having software stabilization can make life easier.
Or, you may have some film scans which are to be converted to HD or SD
TV resolution, and effects added.
People of all skill levels have been using a variety of ad-hoc approaches
to address these tasks, sometimes using software designed for this, and
sometimes using or abusing compositing software. Sometimes, presumably, this
all goes well. But many times it does not: a variety of problem shots have been
sent to SynthEyes tech support which are just plain bad. You can look at them
and see they have been stabilized, and not in a good way.
We have developed the SynthEyes stabilizer not only to stabilize shots,
but to try to ensure that it is done the right way.

How NOT to Stabilize


Though it is relatively easy to rig up a node-based compositor to shift
footage back and forth to cancel out a tracked motion, this creates a fundamental
problem:

Most imaging software, including you, expects the optic center of an
image to fall at the center of that image. Otherwise, it looks weird: the
fundamental camera geometry is broken. The optic center might also be called
the vanishing point, center of perspective, back focal point, or center of lens
distortion.
For example, think of shooting some footage out of the front of your car as
you drive down a highway. Now cut off the right quarter of all the images and
look at the sequence. It will be 4:3 footage, but it’s going to look strange: the
optic center is going to be off to the side.
If you combine off-center footage with additional rendered elements, they
will have the optic axis at their center, and combined with the different center of
the original footage, they will look even worse.
So when you stabilize by translating an image in 2-D (and usually zooming
a little), you’ve now got an optic center moving all over the place. Right at the
point you’ve stabilized, the image looks fine, but the corners will be flying all over
the place. It’s a very strange effect: it looks funny, and you can’t track it right. If
you don’t know what it is, you’ll look at it and think it looks funny, but not know
what has hit you.
Recommendation: if you are going to be adding effects to a shot, you
should ask to be the one to stabilize or pan/scan it also. We’ve given you the tool
to do it well, and avoid mishap. That’s always better than having someone else
mangle it, and having to explain later why the shot has problems, or why you
really need the original un-stabilized source by yesterday.

In-Camera Stabilization
Many cameras now feature built-in stabilization, using a variety of
operating principles. These stabilizers, while fine for shooting baby’s first steps,
may not be fine at all for visual effects work.
Electronic stabilization uses additional rows and columns of pixels, then
shifts the image in 2-D, just like the simple but flawed 2-D compositing approach.
These are clearly problematic.
One type of optical stabilizer apparently works by putting the camera
imaging CCD chip on a little platform with motors, zipping the camera chip
around rapidly so it catches the right photons. As amazing as this is, it is clearly
just the 2-D compositing approach.
Another optical stabilizer type adds a small moving lens in the middle of
the collection of simple lenses comprising the overall zoom lens. Most likely, the
result is equivalent to a 2-D shift in the image plane.
A third type uses prismatic elements at the front of the lens. This is more
likely to be equivalent to re-aiming the camera, and thus less hazardous to the
image geometry.

Doubtless additional types are in use and will appear, and it is difficult to
know their exact properties. Some stabilizers seem to have a tendency to
intermittently jump when confronted with smooth motions. One mitigating factor
for in-camera stabilizers, especially electronic, is that the total amount of offset
they can accommodate is small—the less they can correct, the less they can
mess up.
Recommendation: It is probably safest to keep camera stabilization off
when possible, and keep the shutter time (angle) short to avoid blur, except when
the amount of light is limited. Electronic stabilizers have trouble with limited light
so that type might have to be off anyway.

3-D Stabilization
To stabilize correctly, you need 3-D stabilization that performs “keystone
correction” (like a projector does), re-imaging the source at an angle. In effect,
your source image is projected onto a screen, then re-shot by a new camera
looking in a somewhat different direction with a smaller field of view. Using a new
camera keeps the optic center at the center of the image.
In order to do this correctly, you always have to know the field of view of
the original camera. Fortunately, SynthEyes can tell us that.
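For the mathematically curious: re-shooting a projected image with a purely rotated virtual camera is exactly a planar homography, which is why keystone correction works regardless of scene depth. A sketch of the idea (hypothetical helper functions, assuming a pinhole camera with the optic center at the image center; this is not SynthEyes code):

```python
import numpy as np

def intrinsics(fov_deg, width, height):
    """Pinhole intrinsics matrix with the optic center at the image center."""
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([[f, 0.0, width / 2.0],
                     [0.0, f, height / 2.0],
                     [0.0, 0.0, 1.0]])

def reaim_homography(K_src, K_dst, R):
    """Map source pixels to the re-aimed virtual camera's pixels.

    A pure rotation between the two cameras means the warp is an exact
    homography: no parallax, so the correction is geometrically valid
    for all scene depths at once.
    """
    return K_dst @ R @ np.linalg.inv(K_src)

# Identity rotation and identical cameras: the warp is the identity.
K = intrinsics(60.0, 1920, 1080)
H = reaim_homography(K, K, np.eye(3))
```

With a nonzero rotation and a narrower destination field of view, the same formula produces the tilted, cropped re-imaging described above.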

Stabilization Concepts
Point of Interest (POI). The point of interest is the fixed point that is being
stabilized. If you are pegging a shot, the point of interest is the one point on the
image that never moves.
POI Deltas (Adjust tab). These values allow you to intentionally move the
POI around, either to help reduce the amount of zoom required, or to achieve a
particular framing effect. If you create a rotation, the image rotates around the
POI.
Stabilization Track. This is roughly the path the POI took: a
direction in 3-D space, described by pan/tilt/roll angles, basically where the
camera (POI) was looking (except that the POI isn’t necessarily at the center of
the image).
Reference Track. This is the path in 3-D we want the POI to take. If the
shot is pegged, then this track is just a single set of values, repeated for the
duration of the shot.
Separate Field of View Track. The image preparation system has its own
field of view track. The image prep’s FOV will be larger than the main FOV, because
the image prep system sees the entire input image, while the main tracking and
solving works only on the smaller stabilized sub-window output by image prep.
Note that an image prep FOV is needed only for stabilization, not for pixel-level
adjustments, downsampling, etc. The Get Solver FOV button transfers the main
FOV track to the stabilizer.

Separate Distortion Track. Similarly, there is a separate lens distortion
track. The image prep’s distortion can be animated, while the main distortion
cannot. Either the image prep distortion or the main distortion should always be
zero; they should never both be nonzero simultaneously. The Get Solver Distort
button transfers the main distortion value (from solving or the Lens-panel
alignment lines) to the stabilizer, and begs you to let it clear the main distortion
value afterwards.
Stabilization Zoom. The output window can only be a portion of the size
of the input image. The more jiggle, the smaller the output portion must be, to be
sure that it does not run off the edge of the input (see the Padded mode of the
image prep window to see this in action). The zoom factor reflects the ratio of the
input and output sizes, and also what is happening to the size of a pixel. At a
zoom ratio of 1, the input and output windows and pixels are the same size. At a
zoom ratio of 2, the output is half the size of the input, and each incoming pixel
has to be stretched to become two pixels in the output, which will look fairly
blurry. Accordingly, you want to keep the zoom value down in the 1.1-1.3 region.
After an Auto-scale, you can see the required zoom on the Adjust panel.
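The relationship between jitter and zoom can be sketched with a little arithmetic (an illustrative simplification, not the exact computation SynthEyes performs):

```python
def required_zoom(max_shift_x, max_shift_y):
    """Smallest zoom keeping the output window inside the input image.

    max_shift_* are the worst-case stabilization offsets as a fraction
    of the frame size (0.05 = 5% of the width or height). The output
    window must shrink by twice the shift, since the image can run off
    either edge, and zoom is the input/output size ratio.
    """
    worst = max(max_shift_x, max_shift_y)
    return 1.0 / (1.0 - 2.0 * worst)

# 5% worst-case jitter -> roughly an 11% zoom (1.111):
z = required_zoom(0.05, 0.03)
```

This is why reducing the worst-case excursion of the POI, as described below under Minimizing Zoom, directly reduces the zoom and preserves image quality.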
Re-sampling. There’s nothing that says we have to produce the same
size image going out as coming in. The Output tab lets you create a different
output format, though you will have to consider what effect it has on image
quality. Re-sampling 3K down to HD sounds good; but re-sampling DV up to HD
will come out blurry because the original picture detail is not there.
Interpolation Filter. SynthEyes has to create new pixels “in-between” the
existing ones. It can do so with different kinds of filtering to prevent aliasing,
ranging from the default Bi-Linear to the most complex 3-Lanczos. The bi-linear
filter is fastest but produces the softest image. The Lanczos filters take longer,
but are sharper, although this can be a drawback if the image is noisy.
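For reference, the two kernel shapes can be written down directly. These are the standard textbook definitions of the Lanczos and tent (bi-linear) kernels, shown for illustration rather than as SynthEyes source code:

```python
import math

def lanczos(x, a=3):
    """Lanczos-a reconstruction kernel (a=3 corresponds to '3-Lanczos')."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def bilinear(x):
    """Triangle (tent) kernel used by bi-linear interpolation."""
    return max(0.0, 1.0 - abs(x))
```

Both kernels equal 1 at zero offset and 0 at every other integer offset, so they pass through the existing samples exactly; the Lanczos kernel's wider, oscillating support is what yields the sharper (and, on noisy footage, occasionally ringing) result.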
Tracker Paths. One or more trackers are combined to form the
stabilization track. The trackers’ 2-D paths follow the original footage. After
stabilization, they will not match the new stabilized footage. There is a button,
Apply to Trkers, that adjusts the tracker paths to match the new footage; but
again, they then match that particular footage, and they must be restored to
match the original footage (with Remove f/Trkers) before making any later
changes to the stabilization. If you mess up, you either have to return to an
earlier saved file, or re-track.

Overall Process
We’re ready to walk through the stabilization process. You may want to
refer to the Image Preprocessor Reference.
 Track the features required for stabilization: either a full auto-track,
supervised tracking of particular features to be stabilized, or a combination.

 If possible, solve the shot either for full 3-D or as a tripod shot, even if it is not
truly nodal. The resulting 3-D point locations will make the stabilization more
accurate, and it is the best way to get an accurate field of view.
 If you have not solved the shot, manually set the Lens FOV on the Image
Preprocessor’s Lens tab (not the main Lens panel) to the best available
value. If you do set up the main lens FOV, you can import it to the Lens tab.
 On the Stabilization tab, select a stabilization mode for translation and/or
rotation. This will build the stabilization track automatically if there isn’t one
already (as if the Get Tracks button was hit), and import the lens FOV if the
shot is solved.
 Adjust the frequency spinner as desired.
 Hit the Auto-Scale button to find the required stabilization zoom.
 Check the zoom on the Adjust tab; using the Padded view, make any
additional adjustment to the stabilization activity to minimize the required
zoom, or achieve desired shot framing.
 Output the shot. If only stabilized footage is required, you are done.
 Update the scene to use the new imagery, and either re-track or update the
trackers to account for the stabilization.
 Get a final 3-D or tripod solve and export to your animation or compositing
package for further effects work.
There are two main kinds of shots and stabilization for them: shots
focusing on a subject, which is to remain in the frame, and traveling shots, where
the content of the image changes as new features are revealed.

Stabilizing on a Subject
Often a shot focuses on a single subject, which we want to stabilize in the
frame, despite the shaky motion of the camera. Example shots of this type
include:
 The camera person walking towards a mark on the ground, to be
turned into a cliff edge for a reveal.
 A job site to receive a new building, shot from a helicopter orbiting
overhead.
 A camera car driving by a house, focusing on the house.
To stabilize these shots, you will identify or create several trackers in the
vicinity of the subject, and with them selected, select the Peg mode on the
Translation list on the Stabilize tab.
This will cause the point of interest to remain stationary in the image for
the duration of the shot.

You may also stabilize and peg the image rotation. Almost always, you will
want to stabilize rotation. It may or may not be pegged.
You may find it helpful to animate the stabilized position of the point of
interest, both to minimize the zoom required (see below) and to enliven a
shot somewhat.
Some car commercials are shot from a rig that shows both the car and the
surrounding countryside as the car drives: they look a bit surreal because the car
is completely stationary—having been pegged exactly in place. No real camera
rig is that perfect!

Stabilizing a Traveling Shot


Other shots do not have a single subject, but continue to show new
imagery. For example,
 A camera car, with the camera facing straight ahead
 A forward-facing camera in a helicopter flying over terrain
 A camera moving around the corner of a house to reveal the
backyard behind it
In such shots, there is no single feature to stabilize. Select the Filter mode
for the stabilization of translation and maybe rotation. The result is similar to the
stabilization done in-camera, though in SynthEyes you can control it and have
keystone correction.
When the stabilizer is filtering, the Cut Frequency spinner is active. Any
vibratory motion below that frequency (in cycles per second) is preserved, and
vibratory motion above that frequency is greatly reduced or eliminated.
You should adjust the spinner based on the type of motion present, and
the degree of stabilization required. A camera mounted on a car with a rigid
mount, such as a StickyPod, will have only higher-frequency residual vibration,
and a larger value can be used. A hand-held shot will often need a frequency
around 0.5 Hz to be smooth.
Note: When using filter-mode stabilization, the length of the shot matters.
If the shot is too short, it is not possible to accurately control the frequency and
distinguish between vibration and the desired motion, especially at the beginning
and end of the shot. Using a longer version of the take will allow more control,
even if much of the stabilized shot is cut after stabilization.
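Conceptually, filter-mode stabilization low-passes the camera path. This toy sketch uses a Gaussian smoother standing in for whatever filter SynthEyes actually applies, with an approximate sigma-to-cutoff mapping (both are assumptions for illustration). A one-frame spike is spread out and strongly attenuated, while slow drift below the cut frequency would pass through:

```python
import math

def smooth_path(path, fps, cut_hz):
    """Low-pass a 1-D camera path with a normalized Gaussian kernel.

    Vibration well above cut_hz is strongly attenuated; drift below it
    survives as the intended camera move. sigma is chosen so the
    kernel's rolloff falls near cut_hz (a rough approximation).
    """
    sigma = fps / (2.0 * math.pi * cut_hz)
    radius = int(3 * sigma) + 1
    weights = [math.exp(-0.5 * (i / sigma) ** 2)
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    out = []
    for n in range(len(path)):
        acc = 0.0
        for i, w in enumerate(weights, start=-radius):
            j = min(max(n + i, 0), len(path) - 1)  # clamp at shot ends
            acc += w * path[j]
        out.append(acc / total)
    return out

# A still path with a one-frame bump, filtered at 0.5 Hz for 24 fps footage:
bumpy = [0.0] * 24 + [1.0] + [0.0] * 24
calm = smooth_path(bumpy, fps=24.0, cut_hz=0.5)
```

The edge clamping in the sketch also hints at why short shots are problematic: near the ends there is not enough data to separate vibration from intended motion.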

Minimizing Zoom
The more zoom required to stabilize a shot, the less image quality will
result, which is clearly bad. Can we minimize the zoom, and maximize image
quality? Of course, and SynthEyes provides the controllability to do so.
Stabilizing a shot has considerable flexibility: the shot can be stable in lots
of different ways, with different amounts of zoom required. We want a shot that
everyone agrees is stable, but minimizes the effect on quality. Fortunately, we
have the benefit of foresight, so we can correct a problem in the middle of a shot,
anticipating it long before it occurs, and provide an apparently stable result.
Animating POI
The basic technique is to animate the position of the point-of-interest
within the frame. If the shot bumps left suddenly, there are fewer pixels available
on the left side of the point of interest to be able to maintain its relative position in
the output image, and a higher zoom will be required. If we have already moved
the point of interest to the left, fewer pixels are required, and less zoom is
required.
Earlier, in the Stabilization Quick Start, we remarked that the 28% zoom
factor obtained by animating the rotation could be reduced further. We’ll continue
that example here to show how. Re-do the quick start to completion, go to frame
178, with the Adjust tab open, in Padded display mode, with the make key button
turned on.
From the display, you can see that the red output-area rectangle is almost
at the edge of the image. Grab the purple point-of-interest crosshair, and drag
the red rectangle up into the middle of the image. Now everything is a lot safer. If
you switch to the Stabilize tab and hit Auto-Scale, the red rectangle enlarges:
there is less zoom, as the Adjust tab shows. Only 15% zoom is now required.
By dragging the POI/red rectangle, we reduced zoom. You can see that
what we did amounted to moving the POI. Hit Undo twice, and switch to the Final
view.
Drag the POI down to the left, until the Delta U/V values are approximately
0.045 and -0.035. Switch back to the Padded view, and you’ll see you’ve done
the same thing as before. The advantage of the padded view is that you can
more easily see what you are doing, though you can get a similar effect in the
Final view by increasing the margin to about 0.25, where you can see the dashed
outline of the source image.
If you close the Image Prep dialog and play the shot, you will see the
effect of moving the POI: a very stable shot, though the apparent subject
changes over time. It can make for a more interesting shot and more creative
decisions.
Too Much of a Good Thing?
To be most useful, you can scrub through your shot and look for the worst
frame, where the output rectangle has the most missing, and adjust the POI
position on that frame.
After you do that, there will be some other frame which is now the worst
frame. You can go and adjust that too, if you want. As you do this, the zoom
required will get less and less.

There is a downside: as you do this, you are creating more of the
shakiness you are trying to get rid of. If you keep going, you could get back to no
zoom required, but all the original shakiness, which is of course senseless.
Usually, you will only want to create two or three keys at most, unless the
shot is very long. But exactly where you stop is a creative decision based on the
allowable shakiness and quality impact.
Auto-Scale Capabilities
The auto-scale button can automate the adjustment process for you, as
controlled by the Animate listbox and Maximum auto-zoom settings.
With Animate set to Neither, Auto-scale will pick the smallest zoom
required to avoid missing pieces on the output image sequence, up to the
specified maximum value. If that maximum is reached, there will be missing
sections.
If you change the Animate setting to Translate, though, Auto-scale will
automatically add delta U/V keys, animating the POI position, any time the zoom
would have to exceed the maximum.
Rewind to the beginning of the shot, and control-right-click the Delta-U
spinner, clearing all the position keys.
Change the Animate setting to Translate, reduce the Maximum auto-zoom
to 1.1, then click Auto-Scale. SynthEyes adds several keys to achieve the
maximum 10% zoom. If you play back the sequence, you will see the shot
shifting around a bit—10% is probably too low given the amount of jitter in the
shot to begin with.
The auto-scale button can also animate the zoom track, if enabled with the
Animate setting. The result is equivalent to a zooming camera lens, and you
must be sure to note that in the main lens panel setting if you will 3-D solve the
shot later. This is probably only useful when there is a lot of resolution available
to begin with, and the point of interest approaches the boundary of the image at
the end of the shot.
Keep in mind that the Auto-scale functionality is relatively simple. By
considering the purpose of the shot as well as the nature of any problems in it,
you should often be able to do better.

Tweaking the Point of Interest


This is different from moving it!
When the selected trackers are combined to form the single overall
stabilization track, SynthEyes examines the weight of each tracker, as controlled
from the main Tracker panel.
This allows you to shift the position of the point-of-interest (POI) within a
group of trackers, which can be handy.


Suppose you want to stabilize at the location of a single tracker, but you
want to stabilize the rotation as well. With a single tracker, rotation cannot be
stabilized. If you select two trackers, you can stabilize the rotation, but without
further action, the point of interest will be sitting between the two trackers, not at
the location of the one you care about.
To fix this, select the desired POI tracker in the main viewport, and
increase its weight value to the maximum (currently 10). Then, select the other
tracker(s), and reduce the weight to the minimum (0.050). This will put the POI
very close to your main tracker.
If you play with the weights a bit, you can make the POI go anywhere
within a polygon formed by the trackers. But do not be surprised if the resulting
POI seems to be sliding on the image: the POI is really a 3-D location, and
usually the combination of the trackers will not be on the surface (unless they are
all in the same plane). If this is a problem for what you want to do, you should
create a supervised tracker at the desired POI location and use that instead.
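As a sketch of why the extreme weights work, the POI behaves roughly like a weighted average of the tracker positions (a 2-D simplification of what SynthEyes actually computes in 3-D; the coordinates below are made up):

```python
def weighted_poi(trackers):
    """Weighted average of 2-D tracker positions.

    trackers: list of (x, y, weight), weights in SynthEyes'
    0.050 to 10 range.
    """
    total = sum(w for _, _, w in trackers)
    x = sum(tx * w for tx, _, w in trackers) / total
    y = sum(ty * w for _, ty, w in trackers) / total
    return x, y

# One tracker at maximum weight, one at minimum: the POI lands
# almost exactly on the heavily weighted tracker.
x, y = weighted_poi([(0.2, 0.4, 10.0), (0.8, 0.9, 0.050)])
```

With a 10-to-0.050 weight ratio, the low-weight tracker contributes only about half a percent of the result, which is why the POI sits "very close to" the main tracker rather than exactly on it.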
If you have adjusted the weights, and later want to re-solve the scene, you
should set the weights back to 1.0 before solving. (Select them all then set the
weight to 1).

Resampling and Film to HDTV Pan/Scan Workflow


If you are working with filmed footage, often you will need to pull the actual
usable area from the footage: the scan is probably roughly 4:3, but the desired
final output is 16:9 or 1.85 or even 2.35, so only part of the filmed image will be
used. A director may select the desired portion to achieve a desired framing for
the shot. Part of the image may be vignetted and unusable. The image must be
cropped to pull out the usable portion of the image with the correct aspect ratio.
This cropping operation can be performed as the film is scanned, so that
only the desired framing is scanned; clearly this minimizes the scan time and disk
storage. But, there is an important reason to scan the entire frame instead.
The optic center must remain at the center of the image. If the scanning is
done without paying attention, it may be off center, and almost certainly will be if
the framing is driven by directorial considerations. If the entire frame is scanned,
or at least most of it, then you can use SynthEyes‘s stabilization software to
perform keystone correction, and produce properly centered footage.
As a secondary benefit, you can do pan and scan operations to stabilize
the shots, or achieve moving framing that would be difficult to do during
scanning. With the more complete scan, the final decision can be deferred or
changed later in production.
The Output tab on the Image Preparation controls resampling, allowing
you to output a different image format than the one coming in. The incoming
resolution should be at least as large as the output resolution, for example, a 3K
4:3 film scan for a 16:9 HDTV image at 1920x1080p. This will allow enough
latitude to pull out smaller subimages.


If you are resampling from a larger resolution to a smaller one, you should
use the Blur setting to minimize aliasing effects (moiré bands). You should
consider the effect of how much of the source image you are using before
blurring. If you have a zoom factor of 2 into a 3K shot, the effective pixel count
being used is only 1.5K, so you probably would not blur if you are producing
1920x1080p HD.
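The arithmetic from the example above, as a quick sketch (the exact 3K width of 3072 pixels is an assumption; scan widths vary):

```python
def effective_pixels(source_width, zoom):
    """Horizontal source pixels actually feeding the output after
    zooming in by `zoom`."""
    return source_width / zoom

used = effective_pixels(3072, 2.0)   # 3K scan with 2x zoom -> 1536.0
# Blur only when you are genuinely downsampling, i.e. when more
# source pixels feed the output than the output width can hold.
needs_blur = used > 1920
```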
Due to the nature of SynthEyes‘ integrated image preparation system, the
re-sampling, keystone correction, and lens un-distortion all occur simultaneously
in the same pass. This presents a vastly improved situation compared to a typical
node-based compositor, where the image will be resampled and degraded at
each stage.

Changing Shots, and Creating Motion in Stills


You can use the stabilization system to adjust framing of shots in post-
production, or to create motion from still images (the Ken Burns effect).
To use the stabilizing engine you have to be stabilizing, so simply
animating the Delta controls will not let you pan and scan without the following
trick. Delete any trackers, click the Get Tracks button, and then turn on the
Translation channel of the stabilizer. This turns on the stabilizer, making the
Delta channels work, without doing any actual stabilization.
You must enter a reasonable estimate of the lens field of view. If it is a
moving-camera or tripod-mode shot, you can track it first to determine the field of
view. Remember to delete the trackers before beginning the mock stabilization.
If you are working from a still, you can use the single-frame alignment tool
to determine the field of view. You will need to use a text editor to create an IFL
file that contains the desired number of copies of your original file name.
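If scripting is easier for you than a text editor, an IFL (image file list) is just one filename per line, repeated for as many frames as you want; a minimal sketch (the filenames here are placeholders):

```python
def write_ifl(ifl_path, image_name, frame_count):
    """Write an IFL that repeats one still image so it plays as a
    clip of the desired length."""
    with open(ifl_path, "w") as f:
        for _ in range(frame_count):
            f.write(image_name + "\n")

write_ifl("still.ifl", "myphoto.tif", 100)   # 100 identical frames
```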

Stabilization and Interlacing


Interlaced footage presents special problems for stabilization, because
jitter in the positioning between the two fields is equivalent to jitter in camera
position, which we‘re trying to remove. Because the two different fields are taken
at different points in time (1/30th or 1/25th of a second apart, regardless of shutter
time), it is impossible for man or machine to determine what exactly happened, in
general. Stabilizing interlaced footage will sacrifice a factor of two in vertical
resolution.
Best Approach: if at all possible, shoot progressive instead of interlace
footage. This is a good rule whenever you expect to add effects to a shot.
Fallback Approach: stabilize slow-moving interlaced shots as if they were
progressive. Stabilize rapidly-moving interlaced shots as interlaced.
To stabilize interlaced shots, SynthEyes stabilizes each sequence of fields
independently.


Note that within the image preparation subsystem, some animated tracks
are animated by the field, and some are animated by the frame.
Frame: levels, color/hue, distortion/scale, ROI
Field: FOV, cut frequency, Delta U/V, Delta Rot, Delta Zoom
When you are animating a frame-animated item on an interlaced shot, if
you set a key on one field (say 10), you will see the same key on the other field
(say 11). This simplifies the situation, at least on these items, if you change a
shot from interlaced to progressive or "yes" mode or back.
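One way to picture the frame-animated behavior: fields pair up two-per-frame, so a key set on either field of a pair lands on both. A tiny sketch of that pairing (an illustration of the described behavior, not SynthEyes internals):

```python
def frame_key_fields(field_index):
    """Both fields of the frame containing the given field index:
    a frame-animated key set on field 10 also appears on field 11."""
    frame = field_index // 2
    return (2 * frame, 2 * frame + 1)
```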

Avoid Slowdowns Due to Missing Keyframes


While you are working on stabilizing a shot, you will be re-fetching frames
from the source imagery fairly often, especially when you scrub through a shot to
check the stabilization. If the source imagery is a QuickTime or AVI that does not
have many (or any!) keyframes, random access into the shot will be slow, since
the codec will have to decompress all the frames from the last keyframe to get to
the one that is needed. This can require repeatedly decompressing the entire
shot. It is not a SynthEyes problem, or even specific to stabilizing, but is a
problem with the choice of codec settings.
If this happens (and it is not uncommon), you should save the movie as an
image sequence (with no stabilization), and Shot/Change Shot Images to that
version instead.
Alternatively, you may be able to address the situation using the Padded
display, turning the update mode to Neither, then scrubbing through the shot.

After Stabilizing
Once you‘ve finished stabilizing the shot, you should write it back out to
disk using the Save Sequence button on the Output tab. It is also possible to
save the sequence through the Perspective window‘s Preview Movie capability.
Each method has its advantages, but using the Save Sequence button will
be generally better for this purpose: it is faster; does less to the images; allows
you to write the 16 bit version; and allows you to write the alpha channel.
However, it does not overlay inserted test objects like the Preview Movie does.
You can use the stabilized footage you write for downstream applications
such as 3dsmax and Maya.
But before you export the camera path and trackers from SynthEyes, you
have a little more work to do. The tracker and camera paths in SynthEyes
correspond to the original footage, not the stabilized footage, and they are
substantially different. Once you close the Image Preparation dialog, you‘ll see
that the trackers are doing one thing, and the now-stable image doing something
else.


You should always save the stabilizing SynthEyes scene file at this point
for future use in the event of changes.
You can then do a File/New, open the stabilized footage, track it, then
export the 3-D scene matching the stabilized footage.
But… if you have already done a full 3-D track on the original footage, you
can save time.
Click the Apply to Trkers button on the Output tab. This will apply the
stabilization data to the existing trackers. When you close the Image Prep, the 2-
D tracker locations will line up correctly, though the 3-D X's will not yet. Go to the
solver panel, and re-solve the shot (Go!), and the 3-D positions and camera path
will line up correctly again. (If you really wanted to, you could probably use Seed
Points mode to speed up this re-solve.)
Important: if you later decide you want to change the stabilization
parameters without re-tracking, you must not have cleared the stabilizer. Hit the
Remove f/Trkers button BEFORE making any changes, to get back to the
original tracking data. Otherwise, if you Apply twice, or Remove after changes,
you will just create a mess.
Also, the Blip data is not changed by the Apply or Remove buttons, and it
is not possible to Peel any blip trails, which correspond to the original image
coordinates, after completing stabilization and hitting Apply. So you must either
do all peeling first; remove, peel, and reapply the stabilization; or retrack later if
necessary.

Flexible Workflows
Suppose you have written out a stabilized shot, and adjusted the tracker
positions to match the new shot. You can solve the shot, export it, and play
around with it in general. If you need to, you can pop the stabilization back off the
trackers, adjust the stabilization, fix the trackers back up, and re-solve, all without
going back to earlier scene files and thus losing later work. That‘s the kind of
flexibility we like.
There‘s only one slight drawback: each time you save and close the file,
then reopen it, you‘re going to have to wait while the image prep system
recomputes the stabilized image. That might be only a few seconds, or it might
be quite a while for a long film shot.
It‘s pretty stupid, when you consider that you‘ve already written the
complete stabilized shot to disk!
Approach 1: do a Shot/Change Shot Images to the saved stabilized shot,
and reset the image prep system from the Preset Manager. This will let you work
quickly from the saved version, but you must be sure to save this scene file
separately, in case you need to change the stabilization later for some reason.
And of course, going back to that saved file would mean losing later work.


Approach 2: Create an image prep preset ("stab") for the full stabilizer
settings. Create another image prep preset ("quick"), and reset it. Do the
Shot/Change Shot Images. Now you've got it both ways: fast loading, and if you
need to go back and change the stabilization, switch back to the first ("stab")
preset, remove the stabilization from the trackers, change the shot imagery back
to the original footage, then make your stabilization changes. You'll then need to
re-write the new stabilized footage, re-apply it to the trackers, etc.
Approach 1 is clearly simpler and should suffice for most simple situations.
But if you need the flexibility, Approach 2 will give it to you.

Rotoscoping and Alpha Channel Mattes
You may choose to use SynthEyes‘s rotoscoping and alpha channel matte
capabilities when you are using automatic tracking in the following situations:
 A portion of the image contains significant image features that don‘t
correspond to physical objects---such as reflections, sparkling, lens
flares, camera moiré patterns, burned-in timecode, etc,
 There are pesky actors walking around creating moving features,
 You want to track a moving object, but it doesn‘t cover the entire
frame,
 You want to track both a moving object and the background
(separately).
In these situations, the automatic tracker needs to be told, for each frame,
which parts of the image should be used to match-move the camera and each
object (and for the remainder, which portions of the image should be ignored
totally).
Hint: Often you can let the autotracker run, then manually delete the
unwanted trackers. This can be a lot quicker than setting up mattes. To help find
the undesirable trackers, turn on Tracker Trails on the Edit menu.
SynthEyes provides two methods to control where the autotracker tracks:
animated splines and alpha channel mattes. Both can be used in one shot. To
create the alpha channel mattes, you need to use an external compositing
program to create the matte, typically by some variation of painting it. If you‘ve no
idea what that last sentence said, you can skip the entire alpha channel
discussion and concentrate on animated splines, which do not require any other
programs.

Overall, and Rotoscope Panel

The Rotoscoping Panel controls the assignment of animated splines and
alpha-channel levels to cameras and objects. The next section will describe
how to set up splines and alpha channels, but for now, here are the rules for
using them.
The rotoscoping panel contains a list of splines. The bottom-most spline
that contains the blip wins. As you add new splines at the end of the list, they
override the ones you have previously added. Internally, SynthEyes searches the
spline list from bottom to top. You can think of the splines as being layered: the
top of the list is the back layer, the end of the list is at front and has priority.

There are two buttons, Move Up and Move Down, that let you
change the order of the splines.


A drop-down listbox, underneath the main spline list, lets you change the
camera or object to which a spline is assigned.

This listbox always contains a Garbage item. If you assign Garbage to a
spline, that spline is a garbage matte and any blips within it are ignored.
If a blip isn‘t covered by any splines, then the alpha channel determines to
which object the blip is assigned.
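Putting the rules together, the assignment logic can be sketched as follows (the spline representation and alpha lookup are simplified stand-ins, not SynthEyes data structures):

```python
def assign_blip(blip, splines, alpha_lookup):
    """Decide which object a blip belongs to, per the rules above:
    the bottom-most (last-listed) spline containing the blip wins;
    a 'Garbage' spline discards the blip; if no spline contains it,
    fall back to the alpha-channel assignment.

    splines: list of (contains_fn, object_name) in list order.
    alpha_lookup: fn(blip) -> object name, or None to ignore.
    """
    for contains, obj in reversed(splines):   # front layer first
        if contains(blip):
            return None if obj == "Garbage" else obj
    return alpha_lookup(blip)
```

For example, a full-frame camera spline with a garbage spline added after it sends blips inside the garbage region to nothing and everything else to the camera.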

Spline Workflow
When you create a shot, SynthEyes creates an initial static full-screen
rectangle spline that assigns all blips to the shot‘s camera. You might add
additional splines, for garbage matte areas or moving objects you want to track.
Or, you might delete the rectangle and add only a new animated spline, if you are
tracking a full-screen moving object.
Ideally, you should add splines before running the autotracker the first
time; that will be simplest. However, if you run the autotracker, then decide to
add or modify the splines (using the Roto panel), you can then use the Features
panel to create a new set of trackers:


 Delete the existing trackers using control-A and delete, or the
Delete Leaden button on the features panel,
 Click the Link Frames button, which updates the possible tracker
paths based on your modified splines. Don‘t worry, you will be
prompted for this if you forget in almost all cases.
 Click the Peel All button to make new trackers.
The separate Link step is required to accommodate workflows with
manual peeling using the Peel mode button. (You may also be prompted to Link
when entering Peel mode.)

Animated Splines
Animated splines are created and manipulated in the camera viewport
only while the rotoscope control panel is open. At the top of the rotoscope
panel, a chart shows what the left and right mouse buttons do, depending on the
state of the Shift key.
Each spline has a center handle, a rotate/scale handle, and three or more
vertex control handles. Splines can be animated on and off over the duration of
the shot, using the stop-light enable button.


Vertex handles can be either corners or smooth. Double-click the vertex
handle to toggle the type.
Each handle can be animated over time, by adjusting the handle to the
desired location while SynthEyes is at the desired frame, setting a key at that
frame. The handle turns red whenever it is keyed on that frame. In between keys,
a control handle follows a linear path. The rotospline keys are shown on the
timebar, and the "advance to key" buttons apply to the spline keys.

To create an animated spline, turn on the magic wand tool, go to the
spline's first frame and left-click the spline's desired center point. Then click on a
series of points around the edge of the region to be rotoscoped. Too many points
will make later animation more time consuming. You can switch back and forth
between smooth and corner vertex points by double-clicking as you create. After
you create the last desired vertex, right click to exit the mode.

You can also turn on and use the create-rectangle and create-circle
spline creation modes, which allow you to drag out the respective shape.
After creating a spline, go to the last frame, and drag the control points to
reposition them on the edge. Where possible, adjust the spline center and
rotation/scale handle to avoid having to adjust each control point. Then go to the
middle of the shot, and readjust. Go one quarter of the way in, readjust. Go to the
three quarter mark, readjust. Continue in this fashion, subdividing each unkeyed
section until the spline is already in the correct location when you arrive, which
generally won't take long. This approach is much more effective than proceeding
from beginning to end.
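The subdivision routine above can be generated mechanically; a small sketch of the frame-visiting order:

```python
def subdivision_order(first, last):
    """Frames to visit when keying a spline by successive halving:
    both ends first, then midpoints of ever-smaller unkeyed spans."""
    order = [first, last]
    spans = [(first, last)]
    while spans:
        a, b = spans.pop(0)
        mid = (a + b) // 2
        if mid in (a, b):        # span too small to subdivide
            continue
        order.append(mid)
        spans.extend([(a, mid), (mid, b)])
    return order
```

For a 0-100 frame shot this yields 0, 100, 50, 25, 75, then finer and finer subdivisions, matching the end/middle/quarters procedure described above; in practice you stop as soon as the spline no longer needs adjusting.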
You may find it helpful to create keys on all the control points whenever
you change any of them. This can make the spline animation more predictable in
some circumstances (or to suit your style). To do this, turn on the Key all CPs if
any checkbox on the roto panel.
Note that the splines don‘t have to be accurate. They are not being used
to matte the objects in and out of the shot, only to control blips which occur
relatively far apart.
Right-click a control point to remove a key for that frame. Shift-right-click
to remove the control point completely. Shift-left-click the curve to create a new
control point along the existing curve.
As you build up a collection of splines in the viewport, you may wish to
hide some or all of them using the Show this spline checkbox on the roto control
panel. The View menu contains an Only selected splines item; with it enabled,
only the spline selected in the roto panel‘s list will appear in the viewport.


From Tracker to Control Point


Suppose the shot is from a helicopter circling a highway interchange you
need to track, and there is a large truck driving around. You want to put a
garbage matte around it before autotracking. If the helicopter is bouncing around
a bit and only loosely locked onto the interchange, you might have to add a fair
number of keys to the spline for the truck.
Alternatively, you could track the truck and import its path into the spline,
using the Import Tracker to CP mode of the rotoscoping panel.
To do this, begin by adding a supervised tracker for the truck. At the start
of the shot, create a rough spline around the truck, with its initial center point
located at the tracker. Turn on Import Tracker to CP, select the tracker, then click
on the center control point of the spline. The tracker‘s path will be imported to the
spline, and it will follow the truck through the shot. You can animate the outline of
the spline as needed, and you‘re done.
If the truck is a long 18-wheeler, and you‘ve tracked the cab, say, the back
end of the truck may point in different directions in the shot, and the whole truck
may change in size as well.
You might simplify animating the truck‘s outline with the next wrinkle: track
something on the back end of the truck as well. Before animating the truck‘s
outline at all, import that second tracker‘s path onto the rotation/scale control
point. Now your spline will automatically swivel and expand to track the truck
outline.
You may still need to add some animation to the outline control points of
the truck for fine tuning. If there is an exact corner that can be tracked, you can
add a tracker for it, and import the tracker‘s path directly onto spline‘s individual
control points.
The tracker import capability gives you a very flexible way to set up
your splines, with a little thought. Here are a few more details. The import takes
place when you click on the spline control point. Any subsequent changes to the
tracker are not "live." If you need them, you should import the path again. The
importer creates spline keys only where the tracker is valid. So if the tracker is
occluded by an actor for a few frames, there will be no spline keys there, and the
spline‘s linear control-point interpolation will automatically fill the gap. Or, you can
add some more keys of your own. You'll also want to add some keys if your
object goes off the edge of the screen, to continue its motion.
Finally, the trackers you use to help animate the spline are not special.
You can use them to help solve the scene, if they will help (often they will not), or
you can delete them or change them into zero-weighted trackers (ZWTs) so that
they do not affect the camera solution. And you should turn off their Exportable
flag on the Coordinate System panel.


Writing Alpha Mattes from Roto Splines


If you have carefully constructed some roto splines, you can export them
to other compositing programs using the image preprocessor. Select an output
format that supports alpha channels, and turn on the alpha channel output. If the
source does not contain an alpha channel, the roto spline data will be rendered
as alpha instead. The green-screen key will be combined in as well, if one is
configured.
You can also output an RGB version of the roto information, even for
formats that don't support alpha channels, by turning off the RGB checkbox in
the save-sequence settings, then turning on the alpha channel output checkbox.
The data will automatically be converted from an alpha channel to RGB.
In a complex object-tracking setup, you can output a mask showing the
region for each object, by having that object active in the main user interface
when you render the output.

Using Alpha Mattes


SynthEyes can use an alpha channel painted into your shot to determine
which image areas correspond to which object or the camera. The alpha channel
is a fourth channel (in addition to Red, Green, and Blue) for each pixel in your
image. You will need an external program, typically a compositor, to create such an
alpha channel. Plus, you will need to store the shot as sequenced DPX,
OpenEXR, SGI, TARGA, or TIFF images, as these formats accommodate an
alpha channel.
Suppose you wish to have a camera track ignore a portion of the images
with a "garbage matte." Create the matte with the alpha value of 255 (1.0, white)
for the areas to be tracked, and 0 (0.0, black) for the areas to be ignored. You‘ll
need to do this for every frame in the shot, which is why the features of a good
compositing program can be helpful. [Note: if a shot lacks an alpha channel,
SynthEyes creates a default channel that is black(0) for all hard black pixels
(R=G=B=0), and white(255) for all other pixels.]
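The default-alpha rule in the note can be sketched directly:

```python
def default_alpha(rgb_pixels):
    """Default alpha synthesized when a shot has no alpha channel,
    per the note above: 0 for hard-black pixels, 255 otherwise."""
    return [0 if (r == 0 and g == 0 and b == 0) else 255
            for r, g, b in rgb_pixels]
```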
You can make sure the alpha channel is correct in SynthEyes after you
open the shot by temporarily changing the Camera View Type on the Advanced
Feature Control dialog (launched from the Feature Panel) to Alpha, or
using the Alpha channel selection in the Image Preprocessing subsystem.

Next, on the Rotoscoping panel, delete the default full-size-rectangular
spline. This is very important, because otherwise this spline will
assign all blips to its designated object. The alpha channel is used only when a
blip is not contained in any spline!


Change the Shot Alpha Levels spinner to 2, because there are two
potential values: zero and one. This setting affects the shot (and consequently all
the objects and the camera attached to it).
Change the Object Alpha Value spinner to 255. Any blip in an area with
this alpha value will be assigned to the camera; other blips will be ignored. This
spinner sets the alpha value for the currently-active object only.
If you are tracking the camera and a moving object along with a garbage
matte simultaneously, you would create the alpha channel with three levels: 0,
garbage; 128, camera; 255, object. Note that this order isn‘t important, only
consistency.
After creating the matte, you would set the Shot Alpha Levels to 3. Then
switch to the Camera object on the Shot menu and set the Object Alpha Value to
128. Finally, switch to the moving object on the Shot menu, and set the Object
Alpha Value to 255.
Note that the Shot Alpha Levels setting controls only the tolerance
permitted in the alpha level when making an assignment, so that other nearby
alpha values that might be incidentally generated by your rotoscoping software
will still be assigned correctly. If you set Shot Alpha Levels to 17, the nominal
alpha values would be 0, 16, 32, … 255, and you could use any 3 of them if
that was all you needed.
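A sketch of how the nominal levels and tolerance could work (an illustration of the behavior described above, not SynthEyes' exact rounding):

```python
def nominal_levels(shot_alpha_levels):
    """Evenly spaced nominal alpha values for the given level count:
    2 -> [0, 255], 3 -> [0, 128, 255], 17 -> 0, 16, 32, ... 255."""
    n = shot_alpha_levels
    return [round(i * 255 / (n - 1)) for i in range(n)]

def classify(alpha, shot_alpha_levels):
    """Snap a pixel's alpha to the nearest nominal level, so incidental
    in-between values produced by rotoscoping software still assign
    correctly."""
    return min(nominal_levels(shot_alpha_levels),
               key=lambda v: abs(v - alpha))
```

With Shot Alpha Levels at 3, an incidental alpha of 120 still classifies as the 128 (camera) level rather than being dropped.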

Object Tracking
Here‘s how to do an object-tracking shot, using the example shot
lazysue.avi, which shows a kitchen storage tray spinning (called a
Lazy Susan in the U.S. for some reason). This shot provides a number of
educational opportunities. It can be tracked either automatically or under manual
supervision, so both will be described.
The basic point of object tracking is that the shot contains an object whose
motion is to be determined so that effects can be added. The camera might also
be moving; that motion might also be determined if possible, or the object‘s
motion can be determined with respect to the moving camera, without concern
for the camera‘s actual motion.
The object being tracked must exhibit perspective effects during the shot.
If the object occupies only a small portion of the image, this will be unlikely. A film
or HD source will help provide enough accuracy for perspective shifts to be
detected.
For object-tracking, all the features being tracked must remain rigidly
positioned with respect to one another. For example, if a head is to be tracked,
feature points must be selected that are away from the mouth or eyes, which
move with respect to one another. If the expression of a face is to be tracked for
character animation, see the section on Motion Capture.
Moving-object tracking is substantially simpler than motion capture, and
requires only a single shot and no special on-set preparation during shooting.

Automatic Tracking
 Open the lazysue.avi shot, using the default settings.

 On the Solver panel, set the camera's solving mode to Disabled.


 On the Shot menu, select Add Moving Object. You will see the object at the
origin as a diamond-shaped null object.

 Switch to the Roto panel , with the camera viewport selected.


 Scrub through the shot to familiarize yourself with it, then rewind back to the
beginning.

 Click the create-spline (magic wand) button on the Roto panel.


 Click roughly in the center of the image to establish the center point.
 Click counterclockwise about the moving region of the shot, inset somewhat
from the stationary portion of the cabinetry and inset from the bottom edge of
the tray. Right-click after the last point. [The shape is shown below.]


 Click the create-spline (magic wand) button again to turn it off.


 Double-click the vertices as necessary to change them to corners.
 In the spline list on the Roto panel, select Spline1 and hit the delete key.
 On the object setting underneath the spline list, change the object setting
from Garbage to Object01. Your screen should look something like this:

 Go to the Feature Panel .


 Change the Motion Profile to Gentle Motion.
 Hit Blips all frames.
 Hit Peel All.
 Go to the end of the shot.
 Verify that the five dots on the flat floor of the lazy susan have associated
trackers: a green diamond on them.
 If you need to add a tracker to a tracking mark, turn on the Peel button on the
Feature panel. Scrub around to locate a long track on each untracked spot,
then click on the small blip to convert it to a tracker. Turn off Peel mode when
you are done.

 Switch to the Coordinate System Panel .


 Go to frame 65.
 Change the tracker on the "floor" that is closest to the central axis to be the
origin.
 Set the front center floor tracker to be a Lock Point, locked to 10,0,0.
 Set the front right tracker to XY Plane (or XZ plane for a Y-Up axis mode).

 Switch to the Solver Panel .


 Make sure the Constrain checkbox is off.
 Hit Go!.
 Go to the After Tracking section, below.

Supervised Tracking
The shot is best tracked backwards: the trackers can start from the easiest
spots, and get tracked as long as possible into the more difficult portion at the
beginning of the shot. Tracking backwards is suggested for features that are
coming towards the camera, for example, shots from a vehicle.
 Open the lazysue.avi shot, using the default settings.

 On the Solver panel, set the camera's solving mode to Disabled.


 On the Shot menu, select Add Moving Object. You will see the object at the
origin as a diamond-shaped null object.

 On the Tracker panel, turn on Create. The trackers will be associated
with the moving object, not the camera.
 Switch to the Camera viewport, to bring the image full frame.

 Click the To End button on the play bar.


 Click the Playback direction button to set it to backwards.
 Create a tracker on one of the dots on the shelf. Decrease the tracker size to
approximately 0.015, and increase the horizontal search size to 0.03.
 Create a tracker on each spot on the shelf. Track each as far as possible
back to the beginning of the shot. Use the tracker interior view to scroll
through the frames and reposition as needed. As the spots go into the
shadow, you can continue to track them, using the tracker gain spinner. When
a tracker becomes untrackable, turn off Enable, and Lock the tracker.
Right-click the spinner to reset it for the next tracker.
 Continue adding trackers from the end of the shot roughly as follows:


 Begin tracking from the beginning, by rewinding, changing the playback
direction to forward, then adding additional trackers. You will need to add
these additional trackers to achieve coverage early in the shot, when the
primary region of interest is still blocked by the large storage container.


 Switch to the graph editor in graph mode, sort by error mode.
Use the mouse to sweep through and select the different trackers. Or, select
Sort by error on the main View menu, and use the up and down arrows on the
keyboard to sequence through the trackers. Look for spikes in the tracker
velocity curves (solid red and green). Switch back to the camera view as
needed for remedial work.

 Switch to the Coordinate System control panel and camera viewport, at
the end of the shot.
 Select the tracker at center back on the surface of the shelf; change it to an
Origin lock.
 Select the tracker a bottom left on the shelf, change it to Lock Point with
coordinate X=10.
 Select the tracker at front right; change it to an On XY Plane lock (or On XZ if
you use Y-axis up for Maya or Lightwave).

 Switch to the Solver control Panel .


 Switch to the Quad view; zoom back out on the Camera viewport.
 Hit Go! After solving completes in a few seconds, hit OK.
 Continue to the After Tracking section, below.

After Tracking

• Switch to the 3-D Objects panel, with the Quad viewport layout selected.
• Click the World button, changing it to Object.
• Turn on the Magic Wand tool and select the Cone object.
• In the top view, draw a cone in the top-right quadrant, just above and right of the diamond-shaped object marker.
• Hint: it can be easier to adjust the cone's position in the Perspective view, locked to the camera, with View/Local coordinate handles turned on.
• Scrub the timeline to see the inserted cone. In your animation package, a small amount of camera-mapped stand-in geometry would be used to make the large container occlude the inserted cone and reveal correctly as the shelf spins.
• Advanced techniques: use Coalesce Trackers and Clean Up Trackers.


Difficult Situations
When an object occupies only a relatively small portion of the frame, there are few trackers, and/or the object moves so that trackers frequently go out of view, object tracking can be difficult. You may wind up creating a situation where the mathematically best solution does not correspond to reality, but to some impossible tracker or camera configuration. It is an example of the old adage, "Garbage In, Garbage Out" (please don't be offended, gentle reader).
Goosing the Solver
Small changes in the initial configuration may allow the solver to,
essentially randomly, pick a more favorable solution. Be sure to use the Slow
but sure checkbox and all the different possibilities of the Rough camera motion
selection, both on the solver panel. Trying a variety of manually-selected seed
frames is also suggested. Small changes in trackers, or adding additional
trackers, especially those at different depths, may also be helpful in obtaining the
desired solution.
Inverting Perspective
Sometimes, in a low-perspective object track, you may see a situation where the object model and motion seem almost correct, except that some things that should be far away appear too close (and vice versa), and the object rotates the wrong way. This is a result of low/no/conflicting perspective information. If you cannot improve the trackers or convince the solver to arbitrarily pick a different solution, read on.
The Invert Perspective script on the Script menu will invert the object and
hopefully allow you to recover from this situation quickly. It flips the solved
trackers about their center of gravity, on the current frame, changes them to seed
trackers (this will mess up any coordinate system), and changes the solving
mode to From Seed Points. You can then re-solve the scene with this solution,
and hopefully get an updated, and better, path and object points. You should
then switch back to Refine mode for further tracking work!
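The flip the script performs amounts to a point reflection of the solved tracker positions through their center of gravity. As a geometric illustration only (a hypothetical standalone sketch, not the actual SynthEyes script):

```python
# Illustrative sketch of the "invert perspective" flip: reflect each
# solved 3-D tracker position through the centroid of all positions.
# (Hypothetical helper for clarity, not actual SynthEyes code.)

def invert_about_centroid(points):
    """points: list of (x, y, z) tuples; returns the reflected list."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Reflect through the centroid: p' = 2*centroid - p
    return [(2 * cx - x, 2 * cy - y, 2 * cz - z) for (x, y, z) in points]

flipped = invert_about_centroid([(0, 0, 0), (2, 0, 0), (0, 4, 0)])
```

Note that applying the reflection twice returns the original positions, which is why re-solving from the flipped seeds can recover the correct-handed solution.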
Using a 3-D Model
You might also encounter situations where you have a 3-D model of the object to be tracked. If SynthEyes knows the 3-D coordinates of each tracker, or at least 6-10 of them, it will be much easier to get a successful 3-D track. You can import the 3-D model into SynthEyes, then use the Perspective window's Place mode to locate the seed point of each tracker on the mesh at the correct location. Or, carefully position the mesh to match one of the frames in the shot, select the group of trackers over top of it, and use the Track menu's Drop onto mesh to place all the seed points onto the mesh at once.

Turn on the Seed checkbox for each (if necessary; usually done automatically), and switch to the From Seed Points solving mode on the solver panel.


If you have determined the 3-D coordinates of your trackers externally (such as from a survey or animation package), construct a small text file containing the x, y, and z coordinates, followed by the tracker name. Use File/Import/Tracker Locations to set these coordinates as the seed locations, then use the From Seed Points solver option. If the named tracker doesn't exist, it will be created (using the defaults from the Tracker Panel, if open), so you can import your particular points first, and track them second, if desired, though tracking first is usually easier.
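For example, a tracker-locations file following that description (one tracker per line: x, y, z, then the tracker name; the names here are hypothetical, and the exact delimiting accepted should be verified against your SynthEyes version) might look like:

```
10.0   0.0    0.0    ShelfLeft
 0.0  50.0    0.0    ShelfBack
 5.5  12.0    3.25   ConeFrontRight
```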
The seed points will help SynthEyes select the desired (though suboptimal) starting configuration. In extreme situations, you may want to lock the trackers to these coordinates, which can be achieved easily by setting all the imported trackers to Lock Points on the Coordinate System panel. To make this easy, all the affected trackers are selected after an Import/Tracker Locations operation.

Joint Camera and Object Tracking
If both the camera and an object are moving in a shot, you can track each
of them, solve them simultaneously, and produce a scene with the camera and
object moving around in the 3-D scene. With high-quality source, several objects
might be tracked simultaneously with the camera. First, you must set up
rotoscoping or an alpha channel to distinguish the object from the background.
Or, perform supervised tracking on both. Either way, you'll wind up with one set
of trackers for the object, and a different set for the background (camera).
You must set up a complete set of constraints (position locks, orientation, and distance/scale) for both the camera and object (a set for each object, if
there are several). Frequently, users ask why a second set of constraints for the
object is required, when it seems that the camera (background) constraints
should be enough.
However, recall a common film-making technique: shooting an actor, who
is close to the camera, in front of a set that is much further away. Presto, a giant
among mere mortals! Or, in reverse, a sequel featuring yet another group of
shrunken relatives, name the variety. The reason this works is that it is
impossible to visually tell the difference between a close-up small object moving
around slightly, and a larger object moving around dramatically, a greater
distance away. This is true whether judged by a person, a machine, or any mathematical means.
This applies independently to the background of a set, and to each object
moving around in the set. Each might be large and far, or close and small. Each
one requires its own distance constraint, one way or another.
The object's position and orientation constraints are necessary for a different reason: they define the object's local coordinate system. When you construct a mesh in your favorite animation package, you can move it around with respect to a local center point, about which the model will rotate when you later begin to animate it. In SynthEyes, the object's coordinate constraints define this local coordinate system.
Despite the veracity of the above, there are ways that the relative
positioning of objects moving around in a scene can be discerned: shadows of an
object, improper actor sightlines, occasions where a moving object comes in
contact with the background set, or when the moving object temporarily stops.
These are cues that the audience can intellectually deduce, though the images themselves do not require them. Indeed, these assumptions are systematically violated by savvy filmmakers for cinematic effect.
However, SynthEyes is neither smart/stupid enough to make assumptions,
nor to know when they have been violated. Consequently, it must be instructed
how to align and size the scenes in the most useful fashion.
The alignment of the camera and object coordinate systems can be
determined independently, using the usual kinds of setups for each.


The relative sizing for camera and object must be considered more
carefully when the two must interact, for example, to cast shadows from the
object onto a stationary object.
When both camera and object move and must be tracked, it is a good idea
to take on-set measurements between trackable points on the object and
background. These measurements can be used as distance constraints to obtain
the correct relative scaling.
If you do not have both scales, you will need to fix either the camera or
object scale, then systematically vary the other scale until the relationship
between the two looks correct.

Multi-Shot Tracking
SynthEyes includes the powerful capability of allowing multiple shots to be loaded simultaneously, tracked, linked together, and solved jointly to find the best tracker, camera, and (if present) object positions. With this capability, you can use an easily-trackable "overview" shot to nail down basic locations for trackable features, then track a real shot with a narrow field of view, few trackable features, or other complications, using the first shot as a guide. Or, you might use a left and right camera shot to track a shot-in-3-D feature. If you don't mind some large scene files, you can load all the shots from a given set into a single scene file, and track them together to a common set of points, so that each shot can share the same common 3-D geometry for the set.
In this section, we'll demonstrate how to use a collection of digital stills as a road-map for a difficult-to-track shot: in this case, a tripod shot for which no 3-D recovery would otherwise be possible. A scenario such as this requires supervised tracking, because of the scatter-shot nature of the stills. The tripod shot could be automatically tracked, but there's not much point to that: you must already perform supervised tracking to match the stills, and there's not much gained by adding a lot more trackers to a tripod shot. It will take around 2 hours to perform this example, which is intentionally complex in order to illustrate a more involved scenario.
The required files for this example can be found at
http://www.ssontech.com/download.htm: both land2dv.avi and DCP_103x.zip are
required. The zip file contains a series of digital stills, and should be unpacked
into the same working folder as the AVI. You can also download multix.zip,
which contains the .sni scene files for reference.
Prerequisites: you need to be able to do supervised tracking and handle coordinate system setup for this example; this section does not contain a beginner-level description.
Start with the digital stills, which are 9 pictures taken with a digital still
camera, each 2160 by 1440. Start SynthEyes and do a File/New. Select
DCP_1031.JPG. Use the default settings, including an aspect ratio of 1.5.


Create trackers for each of the balls: six at the top of the poles, six near
ground level on top of the cones. Create each tracker, and track it through the
entire (nine-frame) shot. Because each camera position is much different from its predecessor, you will have to manually position the tracker in each frame. It will
be helpful to turn on the Track/Hand-held Sticky menu setting. You can use
control-drag to make final positioning easier on the high-resolution still. Create
the trackers in a consistent order, for example, from back left to front left, then
back right to front right. After completing each track, Lock the tracker.
The manual tracking stage will take around an hour. The resulting file is
available as multi1.sni.
Set up a coordinate system using the ground-level (cone) trackers. Set the
front-left tracker as the Origin, the back-left tracker as a Lock Point at
X=0,Y=50,Z=0, and the front-right tracker as an XY Plane tracker.

You can solve for this shot now: switch to the Solver panel and hit Go! You should obtain a satisfactory solution for the ball locations, and a rather erratic and spread-out camera path, since the camera was walked from place to place. (multi2.sni)
It is time for the second shot. On the Shot menu, select Add Shot (or
File/Import/Shot). Select the land2dv.avi shot. Set Interlacing to No; the shot was taken with a Canon Optura Pi in progressive scan mode.

Bring the camera view full-screen, go to the tracker panel, and begin tracking the same ball positions in this shot with bright-spot trackers. Set the Key spinner to 8, as the exposure ramps substantially during the shot. The balls provide low contrast, so some trackers are easiest to control from within the tracker view window on the tracker panel. The back-right ground-level ball is occluded by the front-left above-ground ball, so you do not have to track the back-right ball. It will be easiest to create the trackers in the same order as in the first shot. (multi3.sni)
Next, create links between the two sets of trackers, to tell SynthEyes which trackers were tracking the same feature. You will need a bare minimum of six (6) links between the shots. Switch to the coordinate system panel, and the Quad view. Move far enough into the shot that all trackers are in-frame.

Camera/Viewport Matching
To assign links, select a tracker from the AVI in the camera view. Go to the top view and zoom in to find the matching 3-D point from the first shot, and ALT-click it (Mac: Command-click). Select the next tracker in the camera view, and ALT-click the corresponding point in the Top view; repeat until all are assigned. If you created the trackers consistently, you can sequence through them in order.

Camera/Perspective View Matching
You can display both shots simultaneously using a perspective view and
the camera view. Use the Camera & Perspective viewport layout, or modify a
Quad layout to replace one of the viewports with a perspective window. Make the
reference shot active. On the Perspective view's right-click menu, select Lock to Current Camera. In the right-click View menu, select Freeze on this frame. (You can adjust which frame it is frozen on using the a, s, d, f, period, or comma keys within the perspective view.)
Change the user interface to make the main shot active using the toolbar
button or shot menu.
You can now select trackers in the camera view, then, with the Coordinate
System panel open, ALT-click the corresponding tracker in the reference view.
See the section on Stereo Supervised Tracking for information on color coding of
linked trackers using both the Camera and Perspective views.

Match By Name
Another approach is to give each tracker a meaningful name. In this case,
clicking the Target Point button will be helpful: it brings up a list of trackers to
choose from.
A more subtle approach is to have matching names, then use the
Track/Cross Link By Name menu item. Having truly identical names makes
things confusing, so the cross link command ignores the first character of each
name. You can then name the trackers lWindowBL and rWindowBL and have
them automatically linked. After setting up a number of matching trackers, select
the trackers on the video clip, and select the Cross Link By Name menu item.
Links will be created from the selected trackers to the matching trackers on the
reference shot.
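The matching rule above (names compared after dropping the first character) can be sketched as follows; this is a hypothetical illustration of the rule, not the actual SynthEyes implementation:

```python
# Illustrative sketch of Cross Link By Name matching: two tracker names
# match when they are identical after the first character is ignored.
# (Hypothetical helper for clarity, not actual SynthEyes code.)

def cross_link_matches(name_a, name_b):
    return name_a[1:] == name_b[1:]

# lWindowBL on the main shot links to rWindowBL on the reference shot:
assert cross_link_matches("lWindowBL", "rWindowBL")
assert not cross_link_matches("lWindowBL", "rWindowTR")
```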
Notes on links: a shot with links should have links to only a single other
shot, which should not have any links to other shots. You can have several shots
link to a single reference.

Ready to Solve

After completing the links, switch to the Solver panel. Change the solver mode to Indirect, because this camera's solution will be based on the solution initially obtained from the first shot. (multi4.sni) Make sure Constrain is off at this time.
Hit Go! SynthEyes will solve the two shots jointly, that is, find the point
positions that match both shots best. Each tracker will still have its own position;
trackers linked together will be very close to one another.


In the example, you should be able to see that the second (tripod) shot
was taken from roughly the location of the second still. Even if the positions were
identical, differences between cameras and the exact features being tracked will
result in imperfect matches. However, the pixel positions will match satisfactorily
for effect insertion. The final result is multi5.sni.

Stereoscopic Movies
3-D movies have had a long and intermittent history, but recently they
have made a comeback as the technology improves, with polarized or frame-sequential projectors, and better understanding and control over convergence to
reduce eyestrain. 3-D movies may be a major selling point to bring larger
audiences back to theaters.
Filmmakers would like to use the entire arsenal of digital techniques to
help produce more compelling and contemporary films. SynthEyes can and has
been used to handle stereo shots using a variety of techniques based on its
single-camera workflow, but there are now extensive specialized features to
support stereo filmmaking.
SynthEyes is designed to help you make stereo movies. The stereo
capabilities range from tracking and cross-camera linking to solving, plus a
variety of user-interface tweaks to simplify handling stereo shots. A special
Stereo Geometry panel allows constraints to be applied between the cameras to
achieve specific relative positions and produce smoother results. Additional
stereo capabilities will be added as the stereo market develops.

STOP! Stereo filmmaking requires a wide variety of techniques from throughout SynthEyes: the image preprocessor, tracking, solving, coordinate system setup, etc. The material here builds upon that earlier material; it is not repeated here because it would be exactly that, a repetition. If this is the first part of the manual you are reading, expect to need to read the rest of it.
You will need to know a fair amount about 3-D movie-making to be able to
produce watchable 3-D movies. 3-D is a bleeding edge field and you should
allow lots of time for experimentation. SynthEyes technical support is necessarily
limited to SynthEyes; please consult other training resources for general
stereoscopic movie theory and workflow issues in other applications.

What's Different About Stereo Match-Moving?
Match-moving relies on having different views of the same scene in order to determine the 3-D location of everything. With a single camera, that means that the camera must physically move (translate) to produce different views.
24/7 Perspective
With stereo, there are two different views all the time, and even a single frame from each camera is enough to produce a 3-D solve. At least in theory, you never have to worry about "tripod shots" that do not produce 3-D. Every shot can produce 3-D. Every stereo shot can also be used in a motion-capture setup to produce a separate path for even a single moving feature. That's clearly good news.


crane shot with several meters of motion to produce perspective. And each of the
hundreds of frames in a typical moving-camera shot produces additional data to
help produce a more accurate solution.
So, even though you can produce 3-D from a very short stereo shot, the
information will not be very accurate (that's the math, not a software issue), and
longer shots with a moving camera will always help produce better-quality 3-D
data.
On a short shot with no camera rig translation (with the rig on a tripod), you can get 3-D solves for features near to the camera(s). Features that are far from the cameras must still be configured as "Far" to SynthEyes, meaning that no 3-D depth can be determined. Similarly, for motion-capture points, accuracy in depth will degrade as the points move away from the camera. The exact definition of "far" depends on the resolution and field of view of the cameras; you might consider something far if it is several hundred times the inter-ocular distance from the camera.
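As a rough worked example of that rule of thumb (the 300x multiplier below is an assumed value within the stated "several hundred" range, and the 65 mm inter-ocular distance is the typical value mentioned later in this chapter):

```python
# Rough "far" threshold illustration: several hundred times the
# inter-ocular distance. The specific values here are assumptions.
iod_mm = 65.0      # typical inter-ocular distance, in millimeters
multiplier = 300   # "several hundred" -- assumed example value

far_threshold_m = iod_mm * multiplier / 1000.0
print(far_threshold_m)  # 19.5 -> features beyond roughly 20 m act as "Far"
```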
Easier Sizing
If we know the inter-ocular distance (and we always should have a
measurement for the beginning or end of the shot), then we know the coordinate
system sizing immediately. There is no need for distance measurements from the
set, and no problem with consistency between shots.
That makes coordinate system setup much simpler. On a stereo shot,
when an inter-ocular distance is set up, the *3 coordinate system tool generates
a somewhat different set of constraints, one that aligns the axes, but does not
impose its own size, allowing the inter-ocular distance to have effect.
Keep in mind that the sizing is only as good as the measurement. If the measurement is 68 +/- 1 mm, that is over 1% uncertainty. If you have some other measurement that you expect to come out at 6000 mm and it comes out at 6055, you shouldn't be at all surprised. Some scenes with little perspective will not vary much depending on inter-ocular distance, so the inter-ocular distance may not size the scene accurately.
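The arithmetic behind those numbers, as a quick check (example values taken from the text above):

```python
# Sizing uncertainty from the inter-ocular measurement (values from text).
iod_mm = 68.0
tolerance_mm = 1.0
relative_uncertainty = tolerance_mm / iod_mm   # about 0.0147, i.e. ~1.5%

# A 6000 mm distance could then plausibly come out around 6055 mm:
expected_mm = 6000.0
observed_mm = 6055.0
relative_error = (observed_mm - expected_mm) / expected_mm  # about 0.9%

# The observed ~0.9% error is within the ~1.5% measurement uncertainty.
print(relative_uncertainty, relative_error)
```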
If you have a crucial sizing requirement, you should use a direct scene measurement; it will be more accurate. (In that case, switch to a Fixed inter-ocular distance, instead of Known.)

Basics of 3-D Camera Rigs
Different hardware and software manufacturers have their own terminologies, technologies, and viewpoints; here's ours.
To make stereo movies, we need two cameras. They get mounted on
some sort of a rig, which holds the two cameras in place at some specific
relationship to one another. You can then mount that rig on a tripod, a dolly, a
crane, whatever, that carries the two cameras around as a unit.


The two cameras must be matched to one another in several ways in order to be usable:
• Same overall image aspect ratio
• Same field of view
• Same frame rate and synchronization
• Same lens distortion (typically none)
• Same overall orientation (geometric alignment)
• Matching color and brightness grading
Most of these should be fairly obvious. Many can be manipulated in post,
and SynthEyes is designed to help you achieve the required matching, even from
very low-tech rigs.
Even the simplest rig will require matching work in post-production. It is
not possible to bolt two cameras together, even with any kind of mechanical
alignment feature, and have the cameras be optically aligned. Cameras are not
manufactured to be repeatable in this way; the circuit board and chip-in-socket
alignment within the camera is not sufficiently accurate or repeatable between
cameras to be directly useful.
Synchronization
The cameras should be synchronized so that they take pictures at exactly the same instant. Otherwise, when you do the tracking and solving, you will by definition have some very subtle geometric distortions and errors: basically you can't triangulate, because the subject is at two different locations, one for each different exposure time.
To make life interesting, if the film will be projected using a frame-sequential projector (or LCD glasses), then the two cameras should be synchronized but 180 degrees out of phase. But that means you cannot track; it is the worst possible synchronization error. Instead, for provable accuracy you should film and track at twice the final rate (eg 48 or 60 fps progressive), then have the projectors show only every other frame from each final image stream.
If circumstances warrant that you shoot unsynchronized or anti-
synchronized footage, you must be aware that you (and the audience) will be
subject to motion-related problems.
CMOS cameras are also subject to the Rolling Shutter problem, which affects monocular projects as well as stereoscopic ones. The rolling shutter problem will also result in geometric errors, depending on the amount of motion in the imagery. To address a common misconception, this problem is not reduced by a short shutter time. If at all possible, use (synchronized) CCD or film cameras.


One-Toe vs. Two-Toe Camera Rigs
Ideally, a camera rig has two cameras next to each other, perfectly
aligned. If both camera viewing axes are perfectly parallel, they are said to be
converged at infinity, and this is a particularly simple case for manipulation.
Usually, one or both cameras toe in slightly to converge at some point closer to
the camera, just as our eyes converge to follow an approaching object.
Mechanically, this may be accomplished directly, or by moving a mirror. We refer
to the total inwards angle of the cameras as the vergence angle.
It might seem that there is no difference between one camera toeing in or
two, but there is. Consider the line between the cameras. With both cameras
properly aligned and converged at infinity, the viewing direction is precisely
perpendicular to the line between the cameras. If one camera toes in, the other
remains at right angles to the line between them. If both cameras toe in, they
both toe in an equal amount, with respect to the line between them.
If you consider an object approaching the rig along the centerline from infinity, the two-toe rig remains stationary, with both cameras toeing in. The one-toe rig moves backwards and rotates slightly, in order to keep the non-toeing camera at right angles to the line between the camera centers.
SynthEyes works with either kind of rig. Though the one-toe rigs seem a
little unnatural (effectively they make the audience turn their heads), the motions
are very small and not really an issue for people, except for those who are trying
to do their tracking to sub-pixel accuracy! The one-toe rigs are mechanically
simpler and seem more likely to actually produce the motion they are supposed
to (are the two-toe rigs really moving exactly matching angles? Are the axes
parallel? Maybe, maybe not).
From Where to Where?
The inter-ocular distance is a very important number in stereo movie-making: it is the distance between the eyes, or the cameras, with a typical value around 65 mm. It is frequently manipulated by filmmakers, however; more on that in a minute.
Although you can measure the distance between your buddy's eyes within a few millimeters pretty easily, when we start talking about cameras it is a little less obvious where to measure.
It turns out that this question is much more significant than you might think
as soon as you allow the camera vergence to change: if the cameras are tilted
inwards towards each other, the point at which you measure will have a dramatic
effect. Depending on where you measure, the distance will change more or less
or not at all.
The proper point to consider is what we call the nodal point, as used for tripod mode shots and panoramic photos. It's not technically a nodal point for opticians. It is the center of the camera aperture, as seen from the outside of the


camera. See this article on the pivot point for panoramic photography for more
details.
The inter-ocular distance (IOD) is the distance between the nodal points of
the cameras.
Dynamic Rigs
Though the simplest rigs bolt the two cameras together at a fixed location,
more sophisticated rigs allow the cameras to move during a shot.
The simplest and most useful motion may not be what you think: it is to
change the inter-ocular distance on the fly. This preserves the proper 3-D
sensation, while avoiding extreme vergence angles that make it difficult to keep
everything on-screen in the movie theater.
The more complex effect is to change the vergence angle on the fly. This
must be done with extreme caution: unless the rig is very carefully built, changing
the vergence angle may also change the inter-ocular distance—or even change
the direction between them as well. If a rig is to change the vergence angle, it
must be constructed to locate the camera nodal point exactly at the center of the
vergence angle's rotation.
A rig that changes only the inter-ocular distance does not have to be
calibrated as carefully. A changing IOD should always be exactly parallel to the
line between the camera nodal points, which in turn means that on a one-toe
camera, the non-moving camera must be perpendicular to the translation axis, or
a two-toe camera must have equal toe-in angles relative to the translation axis.
The penalty for a rig that does not maintain a well-defined relationship
between the cameras is simple: it must be treated as two separate cameras. The
most dangerous shots and rigs are those with changing vergence, either with
mirrors or directly, where the center of rotation does not exactly match the nodal
point. Unless you have calibrated, it will be wrong. You will be in the same boat
as people who shoot green-screen with no tracking markers—and that boat has
a hole…

Camera/Camera Stereo Geometry Parameters


SynthEyes permits you to create constraints that limit the relative position
between the two cameras in sophisticated ways, so that you can ask for specific
relationships between the cameras, and eliminate unnecessary noise-induced
chatter in the position.
If you work in an animation package and have a child object attached to a
parent, you will have six numbers to adjust: 3 position numbers (X, Y, and Z),
and 3 angles (for example Pan, Tilt, and Roll, or Roll, Pitch, and Yaw). The same
six numbers are used for the basic position and orientation of any object.
Those particular six numbers are not convenient for describing the
relationship between the two cameras in a stereo pair, however! In the real world,


there is only one real position measurement that can be made accurately, the
inter-ocular distance, and it controls the scaling of everything.
Accordingly, SynthEyes uses spherical coordinates—which have only a
single distance measurement—to describe the relationship between the
cameras.
Of the two cameras, we'll refer to one as the dominant camera (the one we want to think about the most, typically the right), and the other as the secondary camera. The camera parameters describe the relationship of the secondary (child) camera to the dominant (parent) camera. Which camera is dominant is controlled on the Stereo Geometry panel. In each case, when we talk about the position of a camera, we are talking about the position of its nodal point (inside the front of the lens), not of the base of the camera, which doesn't matter.
You can think about the stereo parameters in the coordinate space of the dominant camera. The dominant camera has a "ground plane" consisting of its side vector, which flies out the right side from the nodal point, and its "look" vector, which flies forward from the nodal point towards what it is looking at. The camera also has an up vector, which points in the direction of the top of the camera image. All of these are relative to the camera body, so if you turn the camera upside down, the camera's "up" vector is now pointing down!
Here are the camera parameters. They have been chosen to be as human-friendly as possible. Most of the time, you should be concerned mainly with the Distance and Vergence; SynthEyes will tell you what the other values are, and they shouldn't be messed with much.
Distance. The inter-ocular distance between the cameras. Note that this
value is measured in the same units as the main 3-D workspace units. So if you
want an overall scene to be measured in feet, the inter-ocular distance should be
measured in feet as well. Centimeters is a reasonable overall choice.
Direction. This is the direction (angle) towards the nodal point of the
secondary camera from the dominant camera, in the ground plane. If the
secondary camera is directly next to the dominant camera, in the most usual
configuration, the direction value is zero. The Direction angle increases if the
secondary camera moves forward, so that at 90 degrees, the secondary camera
is in front of the primary camera (ignoring relative elevation). See additional
considerations in Two-Toe Revisited, below.
Elevation. This is the elevation angle (above the dominant camera's
ground plane). At zero, the secondary camera is on the dominant camera's
ground plane. At 90 degrees, the secondary camera is above the dominant
camera, on its up axis.
Vergence. This is the total toe-in angle by which the two cameras point in
towards each other. At zero, the look directions of the cameras are parallel;
they are converged at infinity. At 90 degrees, the look directions are at right angles.
See Two-Toe Revisited below.
STEREOSCOPIC MOVIES
Tilt. At a tilt of zero, the secondary camera is looking in the same ground
plane as the dominant camera. At positive angles, the secondary camera is
looking increasingly upwards, relative to the dominant camera. At a tilt of 90
degrees, the secondary camera is looking along the dominant camera's Up axis,
perpendicular to the dominant camera viewing direction (they aren't looking at
the same things at all!).
Roll. The roll of the secondary camera relative to the dominant. At a roll
angle of zero, the cameras aren't twisted with respect to one another at all;
both camera look vectors point in the same direction. But as the roll angle
increases, the secondary camera rolls counter-clockwise with respect to the
dominant camera, as seen from the back.
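The placement of the secondary nodal point follows from simple trigonometry on the Distance, Direction, and Elevation values. The sketch below is our own illustrative reading of the definitions above, not SynthEyes' internal code; the axis names are assumptions:

```python
import math

def secondary_nodal_point(distance, direction_deg, elevation_deg):
    # Dominant-camera frame, with assumed axis names:
    # x = side vector (out the right side), y = look vector (forward),
    # z = up vector. Direction 0 puts the secondary directly beside
    # the dominant camera; Direction 90 puts it directly in front.
    d = math.radians(direction_deg)
    e = math.radians(elevation_deg)
    return (distance * math.cos(e) * math.cos(d),   # along side
            distance * math.cos(e) * math.sin(d),   # along look
            distance * math.sin(e))                 # along up

print(secondary_nodal_point(6.5, 0, 0))    # (6.5, 0.0, 0.0): beside
print(secondary_nodal_point(6.5, 90, 0))   # ~(0, 6.5, 0): in front
```

At Elevation 90 the same formula places the secondary on the dominant camera's up axis, matching the description above.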
You can experiment with the stereo parameters by opening a stereo shot,
opening the Stereo Geometry panel, clicking More… and then one of the Live
buttons. Adjusting the spinners will then cause the selected camera to update
appropriately with respect to the other camera.
Two-Toe, Revisited
The camera parameters described above apply directly to
"single-toed" camera rigs, where only one camera (the secondary) rotates for
vergence. The situation is a little more complex for two-toe rigs, where both
cameras toe inwards for vergence. These modes are "Center-Left" and
"Center-Right" in the Stereo Geometry panel's dominance selection.
The dominant camera never moves during two-toed vergence, yet we still
achieve the effect of both cameras tilting in evenly. How is that possible?
Consider a vergence angle of 90 degrees. With a one-toe rig, the
secondary camera has turned 90 degrees in place without moving, and is now
looking directly at the primary camera.
With a two-toe rig at a vergence of 90 degrees, the secondary has turned
90 degrees so it is looking at right angles to the look direction of the dominant
camera.
But, and this is the key thing, at the same time the secondary camera has
swung forward to what would otherwise be Direction=45 degrees, even though
the Direction is still at zero. As a result, the secondary camera has tilted in 45
degrees from the nominal look direction, and the dominant camera is also 45
degrees from the nominal look direction—which is the perpendicular to the line
between the two cameras.
The thing to keep in mind is that the line between the two cameras (nodal
points) forms the baseline; the nominal overall 'rig' look direction is 90 degrees
from that. SynthEyes changes the baseline in centered mode to maintain the
proper even vergence for the two cameras; it does that by changing the definition
of where the zero Direction is. The Direction value is offset by one-half the
vergence in centered mode.
If you put the stereo pair into one of the Centered modes and use Live
mode, you'll see the camera swinging forward and backward in response to
changes in the vergence. Once you understand it, it should make sense. If it
seems a bit more complex and demanding than single-toe rigs… you're right!
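The half-vergence offset can be written down in one line. This hypothetical helper just restates the relationship described above:

```python
def effective_direction(direction_deg, vergence_deg, centered):
    # In the Centered (two-toe) modes, the zero of Direction is swung
    # forward by half the vergence; in single-toe mode it is not.
    return direction_deg + (vergence_deg / 2.0 if centered else 0.0)

# At 90 degrees of vergence in a centered mode, Direction=0 behaves
# like Direction=45: the secondary camera has swung forward.
print(effective_direction(0, 90, True))    # 45.0
print(effective_direction(0, 90, False))   # 0.0
```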
3-D Rig Calibration Overview
If you assemble two cameras onto a rig—at its simplest a piece of metal
with cameras bolted to it—you'll rapidly discover that the two cameras are
looking in different directions, with different image sizes, and usually with quite
different roll angles.
Using a wide field of view is important to achieving a good stereoscopic
effect—sense of depth—especially since some of the view will be sacrificed in
calibration. The wide field of view frequently means substantial lens distortion,
and removing the distortion will eliminate some of the field of view also.
So an important initial goal of stereo image processing is to make the two
images conform to one another (match geometrically). (Color and brightness
should also be equalized.)
There are three basic methods:
1) Mechanical calibration before shooting, using physical adjustments on
the rig to align the cameras, in conjunction with a monitor that can
superimpose the two images
2) Electronic calibration, by shooting images of reference grids, then
analyzing that footage to determine corrections to apply to the real
footage to cause it to match up.
3) Take as-shot footage with no calibration, track and analyze it to
determine the stereo parameters, then use them to correct the footage
so it matches properly.
Of these choices, the first is the best, because the as-shot images will
already be correct and will not need resampling to correct them. The downside is
that a suitably adjustable rig and monitor are more complex and expensive.
The second choice is reasonable for most home-built rigs, where two
cameras are lashed together. We recommend that you set up the shoot of the
reference grid, mechanically adjust the cameras as best you are able, then lock
them down and use electronic correction (applied by the image preprocessor) to
correct the remaining mismatch. With a little care, the remaining mismatch
should only require a zoom of a few percent, with minimal impact on image
quality.
The third case is riskiest, and is subject to the details of each shot: it may
not always be possible to determine the camera parameters accurately. We
recommend this approach only for "rescuing" un-calibrated shots at present.
Electronic Calibration
To electronically calibrate, print out the calibration grid from the web site
using a large-format black and white (drafting) printer, which can be done at
printing shops such as Fedex Kinkos. Attach the grid to a convenient wall,
perhaps with something like Joe's Sticky Stuff from Cinetools. Position the rig on
a tripod in front of the wall, as close as it can get with the entire outer frame
visible in both cameras (zoom lenses at their widest). Adjust the height of the rig so
that the nodal point of the cameras is at the same height as the center point of
the grid.
Re-aim the cameras as necessary to center them on the grid. This will
converge them at that distance to the wall; you may want to offset them slightly
outwards or inwards to achieve a different convergence distance, depending on
what you want.
Shoot a little footage of this static setup. Record the distance from the
cameras to the wall, and the width of the visible grid pattern (48" on our standard
grid at 100% size).
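The recorded distance and grid width determine the field of view by simple pinhole geometry. The sketch below shows the arithmetic involved; it is not the actual Camera Field of View Calculator script, and the example numbers are assumptions:

```python
import math

def horizontal_fov_deg(grid_width, distance):
    # Angle subtended by a flat target of known width at a known
    # distance, both in the same units. Pinhole model only; real
    # lens distortion is handled separately in the preprocessor.
    return math.degrees(2.0 * math.atan((grid_width / 2.0) / distance))

# The 48" grid, shot from 60" away and just filling the frame:
print(round(horizontal_fov_deg(48.0, 60.0), 1))   # 43.6
```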
For camcorders with zoom lenses, you should shoot a sequence, zooming
in a bit at a time in each camera. You can use one remote control to control both
cameras simultaneously. This sequence will allow the optic center of the lens to
be determined—camcorder lenses are often far off-center.
Once you open the shots in SynthEyes, create a full-width checkline and
use the Camera Field of View Calculator script to determine the overall field of
view. Use the Adjust tools on the image preprocessor to adjust each shot to have
the same size and rotation angle. Use the lens distortion controls to remove any
distortion. Correct any mirroring with this pass as well, see the mirror settings on
the image preprocessor‘s Rez tab. Use the Cropping and re-sampling controls to
remove lens off-centering. A small Delta Zoom value will equalize the zoom. See
the tutorial for an overview of this process.
Your objective is to produce a set of settings that take the two different
images and make them look exactly the same, as if your camera rig was perfect.
Once you've done that, you can record all of the relevant settings (see the
Export/Stereo/Export Stereo Settings script), and re-use them on each of your
shots (see Import/Stereo/Import Stereo Settings) to make the actual images
match up properly. Then, you should save a modified version of each sequence
out to disk for subsequent tracking, compositing, and delivery to the audience.
Obviously this process requires that your stereo rig stay rigid from shot to
shot (or that periodic calibrations be performed). The better the cameras match
to begin with, the less image quality and field of view will be lost in making the
shots match.
Opening Shots
SynthEyes uses a control on the shot parameters panel to identify shots
that need stereo processing. Open the left shot, and on the shot settings panel,
click Stereo off until it says Left. After you adjust any other parameters and click
OK, SynthEyes will immediately prompt you to open the right shot. Any settings,
including image preprocessor settings, will be copied over to the right shot to
save time.
If you do not configure the stereo setting when you initially open the shot,
you can do so later using the shot settings dialog. You can turn it on or off as
your needs warrant. To get stereo processing, you must open the left shot first
and the right shot second, and set the first shot to left and the second to right.
Both shots must have the same shot-start and -end frame values.
Stereo rigs that include mirrors will produce reversed images. If the
camera was mechanically calibrated, use the Mirror Left/Right or Mirror
Top/Bottom checkboxes on the Rez tab of the image preprocessor to remove the
mirroring. (If the cameras are electronically calibrated using the image
preprocessor, you should remove mirroring then as described above.)
Note that you can use the Stereo view in the Perspective view to show
both images simultaneously as an anaglyph.
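Conceptually, a red-cyan anaglyph takes the red channel from one eye and green/blue from the other, so the glasses route each eye its own view. A minimal per-pixel sketch of the idea follows; SynthEyes' own Stereo view may weight the channels differently:

```python
def anaglyph_pixel(left_rgb, right_rgb):
    # Red from the left image, green and blue from the right.
    return (left_rgb[0], right_rgb[1], right_rgb[2])

# A feature visible only to the left eye shows up red,
# one visible only to the right eye shows up cyan:
print(anaglyph_pixel((255, 255, 255), (0, 0, 0)))   # (255, 0, 0)
print(anaglyph_pixel((0, 0, 0), (255, 255, 255)))   # (0, 255, 255)
```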
Stereoscopic Tracking
In a stereo setup, there are links from trackers on the secondary camera
to the corresponding trackers on the dominant camera; they tell SynthEyes which
features are tracked in both cameras. These links are always treated as peg-type
locks, regardless of the state of the Constrain checkbox.
Automatic
If you use automatic tracking, SynthEyes will track both shots
simultaneously and automatically link trackers between the two shots. For this to
work well, your two shots need to be properly matched, both in overall geometric
alignment, and in color and brightness grading. A quick fix, if needed, is to turn
on high-pass filtering in the image preprocessor. Be sure to set the proper
dominant camera before auto-tracking, as SynthEyes will examine that to
determine in which direction to place the links (from secondary trackers to
primary camera trackers, which can be left to right or right to left).
Supervised
If you use supervised tracking, you will need to set up the links manually
between the corresponding left- and right-camera trackers. There are a number
of special features to make handling stereo tracker pairs easier (for automatic
trackers also).
It is easiest to do supervised stereo tracking with a perspective window
containing one eye, and a camera view containing the other eye, typically the
Camera+Perspective viewport setup or a custom configuration. With the right
camera active, click Lock to Current Cam on the perspective view's right-click
menu. Then make the left camera active. Use the shot menu's Activate Other
Eye (default accelerator key: minus sign) to flip-flop the eyes shown in the
camera and perspective views. In the perspective view, the right-click View/Show
only locked setting can be helpful: it causes the perspective view to show only
the trackers on the camera that perspective view is locked to. The following
discussion largely requires this setup, with the perspective view showing the
opposite eye of the camera view.
To create stereo pairs, you should create a few trackers on one eye,
switch to the other eye (typically using the minus sign key), create the matching
trackers, then link them together. To link each pair, you should click the tracker in
the camera view to select it, then ALT-click the matching tracker in the
perspective view. SynthEyes will automatically link the trackers in the correct
direction, depending on the camera dominance setting.
Note: while linking trackers normally requires that the Coordinate System
Panel be open (so that you can see the results), the panel does not have to be
open to link trackers in a stereo shot using the perspective view—you can link
them while keeping the Tracker panel open continuously, to save time.
When you have stereo pairs configured and are using the
camera/perspective view setup, if you select a tracker in the camera view, the
perspective view will show the matching tracker in the other camera in a different
color (yellow by default, "Persp, Opposite sel" in the preferences). This makes it
easy to check the matching.
ACHTUNG! Stay awake, this next one is tricky: if you click on a tracker in
the perspective view, that tracker will not be selected, because it is not on the
currently-active camera, but instead the matching tracker on the other camera (in
the camera view) will be selected. That will in turn make the tracker you just
clicked (in the perspective window) turn yellow, because its matching tracker is
now selected. Again, this makes it easy to see what tracker goes with what. It
sounds complicated but should be clear when you try it for real. If you want to
select and edit a tracker displayed in the perspective view (on the opposite eye),
you should switch the views with minus sign—the camera view is the place to do
that.
Configuring a Stereo Solve
When solving a stereo shot, you can use any of three different setups:
the Stereo solving mode, Automatic/Indirectly, or the Tripod/Indirectly setup. The
Stereo mode works for stationary tripod-like shots or even stills when there are
nearby features, while the Automatic/Indirectly approach requires a moving
camera in order to produce (generally more reliable) startup results. The
Tripod/Indirectly setup is required when all trackable features are distant from the
camera pair.
You will need to pay more attention to the solving setup for stereo shots
than for normal monocular shots, so please read on in the following subsections.
If you hit the big green Auto button, or the Run Auto-tracker button,
SynthEyes will prompt you to choose which of these three modes you want. If you
are working manually or later need to change the modes, you can do so
individually for each camera. Be sure to keep Indirectly, if present, on the
secondary camera.
Note that "far" trackers can be an issue with stereo shots that are on a
tripod: once a tracker is many times the interocular distance from the cameras,
its distance can no longer be determined, and the stereo solve goes from a nice
stereo tripod situation to a combination of two tripod shots; see the section on
Stereo Tripod Shots.
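The "many times the interocular distance" limit is easy to quantify with standard parallel-camera triangulation. This is our own illustration, not a SynthEyes formula, and the rig numbers are assumptions:

```python
def disparity_px(depth, iod, focal_px):
    # Parallel cameras: a point at the given depth shifts by
    # iod * f / depth pixels between the two images. iod and depth
    # in the same units; focal length expressed in pixels.
    return iod * focal_px / depth

# Assumed rig: 6.5 cm inter-ocular, ~1500 px focal length.
print(disparity_px(100.0, 6.5, 1500.0))     # 97.5 px at 1 m: easy to measure
print(disparity_px(10000.0, 6.5, 1500.0))   # 0.975 px at 100 m: ill-determined
```

Once the between-eye shift falls below the tracking noise, the feature's depth is effectively unconstrained, which is exactly the all-far situation described below.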
Stereo Solving Mode
The stereo mode uses the trackers that are linked between cameras to get
things going. It does not rely on any camera motion at all: the camera can be
stationary, and even a single still for each camera can be used, as long as there
are enough nearby features (compared to the inter-ocular distance).
Important: The Begin and End frames on the Solver panel should be
configured directly for Stereo Solving mode—somewhat differently than for a
usual automatic solve, so please keep reading. To begin with, the checkboxes
should be checked so that the values can be set manually.
The Stereo solve will literally start with the Begin frame from both
cameras; it should be chosen at a frame with many trackers in common between
the two cameras.
However, this offers a limited pool of data with which to get started. A
much larger, and thus more reliable, pool of data is considered when the End
frame is set as well. The Stereo solver startup considers all the frames between
the Begin and End frames as source data.
The one caveat: none of the camera parameters may change between
the Begin and End frames, including the distance or vergence, even if they were
marked as changing (see the next section).
If any of the camera parameters are constantly changing throughout the
shot, or it can not be determined that they do not, then you must set the End
frame and Begin frame to the same frame, and forego having any additional data
for startup. Such a frame should have as many trackers as possible, and they
should be carefully examined to reduce errors.
If you do not select the Begin/End frames manually (leave them in
automatic mode), then SynthEyes will select a single starting/ending frame that
has as many trackers in common as possible. But as described, supplying a
range is a better idea.
Note that you might be able to use the entire shot as a range, though
probably this will increase run time and a shorter period may produce equivalent
results.
Automatic/Indirectly Solving Mode
The stereo mode effectively uses the inter-ocular distance as the baseline
for triangulating to initially find the tracking points. If the camera is moving, a
larger portion of the motion can be used to get solving started, producing a more
accurate starting configuration.
To do that, use the normal Automatic solving mode for the dominant
camera, and the Indirectly mode for the secondary camera. Assuming the
moving camera path is reasonable, SynthEyes will solve for the dominant
camera path, then for the secondary path, applying the selected camera/camera
constraints at that time.
This approach will probably work better on shots where most of the
trackers are fairly far from the cameras, and the camera moves a substantial
distance, thus establishing a baseline for triangulation. If the camera moves
(translates) little, you should use the Stereo solving mode.
Tripod/Indirectly Solving Mode
With two cameras, nodal tripod shots are less of an issue because distances
and 3-D coordinates can be determined if there are enough nearby features.
However, you may encounter shots that are nodal by virtue of not having
anything nearby; call them "all-far" shots. For example, consider a camera on the
top of a mountain, which must be attacked by CG birds. With no nearby features,
the shot will be nodal, and there will be no way to determine the inter-ocular
distance. Any inter-ocular distance can be used, with no way to tell if it is right or
wrong.
Like a (monocular) tripod shot, no 3-D solve is possible, only what
amounts to two linked tripod solves.
Use the Tripod/Indirectly setup (tripod mode on dominant camera,
indirectly on secondary). When refining, use Refine Tripod mode for both
cameras.
On the stereo geometry panel (see below), you should set up your best
estimate of the inter-ocular distance, either from on-set measurements or from
other shots. You can animate it if you have the information to do so. Set the
Direction and Elevation numbers to zero, or known values from other shots.
SynthEyes will solve the shot to produce two synchronized tripod solves.
Then, it will compute adjusted camera paths, based on the interocular
distance and the pointing direction of the camera, as if the camera had been on a
tripod. These will typically be small arc-like paths. If you need to later adjust the
inter-ocular distance, Refine (Tripod) the shot to have the paths recalculated.
As a result, you will have two matching camera paths so that you can add
CG effects that come close to the camera. Since SynthEyes has regenerated
the camera paths at a correct inter-ocular distance, even though all the tracked
features are far, you will still be able to add effects nearby and have them come
out OK.
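The regenerated paths can be pictured as the dominant path offset along each frame's baseline. The simplified sketch below uses a hypothetical helper and ignores the Direction and Elevation angles the real solver also applies:

```python
def offset_secondary_path(dominant_path, side_vectors, iod):
    # One position per frame: offset the dominant nodal point along
    # that frame's side (baseline) vector by the inter-ocular
    # distance. As a tripod-mounted rig pans, the offset point
    # sweeps the small arc-like path mentioned in the text.
    return [(px + iod * sx, py + iod * sy, pz + iod * sz)
            for (px, py, pz), (sx, sy, sz) in zip(dominant_path, side_vectors)]

# A stationary nodal point panning 90 degrees over three frames:
arc = offset_secondary_path([(0.0, 0.0, 0.0)] * 3,
                            [(1.0, 0.0, 0.0),
                             (0.7071, 0.7071, 0.0),
                             (0.0, 1.0, 0.0)],
                            0.065)
print(arc[0])   # (0.065, 0.0, 0.0)
```

Every frame of the offset path stays one inter-ocular distance from the pivot, which is why changing the IOD requires a Refine (Tripod) pass to recalculate the paths.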
Setting Up Constraints
The Stereo Geometry panel can be used to set up constraints between
the two cameras. If you will be using the inter-ocular distance to set the overall
scale of the scene, then you should do that initially, before setting up a
coordinate system using the *3 tool. The *3 tool will recognize the inter-ocular
distance constraint, and generate a modified set of tracker constraints to avoid
creating a conflict with the inter-ocular distance constraint.
The left-most column on the Stereo Geometry panel sets the solving mode
for each of the six stereo parameters; they can be configured individually and
often will be. The default As-Is setting causes no constraint to be generated for
that parameter. To constrain the Distance, change its mode to Known, and set
the Lock-To Value to the desired value.
The Lock-To value can be animated, under control of the Make Key button
at top left of the panel. With Make Key off, the lock value shown and animated is
that at the beginning of the shot. Beware, this can hide any additional keys you
have already created.
Usually it will be best to solve a shot once first, with at most a Distance
constraint, and examine the resulting camera parameters. The stereo parameters
can be viewed in the graph editor under the node "Stereo Pairs." The colors of
the parameters are shown on the stereo panel for convenience.
Sudden jumps in a parameter will usually indicate a tracking problem,
which should be addressed directly. The error is like an air bubble under
plastic—you can move it around, but not eliminate it. The stereo locks are all
'soft' and can not necessarily overcome an arbitrarily large error. If you do not fix
the underlying errors in the tracking data, even if you force the stereo parameters
to the values you wish, the error will appear in other channels or in the tracker
locations.
Usually, the other four stereo parameters (other than distance and
vergence) are constant at an unknown value. Use the Fixed mode to tell
SynthEyes to determine the best unknown value (like the Fixed Unknown lens-
solving mode).
If you are very confident of your calibration, or wish to have the best solve
for a specific set of parameters, you can use the Known mode for them also.
In the Varying solving mode, you can create constraints for specific
desired ranges of frames, by animating the respective Lock button on or off. The
parameter will be locked to the Lock-To value for those specific frames. The Hold
button may also be activated (for vergence and distance); see the following
section on Handling Shots with Changing IOD or Vergence.
Note that usually you should keep solving "from scratch" after changing
the stereo constraint parameters, rather than switching to Refine mode. Usually
after a change the desired solution will be too far away from the current one to be
determined without re-running the solve.
Weights
Each constraint has an animated weight track. Weights range from 0 to
120, with 60 being nominal. The value is in decibels, meaning 20 units changes the
weight by a factor of 10. Thus, the total weight range is 0.001 to 1000.0.
Excessively large weight values can de-stabilize the equations, producing
a less-accurate result. We advise sticking with the default values to begin with,
and only increasing a weight if needed to reduce difference values after a solve
has been obtained. On a difficult solve where there is much contradictory
information, it may be more helpful to reduce the weight, to make the equations
more flexible and better able to find a decent solution.
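Translated into a multiplicative factor, the decibel scale works out as follows. This is a sketch of the stated relationship, not SynthEyes' internal code:

```python
def weight_factor(weight_db):
    # 60 is nominal (factor 1.0); every 20 units is a factor of 10,
    # giving the stated 0.001-to-1000 range over 0..120.
    return 10.0 ** ((weight_db - 60.0) / 20.0)

print(weight_factor(60))    # 1.0
print(weight_factor(0))     # 0.001
print(weight_factor(120))   # 1000.0
```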
Identical Lens FOV Weight
You might wish to keep the two camera fields of view close to one
another. The Identical Lens Weight spinner on the Lens Panel allows you to do
that. As with the constraint weights, 60 is the nominal value, with a range from 0
to 120.
Note that truly identical camera lenses are very unlikely, even if you have
done some pre-processing. It is probably best to stay with a moderate weight.
If you want truly identical fields of view, you should solve with an
identical-FOV weight first, then average the two values and set that value as the known
FOV for each camera.
SynthEyes notices even a very small identical-FOV weight and uses it to
help produce a more reliable initial solution, so this may sometimes be a good
idea even if the values are not necessarily identical.
Handling Shots with Changing IOD and/or Vergence
When the inter-ocular distance or vergence changes during the shot, it
may be desirable to clean up the curve to reduce chatter. Since inter-ocular
distances are typically small, there will be some chatter most of the time,
especially if the shot is difficult. It is especially desirable to reduce the chatter if
new CG objects are being added close to the camera, as is often the case.
When the vergence or distance is changing rapidly, any chatter will be
difficult to see. It is most useful to squash any jitter while the vergence or
distance is stationary, and allow it to change freely only while it is actively
changing.
Conveniently, the Distance and Verge Hold controls allow you to do
exactly that. The parameter must be set to Varying mode to enable the Hold (and
Lock) controls. You can animate the respective Hold button to be on during the
frames when the respective parameter is fixed, and keep the Hold off when the
parameter is changing. (It is better to Hold on too few frames than too many.)
After solving a shot, you may want to create a specific IOD or vergence
curve. You can animate it directly, or create it with Approximate Keys in the
graph editor. Use Vary mode and animate a Lock to lock only some frames,
or use Known mode to lock the entire shot if the whole curve is known.
Post-Stereo-Solve Checkup
After a stereo solve with constraints, you should verify that they have been
satisfied correctly, using the graph editor. If the tracking data is wrong or calls for
a stereo relationship too different from the constraints, they may not be satisfied
and adjusting the constraints or tracking data may be necessary.
IMPORTANT! On some marginal shots, the constraints may not be
satisfied on the very first frame due to some mathematical details, creating a
one-frame glitch in the camera/camera relationship. In this case, once you have
the tracking fairly complete, and have set up a coordinate system on the
dominant camera, switch to Refine mode, turn on the Constrain checkbox, and
hit Go!
Object Tracking
SynthEyes can perform stereo object tracking, where the same rigid object
is tracked from both cameras. (It can also do motion capture from stereo
imagery, where the objects do not have to be rigid, and each feature is tracked
separately.) Or you can do single-eye object tracking on a stereo shot, if the
object moves enough for that to work well.
To set up the stereo moving-object setup, do a Shot/Add Moving Object
when a stereo shot is active. You will be asked whether to create a regular or
stereo moving object. The latter is similar to adding a moving object twice, once
when each camera (left and right) is selected as the main active object (the
currently-selected camera is used as the parent when a new object is created),
except that SynthEyes records that the two are linked together. Each object will
have the Stereo solving mode.
An object can only be in one place at a time: there should only be a single
path for the object in world space, the same path as seen in the left and right
cameras, just like each 3-D tracker pair only has a single location. The world
path depends on the camera path, though! This creates challenging "what comes
first" issues. Object solves always start with an initial camera-type estimate,
before being converted to an object-type solve, and that is what happens with
stereo object solves as well.
Simple object solves start out with two separate motion paths, one for the
"left object" and one for the "right object." As the camera path becomes available,
additional constraints are applied that force the left and right paths, in world
space, to become identical by adjusting the camera and object positioning.
Because of this inherent interaction, it is wise to work on the camera solve
first, before proceeding to the objects.
Helpful Hint: in addition to using the "Activate Other Eye" menu item or its
minus-key accelerator, it can also be helpful to right-click the active
camera/object button on the toolbar to rotate backwards through the collection of
objects and cameras when there are multiple cameras and objects present.
To perform motion-capture style tracking in a stereo shot, after completing
the camera tracking, do an Add Moving Object, then set the solver mode for the
object(s) to Individual Mocap. Add stereo tracking pairs on the moving object,
and each pair will be solved to create a separate independent path. See the
online tutorial for an example, and see the Motion Capture section of the manual.
Interactions and Limitations
SynthEyes has many different features and capabilities; not all make
sense for stereo shots, or have been enhanced to work with stereo shots.
Following is a list of some of these items:
• User interface – on a stereo shot, the "important" settings are
generally those configured on the left camera, for example the
solver's Begin and End frames. You might find yourself changing
the right-camera controls and be surprised they seem to have no
effect, because they do not! Generally we will continue to add code
to prevent this kind of mishap when a stereo shot is active.
• Lens distortion – must be addressed before solving, not during it. It
does not make sense to have lens distortion being solved for
(separately or together) during the solve.
• Zoom vs Fixed vs Known – Both lenses must be fixed, or both must
be zooming. It is far preferable if both lenses are Known, from
calibration, to produce a more stable and reliable solve.
• Tracker Cleanup and Coalesce Trackers are not stereo-aware.
• Hold mode for camera paths – in a hold region, the camera is
effectively switched to tripod mode, which doesn't make sense for
stereo shots. With an inter-ocular distance, any change in
orientation always produces a change in position. In the future, this
might be modified to make the dominant camera (only) stationary,
as a way to reduce chatter.
• Object tracking with no camera trackers – with a single camera, it is
sometimes useful to do an object track relative to the camera, with the
camera disabled. The same approach is not possible with stereo –
if the cameras are disabled, they can not be solved or moved to
achieve the proper inter-ocular distance. You can solve the shot as
a moving-camera, store the stereo parameters on the Known
tracks, reset the cameras, transfer the stereo parameters to the
secondary camera via Set 1f or Set All, then change the setup to
moving-object (see the tutorial) and solve as a moving object.
• Moving Object locking – you can not lock the coordinates of a
moving object to world coordinates.
• Exporting – exports will contain the separate cameras and objects.
Some exports may export only one camera at a time. In the future,
we will probably have modified stereo exporters that parent the
cameras to assemble a small rig. In any case, if you want to use
some particular rig in your animation package, you will need to use
some tool to convert the path information to drive your particular rig.
Motion Capture and Face Tracking
SynthEyes offers the exciting capability to do full body and facial motion
capture using conventional video or film cameras.
STOP! Unless you know how to do supervised tracking and
understand moving-object tracking, you will not be able to do motion capture.
The material here builds upon that earlier material; it is not repeated here
because it would be exactly that, a repetition.
First, why and when is motion capture necessary? The moving-object
tracking discussed previously is very effective for tracking a head, when the face
is not doing all that much, or when trackable points have been added in places
that don‘t move with respect to one another (forehead, jaws, nose). The moving-
object mode is good for making animals talk, for example. By contrast, motion
capture is used when the motion of the moving features is to be determined, and
will then be applied to an animated character. For example, use motion capture
of an actor reading a script to apply the same expressions to an animated
character. Moving-object tracking requires only one camera, while motion
capture requires several calibrated cameras.
Second, we need to establish a few very important points: this is not the
kind of capability that you can learn on the fly as you do that important shoot,
with the client breathing down your neck. This is not the kind of thing for which
you can expect to glance at this manual for a few minutes, and be a pro. Your
head will explode. This is not the sort of thing you can expect to apply to some
musty old archival footage, or using that old VHS camera at night in front of a
flickering fireplace. This is not something where you can set up a shoot for a
couple of days, leave it around with small children or animals climbing on it, and
get anything usable whatsoever. This is not the sort of thing where you can take
a SynthEyes export into your animation software, and expect all your work to be
done, with just a quick render to come. And this is not the sort of thing that is
going to produce the results of a $250,000 custom full body motion capture
studio with 25 cameras.
With all those dire warnings out of the way, what is the good news? If you
do your homework, do your experimentation ahead of time, set up technically
solid cameras and lighting, read the SynthEyes manual so you have a fair
understanding what the SynthEyes software is doing, and understand your 3-D
package well enough to set up your character or face rigging, you should be able
to get excellent results.
In this manual, we‘ll work through a sample facial capture session. The
techniques and issues are the same for full body capture, though of course the
tracking marks and overall camera setup for body capture must be larger and
more complex.


Introduction
To perform motion capture of faces or bodies, you will need at least two
cameras trained on the performer from different angles. Since the performer‘s
head or limbs are rotating, the tracking features may rotate out of view of the first
two cameras, so you may need additional cameras to shoot more views from
behind the actor.
The fields of view of the cameras must be large enough to encompass the
entire motion that the actor will perform, without the cameras tracking the
performer (OK, experts can use SynthEyes for motion capture even when the
cameras move, but only with care).
You will need to perform a calibration process ahead of time, to determine
the exact position and orientation of the cameras with respect to one another
(assuming they are not moving). We‘ll show you one way to achieve this, using
some specialized but inexpensive gear.
Very Important: You‘ll have to ensure that nobody knocks the cameras
out of calibration while you shoot calibration or live action footage, or between
takes.
You‘ll need to be able to resynchronize the footage of all the cameras in
post. We‘ll tell you one way to do that.
Generally the performer will have tracker markers attached, to ensure the
best possible and most reliable data capture. The exception to this would be if
one of the camera views must also be used as part of the final shot, for example,
a talking head that will have an extreme helmet added. In this case, markers can
be used where they will be hidden by the added effect, and in locations not
permitting trackers, either natural facial features can be used (HD or film
source!), or markers can be used and removed as an additional effect.
After you solve the calibration and tracking in SynthEyes, you will wind up
with a collection of trajectories showing the path through space of each individual
feature. When you do moving-object tracking, the trackers are all rigidly
connected to one another, but in motion capture, each tracker follows its own
individual path.
You will bring all these individual paths into your animation package, and
will need to set up a rigging system that makes your character move in response
to the tracker paths. That rigging might consist of expressions, Look At
controllers, etc; it‘s up to you and your animation package.

Camera Types
Since each camera‘s field of view must encompass the entire
performance (unless there are many overlapping cameras), at any time the actor
is usually a small portion of the frame. This makes progressive DV, HD, or film
source material strongly suggested.

Progressive-scan cameras are strongly recommended, to avoid the factor
of two loss of vertical resolution due to interlacing. This is especially important
since the tracking markers are typically small and can slip between scan lines.
While it may make operations simpler, the cameras do not have to be the
same kind, have the same aspect ratio, or have the same frame rate.
Resist the urge to use that old consumer-grade analog videotape camera
as one of the cameras—the recording process will not be stable enough for good
results.
Lens distortion will substantially complicate calibration and processing. To
minimize distortion, use high-quality lenses, and do not operate them near their
maximum field of view, where distortion is largest. Do not try to squeeze into the
smallest possible studio space.

Camera Placement
The camera placements must address two opposing factors: on one hand,
the cameras should be far apart, to produce a large parallax disparity with good
depth perception; on the other, the cameras should be close together, so that they
can simultaneously observe as many trackers as possible.
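The "far apart" half of this tradeoff can be quantified with the textbook pinhole-stereo relation; this is generic geometry, not a SynthEyes formula, and the numbers below are only illustrative.

```python
# Illustrative stereo-depth relation (pinhole model): depth Z = f*B/d,
# where f is the focal length in pixels, B the camera baseline, and d
# the disparity. A wider baseline B makes depth less sensitive to a
# given tracking error in d.
def depth_from_disparity(f_pixels, baseline, disparity):
    return f_pixels * baseline / disparity

def depth_error(f_pixels, baseline, depth, disparity_error):
    # First-order error propagation: dZ ~= Z^2 / (f*B) * dd
    return depth**2 / (f_pixels * baseline) * disparity_error

# Half a pixel of tracking error on a subject 3 units away:
narrow = depth_error(1000.0, 0.2, 3.0, 0.5)   # 0.2-unit baseline
wide   = depth_error(1000.0, 1.0, 3.0, 0.5)   # 1-unit baseline
```

With everything else equal, the wider baseline yields a proportionally smaller depth error, which is why well-separated cameras triangulate better.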
You‘ll probably need to experiment with placement to gain experience,
keeping in mind the performance to be delivered.
Cameras do not have to be placed in any discernible pattern. If the
performance warrants it, you might want coverage from up above, or down
below.
If any cameras will move during the performance, they will need a visible
set of stationary tracking markers, to recover their trajectory in the usual fashion.
This will reduce accuracy compared to a carefully calibrated stationary camera.

Lighting
Lighting should be sufficient to keep the markers well illuminated, avoiding
shadowing. There should be enough light to keep the shutter time of
the cameras as short as possible, consistent with good image quality.

Calibration Requirements and Fixturing


In order for motion tracking footage to be solved, the camera positions,
orientations, and fields of view must be determined, independent of the "live"
footage, as accurately as possible.
To do this, we will use a process based on moving-object tracking. A
calibration object is moved in the field of view of all the cameras, and tracked
simultaneously.
To get the most data quickly and easily, we constructed a prop we call a
"porcupine" out of a 4" Styrofoam ball, 20-gauge plant stem wires, and small
7 mm colored pom-pom balls, all obtained from a local craft shop for under $5.
Lengths of wire were cut to varying lengths, stuck into the ball, and a pom-pom
glued to the end using a hot glue gun. In retrospect, it would have been
smarter to space two balls along the support wire as well, to help set up a
coordinate system.

The porcupine is hung by a support wire in the location of the performer‘s
head, then rotated as it is recorded simultaneously from each camera. The
porcupine‘s colored pom-poms can be viewed virtually all the time, even as they
spin around to the back, except for the occasional occlusion.
Similar fixtures can be built for larger motion capture scenarios, perhaps
using dolly track to carry a wire frame. It is important that the individual trackable
features on the fixture not move with respect to one another: their rigidity is
required for the standard object tracking.
The path of the calibration fixture does not particularly matter.

Camera Synchronization
The timing relationship between the different cameras must be
established. Ideally, all the cameras would all be gen-locked together, snapping
each image at exactly the same time. Instead, there are a variety of possibilities
which can be arranged and communicated to SynthEyes during the setup
process.

Motion capture has a special solver mode on the Solver Panel:
individual mocap. In this mode, the second dropdown list changes from a
directional hint to control camera synchronization.
If the cameras are all video cameras, they can be gen-locked together to
all take pictures identically. This situation is called "Sync Locked."

If you have a collection of video cameras, they will all take pictures at
exactly the same (crystal-controlled) rate. However, one camera may always be
taking pictures a bit before the other, and a third camera may always be taking
pictures at yet a different time than the other two. The option is ―Crystal Sync.‖
If you have a film camera, it might run a little more or a little less that 24
fps, not particularly synchronized to anything. This will be referred to as ―Loose
Sync.‖
In a capture setup with multiple cameras, one can always be considered
to be Sync Locked, and serve as a reference. If it is a video camera, other video
cameras are in Crystal Sync, and any film camera would be Loose Sync.
If you have a film camera that will be used in the final shot, it should be
considered to be the sync reference, with Sync Locked, and any other cameras
are in Loose Sync.
The beginning and end of each camera‘s view of the calibration sequence
and the performance sequence must be identified to the nearest frame. This can
be achieved with a clapper board or electronic slate. The low-budget approach is
to use a flashlight or laser pointer flash to mark the beginning and end of the
shot.
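With the flashlight approach, the flash frame can be located in post by looking for a brightness spike. A minimal sketch, assuming the per-frame mean luminance values have already been extracted with whatever image tools you use:

```python
# Sketch: locate a "sync flash" as the frame with the largest
# frame-to-frame jump in mean brightness. Real footage would be read
# with an image library; here the per-frame means are synthetic.
def find_flash(mean_luma):
    """Return the index of the largest frame-to-frame brightness jump."""
    jumps = [mean_luma[i] - mean_luma[i - 1] for i in range(1, len(mean_luma))]
    return 1 + max(range(len(jumps)), key=jumps.__getitem__)

# Synthetic per-frame mean brightness with a flash at frame 5:
luma = [0.21, 0.22, 0.20, 0.21, 0.22, 0.95, 0.23, 0.21]
flash_frame = find_flash(luma)  # -> 5
```

Running the same detector on each camera's footage gives the per-camera frame at which to crop, aligning all the shots to the nearest frame.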

Camera Calibration Process


We‘re ready to start the camera calibration process, using the two shot
sequences LeftCalibSeq and RightCalibSeq. You can start SynthEyes and do a
File/New for the left shot, and then Add Shot to bring in the second. Open both
with Interlace=Yes, as unfortunately both shots are interlaced. Even though these
are moving-object shots, for calibration they will be solved as moving-camera
shots.
You can see from these shots how the timing calibration was carried out.
The shots were cropped right before the beginning of the starting flash, and right
after the ending flash, to make it obvious what had been done. Normally, you
should crop after the starting flash, and before the ending flash.


On your own shots, you can use the Image Preprocessing panel‘s Region-
of-interest capability to reduce memory consumption to help handle long shots
from multiple cameras.
You should supervise-track a substantial fraction of the pom-poms in each
camera view; you can then solve each camera to obtain a path of the camera
appearing to orbit the stationary pom-pom.
Next, we will need to set up a set of links between corresponding trackers
in the two shots. The links must always be on the Camera02 trackers, to a
Camera01 tracker. This can be achieved at least three different ways.
Matching Plan A: Temporary Alignment
This is probably easiest, and we may offer a script to do the grunt work in
the future.
Begin by assigning a temporary coordinate system for each camera, using
the same pom-poms and ordering for each camera. It is most useful to keep the
porcupine axis upright (which is where pom-poms along the support wire would
come in useful, if available); in this shot three at the very bottom of the porcupine
were suitable.
With matching constraints for each camera, when you re-solve, you will
obtain matching pairs of tracker points, one from each camera, located very
close to one another.

Now, with the Coordinate System panel open, Camera02 active, and
the Top view selected, you can click on each of Camera02‘s tracker points, and
then alt-click (or command-click) on the corresponding Camera01 point, setting
up all the links.
As you complete the linking, you should remove the initial temporary
constraints from Camera02.
Matching Plan B: Side by Side
In this plan, you can use the Camera & Perspective viewport
configuration. Make Camera01 active, and in the perspective window, right-click
and Lock to current camera with Camera01‘s imagery, then make Camera02
active for the camera view. Now camera and perspective views show the two
shots simultaneously. (Experts: you can open multiple perspective windows and
configure each for a different shot. You can also freeze a perspective window on
a particular frame, then use the key accelerators to switch frame as needed.)
You can now click the trackers in the camera(02) view, and alt-click the
matching (01) tracker in the perspective window, establishing the links.

Reminder: The coordinate system control panel must be open for linking.
This will take a little mental rotation to establish the right correspondences; the
colors of the various pom-poms will help.


Matching Plan C: Cross Link by Name


This plan is probably more trouble than it is worth for calibration, but can be
an excellent choice for the actual shots. You assign names to each of the pom-
poms, so that the names differ only by the first character, then use the
Track/Cross-Link by Name menu item to establish links.
It is a bit of a pain to come up with different names for the pom-poms, and
do it identically for the two views, but this might be more reasonable for other
calibration scenarios where it is more obvious which point is which.
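A sketch of the naming convention's logic: if tracker names differ only in their first character, trackers pair up by the remainder of the name. The helper and names below are hypothetical and only illustrate the matching rule.

```python
# Sketch of "cross-link by name" matching: names that differ only in
# their first character pair up by everything after that character.
def cross_link(cam1_names, cam2_names):
    by_suffix = {name[1:]: name for name in cam1_names}
    links = {}
    for name in cam2_names:
        target = by_suffix.get(name[1:])
        if target is not None:
            links[name] = target  # Camera02 tracker -> Camera01 tracker
    return links

links = cross_link(["ANose", "AChin"], ["BNose", "BChin", "BBrow"])
```

Note the link direction matches the manual's rule: links go from the Camera02 trackers to the Camera01 trackers.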
Completing the Calibration
We‘re now ready to complete the calibration process. Change Camera02
to Indirectly solving mode on the Solver panel.


Note: the initial position of Camera01 is going to stay fixed, controlling the
overall positions of all the cameras. If you want it in some particular location, you
can remove the constraints from it, reset its path from the 3-D panel, then move it
around to a desired location.
Solve the shot, and you have two orbiting cameras remaining at a fixed
relative orientation as they orbit.
Run the Motion Capture Camera Calibration script from the Script
menu, and the orbits will be squished down to single locations. Camera01 will be
stationary at its initial location, and Camera02 will be jittering around another
location, showing the stability of the offset between the two. The first frame of
Camera02‘s position is actually an average relative position over the entire shot;
it is this location we will later use.
You should save this calibration scene file (porcupine.sni); it will be the
starting point for tracking the real footage. The calibration script also produces a
script_output.txt file in a user-specific folder that lists the calibration data.

Body and Facial Tracking Marks


Markers will make tracking faster, easier, and more accurate. On the
face, markers might be little Avery dots from an office supply store, "magic
marker" spots, pom-poms with rubber cement(?), mascara, or grease paint. Note
that small colored dots tend to lose their coloration in video images, especially
with motion blur. Make sure there is a luminance difference. Single-pixel-sized
spots are less accurate than those that are several pixels across.
Markers should be placed on the face in locations that reflect the
underlying musculature and the facial rigging they must drive. Be sure to include
markers on comparatively stationary parts of the head.
For body tracking, a typical approach is to put the performer in a black
outfit (such as UnderArmour), and attach table-tennis balls as tracking features
onto the joints. To achieve enough visibility, placing balls on both the top and
bottom of the elbow may be necessary. Because the markers must be placed on
the outside of the body, away from the true joint locations, character rigging will
have to take this into account.
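As a simple illustration of such an offset correction, the midpoint of the two elbow balls approximates the joint center, assuming the balls are placed symmetrically about it; a real rig may need a directional offset instead.

```python
# Sketch: estimate a true joint location from two surface markers
# (e.g. table-tennis balls on the top and bottom of the elbow) by
# taking their midpoint.
def joint_from_markers(top, bottom):
    return tuple((t + b) / 2.0 for t, b in zip(top, bottom))

elbow = joint_from_markers((1.0, 2.0, 3.0), (1.0, 1.0, 3.0))
```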

Preparation for Two-Dimensional Tracking


We‘re ready to begin tracking the actual performance footage. Open the
final calibration scene file. Open the 3-D panel. For each camera, select the
camera in the select-by-name dropdown list. Then hit Blast and answer yes to
store the field of view data as well. Then, hit Reset twice, answering yes to
remove keys from the field of view track also. The result of this little dance is to
take the solved camera paths (as modified by the script), and make them the
initial position and orientation for each camera, with no animation (since they
aren‘t actually moving).
Next, replace the shot for each camera with LeftFaceSeq and
RightFaceSeq. Again, these shots have been cropped based on the light flashes,
which would normally be removed completely. Set the End Frame for each shot
to its maximum possible. If necessary, use an animated ROI on the Image
Preprocessing panel so that you can keep both shots in RAM simultaneously. Hit
Control-A and delete to delete all the old trackers. Set each Lens to Known to
lock the field of view, and set the solving mode of each camera to Disabled, since
the cameras are fixed at their calibrated locations.
We need a placeholder object to hold all the individual trackers. Create a
moving object, Object01, for Camera01, then a moving object, Object02, for
Camera02. On the Solving Panel, set Object01 and Object02 to the Individual
mocap solving mode, and set the synchronization mode right below that.

Two-Dimensional Tracking
You can now track both shots, creating the trackers into Object01 and
Object02 for the respective shots. If you don‘t track all the markers, at least be
sure to track a given marker either in both shots, or none, as a half-tracked
marker will not help. The Hand-Held: Use Others mode may be helpful here for
the rapid facial motions. Frequent keying will be necessary when the motion
causes motion blur to appear and disappear (a lot of uniform light and short
shutter time will minimize this).

Linking the Shots


After completing the tracking, you must set up links. The easiest approach
will probably be to set up side-by-side camera and perspective views. Again, you
should link the Object02 trackers to the Object01 trackers, not the other way
around.
Doing the linking by name can also be helpful, since the trackers should
have fairly obvious names such as Nose or Left Inner Eyebrow, etc.


Solving

You‘re ready to solve, and the Solve step should be very routine,
producing paths for each of the linked trackers. The final file is facetrk.sni.
Afterwards, you can start checking on the trackers. You can scrub through
the shot in the perspective window, orbiting around the face. You can check the
error curves and XYZ paths in the graph editor. By switching to Sort by Error
mode, you can sequence through the trackers starting from those with the
highest error.

Exports & Rigging


When you export a scene with individual trackers, each of them will have a
key frame on each frame of the shot, animating the tracker path.
It is up to you to determine a method of rigging your character to take
advantage of the animated tracker paths. The method chosen will depend on
your character and animation software package. It is likely you will need some
expressions (formulas) and some Look-At controls. For full-body motion capture,
you will need to take into account the offsets from the tracking markers (ie balls)
to the actual joint locations.
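For example, a Look-At style expression reduces to aiming one point at another on each frame; the vector math below is generic, not a specific package's rigging API, and the tracker names are hypothetical.

```python
import math

# Sketch of a "Look At"-style expression: aim a bone from one animated
# tracker path toward another, evaluated per frame.
def look_at_direction(source, target):
    d = [t - s for s, t in zip(source, target)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]

# e.g. aim a jaw bone from a chin tracker toward a lower-lip tracker:
aim = look_at_direction((0.0, 0.0, 0.0), (0.0, 0.0, 2.0))
```

In practice your package's expression language would evaluate this on every frame using the imported tracker keys.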

Modeling
You can use the calculated point locations to build models. However, the
animation of the vertices will not be carried forward into the meshes you build.
Instead, when you do a Convert to Mesh operation in the perspective window,
the current tracker locations are frozen on that frame.
If desired, you can repeat the object-building process on different frames
to build up a collection of morph-target meshes.

Finding Light Positions
After you have solved the scene, you can optionally use SynthEyes to
calculate the position of, or at least direction to, principal lights affecting the
scene. You might determine the location of a spotlight on the set, or the direction
to the sun outdoors. In either case, knowing the lighting will help you match your
computer-graphic scene to the live footage.
SynthEyes can use either shadows or highlights to locate the lights. For
shadow tracking, you must track both the object casting the shadow, and the
shadow itself, determining a 3-D location for each. For highlight tracking, you will
track a moving highlight (mainly in 2-D), and you must create a 3-D mesh
(generally from an external modeling application, or a SynthEyes 3-D primitive)
that exactly matches the geometry on which the highlight is reflected.

Lights from Shadows


Consider the two supervised trackers in the image below from the BigBall
example scene:

One tracks the spout of a teacup, the other tracks the spout‘s shadow on
the table. After solving the scene, we have the 3-D position of both. The
procedure to locate the light in this situation is as follows.

Switch to the Lighting Control Panel. Click the New Light button, then
the New Ray button. In the camera view, click on the spout tracker, then on the
tracker for the spout‘s shadow.
We could turn on the Far-away light checkbox, if the light were the sun, so
that the direction of the light is the same everywhere in the scene. Instead, we‘ll
leave the checkbox off and set the distance spinner to 100, moving the
light away that distance from the target.
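Geometrically, the light sits on the ray running from the shadow tracker (the target) through the casting tracker (the source), pushed out from the target by the Distance value. A sketch of that placement, using generic vector math:

```python
import math

# Sketch: place a light along the shadow ray. The light, the casting
# point (source), and its shadow (target) are collinear; the Distance
# value sets how far from the target the light is placed.
def light_position(source, target, distance):
    d = [s - t for s, t in zip(source, target)]
    length = math.sqrt(sum(c * c for c in d))
    unit = [c / length for c in d]
    return [t + distance * u for t, u in zip(target, unit)]

# Spout 1 unit above its shadow, light 100 units from the shadow:
light = light_position((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 100.0)
```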


The light will now be positioned so that it would cast a shadow from the
one tracker to the next; you can see it in the 3-D views. The lighting on any mesh
objects in the scene changes to reflect this light position, and you see the
shadows in the perspective view. You can repeat this process for the second
light, since the spout casts two shadows. This scene is Teacup.sni.
If the scene contained two different teapot-type setups lit by the same
single light, you could place two rays on one light, and the 3-D position of the light
would be triangulated, without any need for a distance.
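With two rays, triangulating the light amounts to finding the midpoint of the shortest segment between the two 3-D lines. This is the standard closest-point construction, shown here as an independent sketch rather than SynthEyes' internal code:

```python
import numpy as np

# Sketch: triangulate a point from two 3-D rays (point + direction) as
# the midpoint of the shortest segment between the two lines.
def triangulate(p1, d1, p2, d2):
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Two rays that intersect at (1, 1, 0):
light = triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]),
                    np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.0]))
```

With real tracking data the rays rarely intersect exactly, which is why the midpoint of the closest approach is used.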
SynthEyes handles another important case, where you have walls, fences,
or other linear features casting shadows, but you can not say that a single point
casts a shadow at another single point. Instead, you may know that a point casts
a shadow somewhere on a line, or a line casts a shadow onto a point. This is
tantamount to knowing that the light falls somewhere in a particular 3-D plane.
With two such planes, you can identify the light‘s direction; with four you may be
able to locate it in 3-D.
To tell SynthEyes about a planar constraint, you must set up two different
rays, one with the common tracker and one point on the wall/fence/etc., and the
other ray containing the common tracker and the other point on the
wall/fence/etc.
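For a far-away light, each such plane contains the light direction, so two planes pin it down: the direction lies along the intersection of the planes, which is (up to sign) the cross product of their normals. A generic sketch of that construction:

```python
import numpy as np

def plane_normal(p0, p1, p2):
    # Normal of the plane through three points (e.g. the common tracker
    # and two points along the wall/fence shadow).
    return np.cross(p1 - p0, p2 - p0)

def light_direction(n1, n2):
    # The far-away light direction lies in both planes, hence along
    # their line of intersection: the cross product of the normals.
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

# Two planes with normals +y and +x intersect along the z axis:
ldir = light_direction(np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```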

Lights from Highlights


If you can place a mesh into the scene that exactly matches that portion of
the scene‘s geometry, and if there is a specular highlight reflected from that
geometry, you can determine the direction to the light, and potentially its position
as well.
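The underlying geometry is mirror reflection: with a known surface point and surface normal, the direction to the light is the view vector reflected about the normal. A generic sketch of that calculation (the positions are made up for illustration):

```python
import numpy as np

# Sketch of the mirror-reflection geometry behind highlight tracking.
def light_dir_from_highlight(camera_pos, surface_pt, normal):
    n = normal / np.linalg.norm(normal)
    v = camera_pos - surface_pt
    v = v / np.linalg.norm(v)
    return 2.0 * (n @ v) * n - v   # reflect the view vector about n

# Camera straight above a surface point whose normal tilts 45 degrees:
ldir = light_dir_from_highlight(np.array([0.0, 0.0, 5.0]),
                                np.array([0.0, 0.0, 0.0]),
                                np.array([1.0, 0.0, 1.0]))
```

This is why an accurate mesh (and good vertex normals) matters: the normal at the highlight point drives the whole calculation.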
To illustrate, we‘ll overview an example shot, BigBall. After opening the
shot, it can be tracked automatically or with supervised trackers (symmetric
trackers will work well). If you auto-track, kill all the trackers in the interior of the
ball, and the reflections on the teapot as well.

Set up a coordinate system as shown above—the tracker at lower left is
the origin, the one at lower right is on the left/right axis at 11.75", and the tracker
at center left is on the floor plane. Solve the shot. [Note: no need to convert units,
the 11.75" could be cm, meters, etc.]
Create symmetric supervised trackers for the two primary light reflections
at center top of the ball and track them though the shot. Change them both to
zero-weighted trackers (ZWT) on the tracker panel—we don‘t want them to affect
the 3-D solution.
To calculate the reflection from the ball, SynthEyes requires matching
geometry. Create a sphere. Set its height coordinate to be 3" and its size to be
12.25". Slide it around in the top view until the mesh matches up with the image
of the ball. You can zoom in on the top view for finer positioning, and into the
camera view for more accurate comparison.
The lighting calculations can be more accurate when vertex normals are
available. In your own shots, you may want to import a known mesh, for
example, from a scan. In this case, be sure to supply a mesh that has vertex
normals, or at least, use the Create Smooth Normals command of the
Perspective window.

On the lighting control panel, add a new light, click the New Ray
button, then click one of the two highlight trackers twice in succession, setting
that tracker as both the Source and Target. The target button will change to read
"(highlight)". Raise the Distance spinner to 48", which is an estimated value (not
needed for Far-away lights). From the quad view, you‘ll see the light hanging in


the air above the ball, as in reality. Add a second light for the second highlight
tracker.
If you scrub through the shot, you‘ll see the lights moving slightly as the
camera moves. This reflects the small errors in tracking and mesh positioning.
You can get a single average position for the light as follows: select the light,
select the first ray if it isn‘t already by clicking ">", then click the "All" button.
This will load up your CPU a bit as the light position is being repeatedly averaged
over all the frames. This can be helpful if you want to adjust the mesh or tracker,
but you can avoid further calculations by hitting the Lock button. If you later
change some things, you can hit the Lock button again to cause a recalculation.

In favorable circumstances, you will not need an approximate light
height or distance. The calculation SynthEyes is making with All or Lock
selected is more than just an average—it is able to triangulate to find an exact
light position. As it turns out, often, as in this example shot, the geometry of the
lights, mesh, and camera does not allow that to be done accurately, because the
shift in highlight position as the camera moves is generally quite small. (You can
test this by turning the distance constraint down to zero and hitting Lock again.)
But it may be possible if the camera is moving extensively, for example, dollying
along the side of a car, when a good mesh for the car is available.

Curve Tracking and Analysis in 3-D
While the bulk of SynthEyes is concerned with determining the location of
points in 3-D, sometimes it can be essential to determine the shape of a curve in
3-D, even if that curve has no trackable points on it, and every point along the
curve appears the same as every other. For example, it might be the curve of a
highway overpass to which a car chase must be added, the shape of a window
opening on a car, or the shape of a sidewalk on a hilly road, which must be used
as a 3-D masking edge for an architectural insert.
In such situations, acquiring the 3-D shape can be a tremendous
advantage, and SynthEyes can now bring it to you using its novel curve tracking
and flex solving capability, as operated with the Flex/Curve Control Panel.

Terminology
There‘s a bit of new terminology to define here, since there are both 2-D
and 3-D curves being considered.
Curve. This refers to a spline-like 2-D curve. It will always live on one
particular shot‘s images, and is animated with a different location on each frame.
Flex. A spline-like 3-D curve. A flex resides in 3-D, though it may be
attached to a moving object. One or more curves will be attached to the flex;
those curves will be analyzed to determine the 3-D shape of the flex.
Rough-in. Placing control-point keys periodically and approximately.
Tuning a curve. Adjusting a curve so it matches edges exactly.

Overview
Here‘s the overall process for using the curve and flex system to
determine a 3-D curve. The quick synopsis is that we will get the 2-D curves
positioned exactly on each frame throughout the shot, then run a 3-D solving
stage. Note that the ordering of the steps can be changed around a bit, and
additional wrinkles added, once you know what you are doing — this is the
simplest and easiest to explain.
1. Open the shot in SynthEyes
2. Obtain a 3-D camera solution, using automatic or supervised tracking
3. At the beginning of the shot, create a (2-D) curve corresponding to the
flex-to-be.
4. "Rough in" the path of the curve, with control-point animation keys
throughout the shot. There is a tool that can help do this, using the
existing point trackers.
5. Tune the curve to precisely match the underlying edges (manual or
automatic).


6. Draw a new flex in an approximate location. Assign the curve to it.


7. Configure the handling of the ends of the flex.
8. Solve the flex
9. Export the flex or convert it to a series of trackers.

Shot Planning and Limitations


Determining the 3-D position of a curve is at the mercy of underlying
mathematics, just as is the 3-D camera analysis performed by the rest of
SynthEyes. Because every point along a curve/flex is equivalent, there is
necessarily less information in the curve data than in a collection of trackers.
As a result, first, flex analysis can only be performed after a successful
normal 3-D solve that has determined camera path and field of view. The curve
data can not help obtain that solve; it does not replace and is not equivalent to
the data of several trackers.
Additionally, the camera motion must be richer and more complex than for
a collection of trackers. Consider a flex consisting of a horizontal line, perhaps a
clothesline or the top of a fence. If the camera moves left to right so that its path
is parallel to the flex, no 3-D information (depth) can be produced for the flex. If
the camera moves vertically, then the depth information can be obtained. The
situation is reversed for a vertical line: a vertical camera motion will not produce
any depth information.
Generally, both the shape of the flex and camera path will be more
complex, and you will need to ensure that the camera path is sufficiently complex
to produce adequate depth information for all of the flex. If the flex is circular,
and the camera motion horizontal, then the top and bottom of the circle will not
have well-defined depth. The flex will prefer a flat configuration, which is often,
but not necessarily, correct.
Note that a simple diagonal motion will not solve this problem: it will not
explore the depth in the portion of the circle that is parallel to the motion path.
The camera path must itself curve to more completely identify the depth all the
way around the circle — hence the comment that the camera motion must itself
be more complex than for point tracking.
Similarly, tripod (nodal pan) shots are not suitable for use with the curve &
flex solving system. As with point tracking, tripod shots do not produce any depth
information.
Flexes and curves are not closed like the letter O — they are open like the
letter U or C. Also, they do not contain corners, like a V. Nor do they contain
tangency handles, since the curvature is controlled by SynthEyes.
Generally, the curve will be set up to track a fairly visible edge in the
image. Very marginal edges can still be used and solved to produce a flex, if you
are willing to do the tracking by hand.


Initial Curve Setup


Once you have identified the section of curve to be tracked and made into
a 3-D flex, you should open the Flex Control Panel, which contains both
flex and curve controls, and select the camera view.
Click the New Curve button, then, in the Camera View, click along the
section of curve to be tracked, creating control points as you go. Place additional
control points in areas of rapid curvature, and at extremal points of the curve.
Avoid areas where there is no trackable edge, if possible.
When you have finished with the last control point, right-click to exit the
curve creation mode.

Roughing in the Curve Keys


Next, we will approximately position the curve to track the underlying
edge. This can be done manually or automatically, if the situation permits.
Manual Roughing
For manual roughing, you move through the shot and periodically re-set
the position of the curve. If you start at the ends and then successively correct
the curve wherever it is most wrong, this usually isn't too time-consuming
(unless the shot is a jumpy hand-held one). SynthEyes splines the control point
positions over time.
To re-set the curve, you can drag the entire curve into an approximate
position, then adjust the control points as necessary. If you find you need
additional control points, you can shift-click within the curve to create them.
You should monitor the control point density so that you don't bunch many
of them in the same place. But you do not have to worry about control points
"chattering" in position along the curve. This will not affect SynthEyes or the
resulting flex.
Automatic Roughing
SynthEyes can automatically rough the curve into place with a special tool
— as long as there is a collection of trackers around the curve (not just one end),
such that the trackers and curve are all roughly on the same plane.
When this is the case, shift-select all the trackers you want to use, click
the Rough button on the Flex control panel, then click the curve to be roughed
into place.
The Rough Curve Import panel will appear, a simple affair.


The first field asks how many trackers must be valid for the roughing
process to continue. In this case, 5 trackers were selected to start. As shown, it
will continue even if only one is valid. If the value is raised to 5, the process will
stop once any tracker becomes invalid. If only a few trackers are valid (especially
fewer than 4), the predictions of the curve shape will be less useful.
The Key every N frames setting controls how often the curve is keyed. At
the default setting of 1, a key will be placed at every frame, which is suitable for a
hand-held shot, but less convenient to subsequently refine. For a smooth shot, a
value of 10-20 might be more appropriate.
The Rough Curve Importer will start at the current frame, and begin
creating keys every so often as specified. It will stop if it reaches the end of the
shot, if there are too few trackers still valid, or if it passes by any existing key on
the curve. You can take advantage of this last point to "fill in" keys selectively as
needed, using different sets of trackers at different times, for example.
After you've used the Rough Curve Import tool, you should scrub through
the shot to look for any places where additional manual tweaking is required.
The curve may go offscreen or be obscured. If this happens, you can use
the curve Enable checkbox to disable the curve. Note that it is OK if the curve
goes partly offscreen, as long as there is enough information to locate it while it is
onscreen.

Curve Tuning
Once the curve has been roughed into place, you're ready to "tune" it to
place it more accurately along the edge. Of course, you can do this all by hand,
and in adverse conditions, that may be necessary. But it is much better to use
the automated Tune tool.
You can tune either a single frame, with the Tune button, or all of the
frames, with the All button. When a curve is tuned on a frame, the
curve control points will latch onto the nearby edge.
For this reason, before you begin tuning, you may wish to create
additional control points along the curve, by shift-clicking it.


The All button will bring up a control panel that controls both the single-
and multi-frame tuning. If you want to adjust the parameters without tuning all the
frames, simply close the dialog instead of hitting its Go button.

You can adjust to edges of different widths, control the distance within
which the edge is searched, and alter the trade-off between a large distant edge,
and a smaller nearby one. Clearly, it is going to be easier to track edges with no
nearby edges of similar magnitude.
The control panel allows you to tune all frames (potentially just those
within the animation playback range), only the frames that already have keys (to
tune your roughed-in frames), or only the frames that do not have keys (to
preserve your previously-keyed frames).
You can also tell the tracking dialog to use the tuned locations as it
estimates (using splining) where the curve is in subsequent frames, by turning on
the Continuous Update checkbox. If you have a simple curve well-separated
from confounding factors, you can use this feature to track a curve through a shot
without roughing it in first. The drawback of doing this is that if the curve does get
off course, you can wind up with many bad keys that must be repaired or
replaced. [You can remove erroneous keys using Truncate.] With the Continuous
Update box off, the tuning process is more predictable, relying solely on your
roughed-in animation.

Flex Creation
With your curve(s) complete, you can now create a flex, which is the 3-D
splined curve that will be made to match the curve animation. The flex will be
created in 3-D in a position that approximately matches its actual position and
shape. It is usually most convenient to open the Quad view, so that you can see
the camera view at the same time you create the flex in one of the 3-D views
(such as the Top view).


Click the New Flex button, then begin clicking in the chosen 3-D view to
lay out a succession of control points. Right-click to end the mode. You can now
adjust the flex control points as needed to better match the curve. You should
keep the flex somewhat shorter than the curve.
To attach the curve to the flex, select the curve in the camera view, then,
on the flex control panel, change the parent-flex list box for the curve to be your
flex. (Note: if you create a flex, then a curve while the flex is still selected, the
curve is automatically connected to the flex.)

Flex Endpoints
The flex's endpoints must be "nailed down" so that the flex cannot simply
shrivel up along the length of the curve, or pour off the end. The ends are
controlled by one of several different means:
1. the end of the flex can stay even with its initial position,
2. the end of the flex can stay even with a specific tracker, or
3. the end of the flex can exactly match the position of a tracker.
The first method is the default. The last method is possible only if there is
a tracker at the desired location; this arises most often when several lines
intersect. You can track the intersection, then force all of the flexes to meet at the
same 3-D location.
To set the starting or ending tracker location for a flex, click the Start Pt or
End Pt button, then click on the desired tracker. Note that the current 3-D
location of the tracker will be saved, so if you re-track or re-solve, you will need to
reset the endpoint.
The flex will end "even" with the specified point, meaning that the point
is perpendicular to the end of the flex. To match the position exactly, turn on the
Exact button.

Flex Solving
Now that you've got the curve and flex set up, you are ready to solve. This
is very easy — click the Solve button (or Solve All if you have several flexes
ready to be solved).
After you solve a flex, the control points will no longer be visible—they are
replaced by a more densely sampled sequence of non-editable points. If you
want to get back to the original control points to adjust the initial configuration,
you can click Clear.

Flex Exports
Once you have solved the flex, you can export it. At present, there are two
principal export paths. The flexes are not currently exported as part of regular
tracker exports.


First, you can convert the flex into a sequence of trackers with the Convert
Flex to Trackers script on the Script menu. The trackers can be exported directly,
or, more usefully, you can use them in the Perspective window to create a mesh
containing those trackers. For example, on a building project where the flex is the
edge of the road, you can create a ground mesh to be landscaped, and still have
it connect smoothly with the road, even if the road is not planar.
Second, you can export the coordinates of the points along the flex into a
text file using the Flex Vertex Coordinates exporter. Using that file is up to you,
though it should be possible to use it to create paths in most packages.

Merging Files and Tracks
When you are working on scenarios with multiple shots or objects, you
may wish to combine different SynthEyes .sni files together. For example, you
may track a wide reference shot, and want to use those trackers as indirect links
for several other shots. You can save the tracked reference shot, then use the
File/Merge option to combine it with each of several other files.
Alternatively, you can transfer 2-D or 3-D data from one file to another, in
the process making a variety of adjustments to it as discussed in the second
subsection. You can track a file in several different auto-track sections, and
recombine them using the scripts.

File/Merge
After you start File/Merge and select a file to merge, you will be asked
whether or not to rename the trackers as necessary, to make them unique. If the
current scene has Camera01 with trackers Tracker01 to Tracker05, and the
scene being merged also has Camera01 with trackers Tracker01 to Tracker05,
then answering yes will result in Camera01 with Tracker01 to Tracker05 and
Camera02 with Tracker06 to Tracker10. If you answer no, Camera01 will have
Tracker01 to Tracker05 and Camera02 will also have (different) Tracker01 to
Tracker05, which is more confusing to people than machines.
As that example shows indirectly, cameras, objects, meshes, and lights
are always renamed to be unique. Renaming is always done by appending a
number: if the incoming and current scenes both have a TrashCan, the incoming
one will be renamed to TrashCan1.
If you are combining a shot with a previously-tracked reference, you will
probably want to keep the existing tracker names, to make it easiest to find
matching ones. Otherwise, renaming them with yes is probably the least
confusing unless you have a particular knowledge of the TrackerNN assignments
(in which case, giving them actual names such as Scuff1 is probably best).
You might occasionally track one portion of a shot in one scene file, and
track a different portion of the same shot in a separate file. You can combine the
scene files onto a single camera as follows:
1. Open the first shot
2. File/Merge the second shot.
3. Answer yes to make tracker names unique (important!)
4. Select Camera02 from the Shot menu.
5. Hit control-A to select all its trackers.

6. Go to the Coordinate System Panel.


7. Change the trackers' host object from Camera02 to *Camera01.


(The * before the camera name indicates that you are moving the
tracker to a different, but compatible, shot.)
8. Delete any moving objects, lights, or meshes attached to
Camera02.
9. Select Remove Object on the Shot menu to delete Camera02.
All the trackers will now be on the single Camera01. Notice how Remove
Object can be used to remove a moving object or a camera and its shot. In each
case, however, any other moving objects, trackers, lights, meshes, etc., must be
removed first, or the Remove Object will be ignored.

Tracker Data Transfer


You can transfer tracking data from file to file using SynthEyes export
scripts, File/Export/Export 2-D Tracker Paths, and File/Import/Import 2-D
Tracker Paths. These scripts can be used to interchange with other programs
that support similar tracking data formats. The scripts can be used to make a
number of remedial transforms as well, such as repairing track data if the source
footage is replaced with a new version that is cropped differently.
The simple data format, a tracker name, frame number, horizontal and
vertical positions, and an optional status code, also permits external
manipulations by UNIX-style scripts and even spreadsheets.
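As an illustration of that kind of external manipulation, here is a minimal Python filter that shifts every frame number in such a file by a fixed offset. The field order (name, frame, horizontal, vertical, optional status) follows the description above; the whitespace conventions of the real files may differ:

```python
def offset_frames(lines, offset):
    """Shift the frame-number field of each 'name frame u v [status]' line."""
    out = []
    for line in lines:
        fields = line.split()
        if len(fields) >= 4:          # name, frame, u, v at minimum
            fields[1] = str(int(fields[1]) + offset)
        out.append(" ".join(fields))
    return out
```

Run over a whole file, this does the same job as the importer's Frame Offset field, but outside SynthEyes.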
Exporting
Initiate the Export 2-D Tracker Paths script, select a file, and a script-
generated dialog box will appear:


As can be seen, it affords quite a bit of control.


The first three fields control the range of frames to be exported, in this
case, frames 10 through 15. The offset allows the frame numbers in the file to be
somewhat different; for example, -10 would make the first exported frame
appear to be frame zero, as if frame 10 were the start of the shot.
The next four fields, two scales and two offsets, manipulate the horizontal
(U) and vertical (V) coordinates. SynthEyes defines these to range from -1 to +1,
running left to right and top to bottom. Each coordinate is multiplied by its scale,
then the offset is added. The normal defaults are scale=1 and offset=0. The
values of 0.5 and 0.5 shown rework the ranges to go from 0 to 1, as may be used
by other programs. A scale of -0.5 would change the vertical coordinate to run
from bottom to top, for example.
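The arithmetic is simple enough to verify in a few lines of Python (a sketch of the rule described above, not SynthEyes code):

```python
def remap(coord, scale, offset):
    # Scale is applied first, then the offset is added.
    return coord * scale + offset

# Scale 0.5, offset 0.5 maps the -1..+1 range onto 0..1:
assert remap(-1.0, 0.5, 0.5) == 0.0
assert remap(+1.0, 0.5, 0.5) == 1.0
# A vertical scale of -0.5 flips the axis to run bottom-to-top:
assert remap(-1.0, -0.5, 0.5) == 1.0
```
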
The scales and offsets can be used for a variety of fixes, including
changes in the source imagery. You'll have to cook up the scale and offset on
your own, though. Note that if you are writing a tracker file from SynthEyes and will
then read it back in with a transform, it is easiest to write it with scale=1 and
offset=0, then make the changes as you read it in; that way, if you need to try
again, you can retry the import without having to re-export.
Continuing with the controls, Even when missing causes a line to be
output even if the tracker was not found in that frame. This permits a more
accurate import, though other programs are less likely to understand the file.
Similarly, the Include Outcome Codes checkbox controls whether or not a small
numeric code appears on each line that indicates what was found; it permits a
more accurate import, though it is less likely to be understood elsewhere.
The 2-D tracks box controls whether or not the raw 2-D tracking data is
output; this is not necessarily mandatory, as you'll see.
The 3-D tracks box controls whether or not the 3-D path of each tracker is
included; this will be the 2-D path of the solved 3-D position, and is quite
smooth. In the example, 3-D paths are exported and 2-D paths are not, which is
the reverse of the default. When the 3-D paths are exported, an extra Suffix for
3-D can be added to the tracker names; usually this is _3D, so that if both are
output, you can tell which is which.
Finally, the Extra Points box controls whether or not the 2-D paths of any
extra helper points in the scene are output.
Importing
The File/Import/Import 2-D Tracker Paths import can be used to read the
output of the 2-D exporter, or from other programs as well. The import script
offers a similar set of controls to the exporter:


The import runs roughly in reverse of the export. The frame offset is
applied to the frame numbers in the file, and only those within the selected first
and last frames are stored.
The scale and offset can be adjusted; by default they are 1 and 0
respectively. The values of 2 and -1 shown undo the effect of the 0.5/0.5 in the
example export panel.
If you are importing several different tracker data files into a single moving
object or camera, you may have several different trackers all named Tracker1,
for example, and after combining the files, this would be undesirable. Instead, by
turning on Force unique names, each would be assigned a new unique name.
Of course, if you have done supervised tracking in some different files to
combine, you might well leave it off, to combine the paths together.
If the input data file contains data only for frames where a tracker has
been found, the tracker will still be enabled past the last valid frame. By turning
on Truncate enables after last, the enable will be turned off after the last valid
frame.
After each tracker is read, it is locked up. You can unlock and modify it as
necessary. The tracking data file contains only the basic path data, so you will
probably want to adjust the tracker size, search size, etc.
If you will be writing your own tracker data file for this script to import, note
that the lines must be sorted so that the lines for each specific tracker are
contiguous, and sorted in order of ascending frame number. This convention
makes everyone's scripts simpler. Also note that tracker names in the file
never contain spaces; any spaces will have been changed to underscores.
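If you generate the file with your own script, a sort key like this one (a Python sketch, using the field order described above) enforces that ordering:

```python
def sort_tracker_lines(lines):
    """Group lines by tracker name, ascending frame number within each group.
    Assumes the 'name frame u v [status]' field order described above."""
    return sorted(lines, key=lambda ln: (ln.split()[0], int(ln.split()[1])))
```

Note the integer conversion on the frame field: a plain text sort would put frame 10 before frame 2.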


Transferring 3-D Paths


The path of a camera or object can be exported into a plain file containing
a frame number, 3 positions, 3 rotations, and an optional zoom channel (field of
view or focal length).
Like the 2-D exporter, the File/Export/Plain Camera Path exporter
provides a variety of options:
First Frame. First frame to export
Last Frame. Last frame to export.
Frame Offset. Add this value to the frame number before storing it in the file.
World Scaling. Multiplies the X, Y, Z coordinates, making the path bigger or
smaller.
Axis Mode. Radio-buttons for Z Up; Y Up, Right; Y Up, Left. Adjust to select the
desired output alignment, overriding the current SynthEyes scene setting.
Rotation Order. Radio buttons: XYZ or ZXY. Controls the interpretation of the 3
rotation angles in the file.
Zoom Channel. Radio buttons: None, Field of View, Vertical Field of View, Focal
Length. Controls the 7th data channel, namely what kind of field of view
data is output, if any.
Look the other way. The SynthEyes camera looks along the -Z axis; some systems
have the camera look along +Z. Select this checkbox for those other
systems.
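Assuming the channels appear as whitespace-separated columns in the order just listed (an assumption; inspect an exported file to confirm), a reader sketch in Python could be:

```python
def parse_camera_path(lines):
    """Parse 'frame x y z rx ry rz [zoom]' lines into per-frame dicts.

    Column order is assumed from the channel list above; the optional
    eighth column is the zoom channel, if one was exported."""
    frames = []
    for line in lines:
        f = line.split()
        if len(f) < 7:
            continue                     # skip blank or short lines
        entry = {
            "frame": int(f[0]),
            "position": tuple(float(v) for v in f[1:4]),
            "rotation": tuple(float(v) for v in f[4:7]),
        }
        if len(f) >= 8:
            entry["zoom"] = float(f[7])
        frames.append(entry)
    return frames
```

Interpreting the three rotation angles still requires knowing the axis mode and rotation order selected at export time.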
The 3-D path importer, File/Import/Camera/Object Path, has the same
set of options. Though this seems redundant, it lets the importer read flexibly
from other packages. If you are writing from SynthEyes and then reading the
same data back in, you can leave the settings at their defaults on both export and
import (unless you want to time-shift too, for example). If you are changing
something, usually it is best to do it on the import, rather than the export.

Writing 3-D Tracker Positions


You can output the trackers‘ 3-D positions using the File/Export/Plain
Trackers script with these options:
Tracker Names. Radio buttons: At beginning, At end of line, None. Controls
where the tracker names are placed on each output line. The end of line
option allows tracker names that contain spaces. Spaces are changed to
underscores if the names are at the beginning of the line.
Include Extras. If enabled, any helper points are also included in the file.
World Scaling. Multiplies the coordinates to increase or decrease overall
scaling.
Axis Mode. Temporarily changes the coordinate system setting as selected.

Reading 3-D Tracker Positions


On the input side, there is a File/Import/Tracker Locations option and
a File/Import/Extra Points option. Neither has any controls; they automatically


detect whether the name is at the beginning or end of the line. Putting the
names at the end of each line is most flexible, because then there is no problem
with spaces embedded in the names. A sample file might consist of lines such
as:
0 0 0 Origin
10 0 0 OnXAxis
13 -5 0 OnGroundPlane
22 10 0 AnotherGroundPlane
3 4 12 LightPole
When importing trackers, the coordinates are automatically set up as a
seed position on the tracker. You may want to change it to a Lock constraint as
well. If a tracker of the given name does not exist, a new tracker will be created.

Batch File Processing
The SynthEyes batch file processor lets you queue up a series of shots for
match-moving or file-sequence rendering, over lunch or overnight. Please follow
these steps:
1. In SynthEyes, do a File/New and select the first/next shot.
2. Adjust shot settings in SynthEyes as needed, for example, set it to
zoom or tripod mode, and do an initial export — the same kind will
be used at the completion of batch processing. You can skip the
export if you want to use the export type and export folder from the
preferences.
3. Hit File/Submit for Batch to submit a file for tracking and solving.
4. To render image sequences for shots that have been tracked,
converged, undistorted, or otherwise manipulated in the image
preprocessor, configure the Save Sequence dialog, close it, then hit
File/Submit for Rendering.
5. Repeat steps 1-3 or 4 for each shot.
6. Start the SynthEyes batch file processor with File/Batch Process
or from the Windows Start menu, All Programs, Andersson
Technologies LLC, SynthEyes Batcher.
7. Wait for one or more files to be completed.
8. Open the completed files from the Batch output folder.
9. Complete shot tracking as needed, such as assigning a coordinate
system, tracker cleanup, etc. followed by a Refine pass.
While the batcher runs, you can continue to run SynthEyes interactively
(only on the same machine), which is especially useful for setting up additional
shots, or finishing previously-completed ones.
Note: it is more efficient to use the batcher to process one shot while you
work on another, instead of starting two SynthEyes windows, because the
batcher does not attempt to load the entire shot into playback RAM; most RAM
therefore remains available for your interactive SynthEyes window.

Details
SynthEyes uses several folders for batch file processing. Submit for Batch
places scene files into the input folder; as each one is processed, exports are
written to the export folder, the completed scene file is written to the output
folder, and the input file is removed. You can set the location of the input,
export, and output folders from the Preferences panel.


Dedicated UNIX-style programmers can see that this can be exploited to
produce a "tracking farm" (a license per machine is required!), but it will take a
little bit more work than simply sharing the input queue folder, because there is no
per-file locking. A small daemon must copy files out of the input queue to the
queues of the individual worker machines. If demand warrants, this could be
integrated.
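Such a dispatcher might amount to very little code. As a rough sketch (hypothetical, with locking and error handling left out), one pass of it could move queued .sni files round-robin into per-worker folders:

```python
import itertools, os, shutil

def distribute_once(shared_queue, worker_queues):
    """Move each queued .sni file into a worker's queue folder, round-robin.
    A real daemon would run this periodically and guard against partial copies."""
    workers = itertools.cycle(worker_queues)
    moved = []
    for name in sorted(os.listdir(shared_queue)):
        if name.endswith(".sni"):
            dest = next(workers)
            shutil.move(os.path.join(shared_queue, name),
                        os.path.join(dest, name))
            moved.append((name, dest))
    return moved
```

Each worker machine would then run its own licensed copy of the batcher against its private queue folder.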
Thanks for reading this far!

SynthEyes Reference Material

System Requirements
Installation and Registration
Customer Care Features and Automatic Update
Menu Reference
Control Panel Reference
Additional Dialogs Reference
Viewport Features Reference
Perspective Window Reference
Overview of Standard Tool Scripts
Preferences and Scene Settings Reference
Keyboard Reference
Viewport Layout Manager
Support

System Requirements
PC
 Intel or AMD "x86" processor with SSE2, such as Pentium 4, Athlon 64,
Opteron, or Core/Core Duo. Note: SSE2 support is a requirement as of
SynthEyes 2007. Pentium 3 and Athlon XP/MP processors are not
supported.
 Windows Vista, XP. Supports XP's 3GB mode.
 32-bit version runs under Windows XP 64 Pro or 64-bit Vista. A separate
64-bit SynthEyes version is available.
 1 GB RAM typical. 512 MB suggested minimum. 2+ GB suggested for pro,
HD, and film users. 4+ GB are strongly suggested for 8-core machines.
 Mouse with middle scroll wheel/button. See the viewport reference section
for help using a trackball.
 1024x768 or larger display, 32 bit color, with OpenGL support. Large
multi-head configurations require graphics cards with sufficient memory.
 DirectX 8.x or later recommended, required for DV and usually MPEG.
 Quicktime 5 or later recommended, required to read .mov files.
 A supported 3-D animation or compositing package to export paths and
points to. Can be on a different machine, even a different operating
system, depending on the target package.
 A user familiar with general 3-D animation techniques such as key-
framing.

Mac OS X
 Intel Mac, G5 Mac or G4 Mac (marginal).
 1 GB RAM typical. 512 MB RAM suggested minimum. 2+ GB suggested
for pro, HD, and film users. 4+ GB are strongly suggested for 8-core
machines.
 3 button mouse with scroll wheel. See the viewport reference section for
help using a trackball or Microsoft Intellipoint mouse driver.
 1024x768 or larger display, 32 bit color, with OpenGL support. Large
multi-head configurations require graphics cards with sufficient memory.
 Mac OS 10.4 or 10.5.
 A supported 3-D animation or compositing package to export paths and
points to. Can be on a different machine, even a different operating
system, depending on the target package.
 A user familiar with general 3-D animation techniques such as key-
framing.

Interchange
The Mac OS X versions can read SynthEyes files created on Windows and
vice versa. Note that Windows, 64-bit Windows, and OS X licenses must be
purchased separately; licenses are not cross-platform.

Installation and Registration
Following sections describe installation for the PC and, separately, Mac.
After installation, follow the directions in the Registration page to activate the
product.

PC Installation
Please uninstall SynthEyes Demo before installing the actual product.
To install a downloaded SynthEyes, run the installer syn08setup.exe
(Syn0864Installer.msi for 64 bit), or insert the CD.
You can install to the default location, or any convenient location. The
installer will create shortcuts on the desktop for the SynthEyes program and
HTML documentation.
If you have a trackball or tablet, you may wish to turn on the No middle-
mouse preference setting to make alternate mouse modes available. See the
viewport reference section. You should turn on Enhance Tablet Response if
you have trouble stopping playback or tracking (Wacom appears to have fixed
the underlying issue in recent drivers, so getting a new tablet driver may be
another option.)
Proceed to the Registration section below.

PC Fine Print
If you receive this error message:
Error 1327.Invalid Drive E:\ (or other drive)
then Windows Installer wants to check something on that drive. This can occur if
you have a Firewire, network, or flash drive with a program installed to it, or an
important folder such as My Documents placed on it, if the drive is not turned on
or connected. The easiest cure is to turn the device on or reconnect it.
This behavior is part of Windows, see
http://support.installshield.com/kb/view.asp?articleid=q107921
http://support.microsoft.com/default.aspx?scid=kb;en-us;282183

PC - DirectX
SynthEyes requires Microsoft's DirectX 8 or later to be able to read DV
and MPEG shots. DirectX is a free download from Microsoft and is already a
component of many current games and applications. You may be able to verify
that you already have it by searching for the DirectX diagnostic tool dxdiag.exe,
located in \windows\system or \winnt\system32. If you run it, the system tab
shows the DirectX version number at the bottom of the system information.
To download and install DirectX, go to http://www.microsoft.com and
search for DirectX. Select a DirectX Runtime download for your operating


system. DirectX 9.0c or DirectX 10 are current versions. Download (~ 8 MB) and
install DirectX per Microsoft‘s directions.

PC - QuickTime
If you have shots contained in QuickTime™ (Apple) movies (i.e., .mov files),
you must have Apple's QuickTime installed on your computer. If you use a
capture card that produces QuickTime movies, you will already have QuickTime
installed. SynthEyes can also produce preview movies in QuickTime format.
You can download QuickTime from
http://www.apple.com/quicktime/download/
QuickTime Pro is not required for SynthEyes to read or write files.
Note that at present Apple does not offer a 64-bit version of QuickTime, so
QuickTime support is not available in 64-bit SynthEyes.

Mac OS X Installation
1. Download the Syn08MT.dmg file to a convenient location on the Mac.
2. Double-click it to open it and expose the SynthEyes installation package.
3. Double-click the installation bundle to launch the install.
4. Proceed through a normal install; you will need root permissions.
5. Eject the .dmg file from the Finder; it can then be deleted.
6. Start SynthEyes from your Applications folder. You can create a shortcut
on your desktop if you wish.
7. Proceed to the Registration directions below.
Note that pictures throughout this document are based on the PC version;
the Mac version will be very similar. In places where an ALT-click is called for on
a PC, a Command-click should be used on the Mac, though these should be
indicated in this manual.
If you have a trackball or Microsoft's Intellipoint mouse driver, you may
wish to turn on the No middle-mouse preference setting to make alternate
mouse modes available. See the viewport reference section. You should turn on
Enhance Tablet Response if you have trouble stopping playback or tracking
(Wacom appears to have fixed the underlying issue in recent drivers, so getting a
new tablet driver may be another option.)

Registration and Authorization Overview


After you order SynthEyes, you must register to receive your
permanent program authorization data. For your convenience, some
temporary registration data is automatically supplied as part of your order
confirmation, so you can put SynthEyes immediately to work.


For an online tutorial on registration and authorization, see


http://www.ssontech.com/content/regitut.htm
The overall process (described in more detail later) is this:
1. Order SynthEyes
2. Receive order confirmation with download information and
temporary authorization.
3. Download and install SynthEyes
4. Start SynthEyes, fill out registration form, and send data to
Andersson Technologies LLC.
5. Restart SynthEyes, enter the temporary authorization data.
6. Wait for the permanent authorization data to arrive.
7. Start SynthEyes and enter the permanent authorization data.

Registration
When you first start SynthEyes, a form will appear for you to enter
registration information. Alternatively, if you've entered the temporary
authorization data first, you can access the registration dialog from the
Help/Register menu item.

Proceed as follows:
1. Use copy and paste to transfer the entire serial number (starts with
SN- on PC, S6- on Win64, or IM- on OS X) from the email
confirmation of your purchase to the form.
2. Fill out the remainder of the form. Sorry if this seems redundant to
the original order form, but it is necessary. This data should


correspond to the user of the software. If the user has no clear
relationship to the purchaser (a freelancer, say), please have the
purchaser email us to let us know, so we don't have to check
before issuing the license.
3. Hit OK, and SynthEyes will place a block of data onto the clipboard.
Be sure to hit OK, not the other button; simple though it may seem,
this is a frequent cause of confusion.
4. An email composition window will now appear, using your system's
default email program. [If this does not happen, or to use a different
emailer, create a new message entitled "SynthEyes Registration"
addressed to register@ssontech.com.] Click inside the new-
message window's text area, then hit control-V (command-V on
Mac) to paste the information from SynthEyes into the message. If
the email is blank, or contains temporary authorization information,
be 300% sure you clicked OK after filling out the registration form.
5. If you are re-registering, after getting a new workstation, say, or
are not the person originally purchasing the software, please add a
remark to that effect to the mail.
6. Send the e-mail. Please use an email address for the organization
owning the license, not a personal gmail, hotmail, etc. address. We
cannot send confidential company authorization data to your
personal email; use of personal emails frequently causes problems
for license owners.
7. You will receive an e-mail reply, typically the next business day,
containing the authorization data. Be sure to save the mail for
future reference.

Authorization
1. View the email containing the authorization data.
2. Highlight the authorization information in your e-mail program —
everything from the left parenthesis "(" to the right parenthesis ")",
including both parentheses — and select Edit/Copy. Note: the serial
number (SN-, IM-, etc.) is not part of the authorization data but is
included above it only for reference, especially for multiple licenses.
3. Start SynthEyes. If the registration dialog box appears, click the
Use license on Clipboard button. If your temporary registration is
still active, the registration dialog will not appear, so click
Help/Authorize instead.
4. Vista: if you get a message that you must start SynthEyes with
Administrator permissions to authorize, you should go to the folder
\Program Files\Andersson Technologies LLC\SynthEyes, right-click
on SynthEyes.exe, and select "Run as administrator".
5. A "Customer Care Login Information" dialog will appear. You should
enter the support login and password that also came in the email
with the authorization data. The user ID looks like jan02, and the
password looks like jm323kx (these two will not work; use the ones
from your mail). Note: if you have a temporary license, you should
hit Cancel on this panel.
6. SynthEyes will acknowledge the data, then exit. When you restart
it, you should see your permanent information listed on the splash
screen, and you're ready to go.

PC Uninstallation
As with other Windows programs, use the Add/Remove Programs
tool from the Windows Control Panel to uninstall SynthEyes.

Mac Uninstallation
Delete the folders /Applications/SynthEyes, /Library/Application
Support/SynthEyes, and /Users/YourName/Library/Application
Support/SynthEyes. To get really draconian, delete OS X's secret record,
/Library/Receipts/Syn08MT.pkg.

Customer Care Features and Automatic Update
SynthEyes features an extensive customer care facility, aimed at helping
you get the information you need, and helping you stay current with the latest
SynthEyes builds, as easily as possible.
These facilities are accessed through 3 buttons on the main toolbar, and a
number of entries on the Help menu.
These features require internet access during use, but internet access is
not required for normal SynthEyes operation. You can use them over a dialup
line, and you can tell SynthEyes to connect only when you ask.
We strongly recommend using these facilities, based on past customer
experience! Note: some features operate slightly differently or are not available
in the demonstration version of the software. Also, Vista throws some
wrenches into the works.
For more information on accessing SynthEyes updates, see the tutorial on
configuring auto-update.

Customer Care Setup


The auto-update, messaging, and suggestions features all require login
information for the customer-only web site to operate. The necessary login and
password arrive with your SynthEyes authorization data (that big (….) thing), and
you are prompted to enter them immediately after authorizing, or by selecting the
Help/Set Update Info menu item. Customer Care uses the same login
information as for accessing the support site. If you do not have the customer
care login information yet, hit Cancel instead of entering it; this will not affect
program operation at all. The customer care facility also uses the full serial
number; it will be shown with a ... after you have entered it the first time.
If the D/L button is red when you start SynthEyes or check for updates,
internet operations are failing. You should check your serial number and login
information, if it is the first time, or check that you are really connected to the
internet.
Also, if you have an Internet firewall program on your computer, you must
permit SynthEyes to connect to the internet for the customer-care features to
operate. You'll have to check with your firewall software's manual or support for
details.

Checking for Updates


The update info dialog allows you to control how often SynthEyes checks
for updates from the ssontech.com web site. You can select never, daily, or on
startup, with daily the recommended selection.
SynthEyes automatically checks for updates when it starts up, each time
in "on startup" mode, but only the first time each day in "daily" mode. The check
is performed in the background, so that it does not slow you down. (Note: on
Vista, the daily setting will check each startup.)
You can easily check for updates manually, especially if you are in "never"
mode. Click the D/L button on the main toolbar, or Help/Check for updates.
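The three check policies reduce to a simple decision rule. As an illustration of the behavior described above (a sketch of the policy, not SynthEyes internals):

```python
import datetime

def should_check(mode, last_check, today):
    """Whether a startup update check should run, per the described policy.

    mode is 'never', 'daily', or 'on startup'; last_check is the date of the
    previous check (or None). Illustrative sketch, not SynthEyes code.
    """
    if mode == "never":
        return False              # manual checks only, via the D/L button
    if mode == "on startup":
        return True               # check at every launch
    # 'daily': only the first launch of a given day triggers a check
    return last_check is None or last_check < today

today = datetime.date(2009, 1, 5)
print(should_check("daily", datetime.date(2009, 1, 4), today))  # True
```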

Automatic Downloads
SynthEyes checks to determine the latest available build on the web site.
If the latest build is more current than its own build, SynthEyes begins a
download of the new version. The download takes place in the background as
you use SynthEyes. The D/L button will be yellow during the download.
Once the download is complete, the D/L button will turn green. When you
have reached a convenient time to install the new version, click the D/L button or
select the Help/Install Updated menu item. After making sure your work is saved,
and that you are ready to proceed, SynthEyes closes and opens the folder
containing the new installer.
Depending on your system and security settings, the installer may or may
not start automatically. If it does not start automatically, click it to begin
installation.
The same process occurs when you check for updates manually by
clicking the D/L button, with a few more explanatory messages.

Messages from Home


The Msg button and Help/Read Messages menu item are your portal to
special information from Andersson Technologies LLC to bring you the latest
word of updated scripts, tutorials, operating techniques, etc.
When the Msg button turns green, new messages are available; click it
and they will appear in a web browser window. You can click it again later too, if
you need to re-read something.

Suggestions
We maintain a feature-suggestion system to help bring you the most
useful and best-performing software possible. Click the Sug button on the
toolbar, or Help/Suggest a Feature menu item.
This miniature forum not only lets you submit requests, but comment and
vote on existing feature suggestions. (This is not the place for technical support
questions, however; please don't clog it up with them.)
Demo version customers: this area is not available. Send email to support
instead. Past experience has shown that most suggestions from demo customers
are already in SynthEyes; be sure to check the manual first!


Web Links
The Help menu contains a number of items that bring up web pages from
the www.ssontech.com web site for your convenience, including the main home
page, the tutorials page, and the forum.

E-Mail Links
The Help/Tech Support Mail item brings up an email composition window
preaddressed to technical support. Please investigate matters thoroughly before
resorting to this, consulting the manual, tutorials, support site, and forum.
If you do have to send mail, please include the following:
• Your name and organization
• An accurate subject line summarizing the issue
• A detailed description of your question or problem, including
information necessary to duplicate it, preferably from File/New
• Screen captures, if possible, showing all of SynthEyes
• A .sni scene file, after Clear All Blips, and ZIPped up (not RAR)
The better you describe what is happening, the quicker your issue can be
resolved.
Help/Report a Credit brings up a preaddressed email composition
window so that you can let us know about projects that you have tracked using
SynthEyes, so we can add them to our "As Seen On" web page. If you were
wondering why your great new project isn't listed there… this is the cure.

Menu Reference
File Menu
Many entries are Windows-standard. For example, File/New clears the
scene and also opens the Shot/Add Shot dialog.
File/Merge. Merges a previously-written SynthEyes .sni scene file with the
currently-open one, including shots, objects, trackers, meshes, etc. Most
elements are automatically assigned unique names to avoid conflicts, but
a dialog box lets you select whether or not trackers are assigned unique
names.
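The automatic unique-naming on merge is the familiar rename-on-conflict pattern. A minimal sketch of that idea (the numeric-suffix scheme here is my own illustration, not necessarily what SynthEyes uses):

```python
def unique_name(name, existing):
    """Return 'name', or 'name' plus a numeric suffix, avoiding collisions.

    'existing' is the set of names already in the scene. Hypothetical scheme.
    """
    if name not in existing:
        return name
    n = 2
    while f"{name}_{n}" in existing:
        n += 1
    return f"{name}_{n}"

print(unique_name("Tracker7", {"Tracker7", "Tracker7_2"}))  # Tracker7_3
```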
File/Import/Shot. Clears the scene and opens the Shot/Add Shot dialog, if
there are no existing shots, or adds an additional shot if one or more shots
are already present.
File/Import/Mesh. Imports a DXF or Alias/Wavefront OBJ mesh as a test object.
File/Import/Reload mesh. Reloads the selected mesh, if any. If the original file
is no longer accessible, allows a new location to be selected.
File/Import/Tracker Locations. Imports a text file composed of lines: x_value
y_value z_value Tracker_name. For each line, if there is an existing
tracker with that name, its seed position is set to the coordinates given. If
there is no tracker with that name, a new one is created with the specified
seed coordinates. Use to import a set of seed locations from a pre-existing
object model or set measurements, for example. New trackers use
settings from the tracker panel, if it is open. See the section on merging
files.
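Each line of such a file holds three coordinates followed by the tracker name. A rough sketch of a reader for that format (sample names and values are made up; this is not SynthEyes source):

```python
def parse_tracker_locations(text):
    """Parse 'x y z TrackerName' lines into a dict of name -> (x, y, z)."""
    seeds = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 4:
            continue  # skip blank or malformed lines
        seeds[parts[3]] = (float(parts[0]), float(parts[1]), float(parts[2]))
    return seeds

sample = """0.0 0.0 0.0 Origin
12.5 -3.0 4.75 DoorCorner"""
print(parse_tracker_locations(sample))
```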
File/Import/Extra Points. Imports a text file consisting of lines with x, y, and z
values, each line optionally preceded or followed by a point
name. A helper point is created for each line. The points might have been
determined from on-set surveying, for example; this option allows them to
be viewed for comparison. See the section on merging files.
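Here the name may come before or after the three numbers, or be absent. One way to read that layout (an illustrative sketch; the real import may differ):

```python
def parse_extra_points(text):
    """Parse 'x y z' lines with an optional leading or trailing point name.

    Returns a list of (name_or_None, (x, y, z)) tuples. Illustrative only.
    """
    points = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:
            name, coords = None, parts
        elif len(parts) == 4:
            try:
                float(parts[0])           # numeric first token: name trails
                name, coords = parts[3], parts[:3]
            except ValueError:            # otherwise the name leads
                name, coords = parts[0], parts[1:]
        else:
            continue
        points.append((name, tuple(float(c) for c in coords)))
    return points

print(parse_extra_points("Corner 1 2 3\n4 5 6"))
```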
Export Again. Redoes the last export, saving time when you are exporting
repeatedly to your CG application.
Find New Scripts. Causes SynthEyes to locate any new scripts that have been
placed in the script folder since SynthEyes started, making them available
to be run.
File Info. Shows the full file name of the current file, its creation and last-written
times, full file names for all loaded shots, file names for all imported
meshes (and the time they were imported). Plus, allows you to add your
own descriptive information to be stored in the file.
User Data Folder. Opens the folder containing preferences, the batch, script,
downloads, and preview movie folders, etc.
Submit for Batch. The current scene is submitted for batch processing by
writing it into the queue area. It will not be processed until the Batch
Processor is running, and there are no jobs before it.
Submit for Render. The current scene is submitted for batch processing: the
Save Sequence process will be run on the active shot to write out the
re-processed image sequence to disk as a batch task. Use the Save
Sequence dialog to set up the output file and compression settings first,
close it without saving, then Submit for Batch. You will be asked whether
or not to output both image sequences simultaneously for stereo. Other
multiple-shot renderings can be obtained by Sizzle scripting, or by
submitting the same file several times with different shots active.
Batch Process. SynthEyes opens the batch processing window and begins
processing any jobs in the queue.
Batch Input Queue. Opens a Windows Explorer to the batch input queue folder,
so that the queue can be examined, and possibly jobs removed or added.
Batch Output Queue. Opens a Windows Explorer to the batch output queue
folder, where completed jobs can be examined or moved to their final
destinations.
Exporter Outputs. Opens a Windows Explorer to the default exporter folder.

Edit Menu
Undo. Undo the last operation; the menu text changes to show what, such as
"Undo Select Tracker."
Redo. Re-do an operation previously performed, then undone.
Select same color. Select all the (un-hidden) trackers with the same color as the
one(s) already selected.
Select All etc affect the tracker selections, not objects in the 3-D viewports.
Invert Selection. Select unselected trackers, unselect selected trackers.
Clear Selection. Unselect all trackers.
Delete. Delete selected objects and trackers.
Hide unselected. Hide the unselected trackers.
Hide selected. Hide the selected trackers.
Reveal selected. Reveal (un-hide) the selected trackers (typically from the
lifetimes panel).
Reveal nnn trackers. Reveal (un-hide) all the trackers currently hidden, i.e., nnn
of them.
Flash selected. Flashes all selected trackers in the viewports, making them
easier to find.
Spinal aligning. Sets the spinal adjustment mode to alignment.
Spinal solving. Sets the spinal adjustment mode to solving.
Edit Scene Settings affects the current scene only.
Edit Preferences contains some of the same settings; these do not affect the
current scene, but are used only when new scenes are created.
Reset Preferences. Set all preferences back to the initial factory values. Gives
you a choice of presets for a light- or dark-colored user interface,
appropriate for office or studio use, respectively.
Edit Keyboard Map. Brings up a dialog allowing key assignments to be altered.

View Menu
Reset View. Resets the camera view so the image fills its viewport.
Expand to Fit. Same as Reset View.


Reset Time Bar. Makes the active frame range exactly fill the displayable area.
Rewind. Set the current time to the first active frame.
To End. Set the current time to the last active frame.
Play in Reverse. When set, replay or tracking proceeds from the current frame
towards the beginning.
Frame by Frame. Displays each frame, then the next, as rapidly as possible.
Quarter Speed. Play back at one quarter of normal speed.
Half Speed. Play back at one half of normal speed.
Normal Speed. Play back at normal speed (i.e., the rated frames-per-second
value), dropping frames if necessary. Note: when the Tracker panel is
selected, playback is always frame-by-frame, to avoid skipping frames in
the track.
Double Speed. Play back at twice normal speed, dropping frames if necessary.
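Rate-controlled playback with frame dropping boils down to choosing the frame from elapsed wall-clock time. Schematically (my illustration of the idea, not SynthEyes internals):

```python
def frame_for_time(elapsed_seconds, fps, speed=1.0):
    """Frame index to display after some real-time playback has elapsed.

    Any frames that display could not keep up with are simply skipped.
    """
    return int(elapsed_seconds * fps * speed)

# 24 fps footage at Half Speed: one real second in, we show frame 12.
print(frame_for_time(1.0, 24, speed=0.5))  # 12
```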
Show Image. Turns the main image‘s display in the camera view on and off.
Show Trackers. Turns on or off the tracker rectangles in the camera view.
Only Camera01’s trackers. Show only the trackers of the currently-selected
camera or object. When checked, trackers from other objects/cameras are
hidden. The camera/object name changes each time you change the
currently-selected object/camera on the Shot menu.
Show Tracker Trails. When on, trackers show a trail into the future (red) and
past (blue).
Show 3-D Points. Controls the display of the solved position marks (X's).
Show 3-D Seeds. Controls the display of the seed position marks (+'s).
Show Seed Paths. When on, values for the 'seed' path and field of view/focal
length of the camera and moving objects will be shown and edited. These
are used for "Use Seed Paths" mode and for camera constraints. When
off, the solved values are displayed.
Show Meshes. Controls display of object meshes in the camera viewport.
Meshes are always displayed in the 3-D viewports.
Solid Meshes. When on, meshes are solid in the camera viewport; when off,
wireframe. Meshes are always wireframe in the 3-D viewports.
Shadows. Show ground plane or on-object shadows in perspective window. This
setting is sticky from SynthEyes run to run.
Show Lens Grid. Controls the display of the lens distortion grid (only when the
Lens control panel is open).
OpenGL Camera View. When enabled, the OpenGL camera view is used, which
is faster on Macs, and when there are large meshes present (50,000
vertices/faces and up). Keep off on PCs when there are no complex
meshes.
OpenGL 3-D Viewports. When enabled, the OpenGL version of the 3-D
viewports is used, which is faster on Macs, and when there are large
meshes present (50,000 vertices/faces and up). Keep off on PCs when
there are no complex meshes.
Double Buffer. Slightly slower but non-flickery graphics. Turn off only when
maximal playback speed required.


Sort Alphabetic. Trackers are sorted alphabetically, mainly for the up/down
arrow keys. Updated when you change the setting in the graph editor.
Sort by Error. Trackers are sorted from high error to low error.
Sort by Time. Trackers are sorted from early in the shot to later in the shot.
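The three orderings are ordinary sort keys. With made-up tracker records of (name, error, first frame):

```python
# Hypothetical tracker records: (name, error in pixels, first frame tracked).
trackers = [("B", 0.8, 10), ("A", 1.5, 3), ("C", 0.2, 7)]

alphabetic = sorted(trackers, key=lambda t: t[0])                # Sort Alphabetic
by_error   = sorted(trackers, key=lambda t: t[1], reverse=True)  # high to low
by_time    = sorted(trackers, key=lambda t: t[2])                # early to late

print([t[0] for t in by_time])  # ['A', 'C', 'B']
```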
Only Selected Splines. When checked, the selected spline, and only the
selected spline, will be shown, regardless of its Show This Spline status.
Safe Areas. This is a submenu with checkboxes for a variety of safe areas you
can turn on and off individually (you can turn on both 90% and 80% at
once, for example). Safe areas are defined in the file safe08.ini in the
main SynthEyes folder; you can create your own safe08.ini to add your
personal safe area definitions. Change the color via the preferences.
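A percentage safe area is just a centered rectangle covering that fraction of each axis. For example (an illustration of the geometry only, not of how safe08.ini is parsed):

```python
def safe_area(width, height, fraction):
    """Corner coordinates (left, top, right, bottom) of a centered safe area.

    Results are rounded to suppress floating-point noise.
    """
    mx = round(width * (1.0 - fraction) / 2.0, 4)
    my = round(height * (1.0 - fraction) / 2.0, 4)
    return (mx, my, width - mx, height - my)

# 90% action-safe region of a 720x480 frame.
print(safe_area(720, 480, 0.9))  # (36.0, 24.0, 684.0, 456.0)
```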

Track Menu
Add Many Trackers. After a shot is auto-tracked and solved, additional trackers
can be added efficiently using the dialog.
Clean Up Trackers. Brings up the Clean Up Trackers dialog.
Coalesce Nearby Trackers. Brings up a dialog that searches for, and
coalesces, multiple trackers that are tracking the same feature at different
times in the shot.
Combine Trackers. Combine all the selected trackers into a single tracker, and
delete the originals.
Cross Link by Name. The selected trackers are linked to trackers with the same
name, except for the first character, on other objects. If the tracker's object
is solved Indirectly, it will not link to another Indirectly-solved object. It also
will not link to a disabled object.
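The matching rule (same name apart from the first character) can be expressed directly; for example, with hypothetical tracker names:

```python
def cross_link_key(name):
    """Key used to pair trackers across objects: the name minus its first character."""
    return name[1:]

# 'ALeg3' on one object would link to 'BLeg3' on another.
print(cross_link_key("ALeg3") == cross_link_key("BLeg3"))  # True
```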
Drop onto mesh. If a mesh is positioned appropriately in the camera viewport,
drops all selected trackers onto the mesh, setting their seed coordinates.
Similar to Place mode of Perspective window.
Fine-tune Trackers. Brings up a dialog to automatically re-track automatic
trackers using supervised tracking. Reduces jitter on some scenes.
Selected Only. When checked, only selected trackers are run while tracking.
Normally, any tracker which is not Locked is processed.
Stop on auto-key. Causes tracking to stop whenever a key is added as a result
of the Key spinner, making it easy to manually tweak the added key
locations.
Preroll by Key Smooth. When tracking starts from a frame with a tracker key,
SynthEyes backs up by the number of Key Smooth frames, and retracks
those frames to smooth out any jump caused by the key.
Pan to Follow. The camera view pans automatically to keep selected trackers
centered. This makes it easy to see the broader context of a tracker.
Pan to Follow 3D. This variant keeps the solved 3-D point of the tracker
centered, which can be better for looking for systematic solve biases.
ZWT auto-calculation. The 3-D position of each zero-weighted tracker is
recomputed whenever it may have changed. With many ZWTs and long
tracks, this might slow interactive response; use this item to temporarily
disable recalculation if desired.


Steady Camera. Predicts the next location of the tracker based on the last
several frames. Use for smooth and steady shots from cranes, dollies, and
Steadicams.
Hand-Held: Sticky. Use for very irregular features poorly correlated to the other
trackers. The tracker is looked for at its previous location. With both hand-
held modes off, trackers are assumed to follow fairly smooth paths.
Hand-Held: Use others. Uses previously-tracked trackers as a guide to predict
where a tracker will next appear, facilitating tracking of jittery hand-held
shots.
Re-track at existing. Use this mode to re-track an already-tracked tracker. The
search will be centered at the previously-determined location, preventing
large jumps in position. Used for fine-tuning trackers, for example.
No resampling. Supervised tracking works at the original image resolution.
Linear x 4. Supervised tracking runs at 4 times the original image resolution, with
linear interpolation between pixels. Default setting, suitable for usual DV
and prosumer cameras.
Lanczos2 x 4. Tracking runs at 4x resolution with N=2 Lanczos filtering, which
sharpens both the image and the noise. Suitable primarily for clean
uncompressed source footage. Takes longer than Linear x 4.
Lanczos3 x 4. Tracks at 4x with N=3 Lanczos, which is even sharper, but takes
longer too.
Linear x 8. Supervised tracking runs at 8x the original resolution. Not necessarily
any better than running at 4x.
Lanczos2 x 8. Tracks at 8x with N=2 Lanczos.
Lanczos3 x 8. Tracks at 8x with N=3 Lanczos.
(Tool Scripts). Tool scripts were listed at the end of the track menu in earlier
versions of SynthEyes. They now have their own Script menu.

Shot Menu
Add Shot. Adds a new shot and camera to the current workspace. This is
different than File/New, which deletes the old workspace and starts a new
one! SynthEyes will solve all the shots at the same time when you later hit
Go, taking links between trackers into account. Use the camera and object
list at the end of the Shot menu to switch between shots.
Edit Shot. Brings up the shot settings dialog box (same as when adding a shot)
so that you can modify settings. Switching from interlaced to noninterlaced
or vice versa will require retracking the trackers.
Change Shot Images. Allows you to select a new movie or image sequence to
replace the one already set up for the present shot. Useful to bring in a
higher or lower-resolution version, or one with color or exposure
adjustments. Warning: changes to the shot length or aspect ratio will
adversely affect previously-done work.
Image Preparation. Brings up the image preparation dialog (also accessed from
the shot setup dialog), for image preparation adjustments, such as region-
of-interest control, as well as image stabilization.


Enable Prefetch. Turns the image prefetch on and off. When off, the cache
status in the timebar will not be updated as accurately.
Read 1f at a time. Preference! Tells SynthEyes to read only one frame at a
time, but continue to pre-process frames in parallel. This option can
improve performance when images are coming from a disk or network that
performs poorly when given many tasks at once.
Activate other eye. When the camera view is showing one of the views from a
stereo pair, switches to the other eye. Additionally, if there is a perspective
window locked to the other (now-displayed) eye, it is switched to show the
original camera view, swapping the two views.
Stereo Geometry. Brings up the Stereo Geometry control panel.
Add Moving Object. Adds a new moving object for the current shot. Add
trackers to this object and SynthEyes will solve for its trajectory. The
moving object shows as a diamond-shaped null in the 3-D workspace.
Remove Moving Object. Removes the current object. If it is a camera, it must
not have any attached objects; if it is removed the whole shot goes with it.
(Camera and Object List). This list of cameras and objects appears at the end
of the shot menu, showing the current object or camera, and allowing you
to switch to a different object or camera. Selecting an object here is
different than selecting an object in a 3-D viewport.

Script Menu
User Script Folder. Opens your personal folder of custom scripts in the Explorer
or Finder. Handy for making or modifying your own. SynthEyes will mirror
the subfolder structure to produce a submenu tree, so you can keep yours
separate, for example.
System Script Folder. Opens SynthEyes's folder of factory scripts. Helpful for
quickly installing new script releases. SynthEyes will mirror the subfolder
structure to produce a submenu tree, so you can put all the unused scripts
into a common folder to simplify the view, for example.
(Tool Scripts). Any tool scripts will appear here; selecting one will execute it.
Such scripts can reach into the current scene to act as scripted importers,
gather statistics, produce output files, or make changes. Standard scripts
include Filter Lens F.O.V., Invert Perspective, Select by type, Motion
capture calibrate, Shift constraints, etc. Note that importers and exporters
have their own submenus on the File menu. See the Sizzle reference
manual for information on writing scripts.

Window Menu
(Control Panel List). Allows you to change the control panel using standard
Windows menu accelerator keystrokes.
No floating panels. The current active panel is docked on the left edge of the
main application window.
Float One Panel. The active panel floats in a small carrier window and can be
repositioned. If the active panel is changed, the carrier switches to the
new panel. This may make better use of your screen space, especially
with larger images or multiple monitor configurations.
Many Floating Panels. Each panel can be floated individually and
simultaneously. Clicking each panel's button either makes it open, or if it is
already open, closes it. Only one panel is the official active panel.
Important note: mouse, display, and keyboard operations can depend on
which panels are open, or which panel is active. These combinations may
not make sense, or may interact in undesirable ways without warning. If in
doubt, keep only a single panel open.
No Panel. Closes all open floating panels, or removes the fixed panel. Note that
one panel is still active for control purposes, even though it is not visible.
Useful to get the most display space, and minimize redraw time, when
using SynthEyes for RAM playback.
Graph editor. Opens the graph editor.
Hold Region Tracker Prep. Launch the Hold Tracker Preparation dialog, used
to handle shots with a mix of translation and tripod-type nodal pans.
Solver Locking. Launch the solver‘s lock control dialog, used to constrain the
camera path directly.
Spinal Editing. Launch the spinal editing control dialog, for real-time updates of
solves.
Floating Camera. Click to float the camera view independently. The camera
view will be empty in the standard viewport configurations.
Float Playbar. Floats the playbar (play, frame forward, rewind, etc) as a
separate movable window. See also Playbar on toolbar in the preferences.
Show Top Time Bar. Turns the time-bar at the top of the main window on or off,
for example, if you are using a graph editor‘s time bar on a second
monitor, you can turn off the time bar on the main display.
Viewport Manager. Starts the viewport layout manager, which allows you to
change and add viewport configurations to match your working style and
display system geometry.
Click-on/Click-off. Quick toggle for click-on/click-off ergonomic mode; see the
discussion in the Preferences panel.

Help Menu
Commands labeled with an asterisk (*) require a working internet
connection; those with a plus sign (+) require a properly-configured support login
as well. An internet connection is not required for normal SynthEyes operation,
only for acquiring updates, support, etc.
Help HTML. Opens the SynthEyes help file (from disk) in your web browser.
Help PDF. Opens the PDF version of the help file: the PDF's bookmarks make
this handy. Note: PDF help is a separate download for the demo version.
Sizzle PDF. Opens the Sizzle scripting language manual.
Read Messages+. Opens the web browser to a special message page
containing current support information, such as the availability of new
scripts, updates, etc. This page is monitored automatically; this is
equivalent to the Msg button on the toolbar.
Suggest Features+. Opens the Feature-Suggestion page for SynthEyes,
allowing you to submit suggestions, as well as read other suggestions and
comment and vote on them. (Not available on the demo version: send mail
to support with questions/comments/suggestions.)
Tech Support Site*. Opens the technical support page of the web site.
Tech Support Mail*. Opens an email to technical support. Be sure to include a
good Subject line! (Email support is available for one year after purchase.)
Report a credit*. Hey, we all want to know! Drop us a line to let us know what
projects SynthEyes has been used in.
Website/Home*. Opens the SynthEyes home page for current SynthEyes news.
Website/Tutorials*. Opens the tutorials page.
Website/Forum*. Opens the SynthEyes forum.
Register. Launches a form to enter information required to request SynthEyes
authorization. Information is placed on the Windows clipboard. See the
registration and authorization tutorial on the web site.
Authorize. After receiving new authorization information, copy it to the Windows
clipboard, then select Authorize to load the new information.
Set Update Info. Allows you to update your support-site login, and control how
often SynthEyes checks for new builds and messages.
Check for Updates+. Manually tells SynthEyes to go look for new builds and
messages. Use this periodically if you have dialup and set the automatic-
check strategy to never. Similar to the D/L button on the toolbar.
Install Updated. If SynthEyes has successfully downloaded an updated build
(D/L button is green), this item will launch the installation.
About. Current version information.

Control Panel Reference
SynthEyes has the following control panels:
• Summary Panel
• Rotoscope Control Panel
• Feature Control Panel
• Tracking Control Panel
• Lens Control Panel
• Solver Control Panel
• Coordinate System Control Panel
• 3-D Control Panel
• Lighting Control Panel
• Flex/Curve Control Panel
Select via the control panel selection portion of the main toolbar.

The Graph Editor icon appears in the toolbar area to indicate a
nominal workflow, but it launches a floating window.
Additional panels are described below:
• Add Many Trackers Dialog
• Advanced Features
• Clean Up Trackers
• Coalesce Nearby Trackers
• Curve tracking control
• Finalize Trackers
• Fine-Tuning Panel
• Green-screen control
• Hard and Soft Lock Controls
• Hold Tracker Preparation Tool
• Image Preparation
• Spinal Editing Control
The shot-setup dialog is described in the section Opening the Shot.

Spinners
SynthEyes uses spinners, the stacked triangles on the right of the
following graphic ( ), to permit easy adjustment of numeric fields on
the control panels. The spinner control provides the following features:
• Click either triangle to increase or decrease the value in steps,
• Drag within the control to smoothly increase and decrease the value,
• Turns red on key frames,
• Right-click to remove a key, or if none, to reset to a predefined value,
• Shift-drag or -click to change the value much more rapidly,
• Control-drag or -click to change the value slowly for fine-tuning.
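The modifier behavior amounts to scaling the basic step size. In this sketch, the 10x and 0.1x factors are illustrative guesses, not documented SynthEyes values:

```python
def spinner_step(base_step, shift=False, ctrl=False):
    """Effective spinner increment given modifier keys (factors are made up)."""
    if shift:
        return base_step * 10.0   # Shift: much faster changes
    if ctrl:
        return base_step * 0.1    # Control: slow, fine-tuning changes
    return base_step

print(spinner_step(0.5, shift=True))  # 5.0
```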

Tool Bar

New, Open, Save, Undo, Redo. Buttons. Standard Windows (only). Wait for the
tooltips or use the Undo/Redo menu items to see what function will be
undone or redone.
(Control Panel buttons). Changes the active control panel.
Forward/Backward ( / ). Button. Changes the current playback and
tracking direction.
Reset Time . Button. Resets the timebar so that the entire shot is visible.
Fill . Button. The camera viewport is reset so that the entire image becomes
visible. Shift-fill sets the zoom to 1:1 horizontally.
Viewport Configuration Select . List box. Selects the viewport
configuration. Use the viewport manager on the Window menu to modify
or add configurations.
Camera01. Active camera/object. Left-click to cycle forward through the cameras
and objects, right-click to cycle backwards.

Play Bar

Rewind Button. Rewind back to the beginning of the shot.


Back Key Button. Go backwards to the previous key of the selected tracker
or object.
Frame Number . Numeric Field. Sequential frame number, starting at zero
or at 1 if selected on the preferences.
Forward Key Button. Go forward to the next key of the selected tracker or
object.
To End Button. Go to the last frame of the shot.
Frame Backwards. Button. Go backwards one frame. Auto-repeats.
Play/Stop. Button. Begin playing the shot, forwards or backwards, at the
rate specified on the View menu.
Frame Forward. Button. Go forwards one frame. Auto-repeats.


Summary Panel

Auto. (the big green one) Run the entire match-move process: create
features (blips), generate trackers, and solve. If no shot has been set up
yet, you will be prompted for that first, so this is truly a one-stop button.
See also Submit for Batch.
Motion Profile. Select one of several profiles reflecting the kinds of motion the
image makes. Use Crash Pan for when the camera spins quickly, for
example, to be able to keep up. Or use Gentle Motion for faster
processing when the camera/image moves only slightly each frame.
Green Screen. Brings up the green-screen control dialog.
Zoom Lens. Check this box if the camera zooms.
On Tripod. Check this box if the camera was on a tripod.
Hold. Animated Button. Use to create hold regions to handle shots with a mix of
normal and tripod-mode sections.
Fine-tune. Performs an extra stage of re-tracking between the initial feature
tracking and the solve. This fine-tuning pass can improve the sub-pixel
stability of the trackers on some shots.
Settings. Launches the settings panel for fine-tuning.
Run Auto-tracker. Runs the automatic tracking stage, then stops.
Solve. Runs the solver.


Not solved. This field will show the overall scene error, in horizontal pixels, after
solving.
Coords. Initiates a mode where 3 trackers can be clicked to define a coordinate
system. After the third, you will have the opportunity to re-solve the scene
to apply the new settings. Same as *3 on the Coordinate System panel.
Master Solution Reset. Button. Clears any existing solution: points and object
paths.
Lens Workflow. Button. Starts a script to help implement either of the two main
lens-distortion workflows, adjusting tracker data and camera field of view
to match distortion.
Save Sequence. Button. Launches the dialog to save the image preprocessor's
output sequence, typically to render new images without distortion. Same
as save sequence on the Output tab of the Image Preprocessor.

Rotoscope Control Panel


The roto panel controls the assignment of a shot's blips to cameras or
objects. The roto mask can also be written as an alpha channel or RGB image
using the image preprocessor.

Spline/Object List. An ordered list of splines and the camera or object they are
assigned to. The default Spline1 is a rectangle containing the entire
image. A feature is automatically assigned to the camera/object of the last
spline in the list that contains the feature. Double-click a spline to rename
it as desired.
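The last-spline-wins rule above can be sketched as follows. The containment test and the names are stand-ins for illustration, not SynthEyes API calls:

```python
def assign_feature(point, splines):
    """Assign a 2-D feature to the camera/object of the LAST spline in
    the ordered list that contains it, so later splines override earlier
    ones, matching the panel's priority ordering.

    Each spline is (contains_fn, target_name); contains_fn stands in
    for a real point-in-spline test.
    """
    target = None
    for contains, name in splines:   # earlier -> later; later wins
        if contains(point):
            target = name
    return target

# Example: a whole-image spline assigned to Camera01, with a smaller
# box spline assigned to a moving object layered after it.
whole = (lambda p: True, "Camera01")
box = (lambda p: 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0, "Object01")
print(assign_feature((0.5, 0.5), [whole, box]))  # inside box -> Object01
```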
Camera/Object Selector. Drop-down list. Use to set the camera/object of the
spline selected in the Spline/Object List. You can also select Garbage to
set the spline as a garbage matte.
Show this spline. Checkbox. Turn on and off to show or hide the selected
spline. Also see the View/Only Selected Splines menu item.
Key all CPs if any. Checkbox. When on, moving any control point will place a
key on all control points for that frame. This can help make keyframing
more predictable for some splines.
Enable. Button. Animatable spline enable.
Create Circle. Lets you drag out circular splines.
Create Box. Lets you drag out rectangular splines.
Magic Wand. Lets you click out arbitrarily-shaped splines with many control
points.
Delete. Deletes the currently-selected spline.
Move Up. Push button. Moves the selected spline up in the Spline/Object
List, making it lower priority.
Move Down. Push button. Moves the selected spline down in the
Spline/Object List, making it higher priority.
Shot Alpha Levels. Integer spinner. Sets the number of levels in the alpha
channel for the shot. For example, select 2 for an alpha channel
containing only 0 or 1 (255), which you can then assign to a camera or
moving object.
Object Alpha Level. Spinner. Sets the alpha level assigned to the current
camera or object. For example, with 2 alpha levels, you might assign level
0 to the camera, and 1 to a moving object. The alpha channel is used to
assign a feature only if it is not contained in any of the splines.
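As a sketch of how a two-level alpha assignment might behave. The exact quantization SynthEyes applies is not specified here, so the rounding below is an assumption:

```python
def alpha_level(alpha8, levels):
    """Quantize an 8-bit alpha sample (0-255) into one of `levels`
    discrete levels. A plausible reading of the Shot Alpha Levels
    control; SynthEyes' exact rounding rule is not documented here."""
    return min(levels - 1, round(alpha8 / 255 * (levels - 1)))

def assign_by_alpha(alpha8, levels, level_to_object):
    """Look up the camera/object assigned to a feature's alpha level,
    e.g. {0: "Camera01", 1: "Object01"} with 2 levels."""
    return level_to_object[alpha_level(alpha8, levels)]

# With 2 levels: level 0 -> camera, level 1 -> moving object
print(assign_by_alpha(255, 2, {0: "Camera01", 1: "Object01"}))  # Object01
```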
Import Tracker to CP. Button. When activated, select a tracker then click on a
spline control point. The tracker's path will be imported as keys onto the
control point.


Feature Control Panel

Motion Profile. Select one of several profiles reflecting the kinds of motion the
image makes. Use Crash Pan for when the camera spins quickly, for
example, to be able to keep up. Or use Gentle Motion for faster
processing when the camera/image moves only slightly each frame.
Clear all blips. Clears the blips from all frames. Use to save disk space after
blips have been peeled to trackers.
Blips this frame. Push button. Calculates features (blips) for this frame.
Blips playback range. Push button. Calculates features for the playback range
of frames.
Blips all frames. Push button. Calculates features for the entire shot. Displays
the frame number while calculating.
Delete. Button. Clears the skip frame channel from this frame to the end of
the shot, or the entire shot if Shift is down when clicked.
Skip Frame. Checkbox. When set, this frame will be ignored during automatic
tracking and solving. Use (sparingly) for occasional bad frames during
explosions or actors blocking the entire view. Camera paths are spline
interpolated on skipped frames.
Advanced. Push button. Brings up a panel with additional control parameters.
Link frames. Push button. Blips from each frame in the shot are linked to those
on the prior frame (depending on tracking direction). Useful after changes
in splines or alpha channels.
Peel. Mode button. When on, clicking on a blip adds a matching tracker, which
will be utilized by the solving process. Use on needed features that were
not selected by the automatic tracking system.


Peel All. Push button. Causes all features to be examined and possibly
converted to trackers.
To Golden. Push button. Marks the currently-selected trackers as "golden," so
that they won't be deleted by the Delete Leaden button.
Delete Leaden. Push button. Deletes all trackers, except those marked as
"golden." All manually-added trackers are automatically golden, plus any
automatically-added ones you previously converted to golden. This button
lets you strip out automatically-added trackers.

Tracking Control Panel


The tracker panel has two variations with different sizes for the tracker
view area, and slightly different button locations. The wider version gives a better
view of the interior of the panel, especially on high-resolution displays. The
smaller version is a more compact layout that reduces mouse motion, and
because of the reduced size, is better for use on laptops. Select the desired
version using the Wider tracker-view panel preference.

Tracker Interior View. Shows the selected tracker's interior (its inner box). Left
Mouse: Drag the tracker location. Middle Scroll: Advance the current
frame, tracking as you go. Right Mouse: Add or remove a position key at
the current frame. Or, cancel a drag in progress.
Create. Mode Button. When turned on, depressing the left mouse button in
the camera view creates new trackers. When off, the left mouse button
selects and moves trackers.
Delete. Button (also Delete key). Deletes the selected tracker.
Finish. Button. Brings up the finalize dialog box, allowing final filtering and
gap filtering as a tracker is locked down.
Lock. Button. Non-animated enable, turn on when tracker is complete; will
then be locked.
Tracker Type. Button. Toggles the tracker type among normal
match-mode, dark spot, bright spot, or symmetric spot.
Direction. Button. Configures the tracker for backwards tracking: it will only
track when playing or stepping backwards.
Enable. Button. Animated control that turns the tracker on or off. Turn it off when
the tracker gets blocked by something, and back on when it becomes visible again.
Contrast. Number-less spinner. Enhances contrast in the Tracker Interior View
window.
Bright. Number-less spinner. Turns up the Tracker Interior View brightness.
Color. Rectangular swatch. Sets the display color of the tracker for the camera,
perspective, and 3-D views.
Now. Button. Adds a tracker position key at the present location and frame.
Right-click to remove a position key. Shift-right-click to truncate, removing
all following keys.
Key. Spinner tells SynthEyes to automatically add a key after this many frames,
to keep the tracker on track.
Key Smooth. Spinner. Tracker's path will be smoothed for this many frames
before each key, so there is no glitch due to re-setting a key.
Name. Edit field. Adjust the tracker's name to describe what it's tracking.
Pos. H and V spinners. Tracker's horizontal and vertical position, from –1 to +1.
You can delete a key (border is red) by right-clicking. Shift-right-clicking
will truncate the tracker after this frame.
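As an illustration, converting such a [-1, +1] position to pixel coordinates might look like the sketch below. It assumes -1 and +1 fall on the image edges and that v increases upward; both conventions are assumptions, not taken from this manual:

```python
def norm_to_pixels(u, v, width, height):
    """Map a tracker position in [-1, +1] coordinates to pixel
    coordinates. Assumes -1/+1 lie exactly on the image edges and
    that v increases upward (illustrative conventions only)."""
    x = (u + 1.0) / 2.0 * width
    y = (1.0 - (v + 1.0) / 2.0) * height   # flip so y=0 is the top row
    return x, y

# Center of a 1920x1080 frame
print(norm_to_pixels(0.0, 0.0, 1920, 1080))   # (960.0, 540.0)
```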
Size. Size and aspect spinners. Size and aspect ratio (horizontal divided by
vertical size) of the interior portion of the tracker.
Search. H and V spinners. Horizontal and vertical size of the region (excluding
the actual interior) that SynthEyes will search for the tracker around its
position in the preceding frame. Preceding implies lower-numbered for
forward tracking, higher-numbered for backward tracking.
Weight. Spinner. Defaults to 1.0. Multiplier that helps determine the weight given
to the 2-D data for each frame from this tracker. Higher values cause a
closer match, lower values allow a sloppier match. Animated, so you can
reduce weight in areas of marginal accuracy for a particular tracker. Adjust
the key at the first frame to affect the entire shot. WARNING: This control
is for experts and should be used judiciously and infrequently. It is easy to
use it to mathematically destabilize the solving process, so that you will
not get a valid solution at all. Keep near 1. Also see ZWTs below.
Exact. For use after a scene has already been solved: sets the tracker's 2-D
position to the exact re-projected location of the tracker's 3-D position. A
quick fix for spurious or missing data points; do not overuse. See the
section on filtering and filling gaps. Note: when applied to a zero-weighted
tracker, the error will not become zero, because the ZWT will re-calculate
using the new 2-D position, yielding a different 3-D and then 2-D position.
F: n.nnn hpix. (display field, right of Exact button) Shows the distance, in
horizontal pixels, between the 2-D tracker location and the re-projected 3-
D tracker location. Valid only if the tracker has been solved.
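A sketch of that distance computation, assuming both positions are in the [-1, +1] coordinates described above and that the vertical component is also measured in horizontal-pixel units (an assumption about the convention):

```python
import math

def error_hpix(tracked_uv, reprojected_uv, width):
    """Distance between the 2-D tracked position and the re-projected
    3-D position, expressed in horizontal pixels. Both inputs are
    (u, v) pairs in [-1, +1] coordinates."""
    scale = width / 2.0            # pixels per unit of the [-1, +1] range
    du = (tracked_uv[0] - reprojected_uv[0]) * scale
    dv = (tracked_uv[1] - reprojected_uv[1]) * scale
    return math.hypot(du, dv)

# A 0.01-unit horizontal offset on a 1920-wide frame
print(error_hpix((0.0, 0.0), (0.01, 0.0), 1920))   # about 9.6 hpix
```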
ZWT. When on, the tracker's weight is internally set to zero—it is a zero-
weighted-tracker (ZWT), which does not affect the camera or object's path
at all. As a consequence, its 3-D position will be continually calculated as
you update the 2-D track or change the camera or object path, or field of
view. The Weight spinner of a ZWT will be disabled, because the weight is
internally forced to zero and special processing engaged. The grayed-out
displayed value will be the original weight, which will be restored if ZWT
mode is turned off.
T: n.nnn hpix. (display field, right of ZWT button) Shows the total error, in
horizontal pixels, for the solved tracker. This is the same error as from the
Coordinate System panel. It updates dynamically during tracking of a
zero-weighted tracker.


Lens Control Panel

Field of View. Spinner. Field of view, in degrees, on this frame.


Focal Length. Spinner. Focal length, computed using the current Back Plate
Width on Scene Settings. Provided for illustration only.
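The two quantities are related by the standard pinhole relation fov = 2*atan(width / (2*focal)), which can be sketched as:

```python
import math

def focal_from_fov(fov_deg, plate_width_mm):
    """Focal length implied by a horizontal field of view and the
    back-plate width, via fov = 2*atan(width / (2*focal))."""
    return (plate_width_mm / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def fov_from_focal(focal_mm, plate_width_mm):
    """Inverse relation: field of view in degrees from focal length."""
    return math.degrees(2.0 * math.atan(plate_width_mm / (2.0 * focal_mm)))

# A 12mm lens on a 24mm-wide plate gives a 90-degree horizontal FOV
print(fov_from_focal(12.0, 24.0))   # 90.0
```

This is why an incorrect plate width makes a correct on-set focal length useless, as the Fixed, with Estimate entry warns: the same focal length implies a different field of view on a different plate.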
Add/Remove Key. Button. Add or remove a key to the field of view
(focal length) track at this frame.
Known. Radio Button. Field of view is already known (typically from an earlier
run) and is taken from the field of view seed track. May be fixed or
zooming. You will be asked if you want to copy the solved FOV track to
the seed FOV track—do that if you want to lock down the solved FOV.
Fixed, Unknown. Radio Button. Field of view is unknown, but did not zoom
during the shot.
Fixed, with Estimate. Radio Button. Camera did not zoom, and a reasonable
estimate of the field of view is available and has been set into the
beginning of the lens seed track. This mode can make solving slightly
faster and more robust. Important: verify that you know, and have
entered, the correct plate size before using any on-set focal length
values. A correct on-set focal length with an incorrect plate size makes the
focal length useless, and this setting harmful.
Zooming, Unknown. Radio Button. Field of view zoomed during shot.
Identical Lens Weight. Spinner. A 0-120 solver weight for stereo shots, when
non-zero it forces the two lens FOVs towards being identical. Use with
care for special circumstances, lenses are rarely identical!


Lens Distortion. Spinner. Show/change the lens distortion coefficient.


Calculate Distortion. Checkbox. When checked, SynthEyes will calculate the
lens distortion coefficient. You should have plenty of well-distributed
trackers in your shot.
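A single-coefficient radial model is the textbook form of this kind of distortion; whether SynthEyes uses exactly this polynomial is not stated in this section, so treat the sketch as illustrative:

```python
def distort(u, v, k):
    """Apply a one-coefficient radial distortion model,
    r' = r * (1 + k*r^2), to a point in [-1, +1] image coordinates.
    This is the generic textbook model, not necessarily SynthEyes'
    exact formula."""
    r2 = u * u + v * v
    s = 1.0 + k * r2
    return u * s, v * s

# Positive k pushes points outward (barrel-style on the inverse mapping)
print(distort(0.5, 0.0, 0.1))   # (0.5125, 0.0)
```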
Add Line. Checkbox. Adds an alignment line to the image that you can line up
with a straight line in the image, adjust the lens distortion to match, and/or
use it for tripod or lock-off scene alignment.
Kill Line. Checkbox. Removes the selected alignment line (the delete key also
does this). Control-click to delete all the alignment lines at once.
Axis Type. Drop-down list. Not oriented, if the line is only there for lens distortion
determination, parallel to one of the three axes, along one of the three
(XYZ) axes, or along one of the three axes, with the length specified by
the spinner. Configures the line for alignment.
<->. Button. Swaps an alignment line end for end. The direction of a line is
significant and displayed only for on-axis lines.
Length. Spinner. Sets the length of the line to control overall scene sizing during
alignment. Only a single line, which must be on-axis, can have a length.
At nnnf. Button. Shows (not set) if no alignment lines have been configured.
This button shows the (single) frame on which alignment lines have been
defined and alignment will take place; clicking the button takes you to this
frame. Set each time you change an alignment line, or right-click the
button to set it to the current frame.
Align! Button. Aligns the scene to match the alignment lines defined—on the
frame given by the At… button. Other frames are adjusted
correspondingly. To sequence through all the possible solutions, control-
click this button.


Solver Control Panel

Go! Button. Starts the solving process, after tracking is complete.


Master Reset. Button. Resets all cameras/objects and the trackers on them,
though all Disabled camera/objects are left untouched. Control-click to
clear the seed path, and optionally the seed FOV (after confirmation).
Error. Number display. Root-mean-square error, in horizontal pixels, of all
trackers associated with this camera or object.
Seeding Method. Upper drop-down list controlling the way the solver begins its
solving process, chosen from the following methods:
Auto. List Item. Selects the automatic seeding (initial estimation) process,
for a camera that physically moves during the shot.
Refine. List item. Resumes a previous solving cycle, generally after
changes in trackers or coordinate systems.
Tripod. List Item. Use when the camera pans, tilts, and zooms, but does
not move.
Refine Tripod. List item. Resumes a previous solving cycle, but indicates
that the camera was mounted on a tripod.
Indirect. List Item. Use for camera/objects which will be seeded from links
to other camera/objects, for example, a DV shot indirectly seeded
from digital camera stills.


Individual. List Item. Use for motion capture. The object's trackers are
solved individually to determine their path, using the same feature
on other "Individual" objects; the corresponding trackers are linked
in one direction.
Points. List Item. Seed from seed points, set up from the 3-D trackers
panel. Use with on-set measurement data, or after Set All on the
Coordinate Panel. You should still configure coordinate system
constraints with this mode: some hard locks and/or distance
constraints.
Path. List Item. Uses the camera/object's seed path as a seed, for
example, from a previous solution or a motion-controlled camera.
Disabled. List Item. This camera/object is disabled and will not be solved
for.
Directional Hint. Second drop-down list. Gives a hint to speed the initial
estimation process, or to help select the correct solution, or to specify
camera timing for "Individual" objects. Chosen from the following for
Automatic objects:
Automatic. List Item. In automatic seeding mode, SynthEyes can be
given a hint as to the general direction of motion of the camera to
save time. With Automatic selected, it doesn't need such
a hint.
Left. List Item. The camera moved generally to its left.
Right. List Item. The camera moved generally to its right.
Up. List Item. The camera moved generally upwards.
Down. List Item. The camera moved generally downwards.
Push In. List Item. The camera moved forward (different than zooming
in!).
Pull Back. List Item. The camera moved backwards (different than
zooming out!).
Camera Timing Setting. The following items are displayed when "Individual" is
selected as the object solving mode. They actually apply to the entire shot,
not just the particular object.
Sync Locked. List Item. The shot is either the main timing reference, or is
locked to it (i.e., a gen-locked video camera).
Crystal Sync. List Item. The camera has a crystal-controlled frame rate
(i.e., a video camera at exactly 29.97 Hz), but it may be up to a frame
out of synchronization because it is not actually locked.
Loosely Synced. List item. The camera's frame rate may vary somewhat
from nominal, and will be determined relative to the reference.
Notably, a mechanical film camera.
Slow but sure. Checkbox. When checked, SynthEyes looks especially hard (and
longer) for the best initial solution.
Constrain. Checkbox for experts. When on, constraints set up using the
coordinate system panel are applied rigorously, modifying the tracker
positions. When off, constraints are used to position, size, and orient the
solution, without deforming it. See alignment vs constraints.


Hold. Animated Button. Use to create hold regions to handle shots with a mix of
normal and tripod-mode sections.
Begin. Spinner and checkbox. Numeric display shows an initial frame used by
SynthEyes during automatic estimation. With the checkbox checked, you
can override the begin frame solution. Either manually or automatically,
the camera should have panned or tilted only about 30 degrees. If the
camera does something wild between the automatically-selected frames,
or if their data is particularly unreliable for some reason, you can manually
select the frames instead. The current frame will follow as you
adjust this, and the number of frames in common is shown on the status line.
End. Spinner and checkbox. Numeric display shows a final frame used by
SynthEyes during automatic estimation. With the checkbox checked, you
can override the end frame solution.
World size. Spinner. Rough estimate of the size of the scene, including the
trackers and motion of the camera.
Transition Frms. Spinner. When trackers first become usable or are about to
become unusable, SynthEyes gradually reduces their impact on the
solution, to maintain an undetectable transition. The value specifies how
many frames to spread the transition over.
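A linear ramp is one plausible shape for that transition; the manual only says "gradually," so the linear form here is an assumption:

```python
def transition_weight(frame, start, end, transition_frames):
    """Per-frame solver weight for a tracker valid on [start, end]:
    ramps up over the first `transition_frames` frames and back down
    over the last ones, 1.0 in between. The linear shape is an
    illustrative assumption."""
    transition_frames = max(1, transition_frames)
    if frame < start or frame > end:
        return 0.0
    up = (frame - start + 1) / transition_frames
    down = (end - frame + 1) / transition_frames
    return max(0.0, min(1.0, up, down))

# A tracker valid on frames 0..100 with a 3-frame transition
print(transition_weight(50, 0, 100, 3))   # 1.0, full weight mid-shot
```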
Filter Frms. Spinner. Controls post-solving path filtering. If this control is set to 3,
say, then each frame‘s camera position is a (weighted) average of its
position within 3 frames earlier and 3 frames later in the sequence. A
larger number creates a smoother path, but increases errors.
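With uniform weights (the actual weighting is unspecified beyond "weighted"), the filtering amounts to a windowed average over the path:

```python
def filter_path(path, filter_frames):
    """Smooth a per-frame position channel: each output frame is the
    average of the input within +/- filter_frames of it. Uniform
    weights are used here for simplicity; the manual says the real
    average is weighted."""
    out = []
    n = len(path)
    for i in range(n):
        lo = max(0, i - filter_frames)          # clamp at shot start
        hi = min(n - 1, i + filter_frames)      # clamp at shot end
        window = path[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

# A jittery channel smooths toward its mean; a constant one is unchanged
print(filter_path([0.0, 10.0, 0.0, 10.0, 0.0], 1))
```

Note the trade-off the manual describes: a wider window yields a smoother path, but each smoothed position drifts farther from the per-frame solution, increasing error.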
Overall Weight. Spinner. Defaults to 1.0. Multiplier that helps determine the
weight given to the data for each frame from this object‘s trackers. Lower
values allow a sloppier match, higher values cause a closer match, for
example, on a high-resolution calibration sequence consisting of only a
few frames. WARNING: This control is for experts and should be used
judiciously and infrequently. It is easy to use it to mathematically
destabilize the solving process, so that you will not get a valid solution at
all. Keep near 1.
More. Button. Brings up or takes down the Hard and Soft Lock Controls dialog.
Axis Locks. 7 Buttons. When enabled, the corresponding axis of the current
camera or object is constrained to match the corresponding value from the
seed path. These constraints are enforced either loosely after solving,
with Constrain off, or tightly during solving, with Constrain on. See the
section on Constraining Camera or Object Position. Animated. Right-click
to remove a key on the current frame.
L/R. Left/right axis (i.e., X)
F/B. Front/back axis (Y or Z)
U/D. Up/down axis (Z in Z-up or Y in Y-up)
FOV. Camera field of view (available/relevant only for Zoom cameras)
Pan. Pan angle around ground plane
Tilt. Tilt angle up or down from ground plane
Roll. Roll angle from vertical


Never convert to Far. Normally, SynthEyes monitors trackers during 3-D solves,
and automatically converts trackers to Far if they are found to be too far
away. This strategy backfires if the shot has very little perspective to start
with, as most trackers can be converted to far. Use this checkbox if you
wish to try obtaining a 3-D solve for your nearly-a-tripod shot.

Coordinate System Control Panel

Tracker Name. Edit. Shows the name of selected tracker, or change it to


describe what it is tracking.
Camera/Object. Drop-down list. Shows what object or camera the tracker is
associated with; change it to move the tracker to a different object or
camera on the same shot (or, you can clone it there for special situations).
Entries beginning with an asterisk (*) are on a different shot with the same
aspect and length; trackers may be moved there, though this may
adversely affect constraints, lights, etc.
*3. Button. Starts and controls three-point coordinate setup mode. Click it once to
begin, then click on origin, on-axis, and on-plane trackers in the camera
view, 3-D viewports, or perspective window. The button will sequence
through Or, LR, FB, and Pl to indicate which tracker should be clicked
next. Click this button to skip from LR (left/right) to FB (front/back), or to
skip setting other trackers. After the third tracker, you will have the
opportunity to re-solve the scene to apply the new settings.
Seed & Lock Group
X, Y, Z. Buttons. Multi-choice buttons flip between X, X+, X-; Y, Y+, Y-; and Z,
Z+,Z- respectively. These buttons control which possible coordinate-
system solution is selected when there are several possibilities. Only
significant when the tracker is locked on one or two axes.
X, Y, Z. Spinners. An initial position used as a guess at the start of solving (if
seed checkbox on), and/or a position to which the tracker is locked,
depending on the Lock Type list.
Seed. Mode button. When on, the X/Y/Z location will be used to help estimate
camera/object position at the start of solving, if Points seeding mode is
selected.
Peg. Mode button. If on, and the Solver panel's Constrain checkbox is on, the
tracker will be pegged exactly, as selected by the Lock Type. Otherwise,
the solver may modify the constraints to minimize overall error. See
documentation for details and limitations.
Far. Mode button. Turn on if the tracker is far from the camera. Example: If the
camera moved 10 feet during the shot, turn on for any point 10,000 feet or
more away. Far points are on the horizon, and their distance can not be
estimated. This button states your wish, SynthEyes may solve a tracker
as far anyway, if it is determined to have too little perspective.
Lock Type. Drop-down list. Has no effect if Unlocked. The other settings tell
SynthEyes to force one or more tracker position coordinates to 0 or the
corresponding seed axis value. Use to lock the tracker to the origin, the
floor, a wall, a known measured position, etc. See the section on Lock
Mode Details. If you select several trackers, some with targets, some
without, this list will be empty—right-click the Target Point button to clear
it.
Target Point. Button. Use to set up links between trackers. Select one tracker,
click the Target Point button to select the target tracker by name. Or, ALT-
click (Mac: Command-Left-Click) the target tracker in the camera view or
3-D viewport. If the trackers are on the same camera/object, the Distance
spinner activates to control the desired distance between the trackers.
You can also lock one or more of their coordinates to be identical, forcing
them parallel to the same axis or plane. If the trackers are on different
camera/objects, you have created a link: the two trackers will be forced to
the same location during solving. If two trackers track the same feature,
but one tracker is on a DV shot, the other on digital camera stills, use the
link to make them have the same location. Right-click to remove an
existing target tracker.
Dist. Spinner. Sets the desired distance between two trackers on the same
object.
Solved. X, Y, Z numbers. After solving, the final tracker location.


Error. Number. After solving, the root-mean-square error between this tracker's
predicted and actual positions. If the error exceeds 1 pixel, look for
tracking problems using the Tracker Graph window.
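The root-mean-square combination of per-frame errors is the standard one:

```python
import math

def rms_error_hpix(per_frame_errors):
    """Root-mean-square of a tracker's per-frame reprojection errors,
    each already expressed in horizontal pixels, matching the
    'root-mean-square error' this Error field reports."""
    return math.sqrt(sum(e * e for e in per_frame_errors)
                     / len(per_frame_errors))

# A few large frames dominate: RMS of [3, 4] hpix is ~3.54, not 3.5
print(rms_error_hpix([3.0, 4.0]))
```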
[FAR]. This will show up after the error value, if the tracker has been solved as
far.
Set Seed. Button. After solving, sets the computed location up as the seed
location for later solver passes using Points mode.
All. Button. Sets up all solved trackers as seeds for subsequent passes.
Exportable. Checkbox. Uncheck this box to tell savvy export scripts not to export
this tracker. For example, when exporting to a compositor, you may want only a
half dozen of a hundred or two automatically-generated trackers to be
exported and create a new layer in the compositor. Non-exportable points
are shown in a different color, somewhat closer to that of the background.

3-D Control Panel

Creation Mesh Type. Drop-down. Selects the type of object created by the
Create Tool.
Create Tool. Mode button. Clicking in a 3-D viewport creates the mesh
object listed on the creation mesh type list, such as a pyramid or Earthling.
Most mesh objects require two drag sequences to set the position, size,
and scale. Note that mesh objects are different than objects created with
the Shot Menu's Add Moving Object button. Moving objects can have
trackers associated with them, but are themselves null objects. Mesh
objects have a mesh, but no trackers. Often you will create a moving
object and its trackers, then add a mesh object(s) to it after solving to
check the track.
Object name. Editable drop-down. The name of the object selected in the 3-D or
camera viewports. Changeable.
Delete. Button. Deletes the selected object.
Lock Selection. Mode button. Locks the selection in the 3-D viewport to
prevent inadvertent reselection when moving objects.
World/Object. Mode button. Switches between the usual world coordinate
system, and the object coordinate system where everything else is
displayed relative to the current object or camera, as selected by the shot
menu. Lets you add a mesh aligned to an object easily.
Move Tool. Mode button. Dragging an object in the 3-D viewport moves it.
Rotate Tool. Mode button. Dragging an object in the 3-D viewport rotates it
about the axis coming up out of the screen.
Scale Tool. Mode button. Dragging an object in the 3-D viewport scales it
uniformly. Use the spinners to change each axis individually.
Make/Remove Key. Button. Adds or removes a key at the current frame
for the currently-selected object.
Show/Hide. Button. Show or hide the selected mesh object.
Object color. Color Swatch. Object color, click to change.
X/Y/Z Values. Spinners. Display X, Y, or Z position, rotation or scale values,
depending on the currently-selected tool.
Size. Spinner. This is an overall size spinner; use it when the Scale Tool is
selected to change all three axis scales in lockstep.
Whole. Button. When moving a solved object, normally it moves only for the
current frame, allowing you to tweak particular frames. If you turn on
Whole, moving the object moves the entire path, so you can adjust your
coordinate system without using locks. If you do this, you should set up
some locks subsequently and switch to Point or Path seeding, or you will
have to readjust the path again if you re-solve. Hint: Whole mode has
some rules to decide whether or not to affect meshes. To force it to
include all meshes in the action, turn on Whole affects meshes on the 3-D
viewport and perspective window's right-click menu.
Blast. Button. Writes the entire solved history onto the object's seed path, so it
can be used for path seeding mode.
Reset. Button. Clears the object's solved path, exposing the seed path.
Cast Shadows. (Mesh) Object should cast a shadow in the perspective window.
Catch Shadows. (Mesh) Object should catch shadows in the perspective
window.
Back Faces. Draw both sides of faces, not only the front.


Invert Normals. Make the mesh normals point the other way from their imported
values.
Opacity. Spinner 0-1. Controls the opacity of the mesh in the perspective view
and the OpenGL version of the camera view (see the View menu and
preferences to enable OpenGL camera view). Note that opacity rendering
is an inexact surface-based approximation and, to allow interactive
performance, is not equivalent to changing the object into a
semitransparent 3-D aero-gel.
Reload. Reloads the selected mesh, if any. If the original file is no longer
accessible, allows a new location to be selected.

Lighting Control Panel

New Light. Button. Click to create a new light in the scene.


Delete Light. Button. Delete the light in the selected-light drop-down list.
Selected Light. Drop-down list. Shows the selected light, and lets you change its
name, or select a different one.
Far-away light. When checked, light is a distant, directional, light. When off, light
is a nearby spotlight or omnidirectional (point) light.
Compute over frames: This, All, Lock. In the (normal) This mode, the light's
position is computed for each frame independently. In the All or Lock
mode, the light's position is averaged over all the frames in the sequence.
In the All mode, this calculation is performed repeatedly for "live updates."
In the Lock mode, the calculation occurs only when clicking the Lock
button.
New Ray. Button. Creates a new ray on the selected light.
Delete Ray. Button. Delete the selected ray.


Previous Ray (<). Button. Switch to the previous lower-numbered ray on the
selected light.
Ray Number. Text field. Shows something like 1/3 to indicate ray 1 of 3 for this
light.
Next Ray (>). Button. Switch to the next higher ray on the selected light.
Selected Ray
Source. Mode button. When lit up, click a tracker in the camera view or any 3-D
view to mark it as one point on the ray.
Target. Mode button. When lit up, click a tracker in the camera view or any 3-D
view to mark it as one point on the ray. If the source and target trackers
are the same, it is a reflected-highlight tracking setup, and the Target
button will show ―(highlight).‖ For highlight tracking to be functional, there
must be a mesh object for the tracker to reflect from.
Distance. Spinner. When only a single ray to a nearby light is available, use this
spinner to adjust the distance to the light. Leave at zero the rest of the
time.


Flex/Curve Control Panel

The flex/curve control panel handles both object types, which are used to
determine the 3-D position and shape of a curve, even if it has no discernible
point features. If you select a curve, the parameters of its parent flex (if any) will
be shown in the flex section of the dialog.
New Flex. Creates and selects a new flex. Left-click successively in a 3-D view
or the perspective view to lay down a series of control points. Right-click to
end.
Delete Flex. Deletes the selected flex (even if it was a curve that was initially
clicked).
Flex Name List. Lists all the flexes in the scene, allowing you to select a flex, or
change its name.
Moving Object List. If the flex is parented to a moving object, it is shown here.
Normally, "(world)" will be listed.
Show this 3-D flex. Controls whether the flex is seen in the viewports or not.


Clear. Clears any existing 3-D solution for the flex, so that the flex's initial seed
control points may be seen and changed.
Solve. Solves for the 3-D position and shape of the flex. The control points
disappear, and the solved shape becomes visible.
All. Causes all the flexes to be solved simultaneously.
Pixel error. Root-mean-square (~average) error in the solved flex, in horizontal
pixels.
Count. The number of points that will be solved for along the length of the flex.
Stiffness. Controls the relative importance of keeping the flex stiff and straight
versus reproducing each detail in the curves.
Stretch. Relative importance of (not) being stretchy.
Endiness. (yes, made this up) Relative importance of exactly meeting the end-
point specification.
New Curve. Begins creating a new curve—click on a series of points in the
camera view.
Delete. Deletes the curve.
Curve Name List. Shows the currently-selected curve's name among a list of all
the curves attached to the current flex, or all the unconnected curves if this
one is not connected.
Parent Flex List. Shows the parent flex of this curve, among all of the flexes.
Show. Controls whether or not the curve is shown in the viewport.
Enable. Animated checkbox indicating whether the curve should be enabled or
not on the current frame. For example, turn it off after the curve goes off-
screen, or if the curve is occluded by something that prevents its correct
position from being determined.
Key all. When on, changing one control point will add a key on all of them.
Rough. Select several trackers, turn this button on, then click a curve to use the
trackers to roughly position the curve throughout the length of the shot.
Truncate. Kills all the keys off the tracker from the current frame to the end of the
shot.
Tune. Snaps the curve exactly onto the edge underneath it, on the current frame.
All. Brings up the Curve Tracking Control dialog, which allows this curve, or all
the curves, to be tracked throughout an entire range of frames.

Additional Dialogs Reference
This section contains descriptions of additional dialogs used in SynthEyes.
Generally they can be launched from the main menu. Some of the dialogs
contain very powerful multi-threaded processing engines to solve particular tasks
for you.

Add Many Trackers Dialog

This dialog, launched from the Trackers menu, allows you to add many
more trackers—after you have successfully auto-tracked and solved the shot.
Use to improve accuracy in a problematic area of the shot, or to produce
additional trackers to use as vertices for a tracker mesh.
Note: it may take several seconds between launching the dialog and its
appearance. During this time your processors will be very busy.
Tracker Requirements
Min #Frames. Spinner. The minimum number of valid frames for any tracker
added.
Min Amplitude. Spinner. The minimum average amplitude of the blip path,
between zero and one. A larger value will require a more visible tracker.
Max Avg Err. Spinner. The maximum allowable average error, in horizontal
pixels, of the prospective tracker. The error is measured in 2-D between
the 2-D tracker position, and the 3-D position of the prospective tracker.
Max Peak Err. Spinner. The maximum allowable error, in horizontal pixels, on
any single frame. Whereas the average error above measures the overall


noisiness, the peak error reflects whether or not there are any major
glitches in the path.
Only within last Lasso. Checkbox. When on, trackers will only be created within
the region swept out by the last "lasso" operation in the main camera view,
allowing control over positioning.
Frame-Range Controls
Start Region. Spinner. The first frame of a region of frames in which you wish to
add additional trackers. When dragging the spinner, the main timeline will
follow along.
End Region. Spinner. The final frame of the region of interest. When dragging
the spinner, the main timeline will follow along.
Min Overlap. The minimum required number of frames that a prospective tracker
must be active within the region of interest. With a 30-frame region of
interest, you might require 25 valid frames, for example.
Number of Trackers
Available. Text display field. Shows the number of prospective trackers
satisfying the current requirements.
Desired. Spinner. The maximum number of trackers to be added: the actual
number added will be the least of the Available and Desired values.
New Tracker Properties
Regular, not ZWT. Checkbox. When off, ZWTs are created, so further solves will
not be bogged down. When on, regular (auto) trackers will be created.
Selected. Checkbox. When checked, the newly-added trackers will be selected,
facilitating easy further modification.
Set Color. Checkbox. When checked, the new trackers will be assigned the color
specified by the swatch. When off, they will have the standard default
color.
Color. Swatch. Color assigned to trackers when Set Color is on.
Others
Max Lostness. Spinner. Prospective trackers are compared to the other trackers
to make sure they are not "lost in space." The spinner controls this test:
the threshold is this specified multiple of the object's world size. For
example, with a lostness of 3 and a world size of 100, trackers more than
300 units from the center of gravity of the others will be dropped.
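As a worked illustration of that arithmetic, the lostness test reduces to a single distance comparison. This is a hedged sketch with invented names, not SynthEyes internals:

```python
def is_lost(position, center_of_gravity, world_size, max_lostness):
    # Hypothetical sketch of the Max Lostness test described above; the
    # function and argument names are invented for illustration.
    # A prospective tracker is "lost" when its distance from the center
    # of gravity of the other trackers exceeds max_lostness * world_size.
    dx = position[0] - center_of_gravity[0]
    dy = position[1] - center_of_gravity[1]
    dz = position[2] - center_of_gravity[2]
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    return distance > max_lostness * world_size

# The manual's example: lostness 3 and world size 100 give a 300-unit limit.
print(is_lost((350.0, 0.0, 0.0), (0.0, 0.0, 0.0), 100.0, 3.0))  # True
print(is_lost((250.0, 0.0, 0.0), (0.0, 0.0, 0.0), 100.0, 3.0))  # False
```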
Re-fetch possibles. Button. Push this after changes in Max Lostness.
Add. Button. Adds the trackers into the scene and closes the dialog. Will take a
little while to complete, depending on the number of trackers and length of
the shot.
Cancel. Button. Close the dialog without adding any trackers.
Defaults. Button. Changes all the controls to the standard default values.


Advanced Features

This floating panel can be launched from the Feature control panel,
affecting the details of how blips are placed and accumulated to form trackers.
Feature Size (small). Spinner. Size in pixels for smaller blips.
Feature Size (big). Spinner. Size in pixels for larger blips, which are used for
alignment as well as tracking.
Density/1K. Spinner for each of big and small. Gives a suggested blip density in
terms of blips per thousand pixels.
Minimum Track Length. Spinner. The path of a given blip must be at least this
many frames to have a chance to become a tracker.
Minimum Trackers/Frame. Spinner. SynthEyes will try to promote blips until
there are at least this many trackers on each frame, including pre-existing
guide trackers.
Maximum Tracker Count. Spinner. Only this many trackers will be produced for
the object, unless even more are required to meet the minimum
trackers/frame.
Camera View Type. Drop-down list. Shows black and white filtered versions of
the image, so the effect of the feature sizes can be assessed. Can also
show the image's alpha channel, and the blue/green-screen check image,
even if the screen control dialog is not displayed.
Auto Re-blip. Checkbox. When checked, new blips will be calculated whenever
any of the controls on the advanced features panel are changed. Keep off
for large images/slow computers.

Clean Up Trackers Dialog


Run the Clean Up Trackers dialog after solving, to identify bad trackers and
frames needing repair. This helps remove bumps in tracks and improves overall
accuracy. Start it from the Track menu.


The panel is organized systematically, with a line for trackers with different
categories of problems. A tracker can be counted in several different categories.
There are Select toggle buttons for each category; each Select button selects
and flashes trackers in that category in the main viewports. Click the button a
second time to turn it off and de-select the trackers.
After cleaning up the trackers ("Fix"), you should re-solve or refine the
solution.
All trackers. Radio button. All trackers are affected.
Selected trackers. Radio button. Only the trackers already selected when the
dialog is opened are affected.
(Delete) Bad Frames. Checkbox. When checked, bad frames are deleted when
the Fix button is clicked. Note that the number of trackers in the category
is shown in parentheses.
Show. Toggle button. Bad frames are shown in the user interface, by temporarily
invalidating them. The graph editor should be open in Squish mode to see
them.
Threshold. Spinner. This is the threshold for a frame to be bad, as determined
by comparing its 2-D location on a frame to its predicted 3-D location. The
value is either a percentage of the total number of frames (i.e. the worst
2%), or a value in horizontal pixels, as controlled by the radio buttons
below.
%. Radio button. The bad-frame threshold is measured in percentage; the worst
N% of the frames are considered to be bad.
Hpix. Radio button. The bad-frame threshold is a horizontal-pixel value.
Disable. Radio button. When "fixed," bad frames are disabled by adjusting the
tracker's enable track.


Clear. Radio button. Bad frames are fixed by clearing the tracking results from
that frame; the tracker is still enabled and can be easily re-tracked on that
frame.
(Delete) Far-ish Trackers. Checkbox. When on, trackers that are too far-ish
(have too little perspective) are deleted.
Threshold. Spinner. Controls how much or little perspective is required for a
tracker to be considered far-ish. Measured in horizontal pixels.
Delete. Radio button. Far-ish trackers will be deleted when fixed.
Make Far. Radio button. Far-ish trackers will be changed to be solved as Far
trackers (direction only, no distance).
(Delete) Short-Lived Trackers. Checkbox. Short-lived trackers will be deleted.
Threshold. Spinner. Number of frames a tracker must be valid to avoid being too
short-lived.
(Delete) High-error Trackers. Checkbox. Trackers with too many bad frames
will be deleted.
Threshold. Spinner. A tracker is considered high-error if the percentage of its
frames that are bad (as defined above by the bad-frame threshold) is
higher than this first percentage threshold, or if its average RMS error in
hpix is more than the second threshold below (next to "Unsolved"). For
example, if more than 30% of a tracker's frames are bad, or its average
error is more than 2 hpix, it is a high-error tracker.
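The two-part test can be stated compactly. A hedged sketch using the thresholds from the example above (names and structure invented for illustration, not SynthEyes code):

```python
def is_high_error(bad_frame_fraction, avg_error_hpix,
                  pct_threshold=0.30, hpix_threshold=2.0):
    # A tracker is high-error if the fraction of its frames that are bad
    # exceeds the percentage threshold, OR its average RMS error in
    # horizontal pixels exceeds the hpix threshold. Defaults reflect the
    # manual's example values (30% and 2 hpix); names are illustrative.
    return bad_frame_fraction > pct_threshold or avg_error_hpix > hpix_threshold

print(is_high_error(0.35, 1.0))  # True  (too many bad frames)
print(is_high_error(0.10, 2.5))  # True  (average error too high)
print(is_high_error(0.10, 1.0))  # False (passes both tests)
```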
Unsolved/Behind. Checkbox. Some trackers may not have been solved, or may
have been solved so that they are behind the camera. If checked, these
trackers will be deleted.
Threshold. Spinner. This is the average hpix error threshold for a tracker to be
high error. Though it is next to the Unsolved category, it is part of the
definition of a high-error tracker.
Clear All Blips. Checkbox. When checked, Fix will clear all the blips. This is a
way to remember to do this and cut the final .sni file size.
Unlock UI. Button. A tricky button that changes this dialog from modal (meaning
the rest of the SynthEyes user interface is locked up) to modeless, so that
you can go fix or rearrange something without having to close and reopen
the panel. Note: keyboard accelerators do not work when the user
interface is unlocked.
Frame. Spinner. The current frame number in SynthEyes; use it to scrub through
the shot without closing the dialog or even having to unlock the user
interface.
Fix. Button. Applies the selected fixes, then closes the panel.
Close. Button. Closes the panel, without applying the fixes. Parameter settings
will be saved for next time. The clean-up panel can be a quick way to
examine the trackers, even if you do not use it to fix anything itself.


Coalesce Nearby Trackers Dialog

Trackers, especially automatic trackers, can wind up tracking the same
feature in different parts of the shot. This panel finds them and coalesces them
together into a single overall tracker.
Coalesce. Button. Runs the algorithm and coalesces trackers, closing the panel.
Cancel. Button. Removes any tracker selection done by Examine, then closes
the dialog without saving the current parameter settings.
Close. Button on title bar. The close button on the title bar will close the dialog,
saving the tracker selection and parameter settings, making it easy to
examine the trackers and then re-do and complete the coalesce.
Examine. Button. Examines the scene with the current parameter settings to
determine which trackers will be coalesced and how many trackers will be
eliminated. The trackers to be coalesced will be selected in the viewports.
# to be eliminated. Display area with text. Shows how many trackers will be
eliminated by the current settings. Example: SynthEyes found two pairs of
trackers to be coalesced. Four trackers are involved, two will be
eliminated, two will be saved (and enlarged). The display will show 2
trackers to be eliminated.
Defaults. Button. Restores all controls to their factory default settings.
Distance (hpix). Spinner. Sets the maximum consistent distance between two
trackers to be coalesced. Measured in horizontal pixels.
Sharpness. Spinner. Sets the sensitivity within the allowable distance. If zero,
trackers at the maximum distance are as likely to be coalesced as trackers
at the same location. If one, trackers at the maximum distance are
considered unlikely.
Consistency. Spinner. The fraction of the frames two trackers must be nearby to
be merged.
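How the Distance and Consistency spinners interact can be sketched as a fraction-of-frames test. This is a heavily hedged illustration, not the actual SynthEyes algorithm; the Sharpness weighting is omitted and all names are invented:

```python
def may_coalesce(per_frame_distance_hpix, max_distance_hpix, consistency):
    # Hypothetical sketch: two trackers may be merged only if they are
    # within the distance threshold on at least 'consistency' fraction of
    # the frames considered. Sharpness weighting is not modeled here.
    if not per_frame_distance_hpix:
        return False
    nearby = sum(1 for d in per_frame_distance_hpix if d <= max_distance_hpix)
    return nearby / len(per_frame_distance_hpix) >= consistency

# Within 3 hpix on 3 of 4 frames: merged at consistency 0.75, not at 0.9.
distances = [1.0, 2.5, 0.5, 4.0]
print(may_coalesce(distances, 3.0, 0.75))  # True
print(may_coalesce(distances, 3.0, 0.9))   # False
```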
Only selected trackers. Checkbox. When checked, only pre-selected trackers
might be coalesced. Normally, all trackers on the current camera/object
are eligible to be coalesced.
Include supervised non-ZWT trackers. Checkbox. When off, supervised
(golden) trackers that are not zero-weighted-trackers (ZWTs) are not
eligible for coalescing, so that you do not inadvertently affect hand-tuned


trackers. When the checkbox is on, all trackers, including these, are
eligible.
Only with non-overlapping frame ranges. Checkbox. When checked, trackers
that are valid at the same time will not be coalesced, to avoid coalescing
closely-spaced but different trackers. When off, there is no such
restriction.

Curve Tracking Control

Launched by the All button on the Flex/Curve control panel.


Filter Size. Edge detection filter size, in pixels. Use larger values to accurately
locate wide edges, smaller values for thinner edges.
Search Width. Pixels. Size of search region for the edge. Larger values mean a
roughed-in location can be further from the actual location, but might also
mean that a different edge is detected instead.
Adjacency Sharpness. 0..1. This is the portion of the search region in which the
edge detector is most sensitive. With a smaller value, edges nearest the
roughed-in location will be favored.
Adjacency Rejection. 0..1. The worst weight an edge far from the roughed-in
location can receive.
Do all curves. When checked, all curves will be tuned, not just the selected one.
Animation range only. When checked, tuning will occur over the animation
playback range, rather than the entire playback range.
Continuous Update. Normally, as a range of frames is tuned, the tuning result
from any frame does not affect where any other frame is searched for—
the searched-for location is based solely on the earlier curve animation
that was roughed in. With this box checked, the tuning result for each
frame immediately updates the curve control points, and the next frame


will be looked for based on the prior search result. This can allow you to
tune a curve without previously roughing it in.
Do keyed or not. All frames will be keyed, whether or not they have a key
already.
Do only keyed. Add keys only to frames that already have keys, typically to
tune up a few roughed-in keys.
Do only unkeyed. Only frames without keys will be tuned. Use to tune without
adversely affecting frames that have already been carefully manually
keyed.

Finalize Tracker Dialog

With one or more trackers selected, launch this panel with the Finalize
Button on the Tracker control panel, then adjust it to automatically close gaps in
a tracker (where an actor briefly obscures a tracker, say), and to filter (smooth)
the trajectory of the selected trackers.
The Finalize dialog affects only trackers which are not Locked (i.e. their
Lock button is unlocked). When the dialog is closed via OK, affected trackers are
Locked. If you need to later change a Finalized tracker, you should unlock it, then
rerun the tracker from start to finish (this is generally fairly quick, since you've
already got all the necessary keys in place).
Filter Frames. The number of frames that are considered to produce the filtered
version of a particular frame.
Filter Strength. A zero to one value controlling how strongly the filter is applied.
At the default value of one, the filter is applied fully.
Max Gap Frames. The number of missing frames (gap) that can be filled in by
the gap-filling process.
Gap Window. The number of frames before the gap, and after the gap, used to
fill frames inside the gap.
Begin. The first frame to which filtering is applied.
End. The last frame to which filtering is applied.


Entire Shot. Causes the current frame range to be set into the Begin and End
spinners.
Playback Range. Causes the current temporary playback range to be set into
the Begin and End spinners.
Live Update. When checked, filtering and gap filling are applied immediately,
allowing their effect to be assessed if the tracker graph viewport is open.
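To see how Filter Frames and Filter Strength interact, consider a simple moving-average blend. The actual filter SynthEyes applies is not specified in this manual; this sketch only illustrates the roles of the two parameters, with invented names:

```python
def finalize_filter(path, filter_frames, strength):
    # Hedged sketch: each sample is blended toward the average of a
    # window of 'filter_frames' samples centered on it, with 'strength'
    # (0..1) controlling how far toward the average it moves. Strength 0
    # leaves the path unchanged; strength 1 applies the filter fully.
    half = filter_frames // 2
    out = []
    for i in range(len(path)):
        lo = max(0, i - half)
        hi = min(len(path), i + half + 1)
        avg = sum(path[lo:hi]) / (hi - lo)
        out.append(path[i] + strength * (avg - path[i]))
    return out

print(finalize_filter([0.0, 1.0, 0.0], 3, 0.0))  # unchanged: [0.0, 1.0, 0.0]
print(finalize_filter([0.0, 1.0, 0.0], 3, 1.0))  # bump smoothed out
```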

Fine-Tuning Panel
Launched from the Track menu.

Fine-tune during auto-track. Checkbox. If checked, the fine-tuning process will
run automatically after auto-tracking.
Key Spacing. Spinner. Requests that there be a key every this many frames
after fine-tuning.
Tracker Size. Spinner. The size of the trackers during and after fine tuning. The
tracker size and search values are the same as on the Tracker panel.
Tracker Aspect. Spinner. The aspect ratio of the trackers during and after fine
tuning.
U Search Size. Spinner. U (horizontal) search area size. Note that because the
fine-tuning starts from the previously-tracked location, the search sizes
can be very small, equivalent to a few pixels.
V Search Size. Spinner. V (vertical) search area size.
Reset. Button. Restore the current settings of the panel to factory values (not the
preferences). Does not change the preferences; to reset the preferences
to the factory values click Reset then Set Prefs.
Get Prefs. Button. Set the current settings to the values stored as preferences.


Set Prefs. Button. Save the current settings as the new preferences.
HiRez. Drop-down. Sets the high-resolution resampling mode used for
supervised tracking (this is the same setting as displayed and controlled
on the Track menu).
All auto-trackers. Radio button. The Run button will work on all auto-trackers.
Selected trackers. Radio button. The Run button will work on only the selected
trackers, typically for testing the parameters.
Make Golden. Checkbox. When on, fine-tuned trackers become 'golden' as if
they had been supervised-tracked initially. When off, they are left as
automatic trackers.
Run. Button. Causes all the trackers, or the selected trackers, to be fine-tuned
immediately, according to the selected parameters. The other way for fine-
tuning to occur is during automatic tracking, if the Fine-tune during auto-
track checkbox is turned on. If run automatically, the top set of
parameters (in the "Overall Parameters" group) apply during the automatic
fine-tune cycle.

Green-Screen Control

Launched from the Summary Control Panel, this dialog causes auto-tracking
to look only within the keyed area for trackers. The key can also be written as an alpha
channel or RGB image by the image preprocessor.
Enable Green Screen Mode. Turns on or off the green screen mode. Turns on
automatically when the dialog is first launched.
Reset to Defaults. Resets the dialog to the initial default values.


Average Key Color. Shows an average value for the key color being looked for.
When the allowable brightness is fairly low, this color may appear darker
than the actual typical key color, for example.
Auto. Sets the hue of the key color automatically by analyzing the current
camera image.
Brightness. The minimum brightness (0..1) of the key color.
Chrominance. The minimum chrominance (0..1) of the key color.
Hue. The center hue of the key color, -180 to +180 degrees.
Hue Tolerance. The tolerance on the matchable hue, in degrees. With a hue of -
135 and a tolerance of 10, hues from -145 to -125 will be matched, for
example.
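That hue test has to wrap around the -180/+180 degree boundary. A minimal sketch, assuming a simple angular-difference test (illustrative names only; the real keyer also applies the brightness and chrominance minimums):

```python
def hue_matches(hue, key_hue, tolerance):
    # Hypothetical sketch of the Hue Tolerance test described above.
    # Computes the signed angular difference on the -180..+180 circle,
    # then accepts hues within +/- tolerance degrees of the key hue.
    diff = (hue - key_hue + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance

# The manual's example: key hue -135, tolerance 10 -> -145..-125 match.
print(hue_matches(-140.0, -135.0, 10.0))  # True
print(hue_matches(-120.0, -135.0, 10.0))  # False
print(hue_matches(179.0, -175.0, 10.0))   # True (wraps around +/-180)
```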
Radius. Radius, in pixels, around a potential feature that will be analyzed to see
if it is within the keyed region (screen).
Coverage. Within the specified radius around the potential feature, this many
percent of the pixels must match the keyed color for the feature to be
accepted.
Scrub Frame. This frame value lets you quickly scrub through the shot to verify
the key settings over the entire shot.

Hard and Soft Lock Controls


This panel is launched by the more button on the Solver control panel, or
the Window/Solver Locking menu item. It displays values for the active camera or
object (on the toolbar), and is unaffected by what camera or object is selected in
the viewports. All of the enables, weights, and values may be animated.


Master Controls
All. Button. Turn on or off all of the position and rotation locks. Shift-right-click to
truncate keys past the current frame. Control-right-click to clear all keys
leaving the object un-locked.
Master weight. Spinner. Set keys on all position and rotation soft-lock weights.
Shift-right-click to truncate keys past the current frame. Control-right-click
to clear all keys (any locked frames will be hard locks).
Back Key. Button. Skip backwards to the previous frame with a lock enable
or weight key (but not seed path key).
Forward Key. Button. Skip forward to the next frame with a lock enable or
weight key (but not seed path key).
Show. Button. When on, the seed path is shown in the main viewports, not the
solved path. Also, the seed field of view/focal length is shown on the Lens
Control panel, instead of the solved value.
Translation Weights
Pos. Button. Turn on or off all position lock enables.
Position Weight. Spinner. Set all position weights.
L/R. Button. Left/right lock enable.
L/R Weight. Spinner. Left/right weight.
F/B. Button. Front/back lock enable.
F/B Weight. Spinner. Front/back weight.
U/D. Button. Up/down lock enable.
U/D Weight. Spinner. Up/down weight.
X Value. Spinner. X value of the seed path at the current frame (regardless of
the Show button).
Y Value. Spinner. Y value of the seed path.
Z Value. Spinner. Z value of the seed path.
Get 1f. Button. Create a position and rotation key on the seed path at the current
frame, based on the solved path.
Get PB. Button. Create position and rotation keys for all frames in the playback
range of the timebar, by copying from the solved path to the seed path.
Get. Button. Copy the entire solved path to the seed path, for all frames
(equivalent to the Blast button on the 3-D panel, except that FOV is never
copied).
Rotation Weights
Rot. Button. Turn on or off all rotation lock enables.
Rot Weight. Spinner. Set all rotation weights.
Pan. Button. Pan angle lock enable.
Pan Weight. Spinner. Pan axis soft-lock weight.
Tilt. Button. Tilt axis lock enable.
Tilt Weight. Spinner. Tilt weight.
Roll. Button. Roll axis lock enable.
Roll Weight. Spinner. Roll soft-lock weight.


Pan Value. Spinner. Pan axis seed path value.


Tilt Value. Spinner. Tilt axis seed path value.
Roll Value. Spinner. Roll axis seed path value.
Field of View/Focal Length Weights
FOV/FL. Button. Field-of-view/focal length lock enable.
FOV/FL Weight. Spinner. FOV/FL soft-lock weight.
FOV Value. Spinner. Field of view seed-path value.
FL Value. Spinner. Focal length seed-path value.
Get 1f. Button. Create a field of view key on the seed path at the current frame
from the solved value.
Get PB. Button. Create field of view keys on the seed path from the solved
values, for the frames in the playback range.
Get. Button. Create field of view keys from the solved values, for all frames.

Stereo Geometry Dialog


Launched from the Shot menu, this modeless dialog adds and controls
constraints on the relative position and orientation of the two cameras in a 3-D
stereo rig. The constraints prevent chatter that could not have occurred in the
actual rig.


Make Keys. Button. When on, keys are created and shown at the current frame.
When off, the value and status on the first frame in the shot are shown—
for non-animated fixed parameters.
Back to Key. Button. Moves back to the previous frame with any stereo-
related key.
Forward to Key. Button. Moves forward to the next frame with any
stereo-related key.
Dominant Camera. Drop-down list. Select which camera, left or right, should be
taken to be the dominant stationary camera; the stereo parameters will
reflect the position of the secondary camera moving relative to the
dominant camera. The Left and Right settings are for rigs where only one
camera toes in to produce vergence; the Center-Left and Center-Right
settings are for rigs where both cameras toe in equally to produce
vergence. When you change dominance, you will be asked if you wish to
switch the direction of the links on the trackers (and solver modes).
Show Actuals. Radio button. When selected, the Actual Values column shows
the actual value of the corresponding parameter on the current frame.
Show Differences. Radio button. When selected, the Actual Values column
shows the difference between the Lock-To Value and the actual value on
the frame.
The following sections describe each of the parameters specifying the
relationship between the two cameras, i.e. one for each row in the stereo
geometry panel. Note that the parameters are all relative, i.e. they do not depend
on the overall position or orientation within the 3-D environment. If you move the
two cameras as a unit, they can be anywhere in 3-D without changing these
parameters. For each parameter, there are a number of columns, which are
specified in the section after this.
Distance. Parameter row. This is the inter-ocular distance between the nodal
points of the two cameras. Note that this value is unit-less; like the rest of
SynthEyes, its units are the same as the rest of the 3-D environment. So if
you want the main 3-D environment to be in feet, you should enter the
inter-ocular distance in feet also.
Direction. Parameter row. Degrees. Direction of the secondary camera relative
to the primary camera, in the coordinate system of the primary camera.
Zero means the secondary is directly beside the primary, a positive value
moves it forward until at 90 degrees it is directly in front of the primary
(though see Elevation, next). However: in Center-Left or Center-Right
mode, the zero-direction changes as a result of vergence to maintain
symmetric toe-in. See other material to help understand that.
Elevation. Parameter row. Degrees. Elevation of the secondary camera relative
to the primary camera, in the coordinate system of the primary camera. At
an elevation of zero degrees, it is at the same relative elevation. At an
elevation of 90 degrees, it would be directly over top of the primary.
Vergence. Parameter row. Degrees. Relative in/out look direction of the two
cameras. At zero, the cameras' axes are parallel (subject to Tilt and Roll
below), and positive values toe in the secondary camera. In center-left or
center-right mode, the direction of the secondary camera changes to
achieve symmetric toe-in.
Tilt. Parameter Row. Degrees. Relative up/down look direction of the secondary
camera relative to the primary. At zero, they are even; as the value
increases, the secondary camera is twisted looking upwards relative to
the primary camera.
Roll. Parameter Row. Degrees. Relative roll of the secondary camera relative to
the primary. At zero, they have no relative roll. Positive values twist the
secondary camera counter-clockwise, as seen from the back.
Description of Parameter Columns
Lock Mode. Selector. Controls the mode and functionality of constraints for this
parameter: As Is, no constraints are added; Known, constraints are
added to force the cameras so that the parameter is the Lock-To value,
which can be animated; Fixed Unknown, the parameter is forced to a
single constant value, which is unknown but determined during the solve;
Varying, the value can be intermittently locked to specific values using the
Lock button and Lock-To value, or intermittently held at a to-be-
determined value by animating a Hold range.
Color. Swatch. Shows the color of the curve in the graph editor.
Channel. Text. Name of the parameter for the row.
Lock. Button. When set, constraints are generated to lock the parameter to the
Lock-To value. Available only in Varying mode. Animated so specific
ranges of frames may be specified. If all frames are to be locked, use
Known mode instead.
Hold. Button. Animated button that forces the parameter to hold a to-be-
determined value for the specific time it is active. For example, animate on
during frames 0-50 to say that vergence is constant at some value during
that time, while allowing it to change after that. Available only for inter-
ocular distance and vergence and only in Varying mode. If Hold should be
on for the entire shot, use Fixed mode instead.
Lock-To Values. Spinner. Shows the value the parameter will be locked to,
animatable. Note that the spinner shows the value at the first frame of the
shot when Make Keys is off.
Actual Values. Text field. Shows the value of the parameter on the current
frame, or the difference between the Lock-To value and the actual value, if
Show Differences is selected.
Weights. Spinner. Animated control over the weight of the generated constraints.
Shows the value at the first frame if Make Keys is off. The value 60 is the
nominal value; the weight increases by a factor of 10 for an increase of 20
in the value (decibels). With a range from 0 to 120, this corresponds to
0.001 to 1000. Hint: if a constraint is not having effect, you will usually do
better reducing the weight, not increasing it. It's like shouting: rarely
effective, and it just annoys people. Unlike on the hard/soft lock panel, a
weight of zero does not create a hard lock. All stereo locks are soft; if the
weight is zero it has no effect.
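The decibel-style mapping stated above (60 nominal, a factor of 10 per 20 units) corresponds to a one-line formula. A sketch, with an invented function name:

```python
def stereo_weight_factor(weight_db):
    # Decibel-style mapping described in the Weights entry: 60 is the
    # nominal value (factor 1.0), and each +20 on the spinner multiplies
    # the effective constraint weight by 10.
    return 10.0 ** ((weight_db - 60.0) / 20.0)

print(stereo_weight_factor(60))   # 1.0
print(stereo_weight_factor(0))    # 0.001
print(stereo_weight_factor(120))  # 1000.0
```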
Less… More… Button. Shows or hides the following set of controls which shift
information back and forth between the stereo channels and the camera
positions.
Get 1f. Button. Gets the actual stereo parameters on the current frame, and
writes them into the Lock-To value spinners. Any parameter in As-Is mode
is not affected!
Get PB. Button. Same as Get 1f, but for the entire playback range (little green
and red triangles on the time bar).
Get All. Button. Same as Get 1f, but for the entire shot.
Move Left Camera Live. Mode Button. The left camera is moved to a position
determined by the right camera and stereo parameters (excluding any that
are As-Is). If you adjust the spinners, the camera will move
correspondingly. The seed path, solve path, or both are affected, see the
checkboxes at bottom.
Move Left Camera Set 1f. Button. The left camera is moved to a position
determined by the right camera and stereo parameters (excluding any that
are As-Is). The seed path, solve path, or both are affected, see the
checkboxes at bottom. Unlike Live, this is a one-shot event each time you
click the button.
Move Left Camera Set PB. Button. Same as Set 1f, but for the playback range.
Move Left Camera Set All. Button. Same as Set 1f, but updates the left camera
for the entire shot. For example, you might want to track the right camera of
a shot by itself; if you have known stereo parameters you can use this
button to instantly generate the left camera path for the entire shot.
Move Both from Center Live/Set 1f/Set PB/Set All. Same as the Left version,
except that the position of the two cameras is averaged to find a center
point, then both cameras are offset half outwards in each direction
(including tilt, roll, etc) to form new positions.
Move Right Camera Live/Set 1f/Set PB/Set All. Same as the Left version,
except the right camera is moved based on the left position and stereo
parameters.
Write seed path. Checkbox. Controls whether or not the Move buttons affect the
seed path. You will need this on if you wish to create hard or soft camera
position locks for a later solve. You can keep it off if you wish to make
temporary fixes. If you write the solve path but not seed path, anything you
do will be erased by the next solve (except in refine mode).
Write solve path. Checkbox. Controls whether or not the Move buttons affect
the solve path. Normally should be on if the camera has already been
solved; keep off if you are generating seed paths. If both Write boxes are
off, the Move buttons will do nothing. If Write seed path is on, Write solve
path is off, and the camera is solved, the Move buttons will be updating
the seed path, but you will not be able to see anything happening—you
will be seeing the solve path unless you select View/Show seed path.

Hold Tracker Preparation Tool


Launched from the Window/Hold Region Tracker Prep menu item to
configure trackers when hold regions are present: regions of tripod-type motion in
a shot.

Apply. Button. The preparation operation is performed.


Undo. Button. Undo the last operation (of any kind).
Preparation Mode
Truncate. Button. Affected trackers are shut down in the interior of any hold
region.
Make Far. Button. Affected trackers are converted to Far, and shut down outside
the hold region, plus the specified overlap.
Clone to Far. Button. Default. Affected trackers are cloned, and the clone
converted to Far with a reduced range as in Make Far.
Convert Some. Button. A specified percentage of trackers is randomly selected
and converted to Far.
Percentage. Spinner. The percentage of trackers converted in Convert Some
mode.
Affected Trackers
Selected. Button. Only selected trackers are affected by the operation.
All. Button. All trackers are affected. (In both options, only automatic, non-far,
trackers are considered).
Transitions Considered

Nearest Current Frame. Button. Only the trackers crossing the transition
nearest to the current frame (within 10 frames) are considered.
All Transitions. Button. Operation is applied across all hold regions.
Combine Cloned Fars. Checkbox. When off (default), a separate cloned far
tracker is created for each hold region. When on, only a single Far tracker
is produced, combining far trackers from all hold regions.
Minimum Length. Spinner. Prevents the creation of tracker fragments smaller
than this threshold. Default=6.
Far Overlap. Spinner. The range of created Far trackers is allowed to extend out
of the hold region, into the adjacent translating-camera region, by this
amount to improve continuity.

Image Preparation Dialog


The image preparation dialog allows the incoming images from disk to be
modified before they are cached in RAM for replay and tracking. The dialog is
launched either from the open-shot dialog, or from the Shot/Image Preparation
menu item.

Like the main SynthEyes user interface, the image preparation dialog has
several tabs, each bringing up a different set of controls. The Stabilize tab is
active above. With the left button pushed, you can review all the tabs quickly.
For more information on this panel, see the Image Preparation and
Stabilization sections.
Warning: you should be sure to set up the cropping and distortion/scale
values before beginning tracking or creating rotosplines. The splines and trackers
do not automatically update to adapt to these changes in the underlying image
structure, which can be complex. Use the Apply/Remove Lens Distortion script
on the main Script menu to adapt to late changes in the distortion value.
Shared Controls
OK. Button. Closes the image preprocessing dialog and flushes no-longer-valid
frames from the RAM buffer to make way for the new version of the shot
images. You can use SynthEyes's main undo button to undo all the effects
of the Image Preprocessing dialog as a unit, then redo them if desired.
Cancel. Button. Undoes the changes made using the image preprocessing
dialog, then closes it.
Undo. Button. Undo the latest change made using the image preprocessing
panel. You can not undo changes made before the panel was opened.
Redo. Button. Redo the last change undone.
Add (checkline). Button. When on, drag in the view to create checklines.
Delete (checkline). Button. Delete the selected checkline.
Final. Button. Reads either Final or Padded: the two display modes of the
viewport. The final view shows the final image coming from the image
preparation subsection. The padded view shows the image after padding
and lens undistortion, but before stabilization or resampling.
Both. Button. Reads either Both, Neither, or ImgPrep, indicating whether the
image prep and/or main SynthEyes display window are updated
simultaneously as you change the image prep controls. Neither mode
saves time if you do not need to see what you are doing. Both mode
allows you to show the Padded view and Final view (in the main camera
view) simultaneously.
Margin. Spinner. Creates an extra off-screen border around the image in the
image prep view. Makes it easier to see and understand what the
stabilizer is doing, in particular.
Show. Button. When enabled, trackers are shown in the image prep view.
Image Prep View. Image display. Shows either the final image produced by the
image prep subsystem (Final mode), or the image obtained after padding
the image and undistorting it (Padded mode). You can drag the Region-of-
interest (ROI) and Point-of-interest (POI) around, plus you can click to
select trackers, or lasso-select by dragging.
Playbar (at bottom)
Preset Manager. Drop-down. Lets you create and control presets for the image
prep system, for example, different presets for the entire shot and for each
moving object in the shot.
Preset Mgr. Disconnect from the current preset; further changes on
the panel will not affect the preset.
New preset. Create and attach to a new preset. You will be
prompted for the name of the new preset.
Reset. Resets the current preset to the initial settings, which do
nothing to the image.

Rename. Prompt for a new name for the current preset.


Delete. Delete the current preset.
Your presets. Selecting your preset will switch to it. Any changes
you then make will affect that preset, unless you later select
the Preset Mgr. item before switching to a different preset.
Rewind. Button. Go back to the beginning of the shot.
Back Key. Button. Go back to the previous frame with a ROI or Levels key.
Back Frame. Button. Go back one frame; with Control down, back one key;
with Shift down, back to the beginning of the shot. Auto-repeats.
Frame. Spinner. The frame to be displayed in the viewport, and to set keys for.
Note that the image does not update while the spinner drags because that
would require fetching all the intermediate frames from disk, which is
largely what we're trying to avoid.
Forward Frame. Button. Go forward one frame; with Control down, forward
one key; with Shift down, forward to the end of the shot. Auto-repeats.
Forward Key. Button. Go forward to the next frame with a ROI or Levels key.
To End. Button. Go to the end of the shot.
Make Keys. Checkbox. When off, any changes to the levels or region of
interest create keys at frame zero (for when they are not animated). With
the checkbox on, keys are created at the current frame.
Enable. Button (stoplight). Allows you to temporarily disable levels, color, blur,
downsampling, channels, and ROI, but not padding or distortion. Use to
find a lost ROI, for example. Effective only within image prep.
Rez Tab
Blur. Spinner. Causes a Gaussian blur with the specified radius, typically to
minimize the effect of grain in film. Applied before down-sampling, so it
can eliminate artifacts.
Hi-Pass. Spinner. When non-zero, creates a high-pass filter using a Gaussian
blur of this radius. Use to handle footage with very variable lighting, such
as explosions and strobes. Radius is usually much larger than typical blur
compensations. Applied before down-sampling.
DownRez. Drop-down list: None, By 1/2, By 1/4. Causes the image from disk to
be reduced in resolution by the specified amount, saving RAM and time
for large film images, but reducing accuracy as well.
Interpolation. Drop-down list: Bi-Linear, 2-Lanczos, 3-Lanczos. The bi-linear
method is fastest but softens the image slightly. If the shot has a lot of
noise, that can be a good thing. The 2-Lanczos filter provides a sharper
result at the cost of more time. The 3-Lanczos filter is sharper still, taking
even more time, and of course it sharpens the noise as well.
Channel. Drop-down list: RGB, Luma, R, G, B, A. Allows a luminance image to
be used for tracking, or an individual channel such as red or green. Blue is
usually noisy; alpha is only for spot-checking the incoming alpha. Using a
single channel can reduce memory consumption by a factor of 3.
Invert. Checkbox. Inverts the RGB image or channel to improve feature visibility.
Channel Depths: Process. 8-bit/16-bit/Float. Radio buttons. Selects the bit
depth used while processing images in the image preprocessor. Note that
Half is intentionally omitted because it is slow to process; use Float for
processing, then store as Half. Same controls as on the Shot Setup dialog.
Channel Depths: Store. 8-bit/16-bit/Half/Float. Radio buttons. Selects the bit
depth used to store images, after pre-processing. You may wish to
process as floats then store as Halfs, for example.
Keep Alpha. Checkbox. Requests that SynthEyes read and store the alpha
channel (always 8-bit) even if SynthEyes will not use it itself—typically so
that it can be saved with the pre-processed version of the sequence.
Mirror Left/Right. Checkbox. Mirror-reverse the image left and right (for some
stereo rigs).
Mirror Top/Bottom. Checkbox. Mirror-reverse the image top and bottom (for
some stereo rigs).
Levels Tab
3-D Color Map. Drop-down selector. Select a 3-D Color Look-Up-Table (LUT) to
use to process the images.
Reload. Button. Forces an immediate reload of the selected color map. Note that
File/Find New Scripts also does a reload of any color maps that have
changed. Either way, reloading a color map will invalidate the image
cache.
High. Spinner. Incoming level that will be mapped to full white in RAM. Changing
the level values will create a key on the current frame if the Make Keys
checkbox is on, so you can dynamically adjust to changes in shot image
levels. Use right-click to delete a key, shift-right-click to truncate keys past
the current frame, and control-right-click to kill all keys. High, Mid, and Low
are all keyed together.
Mid. Spinner. Incoming level that will be mapped to 50% white in RAM. (Controls
the effective gamma.)
Low. Spinner. Incoming level that will be mapped to 0% black in RAM.
Gamma. Spinner. A gamma level corresponding to the relationship between
High, Mid, and Low.
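The High/Mid/Low relationship can be sketched as a generic three-point levels curve: the effective gamma is whatever exponent places the Mid level at 50%. This is a common formulation offered purely for intuition, not necessarily SynthEyes's exact mapping.

```python
import math

def levels(x, low, mid, high):
    """Map an incoming level x so that low -> 0, high -> 1, and
    mid -> 0.5.  A generic three-point levels curve illustrating the
    High/Mid/Low relationship; mid must lie strictly between low
    and high.  Not necessarily SynthEyes's exact formula."""
    t = (x - low) / (high - low)
    t = min(max(t, 0.0), 1.0)   # clamp input range to [0, 1]
    # The gamma that sends the Mid level to 50%:
    gamma = math.log((mid - low) / (high - low)) / math.log(0.5)
    return t ** (1.0 / gamma)

print(levels(0.5, 0.0, 0.5, 1.0))    # mid maps to 0.5
print(levels(0.25, 0.0, 0.25, 1.0))  # a lowered mid still maps to 0.5
```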
Hue. Spinner. Rotates the hue angle +/- 180 degrees. Might be used to line up a
color axis a bit better in advance of selecting a single-channel output.
Saturation. Spinner. Controls the saturation (color gain) of the images, without
affecting overall brightness.
Exposure. Spinner. Controls the brightness, up or down in F-stops (2 stops = a
factor of two). This exposure control affects images written to disk, unlike
the range adjustment on the shot setup panel. This one can be animated,
that one can not.

Cropping Tab
Left Crop. Spinner. The amount of image that was cropped from the left side of
the film.
Width Used. Spinner. The amount of film actually scanned for the image. This
value is not stored permanently; it multiplies the left and right cropping
values. Normally it is 1, so that the left and right crop are the fraction of the
image width that was cropped on that side. But if you have film
measurements in mm, say, you can enter all the measurements in mm
and they will eventually be converted to relative values.
Right Crop. Spinner. The relative amount of the width that was cropped from the
right.
Top Crop. Spinner. The relative amount of the height that was cropped from the top.
Height Used. Spinner. The actual height of the scanned portion of the image,
though this is an arbitrary value.
Bottom Crop. Spinner. The relative amount of the height that was cropped along
the bottom.
Effective Center. 2 Spinners. The optic center falls, by definition, at the center of
the padded-up (uncropped) image. These values show the location of the
optic center in the U and V coordinates of the original image. You can also
change them to achieve a specified center, and corresponding cropping
values will be created.
Maintain original aspect. Checkbox. When checked, changing the effective
image center will be done in a way that maintains the original image
aspect ratio, which minimizes user confusion and workflow impact.
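The relationship between the crop fractions and the effective center can be illustrated as follows. This is a sketch derived from the definition above (the optic center sits at the center of the padded-up image), not SynthEyes's exact code, and the helper name is hypothetical.

```python
def effective_center_u(left_crop, right_crop):
    """U coordinate (0..1 across the original scanned image) of the
    optic center.  Crops are fractions of the original width, per the
    description above.  Illustrative only."""
    # The padded image spans [-left_crop, 1 + right_crop] in original
    # width units; the optic center is at the middle of that span.
    return (1.0 + right_crop - left_crop) / 2.0

print(effective_center_u(0.0, 0.0))  # symmetric crop: center at 0.5
print(effective_center_u(0.1, 0.0))  # left side cropped: center at 0.45
```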
Stabilize Tab
For more information, see the Stabilization section of the manual.
Get Tracks. Button. Acquires the path of all selected trackers and computes a
weighted average of them together to get a single net point-of-interest
track.
Stabilize Axes:
Translation. Dropdown list: None/Filter/Peg. Controls stabilization of the left/right
and up/down axes of the stabilizer, if any. The Filter setting uses the cut
frequency spinner, and is typically used for traveling shots such as a car
driving down a highway, where features come and go. The Pegged
setting causes the initial position of the point of interest on the first frame
to be kept throughout the shot (subject to alteration by the Adjust tracks).
This is typical for shots orbiting a target.
Rotation. Dropdown list: None/Filter/Peg. Controls the stabilization of the
rotation of the image around the point of interest.
Cut Freq(Hz). Spinner. This is the cutoff frequency (cycles/second) for low-pass
filtering when the peg checkbox(es) are off. Any higher frequencies are
attenuated, and the higher they are, the less they will be seen. Higher
values are suitable for removing interlacing or residual vibration from a car
mount, say. Lower values under 1 Hz are needed for hand-held shots.
Note that below a certain frequency, depending on the length of the shot,
further reducing this value will have no effect.
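To build intuition for the cutoff frequency, here is a minimal one-pole low-pass filter applied to a jittery point-of-interest track. SynthEyes's actual filter is certainly more sophisticated; this sketch only illustrates that frequencies above the cutoff are attenuated while slow drift passes through.

```python
import math

def low_pass(samples, cut_hz, rate_hz):
    """One-pole low-pass filter: content above cut_hz is attenuated,
    so only slow motion survives.  Illustrates the Cut Freq(Hz)
    concept; SynthEyes's actual filter differs."""
    rc = 1.0 / (2.0 * math.pi * cut_hz)   # filter time constant
    dt = 1.0 / rate_hz                    # sample period (per frame)
    alpha = dt / (rc + dt)
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A jittery track: fast frame-to-frame wobble riding on a slow drift.
track = [0.01 * i + 0.05 * (-1) ** i for i in range(48)]
smooth = low_pass(track, cut_hz=1.0, rate_hz=24.0)
```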
Auto-Scale. Button. Creates a Delta-Zoom track that is sufficient to ensure that
there are no empty regions in the stabilized image, subject to the
maximum auto-zoom. Can also animate the zoom and create Delta U and
V pans depending on the Animate setting.
Animate. Dropdown list: Neither/Translate/Zoom/Both. Controls whether or not
Auto-Scale is permitted to animate the zoom or delta U/V pan tracks to
stay under the Maximum auto-zoom value. This can help you achieve
stabilization with a smaller zoom value. But, if it is creating an animated
zoom, be sure you set the main SynthEyes lens setting to Zoom.
Maximum auto-zoom. Spinner. The auto-scale will not create a zoom larger
than this. If the zoom is larger, the delta U/V and zoom tracks may be
animated, depending on the Animate setting.
Clear Tracks. Button. Clears the saved point-of-interest track and reference
track, turning off the stabilizer.
Lens Tab
Get Solver FOV. Button. Imports the field of view determined by a SynthEyes
solve cycle, or previously hand-animated on the main SynthEyes lens
panel, placing these field of view values into the stabilizer's FOV track.
Field of View. Spinner. Horizontal angular field of view in degrees. Animatable.
Separate from the solver's FOV track, as found on the main Lens panel.
Focal Length. Spinner. Camera focal length, based on the field of view and back
plate width shown below it. Since plate size is rarely accurately known,
use the field of view value wherever possible.
Plate. Text display. Shows the effective plate size in millimeters and inches. To
change it, close the Image Prep dialog, and select the Shot/Edit Shot
menu item.
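The Field of View, Focal Length, and Plate values are tied together by the standard pinhole relation. The 36 mm plate width below is just an assumed example value, and the helper names are hypothetical.

```python
import math

def fov_from_focal(focal_mm, plate_mm):
    """Horizontal field of view (degrees) from focal length and
    back-plate width: the standard pinhole relation the Field of View
    and Focal Length spinners embody."""
    return math.degrees(2.0 * math.atan(plate_mm / (2.0 * focal_mm)))

def focal_from_fov(fov_deg, plate_mm):
    """Inverse relation: focal length from field of view and plate."""
    return plate_mm / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Example with an assumed 36 mm plate:
print(round(fov_from_focal(50.0, 36.0), 1))  # 39.6 degrees
```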
Get Solver Distort. Button. Brings the distortion coefficient from the main Lens
panel into the image prep system‘s distortion track. Note that while the
main lens distortion can not be animated, this image prep distortion can
be. This button imports the single value, clearing any other keys. You will
be asked if you want to remove the distortion from the main lens panel;
you should usually answer yes to avoid double-distortion.
Distortion. Spinner. Removes this much distortion from the image. You can
determine this coefficient from the alignment lines on the SynthEyes Lens
panel, then transfer it to this Image Preparation spinner. Do this BEFORE
beginning tracking. Can be animated.
Cubic Distort. Spinner. Adjusts more-complex (higher-order) distortion in the
image. Use to fine-tune the corners after adjusting the main distortion at
the middle of the top, bottom, left, and right edges. Can be animated.
Scale. Spinner. Enlarges or reduces the image to compensate for the effect of
the distortion correction. Can be animated.
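For intuition, radial lens distortion is commonly modeled as a polynomial in the distance from the optic center, along the lines below. This is a generic illustration; SynthEyes's internal model and the exact meaning of its Distortion and Cubic Distort coefficients may differ.

```python
def distort(u, v, k2, k3=0.0):
    """Apply radial distortion to a centered, normalized image point:
    r' = r * (1 + k2*r^2 + k3*r^3).  A common polynomial model used
    purely for illustration; not necessarily SynthEyes's formula."""
    r2 = u * u + v * v
    r = r2 ** 0.5
    scale = 1.0 + k2 * r2 + k3 * r2 * r
    return u * scale, v * scale

# With a positive coefficient, points near the edge are pushed outward:
print(distort(0.5, 0.0, 0.1))
```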
Lens Selection. Dropdown. Select a pre-stored lens distortion profile to apply
(instead of the Distortion/Cubic Distort values), or none at all. These curve

331
ADDITIONAL DIALOGS REFERENCE

selections can help you solve fisheye and other complex wide-angle
shots, with proper advance calibration.
Reload. Button. Reloads the currently-selected lens profile from disk. This will
flush all the frames that use the old version of the profile when the image
preprocessor panel is closed. File/Find New Scripts will reload any lens
profile that has changed.
Nominal BPW. Text field, invisible when empty. A nominal back-plate-value
supplied by the lens profile, use at your discretion.
Nominal FL. Text field, invisible when empty. A nominal focal length supplied by
the lens profile, use at your discretion.
Apply distortion. Checkbox. Normally the distortion, scale, and cropping
specified are removed from the shot in preparation for tracking. When this
checkbox is turned on, the distortion, scale, and cropping are applied
instead, typically to reapply distortion to externally-rendered shots to be
written to disk for later compositing.
Adjust Tab
Delta U. Spinner. Shifts the view horizontally during stabilization, allowing the
point-of-interest to be moved. Animated. Allows the stabilization to be
"directed," either to avoid higher zoom factors, or for pan/scan operations.
Note that the shift is in 3-D, and depends on the lens field of view.
Delta V. Spinner. Shifts the view vertically during stabilization. Animated.
Delta Rot. Spinner. Degrees. Rotates the view during stabilization. Animated.
Delta Zoom. Spinner. Zooms in and out of the image. At a value of 1.0, pixels
are the same size coming in and going out. At a value of 2.0, pixels are
twice the size, reducing the field of view and image quality. This value
should stay down in the 1.10-1.20 range (10-20% zoom) to minimize
impact on image quality. Animated. Note that the Auto-Scale button
overwrites this track.
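The effect of a Delta Zoom on the effective field of view follows from the pinhole model: zooming by a factor z keeps only the central 1/z of the image plane. A sketch for intuition, not a SynthEyes formula:

```python
import math

def zoomed_fov(fov_deg, zoom):
    """Effective field of view after zooming in by 'zoom': only the
    central 1/zoom of the image plane remains visible.  Illustrative
    pinhole relation."""
    half = math.tan(math.radians(fov_deg) / 2.0) / zoom
    return math.degrees(2.0 * math.atan(half))

print(zoomed_fov(40.0, 1.0))   # zoom of 1.0 leaves the FOV unchanged
print(zoomed_fov(40.0, 1.15))  # a 15% zoom trims the view somewhat
```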
Output Tab
Resample. Checkbox. When turned on, the image prep output can be at a
different resolution and aspect than the source. For example, a 3K 4:3 film
scan might be padded up to restore the image center, then panned and
scanned in 3-D and resampled to produce a 16:9 1080p HD image.
New Width. Spinner. When resampling is enabled, the new width of the output
image.
New Height. Spinner. The new height of the resampled image.
New Aspect. Spinner. The new aspect ratio of the resampled image. The
resampled width is always the full width of the zoomed image being used,
so this aspect ratio winds up controlling the height of the region of the
original being used. Try it in "Padded" mode and you'll see.
4:3. Button. A convenience button, sets the new aspect ratio spinner to 1.333.
16:9. Button. More convenience, sets the new aspect ratio to 1.778.
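The convenience buttons simply encode the width:height ratio, and, assuming square pixels (an assumption), the output height follows from the width and aspect:

```python
def aspect(w, h):
    # Aspect ratio is just width over height.
    return w / h

print(round(aspect(4, 3), 3))   # 1.333 -- the 4:3 button
print(round(aspect(16, 9), 3))  # 1.778 -- the 16:9 button

def resample_height(new_width, new_aspect):
    """Height implied by a width and aspect ratio, assuming square
    pixels (an assumption; anamorphic footage needs adjustment)."""
    return round(new_width / new_aspect)

print(resample_height(1920, 16 / 9))  # 1080
```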
Save Sequence. Button. Brings up a dialog which allows the entire modified
image sequence to be saved back to disk.

Apply to Trkers. Button. Applies the effect of the selected padding, distortion, or
stabilization to all the tracking data, so that tracking data originally created
on the raw image will be updated to correspond to the present image
preprocessor output. Used to avoid retracking after padding, changing
distortion, or stabilizing a shot. Do not hit more than once!
Padding. Checkbox. Apply/remove the effect of the cropping/padding.
Distortion. Checkbox. Apply/remove the effect of lens distortion. If Padding and
Stabilization are on, Distortion should be on also.
Stabilization. Checkbox. Apply/remove the effect of stabilization.
Remove f/Trkers. Button. Undoes the effect of the selected operations, to get
coordinates that are closer to, or correspond directly to, the original image.
Use to remove the effect of earlier operations from tracking data before
changing the image preprocessor setup, to avoid retracking.
Region of Interest (ROI)
Hor. Ctr., Ver. Ctr. Spinners. These are the horizontal and vertical center
position of the region of interest, ranging from -1 to +1. These tracks are
animated, and keys will be set when the Make Keys checkbox is on.
Normally set by dragging in the view window. A smaller ROI will require
less RAM, allowing more frames to be stored for real-time playback. Use
right-click to delete a key, shift-right-click to truncate keys past the current
frame, and control-right-click to kill all keys.
Half Width, Half Height. Spinners. The width and height of the region of interest,
where 0 is completely skinny, and 1 is the entire width or height. They are
called Half Width and Height because with the center at 0, a width of 1
goes from -1 to +1 in U,V coordinates. Use Control-Drag in the viewport to
change the width and height. Keyed simultaneously with the center
positions. Use right-click to delete a key, shift-right-click to truncate keys
past the current frame, and control-right-click to kill all keys.
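The -1..+1 center and 0..1 half-size convention described above can be converted to pixel bounds as follows. This helper is hypothetical and only illustrates the coordinate convention along one axis; the vertical axis is assumed to follow the same convention.

```python
def roi_to_pixels(hor_ctr, half_width, image_width):
    """Convert an ROI center (-1..+1) and half-width (0..1) into pixel
    bounds along one axis, per the convention above: center 0 with
    half-width 1 spans the full image (-1..+1)."""
    x0 = (hor_ctr - half_width + 1.0) / 2.0 * image_width
    x1 = (hor_ctr + half_width + 1.0) / 2.0 * image_width
    return x0, x1

print(roi_to_pixels(0.0, 1.0, 1920))   # full width: (0.0, 1920.0)
print(roi_to_pixels(0.5, 0.25, 1920))  # right-of-center strip
```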
Save Processed Image Sequence Dialog
Launched from the Save Sequence button on the Output tab.

… (ellipsis, dot dot dot) Button. Click this to set the output file name to write the
sequence to. Make sure to select the desired file type as you do this.
When writing an image sequence, include the number of zeroes you wish
in the resulting sequence file names. For example, seq0000 will be a four-
digit image number, starting at zero, while seq1 will have a varying
number of digits, starting from 1.
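The numbering rule can be mimicked in a few lines; the helper and file names are hypothetical, purely to illustrate how the digits in the name determine the zero-padding and start number.

```python
def frame_name(base, start, width, frame):
    """Build a sequence file name per the rule above: 'seq0000' means
    4-digit zero-padded numbers starting at 0, while 'seq1' means
    unpadded numbers starting at 1.  Hypothetical helper."""
    return "%s%0*d" % (base, width, start + frame)

print(frame_name("seq", 0, 4, 0))   # seq0000
print(frame_name("seq", 0, 4, 12))  # seq0012
print(frame_name("seq", 1, 1, 41))  # seq42
```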
Compression Settings. Button. Click to set the desired compression settings,
after setting up the file name and type. Subtle non-SynthEyes Quicktime
"feature": the H.264 codec requires that the "Key Frame every … frames"
checkbox in its compression settings be turned off; otherwise the
codec produces only a single frame! Also, be sure to set compression for
Quicktime movies; there is no default compression set by Quicktime.
RGB Included. Checkbox. Include the RGB channels in the files produced
(should usually be on).
Alpha Included. Checkbox. Include the alpha channel in the output. Can be
turned on only if the output format permits it. If the input images do not
contain alpha data, it will be generated from the roto-splines and/or green-
screen key. Or, after you turn off the RGB Included checkbox, you can turn
on the Alpha Included checkbox alone, and the alpha-channel data produced
from the roto-splines/green-screen key will be converted to RGB data and
written. This feature allows a normal black/white alpha-channel image to
be produced even for formats that do not support alpha information, or for
other applications that require separate alpha data.
Start. Button. Get going…
Close/Cancel. Button. Close: saves the filename and settings, then closes the
dialog. While running, it changes to Cancel: stops when next convenient.
For image sequences on multi-core processors, this can be several frames
later because frames are being generated in parallel.

Spinal Editing Control


Launched by the Window/Spinal Editing menu item. See Spinal Editing.

Off/Align/Solve. Button. Controls the mode in which the spinal editing features
run, if at all. In align mode, the scene is re-aligned after a change. In solve
mode, a "refine" solve cycle is run after a change.
Finish. Button. Used to finish a refine solve cycle that was truncated to maintain
response time. Equivalent to the Go button on the solver control panel.
Lock Weight. Spinner. This weight is applied to create a soft-lock key on each
applicable channel when the camera or object is moved or rotated. When
this spinner is dragged, the solver will run in Solve mode, so you can
interactively adjust the key weight.
Drag time (sec). Spinner. (Solve mode only.) Refine cycles will automatically be
stopped after this duration, to maintain an interactive response rate. If
zero, there will be no refine cycles during drag.
Time at release. Spinner. (Solve mode only.) An additional refine operation will
run at the completion of a drag, lasting for up to this duration. If zero, there
will not be a solve cycle at the completion of dragging (i.e., if the drag time
is long enough for a complete solve already).
Update ZWTs, lights, etc on drag. Checkbox. If enabled, ZWTs, lights, etc will
be updated as the camera is dragged, instead of only at the end.
Message area. Text. A text area displays the results of a solve cycle, including
the number of iterations, whether it completed or was stopped, and the
RMS error. In align mode, a total figure of merit is shown reflecting the
extent to which the constraints could be satisfied—the value will be very
small, unless the constraints are contradictory.
Preferences Controls
The spinal settings are stored in the scene file. When a new scene is
created, the spinal settings are initialized from a set of preferences. These
preferences are controlled directly from this panel, not from the preferences
panel.

Set Prefs. Button. Stores the current settings as the preferences to use for new
scenes.
Get from Prefs. Button. Reloads the current scene's settings from the
preferences.
Restore Defaults. Button. Resets the current scene's settings to factory default
values. They are not necessarily the same as the current preferences, nor
are these values automatically saved as the preferences: you can hit Set
Prefs if that is your intent.

Viewport Features Reference
This section describes the mouse actions that can be performed within
various display windows. There are separate major sections for the graph editor
and the perspective view.
Most windows use the middle mouse button—pushing on the scroll
wheel—to pan. This can be difficult on trackballs or on Mac OSX with Microsoft's
Intellipoint mouse driver installed. There is a preferences setting, No middle-
mouse button, that you can enable to use ALT/Command-Left-drag to pan
instead. When this option is selected, the ALT/Command-Left-click combination,
which links trackers together, is selected using ALT/Command-Right-click
instead.
If you are using a tablet, you must turn off the Enable cursor wrap
checkbox on the preferences panel.

Timing Bar
The timing bar shows valid regions and keys for trackers, roto masks, etc,
depending on what is currently selected, and the active panel. Shows hold
regions with magenta bars at the top of the frames.
Green triangle: start of replay loop. Left-drag.
Red triangle: end of replay loop. Left-drag.
Left Mouse: Click or drag the current frame. Drag the start and end of the replay
loop. Shift-drag to change the overall starting or ending frame. Control-
shift-drag to change the end frame, even past the end of the shot (useful
when the shot is no longer available).
Middle Mouse: Drag to pan the time bar left and right.
Middle Scroll: Scroll the current time. Shift-scroll to zoom the time bar.
Right Mouse: Horizontal drag to pan time bar, vertical drag to zoom time bar. Or,
right click cancels an ongoing left or middle-mouse operation.

Camera Window
The camera view can be floated with the Window/Floating camera menu
item.
Left Mouse: Click to select and drag a tracker, or create a tracker if the Tracker
panel's create button is lit. Shift-click to include or exclude a tracker from
the existing selection set. Drag to lasso 2-D trackers, control-drag to lasso
both the 2-D trackers and any 3-D points. ALT-Left-Click (Mac: Command-
Left-Click) to link to a tracker, when the Tracker 3-D panel is displayed.
Click the marker for a tracker on a different object, to switch to that object.
Drag a Lens panel alignment line. Click on nothing to clear the selection
set. If a single tracker is selected, and the Z or apostrophe/double-quote
key is pressed, pushing the left mouse button will place the tracker at the
mouse location (and allow it to be dragged to be fine-tuned). Or, drag a
tracker's size or search region handles.
Middle Mouse Scroll: Zoom in and out about the cursor. (See mouse
preferences discussion above.)
Right Mouse: Drag vertically to zoom. Or, cancel a left or middle button action in
progress.

Tracker Interior View (on the Tracker Control Panel)


Left Mouse: Drag the tracker location.
Middle Scroll: Advance the current frame, tracking as you go.
Right Mouse: Add or remove a position key at the current frame. Or, cancel a
drag in progress.

3-D Viewport
Left Mouse: Click and Drag repeatedly to create an object, when the 3-D Panel's
Create button is lit. ALT-Left-Click (Mac: Command-Left-Click) to link to a
tracker, when the Tracker 3-D panel is displayed. Drag a lasso to select
multiple trackers. Or, move, rotate, or scale an object, depending on the
tool last selected on the 3-D Panel.
Middle Mouse: Drag to pan the viewport. (See mouse preferences discussion
above.)
Middle Scroll: Zoom the viewport.
Right Mouse: Drag vertically to zoom the viewport. Or, cancel an ongoing left or
middle-mouse operation.

Constrained Points Viewport


Left Mouse: Click to select a tracker. Shift-drag to add trackers to the selection
set. Control-click to invert a tracker's selection status. When selected, a
tracker will flash in the camera and 3-D views. Clicking towards the right,
over a linked tracker, will flash that tracker instead.
Middle Mouse: Vertical pan.
Middle Scroll: Advance the current frame.
Right Mouse: Cancel an ongoing left or middle-mouse operation.

Graph Editor Reference

The graph editor can be launched from the Graph Editor button on the
main toolbar, the Window/Graph Editor menu item, or the F7 key. It can also
appear as a viewport in a layout. The graph editor contains many buttons; they
have extensive tooltips to help you identify the function and features.
It has two major modes, graphs and tracks, as these examples show:
Tracks Mode:

Tracker 7 is unlocked and selected in the main user interface, and a
selection of keys from trackers 6, 7, and 9 is selected in the graph editor. While
the other trackers are automatic, #7 is now supervised and tracks in the forward
direction (note the directionality in the key markers). The current frame # is off to
the left, before frame 35.

Graphs Mode:

The capture shows a graph display of Camera01. The red, green, and
blue traces are the solved camera X, Y, and Z velocities, though you would
have to expose the solved-velocity node to tell. The magenta trace, with key
marks every frame, is a field-of-view curve from a zoom shot. The time area is in
scroll mode, the graph shows frames 62 to 130, and we are on frame 117.
Hint: this panel does a lot of different things. If you only read this, you will
probably not understand exactly what everything does or why. We could
go on at length trying to describe everything exactly, to no purpose. Keep
alert for what SynthEyes can do, and try it inside SynthEyes—you will
understand it much better.

Shared Features in All Modes


Main Buttons
Buttons are shown below in their unselected state. They have a green rim
when they are selected.

Tracks Mode. Switch the graph editor to the tracks mode.


Graphs Mode. Switch to graphs (curves) mode.
Alpha, Error, Time Sort. Sort trackers in a modified alphabetical order, by
the error after solving, or by time. The button sequences through these
three modes.

Selected Only. When on, only the selected trackers appear in the "Active
Trackers" node of the graph editor—it changes to read "Selected
Trackers" instead.
Reset Time. The time slider is adjusted to display the whole shot within the
visible width of the graph editor.
Toolbar Display Mode. Clicking this button will show or hide the toolbar,
leaving only the time slider area shown at the bottom. Right clicking this
button will close both the tool and time areas—a minimal view for when
the graph editor is embedded as a viewport, instead of floating. Click in
the small gutter area at bottom to re-display the time and tools, or right-
click at bottom to re-display only the time area.

Show Status Background. When on, a colored background is shown that
indicates whether the number of trackers visible on that frame is
satisfactory. The count is different for translating cameras, tripod shots,
and within hold regions. The "safe count" configured on the preferences
panel is taken into account; above that, the background is white/gray.
Below the safe count, it turns a shade of green. At fewer trackers, it turns
yellowish on marginal levels, or reddish for unacceptable tracker counts.
See also the #Normal and #Far data channels of the Active Trackers
node.
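The classification above can be pictured with a small sketch. The specific counts here are illustrative assumptions, not SynthEyes's actual values: the real safe count comes from the preferences panel, and the lower thresholds vary with the shot type.

```python
def status_color(count, safe_count=12, marginal=6, minimum=4):
    """Classify a per-frame tracker count into a background shade.

    Thresholds are illustrative only; SynthEyes reads the safe count
    from the preferences panel and uses different counts for
    translating cameras, tripod shots, and hold regions.
    """
    if count >= safe_count:
        return "white"   # at or above the safe count
    if count >= marginal:
        return "green"   # below the safe count, still acceptable
    if count >= minimum:
        return "yellow"  # marginal level
    return "red"         # unacceptable tracker count
```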

Squish Tracks. [Only in tracks mode.] When on, all the tracks are
squished vertically to fit into the visible canvas area. This is a great way to
see an overview quickly. The display has three states: off, with keys, and
without keys. Clicking the button sequences through the three states;
right-clicking sequences in the reverse direction.
Draw Selected. [Only in graphs mode.] When on (the normal state), the
curves of all selected or open nodes are drawn. When off, only open
nodes are drawn.
Time Slider
The graph editor time slider has two modes, controlled by the selector icon
at left in the images below.

Slider mode. The slider locks up with the canvas area above it, showing
only the displayed range of times.

Scroll mode. The slider area always shows the entire length of the shot.
The dark gray box (scroll knob) shows the portion displayed in the
canvas.

In the time slider mode:


 left-click or –drag to change the current time.
 Middle-drag to pan the canvas, or
 right-drag to zoom/pan the time axis (same as in the canvas area
and main SynthEyes time bar).
In the time scroll mode:
 left-drag inside the gray scroll knob to drag the region being
displayed (panning the canvas opposite from usual),
 left-drag the blue current-time marker to change the time,
 left-click outside the knob to ―page‖ left or right,
 double-click to center the knob at a specific frame,
 middle-drag to pan the canvas in the usual way, or
 right-click to expand to display the entire shot.
Left Hierarchy Scroll
This is the scroll bar along the left edge of the graph editor in both graph
and tracks modes. In the hierarchy scroll:
 left-drag inside the knob to move it and pan the hierarchy vertically,
 left-click outside the knob to page up or down,
 right-click to HOME to the top, or
 double-click to center on that location.
The interior of the entire height of the scroll bar shows where nodes are
selected or open, even though they are not currently displayed. You can rapidly
see any of those open nodes by clicking at that spot on the scroll bar.
Hierarchy/Canvas Gutter
A small gutter area between the hierarchy and canvas areas lets you
expand the hierarchy area to show longer tracker names, or compress it
so that it cannot be seen at all, saving space if the graph editor is
embedded in a complex layout.
Note that the gutter cannot be seen directly; it starts at the right edge of
the white border behind selected hierarchy nodes, and the cursor will change
shape to a left/right drag cursor when over it.

Tracks Mode
Hierarchy Area
The middle-mouse scroll wheel scrolls the hierarchy area vertically.
Disclosure Triangle. Click to expose or hide the nodes or tracks under
this node.
Visibility. Show or do not show the node (tracker or mesh) in the viewports.

Color. Has the following modes for trackers; only the last applies to other
node types:
 shift-click to add trackers with this color to the selection set,
 control-click on the color square of an unselected tracker to select
all trackers of this color,
 control-click on the color square of a selected tracker to unselect all
trackers of this color, or
 double-click to set the color of the node (tracker or mesh).
Lock. Lock or unlock the tracker.
Enable. Enable or disable the tracker or spline.
Tracker Name. Selected nodes have a white background. Only some types of
nodes can be selected, corresponding to what can be selected in SynthEyes‘s
viewports. In the following list, keep in mind that only one of most objects can be
selected at a time; only trackers can be multi-selected.
 click or drag to select one node (updating all the other views),
 control-click or drag to toggle the selection,
 control-shift-drag to clear a range of selections,
 shift-click to select an additional tracker,
 shift-click an already-selected tracker to select the range of trackers
from this one to the nearest selected one, or
 double-click to change the name of a node (if allowed).
Include in Composite. When on, keys on this track are included in the
composite track of its parent (and possibly in the grandparent, great-
grandparent, etc.). The 'off' key of an enable track is never included in a
composite track.
Mouse Modes
The mouse mode buttons at the bottom center control what the mouse buttons
do in the canvas area. Common operations shared by all modes:
 Middle-mouse pan,
 Middle-scroll to change the current frame and pan if needed.
 Shift-middle-scroll to zoom the time axis
 Right-drag to zoom or pan the time axis (like the main timebar)
 Right-click to bring up the track mode‘s canvas menu.

Select Keys. The shared operations plus:


 Left-click a key to select it,
 Left-drag a box to select all the keys in the box,
 Shift-left-click or –drag to add to the selected key set,
 Control-left-click or –drag to remove from the selected key set.

Re-time Keys. The shared operations plus:


 Left-click a key to select it,
 Left-drag a box to select all the keys in the box,
 Left-drag selected keys to re-time them (shift them in time),
 Control-left-drag to clone the selected keys and drag them to a new
frame,
 Alt-left-drag to include keys on all tracks sharing keys.
 Double-click keys to bring up the Set Key Values dialog.

Add Keys. The shared operations plus:


 Left-click a key to select it,
 Left-click a location where there is no key to add one.
 Left-drag a box to add keys at all possible key locations within the
box. The value will be determined by interpolating the existing curve at the
time the key is added.
 Shift-left-click to add to the selected key set,
 Double-click keys to bring up the Set Key Values dialog.

Delete Keys. The shared operations plus:


 Left-click a key to delete it,
 Left-drag a region; all keys inside it that can be deleted will be
deleted.
Squish Mode. This mode activates automatically when you select squish mode
with the keys not shown (see Shared Features, above). With no keys
shown, the key manipulation modes do not make sense. Instead, the
following mode, adapted from the hierarchy's name area, is in effect:
 click or drag to select and flash one node,
 control-click or drag to toggle the selection,
 control-shift-drag to clear a range of selections,
 shift-click to select an additional tracker,
 shift-click an already-selected tracker to select the range of trackers
from this one to the nearest selected one.
Hierarchy Menu (Tracks mode)
This menu appears when you right-click in the hierarchy area. Note that
some menu items pay attention to the mouse location when you right-click.
Home. Scrolls the hierarchy up to the top.
End. Scrolls the hierarchy to the end.
Close except this. Closes all the other nodes except the right-clicked one.
Close all. Closes all nodes except the top-level Scene.

Expose recursive. Exposes the clicked-on node, and all its children.
Close recursive. Closes the clicked-on node, and all its children.
Expose selected. Exposes all selected nodes.
Close selected. Closes all selected nodes.
Delete clicked. Deletes the node you right-clicked on. Note: the delete key (on
the keyboard) deletes keys, not nodes, in both the canvas and hierarchy
areas.
View Controls. The following items appear in the View Controls submenu,
abbreviated as v.c. Note that most have equivalent buttons, but these are
useful when the buttons are hidden.
v.c./To Graph Mode. Change the graph editor to graphs mode.
v.c./Sort Alphabetic. Sort trackers alphabetically (modified).
v.c./Sort By Error. Sort trackers by average error.
v.c./Sort By Time. Sort trackers by their start and end times (or end and start
times, if the playback direction is set to backwards).
v.c./List only selected trackers. List only the selected trackers; the 'Active
Trackers' node changes to 'Selected Trackers.'
v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared; this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.
Canvas Menu (Tracks mode)
The canvas menu is obtained by right-clicking (without a drag) within the
canvas area. Many of the functions have icons in the main user interface, but the
menu can be handy when the toolbars are closed, and it also allows keyboard
commands to be set up. There are two submenus, Mode (abbreviated m.) and
View Controls (abbreviated v.c.).
m./Select. Go to select-keys mouse mode.
m./Time. Go to re-time keys mouse mode.
m./Add Keys. Go to add keys mouse mode.
m./Delete Keys. Go to delete keys mouse mode.
m./To Graph Mode. Change to graph mode.
Reset time axis. Reset the time axis so the entire length of the shot is shown.
Squish vertically. Squish the tracks vertically so they all can be seen. The keys
will still be shown and can be selected, though if there are many tracks this may
be hard.
Squish with no keys. Squish the tracks vertically, and do not show the keys.
Use the simplified hierarchy-type mouse mode to select trackers.
Squish off. Turn off squish mode.

Delete Selected Keys(all). Deletes selected keys outright, including in
shared-key channel groups. Deleting a camera X key will delete keys on Y
and Z also. See the graph editor right-click menu for different versions.
Delete Selected Trackers. Deletes selected trackers.
Approximate Keys. Replaces the selected keys with a smaller number that
approximate the original curve.
Exactify trackers. Replaces selected tracker position keys with new values
based on the solved 3-D position of the tracker—same as the Exact button on
the Tracker Panel.
v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared; this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.

Graphs Mode
Hierarchy Area
Disclosure Triangle. Click to expose or hide the nodes or tracks under
this node.
Visibility. Show or do not show the node (tracker or mesh) in the viewports.
Color (node). Has the following modes for trackers; only the last applies to
other node types:
 shift-click to add trackers with this color to the selection set,
 control-click on the color square of an unselected tracker to select
all trackers of this color,
 control-click on the color square of a selected tracker to unselect all
trackers of this color, or
 double-click to set the color of the node (tracker or mesh).
Lock. Lock or unlock the tracker.
Enable. Enable or disable the tracker or spline.
Tracker Name. Selected nodes have a white background. Only some types of
nodes can be selected, corresponding to what can be selected in SynthEyes‘s
viewports. In the following list, keep in mind that only one of most objects can be
selected at a time; only trackers can be multi-selected.
 click or drag to select one node (updating all the other views),
 control-click or drag to toggle the selection,
 control-shift-drag to clear a range of selections,

 shift-click to select an additional tracker,


 shift-click an already-selected tracker to select the range of trackers
from this one to the nearest selected one, or
 double-click to change the name of a node (if allowed).
Show Channel(s). When on, the channel's graph is drawn in the canvas area.
On a node, this controls all the channels of the node; the control may be
fully on, partially on (fainter, with no middle dot), or off (hollow, with no
green or dot).
Zoom Channel. Controls the vertical zoom of this channel, and all others of
the same type: they are always zoomed the same to keep the values
comparable.
 Left-click to see all related channels (their zoom icons will light up)
and see the zero level of the channel in the canvas area, and see the
range of values displayed on the status line.
 Left-drag to change the scale. It will change the offset to keep the
data visible—hold the ALT key to keep the data visible over the entire
length of the shot.
 Right-click to reset the zoom and offsets to their initial values.
 Double-click to auto-zoom each channel in the same group so that
they have the same scale and same offsets. Compare to double-clicking
the pan icon.
 Shift-double-click auto-zooms all displayed channels, not just this
group.
 Alt-double-click auto-zooms over the entire length of the shot, not
just the currently-displayed portion. Can be combined with shift.
Pan Channel. Pans all channels of this type vertically.
 Left-click to see the zero level of the channel in the canvas, and to
show the minimum/maximum values displayed on the status line.
 Left-drag to pan the channels vertically.
 Right-click to reset the offset to zero.
 Double-click to auto-zoom each channel in the same group so that
they have the same scale but different offsets. Compare to double-
clicking the zoom icon.
 Shift-double-click auto-zooms all displayed channels, not just this
group.
 Alt-double-click auto-zooms over the entire length of the shot, not
just the currently-displayed portion. Can be combined with shift.
Color (channel). Controls the color of this channel, as drawn in the canvas:
 double-click to change the color for this exact node and channel
only, for example, only for Tracker23,
 shift-double-click to change the preference for all channels of this
type, or

 right-click to change the color back to its preference setting.


Mouse Modes
The mouse mode buttons at the bottom center control what the mouse buttons
do in the canvas area. Common operations shared by all modes:
 Middle-mouse pan,
 Middle-scroll to change the current frame and pan if needed.
 Shift-middle-scroll to zoom the time axis
 Right-drag to zoom or pan the time axis (like the main timebar)
 Right-click to bring up the canvas menu.

Select Keys. The shared operations at top plus:


 Left-click a key to select it,
 Left-drag a box to select all the keys in the box,
 Shift-left-click or –drag to add to the selected key set,
 Control-left-click or –drag to remove from the selected key set.

Set Value. The shared operations at top plus:


 Left-click a key to select it,
 Left-drag a box (starting in empty space) to select all the keys in the
box,
 Shift-left-click or –drag to add to the selected key set,
 Control-left-click or –drag to remove from the selected key set.
 Left-drag a key or selected keys vertically to change their values.
 Double-click a key or selected keys to bring up the Set Key Values
dialog and set or offset their values numerically.

Re-time Keys. The shared operations at top plus:


 Left-click a key to select it,
 Left-drag a box to select all the keys in the box,
 Left-drag selected keys to re-time them (shift them in time),
 Control-left-drag to clone the selected keys and drag them to a new
frame,
 Alt-left-drag to include keys on all tracks sharing keys with the
selected ones.
 Double-click keys to bring up the Set Key Values dialog.

Add Keys. The shared operations at top plus:


 Left-click a key to select it,
 Shift-left-click on a key to add to the selected key set.
 Control-left-click on a key to remove it from the selected key set.
 Left-click on a curve to add a key at that location.

 Left-drag a box in empty space to add keys at all possible key
locations within the box. The value will be determined by interpolating the
existing curve at the time the key is added.
 Double-click keys to bring up the Set Key Values dialog.

Delete Keys. The shared operations at top plus:


 Left-click a key to delete it,
 Left-drag a region; all keys inside it that can be deleted will be
deleted.

Deglitch. The shared operations at top plus:


 Left-click a curve or key to fix a glitch by averaging, or by truncating
if it is the beginning or end of the curve. Warning: do not try to deglitch the
first frame of a velocity curve—it is the second frame of the actual data.
Turn on the position curve instead.
 Control-left-drag to isolate on the curve under the mouse cursor.
(Temporarily enters isolate mode.)
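The averaging-or-truncating repair can be sketched like this; it is a hypothetical stand-in operating on a plain list of samples, not SynthEyes's actual deglitch code:

```python
def deglitch(values, i):
    """Fix a glitched sample: average its two neighbors, or, at
    either end of the curve, copy the adjacent sample (truncation).
    Illustrative sketch only."""
    fixed = list(values)
    if i == 0:
        fixed[0] = values[1]        # truncate at the start
    elif i == len(values) - 1:
        fixed[-1] = values[-2]      # truncate at the end
    else:
        fixed[i] = (values[i - 1] + values[i + 1]) / 2.0
    return fixed
```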

Isolate. Intended to be used when all trackers are selected and displayed.
The shared operations at top plus:
 Left-click or -drag on a curve or key to isolate only that tracker, by
selecting it and unselecting all the others. Keep the left mouse button
down and roam around to quickly look at different tracker curves.
 Right-clicking the isolate button at any time selects all the
trackers, even if isolate mode is not active.

Zoom. The shared operations at top (except as noted) plus:


 Left-drag an area, then release; the channel zooms and offsets are
changed to display only the dragged region. This simulates zooming the
canvas, but it is the zoom and pan of the individual channels that
change.
 Right-clicking the zoom button resets the pans and zooms—even
if the zoom button is not active.
Hierarchy Menu (Graph mode)
This menu appears when you right-click in the hierarchy area. Note that
some menu items pay attention to the mouse location when you right-click.
Home. Scrolls the hierarchy up to the top.
End. Scrolls the hierarchy to the end.
Hide these curves. Turns off the display of all data channels of the node that
was right-clicked.
Close except this. Closes all the other nodes except the right-clicked one.

Close all. Closes all nodes except the top-level Scene.


Expose recursive. Exposes the clicked-on node, and all its children.
Close recursive. Closes the clicked-on node, and all its children.
Expose selected. Exposes all selected nodes.
Close selected. Closes all selected nodes.
Delete clicked. Deletes the node you right-clicked on. Note: the delete key (on
the keyboard) deletes keys, not nodes, in both the canvas and hierarchy
areas.
View Controls. The following items appear in the View Controls submenu,
abbreviated as v.c. Note that most have equivalent buttons, but these are
useful when the buttons are hidden.
v.c./To Tracks Mode. Change the graph editor to tracks mode.
v.c./Sort Alphabetic. Sort trackers alphabetically (modified).
v.c./Sort By Error. Sort trackers by average error.
v.c./Sort By Time. Sort trackers by their start and end times (or end and start
times, if the playback direction is set to backwards).
v.c./List only selected trackers. List only the selected trackers; the 'Active
Trackers' node changes to 'Selected Trackers.'
v.c./Draw all selected nodes. Controls whether or not selected nodes are
drawn, equivalent to the button on the user interface.
v.c./Snap channels to grid. Controls whether or not channels being panned
have their origin (zero value) snapped onto one of the horizontal grid lines.
v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared; this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.
Canvas Menu (Graph mode)
The canvas menu is obtained by right-clicking (without a drag) within the
canvas area. Many of the functions have icons in the main user interface, but the
menu can be handy when the toolbars are closed, and it also allows keyboard
commands to be set up. There are two submenus, Mode (abbreviated m.) and
View Controls (abbreviated v.c.).
m./Select. Go to select-keys mouse mode.
m./Value. Go to set-value mouse mode.
m./Time. Go to re-time keys mouse mode.
m./Add Keys. Go to add keys mouse mode.
m./Delete Keys. Go to delete keys mouse mode.
m./Deglitch. Go to deglitch mouse mode.
m./Isolate On. Go to isolate mouse mode.
m./Zoom. Go to zoom mouse mode.

m./To Tracks Mode. Change to tracks mode.


Reset time axis. Reset the time axis so the entire length of the shot is shown.
Reset all channel zooms. Resets all channels to their nominal unzoomed
range.
Set to Linear Key. Sets all selected keys to be linear (corners).
Set to Smooth Key. Sets all selected keys to be smooth (spline).
Delete Selected Keys-only. Delete only the selected keys, which may require
replacing a value instead of deleting the key. For example, suppose you delete
the X key of a camera path while Y and Z still have keys there. A new value is
computed for X (what X would be if there were no key), and since the frame
must still have a key, that computed value is used.
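The keys-only behavior can be illustrated on a toy shared-key channel. The data layout and the use of linear interpolation are assumptions for illustration; SynthEyes's internal representation differs:

```python
def delete_key_value_only(times, values, i):
    """'Keys-only' deletion for one channel of a shared-key group:
    the key position must remain (other channels share it), so the
    value at key i is replaced by what linear interpolation of the
    neighboring keys would give. Hypothetical sketch only."""
    if 0 < i < len(values) - 1:
        t0, t1, t2 = times[i - 1], times[i], times[i + 1]
        v0, v2 = values[i - 1], values[i + 1]
        values[i] = v0 + (v2 - v0) * (t1 - t0) / (t2 - t0)
    return values
```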
Delete Selected Keys-all. Deletes selected keys outright, including in shared-
key channel groups. Deleting a camera X key will delete keys on Y and Z also.
See the graph editor right-click menu for different versions.
Delete Selected Trackers. Deletes selected trackers.
Approximate Keys. Replaces the selected keys with a smaller number that
approximate the original curve.
Exactify trackers. Replaces selected tracker position keys with new values
based on the solved 3-D position of the tracker—same as the Exact button on
the Tracker Panel.
v.c./Draw all selected nodes. Controls whether or not selected nodes are
drawn, equivalent to the button on the user interface.
v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Snap channels to grid. Controls whether or not channels being panned
have their origin (zero value) snapped onto one of the horizontal grid lines.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared; this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.

Set Key Values Dialog

Activated by double-clicking a key in the graphs or tracks view to change
one or more keys to new values, specified numerically.
If multiple keys are selected when the dialog is activated, the values can
all be set to the same value, or they can all be offset by the same amount, as
selected by the radio buttons at the bottom of the panel.
The value is controlled by the spinner, but also by up and down buttons for
each digit. For example, you can add 0.1 to the value by clicking the '+' button
immediately to the right of and below the decimal point. The buttons add or
subtract from the overall value, not from only a specific digit.
Right-clicking an up or down button clears that digit and all lower digits to
zero, rounding the overall value.
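That rounding amounts to truncating the value at a digit position, as in this sketch; the function name and the exact truncation behavior are assumptions for illustration, not SynthEyes's code:

```python
def clear_below(value, place):
    """Zero a digit and all lower digits, truncating the value:
    place=1 keeps tens and above, place=0 keeps whole units,
    place=-1 keeps one decimal place, and so on.
    A hypothetical sketch of the dialog's right-click rounding."""
    scale = 10.0 ** place
    return int(value / scale) * scale
```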
The values update into the rest of the scene as you adjust them. When
you are finished, click OK to keep the change, or Cancel to discard it.

Approximate Keys Dialog


This dialog is launched by right-clicking in the canvas area of the graph
editor, when it is in graphs mode, then selecting the Approximate Keys menu
item.

Approximate Keys does what the name suggests, examining the collection
of selected keys, and replacing them with a smaller number that produces a
curve approximating the original. This feature is typically used on camera or
moving object paths, and zooming field of view curves.
Fine Print: SynthEyes approximates all keys between the first-selected
and the last-selected, including any in the middle even if they are not selected.
All channels in the shared-key channel group will be approximated: if you have
selected keys on the X channel of the camera, the Y and Z channels and rotation
angles will all be approximated because they all share key positions.
You can select the maximum number of keys permitted in the
approximated curve, and the desired error. SynthEyes will keep adding keys until
it reaches the allowed number, or the error becomes less than specified,
whichever comes first.
The error value is per mil (‰), meaning a part in a thousand of the
nominal range for the value, as displayed in the SynthEyes status line when you
left-click the zoom control for a channel. For example, the nominal range of field
of view is 0 to 90, so 1 per mil is 0.09 degrees. In practice the exact value should
rarely matter much.
At the bottom of the display, the error and number of keys will be listed.
You can dynamically change the number of keys and error values, and watch the
curves in the viewport and the approximation report to decide how to set the
approximation controls.
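The strategy described above (keep adding keys until the key budget is reached or the error falls below the threshold) can be sketched as a greedy refinement. This is an illustrative reconstruction, not SynthEyes's actual algorithm; in real use, the tolerance would be the per-mil figure times the channel's nominal range, e.g. 1‰ of the 0 to 90 degree field-of-view range is 0.09 degrees:

```python
def approximate(times, values, max_keys, tol):
    """Greedy curve simplification in the spirit of Approximate Keys:
    start with the endpoint keys, then repeatedly key the sample that
    deviates most from the piecewise-linear curve through the current
    keys, stopping when the worst error drops to tol or the key
    budget max_keys is reached. Returns sorted key indices."""
    keys = {0, len(times) - 1}

    def interp(t):
        # Piecewise-linear interpolation through the current keys.
        ks = sorted(keys)
        for a, b in zip(ks, ks[1:]):
            if times[a] <= t <= times[b]:
                f = (t - times[a]) / (times[b] - times[a])
                return values[a] + f * (values[b] - values[a])

    while len(keys) < max_keys:
        errs = [(abs(values[i] - interp(times[i])), i)
                for i in range(len(times)) if i not in keys]
        if not errs:
            break
        worst, i = max(errs)
        if worst <= tol:
            break
        keys.add(i)
    return sorted(keys)
```

With a spike at frame 3, the sketch keys the spike and its neighbor first, then stops once the remaining samples lie within the tolerance.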

Perspective Window Reference
The perspective window defines quite a few different mouse modes, which
are selected by right-clicking in the perspective window. The menu modes and
mouse modes are described below.
The perspective window has four entries in the viewport manager:
Perspective, Perspective B, Perspective C, and Perspective D. The status of
each of these flavors is maintained separately, so that you can put a
perspective window in several different viewport configurations and have each
maintain its view; you can have up to four versions, each preserving its own
view.
There is a basic mouse handler (‗Navigate‘) operating all the time in the
perspective window. You can always left-drag a handle of a mesh object to move
it, or control-left-drag it to rotate around that handle. If you left-click a tracker, you
can select it, shift-select to add it to the selected trackers, add it to a ray for a
light, or ALT-click it to set it as the target of a selected tracker. While you are
dragging as part of a mouse operation, you can right-click to cancel it.
The middle mouse button navigates in 3-D. Middle-drag pans the camera,
ALT-middle-drag orbits, Control-ALT dollies it in or out. Control-middle makes the
camera look around in different directions (tripod-style pan and tilt). Doing any of
the above with the shift key down slows the motion for increased accuracy. The
camera will orbit around selected vertices or an object, if available. The text area
of the perspective window shows the navigation mode continuously.
The middle-mouse scroll wheel moves forward and back through time if
the view is locked to the camera (shift-scroll zooms the time bar), and changes
the camera zoom (field of view) when the camera is not locked.
The N key will switch to Navigate mode from any other mode.
If you hold down the Z key or apostrophe/double-quote when you click the
left mouse button in any mode, the perspective window will switch temporarily to
the Navigate mode, allowing you to use the left button to navigate. The
original mode will be restored when you release the mouse button.

Right-click Menu Items

No Change. Does nothing, makes it easier to take a quick look at the menu.
Lock to Current Camera. The perspective window is locked to look through the
camera selected in the overall SynthEyes user interface (i.e., the one
appearing in the camera view window). The camera's imagery appears as
the background for the perspective view. You can no longer move the
perspective view around. If already locked, the camera is unlocked: the
background disappears, the camera is made upright (roll=0), and the view
can be changed. Keyboard: 'L' key.
View. Submenu, see details below.

Navigate. When this mode is selected, the mouse navigation actions are
       activated by the left mouse button, not just the middle button. Keyboard:
       'N' key.
Place. Slide a tracker's seed position, an extra helper point, or a mesh around on
       the surface of meshes. Use it to place seed points on reference head
       meshes, for example. With the control key held down, the position snaps
       only onto vertices, not anywhere on the mesh.
Field of View. Adjust the perspective view's field of view (zoom). Normally you
       should dolly forward instead to get a closer view.
Lasso Trackers. Lasso-select trackers. Shift-select trackers to add to the
selection, and control-select to complement their selection status.
Lasso Mesh. Lasso-select vertices of the current edit mesh. Or click directly on
the vertices.
Add Vertices. Add vertices to the edit mesh, placing them on the current grid.
Use the shift key to move up or down normal to the grid. If control is down,
build a facet out of this vertex and the two previously added.
Move Vertices. Move the selected vertices around parallel to the current grid, or
if shift is down, perpendicular to it. Use control to slow the movement. If
clicking on a vertex, shift will add it to the selection set, control-shift will
remove it from the selection set.
Set as Edit Mesh. Open the currently-selected mesh for editing, exposing its
       vertices. If no object is selected, any edit mesh is closed. Keyboard: 'M'
       key.
Create Mesh Object. Creates a mesh object on the current grid. The type of
object created is controlled by the 3-D control panel, as it is for the other
viewports.
Creation Object. Submenu selecting the object to be created.
Mesh Operations. Submenu for mesh operations. See below.
Texturing. Submenu for texture mapping. See below.
Grid. Submenu for the grid. See below.
Preview Movie. Renders the perspective view for the entire frame range to
       create a movie for playback. See the preview movie control panel
       reference below.

View Submenu
Local coordinate handles. The handles on cameras, objects, or meshes can be
       oriented along either the global coordinate axes or the axes of the item
       itself; this menu check item controls which is displayed.
Path-relative handles. The handles are positioned using the camera path: slide
       the camera along the path, inwards with respect to the curvature, or
       upwards from the curvature. This option applies only to cameras and
       objects.
Stereo Display. If in a stereo shot, selects a stereo display from both cameras.
See Perspective View Settings to configure.

Whole path. Moves a camera or object and its trackers simultaneously. See 3-D
Control Panel.
Whole affects meshes. Controls whether or not the Whole button affects
       meshes as it moves a scene. Turn it on if you have already placed the
       meshes; keep it off if you are moving the scene relative to the meshes to
       align it.
Perspective View Settings. Brings up the Scene Settings dialog, which has
many sizing controls for the perspective view: clip planes, tracker size, etc.
Reset FOV. Reset the field of view to 45 degrees.
Lock Selection. Prevents the selection from being changed when clicking in the
viewport, good for dense work areas.
Freeze on this frame. Locks this perspective view at the current frame; you can
use it to look at the scene from a certain view or frame while you work on
it on a different frame in other viewports. Handy for working with
       reference shots. Keyboard commands 'A', 's', 'd', 'F', '.', ',' allow you to
       quickly change the frozen frame (with the default keyboard map).
Unfreeze. Releases a freeze, so the perspective view tracks the main UI time.
Show Only Locked. When the perspective view window is locked to a particular
object (and image), only the trackers for that particular object will be
shown.
Camera Frustum. Toggles the display of camera viewing frustums—the visible
       area of the camera, which depends on field of view, aspect, and world size.
View/Reload mesh. Reloads the selected mesh, if any. If the original file is no
longer accessible, allows a new location to be selected.

Additional "show" controls in this menu are described on the main window's view menu.

Mesh Operations Submenu
Convert to Mesh. Converts the selected trackers, or all of them, and adds them
to the edit mesh as vertices, with no facets. If there is no current edit
mesh, a new one is created.
Triangulate. Adds facets to the selected vertices of the edit mesh. Position the
view to observe the collection from above, not from the side, before
triangulating.
Remove and Repair. The selected vertices are removed from the mesh, and the
resulting hole triangulated to paper it over without those vertices.
Subdivide Facets. Selected facets have a new vertex added at their center, and
each facet replaced with three new ones surrounding the new vertex.
Subdivide Edges. The selected edges are bisected by new vertices, and
selected facets replaced with four new ones.
Delete selected faces. Selected facets are deleted from the edit mesh. Vertices
are left in place for later deletion or so new facets can be added.
Delete unused vertices. Deletes any vertices of the edit mesh that are not part
of any facet.

Texturing Submenu
Frozen Front Projection. The current frame is frozen to form a texture map for
every other frame in the shot. The object disappears in this frame; in other
frames you can see geometric distortion as the mesh (with this image
applied) is viewed from other directions.
Rolling Front Projection. The edit mesh will have the shot applied to it as a
texture, but the image applied will always be the current one.
Remove Front Projection. Texture-mapping front projection is removed from
the edit mesh.
Clear Texture Coords. Any UV texture coordinates are cleared from the edit
mesh, whether they are due to front projection or importing.
Create Smooth Normals. Creates a normal vector at each vertex of the edit
mesh, averaging over the attached facets. The smooth normals are used
to provide a smooth perspective display of the mesh.
Clear Normals. The per-vertex normals are cleared, so face normals will be
used subsequently.

Grid Submenu
Show Grid. Toggle. Turns grid display on and off in this perspective window.
       Keyboard: 'G' key.
Move Grid. Mouse mode. Left-dragging slides the grid along its normal,
       allowing you, for example, to raise or lower a floor grid.
Floor Grid, Back Grid, Left Side Grid, Ceiling Grid, Front Grid, Right Side
Grid. Puts the grid on the corresponding wall of a virtual room (stage),
normally viewed from the front. The grids are described this way so that
they are not affected by the current coordinate system selection.
To Facet/Verts/Trkrs. Aligns the grid using an edit-mesh facet, 1 to 3 edit-mesh
vertices, if a mesh is open for editing, or 1 to 3 trackers otherwise. This is
a very important operation for detail work. With 3 points selected, the grid
is the plane that contains those 3 points, centered between them, aligned
to preserve the global upwards direction. With 2 points selected, the
       current grid is spun to make its "sideways" axis aligned with the two points
       (in Z up mode, the X axis is made parallel to the two points). With 1 point
selected, the grid is moved to put its center at that point. Often it will be
useful to use this item 3 times in a row, first with 3 then with 2 and finally 1
vertex or tracker selected.
Return to custom grid. Use a custom grid set up earlier by To
Facet/Verts/Trkrs. The custom grid is shared between perspective
windows, so you can define it in one window, and use it in one or more
others as well.
Object-Mode Grids. Submenu. Contains forward-facing object, backward-facing
       object, etc., selections. Requires that the SynthEyes main user interface be
       set to a moving object, not a camera. Each of these modes creates a grid
       through the origin of the object's coordinate system, facing in the direction
       indicated. An upward-facing grid means that creating an object on it will go
       on the plus-object-Z side in Z-up mode. Downward-facing will go in nearly
       the same spot, but on the flip side.

Preview Movie Control Panel

File name/… Select the output file name to which the movie should be written. A
       Quicktime movie, BMP, Cineon, DPX, JPEG, OpenEXR, PNG, SGI,
       Targa, or TIFF (Mac only) file sequence can be produced. For image
sequences, the file name given is that of the first frame; this is your
chance to specify how many digits are needed and the starting value, for
example, prev1.bmp or prevu0030.exr.
Compression Settings. Set the compression settings for Quicktime and various
       image formats. Note that different codecs can have their own quirks; for
       example, H.264 requires the "keyframe every N frames" checkbox to be off!
Show All Viewport Items. Includes all the trackers, handles, etc, shown in the
viewport as part of the preview movie.
Show Grid. Controls whether or not the grid is shown in the movie.
Square-Pixel Output. When off, the preview movie will be produced at the same
resolution as the input shot. When on, the resolution will be adjusted so
that the pixel aspect ratio is 1.0, for undistorted display on computer
monitors by standard playback programs.
RGB Included. Must be on to see the normal RGB images. See below.
Depth Included. Output a monochrome depth map. See below.
Anti-aliasing. Select None, Low, Medium, High to determine output image
quality.
The allowable output channels depend on the output format. Quicktime accepts
only RGB. Bitmap can take RGB or depth, but not both at once. OpenEXR
can have either or both.

Overview of Standard Tool Scripts
SynthEyes includes a number of standard tool scripts, in addition to the
import and export scripts. Additional scripts are announced regularly on the web
site and via the Msg message button in SynthEyes.
Here is a quick overview of when to use the standard scripts available at the
time of publication. For usage details, consult the tutorials on the web site and
the control panels that pop up when the scripts are started.
Apply/Remove Lens Distortion. If you track a shot, then discover there was
lens distortion, and want to switch to an undistorted version, but do not
want to re-track the shot—use this script to update the tracking data.
Camera to Tracker Distance. Prints the distance from the camera to the
selected tracker(s).
Convert Flex to Trackers. A flex is a 3-D curve in space; this script creates a
row of trackers along it, so you can make it into a mesh or export the
coordinates.
Duplicate Mesh. Use to create copies of a mesh object, possibly shifting each
one successively to make a row of fence posts, for example.
Duplicate Mesh onto Trackers. Duplicate a mesh onto selected (or all) trackers,
       for example, many pine trees onto trackers on a mountainside. Use this
       script to delete them later if you need to; doing so is otherwise difficult!
Filter Lens FOV. Use to smooth out a lens field of view track in a zoom shot, to
eliminate zoom/dolly jitter.
Grid of Trackers. Creates a grid of supervised trackers, optionally within a
spline. Use for open-ocean tracking and creating dense supervised
meshes.
Invert Perspective. Turn a low- (or no-) perspective object track inside out.
Mark Seeds as Solved. You can create seed trackers at different locations,
possibly on a mesh, then make them appear to be solved at those
coordinates.
Motion Capture Camera Calibrate. See motion capture writeup.
Reverse Shot/Sequence. Use to avoid re-tracking when you're suddenly told to
       reverse a shot. Reverses tracker data but not other animated data.
Select By Type. Use to select all Far trackers, all unsolved trackers, etc.
Shift Constraints. Especially when using GPS survey data, use this script to
       adjust the data to eliminate a common offset: if the X values are 999.95,
       999.975, and 1000.012, you can subtract 1000 from everything to improve
       accuracy.
Splice Paths. Sometimes a shot has several different pieces that you can track
individually; this script can glue them together for a final track.
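The precision argument behind Shift Constraints can be illustrated with a short sketch. This is a hypothetical helper for illustration only, not the actual SynthEyes script:

```python
# Hypothetical illustration of the Shift Constraints idea (not the actual
# SynthEyes script): coordinates clustered around a large value, such as GPS
# survey data near 1000, spend most of their floating-point precision on the
# common offset. Subtracting a shared base value leaves small numbers near
# zero, which retain far more significant digits for the solver.

def remove_common_offset(values, base):
    """Subtract a common base value from every coordinate."""
    return [v - base for v in values]

xs = [999.95, 999.975, 1000.012]
shifted = remove_common_offset(xs, 1000.0)
# shifted is approximately [-0.05, -0.025, 0.012]
```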

Preferences and Scene Settings Reference
Scene settings for the current scene are accessed through the Edit/Edit
Scene Settings menu item, while the default preference settings are accessed
through the Edit/Edit Preferences menu item. The preferences control the
defaults for the scene, taking effect only when a new scene is created, while the
scene settings affect the currently-open scene, and are stored in it.
The Edit/Reset Preferences item resets the preferences to the factory
values.
When you reset the preferences, you can select the user interface colors
to be either a light or dark color scheme. You can tweak the individual colors
manually after that as well.

Preferences
Preferences apply to the user interface as a whole. Some preferences that
are also found on the scene settings dialog, such as the coordinate axis setting,
take effect only as a new scene is created; subsequently the setting can be
adjusted for that scene alone with the scene settings panel. Other preferences
are set directly from the dialog that uses them, for example, the spline editing
preferences.
Apologies in advance: We concede that there are too many controls on
this panel.

363
PREFERENCES AND SCENE SETTINGS REFERENCE

16 bit/channel (if available). Store all 16 bits per channel from a file, producing
       a more accurate image, but consuming more storage.
After … min. Spinner. The calculation-complete sound will be played if the
calculation takes longer than this number of minutes.
Anti-alias curves. Checkbox. Enables anti-aliasing and thicker lines for curves
displayed by the graph editor. Easier to read, but turn off if it is too slow for
less-powerful OpenGL cards.
Auto-switch to quad. Controls whether SynthEyes switches automatically to the
quad viewport configuration after solving. Switching is handy for beginners
but can be cumbersome in some situations for experts, so you can turn it
off.
Axis Setting. Selects the coordinate system to be used.
Back Plate Width. Width of the camera‘s active image plane, such as the film or
imager.
Back Plate Units. Shows "in" for inches or "mm" for millimeters; click it to
       change the display units for this panel, and the default for the shot setup
       panel.
Bits/channel: 8/16/Half/Float. Radio buttons. Sets the default processing and
storage bit depth.

Click-on/Click-off. Checkbox. When turned on, the camera view, mini-tracker
view, 3-D viewports, perspective view, and spinners are affected as
follows: clicking the left or middle mouse button turns the mouse button
on, clicking again turns it off. Instead of dragging, you will click, move, and
click. This might help reduce strain on your hand and wrist.
Color Settings. (Drop-down and color swatch) Change the color of many user-
interface elements. Select an element with the drop-down menu, see the
current color on the swatch, and click the swatch to bring up a Windows
dialog box that lets you change the color.
Compress .sni files. When turned on, SynthEyes scene files are compressed as
they are written. Compressed files occupy about half the disk space, but
take substantially longer to write, and somewhat longer to read.
Constrain by default (else align). If enabled, constraints are applied rigorously,
otherwise, they are applied by rotating/translating/scaling the scene
without modifying individual points. This is the default for the checkbox on
the solver panel, used when a new scene is created.
Default Export Type. Selects the export file type to be created by default.
Enable cursor wrap. When the cursor reaches the edge of the screen, it is
wrapped back around onto the opposite edge, allowing continuous mouse
motion. Disable if using a tablet, or under Virtual PC. Enabled by default,
except under Virtual PC.
Enhanced Tablet Response. Some tablet drivers, such as Wacom, delay
sending tablet and keyboard commands when SynthEyes is playing shots.
Turning on this checkbox slows playback slightly to cause the tablet driver
to forward data more frequently.
Export Units. Selects the units (inches, meters, etc) in the exported files. Some
units may be unavailable in some file types, and some file types may not
support units at all.
Exposure Adjustment. Increases or decreases the shot exposure by this many
       f-stops as it is read in. The main window updates as you change this.
       Supported only for certain image formats, such as Cineon and DPX.
First Frame is 1 (otherwise 0). Turn on to cause frame numbers to start at 1 on
the first frame.
Folder Presets. Helps workflow by letting you set up default folders for various
file types: batch input files, batch output files, images, scene files, imports,
       and exported files. Select the file type to adjust, then hit the Set button. To
       prevent SynthEyes from automatically going to a certain directory for a
       given function, hit the Clear button.
Maximum frames added per pass. During solving, limiting the number of
frames added prevents new tentative frames from overwhelming an
existing solution. You can reduce this value if the track is marginal, or
expand it for long, reliable tracks.
Maya Axis Ordering. Selects the axis ordering for Maya file exports.
Match image-sequence frame #'s. If you open an image sequence "in the
       middle," say at frame 35, SynthEyes will jimmy in additional frames so
       that SynthEyes's frame numbers match the image sequence's. This will
require more memory in SynthEyes, but may simplify interacting with other
programs that have fixed ideas about sequence frame numbers, and also
       eliminate the need to Prepend Extra Frames if the "in" point of the shot
       later changes.
Multi-processing. Drop-down list. Enable or disable SynthEyes use of multiple
processors, hyper-threading, or cores on your machine. The number in
parentheses for the Enable item shows the number of
processors/cores/threads on your machine. The Single item causes the
multiprocessing algorithms to be used, but only with a single thread,
       mainly for testing. The "Half" option will use half of the available cores,
       which can be helpful when you have another major task running, such as
       a render on an 8-core machine.
No middle-mouse button. For use with 2-button mice, trackballs, or Microsoft
Intellipoint software on Mac OSX. When turned on, ALT/Command-Left
pans the viewports and ALT/Command-Right links trackers.
Nudge size. Controls the size of the number-pad nudge operations. This value is
in pixels. Note that control-nudge selects a smaller nudge size; you should
not have to make this value too small—use a convenient value then
control-nudge for the most exacting tweaks.
Playbar on toolbar. When checked, the playbar (rewind, end, play, frame
forward etc) is moved from the command panel to a horizontal
configuration along the main toolbar. Usable only on wider monitors.
Prefetch enable. The default setting for whether or not image prefetch is
enabled. Disable if image prefetch overloads your processor, especially if
shot imagery is located on a slow network drive.
Put export filenames on clipboard. When checked (by default), whenever
SynthEyes exports, it puts the name of the output file onto the clipboard,
to make it easier to open in the target application.
Safe #trackers. Spinner. Used to configure a user-controlled desired number of
       trackers in the lifetimes panel. If the number is above this limit, the lifetime
       color will be white or gray, which is best. Below this limit, but still at an
       acceptable value, the background is the Safe color, by default a shade of
       green: the number of trackers is safe, but not at your desired level.
Shadow Level. Spinner. The shadow is dead black; this is an alpha that ranges
       from 0 to 1. At 1, the shadow has been mixed all the way to black.
Sound [hurrah]. Button. Shows the name of the sound to be played after long
calculations.
Start with OpenGL Camera View. When on, SynthEyes uses OpenGL
rendering for the camera view, which is faster on a Mac and when large
meshes are loaded in the scene. When off, SynthEyes uses simpler
       graphics that are often faster on PCs, as long as there aren't any complex
meshes. This preference is examined when you open SynthEyes or
change scenes. You can change the current setting from the View menu.
When you change the preference, the current setting is also changed.
Start with OpenGL 3-D Viewport. Same as for the camera view, but applies to
the 3-D viewports.

Thicker trackers. When checked, trackers will be 2 pixels wide (instead of 1) in
       the camera, perspective, and 3-D views. Turned on by default for, and
       intended for use with, higher-resolution displays.
Trails. The number of frames in each direction (earlier and later) shown in the
camera view for trackers and blips.
Undo Levels. The number of operations that are buffered and can be undone. If
some of the operations consume much memory (especially auto-tracking),
the actual limit may be much smaller.
Wider tracker-panel view. Checkbox. Selects which tracker panel layout is
       used. The wider view makes it easier to see the interior contents of a
       tracker, especially on a high-resolution display. The smaller view is more
       compact, especially for laptops.
Write .IFL files for sequences. When set, SynthEyes will write an industry- and
3ds MAX-standard image file list (IFL) file whenever it opens an image
sequence. Subsequently it will refer to that IFL file instead of re-scanning
the entire set of images in order to open the shot. Saves time especially
when the sequence is on a network drive.
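For reference, an IFL file is simply a plain text list of image file names, one per line; the frame names below are hypothetical. (3ds MAX also accepts an optional repeat count after each name.)

```
shot.0001.tga
shot.0002.tga
shot.0003.tga
```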

Scene Settings
The scene settings, accessed through Edit/Edit Scene Settings, apply to
the current scene (file).
The perspective-window sizing controls are found here. Normally,
SynthEyes bases the perspective-window sizes on the world size of the active
camera or object. The resulting actual value of the size will be shown in the
spinner, and no "key" will be indicated (a red frame around the spinner).
If you change the spinner, a key frame will be indicated (though it does not
animate). After you change a value, and the key frame marker appears, it will no
longer change with the world size. You can reset an individual control to the
factory default by right-clicking the spinner.
There are several buttons that transfer the sizing controls back and forth
to the preferences: there is no separate user interface for these controls on the
Preferences panel. If a value has not been changed, that value will be saved in
the preferences, so that when the preferences are applied (to a new scene, or
recalled to the current scene), unchanged values will be the default factory
values, computed from the current world size.
Important Note: the default sizes are dynamically computed from the
current world size. If you think you need to change the size controls here,
especially tracker size and far clip, this probably indicates you need to adjust
your world size instead.

Axis Setting. Selects the coordinate system to be used.
Camera Size. 3-D size of the camera icon in the perspective view.
Far Clip. Far clip distance in the perspective view
Inter-ocular. Spinner. Sets the inter-ocular distance (in the unitless numbers
used in SynthEyes). Used when the perspective view is not locked to the
camera pair.
Key Mark Size. Size of the key marks on camera/object seed paths.
Light Size. Size of the light icon in the perspective view.
Load from Prefs. Loads the settings from the preferences (this is the same as
what happens when a new scene is created).
Mesh Vertex Size. Size of the vertex markers in the perspective view—in pixels,
unlike the other controls here.
Near Clip. Near clipping plane distance.
Object Size. Size of the moving-object icon in the perspective view.
Orbit Distance. The distance out in front of the camera about which the camera
       orbits during a camera rotation when no object or mesh is selected.
Reset to defaults. The perspective window settings are set to the factory
defaults (which vary with world size). The preferences are not affected.
Save to prefs. The current perspective-view settings are saved to the
       preferences, where they will be used for new scenes. Note that
unchanged values are flagged, so that they continue to vary with world
size in the new scene.
Stereo. Selector. Sets the desired color for each eye for anaglyph stereo display
(as enabled by View/Stereo Display on the perspective view's right-click
menu.)
Tracker Size. Size of the tracker icon (triangle) in the perspective view.
Vergence Dist. Spinner. Sets the vergence distance for the stereo camera pair,
when it is not locked to any actual cameras.

Keyboard Reference
SynthEyes has a user-assignable keyboard map, accessed through the
Edit/Edit Keyboard Map menu item. (Preview: use the Listing button to see
them all.)

The first list box shows a context (see the next section), the second a key,
and the third shows the action assigned to that key (there is a NONE entry also).
The Shift, Control, and Alt (Mac: Command) checkboxes are checked if the
corresponding key must also be down; the panel shown here shows that a
Select All operation will result from Control-A in the "Main" context.
Because several keys can be mapped to the same action, if you want to
change Select All from Control-A to Control-T, say, you should set Control-A
back to NONE; then, when configuring Control-T, select the T, then the Control
checkbox, and finally change the action to Select All.
Time-Saving Hint: after opening any of the drop-down lists (for context,
key, or action), hit a key to move to that part of the list quickly.
The Change to button sets the current key combination to the action
shown, which is the last significant action performed before opening the
keyboard manager. In the example, it would be "Reset Preferences."
Change to makes it easy to set up a key code: perform the action, open
the keyboard manager, select the desired key combination, then hit Change to.
The Change to button may not always pick up a desired action, especially if it is a
button—use the equivalent menu operation instead.
You can quickly remove the action for a key combination using the NONE
button.
Changes are temporary for this run of SynthEyes unless the Save button
is clicked. The Factory button resets the keyboard assignments to their factory
defaults. Listing shows the current key assignments; see the Default Key
Assignments section below.

371
KEYBOARD REFERENCE

Key Contexts
SynthEyes allows keys to have different functions in different places; they
are context-dependent. The contexts include:
 The main window/menu
 The camera view
 Any perspective view
 Any 3-D viewport
 Any command panel
There is a separate context for each command panel.
In each context, there is a different set of applicable operations, for
example, the perspective window has different navigation modes, whereas
trackers can only be created in the camera window. When you select a context
on the keyboard manager panel, only the available operations in that context will
be listed.
Here comes the tricky part: when you hit any key, several different
contexts might apply. SynthEyes checks the different contexts in a particular
order, and the first context that provides an action for that key is the context and
action that is applied. In order, SynthEyes checks
 The selected command panel context
 The context of the window in which the key was struck
 The main window/menu context
 The context of the camera window, if it is visible, even if the cursor was not in
the camera window.
This is a bit complex but should allow you to produce many useful effects.
Note that the 4th rule does have an "action at a distance" flavor that might
surprise you on occasion, though it is generally useful.
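The precedence described above amounts to a first-match search through an ordered list of contexts. Here is a small sketch of that idea; the names and bindings are illustrative, not SynthEyes internals:

```python
# Illustrative sketch of context-ordered key lookup (not SynthEyes internals).
# Each context maps a key combination to an action name; the first context
# in the search order that binds the key wins.

def resolve_key(key, panel_ctx, window_ctx, main_ctx, camera_ctx,
                camera_visible):
    contexts = [panel_ctx, window_ctx, main_ctx]
    if camera_visible:
        # The camera view's context applies even when the cursor is elsewhere.
        contexts.append(camera_ctx)
    for ctx in contexts:
        if key in ctx:
            return ctx[key]
    return None  # the key is unassigned in every applicable context

action = resolve_key(
    "G",
    panel_ctx={},                      # current command panel: no binding
    window_ctx={"G": "Toggle Grid"},   # key struck in a perspective view
    main_ctx={"Ctrl+A": "Select All"},
    camera_ctx={"G": "Camera Action"}, # hypothetical camera-view binding
    camera_visible=True,
)
# The perspective view's binding wins: action == "Toggle Grid"
```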
You may notice that some operations appear in the main context and the
camera, viewport, or perspective contexts. This is because the operation appears
on the main menu and the corresponding right-click menu. Generally you will
want the main context.
Keys in the command-panel contexts can only be executed when that
command panel is open. You cannot access a button on the solver panel when
the tracker panel is open, say: the solver panel's context is not active, so the key
will not even be detected. The solver panel's functionality is unavailable when it
isn't open, and changing settings on hidden panels makes for tricky user
interfaces (though there are some actions that basically do this).

Default Key Assignments


Rather than imprecisely trying to keep track of the key assignments here,
SynthEyes provides a Listing button, which produces and opens a text file. The
file shows the current assignments sorted by action name and by the key, so you
can find the key for a given action, or see what keys are unused.
The listing also shows the available actions, so you can see what
functions you can assign a key to. All menu actions can be assigned, as can all
buttons, check boxes, and radio boxes on the main control panels, plus a variety
of special actions.
You will see the current key assignment listed after menu items and in the
tooltips of most buttons, checkboxes, and radio buttons on command panels.
These will automatically update when you close the keyboard manager.

Fine Print
Do not assign a function to plain Z or apostrophe/double-quote. These
keys are used as an extra click-to-place shift key in the camera view, and any Z
or '/" keyboard operation will be performed over and over while the key is down
for click-to-place.
The Reset Zoom action does two somewhat different things: with no shift
key, it resets the camera view so the image fills the view. When the shift key is
depressed, it resets the camera view so that the image and display pixels are 1:1
in the horizontal direction, i.e., the image is "full size." Consequently, you need to
set up your key assignments so that the fill operation is un-shifted, and the 1:1
operation is shifted.
The same thing applies to other buttons whose functionality depends on
the mouse button. If you shift-click a button to do something, then the function
performed will still depend on the shift setting of the keyboard accelerator key.
There may be other gotchas scattered through the possible actions; you
should be sure to verify their function in testing before trying them in your big
important scene file. You can check the undo button to verify the function
performed, for example.
The "My Layout" action sets the viewport configuration to one named "My
Layout" so that you can quickly access your own favorite layout.

Key Assignment File


SynthEyes stores the keyboard map in the file keybd08.ini. If you are very
daring, you can modify the file using the SynthEyes keyboard manager, Notepad,
or any text editor. SynthEyes' exact action and key names must be used, as
shown in the keyboard map listing. There is one keybd08.ini file for each user,
located like this:
C:\Documents and Settings\YourNameHere\Application Data\SynthEyes\keybd08.ini (PC)
/Users/YourNameHere/Library/Application Support/SynthEyes/keybd08.ini (Mac OSX)
You can quickly access this folder from SynthEyes's File/User Data Folder
menu item.

The preferences data and viewport layouts are also stored in prefs08.dat
and layout08.ini files in this folder. Note that the Application Data folder may be
"hidden" by the Windows Explorer; there is a Folder Option to make it visible.

Viewport Layout Manager
With SynthEyes's flexible viewport manager, you can adjust the viewports
to match how you want to work. In the main display, you can adjust the relative
sizes of each viewport in an overall view, or create quick temporary layouts by
changing the panes in an existing layout, but with the viewport manager,
accessed through the Window menu, you can add whole new configurations with
different numbers and types of viewports.

To add a new viewport configuration, do the following. Open the manager,
and select an existing similar configuration in the drop-down list. Hit the Duplicate
button, and give your new configuration a name.
If you created a new "Custom" layout in the main user interface by
changing the panes, and you'd like to keep that layout for future use, you can
give it a name here, so that it is not overwritten by your next "Custom" layout
creation.
Tip: In the main user interface, the '7' key automatically selects a layout
called "My Layout", so you can reach it quickly if you use that name.
Inside the view manager, you can resize the viewports as in the main
display, by dragging the borders (gutters). If you hold down shift while dragging a
border, you disconnect that section of the border from the other sections in the
same row or column. Try this on a quad viewport configuration and it will make
sense.
If you double-click a viewport, you can change its type. You can split a
viewport into two, either horizontally or vertically, by clicking in it and then the


appropriate button, or delete a viewport. After you delete a viewport, you should
usually rearrange the remaining viewports to avoid leaving a hole in your screen.
When you are done, you can hit OK to return to the main window and use
your new configuration. It will be available whenever you re-open the same
scene file.
If you wish to save a set of configurations as your preferences, to be used
each time you create a new SynthEyes file, reopen the Viewport manager and
click the Save All button.
You can delete a configuration if you need to, but you should not delete
the basic Camera, Perspective, etc. layouts.
If you would like to return a scene file to your personal preferences, or
even back to the factory defaults, click the Reset/Reload button and you can
select which.

Script Bar Manager Reference
The Script Bar Manager manipulates small text files that describe script
bars — a type of toolbar that can quickly launch scripts or commands on the
main menu. You can start the script manager from the Scripts menu.

The selector at top left selects the script bar being edited; its file name is
shown immediately below the script name, with the list of script buttons listed
under that. Use the New button to create a new script bar; you will enter a name
for your script bar, then select a file name for it within your personal scripts folder.
You can also use the Save As button to duplicate a script with a new name, use
Chg. Name to change the human-readable name (not the file name), or you can
Delete a script bar, which will delete the script bar's file from disk (but not any of
the scripts).
Each button has a short name, shown in the list, in addition to the longer
full script name and file name, both of which are shown when an individual button
is selected in the list. You can double-click a button to change the short name,
use the Move Up and Move Down buttons to change the order, or click Remove
to remove a button from the script bar (this does NOT delete the script from disk).
To add a button to a script bar, select the script name or menu command
in the selector at bottom, then click the Add button. You will be able to select or
adjust the short name as the button is added.


Once you have created a script bar, or even while you are working on it,
click Launch to open the script bar.
SynthEyes saves the position of script bars when it closes, and re-opens
all open scripts when it next starts. If you have changed monitor configurations, it
is possible for a script bar to be restored off-screen. If this should happen, click
the Find button and the script bar will appear right there.

Lens Information Files
Lens information files may be written by hand, or generated by Sizzle
scripts or other tools using the information in this section. They are XML files,
but have an extension of .lni. SynthEyes automatically finds them within the
system and user scripts folders.
Each file uses this general format:
<Lens title="Zeiss UP-10 @ Infinity" mm="1">
<Info>
<creator>c:\Viken\bin\scripts\Lens\zeiss1.szl</creator>
<date>Fri, Apr 03, 2009 10:57:30 PM</date>
<lin>0.058108</lin>
<sq>-0.128553</sq>
<cub>0.01099</cub>
<quar>-0.0002606</quar>
<maxAperture>20</maxAperture>
</Info>
<DATA>
<sample rin="0" rout="0"/>
<sample rin="0.425532" rout="0.425542"/>
<sample rin="0.851064" rout="0.850749"/>
<sample rin="1.2766" rout="1.27515"/>
<sample rin="1.70213" rout="1.69836"/>
<sample rin="2.12766" rout="2.12005"/>
</DATA>
</Lens>
The root tag must be Lens. The title attribute is the human-readable name
of the preset.
Important: XML tag and attribute names are case-sensitive; Lens is not
the same as LENS or lens. Quotes are also required around attribute values.
The Info block (like any other unrecognized block) is not read; here it is
used to record, in a standard way, details of the file's creation by a script.
After that comes a block of data with samples of the distortion table. They
must be sorted by rin and will be spline-interpolated. The last line above says
that pixels 2.128 mm from the center will be distorted to appear only 2.12 mm
from the lens center.
This is an "absolute" file, with radii measured in millimeters from the optical
center: the optional mm attribute on the root Lens tag makes it absolute
(mm="1"); by default, a file is "relative" (mm="0").
Relative files measure radii in terms of a unit that goes from 0 at the
center of the image (when it is properly centered) vertically to 1.0 at the center of
the top (or bottom) edge of the image. Literally, that is the "V" coordinate of
trackers; it is resolution- and aspect-independent.
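As an illustration of the format, here is a small reader sketch in Python. This is not SynthEyes's own loader, just an assumption-hedged example for external tools; the element and attribute names follow the sample file above.

```python
import xml.etree.ElementTree as ET

def read_lni(path):
    """Parse a .lni lens file: returns (title, is_absolute_mm, sample table)."""
    root = ET.parse(path).getroot()
    if root.tag != "Lens":                    # tag names are case-sensitive
        raise ValueError("root tag must be Lens")
    is_mm = root.get("mm", "0") == "1"        # absolute (mm) vs. relative radii
    samples = [(float(s.get("rin")), float(s.get("rout")))
               for s in root.find("DATA").findall("sample")]
    if samples != sorted(samples):            # table must be sorted by rin
        raise ValueError("samples must be sorted by rin")
    return root.get("title"), is_mm, samples
```

SynthEyes spline-interpolates between the samples; a tool reading the table could do the same with any cubic-spline routine.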


For both file types, there is the question of how far the table should go,
that is, what its maximum rin value should be. For an absolute file, this is determined by
the maximum image size of the lens. For a relative file, the maximum value is
determined from the maximum aspect ratio: sqrt(max_aspect*max_aspect + 1).
This value is required as an input to relative-type lni generator scripts, but is
essentially arbitrary as long as it is large enough.
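Worked out in code (a small Python sketch; the 16:9 value is a hypothetical example, not a SynthEyes default):

```python
import math

def max_relative_radius(max_aspect):
    # The relative unit is 1.0 at the center of the top/bottom edge, so an
    # image corner lies at sqrt(max_aspect^2 + 1) in those units.
    return math.sqrt(max_aspect * max_aspect + 1.0)

# A 16:9 frame needs the table to extend to about 2.04 relative units.
corner = max_relative_radius(16.0 / 9.0)
```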
With some distortion profiles, if large enough radii are fed in, the curve
will bend back on itself, so that increasing the input radius decreases the
distorted radius. When this happens, SynthEyes silently ignores the superfluous
remainder of the table.
Two other tags can appear at the top level of the file, alongside Info and
DATA: a BPW tag with value 8.1 would say that the recommended nominal
back-plate width is 8.1 mm, and a FLEN tag with value 10.5 would say that the
nominal focal length is 10.5 mm. These values are presented for display only,
on the image processor's Lens tab, for the user's convenience if the table was
generated for a specific camcorder model.
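A generator could emit such a file with a few lines of code. This Python sketch is only an illustration: the element-text form shown for BPW and FLEN is an assumption based on the description above, the values are hypothetical, and the title is not XML-escaped.

```python
def write_lni(path, title, samples, mm=True, bpw=None, flen=None):
    """Write a minimal .lni file in the format described above.
    Note: title is inserted verbatim, so it must not contain
    XML-special characters."""
    lines = ['<Lens title="%s" mm="%d">' % (title, 1 if mm else 0)]
    if bpw is not None:
        lines.append('<BPW>%g</BPW>' % bpw)     # nominal back-plate width (mm)
    if flen is not None:
        lines.append('<FLEN>%g</FLEN>' % flen)  # nominal focal length (mm)
    lines.append('<DATA>')
    for rin, rout in sorted(samples):           # table must be sorted by rin
        lines.append('<sample rin="%g" rout="%g"/>' % (rin, rout))
    lines += ['</DATA>', '</Lens>', '']
    with open(path, "w") as f:
        f.write("\n".join(lines))
```

Sorting the samples before writing keeps the rin-ordering requirement satisfied regardless of the order in which the generator produced them.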

Support
Technical support is available through techsupport@ssontech.com. A
response should generally be received within 24 hours except on weekends.
SynthEyes is written, supported, and ©2003-2009 by Andersson
Technologies LLC.
This software is based in part on the work of the Independent JPEG
Group, http://www.ijg.org. Based in part on TIFF library, http://www.libtiff.org,
Copyright ©1988-1997 Sam Leffler, and Copyright ©1991-1997 Silicon Graphics,
Inc. Also based in part on the LibPNG library, Glenn Randers-Pehrson and
various contributing authors.
OpenEXR library Copyright (c) 2004, Industrial Light & Magic, a division of
Lucasfilm Entertainment Company Ltd. Portions contributed and copyright held
by others as indicated. All rights reserved. Neither the name of Industrial Light &
Magic nor the names of any other contributors to this software may be used to
endorse or promote products derived from this software without specific prior
written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT
HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
All of the contributors' efforts are greatly appreciated.
