
ICARUS

(v2.09)

User-Guide
Simon Gibson, Jon Cook, Toby Howard and Roger Hubbold
Advanced Interfaces Group, University of Manchester, UK.
(C) 2002-2003 University of Manchester (last updated February 5th 2003)
http://aig.cs.man.ac.uk/icarus
email: icarus@aig.cs.man.ac.uk

Table of Contents
1. Introduction
   1.1 The ICARUS System
   1.2 Pre-requisites
       1.2.1 Windows 98/2000/XP
       1.2.2 Linux (i686)
       1.2.3 SGI Irix 6.5 (IP32)
       1.2.4 Mac OS X
   1.3 General user-interface operation
       1.3.1 Menus and toolbars
       1.3.2 Project overview
       1.3.3 Image workspace
       1.3.4 The graph window
       1.3.5 On-line help system
   1.4 Documentation overview
2. Capturing image sequences
   2.1 Capturing video sequences
       2.1.1 Camera motion and scene structure
       2.1.2 Camera zoom
       2.1.3 Object motion
   2.2 Image/Video compression formats
       2.2.1 The IMS (ICARUS movie sequence) file format
       2.2.2 The FLO Optical Flow file format
3. Distortion module
   3.1 Tutorial
4. Calibration module
   4.1 Tutorial 1: Auto-feature tracking
   4.2 Tutorial 2: User-feature tracking
   4.3 Tutorial 3: Pan/Tilt/Zoom motion and Mattes
   4.4 Tutorial 4: Tracking Mattes
   4.5 Tutorial 5: 2D and 3D Image stabilization
   4.6 Tutorial 6: Motion Filtering
   4.7 Tutorial 7: Single image calibration
   4.8 Tutorial 8: Multiple image calibration
5. Reconstruction module
   5.1 Tutorial 1: Simple shapes
   5.2 Tutorial 2: Complex shapes
6. Quick Reference
   6.1 Distortion module
       6.1.1 Menu options
       6.1.2 Useful keys
       6.1.3 Distortion parameter dialog
       6.1.4 Preferences dialog
   6.2 Calibration module
       6.2.1 Menu options
       6.2.2 Useful keys
       6.2.3 Calibrate dialog
       6.2.4 Auto-Feature popup menu
       6.2.5 User-feature popup menu
       6.2.6 Matte popup menu
       6.2.7 Matte boundary popup menu
       6.2.8 Tracking parameters dialog
       6.2.9 Bundle adjustment dialog
       6.2.10 Camera parameters dialog
       6.2.11 Preferences dialog
   6.3 Reconstruction module
       6.3.1 Menu Options
       6.3.2 Useful keys
       6.3.3 The constraint toolbar
       6.3.4 Primitive popup menu
       6.3.5 Mesh popup menu
       6.3.6 Feature popup menu
       6.3.7 Pull texture dialog
       6.3.8 Preferences dialog
7. Notes
   7.1 Inliers and outliers
   7.2 Sequential versus non-sequential calibration
   7.3 Colour-keying mattes
   7.4 Estimating focal lengths and orienting the scene using vanishing points
   7.5 Alternative methods for orienting the scene
       7.5.1 Selecting features in a plane
       7.5.2 Selecting two features in a line
       7.5.3 Adjusting the orientation by hand
       7.5.4 Setting the scale of the calibration
   7.6 Fine-tuning a calibration
   7.7 How to read the calibration progress graph
       7.7.1 Auto-tracking
       7.7.2 Free-motion calibration
       7.7.3 Pan/Tilt/Zoom calibration
8. References

1. Introduction
1.1 The ICARUS System
This document describes the installation and operation of the ICARUS system. The ICARUS system is a suite of software packages that allows a user to retrieve a variety of information from image sequences¹, such as camera positions and geometric models of objects visible in the images. The ICARUS system was developed by the Advanced Interfaces Group at the University of Manchester in the UK over the course of a three-year EPSRC-funded project entitled REVEAL: Reconstruction from Video of Environments with Accurate Lighting.

The general capabilities of the ICARUS system can be divided into three main modules. Each of these modules is supplied as a separate program. An overview of the operation of each module is given below:

1. Distortion module (removal of geometric lens distortion from image sequences): Due to the geometric distortion present in low-grade lenses, straight lines are not imaged as completely straight. This distortion affects the accuracy of the calibration and reconstruction modules in the ICARUS system. The distortion module allows the amount of distortion to be easily calculated and its effect removed from image sequences if required.

2. Calibration module (estimation of intrinsic and extrinsic camera parameters for each frame of a sequence): Calibration is required before a geometric representation of the scene can be built. The calibration process calculates both the intrinsic and extrinsic camera parameters (i.e. the focal length and principal point of the camera, as well as its position and orientation in space) for each frame of a sequence. The calibration module can also do many more things (see the tutorials for some examples).

3. Reconstruction module (reconstruction of scene geometry using calibrated image sequences): Once an image sequence has been calibrated, the calibration data can be used to reconstruct geometric representations of objects in the scene. This is achieved in an interactive manner, where the user manipulates the position and orientation of parametric primitives so that they match features visible in the calibrated images. Colour and texture information can also be automatically extracted from the image data.

1.2 Pre-requisites
The ICARUS system has been successfully tested on the following platforms: Microsoft Windows 98/2000/XP (NVidia GeForce 2/3/4 graphics), Mandrake Linux 8.1/8.2/9.0 (NVidia GeForce 2/3/4 graphics), SGI O2 and Onyx2 (Irix 6.5.13, IP32), and Mac OS X 10.2. Other platforms may work, but are currently untested and unsupported. On all platforms, ICARUS uses the LAPACK libraries available from http://www.netlib.org/lapack. Thanks must go to the authors of the LAPACK software for saving us a lot of implementation time!

1.2.1 Windows 98/2000/XP


Operation of the ICARUS system on Microsoft Windows platforms requires the DirectX 8.1 runtime libraries. Please also ensure that you have the latest drivers installed for your graphics card for optimal performance. Importing DV format movie files is supported via DirectX 8.1. If you want to save DV format movies, however, you will need a video codec that supports video exporting via the Video for Windows (vfw) interface (see FAQ for more details: http://aig.cs.man.ac.uk/icarus/faq.php). Installation is achieved by simply double-clicking on the setup.exe file and following the on-screen instructions.
¹ Throughout this document, the term image sequence will be used to refer both to collections of one or more separate digital still images, and to the set of frames in a digital video sequence. A frame of an image sequence refers to either one frame of a digital video sequence, or one of the set of digital still images.

1.2.2 Linux (i686)


To install and operate ICARUS on Linux systems, please ensure that you have the TIFF library installed, which is available in binary RPM format from http://rpmfind.net/, or in source format from http://www.libtiff.org/. On Linux systems, you may use the IMSConvert program to convert AVI movie files into ICARUS's IMS movie format. IMSConvert is free software, distributed under the GNU General Public License, and is available from http://aig.cs.man.ac.uk/imsconvert.

1.2.3 SGI Irix 6.5 (IP32)


On SGI systems, you must make sure that you have SGI's Digital Media libraries installed, which should come with the Irix operating system. The TIFF library is also required, and is available in tardist format from the SGI Freeware web site (http://freeware.sgi.com/) or in source format from http://www.libtiff.org/.

1.2.4 Mac OS X
On Mac OS X systems, ICARUS should not need any third-party libraries to operate. There are, however, a couple of known bugs that are described on the Mac download page: http://aig.cs.man.ac.uk/icarus/macDownload.php.

1.3 General user-interface operation


This section describes the general operation of the ICARUS user-interface. Further details of each of the three main components are given in later chapters. All components operate on digital images and video sequences captured by the user. Guidelines for the capture of such images are given in Chapter 2.

Figure 1: The user-interface of the calibration module in the ICARUS system. The menu and toolbars are at the top, the project overview is on the left, the image workspace is the main central window, and a graph widget is also shown at the bottom

We have tried to provide a consistent interface for each component of the ICARUS system. There are four main sections to each component's interface: the menu and toolbars, the project overview, the image workspace, and the graph window. Items common to each interface are described below. Widgets specific to each component will be described where necessary.

1.3.1 Menus and toolbars

Figure 2: The menu and toolbar from the calibration module

There are several menus and toolbar buttons common to each component of the ICARUS system. The Project menu allows a new project to be created or saved, or a previously saved project to be loaded back into the system. These basic operations are also accessible via the first three buttons of the main toolbar. Note that project files are not transferable between the components of the ICARUS system.

The next four buttons on the toolbar determine the cursor mode (i.e. the way in which mouse clicks and movements in the image workspace are interpreted). Cursor modes such as pan, zoom in and zoom out, which are also available from the View menu, allow for the manipulation of the items in the image workspace. In addition to these three modes of operation, the toolbar provides a feature cursor mode. Selecting this mode allows the user to interact with items in the image window. The exact form of this interaction depends upon the component currently being used, and will be described later. Note that you can zoom in and out without changing the current cursor mode by holding down the control key and moving the mouse wheel up and down.

Note that the toolbars present in each component of the ICARUS system may be picked up and moved around by the user, allowing a small amount of user-interface customisation. Toolbars may also be moved outside the main window, and positioned on the desktop. Don't worry if you can't remember what each item in the toolbar does, since tooltips are employed to remind you of their functionality.

1.3.2 Project overview


Figure 3: An example project overview from the calibration module

The project overview widget is shown on the left-hand side of the window in each component of the ICARUS system. This provides information about the current state of the system, such as the number of images or video sequences loaded, lens and feature information, etc. The exact contents of the Project Overview are dependent on the component of the system currently in use, and will be described in later chapters. Images, movies and project files may be dragged and dropped into the Project Overview window.

1.3.3 Image workspace


The image workspace allows the user to interact with image sequences. Multiple image windows may be opened to allow the user to simultaneously view multiple frames of a sequence. This is achieved by selecting the New Window item on the View menu, or clicking on the appropriate button in the main toolbar. Image windows may also be moved, resized, and closed to provide maximum flexibility. In addition, the image can be zoomed in and out and panned within its frame. A button is also provided at the bottom right-hand corner of each window that resets the image position within the window. When a movie file is loaded, a slider is also provided to change the current frame of the sequence, along with a set of video control buttons that allow for fine control of movie playback.

Figure 4: An example image workspace from the reconstruction module

There is only ever one currently active image window present in the workspace. Clicking on an image name in the Project Overview will change the current image or movie shown in the currently active window. The size of the image workspace may be made larger than that of the ICARUS window. When necessary, scroll bars will appear on the bottom and right-hand sides of the workspace, allowing the user to scroll around the larger space.

When a movie is loaded into a window, and the window has been selected by clicking in it, the cursor keys may be used to move through the sequence. The left and right cursor keys will move one frame forwards/backwards. Holding the shift key down as well will move forwards/backwards by 10 frames. If your mouse has a wheel button, moving the wheel forwards/backwards will also move forwards/backwards by 5 frames in the movie. If one of the zoom buttons is selected, however, moving the wheel whilst the left button is pressed will zoom in/out of the image. The up and down cursor keys can also be used to navigate through a movie. In the distortion module, the up and down cursor keys will move to the start/end of the sequence. In the calibration module, these keys are used to move between keyframes, or between images shown in the Project Overview.

Some information is shown in the bottom right of each image window. From left to right, this is: the image/movie name; the current frame expressed as a fraction of the total number of frames; the image/movie resolution; the amount of zoom; and the movie frame-rate with the achieved frame-rate in brackets.

1.3.4 The graph window


The graph window is shown at the bottom of the distortion and calibration components when operating on movie sequences. The operation of the window is very similar for both components. The central part of the window is a region containing frame information, with two sliders above and below. These sliders are used to set the start and end frames for which information is shown in the middle, and the type-in boxes on the left can be used to set the frame range explicitly. Clicking with the left mouse-button inside the frame region will jump to the frame pointed at by the mouse (you can also click and drag the mouse to move). If your mouse has a wheel-button, you can use it to scroll the start and end frames through the sequence.

The graph window also provides facilities for editing parameters. In the distortion component, these can be used to adjust the distortion parameters, and in the calibration component, these facilities are used to manually adjust the tracking parameters to correct for small errors. When an appropriate parameter is highlighted in the Project Overview, and feature mode is selected in the toolbar, the parameter data will be drawn in the graph window. The vertical scale of the graph may be changed by holding down the control key and clicking and dragging with the left mouse button within the graph window. Similarly, holding down the shift key and clicking and dragging with the left mouse button will shift the graph vertically. Moving the mouse cursor over one of the small dots will display the parameter value at that point, and dragging the point vertically will adjust the parameter appropriately.

Figure 5: An example graph window from the calibration component

As well as adjusting parameters on a per-frame basis, you can also use key-points to help generate smoothly varying parameters. By clicking with the right mouse button on any of the small dots, you can add or remove key-points. You can also specify whether linear or smooth (Hermite spline) interpolation is used to generate parameters between the key-points. When smooth interpolation is used, small handles appear at the key-points. Adjusting these handles will alter the slope of the parameter at the key-point. The popup menu also has an option to indicate whether the parameter slopes on either side of the key-point are continuous. When this option is not selected, the two handles can be moved independently. Note also that when you start adjusting the key-point positions, the original data is drawn in the background, allowing you to adjust the key-point positions so that the data is correctly smoothed.

Figure 6: The graph window can also be used to smooth out the parameter data with key-points
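The smooth interpolation mode described above is based on cubic Hermite splines. As an illustration only (this is a generic textbook formulation, not ICARUS's internal code, and the function names and key-point layout are hypothetical), a per-frame parameter value between two key-points can be computed like this:

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between values p0 and p1,
    with tangents (slopes) m0 and m1, for t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    h00 = 2 * t3 - 3 * t2 + 1   # basis weighting p0
    h10 = t3 - 2 * t2 + t       # basis weighting m0
    h01 = -2 * t3 + 3 * t2      # basis weighting p1
    h11 = t3 - t2               # basis weighting m1
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def interpolate_keypoints(keys, frame):
    """keys: sorted list of (frame, value, tangent) key-points.
    Returns the smoothly interpolated parameter value at 'frame'."""
    for (f0, v0, m0), (f1, v1, m1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            span = f1 - f0
            t = (frame - f0) / span
            # Scale tangents by the span so slopes are per-frame
            return hermite(v0, v1, m0 * span, m1 * span, t)
    raise ValueError("frame outside key-point range")
```

Setting both tangents to zero gives a flat approach to each key-point; adjusting the handles in the graph window corresponds to changing the tangent values.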

1.3.5. On-line help system


ICARUS provides an on-line What's This? help system that you can use to get information about the various icons, menus and widgets provided with each program. Accessing the help system is straightforward: simply click the What's This? button on the toolbar, or select the appropriate option from the Help menu. Alternatively, you can press SHIFT+F1. Doing this will change the mouse cursor, and put the interface in help mode. Clicking on an icon, menu-item, etc. will then pop up a short description of the widget.

1.4 Documentation overview


The remainder of this document describes the operation of each component of the ICARUS system in more detail. The next Chapter provides guidelines that should be adhered to whilst capturing image or video sequences. Following that, Chapters 3 to 5 describe the operation of the three modules provided by the ICARUS system, mainly in the form of step-by-step tutorials; then Chapter 6 provides a quick reference to some of the features of the interface; Chapter 7 provides some useful notes and hints; and finally, Chapter 8 lists reference material.

The suggested approach to learning how to use ICARUS well is to first go through all the step-by-step tutorials. After that, read through Chapter 6 to get a better understanding of all the menus and dialogs in the system. Finally, read the notes in Chapter 7. If you would like to understand more about the technical details of ICARUS, you may also download and read the reference papers listed in Chapter 8.

Some screenshots of the ICARUS system that appear in this documentation may differ slightly from those in the current version of the software. This should not be a problem, and you should still be able to follow the tutorials. Any screenshots with version-specific information will be updated. Also, please be aware that an online FAQ is available at http://aig.cs.man.ac.uk/icarus/faq.php.

2. Capturing image sequences


This section describes the general principles that should be adhered to whilst capturing image sequences. Capturing good quality images and video sequences that satisfy the guidelines presented here is essential if the various components of the ICARUS system are to work correctly.

If required, you can adjust the brightness, contrast, saturation and sharpness of a sequence before tracking. This is achieved by right-clicking on the sequence name listed in the Project Overview in the Calibration module, and selecting the Manipulate option. This will display a dialog box containing a number of adjustment sliders. All adjustments are made internally to ICARUS, and will not affect the movie file stored on disk.

2.1. Capturing video sequences


The limiting factor in the successful operation of the ICARUS system is the calibration module (see Chapter 4). Due to the nature of the algorithms used to estimate camera parameters such as position, orientation and focal length, it is important that the user understand the different types of camera motion that are supported (or not) by the ICARUS system.

Figure 7: General camera motion (left), pan/tilt motion (right)

2.1.1 Camera motion and scene structure


Generally, when capturing video footage of a scene, it is important to be aware of the type of motion that the camera is undergoing. The calibration module is capable of calibrating a wide variety of motions, such as general (Figure 7, left) and rotational motions (Figure 7, right), as long as the user indicates the type of motion present in the sequence. ICARUS is also capable of estimating the parameters of a static camera (see Chapter 4).

Another important factor that must be considered when calibrating for general camera motions is the content of the scene. It is important that the scene being viewed has a significant amount of depth (i.e. the camera views more than just a flat wall). Without this depth, calibration will fail. Support for motion reconstruction for such planar configurations will be added to a later release of the ICARUS software.


2.1.2 Camera zoom


When capturing video sequences, better results will usually be obtained during calibration if the zoom setting on the camera is not changed. The calibration module can handle changes in focal length, but performs more reliably when these changes are small. Explicitly changing the zoom setting of the camera will probably still work, but don't count on it, and avoid very rapid changes in zoom if possible.

2.1.3. Object motion


Ideally, the scene that is to be reconstructed should be static (that is, no objects should move whilst you capture the video sequence). Unfortunately, it will often be the case that image sequences contain moving objects not under the control of the user. As long as the image regions that contain moving objects are small, the automatic tracking and calibration algorithms will safely ignore them. If large regions of the image contain moving objects, a matte can be used to mask off the area (see the calibration tutorials for more detail). For user-identified features, you should ensure that these features are placed in static portions of the scene. If your scene contains a large number of moving objects, you're on your own…

2.2 Image/Video compression formats


For reading image files, ICARUS supports most of the standard formats (BMP, GIF, JPEG, PBM, PGM, PNG, PPM, XBM and XPM). The ICARUS system is able to read and write video files in several formats, depending on the operating system you are using.

Different compression formats affect the accuracy of image data. A format that achieves a high rate of compression (i.e. a small file size) does so at a cost of decreased image quality, and severely decreased image quality will impair the accuracy of the ICARUS system. Generally, you should use the format that gives you the best image quality possible, given the amount of hard disk space you have available for video storage.

The format with the best image quality is Microsoft AVI with DV (Digital Video) compression. This is the format used by most commercial digital capture and editing software, and is supported by ICARUS on Windows operating systems via the DirectX 8.1 library. DV format achieves a fixed compression ratio of over 8:1, and produces files requiring about 144K of space per frame (120K for NTSC resolution). On Windows platforms, ICARUS also supports many other video formats via the DirectShow interface, as well as Apple Quicktime.

On Linux systems, we recommend that you download the IMSConvert utility, which will allow you to convert AVI movie files into the IMS format supported by ICARUS (see below). IMSConvert is free software, distributed under the GNU General Public License (Version 2), and you can download it from http://aig.cs.man.ac.uk/imsconvert.

SGI systems use SGI's Digital Media Library for movie import and export. The Digital Media Library supports AVI, Quicktime, MPEG and SGI format movie files. See the mvIntro manual page on your machine for a complete description of the available compression schemes.

On Mac OS X systems, ICARUS supports the Quicktime movie format, as well as IMS movie files and the standard image formats described above.
On all architectures, video playback is performed using a caching system so that frames may be displayed as close to real-time as possible. The size of this cache is set initially to 64Mb (see the Project->Preferences menu). If you have a large amount of memory available in your system, you may want to increase this value to provide better playback performance. The minimum amount of memory required for the frame-cache is 16Mb.


2.2.1 The IMS (ICARUS movie sequence) file format


If your video footage is currently stored as individual frames, rather than as an encoded movie, then ICARUS can still read it using the IMS file format. An IMS file is a simple ASCII text file, looking something like:

    #IMS
    5
    25.0
    "C:\Documents and Settings\myname\images\frame0001.cin"
    "C:\Documents and Settings\myname\images\frame0002.cin"
    "C:\Documents and Settings\myname\images\frame0003.cin"
    "C:\Documents and Settings\myname\images\frame0004.cin"
    "C:\Documents and Settings\myname\images\frame0005.cin"

The first line of the IMS file must contain the #IMS identifier. The next line contains the number of frames in your sequence, and this is followed by the frame rate. Each frame must be stored in the TIFF format, as a Cineon, FIDO or DPX file, or in any of the image file formats described in the previous section. Its pathname must be written in the IMS file and enclosed in double quotation marks. This can be either the full pathname, or the pathname relative to the location of the IMS file itself. An IMS file can be loaded into ICARUS in the same way as an AVI, MPEG or any other supported movie format.

Sets of individual frames may also be loaded into ICARUS without constructing an IMS file: simply select Project->Import Movie in either the distortion or calibration module, select multiple images from the file dialog, and click Open. This will construct an internal IMS movie file from the images, and load them in as normal. If a project is then saved with these images, the internal IMS file will also be saved to disk.
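Because an IMS file is plain ASCII, it is easy to generate one with a short script. The following sketch is our own illustration, not part of ICARUS; it assumes the layout described above (identifier, frame count, frame rate, then one quoted pathname per line), and the frame filenames used are hypothetical:

```python
import os

def write_ims(frame_paths, fps, out_path):
    """Write an ICARUS IMS index file listing the given frames.
    Pathnames are quoted, as the IMS format requires."""
    with open(out_path, "w") as f:
        f.write("#IMS\n")
        f.write("%d\n" % len(frame_paths))    # number of frames
        f.write("%.1f\n" % fps)               # frame rate
        for p in frame_paths:
            f.write('"%s"\n' % p)

# Example: index all TIFF frames in the current directory
frames = sorted(p for p in os.listdir(".") if p.endswith(".tif"))
# write_ims(frames, 25.0, "sequence.ims")
```

Relative pathnames like these are resolved against the location of the IMS file itself, so the script can be run from the directory that will hold the index.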

2.2.2 The FLO Optical Flow file format


One ability of the calibration component of the ICARUS system is to calculate the optical flow throughout a video sequence. As this may be of use to other applications, it is possible to save this data using the FLO file format:

    #FLOx
    <width>
    <height>
    <data>

The first line of the FLO file must contain the #FLO identifier followed by either an 'a' for ASCII or a 'b' for binary (e.g. #FLOa or #FLOb). The next two lines contain the width and height of the image. Following these lines, the optical flow data is stored. For binary FLO files, this data is stored as two 4-byte binary floating-point numbers followed by a one-byte flag per pixel. The floating-point numbers represent the magnitude of flow in the x and y directions respectively, and the flag is 1 if there is a discontinuity at the pixel, and 0 otherwise. The floating-point numbers are stored in the standard big-endian (Motorola) binary floating-point format. For ASCII FLO files, the movement of each pixel is stored using two ASCII floating-point numbers, and the flag is a single 1 or 0, as described above.

Optical flow data can also be written in floating-point TIFF format. In this case, the red and green channels of the TIFF image are used to encode the x and y pixel motions respectively. Each motion value is offset by 1.0e+06 to ensure that only positive values are stored in the image. The blue channel is used to encode flow discontinuities, and is non-zero where discontinuities occur.
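To use FLO data in other tools, a parser for the binary variant can be sketched as follows (Python; the helper name is ours, and we assume the header lines are newline-terminated and the per-pixel records are in top-to-bottom raster order, which the manual does not state explicitly):

```python
import struct

def read_flo(path):
    """Read a binary ICARUS FLO file; returns (width, height, flow).

    flow is a list of rows; each pixel entry is (dx, dy, flag), where
    flag is 1 at a flow discontinuity and 0 otherwise.
    """
    with open(path, "rb") as f:
        if f.readline().strip() != b"#FLOb":
            raise ValueError("not a binary FLO file")
        width = int(f.readline())
        height = int(f.readline())
        flow = []
        for _ in range(height):
            row = []
            for _ in range(width):
                # Two big-endian 4-byte floats, then a one-byte flag.
                dx, dy = struct.unpack(">ff", f.read(8))
                flag = f.read(1)[0]
                row.append((dx, dy, flag))
            flow.append(row)
    return width, height, flow
```

The `>ff` format string selects big-endian 32-bit floats, matching the Motorola byte order described above.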


3. Distortion module
When images or sequences are captured with consumer-level cameras at short focal lengths, distortion is introduced into the images by the system of lenses in the camera. Due to this distortion, images of straight lines appear to be slightly curved. This distortion must be removed to ensure accuracy in the later calibration and reconstruction phases. Typically, consumer-grade cameras exhibit barrel distortion at short focal lengths, and small amounts of pin-cushion distortion at long focal lengths (see Figure 8).

No distortion

Barrel distortion

Pincushion distortion

Figure 8: Simple examples of lens distortion typically encountered with consumer-level digital cameras.

The amount of geometric distortion in each image/video sequence is estimated by having the user identify lines in the image that are supposed to be straight. The user may draw multiple lines in different images, or in different frames of a video sequence. After at least one line has been placed, ICARUS can calculate the distortion parameters required to straighten them. If the user has marked a number of lines in different frames of a video sequence, distortion parameters may be calculated independently for each frame. The parameters for in-between frames are then interpolated from these results. Alternatively, if the video sequence has been captured at an approximately constant focal length, a single set of average parameters may be estimated for the entire sequence.

There are several different types of distortion parameter that may be calculated (see the reference literature for further details):

1. The centre of lens distortion,
2. Low-order radial distortion,
3. Higher-order radial distortion,
4. Low-order tangential distortion,
5. Higher-order tangential distortion.

The only parameters that normal users of the system need worry about are the first three. Generally, the centre of lens distortion can be kept at the centre of the image. Tangential distortion may sometimes be necessary for non-standard lenses.

As well as removing lens distortion from a sequence (as described below), the distortion module may be used to re-apply the lens distortion. This can be achieved by loading in a previous distortion file and changing the Action option in the movie or image export dialog. Distortion files can be loaded and saved using the options in the Project menu.
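ICARUS's exact parameterization is given in the reference literature; for intuition only, the widely used polynomial radial model (shown here as an illustrative assumption, not necessarily ICARUS's internal form) corrects a point about the distortion centre like this:

```python
def undistort_point(x, y, cx, cy, k1, k2=0.0):
    """Apply a polynomial radial distortion correction about centre (cx, cy).

    k1 is the low-order radial coefficient and k2 the higher-order one;
    one sign of k1 corresponds to barrel distortion, the other to
    pin-cushion distortion.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy          # squared distance from the centre
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return (cx + dx * scale, cy + dy * scale)
```

Points further from the distortion centre are moved more, which is why curved images of straight lines can be straightened by fitting k1 (and optionally k2) to the user-drawn lines.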

3.1 Tutorial
1. First, launch the distortion program, and create a new project by clicking on the New Project toolbar button, or by selecting the Project->New menu item. Then select the Project->Import Movie menu option, and load the building2d video sequence. The name of the movie will appear in the project window, and the first frame of the sequence will be shown in the image window. You can change the current frame of the movie by moving the horizontal scroll-bar at the bottom of the window, or by dragging the frame indicator (the black vertical triangle) in the graph window.


2. Find a frame containing a straight line (take, for example, the upper-most horizontal edge of the main building). Put the system into feature mode by selecting the cross-hair button on the toolbar, and draw a line from the top to the bottom of the edge. Drawing is achieved by clicking with the left button at the desired start point on the image, dragging the line to the required end-point, and releasing the left mouse button. You should see a straight yellow line appear on the image. After the line is drawn, its end-points can be moved by simply clicking on them with the left button and dragging them to a new position. As this is being done, a window pops up to show a zoomed-in portion of the image. This helps with sub-pixel positioning of points. The user may also zoom in to the image by selecting the zoom in and zoom out buttons on the toolbar and clicking on the image with the left mouse-button. Note that by default, ICARUS tries to snap points to the nearest interesting feature. This is useful because it allows you to position these points more accurately. In some situations, however, this is not desirable, and so this snapping can be turned off by holding down the Shift key whilst placing the point.

3. You can see clearly that the line of the building is not straight, compared to the yellow line that has just been drawn on the image. This is the information that ICARUS uses to determine the distortion parameters for the lens. In order to calculate this information, more points need to be added to the line. The line can be subdivided by pointing at a section of the line with the mouse, clicking the left button, and moving the new vertex to a new position. This can be repeated as many times as necessary in order to follow the curve of the building. At least three points are required on each line to determine the distortion parameters.

4.More lines can be placed in other frames of the sequence if required. This will be necessary if the focal length changes significantly over the course of the sequence. If the focal length does not change significantly, it is still a good idea to create extra lines, as these will be used to increase the accuracy of the distortion calculations. At any time, you may save a project file containing the current state of the system. A project file for this sequence containing three distortion lines in a single frame is distributed with the system, and may be found in the tutorial directory (tutorial/distortion1.ipd).

5. The distortion parameters may now be calculated. Select the Distortion->Parameters menu option. This will bring up a window showing the various distortion parameters. Initially, only Low Order Radial distortion will be selected, and the parameters are assumed to be uniform over the entire sequence. These options can be changed if required. Clicking the Solve button will solve for the parameters the user has asked for. If no lines have been drawn in the sequence, but the user knows the distortion parameters, these may also be entered in the type-in boxes in the dialog, and the solution process skipped.


6. Once the distortion parameters have been calculated, the image will be warped to show the amount of distortion necessary to straighten the lines drawn in the preceding steps. These straightened lines are drawn in white. The entire video sequence may now be un-distorted and saved to disk using the Project->Export Movie menu option. Also, you may adjust the parameters by hand, if required, by clicking on the lens parameters in the Project Overview, and adding/removing key-points as described in Section 1.3.4.


4. Calibration module
The calibration module attempts to estimate parameters such as the focal length, position, and orientation of the camera for each frame of a video sequence, or for a set of one or more images. Calibration is achieved by identifying common features shared between images or frames of a video sequence. For sets of images, the user must identify these common features. For video sequences, the user can ask ICARUS to automatically select and track a large number of features throughout the sequence (auto-features). Alternatively, the user can select a set of features and have ICARUS track them (user-features).

Camera calibration is a bit of a black art, and it is very difficult to develop a system that will work in all situations. There will always be situations in which image/video calibration will fail. The key to getting a calibration working is to give ICARUS as much information as possible. If you know that the focal length is constant for each frame of a sequence (or even a fixed, known value), use this information when setting the lens type (see below). Also, if an image sequence has an identifiable pair of vanishing points in one or more frames, mark them, and ICARUS will use this information to try and build an accurate calibration for the sequence.

4.1 Tutorial 1: Auto-feature tracking


1. First, launch the calibration program, start a new project, and import the movie file building2. After the movie is loaded, ICARUS will display a dialog box containing the camera parameters for this movie. First of all, make sure that the camera motion is set to Free Motion, and that the pixel aspect ratio is PAL D1 (to match this PAL movie). You can also use this dialog box to give ICARUS more details about the lens and format that was used to capture the video sequence. Most importantly, if you know whether the focal length in your sequence is constant or variable, you can set this option. Also, by specifying the camera's aperture height, you can enter a fixed focal length in millimetres. Because the focal length is approximately constant for this sequence, keep the default Constant focal option, and then close the dialog. (Note that with this particular sequence, setting a constant focal length is not strictly necessary, and the sequence will calibrate accurately without it.) For brief descriptions of the other options available in this dialog, see Section 6.

2. You must now indicate the range of frames which will be calibrated. By default, the range is set to the entire movie, but you can change this by moving the frame indicator (the black triangle in the graph widget at the bottom of the screen) to a frame and selecting Set Start Frame or Set End Frame from the Tracking menu (or by pressing SHIFT+S or SHIFT+E).


3. A number of features now need to be identified and tracked throughout the frames of the sequence. Select the Tracking->Tracking Parameters menu option. This will pop up a dialog containing parameters such as the number of features, the maximum allowable residual error, etc. Just leave these parameters as they are for now, click the Close button, and then select Tracking->Auto Track from the Tracking menu (or press F6). This will track 200 features from the start to the end of the frame range (i.e. from frame 0 to frame 100). Tracking these points will take several minutes. A project containing the tracked features is included in the tutorial directory (projects/autoFeatures1.ipc). Once tracking has finished, a colour-coded track is displayed for each feature, indicating the motion of the feature over the previous and next 10 frames. The colour of the track, either green, yellow or red, indicates the amount of error in the feature track. When the mouse moves over an individual feature, a longer track is displayed. Features can be deleted with the delete key whilst hovering over them with the mouse cursor (remember to make sure that the image window has keyboard focus by clicking in it). Clicking the right mouse-button over a feature will also display the feature menu (see Chapter 6 for a description of these functions).

4. The next stage involves calculating a focal length for one of the frames of the reconstruction. This step is often not necessary, but for sequences where it is possible to do so, estimating a focal length can improve the quality of the final calibration. Put the system into feature mode by selecting the cross-hair button on the toolbar. To make things easier, hide the auto-features by unselecting the View->Auto Features option from the View menu (but remember to show them again after you've done this step). Open the Coordinate Frame options in the Project Overview on the left-hand side of the window, and highlight the X Axis entry.
This allows you to draw over edges in the image that are parallel to the X axis in the scene. You can pick any orientation you want for the directions of the coordinate axes, but the X, Y and Z axes must be orthogonal to each other (i.e. at right-angles). In this case, we'll choose the X axis to run down the long side of the building, and the Z axis to run down the short side. This means that the Y axis will be parallel with the vertical sides of the building. Draw two red X axis edges, as shown in the figure on the right. Edge drawing operates in the same way as in the distortion module, except that edges can't be subdivided in this case. After both edges are marked, select the Z Axis entry in the Project Overview, and draw the two blue Z axis edges. Each time you mark an edge, a small coloured indicator appears in the graph window, showing that an edge has been drawn in that frame. Pressing the right mouse-button on an edge end-point pops up a menu that allows you to delete the edge. Pressing the right mouse-button on the axis entry in the Project Overview allows all edges associated with that axis to be deleted. At least two edges are required for two out of the three axes in order to estimate a value for the focal length in a single frame (see Section 7.1 for further discussion about calculating focal lengths with vanishing points). Now select the Camera->Estimate focal length option from the Camera menu. This should show a dialog indicating the focal length that has been calculated using these lines (for this sequence, you should get a value of between 750 and 800 pixels if you've placed your lines accurately). Click the Yes button in response to the question asking if you want to mark the frame as calibrated. This means that ICARUS will remember this focal length and use it to (hopefully) improve the accuracy of the calibration. Note that if you get a very inaccurate focal length estimate, you will need to adjust the edge end-points and re-estimate the focal length.
Deleting an edge from a frame will also remove the focal length estimate.
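Section 7.1 describes the method in full; for intuition, the standard construction behind this step can be sketched as follows (assuming zero skew, square pixels, and the principal point as origin — a simplification, not necessarily ICARUS's exact computation). Two parallel scene edges intersect at a vanishing point, and two vanishing points of orthogonal directions fix the focal length:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors (homogeneous image coordinates)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vanishing_point(p1, p2, q1, q2):
    """Intersect two image edges, each given by two (x, y) end-points."""
    l1 = cross((p1[0], p1[1], 1.0), (p2[0], p2[1], 1.0))
    l2 = cross((q1[0], q1[1], 1.0), (q2[0], q2[1], 1.0))
    v = cross(l1, l2)
    return (v[0] / v[2], v[1] / v[2])   # fails if the edges are parallel in the image

def focal_from_vps(v1, v2, principal=(0.0, 0.0)):
    """Focal length in pixels from two vanishing points of orthogonal directions."""
    d = -((v1[0] - principal[0]) * (v2[0] - principal[0]) +
          (v1[1] - principal[1]) * (v2[1] - principal[1]))
    if d <= 0.0:
        raise ValueError("vanishing points inconsistent with this simple model")
    return math.sqrt(d)
```

For example, vanishing points at (800, 0) and (-800, 0) about a centred principal point give a focal length of 800 pixels, consistent with the 750-800 pixel range quoted above for accurately placed lines.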


5. Before the calibration process can start, you need to set the pixel residual error. Select the Project->Preferences option from the Project menu. The Residual entry in the preferences dialog can be used to set the error threshold used by the calibration algorithms. It is set to 1 pixel by default, which should be adequate for most sequences. Now select the Camera->Calibrate option from the Camera menu. A dialog will appear, showing several advanced options that let you control the accuracy and speed of the calibration process (see Section 6 for a description of these options). For now, just click Calibrate. This will show another dialog indicating the progress of the calibration. If all goes well, after a few minutes the dialog will disappear without any reported errors. You will notice that the features have been replaced with small dots representing the positions of the features in space. These dots are coloured depending on how well they fit the original feature positions: green for good (i.e. they are within the error threshold), red for bad (outside the error threshold), and white if the feature was not located in the frame. You can also display 3D markers at the feature locations by selecting the 3D Marker->Display option when you right-click on a feature position. This can give you a better idea of scale in the scene. Finally, the results of the calibration can also be examined from different viewpoints by clicking on the New Viewer button on the toolbar, or selecting the appropriate option from the View menu. This will display an additional window showing a three-dimensional view of the calibrated camera path and the positions of the features. The viewpoint may be manipulated by using the left, middle, and right mouse-buttons to rotate, zoom, and translate a virtual camera accordingly.

6. The next stage of the process is to orient and scale the world coordinate system.
This stage is optional, but if the calibration data is to be used in the reconstruction module, it is often worthwhile. Orientation uses the same edges that were used when estimating a focal length in Step 4, so if that stage was skipped, close the viewer window and mark the edges now. For orientation, you will need to mark edges for at least one axis (and remember, all the edges need to be in the same frame). Sections 7.1 and 7.2 describe in more detail how the edges are used for orientation, as well as some alternative approaches to orientation. You will also need to specify an origin for the coordinate system, so select the Coordinate Frame->Origin Point entry in the Project Overview, and mark an appropriate point on the ground-plane (e.g. the bottom corner of the building). For free camera motion, the origin point must be marked in at least two frames, so go to another frame in which the same point is visible, and mark it again. Now select the Camera->Orient Scene option from the Camera menu. A wire-frame representation of the ground-plane should appear, and you can check to see if the world coordinate frame has been correctly oriented. A project file containing the correctly oriented calibration is included in the tutorial directory (projects/autoFeatures2.ipc). Opening up a viewer window will also allow you to check the camera position relative to the ground-plane. If you need to fix the scale of the calibration to a specific value, then you can do this by highlighting the Coordinate Frame->Scale entry in the Project Overview and


drawing a line in the image corresponding to a known distance. This line needs to be marked in at least two frames of the sequence. Once this is done, right-clicking on the Coordinate Frame->Scale entry and selecting the Set Scale option allows you to specify the length of this line. The calibration will be scaled accordingly.

7. An alternative way to check the accuracy of the camera motion is to examine the different camera parameters in the graph window. First, open the Camera entry in the Project Overview, and highlight Focal length. This shows a plot of the camera focal length for each frame of the sequence in the graph window (try looking at the other parameters as well to see how they vary, and remember that you can change the scale of the graph using the Control key and the left mouse button, as described in Section 1.3.4). In some situations, parameters like the focal length, skew, and principal point will vary too much, due to error in the feature locations. Ideally, the camera skew should be zero, the pixel aspect ratio should be a constant value, equal to that specified in the camera parameters dialog (step 1), and the principal point should be in the centre of the image (50%, 50%). In some situations this might not be the case, and these errors must be corrected. Also, the camera focal length might vary too much, even though it was specified as constant or fixed in the camera parameters dialog. If your camera calibration data appears to contain significant errors, these can be reduced by selecting the Camera->Bundle Adjust option from the Camera menu. Note that in this case, applying the bundle adjustment will have little effect on the accuracy, as the calibration is quite accurate already, but you can take a look at the dialog just to see the options available. The dialog allows you to place constraints on the camera parameters. Typically, you would select the Constrain Aspect, Constrain Skew, and Constrain Principal options (and Constrain Focal if your focal length should be constant), and then click Apply. After a few minutes, the adjustment should finish, and you can examine the results of the adjustment by checking the parameters in the graph window.

8. Finally, the results of the calibration can be exported for later use during reconstruction by selecting the Project->Export 3D Motion option from the Project menu, and saving an ICARUS Sequence file (.isq). Alternatively, you can also export the calibration data to a number of commercial modelling packages using the same menu option. Please note that when exporting in these other formats, it is very important that a metric bundle adjustment has been applied to the calibration data, and that the camera skew, aspect ratio and principal point have been constrained. You can check the type of motion that will be exported by selecting the View->Restricted Camera option from the View menu. This will alter the current image window to match the data that will be exported. Ideally, selecting this option should not affect the image at all. If this is the case, then the camera data exported to Maya/Lightwave etc. will match the data calculated by ICARUS. If there are significant differences, then run a bundle adjustment as described above.


4.2 Tutorial 2: User-feature tracking


This tutorial describes how user-features may be created and tracked. User-features may be useful for difficult shots, as they can be positioned by hand. User-features can also be mixed with auto-features in the normal way, and will be used as gold-standard positions (i.e. they are always assumed to be correct).

1. Launch the calibration program, start a new project and import the building2 movie file. Set the appropriate camera motion and pixel aspect ratio as before, and set the camera focal length as Constant. Now place some keyframes, starting at frame 0 and ending at frame 100. Put the interface into feature mode by selecting the cross-hair icon on the toolbar. Now move the frame slider to a frame in the middle of the sequence and select Tracking->New User Feature from the Tracking menu (or press F4).

2. Place the feature in this frame by clicking with the left mouse-button. Try to select a position that has some easily identifiable contrast or pattern (like the corners of objects, etc.). Note that whilst placing a feature, if you hold down the Shift key and then hold down the left mouse-button, a zoomed-in portion of the image is displayed. You may now place the feature more accurately. Now click the right mouse-button over the feature. This will display the feature menu options (see Chapter 6). Select Track forwards to track this feature forwards to the end of the video sequence. Then move back to where you first placed the feature and select Track backwards to track the feature back towards the first keyframe. If the tracking fails at any time, a dialog box will be displayed. If this happens, click Okay, re-position the feature in the frame where tracking failed, and start tracking again. Similarly, if a feature moves out of bounds (indicated by the yellow box), but re-appears later, you may re-position it in the later frame and continue tracking. As each feature is tracked, a graph is drawn in the graph window showing the residual error for the feature track. If this error rises above the limit set in the tracking parameters dialog, then tracking terminates.

3. Repeat this process until you have positioned around 20 features. Try to make sure that the features are evenly distributed over the frames, and that the points you select do not all lie on a single plane. In theory, ICARUS only requires 8 features; in practice, however, many more features are needed to get reliable results, and the tracking quality strongly depends on where the features have been placed. A project file (projects/userFeatures1.ipc) has been provided that contains over 20 user-features. Once a suitable number of features have been placed, select Camera->Calibrate from the Camera menu. Clicking the Calibrate button in the dialog will start the calibration process, which may take several minutes to complete (a calibrated project file is also included as projects/userFeatures2.ipc).


4. Once finished, a viewer window may be opened to examine the feature locations and camera motion. Notice that the motion is a little more noisy than when auto-tracking is used. This is because of the smaller number of feature tracks used to estimate the camera parameters. In some situations, the motion might be too erratic to use. In this case, you can manually adjust these parameters by selecting feature mode from the toolbar, and then moving the parameter values up and down using the left mouse button. You can also place key-points, as described in Section 1.3.4. Notice that as adjustments are made, the result can be viewed interactively in the image window. Manual adjustments can also be made to a sequence that has been tracked using auto-features.


4.3 Tutorial 3: Pan/Tilt/Zoom motion and Mattes


This tutorial will show you how to calibrate a panning video sequence, how to use mattes, and how to remove moving objects from the calibrated sequence. Mattes may be used to identify areas of a sequence that should be ignored during feature tracking; typically these will be areas containing moving objects which, if features were tracked on them, could cause the calibration process to fail. The mattes used by ICARUS can be keyframed or tracked automatically. This tutorial shows how to keyframe mattes.

1. Launch the calibration program, start a new project, and import the panSequence movie. When the camera parameters dialog is displayed, make sure that you set the camera motion as pan/tilt/zoom. This will tell ICARUS that the camera in this sequence is held in a fixed position (e.g. mounted on a tripod), and is only rotating and/or zooming. Now select Tracking->New Matte (or press F5). This will generate a new matte. In this tutorial, we'll use the matte to mask out the road containing the moving objects.

2. Move to the start of the sequence, zoom out slightly, and draw the matte outline around the road by clicking with the left mouse button. Each click will create one new boundary key-point. Boundary key-points can be placed inside or outside the frame. You can position key-points more accurately inside the frame by holding down the Shift key before clicking. This will show a popup window around the mouse position with a zoomed-in view of the image. You will notice that as you create boundary key-points, small handles appear next to each point. These are used during matte tracking, and will be described in more detail in the next tutorial. After key-points have been created, their positions can be changed by clicking and holding the left mouse button, and dragging them into a new position. Holding down the Control key and clicking the left mouse button inside the matte will allow you to move all its boundary points at the same time. Clicking with the right mouse button inside a matte will show a popup menu. This provides options to insert extra key-points at the current mouse location, remove the matte from this frame (or from this frame onwards if no key-points have been placed in later frames), or invert the matte. Note that when you first create a matte, its position is set in each frame of the sequence (indicated by small triangles in the graph window). Key-point positions are indicated with light-blue triangles. The next stage is to go through the sequence and make sure that the matte tracks the road as it changes position in the camera view.

Note: for this sequence, it is not actually necessary to use the matte in order to calibrate the camera motion, because ICARUS is able to detect and ignore small moving objects in image sequences without the use of mattes. Try it and see!


3.Move to frame 30 using the frame slider. You will notice that the matte stays in position. You can change its position in this frame by dragging the points around the screen. Move all the points into a new position so that the matte covers the road again. Moving to earlier frames you will see that the position of the matte is interpolated between the two positions you have created.
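The interpolation between keyframed matte positions behaves much like the following sketch (linear interpolation is shown purely as an illustration; ICARUS's actual interpolation scheme may differ, and the helper is ours):

```python
def interp_boundary(key_a, key_b, frame_a, frame_b, frame):
    """Interpolate matte boundary points between two keyframes.

    key_a and key_b are lists of (x, y) boundary points with the same
    length and ordering; frame_a < frame <= frame_b.
    """
    t = (frame - frame_a) / float(frame_b - frame_a)
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(key_a, key_b)]
```

A frame halfway between two keyframes therefore gets boundary points halfway between the two keyed positions, which is why the matte appears to slide smoothly between the positions you set.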

4.Position the matte in a few more frames of the sequence, adjusting its position until it covers the road throughout the entire sequence. An example project file is included (projects/panCalibration1.ipc) that contains a suitably positioned matte.

5. Now we're ready to track some features. Select Tracking->Auto Track to track some features throughout the sequence (or press F6). Notice that no features are selected from the areas where the matte has been placed. The file projects/panCalibration2.ipc contains the matte, and a set of auto-features after tracking.


6. After tracking has completed and a suitable set of features has been identified, select Camera->Calibrate and then click the Calibrate button in order to reconstruct the camera motion. Once calibration is finished, open a viewer window to take a look at the camera motion. Notice that all the camera centres are in the same place, and all the feature positions have been placed on a sphere. This is because feature locations for pan/tilt/zoom camera motion can only be represented as direction vectors, and not as exact locations in 3D space.
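The direction vector for a feature can be pictured as a simple back-projection (a sketch assuming zero skew and square pixels; the helper is ours, not an ICARUS function):

```python
import math

def pixel_to_direction(x, y, f, cx, cy):
    """Back-project an image feature to a unit viewing direction.

    f is the focal length in pixels and (cx, cy) the principal point.
    For pan/tilt/zoom cameras the feature's depth is unobservable, so
    only this direction (a point on the unit sphere) can be recovered.
    """
    dx, dy, dz = x - cx, y - cy, f
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / n, dy / n, dz / n)
```

Every feature maps to a point on the unit sphere around the (fixed) camera centre, which is exactly the sphere of features you see in the viewer window.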

7.Because the camera motion has now been calibrated, you can orient the scene using vanishing points if you wish. Remember you need to mark edges for one or more axis directions in a single frame (see Sections 7.1 and 7.2). Note that for pan/tilt/zoom camera motions, you only need to mark a single origin point. The projects/panCalibration3.ipc file contains a correctly oriented scene. An ICARUS sequence file can then be saved out as normal, and loaded into the reconstruction module, allowing you to reconstruct geometry from the sequence. Alternatively, you can also merge all of the video frames together and generate an image mosaic. To do this, simply select Project->Export Image->Image Mosaic. This will prompt you for a filename, and then generate the mosaic image.

An image mosaic, generated from the pan/tilt/zoom tutorial video sequence.


4.4 Tutorial 4: Tracking Mattes


This tutorial is currently incomplete, but we will describe the basic operations of the matte tracking facilities that are available in ICARUS. These algorithms are currently in the beta stage, but we've included them here in the hope that they can be tested thoroughly and we can identify any problems. We'll hopefully include a better tutorial (with some images and video) in a later release of the software...

Mattes can be placed around objects in one frame using the techniques described in the previous tutorial. As you draw the matte, you will notice that each boundary key-point has three handles associated with it. These handles are used to partition the image into three areas: the region beyond the outer boundary contains pixels that are known to be completely outside the object you want to matte out. Similarly, pixels inside the inner boundary are known to be completely inside the object. Pixels in between can be either, and will be automatically classified as inside/outside by ICARUS. If you can see that the initial placement of the inner and outer boundaries is incorrect, you can adjust the position of the handles using the left mouse-button.

When you are satisfied that the boundaries are correct, click the right mouse-button inside the matte. This will display a popup menu with several options. The first option, Pull Matte, can be used to extract a more accurate matte, and will re-classify the pixels between the inner and outer boundaries as either background or foreground. Select this option, and after a few moments you should see a silhouette appear over the object.

You can get ICARUS to automatically track the matte by selecting the Track forwards or Track backwards options from the popup menu. This will try to work out how to move the silhouette so that it matches the position of the object in the next/previous frame. Tracking will continue until it reaches the end/start of the sequence, or until a frame is reached where you have marked more boundary key-points for this matte. At any time (e.g. if you see that the tracking algorithm has failed to track the object correctly), you can hit the Abort button on the progress dialog. You can then re-position the matte boundary and continue tracking. The resulting mattes can be saved to disk as images or movie files. To do this, right-click on the matte and select the Export Matte option from the menu.
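ICARUS does not publish how Pull Matte classifies the unknown band, but the general idea is that of trimap matting: pixels between the inner and outer boundaries are assigned to foreground or background by comparing them against colour statistics gathered from the two known regions. A minimal illustrative sketch (mean-colour comparison; the real classifier is certainly more sophisticated):

```python
# Illustrative sketch, NOT ICARUS's actual algorithm: label each pixel in
# the unknown band (between the inner and outer boundaries) by whether it
# is closer in colour to the known-inside or the known-outside region.

def mean_colour(pixels):
    """Mean of a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def dist2(a, b):
    """Squared colour distance."""
    return sum((a[c] - b[c]) ** 2 for c in range(3))

def pull_matte(unknown, known_fg, known_bg):
    """Return one True (foreground) / False (background) label per
    unknown pixel."""
    fg_mean = mean_colour(known_fg)
    bg_mean = mean_colour(known_bg)
    return [dist2(p, fg_mean) < dist2(p, bg_mean) for p in unknown]
```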


4.5 Tutorial 5: 2D and 3D Image stabilization


This tutorial describes one of the extra features of ICARUS: the ability to stabilize a video sequence using either 2D or 3D information. This hasn't really got anything to do with geometry reconstruction, but we've had requests to include this feature in ICARUS, so here it is. The 3D stabilization algorithm allows you to transform a video sequence in such a way that it appears to have been shot using a steady-cam.

1.Launch the calibration program, start a new project, and import the building2.avi movie. Set a start and end frame that contains the part of the sequence you want to stabilize (for now, you can just stabilize the entire sequence by keeping the start/end frames in their default positions). Now create a new user-feature by selecting the Tracking->New User Feature option from the Tracking menu (or press F4). Position the feature at the location you want to stabilize to (e.g. at the corner of the building, as indicated on the right). Now click with the right mouse-button on the user-feature and select Track forwards. This will track the user-feature forwards throughout the sequence. If the feature was not placed in the first frame, also select Track backwards. Also, if tracking fails at any time, re-position the feature in the frame where tracking failed and repeat the forwards/backwards tracking as necessary. Eventually, you should end up with the feature positioned in every frame of the sequence (a project file projects/stabilize1.ipc is included with a suitably tracked feature).

2.Now that a feature has been tracked correctly, you can save out a movie file that is adjusted to keep the user-feature at a fixed location. First of all, make sure the user-feature is highlighted in the Project Overview, and move to a frame of the sequence where the user-feature is in the position you want to fix it. All other frames will be adjusted so the feature remains in this position. Now select Project->Export Movie->Stabilize. You will be prompted for a movie format and filename, and the stabilized sequence will be generated. If two user-features are present in the project then the movie will also be adjusted to preserve the orientation of the line joining the two user-features.

3.This approach to stabilizing the sequence works entirely in 2D, so it can work with any camera and object motion. For example, you can also load in the panSequence.avi movie and stabilize to one of the moving objects in the scene. A second project file (projects/stabilize2.ipc) is included containing a user-feature that has been attached to one of the people walking down the street (see right). Try loading this project and then exporting the stabilized movie.

4.As well as working in 2D, ICARUS is also capable of using 3D camera information to stabilize a video sequence. Try calibrating one of the movie files included with ICARUS, or loading one of the pre-calibrated project files (e.g. projects/autoFeatures2.ipc). Now move to a frame that you want to use as the base-frame (this will be the frame that is distorted the least in the final movie). Select the Project->Export Movie->Stabilize option from the Project menu. This will prompt you for a filename and save a stabilized file. Looking at the final movie, you will notice that all high-frequency camera motion effects have been removed.
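The 2D behaviour described above (one feature pins position; a second also pins the orientation of the line between them) can be sketched as a simple per-frame transform. This is our inference of what the export does, not ICARUS source code:

```python
import math

# Sketch of 2D stabilisation as described in the tutorial (assumed
# behaviour, not ICARUS code): with one tracked feature, translate each
# frame so the feature returns to its reference position; with two
# features, also cancel the rotation of the line joining them.

def stabilizing_transform(ref_pts, cur_pts):
    """Return (dx, dy, angle): translate the current frame by (dx, dy) and
    rotate it by angle radians (about the first feature) to stabilise it."""
    dx = ref_pts[0][0] - cur_pts[0][0]
    dy = ref_pts[0][1] - cur_pts[0][1]
    angle = 0.0
    if len(ref_pts) > 1:
        ref_dir = math.atan2(ref_pts[1][1] - ref_pts[0][1],
                             ref_pts[1][0] - ref_pts[0][0])
        cur_dir = math.atan2(cur_pts[1][1] - cur_pts[0][1],
                             cur_pts[1][0] - cur_pts[0][0])
        angle = ref_dir - cur_dir
    return dx, dy, angle
```

For example, a feature tracked to (105, 97) that should sit at (100, 100) gives a frame shift of (-5, 3).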


4.6 Tutorial 6: Motion Filtering


This tutorial describes another beta feature of ICARUS, namely using mattes to selectively remove moving objects from pan/tilt/zoom video sequences. This feature works best for small objects, like the people that are removed from the scene in this tutorial. Be aware that this feature is still in beta and undergoing development. Object removal will work best with panning sequences. You might also be able to use certain free-motion sequences, where the camera is not undergoing large amounts of translation.

1.Start off by loading in the panSequence movie file, set the motion to pan/tilt/zoom and track some auto-features as normal. There is no need to actually calibrate the camera motion, as that will be done in a special way whilst removing the moving objects.

2.Now, imagine that we want to remove the cluster of three people who are walking on the pavement in the sequence. In order to do this, first create a new matte by pressing F5. Highlight the matte in the project overview by clicking on it with the left mouse button, and remove the auto-features from the display by un-selecting View->Auto Features. Now, making sure the interface is in Feature mode, draw the matte around the people in one frame (frame 27 is shown in the image on the right). Make sure that the matte's outer boundary surrounds all the moving objects. If you want to mask out more objects, you can also draw mattes around those, but for now, we'll just stick with a single matte.


3.You must now re-position the matte so that it surrounds the moving people in each frame of the sequence. You can do this by moving forwards and backwards through the sequence and repositioning and re-sizing the matte each time the moving objects move outside (see Tutorial 3 for more discussion about how to adjust mattes, and remember that you can move the entire matte by holding down the Control key and dragging the matte around the screen using the left mouse-button). A project file motionFilter.ipc is included with the tutorial that contains a properly positioned matte surrounding the moving people.

4.Once the matte is positioned correctly, you can select Project->Export Image->Motion Filter or Project->Export Movie->Motion Filter to remove the objects under the matte in either the current frame or the entire movie. Doing this will display a file dialog prompting you for an image or movie filename, and then a second dialog containing options that affect the quality of the motion filtering. For now, just leave the threshold value at its default and click Filter to process the sequence and remove the objects.

5.You can use multiple mattes to filter out different motions, as well as inverting a matte (by right-clicking on the matte and selecting Invert) if you want to remove all object motion except that covered by the matte. Be warned, though, that motion filtering can be a time-consuming process, especially if you're removing motion from a large area of an image. Also, remember that this filtering is still a beta feature, and so may not always work correctly.

Before (left) and after (right) images showing the motion filtering in action. Notice that the three people that were covered by the matte have been removed from the image.
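ICARUS does not document its motion-filtering algorithm, but a common way to achieve the effect shown above is a per-pixel temporal median: once the pan/tilt/zoom frames are registered against each other, every pixel under the matte can be replaced by the median of that pixel's values taken from frames where the matte does not cover it, i.e. by the static background. A minimal sketch of that idea:

```python
from statistics import median

# Sketch of the idea behind motion filtering (ICARUS's exact method is not
# documented): for registered frames, replace each matted pixel in the
# target frame with the median of the same pixel in frames where the matte
# does NOT cover it, recovering the static background.

def filter_frame(frames, mattes, target):
    """frames: list of equal-size 2D grids of grey values.
    mattes: matching grids of booleans (True = covered by the matte).
    Returns a filtered copy of frames[target]."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [row[:] for row in frames[target]]
    for y in range(h):
        for x in range(w):
            if mattes[target][y][x]:
                clean = [f[y][x] for f, m in zip(frames, mattes)
                         if not m[y][x]]
                if clean:  # leave the pixel alone if it is always covered
                    out[y][x] = median(clean)
    return out
```

This also suggests why the feature works best for small objects: large mattes leave fewer uncovered frames from which to recover each background pixel.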


4.7 Tutorial 7: Single image calibration


This tutorial will show you how to calibrate a single image, using vanishing points to estimate the intrinsic and extrinsic parameters of the camera.

1.Launch the calibration program, start a new project, and import the image images/testImage.jpg. Because this image was captured with a digital stills camera, you will first need to specify some different camera parameters than in the earlier tutorials. Make sure that the pixel aspect ratio is set to Square. This should be set by default, but it is important to realise that these parameters should be set properly for each image or video sequence you import.

2.Calibration of a single image is achieved using vanishing points. Put the interface into feature mode by selecting the cross-hair button on the toolbar. Now open the Coordinate-Frame options in the Project Overview, and highlight X-Axis. Draw two red lines on the image, as shown in the image on the right. These two lines define the vanishing point of the X axis in the world coordinate system (see Sections 3.1 and 4.1 for a description of how to draw these lines, and Section 7.1 for a more detailed description about how to select suitable lines).


3.Now select Y-Axis from the Project Overview, and repeat the process, drawing two green lines for the Y axis (again, see the image on the right to see where to place the lines). Finally, select an appropriate position for the origin of the coordinate system, by selecting Origin Point from the Project Overview, and choosing any point on the floor of the room.

4.Now the image is ready to be calibrated. Select the Calibrate option from the Camera menu. ICARUS will first use the vanishing points you have defined to estimate the focal length of the camera. A popup menu will be displayed, showing the estimated focal length (you should get something between 1300 and 1400 pixels), and asking you if you want to mark the frame as calibrated. Click Yes. A wire-frame representation of the ground-plane will then be drawn, indicating that the image has been calibrated. You may now save out the calibration as before, by selecting Project->Export 3D Camera Motion from the Project menu and choosing the ICARUS Sequence file format (.isq). A project file called projects/singleImageCalibration.ipc has been provided containing the finished calibration.
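The geometry behind this step is standard, even if ICARUS's exact code is not published: for a camera with square pixels, the vanishing points v1 and v2 of two orthogonal world directions and the principal point p satisfy (v1 − p)·(v2 − p) = −f², which yields the focal length in pixels directly. A sketch:

```python
import math

# Standard focal-length-from-vanishing-points relation (a sketch of the
# geometry behind step 4; not ICARUS source): for vanishing points v1, v2
# of two ORTHOGONAL world axes and principal point p, square pixels give
#   (v1 - p) . (v2 - p) = -f**2

def focal_from_vanishing_points(v1, v2, p):
    """Estimate the focal length in pixels."""
    dot = ((v1[0] - p[0]) * (v2[0] - p[0]) +
           (v1[1] - p[1]) * (v2[1] - p[1]))
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal axes")
    return math.sqrt(-dot)
```

For example, synthetic vanishing points built with f = 1350 and the principal point at the image centre recover 1350, consistent with the 1300-1400 pixel range quoted above.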


4.8 Tutorial 8: Multiple image calibration


This tutorial will show you how to calibrate multiple single images, so that each camera position is represented in the same coordinate system. This is done with user-features that are positioned in each image before calibration.

1.First of all, start a new project and import the 5 multipleImage JPEG files that are included with ICARUS. This can be done by selecting Project->Import Image, highlighting the 5 images in the file dialog whilst holding the Shift key, and then selecting Open to load the images. Once loaded, the camera parameters dialog will be displayed. Because the camera moved freely whilst taking these images, make sure Free Motion is selected in the camera motion drop-down box. Then, click Close to close the camera parameters dialog.

2.Before calibration, you must position at least 8 user-features in each image. Start off by selecting Tracking->New User Feature (or press F4). This will create the first user-feature. Now select a suitable place in the first image to position the feature. You want the position to be visible in as many images as possible, so in this case, you could select the top-left corner of the monitor screen, as shown on the right.


3.Once the feature is placed correctly in the first image, go through each of the other images and position the same feature wherever it is visible. Remember that you can use the Up and Down arrow keys to scroll through the images, and hold down the Shift key whilst positioning a feature to display a zoom window for more accurate positioning. If you place a feature incorrectly, you can simply replace it at the correct position. Now repeat this process until you have a large enough set of features positioned at different places in each image. The more features you position correctly, the more accurate the final calibration will be. A sample project file is included with this tutorial called multipleImageCalibration1.ipc that contains 14 features.

4.The final stage of the calibration setup process is to try and estimate a focal length for one of the images using vanishing points, as described in the first auto-feature calibration tutorial. This can help a lot in terms of accuracy when calibrating multiple still images, especially when you're not using many user-features (as in this case). Select the image called multImage4.jpg and position Y and X axis lines, as shown on the right (or take a look at the multipleImageCalibration2.ipc project file to see the lines more clearly). Remember, though, that this stage is not always necessary and you can also simply input a focal length value if you know one to achieve the same effect (the focal length is about 1140 pixels for this image).

5.Now, simply select Camera->Calibrate to calibrate the cameras. You can position a ground-plane as before, or open up a viewer window to see where the cameras are positioned. Camera calibration data can also be saved in the normal way by selecting Project->Export Motion. A calibrated project file, including the coordinate axis lines, is included as multipleImageCalibration2.ipc.


5. Reconstruction module
Once a set of images/video sequence has been calibrated, it can be used in the Reconstruction module to build a geometric reconstruction of the scene. The reconstruction module uses camera calibration data to assist the user in positioning geometric primitives that represent objects in the scene.

5.1 Tutorial 1: Simple shapes


1.First launch the reconstruction program. Now make a new project by clicking on the New Project toolbar button, or selecting the appropriate entry in the Project menu. Now, select the Project->Import ICARUS Sequence option from the Project menu and load the calibration file called building2.isq from the sequences directory. This should load in a calibration sequence similar to the one obtained in calibration tutorial 4.1. Select the Display->Ground-plane option from the Display menu in order to draw a wire-frame representation of the ground over the sequence.

2.To start modelling the building, we will use a box to represent its overall shape. Click on the Box primitive icon at the bottom of the window. A wire-frame box should appear on the screen with red corners, and a new primitive called box should appear in the Project Overview. Now place the system in Feature mode by selecting the cross-hair button on the toolbar. At any time, you may click on the primitive with the right mouse-button to display the primitive menu (see Section 6.3 for details).

3.The box primitive must now be positioned and scaled correctly. Move the mouse cursor over the bottom-front corner of the primitive, and click the left mouse button. Keeping the button held, drag the cursor towards the nearest corner of the building and release the button. This will alter the position of the box primitive so that its corner projects onto the image plane in the place where you released the button. A small white circle appears at the vertex to indicate that it has been pinned at this location in the image. You can also hold down the Shift key whilst moving the pin to zoom into the image, as in the distortion and calibration modules.


4.The other corners of the box can now be positioned. Select the bottom left corner, and drag it to the position indicated on the right. You will notice that as the corner moves, the position of the box remains constant, but its scale changes. This is because this primitive now has two image constraints: one for the previous (pinned) vertex and one for the current vertex. At any time, you may click with the right mouse button on a vertex to show a pop-up menu that allows you to add, remove or clear pins.

5.In order to set the correct height of the building, select the top-front corner of the box and drag it into position, as shown on the right. Again, the position of the box remains constant, and its vertical height changes to satisfy the new image constraint. When the primitive is correctly positioned in space, its projection into each frame of the sequence will correctly match the outline of the building. You can check this at any time by moving the frame slider at the bottom of the currently active window.

6.One final corner must now be positioned in order to correctly set the size of the box primitive. Because this corner is not visible in the first frame of the sequence, move to a later frame where you can see this corner, and drag it into position. Note that the colours of the small pin circles at each previous corner have changed from white to yellow. This indicates that these corners are not pinned in this frame, but have been pinned in other frames. Each corner may be pinned in as many frames as required, and ICARUS will try to satisfy each constraint that is given by each pin. When you place pins in multiple images, you may notice that the red outline of the primitive does not match your pin positions. This occurs because the constraints that the pins specify are inconsistent, and can happen for two reasons: either the pins do not correspond to a single point in space, or the calibration calculated by the Calibration module is incorrect. A project file containing a correctly positioned box primitive is included with the system (projects/reconstruction1.ipr).


7.Now that the first primitive is placed, we can model the side part of the building with another box. This second box will be created as a child of the first within the scene graph, so make sure the first primitive is selected by either clicking on its name in the Project Overview, or by clicking on it in the image window. The currently selected primitive has a thicker outline drawn around it, and other primitives are drawn with thinner yellow lines. Now click on the Box icon again at the bottom of the screen. This will create a new box, at the default position on top of its parent. This default position is specified by the hierarchy constraints shown in a toolbar on the left of the window.

8.To position the new box primitive on the side of the building, its hierarchy constraints must be changed. These constraints are specified in terms of the location of the primitive relative to its parent in each of the X, Y, and Z directions, shown by the red, green, and blue axis lines. Firstly, change the Z constraint so that the primitive sits on the outside of its parent in the direction of decreasing Z value by clicking on the Z Min Out constraint icon. Similarly, remove the constraint specifying that the primitive sits above its parent in the Y direction by de-selecting the Y Max Out icon. This should position the primitive as shown on the right.

9.You can now start to position this new box primitive. Select the bottom-left corner and move it into the position shown on the right. You will notice that the primitive is now moving in a vertical, rather than a horizontal plane. This is because the Z-Min-Out constraint is being satisfied as the primitive moves, rather than the Y-Max-Out constraint.


10.Now try to position two more corners of the new box primitive so that they are in the positions shown on the right. As these corners are moved and pinned, ICARUS attempts to satisfy both the hierarchy constraints and the image constraints.

11.Finally, the position of the new box primitive can be completed by positioning a fourth vertex (you may need to move to another frame to do this if the far corner of the block is not visible in the current frame of the sequence). You can compare your result to the projects/reconstruction2.ipr project file provided with this tutorial.

12.To improve this simple model of the building, we will position a horizontal polygon to represent the ground. The wire-frame ground-plane you see at the moment is for visualisation purposes only, and is not a real part of the model, so turn it off by selecting Display->Ground Plane from the Display menu. This new ground-plane polygon will not be created as a child of any of the previous primitives, so make sure that none are selected by either clicking on the Primitives entry in the Project Overview, or by clicking in a region of the image window that does not contain any primitives. Now click on the xz-polygon icon in the primitive toolbar. A quadrilateral should appear in the image window. In order to create a larger polygon, zoom out slightly, and then drag one of its corners towards the camera, as shown on the right.


13.Now select the opposite corner of the primitive, and drag it towards the building. This will change the scale of the primitive. Pin this corner into a position so that the new ground-plane extends beyond the building you've just modelled.

14.Finally, see if you can create a new box primitive to model the small wall on the right of the building. Make this primitive a child of the original building. To set its position, you will need to select hierarchy constraints so that the new box sits inside and at the bottom of its parent (Y-Min-In), and that in the Z direction it sits outside its parent in the direction of minimum Z (Z-Min-Out). A more complete model is included in the project file reconstruction3.ipr.

15.Once the model is completed, ICARUS can automatically extract texture maps for each primitive. Select the Textures->Extract Textures option from the Texture menu. This will show a dialog that allows you to specify the frames from which to extract the textures, as well as the method used to fill in any missing areas. Click the Smear Fill option, and then the Extract button. After a few minutes, texture extraction will complete.


16.To view the final textured model, make sure a primitive is selected (either in the Project Overview, or by clicking on one in the image window). Now select the Display->Textures option from the Render menu, and the Viewpoint button on the toolbar. You will now be able to view the model from different directions by clicking with the left, middle, or right mouse buttons in the image window. This allows you to rotate, zoom, and translate the viewpoint respectively (hint: turning off the background image can make for a clearer view). The model may also be saved in Inventor V2.0, VRML97, Maya or Lightwave LWO2 formats by selecting the Project->Export Model option from the Project menu.


5.2 Tutorial 2: Complex shapes


Not all scenes can be reconstructed using the simple primitives described in the last tutorial. Here, you can learn how to use the feature and mesh facilities in the reconstruction software to build models of more complex shapes. This is a much more time-consuming process, but we've tried to make it as easy as possible. Note that the geometry we'll reconstruct in this tutorial isn't really complex (it's the same as in Tutorial 1), but you'll get the idea of what to do.

1.Launch the reconstruction program, start a new project, and load the building2.isq sequence file. Select Display->Features from the Display menu. This will display the set of features that were calculated during the calibration phase of the process. Note that these features don't necessarily correspond with useful features in the model (like corners of objects etc.). We've provided these features just in case you want to use them, but for this example, we'll create some more that will give better definition to the geometry. Select the Mesh->Clear Features option from the Mesh menu to delete all the features, and then put the interface into feature mode by selecting the cross-hairs icon from the toolbar.

2.Before reconstructing any geometry, we need to create some features. These will be used as the vertices of a triangular mesh. Select Mesh->New Feature from the Mesh menu, and open the Features list in the Project Overview. You should see a feature called Feature1. Make sure this feature is highlighted in the Project Overview, and then click with the left mouse button to position it at the corner of the building. Feature positioning works in exactly the same way as for user-features (hold the Shift key to get a zoomed-in view). You should see a feature appear in the image, with a blue line drawn through its centre. This line is a guide-line that should always pass through the centre of the scene-point that this feature is representing. You can see this by moving through the frames of the sequence, and checking that the blue line always passes through the corner of the building.

3.These guide-lines are used to give you hints as to where else to mark this feature. If you can mark a feature in more than one image, then its 3D location can be calculated. Move to another frame in the sequence and click to place the feature in the correct position. The accuracy of calculating the feature position increases with the distance between the camera centres, and also with an increasing number of feature marks. Move through the sequence to see if the feature has been positioned correctly at the top of the building (see the projects/mesh1.ipr project file). Repeat the process, generating 3 more features at the other corners of the side face of the building. Pressing the right mouse-button over a feature will display the feature popup, allowing you to hide, delete, and rename features.
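Why does marking a feature in two or more frames pin down its 3D location? Each mark defines a viewing ray from that frame's camera centre, and the 3D point is where the rays (nearly) meet. A minimal midpoint-method sketch of that triangulation (ICARUS's actual solver is not documented and is likely more robust):

```python
# Sketch of two-view triangulation by the midpoint method (illustrative;
# not ICARUS's actual solver): find the closest points on the two viewing
# rays and return the midpoint of the segment joining them.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(c1, d1, c2, d2):
    """c1, c2: camera centres; d1, d2: ray directions (need not be unit).
    Returns the midpoint of the shortest segment between the rays."""
    r = [c2[i] - c1[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only for parallel rays
    s = (c * e - b * f) / denom
    t = (b * e - a * f) / denom
    p1 = [c1[i] + s * d1[i] for i in range(3)]
    p2 = [c2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2 for i in range(3)]
```

This also explains the accuracy remark above: the further apart the camera centres, the wider the angle between the rays, and the less a small marking error moves their intersection.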


4.Now that we've created some features, we can connect them into a triangular mesh. Select Mesh->New Mesh from the Mesh menu. This will create a mesh called Mesh1. Highlight this mesh in the Project Overview by clicking on it, and then move your mouse pointer over one of the features in the image window. You will see that a small blue circle appears around it. Clicking on a feature will select it and turn the circle yellow. Once three features have been selected, they are connected together to form a triangular face in the mesh. Create a face and then move through the sequence to see that the triangle accurately tracks the side of the building (see projects/mesh2.ipr). At any time, you can right-click on the triangle to display a popup menu for this mesh.

5.Connect three more vertices in the mesh to form another triangle as shown on the right (also see projects/mesh3.ipr). You can also click the Viewpoint button on the toolbar and view the model from a different angle. Meshes may be textured, and saved out from ICARUS as normal. Try to create more features and build a more complete model of the building. You can also mix meshes with normal primitives if you wish.


6. Quick Reference
6.1 Distortion module
6.1.1 Menu options
6.1.1.1 Project menu
Project->New: start a new project
Project->Open: open an existing project
Project->Save: save current project
Project->Save As: save current project under a different name
Project->Load Lens File: load a previously estimated sequence of distortion parameters
Project->Save Lens File: save the current sequence of distortion parameters
Project->Import Image: import a static image
Project->Export Image: save a new image, after removing or adding lens distortion
Project->Import Movie: import a movie file
Project->Export Movie: save a new movie file, after removing or adding lens distortion
Project->Preferences: display the preferences dialog
Project->Quit: quit ICARUS.

6.1.1.2 View menu


View->Play Speed: set the playback speed for the current movie.
View->Distortion: toggle the display of image distortion on/off
View->Inverted: when set, will show how an image is distorted, rather than undistorted.
View->Darken: reduce the intensity of the background image
View->Widescreen: display the image using square pixels
View->Feature: put the interface into feature mode
View->Pan: put the interface into panning mode
View->Zoom In: put the interface into zoom in mode
View->Zoom Out: put the interface into zoom out mode
View->New Window: create a new image window
View->Tile Windows: arrange the windows neatly on the screen
View->Fit To Window: fix the current pan/zoom so that the background image fills the current image window
View->Actual Size: display the image/movie frame at its original size.

6.1.1.3 Distortion menu


Distortion->Set Start Frame: set the start frame for calculating lens distortion.
Distortion->Set End Frame: set the end frame for calculating lens distortion.
Distortion->Clear Lines: delete all distortion lines
Distortion->Parameters: show the distortion properties dialog

6.1.1.4 Help menu


Help->Log Output: store stdout in a log file
Help->Show Log: show the current log file in a separate window
Help->About: displays the version number and license information
Help->What's This?: put the system into help mode.

6.1.2 Useful keys


Left/Right cursor keys: move forwards/backwards by one frame through a sequence (make sure the window is selected by clicking in it with the left mouse-button if this doesn't appear to work).
Up/Down cursor keys: move to the start/end frame of the sequence.
Space bar: start/stop movie playback.


6.1.3 Distortion parameter dialog


Distortion Centre: calculate the centre of distortion.
Low Order Radial: calculate the low order component of radial distortion.
High Order Radial: calculate the high order component of radial distortion.
Tangential Decentering: calculate the tangential decentering parameters.
Uniform Parameters: if edges are marked in multiple frames and this option is set, then a single set of parameters is estimated for the entire sequence. If this option is not set, then an independent set of parameters is calculated for each frame.
Solve: calculate the specified parameters (above), given the current set of edges marked in the images.
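These parameter groups correspond naturally to the terms of the standard Brown-Conrady lens model, although that correspondence is an assumption on our part — ICARUS does not state its exact formulation. A sketch of applying such a model to an undistorted point:

```python
# Sketch of the Brown-Conrady lens model that the dialog's parameters map
# onto (assumed correspondence; ICARUS's exact formulation is not stated):
# k1 = low-order radial, k2 = high-order radial, (p1, p2) = tangential
# decentering coefficients, (cx, cy) = centre of distortion.

def distort(x, y, cx, cy, k1, k2, p1, p2):
    """Apply lens distortion to an undistorted image point (x, y)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    tx = 2.0 * p1 * dx * dy + p2 * (r2 + 2.0 * dx * dx)
    ty = p1 * (r2 + 2.0 * dy * dy) + 2.0 * p2 * dx * dy
    return cx + dx * radial + tx, cy + dy * radial + ty
```

With all coefficients zero the point is unchanged; a positive k1 pushes points radially outwards (barrel correction works in the opposite direction).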

6.1.4 Preferences dialog


Standard Import Dialog: when checked, a platform-specific file dialog box will be used to import images and movies.
Resample OpenGL Textures: when checked, images exceeding the specified size (see below) will be resampled when rendered to the screen, rather than being tiled.
Max Texture Size: maximum size of the texture maps used to render images to the screen.
Cache Size (Mb): size of the video frame cache in Mb.
Customize: change the keyboard accelerators or colours.

6.2 Calibration module


6.2.1 Menu options
6.2.1.1 Project menu
Project->New: start a new project
Project->Open: open an existing project
Project->Save: save current project
Project->Save As: save current project under a different name
Project->Import Image: import a static image
Project->Import Movie: import a movie file
Project->Import 2D Tracking: load 2D tracking information as user-features (see below for the file format)
Project->Import Matte: load an image or movie to use as a matte.
Project->Export 2D Tracking: save the 2D tracking data in a simple, human-readable format:
  <feature name> <number of frames>
  <frame num> <x> <y>
  ...
Project->Export 3D Motion: save calibration data in a format compatible with the reconstruction component of the ICARUS system. Supported formats are ICARUS Sequence file (.isq), Lightwave LWS, Maya MEL script, Softimage XSI, Side FX Houdini CLIP and GEO files, Combustion 2 and Flame/Inferno. Also, an output is available in human-readable format:
  Frame: <frame>
  translation: <x> <y> <z>
  rot_matrix: <3x3 rotation matrix>
  rot_quaternion: <equivalent quaternion>
  focal_length: <in millimeters>
  field_of_view: <horizontal degrees> <vertical degrees>
  pixel_aspect_ratio: <value>
  skew: <value>
  principal_point: <x pixel> <y pixel>
  ...
  <num features>
  <num> <name> <x> <y> <z>
  ...
Project->Export Image->With Tracking Overlay: saves the current image (or a single frame from the current movie) with the feature tracks etc. overlayed.
Project->Export Image->Motion Filter: filter out moving objects from the current frame and save it as a single image.
Project->Export Image->Image Mosaic: merges all movie frames into a single image mosaic and saves it as an image file.
Project->Export Image->Optical Flow: calculate the optical flow from the current frame into the next and save it in the FLO file format (see Chapter 2).
Project->Export Movie->With Tracking Overlay: as above, but saves an entire movie rather than the current frame.
Project->Export Movie->Motion Filter: filter out moving objects from the current pan/tilt/zoom movie sequence.
Project->Export Movie->Stabilize: stabilizes the camera motion (the 2D version requires a user-feature to be highlighted in the Project Overview, the 3D version requires the camera motion to be calibrated).
Project->Export Movie->Optical Flow: as above, but saves a FLO image for each frame of the movie.
Project->Preferences: display the preferences dialog
Project->Quit: quit ICARUS
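The 2D tracking export above is a simple whitespace-delimited text format. A minimal reader, assuming one "<name> <count>" header followed by <count> "<frame> <x> <y>" lines per feature (the exact tokenization is inferred from the snippet, so treat this as a starting point rather than a reference parser — feature names containing spaces, for instance, would break it):

```python
# Minimal reader for the human-readable 2D tracking export. The layout is
# an inference from the format snippet (header line "<name> <count>", then
# <count> lines of "<frame> <x> <y>" per feature), not a reference parser.

def read_2d_tracking(text):
    """Return {feature_name: {frame_number: (x, y)}}."""
    lines = [ln.split() for ln in text.splitlines() if ln.strip()]
    features, i = {}, 0
    while i < len(lines):
        name, count = lines[i][0], int(lines[i][1])
        track = {}
        for j in range(1, count + 1):
            frame, x, y = lines[i + j]
            track[int(frame)] = (float(x), float(y))
        features[name] = track
        i += count + 1
    return features
```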

6.2.1.2 View menu


View->Play Speed: set the playback speed for the current movie.
View->Ground plane: toggle the ground-plane on/off for a calibrated scene
View->Restricted Camera: render the current window with a restricted camera, where skew is assumed to be zero, pixel aspect ratio is constant, and the principal point is fixed in the centre of the image.
View->Axis Lines: toggle drawing the coordinate-frame axis lines
View->User Features: toggle drawing user-features
View->Auto Features: toggle drawing auto-features
View->3D Markers: toggle drawing the 3D feature markers
View->Tracks: toggle drawing feature track locations
View->Mattes: toggle the display of mattes
View->Darken: reduce the intensity of the background image
View->Widescreen: display the image using square pixels
View->Feature: put the interface into feature mode
View->Pan: put the interface into panning mode
View->Zoom In: put the interface into zoom in mode
View->Zoom Out: put the interface into zoom out mode
View->New Window: create a new image window
View->New Viewer: create a new viewer window
View->Tile Windows: arrange the windows neatly on the screen
View->Fit To Window: fix the current pan/zoom so that the background image fills the current image window
View->Actual Size: display the image/movie frame at its original size.

6.2.1.3 Tracking menu


Tracking->Set Start Frame: set the start of the tracking region.
Tracking->Set End Frame: set the end of the tracking region.
Tracking->Clear User Features: clear all user-features
Tracking->Clear Auto Features: clear all auto-features
Tracking->Tracking Parameters: display the tracking parameters dialog
Tracking->New User Feature: create a new user-feature
Tracking->Track UF Forwards: track a user-feature forwards
Tracking->Track UF Backwards: track a user-feature backwards
Tracking->New Matte: create a new matte to restrict feature tracking
Tracking->Auto Track: automatically select and track features throughout the sequence

6.2.1.4 Camera menu


Camera->Camera Parameters: display the camera and lens parameter dialog
Camera->Track and Calibrate: perform auto-feature tracking and calibration in a single step.
Camera->Estimate Focal Length: estimate the focal length at the current frame, using coordinate axis lines.


Camera->Calibrate: calibrate the current image/video sequence
Camera->Bundle Adjust: apply a metric bundle adjustment algorithm to a calibrated sequence
Camera->Orient Scene: orient a scene using either user-specified coordinate axis lines and origin points, or the currently selected set of user/auto features.
Camera->Delete Calibration: delete any camera motion and calibration data.

6.2.1.5 Help menu


Help->Log Output: store stdout in a log file
Help->Show Log: show the current log file in a separate window
Help->About: display the version number and license information
Help->What's This?: put the system into help mode.

6.2.2 Useful keys


Left/Right cursor keys: move forwards/backwards by one frame through a sequence (if this doesn't appear to work, make sure the window is selected by clicking in it with the left mouse-button).
Up/Down cursor keys: move to the next/previous keyframe for a video sequence, or the next/previous image if multiple images are loaded.
Escape key: whilst tracking user-features or matte boundaries, hitting the escape key at any time will terminate the track.
Space bar: start/stop movie playback.

6.2.3 Calibrate dialog


Sequential: use a simple (but fast) sequential motion reconstruction algorithm.
Final Sequential: use the sequential motion algorithm for the final stages of the reconstruction process.
Merging Passes: perform N pair-wise merging passes. After N passes, a sparse merging algorithm is used (or the sequential algorithm if Final Sequential is checked). See the ISMAR 2002 paper for a more detailed description of the merging algorithms. If you're having problems getting an accurate calibration for long sequences, try increasing this number.
Percentage Outliers: the estimated percentage of outlying features.

6.2.4 Auto-Feature popup menu


Right-mouse click on an auto-feature displays the following options:
Terminate forwards: remove this feature from here onwards (higher frame numbers)
Terminate backwards: remove this feature from here backwards (lower frame numbers)
Remove from frame: remove this feature from this frame
Delete selected: delete all selected auto-features
Delete: delete this feature altogether
Rename: show a "rename" dialog allowing the user to change the name of the feature
Select: toggle the selection of this feature (it will be drawn in blue when selected). Selected features can be used to orient the world coordinate system.
Tag For Export: mark this feature as one of those that will be saved with the calibration data, if the "Autofeatures:tagged" option is selected in the export dialog.
Set As Origin: set this feature location as the origin of the coordinate system.
3D Marker->Display Cube: draw a 3D cube at this feature.
3D Marker->Display Cone: draw a 3D cone at this feature.
3D Marker->Display Cylinder: draw a 3D cylinder at this feature.
3D Marker->Increase Size: increase the size of all 3D markers.
3D Marker->Decrease Size: decrease the size of all 3D markers.


6.2.5 User-feature popup menu


Right-mouse click on a user-feature displays the following options:
Track forwards: track this feature forwards through the sequence.
Track backwards: track this feature backwards towards the start of the sequence.
Terminate forwards: remove this feature from here onwards (higher frame numbers)
Terminate backwards: remove this feature from here backwards (lower frame numbers)
Remove from frame: remove this feature from this frame
Delete selected: delete all selected user-features
Delete: delete this feature altogether
Rename: show a "rename" dialog allowing the user to change the name of the feature
Select: toggle the selection of this feature (it will be drawn in blue when selected). Selected features can be used to orient the world coordinate system.
Solve: if you create and track a user-feature after a calibration has been produced, you can select this option to solve for its position in space. This allows you to add extra features into an existing calibration.
Set As Origin: set this feature location as the origin of the coordinate system.
3D Marker->Display Cube: draw a 3D cube at this feature.
3D Marker->Display Cone: draw a 3D cone at this feature.
3D Marker->Display Cylinder: draw a 3D cylinder at this feature.
3D Marker->Increase Size: increase the size of all 3D markers.
3D Marker->Decrease Size: decrease the size of all 3D markers.

6.2.6 Matte popup menu


Right-mouse click inside a matte displays the following options:
Insert key point: insert a new key-point in the matte boundary at the position nearest to the mouse click.
Display Handles: toggle display of the inside/outside matte boundary handles used for pulling mattes from image data.
Pull matte: use the inner and outer boundaries to estimate a more accurate pixel matte.
Clear matte: clear the pixel matte data.
Export matte: save the pixel matte as an image/movie file.
Track matte forwards: track the pixel matte forwards through the sequence.
Track matte backwards: as above, but tracks backwards.
Previous keyframe: move to the last frame containing keyed boundary points.
Next keyframe: move to the next frame containing keyed boundary points.
Remove keyframe: un-key all boundary points in this frame.
Move: move the entire matte boundary.
Scale: scale the entire matte boundary around its centroid.
Invert: invert the matte, so that regions outside are ignored rather than regions inside.
Colour key: display the colour-keying interface for matte creation.
Remove from frame: remove the matte from this frame, and from frames above/below this point if no later/earlier keyframes have been created.
Delete: delete the entire matte.

6.2.7 Matte boundary popup menu


Right-mouse click on a matte boundary point displays the following options:
Track point forwards: track this boundary point forwards through the sequence. This behaves much like tracking user-features, and can also be halted using the escape key.
Track point backwards: as above, but tracks backwards.
Previous keyframe: move to the last frame containing a key for this boundary point.
Next keyframe: move to the next frame containing a key for this boundary point.
Remove keyframe: un-key this boundary point in this frame.
Delete point: delete this boundary point from all frames.


6.2.8 Tracking parameters dialog


Num Auto-Features: the target number of auto-features that should be present in each frame (only a rough guideline).
Min Length: the minimum number of frames that a feature must be tracked in order for it to be kept. Features tracked for fewer frames are discarded (units: frames).
Window Size: the size of the window that represents each feature (units: pixels).
Max Search Range: the maximum distance a feature can travel before it is considered to be tracking incorrectly (units: pixels) (not currently user-selectable).
Max Residual: the residual level above which a feature is considered to have tracked incorrectly (units depend on the similarity score - see below).
Pick Separation: the minimum distance between selected features (units: pixels).
Min Separation: if two features move to within this distance of each other, their tracks are terminated (units: pixels).
Pick Threshold: the threshold used to determine if an individual pixel contains an "interesting" feature (units: somewhat involved - see the research literature).
Min Displacement: during iterative tracking, if a feature moves by less than this distance, the iterative transformation estimate is considered to have converged (units: pixels).
Similarity: the measure used to compare features to decide if they are the same. "NCC Score" uses the Normalized Cross-Correlation measure; "RMS Error" uses the root-mean-squared pixel difference.
Illumination: tracking accounts for contrast/brightness changes.
Affine: tracking accounts for an affine window deformation.
Colour: tracking accounts for image colour, rather than just luminance.
Back-Track: features are tracked backwards during a second tracking pass.
Guided: estimates of camera motion are used to guide feature tracking.
Motions: attempt to track and calibrate independent rigid-body motions as well as the overall camera motion. This feature is currently experimental and requires further work.
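To make the two Similarity measures concrete, here is a sketch of both scores applied to flattened lists of pixel values. These are the textbook definitions, not ICARUS's actual code (the real tracker compares 2D windows), but they illustrate why NCC tolerates brightness changes where RMS does not.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel windows.
    1.0 means identical up to brightness/contrast; near 0 means unrelated."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def rms(a, b):
    """Root-mean-squared pixel difference; 0 means identical windows."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

patch = [10, 20, 30, 40]
brighter = [p + 50 for p in patch]   # the same feature in a brighter image
print(ncc(patch, brighter))          # 1.0  (NCC ignores the offset)
print(rms(patch, brighter))          # 50.0 (RMS penalises it heavily)
```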

6.2.9 Bundle adjustment dialog


Constrain Focal: try to constrain the focal length to a constant value.
Constrain Aspect: try to constrain the aspect ratio to the value specified in the Camera dialog.
Constrain Skew: try to constrain the skew to be zero.
Constrain Principal: try to constrain the principal point to the values specified in the Camera dialog.
Weights: the relative weights of each of the constraints. Larger numbers mean that more consideration is given to the constraints.
Max Iterations: the maximum number of iterations that the bundle adjustment will run for.
Convergence: the residual change threshold that signifies convergence (see the research literature).

6.2.10 Camera parameters dialog


Camera Parameters->Motion: specify the camera motion (free motion, pan/tilt/zoom).
Camera Parameters->Focal Length: specify a variable/constant/fixed focal length, and the units it is measured in (pixels or millimeters).
Camera Parameters->Aperture Height: specify the camera's aperture height, which determines how focal lengths in pixels are translated into millimeters.
Camera Parameters->Pixel Aspect: specify the aspect ratio of pixels in the image.
Camera Parameters->Principal Point: specify a variable/constant/fixed principal point.
Camera Parameters->Re-calibrate: once a sequence has been calibrated, you may want to change the parameters of the camera lens. If you do this, clicking the "Re-calibrate" button will adjust the calibration data to account for these changes.
Lens Distortion->Distortion Model: specify the lens distortion model (None or user-specified).
Lens Distortion->Distortion Centre: the centre of lens distortion, as a percentage of the frame width and height.
Lens Distortion->Low Order Radial: the low-order coefficient for radial lens distortion.
Lens Distortion->High Order Radial: the high-order coefficient for radial lens distortion.
Lens Distortion->Tangential: the low- and high-order coefficients for tangential lens distortion.
Lens Distortion->Frame: change the frame number for which the lens distortion parameters are displayed.
Lens Distortion->Load: load user-defined lens distortion parameters that were saved by the distortion module.


Format->Format: specify NTSC or PAL frame rate.
Format->Frame Rate: specify an alternative frame rate.
Format->Interlacing: specify the method used to correct for any interlacing in a video sequence. None makes no changes; Use Upper Field uses only the upper field of each frame; Use Lower Field uses the lower; Average Fields takes an average of both upper and lower fields; Independent treats each field as a separate frame, doubling both the length of the sequence and the frame rate.
Format->Field Dominance: for the Independent interlacing mode, specify whether the upper or lower field occurs first.
Format->3:2 Pulldown: for NTSC sequences, specify the method for removing the 3:2 pulldown to translate back to 24fps footage.
Format->Analyze Pulldown: attempt to automatically analyze the NTSC footage to determine the method for removing 3:2 pulldown. Pulldown analysis can be stopped at any point by clicking the Stop button in the progress dialog.

6.2.11 Preferences dialog


Max Error Residual: the maximum allowable error (in pixels) before a feature is considered outlying (i.e. a bad feature track).
Feature Style: specify whether features are drawn as boxes (default), crosses or circles.
Software Off-Screen Buffer: when checked, forces the use of a software off-screen rendering buffer.
Standard Import Dialog: when checked, a standard platform-specific file dialog will be used when importing images and movies.
Use Z Up: when checked, the Z axis will point upwards. When unchecked, the Y axis points upwards.
Resample Textures: when checked, images exceeding the specified size (see below) will be resampled when rendered to the screen, rather than being tiled.
Max Texture Size: maximum size of texture maps used to render images to the screen.
Cache Size (Mb): size of the video frame cache in Mb.
Customize: change the default keyboard accelerators or image colours.

6.3 Reconstruction module


6.3.1 Menu Options
6.3.1.1 Project menu
Project->New: start a new project
Project->Open: open an existing project
Project->Save: save current project
Project->Save As: save current project under a different name
Project->Import ICARUS Sequence: import an ICARUS Sequence file (.isq)
Project->Export Image: save an image of the current active window.
Project->Export Movie: save a movie corresponding to the current active window.
Project->Export Model: save the reconstructed geometry in a variety of formats: ICARUS proprietary format, Lightwave LWO2, Inventor v2.0 ASCII, VRML97, Maya MEL script and MGF.
Project->Preferences: display the preferences dialog
Project->Quit: quit ICARUS.

6.3.1.2 View menu


View->Play Speed: set the playback speed for the current movie.
View->Widescreen: display the image using square pixels
View->Feature: put the interface into feature mode
View->Pan: put the interface into panning mode
View->Zoom In: put the interface into zoom in mode
View->Zoom Out: put the interface into zoom out mode
View->Viewpoint: put the interface into viewpoint mode
View->New Window: create a new image window.
View->Tile Windows: arrange the windows neatly on the screen
View->Fit To Window: fix the current pan/zoom so that the background image fills the current image window
View->Actual Size: display the image/movie frame at its original size.


6.3.1.3 Display menu


Display->Background Image: toggle the background image
Display->Features: toggle the feature points
Display->Darken: reduce the intensity of the background image
Display->Ground plane: toggle the ground-plane
Display->Primitives: toggle the primitives on/off
Display->Wireframe: display primitives in wireframe
Display->Shaded: display primitives as shaded
Display->Textures: display textures
Display->Two Sided: toggle two-sided rendering
Display->Shadows: toggle simple shadows

6.3.1.4 Primitive menu


Primitive->Cut: cut this primitive and its children into the clipboard
Primitive->Copy This: copy the currently selected primitive (but ignore its children)
Primitive->Copy All: copy the currently selected primitive and its children
Primitive->Paste: paste a primitive from the clipboard as a child of the currently selected primitive (or as a child of the root primitive if none is selected).
Primitive->Delete: delete the currently selected primitive and its children
Primitive->Load Primitive: load a primitive file. These are stored in a very simple ASCII text format, with nine floating-point numbers per line, corresponding to the three vertices of a triangle.
Primitive->Rename: rename the currently selected primitive
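Based on the format described under Load Primitive (nine floats per line, one triangle per line), a minimal reader might look like the sketch below. The handling of blank lines is an assumption; check a real primitive file before relying on it.

```python
# Hypothetical reader for the ICARUS primitive file format: each line holds
# nine floats, i.e. the x y z coordinates of a triangle's three vertices.

def parse_primitive(text):
    """Return a list of triangles, each a tuple of three (x, y, z) vertices."""
    triangles = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines (assumed harmless)
        v = [float(t) for t in line.split()]
        if len(v) != 9:
            raise ValueError("expected nine floats per line, got %d" % len(v))
        triangles.append(((v[0], v[1], v[2]),
                          (v[3], v[4], v[5]),
                          (v[6], v[7], v[8])))
    return triangles

# A single unit right-triangle in the XY plane:
tris = parse_primitive("0 0 0  1 0 0  0 1 0\n")
print(tris)  # [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
```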

6.3.1.5 Mesh menu


Mesh->New Mesh: create a new mesh
Mesh->Delete Mesh: delete the currently selected mesh
Mesh->New Feature: create a new feature
Mesh->Clear Features: delete all the features
Mesh->Rename Mesh: rename the currently selected mesh.

6.3.1.6 Texture menu


Texture->Clear Textures: clear all textures
Texture->Pull Textures: display the texture dialog to automatically extract textures from the image/video sequence.

6.3.1.7 Help menu


Help->Log Output: store stdout in a log file
Help->Show Log: show the current log file in a separate window
Help->About: display the version number and license information
Help->What's This?: put the system into help mode.

6.3.2 Useful keys


Left/Right cursor keys: move one frame forwards/backwards through the current video sequence/set of images.
Shift+Left/Right cursor keys: move forwards/backwards by ten frames through a sequence.
Space bar: play the entire video sequence once.

6.3.3 The constraint toolbar


The constraint bar is displayed down the left-hand side of the window by default. The widgets are grouped into five sets.

When this constraint is set, objects are allowed to be scaled independently in their X, Y and Z directions. Removing this constraint forces objects to be scaled equally in all directions.
When this constraint is set, objects are allowed to rotate around any of their axes (see below).


When this icon is set, objects are allowed to rotate around their X axis.
This icon allows objects to rotate around their Y axis.
Setting this icon allows objects to rotate around their Z axis.

The next three sets of constraints operate on the X, Y and Z directions of a primitive. These constraints operate relative to the coordinate system of a primitive's parent in the Project Overview:

Centre: constrains a primitive's X/Y/Z position so that the centre of its bounding box is the same as the centre of its parent's.
Min Out: constrains a primitive's X/Y/Z position so that it sits outside its parent's bounding box in the negative X/Y/Z direction.
Min In: constrains a primitive's X/Y/Z position so that it sits inside its parent's bounding box in the negative X/Y/Z direction.
Max In: constrains a primitive's X/Y/Z position so that it sits inside its parent's bounding box in the positive X/Y/Z direction.
Max Out: constrains a primitive's X/Y/Z position so that it sits outside its parent's bounding box in the positive X/Y/Z direction.

6.3.4 Primitive popup menu


Right mouse-click on a primitive (when in feature mode) displays the following options:
Invert: invert the normals of the selected primitive
Rename: rename the selected primitive
Hidden: toggle the selected primitive as hidden/not hidden
Occluder: toggle the selected primitive as an occluder. When in shaded/texture display mode, occluders are only drawn into the Z-buffer.
Edit->Cut: cut the selected primitive and its children into the clipboard
Edit->Copy This: copy the selected primitive into the clipboard, but ignore its children
Edit->Copy All: copy the selected primitive and its children into the clipboard
Edit->Paste: paste a primitive from the clipboard into the scene, as a child of the selected primitive
Edit->Delete Face: delete the selected face from the primitive
Edit->Delete All: delete the selected primitive.
Rotate->X Axis: rotate the selected primitive by 90 degrees around its X axis
Rotate->Y Axis: as above, but for the Y axis
Rotate->Z Axis: as above, but for the Z axis
Texture->Set Resolution: specify the size of the texture maps for this primitive
Texture->Clear Facet Texture: delete the texture associated with this primitive facet
Texture->Clear Object Texture: delete the texture associated with this entire primitive
Texture->Pull Facet Texture: extract a texture map for the current facet from this frame.
Texture->Pull Object Texture: extract textures for the entire primitive from this frame.

6.3.5 Mesh popup menu


Right-click on a mesh when in feature mode displays the following options:
Invert Facet: invert the selected facet of the current mesh
Delete Facet: delete the selected facet
Delete Mesh: delete the entire mesh


Hidden: toggle the selected mesh as hidden/not hidden
Occluder: toggle the selected mesh as an occluder. When in shaded/texture display mode, occluders are only drawn into the Z-buffer.
Rename: rename the selected mesh
Texture->Set Resolution: specify the size of the texture maps for this mesh
Texture->Clear Facet Texture: delete the texture associated with this mesh face
Texture->Clear Mesh Texture: delete the texture associated with this entire mesh
Texture->Pull Facet Texture: extract a texture map for the current face from this frame.
Texture->Pull Mesh Texture: extract textures for the entire mesh from this frame.

6.3.6 Feature popup menu


Right-click on a feature when in feature mode displays the following options:
Next Pin: move to the next frame where a feature key-point has been marked
Previous Pin: move to the previous frame where a feature key-point has been marked
Clear Pins: clear marks in all frames.
Hidden: toggle the selected feature as hidden/not hidden
Delete: delete the selected feature and any mesh triangles that are connected to it.
Rename: rename the feature

6.3.7 Pull texture dialog


This frame only: only extract textures from the current frame.
Multiple frames: extract textures from multiple frames, determined by the following options:
From: the start frame
To: the end frame
Every: the number of frames to use between From and To
No hole fill: do not attempt to fill in any holes.
Uniform fill: estimate the average texture colour, and fill in holes with this colour.
Smear fill: interpolate between nearby regions to fill in holes.
Synthesis fill: currently disabled (alpha).

6.3.8 Preferences dialog


Resample OpenGL Textures: when checked, images exceeding the specified size (see below) will be resampled when rendered to the screen, rather than being tiled.
Max Texture Size: maximum size of texture maps used to render images to the screen.
Cache Size (Mb): size of the video frame cache in Mb.
Customize: change the keyboard accelerators or colours.


7. Notes
7.1 Inliers and outliers
Looking at the log file that is generated during calibration, you might be wondering what inliers and outliers are. After ICARUS has worked out the location of the feature points in 3D space and the position of the cameras, it can project each 3D point into each image and see if the projection agrees with the feature track in that image. If a feature projects to the same location as its track, it is called an "inlier"; otherwise, it is an "outlier". If all feature projections agree with all feature tracks then there are no outliers. The level of agreement required is controlled by the residual error, and the default of 1 pixel means that a feature must project to within 1 pixel of its track to be considered an inlier.

The "percentage outliers" number in the calibration dialog is used to specify roughly how many outliers are expected. If fewer outliers are expected, then some parts of the calibration algorithm can be speeded up. Reducing this number will not always speed things up, however, as the algorithms are already smart enough to speed themselves up when they see that very few features are outlying. The main use of this number is that it can be increased if the calibration is going badly, which makes ICARUS spend more time calculating some of the parameters it requires.
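The inlier test can be sketched as follows. The pinhole camera here (identity rotation, camera at the origin, principal point at (0, 0)) is a deliberate simplification of the full calibrated camera that ICARUS uses, but the residual comparison is the same idea.

```python
# Sketch of the inlier/outlier test: project a solved 3D point with a
# simplified pinhole camera and compare against the tracked 2D position.

def project(point3d, focal_length):
    """Pinhole projection of (X, Y, Z) onto the image plane."""
    X, Y, Z = point3d
    return (focal_length * X / Z, focal_length * Y / Z)

def is_inlier(point3d, track2d, focal_length, max_residual=1.0):
    """True if the reprojection lands within max_residual pixels of the track."""
    px, py = project(point3d, focal_length)
    tx, ty = track2d
    residual = ((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
    return residual <= max_residual

# A point at depth 10 with f = 1000 projects to (100, 200):
print(is_inlier((1.0, 2.0, 10.0), (100.3, 200.4), 1000.0))  # True  (0.5 px off)
print(is_inlier((1.0, 2.0, 10.0), (103.0, 200.0), 1000.0))  # False (3 px off)
```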

7.2 Sequential versus non-sequential calibration


In the calibration dialog there are options available for performing sequential and non-sequential calibrations. The sequential calibration algorithm can be much faster than the non-sequential one, but may introduce further errors into the solution. The non-sequential algorithm (the default) attempts to distribute the error throughout the entire sequence. For many situations, however, the sequential algorithm can provide sufficiently accurate calibrations much more quickly. In certain situations, the sequential algorithm can also provide a better calibration for shots that the non-sequential algorithm has problems with. If your calibration fails, we recommend that you try both sequential and non-sequential methods.

7.3 Colour-keying mattes


Tracking mattes can be specified using a variety of methods, including colour-key information. Right-clicking on a matte and selecting Colour Key will display the colour key dialog (see below). In this section, we'll explain how to use this feature to create tracking mattes.

The buttons on the bottom row of the dialog allow you to create and delete colour ranges. The current list of colour ranges is given in the drop-box at the top-left. Each colour range can be used to mask out a particular set of colours. For example, if you've got a sequence that has been shot against a green-screen with tracking markers, you might want


to quickly mask out everything except the screen and markers. This could be done with two different colour ranges: one for the green screen and one for the tracking markers. You can do this very easily by first creating a new colour range using the New button. Now click the Pick button. This allows you to pick up colours from your image and store them in the region. Holding the left mouse-button and dragging the dropper-cursor over your image will pick up the colours and create the initial region (see below). After colours have been picked up, their ranges are drawn within the three colour channels. By default, colour ranges operate in HLS colour space, but by using the drop-box in the top-right corner of the dialog you can change this to YUV, YIQ or even RGB.

You will probably notice that when picking up colours it is very difficult to capture all the colours necessary for an accurate mask. You can adjust the colour ranges by hand by dragging the left and right-hand vertical lines delimiting the range in each channel. In the example below, the H, L and S colour ranges have all been extended until the matte covers the background green-screen.

You can create more than one colour range using the New button, and once you have masked out the background colour correctly, right-clicking on the matte name in the Project Overview and selecting Invert will give you a correctly masked image, as shown on the right.
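A single colour range behaves roughly like the sketch below: a pixel is masked when all three of its channel values fall inside the picked range. This uses Python's colorsys for the HLS conversion; the real dialog's channel definitions and range handling may differ in detail, and the example range values are made up.

```python
import colorsys

def in_colour_range(rgb, ranges):
    """rgb: (r, g, b) each in 0..1.
    ranges: ((h_lo, h_hi), (l_lo, l_hi), (s_lo, s_hi)), all in 0..1.
    Returns True when every channel lies inside its range (pixel masked)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return all(lo <= value <= hi
               for value, (lo, hi) in zip((h, l, s), ranges))

# A hypothetical range picked from a green screen (hue around 1/3):
green_screen = ((0.25, 0.42), (0.2, 0.8), (0.3, 1.0))
print(in_colour_range((0.1, 0.8, 0.2), green_screen))  # True: masked out
print(in_colour_range((0.8, 0.2, 0.2), green_screen))  # False: kept
```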


7.4 Estimating focal lengths and orienting the scene using vanishing points
The accuracy of focal length estimation using vanishing points depends strongly on how well the edges are marked by the user. For each axis, ICARUS intersects the set of edges the user has identified in order to locate the vanishing point in the image that corresponds to that coordinate axis. The locations of two or more vanishing points in a single frame are then used to estimate the focal length of the camera for that frame. One or more vanishing points in a single frame can also be used to orient the coordinate system of the calibrated scene. Because there will always be a small amount of error in the location of these lines, the vanishing point that is identified will only be an estimate. Consider the two cases shown below: in the case on the left, the edges for one axis are almost parallel, and in the case on the right they are not.

Left: nearly parallel edges. Right: non-parallel edges.

The intersection point for the lines on the left is very far from the image, because the lines are almost parallel (in the limit, when the lines are exactly parallel, they intersect at infinity). Conversely, the intersection point for the lines on the right is much nearer the image. Because of this, errors in the placement of the edges are going to affect the location of the vanishing point in different ways: small pixel errors for the nearly-parallel edges are going to translate into large variations in the location of the vanishing point. Small pixel errors for the non-parallel edges, however, are going to have little relative effect on the accuracy of the vanishing point estimate.

This leads to rule number one for estimating focal lengths with vanishing points: try to avoid edges that are nearly parallel. Equally, an alternative way to increase the accuracy of the vanishing point is to increase the number of edges that are used to estimate its position. Rule two is therefore: use as many edges as possible.

If you can get estimates of the vanishing point for two or more orthogonal edges in a single frame, then ICARUS can estimate the focal length for that image. Orthogonal edges are those that are at 90 degrees to each other in the scene (e.g. the edges of a cube). For orienting the scene after calibration, however, at least one vanishing point is needed. The way in which the scene is oriented will depend on the number used, but for one edge, the coordinate system will be oriented so that the direction of the coordinate axis that has been chosen matches the direction calculated from the vanishing point. For example, if you mark out vertical edges in a frame, and then orient the scene, the Y (up) direction in your calibration will point upwards. The orientation of the horizontal plane around the Y axis will still be undetermined.
If you mark two or three orthogonal vanishing points (and because of the properties of orthogonal lines, three is the most you can get), then you will be able to orient the horizontal plane as well.
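For readers curious about the underlying geometry, the standard textbook relation between two vanishing points of orthogonal scene directions and the focal length can be sketched as follows. It assumes square pixels and the principal point at the image centre; ICARUS's actual computation may differ.

```python
import math

def focal_from_vanishing_points(v1, v2, principal):
    """Focal length in pixels from two vanishing points of orthogonal
    directions, using the relation f^2 = -(v1 - p) . (v2 - p), where p is
    the principal point (assumed here to be the image centre)."""
    cx, cy = principal
    dot = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if dot >= 0:
        raise ValueError("vanishing points not consistent with orthogonal axes")
    return math.sqrt(-dot)

# Two vanishing points either side of a 720x576 frame's centre (360, 288):
f = focal_from_vanishing_points((1360.0, 288.0), (-240.0, 288.0), (360.0, 288.0))
print(round(f, 1))  # 774.6
```

Note how a nearly-parallel edge set pushes its vanishing point far from the image, making the dot product (and hence the focal estimate) very sensitive to small placement errors, which is exactly rule number one above.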


7.5 Alternative methods for orienting the scene


When it is difficult to find suitable edges to orient the scene, there are several other ways in which you can orient the coordinate frame of the calibration:

7.5.1 Selecting features in a plane


First of all, you may use the calibrated positions of user or auto-features to do the job. To do this, you must first select at least three features that lie in the plane you wish to be the ground plane. This is done by right-clicking on a feature and choosing the Select option from the popup menu. After selecting three or more features, choose the Camera->Orient Scene menu option. This will attempt to fit a plane to the features you have selected, and rotate the scene so that this new plane becomes the ground plane (y = 0). One of the features will also be selected as the origin point. You may then alter the position of the origin point by right-clicking on another feature, and selecting Set As Origin. Alternatively, you may position origin markers (the Origin entry in the Project Overview) in two or more different frames of the sequence, and select Camera->Orient Scene again.
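The plane-fitting step can be illustrated for the minimal case of exactly three selected features: the plane normal is the cross product of two edge vectors. (ICARUS presumably uses a least-squares fit when more than three features are selected; this sketch handles only the three-point case.)

```python
import math

def plane_from_points(p1, p2, p3):
    """Plane through three 3D points. Returns (normal, d) with normal a unit
    vector such that normal . x = d for every point x on the plane."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Cross product u x v gives a vector perpendicular to the plane.
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    n = [c / length for c in n]
    d = sum(n[i] * p1[i] for i in range(3))
    return n, d

# Three features lying in the plane y = 2:
normal, d = plane_from_points((0, 2, 0), (1, 2, 0), (0, 2, 1))
print(normal, d)  # [0.0, -1.0, 0.0] -2.0 (normal sign depends on point order)
```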

7.5.2 Selecting two features in a line


If you are unable to find three or more features lying on your ground-plane, a second approach is to identify two user or auto-features that define the vertical direction in the scene. Select these two features as described above, then choose Camera->Orient Scene from the Camera menu. The calibration will be oriented so that the line joining the two features becomes vertical.
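The rotation that makes the marked direction vertical can be sketched with Rodrigues' formula. This is an illustrative sketch with a hypothetical function name, not ICARUS's internal code:

```python
import math

def rotation_to_vertical(a, b):
    # 3x3 rotation matrix mapping the direction from feature a to
    # feature b onto the +Y (up) axis, via Rodrigues' formula.
    v = [bi - ai for ai, bi in zip(a, b)]
    n = math.sqrt(sum(x * x for x in v))
    v = [x / n for x in v]
    k = (-v[2], 0.0, v[0])                      # axis = v x (0, 1, 0)
    s = math.sqrt(k[0] * k[0] + k[2] * k[2])    # sin(angle)
    c = v[1]                                    # cos(angle) = v . up
    if s < 1e-12:
        # Already parallel to Y (c = 1), or opposite (c = -1:
        # a 180-degree rotation about X flips it upright).
        return [[1, 0, 0], [0, c, 0], [0, 0, c]]
    kx, ky, kz = (x / s for x in k)
    K = [[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]]
    K2 = [[sum(K[i][m] * K[m][j] for m in range(3)) for j in range(3)]
          for i in range(3)]
    # R = I + sin(t) K + (1 - cos(t)) K^2
    return [[(1.0 if i == j else 0.0) + s * K[i][j] + (1 - c) * K2[i][j]
             for j in range(3)] for i in range(3)]
```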

7.5.3 Adjusting the orientation by hand


The final approach to orienting the coordinate frame is to adjust its position, scale, and orientation manually. This is achieved by selecting the Coordinate Frame entry in the Project Overview. The left, middle, and right mouse buttons can then be used to rotate, scale, and translate the ground-plane. The origin of the coordinate system may be set either by placing origin point markers and then selecting Camera->Orient Scene, or by right-clicking on a feature and selecting the Set As Origin option from the popup menu. We recommend setting an origin point before attempting to orient the coordinate system by hand. Also, make sure that the focal lengths of the camera are reasonable: orienting the scene whilst the camera has a severely wrong focal length estimate is very tricky. Translation is performed in directions parallel to the current image plane; holding Shift whilst translating will move the ground-plane in and out of the screen. Rotation may also be constrained to be around the X, Y, or Z axis by holding down the Shift, Control, or Alt key respectively.

7.5.4 Setting the scale of the calibration


The overall scale of a calibration can be set in two ways. The first is to highlight the Coordinate Frame entry in the Project Overview and then use the middle mouse-button as described above. If you wish to set an exact scale, use the following method instead: identify an edge in your scene whose length you want to fix, open up the Coordinate Frame entry in the Project Overview, and highlight Scale. Now draw a line over your edge in two or more different frames of the image sequence. Finally, right-click on the Scale entry and select Set Scale. This will display a popup box that allows you to enter the desired length of your edge.
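Conceptually, setting the scale amounts to a uniform rescale of every reconstructed position so that the marked edge takes on the length you typed in. A minimal sketch (hypothetical function names; in practice ICARUS estimates the edge from the lines you draw in two or more frames, and rescales the camera positions by the same factor):

```python
import math

def edge_length(p, q):
    # Euclidean distance between the two reconstructed edge endpoints.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def set_scale(points, edge_start, edge_end, desired_length):
    # Uniformly scale all calibrated positions so the marked edge
    # ends up with the length entered in the Set Scale dialog.
    s = desired_length / edge_length(edge_start, edge_end)
    return [tuple(s * c for c in p) for p in points]
```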

7.6 Fine-tuning a calibration


After a calibration has been performed, you might see some problems with it. Typical problems, and how to solve them, are described below:

1. There is a significant amount of variation in the aspect ratio, skew, or principal point parameters.


This can sometimes be solved by running a metric bundle adjustment and constraining the aspect ratio, skew, and principal point parameters. Also, if you have specified a constant focal length, constrain the focal length parameter during bundle adjustment as well.

2. The focal length looks very wrong.

If you have an approximate value for the focal length (it doesn't have to be very accurate), open up the Camera Parameters dialog, select Fixed Focal Length, and enter the value on the right (you can specify it in units of pixels, or in millimetres if you know the camera's aperture height). Now click the Re-Calibrate button. This will attempt to adjust the camera motion and feature positions so that the focal length of the camera is as close as possible to the value you've specified (it might not be exact, but it should get pretty near). After this is done, take a look at the variation of aspect ratio, skew, and principal point by selecting their entries in the Project Overview. If things look bad, run a metric bundle adjustment with the appropriate constraints (also, if your initial focal length value is known to be quite accurate, select the Constrain Focal option as well).

3. Everything else has failed. It won't calibrate!

Try adjusting the camera parameters by hand using key-points in the graph window. This is a last resort.

7.7 How to read the calibration progress graph


During tracking and calibration, a graph is displayed in the progress dialog that indicates the quality of the tracking or calibration for the current movie. Some of the data displayed in the log window is drawn into this dialog in graph form, and advanced users may use this graph to understand more about the calibration process. In this section, a brief description of how to read the graph is given for both feature tracking and calibration.

The progress dialog displays a bar graph, where the height of each bar indicates the current quality. The graph scrolls right-to-left as new quality measures are introduced. Each bar is also colour-coded: green for good, fading through yellow and orange for intermediate, and finally red for bad quality.

7.7.1 Auto-tracking
This example of auto-feature tracking uses the building2 movie file that comes with the Icarus distribution, and the default tracking parameters.

1. As tracking is performed, the percentage of features that are successfully tracked from one frame to the next is recorded in the graph.


2. Once back-tracking starts, the graph records the percentage of newly introduced features that are successfully tracked backwards for each frame. You can see that this percentage is slightly lower than during forward tracking.

7.7.2 Free-motion calibration


In this example we will explain the graph during various stages of calibrating the autoFeatures1 project file that is included with the Icarus distribution. The default calibration parameters were used. If you are interested in learning more about the information you can get from this graph, we recommend you also take a look at the log output and read the reference papers given in Section 8.

1. At the start of phase 1, the graph displays the percentage of inliers. An inlier is a feature track whose predicted position (given by its 3D location and the estimated camera) matches the location of the feature track. If all features and cameras are estimated accurately, then the number of inliers will be 100%. As you can see from the graph above, the early stages of phase 1 produce a large percentage of inliers, starting at 100% and then slowly decreasing.

2. Once a certain point is reached in the calibration, Icarus tries to re-estimate the camera positions using 3D feature positions that it has estimated (this is known as resectioning). The block of data shown above indicates how well this can be performed (again, green is good...)


3. At certain points in the calibration process, Icarus must merge two different chunks of camera data together. For example, it might have reconstructed camera motion from frames 40 to 50 and from frames 50 to 55. During phase 1, these camera motions are merged together to estimate the motion between frames 40 and 55. The regions of the graph highlighted above indicate how well this merging can be achieved. You will notice that each region contains four bars: the first two indicate the initial merge quality, which is often quite low, while the final two indicate the quality of the merged result. It is these final two bars that matter, since they affect the rest of the calibration process. In this case, the final two bars in each group indicate a high-quality merge.

4. Once phase 2 of the calibration process is reached, the graph will be cleared. Phase 2 is the merging phase that hopefully results in a complete estimate of the camera motion through the entire sequence (see the research paper mentioned in Section 8). Again, the graph indicates the number of inliers found after each merging step. The example highlighted above shows two entire merging passes through the sequence. Each merging pass is delimited by a vertical black line, and within each pass the bars can be grouped into sets of four, as in the previous step. Again, the first pair of bars in each set indicates the initial merge quality, with the second pair giving the final quality. In this case, the first merging pass has 3 sets of 4 bars, and for each set the final merge quality is very good.

5. The third phase of the calibration process involves estimating camera parameters such as focal length and skew. In this case, the height of the bars indicates how close the calibration algorithm has been able to get to the required values. In the example shown above, the bars are almost at the top of the graph, meaning that the camera parameters are fairly well behaved and have been estimated quite reliably.


6. Finally, phase 4 of the calibration process involves running a bundle adjustment on all the cameras and feature locations to try to improve the estimates of the camera parameters. Each step in the bundle adjustment process is separated by a vertical black line, and contains a series of vertical bars that indicate how well the camera parameters match their desired values. In the example above, the first few sets of bars are of relatively low quality, but as the bundle adjustment progresses the quality increases until the bars are almost all at the top of the graph.
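The inlier percentage that drives these quality bars can be sketched as follows. The 2-pixel threshold is an assumed value for illustration; it is not necessarily the threshold ICARUS uses:

```python
import math

def inlier_percentage(predicted, observed, threshold=2.0):
    # A feature track counts as an inlier when its predicted image
    # position (reprojected from the 3D point and estimated camera)
    # lies within `threshold` pixels of the tracked position.
    inliers = sum(1 for (px, py), (ox, oy) in zip(predicted, observed)
                  if math.hypot(px - ox, py - oy) <= threshold)
    return 100.0 * inliers / len(predicted)
```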

7.7.3 Pan/Tilt/Zoom calibration


This example was produced whilst calibrating the panCalibration2 project file included with Icarus.

1. During phase 1 of a pan/tilt/zoom calibration, the graph shows the percentage of inliers as different parts of the overall camera motion are estimated. These different sections are separated by vertical black lines. In the example above, you can see that for each part, the number of inliers starts off at 100% and then decreases until it is decided that the camera motion can be reliably estimated.

2. Phase 2 of the calibration process is almost identical to phase 3 of the free-motion calibration. For a description of this phase, see Section 7.7.2.


3. The final phase of the pan/tilt/zoom calibration is a bundle adjustment, and is very similar to phase 4 of the free-motion calibration. See Section 7.7.2 for a description of the graph during this phase.


8. References
This section provides links to research papers describing the internals of the ICARUS system in more detail. These papers are available from http://aig.cs.man.ac.uk/icarus/papers.php

S. Gibson, J. Cook, T.L.J. Howard, R.J. Hubbold, and D. Oram, "Accurate Camera Calibration for Off-line, Video-Based Augmented Reality", IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2002), Darmstadt, Germany, September 2002.

S. Gibson, J. Cook, T.L.J. Howard, and R.J. Hubbold, "ICARUS: Interactive Reconstruction from Uncalibrated Image Sequences", ACM SIGGRAPH 2002 Conference Abstracts and Applications, San Antonio, Texas, July 2002.

S. Gibson, R.J. Hubbold, J. Cook, and T.L.J. Howard, "Interactive Reconstruction of Virtual Environments from Video Sequences", Computers and Graphics, Volume 27, Number 2, 2003 (to appear).
