ICARUS User-Guide (v2.09)
Simon Gibson, Jon Cook, Toby Howard and Roger Hubbold
Advanced Interfaces Group, University of Manchester, UK.
(C) 2002-2003 University of Manchester (last updated February 5th 2003)
http://aig.cs.man.ac.uk/icarus
email: icarus@aig.cs.man.ac.uk
Table of Contents

1. Introduction
   1.1 The ICARUS System
   1.2 Pre-requisites
       1.2.1 Windows 98/2000/XP
       1.2.2 Linux (i686)
       1.2.3 SGI Irix 6.5 (IP32)
       1.2.4 Mac OS X
   1.4 Documentation overview
2. Capturing image sequences
   2.1 Capturing video sequences
       2.1.1 Camera motion and scene structure
       2.1.2 Camera zoom
       2.1.3 Object motion
3. Distortion module
   3.1 Tutorial
4. Calibration module
   4.1 Tutorial 1: Auto-feature tracking
   4.2 Tutorial 2: User-feature tracking
   4.3 Tutorial 3: Pan/Tilt/Zoom motion and Mattes
   4.4 Tutorial 4: Tracking Mattes
   4.5 Tutorial 5: 2D and 3D Image stabilization
   4.6 Tutorial 6: Motion Filtering
   4.7 Tutorial 7: Single image calibration
   4.8 Tutorial 8: Multiple image calibration
5. Reconstruction module
   5.1 Tutorial 1: Simple shapes
   5.2 Tutorial 2: Complex shapes
6. Quick Reference
   6.1 Distortion module
       6.1.1 Menu options
       6.1.2 Useful keys
       6.1.3 Distortion parameter dialog
       6.1.4 Preferences dialog
7. Notes
   7.1 Inliers and outliers
   7.2 Sequential versus non-sequential calibration
   7.3 Colour-keying mattes
   7.4 Estimating focal lengths and orienting the scene using vanishing points
   7.5 Alternative methods for orienting the scene
       7.5.1 Selecting features in a plane
       7.5.2 Selecting two features in a line
       7.5.3 Adjusting the orientation by hand
       7.5.4 Setting the scale of the calibration
8. References
1. Introduction
1.1 The ICARUS System
This document describes the installation and operation of the ICARUS system. The ICARUS system is a suite of software packages that allows a user to retrieve a variety of information from image sequences, such as camera positions and geometric models of objects visible in the images. The ICARUS system was developed by the Advanced Interfaces Group at the University of Manchester in the UK over the course of a three-year EPSRC-funded project entitled REVEAL: Reconstruction from Video of Environments with Accurate Lighting.

The general capabilities of the ICARUS system can be divided into three main modules. Each of these modules is supplied as a separate program. An overview of the operation of each module is given below:

1. Distortion module (removal of geometric lens distortion from image sequences): Due to the geometric distortion present in low-grade lenses, straight lines are not imaged as being completely straight. This distortion affects the accuracy of the calibration and reconstruction modules in the ICARUS system. The distortion module allows the amount of distortion to be easily calculated and its effect removed from image sequences if required.

2. Calibration module (estimation of intrinsic and extrinsic camera parameters for each frame of a sequence): Calibration is required before a geometric representation of the scene can be built. The calibration process calculates both the intrinsic and extrinsic camera parameters (i.e. the focal length and principal point of the camera, as well as its position and orientation in space) for each frame of a sequence. The calibration module can also do many more things (see the tutorials for some examples).

3. Reconstruction module (reconstruction of scene geometry using calibrated image sequences): Once an image sequence has been calibrated, the calibration data can be used to reconstruct geometric representations of objects in the scene. This is achieved in an interactive manner, where the user manipulates the position and orientation of parametric primitives so that they match features visible in the calibrated images. Colours and texture information can also be automatically extracted from the image data.
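To make the split between intrinsic and extrinsic parameters concrete, the pinhole projection that underlies this kind of calibration can be sketched in a few lines. This is a generic textbook model, not ICARUS's internal code, and all the numbers below (focal length, principal point, camera position) are invented for illustration:

```python
# Intrinsic parameters: focal length in pixels and the principal point.
focal = 800.0
cx, cy = 360.0, 288.0

# Extrinsic parameters: camera orientation (here aligned with the world
# axes, so no rotation is needed) and position, 5 units back along the
# optical axis.
tx, ty, tz = 0.0, 0.0, 5.0

def project(X, Y, Z):
    """Project a world-space point into pixel coordinates using a
    minimal pinhole model with an axis-aligned camera."""
    xc, yc, zc = X + tx, Y + ty, Z + tz   # world -> camera space
    return (focal * xc / zc + cx,         # perspective divide, then
            focal * yc / zc + cy)         # shift by the principal point

u, v = project(0.0, 0.0, 0.0)   # a point on the optical axis projects
                                # to the principal point (360, 288)
```

Calibration runs this reasoning in reverse: given tracked 2D feature positions across many frames, it searches for the intrinsic and extrinsic parameters that best explain them.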
1.2 Pre-requisites
The ICARUS system has been successfully tested on the following platforms: Microsoft Windows 98/2000/XP (NVidia GeForce 2/3/4 graphics), Mandrake Linux 8.1/8.2/9.0 (NVidia GeForce 2/3/4 graphics), SGI O2 and Onyx2 (Irix 6.5.13, IP32), and Mac OS X 10.2. Other platforms may work, but are currently untested and unsupported. On all platforms, ICARUS uses the LAPACK libraries available from http://www.netlib.org/lapack. Thanks must go to the authors of the LAPACK software for saving us a lot of implementation time!
Throughout this document, the term image sequence will be used to refer to both collections of one or more separate digital still images, and the set of frames in a digital video sequence. A frame of an image sequence refers to either one frame of a digital video sequence, or one of the set of digital still images.
1.2.4 Mac OS X
On Mac OS X systems, ICARUS should not need any 3rd-party libraries to operate. There are, however, a couple of known bugs that are described on the Mac download page: http://aig.cs.man.ac.uk/icarus/macDownload.php.
Figure 1: The user-interface of the calibration module in the ICARUS system. The menu and toolbars are at the top, the project overview is on the left, the image workspace is the main central window, and a graph widget is also shown at the bottom.
We have tried to provide a consistent interface for each component of the ICARUS system. There are four main sections to each component's interface: the menu and toolbars, the project overview, the image workspace, and the graph window. Items common to each interface are described below. Widgets specific to each component will be described where necessary.
Figure 2: The menu and toolbar from the calibration module

There are several menus and toolbar buttons common to each component of the ICARUS system. The Project menu allows a new project to be created or saved, or a previously saved project to be loaded back into the system. These basic operations are also accessible via the first three buttons of the main toolbar. Note that project files are not transferable between the components of the ICARUS system.

The next four buttons on the toolbar determine the cursor mode (i.e. the way in which mouse clicks and movements in the image workspace are interpreted). Cursor modes such as pan, zoom in, and zoom out, which are also available from the View menu, allow for the manipulation of the items in the image workspace. In addition to these three modes of operation, the toolbar provides a feature cursor mode. Selecting this mode allows the user to interact with items in the image window. The exact form of this interaction depends upon the component currently being used, and will be described later. Note that you can zoom in and out without changing the current cursor mode by holding down the control key and moving the mouse wheel up and down.

Note that the toolbars present in each component of the ICARUS system may be picked up and moved around by the user, allowing a small amount of user-interface customisation. Toolbars may also be moved outside the main window, and positioned on the desktop. Don't worry if you can't remember what each item in the toolbar does, since tooltips are employed to remind you of their functionality.
There is only ever one currently active image window present in the workspace. Clicking on an image name in the Project Overview will change the current image or movie shown in the currently active window. The size of the image workspace may be made larger than that of the ICARUS window. When necessary, scroll bars will appear on the bottom and right-hand sides of the workspace, allowing the user to scroll around the larger space.

When a movie is loaded into a window, and the window has been selected by clicking in it, the cursor keys may be used to move through the sequence. The left and right cursor keys will move one frame forwards/backwards. Holding the shift key down as well will move forwards/backwards by 10 frames. If your mouse has a wheel button, moving the wheel forwards/backwards will also move forwards/backwards by 5 frames in the movie. If one of the zoom buttons is selected, however, moving the mouse wheel whilst the left button is pressed will zoom in/out of the image. The up and down cursor keys can also be used to navigate through a movie. In the distortion module, the up and down cursor keys will move to the start/end of the sequence. In the calibration module, these keys are used to move between keyframes, or between images shown in the Project Overview.

Some information is shown in the bottom right of each image window. From left to right, this is: the image/movie name; the current frame expressed as a fraction of the total number of frames; the image/movie resolution; the amount of zoom; the movie frame-rate and the achieved frame rate (in brackets).
Figure 5: An example graph window from the calibration component

As well as adjusting parameters on a per-frame basis, you can also use key-points to help generate smoothly varying parameters. By clicking with the right mouse button on any of the small dots, you can add or remove key-points. You can also specify whether linear or smooth (Hermite spline) interpolation is used to generate parameters between the key-points. When smooth interpolation is used, small handles appear at the key-points. Adjusting these handles will alter the slope of the parameter at the key-point. The popup menu also has an option to indicate whether the parameter slopes on either side of the key-point are continuous. When this option is not selected, the two handles can be moved independently. Note also that when you start adjusting the key-point positions, the original data is drawn in the background, allowing you to adjust the key-point positions so that the data is correctly smoothed.
Figure 6: The graph window can also be used to smooth out the parameter data with key-points
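The two interpolation modes described above can be sketched with the standard formulas. This is a generic linear/cubic-Hermite sketch, not ICARUS's source code, and the key-point values (750 and 800, e.g. a focal length in pixels) are invented for illustration:

```python
def lerp(p0, p1, t):
    """Linear interpolation between two key-point values, t in [0, 1]."""
    return (1.0 - t) * p0 + t * p1

def hermite(p0, m0, p1, m1, t):
    """Cubic Hermite interpolation between key-point values p0 and p1,
    with slopes m0 and m1 set by the key-point handles."""
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * p0 + (t3 - 2*t2 + t) * m0 +
            (-2*t3 + 3*t2) * p1 + (t3 - t2) * m1)

# Smoothly interpolate a parameter between key-points at frames 0 and 10.
# Zero slopes give an ease-in/ease-out curve through both key-points;
# moving a handle changes the corresponding slope m0 or m1.
p0, p1 = 750.0, 800.0
curve = [hermite(p0, 0.0, p1, 0.0, f / 10.0) for f in range(11)]
```

The continuity option in the popup menu corresponds to forcing the incoming and outgoing slopes at a key-point to be equal; unchecking it lets the two handles supply different slopes on each side.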
3. Distortion module
When images or sequences are captured with consumer-level cameras at short focal lengths, distortion is introduced into the images by the system of lenses in the camera. Due to this distortion, images of straight lines appear slightly curved. This distortion must be removed to ensure accuracy in the later calibration and reconstruction phases. Typically, consumer-grade cameras exhibit barrel distortion at short focal lengths, and small amounts of pin-cushion distortion at long focal lengths (see Figure 8).
No distortion
Barrel distortion
Pincushion distortion
Figure 8: Simple examples of lens distortion typically encountered with consumer-level digital cameras.

The amount of geometric distortion in each image/video sequence is estimated by having the user identify lines in the image that are supposed to be straight. The user may draw multiple lines in different images, or in different frames of a video sequence. After at least one line has been placed, ICARUS can calculate the distortion parameters required to straighten them. If the user has marked a number of lines in different frames of a video sequence, distortion parameters may be calculated independently for each frame. The parameters for in-between frames are then interpolated from these results. Alternatively, if the video sequence has been captured at an approximately constant focal length, a single set of average parameters may be estimated for the entire sequence.

There are several different types of distortion parameter that may be calculated (see the reference literature for further details):

1. The centre of lens distortion,
2. Low order radial distortion,
3. Higher order radial distortion,
4. Low order tangential distortion,
5. Higher order tangential distortion.

The only parameters which normal users of the system need to worry about are the first three. Generally, the centre of lens distortion can be kept at the centre of the image. Tangential distortion may sometimes be necessary for non-standard lenses.

As well as removing lens distortion from a sequence (as described below), the distortion module may be used to re-apply lens distortion. This can be achieved by loading in a previous distortion file and changing the Action option in the movie or image export dialog. Distortion files can be loaded and saved using the options in the Project menu.
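As a rough illustration of what the radial terms do, here is the common polynomial radial distortion model found in the photogrammetry literature. ICARUS's exact parameterisation may differ, and the coefficient and coordinate values below are invented:

```python
def distort(x, y, cx, cy, k1, k2=0.0):
    """Apply radial distortion about the distortion centre (cx, cy).
    k1 is the low order term, k2 the higher order term; negative
    values give barrel distortion, positive values pin-cushion."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy                 # squared radius from centre
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial scaling polynomial
    return cx + scale * dx, cy + scale * dy

# With barrel distortion (k1 < 0), points far from the centre are pulled
# inwards, which is why straight lines near the image edge bow outwards.
u, v = distort(700.0, 288.0, 360.0, 288.0, k1=-1e-7)
```

Solving for the parameters amounts to finding the coefficients whose inverse mapping makes the user-drawn curved lines straight, which is why at least three points per line are needed to measure the curvature.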
3.1 Tutorial
1. First, launch the distortion program, and create a new project by clicking on the New Project toolbar button, or by selecting the Project->New menu item. Then select the Project->Import Movie menu option, and load the building2d video sequence. The name of the movie will appear in the project window, and the first frame of the sequence will be shown in the image window. You can change the current frame of the movie by moving the horizontal scroll-bar at the bottom of the window, or by dragging the frame indicator (the black vertical triangle) in the graph window.
2. Find a frame containing a straight line (take, for example, the upper-most horizontal edge of the main building). Put the system into feature mode by selecting the cross-hair button on the toolbar, and draw a line from the top to the bottom of the edge. Drawing is achieved by clicking with the left button at the desired start point on the image, dragging the line to the required end-point, and releasing the left mouse button. You should see a straight yellow line appear on the image. After the line is drawn, its end-points can be moved by simply clicking on them with the left button and dragging them to a new position. As this is being done, a window pops up to show a zoomed-in portion of the image. This helps with sub-pixel positioning of points. The user may also zoom in to the image by selecting the zoom in and zoom out buttons on the toolbar and clicking on the image with the left mouse-button. Note that by default, ICARUS tries to snap points to the nearest interesting feature. This is useful because it allows you to position these points more accurately. In some situations, however, this is not desirable, and so this snapping can be turned off by holding down the Shift key whilst placing the point.

3. You can see clearly that the line of the building is not straight, compared to the yellow line that has just been drawn on the image. This is the information that ICARUS uses to determine the distortion parameters for the lens. In order to calculate this information, more points need to be added to the line. The line can be subdivided by pointing at a section of the line with the mouse, clicking the left button, and moving the new vertex to a new position. This can be repeated as many times as necessary in order to follow the curve of the building. At least three points are required on each line to determine the distortion parameters.
4. More lines can be placed in other frames of the sequence if required. This will be necessary if the focal length changes significantly over the course of the sequence. If the focal length does not change significantly, it is still a good idea to create extra lines, as these will be used to increase the accuracy of the distortion calculations. At any time, you may save a project file containing the current state of the system. A project file for this sequence containing three distortion lines in a single frame is distributed with the system, and may be found in the tutorial directory (tutorial/distortion1.ipd).
5. The distortion parameters may now be calculated. Select the Distortion->Parameters menu option. This will bring up a window showing the various distortion parameters. Initially, only Low Order Radial distortion will be selected, and the parameters are assumed to be uniform over the entire sequence. These options can be changed if required. Clicking the Solve button will solve for the parameters the user has asked for. If no lines have been drawn in the sequence, but the user knows the distortion parameters, these may also be entered in the type-in boxes in the dialog, and the solution process skipped.
6. Once the distortion parameters have been calculated, the image will be warped to show the amount of distortion necessary to straighten the lines drawn in steps 2-3. These straightened lines are drawn in white. The entire video sequence may now be un-distorted and saved to disk using the Project->Export Movie menu option. Also, you may adjust the parameters by hand, if required, by clicking on the lens parameters in the Project Overview, and adding/removing key-points as described in Section 1.3.4.
4. Calibration module
The calibration module attempts to estimate parameters such as focal length, position, and orientation of the camera for each frame of a video sequence, or for a set of one or more images. Calibration is achieved by identifying common features shared between images or frames of a video sequence. For sets of images, the user must identify these common features. For video sequences, the user can ask ICARUS to automatically select and track a large number of features throughout the sequence (auto-features). Alternatively, the user can also select a set of features, and have ICARUS track these itself (user-features).

Camera calibration is a bit of a black art, and it is very difficult to develop a system that will work in all situations. There will always be situations in which image/video calibration will fail. The key approach to getting a calibration working is to give ICARUS as much information as possible. If you know that the focal length is constant for each frame of a sequence (or even a fixed, known value), use this information when setting the lens type (see below). Also, if an image sequence has an identifiable pair of vanishing points in one or more frames, mark them, and ICARUS will use this information to try and build an accurate calibration for the sequence.
3. A number of features now need to be identified and tracked throughout the frames of the sequence. Select the Tracking->Tracking Parameters menu option. This will pop up a dialog containing parameters such as the number of features, the maximum allowable residual error, etc. Just leave these parameters as they are for now, click the Close button, and then select Tracking->Auto Track from the Tracking menu (or press F6). This will track 200 features from the start to the end of the frame range (i.e. from frame 0 to frame 100). Tracking these points will take several minutes. A project containing the tracked features is included in the tutorial directory (projects/autoFeatures1.ipc). Once tracking has finished, a colour-coded track is displayed for each feature indicating the motion of the feature over the previous and next 10 frames. The colour of the track, either green, yellow or red, indicates the amount of error in the feature track. When the mouse moves over an individual feature, a longer track is displayed. Features can be deleted with the delete key whilst hovering over them with the mouse cursor (remember to make sure that the image window has keyboard focus by clicking in it). Clicking the right mouse-button over a feature will also display the feature menu (see Chapter 6 for a description of these functions).

4. The next stage involves calculating a focal length for one of the frames of the reconstruction. This step is often not necessary, but for sequences where it is possible to do so, estimating a focal length can improve the quality of the final calibration. Put the system into feature mode by selecting the cross-hair button on the toolbar. To make things easier, hide the auto-features by unselecting the View->Auto Features option from the View menu (but remember to show them again after you've done this step). Open the Coordinate Frame options in the Project Overview on the left-hand side of the window, and highlight the X Axis entry.
This allows you to draw over edges in the image that are parallel to the X Axis in the scene. You can pick any orientation you want for the directions of the coordinate axes, but the X, Y and Z axes must be orthogonal to each other (i.e. at right-angles). In this case, we'll choose the X axis to run down the long side of the building, and the Z axis to run down the short side. This means that the Y axis will be parallel with the vertical sides of the building. Draw two red X axis edges, as shown in the figure on the right. Edge drawing operates in the same way as in the distortion module, except edges can't be subdivided in this case. After both edges are marked, select the Z Axis entry in the Project Overview, and draw the two blue Z Axis edges. Each time you mark an edge, a small coloured indicator appears in the graph window indicating that an edge has been drawn in that frame. Pressing the right mouse-button on an edge end-point pops up a menu that allows you to delete the edge. Pressing the right mouse-button on the axis entry in the Project Overview allows all edges associated with that axis to be deleted. At least two edges are required for two out of the three axes in order to estimate a value for the focal length in a single frame (see Section 7.4 for further discussion about calculating focal lengths with vanishing points). Now select the Camera->Estimate focal length option from the Camera menu. This should show a dialog indicating the focal length that has been calculated using these lines (for this sequence, you should get a value of between 750 and 800 pixels if you've placed your lines accurately). Click the Yes button in response to the question asking if you want to mark the frame as calibrated. This means that ICARUS will remember this focal length and use it to (hopefully) improve the accuracy of the calibration. Note that if you get a very inaccurate focal length estimate, you will need to adjust the edge end-points and re-estimate the focal length.
Deleting an edge from a frame will also remove the focal length estimate.
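The geometry behind this estimate can be sketched as follows. The two sets of parallel edges intersect (in the image) at two vanishing points; for orthogonal scene directions, square pixels, and a known principal point p, the focal length satisfies (v1 - p) . (v2 - p) + f^2 = 0, a standard single-view result. This is a generic sketch, not ICARUS's code, and the vanishing point coordinates below are invented for illustration:

```python
import math

def focal_from_vanishing_points(v1, v2, principal=(360.0, 288.0)):
    """Estimate the focal length (in pixels) from the vanishing points
    of two orthogonal scene directions, given the principal point."""
    px, py = principal
    # Dot product of the rays from the principal point to each
    # vanishing point; orthogonality forces this to equal -f^2.
    dot = (v1[0] - px) * (v2[0] - px) + (v1[1] - py) * (v2[1] - py)
    if dot >= 0:
        raise ValueError("vanishing points are not consistent with "
                         "orthogonal scene directions")
    return math.sqrt(-dot)

# Hypothetical vanishing points for the X and Z axis edge pairs:
f = focal_from_vanishing_points((2000.0, 300.0), (-10.0, 280.0))
```

This also shows why inaccurate edge end-points hurt: a small error in an edge direction shifts its vanishing point a long way, which directly perturbs the estimated focal length.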
17
5. Before the calibration process can start, you need to set the pixel residual error. Select the Project->Preferences option from the Project menu. The Residual entry in the preferences dialog can be used to set the error threshold used by the calibration algorithms. It is set to 1 pixel by default, which should be adequate for most sequences. Now select the Camera->Calibrate option from the Camera menu. A dialog will appear, showing several advanced options that let you control the accuracy and speed of the calibration process (see Chapter 6 for a description of these options). For now, just click Calibrate. This will show another dialog indicating the progress of the calibration. If all goes well, after a few minutes the dialog will disappear without any reported errors. You will notice that the features have been replaced with small dots representing the positions of the features in space. These dots are coloured depending on how well they fit the original feature positions: green for good (i.e. they are within the error threshold), red for bad (outside the error threshold), and white if the feature was not located in the frame. You can also display 3D markers at the feature locations by selecting the 3D Marker->Display option when you right-click on a feature position. This can give you a better idea of scale in the scene. Finally, the results of the calibration can also be examined from different viewpoints by clicking on the New Viewer button on the toolbar, or selecting the appropriate option from the View menu. This will display an additional window showing a three-dimensional view of the calibrated camera path and the positions of the features. The viewpoint may be manipulated by using the left, middle, and right mouse-buttons to rotate, zoom, and translate a virtual camera accordingly.

6. The next stage of the process is to orient and scale the world coordinate system.
This stage is optional, but if the calibration data is to be used in the reconstruction module, it is often worthwhile. Orientation uses the same edges that were used when estimating a focal length in Step 4, so if that stage was skipped, close the viewer window and mark the edges now. For orientation, you will need to mark edges for at least one axis (and remember, all the edges need to be in the same frame). Sections 7.4 and 7.5 describe in more detail how the edges are used for orientation, as well as some alternative approaches to orientation. You will also need to specify an origin for the coordinate system, so select the Coordinate Frame->Origin Point entry in the Project Overview, and mark an appropriate point on the ground-plane (e.g. the bottom corner of the building). For free camera motion, the origin point must be marked in at least two frames, so go to another frame in which the same point is visible, and mark it again. Now select the Camera->Orient Scene option from the Camera menu. A wire-frame representation of the ground-plane should appear, and you can check to see if the world coordinate frame has been correctly oriented. A project file containing the correctly oriented calibration is included in the tutorial directory (projects/autoFeatures2.ipc). Opening up a viewer window will also allow you to check the camera position relative to the ground-plane. If you need to fix the scale of the calibration to a specific value, then you can do this by highlighting the Coordinate Frame->Scale entry in the Project Overview and
drawing a line in the image corresponding to a known distance. This line needs to be marked in at least two frames of the sequence. Once this is done, right-clicking on the Coordinate Frame->Scale entry and selecting the Set Scale option allows you to specify the length of this line. The calibration will be scaled accordingly.

7. An alternative method to check the accuracy of the camera motion is to examine the different camera parameters in the graph window. Firstly, open the Camera entry in the Project Overview, and highlight Focal length. This shows a plot of the camera focal length for each frame of the sequence in the graph window (try looking at the other parameters as well to see how they vary, and remember that you can change the scale of the graph using the control key and the left mouse button, as described in Section 1.3.4). In some situations, parameters like focal length, skew, and principal point will vary too much, due to error in the feature locations. Ideally, camera skew should be zero, the pixel aspect ratio should be a constant value, equal to that specified in the camera parameters dialog (step 1), and the principal point should be in the centre of the image (50%, 50%). In some situations, this might not be the case and these errors must be corrected. Also, the camera focal length might vary too much, even though it was specified as constant or fixed in the camera parameter dialog. If your camera calibration data appears to contain significant errors, these can be reduced by selecting the Camera->Bundle Adjust option from the Camera menu. Note that in this case, applying the bundle adjustment will have little effect on the accuracy, as the calibration is quite accurate already, but you can take a look at the dialog just to see the options available. The dialog allows you to place constraints on the camera parameters.
Typically, you would need to select the Constrain Aspect, Constrain Skew, and Constrain Principal options (and Constrain Focal if your focal length should be constant) and then click Apply. After a few minutes, the adjustment should finish and you can examine the results of the adjustment by checking the parameters in the graph window.

8. Finally, the results of the calibration can be exported for later use during reconstruction by selecting the Project->Export 3D Motion option from the Project menu, and saving an ICARUS Sequence file (.isq). Alternatively, you can also export the calibration data to a number of commercial modelling packages using the same menu option. Please note that when exporting in these other formats, it is very important that a metric bundle adjustment has been applied to the calibration data, and that the camera skew, aspect ratio and principal point have been constrained. You can check the type of motion that will be exported by selecting the View->Restricted Camera option from the View menu. This will alter the current image window to match the data that will be exported. Ideally, selecting this option should not affect the image at all. If this is the case, then the camera data exported to Maya/Lightwave etc. will match the data calculated by ICARUS. If there are significant differences, then run a bundle adjustment as described above.
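The green/red colouring described in step 5 is driven by a per-feature reprojection residual compared against the threshold from the Preferences dialog. The test can be sketched as follows (a hypothetical helper with invented coordinates, not ICARUS's actual code):

```python
import math

def classify(projected, observed, threshold=1.0):
    """Compare a feature's reprojected 3D position against its tracked
    2D position: 'green' (inlier) if the residual is within `threshold`
    pixels of the observation, 'red' (outlier) otherwise."""
    residual = math.hypot(projected[0] - observed[0],
                          projected[1] - observed[1])
    return ("green" if residual <= threshold else "red"), residual

# A reconstructed point reprojecting 0.36 pixels from its track: inlier.
status, r = classify((101.3, 54.8), (101.0, 55.0))
```

Lowering the Residual preference therefore tightens this test: fewer features count as inliers, but those that remain fit the recovered camera motion more closely.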
2. Place the feature in this frame by clicking with the left mouse button. Try to select a position that has some easily identifiable contrast or pattern (like the corners of objects). Note that whilst placing a feature, if you hold down the Shift key and then hold down the left mouse button, a zoomed-in portion of the image is displayed, allowing you to place the feature more accurately. Now click the right mouse button over the feature. This will display the feature menu options (see Chapter 6). Select Track forwards to track this feature forwards to the end of the video sequence. Then move back to where you first placed the feature and select Track backwards to track the feature back towards the first keyframe. If the tracking fails at any time, a dialog box will be displayed. If this happens, click Okay, re-position the feature in the frame where tracking failed, and start tracking again. Similarly, if a feature moves out of bounds (indicated by the yellow box) but re-appears later, you may re-position it in the later frame and continue tracking. As each feature is tracked, a graph is drawn in the graph window showing the residual error for the feature track. If this error rises above the limit set in the tracking parameters dialog, tracking terminates.

3. Repeat this process until you have positioned around 20 features. Try to make sure that the features are evenly distributed over the frames, and that the points you select do not all lie on a single plane. In theory, ICARUS only requires 8 features, but in practice many more are needed to get reliable results, and the tracking quality depends strongly on where the features have been placed. A project file projects/userFeatures1.ipc has been provided that contains over 20 user-features. Once a suitable number of features have been placed, select Camera->Calibrate from the Camera menu. Clicking the Calibrate button in the dialog will start the calibration process, which may take several minutes to complete (a calibrated project file is also included as projects/userFeatures2.ipc).
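The residual error mentioned above is essentially a measure of how well the feature's image patch matches the candidate patch in the next frame. A toy sketch of such a test (illustrative only, not ICARUS's tracker; the threshold name is hypothetical):

```python
# Sketch of a template-matching residual: the mean squared difference
# between the feature's patch and a candidate patch in the next frame.
# If the best residual exceeds the limit from the tracking parameters
# dialog, tracking stops.
def residual(template, patch):
    assert len(template) == len(patch)
    return sum((a - b) ** 2 for a, b in zip(template, patch)) / len(template)

THRESHOLD = 100.0  # analogous to the dialog's residual-error limit

template = [10, 20, 30, 40]   # grey levels of the feature's patch
good = [12, 19, 31, 38]       # slightly shifted/lit patch in the next frame
bad = [200, 5, 90, 0]         # feature lost: patch looks nothing alike

print(residual(template, good) <= THRESHOLD)  # True: keep tracking
print(residual(template, bad) <= THRESHOLD)   # False: tracking terminates
```

This is why features placed on high-contrast corners track well: their patches are distinctive, so the residual stays low only at the true match.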
4. Once finished, a viewer window may be opened up to examine the feature locations and camera motion. Notice that the motion is noisier than when auto-tracking is used, because of the smaller number of feature tracks used to estimate the camera parameters. In some situations, the motion might be too erratic to use. In this case, you can manually adjust the parameters by selecting feature mode from the toolbar and then moving the parameter values up and down using the left mouse button. You can also place key-points, as described in Section 1.3.4. Notice that as adjustments are made, the result can be viewed interactively in the image window. Manual adjustments can also be made to a sequence that has been tracked using auto-features.
2. Move to the start of the sequence, zoom out slightly, and draw the matte outline around the road by clicking with the left mouse button. Each click will create one new boundary key-point. Boundary key-points can be placed inside or outside the frame. You can position key-points more accurately inside the frame by holding down the Shift key before clicking; this shows a popup window around the mouse position with a zoomed-in view of the image. You will notice that as you create boundary key-points, small handles appear next to each point. These are used during matte tracking, and will be described in more detail in the next tutorial. After key-points have been created, their positions can be changed by clicking and holding the left mouse button and dragging them into a new position. Holding down the Control key and clicking the left mouse button inside the matte allows you to move all its boundary points at the same time. Clicking with the right mouse button inside a matte shows a popup menu, with options to insert extra key-points at the current mouse location, remove the matte from this frame (or from this frame onwards if no key-points have been placed in later frames), or invert the matte. Note that when you first create a matte, its position is set in each frame of the sequence (indicated by small triangles in the graph window). Key-point positions are indicated with light-blue triangles. The next stage is to go through the sequence and make sure that the matte tracks the road as it changes position in the camera view.
For this sequence, it is not actually necessary to use the matte in order to calibrate the camera motion, because ICARUS is able to detect and ignore small moving objects in image sequences without the use of mattes. Try it and see!
3. Move to frame 30 using the frame slider. You will notice that the matte stays in position. You can change its position in this frame by dragging the points around the screen. Move all the points into a new position so that the matte covers the road again. Moving to earlier frames, you will see that the position of the matte is interpolated between the two positions you have created.
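The in-between positions are produced by interpolating each boundary key-point between the two keyframes you set. A minimal sketch of the idea, assuming simple linear interpolation (ICARUS's exact scheme may differ):

```python
# Sketch of matte key-point interpolation between two keyframes.
def interpolate(kf_a, kf_b, frame):
    """kf_a, kf_b: (frame_number, [(x, y), ...]) keyframes; frame lies between them."""
    fa, pts_a = kf_a
    fb, pts_b = kf_b
    t = (frame - fa) / (fb - fa)
    return [(xa + t * (xb - xa), ya + t * (yb - ya))
            for (xa, ya), (xb, yb) in zip(pts_a, pts_b)]

# One matte point at (100, 50) in frame 0 and (140, 50) in frame 30:
print(interpolate((0, [(100.0, 50.0)]), (30, [(140.0, 50.0)]), 15))
# [(120.0, 50.0)]
```

This is why you only need to re-position the matte in a few frames: every frame in between is filled in automatically.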
4. Position the matte in a few more frames of the sequence, adjusting its position until it covers the road throughout the entire sequence. An example project file is included (projects/panCalibration1.ipc) that contains a suitably positioned matte.
5. Now we're ready to track some features. Select Tracking->Auto Track to track some features throughout the sequence (or press F6). Notice that no features are selected from the areas where the matte has been placed. The file projects/panCalibration2.ipc contains the matte and a set of auto-features after tracking.
6. After tracking has completed and a suitable set of features has been identified, select Camera->Calibrate and then click the Calibrate button in order to reconstruct the camera motion. Once calibration is finished, open up a viewer window to take a look at the camera motion. Notice that all the camera centres are in the same place, and all the feature positions have been placed on a sphere. This is because feature locations for pan/tilt/zoom camera motion can only be represented as direction vectors, and not as exact locations in 3D space.
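The sphere in the viewer follows directly from the geometry: when the camera only pans, tilts, or zooms, its centre never moves, so a tracked pixel can only be back-projected to a ray through that centre, never to a depth. A small sketch using a simple pinhole model (illustrative, not ICARUS's internal representation):

```python
# Sketch: back-project a pixel to a unit direction vector. For a
# pan/tilt/zoom camera this direction is all that can be recovered,
# hence the feature "sphere" in the viewer window.
import math

def pixel_to_direction(u, v, focal, cx, cy):
    """Simple pinhole model, no skew: ray through the camera centre."""
    x, y, z = u - cx, v - cy, focal
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

d = pixel_to_direction(360.0, 288.0, 1000.0, 360.0, 288.0)  # principal point
print(d)  # (0.0, 0.0, 1.0): a ray straight down the optical axis
```

Any point along that ray projects to the same pixel, which is why depth stays ambiguous without camera translation.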
7. Because the camera motion has now been calibrated, you can orient the scene using vanishing points if you wish. Remember that you need to mark edges for one or more axis directions in a single frame (see Sections 7.1 and 7.2). Note that for pan/tilt/zoom camera motions, you only need to mark a single origin point. The projects/panCalibration3.ipc file contains a correctly oriented scene. An ICARUS sequence file can then be saved out as normal and loaded into the reconstruction module, allowing you to reconstruct geometry from the sequence. Alternatively, you can merge all of the video frames together and generate an image mosaic. To do this, simply select Project->Export Image->Image Mosaic. This will prompt you for a filename and then generate the mosaic image.
2. Now, imagine that we want to remove the cluster of three people who are walking on the pavement in the sequence. To do this, first create a new matte by pressing F5. Highlight the matte in the Project Overview by clicking on it with the left mouse button, and remove the auto-features from the display by un-selecting View->AutoFeatures. Now, making sure the interface is in Feature mode, draw the matte around the people in one frame (frame 27 is shown in the image on the right). Make sure that the matte's outer boundary surrounds all the moving objects. If you want to mask out more objects, you can also draw mattes around those, but for now we'll just stick with a single matte.
3. You must now re-position the matte so that it surrounds the moving people in each frame of the sequence. You can do this by moving forwards and backwards through the sequence and re-positioning and re-sizing the matte each time the moving objects move outside it (see Tutorial 3 for more discussion about how to adjust mattes, and remember that you can move the entire matte by holding down the Control key and dragging it around the screen using the left mouse button). A project file motionFilter.ipc is included with the tutorial that contains a properly positioned matte surrounding the moving people.
4. Once the matte is positioned correctly, you can select Project->Export Image->Motion Filter or Project->Export Movie->Motion Filter to remove the objects under the matte in either the current frame or the entire movie. Doing this will display a file dialog prompting you for an image or movie filename, and then a second dialog containing options that affect the quality of the motion filtering. For now, just leave the threshold value at its default and click Filter to process the sequence and remove the objects.
5. You can use multiple mattes to filter out different motions, as well as inverting a matte (by right-clicking on the matte and selecting Invert) if you want to remove all object motion except that covered by the matte. Be warned, though, that motion filtering can be a time-consuming process, especially if you're removing motion from a large area of an image. Also, remember that this filtering is still a beta feature, and so may not always work correctly.
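One common way such a filter can work (a sketch of the general idea, not necessarily ICARUS's exact algorithm) is to exploit the fact that, once the frames are registered, the static background reappears at the same pixel in most frames. Taking a temporal median then discards the briefly-occluding moving object:

```python
# Toy sketch of motion filtering via a temporal median: a pixel covered
# by a moving object in a few frames is replaced by the value it takes
# in most frames, i.e. the static background.
from statistics import median

def filter_pixel(values):
    """values: the same (aligned) background pixel sampled across frames."""
    return median(values)

# Background grey level ~80, briefly occluded by a bright moving object:
samples = [80, 81, 255, 250, 79, 80, 78]
print(filter_pixel(samples))  # 80
```

This also suggests why filtering a large matte is slow: every pixel under the matte has to be reconstructed from many frames.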
Before (left) and after (right) images showing the motion filtering in action. Notice that the three people that were covered by the matte have been removed from the image.
2. Calibration of a single image is achieved using vanishing points. Put the interface into feature mode by selecting the cross-hair button on the toolbar. Now open the Coordinate-Frame options in the Project Overview, and highlight X-Axis. Draw two red lines on the image, as shown in the image on the right. These two lines define the vanishing point of the X axis in the world coordinate system (see Sections 3.1 and 4.1 for a description of how to draw these lines, and Section 7.1 for a more detailed description of how to select suitable lines).
3. Now select Y-Axis from the Project Overview, and repeat the process, drawing two green lines for the Y axis (again, see the image on the right to see where to place the lines). Finally, select an appropriate position for the origin of the coordinate system, by selecting Origin Point from the Project Overview and choosing any point on the floor of the room.
4. Now the image is ready to be calibrated. Select the Calibrate option from the Camera menu. ICARUS will first use the vanishing points you have defined to estimate the focal length of the camera. A popup will be displayed, showing the estimated focal length (you should get something between 1300 and 1400 pixels) and asking if you want to mark the frame as calibrated. Click Yes. A wire-frame representation of the ground-plane will then be drawn, indicating that the image has been calibrated. You may now save out the calibration as before, by selecting Project->Export 3D Camera Motion from the Project menu and choosing the ICARUS Sequence file format (.isq). A project file called projects/singleImageCalibration.ipc has been provided containing the finished calibration.
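The focal length estimate comes from a standard geometric relation: for a zero-skew, unit-aspect pinhole camera with principal point p, two vanishing points v1 and v2 of orthogonal scene directions satisfy (v1-p)·(v2-p) + f² = 0. A sketch with hypothetical vanishing-point coordinates (the numbers below are made up for illustration, not taken from the tutorial image):

```python
# Sketch: estimate a focal length from two orthogonal vanishing points,
# assuming the principal point is at the image centre.
import math

def focal_from_vps(v1, v2, principal):
    cx, cy = principal
    dot = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if dot >= 0:
        raise ValueError("vanishing points not consistent with orthogonal axes")
    return math.sqrt(-dot)

# Hypothetical X and Y vanishing points for a 720x576 image:
f = focal_from_vps((2000.0, 288.0), (-800.0, 288.0), (360.0, 288.0))
print(round(f))  # 1379
```

Note that the two vanishing points must lie on opposite sides of the principal point (negative dot product); otherwise no real focal length satisfies the orthogonality constraint.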
2. Before calibration, you must position at least 8 user-features in each image. Start off by selecting Tracking->New User Feature (or press F4). This will create the first user-feature. Now select a suitable place in the first image to position the feature. You want the position to be visible in as many images as possible, so in this case you could select the top-left corner of the monitor screen, as shown on the right.
3. Once the feature is placed correctly in the first image, go through each other image and position the same feature wherever it is visible. Remember that you can use the Up and Down arrow keys to scroll through the images, and hold down the Shift key whilst positioning a feature to display a zoom window for more accurate positioning. If you place a feature incorrectly, you can simply replace it at the correct position. Now repeat this process until you have a large enough set of features positioned at different places in each image. The more features you position correctly, the more accurate the final calibration will be. A sample project file is included with this tutorial called multipleImageCalibration1.ipc that contains 14 features.
4. The final stage of the calibration setup process is to estimate a focal length for one of the images using vanishing points, as described in the first auto-feature calibration tutorial. This can help a lot with accuracy when calibrating multiple still images, especially when you're not using many user-features (as in this case). Select the image called multImage4.jpg and position Y and X axis lines, as shown on the right (or take a look at the multipleImageCalibration2.ipc project file to see the lines more clearly). Remember, though, that this stage is not always necessary, and you can also simply input a focal length value if you know one to achieve the same effect (the focal length is about 1140 pixels for this image).
5. Now, simply select Camera->Calibrate to calibrate the cameras. You can position a ground-plane as before, or open up a viewer window to see where the cameras are positioned. Camera calibration data can also be saved in the normal way by selecting Project->Export Motion. A calibrated project file, including the coordinate axis lines, is included as multipleImageCalibration2.ipc.
5. Reconstruction module
Once a set of images/video sequence has been calibrated, it can be used in the Reconstruction module to build a geometric reconstruction of the scene. The reconstruction module uses camera calibration data to assist the user in positioning geometric primitives that represent objects in the scene.
2. To start modelling the building, we will use a box to represent its overall shape. Click on the Box primitive icon at the bottom of the window. A wire-frame box should appear on the screen with red corners, and a new primitive called box should appear in the Project Overview. Now place the system in Feature mode by selecting the cross-hair button on the toolbar. At any time, you may click on the primitive with the right mouse button to display the primitive menu (see Section 6.3 for details).
3. The box primitive must now be positioned and scaled correctly. Move the mouse cursor over the bottom-front corner of the primitive and click the left mouse button. Keeping the button held, drag the cursor towards the nearest corner of the building and release the button. This alters the position of the box primitive so that its corner projects onto the image plane at the place where you released the button. A small white circle appears at the vertex to indicate that it has been pinned at this location in the image. You can also hold down the Shift key whilst moving the pin to zoom into the image, as in the distortion and calibration modules.
4. The other corners of the box can now be positioned. Select the bottom-left corner and drag it to the position indicated on the right. You will notice that as the corner moves, the position of the box remains constant but its scale changes. This is because the primitive now has two image constraints: one for the previous (pinned) vertex and one for the current vertex. At any time, you may click with the right mouse button on a vertex to show a pop-up menu that allows you to add, remove, or clear pins.
5. In order to set the correct height of the building, select the top-front corner of the box and drag it into position, as shown on the right. Again, the position of the box remains constant while its vertical height changes to satisfy the new image constraint. When the primitive is correctly positioned in space, its projection into each frame of the sequence will correctly match the outline of the building. You can check this at any time by moving the frame slider at the bottom of the currently active window.
6. One final corner must now be positioned in order to correctly set the size of the box primitive. Because this corner is not visible in the first frame of the sequence, move to a later frame where you can see it, and drag it into position. Note that the colours of the small pin circles at each previous corner have changed from white to yellow. This indicates that these corners are not pinned in this frame, but have been pinned in other frames. Each corner may be pinned in as many frames as required, and ICARUS will try to satisfy each constraint given by each pin. When you place pins in multiple images, you may notice that the red outline of the primitive does not match your pin positions. This occurs when the constraints the pins specify are inconsistent, which can happen for two reasons: either the pins do not correspond to a single point in space, or the calibration calculated by the Calibration module is incorrect. A project file containing a correctly positioned box primitive is included with the system (projects/reconstruction1.ipr).
7. Now that the first primitive is placed, we can model the side part of the building with another box. This second box will be created as a child of the first within the scene graph, so make sure the first primitive is selected by either clicking on its name in the Project Overview or by clicking on it in the image window. The currently selected primitive has a thicker outline drawn around it, and other primitives are drawn with thinner yellow lines. Now click on the Box icon again at the bottom of the screen. This creates a new box at the default position on top of its parent. This default position is specified by the hierarchy constraints shown in a toolbar on the left of the window.
8. To position the new box primitive on the side of the building, its hierarchy constraints must be changed. These constraints are specified in terms of the location of the primitive relative to its parent in each of the X, Y, and Z directions, shown by the red, green, and blue axis lines. First, change the Z constraint so that the primitive sits on the outside of its parent in the direction of decreasing Z by clicking on the Z Min Out constraint icon. Similarly, remove the constraint specifying that the primitive sits above its parent in the Y direction by de-selecting the Y Max Out icon. This should position the primitive as shown on the right.

9. You can now start to position this new box primitive. Select the bottom-left corner and move it into the position shown on the right. You will notice that the primitive now moves in a vertical, rather than a horizontal, plane. This is because the Z-Min-Out constraint is being satisfied as the primitive moves, rather than the Y-Max-Out constraint.
10. Now try to position two more corners of the new box primitive so that they are in the positions shown on the right. As these corners are moved and pinned, ICARUS attempts to satisfy both the hierarchy constraints and the image constraints.
11. Finally, the positioning of the new box primitive can be completed by pinning a fourth vertex (you may need to move to another frame to do this if the far corner of the block is not visible in the current frame of the sequence). You can compare your result to the projects/reconstruction2.ipr project file provided with this tutorial.
12. To improve this simple model of the building, we will position a horizontal polygon to represent the ground. The wire-frame ground-plane you see at the moment is for visualisation purposes only and is not a real part of the model, so turn it off by selecting Display->Ground Plane from the Display menu. This new ground-plane polygon will not be created as a child of any of the previous primitives, so make sure that none are selected by either clicking on the Primitives entry in the Project Overview or by clicking in a region of the image window that does not contain any primitives. Now click on the xz-polygon icon in the primitive toolbar. A quadrilateral should appear in the image window. In order to create a larger polygon, zoom out slightly and then drag one of its corners towards the camera, as shown on the right.
13. Now select the opposite corner of the primitive and drag it towards the building. This changes the scale of the primitive. Pin this corner into a position so that the new ground-plane extends beyond the building you've just modelled.
14. Finally, see if you can create a new box primitive to model the small wall on the right of the building. Make this primitive a child of the original building. To set its position, you will need to select hierarchy constraints so that the new box sits inside and at the bottom of its parent (Y-Min-In), and so that in the Z direction it sits outside its parent in the direction of minimum Z (Z-Min-Out). A more complete model is included in the project file reconstruction3.ipr.

15. Once the model is completed, ICARUS can automatically extract texture maps for each primitive. Select the Textures->Extract Textures option from the Texture menu. This shows a dialog that allows you to specify the frames from which to extract the textures, as well as the method used to fill in any missing areas. Click the Smear Fill option, and then the Extract button. After a few minutes, texture extraction will complete.
16. To view the final textured model, make sure a primitive is selected (either in the Project Overview, or by clicking on one in the image window). Now select the Display->Textures option from the Render menu, and the Viewpoint button on the toolbar. You will now be able to view the model from different directions by clicking with the left, middle, or right mouse buttons in the image window. This allows you to rotate, zoom, and translate the viewpoint respectively (hint: turning off the background image can make for a clearer view). The model may also be saved in Inventor V2.0, VRML97, Maya, or Lightwave LWO2 formats by selecting the Project->Export Model option from the Project menu.
2. Before reconstructing any geometry, we need to create some features. These will be used as the vertices of a triangular mesh. Select Mesh->New Feature from the Mesh menu, and open the Features list in the Project Overview. You should see a feature called Feature1. Make sure this feature is highlighted in the Project Overview, and then click with the left mouse button to position it at the corner of the building. Feature positioning works in exactly the same way as for user-features (hold the Shift key to get a zoomed-in view). You should see a feature appear in the image, with a blue line drawn through its centre. This line is a guide-line that should always pass through the centre of the scene point that this feature represents. You can see this by moving through the frames of the sequence and checking that the blue line always passes through the corner of the building.

3. These guide-lines give you hints as to where else to mark the feature. If you can mark a feature in more than one image, its 3D location can be calculated. Move to another frame in the sequence and click to place the feature in the correct position. The accuracy of the calculated feature position increases with the distance between the camera centres, and also with the number of feature marks. Move through the sequence to see if the feature has been positioned correctly at the top of the building (see the projects/mesh1.ipr project file). Repeat the process, generating 3 more features at the other corners of the side face of the building. Pressing the right mouse button over a feature displays the feature popup, allowing you to hide, delete, and rename features.
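The reason two or more marks fix a 3D location is triangulation: each mark back-projects to a ray from its camera centre, and the point lies where the rays (nearly) meet. A toy two-ray sketch using the midpoint of the shortest segment between the rays (ICARUS's solver handles many marks and noise more robustly):

```python
# Toy midpoint triangulation of a point from two camera rays.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(c1, d1, c2, d2):
    """c1, c2: camera centres; d1, d2: ray directions.
    Returns the midpoint of the shortest segment between the two rays."""
    w0 = tuple(p - q for p, q in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b      # near 0 when rays are parallel (tiny baseline)
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(ci + t1 * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t2 * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

r2 = math.sqrt(2.0)
# Two cameras one unit apart, both seeing a point at (0, 0, 1):
p = triangulate((0, 0, 0), (0, 0, 1), (1, 0, 0), (-1 / r2, 0, 1 / r2))
print(p)  # approximately (0.0, 0.0, 1.0)
```

The `denom` term also explains the accuracy remark above: the closer the camera centres (the smaller the baseline), the nearer the rays are to parallel and the less stable the intersection.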
4. Now that we've created some features, we can connect them into a triangular mesh. Select Mesh->New Mesh from the Mesh menu. This will create a mesh called Mesh1. Highlight this mesh in the Project Overview by clicking on it, and then move your mouse pointer over one of the features in the image window. You will see that a small blue circle appears around it. Clicking on a feature selects it and turns the circle yellow. Once three features have been selected, they are connected together to form a triangular face in the mesh. Create a face and then move through the sequence to see that the triangle accurately tracks the side of the building (see projects/mesh2.ipr). At any time, you can right-click on the triangle to display a popup menu for this mesh.
5. Connect three more vertices in the mesh to form another triangle, as shown on the right (also see projects/mesh3.ipr). You can also click the Viewpoint button on the toolbar and view the model from a different angle. Meshes may be textured and saved out from ICARUS as normal. Try to create more features and build a more complete model of the building. You can also mix meshes with normal primitives if you wish.
6. Quick Reference
6.1 Distortion module
6.1.1 Menu options
6.1.1.1 Project menu
Project->New: start a new project
Project->Open: open an existing project
Project->Save: save the current project
Project->Save As: save the current project under a different name
Project->Load Lens File: load a previously estimated sequence of distortion parameters
Project->Save Lens File: save the current sequence of distortion parameters
Project->Import Image: import a static image
Project->Export Image: save a new image, after removing or adding lens distortion
Project->Import Movie: import a movie file
Project->Export Movie: save a new movie file, after removing or adding lens distortion
Project->Preferences: display the preferences dialog
Project->Quit: quit ICARUS
<num features> <num> <name> <x> <y> <z> ...
Project->Export Image->With Tracking Overlay: saves the current image (or a single frame from the current movie) with the feature tracks etc. overlaid
Project->Export Image->Motion Filter: filter out moving objects from the current frame and save it as a single image
Project->Export Image->Image Mosaic: merges all movie frames into a single image mosaic and saves it as an image file
Project->Export Image->Optical Flow: calculate the optical flow from the current frame into the next and save it in the FLO file format (see Chapter 2)
Project->Export Movie->With Tracking Overlay: as above, but saves an entire movie rather than the current frame
Project->Export Movie->Motion Filter: filter out moving objects from the current pan/tilt/zoom movie sequence
Project->Export Movie->Stabilize: stabilizes the camera motion (the 2D version requires a user-feature to be highlighted in the Project Overview; the 3D version requires the camera motion to be calibrated)
Project->Export Movie->Optical Flow: as above, but saves a FLO image for each frame of the movie
Project->Preferences: display the preferences dialog
Project->Quit: quit ICARUS
Camera->Calibrate: calibrate the current image/video sequence
Camera->Bundle Adjust: apply a metric bundle adjustment algorithm to a calibrated sequence
Camera->Orient Scene: orient a scene using either user-specified coordinate axis lines and origin points, or the currently selected set of user/auto features
Camera->Delete Calibration: delete any camera motion and calibration data
Format->Format: specify NTSC or PAL frame rate
Format->Frame Rate: specify an alternative frame rate
Format->Interlacing: specify the method to correct for any interlacing in a video sequence. None makes no changes, Use Upper Field uses only the upper field of each frame, Use Lower Field uses the lower, Average Fields takes an average of both upper and lower fields, and Independent treats each field as a separate frame, doubling both the length of the sequence and the frame rate
Format->Field Dominance: for the Independent interlacing mode, specify whether the upper or lower field occurs first
Format->3:2 Pulldown: for NTSC sequences, specify the method for removing the 3:2 pulldown to translate back to 24fps footage
Format->Analyze Pulldown: attempt to automatically analyze the NTSC footage to determine the method for removing 3:2 pulldown. Pulldown analysis can be stopped at any point by clicking the Stop button in the progress dialog
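The field-based interlacing modes are easy to picture if a frame is treated as a list of scanlines: even-indexed rows form the upper field, odd-indexed rows the lower. A toy sketch of three of the modes (illustrative only; ICARUS also resamples the kept field back to full frame height):

```python
# Sketch of interlacing modes on a frame stored as a list of rows.
# Rows 0, 2, 4, ... are the upper field; rows 1, 3, 5, ... the lower.
def upper_field(frame):
    return frame[0::2]

def lower_field(frame):
    return frame[1::2]

def average_fields(frame):
    return [[(a + b) / 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(upper_field(frame), lower_field(frame))]

frame = [[10, 10], [20, 20], [30, 30], [40, 40]]
print(upper_field(frame))     # [[10, 10], [30, 30]]
print(average_fields(frame))  # [[15.0, 15.0], [35.0, 35.0]]
```

The Independent mode simply emits `upper_field` and `lower_field` as two separate frames, which is why it doubles both the sequence length and the frame rate.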
When this icon is set, objects are allowed to rotate around their X axis
This icon allows objects to rotate around their Y axis
Setting this icon allows objects to rotate around their Z axis
The next three sets of constraints operate on the X, Y, and Z directions of a primitive. These constraints operate relative to the coordinate system of a primitive's parent in the Project Overview:
Centre: constrains a primitive's X/Y/Z position so that the centre of its bounding box is the same as the centre of its parent's
Min Out: constrains a primitive's X/Y/Z position so that it sits outside its parent's bounding box in the negative X/Y/Z direction
Min In: constrains a primitive's X/Y/Z position so that it sits inside its parent's bounding box in the negative X/Y/Z direction
Max In: constrains a primitive's X/Y/Z position so that it sits inside its parent's bounding box in the positive X/Y/Z direction
Max Out: constrains a primitive's X/Y/Z position so that it sits outside its parent's bounding box in the positive X/Y/Z direction
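Along one axis, the five positional constraints reduce to simple interval arithmetic on the parent's bounding box. A sketch of one plausible interpretation of the icons (this is a reading of the descriptions above, not ICARUS source code):

```python
# Sketch: the interval a child primitive occupies along one axis, given
# the parent's bounding interval [pmin, pmax] and the child's size.
def constrain(pmin, pmax, size, mode):
    centre = (pmin + pmax) / 2
    return {
        "centre":  (centre - size / 2, centre + size / 2),
        "min_out": (pmin - size, pmin),   # outside parent, negative direction
        "min_in":  (pmin, pmin + size),   # inside parent, at its min face
        "max_in":  (pmax - size, pmax),   # inside parent, at its max face
        "max_out": (pmax, pmax + size),   # outside parent, positive direction
    }[mode]

# A child of size 2 against a parent spanning [0, 10] on the Y axis:
print(constrain(0.0, 10.0, 2.0, "max_out"))  # (10.0, 12.0): sits on top
print(constrain(0.0, 10.0, 2.0, "min_out"))  # (-2.0, 0.0): below the parent
```

In the reconstruction tutorial, the default Y Max Out for a new box corresponds to the "sits on top of its parent" case, and switching to Z Min Out moves the box to the parent's negative-Z face.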
Hidden: toggle the selected mesh as hidden/not hidden
Occluder: toggle the selected mesh as an occluder. When in shaded/texture display mode, occluders are only drawn into the Z-buffer
Rename: rename the selected mesh
Texture->Set Resolution: specify the size of the texture maps for this mesh
Texture->Clear Facet Texture: delete the texture associated with this mesh face
Texture->Clear Mesh Texture: delete the texture associated with this entire mesh
Texture->Pull Facet Texture: extract a texture map for the current face from this frame
Texture->Pull Mesh Texture: extract textures for the entire primitive from this frame
7. Notes
7.1 Inliers and outliers
Looking at the log file that is generated during calibration, you might be wondering what inliers and outliers are. After ICARUS has worked out the location of the feature points in 3D space and the position of the cameras, it can project each 3D point into each image and see if the projection agrees with the feature track in that image. If a feature projects to the same location as its track, it is called an "inlier"; otherwise it is an "outlier". If all feature projections agree with all feature tracks, then there are no outliers. The level of agreement required is controlled by the residual error: the default of 1 pixel means that a feature must project to within 1 pixel of its track to be considered an inlier.

The "percentage outliers" number in the calibration dialog is used to specify roughly how many outliers are expected. If fewer outliers are expected, some parts of the calibration algorithm can be sped up. Reducing this number will not always speed things up, however, as the algorithms are already smart enough to speed themselves up when they see that very few features are outlying. The main use of this number is that it can be increased if things are going badly; this makes ICARUS spend more time calculating some of the parameters it requires.
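The inlier test described above can be sketched in a few lines. The projection model below is a deliberately simplified pinhole camera (centre at the origin, looking down +Z) with made-up numbers; it illustrates the reprojection-distance test, not ICARUS's internal maths:

```python
# Sketch of the inlier/outlier test: project a reconstructed 3D point
# into a frame and compare with its tracked 2D position.
import math

def project(point, focal, cx, cy):
    """Simple pinhole projection, camera at origin looking down +Z."""
    x, y, z = point
    return (cx + focal * x / z, cy + focal * y / z)

def is_inlier(point, track_xy, focal, cx, cy, threshold=1.0):
    """True if the reprojection lands within `threshold` pixels of the track."""
    px, py = project(point, focal, cx, cy)
    return math.hypot(px - track_xy[0], py - track_xy[1]) <= threshold

p = (0.5, 1.0, 5.0)
print(project(p, 1000.0, 360.0, 288.0))                    # (460.0, 488.0)
print(is_inlier(p, (460.4, 488.3), 1000.0, 360.0, 288.0))  # True
print(is_inlier(p, (465.0, 488.0), 1000.0, 360.0, 288.0))  # False: outlier
```

The `threshold` default of 1.0 mirrors the 1-pixel residual error mentioned above; raising it admits more tracks as inliers at the cost of accuracy.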
The buttons on the bottom row of the dialog allow you to create and delete colour ranges. The current list of colour ranges is given in the drop-box at the top-left. Each colour range can be used to mask out a particular set of colours. For example, if you've got a sequence that has been shot against a green-screen with tracking markers, you might want
to quickly mask out everything except the screen and markers. This could be done with two different colour ranges: one for the green screen and one for the tracking markers. You can do this very easily by first creating a new colour range using the New button. Now click the Pick button. This allows you to pick up colours from your image and store them in the region. Holding the left mouse button and dragging the dropper cursor over your image will pick up the colours and create the initial region (see below). After colours have been picked up, their ranges are drawn within the 3 colour channels. By default, colour ranges operate in HLS colour space, but using the drop-box in the top-right corner of the dialog, you can change this to YUV, YIQ, or even RGB.
You will probably notice that when picking up colours it is very difficult to capture all the colours necessary for an accurate mask. You can adjust the colour ranges by hand by dragging the left and right-hand vertical lines delimiting the range in each channel. In the example below, the H, L and S colour ranges have all been extended until the matte covers the background green-screen.
You can create more than one colour range using the New button, and once you have masked out the background colour correctly, right-clicking on the matte name in the Project Overview and selecting Invert will give you a correctly masked image, as shown on the right.
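The per-pixel colour-range test can be sketched roughly like this. This is a pure-Python illustration of the idea, assuming colour components in [0, 1]; the helper names and the sample ranges are hypothetical, and none of this is Icarus code:

```python
import colorsys

def in_range(rgb, lo, hi):
    """True when the pixel's HLS values all fall inside the
    per-channel [lo, hi] bounds, mimicking one colour range.
    lo and hi are (H, L, S) triples."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return all(lo[i] <= v <= hi[i] for i, v in enumerate((h, l, s)))

def mask(image, ranges, invert=False):
    """Matte = True for pixels matched by ANY colour range.
    `invert` flips the matte, like Invert in the Project Overview."""
    m = [[any(in_range(px, lo, hi) for lo, hi in ranges) for px in row]
         for row in image]
    if invert:
        m = [[not v for v in row] for row in m]
    return m

# A hue band around 1/3 with high saturation catches pure green
green_range = ((0.25, 0.2, 0.5), (0.45, 0.8, 1.0))
img = [[(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]]   # one green, one red pixel
print(mask(img, [green_range]))  # [[True, False]]
```

Combining several ranges and then inverting, as in the green-screen example above, produces a matte that keeps only the unmatched foreground.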
7.4 Estimating focal lengths and orienting the scene using vanishing points
The accuracy of focal length estimation using vanishing points depends strongly on how well the edges are marked by the user. For each axis, ICARUS intersects the set of edges the user has marked in order to locate the vanishing point in the image that corresponds to that coordinate axis. The locations of two or more vanishing points in a single frame are then used to estimate the focal length of the camera for that frame. One or more vanishing points in a single frame can also be used to orient the coordinate system of the calibrated scene. Because there will always be a small amount of error in the placement of these edges, the vanishing point that is identified will only be an estimate. Consider the two cases shown below: in the case on the left, the edges for one axis are almost parallel, and in the case on the right they are not.
[Figure: nearly-parallel edges (left) and non-parallel edges (right)]
The intersection point for the lines on the left is very far from the image, because the lines are almost parallel (in the limit, when the lines are exactly parallel, they intersect at infinity). Conversely, the intersection point for the lines on the right is much nearer the image. Because of this, errors in the placement of the edges affect the location of the vanishing point in different ways: small pixel errors for the nearly-parallel edges translate into large variations in the location of the vanishing point, whereas small pixel errors for the non-parallel edges have little relative effect on the accuracy of the vanishing point estimate. This leads to rule number one for estimating focal lengths with vanishing points: try to avoid edges that are nearly parallel.

An alternative way to increase the accuracy of the vanishing point is to increase the number of edges used to estimate its position. Rule two is therefore: use as many edges as possible.

If you can get estimates of the vanishing points for two or more orthogonal sets of edges in a single frame, then ICARUS can estimate the focal length for that image. Orthogonal edges are those that are at 90 degrees to each other in the scene (e.g. the edges of a cube). For orienting the scene after calibration, however, at least one vanishing point is needed. The way in which the scene is oriented depends on the number used: with one edge direction, the coordinate system is oriented so that the chosen coordinate axis matches the direction calculated from the vanishing point. For example, if you mark out vertical edges in a frame and then orient the scene, the Y (up) direction in your calibration will point upwards. The orientation of the horizontal plane around the Y axis will still be undetermined.
If you mark two or three orthogonal vanishing points (and because of the properties of orthogonal lines, three is the most you can get), then you will be able to orient the horizontal plane as well.
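The underlying geometry can be sketched as follows. Lines and their intersections are conveniently handled in homogeneous coordinates, and two vanishing points of orthogonal scene directions constrain the focal length under common simplifying assumptions (principal point at the image centre, square pixels, zero skew). This is an illustration of the standard technique, not the Icarus implementation; all coordinates below are made up:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points."""
    return np.cross([*p, 1.0], [*q, 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines is their cross product."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

def focal_from_vps(v1, v2, c):
    """Vanishing points v1, v2 of two orthogonal scene directions
    satisfy (v1 - c).(v2 - c) + f^2 = 0 when the principal point c
    is known, pixels are square and skew is zero."""
    c = np.asarray(c, float)
    return np.sqrt(-np.dot(np.asarray(v1) - c, np.asarray(v2) - c))

# Two marked edges meeting at a vanishing point of (820, 240)
v1 = intersect(line_through((0, 240), (820, 240)),
               line_through((820, 0), (820, 240)))
print(v1)                                            # [820. 240.]
print(focal_from_vps(v1, (-180, 240), (320, 240)))   # 500.0
```

With noisy edges, the two lines per axis would be replaced by a least-squares intersection of many lines, which is exactly why rule two above asks for as many edges as possible.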
This can sometimes be solved by running a metric bundle adjustment, constraining the aspect ratio, skew, and principal point parameters. Also, if you have specified a constant focal length, constrain the focal length parameter during bundle adjustment as well.

2. The focal length looks very wrong. If you have an approximate value for the focal length (it doesn't have to be very accurate), open the Camera Parameters dialog, select Fixed Focal Length, and enter the value on the right (you can specify it in units of pixels or millimetres, if you know the camera's aperture height). Now click the Re-Calibrate button. This attempts to adjust the camera motion and feature positions so that the focal length of the camera is as close as possible to the value you've specified (it might not be exact, but it should get pretty near). After this is done, take a look at the variation of aspect ratio, skew and principal point by selecting their entries in the Project Overview. If things look bad, run a metric bundle adjustment with the appropriate constraints (and if your initial focal length value is known to be quite accurate, select the Constrain Focal option as well).

3. Everything else has failed. It won't calibrate! Try adjusting the camera parameters by hand using key-points in the graph window. This is a last resort.
7.7.1 Auto-tracking
This example of auto-feature tracking uses the building2 movie file that comes with the Icarus distribution, and the default tracking parameters.
1. As tracking is performed, the percentage of features that are successfully tracked from one frame to the next is recorded in the graph.
2. Once back-tracking starts, the graph records the percentage of newly introduced features that are successfully tracked backwards for each frame. You can see that this percentage is slightly lower than during forward tracking.
1. At the start of phase 1, the graph displays the percentage of inliers. An inlier is a feature track whose predicted position (given by its 3D location and the estimated camera) matches the tracked location. If all features and cameras are estimated accurately, then the number of inliers will be 100%. As you can see from the graph above, the early stages of phase 1 produce a large percentage of inliers, starting from 100% and then slowly decreasing.
2. Once a certain point is reached in the calibration, Icarus tries to re-estimate the camera positions using 3D feature positions that it has estimated (this is known as resectioning). The block of data shown above indicates how well this can be performed (again, green is good...)
3. At certain points in the calibration process, Icarus must merge two different chunks of camera data together. For example, it might have reconstructed camera motion from frames 40 to 50 and from frames 50 to 55. During phase 1, these camera motions are merged together to estimate the motion between frames 40 and 55. The regions of the graph highlighted above indicate how well this merging can be achieved. You will notice that each region contains 4 bars. At first, the quality is quite low (indicated by the first two bars in each group). The important thing to notice is that immediately after the low quality graph bars, there are two bars that indicate that the final result of the merge is actually of high quality. It is these final two bars that are important and affect the rest of the calibration process. In this case, the final two bars in each group indicate a high quality merge.
4. Once phase 2 of the calibration process is reached, the graph is cleared. Phase 2 is the merging phase that hopefully results in a complete estimate of the camera motion through the entire sequence (see the research papers mentioned in Section 8). Again, the graph indicates the percentage of inliers found after each merging step. The example highlighted above shows two entire merging passes through the sequence. Each merging pass is delimited by a vertical black line, and within each pass the bars can be grouped into sets of four, as in the previous step. Again, the first pair of bars in each set indicates the initial merge quality, with the second pair giving the final quality. In this case, the first merging pass has 3 sets of 4 bars, and for each set the final merge quality is very good.
5. The third phase of the calibration process involves estimating camera parameters such as focal length and skew. In this case, the height of the bars indicates how close to the required values the calibration algorithm has been able to get. In the example shown above, the bars are almost at the top of the graph, meaning that the camera parameters are fairly well behaved and have been estimated quite reliably.
6. Finally, phase 4 of the calibration process runs a bundle adjustment on all the cameras and feature locations to try and improve the estimates of the camera parameters. Each step in the bundle adjustment process is separated by a vertical black line, and contains a series of vertical bars indicating how well the camera parameters match their desired values. In the example above, the first few sets of bars are of relatively low quality, but as the bundle adjustment progresses the quality increases until the bars are almost all at the top of the graph.
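At its core, bundle adjustment minimises the total reprojection error over the camera parameters and feature locations. Icarus uses a far more sophisticated optimiser, but the idea can be illustrated with a toy one-parameter example that refines a focal length by a simple grid search over noiseless synthetic data (all numbers hypothetical):

```python
import numpy as np

def reproj_error(f, points3d, tracks, c=(320.0, 240.0)):
    """Sum of squared pixel residuals for a pinhole camera at the
    origin with focal length f and principal point c."""
    err = 0.0
    for X, x in zip(points3d, tracks):
        proj = np.array([f * X[0] / X[2] + c[0], f * X[1] / X[2] + c[1]])
        err += np.sum((proj - np.asarray(x)) ** 2)
    return err

# Synthesise tracks from a known focal length, then "adjust" f by
# minimising the reprojection error over a grid of candidates.
points3d = [(1.0, 0.5, 4.0), (-0.5, 1.0, 2.0), (0.2, -0.3, 3.0)]
true_f = 500.0
tracks = [(true_f * X[0] / X[2] + 320.0, true_f * X[1] / X[2] + 240.0)
          for X in points3d]
fs = np.linspace(100, 1000, 9001)
best = fs[np.argmin([reproj_error(f, points3d, tracks) for f in fs])]
print(best)  # recovers a value very close to 500.0
```

A real bundle adjustment optimises thousands of parameters simultaneously with a non-linear least-squares solver, which is why each constrained parameter (focal length, skew, principal point) appears as its own bar in the graph.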
1. During phase 1 of a pan/tilt/zoom calibration, the graph shows the percentage of inliers as different parts of the overall camera motion are estimated. These different sections are separated by vertical black lines. In the example above, you can see that for each part, the number of inliers starts off at 100% and then decreases until it is decided that the camera motion can be reliably estimated.
2. Phase 2 of the calibration process is almost identical to phase 3 of the free-motion calibration. For a description of this phase, see Section 7.4.2.
3. The final phase of the pan/tilt/zoom calibration is a bundle adjustment, and is very similar to phase 4 of the free-motion calibration. See Section 7.4.2 for a description of the graph during this phase.
8. References
This section provides links to research papers describing the internals of the ICARUS system in more detail. These papers are available from http://aig.cs.man.ac.uk/icarus/papers.php

S. Gibson, J. Cook, T.L.J. Howard, R.J. Hubbold, and D. Oram, "Accurate Camera Calibration for Off-line, Video-Based Augmented Reality", IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2002), Darmstadt, Germany, September 2002.

S. Gibson, J. Cook, T.L.J. Howard, and R.J. Hubbold, "ICARUS: Interactive Reconstruction from Uncalibrated Image Sequences", ACM Siggraph 2002 Conference Abstracts and Applications, San Antonio, Texas, July 2002.

S. Gibson, R.J. Hubbold, J. Cook, and T.L.J. Howard, "Interactive Reconstruction of Virtual Environments from Video Sequences", Computers and Graphics, Volume 27, Number 2, 2003 (to appear).