The word photogrammetry has been derived from three Greek words:
o Photos: means light
o Gramma: means something drawn or written
o Metron: means to measure
This definition, over the years, has been enhanced to include interpretation as well as
measurement with photographs.
Photogrammetry is the art, science, and technology of obtaining reliable information about
physical objects and the environment through processes of recording, measuring, and interpreting
photographic images and patterns of recorded radiant electromagnetic energy and other
phenomena.
Originally, photogrammetry was considered the science of analysing only photographs.
It now also includes the analysis of other records, such as radiated acoustical
energy patterns and magnetic phenomena.
(1) Metric:
It involves making precise measurements from photos and other information sources to
determine, in general, the relative locations of points. The most common application is the
preparation of planimetric and topographic maps.
(2) Interpretative:
It involves the recognition and identification of objects and judging their significance through
careful and systematic analysis. It includes photographic interpretation, which is the study of
photographic images, as well as interpretation of images acquired in remote sensing using
photographic images, MSS, infrared, TIR, SLAR, etc.
Definitions
Aerial Photogrammetry
Photographs of the terrain in an area are taken by a precision photogrammetric camera mounted in
an aircraft flying over the area.
Terrestrial Photogrammetry
Photographs of the terrain in an area are taken from a fixed, and usually known, position on or
near the ground, with the camera axis horizontal or nearly so.
Photo-interpretation
Aerial or terrestrial photographs are used to evaluate, analyse, classify, and interpret
images of objects which can be seen on the photographs.
Applications of photogrammetry
Photogrammetry has been used in several areas. The following descriptions give an overview of
its various application areas.
(1) Geology:
Structural geology, investigation of water resources, analysis of thermal patterns on the earth's
surface, geomorphological studies including investigations of shore features, engineering
geology, stratigraphic studies, general geologic applications, study of luminescence
phenomena, and recording and analysis of catastrophic events such as earthquakes, floods, and
eruptions.
(2) Forestry:
Timber inventories, cover maps, acreage studies
(3) Agriculture
Soil type, soil conservation, crop planting, crop disease, crop-acreage.
(4) Design and construction
Photogrammetry provides the data needed for site and route studies, specifically for comparing
alternate schemes. It is used in the design and construction of dams, bridges, and transmission lines.
(5) Planning of cities and highways
New highway locations, detailed design of construction contracts, planning of civic
improvements.
(6) Cadastre
Cadastral problems such as determination of land lines for assessment of taxes. Large scale
cadastral maps are prepared for reapportionment of land.
(7) Environmental Studies
Land-use studies.
(8) Exploration
To identify and narrow down areas for various exploratory jobs such as oil or mineral
exploration.
(9) Military intelligence
Reconnaissance for deployment of forces, planning manoeuvres, assessing effects of operations,
and investigating problems related to topography, terrain conditions, or works.
(10) Medicine and surgery
Stereoscopic measurements on human body, X-ray photogrammetry in location of foreign
material in body and location and examinations of fractures and grooves, biostereometrics.
(11) Miscellaneous
Crime detection, traffic studies, oceanography, meteorological observation, Architectural and
archaeological surveys, contouring beef cattle for animal husbandry etc.
Categories of photogrammetry
Photogrammetry is divided into different categories according to the types of
photographs or sensing system used or the manner of their use as given below:
(1) On the basis of orientation of camera axis:
(i) Terrestrial or ground photogrammetry
When the photographs are obtained from the ground station with camera axis
horizontal or nearly horizontal
(ii) Aerial photogrammetry
If the photographs are obtained from an airborne vehicle. The photographs are
called vertical if the camera axis is truly vertical or if the tilt of the camera axis is less
than 3°. If the tilt is more than 3° (often introduced intentionally), the photographs are called
oblique photographs.
(2) On the basis of sensor system used:
The following names are popularly used to indicate the type of sensor system used in
recording imagery.
Radargrammetry: Radar sensor
X-ray photogrammetry: X-ray sensor
Hologrammetry: Holographs
Cine photogrammetry: motion pictures
Infrared or colour photogrammetry: infrared or colour photographs
(3) On the basis of principle of recreating geometry
When single photographs are used, without stereoscopic effect, it is called
monoscopic photogrammetry. If two overlapping photographs are used to generate a
three-dimensional view (a relief model), it is called stereophotogrammetry. It is
the most popular and widely used form of photogrammetry.
(4) On the basis of procedure involved for reducing the data from
photographs
Three types of photogrammetry are possible under this classification:
(a) Instrumental or analogue photogrammetry
It involves photogrammetric instruments to carry out tasks.
(b) Semi-analytical or analytical
Analytical photogrammetry solves problems by establishing mathematical
relationships between coordinates on the photographic image and real-world objects. The semi-
analytical approach is a hybrid approach, using instrumental as well as analytical principles.
(c) Digital Photogrammetry or softcopy photogrammetry
It uses digital image processing principles and analytical photogrammetry tools to
carry out photogrammetric operations on digital imagery.
(5) On the basis of platforms on which the sensor is mounted:
If the sensing system is spaceborne, it is called space photogrammetry, satellite
photogrammetry or extra-terrestrial photogrammetry.
Out of various types of the photogrammetry, the most commonly used forms are
stereophotogrammetry utilizing a pair of vertical aerial photographs (stereopair) or
terrestrial photogrammetry using a terrestrial stereopair.
CLASSIFICATION OF PHOTOGRAPHS
The following paragraphs give details of classification of photographs used in different
applications
(1) On the basis of the alignment of optical axis
(a) Vertical : If optical axis of the camera is held in a vertical or nearly vertical position.
(b) Tilted : An unintentional and unavoidable inclination of the optical axis from vertical
produces a tilted photograph.
(c) Oblique : Photograph taken with the optical axis intentionally inclined to the vertical.
Types of projections
1. Parallel : The projecting rays are parallel.
2. Orthogonal : Projecting rays are perpendicular to plane of projection. This is a special
case of parallel projection. Maps are orthogonal projection. The advantage of this
projection is that the distances, angles, and areas in plane are independent of elevation
differences of objects.
3. Central : Central projection is the starting point for all photogrammetry. In this
projection rays pass through a point called the projection center or perspective center.
The image projected by a lens system is treated as central projection although in
strictest senses it is not so.
X-axis of photo
Line on the photo between opposite collimation marks that most nearly parallels the flight direction.
Y-axis
Line normal to the x-axis, joining opposite collimation marks.
Principal point (o)
The point where the perpendicular dropped from the front nodal point strikes the photograph or the point
in which camera axis pierces the image plane.
Camera axis
It is a ray of light incident at front nodal point in the object space and at right angles to the image plane.
Fiducial marks or collimation marks
Index marks, usually four in number, rigidly connected with the camera lens through the camera body,
forming images on the photograph to which positions on the photograph can be referred.
Photograph center
The geometrical center of the photograph as defined by the intersection of the lines joining the fiducial
marks.
Format
It is the planar dimension of photograph (9" x 9", 7" x 7", 23 cm x 23 cm, 18 cm x 18 cm, 15 cm x 15
cm).
Photogram
Photograph taken with a photogrammetric camera having fixed distance between negative plane and lens
and equipped with fiducial or collimating marks. For photograms the bundle of rays on the object side at
the moment of exposure can be reproduced. To achieve this the following data known as the elements of
interior orientation must be known:
Calibrated focal length
Lens distortion data
Location of the principal point with reference to the photograph center (normally these two
coincide)
Perspective Axis
The line CD, where the two planes meet, is called the perspective axis or horizontal trace.
Principal lines
A line VP drawn perpendicular to the perspective axis along the photograph plane projects
as vp on the ground plane (CDEF), also perpendicular to the perspective axis. These lines are
called the photo and ground principal lines respectively.
Principal plane
A plane containing P, V, and S is called the principal plane. Photo principal line (VP) and ground
principal lines (vp) are contained in this plane.
This shows that the scale along the plate parallel through the isocentre of a tilted photo is the same as
that over the whole surface of a vertical photo, if the ground surface is a plane. For any other plate
parallel, the scale depends on the tilt angle. Also, the scale along any plate parallel is constant.
Ground co-ordinates for vertical photographs
In figure 3, X and Y are ground coordinates with respect to a set of axes whose directions are parallel
to the photographic axes and whose origin is directly below the exposure station; x and y indicate
photo coordinates with respect to the photo coordinate system with origin at o, as shown. Using
similar triangles, we can write the following relations:
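The relations themselves appeared as images in the source; for a vertical photograph the similar triangles yield the standard expressions below, stated in the section's notation (h_A is the elevation of point A above datum):

```latex
X_A = \frac{x_a\,(H - h_A)}{f}, \qquad
Y_A = \frac{y_a\,(H - h_A)}{f}, \qquad
AB = \sqrt{(X_B - X_A)^2 + (Y_B - Y_A)^2}
```

The last expression gives the horizontal length of a ground line AB from the two points' ground coordinates, and is the basis of numerical problem 6 below.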
Figure 3: Ground coordinates from vertical photographs
Flying height for vertical photographs
The flying height can be calculated by two approaches
Direct
Indirect
Direct Method
In this method, if the ground coordinates of two points A and B are given as (XA, YA) and (XB, YB), then a
quadratic equation in H can be formed to derive the flying height.
Indirect Method
In this method, one can find the flying height by an iterative approach. For this one can use equations (1)
and (2), where hAB = average elevation of points A and B, Happ = approximate flying height, and AB =
known ground distance. Happ is obtained as a first approximation from the scale relation and then
refined iteratively.
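The iteration can be sketched in a few lines of code. This is a minimal illustration, not the textbook's own listing; the update rule that rescales H about the average terrain elevation is one simple choice that converges for near-vertical photography.

```python
import math

def flying_height(xa, ya, ha, xb, yb, hb, AB, f, tol=0.01):
    """Iteratively estimate flying height H above datum for a vertical photo.

    Photo coordinates xa..yb and focal length f share one unit (e.g. mm);
    elevations ha, hb, ground distance AB and the returned H share another (m).
    """
    ab = math.hypot(xb - xa, yb - ya)         # scaled photo length of the line
    h_avg = (ha + hb) / 2.0
    H = f * AB / ab + h_avg                   # first approximation H_app
    for _ in range(100):
        # ground coordinates implied by the current H: X = x(H - h)/f, etc.
        XA, YA = xa * (H - ha) / f, ya * (H - ha) / f
        XB, YB = xb * (H - hb) / f, yb * (H - hb) / f
        d = math.hypot(XB - XA, YB - YA)      # ground length implied by H
        H_new = h_avg + (H - h_avg) * AB / d  # rescale until d matches AB
        if abs(H_new - H) < tol:
            return H_new
        H = H_new
    return H
```

As a synthetic check, a camera at H = 3000 m with f = 210 mm over two points at 200 m and 400 m elevation is recovered to within the tolerance.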
Numerical problems
1. The distance on a map between two road intersections in flat terrain measures 12.78 cm. The
distance between the same two points is 9.25 cm on vertical photograph. If the scale of the map is
1: 24,000, what is the scale of the photograph?
2. Fifteen photographs were taken in a strip each covering an area equal to 25.75 sq. km. If the
longitudinal overlap is 60%, find the total ground area covered by the strip.
3. An aircraft takes photographs at a scale of 1:10,000. Photo size is 23 x 23 cm. Overlaps are:
longitudinal 65% and lateral 30%. The photography consists of 5 strips of 21 photographs each.
Calculate: (a) Ground area covered by a single photograph. (b) Ground area covered by the first
strip. (c) Ground area covered by the whole photography.
4. A vertical photograph was taken with H above datum = 2400 m and f = 210 mm. The highest,
lowest, and average elevation of terrain appearing in the photograph is 1330, 617, and 960 m
respectively. Calculate minimum, maximum, and average photographic scale.
5. An aircraft flying at an altitude of 4600 m above MSL photographs 5 strips of 20 photographs
each of a terrain having h avg = 300 m above MSL. If f = 205.53 mm, find the scale of
photograph and area covered by each photograph of size 23 x 23 cm. Assuming 60% forward and
20% sidelap, find total area covered by the photography.
6. Points A and B are at elevations 273 m and 328 m above datum, respectively. The photographic
coordinates of their images on a vertical photograph are:
xa = -68.27 mm xb = -87.44 mm
ya = -32.37 mm yb = 26.81 mm
What is the horizontal length of the line AB if the photo was taken from 3200 meters above datum
with a 21 cm focal length camera?
7. An image of a hilltop is 87.5 mm from the centre of a photograph. The elevation of the hill is 665
meters and the flight altitude 4660 meters from the same datum. How much is the image displaced
due to elevation of the hill?
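The answer key for these problems was given as images in the original and is lost; the computations below rework several of them from the standard vertical-photograph relations (scale = f/(H − h), ground coordinate X = x(H − h)/f, relief displacement d = r·h/H'). Treat the rounded values as a sketch of the solutions, not the original answers.

```python
import math

# Problem 1: photo scale from a map distance
ground = 12.78 * 24000 / 100           # map cm -> ground metres = 3067.2 m
scale1 = ground / (9.25 / 100)         # photo-scale denominator, ~1:33,160

# Problem 2: strip coverage with 60% longitudinal overlap
area2 = 25.75 * (1 + 14 * (1 - 0.60))  # first photo + 14 net gains, sq. km

# Problem 3: 1:10,000 photos, 23 cm format, 65% / 30% overlaps
side = 0.23 * 10000 / 1000             # ground side of one photo, km
a_single = side ** 2                   # (a) one photo
a_strip = side * (side * (1 + 20 * 0.35))               # (b) 21-photo strip
a_block = (side * (1 + 20 * 0.35)) * (side * (1 + 4 * 0.70))  # (c) 5 strips

# Problem 4: min/max/average scale, f = 210 mm, H = 2400 m above datum
f = 0.210
s_min = (2400 - 617) / f               # min scale: largest denominator
s_max = (2400 - 1330) / f
s_avg = (2400 - 960) / f

# Problem 6: horizontal length AB from photo coordinates (mm) and H, f
fmm, H = 210.0, 3200.0
XA, YA = -68.27 * (H - 273) / fmm, -32.37 * (H - 273) / fmm
XB, YB = -87.44 * (H - 328) / fmm, 26.81 * (H - 328) / fmm
AB = math.hypot(XB - XA, YB - YA)      # metres

# Problem 7: relief displacement d = r*h/H' (all heights from one datum)
d7 = 87.5 * 665 / 4660                 # mm
```

These give roughly 1:33,160 (problem 1), 169.95 sq. km (2), 5.29 / 42.32 / 160.8 sq. km (3), scales of about 1:8,490, 1:5,095, and 1:6,857 (4), AB ≈ 853.5 m (6), and a displacement of about 12.49 mm (7).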
Scale of a tilted photograph
Also, PP' = WW' = NN' = hp (from construction), and therefore the plane nwp is also horizontal. In
the similar triangles Lkp and LNP,
Lk/LN = kp/NP
But kp/NP is the scale for a point lying in the plane kwp. Since p lies on this plane, and
Lk = f sec t - y' sin t while LN = H - h, the scale is
St = (f sec t - y' sin t) / (H - h)
where
St = scale of the tilted photograph at a point whose elevation is h
f = focal length
t = tilt angle
H = flying height above datum
y' = y-coordinate of the point with respect to a set of axes whose origin is at the nadir point and whose
y'-axis coincides with the principal line.
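As a quick numerical sketch of this relation (the values are hypothetical; at t = 0 the expression must reduce to the vertical-photo scale f/(H − h)):

```python
import math

def tilted_scale(f, t, y_prime, H, h):
    """Scale of a tilted photo at a point of elevation h.

    f and y_prime share one unit (e.g. metres of film), H and h another (m);
    t is the tilt angle in radians; y_prime is measured from the nadir point
    along the principal line.
    """
    return (f / math.cos(t) - y_prime * math.sin(t)) / (H - h)

# With no tilt the formula collapses to the familiar f / (H - h):
s0 = tilted_scale(0.210, 0.0, 0.0, 3000.0, 500.0)   # = 0.210 / 2500

# A 3 degree tilt changes the scale only slightly for a point 5 cm up
# the principal line from the nadir point:
s3 = tilted_scale(0.210, math.radians(3), 0.05, 3000.0, 500.0)
```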
1. Get the photo coordinates of the end points of the control line with respect to axes defined by the
collimation marks.
2. The photographic length of the line is scaled directly.
3. Use the ratio of the photographic length to the known ground length to get a first approximation
Happ = (AB/ab) f + hAB
where,
f - focal length
hAB - avg. elevation of A and B
AB - ground length
ab - scaled photo length
By using Happ, together with the other scale data, one can solve the scale equations for the flying height.
1. On a tilted photo, relief displacements (a'a) are radial from the nadir point (n).
2. The amount of relief displacement depends upon: (i) flying height, (ii) distance from the nadir point
to the image, (iii) elevation of the ground point, and (iv) position of the point with respect to the
principal line and the axis of tilt.
3. Compared with the equivalent relief displacement on a vertical photo, the relief displacement (RD)
on a tilted photo will be
o less on the half of the photograph upward from the axis of tilt,
o identical for points lying on the axis of tilt, and
o greater on the downward half of the photo.
4. Image displacement due to tilt (explained later) tends to compensate relief displacement on the
upward half and adds to RD on the downward half.
5. Because tilt in near-vertical photos rarely exceeds 3°, the value of RD is given with sufficient
accuracy by the vertical-photo equation d = r h / H. However, the radial distance r should be
measured from the nadir point rather than from the principal point.
α is obtained by measuring the distance to the point from a line through the principal point and parallel
with the axis of tilt. When the point lies on the principal line, as does point C, the distance is measured
from the principal point itself to the image point. This distance is then divided by the focal length to obtain tan α.
The following observations should also be noted for the tilt displacement:
1. Tilt displacement for a point not lying on the principal line is greater than that of a corresponding
point on the principal line.
2. The ratio of the two is equal to the secant of the angle at the isocentre from the principal line to the
point. Tilt displacement on the upper half of a tilted photo is inward and is given as:
Figure 5 Tilt displacement with respect to an equivalent vertical photograph ( Wolf and Dewitt, 2000 )
Using figure 5, the following derivation can be accomplished for the tilt displacement d, for any point a
lying on the lower side of the photo.
TD is maximum when α = 0, i.e. when the point lies along the principal line, and minimum when the
point lies along the isometric parallel.
The general equation can also be written as
Depth perception
Monoscopic viewing provides only rough depth impression which is based on the following clues:
o Relative size of objects
o Hidden objects
o Shadows
o Differences in focussing of eye required for viewing objects at varying distances.
For stereoscopic depth perception usually two clues are involved
o Double image phenomenon.
o Relative convergence of optical axis of two eyes.
The human eye functions in a similar manner to a camera. The lens of the eye is biconvex in shape
and is composed of refractive transparent medium. The separation between eyes is fixed (called
eye base). Therefore, in order to satisfy the lens formula for varying object distance, the focal
length of the eye lens changes. For example, when a distant object is viewed, the lens muscles relax,
causing the spherical surface of the lens to become flatter. This increases the focal length to satisfy
the lens formula and accommodate the long object distance. When close objects are viewed, the
reverse happens. The eye's ability to focus at varying object distances is called
accommodation.
The figure shows two situations where eyes separated by eye base (b) are focussing on two objects
A and B located at two distances dA and dB respectively.
In binocular vision, there is convergence of the axes of the eyes when focusing on points A and B. The
corresponding angles are φ1 and φ2. Angle φ1 tells the mind that object A is at distance dA, and
similarly for point B. These angles are called the parallactic angles of the points. The nearer the
object, the greater the parallactic angle, and vice versa. The difference (φ1 - φ2) tells the mind that the
distance (depth) between the two points is e = dB - dA.
For an average eye separation and a distance of distinct vision of about 10 inches, the limiting upper
value of φ ≈ 16°. The lower limiting value of φ ranges from 10 to 20 seconds of arc and represents a
distance of about 1700 to 1500 ft for an average eye separation. It is called the stereoscopic acuity of
the person.
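The parallactic-angle geometry can be illustrated with a short sketch. The eye base of 65 mm and the sample distances are illustrative assumptions, not values from the text:

```python
import math

def parallactic_angle(b, d):
    """Parallactic angle (radians) for eye base b and object distance d."""
    return 2.0 * math.atan(b / (2.0 * d))

EYE_BASE = 0.065                                # ~65 mm, assumed average

# The nearer the object, the greater the parallactic angle:
phi_near = parallactic_angle(EYE_BASE, 0.25)    # ~25 cm reading distance
phi_far = parallactic_angle(EYE_BASE, 500.0)    # distant object

# Depth between two points registers as the difference phi1 - phi2:
depth_cue = parallactic_angle(EYE_BASE, 10.0) - parallactic_angle(EYE_BASE, 12.0)
```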
Figure 1: Binocular vision (Wolf and Dewitt, 2000)
To understand the stereoscopic vision by viewing photographs, let us replace the eyes by two cameras and
take photographs as shown in figure 2 assuming that the photographic plates are between the objects and
lenses. If these photographs could be placed in the same position and seen with both eyes, one would get
the same depth. Since it is not possible to keep the eyes at such a wide separation (called the air base B),
a scaled version is shown in figure (ii), where the camera positions are replaced by a scaled-down
geometrical arrangement (eyes separated by the eye base be); the two images are fused and the brain
perceives the scaled-down depth between the objects.
Stereoscope
If two photographs taken from two exposure stations L1 and L2 are laid on a table so that the left
photograph is seen by the left eye and the right photograph by the right eye, a 3D model is obtained.
However, viewing in this arrangement is quite difficult for the following reasons:
o Eye strain and focusing difficulty due to close range.
o There is disparity in viewing: the eyes are focused at short range on photos lying on the table,
whereas the brain perceives parallactic angles which tend to form the stereoscopic model
at some depth below the table.
By using a stereoscope, these problems can be alleviated. Different types of stereoscopes are
available for different purposes, from pocket stereoscopes (inexpensive, for viewing a small area)
to mirror stereoscopes (expensive, for viewing a larger area).
Figure 6: A stereopair
Parallax
It is the central concept in the geometry of overlapping photographs and is defined as the apparent
shift in the position of a body, with respect to a reference point, caused by a shift in the point of
observation.
In photogrammetry, it refers to the relative difference in position of an image point that appears in
each of the overlapping photographs.
Absolute Parallax:
The absolute x parallax (Px) is given by
Px = xl - xr
where xl and xr are the x coordinates of the point on the left and right photographs respectively.
Other names:
x-parallax, horizontal parallax, linear parallax, absolute stereo parallax, or just parallax.
For the following photograph pair, the parallax is given as
Px = xb - (-xb')
It can be noted that the value of parallax has been used with sign
dPBA = PB - PA is the difference in absolute parallaxes between the two points. The above equation can
be modified to give
where dpBA = pB - pA is the difference in parallax bar readings between the two points. Thus the
above equation says that dPBA = dpBA, under the assumption that the flying height for both photographs
is the same (i.e. H1 = H2 = H) and the photographs are truly vertical.
It may, however, be noted that for a given point, there may be parallax in both directions X and Y.
The Y-parallax is caused due to the following reasons:
o Unequal flying height
o Photographic tilt
o Misalignment of flight line
o Misalignment of stereoscope
o Great difference in parallax between adjacent images (in highly mountainous/rugged
terrain)
If dp is small, simplified forms of the parallax equation may also be used.
Base lining
In order to get fused 3D images, the orientation relationship between the two photographs and the
stereoscope lenses should be the same as that between the photographed ground and the camera lenses
at the time of photography. Base lining is the process of achieving such orientation. The step-by-step
procedure is as follows:
o Find the photographic center of the first photograph by joining the opposite fiducial
marks and taking the intersection of the lines. Let it be A. Mark the corresponding point A' on the
other photo of the stereopair.
o Find the photograph center of the second photo B and locate corresponding point B' on the
first photograph of the stereopair.
o Join A, B' to get AB' and B and A' to get BA'.
o Fix a white sheet (called the base sheet) on table and draw a straight line on this sheet in the
middle.
o Set up stereoscope on this sheet such that the line is midway between stereoscope legs.
o Put the first photograph in such a way that the line AB' coincides with the line on the base
sheet.
o Now, by trial and error while viewing through the stereoscope, place the second (right)
photograph in such a way that the image of point A coincides with A' and B
coincides with B', i.e. the line AB' coincides with A'B.
o This arrangement will give 3D view of stereopair.
o Now, with the help of a stereometer, the floating mark of the left plate is put over a well-defined
point on the left photograph. Then, by giving lateral and longitudinal motion to the other
plate (with the help of the stereometer drum), the floating mark of the other glass plate is
brought onto the corresponding point of the second photograph. When the images of the left and
right floating marks are fused, the reading on the parallax bar is recorded as the parallax bar
reading for that point.
Numerical examples
1. Two ground points A and B appear on a pair of overlapping photographs, which have been taken
from a height of 3650 m above MSL. The base lines as measured on the two photographs are 89.5
and 90.5 mm respectively. The mean parallax bar readings for A and B are 29.32 mm and 30.82
mm respectively. If the elevation of A above MSL is 230.35 m, compute the elevation of B.
2. In the above problem if the lengths of base lines are not known and the absolute parallax of A is
measured to be 89.80 mm, compute the elevation of B. Also, find the height of another point C
whose parallax bar reading is 32.32 mm.
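The answers were given as images in the original; the sketch below reworks both problems using the standard parallax relation Δh = dp·(H − hA)/pB with pB = pA + dp, and the common textbook simplification of taking the mean photo base as the absolute parallax of A in problem 1. The rounded results should be read as approximate.

```python
H = 3650.0                       # flying height above MSL, m
hA = 230.35                      # elevation of A, m
dp_BA = 30.82 - 29.32            # parallax-bar difference B - A, mm

# Problem 1: absolute parallax of A approximated by the mean photo base
pA = (89.5 + 90.5) / 2.0                         # 90.0 mm
hB1 = hA + dp_BA * (H - hA) / (pA + dp_BA)       # ~286.4 m

# Problem 2: absolute parallax of A measured directly as 89.80 mm
pA2 = 89.80
hB2 = hA + dp_BA * (H - hA) / (pA2 + dp_BA)      # ~286.5 m

dp_CA = 32.32 - 29.32            # parallax-bar difference C - A, mm
hC = hA + dp_CA * (H - hA) / (pA2 + dp_CA)       # ~340.9 m
```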
FLIGHT PLANNING
Flight planning is the process of making all relevant preparations and taking decisions
for photography that satisfies the application requirements. This may include the following:
o deciding about flying height above datum
o spacing between successive exposures
o separation between flight lines
After careful decision about these elements, the flight lines are carefully laid on the map of the
study area to be photographed. This map is called the flight map.
Relief displacement
Large relief displacements create difficulty in forming a continuous, uninterrupted picture. Relief
displacement decreases with flying height, although an increase in height reduces the scale. Hence,
these two effects have to be balanced.
Tilt of photograph
The tilt in a photograph can be resolved into two components, x-tilt and y-tilt, along the x and y
directions respectively. In a photo with y-tilt, the forward overlap will be higher on one side and
lower on the opposite side. The x-tilt causes the side lap to decrease on one side and increase on
the other. Large x-tilt affects flight line spacing.
Ground coverage
After choosing scale and camera format, the ground coverage with a single photograph can be
calculated. If the longitudinal and lateral overlaps are known, the ground coverage by a
stereomodel can be calculated. This coverage is important since it provides approximate mapping
area.
Airbase (B)
This is the distance between two adjacent exposure stations. On photographs, it is the distance
between successive principal points which is also called the 'advance'.
Exposure interval
This is the time interval between two successive exposures and is a function of the longitudinal
overlap and the aircraft velocity: it is equal to the time taken by the aircraft to cover one airbase.
Exposures can be made automatically at a fixed interval of time by a device known as an
intervalometer.
Examples
The overall flight planning procedure can be understood by a few simple examples that follow.
Example 1 :
An area 45 km long and 36 km wide is to be photographed to an average scale of 1:12000, using an aerial
camera of f = 21 cm. The speed of the aircraft is 200 km/h. The photographs are 23 cm square, with a
longitudinal overlap of at least 60% and lateral overlap of 30%, average elevation of the terrain is 500 m
above MSL. Calculate the following:
The flying height above mean sea level (MSL).
Distance between successive exposures
Distance between flight lines for successive strips.
Flight line spacing on flight map at a scale of 1 cm = 600 m
Interval between successive exposures
Number of photographs per strip taking one extra photograph at either end
Number of strips with only one extra strip as a safety factor
Total number of photographs
Solution:
The flying height (H) can be calculated using the scale relation, scale = f/(H - h), which gives
H = 0.21 x 12,000 + 500 = 3020 m above MSL.
Since required overlap is at least 60%, hence exposure interval can be kept as 19 seconds.
Adjusted ground distance between exposures:
L = 55.55 x 19 = 1055.45 m
Allowing two extra photographs at each end, the total number of photographs per flight line = 44 + 2 + 2 = 48.
Hence, total no. of photographs = 48 x 20 = 960
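Example 1 can be checked end to end in a few lines, reproducing the figures quoted in the solution (the extra-photograph allowance follows the solution's 44 + 2 + 2 count; the text rounds the aircraft speed to 55.55 m/s, which gives its 1055.45 m spacing):

```python
import math

f, scale_den, h_avg = 0.21, 12000, 500.0
H = f * scale_den + h_avg                  # flying height: 3020 m above MSL

side = 0.23 * scale_den                    # ground coverage of one photo: 2760 m
B = (1 - 0.60) * side                      # exposure spacing: 1104 m
W = (1 - 0.30) * side                      # flight-line spacing: 1932 m
map_spacing = W / 600                      # on the 1 cm = 600 m flight map, cm

v = 200 * 1000 / 3600                      # aircraft speed, ~55.56 m/s
interval = math.floor(B / v)               # 19 s, rounded DOWN to keep >= 60%
B_adj = v * interval                       # adjusted spacing, ~1055.5 m

photos_per_strip = math.ceil(45000 / B_adj) + 1 + 4   # 44 photos + 2 each end
strips = math.ceil(36000 / W) + 1                     # 19 strips + 1 safety
total = photos_per_strip * strips
```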
Example 2:
The following data is given for flight planning:
Format 18 x 18 cm
Focal length = 21 cm
Scale = 1/20,000
Longitudinal overlap = 60%
Lateral overlap = 20%
East-west terrain length = 100 km
North-south terrain width = 50 km
Flight direction: East to west
Aircraft velocity = 296 km/h
Permissible image movement = 0.02 mm
Wind velocity = 10 m/s from the SSE direction
Solution
Airbase = 0.4 x 18 cm at photo scale; on the ground, B = 0.4 x 0.18 x 20,000 = 1440 m.
Exposure interval
The time taken by the image to move the permissible 0.02 mm gives the maximum exposure time
(shutter speed).
Wind speed = 10 m/s = 36 km/h; flying speed = 296 km/h. The ground speed of the aircraft and
the angle of drift can be found by vector operations.
Effective ground speed = (296^2 + 36^2 - 2 x 296 x 36 x cos 67.5°)^(1/2) = 284.176 km/h.
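The remaining arithmetic of Example 2 can be sketched as follows. The 67.5° drift-triangle angle is taken from the solution above; the exposure-time and interval figures are illustrative consequences of the stated data:

```python
import math

scale_den = 20000
airbase = 0.40 * 0.18 * scale_den          # 60% overlap, 18 cm format: 1440 m

# Effective ground speed by the cosine rule, as in the solution:
v_air, v_wind, angle = 296.0, 36.0, math.radians(67.5)
gs = math.sqrt(v_air**2 + v_wind**2 - 2 * v_air * v_wind * math.cos(angle))
gs_ms = gs * 1000 / 3600                   # ~78.9 m/s

# Maximum exposure time: the permissible image movement of 0.02 mm maps to
# 0.02 mm x 20,000 = 0.4 m on the ground.
t_max = 0.4 / gs_ms                        # ~1/200 s

# Exposure interval: time to cover one airbase at ground speed.
interval = airbase / gs_ms                 # ~18.2 s
```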
Developments in photogrammetry
In a typical photogrammetric process, the information is obtained by establishing rigorously a
geometric relationship between the model created by images and the object, as it existed at the time
of the imaging event (Mikhail and Bethel, 2001).
The geometric relationship can be established by various means, which are broadly classified as:
o analog photogrammetric methods
o analytical photogrammetric methods
o digital photogrammetric methods
Further, based on the type of platform used for the data acquisition, photogrammetry can be
classified into topographical photogrammetry and non-topographical photogrammetry (also
known as close-range photogrammetry). Topographical photogrammetry uses satellite and/or
airborne imagery, whereas non-topographical photogrammetry uses imagery from ground based
platforms (Mikhail and Bethel, 2001).
Data processing system is used for data reduction from geometric information (e.g., image
coordinates of targets) on the image to object space information.
Depending on the type of input used and the desired output, there are three alternatives for the data
reduction: analog, analytical and digital.
Analog methods use optical, mechanical and electronic components for modeling and processing.
In this mode of processing, the data reduction is carried out using analog instruments (e.g., plotters
such as Kelsh K-480 stereoplotter). A typical instrument using this principle is shown in figure 1.
Analytical methods use mathematical modeling assisted with digital processing. In this mode, the
image coordinates are obtained using monocomparators or stereocomparators (equipments to
measure coordinates) and further processing is done through computer. Images used in both the
above methods are in hard copy form (e.g., on film). A typical instrument known as analytical
stereoplotter which uses this principle is shown in figure 2.
In recent times, with the advent of digital scanners and digital cameras, images in digital form are
used instead of hard copy form. In digital methods, the modeling is based on analytical principles
except that they use digital images as input. Hence, digital photogrammetric methods use strengths
of both analytical and image processing methods for stereoplotting and related photogrammetric
works. In digital mode of processing, the entire process is carried out using computer programs.
These programs constitute digital photogrammetric software. A typical instrument known as digital
photogrammetric work station (DPWS) is shown in figure 3.
Figure 1: Kern PG2 mechanical projection stereoplotter equipped with pantograph for direct manuscript
plotting, and interfaced with computer for digital mapping (Wolf and Ghilani, 2002)
(a) (b)
Figure 3: (a) Digital photogrammetric work station (b) Digital Video Plotter DVP (Wolf and Ghilani,
2002)
In general, a photogrammetric system is defined by its three components:
1. Data acquisition
2. Data reduction
3. Data presentation
1. Data acquisition
Data acquisition systems are concerned with procuring the data or information. The data can be
acquired in terms of images. The data acquisition system is generally classified into two categories:
Conventional Imagery and Non-Conventional Imagery.
In conventional imagery, the imaging system has a lens and an image plane such as frame
photographs, which is based on the central projection of the object onto an image plane. These can
be obtained by using both metric and non-metric cameras. In non-conventional imagery, the
imaging system does not use a lens and image plane. Holograms, X-rays, TV systems, etc. are
categorized under this type of imagery.
The metric cameras are those manufactured specially for photogrammetric applications. In these
types of cameras, the elements of interior orientation are known; these include the calibrated
focal length and the location of the principal point. The metric
cameras are further classified as single and stereometric cameras. A single metric camera is
mounted on a tripod (figure 4 a) whereas a stereometric camera consists of two identical metric
cameras mounted rigidly at the ends of a fixed base for photography (figure 4 b).
Non-metric cameras are the off-the-shelf cameras often used for conventional photography
(Figure 5 a and b). These cameras are not especially made for photogrammetric purposes; in
these types of cameras, the elements of interior orientation are unknown or only partially
available.
(a) (b)
Figure 4: Terrestrial cameras (a) Zeiss TMK6 camera (b) Tripod mounted Zeiss SMK 40 + SMK 120
cameras
(a) (b)
Figure 5: Non-metric cameras for photogrammetric application (a) Sony video camera (b) Sony still
camera
Digital close range photogrammetry (DCRP) is the latest development in photogrammetry, used
especially to obtain 3D spatial information about objects near the camera. In close-range work,
the cameras are generally positioned within 100 meters of the object, with the camera axes
pointing essentially towards the center of the object (Atkinson, 1996). From multiple positions,
the user is able to acquire imagery at many convergent angles.
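The convergent-angle measurement described above can be sketched as a space intersection: each camera contributes a ray from its perspective centre through the imaged point, and the object coordinate is estimated where the rays (nearly) meet. The geometry and values below are hypothetical; real systems intersect many rays at once in a least-squares bundle.

```python
# Sketch: least-squares intersection of two convergent camera rays,
# the core of close-range 3D coordinate measurement. Each ray is
# p + t*d (perspective centre p, direction d); we return the midpoint
# of the shortest segment joining the two rays.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def intersect_rays(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of rays p1 + t1*d1 and
    p2 + t2*d2 (undefined for parallel rays)."""
    w = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = tuple(p + t1 * s for p, s in zip(p1, d1))
    q2 = tuple(p + t2 * s for p, s in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Two cameras 10 m apart, both aimed at an object point at (5, 0, 0):
print(intersect_rays((0, 0, 10), (5, 0, -10),
                     (10, 0, 10), (-5, 0, -10)))  # (5.0, 0.0, 0.0)
```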
The introduction of digital cameras into DCRP (as image acquisition systems) has given rise to
on-line systems, which facilitate both real-time and near-real-time 3D coordinate measurement.
With the availability of a wide range of digital cameras, including camcorders, CCD (charge
coupled device) video cameras, and digital still cameras at affordable prices, the use of these
cameras as data acquisition systems in DCRP has increased considerably (Samson, 2003).
DCRP systems employing a wide range of cameras, coupled with automated image measurement
and mapping, have seen wide use over the past decade for precise deformation measurement in
industrial and engineering applications. For example,
Engineering Applications
o Monitoring of dam structures
o Highway applications (DTM and GIS for alignment by computer)
o Measurements of sand deposits in Hydraulics channel for different flow conditions
Biomedical Applications
o Design of prosthesis for below knee amputees
o Facial reconstruction studies
o Physical education - monitoring the movements of athletes
Architectural Applications
o Depicting the existing state of monuments/buildings and preparing working drawings
o Studying the deformation, decay, and damage to buildings/structures of importance by
periodic monitoring
o Preserving the cultural heritage of various epochs in the form of stereo - photographs
o Reconstructing and restoring architectural monuments to their past glory
o Mapping of sites and relocating/recapturing the landscape and location of monuments
Industrial Applications
o Automobile industry
o Shipping industry
o Antenna calibration
2. Data reduction
Data reduction is concerned with the process of extracting the desired information from
photographs. Photogrammetric techniques have changed considerably over time.
In the earlier stages, an analogue approach was used, in which the imaging geometry is
reconstructed by orienting two images through optical or mechanical devices so that a
three-dimensional model of the object is formed. With developments in optics and mechanics,
analogue photogrammetric instruments improved and can attain very high accuracy.
With the evolution of computers, analogue instruments were replaced by analytical plotters, in
which a single photograph or a pair of photographs is placed in an X-Y measuring system that
digitally records image coordinates (using mono- or stereo-comparators). The relations between
image points and object points are described through numerical calculations based on the
collinearity equations.
The collinearity condition specifies that the exposure station, a ground point, and its
corresponding image point must all lie along a straight line. Figure 6 shows the collinearity
condition: let O be the exposure or perspective centre and P the location of a point in object
space whose corresponding point in the image plane is p; then O, p, and P lie on one straight
line. The basic transformation equation describing the relationship between two mutually
associated 3D coordinate systems is given by Wolf and Dewitt (2000).
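In its standard textbook form (following the notation commonly used in Wolf and Dewitt, 2000, with s a scale factor, M the rotation matrix with elements m_ij, f the focal length, (x_0, y_0) the principal point, and (X_O, Y_O, Z_O) the exposure station), the transformation and the collinearity equations derived from it can be written as:

```latex
% 3D conformal transformation between two coordinate systems:
\begin{equation}
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= s\, M^{\mathsf{T}} \begin{bmatrix} x \\ y \\ z \end{bmatrix}
+ \begin{bmatrix} T_X \\ T_Y \\ T_Z \end{bmatrix}
\end{equation}
% which leads to the collinearity equations for an image point (x_a, y_a)
% of ground point A seen from exposure station O:
\begin{align}
x_a &= x_0 - f\,
\frac{m_{11}(X_A - X_O) + m_{12}(Y_A - Y_O) + m_{13}(Z_A - Z_O)}
     {m_{31}(X_A - X_O) + m_{32}(Y_A - Y_O) + m_{33}(Z_A - Z_O)} \\
y_a &= y_0 - f\,
\frac{m_{21}(X_A - X_O) + m_{22}(Y_A - Y_O) + m_{23}(Z_A - Z_O)}
     {m_{31}(X_A - X_O) + m_{32}(Y_A - Y_O) + m_{33}(Z_A - Z_O)}
\end{align}
```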
3. Data presentation
It consists of preparing and presenting the results in a suitable form. The final output can be in the
form of contour maps, pictorial representation of objects, digital model, three-dimensional (3D)
spatial coordinates etc.
Developments in digital photogrammetry have resulted in digital photogrammetric workstations
(DPWS). In the late eighties, the International Society for Photogrammetry and Remote Sensing
(ISPRS) defined a DPWS as "hardware and software to derive photogrammetric products from
digital imagery". Some of the well-known DPWS are Autometric, LH Systems, Z/I Imaging, and
ERDAS.
A typical DPWS comprises standard hardware components, such as stereo viewing devices and a
three-dimensional mouse, around a core of specialized photogrammetric software. The majority
of DPWS are PCs with Windows operating systems, though UNIX-based systems also exist. Such
systems carry out various operations such as vector, raster, and attribute data storage and
processing; image handling, compression, processing, and display; and several photogrammetric
applications such as image orientation, generation of digital terrain models (DTM), the capture
of structured vector data, and the user interface level (Heipke, 2003).