
DEM Extraction from High-Resolution Digital Frame and LIDAR Imagery Sensor Integration for Improved Surface Modeling

By Dr. Charles K. Toth and Dr. Dorota A. Grejner-Brzezinska, The Ohio State University Center for Mapping

Abstract

With the recent introduction of new spatial data acquisition sensors, auxiliary observations offering a diversity of spatial/spectral information now complement the hitherto mostly stereo image-based surface extraction techniques. Scanning laser-ranging (LIDAR) sensors have improved substantially in recent years and may soon become a primary tool for digital surface data acquisition. Our early experience with integrating directly oriented LIDAR and CCD imagery for surface extraction is discussed here.

Introduction

Digital elevation data play an important role in many mapping applications, including spatial feature extraction and interpretation. The demand for DEMs has grown tremendously over the past few years. Orthophoto production, engineering design, modeling, visualization, etc. all need surface data at various scales and accuracy ranges. Furthermore, the research community agrees that feature extraction in the 3D spatial domain cannot be effectively completed without surface reconstruction, and vice versa. Most currently used DEM extraction techniques are based on a combination of image-domain feature- and area-based matching schemes, usually organized in a hierarchical structure. The performance of these systems is very good for smooth, rolling terrain at small to medium scale, but it decreases rapidly for complex scenes, such as dense urban areas at large scale. The primary reasons for the reduced performance are the lack of modeling of man-made objects, occlusions, and motion artifacts. In fact, these problems render the gray-scale stereo image-based surface reconstruction process an ill-posed task. The introduction of new spatial data acquisition sensors allows additional data, such as range observations or multi/hyperspectral surface signatures, to be incorporated into the predominantly stereo image-based surface extraction techniques.
Obviously, an optimal fusion of sensors that are based on different physical principles and record different properties of objects brings together complementary and often redundant information. This, in turn, leads to a more consistent scene description and results in substantially improved surface estimation.

Surface Extraction

From an algorithmic point of view, existing surface extraction techniques have gone far beyond simple image correlation, although correlation is still an integral part of the process. Current systems can usually handle only fully oriented stereo pairs with monochrome image data. To minimize the number of operations, the massive amount of image data is handled at various resolutions by forming an image pyramid. By tracing conjugate image primitives, such as points, edges, and regions, from the coarsest to the finest resolution, not only are the computational savings enormous, but this scale-space approach also makes the whole procedure reasonably robust. Once the image features are matched at the finest resolution, an area correlation or a least-squares matching is performed to refine the conjugate image locations. A comprehensive review of the large variety of surface extraction techniques is beyond the scope of this paper. Instead, some recent trends impacting the surface extraction process are listed below.

- Introduction of direct GPS/INS orientation data
- Increasing use of direct digital imagery
- Growing availability of LIDAR systems
- Combining road/building extraction techniques
- Exploitation of hyperspectral imagery/SAR data
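The coarse-to-fine matching strategy described above can be illustrated with a minimal sketch. The function names, the normalized cross-correlation measure, and the fixed per-level search radius are illustrative assumptions, not the specific implementation used by the systems discussed here; operational matchers combine feature- and area-based schemes in far more elaborate ways.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def downsample(img):
    """Halve the resolution by 2x2 block averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid_match(left, right, row, col, patch=7, levels=3, search=2):
    """Trace a conjugate point from the coarsest to the finest level.

    Returns the (row, col) in `right` that matches (row, col) in `left`.
    """
    lefts, rights = [left], [right]
    for _ in range(levels - 1):
        lefts.append(downsample(lefts[-1]))
        rights.append(downsample(rights[-1]))

    best = (row >> (levels - 1), col >> (levels - 1))  # initial prediction
    for lvl in range(levels - 1, -1, -1):
        L, R = lefts[lvl], rights[lvl]
        lr, lc = row >> lvl, col >> lvl
        tpl = L[lr - patch:lr + patch + 1, lc - patch:lc + patch + 1]
        cand, best_score = best, -np.inf
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                rr, cc = best[0] + dr, best[1] + dc
                if rr - patch < 0 or cc - patch < 0:
                    continue  # window would leave the image
                win = R[rr - patch:rr + patch + 1, cc - patch:cc + patch + 1]
                if win.shape != tpl.shape:
                    continue
                score = ncc(tpl, win)
                if score > best_score:
                    best_score, cand = score, (rr, cc)
        best = cand
        if lvl > 0:
            best = (best[0] * 2, best[1] * 2)  # propagate to the finer level
    return best
```

Because each level only refines the prediction from the level above, the search window stays small at full resolution, which is where the computational savings come from.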

Direct Orientation of Imaging Sensors

Sensor orientation, also called image georeferencing, is defined by a transformation between the image coordinates specified in the camera frame and in the mapping frame. Traditionally, this task is accomplished through aerial triangulation. Direct georeferencing, however, can also be achieved by inertial navigation or multi-antenna GPS systems, or, for maximum accuracy, by an integration of both systems that exploits their complementary features. Airborne integrated GPS/INS systems providing direct platform orientation are currently generating increased interest in the aerial survey and remote sensing community. The primary driving force behind this process is the need to accommodate the new spatial data sensors, such as LIDAR or SAR systems, that do not offer any indirect method for georeferencing, or multi/hyperspectral scanners, for which indirect methods are very complicated. In the case of analog or digital frame imagery, only one set of exterior orientation parameters per image must be determined. However, for sensors such as push-broom line systems, three-line cameras, or panoramic scanners, the perspective geometry varies with the swing angle and with each scan line, and is thus very difficult to determine by conventional aerotriangulation. GPS/INS systems, with all their attractive attributes, including cost-effectiveness (primarily due to reduced ground control requirements and the decreasing cost of the sensors), increasing reliability, and accessibility, can be considered an alternative to the costly aerial triangulation procedure. In fact, in many cases these systems are already an accepted alternative, as aerial surveying companies have acquired GPS/INS for their operations.
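The camera-to-mapping-frame transformation can be sketched for the frame-imagery case: the GPS/INS-derived position and attitude of the perspective center turn an image measurement into a ray in the mapping frame, here intersected with a horizontal plane. The rotation convention, the flat-terrain assumption, and the function names are hypothetical simplifications; boresight and lever-arm corrections are assumed to have been applied already.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from the camera frame to the mapping frame (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def ground_point(xy_img, f, eo_pos, eo_angles, z_ground=0.0):
    """Intersect one image ray with the horizontal plane Z = z_ground.

    xy_img    -- image coordinates (mm) in the camera frame
    f         -- focal length (mm)
    eo_pos    -- perspective-center position (X0, Y0, Z0) from GPS/INS
    eo_angles -- (omega, phi, kappa) attitude from the INS
    """
    R = rotation_matrix(*eo_angles)
    ray = R @ np.array([xy_img[0], xy_img[1], -f])  # ray in the mapping frame
    s = (z_ground - eo_pos[2]) / ray[2]             # scale to reach the plane
    return eo_pos + s * ray
```

For a vertical photograph the scale factor reduces to the familiar flying-height-over-focal-length ratio, which is a quick sanity check on the geometry.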
On the other hand, the quality of the direct platform orientation is limited primarily by the quality of the calibration of the integrated system, including the boresight calibration, the rigidity of the imaging sensor/INS mount, the quality of the INS sensor, and the continuity of the lock on the GPS signal.

Figure 1. The Airborne Integrated Mapping System.

Figure 2. AIMS image from the Callahan, FL area.

Acquiring Surface Data with LIDAR

Light Detection and Ranging (LIDAR) sensors have improved substantially in recent years; they have become advanced and cost-effective and are ready to enter map production. Similarly to its radio-wave counterpart, radar, LIDAR operates on the principle of measuring the round-trip time of a transmitted laser pulse, which is reflected from a target and returned to the receiver. The LIDARs used are primarily high-powered pulsed devices that operate in the visible wavelength range. Operational scanning systems can easily deliver a large number of elevation spots with excellent vertical accuracy, although the accuracy depends primarily on the positioning performance of the GPS/INS module, which provides the direct georeferencing. An unmatched feature of laser scanning sensors is their capability to process multiple returns, enabling the separation of vegetation from the terrain surface and other objects; their known deficiency is a strong signal dependence on the surface slope. Our recent test flight, supported by EarthData Technologies, focused on the simultaneous collection of laser data and AIMS direct digital frame images over a test range in Hagerstown, MD. EarthData Technologies provided the complete LIDAR system and flight support, while OSU contributed the AIMS prototype, including a 4K by 4K digital camera and the GPS/INS system. LIDAR and digital image data were collected in several missions at different flying heights over a one-square-kilometer target area. Figure 3 shows a typical image patch with multiple LIDAR elevation point locations overlaid.

Figure 3. The distribution of the LIDAR elevation spots from different flight lines.
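The round-trip-time ranging principle described above reduces to a short computation. A minimal sketch (the function name is a hypothetical illustration; atmospheric refraction and full-waveform processing are ignored) converting the return times of a single pulse into ranges, separating the first return, typically from the vegetation canopy, from the last, typically from the terrain:

```python
# Speed of light in vacuum (m/s); atmospheric correction is neglected here.
C = 299_792_458.0

def pulse_ranges(return_times_ns):
    """Convert round-trip times (ns) of one laser pulse into ranges (m).

    A single pulse can yield several returns: the first typically comes
    from the vegetation canopy, the last from the terrain beneath it.
    """
    ranges = [C * (t * 1e-9) / 2.0 for t in sorted(return_times_ns)]
    return {"first": ranges[0], "last": ranges[-1], "all": ranges}
```

Note the division by two: the measured time covers the path to the target and back, so a nanosecond of timing error corresponds to about 15 cm of range.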

LIDAR and Frame CCD DEM Comparison

Using the hierarchical, warped-image-based surface reconstruction technique, a DEM was generated automatically on a 5 m grid. There was no editing of the extracted surface, even though the densely built residential areas caused substantial difficulty. The photogrammetrically derived DEM from the 4K by 4K imagery is shown in Figure 4, followed by the LIDAR-produced surface in Figure 5. As can be observed in the figures, the image-based extraction is unable to cope effectively with buildings, and it tends to create a smoothed-out surface draped over natural and man-made objects. This is in contrast to the LIDAR observations, which give a rather good sampling and thus an excellent representation of the surface, almost independently of the underlying object content. A rigorous comparison of the surfaces is a rather difficult task, and no attempt was made here to analytically evaluate the differences among the surface data sets.

Figure 4. Photogrammetrically derived DEM from the 4K by 4K imagery.

Figure 5. LIDAR DEM of the Hagerstown, MD test area.

To better illustrate the vertical behavior of the LIDAR-acquired elevations in comparison with the frame-imagery-derived surface, a vertical profile is presented in Figure 6. For the flat areas, the LIDAR data show an excellent match with the ground truth. The first peak in the LIDAR profile represents a building, while the smaller peak in the center is most likely a car. The stereo-image-derived spots exhibit the typical smoothed-out pattern; surface discontinuities are smeared. The modest performance of this technique is primarily due to the coarse image resolution, about 25 cm GSD.

Figure 6. Elevation profile of LIDAR and stereo image-created surfaces.
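The text does not specify how the irregularly distributed LIDAR spots were resampled onto the regular 5 m grid; as one common possibility, a minimal inverse-distance-weighting sketch (hypothetical function, brute-force search over all points, no blunder detection or search-radius cutoff):

```python
import numpy as np

def grid_lidar(points, x0, y0, nx, ny, cell=5.0, power=2.0):
    """Interpolate scattered LIDAR spots (x, y, z) onto a regular grid
    by inverse-distance weighting over all points (illustrative only)."""
    pts = np.asarray(points, dtype=float)
    dem = np.full((ny, nx), np.nan)
    for i in range(ny):
        for j in range(nx):
            cx, cy = x0 + j * cell, y0 + i * cell
            d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
            if np.any(d < 1e-9):           # a spot falls on the grid node
                dem[i, j] = pts[d.argmin(), 2]
                continue
            w = 1.0 / d ** power
            dem[i, j] = (w * pts[:, 2]).sum() / w.sum()
    return dem
```

A production implementation would restrict each node to the spots within a search radius via a spatial index; the overall weighting idea is the same.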
Final Remarks

Despite the notable successes of the past two decades in reconstructing three-dimensional surfaces from two-dimensional images, a significant performance increase can still be expected from combining such data with LIDAR range observations. The introduction of LIDAR data supplements the existing stereo image-based extraction techniques by providing strong geometric constraints to guide the image matching process. Additionally, LIDAR can separate the vegetation canopy from the topographic surface, a capability not available in stereo image-based techniques. We believe that the fusion of LIDAR with frame imagery is the intermediate step toward the modeling of objects, which will ultimately lead to combined surface reconstruction and object extraction techniques. Other related issues that should be part of future research on extracting surfaces from combined LIDAR and stereo image data include:

- DEM comparison techniques
- Automation of LIDAR boresighting
- Use of overlapping LIDAR tracks
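The canopy/terrain separation mentioned above can be sketched from per-pulse first and last returns; the gap threshold and the function name are illustrative assumptions, not parameters from the test flight:

```python
def split_canopy_terrain(pulses, min_gap=0.5):
    """Split per-pulse (first_z, last_z) elevations into terrain and canopy.

    A pulse whose first and last returns differ by more than `min_gap`
    metres is assumed to have hit vegetation: its last return is kept as
    a terrain spot and its first return as a canopy spot.
    """
    terrain, canopy = [], []
    for first_z, last_z in pulses:
        terrain.append(last_z)
        if first_z - last_z > min_gap:
            canopy.append(first_z)
    return terrain, canopy
```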

Dr. Charles Toth is a Research Scientist at The Ohio State University Center for Mapping. He received a Ph.D. in EE and Geoinformation Sciences from the Technical University of Budapest, Hungary. His research expertise covers broad areas of 2D signal processing, high-resolution spatial imaging, surface extraction, modeling, integration and calibration of multisensor systems, digital camera systems, and mobile mapping technology. Dr. Dorota A. Grejner-Brzezinska is a Research Specialist at The Ohio State University Center for Mapping. She received a Ph.D. in geodesy from The Ohio State University. Her research interests cover precise kinematic positioning with GPS, GPS/INS integration for direct platform orientation, mobile mapping technology, integrated multisensor geospatial data acquisition systems, and robust estimation techniques.
