
Radar tomography for the generation of three-dimensional images

K.K. Knaell G.P. Cardillo

Indexing terms: Radar tomography, Three-dimensional images

Abstract: Computer-aided tomography is normally a process by which a 2D cross-sectional image of an object is obtained by illuminating it from many different directions in a plane. For the case of radar imaging, microwave energy reflected by the object is processed to produce an image which maps the object's radar cross-section (RCS) density into the image plane. Each observation provides a 1D projection of the RCS density. The Fourier slice theorem states that the Fourier transform (FT) of each projection is equal to the functional value of the 2D FT of the RCS density along a related radial in the transform plane. By accumulating the FTs of many 1D projections, it is possible to accumulate a sampled representation of the FT of the RCS density. The image can then be obtained using the backprojection algorithm by taking the inverse FT of the sampled transform function. The authors extend the tomographic technique to the generation of 3D images from 1D range profiles. It is seen that the Fourier slice theorem, the backprojection image generation algorithm, and the backprojected function are useful concepts in the interpretation of 3D imagery. Point spread functions (PSFs) for various radar and observation parameters are illustrated.

1 Introduction

A number of methods are available for extracting 3D target shape information from radar data. Monopulse [1] and ultrawideband radars are hardware refinements required by several of these, and aperture synthesis appears necessary in all except perhaps the monopulse case [2-6]. Chan and Farhat [7] have discussed an extension of 2D spotlight mode processing using a standard pulse compression waveform and a sparse two-dimensional synthesised aperture to obtain diverse 3D frequency-space data required to produce 3D images. The investigation described herein is based on similar assumptions, except that attention is focused on generating images, as indicated by 3D point spread functions (PSFs), that result from curvilinear motion such as that of an aircraft during SAR operation or of a target during ISAR operation.
© IEE, 1995
Paper 1791F (E15), received 17th August 1994
K.K. Knaell is at the David Taylor Research Center, Bethesda, MD 20084-5000, USA
G.P. Cardillo is with the Toyon Research Corporation, 75 Aero Camino, Suite A, Goleta, CA 93117, USA

The PSFs shown here are the images produced by the processing algorithm operating on data obtained from an isolated point target. The character of these PSFs is of interest for two reasons. First, they show the resolution and sidelobe structure resulting from different observation conditions and radar parameters. Secondly, scatterer responses as exemplified by the PSFs indicate the success with which a peak-detecting technique may be used to determine positions for image enhancement by the CLEAN algorithm [8-10]. We obtain the image processing algorithm used to generate the PSFs by extending to 3D the derivation of a computer-aided tomographic (CAT) technique used to produce 2D images. CAT is normally a process in which a 2D cross-sectional image of an object is obtained by illuminating it from many different directions in a plane. The generation of 2D images by tomographic processing is well documented [11-14], and extensions to three and more dimensions are contained in References 15-17. In particular, techniques have evolved making use of a result obtained from the physical optics model of scattering given by Kennaugh and Moffatt [2]. This ultrawideband technique appears to be limited to imaging isolated metallic targets using algorithms developed by Das and Boerner [3], among others [4, 5]. Equations are derived here for the more conventional radar cross-section density model, to be discussed later, and used to generate point spread functions. Other techniques exist for the generation of 3D images. For example, 3D images have been formed by interrogating an object in a series of parallel planes, generating the 2D images in each of those planes, and stacking the images [11]. In contrast, the technique employed here allows the generation of an image distributed in three dimensions from observation data. This process requires that not all the data be taken in a plane.
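The peak-detecting enhancement mentioned above can be sketched in a few lines. The following 1D CLEAN loop is an illustration only, not the implementation of References 8-10; the Gaussian PSF, the loop gain and the iteration count are assumed values:

```python
import numpy as np

# Minimal 1D CLEAN loop: find the residual peak, record a point component
# there, and subtract a scaled, shifted copy of the PSF. The Gaussian PSF,
# the loop gain of 0.2 and the iteration count are illustrative assumptions.
def clean_1d(dirty, psf, gain=0.2, n_iter=200):
    residual = dirty.copy()
    components = np.zeros_like(dirty)
    centre = len(psf) // 2
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        amp = gain * residual[peak]
        components[peak] += amp
        for i in range(len(residual)):            # subtract shifted PSF copy
            j = centre + (i - peak)
            if 0 <= j < len(psf):
                residual[i] -= amp * psf[j]
    return components, residual

x = np.arange(-50, 51)
psf = np.exp(-x**2 / 18.0)                        # assumed point spread function
truth = np.zeros(101)
truth[40], truth[60] = 1.0, 0.6                   # two point scatterers
dirty = np.convolve(truth, psf, mode="same")      # scatterers blurred by the PSF

comps, resid = clean_1d(dirty, psf)
assert {40, 60} <= set(np.flatnonzero(comps > 0.1))  # components found in place
```

Because the blur kernel matches the PSF exactly and the scatterers lie on the grid, the loop concentrates the recovered flux at the true scatterer positions, which is the behaviour the CLEAN enhancement relies on.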
2 2D radar imaging

The tomographic generation of radar images employs 1D projections of the imaged parameter that are accumulated to generate the 2D image. In the general 2D case, each 1D projection function is itself the result of integration of the radar cross-section (RCS) density along lines perpendicular to the radar line of sight. The integral of the RCS density g(x, y) along a line of constant round-trip delay time, Fig. 1, is p_\phi(u). The function g(x, y) has a 2D FT (Fourier transform) G(k_x, k_y). The projection slice theorem [14] states that the FT of a projection p_\phi(u) in real space is equal to the functional value of G(k_x, k_y) along a related line (at angle \phi) in the transform space. The importance of this theorem lies in the fact that many projections obtained at various angles \phi by radar measurement can be accumulated to produce a sampled representation of G(k_x, k_y) at these angles. The more projections used, the more accurate the representation of G(k_x, k_y). The image of g(x, y) can then be obtained by taking the inverse FT of the sampled function G(k_x, k_y).

Fig. 1  Radar 1D projection

Assumptions implicit in the formulation of this imaging process will carry over to the formation of 3D images. The RCS density function g(x, y) at each point (x, y) is assumed to be isotropic (i.e. to introduce the same amplitude and phase effects to the return signal, regardless of direction). It is also assumed that there is no mutual coupling between scatterers, and that the radiation is not attenuated as it passes through the target. These assumptions, though not often stated, are basic to most point-target imaging techniques.

While the inverse FT approach to reconstruction described above is conceptually simple, computational problems arise, primarily because 2D interpolation is required to calculate the inverse FT when data is sampled along radials. The backprojection method does not require 2D interpolation. The inverse FT method of image construction requires that all the FT data be accumulated in transform space, interpolated, and then inverse transformed. The backprojection method, on the other hand, allows individual 1D views to be merged sequentially, each new spatial view adding to the previously accumulated image. The backprojection algorithm requires that, for each observation direction, the projection function p_\phi(u) be transformed, filtered (by a filter that linearly emphasises the high frequencies), and inverse transformed to form the backprojection function Q_\phi(u) at the angle \phi.

In this paper, we extend the tomographic techniques described above to the generation of 3D images from 1D range profiles. We derive the projection slice theorem and the backprojection theorem for generating 3D images; generate the backprojection function for a point object; and evaluate and display the image for a point (the point spread function) for various radar and observation parameters.

This work was sponsored by the Department of the Army, Harry Diamond Laboratories. The authors would like to express their thanks to Dr. Dean Mensa and Prof. Glen Heidbreder for their enlightened comments during this effort.

IEE Proc.-Radar, Sonar Navig., Vol. 142, No. 2, April 1995

3 3D tomographic imaging

A function p_{\alpha\epsilon}(u) is produced by the integration of the return from all points at a distance u from the radar. As for the 2D case, the radar is assumed to be far from the object, so that the surfaces of integration may be considered planar.

3.1 3D projection slice theorem

Fig. 2  Coordinate systems: (u, v, w) real space and transform space

Fig. 2 shows the 3D real-space observation geometry and the corresponding 3D FT space. The observation direction is along the u-axis, and the return at any range is the integral of the RCS density in a plane perpendicular to the u-axis. The u direction is specified by the angles \alpha and \epsilon (azimuth and elevation) from the x-axis. The function p_{\alpha\epsilon}(u) describing the radar return, like p_\phi(u) in the 2D geometry, is

p_{\alpha\epsilon}(u) = \int \left[ \int g(x, y, z) \, dv \right] dw    (1)

where g(x, y, z) is the complex-valued radar reflectivity at the point (x, y, z). The transformation between the (u, v, w) and (x, y, z) coordinate systems is

\begin{bmatrix} u \\ v \\ w \end{bmatrix} =
\begin{bmatrix}
\cos\epsilon\cos\alpha & \cos\epsilon\sin\alpha & \sin\epsilon \\
-\sin\alpha & \cos\alpha & 0 \\
-\sin\epsilon\cos\alpha & -\sin\epsilon\sin\alpha & \cos\epsilon
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}    (2)

Expanding p_{\alpha\epsilon}(u):

p_{\alpha\epsilon}(u) = \iint g[x(u, v, w), y(u, v, w), z(u, v, w)] \, dv \, dw    (3)

Let P_{\alpha\epsilon}(k_u) be the FT of the projection p_{\alpha\epsilon}(u):

P_{\alpha\epsilon}(k_u) = \int_{-\infty}^{\infty} p_{\alpha\epsilon}(u) \exp(-juk_u) \, du
= \int_{-\infty}^{\infty} \left[ \iint g(x, y, z) \, dv \, dw \right] \exp(-juk_u) \, du    (4)

Substituting u = x\cos\alpha\cos\epsilon + y\cos\epsilon\sin\alpha + z\sin\epsilon and changing the variables of integration to (x, y, z) (the transformation is a rotation, so its Jacobian is unity):

P_{\alpha\epsilon}(k_u) = \iiint g(x, y, z) \exp[-j(xk_u\cos\alpha\cos\epsilon + yk_u\cos\epsilon\sin\alpha + zk_u\sin\epsilon)] \, dx \, dy \, dz
= G(k_u\cos\alpha\cos\epsilon, \, k_u\cos\epsilon\sin\alpha, \, k_u\sin\epsilon)
= G_{\alpha\epsilon}(k_u)    (5)

where G(k_x, k_y, k_z) is the 3D FT of g(x, y, z). The right-hand side of eqn. 5 represents the 3D FT of g(x, y, z) evaluated at spatial frequencies along the radial line defined by the angles \alpha and \epsilon. By taking the FT of projections at an infinite number of angles, the value of G(k_x, k_y, k_z) would be known at all points in the transform space. The object function g(x, y, z) could then be recovered by using the inverse transform. In practice, only a finite number of projections of an object can be taken and used to construct samples of the function G(k_x, k_y, k_z). This estimate can be inverted to form an estimate of the function g(x, y, z).
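The chain from eqn. 1 to eqn. 5 can be checked numerically. The sketch below is our own illustration, not code from the paper: the reflectivity is modelled as a handful of discrete point scatterers, so the projection integrals reduce to sums, and the 1D FT of the projection is compared with the 3D FT of g sampled along the radial defined by (\alpha, \epsilon):

```python
import numpy as np

# Verify the 3D projection slice theorem for discrete point scatterers:
# the 1D FT of the projection onto the look direction equals the 3D FT of
# the reflectivity sampled along that radial line in transform space.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(5, 3))       # scatterer positions (x, y, z)
amp = rng.uniform(0.5, 1.5, size=5)         # scatterer amplitudes

alpha, eps = 0.7, 0.3                       # azimuth, elevation (radians)
d = np.array([np.cos(eps) * np.cos(alpha),  # unit vector along the u-axis
              np.cos(eps) * np.sin(alpha),
              np.sin(eps)])

ku = np.linspace(-20, 20, 401)              # spatial frequencies along the radial

# FT of the projection p(u) = sum_i amp_i * delta(u - r_i . d)
u = pts @ d
P = (amp[:, None] * np.exp(-1j * np.outer(u, ku))).sum(axis=0)

# 3D FT of g evaluated on the radial k = ku * d
k_vec = np.outer(ku, d)                     # (401, 3) sample points in k-space
G = (amp[:, None] * np.exp(-1j * pts @ k_vec.T)).sum(axis=0)

assert np.allclose(P, G)                    # eqn. 5 holds along the radial
```

For point scatterers the two sides agree exactly, because r_i . (k_u d) = k_u (r_i . d); the numerical check simply confirms the bookkeeping of the coordinate transformation.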
3.2 3D backprojection algorithm

While the above procedure provides a simple conceptual model of tomographic reconstruction, practical implementation requires a different approach. As for the 2D case, a filtered backprojection algorithm will be derived and used. Conceptually, the object function can be reconstructed from the inverse FT

g(x, y, z) = \frac{1}{(2\pi)^3} \iiint G(k_x, k_y, k_z) \exp[j(xk_x + yk_y + zk_z)] \, dk_x \, dk_y \, dk_z    (6)

The transform space coordinates are converted to polar form using

k_x = k_u\cos\epsilon\cos\alpha, \quad k_y = k_u\cos\epsilon\sin\alpha, \quad k_z = k_u\sin\epsilon    (7)

and the Jacobian

|J(k_u, \alpha, \epsilon)| = |k_u^2\cos\epsilon|

resulting in

g(x, y, z) = \frac{1}{(2\pi)^3} \iint \left[ \int_{-\infty}^{\infty} G_{\alpha\epsilon}(k_u) \, |k_u^2\cos\epsilon| \exp(juk_u) \, dk_u \right] d\epsilon \, d\alpha    (8)

where u = x\cos\epsilon\cos\alpha + y\cos\epsilon\sin\alpha + z\sin\epsilon, and G_{\alpha\epsilon}(k_u) is the FT of the projection p_{\alpha\epsilon}(u). The bracketed term in eqn. 8 is the 3D filtered backprojection at the angle \alpha\epsilon, i.e.

Q_{\alpha\epsilon}(u) = \int_{-\infty}^{\infty} G_{\alpha\epsilon}(k_u) \, |k_u^2\cos\epsilon| \exp(juk_u) \, dk_u    (9)

This function (as in the 2D case) is obtained by taking the FT of the real-space 1D projection, weighting it by a high-frequency emphasis filter, and inverse transforming. (Note that the source of the filter terms is the Jacobian of the transformation from rectangular to spherical polar coordinates.) The remaining integrations in eqn. 8 result in the summation of the filtered backprojections Q_{\alpha\epsilon}(u) over all azimuth and elevation angles. The straightforward sequence of steps involved in implementing the backprojection algorithm is then:

(i) accumulate radar data to get the FT, P_{\alpha\epsilon}(k_u), of the projection function p_{\alpha\epsilon}(u)
(ii) multiply by the weighting function |k_u^2\cos\epsilon|
(iii) form Q_{\alpha\epsilon}(u), the inverse FT of |k_u^2\cos\epsilon| P_{\alpha\epsilon}(k_u)
(iv) determine the value of u for each pixel location of the image relative to the projection direction, evaluate (interpolate) Q_{\alpha\epsilon}(u), and accumulate its value to the pixel.

Actual implementations of the algorithm include variations of the above steps; for example, zero fill in the FT space could be used to facilitate later interpolation in real space. There are two main differences between the backprojection algorithm and the direct inverse FT as discussed relative to eqn. 5. First, the backprojection algorithm is a sequential process, in that the image construction begins as soon as the first projection is measured and continues as each new projection is obtained. On the other hand, the direct inverse FT is a bulk process; all the projections in the transform domain must be obtained before the inverse transform is applied. The second difference relates to the domain in which interpolations (which arise due to the sampled nature of the processing) are performed. A 1D interpolation in real space is required by the backprojection algorithm when the value of Q_{\alpha\epsilon}(u) is obtained. The direct inverse FT requires that an interpolation be performed on the polar-formatted data in 3D transform space, where the sampled radial points are converted to a rectangular grid to compute the inversion.

4 3D point spread function

The point spread function (PSF) is the image produced by applying a processing procedure to the return from a point object. It provides a means of evaluating a measurement and image construction procedure. In many ways its use is similar to the use of an ambiguity function to evaluate a radar waveform. To calculate the PSF, a reflecting point is assumed at the origin, giving G_{\alpha\epsilon}(k_u) = 1 everywhere. The filtered backprojection function is

Q_{\alpha\epsilon}(u) = \cos\epsilon \int_{-\infty}^{\infty} k_u^2 \exp(juk_u) \, dk_u    (10)

In principle, the integration is to be carried out over all k_u. However, since the radar observes with a limited bandwidth, the corresponding spatial spectrum is measurable only over a limited region. This limitation causes the point image to be spatially extended, producing an image of finite width with an attendant sidelobe structure.

4.1 Window functions

The backprojection function, being constructed from a limited region of the transform space, can be written

Q_{\alpha\epsilon}(u) = \int_{-\infty}^{\infty} H_{\alpha\epsilon}(k_u) \, |k_u^2\cos\epsilon| \exp(juk_u) \, dk_u    (11)

where H_{\alpha\epsilon}(k_u) is a window function representing the shape of the spectrum of the radar waveform. Appropriate backprojection functions can be derived for a variety of window functions. As an example, in the remainder of this Section we derive the filtered backprojection function for a Gaussian window function with standard deviation s equal to half the bandwidth of the radar waveform. This frequency spectrum corresponds to a Gaussian pulse shape in the time domain. If the frequency spectrum of the transmitted waveform is Gaussian, centred on the carrier wavenumber k_0, the window function has the form

H_{\alpha\epsilon}(k_u) = \exp\left[-\frac{(k_u - k_0)^2}{2s^2}\right] + \exp\left[-\frac{(k_u + k_0)^2}{2s^2}\right]    (12)

and the filtered backprojection function becomes

Q_{\alpha\epsilon}(u) = A\cos\epsilon \int_{-\infty}^{\infty} k_u^2 \left\{ \exp\left[-\frac{(k_u - k_0)^2}{2s^2}\right] + \exp\left[-\frac{(k_u + k_0)^2}{2s^2}\right] \right\} \exp(juk_u) \, dk_u    (13)

Let w = (k_u - k_0) in the first integral and w = (k_u + k_0) in the second. Then

Q_{\alpha\epsilon}(u) = A\cos\epsilon \left\{ \exp(jk_0u) \int_{-\infty}^{\infty} (w + k_0)^2 \exp\left(-\frac{w^2}{2s^2}\right) \exp(jwu) \, dw + \exp(-jk_0u) \int_{-\infty}^{\infty} (w - k_0)^2 \exp\left(-\frac{w^2}{2s^2}\right) \exp(jwu) \, dw \right\}    (14)

Expanding the quadratic terms and recognising that, if g(u) and G(w) are related by the FT, then FT[jwG(w)] = g'(u) and FT[-w^2 G(w)] = g''(u), we obtain

Q_{\alpha\epsilon}(u) = A\cos\epsilon \exp\left(-\frac{s^2u^2}{2}\right) \left\{ (k_0^2 + s^2 - s^4u^2)\cos k_0u - 2k_0us^2\sin k_0u \right\}    (15)

where A is an accumulation of constants.

Figs. 3a and b show examples of the filtered backprojection (and the pulse shape) for a Gaussian window function and for a rectangular window function, respectively. Both Figures are for a radar operating at a carrier frequency of 300 MHz and half bandwidth of 150 MHz. The solid curves are the result with k-squared filtering, while the dashed curves are without. Note the narrowness of the backprojection envelope for the Gaussian window compared to the rectangular window. This effect will be reflected in the reduced sidelobes of the resulting point spread functions.

The 3D backprojection for a reflection point not located at the origin can be obtained by shifting the u-value for each pixel by a bias recomputed for each observation direction. Multiple noninterfering reflection points can be accommodated by linear superposition.

4.2 Generation of the 3D point spread function

A computer program was developed to calculate and display the PSFs for various radars and geometries. Fig. 4 shows the coordinate system used in the program to locate the reflecting point and to describe the aperture path (path of observations), which is along a cone centred at the origin. (The cone half-angle is the complement of what is often called the lookdown or grazing angle. In the Figures of Section 5, the term lookdown angle will be used.)

Fig. 4  Coordinate system and geometric parameters (flight path, arc length, arc increment, region of observation, reflection point)

The angle of the cone, the extent of the aperture path along the cone, and the observation arc increments are specified by input to the program. The radar is specified by its centre frequency, bandwidth, and window function; these determine the filtered backprojection function. Filtered backprojections for both the rectangular and Gaussian windows were implemented, and the resulting PSFs are illustrated. The program generates plots of image intensity versus pixel location in planar cuts through the 3D space of pixels. In the next Section, examples will be shown of planar cuts in the ground plane (z = 0), parallel to the ground plane (z ≠ 0), and in the vertical plane (the x, z-plane, i.e. y = 0). Planar cuts at other orientations are also possible but will not be illustrated here.
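The accumulation performed by such a program can be miniaturised in a few lines. The sketch below is our own minimal re-implementation, not the authors' program: it assumes a carrier-free Gaussian window (k_0 = 0, for which the filtered backprojection reduces to Q(u) proportional to exp(-s^2u^2/2)(s^2 - s^4u^2)), an arbitrary spectrum width, and sums Q(u) over look directions spaced around the cone, confirming that the image peaks at the scatterer position:

```python
import numpy as np

# Sketch of PSF accumulation: for each pixel of a horizontal cut through the
# object plane (z = 0), sum the filtered backprojection Q(u) over look
# directions spaced along a 45-degree cone. Spectrum width and pixel scale
# are assumed values, not parameters from the paper's program.

s = 2 * np.pi                                    # spectrum std (rad/m), assumed
eps = np.radians(45.0)                           # elevation = lookdown angle
alphas = np.radians(np.arange(0.0, 360.0, 5.6))  # observation arc increments

def Q(u):
    """Filtered backprojection of a point at the origin (Gaussian window, no carrier)."""
    return np.exp(-s**2 * u**2 / 2) * (s**2 - s**4 * u**2)

x = np.arange(-32, 33) * 0.1                     # 0.1 m per pixel
X, Y = np.meshgrid(x, x)
img = np.zeros_like(X)
for a in alphas:                                 # accumulate one view at a time
    u = X * np.cos(eps) * np.cos(a) + Y * np.cos(eps) * np.sin(a)  # z = 0
    img += np.cos(eps) * Q(u)

# the accumulated image peaks at the scatterer position (the origin)
iy, ix = np.unravel_index(np.argmax(img), img.shape)
assert (x[ix], x[iy]) == (0.0, 0.0)
```

The loop mirrors step (iv) of the backprojection algorithm: each view contributes Q evaluated at the pixel's projection coordinate u, and the peak forms where all views reinforce.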
5 Examples of 3D point spread functions





Fig. 3  Backprojection function: (a) Gaussian window; (b) rectangular window. Solid curves: k-squared filtered; dashed curves: unfiltered. Pulse width = 1 m
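The closed-form Gaussian filtered backprojection underlying Fig. 3a can be spot-checked by evaluating the window integral directly. In the sketch below (our own check; the carrier wavenumber and spectrum standard deviation are arbitrary test values, not the Fig. 3 parameters) the two-lobed Gaussian window is integrated numerically and compared with the derived expression:

```python
import numpy as np

# Numerically evaluate Q(u) = Integral k^2 H(k) exp(juk) dk for a two-lobed
# Gaussian window and compare with the closed form
#   Q(u) = 2 s sqrt(2 pi) exp(-s^2 u^2 / 2)
#          * [(k0^2 + s^2 - s^4 u^2) cos(k0 u) - 2 k0 u s^2 sin(k0 u)]
# k0 and s below are arbitrary test values.
k0, s = 2 * np.pi / 0.5, 2.0
k = np.linspace(-60.0, 60.0, 200001)
dk = k[1] - k[0]
H = np.exp(-(k - k0)**2 / (2 * s**2)) + np.exp(-(k + k0)**2 / (2 * s**2))

u = np.linspace(-1.5, 1.5, 11)
Q_num = np.array([(k**2 * H * np.exp(1j * uu * k)).sum().real * dk for uu in u])

A0 = s * np.sqrt(2 * np.pi)                      # FT of one Gaussian lobe at u = 0
Q_cf = (2 * A0 * np.exp(-s**2 * u**2 / 2)
        * ((k0**2 + s**2 - s**4 * u**2) * np.cos(k0 * u)
           - 2 * k0 * s**2 * u * np.sin(k0 * u)))
assert np.allclose(Q_num, Q_cf, rtol=1e-5, atol=1e-6)
```

The agreement confirms the sign and coefficient bookkeeping in the quadratic expansion: the cos k_0u term carries (k_0^2 + s^2 - s^4u^2), and the sin k_0u term carries -2k_0us^2.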

This Section contains examples of the PSFs with different radar parameters, window functions, observation geometries (lookdown angle and look angle extent), and orientations of the planar cuts. Examples are shown for both rectangular and Gaussian window functions and the radar parameters of Fig. 3. In general, the Gaussian window produces PSFs which are less cluttered and with lower sidelobes. A total of 12 PSFs are shown, which can be compared to illustrate the effects of various parameters. Parameters for the first three Figures are characteristic of those of an ultrawideband radar emitting a Gaussian impulse. These were included to give a clear representation of the envelope of the PSF for the basic aperture configuration consisting of a complete circle. The sidelobe-free characteristic of the range envelope of Fig. 3a produced by the Gaussian frequency window is an advantage, in that ripple in the resulting PSF can thus be ascribed entirely to offsetting the window to the carrier frequency. For the sake of additional clarity in Figs. 5-16, the number of oscillations of the signal within the PSF envelope was limited by a high bandwidth to carrier frequency ratio.

Slices through 3D PSFs are shown in Figs. 5 to 16 for selected parameter values. The lookdown angle is the complement of the half-cone angle shown in Fig. 4, while the look angle refers to the extent of the arc length of the observations around the cone: 360° is a complete circular aperture. In most cases the observation arc increment was about 5.6°, which was sufficient to eliminate grating sidelobes from the PSFs. The scale on each Figure is the length of the coordinate increment per pixel. The radar carrier is specified in terms of its wavelength in metres, an infinite wavelength implying no carrier. The radar bandwidth is specified by the corresponding one-way pulsewidth (i.e. bandwidth = c/pulsewidth).

For comparison purposes, the bandwidth of the rectangular window was made equal to K times the standard deviation of the Gaussian spectrum. In the time domain, this makes the -4 dB pulsewidth of the transformed rectangular window equal to the Gaussian pulsewidth (defined to be twice the standard deviation). Later PSFs in the set are made only for pulses having a Gaussian window; these are less cluttered than those resulting from the rectangular window.

Fig. 5  Horizontal slice at object plane. Pulse spectrum: Gaussian; look angle = 360°; lookdown angle = 45°; carrier wavelength = ∞; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 6  Vertical slice at object plane. Pulse spectrum: Gaussian; look angle = 360°; lookdown angle = 45°; carrier wavelength = ∞; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 7  Horizontal slice 2 metres above object plane. Pulse spectrum: Gaussian; look angle = 360°; lookdown angle = 45°; carrier wavelength = ∞; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 8  Horizontal slice at object plane. Pulse spectrum: Gaussian; look angle = 360°; lookdown angle = 45°; carrier wavelength = 0.5 m; pulse width = 2.0 m; scale = 0.1X, 0.1Y m

Fig. 9  Vertical slice at object plane. Pulse spectrum: Gaussian; look angle = 360°; lookdown angle = 45°; carrier wavelength = 0.5 m; pulse width = 2.0 m; scale = 0.2X, 0.2Y m

Figs. 5, 6 and 7 are generated by backprojection of the k-squared filtered Gaussian pulses of Fig. 3a at a 45° lookdown angle over a complete circular aperture. The width of the central lobe of Fig. 5 is approximately one-half the one-way pulsewidth divided by the cosine of the lookdown angle. Fig. 6 is a vertical cut through the PSF illustrated in Fig. 5, and Fig. 7 is a horizontal cut two metres above the object. The characteristic X distribution extending to the corners of Fig. 6 is a section in 3D image space through two cones with their apexes touching at the object location in the centre of the patch. This biconical structure is an almost universal attribute of 3D PSFs generated from circular apertures [16, 17]. Effects of changes in the radar and geometric parameters can be interpreted as producing modifications in this basic structure.

The PSF for an approximately 25% bandwidth carrier-based system is shown in Fig. 8. The pulse width is characterised as above by the radial extent over which the disturbance exists, while the width of the central peak is one-fourth the carrier wavelength [15]. Fine structure continues into the conical sidelobes as illustrated in Fig. 9.

Fig. 10  Vertical slice at object plane. Pulse spectrum: Rectangular; look angle = 360°; lookdown angle = 45°; carrier wavelength = 0.5 m; pulse width = 2.0 m; scale = 0.2X, 0.2Y m

Fig. 11  Vertical slice at object plane. Pulse spectrum: Gaussian; look angle = 360°; lookdown angle = 60°; carrier wavelength = 1.0 m; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 12  Vertical slice at object plane, object 30 metres from radar. Pulse spectrum: Gaussian; look angle = 360°; lookdown angle = 45°; carrier wavelength = 1.0 m; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 13  Horizontal slice at object plane. Pulse spectrum: Gaussian; look angle = 120°; lookdown angle = 45°; carrier wavelength = 1.0 m; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 14  Horizontal slice 2 metres above object plane. Pulse spectrum: Gaussian; look angle = 120°; lookdown angle = 45°; carrier wavelength = 1.0 m; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 15  Horizontal slice at object plane. Pulse spectrum: Gaussian; look angle = 2.0°; lookdown angle = 45°; carrier wavelength = 0.05 m; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

Fig. 16  Vertical slice at object plane. Pulse spectrum: Gaussian; look angle = 30°; lookdown angle = 45°; carrier wavelength = 0.005 m; pulse width = 1.0 m; scale = 0.1X, 0.1Y m

The increase in sidelobe structure over that of Fig. 9 as a result of the rectangular range response is evident in Fig. 10. The result, in the vertical plane, of increasing the lookdown angle is shown in Fig. 11. The cone angle is larger, since the projections causing the conical limbs of the PSF are perpendicular to the radar line of sight. Fig. 12 results from a reduced radar-to-target distance of 30 metres. The linear wavefronts of Figs. 6 and 11 now appear circular as a result of significant wavefront curvature. Lowering the look-angle extent from 360° to 120° results in the images of Figs. 13 and 14. The 120° aperture path is centred on the lower right corner of the grid. This PSF is essentially a vestige of the PSF that exists for a complete circular aperture. The vestige results from elimination of those elements of the cone for look angles not appearing in the reduced radar dataset. Fig. 15 is an example for more typical SAR parameters. The Gaussian range response is almost completely isolated from the sin x/x azimuth response in this case. Calculated azimuth and range resolutions agree with standard formulas if the carrier wavelength and range pulsewidth are multiplied by the reciprocal of the cosine of the lookdown angle. Fig. 16 illustrates that widening the look angle by a factor of 15 over the previous example produces resolution orthogonal to both the down-range and normal azimuth directions. Significant resolution in three dimensions is thus exhibited by this combination of parameters. Although the excessive sidelobe structure would interfere with the use of an image consisting of such PSFs, the possibility may exist of using a deconvolution technique in low-noise situations to enhance such 3D images.
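The resolution bookkeeping just described can be sketched numerically. The formulas below are our reading of the "standard formulas" (range resolution = pulsewidth/2, small-angle spotlight azimuth resolution = wavelength/(2 × look angle)), with the wavelength and pulsewidth scaled by the reciprocal of the cosine of the lookdown angle as stated in the text; the exact convention used by the authors is not spelled out:

```python
import math

# Ground-plane resolutions from standard spotlight-SAR formulas, with
# wavelength and pulsewidth scaled by 1/cos(lookdown). These formulas are
# an assumed reading of the text, not the paper's exact computation.
def ground_resolutions(wavelength, pulsewidth, look_angle_deg, lookdown_deg):
    cosl = math.cos(math.radians(lookdown_deg))
    dtheta = math.radians(look_angle_deg)       # look-angle extent (radians)
    range_res = (pulsewidth / cosl) / 2.0       # range resolution on the ground
    azimuth_res = (wavelength / cosl) / (2.0 * dtheta)
    return range_res, azimuth_res

# parameters of Fig. 15: look angle 2 deg, lookdown 45 deg,
# carrier wavelength 0.05 m, pulsewidth 1.0 m
r, a = ground_resolutions(0.05, 1.0, 2.0, 45.0)  # r ~ 0.71 m, a ~ 1.01 m
```

With these values the range and azimuth resolutions come out comparable (roughly 0.7 m and 1.0 m), consistent with the nearly separable range and azimuth responses visible in Fig. 15.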

6 Conclusions

A 3D imaging algorithm was developed based on tomographic techniques employing 3D versions of the projection slice theorem and the backprojection theorem. These algorithms are extensions of 2D imaging algorithms. A significant aspect of these 3D images is that image peaks are generated at the position of the scatterer regardless of the aperture configuration. While the resulting PSFs accounted for radar parameters, object location, and observation geometry, they do not include, for example, the effects of signal-to-noise ratio and of propagation on the phase and amplitude of the radar returns. The point spread functions demonstrate the effects of carrier frequency, bandwidth, angle extent of the observations, and the lookdown angle employed. When observations cover a large angle extent about the object, the resolution is primarily dependent on the carrier frequency, while the behaviour of the sidelobes (ambiguities) depends on the bandwidth. Tomographic imaging can be used in place of normal Doppler processing to generate 3D SAR images. For simple scatterers which sustain reflectivity over suitably wide view angles, it is possible to estimate 3D position information from data gathered along practical curvilinear apertures. The possibility of generating useful images by this method will depend, at least partially, upon the effectiveness of deconvolution techniques in removing the significant sidelobes caused by the restricted apertures involved.

7 References

1 WEHNER, D.R.: 'High resolution radar' (Artech House, Norwood, MA, 1987)
2 KENNAUGH, E.M., and MOFFATT, D.L.: 'Transient and impulse response approximations', Proc. IEEE, 1965, 53, pp. 893-901
3 DAS, Y., and BOERNER, W.M.: 'On radar target shape estimation using algorithms for reconstruction from projections', IEEE Trans., 1978, AP-26, (2), pp. 274-279
4 GE, D.B.: 'A study of the Lewis method for target-shape reconstruction', Inv. Probl. (UK), 1990, 6, pp. 363-370
5 LI, H.-J., and LIN, F.-L.: 'Near-field imaging for conducting objects', IEEE Trans., 1991, 39, (5), pp. 600-605
6 DEANS, S.: 'The Radon transform and some of its applications' (John Wiley, NY, 1983)
7 CHAN, C.K., and FARHAT, N.H.: 'Frequency swept imaging of three-dimensional perfectly conducting objects', IEEE Trans., 1981, AP-29, (2), pp. 307-319
8 CLARK, B.G.: 'An efficient implementation of the algorithm CLEAN', Astron. Astrophys., 1980, 89, pp. 377-378
9 HOGBOM, J.A.: 'Aperture syntheses with a non-regular distribution of interferometer baselines', Astron. Astrophys. Suppl., 15, pp. 417-426
10 STYERWALT, D., and HEIDBREDER, G.: 'On a Bayesian approach to coherent radar imaging', in 'Maximum entropy and Bayesian methods' (Kluwer, Dordrecht, 1992)
11 KAK, A.C., and SLANEY, M.: 'Principles of computerised tomographic imaging' (IEEE Press, 1988)
12 MENSA, D.L.: 'High resolution radar imaging' (Artech House, 1984)
13 SCUDDER, H.J.: 'Introduction to computer aided tomography', Proc. IEEE, 1978, 66, (6), pp. 628-637
14 MUNSON, D.C. Jr., O'BRIEN, J.D., and JENKINS, W.K.: 'A tomographic formulation of spotlight-mode synthetic aperture radar', Proc. IEEE, 1983, 71, (8), pp. 917-925
15 MENSA, D., and HEIDBREDER, G.: 'Bistatic synthetic-aperture radar imaging of rotating objects', IEEE Trans., 1982, AES-18, pp. 423-431
16 WALKER, J.L.: 'Range-Doppler imaging of rotating objects', IEEE Trans., 1980, AES-16, pp. 23-52
17 AUSHERMAN, D.A., KOZMA, A., WALKER, J.L., JONES, H.M., and POGGIO, E.C.: 'Developments in radar imaging', IEEE Trans., 1984, AES-20, pp. 363-400