Estimating Subpixel Geospatial Features
BERGER, Henry and SHINE, James (firstname.lastname@example.org), U.S. Army Topographic Engineering Center, 7701 Telegraph Road, Alexandria, VA 22315-3864
Key Words: small-scale geospatial non-uniformities, derivative-as-limit algorithms, radiance, irradiance
Geocomputational literature describes spatial data acquisition from motion video at pixel-level detail, with the aim of building 3-D models of ground surface features that can be used to develop topographic maps for geographic information systems (GIS). This paper describes a method for estimating subpixel geospatial features from spatial imagery when those features are too small, or their spatial nonuniformities vary too rapidly, for normal pixel-level spatial imagery to resolve them.
The paper "Spatial Data Acquisition From Motion Video," by M. Williams, in Volume II of the proceedings of GeoComputation 1996, lists and differentiates among a number of methods used to obtain spatial data for GIS from imagery. Conventional photogrammetry requires a widely spaced pair of images; the motion-video method proposed in that paper, a form of photogrammetry that avoids the correspondence problem, instead uses a dense image set. The availability of subpixel information may therefore lead to more accurate, detailed results.
The current paper describes a method that estimates a mini-image within each pixel. Normally an image consists of an orderly array of pixels, each seen as a single dot on a display unit. That dot corresponds to an average of everything seen within the nonzero-sized instantaneous field of view (IFOV) of a single detector, or of the smallest film element represented in a digitization process. Such detectors or film elements are the most basic sensing elements within the array of elements that comprises the total optical sensor.
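The averaging described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: each recorded pixel value is modeled as the mean of a fine-grained scene over the detector's IFOV, here taken (as an assumption) to be a square block of scene samples.

```python
def ifov_average(scene, block):
    """Average block x block patches of a 2-D scene into single pixels,
    modeling how a detector's IFOV collapses fine detail into one value."""
    rows = len(scene) // block
    cols = len(scene[0]) // block
    image = []
    for r in range(rows):
        row = []
        for c in range(cols):
            total = 0.0
            for i in range(block):
                for j in range(block):
                    total += scene[r * block + i][c * block + j]
            row.append(total / (block * block))
        image.append(row)
    return image

# A 4x4 scene with a bright subpixel feature in the top-left quadrant:
scene = [
    [9, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(ifov_average(scene, 2))  # [[3.0, 0.0], [0.0, 0.0]]
```

Note how the bright point (value 9) and its dimmer surroundings are flattened into a single value of 3.0: the positional detail within the IFOV is exactly what the subpixel method seeks to recover.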
There may be many reasons for concern about representing all that is seen within the IFOV by a single averaged numerical value for each wavelength band. Some examples are when there are fine surface features, relatively rapid changes in ground composition causing corresponding changes in ground reflectance, relatively small structures and objects, or relatively abrupt changes in ground surface altitude (such as may occur with hills, buildings, and mountains).
The November 1998 Optical Engineering article "Data Analysis Systems for Films," by D. Zhang et al., addresses subpixel estimation for sequential spatial imagery of moving objects recorded on film and then converted to digital imagery; it uses a correlation technique to better locate a small number of key spatial points in the image. The current paper describes a method that estimates, for the region within each pixel/IFOV, the spatial coordinates of effective centers-of-gravity of the optical illumination, based on illumination data from neighboring pixels/IFOVs.
These, in turn, form the basis for setting up local coordinate systems in which interpolation based on data from neighboring sensing elements is used to estimate the illumination intensity at each spatial point within each IFOV, for all of the detectors or film elements in the array. This can all be done recursively. Center-of-gravity techniques have been used very successfully in sky-viewing optical trackers that track a single source of illumination, with some instruments yielding accuracies better than 1/100 of a pixel; however, this appears to be the first method for ground-viewing sensors, which require determination of centers-of-gravity within each pixel.
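A center-of-gravity estimate of the kind the paper builds on can be sketched as follows. This is a hedged illustration of the general centroid technique, not the authors' algorithm: given a neighborhood of pixel intensities centered on a pixel, the intensity-weighted mean of the pixel offsets locates the effective illumination center to subpixel precision. The function name and the choice of a 3x3 window are assumptions made for the example.

```python
def subpixel_centroid(window):
    """Return the (dx, dy) offset of the illumination center-of-gravity
    from the center pixel of a 3x3 intensity window, in pixel units."""
    total = 0.0
    sx = 0.0
    sy = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            w = window[dy + 1][dx + 1]
            total += w
            sx += dx * w
            sy += dy * w
    if total == 0.0:
        return (0.0, 0.0)  # no illumination: fall back to the pixel center
    return (sx / total, sy / total)

# A source slightly right of the pixel center pulls the centroid toward +x:
window = [
    [0, 1, 1],
    [0, 2, 2],
    [0, 1, 1],
]
print(subpixel_centroid(window))  # (0.5, 0.0)
```

For a single bright source this simple weighted mean already yields a subpixel location, which is consistent with the tracker accuracies of better than 1/100 of a pixel noted above; the ground-viewing case in the paper is harder because each pixel's neighborhood contains extended, nonuniform illumination rather than one isolated source.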