Sky is the limit

Aerial Cameras

Photographic Camera

The photographic camera is perhaps the most important instrument in remote sensing because it is relatively inexpensive and capable of projecting a very high quality image onto photographic film. Aerial cameras can be classified into four basic types:
1) single lens frame cameras, 2) multilens frame cameras, 3) strip cameras, and 4) panoramic cameras (Lillesand and Kiefer, 1987).

The most common aerial cameras in use today are single lens frame cameras. They are used for obtaining aerial photographs for remote sensing in general, as well as for photogrammetric mapping. Single lens frame cameras employ a low-distortion lens and a between-the-lens shutter, and are designed to provide extremely high geometric image quality, with accuracies approaching 1/10000 of the flying height (photography taken at 1000 meters will locate an object within 10 cm of its true position). The film is advanced automatically at the shooting frequency pre-set on the intervalometer. Rolls of film up to 120 m long can be loaded into a magazine designed for the 240 mm film format (230 mm x 230 mm frame size). The magazine is also equipped with a film flattening mechanism. Lenses with focal lengths of 90 mm, 152 mm, 210 mm and 300 mm may be used; the 152 mm focal length is the most common.
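As a rough illustration of the figures above, both the expected positional accuracy and the photo scale follow directly from the flying height and focal length. The following sketch uses assumed example values, not specifications for any particular camera:

```python
def positional_accuracy_m(flying_height_m, ratio=1.0 / 10000):
    """Ground positional accuracy approaching ~1/10000 of flying height."""
    return flying_height_m * ratio

def photo_scale(focal_length_m, flying_height_m):
    """Scale number S of a vertical photo, where scale = 1:S = f / H."""
    return flying_height_m / focal_length_m

# At 1000 m with the common 152 mm lens:
print(positional_accuracy_m(1000))       # 0.1 m, i.e. 10 cm
print(round(photo_scale(0.152, 1000)))   # scale of about 1:6579
```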

In order to take advantage of wavelength specific radiance emanating from various earth features, multilens frame cameras may be used to obtain several discrete photographs taken simultaneously from the same vantage point. Every lens of the camera could be equipped with a different filter combination.

Strip cameras record images by moving film past a fixed slit in the focal plane as the camera moves forward. The shutter of a strip camera remains open continuously while the picture is made, and the film advances at a speed proportional to the aircraft's ground speed.

With panoramic cameras, ground areas are covered by either rotating the camera lens or rotating a prism in front of the lens. The terrain is scanned from side to side, transverse to the direction of flight. The film is exposed along a curved surface located at the focal distance from the rotating lens assembly, and angular coverage of the camera can extend from horizon to horizon. NASA developed this camera mainly for defense purposes.

For many remote sensing applications, very high geometric image quality is not required. If one is more interested in monitoring changes in spectral reflectance to identify certain types of crops or other vegetation, or in mapping an algae bloom or the extent of water pollution, a small format digital camera can be used, just as 35 mm and 70 mm cameras have been used successfully (Adams et al., 1977, Lillesand and Kiefer, 1987). Clegg and Scherz (1975) compared large format (9 in) frame camera, 70 mm and 35 mm systems for resolution, airphoto interpretation, and metric accuracy. They found the smaller formats to produce imagery of comparably good resolution for altitudes below 3000' AGL. Using a wide-angle 24 mm lens, a 35 mm camera photo covering an area of 2223 x 1525 meters displayed metric accuracy of points within 2.5 m of identical control points on the 9 inch format photo. For vegetation airphoto interpretation, the smaller formats were actually preferred by a number of interpreters. The investigators concluded that medium and small format (70 mm and 35 mm respectively) cameras can be used for environmental mapping as effectively as a large format (230 mm) system at a fraction of the cost.

Photographic Films

While the role of the camera is to deliver a high quality, undistorted image through its refined optics and body construction, it is the film that actually records the image. The importance of film cannot be overemphasized: the type and quality of the recorded data is inevitably tied to the type and chemical composition of the film used.

The black and white film consists of a light sensitive photographic emulsion coated onto a plastic (polyester film) base. The emulsion consists of a thin layer of light sensitive silver halide crystals, or grains, held in place by a solidified gelatin. When exposed to light, the silver halide crystals within the gelatin undergo a photochemical reaction forming an invisible latent image in the form of silver molecules. Upon treatment with suitable chemical agents, the exposed silver salts are reduced to silver grains appearing black, and forming a visible image. The result is a record of the camera's view in which the film areas struck by the most light are darkened by metallic silver, while the areas struck by no light remain transparent due to lack of silver. The intermediate areas have varying amounts of silver creating shades of gray depending on the amount of light striking the film emulsion.

The emulsion can be made sensitive to only UV and blue light; to UV, blue and green light (orthochromatic film); or to UV, blue, green and red light (panchromatic film). Most untreated silver salt emulsions are sensitive only to UV and the shorter (blue) wavelengths of the spectrum. By treating the film during manufacture with special dyes, it is possible to extend the emulsion's sensitivity into the near infrared (0.7 - 1.2 microns) region of the spectrum. The result is black and white infrared film.

The film exposure at any point of the photograph is directly related to the reflectance of the object imaged. Theoretically, the film exposure varies linearly with the object reflectance. In practice however, the relationship between the radiance entering the camera lens and that recorded by the film depends on the particular film characteristic curve. Characteristic curves are different for different film types, for different manufacturing batches and even for films of the same batch. Manufacturing, handling, storage, and processing all affect the film's characteristic response. The following figures show typical components of a black and white negative film characteristic curve (A), and other important film characteristics such as density resolution, radiometric resolution and exposure latitude (B).

Typical negative B&W film characteristic curve (A). Two films with different characteristic curves (B). (Log film exposure "H" on the X axis vs. log film density "D" on the Y axis)

The film characteristic curve plots film density D (on the Y axis) against the logarithm of film exposure, log H (on the X axis). Notice the three divisions of the curve. First the density increases slowly with exposure, at an increasing rate (the toe). This is followed by a relatively linear increase in density (the straight line portion). Finally, the maximum density is approached along a gradually flattening plateau (the shoulder). The slope of the straight line portion of the characteristic curve is the gamma (g) of the film, given as g = ΔD / Δlog H. It is an important determinant of the contrast of the film. Most aerial films, processed under the manufacturers' recommended conditions at the speeds quoted for them, give gammas in the neighbourhood of two. For medium- and high-altitude aerial photography, a gamma of two or three is usually desirable. Gamma is a function not only of the film emulsion but also of the development conditions; it can be varied by changing the developer, the development time and/or the processing temperature. The next important characteristic of a film is its speed, which describes the level of exposure to which the film responds. Graphically, it is represented by the horizontal position of the characteristic curve. In figure (B) above, the curve on the left is that of a "fast" film (it accommodates lower exposure levels; i.e. it is more sensitive), while the curve on the right belongs to a "slow" (less sensitive) film. "Fast" films are characterized by larger film grains and thus tend to have reduced spatial resolution compared with less sensitive films. On the curve for the "slow" film, notice its wider exposure latitude: the range of log H (level of exposure) that yields an acceptable image (a wider range of exposure gives an image with good density resolution, i.e. discrimination between different features remains good over a wider range of exposure). The minimum resolvable exposure difference is inversely proportional to contrast, so for a given density resolvability, a higher contrast film is able to resolve smaller differences in exposure.
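The definition of gamma can be made concrete by estimating it from two points on the straight-line portion of a characteristic curve. The exposure and density values below are assumed for illustration only:

```python
import math

def gamma(h1, d1, h2, d2):
    """Gamma = delta D / delta log10 H between two points (exposure, density)
    on the straight-line portion of the characteristic curve."""
    return (d2 - d1) / (math.log10(h2) - math.log10(h1))

# A tenfold increase in exposure that raises density from 0.5 to 2.5 gives a
# gamma of 2.0 - in the "neighbourhood of two" quoted for aerial films.
print(gamma(1.0, 0.5, 10.0, 2.5))   # 2.0
```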

Thus in general, the slower the speed of the film, the higher are the resolving power and the gamma, while the granularity and the latitude are less. Low contrast films offer greater radiometric range (exposure latitude) at the expense of radiometric resolution.  High contrast films offer a smaller exposure range, but improved radiometric resolution (Lillesand and Kiefer, 1987).

Color film is an advancement over black and white film brought about by the addition of two more light sensitive emulsion layers. As in black and white film, all emulsion layers consist of silver salts, but the layers are treated during manufacture so that each is sensitive to a separate portion of the visible or near infrared spectrum. The top layer in most colour films is sensitive to blue light, the second layer to green and blue light, and the bottom layer to red and blue light. A blue blocking filter is inserted between the top layer and the other two to prevent the two bottom layers from being exposed by blue light. The result is three emulsions effectively sensitive to blue, green, and red light respectively.

To develop the film as a negative, it is initially put into the colour developer, where the dyes are formed as the exposed silver halide is developed. The silver and the silver halide are then removed leaving the dye image. After processing, the blue sensitive layer contains yellow dye, the green sensitive layer contains magenta dye, and the red sensitive layer contains cyan dye. The amount of dye present in each layer is inversely proportional to the amount of  its corresponding primary color present in the original scene photographed. When viewed in composite, the dye layers produce visual sensation of the color in the original scene. 

  Sensitivity curves - Kodak Aerochrome Infrared Film 2443, and Kodak Aerocolor Negative Film 2445

The final film processing depends on whether the film is a negative or a reversal film. Negative film produces an image in negative colors or shades of gray and requires print development to obtain the final product. In reversal film, whether color or black and white, the final color tone of the transparency matches the original color of the scene photographed.

False-color film

The assignment of spectral sensitivities to the color film layers does not have to be the one described above for films with normal-color rendition. If it differs, the result is an infrared (IR) film with a false-color rendition. In infrared sensitive color films, one of the layers is made sensitive to the infrared spectral region, while the other layers keep their sensitivities in the visible spectral region. These films may be made for either negative or positive (reversal) processing.

From the discussion of color film above, recall that normal color film is sensitive to the blue, green and red spectral regions. Associated with these sensitivities are the yellow, magenta, and cyan dyes respectively, which after processing combine to reproduce the blues, greens, and reds of the original scene. With color infrared sensitive film, the individual layers are sensitive to green, red and infrared radiation. The same three dyes - yellow, magenta and cyan - are associated with image formation, but their sensitivities have been shifted toward longer wavelengths, so that the yellow, magenta, and cyan dyes now represent green, red, and infrared radiation in the image (Doyle F.J. et al., 1983).

With both color and infrared color films, since each layer is sensitive only to its own wavelength region, the data are recorded in three separate bands, and the original image can be broken down electronically into three separate bands approximately 100 nm wide. A normal color film, for example, would generate three bands: red (600-700 nm), green (500-600 nm), and blue (400-500 nm). We thus have a simple wide-band multispectral sensor.
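Treating a color photograph as three wide spectral bands can be sketched as follows, once the photo has been digitized into an array (the pixel values below are arbitrary test data, not a real image):

```python
import numpy as np

# A digitized color photo as an (H, W, 3) array; the three dye layers act as
# three wide (~100 nm) spectral bands that can be separated channel by channel.
rgb = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)

red_band   = rgb[:, :, 0]   # ~600-700 nm
green_band = rgb[:, :, 1]   # ~500-600 nm
blue_band  = rgb[:, :, 2]   # ~400-500 nm

# Each extracted band is a single-channel image the same size as the photo.
print(red_band.shape)   # (4, 4)
```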

Non-photographic Cameras and Multispectral Scanners

One well known example of this type of camera is a digital camera. Such systems use a camera body and a lens, but record image data with light sensitive detectors that generate electrical signals that are then stored on a medium other than photographic film.

Solid-State Array Cameras use one- or two-dimensional detector arrays of charge coupled devices (CCDs) for image data acquisition. A CCD is a microelectronic silicon chip, a solid-state sensor that detects light. When light strikes the CCD's silicon surface, electronic charges are produced, with the magnitude of the charge proportional to the light intensity and the exposure time. Each detector element within the array produces the smallest unit of the resulting image - the pixel (Lillesand and Kiefer, 1987). The digital camera resolution depends on the number of light sensitive detectors (pixels) in each array, the size of the array, the lens used, as well as on the camera's image processing. The resolution of today's high-end 35 mm type DSLR (digital single lens reflex) cameras, with two-dimensional arrays of 14 megapixels or more on a 36 x 24 mm sensor, is approaching that of a 35 mm film camera. However, few medium format digital cameras can approach the resolution of medium format (70 mm) film. As of 2008, the 70 mm Hasselblad camera with a Phase One digital back had a two-dimensional array of 60.5 megapixels (8984 x 6732) and an effective sensor size of 53.9 x 40.4 mm.

In remote sensing applications, large format digital frame cameras are entering the market as well. One example is the Vexcel UltraCamX aerial digital camera, with a two-dimensional CCD of 216 MP (14430 x 9400 pixels) in panchromatic mode and 4810 x 3140 pixels in RGB & NIR (400 nm - 1000 nm) mode. The ground resolution then depends on the size of the CCD as well as the altitude of the sensor. For example, the high-resolution-visible (HRV) systems on the French SPOT-2 (Système probatoire d'observation de la terre) satellite have a ground resolution of 10 meters when operating in black and white (panchromatic) mode and 20 meters when operating in colour infrared (multispectral) mode.
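The dependence of ground resolution on detector size and altitude can be sketched with the standard ground sample distance (GSD) relation for a nadir-looking frame sensor. The detector pitch, focal length, and altitude below are assumed example values, not the specifications of any sensor mentioned above:

```python
def gsd_m(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground sample distance for a nadir-looking sensor:
    GSD = detector pitch x altitude / focal length."""
    return pixel_pitch_m * altitude_m / focal_length_m

# A 7.2 micron detector behind a 100 mm lens flown at 3000 m:
print(gsd_m(7.2e-6, 0.100, 3000))   # 0.216 m per pixel

# The same detector flown twice as high covers twice the ground per pixel:
print(gsd_m(7.2e-6, 0.100, 6000))   # 0.432 m per pixel
```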

Because reflectance from various earth features is wavelength dependent, the need for narrow-band multispectral sensors was recognized from the early days of remote sensing, and multispectral scanners (MSS) were developed. They sense bands ranging from the UV through the visible to the near-IR, mid-IR and thermal IR portions of the spectrum. MSS use electronic detectors and are designed to sense energy in a number of narrow spectral bands simultaneously. They are equipped with a scanning mirror and optics which direct the incoming energy to be separated into several spectral components that are sensed independently. A dichroic grating separates the non-thermal wavelengths from the thermal wavelengths. The non-thermal component is directed from the grating through a prism that splits the energy into a continuum of UV, visible and IR wavelengths. By placing an array of detectors at the proper geometric positions behind the grating and the prism, the incoming beam can be separated and measured independently in multiple narrow bands (Lillesand and Kiefer, 1987).

In general, digital imagery collected in panchromatic mode is of much higher spatial resolution than imagery obtained in multispectral mode. For this reason, techniques have been developed to fuse co-georegistered high spatial resolution panchromatic images with a set of coarse (low) spatial resolution multispectral (colour) images to obtain a fine (high) spatial resolution colour image. This so-called pan-sharpening combines multiple images into composite products that can reveal more information than any of the individual input images.
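One of the simplest pan-sharpening methods is the Brovey transform, sketched below on toy data. This is an illustrative example of the general fusion concept, not the method used by any particular sensor pipeline; it assumes the multispectral bands have already been resampled to the panchromatic grid:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey transform pan-sharpening.
    ms:  (H, W, 3) float multispectral image, resampled to the pan grid.
    pan: (H, W) float panchromatic image.
    Each band is rescaled so the per-pixel intensity matches the pan image
    while the ratios between bands (the colour) are preserved."""
    intensity = ms.mean(axis=2) + 1e-9          # avoid division by zero
    return ms * (pan / intensity)[:, :, None]

# Toy data: a 4 x 4 pan image and a co-registered 3-band multispectral image.
pan = np.full((4, 4), 120.0)
ms = np.stack([np.full((4, 4), 30.0),
               np.full((4, 4), 60.0),
               np.full((4, 4), 90.0)], axis=2)

fused = brovey_pansharpen(ms, pan)
print(fused[0, 0])   # band ratios preserved; mean now matches the pan value
```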


© 2010 Eco-Scientific Consultants.