Civil NIIRS Reference Guide

Appendix III
History of NIIRS

III-1 History of the National Imagery Interpretability Rating Scales

III-2 Introduction

For over 20 years, the Intelligence Community has used the National Imagery Interpretability Rating Scale (NIIRS) to quantify the interpretability or usefulness of imagery. The NIIRS was developed to provide an objective standard for image quality, since the term image quality can mean different things to imagery analysts and engineers.

The NIIRS uses the phrase "information potential for intelligence purposes" as a substitute for image quality. NIIRS ratings thus serve as a shorthand description of the information that can be extracted from a given image.

The need for NIIRS arose from the inability of simple physical image quality measures, such as image scale or resolution, to adequately predict image interpretability. More complex measures, such as modulation transfer function (MTF)-based metrics, did not successfully communicate information to imagery analysts.

The NIIRS was developed and is maintained under the auspices of the US Government's Imagery Resolution Assessments and Reporting Standards (IRARS) Committee. In addition to the original NIIRS, IRARS has developed and published several other scales and products of potential interest to the civil remote-sensing community. This appendix provides an overview of the NIIRS and a description of NIIRS products.

III-3 Background

The need for a means of communicating the interpretability of an aerial image was recognized more than 40 years ago. The 1954 edition of the Photographic Interpretation Handbook (US Government Printing Office, 1954) includes a table defining the minimum photographic scales for interpretation and identification of a wide range of military and industrial targets. An expanded version of the same table appears in the 1967 edition of the same document (Naval Reconnaissance and Technical Support Center, 1967). Table III-1 shows a portion of this "scale".

As long as aerial films had similar resolutions and the performance of camera lenses was defined by their focal length, the interpretability of an image could largely be expressed by its photographic scale (i.e., the ratio of film distance to ground distance). Advances in camera and film technology caused the guidelines relating scale and interpretability to become outdated. The 1967 edition of the Photographic Interpretation Handbook (Naval Reconnaissance and Technical Support Center, 1967) alludes to this issue by indicating that the guidelines were based on cameras having an average system resolution of 15 to 20 lines (line pairs) per millimeter. The handbook states that "Photographs from cameras known to have higher or lower resolution values should be adjusted accordingly," but does not specify how this resolution value adjustment should be accomplished.
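
The adjustment the handbook leaves unspecified follows directly from the definition of scale. The sketch below is ours, not the handbook's; the function name and the per-line-pair convention for ground resolved distance (GRD) are assumptions made for illustration.

```python
# Sketch: how photographic scale and film resolution combine into a ground
# resolved distance (GRD). Assumes GRD is quoted per resolvable line pair;
# the names and conventions are illustrative, not taken from the handbook.

def ground_resolved_distance_m(scale_denominator: float,
                               resolution_lp_per_mm: float) -> float:
    """GRD in meters for imagery at 1:scale_denominator from a system
    resolving resolution_lp_per_mm line pairs per millimeter on film."""
    film_mm_per_line_pair = 1.0 / resolution_lp_per_mm    # smallest resolvable film distance
    ground_mm = film_mm_per_line_pair * scale_denominator # project onto the ground
    return ground_mm / 1000.0                             # millimeters -> meters

# At the handbook's baseline of 20 lp/mm, a 1:10,000 photograph resolves
# about 0.5 m per line pair; a 40 lp/mm system reaches the same GRD at
# half the scale, which is the "adjustment" the handbook implies.
print(ground_resolved_distance_m(10_000, 20))  # 0.5
print(ground_resolved_distance_m(20_000, 40))  # 0.5
```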

Table III-1. Minimum Scales for Interpretation and Identification³

Subject               Minimum Scale, Identification    Minimum Scale, Technical Analysis
Rail                  1/30,000                         1/8,000
Road                  1/30,000                         1/5,000
Gas Plant             1/20,000                         1/8,000
Hydroelectric         1/30,000                         1/10,000
Vegetation            1/20,000                         1/8,000
Radar (fixed)         1/10,000                         1/5,000
Aircraft (<40 ft)     1/10,000                         1/2,000

3. Data extracted from US Government Printing Office, 1954.

The problem was further exacerbated by the increasing use of non-vertical photography (panoramic, strip, and oblique), as well as the introduction of electro-optical, IR, and radar systems. The concept of photographic scale as a universal indicator of interpretability was no longer viable.

Beginning in the late 1950's and continuing into the 1970's, a substantial body of research was developed that related physical measures of image quality to interpretability or interpretation performance. Most of this work was performed in the visible portion of the spectrum using television and photographic imagery. In some cases, synthetically generated imagery was used. Much of the effort was also focused on better understanding the process of interpretation. Initial efforts dealing with physical measures tended to concentrate on resolution-related measures, but in some cases also included the effects of contrast and noise (see, for example, Bennett et al., 1963; Applied Psychology Corporation, 1963; Erickson and Hemingway, 1969). Later work used more sophisticated measures such as the modulation transfer function area (MTFA), the optical power spectrum, and a signal-to-noise metric (Borrough et al., 1967; Rossell and Willson, 1973; Schindler, 1979; Snyder, 1976; Task, 1979).

A literature review conducted in 1970 (Leachtenauer and Navle, 1970) concluded that "no single measure of image quality is now available to satisfactorily predict (interpretation) performance". This situation arose for several reasons. Image interpretation covers a wide diversity of tasks, and the impact of quality on the performance of those tasks is not uniform. Image quality parameters also interacted with one another, in terms of both physical and performance measures; for example, increasing noise may lower observed resolution, but may have a different effect on interpretation performance than simply lowering resolution. Most studies investigated a limited range of targets and interpretation tasks, making generalization across studies difficult. Most of the studies also assumed linear relationships between quality and performance, although later work showed that many of the relationships were non-linear. Simple measures of quality such as resolution failed to account for all the factors affecting interpretability and were themselves subject to measurement error. More complex measures of quality (e.g., MTFA) required special targets and measurement expertise seldom available to the imagery user. Further, these measures did not communicate directly to the user of the imagery: to say that an image or imaging system has an MTFA of "X" has no meaning to an imagery analyst. As far as is known, none of this extensive body of image quality research was directly applied to the operational world.

Tri-bar resolution was perhaps the most often applied metric, but again gave ambiguous results. This approach used imaged test patterns known as tri-bar targets (Figure III-1) to assess total system performance: if the smaller tri-bar targets were resolvable on the imagery, the image was of higher quality. Resolution, however, is not the sole determinant of image interpretability, and thus measures of resolution did not always predict interpretability correctly. Attempts were made to develop standardized resolution targets and to standardize the readout process. However, the user community generally requested imagery on the basis of system knowledge and experience, and categorized the results on a subjective goodness scale (poor, fair, good, excellent).
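
To make the readout concrete: for the widely used USAF 1951 target layout (an assumption here; the text does not name a specific tri-bar standard), the smallest resolvable group and element map to a spatial frequency, as in this illustrative sketch.

```python
# Sketch: converting a tri-bar readout (smallest resolvable group/element)
# into line pairs per millimeter. Assumes the USAF 1951 target layout;
# the Guide does not specify which tri-bar standard was used.

def usaf1951_lp_per_mm(group: int, element: int) -> float:
    """Spatial frequency of a USAF 1951 element: 2**(group + (element - 1)/6)."""
    return 2.0 ** (group + (element - 1) / 6.0)

# If group 2, element 3 is the smallest pattern an analyst can resolve,
# the system resolves roughly 5.04 line pairs per millimeter at the target.
print(round(usaf1951_lp_per_mm(2, 3), 2))  # 5.04
```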

Again, with the proliferation of new systems having widely disparate characteristics, simple measures of quality, whether objective or subjective, were no longer adequate to request images or express the utility of the imagery. This situation led to the development of the National Imagery Interpretability Rating Scale (NIIRS).

III-4 NIIRS Origin

The original NIIRS was developed by a government/contractor team in the early 1970's. The team operated under the auspices of the IRARS Committee. The IRARS had originally been formed to standardize the reading of tri-bar resolution targets.

The goal was to develop a perceptually-based scale relating the ability to perform defined interpretation tasks to the quality of the imagery. The capability of an image to support specified interpretation tasks was defined based on the judgment of a trained imagery analyst. A sample of imagery analysts first defined and then rated the relative difficulty of a large set of interpretation tasks. These tasks ranged from very easy, low-level detection/identification of large objects or features to detailed analysis of very small features. All the tasks were "intelligence" related and dealt with military objects and features. Analysts were then presented with imagery of varying, but known, quality and asked to define the most difficult task that could "just be performed" on that image. Based on this effort, the original 10-level NIIRS was developed.

The NIIRS is a task- or criteria-based scale with criteria defined at each level for five general categories of military equipment (air, electronic, ground, missile, and naval). The NIIRS rating of an image is determined by examining the image and identifying the most difficult criterion or task that can be accomplished with it. The NIIRS was officially introduced to the intelligence community in 1974 and has been in use in various formats since that time.
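
The rating logic itself is simple enough to state precisely. The following is a minimal sketch of that logic with hypothetical names and inputs; it is not an official IRARS procedure.

```python
# Sketch of the rating logic described above: an image is rated at the most
# difficult criterion level that can "just be performed" with it. The
# criteria levels and analyst judgments here are hypothetical placeholders.

def niirs_rating(accomplishable_levels: set[int]) -> int:
    """Return the highest NIIRS level among the criteria the analyst judged
    accomplishable; an image supporting no criteria rates level 0."""
    return max(accomplishable_levels, default=0)

# An analyst judges that criteria at levels 1-4 can be performed, 5 cannot:
print(niirs_rating({1, 2, 3, 4}))  # NIIRS 4
```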

The original NIIRS was adopted, at least in part, by NATO. However, it was termed the Imagery Interpretability Rating Scale (IIRS) and was combined with what appears to be an update of the photo scale requirements defined in the 1950s (US Government Printing Office, 1954). A table relating interpretation task performance to ground resolved distance (GRD) was published as part of the STANAG documentation. The STANAG IIRS deleted or re-worded many of the NIIRS criteria and added many of the tasks in the GRD table as IIRS criteria. In addition, the IIRS included a GRD range for each of the IIRS levels.

III-5 Evolution of NIIRS

Since its introduction in 1974, the NIIRS has continued to evolve and grow in scope. In the late 1980's, it was recognized that many of the objects referenced by the 1974 NIIRS criteria were no longer commonly seen and were thus unfamiliar to most imagery analysts. Consequently, analysts had difficulty using the scale.

Therefore, IRARS undertook an effort to update the scale. This update entailed a community survey to define a new set of exploitation tasks (and associated objects) as well as a new set of scale development procedures (Irvine and Leachtenauer, 1996).

The initial NIIRS dealt only with military objects and features. When such objects were not present in a scene, analysts often had difficulty in providing a NIIRS rating. Accordingly, a decision was made to add a cultural criteria⁴ set to the scale. The cultural criteria set uses features present in any scene having cultural content. Thus, cultural criteria can be used when military objects are not present.

The development of the revised NIIRS examined the criteria from the 1974 version of the scale. The more rigorous development methods of the revised scale uncovered several flaws in the original scale. This in turn meant that the NATO IIRS was flawed. Unfortunately, this fact was not communicated to the proper forum. The IIRS continues to exist and causes confusion for people exposed to both the IIRS and NIIRS.

More recently, the need for interpretability scales for radar and IR imagery became evident. An initial thought was to simply apply the Visible NIIRS to radar and IR imagery. It quickly became apparent that the criteria/interpretability continuum for visible imagery did not apply to radar imagery in particular and further did not cover the unique contributions observed on thermal IR imagery. Consequently, the NIIRS development process was applied separately to radar and IR imagery. This resulted in separate 10-level scales for both radar and IR (see Tables III-2 and III-3). In addition, IRARS has updated the Visible NIIRS over the years to reflect user feedback and to correct problems uncovered through extensive use of the scale. The most recent modification to the Visible NIIRS was made in 1994, as shown in Table III-4.

The NIIRS is designed to relate to a wide range of image quality. A NIIRS level of 0 is defined as having no value for intelligence purposes; the definition of that level is perhaps open to debate. The upper end of the scale is defined by the best available imagery of a given type (i.e., from a given portion of the spectrum) and is not necessarily indicative of any specific operational system.

4. Cultural criteria reference constructed objects such as buildings, roads, railroads, and bridges.

Table III-2. Infrared National Imagery Interpretability Rating Scale (NIIRS) - April 1996

RATING LEVEL 0
Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.

RATING LEVEL 1
Distinguish between runways and taxiways on the basis of size, configuration or pattern at a large airfield.
Detect a large (e.g., greater than 1 square kilometer) cleared area in dense forest.
Detect large ocean-going vessels (e.g., aircraft carrier, super-tanker, KIROV) in open water.
Detect large areas (e.g., greater than 1 square kilometer) of marsh/swamp.

RATING LEVEL 2
Detect large aircraft (e.g., C-141, 707, BEAR, CANDID, CLASSIC).
Detect individual large buildings (e.g., hospitals, factories) in an urban area.
Distinguish between densely wooded, sparsely wooded and open fields.
Identify an SS-25 base by the pattern of buildings and roads.
Distinguish between naval and commercial port facilities based on type and configuration of large functional areas.

RATING LEVEL 3
Distinguish between large (e.g., C-141, 707, BEAR, A-300 AIRBUS) and small aircraft (e.g., A-4, FISHBED, L-39).
Identify individual thermally active flues running between the boiler hall and smoke stacks at a thermal power plant.
Detect a large air warning radar site based on the presence of mounds, revetments and security fencing.
Detect a driver training track at a ground forces garrison.
Identify individual functional areas (e.g., launch sites, electronics area, support area, missile handling area) of an SA-5 launch complex.
Distinguish between large (e.g., greater than 200 meter) freighters and tankers.

RATING LEVEL 4
Identify the wing configuration of small fighter aircraft (e.g., FROGFOOT, F-16, FISHBED).
Detect a small (e.g., 50 meter square) electrical transformer yard in an urban area.
Detect large (e.g., greater than 10-meter diameter) environmental domes at an electronics facility.
Detect individual thermally active vehicles in garrison.
Detect thermally active SS-25 MSV's in garrison.
Identify individual closed cargo hold hatches on large merchant ships.

RATING LEVEL 5
Distinguish between single-tail (e.g., FLOGGER, F-16, TORNADO) and twin-tailed (e.g., F-15, FLANKER, FOXBAT) fighters.
Identify outdoor tennis courts.

RATING LEVEL 5 (Cont.)
Identify the metal lattice structure of large (e.g., approximately 75 meter) radio relay towers.
Detect armored vehicles in a revetment.
Detect a deployed TET (transportable electronics tower) at an SA-10 site.
Identify the stack shape (e.g., square, round, oval) on large (e.g., greater than 200 meter) merchant ships.

RATING LEVEL 6
Detect wing-mounted stores (i.e., ASM, bombs) protruding from the wings of large bombers (e.g., B-52, BEAR, BADGER).
Identify individual thermally active engine vents atop diesel locomotives.
Distinguish between a FIX FOUR and FIX SIX site based on antenna pattern and spacing.
Distinguish between thermally active tanks and APCs.
Distinguish between a 2-rail and 4-rail SA-3 launcher.
Identify missile tube hatches on submarines.

RATING LEVEL 7
Distinguish between ground attack and interceptor versions of the MIG-23 FLOGGER based on the shape of the nose.
Identify automobiles as sedans or station wagons.
Identify antenna dishes (less than 3 meters in diameter) on a radio relay tower.
Identify the missile transfer crane on a SA-6 transloader.
Distinguish between an SA-2/CSA-1 and a SCUD-B missile transporter when missiles are not loaded.
Detect mooring cleats or bollards on piers.

RATING LEVEL 8
Identify the RAM airscoop on the dorsal spine of FISHBED J/K/L.
Identify limbs (e.g., arms, legs) on an individual.
Identify individual horizontal and vertical ribs on a radar antenna.
Detect closed hatches on a tank turret.
Distinguish between fuel and oxidizer Multi-System Propellant Transporters based on twin or single fitments on the front of the semi-trailer.

Identify individual posts and rails on deck edge life rails.

RATING LEVEL 9
Identify access panels on fighter aircraft.
Identify cargo (e.g., shovels, rakes, ladders) in an open bed, light-duty truck.
Distinguish between BIRDS EYE and BELL LACE antennas based on the presence or absence of small dipole elements.
Identify turret hatch hinges on armored vehicles.
Identify individual command guidance strip antennas on an SA-2/CSA-1 missile.
Identify individual rungs on bulkhead mounted ladders.

Table III-3. Radar National Imagery Interpretability Rating Scale (NIIRS) - August 1992

RATING LEVEL 0
Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.

RATING LEVEL 1
Detect the presence of aircraft dispersal parking areas.
Detect a large cleared swath in a densely wooded area.
Detect, based on presence of piers and warehouses, a port facility.
Detect lines of transportation (either road or rail), but do not distinguish between.

RATING LEVEL 2
Detect the presence of large (e.g., BLACKJACK, CAMBER, COCK, 707, 747) bombers or transports.
Identify large phased-array radars (e.g., HEN HOUSE, DOG HOUSE) by type.
Detect a military installation by building pattern and site configuration.
Detect road pattern, fence and hardstand configuration at SSM launch sites (missile silos, launch control silos) within a known ICBM complex.
Detect large non-combatant ships (e.g., freighters or tankers) at a known port facility.
Identify athletic stadiums.

RATING LEVEL 3
Detect medium-sized aircraft (e.g., FENCER, FLANKER, CURL, COKE, F-15).
Identify an ORBITA site on the basis of a 12-meter dish antenna normally mounted on a circular building.
Detect vehicle revetments at a ground forces facility.
Detect vehicles/pieces of equipment at a SAM, SSM, or ABM fixed missile site.
Determine the location of the superstructure (e.g., fore, amidships, aft) on a medium-sized freighter.
Identify a medium-sized (approx. six track) railroad classification yard.

RATING LEVEL 4
Distinguish between large rotary-wing and medium fixed-wing aircraft (e.g., HALO helicopter vs. CRUSTY transport).
Detect recent cable scars between facilities or command posts.
Detect individual vehicles in a row at a known motor pool.
Distinguish between open and closed sliding roof areas on a single bay garage at a mobile missile base.
Identify square bow shape of ROPUCHA class (LST).
Detect all rail/road bridges.

RATING LEVEL 5
Count all medium helicopters (e.g., HIND, HIP, HAZE, HOUND, PUMA, WASP).
Detect deployed TWIN EAR antenna.
Distinguish between river crossing equipment and medium/heavy armored vehicles by size and shape (e.g., MTU-20 vs. T-62 MBT).
Detect missile support equipment at an SS-25 RTP (e.g., TEL, MSV).
Distinguish bow shape and length/width differences of SSNs.
Detect the break between railcars (count railcars).

RATING LEVEL 6
Distinguish between variable and fixed-wing fighter aircraft (e.g., FENCER vs. FLANKER).
Distinguish between the BAR LOCK and SIDE NET antennas at a BAR LOCK/SIDE NET acquisition radar site.
Distinguish between small support vehicles (e.g., UAZ-69, UAZ-469) and tanks (e.g., T-72, T-80).
Identify SS-24 launch triplet at a known location.
Distinguish between the raised helicopter deck on a KRESTA II (CG) and the helicopter deck with main deck on a KRESTA I (CG).
Identify a vessel by class when singly deployed (e.g., YANKEE I, DELTA I, KRIVAK II FFG).
Detect cargo on a railroad flatcar or in a gondola.

RATING LEVEL 7
Identify small fighter aircraft by type (e.g., FISHBED, FITTER, FLOGGER).
Distinguish between electronics van trailers (without tractor) and van trucks in garrison.
Distinguish, by size and configuration, between a turreted, tracked APC and a medium tank (e.g., BMP-1/2 vs. T-64).
Detect a missile on the launcher in an SA-2 launch revetment.
Distinguish between bow mounted missile system on KRIVAK I/II and bow mounted gun turret on KRIVAK III.
Detect road/street lamps in an urban, residential area or military complex.

RATING LEVEL 8
Distinguish the fuselage difference between a HIND and a HIP helicopter.
Distinguish between the FAN SONG E missile control radar and the FAN SONG F based on the number of parabolic dish antennas (three vs. one).
Identify the SA-6 transloader when other SA-6 equipment is present.
Distinguish limber hole shape and configuration differences between DELTA I and YANKEE I (SSBNs).
Identify the dome/vent pattern on rail tank cars.

RATING LEVEL 9
Detect major modifications to large aircraft (e.g., fairings, pods, winglets).
Identify the shape of antennas on EW/GCI/ACQ radars as parabolic, parabolic with clipped corners, or rectangular.
Identify, based on presence or absence of turret, size of gun tube, and chassis configuration, wheeled or tracked APCs by type (e.g., BTR-80, BMP-1/2, MT-LB, M113).
Identify the forward fins on an SA-3 missile.
Identify individual hatch covers of vertically launched SA-N-6 surface-to-air system.
Identify trucks as cab-over-engine or engine-in-front.

Table III-4. Visible National Imagery Interpretability Rating Scale (NIIRS) - March 1994

RATING LEVEL 0
Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.

RATING LEVEL 1
Detect a medium-sized port facility and/or distinguish between taxiways and runways at a large airfield.

RATING LEVEL 2
Detect large hangars at airfields.
Detect large static radars (e.g., AN/FPS-85, COBRA DANE, PECHORA, HEN HOUSE).
Detect military training areas.
Identify an SA-5 site based on road pattern and overall site configuration.
Detect large buildings at a naval facility (e.g., warehouses, construction hall).
Detect large buildings (e.g., hospitals, factories).

RATING LEVEL 3
Identify the wing configuration (e.g., straight, swept, delta) of all large aircraft (e.g., 707, CONCORD, BEAR, BLACKJACK).
Identify radar and guidance areas at a SAM site by the configuration, mounds, and presence of concrete aprons.
Detect a helipad by the configuration and markings.
Detect the presence/absence of support vehicles at a mobile missile base.
Identify a large surface ship in port by type (e.g., cruiser, auxiliary ship, noncombatant/merchant).
Detect trains or strings of standard rolling stock on railroad tracks (not individual cars).

RATING LEVEL 4
Identify all large fighters by type (e.g., FENCER, FOXBAT, F-15, F-14).
Detect the presence of large individual radar antennas (e.g., TALL KING).
Identify, by general type, tracked vehicles, field artillery, large river crossing equipment, wheeled vehicles when in groups.
Detect an open missile silo door.
Determine the shape of the bow (pointed or blunt/rounded) on a medium-sized submarine (e.g., ROMEO, HAN, Type 209, CHARLIE II, ECHO II, VICTOR II/III).
Identify individual tracks, rail pairs, control towers, switching points in rail yards.

RATING LEVEL 5
Distinguish between a MIDAS and a CANDID by the presence of refueling equipment (e.g., pedestal and wing pod).
Identify radar as vehicle-mounted or trailer-mounted.
Identify, by type, deployed tactical SSM systems (e.g., FROG, SS-21, SCUD).
Distinguish between SS-25 mobile missile TEL and Missile Support Vans (MSVs) in a known support base, when not covered by camouflage.

RATING LEVEL 5 (Cont.)
Identify TOP STEER or TOP SAIL air surveillance radar on KIROV-, SOVREMENNY-, KIEV-, SLAVA-, MOSKVA-, KARA-, or KRESTA-II-class vessels.
Identify individual rail cars by type (e.g., gondola, flat, box) and/or locomotives by type (e.g., steam, diesel).

RATING LEVEL 6
Distinguish between models of small/medium helicopters (e.g., HELIX A from HELIX B from HELIX C, HIND D from HIND E, HAZE A from HAZE B from HAZE C).
Identify the shape of antennas on EW/GCI/ACQ radars as parabolic, parabolic with clipped corners or rectangular.
Identify the spare tire on a medium-sized truck.
Distinguish between SA-6, SA-11, and SA-17 missile airframes.
Identify individual launcher covers (8) of vertically launched SA-N-6 on SLAVA-class vessels.
Identify automobiles as sedans or station wagons.

RATING LEVEL 7
Identify fitments and fairings on a fighter-sized aircraft (e.g., FULCRUM, FOXHOUND).
Identify ports, ladders, vents on electronics vans.
Detect the mount for antitank guided missiles (e.g., SAGGER on BMP-1).
Detect details of the silo door hinging mechanism on Type III-F, III-G, and III-H launch silos and Type III-X launch control silos.
Identify the individual tubes of the RBU on KIROV-, KARA-, KRIVAK-class vessels.
Identify individual rail ties.

RATING LEVEL 8
Identify the rivet lines on bomber aircraft.
Detect horn-shaped and W-shaped antennas mounted atop BACKTRAP and BACKNET radars.
Identify a hand-held SAM (e.g., SA-7/14, REDEYE, STINGER).
Identify joints and welds on a TEL or TELAR.
Detect winch cables on deck-mounted cranes.
Identify windshield wipers on a vehicle.

RATING LEVEL 9
Differentiate cross-slot from single slot heads on aircraft skin panel fasteners.
Identify small light-toned ceramic insulators that connect wires of an antenna canopy.
Identify vehicle registration numbers (VRN) on trucks.
Identify screws and bolts on missile components.
Identify braid of ropes (1 to 3 inches in diameter).
Detect individual spikes in railroad ties.

III-6 References

Applied Psychology Corporation, Performance of Photographic Interpreters as a Function of Time and Image Characteristics, RADC-TDR-63-313, Rome Air Development Center, Rome, NY, 1963.

Bennett, C.C., Winterstein, S.H., Taylor, J.D., and Kent, R.E., A Study of Image Quality and Speeded Intrinsic Target Recognition, IBM No. 63-535-1, IBM Federal Systems Division, Owego, NY, 1963.

Borrough, H.C., Fallis, R.F., Warnock, T.H., and Britt, J.H., Quantitative Determination of Image Quality, D2-114058, Boeing Aerospace Co., Kent, WA, 1967.

Erickson, R.A., and Hemingway, J.C., Identification via Television: Size and Scan Lines, Paper presented at the NATO Symposium on Image Evaluation, Munich, Germany, August 1969.

Irvine, J.M., and Leachtenauer, J.C., A Methodology for Developing Image Interpretability Rating Scales, Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meetings, April 1996.

Leachtenauer, J.C. and Navle, D.W., Image Quality State of the Art Review, D2-121692, Boeing Company, Seattle, WA, 1970.

Naval Reconnaissance and Technical Support Center, Image Interpretation Handbook, TM30-45/NAVAER 10-35-610/AFM 200-50, US Government Printing Office, Washington, DC, 1967.

Nill, N.H., and Bouzas, B.H., Objective Image Quality Measure Derived from Digital Image Power Spectra, Optical Engineering, Vol. 31, No. 4, April 1992.

Rossell, F.A., and Willson, R.H., Recent Psychophysical Experiments and the Display Signal-to-Noise Ratio Concept, in Biberman, L.M., Editor, Perception of Displayed Information, Plenum Press, NY, 1973.

Snyder, H.L., Visual Search and Image Quality, AMRL-TR-76-89, Aerospace Medical Research Laboratory, Wright- Patterson AFB, OH, 1976.

Schindler, R.A., Optical Power Spectrum Analysis of Processed Imagery, AMRL-TR-79-29, Aerospace Medical Research Laboratory, Wright-Patterson AFB, OH, 1979.

Task, H.L., An Evaluation and Comparison of Several Measures of Image Quality for Television Displays, AMRL-TR-79-7, USAF Aerospace Medical Research Laboratories, Wright-Patterson AFB, OH, 1979.

US Government Printing Office, Photo Interpretation Handbook, TM30-45/NAVAER 10-35-610/AFM 200-50, Washington, DC, 1954.