Patuxent Wildlife Research Center
NAAMP III Archive - statistical issues
A response to the paper
A Heuristic Approach to Validating Surveys Based on Count Indices (Damn the Statisticians, Full Speed Ahead)
Homines id quod volunt credunt. (People believe what they want to believe.)
[ Original Paper ]
Sam has raised a critical issue for all large-scale monitoring surveys. He defines the problem as:
"Validation is a process in which you, the person who wishes to start a new monitoring program, investigates the relationship between your proposed count index (e.g., number of salamanders hiding under boards, counts of calling frogs in wetlands, number of tadpoles caught in a sweep of a net) and the real number of critters in the area. If your index (or count) behaves properly (a graph of count vs. true population = a straight line. Figure 1), then your count would appear to be an unbiased index to population size. Congratulations. If your index behaves improperly (graph does not yield a straight line. Figure 2) then you need to make some corrections to your index."
Although one would like to have a relationship like Figure 1, I do not think it is necessary. In my view, monitoring surveys should be designed to detect and document gross population changes, especially those that raise concerns about the viability of a population. To achieve that, I think all one needs is a monotonic relationship between the index (count) and the population size over the range of critically small population sizes. If the count approaches an asymptote as the population continues to increase (e.g., call saturation; Fig. 2, line 2), there will not be a problem as long as the concern is limited to the ascending portion of the curve at lower population levels.

I think the critical question is whether or not the index will reliably reflect a substantial population decrease at low population densities. This requires a strong causal relationship between the population size and the index, so that the index reflects population changes in all situations. Although an index may be affected by many factors in addition to population size, the measurement error and bias should be small compared with the population effects one wants to detect. Thus, I think that even very rough, inexpensive indices can be very useful for monitoring. More accurate methods are available, but they are probably not practical for monitoring large areas because of the cost of measuring populations at a large number of sites. As always, the usefulness of any tool depends on what you plan to do with it.
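The point that a saturating but monotonic index still works at low densities can be illustrated with a small numeric sketch. The saturating functional form and its constants below are illustrative assumptions, not taken from the paper:

```python
def saturating_index(pop_size, k=200.0):
    """Hypothetical count index with call saturation: monotonic in
    population size, but flattening toward an asymptote (governed
    by k) as the population grows."""
    return pop_size / (1.0 + pop_size / k)

# At high densities the index barely responds: doubling the
# population from 1000 to 2000 raises the index by under 10%.
high_change = saturating_index(2000) / saturating_index(1000)

# But at low densities a 50% crash remains clearly visible:
# the index drops by 40%.
low_change = saturating_index(50) / saturating_index(100)
```

So even though this index would badly understate growth in a large population, it still registers exactly the kind of decline at low densities that the monitoring program is meant to catch.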
The world's finest rip saw makes an awful hammer.
We may be interested in small changes, but if we are honest we must admit that meaningful action will probably not be taken until 50% to 90% of the population is lost. Our society does not respond unless there is a crisis. Look at the example of African famine, where nothing is done until there are television pictures, even though the problem is well known beforehand.
Don't measure it with a micrometer, if you are going to hew it with an ax.
Feasible monitoring programs may not detect small anthropogenic changes (<50% of the population) over large areas, because the sampling effort required to detect such small changes would be prohibitively expensive.
If you spend all the money on monitoring, you won't have any left to fix the problem.
Because of the objectives of monitoring surveys, I think the best way to test their effectiveness is to use natural experiments. The surveys are designed to detect major changes over large areas, and effects of that magnitude cannot be produced experimentally. However, we can observe whether or not a survey detects naturally occurring population changes, such as those resulting from DDT, hurricanes, and cold winters. Bird surveys have been shown to detect these events, and it can be inferred that they would also detect anthropogenic events of similar magnitude.
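A quick simulation shows why a rough, noisy index can still pass such a natural-experiment test: with enough sites, site-to-site noise averages out and a gross decline is obvious. All numbers here (site count, mean counts, the noise model) are illustrative assumptions:

```python
import random
import statistics

def noisy_counts(n_sites, mean_count, seed):
    """Rough index: each site's count varies around the expected
    value with roughly Poisson-scale noise (sd = sqrt(mean))."""
    rng = random.Random(seed)
    return [max(0.0, rng.gauss(mean_count, mean_count ** 0.5))
            for _ in range(n_sites)]

before = noisy_counts(100, 20.0, seed=1)  # pre-event survey
after = noisy_counts(100, 10.0, seed=2)   # after a 50% "natural experiment" decline

# The estimated decline lands close to the true 50% despite
# the noisy per-site counts.
observed_drop = 1.0 - statistics.mean(after) / statistics.mean(before)
```

The individual counts are far too noisy to say anything about one site, yet the survey-wide average recovers the decline well; this is the sense in which a crude index, applied over a large area, can detect events of this magnitude.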
U.S. Department of the Interior
U.S. Geological Survey
Patuxent Wildlife Research Center
Laurel, MD, USA 20708-4038
Contact: Sam Droege, email: Sam_Droege@usgs.gov
Last Modified: June 2002