Contents:

Natalie Helferty

Jim Harding

Charles Francis




From: Natalie Helferty <natalie@zoo.toronto.edu>

Date: Mon, 4 Mar 1996 17:33:41 -0500 (EST)

Subject: Re: Monitoring Protocol

In response to Paul Geissler's message (in response to Charles Francis' message), it seems to me that a compromise between 'statistically defensible' data using random sampling and 'volunteer retention' using non-random volunteer 'selection sampling' has to be reached. There is a trade-off between the two. Charles and I have hashed this out several times.

As the former amphibian coordinator for the Marsh Monitoring Program at Long Point Bird Observatory in 1995 and the author of the 1995 Marsh Monitoring Amphibian Protocol, I have had to deal with both issues. Since last year was the first year of the amphibian survey of the Marsh Monitoring Program, our volunteers, on the whole, were new to frog surveys and did not have any prior knowledge as to where the "best" spots for listening for frogs were. In fact, most often we (Amy Chabot--the other coordinator--and myself) had to suggest marshes that we knew or had heard of as possible survey locations. We also looked on topographic maps for possible marsh habitat and sent the surveyor on a 'marsh hunt' to see if that site met our requirements defining a 'marsh'. This, in itself, may cut down on the bias of picking only the 'best' sites to survey. I would say that for the most part, volunteers do not know where the 'best' sites are unless they happen to be avid herpetologists who have surveyed at many sites, and then only pick the sites with the most frogs or the most species.

I think there may be some confusion when speaking about the 'best' sites. I must stress that just because there is suitable amphibian habitat available, e.g. water and wetland plants, this does not mean the site is 'good'. All amphibians need these basics (well, water anyway)--I have witnessed hundreds of leopard frog tadpoles basking in a newly-built dug-out pond in Sam Smith Park in Etobicoke in 1994, devoid of vegetation except for a willow branch fallen in the pond (the pond edge was subsequently bulldozed and attached to the existing channel to Lake Ontario, thus allowing predatory fish access to the pond--not one tadpole/froglet/frog was found after that!). The 'best' sites would only be those where a known amphibian population exists and the number of species and the population size surpass those of other sites with known species and population sizes. In almost all cases, this information is not known or has not been accurately determined. I think this argument should stand up in court.

The major drawback to a randomized survey is not just missing suitable wetland habitat where frogs could be calling, but volunteer drop-out. I would also argue that doing the same route year after year and going 800 meters between stops would result in the same stations being surveyed year after year (which would happen with a volunteer-chosen route as well) and still not pick up any newly-formed wetlands. As well, if the volunteers have nothing to hear, they get bored very, very quickly unless they believe that the 'non-data' they are collecting is important (they can't just know that it is, they have to believe it in order to remain enthusiastic). For the Marsh Monitoring Program, we have had a few people tell us that they want to look for a new route for 1996 because the one they originally chose was not productive. This, of course, is where the bias enters in, and changing/dropping routes has to be discouraged.

It may be possible to randomly select stopping sites along a route (even the volunteer could do this, using a random number table prior to the survey date, for either odometer readings on a road-side survey or walking distance/length of walking time on a foot survey) year after year, so that newly-formed wetlands will not be missed and a random sample of the representative habitat and its frog populations would be obtained. Thus route-to-route comparison for trends could be done. In my experience, volunteers given or allowed to choose a route will haphazardly choose stations along it anyway, so why not make it truly random? Thus there is an element of randomness and a greater chance that the volunteers will hear frogs at least once along the route if the stations vary from year to year. As well, loss of marking stakes from one year to the next is then not such a big deal. Removal of stakes could be done at the end of the season as well.
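Natalie's suggestion of re-drawing station positions each year can be sketched in a few lines (a hypothetical illustration: the route length, number of stations, and 800-m minimum spacing are example values, not part of any official protocol):

```python
import random

def pick_stations(route_km, n_stations, min_gap_km, seed=None):
    """Randomly place listening stations along a route while keeping a
    minimum spacing: draw points on the 'slack' length that remains
    after reserving the mandatory gaps, then spread them back out."""
    slack = route_km - (n_stations - 1) * min_gap_km
    if slack < 0:
        raise ValueError("route too short for that many stations")
    rng = random.Random(seed)
    base = sorted(rng.uniform(0.0, slack) for _ in range(n_stations))
    return [b + i * min_gap_km for i, b in enumerate(base)]

# A fresh draw each year gives a new random set of odometer readings,
# so newly formed wetlands can enter the sample.
stops_1996 = pick_stations(route_km=10.0, n_stations=10, min_gap_km=0.8, seed=1996)
```

The returned values serve as odometer readings for a road-side survey or walking distances for a foot survey, read off before the survey date in place of a paper random number table.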

I hope this diatribe is not too long. I would appreciate any comments/criticisms/concerns regarding my proposal. Thanks for reading.

Natalie J. Helferty  --University of Toronto
        @@      --Department of Zoology
        ()      --Toronto, Ontario, Canada
        ^^      --natalie@zoo.toronto.edu






From: jharding@museum.cl.msu.edu (Jim Harding)

Date: Tue, 5 Mar 1996 11:07:15 -0500

Subject: Re: Monitoring Protocol/ response to N. Helferty

In response to Natalie Helferty's comments regarding the Long Point survey, I think you have made many good points regarding the use of volunteers and survey routes. Speaking from my vantage point in south-central Michigan, I can state quite safely that the trend in wetlands is mostly that of disappearance-- not a shift of wetlands from one spot to another.

The "statistically correct" routes are largely going to be run by trained and motivated biologists, naturalists, and state personnel. The vast number of enthusiastic "lay" volunteers will be surveying places that are either convenient or interesting to them-- and I continue to argue that a few decades from now, the folks then in charge of herp conservation will be darned glad for any reasonably reliable data they can get, whether or not it passes the muster of the mathematicians. I say this because I am presently chairman of our state's Technical Advisory Committee on Amphibians and Reptiles, and our committee is constantly crippled in our advisory role by the lack of practically ANY data on a great many species in many parts of the state. We are forced to depend on the limited (and frequently questionable) data attached to old museum specimens, and the anecdotal field notes that we and other naturalists have gathered over the years. Sure, we would be happy if those 19th and early 20th century biologists had done statewide "stratified statistical" surveys (a rough job before the days of computers and paved roads), but at this point any "presence/ absence" data would sure be a help.

I guess that those frog surveyors lucky enough to work in largely undeveloped parts of the country may be able to say that the disappearance of a small population of amphibians might be "insignificant to the species as a whole"-- but in the urbanized and agriculturalized areas, the loss of any little marsh or pond (i.e., "presence" to "absence") might be a significant loss to a particular species.

One more quick thought-- I've heard the argument that only "statistically significant" data will be useful to document population trends and thus convince politicians to act in the legislative arena. I would question this assumption. At least in the present political atmosphere, politicians would seem to be moved as much (or more) by perceived public opinion as they are by data presented by biologists. If the local fishermen, landowners, and farmers are openly expressing their belief that the frogs are disappearing, this might predictably influence the action of the local representative or congressman more efficiently than the often hedged statistical presentations of the professional biologist. If this view seems cynical-- I submit that it is also realistic.

J Harding

jharding@museum.cl.msu.edu






From: "Charles M. Francis" <102706.3672@compuserve.com>

Date: 05 Mar 96 23:47:49 EST

Subject: Amphibian Monitoring: Response to J. Harding

Your arguments that statistically uncertain data are better than no data are certainly valid. However, statistically valid sampling data would be even better.

Even with a volunteer-based scheme, I am sure that sampling designs can be selected that will meet the requirements of being statistically valid, without losing most volunteers. The major requirement is to select an appropriate sampling scheme. It is not necessary that all parts of a state or province be covered evenly, nor that all marshes have equal probabilities of coverage. However, it is important that the sampling scheme be known. If marshes near towns are much more likely to be selected, this needs to be known, so that one can temper one's conclusions accordingly, or weight the analyses. One way to do this is through a stratified sampling scheme. For example, much larger numbers of routes can be selected near cities where potential volunteers live, to accommodate the availability of surveyors. However, it is still important that these routes be chosen with some random component, to ensure they are representative of the area they are purporting to sample. Obviously, one could still encourage volunteers to take routes away from the towns, because in a statewide/provincewide trend analysis, individual routes in areas where they are more thinly scattered would be weighted more heavily than individual routes in areas with many routes. Weighting of route selection could equally be based on other criteria such as size of marshes, etc. As long as the sampling scheme has a random or stratified-random component, it can probably be accommodated in the analyses.
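The stratified scheme Charles describes can be illustrated with a small sketch (the strata, route names, and counts are invented for the example): more routes are drawn near cities, and each surveyed route is then weighted by the number of candidate routes it represents within its stratum, so thinly covered rural routes count more heavily in a statewide trend.

```python
import random

# Hypothetical strata: many routes sampled near cities (where the
# volunteers are), few in rural areas; weights compensate in analysis.
strata = {
    "near_city": {"candidates": [f"C{i}" for i in range(40)], "n_sample": 12},
    "rural":     {"candidates": [f"R{i}" for i in range(40)], "n_sample": 4},
}

rng = random.Random(0)
sampled, weight = [], {}
for name, s in strata.items():
    chosen = rng.sample(s["candidates"], s["n_sample"])  # random within stratum
    sampled.extend(chosen)
    for route in chosen:
        # each surveyed route stands in for candidates/n_sample routes
        weight[route] = len(s["candidates"]) / s["n_sample"]
```

In this toy example each rural route carries a weight of 10 versus about 3.3 for a near-city route, which is exactly the "weighted more heavily" adjustment for thinly scattered routes described above.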

Something else to remember is that there are many different ways of choosing random samples. One approach is to pick a random starting point on a UTM or latitude/longitude grid, and then pick the nearest suitable route to that point (as used by the Breeding Bird Survey). Another approach, if there are a finite (and known) number of potential routes in an area, is to list all potential routes and pick randomly from amongst them. In these days of computers, this may be more viable than one might expect, even on a large-scale basis. With a suitable GIS database (e.g. a wetlands inventory) one could even devise a perfectly randomized (or stratified random) sampling scheme based on wetlands rather than roads, provided that one is only interested in wetlands large enough to be classified.
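The first approach (the Breeding Bird Survey style of random starting points) can be sketched as follows; the route coordinates and grid size are made-up values for illustration:

```python
import math
import random

# Hypothetical candidate routes, each with a start point (km easting,
# km northing) on a UTM-like grid covering a 100 x 100 km block.
routes = {"A": (12.0, 40.0), "B": (55.0, 18.0), "C": (80.0, 73.0)}

def nearest_route(point):
    """Return the route whose start point is closest to a grid point."""
    return min(routes, key=lambda r: math.dist(point, routes[r]))

# Draw a random starting point, then take the nearest suitable route.
rng = random.Random(0)
pick = nearest_route((rng.uniform(0, 100), rng.uniform(0, 100)))
```

One point worth noting: nearest-route selection gives an isolated route a larger "catchment" of grid points and hence a higher chance of being chosen, whereas the second approach (listing all potential routes and sampling uniformly from the list) gives every route an equal inclusion probability.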

One aspect that could be a potential problem is matching volunteers to the appropriate route. For example, one might randomly pick 100 routes, and then allow volunteers to pick from amongst that list of routes. However, if only 80 volunteers participate, it is quite likely that the 20 routes not selected will be a very non-random sample, e.g. the 20 routes farthest away from where volunteers live, which could then lead to bias. One solution is to pick exactly the right number of routes, but how many coordinators of volunteer surveyors ever know exactly how many people will respond (and complete the survey even if they planned it)? Furthermore, what happens if the last volunteers to choose don't like any of the routes? Even with the guidelines that have been developed at the NAAMP meeting, it should be clear that additional issues need to be considered before a fully randomized, statistically defensible and unbiased volunteer-based survey can actually be implemented.
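One possible remedy for the 80-volunteers/100-routes problem (a hypothetical sketch, not a scheme proposed in the message): release the selected routes in a fixed random order and ask each volunteer to take the next one on the list, so the 20 routes left unsurveyed are still a random subset rather than the 20 least convenient ones. It only works, of course, to the extent that volunteers cannot skip down the list.

```python
import random

# Hypothetical: 100 randomly selected routes, shuffled once up front.
rng = random.Random(42)
selected = [f"route_{i:03d}" for i in range(100)]
rng.shuffle(selected)

def assign(n_volunteers):
    """Hand out routes strictly in list order; the tail that nobody
    reaches is then a random subset, not a geographically biased one."""
    return selected[:n_volunteers], selected[n_volunteers:]

surveyed, unsurveyed = assign(80)
```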

The problem with routes selected entirely by volunteers is that, even if they were perfectly representative, we would not know this, and so we could not be sure of the results. The challenge is to pick routes using an appropriate sampling scheme that still meet the desires of volunteers (e.g. convenient and interesting). I think this challenge can be met, but I am not sure we have yet devised a recipe. As indicated by Natalie, we have discussed this issue quite often at Long Point Bird Observatory for the Great Lakes Marsh Monitoring Program, but I don't think we have yet come up with the best solution.

With respect to J. Harding's view of the necessity of hard science, I think it depends very much upon who needs to be convinced. If the majority of the general public believe that frogs are declining, some sort of survey results (statistically sound or otherwise) support this notion, and people are willing to put their money where their mouths are and start some conservation action, then you are right--the soundness of the science is irrelevant. However, what happens when most of the general public are indifferent or unconvinced, and large-scale developers are busy trying to fill in marshes? Or a large company refuses to spend several hundred million dollars to reduce acid-rain-causing emissions from its smoke stacks? In that case, statistically defensible data are infinitely more valuable than data from an unknown sampling scheme, however suggestive the latter might be. Admittedly, even sound data might be ignored in some of these cases, but wouldn't it be better in 10 years' time to have a survey that one can really believe?

This is also true from the science/management perspective, as can be illustrated with another anecdote from the bird world on the dangers of inadequate survey design. The American black duck, as some people may know, has experienced a substantial population decline, at least up until the early 1980s. But how big is this decline? The most widely quoted data are based upon a mid-winter inventory, which consists of air and ground crews counting all the ducks and geese (of all species) they can find in January and February. It is not based upon a statistically sound sampling scheme, but at least since the early 1970s, the procedure has been sufficiently standardized that it probably provides a fairly good index (it was actually tested against a randomized transect scheme in the mid-1980s and stood up well). However, prior to that date, the survey was much less standardized. Casual conversations with pilots indicate that when crews changed, the sampling protocol often changed. Some pilots searched every piece of suitable habitat they knew about, others searched only areas with large concentrations of ducks. Because individual pilots may fly an area for many years before changing, this can produce major bias when a pilot changes. What do these data tell us about black ducks? They suggest a really dramatic decline between 1955 and 1970, from about 750,000 ducks to 400,000 ducks, largely occurring in the first few years. From 1970 to 1983, the population declined further to about 300,000, after which it has apparently stabilized. So does that mean that the decline slowed down in the 1970s, after a small reduction in hunting? Maybe, but because we can't trust the early data, we don't really know. Quite possibly some of the earliest counts were inflated and the decline has been at a steady rate over the whole period. Or perhaps these early counts were grossly exaggerated, and the rate of decline was actually slower in the early period. 
In conclusion, those early counts may have been better than nothing, because they at least gave some early warning of declines (they probably helped prompt some restrictions on harvest in the mid 1960s), but in retrospect, they give us very little information about what population levels might have been in the 1950s (relevant from the perspective of setting management targets), and whether the hunting restrictions in the 1960s did any good, mostly because of their poor sampling scheme.

If we have a chance to devise a good sampling scheme for amphibians, we should do so. I think the problems arise not with getting volunteers to carry out sound surveys (and we are not asking them to survey routes with no frogs--we all agree that is useless), but rather working with statisticians to devise a sampling scheme that meets the needs of both volunteer surveyors and biologists. Such schemes most certainly exist, but further work may be required to find the best ways to implement them.

Charles M. Francis

Long Point Bird Observatory
