05-24-2007, 04:55 PM
Not a statistician, but I know some of the basics. Basically, if you can get a reasonably representative random sample of a group, you can make a very good estimate, plus or minus a couple of percent, with an even smaller sample than 1 in 2,000. And the mathematics behind statistical sampling shows that you do not need to double the size of the sample when you double the size of the group. I forget the exact amount a sample has to grow to keep the same reliability when the group doubles, but it was something like only a few percent larger. This is where methodology becomes very important. If the sample is not random but, say, "self-selected", then the results can be skewed away from the actual values. Even more important can be the wording of the questions and the interview techniques. People have a tendency to give the "expected" answer, or what they perceive as the expected answer, when an interviewer is asking the questions, as opposed to anonymous surveys with no interviewer.
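You can see the "sample size doesn't depend much on group size" point with a quick simulation sketch in Python. The numbers here (a true share of 30%, a sample of 400, the two population sizes) are made-up illustration values, not anything from the Pew study:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def estimate_share(population_size, true_share=0.30, sample_size=400):
    """Estimate what fraction of a group holds some trait,
    using a simple random sample of the group."""
    holders = int(population_size * true_share)
    population = [1] * holders + [0] * (population_size - holders)
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

# The same 400-person sample works about equally well whether the
# group has ten thousand members or a million.
for n in (10_000, 1_000_000):
    est = estimate_share(n)
    print(f"population {n:>9,}: estimated share = {est:.3f} (true 0.300)")
```

Both estimates come out within a few percent of the true 30%, even though the second population is a hundred times larger.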
Now your example has taken the idea that the methodology is to take a random 1-in-2,000 sample. But that is not actually how the sample size is determined. There are formulas that give a sample size for a given group size and the statistical accuracy the sampler wants. Without looking up the actual numbers and equations: if you wanted accuracy of plus/minus 5%, it might take something like 100 persons in your group of 4,000, but for a group of 8,000 it might only take a sample of 110 to get the same accuracy. Essentially, statistics uses the rules of chance to work out the probability that a random sample is badly unrepresentative — the equivalent of dumping a box of coins on the floor, picking up a small handful at random, and getting all heads.
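For the curious, the usual textbook version of those formulas is Cochran's formula with a finite population correction. A small sketch (my 100 and 110 above were off-the-cuff guesses; the standard formula gives larger absolute numbers, but the point stands — doubling the group grows the required sample by only about 5%):

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Required simple-random-sample size for a given margin of error.

    Cochran's formula: n0 = z^2 * p * (1 - p) / margin^2,
    then the finite population correction:
    n = n0 / (1 + (n0 - 1) / population).
    z = 1.96 is the 95% confidence level; p = 0.5 is the
    worst case (maximum variance) assumption.
    """
    n0 = z * z * p * (1 - p) / (margin * margin)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

for pop in (4_000, 8_000, 1_000_000):
    print(f"group of {pop:>9,}: need a sample of {sample_size(pop)}")
# group of     4,000: need a sample of 351
# group of     8,000: need a sample of 367
# group of 1,000,000: need a sample of 385
```

So at ±5% and 95% confidence, doubling the group from 4,000 to 8,000 takes the sample from 351 to 367, and even a group of a million only needs 385.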
P.S. I had not actually heard about the Pew report, and just took a quick look at the news coverage. Here is an important quote from it: "The Pew Research Center conducted more than 55,000 interviews to obtain a national sample of 1,050 Muslims living in the United States." What this means is that there is a large mass of interview reports, 55,000-plus. They then used some method, probably outlined in the full report, to select reports at random from that pool. If done correctly, that often corrects for interviewing-methodology errors and any subjective ranking of the interview reports. I suspect many people are poring over the full report to see if they can find any methodology errors. Unfortunately, that kind of analysis usually only gets reported in journals, not in the mass-media news except in the back pages.