03-09-2025, 05:26 AM
I know this is all over, but I have to get a word in.
"If the sample size is large enough, then all those factors you mentioned above will average out. The only thing that matters is the percentage of kids with Autism in vaccinated versus unvaccinated kids."
This is one of the most frustrating things you can say to a statistician. You have two groups of people, unvaccinated and vaccinated. You count the number of people with autism in each and compare them. You find a difference, perhaps 500 more people with autism in the vaccinated group. That's "more", but is it a statistically significant difference? In other words, could you expect to see a difference this big just by chance? That's where statistics come in. You often see a significance level (a p-value) like .005. If the difference is significant at the .005 level, it means you would expect to see it by chance only 5 times in 1000. So not very often, which means you can be confident it's not just chance (it's a real difference).
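Just to make the mechanics concrete, here's a quick Python sketch of a standard two-proportion z-test. All the numbers are made up (two groups of 100,000, with a "500 more cases" gap like the one above), purely to show what "significant at some level" means:

```python
import math

# Made-up numbers: 2,000 cases out of 100,000 in one group vs
# 2,500 out of 100,000 in the other (a "500 more" difference).
n1, x1 = 100_000, 2_000   # group 1: size, cases
n2, x2 = 100_000, 2_500   # group 2: size, cases

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)   # pooled rate under "no real difference"
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se               # how many standard errors apart the rates are
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.2g}")
```

With numbers like these the p-value comes out tiny, meaning a gap that size would almost never show up by chance alone in samples that big.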
You have to look at the size of the difference relative to how much "noise" or variability there is in the data. If it's really noisy, you need either a huge difference or a really big sample size for the signal to outweigh the noise. Confounds are part of what introduces noise, so you want to reduce them.
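The sample-size point can be shown with a little simulation. Here's a sketch (invented rates of 2.0% vs 2.5%, nothing to do with real data) that asks: if that gap were real, how often would a two-proportion z-test actually detect it in a small study versus a big one?

```python
import math
import random

random.seed(1)

# Hypothetical true rates in the two groups (made up for illustration).
RATE_A, RATE_B = 0.020, 0.025

def draw_cases(n, rate):
    """Number of cases in a group of n people, each with risk `rate`."""
    return sum(random.random() < rate for _ in range(n))

def p_value(n, x1, x2):
    """Two-sided p for a two-proportion z-test (equal group sizes n)."""
    if x1 + x2 == 0:
        return 1.0   # no cases at all: nothing to test
    p_pool = (x1 + x2) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = (x2 / n - x1 / n) / se
    return math.erfc(abs(z) / math.sqrt(2))

def detection_rate(n, trials=100):
    """Fraction of simulated studies that reach p < .05."""
    hits = 0
    for _ in range(trials):
        x1 = draw_cases(n, RATE_A)
        x2 = draw_cases(n, RATE_B)
        if p_value(n, x1, x2) < 0.05:
            hits += 1
    return hits / trials

power_small = detection_rate(500)      # small study
power_large = detection_rate(20_000)   # big study
print("n=500 per group:   detected", power_small)
print("n=20000 per group: detected", power_large)
```

The small study misses the gap most of the time even though it's real; the big one catches it almost always. Same true difference, same noise per person, very different conclusions.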
I think the hardest part of research like this is the actual "thing" that we call autism. The definition has changed continually over the years because we don't really know that much about it. Imagine if we were to find there are multiple causes for the symptoms we have decided to identify as autism. In that case vaccines might affect the likelihood of one sub-type of autism, but not all the others, which would make the above comparison between vaccinated and unvaccinated groups less likely to show a difference. The effect would be "washed out". In other words, like Diana said, research is complex, and research involving humans is the most complex. It may seem simple, but it gets crazy real quick.
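The "washed out" arithmetic is easy to see with made-up numbers. Suppose there were four equally common sub-types and some exposure doubled the rate of only one of them (again, these numbers are pure invention, not a claim about vaccines or autism):

```python
# Hypothetical illustration of dilution: a sub-type-specific effect
# shrinks when you only measure the overall rate. All numbers invented.

base_rates = {"subtype_A": 0.005, "subtype_B": 0.005,
              "subtype_C": 0.005, "subtype_D": 0.005}

# Suppose some exposure doubled the rate of subtype A only.
exposed_rates = dict(base_rates, subtype_A=0.010)

rr_subtype = exposed_rates["subtype_A"] / base_rates["subtype_A"]
rr_overall = sum(exposed_rates.values()) / sum(base_rates.values())

print(f"relative risk for subtype A: {rr_subtype}")   # 2x within the subtype
print(f"relative risk overall:       {rr_overall}")   # much smaller overall
```

A doubling within one sub-type shows up as only a 1.25x bump in the overall rate, which is far easier for noise and confounds to swamp. That's the dilution problem in a nutshell.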