09-03-2012, 02:00 AM
After viewing a few organic foods, comfort foods or control foods, participants who were exposed to organic foods volunteered significantly less time to help a needy stranger, and they judged moral transgressions significantly harsher than those who viewed non-organic foods.
"viewing"?!! ... and then he thinks those results justify this conclusion:
"I was honestly hedging my bets on the moral licensing approach, according to which people feel licensed to act less ethically when their moral identities are made salient," Erskin said.
No way, ray. There are other equally plausible explanations. For instance, many people in the "view organic food" group may have felt guilty because they don't eat organic food themselves, and that guilt could have made them defensive and less generous. And really, what part of the design of this experiment yields outcomes that test the hypothesis that viewing organic food leads people to feel licensed to act less ethically because their moral identities are made salient? Unless there is a whole lot more to this study than is acknowledged on the web page cited in the OP, that conclusion seems to be the researcher projecting what he wanted to read into the results. Perhaps he hedged his bets more than he is aware.
And I have questions about the methodology of the research. I still wonder about the usefulness of data from random people (most of whom surely do not regularly eat organic food, if they truly were chosen randomly) who are merely viewing organic food. What question is this experiment actually designed to answer? At most the results are suggestive of something, but what they suggest is not at all evident to me (and, as I said, it certainly doesn't justify the conclusion the researcher draws).

I am also concerned with the phrases "significantly less time" and "significantly harsher." If the researchers think they set up the experiment sufficiently well, they may claim the difference was statistically significant, but that usually means only that the result cleared the conventional p < .05 threshold, i.e., a difference that unlikely to have arisen by chance if there were no real effect. It says nothing about how large the difference actually was: with enough participants, even a trivially small difference can come out statistically significant. Experimental design and statistical analysis are probably two of the areas where errors are most commonly made, even among experienced researchers, and in common parlance a barely detectable difference isn't what most people would think of as much of a "significant" difference. Perhaps the difference here was much larger than that, but based on the cited web page, there isn't enough information to know.
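To make the point concrete: here is a small sketch (not based on the study's actual data; the group means, sample size, and scores below are invented for illustration) showing how a tiny difference between two groups can still be "statistically significant" once the sample is large enough.

```python
# Illustrative only: hypothetical "helpfulness" scores for two groups
# whose true means differ by a mere 0.05 standard deviations.
import math
import random

random.seed(42)

n = 10_000  # a deliberately large sample
control = [random.gauss(5.00, 1.0) for _ in range(n)]
organic = [random.gauss(4.95, 1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Two-sample z statistic (at this sample size, t and z are
# essentially the same).
diff = mean(control) - mean(organic)
se = math.sqrt(var(control) / n + var(organic) / n)
z = diff / se

# Two-tailed p-value from the normal CDF.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Cohen's d: the difference expressed in standard-deviation units,
# i.e. a measure of how big the effect actually is.
cohens_d = diff / math.sqrt((var(control) + var(organic)) / 2)

print(f"difference = {diff:.3f}")
print(f"p-value    = {p:.4f}")
print(f"Cohen's d  = {cohens_d:.3f}")
```

With 10,000 people per group, a 0.05-SD difference will typically produce a p-value below .05, yet Cohen's d stays around 0.05, which effect-size conventions call negligible. That is exactly the gap between "statistically significant" and "significant" in the everyday sense.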
It would be good to know where the funds came from to pay for the research.