Wednesday, April 16, 2014

Comments on Data Analysis and Statistics

Analyzing data is an odd experience in the research world. In statistics classes you learn about a lot of complicated models, tests, and assumptions, but my experience analyzing data from experiments is that much of what is taught in the classroom gets ignored. That isn't to say I willingly defy the instruction I received. It is rather that the material covered in the majority of my stats classes is not that important for studying experiments.

Why is there this disconnect? First, most experiments are largely immune to the potential problems that stats classes warn you about. If there is random assignment to condition, then individual differences shouldn't matter. Manipulations and some dependent measures can be treated as perfect measures because they are the thing itself. I don't have a huge amount of experience in this area, but the data I have gotten from experiments rarely violates the assumptions of ANOVA or regression, such as independence, and in some cases cannot violate them by design. A social psychologist once implied to me that, in our field, using fancy statistical techniques or describing all of the assumption tests you ran on the data can make the effects less believable. The reasoning was that most of the effects we investigate can be detected with ANOVA or linear regression, so a researcher reaching for fancier statistics may be doing so because that is the only analysis under which the effect exists.
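For the curious, here is a minimal sketch of the routine assumption checks a stats class drills into you but a clean experimental design often renders moot. The data, condition names, and group means are all simulated for illustration; the point is just a one-way ANOVA alongside the homogeneity-of-variance and residual-normality checks that usually go unreported.

```python
# A minimal sketch (my illustration, not from any particular study).
# Condition names and effect sizes below are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=5.0, scale=1.0, size=30)    # hypothetical control scores
treatment = rng.normal(loc=5.5, scale=1.0, size=30)  # hypothetical treatment scores

# The test itself: a one-way ANOVA (with two groups, equivalent to a t-test).
f_stat, p_value = stats.f_oneway(control, treatment)

# The assumption checks usually left out of the write-up:
_, p_levene = stats.levene(control, treatment)       # homogeneity of variance
residuals = np.concatenate([control - control.mean(),
                            treatment - treatment.mean()])
_, p_shapiro = stats.shapiro(residuals)              # normality of residuals

print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
print(f"Levene (equal variances): p = {p_levene:.3f}")
print(f"Shapiro-Wilk (normal residuals): p = {p_shapiro:.3f}")
```

With random assignment and data simulated this way, the checks pass almost by construction, which is exactly why they feel perfunctory in practice.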

This is an interesting situation because, if it is really the case, it suggests that at least part of social psychology is unwilling to accept advances in statistical procedure or statistical rigor because researchers don't want to be seen as hiding behind the math. If a paper doesn't use one of a small handful of methods, it is open to criticism of its statistics; if it uses simple analyses, it is less open to such complaints. This may not be true, or at least not true of the majority of social psychology, but I have reason to believe the attitude exists. It is also certainly true that one sign of experimenter 'p-hacking' is the use of convoluted analyses that may not be entirely appropriate, leading to spurious effects.

I don't mean to suggest that innovations never make it into social psychological research. Preacher and Hayes made a huge splash in the psychological community by introducing a way to more accurately gauge the existence and effect size of many kinds of statistical mediation. I think part of the reason for this acceptance, however, was their demonstration that the traditional ways of testing for mediation were more likely than their bootstrapping method to conclude there is no mediation when one actually exists. That made the new method more appealing to the community at large: partly because it is more accurate, and especially because it errs in a direction where mediations that previously went unsupported may now be supported.
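To make that concrete, here is a minimal sketch of the core idea, bootstrapping the indirect effect a*b, rather than Preacher and Hayes's own code or macros. The variables, sample size, and path coefficients are all simulated assumptions for illustration; the true indirect effect here is 0.5 * 0.5 = 0.25.

```python
# Bootstrap test of the indirect effect a*b in a simple mediation model
# X -> M -> Y. My illustration of the idea, not Preacher and Hayes's code.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)             # path a
Y = 0.5 * M + 0.2 * X + rng.normal(size=n)   # path b, plus direct effect c'

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]               # slope of M on X
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # slope of Y on M, controlling for X
    return a * b

boots = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)          # resample cases with replacement
    boots.append(indirect_effect(X[idx], M[idx], Y[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect ~ {indirect_effect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The mediation is supported if the 95% confidence interval excludes zero, with no normality assumption on the product a*b; that is the source of the extra power over the older Sobel-style tests.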

It is an interesting world that I honestly do not know much about. As far as journal publication goes, if the editor and reviewers (normally no more than five people in total) think the stats you use are okay, your work can be published. If the stats are easy to understand, your work is more likely to be published; stats that are very hard to understand, because the methods are obscure or new, can also lead to publication. There were multiple issues with Daryl Bem's 2011 paper in JPSP (a very prestigious journal), but one criticism was that the stats he used were too complex and picked up on subtle, random differences. The analytical world I live in is very interesting, but I just don't understand it sometimes.
