Thursday, June 5, 2014

The Kinds of People in a PhD Program

When I joined the PhD program, I was unsure what kinds of people I would find also getting their PhDs. I decided I wanted a PhD before I left undergrad, so I entered the program the fall after I graduated. When I visited the PhD program at the university I now attend, I could see there was real variety both among the other candidates and among the students already in the program. In the program I joined, maybe a third of the students in OB came directly out of an undergrad program. I have been told that some programs in management typically require a master's degree (in business, statistics, mathematics, etc.) before you get real consideration.

When I entered the program, the student who began with me had been a consultant for a number of years after getting an MBA and was also married with a child. Though I think most of the other students who started the year I did were unmarried, many had held a job or done some other higher education already. I was also a bit surprised by the number of students from South America; as an undergraduate, most of the foreign students I knew were from Asia or Africa.

More generally, I'd like to give an idea of the social types of people in the PhD program I am in. Though all of them are loosely doing some kind of social science, some groups are much more analytical than others.

Because of the wide range of backgrounds among the students I encounter, my social experience roughly sorts them into a few categories. Those who have families I see around the building less (for my first three years I spent at least some time in the office nearly every day); social functions are where I see them most often. On the opposite side are those I always see in the office but who typically skip social functions like BBQs (except for the free food). Though I think this is partially because many are working very hard, I think some simply like being in the office more than being at home. I have seen students spend office time playing games or surfing the web, but also staying until very late hours. I had an office mate at one point who was much like this. For a while I had a running joke that between the two of us, one of us was in the office 24/7. When I left at 10 or 11pm some nights, he'd still be there, and he'd still be there when I returned at 8am, though he'd soon leave for a large part of the morning.

Students in joint programs are typically pulled between multiple departments, and sometimes multiple buildings, spending half of their time in each place.

There are a lot of different experiences in a PhD program - based, at least, on my anecdotal evidence - and you'll likely encounter people with a wide variety of backgrounds if you ever join one.

Friday, May 30, 2014

The publication system of progress

As I think I mentioned before, different parts of the social sciences have very different norms about what kinds of work 'count' and which kinds of publications are the most prestigious. In management, my area, journal articles are the most prestigious in general. High-quality management journals rank first, followed by psychology journals, and then lower-quality management journals. Though it may seem like there are big differences between management and psychology journals, the truth is that the kinds of articles in them form mostly overlapping Venn diagrams. The content is largely the same, though the focus may differ a bit. You may see two very similar articles about employee turnover, for example, one in a management journal and one in a psychology journal. Though the methods, scales, and structure of the writing are similar, their outcome variables likely differ. Turnover in management is typically studied either from the desire to reduce it or to determine what effects it has on performance. Psychologists may not care as much about reducing turnover and instead focus their effort on determining why it occurred. Their outcome variable is less likely to be based on performance and more likely to concern the emotional state of the group that lost a member or of the member who left.

Because the work itself is so similar, there is a lot of cross-publishing by authors who target the journals their work fits best. It varies by school, but faculty in many business schools are given a list of approved journals. They are not forbidden from publishing elsewhere, but they are informed that publications in journals not on the list won't be considered when that faculty member is up for review. This list-making process seems perverse in some respects, but it has a reason for existing. Because an institution wants to make sure its faculty are doing good work, it makes a list of places where, the assumption goes, only good work can be published. The school also wants to ensure its name appears prominently in good articles. The criticisms I have heard of this system are two-fold: the lists don't change to reflect the current prestige of journals, and there are penalties for doing interdisciplinary work. Because I feel my work is interdisciplinary, I may run into situations where I send work to a journal where the fit is worse than some other journal, simply because that second journal isn't on the list.

There is a secondary disparity that is interesting, though I will not discuss it deeply. While there is some crossover between psychology and management in our publications, there is less in actual placement. A professor once mentioned to me that psychology is somewhat protective of its positions, and that jobs in psychology departments typically require you to have graduated from a core psychology program as opposed to an applied field such as management. The majority of the faculty in my own department, however, are themselves from core disciplines, psychology and sociology. It does seem a bit strange, then, that those of us earning applied degrees might be unable to get a job at the very institution we are studying at, because our degrees are applied rather than core. It will not matter for most of us, but it is strange to think that opting for organizational behavior rather than social psychology could have reduced my future employment options.

Thursday, May 29, 2014

Learning to do a meta-analysis

My last post, about what a meta-analysis is, was written partially because I have decided to learn how to do one. I made that decision while walking home last Friday, when I realized I could have more than one blog. I quickly came up with a fantastic name for a new blog that had something to do with meta-analyses, and then promptly forgot it. I don't know if I'll start a whole new blog to document my process of learning about and/or carrying out a meta-analysis, but I figured I would start blogging about it here.

I have a tendency to want to learn about new statistical techniques without then using them to do anything. I have to agree that actually using (or at least planning to use) a new technique is much more useful in the end. It is a bit like example problems: you may not think working through them will help you understand a concept, but (at least when going through some of the meta-analysis material I've been looking at) it can be really helpful. The question I have decided to plan around and actually pursue is transactive memory's effect on performance, with turnover as a moderator. Not only is this an area I am interested in and already understand well, but there are no meta-analyses I know of on this topic. DeChurch and Mesmer-Magnus did a meta-analysis in 2010 on team cognition which encompassed TMS, but I think a narrower approach may be enlightening.

I am basing my current exploration on the article I mentioned in my last post, "How to do a meta-analysis". The accompanying website for the article is not super easy to find but is here: resource page. The first author's website, Discovering Statistics (aka Statistics Hell), seems to have a lot of good resources as well. The researchers who wrote the article also wrote several scripts for SPSS and R (two statistical packages, the second of which is free). The webpage doesn't describe the process of preparing the data (you'll want to read the paper or this short article for that), but it does provide some example data. The authors claim this data is (or is based on) published articles, so I'm guessing I should be able to replicate the work those researchers did.

If I continue this exploration further, I'll keep you all in the loop.

Tuesday, May 27, 2014

What is a meta-analysis?

Many of you may have heard the term meta-analysis, either on this blog or elsewhere on the web. Because of the amount of data available and the power of our analysis software, these kinds of analyses are done fairly frequently across a lot of topics. So what is a meta-analysis? Essentially, it is a statistical way of adding together a bunch of different studies that are looking at the same thing, to determine what the real effects are. Let me use an example.

If you are looking at 2 studies (or 2 articles reporting on studies) that address the same question but come to different results, how do you determine which is more valid? There are a few general rules of thumb. If one of them comes from a more notable research institution, it may be better, because these schools typically have stricter institutional controls. Other important factors are sample size and population. If the sample is entirely college students, there are reasons you might not trust that finding as much as if the sample were more diverse. Also, if one study had 50 people and the other had 500, you might trust the larger one more.

It may seem obvious, but why do we actually trust the more diverse or the larger studies? Finding effects even with a diverse sample suggests the effect is likely to be more prevalent. Diversity always adds some variation to human-subjects research. In a study I am running, we are using computers, and we found it was a good idea to limit the age of participants because some older participants had much more trouble, being less familiar with computers than the younger participants. So reducing the diversity of the sample can let researchers narrow in on the results they are interested in. Sample size affects the likelihood of finding an effect in the first place. As the sample size increases, a number called the standard error decreases in the analyses, which means the analyses can be more confident of the effects each variable has.
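To make that shrinking standard error concrete, here is a tiny Python sketch (the numbers are made up) showing the standard error of a mean, sd divided by the square root of n, as the sample grows:

```python
import math

# Standard error of a mean: SE = sd / sqrt(n).
# Holding the spread of the data constant, SE shrinks as n grows.
sd = 15.0  # an assumed standard deviation for the measure
for n in [50, 500, 5000]:
    se = sd / math.sqrt(n)
    print(f"n = {n:5d} -> standard error = {se:.2f}")
# n =    50 -> standard error = 2.12
# n =   500 -> standard error = 0.67
# n =  5000 -> standard error = 0.21
```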

A meta-analysis, then, is a tool that lets researchers combine multiple studies. Through that process the sample size gets bigger, which allows us to be more confident, and, because of the aggregation, the sample also becomes more diverse, since the studies will have used different kinds of people and possibly different methods. Meta-analyses can be done incorrectly and can be misleading, but a good rule of thumb is to trust a meta-analysis on a topic more than any single study.
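For the curious, here is a minimal sketch of the core arithmetic, using one common approach (Fisher's z-transform with fixed-effect, inverse-variance weighting); the five studies are entirely invented:

```python
import math

# Hypothetical correlations (r) and sample sizes (n) from five studies
# of the same question. Fisher's z-transform makes r roughly normal,
# and each study is weighted by n - 3 (the inverse of its variance).
studies = [(0.30, 50), (0.10, 120), (0.25, 80), (0.05, 500), (0.20, 200)]

num = sum((n - 3) * 0.5 * math.log((1 + r) / (1 - r)) for r, n in studies)
den = sum(n - 3 for r, n in studies)
z_mean = num / den

# Back-transform the pooled z to a correlation.
r_pooled = math.tanh(z_mean)
se = 1 / math.sqrt(den)  # standard error of the pooled z
print(f"pooled r = {r_pooled:.3f}, SE(z) = {se:.3f}")
```

Notice how the big 500-person study drags the pooled estimate toward its own small effect: exactly the "trust the larger study more" intuition, made automatic by the weights.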

Extra Fun Facts: File-Drawer Effect

The process of doing a meta-analysis of course adds some difficulties for the researcher in trying to 'wash out' the potential added noise (a term for unintended variance) from the analysis. There are many possible problems, such as the 'file-drawer effect'. It is well known that a lot of the work scientists do never gets published, and a big factor in this is non-significant effects. If you run an experiment, for example, and do not find what you are looking for, you may assume you did something wrong. One professor of mine mentioned that he ran one study three separate times, never quite finding the effects he was interested in, and because of this he never published any of the studies. [Later on he did a small meta-analysis of just these experiments and found a small effect that he was only able to see when adding together all of the data he had collected.]

There are two main reasons for the file-drawer effect. Researchers may be embarrassed by, or not see value in, proclaiming to the world that they found nothing (significant effects are 8 times more likely to be submitted), and academic journals are hesitant to publish articles without significant effects for the primary variables of interest (non-significant results are 7 times less likely to be published). There are some legitimate reasons for this hesitancy. A study can fail to find effects for a lot of different reasons (there actually is no effect, poor design, too small a sample, inappropriate analysis, etc.), but there are fewer conditions under which a study will find effects where there are none. Still, if you did a meta-analysis using only the published data, the estimated effect may be inflated relative to reality. If you tried to determine the average grade for a class but only included students who made above a certain grade or attended every session, you would get an average that is likely different from the actual average. There are various, sometimes complex ways researchers try to deal with these problems, but it is always a concern.
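One standard (if crude) check discussed in the meta-analysis literature is Rosenthal's fail-safe N: how many null studies would have to be sitting in file drawers to drag the combined result down to non-significance? A sketch with invented numbers:

```python
# Rosenthal's (1979) fail-safe N. The z-scores below are invented,
# one per published study in a hypothetical meta-analysis.
z_scores = [2.1, 2.5, 1.9, 3.0, 2.2]
k = len(z_scores)

z_crit_sq = 2.706  # 1.645**2, one-tailed p = .05
n_fs = (sum(z_scores) ** 2 / z_crit_sq) - k
print(f"fail-safe N = {n_fs:.1f}")  # ~46 hidden null studies needed
```

If the fail-safe N is implausibly large, the file-drawer effect probably can't explain away the result; if it's small, worry.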

* I drew significantly on Field & Gillett (2010), "How to do a meta-analysis", in this post.

Friday, May 23, 2014

A refreshed view of the scientists in Godzilla

Last night I went to see the new Godzilla. Since I recently discussed the original Godzilla, I thought I'd discuss the new film as well, because it also touches on the presentation of the scientist. I do not intend to spoil the movie in this commentary, but there may be some spoilers for those who have not seen it. I thought it was good, though I agree with the friend I saw it with (a professor of film history) that the new Godzilla has much less to say about the world than the original did.

One of the main characters of the film is a nuclear engineer played by Bryan Cranston. The character becomes obsessed with the idea that the failure of the nuclear plant he was working at was not a natural disaster or an accident but something else entirely. His apartment is littered with newspaper clippings as well as charts and graphs of newly collected data that he is convinced represent an imminent second event. His son at one point picks up a book on echolocation that the father does not explain, except to state that it is important. The father is adamant that something is happening that is being covered up, but that he needs data from the first event to be sure.

This is an interesting representation. We, the audience, know this is a monster movie, so we can assume the father is right, that something is being covered up. The character, though, is acting a bit crazy, much like a raving conspiracy theorist. Yet this conspiracy theorist is seeking out hard, reasonable evidence, which I thought was a surprising portrayal.

Later on, the son and father enter the ruins of the nuclear plant to retrieve the data he needs to prove his case. They are caught by the authorities and brought to the scientists in charge of the coverup because the father "said he used to work here". Bryan Cranston's character then states several times that there will be an EMP at some point, which I don't recall there being any evidence for beforehand. The movie's Dr. Serizawa begins to take everything Bryan Cranston says as gospel and seems to throw out his own legitimate observations in favor of Bryan Cranston's ravings. Granted, the father turns out to be right, though by then it is too late.

The new Dr. Serizawa, though matching the character of the original in some ways, is much different in others. He is a solemn individual (played by Ken Watanabe) who has been following the prehistoric beasts of the film, particularly Godzilla, for many years. He ends up not affecting the action of the script much, as his suggestions are dismissed by the military authorities in the film. His primary role seems to be telling the viewer that Godzilla could be a good guy who will bring balance to the world. In the original, the Dr. Serizawa character had the key, the answer, but thought it was too dangerous to use. In this version, Dr. Serizawa seems to helplessly look on as mistake after mistake is made.

There is another undercurrent to the film: nuclear power and weapons. In the first film, Godzilla is literally woken up by a nuclear weapons test. In this film, different monsters are awoken as there is more radioactivity in the environment. Dr. Serizawa points out that the monsters could be made stronger by a nuclear weapons attack, because they may consume the radioactivity, a concern the military dismisses. Dr. Serizawa almost seems to represent the lack of respect science is given in this situation, which the military deems to be under its "sphere of influence" (a term used in the film). Granted, Dr. Serizawa isn't really much of a scientist.

Though I doubt this was the filmmaker's intention, the overwhelming feeling I have at the end of writing this post is that the world, as the filmmaker represents it, is one in which we are helpless to change our fate. Though the characters affect the world, they do not save the world on their own. In the only situation in which the characters save something, it is saving San Francisco from their own poor judgement.

Update:
Yesterday I was listening to an episode of "Pop Culture Happy Hour" on NPR where they also discussed Godzilla. They mentioned that one critic had called Godzilla "the first post-human blockbuster." He said this partially because of the use of non-human characters to tell the story, the lack of empathy for the human carnage that unfolds, and the failure of the humans to make a significant difference to the outcomes. These were also my initial takeaways from my analysis of the film. I just thought it was nice to find someone with a similar kind of perspective :)

Wednesday, May 21, 2014

The Skeptical Outlook and Life After Death

I have recently begun to consume a lot of skeptical media, such as the podcasts Skeptoid and the Skeptics Guide to the Universe. I like these shows because they have a slightly different worldview than I am used to and a lot of interesting science content. Skeptoid focuses primarily on discussing events or concepts to see what explanations fit the facts and known science. The Skeptics Guide contains a variety of segments, mostly science discussions or skeptical opinions about current news stories. When I say skeptical opinions, I mean that they attempt to deduce an explanation that fits science and the facts, and do not feel that supernatural phenomena or unmeasurable forces need be included for a full explanation of the situation. I think the colloquial interpretation of skeptic is more that the person is judgemental, negative, and doubtful about some large part of our experience. That perception ignores the foundational tenets of skepticism, making it seem bleaker than I think it is.

It is somewhat ironic that my increasing consumption of, and agreement with, much of this media goes nearly hand-in-hand with my own recommitment to attending church and exploring my spiritual life more fully. It is still unclear to me whether these two parts of my life are compensating for one another or are merely compatible, in a way the skeptics community would certainly dispute.

When I was in a writing workshop my senior year of college, the first assignment was to write down our favorite book and explain why it was our favorite. As I am wont to do, I wrote down my actual favorite book as opposed to one I actually wanted to discuss or defend [once at my family's Christmas celebration we were asked to write down our favorite Christmas song and then lead a singing of it. I chose my actual favorite, which I didn't know the lyrics to, and which was too obscure for any of my family to know]. I wrote Stephen Jay Gould's Rocks of Ages. It has been many years since I read it, but a core argument of the book was essentially: science is good at answering some questions and religion is good at answering others, and when either attempts to answer questions in the other's domain, there are typically problems. He called this idea Non-Overlapping Magisteria, or NOMA. The idea has been criticized by other scientists, but I found it quite interesting and compelling. I have never had serious concerns that science and religion conflict in my real life. If I believe in an all-powerful deity, then I do not know why many Christians (typified by the young-earth creationists) feel the necessity to defend God.

I listen to quite a lot of podcasts (I am subscribed to 33), one of which is called Intelligence Squared US. It is a debate program that typically addresses points of law or policy. In the most recent program, however, the item on the docket was "Death Is Not Final". When I first saw the title, my first thought was that this is not really a debatable topic. I believe it is not something we can know until we die, and that it does not fit within the magisterium of science. The slant the debate ended up taking (agreed upon beforehand by the debaters, I think) was the validity of claims from those who have had near-death experiences as proof of an afterlife. Those arguing for the motion were the man who literally coined the term near-death experience (NDE) and a man who experienced one. Those against were a physicist and an academic neurologist (Steven Novella, who also hosts the Skeptics Guide to the Universe). From this panel makeup you can see that the debate was always going to be about near-death experiences and their scientific foundation, as opposed to other conceptions of the afterlife. At one point in the debate, Raymond Moody (author of the book on NDEs) said the debate should really be about philosophy and not science, but he received pushback from all sides on that line. The moderator also rejected a few questions (you can hear them in the unedited audio/video) because they would not advance the science-based question.

I am fairly skeptical of near-death experiences and have been for a long time, so I was aligned with the opposing side from the beginning. This led to a bit of a conundrum when I decided to send a link to the debate to my father. I believe in an afterlife, yet I was against this motion because I think NDEs are not a valid source of information. So when I was writing the email to my father (a deeply religious person who believes in NDEs), I wasn't sure how to frame the recommendation.

I ended up listening to the podcast version, which is edited down, and the full unedited version, which was nearly twice as long, within a few days of each other. Even now I am not sure if I listened to the unedited one because I was overly interested in the episode or because it was obvious that some content had been removed. Regardless, my feelings about the debate improved from total dismissal of the topic to thinking it was a really interesting debate, though with an inaccurate title. I insulated myself from too much personal attachment to the topic by recalling Gould's idea of NOMA, but my personal beliefs also weren't particularly under attack in this debate.

It is kind of ironic that, especially in podcasts, I seek out the science perspective on most topics, yet I still have little personal, spiritual doubt.

Tuesday, May 20, 2014

The presentation of science and scientists in movies and television

I have been watching the new version of the Cosmos television program, and it started me thinking about how science is presented in general. In Cosmos and other educational media, science is typically presented in a positive, instructive light, but other presentations of science and scientists vary in the way they construct the image of the scientist. The mad scientist is a common cultural construction that has been prevalent since at least Frankenstein created his monster. Stories about him, and about many slightly less mad scientists, often present a scenario the scientist never considered and show how a massive problem could stem from it. Frankenstein never thought his monster would be violent, for example, and never took precautions.

Then there are the scientists who are oblivious to human danger when something worthy of research becomes apparent. Walter in the television show Fringe acted this way when he experimented on children, and many scientists in media have exclaimed over the opportunities for research as a grand explosion/tear in spacetime/contact with aliens or some other calamity begins its desolation. Scientists in these situations are painted as naive, narrow-minded, or lacking in common sense. When I was in middle school I was also told that I lacked common sense. It has sometimes made me wonder if this societal endowment of unintentional malice on the scientist is a way of aggrandizing the everyman above the thinker, because he has the common sense to know the alien is probably going to kill everyone. Militarists often get this presentation too, such as in Aliens [most recently, for me, in Final Fantasy: The Spirits Within (which notably has a rather positive presentation of scientists, though they happen to practice something more akin to magic)].

Social scientists typically appear in the media only to the extent that psychotherapists and the occasional academic appear on television. I know little about psychotherapy, but its presentation in the media I consume often highlights the Freudian-influenced therapies of the past, with their notable eccentricities, as opposed to more reasonable modern therapies. Group psychology, as it is brought up in procedural crime shows such as Criminal Minds, typically centers on cases like Kitty Genovese's: she was attacked and murdered in a public area without intervention from bystanders. I have heard characters spout off other social-psych sorts of things, but I often guffaw at the ridiculousness of the claims. Granted, the writers of the show most likely experienced a Psych 101 type course, where these more shocking cases are typically presented. The hard scientists seem to sometimes have an easier time in their presentation, I think often due to the physical nature of their work.

I just finished watching a short film where the designers of a robot apparently allowed it to be possible for the robot to be abused into murdering a family. This type of plot device seems to assume ignorance on the part of the designers. A logical extension could be: we should hold back such-and-such work because, as we saw in such-and-such a film, there can be unintended consequences. We shouldn't research AI because robots might kill us. This type of thinking can only slow down the work being done. I often wish I could talk to the creator of the art and ask, "Why do you, as an ignorant observer, feel like you know more about a scientist's work than they do?"

A recent example of this occurring in the real world would be when the Food Babe exclaimed that a chemical used in yoga mats is also used in Subway's bread. Her ignorance of science was held up by many in the media and the public as a liberating force in the war against non-natural food. Subway was forced by public opinion to promise removal of this chemical, though it has not been found to be dangerous. Her subsequent judgement on cookies given out freely by Doubletree appears to be based on a misreading of the ingredient list: a possibly honest, but illuminating, mistake that highlights the author's lack of journalistic fact-checking.


One of the more interesting presentations of a research scientist I have seen recently was actually from the 1954 film Godzilla. This was the original Japanese version, though several years ago I also saw the American version (which makes some notable changes to the character of the scientist that I will address). In this film, the main scientist decided long ago that his research should not be made public until he found a way to use it in a non-violent way. The product of his research is the "oxygen destroyer", which removes all the oxygen from a body of water, effectively killing every living thing in it. The scientist, Dr. Serizawa, takes a very ethical stance: if the work is revealed at this stage, it will only be seen as a force for destruction. A German colleague outs the relevance of his work to a reporter in the wake of a desolating attack by Godzilla on Tokyo. After some convincing, the scientist agrees to use the oxygen destroyer on Godzilla, but only after he has destroyed all of his research. Once he knows Godzilla has been destroyed, he ultimately commits suicide to seal the secret of the oxygen destroyer away. In the American version of the movie, a reporter calls on Dr. Serizawa to help convince him to use the "oxygen destroyer" on Godzilla. I do not recall the specifics, but I think the reporter, played by Raymond Burr, finds the scientist's concerns not very credible and dismisses his hesitancy. Ultimately, Serizawa takes the only option he thinks is logical: using his weapon to save Japan while preventing it from ever being used to destroy the world. Serizawa was always looking out for the long-term good of human life, as opposed to the short-term needs of the people around him. Though this may seem dismissive of the current struggles, I think this representation shows the forethought and consideration Serizawa put into his decision.

This type of representation of a scientist is very different from films like Day of the Dead, where the scientist is seen as an insane and strange pseudo-villain whose work is nonsensical and dangerous, with no regard for those around him. That scientist has long abandoned his work to cure the zombies and instead has sought to teach them. He is then murdered, to comical effect, for his belief that zombies could be civilized. The director's implied assumption is that there is no solution to the problem, no hope except isolation. Research is pointless and can only lead to danger and death.

There is also the long line of fiction where the scientist builds some great machine that spectacularly fails, destroying the scientist and everything around him or her. In many of these works, the simple solution is the correct one. If the powers that be had only listened to the hero (who often has low status), then bad things could have been avoided (there are many examples of this in Japanese cinema, Final Fantasy: The Spirits Within among them).

Thursday, May 8, 2014

Dream Weaving Part 2

After an unnecessarily long delay, I am back to discuss the second study in the paper about seeing others' problems in your dreams. Before I get into the details of the study, I'd like to mention a few things that initially set off an alarm in my head. First, the researcher proposes to be testing a phenomenon, but the design of the study is entirely different from that phenomenon. By the author's own description of the dream helper ceremony, the dreamers all know the problem of the individual involved and are purportedly trying to direct their dreams toward identifying solutions. This seems to me like another spin on a typical support group as opposed to a psychic experience. It is possible that the dream helper ceremony is poorly described in the paper, but I find it more likely that the researcher is merely using the ceremony as an outside reference point for the internal craziness of the proposed concepts.

In the second study, the researcher attempted a better experimental manipulation. Whereas before there was a self-selection bias, where only participants who claimed to have had a relevant dream submitted their journals for analysis, now all participants had to submit their journals. There was also an experimental condition in which half of the participants were given a picture of a fictional individual while believing it was a real person. The participants recorded 2 dreams, were then shown the picture of the target person, and were told that this person had several life problems, but not specifically that they were medical. The participants then recorded 2 more dreams, and the dreams were compared for content.

The new target person was a woman with a multitude of problems. She had multiple sclerosis, her mother was dying, her husband had died in an industrial accident, her son had been in a car accident, she had been in a car accident in which her cousin had died, and her new partner was going through a messy divorce. The experimenter claims to have been blind to the person chosen; a friend of the experimenter had volunteered her. This raises a few problems. Even in the pilot, where the person had 1 problem (breast cancer), the researchers could still include a multitude of related codes (torso, limbs, cancer, clinical setting). This person has so many potential problems in her life that a dream about nearly anything could be coded as relevant. Also, if the experimenter's friend proposed this individual, I find it unlikely that the researcher had never heard about any of this person's many problems at some point. Though the researcher may not have realized it, he may have subconsciously considered that the woman with all the problems could be the target and may have directed the students toward her particular kinds of issues.

Additionally, the codes for the person include problems that are not her own, though they are problems she experiences. Her mother's lung cancer, for example, requires a respirator. The researcher proposes that the target's conscious mind is sending out information about her problems that the dreamers are able to connect to and interpret. Why would her mind broadcast inanimate objects unrelated to her own problems? It is possible that the respirator is very salient in her life, but I don't see why it would be sent out as a problem unless the individual was having a problem with the respirator itself, which, as far as I can tell, the researcher does not know.

Now we get to the results. I have a few problems with the way the analyses were handled. First, the researcher combined the two pre dreams and the two post dreams into single dream values (one for pre and one for post), aggregating the codes across the two dreams. He claims this provides a more conservative estimate because the sample size is lower when the dreams are combined instead of kept separate. That point is irrelevant. The dreams were about different things and could reflect life events other than a connection to the target individual, so it makes little sense to blindly aggregate two distinct dreams, and a more informative treatment would keep the values separate. Aggregating the dreams is probably fine for a final analysis, but there is valuable information in comparing the individual dreams pre and post. Additionally, the post-treatment dreams should be more similar to one another, because they are focused on the target, which would give more validity to the results (if they could possibly be true).

The researcher compared each code in the dreams separately for both pre- and post-test values; that is, he compared the pre-test values for the torso, head, etc. between the control and the experimental group, and likewise for the post-tests. He found significant differences in the post-tests for limb problems, breathing problems, and car/driving problems, meaning the mean values in the post-tests were higher in the treatment condition than in the control condition. Those who saw a picture of a real woman were more likely to have dreams containing these components than people who saw a fictional person. While this is intriguing, the test the researcher used is rather weak. Since there was a pre-test, the experimenter should have used that value as a control. This would wash out the individual characteristics of the dreamers' dreams and make a stronger case that the manipulation did something: it would test whether there was a change in the kinds of things the dreamer dreamt, instead of just comparing the dreams at each period. As it is, the lack of this analysis throws up red flags for me.
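To show what I mean, here is a sketch of the analysis I would have wanted to see, using the statsmodels formula interface in Python; the data frame, its column names, and every value in it are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per dreamer, with aggregated code counts
# before (pre) and after (post) seeing the photo, plus condition
# (1 = real target, 0 = fictional person).
df = pd.DataFrame({
    "pre":       [2, 0, 1, 3, 1, 0, 2, 1],
    "post":      [4, 1, 3, 5, 1, 0, 2, 2],
    "condition": [1, 1, 1, 1, 0, 0, 0, 0],
})

# Regressing post on condition while controlling for pre asks:
# did the manipulation change dream content beyond each dreamer's baseline?
model = smf.ols("post ~ pre + C(condition)", data=df).fit()
print(model.summary())
```

If the condition coefficient survives with the pre-test in the model, the claim that the manipulation changed dream content becomes far harder to dismiss as baseline differences between dreamers.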

From the increased likelihood of the dreamers dreaming about the 3 aspects the experimenter mentioned, he claims that individuals can accurately dream about others' problems. Though these results could be real, I don't think the author's interpretation really stands up. The content of the dreams is not definitively about the problems of the target, and even the excerpts the experimenter presents in the paper seem vastly different from the target person's real problems.

The podcasters thought that the willingness of Psychology Today to repeat the findings of this study with so little criticism was abhorrent. I honestly do not know why that author was not more skeptical of the piece, given its obvious theoretical and methodological problems.

Wednesday, April 30, 2014

Dream Weaving Part 1 - My first post about pseudo science

The paper I want to discuss in this post was brought to my attention by a podcast I enjoy called "The Skeptics Guide to the Universe". It is a variety show that covers several different skeptical or science topics, with a few staple segments. In the "News" section they discuss a new scientific discovery, something from the general news, or articles they think deserve ridicule. In the last episode I listened to, they discussed an article published in Psychology Today about a scientific paper (by a different author) published in 2013. After hearing the discussion, the dismissive tone of the commentators, and the way they wrote off the study, I felt they hadn't given the studies a fair shake (though the premise was fairly ridiculous).

Once I started reading the Psychology Today article and the original research paper, I started noticing some of the issues that arise when non-social-psychologists look at our work. I still think the paper is ludicrous and poorly done, but not for all the same reasons given on the podcast. It was obvious after looking at both that the podcasters didn't have access to the original article and were treating the vague generalizations in the Psychology Today article as the actual methods, rather than as a summary of the methods. The study's methods did have problems, but the podcasters' criticisms ended up just far enough removed from what I saw as justifiable criticisms that the authors of either the article or the original research could dismiss the "Rogues" as uninformed.

The article - titled "Can our dreams solve problems while we sleep" - is very short (840 words) and is an overview of 1 of the 2 studies published in a paper called "Can healthy, young adults uncover personal details of unknown target individuals in their dreams?" The article's primary goal is to briefly describe the experiment and then provide some elaboration, suggesting that more work like it should be done in the future. As a critique of the research paper, the article takes the paper uncritically, praising the author as rigorous and ending with two paragraphs that begin "Lets say that some sort of dream telepathy is real" and suggest that there is something very real going on in this study. I am unconvinced by the paper, particularly given the lack of a measurable mechanism in it.

I am now four paragraphs in and I have not yet said what the paper is about. The research paper - published in a fairly low-tier journal called EXPLORE: The Journal of Science and Healing - provides a rather detailed narrative of the process of the paper's development, which is not common in the articles I read. The paper's sole author is Carlyle Smith, a notable researcher on sleep. His past research appears to have primarily focused on the effects of sleep states and the amount of sleep on memory and learning. Regardless of his past work, this paper arose directly from a course Dr. Smith was teaching on "Dreams and Dreaming", a reasonable topic of study for a sleep researcher. A student in the class brought up the topic of the "Dream Helper Ceremony" and the instructor decided to do a pilot test in the class. The paper mentions that this was a senior-level psychology class. From my experience in similar classes, the interests of the students often drive the class and rigorous syllabi are not always provided, so this seemed reasonable to me.

The "Dream Helper Ceremony" is essentially the idea that a group of individuals come together, hear about the life problem of an individual, and then all go to sleep, focusing their mind on the other's life problem and hopefully dreaming about said problem. The dreams are then shared with the target, who hopefully takes some value from this process. The researcher then decided to design a study that would get at one of the factors of this scenario, whether individuals can dream about the problems of others. In the dream helper ceremony description, the author suggests that the problem is discussed before dreaming, so the jump to looking at whether the content of the problem can come across a dream seems a large one to me.

The researcher provided the students in the class a picture of a person with a problem (the specific problem was not known to the researcher or the students), but they were told it was health-related. A subset of the students returned with a dream log that they believed represented the target (12 of 65). The researcher coded the dream logs based on a set of criteria that specifically captured elements of the target's health that would be negatively affected. This is a dubious practice: if the coder has more categories that fit the health diagnosis than other categories, they will be more likely to find matches for the health categories. The podcasters noted this problem. The researcher did weight the extent to which each health mention matched the person's actual problem, which alleviates some concern. The researcher then compared earlier dreams of the 12 with the dreams those individuals reported as having been about the target. And, surprise, surprise, there was more language matching the health outcomes in the second dreams. As should be obvious, the students knew the target had a health problem, so they were more likely to dream about those kinds of issues. There is also a self-selection bias, because the other students did not think they had dreamed about the target; this could mean that only those who dreamed about health outcomes reported their dreams and were included in the sample. The researcher noticed these issues and attempted to correct them in the second study.

I'll discuss this study tomorrow.

Monday, April 28, 2014

Computers and Communication

One of my interests since I first became a PhD student has been the process of organizing through computers. I am actually not sure where this interest comes from, precisely, as I haven't had much experience organizing with others over computers. When I was in undergrad, I took a course called Organizational Communication which I found extremely interesting. The focus of the class was mostly on the ways in which we fail or succeed at communicating within organizations. One example from class was how poor communication has led to helicopter crashes or accidental shootdowns.

Part of this course focused on groups that communicate over the internet. The course was taught by a researcher who studies Wikipedia, and it was partially taught through the Human Computer Interaction group. After taking this class, I became more interested in this topic, though only in the academic sense: I still don't participate in much internet-organized work and am notoriously bad at keeping up with friends. While I was working on a book chapter about the rise of the globally distributed group, I was part of one, as my adviser was spending time at other schools. This was a period when I severely lost my way in my focus and ended up giving one of the roughest paper proposals my program has seen.

I'd like to discuss some old but interesting work on the way people use computers to interact with one another. Sara Kiesler is a prolific and diverse researcher. When I first met her, she discussed how she had recently returned from a trip to Africa, where she had taken an interest in a nascent educational system and acted as an adviser. She taught us interviewing techniques: how to engage with subjects and find out the true reasons for their actions or thoughts. She seems to have a deep interest in increasing people's quality of life wherever they are, through many different mechanisms.

She was part of a group of researchers interested in how the internet would influence the lives of those with ready access to it. In this study, the researchers gave free internet access to a large number of families in the Pittsburgh area. The researchers then looked at the outcomes for each family member and tracked their individual usage. Initially, the signs were not good, with several negative outcomes (primarily depression), especially for adolescents in the household. With more time, however, the positive effects of the internet on the families became more pronounced. Of all possible uses for the internet, the most common was interpersonal communication. The researchers concluded that using the internet to make new ties was related to increased depression, but that using the internet for other purposes decreased depression: http://homenet.hcii.cs.cmu.edu/progress/index.html

Another interesting study Sara Kiesler performed, even older than the internet study, focused on the nature of the communication itself. In this study, the researchers compared the communications of groups doing a task either in the same place or over computer text messaging. The use of computers had various effects, positive and negative. Group members were more likely to get angry with one another, made more extreme statements, and had trouble coming to a collective consensus. This may be partially because people seem more real in person, making it harder to criticize them so heavily to their faces. Another way to think about the phenomenon is that the ability to communicate remotely led the group members to speak their minds more freely.

Another interesting finding concerned the amount of discussion contributed by the women in the group. In the face-to-face groups, men were dominant and their opinions were used more as the basis for the decision-making process. When the groups instead used a computer, the women spoke more and contributed more to the discussion. The researchers suggested that the relative anonymity of computer-mediated communication made women feel less self-conscious about sharing their opinions. They also suggested that, because there were fewer obvious status cues, women weren't in a position where they felt their opinions were less valuable.

Lastly, the researchers were curious whether the change in the way people communicate changes the kinds of decisions they are likely to make (instead of just their ability to make a decision). The researchers found a definite 'risky shift': members were more willing to take on risky-seeming ventures when communicating online as opposed to face-to-face.

Though this research was published in 1992 (22 years before this post), we can see people using the internet to communicate and engage with one another in the same kinds of ways. Discussions on the internet often devolve into 'flame wars', quickly get off topic, and are full of overly superlative language about the love or hate of particular topics. Risky, or at least random, decisions made by groups coordinating over the internet are not uncommon to hear about. It is comforting, to a certain extent, to consider that we have always found computer-mediated communication to be just disconnected enough from others to be incredibly mean to one another. This is not a new phenomenon; it is inherent in human nature. We humans, who evolved to recognize faces and see truth in one another's eyes, are sullied by using online communication... but it does have its benefits. The convenience is unparalleled, and studies have shown that we are much more civil when we know who the other person is, which is something.

Thursday, April 24, 2014

Sensemaking in Organizations

In the fall of 2010, I was taking a seminar in organizational behavior. It was a morning class, in a much different format than I was used to. We read what, at the time, seemed like a ludicrous number of papers and then proposed questions about each to the professor. The professor then spent 15-20 minutes per paper summarizing and discussing its significance, answering our questions as he went along. It was a small and intimate class, which made the moments when I dozed off all the more embarrassing. It was a very interesting class, but the lecture-like format was not engaging enough at 9 in the morning when I had stayed up until 1 or 2 to read all of the required papers.

One day we read a paper that deeply impacted my perception of how research can be done and explained in organizational behavior. The paper was called "The collapse of sensemaking" by Karl Weick, an influential but controversial individual within the field. Van Maanen argued, in the exchange I mentioned yesterday, that the article I am about to describe was extremely powerful but never would have seen the light of day under Pfeffer's system. Pfeffer shot back that Weick was not formally rigorous enough, which only stoked Van Maanen's dislike for Pfeffer.

The article is very, very different from what you typically see in academic literature. It is a narrative about the Mann Gulch disaster that holds some information close to its chest in order to make the revelation of Weick's theory that much more convincing. The article has nearly 2500 citations according to Google Scholar. There are no formal hypotheses and no statistical analyses, but it's also not quite a theory paper. It is a kind of paper that I have only seen Karl Weick write. I mentioned the argument between Pfeffer and Van Maanen over Weick's style to a professor at my institution. I do not remember the specifics, but they were clear that pursuits like his are only possible after tenure, and that few besides Weick can write these narrative theoretic pieces.

The paper begins with a description of the Mann Gulch disaster. Weick relies on the book "Young Men and Fire", written by Norman Maclean, who interviewed survivors of the event. As a very brief summary: a group of young firefighters parachuted into a forest where a fire had been reported. Their role was to act quickly to prevent the fire from spreading by digging fire lines and repairing damage from the fire. The men were unfortunately unprepared for a large, active, fast-moving fire. They found themselves in a position where the fire was rapidly approaching and they needed to act fast to survive. Thirteen of the 16 men died that day. Of those who survived, two found a way through a rock crevice; the other survived by lighting a brush fire at his feet and lying down in the ashes. The actions of this last individual, Wagner Dodge, led Weick to begin his theorizing about the collapse of sensemaking within this group of men.

Sensemaking is the way organizations act to create order in their environment, acting on the basis of their purpose and culture. The theory of sensemaking apparently arose as an alternative to focusing on the decision-making process itself (as proposed by March of the Carnegie School). In other words, an organization's actions respond to the way reality is perceived, in order to maintain its perception of reality. Weick's primary argument is that the actions of the firefighters were in line with their incorrect perception of reality, and when they were faced with a new reality, they were unable to 'make sense' of the situation. Their training became useless because they were no longer in a situation they could understand. Dodge was able to make sense of the situation when others could not and essentially set a fire line where he stood. This kept the fire from coming as close to him, because the ground around him had already burned. His command to the others to join him in the fire seemed to go against their identity as firefighters.

I don't want to get into the details of the paper, as it is extremely dense and certainly worth a read. This paper is particularly important to me because of the way it is presented. It is intuitive and rigorous within its setting. Even though there is no data, you can tell that an extraordinary amount of thought went into the construction of the paper. I don't use sensemaking in my research, and I'm not sure I agree with it over other concepts it somewhat collides with (like the Carnegie School), but damn does Weick make a good argument.

Wednesday, April 23, 2014

Finding your niche, and then it being invaded

Social science contains an extremely large range of things being studied. There are so many pockets of work being done that it can seem unlikely you would ever run into someone doing something identical to you.

I was walking in Boston when I was at the Academy of Management a few years ago. I stopped to chat with 2 PhD students from the Netherlands who had been at a session I had also just attended. I gave one student a short summary of what I had been working on: "transactive memory and how the structure of communication affects TMS formation." She mentioned that the other student was studying something else involving TMS: "how the structure of a group influences the usefulness of the TMS". Not the same question, but really very similar. In one way it was nice to somewhat randomly meet someone with such a similar topic, but in another way I felt uneasy, like this person was my competition.

The outward view of much of science seems to be that scientists are extremely collaborative, share ideas freely, and learn from one another. This is, in one way, an ideal that many strive for, but in another very real way it is the antithesis of how some of science is structured. In a recent episode of Cosmos, Neil deGrasse Tyson described this relationship as a great lineage of student and teacher, learning and expanding our overall knowledge. Though I couldn't quite identify it, something in his statement seemed very dissimilar from aspects of my own experience.

Within social science, possibly because there is so much space to explore, I sometimes feel very protective of my little niche. I don't want anyone to come in and publish papers similar to the ideas I have in front of me. One reason is that the more publishing there is in an area, the more prior material you have to read and account for in your own paper or design. Second, if you are the first with an idea, it can help increase your citations or just your general recognition. I don't know if this protectiveness exists in other areas of science, but it is something I personally feel.

The other day a professor suggested I read a paper because of my interests. Though the authors go about their theorizing and methods differently than I would, they were essentially interested in the same primary question I have in my dissertation. Thankfully, I think the paper informs mine more than replaces it, but I was scared when I looked at it. My thought was, "Oh no, if this is what I think it is, will there still be room in the literature for my research?"

Potential solutions for this problem have been raised for years within social psychology. Van Maanen and Pfeffer had an argument in a journal over a series of articles. Pfeffer was essentially suggesting that a small set of scientists should determine what the important questions are; then all the researchers who do that kind of work (or all researchers) would pursue answering those questions thoroughly. This directed approach, which he called paradigm consensus, seems similar to pushes in other sciences, such as the recent push in physics to identify the Higgs boson.

Van Maanen made some cogent arguments against this (primarily about how the 'taste-makers' would be chosen in a field that retains some element of subjectivity). The title of his response, "Fear and Loathing in Organization Studies," I felt was particularly witty. In the current model, where it is a bit of a free-for-all, some of us are put in the funny position that work taking a long time to complete becomes progressively more dangerous, because someone else might get there before you. Replications are not highly valued in social psych either, leading to some bogus concepts sticking around much longer than they should.

Tuesday, April 22, 2014

Coding (in the social science sense)

I often wish that science did not occasionally use the same word to mean different things. In this case, the word is coding. When the majority of people think of coding, they probably think of writing computer code. Those who write computer code are called coders and all is right with the world. But there is another kind of coding that we use in social science much more often: qualitative coding. This essentially means that you take some output, communication, interview, etc. that cannot be directly translated into numbers and you create a scheme to do that translation. It could mean that you create a list of possible topics and match each sentence in an interview onto those topics. Going through this process can give you a quantitative idea of what a person was discussing.

I have already introduced one form of coding in a previous post. Liang, Moreland, and Argote developed a coding scheme to measure transactive memory in 1995. In that case, 2 individuals watched a video and rated the level of coordination, credibility, and expertise within the group overall. This measure was not as taxing as some versions of coding, but it still required 2 individuals to watch videos of all the groups and make a judgement. Once the two coders finished, their codes were compared using a formula like Cronbach's alpha, which determines how consistent the raters are at judging the same thing, in this case the groups in the videos.
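
For the curious, here is a minimal sketch of what that reliability check can look like in code. This is my own illustration: the ratings are invented, and it simply applies the standard Cronbach's alpha formula to two raters' scores.

    import numpy as np

    def cronbachs_alpha(ratings):
        """Cronbach's alpha for a (groups x raters) array of scores."""
        ratings = np.asarray(ratings, dtype=float)
        n_raters = ratings.shape[1]
        rater_variances = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
        total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
        return (n_raters / (n_raters - 1)) * (1 - rater_variances.sum() / total_variance)

    # Hypothetical coordination ratings from two coders for six videotaped groups.
    coder_scores = [[4, 5], [2, 2], [5, 4], [3, 3], [1, 2], [4, 4]]
    print(round(cronbachs_alpha(coder_scores), 2))  # prints 0.92 for these made-up scores

Values near 1 mean the two coders are rating the groups consistently; values near 0 mean their judgments have little to do with one another.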

Then we come to my personal difficulty using coding in my work. I have used coding for several projects, including some where the thing we were measuring was, I felt, objective. In those cases, using multiple coders is useful for picking up on when one person missed a specific thing, not for comparing groups based on the coders' perceptions of their qualities. In my case, the instruction might be: count the number of times this thing occurs. One coder sees 5 but the other only sees 3. The one that sees 5 may objectively be correct, but something like Cronbach's alpha just sees this as an inconsistency between the coders, when the actual problem is either attention or an honest mistake.

Whenever I start a coding process, I typically dread looking at the outcomes because I just feel that the mistakes are arbitrary, that I can't rely on the coders to do this correctly, or that the coders are just not doing a good job. It's frustrating. But it's frustrating in a way that feels unnecessary.

Machine learning has started to be introduced, providing a more objective way of looking at a set of hard-to-quantify data, though it may not do as good a job as a person can. I think that, in the near future, machine learning will begin supplementing human coding when large databases are available. I think that this is a good way forward, and I also like that it may reduce my personal reliance on other people. It becomes impersonal, removed from the imbued meaning of the words, and disconnected from theoretical constructs. But it is a tool that reduces the need for me to act mechanically, which is something, I suppose.
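
To be concrete about what I mean by machine learning supplementing human coding, here is a minimal sketch. Everything in it is hypothetical (the sentences, the labels, and the choice of a bag-of-words model with logistic regression); it is one common supervised approach, not a tool from any particular paper.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical sentences already coded by hand as task talk (1) or social talk (0).
    sentences = [
        "who has the red wire", "pass me the instructions",
        "how was your weekend", "did you watch the game",
        "the capacitor goes here", "lunch was great",
    ]
    labels = [1, 1, 0, 0, 1, 0]

    # Bag-of-words features plus logistic regression: a simple automated coder.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(sentences, labels)

    # The fitted model can then 'code' new, unlabeled sentences.
    print(model.predict(["where does the wire go", "nice weather today"]))

In practice you would train on hundreds of hand-coded examples, but the division of labor is the point: people code a sample, the machine codes the rest.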

Thursday, April 17, 2014

Carnegie School of Thought - Bounded Rationality

When I began my PhD, I was introduced early on to a particular strain of work on management called the Carnegie School of thought. This work was primarily done in the 50s and 60s by researchers at the Carnegie Institute of Technology (now Carnegie Mellon University). Herbert Simon and Jim March were the primary individuals involved, with March continuing the work with Richard Cyert.

Herb Simon was a very complex, analytical man. Though Simon began his life as a political scientist, publishing his dissertation as a book called Administrative Behavior, he later became more interested in artificial intelligence. My first introduction to him was in a cognitive psychology course. The instructor described how Simon began the first fall lecture of one of his courses by asking his students what they had accomplished over the summer. After all of the students had described their summers, Simon said that, over the summer, he had designed a computer that could think like a person. It was an early version of artificial intelligence that based its decision making on the ways in which people make decisions. Simon was an extremely influential person in computer science, artificial intelligence, cognitive psychology, and management. His influence in management is, for the most part, due to his encouragement of and collaboration with James March.

The ideas within the Carnegie School are quite diverse, and for this post I will focus on a concept called bounded rationality (closely tied to the idea of satisficing). Within economics, it is assumed that actors make the best choice in any given decision. Satisficing proposes that there are increased costs for some decisions, or that the outcome is not that important to the actor, leading the actor to willingly make a suboptimal choice. An example that Simon used to give was about lunch [I have modified the story from the original but the idea is the same]. If an actor is in their office and needs to get lunch, they could have multiple values that they desire to maximize: timeliness, cost, health, etc. An actor could determine the relative weight of those characteristics and make the optimal choice. But as Herb said, "I would instead just always go to [the student center]. For those who have been there, it is obviously a non-optimal choice." The humorous example has certain limitations but illustrates the concept well. Satisficing proposes that the act of making a choice is costly and one's own desires are not always clear, making a "good enough" choice much easier to determine than the best one.
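
To make the contrast concrete, here is a toy sketch of an optimizer versus a satisficer. It is my own illustration, not Simon's: the options and their scores are invented, and the aspiration level is arbitrary.

    import random

    # Hypothetical lunch options scored on overall utility (higher is better).
    options = {"student center": 0.55, "food truck": 0.70, "cafe": 0.85, "deli": 0.60}

    def optimize(options):
        """Evaluate every option and take the single best one."""
        return max(options, key=options.get)

    def satisfice(options, aspiration=0.5):
        """Search options in an arbitrary order; take the first 'good enough' one."""
        for name in random.sample(list(options), len(options)):
            if options[name] >= aspiration:
                return name  # stop searching once the aspiration level is met
        return optimize(options)  # nothing clears the bar; fall back to full search

    print(optimize(options), "vs", satisfice(options))

The optimizer always pays the cost of evaluating everything; the satisficer stops early, which is exactly the savings Simon had in mind.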

This concept, while somewhat of a refinement of the economic theory of optimization, was a revelation to the academic world. It is not without its detractors, though. A comment that I have heard from several critics of bounded rationality is that it is not testable, meaning it is not a proper theory: people may actually be making optimal choices but optimizing on unknown or unmeasured criteria. I personally think that satisficing is a very useful concept, though it does have an undercurrent of nondeterminism that also arises in March's Garbage Can Model of Organizational Choice. That undercurrent is a bit unsettling, but still interesting to me.

Wednesday, April 16, 2014

Comments on Data Analysis and Statistics

Analyzing data is quite an odd experience in the research world. In statistics classes, you learn about a lot of complicated models, tests, and assumptions. But, my experience analyzing data from experiments is that much of what is learned in the classroom is ignored. That isn't to say that I willingly defy the instruction I received in my classes. Instead it is that the things I learned in the majority of my stats classes are not that important for studying experiments.

Why is there this disconnect? First, most experiments are technically immune to a lot of the potential problems that stats classes teach you about. If there is random assignment to condition, then individual differences shouldn't matter. Manipulations and some dependent measures can be thought of as perfect measures because they are the thing itself. I don't have a huge amount of experience in this area, but the data I have gotten from experiments typically does not violate the assumptions of ANOVA or regression, such as independence, or is unable to violate them by design. A social psychologist once implied to me that, in our field, using fancy statistical techniques or describing all of the assumption tests you ran on the data can make any effects seem less believable. The reasoning is that most of the effects we investigate are measurable by ANOVA or linear regression, so a researcher reaching for fancy statistics may be doing so because that is the only analysis under which the effect exists.
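
For readers outside the field, the workhorse analysis really is about this simple. The condition names and scores below are made up purely for illustration.

    from scipy import stats

    # Made-up performance scores for three experimental conditions.
    control  = [12, 15, 11, 14, 13]
    turnover = [10, 9, 12, 8, 11]
    warned   = [13, 14, 12, 15, 16]

    # One-way ANOVA: do the condition means differ more than chance would suggest?
    f_stat, p_value = stats.f_oneway(control, turnover, warned)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

If assignment to conditions was random, this one test is often the entire analysis.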

This is an interesting situation because, if it is really the case, it suggests that at least part of social psychology is unwilling to accept advances in statistical procedures or statistical rigor because researchers don't want to be seen as hiding behind the math. If a paper doesn't use one of a small handful of methods, its authors are open to criticism of their statistical choices. If they use simple analyses, however, they are less open to complaints about their statistics. This may not be true, or true of the majority of social psychology, but I have reason to believe it exists. It is also certainly true that one sign of experimenter 'p-hacking' is the use of convoluted analyses that may not be entirely appropriate, leading to spurious effects.

I don't mean to suggest that new innovations never make it into social psychological research. Preacher and Hayes have made a huge splash in the psychological community by introducing a way to more accurately gauge the existence and effect size of many kinds of statistical mediation. I think a partial reason for this acceptance, however, was their demonstration that the traditional ways of testing for mediation were more likely than their method to say there is no mediation when one actually exists. This made the new method more appealing to the community at large, partially because it is more accurate, and especially because it is more accurate in a direction that means mediations that previously went unsupported may now be supported.
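
As I understand the bootstrap approach, the logic is: estimate the indirect effect (the X-to-mediator path times the mediator-to-Y path), then resample the data many times and check whether the resulting confidence interval for that product excludes zero. Below is a rough sketch with simulated data; it is only my paraphrase of the idea, not their actual procedure or code.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data in which X affects Y partly through a mediator M.
    n = 200
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)

    def indirect_effect(x, m, y):
        """a*b: the effect of X on M times the effect of M on Y (controlling for X)."""
        a = np.polyfit(x, m, 1)[0]  # slope of M ~ X
        design = np.column_stack([m, x, np.ones_like(x)])
        b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # slope of Y ~ M, holding X
        return a * b

    # Bootstrap: resample cases with replacement and re-estimate a*b each time.
    boots = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)  # the same resampled cases for x, m, and y
        boots.append(indirect_effect(x[idx], m[idx], y[idx]))

    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

If zero falls outside that interval, the mediation is supported.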

It is an interesting world that I honestly do not know much about. As far as journal publications go, if the editor and reviewers (normally no more than 5 people in total) think the stats you use are okay, then your work can be published. If the stats are easy to understand, your work is more likely to be published. But stats that are very hard to understand, because the methods are obscure or new, can also lead to your work being published. There were multiple issues with Daryl Bem's 2011 paper in JPSP (a very prestigious journal), but one criticism was that the stats he used were too complex and picked up on subtle, random differences. I think that the analytical world I live in is very interesting, but I just don't understand it sometimes.

Tuesday, April 15, 2014

Turnover and enactment of change

In much of the literature about turnover, it is unclear how newcomers influence the outcomes of the groups they join. More recent literature has attempted to categorize the ways that newcomers adapt to the groups they join and how groups adapt to the newcomers. In the study I describe today, the researchers were curious about what factors influence whether a newcomer is able or willing to share their ideas with the rest of the group. There is an assumption in much of the management literature that a newcomer's primary value is in the new ideas they bring to the group. But the conditions under which that actually happens are less clear. It certainly does not happen as often as it could, or there would be much more value in the world.

The study I want to describe is Kane, Argote, & Levine (2005). These researchers used an experiment to investigate some of their ideas through the frame of social identification. They proposed that if group members shared a common social identity with a newcomer, they would be more willing to accept new ideas into the group. Unfortunately, this study was not able to determine directionality (whether the effect is the newcomers' willingness to give ideas or the oldtimers' willingness to accept them), but it was a great step forward in this research.

In this task, the participants made paper boats in assembly lines. The researchers demonstrated how paper boats could be made but were clear that the requirement was to make as many boats as fit the requirements of the task, not just this specific boat. Some groups learned a method of making paper boats that required 7 folds while other groups learned a method that took 12 folds. Though the 7-fold method had one fold that was somewhat complex, it was much more efficient in general than the 12-fold method (based on pretesting). The groups were told to use an assembly line to construct the boats, and the more difficult fold was done by the middle member.

The other manipulation in the study was whether the groups shared a sense of collective social identity with each other. In each experimental session, 2 groups were brought into the lab at the same time, and both participated in a training period in the same room. In the high social identity condition, the groups were given the same name, seated in an integrated fashion, and given a reward scheme in which the performance of both groups would lead to better outcomes for all the individuals. In the low social identity condition, these three factors were changed so the groups seemed less similar to one another and their rewards were not interdependent.

The other move the experimenters made was very clever. The middle members of each group switched from one group to the other. Therefore, for some groups the new member had the same experience as the group they were entering (experience with the low- or high-efficiency folding technique), whereas for other groups there was a mismatch (the new member had one technique but the group they joined had the opposite). The new member is therefore in a position where they either need to learn the technique the group is using, or they need to try to get the group to accept the way of doing the task they are most used to.

Skipping ahead to the results, almost no groups accepted the newcomer's folding strategy if the strategy was worse than the one they already had. There was also a main effect of identity. When the groups shared an identity (from having the same group name and a shared reward structure), they accepted the new member's better way of constructing the boat about 70% of the time. If they didn't share that identity, the group accepted the new member's superior way of making the boat far less often (only 25% of the time).

The results for performance were a bit harder to interpret. All groups performed better over time, generating more boats in the last trial than in the first one. But there wasn't a strong direct relationship between the new member having a superior routine and performance. The researchers found that when the new member introduced a better routine to the group, the group experienced a larger increase in performance than when the new member had a worse routine. These differences, however, held only for groups that shared an identity with the new member. If the group didn't share an identity with the new member, then it didn't matter whether the new member had a better or worse routine, partially because these groups accepted that routine infrequently.

In context, this study was very significant for a few reasons. First, Argote had done significant work on learning within groups and organizations, but this was one of the first studies to demonstrate both how learning can occur within a group and that certain variables influence whether a group can learn from a new member. The variable of interest here was social identity, but other work has looked at a lot of other factors (see Rink et al., 2013, for a review). Second, this study demonstrated that groups have the ability to recognize advantageous strategies and use them. This had been demonstrated in some earlier work by McGrath, but Kane's study provides a very clean experimental setting. Lastly, the results suggest that learning new strategies can be costly to a group, hence the small performance differences between groups where the new member had a better routine and groups that received a new member with a less efficient routine.

Monday, April 14, 2014

The research process and study design

I was talking the other day to another PhD student who was presenting a schedule of the work she was planning to do over the next few months. One project she is deeply involved in at the moment is analyzing data collected from a set of real organizations. However, she also wants to test these findings in the lab. I thought this might be a good opportunity to talk briefly about study design and the different kinds of research that exist within social science.

Ideally, the research process goes in an order vaguely like this: a researcher comes up with an idea about how the world works, the relationship between some set of variables, etc. In most branches of social science, the researcher then creates a set of predictions about how the different variables will relate to one another. (This is less necessary in some fields, such as non-behavioral economics.) Next, the researcher decides the best way to determine whether this relationship exists. Sometimes the question itself will dictate what data should be used to test for an effect. If the question is, for example, about the relationship between stock price and employee stealing, then looking at a real organization may be ideal. Once a data source is identified, the researcher collects the data and analyzes them. After the researcher has interpreted the results, the work goes into the publication process, whether as a journal article, book, book chapter, or conference presentation.

I work in a very small world where I have used experiments in all of my work. My experiments, though not identical, have certain elements of design that I consistently use which adds familiarity to the design process for my studies. I know the manipulations and the kinds of acceptable tasks very well. Though the specifics have taken some time in the past to work out, I don't think it took me more than a few days to design each of the studies that I have used. The longest time has always been determining the task to use. The difficulty with tasks sometimes is the balance between creating a new, novel task that the participants won't be familiar with and choosing a task that has been tried and tested by you or your colleagues.

When I looked at the other student's schedule, I was genuinely surprised that she had 3 weeks scheduled for study design. When I talked to another student, she thought 3 weeks was just about enough time. This interaction got me thinking about why I was so surprised that the student would dedicate such a long period of time to study design. I don't think I am overly skilled at study design, but I could be using a different definition of study design than they were.

When a lab study is designed, the major decisions that have to be made are the task, the manipulations, and the measures. My manipulations have always been rather blunt and heavily tested: employee turnover or restricting communication. Manipulating more delicate factors, such as feelings of group belonging, fear, or feelings surrounding the exchange of favors, is, I imagine, much more difficult, and such manipulations may have smaller impacts on people. There are huge literatures investigating these factors, which may actually increase the time it takes to choose a manipulation, because the researcher may feel they need to be familiar with most of the prior work. I don't mean to come off as dismissive of other work, but if you spend all of your time reading all the published literature in your area, you'll never add to that literature yourself. It is a dangerous game in academia, unless you work along narrow specialties (which has been my strategy).

Once the core vision of a study has been determined and the three decisions mentioned earlier (task, manipulations, and measures) have been made, the materials have to be put together. I don't typically think of this as design, but it is a necessary part of the research process. This is the phase where study materials are drafted, the specifics of the task are decided, materials are purchased, and advertisements are readied. Another unsung but important aspect of this process is the writing of a script. I was fortunate to have a reader on a student project strongly suggest I write one for my first solo project; they also graciously provided an example. The script lists all the actions the experimenter takes to prepare for the study, all the things the experimenter says, when things occur in relation to one another, and the timeline of the study. Writing the script always has a way of highlighting glaring issues with the design of the study, in both a shallow sense (operationalization) and a deeper, theoretical sense.

I hope this post provides some insight into the nuts and bolts of the social science research process and maybe some tips for other scientists.

Wednesday, April 9, 2014

Goodman

Coal miner study

One part of my graduate education that I count among the most fortunate was the experience, however limited, that I had interacting with Professor Paul Goodman. Paul unfortunately passed away shortly after I passed my qualifying exams. He was an extremely interesting and committed researcher who allowed his personal feelings about justice to influence the direction of his work in a very real way, without allowing them to cloud the scientific process. Paul was truly one of a kind.

After Paul passed away, I spent some time talking to his wife and children as they discussed his upbringing and what motivated some of his work. From what I recall, both of his parents were liberal social activists in New England. From an early age they instilled in Paul the belief that organizations have a responsibility to treat their employees well. Though I'm sure many other things influenced his choice of career, Paul eventually began studying the ways that employees interact with management in organizations. Paul was also an avid filmmaker who made a series of videos about the current state and future of work, typically interviewing average people in industries that were changing. Many of these films can be found in a permanent collection at Carnegie Mellon's library website: http://dli.library.cmu.edu/paulgoodman/

The last two projects that I know of Paul pursuing reflect his range. One was a long-term project on the science of science teams. Though I do not know his specific motivation, scientists often apply far less of what social science actually knows to their own organizations. After Paul died, this project dissolved, the cohesive power of his personality having disappeared. The other was a more amorphous project that I think perfectly sums up Paul's outlook on the world: he and his assistants conducted hundreds of long-form interviews asking average people what they thought the American dream was, whether they strove for it, and what kind of world they wanted for their children.

Though Paul completed a lot of interesting work, what I'd like to talk about today is some of the work that came out of his multi-year coal-mining project. In this work, Paul went to coal mines in the mid-Atlantic and interviewed miners in their place of work; by that I mean underground, in the mine itself. Paul told me on multiple occasions that he thought the ability to conduct the interviews and collect data in the mine itself gave him a much more accurate perception of what it was like to work in this environment. My father, who is from Pennsylvania, told me when I was very young that my great-grandfather worked in a coal mine. This profession, and the work that Paul did, has therefore always touched me a bit more closely, as I always imagined my great-grandfather in the place of the miners in the papers.

The paper of Paul's I would like to describe is one that he wrote with Dennis Leyden. This work was supported by the U.S. Bureau of Mines. I can only guess, but I think the Bureau was interested in how the relationships between the individual workers in a mine were related to mine outcomes. Mines vary enormously in productivity, and one possible reason is the kinds of relationships the individual workers have with one another. Goodman and Leyden proposed that the mines provided a good opportunity to look at the effects of familiarity on the small teams that work together within a coal mine. (In a prior study, the researchers had already found that an individual with little familiarity with a mine was more likely to have an accident.)

Mining crews were sets of workers performing one of three unique roles: the miner operator, the bolter, and the car operator. Each crew typically had a pair of people in each role. Though each role is unique and has skills associated with it, the authors argue that the specific strategies individuals use vary from crew to crew based on personal differences and the features of the part of the mine the crew is in. Though the researchers do not mention it specifically in this paper, another factor that I imagine is important is the cognitive interdependence of the individuals on one another.

Without getting into too much analytical detail, the researchers used information about which crews individuals had worked on to create a measure of whether individuals had worked with one another before and of the extent to which a given crew's members were familiar with one another. Overall, the researchers found that the level of familiarity between crew members was predictive of overall mine productivity. They found some evidence that certain kinds of familiarity mattered more than others, but felt that overall familiarity was the more important predictor.
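
The rough logic of a familiarity measure like that can be sketched from shift rosters. Everything below (the names, the roster structure, and the 'mean prior shared shifts' definition) is my own hypothetical reconstruction, not the authors' actual measure.

    from collections import Counter
    from itertools import combinations

    # Hypothetical rosters: which miners worked together on which days.
    shifts = [
        {"al", "bo", "cy"},   # day 1 crew
        {"al", "bo", "dee"},  # day 2 crew
        {"al", "bo", "cy"},   # day 3 crew
    ]

    def crew_familiarity(history, crew):
        """Mean number of prior shared shifts across all pairs in a crew."""
        pairs = list(combinations(sorted(crew), 2))
        return sum(history[p] for p in pairs) / len(pairs)

    history = Counter()
    for crew in shifts:
        print(sorted(crew), "familiarity:", round(crew_familiarity(history, crew), 2))
        for pair in combinations(sorted(crew), 2):
            history[pair] += 1  # update after scoring, so the score reflects prior exposure

A crew-level score like this can then be related to productivity.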

In rereading this paper, I found myself reminded of some other interesting work by Karl Weick on aircrews. Like I am attempting to do in this blog, Weick preferred description over analytics and wrote extremely thought-provoking papers based on his reading and observation of real events. Weick's observations of aircrews found very similar effects of familiarity on the crews' ability to perform without errors. I'm sure I will discuss some of his other work later in this blog.

Tuesday, April 8, 2014

Cognitive Interdependence [Deep Dive] - Moreland series - Part 3

This post describes the studies commissioned by the Army that Levine et al. explored. In Part 2, the studies on productivity were explored. In this post, I focus on the experiments about innovation.

Creativity Experiment 1 - Assigned and/or Maligned (published as Choi & Levine, 2004)

The researchers then shifted away from performance as the primary variable of interest and toward the effect of turnover on group innovation. These studies used an air-surveillance task that John Levine and his students have used in several papers that I know of. Groups work together to monitor the radar at a base and assign threat levels to the different radar contacts. In each three-member group, 2 individuals were specialists and 1 acted as the commander. The specialists collect information about the radar contacts, and the commander receives that information and is tasked with making a decision. There were two different strategies the specialists could use to collect information; they differed on whether the information the two specialists collected was equally important or equally difficult to obtain.

In the first experiment, the researchers manipulated whether the group was able to choose its strategy and how well the feedback suggested the group had performed. In the experimental setup, the group was either assigned one of the two strategies above or allowed to choose one. Then the group performed the task. Half the groups were told that they had performed well and the other half were told that they had performed below a passing rate. One of the specialists was then chosen and replaced with a confederate. In social psych research, a confederate is someone who pretends to be a normal participant but has been coached to act in a particular way. The newcomer then proposed that the group switch to the opposite of whichever strategy it had used in the first trial.

The researchers used whether the group accepted or rejected the strategy the newcomer proposed as the variable of interest. Because this could be affected by a multitude of factors, the researchers also measured how committed the members were to the previous strategy, how much they liked the team, performance in the first trial, etc. The results were in line with what they anticipated. If groups were told they had failed to perform well in the first trial, they were more willing to accept the newcomer's idea. The groups were also more likely to accept the newcomer's idea if the group had not been allowed to choose its own strategy.

The researchers then did some additional analyses and proposed what led to the groups' receptivity to the newcomer's proposal. Two variables were proposed to mediate the effect of team choice on acceptance of the newcomer: commitment and perceived performance. If the group had a choice in its strategy, it was more committed to that strategy and perceived its performance as better. The researchers were fairly satisfied with these findings, but they also thought that the way the newcomer proposed the innovative idea would likely affect whether the group accepted it. This led to the second experiment.

Creativity Experiment 2 - An Assertive Story

This study was run very similarly to the first creativity study, except that the kind of language the newcomer used was varied. As before, groups were more likely to accept the newcomer's innovation when they had been told they failed in the first trial. There was also what is called a statistically significant interaction. An interaction just means that whether one variable has an influence depends on another variable. When the groups were told that they had succeeded, it did not matter whether the newcomer was assertive or not; the acceptance rate was always about 45%. If the group had been told it failed, however, it was more likely to accept the newcomer's ideas if the newcomer was assertive (~85%) than if the newcomer was not (~60%). [Note: this effect was only 'marginally significant,' meaning that our confidence in it is not overly high.] The researchers had hoped for stronger effects but still thought this study was valuable.
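
For anyone curious what testing an interaction looks like mechanically, here is a sketch. The data are fabricated to roughly match the rates above (40 groups per cell is an arbitrary choice of mine), and a logistic regression with a product term is one standard way to test an interaction with a binary outcome, not necessarily the authors' analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Fabricated trials approximating the acceptance rates reported above.
    cells = [  # (told_failed, assertive, acceptance rate)
        (0, 0, 0.45), (0, 1, 0.45), (1, 0, 0.60), (1, 1, 0.85),
    ]
    rows = []
    for failed, assertive, rate in cells:
        for _ in range(40):  # 40 hypothetical groups per cell
            rows.append({"failed": failed, "assertive": assertive,
                         "accepted": int(rng.random() < rate)})
    df = pd.DataFrame(rows)

    # 'failed * assertive' expands to both main effects plus their product;
    # the product term asks whether assertiveness matters *more* after failure.
    model = smf.logit("accepted ~ failed * assertive", data=df).fit(disp=0)
    print(model.params)

A reliably positive failed:assertive coefficient is the statistical expression of the pattern described above.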

Computational simulations, the shallowest dive

The last part of the technical report provides some information about a series of computational simulations included in this project. Very briefly, a computational simulation puts a bunch of agents into a box. Each agent represents a person, organization, etc. The agents are given some rules to live by, some of which may vary systematically (e.g., share information with another agent if they are within 2 spaces vs. share information with another agent only if they occupy the same space). A level of randomness is also added to the agents' decisions to help simulate the real world. Simulations are becoming more and more accepted within management-type research, though I am not sure how accepted they are within general social psychology.

In the series of simulations presented in the report, the authors focus on the effect of transactive memory and changes in the environment. In the first simulation, the researchers find some evidence suggesting that the value of a transactive memory is curvilinear with the size of the group. They found that if the group is fairly small, the agents' speed to task completion was about the same regardless of whether the group had a transactive memory. There was a definite benefit of TMS when groups were larger (between 15 and 27), but the benefit shrank for the largest groups (35). I personally think this is an artifact of how the agents' task is structured, but it does seem fairly reasonable. The last simulation suggested that transactive memory is particularly useful if the group completes multiple different kinds of tasks in alternating order; a transactive memory allows the group to shift tasks more quickly, leading to consistency in time to completion.
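
To give a flavor of what such a simulation might look like, here is a toy version of my own devising. It is not the authors' model; it captures only the basic intuition that search costs grow with group size when no one knows who knows what (it will not reproduce the curvilinear pattern, which would need a richer task structure).

    import random

    def run_group(size, has_tms, n_subtasks=60):
        """Toy model: a group must route each subtask to the one member who can do it."""
        experts = [random.randrange(size) for _ in range(n_subtasks)]  # who can do what
        steps = 0
        for expert in experts:
            if has_tms:
                steps += 1  # the group 'knows who knows what' and routes directly
            else:
                # without a TMS, members are asked at random until the expert is found
                while random.randrange(size) != expert:
                    steps += 1
                steps += 1
        return steps

    for size in (3, 9, 15, 27, 35):
        with_tms = sum(run_group(size, True) for _ in range(200)) / 200
        without = sum(run_group(size, False) for _ in range(200)) / 200
        print(f"size {size:2d}: {with_tms:.0f} steps with TMS vs {without:.0f} without")

Varying the routing rule, the randomness, or the task mix is exactly the kind of knob-turning these simulations are built for.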

Though the studies in this report were not all successful, I found the report particularly interesting. The room to try out new ideas that this project provided certainly helped the researchers develop their later studies and directed other researchers toward these topics.

I think this post completes my sequence on cognitive interdependence for now, though I'm sure it will crop back up :P

Monday, April 7, 2014

Cognitive Interdependence [Deep Dive] - Moreland series - Part 2

The US Army funded work by four social scientists in Pittsburgh, all centered on the influence of turnover on small groups. Work groups in the Army often experience member turnover for a variety of reasons (e.g., transfer, injury, death), which makes their interest in this area very understandable. In this post, I hope to walk through some of the studies that the Army funded. As far as I know, only one of the studies in this set has been published in an academic journal. There was, however, a technical report given to the Army, and I will be basing this post on it. The report can be found here: http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA433897

On the project were 4 primary researchers: John Levine, Dick Moreland, Linda Argote, and Kathleen Carley. Dick and Linda were mentioned in prior posts on cognitive interdependence for their extremely important experiments. John Levine, a frequent collaborator with Dick and Linda, is also interested in group behavior. Kathleen Carley is a somewhat different kind of researcher, specializing in computational simulations. In computational simulations, researchers create a set of rules for a world and then see what the outcomes are once the actors in the world have interacted for a while. The rules can then be adjusted to see whether the actors behave much differently or the outcomes change. From the abstract of the report, we can see that the researchers intended to gain insight into how personnel turnover affected groups completing different kinds of tasks. The variety of the researchers also allowed the use of both laboratory and simulation-based approaches. Due to two of the researchers' prior investment in the concept of transactive memory, it was included as a component in these studies. Indeed, the lab studies these researchers completed were a direct extension of those earlier experiments.

Productivity Experiment 1 - Turnover and Rumors of Turnover

In the first study, groups of 3 were trained together on a construction task (it isn't made completely clear, but I believe it was the radio assembly task used in Liang et al., 1995). There were two manipulations: the groups were warned that there would be turnover (or not), and groups actually experienced turnover (or not). The warning occurred before the group trained together, and the turnover occurred at the beginning of the second performance session. The researchers measured transactive memory and two measures of performance: whether the group could recall the task without access to the circuit, and assembly errors. The results for this first study were, in the words of the researchers, "difficult to interpret".

When groups didn't actually experience turnover, they recalled more of the task if they had been told that they would experience turnover. This makes sense, because the group members may have tried harder to individually memorize the task if they knew they couldn't rely on each other. For groups that experienced turnover, however, those that did not expect it recalled more of the task than those that did. As for errors, groups that experienced turnover performed much better, regardless of whether they had been warned. The researchers guessed that the newcomers may simply have tried really hard, which could explain the error results. In future studies, they made sure to limit how much harder the newcomers could train than the other members.

Productivity Experiment 2 - Turnover and Expertise Information

In this study, all groups were trained together on the task. In the control condition, the group was not warned of turnover and there was no turnover. In the second condition, turnover occurred without warning. In the other three conditions, the groups were warned there would be turnover and then given information about the newcomer's skills; these conditions varied on who received the information: just oldtimers, just newcomers, or both. The researchers measured transactive memory and errors.

As expected, groups that didn't experience turnover made fewer errors than those that experienced unexpected turnover. Groups in the three conditions where someone received information about the newcomer all made fewer mean errors than the groups that unexpectedly experienced turnover. Groups where the oldtimers received information about the newcomer made the same number of errors as groups that didn't experience turnover. Interestingly, when the information went only to the newcomer, or to both newcomers and oldtimers, groups made slightly more errors. The researchers found nearly mirrored results for transactive memory: groups that didn't experience turnover had the highest transactive memory, and groups where oldtimers received information had similarly high levels of TMS.

The researchers then shifted into looking at the effects of turnover on innovation. These studies will be considered next.

Thursday, April 3, 2014

Cognitive Interdependence [Deep Dive] - Moreland series - Part 1

In this post, I hope to describe in more detail a few of the transactive memory studies conducted at the University of Pittsburgh and Carnegie Mellon University. Richard Moreland was typically on these studies, with Linda Argote also involved in several. These researchers were continuing the series of studies that began with the seminal 1995 paper with Diane Liang as lead author. That study was followed by other experiments in 1996 and 1998. It was not until 2000 that another TMS paper by this group was accepted into an academic journal.

Richard Moreland had been involved with the transactive memory studies using the electrical circuit task since the beginning. He, like Daniel Wegner, was a social psychologist, primarily interested in how this interdependent view of memory fit with what was known about group psychology. He, with frequent coauthor John Levine, had been extremely influential in the area of groups research. Dick, as Moreland often goes by, and John had proposed throughout the 80s a fairly comprehensive theory of group socialization that had been widely accepted. The seminal aspect of this theory is a curve of member commitment over time, originally shown here in a chart (not reproduced).


Before and after a member joins a group, their commitment to the group increases up to a point. At different points along an individual's commitment curve, they are likely to be accepted, to put in more effort, and eventually to leave the group. The researchers' work after this theory was, to a certain extent, focused on how group members could be brought up the commitment curve faster and be more quickly socialized. This interest, I believe, led them to consider group training and transactive memory as an interesting avenue to explore.

After the initial round of studies, these researchers felt they had a good handle on the phenomenon of transactive memory development. Group members spending time together led them to have more accurate perceptions of each other's expertise, which let the members coordinate more easily and trust one another. The manipulation used to encourage transactive memory, however, gives the group members more than just expertise information. It could also lead the members to like one another more simply because they have spent more time together. A few of the experiments controlled for this factor, but the researchers thought there might be other ways to deal with this concern methodologically.

Enter Moreland and Myaskovsky (2000). Wegner's theory and the prior papers proposed that the transactive memory of the group is composed of information about expertise. In Wegner's experiments, this came from the romantic couple spending time together; in the earlier Moreland studies, it came from the group members interacting during the training period. In the 2000 paper, however, the researchers isolated the manipulation to just the aspect the theory specifies: information about expertise. I think this study is exemplary in that it smartly builds on prior work, isolating the mechanism, while keeping many other aspects identical, which allows us to generalize the findings to past work more easily.

In Moreland and Myaskovsky (2000), all of the members engaging in the radio construction task worked either independently or in a group during that first meeting. Then, for half of the groups whose members worked independently, each member's work was systematically graded by area of ability and compared to the other members'. A member would then receive a sheet giving the rank of each group member on each of several categories of skill. The other groups did not receive this information. The researchers found that groups given just this limited information about their members performed as well as groups that had been trained together. This suggests that information about members' relative skills is helpful for performance, and as helpful as training the group members together. The groups that received this feedback were also not statistically different in their level of TMS, as measured from the videotapes, from the groups that trained together. Granted, the feedback groups did perform worse and have lower TMS at the mean level, but the values were close.

This particular study attracted the US Army's attention. These researchers applied for and received a grant from the Army to more deeply investigate the effects of performance feedback on groups, especially groups that experience member turnover (as many Army groups do). The Army was interested in whether transactive memory is helpful in small work teams and whether providing individualized performance feedback to the group could be a way of quickly building a team's sense of being a group and improving its performance. I will discuss these studies (never formally published, but available in a technical report) next.

**Personal information about the researchers was attained second-hand and may not be accurate.