Wednesday, April 30, 2014

Dream Weaving Part 1 - My first post about pseudo science

The paper I want to discuss in this post was brought to my attention by a podcast I enjoy called "The Skeptics' Guide to the Universe". This is a variety show that covers several different skeptical or science topics with a few staple segments. In the "News" section they discuss some new scientific discovery, something from the general news, or articles they think deserve ridicule. In the last episode I listened to, they discussed an article published in Psychology Today about a scientific paper (by a different author) that had been published in 2013. After I heard the discussion, the dismissive tone of the commentators, and the way in which they wrote off the study, I felt like they hadn't given it a fair shake (though the premise was fairly ridiculous).

Once I started reading the Psychology Today article and the original research paper, I started noticing some of the issues that arise when non-social psychologists try to evaluate our work. I still think that the paper is ludicrous and poorly done, but not for all of the same reasons as the people on the podcast. It was obvious after looking at both that the podcasters didn't have access to the original article and were treating the vague generalizations in the Psychology Today article as the actual methods rather than as a summary of them. The study's methods did have problems, but the podcasters' criticisms ended up being just far enough removed from what I saw as justifiable criticisms that the authors of either the article or the original research might be able to dismiss the "Rogues" as uninformed.

The article - titled "Can our dreams solve problems while we sleep" - is very short (840 words) and is an overview of 1 of 2 studies published in a paper called "Can healthy, young adults uncover personal details of unknown target individuals in their dreams?" The article's primary goal is to briefly describe the experiment and then provide some elaboration, suggesting that more similar work should be done in the future. As a critique of the research, the article falls short: the author takes the paper entirely uncritically, praising its author as rigorous and ending with two paragraphs that begin "Let's say that some sort of dream telepathy is real" and suggest that there is something very real going on in this study. I am unconvinced, both by the paper itself and by its lack of a measurable mechanism.

I am now onto the 4th paragraph and I have not said what the paper was about. The research paper - published in a fairly low-tier journal called EXPLORE: The Journal of Science and Healing - provides a rather detailed narrative of the process of this paper's development, which is not common in many of the articles I read. The paper's sole author is Carlyle Smith, a notable researcher on sleep. His past research appears to have primarily focused on how sleep states and the amount of sleep affect memory and learning. Regardless of his past work, this paper arose directly from a course that Dr. Smith was teaching on "Dreams and Dreaming", a reasonable topic of study for a sleep researcher. A student in the class brought up the topic of the "Dream Helper Ceremony" and the instructor decided to do a pilot test in the class. The paper mentioned that this was a senior-level psychology class. From my experience in similar classes, the interests of the students often drive the class and rigorous syllabi are sometimes not provided, so this seemed reasonable to me.

The "Dream Helper Ceremony" is essentially the idea that a group of individuals come together, hear about the life problem of an individual, and then all go to sleep, focusing their minds on the other's life problem and hopefully dreaming about said problem. The dreams are then shared with the target, who hopefully takes some value from this process. The researcher decided to design a study that would get at one of the factors of this scenario: whether individuals can dream about the problems of others. In the dream helper ceremony description, the problem is discussed before dreaming, so the jump to asking whether the content of the problem can come across in a dream seems a large one to me.

The researcher provided the students in the class with a picture of a person with a problem (the problem was not known to the researcher or the students), but they were told it was health-related. A subset of the students returned with a dream log that they believed represented the target (12 of 65). The researcher coded the dream logs based on a set of criteria that specifically captured elements of the target's health that would be negatively affected. This is a dubious practice because if the coder has more categories that fit the health diagnosis than other categories, they will be more likely to find matches for the health categories. The podcasters noted this problem. The researcher did weight the extent to which the health mention matched the problem of the person, which helps alleviate some concern. The researcher then compared earlier dreams of the 12 with the dreams that the individuals reported as having been about the target. And, surprise, surprise, there was more language that matched the health outcomes in the second dreams. As should be obvious, the students knew that the target had a health problem, so they were more likely to dream about those kinds of issues. There is also a self-selection bias because the other students did not think they dreamed about the target. This could mean that only those that dreamed about health outcomes reported their dreams and were included in the sample. The researcher noticed these issues and attempted to correct them in the second study.

I'll discuss this study tomorrow.

Monday, April 28, 2014

Computers and Communication

One of my interests since I first became a PhD student has been the process of organizing through computers. I am actually not sure where this interest comes from precisely, as I haven't had a huge amount of experience organizing with others over computers. When I was in undergrad, I took a course called Organizational Communication which I found extremely interesting. The focus of the class was mostly on the ways in which we fail or succeed at communicating within organizations. An example in class was how poor communication has led to helicopter crashes or accidental shootdowns.

A part of this course was focused on groups that communicate over the internet. The course was taught by a researcher who studies Wikipedia, and it was partially taught through the Human Computer Interaction group. After taking this class, I became more interested in this topic, though only in the academic sense: I still don't participate in much internet-organized work and am notoriously bad at keeping up with friends. While I was working on a book chapter about the rise of the globally distributed group, I was part of one, as my adviser spent some time at other schools. This was a period when I severely lost my way in my focus and ended up going through one of the roughest paper proposals I think there has been in my program.

I'd like to discuss some old, but interesting work on the way that people use computers to interact with one another. Sara Kiesler is a prolific and diverse researcher. When I first met her, she discussed how she had recently returned from a trip to Africa where she was interested in their nascent educational system and acted as an adviser. She taught us interviewing techniques, how to engage with subjects and find out what their true reasons were for their actions or thoughts. She seems to have a deep interest in increasing the quality of life for people wherever they are and through many different mechanisms.

She was part of a group of researchers interested in how the internet would influence the lives of those who had ready access to it. In this study, the researchers gave free internet access to a large number of families in the Pittsburgh area. The researchers then looked at the outcomes for each family member and tracked their individual usage. Initially, the signs were not good, with several negative outcomes (primarily depression), specifically for adolescents in the household. After more time, however, the positive effects of the internet on the families became more pronounced. Of all possible uses for the internet, the most common was interpersonal communication. The researchers concluded that using the internet to make new ties was related to increased depression, but using the internet for other purposes decreased depression: http://homenet.hcii.cs.cmu.edu/progress/index.html

Another interesting study that Sara Kiesler performed, even older than the Internet study, focused on the nature of the communication that individuals engaged in with one another. In this study, the researchers compared the communications of groups that did a task when the members were either in the same place or communicated over computer text messaging. The use of computers had various effects, positive and negative. Group members were more likely to get angry with one another and make extreme statements, and they had trouble coming to a collective consensus. This may be partially because people seem more real when they are in person, so it is harder to criticize them so heavily face to face. Another way to think about the phenomenon is that the ability to communicate at a distance led the group members to speak their minds more freely.

Another interesting finding was the amount of discussion that was contributed by the women in the group. In the face-to-face groups, men were dominant and their opinions were used more as the basis for the decision making process. When the groups instead used a computer, however, the women spoke more and contributed more to the discussion. The researchers suggested that the relative anonymity afforded by computer mediated communication let women feel less self-conscious about sharing their opinions. They also suggested that because there were fewer obvious status cues, women weren't in a position where they felt their opinions were less valuable.

Lastly, the researchers were curious if the change in the way people communicate changes the kinds of decisions that they are likely to make (instead of just their ability to make a decision). The researchers found that there was a definite 'risky shift' such that members were more willing to take on ventures that seemed risky if they were communicating online as opposed to face-to-face.

Though this research was published in 1992 (22 years before the publication of this article), we can see that people are using the internet to communicate and engage with one another in the same kinds of ways. Discussions on the internet often devolve into 'flame wars', quickly get off topic, and are full of overly superlative language about the love or hate of particular topics. Risky, or at least random, decisions made by groups coordinating over the internet are not uncommon to hear about. It is comforting, to a certain extent, to consider that we have always found computer mediated communication to be just disconnected enough from others to be incredibly mean to one another. This is not a new phenomenon; it is inherent in human nature. We humans, who have evolved to recognize faces and see truth in one another's eyes, are sullied by using online communication... but it does have its benefits. The convenience is unparalleled, and studies have shown that we are much more civil when we know who the other person we are talking to is, which is something.

Thursday, April 24, 2014

Sensemaking in Organizations

In the Fall of 2010, I was taking a seminar in organizational behavior. It was a morning class, in a much different format than I was used to. We read what, at the time, seemed like a ludicrous number of papers and then proposed questions we had about each to the professor. The professor then spent 15-20 minutes per paper summarizing and discussing the significance of each one, answering our questions as he went along. It was a small and intimate class, which made the moments that I dozed off that much more embarrassing. It was a very interesting class, but the lecture-like format was not engaging enough at 9 in the morning when I had stayed up until 1-2 to read all of the required papers.

One day we read a paper that deeply impacted my perception of how research can be done and explained in organizational behavior. The paper was called "The collapse of sensemaking" by Karl Weick, an influential but controversial individual within the field. Van Maanen argued in the article I mentioned yesterday that the paper I am about to describe was extremely powerful but never would have seen the light of day under Pfeffer's system. Pfeffer shot back that Weick was not formally rigorous enough, which only stoked Van Maanen's dislike for Pfeffer.

The article is very, very different from what you typically see in academic literature. It is a narrative about the Mann Gulch disaster that holds some information close to its chest in order to make the revelation of Weick's theory that much more convincing. The article has nearly 2500 citations according to Google Scholar. There are no formal hypotheses and no statistical analyses, but it's also not quite a theory paper. It is a kind of paper that I have only seen Karl Weick write. I mentioned the argument between Pfeffer and Van Maanen over Weick's style to a professor at my institution. I do not remember the specifics, but they were clear that pursuits like his are only possible after tenure and that few besides Weick can write these narrative theoretic pieces.

The paper begins with a description of the Mann Gulch disaster. Weick relies on the book "Young Men and Fire", written by Norman Maclean, who interviewed survivors of the event. As a very brief summary, a group of young firefighters parachuted into a forest where a fire had been reported. Their role was to act quickly to prevent the fire from spreading by digging fire lines, as well as repairing damage from the fire. The men were unfortunately unprepared for a large, active, and fast moving fire. They found themselves in a position where fire was rapidly approaching and they needed to act fast to survive. 13 of the 16 men died that day. Of those that survived, two found a way through a rock crevice; the third survived by lighting a brush fire at his feet and lying down in the ashes. The actions of this last individual, Wagner Dodge, led Weick to begin his theorizing about the collapse of sensemaking within this group of men.

Sensemaking is the way in which organizations act to create order in their environment, based on their purpose and culture. The theory of sensemaking apparently arose as an alternative to focusing just on the decision making process itself (as proposed by March of the Carnegie School). In other words, the organization's actions are a response to the way reality is perceived, made in order to maintain that perception of reality. Weick's primary argument is that the actions of the firefighters were in line with their incorrect perception of reality, and when they were faced with a new reality, they were unable to 'make sense' of the situation. Their training became useless because they were no longer in a situation they could understand. Dodge was able to make sense of the situation when others could not and essentially set a fire where he stood. This kept the main fire from coming as close to him, as the ground around him was already burned. His command to the others to join him in the fire seemed to go against their identity as firefighters.

I don't want to get into the details of the paper, as it is extremely dense and certainly worth a read. This paper is particularly important to me because of the way it is presented. It is intuitive and rigorous within its setting. Even though there is no data, you can tell that an extraordinary amount of thought went into the construction of the paper. I don't use sensemaking in my research, and I'm not sure I agree with it over other concepts that it somewhat collides with (like the Carnegie School), but damn does Weick make a good argument.

Wednesday, April 23, 2014

Finding your niche, and then it being invaded

Social science contains an extremely large range of things being studied. There are so many pockets of work being done that it can seem unlikely you would ever run into someone doing something identical to you.

I was walking in Boston when I was at the Academy of Management a few years ago. I stopped to chat with 2 PhD students from the Netherlands who had been at a session I had also just attended. As I talked to one student, I gave a short summary of what I had been working on: "transactive memory and how the structure of communication affects TMS formation." She mentioned that her partner was studying something else involving TMS: "how the structure of a group influences the usefulness of the TMS". Not the same question, but really very similar. It was in one way nice to somewhat randomly meet someone with such a similar topic, but in another way, I felt uneasy, like this person was my competition.

The outward view of much of science seems to be that scientists are extremely collaborative, share ideas freely, and learn from one another. This is, in one way, an ideal that many strive for, but it is in another very real way the antithesis of how some of science is structured. In a recent episode of Cosmos, Neil deGrasse Tyson described this relationship as a great lineage of student and teacher, learning and expanding our overall knowledge. Though I couldn't quite identify it, something in his statement seemed very dissimilar from aspects of my own experience.

Within social science, possibly because there is so much space to explore, I sometimes feel very protective of my little niche. I don't want anyone to come in and publish papers based on ideas similar to the ones I have before me. One reason is that the more publishing in an area, the more prior material you have to read and account for in your own paper or design. Second, if you are the first with an idea, it can be helpful in increasing your citations or just your general recognition. I don't know if this protectiveness exists in other areas of science, but it is something I personally feel.

The other day a professor suggested I read a paper due to my interests. Though its authors go about their theorizing and methods differently than I would, they were essentially interested in the same primary question I have in my dissertation. Thankfully, I think that this paper informs mine more than it replaces it, but I was scared when I looked at it. My thoughts were "Oh no, if this is what I think it is, will there still be room in the literature for my research?"

Potential solutions for this problem have been raised for years within social psychology. Van Maanen and Pfeffer had an argument in a journal over a series of articles. Pfeffer essentially suggested that a small set of scientists should determine what the important questions are; then all the researchers that do that kind of work (or all researchers) would pursue answering those questions thoroughly. This directed approach, which he called paradigm consensus, seems similar to pushes in other sciences, such as the recent push in physics to identify the Higgs boson.

Van Maanen made some cogent arguments against this (primarily about how the 'taste-makers' would be chosen in a field that retains some element of subjectivity). The title of his response, "Fear and Loathing in Organization Studies", is particularly witty, I felt. In the current model, where it is a bit of a free-for-all, some of us are put in the funny position that work that takes a long time to complete is successively more dangerous, because someone else might get there before you. Replications are not highly valued in social psych either, leading to some bogus concepts sticking around for much longer than they should.

Tuesday, April 22, 2014

Coding (in the social science sense)

I often wish that science did not occasionally use the same word to mean different things. In this case, the word is coding. When the majority of people think of coding, they probably think of writing computer code. Those who write computer code are called coders and all is right with the world. But there is another kind of coding that we use in social science much more often: qualitative coding. This essentially means that you take some output, communication, interview, etc. that cannot be directly translated into numbers and you create a scheme to do that translation. It could mean that you create a list of possible topics and match each sentence in an interview onto those topics. Going through this process can give you a quantitative idea of what a person was discussing.

I have already introduced one form of coding in a previous post. Liang, Moreland, and Argote developed a coding scheme to measure transactive memory in 1995. In that case, 2 individuals watched a video and rated the level of coordination, credibility, and expertise within the group overall. This measure was not as taxing as some versions of coding, but it still required 2 individuals to watch videos of all the groups and make a judgement. Once the two coders have finished, their codes are compared using a formula like Cronbach's alpha, which determines how consistent the raters are at judging the same thing, in this case the groups in the videos.
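For concreteness, here is one way to compute Cronbach's alpha across two coders, treating the coders as "items" and the groups as observations. The ratings below are made up for illustration, not from the Liang et al. study:

```python
# Cronbach's alpha as a consistency check between coders.
# alpha = k/(k-1) * (1 - sum(per-rater variances) / variance(rater totals))
# where k is the number of raters. Ratings here are invented.
from statistics import variance

def cronbach_alpha(ratings):
    """ratings: one list of scores per rater, aligned by group."""
    k = len(ratings)
    totals = [sum(scores) for scores in zip(*ratings)]
    rater_var = sum(variance(r) for r in ratings)
    return k / (k - 1) * (1 - rater_var / variance(totals))

coder_a = [4, 3, 5, 2, 4, 3]  # e.g. coordination ratings for 6 groups
coder_b = [3, 3, 4, 2, 5, 3]

print(round(cronbach_alpha([coder_a, coder_b]), 2))  # 0.85: fairly consistent
```

Two coders who agree perfectly give an alpha of 1; the more their ratings diverge, the lower it drops.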

Then we come to my personal difficulty using coding in my work. I have used coding for several projects, including some where the thing we were measuring was, I felt, objective. In those cases, using multiple coders is useful to help pick up on when one person missed a specific thing, not to compare coders' perceptions of a group's qualities. In my case, I say: count the number of times this thing occurs. One coder sees 5 but the other only sees 3. The one that sees 5 may objectively be correct, but something like Cronbach's alpha just sees this as an inconsistency between the coders, when the actual problem is either inattention or an honest mistake.

Whenever I start a coding process, I typically dread looking at the outcomes because I just feel that the mistakes are arbitrary, that I shouldn't need multiple coders for something objective, or that the coders are just not doing a good job. It's frustrating. But it's frustrating in a way that feels unnecessary.

Machine learning has started being introduced as a more objective way of looking at hard-to-quantify data, though it may not do as good a job as a person can. I think that, in the near future, machine learning will begin supplementing human coding wherever large databases are available. I think that this is a good way forward, and I also like that it may reduce my personal reliance on other people. It is impersonal, removed from the imbued meaning of the words, and disconnected from theoretical constructs. But it is a tool that reduces the need for me to act mechanically, which is something, I suppose.

Thursday, April 17, 2014

Carnegie School of Thought - Bounded Rationality

Discuss Herb Simon some and Organizations

When I began my PhD, I was introduced early on to a particular strain of work on management called the Carnegie School of thought. This work was primarily done in the 50s and 60s by researchers at the Carnegie Institute of Technology (now Carnegie Mellon University). Herbert Simon and Jim March were the primary individuals involved, with March continuing the work with Richard Cyert.

Herb Simon was a very complex, analytical man. Though Simon began his life as a political scientist, publishing his dissertation as a book called Administrative Behavior, he later became more interested in artificial intelligence. My first introduction to him was in a cognitive psychology course. The instructor described how Simon began the first lecture of the Fall semester of one of his courses by asking his students what they had accomplished over the summer. After all of the students had described their summers, Simon said that, over the summer, he had designed a computer that could think like a person. It was an early version of artificial intelligence that based its decision making on the same ways in which people make decisions. Simon was an extremely influential person in computer science, artificial intelligence, cognitive psychology, and management. His influence in management, for the most part, is due to his encouragement of and collaboration with James March.

The ideas within the Carnegie School are quite diverse, so for this post I will focus on a concept called bounded rationality (often discussed through its companion idea, satisficing). Within economics, it is assumed that actors make the best choice in any given decision. Satisficing proposes that some decisions carry increased costs, or that the outcome is not that important to the actor, leading the actor to willingly make a suboptimal choice. An example that Simon used to give was about lunch [I have modified the story from the original but the idea is the same]. If an actor is in their office and needs to get lunch, they could have multiple values that they desire to maximize: timeliness, cost, health, etc. An actor could determine the relative weight of those characteristics and make the optimal choice. But as Herb said, "I would instead just always go to [the student center]. For those who have been there, it is obviously a non-optimal choice." The humorous example has certain limitations but illustrates the concept well overall. Satisficing proposes that the act of making a choice is costly and one's own desires are not always clear, making a "good enough" choice much easier to determine than the best one.
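The contrast between the two decision rules is easy to sketch in code. The lunch options and utility scores below are invented for the example (apologies to the student center):

```python
# Optimizing vs. satisficing over a set of lunch options.
# Utilities are hypothetical scores combining time, cost, health, etc.

OPTIONS = ["student center", "cafe", "salad bar", "food truck"]
UTILITY = {"student center": 0.55, "cafe": 0.70, "salad bar": 0.90, "food truck": 0.80}

def optimize(options, utility):
    """Evaluate every option and return the single best one."""
    return max(options, key=utility.__getitem__)

def satisfice(options, utility, aspiration):
    """Return the first option that is 'good enough'; the search stops there."""
    for option in options:
        if utility[option] >= aspiration:
            return option
    return None  # nothing met the aspiration level

print(optimize(OPTIONS, UTILITY))        # salad bar
print(satisfice(OPTIONS, UTILITY, 0.5))  # student center: first one that's good enough
```

The optimizer pays the cost of evaluating everything; the satisficer stops at the first option that clears its aspiration level, which is exactly Simon's point about the cost of choosing.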

This concept, while somewhat of a refinement of the economic theory of optimization, was a revelation to the academic world. It is not without its detractors. A comment that I have heard from several critics of bounded rationality is that it is not testable, meaning it is not a proper theory. The reason is that people may actually be making optimal choices but optimizing on unknown or unmeasured criteria. I personally think that satisficing is a very useful concept, though it does have an undercurrent of nondeterminism that also arises in March's Garbage Can Model of Organizational Choice. This idea is a bit unsettling, but still interesting to me.




Wednesday, April 16, 2014

Comments on Data Analysis and Statistics

Analyzing data is quite an odd experience in the research world. In statistics classes, you learn about a lot of complicated models, tests, and assumptions. But my experience analyzing data from experiments is that much of what is learned in the classroom is ignored. That isn't to say that I willingly defy the instruction I received in my classes; it is that the things I learned in the majority of my stats classes are not that important for analyzing experiments.

Why is there this disconnect? First, most experiments are largely immune from a lot of the potential problems that stats classes teach you about. If there is random assignment to condition, then individual differences shouldn't matter. Manipulations and some dependent measures can be thought of as perfect measures because they are the thing itself. I don't have a huge amount of experience in this area, but the data I have gotten from experiments typically does not violate the assumptions of ANOVA or regression (such as independence), or is unable to violate them by design. A social psychologist once implied to me that, in our field, if you use fancy statistical techniques or describe all of the tests for assumptions that you ran on the data, it can make any effects less believable. The reasoning is that most of the effects we investigate are detectable by ANOVA or linear regression, so a researcher using fancy statistics may be doing so because that is the only analysis in which the effect exists.

This is an interesting situation because, if this is really the case, it suggests that at least part of social psychology is unwilling to accept advances in statistical procedures or statistical rigor because researchers don't want to be seen as hiding behind the math. If a paper doesn't use one of a small handful of methods, then it is open to criticism of its statistical methods. If it uses simple analyses, however, it is less open to complaints about its statistics. This may not be true, or at least not true of the majority of social psychology, but I have reason to believe it exists. It is also certainly true that one sign of experimenter 'p-hacking' is the use of convoluted analyses that may not be entirely appropriate, leading to spurious effects.

I don't mean to suggest that new innovations never make it into social psychological research. Preacher and Hayes have made a huge splash in the psychological community by introducing a way to more accurately gauge the existence and effect size of many kinds of statistical mediation. I think that a partial reason for this acceptance, however, was their demonstration that the traditional ways of testing for mediation were more likely than their method to say that there is no mediation when one actually exists. This made the new method more appealing to the community at large, partially because it is more accurate and especially because it is more accurate in a direction such that mediations that were previously not supported may now be supported.
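The core of their approach is a bootstrap of the indirect effect. As a rough sketch on simulated data (this is the general percentile-bootstrap idea, not their actual macro, and all the numbers below are made up): estimate the a path (X predicting M) and the b path (M predicting Y controlling for X), then resample the data many times to get a confidence interval for the product a*b.

```python
# Percentile-bootstrap test of an indirect effect, in the spirit of
# Preacher & Hayes. Data are simulated; this is a sketch, not their tool.
import random
from statistics import mean

def slopes(x, m, y):
    """Return a (slope of M on X) and b (slope on M in Y ~ X + M)."""
    xc = [v - mean(x) for v in x]
    mc = [v - mean(m) for v in m]
    yc = [v - mean(y) for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(p * q for p, q in zip(xc, mc))
    sxy = sum(p * q for p, q in zip(xc, yc))
    smy = sum(p * q for p, q in zip(mc, yc))
    a = sxm / sxx
    b = (sxx * smy - sxm * sxy) / (sxx * smm - sxm ** 2)
    return a, b

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + random.gauss(0, 0.5) for xi in x]  # true a path = 0.6
y = [0.5 * mi + random.gauss(0, 0.5) for mi in m]  # true b path = 0.5

a, b = slopes(x, m, y)
indirect = a * b  # should land near 0.6 * 0.5 = 0.3

boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    ab = slopes([x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx])
    boots.append(ab[0] * ab[1])
boots.sort()
lo, hi = boots[24], boots[974]  # ~95% percentile interval
print(f"indirect = {indirect:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

If the interval excludes zero, the indirect effect is supported; because the bootstrap makes no normality assumption about the product a*b, it flags real mediations that the older normal-theory tests would miss.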

It is an interesting world that I honestly do not know much about. As far as journal publications go, if the editor and reviewers (normally no more than 5 people in total) think the stats you use are okay, then your work can be published. If the stats are easier to understand, then your work is more likely to be published. But stats that are very hard to understand, because the methods are obscure or new, can also lead to your work being published. There were multiple issues with Daryl Bem's 2011 paper in JPSP (a very prestigious journal), but one criticism was that the stats he used were too complex and picked up on subtle, random differences. I think that the analytical world I live in is very interesting, but I just don't understand it sometimes.

Tuesday, April 15, 2014

Turnover and enactment of change

Describe Kane et al (2005) and maybe Levine & Choi (2009)

In much of the literature about turnover, it is unclear how newcomers influence the outcomes of the groups that they join. More recent literature has attempted to categorize the ways that newcomers adapt to the groups they join and how groups adapt to the newcomers. In the study I describe today, the researchers were curious what factors influence whether a newcomer is able or willing to share their ideas with the rest of the group. There is an assumption in much of the management literature that newcomers' primary value is in the new ideas they bring to the group, but the conditions under which that sharing occurs are less clear. It certainly does not occur as often as it could, or there would be much more value in the world.

The study I want to describe is Kane, Argote, & Levine (2005). These researchers decided to use an experiment to investigate some of their ideas using the frame of social identification. They proposed that if group members shared a common social identity with a newcomer, they would be more willing to accept new ideas into the group. Unfortunately, this study was not able to determine directionality (whether the effect is due to newcomers' willingness to give ideas or oldtimers' willingness to accept them), but it was a great step forward in this research.

In this task, the participants made paper boats in assembly lines. The researchers demonstrated how paper boats could be made but were clear that the requirement was to make as many boats as possible that fit the specifications of the task, not just this specific boat. Some groups learned a method of making paper boats that required 7 folds while other groups learned a method that took 12 folds. Though the 7-fold method had one fold that was somewhat complex, it was much more efficient in general than the 12-fold method (based on pretesting). The groups were told to use an assembly line to construct the boats, and the more difficult fold was done by the middle member.

The other manipulation in the study was whether the groups shared a sense of collective social identity with each other. In each experimental session, 2 groups were brought into the lab at the same time, and both participated in a training period in the same room. In the high social identity condition, the groups were given the same names, seated in an integrated fashion, and given a reward scheme where the performance of both groups would lead to better outcomes for all the individuals. In the low social identity condition, these three factors were changed so the groups seemed less similar to one another and their rewards were not interdependent with the other team.

The other move that the experimenters made was very clever. The middle member of each group switched from participating in one group to participating in the other. Therefore, for some groups the new member had the same experience as the group they were entering (both knew the low- or high-efficiency folding technique), whereas for other groups there was a mismatch (the new member knew one technique but the group they joined knew the other). The new member was therefore in a position where they either needed to learn the technique the group was using or needed to get the group to accept the way of doing the task that they were most used to.

Skipping ahead to the results, almost no groups accepted the newcomer's folding strategy if the strategy was worse than the one that they already had. There was also a main effect of identity. When the groups had a shared identity, they were much more likely to accept the new member's strategy. If the groups shared an identity (from having the same group name and a shared reward structure), then they accepted the new member's better way of constructing the boat about 70% of the time. If they didn't share that identity, the group accepted the new member's superior way of making the boat far less often (only 25% of the time).

The results for performance were a bit harder to interpret. All groups performed better over time, generating more boats in the last trial than in the first one. But there wasn't a strong direct relationship between the new member having a superior routine and performance. The researchers found that when the new member introduced a better routine to the group, the group experienced a larger increase in performance than when the new member had a worse routine. These differences, however, appeared only for groups that shared an identity with the new member. If the group didn't share an identity with the new member, then it didn't matter whether the new member had a better or worse routine, partially because these groups so rarely accepted that routine.

In context, this study was very significant for a few reasons. First, Argote had done significant work on learning within groups and organizations. This, however, was one of the first studies to demonstrate both how learning can occur within a group and that certain variables influence whether a group can learn from a new member. The variable of interest here was social identity, but other work has looked at many other factors (see Rink et al., 2013, for a review). Second, this study demonstrated that groups have the ability to recognize advantageous strategies and use them. This had been demonstrated in some earlier work by McGrath, but Kane's study provided a very clean experimental setting. Lastly, the results suggest that learning new strategies can be costly to a group, hence the small differences in performance between groups where the new member had a better routine and groups that received a new member with a less efficient routine.

Monday, April 14, 2014

The research process and study design

I was talking to another PhD student the other day who was presenting a schedule of the work that she was planning to do over the next few months. One project that she is deeply involved in at the moment is analyzing data that has been collected from a set of real organizations. However, she also wants to test these findings in the lab. I thought this might be a good opportunity to talk briefly about study design and the different kinds of research that exist within social science.

Ideally, the research process goes in an order vaguely like this: A researcher comes up with an idea about how the world works, the relationship between some set of variables, etc. In most branches of social science, the researcher then creates a set of predictions about how different variables will be related to one another. This is less necessary in some fields such as (non-behavioral) economics. The next thing the researcher decides is the best way to determine whether this relationship exists. Sometimes the question itself will inform what data should be used to test for an effect. If the question is, for example, about the relationship between stock price and employee stealing, then looking at a real organization may be ideal. Once a data source is identified, the researcher collects the data and analyzes it. After the researcher has interpreted the results, the work goes into the publication process, ending up as a journal article, book, book chapter, or conference presentation.

I work in a very small world where I have used experiments in all of my work. My experiments, though not identical, have certain elements of design that I consistently use which adds familiarity to the design process for my studies. I know the manipulations and the kinds of acceptable tasks very well. Though the specifics have taken some time in the past to work out, I don't think it took me more than a few days to design each of the studies that I have used. The longest time has always been determining the task to use. The difficulty with tasks sometimes is the balance between creating a new, novel task that the participants won't be familiar with and choosing a task that has been tried and tested by you or your colleagues.

When I looked at the other student's schedule, I was genuinely surprised that she had 3 weeks scheduled for study design. When I talked to another student, she thought that 3 weeks was just about enough time. This interaction got me thinking why I was so surprised that the student chose such a long period of time to dedicate to study design. I don't think I am overly skilled at study design, but I could be using a different definition of study design than they were.

When a lab study is designed, the major decisions that have to be made are the task, the manipulations, and the measures. My manipulations have always been rather blunt and heavily tested: employee turnover or restricted communication. The manipulation of more delicate factors, such as feelings of group belonging, fear, or feelings surrounding the exchange of favors, is, I imagine, much more difficult and may have smaller impacts on people. There are huge literatures investigating these factors, which may actually increase the time it takes to choose a manipulation because the researcher may feel like they need to be familiar with most of the prior work. I don't mean to come off as dismissive of other work, but if you spend all of your time reading all the published literature in your area, you'll never add to that literature yourself. It is a dangerous game in academia, unless you work along narrow specialties (which has been my strategy).

Once the core vision of a study has been determined and the three decisions mentioned earlier (task, manipulation, and measure) have been made, the materials have to be put together. I don't typically think of this as design, but it is a necessary part of the research process. This is the phase where study materials are drafted, the specifics of the task are decided, materials are purchased, and advertisement materials are readied. Another unsung but important aspect of this process is the writing of a script. I was fortunate to have a reader on a student project strongly suggest I write one for my first solo project, and he graciously provided me an example. The script lists all the actions the experimenter does to prepare for the study, all the things the experimenter says, when things occur in relationship to one another, and the timeline of the study. Writing the script always has a way of highlighting glaring issues with the design of the study in both a shallow sense (operationalization) and a deeper (theoretical) sense.

I hope this post provides you with some insight into the nuts and bolts of the social science research process and perhaps some tips for other scientists.

Wednesday, April 9, 2014

Goodman

Coal miner study

One of the parts of my graduate education that I count as most fortunate was the experience, however limited, that I had interacting with Professor Paul Goodman. Paul unfortunately passed away shortly after I passed my qualifying exams. He was an extremely interesting and committed researcher who allowed his personal feelings about justice to influence the direction of his work in a very real way without allowing them to cloud the scientific process. Paul was truly one of a kind.

After Paul passed away, I spent some time talking to his wife and children as they discussed his upbringing and what motivated some of this work. From what I recall, both of his parents were liberal social activists in New England. From an early age they instilled in Paul that organizations have a responsibility to treat their employees well. Though I'm sure many other things influenced his choice of career, Paul eventually began studying the ways that employees interact with management in organizations. Paul was an avid film-maker who did a series of videos about the current state and future of work. He typically interviewed average people in industries that were changing. Many of these films can be found at a permanent collection at Carnegie Mellon's library website: http://dli.library.cmu.edu/paulgoodman/

The last two projects that I know of Paul pursuing were a long-term project on the science of science teams and a more amorphous interview project. Though I do not know his specific motivation for the first, scientists often apply much less social science to their own organizations than what we actually know. After Paul died, this project dissolved, the cohesive power of Paul's personality having disappeared. The other project, I think, perfectly sums up Paul's outlook on the world: he and his assistants conducted hundreds of long-form interviews asking average people what they thought the American dream was, whether they strove for it, and what kind of world they wanted for their children.

Though Paul completed a lot of interesting work, what I'd like to talk about today is some of the work that came out of his multi-year coal-mining project. In this work, Paul went to coal mines in the mid-Atlantic and interviewed miners in their place of work; by that I mean underground, in the mine itself. Paul told me on multiple occasions that he thought that the ability to conduct the interviews and collect data in the mine itself gave him a much more accurate perception of what it was like to work in this environment. My father, who is from Pennsylvania, described to me when I was very young how my great-grandfathers worked in a coal mine. This profession and the work that Paul did have therefore always touched me a bit more closely, as I always imagined my great-grandfathers in the place of the miners in the papers.

The paper of Paul's I would like to describe is one that he wrote with Dennis Leyden. This work was supported by the U.S. Bureau of Mines. I can only guess, but I think that the Bureau was interested in how the relationships between the individual workers in a mine were related to mine outcomes. Mines vary enormously in productivity, and one possible reason is the kinds of relationships the individual workers have with one another. Goodman and Leyden proposed that the mines provided a good opportunity to look at the effects of familiarity on the small teams that work together within a coal mine. (In a prior study, the researchers had already identified that an individual with little familiarity with a mine was more likely to have an accident.)

Mining crews were sets of workers doing one of three unique roles. Those roles were: the miner operator, the bolter, and the car operator. Each crew typically had a pair of people performing each role. Though each role is unique and there are skills associated with the roles, the authors argue that the specific strategies the individuals use vary from crew to crew based on personal differences and the features of the part of the mine the group is in. Though the researchers do not mention it specifically in this paper, another factor that I imagine is important is the cognitive interdependence of the individuals on one another.

Without getting into too much analytical detail, the researchers used information about which crews individuals were working on to create a measure of whether individuals had worked with one another before and to what extent a given crew's members were familiar with one another. Overall, the researchers found that the level of familiarity between crew members was predictive of the overall mine productivity. They found some evidence that different kinds of familiarity mattered more than others, but they felt that overall familiarity mattered most.
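To make the idea of a roster-based familiarity measure concrete, here is a minimal sketch in Python. This is my own construction, not the authors' actual measure; the roster data and field names are hypothetical, and it simply counts prior shared shifts between pairs of workers and averages them within a crew.

```python
from itertools import combinations
from collections import Counter

# Each entry: (day, crew_id, member_ids) -- hypothetical roster data
rosters = [
    ("day1", "crewA", ["w1", "w2", "w3"]),
    ("day2", "crewA", ["w1", "w2", "w4"]),
    ("day3", "crewA", ["w1", "w2", "w3"]),
]

# Count how many shifts each pair of workers has shared
pair_counts = Counter()
for _, _, members in rosters:
    for pair in combinations(sorted(members), 2):
        pair_counts[pair] += 1

def crew_familiarity(members):
    """Mean number of prior shared shifts across all pairs in a crew."""
    pairs = list(combinations(sorted(members), 2))
    return sum(pair_counts[p] for p in pairs) / len(pairs)

print(pair_counts[("w1", "w2")])                       # → 3
print(round(crew_familiarity(["w1", "w2", "w3"]), 2))  # → 2.33
```

A pairwise average like this is one natural way to aggregate dyadic familiarity up to the crew level; the actual paper could just as well have used totals or weighted counts.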

In rereading this paper, I found myself reminded of some other interesting work by Karl Weick on aircrews. Like I am attempting to do in this blog, Weick preferred description over analytics and wrote extremely thought-provoking papers based on his reading and observations of real events. Weick's observations of air crews found very similar effects of familiarity on the crews' ability to perform without errors. I'm sure I will discuss some of his other work later in this blog.

Tuesday, April 8, 2014

Cognitive Interdependence [Deep Dive] - Moreland series - Part 3

This post describes the studies commissioned by the Army that Levine et al. explored. In Part 2, the studies on productivity were explored. In this post, I focus on the experiments about innovation.

Creativity Experiment 1 - Assigned and/or maligned (published as Choi & Levine, 2004)

The researchers then shifted away from performance as the primary variable of interest and toward the effect of turnover on group innovation. These studies used an air-surveillance task that John Levine and his students have used in several papers that I know about. Groups work together to monitor the radar at a base and assign threat levels to the different radar contacts. In each three-member group, 2 individuals were specialists and 1 acted as the commander. The specialists collect information about the radar contacts, and the commander receives that information and is tasked with making a decision. There were two different strategies that could be used to collect information, which varied on whether the importance of the information the specialists collected was the same for both specialists or whether the difficulty of getting the information was the same for both.

In the first experiment, the researchers manipulated whether the group was able to choose their strategy and how well their feedback suggested that they had performed. In the experimental setup, the group was either assigned one of the two strategies above or they were allowed to choose one. Then the group performed the task. Half the groups were told that they had performed well and the other half were told that they had performed below a passing rate. One of the specialists was then chosen and replaced with a confederate. In social psych research, a confederate is someone who pretends to be a normal participant but has been coached to act in a particular way. The newcomer then proposed that the group switch to the opposite strategy of whichever they had chosen in the first trial.

The researchers used whether the group accepted or rejected the strategy the newcomer proposed as the variable of interest. Because this could be affected by a multitude of factors, the researchers measured how committed the members were to the previous strategy, how much they liked the team, performance in the first trial, etc. The researchers found results that were in line with what they anticipated. If groups were told they had failed to perform well in the first trial, they were more willing to accept the newcomer's idea. The groups were also more likely to accept the newcomer's idea if the group had not been allowed to choose their own strategy.

The researchers then did some additional analyses and proposed what led to the groups' receptivity to the newcomer's proposal. Two variables were proposed to mediate the effect of team choice on the acceptance of the newcomer: commitment and perceived performance. If the group had a choice in their strategy, they were more committed to that strategy and they perceived their performance as better. The researchers were fairly satisfied with these findings, but they also thought that the way the newcomer proposed their innovative idea would likely have an effect on whether the group accepted it. This led to the second experiment.

Creativity Experiment 2 - An Assertive Story

This study was run very similarly to the first creativity study except that the kind of language the newcomer used was varied. As before, groups were more likely to accept the newcomer's innovation when the group was told that they had failed in the first trial. There was also what is called a statistically significant interaction. An interaction just means that whether one variable has an influence depends on another variable. When the groups were told that they had succeeded, it did not matter whether the newcomer was assertive or not; the acceptance rate was always about 45%. If the group had been told they failed, however, they were more likely to accept the ideas of the newcomer if the newcomer was assertive (~85%) than if the newcomer was not assertive (~60%). [Note: this effect is only 'marginally significant', meaning that our confidence in the effect is not overly high.] The researchers had hoped for stronger effects but still thought this study was valuable.
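A tiny numeric sketch can make the interaction concrete. The values below are the approximate acceptance rates reported above; the arithmetic is my illustration, not the authors' analysis. The point is that the simple effect of assertiveness differs across feedback conditions, and the interaction is the difference between those simple effects.

```python
# Approximate acceptance rates from the study, as a 2x2 table
accept = {
    ("success", "assertive"):     0.45,
    ("success", "not_assertive"): 0.45,
    ("failure", "assertive"):     0.85,
    ("failure", "not_assertive"): 0.60,
}

# Simple effect of assertiveness within each feedback condition
effect_success = accept[("success", "assertive")] - accept[("success", "not_assertive")]
effect_failure = accept[("failure", "assertive")] - accept[("failure", "not_assertive")]

# The interaction contrast is the difference between the simple effects;
# a nonzero value means assertiveness only matters after failure feedback.
interaction = effect_failure - effect_success
print(round(effect_success, 2), round(effect_failure, 2))  # → 0.0 0.25
```

Whether that 25-point contrast is statistically reliable is a separate question, which is exactly what the 'marginally significant' caveat above is about.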

Computational simulations, the shallowest dive

The last part of the technical report provides some information about a series of computational simulations that were included in this project. Very briefly, a computational simulation puts a bunch of agents into a box. Each agent represents a person, organization, etc. The agents are given some rules to live by, some of which may vary systematically (e.g., share information with another agent if they are within 2 spaces vs. share information with another agent only if they occupy the same space). There is also a level of randomness added to the agents' decisions to help simulate the real world. Simulations are becoming more and more accepted within management-type research, though I am not sure how accepted they are within general social psychology.

In the series of simulations presented in the report, the authors focus on the effect of transactive memory and changes in the environment. In the first simulation, the researchers find some evidence that suggests that the value of a transactive memory is curvilinear with the size of the group. They found that if the group was fairly small, the agents' speed to completion of a task was about the same regardless of whether the group had a transactive memory or not. There was a definite benefit of TMS when groups were larger (between 15 and 27), but the benefit shrank for the largest groups (35). I personally think that this is an artifact of how the agents' task is structured, but it does seem fairly reasonable. The last simulation suggested that transactive memory is particularly useful if the group completes multiple different kinds of tasks in alternating order. A transactive memory allows the group to shift tasks more quickly, leading to a consistency in time to completion.
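To give a flavor of the mechanics, here is a bare-bones agent-based sketch of my own construction, not the report's actual model. Agents have fixed expertise over subtasks; with a transactive memory the group routes each subtask to its expert, without one the subtask goes to a random agent, and a little random noise stands in for real-world messiness.

```python
import random

def run_group(n_agents, n_subtasks, has_tms, rng):
    # Each agent is an expert at subtask (i % n_subtasks)
    expertise = {i: i % n_subtasks for i in range(n_agents)}
    total_time = 0.0
    for task in range(n_subtasks):
        if has_tms:
            # TMS: the group knows who the expert is and assigns directly
            agent = next(a for a, e in expertise.items() if e == task)
        else:
            # No TMS: the subtask is assigned to a random agent
            agent = rng.randrange(n_agents)
        base = 1.0 if expertise[agent] == task else 3.0  # experts are faster
        total_time += base + rng.random() * 0.5          # random noise
    return total_time

rng = random.Random(42)
trials = 500
with_tms = sum(run_group(9, 3, True, rng) for _ in range(trials)) / trials
without = sum(run_group(9, 3, False, rng) for _ in range(trials)) / trials
print(with_tms < without)  # → True: TMS groups finish faster on average
```

Even a toy like this shows why task structure matters so much: change how expertise maps onto subtasks and the size of the TMS benefit changes with it, which is the kind of artifact I suspected above.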

Though the studies in this report were not all successful, I found the report particularly interesting. The ability to try out new ideas that this project provided certainly helped the researchers develop their later studies and directed other researchers toward these topics.

I think this post completes my sequence on cognitive interdependence for now, though I'm sure it will crop back up :P

Monday, April 7, 2014

Cognitive Interdependence [Deep Dive] - Moreland series - Part 2

The US Army funded work by four social scientists in Pittsburgh, all centered on the influence of turnover on small groups. Work groups in the Army often experience member turnover for a variety of reasons (e.g., transfer, injury, death), which makes their interest in this area very understandable. In this post, I hope to walk through some of the studies that the Army funded. As far as I know, only one of the studies in this set has been published in an academic journal. There was, however, a technical report given to the Army that I will be basing this post on. This report can be found here: http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA433897

On the project were 4 primary researchers: John Levine, Dick Moreland, Linda Argote, and Kathleen Carley. Dick and Linda were mentioned in prior posts on cognitive interdependence for their extremely important experiments. John Levine, a frequent collaborator with Dick and Linda, is also interested in group behavior. Kathleen Carley is a somewhat different kind of researcher, specializing in computational simulations. In computational simulations, researchers create a set of rules for a world and then see what the outcomes are once the actors in the world interact for a while. The rules can then be adjusted to see if the actors behave much differently or if the outcomes change. From the abstract of the study, we can see that the researchers intended to gain insight into how personnel turnover impacted groups completing different kinds of tasks. The varied expertise of the researchers also allowed the use of both laboratory and simulation-based approaches. Due to two of the researchers' prior investment in the concept of transactive memory, it was included as a component in these studies. Indeed, the lab studies that these researchers completed were a direct extension of those earlier studies.

Productivity Experiment 1 - Turnover and Rumors of Turnover

In the first study, groups of 3 were trained together on a construction task (it isn't made completely clear, but I believe it was the radio assembly task used in Liang et al., 1995). There were two manipulations: the groups were warned that there would be turnover (or not), and the groups experienced turnover (or not). The warning occurred before the group trained together, and the turnover occurred at the beginning of the second performance session. The researchers measured transactive memory and two measures of performance: whether the group could recall the task without having access to the circuit, and assembly errors. The results for this first study were, in the words of the researchers, "difficult to interpret".

When groups didn't actually experience turnover, they recalled more of the task if they were told that they were going to experience turnover. This makes sense because the group members may have tried harder to individually memorize how to do the task if they knew that they couldn't rely on each other. For groups that experienced turnover, however, groups that did not expect turnover recalled more of the task than those that did expect it. As for errors, groups that experienced turnover performed much better, regardless of whether they were warned that there could be turnover. The researchers guessed that the newcomers may have simply tried really hard, which could explain the effects on errors. In future studies, they made sure to keep the newcomers from training harder than the other members.

Productivity Experiment 2 - Turnover and Expertise Information

In this study, all groups were trained together on the task. In the control condition, the group was not warned of turnover and there was no turnover. In the second condition, turnover occurred without warning. In the other three conditions, the groups were warned there would be turnover and then given information about the newcomer's skills. The conditions varied on who received the information, just oldtimers, just newcomers, or both. The researchers measured transactive memory and errors.

As expected, groups that didn't experience turnover made fewer errors than those that experienced unexpected turnover. Groups in the other three conditions, where someone received information about the newcomer, all made fewer mean errors than the groups that unexpectedly experienced turnover. Groups where the oldtimers received information about the newcomer made the same number of errors as groups that didn't experience turnover. Interestingly, when the information went only to the newcomer or to both newcomers and oldtimers, groups made slightly more errors. The researchers found nearly mirror results for transactive memory. Groups that didn't experience turnover had the highest transactive memory, and groups where oldtimers received information had similarly high levels of TMS.

The researchers then shifted into looking at the effects of turnover on innovation. These studies will be considered next.

Thursday, April 3, 2014

Cognitive Interdependence [Deep Dive] - Moreland Series Part 1

In this post, I hope to describe in more detail a few of the transactive memory studies that were conducted at the University of Pittsburgh and Carnegie Mellon University. Richard Moreland was typically on these studies with Linda Argote also involved in several. These researchers were continuing the series of studies that began with the seminal paper with Diane Liang as lead author that was published in 1995. This study was followed in 1996 and 1998 by other experiments. It was not until 2000 that another TMS paper by this group was accepted into an academic journal.

Richard Moreland had been involved with the transactive memory studies using the electrical circuit tasks since the beginning. He, like Daniel Wegner, was a social psychologist primarily interested in how this interdependent view of memory influenced what was known about group psychology. He, with frequent coauthor John Levine, had been extremely influential in the area of groups research. Dick, as Moreland often goes by, and John had proposed a fairly comprehensive theory of group socialization throughout the 80s that had been widely accepted. The seminal aspect of this theory is shown in the chart below.


Before and after a member joins a group, their commitment to the group increases up to a point. At different points along an individual's commitment curve, they are likely to be accepted, to put in more effort, and eventually to leave the group. Their work after this theory was, to a certain extent, focused on how group members could be brought up the commitment curve faster and be more quickly socialized. This interest, I believe, led the researchers to consider group training and transactive memory as an interesting avenue to explore.

After the initial round of studies, these researchers felt like they had a good handle on the phenomenon of transactive memory development. Group members spending time together led them to have more accurate perceptions of expertise, which in turn let the group members more easily coordinate and trust one another. The manipulation used to encourage transactive memory, however, gives the group members more than just information about expertise. It could also lead the group members to like one another more because they have spent more time together. A few of the experiments controlled for this factor, but the researchers thought that there might be other ways to methodologically deal with this concern.

Enter Moreland and Myaskovsky (2000). Wegner's theory and the prior papers proposed that the transactive memory of a group is composed of information about expertise. In Wegner's experiments, this came from the romantic couple spending time together, and in the earlier Moreland studies it came from the group members interacting during the training period. In the 2000 paper, however, the researchers isolated the manipulation to just the aspect that the theory mentions: information about expertise. I think this study is perfect in that it smartly builds on prior work, isolating the mechanism, but keeps many other aspects identical, which allows us to generalize the findings to past work more easily.

In Moreland and Myaskovsky (2000), all of the members engaging in the radio construction task worked independently or in a group during that first meeting. Then, for half of the independent groups, their work was systematically graded by area of ability and compared to the other members. A member would then receive a sheet listing the rank of each group member on each of several different categories of skill. The other independent groups did not receive this information. The researchers found that when groups were provided just this limited information about other members, they performed just as well as if they had been trained together. This suggests that information about members' relative skill is helpful for performance, as helpful as training the group members together. The groups that received this performance feedback were also not statistically different from the groups that trained together in their level of TMS as measured from the videotapes. Granted, the groups that received performance feedback instead of training together did perform worse and had lower TMS at a mean level, but the values were close.
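The rank sheet described above is straightforward to sketch. This is my own illustration with hypothetical member names, scores, and skill categories, not the study's actual materials: each member is ranked, per category, relative to the other members of their group.

```python
scores = {  # member -> {skill category -> graded score}; hypothetical data
    "Ann":  {"wiring": 8, "soldering": 5, "assembly": 9},
    "Ben":  {"wiring": 6, "soldering": 9, "assembly": 7},
    "Cara": {"wiring": 9, "soldering": 6, "assembly": 5},
}

def rank_sheet(scores):
    """Return member -> {category -> rank}, where rank 1 is best."""
    sheet = {m: {} for m in scores}
    categories = next(iter(scores.values())).keys()
    for cat in categories:
        # Order members from highest to lowest score in this category
        ordered = sorted(scores, key=lambda m: scores[m][cat], reverse=True)
        for rank, member in enumerate(ordered, start=1):
            sheet[member][cat] = rank
    return sheet

print(rank_sheet(scores)["Cara"]["wiring"])  # → 1 (Cara ranks best at wiring)
```

The key design point is that the sheet conveys only relative expertise, nothing about liking or shared history, which is exactly the isolation of the mechanism that makes the study so clean.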

This particular study attracted the US Army's attention. The researchers applied for and received a grant from the Army to investigate more deeply the effects of performance feedback on groups, especially groups that experience member turnover (like many Army groups do). The Army was interested in whether transactive memory is helpful in small work teams and whether providing individualized performance feedback to the group could be a way of quickly building a team's sense of being a group and its performance. I will discuss these studies (never formally published but available in a technical report) next.

**Personal information about the researchers was obtained second-hand and may not be accurate.

Wednesday, April 2, 2014

Cognitive Interdependence - Part 4

Competing theoretical underpinnings


Liang, Moreland, and Argote (1995) sought to bring Wegner's work into wider recognition within both the worlds of management and experimental psychology. Though I am sure that there was other interest percolating in transactive memory in the meantime, Liang's study brought significant added momentum to the research area. As mentioned before, the researchers proposed that groups that had been trained together would perform better than those that trained individually. The researchers found that groups that trained together made about 2 errors on average whereas groups that trained individually made more than 5 errors. When the measures from the videotapes were included, the results were clear. Groups that trained together engaged in more of those three processes than other groups. They coordinated better, developed more distinct specializations in the task, and trusted one another's expertise. The researchers proposed that groups that were trained together were able to coordinate in this way because they had the opportunity to develop transactive memory. These three factors are still considered fundamental components (though some say indicators) of transactive memory within a group. The most widely used scale to measure transactive memory systems within groups was developed by Kyle Lewis in 2003 and measures these three components.

What does it all mean, though? For some time, groups research did not have a good explanation for why groups typically perform better over time. It was clear that individuals perform better over time and that group members grow to like one another over time, yet effects of 'group learning' remained even after controlling for these factors. The researchers hypothesized that the development of a shared system for coordination and expertise exchange could help explain how group learning occurs. The existing literature on shared mental models suggested that, over time, groups converge on a shared understanding of how things should be done. The researchers suspected instead that groups do better over time because individuals differentiate, specializing into unique roles. And that is essentially what they found.

To rule out other explanations, the researchers then ran a series of studies examining the effects of team-building exercises and of scrambling team members so that they no longer worked with the same people in the second half of the study. They consistently found that team-building did not improve group performance as much as training together did. This clarified a secondary point from earlier: team-building exercises, though good for some things, do not really help groups perform better. If performance is what you care about most, on-the-job training is much more effective than building bonds with your coworkers. The researchers also found that the benefits of training together were not purely individual. To test this, they randomly reassigned people to new groups after they were trained together. Even if an individual was trained as part of a group, that training was not very helpful when working with a new group. All of this suggested that there is something important about training a group together and then keeping that group intact.

After these studies came out, Andrea Hollingshead began some extremely influential work at the University of Illinois (starting in 1998) that went back to the roots of transactive memory research. She explored transactive memory using romantic couples and found some really intriguing effects. Even when a romantic couple cannot talk to one another, they are able to implicitly divide up a list of words such that one partner remembers one set of words and the other remembers a different set. Hollingshead proposed that if a word fell within one partner's area of expertise, that partner would devote more effort to remembering it, and the other would know not to commit effort to remembering it.

Lewis's scale, published in 2003, has made measuring TMS much easier for researchers. An alternative scale (Austin, 2003) is also used sometimes, though the difficulty of implementing it has made it less popular. Transactive memory is now discussed in many research areas and has been accepted into top academic journals. After a controversial article in Science (Sparrow et al., 2011), it even got a mention on the Colbert Report. As with most scientific phenomena, the initial flourish has been followed by reanalysis and reevaluation. Lewis and Herndon (2011) proposed more concrete and systematic ways to think about transactive memory, possibly as an attempt to reduce misuse of the concept by researchers less familiar with its intricacies.

I believe I will publish a few more posts on this concept, but I hope these four posts have provided a deep and (at least marginally) interesting look into the origins of transactive memory.

**Personal information about the researchers was obtained second-hand and may not be accurate.