The consequences of being powerful

An Atlantic article, "Being Powerful Distorts Time Perception," recently caught my attention. The article discusses a few studies that induced feelings of power in a lab setting in order to observe different time-related cognitive consequences.

The first study suggested that the more power people have, the more available time they perceive themselves to have. The authors attributed the finding to an overall increased sense of control that powerful people feel, including control over time.

The next study concluded that powerful people tend to underestimate how much time something will take. This seems consistent with the first finding that people with power perceive themselves to have more time as a result of feeling in control of it. Together, the first two studies suggest that perceiving yourself as powerful distorts your sense of time in a negative way. While it might be less stressful to believe that you have more time in the future, if that belief leads you to underestimate how long things actually take, the stress-reducing benefit could easily be reversed. In a real-world situation, if an authority figure underestimates the time needed to get things done, stress seems likely to increase for subordinates as well.

I think this image is so awesome. It’s by Javier Jaén, from an interesting NYT article on time poverty.

But the third study discussed in the article suggests that people who perceive themselves as more powerful make better future-oriented financial decisions. In a lab setting, people who are primed to feel powerful are less likely than others to take an immediate reward if they’re told they can have a greater sum of money in the future. In other words, they’re less likely to discount future rewards in favor of those in the present. Outside the lab, the researchers found that a person’s perceived power at work actually predicts the amount he or she has in savings. The perception of power is undeniably helpful, according to these results.
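The article doesn't describe the researchers' formal model, but temporal discounting is commonly illustrated with a hyperbolic discounting function, where a reward of amount A delayed by D days feels worth A / (1 + kD) and k is a personal discount rate. Here's a minimal sketch in Python; the dollar amounts, delay, and k values are invented for illustration, with the smaller k standing in for the "feels powerful" pattern of discounting the future less:

```python
# Toy illustration of hyperbolic discounting (values are invented, not from the studies).
# A reward of `amount` after `delay_days` feels worth amount / (1 + k * delay_days);
# a smaller k means the future is discounted less.

def subjective_value(amount, delay_days, k):
    """Subjective value of a delayed reward under hyperbolic discounting."""
    return amount / (1 + k * delay_days)

immediate = 100   # $100 available right now
delayed = 150     # $150 promised in 90 days
delay_days = 90

for label, k in [("steep discounter (k = 0.02)", 0.02),
                 ("shallow discounter (k = 0.002)", 0.002)]:
    value_now = subjective_value(delayed, delay_days, k)
    choice = "waits for the $150" if value_now > immediate else "takes the $100 now"
    print(f"{label}: the delayed $150 feels worth ${value_now:.2f}, so this person {choice}.")
```

Run as written, the steep discounter takes the immediate $100 (the delayed $150 feels worth about $54), while the shallow discounter waits (it feels worth about $127), which is the direction of the effect the third study reports for people primed to feel powerful.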

So how can we reconcile the findings that highlight the detrimental effects of perceived power with those that suggest it’s beneficial? The authors of the third study on temporal discounting suggest that people who feel powerful discount the future less because they feel an increased sense of continuity between their present and future selves. Could that same sense of continuity underlie the perception that you have more time, or that future tasks will require less time? The connection is unclear to me, but as someone who’s deeply interested in our perception of time and the factors that affect it, I’d like to try to figure it out.

How can language reveal insecurity?

I just read an interesting Slate post by Katy Waldman on the linguistic nuances that can reveal insecurity. It was an enjoyable read, but it also induced some eyebrow raising. The article is based on studies that use sentiment analysis, a natural language processing technique that aims to extract subjective information about a writer from a piece of text. Sentiment analysis relies on large-scale correlations and machine learning. For example, an algorithm might be trained by analyzing hundreds of thousands of texts written by men and hundreds of thousands written by women in order to identify systematic differences between the two groups of texts. This approach shows that elements of a text, like the use of specific pronouns, can reveal a great deal about an author or about the text as a whole.
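To make "large-scale correlations and machine learning" a bit more concrete, here's a toy sketch of the general approach rather than the researchers' actual pipeline: turn each text into word-count features, train a classifier, and then look at which words (pronouns included) carry the most weight. The texts and labels below are invented, and a real analysis would use many thousands of documents:

```python
# Toy sketch of feature-based text classification; not the actual study pipeline.
# The four example texts and their labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "I really loved the workshop, and I think we learned a lot",
    "The project was completed on schedule and it met the requirements",
    "I feel like I could have done better, but I am proud of it",
    "The results were reviewed and the report was submitted on time",
]
labels = [1, 0, 1, 0]  # 1 = group A, 0 = group B (purely illustrative groups)

# Keep one-letter tokens so pronouns like "I" survive tokenization.
vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
X = vectorizer.fit_transform(texts)

clf = LogisticRegression().fit(X, labels)

# Words with the largest weights (in either direction) are the most diagnostic features.
ranked = sorted(zip(vectorizer.get_feature_names_out(), clf.coef_[0]), key=lambda pair: pair[1])
print("most group-B-like words:", [word for word, _ in ranked[:3]])
print("most group-A-like words:", [word for word, _ in ranked[-3:]])
```

With enough real data, features like first-person pronoun use end up with reliably nonzero weights, which is the kind of systematic difference these studies lean on.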

The post highlighted research pointing to linguistic overcompensation as an indicator of insecurity. One study (not yet published but discussed here) focused on insecurity at the level of an entire university. The rationale was that since it’s more prestigious to be a university (defined as an institution conferring at least one graduate degree) than a college, universities that are on the outer edge of the university group (i.e., ones that don’t offer Ph.D.s) might feel more insecure about their university standing.

The researchers collected a sample of websites for top universities and top Master’s programs. The dependent variable was how often the institution mentioned its own name or used the word “university.” They found that Master’s universities were much more likely to emphasize the word “university” (using it in 60% of self-references) than Ph.D.-granting institutions (which used it in less than half of self-references).

Next, they looked at a similar phenomenon in a different domain: airports. They assumed that those offering international flights were higher status than those only offering domestic ones, so they compared use of the word “international” in big airports and smaller airports (that are still international). They predicted that the smaller ones would feel more of a need to emphasize their international status and would consequently use the word more, and this is exactly what they found.

Returning to academia, experiment 3 examined a similar effect in students at different universities. Specifically, the researchers were interested in students at Harvard (a university that everyone knows very well is part of the Ivy League) and students at Penn, which is often overlooked as an Ivy League institution. They asked the students to enumerate “things you think of” when thinking about their school or describing it to others. They found that students at Penn were significantly more likely to use the phrase “Ivy League,” consistent with the previous two experiments.

The idea that members on the border of a prestigious group emphasize their membership more is an interesting possibility, and I don’t doubt that it’s true in many cases. However, it doesn’t seem to be the only explanation for the findings. It seems quite likely, for example, that small international airports emphasize that they service international flights because many people might genuinely be unsure, whereas LAX doesn’t need to highlight this feature nearly as much. The same could be said for Master’s universities and students at Penn. The Slate author acknowledges this concern but doesn’t do much to rule it out (I’m not fully convinced by her statement that “the researchers’ interpretation of their findings feels at least partially correct to me,” or by her description of having been an avid cheerer on her swim team because she was one of the worst swimmers). Projecting insecurity onto Master’s universities, small airports, and Penn students might not be a fair assumption. This type of research necessarily overlooks individual differences, which might be problematic for a topic as individualized as insecurity, and it will be cool to see more examples that either follow or refute this pattern.

What’s in a name of a hurricane?

A few months ago, a study came out in PNAS that sparked a lot of media interest: Female hurricanes are deadlier than male hurricanes. The idea is not that the most severe hurricanes happen to have female names, but instead that more people die in hurricanes that have female names than in those with male names.

The study analyzed death rates over more than 60 years, covering 94 hurricanes. The archival data showed that for hurricanes that did little damage, the difference in death tolls between masculine- and feminine-named storms was marginal. For hurricanes that did greater damage, however, the number of fatalities was substantially higher for female-named storms than for male-named ones.

Further, the researchers rated names for how masculine or feminine they are (on a Masculinity-Femininity Index, or MFI). For example, a highly feminine name would be “Eloise” (with a score of 8.944), while the female name “Charley” was rated as much less feminine (MFI = 2.889). They found that even within feminine-named hurricanes, the more feminine a name was (the greater its MFI score), the higher the number of fatalities. Specifically, their data suggest that a severe hurricane named Eloise will kill about three times as many people as one named Charley.

The explanation for the correlation might seem intuitive and surprising at the same time: we have gender-based expectations that females are less aggressive. This unconscious bias seems to lower the perceived risk of female-named hurricanes, so people take fewer precautions, like evacuating. In light of these findings, the World Meteorological Organization (WMO), the group that names the storms, might want to reevaluate its naming practices to avoid names that encourage dismissal of a hurricane’s danger. In case they’re looking for inspiration, I have a few suggestions. What NOT to name a female hurricane:

  • Any flower name: this includes Daisy, Petunia, Lilly, and sadly, Rose
  • Pooh Bear (imagine the reactions if meteorologists announced that Hurricane Pooh Bear was headed for the coast)
  • Any name that has repeated syllables: can we expect people to take Coco or Fifi seriously?
  • Any name that’s shared with a Barbie doll, like Skipper, Stacie, and certainly Barbie
"Hurricane Barbie is on her way!"

Hurricane Barbie is on her way!

And some names that people might take more seriously:

  • Names that evoke big, tough women you wouldn’t want to mess with: Bertha, Agnes, or Madea
  • Gender-neutral names, like Alex, Casey, or Jamie
  • Non-human names: names like PX-750 or The Hulk might do the job

Don’t mess with Hurricane Madea.

Synesthesia: The sky, the number 7, and sadness are all blue

Originally posted on NeuWrite San Diego:

If you were shown the shapes below and told that one is called a “kiki” and the other a “bouba,” which name would you attribute to which shape? Between 95 and 98% of people agree that the more rigid shape is “kiki,” and the curvy one is “bouba.” This is not because they learned these names in school (they’re made up), but because we’re predisposed to associate information from different modalities. As such, we pair the sharper “k” sound with the shape that has sharper points, and the rounder “b” sound with the rounder shape.

The kiki and bouba shapes.

Although we all naturally integrate information from multiple senses to some extent, people with synesthesia do so to a much greater extent. Generally, when synesthetes perceive something through one modality, they have a simultaneous and involuntary perceptual experience in another. There are many different types of synesthesia, but one common form is grapheme-color synesthesia, in…

Exponential Learning

We toss around the phrase “learn something new every day” jokingly, but in reality, we learn much more than one thing per day. Many of these things are implicit, so we don’t realize we’re learning, but each experience we have makes its mark on our cognition. Many other things we learn, though, are explicit – we’re consciously learning in an effort to get better at something. Before we can master a skill or knowledge set, we often have to learn how to learn that thing. What strategies facilitate optimal learning? Which are ineffective? A recent NYT column by David Brooks highlights some overarching differences in the learning processes of different domains.

In some domains, progress is logarithmic. This means that for every small increase in x (input, or effort), there is a disproportionately large increase in y (output, or skill) early on. Over time, the same increases in x will no longer yield the same return, and progress will slow. Running and learning a language are two examples of skills that show logarithmic learning processes.

A logarithmic learning curve.

Other domains have exponential learning processes. Early on, large increases in effort are needed to see even minimal progress. Eventually, though, progress accelerates and might continue to do so without substantial additional effort.
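To make the contrast concrete, here's a small sketch comparing the two shapes; the particular functions and constants are arbitrary stand-ins, not anything from Brooks's column:

```python
# Toy comparison of the two learning-curve shapes described above.
# The specific functions and constants are arbitrary illustrations.
import math

def logarithmic_skill(effort):
    # Big early gains, diminishing returns later.
    return 10 * math.log(1 + effort)

def exponential_skill(effort):
    # Little visible progress early on, accelerating returns later.
    return 0.5 * (math.exp(0.05 * effort) - 1)

for effort in [1, 10, 50, 100, 150]:
    print(f"effort = {effort:>3}:  "
          f"logarithmic skill = {logarithmic_skill(effort):6.1f}   "
          f"exponential skill = {exponential_skill(effort):6.1f}")
```

At low effort the logarithmic learner is far ahead (about 7 versus almost nothing), but by an effort of 150 the exponential learner has blown past (roughly 900 versus 50), which is the payoff structure the rest of this post is about.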

Mastering an academic discipline is an exponential domain. You have to learn the basics over years of graduate school before you internalize the structures of the field and can begin to play creatively with the concepts.

My advisor has also told me a version of this story. She’s said that working hard in grad school (specifically I think she phrased it as “tipping the work-life balance in favor of work”) is an investment in my career. Just as monetary investments become exponentially more valuable over time, intense work early in my career will be exponentially more valuable in the long run than trying to compensate by working extra later on.

An exponential learning curve.

Even in my first year of grad school, I developed a clear sense that simply learning how the field works and what good questions to ask takes time. When I wrote my progress report for my first year, I concluded that most of what I learned this year was implicit. I can’t point to much technical knowledge that I’ve acquired, but I can say that I’ve gained a much better idea of what cognitive science is about as a field. I’ve gained this by talking (and especially by listening) to others’ ideas, by attending talks, and by reading as much as I could. This implicit knowledge doesn’t necessarily advance my “PhD Progress Meter” (a meter that exists only in my mind), but acquiring at least some of it is necessary before I’ll see any real progress on that meter. Once the PhD meter is complete, I will merely have built the foundation for my career, and will probably still have much learning to do before I reach the steepest and most gratifying part of the learning curve.

Brooks points out that many people quit the exponential domains early on. He cites being “bullheaded” as a requirement for someone who wants to stick with one of these domains, since you must be able to continually put in work while receiving no glory. I think that understanding where you are on the curve at any given time is crucial for sticking with one of these fields, so that you can recognize that eventually the return on effort will accelerate, and that the many hours (tears, complaints, whatever) that went into mastering the domain early on were not in vain. Where I stand right now, progress is pretty flat… so I must be doing something right.

Study says, suck it, Shakespeare

When I was growing up, a lot of people, upon learning that my name is Rose, found it clever to say “a rose by any other name would smell as sweet.” I eventually realized that what Shakespeare was saying when he wrote the line is that names are irrelevant – a rose is a rose, regardless of what we call it. The Shakespeare-quoters were basically saying to me (unknowingly, I assume): your name is irrelevant, but hey, look! I know a line from Shakespeare.

A team of researchers at the Montreal Neurological Institute conducted a study to investigate the effect that an odor’s name has on people’s perception of the smell. They had people smell different odors that were accompanied by either a positive, negative, or neutral name. Positive names included “countryside farm” (is that really a positive-sounding smell?) and “dried cloves.” Negative ones included “dry vomit” and “dentist’s office.” Neutral ones were things like numbers. The names did not actually correspond to the smells, so any effects of name on perception didn’t result from the positive-sounding smells actually smelling better. The researchers had participants rate the pleasantness, intensity, and arousal of the smells, and they also recorded participants’ heart rate and skin conductance while they smelled the scents, as measures of physiological arousal.

Perhaps not surprisingly, smells were rated as significantly more pleasant and arousing when they were accompanied by positive names than when accompanied by neutral or negative names. Smells were rated as most intense when they had negative names, as opposed to neutral or positive ones. Taken together, the findings suggest that the names we use to describe odors (and many other aspects of our world) affect the way we perceive the actual smells. More specifically, we probably use the odor names to make a prediction, even if it’s a very general one, about what we’re about to experience. These predictions, in turn, seem to color our actual experience of the world, often in a self-fulfilling way.

I wonder if we could harness this knowledge of the effect of positive-sounding odor names to make certain jobs, like latrine odor judges, slightly more pleasant…

The inseparability of writing and science

Whether you agree with Steven Pinker‘s views on cognition or not, it’s hard to deny that he’s an eloquent writer. I recently found an interesting clip of Pinker discussing his new writing manual, The Sense of Style, which will be out in September.

I was first captivated by this quote: “There’s no aspect of life that cannot be illuminated by a better understanding of the mind from scientific psychology. And for me the most recent example is the process of writing itself.”

Throughout the video, Pinker explains why knowing more about the mind can help us become better writers, which in turn will facilitate communication about scientific topics like the mind. One reason Pinker makes this claim is that, in his view, “writing is cognitively unnatural.” In conversation, we can adjust what we’re saying based on feedback we receive from our audience, but we don’t have this privilege when writing. Instead, we must imagine our audience ahead of time in order to convey our message as clearly as possible.

Pinker points out that many writers write with an agenda of proving themselves to be a good scientist, lawyer, or other professional. This stance doesn’t give rise to good writing. A writer should instead try to show the reader something that’s cool about the world.

He also points out that to be a good writer, you must first be a good reader, specifically “having absorbed tens or hundreds of thousands of constructions and idioms and irregularities from the printed page.” He uses the verbs “savor” and “reverse-engineer” to describe the process of reading to become a better writer. This echoes a lot of advice I’ve encountered (often in written form) since I first decided to pursue a PhD: read as much as you can. (I have also learned that any amount of reading I do will never feel like enough).

Regarding his style manual, Pinker wants to avoid the prescriptivist (someone who prescribes what constitutes correct language) vs. descriptivist (someone who reports how language is used in practice, regardless of correctness) distinction. Another great quote:

The controversy between ‘prescriptivists’ and ‘descriptivists’ is like the choice in ‘America: Love it or leave it,’ or ‘Nature versus Nurture’—a euphonious dichotomy that prevents you from thinking.

His overall point is that the humanities and sciences should not be seen as mutually exclusive. Instead, science should be used to inform the humanities (in this case, writing, but I think his argument generalizes beyond this), and a knowledge of the humanities should inform science as well. To me, this is what cognitive science must necessarily be – an understanding of the human mind and behavior requires rigorous science, no doubt, but I think we need to continue to look outside the three pounds of neural tissue inside our skulls for the most complete understanding.