What’s in a hurricane’s name?

A few months ago, a study came out in PNAS that sparked a lot of media interest: female-named hurricanes are deadlier than male-named ones. The idea is not that the most severe hurricanes happen to have female names, but that more people die in hurricanes with female names than in those with male names.

[Image: Cyclone Monica]

The study analyzed death rates from over 60 years of archival data, covering 94 hurricanes. For hurricanes that did little damage, the difference in death tolls between masculine- and feminine-named storms was marginal. For hurricanes that caused greater damage, however, the number of fatalities was substantially higher for female-named storms than for male-named ones. The researchers also rated each name on how masculine or feminine it sounded (the Masculinity-Femininity Index, or MFI). For example, the highly feminine name “Eloise” scored 8.944, while the female name “Charley” was rated as much less feminine (MFI = 2.889). Even among feminine-named hurricanes, the more feminine the name (the higher the MFI score), the higher the number of fatalities. Specifically, the data suggest that a severe hurricane named Eloise would kill three times as many people as one named Charley.
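If you want a feel for what a multiplicative effect like that implies, here’s a minimal Python sketch. The log-linear form, the baseline death count, and the coefficient are my assumptions for illustration (calibrated only so that Eloise comes out at three times Charley); this is not the paper’s fitted model.

```python
import math

# Toy illustration of the headline claim: for severe storms, expected
# deaths scale multiplicatively with the name's femininity (MFI).
# The 3x figure for Eloise (MFI = 8.944) vs. Charley (MFI = 2.889)
# pins down the rate constant; the baseline death count is made up.

MFI_CHARLEY, MFI_ELOISE = 2.889, 8.944
RATIO = 3.0  # deaths(Eloise) / deaths(Charley), per the study

k = math.log(RATIO) / (MFI_ELOISE - MFI_CHARLEY)  # per-MFI-point effect

def expected_deaths(mfi, baseline=10.0):
    """Hypothetical expected deaths for a severe storm, relative to an
    invented baseline of 10 deaths at MFI = 0. Illustrative only."""
    return baseline * math.exp(k * mfi)

for name, mfi in [("Charley", MFI_CHARLEY), ("Eloise", MFI_ELOISE)]:
    print(f"{name} (MFI {mfi}): ~{expected_deaths(mfi):.0f} deaths")
```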

The explanation for the correlation might seem intuitive and surprising at the same time: we hold gender-based expectations that females are less aggressive. This unconscious bias seems to lower the perceived risk of female-named hurricanes, so people take fewer precautions, like evacuating.

In light of these findings, the World Meteorological Organization (WMO), the group that names the storms, may want to reevaluate its naming practices to avoid names that encourage people to dismiss a hurricane’s danger. In case they’re looking for inspiration, I have a few suggestions.

What NOT to name a female hurricane:

  • Any flower name: this includes Daisy, Petunia, Lilly, and sadly, Rose
  • Pooh Bear (imagine the reactions if meteorologists announced that Hurricane Pooh Bear was headed for the coast)
  • Any name that has repeated syllables: can we expect people to take Coco or Fifi seriously?
  • Any name that’s shared with a Barbie doll, like Skipper, Stacie, and certainly Barbie
"Hurricane Barbie is on her way!"

Hurricane Barbie is on her way!

And some names that people might take more seriously:

  • Names that evoke big, tough women you wouldn’t want to mess with: Bertha, Agnes, or Madea
  • Gender-neutral names, like Alex, Casey, or Jamie
  • Non-human names: names like PX-750 or The Hulk might do the job

Don’t mess with Hurricane Madea.


Synesthesia: The sky, the number 7, and sadness are all blue

Originally posted on NeuWrite San Diego:

If you were shown the shapes below and told that one is called a “kiki” and the other a “bouba,” which name would you attribute to which shape? Between 95 and 98% of people agree that the more rigid shape is “kiki,” and the curvy one is “bouba.” This is not because they learned these names in school (they’re made up), but because we’re predisposed to associate information from different modalities. As such, we pair the sharper “k” sound with the shape that has sharper points, and the rounder “b” sound with the rounder shape.

[Image: the kiki and bouba shapes]

Although we all naturally integrate information from multiple senses, people with synesthesia do so to a much greater degree. Generally, when synesthetes perceive something through one modality, they have a simultaneous and involuntary perceptual experience in another. There are many different types of synesthesia, but one common form is grapheme-color synesthesia, in…


Exponential Learning

We jokingly toss around the phrase “learn something new every day,” but in reality we learn much more than one thing per day. Many of these things are implicit, so we don’t realize we’re learning them, but each experience we have makes its mark on our cognition. Many other things we learn, though, are explicit: we’re consciously learning in an effort to get better at something. Before we can master a skill or knowledge set, we often have to learn how to learn that thing. What strategies facilitate optimal learning? Which are ineffective? A recent NYT column by David Brooks highlights some overarching differences in the learning processes of different domains.

In some domains, progress is logarithmic. This means that for every small increase in x (input, or effort), there is a disproportionately large increase in y (output, or skill) early on. Over time, the same increases in x will no longer yield the same return, and progress will slow. Running and learning a language are two examples of skills that show logarithmic learning processes.

[Image: logarithmic learning curve]

Other domains have exponential learning processes. Early on, large increases in effort are needed to see even minimal progress. Eventually, though, progress accelerates and might continue to do so without substantial additional effort.
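To make the contrast between the two curves concrete, here’s a small Python sketch. The constants are arbitrary, chosen only to show the shapes, not to model any real skill.

```python
import math

# Compare skill gained per unit of effort under the two learning curves.
# Constants are arbitrary; only the shapes matter.

def logarithmic_skill(effort):
    return math.log(1 + effort)        # fast early gains, then a plateau

def exponential_skill(effort):
    return math.exp(0.5 * effort) - 1  # slow start, accelerating later

for effort in [1, 2, 4, 8]:
    print(f"effort {effort:>2}: "
          f"log-domain skill = {logarithmic_skill(effort):5.2f}, "
          f"exp-domain skill = {exponential_skill(effort):6.2f}")
```

Doubling the effort barely moves the logarithmic curve after a while, whereas the exponential curve’s returns keep growing.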

Mastering an academic discipline is an exponential domain. You have to learn the basics over years of graduate school before you internalize the structures of the field and can begin to play creatively with the concepts.

My advisor has also told me a version of this story. She’s said that working hard in grad school (specifically I think she phrased it as “tipping the work-life balance in favor of work”) is an investment in my career. Just as monetary investments become exponentially more valuable over time, intense work early in my career will be exponentially more valuable in the long run than trying to compensate by working extra later on.
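The compounding analogy is easy to check with toy numbers: a unit of effort “invested” ten years earlier compounds over more periods than the same unit invested later. The growth rate and horizon below are invented for illustration.

```python
# Toy compound-growth comparison; the rate and horizon are invented.
RATE, YEARS = 0.07, 30

early = 1.0 * (1 + RATE) ** YEARS         # one unit invested in year 0
late  = 1.0 * (1 + RATE) ** (YEARS - 10)  # the same unit invested in year 10

print(f"early investment grows to {early:.2f}")  # ~7.61
print(f"late investment grows to  {late:.2f}")   # ~3.87
```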

[Image: exponential learning curve]

Even in my first year of grad school, I developed a clear sense that just learning how the field works and which questions are good ones to ask takes time. When I wrote my progress report for my first year, I concluded that most of what I learned this year has been implicit. I can’t point to much technical knowledge that I’ve acquired, but I can say that I’ve gained a much better idea of what cognitive science is about as a field. I’ve gained this by talking (and especially by listening) to others’ ideas, by attending talks, and by reading as much as I could. This implicit knowledge doesn’t necessarily advance my “PhD Progress Meter” (a meter that exists only in my mind), but I had to at least start acquiring it before I’d see any real progress on that meter. Once the PhD meter is complete, I will merely have built the foundation for my career, and will probably still have much learning to do before I reach the steepest and most gratifying part of the learning curve.

Brooks points out that many people quit exponential domains early on. He says that sticking with one of these domains requires being “bullheaded,” since you must be able to continually put in work while receiving no glory. I think that understanding where you are on the curve at any given time is crucial for sticking with one of these fields: it lets you recognize that eventually the return on effort will accelerate, and that the many hours (tears, complaints, whatever) that went into mastering the domain early on were not in vain. Where I stand right now, progress is pretty flat… so I must be doing something right.

Study says, suck it, Shakespeare

When I was growing up, a lot of people, upon learning that my name is Rose, found it clever to say “a rose by any other name would smell as sweet.” I eventually realized that what Shakespeare was saying when he wrote the line is that names are irrelevant – a rose is a rose, regardless of what we call it. The Shakespeare-quoters were basically saying to me (unknowingly, I assume): your name is irrelevant, but hey, look! I know a line from Shakespeare.

A team of researchers at the Montreal Neurological Institute conducted a study to investigate the effect an odor’s name has on people’s perception of the smell. They had people smell different odors accompanied by either a positive, negative, or neutral name. Positive names included “countryside farm” (is that really a positive-sounding smell?) and “dried cloves.” Negative ones included “dry vomit” and “dentist’s office.” Neutral ones were things like numbers. The names did not actually correspond to the smells, so any effect of name on perception couldn’t result from the positive-sounding smells actually smelling better. Participants rated the pleasantness, intensity, and arousal of the smells, and the researchers also recorded participants’ heart rate and skin conductance as measures of physiological arousal while they smelled the scents.

Perhaps not surprisingly, smells were rated as significantly more pleasant and arousing when accompanied by positive names than by neutral or negative ones. Smells were rated as most intense when they had negative names, as opposed to neutral or positive ones. Taken together, the findings suggest that the names we use to describe odors (and many other aspects of our world) affect the way we perceive the actual smells. More specifically, we probably use an odor’s name to make a prediction, even a very general one, about what we’re about to experience. These predictions, in turn, seem to color our actual experience of the world, often in self-fulfilling ways.

I wonder if we could harness this knowledge of the effect of positive-sounding odor names to make certain jobs, like latrine odor judges, slightly more pleasant…

The inseparability of writing and science

Whether you agree with Steven Pinker‘s views on cognition or not, it’s hard to deny that he’s an eloquent writer. I recently found an interesting clip of Pinker discussing his new writing manual, The Sense of Style, which will be out in September.

I was first captivated by this quote: “There’s no aspect of life that cannot be illuminated by a better understanding of the mind from scientific psychology. And for me the most recent example is the process of writing itself.”

Throughout the video, Pinker explains why knowing more about the mind can help us become better writers, which in turn will facilitate communication about scientific innovations like the mind. One reason Pinker makes this claim is that, in his view, “writing is cognitively unnatural.” In conversation, we can adjust what we’re saying based on feedback we receive from our audience, but we don’t have this privilege when writing. Instead, we must imagine our audience ahead of time in order to convey our message as clearly as possible.

Pinker points out that many writers write with an agenda of proving themselves as a good scientist, lawyer, or other professional. This stance doesn’t give rise to good writing. A writer should instead try to show the reader something that’s cool about the world.

He also points out that to be a good writer, you must first be a good reader, specifically “having absorbed tens or hundreds of thousands of constructions and idioms and irregularities from the printed page.” He uses the verbs “savor” and “reverse-engineer” to describe the process of reading to become a better writer. This echoes a lot of advice I’ve encountered (often in written form) since I first decided to pursue a PhD: read as much as you can. (I have also learned that any amount of reading I do will never feel like enough).

Regarding his style manual, Pinker wants to avoid the prescriptivist (someone who prescribes what constitutes correct language) vs. descriptivist (someone who reports how language is used in practice, regardless of correctness) distinction. Another great quote:

The controversy between ‘prescriptivists’ and ‘descriptivists’ is like the choice in ‘America: Love it or leave it,’ or ‘Nature versus Nurture’—a euphonious dichotomy that prevents you from thinking.

His overall point is that the humanities and sciences should not be seen as mutually exclusive. Instead, science should inform the humanities (in this case, writing, but I think his argument generalizes beyond it), and knowledge of the humanities should inform science as well. To me, this is what cognitive science must necessarily be: understanding the human mind and behavior requires rigorous science, no doubt, but I think we need to continue to look outside the three pounds of neural tissue inside our skulls for the most complete understanding.

The similarities between cartooning and researching

I recently heard Terry Gross interview New Yorker cartoon editor Bob Mankoff on NPR and was impressed by the parallels between producing good cartoons and producing good cognitive research (interestingly enough, Mankoff has also been involved in some psychological research on humor, but the links that intrigued me were less obvious).

Regarding the process of creating a good cartoon, Mankoff says:

people think you get one idea for a cartoon every week, and that’s not the way it works. You usually get 10 or 15… And people say, well, why, you know, new cartoonists especially ask me: Why do you want me to do 10 cartoons every week? I say because nine out of 10 things in life don’t work out.

Like cartoonists, researchers generate many more ideas for experiments than they can actually implement, and they implement many more experiments than they publish, because lots of them flop. The reasons that both cartoonists and researchers (and many other people, I’m sure) need to count on nine out of ten things not working out might be the same. First, sometimes people have bad ideas: ideas that don’t make for funny cartoons or accurate hypotheses. Second, sometimes people implement things poorly: a funny idea might not be expressed clearly in a cartoon, or a good hypothesis might not be investigated with the right methods.

Another interesting comment Mankoff made was regarding the “shifting character of humor in our society.” He referred to it a few times as “meta,” and noted that “it’s become much more humor about humor.” This reminds me specifically of what cognitive science researchers do all the time – we’re thinking about thinking. Since I’m currently procrastinating getting some work done, I guess right now I could say I’m thinking about thinking about thinking…

This aligns with my experiences so far.

Metaphors we speed by

I was half-listening to this TED Talk by Carl Honoré, In praise of slowness, as I folded laundry. Honoré’s argument is exactly what you might expect: the pace of modern life continues to accelerate, and it’s wreaking havoc on our mental, physical, and environmental health (NB: I do realize the slight irony of listening to this talk while folding laundry). As I was listening, it occurred to me that, although he never says this, he blames our modern speed on the metaphors we use to talk about time. He mentions that in Western cultures we talk about time as a draining resource, frequently saying things like “you either use it or lose it.” And we don’t just talk about time this way, saying things like “time is money”; we think about time as a limited resource, and we act accordingly. In other words, we speed up our actions.

Listening to Honoré, I immediately thought back to the 1980 book by Lakoff and Johnson, Metaphors We Live By, which claims that the metaphors we use shape the way we conceive of and act in the world. I think a lot about time and how we make sense of it, but this was a novel angle on the question for me. I wonder: if we were to start talking about time differently, would we think about it differently too? What if we all adopted my favorite metaphor for time, Thoreau’s “time is but a stream I go a-fishing in”? Could new metaphors encourage us to slow down?