An exercise in wrongness

Wrongness isn’t a word, you say? Then I’m off to a great start. (It is, though.)

My department makes a pretty big deal of our second year projects. We don’t have any qualifying exams, just an oral presentation and a paper. We’re still 4 long weeks away from presenting these projects, but there have already been plenty of eye-opening moments for me to write about. For many of us, this is the first time we’ve done a project of this nature and magnitude from start to “finish” (are these projects really ever over?) largely independently. This means there are a lot of unexpected opportunities to make mistakes.

Going back to last summer when I started running the experiments that will be included in my project, I screwed up plenty of things. My sloppy programming meant that the experiment crashed sometimes. Other times, I failed to communicate important details to the research assistants running the experiment, and we had to trash the data. It turned out that data collection was actually the phase of the project in which I made the fewest mistakes, though. The process of analyzing the data was a cycle of mistakes and inefficiencies, usually followed by more mistakes and inefficiencies. Every once in a while, I’d do something useful, and that was enough to keep me going.

Sometimes, I’ve gotten annoyed at myself for making these mistakes, especially when deadlines are approaching or when my advisor has to be the one to point them out to me. I’ve been frustrated by the messiness of the data (though logically I know that I should probably be skeptical if my data weren’t messy), and all those things I should have done differently continue to come to mind and nag at me.

“Piled Higher and Deeper” by Jorge Cham

Luckily, I’m pretty sure I’m not alone. A handful of older grad students have told me about their second year project mistakes, and mine start to look like par for the course.

And then I discovered a Nautilus interview with physicist David Deutsch. It’s a pretty philosophical conversation about the importance of fallibility, but the takeaway is that the ability to be wrong is something we should embrace, because the very fact that we’re error-prone means that it’s possible to be right. He points out that in science, people so often prove wrong things that had been assumed for years to be truths.

What makes progress possible is not whether one is right or wrong, but how one deals with ideas. And it doesn’t matter how wrong one is. Because there’s unlimited progress possible, it must mean that our state of knowledge at any one time hasn’t even scratched the surface yet. [As the philosopher Karl Popper said], “We’re all alike in our infinite ignorance.”

This interview lifted a lot of weight off my second-year grad student shoulders. I’ve made lots of mistakes throughout the process of putting together this project (and I feel pretty confident I’m not finished making them), and therefore there is such a thing as doing the work correctly. In the end, the p-values that I find when I analyze my data aren’t really the important part (though, unfortunately, they’re what will determine if and where the work gets published…). Instead, it’s a reminder to focus on the ideas – the ones the work was based on and the ones the work opens up – and embrace the wrongness.

Is there a link between intelligence and worrying?

A Slate post I read this week – Scary smart: do intelligent people worry more? – left me feeling uneasy. It’s not because I like to think of myself as smart and only now discovered that I might worry more than the average person (I figured that one out in second grade, I think). The uneasiness came from feeling that the author’s overall story was not the one that the data he reported actually show.

The post started by discussing a study by a group at Lakehead University in Ontario which had students complete a survey with items like “I am always worrying about something” and complete a verbal intelligence test. They found that those two scores were positively correlated: people who reported worrying more also had higher scores on the verbal test.

Next the author reports an amusing study conducted by researchers at the Interdisciplinary Center Herzliya in Tel Aviv, in which participants thought their task was to assess artwork on a computer. While they were doing this, the computer informed them that they had just activated a virus, and the research assistant running the experiment frantically asked them to go get help. As they went for help, more stress was thrown at them – someone in the hall asked them to answer a survey, another person dropped a stack of papers at their feet… The researchers found that the people who scored highest on an anxiety measure were the least distracted by these additional stressors from their mission to find computer help. The Slate article reports, “Nervous Nellies proved more alert and effective.” I’m not sure I would come to the same conclusion, since in some cases, staying fixed on one goal when other important things arise might not actually be a good trait. Regardless, it’s hardly a sign of intelligence, so I’m still not sure why that research was included in this piece.

These same researchers have also shown that people who are higher in anxiety sense threats like smoke more quickly than others. Again, this might not always be a good thing (is it really beneficial to smell your neighbor’s BBQ and get distracted from the task at hand?), and even in cases where hypervigilance is helpful, it’s not how most of us define intelligence.

Image from the original Slate article

There are a few other examples that seem consistent with the idea that higher intelligence (defined in a variety of ways) is associated with more worrying (also defined in a variety of ways). But then there’s some evidence showing that the positive relationship between intelligence and worrying might not be so clear-cut. For example, although IQ and anxiety seem to be positively correlated among people with diagnosed generalized anxiety disorder, the reverse was true in a control group: higher IQs were associated with less worrying.

But not much attention is paid to the contradictory evidence. The author writes “Still, the suspicion persists that a tendency to be twitchy just might bequeath a mental advantage.” He lists famous people who have been considered intelligent and have had anxiety – Nikola Tesla, Charles Darwin, Kurt Gödel, and Abraham Lincoln. While this is interesting, it’s not evidence – he conveniently forgot to list the many geniuses who didn’t have excessive anxiety and the many anxious people who don’t have exceptional intelligence.

Kudos to the article for presenting two sides, one step toward showing that the question is not cut-and-dried (so few are, especially in psychology). But what should a reader think after this? That psychologists are wasting time and money running experiments that contradict each other, and that we might never know which ones to believe? (If I weren’t a grad student in a related field, that’s what I’d take away.)

Instead, readers should understand that “worrying” is complicated. Maybe, just maybe… “worrying” is not all one thing. We might use the same word to describe it, but that doesn’t mean it’s really just one concept. There’s rumination about things you’ve done; worry about far-off future events; fear of public speaking, of being mugged, or of spiders; jitteriness; pessimism; and many other flavors of worrying. While a person who engages in one type might often engage in others, that’s not necessarily the case. And since there’s no absolute way to measure “worry,” researchers have to operationalize – they have to create a working definition for worrying, something measurable that they take to reflect it. We shouldn’t expect that a finding based, for example, on a generalized anxiety questionnaire will apply to all types of people. Further, these studies test people in different contexts and places (many cultural characteristics can affect performance on the measures researchers use to reflect worry) and are run by different researchers (even subtle differences in mannerisms, experiment design, or environmental controls could affect the results).

We need to be careful how much we generalize. Instead of concluding that intelligent people worry more from the study that correlated people’s verbal test scores with their anxiety inventory scores, it might be more accurate to say something like: college students in Ontario who have high verbal scores (according to one particular test) also tend to have high anxiety scores (according to another particular test).

Granted, if all these caveats were heaped on readers, they’d probably become disillusioned with research and maybe stop reading catchy articles in the popular press like this one, and that’s not the goal either. It’s just important to point out that not all dependent variables are created equal, and that single experiments, especially on such abstract traits as “worrying,” shouldn’t be recklessly generalized.

Overgeneralization is a problem in psychology, probably because flashy conclusions are much more interesting to non-psychologists, and popular press writers’ goal is to engage their audience. I think our goal should be engaging people in a way that doesn’t overgeneralize, though. Is society really becoming more scientifically literate if people are reading articles about science but misunderstanding the implications of that science? I have higher hopes for improving scientific literacy. I think we can engage people, tell them about exciting and controversial findings, and help them think critically so they can generalize when it’s appropriate and take things with a grain of salt when that’s appropriate. We can have our scientific cake and eat it too, as long as we remember that that’s the goal of science communication.


Here’s one effective way to communicate science

Science is very cool. But the way it’s often taught – seemingly arbitrary facts to be memorized or lab procedures to be blindly followed – is less cool. It’s not too surprising that many people decide at some point during their education that science is not for them. Not only do they forgo scientific careers (which is fine – variety is important), but they avoid science in all forms. They skip the science section of newspapers and blogs, comment on the uncharacteristically dry and warm winter without questioning its causes or consequences, and take medications that they’re prescribed without researching the condition they’re being treated for or alternative treatments. In many cases, science that’s relevant to everyday life flies under the radar and people don’t even notice it; in others, they read a sensational headline and run with it or post photos of a seemingly magical dress on all their social media accounts.

And who can blame people for feeling like pursuing scientific information is a waste of time? If their science education brings up painful or boring memories and the rare scientific writing that they do engage with may as well have been written in another language, non-scientists are not going to seek out science in their lives. Exciting more students about science is one way to avoid societal scientific ignorance, but another is to improve the quality of science communication. Efforts to do so are widespread (for example, this summer I’ll be attending a workshop, ComSciCon, whose goal is to improve communication between scientists and their readers), but we still have a lot of work to do.

Nautilus has become one of my favorite sources for science news. At its core, it’s a science blog, but it’s very different from any other science blog I’ve encountered. For one, each issue has a theme, like the current one – Dominoes (subtitle: one thing leads to another). The pieces within an issue all relate to the theme but come from seemingly unconnected domains, resulting in a surprising web of connections among ideas you’ve probably never thought about together (or in isolation, as is often the case for me). Nautilus is also different because there’s a clear effort to present each post in the format that works best for it. I recently wrote about the cool experience of learning how music hijacks our perception of time through an audio tour consisting of clips and annotations.

A recent post about an interview with Helen Fisher, a prominent sex/love/relationship researcher and communicator, also provided a non-traditional reading experience. The post embodied so many goals of science communication. It opened by describing the experience of the interview – an interesting comment Fisher made and the actual apartment that the writers met in. Then, once we can picture the environment that the dialogue took place in, the author told us why we should care about the interview: Fisher makes some provocative claims, such as suggesting that an increase in casual sex has caused our divorce rate to stop increasing – casual sex might lead to long-term marital happiness. The rest of the interview is presented in transcript form, but a video of the interview is the main draw. It’s not posted as one chunk, as most videos are. It always bothers me that I don’t get to experience an online video at my own pace in the way that I experience written materials. The Nautilus interview eliminates this bother by posting Fisher’s responses to each question as individual mini movies that are linked to the questions she’s responding to. Thanks to this format, readers can preview the questions, skip the ones they find less interesting, and listen to the interesting ones in any order they want (all of which I did). This solution is fairly low on complexity, but high on genius. More, please!

How should we talk about sex? Ditch the baseball

Whenever we talk about abstract or complex topics (and even when we talk about things that are neither abstract nor complex), we can almost guarantee that metaphors will take center stage. This is especially true when it’s not appropriate to focus on certain aspects of a situation or when we want to lighten a mood: talking about how our neighbor croaked or kicked the bucket is less morbid than saying that he died. Depending on context, we’re more likely to hear that someone lost his lunch instead of what he really did, which was blow chunks. Considering that sex is awkward or inappropriate to talk about in many circumstances (and it would be crazy to suggest that we just avoid the topic in the first place!), it comes as little surprise that sexual discourse is highly metaphorical.

In this Ted talk, Al Vernacchio challenges the predominant American metaphor for sex: baseball. There are lots of ways that baseball talk is used to talk about sex, but the most common might be using the bases to refer to different sexual acts. You have to do them in order, just like you have to run the bases in order in a ball game, and hitting a home run is supposed to be a cause for celebration in both contexts. The metaphor extends beyond the bases, though. You can be a pitcher, a catcher, or a benchwarmer. The game involves a bat, a nappy dugout, and a catcher’s mitt. You can be a switch-hitter or just flat-out play for the other team.

Baseball isn’t a terrible metaphor for sex. Even if you’ve never heard of some of those metaphors (I hadn’t), if you know what generally goes down during a baseball game and you know what generally goes down during sex, you can probably figure out many of the mappings. In fact, that’s part of what makes a metaphor good – there are common relations between the two domains. Their commonalities are highlighted and their differences are ignored. But metaphors, especially ones like this with many mappings between the two topics, aren’t just ways of talking. They’re ways of thinking. Vernacchio argues that if we want to foster better views of sex in our society, maybe we need to ditch the baseball metaphor. He proposes that we use a pizza metaphor instead. Here are the different inferences the two might encourage:

  • You play baseball when it’s baseball season and a game is scheduled. You eat pizza when you’re hungry for pizza. When should you have sex?
  • There is no baseball game if there aren’t opposing teams. When you’re getting a pizza, you (hopefully) ask the others who will enjoy it with you what toppings they want. You’re all on the same pizza-ordering team!
  • Once you start making your way around the bases, there’s only one acceptable way to do it. You can’t stop midway and decide you’re good where you are. There’s no wrong way to eat pizza. You can eat it with a knife and fork, you can fold it in half, and you don’t have to eat it all – I’d bet a third of America doesn’t even eat the crust.
  • When you’re playing baseball, the goal is to defeat the other team. When you’re eating pizza, the goal is to have something you enjoy and that will be satisfying.

I like the pizza metaphor that Vernacchio proposes because it encourages productive inferences that baseball doesn’t. But it’s not perfect. Most pizza is not very good for you – to be consumed only rarely and with a modicum of guilt. Then there’s the problem of Italy – Italians have all the claim to it, and they’re probably still way better at making it than people anywhere else. Plus, what role does the delivery guy play in sex?

These picky ways that pizza and sex aren’t alike aren’t the point. By definition, metaphors align two topics, and there will always be some mappings between them that don’t work. If everything about the two topics were alignable, comparing them would no longer be metaphorical – it would just be talking about two things that are literally the same.

Check your tweets

It’s no secret that the information we share on social media can get us in trouble. You can embarrass yourself, ruin your reputation, and even get arrested using fewer than 140 characters.


Tweets are also reflections of a person’s current state – they shed light on things we find interesting, the events in our lives, and our opinions. In these cases, we’re conscious of the states our tweets reflect. However, our tweets may also be able to predict aspects of our lives that we’re not conscious of at the time of tweet composition, like the rate of heart attacks in the communities we live in.

If you think about it, it’s not that surprising that negative tweets come from places with a greater incidence of cardiac events. The authors crucially point out that it was not the tweeters themselves who were dying, though. One person’s angry tweets did not predict that same person’s later risk of heart attack (though to me this doesn’t seem like too far-fetched a possibility). Instead, the counties that the most negative tweets were coming from were the same ones that had the highest incidence of cardiac events. I don’t think anyone would argue that the angry tweets (coming primarily from young people) were causing high rates of heart attack (in primarily old people). Instead, the correlation probably reflects that good physical and mental health are often associated – both in individuals and at a larger geographical level. So what should we do with this knowledge? Is there anything we can do beyond existing efforts to improve health and wellness in the communities that need it most? What other warning signs are evident in corpora containing millions of tweets and other social media behaviors?
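The distinction between a county-level correlation and a person-level one is easy to see in a toy simulation. Everything below is invented for illustration – none of the numbers come from the actual Twitter study. The idea: a latent “community wellbeing” factor nudges both tweet negativity and cardiac risk in a county, but for any individual, personal noise swamps that shared factor. Averaging over a county washes the noise out, so the county averages correlate strongly even though one resident’s negativity barely predicts another’s risk.

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n_counties, n_per_county = 200, 50
county_neg, county_risk = [], []   # county averages
person_neg, person_risk = [], []   # pairs: one tweeter, one *different* resident

for _ in range(n_counties):
    wellbeing = random.gauss(0, 1)  # latent community health factor
    # Both variables track community wellbeing weakly (coefficient 1)
    # relative to large individual noise (sd 3).
    neg = [-wellbeing + random.gauss(0, 3) for _ in range(n_per_county)]
    risk = [-wellbeing + random.gauss(0, 3) for _ in range(n_per_county)]
    county_neg.append(statistics.mean(neg))
    county_risk.append(statistics.mean(risk))
    person_neg.extend(neg)
    person_risk.extend(risk)

county_r = pearson(county_neg, county_risk)
person_r = pearson(person_neg, person_risk)
print(f"county-level r: {county_r:.2f}")   # strong
print(f"person-level r: {person_r:.2f}")   # much weaker
```

With these made-up parameters, the county-level correlation comes out far larger than the person-level one, which is exactly the pattern that makes ecological correlations so easy to over-interpret.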

I don’t know. I’m about to go tweet about rainbows and daisies though, just in case.

Music makes me lose control

Nautilus, you’ve done it again: an elegant post on two of my favorite topics, music and time. Time and music are inseparable – music takes place over time, and both can be very precise and mathematical. But music also reminds us how subjective time is, which is the theme of Jonathan Berger’s post. The post weaves together connections between music and temporal perception. Here are a few highlights:

  • The tempo of music alters our behaviors – slower music encourages us to slow down and buy more drinks at a bar or spend more time in a grocery store, and familiar background music gives shoppers the impression that they spent longer in a store (though they actually spend more when novel music is played).
  • Our musical attention span is about 4 minutes, thanks to Thomas Edison’s cylinder recordings, which maxed out at 4 minutes.  Even when technology progressed to allow for longer songs, the 4-minute standard remained.
  • When we’re deeply engrossed in something perceptual (like listening to music), the prefrontal cortex, which is crucial for introspecting and high-level cognition, becomes less active than usual, while the sensory cortex becomes more active than usual. These activation patterns likely explain the feeling of flow and timelessness that can occur while listening to music.


In the second half of the post, Berger uses Schubert’s String Quintet to illustrate how “music hijacks our perception of time.” He describes the time warp going on in one section at a time, supporting each with a clip of the audio during the part of the piece he’s describing.

This was a fun “audio tour.” I found that I had to close my eyes to experience the time shifts, though. This could be for a number of reasons, but one interesting possibility is that when a sound clip is embedded in a web page, the bottom right corner of the clip counts down the seconds remaining. Maybe some people can ignore the steadily decreasing numbers, but I am just so drawn to anything marking time. Why might this matter? I’d guess that a large proportion of the music-listening people do today happens through a device (iPod, phone, computer) that exposes the listener to a ticking clock. Do we experience less of this music-induced timelessness today than in the past as a result? Or maybe songs like Time of Our Lives could be to blame?

Thanks to this song for title inspiration.

Butts on fire

English speakers use a lot of butt-on-fire metaphors: we can say someone’s ass is on fire, that he needs to light a fire under his ass, and even the visual of someone flying by the seat of her pants in a chaotic situation conjures an image (for me) of smoking butt. These metaphors all mean different things, but are (appropriately) all descriptions of intense situations (or attempts to intensify a situation, in the case of lighting a fire under someone).

What’s up with this fiery butt obsession? Do other languages share it?

Conceptual metaphor theory (CMT) suggests that we actually understand many concepts and experiences in terms of others. For example, our understanding of time relies on an understanding of space, and the way we think about love is often based on the way we think about a journey. Our language can reflect these conceptual metaphors, as in the deadline is approaching, the best is ahead of us, our relationship is rocky, or referring to an anniversary as a milestone. According to proponents of CMT (George Lakoff is probably the best known), we also think of anger as a heated fluid under pressure. Angry people might blow their top, explode, or have steam coming from their ears. I don’t know whether we really do conceptualize anger as a heated fluid under pressure, but if we do, it’s interesting to think that the heat isn’t confined to escaping from our heads – all orifices seem to be fair game.


P.S. I made the mistake of Googling “ass on fire” while writing this. Bad idea.