Faith in science

We have a bit of a science problem in America. For some reason, our students aren’t learning science very well, or at least not as well as students in many other countries. Most people seem to acknowledge the issue and advocate for improving our education.

And if students aren’t learning it, adults, even those who are generally educated and motivated, probably have some conceptual gaps too (for example, when asked whether the earth orbits the sun or the sun orbits the earth – a question people would have a 50% chance of getting right by guessing blindly! – only 74% of Americans correctly reported that the earth orbits the sun). Our widespread knowledge gaps are Problem Number 1.

A related problem is that many people tend to distrust science. For one, science is not always right on the first try (eggs are bad for your cholesterol! No wait, eggs are good for you!). Relatedly, some people do crappy science (Problem Number 2), and other times good science gets reported badly (Problem Number 3). Both problems were on display in the recent chocolate hoax: a “study” that recruited a very small number of participants, gave half of them chocolate, measured a ton of outcomes so that a few would come out statistically significant by chance, and published the results in a phony journal. The hoax demonstrated both bad science and exaggerated, sensational reporting, and people who were initially fooled into believing that chocolate is the key to weight loss probably feel duped – rightfully so.
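To make the trick concrete, here’s a toy simulation in Python – my own illustration, not the hoaxers’ actual procedure, and every number in it is invented:

```python
# Toy demonstration of the chocolate hoax's statistical trick: with a
# tiny sample and many outcome measures, pure noise will cross p < .05
# somewhere. All numbers here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group = 8    # deliberately tiny groups
n_outcomes = 18    # weight, cholesterol, sleep quality, ...

# Chocolate and control groups drawn from the SAME distribution,
# so any "effect" we detect is a false positive by construction.
chocolate = rng.normal(size=(n_per_group, n_outcomes))
control = rng.normal(size=(n_per_group, n_outcomes))

significant = [
    j for j in range(n_outcomes)
    if stats.ttest_ind(chocolate[:, j], control[:, j]).pvalue < 0.05
]
print(f"outcomes 'significant' by chance alone: {significant}")
# With 18 independent tests at alpha = .05, the chance of at least one
# false positive is 1 - 0.95**18, or roughly 60%.
```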

Problems 1-3 are a recipe for societal skepticism about science. It’s really difficult to evaluate science even when you’re being trained as a scientist, let alone if all of your training is in an entirely different field. Science can easily seem foreign, unrelatable, and unreliable. Who has the power to do something about this? Prominent scientists could maybe help sway the public’s opinion, but we might need a revolution in our cultural ethos towards science, and old people are rarely behind revolutions. What about wide-eyed and idealistic science grad students?

In a few hours, I’ll be on a plane heading to ComSciCon, a workshop on communicating science for grad students. The goal is to help us become better at communicating our own science as well as other people’s science – hopefully a step towards society’s impending science ethos revolution.

Are memories just pasta?

I just read a really fun description of memories in a Nautilus post: The pasta theory of memory & your personal beginning of time. It’s a post on childhood amnesia, the frustrating phenomenon that we just don’t remember much from the earliest part of our lives.

The piece is written by Dana Mackenzie, but the rich title inspiration comes from Patricia Bauer, an Emory University psychologist he interviewed. Here’s how Bauer describes children’s memory:

“I compare memory to a colander,” Bauer says. “If you’re cooking fettucine, the pasta stays in. But if you’re cooking orzo, it goes right through the holes. The immature brain is a lot like a colander with big holes, and the little memories are like the orzo. As you get older, you’re either getting bigger pasta or a net with smaller holes.”

Why do I like this metaphor? It paints a nice picture of what happens. Kids still make memories, but those memories tend to escape. Older people’s memories are more likely to be contained by the colander brain.

This metaphor is compelling, but is it the best thing since sliced bread? Pasta easily trumps bread in my carb hierarchy, but what about in the context of describing memory? The metaphor demonstrates that children retain fewer memories than adults (which we probably don’t need much convincing of), but it doesn’t tell us why. Why are children’s memories orzo-like, and how do they become fettucine-like over time? There’s a lot about this process that scientists still don’t know, but the metaphor also can’t capture the things they do know. For example, as Mackenzie acknowledges in the piece, when we retell a memory, we increase our chances of remembering that event later (though retelling memories also introduces inaccuracies that seem to grow the more we retell…). A similar issue is that our brains are constantly changing, and a large part of the reason that kids don’t remember as much as adults do stems from that dynamic property. But colanders don’t change as they age, so the pasta metaphor obscures the fact that the massive changes taking place in our brains underlie many of the memory differences throughout our lives.

Metaphors are selective – they play up certain features of the two things they’re comparing and downplay others. It’s probably not possible to accurately capture every important aspect of a phenomenon like childhood amnesia in one metaphor. And that’s ok, because metaphors can be supplemented by other information. But metaphors don’t only leave out relevant details; they can also mislead, as I think the static colander has the potential to do. Maybe the best way, then, to communicate the complexity of childhood amnesia is to remind ourselves (and those we’re communicating with) that although some features of children’s forgetting map onto orzo pasta well, other features, like the colander, fall short – at least until we design one that develops in a brain-like way over the course of its lifespan.

An exercise in wrongness

Wrongness isn’t a word, you say? Then I’m off to a great start. (It is, though).

My department makes a pretty big deal of our second year projects. We don’t have any qualifying exams, just an oral presentation and paper. We’re still 4 long weeks away from presenting these projects, but there have already been plenty of eye-opening moments for me to write about. For many of us, this is the first time we’ve done a project of this nature and magnitude from start to “finish” (are these projects ever really over?) largely independently, which means plenty of opportunities for surprise mistakes.

Going back to last summer when I started running the experiments that will be included in my project, I screwed up plenty of things. My sloppy programming meant that the experiment crashed sometimes. Other times, I failed to communicate important details to the research assistants running the experiment, and we had to trash the data. It turned out that the data collection was actually the phase of the project in which I made the fewest mistakes, though. The process of analyzing the data was a cycle of mistakes and inefficiencies that were usually followed up by more mistakes and inefficiencies. Every once in a while, I’d do something useful, and that was enough to keep me going.

Sometimes, I’ve gotten annoyed at myself for making these mistakes, especially when deadlines are approaching or when my advisor has to be the one to point them out to me. I’ve been frustrated by the messiness of the data (though logically I know that I should probably be skeptical if my data weren’t messy), and all those things I should have done differently continue to come to mind and nag at me.

"Piled Higher and Deeper" by Jorge Cham www.phdcomics.com

“Piled Higher and Deeper” by Jorge Cham
http://www.phdcomics.com

Luckily, I’m pretty sure I’m not alone. A handful of older grad students have told me about their second year project mistakes, and mine start to look like par for the course.

And then I discovered a Nautilus interview with physicist David Deutsch. It’s a pretty philosophical interview on the importance of fallibility, but the takeaway is that we should embrace the ability to be wrong, because the very fact that we’re error-prone means it’s also possible to be right. He points out that so often in science, people prove wrong things that had been assumed for many years to be truths.

What makes progress possible is not whether one is right or wrong, but how one deals with ideas. And it doesn’t matter how wrong one is. Because there’s unlimited progress possible, it must mean that our state of knowledge at any one time hasn’t even scratched the surface yet. [As the philosopher Karl Popper said], “We’re all alike in our infinite ignorance.”

This interview lifted a lot of weight off my second-year grad student shoulders. I’ve made lots of mistakes throughout the process of putting together this project (and I’m pretty confident I’m not finished making them), and therefore, there is such a thing as doing the work correctly. In the end, the p-values that I find when I analyze my data aren’t really the important part (though, unfortunately, they’re what will determine if and where the work gets published…). Instead, it’s a reminder to focus on the ideas – the ones the work was based on and the ones it opens up – and embrace the wrongness.

Is there a link between intelligence and worrying?

A Slate post I read this week – Scary smart: do intelligent people worry more? – left me feeling uneasy. It’s not because I like to think of myself as smart and therefore discovered that I might worry more than the average person (I figured that one out in second grade, I think). The uneasiness came from feeling that the author’s overall story was not the one his reported data actually support.

The post started by discussing a study by a group at Lakehead University in Ontario, in which students completed a survey with items like “I am always worrying about something” along with a verbal intelligence test. The two scores were positively correlated: people who reported worrying more also scored higher on the verbal test.
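For readers curious what “positively correlated” means in practice, here’s a minimal sketch of that kind of analysis; the scores below are invented stand-ins, not the Lakehead data:

```python
# Minimal sketch of a correlational analysis like the one described
# above. The scores are invented stand-ins, not the study's data.
from scipy import stats

worry_scores = [42, 55, 38, 61, 47, 70, 33, 58]        # worry-questionnaire totals
verbal_scores = [98, 112, 95, 118, 104, 125, 90, 110]  # verbal-test scores

r, p = stats.pearsonr(worry_scores, verbal_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
# A positive r means higher worry scores tend to accompany higher verbal
# scores in this sample -- a correlation, not evidence of causation.
```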

Next the author reports an amusing study conducted by researchers at the Interdisciplinary Center Herzliya in Tel Aviv, in which participants thought their task was to assess artwork on a computer. While they were doing this, the computer informed them that they had just activated a virus, and the research assistant running the experiment frantically asked them to go get help. As they went to get help, more stress was thrown at them – someone in the hall asked them to answer a survey, another person dropped a stack of papers at their feet… The researchers found that the people who scored highest on an anxiety measure were the least distracted by these additional stressors from their mission to find computer help. The Slate article reports, “Nervous Nellies proved more alert and effective.” I’m not sure I would come to the same conclusion, since in some cases, staying fixed on one goal when other important things arise might not actually be a good trait. Regardless, it’s hardly a sign of intelligence, so I’m still not sure why that research was included in this piece. These same researchers have also shown that people who are higher in anxiety detect threats like smoke more quickly than others. Again, this might not always be a good thing (is it really beneficial to smell your neighbor’s BBQ and get distracted from the task at hand?), and even in cases where hypervigilance is helpful, it’s not how most of us define intelligence.

Image from the original Slate article

There are a few other examples that seem consistent with the idea that higher intelligence (defined in a variety of ways) is associated with more worrying (also defined in a variety of ways). But then there’s evidence that the positive relationship between intelligence and worrying might not be so clear-cut. For example, although IQ and anxiety seem to be positively correlated among people diagnosed with generalized anxiety disorder, the reverse was true in a control group: higher IQs were associated with less worrying.

But not much attention is paid to the contradictory evidence. The author writes “Still, the suspicion persists that a tendency to be twitchy just might bequeath a mental advantage.” He lists famous people who have been considered intelligent and have had anxiety – Nikola Tesla, Charles Darwin, Kurt Gödel, and Abraham Lincoln. While this is interesting, it’s not evidence – he conveniently forgot to list the many geniuses who didn’t have excessive anxiety and the many anxious people who don’t have exceptional intelligence.

Kudos to the article for presenting two sides, one step in the direction of showing that the question is not cut-and-dried (so few are, especially in psychology). But what should a reader think after this? That psychologists are wasting time and money running experiments that contradict each other, and that we might never know which ones to believe? (If I weren’t a grad student in a related field, that’s what I’d take away.)

Instead, readers should understand that “worrying” is complicated. Maybe, just maybe… “worrying” is not all one thing. We might use the same word to describe it, but that doesn’t mean it’s really just one concept. There’s rumination about things you’ve done; worry about far-off future events; fear of public speaking, being mugged, or spiders; jitteriness; pessimism; and many other flavors of worrying. A person who engages in one type might often engage in others, but that’s not necessarily the case. And since there’s no absolute way to measure “worry,” researchers have to operationalize – they have to create a working definition of worrying, something measurable that they take to reflect it. We shouldn’t expect that a finding based, for example, on a generalized anxiety questionnaire will apply to all types of people. Further, these studies test people in different contexts and places (many cultural characteristics can affect performance on the measures researchers use to reflect worry) and are run by different researchers (even subtle differences in mannerisms, experiment design, or environmental controls could affect the results).

We need to be careful how much we generalize. Instead of concluding from the study that correlated verbal test scores with anxiety inventory scores that intelligent people worry more, it might be more accurate to say something like: college students in Ontario who have high verbal scores (according to one particular test) also have high anxiety scores (according to one particular inventory).

Granted, if all these caveats were heaped on readers, they’d probably become disillusioned with research and maybe stop reading catchy articles in the popular press like this one, and that’s not the goal either. It’s just important to point out that not all DVs (dependent variables) are created equal, and that single experiments, especially on abstract traits like “worrying,” shouldn’t be recklessly generalized.

Overgeneralization is a problem in psychology, probably because flashy conclusions are much more interesting to non-psychologists, and popular press writers’ goal is to engage their audience. I think our goal should be engaging people in a way that doesn’t overgeneralize, though. Is society really becoming more scientifically literate if people are reading articles about science but misunderstanding the implications of that science? I have higher hopes for improving scientific literacy. I think we can engage people, tell them about exciting and controversial findings, and help them think critically so that they generalize when it’s appropriate and take things with a grain of salt when that’s appropriate. We can have our scientific cake and eat it too, as long as we remember that that’s the goal of science communication.

Here’s one effective way to communicate science

Science is very cool. But the way it’s often taught – seemingly arbitrary facts to be memorized or lab procedures to be blindly followed – is less cool. It’s not too surprising that many people decide at some point during their education that science is not for them. Not only do they forgo scientific careers (which is fine – variety is important), but they avoid science in all forms. They skip the science section of newspapers and blogs, comment on the uncharacteristically dry and warm winter without questioning its causes or consequences, and take medications that they’re prescribed without researching the condition they’re being treated for or alternative treatments. In many cases, science that’s relevant to everyday life flies under the radar and people don’t even notice it; in others, they read a sensational headline and run with it or post photos of a seemingly magical dress on all their social media accounts.

And who can blame people for feeling like pursuing scientific information is a waste of time? If their science education brings up painful or boring memories and the rare scientific writing that they do engage with may as well have been written in another language, non-scientists are not going to seek out science in their lives. Exciting more students about science is one way to avoid societal scientific ignorance, but another is to improve the quality of science communication. Efforts to do so are widespread (for example, this summer I’ll be attending a workshop, ComSciCon, whose goal is to improve communication between scientists and their readers), but we still have a lot of work to do.

Nautilus has become one of my favorite sources for science news. At its core, it’s a science blog, but it’s very different from any other science blog I’ve encountered. For one, each issue has a theme, like the current one – Dominoes (subtitle: one thing leads to another). The pieces within an issue all relate to the theme but come from seemingly unconnected domains, resulting in a surprising web of connections among ideas you’ve probably never thought about together (or in isolation, as is often the case for me). Nautilus is also different because there’s a clear effort to present each post in the format that works best for it. I recently wrote about the cool experience of learning how music hijacks our perception of time through an audio tour consisting of clips and annotations.


A recent post about an interview with Helen Fisher, a prominent sex/love/relationship researcher and communicator, also provided a non-traditional reading experience. The post embodied many goals of science communication. It opened by describing the experience of the interview – an interesting comment Fisher made and the actual apartment where the interview took place. Then, once we could picture the environment the dialogue took place in, the author told us why we should care: Fisher makes some provocative claims, such as suggesting that an increase in casual sex has caused our divorce rate to stop increasing – casual sex might lead to long-term marital happiness. The rest of the interview is presented in transcript form, but a video of the interview is the main draw. It’s not posted as one chunk, as most videos are. It always bothers me that I don’t get to experience an online video at my own pace the way I experience written materials. The Nautilus interview eliminates this bother by posting Fisher’s responses as individual mini movies linked to the questions she’s answering. Thanks to this format, readers can preview the questions, skip the ones they find less interesting, and listen to the rest in any order they want (all of which I did). This solution is fairly low on complexity, but high on genius. More, please!

How should we talk about sex? Ditch the baseball

Whenever we talk about abstract or complex topics (and even when we talk about things that are neither abstract nor complex), we can almost guarantee that metaphors will take center stage. This is especially true when it’s not appropriate to focus on certain aspects of a situation or when we want to lighten a mood: talking about how our neighbor croaked or kicked the bucket is less morbid than saying that he died. Depending on context, we’re more likely to hear that someone lost his lunch instead of what he really did, which was blow chunks. Considering that sex is awkward or inappropriate to talk about in many circumstances (and it would be crazy to suggest that we just avoid the topic in the first place!), it comes as little surprise that sexual discourse is highly metaphorical.

In this TED talk, Al Vernacchio challenges the predominant American metaphor for sex: baseball. There are lots of ways baseball talk is used to talk about sex, but the most common might be using the bases to refer to different sexual acts. You have to do them in order, just like you have to run the bases in order in a ball game, and hitting a home run is supposed to be a cause for celebration in both contexts. The metaphor extends beyond the bases, though. You can be a pitcher, a catcher, or a benchwarmer. The game involves a bat, a nappy dugout, and a catcher’s mitt. You can be a switch-hitter or just flat-out play for the other team.

Baseball isn’t a terrible metaphor for sex. Even if you’ve never heard of some of those metaphors (I hadn’t), if you know what generally goes down during a baseball game and you know what generally goes down during sex, you can probably figure out many of the mappings. In fact, that’s part of what makes a metaphor good – there are common relations between the two domains. Their commonalities are highlighted and their differences are ignored. But metaphors, especially ones like this with many mappings between the two topics, aren’t just ways of talking. They’re ways of thinking. Vernacchio argues that if we want to foster better views of sex in our society, maybe we need to ditch the baseball metaphor. He proposes that we use a pizza metaphor instead. Here are the different inferences the two might encourage:

  • You play baseball when it’s baseball season and a game is scheduled. You eat pizza when you’re hungry for pizza. When should you have sex?
  • There is no baseball game if there aren’t opposing teams. When you’re getting a pizza, you (hopefully) ask the others who will enjoy it with you what toppings they want. You’re all on the same pizza-ordering team!
  • Once you start making your way around the bases, there’s only one acceptable way to do it. You can’t stop midway and decide you’re good where you are. There’s no wrong way to eat pizza. You can eat it with a knife and fork, you can fold it in half, and you don’t have to eat it all – I’d bet a third of America doesn’t even eat the crust.
  • When you’re playing baseball, the goal is to defeat the other team. When you’re eating pizza, the goal is to have something you enjoy and that will be satisfying.

I like the pizza metaphor that Vernacchio proposes because it encourages productive inferences that baseball doesn’t. But it’s not perfect. Most pizza is not very good for you – to be consumed only rarely and with a modicum of guilt. Then there’s the problem of Italy – Italians have all the claim to it, and they’re probably still way better at making it than people anywhere else. Plus, what role does the delivery guy play in sex?

But these picky ways that pizza and sex differ aren’t the point. By definition, metaphors align two topics, and there will always be some mappings between them that don’t work. If everything about the two topics were alignable, comparing them would no longer be metaphorical – it would just be talking about two things that are literally the same.

Check your tweets

It’s no secret that the information we share on social media can get us in trouble. You can embarrass yourself, ruin your reputation, and even get arrested using fewer than 140 characters.


Tweets are also reflections of a person’s current state – they shed light on things we find interesting, the events in our lives, and our opinions. In these cases, we’re conscious of the states our tweets reflect. However, our tweets may also be able to predict aspects of our lives that we’re not conscious of when we compose them, like the rate of heart attacks in the communities we live in.

If you think about it, it’s not that surprising that negative tweets come from places with a greater incidence of cardiac events. Crucially, the authors of that study point out that it was not the tweeters themselves who were dying. One person’s angry tweets did not predict that same person’s later risk of heart attack (though to me this doesn’t seem like too far-fetched a possibility). Instead, the counties that the most negative tweets were coming from were the same ones that had the highest incidence of cardiac events. I don’t think anyone would argue that the angry tweets (coming primarily from young people) were causing high rates of heart attack (in primarily old people). Instead, the correlation probably reflects that good physical and mental health often go together – both in individuals and at a larger geographical level. So what should we do with this knowledge? Is there anything we can do beyond existing efforts to improve health and wellness in the communities that need it most? What other warning signs are evident in corpora containing millions of tweets and other social media behaviors?
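For the code-inclined, here’s a minimal sketch of what a county-level analysis like this can look like; the file and column names (tweets_scored.csv, county_heart_disease.csv, negativity, ahd_rate) are hypothetical placeholders, not the study’s materials:

```python
# Sketch of a county-level correlation: average a per-tweet negativity
# score within each county, then correlate the county averages with
# heart-disease rates. File and column names are hypothetical.
import pandas as pd

tweets = pd.read_csv("tweets_scored.csv")         # county_fips, negativity
health = pd.read_csv("county_heart_disease.csv")  # county_fips, ahd_rate

# One negativity score per county: the mean across that county's tweets.
county_neg = tweets.groupby("county_fips", as_index=False)["negativity"].mean()

merged = health.merge(county_neg, on="county_fips")
print(merged["negativity"].corr(merged["ahd_rate"]))
# The unit of analysis is the county, not the person: a positive
# correlation here says nothing about whether the angry tweeters
# themselves are the ones at risk.
```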

I don’t know. I’m about to go tweet about rainbows and daisies though, just in case.