Words matter in the Presidential Debate

If there’s one thing this Presidential race and debate have reminded me of, it’s that everything is subjective. A few thoughts on the content of the first 2016 Presidential debate from a linguistically inclined cognitive scientist:

  • America is a piggy bank

    You look at what China is doing to our country in terms of making our product. They are devaluing their currency and there’s nobody in our government to fight them and we have a very good fight and we have a winning fight because they are using our country as a piggy bank to rebuild China and many other countries are doing the same thing. -Donald Trump

    If the US is truly a piggy bank, then China may have to smash us to pieces to get their money out. We should watch out.

  • Trump and Clinton argue over Trump’s statement: You [Clinton] have regulations on top of regulations and new companies cannot form and old companies are going out of business and you want to increase the regulations and make them even worse.

    Clinton: I kind of assumed there would be a lot of these charges and claims and so –
    Trump: Facts.

    What you call a thing matters. Both candidates agree on that.

  • There’s been some innovative language use from both Clinton and Trump.

    Clinton defines her phrase “Trumped up trickle down”:

    And the kind of plan that Donald has put forth would be trickle down economics. It would be the most extreme version, the biggest tax cuts for the top percents of the people in this country that we’ve ever had. I call it trumped up trickle down because that’s exactly what it would be.

    Trump’s new word, bragadocious, needs no formal definition:

    I have a great company and I have tremendous income. I say that not in a bragadocious way but it’s time that this country has somebody running the country who has an idea about money.

  • Oh! Hillary just wrote my conclusion for me: “Words matter, my friends, and if you are running to be President or you are President of the United States, words can have tremendous consequences.”

Recap of ComSciCon-San Diego

This week, ComSciCon – a science communication workshop by grad students for grad students – came to San Diego. Over two days, we enjoyed thought-provoking panels and talks on science communication, touching on questions like these: How do we convey the uncertainty in science without teaching the public to be skeptical of researchers? What do we make of the current “edu-tainment” movement? And what is the role of social media in science communication? Attendees also worked with each other and with invited experts to hone their own work, whether abstracts for an academic paper or a blog post. We ate, we talked, we admired the ocean from the 15th floor, and, luckily, we tweeted. Here’s a collection of some of the tweets that capture the energy of the workshop and highlight many of the impactful moments of ComSciCon-SD.

[Collection of tweets from ComSciCon-SD]

CogSci 2016 Day 3 Personal Highlights

  • There is more to gesture than meets the eye: Visual attention to gesture’s referents cannot account for its facilitative effects during math instruction (Miriam Novack, Elizabeth Wakefield, Eliza Congdon, Steven Franconeri, Susan Goldin-Meadow): Earlier work has shown that gestures can help kids learn math concepts, and this work explores one possible explanation for why this is so: that gestures attract and focus visual attention. To test this, kids watched a video in which someone explained how to do a mathematical equivalence problem (a problem like 5 + 6 + 3 = __ + 3). For some kids, the explainer gestured by pointing to relevant parts of the problem as she explained; for others, she just explained (using the exact same speech as for the gesture-receiving kids). The researchers used eye tracking while the kids watched the videos and found that those who watched the video with gestures looked more at the problem (and less at the speaker) than those who watched the video sans gesture. More importantly, those who watched the gesture video did better on a posttest than those who didn’t. The main caveat was that the kids’ eye-movement patterns did not predict their posttest performance; in other words, looking more at the problem and less at the speaker while learning may have contributed somewhat to understanding the math principle, but not significantly so; other mechanisms must also underlie gesture’s effect on learning.

    But in case you started to think that gestures are a magic learning bullet:

  • Effects of Gesture on Analogical Problem Solving: When the Hands Lead You Astray (Autumn Hostetter, Mareike Wieth, Keith Moreno, Jeffrey Washington): There’s a pretty famous problem that cognitive scientists use to study people’s analogical abilities, referred to as Duncker’s radiation problem: A person has a tumor and needs radiation. A strong beam will destroy the healthy tissue it passes through; a weak beam won’t be strong enough to kill the tumor. What to do? The reason this problem is used as a test of analogical reasoning is that participants are first presented with a different story – an army wants to attack a fortress (which sits at the intersection of a bunch of roads), but mines on the roads leading up to it mean the whole army can’t march down a single road. Yet if they only send a small portion of the army down one road, the attack will be too weak. They solve this by splitting up and all converging on the fortress at the same time. Now can you solve the radiation problem? Even though the solution is analogous (target the tumor with weak rays coming from different directions), people (college undergrads) usually still struggle. It’s a testament to how hard analogical reasoning is.
    But that’s just background leading to the current study, in which the researchers asked: if people gesture while retelling the fortress story, will they have more success on the radiation problem? To test this, they explicitly told one group of participants to gesture, told another group not to gesture, and gave a final group no instructions about gesture at all. They found that the gesturers in fact did worse than the non-gesturers. After analyzing what people actually talked about in the different conditions, they discovered that when people gestured, they tended to talk more about concrete details of the situation – for example, the roads and the fortress – and this focus on the perceptual features of the fortress story inhibited their ability to apply its analogical relations to the radiation case.
    Taking this study together with the previous one, it’s clear that gesture is not all good or all bad; there are lots of nuances of a situation that need to be taken into account and lots of open questions ripe for research.
  • tDCS to premotor cortex changes action verb understanding: Complementary effects of inhibitory and excitatory stimulation (Tom Gijssels, Daniel Casasanto): We know the premotor cortex is involved when we execute actions, and there’s quite a bit of debate about the extent to which it’s involved in using language about actions. The researchers used transcranial direct current stimulation – a method that applies a small electrical current to a targeted area of the brain – over the premotor cortex (PMC) to test for its involvement in processing action verbs (specifically, seeing a word or a non-word and indicating whether it’s a real English word). People who received inhibitory PMC stimulation (which decreases the likelihood of PMC neurons firing) were more accurate in their responses to action verbs, while those who received excitatory PMC stimulation (which increases the likelihood of PMC neurons firing) were less accurate. This at first seems paradoxical – inhibiting the motor area helps performance and exciting it hurts – but there are some potential explanations for this finding. One that seems intriguing to me is that since the PMC is also responsible for motor movements, inhibiting the area helped people suppress the inappropriate motor action (for example, actually grabbing if they read the verb grab), and as a consequence facilitated their performance on the word task; excitatory stimulation over the same area had the opposite effect. Again, this study makes it clear that something cool is going on in the parts of our brain responsible for motor actions when we encounter language about actions… but as always, more research is needed.

  • Tacos for dinner. After three long, stimulating conference days, the veggie tacos at El Vez were so good that they make the conference highlight list.

For every cool project I heard about, there were undoubtedly many more that I didn’t get to see. Luckily, the proceedings are published online, giving us a written record of all the work presented at the conference. Already looking forward to next year’s event in London!

CogSci 2016 Day 2 Personal Highlights

Cool stuff is happening at CogSci 2016 (for some evidence, see yesterday’s highlights; for more evidence, keep reading). Here are some of the things I thought were especially awesome during the second day of the conference:

  • Temporal horizons and decision-making: A big-data approach (Robert Thorstad, Phillip Wolff): We all think about the future, but for some of us, that future tends to be a few hours or days from now, and for others it’s more like months or years. These are our temporal horizons, and someone with a farther temporal horizon thinks (and talks) more about distant future events than someone with a closer temporal horizon. These researchers used over 8 million tweets to find differences in people’s temporal horizons across different states. They found that people in some states tweet more about near-future events than people in others do – that temporal horizons vary from state to state (shown below, right panel). They then asked: if you see farther into the future (metaphorically), do you engage in more future-oriented behaviors (like saving money, at either the individual or the state level) and fewer risky ones (like smoking or driving without a seatbelt)? Indeed, the farther the temporal horizon revealed in a given state’s tweets, the more future-oriented behavior the state demonstrated on the whole (below, left panel).
    [Figure: temporal horizons estimated from tweets, by state (right panel), and their relationship to state-level future-oriented behavior (left panel)]
    The researchers then recruited some participants for a lab experiment, comparing the temporal horizons expressed in people’s tweets with their behavior in a lab task and asking whether those who wrote about events farther in the future displayed a greater willingness to delay gratification – for example, waiting for a larger amount of money later rather than taking a smaller amount today. They also compared the language in people’s tweets with their risk-taking behavior in an online game. They found that the language people generated on Twitter predicted both their willingness to delay gratification (more references to the more distant future were associated with more patience for rewards) and their risk-taking behaviors in the lab (more references to the more distant future were associated with less risk taking). While the findings aren’t earth-shattering – if you think and talk more about the future, you delay gratification more and take fewer risks – this big-data approach using tweets, census information, and lab tasks opens up possibilities for findings that could not have arisen from any of these in isolation. (For a toy sketch of what this kind of tweet-to-behavior pipeline might look like, see the code after this list.)
  • Extended metaphors are very persuasive (Paul Thibodeau, Peace Iyiewuare, Matias Berretta): Anecdotally, an extended metaphor – especially one that an author carries throughout a paragraph, pointing out the various features that the literal concept and the metaphorical idea have in common – persuades me. But this group quantitatively showed the added strength that an extended metaphor has over a reduced (or simple, one-time) or inconsistent metaphor. For example, a baseline metaphor that they used is crime is a beast (vs. crime is a virus). People are given two choices for dealing with the crime: they can increase punitive enforcement solutions (beast-consistent) or get to the root of the issue and heal the town (virus-consistent). In this baseline case, people tend to reason in metaphor-consistent ways. When the metaphor is extended into the options, though (for example, by adding a metaphor-consistent verb like treat or enforce to the choices), the framing has an even stronger effect. When the responses are still metaphor-consistent but the verbs are reversed – so that the virus-consistent verb (treat) is paired with the beast-consistent solution (harsher enforcement) – the metaphor framing effect goes away. A really cool way to test, in a controlled lab setting, the intuition that extended metaphors can be powerful.
  • And, I have to admit, I had a lot of fun sharing my own work and discussing it with people who stopped by my poster – Emotional implications of metaphor: Consequences of metaphor framing for mindsets about hardship [for an abridged, more visual version, with added content – see the poster]. When people face hardships like cancer or depression, we often talk about them in terms of a metaphorical battle – fighting the disease, staying strong. Particularly in the domain of cancer, there’s pushback against that dominant metaphor: does it imply that if someone doesn’t get better, they’re not a good enough fighter? Should they pursue life-prolonging treatments no matter the cost to quality of life? We found that people who read about someone’s cancer or depression in terms of a battle felt that he’d feel more guilty if he didn’t recover than those who read about it as a journey (other than the metaphor, they read the exact same information). Those who read about the journey, on the other hand, felt he’d have a better chance of making peace with his situation than those who read about the battle. When people had a chance to write more about the person’s experience, they tended to perpetuate the metaphor they had read: repeating the same words they had encountered but also expanding on them, using metaphor consistent words that hadn’t been present in the original passage. These findings show some examples of the way that metaphor can affect our emotional inferences and show us how that metaphorical language is perpetuated and expanded as people continue to communicate.
  • But the real treat of the conference was hearing Dedre Gentner’s Rumelhart Prize talk: Why we’re so smart: Analogical processing and relational representation. In the talk, Dedre offered snippets of the work that she and her collaborators have carried out over the course of her productive career to better understand relational learning. Relational learning is anything involving relations – something as simple as Mary gave Fido to John or as complex as how global warming works. Her overarching message was that relational learning and reasoning are central to higher-order cognition, but relational insights are not easy to acquire. In order to achieve relational learning, people must engage in a structure-mapping process, connecting like features of the two concepts. For example, when learning about electrical circuits, students might use an analogy to water flowing through pipes, and would then map the similarities – the water is like the electricity, for example – to understand the relation. My favorite portion of the talk was about the relationship between language and structure-mapping: language (especially relational language) can support the structure-mapping process, which can in turn support language. The title of her talk promised we would learn why humans are so smart, and she delivered on that promise with the claim that “Our exceptional cognitive powers stem from combining analogical ability with language.” Many studies of the human mind and behavior highlight the surprising ways that our brains fail, so it was fun to hear and think instead about the important ways that they don’t fail – about “why we’re so smart.”
  • And finally, the talk I wish I had seen because the paper is great: Reading shapes the mental timeline but not the mental number line (Benjamin Pitt, Daniel Casasanto). By having people read either backwards (mirror-reading) or normally, the researchers found that the mental timeline was disrupted: people who read from right to left instead of the usual left to right showed an attenuated left-to-right mental timeline compared to those who read normally. This part replicates prior work, and they built on it by comparing the effects of these same reading conditions on people’s mental number lines. This time, backwards reading did not influence the mental number line the way it had weakened people’s tendency to think of time as flowing from left to right, suggesting that while reading direction plays a role in the development of mental timelines that flow from left to right, it does not have the same influence on our mental number lines; these must instead arise from other sources.
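
For readers who want a more concrete picture of the big-data pipeline in the temporal-horizons work above, here is a minimal, purely illustrative sketch (not the authors’ code). It assumes a crude keyword heuristic for spotting future-time references, a handful of made-up tweets, and a made-up state-level index of future-oriented behavior; the real analysis relied on millions of tweets and far more sophisticated text processing, but the overall shape – estimate a temporal horizon per state, then relate it to behavior – is the same.

```python
# Toy sketch of a temporal-horizon pipeline (illustrative only; not the authors' code).
# All tweets, keyword mappings, and behavior scores below are made up.
from collections import defaultdict
from statistics import mean

# (state, tweet) pairs standing in for millions of real geotagged tweets.
tweets = [
    ("CA", "Saving up so I can buy a house next year."),
    ("CA", "Booking flights for next month already."),
    ("TX", "Can't wait for dinner tonight!"),
    ("TX", "What should I do later today?"),
    ("NY", "Counting down to the concert next month."),
    ("NY", "Big meeting tomorrow morning."),
]

# Crude mapping from future-time expressions to a horizon in days.
HORIZON_DAYS = {
    "today": 0, "tonight": 0, "later": 0, "tomorrow": 1,
    "next week": 7, "next month": 30, "next year": 365,
}

def tweet_horizon(text):
    """Return the farthest future reference (in days) found in a tweet, or None."""
    matches = [days for phrase, days in HORIZON_DAYS.items() if phrase in text.lower()]
    return max(matches) if matches else None

def pearson_r(xs, ys):
    """Plain Pearson correlation (standard library only)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Aggregate tweets into a mean temporal horizon per state.
horizons_by_state = defaultdict(list)
for state, text in tweets:
    horizon = tweet_horizon(text)
    if horizon is not None:
        horizons_by_state[state].append(horizon)
state_horizon = {state: mean(hs) for state, hs in horizons_by_state.items()}

# Hypothetical state-level index of future-oriented behavior (e.g., a savings rate).
behavior_index = {"CA": 0.8, "NY": 0.6, "TX": 0.3}

# Relate the two state-level measures.
states = sorted(state_horizon)
r = pearson_r([state_horizon[s] for s in states], [behavior_index[s] for s in states])
print(f"Correlation between tweet horizon and future-oriented behavior: r = {r:.2f}")
```

At the scale of millions of tweets, the horizon estimates would of course come from real time-expression parsing rather than a keyword list, and the behavior index from census and survey data, as described in the talk.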

One more day to absorb and share exciting research in cognitive science – more highlights to be posted soon!

CogSci 2016 Day 1 Personal Highlights

I stepped out of the airport Wednesday night and my glasses fogged up. Ah, what a reminder of the world that awaits outside southern California, where I’m immersed in my PhD work. I had arrived in Philadelphia for CogSci 2016 to be bombarded by fascinating new work on the mind and behavior and the clever researchers responsible for it.

With 9 simultaneous talks at any time and over 150 posters on display during each poster session, I of course only got to learn about a fraction of all that was there. Nonetheless, here are some projects that are still on my mind after day 1:

  • Cognitive biases and social coordination in the emergence of temporal language (Tessa Verhoef, Esther Walker, Tyler Marghetis): Across languages, people use spatial language to talk about time (e.g., looking forward to a meeting, or reflecting back on the past). How does this practice come about? To investigate language evolution on a much faster time scale than occurs in the wild, this team had pairs of participants use a vertical tool (I believe the official term was bubble bar; see below) to create a communication system for time concepts like yesterday and next year. The pairs were in separate rooms, so this new communication system was their only way of communicating. Each successive pair inherited the previous pair’s system, allowing the researchers to observe the evolution of the bubble bar communication system for temporal concepts. Over the generations, participants became more accurate at guessing the term their partner was communicating (as the bubble bar language was honed), and systematic mappings between space and time emerged; that is, although different chains ended up with pretty different systems, within a single chain people tended to use the top part of the bar to indicate the same types of concepts (i.e., past or future), and they used systematic motions (for example, small rapid oscillations for relatively close times like tomorrow and yesterday, and larger, slower oscillations for more temporally distant concepts).

    [Figure: the bubble bar]

  • Deconstructing “tomorrow”: How children learn the semantics of time (Katharine Tillman, Tyler Marghetis, David Barner, Mahesh Srinivasan): This team had children of varying ages place time points (like yesterday and last week) on a timeline. They analyzed different features of the kids’ timelines to investigate at what age kids begin to understand three different concepts of time in the way adults do. The first was whether a time is in the past or future relative to now (did kids place it to the left or right of the now mark on the timeline?). The second aspect they looked at was whether kids understand sequences of different times – for example, that last week comes before (to the left of, on a timeline) yesterday (regardless of where those events were placed relative to now). Finally, they compared the way kids’ timelines showed remoteness – how temporally distant different events are from now – to how adults showed the same concept. Adults, for example, will place tomorrow quite close to the now mark and next year much farther away. They found that kids acquired an adult-like sense of remoteness much later than the first two concepts: deictic (past vs. future) and sequence knowledge reliably emerged in kids by 4 years old, but knowledge of remoteness wasn’t present until much later – after 7 years old. These data are an indication that while kids can pick up a lot of information about what different time words mean from the language they encounter, they may need formal education in order to really grasp that tomorrow is much closer to today than last year was.
  • Gesture reveals spatial analogies during complex relational reasoning (Kensy Cooperrider, Dedre Gentner, Susan Goldin-Meadow): After reading about positive feedback systems (i.e., an increase in A leads to an increase in B, which leads to a further increase in A…) and negative feedback systems (an increase in A leads to an increase in B, which leads to a decrease in A), participants had to explain these complicated concepts. Even though the material that people read had almost no spatial language, spatial gestures were extremely common during their explanations (often occurring without any accompanying spatial language in speech). These gestures often built off each other, acting as a way to show relational information through space, and they suggest that people invoke spatial analogies in order to reason about complex relational concepts.

    [Figure: sample gestures showing (from left to right) a factor reference, a change in a factor, a causal relation, and a whole-system explanation]

  • Environmental orientation affects emotional expression identification (Stephen Flusberg, Derek Shapiro, Kevin Collister, Paul Thibodeau): Past work has shown us that we not only talk about emotions using spatial metaphors (for example, I’m feeling down today, or your call lifted me up), but also invoke these same aspects of space to think about emotions. In the first experiment, the researchers found that people were faster to say that a face was happy when it was oriented upwards and that it was sad when it was oriented downwards (both of which are congruent with the metaphor) than in the incongruent cases. Then, to differentiate between egocentric (up or down with respect to the viewer’s body) and environmental (up or down with respect to the world) reference frames, people completed the same face classification task while lying on their sides. This time, participants only showed the metaphor-consistent effect (faster to say happy when faces were oriented up and sad when faces were oriented down) when the face was oriented with respect to the world – not when the orientation was with respect to their own bodies. This talk won my surprising-finding award for the day, since researchers often explain our association between emotion and vertical space as originating in our bodily experiences: we physically droop when we’re sad and stand taller when we’re happy. That explanation isn’t consistent with what these researchers found, though; their results suggest that people’s association between vertical space and emotion is critically an association with vertical space in the environment, not relative to their own bodies.
  • Context, but not proficiency, moderates the effects of metaphor framing: A case study in India (Paul Thibodeau, Daye Lee, Stephen Flusberg): People use metaphors they encounter to reason about complex issues. For example, when a crime problem is framed as a beast, they think that the town should take a more punitive approach to dealing with it than when that same problem is framed as a virus. What if you encounter this metaphor in English, but English isn’t your native language – does the metaphor frame influence your reasoning less than it would influence a native English speaker’s? Participants from India (none of whom were native English speakers) read the metaphor frames embedded in contexts and reasoned about the issues that were framed metaphorically. Overall, people reasoned in metaphor-consistent ways (i.e., saying that crime should be dealt with more punitively after it was framed as a beast than as a virus). Their self-reported proficiency in English did not affect the degree to which they were influenced by the metaphor; people who were more fluent in English were not more swayed by the frames. However, the context in which they typically spoke English did play a role: those who reported using English mostly in informal contexts, such as with friends and family and through the media, were more influenced by the frames than those who reported using English more in formal contexts, like educational and professional settings. These experiments don’t explain why those who use English more in informal settings were more swayed by metaphorical frames than those who use the language more in formal settings, but they open the door for some cool future research possibilities.

Check back for highlights from days 2 and 3!

For the love of language: Guest post

I met Emi Karydes as she was beginning her last year as an undergrad at UCSD. I knew she wanted to be involved in linguistic research; although my work is really more about cognition, I convinced her that language was my first love and sits at the center of my cognitive science research, and she became a research assistant. She taught me cool things about American Sign Language, made me laugh, and was tremendously helpful with whatever project I threw at her. A couple of months after graduation, Emi reflects on her relationship with language – past, present, and future:

I started my freshman year at UCSD with an interest in everything, but no idea what I wanted to focus on. I had narrowed down my options to “something in the Arts/Humanities.” Then I took LING7, the Linguistics of American Sign Language, and I fell in love. (I am still a bit upset that it took me so long to learn about linguistics, but that is an issue for another day.) I don’t know how I feel about the idea of predestination or fate, but it certainly feels like I was always meant to be a linguist. The study of language touches on so many different aspects of life, from communication, to culture, to technology, to art, just to name a few, that it was the perfect major for someone who wanted to study everything.

Language is something that most people are fortunate enough to take for granted, so when you take a step back and analyze how and why language works it can be mind-boggling. I remember sitting in Phonology freaking out about the fact that as we are talking about the different sounds on the IPA chart, we’re producing them. It is impossible to study linguistics without using language, which I will admit has led me to start speaking and just not stop because I get distracted by how my vocal tract produces the different phonemes. Lots of fun for the people stuck listening to me listening to myself, I’m sure. But my point is that there are so many amazing things happening in your brain and your body allowing you to communicate almost effortlessly, and we aren’t even consciously aware of it most of the time. Language is as close to magic as I’ve been able to find concrete proof for, and I love it.

“So, what exactly are you planning to DO with a linguistics major?” Honestly, whatever you want. Don’t let this question scare you away from a Linguistics major. Since language is so ingrained in our everyday lives, linguistics can be applied to almost anything. Scratching the surface, there is speech pathology, or computational linguistics, or language construction, or gathering data on a language that is nearing extinction, or research into any of a number of unanswered questions. I graduated this year with a degree in General Linguistics and am taking a year off to relocate from San Diego to Portland, but I am really looking forward to applying to grad schools and furthering my study of linguistics. I genuinely feel that what I’ve learned these last four years has helped me grow as a person, expanding my personal perspective and giving me a new method by which to think about the world as a community. So if you are going into this year with no idea what to study, try linguistics. It might just change your life.

P.S. Emi has some other talents you might want to check out.

The Time Keeper: Review and Reflection

Try to imagine a life without timekeeping.
You probably can’t. You know the month, the year, the day of the week. There is a clock on your wall or the dashboard of your car. You have a schedule, a calendar, a time for dinner or a movie.
Yet all around you, timekeeping is ignored. Birds are not late. A dog does not check its watch. Deer do not fret over passing birthdays.
Man alone measures time.
Man alone chimes the hour.
And, because of this, man alone suffers a paralyzing fear that no other creature endures.
A fear of time running out.

How do our minds make sense of vastly complex concepts like time? It’s a perennial cognitive science question and one taken up by Mitch Albom in his novel The Time Keeper. The book is about an ancient man named Dor, the inventor of the first clock and a timekeeping hobbyist. As a punishment for trying to measure time, Dor is sent to a cave for 6,000 years. While in the cave, he hears voices from people all over earth, constantly asking for more time. He experiences intense loneliness, and quickly realizes that the immortality he’s received is no gift. When he’s allowed out of the cave, he’s given an hourglass that lets him selectively slow time to a near halt and the task of teaching two people what he’s learned about time.

[Image: cover of The Time Keeper]

One of these people wants too much time. This is Victor Delamonte, the fourteenth-richest man in the world, who is dying of cancer. Victor decides that he will have his body cryogenically frozen, to be rejuvenated and cured once medicine has advanced enough. Victor wants to live forever.

The other character Dor is sent to help wants too little time. Sarah Lemon is a high school senior who has been humiliated and cast off by a boy she mistakenly believed to be her boyfriend. Sarah wants to die.

Both Victor and Sarah cross paths with Dor in modern New York City in the watch shop where Dor now works. Victor decides he will be frozen before he’s officially dead to increase his chances of success, and Sarah decides she will kill herself. Moments before they follow through with their radical and opposite actions, Dor slows time to bring them together and teach them what he’s learned about time: “‘Man wants to own his existence. But no one owns time… When you are measuring time, you are not living it.'”

We treat time as a thing. My Google calendar may as well be my homepage. The rare room lacking a clock feels like a prison. We take ownership of our time when we capture it in photographs, sign contracts for work we will complete, and invest our money for the future. We talk about wasting or saving time just the same way we talk about wasting or saving food. Albom reminds us that despite our language, cultural practices, and technological innovations, despite the fact that we can measure and quantify time in amazingly precise and meticulous ways, we do not control time. As Dor was told at the beginning of his sentence in the cave, “‘The length of your days does not belong to you.'”