Artificial language tests

Languages created by humans (often referred to as “artificial” or “constructed” languages) fascinate me to no end (this is probably the best book I’ve read on the topic). Creating a language is not exactly the most respectable of academic pursuits, since artificial languages never truly catch on and they’ve often been created by zany and radical people, but I think they can teach us a lot about natural languages. Apparently, I’m not alone, since they’re being used as language aptitude tests in a number of contexts. At first, when I heard about this, I was skeptical, but I’m coming around to the idea a bit more now.

One test being used in schools is the Modern Language Aptitude Test (MLAT), which is reported to be a better predictor of a student’s success in learning common foreign languages (such as Spanish) than more exotic ones. The test works by using a made-up language to probe the skills that matter for learning a real one. For example, it tests how well students can distinguish different sounds, form associations between sounds and symbols, and retain those associations; how well they can recognize grammatical functions of words; and whether they can infer grammatical rules from samples of a new language. These do all seem like skills that facilitate learning a new language, so testing them with a novel language seems pretty reasonable.

http://www.theatlantic.com/technology/archive/2013/04/the-first-sat-tested-students-using-a-fake-language/275046/

Another test I looked at was the Oxford Language Aptitude Test. I consider myself pretty adept at learning new languages, but when I started taking the test, I was surprised by how challenging I found it. Here’s the beginning of the test:

I. The following sentences are in This Language (an invented language). Isolate the individual words and work out their meanings. Your analysis should be such that every segment of every sentence is assigned to some word; that is, when a sentence is broken up into words, there should be no residue:

  1. hītiacumyā? ‘Is a cat listening carefully?’
  2. hītisnōsist? ‘Is the little girl listening sleepily?’
  3. myātsnōhīti. ‘The cat is listening sleepily.’
  4. sisacuhīti. ‘A little girl is listening carefully.’

How does one express the following in This Language?:

  1. ‘cat’?_________
  2. ‘little girl’?_________
  3. ‘carefully’?_________
  4. ‘sleepily’?_________
  5. ‘a’?_________
  6. ‘the’?_________
  7. ‘is listening’?_________

By the end of the test, participants need to translate “The boy came home and annoyed the women.” into This Language. This is not child’s play.
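The “no residue” condition in the instructions lends itself to a mechanical check. Here’s a minimal Python sketch that brute-forces segmentations of the four sentences (written as plain letter strings, with diacritics dropped) against a candidate lexicon. The lexicon is my own guess at the puzzle’s answer, not an official solution:

```python
sentences = [
    "hitiacumya",   # 'Is a cat listening carefully?'
    "hitisnosist",  # 'Is the little girl listening sleepily?'
    "myatsnohiti",  # 'The cat is listening sleepily.'
    "sisacuhiti",   # 'A little girl is listening carefully.'
]

# Guessed morphemes: 'hiti' = is listening, 'acu' = carefully,
# 'mya' = cat, 'sno' = sleepily, 'sis' = little girl,
# 't' = the (suffixed); 'a' appears to be unmarked.
lexicon = ["hiti", "acu", "mya", "sno", "sis", "t"]

def segmentations(s, lex):
    """Yield every way to split s into lexicon items with no residue."""
    if not s:
        yield []
        return
    for morpheme in lex:
        if s.startswith(morpheme):
            for rest in segmentations(s[len(morpheme):], lex):
                yield [morpheme] + rest

for s in sentences:
    print(s, "->", list(segmentations(s, lexicon)))
```

With this lexicon, each sentence segments in exactly one way, which is what you’d want from a clean analysis; a wrong lexicon would leave some sentence with residue (no segmentation) or with ambiguous splits.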

I even found out, thanks to this article in The Atlantic, that the SAT used to contain an artificial language section. Students were given a vocabulary of 10 novel words, a list of 6 grammatical rules, and then some sentences to translate. This, apparently, was quite taxing, and as was the case with analogies and antonyms, it was eliminated from the test.

So now for my questions: Why did the SAT get rid of its artificial language section? If English-based artificial language tests are the most accurate predictors for languages similar to English, can we really call them “language aptitude tests”? (The vast majority of the world’s languages have little in common with English.) Can we make tests for aptitude at learning specific languages? How reliable are they?

3 thoughts on “Artificial language tests”

  1. This is very interesting! I spent SO long just staring at the screen trying to analyze the parts of This Language… it’s fascinating. In your opinion, why do you think they removed the artificial language section from their tests? Do you think it’s because of the difficulty? Or because it was challenging to score? Or some other reason?

    1. Trying to decipher This Language certainly does not boost my confidence in my intelligence! My best guess is that it was taken off the test because it wasn’t a good predictor of how a student would fare in college (maybe because it was so hard!). I think this was a similar case with analogies not too long ago: the test makers decided that the skill they tested wasn’t comparable to anything students actually do in college, so the questions just didn’t make sense… Do you have any other guesses? I almost wish they had kept some section of the test aimed at probing students’ understanding of language in general (not just the specific rules of the English language), since most colleges DO make students either take a foreign language or prove that they’re already proficient. Maybe I’m biased being such a linguaphile, but it seems like an important skill!
