A B.S.-generating computer program makes a splash

The Turing test is a foundational concept in cognitive science. The idea behind it is that if we want to show that a machine can think, it must be able to convincingly pass as a human. A team of researchers from MIT and Harvard has added an element of recursion to the traditional Turing test: they have created a program whose purpose is to expose another machine's non-humanlike qualities.

The team, led by Les Perelman of MIT and opposed to the use of automated essay-grading algorithms, designed the Basic Automatic B.S. Essay Language (Babel) Generator. Given up to three keywords, Babel generates a nonsensical essay. For example, the keyword “privacy” yielded an essay containing these sentences:

“Privateness has not been and undoubtedly never will be lauded, precarious, and decent. Humankind will always subjugate privateness.”
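
To give a rough sense of how a keyword-driven nonsense generator could produce sentences like these, here is a deliberately crude sketch: it stuffs ornate vocabulary and the supplied keyword into grandiose sentence templates. This is a hypothetical illustration only, not the actual Babel implementation (whose internals are not described in the coverage); the word lists, templates, and the `babble` function are invented for this example.

```python
import random

# Hypothetical toy sketch of keyword-driven gibberish generation.
# Not the real Babel Generator; it only illustrates the general idea of
# inflating a keyword with ornate vocabulary inside fixed sentence templates.

ADJECTIVES = ["lauded", "precarious", "decent", "quixotic", "veracious"]
ABSTRACT_NOUNS = ["humankind", "society", "academia", "the establishment"]
VERBS = ["subjugate", "augment", "repudiate", "promulgate"]

TEMPLATES = [
    "{Keyword} has not been and undoubtedly never will be {adj1}, {adj2}, and {adj3}.",
    "{Noun} will always {verb} {keyword}.",
    "The {adj1} nature of {keyword} lies in the realm of theory, not of practice.",
]


def babble(keyword: str, sentences: int = 3, seed: int = None) -> str:
    """Return a short block of grandiose nonsense built around one keyword."""
    rng = random.Random(seed)
    output = []
    for _ in range(sentences):
        template = rng.choice(TEMPLATES)
        output.append(template.format(
            Keyword=keyword.capitalize(),
            keyword=keyword,
            Noun=rng.choice(ABSTRACT_NOUNS).capitalize(),
            adj1=rng.choice(ADJECTIVES),
            adj2=rng.choice(ADJECTIVES),
            adj3=rng.choice(ADJECTIVES),
            verb=rng.choice(VERBS),
        ))
    return " ".join(output)


if __name__ == "__main__":
    print(babble("privacy", seed=42))
```

The real generator is far more elaborate, but the point stands: text that is syntactically confident and lexically impressive can be produced without any meaning behind it.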

Image: http://www.dailytech.com/HarvardMIT+Nonprofit+Creates+Software+for+Grading+Essays/article30293.htm

The essay received a score of 5.4 out of 6 from the automatic grader. The Chronicle of Higher Education describes Babel as “machines fooling machines for the amusement of human skeptics.”

An automatic essay-grading algorithm would undoubtedly save teachers a lot of time. But will it benefit students? Will they receive the same quality of feedback as they would if a human had read their work? And even if they do, will they be able to use that feedback to improve? Will they actually become better and more creative writers, or will they become more robotic, producing essays that fit a template? For these reasons, I’m a little excited to see this attempt to have artificial intelligence replace human judgment of writing quality fall flat.