A recent assignment had me revisit Searle’s thought experiment known as the Chinese Room and the debate over whether machines can understand as humans do. He puts forward this scenario: he himself knows no Chinese, but he sits in a room with reference books that allow him to produce coherent written responses to any message passed to him in Chinese. The person outside the room receiving his responses will believe that the person inside understands Chinese, when in reality no understanding is going on. Searle uses this as an analogy for computer programs: even if a program can produce logical, correct, or sophisticated outputs, it doesn’t truly understand what it’s computing. Computers simulate intelligence, but simulation is not sufficient for true consciousness, nor for being considered a “mind.”
Searle first wrote about the Chinese Room in 1980, but when I recently read about deep learning, the most current advance in artificial intelligence, I couldn’t help thinking that the Chinese Room is still quite pertinent. Deep learning is accomplished by a network in which concepts are connected in layers: the smallest concepts form one layer, slightly larger ones sit above them, and so on. To retrieve information, the network doesn’t have to search a massive pool of data; it only has to find the right associations between data points, a much less onerous computational task. The name “Deep Learning” alone is telling to me, because typically only humans, or agents with human cognitive capacities, can truly learn. If it helps make my Facebook newsfeed more interesting (i.e., excludes status updates from those random people from elementary school I’ve just never gotten around to unfriending), that’s great. But have technological innovations like this one brought us closer to a feeling that true artificial understanding is achievable (or already achieved)?
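To make the layered idea above a little more concrete, here is a toy sketch in Python. The weights are picked by hand purely for illustration (a real deep network learns its weights from data, and uses far more layers and units), but it shows the basic structure: each layer builds slightly larger concepts out of the layer below it, rather than searching a flat pool of data.

```python
def layer(inputs, weights):
    """Each output unit is a weighted combination of the inputs,
    passed through a simple threshold (a ReLU nonlinearity)."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Layer 1: turns raw "pixel" values into small edge-like features.
w1 = [[1.0, -1.0, 0.0],   # feature: left brighter than middle
      [0.0, 1.0, -1.0]]   # feature: middle brighter than right

# Layer 2: combines those small features into a larger concept.
w2 = [[1.0, 1.0]]         # concept: both edges present

pixels = [1.0, 0.5, 0.0]
hidden = layer(pixels, w1)   # the small concepts
output = layer(hidden, w2)   # a larger concept built from them
print(hidden, output)
```

In Searle’s terms, nothing here “understands” edges or brightness; each layer just multiplies and adds numbers handed up from below. That is exactly what makes the Chinese Room feel so pertinent to me.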