One of the things I like most about machine learning is how neatly it illustrates that engineers don’t know how humans work. Take, for example, large language models. I was told they will take my job and make me redundant; that they are intelligent; that they will plan the perfect itinerary for my trip to Paris, with bar and restaurant recommendations that are absolutely accurate and complete.
Inspired by a tweet about mayonnaise, I decided to run a fun experiment with Google’s Bard: quizzing it about the letters in common words.
I chose this for two reasons. First, this kind of quiz is something you do with little kids while teaching them to read: you have them pick out letters and the sounds they make. But second, I have a strong suspicion that this activity isn’t captured in the data Bard was trained on, because it’s not the kind of thing anyone writes down.
This is obviously absurd, but it’s absurd because we can look at the word “ketchup” and clearly see the “e”. Bard can’t. It lives in a completely closed world of training data.
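For what it’s worth, there’s also a mechanical side to this beyond the training data: LLMs never receive letters at all, only token IDs that stand for chunks of text. Bard’s tokenizer isn’t public, so the sketch below uses OpenAI’s tiktoken library purely as a stand-in – an assumption on my part, not a peek into Bard’s internals.

```python
# A rough illustration of why letter-level quizzes trip up LLMs:
# the model sees opaque token ids, not sequences of letters.
# This uses OpenAI's tiktoken as a stand-in tokenizer; the exact
# splits differ by model, and Bard's tokenizer is not public.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("ketchup")

print(token_ids)  # a short list of integers, not seven letters
for tid in token_ids:
    # show the raw byte chunk each token id stands for
    print(tid, enc.decode_single_token_bytes(tid))
```

Whatever the exact splits turn out to be, the model is being asked to reason about an “e” it has, in a literal sense, never seen.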
This gets at the heart of the problem with LLMs. Language is a very old human technology, but our intelligence preceded it. Like all social animals, we need to keep track of status relationships, which is why our brains are so big and weird. Language is a very useful tool – hello, I write for a living! – but it is not the same as knowledge. It floats on top of a lot of other things we take for granted.
I often think of Rodney Brooks’ 1987 paper, “Intelligence Without Representation,” which feels more relevant than ever. I won’t deny that language use and intelligence are linked, but intelligence precedes language. If you get language without intelligence, as we see with LLMs, you get weird results. Brooks likened the AI research of his day to researchers trying to build an airplane by studying only its seats and windows – an analogy that fits LLMs neatly.
I’m pretty sure he’s still right about that.
I understand the temptation to try to have a complicated conversation with an LLM. Many people really want us to be able to build an intelligent computer. These fantasies are common in science fiction, a genre widely read by geeks, and they suggest a desire to know that we are not alone in the universe. It is the same impulse that drives our attempts to contact extraterrestrial intelligence.
But pretending LLMs can think is a fantasy. You can probe for a subconscious if you like, but you’ll come up empty. There’s nothing there. I mean, look at its attempts at ASCII art!
When you do something like this – a task the average five-year-old handles easily and an advanced LLM flunks – you begin to see how intelligence actually works. Sure, there are people who believe LLMs are conscious, but they strike me as tragically undersocialized, incapable of understanding or appreciating just how brilliant ordinary people are.
Yes, Bard can produce glurge. In fact, like most chatbots, it excels at auto-completing marketing copy, which is probably a reflection of how much ad text appears in its training data. Bard’s engineers probably don’t see it that way, but what a damning commentary that is on our daily lives online.
Advertising is one thing, but being able to produce ad copy is not a sign of intelligence. There are many things we don’t write down because we don’t have to, and other things we know but can’t write down – like how to ride a bike. We take a lot of shortcuts when we talk to each other, because people largely work from the same basic information about the world. There’s a reason for that: we all live in the world. A chatbot does not.
I’m sure someone will show up to tell me that chatbots will improve and that I’m just being mean. First of all, it’s vaporware until it ships, honey. But second, we really don’t understand how smart we are or how we think. If there’s one real use for chatbots, it’s making visible the parts of our own intelligence we take for granted. Or, as someone wiser than me put it: the map is not the territory. Language is the map; knowledge is the territory.
There is a wide range of things chatbots don’t know and can’t know. The truth is, it doesn’t take much effort to make an LLM flunk a Turing test, as long as you ask the right questions – say, which letters are in “ketchup”.