A few weeks ago, my husband and I made a bet. I said there was no way ChatGPT could believably mimic my writing style for a smartwatch review. I had asked the bot to do that months ago, and the results were laughable. My husband bet they could ask ChatGPT the exact same thing and get a much better result. My problem, they said, was that I didn't know the right questions to ask to get the answer I wanted.
To my chagrin, they were right. ChatGPT wrote a much better review in my style when my husband did the asking.
That memory flashed through my mind as I was blogging about Google I/O. This year's keynote was essentially a two-hour thesis on AI: how it will affect Search, and all the ways it could boldly and responsibly make our lives better. A lot of it was neat. But I felt a shiver run down my spine when Google openly acknowledged that it's hard to ask AI the right questions.
During the demo of Duet AI, a suite of tools that will live in Gmail, Docs, and more, Google showed off a feature called Sidekick that can proactively offer you prompts that change based on the Workspace document you're working on. In other words, it prompts you on how to prompt it by telling you what it can do.
That came up again later in the keynote when Google demonstrated its new AI search results, called Search Generative Experience (SGE). SGE takes every question you type into the search bar and generates a mini-report or ‘snapshot’ at the top of the page. At the bottom of that snapshot are follow-up questions.
As a person whose job it is to ask questions, I found both demos disturbing. The questions and prompts Google used on stage are nothing like the ones I type into my search bar. My searches often read like a toddler learning to talk. (They're also usually followed by "Reddit" so I get answers from a non-SEO content mill.) Things like "Bald Dennis BlackBerry movie actor name." If I'm looking for something I wrote about Peloton earnings in 2022, I'll type "site:theverge.com Peloton McCarthy ship metaphors." Rarely do I search for things like "What should I do for a weekend in Paris?" It doesn't even occur to me to ask Google that kind of thing.
I admit that when I stare at a generative AI, I don't know what to ask. I can watch countless demos, and the blank window still stumps me. It's like I'm back in second grade and my grumpy teacher has just called on me with a question I don't know the answer to. When I do ask for something, the results I get are laughably bad, things that would take me longer to make presentable than if I just did the work myself.
On the other hand, my husband has taken to AI like a fish to water. After our bet, I watched them play with ChatGPT for an hour. What struck me most was how different our prompts and questions were. Mine were short, open-ended, and broad. My husband left the AI very little room for interpretation. "You have to hold its hand," they said. "You have to spell out exactly what you need." Their prompts and questions are hyper-specific, long, and often include reference links or datasets. But even they have to reword prompts and questions over and over to get exactly what they're looking for.
And that's just ChatGPT. Google's pitch goes a step further. Duet AI aims to pull contextual data from your emails and documents and intuit what you need (which is hilarious, because half the time I don't even know what I need). SGE is designed to answer your questions – even those that don't have a "right" answer – and then anticipate what you might ask next. For this more intuitive AI to work, programmers need to make sure the AI knows what questions to ask users so that users, in turn, can ask the right questions. That means programmers have to know which questions users want answered before users even ask them. It gives me a headache just thinking about it.
Not to get too philosophical, but you could argue that all of life is about figuring out the right questions to ask. To me, the most uncomfortable thing about the AI era is that I don't think any of us knows what we really want from AI. Google says it's what it showed on stage at I/O. OpenAI thinks it's chatbots. Microsoft thinks it's a very horny chatbot. But when I talk to the average person about AI these days, the question everyone wants answered is simple: How will AI change and impact my life?
The problem is that no one, not even the bots, has a good answer to that yet. And I don't think we'll get a satisfying one until everyone takes the time to rewire their brains to speak with AI more fluently.