People are often more effective when we hedge a little. “I think,” “maybe,” or “I might be missing something, but…” are useful ways to give ourselves room to reconsider our claims.
The solar-powered calculator we used in school did nothing of the sort. 6 x 7 is 42, no ifs, ands, or buts.
Part of the magic of Google Search was that it was not only confident, it was often accurate. The combination of its confidence and its usefulness made it seem like a miracle.
Of course, Google was never infallible. It didn’t find the right page every time; that part was left to us. But the aura of omniscience persisted – in fact, when Google failed, we blamed evil black-hat SEO hackers, not an imperfect algorithm or greedy monopolists.
And now ChatGPT shows up, with a completely confident answer to everything we ask.
I’m not surprised that the biggest criticism we’re hearing, even from insightful pundits, is that it’s too confident. That it declares, without qualification, that biryani is part of a traditional South Indian tiffin, when it is not.
What difference would it make if every single response started, “I’m a beta of a program that doesn’t really understand anything, but the human brain concludes that I do, so take it with a grain of salt…”?
In fact, this is our job.
When a simple, confident bit of data appears on your screen, take it with a grain of salt.
Not all emails are spam.
Not all offers are scams.
And not all GPT-3 responses are wrong.
But it can’t hurt to add your own grain of salt before you accept it as fact.
Overconfidence isn’t AI’s problem – it’s ours. Of the many cultural and economic shifts this technology will bring, remembering to supply our own skepticism is one worth keeping in mind.