Who is smarter, AI or people?
This is a dumb, unanswerable question. Why? Because without definitions, it makes no sense.
What do we mean by "smarter"?
Psychologists used to think intelligence was a unitary ability. You could take an IQ test, and a single numerical score would tell you how smart you were. Average intelligence was 100, and every 15 points marked one standard deviation up or down, so your score placed you at a specific percentile: a certain proportion of the population was smarter or less smart than you.
So if your score was 145, you were smarter than 99.9% of people, but if it was 85, about 84% of people were smarter than you.
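Those percentages follow directly from the bell curve. As a quick sanity check (assuming IQ scores are normally distributed with mean 100 and standard deviation 15, the standard convention), Python's standard library can reproduce them:

```python
from statistics import NormalDist

# Assumed model: IQ scores follow a normal distribution
# with mean 100 and standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# A score of 145 is 3 standard deviations above the mean:
# the fraction of people scoring below it is about 0.9987,
# i.e. "smarter than 99.9% of people."
below_145 = iq.cdf(145)

# A score of 85 is 1 standard deviation below the mean:
# the fraction scoring above it is about 0.84,
# i.e. "about 84% of people were smarter than you."
above_85 = 1 - iq.cdf(85)

print(f"{below_145:.4f}")  # ~0.9987
print(f"{above_85:.4f}")   # ~0.8413
```

The numbers in the paragraph above are rounded versions of exactly these cumulative-distribution values.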
The trouble is, we now know that intelligence is much more complex than that. I’m sure you know people who are marvelous writers but terrible at math, and there are plenty of very smart people who can’t spell worth a darn. So until we define what we mean by “smarter,” we can’t compare AI with people.
Which AI are we talking about?
There are new AI tools being developed every day. Some are very narrow and task-specific, while others are broad and comprehensive.
Which people are we talking about?
Human beings vary widely in their intelligences. Are we comparing AI with the Einsteins or the Homer Simpsons?
So the answer to the broad question, “Who is smarter, AI or people?” is “Who knows? It depends.”
Having said that, I recently read an interesting article in The Wall Street Journal about a contest between AI and human teams. The teams had one hour to make predictions about real-world events. Some teams worked alone, while others worked in combination with AI. Which approach do you think produced the most accurate predictions?
The worst: Human groups working alone
Second worst: Human groups who fed their own predictions to AI and asked it for confirming evidence
Third worst: Human groups who relied solely on AI and submitted the answer as their own
Second best: AI alone
Best: Human groups who challenged AI, demanding evidence, questioning assumptions, and asking AI to offer a counterargument
The author proposed that two qualities differentiated the most successful teams: perspective-taking and intellectual humility. But notice: these aren't intellectual skills; they're emotional skills. And they require a willingness to tolerate discomfort. Success meant challenging AI (or any other authority) rather than just agreeing and moving on.
The good news is that these skills can be learned. When teams develop them, they start building "hybrid intelligence."
The bottom line? As we continue to explore the interface between people and AI, it will be critical to focus on these so-called “soft skills.”
Want to talk more about helping your team build “hybrid intelligence”? Contact me at ggolden@gailgoldenconsulting.com

