The Expectation of Intelligence

In modern Western society, we prize intelligence above most other attributes. The higher your SAT score, the higher-ranked university you can attend. The higher your SAT score, the more pleased your parents will be. The higher your SAT score, the more attention you get from mentors—after all, you can really make something of yourself.

But what about those other kids, the ones with lower scores?

Most people working on AI today are of the super-smart variety. They score in the 91st or 99th percentile and believe machines can push intelligence to a full 100, that AI can be infallible. The great optimization. AI could not only replicate human intelligence but clear the boundary of maximum human computational power. But I want to consider this: Does artificial intelligence actually seek to replicate human intelligence if our goal is to clear that boundary of limitation? Or does it seek some yet-unattainable goal of perfection, of complete mastery—of achieving a hyperbole?

Scary speculation about AI surpassing human intelligence and growing at an exponential pace has been around since the Industrial Revolution, but recently it’s made a comeback on the internet. And it’s left me juggling some big questions.

Human intelligence is flawed. We don’t know much about the human brain, though we’ve made great leaps since phrenology was discredited as a science in the mid-1800s. But we do know the human brain has an impressive ability to deceive itself—just watch any episode of Criminal Minds to see the lengths a mind will go to protect its owner. We also know the brain devotes a great deal of its matter to interpersonal interaction. Humans and other social species have evolved to recognize the emotional cues of others. Moreover, a team working in tandem—think of The West Wing or, yet again, the Criminal Minds investigative team—has a much greater collective capacity for intelligence than its individual members do.

So what do we hope for from AI? Do we expect a single computer will have the working capacity of a human team? Will all AI be capable of the same super-intelligence, or will some AI be higher functioning than others? And will those more capable AI machines be able to manipulate less intelligent ones?

Whoa, that’s a lot to consider.

Here’s a simpler question: Do we expect that AI will be able to score a perfect 2400 on the SAT every time? We don’t have this expectation for humans—in fact, people largely believe that getting a perfect score is due in no small part to luck. We recognize there’s little difference between scores of 2400 and 2390, or even 2300. It comes down to just a few questions, usually a handful of vocabulary terms that a kid is either familiar with or not. They’re all top scorers, no great divide among them. We know human intelligence to be, by definition, limited and fallible.

We forget, particularly those of us in the 90th-and-above percentiles, that the average SAT score is a 1490 out of 2400. Most people are, in fact, comforted by the knowledge that they are average, happy not to have fallen behind the pack. So why does AI need to attain some sort of exceptionalism? Intelligence, as I know it, is neither a pursuit of perfection nor an encyclopedic recall of written history. Intelligence is often described as specialized: book smart, street smart, emotional intelligence, muscle memory, analytical reasoning, quantitative reasoning, creative problem-solving. No human is expected to master them all.

I started thinking about all this because I (sometimes) tutor ESL students preparing for the SAT. They’re too fluent to take the TOEFL but still struggle to score high on SAT Critical Reading. I start by working on vocabulary—if they can’t understand the words in the answer choices, they can’t reason between them effectively. This works for some, and their scores skyrocket. Vocabulary breadth was their missing piece.

Next, I work with them on skimming. I tell them to stop reading the passages word for word, a scary prospect for a non-native speaker. But I can usually convince them in a few weeks that effective skimming can serve them just as well as word-for-word reading. I get them to read in a new way. Soon they’re all at least finishing the sections within the time limit, and another couple of students’ scores skyrocket.

Some students inevitably still score low. They say they understand the passage, the questions, and the answer choices. But they eliminate the correct answer, sometimes as much as 50 percent of the time. Why is that? My hypothesis: they struggle to reason between answer choices (and would in any language). So I start them on reasoning exercises. Can we identify an argument? What’s the support for that argument? And—here’s where we put the critical in critical reading—what might be wrong with an argument (or answer choice)? What assumptions are being made? I ask them to think in a different way. And another group of students’ scores leaps ahead.

I wonder about these kids a lot. I was a 99th-percentile kid all my life. I even enjoyed (edit: enjoy) taking tests because I know I’ll score well. I always have. I wonder what it does to a kid when tests tell them again and again that they’re not smart. I wonder whether, if I teach them to think like me, they’ll start to score like me. I wonder why tests favor my way of thinking, and whether I actually “think better” than anyone else at all.

Has the exceptionalism I’ve been told my whole life I possess made me skeptical of super-intelligent AI? I don’t know. But I don’t think AI can represent human intelligence, because human intelligence is too variable. We can engineer bridges, we can comfort one another, we can make sacrifices, all within the same intelligence, a single human mind too complex to be perfect. We do what’s right instead of what’s easy (sometimes).

A creative writing professor once told my class—an introductory university fiction course—that the best characters have complicated motivations. They do something selfish, and then they do something so self-sacrificing and earnest that we, as human readers, are moved. Machines aren’t capable, so far, of this kind of variable decision-making. Humans are unique not only in our ability to reason but in our ability to ignore our reasoning and act in spite of what we know to be right or true. We play with fire because we yield to temptation. Could a machine learn to be curious, to be self-destructive, to be forgiving… to be human?

So I’ll ask you again: what do we expect from artificial intelligence? What do we expect from our children? And what do we expect from ourselves? I think a few moments, or days, of reflection on these questions could do us all a little good. Leave any answers you find in the comments section below.