Thinking about AI

I’m still not sure how to think about AI. While some aspects of it seem useful, I’m not sure I care about them. The few times I’ve tried it out on topics of interest to me, using both ChatGPT and Perplexity, it’s failed.

And there have also been failures on tests that I didn’t mean to run. Last week, during the Illinois-Northwestern football game, my sons and I were wondering whether a Northwestern receiver, Calvin Johnson, was related to the former Detroit Lions receiver of the same name (who is probably better remembered by his nickname, Megatron). My older son pulled out his phone and Googled. The Gemini answer, which appeared above the links, said he was Megatron’s son, but the very first line of one of the top links said

He may not be related to Megatron, but Northwestern will welcome this Calvin Johnson to Evanston with open arms.

More disturbing than obvious outright errors like that is the possibility that using AI will affect our ability to judge its value. I’m thinking of something that came up in a recent episode of The Talk Show, the one with Joanna Stern. About 53 minutes into the show, they start talking about how they both asked ChatGPT to make an image of what it thinks their lives look like. Joanna tried it twice, and you can see the images by following links in the show notes. Prominent in both images were representations of scouting.

Why? Well, one of Joanna’s sons recently joined the Cub Scouts, and she’s asked ChatGPT about certain aspects of scouting. ChatGPT has taken these questions as an indication of her deep interest in scouting. In one image, there’s a big Boy Scouts poster on the wall; in the other, her computer screen has the BSA logo above her name, and what looks like a merit badge or two sits on her desk. Both images have a boy with a neckerchief in the background.

Both Joanna and John seemed to think this is a reasonable (albeit funny) thing for ChatGPT to do. She asked about scouting, so she must be interested in it, right? And as I was listening to the show, I thought so, too.

But as I thought about it more, I realized this was backward. Instead of ChatGPT thinking like a person, we were thinking like it. The scouting imagery in Joanna’s pictures tells the viewer that she’s deeply into scouting, but the reason she asked questions is that she’s new to it. If she had asked her questions of any person—a scout leader or even another parent whose kid had been in Scouts for a while—that person would have immediately known that Joanna was a newcomer, not an aficionado who’d have scouting posters on her walls, merit badges scattered across her desk, and the BSA icon on her desktop wallpaper.

I find this insidious. Certainly, we were right about the reason ChatGPT put the scouting imagery in its pictures. But it was a dumb thing for ChatGPT to do, and none of us immediately called it out as such. We are, I fear, so deeply into computers¹ that we accept the stupid things they do as normal and reasonable.

And because these images aren’t “wrong”—unlike saying the Northwestern receiver is Megatron’s son—there’s no chance that ChatGPT will be improved in this regard. I’ll bet every programmer at OpenAI would look at Joanna’s images and think “Oh yeah, that makes sense,” just like we did.

Before I go, I want to mention that there are AI haters out there who strongly object to using words like “think” and “understand” when talking about AI bots. As someone who’s spent decades talking about how structures “want” to deflect in certain ways and “refuse” to move in other ways, this kind of anthropomorphism doesn’t bother me. It’s my own cybermorphism that worries me.


  1. Something ChatGPT didn’t pick up on with regard to John—there’s no computer in his picture.