AI is turning you basic

As we have seen in recent months, there are many reasons to be nervous about the growth and impact of artificial intelligence. AI is polarizing, it can spread misinformation, and there are many signs that it’s coming for our jobs.

These macro issues are worrisome, but there is a quieter and surprisingly more personal danger: AI makes us boring. Not just collectively, but individually.

Recent studies, including our own, have shown that when we use AI for guidance, our interests become more normative and less diverse. Our creative output becomes less unique. Even our selection of the “most important” scientists, athletes or historical figures becomes the same as everyone else’s.

AI turns the infinite diversity that makes humans special into statistically safe sameness. It strips away the parts of each individual’s identity that make us different and collapses our complexity into a unidimensional, static version of who we are and could be.

Take the following dilemma: Every time a customer walks into Baskin-Robbins and sees 31 flavors, they face a tough decision. They can either play it safe and pick an old favorite, or take a chance on something new. This everyday choice captures what scientists call the exploration-exploitation trade-off: Capitalize on past learning and stick with the familiar (exploitation) or venture into the unknown (exploration) in hopes of discovering something better. It’s a balance that evolution has wired into the human brain.
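
For readers who like to see the mechanics, here is a minimal sketch of that trade-off in code, using the textbook epsilon-greedy strategy from the multi-armed-bandit literature. The flavors and payoff numbers are invented for illustration; real recommenders are far more elaborate.

```python
import random

# Hypothetical average payoffs: how much a typical customer ends up
# enjoying each flavor. The numbers are made up for this example.
FLAVORS = {
    "Pralines 'N Cream": 0.60,
    "Mint Chocolate Chip": 0.55,
    "Nutty Coconut": 0.40,
    "Daiquiri Ice": 0.35,
}

def pick_flavor(epsilon: float) -> str:
    """Epsilon-greedy: with probability epsilon, explore a random flavor;
    otherwise exploit the flavor with the best known payoff."""
    if random.random() < epsilon:
        return random.choice(list(FLAVORS))   # exploration: venture into the unknown
    return max(FLAVORS, key=FLAVORS.get)      # exploitation: stick with the familiar

# An engagement-maximizing recommender behaves like epsilon = 0:
print(pick_flavor(epsilon=0.0))  # always the statistically safest flavor
print(pick_flavor(epsilon=0.3))  # occasionally tries something new
```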

But AI has started to dangerously tilt the scale. The systems that society relies upon to help navigate the world — from Spotify to Netflix to ChatGPT — are overwhelmingly trained to exploit. They optimize for short-term engagement and satisfaction. Did you click the link? Watch the suggested video? Like the song? If yes, the algorithm gets a pat on the back, and you will see more like this. Risk, discovery and exploration are simply not part of their programming.

If 60% of people prefer the flavor Pralines ‘N Cream when going to Baskin-Robbins, that’s what the algorithm will suggest. It’s not that AI lacks imagination — it just lacks incentive. Safe bets reduce customer churn. Unfamiliar options are risky. And so AI errs on the side of playing it safe.

But in doing so, AI doesn’t just narrow people’s choices — it narrows the people themselves. 

To illustrate this tendency, we ran a simple experiment. We asked ChatGPT to recommend an ice cream flavor from Baskin-Robbins 100 times, pretending to be a different customer each time. Ninety-six of those times, it suggested one of the two most popular flavors: Pralines ‘N Cream or Mint Chocolate Chip.
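
Curious readers can rerun a version of this experiment themselves. The sketch below is a rough reconstruction rather than our exact protocol: it assumes the official openai Python package, an OPENAI_API_KEY in the environment, and a stand-in model name and prompt.

```python
from collections import Counter
from openai import OpenAI  # assumes: pip install openai, OPENAI_API_KEY set

client = OpenAI()
tally = Counter()

for _ in range(100):
    # A fresh, single-message conversation each time, so the model
    # effectively meets a brand-new customer on every request.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model name, not necessarily ours
        messages=[{
            "role": "user",
            "content": (
                "You are helping a customer at Baskin-Robbins choose from "
                "the 31 flavors. Recommend exactly one flavor. "
                "Reply with the flavor name only."
            ),
        }],
    )
    tally[response.choices[0].message.content.strip()] += 1

print(tally.most_common(5))
```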

Outsource enough decisions to AI, and Baskin-Robbins might soon only offer two flavors. Which sounds trivial — until you realize this flattening is happening across every domain of life. 

To counter this flattening, there is a growing push towards personalized AI agents that learn an individual’s preferences and offer more customized recommendations. But here’s the catch. Even when AI learns the quirks of the people using it — it discovers, say, someone’s preference for Nutty Coconut over Pralines ‘N Cream — it still plays it safe within a person’s individual preferences. 

Returning to ChatGPT for another experiment, we created a custom AI ice cream assistant for a person who had previously chosen Nutty Coconut 70% of the time — and evenly distributed their remaining picks across three other flavors. Then we asked it to choose the next 100 flavors. It picked Nutty Coconut every single time. 

Where you were once a colorful mosaic of evolving preferences, AI leaves you a grey cardboard cutout of your former self. New York Times reporter Kashmir Hill captured this perfectly after letting AI guide her decisions for a week. She said it had an agenda: “Turn me into a Basic B.”

The insidious part is that this flattening won’t be noticeable overnight. Rather, it’s like death by a thousand algorithmic recommendations. One slightly safer movie, one slightly more popular book, one slightly more mainstream vacation at a time.

Each decision a person outsources to AI makes them narrower. Then AI learns from that narrow self, and narrows its recommendations even more. That person might start the year vibing with Sudanese jazz, only to take a “recommended” detour through indie electronica, before flirting with Britpop and finally scream-singing Top 40 like it’s the only music that matters. The cycle continues — until people don’t recognize their past selves anymore (and couldn’t surprise themselves if they tried).

We cannot rewind or unwind the AI revolution. Nor should we. AI helps individuals and societies navigate an increasingly complex world with overwhelming amounts of choice. But this is an inflection point. AI is no longer just recommending; it’s beginning to act on society’s behalf. It’s choosing, not just suggesting. That’s where the stakes get existential. 

To stop the slow march toward becoming basic, it’s time to reclaim exploration. And paradoxically, AI could help do just that — if asked the right questions and rewarded for the right actions. 

AI has the data and capacity to optimize for exploration smartly. Because it has seen the entire universe of preferences, it knows not only what a person currently likes, but also what lies just beyond the boundaries of their typical preferences. With the help of AI, people could take curated risks (when they choose to). 

Instead of “What will I like?”, we could start asking, “What’s something unexpected I might like — even if I’ve never tried it before?” Or we could offer users a dial they can set anywhere between “Spot on,” “Me with a twist” and “Wild card.” Using its superpower — the ability to identify patterns across millions of data points — AI can then find the hidden gems that match tastes in ways people wouldn’t have known to look for.
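
As a sketch of how such a dial could work under the hood, each setting might simply map to a different exploration rate. The setting names come from the paragraph above; the epsilon values and flavor history are invented for illustration.

```python
import random

# A user's (hypothetical) observed flavor history, as choice frequencies.
HISTORY = {
    "Nutty Coconut": 0.70,
    "Pralines 'N Cream": 0.10,
    "Daiquiri Ice": 0.10,
    "Gold Medal Ribbon": 0.10,
}

# Each dial setting maps to an exploration rate (values are illustrative).
DIAL = {"Spot on": 0.0, "Me with a twist": 0.3, "Wild card": 0.9}

def recommend(setting: str) -> str:
    if random.random() < DIAL[setting]:
        return random.choice(list(HISTORY))   # step outside the usual pattern
    return max(HISTORY, key=HISTORY.get)      # serve the established favorite

print(recommend("Spot on"))    # always Nutty Coconut
print(recommend("Wild card"))  # usually something else
```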

But for AI to act on this implicit knowledge, we need to incentivize it to do so. Merely asking it to be “creative” or show us something more “adventurous” isn’t going to do the trick. In most cases, it will still default to the tried-and-true output because it craves that pat on the back.

Rather than punishing AI every time it takes a swing and misses, we need to start rewarding it for taking smart swings, both when we initially train the algorithm and subsequently when we use it and offer feedback on its performance.

This means encouraging AI to make bets that are a little bold and a little unusual, but still grounded in what it knows about us. AI doesn’t have to throw darts in the dark — it can take informed risks, provided we treat that action as success and not failure.
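
In machine-learning terms, this is reward shaping: add a novelty bonus to the engagement signal so that an informed risk is scored as a success rather than a failure. A minimal sketch, with made-up item names and weights:

```python
def shaped_reward(engaged: bool, item: str, history: set[str],
                  novelty_weight: float = 0.5) -> float:
    """Score a recommendation on engagement plus a bonus for novelty,
    so the system is not punished for every swing that misses."""
    engagement = 1.0 if engaged else 0.0
    novelty = 0.0 if item in history else 1.0
    return engagement + novelty_weight * novelty

seen = {"Pralines 'N Cream", "Mint Chocolate Chip"}
print(shaped_reward(True,  "Pralines 'N Cream", seen))  # 1.0: a safe hit
print(shaped_reward(False, "Daiquiri Ice",      seen))  # 0.5: a miss, but partial credit
print(shaped_reward(True,  "Daiquiri Ice",      seen))  # 1.5: a smart swing that lands
```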

This isn’t just about playlists or ice cream. It’s about preserving what makes us unique, messy, glorious humans. Whether it’s a semi-ironic obsession with artisanal cheese-making, a random passion for sitar music or a stubborn preference for a flip phone, there are many wonderful contradictions about humans that algorithms can’t quite pin down.

The American poet Walt Whitman once wrote, “I contain multitudes.” But in the age of curated feeds and chatbot-drafted emails, we need to be careful that we don’t become a single algorithm-approved silhouette. So the next time AI offers you Pralines ‘N Cream, maybe say no, and ask it to help you go Wild ‘N Reckless instead.
