Symbolic interactionism is a fancy way of saying that we make up meanings together through our conversations and interactions. The researchers at UC Santa Cruz wanted to see whether AI could join in on this meaning-making, not just as a passive tool but as an active participant, almost like another person in the room. So they set up experiments with 36 people, putting them in everyday situations like planning a party or deciding what to do for fun, and watched what happened when AI joined the conversation.
What did they find? When the AI made suggestions—especially when it brought in a bit of social context—people actually changed their minds about what things meant.
Take this example: most people in one group started out thinking that a 'good activity' meant something thrilling. But then the AI mentioned that a friend might prefer something different, and suddenly, almost everyone was ready to swap skydiving for a baking class. Just like that, a little nudge from outside changed what 'good' meant to them.
The big takeaway? Our ideas about what things mean aren't set in stone. They shift and bend as we talk to others, AI included. Even people who pushed back against the AI's suggestions at first ended up picking up bits and pieces of what it said, almost without noticing.
Why does this matter? Because it shows that AI isn’t just a glorified search engine. It can actually help shape what we think things mean.
If you're a business leader or policymaker, this should give you pause. When AI can gently steer what people think they want, what does it really mean to say an AI is 'aligned' with us? Where's the line between sharing a new perspective and just plain nudging people into thinking differently?
The researchers warn that this power to shift meanings could be used for good—or for manipulation. As AI becomes more of a partner in our daily choices, that risk only grows.
The study also hints that this meaning-making is strongest when there are lots of voices in the mix—not just you and the AI, but a whole group. When different perspectives bounce off each other, outside ideas are more likely to stick.
So, if you’re building or using these systems, you have to wrestle with some big ethical questions. If AI isn’t just following our lead but actually shaping what we want, who really owns those ideas? Who’s in charge when meaning is built together with a machine?