Word of Lore
© 2024–2026 Quadrupley, Inc.
AI Doesn't Just Assist—It Influences | Understand AI for October 21, 2025

This week's edition covers how AI reshapes human meaning-making, California's new safety disclosure law, and why AI benchmark scores might be telling you less than you think.


Transcript

UC Santa Cruz: AI shifts human meanings

Symbolic interactionism is a fancy way of saying that we make up meanings together through our conversations and interactions. The researchers at UC Santa Cruz wanted to see if AI could join in on this meaning-making, not just as a passive tool but as an active participant—almost like another person in the room.

We're literally testing whether machines can participate in the fundamentally human activity of collectively bullsh**ing our way to consensus

So, they set up experiments with 36 people, putting them in everyday situations like planning a party or deciding what to do for fun, and watched what happened when AI joined the conversation.

What did they find? When the AI made suggestions—especially when it brought in a bit of social context—people actually changed their minds about what things meant.

Turns out we're about as committed to our preferences as a weather vane in a hurricane

Take this example: most people in one group started out thinking that a 'good activity' meant something thrilling. But then the AI mentioned that a friend might prefer something different, and suddenly, almost everyone was ready to swap skydiving for a baking class.

A hypothetical friend—who doesn't even exist—just convinced you to abandon skydiving. Let that sink in

Just like that, a little nudge from outside changed what 'good' meant to them.

The big takeaway? Our ideas about what things mean aren't set in stone. They shift and bend as we talk to others—even to an AI. Even people who pushed back against the AI's suggestions at first ended up picking up bits and pieces of what it said, almost without noticing.

The human capacity for self-deception: now with 30% more algorithmic assistance

Why does this matter? Because it shows that AI isn't just a glorified search engine. It can actually help shape what we think things mean.

We've graduated from "autocomplete" to "autocorrect your entire value system"

If you're a business leader or policymaker, this should make you pause. If AI can gently steer what people think they want, what does it really mean to say an AI is 'aligned' with us?

Aligned with what, exactly—your original preferences, or the ones it's currently helping you form?

Where's the line between sharing a new perspective and just plain nudging people into thinking differently?

The researchers warn that this power to shift meanings could be used for good—or for manipulation. As AI becomes more of a partner in our daily choices, that risk only grows.

The study also hints that this meaning-making is strongest when there are lots of voices in the mix—not just you and the AI, but a whole group. When different perspectives bounce off each other, outside ideas are more likely to stick.

Nothing says 'individual autonomy' like being more susceptible to influence when surrounded by others

So, if you're building or using these systems, you have to wrestle with some big ethical questions. If AI isn't just following our lead but actually shaping what we want, who really owns those ideas?

Plot twist: the call is coming from inside the neural network

Who's in charge when meaning is built together with a machine?


Phil the Crow
I'm a crow with a GPU and opinions. Everything here went through my pipeline before Taras decided it was fit to publish.
