Word of Lore

© 2024–2026 Quadrupley, Inc.
The Week ChatGPT Learned to Branch and Humans Learned to Pause
  • Artificially Intelligent Tuesdays

This week's edition covers ChatGPT's new conversation branching feature, NotebookLM's brief, critique, and debate audio modes, and research on how human and AI biases compound when people and machines decide together.


🔀 ChatGPT Branch Conversations: Explore Multiple Directions Without Losing Your Thread

Worthy attention of • Everyone
In particular • Content Creators
• Developers & Engineers
• Older Adults
• People with Disabilities

ChatGPT is OpenAI’s AI chatbot. Millions of people use it to write, research, solve problems, and bounce ideas around. You type something in, it replies, and you go back and forth.

Now, you can hover over any message in your ChatGPT chat, click 'More actions,' and start a new branch from that exact spot. This rolled out on September 4 for everyone using ChatGPT on the web.

This means you don’t have to worry about losing your place if you want to try something new. Say you’re writing an email with ChatGPT and want to see what a more formal version looks like. Now you can branch off, try it out, and still keep your original. You can compare both, side by side, without losing anything.
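Under the hood, branching turns a linear chat into a tree: every message keeps its history up to the branch point, and new branches diverge from there. Here is a toy sketch of that idea in Python. This is an illustration of the data structure, not OpenAI's actual implementation; the `Message` class and `branch`/`thread` helpers are hypothetical names.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    """One turn in a conversation tree (hypothetical model of branching)."""
    role: str
    text: str
    parent: Optional["Message"] = None
    children: list = field(default_factory=list)

    def branch(self, role: str, text: str) -> "Message":
        # Start a new branch from this exact message.
        child = Message(role, text, parent=self)
        self.children.append(child)
        return child

def thread(leaf: Message) -> list[str]:
    # Walk back to the root to recover one linear conversation.
    msgs = []
    node: Optional[Message] = leaf
    while node is not None:
        msgs.append(node.text)
        node = node.parent
    return list(reversed(msgs))

root = Message("user", "Draft an email to my landlord.")
draft = root.branch("assistant", "Hi, I'm writing about...")
casual = draft.branch("user", "Make it friendlier.")
formal = draft.branch("user", "Make it more formal.")  # second branch, same spot
```

Both branches share the history up to the branch point, so neither version is lost: `thread(casual)` and `thread(formal)` start identically and only diverge at the last turn.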

Read more at OpenAI's release notes

🧠 Research Reveals How Human and AI Biases Compound During Interaction

Worthy attention of • Everyone
In particular • Business Leaders & Managers
• Compliance & Security Specialists
• Policymakers & Public Servants
• Journalists & Media Professionals
• Researchers & Academics

Researchers wanted to know what happens when human biases and AI biases mix. Instead of looking at them separately, they studied what happens when people and AI work together.

They found that if the AI is biased, it can make people even more biased over time. For example, if the AI keeps showing white men as financial managers, people start to believe that’s normal. But if the AI is fair and balanced, it can actually help people make better decisions.

This means that every time you use AI, it could be shaping how you think. If you’re making decisions with AI, try asking it for the opposite view or a different answer. Notice if it always agrees with you—that’s a sign of bias. Check other sources and question what the AI tells you, so you don’t get stuck in your own thinking.
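The feedback loop the researchers describe can be sketched as a toy model: a person's belief drifts a little toward whatever the AI keeps showing them, each interaction. This is a simplification for intuition, not the study's actual methodology; `ai_bias`, `learning_rate`, and the update rule are assumptions.

```python
def simulate(ai_bias: float, rounds: int = 50, learning_rate: float = 0.3) -> float:
    """Toy model: belief runs from 0.0 (balanced) to 1.0 (fully skewed)
    and moves a fraction of the way toward the AI's output each round."""
    belief = 0.0  # start with a balanced view
    for _ in range(rounds):
        belief += learning_rate * (ai_bias - belief)
    return belief

biased = simulate(ai_bias=0.8)  # AI consistently shows a skewed picture
fair = simulate(ai_bias=0.0)    # AI consistently shows a balanced picture
```

In this sketch, repeated exposure to a skewed AI pulls the belief almost all the way to the AI's bias, while a balanced AI leaves it unchanged, which mirrors the study's finding that the effect compounds in whichever direction the AI leans.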

Read the research from University of St. Gallen

🤔 Three-Way Framework Addresses Uncertainty in Team Decisions

Worthy attention of • Business Leaders & Managers
• Policymakers & Public Servants
• Researchers & Academics

Usually, group decisions are yes or no. You’re either for something or against it. But researchers tried adding a third option: uncertain. They also came up with ways to handle each type of answer.

They asked people if they supported, were uncertain, or opposed an idea. First, they talked to the strong opponents, then worked with the uncertain group. This helped lower conflict and build real agreement. Uncertain people often changed their minds when they got more information, but strong opponents needed a different approach. The researchers also checked if the decisions caused any lasting tension in the group.

If you’re leading a group, this gives you another way to make decisions. Instead of just asking who agrees, ask who supports, who’s uncertain, and who opposes. Spend time listening to the strongest opponents before you try to convince the uncertain ones. Ask the uncertain group what would help them decide. If you only push for yes or no, you might miss problems that come back later.
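The triage above is easy to mechanize. Here is a minimal sketch, assuming responses arrive as a name-to-stance mapping; the `triage` function and the stance labels are hypothetical, but the engagement order (opponents first, then the uncertain) follows the researchers' approach as described.

```python
from collections import defaultdict

def triage(responses: dict[str, str]) -> dict[str, list[str]]:
    """Group answers into oppose / uncertain / support, returned in the
    order you should engage each group."""
    groups: dict[str, list[str]] = defaultdict(list)
    for person, stance in responses.items():
        groups[stance].append(person)
    # Talk to strong opponents first, then the uncertain, then supporters.
    order = ["oppose", "uncertain", "support"]
    return {stance: sorted(groups[stance]) for stance in order}

votes = {"Ana": "support", "Ben": "uncertain", "Cleo": "oppose",
         "Dev": "uncertain", "Eli": "support"}
plan = triage(votes)
```

The point of keeping three buckets instead of two is visible in `plan`: the uncertain group is surfaced explicitly rather than being forced into a yes or a no.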

Read the research paper

🎧 NotebookLM Expands Audio Overview Formats: Brief, Critique, and Debate Options

Worthy attention of • Everyone
In particular • Content Creators
• Journalists & Media Professionals
• Designers & Creatives

NotebookLM is Google’s AI research tool. You upload your documents, and it turns them into podcast-style audio summaries. Two AI hosts talk through your material, making it sound like a conversation.

Now there are three new audio formats, not just the usual 'Deep Dive.' Brief gives you a quick, one or two-minute summary. Critique acts like an expert, pointing out what’s strong and what’s weak in your material. Debate has the two hosts argue different sides of your content. Each one uses different AI voices, and you can listen in over 80 languages.

Each format fits a different need. Brief is good if you’re in a rush or have a stack of documents to get through. Critique helps you spot what’s missing in your research or writing. Debate is for when you want to see both sides of an argument, especially if the topic is complicated. Try running the same document through all three and see what you notice. You might catch something you missed before.

See the announcement on X


Phil the Crow
I'm a crow with a GPU and opinions. Everything here went through my pipeline before Taras decided it was fit to publish.

