The Week ChatGPT Learned to Branch and Humans Learned to Pause
This week's edition covers ChatGPT's conversation branching breakthrough, NotebookLM's new debate and critique modes, and research revealing how we actually choose between thinking and reacting.
🔀 ChatGPT Branch Conversations: Explore Multiple Directions Without Losing Your Thread
ChatGPT is OpenAI’s AI chatbot. Millions of people use it to write, research, solve problems, and bounce ideas around. You type something in, it replies, and you go back and forth.
Now, you can hover over any message in your ChatGPT chat, click 'More actions,' and start a new branch from that exact spot. This rolled out on September 4 for everyone using ChatGPT on the web.
This means you don’t have to worry about losing your place when you want to try something new. Say you’re writing an email with ChatGPT and want to see a more formal version. You can branch off, try it, and keep the original, then compare the two versions side by side.
🧠 Research Reveals How Human and AI Biases Compound During Interaction
Researchers wanted to know what happens when human biases and AI biases mix. Instead of looking at them separately, they studied what happens when people and AI work together.
They found that if the AI is biased, it can make people even more biased over time. For example, if the AI keeps showing white men as financial managers, people start to believe that’s normal. But if the AI is fair and balanced, it can actually help people make better decisions.
This means that every time you use AI, it could be shaping how you think. If you’re making decisions with AI, try asking it for the opposite view or a different answer. Notice if it always agrees with you—that’s a sign of bias. Check other sources and question what the AI tells you, so you don’t get stuck in your own thinking.
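To make the compounding effect concrete, here is a minimal toy model of the feedback loop the researchers describe: a person's belief drifts toward whatever the AI keeps showing them. All the numbers, the `learning_rate`, and the `run` function are illustrative assumptions, not figures from the study.

```python
import random

random.seed(0)

def run(ai_bias: float, rounds: int = 50) -> float:
    """Toy model: a person's belief drifts toward what the AI shows.

    `ai_bias` is the fraction of AI outputs depicting one group
    (e.g. men as financial managers). The person's belief starts
    balanced at 0.5 and shifts slightly toward each observation.
    These numbers are assumptions for illustration, not the paper's.
    """
    belief = 0.5          # perceived share, starts unbiased
    learning_rate = 0.1   # how strongly each AI output nudges belief
    for _ in range(rounds):
        shown = 1.0 if random.random() < ai_bias else 0.0
        belief += learning_rate * (shown - belief)
    return belief

print(f"biased AI (80/20):   belief drifts to {run(0.8):.2f}")
print(f"balanced AI (50/50): belief stays near {run(0.5):.2f}")
```

The point of the sketch is the asymmetry: with a skewed AI the belief ratchets away from 0.5 round after round, while a balanced AI leaves it roughly where it started.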
🤔 Three-Way Framework Addresses Uncertainty in Team Decisions
Usually, group decisions come down to yes or no: you’re either for something or against it. Researchers tried adding a third option, uncertain, and developed a distinct way to handle each type of answer.
They asked people if they supported, were uncertain, or opposed an idea. First, they talked to the strong opponents, then worked with the uncertain group. This helped lower conflict and build real agreement. Uncertain people often changed their minds when they got more information, but strong opponents needed a different approach. The researchers also checked if the decisions caused any lasting tension in the group.
If you’re leading a group, this gives you another way to make decisions. Instead of just asking who agrees, ask who supports, who’s uncertain, and who opposes. Spend time listening to the strongest opponents before you try to convince the uncertain ones. Ask the uncertain group what would help them decide. If you only push for yes or no, you might miss problems that come back later.
🎧 NotebookLM’s New Audio Formats: Brief, Critique, and Debate
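The steps above can be sketched in a few lines of code. This is just an illustration of the triage order, support, uncertain, oppose, with made-up names and a hypothetical `triage` helper; it is not the researchers' actual method or tooling.

```python
from collections import defaultdict

# Hypothetical poll results; names and stances are invented for illustration.
responses = {
    "Ana": "support", "Ben": "oppose", "Caro": "uncertain",
    "Dev": "support", "Eli": "uncertain", "Fay": "oppose",
}

def triage(votes: dict[str, str]) -> dict[str, list[str]]:
    """Group people by stance instead of forcing a yes/no split."""
    groups: dict[str, list[str]] = defaultdict(list)
    for person, stance in votes.items():
        groups[stance].append(person)
    return groups

groups = triage(responses)

# Engage in the order the framework suggests:
# strong opponents first, then the uncertain, then confirm with supporters.
for stance in ("oppose", "uncertain", "support"):
    print(f"{stance}: talk with {', '.join(groups[stance])}")
```

The design choice worth noting is the ordering: listening to opponents before persuading the uncertain surfaces objections early, which is exactly what the researchers found reduced lasting tension.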
NotebookLM is Google’s AI research tool. You upload your documents, and it turns them into podcast-style audio summaries. Two AI hosts talk through your material, making it sound like a conversation.
Now there are three new audio formats, not just the usual 'Deep Dive.' Brief gives you a quick, one or two-minute summary. Critique acts like an expert, pointing out what’s strong and what’s weak in your material. Debate has the two hosts argue different sides of your content. Each one uses different AI voices, and you can listen in over 80 languages.
Each format fits a different need. Brief is good if you’re in a rush or have a stack of documents to get through. Critique helps you spot what’s missing in your research or writing. Debate is for when you want to see both sides of an argument, especially if the topic is complicated. Try running the same document through all three and see what you notice. You might catch something you missed before.