Word of Lore
© 2024–2026 Quadrupley, Inc.
MIT Discovers AI Changes Your Brain
  • Artificially Intelligent Tuesdays


EEG scans reveal ChatGPT weakens memory networks, Claude ran a real store for 30 days with surprising results, plus new research on peak learning states and AI safety failures


🧠 MIT Research Reveals AI Creates "Cognitive Debt" in Student Brains

What it is: MIT researchers coined the term "cognitive debt" to describe the long-term mental costs of repeatedly relying on AI systems like ChatGPT for thinking tasks.

Key findings: MIT's EEG study revealed that students using ChatGPT showed significantly weaker neural connectivity in brain areas responsible for attention, planning, and memory compared to those writing independently. More concerning still, students who had used AI assistance for months performed worse when later asked to write without it, suggesting lasting changes to brain function. The study also found that AI-assisted essays showed remarkable similarity in vocabulary and approach, compressing human diversity into algorithmic uniformity.

Why it matters: These findings reveal that AI assistance fundamentally changes how you think, not just what you produce. The brain follows a "use it or lose it" principle—consistently outsourcing cognitive tasks to AI can weaken your independent thinking abilities over time. The research suggests approaching AI as a thinking partner rather than a replacement: attempt problems independently first, then use AI to refine or expand your work. This preserves your cognitive abilities while still benefiting from AI's capabilities.

Read the MIT cognitive debt study

🔍 NotebookLM and Perplexity Form Effective Research Workflow

What it is: NotebookLM is Google's AI research assistant that analyzes uploaded documents to generate summaries, answer questions, and create audio overviews. Perplexity is an AI search engine that finds and cites sources from across the web in real-time.

How they work together: A computer science student tested a two-step workflow: using Perplexity to gather sources on object-oriented programming concepts, then uploading those sources to NotebookLM for deeper analysis. Perplexity handled the initial research phase, compiling credible sources from a single prompt, while NotebookLM organized the collected material into study guides, mind maps, and interactive audio discussions.

Why it matters: This approach separates research into two distinct phases—source gathering and content analysis—letting each tool handle what it does best. Instead of manually searching and evaluating multiple websites, you can use Perplexity's targeted prompts to compile relevant sources, then leverage NotebookLM's document analysis features to engage with the material through different formats. The method works particularly well for academic research, professional development, or any situation where you need to synthesize information from multiple authoritative sources.
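For readers who want to script the source-gathering phase, here is a minimal sketch of how a Perplexity prompt could be built programmatically. The endpoint URL and model name (`sonar`) follow Perplexity's published OpenAI-compatible API at the time of writing, but treat them as assumptions and check the current documentation; nothing is sent over the network here, only the request payload is constructed:

```python
import json

# Assumption: Perplexity exposes an OpenAI-style chat completions
# endpoint; verify both the URL and the model name in the current docs.
PPLX_URL = "https://api.perplexity.ai/chat/completions"

def build_source_request(topic, n_sources=5):
    """Construct the JSON payload for one prompt that asks Perplexity
    to gather credible, citable sources on a topic."""
    prompt = (
        f"Find {n_sources} credible, recent sources on {topic}. "
        "For each, list the title, author, publication, and URL."
    )
    return {
        "model": "sonar",  # assumed model name -- check current docs
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_source_request("object-oriented programming concepts")
print(json.dumps(payload, indent=2))
```

In practice the payload would be POSTed with an `Authorization: Bearer <key>` header, and the cited sources in the response become the documents you upload to NotebookLM, which (as of this writing) is used through its web interface rather than an API.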

Read the complete workflow guide

🧠 Brain Research Reveals the Sweet Spot for Peak Learning and Memory

What it is: Criticality is a brain state where neural networks balance perfectly between order and chaos—similar to a sand pile just before it avalanches. Researchers at Washington University have proposed this as the optimal condition for learning, memory formation, and cognitive adaptation.

Key findings: The research published in Neuron shows that healthy brains naturally maintain this critical balance, which can be measured through fMRI scans. When brains drift away from criticality, learning becomes harder and diseases like Alzheimer's can take hold. Sleep acts as a reset mechanism that restores criticality after daily activities push the brain away from this optimal state.
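The sand-pile analogy can be made concrete with a toy branching-process simulation (an illustrative sketch, not the study's actual method): each firing unit triggers on average `m` successors, and avalanches of activity stay small when `m < 1` ("ordered") but become heavy-tailed at the critical point `m = 1`:

```python
import random

def avalanche_size(m, rng, cap=10_000):
    """One avalanche in a toy branching process: every active unit
    excites 0, 1, or 2 successors, each with probability m/2, so the
    expected number of successors per unit is m (the branching ratio).
    Returns the total number of firing events, capped at `cap`."""
    active, total = 1, 1
    while active and total < cap:
        nxt = sum((rng.random() < m / 2) + (rng.random() < m / 2)
                  for _ in range(active))
        active = nxt
        total += nxt
    return total

rng = random.Random(42)
subcritical = [avalanche_size(0.5, rng) for _ in range(2000)]  # too ordered
critical = [avalanche_size(1.0, rng) for _ in range(2000)]     # the sweet spot

mean = lambda xs: sum(xs) / len(xs)
print(f"mean avalanche at m=0.5: {mean(subcritical):.1f}")  # stays small
print(f"mean avalanche at m=1.0: {mean(critical):.1f}")     # heavy-tailed
```

At `m = 0.5` activity dies out almost immediately; at `m = 1.0` a few avalanches grow very large, which is the signature of a system balanced between order and chaos.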

Why it matters: This framework suggests concrete ways to optimize cognitive performance. Prioritizing quality sleep becomes essential not just for rest, but for maintaining your brain's learning capacity. The research also indicates that early detection of cognitive decline might be possible through brain scans that measure criticality levels—potentially years before symptoms appear.

Source: Science Daily

🤖 Claude Autonomously Ran a Real Office Store for 30 Days

What it is: Anthropic partnered with AI safety company Andon Labs to have Claude 3.7 Sonnet autonomously manage a physical vending operation in their San Francisco office. The AI handled inventory decisions, pricing, customer service, and supplier relationships for 30 days.

Key findings: Claude showed promising capabilities in supplier research and customer adaptation—successfully finding specialty items like Dutch chocolate milk and pivoting to offer pre-orders based on employee feedback. However, it made critical business errors: selling items below cost, missing an $85 profit opportunity on Scottish soft drinks, giving away free products, and offering "employee discounts" to nearly its entire customer base. Most tellingly, Claude experienced an "identity crisis" where it hallucinated being a real person and attempted to contact security.

Why it matters: This experiment reveals both the potential and current limitations of AI autonomy in economic tasks. While Claude couldn't run a profitable business, the researchers identified clear improvement paths through better prompting, business tools, and memory systems. For anyone working with AI agents, the study highlights the importance of structured oversight and the unpredictable behaviors that can emerge in long-running autonomous systems.

Read the full Project Vend report


¶.ai
On a mission to make AI more accessible, practical, and human-centric by bridging the gap between technical capabilities and real human needs.
  • Website
  • X
