Artificial Thought Podcast
Ep. 14: Extending minds with generative AI

Generative AI is changing how people use external tools to support thinking, decision-making, and creativity.

Much of the public conversation around AI centres on its outputs: what it can generate, how well it performs, what tasks it might take over. Those questions often obscure a more foundational shift: AI systems are becoming embedded in how people think - not just as occasional tools, but as part of the cognitive process itself.

A recent paper by Andy Clark (Nature Communications, May 2025) situates this shift within a broader cognitive history. Clark is best known for the “extended mind” hypothesis, which argues that human thinking routinely spans brain, body, and environment. In this article, he applies that lens to generative AI, treating it not as a foreign agent but as a new layer in an already distributed system.

Key points:

  • Human cognition has always relied on external tools; generative AI continues this pattern of extension.

  • The impact of AI depends on how it is integrated into the thinking process - not just on what it can produce.

  • Clark introduces the idea of “extended cognitive hygiene” - a new skillset for navigating AI-supported reasoning.

Human cognition is already externalised

Clark’s starting point is that humans have always used external tools to think. Fingers, gestures, sketches, and digital notes are not just supports for cognition - they are part of it. From this view, the brain is not a container of thought but one node in a larger cognitive loop.

Generative AI, then, isn’t a break from the past. It’s an intensification of a long-standing pattern: using the environment to offload, organise, and extend cognitive labour. What makes AI distinctive is the speed and fluency with which it generates content that feels interpretively complete.

Integration depends on interaction

The paper argues that the cognitive consequences of generative AI depend less on the technology itself and more on how it is integrated. Tools that generate suggestions, complete text, or simulate reasoning become influential by shaping what the user pays attention to, what they revise, and what they leave unquestioned.

Clark gives the example of “Digital Andy,” a personalised language model fine-tuned on his own work. Its responses begin to feel like a continuation of his own voice because they operate within the same rhythm of reasoning. This kind of interaction starts to blur the line between internal thought and external suggestion.

Cognitive hygiene for hybrid systems

As AI systems become more personalised and context-aware, Clark introduces the idea of “extended cognitive hygiene.” This refers to the metacognitive skills required to navigate hybrid thinking environments - knowing when to accept a suggestion, when to question it, and when to step outside the frame it creates.

This isn’t just a technical skill. It’s a form of epistemic awareness: recognising that what feels intuitive or fluent might be shaped by the defaults of the system. Without this awareness, there's a risk that convenience substitutes for reflection, and that fluency begins to feel like truth.

Implications for design and practice

The core implication is that design decisions about how and when AI participates shape cognition over time. A system that suggests before the user thinks will produce different effects than one that responds to an already-formed idea. These patterns accumulate. Users begin to relate to the tool not just as a support, but as part of how they think.

Clark’s argument doesn’t rest on whether AI is good or bad for cognition. Instead, he invites us to notice what kind of cognitive ecosystem is emerging around these tools and what habits are forming in response. The work ahead isn’t to reclaim a “pure” human mind, but to design better interactions between minds and their supports.

Until next time.


Source: Clark, A. (2025). Extending minds with generative AI. Nature Communications, 16(1), 1-4. (Open Access)
