The behavioural patterns of human–AI collaboration

Exploring how AI changes the way we think, decide, and work

I'm a behavioural scientist with 15 years of experience in decision-making and behaviour change. I've become increasingly interested in what happens when human psychology meets AI systems: how AI is changing the way we think, interact, and make sense of the world around us.

More about how I got into this topic

When I started working extensively with GenAI tools, I noticed they were influencing my cognitive patterns. They altered how I framed questions, tested ideas and navigated ambiguity. As someone trained in behavioural science and cross-cultural psychology, I wanted to unpack what was going on.

Working with these systems exposed my beliefs about control, trust, and intelligence. I realised that the way people interact with AI often mirrors how they relate to other minds and how they see the world. Do they treat AI as a partner or an instrument? Are they focused on control or exploration? These patterns are both individual and cultural, revealing deeper values around trust, authority and autonomy.

AI serves as a mirror for human values and worldviews. How someone uses AI tells you whether they see the world through a lens of control or partnership, whether they value efficiency over exploration, whether they're comfortable with uncertainty. So one of the big questions I want to explore is: what does the way people work with AI reveal about human psychology and social systems?

Cultural differences shape how people collaborate with AI. I'm particularly interested in exploring this across different contexts. Why do some people anthropomorphise these systems while others treat them as tools? How do cultural backgrounds influence trust-building? What does resistance reveal about underlying views of control?

I work with AI while studying it. You can't interpret the mirror from a distance. Experiencing these systems directly helps surface the habits and assumptions we bring to them.

Artificial Thought is for you if:

  • You use AI tools and wonder how they are shaping your thinking

  • You're curious about what human–AI collaboration reveals about cognition, values and behaviour

  • You’re interested in exploring how cultural differences influence trust, control and interaction with AI systems

And if you want to chat about any of this, you can contact me at elina@prismaticstrategy.com or find me on LinkedIn.


Note: Although much of the research I analyse highlights the potential benefits of human–AI collaboration, that doesn't mean I endorse these systems uncritically or ignore their risks. The problems are real and well documented, but the critical conversation is already loud. What's often missing is a psychological account of how these systems shape cognition in practice.

More of my thoughts on the dark side of GenAI


Podcast available on:

Amazon Music | Spotify | Castbox | Pocketcasts (Apple coming soon)


Subscribe to Artificial Thought

