When we talk about AI collaboration, the question is usually binary: was AI used or not? This framing misses something crucial about how humans actually experience working with generative systems. What matters is not just whether AI was involved but when and how it participated in the creative process.
A recent paper suggests that the order of operations fundamentally shapes how people perceive their own contribution, the quality of their work, and how they expect others to judge it. These cognitive patterns emerge from role assignment alone and create predictable tensions between internal confidence and external credibility.
The choreography of contribution
The study examined two distinct AI roles across creative (narrative writing) and functional (summary writing) tasks:
Driver role: AI generates initial output, human edits and refines
Advisor role: Human creates initial work, AI provides feedback and suggestions
This seemingly simple distinction creates cascading effects on perception and judgment. When AI took the driver role, participants consistently reported that the AI did more of the work, regardless of how much editing they actually performed. The initial generation sets a cognitive anchor, and everything that follows feels like refinement rather than creation.
Conversely, when AI serves as advisor, participants rated the final output as higher quality. Starting from a human base appears to preserve contextual nuance and voice while allowing AI to optimize for clarity or structure. This order of operations maintains a sense of ownership while enhancing results.
The creativity paradox
The study uncovered an interesting paradox at the heart of AI-human collaboration: the very improvements that boost a user’s internal sense of accomplishment may undermine their confidence in how the work will be received externally.
When participants believed AI had improved their work's quality, they were more likely to see their output as creative. Perceived enhancement through better style, structure, or clarity led people to re-evaluate their work positively, even when the novel ideas were originally their own. Iterative refinement reinforced feelings of originality and inventiveness.
Curiously, these same participants also believed others would devalue their work if they knew AI had contributed. The stigma effect is real: AI-enhanced quality becomes a liability in terms of external recognition. Work may be seen as "too polished" or lacking in human authenticity.
The effort attribution problem
The relationship between perceived AI effort and creativity reveals another layer of complexity. In creative tasks specifically, when participants believed AI had done more of the work, they rated their output as less creative. This points to a psychological link between effort and ownership: when AI dominates, humans feel like editors rather than creators.
Yet paradoxically, participants who perceived high AI effort also anticipated higher external valuation if AI involvement was disclosed. This reverses conventional wisdom about effort justification. Instead of human effort increasing perceived value, AI effort signals innovation, technological competence, or strategic tool use.
The downstream effects of role assignment depend heavily on task type. In creative domains like narrative writing, who originates ideas matters deeply, both for individual authorship and anticipated external evaluation. The effects on internal creativity and expected external value were pronounced.
In functional tasks like summary writing, role assignment still affected perceptions of quality and AI effort, but these perceptions had minimal impact on creativity or anticipated value. Attributional concerns were muted. How well something works mattered more than how it was made.
These findings reveal something fundamental about how value gets constructed in AI-human collaboration. Perceptions of effort and quality are filtered through social assumptions about tools, originality, and legitimacy. The disclosure of AI involvement isn't neutral either, because transparency may boost credibility in some contexts while undermining authorship in others. There's no universal "right" way to handle AI attribution because the social meaning of AI assistance is still being negotiated.
Implications for design and practice
These patterns have immediate implications for how we structure AI collaboration:
Workflow design shapes perception: It's not just what AI does, but when and how it participates that influences human judgment. The same AI capabilities can feel empowering or diminishing depending on their sequence.
Creativity involves process, not just content: Task structure and contribution order influence felt ownership and perceived authenticity. Designers need to consider the psychological experience of collaboration, not just its functional outcomes.
Value signals are socially negotiated: What counts as "good work" depends on assumptions about effort, originality, and tool use that are still evolving. Systems that ignore these social dynamics risk creating value that feels hollow.
From a behavioral perspective, these perception patterns matter because they become habitual. Instead of just using AI tools, many people develop relationships with them based on repeated experience of contribution and credit. If someone consistently experiences AI as doing the "real work" while they provide polish, they might internalize that dynamic.
The cues are subtle but powerful: the blank page versus the draft to edit, the suggestion versus the generation, the feeling of starting versus the feeling of finishing. These experiential differences compound over time, shaping not just what people think about their work, but what they expect from themselves and their tools.
For those designing AI collaboration tools, these findings raise several questions worth sitting with:
How does your interface cue who's in the driver's seat?
What assumptions about effort and creativity are embedded in your workflow defaults?
How do you handle the tension between internal confidence and external credibility?
What social scripts about AI collaboration are you reinforcing through design choices?
The answers aren't straightforward, but the patterns are clear: the order of operations matters more than we thought, and the psychology of collaboration is more complex than our tools currently acknowledge. What we're witnessing is the emergence of new social norms around creative contribution. The question isn't whether these norms are right or wrong, but whether we're being intentional about the ones we're creating.
Source: Schecter, A., & Richardson, B. (2025, April). How the Role of Generative AI Shapes Perceptions of Value in Human-AI Collaborative Work. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-15). (open access)