Upskilling, New-Skilling, and No-Skilling: What AI Is Really Doing to Your Workforce

A landmark study from researchers at Harvard, Wharton, and MIT tracked how consultants at BCG engage with generative AI — and found three distinct patterns with dramatically different consequences for individual growth.

The promise of generative AI in the workplace is seductive: faster output, smarter decisions, leaner teams. But the study — conducted inside Boston Consulting Group — reveals a more complicated truth. AI doesn’t uniformly make employees better. It amplifies how they work. And for a significant portion of the workforce, that’s not good news.

Centaur — Upskilling (14% of consultants)
Directed co-creation — precise, targeted queries
Domain expertise: High · AI fluency: Moderate · Output quality: Highest

Cyborg — New-skilling (60% of consultants)
Fused co-creation — constant conversational dialogue
Domain expertise: Low · AI fluency: High · Output quality: Good

Self-automator — No-skilling (27% of consultants)
Abdicated co-creation — full task offloading
Domain expertise: None · AI fluency: None · Output quality: Shallow

The Centaur — Directed Co-Creation

Named after the mythical half-human, half-horse hybrid, Centaurs divide cognitive labour deliberately. They bring human judgment to the front end — framing the problem, setting the strategy, deciding what questions to ask — and then deploy AI with precision to execute specific subtasks.

Think of a management consultant who personally maps a client’s competitive landscape, defines the analytical framework, and then uses AI to synthesise industry data across 40 reports in minutes. The human leads; the AI executes a bounded task. The result? Centaurs significantly increased their domain expertise and outperformed all other groups.

Real-World Example — Software Engineering

Senior architects write the system design themselves, then use AI tools like GitHub Copilot to generate boilerplate code for specific modules. They grow their architectural judgment; the AI saves keystrokes.

The Cyborg — Fused Co-Creation

Cyborgs are in constant dialogue with AI. They think out loud with it, refine ideas through back-and-forth exchanges, and treat it less like a tool and more like a thought partner. The line between their thinking and the AI’s output blurs — intentionally.

Cyborgs gained little domain expertise from this approach, but they built something else: genuine AI fluency. They learned how to steer models, write effective prompts, recognise hallucinations, and orchestrate AI for complex tasks. In a world where AI literacy is increasingly valued, that’s not nothing — it’s a new kind of professional capability.

“They built fluency with the tool rather than depth in their field — which raises a longer-term question about what happens when underlying domain expertise gradually erodes.”

Real-World Example — Marketing Strategy

A strategist opens a new brief and immediately starts a conversational thread with an AI: “I’m trying to position this product for Gen Z without it sounding performative — what tensions should I be aware of?” They iterate, push back, redirect, and synthesise. The final deliverable feels authored.

The Self-Automator — Abdicated Co-Creation

Self-automators hand the task over entirely. They write a prompt, wait for output, do light editing, and move on. The work looks finished. It often isn’t.

Self-automators saw no skill gains whatsoever. Their output was quick, but shallow and less persuasive than that of either Cyborgs or Centaurs. They traded growth for speed — and may not have noticed the bargain they made.

Real-World Example — Business Analysis

A junior analyst pastes a client brief into an AI tool and asks it to produce a competitive analysis. The output arrives in minutes — clean formatting, confident language, all the right headings. But when a partner pushes back in a client meeting, the analyst has nothing to offer. They never engaged with the material.

Why This Matters Beyond BCG

This isn’t a consulting industry problem. The same archetypes are emerging across every sector where knowledge work meets generative AI.

Journalism

Some reporters use AI to draft entire articles from press releases, while others use it to rapidly surface sources, then write every word themselves. The former may be producing more content; the latter are becoming better journalists.

Medicine

Clinicians who use AI to pre-draft clinical notes and then critically review and revise them are sharpening diagnostic thinking. Those who accept AI summaries of patient histories at face value — without interrogating the underlying data — are quietly eroding the habits that clinical judgment depends on.

Education

Students who use AI to explain a concept they don’t understand, ask follow-up questions, and test their own comprehension are studying more effectively than ever. Students who use it to write their essays are producing work they cannot reproduce, defend, or build upon.

The Organisational Implication

For managers and leaders, the BCG findings carry a pointed warning: measuring AI adoption by output misses the point entirely. Self-automators produce output. Plenty of it.

The better question is not “Are your people using AI?” but rather: How are they using it? Are they engaging with the hard parts, or offloading them? Are they building knowledge, or just building decks?

The difference between a Centaur and a Self-automator isn’t discipline or talent — it’s intention. Organisations that make that intention explicit, through how they train, evaluate, and reward AI use, will develop durable human capability alongside powerful technology. The rest will just have faster output and shallower people.

Research Source & Attribution

This article is based on original research conducted by academics from Harvard University, the Wharton School of the University of Pennsylvania, and the Massachusetts Institute of Technology (MIT), in partnership with Boston Consulting Group (BCG). The study tracked the use of generative AI tools among BCG consultants and identified three co-creation archetypes — Centaurs, Cyborgs, and Self-automators — along with their impact on skill development and output quality.

The concepts of directed co-creation, fused co-creation, and abdicated co-creation were coined by the study’s research team. All findings, statistics, and terminology are attributed to and remain the intellectual property of the original research institutions and authors.

This article represents independent editorial commentary and analysis. It does not reproduce the full study. Readers are encouraged to seek out the primary research publication for complete methodology and findings.
