Co-Intelligence by Ethan Mollick
Ethan Mollick argues that jobs are bundles of tasks embedded in organizational systems and purpose, not monolithic entities. He emphasizes that AI excels at specific tasks (often unexpectedly), fails at others, and thus creates a “jagged frontier” between what AI can and cannot do. Mollick’s Co-Intelligence framework urges companies to decompose jobs into tasks, analyze which tasks AI can automate or augment, and align any changes with the organization’s broader purpose. He warns that optimizing only for productivity leads to a “sea of PowerPoints” – AI-generated volume without meaningful impact – unless organizations rethink why and for whom the work is done.
Scholars from economics and management broadly agree on a task-based view of AI’s impact. Acemoglu and Restrepo’s work models automation at the task level (with new tasks emerging) rather than wholesale job loss. Recent evidence shows AI primarily augments human labor, boosting productivity without widespread layoffs (e.g. as of 2025, AI adoption raised output but did not significantly cut overall employment). Studies of AI in firms highlight that uneven AI performance (the jagged frontier) means good task-assignment and oversight are crucial. For example, a Boston Consulting Group/HBS experiment found GPT-4 improved outcomes on tasks within its capability but hurt performance on slightly harder tasks, due to overreliance on AI outputs. The lesson is that human judgment, verification protocols, and training are needed whenever AI is deployed.
The “will AI take jobs?” debate centers on this nuance. Pessimists (e.g. Daron Acemoglu or AGI-believers) warn of massive labor displacement, while optimists (Sam Altman, Oren Etzioni, etc.) note that historical technology revolutions changed work rather than eliminating it. Today’s consensus is that AI will transform most jobs: many routine tasks will be automated or shifted to AI (especially digital, language-based tasks), but new tasks and roles will emerge. For instance, Mollick notes that GPT-4 helped researchers reproduce 12 Cochrane systematic reviews in two days, an effort estimated at roughly 12 person-years if done by hand. Yet the final few steps (retrieving documents, contacting authors) still required human effort, illustrating the jagged frontier in action. Across industries we see a similar pattern: AI as co-pilot. Microsoft’s CEO notes that 20–30% of code in some projects is now AI-generated; media companies use AI to generate draft headlines or creative content, with humans providing the “special sauce” of creativity and oversight.
Implications for Skills: The shift reinforces the importance of human-centric skills. Critical thinking, creativity, and ethical judgment become more valuable as tasks that machines can’t do (or can do inconsistently) dominate. Collaboration and communication skills grow in importance when humans oversee or guide AI tools. Digital and AI literacy are needed so workers can design workflows and verify AI outputs. As one synthesis put it, leaders should ask not “how do we increase productivity?” but “productivity for what purpose?” – an alignment that requires strategic and humanistic skills. We map the literature’s findings onto an assumed NorthStar Skills Academy critical-skills framework (analytical thinking, creativity, digital literacy, collaboration, adaptability, etc.), noting that AI changes the mix of skills needed even as humans remain central.
Conclusion: Mollick’s task-level lens and the “jagged frontier” notion are echoed widely. The broad lesson is that AI will take some tasks but not whole jobs; organizations must redesign work by decomposing roles, allocating tasks between humans and machines, and ensuring the overall work serves the organization’s purpose. This nuanced view suggests future workforces will emphasize uniquely human skills — creativity, empathy, oversight, and strategic thinking — while collaborating with AI on tasks best suited to the machines.
Mollick’s Key Ideas: Tasks, Systems, and the Jagged Frontier
Mollick’s work (book Co-Intelligence and related writings) frames a job as layers: at the bottom are tasks; these tasks are embedded in systems and processes; and all exist within the context of an organizational purpose or mission. He insists that AI must be understood at the task level, not by treating each job as a monolith. “Jobs are composed of bundles of tasks. Jobs fit into larger systems,” he writes. In practice, a “Professor” job splits into tasks like lecturing, research, paperwork, writing recommendations, etc. AI can automate some of these (e.g. draft text, basic analysis) but others (like setting research agenda or nuanced teaching) remain human responsibilities.
Tasks connect to systems – rules, institutions, and technology – that shape whether AI substitution is possible. Mollick uses the example of tenure in academia: even if an AI could teach better, tenure and students’ preferences form a system that blocks full automation. Similarly, tasks that require human trust or compliance (medical advice, legal judgments) sit in systems of accountability that slow automation. Above systems is the organizational purpose: if a company’s goal is “better patient outcomes,” as one executive put it, then leaders must align AI-driven efficiencies with that purpose. Deploying AI to maximize generic metrics (e.g. lines of code, number of reports) without regard to purpose risks producing only trivial or even harmful output – the “sea of PowerPoints” Mollick warns against. As one corporate CHRO put it on a panel, organizations should re-sort work so that each person focuses on tasks “on-purpose” for both the individual and the organization – using AI, or reassigning other tasks, to achieve that alignment.
Central to Mollick’s analysis is the “jagged technological frontier.” In a paper with researchers from Harvard, MIT, and BCG, he showed that AI’s capabilities are extremely uneven: it “excels at some stuff you wouldn’t expect and [is] bad at some stuff you wouldn’t expect”. For example, GPT-4 can solve difficult legal and mathematical problems at award-winning levels, yet still hallucinates factual details in seemingly simpler cases. Unlike past technologies (where progress was relatively smooth), AI’s capability profile has sharp peaks of super-human performance and valleys of glaring errors. Mollick emphasizes that this jaggedness is persistent: even as AI improves overall, the shape of its strengths and weaknesses keeps changing. Practically, this means no off-the-shelf solution can replace a job; intelligent work design is required to match AI to tasks within its competency.
This insight is captured by the term “jagged frontier.” The ICLE review summarizes that concept: AI “exhibits uneven capabilities across tasks that appear similar in difficulty,” improving productivity on some complex tasks while making errors on others that seem straightforward. In a BCG field trial, tasks carefully selected to fit the AI’s strengths saw big gains, but tasks just outside those strengths saw performance drop (due to unnoticed AI mistakes). In Mollick’s words, “the ability to make a short video clip…does not equal performance gains” unless an organization re-thinks workflows and possibly the purpose of tasks. In sum, Mollick argues that organizations must decide which tasks to automate, which to augment, and which to scrap, always in service of their goals.
Other Perspectives on Task Decomposition and Job Impact
Task-based models: Economists have long modeled automation in a task-centric way. Acemoglu & Restrepo (2019) pioneered the view that technology displaces tasks, not entire occupations. New tasks also emerge (a “reinstatement” effect) as humans shift to what machines can’t do. This contrasts with early fears of wholesale job loss; instead it predicts a dynamic reallocation of labor. Recent empirical work aligns with this: surveys in 2024–25 find a substantial share of workers using AI tools daily, boosting productivity, but with no significant drop in total jobs so far. Instead, early evidence shows young or junior roles being most affected (new hires doing routine tasks are often replaced by AI), while senior roles remain relatively stable. These findings echo the task-based theory: routine, entry-level tasks get automated first, altering career ladders and skill demands, but not immediately producing mass unemployment.
Scholarly reviews: The ICLE review of AI’s labor market effects concludes that so far AI is mainly augmenting workers rather than automating them out of jobs. It notes that AI exposure tends to create within-firm productivity gains (making all workers better) and shifts tasks rather than eliminating roles. A large Danish study found roughly 30–65% of workers in various fields using AI tools, believing it halved their work time on many tasks. Controlled experiments (in coding, law, customer service, translation, etc.) consistently show large output boosts (often 20–50% improvements) from AI assistance, especially for lower-performing workers. In most cases, human oversight was key: accountants used AI to do routine data entry but focused on judgment and client communication; legal students using AI drafts still needed to verify citations. The overarching conclusion is that the economic effects of AI depend on how tasks are chosen, assigned, and managed.
Journalistic and industry views: Many articles echo Mollick’s framing. For instance, a digital strategy newsletter summarizes Mollick’s advice: approach a job “top-down…decompose [it] into tasks, then into use cases with the greatest potential for AI automation or enrichment”. Consulting firms likewise recommend analyzing each role’s task list: an Accenture report urges business leaders to “reinvent work…starting now, in job redesign, task redesign and reskilling,” noting that “every role…has the potential to be reinvented once…jobs are decomposed into tasks”. This organizational focus aligns with Mollick’s call for leadership: AI adoption must be driven top-down, with clear vision and incentives to ensure the technology serves meaningful ends, not just short-term efficiency.
The Jobs Debate: Media coverage highlights the split between extreme forecasts and incremental views. High-profile figures warn that future AGI could substitute for all human labor within years, leading to societal upheaval (e.g. economist Korinek in 2025). Others – including Sam Altman and AI researchers – counter that such doom scenarios have historically failed to materialize: “In every technological revolution, people predict the end of jobs, and it never happens,” Altman noted at Harvard, stressing that “jobs will change”. Similarly, AI leaders emphasize augmentation: Altman and the Allen Institute’s Etzioni argue AI will make professionals “better and faster,” even in fields like creative writing and medicine. Early data supports this middle ground: large firms report 40–50% of employees using AI tools by early 2025, with productivity rising but no detectable wage or employment collapse yet. Even Microsoft’s Satya Nadella points out that AI is changing roles, not ending them: roughly a quarter of some development work is AI-generated code, but humans still set strategy and final design.
Implications of the Jagged Frontier: A recurring theme is that uneven AI capability requires new workplace practices. The law/econ review argues that firms must invest in internal governance – task selection, worker training, verification protocols – rather than just buying blanket AI solutions. Similarly, Mollick and others stress the need for human-in-the-loop processes: AI outputs must be checked against the “best available human standard,” and humans must remain engaged lest they “fall asleep at the wheel” when AI is good enough. This implies an upskilling requirement: employees need to develop AI literacy (to design prompts and spot errors), as well as perennial skills like critical thinking, domain expertise, and ethical judgment.
Industry Examples and Case Studies
Medical research (Systematic Reviews): In a striking example, researchers used GPT-4.1 to replicate 12 Cochrane systematic reviews in 2 days, a task that would take 12 work-years by hand. The AI searched 146,000 citations, read papers, extracted data, and did statistics – even outperforming humans on accuracy. However, the final 0.1% of the task (retrieving supplementary files, emailing study authors) still needed human effort. This shows AI handling heavy lifting in research, but humans still managing edge tasks and quality control.
Education (University Professors): Mollick uses academia as an illustration: professors do diverse tasks (teaching, research, admin). AI can automate “mundane tasks” (grading, paperwork), freeing faculty for uniquely human work like mentoring or designing new courses. But any shift must consider academic systems (tenure, accreditation) that make full automation unlikely. For example, even if an AI could lecture, would students accept it? Mollick notes that faculty jobs wouldn’t vanish simply by removing tasks – much as spreadsheets sped up accounting without eliminating accountants.
Software Development: Reports show AI tools (like GitHub Copilot) boosting coding speed dramatically, especially for junior programmers. One Silicon Valley exec found 20–30% of some code written by AI. Yet teams reorganize: some companies disperse coders into cross-functional squads (“vibeworking”) where AI-assisted prototyping accelerates projects, but coders still define architecture and handle integration. Skill needs shift toward higher-order design, debugging AI suggestions, and inter-team communication.
Marketing & Creative Industries: Advertising and content agencies report using AI for ideation and draft creation. For instance, WPP executives describe AI tools generating headline drafts across marketing funnels in seconds – “not that great, but pretty good” drafts that human creatives then refine. The key distinction is that AI is not “creating” content in isolation; humans are using AI as a tool. Agencies are focusing on the “special sauce” – storytelling, brand strategy, client relationships – tasks they believe will become more valuable as routine creative work is accelerated by AI.
Customer Service & Knowledge Work: Field experiments find that AI chatbots and assistants can resolve routine customer inquiries or code debugging tasks much faster, raising baseline output. For example, a trial with customer support staff saw a 15% overall productivity gain (36% for the lowest-skilled workers) when agents had AI tools. However, the same studies note that high-quality outcomes require human supervision of the AI. In these cases, the human’s role shifts from doing rote work to reviewing AI suggestions and focusing on complex customer needs.
Skills Implications
Our analysis suggests:
Analytical/Digital Literacy: As tasks become modular, workers need the ability to analyze workflows and data. Mollick and others stress that understanding AI’s capabilities and limits is crucial (e.g. knowing when a model might “hallucinate”). Skills in data interpretation, AI tool design, and system thinking let teams decompose jobs intelligently and monitor AI outputs.
Creativity and Innovation: With AI handling many routine or language tasks, uniquely human creativity is a key differentiator. The marketing and ad examples show agencies betting on human creativity to add value beyond AI-generated drafts. Mollick likewise notes that freeing humans from mundane tasks should enable more creative, high-level work. Thus, creative thinking — generating original ideas, storytelling, design intuition — remains a critical skill.
Critical Thinking and Judgment: The “jagged frontier” requires workers to judge what AI can safely do. The literature repeatedly emphasizes oversight: humans must evaluate AI suggestions for errors or bias. This demands strong critical-thinking, skepticism, and domain knowledge. For example, an AI may draft a legal memo or diagnosis, but a lawyer or doctor must catch the AI’s mistakes. Workers also need ethical reasoning (ensuring AI is used responsibly) and problem-solving skills to reassign or redesign tasks as capabilities evolve.
Collaboration and Communication: With AI in the loop, interdisciplinary teamwork is vital. Brookings and others describe “disaggregated work” where tasks are split among people, AI, and even external contractors. Coordinating these requires collaborative skills — cross-functional communication, project management, and the ability to integrate AI outputs into human workflows. Mollick’s “Leadership, Lab, Crowd” framework highlights engaging employees (“the crowd”) in AI adoption, which relies on communication and collaborative culture.
Adaptability and Learning: The speed of AI change means workers must continually update their skills. The case studies show rapid tool adoption (e.g. widespread GPT use by 2025). Success will go to those who can learn new tools and pivot their task focus. Mollick’s advice to “work with employees…to determine the best way to use AI” implies nurturing a growth mindset and willingness to experiment. Moreover, as the jagged frontier shifts (e.g. AI suddenly aces math or new languages), the workforce must adapt training and processes on the fly.
Leadership and Strategic Vision: Finally, leaders must define purpose. The Valence interview stresses connecting AI use to organizational mission: AI projects should answer “productivity for what?”. Crafting that vision requires strategic thinking, ethical foresight, and change management. Skills in guiding transformation (designing new job roles, reskilling programs) are highlighted by Mollick’s emphasis on leadership and lab experimentation.
These mappings illustrate that AI shifts the skill emphasis but doesn’t eliminate core human strengths. Critical, creative, and interpersonal skills gain even more weight as machines absorb the routine elements of work.

