The new unit of productivity is not a person

Justin Tannenbaum

For over a hundred years, productivity has meant one thing: how much can one person do in one hour?

That framing shaped how we built companies, how we hired, how we decided if things were working. It was baked into management theory, labor economics, every SaaS pricing model you've ever seen.

I think it's breaking down. Bret Taylor, former co-CEO of Salesforce and current chairman of OpenAI's board, said something on the Cheeky Pint podcast that I keep coming back to: "The atomic unit of productivity in AI is a process, not a person."

I've been thinking about that sentence a lot. Here's why.

How we got here

To get why Taylor's framing matters, it helps to see what it's replacing.

The stopwatch era

In 1898, Frederick Winslow Taylor stood on the floor of Bethlehem Steel with a stopwatch and timed a man named Henry Noll loading pig iron. Under the old method, workers loaded 12.5 tons per day. Taylor broke the job into precise motions, dictating when to pick up, when to walk, when to rest, and got Noll to 47.5 tons. Nearly 4x.

Noll's pay went from $1.15 to $1.85. A 60% raise for 280% more output. Taylor called this "scientific management." Workers called it something else. A 1911 strike at the Watertown Arsenal was triggered by the use of stopwatches. Congress got involved.

But the idea stuck. Productivity equals motions per unit of time. Speed up the human.

Frank Gilbreth, a former bricklayer's apprentice, took it further. He analyzed bricklaying motions and increased output from 1,000 to 2,700 bricks per day. He invented 18 fundamental units of motion called "therbligs" (Gilbreth spelled roughly backwards) and built a camera that could record time to 1/2000th of a minute.

The unit of productivity was the human body. The instrument of measurement was the clock.

The assembly line

Henry Ford changed the denominator. Before the moving assembly line, a Model T took 12 hours and 30 minutes to build. By early 1914, one rolled off every 93 minutes. At peak production, every 24 seconds.

Ford moved the metric from individual output to system throughput. Units per hour. The factory as a machine.

But workers hated the monotony. By late 1913, labor turnover hit 380%. To keep 100 workers on the floor, Ford had to hire 963. His famous $5/day wage, more than double the previous rate, wasn't generosity. It was survival. Turnover collapsed. Profits doubled within two years.

Ford produced over 15 million Model Ts and the price dropped from $850 to $260. The productivity gains were real. The unit of measurement was still time, just applied to the system instead of the person.

The knowledge worker problem

Then Peter Drucker broke everything.

In 1959, he coined "knowledge worker" and spent four decades wrestling with a question we still haven't answered: how do you measure the productivity of someone who thinks for a living?

His take was humbling. In 1999, Drucker wrote: "In terms of actual work on knowledge worker productivity, we are roughly where we were in the year 1900 in terms of the productivity of the manual worker."

A century of management science, and we still couldn't tell if a knowledge worker had a productive day.

Drucker argued knowledge work required measuring quality and effectiveness, not quantity and efficiency. Organizations mostly didn't listen. They defaulted to the only proxy they had: visible busyness. Cal Newport later called this "pseudo-productivity," the use of visible activity as the primary means of approximating actual productive effort.

We went from counting bricks to counting emails. That's not progress.

The software measurement wars

The software industry tried harder than most to solve this. It mostly failed.

Lines of code became the metric in the 1980s. Bill Gates killed it: "Measuring programming progress by lines of code is like measuring aircraft building progress by weight."

My favorite story: in 1982, Apple engineer Bill Atkinson optimized QuickDraw's region calculation by removing 2,000 lines of code. His managers tracked LOC per developer. He reported -2,000 for the week. Management abandoned LOC tracking.

Then came story points, invented by Kent Beck and Ron Jeffries as part of Extreme Programming. Teams estimated effort in abstract "points" divorced from calendar time. It was supposed to be a planning tool. It became a performance metric. Jeffries later wrote: "I like to say that I may have invented story points, and if I did, I'm sorry now."

DORA metrics (2014) shifted focus to system-level delivery: deployment frequency, lead time, change failure rate, mean time to recovery. Better. But still measuring the machinery of output, not the value of what came out.

McKinsey kicked the hornet's nest in 2023 with "Yes, you can measure software developer productivity." Kent Beck's response: "The report is so absurd and naive that it makes no sense to critique it in detail... What they published damages people I care about."

We've been fighting about how to measure knowledge work for 65 years. Never solved it. And now I think the question itself is becoming obsolete.

The productivity paradox, again

There's an uncomfortable pattern here.

In 1987, Robert Solow wrote the most quoted line in productivity economics: "You can see the computer age everywhere but in the productivity statistics."

US computing capacity had increased 100x through the 1970s and 80s. Labor productivity growth had fallen by more than half, from 2.9% annually (1948-1973) to 1.1% (1973-1995). Computers were everywhere. The numbers didn't move.

The paradox eventually "resolved" in the late 1990s when IT investment pushed productivity back to 2.8%. But it took decades. Electrification showed the same lag. Invented in the 1880s, peak productivity impact didn't land until the 1920s. Manufacturing productivity hit 5% annual growth that decade, accounting for 84% of total national productivity gains. The lesson: transformative technologies take a generation to show up in the macro data because organizations need to restructure around them.

It's happening again with AI. Apollo's chief economist Torsten Slok said in early 2026: "AI is everywhere except in the incoming macroeconomic data." A February 2026 NBER-backed survey of about 6,000 executives found roughly 90% said AI has had no impact on productivity or employment at their business. Goldman Sachs found "no meaningful relationship between AI and productivity at the economy-wide level."

This isn't because AI doesn't work. It's because we're measuring the wrong thing.

The process, not the person

This is what makes Taylor's framing useful.

AI doesn't show up in traditional productivity statistics because those statistics measure output per person per hour. AI doesn't work in person-hours. It works in processes.

Taylor on the Cheeky Pint podcast: "I think part of the reason it's been slow to get the productivity enhancement is that we ship our org charts as companies naturally. There's usually not a person responsible for that process. There's the legal team for the contract. There's the procurement team."

Companies are organized around people and departments. Work flows across them in processes that nobody owns end-to-end. When you measure AI's impact per-person, you miss the point. The gains come from collapsing entire processes, not from making individual people 15% faster at their existing job.

Taylor goes further: "I think we will end up reimagining our companies with the benefit of AI. Will we actually think of our companies as a collection of processes, have people responsible for the KPIs who can apply AI?"

That restructuring took 30 years with electrification and 20 years with computing. The technology arrives. The org chart takes a while to catch up.

The economics of a token

If the process is the new unit of productivity, the token is what powers it.

A token is the fundamental unit that large language models process, roughly three-quarters of a word. Every API call is metered in tokens. And the cost curve is wild.

Stanford's AI Index found that achieving GPT-3.5-level performance cost $20.00 per million tokens in November 2022. By October 2024: $0.07. That's a roughly 280x reduction in under two years.

Sam Altman's framing: "The cost to use a given level of AI falls about 10x every 12 months." Then the comparison that stuck with me: "Moore's law changed the world at 2x every 18 months; this is unbelievably stronger."

Epoch AI found the median price decline was around 50x per year overall, accelerating to roughly 200x per year for post-January 2024 data.
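
Those rates are quoted over different time windows, so here's a quick back-of-the-envelope annualization to put them on a common footing. The compounding math is mine, and the 23-month window is just November 2022 to October 2024 read off the Stanford figures.

```python
# Annualize the quoted decline rates so they're comparable.
# The factors and windows come from the claims above; the compounding is mine.

def annualized(factor: float, months: float) -> float:
    """Convert an 'X-fold change over N months' into a per-year rate."""
    return factor ** (12 / months)

moores_law = annualized(2, 18)     # "2x every 18 months"
altman     = annualized(10, 12)    # "10x every 12 months"
stanford   = annualized(286, 23)   # $20 -> $0.07, Nov 2022 to Oct 2024 (~23 months)

print(f"Moore's law:       {moores_law:.1f}x per year")
print(f"Altman's rule:     {altman:.1f}x per year")
print(f"GPT-3.5 pricing:   {stanford:.1f}x per year")
# Roughly 1.6x, 10x, and ~19x per year respectively. Epoch's 50x-200x
# figures are already annual. Moore's law is the slowest curve here by
# an order of magnitude, which is the point Altman is making.
```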

To make it concrete: a human customer service agent costs $2.70-$5.60 per interaction. An AI agent handles the same thing for about $0.40. An insurance claim that took 10 days now takes 36 hours. A mid-level analyst costs $50-80 per hour fully loaded. Continuous AI inference at 100 tokens per second costs $1.44 per hour, below any US minimum wage.
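
The $1.44-per-hour figure falls straight out of the token arithmetic. Here's a rough sketch of it; the $4-per-million-token price is my assumption (it's the rate that reproduces the quoted number), not a figure from any particular provider.

```python
# Back-of-the-envelope cost of running an AI process continuously,
# priced per token. The $4/M token rate is an assumed price point.

TOKENS_PER_SECOND = 100            # sustained generation rate from the example above
PRICE_PER_MILLION_TOKENS = 4.00    # USD; assumption, roughly mid-tier model pricing

tokens_per_hour = TOKENS_PER_SECOND * 3600                       # 360,000 tokens
cost_per_hour = tokens_per_hour / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"{tokens_per_hour:,} tokens/hour -> ${cost_per_hour:.2f}/hour")
# 360,000 tokens/hour -> $1.44/hour, versus $50-80/hour for a mid-level analyst.
```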

Jensen Huang at NVIDIA: "For the first time, we're producing something entirely new at extremely high volume, tokens. These tokens have value because they represent artificial intelligence." He calls data centers "AI factories" that do "one thing every single day: producing tokens."

But here's the thing. Taylor explicitly rejects the idea that tokens equal value. "I don't think token usage or utilization and value... there's not a strong correlation." Sierra, his company, charges per resolved customer issue, not per token consumed. If it takes fewer tokens to resolve the issue, that's Sierra's margin improvement, not a price cut for the customer.

Tokens are the input cost of running a process. The process, and its outcome, is what has value. You don't measure a factory's productivity in kilowatt-hours. You shouldn't measure an AI process in tokens either.

The business model shift

This is already playing out in how companies price and sell.

Sierra hit $100 million in annual recurring revenue in 21 months. Their model: when the AI agent resolves a customer issue, there's a pre-negotiated rate. If it escalates to a human, it's free. Taylor's line: "Salespeople get paid a sales commission. Why not the AI as well?"

Satya Nadella called SaaS dead on the BG2 podcast in late 2024. His argument: business applications are fundamentally CRUD databases whose logic will migrate to the AI tier. Pricing shifts from per-user seats to per-agent, per-outcome. IDC predicts by 2028, 70% of software vendors will refactor pricing around consumption or outcomes.

Salesforce's Q4 FY26 earnings tell the story in numbers: 19 trillion tokens processed, 2.4 billion "Agentic Work Units" delivered, a 50/50 revenue split between seat-based and consumption-based pricing. The transition is already underway.

The SaaS industry, $600 billion built on per-seat pricing, assumed the user was always a person. If the user is an agent, what is a "seat"?
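
To make the contrast concrete, here's a toy sketch of the two billing models side by side. Every number in it is made up for illustration; the structure is the point.

```python
# Toy comparison of per-seat vs per-outcome billing for a support workload.
# All figures are illustrative, not anyone's actual pricing.

SEATS = 50
PRICE_PER_SEAT_MONTH = 150.00      # classic SaaS: pay per human user

RESOLUTIONS = 40_000               # issues the AI agent resolves in a month
PRICE_PER_RESOLUTION = 0.99        # hypothetical pre-negotiated outcome rate
ESCALATIONS = 8_000                # handed off to a human, billed at $0

per_seat_bill = SEATS * PRICE_PER_SEAT_MONTH
per_outcome_bill = RESOLUTIONS * PRICE_PER_RESOLUTION + ESCALATIONS * 0

print(f"Per-seat:    ${per_seat_bill:,.2f}/month")
print(f"Per-outcome: ${per_outcome_bill:,.2f}/month")
# The per-outcome bill scales with resolved work, not with headcount,
# which is why 'seat' stops being a meaningful unit once the user is an agent.
```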

So what

McKinsey says generative AI could add $2.6 to $4.4 trillion annually to the global economy. Goldman projects a 7% lift to global GDP over a decade. But those numbers assume companies reorganize around processes, not just hand AI tools to existing employees in existing org structures.

That's the hard part. The technology is there. The economics are there; a cost curve falling 200x a year is not the bottleneck. What's missing is the organizational imagination.

Drucker spent 40 years trying to figure out how to measure knowledge worker productivity. He never cracked it. Maybe the answer isn't better measurement of people. Maybe it's measuring processes instead.

Electrification took 30 years to show up in the productivity data. The PC took 20. Every time, the technology showed up first and the org chart caught up later. The 90% of executives who say AI hasn't moved the needle aren't wrong about their numbers. They're just measuring the wrong thing.
