Workday, Inc. has put a name to something many teams feel every day but rarely articulate well: the strange sensation of moving faster with AI while somehow ending up in the same place, or occasionally even behind. The company’s new global research lands squarely on that tension. On paper, AI looks like a gift—most employees are clawing back one to seven hours a week, which is not nothing in a work culture already stretched thin. Yet those hours have a habit of dissolving. They leak away into checking, correcting, rewriting, and second-guessing outputs from generic AI tools that promise speed but quietly hand responsibility for trust and accuracy back to the user. Productivity rises, but value stalls. It’s a bit like buying a faster car and then spending the extra time pulling over to check whether the wheels are still attached.
What makes the findings resonate is that they refuse the easy narrative that “AI just isn’t ready yet.” Instead, the research frames the problem as structural and human. AI is increasing capacity just fine; what hasn’t changed are the roles, workflows, and expectations wrapped around that capacity. People are using 2025-grade tools inside job descriptions designed a decade ago, and the mismatch shows. Nearly forty percent of time saved is immediately eaten by rework, which explains why only a small minority of employees consistently feel they’re getting clean, positive outcomes. Heavy AI users, paradoxically, feel both the most hope and the most strain. They believe in the upside—overwhelmingly so—but they also shoulder the daily burden of acting as human validators, scrutinizing machine output at least as closely as anything written by a colleague. Speed, without redesign, just shifts where the effort lives.
One of the more quietly uncomfortable insights sits with younger employees. Those in the 25–34 bracket, often assumed to be “naturally good with AI,” are actually absorbing a disproportionate share of the cleanup work. They’re faster to adopt, faster to experiment, and faster to spot when something feels off, which means they’re also faster to fix it. Without training and role clarity, tech savvy turns into invisible labor. Leaders say skills training is a priority, but the people drowning most in AI rework don’t reliably feel it reaching them. That gap between stated intention and lived experience is where early AI enthusiasm quietly curdles into fatigue.
What separates the organizations seeing real returns isn’t the sophistication of the model or the novelty of the tool. It’s what they do with the time AI gives back. Many companies default to reinvesting savings into more technology or simply piling on additional tasks, as if reclaimed hours were an invitation to sprint harder. The standout performers make a more counterintuitive move: they slow down in the right places. They treat saved time as a strategic asset and channel it into skill-building, deeper analysis, better decisions, and work that actually requires judgment. Employees in these environments don’t just move faster; they feel their work becoming more valuable, and rework drops because people are taught how to collaborate with AI rather than babysit it.
This is where Workday’s philosophy comes into sharper focus. As Gerrit Kazmaier puts it, pushing the hardest questions of trust and accuracy onto individual users is a design failure, not a feature. The argument isn’t anti-AI; it’s an argument against friction dressed up as empowerment. When AI is delivered as a raw, generic tool, every employee becomes an informal systems integrator and quality assurance analyst, whether they signed up for that role or not. When it’s embedded as a human-centered system, doing the heavy lifting quietly in the background, it frees people to concentrate on what machines still don’t do well: judgment, creativity, and real connection. That distinction matters, because it determines whether AI becomes a multiplier of insight or just a faster way to make mistakes that someone else has to clean up later.
What lingers after reading the research isn’t fear about AI replacing jobs, but a more subtle warning about wasted potential. The biggest risk right now isn’t that AI moves too fast; it’s that organizations move too slowly in redesigning how work actually happens. Reinvesting in people—teaching them when to trust AI, when to challenge it, and how to reshape roles around it—turns reclaimed hours into durable advantage. Ignore that step, and the paradox remains: faster days, fuller calendars, and the nagging sense that all that new speed somehow isn’t taking you any further than before.