I read this HBR piece this morning and it resonated with me immediately. The headline says it all: AI Doesn't Reduce Work. It Intensifies It.

The researchers spent eight months embedded at a 200-person tech company, and what they found contradicts the whole promise we've been sold. As they put it: "AI tools didn't reduce work, they consistently intensified it."

Here's what actually happened. Employees "worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so." Sound familiar?

The article identifies three ways this plays out:

Scope Creep. Because AI fills knowledge gaps, people started doing work that wasn't theirs. Product managers writing code. Researchers taking on engineering tasks. The tools made it feel accessible, even empowering. But as the researchers note, "workers increasingly absorbed work that might previously have justified additional help or headcount." That's not efficiency. That's just... more work.

Boundary Erosion. AI made starting tasks so frictionless that people slipped work into lunch breaks, meetings, and random moments. The conversational nature of prompting makes it feel like chatting, not working. One engineer summarized it perfectly: "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more."

Cognitive Overload. Workers described managing "several active threads at once" and constantly context-switching between their own work and AI outputs. The researchers found this created "a sense of always juggling, even as the work felt productive."

The scary part? This is self-reinforcing. "AI accelerated certain tasks, which raised expectations for speed; higher speed made workers more reliant on AI." It's a treadmill that speeds up the more you run.

I think the researchers are onto something when they recommend building an "AI practice": intentional norms that structure when and how AI gets used. But what does that actually look like day to day?

For me, it starts with defining the job before reaching for the tool. If I can't articulate exactly what I'm trying to accomplish and what "done" looks like, I'm not ready to delegate it, to a human or a machine.

I've also started building small delays into my workflow. Instead of immediately prompting when a question comes up, I sit with it for a moment. Sometimes the answer is simpler than I thought. Sometimes I realize I don't actually need to do the thing at all.

I find it helpful to use AI for specific phases of work (editing, formatting, or research) while keeping the core thinking and creative decisions for myself. Knowing which is which keeps me from outsourcing the parts that actually matter.

When I catch myself reaching for AI to solve something outside my lane, I pause and ask: is this actually my job? Or am I just making it easy to absorb work that should go elsewhere?

And maybe most importantly, I'm trying to preserve friction intentionally. Not everything should be instant. Some tasks should take effort. That's often where the value lives. I'm being more deliberate about which workflows I optimize and which I keep slow by design.

We've been so focused on what AI can do that we haven't stopped to ask what we want it to do for us. The goal isn't to use AI less. It's to use it on purpose. To decide where it belongs and where it doesn't.

Because the alternative is faster work, more of it, endlessly. That isn't the future we were promised. It's just burnout with better tooling.