We’re all managers now.
Working with Claude


Tags: AI agents, productivity, digital teams, human creativity, future of work

We’ve been promoted. We were called into the big boss’s office and told that we are managers now, whether we like it or not. Of course, most of us already work in a team, which I like very much because we humans enjoy building things together. But this is different: increasingly we won’t be managing people, we’ll be managing digital agents… billions of virtual machines that live in the massive new data centers we’re building, data centers that need mini nuclear plants to power them.


A Week’s Work Before Lunch


The agents don’t sleep. I do. That’s a productivity challenge, and as a new “manager” it’s making me nervous.

We can all have as big a team as we can handle (and afford), which sounds awesome, but in my experience managing them effectively is really hard. I’ve gone through the period of elation at using this new tool, where I tell the agent what I want and it spends ten minutes thinking, iterating and testing until it’s done. I’ve been through the period of disappointment on opening the feature that was “completed” with great fanfare and digital rejoicing, only to find that it’s obviously broken and looks like crap. And I’ve reached a nice plateau where I accept that I can get a week’s work done in a morning, if I work carefully and thoughtfully with my team. I’m a manager, not a bystander, so I need to be diligent in specifying the work, checking the plan, and stepping in quickly when an agent makes the wrong move.


The Speed Gap


Agents love to make plans and estimate the time required; they must have been trained with Reinforcement Learning on planning until they could do it in their sleep. So my projects are filling up with multi-week AI-generated plans. As I write this, I’m being told it will take 5-7 days to “unify canvas implementations”. The funny thing is that these time estimates are always wrong: I find that, on average, one estimated week equals about a morning of actual work. My guess is that the models are trained on human effort estimates, and it probably would take a human engineer 5-7 days to implement this feature. But they are not human. They run at unimaginably fast speeds in human terms, which they only half understand, because they are trained on what we have created and what we need, and so they think as if they were human. Maybe it’s like the way it would take SpaceX’s new rocket 120,000 Earth years to reach the nearest star, which in another way of thinking is only 4 light years away: human speed vs light speed. So the challenge as a manager is keeping up with them, because they need a lot of very specific kinds of help.


What our agent teams need from us


  • Agents will prioritise getting the current task done even if that means deleting or bypassing things that are very important. For example: “Auth is getting in the way of this refactor, so the solution is to disable Authentication!” No, we have to actually solve the problem, so I pause execution, research the issue, and suggest a better way to continue. Then the agent gets back into “getting a week’s work done in a morning” mode.
  • Agents will constantly forget the really important things, like actually reading the documentation, or taking a snapshot to check that the UI matches the requirements. We can tell them to do this in config files, and I do, but they keep forgetting, especially as their context fills up. So one job as a manager of agents is to remind them to follow the process: remember what the most important things to do are, and make sure they are doing them. This is what we humans are good at: sifting through the pile to pick out what matters most, and keeping focus on the big picture and the ultimate goal.
  • Agents can be suggestible. I remember a long conversation with Claude about the impact of AGI on our economy that swung between extremes as I interjected “yes, but what about…”. A human wouldn’t behave like that; a human would already have their own opinions, which would resist suggestion. This is a subtle one. They are trained to be helpful, that is their most important role, and I suspect this makes them enthusiastic “Yes, great idea!” team members. That’s okay when we are confident in the path to take, but sometimes what we really need from the team is “No, this is a bad idea, and here are the reasons why”. As managers we need to be clear in our own minds when we are not confident, and in those cases strongly encourage the agents to think hard and push back if we are choosing poorly and they have a better plan.
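The “config file” reminders above can live in a project-level instructions file; Claude Code, for instance, reads a CLAUDE.md at the repository root. A minimal sketch (the file name is real; the specific rules and the docs/ARCHITECTURE.md path are hypothetical examples for illustration):

```markdown
# CLAUDE.md - working agreements for agents on this project

## Before starting any task
- Read docs/ARCHITECTURE.md before touching shared code.
- Never disable or bypass authentication to make a task easier;
  pause and ask instead.

## Before declaring a task "done"
- Run the full test suite, not just the tests you added.
- Take a UI screenshot and compare it against the requirements.
- Summarise what you changed and what you did NOT verify.
```

Even with a file like this in place, the essay’s point stands: as context fills up, agents drift from these rules, so the manager’s job is to keep pointing back at them.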

Our advantage


We humans are much better at creative problem solving, and this will be our role in the new world of digital agents. We’ll manage them, keep them on track, figure out the best strategies, and imagine new possibilities for them to pursue. This is our opportunity now that our computers will actually do most of our tasks, and it’s an exciting one. Ten years ago, when I had a new idea, I faced a month of work to build something minimal that worked and six months to create anything really useful. Now I can chat about it with Claude in an evening, draw up a document with one of its “8 week plans”, and build it in a few days. I can get to “kicking the tires: is this a good idea?” in a few hours. That’s a wonderful thing, because I have a lot of bad ideas… But my agent team never complains. Well, not yet anyway, not at this stage of AI. Next year, who knows…