AI Adoption at my friend's company: Becoming a product builder

In December 2025, I had a chat with my friend about AI adoption at her company.

She works as a Lead Product Manager at a startup in the resource management industry and, at the time, they had just announced that the company was adopting AI across most of their workflows. Her response was anything but enthusiastic:

"Oh, great. Now we all have to write code too."

Two months later, we spoke again. The sarcasm had been replaced by genuine awe: not just at AI, but at what she's now capable of doing with the tools she's been using.

I want to share what she told me, because it's one of the most concrete pictures I've seen of what AI adoption actually looks like inside a product organisation.

The Starting Line

The AI adoption was started by the founder.

He'd been experimenting with AI himself, building automations, pushing code to GitHub, and testing tools. He was impressed enough to decide that the whole company would adopt it. That's roughly 50 people in total, with about half sitting within the product, design, and tech organisation. The other half is mostly operations and sales.

The mandate came from the top and applied to everyone.

McKinsey's data actually shows that the organisations that perform best with AI are the ones where the mandate is explicitly owned at the very top, not left to a lone PM or engineer. (You probably know that I don't put much stock in McKinsey, but this was one of the few times they did something useful.)

They also had a legacy codebase problem that had gone unaddressed for a while. Engineers avoided it, and some features were scrapped entirely because nobody wanted to touch the old code! The AI adoption ended up solving it: they stopped adding features to the old system and started building fresh, with AI-assisted coding as the new way of working.

They chose a single tool: Cursor. They connected it to GitHub and to Linear, their project management tool. They also shared with Cursor information about their design system and CI/CD (Corporate Identity / Corporate Design, not Continuous Integration / Continuous Deployment 😉) setup, so the agent understood their conventions and constraints before building anything.
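In practice, this kind of context is usually handed to the agent through a rules file in the repository. As a purely hypothetical sketch (her company's actual setup isn't public), a Cursor rules file encoding design-system and delivery conventions might look something like this:

```text
# .cursorrules — hypothetical example of project conventions shared with the agent

## Design system
- Use components from our internal design system only; never create ad-hoc
  buttons, inputs, or modals.
- Follow the brand palette and spacing tokens defined in the design package.

## Delivery
- Every change goes through a pull request on GitHub; never commit to the
  main branch directly.
- Keep changes small enough for an engineer to review, and link the Linear
  ticket in the PR description.
```

The point is that the agent reads these conventions before generating anything, which is how it can produce output that fits the team's constraints rather than generic code.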


What the Workflow Looks Like Now

Before the AI adoption, their workflow was as standard as you can imagine.

Product Managers decided what to build and why, and how that fit into the strategy. Designers handled discovery and research, and ensured the product followed usability criteria. Finally, Engineers built what had been agreed upon, picking up tickets from Linear. There was the usual back-and-forth between engineering and the rest of the team to clarify intent and requirements.

With the new workflow, PMs are still doing similar work but in a different way.

My friend says she describes what should be built, for whom, and why. The Cursor agent refines the brief, creates a plan, and they go back and forth until she’s happy with it. Then the agent builds it. She can also write tickets in Linear, and the agent picks them up and works on them. Oh, and there’s more: when something is done, she also asks Cursor to write the documentation.

By the end, she has what she calls a "prototype."

These prototypes are far more powerful than traditional ones. Some are even used by real enterprise clients as live tools. The only reason they're not considered part of the core product is scalability: they wouldn't hold up in a high-traffic B2C environment. But for enterprise clients with lower usage, they work well enough that the company has sold them as finished products.

The prototype has become the product.

And honestly? I think this will be true for more companies than we expect. The bar for "good enough" is context-dependent, and for a lot of B2B use cases, good enough is actually good enough.


What Happened to the Team

The product, design, and tech organisation had around 25 people before the adoption. That number has been reduced drastically. The reduction itself wasn't caused by AI, but AI is what lets the smaller team do the same heavy lifting. Now there is one designer, one data analyst, one frontend engineer, lots of backend engineers, some full-stack engineers, and a small number of PMs.

The remaining designer's main job has changed from designing to reviewing what the PMs build with Cursor: checking that the user experience is consistent and user-friendly. It's not really a designer role in the traditional sense anymore.

The data analyst has also seen parts of their role bypassed. Not by another person, but by the PM asking Cursor to build an analytics dashboard directly, specifying what it should show and how the underlying data should be calculated. That code also goes to GitHub for engineer review.

There is one frontend engineer left, but only because he's full-stack and is maintaining the legacy system, which still requires traditional frontend work. For new features, frontend development has moved to the PMs.

Backend engineers still write code, mainly with Cursor, but their job is now mostly about reviewing other people's code rather than producing their own. That's created two problems:

  1. Volume: writing code has become faster, but reviewing it takes longer. There is simply more code to review than before. My friend was honest: they don't actually know yet whether they've gained productivity overall, because the workload hasn't disappeared, it has just moved from writing to reviewing.

  2. Context: engineers can check whether a function does what it's supposed to do, sure. But to judge whether it fits well into the overall architecture, they need context they no longer have because they weren't in the loop when it was built. The same problem applies to the designer: to understand whether something is good design, she needs to understand the intent behind it, and that's challenging when she hasn't been part of the process.

Speed has gone up. Context has gone down. What I find interesting about this is that it's not a tooling problem. You can't fix missing context with better documentation or a smarter agent. It's a collaboration problem, and those are much harder to solve.

They have no plans to hire frontend engineers, and they’re hesitant about hiring more designers. They do plan to bring in one or two more data people. Beyond that, what they're looking for are PMs with strong usability instinct and backend engineers who understand architecture and can judge code quality.


Market Expectations Are Changing

What does all this mean? That the market now values different skills and is expecting more from the same roles.

My friend pointed out two major changes she sees after her organisation has gone through this transformation: the market has little room for non-senior people anymore, and every role is converging towards the others.

Which leaves me with two big worries:

  1. The juniors gap
    If entire teams are reduced to 1-2 people, those people need to understand what "good" looks like, and that takes years of hands-on experience. It's true for writing code, architecture, design, or strategic product thinking. But it creates an obvious catch-22: if you stop hiring juniors, you eventually stop producing seniors. We're digging our own hole, and I'm not sure we're paying enough attention to it.

  2. Burnout
    The roles are all blurring, which means the fundamentals (like strategy, judgement, and experience) matter more than ever. But asking one person to own product, design, and frontend is a lot. Asking another to own coding, architecture, and commercial thinking is beyond what any one person should have to handle. I've been writing about the entrepreneurial PM and the product creator (which we now call the product builder) for years. I believe in that profile. But it works in specific contexts, not universally for everyone in the industry. Expecting people to do more, across more disciplines, in the same amount of time is not healthy in the long term.

My hope is that companies going through this now will course-correct in a year or two. Not reverse it, but find a better balance before people start breaking down.


What I’m Taking Away From This

Over the past few months, my Lead PM friend has gotten familiar with the engineering workflow.

She now knows what a trunk is, what merging and forking mean, what pushing code and peer review involve. She still doesn't write code by hand, but she had to learn the process in order to participate in it meaningfully.
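For readers who, like her, are meeting these terms for the first time, here is a rough mapping of the vocabulary onto the underlying git concepts (generic git usage, not her company's exact setup):

```text
trunk          → the shared main branch everyone's work eventually lands in
branching/forking → making your own copy or branch so you can work in isolation
pushing code   → uploading your local commits to the shared repository on GitHub
merging        → folding a finished branch back into the main branch
peer review    → a colleague reads the proposed changes in a pull request
                 before they are merged
```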

My friend’s job hasn't fundamentally changed. Before AI, she had to explain what she wanted to build: the problem, the intent, what the result should look like. That part of the work was always there, and it still is.

The difference is who she's explaining it to.

Before, she explained it to engineers, and there was often a long back-and-forth until the result matched what she had in mind. Now she explains it to the machine, which interprets her intent and often builds something surprisingly close to what she meant.

I don't know how much of this is specific to one company moving fast in one moment, and how much of it is simply what product organisations will look like in two years. But I'd rather be asking that question now than realising I should have asked it sooner.

What are you seeing in your own teams? Is the PM role leaning in this direction where you work?

Reach out, I'd really like to know.



This article was edited by Diana Bernardo.
