Over the 15 years I’ve been building software, I’ve seen a good number of hype cycles. I started my career in tech when the idea of “Web 2.0” motivated developers to add comment sections to their products, regardless of what those products were, and I laughed at how even my local flower store in Grand Rapids, MI sought to build a mobile app after the Wall Street Journal boldly claimed mobile was the new Internet.

Later in my career, I saw startups brand simple linear regression as “AI/ML,” and vendors replace well-established client-server architectures with “blockchain” because they assumed their customers would (and should?) mistrust them or something like that (1).

Now, AI is the next big thing, and it’s tempting to write it off as yet another hype cycle whose broad commercial success is another generation away. After all, over 50% of VC dollars deployed in 2024 funded AI startups. But after spending some time building with AI, both using developer copilots and actually deploying agentic workflows, I think this hype cycle is already different.

In this post, I hope to capture and share why I think that’s the case, and practical things I’m doing in my daily workflow to leverage AI better.

Runaway High Agency Organizations

I’ll lead with my thesis: tech companies that put AI front and center in their practices and culture in 2025 are going to stand apart massively from their peers. They’ll release features faster, undercut competitors on price, and attract the best talent, creating an amazing virtuous cycle (2):

The trend today is most prominent among developers, but I suspect it’ll quickly catch on elsewhere as agents and agent frameworks mature. Already we see Jasper for SEO marketers, Relevance for BDRs, and Lindy for general knowledge worker tasks. Over time, customers will decide on the right level of abstraction and specialization, but my money is on the tools that are easiest to use and fastest at deploying “good” AI (perhaps charging on outcomes) winning.

Internally, the winning companies will provide agents or agent frameworks to every employee, and consequently will attract the best kind of employees - ultra-high-agency individuals who thrive on ambiguity, autonomy, and the ability to make an impact with minimal red tape.

For a Product Manager, that might look like the following skills stack being developed and highly sought after:

[Image: the Product Manager skills stack]

While these are always the individuals you want to hire, the bar has been raised (both for talent seeking opportunities and for employers looking to attract that talent). Like many things, I believe this transformation is starting within the product organization and is about to percolate outward across the rest of the company.

Let’s explore what this is already starting to look like.

Automating Product Ownership

Judging from conversations with my peers who lead product teams elsewhere, high agency PMs are beginning to automate much of their day-to-day work, focusing first on the work that’s long been debated as distracting from the core value PMs bring to the organization. Ticket grooming, acceptance criteria reviews, PRD creation, and coordination activities are all targets for automation in which LLMs summarize or expand upon a core idea (4).
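To make the ticket-grooming case concrete, here’s a minimal sketch of the prompt-assembly half of such an automation. The ticket fields and the `build_grooming_prompt` helper are invented for illustration; the resulting prompt would be sent to whatever LLM API your tooling already uses.

```python
# Hypothetical sketch: turn a backlog ticket into a grooming prompt for an LLM.
# The field names ("title", "description", "acceptance_criteria") are assumptions,
# not a real tracker's schema.

def build_grooming_prompt(ticket: dict) -> str:
    """Assemble a prompt asking an LLM to draft testable acceptance criteria."""
    criteria = ticket.get("acceptance_criteria") or "None provided yet."
    return (
        "You are assisting a product manager with backlog grooming.\n"
        f"Ticket title: {ticket['title']}\n"
        f"Description: {ticket['description']}\n"
        f"Existing acceptance criteria: {criteria}\n"
        "Draft 3-5 testable acceptance criteria in Given/When/Then form, "
        "and flag anything in the description that is ambiguous."
    )

if __name__ == "__main__":
    ticket = {
        "title": "Export report as CSV",
        "description": "Users want to download the monthly usage report.",
    }
    print(build_grooming_prompt(ticket))
```

The LLM call itself is deliberately left out: the leverage here is in standardizing what the model is asked for, so a PM can review drafts rather than write from scratch.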

In smaller organizations, this means fewer junior hires, as PMs can scale independent of additional headcount. In mid-sized ones, it is starting to look like junior PMs being trained by agents their managers deploy. And in the largest organizations, it looks absolutely Kafkaesque, with PMs generating specs and their counterpart engineering teams summarizing them, both using LLMs. On the positive side, the humor of the situation is driving good conversations and starting to streamline processes.

Rapid Prototyping

For technically minded PMs, AI-driven prototyping and codegen is phenomenal today. Last year, I built a side project that ultimately turned into a feature we released. I built it with VSCode and did 90% of the coding but just 10% of the debugging. This year, using Cursor for another side project, I wrote 10% of the code and did 0% of the debugging, as Cursor was able to address any issue it accidentally created.