2025 was the year of AI agents. But agents aren't temporary hype: agents and Generative AI will fundamentally change how we work. This post describes that transformation, which has already started in enterprises.
Note: This is a different type of post on my blog. It's the first time I've used AI to generate content for it. Ironically, the content of this post is about how AI will change the work of knowledge workers.
My colleague Florin Manaila is not only a brilliant engineer and leader, but also a great thought leader. On his blog he covers a range of Generative AI topics.
I especially like these two posts:
The following text has been generated by AI. I've only tweaked it slightly. I think it's a great summary of the two longer posts. It even picked up Florin's writing style to some degree.
Prompt:
Write a thought leadership talk about the future of agents in 2026.
The talk must not be longer than ten minutes.
Use the following two articles as input for the content.
The audience is technical people and non-technical sellers.
Avoid technical terms and acronyms.
Evolution
Let's discuss the future of agents in 2026: not science fiction, but the shift that's already changing how we work and how we lead.
This year is when agents move from ‘interesting add‑ons’ to core coworkers. And a big change is their location. Instead of living in distant data centers, they’re moving into every business system. Running them locally is now fast enough, private enough, and inexpensive enough to make sense.
That shift means AI stops showing up as a separate product and becomes a built‑in feature. Your spreadsheet starts thinking with you. Your customer tools anticipate next steps. You won’t ask IT to “integrate AI”, it will be embedded in the tools you already use.
Accountability
But the real transformation comes next: Agents stop waiting for your permission. Not because they’re going rogue, but because human approval has become the slowest part of many workflows. As these tools get faster and more trusted, they start planning, acting, verifying, and correcting on their own. What once needed a human click becomes an automatic loop.
And that leads to the hard question: If an agent can act, who is responsible?
This is where governance becomes the challenge. Agents make thousands of tiny decisions, often based on data no one person fully controls. They interact with other agents you didn’t hire or design. If something goes wrong and no one knows who approved what, you end up with responsibility gaps, outcomes everyone hates but no one officially authorized.
Companies are already adapting. Boards are taking ownership of AI risk. New roles are emerging to set autonomy levels and maintain inventories of every agent in the business. Teams are building registries that track which agent does what and how to turn it off if needed. Because you can delegate activity, but not accountability.
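For the technical part of the audience, here is a minimal sketch of what such an agent inventory could look like. Everything in it, the class names, the three autonomy levels, and the fields, is an illustrative assumption of mine, not a real product or standard; it only shows the idea of pairing each agent with an accountable owner, an autonomy level, and a switch to turn it off.

```python
from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    """Coarse autonomy levels (illustrative, not a standard)."""
    SUGGEST = 1          # agent proposes, a human approves every action
    ACT_WITH_REVIEW = 2  # agent acts, humans audit after the fact
    AUTONOMOUS = 3       # agent plans, acts, and corrects on its own


@dataclass
class AgentRecord:
    name: str
    owner: str        # the accountable human or team: you can delegate
    purpose: str      # activity, but not accountability
    autonomy: Autonomy
    enabled: bool = True


class AgentRegistry:
    """Tracks which agent does what, and how to turn it off if needed."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def disable(self, name: str) -> None:
        """The kill switch: rollback starts by flipping this flag."""
        self._agents[name].enabled = False

    def needs_human_approval(self, name: str) -> bool:
        rec = self._agents[name]
        return (not rec.enabled) or rec.autonomy is Autonomy.SUGGEST
```

A disabled agent, or one at the lowest autonomy level, always falls back to human approval, which is the kind of visible rule and audit trail the rest of this talk argues for.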
And there’s another hurdle: Agent sprawl. Every team can create an agent, so they will. Without guardrails, you’ll get dozens of disconnected mini‑systems running at all hours. Some will quietly influence prices or routes or workflows in ways no one intended.
Recommendation
So, what’s the playbook?
Set clear autonomy levels. Track every agent like any other important business asset. Require oversight for anything that touches money, people, or safety. Test them under real‑world conditions. Practice rollback. And communicate openly with staff so trust doesn’t erode.
Two futures are emerging. In the first, agents follow visible rules, are audited regularly, and make the company faster and safer. In the second, agents grow unchecked, decisions become unpredictable, and the risk becomes costly. The difference isn’t smarter tech, it’s better leadership.
As agents spread into every workflow, the real question is not whether they’ll act on your behalf. They will. The question is whether you guide them, or they quietly guide you. 2026 is the year to decide.
