Companies are already using agentic AI to make decisions, but governance is lagging behind

More organizations are letting AI act on their behalf, but far fewer have mature governance to manage the consequences.

Author: Murugan Anandarajan on Jan 22, 2026
 
Source: The Conversation

Businesses are acting fast to adopt agentic AI – artificial intelligence systems that work without human guidance – but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it’s also a business opportunity.

I’m a professor of management information systems at Drexel University’s LeBow College of Business, which recently surveyed more than 500 data professionals through its Center for Applied AI & Business Analytics. We found that 41% of organizations are using agentic AI in their daily operations. These aren’t just pilot projects or one-off tests. They’re part of regular workflows.

At the same time, governance is lagging. Only 27% of organizations say their governance frameworks are mature enough to monitor and manage these systems effectively.

In this context, governance is not about regulation or unnecessary rules. It means having policies and practices that give people clear control over how autonomous systems operate, including who is responsible for decisions, how behavior is monitored, and when humans should get involved.

This mismatch can become a problem when autonomous systems act in real situations before anyone can intervene.

For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems behave “as designed,” unexpected conditions can lead to undesirable outcomes.

This raises a big question: When something goes wrong with AI, who is responsible – and who can intervene?

Why governance matters

When AI systems act on their own, responsibility no longer lies where organizations expect it. Decisions still happen, but ownership is harder to trace. For instance, in financial services, fraud detection systems increasingly act in real time to block suspicious activity before a human ever reviews the case. Customers often only find out when their card is declined.

So, what if your card is mistakenly declined by an AI system? In that situation, the problem isn’t with the technology itself – it’s working as it was designed – but with accountability. Research on human-AI governance shows that problems happen when organizations don’t clearly define how people and autonomous systems should work together. This lack of clarity makes it hard to know who is responsible and when they should step in.

Without governance designed for autonomy, small issues can quietly snowball. Oversight becomes sporadic and trust weakens, not because systems fail outright, but because people struggle to explain or stand behind what the systems do.

When humans enter the loop too late

In many organizations, humans are technically “in the loop,” but only after autonomous systems have already acted. People tend to get involved once a problem becomes visible – when a price looks wrong, a transaction is flagged or a customer complains. By that point, the decision has already been made, and human review becomes corrective rather than supervisory.

Late intervention can limit the fallout from individual decisions, but it rarely clarifies who is accountable. Outcomes may be corrected, yet responsibility remains unclear.

Recent guidance shows that when authority is unclear, human oversight becomes informal and inconsistent. The problem is not human involvement, but timing. Without governance designed upfront, people act as a safety valve rather than as accountable decision-makers.

How governance determines who moves ahead

Agentic AI often brings fast, early results, especially when tasks are first automated. Our survey found that many companies see these early benefits. But as autonomous systems grow, organizations often add manual checks and approval steps to manage risk.

Over time, what was once simple becomes complicated. Decision-making slows, workarounds multiply, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trust autonomous systems.

This slowdown doesn’t have to happen. Our survey shows a clear difference: Many organizations see early gains from autonomous AI, but those with stronger governance are much more likely to turn those gains into long-term results, such as greater efficiency and revenue growth. The key difference isn’t ambition or technical skill; it’s preparedness.

Good governance does not limit autonomy. It makes autonomy workable by clarifying who owns decisions, how system behavior is monitored, and when people should intervene. International guidance from the OECD – the Organisation for Economic Co-operation and Development – emphasizes this point: Accountability and human oversight need to be designed into AI systems from the start, not added later.

Rather than slowing innovation, governance creates the confidence organizations need to extend autonomy instead of quietly pulling it back.

The next advantage is smarter governance

The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, success will belong to organizations that clearly define ownership, oversight and intervention from the start.

In the era of agentic AI, confidence will accrue to the organizations that govern best, not simply those that adopt first.

Murugan Anandarajan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.