As enterprises move forward with AI experimentation at scale, governance has become a board-level concern. The challenge for executives is no longer whether governance matters, but how to design it in a way that enables speed, innovation and trust at the same time.
To find out how that balance is working in practice, I sat down with David Meyer, senior vice president of product at Databricks. Working closely with clients across industries and sectors, David has a clear view of where organizations are making real progress, where they are stuck, and how today’s governance decisions shape tomorrow’s possibilities.
What stood out in our conversation was his practicality. Rather than treating AI governance as something new or abstract, David consistently returned to first principles: engineering discipline, visibility, and accountability.
AI governance as a way to move forward faster
Katherine Brown: You spend a lot of time with clients from a variety of industries. What is changing in how leaders are thinking about governance as they plan for the next year or two?
David Meyer: One of the clearest patterns I see is that governance challenges are both organizational and technical, and the two are tightly linked. On the organizational side, leaders are trying to figure out how to let teams move faster without creating chaos.
Organizations that struggle are often the ones trying hardest to avoid risk. They centralize every decision, add burdensome approval processes, and inadvertently slow everything down. The irony is that the outcome is often riskier, not safer.
Interestingly, strong technical governance can actually unlock organizational resilience. When leaders have real visibility into what data, models, and agents are being used, they don’t need to manually control every decision. They can give teams more freedom because they understand what is happening across the entire system. In practice, this means teams don’t need to ask for permission for every model or use case – access, auditing, and updates are controlled centrally, and governance happens by design rather than by exception.
Katherine Brown: Many organizations are stuck between moving too fast and shutting everything down. Where do you see companies getting this right?
David Meyer: I usually see two extremes.
On one hand, you have companies that decide they are “AI first” and encourage everyone to build freely. It works for a while. People move fast, there is a lot of excitement. Then you blink, and suddenly you have thousands of agents, no real inventory, no idea what they cost, and no clear picture of what’s actually running in production.
On the other hand, there are organizations that try to control everything up front. They keep the same approval choke points in place, and the result is that almost nothing worthwhile ever gets deployed. Those teams usually feel constant pressure that they are falling behind.
The companies that are doing this well land somewhere in the middle. Within each business function, they identify people who are AI-literate and can guide experimentation at the local level. Those people compare notes across the organization, share what’s working, and narrow down the set of recommended tools. Going from dozens of tools to just two or three makes a bigger difference than people realize.
Agents are not as new as they seem
Katherine: One thing you said earlier really struck me. You suggested that agents are not as fundamentally different as many people believe.
David: That’s right. Agents seem new, but many of their characteristics are actually very familiar.
They incur costs continuously. They expand your attack surface. They connect to other systems. Those are all things we’ve dealt with before.
We already know how to govern data assets and APIs, and the same principles apply here. If you don’t know where an agent exists, you can’t turn it off. If an agent touches sensitive data, someone has to be accountable for it. Many organizations believe that agent systems require an entirely new rulebook. In fact, if you borrow proven lifecycle and governance practices from data management, you’re mostly there.
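To make that concrete, here is a minimal sketch of what an agent inventory record could look like if you catalog agents the way data assets are already cataloged. The field names are illustrative assumptions, not any particular product’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AgentRecord:
    """One entry in a central agent inventory, modeled on a data-catalog entry (hypothetical schema)."""
    agent_id: str
    owner: str                   # the person accountable for what the agent touches
    data_sources: list[str]      # datasets or tables the agent can read
    models: list[str]            # model endpoints the agent calls
    handles_sensitive_data: bool
    status: str = "active"       # "active", "paused", or "retired"
    registered_at: datetime = field(default_factory=datetime.now)

registry: dict[str, AgentRecord] = {}

def retire(agent_id: str) -> None:
    """You can only turn an agent off if it appears in the inventory in the first place."""
    record = registry.get(agent_id)
    if record is None:
        raise KeyError(f"agent {agent_id!r} is not in the inventory, which is the real problem")
    record.status = "retired"
```

The details matter less than the discipline: every agent has an owner, a known set of data sources, and a status someone can change.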
Katherine: If an executive asked you for an easy place to start, what would you tell them?
David: I’d start with observability.
Meaningful AI almost always relies on proprietary data. You need to know what data is being used, what models are involved, and how those pieces come together to create the agent.
Many companies are using multiple model providers in different clouds. When those models are managed separately, it becomes very difficult to understand cost, quality or performance. When data and models are controlled together, teams can test, compare, and improve more effectively.
That observability matters even more because the ecosystem is changing so rapidly. Leaders need to be able to evaluate new models and approaches without having to rebuild their entire stack every time something changes.
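One way to picture “controlling data and models together” at the logging layer is a single trace schema shared across providers. The sketch below is a hypothetical example with made-up field names, not a reference to a specific tool.

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class ModelCallTrace:
    """One record tying an agent call to the data, model, cost, and quality involved (illustrative fields)."""
    agent_id: str
    provider: str            # which cloud or model vendor served the call
    model: str
    input_tokens: int
    output_tokens: int
    latency_ms: float
    datasets_used: list[str]
    passed_eval: bool        # result of whatever quality check the team runs

traces: list[dict] = []

def log_call(trace: ModelCallTrace) -> None:
    """Every provider writes to the same schema, so cost and quality stay comparable."""
    traces.append({**asdict(trace), "logged_at": time.time()})

def cost_per_good_answer(model: str, price_per_1k_tokens: float) -> float:
    """The kind of question a unified trace log makes easy: what does a passing answer cost on this model?"""
    rows = [t for t in traces if t["model"] == model]
    good = sum(1 for t in rows if t["passed_eval"]) or 1
    tokens = sum(t["input_tokens"] + t["output_tokens"] for t in rows)
    return tokens / 1000 * price_per_1k_tokens / good
```

When a new model arrives, it only needs to emit the same trace record; comparing it against the incumbents becomes a query rather than a rebuild.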
Katherine: Where are organizations making rapid progress, and where do they get stuck?
David: Knowledge-based agents usually show value the fastest. You point them at a set of documents and suddenly people can ask questions and get answers. It’s powerful. The problem is that many of these systems break down over time. Content changes. Indexes go stale. Quality declines. Most teams don’t plan for this.
Maintaining value means thinking beyond the initial deployment. You need a system that continually refreshes data, evaluates outputs, and improves accuracy over time. Without it, many organizations see strong activity for the first few months, then watch usage and impact decline.
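In the simplest possible terms, “planning for this” can mean a recurring job that refreshes content and re-scores the agent against a fixed evaluation set. The helper names and the 0.8 threshold below are illustrative assumptions, not a prescription.

```python
import time

def refresh_index(index: dict, documents: dict) -> None:
    """Re-index documents whose content changed so the agent isn't answering from stale text."""
    for doc_id, text in documents.items():
        if index.get(doc_id) != text:
            index[doc_id] = text  # in a real system: re-chunk and re-embed here

def evaluate(answer_fn, eval_set: list[dict]) -> float:
    """Score the agent against a fixed set of questions with known-good answers."""
    hits = sum(1 for case in eval_set if case["expected"] in answer_fn(case["question"]))
    return hits / len(eval_set)

def maintenance_loop(index, documents, answer_fn, eval_set, threshold=0.8, interval_s=86400):
    """The part most teams skip: keep refreshing and re-scoring long after launch."""
    while True:
        refresh_index(index, documents)
        score = evaluate(answer_fn, eval_set)
        if score < threshold:
            print(f"quality dropped to {score:.0%}, investigate before users notice")
        time.sleep(interval_s)
```

The point is not the specific loop; it is that freshness and quality are measured on a schedule instead of assumed.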
Treating agentic AI as an engineering discipline
Katherine: In practice, how are leaders balancing speed with trust and control?
David: Organizations that do this well treat agentic AI as an engineering problem. They apply the same discipline they apply to software: continuous testing, monitoring, and deployment. Failures are expected. The goal isn’t to prevent every problem – it’s to limit the blast radius and fix problems quickly. When teams can do this, they move faster and with more confidence. If nothing ever goes wrong, you are probably being too conservative.
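As one small illustration of that discipline, here is a sketch of a release gate: the same test-then-canary pattern teams already use for software, applied to an agent. The release bar and the 5% canary are assumptions made for the example.

```python
def deploy_gate(candidate_score: float, current_score: float, release_bar: float = 0.85) -> str:
    """Ship an agent the way software ships: block releases that fail evaluation,
    and start successful ones on a small slice of traffic to limit the blast radius."""
    if candidate_score < release_bar or candidate_score < current_score:
        return "blocked"      # fail in the pipeline, not in front of users
    return "canary:5%"        # widen the rollout only after monitoring stays clean

# A candidate scoring 0.90 against a 0.87 baseline ships to a 5% canary first;
# one scoring 0.80 never leaves the pipeline.
assert deploy_gate(0.90, 0.87) == "canary:5%"
assert deploy_gate(0.80, 0.87) == "blocked"
```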
Katherine: How are expectations around trust and transparency evolving?
David: Trust does not come from assuming the system will be perfect. It comes from being able to see exactly what happened when something goes wrong. You need traceability – what data was used, what models were involved, who interacted with the system. When you have that level of auditability, you can afford to experiment more.
Large-scale distributed systems have always been run this way. You optimize for recovery, not the absence of failure. As AI systems become more autonomous, this mindset becomes even more important.
Building an AI Governance Strategy
Rather than something completely different from the past, agentic AI is an extension of what enterprises already know how to operate. Three themes stand out for executives thinking about what really matters next:
- Use governance to enable momentum, not impede it. The strongest organizations establish foundational controls so teams can move quickly without losing visibility or accountability.
- Apply familiar engineering and data practices to agents. Inventory, lifecycle management and traceability matter as much to agents as they do to data and APIs.
- Treat AI as a production system, not a one-time launch. Continued value depends on ongoing assessment, fresh data, and the ability to quickly detect and correct problems.
Together, these ideas point to a clear conclusion: sustainable AI value comes not from chasing the latest tools or shutting everything down, but from building a foundation that lets organizations learn, adapt, and move forward with confidence.
To learn more about building an effective operating model, download the Databricks AI Maturity Model.
