As enterprises move from early experimentation with generative AI to building agentic, goal-driven systems, the questions executives are asking have changed. There is less talk about what AI can do, and much more about how it can be trusted, controlled, and integrated into the way the business is actually run.
To learn how leading organizations are preparing for this next phase, I sat down with Craig Wiley, Senior Director of Product at Databricks, as part of our Executive Lens series. This series is designed to uncover the strategic shifts shaping enterprise data and AI through direct conversations with executives who are navigating these changes in real time.
Craig and I talked candidly about what readiness really looks like, how architecture and governance need to evolve, and what milestones leadership teams and boards should plan for when starting to scale agentic systems.
Craig Wiley is Senior Director of Product, Artificial Intelligence at Databricks. Previously, he was the founding general manager of AWS SageMaker and leader of AI products at Google Cloud. He has deep experience building scalable machine learning and AI platforms that help enterprises bring together data and intelligent systems in practical, sustainable ways.
From GenAI experiments to systems leaders can trust
Catherine: You’ve been talking to a lot of CIOs, CDOs and CTOs lately. As companies move from GenAI use to more agentic, goal-driven systems, what changes are you seeing?
Craig: Initially, I think a lot of people were confused about how to leverage GenAI in a useful way. We still hear about a large percentage of use cases that are very deterministic. People say, “I want to build a system that does this,” whether it’s supply chain, customer service management, or whatever.
The problem was that with early GenAI, it was really hard to build or deploy anything deterministic. With agents, we can now use GenAI to create nearly deterministic systems, and we can also be more savvy about accuracy.
If you think about what it takes for a CXO to say yes to deploying an agentic solution, it comes down to control and accuracy. Can I control it, and does it really work? This shift toward agents has made it possible to achieve a level of accuracy that we couldn't achieve when everything was purely prompt-and-response.
The least exciting answer is still correct
Catherine: How do you know if an organization is truly ready for agentic AI?
Craig: The boring answer is correct: is your data in order?
You may be very excited about agentic AI, but for enterprises it really comes down to context. And when we say context, we mean data and information. Can you give the agent the right information at the right time in its reasoning?
We see it all the time. Smaller, cheaper, less sophisticated models can perform just as well as the most advanced models if given the right context at the right time. There is no shortcut here. You need a well-curated data lake with robust metadata. If you don't have that, it's similar to classical machine learning: you say, "Let's build this model," then two and a half months are spent organizing the data and only the last few weeks are spent actually building the system. The work simply doesn't succeed without the data.
Two ways to go when your data isn’t ready
Catherine: Many organizations are not as mature with their data as they would like to be. If an executive looks at their environment and thinks, "This is a mess, where do I start?" What approaches have you seen work?
Craig: There are actually two ways.
One is bottom-up. You look at all your data and say, "How do I get this to a good place?" The good news is that the tools have improved dramatically. Moving data off older systems is easier than it used to be, and GenAI can even help write some of the code to do it.
The second approach is use-case driven. If a CEO or CIO says, “We have a big agentic ambition and we want to do X,” and the data is messy, you can start by asking: What data do I really need for this use case? Then you find those pieces, modernize them, and bring them forward in the service of that goal.
Neither approach is universally better. Bottom-up gives you more flexibility later; when the problem is existential, use-case-driven may be the faster first step. The only real mistake is not giving the data the time and attention it needs.
Why early wins are moving beyond conversations
Catherine: Where are early adopters focusing right now? What types of use cases are you seeing uptake in?
Craig: A year ago, many early adopters were leaning into marketing and other use cases where the generative nature of the model was not a liability. Now, thanks to things like tool calling and improved accuracy, customers can do so much more. People are still very chat-centric. "I want my employees to be able to chat with something." "I want my customers to be able to chat with something."
But the real excitement I'm seeing is around automation and workflow optimization. I recently spoke with a large bank that is trying to make its entire loan origination process agentic. It used to take hours for humans to read the documents. Now they hope to run it in a fully agentic manner with close human oversight. That is a far more compelling result than yet another chatbot.
Governance becomes difficult when agents become users
Catherine: As systems become more autonomous, how are leaders rethinking architecture and governance?
Craig: For decades, we have focused on managing structured data and ensuring that the right people have access to it and the wrong people don’t. Now we also have to think about unstructured data, and we have to think about agents as new entities. How do I ensure that these agents have access to the right data at the right time?
You also need to think about the user on the other end of the agent. A good example is building a chatbot on top of Jira. Jira and similar systems often contain confidential information. If access is not controlled, the chatbot can expose that information to anyone. So it's not just about what the agent can reach. It's also about what the agent can return depending on who's asking. The building blocks are there, but governance should be treated as a first-order problem, not an afterthought.
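To make the idea concrete, here is a minimal sketch of an agent tool that scopes what it returns to the permissions of the human asking, not just to what the agent itself can reach. All names here (`Ticket`, `PERMISSIONS`, `fetch_tickets`) are illustrative, not part of any real Jira or Databricks API.

```python
# Hypothetical sketch: filter an agent tool's results by the end user's
# entitlements, so the same agent returns different data to different people.
from dataclasses import dataclass

@dataclass
class Ticket:
    id: int
    summary: str
    confidential: bool

# Illustrative policy store: which users may see confidential tickets.
PERMISSIONS = {"alice": {"confidential"}, "bob": set()}

TICKETS = [
    Ticket(1, "Upgrade build servers", confidential=False),
    Ticket(2, "Pending acquisition diligence", confidential=True),
]

def fetch_tickets(requesting_user: str) -> list[Ticket]:
    """Return only the tickets the *end user* is entitled to see."""
    allowed = PERMISSIONS.get(requesting_user, set())
    return [
        t for t in TICKETS
        if not t.confidential or "confidential" in allowed
    ]

print([t.id for t in fetch_tickets("alice")])  # alice sees both tickets
print([t.id for t in fetch_tickets("bob")])    # confidential ticket filtered out
```

The key design choice is that the filter runs inside the tool, keyed to the requester's identity, rather than relying on the model's prompt to withhold confidential results.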
A simpler way to think about identity, access, and agents
Catherine: This sounds a lot like identity and access management. How should leaders think about this when preparing?
Craig: Basically, it’s identity and access management, but with a new class of identity: agents.
If you don't have strong identity and access policies, this world is going to be a lot tougher. If you do, agents fit in much more naturally.
A simple way to think about it is this:
- Who are they? Strong identity systems that work for humans and non-human actors.
- What are they allowed to do? Governance over APIs and data.
- How do they do it? Documentation and metadata. What's in this table? What does this API do?
If the identity systems and documentation are good, it becomes very easy to point an agent in the right direction and move forward quickly.
Next 12-24 Months: Build Muscle Before Chasing ROI
Catherine: Over the next year or two, what should leadership teams plan for as agentic systems scale?
Craig: Many companies are stuck on the question of build versus buy. If I were the CEO, I would want clarity on this. I think you have to be able to build. I can't imagine running a large company and outsourcing all of my software development.
If you have developers, you should plan on building this muscle. In the near term, I care less about ROI and more about whether my people can build and deliver these systems. Practice comes before competition. Spend the first six months nurturing talent. In six to twelve months, build things you can be proud of. After that, start driving real business results.
There is also a time to buy. If a capability is not central to your differentiation, consider purchasing it. But if you're already building software to differentiate your company, your teams should be building agents to differentiate it too.
The misconception that stops progress
Catherine: What is the biggest misconception you see when companies first try agentic AI?
Craig: Giving up after the first failure.
They build something, it gets an answer wrong once, and they say, "See? I told you it would be wrong. I'm done." That's not how iteration works. If it was wrong, ask why. Fix the root cause and move on.
GenAI seemed easy in the beginning, so people expect it to always be easy. But building great AI systems is hard. You are going to have failures. Success is about continuous improvement, not getting it right the first time.
I gave a talk a few years ago where a global financial services firm presented an agent they had built to help call center staff respond faster. I asked how they measured success. The response was, "That wasn't the point. The point was to build my team's experience."
That mentality stuck with me. The companies that come at this with that attitude are the ones that will win.
Catherine: A growth mindset.
Craig: Absolutely.
Closing thoughts
The thing that stood out to me most from this conversation was that agentic AI doesn't reward shortcuts. The organizations moving fastest are the ones that aren't skipping the hard parts. They're doing the unglamorous work around data, identity, governance, and documentation, and they're investing early in building internal capability.
Agentic systems don't just expand what technology can do. They raise the bar for how prepared an organization must be to use them well.
To learn more about building an effective operating model, download the Databricks AI Maturity Model.
