From AI projects to operational efficiencies


As enterprises move beyond pilots and proofs of concept, a new question is emerging in executive conversations: When will AI stop being a series of projects and start becoming part of how business is run?

At Databricks, CIO Naveen Zutshi works with CIOs and business leaders to move enterprise AI from experimentation to scale. In this conversation, Naveen drew on prior leadership roles at companies such as Palo Alto Networks, Gap Inc., and Walmart, where he led complex modernization efforts that transformed legacy environments into scalable, cloud-first architectures.

What emerged from our conversation is clear: the inflection point is not about models. It is about modernization, governance, and operational discipline.

AI is moving from experiments to P&L

Catherine: What clear signs are you seeing that AI experimentation is giving way to AI as an operational capability?

Naveen: I believe the industry still has more work to do to generate real value from AI. But in the last six to twelve months, I have noticed a remarkable change. I spend time with CIOs and business leaders across a variety of industries, and three patterns emerge.

First, I am hearing increasingly concrete examples of AI in daily work. Interestingly, regulated industries that were considered laggards in the cloud journey – healthcare and financial services, for example – are now early adopters. We are seeing AI used for back-office automation, fraud detection, generating alpha in investment returns, clinician note taking, drug discovery, and even crisis center support and prevention.

Second, business leaders are increasingly joining the conversation. Historically, AI discussions were dominated by data engineers and data scientists. Now business groups are coming to the table to discuss how data and AI can transform their operations. More importantly, they are sharing examples of how they have already done it. AI has truly arrived when it shows up in business KPIs.

Third, funding has shifted. AI initiatives used to come out of the innovation budget or discretionary funds. AI is now a major line item in the P&L – funded either directly by business units or centrally through the CIO or CTO organization. That change itself signals operational commitment. It won’t be long before AI tooling spend sits as a major line item next to headcount and cloud spend. At Databricks, we are already breaking out AI spend from overall SaaS spend.

The real obstacle: legacy, not talent

Catherine: In conversations with your industry peers, what common friction points come up when moving AI projects into production?

Naveen: I was with about 20 CIOs just this week, and talent again topped the survey results as a leading obstacle. But in my experience, the root cause is often legacy.

Organizations suffer from legacy systems, SaaS sprawl, on-premises sprawl, and architectural complexity. Over time, whether through inaction or competing priorities, they have not taken decisive action to retire it. But retaining legacy systems is a false economy. Modernization increases speed, while legacy systems drain talent: it becomes difficult to attract and retain top engineers when their primary job is keeping the lights on rather than building modern systems.

Every time I’ve chosen to modernize – whether compute, storage, data architecture, or application layers – I’ve regretted not doing it sooner. Modernization increases productivity, restores a sense of mission and simplifies the environment. This has always been a move with no regrets.

A modern, open architecture lets you plug in the best AI models without ripping out or rewriting your stack, and it provides these benefits:

  • A unified governance layer that reduces data movement complexity.
  • Simplicity and velocity by reducing tool sprawl.
  • The ability to focus top talent on high-value work rather than maintenance.

More often than not, this is the real solution to the talent problem.

The platform decisions that determine whether AI scales

Catherine: What are the key platform decisions that most strongly determine whether AI scales?

Naveen: First, the data layer – both structured and unstructured (which makes up about 80% of enterprise data). You need to bring the two together under a common governance layer. The most important principle is to bring the model to the data, not the data to the model. Moving data across different environments creates complexity and control challenges. An integrated architecture simplifies management and improves security.

It is also important to avoid locking yourself into a single model provider. Frontier models are evolving rapidly. An AI gateway or abstraction layer lets you use multiple models and choose the best one for the task at hand.
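To make the gateway idea concrete, here is a minimal sketch in Python. All names (`ModelGateway`, the stub providers, the task routes) are hypothetical illustrations, not a real API: the point is only that callers name a task, and routing a task to a different provider is a configuration change rather than a code change.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelGateway:
    """Routes each task to a registered model provider."""
    providers: Dict[str, Callable[[str], str]]  # provider name -> completion function
    routes: Dict[str, str]                      # task name -> provider name

    def complete(self, task: str, prompt: str) -> str:
        # Look up which provider currently owns this task, then call it.
        provider = self.providers[self.routes[task]]
        return provider(prompt)

# Stub providers stand in for real model APIs.
gateway = ModelGateway(
    providers={
        "provider_a": lambda p: f"[A] {p}",
        "provider_b": lambda p: f"[B] {p}",
    },
    routes={"summarize": "provider_a", "classify": "provider_b"},
)

print(gateway.complete("summarize", "Q3 pipeline report"))  # served by provider A
gateway.routes["summarize"] = "provider_b"  # re-route without touching any caller
print(gateway.complete("summarize", "Q3 pipeline report"))  # now served by provider B
```

In practice the provider functions would wrap real model endpoints, and the route table would live in configuration so model swaps never require redeploying the applications that call the gateway.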

Finally, treat AI as a core competency by investing heavily in observability, quality, validation, and testing. Development is accelerating; testing is where discipline matters. You can spend 80% of your time on validation and refinement and only 20% on building. And I would add one more: context increasingly matters. AI systems need memory and persistence so that they can improve over time.
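The 80/20 validation discipline described above can be sketched as a fixed evaluation suite that gates deployment on pass rate. This is a toy illustration under assumed names (`evaluate`, the lambda checks); a real suite would cover accuracy, safety, and regression cases.

```python
def evaluate(model, cases, threshold=0.9):
    """Run every (prompt, check) pair against the model; ship only above threshold."""
    passed = sum(1 for prompt, check in cases if check(model(prompt)))
    rate = passed / len(cases)
    return {"pass_rate": rate, "ship": rate >= threshold}

# Toy "model" and checks purely for illustration.
model = lambda p: p.upper()
cases = [
    ("refund policy", lambda out: "REFUND" in out),
    ("order status", lambda out: "ORDER" in out),
]

report = evaluate(model, cases)
print(report)  # {'pass_rate': 1.0, 'ship': True}
```

The useful property is that the suite, not a developer's judgment, decides whether a new model version ships, which is what makes testing the locus of discipline.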

When data and AI are no longer separate conversations

Catherine: What are the consequences of keeping business executives away from data and AI initiatives?

Naveen: In many companies, AI strategy is led by data teams. But it is also a business imperative. Without clean, high-quality enterprise data, AI will not be useful in an enterprise setting. Frontier labs train models on the web; enterprises have to post-train models on their own data. Additionally, innovation can happen at the edge. If you have a consistent data and AI stack with proper authentication and access controls, teams can securely build agents and applications without fragmenting the architecture. The key to sustainable distributed innovation is governance.

Where agentic AI is ready—and where it isn’t

Catherine: Which workflows are most ready for agentic ownership?

Naveen: Beyond software development workflows, which are already mature in their use of AI, we are seeing strong success in go-to-market workflows. Marketing and pre-sales teams are using agents to improve outbound reach and targeting, often outperforming manual processes.

Agents also excel when processing large amounts of information to support decisions. Instead of waiting weeks for ad-hoc reports from analysts, leaders can query data directly and gain instant insight into both structured and unstructured data.

Where agents are not yet ready is in deterministic workflows that require 100% consistency and accuracy. AI can help, but it should not replace human judgment. There is also the risk of what is called “AI slop” – outputs that seem credible but lack depth. Leaders must pair adoption with oversight.

Defining success beyond the hype

Catherine: How do you define success when scaling data and AI?

Naveen: I measure success along four dimensions:

  1. Capacity
  2. Effectiveness and revenue impact
  3. Quality of results
  4. Risk reduction

For AI systems, I also focus on controllable inputs. For example, in a sales AI system, what percentage of data entry is now automated by an agent? That input metric must be related to productivity gains. Or, what percentage of agent recommendations are adopted, and what is their efficacy compared to manual approaches? You can A/B test them. Cycle time reductions and cost savings matter – but only in the context of broader business outcomes.
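The input metrics and A/B comparison described above can be sketched in a few lines. The field names (`source`, `adopted`) and the sample data are illustrative assumptions, not a real schema.

```python
def automation_rate(entries):
    """Share of data entries created by an agent rather than a human."""
    return sum(1 for e in entries if e["source"] == "agent") / len(entries)

def adoption_rate(recs):
    """Share of agent recommendations that users actually adopted."""
    return sum(1 for r in recs if r["adopted"]) / len(recs)

entries = [{"source": "agent"}, {"source": "agent"}, {"source": "manual"}]
recs = [{"adopted": True}, {"adopted": False}, {"adopted": True}, {"adopted": True}]

print(round(automation_rate(entries), 2))  # 0.67
print(round(adoption_rate(recs), 2))       # 0.75

# A/B comparison: mean conversion for agent-assisted vs. manual workflows.
agent_outcomes = [1, 0, 1, 1]
manual_outcomes = [1, 0, 0, 0]
lift = sum(agent_outcomes) / len(agent_outcomes) - sum(manual_outcomes) / len(manual_outcomes)
print(lift)  # 0.5
```

Tracked over time, these input metrics tie agent activity back to the productivity and revenue outcomes the four dimensions capture.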

A 12-month start, stop, continue

Catherine: If you had to give your peers a 12-month start, stop, continue list, what would it be?

Naveen: I would say stop feeding the legacy beast. Stop treating AI governance and security as an afterthought. And avoid replacing SaaS sprawl with agent sprawl: if agents aren’t adopted or don’t provide value, prune them.

Then I would say start with a task-based approach. Instead of replacing entire applications, identify specific tasks that agents can perform better. Build credibility through focused wins. Map out your crawl, walk, run journey. And finally, I would say continue to invest in data and governance – especially for unstructured data. Most importantly, stay business-focused: start with the user, the customer, and the outcome. Technology alone does not create value.

An operational inflection point

The executive inflection point is about operational readiness, modern architecture, integrated governance, disciplined testing, measurable results and business alignment.

AI becomes an operational capability when it moves from experimentation to accountability – when it shows up in KPIs, budget lines, and architectural decisions. Organizations that recognize this shift early will do more than deploy AI. They will build enterprises that are structurally prepared for it.

To learn more about building an effective operational model, download the Databricks AI Maturity Model.
