Lessons from Early Adopters – O’Reilly


My first post made it clear what the semantic layer can bring to the modern enterprise: a single source of truth, accessible to everyone who needs it – BI teams in Tableau and Power BI, Excel-savvy analysts, application integrations via APIs, and the AI agents now proliferating across organizations – all pulling from one governed, performant metrics layer. The promise is compelling. But what happens when organizations actually build and deploy one? To find out, I interviewed several early adopters who have moved semantic layers from concept to production. Four themes emerged from those conversations: some surprising, some predictable, and some that will sound familiar to anyone who has ever shipped data infrastructure.

The first theme: Semantic layers are appearing in unexpected places. Most discussion presents them as enterprise-level infrastructure – a single location that captures all company metrics for centralized access and governance. That is still the primary use case. But practitioners are also deploying semantic layers for narrower purposes. For example, one organization built a semantic layer specifically to power a targeted chatbot application – letting users query data conversationally without any traditional BI tools in the mix. No Power BI, no Excel, just an AI interface pulling from governed metrics. The rationale for these smaller deployments is straightforward: Semantic layers deliver higher accuracy on structured data, even with lightweight models. The key value drivers remain speed, accuracy, and access – but organizations are finding more ways than an enterprise-wide rollout to extract that value.

The second theme: AI is the reason organizations are moving forward now. Other benefits still matter – a single source of truth, multitool compatibility, true self-service access, cost reduction in cloud environments – but when I asked practitioners why they prioritized the semantic layer today rather than two years ago, the answer was consistent: AI. Whether it was a specific chatbot project or enabling AI-powered analytics at scale, AI needs were the catalyst. This goes back to what I discussed in my first post: structured data alone is not enough for reliable AI analytics. Adding semantic context – field descriptions, model definitions, object relationships – dramatically improves accuracy. The data industry has taken notice. Semantic layers have moved from niche infrastructure to a strategic priority: Snowflake, Databricks, dbt Labs, and Microsoft have all made significant investments in the last year.
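To make the idea of "adding semantic context" concrete, here is a minimal sketch of how field descriptions, metric definitions, and relationships might be rendered into the context an AI agent sees before it writes a query. The model structure and all table, field, and metric names here are hypothetical illustrations, not any specific vendor's format.

```python
# Hypothetical semantic model: descriptions, metric logic, relationships.
SEMANTIC_MODEL = {
    "tables": {
        "orders": {
            "description": "One row per customer order.",
            "fields": {
                "order_id": "Unique order identifier.",
                "customer_id": "Foreign key to customers.customer_id.",
                "amount_usd": "Order total in US dollars, after discounts.",
            },
        },
    },
    "metrics": {
        "net_revenue": {
            "description": "Sum of amount_usd over non-refunded orders.",
            "sql": "SUM(amount_usd) FILTER (WHERE NOT refunded)",
        },
    },
    "relationships": [
        "orders.customer_id -> customers.customer_id",
    ],
}

def build_prompt_context(model: dict) -> str:
    """Render the semantic model as plain text an LLM can condition on."""
    lines = []
    for name, table in model["tables"].items():
        lines.append(f"Table {name}: {table['description']}")
        for field, desc in table["fields"].items():
            lines.append(f"  - {field}: {desc}")
    for name, metric in model["metrics"].items():
        lines.append(f"Metric {name}: {metric['description']} ({metric['sql']})")
    lines.extend(f"Relationship: {rel}" for rel in model["relationships"])
    return "\n".join(lines)

print(build_prompt_context(SEMANTIC_MODEL))
```

The point is not the rendering itself but what it gives the model: instead of guessing what `amount_usd` means or inventing a join, the agent reuses vetted definitions, which is where the accuracy gains practitioners described come from.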

The third theme: Semantic layers reduce work for developers while simplifying access to trusted data. Many practitioners cited the value of maintaining metrics and business logic in one place. Any analyst knows the pain of metric sprawl – leadership requests a change to a core KPI, and you find it has been defined a dozen different ways in databases, BI tools, and spreadsheets scattered through the organization. The semantic layer eliminates that sprawl. An engineering lead described a financial metric that had accumulated more than 60 versions across the company. After deploying the semantic layer, there was one.
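The "define once, use everywhere" pattern described above can be sketched in a few lines. This is an illustrative toy, not a real semantic-layer API: the metric name, SQL expression, and table are all hypothetical, but the shape of the idea is accurate – every consumer asks the layer for a metric by name rather than re-implementing its logic.

```python
# One governed registry of metric logic. Changing a definition means
# editing exactly one entry, not hunting through dashboards and sheets.
METRICS = {
    "gross_margin": "SUM(revenue - cogs) / NULLIF(SUM(revenue), 0)",
}

def metric_sql(name: str, table: str = "finance.daily_sales") -> str:
    """Return the single governed SQL statement for a named metric."""
    expr = METRICS[name]  # a KeyError here is a feature: no ad-hoc variants
    return f"SELECT {expr} AS {name} FROM {table}"
```

Every tool that calls `metric_sql("gross_margin")` gets the same expression, which is exactly why the 60-version problem cannot recur: there is nowhere else for a variant definition to live.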


Access also becomes easier. Instead of provisioning access separately across warehouses, BI workspaces, personal dashboards, and cloud storage locations, users connect directly to the semantic layer and pull data into the tools of their choice. One organization was surprised to find that after deployment, the most common access point was Excel. But with the semantic layer, this was not a problem: the data served in Excel was identical to the data driving their AI tools, Power BI dashboards, and application integrations through APIs.
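A small sketch of why that consistency falls out automatically: each consumer formats the result differently, but the number comes from one place. Here `query_layer` is a hypothetical stand-in for a real semantic-layer API call, and the metric name and value are invented for illustration.

```python
def query_layer(metric: str) -> float:
    """Stand-in for a semantic-layer API call: one source of truth."""
    governed = {"monthly_active_users": 48210.0}
    return governed[metric]

def for_excel(metric: str) -> str:
    """CSV cell, as an Excel data connection might pull it."""
    return f"{metric},{query_layer(metric):.0f}"

def for_dashboard(metric: str) -> dict:
    """JSON payload, as a BI tile or application API might consume it."""
    return {"metric": metric, "value": query_layer(metric)}

def for_ai_agent(metric: str) -> str:
    """Natural-language answer, as a chatbot might phrase it."""
    return f"{metric} is currently {query_layer(metric):,.0f}."
```

Excel, the dashboard, and the chatbot can disagree on presentation but never on the value, because none of them computes the metric themselves.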

The fourth theme will be familiar to anyone who has navigated data infrastructure: the biggest challenge is not the technology – it is the data itself. Every practitioner I spoke to identified the same hurdles: the stability, availability, and accuracy of the underlying data. Engineers and analysts can build the semantic layer, but they cannot conjure the data into existence. Success requires close collaboration with business stakeholders, clear ownership of metrics, and leadership alignment to prioritize the work. None of this is new. But despite these challenges, everyone I interviewed came to the same conclusion: The semantic layer is worth the effort.

Semantic layer technology is still maturing. Tools, vendors, and best practices are evolving rapidly – what works today may look different in a year. But these conversations revealed a clear signal beneath the noise: Semantic layers are becoming critical AI infrastructure. The practitioners I spoke with are no longer experimenting; they are implementing. And despite the expected challenges around data quality and organizational alignment, they are seeing real returns: fewer metric versions to maintain, simpler access controls, and AI tools that actually deliver reliable answers.

My first article explained what a semantic layer could be. This one asked what happens when organizations actually build them. The answer: It is hard, it is worth it, and for companies serious about AI-powered analytics, the semantic layer is no longer a nice-to-have. It is the foundation.
