As more interactions with institutional information happen through AI systems, interpretation often occurs before a human ever reads the source. In the systems I work on, most failures are not model failures but interpretation failures caused by missing semantic structure. Structured data tells machines where information lives, but not what it means. Building shared semantic infrastructure across ontologies, knowledge graphs, and governance is becoming a prerequisite for reliable AI.
The Assumption That Shaped Digital Systems
For most of the past two decades, digital transformation largely meant putting information online. Organisations built websites, portals, and platforms to make services accessible. Progress was measured through availability. Is the information there? Can users find it?
That model assumed a human reader.
A person would arrive at a page, read a policy, scan a definition, and interpret what it meant in context. If something was unclear, they might read two pages instead of one. If a rule conflicted with another, they would notice and reconcile it. Human interpretation absorbed a surprising amount of structural ambiguity.
The interaction pattern is now changing in quiet but consequential ways. Increasingly, people do not navigate systems to understand them. They ask AI systems and accept the answer they receive.
The information itself may still live in the same places. The difference is that interpretation often happens before a human ever sees the content.
We built digital systems to be read by humans. Now we expect them to be understood by machines. That is where the gap begins to show.
When Machines Interpret First
Interpretation failures appear before model failures
In the AI systems I work on, the most persistent issues rarely come from model capability. Retrieval works well. Language models generate fluent responses with very little effort.
Where problems appear is in interpretation.
AI systems are good at retrieving fragments of information. They struggle when they need to determine authority: which definition is binding, which rule overrides another, which statement represents the institution’s official position.
When ambiguity exists, the system resolves it probabilistically. It assembles an answer from patterns in the text it retrieves.
The result usually sounds plausible. Sometimes it is correct. It is rarely governed.
A pattern I see repeatedly is that misinformation does not originate from fabrication. It comes from partial understanding. A model compresses meaning across several sources and produces a confident answer that quietly collapses distinctions along the way.
At small scale this appears as a minor error. At organisational scale the effect accumulates. Over time that compression erodes trust in the system providing the answers.
Structured Data Isn’t the Same as Shared Meaning
Why most “structured” information still confuses machines
Many organisations assume their content is already structured. In a technical sense, that is often true: pages contain metadata, APIs expose fields, and databases store clean records.
But most structured data describes documents rather than domains.
It tells machines where information lives, not what it means.
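The distinction can be made concrete with a small sketch. All names, URLs, and predicates below are hypothetical; the point is only the contrast between metadata that locates a document and assertions that state what a term means.

```python
# Document-level structure: says WHERE the information lives.
page_metadata = {
    "url": "/policies/remote-work",   # hypothetical page
    "title": "Remote Work Policy",
    "format": "text/html",
    "last_updated": "2024-03-01",
}

# Domain-level semantics: says WHAT the information means.
# Modelled loosely as subject-predicate-object triples.
semantic_assertions = [
    ("RemoteWork", "is_a", "WorkArrangement"),
    ("RemoteWork", "defined_by", "/policies/remote-work#definition"),
    ("RemoteWork", "definition_status", "binding"),
    ("RemoteWork", "has_exception", "OnCallDuty"),
]

def facts_about(concept, triples):
    """Collect every assertion whose subject is the given concept."""
    return [(p, o) for s, p, o in triples if s == concept]
```

A machine holding only `page_metadata` can fetch the page; a machine holding the assertions can also tell that the definition is binding and that an exception exists elsewhere.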
A policy might be clearly written. An exception may exist in another document. A definition may sit on a glossary page maintained by a different team. Humans read across these pieces and reconcile them instinctively.
Machines do not.
As long as people were the primary interpreters, the arrangement worked reasonably well. Once AI systems began mediating access to information, the ambiguity that humans quietly resolved became an operational risk.
In practice, the issue is rarely missing content. The content already exists.
What is missing is semantics.
The Missing Layer
Where semantic infrastructure actually sits
Semantic infrastructure sits between raw information and applications. It defines what things are, how they relate to one another, and how meaning is shared across systems.
In most implementations, this layer appears through several components working together. Ontologies define the concepts within a domain and the relationships between them. Knowledge graphs connect distributed information sources to those concepts. Schema governance manages how definitions evolve and who owns them.
The important point is that this layer does not belong to a single application.
It sits at the level of a domain or institution and is reused everywhere. The same semantic layer can support websites, APIs, internal copilots, automated workflows, and any other interface that needs to interpret the organisation’s information.
Once that structure exists, AI systems stop guessing relationships between concepts. They inherit them.
This is a structural change in how digital systems are organised. Network infrastructure enables applications regardless of interface. Semantic infrastructure plays a similar role for machine interpretation. It provides a stable layer that systems rely on regardless of which model, channel, or vendor is used.
When semantics are treated as application logic, fragmentation follows. Every system invents its own interpretation of the same concepts.
When semantics are treated as infrastructure, systems begin to align.
Why Bigger Models Won’t Solve It
Pattern recognition cannot replace institutional definitions
There is a persistent belief that more capable models will eventually close these interpretation gaps. Larger context windows and retrieval pipelines certainly improve recall, but they do not establish authority.
Language models detect patterns in text. They do not inherently distinguish between definition and description, policy and commentary, rule and exception. Without explicit semantic structure, they approximate the relationships between concepts rather than relying on formal ones.
In effect, we ask predictive systems to behave like reference systems.
Sometimes they approximate that role convincingly. The difficulty is that approximation is not the same as institutional meaning. Without an authoritative semantic layer, the system still has to infer relationships that should have been defined.
Why This Is an Infrastructure Question
AI changes where interpretation happens
The challenge is not confined to any particular sector. Any organisation whose information is accessed through AI assistants will encounter the same issue.
Once machines mediate access to information, correct interpretation becomes foundational.
Semantic consistency starts to behave like other infrastructure layers. Automation depends on it. Interoperability depends on it. Reliable AI depends on it.
Digital maturity used to focus on accessibility. Information had to be reachable online.
The next step is interpretability. Systems need to understand what that information means.
Governance Moves Up the Stack
Meaning becomes an institutional responsibility
Historically, governance discussions around data focused on privacy, access control, and compliance. Semantic infrastructure introduces a different layer of governance, where the questions move closer to meaning itself.
Someone has to decide which definitions are official. Schemas need to evolve without fragmenting across departments. Conflicts between domains need resolution. When multiple interpretations exist, the organisation has to determine which one is operationally binding.
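What an operationally binding choice looks like can be sketched in a few lines. The source classes, ranks, and definitions below are invented; the point is that precedence is declared explicitly rather than inferred by a model from retrieved text.

```python
# Lower rank means higher authority; the classes are illustrative.
AUTHORITY_RANK = {"statute": 0, "policy": 1, "guideline": 2, "wiki": 3}

# Hypothetical competing definitions of the same term.
candidates = [
    {"term": "resident", "source": "wiki",
     "text": "Someone who lives here."},
    {"term": "resident", "source": "policy",
     "text": "A person domiciled for 183 or more days per year."},
    {"term": "resident", "source": "guideline",
     "text": "See the policy definition."},
]

def binding_definition(term, candidates):
    """Pick the definition from the most authoritative source class,
    instead of letting a model blend all candidates probabilistically."""
    matching = [c for c in candidates if c["term"] == term]
    if not matching:
        return None
    return min(matching, key=lambda c: AUTHORITY_RANK[c["source"]])
```

Here the policy definition wins because the organisation said so in advance, and the decision is traceable: the rank table is the governance artefact, and changing it is an institutional act, not a tuning exercise.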
These decisions are not primarily technical; they are institutional.
Without clear ownership of meaning, AI systems operate on inferred interpretation. As automation expands across decision processes, that becomes a risky foundation.
A system that cannot trace its reasoning back to an authoritative definition will always carry a degree of uncertainty, regardless of how fluent its output appears.
The Next Stage of Digital Maturity
We often talk about the adoption of AI as if organisations are upgrading intelligence.
In practice, much of the work ahead looks more like an upgrade in understanding.
Institutions that recognise this tend to treat semantics as shared infrastructure. Definitions are managed at the domain level, relationships between concepts are made explicit, and systems inherit that structure rather than recreating it.
Others continue focusing on tuning model outputs while the underlying ambiguity remains unchanged.
Machines are increasingly capable of producing answers, but whether those answers reflect institutional meaning depends on a layer that most digital systems have never formally built.
Shared meaning, expressed in a form machines can interpret, is quietly becoming one of the most important pieces of infrastructure organisations will need to develop.