How Taxonomies and Ontologies Enable Explainable AI

Taxonomy and ontology models are essential to unlocking the value of knowledge assets. They provide the structure needed to connect fragmented information across an organization, enabling explainable AI. As part of a broader Knowledge Intelligence (KI) strategy, these models help reduce hallucinations and make AI-generated content more trustworthy. This blog provides an overview of why taxonomies and ontologies are essential for connecting disparate knowledge assets within an organization and improving the quality and accuracy of AI-generated content.

 

The Anatomy of AI

A conceptual analogy helps illustrate how taxonomies and ontologies support AI. While inspired by the human musculoskeletal system, the analogy is not meant to be anatomically accurate; rather, it shows how taxonomies provide foundational structure and ontologies enable flexible, contextual connections among knowledge assets within AI systems.

Just as the musculoskeletal system gives structure, support, and coherence to the human body, taxonomies and ontologies provide the structural framework that organizes and contextualizes knowledge assets for AI. In this analogy:

Taxonomies: the spine and the bones. They form the hierarchical backbone for categorizing and organizing the concepts that describe an organization’s core knowledge assets.

Ontologies: the joints, ligaments, and muscles. They provide the flexibility to connect related concepts across assets in an organization’s knowledge domain.

Depending on the organization’s domain or industry, certain types of knowledge assets become more relevant or strategically important. In the case of a healthcare organization, key knowledge assets may include content such as patients’ electronic health records, clinical guidelines and protocols, multidisciplinary case reviews, and research publications, as well as data such as diagnostic data and clinical trial data. Taxonomies that capture and group together key concepts, such as illnesses, symptoms, treatments, outcomes, medicines, and clinical specialties, can be used to tag and structure these assets. Continuing with the same scenario, an ontology in a healthcare organization can incorporate those key concepts (entities) from the taxonomy, along with their properties and relationships, to enable alignment and consistent interpretation of knowledge assets across systems. Together, taxonomies and ontologies make it possible to connect, for instance, a patient’s health record with diagnostic data and previous case reviews for other patients with the same (or similar) conditions, including illnesses, symptoms, treatments, and medicines. As a result, healthcare professionals can quickly access the information they need to make well-informed decisions about a patient’s care.
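To make the healthcare example concrete, here is a minimal sketch in Python using rdflib, assuming a hypothetical namespace and invented record identifiers rather than any real clinical model. It shows how a taxonomy concept (an illness) and a few ontology-style relationships can link a patient record to prior case reviews and guidelines.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/health/")  # hypothetical namespace for illustration

g = Graph()

# Taxonomy concept: an illness used to tag knowledge assets
g.add((EX.Type2Diabetes, RDF.type, SKOS.Concept))
g.add((EX.Type2Diabetes, SKOS.prefLabel, Literal("Type 2 Diabetes")))

# Ontology-style relationships linking knowledge assets to the shared concept
g.add((EX.PatientRecord_123, EX.hasDiagnosis, EX.Type2Diabetes))
g.add((EX.CaseReview_77, EX.discussesCondition, EX.Type2Diabetes))
g.add((EX.ClinicalGuideline_9, EX.coversCondition, EX.Type2Diabetes))

# Find other assets connected to the patient's diagnosis, along with the relationship used
query = """
SELECT ?asset ?relation WHERE {
    ?record <https://example.org/health/hasDiagnosis> ?condition .
    ?asset ?relation ?condition .
    FILTER (?asset != ?record)
}
"""
for asset, relation in g.query(query):
    print(f"{asset} connected via {relation}")
```

Because every connection is an explicit, named relationship, a reviewer can always ask which path linked a record to a case review or guideline, which is the essence of the explainability described below.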

 

Where AI is Failing

AI has repeatedly failed to provide reliable information to employees, customers, and patients, undermining their confidence in AI-supported systems and sometimes leading to serious organizational consequences. You may be familiar with the case in which a medical association’s chatbot unintentionally gave harmful advice to people with eating disorders. Or perhaps you heard about the bank whose faulty AI system misclassified thousands of transactions as fraudulent due to a programming error, resulting in significant customer dissatisfaction and harming the organization’s reputation. There was also a case in which an AI-powered translation system failed to accurately assess asylum seekers’ applications, raising serious concerns about fairness and accuracy and potentially affecting critical life decisions for those applicants. In each of these cases, had the AI systems effectively aggregated both unstructured and structured knowledge assets and reliably linked them to encoded expert knowledge and relevant business context, the outcomes would have been very different. By leveraging taxonomies and ontologies to aggregate key knowledge assets, the results would have aligned far more closely with the intended objectives, ultimately benefiting the end users these systems were built to serve.

 

How Taxonomies And Ontologies Enable Explainable AI

When knowledge assets are consistently tagged with taxonomies and related via ontologies, AI systems can trace how a decision was made. This means that end users can understand the reasoning path, supported by defined relationships. This also means that bias and hallucinations can be more easily detected by auditing the semantic structure behind the results.

As illustrated in the healthcare organization example, diagnoses can be tagged with medical industry taxonomies, while ontologies can help create relationships among symptoms, treatments, and outcomes. This can help physicians tailor treatments to individual patient needs by leveraging past patient cases and the collective expertise of other physicians. Similarly, a retail organization can enhance its customer service by implementing a chatbot that is linked to structured product taxonomies and ontologies to help deliver consistent and explainable answers about products to customers. More consistent and trustworthy customer interactions streamline end-user support and strengthen brand confidence.

 

Do We Really Need Taxonomies and Ontologies to be Successful With AI?

The examples above illustrate that explainability in AI really matters. Whether end users are patients, bank customers, or any individuals requesting specific products or services, they all want more transparent, trustworthy, and human-centered AI experiences. Taxonomies and ontologies help provide structure and connectedness to content, documents, data, expert knowledge, and overall business context, so that all of it is machine-readable and findable by AI systems at the moment of need, ultimately creating meaningful interactions for end users.

 

Conclusion

Just like bones, joints, ligaments, and muscles in the human body, taxonomies and ontologies provide the essential structure and connection that allow AI systems to stand up to testing, be reliable, and perform with clarity. At EK, we have extensive experience identifying key knowledge assets, as well as designing and implementing taxonomies and ontologies to successfully support AI initiatives. If you want to improve the Knowledge Intelligence (KI) of your existing or future AI applications and need help with your taxonomy and ontology efforts, don’t hesitate to get in touch with us.

How to Leverage LLMs for Auto-tagging & Content Enrichment

When working with organizations on key data and knowledge management initiatives, we’ve often noticed that a common roadblock is the lack of quality (relevant, meaningful, or up-to-date) existing content. Stakeholders may be excited to get started with advanced tools as part of their initiatives, like graph solutions, personalized search solutions, or advanced AI solutions; however, without a strong backbone of semantic models and context-rich content, these solutions are significantly less effective. For example, without proper tags and content types, a knowledge portal development effort can’t fully demonstrate the value of faceting and aggregating pieces of content and data together in ‘knowledge panes’. With a more semantically rich set of content to work with, the portal can begin showing value through search, filtering, and aggregation, leading to further organizational and leadership buy-in.

One key step in preparing content is the application of metadata and organizational context to pieces of content through tagging. There are several tagging approaches an organization can take to enrich pre-existing content with metadata and organizational context, including manual tagging, automated tagging capabilities from a taxonomy and ontology management system (TOMS), using apps and features directly from a content management solution, and various hybrid approaches. While many of these approaches, in particular acquiring a TOMS, are recommended as a long-term auto-tagging solution, EK has recommended and implemented Large Language Model (LLM)-based auto-tagging capabilities across several recent engagements. Due to LLM-based tagging’s lower initial investment compared to a TOMS and its greater efficiency than manual tagging, these auto-tagging solutions have been able to provide immediate value and jumpstart the process of re-tagging existing content. This blog will dive deeper into how LLM tagging works, the value of semantics, technical considerations, and next steps for implementing an LLM-based tagging solution.

Overview of LLM-Based Auto-Tagging Process

Similar to existing auto-tagging approaches, the LLM suggests tags by parsing through a piece of content, processing and identifying key phrases, terms, or structure that gives the document context. Through prompt engineering, the LLM is then asked to compare the similarity of key semantic components (e.g., named entities, key phrases) with various term lists, returning a set of terms that could be used to categorize the piece of content. These responses can be adjusted in the tagging workflow to only return terms that meet a specified similarity threshold. The tagging results are then exported to a data store and applied to the content source. Many factors, including the particular LLM used, the knowledge an LLM is working with, and the source location of content, can greatly impact tagging effectiveness and accuracy. In addition, adjusting parameters, taxonomies/term lists, and/or prompts to improve precision and recall can ensure tagging results align with an organization’s needs. The final step is the auto-tagging itself: the application of the tags in the source system, which could take the form of a script or workflow that applies the stored tags to pieces of content.

Figure 1: High-level steps for LLM content enrichment
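As a minimal sketch of this workflow, the example below asks an LLM to select tags for a document from a controlled term list and keeps only suggestions above a confidence threshold. It assumes the OpenAI Python client; the model name, taxonomy terms, prompt, and threshold are placeholders that would be tuned to your organization’s content and tooling.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key in the environment

client = OpenAI()

# Hypothetical taxonomy fragment; in practice this comes from your taxonomy or TOMS export
TAXONOMY_TERMS = ["Membership", "Advocacy", "Professional Development", "Annual Conference"]

def suggest_tags(document_text: str, threshold: float = 0.7) -> list[dict]:
    """Ask the LLM to score each taxonomy term against the document, then filter by threshold."""
    prompt = (
        "You are tagging content for a trade association.\n"
        f"Controlled taxonomy terms: {TAXONOMY_TERMS}\n"
        "Return a JSON array of objects with 'term' and 'confidence' (0-1) for the terms "
        "that describe the document below.\n\n"
        f"Document:\n{document_text[:4000]}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model is available in your environment
        messages=[{"role": "user", "content": prompt}],
    )
    # A production workflow would validate and repair the JSON before parsing
    suggestions = json.loads(response.choices[0].message.content)
    return [
        s for s in suggestions
        if s["term"] in TAXONOMY_TERMS and s["confidence"] >= threshold
    ]

# tags = suggest_tags(open("annual_report.txt").read())
# The accepted tags would then be written to a data store and applied in the source CMS.
```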

EK has put these steps into practice, for example, when engaging with a trade association on a content modernization project to migrate and auto-tag content into a new content management system (CMS). The organization had been struggling with content findability, standardization, and governance, in particular, the language used to describe the diverse areas of work the trade association covers. As part of this engagement, EK first worked with the organization’s subject matter experts (SMEs) to develop new enterprise-wide taxonomies and controlled vocabularies integrated across multiple platforms to be utilized by both external and internal end-users. To operationalize and apply these common vocabularies, EK developed an LLM-based auto-tagging workflow utilizing the four high-level steps above to auto-tag metadata fields and identify content types. This content modernization effort set up the organization for document workflows, search solutions, and generative AI projects, all of which are able to leverage the added metadata on documents. 

Value of Semantics with LLM-Based Auto-Tagging

Semantic models such as taxonomies, metadata models, ontologies, and content types can all be valuable inputs to guide an LLM on how to effectively categorize a piece of content. When considering how an LLM is trained for auto-tagging content, a greater emphasis needs to be put on organization-specific context. If using a taxonomy as a training input, organizational context can be added through weighting specific terms, increasing the number of synonyms/alternative labels, and providing organization-specific definitions. For example, by providing organizational context through a taxonomy or business glossary that the term “Green Account” refers to accounts that have met a specific environmental standard, the LLM would not accidentally tag content related to the color green or an account that is financially successful.

Another benefit of an LLM-based approach is the ability to evolve both the semantic model and the LLM as tagging results are received. As sets of tags are generated for an initial set of content, the taxonomies and content models being used to train the LLM can be refined to better fit the specific organizational context. This could look like adding alternative labels, adjusting the definitions of terms, or adjusting the taxonomy hierarchy. Similarly, additional tools and techniques, such as weighting and prompt engineering, can tune the results provided by the LLM to achieve higher recall (the rate at which the LLM includes the correct terms) and precision (the rate at which the LLM selects only correct terms) when recommending terms. One example of this is adding a weight from 0 to 10 to every taxonomy term and assigning higher scores to terms the organization prefers to use. The workflow developed alongside the LLM can use this context to include or exclude a particular term.
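The snippet below sketches how such weighting might be applied after the LLM returns candidate terms: each taxonomy term carries an organization-assigned weight from 0 to 10, and the workflow blends it with the LLM’s confidence to rank or exclude suggestions. The weights, terms, and scoring formula are illustrative assumptions rather than a prescribed standard.

```python
# Hypothetical taxonomy weights (0-10) reflecting organizational preference
TERM_WEIGHTS = {
    "Green Account": 9,   # preferred, organization-specific term
    "Sustainability": 6,
    "Account": 2,         # too generic to be a useful tag on its own
}

def rank_suggestions(suggestions: list[dict], min_score: float = 0.5) -> list[dict]:
    """Combine LLM confidence with taxonomy weights to rank and filter candidate tags."""
    ranked = []
    for s in suggestions:
        weight = TERM_WEIGHTS.get(s["term"], 0) / 10   # normalize weight to 0-1
        score = 0.6 * s["confidence"] + 0.4 * weight   # illustrative blend of the two signals
        if score >= min_score:
            ranked.append({**s, "score": round(score, 2)})
    return sorted(ranked, key=lambda s: s["score"], reverse=True)

print(rank_suggestions([
    {"term": "Green Account", "confidence": 0.8},
    {"term": "Account", "confidence": 0.9},
]))
# "Green Account" outranks the generic "Account" despite a lower raw confidence.
```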

Implementation Considerations for LLM-Based Auto-Tagging 

Several factors, such as the timeframe, volume of information, necessary accuracy, types of content management systems, and desired capabilities, inform the complexity and resources needed for LLM-based content enrichment. The following considerations expand upon the factors an organization must weigh for effective LLM content enrichment.

Tagging Accuracy

The accuracy of tags from an LLM directly impacts end users and systems (e.g., search instances or dashboards) that utilize the tags. Safeguards need to be implemented to ensure end users can trust the accuracy of the tagged content they are using. These safeguards help ensure that a user does not mistakenly access or use a particular document, or become frustrated by the results they get. To mitigate both of these concerns, high recall and precision scores in LLM tagging improve overall accuracy and lower the chance of miscategorization. This can be done by investing further in human test-tagging and input from SMEs to create a gold-standard set of tagged content as training data for the LLM. The gold-standard set can then be used to adjust how the LLM weights or prioritizes terms, based on the organizational context in the gold-standard set. These practices will help to avoid hallucinations (factually incorrect or misleading content) that could appear in applications utilizing the auto-tagged set of content.
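One simple way to act on a gold-standard set is to score the LLM’s suggestions against it before rolling tagging out more broadly. The sketch below computes per-document precision and recall in plain Python; the document IDs and tags are invented for illustration.

```python
def precision_recall(predicted: set[str], gold: set[str]) -> tuple[float, float]:
    """Precision: share of predicted tags that are correct.
    Recall: share of gold-standard tags the LLM actually suggested."""
    if not predicted or not gold:
        return 0.0, 0.0
    true_positives = len(predicted & gold)
    return true_positives / len(predicted), true_positives / len(gold)

gold_standard = {  # SME-tagged documents (illustrative)
    "doc-001": {"Membership", "Advocacy"},
    "doc-002": {"Annual Conference"},
}
llm_tags = {       # what the LLM suggested for the same documents
    "doc-001": {"Membership", "Professional Development"},
    "doc-002": {"Annual Conference", "Advocacy"},
}

for doc_id, gold in gold_standard.items():
    p, r = precision_recall(llm_tags[doc_id], gold)
    print(f"{doc_id}: precision={p:.2f}, recall={r:.2f}")
# Low scores point to terms whose weights, synonyms, or definitions need adjustment.
```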

Content Repositories

One factor that greatly adds technical complexity is accessing the various types of content repositories that an LLM solution, or any auto-tagging solution, needs to read from. The best content management practice for auto-tagging is to read content in its source location, limiting the risk of duplication and the effort needed to download and then read content. When developing a custom solution, each content repository often needs a distinctive approach to read and apply tags. A content or document repository like SharePoint, for example, has a robust API for reading content and seamlessly applying tags, while a less widely adopted platform may not have the same level of support. It is important to account for the unique needs of each system in order to limit the disruption end-users may experience when embarking on a tagging effort.

Knowledge Assets

When considering the scalability of the auto-tagging effort, it is also important to evaluate the breadth of knowledge asset types being analyzed. While the ability of LLMs to process several types of knowledge assets has been growing, each step of additional complexity, particularly evaluating multiple asset types, can require additional resources and time to read and tag documents. A PDF document with 2-3 pages of content will take far fewer tokens and resources for an LLM to read than a long visual or audio asset. Going from a tagging workflow of structured knowledge assets to tagging unstructured content will increase the overall time, resources, and custom development needed to run a tagging workflow.

Data Security & Entitlements

When utilizing an LLM, it is recommended that an organization invest in a private or in-house LLM to complete the analysis, rather than leveraging a publicly available model. Notably, the LLM does not need to be ‘on-premises’, as several providers offer options for running LLMs in your company’s own environment. This ensures a higher level of document security and additional features for customization. Particularly when tackling use cases with higher levels of personal information and access controls, a robust mapping of content and an understanding of what needs to be tagged are imperative. As an example, if a publicly facing LLM were reading confidential documents on how to develop a company-specific product, this information could then be leveraged in other public queries and would have a higher likelihood of being accessed outside of the organization. In an enterprise data ecosystem, running an LLM-based auto-tagging solution can raise red flags around data access, controls, and compliance. These challenges can be addressed through a Unified Entitlements System (UES) that creates a centralized policy management system for both end users and the LLM solutions being deployed.

Next Steps

One major consideration with an LLM tagging solution is maintenance and governance over time. For some organizations, after completing an initial enrichment of content by the LLM, a combination of manual tagging and forms within each CMS helps them maintain tagging standards over time. However, a more mature organization that is dealing with several content repositories and systems may want to either operationalize the content enrichment solution for continued use or invest in a TOMS. With either approach, completing an initial LLM enrichment of content is a key method to prove the value of semantics and metadata to decision-makers in an organization. 
Many technical solutions and initiatives that excite both technical and business stakeholders can be actualized by an LLM content enrichment effort. By having content that is tagged and adhering to semantic standards, solutions like knowledge graphs, knowledge portals, and semantic search engines, or even an enterprise-wide LLM Solution, are upgraded even further to show organizational value.

If your organization is interested in upgrading your content and developing new KM solutions, contact us!

Defining Governance and Operating Models for AI Readiness of Knowledge Assets

Artificial intelligence (AI) solutions continue to capture both the attention and the budgets of many organizations. As we have previously explained, a critical factor to the success of your organization’s AI initiatives is the readiness of your content, data, and other knowledge assets. When correctly executed, this preparation will ensure your knowledge assets are of the appropriate quality and semantic structure for AI solutions to leverage with context and inference, while identifying and exposing only the appropriate assets to the right people through entitlements.

This, of course, is an ongoing challenge rather than a moment-in-time initiative. To ensure the important work you’ve done to get your content, data, and other assets AI-ready is not lost, you need governance as well as an operating model to guide it. Indeed, well before any AI readiness initiative begins, governance and the operating model behind it must be top of mind.

Governance is not a new term within the field. Historically, we’ve identified four core components to governance in the context of content or data:

  • Business Case and Measurable Success Criteria: Defining the value of the solution and the governance model itself, as well as what success looks like for both.
  • Roles and Responsibilities: Defining the individuals and groups necessary for governance, as well as the specific authorities and expectations of their roles.
  • Policies and Procedures: Detailing the timelines, steps, definitions, and actions for the associated roles to play.
  • Communications and Training: Laying out the approach to two-way communications between the associated governance roles/groups and the various stakeholders.

These traditional components of governance all have held up, tried and true, over the quarter-century since we first defined them. In the context of AI, however, it is important to go deeper and consider the unique aspects that artificial intelligence brings into the conversation. Virtually every expert in the field agrees that AI governance should be a priority for any organization, but that must be detailed further in order to be meaningful.

In the context of AI readiness for knowledge assets, we focus AI governance, and more broadly its supporting operating model, on five key elements for success:

  • Coordination and Enablement Over Execution
  • Connection Instead of Migration
  • Filling Gaps to Address the Unanswerable Questions
  • Acting on “Hallucinations”
  • Embedding Automation (Where It Makes Sense)

There is, of course, more to AI governance than these five elements, but in the context of AI readiness for knowledge assets, our experience shows that these are the areas where organizations should be focusing and shifting away from traditional models. 

1) Coordination and Enablement Over Execution

In traditional governance models (i.e. content governance, data governance, etc.), most of the work was done in the context of a single system. Content would be in a content management system and have a content governance model. Data would be in a data management solution and have a data governance model. The shift is that today’s AI governance solutions shouldn’t care what types of assets you have or where they are housed. This presents an amazing opportunity to remove artificial silos within an organization, but brings a marked challenge. 

If you were previously defining a content governance model, you most likely possessed some level of control or ownership over your content and document management systems. Likewise, if you were in charge of data governance, you likely “owned” some or all of the major data solutions, like master data management or a data warehouse, within your organization. With AI, however, an enormous benefit of a correctly architected enterprise AI solution that leverages a semantic layer is that it doesn’t require you to own those source systems. The systems housing the content, data, and other knowledge assets are likely, at least in part, managed by other parts of your organization. In other words, in an AI world, you have less control over the sources of the knowledge assets, and thereby over the knowledge assets themselves. This may well change as organizations evolve in the “Age of AI,” but for now, the role and responsibility of AI governance becomes more about coordination and less about execution or enforcement.

In practice, this means an AI Governance for Knowledge Asset Readiness group must coordinate with the owners of the various source systems for knowledge assets, providing additive guidance to define what it means to have AI-ready assets as well as training and communications to enable and engage system and asset owners to understand what they must do to have their content, data, and other assets included within the AI models. The word “must” in the previous sentence is purposeful. You alone may not possess the authority of an information system owner to define standards for their assets, but you should have the authority to choose not to include those assets within the enterprise AI solution set.

How do you apply that authority? As the lines continue to blur between the purview of KM, Data, and AI teams, this AI Governance for Knowledge Asset Readiness group should comprise representatives from each of these once siloed teams to co-own outcomes as new AI use cases, features, and capabilities are developed. The AI governance group should be responsible for delineating key interaction points and expected outcomes across teams and business functions to build alignment, facilitate planning and coordination, and establish expectations for business and technical stakeholders alike as AI solutions evolve. Further, this group should define what it means (and what is required) for an asset to be AI-ready. We cover this in detail in previous articles, but in short, this boils down to semantic structure, quality, and entitlements as the three core pillars to AI readiness for knowledge assets. 

2) Connection Instead of Migration

The idea of connections over migration aligns with the previous point. Past monolithic efforts in your organization would commonly have included massive migrations and consolidations of systems and solutions. The roadmaps of past MDMs, data warehouses, and enterprise content management initiatives are littered with failed migrations. Again, part of the value of an enterprise AI initiative that leverages a semantic layer, or at least a knowledge graph, is that you don’t need to absorb the cost, complexity, and probable failure of a massive migration. 

Instead, the role of the AI Governance for Knowledge Asset Readiness group is one of connection. Once the group has set the expectations for AI-ready knowledge assets, the next step is to ensure the systems that house those assets are connected and available, ready to be ingested and understood by the enterprise AI solutions. This can be a highly iterative process, not to be rushed, as the quality of the assets ingested by AI is more important than their breadth. Said differently, you have few chances to deliver wrong answers: end users will quickly lose trust in a solution that delivers information they know is unmistakably incorrect, but if they receive an incomplete answer instead, they will be more likely to flag it and continue to engage. The role of this AI governance group is to ensure the right systems and their assets are reliably available to the AI solution(s) at the right time, after your knowledge assets have passed through the appropriate requirements.

 

3) Filling Gaps to Address the Unanswerable Questions

As AI solutions are deployed, the focus of AI governance shifts from proactive to reactive, and this brings a great opportunity that bears particular attention. In the history of knowledge management, and more broadly the fields of content management, data management, and information management, there has always been a creeping concern that an organization “doesn’t know what it doesn’t know.” What are the gaps in knowledge? What are the organizational blind spots? These questions have been nearly impossible to answer at the enterprise level. With enterprise-level AI solutions implemented, however, this awareness suddenly becomes possible.

Even before deploying AI solutions, a well-designed semantic layer can help pinpoint organizational knowledge gaps by finding taxonomy elements with few or no knowledge assets tagged against them. This potential is magnified once the AI solution is fully deployed. Today’s mature AI solutions are “smart” enough to know when they can’t answer a question and to surface that unanswerable question to the AI governance group. Imagine possessing the organizational intelligence to know what your colleagues are seeking to understand, and having insight into what they are trying to learn or answer but currently cannot.

In this way, once an AI solution is deployed, the primary role of the AI governance group should be to diagnose and then respond to these automatically identified knowledge gaps, using its standards to fill them. It may be that the information does, in fact, exist within the enterprise, but the AI solution wasn’t connected to those knowledge assets. Alternatively, it may be that the right semantic structure wasn’t applied to the assets, resulting in a missed connection and a false gap reported by the AI. It may also be that the answer to the “unanswerable” question exists only as tacit knowledge in the heads of the organization’s experts, or doesn’t exist at all. Surfacing and closing these gaps is the core value of the field of knowledge management, and it has never been more achievable.
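A lightweight version of this feedback loop can be automated. The sketch below assumes a retrieval-augmented answering function that reports how well a question was grounded in connected knowledge assets; questions falling below a grounding threshold are logged to a gap register for the governance group to review. The pipeline stub, threshold, and file location are hypothetical placeholders.

```python
import csv
from datetime import datetime, timezone

GAP_REGISTER = "knowledge_gaps.csv"   # illustrative location for the gap log
GROUNDING_THRESHOLD = 0.4             # illustrative cutoff for "unanswerable"

def retrieve_and_answer(question: str) -> tuple[str, float]:
    """Stand-in for a real retrieval-augmented pipeline: returns an answer and a
    0-1 score for how well that answer is grounded in connected knowledge assets."""
    return "No well-connected assets found for this question.", 0.1

def log_knowledge_gap(question: str, score: float) -> None:
    """Record an unanswerable question for the AI governance group to triage."""
    with open(GAP_REGISTER, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), question, round(score, 2)])

def answer_with_gap_detection(question: str) -> str:
    answer, grounding = retrieve_and_answer(question)
    if grounding < GROUNDING_THRESHOLD:
        log_knowledge_gap(question, grounding)
        return "I don't have enough connected knowledge to answer this reliably yet."
    return answer

print(answer_with_gap_detection("What is our policy on supplier risk scoring?"))
```

The gap register then becomes the governance group’s queue: each entry is diagnosed as a missing connection, a missing semantic structure, or genuinely missing knowledge.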

4) Acting on “Hallucinations”

Aligned with the idea of filling gaps, a similar role for the AI governance group is to address hallucinations, or failures of AI to deliver an accurate, consistent, and complete “answer.” For organizations attempting to implement enterprise AI, a hallucination is little more than a cute word for an error, and should be treated as such by the AI governance group. There are many reasons for these errors, ranging from poor-quality (i.e., wrong, outdated, near-duplicate, or conflicting) knowledge assets, to insufficient semantic structure (e.g., taxonomy, ontology, or a business glossary), to poor logic built into the model itself. Any of these issues should be addressed with immediate action. Your organization’s end users will quickly lose trust in an AI solution that delivers inaccurate results. Your governance model and associated organizational structure must be equipped to act quickly: first, to leverage communications and feedback channels so that end users tell you when they believe something is inaccurate or incomplete, and then to diagnose and address it.

As a note, for the most mature organizations, this action won’t be entirely reactive. Organizational subject matter experts will be involved in perpetuity, especially right before and after enterprise AI deployment, to hunt for errors in these systems. You can think of this governance function as the “Hallucination Killers” within your organization, and it is likely to be one of the most critical functions as AI continues to expand.

5) Embedding Automation (Where It Makes Sense)

Finally, one of the most important roles of an AI governance group will be to use AI to make AI better. Almost everything described above can be automated. AI can and should be used to automate the identification of knowledge gaps and to help close them by pinpointing organizational subject matter experts and prompting them to deliver their learning and experience at the right moments. It can also play a major role in applying the appropriate semantic structure to knowledge, through tagging of taxonomy terms as metadata or identification of potential terms for inclusion in a business glossary. Central to all of this automation, however, is ensuring the ‘human is in the loop’: the AI governance group plays an advisory and oversight role throughout these automations so that the design doesn’t fall out of alignment. This element further facilitates AI governance coordination across the organization by supporting stakeholders and knowledge asset stewards through technical enablement.
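As one illustration of keeping the human in the loop, the sketch below routes automated tag suggestions either to direct application or to a review queue for knowledge asset stewards, based on a confidence threshold. The threshold, data shapes, and routing targets are assumptions for illustration, not a prescribed design.

```python
AUTO_APPLY_THRESHOLD = 0.9  # illustrative: below this, a human steward reviews the suggestion

applied_tags: list[dict] = []
review_queue: list[dict] = []

def route_suggestion(asset_id: str, term: str, confidence: float) -> None:
    """Apply high-confidence tags automatically; queue the rest for steward review."""
    suggestion = {"asset": asset_id, "term": term, "confidence": confidence}
    if confidence >= AUTO_APPLY_THRESHOLD:
        applied_tags.append(suggestion)   # in practice: write back to the source system
    else:
        review_queue.append(suggestion)   # in practice: surface in a steward dashboard

route_suggestion("policy-123", "Data Retention", 0.95)      # applied automatically
route_suggestion("policy-123", "Records Management", 0.62)  # sent to human review

print(f"auto-applied: {len(applied_tags)}, awaiting review: {len(review_queue)}")
```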

All of this presents a world of possibility. Governance was historically one of the drier and more esoteric concepts within the field, often where good projects went bad. We have the opportunity to do governance better by leveraging AI in the areas where humans historically fell short, while maintaining the important role of human experts with the right authority to ensure organizational alignment and value.

If your AI efforts aren’t yet yielding the results you expected, or you’re seeking to get things started right from the beginning, contact EK to help you.

Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer

Today’s organizations are flooded with opportunities to apply AI and advanced data experiences, but many struggle with where to focus first. Leaders are asking questions like: “Which AI use cases will bring the most value? How can we connect siloed data to support them?” Without a clear strategy, eager startups and vendors make it easy to spin wheels on experiments that never scale. As more organizations recognize the value of meaningful, connected data experiences via a Semantic Layer, many find themselves unsure of how to begin their journey, or how to sustain meaningful progress once they begin.

A well-defined Semantic Layer strategy is essential to avoid costly missteps in planning or execution, secure stakeholder alignment and buy-in, and ensure long-term scalability of models and tooling.

This blog outlines the key components of a successful Semantic Layer strategy, explaining how each component supports a scalable implementation and contributes to unlocking greater value from your data.

What is a Semantic Layer?

The Semantic Layer is a framework that adds rich structure and meaning to data by applying categorization models (such as taxonomies and ontologies) and using semantic technologies like graph databases and data catalogs. Your Semantic Layer should be a connective tissue that leverages a shared language to unify information across systems, tools, and domains. 

Data-rich organizations often manage information across a growing number of siloed repositories, platforms, and tools. The lack of a shared structure for how data is described and connected across these systems ultimately slows innovation and undermines initiatives. Importantly, your semantic layer enables humans and machines to interpret data in context and lays the foundation for enterprise-wide AI capabilities.    

 

What is a Semantic Layer Strategy?

A Semantic Layer Strategy is a tailored vision outlining the value of using knowledge assets to enable new tools and create insights through semantic approaches. This approach ensures your organization’s semantic efforts are focused, feasible, and value-driven by aligning business priorities with technical implementation. 

Regardless of your organization’s size, maturity, or goals, a strong Semantic Layer Strategy enables you to achieve the following:

1. Articulate a clear vision and value proposition.

Without a clear vision, semantic layer initiatives risk becoming scattered and mismanaged, with teams pulling in different directions and the value to the organization left unclear. The Semantic Layer vision serves as the “North Star,” or guiding principle, for planning, design, and execution. Organizations can realize a variety of use cases via a Semantic Layer (including advanced search, recommendation engines, personalized knowledge delivery, and more), and a Semantic Layer Strategy helps define and align on what a Semantic Layer can solve for your organization.

The vision statement clearly answers three core questions:

  • What is the business problem you are trying to solve?
  • What outcomes and capabilities are you enabling?
  • How will you measure success?

These three items create a strategic narrative that business and technical stakeholders alike can understand, and enable discussions to gain executive buy-in and prioritize initiative efforts. 

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the development of a  data strategy for operational risk for a bank seeking to create a unified view of highly regulated data dispersed across siloed repositories. By framing a clear vision statement for the Bank’s semantic layer, EK guided the firm to establish a multi-year program to expand the scope of data and continually enable new data insights and capabilities that were previously impossible. For example, users of a risk application could access information from multiple repositories in a single knowledge panel within the tool rather than hunting for it in siloed applications. The Bank’s Semantic Layer vision is contained in a single easy-to-understand one-pager  that has been used repeatedly as a rallying point to communicate value across the enterprise, win executive sponsorship, and onboard additional business groups into the semantic layer initiative. 

2. Assess your current organizational semantic maturity.

A semantic maturity assessment looks at the semantic structures, programs, processes, knowledge assets and overall awareness that already exist at your organization. Understanding where your organization lies on the semantic maturity spectrum is essential for setting realistic goals and sequencing a path to greater maturity. 

  • Less mature organizations may lack formal taxonomies or ontologies, or may have taxonomies and ontologies that are outdated, inconsistently applied, or not integrated across systems. They have limited (or no) semantic tooling and few internal semantic champions. Their knowledge assets are isolated, inconsistently tagged (or untagged) documents that require human interpretation to understand and are difficult for systems to find or connect.
  • More mature organizations typically have well-maintained taxonomies and/or ontologies, have established governance processes, and actively use semantic tooling such as knowledge graphs or business glossaries. More than likely, there are individuals or groups who advocate for the adoption of these tools and processes within the organization. Their knowledge assets are well-structured, consistently tagged, and interconnected pieces of content that both humans and machines can easily discover, interpret, and reuse.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK conducted a comprehensive semantic maturity assessment of the current state of the Bank’s semantics program to uncover strengths, gaps, and opportunities. This assessment included:

  • Knowledge Asset Assessment: Evaluated the connectedness, completeness, and consistency of existing risk knowledge assets, identifying opportunities to enrich and restructure them to support redesigned application workflows.
  • Ontology Evaluation: Reviewed existing ontologies describing risk at the firm to assess accuracy, currency, semantic standards compliance, and maintenance practices.
  • Category Model Evaluation: Created a taxonomy tracker to evaluate candidate categories for a unified category management program, focusing on quality, ownership, and ongoing governance.
  • Architecture Gap Analysis and Tooling Recommendation: Reviewed existing applications, APIs, and integrations to determine whether components should be reused, replaced, or rebuilt.
  • People & Roles Assessment: Designed a target operating model to identify team structures, collaboration patterns, and missing roles or skills that are critical for semantic growth.

Together, these evaluations provided a clear benchmark of maturity and guided a right-sized strategy for the bank. 

3. Create a shared conceptual knowledge asset model. 

When it comes to strategy, executive stakeholders don’t want to see exhaustive technical documentation–they want to see impact. A high-level visual model of what your Semantic Layer will achieve brings a Semantic Layer Strategy to life by showing how connected knowledge assets can enable better decisions and new insights. 

Your data model should show, in broad strokes, what kinds of data will be connected at the conceptual level. For example, your data model could show that people, business units, and sales reports can be connected to answer questions like, “How many people in the United States created documents about X Law?” or “What laws apply to me when writing a contract in Wisconsin?” 

In sum, it should focus on how people and systems will benefit from the relationships between data, enabling clearer communication and shared understanding of your Semantic Layer’s use cases. 
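To illustrate how such a conceptual model eventually becomes queryable once implemented, the sketch below builds a toy graph of people, locations, and documents and answers a question similar to the one above. The namespace, properties, and data are invented purely for illustration.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("https://example.org/model/")  # hypothetical namespace

g = Graph()
for person, country, doc, topic in [
    (EX.Alice, "United States", EX.Doc1, "X Law"),
    (EX.Bob, "United States", EX.Doc2, "X Law"),
    (EX.Carla, "Canada", EX.Doc3, "X Law"),
]:
    g.add((person, RDF.type, EX.Person))
    g.add((person, EX.basedIn, Literal(country)))
    g.add((person, EX.created, doc))
    g.add((doc, EX.about, Literal(topic)))

# "How many people in the United States created documents about X Law?"
results = g.query("""
    SELECT (COUNT(DISTINCT ?person) AS ?count) WHERE {
        ?person <https://example.org/model/basedIn> "United States" ;
                <https://example.org/model/created> ?doc .
        ?doc <https://example.org/model/about> "X Law" .
    }
""")
print(next(iter(results))[0])  # -> 2
```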

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK collaborated with data owners to map out core concepts and their relationships in a single, digestible diagram. The conceptual knowledge asset model served as a shared reference point for both business and technical stakeholders, grounding executive conversations about Semantic Layer priorities and guiding onboarding decisions for data and systems. 

By simplifying complex data relationships into a clear visual, EK enabled alignment across technical and non-technical audiences and built momentum for the Semantic Layer initiative.

4. Develop a practical and iterative roadmap for implementation and scale.

With your vision, assessment, and foundational conceptual model in place, the next step is translating your strategy into execution. Your Semantic Layer roadmap should be outcome-driven, iterative, and actionable. A well-constructed roadmap provides not only a starting point for your Semantic Layer initiative, but also a mechanism for continuous alignment as business priorities evolve. 

Importantly, your roadmap should not be a rigid set of instructions; rather, it should act as a living guide. As your semantic maturity increases and business needs shift, the roadmap should adapt to reflect new opportunities while keeping long-term goals in focus. While the roadmap may be more detailed and technically advanced for highly mature organizations, less mature organizations may focus their roadmap on broader strokes such as tool procurement and initial category modeling. In both cases, the roadmap should be tailored to the organization’s unique needs and maturity, ensuring it is practical, actionable, and aligned to real priorities.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the creation of a roadmap focused on expanding the firm’s existing semantic layer. Through planning sessions, EK identified the categories, ontologies, tooling, and architecture uplifts needed to chart the path forward on the firm’s Semantic Layer journey. Once a strong foundation was built, additional planning sessions centered on adding new categories, onboarding additional data concepts, and refining ontologies to increase coverage and usability. Through sessions with the key stakeholders responsible for the growth of the program, EK prioritized high-value expansion opportunities and recommended governance practices to sustain long-term scale. This enabled the firm to confidently evolve its Semantic Layer while maintaining alignment with business priorities and demonstrating measurable impact across the organization.

 

Conclusion

A successful Semantic Layer Strategy doesn’t come from technology alone; it comes from a clear vision, organizational alignment, and intentional design. Whether you’re just getting started on your semantics journey or refining your Semantic Layer approach, Enterprise Knowledge can support your organization. Contact us at info@enterprise-knowledge.com to discuss how we can help bring your Semantic Layer strategy to life.

How to Ensure Your Data is AI Ready

Artificial intelligence has the potential to be a game-changer for organizations looking to empower their employees with data at every level. However, as business leaders look to initiate projects that incorporate data as part of their AI solutions, they frequently look to us to ask, “How do I ensure my organization’s data is ready for AI?” In the first blog in this series, we shared ways to ensure knowledge assets are ready for AI. In this follow-on article, we will address the unique challenges that come with connecting data—one of the most unique and varied types of knowledge assets—to AI. Data is pervasive in any organization and can serve as the key feeder for many AI use cases, so it is a high priority knowledge asset to ready for your organization.

The question of data AI readiness stems from the very real concern that when AI is pointed at data that isn’t correct, or that doesn’t have the right context associated with it, organizations could face risks to their reputation, their revenue, or their customers’ privacy. Data brings additional nuance: it is often presented in formats that require transformation, lacks context, and frequently contains duplicates or near-duplicates with little explanation of their meaning. Although data may seem already structured and ready for machine consumption, it therefore requires greater care than other forms of knowledge assets before it can form part of a trusted AI solution.

This blog focuses on the key actions an organization needs to perform to ensure their data is ready to be consumed by AI. By following the steps below, an organization can use AI-ready data to develop end-products that are trustworthy, reliable, and transparent in their decision making.

1) Understand What You Mean by “Data” (Data Asset and Scope Definition)

Data is more than what we typically picture it as. Broadly, data is any raw information that can be interpreted to garner meaning or insights on a certain topic. While the typical understanding of data revolves around relational databases and tables galore, often with esoteric metrics filling their rows and columns, data takes a number of forms, which can often be surprising. In terms of format, while data can be in traditional SQL databases and formats, NoSQL data is growing in usage, in forms ranging from key-value pairs to JSON documents to graph databases. Plain, unstructured text such as emails, social media posts, and policy documents are also forms of data, but traditionally not included within the enterprise definition. Finally, data comes from myriad sources—from live machine data on a manufacturing floor to the same manufacturing plant’s Human Resources Management System (HRMS). Data can also be categorized by its business role: operational data that drives day-to-day processes, transactional data that records business exchanges, and even purchased or third-party data brought in to enrich internal datasets. Increasingly, organizations treat data itself as a product, packaged and maintained with the same rigor as software, and rely on data metrics to measure quality, performance, and impact of business assets.

All these forms and types of data meet the definition of a knowledge asset—information and expertise that an organization can use to create value, which can be connected with other knowledge assets. No matter the format or repository type, ingested, AI-ready data can form the backbone of a valuable AI solution by allowing business-specific questions to be answered reliably and in an explainable manner. This raises a question for organizational decision makers: what within our data landscape needs to be included in our AI solution? Starting from your definition of what data is, begin thinking about what to add iteratively. What systems contain the highest priority data? What datasets would provide the most value to end users? Select high-value data in easy-to-transform formats so end users can see the value in your solution. This can garner excitement across departments and help support future efforts to introduce additional data into your AI environment.

2) Ensure Quality (Data Cleanup)

The majority of organizations we’ve worked with have experienced issues with not knowing what data they have or what it’s intended to be used for. This is especially common in large enterprise settings, as the sheer scale and variety of data can breed an environment where data becomes lost, buried, or degraded in quality. This sprawl occurs alongside another common problem: multiple versions of the same dataset exist, with slight variations in the data they contain. The issue is exacerbated by yet another frequent challenge: a lack of business context. When data lacks context, neither humans nor AI can reliably determine the most up-to-date version, the assumptions and/or conditions in place when said data was collected, or even if the data warrants retention.

Once AI is introduced, these potential issues are only compounded. If an AI system is provided data that is out of date or of low quality, the model will ultimately fail to provide reliable answers to user queries. When data is collected for a specific purpose, such as identifying product preferences across customer segments, but not labeled for said use, and an AI model leverages that data for a completely separate purpose, such as dynamic pricing models, harmful biases can be introduced into the results that negatively impact both the customer and the organization.

Thankfully, there are several methods available to organizations today that allow them to inventory and restructure their data to fix these issues. Examples include data dictionaries, master data (MDM data), and reference data that help standardize data across an organization and point to what is available at large. Additionally, data catalogs are a proven tool to identify what data exists within an organization, and include versioning and metadata features that can help label data with its version and context. To help populate catalogs and data dictionaries and to create MDM/reference data, performing a data audit alongside stewards can help rediscover lost context and label data for better understanding by humans and machines alike. Another way to deduplicate, disambiguate, and contextualize data assets is through lineage. Lineage is a feature included in many metadata management tools that stores and displays metadata regarding source systems, creation and modification dates, and file contributors. Using this lineage metadata, data stewards can select which version of a data asset is the most current or relevant for a specific use case and expose only that asset to AI. These methods for ensuring data quality and facilitating data stewardship can serve as concrete steps toward a larger governance framework. Finally, at a larger scale, a semantic layer can unify data and its meaning for easier ingestion into an AI solution, assist with deduplication efforts, and break down silos between different data users and consumers of knowledge assets at large.

Separately, for the elimination of duplicate/near-duplicate data, entity resolution can autonomously parse the content of data assets, deduplicate them, and point AI to the most relevant, recent, or reliable data asset to answer a question. 
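As a minimal sketch of the deduplication idea, the example below uses simple string similarity to flag likely duplicate records before a single ‘golden’ version is exposed to AI. Production entity resolution tools use far richer matching; the records and threshold here are invented for illustration.

```python
from difflib import SequenceMatcher
from itertools import combinations

customers = [
    {"id": 1, "name": "Acme Corporation", "city": "Chicago"},
    {"id": 2, "name": "ACME Corp.", "city": "Chicago"},
    {"id": 3, "name": "Zenith Industries", "city": "Denver"},
]

def similarity(a: dict, b: dict) -> float:
    """Crude record similarity: average of a fuzzy name match and an exact city match."""
    name_score = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    city_score = 1.0 if a["city"].lower() == b["city"].lower() else 0.0
    return (name_score + city_score) / 2

THRESHOLD = 0.75  # illustrative cutoff for treating two records as duplicates
for a, b in combinations(customers, 2):
    score = similarity(a, b)
    if score >= THRESHOLD:
        print(f"Likely duplicates: record {a['id']} and record {b['id']} (score={score:.2f})")
```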

3) Fill Gaps (Data Creation or Acquisition)

With your organization’s data inventoried and priorities identified, it’s time to start identifying what gaps exist in your data landscape in light of the business questions and challenges you are looking to address. First, ask use case-based questions: based on your identified use cases, what data that your organization doesn’t already possess would an AI model need in order to answer topical questions?

At a higher level, gaps in use cases for your AI solution will also exist. To drive use case creation forward, consider the use of a data model, entity relationship diagram (ERD), or ontology to serve as the conceptual map on which all organizational data exists. With a complete data inventory, an ontology can help outline the process by which AI solutions would answer questions at a high level, thanks to being both machine and human-readable. By traversing the ontology or data model, you can design user journeys and create questions that form the basis of novel use cases.

Often, gaps are identified that require knowledge assets outside of data to fill. A data model or ontology can help identify related assets, as they function independently of their asset type. Moreover, standardized metadata across knowledge assets and asset types can enrich assets, link them to one another, and provide insights previously not possible. When instantiated in a solution alongside a knowledge graph, this forms a semantic layer where data assets, such as data products or metrics, gain context and maturity based on related knowledge assets. We were able to enhance the performance of a large retail chain’s analytics team through such an approach utilizing a semantic layer.

To fill these gaps, organizations can look to collect or create more data, purchase publicly available datasets, or incorporate open-source datasets (build vs. buy). Another common method of filling identified organizational gaps is the creation of content (and other non-data knowledge assets) through the extraction of tacit organizational knowledge. This is a method that more chief data officers/chief data and AI officers (CDOs/CDAOs) are employing as their roles expand and reliance on structured data alone to gather insights and solve problems is no longer feasible.

As a whole, this process will drive future knowledge asset collection, creation, and procurement efforts and is consequently a crucial step in ensuring data at large is AI-ready. If no such data exists for AI to rely on for certain use cases, users will be presented with unreliable, hallucination-based answers or, in a best-case scenario, no answer at all. Yet, as part of the solid governance plan mentioned earlier, continuing the gap analysis process post-solution deployment can empower organizations to continually identify and close knowledge gaps, continuously improving data AI readiness and AI solution maturity.

4) Add Structure and Context (Semantic Components)

A key component of making data AI-ready is structure—not within the data per se (e.g., JSON, SQL, Excel), but the structure relating the data to use cases. In our previous blog, ‘structure’ referred to the form that adds meaning to knowledge assets, which can make the term a confusing misnomer in this section. Here, ‘structure’ refers to the added, machine-readable context a semantic model gives data assets, rather than the format of the data assets themselves, since data loses meaning once it is taken out of the structure or format it is stored in (as happens when it is retrieved by AI).

Although we touched on one type of semantic model in the previous step, there are three semantic models that work together to ensure data AI readiness: business glossaries, taxonomies, and ontologies. Adding semantics to data as part of AI readiness helps users understand the meaning of the data they’re working with. Together, taxonomies, ontologies, and business glossaries imbue data with the context an AI model needs to fully grasp the data’s meaning and make optimal use of it to answer user queries. 

Let’s dive into the business glossary first. Business glossaries define business-specific terms that often appear in datasets, in plain, easy-to-understand language. For AI models, which are typically trained on general-purpose data, these glossary terms further assist in selecting the correct data needed to answer a user query. 
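As a simple illustration, a glossary can be exposed to an AI pipeline as structured definitions attached to a user query before it reaches a model. The glossary entries and prompt format below are hypothetical; real glossaries typically live in a data catalog or semantic layer tool.

```python
# Hypothetical business glossary entries.
glossary = {
    "ARR": "Annual Recurring Revenue: the yearly value of active subscription contracts.",
    "churn": "The percentage of customers who cancel their subscription in a given period.",
}

def enrich_query(user_query: str) -> str:
    """Prepend glossary definitions for any business terms found in the query."""
    matched = [term for term in glossary if term.lower() in user_query.lower()]
    definitions = "\n".join(f"- {term}: {glossary[term]}" for term in matched)
    if not definitions:
        return user_query
    return f"Business definitions to use:\n{definitions}\n\nQuestion: {user_query}"

print(enrich_query("How did ARR and churn trend last quarter?"))
```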

Taxonomies group knowledge assets into broader and narrower categories, providing a level of hierarchical organization not available with traditional business glossaries. This can help data AI readiness in manifold ways. By standardizing terminology (e.g., referring to “automobile,” “car,” and “vehicle” all as “Vehicles” instead of separately), data from multiple sources can be integrated more seamlessly, disambiguated, and deduplicated for clearer understanding. 
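A minimal sketch of the standardization described above might look like the following, where a taxonomy fragment maps synonyms to preferred labels so values from different source systems can be integrated. The taxonomy fragment and term list are illustrative assumptions.

```python
# Hypothetical taxonomy fragment mapping preferred labels to their synonyms (alternative labels).
taxonomy = {
    "Vehicles": {"automobile", "car", "vehicle", "truck"},
    "Facilities": {"site", "plant", "warehouse"},
}

# Invert the taxonomy into a lookup of synonym -> preferred label.
preferred_label = {
    synonym: label for label, synonyms in taxonomy.items() for synonym in synonyms
}

def normalize(term: str) -> str:
    """Map a raw term from a source system to its preferred taxonomy label, if one exists."""
    return preferred_label.get(term.lower(), term)

raw_values = ["Car", "automobile", "warehouse", "forklift"]
print([normalize(v) for v in raw_values])  # ['Vehicles', 'Vehicles', 'Facilities', 'forklift']
```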

Finally, ontologies provide the true foundation for linking related datasets to one another and allow for the definition of custom relationships between knowledge assets. When combining ontologies with AI, organizations can perform inferences that make explicit what is only implied by individual datasets. This shows the power of semantics at work, and demonstrates that good, AI-ready data enriched with metadata can provide insights at a level of accuracy comparable to a human’s. 

Organizations that have not previously developed semantics for their knowledge assets can start with traditional semantic capture methods, such as business glossaries. As organizations mature in their curation of knowledge assets, they can leverage the definitions developed in these glossaries and dictionaries and begin to structure that information using more advanced modeling techniques, like taxonomy and ontology development. When applied to data, these semantic models make data more understandable, both to end users and to AI systems. 

5) Semantic Model Application (Labeling and Tagging) 

The data management community has recently focused on the value of metadata and metadata-first architecture, and is working to catch up to the maturity already displayed in the fields of content and knowledge management. By replicating methods found in content management systems and knowledge management platforms, data management professionals are retracing ground those fields have already covered. Currently, the data catalog is the primary platform where metadata is applied and stored for data assets. 

To aggregate metadata for your organization’s AI readiness efforts, it’s crucial to look to data stewards as the owners of, and primary contributors to, this effort. By labeling data, populating fields such as asset description, owner, assumptions made upon collection, and purpose, data stewards help drive their data toward AI readiness while making tacit knowledge explicit and available to all. Additionally, applying metadata against a semantic model (especially taxonomies and ontologies) places assets in business context and connects related assets to one another, further enriching AI-generated responses to user prompts. While there are methods to apply metadata with less manual effort (such as auto-classification, which excels for content-based knowledge assets), structured data usually requires human subject matter experts to ensure accurate classification. 
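To make this concrete, a simple readiness check might verify that stewards have populated required metadata fields and that applied tags come from the controlled taxonomy. The required fields, allowed tags, and asset record below are illustrative assumptions; in practice these rules would live in a data catalog or governance standard.

```python
# Illustrative governance rules for AI-ready metadata.
REQUIRED_FIELDS = ["description", "owner", "collection_assumptions", "purpose"]
ALLOWED_TAGS = {"Vehicles", "Facilities", "Customers", "Revenue"}

def readiness_issues(asset: dict) -> list[str]:
    """Return a list of metadata problems that block an asset from being AI-ready."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if not asset.get(f)]
    issues += [f"tag not in taxonomy: {t}" for t in asset.get("tags", []) if t not in ALLOWED_TAGS]
    return issues

asset = {
    "id": "sales_by_region",
    "description": "Monthly sales aggregated by sales region.",
    "owner": "jane.doe@example.com",
    "tags": ["Revenue", "Regions"],  # "Regions" is not in the controlled vocabulary
}

print(readiness_issues(asset))
# ['missing field: collection_assumptions', 'missing field: purpose', 'tag not in taxonomy: Regions']
```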

With data catalogs and recent investments in metadata repositories, however, we’ve noticed a trend that we expect will continue to grow and spread across organizations in the near future. Data system owners are more and more keen to manage metadata and catalog their assets within the same systems that data is stored/used, adopting features that were previously exclusive to a data catalog. Major software providers are strategically acquiring or building semantic capabilities for this purpose. This has been underscored by the recent acquisition of multiple data management platforms by the creators of larger, flagship software products. With the features of the data catalog being adapted from a full, standalone application that stores and presents metadata to a component of a larger application that focuses as a metadata store, the metadata repository is beginning to take hold as the predominant metadata management platform.

6) Address Access and Security (Unified Entitlements)

Applying semantic metadata as described above helps make data findable across an organization and contextualized with relevant datasets—but this needs to be balanced against security and entitlements considerations. Without regard for data security and privacy, AI systems risk pulling in data they shouldn’t have access to because access entitlements are mislabeled or missing, leading to leaks of sensitive information.

A common example of when this can occur is user re-identification. Data points that independently seem innocuous can, when combined by an AI system, leak information about an organization’s customers or users. With as few as 15 data points, information that was originally collected anonymously can be combined to identify an individual. Data elements like ZIP code or date of birth are not damaging on their own, but when combined, they can expose information about a user that should have been kept private. These concerns become especially critical in industries with small dataset populations, such as rare disease treatment in healthcare.
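One common way to screen for this risk is a k-anonymity style check: count how many records share each combination of quasi-identifiers, and flag combinations held by fewer than k individuals. The records, quasi-identifier fields, and k value below are illustrative assumptions.

```python
from collections import Counter

# Illustrative anonymized records: no names, but ZIP code and birth year are quasi-identifiers.
records = [
    {"zip": "22201", "birth_year": 1984, "diagnosis": "A"},
    {"zip": "22201", "birth_year": 1984, "diagnosis": "B"},
    {"zip": "20001", "birth_year": 1990, "diagnosis": "C"},  # unique combination: re-identifiable
]

QUASI_IDENTIFIERS = ("zip", "birth_year")

group_sizes = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in records)

# k-anonymity check: any combination shared by fewer than k records is a re-identification risk.
k = 2
at_risk = [combo for combo, size in group_sizes.items() if size < k]
print("Quasi-identifier combinations below k:", at_risk)
```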

EK’s unified entitlements work is focused on ensuring the right people and systems view the correct knowledge assets at the right time. This is accomplished through a holistic architectural approach with six key components. Components like a policy engine capture and enforce whether access to data should be granted, while components like a query federation layer ensure that only data that is allowed to be retrieved is brought back from the appropriate sources. 
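The toy sketch below illustrates the basic idea of a policy check filtering what a retrieval layer returns to an AI solution. The roles, policies, and asset names are hypothetical, and a real policy engine would evaluate much richer attributes (purpose, sensitivity labels, jurisdiction) as part of a broader entitlements architecture.

```python
# Hypothetical role-based policies; enforcement would sit at the query federation layer.
POLICIES = {
    "employee_salaries": {"allowed_roles": {"hr_analyst"}},
    "store_sales":       {"allowed_roles": {"hr_analyst", "merchandising", "ai_assistant"}},
}

def is_allowed(role: str, asset: str) -> bool:
    """Return True only if the policy for the asset explicitly grants access to the role."""
    policy = POLICIES.get(asset)
    return bool(policy) and role in policy["allowed_roles"]

def fetch_for_ai(role: str, requested_assets: list[str]) -> list[str]:
    """Filter a retrieval request so the AI solution only sees assets the caller may access."""
    return [a for a in requested_assets if is_allowed(role, a)]

print(fetch_for_ai("ai_assistant", ["store_sales", "employee_salaries"]))  # ['store_sales']
```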

The components of unified entitlements can be combined with other technologies like dark data detection, where a program scans an organization’s data landscape for any unlabeled information that is potentially sensitive, so that neither human users nor AI solutions can access data that could result in compliance violations or reputational damage. 

As a whole, data that exposes sensitive information to the wrong set of eyes is not AI-ready. Unified entitlements can form the layer of protection that ensures data AI readiness across the organization.

7) Maintain Quality While Iteratively Improving (Governance)

Governance serves a vital purpose in ensuring data assets become, and remain, AI-ready. With the introduction of AI to the enterprise, we are now seeing governance manifest itself beyond the data landscape alone. As AI governance begins to mature as a field of its own, it is taking on its own set of key roles and competencies and separating itself from data governance. 

While AI governance is meant to guide innovation and future iterations while ensuring compliance with both internal and external standards, data governance personnel are taking on the new responsibility of ensuring data is AI-ready based on requirements set by AI governance teams. Barring the existence of AI governance personnel, data governance teams are meant to serve as a bridge in the interim. As such, your data governance staff should define a common model of AI-ready data assets and related standards (such as structure, recency, reliability, and context) for future reference. 

Both data and AI governance personnel hold the responsibility of future-proofing enterprise AI solutions to ensure they continue to align with the steps above and meet requirements. Specific to data governance, organizations should ask themselves, “How do we update our data governance plan to ensure all the steps remain applicable in perpetuity?” In parallel, AI governance should revolve around filling gaps in the solution’s capabilities. Once AI solutions launch to a production environment and user base, more gaps in the solution’s realm of expertise and capabilities will become apparent. As such, AI governance professionals need to stand up processes that use these gaps to continue identifying new needs for knowledge assets, data or otherwise, in perpetuity.

Conclusion

As we have explored throughout this blog, data is an extremely varied and unique form of knowledge asset, with its own distinct set of considerations to take into account when standing up an AI solution. Following the steps listed above as part of an iterative implementation process will ensure your data is AI-ready and an invaluable part of an AI-powered organization.

If you’re seeking help to ensure your data is AI-ready, contact us at info@enterprise-knowledge.com.

The post How to Ensure Your Data is AI Ready appeared first on Enterprise Knowledge.

]]>
How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready https://enterprise-knowledge.com/how-to-fill-your-knowledge-gaps-to-ensure-youre-ai-ready/ Mon, 29 Sep 2025 19:14:44 +0000 https://enterprise-knowledge.com/?p=25629 “If only our company knew what our company knows” has been a longstanding lament for leaders: organizations are prevented from mobilizing their knowledge and capabilities towards their strategic priorities. Similarly, being able to locate knowledge gaps in the organization, whether … Continue reading

The post How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready appeared first on Enterprise Knowledge.

]]>
“If only our company knew what our company knows” has been a longstanding lament for leaders whose organizations are prevented from mobilizing their knowledge and capabilities toward their strategic priorities. Similarly, being able to locate knowledge gaps in the organization, whether we were initially aware of them (known unknowns) or initially unaware of them (unknown unknowns), represents an opportunity to gain new capabilities, mitigate risks, and navigate the ever-accelerating business landscape more nimbly.  

AI implementations are already showing signs of knowledge gaps: hallucinations, wrong answers, incomplete answers, and even “unanswerable” questions. There are multiple causes for AI hallucinations, but an important one is not having the right knowledge to answer a question in the first place. While LLMs may have been trained on massive amounts of data, that doesn’t mean they know your business, your people, or your customers. This is a common problem when organizations make the leap from how they experience “Public AI” tools like ChatGPT, Gemini, or Copilot to attempting their own organization’s AI solutions. LLMs and agentic solutions need knowledge—your organization’s unique knowledge—to produce results that are unique to your and your customers’ needs, and to help employees navigate and solve the challenges they encounter in their day-to-day work. 

In a recent article, EK outlined key strategies for preparing content and data for AI. This blog post builds on that foundation by providing a step-by-step process for identifying and closing knowledge gaps, ensuring a more robust AI implementation.

 

The Importance of Bridging Knowledge Gaps for AI Readiness

EK lays out a six-step path to getting your content, data, and other knowledge assets AI-ready, yielding assets that are correct, complete, consistent, contextual, and compliant. The diagram below provides an overview of these six steps:

The six steps to AI readiness. Step one: Define Knowledge Assets. Step two: Conduct cleanup. Step three: Fill Knowledge Gaps (We are here). Step four: Enrich with context. Step five: Add structure. Step six: Protect the knowledge assets.

Identifying and filling knowledge gaps, the third step of EK’s path towards AI readiness, is crucial in ensuring that AI solutions have optimized inputs. 

Prior to filling gaps, an organization will have defined its critical knowledge assets and conducted a content cleanup. A content cleanup not only ensures the correctness and reliability of the knowledge assets, but also reveals the specific topics, concepts, or capabilities that the organization cannot currently supply to AI solutions as inputs.

This scenario presupposes that the organization has a clear idea of the AI use cases and purposes for its knowledge assets. Given the organization knows the questions AI needs to answer, an assessment to identify the location and state of knowledge assets can be targeted based on the inputs required. This assessment would be followed by efforts to collect the identified knowledge and optimize it for AI solutions. 

A second, more complicated, scenario arises when an organization hasn’t formulated a prioritized list of questions for AI to answer. The previously described approach, which relies on drawing up a traditional knowledge inventory, will face setbacks because it may prove difficult to scale and won’t always uncover the insights we need for AI readiness. Knowledge inventories may help us understand our known unknowns, but they will not be helpful in revealing our unknown unknowns. 

 

Identifying the Gap

How can we identify something that is missing? At this juncture, organizations will need to leverage analytics, introduce semantics, and, if AI is already deployed in the organization, use it as a resource as well. There are different techniques to identify these gaps, depending on whether your organization has already deployed an AI solution or is ramping up for one. Available options include:

Before and After AI Deployment

Leveraging Analytics from Existing Systems

Monitoring and assessing different tools’ analytics is an established practice to understand user behavior. In this instance, EK applies these same methods to understand critical questions about the availability of knowledge assets. We are particularly interested in analytics that reveal answers to the following questions:

  • Where do our people give up when navigating different sections of a tool or portal? 
  • What sort of queries return no results?
  • What queries are more likely to get abandoned? 
  • What sort of content gets poor reviews, and by whom?
  • What sort of material gets no engagement? What did the user do or search for before getting to it? 

These questions aim to identify instances of users trying, and failing, to get knowledge they need to do their work. Where appropriate, these questions can also be posed directly to users via surveys or focus groups to get a more rounded perspective. 
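A lightweight way to operationalize some of these questions is to mine search or portal logs for zero-result and abandoned queries. The log format below is an assumption for illustration; actual fields depend on your search platform or analytics tool.

```python
from collections import Counter

# Assumed search-log format; real logs would come from a search platform or web analytics tool.
search_log = [
    {"query": "parental leave policy", "results": 0,  "clicked": False},
    {"query": "parental leave policy", "results": 0,  "clicked": False},
    {"query": "q3 roadmap",            "results": 12, "clicked": False},  # results shown but abandoned
    {"query": "expense report form",   "results": 4,  "clicked": True},
]

zero_result = Counter(e["query"] for e in search_log if e["results"] == 0)
abandoned   = Counter(e["query"] for e in search_log if e["results"] > 0 and not e["clicked"])

print("Likely knowledge gaps (no results):", zero_result.most_common(5))
print("Possible quality gaps (results ignored):", abandoned.most_common(5))
```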

Semantics

Semantics involve modeling an organization’s knowledge landscape with taxonomies and ontologies. When taxonomies and ontologies have been properly designed, updated, and consistently applied to knowledge, they are invaluable as part of wider knowledge mapping efforts. In particular, semantic models can be used as an exemplar of what should be there, and can then be compared with what is actually present, thus revealing what is missing.

We recently worked with a professional association within the medical field, helping them define a semantic model for their expansive amount of content, and then defining an automated approach to tagging these knowledge assets. As part of the design process, EK taxonomists interviewed experts across all of the association’s organizational functional teams to define the terms that should be present in the organization’s knowledge assets. After the first few rounds of auto-tagging, we examined the taxonomy’s coverage, and found that a significant fraction of the terms in the taxonomy went unused. We validated our findings with our clients’ experts, and, to their surprise, our engagement revealed an imbalance of knowledge asset production: while some topics were covered by their content, others were entirely lacking. 

Valid taxonomy terms or ontology concepts for which few to no knowledge assets exist reveal a knowledge gap where AI is likely to struggle.
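A coverage check like the one described above can be approximated by comparing the taxonomy against the tags actually applied after auto-tagging; terms with little or no tagged content flag likely gaps. The taxonomy terms, tagging output, and threshold below are hypothetical.

```python
from collections import Counter

# Hypothetical taxonomy terms and auto-tagging output (asset id -> applied terms).
taxonomy_terms = {"Cardiology", "Oncology", "Pediatrics", "Telehealth", "Billing"}
tagged_assets = {
    "guideline-001": ["Cardiology", "Billing"],
    "case-review-17": ["Cardiology"],
    "faq-042":        ["Billing"],
}

usage = Counter(term for terms in tagged_assets.values() for term in terms)

# Terms the business says matter, but which little or no content actually covers.
MIN_ASSETS = 1
gaps = sorted(t for t in taxonomy_terms if usage[t] < MIN_ASSETS)
print("Taxonomy terms with no supporting knowledge assets:", gaps)
# ['Oncology', 'Pediatrics', 'Telehealth']
```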

After AI Deployment

User Engagement & Feedback

To ensure a solution can scale, evolve, and remain effective over time, it is important to establish formal feedback mechanisms for users to engage with system owners and governance bodies on an ongoing basis. Ideally, users should have a frictionless way to report an unsatisfactory answer immediately after they receive it, whether it is because the answer is incomplete or just plain wrong. A thumbs-up or thumbs-down icon has traditionally been used to solicit this kind of feedback, but organizations should also consider dedicated chat channels, conversations within forums, or other approaches for communicating feedback to which their users are accustomed.

AI Design and Governance 

Out-of-the-box, pre-trained language models are designed to prioritize providing a fluid response, often leading them to confidently generate answers even when their underlying knowledge is uncertain or incomplete. This core behavior increases the risk of delivering wrong information to users. However, this flaw can be preempted by thoughtful design in enterprise AI solutions: the key is to transform them from a simple answer generator into a sophisticated instrument that can also detect knowledge gaps. Enterprise AI solutions can be engineered to proactively identify questions which they do not have adequate information to answer and immediately flag these requests. This approach effectively creates a mandate for AI governance bodies to capture the needed knowledge. 

AI can move beyond just alerting the relevant teams about missing knowledge. As we will soon discuss, AI holds additional capabilities to close knowledge gaps by inferring new insights from disparate, already-known information, and connecting users directly with relevant human experts. This allows enterprise AI to not only identify knowledge voids, but also begin the process of bridging them.

 

Closing the Gap

It is important, at this point, to make the distinction between knowledge that is truly missing from the organization and knowledge that is simply unavailable to the organization’s AI solution. The approach to close the knowledge gap will hinge on this key distinction. 

 

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge is not in a format that AI can use, then:

Transform and migrate the present knowledge asset into a format that AI can more readily ingest. 

How this looks in practice:

A professional services firm had a database of meeting recordings meant for knowledge-sharing and disseminating lessons learned. The firm determined that there is a lot of knowledge “in the rough” that AI could incorporate into existing policies and procedures, but this was impossible to do by ingesting content in video format. EK engineers programmatically transcribed the videos, and then transformed the text into a machine-readable format. To make it truly AI-ready, we leveraged Natural Language Processing (NLP) and Named Entity Recognition (NER) techniques to contextualize the new knowledge assets by associating them with other concepts across the organization.
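A minimal sketch of this kind of pipeline is shown below using one possible open-source toolchain, openai-whisper for transcription and spaCy for named entity recognition; these are not necessarily the tools used in that engagement, and the file name is a placeholder.

```python
# pip install openai-whisper spacy && python -m spacy download en_core_web_sm
import whisper
import spacy

def transcribe(video_path: str) -> str:
    """Transcribe the audio track of a recording into plain text."""
    model = whisper.load_model("base")
    return model.transcribe(video_path)["text"]

def extract_entities(text: str) -> list[dict]:
    """Pull out people, organizations, and other entities to use as contextual tags."""
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text)
    return [{"text": ent.text, "type": ent.label_} for ent in doc.ents]

if __name__ == "__main__":
    transcript = transcribe("lessons_learned_2023.mp4")  # hypothetical file name
    record = {
        "source": "lessons_learned_2023.mp4",
        "body": transcript,
        "entities": extract_entities(transcript),  # used to link the asset to related concepts
    }
    print(record["entities"][:10])
```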

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in private spaces like email or closed forums, then:

Establish workflows and guidelines to promote, elevate, and institutionalize knowledge that had been previously informal.

How this looks in practice:

A government agency established online Communities of Practice (CoPs) to transfer and disseminate critical knowledge on key subject areas. Community members shared emerging practices and jointly solved problems. Community managers were able to ‘graduate’ informal conversations and documents into formal agency resources that lived within a designated repository, fully tagged, and actively managed. These validated and enhanced knowledge assets became more valuable and reliable for AI solutions to ingest.

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in different fragments across disjointed repositories, then: 

Unify the disparate fragments of knowledge by designing and applying a semantic model to associate and contextualize them. 

How this looks in practice:

A Sovereign Wealth Fund (SWF) collected a significant amount of knowledge about its investments, business partners, markets, and people, but kept this information fragmented and scattered across multiple repositories and databases. EK designed a semantic layer (composed of a taxonomy, ontology, and a knowledge graph) to act as a ‘single view of truth’. EK helped the organization define its key knowledge assets, like investments, relationships, and people, and wove together data points, documents, and other digital resources to provide a 360-degree view of each of them. We furthermore established an entitlements framework to ensure that every attribute of every entity could be adequately protected and surfaced only to the right end user. This single view of truth became a foundational element in the organization’s path to AI deployment—it now has complete, trusted, and protected data that can be retrieved, processed, and surfaced to the user as part of solution responses. 

If the ‘missing’ knowledge is not recorded anywhere… but the company’s experts hold this knowledge with them, then: 

Choose the appropriate techniques to elicit knowledge from experts during high-value moments of knowledge capture. It is important to note that we can begin incorporating agentic solutions to help the organization capture institutional knowledge, especially when agents can know or infer expertise held by the organization’s people. 

How this looks in practice:

Following a critical system failure, a large financial institution recognized an urgent need to capture the institutional knowledge held by its retiring senior experts. To address this challenge, they partnered with EK, who developed an AI-powered agent to conduct asynchronous interviews. This agent was designed to collect and synthesize knowledge from departing experts and managers by opening a chat with each individual and asking questions until the defined success criteria were met. This method allowed interviewees to contribute their knowledge at their convenience, ensuring a repeatable and efficient process for capturing critical information before the experts left the organization.

If the ‘missing’ knowledge is not recorded anywhere… and the knowledge cannot be found, then:

Make sure to clearly define the knowledge gap and its impact on the AI solution as it supports the business. When it has substantial effects on the solution’s ability to provide critical responses, then it will be up to subject matter experts within the organization to devise a strategy to create, acquire, and institutionalize the missing knowledge. 

How this looks in practice:

A leading construction firm needed to develop its knowledge and practices to be able to keep up with contracts won for a new type of project. Its inability to quickly scale institutional knowledge jeopardized its capacity to deliver, putting a significant amount of revenue at risk. EK guided the organization in establishing CoPs to encourage the development of repeatable processes, new guidance, and reusable artifacts. In subsequent steps, the firm could extract knowledge from conversations happening within the community and ingest them into AI solutions, along with novel knowledge assets the community developed. 

 

Conclusion

Identifying and closing knowledge gaps is no small feat, and predicting knowledge needs was nearly impossible before the advent of AI. Now, AI acts as both a driver and a solution, helping modern enterprises maintain their competitive edge.

Whether your critical knowledge is in people’s heads or buried in documents, Enterprise Knowledge can help. We’ll show you how to capture, connect, and leverage your company’s knowledge assets to their full potential to solve complex problems and obtain the results you expect out of your AI investments. Contact us today to learn how to bridge your knowledge gaps with AI.

The post How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready appeared first on Enterprise Knowledge.

]]>
Top Ways to Get Your Content and Data Ready for AI https://enterprise-knowledge.com/top-ways-to-get-your-content-and-data-ready-for-ai/ Mon, 15 Sep 2025 19:17:48 +0000 https://enterprise-knowledge.com/?p=25370 As artificial intelligence has quickly moved from science fiction, to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s … Continue reading

The post Top Ways to Get Your Content and Data Ready for AI appeared first on Enterprise Knowledge.

]]>
As artificial intelligence has quickly moved from science fiction, to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s new AI solutions at the “right” content and data is critical to AI success and adoption, and failing to do so can quickly derail your AI initiatives.  

Though the world is enthralled with the myriad of public AI solutions, many organizations struggle to make the leap to reliable AI within their organizations. A recent MIT report, “The GenAI Divide,” reveals a concerning truth: despite significant investments in AI, 95% of organizations are not seeing any benefits from their AI investments. 

One of the core impediments to achieving AI within your own organization is poor-quality content and data. Without the proper foundation of high-quality content and data, any AI solution will be rife with ‘hallucinations’ and errors. This will expose organizations to unacceptable risks, as AI tools may deliver incorrect or outdated information, leading to dangerous and costly outcomes. This is also why tools that perform well as demos fail to make the jump to production.  Even the most advanced AI won’t deliver acceptable results if an organization has not prepared their content and data.

This blog outlines seven top ways to ensure your content and data are AI-ready. With the right preparation and investment, your organization can successfully implement the latest AI technologies and deliver trustworthy, complete results.

1) Understand What You Mean by “Content” and/or “Data” (Knowledge Asset Definition)

While it seems obvious, the first step to ensuring your content and data are AI-ready is to clearly define what “content” and “data” mean within your organization. Many organizations use these terms interchangeably, while others use one as a parent term of the other. This obviously leads to a great deal of confusion. 

Leveraging the traditional definitions, we define content as unstructured information (ranging from files and documents to blocks of intranet text), and data as structured information (namely the rows and columns in databases and other applications like Customer Relationship Management systems, People Management systems, and Product Information Management systems). You are wasting the potential of AI if you’re not seeking to apply your AI to both content and data, giving end users complete and comprehensive information. In fact, we encourage organizations to think even more broadly, going beyond just content and data to consider all the organizational assets that can be leveraged by AI.

We’ve coined the term knowledge assets to express this. Knowledge assets comprise all the information and expertise an organization can use to create value. This includes not only content and data, but also the expertise of employees, business processes, facilities, equipment, and products. This manner of thinking quickly breaks down artificial silos within organizations, getting you to consider your assets collectively, rather than by type. Moving forward in this article, we’ll use the term knowledge assets in lieu of content and data to reinforce this point. Put simply and directly, each of the below steps to getting your content and data AI-ready should be considered from an enterprise perspective of knowledge assets, so rather than discretely developing content governance and data governance, you should define a comprehensive approach to knowledge asset governance. This approach will not only help you achieve AI-readiness, it will also help your organization to remove silos and redundancies in order to maximize enterprise efficiency and alignment.


2) Ensure Quality (Asset Cleanup)

We’ve found that most organizations are maintaining approximately 60-80% more information than they should, and in many cases, may not even be aware of what they still have. That means that four out of five knowledge assets are old, outdated, duplicate, or near-duplicate. 

There are many costs to this over-retention before even considering AI, including the administrative burden of maintaining this 80% (including the cost and environmental impact of unnecessary server storage), and the usability and findability cost to the organization’s end users when they go through obsolete knowledge assets.

The AI cost becomes even higher for several reasons. First, AI typically “white labels” the knowledge assets it finds. If a human were to find an old and outdated policy, they may recognize the old corporate branding on it, or note the date from several years ago on it, but when AI leverages the information within that knowledge asset and resurfaces it, it looks new and the contextual clues are lost.

Next, we have to consider the old adage of “garbage in, garbage out.” Incorrect knowledge assets fed to an AI tool will result in incorrect results, also known as hallucinations. While prompt engineering can be used to try to avoid these conflicts and, potentially, even errors, the only surefire way to avoid this issue is to ensure the accuracy of the original knowledge assets, or at least the vast majority of them.

Many AI models also struggle with near-duplicate “knowledge assets,” unable to discern which version is trusted. Consider your organization’s version control issues, working documents, data modeled with different assumptions, and iterations of large deliverables and reports that are all currently stored. Knowledge assets may go through countless iterations, and most of the time, all of these versions are saved. When ingested by AI, multiple versions present potential confusion and conflict, especially when these versions didn’t simply build on each other but were edited to improve findings or recommendations. Each of these, in every case, is an opportunity for AI to fail your organization.

Finally, this is also the point at which you should consider restructuring your assets for improved readability, by both humans and machines. From a human perspective, this could include formatting improvements (to lower cognitive lift and improve consistency). For both humans and AI, it could also mean adding text and tags to better describe images and other non-text-based elements. From an AI perspective, proximity and order can hurt precision in longer and more complex assets, so this could include restructuring documents to make them more linear, chronological, or topically aligned. This is not necessary or even important for all types of assets, but it remains an important consideration, especially for longer, text-based assets.


3) Fill Gaps (Tacit Knowledge Capture)

The next step to ensure AI readiness is to identify your gaps. At this point, you should be looking at your AI use cases and considering the questions you want AI to answer. In many cases, your current repositories of knowledge assets will not have all of the information necessary to answer those questions completely, especially in a structured, machine-readable format. This presents a risk itself, especially if the AI solution is unaware that it lacks the complete range of knowledge assets necessary and portrays incomplete or limited answers as definitive. 

Filling gaps in knowledge assets is extremely difficult. The first step is to identify what is missing. To invoke another old adage, organizations have long worried they “don’t know what they don’t know,” meaning they lack the organizational maturity to identify gaps in their own knowledge. This becomes a major challenge when proactively seeking to arm an AI solution with all the knowledge assets necessary to deliver complete and accurate answers. The good news, however, is that the process of getting knowledge assets AI-ready helps to identify gaps. In the next two sections, we cover semantic design and tagging. These steps, among others, can identify where there appears to be missing knowledge assets. In addition, given the iterative nature of designing and deploying AI solutions, the inability of AI to answer a question can trigger gap filling, as we cover later. 

Of course, once you’ve identified the gaps, the real challenge begins, in that the organization must then generate new knowledge assets (or locate “hidden” assets) to fill those gaps. There are many techniques for this, ranging from tacit knowledge capture, to content inventories, all of which collectively can help an organization move from AI to Knowledge Intelligence (KI).    


4) Add Structure and Context (Semantic Components)

Once the knowledge assets have been cleansed and gaps have been filled, the next step in the process is to structure them so that they can be related to each other correctly, with the appropriate context and meaning. This requires the use of semantic components, specifically, taxonomies and ontologies. Taxonomies deliver meaning and structure, helping AI to understand queries from users, relate knowledge assets based on the relationships between the words and phrases used within them, and leverage context to properly interpret synonyms and other “close” terms. Taxonomies can also house glossaries that further define words and phrases that AI can leverage in the generation of results.

Though often confused or conflated with taxonomies, ontologies deliver a much more advanced type of knowledge organization, which is both complementary to taxonomies and unique. Ontologies focus on defining relationships between knowledge assets and the systems that house them, enabling AI to make inferences. For instance:

<Person> works at <Company>

<Zach Wahl> works at <Enterprise Knowledge>

<Company> is expert in <Topic>

<Enterprise Knowledge> is expert in <AI Readiness>

From this, a simple inference based on structured logic can be made, which is that the person who works at the company is an expert in the topic: Zach Wahl is an expert in AI Readiness. More detailed ontologies can quickly fuel more complex inferences, allowing an organization’s AI solutions to connect disparate knowledge assets within an organization. In this way, ontologies enable AI solutions to traverse knowledge assets, more accurately make “assumptions,” and deliver more complete and cohesive answers. 
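As a rough illustration, the triples above can be expressed in RDF and the implied relationship derived with a SPARQL query. This is a minimal sketch using the rdflib library; the namespace, property names, and inference rule are illustrative rather than a prescribed ontology design.

```python
# pip install rdflib
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.ZachWahl, EX.worksAt, EX.EnterpriseKnowledge))
g.add((EX.EnterpriseKnowledge, EX.expertIn, EX.AIReadiness))

# Derive the implied relationship: a person inherits their company's area of expertise.
inferred = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person ?topic WHERE {
        ?person ex:worksAt ?company .
        ?company ex:expertIn ?topic .
    }
""")

for person, topic in inferred:
    g.add((person, EX.expertIn, topic))  # write the inferred triple back into the graph
    print(f"Inferred: {person} expertIn {topic}")
```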

Collectively, you can consider these semantic components as an organizational map of what it does, who does it, and how. Semantic components can show an AI how to get where you want it to go without getting lost or taking wrong turns.

5) Semantic Model Application (Tagging)

Of course, it is not sufficient simply to design the semantic components; you must complete the process by applying them to your knowledge assets. If the semantic components are the map, applying them as metadata is the GPS that allows you to use it easily and intuitively. This step is commonly a stumbling block for organizations, and it is again why we are discussing knowledge assets rather than discrete areas like content and data. To best achieve AI readiness, all of your knowledge assets, regardless of their state (structured, unstructured, semi-structured, etc.), must have consistent metadata applied against them. 

When applied properly, this consistent metadata becomes an additional layer of meaning and context for AI to leverage in pursuit of complete and correct answers. With the latest updates to leading taxonomy and ontology management systems, the process of automatically applying metadata or storing relationships between knowledge assets in metadata graphs is vastly improved, though it still requires a human in the loop to ensure accuracy. Even so, what used to be a major hurdle in metadata application initiatives is now much simpler.


6) Address Access and Security (Unified Entitlements)

What happens when you finally deliver what your organization has been seeking, and give it the ability to collectively and completely serve end users the knowledge assets they’ve been searching for? If this step is skipped, the answer is calamity. One of the key value propositions of AI is that it can uncover hidden gems in knowledge assets, make connections humans typically can’t, and combine disparate sources to build new knowledge assets and new answers within them. This is incredibly exciting, but it also presents a massive organizational risk.

At present, many organizations have an incomplete, or frankly poor, model for entitlements, that is, for ensuring the right people see the right assets and the wrong people do not. We consistently discover highly sensitive knowledge assets in various forms on organizational systems that should be secured but are not. Some of this takes the form of a discrete document or a row of data in an application, which is surprisingly common but relatively easy to address. Even more of it is only visible when you take an enterprise view of an organization. 

For instance, Database A might contain anonymized health information about employees for insurance reporting purposes but maps to discrete unique identifiers. File B includes a table of those unique identifiers mapped against employee demographics. Application C houses the actual employee names and titles for the organizational chart, but also includes their unique identifier as a hidden field. The vast majority of humans would never find this connection, but AI is designed to do so and will unabashedly generate a massive lawsuit for your organization if you’re not careful.
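The sketch below mirrors that scenario with pandas, showing how three individually unremarkable sources combine into sensitive information once joined on the shared identifier. The tables and values are hypothetical stand-ins for Database A, File B, and Application C.

```python
import pandas as pd

health = pd.DataFrame({          # Database A: anonymized health info keyed by unique identifier
    "uid": ["u-101", "u-102"],
    "condition": ["asthma", "diabetes"],
})
demographics = pd.DataFrame({    # File B: unique identifiers mapped to demographics
    "uid": ["u-101", "u-102"],
    "zip": ["22201", "20001"],
    "birth_year": [1984, 1990],
})
directory = pd.DataFrame({       # Application C: org chart with the uid as a hidden field
    "uid": ["u-101", "u-102"],
    "name": ["Jane Doe", "John Roe"],
    "title": ["Analyst", "Manager"],
})

# The join no single human would bother to make, but an AI retrieval layer easily could.
exposed = health.merge(demographics, on="uid").merge(directory, on="uid")
print(exposed[["name", "title", "condition"]])
```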

If you have security and entitlement issues with your existing systems (and trust me, you do), AI will inadvertently discover them, connect the dots, and surface knowledge assets and connections between them that could be truly calamitous for your organization. Any AI readiness effort must confront this challenge, before your AI solutions shine a light on your existing security and entitlements issues.


7) Maintain Quality While Iteratively Improving (Governance)

Steps one through six describe how to get your knowledge assets ready for AI, but the final step gets your organization ready for AI. With a massive investment in both getting your knowledge assets in the right state for AI and in  the AI solution itself, the final step is to ensure ongoing quality of both. Mature organizations will invest in a core team to ensure knowledge assets go from AI-ready to AI-mature, including:

  • Maintaining and enforcing the core tenets to ensure knowledge assets stay up-to-date and AI solutions are looking at trusted assets only;
  • Reacting to hallucinations and unanswerable questions to fill gaps in knowledge assets; 
  • Tuning the semantic components to stay up to date with organizational changes.

The most mature organizations, those wishing to become AI-Powered organizations, will look first to their knowledge assets as the key building block to drive success. Those organizations will seek ROCK (Relevant, Organizationally Contextualized, Complete, and Knowledge-Centric) knowledge assets as the first line to delivering Enterprise AI that can be truly transformative for the organization. 

If you’re seeking help to ensure your knowledge assets are AI-Ready, contact us at info@enterprise-knowledge.com

The post Top Ways to Get Your Content and Data Ready for AI appeared first on Enterprise Knowledge.

]]>
When Should You Use An AI Agent? Part One: Understanding the Components and Organizational Foundations for AI Readiness https://enterprise-knowledge.com/when-should-you-use-an-ai-agent/ Thu, 04 Sep 2025 15:39:43 +0000 https://enterprise-knowledge.com/?p=25285 It’s been recognized for far too long that organizations spend as much as 30-40% of their time searching for or recreating information. Now, imagine a dedicated analyst who doesn’t just look for or analyze data for you but also roams … Continue reading

The post When Should You Use An AI Agent? Part One: Understanding the Components and Organizational Foundations for AI Readiness appeared first on Enterprise Knowledge.

]]>
It’s been recognized for far too long that organizations spend as much as 30-40% of their time searching for or recreating information. Now, imagine a dedicated analyst who doesn’t just look for or analyze data for you but also roams the office, listens to conversations, reads emails, and proactively sends you updates while spotting outdated data, summarizing new information, flagging inconsistencies, and prompting follow-ups. That’s what an AI agent does: it autonomously monitors content and data platforms, collaboration tools like Slack, Teams, and even email, and suggests updates or actions—without waiting for instructions. Instead of sending you on a massive data hunt to answer “What’s the latest on this client?”, an AI agent autonomously pulls CRM notes, emails, and contract changes, then summarizes them in Slack or Teams or publishes the findings as a report. It doesn’t just react; it takes initiative. 

The potential of AI agents for productivity gains within organizations is undeniable—and it’s no longer a distant future. However, the key question today is: when is the right time to build and deploy an AI agent, and when is simpler automation the more effective choice?

While the idea of a fully autonomous assistant handling routine tasks is appealing, AI agents require a complex framework to succeed. This includes breaking down silos, ensuring knowledge assets are AI-ready, and implementing guardrails to meet enterprise standards for accuracy, trust, performance, ethics, and security.

Over the past couple of years, we’ve worked closely with executives who are navigating what it truly means for their organizations to be “AI-ready” or “AI-powered”, and as AI technologies evolve, this challenge has only become more complex and urgent for all of us.

To move forward effectively, it’s crucial to understand the role of AI agents compared to traditional or narrow AI, automation, or augmentation solutions. Specifically, it is important to recognize the unique advantages of agent-based AI solutions, identify the right use cases, and ensure organizations have the best foundation to scale effectively.

In the first part of this two-part series, I’ll outline the core building blocks for organizations looking to integrate AI agents. The goal of this series is to provide insights that help set realistic expectations and contribute to informed decisions around AI agent integration—moving beyond technical experiments—to deliver meaningful outcomes and value to the organization.

Understanding AI Agents

AI agents are goal-oriented autonomous systems built from large language and other AI models, business logic, guardrails, and a supporting technology infrastructure needed to operate complex, resource-intensive tasks. Agents are designed to learn from data, adapt to different situations, and execute tasks autonomously. They understand natural language, take initiative, and act on behalf of humans and organizations across multiple tools and applications. Unlike traditional machine learning (ML) and AI automations (such as virtual assistants or recommendation engines), AI agents offer initiative, adaptability, and context-awareness by proactively accessing, analyzing, and acting on knowledge and data across systems.

 

Infographic: what AI agents are, when to use them, and their limitations

 

Components of Agentic AI Framework

1. Relevant Language and AI Models

Language models are the agent’s cognitive core, essentially its “brain”, responsible for reasoning, planning, and decision-making. While not every AI agent requires a Large Language Model (LLM), most modern and effective agents rely on LLMs and reinforcement learning to evaluate strategies and select the best course of action. LLM-powered agents are especially adept at handling complex, dynamic, and ambiguous tasks that demand interpretation and autonomous decision-making.

Choosing the right language model also depends on the use case, task complexity, desired level of autonomy, and the organization’s technical environment. Some tasks are better served by remaining simple, with more deterministic workflows or specialized algorithms. For example, an expertise-focused agent (e.g., a financial fraud detection agent) is more effective when developed with purpose-built algorithms than with a general-purpose LLM, because the subject area requires hyper-specific, non-generalizable knowledge. On the other hand, well-defined, repetitive tasks, such as data sorting, form validation, or compliance checks, can be handled by rule-based agents or classical machine learning models, which are cheaper, faster, and more predictable. LLMs, meanwhile, add the most value in tasks that require flexible reasoning and adaptation, such as orchestrating integrations with multiple tools, APIs, and databases to perform real-world actions like running a dynamic customer service process, placing trades, or interpreting incomplete and ambiguous information. In practice, we are finding that a hybrid approach works best, as sketched below.
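A minimal sketch of that hybrid routing is shown below: deterministic checks handle well-defined requests, and only open-ended requests fall through to an LLM-backed agent. The routing rules, request shapes, and the call_llm_agent stub are hypothetical placeholders, not a reference architecture.

```python
import re

def validate_form(record: dict) -> str:
    """Rule-based path: cheap, fast, and predictable for well-defined checks."""
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", record.get("date", "")):
        return "Rejected: date must be YYYY-MM-DD"
    if record.get("amount", 0) <= 0:
        return "Rejected: amount must be positive"
    return "Accepted"

def call_llm_agent(request: str) -> str:
    """Placeholder for an LLM-backed agent that plans, calls tools, and reasons over context."""
    return f"[LLM agent would handle: {request!r}]"

def route(request: dict) -> str:
    """Send structured, repetitive work to rules; ambiguous, open-ended work to the agent."""
    if request["type"] == "form_validation":
        return validate_form(request["payload"])
    return call_llm_agent(request["payload"])

print(route({"type": "form_validation", "payload": {"date": "2025-01-31", "amount": 120}}))
print(route({"type": "open_question", "payload": "Summarize this client's contract changes"}))
```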

2. Semantic Layer and Unified Business Logic

AI agents need access to a shared, consistent view of enterprise data to avoid conflicting actions, poor decision-making, or the reinforcement of data silos. Increasingly, agents will also need to interact with external data and coordinate with other agents, which compounds the risk of misalignment, duplication, or even contradictory outcomes. This is where a semantic layer becomes critical. By standardizing definitions, relationships, and business context across knowledge and data sources, the semantic layer provides agents with a common language for interpreting and acting on information, connecting agents to a unified business logic. Across several recent projects, implementing a semantic layer has improved the accuracy and precision of initial AI results from around 50% to between 80% and 95%, depending on the use case.

The semantic layer includes metadata management, business glossaries, and taxonomy/ontology/graph data schemas that work together to provide a unified and contextualized view of data across typically siloed systems and business units, enabling agents to understand and reason about information within the enterprise context. These semantic models define the relationships between data entities and concepts, creating a structured representation of the business domain the agent is operating in. Semantic models form the foundation for understanding data and how it relates to the business. By incorporating two or more of these semantic model components, the semantic layer provides the foundation for building robust and effective agentic perception, cognition, action, and learning that can understand, reason, and act on org-specific business data. For any AI, but specifically for AI agents, a semantic layer is critical in providing access to:

  • Organizational context and meaning to raw data to serve as a grounding ‘map’ for accurate interpretation and agent action;
  • Standardized business terms that establish a consistent vocabulary for business metrics (e.g., defining “revenue” or “store performance”), preventing confusion and ensuring the AI uses the same definitions as the business; and
  • Explainability and trust through metadata and lineage to validate and track why agent recommendations are compliant and safe to adopt.

Overall, the semantic layer ensures that all agents are working from the same trusted source of truth, and enables them to exchange information coherently, align with organizational policies, and deliver reliable, explainable results at scale. In a multi-agent system with multiple domain-specific agents, the agents may not all work off the same semantic layer, but each will have the organizational business context to interpret messages from the others, courtesy of the domain-specific semantic layers.

The bottom line is that, without this reasoning layer, the “black box” nature of agents’ decision-making processes erodes trust, making it difficult for organizations to adopt and rely on these solutions.

3. Access to AI-Ready Knowledge Assets and Sources

Agents require accurate, comprehensive, and context-rich organizational knowledge assets to make sound decisions. Without access to high-quality, well-structured data, agents, especially those powered by LLMs, struggle to understand complex tasks or reason effectively, often leading to unreliable or “hallucinated” outputs. In practice, this means organizations making strides with effective AI agents need to:

  • Capture and codify expert knowledge in a machine-readable form that is readily interpretable by AI models so that tacit know-how, policies, and best practices are accessible to agents, not just locked in human workflows or static documents;
  • Connect structured and unstructured data sources, from databases and transactional systems to documents, emails, and wikis, into a connected, searchable layer that agents can query and act upon; 
  • Provide semantically enriched assets with well-managed metadata, consistent labels, and standardized formats to make them interoperable with common AI platforms; 
  • Align and organize internal and external data so agents can seamlessly draw on employee-facing knowledge (policies, procedures, internal systems) as well as customer-facing assets (product documentation, FAQs, regulatory updates) while maintaining consistency, compliance, and brand integrity; and
  • Enable access to AI assets and systems while maintaining strict controls over who can use it, how it is used, and where it flows.

Beyond static access to knowledge, this also means agents must query and interact dynamically with various sources of data and content. This includes connecting to applications, websites, content repositories, and data management systems, and taking direct actions, such as reading from and writing to enterprise applications, updating records, or initiating workflows.

Enabling this capability requires a strong design and engineering foundation, allowing agents to integrate with external systems and services through standard APIs, operate within existing security protocols, and respect enterprise governance and record compliance requirements. A unified approach, bringing together disparate data sources into a connected layer (see semantic layer component above), helps break down silos and ensures agents can operate with a holistic, enterprise-wide view of knowledge.

4. Instructions, Guardrails, and Observability

Organizations are largely unprepared for agentic AI due to several factors: the steep leap from traditional, predictable AI to complex multi-agent orchestration, persistent governance gaps, a shortage of specialized expertise, integration challenges, and inconsistent data quality, to name a few. Most critically, the ability to effectively control and monitor agent autonomy remains a fundamental barrier—posing significant security, compliance, and privacy risks. Recent real-world cases highlight how quickly things can go wrong, including tales of agents deleting valuable data, offering illegal or unethical advice, and amplifying bias in hiring decisions or in public-sector deployments. These failures underscore the risks of granting autonomous AI agents high-level permissions over live production systems without robust oversight, guardrails, and fail-safes. Until these gaps are addressed, autonomy without accountability will remain one of the greatest barriers to enterprise readiness in the agentic AI era.

As such, for AI agents to operate effectively within the enterprise, they must be guided by clear instructions, protected by guardrails, and monitored through dedicated evaluation and observability frameworks.

  • Instructions: Instructions define an AI agent’s purpose, goals, and persona. Agents don’t inherently understand how a specific business or organization operates. Instead, that knowledge comes from existing enterprise standards, such as process documentation, compliance policies, and operating models, which provide the foundational inputs for guiding agent behavior. LLMs can interpret these high-level standards and convert them into clear, step-by-step instructions, ensuring agents act in ways that align with organizational expectations. For example, in a marketing context, an LLM can take a general directive like, “All published content must reflect the brand voice and comply with regulatory guidelines”, and turn it into actionable instructions for a marketing agent. The agent can then assist the marketing team by reviewing a draft email campaign, identifying tone or compliance issues, and suggesting revisions to ensure the content meets both brand and regulatory standards.
  • Guardrails: Guardrails are safety measures that act as the protective boundaries within which agents operate. Agents need guardrails across different functions to prevent them from producing harmful, biased, or inappropriate content and to enforce security and ethical standards. These include relevance and output validation guardrails, personally identifiable information (PII) filters that detect unsafe inputs or prevent leakage of PII, reputation and brand alignment checks, privacy and security guardrails that enforce authentication, authorization, and access controls to prevent unauthorized data exposure, and guardrails against prompt attacks and content filters for harmful topics (see the sketch after this list). 
  • Observability: Even with strong instructions and guardrails, agents must be monitored in real time to ensure they behave as expected. Observability includes logging actions, tracking decision paths, monitoring model outputs, cost monitoring and performance optimization, and surfacing anomalies for human review. A good starting point for managing agent access is mapping operational and security risks for specific use cases and leveraging unified entitlements (identity and access control across systems) to apply strict role-based permissions and extend existing data security measures to cover agent workflows.
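As a simple illustration of the guardrails item above, the sketch below shows minimal input/output checks: regex-based PII redaction plus a topic blocklist applied to model output. The patterns and blocked topics are illustrative assumptions; production deployments typically combine dedicated PII-detection services, policy engines, and model-based classifiers.

```python
import re

# Illustrative patterns and blocklist only.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
BLOCKED_TOPICS = {"medical advice", "legal advice"}

def redact_pii(text: str) -> str:
    """Mask anything matching a known PII pattern before it reaches the model or the user."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def check_output(text: str) -> tuple[bool, str]:
    """Block responses that drift into disallowed topics; otherwise return the redacted text."""
    lowered = text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return False, "Response blocked: out-of-scope topic."
    return True, redact_pii(text)

ok, safe_text = check_output("Contact jane.doe@example.com about the revised campaign copy.")
print(ok, safe_text)
```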

Together, instructions, guardrails, and observability form a governance layer that ensures agents operate not only autonomously, but also responsibly and in alignment with organizational goals. To achieve this, it is critical to plan for and invest in AI management platforms and services that define agent workflows, orchestrate these interactions, and supervise AI agents. Key capabilities to look for in an AI management platform include: 

  • Prompt chaining, where the output of one LLM call feeds the next, enabling multi-step reasoning (a minimal sketch follows this list); 
  • Instruction pipelines to standardize and manage how agents are guided;
  • Agent orchestration frameworks for coordinating multiple agents across complex tasks; and 
  • Evaluation and observability (E&O) monitoring solutions that offer features like content and topic moderation, PII detection and redaction, and protection against prompt injection or “jailbreaking” attacks. Furthermore, because model training involves iterative experimentation, tuning, and distributed computation, it is paramount to define benchmarks and business objectives from the outset so that model performance can be optimized through evaluation and validation.
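
To make the first capability concrete, below is a minimal sketch of prompt chaining in Python. The `call_llm` function is a hypothetical stand-in for whichever LLM client or gateway an organization uses, and the marketing-review steps mirror the earlier example; none of the function names reflect a specific product's API.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client; swap in your provider's SDK call."""
    return f"[LLM response to: {prompt[:48]}...]"

def review_campaign(draft_email: str, brand_policy: str) -> str:
    # Step 1: convert the high-level policy into concrete, checkable instructions.
    checklist = call_llm(
        "Convert this policy into a numbered checklist for reviewing marketing emails:\n"
        + brand_policy
    )
    # Step 2: the output of the first call feeds the next call (prompt chaining).
    issues = call_llm(
        "Review the email against this checklist and list any violations.\n"
        f"Checklist:\n{checklist}\n\nEmail:\n{draft_email}"
    )
    # Step 3: use the identified issues to produce a revised, compliant draft.
    return call_llm(
        "Rewrite the email to resolve these issues while keeping the original intent.\n"
        f"Issues:\n{issues}\n\nEmail:\n{draft_email}"
    )

revised = review_campaign(
    draft_email="Act now!!! Guaranteed returns for every customer.",
    brand_policy="All published content must reflect the brand voice and comply with regulatory guidelines.",
)
```

Because each intermediate output is available for inspection, chained steps are easier to evaluate and observe than a single opaque prompt.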

In contrast to the predictable expenses of standard software, AI project costs are highly dynamic and often underestimated during initial planning. Many organizations are grappling with unexpected AI cost overruns caused by hidden expenses in data management, infrastructure, and ongoing maintenance. These overruns can severely impact budgets, especially in agentic environments. Tracking system utilization, scaling resources dynamically, and implementing automated provisioning allow organizations to maintain consistent performance for agent workloads, even under variable demand, while managing cost spikes and avoiding surprises.

Many traditional enterprise observability tools are now extending their capabilities to support AI-specific monitoring. Lifecycle management tools such as MLflow, Azure ML, Vertex AI, or Databricks help with the management of this process at enterprise scale by tracking model versions, automating retraining schedules, and managing deployments across environments. As with any new technology, the effective practice is to start with these existing solutions where possible, then close the gaps with agent-specific, fit-for-purpose tools to build a comprehensive oversight and governance framework.
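
As an illustration of what lightweight lifecycle tracking can look like with one of the tools named above, here is a minimal sketch using MLflow's tracking API. The experiment name, parameters, and metric values are placeholders assumed for the example; a real setup would point at the organization's tracking server and pull metrics from its evaluation harness.

```python
import mlflow

# Illustrative names and values; point MLflow at your own tracking server in practice.
mlflow.set_experiment("agent-response-quality")

with mlflow.start_run(run_name="guardrail-profile-strict-pii"):
    mlflow.log_param("model_version", "2025-10-01")
    mlflow.log_param("guardrail_profile", "strict-pii")
    # Metrics would come from an evaluation harness; these numbers are placeholders.
    mlflow.log_metric("hallucination_rate", 0.04)
    mlflow.log_metric("policy_violation_rate", 0.01)
    mlflow.log_metric("avg_cost_per_task_usd", 0.12)
```

Logged runs like this give teams a comparable record of model versions, guardrail settings, and costs across deployments.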

5. Humans and Organizational Operating Models

There is no denying it—the integration of AI agents will transform ways of working worldwide. However, a significant gap still exists between the rapid adoption plans for AI agents and the reality on the ground. Why? Because too often, AI implementations are treated as technological experiments, with a focus on performance metrics or captivating demos. This approach frequently overlooks the critical human element needed for AI’s long-term success. Without a human-centered operating model, AI deployments continue to run the risk of being technologically impressive but practically unfit for organizational use.

Human Intervention and Human-In-the-Loop Validation: One of the most pressing considerations in integrating AI into business operations is the role of humans in overseeing, validating, and intervening in AI decisions. Agentic AI can automate many tasks, but it still requires human oversight, particularly for high-risk or high-impact decisions. A transparent framework for when and how humans intervene is essential for mitigating these risks and ensuring AI complies with regulatory and organizational standards. Emerging practices are showing early success when agent autonomy is combined with human checkpoints: subject matter experts (SMEs) are identified and designated as part of the “AI product team” from the outset to define requirements and ensure that AI agents stay focused on the right organizational use cases throughout development. 
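
Below is a minimal sketch of what such a human checkpoint could look like in code. The risk-scoring function, threshold, and action types are illustrative assumptions; in a real deployment, the scoring logic and review queue would be defined with the SMEs on the AI product team and backed by proper workflow tooling.

```python
HIGH_RISK_THRESHOLD = 0.7  # illustrative cut-off agreed with SMEs

def risk_score(action: dict) -> float:
    """Placeholder risk model; real scoring would weigh impact, reversibility,
    and data sensitivity for each action type."""
    return 0.9 if action.get("type") == "delete_records" else 0.2

def execute_with_checkpoint(action: dict, review_queue: list) -> str:
    if risk_score(action) >= HIGH_RISK_THRESHOLD:
        # High-impact actions pause here until a designated human approves them.
        review_queue.append(action)
        return "pending_human_review"
    # Low-risk actions proceed autonomously but remain logged for observability.
    return "executed"

queue: list = []
print(execute_with_checkpoint({"type": "send_status_summary"}, queue))  # executed
print(execute_with_checkpoint({"type": "delete_records"}, queue))       # pending_human_review
```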

Shift in Roles and Reskilling: For AI to truly integrate into an organization’s workflow, a fundamental shift in roles and operating models is becoming necessary. Many roles as we know them today are already shifting—even for the most seasoned software and ML engineers. Organizations are starting to rethink their structures to blend human expertise with agentic autonomy. This involves redesigning workflows so that AI agents automate routine tasks while humans focus on strategic, creative, and problem-solving work. 

Implementing and managing agentic AI requires specialized knowledge in areas such as AI model orchestration, agent–human interaction design, and AI operations. These skill sets are underdeveloped in many organizations, and as a result, AI projects fail to scale effectively. The gap isn’t just technical; it is also cultural, requiring an understanding of how AI agents generate results and of the responsibility associated with their outputs. To bridge this gap, we are seeing organizations start to invest in restructuring their data, AI, content, and knowledge operations teams and in reskilling their workforce for roles such as AI product management, knowledge and semantic modeling, and AI policy and governance.

Ways of Working: To support agentic AI delivery at scale, it is becoming evident that agile methodologies must evolve beyond their traditional scope of software engineering and adapt to the unique challenges of AI development lifecycles. Agentic AI requires an agile framework that is flexible, experimental, and capable of iterative improvement. This, in turn, requires deep interdisciplinary collaboration across data scientists, AI engineers, software engineers, domain experts, and business stakeholders to navigate complex business and data environments.

Furthermore, traditional CI/CD pipelines, which focus on code deployment, need to be expanded to support continuous model training, testing, human intervention, and deployment. Integrating ML/AI Ops is critical for managing agent model drift and enabling autonomous updates. The successful development and large-scale adoption of agentic AI hinges on these evolving workflows that empower organizations to experiment, iterate, and adapt safely as both AI behaviors and business needs evolve.
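
As a hedged illustration of one such expanded pipeline step, the sketch below shows a simple drift gate that could run alongside code tests on a schedule. The metric, threshold, and outcomes are assumptions made for the example rather than a prescribed method; many teams would instead rely on the drift detection built into their ML Ops platform.

```python
import statistics

DRIFT_THRESHOLD = 0.15  # illustrative tolerance for mean evaluation-score shift

def drift_detected(baseline_scores: list[float], current_scores: list[float]) -> bool:
    """Flag drift when the mean evaluation score moves beyond the tolerance."""
    shift = abs(statistics.mean(current_scores) - statistics.mean(baseline_scores))
    return shift > DRIFT_THRESHOLD

def pipeline_gate(baseline: list[float], current: list[float]) -> str:
    # In an expanded CI/CD pipeline, this gate runs next to unit tests and can
    # open a retraining task or route the change to human review before deployment.
    if drift_detected(baseline, current):
        return "trigger_retraining_and_human_review"
    return "promote_to_production"

print(pipeline_gate([0.82, 0.80, 0.84], [0.61, 0.58, 0.63]))  # trigger_retraining_and_human_review
```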

Conclusion 

Agentic AI will not succeed through technology advancements alone. Given the inherent complexity and autonomy of AI agents, it is critical to evaluate organizational readiness and conduct a thorough cost-benefit analysis when determining whether an agentic capability is essential or merely a nice-to-have.

Success will ultimately depend on more than just cutting-edge models and algorithms. It also requires dismantling artificial, system-imposed silos between business and technical teams, while treating organizational knowledge and people as critical assets in AI design. Therefore, a thoughtful evolution of the organizational operating model and the seamless integration of AI into the business’s core are critical. This involves selecting the right project management and delivery frameworks, acquiring the most suitable solutions, implementing foundational knowledge and data management and governance practices, and reskilling, attracting, hiring, and retaining individuals with the necessary skill sets. These considerations make up the core building blocks for organizations to begin integrating AI agents.

The good news is that when built on the right foundations, AI solutions can be reused across multiple use cases, bridge diverse data sources, transcend organizational silos, and continue delivering value beyond the initial hype. 

Is your organization looking to evaluate its AI readiness? How well does it measure up against these readiness factors? Explore our case studies and knowledge base to see how other organizations are tackling this, or get in touch to learn more about our approaches to content and data readiness for AI.

The post When Should You Use An AI Agent? Part One: Understanding the Components and Organizational Foundations for AI Readiness appeared first on Enterprise Knowledge.

]]>
How KM Leverages Semantics for AI Success https://enterprise-knowledge.com/how-km-leverages-semantics-for-ai-success/ Wed, 03 Sep 2025 19:08:31 +0000 https://enterprise-knowledge.com/?p=25271 This infographic highlights how KM incorporates semantic technologies and practices across scenarios to enhance AI capabilities.

The post How KM Leverages Semantics for AI Success appeared first on Enterprise Knowledge.

]]>

To get the most out of Large Language Model (LLM)-driven AI solutions, you need to provide them with structured, context-rich knowledge that is unique to your organization. Without purposeful access to proprietary terminology, clearly articulated business logic, and consistent interpretation of enterprise-wide data, LLMs risk delivering incomplete or misleading insights. This infographic highlights how KM incorporates semantic technologies and practices across scenarios, showing where they enhance AI capabilities and where they are foundational. The goal is to empower your organization to strategically leverage semantics for more accurate, actionable outcomes while cultivating sound knowledge intelligence practices and investing in your enterprise’s knowledge assets.

Use Case: Expert Elicitation - Semantics used for AI Enhancement
Efficiently capture valuable knowledge and insights from your organization’s experts about past experiences and lessons learned, especially when these insights have not yet been formally documented. By using ontologies to spot knowledge gaps and taxonomies to clarify terms, an LLM can capture and structure undocumented expertise, storing it in a knowledge graph for future reuse.
Example: Capturing a senior engineer’s undocumented insights on troubleshooting past system failures to streamline future maintenance.

Use Case: Discovery & Extraction - Semantics used for AI Enhancement
Quickly locate key insights or important details within a large collection of documents and data, synthesize them into meaningful, actionable summaries, and deliver these directly back to the user. Ontologies ensure concepts are recognized and linked consistently across wording and format, enabling insights to be connected, reused, and verified outside an LLM’s opaque reasoning process.
Example: Scanning thousands of supplier agreements to locate variations of key contract clauses, despite inconsistent wording, then compiling a cross-referenced summary for auditors to accelerate compliance verification and identify high-risk deviations. (A minimal code sketch of this kind of taxonomy-driven tagging appears at the end of this post.)

Use Case: Context Aggregation - Semantics for AI Foundations
Gather fragmented information from diverse sources and combine it into a unified, comprehensive view of your business processes or critical concepts, enabling deeper analysis, more informed decisions, and previously unattainable insights. Knowledge graphs unify fragmented information from multiple sources into a persistent, coherent model that both humans and systems can navigate. Ontologies make relationships explicit, enabling the inference of new knowledge that reveals connections and patterns not visible in isolated data.
Example: Integrating financial, operational, HR, and customer support data to predict resource needs and reveal links between staffing, service quality, and customer retention for smarter planning.

Use Case: Cleanup and Optimization - Semantics used for AI Enhancement
Analyze and optimize your organization’s knowledge base by detecting redundant, outdated, or trivial (ROT) content, then recommend targeted actions or automatically archive and remove irrelevant material to keep information fresh, accurate, and valuable. Taxonomies and ontologies recognize conceptually related information even when it is expressed in different terms, formats, or contexts, allowing the AI to uncover hidden redundancies, spot emerging patterns, and make more precise recommendations than keyword or RAG search alone could justify.
Example: Automatically detecting and flagging outdated or duplicative policy documents, despite inconsistent titles or formats, across an entire intranet, streamlining reviews and ensuring only current, authoritative content remains accessible.

Use Case: Situated Insight - Semantics used for AI Enhancement
Proactively deliver targeted answers and actionable suggestions uniquely aligned with each user’s expressed preferences, behaviors, and needs, enabling swift, confident decision-making. Taxonomies standardize and reconcile data from diverse systems, while knowledge graphs connect and contextualize a user’s preferences, behaviors, and history, creating a unified, dynamic profile that drives precise, timely, and highly relevant recommendations.
Example: Instantly curating a personalized learning path (complete with recommended modules, mentors, and practice projects) based on an employee’s recent performance trends, skill gaps, and long-term career goals, accelerating both individual growth and organizational capability.

Use Case: Context Mediation and Resolution - Semantics for AI Foundations
Bridge disparate contexts across people, processes, and technologies into a common, resolved, machine-readable understanding that preserves nuance while eliminating ambiguity. Semantics establish a shared, machine-readable understanding that bridges differences in language, structure, and context across people, processes, and systems. Taxonomies unify terminology from diverse sources, while ontologies and knowledge graphs capture and clarify the nuanced relationships between concepts, eliminating ambiguity without losing critical detail.
Example: Reconciling varying medical terminologies, abbreviations, and coding systems from multiple healthcare providers into a single, consistent patient record, ensuring that every clinician sees the same unambiguous history and enabling faster diagnosis, safer treatment decisions, and more effective care coordination.

To learn more about our work with AI and semantics and how it can help your organization make the most of these investments, don’t hesitate to reach out at: https://enterprise-knowledge.com/contact-us/
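
For readers who want to see these ideas in code, here is a minimal sketch of taxonomy-driven tagging in a small knowledge graph, using the open-source rdflib library. The namespace, class names, and contract identifiers are illustrative assumptions for the supplier-agreement example above, not a published ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

# Illustrative namespace; a real implementation would reuse the organization's own ontology.
EX = Namespace("https://example.org/km/")
g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# Taxonomy: two different clause labels resolve to a single concept.
g.add((EX.LimitationOfLiability, RDF.type, SKOS.Concept))
g.add((EX.LimitationOfLiability, SKOS.prefLabel, Literal("Limitation of Liability")))
g.add((EX.LimitationOfLiability, SKOS.altLabel, Literal("Liability Cap")))

# Knowledge assets tagged with the concept, despite inconsistent wording in the source text.
g.add((EX.Contract_042, RDF.type, EX.SupplierAgreement))
g.add((EX.Contract_042, EX.containsClause, EX.LimitationOfLiability))
g.add((EX.Contract_117, RDF.type, EX.SupplierAgreement))
g.add((EX.Contract_117, EX.containsClause, EX.LimitationOfLiability))

# One query now finds every agreement containing the clause, regardless of phrasing.
results = g.query("""
    PREFIX ex: <https://example.org/km/>
    SELECT ?contract WHERE { ?contract ex:containsClause ex:LimitationOfLiability . }
""")
for row in results:
    print(row.contract)
```

Swapping the illustrative namespace for the organization’s own ontology would let the same query pattern span contracts, policies, and other tagged assets.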

The post How KM Leverages Semantics for AI Success appeared first on Enterprise Knowledge.

]]>
Breaking Down Types of Knowledge Assets and Their Impact https://enterprise-knowledge.com/breaking-down-types-of-knowledge-assets-and-their-impact/ Fri, 22 Aug 2025 13:52:30 +0000 https://enterprise-knowledge.com/?p=25190 In their blog “What is Knowledge Asset?”, EK’s CEO Zach Wahl and Practice Lead for Semantic Design and Modeling, Sara Mae O’Brien-Scott, explored how organizations can define knowledge assets beyond just documents or data. It emphasizes that anything, from people … Continue reading

The post Breaking Down Types of Knowledge Assets and Their Impact appeared first on Enterprise Knowledge.

]]>
In their blog “What is Knowledge Asset?”, EK’s CEO Zach Wahl and Practice Lead for Semantic Design and Modeling, Sara Mae O’Brien-Scott, explored how organizations can define knowledge assets beyond just documents or data. The blog emphasizes that anything, from people and processes to AI-generated content, can be treated as a knowledge asset if it holds value, can be connected via metadata, and contributes to a broader, contextualized knowledge network.

The way knowledge assets are defined is crucial for an organization because it directly impacts how they are managed, leveraged, and protected. This includes identifying which knowledge assets have strategic value, determining how to manage them so they are accessible for timely decision-making, applying management policies that ensure effective knowledge sharing, retention, continuity, and transfer, and taking the steps necessary to comply with industry regulations.

This blog highlights the types of knowledge assets that are commonly found in organizations and provides industry-specific examples based on typical Knowledge Management (KM) Use Cases.

 

Infographic titled “Types of Knowledge Assets,” showing seven categories: People’s Expertise, Content & Documentation, Technical Infrastructure, Structured Data, Governance, Actionable Processes, and Operational Resources, each with icons and descriptions.

Examples of Relevant Knowledge Asset Types per Industry

As illustrated in the previous section describing the different types of knowledge assets, these assets encompass more than just content or data. They may include people’s expertise and experience, transaction records, policies, and even facilities or locations. Depending on the industry or organization type, certain knowledge assets may be prioritized in early use cases because they play a more central role in those specific contexts.

A manufacturing company looking to improve its supply chain processes would benefit significantly from tagging, managing, leveraging, and protecting operational and logistical resources — such as equipment, facilities, and products — and linking them to reveal relationships and dependencies across the supply chain. By also tagging and connecting additional knowledge assets, such as structured data and analytical resources — including order history, transactions, and metrics — and content and documentation — such as process descriptions and reports — the company may gain deeper visibility into operational bottlenecks, enhance forecasting accuracy, and improve coordination across departments. This holistic approach can enable more agile decision-making, reducing downtime and supporting continuous improvement across the entire manufacturing lifecycle.
A bank that is looking to maintain compliance, uphold governance standards, and minimize regulatory risk can benefit from managing, leveraging, and protecting its key knowledge assets in a standardized and connected way. By using key terminology to tag governance and compliance resources — such as corporate policies, industry regulations, and tax codes — alongside operational and logistical resources — such as locations and facilities — and corresponding subject matter experts, the bank builds a foundation for semantic alignment. This allows the bank not only to associate branches and operational sites with the specific policies and regulatory obligations they must meet, but also to ensure that it complies with jurisdiction-specific requirements, reduces audit exposure, and strengthens its ability to respond to regulatory changes with agility and confidence.
A healthcare organization relies on clinical expertise and institutional memory to diagnose and treat patients. By capturing, tagging, and sharing the expertise and experience of physicians and multidisciplinary teams, doctors, nurses, and other support personnel can access the expert-based information they need in a timely manner to diagnose and treat their patients more accurately. Access to content and documentation from clinical cases, as well as structured data from research studies, further improves decision-making for the organization’s clinical staff.
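
As a simplified illustration of the healthcare scenario above, the sketch below connects tagged knowledge assets by shared condition terms. The asset identifiers, condition tags, and in-memory dictionaries are assumptions made for the example; a production system would store these relationships in a graph database or triple store.

```python
# Case reviews tagged with condition terms from a shared taxonomy (illustrative IDs and tags).
case_reviews = {
    "case-review-114": {"conditions": {"type-2-diabetes", "hypertension"}},
    "case-review-207": {"conditions": {"asthma"}},
    "case-review-311": {"conditions": {"type-2-diabetes"}},
}

def related_reviews(patient_conditions: set[str]) -> list[str]:
    """Return the case reviews that share at least one condition tag with the patient."""
    return [
        review_id
        for review_id, review in case_reviews.items()
        if review["conditions"] & patient_conditions
    ]

# A clinician opening a record tagged with these conditions is shown related prior cases.
print(related_reviews({"type-2-diabetes"}))  # ['case-review-114', 'case-review-311']
```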

Do you know which priority knowledge assets and related KM use cases could transform your organization by empowering teams to surface hidden insights, accelerating decision-making, or fostering operational excellence? If you need support uncovering the most valuable use cases and the associated knowledge assets that unlock meaningful transformation in your organization, we can help. Contact us to learn more. 

The post Breaking Down Types of Knowledge Assets and Their Impact appeared first on Enterprise Knowledge.

]]>