Semantic Layer Symposium 2025: Knowledge Graphs Panel — The Rising Star of the Knowledge Management Toolkit https://enterprise-knowledge.com/semantic-layer-symposium-2025-knowledge-graphs-panel-the-rising-star-of-the-knowledge-management-toolkit/ Tue, 18 Nov 2025 19:54:33 +0000

In October of this year, Enterprise Knowledge held our annual Semantic Layer Symposium (SLS) in Copenhagen, Denmark, bringing together industry thought leaders, data experts, and practitioners to explore the transformative potential, and reflect on the successful implementation, of semantic layers. With a focus on practical applications, real-world use cases, actionable strategies, and proven paths to delivering measurable value, the symposium provided attendees with tangible insights they can apply within their organizations.

We’re excited to continue to release these discussions for viewing: next up, a panel moderated by Barry Byrne of Novartis, featuring Kurt Kragh Sørensen (Novartis), Daan Hannessen (Shell), and Sara Mae O’Brien-Scott (Enterprise Knowledge). And check out Daan’s pre-SLS Knowledge Cast episode!

Panel – Knowledge Graphs: The Rising Star of the Knowledge Management Toolkit

Panel Moderator: Barry Byrne (Novartis)
Panelists: Kurt Kragh Sørensen (Novartis), Daan Hannessen (Shell), and Sara Mae O’Brien-Scott (Enterprise Knowledge)

Leading organizations are increasingly turning to knowledge graphs to connect information, enable intelligent discovery, and unlock new business value. In this panel, world-class practitioners share real stories of how they have implemented knowledge graphs as part of their knowledge management strategies. Expect practical lessons, proven approaches, and insights into why graphs are quickly becoming an essential part of the enterprise toolkit.

Semantic Layer Symposium 2025: Using Semantics to Reduce Hallucinations and Overcome Agentic Limits – Neuro-Symbolic AI and the Promise of Agentic AI https://enterprise-knowledge.com/sls2025-using-semantics-to-reduce-hallucinations-and-overcome-agentic-limits/ Thu, 13 Nov 2025 17:26:57 +0000

In October of this year, Enterprise Knowledge held our annual Semantic Layer Symposium (SLS) in Copenhagen, Denmark, bringing together industry thought leaders, data experts, and practitioners to explore the transformative potential, and reflect on the successful implementation, of semantic layers. With a focus on practical applications, real-world use cases, actionable strategies, and proven paths to delivering measurable value, the symposium provided attendees with tangible insights they can apply within their organizations.

We’re excited to release these discussions for viewing, starting with Ben Clinch of Ortecha (who we also got the chance to speak with ahead of the event on Knowledge Cast).

Using Semantics to Reduce Hallucinations and Overcome Agentic Limits – Neuro-Symbolic AI and the Promise of Agentic AI

Speaker: Ben Clinch (Ortecha)

With the pace of change in AI across the industry and the constant bombardment of contradictory advice, it is easy to become overwhelmed and not know where to start. The promise of LLMs has been undermined by vendor and journalistic hype and by the inability to rely on the accuracy of quantitative answers. After all, what good is a colleague (artificial or not) if you already need to know the answer to validate anything you ask of them? This is further compounded by the exciting promise of agentic AI set against the relative immaturity of frameworks such as MCP. Neuro-symbolic AI, which combines two well-established technologies (semantic knowledge graphs and machine learning), enables more accurate LLM-powered analytics and, most importantly, faster time to greater data value. Leveraged alongside solid data management foundations, it can empower AI agents while limiting the inherent risks in using them.

In this practical, engaging, and fun talk, Ben equips participants with the principles and fundamentals that never change but often go under-utilized, helping you lay a solid foundation for the new age of agentic AI.

Enterprise Knowledge Speaking at KMWorld 2025 https://enterprise-knowledge.com/enterprise-knowledge-speaking-at-kmworld-2025/ Wed, 12 Nov 2025 21:11:22 +0000

Enterprise Knowledge (EK) will once again have a strong presence at the upcoming KMWorld Conference in Washington, D.C. This year, EK is delivering 11 sessions throughout KMWorld and its four co-located events: Taxonomy Boot Camp, Enterprise Search & Discovery, Enterprise AI World, and the Text Analytics Forum. 

EK is offering an array of thought leadership sessions to share KM approaches and methodologies. Several of EK’s sessions include presentations with clients, where presenters jointly deliver advanced case studies on knowledge graphs, enterprise learning solutions, and AI.  

On November 17, EK-led events will include:

  • Taxonomy Principles to Support Knowledge Management at a Not-for-Profit, featuring Bonnie Griffin, co-presenting with Miriam Heard of YMCA – Learn how Heard and Griffin applied taxonomy design to tame tags, align content types, and simplify conventions, transforming the YMCA’s intranet so staff can find people faster, retrieve information reliably, and share updates with the right audiences.
  • Utilizing Taxonomies to Meet UN SDG Obligations, featuring Benjamin Kass, co-presenting with Mike Cannon of the American Speech-Language-Hearing Association (ASHA) – Discover how ASHA, a UN SDG Publishers Compact signatory, piloted automatic tagging to surface SDG-relevant articles, using taxonomies for robust metadata, analytics, and high-quality content collections.
  • Driving Knowledge Management With Taxonomy and Ontology, featuring Bonnie Griffin, co-presenting with Alexander Zichettello of Honda Development & Manufacturing of America – Explore how Zichettello and Griffin designed taxonomies and ontologies for a major automaker, unifying siloed content and terminology. Presenters will share a repeatable, standards-based process and the best practices for scalable, sustainable knowledge management with attendees.

On November 18, EK-led events will include:

  • Taxonomy From 2006 to 2045: Are We Ready for the Future?, moderated by Zach Wahl, EK’s CEO and co-founder – Celebrate 20 years of Taxonomy Boot Camp with a look back at 2006 abstracts, crowd-voted predictions for the next two decades (AI included), lively debate, and a cake-cutting send-off.

On November 19, EK-led events will include:

  • Transforming Content Operations in the Age of AI, featuring Rebecca Wyatt and Elliott Risch – Learn how Wyatt and Risch partnered to leverage an AI proof of concept to prioritize and accelerate content remediation and improve content and search experiences on a flagship Intel KM platform.
  • Tracing the Thread: Decoding the Decision-Making Process With GraphRAG, featuring Urmi Majumder and Kaleb Schultz – Learn about GraphRAG and how pairing generative AI with a standards-based knowledge graph can unify data to tackle complex questions, curb hallucinations, and deliver traceable answers.
  • The Cost of Missing Critical Connections in Data: Suspicious Behavior Detection Using Link Analysis (A Case Study), featuring Urmi Majumder and Kyle Garcia – See how graph-powered link analysis and NLP can uncover hidden connections in messy data, powering fraud detection and risk mitigation, with practical modeling choices and a real-world, enterprise-ready case study.
  • Generating Structured Outputs From Unstructured Content Using LLMs, featuring Kyle Garcia and Joseph Hilger, EK’s COO and co-founder – Discover how LLMs guided by content models break long, unstructured documents into reusable, knowledge graph–ready components, reducing hallucinations while improving search, personalization, and cross-platform reuse.

On November 20, EK-led events will include:

  • Enterprises, KM, & Agentic AI, featuring Jess DeMay, co-presenting with Rachel Teague of Emory Consulting Services – This interactive discussion looks at organizational trends as well as new technologies and processes to enhance knowledge sharing, communication, collaboration, and innovation in the enterprises of the future.
  • Making Search Less Taxing: Leveraging Semantics and Keywords in Hybrid Search, featuring Chris Marino, co-presenting with Jaime Martin of Tax Analysts – Explore how Tax Analysts, the nonpartisan nonprofit behind Tax Notes, scaled an advanced search overhaul that lets subscribers rapidly find what they need while surfacing relevant content they didn’t know to look for.
  • The Future of Enterprise Search & Discovery, a panel including EK’s COO and co-founder Joseph Hilger – Get a glimpse of what’s next in enterprise search and discovery as this panel unpacks agentic AI and emerging trends, offering near and long-term predictions for how tools, workflows, and roles will evolve. 

Come to KMWorld 2025, November 17–20 in Washington, D.C., to hear from EK experts and learn more about the growing field of knowledge management. Register here.

How Taxonomies and Ontologies Enable Explainable AI https://enterprise-knowledge.com/how-taxonomies-and-ontologies-enable-explainable-ai/ Fri, 31 Oct 2025 15:18:09 +0000

Taxonomy and ontology models are essential to unlocking the value of knowledge assets. They provide the structure needed to connect fragmented information across an organization, enabling explainable AI. As part of a broader Knowledge Intelligence (KI) strategy, these models help reduce hallucinations and make AI-generated content more trustworthy. This blog provides an overview of why taxonomies and ontologies are essential to connect disparate knowledge assets within an organization and improve the quality and accuracy of AI-generated content.

The Anatomy of AI

Here is a conceptual analogy to help illustrate how taxonomies and ontologies support AI. While inspired by the human musculoskeletal system, this analogy is not intended to represent anatomical accuracy, but rather to illustrate how taxonomies provide foundational structure and ontologies enable flexible, contextual connections of knowledge assets within AI systems.

Just like the musculoskeletal system gives structure, support, and coherence to the human body, taxonomies and ontologies provide the structural framework that organizes and contextualizes knowledge assets for AI. Here is the analogy: the spine and the bones represent the taxonomies, in other words, the hierarchical, backbone structure for categorizing and organizing concepts that describe an organization’s core knowledge assets. Similarly, the joints, ligaments, and muscles represent the ontologies that provide the flexibility to connect related concepts across assets in an organization’s knowledge domain. 

Depending on the organization’s domain or industry, certain types of knowledge assets become more relevant or strategically important. In the case of a healthcare organization, key knowledge assets may include content such as patients’ electronic health records, clinical guidelines and protocols, multidisciplinary case reviews, and research publications, as well as data such as diagnostic data and clinical trial data. Taxonomies that capture and group together key concepts, such as illnesses, symptoms, treatments, outcomes, medicines, and clinical specialties, can be used to tag and structure these assets. Continuing with the same scenario, an ontology in a healthcare organization can incorporate those key concepts (entities) from the taxonomy, along with their properties and relationships, to enable alignment and consistent interpretation of knowledge assets across systems. Together, taxonomies and ontologies make it possible to connect, for instance, a patient’s health record with diagnostic data and previous case reviews for other patients with the same (or similar) conditions, including illnesses, symptoms, treatments, and medicines. As a result, healthcare professionals can quickly access the information they need to make well-informed decisions about a patient’s care.
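To make the analogy concrete, here is a minimal sketch in Python using the rdflib library. The namespace, concept names, and relationships are hypothetical illustrations rather than a clinical model: a SKOS taxonomy supplies the hierarchy, lightweight ontology properties supply the connections, and a simple traversal exposes the path between assets.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical namespace for a healthcare organization's semantic models
EX = Namespace("https://example.org/health/")

g = Graph()
g.bind("skos", SKOS)

# Taxonomy (the "spine and bones"): a hierarchy of clinical concepts
g.add((EX.Condition, RDF.type, SKOS.Concept))
g.add((EX.Migraine, RDF.type, SKOS.Concept))
g.add((EX.Migraine, SKOS.broader, EX.Condition))
g.add((EX.Migraine, SKOS.altLabel, Literal("chronic migraine headache")))

# Ontology (the "joints and muscles"): named relationships that connect assets
g.add((EX.PatientRecord123, EX.hasDiagnosis, EX.Migraine))
g.add((EX.CaseReview42, EX.hasDiagnosis, EX.Migraine))
g.add((EX.Migraine, EX.treatedBy, EX.Sumatriptan))

# An explainable traversal: surface prior case reviews that share this
# patient's diagnosis, with every hop an explicit, named relationship
for diagnosis in g.objects(EX.PatientRecord123, EX.hasDiagnosis):
    for asset in g.subjects(EX.hasDiagnosis, diagnosis):
        if asset != EX.PatientRecord123:
            print(f"{asset} relates to PatientRecord123 via {diagnosis}")
```

Because every hop in the traversal is an explicit, named relationship, the reasoning path behind a recommendation can be shown to the clinician rather than buried in model weights.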

Where AI is Failing

AI has repeatedly failed to provide reliable information to employees, customers, and patients, undermining their confidence in AI-supported systems and sometimes leading to serious organizational consequences. You may be familiar with the case in which a medical association’s chatbot unintentionally gave harmful advice to people with eating disorders. Or maybe you heard in the news about the bank whose faulty AI system misclassified thousands of transactions as fraudulent due to a programming error, resulting in significant customer dissatisfaction and harming the organization’s reputation. There was also a case in which an AI-powered translation system failed to accurately assess asylum seekers’ applications, raising serious concerns about its fairness and accuracy, and potentially affecting critical life decisions for those applicants. In each of these cases, had the AI systems effectively aggregated both unstructured and structured knowledge assets, and reliably linked them to encoded expert knowledge and relevant business context, the outcomes would have been very different. By leveraging taxonomies and ontologies to aggregate key knowledge assets, the results would have aligned far more closely with the intended objectives, ultimately benefiting the end users as originally intended.

How Taxonomies And Ontologies Enable Explainable AI

When knowledge assets are consistently tagged with taxonomies and related via ontologies, AI systems can trace how a decision was made. This means that end users can understand the reasoning path, supported by defined relationships. This also means that bias and hallucinations can be more easily detected by auditing the semantic structure behind the results.

As illustrated in the healthcare organization example, diagnoses can be tagged with medical industry taxonomies, while ontologies can help create relationships among symptoms, treatments, and outcomes. This can help physicians tailor treatments to individual patient needs by leveraging past patient cases and the collective expertise of other physicians. Similarly, a retail organization can enhance its customer service by implementing a chatbot that is linked to structured product taxonomies and ontologies to help deliver consistent and explainable answers about products to customers. More consistent and trustworthy customer interactions streamline end-user support and strengthen brand confidence.

Do We Really Need Taxonomies and Ontologies to be Successful With AI?

The examples above illustrate that explainability in AI really matters. Whether end users are patients, bank customers, or individuals requesting specific products or services, they all want more transparent, trustworthy, and human-centered AI experiences. Taxonomies and ontologies provide structure and connectedness to content, documents, data, expert knowledge, and overall business context, so that all of these assets are machine-readable and findable by AI systems at the moment of need, ultimately creating meaningful interactions for end users.

Conclusion

Just like bones, joints, ligaments, and muscles in the human body, taxonomies and ontologies provide the essential structure and connection that allow AI systems to stand up to testing, be reliable, and perform with clarity. At EK, we have extensive experience identifying key knowledge assets as well as designing and implementing taxonomies and ontologies to successfully support AI initiatives. If you want to improve the Knowledge Intelligence (KI) of your existing or future AI applications and need help with your taxonomy and ontology efforts, don’t hesitate to get in touch with us.

How to Leverage LLMs for Auto-tagging & Content Enrichment https://enterprise-knowledge.com/how-to-leverage-llms-for-auto-tagging-content-enrichment/ Wed, 29 Oct 2025 14:57:56 +0000

When working with organizations on key data and knowledge management initiatives, we’ve often noticed that a common roadblock is a lack of quality (relevant, meaningful, or up-to-date) existing content. Stakeholders may be excited to get started with advanced tools as part of their initiatives, like graph solutions, personalized search solutions, or advanced AI solutions; however, without a strong backbone of semantic models and context-rich content, these solutions are significantly less effective. For example, without proper tags and content types, a knowledge portal development effort can’t fully demonstrate the value of faceting and aggregating pieces of content and data together in ‘knowledge panes’. With a more semantically rich set of content to work with, the portal can begin showing value through search, filtering, and aggregation, leading to further organizational and leadership buy-in.

One key step in preparing content is the application of metadata and organizational context to pieces of content through tagging. There are several tagging approaches an organization can take to enrich pre-existing content with metadata and organizational context, including manual tagging, automated tagging capabilities from a taxonomy and ontology management system (TOMS), using apps and features directly from a content management solution, and various hybrid approaches. While many of these approaches, in particular acquiring a TOMS, are recommended as a long-term auto-tagging solution, EK has recommended and implemented Large Language Model (LLM)-based auto-tagging capabilities across several recent engagements. Due to LLM-based tagging’s lower initial investment compared to a TOMS and its greater efficiency than manual tagging, these auto-tagging solutions have been able to provide immediate value and jumpstart the process of re-tagging existing content. This blog will dive deeper into how LLM tagging works, the value of semantics, technical considerations, and next steps for implementing an LLM-based tagging solution.

Overview of LLM-Based Auto-Tagging Process

Similar to existing auto-tagging approaches, the LLM suggests a tag by parsing through a piece of content, processing and identifying key phrases, terms, or structures that give the document context. Through prompt engineering, the LLM is then asked to compare the similarity of key semantic components (e.g., named entities, key phrases) with various term lists, returning a set of terms that could be used to categorize the piece of content. These responses can be adjusted in the tagging workflow to only return terms meeting a specific similarity score. The tagging results are then exported to a data store and applied to the content source. Many factors, including the particular LLM used, the knowledge an LLM is working with, and the source location of content, can greatly impact tagging effectiveness and accuracy. In addition, adjusting parameters, taxonomies/term lists, and/or prompts to improve precision and recall can ensure tagging results align with an organization’s needs. The final step is the auto-tagging itself: the application of the tags in the source system, which could look like a script or workflow that applies the stored tags to pieces of content.

Figure 1: High-level steps for LLM content enrichment

EK has put these steps into practice, for example, when engaging with a trade association on a content modernization project to migrate and auto-tag content into a new content management system (CMS). The organization had been struggling with content findability, standardization, and governance, in particular, the language used to describe the diverse areas of work the trade association covers. As part of this engagement, EK first worked with the organization’s subject matter experts (SMEs) to develop new enterprise-wide taxonomies and controlled vocabularies integrated across multiple platforms to be utilized by both external and internal end-users. To operationalize and apply these common vocabularies, EK developed an LLM-based auto-tagging workflow utilizing the four high-level steps above to auto-tag metadata fields and identify content types. This content modernization effort set up the organization for document workflows, search solutions, and generative AI projects, all of which are able to leverage the added metadata on documents. 
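As a minimal sketch of the workflow described above, the high-level steps can be condensed into a single tagging function. All names here are illustrative assumptions: call_llm stands in for whichever private model endpoint an organization uses, and the similarity threshold, prompt, and response format would be tuned per engagement.

```python
import json

SIMILARITY_THRESHOLD = 0.75  # illustrative cutoff; tune for precision vs. recall

def call_llm(prompt: str) -> str:
    """Placeholder for a private or in-house LLM endpoint."""
    raise NotImplementedError

def suggest_tags(document_text: str, term_list: list[str]) -> list[dict]:
    # Steps 1-2: parse the content and ask the LLM to score each candidate term
    prompt = (
        "Compare the document below to the candidate terms and return a JSON "
        'list of {"term": ..., "score": 0-1} objects indicating how well each '
        "term describes the document.\n\n"
        f"Candidate terms: {', '.join(term_list)}\n\nDocument:\n{document_text}"
    )
    suggestions = json.loads(call_llm(prompt))
    # Step 3: keep only the terms that meet the similarity threshold
    return [s for s in suggestions if s["score"] >= SIMILARITY_THRESHOLD]

# Step 4, applying the stored tags in the source system, is repository-specific;
# see the SharePoint-oriented sketch later in this post.
```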

Value of Semantics with LLM-Based Auto-Tagging

Semantic models such as taxonomies, metadata models, ontologies, and content types can all be valuable inputs to guide an LLM on how to effectively categorize a piece of content. When considering how an LLM is trained for auto-tagging content, a greater emphasis needs to be put on organization-specific context. If using a taxonomy as a training input, organizational context can be added through weighting specific terms, increasing the number of synonyms/alternative labels, and providing organization-specific definitions. For example, by providing organizational context through a taxonomy or business glossary that the term “Green Account” refers to accounts that have met a specific environmental standard, the LLM would not accidentally tag content related to the color green or an account that is financially successful.

Another benefit of an LLM-based approach is the ability to evolve both the semantic model and the LLM as tagging results are received. As sets of tags are generated for an initial set of content, the taxonomies and content models being used to train the LLM can be refined to better fit the specific organizational context. This could look like adding alternative labels, adjusting the definitions of terms, or adjusting the taxonomy hierarchy. Similarly, additional tools and techniques, such as weighting and prompt engineering, can tune the results provided by the LLM to achieve higher recall (the rate at which the LLM includes the correct terms) and precision (the rate at which the LLM selects only the correct terms) when recommending terms. One example is adding a weighting from 0 to 10 for all taxonomy terms and assigning a higher score to terms the organization prefers to use. The workflow developed alongside the LLM can use this context to include or exclude a particular term.
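As an illustrative sketch of how that organizational context might be encoded: the terms, definitions, and scoring rule below are hypothetical, and blending the LLM score with a term weight is just one possible design.

```python
# Hypothetical taxonomy entries carrying organization-specific context
taxonomy = {
    "Green Account": {
        "definition": "An account that has met the organization's environmental standard.",
        "alt_labels": ["sustainable account", "eco-certified account"],
        "weight": 9,  # 0-10 scale; higher means the organization prefers this term
    },
    "Legacy Account": {
        "definition": "An account onboarded before the platform migration.",
        "alt_labels": ["pre-migration account"],
        "weight": 3,
    },
}

def accept_tag(term: str, llm_score: float, threshold: float = 0.7) -> bool:
    """Blend the LLM's similarity score with the organization's term weight."""
    weight = taxonomy[term]["weight"] / 10
    return llm_score * weight >= threshold

print(accept_tag("Green Account", 0.85))   # True: 0.85 * 0.9 = 0.765
print(accept_tag("Legacy Account", 0.85))  # False: 0.85 * 0.3 = 0.255
```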

Implementation Considerations for LLM-Based Auto-Tagging 

Several factors, such as the timeframe, volume of information, necessary accuracy, types of content management systems, and desired capabilities, inform the complexity and resources needed for LLM-based content enrichment. The following considerations expand upon the factors an organization must consider for effective LLM content enrichment. 

Tagging Accuracy

The accuracy of tags from an LLM directly impacts the end-users and systems (e.g., search instances or dashboards) that rely on them. Safeguards need to be implemented so that end-users can trust the accuracy of the tagged content they are using, ensuring that a user does not mistakenly access or use the wrong document and is not frustrated by the results they get. To mitigate both of these concerns, high recall and precision scores in LLM tagging improve overall accuracy and lower the chance of miscategorization. This can be done by investing further in human test-tagging and input from SMEs to create a gold-standard set of tagged content as training data for the LLM. The gold-standard set can then be used to adjust how the LLM weights or prioritizes terms, based on the organizational context it captures. These practices help avoid hallucinations (factually incorrect or misleading content) that could appear in applications utilizing the auto-tagged set of content.
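Evaluating suggested tags against such a gold-standard set is straightforward; a minimal sketch with hypothetical tags:

```python
def precision_recall(suggested: set[str], gold: set[str]) -> tuple[float, float]:
    """Score LLM-suggested tags against an SME-curated gold-standard set."""
    true_positives = len(suggested & gold)
    precision = true_positives / len(suggested) if suggested else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical example: SMEs tagged a document {"taxonomy", "governance"};
# the LLM suggested {"taxonomy", "machine learning"}.
p, r = precision_recall({"taxonomy", "machine learning"}, {"taxonomy", "governance"})
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.50, recall=0.50
```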

Content Repositories

One factor that greatly adds technical complexity is accessing the various content repositories that an LLM solution, or any auto-tagging solution, needs to read from. The best content management practice for auto-tagging is to read content in its source location, limiting the risk of duplication and the effort needed to download and then read content. When developing a custom solution, each content repository often needs a distinct approach for reading content and applying tags. A content or document repository like SharePoint, for example, has a robust API for reading content and seamlessly applying tags, while a less widely adopted platform may not have the same level of support. It is important to account for the unique needs of each system in order to limit the disruption end-users may experience when embarking on a tagging effort.
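As one example of the write-back step, here is a sketch of applying tags to a SharePoint list item through Microsoft Graph. The site, list, and item IDs, the ‘EKTags’ column, and the token handling are all placeholder assumptions to adapt for a real environment:

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def apply_tags_sharepoint(site_id: str, list_id: str, item_id: str,
                          tags: list[str], token: str) -> None:
    """Write stored tags back to a hypothetical 'EKTags' column on a list item."""
    url = f"{GRAPH_BASE}/sites/{site_id}/lists/{list_id}/items/{item_id}/fields"
    response = requests.patch(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={"EKTags": "; ".join(tags)},  # column name is a placeholder
    )
    response.raise_for_status()
```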

Knowledge Assets

When considering the scalability of the auto-tagging effort, it is also important to evaluate the breadth of knowledge asset types being analyzed. While the ability of LLMs to process several types of knowledge assets has been growing, each step of additional complexity, particularly evaluating multiple asset types, can require additional resources and time to read and tag documents. A PDF document with 2-3 pages of content takes far fewer tokens and resources for an LLM to read than a long visual or audio asset. Going from a tagging workflow of structured knowledge assets to tagging unstructured content will increase the overall time, resources, and custom development needed to run a tagging workflow.

Data Security & Entitlements

When utilizing an LLM, it is recommended that an organization invest in a private or an in-house LLM to complete analysis, rather than leveraging a publicly available model. In particular, an LLM does not need to be ‘on-premises’, as several providers have options for LLMs in your company’s own environment. This ensures a higher level of document security and additional features for customization. Particularly when tackling use cases with higher levels of personal information and access controls, a robust mapping of content and an understanding of what needs to be tagged is imperative. As an example, if a publicly facing LLM was reading confidential documents on how to develop a company-specific product, this information could then be leveraged in other public queries and has a higher likelihood of being accessed outside of the organization. In an enterprise data ecosystem, running an LLM-based auto-tagging solution can raise red flags around data access, controls, and compliance. These challenges can be addressed through a Unified Entitlements System (UES) that creates a centralized policy management system for both end users and LLM solutions being deployed.
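In practice, a UES centralizes these rules; as a simplified illustration (the policy labels and document shape are hypothetical), a tagging pipeline can enforce an entitlements check before any document is sent to the model:

```python
# Hypothetical policy: only these sensitivity levels may be sent to the LLM
ALLOWED_SENSITIVITY = {"public", "internal"}  # confidential assets never leave

def entitled_for_tagging(doc: dict) -> bool:
    """Release a document to the tagging service only if policy allows it."""
    return doc.get("sensitivity") in ALLOWED_SENSITIVITY

documents = [
    {"id": "doc-1", "sensitivity": "internal"},
    {"id": "doc-2", "sensitivity": "confidential"},  # held back from the LLM
]
to_tag = [d for d in documents if entitled_for_tagging(d)]
print([d["id"] for d in to_tag])  # ['doc-1']
```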

Next Steps

One major consideration with an LLM tagging solution is maintenance and governance over time. For some organizations, after completing an initial enrichment of content by the LLM, a combination of manual tagging and forms within each CMS helps them maintain tagging standards. However, a more mature organization that is dealing with several content repositories and systems may want to either operationalize the content enrichment solution for continued use or invest in a TOMS. With either approach, completing an initial LLM enrichment of content is a key method to prove the value of semantics and metadata to decision-makers in an organization.

Many technical solutions and initiatives that excite both technical and business stakeholders can be actualized by an LLM content enrichment effort. With content that is tagged and adhering to semantic standards, solutions like knowledge graphs, knowledge portals, and semantic search engines, or even an enterprise-wide LLM solution, are upgraded even further to show organizational value.

If your organization is interested in upgrading your content and developing new KM solutions, contact us!

Knowledge Cast – Michal Bachman, CEO of GraphAware https://enterprise-knowledge.com/knowledge-cast-michal-bachman-ceo-of-graphaware/ Tue, 28 Oct 2025 16:15:27 +0000

Enterprise Knowledge COO Joe Hilger speaks with Michal Bachman, CEO at GraphAware. GraphAware provides technology and expertise for mission-critical graph analytics, and its graph-powered intelligence analysis platform — Hume — is used by democratic government agencies (law enforcement, intelligence, cybersecurity, defense) and Fortune 500 companies across the world.

In their conversation, Joe and Michal discuss how you can use a graph to investigate criminal networks, what’s next with graphs (hint: ensuring trustworthy AI doesn’t just mean supporting the machines), and some helpful books that experts at GraphAware have released recently.

Check out Knowledge Graphs and LLMs in Action and Neo4j: The Definitive Guide to dive deeper into the topics discussed in this episode!

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

EK Again Recognized as Leading Services Provider by KMWorld https://enterprise-knowledge.com/ek-again-recognized-as-leading-services-provider-by-kmworld/ Tue, 21 Oct 2025 17:18:42 +0000

Enterprise Knowledge (EK) has once again been named to KMWorld’s list of the 100 Companies That Matter in Knowledge Management. As the world’s largest dedicated Knowledge Management (KM) consulting firm, EK has been recognized for global leadership in KM consulting services, as well as overall thought leadership in the field, for the eleventh consecutive year.

EK hosts a public knowledge base of over 700 articles on KM, Semantic Layer, and AI thought leadership, produces the top-rated KM podcast, Knowledge Cast, and has published the definitive book on KM benchmarking and technologies, Making Knowledge Management Clickable.

In addition to the Top 100 List, EK was also recently recognized by KMWorld on their list of AI Trailblazers. You can read EK VP Lulit Tesfaye’s thoughts on that recognition here. These new areas of recognition come on the heels of Honda recognizing Enterprise Knowledge as one of their suppliers of the year, and Inc. Magazine listing EK as one of the best places to work in the United States.

Navigating the Retirement Cliff: Challenges and Strategies for Knowledge Capture and Succession Planning https://enterprise-knowledge.com/navigating-the-retirement-cliff-challenges-and-strategies-for-knowledge-capture-and-succession-planning/ Tue, 14 Oct 2025 13:59:50 +0000

As organizations prepare for workforce retirements, knowledge management should be a key element of any effective succession planning strategy, ensuring a culture of ongoing learning and stability. This piece explores the challenges organizations face in capturing and transferring critical knowledge, alongside practical knowledge management strategies to address them and build more sustainable knowledge-sharing practices.

The Retirement Cliff and Its Implications

The “retirement cliff” refers to the impending wave of retirements as a significant portion of the workforce—particularly Baby Boomers—reaches retirement age. According to labor market trends, millions of experienced professionals are set to retire in the coming years, posing a critical challenge for organizations. The departure of seasoned employees risks the loss of institutional knowledge, technical expertise, and key relationships, leading to operational disruptions and costly efforts to regain lost expertise.

One of the most immediate financial consequences Enterprise Knowledge has seen on several of our engagements is the growing reliance on retirees returning as contractors to fill knowledge and capability gaps, often at significantly higher costs than their original salaries. While this can provide a short-term fix, it also creates a long-term liability. Research from Harvard Business Review and other labor market analyses shows that rehiring former employees without structured knowledge transfer can perpetuate a cycle of dependency, inflate workforce costs, and suppress the development of internal talent. Organizations may pay premium contract rates while still losing institutional knowledge over time, especially if critical expertise remains undocumented or siloed. Without proactive strategies, such as structured succession planning, mentoring, and systematic knowledge capture, organizations risk operational disruption, weakened continuity, and increased turnover-related costs that can amount to billions of dollars annually.

The Role of Knowledge Management in Succession Planning

Knowledge management plays a vital role in succession planning by implementing systems and practices that ensure critical expertise is systematically captured and transferred across generations of employees. Documenting key insights, best practices, and institutional knowledge is essential for mitigating the risk of knowledge loss. This process helps to strengthen organizational continuity and ensures that employees have the knowledge they need to perform their roles effectively and make informed decisions.

The Retirement Cliff: Challenges & Solutions

  • Employee Resistance: Staff hesitate to share knowledge if it feels risky, time-consuming, or undervalued. Solution: Build trust, emphasize benefits, and use incentives or recognition programs to encourage sharing.
  • Cultural Barriers & Siloes: Rigid hierarchies and disconnected teams block collaboration and cross-functional flow. Solution: Foster collaboration through Communities of Practice, cross-team projects, and leadership modeling knowledge sharing.
  • Resource Constraints: KM is often underfunded or deprioritized compared to immediate operational needs. Solution: Start small with scalable pilots that demonstrate ROI and secure executive sponsorship to sustain investment.
  • Time Pressures: Rushed retirements capture checklists but miss critical tacit knowledge and insights. Solution: Integrate ongoing knowledge capture into workflows before retirements, not just at exit interviews.

While the list above highlights immediate challenges and corresponding solutions, organizations benefit from a deeper set of strategies that address both near-term risks and long-term sustainability. The following sections expand on these themes, outlining actionable approaches that help organizations capture critical knowledge today while laying the foundation for resilient succession planning tomorrow.

Near-term Strategies: Mitigating Immediate Risk

Engage Employees in Knowledge Capture Efforts

Long-tenured employees approaching retirement have accumulated invaluable institutional knowledge, and their sustained tenure itself demonstrates their consistent value to the organization. When a retirement cliff is looming, organizations should take action to engage those employees in efforts that help to capture and transfer key institutional knowledge before it is lost.

Cast a Wide, Inclusive Net

Organizations often lack visibility into actual retirement timelines. Rather than making assumptions about who might retire or inadvertently pressuring employees to reveal their plans, frame knowledge transfer efforts as part of comprehensive KM practices. By positioning these initiatives as valuable for all long-tenured employees—not just potential retirees—organizations create an inclusive environment that captures critical knowledge. This broader approach not only prepares for potential retirement-related knowledge gaps but also establishes ongoing knowledge transfer as a standard organizational practice.

Acknowledge and Thank Employees

Explicitly acknowledge the expertise and contributions of key knowledge holders participating in efforts. By recognizing their professional legacy and expressing the organization’s desire to preserve and share their wisdom with others, leaders can create a foundation for meaningful participation in knowledge transfer activities. This approach validates key members’ career impact while positioning them as mentors and knowledge stewards for the next generation. Consider setting aside some time from their normal responsibilities to encourage participation.

Reward Knowledge Sharing

Employees are far more likely to engage in knowledge transfer when it is seen as both valuable and valued. In EK’s experience, organizations that successfully foster a culture of knowledge sharing often embed these behaviors into their core talent practices, such as performance evaluations and internal recognition programs. For example, EK has helped to incorporate KM contributions into annual review processes or introduce peer-nominated awards like “Knowledge Champion” to highlight and celebrate individuals who model strong knowledge-sharing behaviors.

Enable Employees to Capture Knowledge

Effective knowledge transfer begins with capturing critical institutional knowledge. This includes both explicit knowledge, such as processes and workflows, and tacit knowledge, such as decision-making frameworks, strategic insights, and the rationale behind past choices. To guide organizations in successful knowledge capture and transfer practices, EK recommends implementing a variety of strategies that help build confidence and make the process manageable.

Provide Documentation Training and Support

Organizations should consider offering dedicated support through roles and teams that naturally align with KM efforts, such as technical documentation, organizational learning and development, or quality assurance. These groups can help introduce employees to the practice and facilitate more effective capture. For example, many organizations focus solely on documenting step-by-step processes, overlooking the tacit knowledge that explains the “why” behind key decisions to provide future employees with critical context. In EK’s experience, preserving and transmitting knowledge of past actions and opinions has given teams the confidence to make more informed decisions and ensure coherence in guidance. This approach is especially valuable from a legal perspective, where understanding the rationale behind decisions is crucial for consistency and compliance.

Help Prioritize the Knowledge to Capture

Organizations can help focus knowledge capture efforts, without overwhelming employees, by prioritizing the types of knowledge to capture. If knowledge falls into one of these categories, it is ideal to prioritize:

    1. Mission-Critical Knowledge – High-impact expertise that is not widely known (e.g., decision-making rationales, specialized processes) is at greatest risk for loss. Encourage employees to prioritize this knowledge first.

    2. Operational Knowledge – Day-to-day processes that can be captured progressively over time. Suggest to employees that they take advantage of workflows and cycles as they are completed to document knowledge in real time from beginning to end.

    3. Contextual Knowledge – Broader insights from specific projects and initiatives are best captured in collective discussions or team reflections from various participants. Aim to make arrangements to put team members in conversation with one another and capture insights.

Embed Knowledge Capture into Workflows

Rather than treating documentation as a separate task, organizations should embed it into the existing processes and workflows where the knowledge is already being used. Integrating documentation creation and review into regular processes helps normalize knowledge capture as a routine part of work. In practice, this may look like employees updating Standard Operating Procedures (SOPs) during routine tasks, recording leadership reflections during key decisions, or incorporating “lessons learned” or retrospective activities into project cycles. Additionally, structured after-action reviews and reflective learning exercises can further strengthen this practice by documenting key takeaways from major projects and initiatives. Beyond improving project and knowledge transfer outcomes, these habits also build durable knowledge assets that support AI-readiness.

Design Succession-Focused Knowledge Sharing Programs

Cultural silos and resistance to sharing knowledge often undermine succession planning. Employees may hesitate to share what they know due to fears about losing job security, feeling undervalued, or simply lacking the time to do so. To overcome these challenges, organizations must implement intentional knowledge transfer programs, as outlined below, that aim to prevent a forthcoming retirement cliff from leaving large gaps.

Create Knowledge Transfer Interview Programs

Pairing long-tenured staff with successors ensures that critical institutional knowledge is passed on before key departures. Create thoughtful interview programming that takes the burden off the experienced staff from initiating or handling administrative efforts. EK recently partnered with a global automotive manufacturing company to design and facilitate structured knowledge capture and transfer plans for high-risk roles that were eligible for retirement, including walkthroughs of core responsibilities, stakeholder maps, decision-making criteria, and context around ongoing initiatives. These sessions were tracked and archived, enabling smoother transitions and reducing institutional memory loss. EK also supported a federal agency in implementing a leadership knowledge transfer interview series with retiring senior leaders to capture institutional knowledge and critical insights from their tenure. These conversations focused on navigating the agency’s operations, lessons for successors, and role-specific takeaways. EK distilled these into concise, topical summaries that were tagged for findability and reuse, laying the foundation for a repeatable, agency-wide approach to preserving institutional knowledge.

Foster Communities of Practice

Encourage cross-functional collaboration and socialize knowledge sharing across the organization by establishing communities of practice. These programs provide opportunities for employees to gather regularly around a common professional interest and learn from each other by sharing ideas, experiences, and best practices. Involve long-tenured staff in these efforts and encourage them to develop topics around their expertise. EK has seen firsthand how these practices promote ongoing knowledge exchange, helping employees stay connected and informed across departments, even during leadership transitions.

Offer Formal Knowledge Exchange Programs

Knowledge Exchange Programs, like job shadowing, expert-led cohorts, and mentorship initiatives, create clear pathways for employees to share and document expertise before transitions occur. Long-tenured employees are often excellent candidates to take the leading role in these efforts because of the vast knowledge they hold.

Ultimately, effective succession planning is not just about capturing what people know—it is about creating a culture where knowledge transfer is expected, supported, and celebrated. By addressing resistance and embedding knowledge-sharing into the rhythm of daily work, organizations can reduce risk, improve continuity, and build long-term resilience.

Long-term Strategies: Building Sustainable Knowledge Flow

While short-term efforts can help reduce immediate risk, organizations also need long-term strategies that embed knowledge management into daily operations and ensure continuity across future workforce transitions. That is why EK believes Artificial Intelligence (AI) and Knowledge Intelligence (KI) are essential tools in capturing, contextualizing, and preserving knowledge in a way that supports sustainable transitions and continuity.

Below are long-term, technology-enabled strategies that organizations can adopt to complement near-term efforts and future-proof institutional knowledge.

Structure and Contextualize Knowledge with a Semantic Foundation

EK sees contextual understanding as central to KM and succession planning, as adding business context to knowledge helps to illuminate and interpret meaning for users. By breaking down content into dynamic, structured components and enriching it with semantic metadata, organizations can preserve not only the knowledge itself, but also the meaning, rationale, and relationships behind it. EK has supported clients in building semantic layers and structured knowledge models that tag and categorize lessons learned, decisions made, and guidance provided, enabling content to be reused, assembled, and delivered at the point of need. This approach helps ensure continuity through leadership transitions, reduces duplication of effort, and allows institutional knowledge to evolve without losing its foundational context.

Leverage Knowledge Graphs and Intelligent Portals

Traditional knowledge repositories, while well-intentioned, often become static libraries that users struggle to navigate. EK has helped organizations move from these repositories to dynamic knowledge ecosystems by implementing knowledge graphs and a semantic layer. These approaches connect once disparate data, creating relationships between concepts, decisions, and people.

To leverage the power of the knowledge graph and semantic layer, EK has designed and deployed knowledge portals for several clients, providing a means for users to engage with the semantic layer. These portals consolidate information from multiple existing systems into a streamlined, user-friendly landing page. Each portal is designed to serve as a central hub for enterprise knowledge, connecting users to the right information, experts, and insights they need to do their jobs, while also supporting smoother transitions when staff move on or new team members step in. With intuitive navigation and contextualized search, the portal helps staff quickly find complete, relevant answers across multiple systems, explore related content, and access expertise—all within a single experience.

Augment Search and Discovery with Artificial Intelligence

To reduce the friction of finding and applying knowledge, EK has helped clients enhance knowledge portals with AI capabilities, integrating features like context-aware search, intelligent recommendations, and predictive content delivery.  These features anticipate user intent, guide employees to relevant insights, and surface related content that might otherwise be missed. When paired with a strong semantic foundation, these enhancements transform a portal from a basic search tool into an intelligent instrument that supports real-time learning, decision-making, and collaboration across the enterprise.

Automate and Scale Tagging with AI-Assisted Curation

Manual tagging is often cited as one of the more time-consuming and inconsistent aspects of content management. To improve both the speed and quality of metadata, EK has helped clients implement AI-assisted tagging solutions that automatically classify content based on a shared taxonomy.

We recommend a human-in-the-loop model, where AI performs the initial tagging, and subject matter experts validate results to preserve nuance and apply expertise. This approach allows organizations to scale content organization efforts while maintaining accuracy and alignment.
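A human-in-the-loop pipeline can be as simple as a confidence-based routing rule. In this minimal sketch (the threshold, document IDs, and tag structure are hypothetical), high-confidence AI tags are applied automatically while the rest are queued for SME validation:

```python
REVIEW_THRESHOLD = 0.85  # illustrative: tags scoring below this go to SME review

def route_tags(doc_id: str, suggestions: list[dict]) -> dict:
    """Split AI-suggested tags into auto-apply and SME-review queues."""
    auto_apply, needs_review = [], []
    for s in suggestions:
        (auto_apply if s["score"] >= REVIEW_THRESHOLD else needs_review).append(s["term"])
    return {"doc": doc_id, "apply": auto_apply, "needs_review": needs_review}

print(route_tags("doc-001", [
    {"term": "succession planning", "score": 0.93},
    {"term": "onboarding", "score": 0.61},
]))
# {'doc': 'doc-001', 'apply': ['succession planning'], 'needs_review': ['onboarding']}
```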

For example, we partnered with a leading development bank to build an AI-powered knowledge platform that processed data from eight enterprise systems. Using a multilingual taxonomy of over 4,000 terms, the platform automatically tagged content and proactively delivered contextual content recommendations across the enterprise. The solution dramatically improved enterprise search, reduced time spent locating information, and earned recognition from leadership as one of the organization’s most impactful knowledge initiatives.

Integrate Technology, People, and Process Within Succession Planning

The most successful organizations do not treat knowledge technologies as standalone tools. Instead, they integrate them into broader KM and succession planning strategies, ensuring these solutions support, rather than replace, human collaboration and expertise.

In EK’s experience, when AI, knowledge graphs, and semantic metadata are used to enhance existing processes—like onboarding, leadership transitions, or project handovers—they become powerful enablers of continuity. These tools help protect institutional knowledge, reduce bottlenecks, and enable repeatable practices for knowledge transfer across roles, teams, and time.

As part of a long-term KM strategy, this allows organizations to evolve from reactive knowledge capture to proactive, ongoing knowledge flow.

Measuring Knowledge Transfer Impact

With these tools and practices for knowledge capture and transfer in place, measuring the effectiveness of knowledge transfer initiatives is the essential next step to ensure that succession planning goals are being met and that knowledge transfer efforts are producing meaningful outcomes. Key performance indicators (KPIs) and metrics can help track the success of these initiatives and provide insights into their impact on the organization’s leadership pipeline.

Employee Engagement: One key indicator is active employee participation in knowledge transfer programs. This includes involvement in mentoring, workshops, job shadowing, and other formal knowledge-sharing activities. Tracking participation levels helps assess cultural adoption and provides insight into where additional encouragement or resources may be needed. Measurement examples:
  • Workshop attendance records
  • Peer learning program participation rates
  • Surveys assessing perceived value and engagement

Knowledge Retention: Capturing knowledge is only part of the equation. Ensuring it is understood and applied is equally important. By assessing how well successors are able to retain and use critical knowledge, organizations can confirm whether the transfer process is actually supporting operational continuity and decision quality. Measurement examples:
  • Post-transition employee self-evaluations
  • Peer or supervisor assessments
  • Case reviews of decisions informed by legacy knowledge

Transitioner Feedback: Understanding the perspective of new leaders or incoming staff can reveal valuable insights into what worked and what did not during a handoff. Their feedback can help organizations fine-tune interview guides, documentation practices, or onboarding resources for future transitions. Measurement examples:
  • Qualitative feedback via structured interviews
  • New hire or successor surveys
  • Retrospectives after major transitions

Future Leader Readiness: Evaluating how prepared upcoming leaders are to step into key roles, both in terms of process knowledge and organizational culture, can serve as a long-term measure of success. Measurement examples:
  • Succession readiness assessments
  • Familiarity with key systems, priorities, and workflows
  • Participation in ongoing KM or leadership development programs

Closing

Navigating the retirement cliff requires both immediate action and long-term planning. By addressing resistance, dismantling silos, embedding knowledge-sharing into daily work, and leveraging technology, organizations can reduce risk, preserve critical expertise, and build long-term resilience. Need help developing a strategy that supports both near-term needs and long-term success? Let’s connect to explore tailored solutions for your organization.

The post Navigating the Retirement Cliff: Challenges and Strategies for Knowledge Capture and Succession Planning appeared first on Enterprise Knowledge.

]]>
Defining Governance and Operating Models for AI Readiness of Knowledge Assets https://enterprise-knowledge.com/defining-governance-and-operating-models-for-ai-readiness-of-knowledge-assets/ Wed, 08 Oct 2025 18:57:59 +0000 https://enterprise-knowledge.com/?p=25729 Artificial intelligence (AI) solutions continue to capture both the attention and the budgets of many organizations. As we have previously explained, a critical factor to the success of your organization’s AI initiatives is the readiness of your content, data, and … Continue reading

The post Defining Governance and Operating Models for AI Readiness of Knowledge Assets appeared first on Enterprise Knowledge.

]]>
Artificial intelligence (AI) solutions continue to capture both the attention and the budgets of many organizations. As we have previously explained, a critical factor to the success of your organization’s AI initiatives is the readiness of your content, data, and other knowledge assets. When correctly executed, this preparation will ensure your knowledge assets are of the appropriate quality and semantic structure for AI solutions to leverage with context and inference, while identifying and exposing only the appropriate assets to the right people through entitlements.

This, of course, is an ongoing challenge rather than a moment-in-time initiative. To ensure the important work you’ve done to get your content, data, and other assets AI-ready is not lost, you need governance as well as an operating model to guide it. Indeed, governance and the operating model that supports it should be top of mind well before any AI readiness initiative begins.

Governance is not a new term within the field. Historically, we’ve identified four core components to governance in the context of content or data:

  • Business Case and Measurable Success Criteria: Defining the value of the solution and the governance model itself, as well as what success looks like for both.
  • Roles and Responsibilities: Defining the individuals and groups necessary for governance, as well as the specific authorities and expectations of their roles.
  • Policies and Procedures: Detailing the timelines, steps, definitions, and actions the associated roles are expected to carry out.
  • Communications and Training: Laying out the approach to two-way communications between the associated governance roles/groups and the various stakeholders.

These traditional components of governance have all held up, tried and true, over the quarter-century since we first defined them. In the context of AI, however, it is important to go deeper and consider the unique aspects that artificial intelligence brings into the conversation. Virtually every expert in the field agrees that AI governance should be a priority for any organization, but that priority must be detailed further in order to be meaningful.

In the context of AI readiness for knowledge assets, we focus AI governance, and more broadly its supporting operating model, on five key elements for success:

  • Coordination and Enablement Over Execution
  • Connection Instead of Migration
  • Filling Gaps to Address the Unanswerable Questions
  • Acting on “Hallucinations”
  • Embedding Automation (Where It Makes Sense)

There is, of course, more to AI governance than these five elements, but in the context of AI readiness for knowledge assets, our experience shows that these are the areas where organizations should be focusing and shifting away from traditional models. 

1) Coordination and Enablement Over Execution

In traditional governance models (e.g., content governance, data governance), most of the work was done in the context of a single system. Content would be in a content management system and have a content governance model. Data would be in a data management solution and have a data governance model. The shift is that today’s AI governance solutions shouldn’t care what types of assets you have or where they are housed. This presents an amazing opportunity to remove artificial silos within an organization, but it brings a marked challenge.

If you were previously defining a content governance model, you most likely possessed some level of control or ownership over your content and document management systems. Likewise, if you were in charge of data governance, you likely “own” some or all of the major data solutions, like master data management or a data warehouse, within your organization. With AI, however, an enormous benefit of a correctly architected enterprise AI solution that leverages a semantic layer is that it does not require you to own the source systems. The systems housing the content, data, and other knowledge assets are likely, at least in part, managed by other parts of your organization. In other words, in an AI world, you have less control over the sources of the knowledge assets, and thereby over the knowledge assets themselves. This may well change as organizations evolve in the “Age of AI,” but for now, the role and responsibility for AI governance becomes more about coordination and less about execution or enforcement.

In practice, this means an AI Governance for Knowledge Asset Readiness group must coordinate with the owners of the various source systems for knowledge assets, providing additive guidance to define what it means to have AI-ready assets as well as training and communications to enable and engage system and asset owners to understand what they must do to have their content, data, and other assets included within the AI models. The word “must” in the previous sentence is purposeful. You alone may not possess the authority of an information system owner to define standards for their assets, but you should have the authority to choose not to include those assets within the enterprise AI solution set.

How do you apply that authority? As the lines continue to blur between the purview of KM, Data, and AI teams, this AI Governance for Knowledge Asset Readiness group should comprise representatives from each of these once-siloed teams to co-own outcomes as new AI use cases, features, and capabilities are developed. The AI governance group should be responsible for delineating key interaction points and expected outcomes across teams and business functions to build alignment, facilitate planning and coordination, and establish expectations for business and technical stakeholders alike as AI solutions evolve. Further, this group should define what it means (and what is required) for an asset to be AI-ready. We cover this in detail in previous articles, but in short, it boils down to semantic structure, quality, and entitlements as the three core pillars of AI readiness for knowledge assets.
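
To illustrate, and only illustrate, how those three pillars might translate into a concrete admission check, consider the minimal Python sketch below. The asset fields and quality threshold are assumptions for the example, not a definitive readiness standard.

```python
from dataclasses import dataclass, field

# Illustrative profile of a knowledge asset; fields and threshold are assumptions.
@dataclass
class KnowledgeAsset:
    asset_id: str
    tags: list[str] = field(default_factory=list)        # semantic structure
    quality_score: float = 0.0                           # e.g., output of a review workflow
    entitlements: set[str] = field(default_factory=set)  # groups permitted to see the asset

def is_ai_ready(asset: KnowledgeAsset, min_quality: float = 0.8) -> bool:
    """Admit an asset to the enterprise AI corpus only if it carries semantic
    structure, meets the quality bar, and has explicit entitlements."""
    return bool(asset.tags) and asset.quality_score >= min_quality and bool(asset.entitlements)

asset = KnowledgeAsset("doc-42", tags=["Operational Risk"], quality_score=0.9,
                       entitlements={"risk-analysts"})
print(is_ai_ready(asset))  # True
```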

2) Connection Instead of Migration

The idea of connections over migration aligns with the previous point. Past monolithic efforts in your organization would commonly have included massive migrations and consolidations of systems and solutions. The roadmaps of past MDMs, data warehouses, and enterprise content management initiatives are littered with failed migrations. Again, part of the value of an enterprise AI initiative that leverages a semantic layer, or at least a knowledge graph, is that you don’t need to absorb the cost, complexity, and probable failure of a massive migration. 

Instead, the role of the AI Governance for Knowledge Asset Readiness group is one of connections. Once the group has set the expectations for AI-ready knowledge assets, the next step is to ensure the systems that house those assets are connected and available, ready to be ingested and understood by the enterprise AI solutions. This can be a highly iterative process, not to be rushed, as the integrity of the assets ingested by AI is more important than their depth. Said differently, you have few chances to deliver wrong answers: end users will quickly lose trust in a solution that returns information they know to be incorrect, but if they receive an incomplete answer instead, they will be more likely to raise it and continue to engage. The role of this AI governance group is to ensure the right systems and their assets are reliably available to the AI solution(s) at the right time, after your knowledge assets have passed the appropriate readiness requirements.

3) Filling Gaps to Address the Unanswerable Questions

As AI solutions are deployed, AI governance shifts from proactive to reactive, and there is a great opportunity associated with this that bears particular focus. In the history of knowledge management, and more broadly the fields of content management, data management, and information management, there has always been a creeping concern that an organization “doesn’t know what it doesn’t know.” What are the gaps in knowledge? What are the organizational blind spots? These questions have been nearly impossible to answer at the enterprise level. With enterprise-level AI solutions implemented, however, this awareness suddenly becomes possible.

Even before deploying AI solutions, a well-designed semantic layer can help pinpoint organizational knowledge gaps by surfacing taxonomy elements that few or no knowledge assets are tagged with. This potential is magnified once the AI solution is fully deployed: today’s mature AI solutions are “smart” enough to know when they can’t answer a question and to surface that unanswerable question to the AI governance group. Imagine possessing the organizational intelligence to know what your colleagues are seeking to understand, and insight into what they are trying to learn or answer but currently cannot.
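
As a minimal sketch of that pre-deployment gap-finding, assuming a SKOS taxonomy and asset tags stored in an RDF graph, a query for concepts that no asset is tagged with might look like the following; the ex:isTaggedWith property and the file name are hypothetical placeholders.

```python
from rdflib import Graph

g = Graph()
g.parse("taxonomy_and_assets.ttl")  # hypothetical export of the taxonomy plus asset tags

# Taxonomy concepts that no knowledge asset is tagged with are candidate blind spots.
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX ex:   <http://example.org/>

SELECT ?concept ?label WHERE {
    ?concept a skos:Concept ;
             skos:prefLabel ?label .
    FILTER NOT EXISTS { ?asset ex:isTaggedWith ?concept }
}
"""
for row in g.query(query):
    print(f"No assets tagged with: {row.label}")
```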

In this way, once an AI solution is deployed, the primary role of the AI governance group should be to diagnose and then respond to these automatically identified knowledge gaps, using their standards to fill them. It may be that the information does, in fact, exist within the enterprise, but the AI solution wasn’t connected to those knowledge assets. Alternatively, it may be that the right semantic structure wasn’t applied to the assets, resulting in a missed connection and a false gap reported by the AI. However, it may also be that the answer to the “unanswerable” question exists only as tacit knowledge in the heads of the organization’s experts, or doesn’t exist at all. Closing these gaps is the truest, most core value of the field of knowledge management, and it has never been more achievable.

4) Acting on “Hallucinations”

Aligned with the idea of filling gaps, a similar role for the AI governance group is to address hallucinations: failures of the AI to deliver an accurate, consistent, and complete “answer.” For organizations attempting to implement enterprise AI, a hallucination is little more than a cute word for an error, and should be treated as such by the AI governance group. These errors have many causes, including poor-quality (i.e., wrong, outdated, near-duplicate, or conflicting) knowledge assets, insufficient semantic structure (e.g., taxonomy, ontology, or a business glossary), and flawed logic built into the model itself. Any of these issues should be met with immediate action. Your organization’s end users will quickly lose trust in an AI solution that delivers inaccurate results. Your governance model and associated organizational structure must be equipped to act quickly: first, to leverage communications and feedback channels so end users tell you when they believe something is inaccurate or incomplete, and then to diagnose and address it.
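
As an illustrative sketch only, a triage record whose diagnosis categories mirror the error causes above might look like this in Python; none of it reflects a specific product or tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Cause(Enum):
    ASSET_QUALITY = "wrong, outdated, near-duplicate, or conflicting assets"
    SEMANTIC_STRUCTURE = "insufficient taxonomy, ontology, or glossary coverage"
    MODEL_LOGIC = "flawed logic in the model itself"
    UNDIAGNOSED = "not yet triaged"

@dataclass
class HallucinationReport:
    question: str          # what the user asked
    answer_given: str      # what the AI returned
    reporter: str          # who flagged it via the feedback channel
    cause: Cause = Cause.UNDIAGNOSED

# Feedback channels populate a queue that the governance group works through,
# diagnosing each report and routing the fix to the right system or asset owner.
queue = [HallucinationReport("What is our retention policy?",
                             "An outdated summary of the policy", "analyst-42")]
for report in queue:
    print(report.question, "->", report.cause.value)
```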

As a note, in the most mature organizations this action won’t be entirely reactive: organizational subject matter experts will be involved in perpetuity, especially right before and after enterprise AI deployment, to hunt for errors in these systems. You can think of this governance function as the “Hallucination Killers” within your organization, likely to be one of the most critical roles as AI continues to expand.

5) Embedding Automation (Where It Makes Sense)

Finally, one of the most important roles of an AI governance group will be to use AI to make AI better. Almost everything we’ve described above can be automated. AI can and should be used to automate the identification of knowledge gaps, as well as to solve them by pinpointing organizational subject matter experts and prompting them to deliver their learning and experience at the right moments. It can also play a major role in applying the appropriate semantic structure to knowledge, through tagging of taxonomy terms as metadata or identification of candidate terms for a business glossary. Central to all of this automation, however, is keeping the human in the loop: the AI governance group plays an advisory and oversight role throughout these automations to ensure the design doesn’t fall out of alignment. This element further facilitates AI governance coordination across the organization by supporting stakeholders and knowledge asset stewards through technical enablement.
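
As a minimal human-in-the-loop sketch of the tagging case, where the naive suggest_tags function is a stand-in for whatever NLP- or LLM-based tagger an organization actually uses:

```python
def suggest_tags(text: str, taxonomy: list[str]) -> list[str]:
    # Stand-in tagger: naive substring matching; a real pipeline would use an
    # NLP or LLM-based tagger here instead.
    return [term for term in taxonomy if term.lower() in text.lower()]

def review_queue(assets: dict[str, str], taxonomy: list[str]):
    """Yield (asset, suggested tags) pairs for a steward to approve or reject,
    keeping a human in the loop before anything is written back as metadata."""
    for asset_id, text in assets.items():
        if suggestions := suggest_tags(text, taxonomy):
            yield asset_id, suggestions

taxonomy = ["Operational Risk", "Data Governance"]
assets = {"doc-1": "Quarterly operational risk review for the trading desk."}
for asset_id, suggestions in review_queue(assets, taxonomy):
    print(asset_id, "->", suggestions)  # doc-1 -> ['Operational Risk']
```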

All of this presents a world of possibility. Governance was historically one of the drier and more esoteric concepts within the field, often where good projects went bad. We have the opportunity to do governance better by leveraging AI in the areas where humans historically fell short, while maintaining the important role of human experts with the right authority to ensure organizational alignment and value.

If your AI efforts aren’t yet yielding the results you expected, or you’re seeking to get things started right from the beginning, contact EK to help you.

The post Defining Governance and Operating Models for AI Readiness of Knowledge Assets appeared first on Enterprise Knowledge.

]]>
Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer https://enterprise-knowledge.com/semantic-layer-strategy-the-core-components-you-need-for-successfully-implementing-a-semantic-layer/ Mon, 06 Oct 2025 16:03:47 +0000 https://enterprise-knowledge.com/?p=25718 Today’s organizations are flooded with opportunities to apply AI and advanced data experiences, but many struggle with where to focus first. Leaders are asking questions like: “Which AI use cases will bring the most value? How can we connect siloed … Continue reading

The post Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer appeared first on Enterprise Knowledge.

]]>
Today’s organizations are flooded with opportunities to apply AI and advanced data experiences, but many struggle with where to focus first. Leaders are asking questions like: “Which AI use cases will bring the most value? How can we connect siloed data to support them?” Without a clear strategy, eager startups and vendors make it easy to spin your wheels on experiments that never scale. As more organizations recognize the value of meaningful, connected data experiences via a Semantic Layer, many find themselves unsure of how to begin their journey, or how to sustain meaningful progress once they do.

A well-defined Semantic Layer strategy is essential to avoid costly missteps in planning or execution, secure stakeholder alignment and buy-in, and ensure long-term scalability of models and tooling.

This blog outlines the key components of a successful Semantic Layer strategy, explaining how each component supports a scalable implementation and contributes to unlocking greater value from your data.

What is a Semantic Layer?

The Semantic Layer is a framework that adds rich structure and meaning to data by applying categorization models (such as taxonomies and ontologies) and using semantic technologies like graph databases and data catalogs. Your Semantic Layer should be a connective tissue that leverages a shared language to unify information across systems, tools, and domains. 

Data-rich organizations often manage information across a growing number of siloed repositories, platforms, and tools. The lack of a shared structure for how data is described and connected across these systems ultimately slows innovation and undermines initiatives. Importantly, your semantic layer enables humans and machines to interpret data in context and lays the foundation for enterprise-wide AI capabilities.    

What is a Semantic Layer Strategy?

A Semantic Layer Strategy is a tailored vision outlining the value of using knowledge assets to enable new tools and create insights through semantic approaches. This approach ensures your organization’s semantic efforts are focused, feasible, and value-driven by aligning business priorities with technical implementation. 

Regardless of your organization’s size, maturity, or goals, a strong Semantic Layer Strategy enables you to achieve the following:

1. Articulate a clear vision and value proposition.

Without a clear vision, semantic layer initiatives risk becoming scattered and mismanaged, with teams pulling in different directions and the value to the organization left unclear. The Semantic Layer vision serves as the “North Star,” or guiding principle, for planning, design, and execution. Organizations can realize a variety of use cases via a Semantic Layer (including advanced search, recommendation engines, personalized knowledge delivery, and more), and a Semantic Layer Strategy helps define and align on what a Semantic Layer can solve for your organization.

The vision statement clearly answers three core questions:

  • What is the business problem you are trying to solve?
  • What outcomes and capabilities are you enabling?
  • How will you measure success?

These three items create a strategic narrative that business and technical stakeholders alike can understand, enabling discussions that build executive buy-in and prioritize initiative efforts.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the development of a data strategy for operational risk for a bank seeking to create a unified view of highly regulated data dispersed across siloed repositories. By framing a clear vision statement for the Bank’s semantic layer, EK guided the firm to establish a multi-year program to expand the scope of data and continually enable new data insights and capabilities that were previously impossible. For example, users of a risk application could access information from multiple repositories in a single knowledge panel within the tool rather than hunting for it in siloed applications. The Bank’s Semantic Layer vision is contained in a single, easy-to-understand one-pager that has been used repeatedly as a rallying point to communicate value across the enterprise, win executive sponsorship, and onboard additional business groups into the semantic layer initiative.

2. Assess your current organizational semantic maturity.

A semantic maturity assessment looks at the semantic structures, programs, processes, knowledge assets and overall awareness that already exist at your organization. Understanding where your organization lies on the semantic maturity spectrum is essential for setting realistic goals and sequencing a path to greater maturity. 

  • Less mature organizations may lack formal taxonomies or ontologies, or may have taxonomies and ontologies that are outdated, inconsistently applied, or not integrated across systems. They have limited (or no) semantic tooling and few internal semantic champions. Their knowledge assets are isolated, inconsistently tagged (or untagged) documents that require human interpretation to understand and are difficult for systems to find or connect.
  • More mature organizations typically have well-maintained taxonomies and/or ontologies, have established governance processes, and actively use semantic tooling such as knowledge graphs or business glossaries. More than likely, there are individuals or groups who advocate for the adoption of these tools and processes within the organization. Their knowledge assets are well-structured, consistently tagged, and interconnected pieces of content that both humans and machines can easily discover, interpret, and reuse.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK conducted a comprehensive semantic maturity assessment of the current state of the Bank’s semantics program to uncover strengths, gaps, and opportunities. This assessment included:

  • Knowledge Asset Assessment: Evaluated the connectedness, completeness, and consistency of existing risk knowledge assets, identifying opportunities to enrich and restructure them to support redesigned application workflows.
  • Ontology Evaluation: Reviewed existing ontologies describing risk at the firm to assess accuracy, currency, semantic standards compliance, and maintenance practices.
  • Category Model Evaluation: Created a taxonomy tracker to evaluate candidate categories for a unified category management program, focusing on quality, ownership, and ongoing governance.
  • Architecture Gap Analysis and Tooling Recommendation: Reviewed existing applications, APIs, and integrations to determine whether components should be reused, replaced, or rebuilt.
  • People & Roles Assessment: Designed a target operating model to identify team structures, collaboration patterns, and missing roles or skills that are critical for semantic growth.

Together, these evaluations provided a clear benchmark of maturity and guided a right-sized strategy for the bank. 

3. Create a shared conceptual knowledge asset model. 

When it comes to strategy, executive stakeholders don’t want to see exhaustive technical documentation; they want to see impact. A high-level visual model of what your Semantic Layer will achieve brings a Semantic Layer Strategy to life by showing how connected knowledge assets can enable better decisions and new insights.

Your data model should show, in broad strokes, what kinds of data will be connected at the conceptual level. For example, your data model could show that people, business units, and sales reports can be connected to answer questions like, “How many people in the United States created documents about X Law?” or “What laws apply to me when writing a contract in Wisconsin?” 

In sum, it should focus on how people and systems will benefit from the relationships between data, enabling clearer communication and shared understanding of your Semantic Layer’s use cases. 
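
To show how such a conceptual model becomes queryable once implemented in a graph, here is a sketch of the first example question expressed as SPARQL and run via rdflib; every namespace, property, and file name below is a hypothetical placeholder, not a real model.

```python
from rdflib import Graph

g = Graph()
g.parse("knowledge_assets.ttl")  # hypothetical graph built from the conceptual model

# "How many people in the United States created documents about X Law?"
query = """
PREFIX ex: <http://example.org/>

SELECT (COUNT(DISTINCT ?person) AS ?authors) WHERE {
    ?person a ex:Person ;
            ex:basedIn ex:UnitedStates ;
            ex:created ?doc .
    ?doc    a ex:Document ;
            ex:about   ex:XLaw .
}
"""
for row in g.query(query):
    print(f"US-based authors of documents about X Law: {row.authors}")
```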

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK collaborated with data owners to map out core concepts and their relationships in a single, digestible diagram. The conceptual knowledge asset model served as a shared reference point for both business and technical stakeholders, grounding executive conversations about Semantic Layer priorities and guiding onboarding decisions for data and systems. 

By simplifying complex data relationships into a clear visual, EK enabled alignment across technical and non-technical audiences and built momentum for the Semantic Layer initiative.

4. Develop a practical and iterative roadmap for implementation and scale.

With your vision, assessment, and foundational conceptual model in place, the next step is translating your strategy into execution. Your Semantic Layer roadmap should be outcome-driven, iterative, and actionable. A well-constructed roadmap provides not only a starting point for your Semantic Layer initiative, but also a mechanism for continuous alignment as business priorities evolve. 

Importantly, your roadmap should not be a rigid set of instructions; rather, it should act as a living guide. As your semantic maturity increases and business needs shift, the roadmap should adapt to reflect new opportunities while keeping long-term goals in focus. While the roadmap may be more detailed and technically advanced for highly mature organizations, less mature organizations may focus their roadmap on broader strokes such as tool procurement and initial category modeling. In both cases, the roadmap should be tailored to the organization’s unique needs and maturity, ensuring it is practical, actionable, and aligned to real priorities.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the creation of a roadmap focused on expanding the firm’s existing semantic layer. Through planning sessions, EK identified the categories, ontologies, tooling, and architecture uplifts needed to chart a path forward on the firm’s Semantic Layer journey. Once a strong foundation was built, additional planning sessions centered on adding new categories, onboarding additional data concepts, and refining ontologies to increase coverage and usability. Through sessions with key stakeholders responsible for the growth of the program, EK prioritized high-value expansion opportunities and recommended governance practices to sustain long-term scale. This enabled the firm to confidently evolve its Semantic Layer while maintaining alignment with business priorities and demonstrating measurable impact across the organization.

Conclusion

A successful Semantic Layer Strategy doesn’t come from technology alone; it comes from a clear vision, organizational alignment, and intentional design. Whether you’re just getting started on your semantics journey or refining your Semantic Layer approach, Enterprise Knowledge can support your organization. Contact us at info@enterprise-knowledge.com to discuss how we can help bring your Semantic Layer strategy to life.

The post Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer appeared first on Enterprise Knowledge.

]]>