How Taxonomies and Ontologies Enable Explainable AI
Taxonomy and ontology models are essential to unlocking the value of knowledge assets. They provide the structure needed to connect fragmented information across an organization, enabling explainable AI. As part of a broader Knowledge Intelligence (KI) strategy, these models help reduce hallucinations and make AI-generated content more trustworthy. This blog provides an overview of why taxonomies and ontologies are essential to connecting disparate knowledge assets within an organization and improving the quality and accuracy of AI-generated content.

 

The Anatomy of AI

Here is a conceptual analogy to help show how taxonomies and ontologies support AI. While inspired by the human musculoskeletal system, the analogy is not intended to be anatomically accurate, but rather to illustrate how taxonomies provide foundational structure and ontologies enable flexible, contextual connections of knowledge assets within AI systems.

Just as the musculoskeletal system gives structure, support, and coherence to the human body, taxonomies and ontologies provide the structural framework that organizes and contextualizes knowledge assets for AI:

  • Taxonomies: the spine and bones represent the taxonomies, in other words, the hierarchical backbone structure for categorizing and organizing the concepts that describe an organization’s core knowledge assets.

  • Ontologies: the joints, ligaments, and muscles represent the ontologies, which provide the flexibility to connect related concepts across assets in an organization’s knowledge domain.

Depending on the organization’s domain or industry, certain types of knowledge assets become more relevant or strategically important. In the case of a healthcare organization, key knowledge assets may include content such as patients’ electronic health records, clinical guidelines and protocols, multidisciplinary case reviews, and research publications, as well as data such as diagnostic data and clinical trial data. Taxonomies that capture and group together key concepts, such as illnesses, symptoms, treatments, outcomes, medicines, and clinical specialties, can be used to tag and structure these assets. Continuing with the same scenario, an ontology in a healthcare organization can incorporate those key concepts (entities) from the taxonomy, along with their properties and relationships, to enable alignment and consistent interpretation of knowledge assets across systems. Together, taxonomies and ontologies make it possible to connect, for instance, a patient’s health record with diagnostic data and previous case reviews for other patients with the same (or similar) conditions, including illnesses, symptoms, treatments, and medicines. As a result, healthcare professionals can quickly access the information they need to make well-informed decisions about a patient’s care.
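To make this concrete, here is a minimal sketch of the healthcare example in Python using the open-source rdflib library. The namespace, concept names, and relationships are illustrative stand-ins, not a real clinical vocabulary:

```python
# Illustrative only: the healthcare example expressed with rdflib. The
# namespace, concepts, and relationships are stand-ins, not a real
# clinical vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/health/")
g = Graph()

# Taxonomy: a hierarchical backbone of concepts (the "spine and bones")
g.add((EX.Condition, RDF.type, SKOS.Concept))
g.add((EX.Diabetes, RDF.type, SKOS.Concept))
g.add((EX.Diabetes, SKOS.broader, EX.Condition))
g.add((EX.Diabetes, SKOS.prefLabel, Literal("Diabetes mellitus", lang="en")))
g.add((EX.Diabetes, SKOS.altLabel, Literal("high blood sugar", lang="en")))

# Ontology-style relationships: flexible connections between assets and
# concepts (the "joints, ligaments, and muscles")
g.add((EX.PatientRecord123, EX.hasDiagnosis, EX.Diabetes))
g.add((EX.CaseReview456, EX.discussesCondition, EX.Diabetes))

# Traverse the links: surface prior case reviews that share a patient's
# diagnosis, the kind of connection that supports explainable answers.
for record, _, condition in g.triples((EX.PatientRecord123, EX.hasDiagnosis, None)):
    for review, _, _ in g.triples((None, EX.discussesCondition, condition)):
        print(f"{record} relates to {review} via {condition}")
```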

 

Where AI is Failing

AI has repeatedly failed to provide reliable information to employees, customers, and patients, undermining their confidence in AI-supported systems and sometimes leading to serious organizational consequences. You may be familiar with the case in which a medical association’s chatbot unintentionally gave harmful advice to people with eating disorders. Or perhaps you heard about the bank whose faulty AI system misclassified thousands of transactions as fraudulent due to a programming error, resulting in significant customer dissatisfaction and harming the organization’s reputation. There was also a case in which an AI-powered translation system failed to accurately assess asylum seekers’ applications, raising serious concerns about its fairness and accuracy and potentially affecting critical life decisions for those applicants. In each of these cases, had the AI systems leveraged taxonomies and ontologies to effectively aggregate both unstructured and structured knowledge assets, and to reliably link them to encoded expert knowledge and relevant business context, the outcomes would have been far more closely aligned with intended objectives, ultimately benefiting the end users as intended.

 

How Taxonomies And Ontologies Enable Explainable AI

When knowledge assets are consistently tagged with taxonomies and related via ontologies, AI systems can trace how a decision was made. This means that end users can understand the reasoning path, supported by defined relationships. This also means that bias and hallucinations can be more easily detected by auditing the semantic structure behind the results.

As illustrated in the healthcare organization example, diagnoses can be tagged with medical industry taxonomies, while ontologies can help create relationships among symptoms, treatments, and outcomes. This can help physicians tailor treatments to individual patient needs by leveraging past patient cases and the collective expertise of other physicians. Similarly, a retail organization can enhance its customer service by implementing a chatbot linked to structured product taxonomies and ontologies to deliver consistent and explainable answers about products to customers. More consistent and trustworthy customer interactions streamline end-user support and strengthen brand confidence.

 

Do We Really Need Taxonomies and Ontologies to be Successful With AI?

The examples above illustrate that explainability in AI really matters. Whether end users are patients, bank customers, or individuals requesting specific products or services, they all want more transparent, trustworthy, and human-centered AI experiences. Taxonomies and ontologies provide structure and connectedness to content, documents, data, expert knowledge, and overall business context, so that all of these are machine-readable and findable by AI systems at the moment of need, ultimately creating meaningful interactions for end users.

 

Conclusion

Just like bones, joints, ligaments, and muscles in the human body, taxonomies and ontologies provide the essential structure and connection that allow AI systems to stand up to testing, be reliable, and perform with clarity. At EK, we have extensive experience identifying key knowledge assets and designing and implementing taxonomies and ontologies to successfully support AI initiatives. If you want to improve the Knowledge Intelligence (KI) of your existing or future AI applications and need help with your taxonomy and ontology efforts, don’t hesitate to get in touch with us.

How to Leverage LLMs for Auto-tagging & Content Enrichment
When working with organizations on key data and knowledge management initiatives, we’ve often noticed that a common roadblock is the lack of quality (relevant, meaningful, or up-to-date) existing content. Stakeholders may be excited to start their initiatives with advanced tools, like graph solutions, personalized search solutions, or advanced AI solutions; however, without a strong backbone of semantic models and context-rich content, these solutions are significantly less effective. For example, without proper tags and content types, a knowledge portal development effort can’t fully demonstrate the value of faceting and aggregating pieces of content and data together in ‘knowledge panes’. With a more semantically rich set of content to work with, the portal can begin showing value through search, filtering, and aggregation, leading to further organizational and leadership buy-in.

One key step in preparing content is the application of metadata and organizational context to pieces of content through tagging. There are several tagging approaches an organization can take to enrich pre-existing content with metadata and organizational context, including manual tagging, automated tagging capabilities from a taxonomy and ontology management system (TOMS), using apps and features directly from a content management solution, and various hybrid approaches. While many of these approaches, in particular acquiring a TOMS, are recommended as a long-term auto-tagging solution, EK has recommended and implemented Large Language Model (LLM)-based auto-tagging capabilities across several recent engagements. Due to LLM-based tagging’s lower initial investment compared to a TOMS and its greater efficiency than manual tagging, these auto-tagging solutions have been able to provide immediate value and jumpstart the process of re-tagging existing content. This blog will dive deeper into how LLM tagging works, the value of semantics, technical considerations, and next steps for implementing an LLM-based tagging solution.

Overview of LLM-Based Auto-Tagging Process

Similar to existing auto-tagging approaches, the LLM suggests a tag by parsing through a piece of content, processing and identifying key phrases, terms, or structure that gives the document context. Through prompt engineering, the LLM is then asked to compare the similarity of key semantic components (e.g., named entities, key phrases) with various term lists, returning a set of terms that could be used to categorize the piece of content. These responses can be adjusted in the tagging workflow to only return terms meeting a specific similarity score. These tagging results are then exported to a data store and applied to the content source. Many factors, including the particular LLM used, the knowledge an LLM is working with, and the source location of content, can greatly impact the tagging effectiveness and accuracy. In addition, adjusting parameters, taxonomies/term lists, and/or prompts to improve precision and recall can ensure tagging results align with an organization’s needs. The final step is the auto-tagging itself and the application of the tags in the source system. This could look like a script or workflow that applies the stored tags to pieces of content.

Figure 1: High-level steps for LLM content enrichment
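Complementing Figure 1, here is a rough sketch of what these steps can look like in code. The call_llm function stands in for whichever (ideally private) LLM client an organization uses; the term list, prompt wording, and threshold are hypothetical:

```python
# Illustrative only: a compressed version of the tagging workflow above.
# `call_llm` stands in for whichever (ideally private) LLM client an
# organization uses; the term list, prompt, and threshold are hypothetical.
import json

TAXONOMY_TERMS = ["Green Account", "Risk Management", "Member Services"]
SIMILARITY_THRESHOLD = 0.75  # only keep terms the model scores above this

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call returning the model's text response."""
    raise NotImplementedError

def suggest_tags(document_text: str) -> list[dict]:
    """Ask the LLM to match a document against the term list, then filter."""
    prompt = (
        "Compare the document below against this controlled vocabulary: "
        f"{TAXONOMY_TERMS}.\n"
        'Return a JSON list of {"term": ..., "score": 0 to 1} objects for '
        "terms that describe the document.\n\n"
        f"Document:\n{document_text}"
    )
    candidates = json.loads(call_llm(prompt))
    # Keep only suggestions that meet the similarity threshold
    return [c for c in candidates if c["score"] >= SIMILARITY_THRESHOLD]

# The surviving tags would then be exported to a data store and applied to
# the content in its source system by a separate script or workflow.
```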

EK has put these steps into practice, for example, when engaging with a trade association on a content modernization project to migrate and auto-tag content into a new content management system (CMS). The organization had been struggling with content findability, standardization, and governance, in particular, the language used to describe the diverse areas of work the trade association covers. As part of this engagement, EK first worked with the organization’s subject matter experts (SMEs) to develop new enterprise-wide taxonomies and controlled vocabularies integrated across multiple platforms to be utilized by both external and internal end-users. To operationalize and apply these common vocabularies, EK developed an LLM-based auto-tagging workflow utilizing the four high-level steps above to auto-tag metadata fields and identify content types. This content modernization effort set up the organization for document workflows, search solutions, and generative AI projects, all of which are able to leverage the added metadata on documents. 

Value of Semantics with LLM-Based Auto-Tagging

Semantic models such as taxonomies, metadata models, ontologies, and content types can all be valuable inputs to guide an LLM on how to effectively categorize a piece of content. When considering how an LLM is trained for auto-tagging content, a greater emphasis needs to be put on organization-specific context. If using a taxonomy as a training input, organizational context can be added through weighting specific terms, increasing the number of synonyms/alternative labels, and providing organization-specific definitions. For example, by providing organizational context through a taxonomy or business glossary that the term “Green Account” refers to accounts that have met a specific environmental standard, the LLM would not accidentally tag content related to the color green or an account that is financially successful.
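As a hedged illustration of supplying that organizational context, the sketch below passes a term’s definition and synonyms into the tagging prompt; the term data and helper function are hypothetical:

```python
# Illustrative only: organization-specific context for a taxonomy term,
# formatted for inclusion in a tagging prompt. The term data is hypothetical.
TERM_CONTEXT = {
    "Green Account": {
        "definition": (
            "An account that has met the organization's "
            "environmental certification standard."
        ),
        "synonyms": ["eco-certified account", "environmental account"],
    },
}

def context_block(term: str) -> str:
    """Render one term's definition and synonyms as prompt context."""
    info = TERM_CONTEXT[term]
    return (
        f"Term: {term}\n"
        f"Definition: {info['definition']}\n"
        f"Synonyms: {', '.join(info['synonyms'])}"
    )

# Appending this block to the tagging prompt steers the LLM toward the
# organization's meaning, not the color green or a profitable account.
print(context_block("Green Account"))
```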

Another benefit of an LLM-based approach is the ability to evolve both the semantic model and the LLM as tagging results are received. As sets of tags are generated for an initial set of content, the taxonomies and content models being used to train the LLM can be refined to better fit the specific organizational context. This could look like adding alternative labels, adjusting the definitions of terms, or adjusting the taxonomy hierarchy. Similarly, additional tools and techniques, such as weighting and prompt engineering, can tune the results provided by the LLM to achieve higher recall (the rate at which the LLM includes the correct terms) and precision (the rate at which the LLM selects only correct terms) when recommending terms. One example is adding a weight from 0 to 10 to every taxonomy term and assigning a higher score to terms the organization prefers to use. The workflow developed alongside the LLM can use this context to include or exclude a particular term.
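A minimal sketch of that 0-10 weighting idea, assuming hypothetical term weights and candidate tags shaped like the earlier snippet’s output:

```python
# Illustrative only: combine an LLM similarity score with an organizational
# preference weight (0-10) so preferred terms rank higher.
TERM_WEIGHTS = {"Green Account": 9, "Sustainable Account": 3}  # hypothetical

def rank_tags(candidates: list[dict]) -> list[dict]:
    """Re-rank candidate tags by similarity score times preference weight."""
    for candidate in candidates:
        weight = TERM_WEIGHTS.get(candidate["term"], 5) / 10  # 5 = neutral
        candidate["adjusted_score"] = candidate["score"] * weight
    return sorted(candidates, key=lambda c: c["adjusted_score"], reverse=True)

print(rank_tags([
    {"term": "Green Account", "score": 0.80},
    {"term": "Sustainable Account", "score": 0.85},
]))  # "Green Account" ranks first despite the lower raw score
```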

Implementation Considerations for LLM-Based Auto-Tagging 

Several factors, such as the timeframe, volume of information, necessary accuracy, types of content management systems, and desired capabilities, inform the complexity and resources needed for LLM-based content enrichment. The following considerations expand upon the factors an organization must consider for effective LLM content enrichment. 

Tagging Accuracy

The accuracy of tags from an LLM directly impacts the end users and systems (e.g., search instances or dashboards) that utilize the tags. Safeguards need to be implemented so end users can trust the accuracy of the tagged content they are using, ensuring that users do not mistakenly access or rely on the wrong documents and are not frustrated by the results they get. To mitigate both of these concerns, improving the LLM’s recall and precision scores raises overall tagging accuracy and lowers the chance of miscategorization. This can be done by investing in human test-tagging and input from SMEs to create a gold-standard set of tagged content as training data for the LLM. The gold-standard set can then be used to adjust how the LLM weights or prioritizes terms, based on the organizational context it captures. These practices help avoid hallucinations (factually incorrect or misleading content) that could otherwise surface in applications that rely on the auto-tagged content.
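For illustration, recall and precision against a gold-standard tag set reduce to a few lines of Python; the sample tags below are hypothetical:

```python
# Illustrative only: score auto-tagging output against an SME-built
# gold-standard tag set for one document.
def precision_recall(suggested: set[str], gold: set[str]) -> tuple[float, float]:
    true_positives = len(suggested & gold)  # tags both the LLM and SMEs chose
    precision = true_positives / len(suggested) if suggested else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

p, r = precision_recall(
    suggested={"Green Account", "Audit"},
    gold={"Green Account", "Compliance"},
)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.50, recall=0.50
```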

Content Repositories

One factor that greatly adds technical complexity is accessing the various types of content repositories that an LLM solution, or any auto-tagging solution, needs to read from. The best content management practice for auto-tagging is to read content in its source location, limiting the risk of duplication and the effort needed to download and then read content. When developing a custom solution, each content repository often needs a distinctive approach to read and apply tags. A content or document repository like SharePoint, for example, has a robust API for reading content and seamlessly applying tags, while a less widely adopted platform may not have the same level of support. It is important to account for the unique needs of each system in order to limit the disruption end-users may experience when embarking on a tagging effort.

Knowledge Assets

When considering the scalability of the auto-tagging effort, it is also important to evaluate the breadth of knowledge asset types being analyzed. While LLMs’ ability to process several types of knowledge assets has been growing, each step of additional complexity, particularly evaluating multiple asset types, can require additional resources and time to read and tag documents. A PDF with 2-3 pages of text will take far fewer tokens and resources for an LLM to read than a long visual or audio asset. Moving from a tagging workflow of structured knowledge assets to tagging unstructured content will increase the overall time, resources, and custom development needed to run a tagging workflow.

Data Security & Entitlements

When utilizing an LLM, it is recommended that an organization invest in a private or in-house LLM to complete the analysis, rather than leveraging a publicly available model. The LLM does not need to be ‘on-premises’: several providers offer LLMs that run within your company’s own environment. This ensures a higher level of document security and additional options for customization. Particularly when tackling use cases with higher levels of personal information and access controls, a robust mapping of content and an understanding of what needs to be tagged are imperative. For example, if a publicly facing LLM read confidential documents on how to develop a company-specific product, that information could then surface in other public queries and has a higher likelihood of being accessed outside the organization. In an enterprise data ecosystem, running an LLM-based auto-tagging solution can raise red flags around data access, controls, and compliance. These challenges can be addressed through a Unified Entitlements System (UES) that creates a centralized policy management system for both end users and the LLM solutions being deployed.

Next Steps

One major consideration with an LLM tagging solution is maintenance and governance over time. For some organizations, after an initial enrichment of content by the LLM, a combination of manual tagging and forms within each CMS helps maintain tagging standards over time. However, a more mature organization dealing with several content repositories and systems may want to either operationalize the content enrichment solution for continued use or invest in a TOMS. With either approach, completing an initial LLM enrichment of content is a key way to prove the value of semantics and metadata to decision-makers in an organization.

An LLM content enrichment effort can actualize many of the technical solutions and initiatives that excite both technical and business stakeholders. With content that is tagged and adheres to semantic standards, solutions like knowledge graphs, knowledge portals, and semantic search engines, or even an enterprise-wide LLM solution, can demonstrate even greater organizational value.

If your organization is interested in upgrading your content and developing new KM solutions, contact us!

Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer
Today’s organizations are flooded with opportunities to apply AI and advanced data experiences, but many struggle with where to focus first. Leaders are asking questions like: “Which AI use cases will bring the most value? How can we connect siloed data to support them?” Without a clear strategy, quick-moving startups and vendors make it easy to spin wheels on experiments that never scale. As more organizations recognize the value of meaningful, connected data experiences via a Semantic Layer, many find themselves unsure of how to begin their journey, or how to sustain meaningful progress once they begin.

A well-defined Semantic Layer strategy is essential to avoid costly missteps in planning or execution, secure stakeholder alignment and buy-in, and ensure long-term scalability of models and tooling.

This blog outlines the key components of a successful Semantic Layer strategy, explaining how each component supports a scalable implementation and contributes to unlocking greater value from your data.

What is a Semantic Layer?

The Semantic Layer is a framework that adds rich structure and meaning to data by applying categorization models (such as taxonomies and ontologies) and using semantic technologies like graph databases and data catalogs. Your Semantic Layer should act as connective tissue, leveraging a shared language to unify information across systems, tools, and domains.

Data-rich organizations often manage information across a growing number of siloed repositories, platforms, and tools. The lack of a shared structure for how data is described and connected across these systems ultimately slows innovation and undermines initiatives. Importantly, your semantic layer enables humans and machines to interpret data in context and lays the foundation for enterprise-wide AI capabilities.    

 

What is a Semantic Layer Strategy?

A Semantic Layer Strategy is a tailored vision outlining the value of using knowledge assets to enable new tools and create insights through semantic approaches. This approach ensures your organization’s semantic efforts are focused, feasible, and value-driven by aligning business priorities with technical implementation. 

Regardless of your organization’s size, maturity, or goals, a strong Semantic Layer Strategy enables you to achieve the following:

1. Articulate a clear vision and value proposition.

Without a clear vision, semantic layer initiatives risk becoming scattered and mismanaged, with teams pulling in different directions and the value to the organization left unclear. The Semantic Layer vision serves as the “North Star,” or guiding principle, for planning, design, and execution. Organizations can realize a variety of use cases via a Semantic Layer (including advanced search, recommendation engines, personalized knowledge delivery, and more), and a Semantic Layer Strategy helps define and align on what a Semantic Layer can solve for your organization.

The vision statement clearly answers three core questions:

  • What is the business problem you are trying to solve?
  • What outcomes and capabilities are you enabling?
  • How will you measure success?

These three items create a strategic narrative that business and technical stakeholders alike can understand, and they enable the discussions needed to gain executive buy-in and prioritize initiative efforts.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the development of a data strategy for operational risk for a bank seeking to create a unified view of highly regulated data dispersed across siloed repositories. By framing a clear vision statement for the Bank’s semantic layer, EK guided the firm to establish a multi-year program to expand the scope of data and continually enable new data insights and capabilities that were previously impossible. For example, users of a risk application could access information from multiple repositories in a single knowledge panel within the tool rather than hunting for it in siloed applications. The Bank’s Semantic Layer vision is contained in a single easy-to-understand one-pager that has been used repeatedly as a rallying point to communicate value across the enterprise, win executive sponsorship, and onboard additional business groups into the semantic layer initiative.

2. Assess your current organizational semantic maturity.

A semantic maturity assessment looks at the semantic structures, programs, processes, knowledge assets and overall awareness that already exist at your organization. Understanding where your organization lies on the semantic maturity spectrum is essential for setting realistic goals and sequencing a path to greater maturity. 

  • Less mature organizations may lack formal taxonomies or ontologies, or may have taxonomies and ontologies that are outdated, inconsistently applied, or not integrated across systems. They have limited (or no) semantic tooling and few internal semantic champions. Their knowledge assets are isolated, inconsistently tagged (or untagged) documents that require human interpretation to understand and are difficult for systems to find or connect.
  • More mature organizations typically have well-maintained taxonomies and/or ontologies, have established governance processes, and actively use semantic tooling such as knowledge graphs or business glossaries. More than likely, there are individuals or groups who advocate for the adoption of these tools and processes within the organization. Their knowledge assets are well-structured, consistently tagged, and interconnected pieces of content that both humans and machines can easily discover, interpret, and reuse.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK conducted a comprehensive semantic maturity assessment of the current state of the Bank’s semantics program to uncover strengths, gaps, and opportunities. This assessment included:

  • Knowledge Asset Assessment: Evaluated the connectedness, completeness, and consistency of existing risk knowledge assets, identifying opportunities to enrich and restructure them to support redesigned application workflows.
  • Ontology Evaluation: Reviewed existing ontologies describing risk at the firm to assess accuracy, currency, semantic standards compliance, and maintenance practices.
  • Category Model Evaluation: Created a taxonomy tracker to evaluate candidate categories for a unified category management program, focusing on quality, ownership, and ongoing governance.
  • Architecture Gap Analysis and Tooling Recommendation: Reviewed existing applications, APIs, and integrations to determine whether components should be reused, replaced, or rebuilt.
  • People & Roles Assessment: Designed a target operating model to identify team structures, collaboration patterns, and missing roles or skills that are critical for semantic growth.

Together, these evaluations provided a clear benchmark of maturity and guided a right-sized strategy for the bank. 

3. Create a shared conceptual knowledge asset model. 

When it comes to strategy, executive stakeholders don’t want to see exhaustive technical documentation; they want to see impact. A high-level visual model of what your Semantic Layer will achieve brings a Semantic Layer Strategy to life by showing how connected knowledge assets can enable better decisions and new insights.

Your data model should show, in broad strokes, what kinds of data will be connected at the conceptual level. For example, your data model could show that people, business units, and sales reports can be connected to answer questions like, “How many people in the United States created documents about X Law?” or “What laws apply to me when writing a contract in Wisconsin?” 

In sum, it should focus on how people and systems will benefit from the relationships between data, enabling clearer communication and shared understanding of your Semantic Layer’s use cases. 
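As a toy illustration, the “X Law” question above can be answered over a tiny rdflib graph with a SPARQL query; every name and property here is hypothetical:

```python
# Illustrative only: a conceptual model question ("How many people in the
# United States created documents about X Law?") answered over a toy graph.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("https://example.org/model/")
g = Graph()
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.locatedIn, Literal("United States")))
g.add((EX.alice, EX.created, EX.doc1))
g.add((EX.doc1, EX.about, EX.XLaw))

query = """
PREFIX ex: <https://example.org/model/>
SELECT (COUNT(DISTINCT ?person) AS ?count)
WHERE {
  ?person a ex:Person ;
          ex:locatedIn "United States" ;
          ex:created ?doc .
  ?doc ex:about ex:XLaw .
}
"""
for row in g.query(query):
    print(row[0])  # 1
```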

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK collaborated with data owners to map out core concepts and their relationships in a single, digestible diagram. The conceptual knowledge asset model served as a shared reference point for both business and technical stakeholders, grounding executive conversations about Semantic Layer priorities and guiding onboarding decisions for data and systems. 

By simplifying complex data relationships into a clear visual, EK enabled alignment across technical and non-technical audiences and built momentum for the Semantic Layer initiative.

4. Develop a practical and iterative roadmap for implementation and scale.

With your vision, assessment, and foundational conceptual model in place, the next step is translating your strategy into execution. Your Semantic Layer roadmap should be outcome-driven, iterative, and actionable. A well-constructed roadmap provides not only a starting point for your Semantic Layer initiative, but also a mechanism for continuous alignment as business priorities evolve. 

Importantly, your roadmap should not be a rigid set of instructions; rather, it should act as a living guide. As your semantic maturity increases and business needs shift, the roadmap should adapt to reflect new opportunities while keeping long-term goals in focus. While the roadmap may be more detailed and technically advanced for highly mature organizations, less mature organizations may focus their roadmap on broader strokes such as tool procurement and initial category modeling. In both cases, the roadmap should be tailored to the organization’s unique needs and maturity, ensuring it is practical, actionable, and aligned to real priorities.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the creation of a roadmap focused on expanding the firm’s existing semantic layer. Through planning sessions, EK identified the categories, ontologies, tooling, and architecture uplifts needed to move forward on the Semantic Layer journey. Once a strong foundation was built, additional planning sessions centered on adding new categories, onboarding additional data concepts, and refining ontologies to increase coverage and usability. Through sessions with the key stakeholders responsible for the growth of the program, EK prioritized high-value expansion opportunities and recommended governance practices to sustain long-term scale. This enabled the firm to confidently evolve its Semantic Layer while maintaining alignment with business priorities and demonstrating measurable impact across the organization.

 

Conclusion

A successful Semantic Layer Strategy doesn’t come from technology alone; it comes from a clear vision, organizational alignment, and intentional design. Whether you’re just getting started on your semantics journey or refining your Semantic Layer approach, Enterprise Knowledge can support your organization. Contact us at info@enterprise-knowledge.com to discuss how we can help bring your Semantic Layer strategy to life.

Breaking Down Types of Knowledge Assets and Their Impact
In their blog “What is a Knowledge Asset?”, EK’s CEO Zach Wahl and Practice Lead for Semantic Design and Modeling, Sara Mae O’Brien-Scott, explored how organizations can define knowledge assets beyond just documents or data. The blog emphasizes that anything, from people and processes to AI-generated content, can be treated as a knowledge asset if it holds value, can be connected via metadata, and contributes to a broader, contextualized knowledge network.

The way knowledge assets are defined is crucial for an organization because it directly impacts how they are managed, leveraged, and protected. This includes identifying which knowledge assets have strategic value, how to manage them so they are accessible for timely decision-making, which management policies should be applied to ensure effective knowledge sharing, retention, continuity, and transfer, and which steps are necessary to comply with industry regulations.

This blog highlights the types of knowledge assets commonly found in organizations and provides industry-specific examples based on typical Knowledge Management (KM) use cases.

 

Infographic titled “Types of Knowledge Assets,” showing seven categories: People’s Expertise, Content & Documentation, Technical Infrastructure, Structured Data, Governance, Actionable Processes, and Operational Resources, each with icons and descriptions.

Examples Of Relevant Knowledge Asset Types Per Industry

As illustrated in the previous section describing the different types of knowledge assets, these assets encompass more than just content or data. They may include people’s expertise and experience, transaction records, policies, and even facilities or locations. Depending on the industry or organization type, certain knowledge assets may be prioritized in early use cases because they play a more central role in those specific contexts.

A manufacturing company looking to improve its supply chain processes would benefit significantly from tagging, managing, leveraging, and protecting operational and logistical resources — such as equipment, facilities, and products — and linking them to reveal relationships and dependencies across the supply chain. By also tagging and connecting additional knowledge assets, such as structured data and analytical resources — including order history, transactions, and metrics — and content and documentation — such as process descriptions and reports — the company may gain deeper visibility into operational bottlenecks, enhance forecasting accuracy, and improve coordination across departments. This holistic approach can enable more agile decision-making, reducing downtime and supporting continuous improvement across the entire manufacturing lifecycle.

A bank looking to maintain compliance, uphold governance standards, and minimize regulatory risk can benefit from managing, leveraging, and protecting its key knowledge assets in a standardized and connected way. By using key terminology to tag governance and compliance resources — such as corporate policies, industry regulations, and tax codes — alongside operational and logistical resources — such as locations and facilities — and the corresponding subject matter experts, the bank builds a foundation for semantic alignment. This allows the bank not only to associate branches and operational sites with the specific policies and regulatory obligations they must meet, but also to ensure compliance with jurisdiction-specific requirements, reduce audit exposure, and strengthen its ability to respond to regulatory changes with agility and confidence.

A healthcare organization relies on clinical expertise and institutional memory to diagnose and treat patients. By capturing, tagging, and sharing the expertise and experience of physicians and multidisciplinary teams, doctors, nurses, and other support personnel can promptly access the expert-based information they need to diagnose and treat patients more accurately. Additionally, access to content and documentation from clinical cases and structured data from research studies will further improve decision-making for the organization’s personnel.

Do you know which priority knowledge assets and related KM use cases can transform your organization by empowering teams to surface hidden insights, accelerating decision-making, or fostering operational excellence? If you need help uncovering the most valuable use cases and the associated knowledge assets that unlock meaningful transformation in your organization, we can help. Contact us to learn more. 

The Semantic Exchange Webinar Series Recap
Promotional graphic for The Semantic Exchange webinar by Enterprise Knowledge, featuring six semantic experts as moderators and presenters.

Enterprise Knowledge recently completed the first round of our new webinar series The Semantic Exchange, which offers participants an opportunity to engage in Q&A with EK’s Semantic Design thought leaders. Participants were able to engage with EK’s experts on topics such as the value of enterprise semantic architecture, best practices for generating buy-in for semantics across an organization, and techniques for semantic solution implementation. The series sparked thoughtful discussion on how to understand and address real-world semantic challenges. 

To view any of the recorded sessions and their corresponding published work, use the links below:

 

  • Why Your Taxonomy Needs SKOS (Infographic) – Bonnie Griffin
  • What is Semantics and Why Does it Matter? (Blog) – Ben Kass
  • Metadata Within the Semantic Layer (Blog) – Kathleen Gollner
  • A Semantic Layer to Enable Risk Management (Case Study) – Yumiko Saito
  • Humanitarian Foundation – SemanticRAG POC (Case Study) – James Egan

If you are interested in bringing semantics and data modeling solutions to your organization, contact us here!

Semantic Layer for Content Discovery, Personalization, and AI Readiness

The Challenge

A professional association needed to improve their members’ content experiences. With tens of thousands of content assets published across 50 different websites and 5 disparate content management systems (CMSes), they struggled to coordinate a content strategy and improve content discovery. They could not keep up with the demands of managing content, leading to problems with outdated content and content pieces that were hard to discover. They also lacked the ability to identify and act on user data and trends, to better plan and tailor their content to member needs. Ultimately, members could not discover and take full advantage of the wealth of resources provided to them by the association.

Overall, the key driver behind this challenge was that the professional association lacked semantic maturity. While the association had a way to structure their content through a number of taxonomies across their web properties, their models were not aligned or mapped to one another and updates were not coordinated. Tagging expertise—and time to contribute to content tagging—varied considerably between content creators, resulting in inconsistent and irregular content tagging. The association also struggled to maintain their content due to an absence of clear governance responsibilities and practices. More broadly, the association lacked organization-wide processes to align semantic modeling with content governance—processes that ensure taxonomies and metadata models evolve in step with new content areas, and that governance practices consistently enforce tagging standards across content types and updates. This gap was also reflected in their technology stack: the association lacked an organization-wide solution architecture that would support their ability to coordinate and share semantics, data, and content across their systems. These challenges prevented the association from developing more engaging content experiences for their members. They needed support developing the strategies, semantic models, and solution architecture to enable their vision.

The Solution

EK partnered with the professional association to establish the foundational content strategy, semantic models, and solution architecture to enable their goals for content discovery and analytics. First, EK conducted a current state analysis and target state definition, as well as a semantic maturity assessment. This helped EK understand the factors that could be leveraged to help the association realize its goals. EK subsequently completed four parallel workstreams:

  1. Content Assessment: EK audited a sample of assets on priority web properties to understand the condition of the association’s content and semantic practices. EK identified recommendations for how to enhance the performance, governance, and discoverability of content. Based on these recommendations, EK provided step-by-step procedures to support the association in completing a comprehensive audit to enhance their content quality and aid in future findability enhancement and content personalization efforts.
  2. Taxonomy and Ontology Development: EK developed an enterprise taxonomy and ontology framework for the association—to provide a standardized vocabulary for use across the association’s systems, and increase the maturity of the association’s semantic models. The enterprise taxonomy included 12 facets to support 12 metadata fields, with a cumulative total of over 900 concepts. An ontology identified key relationships between the different taxonomy facets, establishing a foundation for identifying related content and supporting auto-tagging.
  3. Semantic Layer Architecture: EK provided recommendations for maturing the association’s tooling and integrations in support of their goals. Specifically, EK developed a solution architecture to integrate taxonomy, ontology, and auto-tagging across content, asset, and learning management systems, in order to inform a variety of content analytics, discovery, recommendation, and assembly applications. This architecture was designed to form the basis of a semantic layer that the association could later use to connect and relate content enterprise-wide. The architecture included the addition of a taxonomy and ontology management system (TOMS) to centralize semantic model management and to introduce auto-tagging capabilities. Alongside years of experience in tool evaluation, EK leveraged their proprietary TOMS evaluation matrix to score candidate vendors and TOMS solutions, supporting the association in selecting a tool that was the best fit for their needs.
  4. Auto-Tagging Proof of Concept: Building on these efforts, EK conducted an auto-tagging proof of concept (PoC), to support the association in applying the taxonomy to their content. The PoC automatically tagged all content assets in 2 priority CMSes with concepts from 2 prioritized topic taxonomy facets. The EK team prepared the processing pipeline for the auto-tagging effort, including pre-processing the content and conducting analysis of the tags to gauge quality and improvement over time.

To determine the exact level of improvement, EK worked with subject matter experts to establish a gold standard set of expected tags for a sample of content assets. The tags produced by the auto-tagger were compared to the expected tag set, to generate measures of recall, precision, and accuracy. EK used the analytics to inform adjustments to the taxonomy facets and to fine-tune and improve the auto-tagger’s performance over successive rounds.

To support the association in continuing to grow and leverage their semantic maturity, EK provided a detailed semantic maturity implementation roadmap. The roadmap identified five target outcomes for semantic enrichment, including: enhancing analytics to provide insights into content use and content gaps; and recommending content by using content tags to suggest related resources. For each outcome, EK detailed the requisite goals, business value, tasks, and dependencies—providing the association with the guidance they needed to realize each outcome and further advance their semantic maturity.

The EK Difference

EK was uniquely positioned to help the association improve their semantic maturity. As thought leaders in the semantic space, EK had the expertise and experience to assess the association’s semantic maturity, identify opportunities for growth, and define a vision and roadmap to help the association realize its business priorities. Further, EK has a deep understanding of the semantic technology landscape. This positioned EK to deliver tailored solutions that reflect the specific needs of the association, ensuring the solutions contribute to the association’s long-term technology roadmap.

EK leveraged a holistic approach to assessing and advancing the association’s semantic maturity. EK’s proprietary semantic maturity assessment accounts for the varied factors that influence an organization’s semantic maturity, including considerations for people, process, content, models, and technology. This positions the association to develop the capabilities required for semantic maturity across all contributing factors. Building off of the semantic maturity assessment, EK delivered end-to-end services that supported the entire semantic lifecycle, from strategy through design, implementation, and governance. This provided the association with the semantic infrastructure to realize near-term value; for instance, developing an enterprise taxonomy and applying it to their content assets using auto-tagging. By using proprietary, industry-leading approaches, EK was able to deliver these end-to-end services with tangible results within 4 months.

The Results

EK delivered a semantic strategy and solution architecture, as well as a content clean-up strategy and initial taxonomy and ontology designs, that helped the professional association establish a foundation for realizing their goals. This effort culminated in the implementation of an auto-tagging PoC. The PoC included configuring the selected TOMS, establishing system integrations, and developing processing pipelines and quality evaluations. Ultimately, the PoC captured tags for over 23,000 content assets using more than 600 concepts from 2 priority taxonomy facets. This foundational work helped the professional association establish the initial components required for a semantic layer. A final roadmap and recommendations report provided detailed next steps, with specific tasks, dependencies, and pilots, to guide the professional association in leveraging and extending their foundational semantic layer. The first engagement was deemed a success by association leadership, and the roadmap was approved for phased implementation, which EK is now supporting. This continued partnership is enabling the association to begin realizing its goals of enhancing member engagement with content by improving content discovery and overall user experience.

Want to improve your organization’s content discovery capabilities? Interested in learning more about the semantic layer? Learn more from our experience or contact us today!


The Semantic Exchange: Humanitarian Foundation – SemanticRAG POC

Enterprise Knowledge is concluding the first round of our new webinar series, The Semantic Exchange. In this webinar series, we follow a Q&A style to provide participants an opportunity to engage with our semantic design experts on a variety of topics about which they have written. This webinar is designed for a variety of audiences, ranging from those working in the semantic space as taxonomists or ontologists, to folks who are just starting to learn about structured data and content, and how they may fit into broader initiatives around artificial intelligence or knowledge graphs.

This 30-minute session invites you to engage with James Egan’s case study, Humanitarian Foundation – SemanticRAG POC. Come ready to hear and ask about:

  • How various types of organizations can leverage standards-based semantic graph technologies;
  • How leveraging semantics addresses data integration challenges; and
  • What value semantics can provide to an organization’s overall data ecosystem.

This webinar will take place on Wednesday July 23rd, from 2:00 – 2:30PM EDT. Can’t make it? The session will also be recorded and published to registered attendees. View the recording here!

The Semantic Exchange: A Semantic Layer to Enable Risk Management at a Multinational Bank

Enterprise Knowledge is continuing our new webinar series, The Semantic Exchange with the fourth session. This session is designed for a variety of audiences, ranging from those working in the semantic space as taxonomists or ontologists, to folks who are just starting to learn about structured data and content, and how they may fit into broader initiatives around artificial intelligence or knowledge graphs.

This 30-minute session invites you to engage with Yumiko Saito’s case study, A Semantic Layer to Enable Risk Management at a Multinational Bank. Come ready to hear and ask about:

  • The challenges financial firms encounter with risk management;
  • The semantic solutions employed to mitigate these challenges; and
  • The value created by employing semantic layer solutions.

This webinar will take place on Thursday July 17th, from 1:00 – 1:30PM EDT. Can’t make it? The session will also be recorded and published to registered attendees. View the recording here!

The Semantic Exchange: Metadata Within the Semantic Layer

Enterprise Knowledge is pleased to introduce a new webinar series, The Semantic Exchange. This session is the third of a five-part series where we invite fellow practitioners to tune in and hear more about work we’ve published from the authors themselves. In these moderated sessions, we invite you to ask the authors questions in a short, accessible format. Think of the series as a chance for a little semantic snack!

This session is designed for a variety of audiences, ranging from those working in the semantic space as taxonomists or ontologists, to folks who are just starting to learn about structured data and content, and how they may fit into broader initiatives around artificial intelligence or knowledge graphs.

This 30-minute session invites you to engage with Kathleen Gollner’s blog, Metadata Within the Semantic Layer. Come ready to hear and ask about:

  • Why metadata is foundational for a semantic layer;
  • How to optimize metadata for use across knowledge assets, systems, and use cases; and
  • How metadata can be leveraged in AI solutions.

This webinar will take place on Wednesday July 9th, from 1:00 – 1:30PM EDT. Can’t make it? The session will also be recorded and published to registered attendees. View the recording here!

The Semantic Exchange: What is Semantics and Why Does it Matter?

Enterprise Knowledge is pleased to introduce a new webinar series, The Semantic Exchange. This session is number two of a five-part series where we invite fellow practitioners to tune in and hear more about work we’ve published from the authors themselves. In these moderated sessions, we invite you to ask the authors questions in a short, accessible format. Think of the series as a chance for a little semantic snack!

This session is designed for a variety of audiences, ranging from those working in the semantic space as taxonomists or ontologists, to folks who are just starting to learn about structured data and content, and how they may fit into broader initiatives around artificial intelligence or knowledge graphs.

This 30-minute session invites you to engage with Ben Kass’s white paper, What is Semantics and Why Does it Matter? Come ready to hear and ask about:

  • Why should you model the semantics of your data?
  • What does it mean to define semantics?
  • How do we capture semantics for use by machines?

This webinar will take place on Wednesday July 2nd, from 1:00 – 1:30PM EDT. Can’t make it? The session will also be recorded and published to registered attendees. View the recording here!
