SEMANTiCS Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/semantics/

How to Leverage LLMs for Auto-tagging & Content Enrichment
https://enterprise-knowledge.com/how-to-leverage-llms-for-auto-tagging-content-enrichment/
Wed, 29 Oct 2025 14:57:56 +0000

When working with organizations on key data and knowledge management initiatives, we’ve often noticed that a common roadblock is an organization’s lack of quality (relevant, meaningful, or up-to-date) existing content. Stakeholders may be excited to get started with advanced tools as part of their initiatives, like graph solutions, personalized search solutions, or advanced AI solutions; however, without a strong backbone of semantic models and context-rich content, these solutions are significantly less effective. For example, without proper tags and content types, a knowledge portal development effort can’t fully demonstrate the value of faceting and aggregating pieces of content and data together in ‘knowledge panes’. With a more semantically rich set of content to work with, the portal can begin showing value through search, filtering, and aggregation, leading to further organizational and leadership buy-in.

One key step in preparing content is the application of metadata and organizational context to pieces of content through tagging. There are several tagging approaches an organization can take to enrich pre-existing content with metadata and organizational context, including manual tagging, automated tagging capabilities from a taxonomy and ontology management system (TOMS), using apps and features directly from a content management solution, and various hybrid approaches. While many of these approaches, in particular acquiring a TOMS, are recommended as a long-term auto-tagging solution, EK has recommended and implemented Large Language Model (LLM)-based auto-tagging capabilities across several recent engagements. Due to LLM-based tagging’s lower initial investment compared to a TOMS and its greater efficiency than manual tagging, these auto-tagging solutions have been able to provide immediate value and jumpstart the process of re-tagging existing content. This blog will dive deeper into how LLM tagging works, the value of semantics, technical considerations, and next steps for implementing an LLM-based tagging solution.

Overview of LLM-Based Auto-Tagging Process

Similar to existing auto-tagging approaches, the LLM suggests a tag by parsing through a piece of content and identifying the key phrases, terms, or structure that give the document context. Through prompt engineering, the LLM is then asked to compare the similarity of key semantic components (e.g., named entities, key phrases) with various term lists, returning a set of terms that could be used to categorize the piece of content. The tagging workflow can be adjusted to only return terms that meet a specified similarity threshold. These tagging results are then exported to a data store and applied to the content source. Many factors, including the particular LLM used, the knowledge the LLM is working with, and the source location of content, can greatly impact tagging effectiveness and accuracy. In addition, adjusting parameters, taxonomies/term lists, and/or prompts to improve precision and recall can ensure tagging results align with an organization’s needs. The final step is the auto-tagging itself: the application of the tags in the source system, typically via a script or workflow that applies the stored tags to pieces of content.

Figure 1: High-level steps for LLM content enrichment
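
To make the high-level workflow above more concrete, below is a minimal Python sketch of one possible tagging pass. It assumes an OpenAI-compatible chat client, a flat taxonomy term list, and a simple JSONL data store; the model name, similarity threshold, and helper names (`tag_document`, `export_tags`) are illustrative placeholders rather than any specific product's API.

```python
"""Minimal sketch of an LLM-based auto-tagging pass (illustrative only)."""
import json
from openai import OpenAI  # assumption: an OpenAI-compatible client is available

client = OpenAI()  # reads the API key from the environment

TAXONOMY = ["Green Account", "Regulatory Compliance", "Member Services"]
SCORE_THRESHOLD = 0.75  # only keep tags the model is reasonably confident about

def tag_document(text: str) -> list[dict]:
    """Ask the LLM to map a document onto taxonomy terms with confidence scores."""
    prompt = (
        "You are tagging enterprise content. Compare the document below against "
        f"this term list: {TAXONOMY}. Respond with JSON of the form "
        '{"tags": [{"term": "...", "score": 0.0, "evidence": "..."}]}.\n\n'
        f"Document:\n{text[:4000]}"  # truncate to stay within the context window
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model could be used
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    candidates = json.loads(response.choices[0].message.content).get("tags", [])
    # Keep only suggestions that meet the similarity/confidence threshold
    return [c for c in candidates if c.get("score", 0) >= SCORE_THRESHOLD]

def export_tags(doc_id: str, tags: list[dict], store_path: str = "tags.jsonl") -> None:
    """Persist accepted tags so a separate workflow can apply them in the source system."""
    with open(store_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"doc_id": doc_id, "tags": tags}) + "\n")
```

In practice, the prompt, term list format, and threshold would be tuned iteratively against a sample of real content before tags are written back to the source system.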

EK has put these steps into practice, for example, when engaging with a trade association on a content modernization project to migrate and auto-tag content into a new content management system (CMS). The organization had been struggling with content findability, standardization, and governance, particularly with the language used to describe the diverse areas of work the trade association covers. As part of this engagement, EK first worked with the organization’s subject matter experts (SMEs) to develop new enterprise-wide taxonomies and controlled vocabularies, integrated across multiple platforms and utilized by both external and internal end-users. To operationalize and apply these common vocabularies, EK developed an LLM-based auto-tagging workflow utilizing the four high-level steps above to auto-tag metadata fields and identify content types. This content modernization effort set up the organization for document workflows, search solutions, and generative AI projects, all of which are able to leverage the added metadata on documents.

Value of Semantics with LLM-Based Auto-Tagging

Semantic models such as taxonomies, metadata models, ontologies, and content types can all be valuable inputs to guide an LLM on how to effectively categorize a piece of content. When considering how an LLM is trained for auto-tagging content, a greater emphasis needs to be put on organization-specific context. If using a taxonomy as a training input, organizational context can be added through weighting specific terms, increasing the number of synonyms/alternative labels, and providing organization-specific definitions. For example, by providing organizational context through a taxonomy or business glossary that the term “Green Account” refers to accounts that have met a specific environmental standard, the LLM would not accidentally tag content related to the color green or an account that is financially successful.

Another benefit of an LLM-based approach is the ability to evolve both the semantic model and the LLM as tagging results are received. As sets of tags are generated for an initial set of content, the taxonomies and content models guiding the LLM can be refined to better fit the specific organizational context. This could look like adding alternative labels, adjusting the definitions of terms, or adjusting the taxonomy hierarchy. Similarly, additional tools and techniques, such as weighting and prompt engineering, can tune the results provided by the LLM to achieve higher recall (the rate at which the LLM includes the correct terms) and precision (the rate at which the LLM selects only correct terms) when recommending terms. One example is adding a weighting from 0 to 10 for all taxonomy terms and assigning a higher score to terms the organization prefers to use; the workflow developed alongside the LLM can use this context to include or exclude a particular term, as illustrated in the sketch below.
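
As an illustration of the weighting idea described above, the following sketch shows one way organizational context (definitions, synonyms, and a 0-10 weight) might be attached to taxonomy terms and used to include or exclude LLM suggestions. The structure and names are hypothetical examples, not a prescribed schema.

```python
# Illustrative sketch: enriching taxonomy terms with organizational context
# (weights, synonyms, definitions) before they are handed to the tagging workflow.
# The 0-10 weighting scheme mirrors the example in the text; all names are hypothetical.

TAXONOMY_CONTEXT = {
    "Green Account": {
        "definition": "An account that has met the organization's environmental standard.",
        "synonyms": ["sustainability-certified account", "eco-certified account"],
        "weight": 9,   # preferred term: boost it
    },
    "Account": {
        "definition": "A client relationship record.",
        "synonyms": ["client record"],
        "weight": 3,   # generic term: de-emphasize it
    },
}

def adjust_score(term: str, llm_score: float) -> float:
    """Blend the LLM's confidence with the organization's 0-10 term weighting."""
    weight = TAXONOMY_CONTEXT.get(term, {}).get("weight", 5)
    return llm_score * (weight / 10)

def accept_term(term: str, llm_score: float, threshold: float = 0.5) -> bool:
    """Include or exclude a suggested term based on the weighted score."""
    return adjust_score(term, llm_score) >= threshold
```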

Implementation Considerations for LLM-Based Auto-Tagging 

Several factors, such as the timeframe, volume of information, necessary accuracy, types of content management systems, and desired capabilities, inform the complexity and resources needed for LLM-based content enrichment. The following sections expand upon the factors an organization must weigh for effective LLM content enrichment.

Tagging Accuracy

The accuracy of tags from an LLM directly impacts the end-users and systems (e.g., search instances or dashboards) that rely on them. Safeguards need to be implemented so that end-users can trust the accuracy of the tagged content they are using; these safeguards help ensure that users do not mistakenly access or use the wrong document and are not frustrated by the results they get. To mitigate both of these concerns, high recall and precision scores for the LLM tagging improve overall accuracy and lower the chance of miscategorization. This can be done by investing further in human test-tagging and input from SMEs to create a gold-standard set of tagged content that serves as reference data for the LLM. The gold-standard set can then be used to adjust how the workflow weights or prioritizes terms, based on the organizational context it captures. These practices will help avoid hallucinations (factually incorrect or misleading content) that could otherwise appear in applications utilizing the auto-tagged set of content.
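
The sketch below shows how precision and recall might be computed against an SME-built gold-standard set; the data structures are illustrative, and in practice evaluation would run across a representative sample of documents.

```python
# Sketch: scoring LLM tag suggestions against an SME-built gold-standard set.
# Precision = of the terms the LLM selected, how many were correct;
# recall = of the correct terms, how many the LLM included.
# The example tag sets are assumptions for illustration.

def precision_recall(suggested: set[str], gold: set[str]) -> tuple[float, float]:
    true_positives = len(suggested & gold)
    precision = true_positives / len(suggested) if suggested else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Example: evaluate one document's suggested tags against the SME-approved set
suggested_tags = {"Green Account", "Member Services"}
gold_tags = {"Green Account", "Regulatory Compliance"}
p, r = precision_recall(suggested_tags, gold_tags)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.50, recall=0.50
```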

Content Repositories

One factor that greatly adds technical complexity is accessing the various types of content repositories that an LLM solution, or any auto-tagging solution, needs to read from. The best content management practice for auto-tagging is to read content in its source location, limiting the risk of duplication and the effort needed to download and then read content. When developing a custom solution, each content repository often needs a distinctive approach to read and apply tags. A content or document repository like SharePoint, for example, has a robust API for reading content and seamlessly applying tags, while a less widely adopted platform may not have the same level of support. It is important to account for the unique needs of each system in order to limit the disruption end-users may experience when embarking on a tagging effort.
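
As one hedged example of applying stored tags in a source system, the sketch below PATCHes a SharePoint list item's metadata via the Microsoft Graph API. The column name, ID values, and authentication step are simplified placeholders; a real integration would also need app registration, token acquisition, and error handling.

```python
# Sketch: applying stored tags back to a SharePoint list item via the Microsoft Graph API.
# Endpoint shape and field names are simplified for illustration; 'Topics' is a
# hypothetical managed metadata column, not a default SharePoint field.
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def apply_tags_to_sharepoint(site_id: str, list_id: str, item_id: str,
                             tags: list[str], access_token: str) -> None:
    """PATCH the metadata column of a list item with the accepted tags."""
    url = f"{GRAPH_BASE}/sites/{site_id}/lists/{list_id}/items/{item_id}/fields"
    payload = {"Topics": "; ".join(tags)}  # assumption: a text column named 'Topics'
    response = requests.patch(
        url,
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
```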

Knowledge Assets

When considering the scalability of the auto-tagging effort, it is also important to evaluate the breadth of knowledge asset types being analyzed. While the ability of LLMs to process several types of knowledge assets has been growing, each step of additional complexity, particularly evaluating multiple asset types, can require additional resources and time to read and tag documents. A PDF with 2-3 pages of content will take far fewer tokens and resources for an LLM to process than a long visual or audio asset. Moving from a tagging workflow for structured knowledge assets to tagging unstructured content will increase the overall time, resources, and custom development needed to run the workflow.

Data Security & Entitlements

When utilizing an LLM, it is recommended that an organization invest in a private or in-house LLM to complete the analysis, rather than leveraging a publicly available model. Note that a private LLM does not need to be ‘on-premises’; several providers offer LLMs that run within your company’s own environment. This ensures a higher level of document security and additional features for customization. Particularly when tackling use cases with higher levels of personal information and access controls, a robust mapping of content and an understanding of what needs to be tagged are imperative. As an example, if a publicly facing LLM were reading confidential documents on how to develop a company-specific product, this information could then be leveraged in other public queries and would have a higher likelihood of being accessed outside of the organization. In an enterprise data ecosystem, running an LLM-based auto-tagging solution can raise red flags around data access, controls, and compliance. These challenges can be addressed through a Unified Entitlements System (UES) that creates a centralized policy management system for both end users and the LLM solutions being deployed.

Next Steps

One major consideration with an LLM tagging solution is maintenance and governance over time. For some organizations, after completing an initial enrichment of content by the LLM, a combination of manual tagging and forms within each CMS helps them maintain tagging standards over time. However, a more mature organization that is dealing with several content repositories and systems may want to either operationalize the content enrichment solution for continued use or invest in a TOMS. With either approach, completing an initial LLM enrichment of content is a key method to prove the value of semantics and metadata to decision-makers in an organization. 
Many technical solutions and initiatives that excite both technical and business stakeholders can be actualized by an LLM content enrichment effort. With content that is tagged and adheres to semantic standards, solutions like knowledge graphs, knowledge portals, semantic search engines, or even an enterprise-wide LLM solution can more fully demonstrate organizational value.

If your organization is interested in upgrading your content and developing new KM solutions, contact us!

How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready
https://enterprise-knowledge.com/how-to-fill-your-knowledge-gaps-to-ensure-youre-ai-ready/
Mon, 29 Sep 2025 19:14:44 +0000

“If only our company knew what our company knows” has been a longstanding lament for leaders: organizations are prevented from mobilizing their knowledge and capabilities towards their strategic priorities. Similarly, being able to locate knowledge gaps in the organization, whether we were initially aware of them (known unknowns), or initially unaware of them (unknown unknowns), represents opportunities to gain new capabilities, mitigate risks, and navigate the ever-accelerating business landscape more nimbly.  

AI implementations are already showing signs of knowledge gaps: hallucinations, wrong answers, incomplete answers, and even “unanswerable” questions. There are multiple causes of AI hallucinations, but an important one is not having the right knowledge to answer a question in the first place. While LLMs may have been trained on massive amounts of data, that doesn’t mean they know your business, your people, or your customers. This is a common problem when organizations make the leap from how they experience “Public AI” tools like ChatGPT, Gemini, or Copilot to building their own organization’s AI solutions. LLMs and agentic solutions need knowledge, your organization’s unique knowledge, to produce results that are specific to your and your customers’ needs and to help employees navigate and solve the challenges they encounter in their day-to-day work.

In a recent article, EK outlined key strategies for preparing content and data for AI. This blog post builds on that foundation by providing a step-by-step process for identifying and closing knowledge gaps, ensuring a more robust AI implementation.

 

The Importance of Bridging Knowledge Gaps for AI Readiness

EK lays out a six-step path to getting your content, data, and other knowledge assets AI-ready, yielding assets that are correct, complete, consistent, contextual, and compliant. The diagram below provides an overview of these six steps:

The six steps to AI readiness: (1) define knowledge assets, (2) conduct cleanup, (3) fill knowledge gaps (we are here), (4) enrich with context, (5) add structure, and (6) protect the knowledge assets.

Identifying and filling knowledge gaps, the third step of EK’s path towards AI readiness, is crucial in ensuring that AI solutions have optimized inputs. 

Prior to filling gaps, an organization will have defined its critical knowledge assets and conducted a content cleanup. A content cleanup not only ensures the correctness and reliability of the knowledge assets, but also reveals the specific topics, concepts, or capabilities that the organization cannot currently supply to AI solutions as inputs.

The first, simpler scenario arises when the organization has a clear idea of the AI use cases and purposes for its knowledge assets. Given that the organization knows the questions AI needs to answer, an assessment to identify the location and state of knowledge assets can be targeted based on the inputs required. This assessment would be followed by efforts to collect the identified knowledge and optimize it for AI solutions.

A second, more complicated, scenario arises when an organization hasn’t formulated a prioritized list of questions for AI to answer. The previously described approach, relying on drawing up a traditional knowledge inventory, will face setbacks because it may prove difficult to scale and won’t always uncover the insights we need for AI readiness. Knowledge inventories may help us understand our known unknowns, but they will not be helpful in revealing our unknown unknowns.

 

Identifying the Gap

How can we identify something that is missing? At this juncture, organizations will need to leverage analytics, introduce semantics, and, if AI is already deployed in the organization, use it as a resource as well. There are different techniques to identify these gaps, depending on whether your organization has already deployed an AI solution or is ramping up for one. Available options include:

Before and After AI Deployment

Leveraging Analytics from Existing Systems

Monitoring and assessing different tools’ analytics is an established practice to understand user behavior. In this instance, EK applies these same methods to understand critical questions about the availability of knowledge assets. We are particularly interested in analytics that reveal answers to the following questions:

  • At what point are our people giving up when navigating different sections of a tool or portal?
  • What sort of queries return no results?
  • What queries are more likely to get abandoned? 
  • What sort of content gets poor reviews, and by whom?
  • What sort of material gets no engagement? What did the user do or search for before getting to it? 

These questions aim to identify instances of users trying, and failing, to get knowledge they need to do their work. Where appropriate, these questions can also be posed directly to users via surveys or focus groups to get a more rounded perspective. 
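
Where search logs are available as an export, a lightweight analysis like the sketch below can surface zero-result and abandoned queries as candidate knowledge gaps. The file and column names are assumptions about the log schema, not a specific product's format.

```python
# Sketch: mining a search-log export for signals of missing knowledge.
# Assumes a CSV with 'query', 'result_count', and 'clicked' columns; the file
# and its schema are illustrative placeholders.
import pandas as pd

logs = pd.read_csv("search_logs.csv")
logs["clicked"] = logs["clicked"].astype(bool)

# Queries that returned nothing: candidate knowledge gaps
zero_result = (
    logs[logs["result_count"] == 0]
    .groupby("query").size()
    .sort_values(ascending=False)
    .head(20)
)

# Queries that returned results but were never clicked: likely low-value content
abandoned = (
    logs[(logs["result_count"] > 0) & (~logs["clicked"])]
    .groupby("query").size()
    .sort_values(ascending=False)
    .head(20)
)

print("Top zero-result queries:\n", zero_result)
print("Top abandoned queries:\n", abandoned)
```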

Semantics

Semantics involve modeling an organization’s knowledge landscape with taxonomies and ontologies. When taxonomies and ontologies have been properly designed, updated, and consistently applied to knowledge, they are invaluable as part of wider knowledge mapping efforts. In particular, semantic models can be used as an exemplar of what should be there, and can then be compared with what is actually present, thus revealing what is missing.

We recently worked with a professional association within the medical field, helping them define a semantic model for their expansive body of content and then defining an automated approach to tagging these knowledge assets. As part of the design process, EK taxonomists interviewed experts across all of the association’s functional teams to define the terms that should be present in the organization’s knowledge assets. After the first few rounds of auto-tagging, we examined the taxonomy’s coverage and found that a significant fraction of the terms in the taxonomy went unused. We validated our findings with our client’s experts and, to their surprise, our engagement revealed an imbalance in knowledge asset production: while some topics were covered by their content, others were entirely lacking.

Valid taxonomy terms or ontology concepts for which few to no knowledge assets exist reveal a knowledge gap where AI is likely to struggle.
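
A simple way to operationalize this check is to compare the taxonomy against tag usage counts, as in the sketch below; the taxonomy terms and tagging results shown are illustrative.

```python
# Sketch: comparing taxonomy terms against tagging results to reveal coverage gaps.
# 'tagged_documents' would normally come from the auto-tagging output or a TOMS export;
# the sample data here is illustrative.
from collections import Counter

taxonomy_terms = ["Clinical Guidelines", "Member Education", "Grant Funding", "Telehealth Policy"]

# Count how many knowledge assets were tagged with each term
tagged_documents = [
    {"doc_id": "d1", "tags": ["Clinical Guidelines", "Telehealth Policy"]},
    {"doc_id": "d2", "tags": ["Clinical Guidelines"]},
]
tag_counts = Counter(tag for doc in tagged_documents for tag in doc["tags"])

# Terms with few or no assets point to knowledge gaps where AI is likely to struggle
coverage_gaps = [term for term in taxonomy_terms if tag_counts.get(term, 0) < 1]
print("Taxonomy terms with no tagged content:", coverage_gaps)
# -> ['Member Education', 'Grant Funding']
```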

After AI Deployment

User Engagement & Feedback

To ensure a solution can scale, evolve, and remain effective over time, it is important to establish formal feedback mechanisms for users to engage with system owners and governance bodies on an ongoing basis. Ideally, users should have a frictionless way to report an unsatisfactory answer immediately after they receive it, whether it is because the answer is incomplete or just plain wrong. A thumbs-up or thumbs-down icon has traditionally been used to solicit this kind of feedback, but organizations should also consider dedicated chat channels, conversations within forums, or other approaches for communicating feedback to which their users are accustomed.

AI Design and Governance 

Out-of-the-box, pre-trained language models are designed to prioritize providing a fluid response, often leading them to confidently generate answers even when their underlying knowledge is uncertain or incomplete. This core behavior increases the risk of delivering wrong information to users. However, this flaw can be preempted by thoughtful design in enterprise AI solutions: the key is to transform them from simple answer generators into sophisticated instruments that can also detect knowledge gaps. Enterprise AI solutions can be engineered to proactively identify questions they do not have adequate information to answer and immediately flag these requests. This approach effectively creates a mandate for AI governance bodies to capture the needed knowledge.

AI can move beyond just alerting the relevant teams about missing knowledge. As we will soon discuss, AI holds additional capabilities to close knowledge gaps by inferring new insights from disparate, already-known information, and connecting users directly with relevant human experts. This allows enterprise AI to not only identify knowledge voids, but also begin the process of bridging them.

 

Closing the Gap

It is important, at this point, to make the distinction between knowledge that is truly missing from the organization and knowledge that is simply unavailable to the organization’s AI solution. The approach to close the knowledge gap will hinge on this key distinction. 

 

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge is not in a format that AI can use, then:

Transform and migrate the present knowledge asset into a format that AI can more readily ingest. 

How this looks in practice:

A professional services firm had a database of meeting recordings meant for knowledge-sharing and disseminating lessons learned. The firm determined that there was a lot of knowledge “in the rough” that AI could incorporate into existing policies and procedures, but this was impossible to do with content in video format. EK engineers programmatically transcribed the videos and then transformed the text into a machine-readable format. To make it truly AI-ready, we leveraged Natural Language Processing (NLP) and Named Entity Recognition (NER) techniques to contextualize the new knowledge assets by associating them with other concepts across the organization.
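
A minimal sketch of the entity-extraction step, assuming the transcription has already been produced and that spaCy's small English model is installed, might look like the following; the transcript text and downstream matching logic are illustrative.

```python
# Sketch: enriching a meeting transcript with named entities so it can be linked
# to other organizational concepts. Assumes transcription has already happened
# (e.g., via a speech-to-text service); the transcript content is made up.
import spacy

nlp = spacy.load("en_core_web_sm")

transcript = (
    "In the March retrospective, Priya walked through the outage in the Atlanta "
    "data center and the lessons learned for the Falcon migration project."
)

doc = nlp(transcript)

# Collect entities (people, places, dates) that can be matched against
# taxonomy terms or knowledge graph nodes downstream
entities = [(ent.text, ent.label_) for ent in doc.ents]
print(entities)  # e.g., [('March', 'DATE'), ('Priya', 'PERSON'), ('Atlanta', 'GPE'), ...]
```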

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in private spaces like email or closed forums, then:

Establish workflows and guidelines to promote, elevate, and institutionalize knowledge that had been previously informal.

How this looks in practice:

A government agency established online Communities of Practice (CoPs) to transfer and disseminate critical knowledge on key subject areas. Community members shared emerging practices and jointly solved problems. Community managers were able to ‘graduate’ informal conversations and documents into formal agency resources that lived within a designated repository, fully tagged, and actively managed. These validated and enhanced knowledge assets became more valuable and reliable for AI solutions to ingest.

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in different fragments across disjointed repositories, then: 

Unify the disparate fragments of knowledge by designing and applying a semantic model to associate and contextualize them. 

How this looks in practice:

A Sovereign Wealth Fund (SWF) collected a significant amount of knowledge about its investments, business partners, markets, and people, but kept this information fragmented and scattered across multiple repositories and databases. EK designed a semantic layer (composed of a taxonomy, an ontology, and a knowledge graph) to act as a ‘single view of truth’. EK helped the organization define its key knowledge assets, like investments, relationships, and people, and wove together data points, documents, and other digital resources to provide a 360-degree view of each of them. We furthermore established an entitlements framework to ensure that every attribute of every entity could be adequately protected and surfaced only to the right end-user. This single view of truth became a foundational element in the organization’s path to AI deployment: it now has complete, trusted, and protected data that can be retrieved, processed, and surfaced to the user as part of solution responses.

If the ‘missing’ knowledge is not recorded anywhere… but the company’s experts hold this knowledge with them, then: 

Choose the appropriate techniques to elicit knowledge from experts during high-value moments of knowledge capture. It is important to note that we can begin incorporating agentic solutions to help the organization capture institutional knowledge, especially when agents can know or infer expertise held by the organization’s people. 

How this looks in practice:

Following a critical system failure, a large financial institution recognized an urgent need to capture the institutional knowledge held by its retiring senior experts. To address this challenge, they partnered with EK, who developed an AI-powered agent to conduct asynchronous interviews. This agent was designed to collect and synthesize knowledge from departing experts and managers by opening a chat with each individual and asking questions until the defined success criteria were met. This method allowed interviewees to contribute their knowledge at their convenience, ensuring a repeatable and efficient process for capturing critical information before the experts left the organization.

If the ‘missing’ knowledge is not recorded anywhere… and the knowledge cannot be found, then:

Make sure to clearly define the knowledge gap and its impact on the AI solution as it supports the business. When it has substantial effects on the solution’s ability to provide critical responses, then it will be up to subject matter experts within the organization to devise a strategy to create, acquire, and institutionalize the missing knowledge. 

How this looks in practice:

A leading construction firm needed to develop its knowledge and practices to keep up with contracts won for a new type of project. Its inability to quickly scale institutional knowledge jeopardized its capacity to deliver, putting a significant amount of revenue at risk. EK guided the organization in establishing CoPs to encourage the development of repeatable processes, new guidance, and reusable artifacts. In subsequent steps, the firm could extract knowledge from conversations happening within the community and ingest it into AI solutions, along with novel knowledge assets the community developed.

 

Conclusion

Identifying and closing knowledge gaps is no small feat, and predicting knowledge needs was nearly impossible before the advent of AI. Now, AI acts as both a driver and a solution, helping modern enterprises maintain their competitive edge.

Whether your critical knowledge is in people’s heads or buried in documents, Enterprise Knowledge can help. We’ll show you how to capture, connect, and leverage your company’s knowledge assets to their full potential to solve complex problems and obtain the results you expect out of your AI investments. Contact us today to learn how to bridge your knowledge gaps with AI.

Knowledge Cast – Daan Hannessen, Global Head of Knowledge Management at Shell – Semantic Layer Symposium Series
https://enterprise-knowledge.com/knowledge-cast-daan-hannessen-global-head-of-knowledge-management-at-shell/
Mon, 29 Sep 2025 07:00:25 +0000


Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Daan Hannessen, Global Head of Knowledge Management at Shell. He has over 20 years of experience in Knowledge Management for large, knowledge-intensive organizations in Europe, Australia, and the USA, spanning continuous improvement programs, KM transformations, lessons learned solutions, digital workplaces, AI-driven expert bots, enterprise search, and much more.

In their conversation, Lulit and Daan discuss the importance of senior leadership support in ensuring the success of KM initiatives, emphasizing “speaking their language” as key to implementing KM and the semantic layer at a global scale. They also touch on how to measure the success of AI, when AI-generated content can be considered valuable insights, and why to invest in a semantic layer in the first place, as well as Daan’s talk at the upcoming Semantic Layer Symposium.

For more information on the Semantic Layer Symposium, check it out here!

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

Knowledge Cast – Ben Clinch, Chief Data Officer & Partner at Ortecha – Semantic Layer Symposium Series
https://enterprise-knowledge.com/knowledge-cast-ben-clinch-chief-data-officer-partner-at-ortecha/
Thu, 11 Sep 2025 13:43:01 +0000


Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Ben Clinch, Chief Data Officer and Partner at Ortecha and Regional Lead Trainer for the EDM Council (EMEA/India). He is a sought-after public speaker and thought leader in data and AI, having held numerous senior roles in architecture and business at some of the world’s largest financial and telecommunications institutions over his 25-year career, and he has a passion for helping organizations thrive with their data.

In their conversation, Lulit and Ben discuss Ben’s personal journey into the world of semantics, their data architecture must-haves in a perfect world, and how to calculate the value of data and knowledge initiatives. They also preview Ben’s talk at the Semantic Layer Symposium in Copenhagen this year, which will cover the combination of semantics and LLMs and neurosymbolic AI. 

For more information on the Semantic Layer Symposium, check it out here!

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

How KM Leverages Semantics for AI Success
https://enterprise-knowledge.com/how-km-leverages-semantics-for-ai-success/
Wed, 03 Sep 2025 19:08:31 +0000

This infographic highlights how KM incorporates semantic technologies and practices across scenarios to enhance AI capabilities.

To get the most out of Large Language Model (LLM)-driven AI solutions, you need to provide them with structured, context-rich knowledge that is unique to your organization. Without purposeful access to proprietary terminology, clearly articulated business logic, and consistent interpretation of enterprise-wide data, LLMs risk delivering incomplete or misleading insights. This infographic highlights how KM incorporates semantic technologies and practices across scenarios to enhance AI capabilities and when they are foundational, empowering your organization to strategically leverage semantics for more accurate, actionable outcomes while cultivating sound knowledge intelligence practices and investing in your enterprise's knowledge assets.

Use Case: Expert Elicitation - Semantics for AI Enhancement
Efficiently capture valuable knowledge and insights from your organization's experts about past experiences and lessons learned, especially when these insights have not yet been formally documented. By using ontologies to spot knowledge gaps and taxonomies to clarify terms, an LLM can capture and structure undocumented expertise, storing it in a knowledge graph for future reuse.
Example: Capturing a senior engineer's undocumented insights on troubleshooting past system failures to streamline future maintenance.

Use Case: Discovery & Extraction - Semantics for AI Enhancement
Quickly locate key insights or important details within a large collection of documents and data, synthesizing them into meaningful, actionable summaries delivered directly back to the user. Ontologies ensure concepts are recognized and linked consistently across wording and format, enabling insights to be connected, reused, and verified outside an LLM's opaque reasoning process.
Example: Scanning thousands of supplier agreements to locate variations of key contract clauses, despite inconsistent wording, then compiling a cross-referenced summary for auditors to accelerate compliance verification and identify high-risk deviations.

Use Case: Context Aggregation - Semantics for AI Foundations
Gather fragmented information from diverse sources and combine it into a unified, comprehensive view of your business processes or critical concepts, enabling deeper analysis, more informed decisions, and previously unattainable insights. Knowledge graphs unify fragmented information from multiple sources into a persistent, coherent model that both humans and systems can navigate. Ontologies make relationships explicit, enabling the inference of new knowledge that reveals connections and patterns not visible in isolated data.
Example: Integrating financial, operational, HR, and customer support data to predict resource needs and reveal links between staffing, service quality, and customer retention for smarter planning.

Use Case: Cleanup and Optimization - Semantics for AI Enhancement
Analyze and optimize your organization's knowledge base by detecting redundant, outdated, or trivial (ROT) content, then recommend targeted actions or automatically archive and remove irrelevant material to keep information fresh, accurate, and valuable. Leverage taxonomies and ontologies to recognize conceptually related information even when expressed in different terms, formats, or contexts, allowing the AI to uncover hidden redundancies, spot emerging patterns, and make more precise recommendations than could be justified by keyword or RAG search alone.
Example: Automatically detecting and flagging outdated or duplicative policy documents, despite inconsistent titles or formats, across an entire intranet, streamlining reviews and ensuring only current, authoritative content remains accessible.

Use Case: Situated Insight - Semantics for AI Enhancement
Proactively deliver targeted answers and actionable suggestions uniquely aligned with each user's expressed preferences, behaviors, and needs, enabling swift, confident decision-making. Use taxonomies to standardize and reconcile data from diverse systems, and apply knowledge graphs to connect and contextualize a user's preferences, behaviors, and history, creating a unified, dynamic profile that drives precise, timely, and highly relevant recommendations.
Example: Instantly curating a personalized learning path (complete with recommended modules, mentors, and practice projects) based on an employee's recent performance trends, skill gaps, and long-term career goals, accelerating both individual growth and organizational capability.

Use Case: Context Mediation and Resolution - Semantics for AI Foundations
Bridge disparate contexts across people, processes, technologies, and more into a common, resolved, machine-readable understanding that preserves nuance while eliminating ambiguity. Semantics establish a shared, machine-readable understanding that bridges differences in language, structure, and context across people, processes, and systems. Taxonomies unify terminology from diverse sources, while ontologies and knowledge graphs capture and clarify the nuanced relationships between concepts, eliminating ambiguity without losing critical detail.
Example: Reconciling varying medical terminologies, abbreviations, and coding systems from multiple healthcare providers into a single, consistent patient record, ensuring that every clinician sees the same unambiguous history and enabling faster diagnosis, safer treatment decisions, and more effective care coordination.

To learn more about our work with AI and semantics and how it can help your organization make the most out of these investments, don't hesitate to reach out at: https://enterprise-knowledge.com/contact-us/

The Semantic Exchange Webinar Series Recap
https://enterprise-knowledge.com/the-semantic-exchange-webinar-series-recap/
Mon, 11 Aug 2025 15:18:30 +0000

Promotional graphic for The Semantic Exchange webinar by Enterprise Knowledge, featuring six semantic experts as moderators and presenters.

Enterprise Knowledge recently completed the first round of our new webinar series, The Semantic Exchange, which offers participants an opportunity to engage in Q&A with EK’s Semantic Design thought leaders. Sessions covered topics such as the value of enterprise semantic architecture, best practices for generating buy-in for semantics across an organization, and techniques for semantic solution implementation. The series sparked thoughtful discussion on how to understand and address real-world semantic challenges.

To view any of the recorded sessions and their corresponding published work, use the links below:

 

Recording | Published Work | Author & Presenter
Why Your Taxonomy Needs SKOS | Infographic | Bonnie Griffin
What is Semantics and Why Does it Matter? | Blog | Ben Kass
Metadata Within the Semantic Layer | Blog | Kathleen Gollner
A Semantic Layer to Enable Risk Management | Case Study | Yumiko Saito
Humanitarian Foundation SemanticRAG POC | Case Study | James Egan

If you are interested in bringing semantics and data modeling solutions to your organization, contact us here!

Content Management Strategy for a Capital Producer
https://enterprise-knowledge.com/content-management-strategy-for-a-capital-producer/
Wed, 16 Jul 2025 14:55:03 +0000

 

The Challenge

A capital producer understood the complexity of navigating international regulatory environments. Operating across nations in numerous fields of specialization, the organization had to uphold diverse and disparate ordinances, many of which have changed over time. Dedicated to providing high-quality services to their customers, the organization sought a solution that would help them better navigate revisions to compliance requirements and ensure adherence to rigorous standards of excellence.

Like many companies, the capital producer relied on manual processes to identify, track, and communicate regulations across the organization. Unfortunately, these manual approaches exposed the organization to human error, a possibility that threatened its ability to remain compliant. Since regulatory adherence depends on numerous team members throughout the organization, there were various potential points of failure, many of which were unknown. Staff had to personally determine how to best share sensitive information between groups, which created inefficiencies and risked information exposure. Even when these processes were performed correctly, they frequently included periods of redundancy in which staff members duplicated each other’s efforts, diminishing organizational productivity.

The Solution

To facilitate the organization’s compliance with regulations and standards, EK provided the capital producer with a comprehensive content management strategy rooted in knowledge management (KM) best practices. EK’s recommendations were informed by 11 separate interviews, four system demos, and 28 business unit validation workshop participants. EK spoke to executive stakeholders, content owners, system owners, and process performers throughout the organization. Based on these conversations and demonstrations of the organization’s current processes, EK developed a content strategy at the intersection of content and knowledge management. Recommendations for the organization were divided into five separate workstreams, based upon EK’s proprietary content strategy for KM evaluation framework, and broken down into the strategic and business impact of each item.

Leveraging EK’s expertise in semantics and data-driven knowledge management, EK delivered a content strategy with an emphasis on the structure, metadata, and management requirements for key organizational content types. For example, contracts exist currently as unstructured content, and in this organization’s use case can continue to be managed as such. Formulas for certain products, however, require robust security and personalization to enable regulatory compliance across multiple countries. These complex requirements necessitated a recommendation for structured formula content managed in a Product Information Management System (PIMS). 

Additionally, EK created a technology solution approach that not only identified existing pain points in the organization but also mapped each challenge to a corresponding technology solution. EK prioritized technical approaches that could easily work within the capital producer’s current technical ecosystem, minimizing the cost of integrating these solutions. At the start of the engagement, the organization was leveraging SharePoint for all content management needs. EK’s technology recommendations included strategies to optimize the use of SharePoint for appropriate use cases, as well as recommendations for specialized systems for product lifecycle management and contract management.

Implementing technological and procedural changes within the capital producer will allow the organization to continue to grow globally while providing compliant and high-quality products for its consumers. EK’s proposed content management approach will enable staff to better create, protect, share, and utilize compliance content to ensure the seamless continuity of operations, establish secure intellectual property, and achieve operational efficiencies. 

The EK Difference

Our team worked closely with the organization’s stakeholders to produce a content management strategy that would help them achieve larger knowledge objectives. Establishing processes and avenues for information sharing will enable the organization to not only uphold international standards of compliance but also increase productivity over time by efficiently sharing information and preserving tacit knowledge.

This engagement operated within the intersection of content management and KM. EK leveraged its KM background to guide this content strategy approach and used KM best practices to conduct knowledge-gathering activities, including document review, stakeholder interviews, stakeholder workshops, and system demos. After reviewing this information, EK was able to use its proprietary current state and target state framework to conduct a content management analysis at the organization. 

EK additionally utilized an ontological data modeling approach to guide its advanced content management strategy. The capital producer was exclusively a document-based organization at the beginning of the engagement; with EK’s support, they identified a future-ready content strategy for prioritized content use cases. There are a variety of content management approaches that can be used to provide structure to digital materials. These methods can be viewed on a continuum from file-level management to semantically enriched component management. However, not all approaches are the right fit for every client. Our content strategy and operations experts were able to ascertain the right level of content management for various use cases at the organization and ultimately provide them with a detailed technical plan for how to implement the right content management strategy. 

Content Management Continuum

The Results

At the end of the engagement, EK provided the organization with a clear roadmap for the adoption of a transformational content management strategy. Stakeholders from over ten different business units aligned on an approach that addressed their various needs and pain points, as well as an understanding of the investment required to achieve the target state content strategy. 

EK provided the organization’s stakeholders with the roadmap for a long-term vision and the tools for a quick return on investment. This came in five key accelerators, allowing the organization to deploy strategies, frameworks, and management approaches tailored to the organization’s unique needs. Each accelerator included a description of the recommendation, a path to implement the task successfully, success indicators to track, and the corresponding pain points it addressed. 

By implementing a more robust content management strategy, the capital producer will maintain compliance with regulations and standards, ensure content is secure and only accessible to those who need it, and improve overall efficiency of content operations. 


Unified Entitlements: The Hidden Vulnerability in Modern Enterprises
https://enterprise-knowledge.com/unified-entitlements-the-hidden-vulnerability-in-modern-enterprises/
Thu, 10 Jul 2025 12:51:04 +0000

Maria, a finance analyst at a multinational corporation, needs quarterly revenue data for her report. She logs into her company’s data portal, runs a query against the company’s data lake, and unexpectedly retrieves highly confidential merger negotiations that should be restricted to the executive team. Meanwhile, across the organization, Anthony, an ML engineer, deploys a recommendation model that accidentally incorporates customer PII data due to misconfigured access controls in Databricks. Both scenarios represent the same fundamental problem: fragmented entitlement management across diverse data platforms.

These aren’t hypothetical situations. They happen daily across enterprises that have invested millions in data infrastructure but neglected the crucial layer that governs who can access what data, when, and how. As organizations expand their data ecosystems across multiple clouds, databases, and analytics platforms, the challenge of maintaining consistent access control becomes exponentially more complex. This review provides a technical follow-up to the concepts outlined in Why Your Organization Needs Unified Entitlements and details the architecture, implementation strategies, and integration patterns needed to build a robust Unified Entitlements System (UES) for enterprise environments. I will address the complexities of translating centralized policies to platform-specific controls, resolving user identities across systems, and maintaining consistent governance across cloud platforms.

 

The Entitlements Dilemma: A Perfect Storm

Today’s enterprises face a perfect storm in data access governance. The migration to cloud-native architectures has created a sprawling landscape of data sources, each with its own security model. A typical enterprise might store customer data in Snowflake, operational metrics in PostgreSQL, transaction records in MongoDB, and unstructured content in AWS S3—all while running analytics in Databricks and feeding AI systems through various pipelines.

This diversity creates several critical challenges that collectively undermine data governance:

Inconsistent Policy Enforcement: When a new employee joins the marketing team, their access might be correctly configured in Snowflake but misaligned in AWS Lake Formation due to differences in how these platforms structure roles and permissions. Snowflake’s role-based access control model bears little resemblance to AWS Lake Formation’s permission structure, making uniform governance nearly impossible without a unifying layer.

Operational Friction: Jennifer, a data governance officer at a financial services firm, spends over 25 hours a week manually reconciling access controls across platforms. Her team must update dozens of platform-specific policies when regulatory requirements change, leading to weeks of delay before new controls take effect.

Compliance Blind Spots: Regulations like GDPR, HIPAA, and CCPA mandate strict data access controls, but applying them uniformly across diverse platforms requires expertise in multiple security frameworks. This creates dangerous compliance gaps as platform-specific nuances escape notice during audits.

Identity Fragmentation: Most enterprises operate with multiple identity providers—perhaps Azure AD for corporate applications, AWS IAM for cloud resources, and Okta for customer-facing services. Without proper identity resolution, a user might exist as three separate entities with misaligned permissions.

 

Beyond Simple Access Control: The Semantics Challenge

The complexity doesn’t end with technical implementation. Modern AI workflows rely on a semantic layer that gives meaning to data. Entitlement systems must understand these semantics to avoid breaking critical data relationships.

Consider a healthcare system where patient records are split across systems: demographics in one database, medical history in another, and insurance details in a third. A unified approach to managing entitlements should be developed to understand these semantic connections and ensure that when doctors query patient information, they receive a complete view according to their access rights rather than fragmented data that could lead to medical errors.

 

The Unified Entitlements Solution

A UES addresses these challenges by creating a centralized policy management system that translates high-level business rules into platform-specific controls. Think of it as a universal translator for security policies—allowing governance teams to define rules once and apply them everywhere.

How UES Transforms Entitlement Management

Let’s follow how a UES transforms the experience for both users and administrators:

For Maria, the Finance Analyst: When she logs in through corporate SSO, the UES immediately identifies her role, department, and project assignments. As she queries the data lake, the UES dynamically evaluates her request against centralized policies, translating them into AWS Lake Formation predicates and Snowflake secure views. When she exports data to Excel, column-level masking automatically obscures sensitive fields she shouldn’t see. All of this happens seamlessly without Maria even knowing the UES exists.

For the Data Governance Team: Instead of managing dozens of platform-specific security configurations, they define policies in business terms: “Finance team members can access aggregated revenue data but not customer PII” or “EU-based employees cannot access unmasked US customer data.” The UES handles the complex translation to platform-native controls, dramatically reducing administrative overhead.
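
A simplified illustration of this "define once, enforce everywhere" idea is sketched below: a business-level policy is expressed as data, and a central decision function evaluates requests before platform connectors translate the outcome into native controls (Snowflake secure views, Lake Formation predicates, and so on). The policy structure, role names, and data categories are hypothetical.

```python
# Sketch: a centralized, business-level entitlement policy evaluated in one place.
# Platform connectors would translate the decision into native masking rules,
# row filters, or view grants; all names here are illustrative.

POLICIES = [
    {
        "name": "finance-aggregated-revenue",
        "allow_roles": {"finance_analyst"},
        "allow_categories": {"revenue_aggregated"},
        "deny_categories": {"customer_pii", "merger_confidential"},
    },
]

def is_access_allowed(user_roles: set[str], data_category: str) -> bool:
    """Central decision point for a unified entitlements layer."""
    for policy in POLICIES:
        if user_roles & policy["allow_roles"]:
            if data_category in policy["deny_categories"]:
                return False
            if data_category in policy["allow_categories"]:
                return True
    return False  # default deny

# Example: Maria the finance analyst
print(is_access_allowed({"finance_analyst"}, "revenue_aggregated"))   # True
print(is_access_allowed({"finance_analyst"}, "merger_confidential"))  # False
```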

 

Conclusion: The New Foundation for Data Governance

As enterprises continue their data-driven transformation, a UES emerges as the essential foundation for effective governance. UES enables organizations to enforce consistent access rules across their entire data ecosystem by bridging the gap between high-level security policies and platform-specific controls.

The benefits extend beyond security and compliance. With a properly implemented UES, organizations can accelerate data democratization while remaining confident that appropriate guardrails are in place. They can adopt new data platforms more rapidly, knowing that existing governance policies will translate seamlessly. Most importantly, they can unlock the full value of their data assets without compromising on protection or compliance.

In a world where data is the lifeblood of business, unified entitlements isn’t just a security enhancement—it’s the key to unlocking the true potential of enterprise data.

 

Semantic Graphs in Action: Bridging LPG and RDF Frameworks
https://enterprise-knowledge.com/semantic-graphs-action-bridging-lpg-and-rdf-frameworks/
Tue, 08 Jul 2025 20:08:43 +0000


Enterprise Knowledge is pleased to introduce a new webinar titled, Semantic Graphs in Action: Bridging LPG and RDF Frameworks. This webinar will bring together four EK experts on graph technologies to explore the differences, complementary aspects, and best practices of implementing RDF and LPG approaches. The session will delve into common misconceptions, when to utilize each approach, real-world case studies, industry gaps, as well as future opportunities in the graph space.

The ideal audience for this webinar includes data architects, data scientists, and information management professionals hoping to better understand when an LPG, RDF, or combined approach is best for their organization. At the end of our discussion, webinar attendees will have the opportunity to ask our panelists additional follow-up questions.

    This webinar will take place on Thursday, August 21st, from 1:00 to 2:00 PM EDT. Can't make it? The webinar will also be recorded and shared with registered attendees. Register for the webinar here!


    What is a Knowledge Asset? https://enterprise-knowledge.com/what-is-a-knowledge-asset/ Mon, 16 Jun 2025 15:15:40 +0000 https://enterprise-knowledge.com/?p=24635 Over the course of Enterprise Knowledge’s history, we have been in the business of connecting an organization’s information and data, ensuring it is findable and discoverable, and enriching it to be more useful to both humans and AI. Though use … Continue reading


    Over the course of Enterprise Knowledge’s history, we have been in the business of connecting an organization’s information and data, ensuring it is findable and discoverable, and enriching it to be more useful to both humans and AI. Though use cases, scope, and scale of engagements—and certainly, the associated technologies—have all changed, that core mission has not.

    As part of our work, we’ve endeavored to help our clients understand the expansive nature of their knowledge, content, and data. The complete range of these materials can be considered based on several different spectra. They can range from tacit to explicit, knowledge to information, structured to unstructured, digital to analog, internal to external, and originated to generated. Before we go deeper into the definition of knowledge assets, let’s first explore each of these variables to understand how vast the full collection of knowledge assets can be for an organization.

    • Tacit and Explicit – Tacit content is held in people’s heads. It is inferred instead of explicitly encoded in systems, and does not exist in a shareable or repeatable format. Explicit content is that which has been captured in an independent form, typically as a digital file or entry. Historically, organizations have been focused on converting tacit knowledge to explicit so that the organization could better maintain and reuse it. However, we’ll explain below how the complete definition of a knowledge asset shifts that thinking somewhat.
    • Knowledge and Information – Knowledge is the expertise and experience people acquire, making it extremely valuable but hard to convert from tacit to explicit. Information is just facts, lacking expert context. Organizations have both, and documents often mix them.
    • Structured and Unstructured – Structured information is machine-readable and system-friendly, while unstructured information is human-readable and context-rich. Structured data, like database entries, is easy for systems to process but hard for humans to understand without tools. Unstructured data, designed for humans, is easier to grasp but has historically been challenging for machines to process.
    • Digital to Analog – Digital information exists in an electronic format, whereas analog information exists in a physical format. Many global organizations are sitting on mountains of knowledge and information that isn’t accessible (or perhaps even known) to most people in the organization. Making things more complex, there’s also formerly analog information, the many old documents that have been digitized but exist in a middle state where they’re not particularly machine-readable, but are electronic.
    • Internal to External – Internal content targets employees, while external content targets customers, partners, or the public, with differing tones and styles, and often greater governance and overall rigor for external content. Both types should align, but are treated differently. You can also consider the content created by your organization versus external content purchased, acquired, or accessed from external sources. From this perspective, you have much greater control over your organization’s own content than that which was created or is owned externally.
    • Originated and Generated – Originated content already exists within the organization as discrete items within a repository or repositories, authored by humans. Explicit content, for example, is originated. It was created by a person or people, it is managed, and identified as a unique item. Any file you’ve created before the AI era falls into this category. With Generative AI becoming pervasive, however, we must also consider generated information, derived from AI. These generated assets (synthetic assets) are automatically created based on an organization’s existing (originated) information, forming new content that may not possess the same level of rigor or governance.

    If we were to go no further than the above, most organizations would already be dealing with petabytes of information and tons of paper encompassing years and years. However, by thinking about information based on its state (i.e. structured or unstructured, digital or analog, etc), or by its use (i.e. internal or external), organizations are creating artificial barriers and silos to knowledge, as well as duplicating or triplicating work that should be done at the enterprise level. Unfortunately, for most organizations, the data management group defines and oversees data governance for their data, while the content management group defines and oversees content governance for their content. This goes beyond inefficiency or redundancy, creating cost and confusion for the organization and misaligning how information is managed, shared, and evolved. Addressing this issue, in itself, is already a worthy challenge, but it doesn’t yet fully define a knowledge asset or how thinking in terms of knowledge assets can deliver new value and insights to an organization.

    If you go beyond traditional digital content and begin to consider how people actually want to obtain answers, as well as how artificial intelligence solutions work, we can begin to think of the knowledge an organization possesses more broadly. Rather than just looking at digital content, we can recognize all the other places, things, and people that can act as resources for an organization. For instance, people and the knowledge and information they possess are, in fact, an asset themselves. The field of KM has long been focused on extracting that knowledge, with at best mixed results. However, in the modern ecosystem of KM, semantics, and AI, we can instead consider people themselves as the asset that can be connected to the network. We may still choose to capture their knowledge in a digital form, but we can also add them to the network, creating avenues for people to find them, learn from them, and collaborate with them while mapping them to other assets.

    In the same way, products, equipment, processes, and facilities can all be considered knowledge assets. By considering all of your organizational components not as “things,” but as containers of knowledge, you move from a world of silos to a connected and contextualized network that is traversable by a human and understandable by a machine. We coined the term knowledge assets to express this concept. The key to a knowledge asset is that it can be connected with other knowledge assets via metadata, meaning it can be put into the organization’s context. Anything that can hold metadata and be connected to other knowledge assets can be a knowledge asset.
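    As a rough sketch of what “connected via metadata” can look like in practice, the snippet below uses rdflib (a common Python RDF library) to treat a document, a person, and a process as assets in one graph. The namespace, classes, and properties (ek:describes, ek:expertIn) are invented for illustration rather than drawn from any particular ontology.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EK = Namespace("https://example.org/ka/")  # hypothetical namespace for illustration

g = Graph()
g.bind("ek", EK)

# A document, a person, and a process, each modeled as a knowledge asset
g.add((EK.expense_guide, RDF.type, EK.Document))
g.add((EK.expense_guide, RDFS.label, Literal("Expense Policy Guide")))
g.add((EK.maria, RDF.type, EK.Person))
g.add((EK.maria, RDFS.label, Literal("Maria, Finance Analyst")))
g.add((EK.expense_process, RDF.type, EK.Process))
g.add((EK.expense_process, RDFS.label, Literal("Expense Approval Process")))

# The metadata connections are what turn isolated items into a traversable network
g.add((EK.expense_guide, EK.describes, EK.expense_process))
g.add((EK.maria, EK.expertIn, EK.expense_process))

print(g.serialize(format="turtle"))
```

    Because the person sits in the same graph as the document and the process, a search on the process can surface all three, without anyone having first extracted the expert's knowledge into a file.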

    Another category of knowledge assets quickly becoming critical for mature organizations is the components of AI orchestration. As organizations build increasingly complex systems of agents, models, tools, and workflows, the logic that governs how these components interact becomes a form of operational knowledge in its own right. These orchestration components encode decisions, institutional context, and domain expertise, meaning they are worthy of being treated as first-class knowledge assets. To fully harness the value of AI, orchestration components should be clearly defined, governed, and meaningfully connected to the broader knowledge ecosystem.
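    One hedged illustration of what that could look like: describing an orchestration component with the same kind of metadata used for documents and people, so it can be cataloged and connected alongside them. The field names and identifiers below are invented for this sketch and do not reflect any specific agent framework or catalog product.

```python
# Hypothetical registration of an orchestration component as a knowledge asset.
invoice_agent = {
    "id": "agent:invoice-triage",
    "type": "OrchestrationComponent",
    "description": "Routes incoming invoices to the right approval workflow",
    "owner": "finance-operations",
    "uses_tools": ["ocr-extractor", "erp-lookup"],
    "encodes_process": "process:expense-approval",  # link to a business process asset
    "governed_by": "policy:finance_no_pii",         # link to a governance policy asset
    "last_reviewed": "2025-06-01",
}

# Because the agent carries metadata links, it can be searched, governed,
# and traversed just like any other knowledge asset in the network.
print(invoice_agent["encodes_process"], invoice_agent["governed_by"])
```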

    Put into practice, a mature organization could create a true web of knowledge assets to serve virtually any use case. Rather than a simple search, a user might instead query their system to learn about a process. Instead of getting a link to the process documentation, they get a view of options, allowing them to read the documentation, speak to an expert on the topic, attend training on the process, join a community of practice working on it, or visit an application supporting it. 

    A new joiner to your organization might be given a task to complete. Currently, they may hunt around your network for guidance or wait for a message back from their mentor. If they instead had a traversable network of all your organization’s knowledge assets, they could begin with a simple search on the topic of the task and find a past deliverable from a related task. That deliverable would lead them to its author, from whom they could seek guidance, or to an internal meetup of professionals with recognized expertise in that task.
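    As an illustrative sketch of that traversal (again using rdflib, with invented names and properties), the query below shows how a new joiner’s simple topic search could surface both a past deliverable and the colleague who authored it.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDFS

EK = Namespace("https://example.org/ka/")  # hypothetical namespace for illustration
g = Graph()

# A past deliverable, its author, and the topic it covers, linked by metadata
g.add((EK.q3_filing_memo, RDFS.label, Literal("Q3 Filing Memo")))
g.add((EK.q3_filing_memo, EK.aboutTopic, EK.quarterly_filing))
g.add((EK.q3_filing_memo, EK.authoredBy, EK.sam))
g.add((EK.sam, RDFS.label, Literal("Sam, Filing SME")))

# "What prior work exists on quarterly filings, and who can I ask about it?"
results = g.query("""
    PREFIX ek: <https://example.org/ka/>
    SELECT ?deliverable ?expert WHERE {
        ?deliverable ek:aboutTopic ek:quarterly_filing ;
                     ek:authoredBy ?expert .
    }
""")
for row in results:
    print(row.deliverable, row.expert)
```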

    If we break these silos down, add context and meaning via metadata, and begin to treat our knowledge assets holistically, we’re also creating the necessary foundations for any AI solutions to better understand our enterprise and deliver complete answers. This means that we’re delivering better answers for our organization immediately, while also enabling our organization to leverage AI capabilities faster, more consistently, and more reliably than others.

    The idea of knowledge assets will be a shift both in mindset and strategies, with impacts potentially rippling deeply through your org chart, technologies, and culture. However, the organizations that embrace this concept will achieve an enterprise most closely resembling how humans naturally think and learn and how AI is best equipped to deliver.

    If you’re ready to take the next big step in organizational knowledge and maturity, contact us, and we will bring all of our knowledge assets to bear in support. 

