Governance Articles - Enterprise Knowledge http://enterprise-knowledge.com/tag/governance/ Mon, 17 Nov 2025 22:18:35 +0000

How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready https://enterprise-knowledge.com/how-to-fill-your-knowledge-gaps-to-ensure-youre-ai-ready/ Mon, 29 Sep 2025 19:14:44 +0000

The post How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready appeared first on Enterprise Knowledge.

“If only our company knew what our company knows” has been a longstanding lament among leaders: without that awareness, organizations cannot mobilize their knowledge and capabilities toward their strategic priorities. Similarly, being able to locate knowledge gaps in the organization, whether we were initially aware of them (known unknowns) or initially unaware of them (unknown unknowns), represents an opportunity to gain new capabilities, mitigate risks, and navigate the ever-accelerating business landscape more nimbly.

AI implementations are already showing signs of knowledge gaps: hallucinations, wrong answers, incomplete answers, and even “unanswerable” questions. There are multiple causes for AI hallucinations, but an important one is not having the right knowledge to answer a question in the first place. While LLMs may have been trained on massive amounts of data, that doesn’t mean they know your business, your people, or your customers. This is a common problem when organizations make the leap from “Public AI” tools like ChatGPT, Gemini, or Copilot to attempting their own organizational AI solutions. LLMs and agentic solutions need knowledge—your organization’s unique knowledge—to produce results that are unique to your and your customers’ needs, and to help employees navigate and solve challenges they encounter in their day-to-day work.

In a recent article, EK outlined key strategies for preparing content and data for AI. This blog post builds on that foundation by providing a step-by-step process for identifying and closing knowledge gaps, ensuring a more robust AI implementation.

 

The Importance of Bridging Knowledge Gaps for AI Readiness

EK lays out a six-step path to getting your content, data, and other knowledge assets AI-ready, yielding assets that are correct, complete, consistent, contextual, and compliant. The diagram below provides an overview of these six steps:

The six steps to AI readiness: 1) Define knowledge assets; 2) Conduct cleanup; 3) Fill knowledge gaps (we are here); 4) Enrich with context; 5) Add structure; 6) Protect the knowledge assets.

Identifying and filling knowledge gaps, the third step of EK’s path towards AI readiness, is crucial in ensuring that AI solutions have optimized inputs. 

Prior to filling gaps, an organization will have defined its critical knowledge assets and conducted a content cleanup. A content cleanup not only ensures the correctness and reliability of the knowledge assets, but also reveals the specific topics, concepts, or capabilities that the organization cannot currently supply to AI solutions as inputs.

This scenario presupposes that the organization has a clear idea of the AI use cases and purposes for its knowledge assets. Given the organization knows the questions AI needs to answer, an assessment to identify the location and state of knowledge assets can be targeted based on the inputs required. This assessment would be followed by efforts to collect the identified knowledge and optimize it for AI solutions. 

A second, more complicated scenario arises when an organization hasn’t formulated a prioritized list of questions for AI to answer. The previously described approach, which relies on drawing up a traditional knowledge inventory, will face setbacks: it may prove difficult to scale, and it won’t always uncover the insights we need for AI readiness. Knowledge inventories may help us understand our known unknowns, but they will not be helpful in revealing our unknown unknowns.

 

Identifying the Gap

How can we identify something that is missing? At this juncture, organizations will need to leverage analytics, introduce semantics, and, if AI is already deployed in the organization, use it as a resource as well. There are different techniques to identify these gaps, depending on whether your organization has already deployed an AI solution or is ramping up for one. Available options include:

Before and After AI Deployment

Leveraging Analytics from Existing Systems

Monitoring and assessing different tools’ analytics is an established practice to understand user behavior. In this instance, EK applies these same methods to understand critical questions about the availability of knowledge assets. We are particularly interested in analytics that reveal answers to the following questions:

  • Where are our people giving up when navigating different sections of a tool or portal? 
  • What sort of queries return no results?
  • What queries are more likely to get abandoned? 
  • What sort of content gets poor reviews, and by whom?
  • What sort of material gets no engagement? What did the user do or search for before getting to it? 

These questions aim to identify instances of users trying, and failing, to get knowledge they need to do their work. Where appropriate, these questions can also be posed directly to users via surveys or focus groups to get a more rounded perspective. 
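As a simple illustration of how such analytics can surface gaps, the sketch below mines a search log for queries that repeatedly return no results. The event format (dicts with `query` and `result_count` fields) is a hypothetical stand-in for whatever your search platform actually exports:

```python
from collections import Counter

def find_gap_candidates(events, min_occurrences=3):
    """Return queries that repeatedly return zero results, ranked by frequency."""
    zero_hits = Counter(
        e["query"].strip().lower()
        for e in events
        if e["result_count"] == 0
    )
    return [(q, n) for q, n in zero_hits.most_common() if n >= min_occurrences]

# Hypothetical search-log events
events = [
    {"query": "parental leave policy", "result_count": 0},
    {"query": "parental leave policy", "result_count": 0},
    {"query": "Parental Leave Policy", "result_count": 0},
    {"query": "travel reimbursement", "result_count": 12},
]
print(find_gap_candidates(events))  # [('parental leave policy', 3)]
```

Queries that recur with zero results are strong candidates for missing knowledge assets; the normalization step matters, since the same question often arrives in several surface forms.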

Semantics

Semantics involve modeling an organization’s knowledge landscape with taxonomies and ontologies. When taxonomies and ontologies have been properly designed, updated, and consistently applied to knowledge, they are invaluable as part of wider knowledge mapping efforts. In particular, semantic models can be used as an exemplar of what should be there, and can then be compared with what is actually present, thus revealing what is missing.

We recently worked with a professional association within the medical field, helping them define a semantic model for their expansive amount of content, and then defining an automated approach to tagging these knowledge assets. As part of the design process, EK taxonomists interviewed experts across all of the association’s organizational functional teams to define the terms that should be present in the organization’s knowledge assets. After the first few rounds of auto-tagging, we examined the taxonomy’s coverage, and found that a significant fraction of the terms in the taxonomy went unused. We validated our findings with our clients’ experts, and, to their surprise, our engagement revealed an imbalance of knowledge asset production: while some topics were covered by their content, others were entirely lacking. 

Valid taxonomy terms or ontology concepts for which few to no knowledge assets exist reveal a knowledge gap where AI is likely to struggle.
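The coverage check described above can be approximated in a few lines: compare the taxonomy's terms against the tags actually applied to assets, and any term with no assets behind it marks a candidate gap. The term and asset structures here are illustrative:

```python
def unused_terms(taxonomy_terms, tagged_assets):
    """Return taxonomy terms that no asset is tagged with (coverage gaps)."""
    used = {tag for asset in tagged_assets for tag in asset["tags"]}
    return sorted(set(taxonomy_terms) - used)

# Illustrative taxonomy and auto-tagged assets
taxonomy = ["oncology", "cardiology", "pediatrics", "telehealth"]
assets = [
    {"id": 1, "tags": ["oncology"]},
    {"id": 2, "tags": ["oncology", "cardiology"]},
]
print(unused_terms(taxonomy, assets))  # ['pediatrics', 'telehealth']
```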

After AI Deployment

User Engagement & Feedback

To ensure a solution can scale, evolve, and remain effective over time, it is important to establish formal feedback mechanisms for users to engage with system owners and governance bodies on an ongoing basis. Ideally, users should have a frictionless way to report an unsatisfactory answer immediately after they receive it, whether it is because the answer is incomplete or just plain wrong. A thumbs-up or thumbs-down icon has traditionally been used to solicit this kind of feedback, but organizations should also consider dedicated chat channels, conversations within forums, or other approaches for communicating feedback to which their users are accustomed.

AI Design and Governance 

Out-of-the-box, pre-trained language models are designed to prioritize providing a fluid response, often leading them to confidently generate answers even when their underlying knowledge is uncertain or incomplete. This core behavior increases the risk of delivering wrong information to users. However, this flaw can be preempted by thoughtful design in enterprise AI solutions: the key is to transform them from a simple answer generator into a sophisticated instrument that can also detect knowledge gaps. Enterprise AI solutions can be engineered to proactively identify questions which they do not have adequate information to answer and immediately flag these requests. This approach effectively creates a mandate for AI governance bodies to capture the needed knowledge. 
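A minimal sketch of this design pattern follows: if the best retrieval score for a question falls below a threshold, the solution declines to answer and logs the question as a knowledge gap for governance follow-up. The scores, threshold, and logging hook are all assumptions for illustration:

```python
def answer_or_flag(question, retrieved, min_score=0.55, gap_log=None):
    """Answer only when retrieval is strong enough; otherwise record a gap."""
    best = max((doc["score"] for doc in retrieved), default=0.0)
    if best < min_score:
        if gap_log is not None:
            gap_log.append(question)  # surfaced to the governance body
        return "I don't have enough information to answer this reliably."
    top = max(retrieved, key=lambda d: d["score"])
    return f"Answering from source: {top['id']}"

gaps = []
print(answer_or_flag("What is our incident escalation path?",
                     [{"id": "doc-7", "score": 0.31}], gap_log=gaps))
print(gaps)  # ['What is our incident escalation path?']
```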

AI can move beyond just alerting the relevant teams about missing knowledge. As we will soon discuss, AI holds additional capabilities to close knowledge gaps by inferring new insights from disparate, already-known information, and connecting users directly with relevant human experts. This allows enterprise AI to not only identify knowledge voids, but also begin the process of bridging them.

 

Closing the Gap

It is important, at this point, to make the distinction between knowledge that is truly missing from the organization and knowledge that is simply unavailable to the organization’s AI solution. The approach to close the knowledge gap will hinge on this key distinction. 

 

If the ‘missing’ knowledge is documented or recorded somewhere… but it is not in a format that AI can use, then:

Transform and migrate the present knowledge asset into a format that AI can more readily ingest. 

How this looks in practice:

A professional services firm had a database of meeting recordings meant for knowledge-sharing and disseminating lessons learned. The firm determined that there was a lot of knowledge “in the rough” that AI could incorporate into existing policies and procedures, but this was impossible while the content remained in video format. EK engineers programmatically transcribed the videos, and then transformed the text into a machine-readable format. To make it truly AI-ready, we leveraged Natural Language Processing (NLP) and Named Entity Recognition (NER) techniques to contextualize the new knowledge assets by associating them with other concepts across the organization.
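As a greatly simplified illustration of that contextualization step, the sketch below matches a small gazetteer of known organizational concepts against transcript text. In practice, a full NLP/NER pipeline (e.g., a library such as spaCy) would replace this dictionary lookup, and the concepts and aliases shown are invented:

```python
import re

def tag_transcript(text, gazetteer):
    """Return the known organizational concepts mentioned in a transcript."""
    found = set()
    for concept, aliases in gazetteer.items():
        for alias in aliases:
            if re.search(r"\b" + re.escape(alias) + r"\b", text, re.IGNORECASE):
                found.add(concept)
    return sorted(found)

# Invented concepts and aliases standing in for an organizational vocabulary
gazetteer = {
    "Incident Response": ["incident response", "IR runbook"],
    "Client Onboarding": ["client onboarding", "client intake"],
}
transcript = "In this session we walk through the IR runbook and the client intake steps."
print(tag_transcript(transcript, gazetteer))
# ['Client Onboarding', 'Incident Response']
```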

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in private spaces like email or closed forums, then:

Establish workflows and guidelines to promote, elevate, and institutionalize knowledge that had been previously informal.

How this looks in practice:

A government agency established online Communities of Practice (CoPs) to transfer and disseminate critical knowledge on key subject areas. Community members shared emerging practices and jointly solved problems. Community managers were able to ‘graduate’ informal conversations and documents into formal agency resources that lived within a designated repository, fully tagged, and actively managed. These validated and enhanced knowledge assets became more valuable and reliable for AI solutions to ingest.

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in different fragments across disjointed repositories, then: 

Unify the disparate fragments of knowledge by designing and applying a semantic model to associate and contextualize them. 

How this looks in practice:

A Sovereign Wealth Fund (SWF) collected a significant amount of knowledge about its investments, business partners, markets, and people, but kept this information fragmented and scattered across multiple repositories and databases. EK designed a semantic layer (composed of a taxonomy, ontology, and a knowledge graph) to act as a ‘single view of truth’. EK helped the organization define its key knowledge assets, like investments, relationships, and people, and wove together data points, documents, and other digital resources to provide a 360-degree view of each of them. We furthermore established an entitlements framework to ensure that every attribute of every entity could be adequately protected and surfaced only to the right end-user. This single view of truth became a foundational element in the organization’s path to AI deployment—it now has complete, trusted, and protected data that can be retrieved, processed, and surfaced to the user as part of solution responses.

If the ‘missing’ knowledge is not recorded anywhere… but the company’s experts hold this knowledge with them, then: 

Choose the appropriate techniques to elicit knowledge from experts during high-value moments of knowledge capture. It is important to note that we can begin incorporating agentic solutions to help the organization capture institutional knowledge, especially when agents can know or infer expertise held by the organization’s people. 

How this looks in practice:

Following a critical system failure, a large financial institution recognized an urgent need to capture the institutional knowledge held by its retiring senior experts. To address this challenge, they partnered with EK, who developed an AI-powered agent to conduct asynchronous interviews. This agent was designed to collect and synthesize knowledge from departing experts and managers by opening a chat with each individual and asking questions until the defined success criteria were met. This method allowed interviewees to contribute their knowledge at their convenience, ensuring a repeatable and efficient process for capturing critical information before the experts left the organization.

If the ‘missing’ knowledge is not recorded anywhere… and the knowledge cannot be found, then:

Make sure to clearly define the knowledge gap and its impact on the AI solution as it supports the business. When it has substantial effects on the solution’s ability to provide critical responses, then it will be up to subject matter experts within the organization to devise a strategy to create, acquire, and institutionalize the missing knowledge. 

How this looks in practice:

A leading construction firm needed to develop its knowledge and practices to be able to keep up with contracts won for a new type of project. Its inability to quickly scale institutional knowledge jeopardized its capacity to deliver, putting a significant amount of revenue at risk. EK guided the organization in establishing CoPs to encourage the development of repeatable processes, new guidance, and reusable artifacts. In subsequent steps, the firm could extract knowledge from conversations happening within the community and ingest it into AI solutions, along with novel knowledge assets the community developed.

 

Conclusion

Identifying and closing knowledge gaps is no small feat, and predicting knowledge needs was nearly impossible before the advent of AI. Now, AI acts as both a driver and a solution, helping modern enterprises maintain their competitive edge.

Whether your critical knowledge is in people’s heads or buried in documents, Enterprise Knowledge can help. We’ll show you how to capture, connect, and leverage your company’s knowledge assets to their full potential to solve complex problems and obtain the results you expect out of your AI investments. Contact us today to learn how to bridge your knowledge gaps with AI.


Top Ways to Get Your Content and Data Ready for AI https://enterprise-knowledge.com/top-ways-to-get-your-content-and-data-ready-for-ai/ Mon, 15 Sep 2025 19:17:48 +0000


As artificial intelligence has quickly moved from science fiction, to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s new AI solutions at the “right” content and data is critical to AI success and adoption, and failing to do so can quickly derail your AI initiatives.

Though the world is enthralled with the myriad public AI solutions, many organizations struggle to make the leap to reliable AI within their own organizations. A recent MIT report, “The GenAI Divide,” reveals a concerning truth: despite significant investments in AI, 95% of organizations are not seeing any benefits from their AI investments.

One of the core impediments to achieving AI within your own organization is poor-quality content and data. Without the proper foundation of high-quality content and data, any AI solution will be rife with ‘hallucinations’ and errors. This will expose organizations to unacceptable risks, as AI tools may deliver incorrect or outdated information, leading to dangerous and costly outcomes. This is also why tools that perform well as demos fail to make the jump to production. Even the most advanced AI won’t deliver acceptable results if an organization has not prepared its content and data.

This blog outlines seven top ways to ensure your content and data are AI-ready. With the right preparation and investment, your organization can successfully implement the latest AI technologies and deliver trustworthy, complete results.

1) Understand What You Mean by “Content” and/or “Data” (Knowledge Asset Definition)

While it seems obvious, the first step to ensuring your content and data are AI-ready is to clearly define what “content” and “data” mean within your organization. Many organizations use these terms interchangeably, while others use one as a parent term of the other. This obviously leads to a great deal of confusion. 

Leveraging the traditional definitions, we define content as unstructured information (ranging from files and documents to blocks of intranet text), and data as structured information (namely the rows and columns in databases and other applications like Customer Relationship Management systems, People Management systems, and Product Information Management systems). You are wasting the potential of AI if you’re not seeking to apply your AI to both content and data, giving end users complete and comprehensive information. In fact, we encourage organizations to think even more broadly, going beyond just content and data to consider all the organizational assets that can be leveraged by AI.

We’ve coined the term knowledge assets to express this. Knowledge assets comprise all the information and expertise an organization can use to create value. This includes not only content and data, but also the expertise of employees, business processes, facilities, equipment, and products. This manner of thinking quickly breaks down artificial silos within organizations, getting you to consider your assets collectively, rather than by type. Moving forward in this article, we’ll use the term knowledge assets in lieu of content and data to reinforce this point.

Put simply and directly, each of the below steps to getting your content and data AI-ready should be considered from an enterprise perspective of knowledge assets, so rather than discretely developing content governance and data governance, you should define a comprehensive approach to knowledge asset governance. This approach will not only help you achieve AI-readiness, it will also help your organization to remove silos and redundancies in order to maximize enterprise efficiency and alignment.


2) Ensure Quality (Asset Cleanup)

We’ve found that most organizations are maintaining approximately 60-80% more information than they should, and in many cases, they may not even be aware of what they still have. That can mean up to four out of five knowledge assets are old, outdated, duplicate, or near-duplicate.

There are many costs to this over-retention even before considering AI, including the administrative burden of maintaining the excess (along with the cost and environmental impact of unnecessary server storage), and the usability and findability cost to the organization’s end users when they must wade through obsolete knowledge assets.

The AI cost becomes even higher for several reasons. First, AI typically “white labels” the knowledge assets it finds. If a human were to find an old and outdated policy, they may recognize the old corporate branding on it, or note the date from several years ago on it, but when AI leverages the information within that knowledge asset and resurfaces it, it looks new and the contextual clues are lost.

Next, we have to consider the old adage of “garbage in, garbage out.” Incorrect knowledge assets fed to an AI tool will result in incorrect results, also known as hallucinations. While prompt engineering can be used to try to avoid these conflicts and, potentially, even errors, the only surefire guarantee is to ensure the accuracy of the original knowledge assets, or at least the vast majority of them.

Many AI models also struggle with near-duplicate “knowledge assets,” unable to discern which version is trusted. Consider your organization’s version control issues, working documents, data modeled with different assumptions, and iterations of large deliverables and reports that are all currently stored. Knowledge assets may go through countless iterations, and most of the time, all of these versions are saved. When ingested by AI, multiple versions present potential confusion and conflict, especially when these versions didn’t simply build on each other but were edited to improve findings or recommendations. Each of these, in every case, is an opportunity for AI to fail your organization.
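One common way to detect such near-duplicates during cleanup is to compare word-shingle sets with Jaccard similarity, as sketched below. The shingle size and similarity threshold are illustrative starting points, not standards:

```python
def shingles(text, k=3):
    """Set of k-word shingles for similarity comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicates(docs, threshold=0.4):
    """Return pairs of document ids whose shingle similarity meets the threshold."""
    pairs, ids = [], list(docs)
    for i, d1 in enumerate(ids):
        for d2 in ids[i + 1:]:
            if jaccard(shingles(docs[d1]), shingles(docs[d2])) >= threshold:
                pairs.append((d1, d2))
    return pairs

docs = {
    "policy_v1": "employees must submit expense reports within thirty days of travel",
    "policy_v2": "employees must submit expense reports within sixty days of travel",
    "unrelated": "quarterly revenue grew across all regions this year",
}
print(near_duplicates(docs))  # [('policy_v1', 'policy_v2')]
```

Flagged pairs would then go to a human reviewer to decide which version is the trusted one before anything is handed to an AI solution.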

Finally, this would also be the point at which you consider restructuring your assets for improved readability (both by humans and machines). This could include formatting (to lower cognitive lift and improve consistency) from a human perspective. For both humans and AI, this could also mean adding text and tags to better describe images and other non-text-based elements. From an AI perspective, in longer and more complex assets, proximity and order can have a negative impact on precision, so this could include restructuring documents to make them more linear, chronological, or topically aligned. This is not necessary or even important for all types of assets, but remains an important consideration especially for text-based and longer types of assets.


3) Fill Gaps (Tacit Knowledge Capture)

The next step to ensure AI readiness is to identify your gaps. At this point, you should be looking at your AI use cases and considering the questions you want AI to answer. In many cases, your current repositories of knowledge assets will not have all of the information necessary to answer those questions completely, especially in a structured, machine-readable format. This presents a risk in itself, especially if the AI solution is unaware that it lacks the complete range of knowledge assets necessary and portrays incomplete or limited answers as definitive.

Filling gaps in knowledge assets is extremely difficult. The first step is to identify what is missing. To invoke another old adage, organizations have long worried they “don’t know what they don’t know,” meaning they lack the organizational maturity to identify gaps in their own knowledge. This becomes a major challenge when proactively seeking to arm an AI solution with all the knowledge assets necessary to deliver complete and accurate answers. The good news, however, is that the process of getting knowledge assets AI-ready helps to identify gaps. In the next two sections, we cover semantic design and tagging. These steps, among others, can identify where there appears to be missing knowledge assets. In addition, given the iterative nature of designing and deploying AI solutions, the inability of AI to answer a question can trigger gap filling, as we cover later. 

Of course, once you’ve identified the gaps, the real challenge begins, in that the organization must then generate new knowledge assets (or locate “hidden” assets) to fill those gaps. There are many techniques for this, ranging from tacit knowledge capture to content inventories, all of which collectively can help an organization move from AI to Knowledge Intelligence (KI).


4) Add Structure and Context (Semantic Components)

Once the knowledge assets have been cleansed and gaps have been filled, the next step in the process is to structure them so that they can be related to each other correctly, with the appropriate context and meaning. This requires the use of semantic components, specifically, taxonomies and ontologies. Taxonomies deliver meaning and structure, helping AI to understand queries from users, relate knowledge assets based on the relationships between the words and phrases used within them, and leverage context to properly interpret synonyms and other “close” terms. Taxonomies can also house glossaries that further define words and phrases that AI can leverage in the generation of results.

Though often confused or conflated with taxonomies, ontologies deliver a much more advanced type of knowledge organization, which is both complementary to taxonomies and unique. Ontologies focus on defining relationships between knowledge assets and the systems that house them, enabling AI to make inferences. For instance:

<Person> works at <Company>

<Zach Wahl> works at <Enterprise Knowledge>

<Company> is expert in <Topic>

<Enterprise Knowledge> is expert in <AI Readiness>

From this, a simple inference based on structured logic can be made, which is that the person who works at the company is an expert in the topic: Zach Wahl is an expert in AI Readiness. More detailed ontologies can quickly fuel more complex inferences, allowing an organization’s AI solutions to connect disparate knowledge assets within an organization. In this way, ontologies enable AI solutions to traverse knowledge assets, more accurately make “assumptions,” and deliver more complete and cohesive answers. 
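This inference can be sketched with plain triples, as below. A production system would express the rule in an ontology language such as OWL and let a reasoner derive it; here the single pattern is hand-coded for illustration:

```python
triples = {
    ("Zach Wahl", "worksAt", "Enterprise Knowledge"),
    ("Enterprise Knowledge", "expertIn", "AI Readiness"),
}

def infer_person_expertise(triples):
    """If <Person> worksAt <Company> and <Company> expertIn <Topic>,
    infer <Person> expertIn <Topic>."""
    inferred = set()
    for person, pred1, company in triples:
        if pred1 != "worksAt":
            continue
        for subj, pred2, topic in triples:
            if pred2 == "expertIn" and subj == company:
                inferred.add((person, "expertIn", topic))
    return inferred

print(infer_person_expertise(triples))
# {('Zach Wahl', 'expertIn', 'AI Readiness')}
```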

Collectively, you can consider these semantic components as an organizational map of what it does, who does it, and how. Semantic components can show an AI how to get where you want it to go without getting lost or taking wrong turns.

5) Semantic Model Application (Tagging)

Of course, it is not sufficient simply to design the semantic components; you must complete the process by applying them to your knowledge assets. If the semantic components are the map, applying them as metadata is the GPS that allows you to use it easily and intuitively. This step is commonly a stumbling block for organizations, and again is why we are discussing knowledge assets rather than discrete areas like content and data. To best achieve AI readiness, all of your knowledge assets, regardless of their state (structured, unstructured, semi-structured, etc.), must have consistent metadata applied to them.

When applied properly, this consistent metadata becomes an additional layer of meaning and context for AI to leverage in pursuit of complete and correct answers. With the latest updates to leading taxonomy and ontology management systems, the process of automatically applying metadata or storing relationships between knowledge assets in metadata graphs is vastly improved, though it still requires a human in the loop to ensure accuracy. Even so, what used to be a major hurdle in metadata application initiatives is much simpler than it once was.
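A toy sketch of auto-tagging with a human in the loop follows: tags scoring above a confidence threshold are applied automatically, while weaker matches are queued for review. The keyword-overlap scoring and the taxonomy shown are deliberately naive placeholders for what a taxonomy management system would do:

```python
import re

def auto_tag(text, taxonomy, auto_threshold=0.5):
    """Score each taxonomy term by keyword overlap; apply strong matches
    automatically and queue weak ones for human review."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    applied, review = [], []
    for term, keywords in taxonomy.items():
        score = len(words & {k.lower() for k in keywords}) / len(keywords)
        if score >= auto_threshold:
            applied.append(term)
        elif score > 0:
            review.append(term)
    return applied, review

# Invented taxonomy terms and keyword cues
taxonomy = {
    "Expense Policy": ["expense", "reimbursement", "receipts"],
    "Travel": ["travel", "itinerary", "booking"],
}
doc = "Submit receipts for reimbursement of expense items after travel."
print(auto_tag(doc, taxonomy))  # (['Expense Policy'], ['Travel'])
```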


6) Address Access and Security (Unified Entitlements)

What happens when you finally deliver what your organization has been seeking, and give it the ability to collectively and completely serve end users the knowledge assets they’ve been looking for? If this step is skipped, the answer is calamity. One of the selling points of AI is that it can uncover hidden gems in knowledge assets, make connections humans typically can’t, and combine disparate sources to build new knowledge assets and new answers within them. This is incredibly exciting, but it also presents a massive organizational risk.

At present, many organizations have an incomplete, or outright poor, model for entitlements: ensuring the right people see the right assets, and the wrong people do not. We consistently discover highly sensitive knowledge assets in various forms on organizational systems that should be secured but are not. Some of this takes the form of a discrete document, or a row of data in an application, which is surprisingly common but relatively easy to address. Even more of it is only visible when you take an enterprise view of an organization.

For instance, Database A might contain anonymized health information about employees for insurance reporting purposes but maps to discrete unique identifiers. File B includes a table of those unique identifiers mapped against employee demographics. Application C houses the actual employee names and titles for the organizational chart, but also includes their unique identifier as a hidden field. The vast majority of humans would never find this connection, but AI is designed to do so and will unabashedly generate a massive lawsuit for your organization if you’re not careful.
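The re-identification risk in this example can be made concrete with three toy tables; any system able to join on the shared identifier, as AI-driven retrieval can, reassembles the "anonymized" record. All names and values below are invented:

```python
# Toy stand-ins for the three systems described above
health = [{"emp_uid": "U17", "condition": "condition-x"}]            # Database A
demographics = [{"emp_uid": "U17", "dept": "Finance", "age": 44}]    # File B
directory = [{"name": "A. Example", "title": "Analyst", "emp_uid": "U17"}]  # Application C

def join_on_uid(*tables):
    """Merge rows across tables that share an emp_uid."""
    merged = {}
    for table in tables:
        for row in table:
            merged.setdefault(row["emp_uid"], {}).update(row)
    return list(merged.values())

record = join_on_uid(health, demographics, directory)[0]
print(record["name"], record["condition"])  # a named employee now linked to health data
```

A unified entitlements model has to treat the combination of these sources as sensitive, even when each source in isolation appears benign.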

If you have security and entitlement issues with your existing systems (and trust me, you do), AI will inadvertently discover them, connect the dots, and surface knowledge assets and connections between them that could be truly calamitous for your organization. Any AI readiness effort must confront this challenge, before your AI solutions shine a light on your existing security and entitlements issues.


7) Maintain Quality While Iteratively Improving (Governance)

Steps one through six describe how to get your knowledge assets ready for AI, but the final step gets your organization ready for AI. With a massive investment in both getting your knowledge assets into the right state for AI and in the AI solution itself, the final step is to ensure the ongoing quality of both. Mature organizations will invest in a core team to ensure knowledge assets go from AI-ready to AI-mature, including:

  • Maintaining and enforcing the core tenets to ensure knowledge assets stay up-to-date and AI solutions are looking at trusted assets only;
  • Reacting to hallucinations and unanswerable questions to fill gaps in knowledge assets; 
  • Tuning the semantic components to stay up to date with organizational changes.

The most mature organizations, those wishing to become AI-Powered organizations, will look first to their knowledge assets as the key building block to drive success. Those organizations will seek ROCK (Relevant, Organizationally Contextualized, Complete, and Knowledge-Centric) knowledge assets as the first line to delivering Enterprise AI that can be truly transformative for the organization. 

If you’re seeking help to ensure your knowledge assets are AI-Ready, contact us at info@enterprise-knowledge.com


Semantic Layer Strategy for Linked Data Investigations https://enterprise-knowledge.com/semantic-layer-strategy-for-linked-data-investigations/ Thu, 08 May 2025 15:08:08 +0000

The post Semantic Layer Strategy for Linked Data Investigations appeared first on Enterprise Knowledge.


The Challenge

A government organization sought to more effectively exploit their breadth of data generated by investigation activity of criminal networks for comprehensive case building and threat trend analysis. The agency struggled to meaningfully connect structured and unstructured data from multiple siloed data sources, each with misaligned naming conventions and inconsistent data structures and formats. Users had to have an existing understanding of underlying data models and jump between multiple system views to answer core investigation analysis questions, such as “What other drivers have been associated with this vehicle involved in an inspection at the border?” or “How often has this person in the network traveled to a known suspect storage location in the past 6 months?”

These challenges manifest in data ambiguity across the organization, complex and resource-intensive integration workflows, and underutilized data assets lacking meaningful context, all resulting in significant cognitive load and burdensome manual efforts for users conducting intelligence analyses. The organization recognized the need to define a robust semantic layer solution grounded in data modeling, architecture frameworks, and governance controls to unify, contextualize, and operationalize data assets via a “single pane of intelligence” analysis platform.

The Solution

To address these challenges, EK engaged with the client to develop a strategy and product vision for their semantic solution, paired with foundational semantic data models for meaningful data categorization and linking, architecture designs and tool recommendations for integrating and leveraging graph data, and entitlements designs for adhering to complex security standards. With phased implementation plans for incremental delivery, these components lay the foundations for the client’s solution vision for advanced entity resolution and analytics capabilities. The overall solution will power streamlined consumption experiences and data-driven insights through the “single pane of intelligence.”  

The core components of EK’s semantic advisory and solution development included:

Product Vision and Use Case Backlog:
EK collaborated with the client to shape a product vision anchored around the solution’s purpose and long-term value for the organization. Complemented with a strategic backlog of priority use cases, EK’s guidance resulted in a compelling narrative to drive stakeholder engagement and organizational buy-in, while also establishing a clear and tangible vision for scalable solution growth.

Solution Architecture Design:
EK’s solution architects gathered technical requirements to propose a modular solution architecture consisting of multiple, self-contained technology products that will provision a comprehensive analytic ecosystem to the organization’s user base. The native graph architecture involves a graph database, entity resolution services, and a linked data analysis platform to create a unified, interactive model of all of their data assets via the “single pane of intelligence.”

Tool Selection Advisory:
EK guided the client on selecting and successfully gaining buy-in for procurement of a graph database and a data analysis and visualization platform with native graph capabilities to plug into the semantic and presentation layers of the recommended architecture design. This selection moves the organization away from a monolithic, document-centric platform to a data-centric solution for dynamic intelligence analysis in alignment with their graph and network analytics use cases. EK’s experts in unified entitlements and industry security standards also ensured the selected tooling would comply with the client’s database, role, and attribute-based access control requirements.

Taxonomy and Ontology Modeling:
In collaboration with intelligence subject matter experts, EK guided the team from a broad conceptual model to an implementable ontology and starter taxonomy designs to enable a specific use case for prioritized data sources. EK advised on mapping the ontology model to components of the Common Core Ontologies to create a standard, interoperable foundation for consistent and scalable domain expansion.

Phased Implementation Plan:
Through dedicated planning and solutioning sessions with the core client team, EK developed an iterative implementation plan to scale the foundational data model and architecture components and unlock incremental technical capabilities. EK advised on identifying and defining starter pilot activities, outlining definitions of done, necessary roles and skillsets, and required tasks and supporting tooling from the overall architecture to ensure the client could quickly start on solution implementation. EK is directly supporting the team on the short-term implementation tasks while continuing to advise and plan for the longer-term solution needs.

 

The EK Difference

Semantic Layer Solution Strategy:
EK guided the client in transforming existing experimental work in the knowledge graph space into an enterprise solution that can scale and bring tangible value to users. From strategic use case development to iterative semantic model and architecture design, EK provided the client with repeatable processes for defining, shaping, and productionalizing components of the organization’s semantic layer.

LPG Analytics with RDF Semantics:
To support the client’s complex and dynamic analytics needs, EK recommended an LPG-based solution for its flexibility and scalability. At the same time, the client’s need for consistent data classification and linkage still pointed to the value of RDF frameworks for taxonomy and ontology development. EK is advising on how to bridge these models for the translation and connectivity of data across RDF and LPG formats, ultimately enabling seamless data integration and interoperability in alignment with semantic standards.

Semantic Layer Tooling:
EK has extensive experience advising on the evaluation, selection, procurement, and scalable implementation of semantic layer technologies. EK’s qualitative evaluation for the organization’s linked data analysis platforms was supplemented by a proprietary structured matrix measuring down-selected tools against 50+ functional and non-functional factors to provide a quantitative view of each tool’s ability to meet the organization’s specific needs.

Semantic Modeling and Scalable Graph Development:
Working closely with the organization’s domain experts, EK provided expert advisory in industry standards and best practices to create a semantic data model that will maximize graph benefits in the context of the client’s use cases and critical data assets. In parallel with model development, EK offered technical expertise to advise on the scalability of the resulting graph and connected data pipelines to support continued maintenance and expansion.

Unified Entitlements Design:
Especially working with a highly regulated government agency, EK understands the critical need for unified entitlements to provide a holistic definition of access rights, enabling consistent and correct privileges across every system and asset type in the organization. EK offered comprehensive entitlements design and development support to ensure access rights would be properly implemented across the client’s environment, closely tied to the architecture and data modeling frameworks.

Organizational Buy-In:
Throughout the engagement, EK worked closely with project sponsors to craft and communicate the solution product vision. EK tailored product communication components to different audiences by detailing granular technical features for tool procurement conversations and formulating business-driven, strategic value statements to engage business users and executives for organizational alignment. Gaining this buy-in early on is critical for maintaining development momentum and minimizing future roadblocks as wider user groups transition to using the productionalized solution.

The Results

With initial core semantic models, iterative solution architecture design plans, and incremental pilot modeling and engineering activities, the organization is equipped to stand up key pieces of the solution as they procure the graph analytics tooling for continued scale. The phased implementation plan provides the core team with tangible and achievable steps to transition from their current document-centric ways of working to a truly data-centric environment. The full resulting solution will facilitate investigation activities with a single pane view of multi-sourced data and comprehensive, dynamic analytics. This will streamline intelligence analysis across the organization with the enablement of advanced consumption experiences such as self-service reporting, text summarization, and geospatial network analysis, ultimately reducing the cognitive load and manual efforts users currently face in understanding and connecting data. EK’s proposed strategy has been approved for implementation, and EK will publish the results from the MVP development as a follow-up to this case study.

AI & Taxonomy: the Good and the Bad https://enterprise-knowledge.com/ai-taxonomy-the-good-and-the-bad/ Tue, 04 Mar 2025 18:18:49 +0000 https://enterprise-knowledge.com/?p=23266 The recent popularity of new machine learning (ML) and artificial intelligence (AI) applications has disrupted a great deal of traditional data and knowledge management understanding and tooling. At EK, we have worked with a number of clients who have questions–how … Continue reading

The post AI & Taxonomy: the Good and the Bad appeared first on Enterprise Knowledge.


The recent popularity of new machine learning (ML) and artificial intelligence (AI) applications has disrupted a great deal of traditional data and knowledge management understanding and tooling. At EK, we have worked with a number of clients who have the same question: how can these AI tools help with our taxonomy development and implementation efforts? This is a rapidly developing area, and there is still more to be discovered about how these applications and agents can be used as tools. However, from our own experience, experiments, and work with AI-literate clients, we have noticed alignment on a number of benefits, as well as a few persistent pitfalls. This article will walk you through where AI and ML can be used effectively for taxonomy work, and where they can lead to limitations and challenges. Ultimately, AI and ML should be used as additional tools in a taxonomist’s toolbelt, rather than as a replacement for human understanding and decision-making.

 

Pluses

Taxonomy Component Generation

One area of AI integration that Taxonomy Management System (TMS) vendors quickly aligned on is the usefulness of LLMs (Large Language Models) and ML for assisting in the creation of taxonomy components like Alternative Labels, Child Concepts, and Definitions. Using AI to create a list of potential labels or sub-terms that can quickly be accepted or discarded is a great productivity aid. Content generation is especially powerful when it comes to definitions: using AI, you can draft hundreds of definitions for a taxonomy at a time, which can then be reviewed, updated, and approved. This is an immense time-saver for taxonomists, especially those working solo within a larger organization. By giving an LLM instructions on how to construct definitions, you can avoid circular definitions that merely restate the term being defined (for example, Customer Satisfaction: def. When the customer is satisfied.) and save the time the taxonomist would otherwise spend looking up definitions individually. I also like using LLMs to suggest labels for categories when I am struggling to find a descriptive term that isn’t a phrase or jargon.
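As a rough illustration of the instruction-giving described above, the sketch below builds a definition-drafting prompt that could be sent to any LLM API. The function name and rule wording are illustrative assumptions, not any vendor's interface; the point is encoding "no circular definitions" as an explicit instruction.

```python
def build_definition_prompt(term: str, context: str) -> str:
    """Build a definition-drafting prompt that discourages circular definitions."""
    return (
        f"Draft a one-sentence definition for the taxonomy term '{term}'.\n"
        f"Business context: {context}\n"
        "Rules:\n"
        f"- Do not restate or reuse the words in '{term}' within the definition.\n"
        "- Use plain language a new employee would understand.\n"
        "- Do not include examples or opinions."
    )

def draft_definitions(terms: list[str], context: str) -> dict[str, str]:
    """Produce one prompt per term; a real pipeline would send each to an LLM
    and collect the drafts for human review and approval."""
    return {term: build_definition_prompt(term, context) for term in terms}
```

Looping `draft_definitions` over an exported term list is how "hundreds of definitions at a time" becomes practical, with the taxonomist's effort shifted from writing to reviewing.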

Mapping Between Vocabularies

Some taxonomists may already be familiar with this use case; I first encountered it back in 2020. LLMs, as well as applications that can do semantic embedding and similarity analysis, are great for doing an initial pass at cross-mapping between vocabularies. Especially for application taxonomies that ingest a lot of already-tagged content/data from different sources, this can cut down on the time spent reviewing hundreds of terms across multiple taxonomies for potential mappings. One example of this is Learning Management Systems, or LMSs. Large LMSs typically license learning content from a number of different educational vendors. In order to present users with a unified discovery and search experience, the topic categories, audiences, and experience levels that vendors assign to their learning content need to be mapped to the LMS’s own taxonomies to ensure consistent tagging for findability.
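To make the cross-mapping pass concrete, here is a minimal sketch of similarity-based mapping between two vocabularies. It assumes you have already produced an embedding vector for each term (from any embedding model); the 0.8 threshold, toy two-dimensional vectors, and function names are illustrative assumptions, and a human still reviews every suggestion.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def suggest_mappings(source_vecs: dict, target_vecs: dict, threshold: float = 0.8) -> dict:
    """For each source term, propose the closest target term above the threshold.
    A value of None means no candidate was close enough -- flag for human review."""
    suggestions = {}
    for s_term, s_vec in source_vecs.items():
        best_term, best_score = None, threshold
        for t_term, t_vec in target_vecs.items():
            score = cosine(s_vec, t_vec)
            if score >= best_score:
                best_term, best_score = t_term, score
        suggestions[s_term] = best_term
    return suggestions
```

In the LMS scenario, the source vocabulary would be a vendor's experience levels (e.g., "Beginner") and the target the LMS's own taxonomy (e.g., "Novice"), with the reviewer accepting or rejecting each proposed pair.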

Document Processing and Summarization

One helpful content use case for LLMs is their ability to summarize existing content and text, rather than creating new text from scratch. Using an LLM to create content summaries and abstracts can be a useful input for automatic tagging of longer, technical documents. This should not be the only input for auto-tagging, since hallucinations may lead to missed tags, but when tagged alongside the document text, we have seen improved tagging performance.

Topic Modeling and Classification

The components that make up the BERTopic framework for topic modeling; within each category, components are interchangeable. Image of BERTopic components reprinted with permission from https://maartengr.github.io/BERTopic/algorithm/algorithm.html

Most taxonomists are familiar with using NLP (Natural Language Processing) tools to perform corpus analysis, or the automated identification of potential taxonomy terms from a set of documents. Often taxonomists use either standalone applications or TMS modules to investigate word frequency, compound phrases, and overall relevancy of terms. These tools serve an important part of taxonomy development and validation processes, and we recommend using a TMS to handle NLP analysis and tagging of documents at scale. 

BERTopic is an innovative topic modeling approach that is remarkably flexible in handling various information formats and can identify hierarchical relationships with adjustable levels of detail. BERTopic uses document embedding and clustering to add additional layers of analysis and processing to the traditional NLP approach of term frequency-inverse document frequency, or TF-IDF, and can incorporate LLMs to generate topic labels and summaries for topics. For organizations with a well-developed semantic model, the BERTopic technique can also be used for supervised classification, sentiment analysis, and topic tagging. Topic modeling is a useful tool for providing another dimension with which taxonomists can review documents, and demonstrates how LLMs and ML frameworks can be used for analysis and classification. 

 

Pitfalls

Taxonomy Management

One of the most desired features that we have heard from clients is the use of agentic AI to handle long-term updates to and expansion of a taxonomy. Despite the desire for a magic bullet that would allow an organization to scale up its taxonomy use without additional headcount, to date no ML or AI application or framework can replace human decision-making in this sphere. As the following pitfalls will show, taxonomy management still requires human judgement to determine whether management decisions are appropriate, aligned with organizational understanding and business objectives, and supportive of taxonomy scaling.

Human Expertise and Contextual Understanding

Taxonomy management requires discussions with experts in a subject area and the explicit capture of their information categories. Many organizations struggle with knowledge capture, especially for tacit knowledge gained through experience. Taxonomies that are designed with only document inputs will fail to capture this important implicit information and language, which can lead to issues in utilization and adoption. 

These taxonomies may struggle to handle instances where common terms are used differently in a business context, along with terms where the definition is ambiguous. For example, “Product” at an organization may refer not only to purchasable goods and services, but also to internal data products & APIs, or even not-for-profit offerings like educational materials and research. And within a single taxonomy term, such as “Data Product”, there may be competing ideas of scope and definition across the organization that need to be brought into alignment for it to be accurately used.

Content Quality and Bias

AI taxonomy tools are dependent on the quality of the content used to train them. Content cleanup and management is a difficult task, and unfortunately many businesses lag behind both in capturing up-to-date information and in deprecating or removing out-of-date information. This can lead to taxonomies that are out of step with the modern understanding of a field. Additionally, if the documents used have a bias towards a particular audience, stakeholder group, or view of a topic, then the taxonomy terms and definitions suggested by the AI will reflect that bias, even if that audience, stakeholder group, or view is not aligned with your organization. I’ve seen this problem come up when trying to use press releases and news to generate taxonomies: the results are too generic, vague, and public-facing, rather than expert-oriented, to be of much use.

Governance Processes and Decision Making

Similar to the pitfalls of using AI for taxonomy management, governance and decision making are another area where human judgement is required to ensure that taxonomies are aligned to an organization’s initiatives and strategic direction. Choosing whether undertagged terms should be sunsetted or updated, responding to changes in how words are used, and identifying new domain areas for taxonomy expansion are all tasks that require conversation with content owners and users, alongside careful consideration of consequences. As a result, ultimate taxonomy ownership and responsibility should lie with trained taxonomists or subject matter experts.

AI Scalability

There are two challenges to using AI alongside taxonomies. The first challenge is the shortage of individuals with the specialized expertise required to scale AI initiatives from pilot projects to full implementations. In today’s fast-evolving landscape, organizations often struggle to find taxonomists or semantic engineers who can bridge deep domain knowledge with advanced machine learning skills. Addressing this gap can take two main approaches. Upskilling existing teams is a viable strategy—it is cost-effective and builds long-term internal capability, though it typically requires significant time investment and may slow progress in the short term. Alternatively, partnering with external experts offers immediate access to specialized skills and fresh insights, but it can be expensive and sometimes misaligned with established internal processes. Ultimately, a hybrid approach—leveraging external partners to guide and accelerate the upskilling of internal teams—can balance these tradeoffs, ensuring that organizations build sustainable expertise while benefiting from immediate technical support.

The second challenge is overcoming infrastructure and performance limitations that can impede the scaling of AI solutions. Robust and scalable infrastructure is essential for maintaining data latency, integrity, and managing storage costs as the volume of content and complexity of taxonomies grow. For example, an organization might experience significant delays in real-time content tagging when migrating a legacy database to a cloud-based system, thereby affecting overall efficiency. Similarly, a media company processing vast amounts of news content can encounter bottlenecks in automated tagging, document summarization, and cross-mapping, resulting in slower turnaround times and reduced responsiveness. One mitigation strategy would be to leverage scalable cloud architectures, which offer dynamic resource allocation to automatically adjust computing power based on demand—directly reducing latency and enhancing performance. Additionally, the implementation of continuous performance monitoring to detect system bottlenecks and data integrity issues early would ensure that potential problems are addressed before they impact operations.

 

Closing

Advances in AI, particularly with large language models, have opened up transformative opportunities in taxonomy development and the utilization of semantic technologies in general. Yet, like any tool, AI is most effective when its strengths are matched with human expertise and a well-thought-out strategy. When combined with the insights of domain experts, ML/AI not only streamlines productivity and uncovers new layers of content understanding but also accelerates the rollout of innovative applications.

Our experience shows that overcoming the challenges of expertise gaps and infrastructure limitations through a blend of internal upskilling and strategic external partnerships can yield lasting benefits. We’re committed to sharing these insights, so if you have any questions or would like to explore how AI can support your taxonomy initiatives, we’re here to help.

The post AI & Taxonomy: the Good and the Bad appeared first on Enterprise Knowledge.

Nurturing Knowledge – A Journey in Building a KM Program from Scratch: A Case Study https://enterprise-knowledge.com/building-a-km-program-from-scratch/ Thu, 13 Feb 2025 15:30:43 +0000 https://enterprise-knowledge.com/?p=23094 Today, non-profit organizations face the challenge of optimizing knowledge management to maximize resources and support decision-making. During this presentation  “Nurturing Knowledge: A Journey in Building a KM Program from Scratch”, Jess DeMay (Enterprise Knowledge) and Jennifer Anna (WWF) shared a … Continue reading

The post Nurturing Knowledge – A Journey in Building a KM Program from Scratch: A Case Study appeared first on Enterprise Knowledge.

Today, non-profit organizations face the challenge of optimizing knowledge management to maximize resources and support decision-making. In their presentation “Nurturing Knowledge: A Journey in Building a KM Program from Scratch,” delivered on November 19th at KM World 2024 in Washington, D.C., Jess DeMay (Enterprise Knowledge) and Jennifer Anna (WWF) shared a case study on this topic.

In this presentation, DeMay and Anna explored the World Wildlife Fund’s (WWF) approach to developing its knowledge management strategy from the ground up. They focused on the organization’s initial challenges, such as disparate systems and siloed information. They highlighted WWF’s strategy for overcoming these obstacles, emphasizing the integration of people, processes, and technology to craft a roadmap aligned with WWF’s organizational goals.

They discussed WWF’s proactive efforts to foster a knowledge-sharing culture, define clear roles, and implement a governance structure that enhances content management across a distributed team of over 1,900 employees. They also addressed the vital role of change management, sharing techniques for navigating resistance and securing buy-in through executive sponsorship and grassroots advocacy.

Participants in this session gained insights into:

  • Key challenges and strategies for building a KM program from scratch;
  • The importance of aligning KM initiatives with organizational goals;
  • Techniques for fostering a knowledge-sharing culture and managing content; and
  • How to drive sustainable change with effective communication, training, and support.

The post Nurturing Knowledge – A Journey in Building a KM Program from Scratch: A Case Study appeared first on Enterprise Knowledge.

Governing a Federated Data Model https://enterprise-knowledge.com/governing-a-federated-data-model/ Thu, 25 Apr 2024 15:36:11 +0000 https://enterprise-knowledge.com/?p=20398 Kjerish, CC BY-SA 4.0, via Wikimedia Commons Data proliferates. Whether you are a small team or a multinational enterprise, information grows at an accelerated rate over time. As that data proliferates, you can run into issues of interoperability, duplication, and … Continue reading

The post Governing a Federated Data Model appeared first on Enterprise Knowledge.

Example of a node mesh network.

Kjerish, CC BY-SA 4.0, via Wikimedia Commons

Data proliferates. Whether you are a small team or a multinational enterprise, information grows at an accelerating rate over time. As that data proliferates, you can run into issues of interoperability, duplication, and inconsistency that slow the speed with which actionable insights can be derived. In order to mitigate this natural tendency, we develop and enforce standardized data models.

Developing enterprise data models introduces new concerns, such as how centralized the ownership of the model should be. While it can be helpful to have a single overarching data model team, centralization can also introduce its own challenges. Chief among these is a modeling bottleneck: if only one team can produce or approve models, the speed with which models can be developed and improved slows. Even if that team incorporates feedback and review from the data experts, centralization typically blocks deep domain knowledge from being captured and kept up to date. It is for that reason that frameworks such as data mesh and data fabric promote domain ownership of data models by the people closest to the data, within a larger federated framework.

Before continuing, we should define a few terms:

  • Federation refers to a collection of smaller groups within a larger organization, each of which has some degree of autonomy and ability to make decisions. Within the context of a data framework, federation means that different groups within an organization, such as Sales and Accounts, are responsible for and make decisions about their data.
  • Domain, for the purposes of this article, is a specific area of knowledge within an organization. Products are a common domain area within organizations, with specific product types or categories serving as more specific sub-domains. Domains may make use of highly specific subject knowledge, and are often characterized by their depth rather than their breadth.

Of course, implementing domains working within a federated data model brings its own challenges for data governance. Some–such as the need for global standardization to promote interoperability across data products–are common data challenges, while others–such as the federation of governance responsibilities–may be new to organizations embarking on a decentralized model journey. This article will walk through how to begin transitioning to federated data model governance.

Local government plenary chamber in the town hall in Dülmen, North Rhine-Westphalia, Germany (2017)

Similar to a town hall or local government, success will rely on ensuring that many different stakeholders have a seat at the table and a sense of shared responsibility. Dietmar Rabich / Wikimedia Commons / “Dülmen, Rathaus, Ratssaal — 2017 — 9667-73” / CC BY-SA 4.0

 

Moving away from a centralized model: Balancing standardization and autonomy

For organizations that have already implemented centralized data governance, the thought that governance responsibilities can or should be federated out to different domains may seem strange. Data governance grows out of the need for standardization, interoperability, and regulatory standards, all of which are typically associated with centralized management. These needs are central to any large organization’s data governance, and they don’t go away when creating a federated governance model. However, within a federated data model, these standardization needs are balanced against the principles of domain autonomy that support data innovation and agile production. Time spent explaining field naming conventions and data structure to non-experts and waiting for approval can slow or even stymie the ability to make data internally available, resulting in increased cost, lost hours, and lower innovation.

To support domain autonomy and the ability to move quickly when iterating on or creating new data products, some of the responsibility for ensuring that data meets governance standards is shifted onto the domains as part of a shared governance model. Business-wide governance concerns like security and enterprise authorization remain with central teams, while specific governance implementations like authorization rules and data quality assurance are handled on a domain basis. Because domains handle their own governance checks, the central governance group is free to tackle broader issues like regulatory compliance, and data product teams spend less time waiting on central reviews when iterating on a data product.

The federated and central governance teams are not separate entities, working without knowledge of one another. Domain teams are able to weigh in on and guide global data product governance policies, through a cross-functional governance team.

 

Global governance, local implementation

Within a federated governance model, it is still important to be able to create enterprise-wide governance policies. Individual domains need guidance on meeting regulatory requirements in areas of privacy and security, among others. Additionally, for standardization to be of the greatest benefit, all of the groups producing data need to align on the same standards. 

It is for these reasons that the federated governance model relies on a cross-functional governance team for policy decisions, as well as guidance on how to implement governance to meet those policies. This cross-functional team should be made up of domain representatives and representatives from Central IT, Compliance, Standards, and other experts in relevant governance areas. This ensures that policy decisions are not removed from the data producers, and that domains have a say in governance decisions while remaining connected to your organization’s central governance bodies. Policies that should be determined by this governance team can include PII requirements, API contracts, mappings, security policies, representation requirements, and more.

An example federated governance diagram

In order to ensure that domains are fully engaged in the governance process, it is best practice to involve the data product teams early in the governance process. For an organization new to federated data models, this should happen when the data product teams are being stood up, rather than waiting for product teams to be fully formed before grafting on later. When Enterprise Knowledge spearheaded the development of an enterprise ontology for data fabric at a major multinational bank, we worked with the major stakeholders to start defining a federated governance program and initial domains from the beginning of the engagement alongside the initial ontology modeling. This helped to ensure that there was early buy-in from the teams that would later define and be responsible for data products.

The data product teams are ultimately responsible for executing the governance policies of this group, so it is vital that they are involved in defining those policies. Lack of involvement can lead to friction between the governance and data product teams, especially if the data product teams feel that the policies are out of sync with their governance needs.

Shift left on standards

The idea of “shifting left” comes from software testing, where it means testing early rather than late in the project lifecycle. Similarly, shifting left on standards incorporates data management practices early in the data lifecycle, rather than tacking them on at the end. Data management frameworks prioritize working with data close to its source, and data governance should be no different. Standards need to be embedded as early in the data lifecycle as possible in order to promote greater usability downstream within data products. 

For data attributes, this could mean mapping data to standardized concept definitions and business rules as defined in an ontology. EK has worked with clients to shift left on standardization by using a semantic layer to connect standardized vocabulary to source data across disparate datasets, and map the data to a shared logical model. Applying standardization within the data products improves the user experience for data consumers and lessens the time lost when working with multiple data products. 
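As a minimal illustration of this kind of shift-left mapping, the sketch below connects divergent local column names in two datasets to a shared concept from a logical model, so that consumers can query by concept rather than by each dataset’s local name. All dataset, column, and concept names here are hypothetical, not drawn from any real client system.

```python
# Sketch of a tiny "semantic layer": local column names in disparate
# datasets are mapped to shared concepts from a logical model, so
# consumers can retrieve data by concept rather than by each dataset's
# local name. All dataset, column, and concept names are hypothetical.

datasets = {
    "crm_accounts":   [{"CTRY": "US", "acct_id": 1}],
    "billing_events": [{"IsoCountryCode": "DE", "event_id": 9}],
}

# (dataset, local column) -> shared concept identifier
column_to_concept = {
    ("crm_accounts", "CTRY"): "ex:CountryCode",
    ("billing_events", "IsoCountryCode"): "ex:CountryCode",
}

def values_for_concept(concept):
    """Collect values for a shared concept across all mapped datasets."""
    results = []
    for (dataset, column), mapped in column_to_concept.items():
        if mapped == concept:
            for row in datasets[dataset]:
                results.append((dataset, row[column]))
    return results

print(values_for_concept("ex:CountryCode"))
# [('crm_accounts', 'US'), ('billing_events', 'DE')]
```

In a production semantic layer, the mapping table would live in an ontology or metadata catalog rather than in code, but the shape of the lookup is the same.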

Zhamak Dehghani, the creator of the data mesh framework, suggests looking for places where standardization and governance can be applied programmatically as part of what she refers to as “computational governance.” Depending on an organization’s technical maturity (i.e. the availability and use of technical solutions internally), governance tasks such as the anonymization of PII, access controls, retention schedules, and more can be coded as a part of the data products. This is another instance of embedding standardization within domains to promote data quality and ease of use. Early standardization lessens the amount of later coordination that is required to publish data products, resulting in faster production, and it is one of the keys to enabling a federated data model. 
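To make computational governance concrete, here is a minimal sketch, assuming a simple record-based pipeline, of an anonymization step coded directly into a data product. The field names and hashing choice are illustrative assumptions, not a compliance recommendation.

```python
import hashlib

# Sketch: a "computational governance" step coded into a data product
# pipeline, pseudonymizing designated PII fields before records are
# published. Field names and the hashing choice are illustrative only,
# not a compliance recommendation.

PII_FIELDS = {"customer_name", "email"}

def anonymize(record):
    """Return a copy of the record with PII fields pseudonymized."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            # One-way hash, so joins on the pseudonym remain possible
            cleaned[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            cleaned[key] = value
    return cleaned

def publish(records):
    """Apply the governance step to every record before publication."""
    return [anonymize(r) for r in records]

published = publish([
    {"customer_name": "Ada", "email": "ada@example.com", "country": "US"},
])
print(published[0]["country"])  # non-PII field passes through: US
```

Because the policy is executed inside the data product itself, every published record is governed by construction rather than by after-the-fact review.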

Conclusion

While federated data governance will be a new paradigm to many organizations, it has clear advantages for data environments that rely on expertise across different subject areas. The best practices discussed in this article will ensure that your organization’s data ecosystem is not only a powerful tool for standardization and insights, but also a robust and reliable one. Data product thinking can be an exciting new way to gain insights from your data, but the change in paradigm required can also leave new users feeling lost and unsure. If you want to learn more about the social and technical sides of setting up federated data governance, contact us and we can discuss your organization’s needs in detail. 

 

The post Governing a Federated Data Model appeared first on Enterprise Knowledge.

Make Content Management Systems Work for You: Designing Your CMS to Deliver KM Solutions https://enterprise-knowledge.com/make-content-management-systems-work-for-you-designing-your-cms-to-deliver-km-solutions/ Wed, 26 Jul 2023 16:25:34 +0000 https://enterprise-knowledge.com/?p=18563


The most common use case our clients report for implementing their Content Management System (CMS) is “we needed a place to store our documents.” When they come to Enterprise Knowledge (EK), they’ve begun to realize that storing content is one thing, but configuring a CMS so that you can easily leverage your content is quite another. Many organizations see having a CMS as a knowledge management (KM) solution in and of itself. At EK, we understand that KM is comprised of a balance of People, Processes, Content, Culture, and Technology as they interact within an organization. CMSs are a single tool in a KM suite and, when combined with KM best practices, can help store and present the ‘content and technology’ aspect of KM. This white paper will deliver an overview of four overarching approaches for setting up or revamping your CMS with knowledge management best practices in mind.

SharePoint is one of the most popular Content Management Systems and one used by over 250,000 companies worldwide (ScienceSoft). While this white paper will focus on Document Management Systems and will include examples that are pertinent to SharePoint and SharePoint Online, the KM best practices in this white paper can be applied to almost any Content Management System.

 

1. Creating Rules and Governance for Your CMS

One of the most impactful things you can do for your CMS is create clear rules and regulations about what can be stored in your system, how it should be stored, and, critically, who is actually responsible for maintaining your content. The process of developing system governance will be different for every organization and CMS, but there are two approaches that we have seen consistently work for our clients: crafting system charters and designing role-based governance frameworks.

System Charters 

System Charters are the perfect lightweight backbone that can help inform every decision regarding your CMS. Your System Charter should include a one- to two-sentence summary of your system’s purpose, what should be stored in it, and its value to your organization. For example, a strong System Charter statement may look like this: 

“EK’s Knowledge Base was created to house thought leadership about knowledge management. Within the Knowledge Base, you will find blogs, presentations, podcasts, and case studies that teach readers about EK’s services and KM as a whole.”

This System Charter removes ambiguity about what should be stored in the Knowledge Base and introduces users to what to expect. 

System Governance 

With an overall framework in mind, you can begin to create roles and responsibilities for maintaining your system. CMSs can quickly get out of hand if every user has the ability to create folders, add pages, and upload content at will. However, users need some autonomy to manage their work and workspaces. My colleagues have explained content governance at length, but I want to highlight three key pieces of guidance for CMS governance here: 

  1. Create a cross-functional team for overall system governance. This should be a team that includes staff representing all departments and teams using your system. This overarching team ensures there is accountability for all governance efforts. 
  2. Formulate individual accountabilities for content you own. One of the best ways to avoid content being pumped into a system and never addressed is to create rules about what content owners must do with the content they’ve created. There is a fine line to walk here, as having heavy-handed rules will discourage knowledge sharing, but loose rules will allow a proliferation of bad content. To avoid issues, keep rules light and reward good behavior. 
  3. Create role-based permissions wherever possible. If a role within your organization doesn’t need to edit the page’s overall appearance, don’t give them that permission. Providing additional permissions as needed is easier than walking back major changes or mishaps. In SharePoint, this can be done by tying the seven permission levels (View Only, Limited Access, Read, Contribute, Edit, Design, and Full Control) to individual roles through the Admin Center or the Active Directory. 
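As a rough sketch of what role-based permissions look like in practice: SharePoint manages these levels through the Admin Center or Active Directory rather than through code like this, the role names and assignments below are hypothetical, and the seven levels are treated as linearly ordered only for illustration (which glosses over special cases like Limited Access).

```python
# Sketch: role-based permission assignment in the spirit of SharePoint's
# seven permission levels, treated here as roughly ordered for
# illustration. Role names and their assigned levels are hypothetical.

PERMISSION_LEVELS = [
    "View Only", "Limited Access", "Read",
    "Contribute", "Edit", "Design", "Full Control",
]
RANK = {level: i for i, level in enumerate(PERMISSION_LEVELS)}

ROLE_PERMISSIONS = {
    "site_visitor":   "Read",
    "content_author": "Contribute",
    "site_designer":  "Design",
    "system_admin":   "Full Control",
}

def can_perform(role, required_level):
    """True if the role's granted level is at or above the required level."""
    granted = ROLE_PERMISSIONS.get(role, "View Only")  # default to least privilege
    return RANK[granted] >= RANK[required_level]

print(can_perform("content_author", "Edit"))  # False: grant more only as needed
print(can_perform("system_admin", "Design"))  # True
```

Note how unknown roles fall back to the most restrictive level; starting minimal and granting more as needed mirrors the guidance above.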

Establishing usage guidelines through system charters and governance frameworks enables you to direct the evolution of your CMS and ensure its long-term maintenance, allowing you to focus on improving the actual experience for your users.

 

2. Design (Or Redesign) with User Experience in Mind

Many organizations see Content Management and Document Management Systems as utilitarian spaces that don’t need to cater to users’ needs and desires. However, developing a well-designed CMS interface and experience can reduce time spent searching for information, garner trust in the system, and encourage staff to give back to the tool. As such, ensuring your tool is easy to navigate and use is key to the success of both it and your staff. You can cater your CMS to your organization by:

  1. Retaining interface consistency wherever possible. Your CMS will be one of many interfaces staff use daily; lowering the cognitive load by retaining consistent button placement and page layouts can streamline the user experience and reduce time to find. While no two systems look exactly alike, structural inconsistencies can noticeably impact overall staff satisfaction. Your cross-functional KM governance team can take responsibility for understanding the user experience across various sites. Creating design consistency means faster usage, more efficient interactions, and fewer errors – all of which can have a measurable, positive impact on your bottom line. One way to start is by creating a cross-site style guide to simplify the design process and prevent users from having to relearn each site they visit. 
  2. Organizing your page based on new users’ needs. Set up your site or page to provide clear introductions to every visitor. You might try adding an introductory paragraph (a Site, Page, or System Charter typically fits well), adding quick links to the most critical and popular documents within your space, and adding contact information for any questions near the top of the page. While not all users will need this information, focusing on new users ensures the most visited and easy-to-digest information is close at hand. When using SharePoint, leverage Web Parts to create call-to-action buttons that users can use to navigate and jump around your site quickly. 

Consider creating a consistent navigational taxonomy to streamline your pages’ and system’s navigation and overall layout. While navigation by department or service area may be the best approach for some organizations, going through the navigational taxonomy design process is an opportunity to learn more about your end users and create a system that suits the needs of the largest user base. We have repeatedly seen the impact and frustration caused by radically different experiences from site to site, such as increased time-to-find and a disgruntled feeling that discourages users from complying with governance processes or from using the system at all. 

 

3. Organize with a Metadata Strategy

Metadata is descriptive detail that provides additional information about a piece of content. Most CMSs allow for the capture of metadata along with content. Metadata can improve knowledge management processes within your CMS by enabling faceted search, managing workflows and governance, and supporting access controls. 

With your cross-functional team, consider how the organization should use metadata within your CMS by prioritizing the KM processes you want to enable. Potential processes include increasing findability through search, managing and governing content via their applied metadata, presenting useful content to the individuals who will use it most, and enforcing nuanced access controls. To some extent, all of these processes can be started by creating content types. 

Content types are a foundational piece of almost any well-operating CMS. A content type is a reusable collection of metadata for a type of content. For example, a blog post is a content type that can have metadata fields such as title, author, topic, and date published. Most CMSs, like SharePoint, allow you to create custom content types. For a blog, for example, a system administrator could develop a defined content type that can be populated with content and metadata whenever a new blog is published. 
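To illustrate, a content type can be thought of as a small reusable schema. Here is a minimal sketch using the blog post example above; the specific field choices and sample values are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

# Sketch: a content type as a reusable collection of metadata fields,
# using the blog post example. Fields and sample values are illustrative.

@dataclass
class BlogPost:
    title: str
    author: str
    topic: str
    date_published: date
    tags: list = field(default_factory=list)

# Each new blog simply populates the shared content type
post = BlogPost(
    title="An Example Blog",
    author="A. Author",
    topic="Knowledge Management",
    date_published=date(2023, 7, 26),
)
print(post.topic)  # Knowledge Management
```

Because every blog shares these fields, downstream features like faceted search, workflows, and access rules can rely on the same metadata being present on every item.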

When creating content types, operate iteratively and start small by developing one or two at a time. An Agile approach will allow you to devote your efforts to iterative improvements with real user feedback as staff begins to interact with them, promoting effective, efficient, and focused user experiences. Additionally, always start with the content types you see most often. For example, if your organization posts a large number of News items, start by solidifying a News content type that reflects the standard form and information authors normally include. 

This relatively lightweight effort can be repeated over time and expanded as your content and user habits change. This will allow you to create powerful systems and workflows while keeping your content in manageable formats and repeatable frameworks. Keep in mind that content types work best when they work for the largest percentage of users. To achieve buy-in, ensure content types are designed centrally and communicated to the entire user base, emphasizing their power to make positive changes and improve the user experience. 

 

4. Workflows

Workflows are a key foundational element you can implement within your CMS. Automated workflows are a powerful tool that can boost the value and usefulness of your CMS if done correctly; however, if workflows are too rigid or slow down work, they can severely harm CMS adoption. Some popular workflows to get started with are content publishing, sunsetting content based on content types (now that you have your metadata strategy), and resurfacing content to be updated at predetermined times. Additionally, well-designed workflows will enforce the policies and procedures in your governance plan, ensuring it will be followed while creating greater usability. 

The best way to approach workflows is by focusing on the following: 

  1. Keeping it simple. Don’t over-complicate your workflows with too many steps or people that content needs to go through. Complex processes have more parts that can break down or create bottlenecks. Start by testing one small workflow that can be added to and iterated upon as it gets used and reviewed. 
  2. Eliminating extra work. As a system owner, you can use workflows to greatly reduce the burden of content management for you and your team; consider identifying the most tedious parts of content management and designing workflows for those first. For example, if you know News posts are only relevant for 1-2 months and you constantly have to rehouse them, create a workflow that automatically archives those posts. If this feels too permanent, you can set up a workflow that resurfaces the News post to a content or system owner for revision and repurposing rather than archiving. These simple workflows can save you time and energy, ensure stakeholders maintain content responsibly, and help declutter your CMS for your users. 
  3. Ensuring content is useful. Workflows can also serve as your automatic auditing system. Unfortunately, many content owners see themselves only as content authors. With a fairly simple workflow, you can create a system that notices when a content item is 6 months old and automatically triggers an email notification to the content author to check in and revise their item. This normally isn’t enough to ensure it is actually updated, so consider adding a step that also notifies a new hire or volunteer to review and make notes about whether the item makes sense to them; the content owner can then choose to update, archive, or replace the item accordingly. This workflow can enable small efforts that continually improve and maintain your CMS.

Combine workflows with analytics to ensure that under-used, duplicated, or frequently edited content is addressed by content and system owners often. Think of analytics as another trigger for workflows that automatically point out difficult-to-notice trends or content issues and begin the remediation process for you.
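The review-and-resurface pattern described above can be sketched in a few lines. This is an illustration assuming content items carry a last-updated date; it is not a reference to any particular CMS’s workflow engine.

```python
from datetime import date, timedelta

# Sketch: a simple auditing workflow that flags content older than six
# months for owner review. The data model and field names are
# illustrative; a real CMS would send notifications instead of printing.

REVIEW_AGE = timedelta(days=182)  # roughly six months

def items_needing_review(items, today):
    """Return items whose last update is older than the review threshold."""
    return [i for i in items if today - i["last_updated"] > REVIEW_AGE]

content = [
    {"title": "News: Q1 Update", "owner": "a.author", "last_updated": date(2023, 1, 10)},
    {"title": "Onboarding Guide", "owner": "b.owner", "last_updated": date(2023, 7, 1)},
]

for item in items_needing_review(content, today=date(2023, 7, 20)):
    print(f"Notify {item['owner']}: review '{item['title']}'")
```

The same trigger could just as easily come from analytics signals – low views or frequent edits – instead of age alone.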

 

Conclusion

CMSs are powerful tools that require a touch of Knowledge and Information Management best practices to reach their full potential. SharePoint is one of the many Content Management Systems that can be cumbersome, unwieldy, and a financial drain for your organization, but effective implementation of KM best practices can transform your Content Management System into a powerful and business-effective solution. You can start incorporating these best practices and processes today through bite-sized, iterative, and impactful efforts that help you and your users. This white paper is meant to inspire you to start your own CMS improvement process tailored to your organization. Do you need help improving your CMS and the processes around it? Contact Enterprise Knowledge.

The post Make Content Management Systems Work for You: Designing Your CMS to Deliver KM Solutions appeared first on Enterprise Knowledge.

A Semantic Data Fabric with Federated Governance for Data Standardization https://enterprise-knowledge.com/a-semantic-data-fabric-with-federated-governance-for-data-standardization/ Wed, 23 Nov 2022 15:18:25 +0000 https://enterprise-knowledge.com/?p=16819



The Challenge

A major multinational bank faced challenges for internal data producers and consumers stemming from a lack of standardized data across the enterprise. The organization produces, stores, and processes a vast amount of data in a context that historically lacked centralized data management controls, resulting in data that was inconsistent, duplicated, difficult to link together, and without clear ownership. In the absence of common standards across departments and financial product lines, divergence had become more and more ingrained in the ecosystem, affecting over 300 petabytes of data. The lack of alignment across organizational divisions on the meaning, format, and intent of data attributes reduced the ability of data producers and consumers to find, trust, or use crucial data.

Examples of divergent names for data elements containing country and state abbreviations. Country abbreviation name variants include countryCode, IsoCountryCode, alpha_code_country, and CTRY. State abbreviation name variants include stateCode, CountrySubdivisionCode, state_postal_code, and USPS_Code.
The same columns or elements have many different names across data assets, making it difficult to use, compare, or integrate data across the organization.
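The kind of standardization this case study describes can be pictured as a mapping from variant names to one canonical name. The sketch below is illustrative only, using the country-abbreviation variants shown above; the canonical name and the mapping mechanism are assumptions, not the client’s actual implementation.

```python
# Sketch: renaming divergent source column names to a single canonical
# name, using the country-abbreviation variants shown above. The
# canonical name itself is illustrative.

CANONICAL = {
    "countryCode": "country_code",
    "IsoCountryCode": "country_code",
    "alpha_code_country": "country_code",
    "CTRY": "country_code",
}

def standardize(record):
    """Rename known variant keys to their canonical names."""
    return {CANONICAL.get(key, key): value for key, value in record.items()}

print(standardize({"CTRY": "US", "acct_id": 42}))
# {'country_code': 'US', 'acct_id': 42}
```

In practice, an enterprise ontology serves as the authoritative source for such mappings, so the canonical vocabulary is governed centrally rather than duplicated in every pipeline.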

The Solution

To enable the adoption of standardized data across the organization, EK spearheaded the design and development of an enterprise ontology, defining key data elements describing the core financial activities the client performs for millions of customers as well as additional emerging domain areas and use cases. EK worked with stakeholders to identify and scope use cases and collaborated with subject-matter experts to model data elements into the ontology. EK supported publication of the ontology through cloud-based discovery and access tools by recommending an ontology management system, working with tooling teams to implement the system, and identifying opportunities to enhance usability through UX improvements and integrations with existing systems.

In order to facilitate ongoing maintenance and development, EK led the creation of a federated contribution and governance program, including approaches for communication and change management. EK led the definition of federated roles and responsibilities, the establishment of collaborative communication forums and channels, and the selection of high-value pilot use cases. To ensure consistency and mitigate the risks of decentralization, the governance model included a set of enterprise-wide minimal core criteria for standardized data to be applied across domains and use cases. As federated governance was rolled out, EK continued to refine processes, templates and materials, and related enterprise policies, standards, and procedures.

The EK Difference

EK’s unique use case-driven design methodology centered our solution around the business value it delivers. Leveraging our extensive experience in end-to-end ontology design and implementation, we started with a small-scale but high-impact proof of concept (POC) followed by iterative refinements and enhancements. This enabled internal champions to clearly demonstrate the ability of the enterprise ontology to answer crucial analysis questions and deliver tangible business value.

EK also applied a minimum viable product (MVP) approach to creating and implementing the federated governance model, launching the initial framework to support the POC and iteratively operationalizing new processes as they were developed. Rolling out the governance model as an MVP with ongoing enhancements enabled continual incorporation of stakeholder feedback and demonstration of effective federation. In addition, we drew upon EK’s proven methods for engaging stakeholders throughout the process. This allowed us to manage change in the organization, influence strategic direction, and drive adoption among data producers, data consumers, and executive decision-makers.

The Results

Over the course of this engagement, EK was able to drive the growth and maturation of the client’s data standardization program, processes, and team. EK’s multi-faceted approach to ontology modeling enabled:

  • Standardization of thousands of data elements powering six large-scale use cases;
  • Improved data consistency, usability, and alignment across the enterprise for business-critical financial data;
  • Continually increasing internal demand for onboarding into the program; and
  • Adoption of federated governance by more than 10 departments less than a year after the beginning of initial process design.

The enterprise ontology developed by EK will enable the client to pursue next-generation data usage across the organization, while the federated governance model will simultaneously allow the standardization program to sustainably adapt to evolving future use cases and data needs.

 

 

The post A Semantic Data Fabric with Federated Governance for Data Standardization appeared first on Enterprise Knowledge.

Where Should KM Leadership and Governance Belong in an Organization? https://enterprise-knowledge.com/where-does-km-leadership-and-governance-belong/ Fri, 09 Sep 2022 15:03:03 +0000 https://enterprise-knowledge.com/?p=16328


Whenever an organization invests in Knowledge Management (KM) solutions, whether it’s a new technology, process, or program, the question of who should have ownership and governance over these new tools and ways of working almost always arises. This KM function, which should include dedicated roles and responsibilities for maintaining and improving KM within an organization, is housed within different departments or business units. Through my years of KM consulting, I’ve often received the question of where it should be located in an ideal state. Though there are other organizational models that will certainly work, this blog presents the top four options I’ve seen, and which an organization should consider when deciding where KM Leadership and Governance belongs.

4. Human Resources (HR) Department

Many past KM organizations have been housed in HR, aligning HR’s natural “people” role with KM’s focus on tacit knowledge capture and sharing. Beyond making sure that employees get the training and development they need, it is also HR’s responsibility to create a culture of collaboration, accomplishing this by implementing programs that connect people to one another across the organization. KM helps to improve this flow of knowledge and information through programs and initiatives like Communities of Practice, Succession Planning, and Knowledge Sharing Sessions. KM functions that live within the HR department emphasize a focus on the people within an organization and what they need to be successful in their roles. However, KM placed within the HR line runs the risk of separating KM from a focus on business value and returns. Indeed, we’ve seen the placement of KM as an HR line function decreasing, and we’re happy with this trend. Though it can be successful, KM sitting within HR too often relegates it to a support function, one that limits the full potential scope of KM.

3. The Learning and Development (L&D) Department

While placing KM in HR trends lower and lower, I’m increasingly seeing KM as part of the L&D Department of an organization. One of the key missions of KM is providing people with access to the knowledge and information they need to do their jobs, and in mature learning organizations, L&D delivers those resources and learning experiences. L&D is centrally responsible for the professional development and capabilities of its workforce and to ensure that people are getting the proper training and development opportunities. This is all the more critical in today’s current state of the great resignation, where organizations are seeing a massive churn in staff and are finding it especially important to onboard and upskill new joiners. 

Mature KM and learning organizations invest in proper knowledge transfer, content management, and mentorship practices, all of which overlap between the realms of KM. When KM is governed from within the L&D Department, there is a focus on the quality of knowledge resources being delivered to people to help onboard and upskill as they work towards their performance goals and grow in their career. In short, a “merger” of KM and learning can drive the creation, delivery, and exchange of people-centric knowledge and the learnings they need to perform in their positions. It is a natural fit, but on the downside, placing KM within the learning group creates potential separations from the business, similar to the HR option above. In the wrong organization, that will lower the visibility or perceived importance of KM, thereby separating KM from its most important attribute; its ability to solve real business problems.

2. Business Line

Business Lines vary depending on the type of organization and can mean many different things. What I mean, in the context of this blog, is that KM is placed as part of the mission operations of an organization, such as marketing, sales, or production. Said differently, these are the revenue-generating or customer-facing parts of a business. In many cases, this is an ideal part of the organization for KM to live in, because it is much easier to connect KM efforts – like self-service knowledge bases for customers, personalized recommendations of products and services, and sales relationship management – to hard Return-on-Investment (ROI). 

An organization that chooses a business line as the part of the organization where KM is governed can more easily gain investments and buy-in towards their efforts because they help improve the cash flow of an organization as a result of its focus on delivering the necessary knowledge and information to people who are directly supporting customers. At the same time, KM placed within a business line can still (and should) draw upon engagement and information sources from HR, L&D and other support functions, but can do so driven by business need and value. The prominence and, ideally, business criticality of this placement also puts a more permanent, longer-term investment in KM and engenders executive support and accountability.

1. Stand-Alone Department Reporting to the Chief Operating Officer’s (COO) Office

There are various other places where you might find a KM function – such as IT, communications, or legal/compliance – but the number one ideal option is as a standalone function reporting directly to the COO. This is where I’ve seen KM “stick” and succeed most consistently. KM is necessary for the efficiency and effectiveness of all organizations. When KM has a bird’s eye view of the needs of the entire organization, it can make strategic decisions that benefit all of the different parts that make up the whole. KM becomes a unifying force that connects people to the resources they need regardless of where those resources are within the organization.

When KM is governed as a stand-alone department, it can better align its efforts and decision-making with the organization’s strategic objectives, as opposed to just the goals of one department. There are also significant cost savings and reductions in risk when this standalone function reports to the COO, because the organization can make enterprise-level decisions regarding investing in KM technology – like intranets, company-wide policies, and practices related to security of information – and process improvement efforts that lead to more efficient delivery of products and services. Ultimately, the COO is responsible for all operational aspects of an organization and when KM is tied to this focus, it can reap the most benefits, both financially and in support of employee and customer satisfaction.

Put simply, when the COO “owns” KM, there is an executive focus that flows down through the entire organization, not just placing KM in a location to be effective, but also clearly communicating the organization’s belief in, and prioritization of, KM.

Any organization that will be successful at enterprise KM will be deliberate about where the KM organization sits, what its authority is, and how initiatives are elevated to leadership for support, sign-off, and sustainment. Regardless of what you’re implementing, always consider what roles will need to be defined and filled to make sure that the solution is kept up to date over time in response to changes within and outside of the organization. This requires an understanding of the organization’s culture, leadership style, decision-making preferences, and predominant motivations. 

There is no one-size-fits-all to KM organization design and implementation. Based on what you know about the organization, you can structure the KM department or governing body to function in a way that supports the upkeep and improvement of your KM tools, programs, and practices. If you need support designing your KM Function to help you achieve your KM strategic goals, contact us.

The post Where Should KM Leadership and Governance Belong in an Organization? appeared first on Enterprise Knowledge.

Better Practices for Collaborative Knowledge Graph Modeling https://enterprise-knowledge.com/better-practices-for-collaborative-knowledge-graph-modeling/ Tue, 24 May 2022 14:29:07 +0000 https://enterprise-knowledge.com/?p=15540

The post Better Practices for Collaborative Knowledge Graph Modeling appeared first on Enterprise Knowledge.

As the scale and scope of the modern data ecosystem grows, we at EK see an increasing number of enterprises recognizing the potential for ontologies to make their data findable and usable throughout the organization. An ontology can provide robust support for standardized data definitions, enterprise-wide alignment among data producers and consumers, and interoperability among diverse data sources and systems, but it usually requires a shift away from legacy data models that were not meeting these needs.

An ontology or knowledge graph cannot be modeled in isolation: in order to make sure the ontology answers business needs and technical requirements, knowledge engineers must partner with and consult stakeholders or domain experts, technical teams, data producers, and data consumers. In addition, a larger project might require a team of knowledge engineers working together to model the domain in question, which carries with it the risk of duplicative effort, divergent modeling approaches, and unclear direction.

These risks can threaten near-term progress goals, long-term sustainability, and the ability of the ontology or knowledge graph to power next-generation data usage across the organization. At EK we have drawn on our extensive experience designing ontologies and implementing knowledge graph solutions to develop a framework of better practices for collaborative modeling among knowledge engineers to help mitigate these risks and drive the immediate and ongoing success of collaborative modeling projects.

Establish Roles and Responsibilities

The solution to effectively dividing labor among a team of modelers has two parts. First, ensuring individual focus and eliminating duplicated effort requires clear, team-wide consensus and alignment on roles and responsibilities. Second, facilitating in-flight work and productive review requires your process to record explicitly who was, is, and will be responsible for each piece of the modeling effort.

In order to facilitate focused modeling and reduce duplication of effort, establish clear roles and responsibilities among members of the team. Dividing the work so that each team member is able to take the lead on a conceptual area of the model allows that team member to become a localized subject-matter expert and facilitates cohesive modeling. While the optimal approach to this division will depend on the specifics of the data and domain, dividing the work by subject area, data element, or data source are often effective strategies.

Then, make it easy to track who on the team is responsible for which parts of the model. This could be simple or complex, from a column in a shared spreadsheet to a dedicated Jira board, as long as it is accessible and comprehensible to everyone on the team. This transparency helps prevent duplication of effort and allows feedback to be targeted appropriately. You can amplify these benefits by simultaneously implementing a version control system for your ontology.


Routine, Frequent Peer Review

When you’re working on a team, make sure you actually collaborate! Division of labor has many benefits, but it is essential that team members do not become siloed from each other. Instituting a process of one-on-one peer reviews and having a regular forum for group feedback and suggestions help to foster collaboration, prevent internal siloing, and create near-term benefits and long-term value.

One-on-one peer review, in addition to offering the quick wins of finding typos or other human errors while they're easily fixed, can strengthen your modeling effort by providing an opportunity for the ontologists to share conceptual approaches. This promotes alignment among the team and reduces the modeling of similar concepts in divergent ways.

Group feedback sessions are a powerful tool for achieving team-wide alignment, providing a forum in which to explore different approaches and resolve them into a unified conceptualization of the model. By surfacing errors and inconsistencies early, iterative review and refinement helps reduce the need for later remodeling and revision.

Measuring Success and Progress

As you embark on any major project, you should always ask yourself what constitutes success and what measures you will use to evaluate whether you achieved it, and an ontology modeling project is no exception. In addition, consider how you will measure and report progress throughout the project, and what information you need to collect as part of your workflow to support this.

Being able to provide concrete measurements to demonstrate the progression of in-flight work will increase accountability, facilitate organizational buy-in, and improve internal morale when the project encounters stumbling blocks. When the project is completed, being able to demonstrate and quantify the project's success and impact will prove the ontology's value and help secure organizational resources for maintaining and enhancing the ontology, knowledge graph, or semantic systems that you have established.

There are many ways to measure the success of an ontology design project, and the particulars of the project will inform which one will be most effective and persuasive. Useful success metrics can include the number of concepts modeled into an ontology; the number of legacy data fields or a percentage of a legacy data model mapped to the ontology; or a number or percentage of high-impact data uses that can be supported and enhanced through your new graph systems.

Moving Forward

When the scope and resources of a project allow it, a team of knowledge engineers modeling collaboratively can tackle bigger projects and achieve more sophisticated results than a single modeler. Using a collaborative approach, we have built ontologies that support standardization and integration of distributed financial data, enable 360-degree customer and enterprise views, power automated content recommendation, and drive enterprise-wide semantic search.

At the same time, this kind of collaboration carries the risk of duplicative effort, divergent modeling approaches, and unclear direction. Leveraging this framework of better practices for knowledge engineer collaboration by establishing roles and responsibilities, engaging in frequent and routine peer review, and measuring success and progress throughout the project can help you mitigate these risks and unlock the long-term value of an enterprise ontology and knowledge graph.

Whether you are starting out in your ontology or knowledge graph journey and aren’t sure where to start or already have a project in flight with mature roadmaps and goals, EK is here to help with our deep experience in scoping, planning, and implementing enterprise ontologies and knowledge graphs. Contact us here to get started.

The post Better Practices for Collaborative Knowledge Graph Modeling appeared first on Enterprise Knowledge.

]]>