Enterprise Knowledge Speaking at KMWorld 2025

Enterprise Knowledge (EK) will once again have a strong presence at the upcoming KMWorld Conference in Washington, D.C. This year, EK is delivering 11 sessions throughout KMWorld and its four co-located events: Taxonomy Boot Camp, Enterprise Search & Discovery, Enterprise AI World, and the Text Analytics Forum. 

EK is offering an array of thought leadership sessions to share KM approaches and methodologies. Several of EK’s sessions include presentations with clients, where presenters jointly deliver advanced case studies on knowledge graphs, enterprise learning solutions, and AI.  



 On November 17, EK-led events will include:

  • Taxonomy Principles to Support Knowledge Management at a Not-for-Profit, featuring Bonnie Griffin, co-presenting with Miriam Heard of YMCA – Learn how Heard and Griffin applied taxonomy design to tame tags, align content types, and simplify conventions, transforming the YMCA’s intranet so staff can find people faster, retrieve information reliably, and share updates with the right audiences.
  • Utilizing Taxonomies to Meet UN SDG Obligations, featuring Benjamin Kass, co-presenting with Mike Cannon of the American Speech-Language-Hearing Association (ASHA) – Discover how ASHA, a UN SDG Publishers Compact signatory, piloted automatic tagging to surface SDG-relevant articles, using taxonomies for robust metadata, analytics, and high-quality content collections.
  • Driving Knowledge Management With Taxonomy and Ontology, featuring Bonnie Griffin, co-presenting with Alexander Zichettello of Honda Development & Manufacturing of America – Explore how Zichettello and Griffin designed taxonomies and ontologies for a major automaker, unifying siloed content and terminology. The presenters will share with attendees a repeatable, standards-based process and best practices for scalable, sustainable knowledge management.

On November 18, EK-led events will include:

  • Taxonomy From 2006 to 2045: Are We Ready for the Future?, moderated by Zach Wahl, EK’s CEO and co-founder – Celebrate 20 years of Taxonomy Boot Camp with a look back at 2006 abstracts, crowd-voted predictions for the next two decades (AI included), lively debate, and a cake-cutting send-off.

On November 19, EK-led events will include:

  • Transforming Content Operations in the Age of AI, featuring Rebecca Wyatt and Elliott Risch – Learn how Wyatt and Risch partnered to leverage an AI proof of concept to prioritize and accelerate content remediation and improve content and search experiences on a flagship Intel KM platform.
  • Tracing the Thread: Decoding the Decision-Making Process With GraphRAG, featuring Urmi Majumder and Kaleb Schultz – Learn about GraphRAG and how pairing generative AI with a standards-based knowledge graph can unify data to tackle complex questions, curb hallucinations, and deliver traceable answers.
  • The Cost of Missing Critical Connections in Data: Suspicious Behavior Detection Using Link Analysis (A Case Study), featuring Urmi Majumder and Kyle Garcia – See how graph-powered link analysis and NLP can uncover hidden connections in messy data, powering fraud detection and risk mitigation, with practical modeling choices and a real-world, enterprise-ready case study.
  • Generating Structured Outputs From Unstructured Content Using LLMs, featuring Kyle Garcia and Joseph Hilger, EK’s COO and co-founder – Discover how LLMs guided by content models break long, unstructured documents into reusable, knowledge graph–ready components, reducing hallucinations while improving search, personalization, and cross-platform reuse.

On November 20, EK-led events will include:

  • Enterprises, KM, & Agentic AI, featuring Jess DeMay, co-presenting with Rachel Teague of Emory Consulting Services – This interactive discussion looks at organizational trends as well as new technologies and processes to enhance knowledge sharing, communication, collaboration, and innovation in the enterprises of the future.
  • Making Search Less Taxing: Leveraging Semantics and Keywords in Hybrid Search, featuring Chris Marino, co-presenting with Jaime Martin of Tax Analysts – Explore how Tax Analysts, the nonpartisan nonprofit behind Tax Notes, scaled an advanced search overhaul that lets subscribers rapidly find what they need while surfacing relevant content they didn’t know to look for.
  • The Future of Enterprise Search & Discovery, a panel including EK’s COO and co-founder Joseph Hilger – Get a glimpse of what’s next in enterprise search and discovery as this panel unpacks agentic AI and emerging trends, offering near and long-term predictions for how tools, workflows, and roles will evolve. 

Come to KMWorld 2025, November 17–20 in Washington, D.C., to hear from EK experts and learn more about the growing field of knowledge management. Register here.

Knowledge Cast – Michal Bachman, CEO of GraphAware

Enterprise Knowledge COO Joe Hilger speaks with Michal Bachman, CEO at GraphAware. GraphAware provides technology and expertise for mission-critical graph analytics, and its graph-powered intelligence analysis platform — Hume — is used by democratic government agencies (law enforcement, intelligence, cybersecurity, defense) and Fortune 500 companies across the world.

In their conversation, Joe and Michal discuss how you can use a graph to investigate criminal networks, what’s next with graphs (hint: ensuring trustworthy AI doesn’t just mean supporting the machines), and some helpful books that experts at GraphAware have released recently.

Check out Knowledge Graphs and LLMs in Action and Neo4j: The Definitive Guide to dive deeper into the topics discussed in this episode!

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

EK Again Recognized as Leading Services Provider by KMWorld

Enterprise Knowledge (EK) has once again been named to KMWorld’s list of the 100 Companies That Matter in Knowledge Management. As the world’s largest dedicated Knowledge Management (KM) consulting firm, EK has been recognized for global leadership in KM consulting services, as well as overall thought leadership in the field, for the eleventh consecutive year.

EK hosts a public knowledge base of over 700 articles on KM, Semantic Layer, and AI thought leadership; produces the top-rated KM podcast, Knowledge Cast; and has published the definitive book on KM benchmarking and technologies, Making Knowledge Management Clickable.

In addition to the Top 100 List, EK was also recently recognized by KMWorld on their list of AI Trailblazers. You can read EK VP Lulit Tesfaye’s thoughts on that recognition here. These new areas of recognition come on the heels of Honda recognizing Enterprise Knowledge as one of their suppliers of the year, and Inc. Magazine listing EK as one of the best places to work in the United States.

Navigating the Retirement Cliff: Challenges and Strategies for Knowledge Capture and Succession Planning

As organizations prepare for workforce retirements, knowledge management should be a key element of any effective succession planning strategy, ensuring a culture of ongoing learning and stability. This piece explores the challenges organizations face in capturing and transferring critical knowledge, alongside practical knowledge management strategies to address them and build more sustainable knowledge-sharing practices.

The Retirement Cliff and Its Implications

The “retirement cliff” refers to the impending wave of retirements as a significant portion of the workforce—particularly Baby Boomers—reaches retirement age. According to labor market trends, millions of experienced professionals are set to retire in the coming years, posing a critical challenge for organizations. The departure of seasoned employees risks the loss of institutional knowledge, technical expertise, and key relationships, leading to operational disruptions and costly efforts to regain lost expertise.

One of the most immediate financial consequences Enterprise Knowledge has seen on several of our engagements is the growing reliance on retirees returning as contractors to fill knowledge and capability gaps, often at significantly higher costs than their original salaries. While this can provide a short-term fix, it also creates a long-term liability. Research from Harvard Business Review and other labor market analyses shows that rehiring former employees without structured knowledge transfer can perpetuate a cycle of dependency, inflate workforce costs, and suppress the development of internal talent. Organizations may pay premium contract rates while still losing institutional knowledge over time, especially if critical expertise remains undocumented or siloed. Without proactive strategies, such as structured succession planning, mentoring, and systematic knowledge capture, organizations risk operational disruption, weakened continuity, and increased turnover-related costs that can amount to billions of dollars annually.

The Role of Knowledge Management in Succession Planning

Knowledge management plays a vital role in succession planning by implementing systems and practices that ensure critical expertise is systematically captured and transferred across generations of employees. Documenting key insights, best practices, and institutional knowledge is essential for mitigating the risk of knowledge loss. This process helps to strengthen organizational continuity and ensures that employees have the knowledge they need to perform their roles effectively and make informed decisions.

The Retirement Cliff: Challenges & Solutions

  • Employee Resistance: Staff hesitate to share knowledge if it feels risky, time-consuming, or undervalued. Solution: Build trust, emphasize benefits, and use incentives or recognition programs to encourage sharing.
  • Cultural Barriers & Silos: Rigid hierarchies and disconnected teams block collaboration and cross-functional flow. Solution: Foster collaboration through Communities of Practice, cross-team projects, and leadership modeling knowledge sharing.
  • Resource Constraints: KM is often underfunded or deprioritized compared to immediate operational needs. Solution: Start small with scalable pilots that demonstrate ROI and secure executive sponsorship to sustain investment.
  • Time Pressures: Rushed retirements capture checklists but miss critical tacit knowledge and insights. Solution: Integrate ongoing knowledge capture into workflows before retirements, not just at exit interviews.

While the table highlights immediate challenges and corresponding solutions, organizations benefit from a deeper set of strategies that address both near-term risks and long-term sustainability. The following sections expand on these themes, outlining actionable approaches that help organizations capture critical knowledge today, while laying the foundation for resilient succession planning tomorrow.

Near-term Strategies: Mitigating Immediate Risk

Engage Employees in Knowledge Capture Efforts

Long-tenured employees approaching retirement have accumulated invaluable institutional knowledge, and their sustained tenure itself demonstrates their consistent value to the organization. When a retirement cliff is looming, organizations should take action to engage those employees in efforts that help to capture and transfer key institutional knowledge before it is lost.

Cast a Wide, Inclusive Net

Organizations often lack visibility into actual retirement timelines. Rather than making assumptions about who might retire or inadvertently pressuring employees to reveal their plans, frame knowledge transfer efforts as part of comprehensive KM practices. By positioning these initiatives as valuable for all long-tenured employees—not just potential retirees—organizations create an inclusive environment that captures critical knowledge. This broader approach not only prepares for potential retirement-related knowledge gaps but also establishes ongoing knowledge transfer as a standard organizational practice.

Acknowledge and Thank Employees

Explicitly acknowledge the expertise and contributions of key knowledge holders participating in efforts. By recognizing their professional legacy and expressing the organization’s desire to preserve and share their wisdom with others, leaders can create a foundation for meaningful participation in knowledge transfer activities. This approach validates key members’ career impact while positioning them as mentors and knowledge stewards for the next generation. Consider setting aside some time from their normal responsibilities to encourage participation.

Reward Knowledge Sharing

Employees are far more likely to engage in knowledge transfer when it is seen as both valuable and valued. In EK’s experience, organizations that successfully foster a culture of knowledge sharing often embed these behaviors into their core talent practices, such as performance evaluations and internal recognition programs. For example, EK has helped to incorporate KM contributions into annual review processes or introduce peer-nominated awards like “Knowledge Champion” to highlight and celebrate individuals who model strong knowledge-sharing behaviors.

Enable Employees to Capture Knowledge

Effective knowledge transfer begins with capturing critical institutional knowledge. This includes both explicit knowledge, such as processes and workflows, and tacit knowledge, such as decision-making frameworks, strategic insights, and the rationale behind past choices. To guide organizations in successful knowledge capture and transfer practices, EK recommends implementing a variety of strategies that help build confidence and make the process manageable.

Provide Documentation Training and Support

Organizations should consider offering dedicated support through roles and teams that naturally align with KM efforts, such as technical documentation, organizational learning and development, or quality assurance. These groups can help introduce employees to the practice and facilitate more effective capture. For example, many organizations focus solely on documenting step-by-step processes, overlooking the tacit knowledge that explains the “why” behind key decisions to provide future employees with critical context. In EK’s experience, preserving and transmitting knowledge of past actions and opinions has given teams the confidence to make more informed decisions and ensure coherence in guidance. This approach is especially valuable from a legal perspective, where understanding the rationale behind decisions is crucial for consistency and compliance.

Help Prioritize the Knowledge to Capture

Organizations can help focus knowledge capture efforts, without overwhelming employees, by prioritizing the types of knowledge to capture. If knowledge falls into one of these categories, it is ideal to prioritize:

    1. Mission-Critical Knowledge – High-impact expertise that is not widely known (e.g., decision-making rationales, specialized processes) is at greatest risk for loss. Encourage employees to prioritize this knowledge first.

    2. Operational Knowledge – Day-to-day processes that can be captured progressively over time. Suggest to employees that they take advantage of workflows and cycles as they are completed to document knowledge in real time from beginning to end.

    3. Contextual Knowledge – Broader insights from specific projects and initiatives are best captured in collective discussions or team reflections from various participants. Aim to make arrangements to put team members in conversation with one another and capture insights.

Embed Knowledge Capture into Workflows

Rather than treating documentation as a separate task, organizations should embed it into the existing processes and workflows where the knowledge is already being used. Integrating documentation creation and review into regular processes helps normalize knowledge capture as a routine part of work. In practice, this may look like employees updating Standard Operating Procedures (SOPs) during routine tasks, recording leadership reflections during key decisions, or incorporating “lessons learned” or retrospective activities into project cycles. Additionally, structured after-action reviews and reflective learning exercises can further strengthen this practice by documenting key takeaways from major projects and initiatives. Beyond improving project and knowledge transfer outcomes, these habits also build durable knowledge assets that support AI-readiness.

Design Succession-Focused Knowledge Sharing Programs

Cultural silos and resistance to sharing knowledge often undermine succession planning. Employees may hesitate to share what they know due to fears about losing job security, feeling undervalued, or simply lacking the time to do so. To overcome these challenges, organizations must implement intentional knowledge transfer programs, as outlined below, that aim to prevent a forthcoming retirement cliff from leaving large gaps.

Create Knowledge Transfer Interview Programs

Pairing long-tenured staff with successors ensures that critical institutional knowledge is passed on before key departures. Create thoughtful interview programming that takes the burden of initiating or handling administrative efforts off the experienced staff. EK recently partnered with a global automotive manufacturing company to design and facilitate structured knowledge capture and transfer plans for high-risk roles that were eligible for retirement, including walkthroughs of core responsibilities, stakeholder maps, decision-making criteria, and context around ongoing initiatives. These sessions were tracked and archived, enabling smoother transitions and reducing institutional memory loss.

EK also supported a federal agency in implementing a leadership knowledge transfer interview series with retiring senior leaders to capture institutional knowledge and critical insights from their tenure. These conversations focused on navigating the agency’s operations, lessons for successors, and role-specific takeaways. EK distilled these into concise, topical summaries that were tagged for findability and reuse, laying the foundation for a repeatable, agency-wide approach to preserving institutional knowledge.

Foster Communities of Practice

Encourage cross-functional collaboration and socialize knowledge sharing across the organization by establishing communities of practice. These programs provide opportunities for employees to gather regularly around a common professional interest and to learn from each other by sharing ideas, experiences, and best practices. Involve long-tenured staff in these efforts and encourage them to develop topics around their expertise. EK has seen firsthand how these practices promote ongoing knowledge exchange, helping employees stay connected and informed across departments, even during leadership transitions.

Offer Formal Knowledge Exchange Programs

Knowledge Exchange Programs, like job shadowing, expert-led cohorts, and mentorship initiatives, create clear pathways for employees to share and document expertise before transitions occur. Long-tenured employees are often excellent candidates to take the leading role in these efforts because of the vast knowledge they hold.

Ultimately, effective succession planning is not just about capturing what people know—it is about creating a culture where knowledge transfer is expected, supported, and celebrated. By addressing resistance and embedding knowledge-sharing into the rhythm of daily work, organizations can reduce risk, improve continuity, and build long-term resilience.

Long-term Strategies: Building Sustainable Knowledge Flow

While short-term efforts can help reduce immediate risk, organizations also need long-term strategies that embed knowledge management into daily operations and ensure continuity across future workforce transitions. That is why EK believes Artificial Intelligence (AI) and Knowledge Intelligence (KI) are essential tools in capturing, contextualizing, and preserving knowledge in a way that supports sustainable transitions and continuity.

Below are long-term, technology-enabled strategies that organizations can adopt to complement near-term efforts and future-proof institutional knowledge.

Structure and Contextualize Knowledge with a Semantic Foundation

EK sees contextual understanding as central to KM and succession planning, as adding business context to knowledge helps to illuminate and interpret meaning for users. By breaking down content into dynamic, structured components and enriching it with semantic metadata, organizations can preserve not only the knowledge itself, but also the meaning, rationale, and relationships behind it. EK has supported clients in building semantic layers and structured knowledge models that tag and categorize lessons learned, decisions made, and guidance provided, enabling content to be reused, assembled, and delivered at the point of need. This approach helps ensure continuity through leadership transitions, reduces duplication of effort, and allows institutional knowledge to evolve without losing its foundational context.
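
As a rough illustration of what a semantically enriched knowledge asset component might look like, the sketch below represents a single lesson-learned component carrying taxonomy tags, rationale, and relationships as metadata. The field names, tags, and identifiers are illustrative assumptions, not a prescribed EK model.

```python
# A minimal, illustrative knowledge asset component: the content itself plus
# semantic metadata that preserves meaning, rationale, and relationships.
import json

lesson_learned = {
    "id": "lesson-2024-017",
    "type": "LessonLearned",                      # component type from a content model
    "body": "Phased rollouts reduced migration risk for the risk platform.",
    "rationale": "A big-bang cutover failed previously; phasing kept users productive.",
    "tags": ["knowledge-transfer", "risk-platform", "migration"],   # taxonomy terms
    "relatedDecisions": ["decision-0042"],        # links a knowledge graph can traverse
    "author": "jane.doe",
    "validUntil": "2026-01-01",
}

print(json.dumps(lesson_learned, indent=2))
```

Because the rationale and relationships travel with the component, downstream systems (search, portals, or AI assistants) can reuse it without losing the context that explains why the lesson matters.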

Leverage Knowledge Graphs and Intelligent Portals

Traditional knowledge repositories, while well-intentioned, often become static libraries that users struggle to navigate. EK has helped organizations move from these repositories to dynamic knowledge ecosystems by implementing knowledge graphs and a semantic layer. These approaches connect once disparate data, creating relationships between concepts, decisions, and people.

To leverage the power of the knowledge graph and semantic layer, EK has designed and deployed knowledge portals for several clients, providing a means for users to engage with the semantic layer. These portals consolidate information from multiple existing systems into a streamlined, user-friendly landing page. Each portal is designed to serve as a central hub for enterprise knowledge, connecting users to the right information, experts, and insights they need to do their jobs, while also supporting smoother transitions when staff move on or new team members step in. With intuitive navigation and contextualized search, the portal helps staff quickly find complete, relevant answers across multiple systems, explore related content, and access expertise—all within a single experience.
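
To make the idea of connected concepts, decisions, and people concrete, here is a small sketch using rdflib. The classes, properties, and names are invented for illustration; a real implementation would use the organization's own ontology and a dedicated graph database behind the portal.

```python
# A minimal knowledge graph sketch: people, decisions, and concepts become
# connected nodes rather than records trapped in separate systems.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/ek/")
g = Graph()

g.add((EX.JaneDoe, RDF.type, EX.Person))
g.add((EX.JaneDoe, RDFS.label, Literal("Jane Doe")))
g.add((EX.Decision42, RDF.type, EX.Decision))
g.add((EX.Decision42, EX.madeBy, EX.JaneDoe))
g.add((EX.Decision42, EX.about, EX.OperationalRisk))
g.add((EX.OperationalRisk, RDFS.label, Literal("Operational Risk")))

# A portal-style question: "Who made decisions about operational risk?"
query = """
SELECT ?personLabel WHERE {
    ?decision a ex:Decision ;
              ex:madeBy ?person ;
              ex:about ex:OperationalRisk .
    ?person rdfs:label ?personLabel .
}
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.personLabel)  # -> Jane Doe
```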

Augment Search and Discovery with Artificial Intelligence

To reduce the friction of finding and applying knowledge, EK has helped clients enhance knowledge portals with AI capabilities, integrating features like context-aware search, intelligent recommendations, and predictive content delivery.  These features anticipate user intent, guide employees to relevant insights, and surface related content that might otherwise be missed. When paired with a strong semantic foundation, these enhancements transform a portal from a basic search tool into an intelligent instrument that supports real-time learning, decision-making, and collaboration across the enterprise.
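
As a simplified stand-in for the recommendation features described above, the sketch below ranks assets by shared taxonomy tags. A production system would typically combine this kind of semantic signal with embeddings, usage analytics, and user context; the asset titles and tags here are invented.

```python
# Toy tag-overlap recommender: suggest related assets based on shared taxonomy terms.
def recommend(current_asset_tags, candidate_assets, top_n=3):
    """Rank candidates by how many taxonomy tags they share with the current asset."""
    scored = [
        (len(set(current_asset_tags) & set(asset["tags"])), asset["title"])
        for asset in candidate_assets
    ]
    # Highest overlap first; assets with no shared tags are dropped.
    return [title for score, title in sorted(scored, reverse=True) if score > 0][:top_n]

assets = [
    {"title": "SOP: Quarterly Risk Review", "tags": {"risk", "process", "finance"}},
    {"title": "Onboarding Guide", "tags": {"hr", "onboarding"}},
    {"title": "Lessons Learned: Risk Model Rollout", "tags": {"risk", "lessons-learned"}},
]
print(recommend({"risk", "finance"}, assets))
```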

Automate and Scale Tagging with AI-Assisted Curation

Manual tagging is often cited as one of the more time-consuming and inconsistent aspects of content management. To improve both the speed and quality of metadata, EK has helped clients implement AI-assisted tagging solutions that automatically classify content based on a shared taxonomy.

We recommend a human-in-the-loop model, where AI performs the initial tagging, and subject matter experts validate results to preserve nuance and apply expertise. This approach allows organizations to scale content organization efforts while maintaining accuracy and alignment.
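
The workflow can be as simple as the sketch below: an automated suggester proposes tags from the shared taxonomy, and nothing is published until a subject matter expert approves it. The keyword matcher here is only a placeholder for whatever classifier or LLM an organization actually uses; the taxonomy terms and document are invented.

```python
# Human-in-the-loop tagging sketch: AI suggests, subject matter experts validate.
TAXONOMY = {"risk", "compliance", "onboarding", "lessons-learned"}

def suggest_tags(text: str) -> set:
    # Placeholder classifier: naive keyword matching against the shared taxonomy.
    return {term for term in TAXONOMY if term.replace("-", " ") in text.lower()}

def review(doc_id: str, suggested: set, approved_by_sme: set) -> dict:
    # Only SME-approved tags are published; rejected suggestions are retained so the
    # classifier (and the taxonomy itself) can be tuned over time.
    return {
        "doc": doc_id,
        "published_tags": suggested & approved_by_sme,
        "rejected_tags": suggested - approved_by_sme,
    }

suggested = suggest_tags("Lessons learned from the compliance audit")
print(review("doc-001", suggested, approved_by_sme={"compliance"}))
```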

For example, we partnered with a leading development bank to build an AI-powered knowledge platform that processed data from eight enterprise systems. Using a multilingual taxonomy of over 4,000 terms, the platform automatically tagged content and proactively delivered contextual content recommendations across the enterprise. The solution dramatically improved enterprise search, reduced time spent locating information, and earned recognition from leadership as one of the organization’s most impactful knowledge initiatives.

Integrate Technology, People, and Process Within Succession Planning

The most successful organizations do not treat knowledge technologies as standalone tools. Instead, they integrate them into broader KM and succession planning strategies, ensuring these solutions support, rather than replace, human collaboration and expertise.

In EK’s experience, when AI, knowledge graphs, and semantic metadata are used to enhance existing processes—like onboarding, leadership transitions, or project handovers—they become powerful enablers of continuity. These tools help protect institutional knowledge, reduce bottlenecks, and enable repeatable practices for knowledge transfer across roles, teams, and time.

As part of a long-term KM strategy, this allows organizations to evolve from reactive knowledge capture to proactive, ongoing knowledge flow.

Measuring Knowledge Transfer Impact

With the tools and advice for impactful knowledge capture and transfer in place, measuring the effectiveness of knowledge transfer initiatives is the essential next step to ensure that succession planning goals are being met and that these efforts are producing meaningful outcomes. Key performance indicators (KPIs) and metrics can help track the success of these initiatives and provide insights into their impact on the organization’s leadership pipeline.

Employee Engagement: One key indicator is active employee participation in knowledge transfer programs. This includes involvement in mentoring, workshops, job shadowing, and other formal knowledge-sharing activities. Tracking participation levels helps assess cultural adoption and provides insight into where additional encouragement or resources may be needed. Measurement examples:
  • Workshop attendance records
  • Peer learning program participation rates
  • Surveys assessing perceived value and engagement

Knowledge Retention: Capturing knowledge is only part of the equation. Ensuring it is understood and applied is equally important. By assessing how well successors are able to retain and use critical knowledge, organizations can confirm whether the transfer process is actually supporting operational continuity and decision quality. Measurement examples:
  • Post-transition employee self-evaluations
  • Peer or supervisor assessments
  • Case reviews of decisions informed by legacy knowledge

Transitioner Feedback: Understanding the perspective of new leaders or incoming staff can reveal valuable insights into what worked and what did not during a handoff. Their feedback can help organizations fine-tune interview guides, documentation practices, or onboarding resources for future transitions. Measurement examples:
  • Qualitative feedback via structured interviews
  • New hire or successor surveys
  • Retrospectives after major transitions

Future Leader Readiness: Evaluating how prepared upcoming leaders are to step into key roles, both in terms of process knowledge and organizational culture, can serve as a long-term measure of success. Measurement examples:
  • Succession readiness assessments
  • Familiarity with key systems, priorities, and workflows
  • Participation in ongoing KM or leadership development programs

Closing

Navigating the retirement cliff requires both immediate action and long-term planning. By addressing resistance, dismantling silos, embedding knowledge-sharing into daily work, and leveraging technology, organizations can reduce risk, preserve critical expertise, and build long-term resilience. Need help developing a strategy that supports both near-term needs and long-term success? Let’s connect to explore tailored solutions for your organization.

Defining Governance and Operating Models for AI Readiness of Knowledge Assets

Artificial intelligence (AI) solutions continue to capture both the attention and the budgets of many organizations. As we have previously explained, a critical factor to the success of your organization’s AI initiatives is the readiness of your content, data, and other knowledge assets. When correctly executed, this preparation will ensure your knowledge assets are of the appropriate quality and semantic structure for AI solutions to leverage with context and inference, while identifying and exposing only the appropriate assets to the right people through entitlements.

This, of course, is an ongoing challenge rather than a moment-in-time initiative. To ensure the important work you’ve done to get your content, data, and other assets AI-ready is not lost, you need governance as well as an operating model to guide it. Indeed, well before any AI readiness initiative begins, governance and the supporting organization must be top of mind.

Governance is not a new term within the field. Historically, we’ve identified four core components to governance in the context of content or data:

  • Business Case and Measurable Success Criteria: Defining the value of the solution and the governance model itself, as well as what success looks like for both.
  • Roles and Responsibilities: Defining the individuals and groups necessary for governance, as well as the specific authorities and expectations of their roles.
  • Policies and Procedures: Detailing the timelines, steps, definitions, and actions for the associated roles to play.
  • Communications and Training: Laying out the approach to two-way communications between the associated governance roles/groups and the various stakeholders.

These traditional components of governance all have held up, tried and true, over the quarter-century since we first defined them. In the context of AI, however, it is important to go deeper and consider the unique aspects that artificial intelligence brings into the conversation. Virtually every expert in the field agrees that AI governance should be a priority for any organization, but that must be detailed further in order to be meaningful.

In the context of AI readiness for knowledge assets, we focus AI governance, and more broadly its supporting operating model, on five key elements for success:

  • Coordination and Enablement Over Execution
  • Connection Instead of Migration
  • Filling Gaps to Address the Unanswerable Questions
  • Acting on “Hallucinations”
  • Embedding Automation (Where It Makes Sense)

There is, of course, more to AI governance than these five elements, but in the context of AI readiness for knowledge assets, our experience shows that these are the areas where organizations should be focusing and shifting away from traditional models. 

1) Coordination and Enablement Over Execution

In traditional governance models (e.g., content governance, data governance), most of the work was done in the context of a single system. Content would be in a content management system and have a content governance model. Data would be in a data management solution and have a data governance model. The shift is that today’s AI governance solutions shouldn’t care what types of assets you have or where they are housed. This presents an amazing opportunity to remove artificial silos within an organization, but brings a marked challenge.

If you were previously defining a content governance model, you most likely possessed some level of control or ownership over your content and document management systems. Likewise, if you were in charge of data governance, you likely “own” some or all of the major data solutions, like master data management or a data warehouse, within your organization. With AI, however, an enormous benefit of a correctly architected enterprise AI solution that leverages a semantic layer is that it doesn’t require you to own these source systems. The systems housing the content, data, and other knowledge assets are likely, at least partly, managed by other parts of your organization. In other words, in an AI world, you have less control over the sources of the knowledge assets, and thereby over the knowledge assets themselves. This may well change as organizations evolve in the “Age of AI,” but for now, the role and responsibility for AI governance becomes more about coordination and less about execution or enforcement.

In practice, this means an AI Governance for Knowledge Asset Readiness group must coordinate with the owners of the various source systems for knowledge assets, providing additive guidance to define what it means to have AI-ready assets as well as training and communications to enable and engage system and asset owners to understand what they must do to have their content, data, and other assets included within the AI models. The word “must” in the previous sentence is purposeful. You alone may not possess the authority of an information system owner to define standards for their assets, but you should have the authority to choose not to include those assets within the enterprise AI solution set.

How do you apply that authority? As the lines continue to blur between the purview of KM, Data, and AI teams, this AI Governance for Knowledge Asset Readiness group should comprise representatives from each of these once siloed teams to co-own outcomes as new AI use cases, features, and capabilities are developed. The AI governance group should be responsible for delineating key interaction points and expected outcomes across teams and business functions to build alignment, facilitate planning and coordination, and establish expectations for business and technical stakeholders alike as AI solutions evolve. Further, this group should define what it means (and what is required) for an asset to be AI-ready. We cover this in detail in previous articles, but in short, this boils down to semantic structure, quality, and entitlements as the three core pillars to AI readiness for knowledge assets. 

2) Connection Instead of Migration

The idea of connections over migration aligns with the previous point. Past monolithic efforts in your organization would commonly have included massive migrations and consolidations of systems and solutions. The roadmaps of past MDMs, data warehouses, and enterprise content management initiatives are littered with failed migrations. Again, part of the value of an enterprise AI initiative that leverages a semantic layer, or at least a knowledge graph, is that you don’t need to absorb the cost, complexity, and probable failure of a massive migration. 

Instead, the role of the AI Governance for Knowledge Asset Readiness group is one of connections. Once the group has set the expectation for AI-ready knowledge assets, the next step is to ensure the systems that house those assets are connected and available, ready for the enterprise AI solutions to be ingested and understood. This can be a highly iterative process, not to be rushed, as the sanctity of the assets ingested by AI is more important than their depth. Said differently, you have few chances to deliver wrong answers—your end users will lose trust quickly in a solution that delivers inaccurate information that they know is unmistakably incorrect; but if they receive an incomplete answer instead, they will be more likely to raise this and continue to engage. The role of this AI governance group is to ensure the right systems and their assets are reliably available for the AI solution(s) at the right time, after your knowledge assets have passed through the appropriate requirements.

 

3) Filling Gaps to Address the Unanswerable Questions

As the AI solutions are deployed, the shift for AI governance moves from being proactive to reactive. There is a great opportunity associated with this that bears a particular focus. In the history of knowledge management, and more broadly the fields of content management, data management, and information management, there’s always been a creeping concern that an organization “doesn’t know what it doesn’t know.” What are the gaps in knowledge? What are the organizational blind spots? These questions have been nearly impossible to answer at the enterprise level. However, with enterprise-level AI solutions implemented, the ability to have this awareness is suddenly a possibility.

Even before deploying AI solutions, a well-designed semantic layer can help pinpoint organizational gaps in knowledge by finding taxonomy elements that have few or no knowledge assets applied to them. However, this potential is magnified once the AI solution is fully defined. Today’s mature AI solutions are “smart” enough to know when they can’t answer a question and to highlight that unanswerable question to the AI governance group. Imagine possessing the organizational intelligence to know what your colleagues are seeking to understand, having insight into what they are trying to learn or answer but currently cannot.

In this way, once an AI solution is deployed, the primary role of the AI governance group should be to diagnose and then respond to these automatically identified knowledge gaps, using their standards to fill them. It may be that the information does, in fact, exist within the enterprise, but that the AI solution wasn’t connected to those knowledge assets. Alternatively, it may be that the right semantic structure wasn’t placed on the assets, resulting in a missed connection and a false gap from the AI. However, it may also be that the answer to the “unanswerable” question only exists as tacit knowledge in the heads of the organization’s experts, or doesn’t exist at all. This is the most core and true value of the field of knowledge management, and has never been so possible.
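
In practice, the two signals described here (taxonomy terms with no applied assets, and questions the AI could not answer) can be surfaced with very simple tooling. The sketch below is a toy illustration with invented topics, tags, and questions, not a production gap-analysis pipeline.

```python
# Sketch of knowledge gap detection from two signals the governance group can act on.
from collections import Counter

taxonomy = {"export-controls", "supplier-onboarding", "risk-appetite"}
asset_tags = [{"supplier-onboarding"}, {"risk-appetite"}, {"risk-appetite"}]
unanswered_log = [
    "What is our export control policy for India?",
    "Who approves export control exceptions?",
]

# 1) Taxonomy terms with no applied assets are candidate blind spots.
covered = set().union(*asset_tags)
print("Uncovered topics:", taxonomy - covered)

# 2) Clusters of unanswerable questions show what colleagues are trying to learn.
topic_counts = Counter(
    term for question in unanswered_log for term in taxonomy
    if term.replace("-", " ").split()[0] in question.lower()
)
print("Most-requested gaps:", topic_counts.most_common())
```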

4) Acting on “Hallucinations”

Aligned with the idea of filling gaps, a similar role for the AI governance group should be to address hallucinations, or failures of AI to deliver an accurate, consistent, and complete “answer.” For organizations attempting to implement enterprise AI, a hallucination is little more than a cute word for an error, and should be treated as such by the AI governance group. There are many reasons for these errors, ranging from poor-quality knowledge assets (i.e., wrong, outdated, near-duplicate, or conflicting) and insufficient semantic structure (e.g., taxonomy, ontology, or a business glossary) to poor logic built into the model itself. Any of these issues should be treated with immediate action. Your organization’s end users will quickly lose trust in an AI solution that delivers inaccurate results. Your governance model and associated organizational structure must be equipped to act quickly: first, to leverage communications and feedback channels so that end users tell you when they believe something is inaccurate or incomplete, and then to diagnose and address it.

As a note, for the most mature organizations, this action won’t be entirely reactive. For the most mature, organizational subject matter experts will be involved in perpetuity, especially right before and after enterprise AI deployment, to hunt for errors in these systems. Commonly, you can consider this governance function as the “Hallucination Killers” within your organization, likely to be one of the most critical actions as AI continues to expand.

5) Embedding Automation (Where It Makes Sense)

Finally, one of the most important roles of an AI governance group will be to use AI to make AI better. Almost everything we’ve described above can be automated. AI can and should be used to automate the identification of knowledge gaps, as well as to solve those gaps by pinpointing organizational subject matter experts and targeting them to deliver their learning and experience at the right moments. It can also play a major role in helping to apply the appropriate semantic structure to knowledge, through tagging of taxonomy terms as metadata or identification of potential terms for inclusion in a business glossary. Central to all of this automation, however, is keeping the ‘human in the loop’: the AI governance group plays an advisory and oversight role throughout these automations to ensure the design doesn’t fall out of alignment. This element further facilitates AI governance coordination across the organization by supporting stakeholders and knowledge asset stewards through technical enablement.

All of this presents a world of possibility. Governance was historically one of the drier and more esoteric concepts within the field, often where good projects went bad. We have the opportunity to do governance better by leveraging AI in the areas where humans historically fell short, while maintaining the important role of human experts with the right authority to ensure organizational alignment and value.

If your AI efforts aren’t yet yielding the results you expected, or you’re seeking to get things started right from the beginning, contact EK to help you.

Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer

Today’s organizations are flooded with opportunities to apply AI and advanced data experiences, but many struggle with where to focus first. Leaders are asking questions like: “Which AI use cases will bring the most value? How can we connect siloed data to support them?” Without a clear strategy, quick start-ups and vendors make it easy to spin wheels on experiments that never scale. As more organizations recognize the value of meaningful, connected data experiences via a Semantic Layer, many find themselves unsure of how to begin their journey, or how to sustain meaningful progress once they begin.

A well-defined Semantic Layer strategy is essential to avoid costly missteps in planning or execution, secure stakeholder alignment and buy-in, and ensure long-term scalability of models and tooling.

This blog outlines the key components of a successful Semantic Layer strategy, explaining how each component supports a scalable implementation and contributes to unlocking greater value from your data.

What is a Semantic Layer?

The Semantic Layer is a framework that adds rich structure and meaning to data by applying categorization models (such as taxonomies and ontologies) and using semantic technologies like graph databases and data catalogs. Your Semantic Layer should be a connective tissue that leverages a shared language to unify information across systems, tools, and domains. 

Data-rich organizations often manage information across a growing number of siloed repositories, platforms, and tools. The lack of a shared structure for how data is described and connected across these systems ultimately slows innovation and undermines initiatives. Importantly, your semantic layer enables humans and machines to interpret data in context and lays the foundation for enterprise-wide AI capabilities.    

 

What is a Semantic Layer Strategy?

A Semantic Layer Strategy is a tailored vision outlining the value of using knowledge assets to enable new tools and create insights through semantic approaches. This approach ensures your organization’s semantic efforts are focused, feasible, and value-driven by aligning business priorities with technical implementation. 

Regardless of your organization’s size, maturity, or goals, a strong Semantic Layer Strategy enables you to achieve the following:

1. Articulate a clear vision and value proposition.

Without a clear vision, semantic layer initiatives risk becoming scattered and mismanaged, with teams pulling in different directions and value to the organization left unclear. The Semantic Layer vision serves as the “North Star,” or guiding principle for planning, design, and execution. Organizations can realize a variety of use cases via a Semantic Layer (including advanced search, recommendation engines, personalized knowledge delivery, and more), and Semantic Layer Strategy helps to define and align on what a Semantic Layer can solve for your organization.

The vision statement clearly answers three core questions:

  • What is the business problem you are trying to solve?
  • What outcomes and capabilities are you enabling?
  • How will you measure success?

These three items create a strategic narrative that business and technical stakeholders alike can understand, and enable discussions to gain executive buy-in and prioritize initiative efforts. 

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the development of a data strategy for operational risk for a bank seeking to create a unified view of highly regulated data dispersed across siloed repositories. By framing a clear vision statement for the Bank’s semantic layer, EK guided the firm to establish a multi-year program to expand the scope of data and continually enable new data insights and capabilities that were previously impossible. For example, users of a risk application could access information from multiple repositories in a single knowledge panel within the tool rather than hunting for it in siloed applications. The Bank’s Semantic Layer vision is contained in a single easy-to-understand one-pager that has been used repeatedly as a rallying point to communicate value across the enterprise, win executive sponsorship, and onboard additional business groups into the semantic layer initiative.

2. Assess your current organizational semantic maturity.

A semantic maturity assessment looks at the semantic structures, programs, processes, knowledge assets and overall awareness that already exist at your organization. Understanding where your organization lies on the semantic maturity spectrum is essential for setting realistic goals and sequencing a path to greater maturity. 

  • Less mature organizations may lack formal taxonomies or ontologies, or may have taxonomies and ontologies that are outdated, inconsistently applied, or not integrated across systems. They have limited (or no) semantic tooling and few internal semantic champions. Their knowledge assets are isolated, inconsistently tagged (or untagged) documents that require human interpretation to understand and are difficult for systems to find or connect.
  • More mature organizations typically have well-maintained taxonomies and/or ontologies, have established governance processes, and actively use semantic tooling such as knowledge graphs or business glossaries. More than likely, there are individuals or groups who advocate for the adoption of these tools and processes within the organization. Their knowledge assets are well-structured, consistently tagged, and interconnected pieces of content that both humans and machines can easily discover, interpret, and reuse.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK conducted a comprehensive semantic maturity assessment of the current state of the Bank’s semantics program to uncover strengths, gaps, and opportunities. This assessment included:

  • Knowledge Asset Assessment: Evaluated the connectedness, completeness, and consistency of existing risk knowledge assets, identifying opportunities to enrich and restructure them to support redesigned application workflows.
  • Ontology Evaluation: Reviewed existing ontologies describing risk at the firm to assess accuracy, currency, semantic standards compliance, and maintenance practices.
  • Category Model Evaluation: Created a taxonomy tracker to evaluate candidate categories for a unified category management program, focusing on quality, ownership, and ongoing governance.
  • Architecture Gap Analysis and Tooling Recommendation: Reviewed existing applications, APIs, and integrations to determine whether components should be reused, replaced, or rebuilt.
  • People & Roles Assessment: Designed a target operating model to identify team structures, collaboration patterns, and missing roles or skills that are critical for semantic growth.

Together, these evaluations provided a clear benchmark of maturity and guided a right-sized strategy for the bank. 

3. Create a shared conceptual knowledge asset model. 

When it comes to strategy, executive stakeholders don’t want to see exhaustive technical documentation–they want to see impact. A high-level visual model of what your Semantic Layer will achieve brings a Semantic Layer Strategy to life by showing how connected knowledge assets can enable better decisions and new insights. 

Your data model should show, in broad strokes, what kinds of data will be connected at the conceptual level. For example, your data model could show that people, business units, and sales reports can be connected to answer questions like, “How many people in the United States created documents about X Law?” or “What laws apply to me when writing a contract in Wisconsin?” 

In sum, it should focus on how people and systems will benefit from the relationships between data, enabling clearer communication and shared understanding of your Semantic Layer’s use cases. 
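
For illustration, the toy graph below encodes the kinds of relationships such a conceptual model might describe and answers the first example question above. The classes, properties, and data are invented for the sketch; a real model would reflect your organization's own entities and terminology.

```python
# A toy version of a conceptual knowledge asset model, queried with SPARQL via rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/model/")
g = Graph()
triples = [
    (EX.Alice, RDF.type, EX.Person), (EX.Alice, EX.locatedIn, EX.UnitedStates),
    (EX.Alice, EX.created, EX.Doc1), (EX.Doc1, EX.about, EX.XLaw),
    (EX.Bob, RDF.type, EX.Person), (EX.Bob, EX.locatedIn, EX.Canada),
    (EX.Bob, EX.created, EX.Doc2), (EX.Doc2, EX.about, EX.OtherLaw),
]
for t in triples:
    g.add(t)

# "How many people in the United States created documents about X Law?"
q = """
SELECT (COUNT(DISTINCT ?person) AS ?n) WHERE {
    ?person a ex:Person ;
            ex:locatedIn ex:UnitedStates ;
            ex:created ?doc .
    ?doc ex:about ex:XLaw .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.n)  # -> 1
```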

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK collaborated with data owners to map out core concepts and their relationships in a single, digestible diagram. The conceptual knowledge asset model served as a shared reference point for both business and technical stakeholders, grounding executive conversations about Semantic Layer priorities and guiding onboarding decisions for data and systems. 

By simplifying complex data relationships into a clear visual, EK enabled alignment across technical and non-technical audiences and built momentum for the Semantic Layer initiative.

4. Develop a practical and iterative roadmap for implementation and scale.

With your vision, assessment, and foundational conceptual model in place, the next step is translating your strategy into execution. Your Semantic Layer roadmap should be outcome-driven, iterative, and actionable. A well-constructed roadmap provides not only a starting point for your Semantic Layer initiative, but also a mechanism for continuous alignment as business priorities evolve. 

Importantly, your roadmap should not be a rigid set of instructions; rather, it should act as a living guide. As your semantic maturity increases and business needs shift, the roadmap should adapt to reflect new opportunities while keeping long-term goals in focus. While the roadmap may be more detailed and technically advanced for highly mature organizations, less mature organizations may focus their roadmap on broader strokes such as tool procurement and initial category modeling. In both cases, the roadmap should be tailored to the organization’s unique needs and maturity, ensuring it is practical, actionable, and aligned to real priorities.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the creation of a roadmap focused on expanding the firm’s existing semantic layer. Through planning sessions, EK identified the necessary categories, ontologies, tooling, and architecture uplifts needed to chart forward on their Semantic Layer journey. Once a strong foundation was built, additional planning sessions centered on adding new categories, onboarding additional data concepts, and refining ontologies to increase coverage and usability. Through sessions with key stakeholders responsible for the growth of the program, EK prioritized high-value expansion opportunities and recommended governance practices to sustain long-term scale. This enabled the firm to confidently evolve its Semantic Layer while maintaining alignment with business priorities and demonstrating measurable impact across the organization.

 

Conclusion

A successful Semantic Layer Strategy doesn’t come from technology alone; it comes from a clear vision, organizational alignment, and intentional design. Whether you’re just getting started on your semantics journey or refining your Semantic Layer approach, Enterprise Knowledge can support your organization. Contact us at info@enterprise-knowledge.com to discuss how we can help bring your Semantic Layer strategy to life.

The post Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer appeared first on Enterprise Knowledge.

Capture as You Work: Embedding Knowledge Capture in Daily Work
https://enterprise-knowledge.com/capture-as-you-work-embedding-knowledge-capture-in-daily-work/ | Fri, 03 Oct 2025

Knowledge capture is most effective when it is embedded as part of your daily work, not an occasional task. But we know that it is easier said than done. 

Enterprise Knowledge regularly hears from our clients that: 

  • “We don’t have time for documentation with everything going on.”
  • “We’re not sure how to capture knowledge in a way that is useful to others.”
  • “People don’t know what they can or can’t share.”

These are real barriers, and this blog and accompanying infographic address them directly. It is not about doing more. It is about working smarter by embedding lightweight, effective knowledge-sharing habits into what you are already doing. Over time, these habits create durable knowledge assets that strengthen organizational memory and prepare your content and data for AI-readiness.

 

Integrate Knowledge Capture Into the Flow of Work

Small changes can make a big impact, especially when they reduce friction and feel like a natural part of the workday. Start by using familiar tools to ensure employees can document and share knowledge within the platforms they already use. This lowers barriers to participation and makes it easier to integrate knowledge sharing into the flow of work.

Standardized templates offer a simple, structured way to capture lessons learned, best practices, and key insights. The templates themselves serve as a guide, prompting employees on what details to capture and where those details belong. This reduces the cognitive load and guesswork that often gets in the way of documenting knowledge.

To reinforce the habit, build knowledge capture tasks into process and project checklists, or use workflow triggers that remind employees when it is time to reflect and share. Until knowledge-sharing practices are fully embedded, timely prompts help ensure action happens at the right moment.

Some moments naturally lend themselves to knowledge capture, such as project closeouts, client interactions, onboarding, and major decisions. These are high-value opportunities where small, structured contributions can have an outsized impact. Our blog on High Value Moments of Content Capture expands on this by showing how to identify the right moments and implement simple practices to capture knowledge effectively when it matters most.

 

Automate Where You Can

Leverage automated and AI-powered processes to further enhance knowledge capture, minimizing manual effort and making information more accessible. Low-effort, intelligent solutions include:

  • Automated meeting transcription and indexing capture discussions with minimal effort, converting conversations into structured content that is searchable and readily available for reference.
  • AI-powered recommendations proactively surface relevant documentation within collaboration tools, reducing the need for employees to search for critical information manually.
  • Auto-classification of content streamlines knowledge organization by automatically tagging and categorizing information, ensuring documents and insights are consistently structured and easy to retrieve.
  • AI-driven named entity recognition (NER) automatically extracts and tags key information in real time, transforming unstructured content into easily searchable and actionable knowledge (see the sketch after this list).

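As a concrete illustration of the last item above, the sketch below uses the open-source spaCy library to pull people, organizations, and dates out of an unstructured note so they can be proposed as tags. The sample text is invented, and a production pipeline would typically use a model tuned to your domain, with a human reviewing the output.

```python
# Illustrative only: extract named entities from an unstructured note with spaCy.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # small, general-purpose English model

note = (
    "On March 3, Dana Smith from the Claims team met with Acme Insurance "
    "to review the Wisconsin contract renewal."
)

doc = nlp(note)
proposed_tags = [(ent.text, ent.label_) for ent in doc.ents]  # e.g., PERSON, ORG, DATE, GPE
print(proposed_tags)
```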
 

Closing Thoughts

When knowledge capture is built into existing workflows, rather than treated as a separate activity, staff do not have to choose between sharing what they know and doing their job. The goal is not perfection; it is progress through building consistent, low-effort habits.

Whether your organization is just starting to explore knowledge capture or is ready to scale existing practices with automation, EK can help. Our approach is practical and tailored: we will meet you where you are and co-design right-sized solutions that fit your current capacity and goals. Contact us to learn more.

The post Capture as You Work: Embedding Knowledge Capture in Daily Work appeared first on Enterprise Knowledge.

How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready
https://enterprise-knowledge.com/how-to-fill-your-knowledge-gaps-to-ensure-youre-ai-ready/ | Mon, 29 Sep 2025

“If only our company knew what our company knows” has been a longstanding lament of leaders whose organizations cannot mobilize their knowledge and capabilities toward their strategic priorities. Similarly, locating knowledge gaps in the organization, whether we were initially aware of them (known unknowns) or initially unaware of them (unknown unknowns), represents an opportunity to gain new capabilities, mitigate risks, and navigate the ever-accelerating business landscape more nimbly.  

AI implementations are already showing signs of knowledge gaps: hallucinations, wrong answers, incomplete answers, and even “unanswerable” questions. There are multiple causes for AI hallucinations, but an important one is not having the right knowledge to answer a question in the first place. While LLMs may have been trained on massive amounts of data, that doesn’t mean they know your business, your people, or your customers. This is a common problem when organizations make the leap from their experience with “Public AI” tools like ChatGPT, Gemini, or Copilot to building their own organization’s AI solutions. LLMs and agentic solutions need knowledge, your organization’s unique knowledge, to produce results tailored to your needs and your customers’ needs, and to help employees navigate and solve the challenges they encounter in their day-to-day work. 

In a recent article, EK outlined key strategies for preparing content and data for AI. This blog post builds on that foundation by providing a step-by-step process for identifying and closing knowledge gaps, ensuring a more robust AI implementation.

 

The Importance of Bridging Knowledge Gaps for AI Readiness

EK lays out a six-step path to getting your content, data, and other knowledge assets AI-ready, yielding assets that are correct, complete, consistent, contextual, and compliant. The diagram below provides an overview of these six steps:

The six steps to AI readiness: 1) Define knowledge assets; 2) Conduct cleanup; 3) Fill knowledge gaps (the focus of this post); 4) Enrich with context; 5) Add structure; 6) Protect the knowledge assets.

Identifying and filling knowledge gaps, the third step of EK’s path towards AI readiness, is crucial in ensuring that AI solutions have optimized inputs. 

Prior to filling gaps, an organization will have defined its critical knowledge assets and conducted a content cleanup. A content cleanup not only ensures the correctness and reliability of the knowledge assets, but also reveals the specific topics, concepts, or capabilities that the organization cannot currently supply to AI solutions as inputs.

In the first scenario, the organization has a clear idea of the AI use cases and purposes for its knowledge assets. Because the organization knows the questions AI needs to answer, an assessment to identify the location and state of knowledge assets can be targeted based on the inputs required. This assessment would be followed by efforts to collect the identified knowledge and optimize it for AI solutions. 

A second, more complicated scenario arises when an organization hasn’t formulated a prioritized list of questions for AI to answer. The previously described approach, which relies on drawing up a traditional knowledge inventory, will face setbacks: it may prove difficult to scale, and it won’t always uncover the insights we need for AI readiness. Knowledge inventories may help us understand our known unknowns, but they will not reveal our unknown unknowns. 

 

Identifying the Gap

How can we identify something that is missing? At this juncture, organizations will need to leverage analytics, introduce semantics, and, if AI is already deployed in the organization, use it as a resource as well. The techniques available for identifying these gaps depend on whether your organization has already deployed an AI solution or is still ramping up for one. Available options include:

Before and After AI Deployment

Leveraging Analytics from Existing Systems

Monitoring and assessing different tools’ analytics is an established practice to understand user behavior. In this instance, EK applies these same methods to understand critical questions about the availability of knowledge assets. We are particularly interested in analytics that reveal answers to the following questions:

  • Where do our people give up when navigating different sections of a tool or portal? 
  • What sort of queries return no results?
  • What queries are more likely to get abandoned? 
  • What sort of content gets poor reviews, and by whom?
  • What sort of material gets no engagement? What did the user do or search for before getting to it? 

These questions aim to identify instances of users trying, and failing, to get the knowledge they need to do their work. Where appropriate, these questions can also be posed directly to users via surveys or focus groups to get a more rounded perspective. 
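
As one illustration, the sketch below mines a hypothetical search-log export for zero-result and abandoned queries; the file name and column names (query, result_count, abandoned) are assumptions about your search platform’s log schema.

```python
# Illustrative sketch: mine a hypothetical search-log export for likely knowledge gaps.
# Column names (query, result_count, abandoned) are assumptions about your log schema,
# with `abandoned` assumed to be a 0/1 flag.
import pandas as pd

logs = pd.read_csv("search_logs.csv")  # hypothetical export from your search platform

# Queries that return nothing, ranked by how often users try them.
zero_results = (
    logs[logs["result_count"] == 0]
    .groupby("query")
    .size()
    .sort_values(ascending=False)
    .head(20)
)

# Queries users abandon without acting on a result.
abandoned = (
    logs[logs["abandoned"] == 1]
    .groupby("query")
    .size()
    .sort_values(ascending=False)
    .head(20)
)

print(zero_results, abandoned, sep="\n\n")
```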

Semantics

Semantics involve modeling an organization’s knowledge landscape with taxonomies and ontologies. When taxonomies and ontologies have been properly designed, updated, and consistently applied to knowledge, they are invaluable as part of wider knowledge mapping efforts. In particular, semantic models can be used as an exemplar of what should be there, and can then be compared with what is actually present, thus revealing what is missing.

We recently worked with a professional association in the medical field, helping them define a semantic model for their extensive content and then defining an automated approach to tagging these knowledge assets. As part of the design process, EK taxonomists interviewed experts across all of the association’s functional teams to define the terms that should be present in the organization’s knowledge assets. After the first few rounds of auto-tagging, we examined the taxonomy’s coverage and found that a significant fraction of the terms in the taxonomy went unused. We validated our findings with our client’s experts and, to their surprise, the engagement revealed an imbalance in knowledge asset production: while some topics were well covered by their content, others were entirely lacking. 

Valid taxonomy terms or ontology concepts for which few to no knowledge assets exist reveal a knowledge gap where AI is likely to struggle.
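
A lightweight way to spot this is to compare the terms in your taxonomy against the tags actually applied to assets. The sketch below assumes you can export both as simple JSON files; the file names and structures are hypothetical.

```python
# Illustrative sketch: compare designed taxonomy terms against tags actually applied
# to assets to reveal candidate knowledge gaps. File names and formats are assumptions.
import json

with open("taxonomy_terms.json") as f:
    taxonomy_terms = set(json.load(f))   # e.g., a flat export from your taxonomy tool

with open("applied_tags.json") as f:
    applied_tags = json.load(f)          # e.g., {"asset-123": ["Term A", "Term B"], ...}

used_terms = {term for tags in applied_tags.values() for term in tags}
unused_terms = sorted(taxonomy_terms - used_terms)

print(f"{len(unused_terms)} of {len(taxonomy_terms)} taxonomy terms have no tagged assets:")
for term in unused_terms:
    print(" -", term)
```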

After AI Deployment

User Engagement & Feedback

To ensure a solution can scale, evolve, and remain effective over time, it is important to establish formal feedback mechanisms for users to engage with system owners and governance bodies on an ongoing basis. Ideally, users should have a frictionless way to report an unsatisfactory answer immediately after they receive it, whether it is because the answer is incomplete or just plain wrong. A thumbs-up or thumbs-down icon has traditionally been used to solicit this kind of feedback, but organizations should also consider dedicated chat channels, conversations within forums, or other approaches for communicating feedback to which their users are accustomed.

AI Design and Governance 

Out-of-the-box, pre-trained language models are designed to prioritize providing a fluid response, often leading them to confidently generate answers even when their underlying knowledge is uncertain or incomplete. This core behavior increases the risk of delivering wrong information to users. However, this flaw can be preempted by thoughtful design in enterprise AI solutions: the key is to transform them from simple answer generators into sophisticated instruments that can also detect knowledge gaps. Enterprise AI solutions can be engineered to proactively identify questions they do not have adequate information to answer and immediately flag these requests. This approach effectively creates a mandate for AI governance bodies to capture the needed knowledge. 
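
The simplified, self-contained sketch below illustrates this design pattern: score retrieved content for relevance, answer only when something sufficiently relevant is found, and otherwise log the question as a knowledge gap. The toy keyword scorer, sample content, and threshold are stand-ins for a real retrieval pipeline, LLM call, and governance backlog.

```python
# Simplified, self-contained sketch: answer only when relevant content exists,
# otherwise flag the question as a knowledge gap. The keyword "retriever,"
# sample content, and threshold are illustrative stand-ins for a real search
# index, LLM call, and governance backlog.
import re

KNOWLEDGE_BASE = {
    "expense-policy": "Employees submit travel expenses within 30 days using the travel portal.",
    "onboarding-guide": "New hires complete security training during their first week.",
}
RELEVANCE_THRESHOLD = 0.4  # tune against your own evaluation data
gap_log = []               # in practice, a queue reviewed by a governance team

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def relevance(question, passage):
    q = tokens(question)
    return len(q & tokens(passage)) / len(q) if q else 0.0

def answer_or_flag(question):
    scored = [(doc, relevance(question, text)) for doc, text in KNOWLEDGE_BASE.items()]
    best_doc, best_score = max(scored, key=lambda pair: pair[1])
    if best_score < RELEVANCE_THRESHOLD:
        gap_log.append(question)  # flag the unanswerable question for follow-up
        return "I don't have enough trusted information to answer this yet."
    return f"Based on '{best_doc}': {KNOWLEDGE_BASE[best_doc]}"

print(answer_or_flag("When do I submit my travel expenses?"))
print(answer_or_flag("What is our policy on vendor gifts?"))
print("Knowledge gaps to capture:", gap_log)
```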

AI can move beyond just alerting the relevant teams about missing knowledge. As we will soon discuss, AI holds additional capabilities to close knowledge gaps by inferring new insights from disparate, already-known information, and connecting users directly with relevant human experts. This allows enterprise AI to not only identify knowledge voids, but also begin the process of bridging them.

 

Closing the Gap

It is important, at this point, to make the distinction between knowledge that is truly missing from the organization and knowledge that is simply unavailable to the organization’s AI solution. The approach to close the knowledge gap will hinge on this key distinction. 

 

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge is not in a format that AI can use, then:

Transform and migrate the present knowledge asset into a format that AI can more readily ingest. 

How this looks in practice:

A professional services firm had a database of meeting recordings meant for knowledge sharing and disseminating lessons learned. The firm determined that there was a lot of knowledge “in the rough” that AI could incorporate into existing policies and procedures, but this was impossible to do by ingesting content in video format. EK engineers programmatically transcribed the videos and then transformed the text into a machine-readable format. To make it truly AI-ready, we leveraged Natural Language Processing (NLP) and Named Entity Recognition (NER) techniques to contextualize the new knowledge assets by associating them with other concepts across the organization.
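
One way such a pipeline can look is sketched below, using the open-source Whisper model as an illustrative stand-in for whatever transcription service your organization has approved; the file paths and output format are assumptions.

```python
# Illustrative sketch: transcribe meeting recordings into machine-readable text.
# Uses the open-source `openai-whisper` package as a stand-in for an approved
# transcription service; file paths and the output format are assumptions.
import json
from pathlib import Path

import whisper

model = whisper.load_model("base")  # general-purpose speech-to-text model

Path("transcripts").mkdir(exist_ok=True)
for video in Path("meeting_recordings").glob("*.mp4"):
    result = model.transcribe(str(video))
    transcript = {
        "source": video.name,
        "text": result["text"],  # full transcript for downstream NLP/NER
        "segments": [
            {"start": seg["start"], "end": seg["end"], "text": seg["text"]}
            for seg in result["segments"]  # time-aligned chunks for indexing
        ],
    }
    (Path("transcripts") / f"{video.stem}.json").write_text(json.dumps(transcript, indent=2))
```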

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in private spaces like email or closed forums, then:

Establish workflows and guidelines to promote, elevate, and institutionalize knowledge that had been previously informal.

How this looks in practice:

A government agency established online Communities of Practice (CoPs) to transfer and disseminate critical knowledge on key subject areas. Community members shared emerging practices and jointly solved problems. Community managers were able to ‘graduate’ informal conversations and documents into formal agency resources that lived within a designated repository, fully tagged, and actively managed. These validated and enhanced knowledge assets became more valuable and reliable for AI solutions to ingest.

If the ‘missing’ knowledge is documented or recorded somewhere… but the knowledge exists in different fragments across disjointed repositories, then: 

Unify the disparate fragments of knowledge by designing and applying a semantic model to associate and contextualize them. 

How this looks in practice:

A Sovereign Wealth Fund (SWF) collected a significant amount of knowledge about its investments, business partners, markets, and people, but kept this information fragmented and scattered across multiple repositories and databases. EK designed a semantic layer (composed of a taxonomy, ontology, and a knowledge graph) to act as a ‘single view of truth’. EK helped the organization define its key knowledge assets, like investments, relationships, and people, and wove together data points, documents, and other digital resources to provide a 360-degree view of each of them. We furthermore established an entitlements framework to ensure that every attribute of every entity could be adequately protected and surfaced only to the right end-user. This single view of truth became a foundational element in the organization’s path to AI deployment—it now has complete, trusted, and protected data that can be retrieved, processed, and surfaced to the user as part of solution responses. 
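
The sketch below illustrates the general idea with the rdflib library: fragments from a portfolio database, a CRM, and a document repository are woven into one graph so that a single entity can be viewed across all of them. The namespace, classes, and sample data are invented for illustration and are not the client’s actual model.

```python
# Minimal, illustrative sketch: weave fragments from separate systems into one graph
# so an investment, its partners, and its documents resolve to a single entity.
# The namespace, classes, and sample data are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("https://example.org/fund/")
g = Graph()
g.bind("ex", EX)

# Fragment from the portfolio database
g.add((EX.Investment_42, RDF.type, EX.Investment))
g.add((EX.Investment_42, RDFS.label, Literal("Harbor Logistics Stake")))

# Fragment from the CRM
g.add((EX.Partner_7, RDF.type, EX.BusinessPartner))
g.add((EX.Investment_42, EX.hasPartner, EX.Partner_7))

# Fragment from the document repository
g.add((EX.Doc_901, RDF.type, EX.DueDiligenceReport))
g.add((EX.Doc_901, EX.describes, EX.Investment_42))

# A single view of the investment, assembled from all three sources
for predicate, obj in g.predicate_objects(subject=EX.Investment_42):
    print("outgoing:", predicate, obj)
for subject, predicate in g.subject_predicates(object=EX.Investment_42):
    print("incoming:", subject, predicate)
```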

If the ‘missing’ knowledge is not recorded anywhere… but the company’s experts hold this knowledge with them, then: 

Choose the appropriate techniques to elicit knowledge from experts during high-value moments of knowledge capture. It is important to note that we can begin incorporating agentic solutions to help the organization capture institutional knowledge, especially when agents can know or infer expertise held by the organization’s people. 

How this looks in practice:

Following a critical system failure, a large financial institution recognized an urgent need to capture the institutional knowledge held by its retiring senior experts. To address this challenge, they partnered with EK, who developed an AI-powered agent to conduct asynchronous interviews. This agent was designed to collect and synthesize knowledge from departing experts and managers by opening a chat with each individual and asking questions until the defined success criteria were met. This method allowed interviewees to contribute their knowledge at their convenience, ensuring a repeatable and efficient process for capturing critical information before the experts left the organization.

If the ‘missing’ knowledge is not recorded anywhere… and the knowledge cannot be found, then:

Make sure to clearly define the knowledge gap and its impact on the AI solution as it supports the business. When the gap has substantial effects on the solution’s ability to provide critical responses, it will be up to subject matter experts within the organization to devise a strategy to create, acquire, and institutionalize the missing knowledge. 

How this looks in practice:

A leading construction firm needed to develop its knowledge and practices to keep up with contracts won for a new type of project. Its inability to quickly scale institutional knowledge jeopardized its capacity to deliver, putting a significant amount of revenue at risk. EK guided the organization in establishing CoPs to encourage the development of repeatable processes, new guidance, and reusable artifacts. In subsequent steps, the firm could extract knowledge from conversations happening within the community and ingest it into AI solutions, along with the novel knowledge assets the community developed. 

 

Conclusion

Identifying and closing knowledge gaps is no small feat, and predicting knowledge needs was nearly impossible before the advent of AI. Now, AI acts as both a driver and a solution, helping modern enterprises maintain their competitive edge.

Whether your critical knowledge is in people’s heads or buried in documents, Enterprise Knowledge can help. We’ll show you how to capture, connect, and leverage your company’s knowledge assets to their full potential to solve complex problems and obtain the results you expect out of your AI investments. Contact us today to learn how to bridge your knowledge gaps with AI.

The post How to Fill Your Knowledge Gaps to Ensure You’re AI-Ready appeared first on Enterprise Knowledge.

Top Ways to Get Your Content and Data Ready for AI
https://enterprise-knowledge.com/top-ways-to-get-your-content-and-data-ready-for-ai/ | Mon, 15 Sep 2025

As artificial intelligence has quickly moved from science fiction, to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s new AI solutions at the “right” content and data is critical to AI success and adoption, and failing to do so can quickly derail your AI initiatives.  

Though the world is enthralled with the myriad of public AI solutions, many organizations struggle to make the leap to reliable AI within their organizations. A recent MIT report, “The GenAI Divide,” reveals a concerning truth: despite significant investments in AI, 95% of organizations are not seeing any benefits from their AI investments. 

One of the core impediments to achieving AI within your own organization is poor-quality content and data. Without the proper foundation of high-quality content and data, any AI solution will be rife with ‘hallucinations’ and errors. This will expose organizations to unacceptable risks, as AI tools may deliver incorrect or outdated information, leading to dangerous and costly outcomes. This is also why tools that perform well as demos fail to make the jump to production. Even the most advanced AI won’t deliver acceptable results if an organization has not prepared its content and data.

This blog outlines seven top ways to ensure your content and data are AI-ready. With the right preparation and investment, your organization can successfully implement the latest AI technologies and deliver trustworthy, complete results.

1) Understand What You Mean by “Content” and/or “Data” (Knowledge Asset Definition)

While it seems obvious, the first step to ensuring your content and data are AI-ready is to clearly define what “content” and “data” mean within your organization. Many organizations use these terms interchangeably, while others use one as a parent term of the other. This obviously leads to a great deal of confusion. 

Leveraging the traditional definitions, we define content as unstructured information (ranging from files and documents to blocks of intranet text), and data as structured information (namely the rows and columns in databases and other applications like Customer Relationship Management systems, People Management systems, and Product Information Management systems). You are wasting the potential of AI if you’re not seeking to apply your AI to both content and data, giving end users complete and comprehensive information. In fact, we encourage organizations to think even more broadly, going beyond just content and data to consider all the organizational assets that can be leveraged by AI.

We’ve coined the term knowledge assets to express this. Knowledge assets comprise all the information and expertise an organization can use to create value. This includes not only content and data, but also the expertise of employees, business processes, facilities, equipment, and products. This manner of thinking quickly breaks down artificial silos within organizations, getting you to consider your assets collectively, rather than by type. Moving forward in this article, we’ll use the term knowledge assets in lieu of content and data to reinforce this point. Put simply and directly, each of the below steps to getting your content and data AI-ready should be considered from an enterprise perspective of knowledge assets, so rather than discretely developing content governance and data governance, you should define a comprehensive approach to knowledge asset governance. This approach will not only help you achieve AI-readiness, it will also help your organization to remove silos and redundancies in order to maximize enterprise efficiency and alignment.


2) Ensure Quality (Asset Cleanup)

We’ve found that most organizations are maintaining approximately 60-80% more information than they should, and in many cases, may not even be aware of what they still have. That means that as many as four out of five knowledge assets are old, outdated, duplicate, or near-duplicate. 

There are many costs to this over-retention before even considering AI, including the administrative burden of maintaining the excess (along with the cost and environmental impact of unnecessary server storage) and the usability and findability cost to the organization’s end users as they wade through obsolete knowledge assets.

The AI cost becomes even higher for several reasons. First, AI typically “white labels” the knowledge assets it finds. If a human were to find an old and outdated policy, they may recognize the old corporate branding on it, or note the date from several years ago on it, but when AI leverages the information within that knowledge asset and resurfaces it, it looks new and the contextual clues are lost.

Next, we have to consider the old adage of “garbage in, garbage out.” Incorrect knowledge assets fed to an AI tool will result in incorrect results, also known as hallucinations. While prompt engineering can be used to try to avoid these conflicts, and potentially even errors, the only surefire way to avoid this issue is to ensure the accuracy of the original knowledge assets, or at least the vast majority of them.

Many AI models also struggle with near-duplicate “knowledge assets,” unable to discern which version is trusted. Consider your organization’s version control issues, working documents, data modeled with different assumptions, and iterations of large deliverables and reports that are all currently stored. Knowledge assets may go through countless iterations, and most of the time, all of these versions are saved. When ingested by AI, multiple versions present potential confusion and conflict, especially when these versions didn’t simply build on each other but were edited to improve findings or recommendations. Each of these, in every case, is an opportunity for AI to fail your organization.
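
One lightweight way to flag likely near-duplicates during cleanup is sketched below using TF-IDF vectors and cosine similarity from scikit-learn; the sample documents and similarity threshold are illustrative, and flagged pairs should still be reviewed by a content owner.

```python
# Illustrative sketch: flag near-duplicate documents during cleanup using
# TF-IDF vectors and cosine similarity. Documents and threshold are examples only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "travel-policy-v3.docx": "Employees must book travel through the approved portal.",
    "travel-policy-final.docx": "Employees must book all travel through the approved portal.",
    "onboarding-guide.pdf": "New hires complete security training in their first week.",
}

names = list(documents)
vectors = TfidfVectorizer().fit_transform(documents.values())
similarity = cosine_similarity(vectors)

THRESHOLD = 0.8  # tune against a sample your content owners have reviewed
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Likely near-duplicates: {names[i]} <-> {names[j]} ({similarity[i, j]:.2f})")
```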

Finally, this would also be the point at which you consider restructuring your assets for improved readability (both by humans and machines). From a human perspective, this could include formatting improvements to lower cognitive lift and improve consistency. For both humans and AI, this could also mean adding text and tags to better describe images and other non-text-based elements. From an AI perspective, in longer and more complex assets, proximity and order can have a negative impact on precision, so this could include restructuring documents to make them more linear, chronological, or topically aligned. This is not necessary or even important for all types of assets, but it remains an important consideration, especially for text-based and longer assets.


3) Fill Gaps (Tacit Knowledge Capture)

The next step to ensure AI readiness is to identify your gaps. At this point, you should be looking at your AI use cases and considering the questions you want AI to answer. In many cases, your current repositories of knowledge assets will not have all of the information necessary to answer those questions completely, especially in a structured, machine-readable format. This presents a risk in itself, especially if the AI solution is unaware that it lacks the complete range of knowledge assets necessary and portrays incomplete or limited answers as definitive. 

Filling gaps in knowledge assets is extremely difficult. The first step is to identify what is missing. To invoke another old adage, organizations have long worried they “don’t know what they don’t know,” meaning they lack the organizational maturity to identify gaps in their own knowledge. This becomes a major challenge when proactively seeking to arm an AI solution with all the knowledge assets necessary to deliver complete and accurate answers. The good news, however, is that the process of getting knowledge assets AI-ready helps to identify gaps. In the next two sections, we cover semantic design and tagging. These steps, among others, can identify where there appears to be missing knowledge assets. In addition, given the iterative nature of designing and deploying AI solutions, the inability of AI to answer a question can trigger gap filling, as we cover later. 

Of course, once you’ve identified the gaps, the real challenge begins, in that the organization must then generate new knowledge assets (or locate “hidden” assets) to fill those gaps. There are many techniques for this, ranging from tacit knowledge capture to content inventories, all of which collectively can help an organization move from AI to Knowledge Intelligence (KI).    


4) Add Structure and Context (Semantic Components)

Once the knowledge assets have been cleansed and gaps have been filled, the next step in the process is to structure them so that they can be related to each other correctly, with the appropriate context and meaning. This requires the use of semantic components, specifically, taxonomies and ontologies. Taxonomies deliver meaning and structure, helping AI to understand queries from users, relate knowledge assets based on the relationships between the words and phrases used within them, and leverage context to properly interpret synonyms and other “close” terms. Taxonomies can also house glossaries that further define words and phrases that AI can leverage in the generation of results.

Though often confused or conflated with taxonomies, ontologies deliver a much more advanced type of knowledge organization, which is both complementary to taxonomies and unique. Ontologies focus on defining relationships between knowledge assets and the systems that house them, enabling AI to make inferences. For instance:

<Person> works at <Company>

<Zach Wahl> works at <Enterprise Knowledge>

<Company> is expert in <Topic>

<Enterprise Knowledge> is expert in <AI Readiness>

From this, a simple inference based on structured logic can be made, which is that the person who works at the company is an expert in the topic: Zach Wahl is an expert in AI Readiness. More detailed ontologies can quickly fuel more complex inferences, allowing an organization’s AI solutions to connect disparate knowledge assets within an organization. In this way, ontologies enable AI solutions to traverse knowledge assets, more accurately make “assumptions,” and deliver more complete and cohesive answers. 
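
The sketch below expresses that same rule as plain Python over subject-predicate-object triples. Production ontologies are typically modeled in RDF/OWL and reasoned over by a dedicated engine, but the underlying logic is the rule shown here.

```python
# Minimal sketch of the inference described above, expressed as plain Python over
# subject-predicate-object triples. Production ontologies are typically modeled in
# RDF/OWL with a reasoner, but the logic follows the same rule shown here.
triples = {
    ("Zach Wahl", "works at", "Enterprise Knowledge"),
    ("Enterprise Knowledge", "is expert in", "AI Readiness"),
}

def infer_expertise(facts):
    """Rule: if <Person> works at <Company> and <Company> is expert in <Topic>,
    then <Person> is expert in <Topic>."""
    inferred = set()
    for person, p1, company in facts:
        if p1 != "works at":
            continue
        for subject, p2, topic in facts:
            if subject == company and p2 == "is expert in":
                inferred.add((person, "is expert in", topic))
    return inferred

print(infer_expertise(triples))
# {('Zach Wahl', 'is expert in', 'AI Readiness')}
```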

Collectively, you can consider these semantic components as a map of what your organization does, who does it, and how. Semantic components can show an AI how to get where you want it to go without getting lost or taking wrong turns.

5) Semantic Model Application (Tagging)

Of course, it is not sufficient simply to design the semantic components; you must complete the process by applying them to your knowledge assets. If the semantic components are the map, applying semantic components as metadata is the GPS that allows you to use it easily and intuitively. This step is commonly a stumbling block for organizations, and again is why we are discussing knowledge assets rather than discrete areas like content and data. To best achieve AI readiness, all of your knowledge assets, regardless of their state (structured, unstructured, semi-structured, etc.), must have consistent metadata applied to them. 

When applied properly, this consistent metadata becomes an additional layer of meaning and context for AI to leverage in pursuit of complete and correct answers. With the latest updates to leading taxonomy and ontology management systems, the process of automatically applying metadata or storing relationships between knowledge assets in metadata graphs is vastly improved, though it still requires a human in the loop to ensure accuracy. Even so, what used to be a major hurdle in metadata application initiatives is much simpler than it used to be.
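
The deliberately simplified sketch below shows the basic idea of applying taxonomy terms (and their synonyms) as metadata; real taxonomy and ontology management systems use far richer matching, and, as noted above, a human should review the proposed tags. The terms and document are invented.

```python
# Deliberately simplified sketch: propose taxonomy terms as metadata based on
# synonym matches in the text. Terms, synonyms, and the document are invented;
# a human should review the proposed tags before they are applied.
TAXONOMY = {
    "AI Readiness": ["ai readiness", "ai-ready"],
    "Entitlements": ["entitlements", "access control"],
    "Knowledge Capture": ["knowledge capture", "lessons learned"],
}

def propose_tags(text):
    lowered = text.lower()
    return [
        term
        for term, synonyms in TAXONOMY.items()
        if any(phrase in lowered for phrase in synonyms)
    ]

document = "This guide covers lessons learned while making content AI-ready."
print(propose_tags(document))  # ['AI Readiness', 'Knowledge Capture']
```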


6) Address Access and Security (Unified Entitlements)

What happens when you finally deliver what your organization has been seeking, and give it the ability to collectively and completely serve end users the knowledge assets they need? If this step is skipped, the answer is calamity. One of the key value propositions of AI is that it can uncover hidden gems in knowledge assets, make connections humans typically can’t, and combine disparate sources to build new knowledge assets and new answers within them. This is incredibly exciting, but it also presents a massive organizational risk.

At present, many organizations have an incomplete (or frankly poor) model for entitlements: ensuring that the right people see the right assets and the wrong people do not. We consistently discover highly sensitive knowledge assets in various forms on organizational systems that should be secured but are not. Some of this takes the form of a discrete document or a row of data in an application, which is surprisingly common but relatively easy to address. Even more of it is only visible when you take an enterprise view of the organization. 

For instance, Database A might contain anonymized health information about employees for insurance reporting purposes but maps to discrete unique identifiers. File B includes a table of those unique identifiers mapped against employee demographics. Application C houses the actual employee names and titles for the organizational chart, but also includes their unique identifier as a hidden field. The vast majority of humans would never find this connection, but AI is designed to do so and will unabashedly generate a massive lawsuit for your organization if you’re not careful.
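
The sketch below makes that linkage concrete with three invented tables: each looks harmless on its own, but a trivial join reproduces exactly the exposure described above.

```python
# Invented data illustrating the linkage risk described above: each table looks
# harmless on its own, but a trivial join re-identifies sensitive records.
import pandas as pd

health = pd.DataFrame({          # "Database A": anonymized health data
    "employee_uid": ["u-001", "u-002"],
    "condition": ["asthma", "diabetes"],
})
demographics = pd.DataFrame({    # "File B": uid mapped to demographics
    "employee_uid": ["u-001", "u-002"],
    "department": ["Finance", "Legal"],
})
directory = pd.DataFrame({       # "Application C": names with a hidden uid field
    "employee_uid": ["u-001", "u-002"],
    "name": ["A. Rivera", "B. Chen"],
})

reidentified = health.merge(demographics, on="employee_uid").merge(directory, on="employee_uid")
print(reidentified)  # names now sit beside conditions: the exact exposure to prevent
```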

If you have security and entitlement issues with your existing systems (and trust me, you do), AI will inadvertently discover them, connect the dots, and surface knowledge assets and connections between them that could be truly calamitous for your organization. Any AI readiness effort must confront this challenge, before your AI solutions shine a light on your existing security and entitlements issues.


7) Maintain Quality While Iteratively Improving (Governance)

Steps one through six describe how to get your knowledge assets ready for AI, but the final step gets your organization ready for AI. Having made a massive investment both in getting your knowledge assets into the right state for AI and in the AI solution itself, you need to ensure the ongoing quality of both. Mature organizations will invest in a core team to ensure knowledge assets go from AI-ready to AI-mature, including:

  • Maintaining and enforcing the core tenets to ensure knowledge assets stay up-to-date and AI solutions are looking at trusted assets only;
  • Reacting to hallucinations and unanswerable questions to fill gaps in knowledge assets; 
  • Tuning the semantic components to stay up to date with organizational changes.

The most mature organizations, those wishing to become AI-Powered organizations, will look first to their knowledge assets as the key building block to drive success. Those organizations will seek ROCK (Relevant, Organizationally Contextualized, Complete, and Knowledge-Centric) knowledge assets as the foundation for delivering Enterprise AI that can be truly transformative for the organization. 

If you’re seeking help to ensure your knowledge assets are AI-ready, contact us at info@enterprise-knowledge.com.

The post Top Ways to Get Your Content and Data Ready for AI appeared first on Enterprise Knowledge.

What is a KM Operating Model and Why You Need One
https://enterprise-knowledge.com/what-is-a-km-operating-model-and-why-you-need-one/ | Mon, 08 Sep 2025

As organizations race to adopt AI, implement advanced analytics, or embed new knowledge management (KM) strategies into their ways of working, the way they capture, organize, and transform knowledge becomes the foundation for success. While many organizations invest heavily in new tools and well-crafted KM strategies, they often overlook a critical enabler: the operating model, which is a framework of roles, structures, and governance that ensures KM and AI efforts do not just launch, but scale and sustain. 

This blog is the first in a two-part series exploring how organizations can design and sustain an effective KM operating model. This first blog focuses on one essential component of the operating model: the framework of roles that enable KM efforts to scale and deliver sustained impact. Clearly defining these roles and their structure helps organizations integrate related disciplines, such as data and AI, avoid duplication, and ensure teams work toward shared outcomes. In the second blog, we will share a practical roadmap for designing an operating model that aligns KM, data, and AI to maximize long-term value.

What is a KM Operating Model?

An operating model defines how an organization functions to serve its vision and realize its strategic goals by aligning elements like roles and responsibilities, organizational structure, governance frameworks, decision-making processes, and change management approaches. 

For KM, a strong operating model outlines:

  • How knowledge flows across the organization
  • Who owns and governs it 
  • What processes and key interaction points enable it
  • Which tools and standards are applied to deliver value 

In other words, it integrates people, processes, governance, and resources to ensure KM becomes a sustainable organizational capability, rather than a temporary initiative or toolset. 

What an Operating Model Looks Like in Practice

When a large automotive manufacturer wanted to implement a Knowledge Portal and improve the way knowledge was captured and transferred throughout its North American factory and business units, Enterprise Knowledge (EK) worked with the organization to design an operating model with a centralized Knowledge Management Center of Excellence (CoE) that aligned with the ways the company already operates. Staffed by a Program Director, a Knowledge Manager, a KM System Administrator, and a Knowledge Modeling Engineer, the CoE would lead the charge to align business units in improving content quality and knowledge capture and transfer, drive KM adoption and value, and scale the Knowledge Portal.

In considering how to successfully roll out the technical solution, complementary content, and KM strategies to nearly 20,000 employees, EK recommended partially dedicated KM support roles within individual organizational units to reinforce KM adoption and deliver support at the point of need. By training existing employees already embedded within an organizational unit on KM initiatives, support comes from familiar colleagues who understand the team’s workflows, priorities, and pain points. This helps surface obstacles, such as competing demands, legacy processes, or resistance to change, that might otherwise hinder KM adoption, while also ensuring guidance is tailored to the realities of daily work within the organization.

This strategy was intended not only to strengthen employees’ ability to find, share, and apply knowledge in their daily work, but also to build a network of formal KM champions equipped to help inform and embed the KM CoE’s enterprise vision. This new network would also support the planned future implementation of AI capabilities into the Knowledge Portal and into knowledge capture and transfer activities.

Example Operating Model with a KM Center of Excellence:

The Knowledge Management Center of Excellence includes a Program Director, Knowledge Manager, KM System Administrator, Knowledge Modeling Engineer, and Unit KM Support Roles (which come from different business units across the organization).

In another case, a global conservation organization sought to remedy struggling KM efforts and an organizational structure that lacked effectiveness and authority. With a focus on maturing both their KM program and its facilitating framework, EK developed a new operating model centered on cross-functional coordination and KM alignment. The new model also accompanied an effort to advance their technology stack and improve the findability of knowledge assets.

A newly retooled KM Enablement Team would provide strategic oversight to operationalize KM across the organization, with focused efforts and dedicated roles around four key initiatives: Knowledge Capture & Content Creation, Taxonomy, Technology, and Data. This enablement framework required Workstream Leads to participate in regular meetings with the KM Enablement Team to ensure initiative progress and alignment to the advancing KM solution. Designed not only to guide the implementation of an enterprise-level KM program, this framework would also sustainably support its ongoing governance, maintenance, and enhancement.

Example Operating Model with a KM Enablement Team:

The Knowledge Management Enablement Team includes the Data Workstream Lead, Technology Workstream Lead, Taxonomy Workstream Lead, and Knowledge Capture & Content Creation Lead. These people serve as KM Champions within an organization.

Why You Should Develop an Operating Model

Without a clear operating model, even the most promising KM initiatives risk stalling after the initial launch. Roles become unclear, priorities drift, and the connection between KM strategy and day-to-day work weakens. An operating model creates the structure, accountability, and shared understanding needed to keep KM efforts focused, adaptable, and impactful over time. 

As organizations evolve, their KM efforts must keep pace, not just growing in capability but in navigating new challenges. Without this evolution, misalignment creeps in, draining value and creating costly friction. At the same time, the boundaries between KM, data, and AI are blurring, making collaboration not only beneficial but necessary. Understanding these dynamics is critical to appreciating why a thoughtfully designed operating model is the backbone of sustainable knowledge management.

The Evolution of Knowledge Management Maturity

Most organizations do not start with a fully mature KM program or operating model. They evolve into them. Often, KM efforts begin as isolated, informal initiatives and grow into structured, enterprise-wide models as KM needs and capabilities mature.

The EK KM maturity model outlines five stages, from ‘Ad-Hoc’ to ‘Strategic’, that reflect how KM roles, tools, and outcomes mature over time. In the less mature stages, an inconsistent KM strategy is met with operating models that lack the intention and legitimacy needed to sustain KM. At these stages, roles for KM are not formalized, or are minimally visible and cursory. As maturity grows, increasing alignment between KM practices and business or AI goals is supported by an operating model with clearer ownership and dedicated roles, scalable governance, and integrated systems.

By mapping existing systems, structures, and roles onto the model, EK diagnoses the current state of a client’s KM maturity and identifies the characteristics that would support its KM evolution.

The Cost of Misalignment

When an organization rolls out a new enterprise KM, AI, or data solution without clearly identifying and establishing the roles and organizational structure needed to support it, those solutions often struggle to deliver their intended value. This is a common challenge that EK has observed when organizations overlook how the solution will be governed, maintained, and embedded in day-to-day work. This misalignment creates real risk as the solution can become ineffective, underutilized, or scrapped entirely. 

When the necessary roles and organizational framework do not exist to drive or sustain KM intentions, common resulting conditions arise, including:

  • Deteriorating content quality: Information can become outdated, fragmented, duplicated, or hard to find, undermining trust in the KM solution. 
  • Solution misuse: Employees remain unclear about the solution’s purpose and benefits, leading to incorrect usage and inconsistent solution outcomes.
  • Technology sitting idle: Despite technical functionality and success, solutions fail to integrate into workflows, and the anticipated business value is never realized.

These costly outcomes represent more than just implementation challenges; they are a missed opportunity to legitimize the value of KM as a critical enabler of AI, compliance, innovation, and business continuity. 

The Convergence Factor

As organizations begin to better understand the need for an operating model that supports their transformational efforts, formalized cross-collaborative teams and frameworks are becoming more popular. The push toward integrating KM, data, and AI teams is not coincidental; several forces and potential benefits are accelerating the move toward converging teams:

  • Demands of changing technology: The rise of semantic layers, large language models (LLMs), and AI-enabled search surfaces the need for structured, standardized knowledge assets, historically unique to the realm of KM but now core to AI and data workflows. Collaboration among subject matter experts from all three areas ensures that the inputs these technologies need, like curated knowledge and clean data, and the processes that produce them, are in place to generate the intended outputs, such as generative content that is accurate and properly contextualized.
  • Leaner operations: While budgets may shrink, expectations for more insights and automation are growing. Instead of hiring new roles for new solutions, some companies are being asked to retool existing roles or merge disparate teams to oversee new needs. The convergence of roles in these scenarios offers opportunities to show how integration reduces redundancy and strengthens solution delivery.
  • Shared systems, shared stakes: KM platforms, data catalogs, and AI training environments are increasingly overlapping or built on the same tech stack. Integration helps ensure these tools are optimized and governed collectively.
  • Scalability: Unified teams create structures that scale enterprise initiatives more effectively by reinforcing standards, enabling shared support models, and accelerating adoption across business units. When KM, data, and AI teams move from siloed functions to integrated workflows, their collective influence helps scale solutions that no single team could drive alone.

Enterprise Knowledge (EK) has seen firsthand how organizations are recognizing the value of cross-functional collaboration catalyzed by KM. For example, a large construction-sector client came to EK to bolster internal efforts to connect KM and data functions. This led to the alignment of parallel initiatives, including content governance, data catalog development, and KM strategy. EK’s engagement helped accelerate this convergence by embedding KM specialists to support both streams of work, ensuring continuity, shared context, and a repeatable governance model across teams.

Closing Thoughts

Your knowledge management strategy is only as effective as the operating model behind it. By intentionally designing clear KM roles and responsibilities to support your KM goals and initiatives, you create the foundation for sustainable, scalable KM that is ready for AI and data advancement. In Part 2 of this blog series, we will walk through how to design and implement a KM operating model that leverages team integration and supports maturing strategies.

If you are unsure where your organization sits on the KM maturity model, or if you need support designing an operating model that enables sustainable, high-impact knowledge management, EK is here to help. Contact us to learn how we can support your KM transformation and build a model that reflects your goals.

 

The post What is a KM Operating Model and Why You Need One appeared first on Enterprise Knowledge.
