How to Ensure Your Content is AI Ready
https://enterprise-knowledge.com/how-to-ensure-your-content-is-ai-ready/ (Thu, 02 Oct 2025)

In 1996, Bill Gates declared “Content is King” because of its importance (and revenue-generating potential) on the World Wide Web. Nearly 30 years later, content remains king, particularly when leveraged as a vital input for Enterprise AI. Having AI-ready content is critical to successful AI implementation because it decreases hallucinations and errors, improves the efficiency and scalability of the model, and ensures seamless integration with evolving AI technologies. Put simply: if your content isn’t AI-ready, your AI initiatives will fail, stall, or deliver low value.

In a recent blog, “Top Ways to Get Your Content and Data Ready for AI,” Sara Mae O’Brien-Scott and Zach Wahl outlined an approach for ensuring your organization is ready to undertake an AI initiative. While that blog provided a broad view of AI-readiness for all types of knowledge assets collectively, this blog leverages the same approach while zeroing in on actionable steps to ensure your content is ready for AI.

Content, also known as unstructured information, is pervasive in every organization. In fact, for many organizations it comprises 80% to 90% of the total information they hold. Within that corpus of content there is a massive amount of value, but there also tends to be chaos. We’ve found that most organizations should only be actively maintaining 15-20% of their unstructured information, with the rest being duplicate, near-duplicate, outdated, or completely incorrect. Without taking steps to clean it up, contextualize it, and ensure it is properly accessible to the right people, your AI initiatives will flounder. The steps detailed below will enable you to implement Enterprise AI at your organization while minimizing the pitfalls and struggles many organizations have encountered along the way.

1) Understand What You Mean by “Content” (Knowledge Asset Definition) 

In a previous blog, we discussed the many types of knowledge assets organizations possess, how they can be connected, and the collective value they offer. Identifying content, or unstructured information, as one of the types of knowledge assets to be included in your organization’s AI solutions will be a foregone conclusion for most. However, that alone is insufficient to manage scope and understand what needs to be done to ensure your content is AI-ready. There are many types of content, held in varied repositories, with much of it likely sprawling across existing file drives and old document management systems.

Before embarking on an AI initiative, it is essential to focus on the content that addresses your highest priority use cases and will yield the greatest value, recognizing that more layers can be added iteratively over time. To maximize AI effectiveness, it is critical to ensure the content feeding AI models aligns with real user needs and AI use cases. Misaligned content can lead to hallucinations, inaccurate responses, or poor user experiences. The following actions help define content and prepare it for AI:

  • Identify the types of content that are critical for priority AI use cases.
  • Work with Content Governance Groups to identify content owners for future inclusion in AI testing. 
  • Map end-to-end user journeys to determine where AI interacts with users and the content touchpoints that need to be referenced by AI applications.
  • Inventory priority content across enterprise-wide source systems, breaking knowledge asset silos and system silos.
  • Identify where different assets serve the same intent to flag potential overlap or duplication, helping AI applications ingest only relevant content and minimizing noise during AI model training.

What content means can vary significantly across organizations. For example, in a manufacturing company, content can take the form of operational procedures and inventory reports, while in a healthcare organization, it can include clinical case documentation and electronic health records. Understanding what content truly represents in an organization and identifying where it resides, often across siloed repositories, are the first steps toward enabling AI solutions to deliver complete and context-rich information to end users.

2) Ensure Quality (Content Cleanup)

Your AI model is only as good as what’s going into it. ‘Garbage in, garbage out’, ‘steady foundation, steady house’: there are any number of ways to express that if the content going into an AI model lacks quality, the outputs will too. Strong AI starts with strong content. Below, we have detailed both manual and automated actions that can be taken to improve the quality of your content, thereby improving your AI outcomes.

Content Quality

Content is often created in everyday workflows without much regard for quality. While this content might serve business-as-usual processes, it can be detrimental to AI initiatives. Therefore, it’s crucial to address content quality issues within your repositories. Steps you can take to improve content quality and accelerate content AI readiness include:

  • Automate content cleanup processes by leveraging a combination of human-led and system-driven approaches, such as auto-tagging content for update, archival, or removal.
  • Scan and index content using automated processes to detect potential duplication by comparing titles, file size, metadata, and semantic similarity.
  • Apply similarity analysis to define business rules for deleting, archiving, or modifying duplicate or near-duplicate content (see the sketch after this list).
  • Use analytics to flag content that has low or no usage.
  • Combine analytics and content age to determine a retention cut-off (such as removing any content older than 2 years).
  • Leverage semantic tools like Named Entity Recognition (NER) and Natural Language Processing (NLP) to apply expert knowledge and determine the accuracy of content.
  • Use NLP to detect overly complex sentence structure and enterprise-specific jargon that may reduce clarity or discoverability.
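The similarity analysis mentioned above can start very small. The following is a minimal, illustrative sketch (not a production pipeline) that uses scikit-learn’s TF-IDF vectorizer and cosine similarity to flag likely near-duplicates; the sample documents and the 0.85 threshold are assumptions you would replace and tune against your own corpus.

    # Minimal near-duplicate detection sketch using TF-IDF and cosine similarity.
    # Assumes scikit-learn is installed; documents would really be loaded from your repositories.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = {
        "travel-policy-2021.docx": "Employees must submit travel requests two weeks in advance...",
        "travel-policy-2023.docx": "Employees must submit travel requests two weeks in advance via the portal...",
        "expense-guide.pdf": "This guide explains how to categorize and submit expense reports...",
    }

    names = list(documents)
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents.values())
    similarity = cosine_similarity(vectors)

    SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; tune per corpus and use case
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if similarity[i, j] >= SIMILARITY_THRESHOLD:
                # Route the pair to a content owner to decide: keep, merge, archive, or delete.
                print(f"Possible near-duplicates: {names[i]} <-> {names[j]} ({similarity[i, j]:.2f})")

In practice, a lexical signal like this would be combined with the metadata comparison and embedding-based semantic similarity noted in the list above, with a human making the final disposition decision.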

Content Restructuring

In the blog “Improve Enterprise AI with Semantic Content Management,” we note that content in an organization exists on a continuum of structure depending on many factors. The same is true for the amount of content restructuring that may or may not need to happen to enable your AI use case. We recently saw with a client that introducing even basic structure to a document improved AI outcomes by almost 200%. However, this step requires clear goals and prioritization. Often, this part of ensuring your content is AI-ready happens iteratively: as the model is applied, you can determine what level of restructuring best improves AI outcomes. Restructuring content to prepare it for AI involves activities such as:

  • Apply tags, such as heading structures, to unstructured content to improve AI outcomes and enhance the end-user experience.
  • Use an AI-assisted check to validate that heading structures and tags are being used appropriately and are machine readable, so that content can be ingested smoothly by AI systems.
  • Simplify and restructure content that has been identified as overly complex and could result in hallucinations or unsatisfactory responses generated by the AI model.
  • Focus on reformatting longer, text-heavy content to achieve a more linear, time-based, or topic-based flow and improve AI effectiveness. 
  • Develop repeatable structures that can be applied automatically to content during creation or retroactively to provide AI with relevant content in a consumable format (a brief sketch follows this list).
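As a brief illustration of the repeatable structures mentioned in the last item, the sketch below splits a long, text-heavy document into titled sections so that downstream AI pipelines receive machine-readable units rather than one undifferentiated wall of text. The heading convention (lines starting with ‘#’) and the sample document are assumptions; your own rules would depend on the source formats you hold.

    # Sketch: convert a long text document into structured, titled sections.
    # Assumes headings are marked with a leading '#'; adapt the rule to your source format.
    def split_into_sections(text: str) -> list[dict]:
        sections = []
        current = {"title": "Untitled", "body": []}
        for line in text.splitlines():
            if line.startswith("#"):                       # a new section begins
                if current["body"]:
                    sections.append(current)
                current = {"title": line.lstrip("# ").strip(), "body": []}
            elif line.strip():
                current["body"].append(line.strip())
        if current["body"]:
            sections.append(current)
        return sections

    doc = "# Purpose\nThis procedure covers line shutdowns.\n# Steps\n1. Shut down the line.\n2. Tag the equipment."
    for section in split_into_sections(doc):
        print(section["title"], "->", " ".join(section["body"]))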

In brief, cleaning up and restructuring content assets improves machine readability of content and therefore allows the AI model to generate stronger and more accurate outputs. To prioritize assets that need cleanup and restructuring, focus on activities and resources that will yield the highest return on investment for your AI solution. However, it is important to recognize that this may vary significantly across organizations, industries, and AI use cases. For example, an organization with a truly cross-functional use case, such as enterprise search, may prioritize deduplication of content to ensure information from different business areas doesn’t conflict when providing AI-generated responses. On the other hand, an organization with a more function-specific use case, such as streamlining legal contract review, may prioritize more hands-on content restructuring to improve AI comprehension.

3) Fill Gaps (Tacit Knowledge Capture)

Even with high-quality content, knowledge gaps in your enterprise ecosystem can cause AI errors and introduce the risk of unreliable outcomes. Considering your AI use case, the questions you want to answer, the discovery you’ve completed in previous steps, and the actions detailed below, you can start to identify and fill the gaps that may exist.

Content Coverage 

Even with the best content strategy, it is not uncommon for different types of content to “fall through the cracks” and be unavailable or inaccessible for any number of reasons. Many organizations “don’t know what they don’t know,” so it can be difficult to begin this process. However, it is crucial to be aware of these content gaps, particularly when using LLMs, in order to avoid hallucinations. Actions you may take to ensure content coverage and accelerate your journey toward content AI readiness include:

  • Leverage system analytics to assess user search behavior and uncover content gaps. This may include unused content areas of a repository, abandoned search queries, or searches that returned no results (a short sketch of zero-result query analysis follows this list).
  • Use taxonomy analytics to identify missing categories or underrepresented terms and, as a result, determine what content should be included.
  • Leverage SMEs and other end users during AI testing to evaluate AI-generated responses and identify areas where content may be missing. 
  • Use AI governance to ensure the model is transparent and can communicate with the user when it is not able to find a satisfactory answer.
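To make the first item in this list more concrete, the short sketch below aggregates search-log records and surfaces the most frequent queries that returned no results, one practical signal of a content gap. The log format shown (a list of query and result-count records) is an assumption; in practice this data would come from your search platform’s analytics export.

    # Sketch: surface frequent zero-result queries as candidate content gaps.
    from collections import Counter

    search_log = [
        {"query": "parental leave policy", "results": 0},
        {"query": "vpn setup", "results": 14},
        {"query": "parental leave policy", "results": 0},
        {"query": "office reopening plan", "results": 0},
    ]

    zero_result_queries = Counter(
        record["query"] for record in search_log if record["results"] == 0
    )

    # The most common zero-result queries are strong candidates for new or relocated content.
    for query, count in zero_result_queries.most_common(10):
        print(f"{count:>3}  {query}")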

Fill the Gap

Once missing content has been identified from the information sources feeding the AI model, the real challenge is to fill those gaps to prevent “hallucinations” and avoid the user frustration generated by incomplete or inaccurate answers. This may include creating new assets, locating existing assets, or applying other techniques that together can move the organization from AI to Knowledge Intelligence. Steps you may take to remediate the gaps and help your organization’s content become AI-ready include:

  • Use link detection to uncover relationships across the content, identify knowledge that may exist elsewhere, and increase the likelihood of surfacing the right content (a minimal link-detection sketch follows this list). This can also inform later semantic tagging activities.
  • Analyze content repositories to identify sources where content flagged as “missing” could possibly exist.
  • Apply content transformation practices to “missing” content identified during the content repository analysis to ensure machine readability.
  • Conduct knowledge capture and transfer activities such as SME interviews, communities of practice, and collaborative tools to document tacit knowledge in the form of guides, processes, or playbooks. 
  • Institutionalize content that exists in private spaces that aren’t currently included in the repositories accessed by AI.
  • Create draft content using generative AI, making sure to include a human-in-the-loop step for accuracy. 
  • Acquire external content when gaps aren’t organization specific. Consider purchasing or licensing third-party content, such as research reports, marketing intelligence, and stock images.
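A minimal version of the link detection mentioned in the first item of this list might simply extract cross-references from document text and build a relationship map to review. The sketch below uses a regular expression to pull hyperlinks; the sample documents and URL pattern are assumptions, and production link detection would also consider titles, citations, and system-level references.

    # Sketch: detect links between documents to uncover relationships and candidate sources.
    import re

    documents = {
        "onboarding-guide": "See https://intranet.example.com/benefits and https://intranet.example.com/it-setup for details.",
        "benefits-faq": "The full policy lives at https://intranet.example.com/benefits.",
    }

    URL_PATTERN = re.compile(r"https?://\S+")

    link_map = {
        doc_id: [url.rstrip(".,;") for url in URL_PATTERN.findall(text)]
        for doc_id, text in documents.items()
    }

    for doc_id, links in link_map.items():
        for link in links:
            # Each pair is a candidate relationship to review and, later, to tag semantically.
            print(f"{doc_id} -> {link}")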

By evaluating the content coverage for a particular use case, you can start to predict how well (or poorly) your AI model may perform. When critical content mostly exists in people’s heads, rather than in a documented, accessible format, the organization is exposed to significant risk. For example, for an organization deploying a customer-facing AI chatbot to help with case deflection in customer service centers, gaps in content can lead to false or misleading responses. If the chatbot tries to answer questions it wasn’t trained for, it could result in out-of-policy exceptions, financial loss, a decrease in customer trust, or lower retention due to inaccurate, outdated, or non-existent information. This example highlights why it is so important to identify and fill knowledge gaps to ensure your content is ready for AI.

4) Add Structure and Context (Semantic Components)

Once you have identified the relevant content for an AI solution, ensured its quality for AI, and addressed major content gaps for your AI use cases, the next step in getting content ready for AI involves adding structure and context to content by leveraging semantic components. Taxonomy and metadata models provide the foundational structure needed to categorize unstructured content and provide meaningful context. Business glossaries ensure alignment by defining terms for shared understanding, while ontology models provide contextual connections needed for AI systems to process content. The semantic maturity of all of these models is critical to achieve successful AI applications. 

Semantic Maturity of Taxonomy and Business Glossaries

Some organizations struggle with the state of their taxonomies when starting AI-driven projects. Organizations must actively design and manage taxonomies and business glossaries to properly support AI-driven applications and use cases. This is essential not only for short-term implementation of the AI solution, but, most importantly, for long-term success. Standardization and centralization of these models help balance organization-wide needs and domain-specific needs. Properly structured and annotated taxonomies are instrumental in preparing content for AI. Taking the following actions will ensure that your taxonomies and business glossaries have the semantic maturity needed to achieve AI-ready content:

  • Balance taxonomies across business areas to ensure organization-wide standardization, enabling smooth implementation of AI use cases and seamless integration of AI applications. 
  • Design hierarchical taxonomy structures with the depth and breadth needed to support AI use cases.
  • Refine concepts and alternative terms (synonyms and acronyms) in the taxonomy to more adequately describe and apply to priority AI content.
  • Align taxonomies with usability standards, such as ANSI/NISO Z39.19, and interoperability/machine readability standards, such as SKOS, so that taxonomies are both human and machine readable (see the sketch after this list).
  • Incorporate definitions and usage notes from an organizational business glossary into the taxonomy to enrich meaning and improve semantic clarity.
  • Store and manage taxonomies in a centralized Taxonomy Management System (TMS) to support scalable AI readiness.
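For the SKOS alignment noted above, the sketch below shows what a single taxonomy concept might look like when modeled with the rdflib Python library: a preferred label, an alternative label for the acronym, a glossary definition, and a broader concept, all in a form that is both human and machine readable. The namespace, concept, and labels are illustrative assumptions rather than a real taxonomy.

    # Sketch: one SKOS taxonomy concept with labels and a glossary definition (rdflib).
    # Assumes rdflib is installed; the namespace and concept are illustrative only.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import SKOS

    EX = Namespace("https://example.com/taxonomy/")
    g = Graph()
    g.bind("skos", SKOS)

    concept = EX["standard-operating-procedure"]
    g.add((concept, SKOS.prefLabel, Literal("Standard Operating Procedure", lang="en")))
    g.add((concept, SKOS.altLabel, Literal("SOP", lang="en")))          # acronym as an alternative term
    g.add((concept, SKOS.definition, Literal(
        "A documented, step-by-step set of instructions for performing a routine operation.", lang="en")))
    g.add((concept, SKOS.broader, EX["operational-content"]))           # hierarchical placement

    print(g.serialize(format="turtle"))

Because the concept is expressed in SKOS rather than a spreadsheet, it can be loaded into a Taxonomy Management System, published to tagging services, and consumed directly by AI pipelines.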

Semantic Maturity of Metadata 

Before content can effectively support AI-driven applications, organizations must also establish metadata practices to ensure that content has been sufficiently described and annotated. This involves not only establishing shared or enterprise-wide coordinated metadata models, but more importantly, applying complete and consistent metadata to that content. The following actions will ensure that the Semantic Maturity of your Metadata model meets the standards required for content to be AI ready:

  • Structure metadata models to meet the requirements of AI use cases, helping derive meaningful insights from tagged content.
  • Design metadata models that accurately represent different knowledge asset types (types of content) associated with priority AI use cases.
  • Apply metadata models consistently across all content source systems to enhance findability and discoverability of content in AI applications. 
  • Document and regularly update metadata models.
  • Store and manage metadata models in a centralized semantic repository to ensure interoperability and scalable reuse across AI solutions.

Semantic Maturity of Ontology

Just as with taxonomies, metadata, and business glossaries, developing semantically rich and precise ontologies is essential to achieve successful AI applications and to enable Knowledge Intelligence (KI) or explainable AI. Ontologies must be sufficiently expressive to support semantic enrichment, traceability, and AI-driven reasoning. They must be designed to accurately represent key entities, their properties, and relationships in ways that enable consistent tagging, retrieval, and interpretation across systems and AI use cases. By taking the following actions, your ontology model will achieve the level of semantic maturity needed for content to be AI ready:

  • Ensure ontologies accurately describe the knowledge domain for the in-scope content.
  • Define key entities, their attributes, and relationships in a way that supports AI-driven classification, recommendation, and reasoning.
  • Design modular and extensible ontologies for reuse across domains, applications, and future AI use cases.
  • Align ontologies with organizational taxonomies to support semantic interoperability across business areas and content source systems.
  • Annotate ontologies with rich metadata for human and machine readability.
  • Adhere to ontology standards such as OWL, RDF, or SHACL for interoperability with AI tools (a small sketch follows this list).
  • Store ontologies in a central ontology management system for machine readability and interoperability with other semantic models.
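As a companion to the standards listed above, the sketch below uses rdflib to declare a tiny OWL fragment: two classes, an object property connecting them, and a human-readable label. It is a minimal, assumed example of the entity-and-relationship modeling described in this section, not a complete ontology.

    # Sketch: a tiny OWL ontology fragment (classes, one object property, labels) using rdflib.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("https://example.com/ontology/")
    g = Graph()
    g.bind("owl", OWL)

    g.add((EX.Document, RDF.type, OWL.Class))
    g.add((EX.Topic, RDF.type, OWL.Class))

    g.add((EX.isAbout, RDF.type, OWL.ObjectProperty))
    g.add((EX.isAbout, RDFS.domain, EX.Document))    # the relationship starts from a document
    g.add((EX.isAbout, RDFS.range, EX.Topic))        # and points to a topic
    g.add((EX.isAbout, RDFS.label, Literal("is about", lang="en")))

    print(g.serialize(format="turtle"))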

Preparing content for AI is not just about organizing information; it’s about making it discoverable, valuable, and usable. Investing in semantic models and ensuring a consistent content structure lays the foundation for AI to generate meaningful insights. For example, if an organization wants to deliver highly personalized recommendations that connect users to specific content, building customized taxonomies, metadata models, business glossaries, and ontologies not only maximizes the impact of current AI initiatives, but also future-proofs content for emerging AI-driven use cases.

5) Semantic Model Application (Content Tagging)

Designing structured semantic models is just one part of preparing content for AI. Equally important is the consistent application of complete, high-quality metadata to organization-wide content. Metadata enrichment of unstructured content, especially across siloed repositories, is critical for enabling AI-powered systems to reliably discover, interpret, and utilize that content. The following actions to enhance the application of content tags will help you achieve content AI readiness:

  • Tag unstructured content with high-quality metadata to enhance interpretability in AI systems.
  • Ensure each piece of relevant content for the AI solution is sufficiently annotated, or in other words, it is labeled with enough metadata to describe its meaning and context. 
  • Promote consistent annotation of content across business areas and systems using tags derived from a centralized and standardized taxonomy. 
  • Leverage mechanisms, like auto-tagging, to enhance the speed and coverage of content tagging. 
  • Include a human-in-the-loop step in the auto-tagging process to improve the accuracy of content tagging (a simple sketch follows this list).
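To illustrate the last two items, the sketch below applies a deliberately simple keyword-based auto-tagger driven by a centralized taxonomy and routes low-confidence suggestions to a human review queue. The taxonomy terms, scoring rule, and 0.5 threshold are all assumptions; real implementations typically rely on a taxonomy management system’s auto-tagging services or NLP models rather than keyword counts.

    # Sketch: keyword-based auto-tagging with a human-in-the-loop review queue.
    # The taxonomy, scoring, and threshold below are illustrative assumptions only.
    TAXONOMY = {
        "Travel Policy": ["travel", "per diem", "itinerary"],
        "Information Security": ["password", "phishing", "encryption"],
    }
    CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for auto-applying a tag

    def suggest_tags(text: str) -> list[tuple[str, float]]:
        lowered = text.lower()
        suggestions = []
        for term, keywords in TAXONOMY.items():
            hits = sum(1 for keyword in keywords if keyword in lowered)
            if hits:
                suggestions.append((term, hits / len(keywords)))   # crude confidence score
        return suggestions

    document = "Update your password and report phishing attempts to the security team."
    for term, confidence in suggest_tags(document):
        if confidence >= CONFIDENCE_THRESHOLD:
            print(f"Auto-apply tag: {term} ({confidence:.2f})")
        else:
            print(f"Send to human review: {term} ({confidence:.2f})")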

Consistent content tagging provides an added layer of meaning and context that AI can use to deliver more complete and accurate answers. For example, an organization managing thousands of unstructured content assets across disparate repositories, and aiming to deliver personalized content experiences to end users, can tag content more effectively by leveraging a centralized taxonomy and an auto-tagging approach. As a result, AI systems can more reliably surface relevant content, extract meaningful insights, and generate personalized recommendations.

6) Address Access and Security (Unified Entitlements)

As Joe Hilger mentioned in his blog about unified entitlements, “successful semantic solutions and knowledge management initiatives help the right people see the right information at the right time.” But to achieve this, access permissions must be in place so that only authorized individuals have visibility into the appropriate content. Unfortunately, many organizations still maintain content in old repositories that don’t have the right features or processes to secure it, creating a significant risk for organizations pursuing AI initiatives. Therefore, now more than ever, it is important to properly secure content by defining and applying entitlements, preventing access to highly sensitive content by unauthorized people and as a result, maintaining trust across the organization. The actions outlined below to enhance Unified Entitlements will accelerate your journey toward content AI readiness:

  • Define an enterprise-wide entitlement framework to apply security rules consistently across content assets, regardless of the source system.
  • Automate security by enforcing privileges across all systems and types of content assets using a unified entitlements solution (a simplified sketch follows this list).
  • Leverage AI governance processes to ensure that content creators, managers, and owners are aware of the entitlements for content they handle that needs to be consumed by AI applications.
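As a simplified illustration of the framework described in the first two items, the sketch below reduces a unified entitlements layer to a single, consistently applied access check that runs before any asset is handed to an AI pipeline, regardless of which source system holds it. The group names, classifications, and policy structure are assumptions; real solutions integrate with identity providers and each system’s permission model.

    # Sketch: one consistent entitlement check applied to assets from any source system.
    # Groups, classifications, and assets below are illustrative assumptions only.
    ENTITLEMENT_POLICY = {
        "public": set(),                        # no group membership required
        "internal": {"all-employees"},
        "restricted": {"hr-team", "legal-team"},
    }

    def can_access(user_groups: set[str], asset: dict) -> bool:
        required = ENTITLEMENT_POLICY[asset["classification"]]
        return not required or bool(user_groups & required)

    assets = [
        {"id": "benefits-overview", "source": "sharepoint", "classification": "internal"},
        {"id": "salary-bands-2025", "source": "hr-system", "classification": "restricted"},
    ]

    user_groups = {"all-employees", "engineering"}
    # Filter the corpus before ingestion so the AI solution only ever sees permitted assets.
    permitted = [asset for asset in assets if can_access(user_groups, asset)]
    print([asset["id"] for asset in permitted])    # -> ['benefits-overview']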

Entitlements are important because they ensure that content remains consistent, trustworthy, and reusable for AI systems. For example, if an organization developing a Generative AI solution stores documents and web content about products and clients across multiple SharePoint sites, content management systems, and webpages, inconsistent application of entitlements may represent a legal or compliance risk, potentially exposing outdated, or even worse, highly sensitive content to the wrong people. On the other hand, the correct definition and application of access permissions through a unified entitlements solution plays a key role in mitigating that risk, enabling operational integrity and scalability, not only for the intended Generative AI solution, but also for future AI initiatives.

7) Maintain Quality While Iteratively Improving (Governance)

Effective governance for AI solutions can be very complex because it requires coordination across systems and groups, not just within them, especially among content governance, semantic governance, and AI governance groups. This coordination is essential to ensure content remains up to date and accessible for users and AI solutions, and that semantic models are current and centrally accessible. 

AI Governance for Content Readiness 

Content Governance 

Not all organizations have supporting organizational structures with defined roles and processes to create, manage, and govern content that is aligned with cross-organizational AI initiatives. The existence of an AI Governance for Content Readiness Group ensures coordination with the traditional Content Governance Groups and provides guidance to content owners of the source systems on how to get content AI ready to support priority AI use cases. By taking the following actions, the AI Governance for Content Readiness Group will help ensure that you have the content governance practices required to achieve AI-ready content:

  • Define how content should be captured and managed in a way that is consistent, predictable, and interoperable for AI use cases.
  • Incorporate in your AI solution roadmap a step, delivered through the Content Governance Groups, to guide content owners of the source systems on what is required to get content AI ready for inclusion in AI models.
  • Provide guidance to the Content Governance Group on how to train and communicate with system owners and asset owners on how to prepare content for AI.
  • Take the technical and strategic steps necessary to connect content source systems to AI systems for effective content ingestion and interpretation.
  • Coordinate with the Content Governance Group to develop and adopt content governance processes that address content gaps identified through the detection of bias, hallucinations, misalignment, or unanswered questions during AI testing.
  • Automate AI governance processes leveraging AI to identify content gaps, auto-tag content, or identify new taxonomy terms for the AI solution.

Semantic Models Governance

Similar to the importance of coordinating with the content governance groups, coordinating with semantic models governance groups is key for AI readiness. This involves establishing roles and responsibilities for the creation, ownership, management, and accountability of semantic models (taxonomy, metadata, business glossary, and ontology models) in relation to AI initiatives. This also involves providing clear guidance for managing changes in the models and communicating updates to those involved in AI initiatives. By taking the following actions, the AI Governance for Content Readiness Group will help ensure that your organization has the semantic governance practices required to achieve AI-ready content: 

  • Develop governance structures that support the development and evolution of semantic models in alignment with both existing and emerging AI initiatives.
  • Align governance roles (e.g. taxonomists, ontologists, semantic engineers, and AI engineers) with organizational needs for developing and maintaining semantic models that support enterprise-wide AI solutions.
  • Ensure that the systems used to manage taxonomies, metadata, and ontologies support enforcing permissions for accessing and updating the semantic models.
  • Work with the Semantic Models Governance Groups to develop processes that help remediate gaps in the semantic models uncovered during AI testing. This includes providing guidance on the recommended steps for making changes, suggested decision-makers, and implementation approaches.
  • Work with the Semantic Models Governance Groups to establish metrics and processes to monitor, tune, refine, and evolve semantic models throughout their lifecycle and stay up to date with AI efforts.
  • Coordinate with the Semantic Models Governance Groups to develop and adopt processes that address semantic model gaps identified through the detection of bias, hallucinations, misalignment, or unanswered questions during AI solution testing.

For example, imagine an organization is developing business taxonomies and ontologies that represent skills, job roles, industries, and topics to support an Employee 360 View solution. It is essential to have a governance model in place with clearly defined roles, responsibilities, and processes to manage and evolve these semantic models as the AI solutions team ingests content from diverse business areas and detects gaps during AI testing. Therefore, coordination between the AI Governance for Content Readiness Group and the Semantic Models Governance Groups helps ensure that concepts, definitions, entities, properties, and relationships remain current and accurately reflect the knowledge domain for both today’s needs and future AI use cases.  

Conclusion

Unstructured content remains one of the most common knowledge assets in organizations. Getting that content ready to be ingested by AI applications is a balancing act. By cleaning it up, filling in gaps, applying rich semantic models to add structure and context, securing it with unified entitlements, and leveraging AI governance, organizations will be better positioned to succeed in their own AI journey. We hope that after reading this blog you have a better understanding of the actions you can take to ensure your organization’s content is AI ready. If you want to learn how our experts can help you achieve Content AI Readiness, contact us at info@enterprise-knowledge.com.

Top Ways to Get Your Content and Data Ready for AI
https://enterprise-knowledge.com/top-ways-to-get-your-content-and-data-ready-for-ai/ (Mon, 15 Sep 2025)

As artificial intelligence has quickly moved from science fiction to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s new AI solutions at the “right” content and data is critical to AI success and adoption, and failing to do so can quickly derail your AI initiatives.

Though the world is enthralled with the myriad of public AI solutions, many organizations struggle to make the leap to reliable AI within their own environments. A recent MIT report, “The GenAI Divide,” reveals a concerning truth: despite significant investments in AI, 95% of organizations are not seeing any benefits from them.

One of the core impediments to achieving AI within your own organization is poor-quality content and data. Without the proper foundation of high-quality content and data, any AI solution will be rife with ‘hallucinations’ and errors. This will expose organizations to unacceptable risks, as AI tools may deliver incorrect or outdated information, leading to dangerous and costly outcomes. This is also why tools that perform well in demos fail to make the jump to production. Even the most advanced AI won’t deliver acceptable results if an organization has not prepared its content and data.

This blog outlines seven top ways to ensure your content and data are AI-ready. With the right preparation and investment, your organization can successfully implement the latest AI technologies and deliver trustworthy, complete results.

1) Understand What You Mean by “Content” and/or “Data” (Knowledge Asset Definition)

While it seems obvious, the first step to ensuring your content and data are AI-ready is to clearly define what “content” and “data” mean within your organization. Many organizations use these terms interchangeably, while others use one as a parent term of the other. This obviously leads to a great deal of confusion. 

Leveraging the traditional definitions, we define content as unstructured information (ranging from files and documents to blocks of intranet text), and data as structured information (namely the rows and columns in databases and other applications like Customer Relationship Management systems, People Management systems, and Product Information Management systems). You are wasting the potential of AI if you’re not seeking to apply it to both content and data, giving end users complete and comprehensive information. In fact, we encourage organizations to think even more broadly, going beyond content and data to consider all the organizational assets that can be leveraged by AI.

We’ve coined the term knowledge assets to express this. Knowledge assets comprise all the information and expertise an organization can use to create value. This includes not only content and data, but also the expertise of employees, business processes, facilities, equipment, and products. This manner of thinking quickly breaks down artificial silos within organizations, getting you to consider your assets collectively, rather than by type. Moving forward in this article, we’ll use the term knowledge assets in lieu of content and data to reinforce this point.

Put simply and directly, each of the below steps to getting your content and data AI-ready should be considered from an enterprise perspective of knowledge assets, so rather than discretely developing content governance and data governance, you should define a comprehensive approach to knowledge asset governance. This approach will not only help you achieve AI-readiness, it will also help your organization to remove silos and redundancies in order to maximize enterprise efficiency and alignment.


2) Ensure Quality (Asset Cleanup)

We’ve found that most organizations are maintaining approximately 60-80% more information than they should, and in many cases, may not even be aware of what they still have. That means that four out of five knowledge assets are old, outdated, duplicate, or near-duplicate. 

There are many costs to this over-retention even before considering AI, including the administrative burden of maintaining this excess 80% (along with the cost and environmental impact of unnecessary server storage), and the usability and findability cost to the organization’s end users when they have to wade through obsolete knowledge assets.

The AI cost becomes even higher for several reasons. First, AI typically “white labels” the knowledge assets it finds. If a human were to find an old and outdated policy, they may recognize the old corporate branding on it, or note the date from several years ago on it, but when AI leverages the information within that knowledge asset and resurfaces it, it looks new and the contextual clues are lost.

Next, we have to consider the old adage of “garbage in, garbage out.” Incorrect knowledge assets fed to an AI tool will result in incorrect results, also known as hallucinations. While prompt engineering can be used to try to avoid these conflicts and, potentially, even errors, the only surefire way to avoid this issue is to ensure the accuracy of the original knowledge assets, or at least the vast majority of them.

Many AI models also struggle with near-duplicate “knowledge assets,” unable to discern which version is trusted. Consider your organization’s version control issues, working documents, data modeled with different assumptions, and iterations of large deliverables and reports that are all currently stored. Knowledge assets may go through countless iterations, and most of the time, all of these versions are saved. When ingested by AI, multiple versions present potential confusion and conflict, especially when these versions didn’t simply build on each other but were edited to improve findings or recommendations. Each of these, in every case, is an opportunity for AI to fail your organization.

Finally, this would also be the point at which you consider restructuring your assets for improved readability (both by humans and machines). This could include formatting (to lower cognitive lift and improve consistency) from a human perspective. For both humans and AI, this could also mean adding text and tags to better describe images and other non-text-based elements. From an AI perspective, in longer and more complex assets, proximity and order can have a negative impact on precision, so this could include restructuring documents to make them more linear, chronological, or topically aligned. This is not necessary or even important for all types of assets, but remains an important consideration especially for text-based and longer types of assets.


3) Fill Gaps (Tacit Knowledge Capture)

The next step to ensure AI readiness is to identify your gaps. At this point, you should be looking at your AI use cases and considering the questions you want AI to answer. In many cases, your current repositories of knowledge assets will not have all of the information necessary to answer those questions completely, especially in a structured, machine-readable format. This presents a risk in itself, especially if the AI solution is unaware that it lacks the complete range of knowledge assets necessary and portrays incomplete or limited answers as definitive.

Filling gaps in knowledge assets is extremely difficult. The first step is to identify what is missing. To invoke another old adage, organizations have long worried they “don’t know what they don’t know,” meaning they lack the organizational maturity to identify gaps in their own knowledge. This becomes a major challenge when proactively seeking to arm an AI solution with all the knowledge assets necessary to deliver complete and accurate answers. The good news, however, is that the process of getting knowledge assets AI-ready helps to identify gaps. In the next two sections, we cover semantic design and tagging. These steps, among others, can identify where there appears to be missing knowledge assets. In addition, given the iterative nature of designing and deploying AI solutions, the inability of AI to answer a question can trigger gap filling, as we cover later. 

Of course, once you’ve identified the gaps, the real challenge begins, in that the organization must then generate new knowledge assets (or locate “hidden” assets) to fill those gaps. There are many techniques for this, ranging from tacit knowledge capture to content inventories, all of which collectively can help an organization move from AI to Knowledge Intelligence (KI).


4) Add Structure and Context (Semantic Components)

Once the knowledge assets have been cleansed and gaps have been filled, the next step in the process is to structure them so that they can be related to each other correctly, with the appropriate context and meaning. This requires the use of semantic components, specifically, taxonomies and ontologies. Taxonomies deliver meaning and structure, helping AI to understand queries from users, relate knowledge assets based on the relationships between the words and phrases used within them, and leverage context to properly interpret synonyms and other “close” terms. Taxonomies can also house glossaries that further define words and phrases that AI can leverage in the generation of results.

Though often confused or conflated with taxonomies, ontologies deliver a much more advanced type of knowledge organization, which is both complementary to taxonomies and unique. Ontologies focus on defining relationships between knowledge assets and the systems that house them, enabling AI to make inferences. For instance:

<Person> works at <Company>

<Zach Wahl> works at <Enterprise Knowledge>

<Company> is expert in <Topic>

<Enterprise Knowledge> is expert in <AI Readiness>

From this, a simple inference based on structured logic can be made, which is that the person who works at the company is an expert in the topic: Zach Wahl is an expert in AI Readiness. More detailed ontologies can quickly fuel more complex inferences, allowing an organization’s AI solutions to connect disparate knowledge assets within an organization. In this way, ontologies enable AI solutions to traverse knowledge assets, more accurately make “assumptions,” and deliver more complete and cohesive answers. 
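That same structured logic can be expressed directly in code. The sketch below is a minimal, library-free illustration that chains the “works at” and “is expert in” relationships from the example triples above; a production system would typically use a graph database or an OWL reasoner rather than hand-written rules.

    # Sketch: the simple inference from the example above, using plain Python over triples.
    triples = [
        ("Zach Wahl", "works at", "Enterprise Knowledge"),
        ("Enterprise Knowledge", "is expert in", "AI Readiness"),
    ]

    def infer_person_expertise(facts):
        works_at = {(s, o) for s, p, o in facts if p == "works at"}
        expert_in = {(s, o) for s, p, o in facts if p == "is expert in"}
        # If a person works at a company, and that company is expert in a topic,
        # infer that the person is an expert in the topic.
        return [
            (person, "is expert in", topic)
            for person, company in works_at
            for employer, topic in expert_in
            if company == employer
        ]

    print(infer_person_expertise(triples))
    # -> [('Zach Wahl', 'is expert in', 'AI Readiness')]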

Collectively, you can consider these semantic components as an organizational map of what it does, who does it, and how. Semantic components can show an AI how to get where you want it to go without getting lost or taking wrong turns.

5) Semantic Model Application (Tagging)

Of course, it is not sufficient simply to design the semantic components; you must complete the process by applying them to your knowledge assets. If the semantic components are the map, applying them as metadata is the GPS that allows you to use that map easily and intuitively. This step is commonly a stumbling block for organizations, and again is why we are discussing knowledge assets rather than discrete areas like content and data. To best achieve AI readiness, all of your knowledge assets, regardless of their state (structured, unstructured, semi-structured, etc.), must have consistent metadata applied to them.

When applied properly, this consistent metadata becomes an additional layer of meaning and context for AI to leverage in pursuit of complete and correct answers. With the latest updates to leading taxonomy and ontology management systems, the process of automatically applying metadata or storing relationships between knowledge assets in metadata graphs is vastly improved, though still requires a human in the loop to ensure accuracy. Even so, what used to be a major hurdle in metadata application initiatives is much simpler than it used to be.


6) Address Access and Security (Unified Entitlements)

What happens when you finally deliver what your organization has been seeking, and give it the ability to collectively and completely serve end users the knowledge assets they need? If this step is skipped, the answer is calamity. One of the express selling points of AI is that it can uncover hidden gems in knowledge assets, make connections humans typically can’t, and combine disparate sources to build new knowledge assets and new answers within them. This is incredibly exciting, but it also presents a massive organizational risk.

At present, many organizations have an incomplete or outright poor model for entitlements, that is, for ensuring the right people see the right assets and the wrong people do not. We consistently discover highly sensitive knowledge assets in various forms on organizational systems that should be secured but are not. Some of this takes the form of a discrete document or a row of data in an application, which is surprisingly common but relatively easy to address. Even more of it is only visible when you take an enterprise view of the organization.

For instance, Database A might contain anonymized health information about employees for insurance reporting purposes but maps to discrete unique identifiers. File B includes a table of those unique identifiers mapped against employee demographics. Application C houses the actual employee names and titles for the organizational chart, but also includes their unique identifier as a hidden field. The vast majority of humans would never find this connection, but AI is designed to do so and will unabashedly generate a massive lawsuit for your organization if you’re not careful.

If you have security and entitlement issues with your existing systems (and trust me, you do), AI will inadvertently discover them, connect the dots, and surface knowledge assets and connections between them that could be truly calamitous for your organization. Any AI readiness effort must confront this challenge, before your AI solutions shine a light on your existing security and entitlements issues.


7) Maintain Quality While Iteratively Improving (Governance)

Steps one through six describe how to get your knowledge assets ready for AI, but the final step gets your organization ready for AI. With a massive investment in both getting your knowledge assets into the right state for AI and in the AI solution itself, the final step is to ensure the ongoing quality of both. Mature organizations will invest in a core team to ensure knowledge assets go from AI-ready to AI-mature, including:

  • Maintaining and enforcing the core tenets to ensure knowledge assets stay up-to-date and AI solutions are looking at trusted assets only;
  • Reacting to hallucinations and unanswerable questions to fill gaps in knowledge assets; 
  • Tuning the semantic components to stay up to date with organizational changes.

The most mature organizations, those wishing to become AI-Powered organizations, will look first to their knowledge assets as the key building block to drive success. Those organizations will seek ROCK (Relevant, Organizationally Contextualized, Complete, and Knowledge-Centric) knowledge assets as the first line to delivering Enterprise AI that can be truly transformative for the organization. 

If you’re seeking help to ensure your knowledge assets are AI-ready, contact us at info@enterprise-knowledge.com.

Content Mastermind (Taylor’s Version): What Taylor Swift Can Teach Us About The Benefits of Repurposing Content
https://enterprise-knowledge.com/content-mastermind-taylors-version-what-taylor-swift-can-teach-us-about-the-benefits-of-repurposing-content/ (Mon, 23 Jun 2025)

In January of 2025, Taylor Swift charted #1 on Billboard, breaking a record for most Number 1s on the Top Album Sales list with a new version of an almost six-year-old album. The 2025 repressing of Lover (Live from Paris) heart-shaped vinyl sold 100k copies within 45 minutes of its release, and continued to sell out every time it was restocked on the online store. 

Taylor Swift’s strategy of repurposing content, while unique for a singer, is very common from a business perspective. 94% of marketers repurpose content, indicating that reusing content is not a new concept… and yet, are you exploring the multifaceted reuse of your own content?

Since July 2020, Taylor Swift has released five original studio albums, four studio album re-recordings (the “Taylor’s Version” albums, produced before Taylor was able to buy back her original catalog of recordings), presentation variants, deluxe editions, and live albums, totaling 36 albums to date with 20 million+ units sold. Swift has had a stratospheric few years of breaking records, including becoming the first musician ranked as a Forbes billionaire primarily from songs and performances, partially due to her intelligent “content” reuse. What can we learn from this? Read on to find out.

Results

Before delving into the ways you can reuse content, what results can you expect when you put in the foundational work to enable intelligent reuse?

Broaden Your Target Audience

Statistically speaking, if you increase the amount of content you produce, you are more likely to reach a wider audience. With the development of the Eras Tour (where each era represents one of her 11 studio albums, spanning several different genres), many Taylor Swift fans began to classify themselves by their preferred “era”, or the album that made them a fan of Swift. With each album and re-recording, she’s endeared more fans to her, based on their preferred genre. 

The same can be said for reusing and repurposing content. By using Structured Content Management and effective content reuse, you decrease the overhead associated with creating and managing content. This effectively enables more systematic ways to reuse content and frees up time for content producers to create new and interesting types of content. This results in both an increase in content and the opportunity to broaden your audience. Moreover, content reuse frees up content producers’ and content marketers’ time, paving the way for two vital capabilities: personalization and experimentation.

Increase Customer Engagement with Personalization

In this day and age, most marketers use personalized content to reach their customers, but 74% say they struggle with scaling that personalization. While structured content alone can enable personalization of content in a more systematic way, when you combine structured content with the power of a knowledge graph, you also pave the way for effective personalization at scale. Using a combination of metadata applied to content components, data known about customers, and a knowledge graph, dynamic content can be created and scaled to reach more segments of customers. By giving customers relevant and personalized content for their needs, you are more likely to increase customer engagement and satisfaction.

Increase Conversion Rates with Experimentation

As a final highlighted benefit, deploying Structured Content Management enables your organization to run experiments on content, fail quickly, and adjust the content strategy as needed. While page variants and A/B testing can be deployed with traditional content management, it is not the same as being able to test an individual content component and run many different experiments quickly. These could be presentation experiments (does a CTA perform better on the side rail, or above the fold embedded in the body content?), but could also test which content performs best when presented in the “related content” section: an infographic or a blog? What ultimately comes from experimentation is an invaluable feedback loop that enables your organization to develop high-value, high-performing content that improves engagement metrics such as conversion rates.

Types of Reuse

Now that we’ve covered the benefits, let’s turn our attention to the types of reuse that are possible. Swift’s 36 record-shattering albums have three core reuse strategies: visual change, audience change, and assembly change. While there are certainly more than this, we’ll look at the same three methods in this blog: a new presentation, a new lens, and a new assembly. When it comes to your organization’s essential content, how can you reuse your content in the same ways without it becoming stale?

Change the Presentation of Content

Visually, many of the albums Swift has released in the last 5 years have thematic visual ties with the album art. Speak Now (Taylor’s Version) was released in three different shades and hues of purple: Orchid Marbled, Violet Marbled, and Lilac Marble. It’s not uncommon to see people create “Franken Variants,” where they’ve taken an LP from each version and put them together. The parallel to content strategy is the presentation changes made when employing multi-channel marketing. You may have written a long-form blog, but you’ll send it out in an email, on a social post, etc. Social posts can vary depending on the site, and many digital asset management systems (DAMs) support the ability to create automatic derivatives that fit the particular parameters of a social media channel (e.g. Instagram is 1080 x 1080 pixels, while LinkedIn is 1350 x 440 pixels) without creating an entirely new copy of this content. 

What are other ways you could create a new presentation of the content, though? When we design Content Models at EK, we emphasize decoupling content from presentation to enable this kind of reuse. When you create a model for a content type, the focus should be on what information is being communicated rather than how it is presented. An example of this could be a social proof component. Perhaps when writing up a customer’s use case of your product in long-form content, you include quotes from that customer. Within the body of the long-form content, the quote may have a particular styling, but you also reuse it on landing pages as social proof, where it uses more of a card style. If you decouple the customer quote from the styling needed in different channels, you can automatically populate the different styles without keeping multiple copies.

This not only saves time by avoiding the creation of all-new components every time content is reused, but also decreases the risk of mistakes introduced through manual copying. We saw this recently with a client who used social proof on many different pages throughout their marketing website; through a content audit, it was discovered that one of the quotes was misattributed to another customer in an entirely different industry. The client then had to go through the entire website (10,000 pages!) and scrub the quote. If they had already implemented Structured Content Management, they could have corrected all instances with a single content update.
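A minimal sketch of this decoupling is shown below: the customer quote is stored once as a structured component, and separate rendering functions produce the inline style and the card style. The field names and HTML snippet are illustrative assumptions; in practice the component would live in a component content management system and the renderers in your delivery channels.

    # Sketch: one structured quote component, multiple presentations.
    from dataclasses import dataclass

    @dataclass
    class CustomerQuote:
        quote: str
        customer: str
        industry: str

    def render_inline(q: CustomerQuote) -> str:
        # Styling used inside long-form body content.
        return f'"{q.quote}" - {q.customer}'

    def render_card(q: CustomerQuote) -> str:
        # Card styling used on landing pages as social proof.
        return (f"<div class='quote-card'><p>{q.quote}</p>"
                f"<footer>{q.customer}, {q.industry}</footer></div>")

    testimonial = CustomerQuote(
        quote="The new platform cut our review time in half.",
        customer="Example Manufacturing Co.",
        industry="Manufacturing",
    )
    print(render_inline(testimonial))
    print(render_card(testimonial))

Because the quote exists in exactly one place, correcting a misattribution like the one described above becomes a single content update rather than a 10,000-page scrub.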

Update the Tone or Perspective of the Content

For Record Store Day in 2023, Swift released a version of her Folklore album that had only been seen in a Disney+ special, The Long Pond Studio Sessions (LPSS). This record was an acoustic version of the pandemic release Folklore, recorded at Aaron Dessner’s Long Pond Studios. While many may wonder why you would buy another version of an album you already have, many fans prefer the LPSS version because Swift sounds more raw than the studio version. While it is infamously difficult to get the right tone on the internet (e.g. if I don’t use exclamation points or emojis, I’m worried you’ll think I’m cold), tone of voice can still be incorporated in content and be consistent with your organization’s branding. In the same way, when you’re communicating information with a group of stakeholders, you may shift tone depending on the make-up of those stakeholders. You’ll communicate information differently to a group of executives compared to a group of individual contributors, or a group from IT vs. a group from HR. How can you use this with your customer-facing content? 

Perhaps your company writes a lot of thought leadership, and a customer can browse this thought leadership via an abstract or summary of the content. While you may have originally written the abstracts very technically, you may have since realized that your audience base is predominantly newer professionals who do not know all of your industry’s lingo. Using this insight into your customers, you could then update the abstracts to be more beginner-friendly to prompt more engagement with posts. While this could be a manual change, there’s also the possibility of using generative AI to adjust the tone or comprehension level of the abstracts to speed up rewriting and repurposing. Additionally, this paves the way for personalization by having variations of components tagged with different audiences. When a certain customer is identified as belonging to a certain group, content could be dynamically updated via a graph to be more appealing to the customer. This increases engagement and customer satisfaction. 

Use a New Assembly of Content

On many levels, music is an assembly. A song is an assembly of notes and phrases, an album is an assembly of songs that tell a story, and a playlist is an assembly of songs curated in a chosen order to mimic an event or a feeling. One of the things Swift did during the Eras Tour was include a “Surprise Song” section in which she would play one song from her discography on guitar and one on piano. While at the beginning of the tour she was playing single songs on each instrument, by the end she was making “mashups” in which she would seamlessly mix multiple songs together into a new creation. I Hate It Here x the lakes, The Manuscript x Long Live, I Think He Knows x Gorgeous: over the course of several months, Swift created many new songs that were assemblies of parts of other songs.

When talking about Structured Content Management, we frequently compare content components or modular content to Legos. By creating reusable “Legos” of content, you enable many different assemblies of those Legos. This could take many forms, such as marketing landing pages or the generation of proposals, but one of the easiest examples to understand is learning content. Internal trainings are ubiquitous in many organizations and often a sore spot because they can be irrelevant to an employee’s position. For example, perhaps you have a training on harassment that employees are required to take, but because the course is packaged as a single unit rather than broken up by the lessons within it, all employees end up sitting through topics that are only relevant to people managers. This can mean that the employee “checks out” during that lesson and is more likely to disengage from the rest of the training. By creating smaller blocks of content, you could have one personalized assembly of topics tagged for individual contributors and another tagged for people managers, without having to create multiple copies of the same course.
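The sketch below illustrates this kind of assembly: lesson components are tagged by audience, and a personalized course is assembled by filtering on those tags rather than by maintaining separate copies of the training. The lesson titles and audience tags are illustrative assumptions.

    # Sketch: assemble a personalized training from audience-tagged lesson components.
    lessons = [
        {"title": "Recognizing Harassment", "audiences": {"individual-contributor", "people-manager"}},
        {"title": "Reporting a Concern", "audiences": {"individual-contributor", "people-manager"}},
        {"title": "Handling Reports as a Manager", "audiences": {"people-manager"}},
    ]

    def assemble_course(audience: str) -> list[str]:
        # One set of components, many assemblies; no duplicate copies of the course.
        return [lesson["title"] for lesson in lessons if audience in lesson["audiences"]]

    print(assemble_course("individual-contributor"))
    print(assemble_course("people-manager"))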

Conclusion

While certainly not the first (or the only! or the last!) artist to develop methods of reuse, love her or hate her, it’s clear that Taylor Swift is a mastermind when it comes to engaging and expanding her fanbase. You can use these same techniques with your organization to expand your customer base. When you employ a clear content strategy and leverage methodical content engineering and content operations, your organization’s content has the potential to develop into a true business asset. If this has sparked your interest and you’re ready to get serious about bringing your content to its highest potential, give us a call.

The post Content Mastermind (Taylor’s Version): What Taylor Swift Can Teach Us About The Benefits of Repurposing Content appeared first on Enterprise Knowledge.

How to Prepare Content for AI https://enterprise-knowledge.com/how-to-prepare-content-for-ai/ Wed, 21 Feb 2024 16:40:32 +0000 https://enterprise-knowledge.com/?p=19919 Artificial Intelligence (AI) enables organizations to leverage and manage their content in exciting new ways, from chatbots and content summarization to auto-tagging and personalization. Most organizations have a copious amount of content and are looking to use AI to improve … Continue reading

Artificial Intelligence (AI) enables organizations to leverage and manage their content in exciting new ways, from chatbots and content summarization to auto-tagging and personalization. Most organizations have a copious amount of content and are looking to use AI to improve their operations and efficiency while enabling end users to find relevant information quickly and intuitively. 

With the rise of ChatGPT and other generative AI tools in the last year, there's a common misconception that you can "do" AI on any content with no preparation. If you want accurate and useful results and insights, however, some upfront work is required. Understanding how AI interacts with your content and how your content strategy supports AI readiness will set you up for an effective AI implementation. 

How AI Interacts with Content

While AI can help in many phases of the content lifecycle, from planning and authoring to discovery, AI usually interacts with existing content in two key ways:

1) Comprehension: AI must parse existing content to "understand" an organization's vernacular or common language. This parsing helps the AI build statistical models, cluster content and concepts, and establish a baseline for addressing future inputs.
2) Search: AI often needs to quickly identify snippets of content related to an input, such as a user's question, chunking longer content into smaller components and searching those components for relevant material. These smaller snippets are then used to understand new or updated material.

When AI examines existing content, it is trying to understand what that content is about and how it relates to other concepts within the knowledge domain, and there are steps we can take to help. While this blog mostly considers how large language models (LLMs) and retrieval-augmented generation (RAG) approaches interact with content, the steps listed below will prepare content for a variety of other types of AI, supporting both insight and action.
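
To make the "chunk and search" pattern concrete, here is a minimal sketch that splits documents into components and pulls back the most relevant ones for a question. It is deliberately simplified: production RAG pipelines typically use embeddings and a vector store rather than the word-overlap scoring shown here.

```python
def chunk(text: str, max_words: int = 120) -> list[str]:
    """Split a document into small, roughly fixed-size components."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Return the chunks that best match a query, to be handed to an LLM as context."""
    chunks = [c for doc in documents for c in chunk(doc)]
    query_terms = set(query.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(query_terms & set(c.lower().split())),
                    reverse=True)
    return ranked[:top_k]
```

Every step below improves one of these two interactions: cleaner, structured, well-described content yields better chunks, and better chunks yield better answers.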

Developing a Content Strategy

The best way to prepare content for AI is to develop a content strategy that addresses the relationships, structure, cleanup, and componentization of the content. One key preliminary activity is to audit your content through the specific lens of AI-readiness, assessing your organization's content against the steps listed below.

Model the Knowledge Domain

In most situations, AI creates internal models that group and cluster information so it can respond efficiently to new inputs. AI does a decent job of inferring relationships between pieces of information, but organizations can significantly assist this process by defining an ontology. Ontologies codify how people, tools, content, topics, and other organizational concepts are related. These models improve findability, support advanced search use cases, and form semantic layers that facilitate the integration of data from multiple sources into consumable formats and user-intuitive structures. 

Once created, an ontology can be used with content to:

  • auto-tag content with related organizational information (topics, people, etc.);
  • enable navigation through an organization’s knowledge domain by following relationships; and
  • supply AI with curated models that explain how content connects to the organization's information, which can surface key business insights. 

Modeling an organization’s knowledge domain with an ontology improves AI’s ability to utilize content more effectively and produce more accurate results.
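
The sketch below shows, at a very small scale, how an ontology's relationships can be encoded and traversed. It uses the open-source rdflib library, and the namespace, classes, and properties are illustrative assumptions rather than a prescribed model.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("https://example.org/ontology/")
g = Graph()

# Content is related to topics and people, and people to departments
g.add((EX.onboarding_guide, RDF.type, EX.Article))
g.add((EX.onboarding_guide, EX.aboutTopic, EX.Onboarding))
g.add((EX.onboarding_guide, EX.authoredBy, EX.jane_doe))
g.add((EX.jane_doe, EX.memberOf, EX.HRDepartment))

# Follow the relationships: which articles trace back to the HR department?
results = g.query("""
    SELECT ?article WHERE {
        ?article a <https://example.org/ontology/Article> ;
                 <https://example.org/ontology/authoredBy> ?person .
        ?person  <https://example.org/ontology/memberOf> <https://example.org/ontology/HRDepartment> .
    }
""")
for row in results:
    print(row.article)
```

The same relationships that answer this query can drive auto-tagging, navigation, and the curated context supplied to an AI model.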

Clean Up and Deduplicate the Content 

Today’s organizations have too much content and duplicated information. Content is often split between multiple systems due to limitations with legacy tools, user permissions, or the need to support new features and displays. While auditing all of an organization’s content may seem daunting, there are steps an organization can take to streamline the process. Organizations should focus on their NERDy content, identifying the new, essential, reliable, and dynamic content users need to perform their jobs. As part of this focus, organizations reduce content ROT (Redundant, Outdated, Trivial), improving user trust and experience with organizational information. 

As part of the cleanup effort, an organization may want to establish a centralized authoring platform so content is maintained in one place rather than in siloed locations. Managing content centrally reduces the effort required to keep it up to date and enables reuse, which in turn removes the need to replicate and update the same information in multiple places. A content audit, analysis, and clean-up will organize content in a way that is intuitive for AI and reduce bias from repeated or incorrect information.
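
As one example of how part of this cleanup can be assisted programmatically, the sketch below flags near-duplicate documents using only the Python standard library; a real audit would pair this kind of signal with metadata such as dates and ownership to decide which copy to keep.

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(docs: dict[str, str], threshold: float = 0.9) -> list[tuple[str, str, float]]:
    """Return pairs of document IDs whose text similarity meets or exceeds the threshold."""
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(docs.items(), 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= threshold:
            flagged.append((id_a, id_b, round(ratio, 2)))
    return flagged

corpus = {
    "policy_v1.docx": "Employees may work remotely two days per week with manager approval.",
    "policy_final.docx": "Employees may work remotely two days a week with manager approval.",
    "travel_faq.docx": "Travel requests must be submitted ten business days in advance.",
}
print(near_duplicates(corpus))  # flags the two near-identical policy documents
```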

Add Structure and Standardization

Once your organization's knowledge domain is defined, the next step is to create the content models and content types that support that ontology; this is often referred to as content engineering.

Content types are the reusable templates that standardize the structure for a particular format of content, such as articles, infographics, webinars, and product information, as well as the standard metadata that should be included with that content type (created date, author, department, related subjects, etc.).

Each type of cake (bundt, round layered, and cupcake) needs its own cake pan, just as each content format needs its own content type template.

If we think of content types as the cake pan in this analogy, a content model is the cake recipe. While the content type defines the structure of the content, the content model defines the meaning of that content. In the cake analogy, you may have a chocolate cake, a vanilla cake, and a carrot cake; theoretically, any of those recipes could be baked in any of the pans. If the content type dictates the how, the content model dictates the what. In an organization, this could look like a content model for a product that includes parts like the product title, the product value proposition, and the product features. That product content model could then fit into many content types, such as a brochure, a web page, and an infographic. By creating content models and content types, we give the AI model better insight into how the content is connected and the purpose it serves.

The structure of these templates provides AI with content in a consumable and semantically meaningful format, in which content sections and metadata are passed to the AI model explicitly. A crucial part of content engineering is the creation of a taxonomy to describe the content. Taxonomies should be user-centric, reflecting the terminology users actually use when talking about content. The terms within a taxonomy, along with their associated synonyms, improve an AI's ability to make use of the content. Additionally, content types and content models facilitate the consistent display of information and the configuration of advanced search features, improving the user experience when looking for and viewing content.
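
As a small illustration of the ideas above, the sketch below expresses a content type as a reusable template: the sections come from a product content model, and standard metadata (including taxonomy terms) travels with the content. The field names are illustrative, not a required schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProductPage:             # content type: the "cake pan"
    # sections defined by the product content model (the "recipe")
    title: str
    value_proposition: str
    features: list[str]
    # standard metadata shared by all content of this type
    author: str
    department: str
    created: date
    topics: list[str] = field(default_factory=list)  # taxonomy terms for tagging and search

page = ProductPage(
    title="Acme Widget",
    value_proposition="Cuts assembly time in half.",
    features=["Self-aligning", "Tool-free installation"],
    author="jane.doe",
    department="Marketing",
    created=date(2024, 2, 21),
    topics=["Widgets", "Manufacturing"],
)
```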

Componentize the Content

Once the content is structured and cleaned, a common next step is to break up the content into smaller sections according to the content model. This process has many names, such as content deconstruction, content chunking, or the creation of content components. In content deconstruction, structured content is split into smaller semantically meaningful sections. Each section or component has a standalone purpose, even without the context of the original document. Content components are often managed in a component content management system (CCMS), providing the following benefits:

  • Users (and AI) can quickly identify relevant sections of larger content.
  • Authors can reuse content components across multiple documents.
  • Content components can have associated metadata, enabling systems to personalize the content that users see based on their profiles.
  • Content can be assembled and delivered dynamically.

Much like the benefits for users, content components give AI human-curated, semantically meaningful components of content rather than requiring the AI to perform statistical chunking. These content chunks allow an AI to identify relevant text quickly and more accurately than if it were fed entire large documents.
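
A minimal sketch of that deconstruction is below: a structured product record is split into standalone components, each carrying its own metadata so that people, systems, and AI can retrieve and reuse it independently. The record shape is illustrative only.

```python
def componentize(doc: dict) -> list[dict]:
    """Split a structured document into standalone components, each with its own metadata."""
    base = {"source": doc["title"], "topics": doc["topics"]}
    components = [{"component_type": "value_proposition",
                   "body": doc["value_proposition"], **base}]
    components += [{"component_type": "feature", "body": feature, **base}
                   for feature in doc["features"]]
    return components

product = {
    "title": "Acme Widget",
    "value_proposition": "Cuts assembly time in half.",
    "features": ["Self-aligning", "Tool-free installation"],
    "topics": ["Widgets", "Manufacturing"],
}
for component in componentize(product):
    print(component["component_type"], "->", component["body"])
```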

Conclusion

Through effective content strategy, content audits, and content engineering, organizations can efficiently manage information and ensure that AI has correct, comprehensive content with semantically meaningful relationships. A well-defined content strategy provides a framework to curate old and new information, allowing organizations to continuously feed information into AI and keep its internal models up to date. A well-structured content audit ensures preparation time is spent on the areas that will make the most difference in AI-readiness, such as structure, standardization, componentization, and relationships across content. Well-thought-out content engineering enables content reuse and personalization at scale through machine-readable structure. 

Are you seeking help defining a content strategy, auditing your content for AI-readiness, or training your AI to understand your domain? Contact us and let us know how we can help!

Special thank you to James Midkiff for his contributions to the first draft of this blog post!

The post How to Prepare Content for AI appeared first on Enterprise Knowledge.

Measuring the Value of your Semantic Layer: KPIs for Taxonomies, Ontologies, and Knowledge Graphs https://enterprise-knowledge.com/measuring-the-value-of-your-semantic-layer-kpis-for-taxonomies-ontologies-and-knowledge-graphs/ Tue, 13 Feb 2024 17:00:11 +0000 https://enterprise-knowledge.com/?p=19598 Utilizing semantic applications in your business, such as an enterprise taxonomy, ontology, or knowledge graph, can increase efficiency, reduce cognitive load, and improve cohesion across the enterprise, among other benefits. While these benefits are extremely valuable they can be difficult … Continue reading

Utilizing semantic applications in your business, such as an enterprise taxonomy, ontology, or knowledge graph, can increase efficiency, reduce cognitive load, and improve cohesion across the enterprise, among other benefits. While these benefits are extremely valuable, they can be difficult to quantify and measure. Utilizing Key Performance Indicators (KPIs) is a common tactic that directly ties a semantic integration to the business value it creates. KPIs can be used to increase buy-in from leadership, build a common understanding of goals and trajectory, keep projects on track, and tell powerful stories about the benefits created through these critical investments. 

 

Why KPIs?

KPIs are evaluation metrics that are specific, measurable, and goal-oriented. Good KPIs are easy to communicate and translate directly into business value. They are used to track trends over time and can be combined with other measures to tell a story. They are focused on the activity at hand, and thus the specific metrics are tied to the use case, business drivers, and overall goals of any given initiative. 

The broad objectives frequently associated with semantic tools, such as increased efficiency or a more cohesive understanding across the organization, can be particularly difficult to quantify and track. KPIs serve as a bridge, creating quantifiable metrics to measure against and acting as course correctors to keep projects focused and on track. Without KPIs, lofty objectives can hang like rain clouds, hovering with ambiguity over otherwise productive efforts. At EK, we have seen the impact of murky objectives or a lack of KPIs manifest as time errantly devoted to the wrong efforts, misunderstandings of the intended outcomes, or even an impediment to generating buy-in from leadership. KPIs quantify ROI and provide clear metrics to measure impact before, during, and after a semantic integration. This visibility into progress throughout a project lifecycle unifies teams and creates momentum for the business. 

Creating KPIs that assess the impact of a semantic integration is both a challenge and a skill. Since KPIs are inherently tied to business objectives and specific use cases, it is hard to provide a standard set of KPIs for the semantic layer. The matrix below provides some example objectives and potential KPIs in three key areas: Scale, Content, and Time. More detail on each focus area follows the graphic.

 

Scale

Measurements of scale can be used to understand the sheer size of the semantic layer and to compare multiple iterations to demonstrate growth, such as the number of entities in a knowledge graph or terms in a taxonomy. Conversely, a reduction in the size of the content or asset library can also be an important measure, for instance when performing deduplication. Reducing the scale of asset libraries can also have a marked impact on additional business motivations, such as a reduction in the cost and/or carbon footprint required to support oversized data servers. Additionally, a smaller content library means a lower cognitive load for end users and less redundant or incorrect information. In one project, an investment and insurance company was struggling with inefficient search, and users were frustrated because the content returned to them was frequently out of date or inaccessible. Content auditing, a standardized taxonomy, and auto-tagging workflows were employed, and EK identified that about 45% of the content in the existing library was obsolete or outdated. By cleaning up this content prior to migrating to a new system, the scale of the library was reduced. This reduction in size, paired with a newly standardized enterprise taxonomy, resulted in more accurate content descriptions and a more relevant library for users. This occurrence is far from uncommon and emphasizes the critical nature of content cleanup and maintaining a right-sized content library. 

 

Content Management and Fit

Common measures of accuracy for taxonomies include precision (is the returned information correct?) and recall (is all the expected information included?). Assessing content fit is notoriously difficult, as it generally relies on Subject Matter Experts and qualitative data, and is thus very tricky to quantify. Focused KPIs help quantify it: for example, if the percentage of correctly auto-tagged terms is 75% at the first evaluation mark and 90% at the next point of measure, then accuracy has improved by 15 percentage points. Measuring accuracy is a critical step in the iterative development of taxonomies and other semantic solutions; without an understanding of what is working, it is impossible to know what changes need to be made. Being able to find and access accurate, up-to-date information is a common pain point among businesses that come to EK. After reducing the size of the content library through content cleanup, the business in our earlier example applied a newly formed enterprise taxonomy to its content with an auto-tagging workflow. Analysis of this effort found that the one-time auto-tagging process had an 86-99% success rate depending on the content type. This greatly increased the accuracy of the metadata describing the content, resulting in a better semantic search experience and increased findability across the enterprise. 
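
For readers who want the arithmetic spelled out, the sketch below computes precision and recall for auto-tagging against a human-reviewed "gold" set of tags; the tag values are illustrative.

```python
def precision_recall(auto_tags: set[str], gold_tags: set[str]) -> tuple[float, float]:
    """Precision: how much of what was returned is correct.
    Recall: how much of what was expected was returned."""
    true_positives = len(auto_tags & gold_tags)
    precision = true_positives / len(auto_tags) if auto_tags else 0.0
    recall = true_positives / len(gold_tags) if gold_tags else 0.0
    return precision, recall

p, r = precision_recall(
    auto_tags={"Onboarding", "Benefits", "Payroll"},
    gold_tags={"Onboarding", "Benefits", "Leave"},
)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.67, recall=0.67
```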

 

Time

One of the major changes that a well-tuned semantic layer can bring about is a measurable amount of time saved, from auto-tagging and other automations to the reduction of duplicated and incorrect information. Time savings increase user satisfaction and create a tangible benefit for the company by reducing the human-hours spent on any given task. This reduction in hours is an easily measurable reference point, since it generally translates directly into labor cost. If a semantic solution reduces errors in the data, then the number of errors corrected within the new system can be used to quantify the time saved now that errors do not need to be corrected manually. Similarly, the number of documents auto-tagged can be used to tell a story of time not spent manually tagging content. In the example cited in the two previous sections, after cleaning up content and applying the new taxonomy through auto-tagging, EK estimated that the human effort required for the content migration was reduced by nearly 80%. This is a remarkable improvement with both immediate and long-term benefits. 
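
Translating those hours into dollars is straightforward; the sketch below uses made-up numbers purely for illustration, not figures from the engagement described above.

```python
docs_auto_tagged = 20_000
minutes_saved_per_doc = 3        # manual tagging time avoided per document
loaded_hourly_rate = 60.0        # fully loaded labor cost per hour, in dollars

hours_saved = docs_auto_tagged * minutes_saved_per_doc / 60
labor_cost_avoided = hours_saved * loaded_hourly_rate
print(f"{hours_saved:,.0f} hours saved, roughly ${labor_cost_avoided:,.0f} in labor cost")
```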

 

Conclusion

Throughout our engagements, EK has learned critical lessons about the importance of creating and using KPIs. Anecdotal evidence can be a powerful tool, but even more powerful are anecdotes backed by quantified measures. Setting and tracking KPIs at the outset of a project provides the opportunity to understand and quantify the impact of semantic integrations. Across a variety of projects, KPIs have been core data points that keep projects focused and efforts directed toward the most important areas. When faced with murky or unclear objectives and a lack of KPIs, our teams have seen projects sit with their wheels spinning, work need to be redone, and focus shift onto unnecessary components of a project to the detriment of others. On the flip side, well-crafted KPIs have been used to prove business value, spot inefficiencies, and improve semantic solutions again and again. KPIs can be used over time to tell a story that will help you and your business better understand and track the goals met through the use of your taxonomy, ontology, or knowledge graph. The team at EK is well versed in developing both the semantic solutions you need and the KPIs necessary to evaluate, understand, and promote those solutions. To learn more about how we can help, contact EK today. 

The post Measuring the Value of your Semantic Layer: KPIs for Taxonomies, Ontologies, and Knowledge Graphs appeared first on Enterprise Knowledge.

Content Audit Workshop https://enterprise-knowledge.com/content-audit-workshop/ Fri, 26 Jan 2024 19:26:51 +0000 https://enterprise-knowledge.com/?p=19637 Kick-start your content audit with a practical, skill building workshop Whether you are planning a content management system implementation, a structured content initiative, a redesign or rebranding, or other content initiative, a content audit is the first step in understanding … Continue reading

Kick-start your content audit with a practical, skill building workshop

Whether you are planning a content management system implementation, a structured content initiative, a redesign or rebranding, or other content initiative, a content audit is the first step in understanding your current content landscape and preparing for your project. A well-designed and implemented content audit will help ensure that your content meets your business objectives and user goals and is in compliance with your style, discoverability, and management requirements. 

Our interactive workshop can be customized to meet the specific needs of your organization, aligning stakeholders across the business to set the foundation and develop a roadmap to accelerate a content improvement initiative.

The EK Advantage

  • Industry-leading expertise in content strategy, content analysis, taxonomy, content design and delivery, and enterprise content management technology
  • Our analyses are grounded in business strategy with a focus on content ROI
  • EK enhances its qualitative analysis with semantic analysis tools to improve outcomes and decrease cost
  • Templates and methodologies to prepare you to audit not only the content but also the systems within which it is managed and delivered
  • Workshop objectives and activities are customized to your business, technology stack, and content
  • We deliver both strategic and tactical guidance, including tested processes to ensure usable outcomes
  • EK experts guide workshop participants to apply key concepts and best practices in content auditing in order to maximize skill development

Download the Content Audit Workshop Brochure

EK’s content analysis experts will:

  • Empower participants to design and conduct an effective, actionable content audit project
  • Facilitate conversations with stakeholders to understand your organization’s unique content challenges
  • Discover business needs which may influence content creation, publication, and management processes
  • Provide the framework for a repeatable process for ongoing content analysis

Why Audit

A content audit allows you to:

  • Assess whether your content supports business and user goals
  • Identify whether content consistently follows design, editorial, style, and metadata guidelines
  • Gain detailed knowledge of your content’s depth, breadth, substance, style, and structure
  • Establish a basis for analyzing the gap between the content you have and the content you need
  • Identify content for revision, removal, and migration
  • Make informed decisions about resources and budget
  • Support your long-term content strategy and planning
  • Make the case for content as a business asset
  • Quantify content ROI
  • Uncover opportunities for content improvements in the short and long term
  • Prepare your content for transformation
  • Identify areas for improvement in documentation, processes, training, and tools
  • Plan for ongoing content governance

Workshop Outcomes

By the conclusion of the workshop you will have an actionable content audit plan including:

  • Prioritized business goals: Setting the context for the audit, this exercise will make the connection between business strategy and the content that supports it
  • List of priority audiences and their content requirements: Understanding your audience’s pain points with your content and the tasks they need to complete allows you to evaluate where content is supporting or lacking
  • Defined project objectives: Participants will complete a project brief that captures the objectives of the audit to ensure that the audit plan will result in usable outcomes
  • List of content stakeholders, their roles, and responsibilities: Listing the key stakeholders across the organization and roles and responsibilities assists in developing workflows and ensuring accountability
  • Content use cases: A prioritized list of the user tasks and business requirements content needs to support surfaces gaps or opportunities for content creation or improvement
  • Content type analysis: Identification and evaluation of the mix of content types, their use, design, and effectiveness
  • Content evaluation criteria definitions and measurements: The heuristic evaluations by which content will be audited, with clear definitions and metrics to ensure consistent results
  • Scope and timeline for a full audit: Guidance as to how to set the scope for a full content audit and determine the time and resources required
  • Map of your content ecosystem: Identifying all channels to which content is published and their integrations
  • Hands-on experience: During the workshop, participants will audit a pilot selection of content to test and refine the criteria and process
  • Skill development: Through instruction and hands-on practice, participants will gain skills in analyzing content
  • Templates: Participants will receive reusable templates that can be used for capturing and presenting audit activities and outcomes
  • Content governance framework: An outline for developing a content governance plan to manage content and processes over time  

Ready to get started? Contact us at info@enterprise-knowledge.com

The post Content Audit Workshop appeared first on Enterprise Knowledge.

When a Knowledge Portal Becomes a Learning and Performance Portal https://enterprise-knowledge.com/when-a-knowledge-portal-becomes-a-learning-and-performance-portal/ Fri, 21 Jul 2023 15:56:52 +0000 https://enterprise-knowledge.com/?p=18436 EK’s CEO,  Zach Wahl, previously published Knowledge Portals Revisited, a blog that spells out an integrated suite of systems that actually puts users’ needs at the center of a knowledge management solution. We’ve long acknowledged that content may need to … Continue reading

EK's CEO, Zach Wahl, previously published Knowledge Portals Revisited, a blog that spells out an integrated suite of systems that truly puts users' needs at the center of a knowledge management solution. We've long acknowledged that content may need to live in specialized repositories across the enterprise, but we finally have a solution that gives users a single place to search for and discover meaningfully contextualized knowledge.

Knowledge Portals, as described in Wahl’s blog, integrate data and information from multiple sources so that organizations can more efficiently generate insights and make data-driven decisions. However, if our goal is not just to enable knowledge insights but also to improve learning and performance, there are some additional design imperatives:

  • Findability of content by task or competency
  • Focus on the actions which enable learning
  • Measurement of learning and performance

 

Findability of Content for Learning and Performance

When designing a Knowledge Portal, one of the key considerations is how content is organized to optimize findability. Recently, the EK team designed and developed a few Knowledge Portals whose information architecture and metadata strategies centered on business-specific concepts:

  • For a global investment firm, the key organizing principle was deals and investments. Employees of the organization needed to see all of the data and information about a particular deal in one place so they could spot trends and analyze relationships between data points more effectively.
  • For a manufacturing company, the key organizing principle was products and solutions. The Knowledge Portal needed to dynamically aggregate all of the information about a product, from the technical specifications to the customer success stories all in one place. That place became one dynamically assembled page for each product.

A Knowledge Portal gives you the ability to see all of your diverse knowledge assets in context. But developing the skills and abilities to apply those assets and solve complex problems requires dedicated learning and performance improvement strategies. When the goal of the portal is not just knowledge but improving learning and performance, the organizing principles change from business concepts to competencies or tasks.

  • If the primary driver is to improve learning, the organizing principle becomes competencies. For example, Indeed.com identifies eighteen essential sales professional competencies. These competencies, such as upselling, negotiation, and product knowledge, would serve as an excellent navigational structure and Primary Metadata Fields for organizing a learning-focused portal for the organization.
  • If the primary driver is to improve performance, the organizing principle becomes tasks. Put yourself in the shoes of a busy sales professional trying to complete a simple task, such as preparing and delivering a product demo, unsure of the correct process. The sales professional needs to quickly find the performance support and supporting knowledge necessary to complete their task – they don’t want to wade through a competency-based navigational structure about higher-level concepts like upselling. Instead, they seek a system that allows them to find action-oriented, task-focused information at the point of need.

 

The technical applications of these concepts manifest in a few ways. In its simplest form, the key organizing principle informs the navigation menu, as shown in the gif above. Competencies or tasks can also serve as the top level of a hierarchical taxonomy, enabling users to filter search results by competency or task.

If we want to get beyond search and navigation and start to use AI to automate recommendations of content, we can build an ontological data model as the foundation of this functionality. In this instance, the key organizing principle must become central to our ontology. This can often be achieved by having many entities of a particular category or class. For example, in a Learning Portal, there would be more competency entities than entities of any other category or class. In a Performance Portal, we would emphasize task entities in the ontology design.
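
The sketch below illustrates the idea at a toy scale: competencies sit at the center of the model, and everything related to a competency is gathered through its relationships. A production portal would do this with an ontology and knowledge graph rather than in-memory dictionaries, and all names here are illustrative.

```python
competency_graph = {
    "negotiation": {
        "courses": ["Negotiation Fundamentals"],
        "videos": ["Closing Techniques"],
        "communities": ["Sales Excellence CoP"],
        "sme": "a.rivera",
    },
    "product knowledge": {
        "courses": ["Product Deep Dive 200"],
        "videos": ["Demo Basics"],
        "communities": ["Product Champions CoP"],
        "sme": "j.chen",
    },
}

def recommend(competency: str) -> dict:
    """Aggregate every asset related to a competency into one view for the learner."""
    return {"competency": competency, **competency_graph.get(competency, {})}

print(recommend("negotiation"))
```

In a Performance Portal, the same pattern would simply center on task entities instead of competencies.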

Actions to Enable Learning and Performance

Findability of knowledge assets solves the problem of access to knowledge, but to develop skills and abilities, users must invest a bit more effort. Through active engagement with new information and content, individuals can enhance their understanding, retention, and application of knowledge. A Learning and Performance Portal builds on the foundation of a Knowledge Portal by not only aggregating information but also incorporating features that encourage active engagement and interaction.

A typical level of engagement and interaction might be reviewing a piece of learning content (reading a summary of a process and/or watching a video about it) and then answering a question. E-learning courses often handle this through multiple choice, true/false, or matching questions. These types of assessment questions provide multiple benefits at once: we collect formative or summative data about learner performance, and we also give the learner an opportunity to pause and reflect on what they just read. Instructional designers have a lot of tricks up their sleeves to promote interaction and reflection, including branching scenarios and gamification. These types of dynamic interactions must be incorporated into our Learning and Performance Portals rather than simply enabling users to see all of the information in one place.

Another way we can promote dynamic interaction and reflection with new ideas is by enabling interactions with real people. When our Learning and Performance Portals aggregate all of the information about a competency or task in one place, we should include relevant Communities of Practice (CoPs) and contact information for Subject Matter Experts (SMEs). What better way is there to actively engage with new concepts than by asking questions and engaging in dialogue? What better way to learn a new task or process than by helping to collaboratively improve it?

Measuring Learning and Performance

In a Knowledge Portal, typical data points collected include the number of unique visitors to the portal, which pages they’re spending time on, which pages have high bounce rates, and what terms users are frequently searching for. This data helps us understand the performance of the content and the portal itself. But in a Learning and Performance Portal, we need to understand the specific learning content users were exposed to and, ideally, data that indicates mastery of concepts and/or successful performance of tasks.

In compliance-focused situations, data must be able to confirm that a specific employee fulfilled the requirement of accessing the correct information, serving the purpose of liability avoidance. Data that indicates that the employee completed a course or watched a video suffices in these cases. EK developed a Learning and Performance Portal for a client, which captured Experience API (xAPI) activity statements for each individual, tracking not only whether they started an instructional video but whether they watched it all the way to the end. We were able to generate reports showing which users never viewed the video, viewed but didn’t play the video at all, played a portion of the video, or completed the video.

While compliance remains an important requirement, utilizing Learning and Performance Portals enables organizations to go beyond simply checking off completion. They allow for a more comprehensive assessment of learners’ knowledge and skills, providing a more holistic view of their learning journey and progress. Depending on the learning strategy employed, Learning and Performance Portals can capture additional data beyond mere completion status. This may include tracking whether learners correctly answered formative or summative assessment questions and the number of attempts it took them to do so. Furthermore, the portals can record any earned badges or certifications as a result of completing specific learning activities. xAPI activity statements can be used to track whether or not a learner connected with an SME, joined a CoP, chatted with a mentor, or performed well in a multiplayer game.
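
For readers unfamiliar with the format, the sketch below shows the general shape of an xAPI statement recording that a learner watched a video to completion. The activity ID and values are illustrative, an LRS would receive this as JSON, and the xAPI specification remains the authoritative reference for the structure.

```python
statement = {
    "actor": {"name": "Jane Doe", "mbox": "mailto:jane.doe@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.com/videos/product-demo-basics",
               "definition": {"name": {"en-US": "Product Demo Basics"}}},
    "result": {"completion": True, "duration": "PT9M30S"},
}
```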

By capturing this data within Learning and Performance Portals, organizations can gain insights into learners’ proficiency levels, their progress in mastering specific topics, and their overall engagement with the learning materials. This data can be valuable for assessing the effectiveness of the learning programs, identifying areas where additional support or resources may be required, and finding and recognizing individuals who have demonstrated competence through job performance.

Summary

A modern learning ecosystem requires a diverse body of learning content, from eLearning courses and webinars to performance support and communities of practice. We often need multiple systems to best enable this diverse learning content. A Learning and Performance Portal can provide a single entry point for learners so that they can find everything they need to develop a new competency or perform a task, all in one place. Further, this learning content is automatically aggregated, removing a manual content maintenance burden from your instructional designers and trainers. If you could use some support in the design and development of a Learning and Performance Portal, Enterprise Knowledge can help.

The post When a Knowledge Portal Becomes a Learning and Performance Portal appeared first on Enterprise Knowledge.

Let’s Talk Personalization: EK’s Joe Hilger to Speak at Upcoming Webinar “Opportunities And Outcomes From Personalizing Content” https://enterprise-knowledge.com/lets-talk-personalization-eks-joe-hilger-to-speak-at-upcoming-webinar-opportunities-and-outcomes-from-personalizing-content/ Fri, 10 Feb 2023 15:03:53 +0000 https://enterprise-knowledge.com/?p=17517 Joe Hilger, COO of Enterprise Knowledge (EK), will join Kevin Nichols for the live webinar Opportunities And Outcomes From Personalizing Content. Hilger and Nichols will discuss what personalization means in the Knowledge Management (KM) space and how to apply KM … Continue reading

Joe Hilger, COO of Enterprise Knowledge (EK), will join Kevin Nichols for the live webinar Opportunities And Outcomes From Personalizing Content. Hilger and Nichols will discuss what personalization means in the Knowledge Management (KM) space and how to apply KM in different business cases. They will also address best practices when implementing a personalization content strategy and explore how to harness the power of personalization to transform the way employees, partners, and customers interact with personalized content. By the conclusion of the webinar, listeners will be able to:

  • Scale and expand a Proof of Concept from one business use case;
  • Prepare their organization for personalization; and
  • Apply knowledge graphs to scale personalized content.

 

Leveraging EK’s experience implementing personalization tools and our PoC techniques, Hilger and Nichols will discuss some of the specific impacts personalization has had on Content Management Systems. Join them for Opportunities And Outcomes From Personalizing Content on March 15th, from 1:00 PM – 2:00 PM ET on BrightTalk.

The post Let’s Talk Personalization: EK’s Joe Hilger to Speak at Upcoming Webinar “Opportunities And Outcomes From Personalizing Content” appeared first on Enterprise Knowledge.

EK’s Joe Hilger to Speak at Upcoming Webinar “Conducting A Personalization Proof-of-Concept (PoC)” https://enterprise-knowledge.com/eks-joe-hilger-to-speak-at-upcoming-webinar-conducting-a-personalization-proof-of-concept-poc/ Mon, 23 Jan 2023 16:44:10 +0000 https://enterprise-knowledge.com/?p=17297 Joe Hilger, COO of Enterprise Knowledge (EK), will join Scott Abel for the live webinar “Conducting A Personalization Proof-of-Concept (PoC).” Hilger and Abel will discuss what personalization means in the Knowledge Management (KM) space and how to apply KM in … Continue reading

Joe Hilger, COO of Enterprise Knowledge (EK), will join Scott Abel for the live webinar “Conducting A Personalization Proof-of-Concept (PoC).” Hilger and Abel will discuss what personalization means in the Knowledge Management (KM) space and how to apply KM in different business use cases. Their webinar will also address best practices when first implementing a personalization content strategy and explore how to harness the power of personalization to transform the way employees, partners, and customers interact with your organization. By the conclusion of the webinar, listeners will be able to:

  • Prioritize use cases to narrow their PoC to a vertical slice;
  • Engineer content structure, metadata model, and schema for personalization;
  • Componentize content for reusability; and
  • Apply knowledge graphs to scale personalized content.

 

Leveraging EK's experience implementing personalization tools and our PoC techniques, Hilger and Abel will discuss some of the specific impacts personalization has had on Content Management Systems. Join them for "Conducting A Personalization Proof-of-Concept" on January 24th, from 1:00 PM – 2:00 PM ET on BrightTalk.

The post EK’s Joe Hilger to Speak at Upcoming Webinar “Conducting A Personalization Proof-of-Concept (PoC)” appeared first on Enterprise Knowledge.

Maureen “Mo” Weinhardt – Director of Knowledge Management & Content Creation at Mach49 https://enterprise-knowledge.com/maureen-mo-weinhardt-director-of-knowledge-management-content-creation-at-mach49/ Thu, 15 Dec 2022 14:09:26 +0000 https://enterprise-knowledge.com/?p=16906 Enterprise Knowledge CEO Zach Wahl speaks with Maureen “Mo” Weinhardt, Director of Knowledge Management & Content Creation at Mach49, a growth incubator for global industries. An avid learner, Mo’s passion for developing exceptional people, programs, and content inspires her work … Continue reading

Enterprise Knowledge CEO Zach Wahl speaks with Maureen “Mo” Weinhardt, Director of Knowledge Management & Content Creation at Mach49, a growth incubator for global industries. An avid learner, Mo’s passion for developing exceptional people, programs, and content inspires her work to improve complex organizational systems and support dynamic, cross-functional teams. 

In conversation with Zach, Mo discusses how the scope of Knowledge Management affects daily meetings, KM’s role in managing the technical stack at Mach49, the creation and curation of learning content, and much more.


 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

The post Maureen “Mo” Weinhardt – Director of Knowledge Management & Content Creation at Mach49 appeared first on Enterprise Knowledge.
