taxonomies Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/taxonomies/

How to Leverage LLMs for Auto-tagging & Content Enrichment
https://enterprise-knowledge.com/how-to-leverage-llms-for-auto-tagging-content-enrichment/
Wed, 29 Oct 2025

When working with organizations on key data and knowledge management initiatives, we’ve often found that a common roadblock is the lack of quality (relevant, meaningful, and up-to-date) content. Stakeholders may be excited to get started with advanced tools as part of their initiatives, like graph solutions, personalized search solutions, or advanced AI solutions; however, without a strong backbone of semantic models and context-rich content, these solutions are significantly less effective. For example, without proper tags and content types, a knowledge portal development effort can’t fully demonstrate the value of faceting and aggregating pieces of content and data together in ‘knowledge panes’. With a more semantically rich set of content to work with, the portal can begin showing value through search, filtering, and aggregation, leading to further organizational and leadership buy-in.

One key step in preparing content is the application of metadata and organizational context to pieces of content through tagging. There are several tagging approaches an organization can take to enrich pre-existing content with metadata and organizational context, including manual tagging, automated tagging capabilities from a taxonomy and ontology management system (TOMS), using apps and features directly from a content management solution, and various hybrid approaches. While many of these approaches, in particular acquiring a TOMS, are recommended as a long-term auto-tagging solution, EK has recommended and implemented Large Language Model (LLM)-based auto-tagging capabilities across several recent engagements. Due to LLM-based tagging’s lower initial investment compared to a TOMS and its greater efficiency than manual tagging, these auto-tagging solutions have been able to provide immediate value and jumpstart the process of re-tagging existing content. This blog will dive deeper into how LLM tagging works, the value of semantics, technical considerations, and next steps for implementing an LLM-based tagging solution.

Overview of LLM-Based Auto-Tagging Process

Similar to existing auto-tagging approaches, the LLM suggests a tag by parsing a piece of content, processing and identifying key phrases, terms, or structures that give the document context. Through prompt engineering, the LLM is then asked to compare the similarity of key semantic components (e.g., named entities, key phrases) with various term lists, returning a set of terms that could be used to categorize the piece of content. These responses can be constrained in the tagging workflow to only return terms meeting a specific similarity score. The tagging results are then exported to a data store and applied to the content source. Many factors, including the particular LLM used, the knowledge the LLM is working with, and the source location of content, can greatly impact tagging effectiveness and accuracy. In addition, adjusting parameters, taxonomies/term lists, and/or prompts to improve precision and recall can ensure tagging results align with an organization’s needs. The final step is the auto-tagging itself: the application of the tags in the source system, for example via a script or workflow that applies the stored tags to pieces of content.

Figure 1: High-level steps for LLM content enrichment
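To make these steps concrete, below is a minimal sketch of the suggest-and-filter portion of the workflow. It assumes the openai Python client and an OpenAI-style chat completions API; the model name, the three-term vocabulary, and the 0.75 similarity threshold are illustrative placeholders, not a definitive implementation.

```python
# Minimal sketch of the suggest-and-filter tagging step (illustrative only).
# Assumes the openai Python client; the model, taxonomy, and threshold are
# placeholders to adapt to your own environment.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TAXONOMY = ["Green Account", "Member Services", "Regulatory Compliance"]

PROMPT = """You are a content tagger. Compare the document below to the
controlled vocabulary and return a JSON object of the form
{{"tags": [{{"term": "<vocabulary term>", "score": <0.0-1.0>}}]}}.
Only use terms from the vocabulary.

Vocabulary: {terms}

Document:
{text}
"""

def suggest_tags(text: str, threshold: float = 0.75) -> list[dict]:
    """Ask the LLM for candidate tags, then keep only confident matches."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user",
                   "content": PROMPT.format(terms=", ".join(TAXONOMY),
                                            text=text[:8000])}],
    )
    candidates = json.loads(response.choices[0].message.content).get("tags", [])
    # Enforce the controlled vocabulary and the similarity-score cutoff
    # before the tags are exported to the data store.
    return [c for c in candidates
            if c.get("term") in TAXONOMY and c.get("score", 0) >= threshold]
```

The filtered results would then be written to a data store and applied to the source system, per steps three and four above.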

EK has put these steps into practice, for example, when engaging with a trade association on a content modernization project to migrate and auto-tag content into a new content management system (CMS). The organization had been struggling with content findability, standardization, and governance, particularly with the language used to describe the diverse areas of work the association covers. As part of this engagement, EK first worked with the organization’s subject matter experts (SMEs) to develop new enterprise-wide taxonomies and controlled vocabularies, integrated across multiple platforms, to be utilized by both external and internal end-users. To operationalize and apply these common vocabularies, EK developed an LLM-based auto-tagging workflow utilizing the four high-level steps above to auto-tag metadata fields and identify content types. This content modernization effort set the organization up for document workflows, search solutions, and generative AI projects, all of which can leverage the added metadata on documents.

Value of Semantics with LLM-Based Auto-Tagging

Semantic models such as taxonomies, metadata models, ontologies, and content types can all be valuable inputs to guide an LLM on how to effectively categorize a piece of content. When considering how an LLM is trained for auto-tagging content, a greater emphasis needs to be put on organization-specific context. If using a taxonomy as a training input, organizational context can be added through weighting specific terms, increasing the number of synonyms/alternative labels, and providing organization-specific definitions. For example, by providing organizational context through a taxonomy or business glossary that the term “Green Account” refers to accounts that have met a specific environmental standard, the LLM would not accidentally tag content related to the color green or an account that is financially successful.
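As an illustration of what that organizational context can look like as an input, here is a hypothetical taxonomy entry for “Green Account” rendered into a prompt-ready context line; the field names and the 0-10 weighting scale are invented conventions for the example, not a standard.

```python
# Hypothetical taxonomy entry passed to the tagging prompt as context.
# Field names and the 0-10 weighting scale are illustrative conventions;
# adapt them to your own taxonomy management practices.
green_account = {
    "prefLabel": "Green Account",
    "altLabels": ["Environmentally Certified Account", "Eco Account"],
    "definition": ("An account that has met the organization's "
                   "environmental certification standard."),
    "weight": 9,  # 0-10; higher means the organization prefers this term
}

def term_context(term: dict) -> str:
    """Render one taxonomy term as a prompt-ready context line."""
    synonyms = "; ".join(term["altLabels"])
    return (f"Term: {term['prefLabel']} (synonyms: {synonyms}). "
            f"Definition: {term['definition']}")

print(term_context(green_account))
```

Injecting the definition alongside the label is what keeps the model from confusing “Green Account” with the color green or a financially healthy account.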

Another benefit of an LLM-based approach is the ability to evolve both the semantic model and the LLM as tagging results are received. As sets of tags are generated for an initial set of content, the taxonomies and content models being used to train the LLM can be refined to better fit the specific organizational context. This could look like adding alternative labels, adjusting the definitions of terms, or adjusting the taxonomy hierarchy. Similarly, additional tools and techniques, such as weighting and prompt engineering, can tune the results provided by the LLM to achieve higher recall (the share of correct terms the LLM includes) and precision (the share of terms the LLM selects that are actually correct) when recommending terms. One example of this is adding a weighting from 0 to 10 for all taxonomy terms and assigning a higher score to terms the organization prefers to use. The workflow developed alongside the LLM can use this context to include or exclude a particular term.

Implementation Considerations for LLM-Based Auto-Tagging 

Several factors, such as the timeframe, volume of information, necessary accuracy, types of content management systems, and desired capabilities, inform the complexity and resources needed for LLM-based content enrichment. The following considerations expand upon the factors an organization must consider for effective LLM content enrichment. 

Tagging Accuracy

The accuracy of tags from an LLM directly impacts the end-users and systems (e.g., search instances or dashboards) that utilize the tags. Safeguards need to be implemented to ensure end-users can trust the accuracy of the tagged content they are using; these safeguards help ensure that users do not mistakenly access or rely on the wrong documents, and that they are not frustrated by the results they get. To mitigate both of these concerns, high recall and precision scores in LLM tagging improve overall accuracy and lower the chance of miscategorization. This can be done by investing further in human test-tagging and input from SMEs to create a gold-standard set of tagged content as training data for the LLM. The gold-standard set can then be used to adjust how the LLM weights or prioritizes terms, based on the organizational context in the set. These practices help avoid hallucinations (factually incorrect or misleading content) that could otherwise appear in applications utilizing the auto-tagged content.
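As a sketch of how a gold-standard set supports this tuning, the snippet below compares LLM-suggested tags against SME-approved tags for a single document to compute precision and recall; the tag sets themselves are hypothetical.

```python
# Hypothetical gold-standard evaluation: compare LLM-suggested tags
# against SME-approved tags to measure precision and recall.

def precision_recall(suggested: set[str], gold: set[str]) -> tuple[float, float]:
    """Precision: share of suggested tags that are correct.
    Recall: share of gold tags that were suggested."""
    if not suggested or not gold:
        return 0.0, 0.0
    correct = suggested & gold
    return len(correct) / len(suggested), len(correct) / len(gold)

# Example: one document from the gold-standard set.
suggested = {"Green Account", "Member Services"}
gold = {"Green Account", "Regulatory Compliance"}
p, r = precision_recall(suggested, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50
```

Aggregating these scores across the gold-standard set shows whether prompt, weighting, or taxonomy changes are actually improving tagging quality.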

Content Repositories

One factor that greatly adds technical complexity is accessing the various types of content repositories that an LLM solution, or any auto-tagging solution, needs to read from. The best content management practice for auto-tagging is to read content in its source location, limiting the risk of duplication and the effort needed to download and then read content. When developing a custom solution, each content repository often needs a distinctive approach to read and apply tags. A content or document repository like SharePoint, for example, has a robust API for reading content and seamlessly applying tags, while a less widely adopted platform may not have the same level of support. It is important to account for the unique needs of each system in order to limit the disruption end-users may experience when embarking on a tagging effort.
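For instance, applying tags in SharePoint can go through the Microsoft Graph fields endpoint for list items. The sketch below assumes an already-acquired access token and a text column named “Tags”; the site, list, and item identifiers are placeholders.

```python
# Sketch: write an auto-generated tag back to a SharePoint list item
# using the Microsoft Graph "fields" endpoint. The site/list/item IDs,
# the "Tags" column name, and the token acquisition are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def apply_tag(token: str, site_id: str, list_id: str,
              item_id: str, tags: list[str]) -> None:
    url = f"{GRAPH}/sites/{site_id}/lists/{list_id}/items/{item_id}/fields"
    resp = requests.patch(
        url,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        json={"Tags": "; ".join(tags)},  # assumes a text column named "Tags"
    )
    resp.raise_for_status()
```

A less widely adopted repository may offer no comparable write API, in which case the "apply" step can require exports, file-level rewrites, or manual intervention.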

Knowledge Assets

When considering the scalability of the auto-tagging effort, it is also important to evaluate the breadth of knowledge asset types being analyzed. While the ability of LLMs to process several types of knowledge assets has been growing, each step of additional complexity, particularly evaluating multiple asset types, can add to the resources and time needed to read and tag documents. A PDF with 2-3 pages of content will take far fewer tokens and resources for an LLM to read than a long visual or audio asset. Moving from a tagging workflow for structured knowledge assets to tagging unstructured content will increase the overall time, resources, and custom development needed to run the workflow.
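One way to gauge that difference ahead of time is a rough pre-flight token estimate per asset. The sketch below uses the tiktoken library; the encoding name is an assumption that depends on the model in use, and the asset texts are placeholders.

```python
# Rough pre-flight token estimate for text assets (illustrative).
# Assumes the tiktoken library; the encoding choice depends on your model.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # used by recent OpenAI models

def estimate_tokens(text: str) -> int:
    return len(enc.encode(text))

short_pdf_text = "..."   # e.g., extracted text from a 2-3 page PDF
transcript_text = "..."  # e.g., transcript of a long audio asset
print(estimate_tokens(short_pdf_text), estimate_tokens(transcript_text))
```

Estimates like these make it possible to budget cost and runtime before pointing the workflow at an entire repository.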

Data Security & Entitlements

When utilizing an LLM, it is recommended that an organization invest in a private or in-house LLM to complete the analysis, rather than leveraging a publicly available model. Notably, a private LLM does not need to be ‘on-premises’; several providers offer LLMs hosted within your company’s own environment. This ensures a higher level of document security and additional options for customization. Particularly when tackling use cases with higher levels of personal information and access controls, a robust mapping of content and an understanding of what needs to be tagged is imperative. For example, if a publicly available LLM were reading confidential documents on how to develop a company-specific product, this information could then be leveraged in other public queries and would have a higher likelihood of being accessed outside of the organization. In an enterprise data ecosystem, running an LLM-based auto-tagging solution can raise red flags around data access, controls, and compliance. These challenges can be addressed through a Unified Entitlements System (UES) that creates a centralized policy management layer for both end users and the LLM solutions being deployed.
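As a simple sketch of that principle, a tagging workflow can gate documents on entitlements before they ever reach the LLM; the classification field and policy set below are hypothetical stand-ins for what a UES would manage centrally.

```python
# Hypothetical entitlement gate: only send documents to the LLM when the
# tagging service is allowed to read them. A UES would manage this policy
# centrally; the classification values here are illustrative.
ALLOWED_CLASSIFICATIONS = {"public", "internal"}

def can_llm_read(doc: dict) -> bool:
    """Deny confidential or restricted content to the tagging pipeline."""
    return doc.get("classification") in ALLOWED_CLASSIFICATIONS

docs = [{"id": 1, "classification": "internal"},
        {"id": 2, "classification": "confidential"}]
to_tag = [d for d in docs if can_llm_read(d)]  # only doc 1 proceeds
```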

Next Steps

One major consideration with an LLM tagging solution is maintenance and governance over time. For some organizations, after completing an initial enrichment of content by the LLM, a combination of manual tagging and forms within each CMS helps them maintain tagging standards over time. However, a more mature organization that is dealing with several content repositories and systems may want to either operationalize the content enrichment solution for continued use or invest in a TOMS. With either approach, completing an initial LLM enrichment of content is a key method to prove the value of semantics and metadata to decision-makers in an organization. 
Many technical solutions and initiatives that excite both technical and business stakeholders can be actualized by an LLM content enrichment effort. With content that is tagged and adheres to semantic standards, solutions like knowledge graphs, knowledge portals, and semantic search engines, or even an enterprise-wide LLM solution, can demonstrate even greater organizational value.

If your organization is interested in upgrading your content and developing new KM solutions, contact us!

How KM Leverages Semantics for AI Success
https://enterprise-knowledge.com/how-km-leverages-semantics-for-ai-success/
Wed, 03 Sep 2025


To get the most out of Large Language Model (LLM)-driven AI solutions, you need to provide them with structured, context-rich knowledge that is unique to your organization. Without purposeful access to proprietary terminology, clearly articulated business logic, and consistent interpretation of enterprise-wide data, LLMs risk delivering incomplete or misleading insights. This infographic highlights how KM incorporates semantic technologies and practices across scenarios to enhance AI capabilities, and when they are foundational, empowering your organization to strategically leverage semantics for more accurate, actionable outcomes while cultivating sound knowledge intelligence practices and investing in your enterprise's knowledge assets.

Use Case: Expert Elicitation (Semantics used for AI Enhancement)
Efficiently capture valuable knowledge and insights from your organization's experts about past experiences and lessons learned, especially when these insights have not yet been formally documented. By using ontologies to spot knowledge gaps and taxonomies to clarify terms, an LLM can capture and structure undocumented expertise, storing it in a knowledge graph for future reuse.
Example: Capturing a senior engineer's undocumented insights on troubleshooting past system failures to streamline future maintenance.

Use Case: Discovery & Extraction (Semantics used for AI Enhancement)
Quickly locate key insights or important details within a large collection of documents and data, synthesizing them into meaningful, actionable summaries delivered directly back to the user. Ontologies ensure concepts are recognized and linked consistently across wording and format, enabling insights to be connected, reused, and verified outside an LLM's opaque reasoning process.
Example: Scanning thousands of supplier agreements to locate variations of key contract clauses, despite inconsistent wording, then compiling a cross-referenced summary for auditors to accelerate compliance verification and identify high-risk deviations.

Use Case: Context Aggregation (Semantics for AI Foundations)
Gather fragmented information from diverse sources and combine it into a unified, comprehensive view of your business processes or critical concepts, enabling deeper analysis, more informed decisions, and previously unattainable insights. Knowledge graphs unify fragmented information from multiple sources into a persistent, coherent model that both humans and systems can navigate. Ontologies make relationships explicit, enabling the inference of new knowledge that reveals connections and patterns not visible in isolated data.
Example: Integrating financial, operational, HR, and customer support data to predict resource needs and reveal links between staffing, service quality, and customer retention for smarter planning.

Use Case: Cleanup and Optimization (Semantics used for AI Enhancement)
Analyze and optimize your organization's knowledge base by detecting redundant, outdated, or trivial (ROT) content, then recommend targeted actions or automatically archive and remove irrelevant material to keep information fresh, accurate, and valuable. Leverage taxonomies and ontologies to recognize conceptually related information even when expressed in different terms, formats, or contexts, allowing the AI to uncover hidden redundancies, spot emerging patterns, and make more precise recommendations than could be justified by keyword or RAG search alone.
Example: Automatically detecting and flagging outdated or duplicative policy documents, despite inconsistent titles or formats, across an entire intranet, streamlining reviews and ensuring only current, authoritative content remains accessible.

Use Case: Situated Insight (Semantics used for AI Enhancement)
Proactively deliver targeted answers and actionable suggestions uniquely aligned with each user's expressed preferences, behaviors, and needs, enabling swift, confident decision-making. Use taxonomies to standardize and reconcile data from diverse systems, and apply knowledge graphs to connect and contextualize a user's preferences, behaviors, and history, creating a unified, dynamic profile that drives precise, timely, and highly relevant recommendations.
Example: Instantly curating a personalized learning path (complete with recommended modules, mentors, and practice projects) based on an employee's recent performance trends, skill gaps, and long-term career goals, accelerating both individual growth and organizational capability.

Use Case: Context Mediation and Resolution (Semantics for AI Foundations)
Bridge disparate contexts across people, processes, technologies, and more into a common, resolved, machine-readable understanding that preserves nuance while eliminating ambiguity. Semantics establish a shared, machine-readable understanding that bridges differences in language, structure, and context across people, processes, and systems. Taxonomies unify terminology from diverse sources, while ontologies and knowledge graphs capture and clarify the nuanced relationships between concepts, eliminating ambiguity without losing critical detail.
Example: Reconciling varying medical terminologies, abbreviations, and coding systems from multiple healthcare providers into a single, consistent patient record, ensuring that every clinician sees the same unambiguous history and enabling faster diagnosis, safer treatment decisions, and more effective care coordination.

To learn more about our work with AI and semantics, or for help making the most of these investments, don't hesitate to reach out at https://enterprise-knowledge.com/contact-us/.

Building Your Information Shopping Mall – A Semantic Layer Guide
https://enterprise-knowledge.com/building-your-information-shopping-mall-a-semantic-layer-guide/
Wed, 20 Aug 2025

Imagine your organization’s data as a vast collection of goods scattered across countless individual stores, each with its own layout and labeling system. Finding exactly what you need can feel like an endless, frustrating search. This is where a semantic layer can help. Think of it as your organization’s “Information Shopping Mall.” 

Just as a physical mall provides a cohesive structure for shoppers to find stores, browse items, and make purchases, a semantic layer creates a unified environment for business users. It allows them to easily discover datasets from diverse sources, review connected information, and gain actionable insights. It brings together a variety of data providers (our “stores”) and their data (their “goods”) into a single, intuitive location, enabling end-users, including people, analytics tools, and agentic solutions (our “shoppers”), to find and consume precisely what they need to excel in their roles.

This analogy of the Semantic Layer as an Information Shopping Mall has proven incredibly helpful for our teams and clients. In this blog post, we’ll use this familiar background to explore the foundational elements required to build your own Semantic Layer Shopping Mall and share key lessons learned along the way. 

 

1. Building the Mall: Creating the Structural Foundations

Before any stores can open their doors, a shopping mall needs fundamental structural elements: floors, walls, escalators, and walkways. Similarly, a semantic layer demands a well-designed technology architecture to support a seamless, connected data experience.

The core infrastructure of your semantic layer is formed by powerful tools such as Graph Databases, which connect complex relationships; Taxonomy Management Systems, for organizing data with consistent vocabularies; and Data Catalogs, which provide a directory of your data assets. Just like physical malls, no two semantic layers are identical. The unique goals and existing technological landscape of your organization will dictate the specific architecture required to build your bespoke information shopping mall. For example, an organization with a variety of data sensitivity levels and goals of creating agentic solutions may require an Identity and Access Management solution to ensure security across uses, or an organization that is keen on creating fraud detection solutions on top of a plethora of information may require a graph analytics tool. 

 

2. Creating the Directory: Developing Categorization Models

With your Information Shopping Mall’s infrastructure in place, the next crucial step is to design its interior layout and create a clear map for your shoppers. A well-designed store directory allows a shopper to quickly scan product types like clothing, electronics, and toys and effortlessly navigate to the right section or store.

Your semantic layer needs precisely this type of robust core categorization model to direct your tools, systems, and people to the specific information they seek. This is achieved by establishing and consistently applying a common vocabulary across all of your systems. Within the semantic layer context, we leverage taxonomies (hierarchical lists of values) and ontologies (formal maps of concepts and their relationships) to provide this essential direction. Taxonomies may be used where we want to categorize stores as alike (Payless, DSW, and Foot Locker may be interchangeable as shoe stores), whereas ontologies, thanks to their multi-relational nature, can help tell us which stores make sense to visit for a certain occasion (Staples for school supplies, followed by Gap for back-to-school clothes).
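To make the distinction concrete, here is a small sketch using the rdflib Python library: the taxonomy groups the shoe stores under one broader concept, while the ontology adds a cross-category relationship. The namespace and the visitedForOccasion property are invented for the example.

```python
# Sketch: a tiny taxonomy vs. ontology distinction in RDF, using rdflib.
# The namespace and the "visitedForOccasion" property are invented here
# purely for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, SKOS

MALL = Namespace("https://example.org/mall/")
g = Graph()
g.bind("skos", SKOS)
g.bind("mall", MALL)

# Taxonomy: a hierarchical "is-a-kind-of" grouping of stores.
g.add((MALL.ShoeStore, RDF.type, SKOS.Concept))
for store in ("Payless", "DSW", "FootLocker"):
    g.add((MALL[store], RDF.type, SKOS.Concept))
    g.add((MALL[store], SKOS.broader, MALL.ShoeStore))

# Ontology: a multi-relational statement that crosses categories.
g.add((MALL.Staples, MALL.visitedForOccasion, MALL.BackToSchool))
g.add((MALL.Gap, MALL.visitedForOccasion, MALL.BackToSchool))

print(g.serialize(format="turtle"))
```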

Developing an effective semantic layer directory demands two key considerations: 

  • Achieving a Consensus on Terminology: Imagine a mall directory where “Footwear” and “Shoes” are used in different sections, or where “Electronics” and “Gadgets” demand their own spaces. This negates the purpose of categorization and causes confusion. A semantic layer requires careful negotiation with stakeholders to agree on common concepts. Investing the time to navigate organizational differences and build consensus on metadata and taxonomy terms before implementation significantly mitigates technical challenges down the line. 
  • Designing an Extensible Model: For a semantic layer to thrive, its underlying data model must be capable of growing over time. As new data providers (“stores”) join your mall and new use cases emerge, the model must seamlessly integrate without ‘breaking’ previous work. Employing ontology design best practices and engaging with seasoned professionals ensures that your semantic layer is an accurate reflection of your organization’s reality and can evolve flexibly with both new information and demands. 

At Enterprise Knowledge, we advocate for initiating this phase with a small group of pilot use cases. These pilots typically focus on building out scoped taxonomies or ontologies tied to high-value, priority use cases and serve as a proving ground for onboarding initial data providers. Starting small allows for agile iteration, refinement, and stakeholder alignment before scaling. 

 

3. Store Tenant Recruitment: Driving Adoption & Buy-In

Once the mall’s structure is complete, the focus shifts to a dual objective: attracting sought-after stores (data providers) to occupy the spaces and convincing customers (business users) to come and shop. A successful mall developer must persuasively demonstrate the benefits to retailers, such as high foot traffic, convenience, and access to a wider audience, to secure their commitment. A clear articulation of value is essential to get retailers on board.

When deploying your semantic layer, robust stakeholder buy-in is key. Strategically position your semantic layer initiative as an effort to significantly enhance your knowledge-connectedness and enable decision-making across the organization. Summarizing this information in a cohesive Semantic Layer Strategy is key to quickly convincing providers and customers. 

An effective Semantic Layer Strategy should focus on: 

  • Establishing a Clear Product Vision: To attract both data providers and consumers, the strategy must have a well-defined product vision. This involves articulating what the semantic layer will become, who it will serve, and what core problems it will solve. This strategic clarity ensures that all stakeholders understand the overarching purpose and direction, fostering alignment and shared purpose.
  • Defining Measurable Outcomes: To truly gain adoption, your strategy should demonstrably link to tangible business outcomes. It is paramount to build compelling reasons for stakeholders to both contribute information and consume insights from the semantic layer. This involves identifying and communicating the specific, high-impact results (e.g., increased efficiency, reduced risk, enhanced insights) that the semantic layer will deliver.

 

4. Grand Opening: Populating Data & Unveiling Use Cases

With the foundation built, the directory mapped, and the tenants recruited, it’s finally time for the grand unveiling of your Information Shopping Mall. This phase involves connecting applications to your semantic layer and populating it with data.

A successful grand opening requires:

  • Robust Data Pipelines: Just like a mall needs efficient distributors to stock its stores, your semantic layer needs APIs and data transformation pipelines. These are critical conduits that connect various source applications (like CRMs, Content Management Systems, and traditional databases) to your semantic layer, ensuring a continuous flow of high-quality data. A sketch of such a conduit follows this list.
  • Secure Entitlement Structures: Paramount to any successful mall is ensuring the security of its goods. For your semantic layer, this translates to establishing secure entitlement structures. This involves defining who has access to what information and ensuring sensitive information remains protected while still enabling necessary access for relevant business users.
  • Coordinated Capability Development: A seamless launch is the result of close coordination between technology teams, product owners, and stakeholders. This collaboration is vital for building the necessary technical capabilities, shaping an intuitive user experience, and managing expectations across the organization as new semantic-powered use cases arise.
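As referenced in the data pipelines point above, here is a minimal sketch of such a conduit, moving records from a CMS REST API into a SPARQL-capable graph store; the endpoints, field names, and vocabulary are hypothetical placeholders.

```python
# Sketch: a minimal extract-transform-load conduit from a CMS API into
# a SPARQL-capable graph store. Endpoints, credentials, and field names
# are hypothetical placeholders.
import requests

CMS_API = "https://cms.example.org/api/articles"      # hypothetical source
SPARQL_UPDATE = "https://graph.example.org/update"    # hypothetical target

def load_articles() -> None:
    articles = requests.get(CMS_API, timeout=30).json()
    for article in articles:
        # Transform: map CMS fields onto the semantic layer's vocabulary.
        # Note: naive string interpolation for brevity; real pipelines
        # should escape literals and validate URIs.
        update = f"""
        PREFIX ex: <https://example.org/model/>
        INSERT DATA {{
            <{article['uri']}> ex:title "{article['title']}" ;
                               ex:topic <{article['topic_uri']}> .
        }}"""
        resp = requests.post(SPARQL_UPDATE, data=update,
                             headers={"Content-Type": "application/sparql-update"},
                             timeout=30)
        resp.raise_for_status()
```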


Conclusion 

Building an Information Shopping Mall – your Semantic Layer – transforms disjointed data into an invaluable, accessible asset. This empowers your business with clarity, efficiency, and insight.

At Enterprise Knowledge, we specialize in guiding organizations through every phase of this complex journey, turning the vision of truly connected knowledge into a tangible reality. For more information, reach out to us at info@enterprise-knowledge.com.

Constructing KM Technology: Tips for Implementing Your KM Technology Solutions
https://enterprise-knowledge.com/tips-for-implementing-km-technology-solutions/
Mon, 15 Aug 2022

In the digital age we now live in, making Knowledge Management (KM) successful at any organization relies heavily on the technologies used to accomplish everyday tasks. Companies are recognizing the importance of providing their workforce with smarter, more efficient, and highly specialized technological tools so that employees can maximize productivity in their everyday work. There is also the expectation for a KM system, like SharePoint, to act as an all-in-one solution. Companies in search of software solutions often make the mistake of thinking a single system can effectively fulfill all of their needs, including content management, document management, AI-powered search, automated workflows, etc., which simply isn’t the case. The reality is that multi-purpose software tools may be able to serve more than one business function, but in doing so they often deliver only basic features that lack necessary specializations, resulting in a sub-par product. More information on the need for a multi-system solution can be found in this blog about the importance of a semantic layer in a knowledge management technology suite.

In our experience at Enterprise Knowledge (EK), we consider the following to be core and essential systems for most integrated KM technology solutions:

  • Content Management Systems
  • Taxonomy Management Systems
  • Enterprise Search Tools
  • Knowledge Graphs

The systems mentioned above are essential tools to enable successful and mature KM, and when integrated with one another can serve to revolutionize the interaction between an organization’s staff and its information. EK has seen the most success with client organizations once they have understood the need for a blended set of technological tools and taken the steps to implement and integrate them with one another.

Once this need for a combined set of specialized solutions is realized, the issue of how to implement these solutions becomes ever-present and must be approached with a specific strategy for design and deployment. This blog will help to outline some of the key tips and guidelines for the implementation of a KM technology solution, regardless of its current state.

Figure: CMS, TMS, and Search Engine

Prioritizing Your Technology Needs

When thinking about the approach to implementing an organization’s identified technology solutions, there is often an inclination to prioritize solutions that are considered “state-of-the-art” or “cooler” than others. This is understandable, especially with the new-age technology that is on the market and able to create a “wow” factor for a business’ employees and customers. However, it is important to remember that the order in which systems are implemented relies heavily on the current makeup of the organization’s technology stack. For example, although it might be tempting to take on the implementation of an AI-powered knowledge graph or a chat-bot that has Natural Language Processing (NLP) capabilities, the quality of your results and real-world usability of the product will increase dramatically if you also include other technologies such as a graph database to provide the foundation for a knowledge graph, or a Taxonomy Management System to allow for the design and curation of an enterprise taxonomy and/or ontology.

Depending on your organization’s level of maturity with respect to its technology ecosystem, the order in which systems are implemented must be strategically defined so that one system can build off of and enhance the previous. Typically, if an organization does not possess a solidified instance of any of the core KM technologies, the logical first step is to implement a Content Management System (CMS) or Document Management System (DMS), or in some cases, both. Following the “content first” approach commonly used in web design and digitalization, organizations must first have a place in which they can effectively store, manage, and access their content, as an organization’s content is arguably one of its most valuable assets. Furthermore, one could argue that all core KM technologies are centered around an organization’s content and exist to improve and enhance that content, whether by adding to its structure, creating ways to more efficiently store and describe it, or more effectively searching and retrieving it at the time of need.

Once an organization has a solidified CMS solution in place, the next step is to implement tools geared towards the enhancement and findability of that content. One system in particular that helps to drastically improve the quality of an organization’s content by managing and deploying enterprise-wide taxonomies and ontologies is a Taxonomy Management System (TMS). TMS solutions are integrated with an organization’s CMS and search tools and serve as a place to create, deploy, and manage poly-hierarchical taxonomies in a single location. TMS tools allow organizations to add structure to their content, describe it in a way that significantly improves organization, and fuel search by providing a set of predefined values from a controlled vocabulary that can be used to create facets and other search-narrowing instruments. A common approach to implementing your technology ecosystem involves the simultaneous implementation of an enterprise search solution alongside the TMS implementation. Once again, the idea of one solution building off another is present here, as enterprise search tools feed off of the previously implemented CMS instance by utilizing Access Control List (ACL) specifications, security trimming considerations, content structure details, and more. Once these three systems are in place, organizations can afford to look into additional tools such as Knowledge Graphs, AI-powered chatbots, and Metadata Catalogs.
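As a simplified illustration of how TMS-managed vocabulary values fuel search facets, the sketch below runs an Elasticsearch terms aggregation over a keyword field populated with taxonomy tags; the index and field names are hypothetical.

```python
# Sketch: taxonomy-backed facets via an Elasticsearch terms aggregation.
# The "content" index and the "topic_tag" keyword field are hypothetical
# and would be populated with TMS-managed vocabulary values at index time.
import requests

query = {
    "size": 0,
    "aggs": {"topics": {"terms": {"field": "topic_tag", "size": 20}}},
}
resp = requests.post("http://localhost:9200/content/_search",
                     json=query, timeout=30)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["topics"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])  # facet value and its count
```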

Defining Business Logic and Common Uses

There is a great deal of preparation involved with the implementation of KM technologies, especially when considering the envisioned use of the system by organizational staff. As part of this preparation, a thorough analysis of existing business processes and standard operating procedures must be executed to account for the specific needs of users and how those needs will influence the design of the target system. Although it is not always initially obvious, the way in which a system is going to be used will heavily impact how that system is designed and implemented. As such, the individuals responsible for implementation must have a well-documented, thorough understanding of what end users will need from the tool, combined with a comprehensive list of core use cases. These types of details are most commonly elicited through a set of analysis activities with the system’s expected users.

Without these types of preliminary activities, the implementation process will seldom go as planned. This is because various detours will have to be taken to accommodate the business process details that are unique to the organization and therefore not ‘pre-baked’ into software solutions. These considerations sometimes come in the form of taxonomy/controlled list requirements, customizable workflows, content type specifications, and security concerns, to name a few.

If the proper arrangements aren’t made before implementing software and integrating with additional systems, it will almost always affect the scope of your implementation effort. Software implementation is not a “one size fits all” type of effort; there are certain design elements that are based on the business and functional requirements of the target solution, and these must be identified in the initial stages of the project. EK has seen how the lack of these preparatory activities can have impacts on project timelines, most commonly because of delays due to unforeseen circumstances. This results in extended deadlines, change requests, additional investment, and other general inefficiencies.

Recruiting the Proper Resources

In addition to the activities needed before implementation, it is absolutely essential to ensure that the appropriate resources are assigned to the project. This too can create issues down the road if not given the appropriate amount of time and attention before beginning the project. Generally speaking, there are a few standard roles that are necessary for any implementation project, regardless of the type or complexity of the effort. These roles are listed and described below:

  • KM Designer/Consultant: Regardless of the type of system to be implemented, having a KM consultant on board is needed for various reasons. A KM consultant will be able to assist with the non-developmental areas of the project, for example designing taxonomies/ontologies, content types, search experiences, and/or governance structures.
  • Senior Solutions Architect: Depending on the level of integration required, a Senior Solutions Architect is likely required. This is ideally a person with considerable experience working with multiple types of technologies that are core to KM. This person should have a thorough and comprehensive understanding of how to arrange systems into a technology suite and how each component works, both alone and as part of a larger, combined solution. Familiarity with REST, SOAP, and RPC APIs, along with other general knowledge about the communication between software is a must.
  • Technology Subject Matter Expert (SME): This role is absolutely critical to the success of the implementation, as there will be a need for someone who specializes in the type of software being implemented. For example, if an organization is working to implement a TMS and integrate it with other systems, the project will need to staff a TMS integration SME to ensure the system is installed according to implementation best practices. This person will also be responsible for a large portion of the installation of the software, meaning they will be heavily involved with the initial setup and configuration based on the organization’s specific use of the system.
  • KM Project Manager: As is common with all projects, there will be a need for a project manager to coordinate meetings, ensure the project is on schedule, and facilitate the ongoing alignment of all engaged parties. This person should be familiar with KM so that they can align efforts with best practices and help facilitate KM-related decisions.
  • API Developer(s): Depending on the level of integration required, a developer may be needed to develop code to serve as a connector between systems. This individual must be familiar with the communication logic needed between systems and have a thorough understanding of APIs as well. The programming language in which any custom coding is needed will vary from organization to organization, but it is required that the developer has experience with the identified language.

The list above is by no means exhaustive, nor does it contain resources that are commonly assumed to be a part of any implementation effort. These roles are simply the unique ones that help with successful implementations. Also, depending on the level of effort required, there may be a need for multiple resources at each role, such as the developer or SME role. This type of consideration is important, as the project will need to have ample resources according to the project’s defined timeline.

Defining a Realistic Timeline

One final factor to consider when preparing for a technology solution implementation effort is the time in which the project is expected to be completed. Implementation efforts are notoriously difficult to estimate in terms of time and resources needed, which often results in the over- or under-allocation of funding for a given effort. As a result, it’s recommended to err on the side of caution and incorporate more time than is initially estimated for the project to reach completion. If similar efforts have been completed in the past, utilize informal benchmarking. If available resources have experience implementing similar solutions, bring them to the forefront. The best way to estimate the level of effort and time needed to complete certain tasks is to look at historical data, which in this case would be previous implementation efforts.

In EK’s experience implementing large scale and highly complex software and custom solutions, we have learned that it is important to prepare for the unexpected to ensure the expected timeline is not derailed by unanticipated delays. For example, one common consideration we have encountered many times and one that has created significant delays is the need to get individuals appropriate access to certain systems or organizational resources. This is especially relevant with third-party consultants and when the system(s) in question have high security requirements. Additionally, there are several KM-related considerations that can unexpectedly lengthen a project’s timeline, such as the quality/readiness of content, governance standards and procedures that may be lacking, and/or change management preparations.

Conclusion

There are many factors that go into an implementation effort and, unfortunately, a lot of ways one can go wrong. Very seldom are projects like these executed to perfection, and when they fail or go awry, it is usually due to one or a combination of the factors mentioned above. The good news, and the common theme across these considerations, is that these pitfalls can largely be avoided with proper planning, preparation, and estimates (with regards to both time and resources). The initial stages of an implementation effort are the most critical, as these are the times when project planners need to be honest and realistic with their projections. There is often a tendency to begin development as soon as possible and to skip most of the preparatory activities due to an eagerness to get started. It is important to remember that successful implementation efforts require the necessary legwork, even if it may seem superfluous at the time. Does your company need assistance implementing a piece of technology and isn’t sure how to get started? EK provides end-to-end services, beginning with strategy and design and ending with the implementation of fully functional KM systems. Reach out to us with any questions or general inquiries.

Knowledge Cast Product Spotlight – Andreas Blumauer of Semantic Web Company
https://enterprise-knowledge.com/knowledge-cast-product-spotlight-andreas-blumauer-of-semantic-web-company/
Wed, 15 Dec 2021

In this episode of Product Spotlight, EK COO Joe Hilger speaks with Andreas Blumauer of Semantic Web Company.

Andreas has been CEO and managing partner of Semantic Web Company (SWC) for more than 15 years. At SWC, he is responsible for corporate strategy and strategic business development. Andreas has been a pioneer in the field of Semantic AI since 2001.

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

Knowledge Cast Product Spotlight – David Clarke of Synaptica
https://enterprise-knowledge.com/knowledge-cast-product-spotlight-david-clarke-of-synaptica/
Mon, 13 Dec 2021

In this episode of Product Spotlight, EK COO Joe Hilger speaks with David Clarke of Synaptica. David is the CEO and Co-founder of Synaptica, an enterprise software application for building controlled vocabularies, including taxonomies, thesauri, ontologies, and name authority files, and integrating them with corporate content management systems.

Product Spotlight is a series in which we talk about KM technologies and how different products on the market meet KM challenges and provide new and improved ways to meet tomorrow’s challenges.

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

What is the Roadmap to Enterprise AI?
https://enterprise-knowledge.com/enterprise-ai-in-5-steps/
Wed, 18 Dec 2019

Artificial Intelligence technologies allow organizations to streamline processes, optimize logistics, drive engagement, and enhance predictability as the organizations themselves become more agile, experimental, and adaptable. To demystify the process of incorporating AI capabilities into your own enterprise, we broke it down into five key steps in the infographic below.

An infographic about implementing AI (artificial intelligence) capabilities into your enterprise.

If you are exploring ways your own enterprise can benefit from implementing AI capabilities, we can help! EK has deep experience in designing and implementing solutions that optimize the way you use your knowledge, data, and information, and that produce actionable and personalized recommendations for you. Please feel free to contact us for more information.
