Semantic Layer Maturity Framework Series: Taxonomy

Taxonomy is foundational to the Semantic Layer. A taxonomy establishes the essential semantic building blocks upon which everything else is built, starting by standardizing naming conventions and ensuring consistent terminology. From there, taxonomy concepts are enriched with additional context, such as definitions and alternative terms, and arranged into hierarchical relationships, laying the foundation for the eventual establishment of other, more complex ontological relationships. Taxonomies provide additional value when used to categorize and label structured content, and enable metadata enrichment for any use case. 

Just as a semantic layer passes through degrees of maturity and complexity as it is developed and operationalized, so too does a taxonomy. While a taxonomy comprises only one facet of a fully realized Semantic Layer, every incremental increase in its granularity and scope can have a compounding effect, unlocking additional solutions for the organization. While it can be tempting to assume that only a fully mature taxonomy is capable of delivering measurable value for the organization that developed it, each iteration of a taxonomy provides value that should be acknowledged, quantified, and celebrated to advocate for continued support of the taxonomy’s ongoing development.

 

Taxonomy Maturity Stages

A taxonomy’s maturity can be measured across five levels: Basic, Foundational, Operational, Institutional, and Transformational. Taken as a snapshot from our full semantic layer maturity framework, the sections below describe each of these levels in terms of their taxonomy components, technical manifestation, and the valuable outcomes that can be expected from each at a high level.

 

Basic Taxonomy

A basic taxonomy lacks depth, and is essentially a folksonomy (an informal, non-hierarchical classification system where users apply public tags). At this stage, a basic taxonomy is only inconsistently applied across departments. 

As an example, a single business unit (e.g., Marketing) may have begun developing a basic taxonomy that another business unit (e.g., Sales) is starting to integrate with its product taxonomy.

Components and Technical Manifestation at this Level

  • Basic taxonomies are only developed for limited, specific use cases, often for a particular team or subset of an organization.
  • At this stage of maturity, a taxonomy expresses little granularity, and may have up to three levels of broader/narrower relationships. 
  • A basic taxonomy is likely maintained in a spreadsheet, rather than a taxonomy management system (TMS). The taxonomy may be implemented in a rudimentary form, like being expressed in file structures. Taxonomy concepts are not yet tagged to assets. 
  • At this stage, the taxonomy functions primarily as a proof of concept. The taxonomy has not yet been widely validated or socialized, and is likely only known by the team building it. It may represent an intentionally narrow scope that can then be scaled as the team builds buy-in with stakeholders. 

Outcomes and Value 

  • The basic taxonomy provides an essential foundation to build upon. If it is well-designed, the work invested in this stage can serve as a model for other functional areas of the organization to adopt for their own use cases. 
  • At this stage, the value is typically limited to providing a proof of concept that demonstrates what a taxonomy is, and to working towards establishing consistent terminology within a department.

   

Foundational Taxonomy

The foundational taxonomy is not yet wholly standardized, but growing momentum helps to drive adoption and standardization across systems and business units. The taxonomy can support simple data enrichment by adding semantic context (like relevant location data, contact information, definitions, or subcategories) to an existing data set. Often, a dedicated taxonomy management system (TMS) is procured at this stage, and scaling to the next level of maturity may not be feasible without one.

Components and Technical Manifestation at this Level

  • The taxonomy is imbued with semantic context such as definitions, scope notes, and alternative labels, along with the expected hierarchical relationships between concepts. A foundational taxonomy exhibits a greater level of granularity beyond the basic level. 
  • The taxonomy is no longer housed only in a spreadsheet; it is maintained in a taxonomy management system (TMS), which makes it easier to ensure that the taxonomy’s format adheres to semantic web frameworks (such as SKOS, the Simple Knowledge Organization System). A minimal example of a SKOS-encoded concept appears after this list.
  • The addition of this context serves the fundamental purpose of supporting and standardizing semantic understanding within an organization by clarifying and enforcing preferred terms while still capturing alternative terms.  
  • Some degree of implementation has been realized – for instance, the tagging of a representative set of content or data assets.
  • The taxonomy team actively engages in efforts to socialize and promote the taxonomy project to build awareness and support among stakeholders. 
  • A taxonomy governance team has been established for ongoing validation, maintenance, and change management. 
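To make the semantic context described above concrete, the sketch below uses Python’s rdflib library to model a single taxonomy concept in SKOS, with a preferred label, an alternative label, a definition, and a broader (hierarchical) relationship. The namespace and concept names are hypothetical illustrations, not drawn from any particular taxonomy.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.com/taxonomy/")  # hypothetical namespace

    g = Graph()
    g.bind("skos", SKOS)

    # A parent concept and a narrower child concept
    vehicles = EX["vehicles"]
    ev = EX["electric-vehicles"]

    g.add((vehicles, RDF.type, SKOS.Concept))
    g.add((vehicles, SKOS.prefLabel, Literal("Vehicles", lang="en")))

    g.add((ev, RDF.type, SKOS.Concept))
    g.add((ev, SKOS.prefLabel, Literal("Electric Vehicles", lang="en")))
    g.add((ev, SKOS.altLabel, Literal("EVs", lang="en")))  # captured synonym
    g.add((ev, SKOS.definition, Literal("Vehicles propelled by one or more electric motors.", lang="en")))
    g.add((ev, SKOS.broader, vehicles))  # hierarchical (broader/narrower) relationship

    print(g.serialize(format="turtle"))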

Outcomes and Value

  • At this stage, the taxonomy can provide more measurable benefits to the organization. For instance, a foundational taxonomy can support content audits for all content that has been auto-tagged. 
  • The taxonomy can support more advanced data analytics – for instance, users can get more granular insights into which topics are the most represented in content. 
  • The foundational taxonomy can be scaled to incorporate backlog use cases or other departments in the organization, and can be considered a product to be replicated and more broadly socialized.
  • The taxonomy can be enhanced by adding linked models and/or concept mapping.

 

Operational Taxonomy

The operational taxonomy is standardized, used regularly and consistently across teams, and is integrated with other components or applications. 

At this stage, the taxonomy is integrated with key systems like a content management system (CMS), learning management system (LMS), or similar. Because these systems consume the taxonomy, users are able to interact with it directly through the applications they work in.

Components and Technical Manifestation at this Level

  • At this level of maturity, advanced integrations have been realized – for instance, the taxonomy is integrated into search for the organization’s intranet, or the taxonomy’s semantic context has been leveraged as training data for generative AI-powered chatbots.
  • At the operational level, the taxonomy acts as a source of truth for multiple use cases, and has been expanded to cover multiple key areas of the organization, such as Customer Operations, Product, and Content Operations. 
  • By this stage, content tagging has been seamlessly integrated into the content creation process: content creators apply relevant tags prior to publishing, or automatic tagging applies tags to existing and newly-published content.
  • A TMS has been acquired and is integrated with key systems, such as the organization’s LMS, intranet, or CMS.
  • The taxonomy is subject to ongoing governance by a taxonomy governance team, and key stakeholders in the organization are informed of key updates or changes to the taxonomy.

Outcomes and Value 

  • The taxonomy is integrated with essential data sources to provide or consume data directly. As a result, users interacting with the systems that are connected to the taxonomy are able to experience the additional structure and clarity provided by the taxonomy via features like search filters, navigational structures, and content tags. 
  • The taxonomy can support enhanced data analytics, such as tracking the click-through rate (CTR) of content tagged with particular topics. 

 

Institutional Taxonomy

The institutional taxonomy is fully integrated into daily operations. Rigorous governance and change management capabilities are in place. 

By now, seamless integrations between the taxonomy and other systems have been established. Ongoing taxonomy maintenance work poses no disruption to day-to-day operations, and updates to the taxonomy are automatically pushed to all impacted systems.

Components and Technical Manifestation at this Level

  • The taxonomy, or taxonomies, are fully integrated into daily operations across teams and functional areas – for instance, the taxonomy supports dynamic content delivery for customer support workers, the customer-facing product taxonomy facilitates faceted search for online shopping, and so on. 
  • The organization’s use cases are supported by the taxonomy, which advances core goals such as ensuring a shared understanding of key concepts and their meaning, and providing a consistent framework for representing the organization’s fundamental components and data across systems.
  • Governance roles, policies, and procedures are fully established and follow a regular cadence. 

Outcomes and Value

  • At this stage of maturity, the taxonomy has been scaled to the extent that it can be considered an enterprise taxonomy; it covers all foundational areas, is utilized by all business units, and is poised to support key organizational operations. At this stage, the taxonomy drives a key enterprise-level use case. 
  • Data connectivity is supported across the organization; the taxonomy unifies language across teams and systems, reducing errors and data discrepancies. 
  • Internal as well as external users benefit from taxonomy-enhanced search in the form of query expansion. 
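As a simple illustration of taxonomy-driven query expansion, the sketch below rewrites a user’s query to include alternative labels before it is sent to a search engine. It assumes the taxonomy’s synonyms have been exported from a TMS into a plain Python dictionary; the terms shown are hypothetical.

    # Hypothetical synonym map built from the taxonomy's altLabels
    ALT_LABELS = {
        "pto": ["paid time off", "leave"],
        "electric vehicles": ["evs", "battery electric vehicles"],
    }

    def expand_query(query: str) -> str:
        """Append taxonomy synonyms so the search engine matches every variant."""
        terms = [query]
        lowered = query.lower()
        for preferred, alternatives in ALT_LABELS.items():
            if preferred in lowered:
                terms.extend(alternatives)
            elif any(alt in lowered for alt in alternatives):
                terms.append(preferred)
        # OR the variants together for a typical keyword search engine
        return " OR ".join(f'"{t}"' for t in dict.fromkeys(terms))

    print(expand_query("PTO policy"))
    # "PTO policy" OR "paid time off" OR "leave"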

 

Transformational Taxonomy

The transformational taxonomy drives data classification and advanced analytics, informing and enhancing AI-driven processes. At this stage, the taxonomy provides significant functionality supporting an integrated semantic layer. 

Components and Technical Manifestation at this Level

  • The taxonomy can support the delivery of personalized, dynamic content for internal or external users for more impactful customer support or marketing outreach campaigns.
  • The taxonomy is inextricably tied to other key components of the semantic layer’s operating model. The taxonomy provides data for the knowledge graph, provides a hierarchy for the ontology, categorizes the data in the data catalog, and enriches the business glossary with additional semantic context. These connections help power semantic search, analytics, recommendation systems, discoverability, and other semantic applications. 
  • Taxonomy governance roles are embedded in functional groups. Feedback on the taxonomy is shared regularly, introductory taxonomy training is widely available, and there is common understanding of how to both use the taxonomy and provide feedback. 
  • Taxonomies are well-supported by defined metrics and reporting and, in turn, provide a source of truth to power consistent reporting and data analytics.  

Outcomes and Value 

  • At this stage, the taxonomy (within the broader semantic layer) drives multiple enterprise-level use cases. For instance, this could include self-service performance monitoring to support strategic planning, or facilitating efficient data analytics across previously-siloed datasets. 
  • Taxonomy labeling of structured and/or unstructured data powers Machine Learning (ML) and Artificial Intelligence (AI) development and applications. 

 

Taxonomy Use Cases 

Low Maturity Example

In many instances, EK partners with clients to help develop taxonomies in their earliest stages. Recently, a data and AI platform company engaged EK to lead a taxonomy workshop covering best practices in taxonomy design, validation activities, taxonomy governance, and developing an implementation roadmap. Prior to EK’s engagement, the company was in the process of developing a centralized marketing taxonomy. As the taxonomy was maintained in a shared spreadsheet, lacked a defined governance process, and lacked consistent design guidelines, it met the basic level of maturity. However, after the workshop, the client’s taxonomy design team left with a refreshed understanding of taxonomy design best practices, clarified user personas, an appreciation of the value of semantic web standards, a clear taxonomy development roadmap, and a scaled-down focus on prioritized pilots to build a starter taxonomy. 

By clarifying and narrowing their use cases, identifying their key stakeholders and their roles in taxonomy governance, and reworking the taxonomy to reflect design principles grounded in semantic standards, the taxonomy team was equipped to elevate their taxonomy from a basic level of maturity to work towards becoming foundational. 

 

High Maturity Example 

EK’s collaboration with a major international retailer illustrates an example of the evolution towards a highly-mature semantic layer supported by a robust taxonomy. EK partnered with the retailer’s Learning Team to develop a Learning Content Database to enable an enterprise view of their learning content. Initially, the organization’s learning team lacked a standardized taxonomy. This made it difficult to identify obsolete content, update outdated content, or address training gaps. Without consistent terminology or content categorization, it was especially challenging to search effectively and identify existing learning content that could be improved, forcing the learning team to waste time creating new content. As a result, store associates struggled to search for the right instructional resources, hindering their ability to learn about new roles, understand procedures, and adhere to compliance requirements. 

To address these issues, EK first partnered with the learning team to develop a standardized taxonomy. The taxonomy crystallized brand-approved language which was then tagged to learning content. Next, EK developed a tailored governance plan to ensure the ongoing maintenance of the taxonomy, and provided guidance around taxonomy implementation to ensure optimal outcomes around reducing time spent searching for content and simplifying the process of tagging content with metadata. With the taxonomy at a sufficient stage of maturity, EK was then able to build the Learning Content Database, which enabled users to locate learning content across previously disparate, disconnected systems, now in a central location. 

 

Conclusion

Every taxonomy – from the basic starter taxonomy to the highly-developed taxonomy with robust semantic context connected to an ontology – can provide value to its organization. As a taxonomy grows in maturity, each next level of development unlocks increasingly complex solutions. From driving alignment around key terms for products and resources to supporting content audits, enabling complex data analytics across systems, and powering semantic search, the progressive advancement of a taxonomy’s complexity and semantic richness translates to tangible business value. These advancements can also act as a flywheel, where each improvement makes it easier to continue to drive buy-in, secure necessary resources, and achieve greater enhancements.

If you are looking to learn more about how other organizations have benefitted from advanced taxonomy implementations, read more from our case studies. If you want additional guidance on how to take your organization’s taxonomy to the next level, contact us to learn more about our taxonomy design services and workshops.

What is A Data Fabric Architecture and What Are The Design Considerations?

Figure: Components of the data fabric architecture

Introduction

In today’s data-driven world, effective data management is crucial for businesses to remain competitive. A modern approach to data management is the use of data fabric architecture. A data fabric is a data management solution that connects and manages data in a federated way, employing a logical data architecture that captures connections relevant to the business. Data fabrics help businesses make sense of their data by organizing it in a domain-centric way without physically moving data from source systems. What makes this possible is a shift in focus to metadata as opposed to the data itself. At a high level, a semantic data fabric leverages a knowledge graph as an abstraction architecture layer to provide connectivity between diverse metadata. The knowledge graph enriches metadata by aggregating, connecting, and storing relationships between unstructured and structured data in a standardized, domain-centric format. Using a graph-based data structure helps businesses embed their data with context, drive information discovery and inference, and lay a foundation for scale.

Unlike a monolithic solution, a data fabric facilitates the alignment of different toolsets to enable domain-centric, integrated data as a service to multiple downstream applications. A data fabric architecture consists of five main components:

  1. A data/metadata model
  2. Entity extraction
  3. Relationship extraction
  4. Data pipeline orchestration
  5. Persistent graph data storage. 

While there are a number of approaches to designing all of these components, there are best practices to ensure the quality and scalability of a data fabric. This blog post will enumerate the approaches for each architectural component, discuss how to achieve a data fabric implementation from a technical approach and tooling perspective that suits a wide variety of business needs, and ultimately detail how data fabrics support the development of artificial intelligence (AI).

Data/Metadata Model

Data models – specifically, ontologies and taxonomies – play a vital role in building a data fabric architecture. An ontology is a central aspect of a data fabric, defining the concepts, attributes, and relationships in a domain, encoded in a machine- and human-readable graph format. Similarly, a taxonomy is essential for metadata management in a data fabric, storing extracted entities and defining controlled vocabularies for core business domains like products, business lines, services, and skills. By creating relationships between domains and data, businesses can help users find insights and discover content more easily. To manage taxonomies and ontologies effectively, business owners need a Taxonomy/Ontology Management System (TOMS) that provides a user-friendly platform and interface. A good TOMS should:

  • Help users build data models that follow common standards like RDF (Resource Description Framework), OWL (Web Ontology Language), and SKOS (Simple Knowledge Organization System); 
  • Let users configure the main components of a data model such as classes, relationships, attributes, and labels that define the concepts, connections, and properties in the domain; 
  • Add metadata about the data model itself through annotations, such as its name, description, version, creator, etc.;
  • Support automated governance, including quality checks for errors;
  • Allow for easy portability of the data model in different ways, serving multiple enterprise use cases; and
  • Allow users to link to external data models that already exist and can be reused.

Organizations that do not place their data modeling and management at the forefront of their data fabric introduce the risk of scalability issues, limited user-friendly schema views, and hampered utilization of linked open data. Furthermore, the absence of formal metadata management poses a risk of inadequate alignment with business needs and hinders flexible information discovery within the data fabric. There are different ways of creating and using data models with a TOMS to avoid these risks. One way is to use code or scripts to generate and validate the data model based on the rules and requirements of the domain. Input from subject matter experts helps to further validate the data model and confirm that it aligns with business needs.
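As an illustration of the script-based validation mentioned above, the following sketch uses Python’s rdflib to flag two common quality errors in a SKOS taxonomy: concepts missing a preferred label, and concepts with more than one preferred label in the same language (which SKOS prohibits). The file name is a hypothetical export from a TOMS.

    from collections import Counter
    from rdflib import Graph
    from rdflib.namespace import RDF, SKOS

    g = Graph()
    g.parse("taxonomy.ttl", format="turtle")  # hypothetical SKOS export

    for concept in g.subjects(RDF.type, SKOS.Concept):
        labels = list(g.objects(concept, SKOS.prefLabel))
        if not labels:
            print(f"ERROR: {concept} has no skos:prefLabel")
        # More than one prefLabel per language violates SKOS
        per_language = Counter(getattr(label, "language", None) for label in labels)
        for lang, count in per_language.items():
            if count > 1:
                print(f"ERROR: {concept} has {count} prefLabels for language '{lang}'")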

Entity Extraction 

One of the functions of building your data fabric is to perform entity extraction. This is the process of identifying and categorizing named entities in both structured and unstructured data, such as person names, locations, organizations, dates, etc. Entity extraction enriches the data with additional information and enables semantic analysis. Identifying Named Entity Recognition (NER) tools and performing text preprocessing (e.g., tokenization, stop word elimination, coreference resolution) is recommended before determining an entity extraction approach, of which there are several: rule-based, machine learning-based, or a hybrid of both.

  • Rule-based approaches rely on predefined rules that use syntactic and lexical cues to extract entities. They require domain expertise to develop and maintain, and may not adapt well to new or evolving data. 
  • Machine learning-based approaches use deep learning models that can learn complex patterns in the data and extrapolate to unseen cases. However, they may require large amounts of labeled data and computational resources to train and deploy. 
  • Hybrid approaches (Best Practice) combine rule-based and machine learning-based methods to leverage the strengths of both. Hybrid approaches are recommended for businesses that foresee expanding their data fabric solutions. 
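A minimal hybrid sketch follows, assuming spaCy and its small English model (en_core_web_sm) are installed: a pretrained statistical model handles general entity types, while a rule-based EntityRuler layers in domain-specific terms. The product and business line names are hypothetical.

    import spacy

    nlp = spacy.load("en_core_web_sm")  # machine learning-based NER

    # Rule-based layer: domain terms the statistical model would miss
    ruler = nlp.add_pipe("entity_ruler", before="ner")
    ruler.add_patterns([
        {"label": "PRODUCT", "pattern": "Acme Data Fabric"},          # hypothetical
        {"label": "BUSINESS_LINE", "pattern": "Commercial Lending"},  # hypothetical
    ])

    doc = nlp("Jane Doe from Commercial Lending demoed Acme Data Fabric in Boston.")
    for ent in doc.ents:
        print(ent.text, ent.label_)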

Relationship Extraction 

Relationship extraction is the process of identifying and categorizing semantic relationships that occur between two entities in text, such as who works for whom, what business line sells which product, what is located in what place, and so on. Relationship extraction helps construct a knowledge graph that represents the connections and interactions among entities, which enables semantic analysis and reasoning. However, relationship extraction can be challenging due to the diversity and complexity of natural language. There are again multiple approaches, including rule-based, machine learning-based, or hybrid.

  • Rule-based approaches rely on predefined rules that use word-sequence patterns and dependency paths in sentences to extract relationships. They require domain expertise to develop and maintain, and they may not capture all the possible variations and nuances of natural language. 
  • One machine learning approach is to use an n-ary classifier that assigns a probability score to each possible relationship between two entities and selects the highest one. This supports capturing the variations and nuances of natural language and handling complex and ambiguous cases. However, machine learning approaches may require large amounts of labeled data and computational resources to train and deploy. 
  • Hybrid approaches (Best Practice) employ a combination of ontology-driven relationship extraction and machine learning approaches. Ontology-driven relationship extraction uses a predefined set of relationships that are relevant to the domain and the task. This helps avoid generating a sparse relationship matrix that results in a non-traversable knowledge graph.
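The sketch below illustrates the ontology-driven half of a hybrid approach: a predefined set of allowed relationships between entity types filters candidate entity pairs, so the resulting knowledge graph stays domain-relevant and traversable. The entity pairs would come from the extraction steps above; the ontology here is a hypothetical simplification.

    # Hypothetical ontology: which relationships are valid between which entity types
    ALLOWED_RELATIONSHIPS = {
        ("PERSON", "ORG"): "works_for",
        ("ORG", "PRODUCT"): "sells",
        ("ORG", "GPE"): "located_in",
    }

    def extract_relationships(entity_pairs):
        """Keep only candidate pairs matching a relationship defined in the ontology."""
        triples = []
        for (source, source_type), (target, target_type) in entity_pairs:
            relation = ALLOWED_RELATIONSHIPS.get((source_type, target_type))
            if relation:  # discard pairs the ontology deems not meaningful
                triples.append((source, relation, target))
        return triples

    pairs = [(("Jane Doe", "PERSON"), ("Acme Corp", "ORG")),
             (("Acme Corp", "ORG"), ("Boston", "GPE"))]
    print(extract_relationships(pairs))
    # [('Jane Doe', 'works_for', 'Acme Corp'), ('Acme Corp', 'located_in', 'Boston')]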

Data Pipeline Orchestration

Data pipeline orchestration is the driving force of data fabric creation that brings all the components together. This is the process of integrating data sources with two or more applications or services to populate the knowledge graph initially and update it regularly thereafter. It involves coordinating and scheduling various tasks, such as data extraction, transformation, loading, validation, and analysis, and helps ensure data quality, consistency, and availability across the knowledge graph. Data pipeline orchestration can be performed using different approaches, such as a manual implementation, an open source orchestration engine, or a vendor-specific orchestration engine / cloud service provider.

  • A manual approach involves executing each step of the workflow manually, which is time-consuming, error-prone, and costly. 
  • An open source orchestration engine approach involves managing ETL pipelines as directed acyclic graphs (DAGs) that define the dependencies and order of execution of each task. This helps automate and monitor the workflow and handle failures and retries. Open source orchestration engines may require installation and configuration, and businesses need to take into account the required features and integrations before opting to use one.
  • Third-party vendors or cloud service providers can leverage the existing infrastructure and services and provide scalability and reliability. However, vendor specific orchestration engines / cloud service providers may have limitations in terms of customization and portability.
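As a sketch of the open source engine approach, the DAG below expresses the extract → transform → load dependency order for refreshing the knowledge graph on a schedule. It assumes Apache Airflow 2.x (the scheduling argument name has varied slightly across versions), and the task bodies are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(): ...    # pull data/metadata from source systems (placeholder)
    def transform(): ...  # run entity and relationship extraction (placeholder)
    def load(): ...       # write triples to the graph database (placeholder)

    with DAG(
        dag_id="knowledge_graph_refresh",
        start_date=datetime(2023, 1, 1),
        schedule="@daily",  # re-populate the graph regularly
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Dependencies form a directed acyclic graph
        extract_task >> transform_task >> load_task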

Persistent Graph Data Storage 

One of the central ideas behind a data fabric is the ability to store metadata and core relationships centrally while connecting to source data in a federated way. This manage-in-place approach enables data discovery and integration without moving or copying data. Persistent graph data storage is the glue that brings all the components together, storing extracted entities and relationships according to the ontology, and persisting the connected data for use in any downstream applications. A graph database helps preserve the semantic relationships among the data and enables efficient querying and analysis. However, not all graph databases are created equal. When selecting a graph database, there are four key characteristics to consider: graph databases should be standards-based, ACID compliant, widely used with strong community support, and explorable via a UI.

  • Standards-based involves making sure the graph database follows a common standard, such as RDF (Resource Description Framework), to ensure interoperability so that it is easier to transition from one tool to another. 
  • ACID compliant means the graph database ensures Atomicity, Consistency, Isolation, and Durability of the data transactions, which protects the data from infrastructure failures. 
  • Strong user and community support ensures that developers will have access to good documentation and feedback.
  • Explorable via a UI supports verification by experts to ensure data quality and alignment with domain and use case needs. 

Common approaches for graph databases include RDF-based graph databases, labeled property graphs, and custom implementations.

  • RDF-based graph databases use RDF, the standard model for representing and querying data.
  • Labeled property graph databases use nodes, edges, and properties as the basic elements for storing and querying data. 
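To make the storage and querying concrete, the sketch below uses rdflib’s in-memory RDF graph as a stand-in for a persistent, standards-based triple store and runs a SPARQL query that traverses the stored relationships; a production data fabric would point the same query at a dedicated graph database. The namespace and data are hypothetical.

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.com/")  # hypothetical namespace
    g = Graph()

    # Triples as produced by entity and relationship extraction
    g.add((EX.jane_doe, EX.works_for, EX.acme_corp))
    g.add((EX.acme_corp, EX.located_in, EX.boston))

    # SPARQL traversal: who works for an organization located in Boston?
    results = g.query("""
        PREFIX ex: <http://example.com/>
        SELECT ?person WHERE {
            ?person ex:works_for ?org .
            ?org ex:located_in ex:boston .
        }
    """)
    for row in results:
        print(row.person)  # http://example.com/jane_doe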

Data Fabric Architecture for AI

A mature data fabric architecture built upon the preceding standards plays a pivotal role in supporting the development of artificial intelligence (AI) for businesses by providing a solid foundation for harnessing the power of data. The data fabric’s support for data exploration, data preparation, and seamless integration empowers businesses to harness the transformative and generative power of AI.

By leveraging an existing data fabric architecture, businesses can seamlessly integrate structured and unstructured data, capturing the relationships between them within a standardized, domain-centric format. With the help of the knowledge graph at the core of the data fabric, businesses can empower AI algorithms to discover patterns, make informed decisions, and generate valuable insights by traversing and navigating the graph. This capability allows AI models to uncover valuable insights that are not immediately apparent in isolated data silos. 

Furthermore, data fabrics facilitate the process of data preparation and feature engineering, which are crucial steps in AI development. The logical architecture of the data fabric allows for efficient data transformation, aggregation, and enrichment. By streamlining the data preparation pipeline, AI practitioners can focus more on modeling and algorithm development, accelerating the overall AI development process. AI models often need continuous refinement and adaptation, and data fabrics enable seamless integration of new data sources and updates to ensure that AI models have the most up-to-date information.

Conclusion

A data fabric is a modern approach to data management that is crucial for businesses to remain competitive in a data-driven world. However, a data fabric is not a monolithic solution and the supporting architecture and technical approach can vary based on the state of sources, supporting use cases, and existing tooling at an organization. It’s important to prove out the value of the solutions before investing in costly tool procurement. We recommend starting small and iterating, beginning with a targeted domain in mind and sample source systems to lay a foundation for an enterprise data fabric. Once a data fabric has been established, businesses can unlock the full potential of their data assets, enabling AI algorithms to make intelligent predictions, discover hidden insights, and drive valuable business outcomes. Looking for a kick-start to get your solutions off the ground? Contact us to get started.

Constructing KM Technology: Tips for Implementing Your KM Technology Solutions

In the digital age that we now live in, making Knowledge Management (KM) successful at any organization relies heavily on the technologies used to accomplish everyday tasks. Companies are recognizing the importance of providing their workforce with smarter, more efficient, and highly specialized technological tools so that employees can maximize productivity in their daily work. There is also the expectation for a KM system, like SharePoint, to act as an all-in-one solution. Companies in search of software solutions often make the mistake of thinking a single system can effectively fulfill all of their needs, including content management, document management, AI-powered search, automated workflows, etc., which simply isn’t the case. The reality is that multi-purpose software tools may be able to serve more than one business function, but in doing so they only deliver basic features that lack necessary specifications, resulting in a sub-par product. More information on the need for a multi-system solution can be found in this blog about the importance of a semantic layer in a knowledge management technology suite.

In our experience at Enterprise Knowledge (EK), we consider the following to be core and essential systems for most integrated KM technology solutions:

  • Content Management Systems
  • Taxonomy Management Systems
  • Enterprise Search Tools
  • Knowledge Graphs

The systems mentioned above are essential tools to enable successful and mature KM, and when integrated with one another can serve to revolutionize the interaction between an organization’s staff and its information. EK has seen the most success with client organizations once they have understood the need for a blended set of technological tools and taken the steps to implement and integrate them with one another.

Once this need for a combined set of specialized solutions is realized, the issue of how to implement these solutions becomes ever-present and must be approached with a specific strategy for design and deployment. This blog will help to outline some of the key tips and guidelines for the implementation of a KM technology solution, regardless of its current state.

Figure: CMS, TMS, and Search Engine

Prioritizing Your Technology Needs

When thinking about the approach to implementing an organization’s identified technology solutions, there is often an inclination to prioritize solutions that are considered “state-of-the-art” or “cooler” than others. This is understandable, especially with the new-age technology that is on the market and able to create a “wow” factor for a business’ employees and customers. However, it is important to remember that the order in which systems are implemented relies heavily on the current makeup of the organization’s technology stack. For example, although it might be tempting to take on the implementation of an AI-powered knowledge graph or a chat-bot that has Natural Language Processing (NLP) capabilities, the quality of your results and real-world usability of the product will increase dramatically if you also include other technologies such as a graph database to provide the foundation for a knowledge graph, or a Taxonomy Management System to allow for the design and curation of an enterprise taxonomy and/or ontology.

Depending on your organization’s level of maturity with respect to its technology ecosystem, the order in which systems are implemented must be strategically defined so that one system can build off of and enhance the previous. Typically, if an organization does not possess a solidified instance of any of the core KM technologies, the logical first step is to implement a Content Management System (CMS) or Document Management System (DMS), or in some cases, both. Following the “content first” approach commonly used in web design and digitalization, organizations must first have a place in which they can effectively store, manage, and access their content, as an organization’s content is arguably one of its most valuable assets. Furthermore, one could argue that all core KM technologies are centered around an organization’s content and exist to improve and enhance that content, whether by adding to its structure, creating ways to more efficiently store and describe it, or more effectively searching and retrieving it at the time of need.

Once an organization has a solidified CMS solution in place, the next step is to implement tools geared towards the enhancement and findability of that content. One system in particular that helps to drastically improve the quality of an organization’s content by managing and deploying enterprise-wide taxonomies and ontologies is a Taxonomy Management System (TMS). TMS solutions are integrated with an organization’s CMS and search tools and serve as a central place to create, deploy, and manage poly-hierarchical taxonomies. TMS tools allow organizations to add structure to their content, describe it in a way that significantly improves organization, and fuel search by providing a set of predefined values from a controlled vocabulary that can be used to create facets and other forms of search-narrowing instruments. A common approach to implementing your technology ecosystem involves the simultaneous implementation of an enterprise search solution alongside the TMS implementation. Once again, the idea of one solution building off another is present here, as enterprise search tools feed off of the previously implemented CMS instance by utilizing Access Control List (ACL) specifications, security trimming considerations, content structure details, and more. Once these three systems are in place, organizations can afford to look into additional tools such as knowledge graphs, AI-powered chatbots, and metadata catalogs.

Defining Business Logic and Common Uses

There is a great deal of preparation involved with the implementation of KM technologies, especially when considering the envisioned use of the system by organizational staff. As part of this preparation, a thorough analysis of existing business processes and standard operating procedures must be executed to account for the specific needs of users and how those needs will influence the design of the target system. Although it is not always initially obvious, the way in which a system is going to be used will heavily impact how that system is designed and implemented. As such, the individuals responsible for implementation must have a well-documented, thorough understanding of what end users will need from the tool, combined with a comprehensive list of core use cases. These types of details are most commonly elicited through a set of analysis activities with the system’s expected users.

Without these types of preliminary activities, the implementation process will seldom go as planned. This is because various detours will have to be taken to accommodate the business process details that are unique to the organization and therefore not ‘pre-baked’ into software solutions. These considerations sometimes come in the form of taxonomy/controlled list requirements, customizable workflows, content type specifications, and security concerns, to name a few.

If the proper arrangements aren’t made before implementing software and integrating with additional systems, it will almost always affect the scope of your implementation effort. Software implementation is not a “one size fits all” type of effort; there are certain design elements that are based on the business and functional requirements of the target solution, and these must be identified in the initial stages of the project. EK has seen how the lack of these preparatory activities can have impacts on project timelines, most commonly because of delays due to unforeseen circumstances. This results in extended deadlines, change requests, additional investment, and other general inefficiencies.

Recruiting the Proper Resources

In addition to the activities needed before implementation, it is absolutely essential to ensure that the appropriate resources are assigned to the project. This too can create issues down the road if not given the appropriate amount of time and attention before beginning the project. Generally speaking, there are a few standard roles that are necessary for any implementation project, regardless of the type or complexity of the effort. These roles are listed and described below:

  • KM Designer/Consultant: Regardless of the type of system to be implemented, having a KM consultant on board is needed for various reasons. A KM consultant will be able to assist with the non-developmental areas of the project, for example designing taxonomies/ontologies, content types, search experiences, and/or governance structures.
  • Senior Solutions Architect: Depending on the level of integration required, a Senior Solutions Architect is likely required. This is ideally a person with considerable experience working with multiple types of technologies that are core to KM. This person should have a thorough and comprehensive understanding of how to arrange systems into a technology suite and how each component works, both alone and as part of a larger, combined solution. Familiarity with REST, SOAP, and RPC APIs, along with other general knowledge about communication between software systems, is a must.
  • Technology Subject Matter Expert (SME): This role is absolutely critical to the success of the implementation, as there will be a need for someone who specializes in the type of software being implemented. For example, if an organization is working to implement a TMS and integrate it with other systems, the project will need to staff a TMS integration SME to ensure the system is installed according to implementation best practices. This person will also be responsible for a large portion of the installation of the software, meaning they will be heavily involved with the initial setup and configuration based on the organization’s specific use of the system.
  • KM Project Manager: As is common with all projects, there will be a need for a project manager to coordinate meetings, ensure the project is on schedule, and facilitate the ongoing alignment of all engaged parties. This person should be familiar with KM so that they can align efforts with best practices and help facilitate KM-related decisions.
  • API Developer(s): Depending on the level of integration required, a developer may be needed to develop code to serve as a connector between systems. This individual must be familiar with the communication logic needed between systems and have a thorough understanding of APIs as well. The programming language in which any custom coding is needed will vary from organization to organization, but it is required that the developer has experience with the identified language.

The list above is by no means exhaustive, nor does it contain resources that are commonly assumed to be a part of any implementation effort. These roles are simply the unique ones that help with successful implementations. Also, depending on the level of effort required, there may be a need for multiple resources at each role, such as the developer or SME role. This type of consideration is important, as the project will need to have ample resources according to the project’s defined timeline.

Defining a Realistic Timeline

One final factor to consider when preparing for a technology solution implementation effort is the estimated time in which the project is expected to be completed. Implementation efforts are notoriously difficult to estimate in terms of time and resources needed, which often results in the over- or under-allocation of funding for a given effort. As a result, it’s recommended to err on the side of caution and incorporate more time than is initially estimated for the project to reach completion. If similar efforts have been completed in the past, utilize informal benchmarking. If available resources have experience implementing similar solutions, bring them to the forefront. The best way to estimate the level of effort and time needed to complete certain tasks is to look at historical data, which in this case would be previous implementation efforts.

In EK’s experience implementing large scale and highly complex software and custom solutions, we have learned that it is important to prepare for the unexpected to ensure the expected timeline is not derailed by unanticipated delays. For example, one common consideration we have encountered many times and one that has created significant delays is the need to get individuals appropriate access to certain systems or organizational resources. This is especially relevant with third-party consultants and when the system(s) in question have high security requirements. Additionally, there are several KM-related considerations that can unexpectedly lengthen a project’s timeline, such as the quality/readiness of content, governance standards and procedures that may be lacking, and/or change management preparations.

Conclusion

There are many factors that go into an implementation effort and, unfortunately, a lot of ways one can go wrong. Very seldom are projects like these executed to perfection, and when they fail or go awry it is usually due to one or a combination of the factors mentioned above. The good news, and the common theme across these considerations, is that these pitfalls can mostly be avoided with the proper planning, preparation, and estimates (with regards to both time and resources). The initial stages of an implementation effort are the most critical, as these are the times when project planners need to be honest and realistic with their projections. There is often the tendency to begin development as soon as possible, and to skip most of the preparatory activities due to an eagerness to get started. It is important to remember that successful implementation efforts require the necessary legwork, even if it may seem superfluous at the time. Does your company need assistance implementing a piece of technology and is not sure how to get started? EK provides end-to-end services beginning with strategy and design and ending with the implementation of fully functional KM systems. Reach out to us with any questions or general inquiries.

Improved Customer Service and Risk Management through KM


The Challenge

At over 30,000 employees, this diverse insurance organization has business areas that vary widely in information maturity, with some actively leveraging taxonomies while others use basic, unorganized tagging systems or simple folder structures. Even business areas with mature taxonomies were managing them in lengthy, detailed Word documents that lacked version control and governance. As a result, the organization recognized a business need to automate and improve the process for taxonomy management, not just by purchasing and leveraging technology but by concurrently gathering information, assessing the current state, educating stakeholders, and building consensus to ensure success.

The Solution

EK collaborated with the organization to create a strategy focused on both the people and the technology needed to support an enterprise taxonomy. EK developed a cross-functional governance group that would provide dedicated ownership, clear roles, and actionable responsibilities for maintaining the taxonomy. As part of defining this governance structure and process, we included:

  • Guidance on how to make changes or adjustments to the taxonomy;
  • Guidance on how to consider new integrations and opportunities; and
  • Roles and responsibilities for governance group members.

EK also worked collaboratively with the organization in working sessions and alignment conversations to ensure agreement on preferred labels, synonyms, and completeness. These sessions also served as training to increase awareness of the enterprise taxonomy, the technology we had selected to manage it, and the overall business value resulting from our effort. 

The EK Difference

EK’s change and communications expertise, along with our Agile delivery approaches, ensured that all stakeholders were represented, felt heard, and had input into the end products. EK facilitated multiple conversations with stakeholders from across the business to develop and lay the foundation for a cohesive, collaborative working team who could maintain the taxonomy. In addition, the iterative approach to both taxonomy enhancement and governance plan design ensured consistent feedback loops and engagement by all stakeholders. 

The Results

The organization’s new enterprise taxonomy and taxonomy management platform support the ability to manage the taxonomy centrally, leverage auto-tagging, and implement faceted search, all while providing the flexibility to integrate with multiple internal systems, including the SharePoint-based intranet. The taxonomy itself increased the findability of content, which allowed employees and customers to find content and information more readily, saving time and effort and creating a better overall user experience. The taxonomy is also a foundational capability to support other technologies and future projects, like knowledge graphs and advanced search capabilities, with the ongoing goal of more quickly gaining insights and innovations to improve customer and employee experiences and information gathering.

 

The Importance of a Semantic Layer in a Knowledge Management Technology Suite

One of the most common Knowledge Management (KM) pitfalls at any organization is the inability to find fresh, reliable information at the time of need. 

The most prominent cause of this inability to quickly find information that EK has seen in recent years is that an organization possesses multiple content repositories that lack a clear intention or purpose. As a result, users are forced to visit each repository within their organization’s technology landscape one at a time in order to search for the information that they need. This problem is often exacerbated by other KM issues, such as a lack of proper search techniques, organizational mismanagement of content, and content sprawl and duplication. In addition to a loss in productivity, these issues lead to rework, individuals making decisions on outdated information, employees losing precious working time trying to validate information, and users relying on experts for information they cannot find on their own.

Along with a solid content management and KM strategy, EK recommends that clients experiencing these types of findability issues also seek solutions at the technical level. It is critical that organizations take the opportunity to streamline the way their users access the information they need to do their jobs; this reduces the time and effort users spend searching for information and alleviates the challenges described above. This blog will explain how organizations can proactively mitigate the challenges of siloed information in different applications by instituting a combined set of technical solutions: taxonomy management systems, metadata hubs, and enterprise search.

With the abundance and variety of content that organizations typically possess, it is often unrealistic to have one repository that houses all types of content. There are very few, if any, content management systems on the market that can optimally support the storage of every type of content an organization may have, let alone possess the search and metadata capabilities required for proper content management. Organizations can address this dilemma by having a unified, centralized search experience that is able to search all content repositories in a secure and safe manner. This is achieved through the design and implementation of a semantic layer – a combination of unique solutions that work together to provide users one place to go to for searching for content, but behind the scenes allow for the return of results from multiple locations.

In the following sections, I will illustrate the value of the Taxonomy Management System, Enterprise Search tool, and Metadata Hub that make up the semantic layer, and how they collectively enable a unique and highly beneficial set of solutions.

The semantic layer is made up of three main systems/solutions: a Taxonomy Management System (TMS), an Enterprise Search (ES) tool, and a Metadata Hub.

Taxonomy Management Systems

In order to pull consistent data values back from different sources and filter, sort, and facet that data, there must be a taxonomy in place that applies to all content, in all locations. This is achieved by implementing an enterprise TMS, which can be used to create, manage, and apply an enterprise-wide taxonomy to content in every system. This is important because there are likely already multiple, separate taxonomies built into various content repositories that differ from one another and therefore cannot be leveraged in one system. An enterprise-wide taxonomy allows for the design of a taxonomy that applies to all content, regardless of its type or location. An additional benefit of having an enterprise TMS is that organizations can utilize the system’s auto-tagging capabilities to assist in tagging content in various repositories. Most, if not all, major contenders in the TMS industry provide auto-tagging capabilities, and organizations can use them to significantly reduce the burden on content authors and curators to manually apply metadata to content. Once integrated with content repositories, the TMS can automatically parse content, assign metadata based on a controlled vocabulary (stored in the enterprise taxonomy), and return those tags to a central location.
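A commercial TMS implements auto-tagging with far more linguistic sophistication, but the core idea can be sketched in a few lines of Python: match a document’s text against the controlled vocabulary (including synonyms) and record the matching preferred terms as tags. The vocabulary below is a hypothetical, deliberately simplified illustration.

    # Hypothetical controlled vocabulary: preferred term -> variants to match
    VOCABULARY = {
        "Annuities": ["annuity", "annuities"],
        "Claims Processing": ["claim", "claims", "claims processing"],
    }

    def auto_tag(text: str) -> list[str]:
        """Return the preferred terms whose variants appear in the text."""
        lowered = text.lower()
        return [preferred
                for preferred, variants in VOCABULARY.items()
                if any(variant in lowered for variant in variants)]

    print(auto_tag("How to file a claim for a fixed annuity"))
    # ['Annuities', 'Claims Processing']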

Metadata Hub

The next piece of the semantic layer puzzle is a metadata hub. We often find that one or more content repositories in an organization’s KM ecosystem lack the necessary metadata capabilities to describe and categorize content. These capabilities are extremely important because they facilitate the efficient indexing and retrieval of content. A metadata hub can alleviate this dilemma by effectively giving those systems their needed metadata capabilities, as well as creating a single place to store and manage that metadata. The metadata hub, when integrated with the TMS, can apply the taxonomy and tag content from each repository, and store those tags in a single place for a search tool to index.

This metadata hub acts as a ‘manage in place’ solution: the metadata hub points to content in its source location. Tags and metadata that are generated are stored only in the metadata hub and are not ‘pushed’ down to the source repositories. This pushing down of tags can be achieved with additional development, but is generally avoided so as not to disrupt the integrity of content within its respective repository. The main goal is to have one place that contains metadata about all content in all repositories, with that metadata based on a shared, enterprise-wide taxonomy.
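Conceptually, each record in the metadata hub is just a pointer to content in its source system plus the centrally managed tags, as in this hypothetical sketch:

    from dataclasses import dataclass, field

    @dataclass
    class MetadataRecord:
        """Manage in place: metadata lives here; content stays in its source system."""
        content_id: str     # identifier within the source repository
        source_system: str  # e.g., "SharePoint", "Salesforce"
        source_url: str     # link back to the content in its source location
        tags: list[str] = field(default_factory=list)  # from the enterprise taxonomy

    record = MetadataRecord(
        content_id="doc-123",
        source_system="SharePoint",
        source_url="https://intranet.example.com/docs/doc-123",  # hypothetical URL
        tags=["Claims Processing", "Training"],
    )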

Enterprise Search

The final component of the semantic layer is Enterprise Search (ES). This is the piece that allows individuals to perform a single search as opposed to visiting multiple systems and performing multiple searches, which is far from the optimal search experience. The ES solution acts as the enabling tool that makes the singular search experience possible. This is the search tool that individuals will use to execute queries for content across multiple systems, and it includes the ability to filter, facet, and sort content to narrow down search results. In order for the search tool to function properly, there must be integrations set up between the source repositories, the metadata hub, and the TMS solution. Once these connectors are established, the search tool can query each source repository with the search criteria provided by the user, and then return metadata and additional information made available by the TMS and metadata hub solutions. The result is a faceted search solution similar to what we are all familiar with on Amazon and other leading e-commerce websites. These three systems work together not only to alleviate the issues created by a lack of metadata functionality in source repositories, but also to give users a single place to find anything and everything that relates to their search criteria.
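Putting the pieces together, federated search can be sketched as: query each connected repository, merge the results, and compute facet counts from the shared taxonomy tags. The connector class and documents below are hypothetical placeholders; a real implementation would delegate querying, ranking, and security trimming to a dedicated enterprise search engine.

    from collections import Counter

    class RepositoryConnector:
        """Hypothetical stand-in for a CMS/DMS search connector."""
        def __init__(self, name, documents):
            self.name, self.documents = name, documents

        def search(self, query):
            # A real connector would also enforce ACLs and security trimming
            return [d for d in self.documents if query.lower() in d["title"].lower()]

    def federated_search(query, repositories):
        """One query fans out to every repository; results merge with facet counts."""
        results = [doc for repo in repositories for doc in repo.search(query)]
        facets = Counter(tag for doc in results for tag in doc["tags"])
        return results, facets

    cms = RepositoryConnector("CMS", [{"title": "Claims Handbook", "tags": ["Claims Processing"]}])
    dms = RepositoryConnector("DMS", [{"title": "Claims Training Video", "tags": ["Claims Processing", "Training"]}])
    results, facets = federated_search("claims", [cms, dms])
    print(len(results), dict(facets))  # 2 {'Claims Processing': 2, 'Training': 1}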

Bringing It All Together

The value of a semantic layer can be illustrated through a common use case:

Let’s say you are trying to find out more information about a certain topic within your organization. In order to do this, you would love to perform a search for everything related to this certain topic, but realize that you have to visit multiple systems to do so. One of your content repositories stores digital media, i.e. videos and pictures, another of your content repositories stores scholarly articles, and another one stores information on individuals who are experts on the topic. There could be many more repositories, and you must visit each one separately and search within each system to gather the information you need. This takes considerable time and effort and in a best case scenario makes for a painstakingly long search process. In a worst case scenario, content is missed and the research is incomplete.

With the introduction of the semantic layer, searchers only have to visit one location and perform a single search, seeing results from each individual repository all in one place. Additionally, searchers have extensive metadata on each piece of content with which to filter, ensuring they find the information they are looking for. When we build these semantic layers, the search typically gives users the option to narrow results by source system, content type (article, person, digital media), date created or modified, and more. Once searchers have found their desired content, a convenient link takes them directly to the content in its respective repository.

Closing

The increasingly common problem of multiple, disparate content repositories in a KM technology stack costs organizations valuable time and effort, and keeps employees from efficiently finding information through mature, proven metadata and search capabilities. Enterprise Knowledge (EK) specializes in the design and implementation of exactly the systems described above and has proven experience building these technologies for clients. If your company is facing issues with the findability of your content, struggling with having to search for content in multiple places, or finding that searching for information is a cumbersome task, we can help. Contact us with any questions about how we can improve the way your organization searches for and finds information within your KM environment.

Taxonomy Implementation Best Practices https://enterprise-knowledge.com/taxonomy-implementation-best-practices/ Mon, 08 Feb 2021 15:00:07 +0000

Have you ever found yourself wondering how to implement a taxonomy you've just designed or updated? You might have asked yourself, "How do I make this taxonomy work in SharePoint? In Salesforce? In Oracle Knowledge Advanced?" You are not alone. Many of our clients struggle not only with designing the right taxonomy for their content, but with implementing it in a way that realizes all the benefits we know taxonomies can bring. In my years designing and implementing taxonomies, I've come to understand that a taxonomy is only as good as its application. In this blog I will cover some of the important considerations for taxonomy implementation and how preparing for them will help ensure a smoother implementation and a long life for your taxonomy.

Taxonomy Implementation Considerations

[Image: two examples of topic taxonomies, one single-level and one multi-level; the single-level taxonomy has flat lists, while the multi-level taxonomy shows hierarchical lists with multiple parent-child relationships.]

One of the most important things to keep in mind with taxonomy development or maintenance, and its subsequent implementation, is the primary use case(s) the taxonomy must support. The taxonomy will often serve users in one of three primary use cases: search findability, browsing, and content management. The use case should also inform, or be informed by, the method of application: the system(s) within which the taxonomy will be stored, maintained, and utilized. While most taxonomies, especially business taxonomies, are designed to be system-agnostic and flexible so they can be used in more than one system or location, it is important to know the limitations and features of the systems that will leverage the taxonomy while developing it. For example, if your intended system does not support multi-level hierarchies, you may want to design a taxonomy with a deconstructed hierarchy, i.e., multiple flat facet lists instead of a deep hierarchy (a minimal sketch follows below). Alternatively, if you have not yet selected the system, your defined taxonomy use cases can assist in evaluating the limitations of potential systems (e.g., customizable search filters, synonym dictionaries, hierarchical topic facets).
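As referenced above, here is a minimal sketch of deconstructing a hierarchy into flat facet lists for a system that only supports flat lists. The taxonomy values are illustrative.

```python
# A minimal sketch of "deconstructing" a hierarchy: each level of a
# three-level tree becomes its own flat facet list. Values are illustrative.
HIERARCHY = {
    "Insurance": {
        "Auto Insurance": ["Collision", "Liability"],
        "Home Insurance": ["Fire", "Flood"],
    },
}

def deconstruct(tree):
    """Split a three-level hierarchy into three flat facet lists."""
    level1, level2, level3 = set(), set(), set()
    for l1, children in tree.items():
        level1.add(l1)
        for l2, grandchildren in children.items():
            level2.add(l2)
            level3.update(grandchildren)
    return sorted(level1), sorted(level2), sorted(level3)

print(deconstruct(HIERARCHY))
# (['Insurance'], ['Auto Insurance', 'Home Insurance'],
#  ['Collision', 'Fire', 'Flood', 'Liability'])
```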

Depending on the features of your systems and your long-term goals, you may also consider a taxonomy management tool as the source system for your taxonomy. Taxonomy management tools help mitigate limitations within a content management system, support the use of the taxonomy in more than one system without adding to the maintenance burden, and ensure you have the foundation for more advanced use cases, including ontologies, knowledge graphs, and Enterprise Artificial Intelligence (AI).

Defining Taxonomy Use Cases

First, let’s define the four most common use cases including those mentioned above. 

  1. Search Findability: Taxonomy facets support search through both synonym dictionaries and categorical facets. Synonyms accommodate the varied language of different user groups, allowing one person to search for "Auto" and another to search for "Vehicle" (see the sketch after this list). Facets allow users to narrow their search through defined and optimized categories that represent the different types of information about the content. For example, an insurance company might need a facet for "Product" to allow users to filter by "Auto Insurance" vs. "Home Insurance".
  2. Browsing: Similar to facets, the categories within a taxonomy can be selected and optimized to provide navigation or browsing as an option to find or explore content. This might be seen in the top header of a website, and allow people to select a Product and be taken to a landing page of some kind where all content about that product can be found.
  3. Content Management and Tagging: Taxonomies also often support the content management lifecycle through fields such as Content Type (Article, Procedure) or Status (Draft, Published, Deprecated), in addition to tagging other valuable information that helps manage each item.
  4. Recommendation Engines and/or Chatbots: Taxonomies provide the foundation for advanced use cases such as recommendation engines and chatbots. In these cases, a taxonomy may be larger, deeper, and more complex to assist in disambiguation and machine learning techniques, rather than assisting a user in navigating a website. 
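As referenced in the Search Findability item above, here is a minimal sketch of synonym expansion: terms in the same synonym ring resolve to one another before a query hits the index, so "Auto" and "Vehicle" return the same results. The rings and terms are illustrative assumptions.

```python
# A minimal sketch of synonym expansion in search, assuming a taxonomy
# that stores alternative labels. Terms are illustrative.
SYNONYM_RINGS = [
    {"auto", "vehicle", "car"},
    {"home", "house", "residence"},
]

def expand_query(term: str) -> set[str]:
    """Return the term plus every synonym the taxonomy knows for it."""
    term = term.lower()
    for ring in SYNONYM_RINGS:
        if term in ring:
            return ring
    return {term}

print(expand_query("Vehicle"))  # {'auto', 'vehicle', 'car'} (set order may vary)
```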

Defining Taxonomy Implementation Methods

Taxonomies can either be implemented directly in the content or document management system(s) of your choice, or implemented within a taxonomy management tool that connects to your Content Management System (CMS) via APIs. There are pros and cons to both options, but the main criteria for choosing a Taxonomy Management System include the complexity of the taxonomy, the use of the taxonomy in multiple, separate systems, and limitations for the taxonomy in one of the intended systems.
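For illustration only, here is a hedged sketch of the API-based approach: taxonomy concepts are exported from a TMS and pushed into a CMS term store over REST. Both endpoints and payload fields are hypothetical; real TMS and CMS products each expose their own APIs (often SKOS-based exports on the TMS side), so this shows the shape of the integration, not a definitive implementation.

```python
# A hedged sketch of syncing taxonomy terms from a TMS to a CMS over
# REST. The URLs and payload fields below are hypothetical placeholders.
import requests

TMS_EXPORT_URL = "https://tms.example.com/api/taxonomies/products/concepts"  # hypothetical
CMS_TERMS_URL = "https://cms.example.com/api/term-store/products"            # hypothetical

def sync_taxonomy():
    """Pull concepts from the TMS export and push them into the CMS term store."""
    concepts = requests.get(TMS_EXPORT_URL, timeout=30).json()
    for concept in concepts:
        # Push each concept's label, synonyms, and parent into the CMS.
        requests.post(CMS_TERMS_URL, json={
            "id": concept["id"],
            "label": concept["prefLabel"],
            "synonyms": concept.get("altLabels", []),
            "parent": concept.get("broader"),
        }, timeout=30).raise_for_status()

if __name__ == "__main__":
    sync_taxonomy()
```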

Common Implementation Challenges

Often, the implementation challenges we see fit into one of three categories:

  • System limitations: This is a consideration when one or more of the intended systems (often content or document management systems) has limited taxonomy management capabilities, such as the inability to store or display hierarchies, to store synonyms for terms, to display multi-select lists, or to indicate required fields. In any of these cases, it is important to understand what tweaks should be made to the taxonomy (e.g., removing hierarchy, adding synonyms to a keywords field) to fit within the constraints, what impact those changes might have on usability or other systems, and the level of effort required to customize the system to address its limitations.
  • Taxonomy limitations/updates: A second common challenge is not actually a challenge at all; it is part of the process of taxonomy design and maintenance. Implementation often illuminates needed changes or additions to a taxonomy that were not identified or prioritized during the initial design. These may include a missing metadata field that needs to be designed, a lack of sufficient synonyms to support search, or the need for content types to enable flexible, custom implementation options for different types of content or different user groups.
  • Tagging content: An important component of taxonomy implementation is the tagging of content with the new taxonomy. Manual and assisted tagging approaches provide a range of options. Assisted tagging can include using a text extraction tool to suggest tags, or including logic in a migration script to map and apply tags (see the sketch after this list). Often, a mix of approaches is needed to apply the full taxonomy accurately: text extraction tools can auto-tag topical taxonomies that align well with the content's text, while migration scripts and mapping may be better suited for fields that are similar in the current state. Finally, manual tagging may be needed for new, administrative fields that are not accurately covered by the first two approaches.
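As referenced in the Tagging content item above, here is a minimal sketch of the migration-script approach: legacy field values are mapped to new taxonomy concepts, and items with no mapping are flagged for manual tagging. The mapping, field names, and concept ids are illustrative.

```python
# A minimal sketch of mapping legacy field values to taxonomy concepts
# during a content migration. Mapping and field names are illustrative.
LEGACY_TO_TAXONOMY = {
    "car": "product/auto-insurance",
    "automobile": "product/auto-insurance",
    "house": "product/home-insurance",
}

def migrate_tags(item: dict) -> dict:
    """Replace a legacy 'category' value with a taxonomy concept id."""
    legacy = item.get("category", "").lower()
    item["tags"] = [LEGACY_TO_TAXONOMY[legacy]] if legacy in LEGACY_TO_TAXONOMY else []
    item["needs_manual_review"] = not item["tags"]  # fall back to manual tagging
    return item

print(migrate_tags({"title": "Policy FAQ", "category": "Automobile"}))
# {'title': 'Policy FAQ', 'category': 'Automobile',
#  'tags': ['product/auto-insurance'], 'needs_manual_review': False}
```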

Tips to Mitigate Implementation Challenges

Remember that a taxonomy is a living, changing thing, and taxonomy governance is of the utmost importance. Don't be afraid to make adjustments, but first ensure you understand the requirements, the options, and the impact on the taxonomy in other systems. Focus on your primary use case (e.g., findability) and its benefits for your users to help navigate implementation challenges such as system limitations or complex migrations. And finally: document, document, document your changes, the reasons for them, and any system specifics you've encountered and adjusted for. This will be important for the longevity of the taxonomy and your implementation, reducing the need for rework.

Are you currently designing or working to implement a taxonomy for your organization? We would be happy to guide you through this process and work alongside you to ensure your taxonomy implementation is optimized for your organization, use cases, and systems. Contact us to learn more.
