metadata governance Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/metadata-governance/

Out of Many, One: Building a Semantic Layer to Tear Down Knowledge Silos
https://enterprise-knowledge.com/practical-proven-guidance-on-how-to-break-down-knowledge-silos-using-a-semantic-layer-and-streamline-the-delivery-of-content/ (November 6, 2024)

Guillermo Galdamez, Principal Consultant, and Nina Spoelker, Consultant, jointly delivered a presentation titled ‘Out of Many, One: Building a Semantic Layer to Tear Down Silos’ at the 2024 edition of LavaCon. Galdamez and Spoelker provided practical, proven guidance on how to break down knowledge silos using a semantic layer and streamline the delivery of content.

The LavaCon Conference on Content Strategy and Technical Communication Management took place October 27-30 in Portland, Oregon. The theme of this year’s event was Content as a Business Asset: Reducing Costs, Generating Revenue, and Improving the Customer Experience Through Better Content.

Enterprise AI Readiness Assessment
https://enterprise-knowledge.com/enterprise-ai-readiness-assessment/ (July 2, 2020)

Understand your organization's priority areas before committing resources to mature your information and data management solutions. Enterprise Knowledge's AI Readiness Assessment considers your organization's business and technical ecosystem and identifies specific priority and gap areas to help you make targeted investments and gain tangible value from your data and information.

A wide range of organizations have placed AI on their strategic roadmaps, with C-levels commonly listing Knowledge AI among their biggest priorities. Yet many are already encountering challenges, as the vast majority of AI initiatives fail to show results, meet expectations, or provide real business value. For these organizations, the setbacks typically originate from the lack of a foundation on which to build AI capabilities. Enterprise AI projects too often end up as isolated endeavors, lacking the necessary foundations to support business practices and operations across the organization. So, how can your organization avoid these pitfalls? There are three key questions to ask when developing an Enterprise AI strategy: Do you have clear business applications? Do you understand the state of your information? And what in-house capabilities do you possess?

Enterprise AI entails leveraging advanced machine learning and cognitive capabilities to discover and deliver organizational knowledge, data, and information in a way that closely aligns with how humans look for and process information.

With our focus and expertise in knowledge, data, and information management, Enterprise Knowledge (EK) developed this proprietary Enterprise Artificial Intelligence (AI) Readiness Assessment to enable organizations to understand where they are and where they need to be in order to begin leveraging today's technologies and AI capabilities for knowledge and data management.

Based on our experience conducting strategic assessments as well as designing and implementing Enterprise AI solutions, we have identified four key factors that serve as the most common indicators of an organization's readiness to invest in advanced capabilities: organizational readiness, the state of data and content, skill sets and technical capabilities, and change readiness.

This assessment leverages over thirty measurements across these four Enterprise AI maturity factors, each described in the sections below.
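For illustration only, here is a minimal sketch of how individual readiness measurements might roll up into per-factor and overall scores. The factor names mirror the four factors described below, but the specific measurements, weights, and the 1-5 scale are hypothetical and do not represent the actual scoring instrument.

```python
# Hypothetical illustration: rolling up individual readiness measurements
# (scored 1-5) into per-factor averages and a weighted overall score.
# The measurements and weights below are invented for this sketch.

FACTOR_WEIGHTS = {
    "organizational_readiness": 0.25,
    "state_of_data_and_content": 0.30,
    "skills_and_technical_capabilities": 0.25,
    "change_threshold": 0.20,
}

def factor_score(measurements: dict[str, int]) -> float:
    """Average the 1-5 scores recorded for one factor."""
    return sum(measurements.values()) / len(measurements)

def overall_readiness(assessment: dict[str, dict[str, int]]) -> float:
    """Weight each factor's average into a single readiness score."""
    return sum(
        FACTOR_WEIGHTS[factor] * factor_score(measurements)
        for factor, measurements in assessment.items()
    )

example = {
    "organizational_readiness": {"vision_defined": 4, "leadership_support": 3},
    "state_of_data_and_content": {"metadata_coverage": 2, "machine_readable": 3},
    "skills_and_technical_capabilities": {"kg_experience": 2, "sme_availability": 4},
    "change_threshold": {"it_buy_in": 3, "resource_planning": 2},
}
print(f"Overall readiness: {overall_readiness(example):.2f} / 5")
```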

1. Organizational Readiness

Does your organization have the vision, support, and drive to enable successful Enterprise AI initiatives?

The foundational requirement for any organization to undergo an Enterprise AI transformation stems from alignment on vision and the business applications and justifications for launching successful initiatives. The Organizational Readiness factor includes the assessment of appropriate organizational designs, leadership willingness, and mandates that are necessary for success. This factor evaluates topics including:

  • Whether there is a clear vision and strategy for AI and its application across the organization.
  • Whether AI is a strategic priority with leadership support.
  • Whether the scope of AI is clearly defined with measurable success criteria.
  • Whether there is a sense of urgency to implement AI.

With a clear picture of your organizational needs, the Organizational Readiness factor will allow you to determine whether your organization meets the requirements to consider AI-related initiatives, while surfacing potential risks and preparing you to mitigate them before they lead to failure.

2. The State of Organizational Data and Content

Is your data and content ready to be used for Enterprise AI initiatives?

The volume and dynamism of data and content (structured and/or unstructured) are growing exponentially, and organizations need to be able to securely manage and integrate that information. Enterprise AI requires quality of, and access to, this information. This assessment factor focuses on the extent to which existing structured and unstructured data is in a machine-consumable format and the level to which it supports business operations within the enterprise. This factor considers topics including:

  • The extent to which the organization’s information ecosystems allow for quick access to data from multiple sources.
  • The scope of organizational content that is structured and in a machine-readable format.
  • The state of standardized organization of content/data, such as business taxonomies and metadata schemes, and whether they are accurately applied to content.
  • The existence of metadata for unstructured content. 
  • Access considerations including compliance or technical barriers.

AI needs to learn the human way of thinking and how an organization operates in order to provide the right solutions. Understanding the full state of your current data and content will enable you to focus on the content/data with the highest business impact and help you develop a strategy to get your data into an organized and accessible format. Without high-quality, well-organized, and accurately tagged data, AI applications will not deliver high-value results for your organization.
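To make "machine-consumable, well-tagged content" concrete, the following is a minimal sketch of a content item carrying taxonomy-aligned metadata, along with a simple validation check. The field names and controlled values are hypothetical, not a prescribed schema.

```python
# Hypothetical illustration of a content item whose metadata is aligned to a
# controlled taxonomy, so downstream AI applications can filter and aggregate
# it reliably. Field names and allowed values are invented for this sketch.

ALLOWED_CONTENT_TYPES = {"Policy", "Case Study", "How-To Guide"}
ALLOWED_TOPICS = {"Metadata Governance", "Semantic Layer", "Enterprise AI"}

document = {
    "title": "Metadata Standards Onboarding Guide",
    "content_type": "How-To Guide",        # from a controlled list
    "topics": ["Metadata Governance"],     # from a business taxonomy
    "owner": "knowledge-management-team",
    "last_reviewed": "2020-06-01",
}

def validate(doc: dict) -> list[str]:
    """Return a list of metadata problems that would block machine consumption."""
    problems = []
    if doc.get("content_type") not in ALLOWED_CONTENT_TYPES:
        problems.append("content_type is missing or not in the controlled list")
    if not set(doc.get("topics", [])) <= ALLOWED_TOPICS:
        problems.append("one or more topics are not in the business taxonomy")
    return problems

print(validate(document))  # an empty list means the item is consistently tagged
```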

3. Skill Sets and Technical Capabilities

Does your organization have the technical infrastructure and resources in place to support AI?

With the increased focus on AI, demand has grown both for individuals with the technical skills to engineer advanced machine learning and intelligent solutions and for business knowledge experts who can transform data into a paradigm that aligns with how users and customers communicate knowledge. Further, over the years, cloud computing capabilities, web standards, open source training models, and linked open data for a number of industries have emerged to help organizations craft customized Enterprise AI solutions for their business. This means an organization looking to start leveraging AI for its business no longer has to start from scratch. This assessment factor evaluates the organization's existing capabilities to design, manage, operate, and maintain an Enterprise AI solution. Some of the factors we consider include:

  • The state of existing enterprise ontology solutions and enterprise knowledge graph capabilities that optimize information aggregation and governance. 
  • The existence of auto-classification and automation tools within the organization.
  • Whether roles and skill sets for advanced data modeling or knowledge engineering are present within the organization.
  • The availability and capacity to commit business and technical SMEs for AI efforts.

Understanding the current gaps and weaknesses in existing capabilities and defining your targets are crucial elements of developing a practical AI roadmap. This factor also plays a foundational role in surfacing the key considerations that ensure AI efforts kick off on the right track, such as leveraging web standards that enable interoperability, and starting with available existing or open-source semantic models and ecosystems to avoid short-term delays while establishing long-term governance and strategy.
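As one small illustration of starting from web standards rather than from scratch, the sketch below expresses a couple of hypothetical taxonomy terms as SKOS concepts using the open source rdflib library. The vocabulary URI and labels are invented for this example.

```python
# Minimal sketch: expressing two hypothetical taxonomy terms as SKOS concepts
# with rdflib, so they remain interoperable with standards-based tools.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/taxonomy/")  # hypothetical vocabulary URI

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

scheme = EX["ContentTopics"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))

governance = EX["MetadataGovernance"]
g.add((governance, RDF.type, SKOS.Concept))
g.add((governance, SKOS.prefLabel, Literal("Metadata Governance", lang="en")))
g.add((governance, SKOS.altLabel, Literal("Metadata Stewardship", lang="en")))
g.add((governance, SKOS.inScheme, scheme))

# Serialize to Turtle so other standards-aware systems can consume the terms.
print(g.serialize(format="turtle"))
```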

4. Change Threshold 

Is your organization prepared to support the operational and strategic changes that will result from AI initiatives?

The success of Enterprise AI relies heavily on the adoption of new technologies and new ways of doing business. Organizations that fail to succeed with AI often struggle to understand the full scope of the change that AI will bring to their business and organizational norms. This usually manifests itself as fear, either of changes to job roles or of producing wrong or unethical AI results that expose the organization to greater risk. Most organizations also struggle to accept that AI requires a few iterations to get it "right". As such, this assessment factor focuses on the organization's appetite, willingness, and threshold for understanding and tackling the cultural, technical, and business challenges involved in achieving the full benefits of AI. This factor evaluates topics including:

  • Business and IT interest and desire for AI.
  • Existence of resource planning for the individuals whose roles will be impacted. 
  • Education and clear communication to facilitate adoption. 

The success of any technical solution is highly dependent on the human and cultural factors within an organization, and each organization has its own threshold for dealing with change. Understanding and planning for this factor will enable your organization to integrate change management that addresses negative implications, avoids unnecessary resistance and weak AI results, and provides proper navigation through issues that arise.

How it Works

This Enterprise AI readiness assessment and benchmarking evaluates and scores an organization against more than 30 points across the four factors. We apply this proprietary maturity model to help assess your Enterprise AI readiness and clearly define success criteria for your target AI initiatives. Our steps include:

  • Knowledge Gathering and Current State Assessment: We leverage a hybrid model that includes interviews and focus groups, supported by content/data and technology analysis, to understand where you are and where you need to be. This gives us a complete understanding of your current strengths and weaknesses across the four factors, allowing us to provide the right recommendations and guidance to drive success, business value, and long-term adoption.
  • Strategy Development and Roadmapping: Building on the established focus on the assessment factors, we work with you to develop a strategy and roadmap that outlines the necessary work streams and activities needed to achieve your AI goals. It combines our understanding of your organization with proven best practices and methodologies into an iterative work plan that ensures you can achieve the target state while quickly and consistently showing interim business value.
  • Business Case Development and Alignment Support: We further compile our assessment of potential project ROI based on increased revenue, cost avoidance, and risk and compliance management. We then balance those against perceived business needs and wants by determining the areas that would have the biggest business impact at the lowest cost, and focus further discussion and exploration on the areas with the greatest need and highest interest (a minimal prioritization sketch follows this list).
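As a toy illustration of weighing business impact against cost when shortlisting candidate areas, the following sketch ranks hypothetical initiatives by a simple impact-to-cost ratio. The initiatives and scores are invented; a real business case also weighs risk, urgency, and stakeholder interest.

```python
# Hypothetical illustration: ranking candidate AI initiatives by a simple
# impact-to-cost ratio. The initiatives and 1-10 scores below are invented.

candidates = [
    {"name": "Semantic search pilot",           "impact": 8, "cost": 3},
    {"name": "Auto-tagging legacy content",     "impact": 6, "cost": 2},
    {"name": "Enterprise-wide knowledge graph", "impact": 9, "cost": 8},
]

ranked = sorted(candidates, key=lambda c: c["impact"] / c["cost"], reverse=True)
for c in ranked:
    print(f'{c["name"]}: impact/cost = {c["impact"] / c["cost"]:.1f}')
```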

Keys to Our Assessment  

Over the past several years, we have worked with diverse organizations to enable them to strategize, design, pilot, and implement scaled Enterprise AI solutions. What makes our priority assessment unique is that it is developed based on years of real-world experience supporting organizations in their knowledge and data management. As such, our assessment offers the following key differentiators and values for the enterprise: 

  • Recognition of Unique Organizational Factors: This assessment recognizes that no Enterprise AI initiative is exactly the same. It is designed in such a way that it recognizes the unique aspects of every organization, including priorities and challenges to then help develop a tailored strategy to address those unique needs.
  • Emphasis on Business Outcomes: Successful AI efforts result in tangible business applications and outcomes. Every assessment factor is tied to specific business outcomes with corresponding steps on how the organization can use it to better achieve practical business impact.
  • A Tangible Communication and Education Tool: Because this assessment provides measurable scores and over 30 tangible criteria and success factors, it serves as an effective tool for communicating up to leadership and quickly garnering buy-in, helping organizations understand both the cost and the tangible value of AI efforts.

Results

As a result of this effort, you will have a complete view of your AI readiness, gaps, and required ecosystem, along with an understanding of the potential business value that could be realized once the target state is achieved. Taken as a whole, the assessment allows an organization to:

  • Understand strengths and weaknesses, and overall readiness to move forward with Enterprise AI compared to other organizations and the industry as a whole;
  • Judge where foundational gaps may exist in the organization in order to improve Enterprise AI readiness and likelihood of success; and
  • Identify and prioritize next steps in order to make immediate progress based on the organization’s current state and defined goals for AI and Machine Learning.

 

Get Started

Taking the first step toward gaining this invaluable insight is easy:

1. Take 10-15 minutes to complete your Enterprise AI Maturity Assessment by answering a set of questions pertaining to the four factors; and
2. Submit your completed assessment survey and provide your email address to download a formal PDF report with your customized results.

Best Practices for Successful Metadata Governance
https://enterprise-knowledge.com/best-practices-for-successful-metadata-governance/ (June 29, 2020)

Metadata governance is often seen as burdensome, tough to implement, and even harder to prioritize. When organizations embark on a metadata initiative, they tend to focus on design and development rather than governance. Metadata and taxonomies are living tools that evolve with the content they describe. Without a forward-thinking governance plan, the defined metadata will stagnate and fail to offer business value. Organizations that invest in metadata governance benefit from the following: 

  • Improved business intelligence by linking organizational information to commonly used and accepted terms. 
  • Procedures and processes that provide more consistent standards for describing and categorizing information and data, resulting in consistent branding and customer experience, risk mitigation, and more.
  • A strategy to update metadata in accordance with organizational and domain specific needs.

While I’d love to convince you that governance is the most exciting part of a metadata initiative, I’ll settle for convincing you of its importance. The most successful metadata governance consists of three main components: roles and responsibilities; policies & procedures; and metrics to measure growth and change.

In a previous blog, I wrote specifically about the importance of governance when adding new terms to a taxonomy and my go-to approach for determining which terms are relevant and where they should live within a taxonomy. In this blog, I wanted to take a step back and share a few best practices for establishing a successful metadata governance plan that accounts for and supports changes to any type of information model. Taxonomy changes might include addition, modification, and deletion of terms while ontology or knowledge graph changes might include changes to an ontology's entities, relationships, and attributes or changes to the data sources being leveraged.

"Successful governance happens when it is embedded in daily work and culture."

Most organizations struggle with governance whether they are large or small, hierarchical or flat, remote or in-office, etc. More often than not, a few common mistakes are at the root of unsuccessful governance initiatives. By recognizing and mitigating those mistakes, you can ensure metadata governance that lives, breathes, and grows with your content, data, knowledge, and organization. 

1. Just get started.

One of the most common mistakes organizations make is trying too hard to perfect their governance plan instead of getting started. It's human nature to try to ensure we have the right group of the right participants, the right process diagrams and flows, and a perfectly automated process, but I have seen that those details come with time and iterative review of a working governance process and cannot be accounted for before you start. My advice: just get started. Don't wait until the stars align; you are much better off getting the first meeting or summit underway and starting the conversation with the identified individuals. Just as content and metadata will evolve, so will your governance team and your processes. Start from a simple governance model, either hierarchical like the pyramid below, or flatter, represented by a continuous circle; both can include a few key roles such as a Lead, Business Stakeholders, and/or a System Admin who can enact changes in a metadata management tool or in an integrated system. A good practice is to implement checkpoints into your governance process or hold retrospectives after the first few meetings or changes to identify what worked well and where you can improve your governance plan.

An example of a hierarchical taxonomy governance model with roles such as Taxonomy Governance Lead, System Administrator, and Market Leads. 

2. Keep it simple.

Don't over-engineer the governance processes to the point where nothing can be completed in a timely or efficient manner. While it is true that organizations vary in governance styles and some may benefit from more centralized governance, keep in mind the time and cost of making decisions and changes to your metadata model. If your governance process requires that every minor change go through two rounds of approval, one at the business level and one at the enterprise level, remember that this will increase the time it takes to implement a change both in the central taxonomy management tool and in any consuming systems. The longer a decision-making process takes, the more likely your taxonomy will stall, governance processes will be abandoned, or the taxonomy will never reach full adoption. Where possible, limit lengthy decision trees and the need for each suggestion to wait for a quarterly or semi-annual meeting. One way to do this is to outline what counts as a major change and what counts as a minor change, then customize the approval or change process for each category. This not only speeds the process for minor changes, but also improves the speed to implementation for major changes by ensuring you have only major changes on each quarterly agenda, resulting in fewer agenda items and less carryover to the next meeting (a sketch of how this routing could be automated follows the table below).

Major Changes (addressed by the Governance Committee or a larger working group during regularly scheduled meetings):
  • Adding a Metadata Field
  • Adding an Entity
  • Eliminating a Relationship

Minor Changes (addressed by the Governance Manager ad hoc):
  • Adding a Term
  • Adding Synonyms
  • Adding Attributes
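The sketch below illustrates how that major/minor categorization could route change requests to the appropriate approval path. The change types mirror the examples above, while the routing rules and function name are hypothetical.

```python
# Hypothetical illustration: routing taxonomy/ontology change requests to the
# right approval path based on a major/minor classification.

MAJOR_CHANGES = {"add_metadata_field", "add_entity", "eliminate_relationship"}
MINOR_CHANGES = {"add_term", "add_synonym", "add_attribute"}

def route_change_request(change_type: str) -> str:
    """Return who handles the request and on what cadence."""
    if change_type in MAJOR_CHANGES:
        return "Governance Committee - reviewed at the next scheduled meeting"
    if change_type in MINOR_CHANGES:
        return "Governance Manager - handled ad hoc"
    return "Unclassified - triage with the Governance Lead first"

print(route_change_request("add_synonym"))         # minor: handled ad hoc
print(route_change_request("add_metadata_field"))  # major: committee agenda
```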

3. Dedicate ownership & communicate regularly.

Both the governance process and the metadata itself need to be owned. By owned, we mean having dedicated people to support the ongoing maintenance of the taxonomy and its governance processes. This dedicated ownership does not have to be a full-time role for one individual; it may require only a few hours a month once implemented. Some of the most successful ownership examples came from organizations where ownership rotated semi-annually and the owner recognized the importance of their role without feeling overburdened.

Using a tool like RACI can be a great option for agile metadata governance teams. RACI stands for Responsible, Accountable, Consulted, and Informed, and helps a team outline who is responsible for doing the work, who is ultimately accountable, who needs to be consulted and when, and, importantly, who should be informed of changes and how. For example, a major change to a taxonomy (the addition of a new metadata field like Content Type) would require that the Taxonomy Manager be Responsible, perhaps the leader of the taxonomy initiative is Accountable, subject matter experts (SMEs) in content lifecycle and tagging are Consulted, and every consumer of the taxonomy or content may be Informed.

An image outlining the four RACI roles: Responsible, Accountable, Consulted, and Informed.
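The following is a minimal sketch of how the RACI assignment described above might be captured in a structured, queryable form; the role holders are hypothetical.

```python
# Hypothetical illustration: a RACI assignment for the "add a new metadata
# field" change discussed above. Role names and assignments are invented.

raci_new_metadata_field = {
    "Responsible": ["Taxonomy Manager"],
    "Accountable": ["Taxonomy Initiative Lead"],
    "Consulted":   ["Content Lifecycle SMEs", "Tagging SMEs"],
    "Informed":    ["Taxonomy Consumers", "Content Authors"],
}

def who(role: str) -> list[str]:
    """Look up which people or groups hold a given RACI role for this change."""
    return raci_new_metadata_field.get(role, [])

print(who("Consulted"))  # ['Content Lifecycle SMEs', 'Tagging SMEs']
```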

Communicating with end users, consumers, and stakeholders before, during, and after governance meetings is one of the most important ways to keep people engaged in the life and maintenance of a taxonomy, ontology, or other metadata models.

4. Automate where possible.

Leveraging technology to support metadata governance can streamline efforts as metadata initiatives scale and grow. Technology empowers the governance team to rapidly scale processes to new systems while maintaining the integrity of the metadata model. Governance plans that are wired into technology, seamless to the end user, succeed. In other words, the most effective governance doesn’t have to be learned, or followed, it just happens when using the system, because that’s how the system was designed.

Benefits of leveraging technology and automation include the abilities to:

  • Manage the metadata in a centralized location versus manually tracking changes via spreadsheets or various databases.
  • Enforce access controls on changes to the metadata model and implement workflows to standardize how the model matures.
  • Standardize how the various pieces of metadata are consumed by systems to ensure accurate representation and quality enterprise-wide.

A few ways we have seen automation enable and improve the longevity of metadata models and their governance plans include: leveraging system flags to regularly clean up and archive or remove content, or to identify metadata that is not being used; auto-categorizing content and suggesting metadata based on context; and providing forms and workflows that let users submit suggestions directly to the metadata governance review from within their daily tasks or systems. Mature metadata governance is supported and enabled by technology, reducing the manual burden on the governance team.
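As one small illustration of the auto-categorization idea, the sketch below suggests taxonomy terms for draft content via naive keyword matching. Real auto-classification tools rely on rules engines, corpus statistics, or machine learning; the taxonomy and text here are invented.

```python
# Hypothetical illustration: suggesting taxonomy tags by naive keyword matching.
# Production auto-classification uses richer techniques; this sketch only shows
# where automated suggestions fit in a governance workflow.

TAXONOMY_KEYWORDS = {
    "Metadata Governance": ["governance", "metadata", "stewardship"],
    "Taxonomy Design": ["taxonomy", "term", "synonym", "hierarchy"],
    "Search": ["search", "findability", "query"],
}

def suggest_tags(text: str) -> list[str]:
    """Return taxonomy terms whose keywords appear in the text."""
    lowered = text.lower()
    return [
        term
        for term, keywords in TAXONOMY_KEYWORDS.items()
        if any(keyword in lowered for keyword in keywords)
    ]

draft = "This guide covers taxonomy governance and how synonyms improve search."
print(suggest_tags(draft))
# ['Metadata Governance', 'Taxonomy Design', 'Search']
```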

In Closing

Successful metadata governance saves rework and cost by preventing models from going stale and needing to be redesigned. If you want help designing a customized, best-practice metadata governance plan for your organization, or implementing a governance framework using the practices above, contact us.
