Defining Governance and Operating Models for AI Readiness of Knowledge Assets
https://enterprise-knowledge.com/defining-governance-and-operating-models-for-ai-readiness-of-knowledge-assets/
Enterprise Knowledge, Wed, 08 Oct 2025

Artificial intelligence (AI) solutions continue to capture both the attention and the budgets of many organizations. As we have previously explained, a critical factor to the success of your organization’s AI initiatives is the readiness of your content, data, and other knowledge assets. When correctly executed, this preparation will ensure your knowledge assets are of the appropriate quality and semantic structure for AI solutions to leverage with context and inference, while identifying and exposing only the appropriate assets to the right people through entitlements.

This, of course, is an ongoing challenge rather than a moment-in-time initiative. To ensure the important work you’ve done to get your content, data, and other assets AI-ready is not lost, you need governance as well as an operating model to guide it. Indeed, well before any AI readiness initiative begins, governance and the operating model behind it must be top of mind.

Governance is not a new term within the field. Historically, we’ve identified four core components to governance in the context of content or data:

  • Business Case and Measurable Success Criteria: Defining the value of the solution and the governance model itself, as well as what success looks like for both.
  • Roles and Responsibilities: Defining the individuals and groups necessary for governance, as well as the specific authorities and expectations of their roles.
  • Policies and Procedures: Detailing the timelines, steps, definitions, and actions expected of each associated role.
  • Communications and Training: Laying out the approach to two-way communications between the associated governance roles/groups and the various stakeholders.

These traditional components of governance all have held up, tried and true, over the quarter-century since we first defined them. In the context of AI, however, it is important to go deeper and consider the unique aspects that artificial intelligence brings into the conversation. Virtually every expert in the field agrees that AI governance should be a priority for any organization, but that must be detailed further in order to be meaningful.

In the context of AI readiness for knowledge assets, we focus AI governance, and more broadly its supporting operating model, on five key elements for success:

  • Coordination and Enablement Over Execution
  • Connection Instead of Migration
  • Filling Gaps to Address the Unanswerable Questions
  • Acting on “Hallucinations”
  • Embedding Automation (Where It Makes Sense)

There is, of course, more to AI governance than these five elements, but in the context of AI readiness for knowledge assets, our experience shows that these are the areas where organizations should be focusing and shifting away from traditional models. 

1) Coordination and Enablement Over Execution

In traditional governance models (i.e., content governance, data governance, etc.), most of the work was done in the context of a single system. Content would live in a content management system and have a content governance model. Data would live in a data management solution and have a data governance model. The shift is that today’s AI governance solutions shouldn’t care what types of assets you have or where they are housed. This presents an amazing opportunity to remove artificial silos within an organization, but it also brings a marked challenge.

If you were previously defining a content governance model, you most likely possessed some level of control or ownership over your content and document management systems. Likewise, if you were in charge of data governance, you likely “own” some or all of the major data solutions, like master data management or a data warehouse, within your organization. With AI, however, an enormous benefit of a correctly architected enterprise AI solution that leverages a semantic layer is that it can draw on source systems you don’t own. The systems housing the content, data, and other knowledge assets are likely, at least in part, managed by other parts of your organization. In other words, in an AI world, you have less control over the sources of the knowledge assets, and thereby over the knowledge assets themselves. This may well change as organizations evolve in the “Age of AI,” but for now, the role and responsibility for AI governance becomes more about coordination and less about execution or enforcement.

In practice, this means an AI Governance for Knowledge Asset Readiness group must coordinate with the owners of the various source systems for knowledge assets, providing additive guidance to define what it means to have AI-ready assets as well as training and communications to enable and engage system and asset owners to understand what they must do to have their content, data, and other assets included within the AI models. The word “must” in the previous sentence is purposeful. You alone may not possess the authority of an information system owner to define standards for their assets, but you should have the authority to choose not to include those assets within the enterprise AI solution set.

How do you apply that authority? As the lines continue to blur between the purview of KM, Data, and AI teams, this AI Governance for Knowledge Asset Readiness group should comprise representatives from each of these once siloed teams to co-own outcomes as new AI use cases, features, and capabilities are developed. The AI governance group should be responsible for delineating key interaction points and expected outcomes across teams and business functions to build alignment, facilitate planning and coordination, and establish expectations for business and technical stakeholders alike as AI solutions evolve. Further, this group should define what it means (and what is required) for an asset to be AI-ready. We cover this in detail in previous articles, but in short, this boils down to semantic structure, quality, and entitlements as the three core pillars to AI readiness for knowledge assets. 
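To make the three pillars concrete, the governance group's "must" can be expressed as a simple gate that an asset either passes or fails before inclusion in the enterprise AI solution set. The sketch below is a hypothetical illustration, not EK's actual tooling; the field names and the one-year review threshold are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeAsset:
    """Hypothetical stand-in for an asset's readiness metadata."""
    name: str
    taxonomy_tags: list = field(default_factory=list)   # semantic structure
    days_since_review: int = 9999                       # quality proxy
    entitlements: set = field(default_factory=set)      # who may see it

def readiness_gaps(asset: KnowledgeAsset, max_age_days: int = 365) -> list:
    """Return the readiness pillars an asset fails; an empty list means AI-ready."""
    gaps = []
    if not asset.taxonomy_tags:
        gaps.append("semantic structure")
    if asset.days_since_review > max_age_days:
        gaps.append("quality")
    if not asset.entitlements:
        gaps.append("entitlements")
    return gaps
```

An asset with no tags, no recent review, and no entitlements fails all three checks, giving system owners an explicit, actionable list of what to fix rather than a simple rejection.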

2) Connection Instead of Migration

The idea of connections over migration aligns with the previous point. Past monolithic efforts in your organization would commonly have included massive migrations and consolidations of systems and solutions. The roadmaps of past MDMs, data warehouses, and enterprise content management initiatives are littered with failed migrations. Again, part of the value of an enterprise AI initiative that leverages a semantic layer, or at least a knowledge graph, is that you don’t need to absorb the cost, complexity, and probable failure of a massive migration. 

Instead, the role of the AI Governance for Knowledge Asset Readiness group is one of connections. Once the group has set the expectation for AI-ready knowledge assets, the next step is to ensure the systems that house those assets are connected and available, ready for the enterprise AI solutions to ingest and understand them. This can be a highly iterative process, not to be rushed, as the integrity of the assets ingested by AI is more important than their depth. Said differently, you have few chances to deliver wrong answers: end users will quickly lose trust in a solution that returns information they know to be incorrect, but if they receive an incomplete answer instead, they are more likely to flag it and continue to engage. The role of this AI governance group is to ensure the right systems and their assets are reliably available to the AI solution(s) at the right time, after your knowledge assets have passed through the appropriate requirements.

3) Filling Gaps to Address the Unanswerable Questions

As AI solutions are deployed, AI governance shifts from proactive to reactive work, and this shift carries a great opportunity that bears particular focus. In the history of knowledge management, and more broadly the fields of content management, data management, and information management, there’s always been a creeping concern that an organization “doesn’t know what it doesn’t know.” What are the gaps in knowledge? What are the organizational blind spots? These questions have been nearly impossible to answer at the enterprise level. However, with enterprise-level AI solutions implemented, this awareness suddenly becomes possible.

Even before deploying AI solutions, a well-designed semantic layer can help pinpoint organizational knowledge gaps by identifying taxonomy elements with few or no tagged knowledge assets. This potential is magnified once the AI solution is fully deployed. Today’s mature AI solutions are “smart” enough to know when they can’t answer a question and to surface that unanswerable question to the AI governance group. Imagine possessing the organizational intelligence to know what your colleagues are seeking to understand, with insight into what they are trying to learn or answer but currently cannot.

In this way, once an AI solution is deployed, the primary role of the AI governance group should be to diagnose and then respond to these automatically identified knowledge gaps, using their standards to fill them. It may be that the information does, in fact, exist within the enterprise, but the AI solution wasn’t connected to those knowledge assets. Alternatively, it may be that the right semantic structure wasn’t applied to the assets, resulting in a missed connection and a false gap reported by the AI. However, it may also be that the answer to the “unanswerable” question only exists as tacit knowledge in the heads of the organization’s experts, or doesn’t exist at all. Closing that loop is the truest value of the field of knowledge management, and it has never been so achievable.
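The diagnosis described above (not connected, missing semantics, or a true gap) lends itself to a simple triage step over the log of unanswerable questions. The following is a minimal sketch under assumed inputs: each logged question carries a topic label, and the governance group maintains sets of connected and semantically tagged topics.

```python
def triage_gap(topic, connected_topics, tagged_topics):
    """Classify one unanswerable question's topic into a likely root cause
    (hypothetical categories mirroring the diagnosis in the text)."""
    if topic not in connected_topics:
        return "source not connected"
    if topic not in tagged_topics:
        return "missing semantic structure"
    return "true knowledge gap"

def triage_report(unanswered, connected_topics, tagged_topics):
    """Group logged unanswerable questions by diagnosed cause so the
    governance group can act on each bucket differently."""
    report = {}
    for question, topic in unanswered:
        cause = triage_gap(topic, connected_topics, tagged_topics)
        report.setdefault(cause, []).append(question)
    return report
```

Each bucket implies a different remediation: connect the source system, apply the missing semantic structure, or capture tacit knowledge from experts.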

4) Acting on “Hallucinations”

Aligned with the idea of filling gaps, a similar role for the AI governance group is to address hallucinations, or failures of AI to deliver an accurate, consistent, and complete “answer.” For organizations attempting to implement enterprise AI, a hallucination is little more than a cute word for an error, and should be treated as such by the AI governance group. These errors have many causes, ranging from poor-quality (i.e., wrong, outdated, near-duplicate, or conflicting) knowledge assets to insufficient semantic structure (e.g., taxonomy, ontology, or a business glossary) to poor logic built into the model itself. Any of these issues should be met with immediate action. Your organization’s end users will quickly lose trust in an AI solution that delivers inaccurate results. Your governance model and associated organizational structure must be equipped to act quickly: first, to leverage communications and feedback channels so that end users tell you when they believe something is inaccurate or incomplete, and then to diagnose and address it.

As a note, for the most mature organizations, this action won’t be entirely reactive. In these organizations, subject matter experts are involved in perpetuity, especially immediately before and after enterprise AI deployment, to hunt for errors in these systems. You can think of this governance function as the “Hallucination Killers” within your organization; it is likely to be one of the most critical functions as AI continues to expand.
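Because trust erodes fastest on outright wrong answers, a feedback queue can be ordered by severity so the "Hallucination Killers" work the most damaging reports first. This is an illustrative sketch; the severity labels and ranking are assumptions, not a standard taxonomy of AI errors.

```python
import heapq

# Assumed severity ordering: wrong answers damage trust most, then
# outdated answers, then incomplete ones (which users tend to forgive).
SEVERITY = {"wrong answer": 0, "outdated answer": 1, "incomplete answer": 2}

def build_triage_queue(reports):
    """Order user feedback reports (kind, description) so the most
    trust-damaging errors are diagnosed first."""
    heap = [(SEVERITY[kind], i, text) for i, (kind, text) in enumerate(reports)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

The tie-breaking index keeps reports of equal severity in the order users filed them, which preserves a first-in, first-out feel within each severity band.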

5) Embedding Automation (Where It Makes Sense)

Finally, one of the most important roles of an AI governance group will be to use AI to make AI better. Almost everything we’ve described above can be automated. AI can and should be used to automate identification of knowledge gaps as well as solve the issue of those knowledge gaps by pinpointing organizational subject matter experts and targeting them to deliver their learning and experience at the right moments. It can also play a major role in helping to apply the appropriate semantic structure to knowledge, through tagging of taxonomy terms as metadata or identification of potential terms for inclusion in a business glossary. Central to all of this automation, however, is to ensure the ‘human is in the loop’, or rather, the AI governance group plays an advisory and oversight role throughout these automations, to ensure the design doesn’t fall out of alignment. This element further facilitates AI governance coordination across the organization by supporting stakeholders and knowledge asset stewards through technical enablement.
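One concrete shape the 'human is in the loop' principle takes is confidence-based routing: automated tag suggestions above a threshold are applied directly, while the rest go to a steward's review queue. The sketch below is a hypothetical illustration; the 0.85 threshold and the (asset, tag, confidence) tuple shape are assumptions.

```python
def route_tag_suggestions(suggestions, threshold=0.85):
    """Split auto-tagging output into tags applied automatically and tags
    routed to a human reviewer, keeping a human in the loop for
    low-confidence calls."""
    auto_applied, needs_review = [], []
    for asset, tag, confidence in suggestions:
        bucket = auto_applied if confidence >= threshold else needs_review
        bucket.append((asset, tag))
    return auto_applied, needs_review
```

Tuning the threshold is itself a governance decision: lowering it increases automation but shifts more of the oversight burden onto spot checks after the fact.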

All of this presents a world of possibility. Governance was historically one of the drier and more esoteric concepts within the field, often where good projects went bad. We have the opportunity to do governance better by leveraging AI in the areas where humans historically fell short, while maintaining the important role of human experts with the right authority to ensure organizational alignment and value.

If your AI efforts aren’t yet yielding the results you expected, or you’re seeking to get things started right from the beginning, contact EK to help you.

How to Ensure Your Data is AI Ready
https://enterprise-knowledge.com/how-to-ensure-your-data-is-ai-ready/
Enterprise Knowledge, Wed, 01 Oct 2025

Artificial intelligence has the potential to be a game-changer for organizations looking to empower their employees with data at every level. However, as business leaders look to initiate projects that incorporate data as part of their AI solutions, they frequently ask us, “How do I ensure my organization’s data is ready for AI?” In the first blog in this series, we shared ways to ensure knowledge assets are ready for AI. In this follow-on article, we address the unique challenges that come with connecting data, one of the most varied types of knowledge assets, to AI. Data is pervasive in any organization and can serve as the key feeder for many AI use cases, making it a high-priority knowledge asset to ready for AI.

The question of data AI readiness stems from the very real concern that when AI is pointed at data that isn’t correct or that lacks the right context, organizations face risks to their reputation, their revenue, and their customers’ privacy. Data is often presented in formats that require transformation, lacks context, and frequently contains duplicates or near-duplicates with little explanation of their meaning. As a result, although data may seem already structured and ready for machine consumption, it requires greater care than other forms of knowledge assets to become part of a trusted AI solution.

This blog focuses on the key actions an organization needs to perform to ensure their data is ready to be consumed by AI. By following the steps below, an organization can use AI-ready data to develop end-products that are trustworthy, reliable, and transparent in their decision making.

1) Understand What You Mean by “Data” (Data Asset and Scope Definition)

Data is more than what we typically picture it as. Broadly, data is any raw information that can be interpreted to garner meaning or insights on a certain topic. While the typical understanding of data revolves around relational databases and tables galore, often with esoteric metrics filling their rows and columns, data takes a number of forms, which can often be surprising. In terms of format, while data can be in traditional SQL databases and formats, NoSQL data is growing in usage, in forms ranging from key-value pairs to JSON documents to graph databases. Plain, unstructured text such as emails, social media posts, and policy documents are also forms of data, but traditionally not included within the enterprise definition. Finally, data comes from myriad sources—from live machine data on a manufacturing floor to the same manufacturing plant’s Human Resources Management System (HRMS). Data can also be categorized by its business role: operational data that drives day-to-day processes, transactional data that records business exchanges, and even purchased or third-party data brought in to enrich internal datasets. Increasingly, organizations treat data itself as a product, packaged and maintained with the same rigor as software, and rely on data metrics to measure quality, performance, and impact of business assets.

All these forms and types of data meet the definition of a knowledge asset: information and expertise that an organization can use to create value, and that can be connected with other knowledge assets. No matter the format or repository type, ingested, AI-ready data can form the backbone of a valuable AI solution by allowing business-specific questions to be answered reliably and explainably. This raises a question for organizational decision makers: what within our data landscape needs to be included in our AI solution? Starting from your definition of what data is, think about what to add iteratively. What systems contain the highest-priority data? What datasets would provide the most value to end users? Select high-value data in easy-to-transform formats that allow end users to see the value in your solution. This can garner excitement across departments and help support future efforts to introduce additional data into your AI environment.

2) Ensure Quality (Data Cleanup)

The majority of organizations we’ve worked with have experienced issues with not knowing what data they have or what it’s intended to be used for. This is especially common in large enterprise settings, as the sheer scale and variety of data can breed an environment where data becomes lost or buried, or degrades in quality. This sprawl occurs alongside another common problem, where multiple versions of the same dataset exist, with slight variations in the data they contain. Furthermore, the issue is exacerbated by yet another frequent challenge—a lack of business context. When data lacks context, neither humans nor AI can reliably determine the most up-to-date version, the assumptions and/or conditions in place when said data was collected, or even if the data warrants retention.

Once AI is introduced, these potential issues are only compounded. If an AI system is provided data that is out of date or of low quality, the model will ultimately fail to provide reliable answers to user queries. When data is collected for a specific purpose, such as identifying product preferences across customer segments, but not labeled for said use, and an AI model leverages that data for a completely separate purpose, such as dynamic pricing models, harmful biases can be introduced into the results that negatively impact both the customer and the organization.

Thankfully, there are several methods available to organizations today that allow them to inventory and restructure their data to fix these issues. Examples include data dictionaries, master data (MDM), and reference data, which help standardize data across an organization and point to what is available at large. Additionally, data catalogs are a proven tool for identifying what data exists within an organization, and they include versioning and metadata features that can help label data with its version and context. To help populate catalogs and data dictionaries and to create master and reference data, performing a data audit alongside stewards can rediscover lost context and label data for better understanding by humans and machines alike. Another way to deduplicate, disambiguate, and contextualize data assets is through lineage, a feature included in many metadata management tools that stores and displays metadata about source systems, creation and modification dates, and file contributors. Using this lineage metadata, data stewards can select which version of a data asset is the most current or relevant for a specific use case and expose only that asset to AI. These methods to ensure data quality and facilitate data stewardship can feed into a larger governance framework. Finally, at a larger scale, a semantic layer can unify data and its meaning for easier ingestion into an AI solution, assist with deduplication efforts, and break down silos between different data users and consumers of knowledge assets at large.

Separately, for the elimination of duplicate/near-duplicate data, entity resolution can autonomously parse the content of data assets, deduplicate them, and point AI to the most relevant, recent, or reliable data asset to answer a question. 
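The lineage-driven version selection described above can be reduced to a small, mechanical step. The sketch below is a simplified illustration under assumed record fields (`asset_id`, `modified`); real lineage metadata from a catalog or metadata management tool is far richer.

```python
def resolve_current_versions(records):
    """Use lineage metadata to keep only the most recently modified
    record per logical asset, so AI is pointed at a single
    authoritative version instead of near-duplicates."""
    current = {}
    for rec in records:
        held = current.get(rec["asset_id"])
        # ISO-8601 date strings compare correctly as plain strings.
        if held is None or rec["modified"] > held["modified"]:
            current[rec["asset_id"]] = rec
    return list(current.values())
```

In practice a steward would confirm the winner per use case, since "most recent" is not always "most relevant"; the point is that lineage makes the candidate set explicit.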

3) Fill Gaps (Data Creation or Acquisition)

With your organization’s data inventoried and priorities identified, it’s time to identify what gaps exist in your data landscape in light of the business questions and challenges you are looking to address. First, ask use case-based questions: based on your identified use cases, what data that your organization doesn’t already possess would an AI model need to answer topical questions?

At a higher level, gaps in use cases for your AI solution will also exist. To drive use case creation forward, consider the use of a data model, entity relationship diagram (ERD), or ontology to serve as the conceptual map on which all organizational data exists. With a complete data inventory, an ontology can help outline the process by which AI solutions would answer questions at a high level, thanks to being both machine and human-readable. By traversing the ontology or data model, you can design user journeys and create questions that form the basis of novel use cases.
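Traversing the data model in this way can be sketched as a reachability check over a toy ontology: starting from the concepts your existing use cases touch, anything unreached is a candidate gap or missing relationship. The adjacency-dict representation and concept names below are illustrative assumptions, not a real ontology format.

```python
def unreachable_concepts(ontology, entry_points):
    """Walk a toy ontology (adjacency dict of concept -> related concepts)
    from the entry points of known use cases; concepts never reached are
    candidates for new use cases or missing relationships."""
    seen = set(entry_points)
    frontier = list(entry_points)
    while frontier:
        concept = frontier.pop()
        for related in ontology.get(concept, []):
            if related not in seen:
                seen.add(related)
                frontier.append(related)
    return set(ontology) - seen
```

A production ontology would be traversed with a graph store and a query language such as SPARQL, but the underlying idea of mapping covered versus uncovered concepts is the same.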

Often, gaps are identified that require knowledge assets outside of data to fill. A data model or ontology can help identify related assets, as they function independently of their asset type. Moreover, standardized metadata across knowledge assets and asset types can enrich assets, link them to one another, and provide insights previously not possible. When instantiated in a solution alongside a knowledge graph, this forms a semantic layer where data assets, such as data products or metrics, gain context and maturity based on related knowledge assets. We were able to enhance the performance of a large retail chain’s analytics team through such an approach utilizing a semantic layer.

To fill these gaps, organizations can collect or create more data, as well as purchase publicly available datasets or incorporate open-source ones (build vs. buy). Another common method of filling identified organizational gaps is the creation of content (and other non-data knowledge assets) to close a gap by extracting tacit organizational knowledge. This is a method that more chief data officers and chief data and AI officers (CDOs/CDAOs) are employing as their roles expand and relying on structured data alone to gather insights and solve problems is no longer feasible.

As a whole, this process will drive future knowledge asset collection, creation, and procurement efforts, and consequently it is a crucial step in ensuring data at large is AI-ready. If no such data exists for AI to rely on for certain use cases, users will be presented with unreliable, hallucination-based answers or, in a best-case scenario, no answer at all. Yet as part of a solid governance plan as mentioned earlier, continuing the gap analysis process after solution deployment can empower organizations to continually identify and close knowledge gaps, continuously improving data AI readiness and AI solution maturity.

4) Add Structure and Context (Semantic Components)

A key component of making data AI-ready is structure: not structure within the data per se (e.g., JSON, SQL, Excel), but structure relating the data to use cases. In our previous blog, ‘structure’ referred to meaning added to knowledge assets, which could cause confusion here. In this section, ‘structure’ refers to the added, machine-readable context a semantic model gives data assets, rather than the format of the data assets themselves, since data loses meaning once taken out of the structure or format in which it is stored (as happens when it is retrieved by AI).

Although we touched on one type of semantic model in the previous step, there are three semantic models that work together to ensure data AI readiness: business glossaries, taxonomies, and ontologies. Adding semantics to data for the purpose of getting it ready for AI allows an organization to help users understand the meaning of the data they’re working with. Together, taxonomies, ontologies, and business glossaries imbue data with the context needed for an AI model to fully grasp the data’s meaning and make optimal use of it to answer user queries. 

Let’s dive into the business glossary first. Business glossaries define business context-specific terms that are often found in datasets, in a plaintext, easy-to-understand manner. For AI models, which are often trained on general-purpose corpora, these glossary terms can further assist in the selection of the correct data needed to answer a user query.

Taxonomies group knowledge assets into broader and narrower categories, providing a level of hierarchical organization not available with traditional business glossaries. This can help data AI readiness in manifold ways. By standardizing terminology (e.g., referring to “automobile,” “car,” and “vehicle” all as “Vehicles” instead of separately), data from multiple sources can be integrated more seamlessly, disambiguated, and deduplicated for clearer understanding. 
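The standardization step in the vehicle example can be sketched as a lookup against a preferred-label map derived from a taxonomy's synonyms. The map below is a hypothetical fragment for illustration; real taxonomies are managed in dedicated tools and expressed in standards such as SKOS.

```python
# Hypothetical preferred-label map derived from a taxonomy's synonym ring
PREFERRED_LABELS = {
    "automobile": "Vehicles",
    "car": "Vehicles",
    "vehicle": "Vehicles",
}

def normalize_terms(terms, preferred=PREFERRED_LABELS):
    """Map source terms onto preferred taxonomy labels; unknown terms
    pass through unchanged for later steward review."""
    return [preferred.get(term.lower(), term) for term in terms]
```

Terms that fall through unchanged are themselves useful: they are candidates for addition to the taxonomy, feeding the gap-filling process described earlier.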

Finally, ontologies provide the true foundation for linking related datasets to one another and allow for the definition of custom relationships between knowledge assets. When combining ontology with AI, organizations can perform inferences as a way to capture explicit data about what’s only implied by individual datasets. This shows the power of semantics at work, and demonstrates that good, AI-ready data enriched with metadata can provide insights at the same level and accuracy as a human. 

Organizations that have not pursued developing semantics for knowledge assets before can leverage traditional semantic capture methods, such as business glossaries. As organizations mature in their curation of knowledge assets, they are able to leverage the definitions developed as part of these glossaries and dictionaries, and begin to structure that information using more advanced modeling techniques, like taxonomy and ontology development. When applied to data, these semantic models make data more understandable, both to end users and AI systems.

5) Semantic Model Application (Labeling and Tagging) 

The data management community has recently focused on the value of metadata and metadata-first architecture, and it is scrambling to catch up to the maturity displayed in the fields of content and knowledge management. By replicating methods found in content management systems and knowledge management platforms, data management professionals are retracing steps those fields have already taken. Currently, the data catalog is the primary platform where metadata is applied and stored for data assets.

To aggregate metadata for your organization’s AI readiness efforts, it’s crucial to look to data stewards as the owners of, and primary contributors to, this effort. Through the process of labeling data by populating fields such as asset descriptions, owner, assumptions made upon collection, and purposes, data stewards help to drive their data towards AI readiness while making tacit knowledge explicit and available to all. Additionally, metadata application against a semantic model (especially taxonomies and ontologies) contextualizes assets in business context and connects related assets to one another, further enriching AI-generated responses to user prompts. While there are methods to apply metadata to assets without the need for as much manual effort (such as auto-classification, which excels for content-based knowledge assets), structured data usually dictates the need for human subject matter experts to ensure accurate classification. 
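A lightweight completeness check can tell stewards exactly which of these fields an asset still needs before it counts toward AI readiness. The field names below mirror the examples in this section but are illustrative assumptions, not a metadata standard.

```python
# Illustrative required stewardship fields (not a standard)
REQUIRED_STEWARD_FIELDS = ("description", "owner", "collection_assumptions", "purpose")

def missing_steward_fields(metadata):
    """List required stewardship fields that are absent or empty, giving
    stewards a concrete to-do list per asset."""
    return [f for f in REQUIRED_STEWARD_FIELDS if not metadata.get(f)]
```

Run across a catalog export, a check like this turns "improve metadata quality" into a measurable backlog per steward and per system.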

With data catalogs and recent investments in metadata repositories, however, we’ve noticed a trend that we expect will continue to grow and spread across organizations in the near future. Data system owners are increasingly keen to manage metadata and catalog their assets within the same systems where data is stored and used, adopting features that were previously exclusive to a data catalog. Major software providers are strategically acquiring or building semantic capabilities for this purpose, as underscored by the recent acquisition of multiple data management platforms by the creators of larger, flagship software products. With the data catalog being adapted from a full, standalone application that stores and presents metadata into a component of a larger application that functions as a metadata store, the metadata repository is beginning to take hold as the predominant metadata management platform.

6) Address Access and Security (Unified Entitlements)

Applying semantic metadata as described above helps make data findable across an organization and contextualized with relevant datasets, but this needs to be balanced against security and entitlements considerations. Without regard for data security and privacy, AI systems risk bringing in data they shouldn’t have access to because access entitlements are mislabeled or missing, leading to leaks of sensitive information.

A common example of when this can occur is user re-identification. Data points that independently seem innocuous, when combined by an AI system, can leak information about customers or users of an organization. With as few as 15 data points, information that was originally collected anonymously can be combined to identify an individual. Data elements like ZIP code or date of birth would not be damaging on their own, but when combined, they can expose information about a user that should have been kept private. These concerns become especially critical in industries with small population sizes for their datasets, such as rare disease treatment in the healthcare industry.
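One standard way to quantify this re-identification risk is k-anonymity: group the rows by their quasi-identifier columns and take the smallest group size. The sketch below is a simplified illustration with assumed column names; production privacy tooling adds generalization and suppression on top of this measurement.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Smallest group size when rows are grouped by the quasi-identifier
    columns; k == 1 means at least one person is uniquely re-identifiable
    from fields that look innocuous on their own."""
    groups = Counter(
        tuple(row[col] for col in quasi_identifiers) for row in rows
    )
    return min(groups.values())
```

A governance team can run this check before exposing a dataset to AI: if k drops to 1 once ZIP code and date of birth are combined, the dataset is not AI-ready without further de-identification.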

EK’s unified entitlements work is focused on ensuring the right people and systems view the correct knowledge assets at the right time. This is accomplished through a holistic architectural approach with six key components. Components like a policy engine capture and enforce decisions about whether access to data should be granted, while components like a query federation layer ensure that only data the requester is allowed to retrieve is brought back from the appropriate sources.
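The interplay between a policy check and query federation can be sketched in a few lines: each source's results are trimmed to what the caller's roles entitle them to see before anything is returned. This is a toy illustration of the concept, not EK's architecture; the role-based check and field names are assumptions.

```python
def federated_results(sources, user_roles):
    """Sketch of a query-federation entitlement filter: results from each
    source are filtered by a simple role-overlap policy check before
    being merged and returned to the caller."""
    visible = []
    for source_name, assets in sources.items():
        for asset in assets:
            # Toy policy check: any overlapping role grants access.
            if set(user_roles) & set(asset["allowed_roles"]):
                visible.append((source_name, asset["name"]))
    return visible
```

Real policy engines evaluate far richer conditions (attributes, purpose, time), but the placement of the check matters most: filtering happens before results reach the AI solution, so disallowed data never enters a prompt or an answer.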

The components of unified entitlements can be combined with other technologies like dark data detection, where a program scrapes an organization’s data landscape for any unlabeled information that is potentially sensitive, so that neither human users nor AI solutions can access data that could result in compliance violations or reputational damage.

As a whole, data that exposes sensitive information to the wrong set of eyes is not AI-ready. Unified entitlements can form the layer of protection that ensures data AI readiness across the organization.

7) Maintain Quality While Iteratively Improving (Governance)

Governance serves a vital purpose in ensuring data assets become, and remain, AI-ready. With the introduction of AI to the enterprise, we are now seeing governance manifest itself beyond the data landscape alone. As AI governance begins to mature as a field of its own, it is taking on its own set of key roles and competencies and separating itself from data governance. 

While AI governance is meant to guide innovation and future iterations while ensuring compliance with both internal and external standards, data governance personnel are taking on the new responsibility of ensuring data is AI-ready based on requirements set by AI governance teams. Where dedicated AI governance personnel do not yet exist, data governance teams serve as a bridge in the interim. As such, your data governance staff should define a common model of AI-ready data assets and related standards (such as structure, recency, reliability, and context) for future reference.
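To illustrate what such a common model might look like once codified, the sketch below checks an asset against a few AI-readiness standards. The field names, thresholds, and criteria are entirely hypothetical; each organization's standards will differ:

```python
from datetime import date, timedelta

# Hypothetical standards a governance team might codify: assets must be
# recent, have an accountable owner, and carry semantic context tags.
MAX_AGE = timedelta(days=365)

def ai_ready(asset, today):
    """Return the list of standards this asset fails (empty list = AI-ready)."""
    failures = []
    if today - asset["last_updated"] > MAX_AGE:
        failures.append("recency")
    if not asset.get("owner"):
        failures.append("reliability: no accountable owner")
    if not asset.get("tags"):
        failures.append("context: no semantic tags")
    return failures

stale_asset = {"last_updated": date(2022, 1, 5), "owner": None, "tags": ["policy"]}
print(ai_ready(stale_asset, date(2025, 1, 5)))
```

Codifying the standards as executable checks, rather than a document alone, lets governance teams run them continuously as assets change.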

Both data and AI governance personnel hold the responsibility of future-proofing enterprise AI solutions, ensuring they continue to align with the above steps and meet requirements. Specific to data governance, organizations should ask themselves, “How do we update our data governance plan to ensure all the steps remain applicable in perpetuity?” In parallel, AI governance should revolve around filling gaps in the solution’s capabilities. Once AI solutions launch to a production environment and user base, more gaps in each solution’s realm of expertise and capabilities will become apparent. As such, AI governance professionals need to stand up processes that use these gaps to continually identify new needs for knowledge assets, data or otherwise.

Conclusion

As we have explored throughout this blog, data is a varied and unique form of knowledge asset, with its own set of considerations to take into account when standing up an AI solution. Following the steps listed above as part of an iterative implementation process will ensure your data is AI-ready and an invaluable part of an AI-powered organization.

If you’re seeking help to ensure your data is AI-ready, contact us at info@enterprise-knowledge.com.

The post How to Ensure Your Data is AI Ready appeared first on Enterprise Knowledge.

]]>
Top Ways to Get Your Content and Data Ready for AI https://enterprise-knowledge.com/top-ways-to-get-your-content-and-data-ready-for-ai/ Mon, 15 Sep 2025 19:17:48 +0000 https://enterprise-knowledge.com/?p=25370 As artificial intelligence has quickly moved from science fiction, to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s … Continue reading

The post Top Ways to Get Your Content and Data Ready for AI appeared first on Enterprise Knowledge.

]]>
As artificial intelligence has quickly moved from science fiction, to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s new AI solutions at the “right” content and data is critical to AI success and adoption, and failing to do so can quickly derail your AI initiatives.

Though the world is enthralled with the myriad of public AI solutions, many organizations struggle to make the leap to reliable AI within their organizations. A recent MIT report, “The GenAI Divide,” reveals a concerning truth: despite significant investments in AI, 95% of organizations are not seeing any benefits from their AI investments. 

One of the core impediments to achieving AI within your own organization is poor-quality content and data. Without a proper foundation of high-quality content and data, any AI solution will be rife with “hallucinations” and errors. This exposes organizations to unacceptable risks, as AI tools may deliver incorrect or outdated information, leading to dangerous and costly outcomes. It is also why tools that perform well as demos fail to make the jump to production. Even the most advanced AI won’t deliver acceptable results if an organization has not prepared its content and data.

This blog outlines seven top ways to ensure your content and data are AI-ready. With the right preparation and investment, your organization can successfully implement the latest AI technologies and deliver trustworthy, complete results.

1) Understand What You Mean by “Content” and/or “Data” (Knowledge Asset Definition)

While it seems obvious, the first step to ensuring your content and data are AI-ready is to clearly define what “content” and “data” mean within your organization. Many organizations use these terms interchangeably, while others use one as a parent term of the other. This obviously leads to a great deal of confusion. 

Leveraging the traditional definitions, we define content as unstructured information (ranging from files and documents to blocks of intranet text), and data as structured information (namely the rows and columns in databases and other applications like Customer Relationship Management systems, People Management systems, and Product Information Management systems). You are wasting the potential of AI if you’re not seeking to apply your AI to both content and data, giving end users complete and comprehensive information. In fact, we encourage organizations to think even more broadly, going beyond just content and data to consider all the organizational assets that can be leveraged by AI.

We’ve coined the term knowledge assets to express this. Knowledge assets comprise all the information and expertise an organization can use to create value. This includes not only content and data, but also the expertise of employees, business processes, facilities, equipment, and products. This manner of thinking quickly breaks down artificial silos within organizations, getting you to consider your assets collectively, rather than by type. Moving forward in this article, we’ll use the term knowledge assets in lieu of content and data to reinforce this point. Put simply and directly, each of the below steps to getting your content and data AI-ready should be considered from an enterprise perspective of knowledge assets, so rather than discretely developing content governance and data governance, you should define a comprehensive approach to knowledge asset governance. This approach will not only help you achieve AI-readiness, it will also help your organization to remove silos and redundancies in order to maximize enterprise efficiency and alignment.


2) Ensure Quality (Asset Cleanup)

We’ve found that most organizations are maintaining approximately 60-80% more information than they should, and in many cases, may not even be aware of what they still have. At the high end, that means four out of five knowledge assets are old, outdated, duplicate, or near-duplicate.

There are many costs to this over-retention even before considering AI, including the administrative burden of maintaining this excess (along with the cost and environmental impact of unnecessary server storage), and the usability and findability cost to the organization’s end users when they must sift through obsolete knowledge assets.

The AI cost becomes even higher for several reasons. First, AI typically “white labels” the knowledge assets it finds. If a human were to find an old and outdated policy, they may recognize the old corporate branding on it, or note the date from several years ago on it, but when AI leverages the information within that knowledge asset and resurfaces it, it looks new and the contextual clues are lost.

Next, we have to consider the old adage of “garbage in, garbage out.” Incorrect knowledge assets fed to an AI tool will produce incorrect results, also known as hallucinations. While prompt engineering can be used to try to avoid these conflicts, and potentially even errors, the only surefire guarantee is to ensure the accuracy of the original knowledge assets, or at least the vast majority of them.

Many AI models also struggle with near-duplicate “knowledge assets,” unable to discern which version is trusted. Consider your organization’s version control issues, working documents, data modeled with different assumptions, and iterations of large deliverables and reports that are all currently stored. Knowledge assets may go through countless iterations, and most of the time, all of these versions are saved. When ingested by AI, multiple versions present potential confusion and conflict, especially when these versions didn’t simply build on each other but were edited to improve findings or recommendations. Each of these, in every case, is an opportunity for AI to fail your organization.
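To show how close near-duplicates can be, here is a minimal sketch using Python's standard library. The policy text is invented, and real deduplication pipelines use more scalable techniques (such as shingling or embedding similarity), but the principle is the same:

```python
from difflib import SequenceMatcher

# Two hypothetical versions of the same policy, edited to change one figure.
v1 = "Employees may expense travel up to $500 per trip with manager approval."
v2 = "Employees may expense travel up to $750 per trip with manager approval."

# SequenceMatcher.ratio() returns a similarity score between 0.0 and 1.0.
ratio = SequenceMatcher(None, v1, v2).ratio()
if ratio > 0.9:
    print(f"Near-duplicates (similarity {ratio:.2f}): flag for human review")
```

An AI system handed both versions has no reliable way to know which figure is current; flagging high-similarity pairs for human review before ingestion avoids the conflict entirely.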

Finally, this would also be the point at which you consider restructuring your assets for improved readability (both by humans and machines). This could include formatting (to lower cognitive lift and improve consistency) from a human perspective. For both humans and AI, this could also mean adding text and tags to better describe images and other non-text-based elements. From an AI perspective, in longer and more complex assets, proximity and order can have a negative impact on precision, so this could include restructuring documents to make them more linear, chronological, or topically aligned. This is not necessary or even important for all types of assets, but remains an important consideration especially for text-based and longer types of assets.


3) Fill Gaps (Tacit Knowledge Capture)

The next step to ensure AI readiness is to identify your gaps. At this point, you should be looking at your AI use cases and considering the questions you want AI to answer. In many cases, your current repositories of knowledge assets will not have all of the information necessary to answer those questions completely, especially in a structured, machine-readable format. This presents a risk in itself, especially if the AI solution is unaware that it lacks the complete range of knowledge assets necessary and portrays incomplete or limited answers as definitive.

Filling gaps in knowledge assets is extremely difficult. The first step is to identify what is missing. To invoke another old adage, organizations have long worried they “don’t know what they don’t know,” meaning they lack the organizational maturity to identify gaps in their own knowledge. This becomes a major challenge when proactively seeking to arm an AI solution with all the knowledge assets necessary to deliver complete and accurate answers. The good news, however, is that the process of getting knowledge assets AI-ready helps to identify gaps. In the next two sections, we cover semantic design and tagging. These steps, among others, can identify where there appear to be missing knowledge assets. In addition, given the iterative nature of designing and deploying AI solutions, the inability of AI to answer a question can trigger gap filling, as we cover later.

Of course, once you’ve identified the gaps, the real challenge begins, in that the organization must then generate new knowledge assets (or locate “hidden” assets) to fill those gaps. There are many techniques for this, ranging from tacit knowledge capture to content inventories, which collectively can help an organization move from AI to Knowledge Intelligence (KI).


4) Add Structure and Context (Semantic Components)

Once the knowledge assets have been cleansed and gaps have been filled, the next step in the process is to structure them so that they can be related to each other correctly, with the appropriate context and meaning. This requires the use of semantic components, specifically, taxonomies and ontologies. Taxonomies deliver meaning and structure, helping AI to understand queries from users, relate knowledge assets based on the relationships between the words and phrases used within them, and leverage context to properly interpret synonyms and other “close” terms. Taxonomies can also house glossaries that further define words and phrases that AI can leverage in the generation of results.

Though often confused or conflated with taxonomies, ontologies deliver a much more advanced type of knowledge organization, which is both complementary to taxonomies and unique. Ontologies focus on defining relationships between knowledge assets and the systems that house them, enabling AI to make inferences. For instance:

<Person> works at <Company>

<Zach Wahl> works at <Enterprise Knowledge>

<Company> is expert in <Topic>

<Enterprise Knowledge> is expert in <AI Readiness>

From this, a simple inference based on structured logic can be made, which is that the person who works at the company is an expert in the topic: Zach Wahl is an expert in AI Readiness. More detailed ontologies can quickly fuel more complex inferences, allowing an organization’s AI solutions to connect disparate knowledge assets within an organization. In this way, ontologies enable AI solutions to traverse knowledge assets, more accurately make “assumptions,” and deliver more complete and cohesive answers. 
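As a rough illustration of how such an inference might be computed, the sketch below encodes the triples above as tuples and applies the single rule by hand. Production systems would typically express this in an ontology language (such as OWL) and run it in a graph database rather than in raw Python:

```python
# The triples above, as (subject, predicate, object) tuples.
triples = {
    ("Zach Wahl", "works at", "Enterprise Knowledge"),
    ("Enterprise Knowledge", "is expert in", "AI Readiness"),
}

# Inference rule: if a person works at a company, and that company is
# expert in a topic, infer that the person is expert in the topic.
inferred = {
    (person, "is expert in", topic)
    for (person, rel1, company) in triples if rel1 == "works at"
    for (company2, rel2, topic) in triples
    if rel2 == "is expert in" and company2 == company
}
print(inferred)  # {('Zach Wahl', 'is expert in', 'AI Readiness')}
```

Adding more triples and more rules is all it takes to chain inferences across many assets, which is exactly how ontologies let AI traverse an organization's knowledge.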

Collectively, you can consider these semantic components as an organizational map of what it does, who does it, and how. Semantic components can show an AI how to get where you want it to go without getting lost or taking wrong turns.

5) Semantic Model Application (Tagging)

Of course, it is not sufficient simply to design the semantic components; you must complete the process by applying them to your knowledge assets. If the semantic components are the map, applying them as metadata is the GPS that allows you to use that map easily and intuitively. This step is commonly a stumbling block for organizations, and again is why we are discussing knowledge assets rather than discrete areas like content and data. To best achieve AI readiness, all of your knowledge assets, regardless of their state (structured, unstructured, semi-structured, etc.), must have consistent metadata applied to them.

When applied properly, this consistent metadata becomes an additional layer of meaning and context for AI to leverage in pursuit of complete and correct answers. With the latest updates to leading taxonomy and ontology management systems, the process of automatically applying metadata or storing relationships between knowledge assets in metadata graphs is vastly improved, though it still requires a human in the loop to ensure accuracy. Even so, what used to be a major hurdle in metadata application initiatives is now much simpler than it once was.
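At its simplest, auto-tagging matches taxonomy labels and synonyms against asset text. The sketch below shows the principle with a made-up two-term taxonomy; real auto-taggers use NLP models and, as noted, keep a human reviewer in the loop:

```python
# Hypothetical mini-taxonomy: preferred label -> set of synonyms/variants.
taxonomy = {
    "Artificial Intelligence": {"ai", "machine learning", "artificial intelligence"},
    "Governance": {"governance", "compliance", "policy"},
}

def auto_tag(text):
    """Tag an asset with every taxonomy term whose label or synonym appears."""
    lower = text.lower()
    return {term for term, synonyms in taxonomy.items()
            if any(s in lower for s in synonyms)}

print(auto_tag("Our compliance policy for machine learning systems"))
```

Because every asset is tagged from the same controlled vocabulary, AI solutions can relate a "compliance" document to a "governance" query without relying on exact word matches.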


6) Address Access and Security (Unified Entitlements)

What happens when you finally deliver what your organization has been seeking, and give it the ability to collectively and completely serve its end users the knowledge assets they’ve been looking for? If this step is skipped, the answer is calamity. One of the express value points of AI is that it can uncover hidden gems in knowledge assets, make connections humans typically can’t, and combine disparate sources to build new knowledge assets and new answers within them. This is incredibly exciting, but it also presents a massive organizational risk.

At present, many organizations have an incomplete, or frankly poor, model for entitlements: ensuring the right people see the right assets, and the wrong people do not. We consistently discover highly sensitive knowledge assets in various forms on organizational systems that should be secured but are not. Some of this takes the form of a discrete document, or a row of data in an application, which is surprisingly common but relatively easy to address. Even more of it is only visible when you take an enterprise view of the organization.

For instance, Database A might contain anonymized health information about employees for insurance reporting purposes but maps to discrete unique identifiers. File B includes a table of those unique identifiers mapped against employee demographics. Application C houses the actual employee names and titles for the organizational chart, but also includes their unique identifier as a hidden field. The vast majority of humans would never find this connection, but AI is designed to do so and will unabashedly generate a massive lawsuit for your organization if you’re not careful.
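The three-system scenario above boils down to a join on a shared identifier, something an AI agent with broad access will do without hesitation. A minimal sketch, with entirely invented rows and field names standing in for Database A, File B, and Application C:

```python
# Entirely hypothetical rows mirroring the three systems described above.
db_a = [{"uid": "u-17", "condition": "chronic-illness-y"}]             # anonymized health data
file_b = [{"uid": "u-17", "dept": "Finance", "age_band": "40-44"}]     # demographics keyed to uid
app_c = [{"uid": "u-17", "name": "C. Okafor", "title": "Controller"}]  # org chart with hidden uid

def join_on_uid(*sources):
    """Merge rows from every source that share a unique identifier,
    the exact kind of cross-system connection an AI will happily make."""
    merged = {}
    for source in sources:
        for row in source:
            merged.setdefault(row["uid"], {}).update(row)
    return merged

joined = join_on_uid(db_a, file_b, app_c)
print(joined["u-17"])  # name, title, and health condition now sit in one record
```

Each source looks safe in isolation; the merged record is what entitlements must be designed to prevent.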

If you have security and entitlement issues with your existing systems (and trust me, you do), AI will inadvertently discover them, connect the dots, and surface knowledge assets and connections between them that could be truly calamitous for your organization. Any AI readiness effort must confront this challenge, before your AI solutions shine a light on your existing security and entitlements issues.


7) Maintain Quality While Iteratively Improving (Governance)

Steps one through six describe how to get your knowledge assets ready for AI, but the final step gets your organization ready for AI. Having made a massive investment both in getting your knowledge assets into the right state for AI and in the AI solution itself, the final step is to ensure the ongoing quality of both. Mature organizations will invest in a core team to ensure knowledge assets go from AI-ready to AI-mature, including:

  • Maintaining and enforcing the core tenets to ensure knowledge assets stay up-to-date and AI solutions are looking at trusted assets only;
  • Reacting to hallucinations and unanswerable questions to fill gaps in knowledge assets; 
  • Tuning the semantic components to stay up to date with organizational changes.

The most mature organizations, those wishing to become AI-Powered organizations, will look first to their knowledge assets as the key building block to drive success. Those organizations will seek ROCK (Relevant, Organizationally Contextualized, Complete, and Knowledge-Centric) knowledge assets as the first line to delivering Enterprise AI that can be truly transformative for the organization. 

If you’re seeking help to ensure your knowledge assets are AI-Ready, contact us at info@enterprise-knowledge.com.

The post Top Ways to Get Your Content and Data Ready for AI appeared first on Enterprise Knowledge.

]]>
Knowledge Cast – Ben Clinch, Chief Data Officer & Partner at Ortecha – Semantic Layer Symposium Series https://enterprise-knowledge.com/knowledge-cast-ben-clinch-chief-data-officer-partner-at-ortecha/ Thu, 11 Sep 2025 13:43:01 +0000 https://enterprise-knowledge.com/?p=25345 Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Ben Clinch, Chief Data Officer and Partner at Ortecha and Regional Lead Trainer for the EDM Council (EMEA/India). He is a sought-after public speaker and thought leader in … Continue reading

The post Knowledge Cast – Ben Clinch, Chief Data Officer & Partner at Ortecha – Semantic Layer Symposium Series appeared first on Enterprise Knowledge.

]]>

Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Ben Clinch, Chief Data Officer and Partner at Ortecha and Regional Lead Trainer for the EDM Council (EMEA/India). He is a sought-after public speaker and thought leader in data and AI, having held numerous senior roles in architecture and business in some of the world’s largest financial and telecommunication institutions over his 25-year career, with a passion for helping organizations thrive with their data.

In their conversation, Lulit and Ben discuss Ben’s personal journey into the world of semantics, their data architecture must-haves in a perfect world, and how to calculate the value of data and knowledge initiatives. They also preview Ben’s talk at the Semantic Layer Symposium in Copenhagen this year, which will cover the combination of semantics and LLMs and neurosymbolic AI. 

For more information on the Semantic Layer Symposium, check it out here!


If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

The post Knowledge Cast – Ben Clinch, Chief Data Officer & Partner at Ortecha – Semantic Layer Symposium Series appeared first on Enterprise Knowledge.

]]>
Data Quality and Architecture Enrichment for Insights Visualization https://enterprise-knowledge.com/data-quality-and-architecture-enrichment-for-insights-visualization/ Wed, 10 Sep 2025 18:39:35 +0000 https://enterprise-knowledge.com/?p=25343 The Challenge A radiopharmaceutical imaging company faced challenges in monitoring patient statistics and clinical trial logistics. A lack of visibility and awareness into this data hindered conversations with leadership regarding the status of active clinical trials, ultimately putting clinical trial … Continue reading

The post Data Quality and Architecture Enrichment for Insights Visualization appeared first on Enterprise Knowledge.

]]>

The Challenge

A radiopharmaceutical imaging company faced challenges in monitoring patient statistics and clinical trial logistics. A lack of visibility and awareness into this data hindered conversations with leadership regarding the status of active clinical trials, ultimately putting clinical trial results at risk. The company needed a trusted, single location to ask relevant business questions about their data and to see trends or anomalies across multiple clinical trials. These efforts were complicated, however, by trial data arriving from various vendors in different formats, with no standardized values across trials. To mitigate these issues, the company engaged Enterprise Knowledge (EK) to provide Semantic Data Management Advisory & Development as part of a data normalization and portfolio reporting program. The engagement’s goal was to develop data visualization dashboards to answer critical business questions with cleaned, normalized, and trustworthy patient data from four clinical trials, depicted in an easy-to-understand and actionable manner.

The Solution

To unlock data insights across trials, EK designed and developed a Power BI dashboard that visualizes data from multiple trials in one centralized location. To begin, EK met with the client to confirm the business questions the dashboards would answer, ensuring they would visually display the patient and trial information needed to answer them. To remedy the varying data formats sent by vendors, EK mapped data values from trial reports to each other, normalizing and enriching the data with metadata and lineage. With structure and standardization added to the data, the dashboards could display robust insights into patient status, with filterable trial-specific information for the clinical imaging team.
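Mapping vendor-specific values to a canonical vocabulary while preserving lineage can be sketched in a few lines. The status codes, field names, and mapping table below are entirely hypothetical, not the client's actual data:

```python
# Hypothetical vendor-specific codes for the same patient status field.
STATUS_MAP = {
    "SCREEN_FAIL": "Screen Failure",
    "screen-failure": "Screen Failure",
    "ENRL": "Enrolled",
    "enrolled": "Enrolled",
}

def normalize(records, field="status"):
    """Map each vendor's raw value to the canonical label, keeping the
    raw value alongside it as lineage."""
    out = []
    for rec in records:
        raw = rec[field]
        out.append({**rec, field: STATUS_MAP.get(raw, raw), "raw_" + field: raw})
    return out

vendor_rows = [{"patient": "P-001", "status": "ENRL"},
               {"patient": "P-002", "status": "screen-failure"}]
print(normalize(vendor_rows))
```

Keeping the raw value next to the normalized one means every dashboard figure can be traced back to exactly what the vendor sent.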

EK also worked to transform the company’s data management environment—developing a medallion architecture structure to handle historical files and enforcing data cleaning and standardization on raw data inputs—to ensure dashboard insights were accurate and scalable to the inclusion of future trials. Implementing these data quality pre-processing steps and architecture considerations prepared the company for future applications and uses of reliable data, including the development of data products or the creation of a single view into the company-wide data landscape.

The EK Difference

To support the usage, maintenance, and future expansion of the data environment and data visualization tooling, EK developed knowledge transfer materials. These proprietary materials included a semantic modeling foundation via a data dictionary to explain and define dashboard fields and features, a proposed future medallion architecture, and materials to socialize and expand the usage of visualization tools to additional parts of the company that could benefit from them.

Dashboard Knowledge Transfer Framework
To ensure the longevity of the dashboard, especially with the future inclusion of additional trial data, it was essential to develop materials for future dashboard users and developers. The knowledge transfer framework designed by EK outlined a repeatable process for dashboard development with enough detail and information that someone unfamiliar with the dashboards can understand the background, use cases, data inputs, visualization outputs, and the overall purpose of the dashboarding effort. Instructions for dashboard upkeep, including how to update and add data to the dashboard as business needs evolve, were also provided.

Semantic Model Foundations: Data Dictionary
To semantically enhance the dashboards, all dashboard fields and features were cataloged and defined by EK experts in semantics and data analysis. In addition to definitions, the dictionary included purpose statements and calculation rules for each dashboard concept (where applicable). This data dictionary was created to prepare the client to process all trial information moving forward and serve as a reference for the data transformation process.

Proposed Future Architecture
To optimize data storage going forward, EK proposed a medallion architecture strategy consisting of Bronze, Silver, and Gold layers to preserve historical data and pave the way for mature logging techniques. At the time EK engaged the client, no formal data storage structure was in place. EK’s architecture strategy detailed storage preparation considerations for each layer, including workspace creation, file retention policies, and options for ingesting and storing data. EK leveraged technical expertise and a rich background in architecture strategies to provide expert advisory on the client’s future architecture.
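In a medallion architecture, data lands raw as received (Bronze), is cleaned and standardized (Silver), and is aggregated into business-ready shapes (Gold). The sketch below shows the flow in miniature with invented records and cleaning rules; real implementations run these transforms in a lakehouse platform rather than plain Python:

```python
# Bronze: raw rows as received, with inconsistent casing, dates, and duplicates.
bronze = [{"patient": " p-001 ", "visit_date": "2024/03/07", "site": "A"},
          {"patient": "P-001", "visit_date": "2024-03-07", "site": "A"}]

def to_silver(rows):
    """Clean and standardize: trim, uppercase IDs, normalize dates, dedupe."""
    seen, silver = set(), []
    for r in rows:
        rec = {"patient": r["patient"].strip().upper(),
               "visit_date": r["visit_date"].replace("/", "-"),
               "site": r["site"]}
        key = tuple(rec.values())
        if key not in seen:
            seen.add(key)
            silver.append(rec)
    return silver

def to_gold(rows):
    """Aggregate into the business-ready shape a dashboard would query."""
    visits_per_site = {}
    for r in rows:
        visits_per_site[r["site"]] = visits_per_site.get(r["site"], 0) + 1
    return visits_per_site

silver = to_silver(bronze)
print(to_gold(silver))  # {'A': 1}: the duplicate collapsed before aggregation
```

Because Bronze is never overwritten, historical files are preserved, while dashboards only ever read the cleaned Silver and Gold layers.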

Roadshow Materials
EK developed materials that summarized the mission and value of the clinical imaging dashboards. These materials included a high-level overview of the dashboard ecosystem so all audiences could comprehend the dashboard’s purpose and execution. With a KM-angled focus, the overall purpose of the materials was to gain organizational buy-in for the dashboard and build awareness of the clinical imaging team and the importance of the work they do. The roadshow materials also sought to promote dashboard adoption and future expansion of dashboarding into other areas of the company.

The Results

Before the dashboard, employees had to track down various spreadsheets for each trial sent from different sources and stored in at least four different locations. After the engagement, the company had a functional dashboard that displayed on-demand data visualizations across four clinical trials that pulled from a single data repository, creating a seamless way for the clinical imaging team to identify trial data and patient discrepancies early and often, preventing errors that could have resulted in unusable trial data. In all, having multiple trials’ information available in one streamlined view through the dashboard dramatically reduced the time and effort employees had previously spent tracking down and manually analyzing raw, disparate data for insights, from as high as 1–2 hours every week to as low as 15 minutes. Clinical imaging managers are now able to quickly determine and share trusted trial insights with their leadership confidently, enabling informed decision-making with the resources to explain where those insights were derived from.

In addition to the creation of the dashboard, EK helped develop a knowledge transfer framework and future architecture and data cleaning considerations, providing the company with a clear path to expand and scale usage to more clinical trials, other business units, and new business needs. In fact, the clinical imaging team identified at least four additional trials that, as a result of EK’s foundational work, can be immediately incorporated into the dashboard as the company sees fit.

Want to improve your organization’s content data quality and architecture? Contact us today!


The post Data Quality and Architecture Enrichment for Insights Visualization appeared first on Enterprise Knowledge.

]]>
Maturing Data Processes at a Decentralized Federal Organization https://enterprise-knowledge.com/maturing-data-processes-at-a-decentralized-federal-organization/ Wed, 09 Jul 2025 14:34:45 +0000 https://enterprise-knowledge.com/?p=24857 A large government agency sought EK’s help in addressing significant data management challenges they were facing. The agency had a decentralized organizational structure and a complex technical ecosystem, which created unique challenges for remote employees in finding, accessing, and sharing critical data at the time of need. These challenges resulted ... Continue reading

The post Maturing Data Processes at a Decentralized Federal Organization appeared first on Enterprise Knowledge.

]]>

The Challenge

A large government agency sought EK’s help in addressing significant data management challenges. The agency had a decentralized organizational structure and a complex technical ecosystem, which created unique challenges for remote employees in finding, accessing, and sharing critical data at the time of need. These challenges made it difficult for the agency to address critical operational needs, such as:

  • Quickly collecting data points and information requested by legislators and appropriators in Congress;
  • Providing data-backed evidence to support budget justifications for agency programs, generating insight for executives to enable data-driven decisions;
  • Demonstrating operational compliance with various regulations; and 
  • Collaborating with other agencies at the state and local levels in complementary initiatives. 

The agency enlisted EK to develop a data management strategy with the goal of standardizing data practices, increasing data accessibility, fostering cross-team collaboration, and making valuable information accessible for reuse across the organization. The goal at the organization was to have a fully implemented set of prioritized data management initiatives, anchored in a comprehensive Data Management Maturity and Modernization Strategy and Roadmap, which would lay the foundations to build advanced cloud data analytics programs in the future.

The Solution

EK worked with the agency’s data office to develop a strategy to collect, connect, and distribute data. Focused on helping the agency improve data capture, quality, usage, and lineage, EK collaborated with over 15 executives and engaged with stakeholders such as project coordinators, system owners, and everyday users to collect feedback and align data management strategies with specific use cases and the agency’s strategic initiatives. Key activities that informed our resulting recommendations included:

  • Hosting working group meetings to foster collaboration and alignment across different business areas, regions, and staff offices.
  • Providing a strategic and measurable assessment of the agency’s current data processes, using EK’s Data Maturity Benchmark to establish a baseline for improvement.
  • Reviewing a sampling of over 100 organizational memos, policies, and manuals to ensure solutions were in accordance with industry standards and regulatory mandates.
  • Designing and executing a collaborative, custom-built Organizational Data Needs Assessment Survey informed by stakeholder and organizational objectives, and achieving target demographic participation through the development and implementation of a strategic Communications Plan.
  • Conducting 8+ comprehensive software system capability assessments evaluating the organization’s data and metadata landscape to identify strengths, challenges, and gaps within existing metadata management practices and tools.
  • Performing a gap analysis to identify areas of alignment between existing tools and business needs.
  • Developing a reusable Tool Evaluation Matrix to evaluate data tools against the agency’s specific functional, operational, and business needs.
  • Engaging over 70 people across 23 interviews and focus groups with diverse business units to gather crucial perspectives to inform long-term recommendations, goals, and objectives.
  • Facilitating townhalls and workshops to garner executive and organizational buy-in.

The EK Difference

EK developed a comprehensive understanding of the agency’s needs through a mix of customized discovery activities and stakeholder engagement. By leveraging EK’s proprietary Data Maturity Benchmark, deep knowledge of industry best practices, and a collaborative approach with key agency stakeholders, EK delivered a tailored solution that provided the agency with a clear understanding of existing system capabilities, a list of similar tools and their features that could be procured in the future, and actionable recommendations to optimize its tool investment strategy. These efforts ultimately enable the agency to make strategic, informed decisions to enhance efficiency, data accessibility, and long-term operational success.

EK’s extensive experience in developing data management strategies and solutions resulted in recommendations that were both data-driven and aligned with the agency’s existing processes and available resources. With EK’s support, the agency now has practical, scalable approaches to enhance and modernize their data management practices through improved access and visibility across resources.

The Results

EK ultimately provided this federal agency with actionable recommendations, ranging from people-oriented incentives to technical investments, to improve their data management processes. EK helped the organization develop quantifiable metrics to enhance data accessibility, quality assurance, and utilization. Where employees previously faced challenges related to a decentralized organization, such as navigating multiple data catalogs, EK enabled the organization to create a more cohesive and user-friendly data environment by establishing standardized metrics for completeness, quality control, and data management. This reduced frustration and inefficiencies in finding and reusing critical datasets across the organization.

EK’s adaptable Tool Evaluation Matrix allows the agency to effectively analyze future investment considerations in metadata management, data catalog, and data repository systems. Additionally, EK’s efforts have provided the organization with action-oriented approaches to address critical gaps and advance their data management initiatives. These foundations will accelerate the agency’s ability to scale their data management processes and explore advanced data solutions, such as metric dashboarding, implementation of an enterprise-wide inventory or data catalog, and advanced architecture and discoverability capabilities. Consequently, the improvements made based on these foundations will empower personnel to efficiently locate, share, and utilize existing data, reducing duplication and fostering a more collaborative and data-driven culture. All of these improvements will assist the agency in building foundational elements of a cloud data analytics program.

Interested in improving your organization’s data management processes? Contact us today!


The post Maturing Data Processes at a Decentralized Federal Organization appeared first on Enterprise Knowledge.

What is a Knowledge Asset? https://enterprise-knowledge.com/what-is-a-knowledge-asset/ Mon, 16 Jun 2025 15:15:40 +0000 https://enterprise-knowledge.com/?p=24635 Over the course of Enterprise Knowledge’s history, we have been in the business of connecting an organization’s information and data, ensuring it is findable and discoverable, and enriching it to be more useful to both humans and AI. Though use … Continue reading

The post What is a Knowledge Asset? appeared first on Enterprise Knowledge.

Over the course of Enterprise Knowledge’s history, we have been in the business of connecting an organization’s information and data, ensuring it is findable and discoverable, and enriching it to be more useful to both humans and AI. Though use cases, scope, and scale of engagements—and certainly, the associated technologies—have all changed, that core mission has not.

As part of our work, we’ve endeavored to help our clients understand the expansive nature of their knowledge, content, and data. The complete range of these materials can be considered along several different spectra. They can range from tacit to explicit, knowledge to information, structured to unstructured, digital to analog, internal to external, and originated to generated. Before we go deeper into the definition of knowledge assets, let’s first explore each of these variables to understand how vast the full collection of knowledge assets can be for an organization.

  • Tacit and Explicit – Tacit content is held in people’s heads. It is inferred instead of explicitly encoded in systems, and does not exist in a shareable or repeatable format. Explicit content is that which has been captured in an independent form, typically as a digital file or entry. Historically, organizations have been focused on converting tacit knowledge to explicit so that the organization could better maintain and reuse it. However, we’ll explain below how the complete definition of a knowledge asset shifts that thinking somewhat.
  • Knowledge and Information – Knowledge is the expertise and experience people acquire, making it extremely valuable but hard to convert from tacit to explicit. Information is just facts, lacking expert context. Organizations have both, and documents often mix them.
  • Structured and Unstructured – Structured information is machine-readable and system-friendly, while unstructured information is human-readable and context-rich. Structured data, like database entries, is easy for systems but hard for humans to understand without tools. Unstructured data, designed for humans, is easier to grasp but historically challenging for machines to process.
  • Digital to Analog – Digital information exists in an electronic format, whereas analog information exists in a physical format. Many global organizations are sitting on mountains of knowledge and information that isn’t accessible (or perhaps even known) to most people in the organization. Making things more complex, there’s also formerly analog information, the many old documents that have been digitized but exist in a middle state where they’re not particularly machine-readable, but are electronic.
  • Internal to External – Internal content targets employees, while external content targets customers, partners, or the public, with differing tones and styles, and often greater governance and overall rigor for external content. Both types should align, but are treated differently. You can also consider the content created by your organization versus external content purchased, acquired, or accessed from external sources. From this perspective, you have much greater control over your organization’s own content than that which was created or is owned externally.
  • Originated and Generated – Originated content already exists within the organization as discrete items within a repository or repositories, authored by humans. Explicit content, for example, is originated. It was created by a person or people, it is managed, and identified as a unique item. Any file you’ve created before the AI era falls into this category. With Generative AI becoming pervasive, however, we must also consider generated information, derived from AI. These generated assets (synthetic assets) are automatically created based on an organization’s existing (originated) information, forming new content that may not possess the same level of rigor or governance.

If we were to go no further than the above, most organizations would already be dealing with petabytes of information and tons of paper encompassing years and years. However, by thinking about information based on its state (e.g., structured or unstructured, digital or analog), or by its use (e.g., internal or external), organizations are creating artificial barriers and silos to knowledge, as well as duplicating or triplicating work that should be done at the enterprise level. Unfortunately, for most organizations, the data management group defines and oversees data governance for their data, while the content management group defines and oversees content governance for their content. This goes beyond inefficiency or redundancy, creating cost and confusion for the organization and misaligning how information is managed, shared, and evolved. Addressing this issue, in itself, is already a worthy challenge, but it doesn’t yet fully define a knowledge asset or how thinking in terms of knowledge assets can deliver new value and insights to an organization.

If you go beyond traditional digital content and begin to consider how people actually want to obtain answers, as well as how artificial intelligence solutions work, we can begin to think of the knowledge an organization possesses more broadly. Rather than just looking at digital content, we can recognize all the other places, things, and people that can act as resources for an organization. For instance, people and the knowledge and information they possess are, in fact, an asset themselves. The field of KM has long been focused on extracting that knowledge, with at best mixed results. However, in the modern ecosystem of KM, semantics, and AI, we can instead consider people themselves as the asset that can be connected to the network. We may still choose to capture their knowledge in a digital form, but we can also add them to the network, creating avenues for people to find them, learn from them, and collaborate with them while mapping them to other assets.

In the same way, products, equipment, processes, and facilities can all be considered knowledge assets. By considering all of your organizational components not as “things,” but as containers of knowledge, you move from a world of silos to a connected and contextualized network that is traversable by a human and understandable by a machine. We coined the term knowledge assets to express this concept. The key to a knowledge asset is that it can be connected with other knowledge assets via metadata, meaning it can be put into the organization’s context. Anything that can hold metadata and be connected to other knowledge assets can be an asset.

Another set of knowledge assets that are quickly becoming critical for mature organizations are components of AI orchestration. As organizations build increasingly complex systems of agents, models, tools, and workflows, the logic that governs how these components interact becomes a form of operational knowledge in its own right. These orchestration components encode decisions, institutional context, and domain expertise, meaning they are worthy of being treated as first-class knowledge assets. To fully harness the value of AI, orchestration components should be clearly defined, governed, and meaningfully connected to the broader knowledge ecosystem.

Put into practice, a mature organization could create a true web of knowledge assets to serve virtually any use case. Rather than a simple search, a user might instead query their system to learn about a process. Instead of getting a link to the process documentation, they get a view of options, allowing them to read the documentation, speak to an expert on the topic, attend training on the process, join a community of practice working on it, or visit an application supporting it. 

A new joiner to your organization might be given a task to complete. Currently, they may hunt around your network for guidance, or wait for a message back from their mentor, but if they instead had a traversable network of all your organization’s knowledge assets, they could begin with a simple search on the topic of the task, find a past deliverable from a related task, which would lead them to the author of that task from whom they could seek guidance, or instead to an internal meetup of professionals deemed to have expertise in that task.
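As a rough sketch of what such a traversable network looks like in practice, the example below links a handful of invented assets (a document, a person, a training, and a community) through metadata relationships and walks outward from a topic. Every identifier is hypothetical, and a production network would live in a graph database rather than in-memory Python structures:

```python
from collections import deque

# Hypothetical knowledge-asset network: every node is an asset (a document,
# a person, a training, a community) and every edge is a metadata relationship.
edges = {
    ("doc:onboarding-guide", "about", "topic:onboarding"),
    ("person:j.smith", "authored", "doc:onboarding-guide"),
    ("community:new-joiners", "discusses", "topic:onboarding"),
    ("training:orientation", "covers", "topic:onboarding"),
}

def related_assets(start, max_hops=2):
    """Walk the asset network outward from one node, ignoring edge direction."""
    neighbors = {}
    for s, _, o in edges:
        neighbors.setdefault(s, set()).add(o)
        neighbors.setdefault(o, set()).add(s)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # stop expanding at the hop limit
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return seen - {start}
```

Starting from `topic:onboarding`, a two-hop walk surfaces not only the guide but also its author, the training, and the community of practice, which is exactly the “view of options” a search over siloed repositories cannot provide.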

If we break these silos down, add context and meaning via metadata, and begin to treat our knowledge assets holistically, we’re also creating the necessary foundations for any AI solutions to better understand our enterprise and deliver complete answers. This means that we’re building the better answer for our organization immediately, while also enabling our organization to leverage AI capabilities faster, more consistently, and more reliably than others.

The idea of knowledge assets will be a shift both in mindset and strategies, with impacts potentially rippling deeply through your org chart, technologies, and culture. However, the organizations that embrace this concept will achieve an enterprise most closely resembling how humans naturally think and learn and how AI is best equipped to deliver.

If you’re ready to take the next big step in organizational knowledge and maturity, contact us, and we will bring all of our knowledge assets to bear in support. 

Semantic Layer Strategy for Linked Data Investigations https://enterprise-knowledge.com/semantic-layer-strategy-for-linked-data-investigations/ Thu, 08 May 2025 15:08:08 +0000 https://enterprise-knowledge.com/?p=24011 A government organization sought to more effectively exploit their breadth of data generated by investigation activity of criminal networks for comprehensive case building and threat trend analysis. EK engaged with the client to develop a strategy and product vision for their semantic solution, paired with foundational semantic data models for meaningful data categorization and linking, architecture designs and tool recommendations for integrating and leveraging graph data, and entitlements designs for adhering to complex security standards. Continue reading

The post Semantic Layer Strategy for Linked Data Investigations appeared first on Enterprise Knowledge.


The Challenge

A government organization sought to more effectively exploit their breadth of data generated by investigation activity of criminal networks for comprehensive case building and threat trend analysis. The agency struggled to meaningfully connect structured and unstructured data from multiple siloed data sources, each with misaligned naming conventions and inconsistent data structures and formats. Users had to have an existing understanding of underlying data models and jump between multiple system views to answer core investigation analysis questions, such as “What other drivers have been associated with this vehicle involved in an inspection at the border?” or “How often has this person in the network traveled to a known suspect storage location in the past 6 months?”

These challenges manifested in data ambiguity across the organization, complex and resource-intensive integration workflows, and underutilized data assets lacking meaningful context, all resulting in significant cognitive load and burdensome manual efforts for users conducting intelligence analyses. The organization recognized the need to define a robust semantic layer solution grounded in data modeling, architecture frameworks, and governance controls to unify, contextualize, and operationalize data assets via a “single pane of intelligence” analysis platform.

The Solution

To address these challenges, EK engaged with the client to develop a strategy and product vision for their semantic solution, paired with foundational semantic data models for meaningful data categorization and linking, architecture designs and tool recommendations for integrating and leveraging graph data, and entitlements designs for adhering to complex security standards. With phased implementation plans for incremental delivery, these components lay the foundations for the client’s solution vision for advanced entity resolution and analytics capabilities. The overall solution will power streamlined consumption experiences and data-driven insights through the “single pane of intelligence.”  

The core components of EK’s semantic advisory and solution development included:

Product Vision and Use Case Backlog:
EK collaborated with the client to shape a product vision anchored around the solution’s purpose and long-term value for the organization. Complemented by a strategic backlog of priority use cases, EK’s guidance resulted in a compelling narrative to drive stakeholder engagement and organizational buy-in, while also establishing a clear and tangible vision for scalable solution growth.

Solution Architecture Design:
EK’s solution architects gathered technical requirements to propose a modular solution architecture consisting of multiple, self-contained technology products that will provision a comprehensive analytic ecosystem to the organization’s user base. The native graph architecture involves a graph database, entity resolution services, and a linked data analysis platform to create a unified, interactive model of all of their data assets via the “single pane of intelligence.”
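Entity resolution, in this context, means recognizing that records from different source systems describe the same real-world entity. The toy sketch below shows the core idea with entirely hypothetical field names and data; dedicated entity-resolution services use far more sophisticated matching than attribute normalization:

```python
# Records about the same vehicle arrive from different (hypothetical) systems
# with inconsistent formatting; normalizing key attributes lets us group them.
def normalize(record):
    return (
        record["name"].strip().lower(),
        record["plate"].replace("-", "").upper(),
    )

def resolve(records):
    """Group records that normalize to the same key into one resolved entity."""
    entities = {}
    for rec in records:
        entities.setdefault(normalize(rec), []).append(rec)
    return entities

records = [
    {"name": "John Q. Doe ", "plate": "abc-982", "source": "border_inspections"},
    {"name": "john q. doe", "plate": "ABC982", "source": "vehicle_registry"},
]
resolved = resolve(records)  # both records collapse into a single entity
```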

Tool Selection Advisory:
EK guided the client on selecting and successfully gaining buy-in for procurement of a graph database and a data analysis and visualization platform with native graph capabilities to plug into the semantic and presentation layers of the recommended architecture design. This selection moves the organization away from a monolithic, document-centric platform to a data-centric solution for dynamic intelligence analysis in alignment with their graph and network analytics use cases. EK’s experts in unified entitlements and industry security standards also ensured the selected tooling would comply with the client’s database, role, and attribute-based access control requirements.

Taxonomy and Ontology Modeling:
In collaboration with intelligence subject matter experts, EK guided the team from a broad conceptual model to an implementable ontology and starter taxonomy designs to enable a specific use case for prioritized data sources. EK advised on mapping the ontology model to components of the Common Core Ontologies to create a standard, interoperable foundation for consistent and scalable domain expansion.

Phased Implementation Plan:
Through dedicated planning and solutioning sessions with the core client team, EK developed an iterative implementation plan to scale the foundational data model and architecture components and unlock incremental technical capabilities. EK advised on identifying and defining starter pilot activities, outlining definitions of done, necessary roles and skillsets, and required tasks and supporting tooling from the overall architecture to ensure the client could quickly start on solution implementation. EK is directly supporting the team on the short-term implementation tasks while continuing to advise and plan for the longer-term solution needs.

 

The EK Difference

Semantic Layer Solution Strategy:
EK guided the client in transforming existing experimental work in the knowledge graph space into an enterprise solution that can scale and bring tangible value to users. From strategic use case development to iterative semantic model and architecture design, EK provided the client with repeatable processes for defining, shaping, and productionalizing components of the organization’s semantic layer.

LPG Analytics with RDF Semantics:
To support the client’s complex and dynamic analytics needs, EK recommended an LPG-based solution for its flexibility and scalability. At the same time, the client’s need for consistent data classification and linkage still pointed to the value of RDF frameworks for taxonomy and ontology development. EK is advising on how to bridge these models for the translation and connectivity of data across RDF and LPG formats, ultimately enabling seamless data integration and interoperability in alignment with semantic standards.
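One way to picture that bridge, independent of any particular product: RDF triples whose objects are literals become node properties in the LPG, `rdf:type` statements become node labels, and triples between two resources become relationships. A minimal sketch with invented identifiers:

```python
# Illustrative RDF-style triples (subject, predicate, object); quoted objects
# stand in for literals, unquoted ones for resources.
triples = [
    ("ex:Vehicle123", "rdf:type", "ex:Vehicle"),
    ("ex:Vehicle123", "ex:plateNumber", '"ABC-982"'),
    ("ex:Driver7", "ex:drove", "ex:Vehicle123"),
]

def rdf_to_lpg(triples):
    """Translate RDF-style triples into LPG nodes (labels, properties) and relationships."""
    nodes, rels = {}, []
    for s, p, o in triples:
        node = nodes.setdefault(s, {"id": s, "labels": set(), "props": {}})
        if p == "rdf:type":
            node["labels"].add(o)            # type statement -> node label
        elif o.startswith('"'):
            node["props"][p] = o.strip('"')  # literal object -> node property
        else:
            nodes.setdefault(o, {"id": o, "labels": set(), "props": {}})
            rels.append((s, p, o))           # resource object -> relationship
    return nodes, rels

nodes, rels = rdf_to_lpg(triples)
```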

Semantic Layer Tooling:
EK has extensive experience advising on the evaluation, selection, procurement, and scalable implementation of semantic layer technologies. EK’s qualitative evaluation for the organization’s linked data analysis platforms was supplemented by a proprietary structured matrix measuring down-selected tools against 50+ functional and non-functional factors to provide a quantitative view of each tool’s ability to meet the organization’s specific needs.

Semantic Modeling and Scalable Graph Development:
Working closely with the organization’s domain experts, EK provided expert advisory in industry standards and best practices to create a semantic data model that will maximize graph benefits in the context of the client’s use cases and critical data assets. In parallel with model development, EK offered technical expertise to advise on the scalability of the resulting graph and connected data pipelines to support continued maintenance and expansion.

Unified Entitlements Design:
Especially working with a highly regulated government agency, EK understands the critical need for unified entitlements to provide a holistic definition of access rights, enabling consistent and correct privileges across every system and asset type in the organization. EK offered comprehensive entitlements design and development support to ensure access rights would be properly implemented across the client’s environment, closely tied to the architecture and data modeling frameworks.
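Conceptually, unified entitlements reduce every access decision to a single policy function evaluated the same way for every system and asset type. Below is a minimal attribute-based sketch with hypothetical roles and attributes; real deployments layer database-, role-, and attribute-level controls:

```python
from dataclasses import dataclass

@dataclass
class User:
    roles: set
    clearance: int
    unit: str

@dataclass
class Asset:
    required_role: str
    min_clearance: int
    owning_unit: str

def can_access(user, asset):
    """Role AND attribute conditions must all hold (deny by default)."""
    return (
        asset.required_role in user.roles
        and user.clearance >= asset.min_clearance
        and user.unit == asset.owning_unit
    )

analyst = User(roles={"analyst"}, clearance=3, unit="field-ops")
report = Asset(required_role="analyst", min_clearance=2, owning_unit="field-ops")
```

Because the same function gates every asset type, a change to the policy propagates consistently, rather than being re-implemented per system.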

Organizational Buy-In:
Throughout the engagement, EK worked closely with project sponsors to craft and communicate the solution product vision. EK tailored product communication components to different audiences by detailing granular technical features for tool procurement conversations and formulating business-driven, strategic value statements to engage business users and executives for organizational alignment. Gaining this buy-in early on is critical for maintaining development momentum and minimizing future roadblocks as wider user groups transition to using the productionalized solution.

The Results

With initial core semantic models, iterative solution architecture design plans, and incremental pilot modeling and engineering activities, the organization is equipped to stand up key pieces of the solution as they procure the graph analytics tooling for continued scale. The phased implementation plan provides the core team with tangible and achievable steps to transition from their current document-centric ways of working to a truly data-centric environment. The full resulting solution will facilitate investigation activities with a single pane view of multi-sourced data and comprehensive, dynamic analytics. This will streamline intelligence analysis across the organization with the enablement of advanced consumption experiences such as self-service reporting, text summarization, and geospatial network analysis, ultimately reducing the cognitive load and manual efforts users currently face in understanding and connecting data. EK’s proposed strategy has been approved for implementation, and EK will publish the results from the MVP development as a follow-up to this case study.


Women’s Health Foundation – Semantic Classification POC https://enterprise-knowledge.com/womens-health-foundation-semantic-classification-poc/ Thu, 10 Apr 2025 19:20:31 +0000 https://enterprise-knowledge.com/?p=23789 A humanitarian foundation focusing on women’s health faced a complex problem: determining the highest impact decision points in contraception adoption for specific markets and demographics. Two strategic objectives drove the initiative—first, understanding the multifaceted factors (from product attributes to social influences) that guide women’s contraceptive choices, and second, identifying actionable insights from disparate data sources. The key challenge was integrating internal survey response data with internal investment documents to answer nuanced competency questions such as, “What are the most frequently cited factors when considering a contraceptive method?” and “Which factors most strongly influence adoption or rejection?” This required a system that could not only ingest and organize heterogeneous data but also enable executives to visualize and act upon insights derived from complex cross-document analyses. Continue reading

The post Women’s Health Foundation – Semantic Classification POC appeared first on Enterprise Knowledge.


The Challenge

A humanitarian foundation focusing on women’s health faced a complex problem: determining the highest impact decision points in contraception adoption for specific markets and demographics. Two strategic objectives drove the initiative—first, understanding the multifaceted factors (from product attributes to social influences) that guide women’s contraceptive choices, and second, identifying actionable insights from disparate data sources. The key challenge was integrating internal survey response data with internal investment documents to answer nuanced competency questions such as, “What are the most frequently cited factors when considering a contraceptive method?” and “Which factors most strongly influence adoption or rejection?” This required a system that could not only ingest and organize heterogeneous data but also enable executives to visualize and act upon insights derived from complex cross-document analyses.

 

The Solution

To address these challenges, the project team developed a proof-of-concept (POC) that leveraged advanced graph technology combined with AI-augmented classification techniques. 

The solution was implemented across several workstreams:

Defining System Functionality
The initial phase involved clearly articulating the use case. By mapping out the decision landscape—from strategic objectives (improving modern contraceptive prevalence rates) to granular insights from user research—the team designed a tailored taxonomy and ontology for the women’s health domain. This semantic framework was engineered to capture cultural nuances, local linguistic variations, and the diverse attributes influencing contraceptive choices.

Processing Existing Data
With the functionality defined, the next phase involved transforming internal survey responses and investment documents into a unified, structured format. An AI-augmented classification workflow was deployed to extract tacit knowledge from survey responses. This process was supported by a stakeholder-validated taxonomy and ontology, allowing raw responses to be mapped into clearly defined data classes. This robust data processing pipeline ensured that quantitative measures (like frequency of citation) and qualitative insights were captured in a cohesive base graph.
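The shape of such a pipeline can be sketched as follows. The actual workflow used a language model for classification; a keyword matcher stands in for it here, and both the taxonomy classes and the sample responses are invented:

```python
# Stand-in for the stakeholder-validated taxonomy: class names mapped to
# indicative terms. A real pipeline would prompt an LLM with the taxonomy
# rather than match keywords.
TAXONOMY = {
    "SideEffects": ["side effect", "nausea", "headache"],
    "PartnerInfluence": ["partner", "husband"],
    "Cost": ["cost", "price", "afford"],
}

def classify(response: str) -> set:
    """Map a raw survey response onto zero or more taxonomy classes."""
    text = response.lower()
    return {
        cls for cls, terms in TAXONOMY.items()
        if any(term in text for term in terms)
    }

tagged = [(r, classify(r)) for r in [
    "I stopped because of the side effects and the price.",
    "My partner did not approve of the method.",
]]
```

Each tagged response then becomes a set of structured data points ready to be instantiated in the base graph.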

Building the Analysis Model
The core of the solution was the creation of a Product Adoption Survey Base Graph. Processed data was converted into RDF triples using a rigorous ontology model, forming the base graph designed to answer competency questions via SPARQL queries. While this model laid the foundation for revealing correlations and decision factors, the full production of the advanced analysis graph—designed to incorporate deeper inference and reasoning—remained as a future enhancement.
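To make the idea concrete, the sketch below stands in for the base graph with plain (subject, predicate, object) tuples and answers one competency question by counting. The actual solution stored RDF and used SPARQL (a rough equivalent appears in the comment); all identifiers are illustrative:

```python
from collections import Counter

# Stand-in for the base graph: (subject, predicate, object) tuples.
graph = [
    ("resp:1", "cites", "factor:SideEffects"),
    ("resp:1", "cites", "factor:Cost"),
    ("resp:2", "cites", "factor:SideEffects"),
    ("resp:3", "cites", "factor:PartnerInfluence"),
]

# Rough SPARQL equivalent of the function below:
#   SELECT ?factor (COUNT(?resp) AS ?n)
#   WHERE { ?resp ex:cites ?factor }
#   GROUP BY ?factor ORDER BY DESC(?n)
def most_cited_factors(graph):
    """Competency question: which factors are most frequently cited?"""
    return Counter(o for _, p, o in graph if p == "cites").most_common()
```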

Handoff of Analysis Graph Production and Frontend Implementation
Due to time constraints, the production of the comprehensive analysis graph and the implementation of the interactive front end were transitioned to the client. Our team delivered the base graph and all necessary supporting documentation, providing the client with a solid foundation and a detailed roadmap for further development. This handoff ensures that the client’s in-house teams can continue productionalizing the analysis graph and integrate it with their BI dashboard for end-user access.

Provide a Roadmap for Further Development
Beyond the initial POC, a clear roadmap was established. The next steps include refining the AI classification workflow, fully instantiating the analysis graph with enhanced reasoning capabilities, and developing the front end to expose these insights via a business intelligence (BI) dashboard. These tasks have been handed off to the client, along with guidance on leveraging enterprise graph database licenses and integrating the solution within existing knowledge management frameworks.

 

The EK Difference

A standout feature of this project is its novel, generalizable technical architecture:

Ontology and Taxonomy Design:
A custom ontology was developed to model the women’s health domain—incorporating key decision factors, cultural influences, and local linguistic variations. This semantic backbone ensures that structured investment data and unstructured survey responses are harmonized under a common framework.

AI-Augmented Classification Pipeline:
The solution leverages state-of-the-art language models to perform the initial classification of survey responses. Supported by a validated taxonomy, this pipeline automatically extracts and tags critical data points from large volumes of survey content, laying the groundwork for subsequent graph instantiation, inference, and analysis.

Graph Instantiation and Querying:
Processed data is transformed into RDF triples and instantiated within a dedicated Product Adoption Survey Base Graph. This graph, queried via SPARQL through a GraphDB workbench, offers a robust mechanism for cross-document analysis. Although the full analysis graph is pending, the base graph effectively supports the core competency questions.


Guidance for BI Integration:
The architecture includes a flexible API layer and clear documentation that maps graph data into SQL tables. This design is intended to support future integration with BI platforms, enabling real-time visualization and executive-level decision-making.
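The mapping itself can be as simple as flattening graph query results into relational rows that a BI tool can read. Here is a self-contained sketch using SQLite as a stand-in for the client's reporting store; the table and column names are hypothetical:

```python
import sqlite3

# Hypothetical graph query results (factor, citation count) flattened into
# rows for a BI-friendly relational table.
rows = [
    ("factor:SideEffects", 2),
    ("factor:Cost", 1),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE factor_citations (factor TEXT, citation_count INTEGER)")
conn.executemany("INSERT INTO factor_citations VALUES (?, ?)", rows)

# A dashboard can now ask ordinary SQL questions of the graph-derived data.
top = conn.execute(
    "SELECT factor FROM factor_citations ORDER BY citation_count DESC LIMIT 1"
).fetchone()[0]
```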

 

The Results

The POC delivered compelling outcomes despite time constraints:

  • Actionable Insights:
    The system generated new insights by identifying frequently cited and impactful decision factors for contraceptive adoption, directly addressing the competency questions set by the Women’s Health teams.
  • Improved Data Transparency:
    By structuring tribal knowledge and unstructured survey data into a unified graph, the solution provided an explainable view of the decision landscape. Stakeholders gained visibility into how each insight was derived, enhancing trust in the system’s outputs.
  • Scalability and Generalizability:
    The technical architecture is robust and adaptable, offering a scalable model for analyzing similar survey data across other health domains. This approach demonstrates how enterprise knowledge graphs can drive down the total cost of ownership while enhancing integration within existing data management frameworks.
  • Strategic Handoff:
    Recognizing time constraints, our team successfully handed off the production of the comprehensive analysis graph and the implementation of the front end to the client. This strategic decision ensured continuity and allowed the client to tailor further development to their unique operational needs.


The post Women’s Health Foundation – Semantic Classification POC appeared first on Enterprise Knowledge.

Humanitarian Foundation – SemanticRAG POC https://enterprise-knowledge.com/humanitarian-foundation-semanticrag-poc/ Wed, 02 Apr 2025 18:03:04 +0000 https://enterprise-knowledge.com/?p=23603

The Challenge

A humanitarian foundation needed to demonstrate the ability of its Graph Retrieval Augmented Generation (GRAG) system to answer complex, cross-source questions. In particular, the task was to evaluate the impact of foundation investments on strategic goals by synthesizing information from publicly available domain data, internal investment documents, and internal investment data. The challenge lay in connecting diverse, unstructured information while ensuring that the insights generated were precise, explainable, and actionable for executive stakeholders.

 

The Solution

To address these challenges, the project team developed a proof of concept (POC) that leveraged advanced graph technology and a semantic Retrieval Augmented Generation (RAG) agentic workflow.

The solution was built around several core workstreams:

Defining System Functionality

The initial phase focused on establishing a clear use case: enabling the foundation to query its data ecosystem with natural language questions and receive accurate, explainable answers. This involved mapping out a comprehensive taxonomy and ontology that could encapsulate the knowledge domain of investments, thereby standardizing how investment documents and data were interpreted and interrelated.

Processing Existing Data

With functionality defined, the next step was to ingest and transform various data types. Structured data from internal systems and unstructured investment documents were processed and aligned with the newly defined ontology. Advanced techniques, including semantic extraction and graph mapping, were employed to ensure that all data—regardless of source—was accessible within a unified graph database.

Building the Chatbot Model

Central to the solution was the development of an investment chatbot that could leverage the graph’s interconnected data. This was approached as a cross-document question-answering challenge. The model was designed to predict answers by linking query nodes with relevant data nodes across the graph, thereby addressing competency questions that a naive retrieval model would miss. An explainable AI component was integrated to transparently show which data points drove each answer, instilling confidence in the results.
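The explainability idea can be sketched with a toy example: every answer carries the graph nodes (here, source documents) that support it. The data, goals, and field names below are hypothetical, not taken from the foundation's systems:

```python
# Toy cross-document QA with provenance: an answer is assembled by
# linking the query ("how are literacy investments performing?") to
# matching data nodes, and the supporting documents are returned
# alongside it so stakeholders can see how the answer was derived.

evidence_graph = {
    "grant:G1": {"goal": "literacy", "outcome": "improved", "doc": "report_2023.pdf"},
    "grant:G2": {"goal": "literacy", "outcome": "mixed",    "doc": "report_2024.pdf"},
    "grant:G3": {"goal": "health",   "outcome": "improved", "doc": "report_2022.pdf"},
}

def answer_with_provenance(goal: str) -> dict:
    """Answer a goal-impact question and cite the supporting nodes."""
    hits = {g: d for g, d in evidence_graph.items() if d["goal"] == goal}
    return {
        "answer": [d["outcome"] for d in hits.values()],
        "sources": [d["doc"] for d in hits.values()],
    }

result = answer_with_provenance("literacy")
print(result["sources"])  # ['report_2023.pdf', 'report_2024.pdf']
```

A naive retrieval model would return each report in isolation; the graph-backed approach aggregates across documents and keeps the derivation visible.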

Deploying the Whole System in a Containerized Web Application Stack

To ensure immediate usability, the POC was deployed, along with all of its dependencies, in a user-friendly, portable web application stack. This involved creating a dedicated API layer to interface between the chatbot and the graph database containers, alongside a custom front end that allowed executive users to interact with the system and view detailed explanations of the generated answers and the source documents upon which they were based. Early feedback highlighted the system’s ability to connect structured and unstructured content seamlessly, paving the way for broader adoption.

Providing a Roadmap for Further Development

Beyond the initial POC, the project laid out clear next steps. Recommendations included refining the chatbot’s response logic, optimizing performance (notably in embedding and document chunking), and enhancing user experience through additional ontology-driven query refinements. These steps are critical for evolving the system from a demonstrative tool to a fully integrated component of the foundation’s data management and access stack.

 

 

The EK Difference

A key differentiator of this project was its adoption of standards-based semantic graph technology and its highly generalizable technical architecture. 

The architecture comprises:

Investment Ontology and Data Mapping:

A rigorously defined ontology underpins the entire system, ensuring that all investment-related data—from structured datasets to narrative reports—is harmonized under a common language. This semantic backbone supports both precise data integration and flexible query interpretation.

Graph Instantiation Pipeline:

Investment data is transformed into RDF triples and instantiated within a robust graph database. This pipeline supports current data volumes and is scalable for future expansion. It includes custom tools to convert CSV files and other structured datasets into RDF and mechanisms to continually map new data into the graph.
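The CSV-to-RDF step of the pipeline can be sketched as follows. The column names, namespace prefixes, and predicates are illustrative assumptions; the real pipeline emits RDF into GraphDB rather than Python tuples:

```python
# Sketch of converting a structured dataset into triples under an
# investment ontology: each CSV row yields subject-predicate-object
# statements ready for graph instantiation.
import csv
import io

csv_data = io.StringIO(
    "investment_id,program,amount\n"
    "INV-1,Water Access,500000\n"
    "INV-2,Education,250000\n"
)

def rows_to_triples(reader):
    """Map each row to triples keyed by the investment's identifier."""
    for row in reader:
        subj = f"inv:{row['investment_id']}"
        yield (subj, "ont:fundsProgram", row["program"])
        yield (subj, "ont:hasAmount", row["amount"])

triples = list(rows_to_triples(csv.DictReader(csv_data)))
print(triples[0])  # ('inv:INV-1', 'ont:fundsProgram', 'Water Access')
```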

Semantic RAG Agentic Workflow and API:

The solution utilizes a semantic RAG approach to navigate the complexities of cross-document query answering. This agentic workflow is designed to minimize hallucinations, ensuring that each answer is traceable back to the underlying data. The integrated API provides a seamless bridge between the front-end chatbot and the back-end graph, enabling real-time, explainable responses.

Investment Chatbot Deployment:

Built as a central interface, the chatbot exemplifies how graph technology can be operationalized to address executive-level investment queries. It is fine-tuned to reflect the foundation’s language and domain knowledge, ensuring that every answer is accurate and contextually relevant.

 

The Results

The POC successfully demonstrated that GRAG could answer complex questions by:

  • Delivering coherent and explainable recommendations that bridged structured and unstructured investment data.
  • Significantly reducing query response time through a tightly integrated semantic RAG workflow.
  • Providing a transparent AI mapping that allowed stakeholders to see exactly how each answer was derived.
  • Establishing a scalable architecture that can be extended to support a broader range of use cases across the foundation’s data ecosystem.

This project underscores the transformative potential of graph technology in revolutionizing how investment health is assessed and how strategic decisions are informed. With a clear roadmap for future enhancements, the foundation now has a powerful, next-generation tool for deep, context-driven analysis of its investments.


The post Humanitarian Foundation – SemanticRAG POC appeared first on Enterprise Knowledge.
