Data Articles - Enterprise Knowledge https://enterprise-knowledge.com/tag/data/

How to Ensure Your Data is AI Ready https://enterprise-knowledge.com/how-to-ensure-your-data-is-ai-ready/ Wed, 01 Oct 2025

The post How to Ensure Your Data is AI Ready appeared first on Enterprise Knowledge.

Artificial intelligence has the potential to be a game-changer for organizations looking to empower their employees with data at every level. However, as business leaders look to initiate projects that incorporate data as part of their AI solutions, they frequently ask us, “How do I ensure my organization’s data is ready for AI?” In the first blog in this series, we shared ways to ensure knowledge assets are ready for AI. In this follow-on article, we address the unique challenges that come with connecting data, one of the most varied and distinctive types of knowledge assets, to AI. Data is pervasive in any organization and can serve as the key feeder for many AI use cases, making it a high-priority knowledge asset to prepare for your organization.

The question of data AI readiness stems from a very real concern: when AI is pointed at data that is incorrect or lacks the right context, organizations face risks to their reputation, their revenue, and their customers’ privacy. Data brings additional nuance in that it is often presented in formats that require transformation, lacking in context, and riddled with duplicates or near-duplicates that carry little explanation of their meaning. As a result, data (although seemingly structured and ready for machine consumption) requires greater care than other forms of knowledge assets before it can form part of a trusted AI solution.

This blog focuses on the key actions an organization needs to perform to ensure their data is ready to be consumed by AI. By following the steps below, an organization can use AI-ready data to develop end-products that are trustworthy, reliable, and transparent in their decision making.

1) Understand What You Mean by “Data” (Data Asset and Scope Definition)

Data is more than what we typically picture. Broadly, data is any raw information that can be interpreted to garner meaning or insights on a certain topic. While the typical understanding of data revolves around relational databases and tables galore, often with esoteric metrics filling their rows and columns, data takes a surprising number of forms. In terms of format, while data can live in traditional SQL databases, NoSQL data is growing in usage, in forms ranging from key-value pairs to JSON documents to graph databases. Plain, unstructured text such as emails, social media posts, and policy documents is also a form of data, though traditionally not included within the enterprise definition. Finally, data comes from myriad sources—from live machine data on a manufacturing floor to the same manufacturing plant’s Human Resources Management System (HRMS). Data can also be categorized by its business role: operational data that drives day-to-day processes, transactional data that records business exchanges, and even purchased or third-party data brought in to enrich internal datasets. Increasingly, organizations treat data itself as a product, packaged and maintained with the same rigor as software, and rely on data metrics to measure the quality, performance, and impact of business assets.

All these forms and types of data meet the definition of a knowledge asset—information and expertise that an organization can use to create value, which can be connected with other knowledge assets. No matter the format or repository type, ingested, AI-ready data can form the backbone of a valuable AI solution by allowing business-specific questions to be answered reliably and in an explainable manner. This raises a question for organizational decision makers: what within our data landscape needs to be included in our AI solution? From your definition of what data is, start thinking of what to add iteratively. What systems contain the highest priority data? What datasets would provide the most value to end users? Select high-value data in easy-to-transform formats so end users can see the value in your solution. This can garner excitement across departments and help support future efforts to introduce additional data into your AI environment.

2) Ensure Quality (Data Cleanup)

The majority of organizations we’ve worked with have experienced issues with not knowing what data they have or what it’s intended to be used for. This is especially common in large enterprise settings, as the sheer scale and variety of data can breed an environment where data becomes lost or buried, or degrades in quality. This sprawl occurs alongside another common problem, where multiple versions of the same dataset exist, with slight variations in the data they contain. Furthermore, the issue is exacerbated by yet another frequent challenge—a lack of business context. When data lacks context, neither humans nor AI can reliably determine the most up-to-date version, the assumptions and/or conditions in place when the data was collected, or even whether the data warrants retention.

Once AI is introduced, these potential issues are only compounded. If an AI system is provided data that is out of date or of low quality, the model will ultimately fail to provide reliable answers to user queries. When data is collected for a specific purpose, such as identifying product preferences across customer segments, but not labeled for said use, and an AI model leverages that data for a completely separate purpose, such as dynamic pricing models, harmful biases can be introduced into the results that negatively impact both the customer and the organization.

Thankfully, several methods are available to organizations today to inventory and restructure their data to fix these issues. Examples include data dictionaries, master data (managed through master data management, or MDM), and reference data, which help standardize data across an organization and point to what is available at large. Additionally, data catalogs are a proven tool for identifying what data exists within an organization, and include versioning and metadata features that can help label data with its versions and context. To help populate catalogs and data dictionaries and to create master and reference data, performing a data audit alongside stewards can help rediscover lost context and label data for better understanding by humans and machines alike. Another way to deduplicate, disambiguate, and contextualize data assets is through lineage, a feature included in many metadata management tools that stores and displays metadata regarding source systems, creation and modification dates, and file contributors. Using this lineage metadata, data stewards can select which version of a data asset is the most current or relevant for a specific use case and expose only that asset to AI. These methods to ensure data quality and facilitate data stewardship are also steps toward a larger governance framework. Finally, at a larger scale, a semantic layer can unify data and its meaning for easier ingestion into an AI solution, assist with deduplication efforts, and break down silos between different data users and consumers of knowledge assets at large.
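As a rough sketch of how lineage metadata can drive version selection, the snippet below picks the most recently modified record for each asset. The field names (`asset_id`, `source_system`, `modified`) and the records themselves are illustrative assumptions, not the schema of any particular metadata management tool:

```python
from datetime import date

# Hypothetical lineage records, as a metadata management tool might expose them.
lineage = [
    {"asset_id": "sales_q3", "source_system": "legacy_dw", "modified": date(2023, 1, 14)},
    {"asset_id": "sales_q3", "source_system": "lakehouse", "modified": date(2025, 6, 2)},
    {"asset_id": "sales_q3", "source_system": "sandbox",   "modified": date(2024, 9, 30)},
]

def most_current(records):
    """Select the most recently modified version of each asset."""
    latest = {}
    for rec in records:
        key = rec["asset_id"]
        if key not in latest or rec["modified"] > latest[key]["modified"]:
            latest[key] = rec
    return latest

# Only the winning version would be exposed to the AI solution.
print(most_current(lineage)["sales_q3"]["source_system"])  # lakehouse
```

In practice, a steward might weigh source-system trustworthiness alongside recency, but the principle is the same: lineage metadata decides which version AI is allowed to see.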

Separately, for the elimination of duplicate/near-duplicate data, entity resolution can autonomously parse the content of data assets, deduplicate them, and point AI to the most relevant, recent, or reliable data asset to answer a question. 
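Production entity resolution relies on dedicated tooling, but the core idea can be sketched with a naive pairwise similarity check. The records, the 0.65 threshold, and the greedy clustering below are all illustrative assumptions rather than a production-grade approach:

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.65):
    """Crude similarity on normalized names; real entity resolution would
    compare many attributes and use blocking techniques for scale."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio() >= threshold

records = ["Acme Corporation", "ACME Corporation ", "Acme Corp.", "Globex Inc"]

# Greedy clustering: each record joins the first cluster whose
# representative it resembles, otherwise it starts a new cluster.
clusters = []
for rec in records:
    for cluster in clusters:
        if similar(rec, cluster[0]):
            cluster.append(rec)
            break
    else:
        clusters.append([rec])

print(len(clusters))  # the four raw records resolve to 2 distinct entities
```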

3) Fill Gaps (Data Creation or Acquisition)

With your organization’s data inventoried and priorities identified, it’s time to start identifying what gaps exist in your data landscape in light of the business questions and challenges you are looking to address. First, ask use case-based questions. Based on your identified use cases, what data would an AI model need to answer topical questions that your organization doesn’t already possess?

At a higher level, gaps in use cases for your AI solution will also exist. To drive use case creation forward, consider the use of a data model, entity relationship diagram (ERD), or ontology to serve as the conceptual map on which all organizational data exists. With a complete data inventory, an ontology can help outline the process by which AI solutions would answer questions at a high level, thanks to being both machine and human-readable. By traversing the ontology or data model, you can design user journeys and create questions that form the basis of novel use cases.

Often, gaps are identified that require knowledge assets outside of data to fill. A data model or ontology can help identify related assets, as they function independently of their asset type. Moreover, standardized metadata across knowledge assets and asset types can enrich assets, link them to one another, and provide insights previously not possible. When instantiated in a solution alongside a knowledge graph, this forms a semantic layer where data assets, such as data products or metrics, gain context and maturity based on related knowledge assets. We were able to enhance the performance of a large retail chain’s analytics team through such an approach utilizing a semantic layer.

To fill these gaps, organizations can look to collect or create more data, as well as purchase publicly available or open-source datasets (build vs. buy). Another common method of filling identified organizational gaps is creating content (and other non-data knowledge assets) by extracting tacit organizational knowledge. This is a method that more chief data officers/chief data and AI officers (CDOs/CDAOs) are employing as their roles expand and reliance on structured data alone to gather insights and solve problems is no longer feasible.

As a whole, this process will drive future knowledge asset collection, creation, and procurement efforts, and consequently is a crucial step in ensuring data at large is AI-ready. If no such data exists for AI to rely on for certain use cases, users will be presented with unreliable, hallucination-based answers, or in a best-case scenario, no answer at all. Yet as part of a solid governance plan as mentioned earlier, continuing the gap analysis process after solution deployment can empower organizations to continually identify—and close—knowledge gaps, continuously improving data AI readiness and AI solution maturity.

4) Add Structure and Context (Semantic Components)

A key component of making data AI-ready is structure—not the format of the data itself (e.g., JSON, SQL, Excel), but the structure relating the data to use cases. In our previous blog, we used ‘structure’ to describe the meaning added to knowledge assets, which could cause confusion in this section. Here, ‘structure’ refers to the added, machine-readable context a semantic model gives to data assets, rather than the format of the assets themselves, because data loses meaning once taken out of the structure or format in which it is stored (as happens when it is retrieved by AI).

Although we touched on one type of semantic model in the previous step, there are three semantic models that work together to ensure data AI readiness: business glossaries, taxonomies, and ontologies. Adding semantics to data for the purpose of getting it ready for AI allows an organization to help users understand the meaning of the data they’re working with. Together, taxonomies, ontologies, and business glossaries imbue data with the context needed for an AI model to fully grasp the data’s meaning and make optimal use of it to answer user queries. 

Let’s dive into the business glossary first. Business glossaries define context-specific business terms that are often found in datasets, in plain, easy-to-understand language. For AI models, which are often trained on general-purpose data, these glossary terms can further assist in selecting the correct data needed to answer a user query.

Taxonomies group knowledge assets into broader and narrower categories, providing a level of hierarchical organization not available with traditional business glossaries. This can help data AI readiness in manifold ways. By standardizing terminology (e.g., referring to “automobile,” “car,” and “vehicle” all as “Vehicles” instead of separately), data from multiple sources can be integrated more seamlessly, disambiguated, and deduplicated for clearer understanding. 
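The “Vehicles” example above can be sketched as a simple synonym lookup. The `SYNONYMS` mapping below is a hypothetical, hand-built stand-in for the preferred and alternative labels a taxonomy management tool would actually maintain:

```python
# Illustrative synonym ring: alternative labels mapped to one preferred term.
SYNONYMS = {
    "automobile": "Vehicle",
    "car": "Vehicle",
    "vehicle": "Vehicle",
    "lorry": "Truck",
    "truck": "Truck",
}

def normalize(term):
    """Map a raw label to its preferred taxonomy term, if one is known."""
    return SYNONYMS.get(term.lower().strip(), term)

# Labels pulled from different source systems resolve to shared terms.
rows = ["Car", "automobile", "Truck", "forklift"]
print([normalize(r) for r in rows])  # ['Vehicle', 'Vehicle', 'Truck', 'forklift']
```

Unmatched labels (like “forklift” here) pass through unchanged, flagging candidates for taxonomy expansion.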

Finally, ontologies provide the true foundation for linking related datasets to one another and allow for the definition of custom relationships between knowledge assets. When combining ontologies with AI, organizations can perform inference as a way to capture explicitly what is only implied by individual datasets. This shows the power of semantics at work, and demonstrates that good, AI-ready data enriched with metadata can provide insights on par with those of a human analyst.

Organizations who have not pursued developing semantics for knowledge assets before can leverage traditional semantic capture methods, such as business glossaries. As organizations mature in their curation of knowledge assets, they are able to leverage the definitions developed as part of these glossaries and dictionaries, and begin to structure that information using more advanced modeling techniques, like taxonomy and ontology development. When applied to data, these semantic models make data more understandable, both to end users and AI systems. 

5) Semantic Model Application (Labeling and Tagging) 

The data management community has more recently focused on the value of metadata and metadata-first architecture, and is working to catch up to the maturity displayed in the fields of content and knowledge management. By replicating methods proven in content management systems and knowledge management platforms, data management professionals can build on efforts those fields have already made. Currently, the data catalog is the primary platform where metadata is applied and stored for data assets.

To aggregate metadata for your organization’s AI readiness efforts, it’s crucial to look to data stewards as the owners of, and primary contributors to, this effort. Through the process of labeling data by populating fields such as asset description, owner, assumptions made upon collection, and intended purpose, data stewards drive their data toward AI readiness while making tacit knowledge explicit and available to all. Additionally, metadata application against a semantic model (especially taxonomies and ontologies) situates assets in their business context and connects related assets to one another, further enriching AI-generated responses to user prompts. While there are methods to apply metadata to assets with less manual effort (such as auto-classification, which excels for content-based knowledge assets), structured data usually requires human subject matter experts to ensure accurate classification.
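A minimal sketch of what stewardship-driven labeling might enforce is a completeness check against required fields. The field names below are illustrative assumptions, not a standard:

```python
# Hypothetical stewardship fields a data catalog might require per asset.
REQUIRED_FIELDS = ["description", "owner", "collection_assumptions", "intended_purpose"]

def missing_metadata(asset):
    """Return the required stewardship fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not asset.get(f)]

asset = {"description": "Quarterly sales by region", "owner": "jdoe"}
print(missing_metadata(asset))  # ['collection_assumptions', 'intended_purpose']
```

A check like this can gate whether an asset is considered AI-ready at all, routing incomplete assets back to their stewards.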

With data catalogs and recent investments in metadata repositories, however, we’ve noticed a trend that we expect will continue to grow and spread across organizations in the near future. Data system owners are increasingly keen to manage metadata and catalog their assets within the same systems where data is stored and used, adopting features that were previously exclusive to a data catalog. Major software providers are strategically acquiring or building semantic capabilities for this purpose, as underscored by the recent acquisition of multiple data management platforms by the creators of larger, flagship software products. As the data catalog shifts from a full, standalone application that stores and presents metadata to a component of a larger application that functions as a metadata store, the metadata repository is beginning to take hold as the predominant metadata management platform.

6) Address Access and Security (Unified Entitlements)

Applying semantic metadata as described above helps to make data findable across an organization and contextualized with relevant datasets—but this needs to be balanced against security and entitlements considerations. Without regard for data security and privacy, AI systems risk bringing in data they shouldn’t have access to because access entitlements are mislabeled or missing, leading to leaks of sensitive information.

A common example of when this can occur is with user re-identification. Data points that independently seem innocuous, when combined by an AI system, can leak information about customers or users of an organization. With as few as 15 data points, information that was originally collected anonymously can be combined to identify an individual. Data elements like ZIP code or date of birth are not damaging on their own, but when combined, can expose information about a user that should have been kept private. These concerns become especially critical in industries with small dataset populations, such as rare disease treatment in healthcare.
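The re-identification risk described above is often measured as k-anonymity: the size of the smallest group of records sharing the same combination of quasi-identifier values. The sketch below uses hypothetical records; a k of 1 means at least one individual is uniquely identifiable from those fields alone:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by their
    quasi-identifier values; k == 1 signals re-identification risk."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Fabricated example records for illustration only.
patients = [
    {"zip": "20001", "birth_year": 1980, "condition": "A"},
    {"zip": "20001", "birth_year": 1980, "condition": "B"},
    {"zip": "20002", "birth_year": 1975, "condition": "C"},  # unique combination
]

print(k_anonymity(patients, ["zip", "birth_year"]))  # 1 -> re-identifiable
```

Checks like this can run before a dataset is exposed to an AI solution, triggering generalization or suppression of quasi-identifiers when k falls below a policy threshold.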

EK’s unified entitlements work is focused on ensuring the right people and systems view the correct knowledge assets at the right time. This is accomplished through a holistic architectural approach with six key components. Components like a policy engine capture and enforce whether access to data should be granted, while components like a query federation layer ensure that only data a user is permitted to retrieve is brought back from the appropriate sources.

The components of unified entitlements can be combined with other technologies like dark data detection, where a program scrapes an organization’s data landscape for any unlabeled information that is potentially sensitive, so that both human users and AI solutions cannot access data that could result in compliance violations or reputational damage. 

As a whole, data that exposes sensitive information to the wrong set of eyes is not AI-ready. Unified entitlements can form the layer of protection that ensures data AI readiness across the organization.

7) Maintain Quality While Iteratively Improving (Governance)

Governance serves a vital purpose in ensuring data assets become, and remain, AI-ready. With the introduction of AI to the enterprise, we are now seeing governance manifest itself beyond the data landscape alone. As AI governance begins to mature as a field of its own, it is taking on its own set of key roles and competencies and separating itself from data governance. 

While AI governance is meant to guide innovation and future iterations while ensuring compliance with both internal and external standards, data governance personnel are taking on the new responsibility of ensuring data is AI-ready based on requirements set by AI governance teams. Barring the existence of AI governance personnel, data governance teams are meant to serve as a bridge in the interim. As such, your data governance staff should define a common model of AI-ready data assets and related standards (such as structure, recency, reliability, and context) for future reference. 

Both data and AI governance personnel hold the responsibility of future-proofing enterprise AI solutions to ensure they continue to align with the above steps and meet requirements. Specific to data governance, organizations should ask themselves, “How do we update our data governance plan to ensure all of these steps remain applicable in perpetuity?” In parallel, AI governance should revolve around filling gaps in the solution’s capabilities. Once AI solutions launch to a production environment and user base, more gaps in the solution’s realm of expertise and capabilities will become apparent. As such, AI governance professionals need to stand up processes that use these gaps to continue identifying new needs for knowledge assets, data or otherwise, in perpetuity.

Conclusion

As we have explored throughout this blog, data is an extremely varied and unique form of knowledge asset with a new and disparate set of considerations to take into account when standing up an AI solution. However, following the steps listed above as part of an iterative process for implementation of data assets within said solution will ensure data is AI-ready and an invaluable part of an AI-powered organization.

If you’re seeking help to ensure your data is AI-ready, contact us at info@enterprise-knowledge.com.

Top Ways to Get Your Content and Data Ready for AI https://enterprise-knowledge.com/top-ways-to-get-your-content-and-data-ready-for-ai/ Mon, 15 Sep 2025

The post Top Ways to Get Your Content and Data Ready for AI appeared first on Enterprise Knowledge.

As artificial intelligence has quickly moved from science fiction, to pervasive internet reality, and now to standard corporate solutions, we consistently get the question, “How do I ensure my organization’s content and data are ready for AI?” Pointing your organization’s new AI solutions at the “right” content and data is critical to AI success and adoption, and failing to do so can quickly derail your AI initiatives.

Though the world is enthralled with myriad public AI solutions, many organizations struggle to make the leap to reliable AI internally. A recent MIT report, “The GenAI Divide,” reveals a concerning truth: despite significant investments in AI, 95% of organizations are not seeing any benefit from those investments.

One of the core impediments to achieving AI within your own organization is poor-quality content and data. Without the proper foundation of high-quality content and data, any AI solution will be rife with ‘hallucinations’ and errors. This will expose organizations to unacceptable risks, as AI tools may deliver incorrect or outdated information, leading to dangerous and costly outcomes. This is also why tools that perform well as demos fail to make the jump to production.  Even the most advanced AI won’t deliver acceptable results if an organization has not prepared their content and data.

This blog outlines seven top ways to ensure your content and data are AI-ready. With the right preparation and investment, your organization can successfully implement the latest AI technologies and deliver trustworthy, complete results.

1) Understand What You Mean by “Content” and/or “Data” (Knowledge Asset Definition)

While it seems obvious, the first step to ensuring your content and data are AI-ready is to clearly define what “content” and “data” mean within your organization. Many organizations use these terms interchangeably, while others use one as a parent term of the other. This obviously leads to a great deal of confusion. 

Leveraging the traditional definitions, we define content as unstructured information (ranging from files and documents to blocks of intranet text), and data as structured information (namely the rows and columns in databases and other applications like Customer Relationship Management systems, People Management systems, and Product Information Management systems). You are wasting the potential of AI if you’re not seeking to apply your AI to both content and data, giving end users complete and comprehensive information. In fact, we encourage organizations to think even more broadly, going beyond just content and data to consider all the organizational assets that can be leveraged by AI.

We’ve coined the term knowledge assets to express this. Knowledge assets comprise all the information and expertise an organization can use to create value. This includes not only content and data, but also the expertise of employees, business processes, facilities, equipment, and products. This manner of thinking quickly breaks down artificial silos within organizations, getting you to consider your assets collectively, rather than by type. Moving forward in this article, we’ll use the term knowledge assets in lieu of content and data to reinforce this point. Put simply and directly, each of the below steps to getting your content and data AI-ready should be considered from an enterprise perspective of knowledge assets, so rather than discretely developing content governance and data governance, you should define a comprehensive approach to knowledge asset governance. This approach will not only help you achieve AI-readiness, it will also help your organization to remove silos and redundancies in order to maximize enterprise efficiency and alignment.


2) Ensure Quality (Asset Cleanup)

We’ve found that most organizations are maintaining approximately 60-80% more information than they should, and in many cases, may not even be aware of what they still have. That means that as many as four out of five knowledge assets are old, outdated, duplicate, or near-duplicate.

There are many costs to this over-retention before even considering AI, including the administrative burden of maintaining this 80% (including the cost and environmental impact of unnecessary server storage), and the usability and findability cost to the organization’s end users when they go through obsolete knowledge assets.

The AI cost becomes even higher for several reasons. First, AI typically “white labels” the knowledge assets it finds. If a human were to find an old and outdated policy, they may recognize the old corporate branding on it, or note the date from several years ago on it, but when AI leverages the information within that knowledge asset and resurfaces it, it looks new and the contextual clues are lost.

Next, we have to consider the old adage of “garbage in, garbage out.” Incorrect knowledge assets fed to an AI tool will result in incorrect results, also known as hallucinations. While prompt engineering can be used to try to avoid these conflicts, and potentially even errors, the only surefire way to avoid this issue is to ensure the accuracy of the original knowledge assets, or at least the vast majority of them.

Many AI models also struggle with near-duplicate “knowledge assets,” unable to discern which version is trusted. Consider your organization’s version control issues, working documents, data modeled with different assumptions, and iterations of large deliverables and reports that are all currently stored. Knowledge assets may go through countless iterations, and most of the time, all of these versions are saved. When ingested by AI, multiple versions present potential confusion and conflict, especially when these versions didn’t simply build on each other but were edited to improve findings or recommendations. Each of these, in every case, is an opportunity for AI to fail your organization.

Finally, this would also be the point at which you consider restructuring your assets for improved readability (both by humans and machines). This could include formatting (to lower cognitive lift and improve consistency) from a human perspective. For both humans and AI, this could also mean adding text and tags to better describe images and other non-text-based elements. From an AI perspective, in longer and more complex assets, proximity and order can have a negative impact on precision, so this could include restructuring documents to make them more linear, chronological, or topically aligned. This is not necessary or even important for all types of assets, but remains an important consideration especially for text-based and longer types of assets.


3) Fill Gaps (Tacit Knowledge Capture)

The next step to ensure AI readiness is to identify your gaps. At this point, you should be looking at your AI use cases and considering the questions you want AI to answer. In many cases, your current repositories of knowledge assets will not have all of the information necessary to answer those questions completely, especially in a structured, machine-readable format. This presents a risk itself, especially if the AI solution is unaware that it lacks the complete range of knowledge assets necessary and portrays incomplete or limited answers as definitive. 

Filling gaps in knowledge assets is extremely difficult. The first step is to identify what is missing. To invoke another old adage, organizations have long worried they “don’t know what they don’t know,” meaning they lack the organizational maturity to identify gaps in their own knowledge. This becomes a major challenge when proactively seeking to arm an AI solution with all the knowledge assets necessary to deliver complete and accurate answers. The good news, however, is that the process of getting knowledge assets AI-ready helps to identify gaps. In the next two sections, we cover semantic design and tagging. These steps, among others, can identify where there appears to be missing knowledge assets. In addition, given the iterative nature of designing and deploying AI solutions, the inability of AI to answer a question can trigger gap filling, as we cover later. 

Of course, once you’ve identified the gaps, the real challenge begins, in that the organization must then generate new knowledge assets (or locate “hidden” assets) to fill those gaps. There are many techniques for this, ranging from tacit knowledge capture, to content inventories, all of which collectively can help an organization move from AI to Knowledge Intelligence (KI).    


4) Add Structure and Context (Semantic Components)

Once the knowledge assets have been cleansed and gaps have been filled, the next step in the process is to structure them so that they can be related to each other correctly, with the appropriate context and meaning. This requires the use of semantic components, specifically, taxonomies and ontologies. Taxonomies deliver meaning and structure, helping AI to understand queries from users, relate knowledge assets based on the relationships between the words and phrases used within them, and leverage context to properly interpret synonyms and other “close” terms. Taxonomies can also house glossaries that further define words and phrases that AI can leverage in the generation of results.

Though often confused or conflated with taxonomies, ontologies deliver a much more advanced type of knowledge organization, which is both complementary to taxonomies and unique. Ontologies focus on defining relationships between knowledge assets and the systems that house them, enabling AI to make inferences. For instance:

<Person> works at <Company>

<Zach Wahl> works at <Enterprise Knowledge>

<Company> is expert in <Topic>

<Enterprise Knowledge> is expert in <AI Readiness>

From this, a simple inference based on structured logic can be made, which is that the person who works at the company is an expert in the topic: Zach Wahl is an expert in AI Readiness. More detailed ontologies can quickly fuel more complex inferences, allowing an organization’s AI solutions to connect disparate knowledge assets within an organization. In this way, ontologies enable AI solutions to traverse knowledge assets, more accurately make “assumptions,” and deliver more complete and cohesive answers. 
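The inference above can be sketched as a toy rule over triples. A real system would express this in an ontology language and run a reasoner; the hypothetical Python below simply mirrors the exact example, including its simplifying assumption that a person inherits their company’s expertise:

```python
# Hypothetical triples mirroring the example above.
triples = {
    ("Zach Wahl", "works_at", "Enterprise Knowledge"),
    ("Enterprise Knowledge", "expert_in", "AI Readiness"),
}

def infer_expertise(triples):
    """Rule from the example: if a person works at a company, and that
    company is expert in a topic, infer the person is expert in it."""
    inferred = set()
    for person, p1, company in triples:
        if p1 != "works_at":
            continue
        for subj, p2, topic in triples:
            if p2 == "expert_in" and subj == company:
                inferred.add((person, "expert_in", topic))
    return inferred

print(infer_expertise(triples))
# {('Zach Wahl', 'expert_in', 'AI Readiness')}
```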

Collectively, you can consider these semantic components as an organizational map of what it does, who does it, and how. Semantic components can show an AI how to get where you want it to go without getting lost or taking wrong turns.

5) Semantic Model Application (Tagging)

Of course, it is not sufficient simply to design the semantic components; you must complete the process by applying them to your knowledge assets. If the semantic components are the map, applying semantic components as metadata is the GPS that allows you to use it easily and intuitively. This step is commonly a stumbling block for organizations, and again is why we are discussing knowledge assets rather than discrete areas like content and data. To best achieve AI readiness, all of your knowledge assets, regardless of their state (structured, semi-structured, unstructured, etc.), must have consistent metadata applied to them.

When applied properly, this consistent metadata becomes an additional layer of meaning and context for AI to leverage in pursuit of complete and correct answers. With the latest updates to leading taxonomy and ontology management systems, the process of automatically applying metadata or storing relationships between knowledge assets in metadata graphs is vastly improved, though it still requires a human in the loop to ensure accuracy. Even so, what used to be a major hurdle in metadata application initiatives is now much simpler.
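As a simplified illustration of term-based metadata application, assuming a tiny invented taxonomy with synonym rings (production auto-tagging tools use far more sophisticated NLP than this substring matching):

```python
# Minimal auto-tagging sketch: match taxonomy concepts (including synonyms)
# against asset text and record the preferred labels as metadata.
# The taxonomy and the document are invented for illustration.

taxonomy = {
    "Artificial Intelligence": ["AI", "machine intelligence"],
    "Data Governance": ["data stewardship"],
}

def auto_tag(text):
    tags = set()
    lowered = text.lower()
    for preferred, synonyms in taxonomy.items():
        for term in [preferred, *synonyms]:
            # Naive substring match; real tools tokenize and disambiguate.
            if term.lower() in lowered:
                tags.add(preferred)  # always store the preferred label
                break
    return tags

doc = "Our AI roadmap depends on strong data stewardship practices."
print(sorted(auto_tag(doc)))
# -> ['Artificial Intelligence', 'Data Governance']
```

Note that both a synonym (“AI”) and a non-preferred phrase (“data stewardship”) resolve to their preferred labels, which is what keeps the applied metadata consistent across assets.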

6) Address Access and Security (Unified Entitlements)

What happens when you finally deliver what your organization has been asking for, and give it the ability to collectively and completely serve end users the knowledge assets they’ve been seeking? If this step is skipped, the answer is calamity. One of the core value propositions of AI is that it can uncover hidden gems in knowledge assets, make connections humans typically can’t, and combine disparate sources to build new knowledge assets and new answers within them. This is incredibly exciting, but it also presents a massive organizational risk.

At present, many organizations have an incomplete, or outright poor, model for entitlements: ensuring the right people can see the right assets, and the wrong people cannot. We consistently discover highly sensitive knowledge assets in various forms on organizational systems that should be secured but are not. Some of this takes the form of a discrete document, or a row of data in an application, which is surprisingly common but relatively easy to address. Even more of it is only visible when you take an enterprise view of an organization.

For instance, Database A might contain anonymized health information about employees for insurance reporting purposes but maps to discrete unique identifiers. File B includes a table of those unique identifiers mapped against employee demographics. Application C houses the actual employee names and titles for the organizational chart, but also includes their unique identifier as a hidden field. The vast majority of humans would never find this connection, but AI is designed to do so and will unabashedly generate a massive lawsuit for your organization if you’re not careful.

If you have security and entitlement issues with your existing systems (and trust me, you do), AI will inadvertently discover them, connect the dots, and surface knowledge assets and connections between them that could be truly calamitous for your organization. Any AI readiness effort must confront this challenge, before your AI solutions shine a light on your existing security and entitlements issues.
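As a toy illustration of entitlement-aware retrieval, the sketch below applies a deny-by-default filter before assets ever reach an AI pipeline. All asset names, groups, and users are invented:

```python
# Entitlement check applied *before* assets reach an AI pipeline.
# Deny-by-default: an asset with no explicit ACL is never returned.
# Assets, groups, and users are invented for illustration.

acls = {
    "org-chart.xlsx": {"hr", "leadership"},
    "benefits-faq.pdf": {"all-staff"},
    # "claims-db-extract.csv" has no ACL entry -> treated as restricted
}

user_groups = {"jdoe": {"all-staff"}, "hr-admin": {"hr", "all-staff"}}

def authorized_assets(user, requested):
    """Return only the requested assets the user's groups may see."""
    groups = user_groups.get(user, set())
    return [a for a in requested if acls.get(a, set()) & groups]

requested = ["org-chart.xlsx", "benefits-faq.pdf", "claims-db-extract.csv"]
print(authorized_assets("jdoe", requested))
# -> ['benefits-faq.pdf']
```

The key design choice is the default: an unclassified asset is withheld rather than served, so gaps in the entitlement model fail closed instead of leaking into AI-generated answers.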

7) Maintain Quality While Iteratively Improving (Governance)

Steps one through six describe how to get your knowledge assets ready for AI, but the final step gets your organization ready for AI. With a massive investment both in getting your knowledge assets into the right state for AI and in the AI solution itself, the final step is to ensure the ongoing quality of both. Mature organizations will invest in a core team to ensure knowledge assets go from AI-ready to AI-mature, including:

  • Maintaining and enforcing the core tenets to ensure knowledge assets stay up-to-date and AI solutions are looking at trusted assets only;
  • Reacting to hallucinations and unanswerable questions to fill gaps in knowledge assets; 
  • Tuning the semantic components to stay up to date with organizational changes.

The most mature organizations, those wishing to become AI-Powered organizations, will look first to their knowledge assets as the key building block to drive success. Those organizations will seek ROCK (Relevant, Organizationally Contextualized, Complete, and Knowledge-Centric) knowledge assets as the first step toward delivering Enterprise AI that can be truly transformative for the organization.

If you’re seeking help to ensure your knowledge assets are AI-Ready, contact us at info@enterprise-knowledge.com

The post Top Ways to Get Your Content and Data Ready for AI appeared first on Enterprise Knowledge.

Knowledge Cast – Ben Clinch, Chief Data Officer & Partner at Ortecha – Semantic Layer Symposium Series https://enterprise-knowledge.com/knowledge-cast-ben-clinch-chief-data-officer-partner-at-ortecha/ Thu, 11 Sep 2025 13:43:01 +0000 https://enterprise-knowledge.com/?p=25345 Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Ben Clinch, Chief Data Officer and Partner at Ortecha and Regional Lead Trainer for the EDM Council (EMEA/India). He is a sought-after public speaker and thought leader in … Continue reading

The post Knowledge Cast – Ben Clinch, Chief Data Officer & Partner at Ortecha – Semantic Layer Symposium Series appeared first on Enterprise Knowledge.


Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Ben Clinch, Chief Data Officer and Partner at Ortecha and Regional Lead Trainer for the EDM Council (EMEA/India). He is a sought-after public speaker and thought leader in data and AI, having held numerous senior roles in architecture and business in some of the world’s largest financial and telecommunication institutions over his 25-year career, with a passion for helping organizations thrive with their data.

In their conversation, Lulit and Ben discuss Ben’s personal journey into the world of semantics, their data architecture must-haves in a perfect world, and how to calculate the value of data and knowledge initiatives. They also preview Ben’s talk at the Semantic Layer Symposium in Copenhagen this year, which will cover the combination of semantics and LLMs and neurosymbolic AI. 

For more information on the Semantic Layer Symposium, check it out here!


If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

The Journey of Data: From Raw Numbers to Actionable Insights with LLMs https://enterprise-knowledge.com/the-journey-of-data-from-raw-numbers-to-actionable-insights-with-llms/ Thu, 19 Oct 2023 17:17:23 +0000 https://enterprise-knowledge.com/?p=19085 Wondering how to take your data from its raw, decontextualized state and actually leverage it to produce actionable insights through the power of a Large Language Model (LLM)? The infographic below provides a visual overview of the 10 steps to … Continue reading

The post The Journey of Data: From Raw Numbers to Actionable Insights with LLMs appeared first on Enterprise Knowledge.

Wondering how to take your data from its raw, decontextualized state and actually leverage it to produce actionable insights through the power of a Large Language Model (LLM)? The infographic below provides a visual overview of the 10 steps to achieving this, split into two phases – Preparation and Action.

This infographic is a visual introduction to how your organization can take the next step in preparing for and acting on LLMs. EK has experience in designing and implementing solutions that optimize the way you use your knowledge, data, and information, especially in the Enterprise AI space, and can produce actionable and personalized recommendations for you. If this is something you’d like to speak with the AI experts at EK about, contact us today.

5 Steps to Enhance Search with a Knowledge Graph https://enterprise-knowledge.com/5-steps-to-enhance-search-with-a-knowledge-graph/ Tue, 24 Jan 2023 16:47:36 +0000 https://enterprise-knowledge.com/?p=17301 As search engines and portals evolve, users have come to expect more advanced features common to popular websites like Google or Amazon. Users expect search engines to understand what they are asking for and give them the ability to easily … Continue reading

The post 5 Steps to Enhance Search with a Knowledge Graph appeared first on Enterprise Knowledge.

As search engines and portals evolve, users have come to expect more advanced features common to popular websites like Google or Amazon. Users expect search engines to understand what they are asking for and give them the ability to easily scan and drill down to the desired information.

Knowledge graphs are commonly paired with enterprise search to meet these expectations, enabling users to explore connections between information and extend search results with contextual data. To help get started enhancing your search results with a knowledge graph, we put together the following five-step process that adheres to search, knowledge graph, and search design best practices.

For a deeper dive into each of the five steps, check out my corresponding white paper on the topic. EK has expertise in enterprise search, ontology design, and knowledge graph implementations, and we would love to work with you on your next search journey. Please feel free to contact us for more information.

Getting Started With Data Cleanup and Data Management https://enterprise-knowledge.com/getting-started-with-data-cleanup-and-data-management/ Fri, 13 May 2022 14:00:58 +0000 https://enterprise-knowledge.com/?p=15461 At EK, many of the challenges we hear about revolve around data – no matter the company’s size or industry. Without clean, centralized data, staff may begin to lose trust and confidence in the information they are working with. They … Continue reading

The post Getting Started With Data Cleanup and Data Management appeared first on Enterprise Knowledge.

This blog covers five steps for data cleanup and management: 1) Scope your data inventory and cleanup pilot; 2) Run a data inventory pilot; 3) Develop data quality guidelines; 4) Execute the data cleanup; and 5) Establish supporting processes for data governance.

At EK, many of the challenges we hear about revolve around data – no matter the company’s size or industry. Without clean, centralized data, staff may begin to lose trust and confidence in the information they are working with. They may have trouble conducting effective data analysis and extracting data insights, and the organization as a whole remains ill-prepared for more advanced applications of data, such as knowledge graphs, artificial intelligence (AI), and machine learning (ML). Each of these business challenges is due to ineffective data management and governance.

Organizations with siloed and inconsistent data need an enterprise data architecture and governance model – but where do you start? How can you make small, iterative steps to see impact quickly, validate assumptions, and ensure successful rollout across the enterprise? This blog describes an approach to data cleanup and management, ultimately leading to enterprise-wide data integrity and standards.

Step 1: Scope Your Data Inventory and Cleanup Pilot

A data inventory allows an organization to gain a better understanding of the current state of its data, thus supporting a future data cleanup based on the findings and insights. For many organizations, this is an expensive and time-consuming exercise. The key success factor here is to start small. Factors to consider when prioritizing the data for your pilot include:

  • Risk: Is the data regulated or subject to privacy considerations?
  • Business value: Is this data a key indicator of business goals (e.g., revenue, user engagement)?
  • Visibility: To what degree will the pilot data attract attention throughout the organization?
  • User buy-in: How eager and enthusiastic would users be for a pilot leveraging this data?
  • Accessibility: How easily accessible is the data? Are there security considerations?
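As a rough sketch of how these criteria might be combined into a prioritization score (the weights, 1-5 scores, and candidate datasets are invented; real prioritization will also involve stakeholder judgment):

```python
# Weighted scoring sketch for prioritizing candidate datasets for a
# data inventory pilot. Criteria scores (1-5) and weights are invented;
# tune both to your organization.

weights = {"risk": 0.3, "business_value": 0.25, "visibility": 0.15,
           "user_buy_in": 0.15, "accessibility": 0.15}

candidates = {
    "customer_master": {"risk": 5, "business_value": 5, "visibility": 4,
                        "user_buy_in": 3, "accessibility": 2},
    "marketing_events": {"risk": 1, "business_value": 3, "visibility": 3,
                         "user_buy_in": 5, "accessibility": 5},
}

def score(criteria):
    """Weighted sum of criteria scores, rounded for display."""
    return round(sum(weights[k] * v for k, v in criteria.items()), 2)

ranked = sorted(candidates, key=lambda name: score(candidates[name]),
                reverse=True)
for name in ranked:
    print(name, score(candidates[name]))
```

Here the high-risk, high-value customer master data outranks the easily accessible but low-stakes marketing data, which matches the intuition behind starting the pilot where it matters most.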

Step 2: Data Inventory Pilot

A data inventory will permit your organization to gain a clear understanding of how users access, use, and collaborate with the chosen system(s), thus beginning to identify relevant data quality and data management issues, concerns, and considerations.

With the inventory, your organization will have an overview of relevant data sources and data elements and can then use this inventory to draw insights, such as initial areas for enrichment and the early identification of data management challenges.

Inventory of sample data mapped to common data challenges.

Step 3: Develop Data Quality Guidelines

The team should seek to address data quality challenges surfaced in the data inventory with clear, actionable cleanup processes and data quality guidelines that establish standards for the areas of inconsistency and risk within your data inventories. While these guidelines will initially focus on the prioritized systems, your organization can reap even greater value by scaling these guidelines and evolving them across the organization beyond this pilot activity.

Sample data quality issues and their corresponding guidelines and cleanup actions.

Step 4: Execute Data Cleanup

It is essential to prioritize which data cleanup and enrichment areas to focus on, with the end goal of cleaning up, archiving, deleting, and/or migrating data assets based on your standardized and replicable data cleanup guidelines.

A cleanup can be completed through a mix of manual and automated methods. Auto-tagging is one method that can help automate the process, through automatically classifying assets based on predetermined sets of terms that identify the assets as candidates for further action or cleanup. Overall, executing the data cleanup will result in more standardized, reliable, and accessible data at the organization.
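A minimal sketch of such a hybrid approach, combining term-based classification with hash-based duplicate detection (file names, terms, and suggested actions are all invented for illustration):

```python
# Rule-based sketch for flagging cleanup candidates, combining term-based
# classification with duplicate detection via content hashing.
# File names, terms, and suggested actions are invented for illustration.
import hashlib

cleanup_terms = ("draft", "old", "copy of", "backup")

def content_hash(text):
    """Normalize lightly, then hash, so trivially identical assets match."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

def classify(assets):
    """assets: list of (name, text). Returns {name: suggested action}."""
    actions, seen = {}, {}
    for name, text in assets:
        digest = content_hash(text)
        if digest in seen:
            actions[name] = f"duplicate of {seen[digest]} -- review for deletion"
        elif any(term in name.lower() for term in cleanup_terms):
            actions[name] = "cleanup candidate -- review for archival"
        else:
            seen[digest] = name  # first clean copy becomes the canonical asset
    return actions

assets = [
    ("q3_report.docx", "Q3 results..."),
    ("Copy of q3_report.docx", "Q3 results..."),
    ("old_budget.xlsx", "FY19 budget"),
]
print(classify(assets))
```

Crucially, the output is a set of *suggestions* for human review, consistent with keeping a person in the loop rather than deleting assets automatically.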

Sample data governance framework.

Step 5: Supporting Processes for Data Governance

It’s important to develop a data governance framework to better maintain, control, and update your data, cementing the value gained through your data cleanup. These governance rules and processes should help to eliminate duplicative data and allow staff to easily maintain data assets, helping them to better trust the information they are finding in downstream applications.

The framework should seek to address the user personas responsible for aspects of data governance, such as data owners, data stewards, data analysts, and project leads, and the recommended actions users should take to address common pitfalls with data quality and any identified enrichment challenges.

Conclusion

Through executing a data cleanup pilot end-to-end, you will see the value in making impactful improvements to the quality and reliability of your data, and can leverage lessons learned to execute additional data cleanup across the enterprise. EK can help you operationalize your data strategy initiatives through designing and executing a set of scalable data cleanup pilots. Ready to see the impact of higher quality data? Contact us.

Knowledge Management Trends in 2020 https://enterprise-knowledge.com/knowledge-management-trends-in-2020/ Fri, 03 Apr 2020 13:51:20 +0000 https://enterprise-knowledge.com/?p=10898 Looking for the latest KM trends from Enterprise Knowledge? Zach’s KM Trends 2021 List can be found here. The field of knowledge management continues to evolve quickly, embracing new disciplines including semantic technologies and artificial intelligence as core parts of … Continue reading

The post Knowledge Management Trends in 2020 appeared first on Enterprise Knowledge.

Looking for the latest KM trends from Enterprise Knowledge? Zach’s KM Trends 2021 List can be found here.

The field of knowledge management continues to evolve quickly, embracing new disciplines including semantic technologies and artificial intelligence as core parts of the growing field. Based on our experience as the largest Knowledge Management Consulting company globally, I’ve once again defined the trends that I believe we can expect to see increasing over the next year and beyond. 

The first two of these five trends (demand for ROI and Artificial Intelligence) are very similar to items from the KM Trends in 2019 article I wrote last year. I don’t anticipate those falling off the list any time soon. The remaining three certainly continue themes from previous years, but are formulated more directly to address the most immediate knowledge management trends of today.

Demand for clear business value and return on investment in KM efforts – Last year when I wrote about this, I noted that through my years of KM consulting, I’d seen KM efforts lose funding or be deprioritized when the economy took a hit. I have spent these years stressing the criticality of tying KM to tangible business value, measurable success, and hard return on investment for knowledge management initiatives. Good KM measurably results in improved productivity, improved customer and employee satisfaction, increased revenues, preparedness for artificial intelligence, and effective remote work. Every KM project should’ve already justified its existence by showing these connections. Now, in this time of global pandemic and economic uncertainty, proving the critical value KM offers is more important than ever. The right KM efforts will help organizations be more agile and perform more effectively in the worst of times, and any smart CEO wouldn’t dare cut those business-critical initiatives.

Clear understanding of Knowledge Management and Enterprise AI powered by ontologies and knowledge graphs – Increasingly, one of those tangible business benefits for KM is that it lays the foundation for real artificial intelligence within an organization. Foundational KM activities like taxonomy and tagging, content types and content cleanup, content governance, and tacit knowledge capture are all critical to an organization’s goals of connecting their knowledge, content, and data and automating ways of pushing it to the right users and assembling it for greater value and action. With a good KM foundation, AI isn’t something that organizations can dream of for another day. This is achievable now.

Acceptance and recognition of the enabling role of technology in KM – For too long in the KM space, the impression has been that KM is a “soft” skill, and too many KM practitioners gladly reinforce that concept. This perception has exacerbated my first theme above, regarding the linkage of KM and business value. The reality is that KM is a mix of “soft” and “hard” skills, and the best KM efforts bring these together. Good KM, therefore, encompasses a mix of tacit knowledge, unstructured content, and structured data, and should leverage today’s technologies to more effectively capture, manage, share, relate, and find that mix of knowledge, information, and data. To be clear, technology is still just an enabling factor in successful KM; that’s why we list technology last in our five components of KM (People, Process, Content, Culture, and Technology). However, the field as a whole is increasingly recognizing the integral nature of this technology.

Improved understanding of the knowledge ecosystem including all types of knowledge, information, and data – Over the years, the KM consulting “ask” from clients has moved from, “I want to be able to effectively capture, manage, and find my files,” to “I want to be able to effectively capture, manage, and find my files and data,” to “I want to be able to effectively capture, manage, and find everything together.” Where we are now is a clear recognition that the most mature organizations will be able to consolidate, present, find, discover, and relate all of their different types of content (with content loosely defined and including files, data, knowledge, collaborative materials, and even people). This enables paths of discovery where an end user can traverse content, data, and people in order to find all of the content that can help them complete their immediate mission and develop their knowledge over the longer-term. This is the true hallmark of a mature KM organization, leveraging everything they have, making it easily and intuitively available to their people, connecting it so it is enhanced and contextualized, and allowing their people to act on it in ways that feel natural and complete their goals.

Recognition of KM organizations and mandates as a critical success factor – As the last top KM trend of 2020, it is important to note that KM doesn’t just happen in organizations. For years, organizations have asked individuals to practice “hero KM” in their free time, or offered a big title like Chief Knowledge Officer without the authority, reporting lines, or staff to effect change. Today, we are thankfully seeing the trend that organizations are moving more toward building functional KM organizations. This trend is likely fueled by some of the others I’ve noted, namely the recognition of KM business value and its critical role at the center of AI. Organizational design is thereby increasingly overlapping with KM efforts, where designing, training, and coaching a fledgling KM unit within an organization, and the successful establishment of such a unit, is a critical success factor for broader KM.

When you put these trends together, you see that KM and business are consistently coming together, placing knowledge management at the center of an organization’s strategies for effectiveness and efficiency.

If you’re looking for help with bringing these trends and their business benefits to your organization, contact us.

 

What is a Semantic Architecture and How do I Build One? https://enterprise-knowledge.com/what-is-a-semantic-architecture-and-how-do-i-build-one/ Thu, 02 Apr 2020 13:00:48 +0000 https://enterprise-knowledge.com/?p=10865 Can you access the bulk of your organization’s data through simple search or navigation using common business terms? If so, your organization may be one of the few that is reaping the benefits of a semantic data layer. A semantic … Continue reading

The post What is a Semantic Architecture and How do I Build One? appeared first on Enterprise Knowledge.

Can you access the bulk of your organization’s data through simple search or navigation using common business terms? If so, your organization may be one of the few that is reaping the benefits of a semantic data layer. A semantic layer provides the enterprise with the flexibility to capture, store, and represent simple business terms and context as a layer sitting above complex data. This is why most of our clients typically give this architectural layer an internal nickname, referring to it as “The Brain,”  “The Hub,” “The Network,” “Our Universe,” and so forth. 

As such, before delving deep into the architecture, it is important to align on and understand what we mean by a semantic layer and its foundational ability to solve business and traditional data management challenges. In this article, I will share EK’s experience designing and building semantic data layers for the enterprise, the key considerations and potential challenges to look out for, and also outline effective practices to optimize, scale, and gain the utmost business value a semantic model provides to an organization.

What is a Semantic Layer?

A semantic layer is not a single platform or application, but rather the realization or actualization of a semantic approach to solving business problems by managing data in a manner that is optimized for capturing business meaning and designing it for the end user experience. At its core, a standard semantic layer is composed of one or more of the following semantic approaches:

  • Ontology Model: defines the types of things that exist in your business domain and the properties that can be used to describe them. An ontology provides a flexible and standard model that organizes structured and unstructured information through entities, their properties, and the way they relate to one another.
  • Enterprise Knowledge Graph: uses an ontology as a framework to add in real data and enable a standard representation of an organization’s knowledge domain and artifacts so that it is understood by both humans and machines. It is a collection of references to your organization’s knowledge assets, content, and data that leverages a data model to describe the people, places, and things and how they are related. 

A semantic layer thus pulls in these flexible semantic models to allow your organization to map disparate data sources into a single schema or a unified data model that provides a business representation of enterprise data in a “whiteboardable” view, making large data accessible to both technical and nontechnical users. In other words, it provides a business view of complex knowledge, information, and data and their assorted relationships in a way that can be visually understood.

How Does a Semantic Layer Provide Business Value to Your Organization?

Organizations have been successfully utilizing data lakes and data warehouses to unify enterprise data in a shared space. A semantic data layer delivers the best value for enterprises looking to support the growing consumers of big data, business users, by adding the “meaning” or “business knowledge” behind their data as an additional layer of abstraction, or as a bridge between complex data assets and front-end applications such as enterprise search, business analytics and BI dashboards, chatbots, natural language processing, etc. For instance, if you ask a non-semantic chatbot, “what is our profit?” and it recites the definition of “profit” from the dictionary, it does not have a semantic understanding or context of your business language and what you mean by “our profit.” A chatbot built on a semantic layer would instead respond with something like a list of revenue generated per year and the respective percentage of your organization’s profit margins.

Visual representation of how a semantic layer draws connections between your data management and storage layer.
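The chatbot contrast above can be sketched as a toy term-resolution flow, where a glossary in the semantic layer maps the business term to a concrete calculation over the data layer (the glossary, figures, and question-matching logic are all invented):

```python
# Toy illustration of the chatbot example: a semantic layer maps the
# business term "profit" to a concrete calculation over the data layer.
# Glossary entries and data are invented for illustration.

glossary = {
    "profit": {"definition": "revenue minus cost, per year",
               "calc": lambda row: row["revenue"] - row["cost"]},
}

data = [
    {"year": 2023, "revenue": 120, "cost": 90},
    {"year": 2024, "revenue": 150, "cost": 100},
]

def answer(question):
    """Resolve a business term from the question, then compute from data."""
    for term, entry in glossary.items():
        if term in question.lower():
            return {row["year"]: entry["calc"](row) for row in data}
    return "Term not found in the semantic layer."

print(answer("What is our profit?"))
# -> {2023: 30, 2024: 50}
```

Instead of reciting a dictionary definition, the semantic layer grounds “profit” in the organization’s own data, returning per-year figures.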

With a semantic layer as part of an organization’s Enterprise Architecture (EA), the enterprise will be able to realize the following key business benefits: 

  • Bringing Business Users Closer to Data: business users and leadership are closer to data and can independently derive meaningful information and facts to gain insights from large data sources without the technical skills required to query, cleanup, and transform large data.   
  • Data Processing: greater flexibility to quickly modify and improve data flows in a way that is aligned to business needs and the ability to support future business questions and needs that are currently unknown (by traversing your knowledge graph in real time). 
  • Data Governance: unification and interoperability of data across the enterprise minimizes the risk and cost associated with migration or duplication efforts to analyze the relationships between various data sources. 
  • Machine Learning (ML) and Artificial Intelligence (AI): serves as the source of truth for providing definitions of business data to machines, enabling the foundation for deep learning and analytics that help the business answer or predict business challenges.

Building the Architecture of a Semantic Layer

A semantic layer consists of a wide array of solutions, ranging from the organizational data itself, to data models that support object- or context-oriented design, semantic standards to guide machine understanding, as well as tools and technologies to enable and facilitate implementation and scale.

Visual representation of a semantic layer architecture: from data sources, through data modeling, transformation, unification, and standardization, to graph storage and a unified taxonomy, up to the semantic layer and the business outcomes it supports.

The four foundational steps we have identified as critical to building a scalable semantic layer within your enterprise architecture are:

1. Define and prioritize your business needs: In building semantic enterprise solutions, clearly defined use cases provide the key question or business reason your semantic architecture will answer for the organization. This in turn drives an understanding of the users and stakeholders, articulates the business value or challenge the solution will solve for your organization, and enables the definition of measurable success criteria. Active SME engagement and validation to ensure proper representation of their business knowledge and understanding of their data is critical to success. Skipping this foundational step will result in missed opportunities for ensuring organizational alignment and return on your investment (ROI). 

2. Map and model your relevant data: Many organizations we work with support a data architecture that is based on relational databases, data warehouses, and/or a wide range of content management cloud or hybrid cloud applications and systems that drive data analysis and analytics capabilities. This does not necessarily mean that these organizations need to start from scratch or overhaul their working enterprise architecture in order to adopt/implement semantic capabilities. For these organizations, it is more effective to start increasing the focus on data modeling and designing efforts by adding models and standards that will allow for capturing business meaning and context (see section below on Web Standards) in a manner that provides the least disruptive starting point. In such scenarios, we typically select the most effective approach to model data and map from source systems by employing the relevant transformation and unification processes (Extract, Transform, Load – ETLs) as well as model-mapping best practices (think ‘virtual model’ versus stored data model in graph storages like graph databases, property graphs, etc.) that are based on the organization’s use cases, enterprise architecture capabilities, staff skill sets, and primarily provide the highest flexibility for data governance and evolving business needs.

The state of an organization’s data typically comes in various formats and from disparate sources. Start with a small use case and plan for an upfront clean-up and transformation effort that will serve as a good investment to start organizing your data and set stakeholder expectations while demonstrating the value of your model early.
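As a minimal sketch of this mapping step, assuming two invented source schemas, rows from each system can be transformed into one unified triple model, which is the “single schema” the semantic layer exposes:

```python
# Sketch of mapping rows from two source systems into one unified
# (subject, predicate, object) model. Source schemas, field names, and
# predicate names are invented for illustration.

crm_rows = [{"cust_id": "C1", "cust_name": "Acme Corp"}]
billing_rows = [{"customer": "C1", "invoice_total": 4200}]

def to_triples(crm, billing):
    triples = set()
    for row in crm:
        triples.add((row["cust_id"], "hasName", row["cust_name"]))
        triples.add((row["cust_id"], "isA", "Customer"))
    for row in billing:
        # Same identifier, different source system -- the join happens
        # naturally in the unified model.
        triples.add((row["customer"], "hasInvoiceTotal", row["invoice_total"]))
    return triples

graph = to_triples(crm_rows, billing_rows)
print(sorted(graph))
```

Once both systems describe the same entity ("C1") in one model, downstream consumers can ask questions that span CRM and billing without knowing either source schema.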

3. Leverage semantic web standards to ensure interoperability and governance: Despite the agility required to evolve data management practices, organizations need to think long term about scale and governance. Semantic web standards provide the fundamentals for adopting proven frameworks and practices when kicking off or advancing your semantic architecture. The most relevant standards for the enterprise are to:

  • Employ an established data description framework to add business context to your data to enable human understanding and natural language meaning of data (think taxonomies, data catalogs, and metadata); 
  • Use standard approaches to manage and share the data through core data representation formats and a set of rules for formalizing data to ensure your data is both human-readable and machine-readable (examples include XML/RDF formats); 
  • Apply a flexible logic or schema to map and represent relationships, knowledge, and hierarchies between your organization’s data (think ontologies/OWL);
  • Use a semantic query language to access and analyze the data for natural language and artificial intelligence systems (think SPARQL); and
  • Start with existing open-source semantic models and ecosystems to serve as a low-risk, high-value stepping stone (think Linked Open Data/Schema.org). For instance, organizations in the financial industry can start their journey with the Financial Industry Business Ontology (FIBO), while in biopharma we have used the Gene Ontology as a jumping-off point, enriching or tailoring the model to the specific needs of the organization.
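To make the standards above concrete, here is a minimal sketch: the triples use full URIs and a SKOS ‘broader’ relationship, and a simple pattern match stands in for what a SPARQL engine would execute at scale. The finance concepts are illustrative placeholders, not drawn from FIBO:

```python
# Minimal sketch of the standards working together: SKOS for taxonomy
# relationships, full URIs as identifiers, and a pattern match that
# approximates a SPARQL basic graph pattern. Concepts are illustrative.

SKOS = "http://www.w3.org/2004/02/skos/core#"
EX = "https://example.org/finance/"

graph = {
    (EX + "Bond", SKOS + "broader", EX + "DebtInstrument"),
    (EX + "Loan", SKOS + "broader", EX + "DebtInstrument"),
    (EX + "DebtInstrument", SKOS + "broader", EX + "FinancialInstrument"),
}

def subjects_with(graph, predicate, obj):
    """Roughly SELECT ?s WHERE { ?s <predicate> <obj> } in SPARQL terms."""
    return sorted(s for (s, p, o) in graph if p == predicate and o == obj)

# Which concepts sit directly under DebtInstrument in the hierarchy?
narrower = subjects_with(graph, SKOS + "broader", EX + "DebtInstrument")
```

In production this query would run in a triple store via SPARQL; the point of the sketch is that standard vocabularies make the question expressible the same way everywhere.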

4. Scale with Semantic Tools: Semantic technology components in a more mature semantic layer include graph management applications that serve as middleware, powering the storage, processing, and retrieval of your semantic data. In most scaled enterprise implementations, the architecture for a semantic layer includes: a graph database for storing the knowledge and relationships within your data (i.e., your ontology); an enterprise taxonomy/ontology management or data cataloging tool for effective application and governance of your metadata across enterprise applications such as content management systems; and, depending on your use cases, text analytics or extraction tools to support advanced capabilities such as Machine Learning (ML) or natural language processing (NLP).
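As a rough illustration of how these components compose, the sketch below wires an in-memory stand-in for a graph store together with a taxonomy lookup and a naive auto-tagger. Every class and method name here is a hypothetical placeholder, not a reference to any particular product:

```python
# Hypothetical facade over the three component types named above:
# a graph store, a taxonomy service, and a simple text tagger.

class SemanticLayer:
    def __init__(self):
        self.triples = set()   # stands in for a graph database
        self.taxonomy = {}     # concept URI -> preferred label

    def add_concept(self, uri, label):
        """Register a taxonomy concept and record it in the graph."""
        self.taxonomy[uri] = label
        self.triples.add((uri, "rdfs:label", label))

    def tag_text(self, text):
        """Naive auto-tagging: return concepts whose label appears in the text."""
        lowered = text.lower()
        return sorted(uri for uri, label in self.taxonomy.items()
                      if label.lower() in lowered)

layer = SemanticLayer()
layer.add_concept("ex:MachineLearning", "machine learning")
layer.add_concept("ex:Taxonomy", "taxonomy")
tags = layer.tag_text("Our taxonomy powers machine learning features.")
```

A real implementation would delegate each role to dedicated tooling; the value of the facade is that consuming applications see one coherent interface.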

5. “Plug in” your customer- and employee-facing applications: The most practical and scalable semantic architecture will successfully support upstream customer- or employee-facing applications such as enterprise search, data visualization tools, end services/consuming systems, and chatbots, to name a few. This way you can “plug” semantic components into other enterprise solutions, applications, and services. With this foundation in place, your organization can start taking advantage of advanced artificial intelligence (AI) capabilities such as knowledge/relationship and text extraction tools to enable Natural Language Processing (NLP), Machine Learning based pattern recognition to enhance the findability and usability of your content, as well as automated categorization of your content to augment your data governance practices.
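One way to picture the “plug in” pattern is a thin query interface that search, dashboards, and chatbots all call, rather than each touching the graph store directly. The sketch below is illustrative; the concept data and the query-expansion rule are assumptions for the example:

```python
# Illustrative "plug in" point: consuming applications call one narrow
# search function; related-concept expansion mimics what a semantic
# layer adds over plain keyword search. Data below is made up.

CONCEPTS = {
    "https://example.org/kb/DataGovernance": {
        "label": "Data Governance",
        "related": ["https://example.org/kb/Metadata"],
    },
    "https://example.org/kb/Metadata": {
        "label": "Metadata",
        "related": [],
    },
}

def semantic_search(query):
    """Return concepts whose label matches, plus their related concepts."""
    hits = [uri for uri, c in CONCEPTS.items()
            if query.lower() in c["label"].lower()]
    expanded = {r for uri in hits for r in CONCEPTS[uri]["related"]}
    return sorted(set(hits) | expanded)

results = semantic_search("governance")
```

Because the interface is narrow, the same function can back an enterprise search page, a dashboard filter, or a chatbot retrieval step without each consumer knowing how the graph is stored.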

The cornerstone of a scalable semantic layer is the capability for controlling and managing versions, governance, and automation. Continuous integration pipelines, including standardized APIs and automated ETL scripts, should be considered part of the solution’s DNA to ensure consistent connections for structured input from tested and validated sources.
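A pipeline validation step of the kind described above might look like the following sketch, rejecting malformed triples before they are loaded into the graph. The two rules shown are illustrative minimums, not a complete governance policy:

```python
# Hypothetical CI gate: validate candidate triples before loading.
# Rules here (arity check, subject must be a URI, non-empty predicate)
# are illustrative; real governance policies go much further.

def validate_triples(triples):
    """Return a list of human-readable errors; empty list means loadable."""
    errors = []
    for i, t in enumerate(triples):
        if len(t) != 3:
            errors.append(f"row {i}: expected 3 terms, got {len(t)}")
            continue
        s, p, o = t
        if not s.startswith(("http://", "https://")):
            errors.append(f"row {i}: subject is not a URI: {s!r}")
        if not p:
            errors.append(f"row {i}: empty predicate")
    return errors

good = [("https://example.org/a", "rdfs:label", "A")]
bad = [("not-a-uri", "", "B")]
```

Run as a pipeline step, a non-empty error list fails the build, keeping untested input out of the production graph.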

Conclusion

In summary, a semantic layer works best as a natural integration framework for enabling interoperability of organizational information assets. Get started by focusing on valuable, business-centric use cases that justify the move into semantic solutions. Further, it is worth considering a semantic layer as a complement to other technologies, including relational databases, content management systems (CMS), and front-end web applications that benefit from easy access to, and an intuitive representation of, your content and data, including enterprise search, data dashboards, and chatbots.

If you are interested in learning more, whether to determine if a semantic model fits within your organization’s overall enterprise architecture or to embark on the journey of bridging organizational silos and connecting diverse domains of knowledge and data to accelerate enterprise AI capabilities, read more or email us.


 

The post What is a Semantic Architecture and How do I Build One? appeared first on Enterprise Knowledge.

]]>
Using Facets to Find Unstructured Content https://enterprise-knowledge.com/using-facets-to-find-unstructured-content/ Tue, 14 Jan 2020 14:00:25 +0000 https://enterprise-knowledge.com/?p=10296

The post Using Facets to Find Unstructured Content appeared first on Enterprise Knowledge.

]]>
What does ‘faceted navigation’ mean to you? For web-savvy individuals, it’s a search experience similar to the one you would find on Amazon. Facets primarily allow an individual to quickly sort through large amounts of information to locate a single entity or a small set of them. The infographic below provides a visual overview of what facets are, where they come from, and what they can allow you to do.

https://enterprise-knowledge.com/wp-content/uploads/2020/01/Facets.png

This infographic is a visual introduction to how facets can improve item, document, and content findability, regardless of the form and structure of that content. Other factors, like customized action-oriented results and an enterprise-wide taxonomy, allow for an even more advanced search experience. EK has experience in designing and implementing solutions that optimize the way you use your knowledge, data, and information, and can produce actionable and personalized recommendations for you. If this is something you’d like to speak with the experts at EK about, reach out to info@enterprise-knowledge.com.
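Under the hood, faceted filtering amounts to intersecting the result set with each selected facet value. The sketch below is a simplified illustration; the documents and facet fields are made up for the example:

```python
# Simplified sketch of faceted filtering: keep only documents that match
# every selected facet value. Documents and fields are illustrative.

DOCS = [
    {"title": "Q3 Sales Report", "type": "report", "region": "EMEA"},
    {"title": "Onboarding Guide", "type": "guide", "region": "Global"},
    {"title": "Q3 Marketing Report", "type": "report", "region": "Americas"},
]

def apply_facets(docs, selections):
    """Intersect the document set with each selected facet value."""
    return [d for d in docs
            if all(d.get(field) == value for field, value in selections.items())]

reports = apply_facets(DOCS, {"type": "report"})
emea_reports = apply_facets(DOCS, {"type": "report", "region": "EMEA"})
```

Each additional facet selection narrows the set further, which is exactly the progressive-refinement experience the infographic describes.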


]]>
SEMANTiCs US to Promote Next Generation of Transparent AI https://enterprise-knowledge.com/semantics-us-to-promote-next-generation-of-transparent-ai/ Fri, 27 Dec 2019 16:11:18 +0000 https://enterprise-knowledge.com/?p=10181

The post SEMANTiCs US to Promote Next Generation of Transparent AI appeared first on Enterprise Knowledge.

]]>
Teaming with Semantic Web Company, Enterprise Knowledge is sponsoring the inaugural SEMANTiCs US conference to be held in Austin, TX, from April 21-23 at the AT&T Conference Center. The multi-day conference will showcase proven practices and real-life case studies of companies using semantics to build artificial intelligence solutions that matter to their business. Additionally, SEMANTiCs US will leverage three simultaneous tracks to ensure active engagement for all levels of interest and experience, from novice to expert.

The conference will cover topics such as Explainable AI, Knowledge Graphs, Semantic Technologies, and Data Governance. A number of renowned experts will speak at the conference, including the following highlights:

  • Aaron Bradley, Senior Manager for Web Channel Strategy at Electronic Arts.
  • Alan Morrison, Senior Research Fellow, Emerging Technologies at PricewaterhouseCoopers.
  • Amit Sheth, Founding Director of the Artificial Intelligence Institute and Professor of Computer Science & Engineering at the University of South Carolina.
  • Ruben Verborgh, Technology Advocate for Inrupt.

Joe Hilger, COO of Enterprise Knowledge and Chair of SEMANTiCs US, is excited about the speakers already lined up: “We have fantastic speakers joining the conference. It will be a great opportunity for people to engage with some of the leading experts and business stakeholders in the field and learn more about Knowledge Graphs and Explainable AI.”

More information on SEMANTiCS 2020 can be found here: https://2020-us.semantics.cc/.

 

About Semantics Conference Series

SEMANTiCS is an established knowledge hub where technology professionals, industry experts, researchers, and decision makers can learn about new technologies, innovations, and enterprise implementations in the fields of Knowledge Graphs, Linked Data, and Semantic AI. Founded in 2005, SEMANTiCS is the only European conference at the intersection of research and industry.

In 2020, SEMANTiCS will be held in Austin, TX, in April and in Amsterdam, NL, in September.

 

About EK

Enterprise Knowledge (EK) is a services firm that integrates Knowledge Management, Information Management, Information Technology, and Agile Approaches to deliver comprehensive solutions. Our mission is to form true partnerships with our clients, listening and collaborating to create tailored, practical, and results-oriented solutions that enable them to thrive and adapt to changing needs.

 

About Semantic Web Company

A leading provider of graph-based metadata, search, and analytics solutions, Semantic Web Company helps Global 500 customers manage corporate knowledge models, extract useful knowledge from big data sets, and integrate both structured and unstructured data to recommend evolved strategies for organizing information at scale. Founded in 2004, Semantic Web Company is the vendor of the PoolParty Semantic Suite (www.poolparty.biz) and was named to KMWorld’s prestigious list of “100 Companies that Matter in Knowledge Management” each year from 2016 to 2019. The company was recently added to Gartner’s Magic Quadrant for Metadata Management Solutions in the Visionary category. Andreas Blumauer, founder and CEO, has been nominated to the Forbes Technology Council.


]]>