enterprise ai Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/enterprise-ai/

How to Ensure Your Data is AI Ready
https://enterprise-knowledge.com/how-to-ensure-your-data-is-ai-ready/ (Wed, 01 Oct 2025)

Artificial intelligence has the potential to be a game-changer for organizations looking to empower their employees with data at every level. However, as business leaders initiate projects that incorporate data into their AI solutions, they frequently ask us, "How do I ensure my organization's data is ready for AI?" In the first blog in this series, we shared ways to ensure knowledge assets are ready for AI. In this follow-on article, we address the unique challenges that come with connecting data—one of the most varied types of knowledge assets—to AI. Data is pervasive in any organization and can serve as the key feeder for many AI use cases, making it a high-priority knowledge asset to prepare for your organization's AI efforts.

The question of data AI readiness stems from a very real concern: when AI is pointed at data that isn't correct or that lacks the right context, organizations face risks to their reputation, their revenue, or their customers' privacy. Data adds further nuance: it is often presented in formats that require transformation, often lacks context, and frequently contains duplicates or near-duplicates with little explanation of their meaning. So although data may seem already structured and ready for machine consumption, it requires greater care than other forms of knowledge assets before it can form part of a trusted AI solution.

This blog focuses on the key actions an organization needs to perform to ensure their data is ready to be consumed by AI. By following the steps below, an organization can use AI-ready data to develop end-products that are trustworthy, reliable, and transparent in their decision making.

1) Understand What You Mean by “Data” (Data Asset and Scope Definition)

Data is more than what we typically picture it as. Broadly, data is any raw information that can be interpreted to garner meaning or insights on a certain topic. While the typical understanding of data revolves around relational databases and tables galore, often with esoteric metrics filling their rows and columns, data takes a number of forms, which can often be surprising. In terms of format, while data can be in traditional SQL databases and formats, NoSQL data is growing in usage, in forms ranging from key-value pairs to JSON documents to graph databases. Plain, unstructured text such as emails, social media posts, and policy documents are also forms of data, but traditionally not included within the enterprise definition. Finally, data comes from myriad sources—from live machine data on a manufacturing floor to the same manufacturing plant’s Human Resources Management System (HRMS). Data can also be categorized by its business role: operational data that drives day-to-day processes, transactional data that records business exchanges, and even purchased or third-party data brought in to enrich internal datasets. Increasingly, organizations treat data itself as a product, packaged and maintained with the same rigor as software, and rely on data metrics to measure quality, performance, and impact of business assets.
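
To make this variety concrete, consider how a single fact might appear across these formats. The sketch below uses entirely hypothetical names; it is an illustration of the formats discussed above, not a reference to any particular system:

```python
# One hypothetical sensor reading, represented three ways.

# 1) Relational: a row in a SQL table, expressed as a tuple
#    matching columns (machine_id, metric, value, recorded_at).
sql_row = ("press-04", "spindle_temp_c", 71.3, "2025-10-01T16:37:50Z")

# 2) NoSQL: the same reading as a JSON-style document.
json_doc = {
    "machineId": "press-04",
    "metric": "spindle_temp_c",
    "value": 71.3,
    "recordedAt": "2025-10-01T16:37:50Z",
}

# 3) Graph: subject-predicate-object triples relating the reading
#    to the machine and the plant that houses it.
triples = [
    ("reading:9913", "measuredOn", "machine:press-04"),
    ("reading:9913", "hasValue", 71.3),
    ("machine:press-04", "locatedIn", "plant:dayton"),
]
```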

All these forms and types of data meet the definition of a knowledge asset—information and expertise that an organization can use to create value, which can be connected with other knowledge assets. No matter the format or repository type, ingested, AI-ready data can form the backbone of a valuable AI solution by allowing business-specific questions to be answered reliably and in an explainable manner. This raises a question for organizational decision makers: what within our data landscape needs to be included in our AI solution? From your definition of what data is, start thinking of what to add iteratively. What systems contain the highest priority data? What datasets would provide the most value to end users? Select high-value data in easy-to-transform formats that allow end users to see the value in your solution. This can garner excitement across departments and help support future efforts to introduce additional data into your AI environment.

2) Ensure Quality (Data Cleanup)

The majority of organizations we've worked with have experienced issues with not knowing what data they have or what it's intended to be used for. This is especially common in large enterprise settings, as the sheer scale and variety of data can breed an environment where data becomes lost, buried, or degraded in quality. This sprawl occurs alongside another common problem, where multiple versions of the same dataset exist with slight variations in the data they contain. The issue is exacerbated by yet another frequent challenge—a lack of business context. When data lacks context, neither humans nor AI can reliably determine the most up-to-date version, the assumptions and/or conditions in place when the data was collected, or even whether the data warrants retention.

Once AI is introduced, these issues are only compounded. If an AI system is provided data that is out of date or of low quality, the model will ultimately fail to provide reliable answers to user queries. And when data is collected for a specific purpose, such as identifying product preferences across customer segments, but is not labeled for that use, an AI model may leverage it for a completely separate purpose, such as dynamic pricing. This can introduce harmful biases into the results that negatively impact both the customer and the organization.

Thankfully, several methods are available to organizations today to inventory and restructure their data to fix these issues. Examples include data dictionaries, master data (managed through master data management, or MDM), and reference data, which standardize data across an organization and point to what is available at large. Additionally, data catalogs are a proven tool for identifying what data exists within an organization, and they include versioning and metadata features that can label data with its version and context. To help populate catalogs and data dictionaries and to create master/reference data, performing a data audit alongside data stewards can rediscover lost context and label data for better understanding by humans and machines alike. Another way to deduplicate, disambiguate, and contextualize data assets is through lineage. Lineage is a feature included in many metadata management tools that stores and displays metadata about source systems, creation and modification dates, and file contributors. Using this lineage metadata, data stewards can select which version of a data asset is the most current or relevant for a specific use case and expose only that asset to AI. These methods to ensure data quality and facilitate data stewardship can serve as steps toward a larger governance framework. Finally, at a larger scale, a semantic layer can unify data and its meaning for easier ingestion into an AI solution, assist with deduplication efforts, and break down silos between different data users and consumers of knowledge assets at large.
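
As a minimal illustration of how lineage metadata supports version selection, consider the following sketch. It assumes a metadata store that exposes source systems and modification dates; all record and source names are invented:

```python
from datetime import datetime

# Hypothetical lineage records for three copies of the same dataset.
lineage = [
    {"asset": "sales_q3.csv", "source": "legacy_share", "modified": "2024-11-02"},
    {"asset": "sales_q3_v2.csv", "source": "warehouse", "modified": "2025-03-18"},
    {"asset": "sales_q3_final.csv", "source": "warehouse", "modified": "2025-06-30"},
]

TRUSTED_SOURCES = {"warehouse"}  # a governance decision, not detected automatically

def most_current(records):
    """Return the newest asset that comes from a trusted source system."""
    trusted = [r for r in records if r["source"] in TRUSTED_SOURCES]
    return max(trusted, key=lambda r: datetime.fromisoformat(r["modified"]))

print(most_current(lineage)["asset"])  # -> sales_q3_final.csv
```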

Separately, for the elimination of duplicate/near-duplicate data, entity resolution can autonomously parse the content of data assets, deduplicate them, and point AI to the most relevant, recent, or reliable data asset to answer a question. 
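
Production entity resolution typically relies on dedicated tooling, but the core matching step can be sketched with Python's standard library. The threshold, fields, and records below are illustrative assumptions:

```python
from difflib import SequenceMatcher

# Hypothetical records from two systems describing the same customers.
records = [
    {"id": 1, "name": "Acme Corporation", "city": "Dayton"},
    {"id": 2, "name": "ACME Corp.", "city": "Dayton"},
    {"id": 3, "name": "Zenith Widgets", "city": "Columbus"},
]

def similarity(a: str, b: str) -> float:
    """Crude string similarity on normalized names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Pairwise comparison; real systems use blocking to avoid O(n^2) scans.
for i, r1 in enumerate(records):
    for r2 in records[i + 1:]:
        score = similarity(r1["name"], r2["name"])
        if score > 0.6 and r1["city"] == r2["city"]:
            print(f"Possible duplicates: {r1['id']} and {r2['id']} ({score:.2f})")
```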

3) Fill Gaps (Data Creation or Acquisition)

With your organization’s data inventoried and priorities identified, it’s time to start identifying what gaps exist in your data landscape in light of the business questions and challenges you are looking to address. First, ask use case-based questions. Based on your identified use cases, what data would an AI model need to answer topical questions that your organization doesn’t already possess?

At a higher level, gaps in use cases for your AI solution will also exist. To drive use case creation forward, consider the use of a data model, entity relationship diagram (ERD), or ontology to serve as the conceptual map on which all organizational data exists. With a complete data inventory, an ontology can help outline the process by which AI solutions would answer questions at a high level, thanks to being both machine and human-readable. By traversing the ontology or data model, you can design user journeys and create questions that form the basis of novel use cases.

Often, gaps are identified that require knowledge assets outside of data to fill. A data model or ontology can help identify related assets, as they function independently of their asset type. Moreover, standardized metadata across knowledge assets and asset types can enrich assets, link them to one another, and provide insights previously not possible. When instantiated in a solution alongside a knowledge graph, this forms a semantic layer where data assets, such as data products or metrics, gain context and maturity based on related knowledge assets. We were able to enhance the performance of a large retail chain’s analytics team through such an approach utilizing a semantic layer.

To fill these gaps, organizations can look to collect or create more data, as well as purchase publicly available datasets or incorporate open-source ones (the classic build-vs-buy decision). Another common method of filling identified organizational gaps is the creation of content (and other non-data knowledge assets) to fill a gap via the extraction of tacit organizational knowledge. This is a method that more chief data officers/chief data and AI officers (CDOs/CDAOs) are employing as their roles expand and reliance on structured data alone to gather insights and solve problems is no longer feasible.

As a whole, this process will drive future knowledge asset collection, creation, and procurement efforts, and consequently is a crucial step in ensuring data at large is AI-ready. If no such data exists for AI to rely on for certain use cases, users will be presented with unreliable, hallucination-based answers, or in a best-case scenario, no answer at all. As part of a solid governance plan, as mentioned earlier, continuing the gap analysis process after solution deployment empowers organizations to continually identify—and close—knowledge gaps, continuously improving data AI readiness and AI solution maturity.

4) Add Structure and Context (Semantic Components)

A key component of making data AI-ready is structure—not the format of the data itself (e.g., JSON, SQL, Excel), but the structure relating the data to use cases. In our previous blog, 'structure' referred to meaning added to knowledge assets, so the term could cause confusion here. In this section, 'structure' refers to the added, machine-readable context a semantic model gives to data assets, rather than the format in which they are stored. After all, data loses the structure of its storage format once it is taken out of that format (as happens when it is retrieved by AI).

Although we touched on one type of semantic model in the previous step, there are three semantic models that work together to ensure data AI readiness: business glossaries, taxonomies, and ontologies. Adding semantics to data for the purpose of getting it ready for AI allows an organization to help users understand the meaning of the data they’re working with. Together, taxonomies, ontologies, and business glossaries imbue data with the context needed for an AI model to fully grasp the data’s meaning and make optimal use of it to answer user queries. 

Let’s dive into the business glossary first. Business glossaries define business context-specific terms that are often found in datasets in a plaintext, easy-to-understand manner. For AI models, which are often trained on general-purpose corpora, these glossary terms can further assist in the selection of the correct data needed to answer a user query.

Taxonomies group knowledge assets into broader and narrower categories, providing a level of hierarchical organization not available with traditional business glossaries. This can help data AI readiness in manifold ways. By standardizing terminology (e.g., referring to “automobile,” “car,” and “vehicle” all as “Vehicles” instead of separately), data from multiple sources can be integrated more seamlessly, disambiguated, and deduplicated for clearer understanding. 
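
Applied at ingestion time, a taxonomy's synonym ring can be as simple as a lookup that maps variant terms to a preferred label. A minimal sketch, with an invented vocabulary:

```python
# Hypothetical synonym ring from a taxonomy: variant -> preferred label.
SYNONYMS = {
    "automobile": "Vehicles",
    "car": "Vehicles",
    "vehicle": "Vehicles",
    "truck": "Trucks",
}

def normalize(term: str) -> str:
    """Map a raw term to its preferred taxonomy label, if one exists."""
    return SYNONYMS.get(term.lower().strip(), term)

raw_tags = ["Car", "automobile", "Truck", "forklift"]
print([normalize(t) for t in raw_tags])
# -> ['Vehicles', 'Vehicles', 'Trucks', 'forklift']  (unknown terms pass through)
```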

Finally, ontologies provide the true foundation for linking related datasets to one another and allow for the definition of custom relationships between knowledge assets. When combining ontology with AI, organizations can perform inferences as a way to capture explicit data about what’s only implied by individual datasets. This shows the power of semantics at work, and demonstrates that good, AI-ready data enriched with metadata can provide insights at the same level and accuracy as a human. 
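
To show inference in its simplest form, the sketch below forward-chains a transitive relationship over triples like those an ontology-backed graph would hold. The facts and predicate are hypothetical, and a real deployment would use a reasoner or graph database rather than hand-rolled loops:

```python
# Facts drawn from two separate (hypothetical) datasets.
facts = {
    ("machine:press-04", "locatedIn", "plant:dayton"),  # from asset registry
    ("plant:dayton", "locatedIn", "region:midwest"),    # from facilities data
}

def infer_transitive(triples, predicate="locatedIn"):
    """Naive forward-chaining: if A->B and B->C, assert A->C."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (b2, p2, c) in list(inferred):
                if p1 == p2 == predicate and b == b2:
                    new = (a, predicate, c)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred - triples

print(infer_transitive(facts))
# -> {('machine:press-04', 'locatedIn', 'region:midwest')}: a fact no single
#    dataset records explicitly.
```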

Organizations who have not pursued developing semantics for knowledge assets before can leverage traditional semantic capture methods, such as business glossaries. As organizations mature in their curation of knowledge assets, they are able to leverage the definitions developed as part of these glossaries and dictionaries, and begin to structure that information using more advanced modeling techniques, like taxonomy and ontology development. When applied to data, these semantic models make data more understandable, both to end users and AI systems. 

5) Semantic Model Application (Labeling and Tagging) 

The data management community has more recently focused on the value of metadata and metadata-first architecture, and is working to catch up to the maturity displayed in the fields of content and knowledge management. By replicating methods found in content management systems and knowledge management platforms, data management professionals are retracing steps those fields have already taken. Currently, the data catalog is the primary platform where metadata is applied and stored for data assets.

To aggregate metadata for your organization's AI readiness efforts, it's crucial to look to data stewards as the owners of, and primary contributors to, this effort. Through the process of labeling data by populating fields such as asset descriptions, owners, assumptions made upon collection, and intended purposes, data stewards drive their data toward AI readiness while making tacit knowledge explicit and available to all. Additionally, applying metadata against a semantic model (especially taxonomies and ontologies) situates assets in their business context and connects related assets to one another, further enriching AI-generated responses to user prompts. While there are methods to apply metadata with less manual effort (such as auto-classification, which excels for content-based knowledge assets), structured data usually requires human subject matter experts to ensure accurate classification.
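
In practice, the labeling effort often starts from a shared template. The sketch below shows what an AI-readiness metadata record might contain; every field and value is a hypothetical example, not a prescribed standard:

```python
# A hypothetical AI-readiness metadata record a data steward might populate.
asset_metadata = {
    "assetName": "customer_churn_2024",
    "description": "Monthly churn rates by customer segment, all regions",
    "owner": "jane.doe@example.com",
    "collectionAssumptions": "Excludes trial accounts; EU rows anonymized",
    "intendedPurposes": ["churn analysis", "retention campaign targeting"],
    "taxonomyTags": ["Customers", "Retention"],
    "lastReviewed": "2025-09-15",
}

# A simple completeness check a catalog or pipeline could enforce.
REQUIRED = {"description", "owner", "collectionAssumptions", "intendedPurposes"}
missing = REQUIRED - asset_metadata.keys()
print("AI-ready metadata" if not missing else f"Missing fields: {missing}")
```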

With data catalogs and recent investments in metadata repositories, however, we've noticed a trend that we expect will continue to grow across organizations in the near future. Data system owners are increasingly keen to manage metadata and catalog their assets within the same systems where data is stored and used, adopting features that were previously exclusive to a data catalog. Major software providers are strategically acquiring or building semantic capabilities for this purpose, as underscored by the recent acquisition of multiple data management platforms by the creators of larger, flagship software products. As the data catalog shifts from a full, standalone application that stores and presents metadata to a component of a larger application that functions as a metadata store, the metadata repository is beginning to take hold as the predominant metadata management platform.

6) Address Access and Security (Unified Entitlements)

Applying semantic metadata as described above helps make data findable across an organization and contextualized with relevant datasets—but this needs to be balanced against security and entitlement considerations. Without regard to data security and privacy, AI systems risk bringing in data they shouldn't have access to because access entitlements are mislabeled or missing, leading to leaks of sensitive information.

A common example of when this can occur is user re-identification. Data points that seem innocuous independently can, when combined by an AI system, leak information about an organization's customers or users. With as few as 15 data points, information that was originally collected anonymously can be combined to identify an individual. Data elements like ZIP code or date of birth are not damaging on their own, but combined they can expose information about a user that should have been kept private. These concerns become especially critical in industries whose datasets cover small populations, such as rare disease treatment in healthcare.
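
A toy version of this risk can be demonstrated with a simplified k-anonymity check: count how many records share each combination of quasi-identifiers, and flag combinations that isolate a single person. All records below are invented:

```python
from collections import Counter

# Hypothetical "anonymized" records: no names, yet nearly unique rows.
records = [
    {"zip": "45402", "birth_year": 1987, "condition": "rare_disease_x"},
    {"zip": "45402", "birth_year": 1987, "condition": "flu"},
    {"zip": "45402", "birth_year": 1990, "condition": "flu"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year")

def k_anonymity(rows, keys):
    """Smallest group sharing one combination of quasi-identifiers.
    k == 1 means at least one individual is uniquely identifiable."""
    groups = Counter(tuple(r[k] for k in keys) for r in rows)
    return min(groups.values())

print(k_anonymity(records, QUASI_IDENTIFIERS))  # -> 1: re-identification risk
```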

EK’s unified entitlements work is focused on ensuring the right people and systems view the correct knowledge assets at the right time. This is accomplished through a holistic architectural approach with six key components. Components like a policy engine capture and enforce whether access to data should be given, while components like a query federation layer ensure that only data that is allowed to be retrieved is brought back from the appropriate sources.
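
The sketch below illustrates how a policy engine and a query federation layer might interact at a conceptual level; the roles, labels, and sources are assumptions made for the example, not EK's actual component design:

```python
# Hypothetical policy engine: role -> sensitivity labels it may access.
POLICIES = {
    "analyst":   {"public", "internal"},
    "clinician": {"public", "internal", "phi"},
}

def allowed(role: str, label: str) -> bool:
    """Policy engine decision: may this role see assets with this label?"""
    return label in POLICIES.get(role, set())

def federated_query(role: str, sources: dict) -> list:
    """Federation layer: query each source, keep only permitted rows."""
    results = []
    for rows in sources.values():
        results.extend(r for r in rows if allowed(role, r["label"]))
    return results

sources = {
    "hr_db":  [{"doc": "org chart", "label": "internal"}],
    "ehr_db": [{"doc": "patient record", "label": "phi"}],
}
print([r["doc"] for r in federated_query("analyst", sources)])  # -> ['org chart']
```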

The components of unified entitlements can be combined with other technologies like dark data detection, where a program scans an organization's data landscape for any unlabeled information that is potentially sensitive, so that neither human users nor AI solutions can access data that could result in compliance violations or reputational damage.

As a whole, data that exposes sensitive information to the wrong set of eyes is not AI-ready. Unified entitlements can form the layer of protection that ensures data AI readiness across the organization.

7) Maintain Quality While Iteratively Improving (Governance)

Governance serves a vital purpose in ensuring data assets become, and remain, AI-ready. With the introduction of AI to the enterprise, we are now seeing governance manifest itself beyond the data landscape alone. As AI governance begins to mature as a field of its own, it is taking on its own set of key roles and competencies and separating itself from data governance. 

While AI governance is meant to guide innovation and future iterations while ensuring compliance with both internal and external standards, data governance personnel are taking on the new responsibility of ensuring data is AI-ready based on requirements set by AI governance teams. Barring the existence of AI governance personnel, data governance teams are meant to serve as a bridge in the interim. As such, your data governance staff should define a common model of AI-ready data assets and related standards (such as structure, recency, reliability, and context) for future reference. 

Both data and AI governance personnel hold the responsibility of future-proofing enterprise AI solutions, ensuring they continue to align with the above steps and meet requirements. Specific to data governance, organizations should ask themselves, "How do we update our data governance plan to ensure all the steps remain applicable in perpetuity?" In parallel, AI governance should revolve around filling gaps in the solution's capabilities. Once AI solutions launch to a production environment and user base, more gaps in a solution's realm of expertise and capabilities will become apparent. As such, AI governance professionals need to stand up processes that use these gaps to continue identifying new needs for knowledge assets, data or otherwise, in perpetuity.

Conclusion

As we have explored throughout this blog, data is an extremely varied and unique form of knowledge asset with a new and disparate set of considerations to take into account when standing up an AI solution. However, following the steps listed above as part of an iterative process for implementation of data assets within said solution will ensure data is AI-ready and an invaluable part of an AI-powered organization.

If you’re seeking help to ensure your data is AI-ready, contact us at info@enterprise-knowledge.com.

The Evolution of Knowledge Management & Organizational Roles: Integrating KM, Data Management, and Enterprise AI through a Semantic Layer
https://enterprise-knowledge.com/the-evolution-of-knowledge-management-km-organizational-roles/ (Thu, 31 Jul 2025)

On June 23, 2025, at the Knowledge Summit Dublin, Lulit Tesfaye and Jess DeMay presented “The Evolution of Knowledge Management (KM) & Organizational Roles: Integrating KM, Data Management, and Enterprise AI through a Semantic Layer.” The session examined how KM roles and responsibilities are evolving as organizations respond to the increasing convergence of data, knowledge, and AI.

Drawing from multiple client engagements across sectors, Tesfaye and DeMay shared patterns and lessons learned from initiatives where KM, Data Management, and AI teams are working together to create a more connected and intelligent enterprise. They highlighted the growing need for integrated strategies that bring together semantic modeling, content management, and metadata governance to enable intelligent automation and more effective knowledge discovery.

The presentation emphasized how KM professionals can lead the way in designing sustainable semantic architectures, building cross-functional partnerships, and aligning programs with organizational priorities and AI investments. Presenters also explored how roles are shifting from traditional content stewards to strategic enablers of enterprise intelligence.

Session attendees walked away with:

  • Insight into how KM roles are expanding to meet enterprise-wide data and AI needs;
  • Examples of how semantic layers can enhance findability, improve reuse, and enable automation;
  • Lessons from organizations integrating KM, Data Governance, and AI programs; and
  • Practical approaches to designing cross-functional operating models and governance structures that scale.

Semantic Search Advisory and Implementation for an Online Healthcare Information Provider
https://enterprise-knowledge.com/semantic-search-advisory-and-implementation-for-an-online-healthcare-information-provider/ (Tue, 22 Jul 2025)

The Challenge

The medical field is an extremely complex space, with thousands of concepts that are referred to by vastly different terms. These terms can vary across regions, languages, areas of practice, and even from clinician to clinician. Additionally, patients often communicate with clinicians using language that reflects their more elementary understanding of health. This complicates the experience for patients when trying to find resources relevant to certain topics such as medical conditions or treatments, whether through search, chatbots, recommendations, or other discovery methods. This can lead to confusion during stressful situations, such as when trying to find a topical specialist or treat an uncommon condition.

A major online healthcare information provider engaged EK to improve both their consumer-facing and clinician-facing natural language search and discovery platforms in order to deliver faster and more relevant results and recommendations. Their consumer-facing web pages aimed to connect consumers with healthcare providers when searching for a condition, with consumers often using terms or phrases that may not exactly match medical terms. In contrast, the clinicians who purchased licenses to the provider's content required a fast and accurate method of searching for content on various conditions. They work in time-sensitive settings where rapid access to relevant content could save a patient's life, and they often use synonymous acronyms or domain-specific jargon that complicates the search process. The client desired a solution that could disambiguate between concepts and match certain concepts to a list of potential conditions. EK was tasked with refining these search processes to provide both sets of end users with accurate content recommendations.

The Solution

Leveraging both industry and organizational taxonomies for clinical topics and conditions, EK architected a search solution that could take both the technical terms preferred by clinicians and the more conversational language used by consumers and match them to conditions and relevant medical information. 

To improve search while maintaining a user-friendly experience, EK worked to:

  1. Enhance keyword search through metadata enrichment;
  2. Enable natural language search using large language models (LLMs) and vector search techniques; and
  3. Introduce advanced search features post-initial search, allowing users to refine results with various facets.

The core components of EK’s semantic search advisory and implementation included:

  1. Search Solution Vision: EK collaborated with client stakeholders to determine and implement business and technical requirements with associated search metrics. This would allow the client to effectively evaluate LLM-powered search performance and measure levels of improvement. This approach focused on making the experience faster for clinicians searching for information and for consumers seeking to connect with a doctor. This work supported the long-term goal of improving the overall experience for consumers using the search platform. The choice of LLM and associated embeddings played a key role: by selecting the right embeddings, EK could improve the association of search terms, enabling more accurate and efficient connections, which proved especially critical during crisis situations. 
  2. Future State Roadmap: As part of the strategy portion of this engagement, EK worked with the client to create a roadmap for deploying the knowledge panel to the consumer-facing website in production. This roadmap involved deploying and hosting the content recommender, further expanding the clinical taxonomy, adding additional filters to the knowledge panel (such as insurance networks and location data), and search features such as autocomplete and type-ahead search. Setting future goals after implementation, EK suggested the client use machine learning methods to classify consumer queries based on language and predict their intent, as well as establish a way to personalize the user experience based on collected behavioral data/characteristics.
  3. Keyword and Natural Language Search Enhancement: EK developed a gold standard template for client experts in the medical domain to provide the ideal expected search results for particular clinician queries. This gold standard served as the foundation for validating the accuracy of the search solution in pointing clinicians to the right topics. Additionally, EK used semantic clustering and synonym analysis in order to identify further search terms to add as synonyms into the client’s enterprise taxonomy. Enriching the taxonomy with more clinician-specific language used when searching for concepts with natural language improved the retrieval of more relevant search results.
  4. Semantic Search Architecture Design and LLM Integration: EK designed and implemented a semantic search architecture to support the solution's search features, connecting the client's existing taxonomy and ontology management system (TOMS), the client's search engine, and a new LLM. Leveraging the taxonomy stored in the TOMS and using the LLM to match search terms and taxonomy concepts based on similarity enriched the accuracy and contextualization of search results (a simplified version of this matching step is sketched after this list). EK also wrote custom scripts to evaluate the LLM's understanding of medical terminology and generate evaluation metrics, allowing for performance monitoring and continuous improvement to keep the client's search solution at the forefront of LLM technology. Finally, EK created a bespoke, reusable benchmark for LLM scores, evaluating how well a given model matched natural language queries to clinical search terms and allowing the client to select the highest-performing model for consumer use.
  5. Semantic Knowledge Panel: To demonstrate the value this technology would bring to consumers, EK developed a clickable, action-oriented knowledge panel that showcased the envisioned future-state experience. Designed to support consumer health journeys, the knowledge panel guides users through a seamless journey – from conversational search (e.g. “I think I broke my ankle”), to surfacing relevant contextual information (such as web content related to terms and definitions drawn from the taxonomy), to connecting users to recommended clinicians and their scheduling pages based on their ability to treat the condition being searched (e.g. An orthopedist for a broken ankle). EK’s prototype leveraged a taxonomy of tagged keywords and provider expertise, with a scoring algorithm that assessed how many, and how well, those tags matched the user’s query. This scoring informed a sorted display of provider results, enabling users to take direct action (e.g. scheduling an appointment with an orthopedist) without leaving the search experience.
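
As referenced in component four above, here is a simplified sketch of matching a conversational query to taxonomy concepts by embedding similarity. The `embed` function is a stand-in toy encoder so the example runs on its own; a real implementation would call the chosen LLM's embedding model, and all concepts and synonyms are illustrative:

```python
import math

def embed(text: str) -> list[float]:
    """Placeholder for a real embedding model (e.g., a sentence encoder).
    Here: a toy bag-of-characters vector so the sketch runs standalone."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Hypothetical clinical taxonomy concepts with synonyms.
concepts = {
    "Ankle Fracture": ["broken ankle", "ankle fracture", "fractured ankle"],
    "Myocardial Infarction": ["heart attack", "MI", "myocardial infarction"],
}

query = "I think I broke my ankle"
q_vec = embed(query)

# Score each concept by its best-matching synonym, keep the winner.
best = max(
    concepts,
    key=lambda c: max(cosine(q_vec, embed(s)) for s in concepts[c]),
)
print(best)  # -> Ankle Fracture
```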

The EK Difference

EK’s expertise in semantic layers, solution architecture, artificial intelligence, and enterprise search came together to deliver a bespoke, unified solution that returned more accurate, context-aware information for clinicians and consumers. By collaborating with key medical experts to enrich the client's enterprise taxonomy, EK's semantic experts shared unique insights on LLMs, combined with their experience applying taxonomy and semantic similarity in natural language search use cases, to place the client in the best position to enable accurate search. EK also upskilled the client's technical team on semantic capabilities and the architecture of the knowledge panel through knowledge transfers and paired programming, so that they could continue to maintain and enhance the solution in the future.

Additionally, EK’s solution architects, possessing deep knowledge of enterprise search and artificial intelligence technologies, were uniquely positioned to recommend the most advantageous way to seamlessly integrate the client's TOMS and existing search engine with an LLM specifically developed for information retrieval. While a general-purpose LLM could perform these tasks to some extent, EK helped design a purpose-built semantic search solution leveraging a specialized LLM that better identified and disambiguated user terms and phrases.

Finally, EK’s search experts defined key search metrics with the client's team, enabling them to closely monitor performance over time, identify trends, and suggest improvements to match. These search improvements resulted in a solution the client could be confident in and trust to be accurate.

The Results

The delivery of a semantic search prototype with a clear path to a production, web-based solution resulted in the opportunity for greatly augmented search capabilities across the organization’s products. Overall, this solution allowed both healthcare patients and clinicians to find exactly what they are looking for using a wide variety of terms.

As a result of EK’s semantic search advisory and implementation efforts, the client was able to:

  1. Empower potential patients to use a web-based semantic search platform to search for specialists who can treat their conditions and quickly and easily find care;
  2. Streamline the content delivery process in critical, time-sensitive situations such as emergency rooms by providing rapid and accurate content that highlights and elaborates on potential diagnoses and treatments to healthcare professionals; and
  3. Identify potential data and metadata gaps in the healthcare information database that the client relies on to populate its website and recommend content to users.

Looking to improve your organization’s search capabilities? Want to see how LLMs can power your semantic ecosystem? Learn more from our experience or contact us today.

EK Teaching Upcoming KM & AI Course with KMI
https://enterprise-knowledge.com/2025-upcoming-km-ai-course/ (Thu, 29 May 2025)

EK is partnering with the Knowledge Management Institute to present the two-day Certified Knowledge Specialist (CKS) course in Knowledge Management (KM) and Enterprise AI. The course will be hosted virtually on July 15th and 16th, 2025. The full course overview and registration information can be found on the KMI event page.

The two-day certification course is the newest in KMI’s Certified Knowledge Specialist (CKS) offerings. The course provides a background on key concepts including the fundamentals of Enterprise AI and its application to KM, the importance of KM foundations in AI, identifying organizational readiness for AI, and industry best practices for designing, planning, and implementing Enterprise AI solutions.

Designed to encourage participation and collaboration, this course will be interactive, involving participants in a series of facilitated discussions and activities.

EK experts Bonnie Griffin (Taxonomy Consultant), Ethan Hamilton (Data Engineer), Tatiana Baquero Cakici (Senior KM and Taxonomy Consultant), Fernando Aguilar Islas (Senior Semantic Solutions Consultant), and Elliott Risch (Semantic Solutions Consultant) will serve as instructors and facilitators for the course, bringing unique perspectives and expertise based on their areas of specialization.

More information and the registration form may be found on the KMI event page.

[Image: Objectives of the KM & AI course]

About Enterprise Knowledge

Enterprise Knowledge (EK) is a services firm that integrates Knowledge Management, Information Management, Information Technology, and Agile approaches to deliver comprehensive solutions. Our mission is to form true partnerships with our clients, listening and collaborating to create tailored, practical, and results-oriented solutions that enable them to thrive and adapt to changing needs.

About the Knowledge Management Institute

The KM Institute is a global leader in Knowledge Management certifications and training, with thousands certified since 2001 and classes delivered in several regional time zones annually.
KMI programs provide what expert KM practitioners need to know to carry out successful enterprise KM, and what all KM professionals need to know for greater career success in the Knowledge Age.

EK’s Joe Hilger, Lulit Tesfaye, Sara Nash, and Urmi Majumder to Speak at Data Summit 2025
https://enterprise-knowledge.com/tesfaye-and-majumder-speaking-at-data-summit-conference-2025/ (Thu, 27 Mar 2025)

Enterprise Knowledge’s Joe Hilger, Chief Operating Officer, and Sara Nash, Principal Consultant, will co-present a workshop, and Lulit Tesfaye, Partner and Vice President of Knowledge and Data Services, and Urmi Majumder, Principal Data Architect, will present a conference session at the Data Summit Conference in Boston. The premiere data management and analytics conference will take place May 14-15 at the Hyatt Regency Boston, with pre-conference workshops on May 13, and will feature workshops, panel discussions, and provocative talks from industry leaders.

Hilger and Nash will be giving an in-person half-day workshop titled “Building the Semantic Layer of Your Data Platform,” on Tuesday, May 13. Semantic layers stand out as a key approach to solving business problems for organizations grappling with the complexities of managing and understanding the meaning of their content and data. Join Hilger and Nash to learn what a semantic layer is, how it is implemented, and how it can be used to support your Enterprise AI, search, and governance initiatives. Participants will get hands-on experience building a key component of the semantic layer, knowledge graphs, and the foundational elements required to scale it within an enterprise.

Tesfaye and Majumder’s session, “Implementing Semantic Layer Architectures,” on May 15 will focus on the real-world applications of how semantic layers enable generative AI (GenAI) to integrate organizational context, content, and domain knowledge in a machine-readable format, making them essential for enterprise-scale data transformation. Tesfaye and Majumder will highlight how enterprise AI can be realized through semantic components such as metadata, business glossaries, taxonomy/ontology, and graph solutions – uncovering the technical architectures behind successful semantic layer implementations. Key topics include federated metadata management, data catalogs, ontologies and knowledge graphs, and enterprise AI infrastructure. Attendees will learn how to establish a foundation for explainable GenAI solutions and facilitate data-driven decision-making by connecting disparate data and unstructured content using a semantic layer.

For further details and registration, please visit the conference website.

Enterprise AI Meets Access and Entitlement Challenges: A Framework for Securing Content and Data for AI
https://enterprise-knowledge.com/enterprise-ai-meets-access-and-entitlement-challenges-a-framework-for-securing-content-and-data-for-ai/ (Fri, 31 Jan 2025)

In today’s digital landscape, organizations face a critical challenge: how to leverage the power of Artificial Intelligence (AI) while ensuring their knowledge assets remain secure and accessible to the right people at the right time. As enterprise AI systems become more sophisticated, the intersection of access management and enterprise AI emerges as a crucial frontier for organizations seeking to maximize their AI investments while maintaining robust security protocols.

This blog explores how the integration of secure access management within an enterprise AI framework can transform enterprise AI systems from simple automation tools into secure, context-aware knowledge platforms. We'll discuss how modern Role-Based Access Control (RBAC), enhanced by AI capabilities, creates a dynamic ecosystem where information flows securely to those who need it most.

Understanding Enterprise AI and Access Control

Enterprise AI represents a significant advancement in how organizations process and utilize their data, moving beyond basic automation to intelligent, context-aware systems. This awareness becomes particularly powerful when combined with sophisticated access management systems. Role-Based Access Control (RBAC) serves as a cornerstone of this integration, providing a framework for regulating access to organizational knowledge based on user roles rather than individual identities. Modern RBAC systems, enhanced by AI, go beyond static permission assignments to create dynamic, context-aware access controls that adapt to organizational needs in real time.

Key Features of AI-Enhanced RBAC

  1. Dynamic Role Assignment: AI systems continuously analyze user behavior, responsibilities, and organizational context to suggest and adjust role assignments, ensuring access privileges remain current and appropriate.
  2. Intelligent Permission Management: Machine learning algorithms help identify patterns in data usage and access requirements, automatically adjusting permission sets to optimize security while maintaining operational efficiency, thereby upholding the principles of least privilege in the organization.
  3. Contextual Access Control: The system considers multiple factors including time, location, device type, and user behavior patterns to make real-time access decisions.
  4. Automated Compliance Monitoring: AI-powered monitoring systems track access patterns and flag potential security risks or compliance issues, enabling proactive risk management.

This integration of enterprise AI and RBAC creates a sophisticated framework where access controls become more than just security measures – they become enablers of knowledge flow within the organization.
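
To illustrate how these features might combine at decision time, the sketch below evaluates a request against role permissions plus contextual signals. The roles, labels, and thresholds are invented for the example, not drawn from any specific product:

```python
from datetime import time

def contextual_access(role: str, resource_label: str, ctx: dict) -> bool:
    """Grant access only when role, sensitivity, and context all align."""
    role_permits = {
        "engineer": {"internal"},
        "finance":  {"internal", "restricted"},
    }
    if resource_label not in role_permits.get(role, set()):
        return False
    # Context checks: restricted data requires a managed device
    # and a request during business hours.
    if resource_label == "restricted":
        if not ctx.get("managed_device", False):
            return False
        if not time(7, 0) <= ctx["request_time"] <= time(19, 0):
            return False
    return True

print(contextual_access(
    "finance", "restricted",
    {"managed_device": True, "request_time": time(14, 30)},
))  # -> True
```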

Secure Access Management for Enterprise AI

Integrating access management with enterprise AI creates a foundation for secure, intelligent knowledge sharing by effectively capturing and utilizing organizational expertise.

Modern enterprises require a thoughtful approach to incorporating domain expertise into AI processes while maintaining strict security protocols. This integration is particularly crucial where domain experts transform their tacit knowledge into explicit, actionable frameworks that can enhance AI system capabilities. The AI-RBAC framework embodies this principle through two key components that work in harmony:

  1. Adaptable Rule Foundation (ARF) for systematic content classification
  2. Expert-driven Organizational Role Mapping for secure knowledge sharing

While ARF provides the structure for explicit knowledge through content tagging, the role mapping performed by Subject Matter Experts (SMEs) injects critical domain intelligence into the organizational knowledge framework, creating a robust foundation for secure knowledge sharing. This combination ensures that organizational knowledge is not only properly categorized but also securely accessible to the right people at the right time, effectively bridging the gap between AI-driven classification and human expertise.

The Adaptable Rule Foundation (ARF) system exemplifies this integration by classifying and managing data across three distinct levels:

  • Core Level: Includes fundamental organizational knowledge and critical business rules, defined with input from domain SMEs.
  • Common Level: Contains shared knowledge assets and cross-departmental information, with SME guidance on scope.
  • Unique Level: Manages specialized knowledge specific to individual departments or projects, as defined by SMEs.

SMEs play a crucial role in adjusting the scope and definitions of the Core, Common, and Unique levels to inject their domain expertise into the ARF framework. This ensures the classification system aligns with real-world organizational knowledge and needs.

This three-tiered approach, powered by AI, enables organizations to:

  • Automatically classify incoming data based on sensitivity and relevance
  • Dynamically apply appropriate access controls using expert-driven organizational role mapping
  • Enable domain experts to contribute knowledge securely without requiring technical expertise
  • Adapt security measures in real-time based on organizational changes

The ARF system’s intelligence goes beyond traditional access management by understanding not just who should access information, but how that information fits into the broader organizational knowledge ecosystem. This contextual awareness ensures that security measures enhance, rather than hinder, knowledge sharing.
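
A highly simplified sketch of this flow: classify an asset into a level, then derive visibility from an SME-maintained mapping of levels to roles. The rules and role names are illustrative assumptions, not the actual ARF implementation:

```python
# SME-defined keyword rules per ARF level (illustrative only).
LEVEL_RULES = {
    "core":   ["business rule", "policy"],
    "common": ["cross-department", "shared"],
}

# SME-maintained mapping of ARF levels to organizational roles.
LEVEL_TO_ROLES = {
    "core":   {"executives", "governance"},
    "common": {"all_staff"},
    "unique": {"owning_department"},
}

def classify(asset_description: str) -> str:
    """Assign an ARF level; anything unmatched is department-specific."""
    text = asset_description.lower()
    for level, keywords in LEVEL_RULES.items():
        if any(kw in text for kw in keywords):
            return level
    return "unique"

desc = "Shared cross-department glossary of customer terms"
level = classify(desc)
print(level, LEVEL_TO_ROLES[level])  # -> common {'all_staff'}
```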

The Future of Enterprise AI

As organizations continue to leverage AI capabilities, the interaction between access management and enterprise AI becomes increasingly crucial. This integration ensures that AI systems serve as secure, intelligent platforms for knowledge sharing and decision-making. The combination of dynamic access controls and an enterprise AI framework creates an environment where:

  • Security becomes an enabler rather than a barrier to innovation
  • Domain expertise naturally flows into AI systems through secure channels
  • Organizations can adapt quickly to changing knowledge needs while maintaining security
  • AI systems become more contextually aware and organizationally aligned

If your organization is looking to enhance AI capabilities while ensuring robust data security, our enterprise AI access management framework offers a powerful solution. Contact us to learn how to transform your organization’s knowledge infrastructure into a secure, intelligent ecosystem that drives innovation and growth.

The Role of Semantic Layers with LLMs
https://enterprise-knowledge.com/the-role-of-semantic-layers-with-llms/ (Wed, 10 Apr 2024)

In today’s business landscape, Large Language Models (LLMs) are essential tools for driving innovation, streamlining operations, and unlocking new opportunities for growth. A Large Language Model, or LLM, is an advanced AI model designed to perform Natural Language Processing (NLP) tasks, including interpreting, translating, predicting, and generating coherent, contextually relevant text. One core benefit of LLMs is the ability to quickly generate insights from a large corpus of documents while using any context provided in a prompt. However, all LLMs come with challenges that can be difficult to address without the proper expertise and technology.

Challenges

[Image: Sample architecture showing LLMs as an orchestrator between data sources and results]

While LLMs are a powerful means of interfacing with an organization’s information, the effectiveness of LLMs is often hampered by the complexity and disorganization of the data they rely on. The challenge lies not only in processing vast amounts of information but also in ensuring that this information is accurate, relevant, and structured in a way that the models can effectively learn from. If LLMs are trained on information and data that lack those characteristics, they will produce low-quality results. Furthermore, without a model of how entities within a subject area — such as finance — relate to one another, the LLM may default to an inaccurate, generalist approach to responses based on the training data, causing it to miss relevant information and references. Finally, even in the case where these two problems are solved, there is the issue of hallucinations: the phenomena wherein an LLM will produce false or divergent answers that are unsupported by the underlying training data. Given the range of errors that can crop up when using an LLM, how can an organization prepare their LLM to be trustworthy enough for enterprise use?

Solution

[Image: Architecture that includes a semantic layer providing context to a large language model]

This is where semantic layers come into play. A semantic layer is a standardized framework that organizes and abstracts organizational data. A semantic layer also solves the fundamental disconnect that businesses face between collecting data and turning that data into actionable information by providing standard models and a consumption architecture for handling and connecting structured and unstructured organizational data. In doing so, the specific domain knowledge and expertise of the enterprise is captured in a way that is both machine and human readable, enabling better decision-making and insight generation. This interoperability allows a semantic layer to act as the bridge between your raw data and the sophisticated analytical capabilities of LLMs by structuring the underlying data to improve the coherence and explainability of an LLM’s outputs.

Benefits of a Semantic Layer for LLMs

1: Data Quality and Accessibility

A semantic layer organizes and abstracts organizational data across formats, making it accessible for both humans and machines. Within a semantic layer, data and high quality models can be tagged for LLM training and consumption. This means training on data that is not only high-quality but also rich in contextual and conceptual relationships. This improved data accessibility accelerates the training process and enhances the model’s ability to understand and generate nuanced, informed text. 

For example, consider a healthcare LLM designed to provide diagnostic suggestions based on patient symptoms. With a semantic layer, patient data, medical histories, and research articles are organized and tagged with contextual relationships, such as symptoms associated with specific conditions. This way of organizing information allows the LLM to access a rich, interconnected dataset during training and operation, enabling it to recognize subtle nuances in patient symptoms and suggest diagnoses that reflect a deeper understanding of medical conditions and their manifestations. As a result, the LLM’s suggestions are not only relevant but also grounded in a comprehensive view of available medical knowledge, demonstrating the semantic layer’s role in enhancing the quality and reliability of its outputs.

By providing a standardized framework for data interpretation, semantic layers enable LLMs to access higher-quality data, leading to improved decision-making, enhanced customer experiences, and more accurate generated content. For businesses, this means being able to leverage data assets more effectively and reduce time spent looking for accurate information. This improved data discovery both accelerates the training process and enhances the model’s ability to understand and generate nuanced text.

2: Contextual Understanding

A semantic layer is not unique in its ability to organize and make available data across formats. A data catalog or a data fabric can be an effective means of delivering high quality data to consumers and machine learning models. However, semantic layers pull away from the competition in their ability to capture heterogeneous sources of data and enrich them with semantics and contextual information. The flexible data models, standardized vocabularies, quality metadata, and business context captured as a part of a semantic layer allows for LLMs and other computer applications to understand a business domain on a foundational level. 

For example, imagine a multinational corporation that utilizes an LLM to streamline its customer service. This corporation operates in various countries, each with its unique set of products, services, and customer interactions. A semantic layer can organize customer feedback, service tickets, and product descriptions, enriching this data with contextual information such as geographical location, cultural nuances, and language variations. By using this semantically rich dataset, the LLM can understand not just the explicit content of customer queries but also the implicit context, such as regional product preferences or local market trends. As a result, the LLM can provide more accurate, context-aware responses to customer inquiries, reflecting an understanding that goes beyond words to grasp the subtleties of global business operations.

When a semantic layer serves as the backbone for the LLM's data consumption, it ensures that training data comes from trusted, high-quality sources enriched with domain context. This foundational context empowers LLMs to generate outputs based on a more comprehensive understanding of the subject matter. By capturing and connecting content based on business or domain meaning and value, LLMs can produce more accurate and relevant outputs, tailored to specific industry needs or knowledge domains.

3: Explainable Results

Even with high-quality data and business domain understanding, "hallucinations" remain a concern when trying to use an LLM as a trustworthy source of information. LLMs hallucinate for many reasons, including a lack of sufficient context or specific tagging in their training data. When the data lacks robust contextual information and nuanced tagging, the LLM has a limited understanding of the relationships between different data points. This limitation can lead to outputs that are not grounded in factual information or logical inference, as the model attempts to 'fill in the gaps' without a robust framework to guide its responses.

The incorporation of a semantic layer can help to cut down on the prevalence of hallucinations and improve output quality by enriching the LLM’s training environment with deeply contextualized and well-tagged data. As we have seen by now, semantic layers ensure that data is not only of high quality but also embedded with lots of contextual information and relationships between data that keep the model more grounded in reality. Furthermore, an LLM trained with the aid of a semantic layer can be prompted to include explanations of its outputs, detailing the data sources it pulled from when generating output and the contextual reasons behind the selection of these sources. This level of transparency allows users to evaluate the validity of the generated content, distinguishing between well-founded information and potential hallucinations. 
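
One common way to operationalize this transparency is to pass the model semantically tagged passages along with instructions to cite them. The sketch below assembles such a prompt; the retrieved passages and instruction wording are hypothetical stand-ins rather than a specific product's approach:

```python
# Hypothetical passages retrieved from a semantic layer, with provenance.
passages = [
    {"id": "S1", "source": "Q3 revenue report", "text": "Q3 revenue grew 12%."},
    {"id": "S2", "source": "Finance glossary", "text": "ARR: annual recurring revenue."},
]

question = "How did revenue change in Q3?"

# Build a grounded prompt: sources first, strict citation instructions second.
context = "\n".join(f"[{p['id']}] ({p['source']}) {p['text']}" for p in passages)
prompt = (
    "Answer using ONLY the sources below. Cite source IDs in brackets. "
    "If the sources do not contain the answer, say so.\n\n"
    f"Sources:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # pass to the LLM of choice; citations make outputs auditable
```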

Hallucinations will always remain a potential issue with LLMs due to the nature of how they creatively generate output, but semantic layers offer a way to reduce the likelihood of hallucinations by providing better training data and enhancing the trustworthiness and reliability of LLM outputs through explainability.

Conclusion

In this article, we have touched on some of the potential pitfalls of using an LLM, as well as how a semantic layer can be used in concert with an LLM to mitigate those issues and improve the quality of its output. For issues of data quality, contextual business understanding, and explainability of results, semantic layers stand out as a comprehensive solution to the most pressing challenges of LLMs. Semantic layers empower LLMs to serve not just as text generators but as sophisticated tools for knowledge discovery, decision-making, and automated reasoning. Through their components, including ontologies and knowledge graphs, semantic layers enrich LLMs with the ability to understand complex relationships and concepts, paving the way for advanced applications in areas such as legal analysis, medical research, and financial forecasting. In short, integrating semantic layers with LLMs presents a strategic advantage, allowing businesses not only to overcome the challenges of data complexity, but also to realize the full potential of AI for competitive gain while minimizing risk.

If you want to learn more about how your business can take the next step in building a semantic layer, leveraging LLMs, and developing enterprise AI, contact us to get started today!

The post The Role of Semantic Layers with LLMs appeared first on Enterprise Knowledge.

The Top 3 Ways to Implement a Semantic Layer https://enterprise-knowledge.com/the-top-3-ways-to-implement-a-semantic-layer/ Tue, 12 Mar 2024 16:09:47 +0000

Over the last decade, we have seen some of the most exciting innovations emerge within the enterprise knowledge and data management spaces. Those innovations with real staying power have proven to drive business outcomes and prioritize intuitive user engagement. Among them are the semantic layer (for breaking down the silos between knowledge and data) and, of course, generative AI (a topic often top of mind on today’s strategic roadmaps). Both have one thing in common – they show promise in addressing the age-old challenge of unlocking business insights from organizational knowledge and data without the complexities of expensive data, system, and content migrations.

In 2019, Gartner published research emphasizing the end of “a single version of the truth” for data and knowledge management, and predicting that by 2026, “active metadata” will power over 50% of BI and analytics tools and solutions, providing a structured and consistent approach to connecting rather than consolidating data.

By employing semantic components and standards (through metadata, business glossaries, taxonomy/ontology, and graph solutions), a semantic layer arms organizations with a framework to aggregate and connect siloed data and content, explicitly provide business context for data, and serve as the layer for explainable AI. Once connected, independent business units can use the organization’s semantic layer to locate and work with not only enterprise data but their own, unit-specific data as well.
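
To give a feel for what one of these semantic components looks like in practice, here is a minimal sketch, in Python with rdflib, of a single business glossary concept expressed in SKOS, with the differing labels two hypothetical business units might use for it; all URIs and labels are placeholders.

```python
# A small sketch of the kind of standards-based vocabulary a semantic layer
# rests on: one SKOS concept shared across units, each local label mapped to
# the same enterprise concept. The namespace and terms are illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EK = Namespace("http://example.com/glossary/")

g = Graph()
g.add((EK.Customer, SKOS.prefLabel, Literal("Customer", lang="en")))
g.add((EK.Customer, SKOS.altLabel, Literal("Client")))    # label used by Sales
g.add((EK.Customer, SKOS.altLabel, Literal("Account")))   # label used by Finance
g.add((EK.Customer, SKOS.definition,
       Literal("A party that has purchased at least one product or service.")))

# Any system can now resolve its local term to the shared enterprise concept.
print(g.serialize(format="turtle"))
```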

Incorporating a semantic layer into enterprise architecture is not just a theoretical concept; it is a practical enhancement that transforms how organizations harness their data. Over the last ten years, we have worked with a diverse set of organizations to design and implement the components of a semantic layer. Many organizations we work with support a data architecture based on relational databases, data warehouses, and/or a wide range of content management, cloud, or hybrid-cloud applications and systems that drive data analysis and analytics capabilities. These models do not mean that organizations need to start from scratch or overhaul their working enterprise architecture in order to adopt a semantic layer. On the contrary, it is more effective to shift the focus of metadata and data modeling efforts toward adding the models and standards that capture business meaning and context, which provides the least disruptive starting point.

Though we’ve been implementing the individual components for over a decade, it is only in the last couple of years that we have been integrating them all to form a semantic layer. Maturing approaches, technologies, and awareness have combined with the growing needs of organizations and the AI revolution to create this opportunity now.

In this article, I will explore the top three approaches we commonly see for weaving this data and knowledge layer into the fabric of enterprise architecture, highlighting the applications and organizational considerations for each.

1. A Metadata-First Logical Architecture: Using Enterprise Semantic Layer Solutions

This is the most common and scalable model we see across various industries and use cases for enterprise-wide applications. 

Architecture 

Implementing a semantic layer through a metadata-first logical architecture involves creating a logical layer that abstracts the underlying data sources by focusing on metadata. This approach establishes an organizational logical layer through standardized definitions and governance at the enterprise level, while allowing additional, decentralized components and solutions to be “pushed” to, “published” to, or “pulled” from specific business units, use cases, and systems/applications at a set cadence.

Pros

Using middleware solutions like a data catalog or an ontology/graph storage, organizations are able to create a metadata layer that abstracts the underlying complexities, offering a unified view of data in real time based on metadata only. This allows organizations to abstract access, ditch application-centric approaches, and analyze data without the need for physical consolidation. This model effectively leverages the capabilities of standalone systems or applications to manage semantic layer components (such as metadata, taxonomies, glossaries, etc.) while providing centralized storage for semantic components to create a shared, enterprise semantic layer. This approach ensures consistency in core or shared data definitions to be managed at the enterprise level while providing the flexibility for individual teams to manage their unique secondary and group-level semantic data requirements.
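
The sketch below illustrates the “pull” variant of this model, assuming each business unit publishes a metadata-only description of its holdings; the endpoints are placeholders, and rdflib stands in for whatever catalog or graph middleware an organization actually uses.

```python
# A minimal sketch of the "pull" flavor of a metadata-first layer: the
# enterprise layer periodically pulls metadata-only descriptions from each
# unit's endpoint and merges them into one logical graph, leaving the
# underlying data in place. The URLs below are placeholders, not real systems.
from rdflib import Graph

SOURCES = [
    "https://sales.example.com/metadata.ttl",     # hypothetical unit endpoints
    "https://finance.example.com/metadata.ttl",
]

enterprise_view = Graph()
for url in SOURCES:
    unit_graph = Graph()
    unit_graph.parse(url, format="turtle")  # fetch metadata, never the data itself
    enterprise_view += unit_graph           # rdflib supports graph union via +=

# `enterprise_view` is now a unified, queryable map of what exists and where,
# while each dataset stays in its system of record.
print(len(enterprise_view), "metadata triples in the logical layer")
```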

Cons

Implementing a semantic layer as a metadata architecture or logical layer across enterprise systems requires phased planning and incremental development to maintain cohesion and prevent fragmentation of shared metadata and semantic components across business groups and systems. Additionally, depending on the selected approach for synchronizing the layer with downstream/upstream applications (push vs. pull), data orchestration and ETL pipelines will need to account for centralized vs. decentralized orchestration that ensures ongoing alignment.

Best Suited For

This approach is the one we deploy most often, and it is well suited for organizations that want to balance standardization with the need for business-unit or application-level agility in data processing and operations in different parts of the business.

2. Built-for-Purpose Architecture: Individual Tools with Semantic Capabilities

This model allows for greater flexibility and autonomy at the business unit or functional level. 

Architecture 

This architecture approach is a distributed model that leverages each standalone system’s or application’s capabilities to own semantic layer components – without a connected technical framework or governance structure at the enterprise level for shared semantics. With this approach, organizations typically identify establishing semantic standards as a strategic initiative, but each individual team or department (marketing, sales, product, data teams, etc.) is responsible for creating, executing, and managing its own semantic components (metadata, taxonomies, glossaries, graphs, etc.), tailored to its specific needs and requirements.

Most knowledge and data solutions, such as content or document management systems (CMS/DMS), digital asset management (DAM) systems, customer relationship management (CRM) platforms, and data analytics/BI dashboards (such as Tableau and Power BI), have inherent capabilities to manage simple semantic components (although with varying levels of maturity and feature flexibility). This decentralized architecture results in multiple system-level semantic layers. Take SharePoint, an enterprise document and content collaboration platform, as an example. For organizations in the early stages of growing their semantic capabilities, we leverage SharePoint’s Term Store for structuring metadata and managing taxonomies, which allows teams to create a unified language and fosters consistency across documents, lists, and libraries. This aids information retrieval and also enhances collaboration by ensuring a shared understanding of key terms. Salesforce, meanwhile, a renowned CRM platform, offers semantic capabilities that enable teams across sales, marketing, and customer service to define and interpret customer data consistently across its various modules. A tool-agnostic sketch of the term store concept follows below.
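
To illustrate the underlying idea without tying it to any one product’s API, here is a deliberately simplified sketch of a term store in plain Python: every tag a user enters must resolve to a governed preferred term, which is what keeps labels consistent across documents, lists, and libraries. The vocabulary is invented.

```python
# A deliberately simplified, tool-agnostic sketch of what a system-level term
# store provides: one governed vocabulary that every tag must resolve to.
# Nothing here uses a real SharePoint or Salesforce API; it only illustrates
# the concept of enforced shared terms within a single platform.
TERM_STORE = {
    "invoice": {"synonyms": {"bill", "statement"}},
    "contract": {"synonyms": {"agreement", "msa"}},
}

def resolve_term(raw_tag: str) -> str:
    """Map a user-entered tag to its governed preferred term, or reject it."""
    tag = raw_tag.strip().lower()
    for preferred, entry in TERM_STORE.items():
        if tag == preferred or tag in entry["synonyms"]:
            return preferred
    raise ValueError(f"'{raw_tag}' is not in the managed vocabulary")

print(resolve_term("Agreement"))  # -> 'contract': consistent tags everywhere
```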

Pros

This decentralized model promotes agility and empowers business units to leverage their existing, built-for-purpose platforms not just as data/content repositories but as dynamic sources of context and alignment, driving a consistent understanding of shared data and knowledge assets for specific business functions.

Cons

However, this decentralized approach typically forces users who need a cohesive view of organizational content and data to assemble it through separate interfaces. Data governance teams or content stewards are also likely to manage each system independently. This leads to data silos, “semantic drift,” and inconsistency in data definitions and governance (where duplication and data quality issues arise). It ultimately results in misalignment between business units, as they may interpret data elements differently, leading to confusion and potential inaccuracies.

Best Suited For

This approach is particularly advantageous for organizations with diverse business units or teams that operate independently. It empowers business users to have more control over their data definitions and modeling and allows for quicker adaptation to evolving business needs, enabling business units to respond swiftly to changing requirements without relying on a centralized team. 

3. A Centralized Architecture: Within an Enterprise Data Warehouse (EDW) or Data Lake (DL)

This structured environment simplifies data engineering and ensures a consistent and centralized semantic layer specifically for analytics and BI use cases.

Architecture

Organizations that are looking to create a single, unified representation of their core organizational domains develop a semantic layer architecture that serves as the authoritative source for shared data definitions and business logic within a centralized architecture – particularly within an Enterprise Data Warehouse or Data Lake. This model makes it easier to build the semantic layer since data is already in one place, and cloud-based data warehousing and lakehouse platforms (e.g., Amazon Redshift, Google BigQuery, Snowflake, Databricks) or cloud storage services (e.g., Azure Blob Storage) can serve as a “centralized” location for semantic layer components.

Building a semantic layer within an EDW/DL involves identifying key data sources, consolidating and ingesting them into a centralized repository, defining business terms, establishing relationships between datasets, and mapping the semantic layer to the underlying data structures to create a unified and standardized interface for data access. A simplified sketch of that mapping step follows below.
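
As a simplified illustration of that mapping step, the sketch below uses DuckDB as a stand-in warehouse: a view translates cryptic physical column names and codes into governed business terms so that every downstream tool queries one shared definition. All table and column names are invented.

```python
# A minimal sketch using DuckDB as a stand-in warehouse. The "semantic" view
# maps cryptic physical columns to governed business terms and encodes simple
# business logic, so every BI tool queries one shared definition.
import duckdb

con = duckdb.connect()  # in-memory database for the sketch
con.execute("CREATE TABLE raw_ord (ord_id INT, cust_no INT, "
            "amt_usd DECIMAL(10,2), stat VARCHAR)")
con.execute("INSERT INTO raw_ord VALUES "
            "(1, 42, 99.50, 'S'), (2, 42, 15.00, 'C')")

# The semantic view: business-friendly names plus encoded business rules.
con.execute("""
    CREATE VIEW orders AS
    SELECT ord_id  AS order_id,
           cust_no AS customer_id,
           amt_usd AS order_amount_usd,
           CASE stat WHEN 'S' THEN 'Shipped'
                     WHEN 'C' THEN 'Cancelled' END AS order_status
    FROM raw_ord
""")

print(con.sql("SELECT order_status, SUM(order_amount_usd) AS total_usd "
              "FROM orders GROUP BY order_status"))
```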

Pros

This architecture is a common implementation approach that we support, specifically with dedicated data management, data analytics, and BI teams that consistently ingest data, set the implementation processes for changes to data structures, and enforce business rules through dedicated pipelines (ETL/APIs) for governance across enterprise data.

Cons

The core consideration here, and the one that usually suffers, is collaboration between business and data teams. That collaboration is pivotal during the implementation process: it guides investment in the right tools and solutions with semantic modeling capabilities and supports the creation of a semantic layer within this centralized landscape.

It is important to ensure that the semantic layer reflects the actual needs and perspectives of end users. Regular feedback loops and iterative refinements are essential for creating a model that evolves with the dynamic nature of business requirements. Adopting these solutions within this environment will enable the effective definition of business concepts, hierarchies, and relationships, allowing for translation of technical data into business-friendly terms.

Another important aspect of this type of centralized model is that it depends on data being consolidated or co-located, and it requires upfront investment in resources and time to design and implement the layer comprehensively. As such, it is important to start small by focusing on specific business use cases, along with the relevant scope of knowledge/data sources and foundational models that are highly visible and focused on business outcomes. This allows the organization to create a foundational model that can then expand incrementally across the rest of the organization’s data and knowledge assets.

Best Suited For

We have seen this approach be particularly beneficial for large enterprises with complex but shared data requirements and a need for stringent knowledge and data governance and compliance rules – specifically, organizations that produce data products and need to control the data and knowledge assets that are shared internally or externally on a regular basis. This includes, but is not limited to, financial institutions, healthcare organizations, bioengineering firms, and retail companies.

Closing

A well-implemented semantic layer is not merely a technical necessity but a strategic asset for organizations aiming to harness the full potential of their knowledge and data assets, as well as have the right foundations in place to make AI efforts successful. The choice of how to architect and implement a semantic layer depends on the specific needs, size, and structure of the organization. When considering this solution, the core decision really comes down to striking the right balance between standardization and flexibility, in order to ensure that your semantic layer serves as an effective enabler for knowledge-driven decision making across the organization. 

Organizations that invest in enterprise architecture through the metadata layer, and that rely on experts with modeling experience anchored in semantic web standards, find it the most flexible and scalable approach. They are better positioned to abstract their data from vendor lock-in and to ensure the interoperability needed to navigate the complexities of today’s technologies and future evolutions.

When embarking on a semantic layer initiative, failing to understand and plan for a solid technical architecture and a phased implementation approach leads many organizations to unplanned investments or outright failure. If you are looking to get started and want to learn more about how other organizations are approaching scale, read more from our case studies or contact us if you have specific questions.

The post The Top 3 Ways to Implement a Semantic Layer appeared first on Enterprise Knowledge.

What Every CEO Needs to Know About Semantic Layers https://enterprise-knowledge.com/what-every-ceo-needs-to-know-about-semantic-layers/ Thu, 29 Feb 2024 16:16:54 +0000

Recently, we at Enterprise Knowledge have been talking a lot about Semantic Layers, defining what they are, and even what they aren’t. We’ve detailed the various technical components and the business value of each, presented logical diagrams, and identified many core use cases for which they may be applied. We continue to generate a depth of detail on the topic, and when I consider many of our clients, CEOs and top executives of some of the world’s largest companies, I want to make sure we’re adequately answering the question of why a senior executive should care about a Semantic Layer.

To answer that question succinctly, a Semantic Layer can be an organization’s pathway to achieving Artificial Intelligence. A properly implemented Semantic Layer delivers the quality inputs, the relationships and vocabularies, and the necessary connections and infrastructure to make AI real for most organizations. If AI initiatives have failed or are stalled, implementing a Semantic Layer will deliver all the necessary elements to make AI successful at your organization.

To further answer the question, I explored the top concerns of global CEOs from The Conference Board: effectively, what CEOs are losing sleep over. These are listed below, with specific examples of how a Semantic Layer can help address each of these challenges.

[Figure: a pentagon with each side representing a different component of the semantic layer: metadata, information architecture, business glossary, content, and ontology/knowledge graph]

Economic Downturn/Recession

A Semantic Layer can deliver a comprehensive, aligned, and integrated view of an organization’s operations. Regardless of the industry, this means an executive will have the ability to understand bottlenecks, spot inefficiencies, and even predict areas of savings. This translates to better decision-making and meaningful competitive advantage. One of our customers in the publishing and education industries had previously lacked the ability to look all the way across their diverse organization’s operations. They didn’t even have a consistent definition of core business concepts, such as what a product is, and therefore had no ability to understand profit and loss on a product-by-product basis. In the two quarters following the implementation of their Search and Intelligent Chatbot (powered by the Semantic Layer), they achieved a marked increase in margin as the company refocused sales efforts around its highest-profitability products. More importantly, they attained a framework that allows them to evaluate, in real time, which products are the most profitable.

Inflation

As I mentioned above, a complete view of an organization’s operations will enable executives to spot hidden or unrealized costs and proactively make decisions to remove or shift those costs, keeping operating costs lower. A Semantic Layer can also be used to counteract the impacts of inflation. Highly customized customer service powered by a Semantic Layer can help retain customers even when they become more cost-conscious due to inflation. Similarly, with properly targeted and customized marketing, down to the individual level, organizations can more successfully win new customers, regardless of market conditions.

Global Political Instability

A Semantic Layer cannot prevent global political instability (at least, not yet). However, it can help an organization predict where global politics may impact business, either from a market or supply perspective, helping executives to shift operations proactively. We did something like this for a global development bank. For them, we developed a knowledge graph as a component of their semantic layer (they dubbed it “the brain”) that identified trends across all of their global development projects and the countries and regions in which they were run, and recommended lessons learned to prevent repeats of the same mistakes on future initiatives that might encounter the same issues. A Semantic Layer can use machine learning to help identify trends a human might not see, flagging them for executives to make proactive decisions before they impact the business.

Higher Borrowing Costs

Though it can’t control bank interest rates, a Semantic Layer can serve as a cost reduction tool in many ways, ranging from improved productivity to the identification of inefficiencies, even automatically flagging unnecessary redundancies at various stages of the supply chain. For one of our clients, we applied a Semantic Layer to map their “cradle to grave” operations, helping them cut out unnecessary steps that were slowing down their operations. It not only saved them money, but reduced their production cycle and time to market by nearly ten percent.

Labor Shortages

When applied to employee onboarding, learning, development, and performance, a Semantic Layer can be a critical tool in developing and retaining employees, helping to alleviate potential labor shortages. A Semantic Layer can serve as a map of employees, skills, and tenure that allows executives to identify current and future labor shortages and focus training and recruiting efforts on the most important skills. For one services company, we incorporated the organization’s products, services, customers, and sales pipeline into the Semantic Layer, providing the organization’s executives with predictions of where they may face future labor shortages based on employee demographics and the sales pipeline, allowing them to proactively fill gaps through training or hiring before the gaps even existed.

Rapidly Advancing AI Technology

A Semantic Layer can be an organization’s pathway to achieving Artificial Intelligence. If a CEO is losing sleep over how their organization can harness the power of AI before getting left behind, a Semantic Layer is their answer. Given that the biggest hurdles for enterprise AI adoption are the lack of a comprehensive understanding of an organization’s data and the black-box nature of how most AI solutions are trained, a Semantic Layer plays a crucial role in bridging the complex underlying data and the end users or AI applications. Specifically, by aggregating heterogeneous data and aligning business terminologies across various departments, we were able to employ components of a Semantic Layer at a large financial institution to train complex AI algorithms on their risk and controls library and management processes, resulting in approximately 80% improvement in the accuracy of their risk identification process and in the alignment of relevant mitigation strategies.

Higher Labor Costs

To expand on the point I made regarding labor shortages, a Semantic Layer can be a key tool in employee retention. Greater employee retention means lower costs related to recruiting and (re)training new employees, thereby helping to decrease overall labor costs. Another significant labor cost is the transfer of skills and knowledge from an organization’s most tenured and experienced resources to those who are new to the organization. An AI chatbot based on a Semantic Layer can help deliver reliable answers to less experienced employees so that they can provide value more quickly. For a federal research institute with a retiring workforce, we leveraged the Semantic Layer to automate the identification of tenured experts (from project data leveraging taxonomies/metadata) and capture high-value knowledge from their most experienced employees, then used additional components of the Semantic Layer and AI to automatically deliver that knowledge to new employees at their point of need. This not only improved employee development, it resulted in greater organizational efficiency and safety.

Regulation

A Semantic Layer can help an organization stay up to date on all regulations and laws, ensuring it is compliant at the national, regional, and local levels. Moreover, especially regarding data privacy, security, and information delivery, a Semantic Layer can be a key tool to ensure an organization follows each regulation and is fully compliant. One large organization that handles a great deal of individually identifiable health information (IIHI) came to EK after incurring a massive fine for the improper management of this information. We developed a Semantic Layer for them, which not only helped them achieve compliance, but also helped them spot ancillary systems with risky information management practices that needed to be secured.

Global Financial Crisis

Each of the points I’ve already made demonstrates how an organization can be more prepared for a global financial crisis: by having a complete understanding of its supply chain, a greater mastery of costs and redundancies, and a predictive view of its market and employee needs. An organization that has harnessed a Semantic Layer won’t be impervious to a financial crisis, but it will be more prepared for it, more able to adjust to it, and quicker to respond when emergencies arise.

Shifting Consumer/Customer Buying Behaviors

A Semantic Layer isn’t just about understanding your organization; it’s also about connecting data about customer behavior, understanding your customers and clients, and delivering fully customized experiences and communications for them. Buying behaviors will always shift, but organizations leveraging a Semantic Layer to connect knowledge and data assets across sales, marketing, and customer service will have greater intelligence and awareness around these shifts, meaning they’ll be ready for them and more likely to retain or even expand their customer base while their competitors are left wondering what’s happening. For a large retail customer, we helped build a graph of their customers and buying habits, then matched individually assembled communications and offerings to them using the Semantic Layer, resulting in improved engagement and sales, including new engagement from previously dormant customers.

Conclusion

From the view of the CEO, a Semantic Layer, put simply, is about enterprise intelligence. Rather than piecing together answers from disparate systems, wasting time commissioning costly and time-intensive research efforts, and taking a “best guess” approach, organizations armed with a Semantic Layer will have the ability to get the right answers when they need them (or before), respond in advance to rapidly changing factors, and operate not just with Artificial Intelligence on their side, but with Organizational Intelligence.

If you’re ready to sleep better at night by harnessing the power of a Semantic Layer, let us know. This is not wishful thinking about the future – this is what we can, and have, achieved today.

The post What Every CEO Needs to Know About Semantic Layers appeared first on Enterprise Knowledge.

Enterprise Knowledge and data.world Partner to Make Knowledge Graphs More Accessible to the Enterprise https://enterprise-knowledge.com/enterprise-knowledge-and-data-world-partner-to-make-knowledge-graphs-more-accessible-to-the-enterprise/ Thu, 23 Sep 2021 15:16:51 +0000

New Knowledge Graph Accelerator Provides Organizations the Toolset and Capabilities to Make Enterprise AI a Reality.

Enterprise Knowledge (EK), the world’s largest dedicated knowledge and information management consulting firm, announced the launch of the Knowledge Graph Accelerator today, a mechanism to establish an organization’s first knowledge graph solution in a matter of weeks. In partnership with data.world, the knowledge graph-based enterprise data catalog, organizations will be able to rapidly unlock use cases such as Employee, Product, and Customer 360, Advanced Analytics, and Natural Language Search. 

“Knowledge Graphs are a critical component necessary to achieve Enterprise AI, but most organizations need a quick and scalable way to understand and experience the value,” said Lulit Tesfaye, Practice Lead of Data and Information Management at EK. “EK, in partnership with data.world, is creating a holistic solution to make building Enterprise AI intuitive using knowledge graphs, from data modeling and storage to enrichment and governance. Having this end-to-end consistency is critical for the success of knowledge graph products and setting the foundations for Enterprise AI.”

“EK has been at the leading edge of Knowledge Graph strategy, design, and implementation since our inception,” added Zach Wahl, CEO of EK. “Our thought leadership in this field, combined with data.world’s advanced capabilities, creates an exciting opportunity for organizations to feel the impact and realize the benefits quickly and meaningfully.”

Gartner predicts that graph technologies will be leveraged in over 80% of innovations in data and analytics by 2025, but many organizations find the business and technical complexities of graph design and implementation to be daunting. The Knowledge Graph Accelerator addresses the need to develop a practical, standards-based roadmap and prototype to quickly realize the potential of knowledge graphs. 

Through the Knowledge Graph Accelerator, organizations will get the following outcomes in less than 2 months:

  • An understanding of the foundations of knowledge graphs, including graph data modeling, data mapping, and data management;
  • A first implementable version (FIV) knowledge graph that can be scaled and enhanced;
  • A pilot version of your graph solution leveraging the knowledge graph-based data management solution data.world and gra.fo; and
  • A strategy for your organization to make Enterprise AI a reality. 

“Enterprises need to understand and trust the data powering their analytics while generating meaningful insights. But supporting different data sources and use cases, while analyzing and traversing changes to metadata and automating relationships, can be challenging,” said Dr. Juan Sequeda, Principal Scientist at data.world. “Knowledge graphs are foundational for an effective and future-proof data catalog, as well as for next-generation AI and analytics.”

To learn more, explore our approach and what your organization will get through the Knowledge Graph Accelerator. Also, reach out to Enterprise Knowledge to learn how to unlock the use cases that are most valuable to your enterprise. 

On September 29th, 2021, Enterprise Knowledge will participate in the virtual data.world fall summit. Additional keynote speakers include Zhamak Dehghani, Barr Moses, Doug Laney, and Jon Loyens.

About Enterprise Knowledge 

Enterprise Knowledge (EK) is a services firm that integrates Knowledge Management, Information and Data Management, Information Technology, and Agile Approaches to deliver comprehensive solutions. Our mission is to form true partnerships with our clients, listening and collaborating to create tailored, practical, and results-oriented solutions that enable them to thrive and adapt to changing needs. At the heart of these services, we always focus on working alongside our clients to understand their needs, ensuring we can provide practical and achievable solutions on an iterative, ongoing basis. Visit enterprise-knowledge.com to see how optimizing your knowledge and data management will impact your organization.  

About data.world

data.world is the enterprise data catalog for the modern data stack. Our cloud-native SaaS (software-as-a-service) platform combines a consumer-grade user experience with a powerful knowledge graph to deliver enhanced data discovery, agile data governance, and actionable insights. data.world is a Certified B Corporation and public benefit corporation and home to the world’s largest collaborative open data community with more than 1.3 million members, including 2/3 of the Fortune 500. Our company has 40 patents and has been named one of Austin’s Best Places to Work six years in a row. Follow us on LinkedIn, Twitter, and Facebook, or join us.

The post Enterprise Knowledge and data.world Partner to Make Knowledge Graphs More Accessible to the Enterprise appeared first on Enterprise Knowledge.
