Search Articles - Enterprise Knowledge

Semantic Search Advisory and Implementation for an Online Healthcare Information Provider


The Challenge

The medical field is an extremely complex space, with thousands of concepts that are referred to by vastly different terms. These terms can vary across regions, languages, areas of practice, and even from clinician to clinician. Additionally, patients often communicate with clinicians using language that reflects their more elementary understanding of health. This complicates the experience for patients when trying to find resources relevant to certain topics such as medical conditions or treatments, whether through search, chatbots, recommendations, or other discovery methods. This can lead to confusion during stressful situations, such as when trying to find a topical specialist or treat an uncommon condition.

A major online healthcare information provider engaged with EK to improve both their consumer-facing and clinician-facing natural language search and discovery platforms in order to deliver faster and more relevant results and recommendations. Their consumer-facing web pages aimed to connect consumers with healthcare providers when searching for a condition, with consumers often using terms or phrases that may not be an exact match with medical terms. In contrast, the clinicians who purchased licenses to the provider’s content required a fast and accurate method of searching for content regarding various conditions. They work in time-sensitive settings where rapid access to relevant content could save a patient’s life, and often use synonymous acronyms or domain-specific jargon that complicates the search process. The client desired a solution which could disambiguate between concepts and match certain concepts to a list of potential conditions. EK was tasked to refine these search processes to provide both sets of end users with accurate content recommendations.

The Solution

Leveraging both industry and organizational taxonomies for clinical topics and conditions, EK architected a search solution that could take both the technical terms preferred by clinicians and the more conversational language used by consumers and match them to conditions and relevant medical information. 

To improve search while maintaining a user-friendly experience, EK worked to:

  1. Enhance keyword search through metadata enrichment;
  2. Enable natural language search using large language models (LLMs) and vector search techniques; and
  3. Introduce advanced search features post-initial search, allowing users to refine results with various facets.

The core components of EK’s semantic search advisory and implementation included:

  1. Search Solution Vision: EK collaborated with client stakeholders to determine and implement business and technical requirements with associated search metrics. This would allow the client to effectively evaluate LLM-powered search performance and measure levels of improvement. This approach focused on making the experience faster for clinicians searching for information and for consumers seeking to connect with a doctor. This work supported the long-term goal of improving the overall experience for consumers using the search platform. The choice of LLM and associated embeddings played a key role: by selecting the right embeddings, EK could improve the association of search terms, enabling more accurate and efficient connections, which proved especially critical during crisis situations. 
  2. Future State Roadmap: As part of the strategy portion of this engagement, EK worked with the client to create a roadmap for deploying the knowledge panel to the consumer-facing website in production. This roadmap involved deploying and hosting the content recommender, further expanding the clinical taxonomy, adding additional filters to the knowledge panel (such as insurance networks and location data), and search features such as autocomplete and type-ahead search. Setting future goals after implementation, EK suggested the client use machine learning methods to classify consumer queries based on language and predict their intent, as well as establish a way to personalize the user experience based on collected behavioral data/characteristics.
  3. Keyword and Natural Language Search Enhancement: EK developed a gold standard template for client experts in the medical domain to provide the ideal expected search results for particular clinician queries. This gold standard served as the foundation for validating the accuracy of the search solution in pointing clinicians to the right topics. Additionally, EK used semantic clustering and synonym analysis in order to identify further search terms to add as synonyms into the client’s enterprise taxonomy. Enriching the taxonomy with more clinician-specific language used when searching for concepts with natural language improved the retrieval of more relevant search results.
  4. Semantic Search Architecture Design and LLM Integration: EK designed and implemented a semantic search architecture to support the solution’s search features, connecting the client’s existing taxonomy and ontology management system (TOMS), the client’s search engine, and a new LLM. Leveraging the taxonomy stored in the TOMS and using the LLM to match search terms to taxonomy concepts based on similarity enriched the accuracy and contextualization of search results (a simplified sketch of this matching approach appears after this list). EK also wrote custom scripts to evaluate the LLM’s understanding of medical terminology and generate evaluation metrics, allowing for performance monitoring and continuous improvement to keep the client’s search solution at the forefront of LLM technology. Finally, EK created a bespoke, reusable benchmark for LLM scores, evaluating how well a given model matched natural language queries to clinical search terms and allowing the client to select the highest-performing model for consumer use.
  5. Semantic Knowledge Panel: To demonstrate the value this technology would bring to consumers, EK developed a clickable, action-oriented knowledge panel that showcased the envisioned future-state experience. Designed to support consumer health journeys, the knowledge panel guides users through a seamless journey – from conversational search (e.g. “I think I broke my ankle”), to surfacing relevant contextual information (such as web content related to terms and definitions drawn from the taxonomy), to connecting users to recommended clinicians and their scheduling pages based on their ability to treat the condition being searched (e.g. An orthopedist for a broken ankle). EK’s prototype leveraged a taxonomy of tagged keywords and provider expertise, with a scoring algorithm that assessed how many, and how well, those tags matched the user’s query. This scoring informed a sorted display of provider results, enabling users to take direct action (e.g. scheduling an appointment with an orthopedist) without leaving the search experience.
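
To make the term-matching approach in item 4 concrete, the sketch below scores the similarity between a conversational consumer query and candidate taxonomy concepts using an off-the-shelf sentence-embedding model. The model name, concept labels, and ranking logic are illustrative placeholders rather than the client's actual configuration.

```python
# Minimal sketch: match a conversational query to taxonomy concepts by embedding similarity.
# The model name, concept labels, and scoring are illustrative placeholders only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the engagement's chosen model

# Candidate concepts drawn from a clinical taxonomy (synonyms flattened into labels)
concepts = [
    "Ankle fracture",
    "Ankle sprain",
    "Achilles tendon rupture",
    "Plantar fasciitis",
]

query = "I think I broke my ankle"

concept_vectors = model.encode(concepts, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every concept, ranked highest first
scores = util.cos_sim(query_vector, concept_vectors)[0]
ranked = sorted(zip(concepts, scores.tolist()), key=lambda pair: pair[1], reverse=True)

for label, score in ranked:
    print(f"{score:.3f}  {label}")
```

In a production flow, the top-ranked concept's taxonomy entry, including its synonyms and related conditions, could then drive the knowledge panel content and provider recommendations described in item 5.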

The EK Difference

EK’s expertise in semantic layers, solution architecture, artificial intelligence, and enterprise search came together to deliver a bespoke, unified solution that returned more accurate, context-aware information for clinicians and consumers. EK’s semantic experts collaborated with key medical experts to enrich the client’s enterprise taxonomy, and combined their knowledge of LLMs with their experience applying taxonomies and semantic similarity to natural language search use cases, placing the client in the best position to enable accurate search. EK also upskilled the client’s technical team on semantic capabilities and the architecture of the knowledge panel through knowledge transfers and paired programming, so that they could continue to maintain and enhance the solution in the future.

Additionally, EK’s solution architects, possessing deep knowledge of enterprise search and artificial intelligence technologies, were uniquely positioned to provide recommendations on the most advantageous method to seamlessly integrate the client’s TOMS and existing search engine with an LLM specifically developed for information retrieval. While a general-purpose LLM could perform these tasks to some extent, EK helped design a purpose-built semantic search solution leveraging a specialized LLM that better identified and disambiguated user terms and phrases.

Finally, EK’s search experts defined and monitored key search metrics with the client’s team, enabling them to track improvement over time, identify trends, and suggest refinements to match. These search improvements resulted in a solution the client could trust to be accurate.

The Results

The delivery of a semantic search prototype with a clear path to a production, web-based solution opened the door to greatly augmented search capabilities across the organization’s products. Overall, this solution allows both healthcare patients and clinicians to find exactly what they are looking for using a wide variety of terms.

As a result of EK’s semantic search advisory and implementation efforts, the client was able to:

  1. Empower potential patients to use a web-based semantic search platform to search for specialists who can treat their conditions and to quickly and easily find care;
  2. Streamline the content delivery process in critical, time-sensitive situations such as emergency rooms by providing rapid and accurate content that highlights and elaborates on potential diagnoses and treatments to healthcare professionals; and
  3. Identify potential data and metadata gaps in the healthcare information database that the client relies on to populate its website and recommend content to users.

Looking to improve your organization’s search capabilities? Want to see how LLMs can power your semantic ecosystem? Learn more from our experience or contact us today.

Top Semantic Layer Use Cases and Applications (with Real World Case Studies)

Today, most enterprises are managing multiple content and data systems or repositories, often with overlapping capabilities such as content authoring, document management, or data management (typically averaging three or more such systems). This leads to fragmentation and data silos, creating significant inefficiencies. Finding and preparing content and data for analysis takes weeks, or even months, resulting in high failure rates for knowledge management, data analytics, AI, and big data initiatives, ultimately negatively impacting decision-making capabilities and business agility.

To address these challenges, over the last few years, the semantic layer has emerged as a framework and solution to support a wide range of use cases, including content and data organization, integration, semantic search, knowledge discovery, data governance, and automation. By connecting disparate data sources, a semantic layer enables richer queries and supports programmatic knowledge extraction and modernization.

A semantic layer functions by utilizing metadata and taxonomies to create structure, business glossaries to align on the meaning of terms, ontologies to define relationships, and a knowledge graph to uncover hidden connections and patterns within content and data. This combination allows organizations to understand their information better and unlock greater value from their knowledge assets. Moreover, AI is tapping into this structured knowledge to generate contextual, relevant, and explainable answers.

So, what are the specific problems and use cases organizations are solving with a semantic layer? The case studies and use cases highlighted in this article are drawn from our own experience and lessons learned on recent projects, and demonstrate the value of a semantic layer not just as a technical foundation, but as a strategic asset, bridging human understanding with machine intelligence.

 

 

Semantic Layer Advancing Search and Knowledge Discovery: Getting Answers with Organizational Context

Over the past two decades, we have completed 50-70 semantic layer projects across a wide range of industries. In nearly every case, the core challenges revolve around age-old knowledge management and data quality issues—specifically, the findability and discoverability of organizational knowledge. In today’s fast-paced work environment, simply retrieving a list of documents as ‘information’ is no longer sufficient. Organizations require direct answers to discover new insights. Most importantly, organizations are looking to access data in the context of their specific business needs and processes. Traditional search methods continue to fall short in providing the depth and relevance required to make quick decisions. This is where a semantic layer comes into play. By organizing and connecting data with context, a semantic layer enables advanced search and knowledge discovery, allowing organizations to retrieve not just raw files or data, but answers that are rich in meaning, directly tied to objectives, and action-oriented. For example, supported by descriptive metadata and explicit relationships, semantic search understands the meaning and context of our queries, leveraging relationships between entities and concepts across content rather than just matching keywords, which leads to more accurate and relevant results. This powers enterprise search solutions and question-answering systems that can understand and answer complex questions based on your organization’s knowledge.

Case Study: For our clients in the pharmaceuticals and healthcare sectors, clinicians and researchers often face challenges locating the most relevant medical research, patient records, or treatment protocols due to the vast amount of unstructured data. A semantic layer facilitates knowledge discovery by connecting clinical data, trials, research articles, and treatment guidelines to enable context-aware search. By extracting and classifying entities like patient names, diagnoses, medications, and procedures from unstructured medical records, our clients are advancing scientific discovery and drug innovation. They are also improving patient care outcomes by applying the knowledge associated with these entities in clinical research. Furthermore, domain-specific ontologies organize unstructured content into a structured network, allowing AI solutions to better understand and infer knowledge from the data. This map-like representation helps systems navigate complex relationships and generate insights by clearly articulating how content and data are interconnected. As a result, rather than relying on traditional, time-consuming keyword-based searches that cannot distinguish between entities (e.g., “drugs manufactured by GSK” vs. “what drugs treat GSK?”), users can perform semantic queries that comprehend meaning (e.g., “What are the side effects of drug X?” or “Which pathways are affected by drug Y?”), leveraging the relationships between entities to obtain precise and relevant answers more efficiently.
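
To illustrate the difference in query style described above, here is a minimal, invented example of a relationship-driven query. The graph contents and ontology terms are made up for demonstration and are not drawn from any client data.

```python
# Illustrative only: a tiny RDF graph and a SPARQL query that follows explicit
# relationships ("treats", "hasSideEffect") instead of matching keywords.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/pharma/")
g = Graph()

g.add((EX.DrugX, EX.treats, EX.Hypertension))
g.add((EX.DrugX, EX.hasSideEffect, Literal("dizziness")))
g.add((EX.DrugX, EX.hasSideEffect, Literal("fatigue")))
g.add((EX.DrugY, EX.treats, EX.Asthma))

# "What are the side effects of drug X?" expressed over relationships, not keywords
results = g.query("""
    PREFIX ex: <http://example.org/pharma/>
    SELECT ?effect WHERE {
        ex:DrugX ex:hasSideEffect ?effect .
    }
""")

for row in results:
    print(row.effect)
```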

 

Semantic Layer as a Data Product: Unlocking Insights by Aligning & Connecting Knowledge Assets from Complex Legacy Systems

The reality is that most organizations face disconnected data spread across complex, legacy systems. Despite well-intended investments in enterprise knowledge and data management, typical repositories often remain outdated, including legacy applications, email, shared network drives, folders, and information saved locally on desktops or laptops. Global investment banks, for instance, struggle with multiple outdated record management, risk, and compliance tracking systems, while healthcare organizations continue to contend with disparate electronic health record (EHR) and electronic medical record (EMR) systems. These legacy systems hinder the ability to communicate and share data with newer, more advanced systems, are typically not designed to handle the growing demands of modern data, and leave businesses grappling with siloed information that makes regulatory reporting onerous, manual, and time-consuming. The solution to these issues lies in treating the semantic layer as an abstracted data product itself, whereby organizations employ semantic models to connect fragmented data from legacy systems, align shared terms across those systems, provide descriptive metadata and meaning, and empower users to query and access data with additional context, relevance, and speed. This approach not only streamlines decision-making but also modernizes data infrastructure without requiring a complete overhaul of existing systems.

Case Study: We are currently working with a global financial firm to transform their risk management program. The firm manages 21 bespoke legacy applications, each handling different aspects of their risk processes. Compiling a comprehensive risk report typically took up to two months, and answering key questions like, “What are the related controls and policies relevant to a given risk in my business?” was a complex, time-consuming task. The firm engaged with us to augment their data transformation initiatives with a semantic layer and ecosystem. We began by piloting a conceptual graph model of their risk landscape, defining core risk taxonomies to connect disparate data across the ecosystem. We used ontologies to explicitly capture the relationships between risks, controls, issues, policies, and more. Additionally, we leveraged large language models (LLMs) to summarize and reconcile over 40,000 risks, which had previously been described by assessors using free text.

This initiative provided the firm with a simplified, intuitive view where users could quickly look up a risk and find relevant information in seconds via a graph front-end. Just 1.5 years later, the semantic layer is powering multiple key risk management tools, including a risk library with semantic search and knowledge panels, four recommendation engines, and a comprehensive risk dashboard featuring threshold and tolerance analysis. The early success of the project was due to a strategic approach: rather than attempting to integrate the semantic data model across their legacy applications, the firm treated it as a separate data product. This allowed risk assessors and various applications to use the semantic layer as modular “Lego bricks,” enabling flexibility and faster access to critical insights without disrupting existing systems.

 

Semantic Layer for Data Standards and Interoperability: Navigating the Dynamism of Data & Vendor Limitations 

Various data points suggest that, today, the average tenure of an S&P 500 technology company has dropped dramatically from 85 years to just 12-15 years. This rapid turnover reflects the challenges organizations face with the constant evolution of technology and vendor solutions. The ability to adapt to new tools and systems, while still maintaining operational continuity and reducing risk, is a growing concern for many organizations. One key solution to this challenge is using frameworks and standards created to ensure data interoperability, offering the flexibility to organize data independently of system and vendor limitations. A proper semantic layer employs universally adopted semantic web (W3C) and data modeling standards to design, model, implement, and govern knowledge and data assets within organizations and across industries.

Case Study: A few years ago, one of our clients faced a significant challenge when their graph database vendor was acquired by another company, leading to a sharp increase in both license and maintenance fees. To mitigate this, we were able to swiftly migrate all of their semantic data models from the old graph database to a new one within less than a week (the fastest migration we’ve ever experienced). This move saved the client approximately $2 million over three years. The success of the migration was made possible because their data models were built using semantic web standards (RDF-based), ensuring standards based data models and interoperability regardless of the underlying database or vendor. This case study highlights a fundamental shift in how organizations approach data management. 

 

Semantic Layer as the Framework for a Knowledge Portal 

The growing volume of data, the need for efficient knowledge sharing, and the drive to enhance employee productivity and engagement are fueling a renewed interest in knowledge portals. Organizations are increasingly seeking a centralized, easily accessible view of information as they adopt more data-driven, knowledge-centric approaches. A modern Knowledge Portal consolidates and presents diverse types of organizational content, ranging from unstructured documents and structured data to connections with people and enterprise resources, offering users a comprehensive “Enterprise 360” view of related knowledge assets to support their work effectively.

While knowledge portals fell out of favor in the 2010s due to issues like poor content quality, weak governance, and limited usability, today’s technological advancements are enabling their resurgence. Enhanced search capabilities, better content aggregation, intelligent categorization, and automated integrations are improving findability, discoverability, and user engagement. At its core, a Knowledge Portal comprises five key components that are now more feasible than ever: a Web UI, API layers, enterprise search engine, knowledge graph, and taxonomy/ontology management tools—half of which form part of the semantic layer.

Case Study: A global investment firm managing over $250 billion in assets partnered with us to break down silos and improve access to critical information across its 50,000-employee organization. Investment professionals were wasting time searching for fragmented, inconsistent knowledge stored across disparate systems, often duplicating efforts and missing key insights. We designed and implemented a Knowledge Portal integrating structured and unstructured content, AI-powered search, and a semantic layer to unify data from over 12 systems, including their primary CRM (DealCloud) and additional internal and external systems, while respecting complex access permissions and entitlements. A big part of the portal involved a semantic layer architecture, which included the rollout of metadata and taxonomy design, ontology and graph modeling and storage, and an agile development process that ensured high user engagement and adoption. Today, the portal connects staff to both information and experts, enabling faster discovery, improved collaboration, and reduced redundancy. As a result, the firm saw measurable gains in productivity, staff and client onboarding efficiency, and knowledge reuse. The company continues to expand the solution to more advanced applications, such as semantic search and broader global use cases.

 

Semantic Layer for Analytics-Ready Data 

For many large-scale organizations, it takes weeks, sometimes months, for analytics teams to develop “insights” reports and dashboards that fulfill data-driven requests from executives or business stakeholders. Navigating complex systems and managing vast data volumes has become a point of friction between established software engineering teams managing legacy applications and emerging data science/engineering teams focused on unlocking analytics insights or data products. Such challenges persist as long as organizations work within complex infrastructures and proprietary platforms, where data is fragmented and locked in tables or applications with little to no business context. This makes it extremely difficult to extract useful insights, handle the dynamism of data, or manage the rising volumes of unstructured data, all while trying to ensure that data is consistent and trustworthy. 

Picture this scenario and use case from a recent engagement: a global retailer with close to 40,000 store locations across the globe had recently migrated its data to a data lake in an attempt to centralize its data assets. Despite the investment, they still faced persistent challenges when new data requests came from their leadership, particularly around store performance metrics. Here’s a breakdown of the issues:

  • Each time a leadership team requested a new metric or report, the data team had to spin up a new project and develop new data pipelines.
  • Five to six months were required for a data analyst to understand the content/data related to these metrics, often involving petabytes of raw data.
  • The process involved managing over 1500 ETL pipelines, which led to inefficiencies (what we jokingly called “death by 2,000 ETLs”).
  • Producing a single dashboard for C-level executives cost over $900,000.
  • Even after completing the dashboard, they often discovered that the metrics were being defined and used inconsistently. Terms like “revenue,” “headcount,” or “store performance” were frequently understood differently depending on who worked on the report, making output reports unreliable and unusable. 

This is one example of why organizations are now seeking and investing in a coherent, integrated way to understand their vast data ecosystems. Because organizations often work with complex systems, ranging from CRMs and ERPs to data lakes and cloud platforms, extracting meaningful insights from this data requires an integrated view that can bridge these gaps. This is where the semantic layer serves as a pragmatic tool that enables organizations to close those gaps, streamline processes, and transform how data is used across departments. Specifically for these use cases, semantic data is gaining significant traction across diverse pockets of the organization as the standard interpreter between complex systems and business goals.

 

Semantic Layer for Delivering Knowledge Intelligence 

Another reality many organizations are grappling with today is that basic AI algorithms trained on public data sets may not work well on organization- and domain-specific problems, especially in domains where industry preferences are relevant. Thus, organizational knowledge is a prerequisite for success, not just for generative AI, but for all applications of enterprise AI and data science solutions. This is where experience and best practices in knowledge and data management lend the AI space effective and proven approaches to sharing domain and institutional knowledge. Technical teams tasked with making AI “work” or provide value for their organization are looking for programmatic ways to explicitly model relationships between data entities, provide business context to tabular data, and extract knowledge from unstructured content, ultimately delivering what we call Knowledge Intelligence.

A well-implemented semantic layer abstracts the complexities of underlying systems and presents a unified, business-friendly view of data. It transforms raw data into understandable concepts and relationships, as well as organizes and connects unstructured data. This makes it easier for both data teams and business users to query, analyze, and understand their data, while making this organizational knowledge machine-ready and readable. The semantic layer standardizes terminology and data models across the enterprise, and provides the required business context for the data. By unifying and organizing data in a way that is meaningful to the business, it ensures that key metrics are consistent, actionable, and aligned with the company’s strategic objectives and business definitions.
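
As a simplified, hypothetical illustration of what standardized terminology with business context can look like, the sketch below captures one governed definition of a metric and the system-specific fields that map to it. The field names and definition are invented for the example.

```python
# Hypothetical business-glossary entry: one governed definition of "net revenue,"
# with the system-specific fields that roll up to it made explicit.
glossary = {
    "net_revenue": {
        "definition": "Gross sales minus returns, discounts, and allowances, "
                      "reported in USD per store per fiscal week.",
        "owner": "Finance data governance team",
        "maps_from": {
            "pos_system": "SALES_NET_AMT",
            "erp": "NetSalesValue",
            "data_lake": "store_sales.net_rev_usd",
        },
    }
}

def resolve_field(source_system: str, metric: str = "net_revenue") -> str:
    """Return the physical field in a given source system for a governed metric."""
    return glossary[metric]["maps_from"][source_system]

print(resolve_field("erp"))                        # NetSalesValue
print(glossary["net_revenue"]["definition"])       # one shared definition for every report
```

The value is less in the data structure itself than in the agreement it encodes: every report built on "net_revenue" inherits the same definition, regardless of which source system supplied the numbers.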

Case Study: With the aforementioned global retailer, as their data and analytics teams worked to integrate siloed data and unstructured content, we partnered with them to build a semantic ecosystem that streamlined processes and provided the business context needed to make sense of their vast data. Our approach included: 

  • Standardized Metadata and Vocabularies: We developed standardized metadata and vocabularies to describe their key enterprise data assets, especially for store metrics like sales performance, revenue, etc. This ensured that everyone in the organization used the same definitions and language when discussing key metrics.
  • Explicitly Defined Concepts and Relationships: We used ontologies and graphs to define the relationships between various domains such as products, store locations, store performance, etc. This created a coherent and standardized model that allowed data teams to work from a shared understanding of how different data points were connected.
  • Data Catalog and Data Products: We helped the retailer integrate these semantic models into a data catalog that made data available as “data products.” This allowed analysts to access predefined, business-contextualized data directly, without having to start from scratch each time a new request was made.

This approach reduced report generation steps from 7 to 4 and cut development time from 6 months to just 4-5 weeks. Most importantly, it enabled the discovery of previously hidden data, unlocking valuable insights to optimize operations and drive business performance.

 

Semantic Layer as a Foundation for Reliable AI: Facilitating Human Reasoning and Explainable Decisions

Emerging technologies (like GenAI or Agentic AI) are democratizing access to information and automation, but they also contribute to the “dark data” problem—data that exists in an unstructured or inaccessible format but contains valuable, sensitive, or bad information. While LLMs have garnered significant attention in conversational AI and content generation, organizations are now recognizing that their data management challenges require more specialized, nuanced, and somewhat ‘grounded’ approaches that address the gaps in explainability, precision, and the ability to align AI with organizational context and business rules. Without this organizational context, raw data or text is often messy, outdated, redundant, and unstructured, making it difficult for AI algorithms to extract meaningful information. The key step in addressing this AI problem is connecting all types of organizational knowledge assets under a shared language: experts, related data, content, videos, best practices, lessons learned, and operational insights from across the organization. In other words, to fully benefit from an organization’s knowledge and information, both structured and unstructured information, as well as expert knowledge, must be represented and understood by machines. A semantic layer provides AI with a programmatic framework to make organizational context, content, and domain knowledge machine-readable. Techniques such as data labeling, taxonomy development, business glossaries, ontologies, and knowledge graph creation make up the semantic layer and facilitate this process.

Case Study: We have been working with a global foundation that had previously been through failed AI experiments under a mandate from their CEO for their data teams to “figure out a way” to adopt LLMs to evaluate the impact of their investments on strategic goals by synthesizing information from publicly available domain data, internal investment documents, and internal investment data. The previously failed efforts had struggled to connect diverse and unstructured information to structured data and to ensure that the insights generated were precise, explainable, reliable, and actionable for executive stakeholders. To address these challenges, we took a hybrid approach that leveraged LLMs augmented with advanced graph technology and a semantic RAG (Retrieval Augmented Generation) agentic workflow. To provide the relevant organizational metrics and connection points in a structured manner, the solution leveraged an Investment Ontology as a semantic backbone that underpins their disconnected source systems, ensuring that all investment-related data (from structured datasets to narrative reports) is harmonized under a common language. This semantic backbone supports both precise data integration and flexible query interpretation. To effectively convey the value of this hybrid approach, we leveraged a chatbot that served as a user interface to toggle back and forth between the basic GPT model and the graph RAG solution. The graph RAG solution consistently outperformed the basic LLM on complex questions, demonstrating the value of semantics in providing organizational context and alignment. Ultimately, it delivered coherent and explainable insights that bridged structured and unstructured investment data, along with a transparent AI mapping that allowed stakeholders to see exactly how each answer was derived.
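
In very reduced form, a graph-grounded RAG step like the one described above can be sketched as follows: retrieve facts from the knowledge graph first, generate the answer only from those facts, and keep the facts as the answer's provenance. The SPARQL pattern, ontology namespace, and call_llm helper are hypothetical stand-ins, not the foundation's actual implementation.

```python
# Reduced sketch of a graph-grounded RAG step: the answer is generated only from
# triples retrieved from the knowledge graph, which also serve as its provenance.
# `call_llm` and the ontology terms are hypothetical placeholders.
from rdflib import Graph

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client the solution actually uses."""
    raise NotImplementedError

def answer_with_provenance(graph: Graph, investment_uri: str, question: str) -> dict:
    # 1. Retrieve the facts recorded about the investment (hypothetical predicates).
    facts = graph.query(f"""
        PREFIX ex: <http://example.org/investments/>
        SELECT ?predicate ?value WHERE {{
            <{investment_uri}> ?predicate ?value .
        }}
    """)
    fact_lines = [f"{row.predicate} -> {row.value}" for row in facts]

    # 2. Ground the LLM in those facts only, and ask it to answer from them.
    prompt = (
        "Answer the question using ONLY the facts below. "
        "If the facts are insufficient, say so.\n\n"
        "Facts:\n" + "\n".join(fact_lines) + f"\n\nQuestion: {question}"
    )
    answer = call_llm(prompt)

    # 3. Return the answer together with the triples that produced it.
    return {"answer": answer, "supporting_facts": fact_lines}
```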

 

Closing 

Now more than ever, the understanding and application of semantic layers are rapidly advancing. Organizations across industries are increasingly investing in solutions to enhance their knowledge and data management capabilities, driven in part by growing interest in benefiting from advanced AI capabilities.

The days of relying on a single, monolithic tool are behind us. Enterprises are increasingly investing in semantic technologies to not only work with the systems of today but also to future-proof their data infrastructure for the solutions of tomorrow. A semantic layer provides the standards that act as a universal “music sheet,” enabling data to be played and interpreted by any instrument, including emerging AI-driven tools. This approach ensures flexibility, reduces vendor lock-in, and empowers organizations to adapt and evolve without being constrained by legacy systems.

If you are looking to learn more about how organizations are approaching semantic layers at scale, or are seeking to unstick a stalled initiative, you can learn more from our case studies or contact us if you have specific questions.

Women’s Health Foundation – Semantic Classification POC


The Challenge

A humanitarian foundation focusing on women’s health faced a complex problem: determining the highest impact decision points in contraception adoption for specific markets and demographics. Two strategic objectives drove the initiative—first, understanding the multifaceted factors (from product attributes to social influences) that guide women’s contraceptive choices, and second, identifying actionable insights from disparate data sources. The key challenge was integrating internal survey response data with internal investment documents to answer nuanced competency questions such as, “What are the most frequently cited factors when considering a contraceptive method?” and “Which factors most strongly influence adoption or rejection?” This required a system that could not only ingest and organize heterogeneous data but also enable executives to visualize and act upon insights derived from complex cross-document analyses.

 

The Solution

To address these challenges, the project team developed a proof-of-concept (POC) that leveraged advanced graph technology combined with AI-augmented classification techniques. 

The solution was implemented across several workstreams:

Defining System Functionality
The initial phase involved clearly articulating the use case. By mapping out the decision landscape—from strategic objectives (improving modern contraceptive prevalence rates) to granular insights from user research—the team designed a tailored taxonomy and ontology for the women’s health domain. This semantic framework was engineered to capture cultural nuances, local linguistic variations, and the diverse attributes influencing contraceptive choices.

Processing Existing Data
With the functionality defined, the next phase involved transforming internal survey responses and investment documents into a unified, structured format. An AI-augmented classification workflow was deployed to extract tacit knowledge from survey responses. This process was supported by a stakeholder-validated taxonomy and ontology, allowing raw responses to be mapped into clearly defined data classes. This robust data processing pipeline ensured that quantitative measures (like frequency of citation) and qualitative insights were captured in a cohesive base graph.
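
A heavily simplified sketch of that classification step might look like the following, where the prompt constrains the model to the validated taxonomy and any off-taxonomy output is discarded. The factor labels and the call_llm helper are illustrative placeholders, not the project's actual vocabulary or model.

```python
# Simplified sketch of AI-augmented classification: survey responses are mapped
# to a stakeholder-validated taxonomy, and off-taxonomy output is discarded.
# The labels and `call_llm` helper are illustrative placeholders.

VALIDATED_TAXONOMY = [
    "Product attribute - side effects",
    "Product attribute - effectiveness",
    "Social influence - partner",
    "Social influence - healthcare provider",
    "Access - cost",
    "Access - availability",
]

def call_llm(prompt: str) -> str:
    """Placeholder for the language model used in the classification workflow."""
    raise NotImplementedError

def classify_response(response_text: str) -> list[str]:
    prompt = (
        "Classify the survey response into one or more of these factors, "
        "returned one per line, using the exact labels:\n"
        + "\n".join(VALIDATED_TAXONOMY)
        + f"\n\nResponse: {response_text}"
    )
    raw_labels = [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]
    # Keep only labels that exist in the validated taxonomy.
    return [label for label in raw_labels if label in VALIDATED_TAXONOMY]
```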

Building the Analysis Model
The core of the solution was the creation of a Product Adoption Survey Base Graph. Processed data was converted into RDF triples using a rigorous ontology model, forming the base graph designed to answer competency questions via SPARQL queries. While this model laid the foundation for revealing correlations and decision factors, the full production of the advanced analysis graph—designed to incorporate deeper inference and reasoning—remained as a future enhancement.

Handoff of Analysis Graph Production and Frontend Implementation
Due to time constraints, the production of the comprehensive analysis graph and the implementation of the interactive front end were transitioned to the client. Our team delivered the base graph and all necessary supporting documentation, providing the client with a solid foundation and a detailed roadmap for further development. This handoff ensures that the client’s in-house teams can continue productionalizing the analysis graph and integrate it with their BI dashboard for end-user access.

Provide a Roadmap for Further Development
Beyond the initial POC, a clear roadmap was established. The next steps include refining the AI classification workflow, fully instantiating the analysis graph with enhanced reasoning capabilities, and developing the front end to expose these insights via a business intelligence (BI) dashboard. These tasks have been handed off to the client, along with guidance on leveraging enterprise graph database licenses and integrating the solution within existing knowledge management frameworks.

 

The EK Difference

A standout feature of this project is its novel, generalizable technical architecture:

Ontology and Taxonomy Design:
A custom ontology was developed to model the women’s health domain—incorporating key decision factors, cultural influences, and local linguistic variations. This semantic backbone ensures that structured investment data and unstructured survey responses are harmonized under a common framework.

AI-Augmented Classification Pipeline:
The solution leverages state-of-the-art language models to perform the initial classification of survey responses. Supported by a validated taxonomy, this pipeline automatically extracts and tags critical data points from large volumes of survey content, laying the groundwork for subsequent graph instantiation, inference, and analysis.

Graph Instantiation and Querying:
Processed data is transformed into RDF triples and instantiated within a dedicated Product Adoption Survey Base Graph. This graph, queried via SPARQL through a GraphDB workbench, offers a robust mechanism for cross-document analysis. Although the full analysis graph is pending, the base graph effectively supports the core competency questions.


Guidance for BI Integration:
The architecture includes a flexible API layer and clear documentation that maps graph data into SQL tables. This design is intended to support future integration with BI platforms, enabling real-time visualization and executive-level decision-making.

 

The Results

The POC delivered compelling outcomes despite time constraints:

  • Actionable Insights:
    The system generated new insights by identifying frequently cited and impactful decision factors for contraceptive adoption, directly addressing the competency questions set by the Women’s Health teams.
  • Improved Data Transparency:
    By structuring tribal knowledge and unstructured survey data into a unified graph, the solution provided an explainable view of the decision landscape. Stakeholders gained visibility into how each insight was derived, enhancing trust in the system’s outputs.
  • Scalability and Generalizability:
    The technical architecture is robust and adaptable, offering a scalable model for analyzing similar survey data across other health domains. This approach demonstrates how enterprise knowledge graphs can drive down the total cost of ownership while enhancing integration within existing data management frameworks.
  • Strategic Handoff:
    Recognizing time constraints, our team successfully handed off the production of the comprehensive analysis graph and the implementation of the front end to the client. This strategic decision ensured continuity and allowed the client to tailor further development to their unique operational needs.

Humanitarian Foundation – SemanticRAG POC


The Challenge

A humanitarian foundation needed to demonstrate the ability of its Graph Retrieval Augmented Generation (GRAG) system to answer complex, cross-source questions. In particular, the task was to evaluate the impact of foundation investments on strategic goals by synthesizing information from publicly available domain data, internal investment documents, and internal investment data. The challenge lay in connecting diverse and unstructured information and ensuring that the insights generated were precise, explainable, and actionable for executive stakeholders.

 

The Solution

To address these challenges, the project team developed a proof-of-concept (POC) that leveraged advanced graph technology and a semantic RAG (Retrieval Augmented Generation) agentic workflow. 

The solution was built around several core workstreams:

Defining System Functionality

The initial phase focused on establishing a clear use case: enabling the foundation to query its data ecosystem with natural language questions and receive accurate, explainable answers. This involved mapping out a comprehensive taxonomy and ontology that could encapsulate the knowledge domain of investments, thereby standardizing how investment documents and data were interpreted and interrelated.

Processing Existing Data

With functionality defined, the next step was to ingest and transform various data types. Structured data from internal systems and unstructured investment documents were processed and aligned with the newly defined ontology. Advanced techniques, including semantic extraction and graph mapping, were employed to ensure that all data—regardless of source—was accessible within a unified graph database.

Building the Chatbot Model

Central to the solution was the development of an investment chatbot that could leverage the graph’s interconnected data. This was approached as a cross-document question-answering challenge. The model was designed to predict answers by linking query nodes with relevant data nodes across the graph, thereby addressing competency questions that a naive retrieval model would miss. An explainable AI component was integrated to transparently show which data points drove each answer, instilling confidence in the results.

Deploying the Whole System in a Containerized Web Application Stack

To ensure immediate usability, the POC was deployed, along with all of its dependencies, in a user-friendly, portable web application stack. This involved creating a dedicated API layer to interface between the chatbot and the graph database containers, alongside a custom front end that allowed executive users to interact with the system and view detailed explanations of the generated answers and the source documents upon which they were based. Early feedback highlighted the system’s ability to connect structured and unstructured content seamlessly, paving the way for broader adoption.

Providing a Roadmap for Further Development

Beyond the initial POC, the project laid out clear next steps. Recommendations included refining the chatbot’s response logic, optimizing performance (notably in embedding and document chunking), and enhancing user experience through additional ontology-driven query refinements. These steps are critical for evolving the system from a demonstrative tool to a fully integrated component of the foundation’s data management and access stack.

 

 

The EK Difference

A key differentiator of this project was its adoption of standards-based semantic graph technology and its highly generalizable technical architecture. 

The architecture comprises:

Investment Ontology and Data Mapping:

A rigorously defined ontology underpins the entire system, ensuring that all investment-related data—from structured datasets to narrative reports—is harmonized under a common language. This semantic backbone supports both precise data integration and flexible query interpretation.

Graph Instantiation Pipeline:

Investment data is transformed into RDF triples and instantiated within a robust graph database. This pipeline supports current data volumes and is scalable for future expansion. It includes custom tools to convert CSV files and other structured datasets into RDF and mechanisms to continually map new data into the graph.
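
As a rough sketch of the CSV-to-RDF portion of such a pipeline (the column names, namespace, and predicates below are invented for illustration, not the foundation's actual investment ontology):

```python
# Illustrative CSV-to-RDF conversion: each row becomes a set of triples under an
# invented namespace; a real pipeline would follow the defined investment ontology.
import csv
from rdflib import Graph, Namespace, Literal, RDF

INV = Namespace("http://example.org/investment/")

def csv_to_rdf(csv_path: str) -> Graph:
    g = Graph()
    g.bind("inv", INV)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            investment = INV[f"investment/{row['investment_id']}"]
            g.add((investment, RDF.type, INV.Investment))
            g.add((investment, INV.hasTitle, Literal(row["title"])))
            g.add((investment, INV.supportsGoal, INV[f"goal/{row['strategic_goal_id']}"]))
            g.add((investment, INV.hasAmountUSD, Literal(float(row["amount_usd"]))))
    return g

# graph = csv_to_rdf("investments.csv")
# graph.serialize("investments.ttl", format="turtle")
```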

Semantic RAG Agentic Workflow and API:

The solution utilizes a semantic RAG approach to navigate the complexities of cross-document query answering. This agentic workflow is designed to minimize unhelpful hallucinations, ensuring that each answer is traceable back to the underlying data. The integrated API provides a seamless bridge between the front-end chatbot and the back-end graph, enabling real-time, explainable responses.

Investment Chatbot Deployment:

Built as a central interface, the chatbot exemplifies how graph technology can be operationalized to address executive-level investment queries. It is fine-tuned to reflect the foundation’s language and domain knowledge, ensuring that every answer is accurate and contextually relevant.

 

The Results

The POC successfully demonstrated that GRAG could answer complex questions by:

  • Delivering coherent and explainable recommendations that bridged structured and unstructured investment data.
  • Significantly reducing query response time through a tightly integrated semantic RAG workflow.
  • Providing a transparent AI mapping that allowed stakeholders to see exactly how each answer was derived.
  • Establishing a scalable architecture that can be extended to support a broader range of use cases across the foundation’s data ecosystem.

This project underscores the transformative potential of graph technology in revolutionizing how investment health is assessed and how strategic decisions are informed. With a clear roadmap for future enhancements, the foundation now has a powerful, next-generation tool for deep, context-driven analysis of its investments.

What is Semantics and Why Does it Matter?

This white paper will unpack what semantics is, and walk through the benefits of a semantic approach to your organization’s data across search, usability, and standardization. As a knowledge and information management consultancy, EK works closely with clients to help them reorganize and transform their organization’s knowledge structure and culture. One habit that we’ve noticed in working with clients is a hesitancy on their part to engage with the meaning and semantics of their data, summed up in the question “Why semantics?” This can come from a few places:

  • An unfamiliarity with the concept;
  • The fear that semantics is too advanced for a lower data-maturity organization; or
  • The assumption that problems of semantics can be engineered away with the right codebase.

These are all reasons we’ve seen for semantic hesitancy. And to be fair, between the semantic layer, semantic web, semantic search, and other applications, it can be easy to lose track of what semantics means and what the benefits are. 

 

What is Semantics?

The term semantics originally comes from philosophy, where it refers to the study of how we construct and transmit meaning through concepts and language. This might sound daunting, but the semantics we refer to when looking at data is a much more limited application. 

Data semantics looks at what data is meant to represent – the meaning and information contained within the data – as well as our ability to encode and interpret that meaning. 

Data semantics can cover the context in which the data was produced, what the data is referring to, and any information needed to understand and make use of the data. 

To better understand what this looks like, let’s look at an imaginary example of tabular data for a veterinary clinic tracking visits:

Name    | Animal | Breed                      | Sex | Date     | Reason for Visit   | Notes
Katara  | Cat    | American Shorthair         | F   | 11/22/23 | Checkup            |
Grayson | Rabbit | English Lop                | M   | 10/13/23 | Yearly Vaccination |
Abby    | Dog    | German Shorthaired Pointer | F   | 9/28/23  | Appointment        | Urinary problems

My Awesome Vet Clinic

 

Above, we have the table of sample data for our imaginary veterinary clinic. Within this table, we can tell what a piece of data is meant to represent by its row and column placement. Looking at the second row of the first column, we can see that the string “Katara” refers to a name because it sits under the column header “Name”. If we look at the cell to the right of that one, we can see that Katara is in fact the name of a Cat. Continuing along the row to the right tells us the breed of cat, the date of the visit, and the reason that Katara’s owners have taken her in today.

The real-life Katara, watching the author as he typed the first draft of this white paper

 

While the semantics of our table might seem basic compared to more advanced applications and data formats, it is important for being able to understand and make use of the data. This leads into my first point:

 

You are Already Using Semantics

Whether you have a formal semantics program or not, your organization is already engaging with the semantics of data as a daily activity. Because semantics is so often mentioned as a component of advanced data applications and solutioning, people sometimes wrongly assume that enhancing and improving the semantics of data can only be a high-maturity activity. One client at a lower-maturity organization brought this up directly, saying “Semantics is the balcony of a house. Right now what I need is the foundation.” What they were missing, and what we showed them through our time with this client, is that understanding and improving the semantics of your data is a foundational activity. From how tables are laid out, to naming conventions, to the lists of terms that appear in cells and drop-downs, semantics is inextricably linked to how we use and understand data. 

 

Achieving Five-Star Semantic Data

To prevent misunderstandings, we need to improve our data’s semantic expressiveness. Let’s look at the veterinary clinic data sample again. Earlier, we assumed that “Name” refers to the name of the animal being brought in, but suppose someone unfamiliar with the table’s setup is given the data to use. If the clinic’s billing needs to make a call, will they realize that “Katara” refers to the name of the cat and not the cat’s owner, or will they make an embarrassing mistake? When evaluating the semantics of data, I like to reference Panos Alexopoulos’s book Semantic Modeling for Data. Here, Panos defines semantic data modeling as creating representations of data such that the meaning is “explicit, accurate, and commonly understood by both humans and computer systems.” Each of these is an important component of ensuring that the semantics of our data support use, growth over time, and analysis.

 

Explicit

Data carries meaning. Often the meaning of data is understood implicitly by the people who are close to the production of the data and involved in the creation of datasets. Because they already know what the data is, they might not feel the need to describe explicitly what the data is, how it was put together, and what the definition of different terms are. Unfortunately, this can lead to some common issues once the data is passed on to be used by other people:

  • Misunderstanding what the data can be used for
  • Misunderstanding what the data describes
  • Misunderstanding what data elements mean

When we look at the initial tabular example, we know that Katara is a cat because of the table's structure. If we were to take the concept of "Katara" outside of the table, though, we would lose that information: "Katara" would just be a string, without any guidance as to whether that string refers to Katara the cat, Katara the fictional character, or any other Kataras that may exist.

To handle the issue of understanding what the data can be used for, we want to capture how the data was produced, and what it is meant to be used for, explicitly for consumers. Links between source and target data sets should also be called out explicitly to facilitate use and understanding, instead of being left to the reader to assume.

What the data describes can be captured by modeling the most important things (or entities) that the data is describing. Looking at our veterinary clinic data, let’s pull out these entities, their information, and the links between them:

 

A sample conceptual model for the veterinary clinic information, adding in some additional information such as phone number and address

 

We now have the beginnings of a conceptual model. This model is an abstraction that identifies the "things" behind the data: the conceptual entities that the information within the cells is referring to. Because the model makes relationships between entities explicit, it helps people new to the data understand the inherent structure. This makes it easier to join or map new datasets to the dataset that we modeled.

Finally, to capture what data elements mean, we can make use of a data dictionary. A data dictionary contains additional metadata about data elements, such as their definition, standardized name, and attributes. Using a data dictionary, we can see what the allowable values are for the field "Animal," for instance, or how the definitions of an "appointment" and a "checkup" differ.
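
As a rough illustration, a data dictionary entry can be as simple as a small structured record per field. The sketch below (in Python) captures definitions and allowable values for a few of the clinic's fields; the standardized names, definitions, and value lists are assumptions made for this example rather than an actual clinic standard.

# A minimal, illustrative data dictionary for the veterinary clinic table.
# Field names, definitions, and allowable values are assumptions for this sketch.
data_dictionary = {
    "Name": {
        "standardized_name": "animalName",
        "definition": "The name of the animal (pet) being seen, not the owner's name.",
        "data_type": "string",
    },
    "Animal": {
        "standardized_name": "animalType",
        "definition": "The species or type of animal being seen.",
        "data_type": "string",
        "allowable_values": ["Cat", "Dog", "Rabbit"],  # controlled list, extended as needed
    },
    "Reason for Visit": {
        "standardized_name": "visitReason",
        "definition": "The purpose of the visit; 'Checkup' is routine, 'Appointment' addresses a specific concern.",
        "data_type": "string",
        "allowable_values": ["Checkup", "Yearly Vaccination", "Appointment"],
    },
}

def validate_value(field: str, value: str) -> bool:
    """Return True if the value is allowed for the field (or the field is uncontrolled)."""
    entry = data_dictionary.get(field, {})
    allowed = entry.get("allowable_values")
    return allowed is None or value in allowed

print(validate_value("Animal", "Cat"))     # True
print(validate_value("Animal", "Dragon"))  # False

Even this minimal form answers the billing department's question from earlier: "Name" is explicitly the animal's name, not the owner's.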

 

Accurate

Data should be able to vouch for its own accuracy to promote trust and usage. It is also important for those accuracy checks to be human-readable as well as something that can be used and understood by machines. It might seem obvious at first glance that we want data to be accurate. Less obvious is how we can achieve accuracy. To ensure our data is accurate, we should define what accuracy looks like for our data. This can be formatting information: dates should be encoded as YYYY-MM-DD following the ISO 8601 Standard rather than Month/Day/Year, for example. It can also take the form of a regular expression that ensures that phone numbers are 10 digits with a valid North American area code. Having accuracy information captured as a part of your data’s semantics works both to ensure that data is correct at the source, and that poor, inaccurate data is not mixed into the dataset down the line. As the saying goes, “Garbage in, garbage out.”
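
To make this concrete, here is a minimal sketch of what machine-checkable accuracy rules might look like in Python for the two examples above. The phone number rule shown (ten digits, with the area code and exchange starting with 2-9) is a simplification of North American numbering used purely for illustration, not a complete validation.

import re
from datetime import date

def is_iso_8601_date(value: str) -> bool:
    """Accept only YYYY-MM-DD dates (ISO 8601), e.g. 2023-11-22 rather than 11/22/23."""
    try:
        date.fromisoformat(value)
        return True
    except ValueError:
        return False

# Simplified North American phone rule: 10 digits, area code and exchange starting with 2-9.
PHONE_PATTERN = re.compile(r"[2-9]\d{2}[2-9]\d{6}")

def is_valid_phone(value: str) -> bool:
    digits = re.sub(r"\D", "", value)  # strip punctuation such as dashes or parentheses
    return bool(PHONE_PATTERN.fullmatch(digits))

print(is_iso_8601_date("2023-11-22"))    # True
print(is_iso_8601_date("11/22/23"))      # False
print(is_valid_phone("(703) 555-0142"))  # True
print(is_valid_phone("123-456-7890"))    # False (area code cannot start with 1)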

 

Machine Readable

Looking back at our conceptual diagram, we can see a clear limitation. Human users can use the model to understand how entities in the data link together, but the model itself is not machine readable. With a well-defined machine-readable model, programs would be able to know that visits are always associated with one animal, and that animals must have one or more owners. That knowledge could then be used programmatically to verify when our data is accurate or inaccurate. This is the benefit of machine-readable semantics, and it is something that we want to enable across all aspects of our data. One way of encoding data semantics to be readable by humans and machines is to use a semantic knowledge graph. A semantic knowledge graph captures data, models, and associated semantic metadata in a graph structure that can be browsed by humans or queried programmatically. It fulfills the explicit semantics and accuracy requirements described above, promoting the usability and reliability of data.
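
As a small sketch of what this looks like in practice, the snippet below uses Python and the rdflib library to load a few example triples and then programmatically check the two rules just mentioned: every visit references exactly one animal, and every animal has at least one owner. The vet: namespace and the hasAnimal and hasOwner property names are illustrative assumptions, not a published model.

from rdflib import Graph, Namespace, RDF

VET = Namespace("http://example.org/vet#")  # illustrative namespace for this sketch

g = Graph()
g.parse(format="turtle", data="""
    @prefix vet: <http://example.org/vet#> .
    vet:visit1 a vet:Visit ; vet:hasAnimal vet:Katara .
    vet:visit2 a vet:Visit .                   # deliberately missing its animal
    vet:Katara a vet:Animal ; vet:hasOwner vet:Ben .
""")

# Rule 1: every visit must reference exactly one animal.
for visit in g.subjects(RDF.type, VET.Visit):
    animals = list(g.objects(visit, VET.hasAnimal))
    if len(animals) != 1:
        print(f"Inaccurate data: {visit} has {len(animals)} animal(s), expected exactly 1")

# Rule 2: every animal must have one or more owners.
for animal in g.subjects(RDF.type, VET.Animal):
    if not list(g.objects(animal, VET.hasOwner)):
        print(f"Inaccurate data: {animal} has no owner recorded")

In production settings, constraints like these are more often expressed declaratively (for example, with SHACL shapes) than hand-coded, but the principle is the same: once the model is machine readable, accuracy checks become something a program can run.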

 

Example: Solving for Semantics with a Knowledge Graph

To demonstrate what good semantic data can do, let’s take our data and put it into a simple semantic knowledge graph:

 

Sample knowledge graph based on our veterinary clinic data

 

Within this graph, we have made our semantics explicit by defining not just the data but also the model that our data follows. The graph also captures the data concept information that we would want to find in a data dictionary. If we want to know more about any part of this model – for example, what the relationship “hasBreed” refers to – we can navigate to that part of the model and find out more information: 

 

The definition of the relationship “hasBreed” within the model
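
For readers who want to see what such a definition might look like under the hood, here is a hedged sketch in Python (using rdflib) that gives "hasBreed" a label, a human-readable definition, and a domain and range. The vet: namespace and the exact wording of the definition are assumptions for illustration.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

VET = Namespace("http://example.org/vet#")  # illustrative namespace for this sketch

model = Graph()
model.bind("vet", VET)

# Define the relationship "hasBreed" so both people and programs can look up its meaning.
model.add((VET.hasBreed, RDF.type, OWL.ObjectProperty))
model.add((VET.hasBreed, RDFS.label, Literal("has breed")))
model.add((VET.hasBreed, RDFS.comment,
           Literal("Links an animal to the breed it belongs to, drawn from the breed taxonomy.")))
model.add((VET.hasBreed, RDFS.domain, VET.Animal))
model.add((VET.hasBreed, RDFS.range, VET.Breed))

print(model.serialize(format="turtle"))

Because the definition lives in the graph alongside the data, answering "what does hasBreed mean?" becomes a lookup rather than a guess.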

 

Within the graph model, we've captured the taxonomies that can be used to fill out information on an animal's Type and Breed, as well as the cardinality of relationships, to ensure that the data and its use remain accurate. And, because we are using a knowledge graph, all of this information is machine readable, allowing us to do things like query the graph. Going back to the first example, we can ask the graph for the name of Katara's Owner versus Katara's Name and receive the contextually correct response (see the sample SPARQL query below).

PREFIX schema: <http://schema.org/>
PREFIX vet: <http://example.org/vet#>    # illustrative namespace for the clinic's model

SELECT ?PetOwnerName ?PetName
WHERE {
    ?PetOwner vet:hasPet ?Pet .
    ?PetOwner schema:name ?PetOwnerName .
    ?Pet schema:name ?PetName .
}

Rather than having to guess at the meaning of different cells in a table, our three components of good semantics ensure that we can understand and make sense of the data.

 

PetOwnerName,PetName
Ben,Katara
Shay,Grayson
Michael,Abby

Example CSV result based on the above SPARQL query

 

Conclusion

This article has walked through how semantics is a core part of being able to understand and make use of an organization's data. For any group working with data at scale, where there is a degree of separation between data producers and data consumers for a given dataset, having clear and well-documented semantics is crucial. Without good semantics, many of the fundamental uses of data run into roadblocks and difficulties. Ultimately, the important question to ask is not "Why semantics?" but rather "Where does semantics fit into a data strategy?" At Enterprise Knowledge, we can work with you to develop an enterprise data strategy that takes your needs into account. Contact us to learn more.

The post What is Semantics and Why Does it Matter? appeared first on Enterprise Knowledge.

]]>
Aligning an Enterprise-Wide Information Management (IM) Roadmap for a Global Energy Company https://enterprise-knowledge.com/aligning-an-enterprise-wide-information-management-im-roadmap-for-a-global-energy-company/ Wed, 26 Feb 2025 20:04:06 +0000 https://enterprise-knowledge.com/?p=23215 A global energy company sought support in detailing and aligning their information management (IM) team’s roadmaps for all four of their IM products – covering all managed applications, services, projects, and capabilities – to help them reach their target state vision of higher levels of productivity, more informed decision-making, and quality information made available to ... Continue reading

The post Aligning an Enterprise-Wide Information Management (IM) Roadmap for a Global Energy Company appeared first on Enterprise Knowledge.

]]>

The Challenge

A global energy company sought support in detailing and aligning their information management (IM) team’s roadmaps for all four of their IM products – covering all managed applications, services, projects, and capabilities – to help them reach their target state vision of higher levels of productivity, more informed decision-making, and quality information made available to all of their users.

They were facing the following challenges:

  • Recently created products with immature, internally focused roadmaps, resulting in missed opportunities to incorporate industry trends and standards;
  • Limited alignment across products, resulting in unnecessary duplicative work, under-standardization, and a lack of business engagement;
  • Varying levels of granularity and detail across product roadmaps, resulting in some confusion around what tasks entail;
  • Inconsistently defined objectives and/or business cases, resulting in unclear task goals; and
  • Isolated, uncirculated efforts to harness artificial intelligence (AI), resulting in a fragmented AI strategy and time lost performing tasks manually that could have been automated.

 

The Solution

The energy company engaged Enterprise Knowledge (EK) over a 3.5-month period to refine their product roadmaps and align and combine them into a unified 5-year roadmap for the entire portfolio. In addition, the company tasked EK with developing a supplemental landscape design diagram to visualize the information management team's technical scope, strengthening the delivery of each product and its value to the company.

EK began by analyzing existing roadmaps and reviewing them with the product managers, identifying the target state for each. We facilitated multiple knowledge gathering sessions, conducted system demos, and analyzed relevant content items to understand the strengths, challenges, and scope of each product area, as well as the portfolio as a whole.

EK then provided recommendations for additional tasks to fill observed gaps and opportunities to consolidate overlap, aligning the roadmaps across 5 recommended KM workstreams:

  • Findability & Search Insights: Provide the business with the ability to find and discover the right information at the time of need.
  • Graph Modeling: Develop a graph model to power search, analytics, recommendations and more for the IM team.
  • Content & Process Governance: Establish and maintain content, information, and data governance across the company to support reuse and standardization.
  • Security & Access Management: Support the business in complying with regulatory requirements and security considerations to safeguard all IM information assets.
  • Communications & Adoption: Establish consistent processes and methods to support communication with the business and promote the adoption of new tools/capabilities.

To strengthen and connect the organization's AI strategy, EK threaded automation throughout the roadmaps and incorporated it within each workstream wherever feasible. The goal was to improve business efficiency and productivity, as well as to move the team one step closer to making IM "invisible." Each task was also assigned a type (foundational, MVP, enhancement, operational support), level of effort (low, medium, high), business value (1 (low) to 5 (high) on a Likert scale), and ownership (portfolio vs. individual products). EK marked which tasks already existed in the product roadmaps and which ones were newly recommended to supplement them. By mapping the tasks to the 5 workstreams in both a visual roadmap diagram and an accompanying spreadsheet, the IM team was able to see where tasks were dependent on each other and where overlap was occurring across the portfolio.

An abstracted view of one task from each product’s roadmap, demonstrating how the task type and prioritization factors were assigned for readability.

Additionally, as supplemental material to the roadmaps, EK developed a diagram to visualize the team’s technical landscape and provide a reference point for connections between tools and capabilities within the portfolio and the company’s environment, as well as to show dependencies between the products as mapped to EK’s industry standard framework (including layers encompassing user interaction, findability and metadata, and governance and maintenance). The diagram delineated between existing applications and platforms, planned capabilities that haven’t been put in place yet, and recommended capabilities that correspond to EK’s suggested future state tasks from the roadmaps, and clearly marked AI-powered/-assisted capabilities.

 

 

The EK Difference

Throughout the engagement, time with stakeholders was difficult to find. To make sure we were able to engage the right stakeholders, EK developed a 15-minute “roadshow” and interview structure with targeted questions to optimize the time we were able to schedule with participants all across the globe. Our client team praised this during project closeout, claiming that the novel approach enabled more individuals with influence to get in the room with EK, generating more organic awareness of and excitement for the roadmap solutions.

Another key ingredient EK brought to the table was our expertise and insight into AI solutioning, tech and market trends, and success stories from other companies in the energy industry. We injected AI and other automation into the roadmaps wherever we identified the opportunity – prioritizing a strategy that focused on secure and responsible AI solutions, data preparedness, and long-term governance – and were even able to recommend a backlog of 10 unique pilots (with varying levels of automation, depending on the targeted subject and product area) to help the company determine their next steps.

 

The Results

As a result of our roadmap alignment efforts with the IM team, each product manager now has more visibility into what the other products are doing and where they may overlap with, complement, or depend on their own efforts, enabling them to better plan for the future. The Unified Portfolio Roadmap, spanning 5 years, provides the energy company with a single, aligned view of all IM initiatives, accompanied by four Product Roadmaps and a Technical Landscape Diagram, and establishes a balance between internal business demand, external technologies, strategic AI, and best-in-class industry developments.

The energy company also chose to implement two of the pilots EK had recommended – focused on reducing carbon emissions through AI-assisted content deduplication and developing a marketing package to promote their internal business management system – to begin operationalizing their roadmaps immediately.


The post Aligning an Enterprise-Wide Information Management (IM) Roadmap for a Global Energy Company appeared first on Enterprise Knowledge.

]]>
Galdamez and Cross to Speak at the APQC 2025 Process & Knowledge Management Conference https://enterprise-knowledge.com/guillermo-galdamez-benjamin-cross-will-be-presenting-at-apqc-2025/ Fri, 31 Jan 2025 19:08:43 +0000 https://enterprise-knowledge.com/?p=23047 Guillermo Galdamez, Principal Knowledge Management Consultant, and Benjamin Cross, Project Manager, will be presenting “Knowledge Portals: Manifesting A Single View Of Truth For Your Organization” at the APQC 2025 Process & Knowledge Management Conference on April 10th. In this presentation, … Continue reading

The post Galdamez and Cross to Speak at the APQC 2025 Process & Knowledge Management Conference appeared first on Enterprise Knowledge.

]]>
Guillermo Galdamez, Principal Knowledge Management Consultant, and Benjamin Cross, Project Manager, will be presenting “Knowledge Portals: Manifesting A Single View Of Truth For Your Organization” at the APQC 2025 Process & Knowledge Management Conference on April 10th.

In this presentation, Galdamez and Cross will go into an in-depth explanation of Knowledge Portals, their value to organizations, the technical components that make up these solutions, lessons learned from their implementation across multiple clients in different industries, how and when to make the case to get started on a Knowledge Portal design and implementation effort, and how these solutions can become a catalyst for a knowledge transformation within organizations.

Find out more about the event and register at the conference website.

The APQC 2025 Process & Knowledge Management Conference will be hosted in Houston, Texas, April 9 and 10. The conference theme is: Integrate, Influence, Impact. EK consultants Guillermo Galdamez and Benjamin Cross are featured speakers.

The post Galdamez and Cross to Speak at the APQC 2025 Process & Knowledge Management Conference appeared first on Enterprise Knowledge.

]]>
Why Graph Implementations Fail (Early Signs & Successes) https://enterprise-knowledge.com/why-graph-implementations-fail-early-signs-successes/ Thu, 09 Jan 2025 15:35:57 +0000 https://enterprise-knowledge.com/?p=22889 Organizations continue to invest heavily in efforts to unify institutional knowledge and data from multiple sources. This typically involves copying data between systems or consolidating it into a new physical location such as data lakes, warehouses, and data marts. With … Continue reading

The post Why Graph Implementations Fail (Early Signs & Successes) appeared first on Enterprise Knowledge.

]]>
Organizations continue to invest heavily in efforts to unify institutional knowledge and data from multiple sources. This typically involves copying data between systems or consolidating it into a new physical location such as data lakes, warehouses, and data marts. With few exceptions, these efforts have yet to deliver the connections and context required to address complex organizational questions and deliver usable insights. Moreover, the rise of Generative AI and Large Language Models (LLMs) continues to increase the need to ground AI models in factual, enterprise context. The result has been a renewed interest in standard knowledge management (KM) and information management (IM) principles.

Over the last decade, enterprise knowledge graphs have been rising to the challenge, playing a transformational role in providing enterprise 360 views, personalizing content and products, improving data quality and governance, and making organizational knowledge available in a machine-readable format. Graphs offer a more intuitive, connected view of organizational data entities as they shift the focus from the physical data itself to the context, meaning, and relationships between data – providing a connected representation of an organization's knowledge and data domains without the need to make copies or incur expensive migrations – and, most importantly today, delivering Knowledge Intelligence to enterprise AI.

While this growing interest in graph solutions has been long anticipated and is certainly welcome, it is also yielding some stalled implementations, unmet expectations, and, in some cases, complete initiative abandonment. Understanding that every organization has its own unique priorities and challenges, there can be various reasons why an investment in graph solutions did not yield the desired results. In this article, I draw on my observations and experience with industry lessons learned to pinpoint the most common culprits topping the list. The signs are often subtle but can be identified if you know where to look. These indicators typically emerge as misalignments between technology, processes, and the organization's understanding of data relationships. Below are the top tell-tale signs that suggest a trajectory of failure:

1. Treated as Traditional, Application-Focused Efforts (As Technology/Software-Centric Programs)

If I had to take one data point from my observation of the organizations that we work with, it is that the biggest hurdle to adopting graph solutions isn't whether the approach itself works – many top companies have already shown it does. The real challenge lies in the mindset and historical approach that organizations have developed over many years when it comes to managing information and technology programs. The complex questions we are asking of our content and data today can no longer be answered by the mental models and legacy solutions organizations have been working with for the last four or five decades.

Traditional applications and databases, like relational or flat file systems, are built to handle structured, tabular data, not complex, interwoven relationships. The real power of graphs lies in their ability to define organizational entities and data objects (people, customers, products, places, etc.) independent of the technology they are stored in. Graphs are optimized to handle highly interconnected use cases (such as networks of related business entities, supply chains, and recommendation systems), which traditional systems cannot represent efficiently. Adopting such a framework requires a shift from a legacy application/system-centric approach to a data-centric approach where data doesn't lose its meaning and context when taken out of a spreadsheet, a document, or a SQL table.
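
To illustrate the kind of interconnected question graphs handle naturally, the sketch below (Python with rdflib) uses a SPARQL property path to walk a supply chain of arbitrary depth in a single query, something that would require repeated or recursive joins in a relational model. The supplier data and the ex:suppliedBy property are invented for this example.

from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
    @prefix ex: <http://example.org/supply#> .
    ex:Retailer    ex:suppliedBy ex:Distributor .
    ex:Distributor ex:suppliedBy ex:Factory .
    ex:Factory     ex:suppliedBy ex:RawMaterialVendor .
""")

# One query walks the whole chain, however many hops deep, via the + property path.
query = """
    PREFIX ex: <http://example.org/supply#>
    SELECT ?upstream
    WHERE { ex:Retailer ex:suppliedBy+ ?upstream . }
"""
for row in g.query(query):
    print(row.upstream)  # Distributor, Factory, RawMaterialVendor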

Sticking with these traditional models, and relying on legacy systems and implementation approaches that don't support relationship modeling, results in an incomplete or superficial understanding of the data, which leads to isolated or incorrect decisions, performance bottlenecks, a lack of trust, and ultimately failed efforts. Organizations that do not recognize that graph solutions often represent a significant shift in how data is viewed and used within an organization are the first to abandon the solution or incur significant technical debt.

Early Signs of Failure

  • Implementation focuses excessively on selecting the best and latest graph database technologies without the required focus on data modeling standards. In such scenarios, the graph technology is deployed without a clear connection to key business goals or critical data outcomes. This ultimately results in misalignment between business objectives and graph implementation – often leading to vendor lock-in.
  • Graph initiatives are treated as isolated IT projects where only highly technical users get value out of the solution. This results in little cross-functional involvement from departments outside of the data or IT teams (e.g., marketing, customer service, product development), where stakeholders and subject matter experts (SMEs) are not engaged or cannot easily access or contribute to the solution throughout modeling/validation and analysis – leading to the intended end users abandoning the solution altogether.
  • Lack of organizational ownership of data model quality and standards. Engineering teams often rely on traditional relational models, creating custom and complex relationships. However, no one is specifically responsible for ensuring the consistency, quality, or structure of these data models. This leads to problems such as inconsistent data formats, missing information, or incomplete relationships within the graph, which ultimately hinders the scalability and performance needed to effectively support the organization.

What Success Looks Like

Graph models rely on high-quality, well-structured business context and data to create meaningful relationships. As such, a data-centric approach requires a holistic view of organizational knowledge and data. If the initiative remains isolated, the organization will miss opportunities to fully leverage the relationships within its data across functions. To tackle this challenge, one of the largest global financial institutions is investing in a semantic layer and a connected graph model to enable comprehensive and complex risk management across the firm. As a heavily regulated financial services firm, their risk management processes necessitate accurate, timely, and detailed data and information to work. By shifting risk operations from application-centric to data-centric, they are investing in standardized terminology and relationship structures (taxonomies, ontologies, and graph analytics solutions) that foster consistency, accuracy and connected data usage across the organization’s 20+ legacy risk management systems. These consumer-grade semantic capabilities are in production environments aiding in business processes where the knowledge graph connects multiple applications providing a centralized, interconnected view of risk and related data such as policies, controls, regulations, etc. (without the need for data migration), facilitating better analysis and decision-making. These advancements are empowering the firm to proactively identify, assess, and mitigate risks, improve regulatory reporting, and foster a more data-driven culture across the firm.

2. Limited Understanding of the Cost-Benefit Equation 

The initial cost of discovering and implementing graph solutions to support early use cases can appear high due to the upfront work required – such as the preliminary setup, data wrangling, aggregation, and fine-tuning needed to contextualize and connect what is otherwise siloed and disparate data. On top of this, the traditional mindset of 'deploy a cutting-edge application once and you're done' can make these initial challenges feel even more cumbersome. This is especially true for executives who may not fully understand the shift from focusing on applications to investing in data-driven approaches, which can provide long-term, compounding benefits. This misunderstanding often leads to the premature abandonment of graph projects, causing organizations to miss out on their full potential far too early. Here's a common scenario we often encounter when walking into stalled graph efforts:

The leadership team or an executive champion leading the innovation arm of a large corporation decides to experiment with building data models and graph solutions to enhance product recommendations and improve data supply chain visibility. The data science team, excited by the possibilities, sets up a pilot project, hoping to leverage a graph's ability to uncover non-obvious (inexplicit) relationships between products, customers, and inventory. Significant initial costs arise as the team invests in graph databases, reallocates resources, and integrates data. Executives grow concerned over mounting costs and the lack of immediate, measurable results. The data science team struggles to show quick value as it uncovers data quality issues, lacks access to stakeholders, domain experts, and the right type of knowledge needed to provide a holistic view, and likely lacks graph modeling expertise. Faced with escalating costs and no immediate payoff, some executives push to pull the plug on the initiative.

Early Signs of Failure:

  • There are no business cases or KPIs tied to a graph initiative, or success measures are centered around short-term ROI expectations such as immediate performance improvements. Graph databases are typically more valuable over time as they uncover deep, complex relationships and generate insights that may not be immediately obvious.
  • Graph development teams are not showing incremental value, leading to misalignment with business goals and to stakeholders ultimately losing interest or becoming risk-averse toward the solution.
  • Overemphasis on up-front technical investment where initial focus is only on costs related to software, talent, and infrastructure – overlooking data complexity and stakeholder engagement challenges, and without recognizing the economies of scale that graph technologies provide once they are up and running.
  • The application of graphs to non-optimal use cases (e.g., a single application rather than interconnected data) – leading to the project team and executives not seeing the pertinent, overarching business outcomes (e.g., providing AI with organizational knowledge) that impact the organization's bottom line.

What Success Looks Like

A sizable number of large-scale graph transformation efforts have proven that once the foundational model is in place, the marginal cost of adding new data and making graph-based queries drops significantly. For example, for a multinational pharmaceutical company, this is measured in a six-digit increase in revenue gains within the first quarter of a production release as a result of the data quality and insights gained within their drug development process. In doing so, internal end users are able to uncover the answers to critical business questions, and the graph data model is poised to become a shareable industry standard. Organizations that have invested early and are realizing the transformational value of graph solutions today understand this compounding nature of graph-powered insights and have invested in showing the short-term, incremental value as part of the success factors for their initial pilots to maintain buy-in and momentum.

 

3. Skillset Misalignment and Resistance to Change

The success of any advanced solution heavily depends on the skills and training of the teams that will be asked to implement and operationalize it. The reality is that the majority of data and IT professionals today, including database administrators, data analysts, and data/software engineers, are often trained in relational databases, and many may have limited exposure to graph theory, graph databases, graph modeling techniques, and graph query languages. 

This challenge is compounded by the limited availability of effective training resources that are specifically tailored to organizational needs, particularly considering the complexity of enterprise infrastructure (as opposed to research or academia). As a result, graph technologies have gained a reputation for having a steep learning curve, particularly within the developer community. This is because many programming languages do not natively support graphs or graph algorithms in a way that seamlessly integrates with traditional engineering workflows. 

Moreover, organizations that adopt graph technologies and databases (such as RDF-based GraphDB, Stardog, Amazon Neptune, or property-graph technologies like Neo4j) often do so without ensuring their teams receive proper training on the specific tools and platforms needed for successful scaling. This lack of preparation frequently limits the team’s ability to design effective graph data models, engineer the necessary content or pipelines for graph consumption, and integrate graph solutions with existing systems. As a result, organizations face slow development cycles, inefficient or incorrect graph implementations, performance issues, and poor scalability – all of which can lead to resistance, pushback, and ultimately the abandonment of the solution.

Early Signs of Failure:

  • Missing data schemas or inappropriate data structures, such as a lack of ontology (a common theme for property graphs), incorrect edge direction, or missing connections where important relationships between nodes are not represented in the graph – leading to incomplete information, flawed analysis, and governance overhead.
  • The project team doesn’t have the right interdisciplinary team representation. The team tasked with supporting graph initiatives lacks the diversity in expertise, such as domain experts, knowledge engineers, content/system owners, product owners, etc.
  • Inability to integrate graph solutions with existing systems as a result of inefficient query design. Queries that are not optimized to leverage the structure of the graph result in slow execution times and inefficient data retrieval, while data copied into the graph results in redundant data storage – exacerbating the complexity and inefficiency of managing overall data quality and integrity.
  • Scalability limitations. As the size of the graph increases, the processing time and memory requirements become substantial, making it difficult to perform operations on large datasets efficiently.

What Success Looks Like

By addressing the skills gap early, planning for the right team composition, aligning teams around a shared understanding of the value of graph solutions, and investing in comprehensive training ecosystems, organizations can avoid common pitfalls that lead to missed opportunities, abandonment, or failure of the graph initiative. A leading global retail chain, for example, invested in graph solutions to aid their data and analytics teams in enhancing reporting. We worked with their data engineering teams to conduct a skills gap analysis and develop tailored training workshops and curricula for their various workstreams. The approach included a five-module intensive training course taught by our ontology and graph experts, supported by a learning ecosystem offering various learning formats: persona/role-based training, practice labs, use-case-based hands-on training, Ask Me Anything (AMA) sessions, industry talks, on-demand job aids and tutorials, and train-the-trainer modules.

Employees were further provided with programmatic approaches to tag their knowledge and data more effectively through the creation of a standard set of metadata, tags, and data cataloging processes, and received training on proper data tagging to support an easier search experience. As a result, the chain acquired best practices for organizing and creating knowledge and data models for their data and analytics transformation efforts, so that employees wasted less time and productivity searching for solutions in siloed locations. This approach significantly reduced overlapping steps and cut the time it took data teams to develop a report from six weeks to a matter of days.

Closing 

Graph implementations require both an upfront investment and a long-term vision. Leaders who recognize this are more likely to support the project through its early challenges, ensuring the organization eventually benefits fully. A key to success is having a champion who understands the entire value of the solution, can drive the shift to a data-centric mindset, and ensures that roles, systems, processes, and culture align with the power of connected data. With the right approach, graph technologies unlock the power of organizational knowledge and intelligence in the age of AI.

If your project shows any of the early signs of failure listed in this article, it behooves you to pause and revisit your approach. Organizations that embark on a graph initiative without understanding or planning for the foundations discussed here frequently end up with stalled or failed projects that never provide the true value a graph project can deliver. Are you looking to get started and learn more about how other organizations are approaching graphs at scale, or are you seeking to unstick a stalled initiative? Read more from our case studies or contact us if you have specific questions.

The post Why Graph Implementations Fail (Early Signs & Successes) appeared first on Enterprise Knowledge.

]]>
Semantic Maturity Spectrum: Search with Context https://enterprise-knowledge.com/semantic-maturity-spectrum-search-with-context/ Tue, 26 Nov 2024 20:57:09 +0000 https://enterprise-knowledge.com/?p=22477 EK’s Urmi Majumder and Madeleine Powell jointly delivered the presentation ‘Semantic Maturity Spectrum: Search with Context’ at the MarkLogic World Conference on September 24, 2024. Semantic search has long proven to be a powerful tool in creating intelligent search experiences. … Continue reading

The post Semantic Maturity Spectrum: Search with Context appeared first on Enterprise Knowledge.

]]>
EK’s Urmi Majumder and Madeleine Powell jointly delivered the presentation ‘Semantic Maturity Spectrum: Search with Context’ at the MarkLogic World Conference on September 24, 2024.

Semantic search has long proven to be a powerful tool in creating intelligent search experiences. By leveraging a semantic data model, it can effectively understand the searcher’s intent and the contextual meaning of the terms to improve search accuracy. In this session, Majumder and Powell presented case studies for three different organizations across three different industries (finance, pharmaceuticals, and federal research) that started their semantic search journey at three very different maturity levels. For each case study, they described the business use case, solution architecture, implementation approach, and outcomes. Finally, Majumder and Powell rounded out the presentation with a practical guide to getting started with semantic search projects using the organization’s current maturity in the space as a starting point.

The post Semantic Maturity Spectrum: Search with Context appeared first on Enterprise Knowledge.

]]>
How Semantic Layers Support Product Search and Discovery https://enterprise-knowledge.com/how-semantic-layers-support-product-search-and-discovery/ Tue, 19 Nov 2024 19:22:26 +0000 https://enterprise-knowledge.com/?p=22464 Taxonomies have been in use for a long time on e-commerce websites. They help users find the products they want to buy by means of organized categories and subcategories of product types, and the feature of filtering by product attributes. … Continue reading

The post How Semantic Layers Support Product Search and Discovery appeared first on Enterprise Knowledge.

]]>
Taxonomies have been in use for a long time on e-commerce websites. They help users find the products they want to buy by means of organized categories and subcategories of product types, and through the ability to filter by product attributes. In fact, when we explain taxonomies to those new to the concept, we often reference e-commerce websites, such as Amazon.com, with which they are likely familiar.

Companies that sell products, however, can further improve product management and sales if they implement customer-facing taxonomies along with other semantic structures, such as metadata sourced from other systems and ontologies, as a part of a larger information/data management strategy. The best way to do this is to take a semantic layer approach, in which the semantic components (taxonomies, metadata, glossaries, ontology, etc.) connect to each other and connect across different systems and data repositories. 

Uses of Product Taxonomies and Metadata

Product taxonomies have multiple uses beyond browsing through product categories on an e-commerce website. Both customers and product managers may benefit from taxonomies in a variety of ways.

Sometimes, users want to browse and explore types of products, and that is when a hierarchical taxonomy of product categories is most useful. Other times, users know the product they want and would like to see if the e-commerce vendor has the specifications they desire. In such cases, the users enter a description in the search box and then refine their search results by selecting among various attributes, which are managed as product metadata and are presented to the user as a faceted form of taxonomy. Attributes may include size, color, material, and category-specific features. 
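
As a simplified sketch of how faceted refinement works over product metadata, the snippet below filters a handful of illustrative product records by the facet values a user has selected; the field names and values are assumptions for this example, not a particular platform's schema.

# Illustrative product records; in practice these would come from a PIM or search index.
products = [
    {"name": "Leather Jacket",   "category": "Clothing",  "size": "M",  "color": "Brown", "material": "Leather"},
    {"name": "Cotton T-Shirt",   "category": "Clothing",  "size": "M",  "color": "Blue",  "material": "Cotton"},
    {"name": "Oak Coffee Table", "category": "Furniture", "size": None, "color": "Brown", "material": "Wood"},
]

def apply_facets(items, selections):
    """Keep only items whose metadata matches every selected facet value."""
    return [
        item for item in items
        if all(item.get(facet) == value for facet, value in selections.items())
    ]

# A shopper browsing Clothing who then refines by material:
results = apply_facets(products, {"category": "Clothing", "material": "Leather"})
print([p["name"] for p in results])  # ['Leather Jacket']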

Taxonomies also support user discovery of new or related products. Users search when they know what they want, and browse when they have an idea or want to better understand the overall offerings available. “Discovery,” alternatively, refers to when users find things that they did not initially look for but are still of interest. Retailers can sell more products when they support discovery. This can be done in different ways: 

  • By properly displaying taxonomy category names and allowing users to navigate up and down the hierarchy;
  • By including and enabling “related category” relationships in the user interface to let users navigate laterally across the taxonomy to find related products;
  • By implementing a supplemental taxonomy, such as one for product function rather than just product type, which lets users discover different products that serve the same purpose.

Taxonomies and metadata for products are not just for the customers. Product vendors need to manage products by means of their metadata for various purposes: purchasing from suppliers or wholesalers, controlling inventory, fulfilling orders, or updating images of products. While detailed metadata is key, category taxonomies are also useful, such as for identifying closely related substitute products or vendors for the same product line.

Uses of Ontologies for E-Commerce

An ontology extends a taxonomy by providing greater semantic enrichment in the forms of custom relationships and custom properties. Custom relationships between taxonomy terms or categories go beyond "broader/narrower" and generic "related," and may include relationships for products such as "goes with," "complements," "has parts," "has add-ons," or "has optional services." Custom properties can be used as attributes for terms or categories, comprising metadata and controlled values, such as size, which can be applied to filter product search results.

Custom relationships between product categories support more options for discovery than a basic taxonomy can. Not everyone agrees on what "related" means, and users may not agree with what someone else suggests is related. Defined relationships based on an ontology, such as "complements" or "has add-ons," let the user discover the types of related products of interest. Custom relationships may also link product categories to related services offered, such as product installation and product support, and to different types of content about products.

For custom attributes, each search filter/refinement is a metadata property with controlled values, and the metadata properties and values available depend on the product category. For example, “material” is an attribute for clothing, accessories, and furniture, but it is not for consumer electronics. Furthermore, the types of material available for clothing are not the same as for furniture, and they are not even the same for all types of clothing. Leather, for example, may be available for jackets but not for shirts. This can become quite complex to manage, but an ontology, which systematically links properties with categories, manages this task well.
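
Below is a minimal sketch of how an ontology can systematically link attributes and allowed values to categories, using the material example above; the namespace, class names, and relationship names (such as appliesToCategory and allowsMaterial) are illustrative assumptions.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

SHOP = Namespace("http://example.org/shop#")  # illustrative namespace for this sketch

onto = Graph()
onto.bind("shop", SHOP)

# "material" applies to Clothing and Furniture, but not to ConsumerElectronics.
onto.add((SHOP.material, RDF.type, RDF.Property))
onto.add((SHOP.material, SHOP.appliesToCategory, SHOP.Clothing))
onto.add((SHOP.material, SHOP.appliesToCategory, SHOP.Furniture))

# Allowed values can also differ per category: leather for jackets, but not for shirts.
onto.add((SHOP.Jacket, RDFS.subClassOf, SHOP.Clothing))
onto.add((SHOP.Shirt, RDFS.subClassOf, SHOP.Clothing))
onto.add((SHOP.Jacket, SHOP.allowsMaterial, SHOP.Leather))
onto.add((SHOP.Jacket, SHOP.allowsMaterial, SHOP.Cotton))
onto.add((SHOP.Shirt, SHOP.allowsMaterial, SHOP.Cotton))

# A storefront or PIM can now ask which facet values to show for a given category.
print(list(onto.objects(SHOP.Jacket, SHOP.allowsMaterial)))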

Related to discovery is recommendation, which directly presents the recommended related products to the user. There are different kinds of recommendation methods. Common methods that base recommendations on past searches by the customer or other customers are not as beneficial if the customer has already purchased the searched product and does not need more. Recommendations based on the custom relationships of an ontology, however, such as “goes with,” are more useful. Recommendations may also be based on certain attributes, such as products with the same style or pattern.  
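
As a brief sketch of how such relationships can drive recommendations, the snippet below stores a few "goes with" links and queries them for a single product; the product data and the shop:goesWith property are invented for illustration.

from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
    @prefix shop: <http://example.org/shop#> .
    shop:CoffeeMaker shop:goesWith shop:CoffeeGrinder , shop:PaperFilters .
    shop:CoffeeMaker shop:hasAddOn shop:ExtendedWarranty .
""")

# Recommend items that explicitly "go with" the product just viewed or purchased.
query = """
    PREFIX shop: <http://example.org/shop#>
    SELECT ?recommended
    WHERE { shop:CoffeeMaker shop:goesWith ?recommended . }
"""
for row in g.query(query):
    print(row.recommended)  # CoffeeGrinder and PaperFilters, but not the warranty add-on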

Different ways people search for products - on a mobile device or desktop.

Connecting Product Systems

Multiple systems within an organization may deal with product data, metadata, and taxonomies. The most common are web content management systems (CMSs) for the e-commerce website, product information management (PIM) systems for the backend management of all product data, and digital asset management (DAM) systems for product images and videos. Some organizations also have data catalogs, master data management systems, and media asset management systems, and product data may also be stored in a customer relationship management (CRM) system used by sales and marketing teams. Meanwhile, any product technical documentation is likely stored in another CMS. If an ontology is in use to model product data, then it is likely to be managed, along with a taxonomy, in a dedicated taxonomy/ontology management system.

The problem is that each of these different systems tends to be siloed, so their data and metadata are separate and thus neither consistent nor in sync. Products may have slightly different names in different systems, and the systems may have slightly different metadata property names and even varying metadata values. For example, one system could have category sizes of small, medium, and large, while another uses numerical sizes for the same product category. Taxonomies could vary even more than the metadata. One system may have a flat list of product categories, whereas another system has a 2-level hierarchy of categories and subcategories. It is typical to have different taxonomies and metadata in different systems because they support different users and use cases.

If the same data in these different systems is not described consistently and not connected, incomplete and missing data about product supplies and sales can lead to problems such as poor decisions and missed opportunities. Trying to execute consecutive searches in multiple systems is very inefficient and is also prone to overlooking information. Finally, incomplete or inaccurate product information can result in a poor user experience for customers.

A semantic layer framework provides a method to link data in different systems with a shared common metadata set and taxonomy.

Semantic Layer Benefits

A semantic layer is a standardized framework that organizes and abstracts organizational knowledge and data (structured, unstructured, and semi-structured) and serves as a data connector for all organizational knowledge assets. A semantic layer enables data federation and virtualization of semantic labels or rules to capture and connect data based on business or domain meaning and value. It’s a method to bridge content and data silos through a structured and consistent approach to connecting, instead of consolidating, data. It is called a “layer” because in the larger framework, it is a middle layer between the content/data repositories and one or more front-end applications for users to search, browse, analyze, or receive recommendations of information.

There are different approaches to implementing a semantic layer, but the most common is a "metadata-first" logical architecture, which creates a logical layer that abstracts the underlying data sources by focusing on the metadata. Since product information is rich in metadata, this approach is most suitable for linking product systems and their data.
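
Here is a simplified sketch of the metadata-first idea: metadata from a PIM record and a DAM asset are linked to the same product concept, so a single query can answer a question that would otherwise require searching both systems. All identifiers, properties, and values in this example are hypothetical.

from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
    @prefix ex: <http://example.org/product#> .

    # Metadata from the PIM (the product record itself stays in the PIM).
    ex:sku1042 ex:isRecordOf ex:TrailRunningShoe ;
               ex:availability "In stock" .

    # Metadata from the DAM (the image file itself stays in the DAM).
    ex:asset778 ex:depicts ex:TrailRunningShoe ;
                ex:assetType "Hero image" .
""")

# One query across both systems' metadata, anchored on the shared product concept.
query = """
    PREFIX ex: <http://example.org/product#>
    SELECT ?record ?availability ?asset
    WHERE {
        ?record ex:isRecordOf ex:TrailRunningShoe ; ex:availability ?availability .
        ?asset  ex:depicts    ex:TrailRunningShoe .
    }
"""
for row in g.query(query):
    print(row.record, row.availability, row.asset)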

Systems connected through a semantic layer offer various benefits. For managing products and e-commerce, these include the following:

  • When PIM or fulfillment systems are connected to the e-commerce platform, new products and product updates can be added more quickly, and specific product availability can be indicated in real time, rather than later; 
  • When a DAM is connected to the e-commerce platform, product images can easily and quickly be refreshed with the latest versions, which is especially beneficial for seasonal items and promotions;
  • When a CRM and an e-commerce platform are connected, salespeople know all the product details, including the customer-facing product name, and can better facilitate sales to prospects;
  • When a technical documentation CMS and an e-commerce platform are connected, customers have access to product data sheets, and customer support representatives can better serve customers.

A semantic layer allows an organization to build applications for users to better access and interact with various sets of connected data and content. For product information, such applications include search interfaces that seamlessly integrate product categories and attributes along with add-on services, recommendation systems for customers to discover products, chatbots for customers to get support, and data dashboards for product managers to track product data.

The post How Semantic Layers Support Product Search and Discovery appeared first on Enterprise Knowledge.

]]>