AI Strategy Articles - Enterprise Knowledge
http://enterprise-knowledge.com/tag/ai-strategy/ (feed generated Mon, 17 Nov 2025)

AI Readiness Assessment, Benchmarking & Strategy
https://enterprise-knowledge.com/ai-readiness-assessment-benchmarking-strategy/ (Thu, 17 Apr 2025)

Many organizations are looking for a tailored framework to start their AI journey and to prioritize potential projects based on relative effort and estimated return. EK’s strategy approach assesses five core factors, woven into the fabric of any organization, that together unify all aspects of operationalizing AI, resulting in practical recommendations and an actionable program roadmap.

Approach

EK will evaluate your organization on the following five components of Enterprise AI, providing detailed reports on your assessed current state, desired target state, and customized roadmap activities to reach your target:

  • Organizational Readiness
  • Current State of Data & Content
  • Technical Capabilities
  • Skill Sets & Roles
  • Operations & Sustainability

EK will create a detailed design and framework based on prioritized needs, including customized AI models and a solutions architecture that leverage secure and sustainable AI tailored to your organization’s core data, content, and systems. We will then develop a customized, iterative, task-based plan to achieve organizational AI use cases, and implement prioritized pilots selected based on your organization’s pain points, pilot value, and technical complexity.

Engagement Outcomes

By the end of the AI Readiness, Benchmarking & Strategy engagement, your organization will have:

  • A deeper understanding of the building blocks needed to leverage AI across the enterprise.
  • A completed assessment that will uncover existing gaps in any of the five core AI dimensions, enabling you to set clear organizational priorities to address them.
  • Short-term, mid-term, and long-term goals for establishing, sustaining, and evolving your AI maturity.
  • Measurable success criteria and key performance indicators (KPIs) for tracking progress over time.
  • A roadmap and AI pilots backlog with fully customized, iterative, task-based plans to achieve your AI transformation, alongside considerations for making decisions that will prevent the accumulation of future technical debt.

Women’s Health Foundation – Semantic Classification POC
https://enterprise-knowledge.com/womens-health-foundation-semantic-classification-poc/ (Thu, 10 Apr 2025)


The Challenge

A humanitarian foundation focusing on women’s health faced a complex problem: determining the highest impact decision points in contraception adoption for specific markets and demographics. Two strategic objectives drove the initiative—first, understanding the multifaceted factors (from product attributes to social influences) that guide women’s contraceptive choices, and second, identifying actionable insights from disparate data sources. The key challenge was integrating internal survey response data with internal investment documents to answer nuanced competency questions such as, “What are the most frequently cited factors when considering a contraceptive method?” and “Which factors most strongly influence adoption or rejection?” This required a system that could not only ingest and organize heterogeneous data but also enable executives to visualize and act upon insights derived from complex cross-document analyses.

 

The Solution

To address these challenges, the project team developed a proof-of-concept (POC) that leveraged advanced graph technology combined with AI-augmented classification techniques. 

The solution was implemented across several workstreams:

Defining System Functionality
The initial phase involved clearly articulating the use case. By mapping out the decision landscape—from strategic objectives (improving modern contraceptive prevalence rates) to granular insights from user research—the team designed a tailored taxonomy and ontology for the women’s health domain. This semantic framework was engineered to capture cultural nuances, local linguistic variations, and the diverse attributes influencing contraceptive choices.

Processing Existing Data
With the functionality defined, the next phase involved transforming internal survey responses and investment documents into a unified, structured format. An AI-augmented classification workflow was deployed to extract tacit knowledge from survey responses. This process was supported by a stakeholder-validated taxonomy and ontology, allowing raw responses to be mapped into clearly defined data classes. This robust data processing pipeline ensured that quantitative measures (like frequency of citation) and qualitative insights were captured in a cohesive base graph.
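To make the workflow concrete, here is a minimal Python sketch of the classification step. The taxonomy labels and keyword cues are invented for illustration; in the actual project, an AI-augmented (LLM-driven) classifier and a stakeholder-validated taxonomy performed this mapping.

```python
# Hypothetical taxonomy of decision factors; the real taxonomy was
# stakeholder-validated, not hard-coded keyword lists.
TAXONOMY = {
    "Cost": ["price", "afford", "expensive", "cost"],
    "Side Effects": ["side effect", "nausea", "headache"],
    "Social Influence": ["friend", "family", "community", "partner"],
}

def classify_response(text: str) -> list[str]:
    """Tag a raw survey response with every matching taxonomy class."""
    text = text.lower()
    return [label for label, cues in TAXONOMY.items()
            if any(cue in text for cue in cues)]

responses = [
    "My partner and friends recommended it, but it was too expensive.",
    "I stopped because of the headaches.",
]
tagged = [(r, classify_response(r)) for r in responses]
```

Once each response carries taxonomy classes, both the quantitative signal (citation frequency) and the qualitative detail (the original text) can flow into the base graph together.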

Building the Analysis Model
The core of the solution was the creation of a Product Adoption Survey Base Graph. Processed data was converted into RDF triples using a rigorous ontology model, forming the base graph designed to answer competency questions via SPARQL queries. While this model laid the foundation for revealing correlations and decision factors, the full production of the advanced analysis graph—designed to incorporate deeper inference and reasoning—remained as a future enhancement.
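As a rough illustration of how a base graph answers a competency question, the sketch below counts factor citations over toy (subject, predicate, object) triples. The real project stored RDF and ran the equivalent SPARQL aggregation; the node names here are invented.

```python
from collections import Counter

# Toy base graph as (subject, predicate, object) triples.
triples = [
    ("response1", "cites_factor", "Cost"),
    ("response1", "cites_factor", "Side Effects"),
    ("response2", "cites_factor", "Cost"),
    ("response3", "cites_factor", "Social Influence"),
    ("response3", "cites_factor", "Cost"),
]

def most_cited_factors(graph):
    """Answer 'What are the most frequently cited factors?' —
    roughly the SPARQL query:
      SELECT ?factor (COUNT(?r) AS ?n)
      WHERE { ?r :cites_factor ?factor }
      GROUP BY ?factor ORDER BY DESC(?n)
    """
    counts = Counter(o for s, p, o in graph if p == "cites_factor")
    return counts.most_common()
```

Calling `most_cited_factors(triples)` ranks "Cost" first, mirroring how the base graph surfaces the dominant decision factors.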

Handoff of Analysis Graph Production and Frontend Implementation
Due to time constraints, the production of the comprehensive analysis graph and the implementation of the interactive front end were transitioned to the client. Our team delivered the base graph and all necessary supporting documentation, providing the client with a solid foundation and a detailed roadmap for further development. This handoff ensures that the client’s in-house teams can continue productionalizing the analysis graph and integrate it with their BI dashboard for end-user access.

Providing a Roadmap for Further Development
Beyond the initial POC, a clear roadmap was established. The next steps include refining the AI classification workflow, fully instantiating the analysis graph with enhanced reasoning capabilities, and developing the front end to expose these insights via a business intelligence (BI) dashboard. These tasks have been handed off to the client, along with guidance on leveraging enterprise graph database licenses and integrating the solution within existing knowledge management frameworks.

 

The EK Difference

A standout feature of this project is its novel, generalizable technical architecture:

Ontology and Taxonomy Design:
A custom ontology was developed to model the women’s health domain—incorporating key decision factors, cultural influences, and local linguistic variations. This semantic backbone ensures that structured investment data and unstructured survey responses are harmonized under a common framework.

AI-Augmented Classification Pipeline:
The solution leverages state-of-the-art language models to perform the initial classification of survey responses. Supported by a validated taxonomy, this pipeline automatically extracts and tags critical data points from large volumes of survey content, laying the groundwork for subsequent graph instantiation, inference, and analysis.

Graph Instantiation and Querying:
Processed data is transformed into RDF triples and instantiated within a dedicated Product Adoption Survey Base Graph. This graph, queried via SPARQL through a GraphDB workbench, offers a robust mechanism for cross-document analysis. Although the full analysis graph is pending, the base graph effectively supports the core competency questions.


Guidance for BI Integration:
The architecture includes a flexible API layer and clear documentation that maps graph data into SQL tables. This design is intended to support future integration with BI platforms, enabling real-time visualization and executive-level decision-making.

 

The Results

The POC delivered compelling outcomes despite time constraints:

  • Actionable Insights:
    The system generated new insights by identifying frequently cited and impactful decision factors for contraceptive adoption, directly addressing the competency questions set by the Women’s Health teams.
  • Improved Data Transparency:
    By structuring tribal knowledge and unstructured survey data into a unified graph, the solution provided an explainable view of the decision landscape. Stakeholders gained visibility into how each insight was derived, enhancing trust in the system’s outputs.
  • Scalability and Generalizability:
    The technical architecture is robust and adaptable, offering a scalable model for analyzing similar survey data across other health domains. This approach demonstrates how enterprise knowledge graphs can drive down the total cost of ownership while enhancing integration within existing data management frameworks.
  • Strategic Handoff:
    Recognizing time constraints, our team successfully handed off the production of the comprehensive analysis graph and the implementation of the front end to the client. This strategic decision ensured continuity and allowed the client to tailor further development to their unique operational needs.

Humanitarian Foundation – SemanticRAG POC
https://enterprise-knowledge.com/humanitarian-foundation-semanticrag-poc/ (Wed, 02 Apr 2025)


The Challenge

A humanitarian foundation needed to demonstrate the ability of its Graph Retrieval Augmented Generation (GRAG) system to answer complex, cross-source questions. In particular, the task was to evaluate the impact of foundation investments on strategic goals by synthesizing information from publicly available domain data, internal investment documents, and internal investment data. The challenge lay in connecting diverse, unstructured information while ensuring that the insights generated were precise, explainable, and actionable for executive stakeholders.

 

The Solution

To address these challenges, the project team developed a proof-of-concept (POC) that leveraged advanced graph technology and a semantic RAG (Retrieval Augmented Generation) agentic workflow. 

The solution was built around several core workstreams:

Defining System Functionality

The initial phase focused on establishing a clear use case: enabling the foundation to query its data ecosystem with natural language questions and receive accurate, explainable answers. This involved mapping out a comprehensive taxonomy and ontology that could encapsulate the knowledge domain of investments, thereby standardizing how investment documents and data were interpreted and interrelated.

Processing Existing Data

With functionality defined, the next step was to ingest and transform various data types. Structured data from internal systems and unstructured investment documents were processed and aligned with the newly defined ontology. Advanced techniques, including semantic extraction and graph mapping, were employed to ensure that all data—regardless of source—was accessible within a unified graph database.

Building the Chatbot Model

Central to the solution was the development of an investment chatbot that could leverage the graph’s interconnected data. This was approached as a cross-document question-answering challenge. The model was designed to predict answers by linking query nodes with relevant data nodes across the graph, thereby addressing competency questions that a naive retrieval model would miss. An explainable AI component was integrated to transparently show which data points drove each answer, instilling confidence in the results.
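A minimal sketch of the explainability idea, using an invented graph schema rather than the foundation’s actual model: every answer is returned together with the source documents that support it, so users can see exactly which data drove the result.

```python
# Toy graph: each investment node carries its properties and provenance.
# Node and property names are illustrative, not the real ontology.
graph = {
    "InvestmentA": {"funds": "MalariaProgram", "source_doc": "report_2023.pdf"},
    "InvestmentB": {"funds": "MalariaProgram", "source_doc": "memo_q2.docx"},
    "InvestmentC": {"funds": "WaterProgram", "source_doc": "grant_11.pdf"},
}

def answer_with_provenance(program: str) -> dict:
    """Which investments fund this program, and which documents say so?"""
    hits = [(inv, props["source_doc"])
            for inv, props in graph.items()
            if props["funds"] == program]
    return {
        "answer": [inv for inv, _ in hits],
        "evidence": [doc for _, doc in hits],
    }
```

Pairing each answer with its evidence list is what lets the front end show users the source documents behind a chatbot response.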

Deploying the Whole System in a Containerized Web Application Stack

To ensure immediate usability, the POC was deployed, along with all of its dependencies, in a user-friendly, portable web application stack. This involved creating a dedicated API layer to interface between the chatbot and the graph database containers, alongside a custom front end that allowed executive users to interact with the system and view detailed explanations of the generated answers and the source documents upon which they were based. Early feedback highlighted the system’s ability to connect structured and unstructured content seamlessly, paving the way for broader adoption.

Providing a Roadmap for Further Development

Beyond the initial POC, the project laid out clear next steps. Recommendations included refining the chatbot’s response logic, optimizing performance (notably in embedding and document chunking), and enhancing user experience through additional ontology-driven query refinements. These steps are critical for evolving the system from a demonstrative tool to a fully integrated component of the foundation’s data management and access stack.

 

 

The EK Difference

A key differentiator of this project was its adoption of standards-based semantic graph technology and its highly generalizable technical architecture. 

The architecture comprises:

Investment Ontology and Data Mapping:

A rigorously defined ontology underpins the entire system, ensuring that all investment-related data—from structured datasets to narrative reports—is harmonized under a common language. This semantic backbone supports both precise data integration and flexible query interpretation.

Graph Instantiation Pipeline:

Investment data is transformed into RDF triples and instantiated within a robust graph database. This pipeline supports current data volumes and is scalable for future expansion. It includes custom tools to convert CSV files and other structured datasets into RDF and mechanisms to continually map new data into the graph.
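As an illustration of the kind of conversion such a pipeline performs, here is a small Python sketch that turns CSV rows into N-Triples. The URI base and the column-to-predicate mapping are placeholders, not the project’s actual ontology.

```python
import csv
import io

def csv_to_ntriples(csv_text: str, base: str = "http://example.org/") -> list[str]:
    """Convert each CSV row into N-Triples: one triple per non-ID column.
    Assumes an 'id' column identifies the subject; mapping is illustrative."""
    rows = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for row in rows:
        subject = f"<{base}investment/{row['id']}>"
        for col, value in row.items():
            if col == "id":
                continue
            triples.append(f'{subject} <{base}{col}> "{value}" .')
    return triples

sample = "id,program,amount\n42,Malaria,100000\n"
# csv_to_ntriples(sample) yields two triples for the single row above.
```

A production pipeline would add datatype handling, URI escaping, and ontology-driven predicate mapping, but the shape (tabular rows in, triples out) is the same.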

Semantic RAG Agentic Workflow and API:

The solution utilizes a semantic RAG approach to navigate the complexities of cross-document query answering. This agentic workflow is designed to minimize unhelpful hallucinations, ensuring that each answer is traceable back to the underlying data. The integrated API provides a seamless bridge between the front-end chatbot and the back-end graph, enabling real-time, explainable responses.

Investment Chatbot Deployment:

Built as a central interface, the chatbot exemplifies how graph technology can be operationalized to address executive-level investment queries. It is fine-tuned to reflect the foundation’s language and domain knowledge, ensuring that every answer is accurate and contextually relevant.

 

The Results

The POC successfully demonstrated that GRAG could answer complex questions by:

  • Delivering coherent and explainable recommendations that bridged structured and unstructured investment data.
  • Significantly reducing query response time through a tightly integrated semantic RAG workflow.
  • Providing a transparent AI mapping that allowed stakeholders to see exactly how each answer was derived.
  • Establishing a scalable architecture that can be extended to support a broader range of use cases across the foundation’s data ecosystem.

This project underscores the transformative potential of graph technology in revolutionizing how investment health is assessed and how strategic decisions are informed. With a clear roadmap for future enhancements, the foundation now has a powerful, next-generation tool for deep, context-driven analysis of its investments.


Aligning an Enterprise-Wide Information Management (IM) Roadmap for a Global Energy Company
https://enterprise-knowledge.com/aligning-an-enterprise-wide-information-management-im-roadmap-for-a-global-energy-company/ (Wed, 26 Feb 2025)


The Challenge

A global energy company sought support in detailing and aligning their information management (IM) team’s roadmaps for all four of their IM products – covering all managed applications, services, projects, and capabilities – to help them reach their target state vision of higher levels of productivity, more informed decision-making, and quality information made available to all of their users.

They were facing the following challenges:

  • Recently-created products with immature, internally-focused roadmaps, resulting in missed opportunities for incorporation of industry trends and standards;
  • Limited alignment across products, resulting in unnecessary duplicative work, under-standardization, and a lack of business engagement;
  • Varying levels of granularity and detail across product roadmaps, resulting in some confusion around what tasks entail;
  • Inconsistently defined objectives and/or business cases, resulting in unclear task goals; and
  • Isolated, uncirculated efforts to harness artificial intelligence (AI), resulting in a fragmented AI strategy and time lost performing tasks manually that could have been automated.

 

The Solution

The energy company engaged Enterprise Knowledge (EK) over a 3.5-month period to refine their product roadmaps and to align and combine them into a unified 5-year roadmap for the entire portfolio. In addition, the company tasked EK with developing a supplemental landscape design diagram to visualize the information management team’s technical scope, strengthening both the delivery of each product and its value to the company.

EK began by analyzing existing roadmaps and reviewing them with the product managers, identifying the target state for each. We facilitated multiple knowledge gathering sessions, conducted system demos, and analyzed relevant content items to understand the strengths, challenges, and scope of each product area, as well as the portfolio as a whole.

EK then provided recommendations for additional tasks to fill observed gaps and opportunities to consolidate overlap, aligning the roadmaps across 5 recommended KM workstreams:

  • Findability & Search Insights: Provide the business with the ability to find and discover the right information at the time of need.
  • Graph Modeling: Develop a graph model to power search, analytics, recommendations and more for the IM team.
  • Content & Process Governance: Establish and maintain content, information, and data governance across the company to support reuse and standardization.
  • Security & Access Management: Support the business in complying with regulatory requirements and security considerations to safeguard all IM information assets.
  • Communications & Adoption: Establish consistent processes and methods to support communication with the business and promote the adoption of new tools/capabilities.

To strengthen and connect the organization’s AI strategy, EK threaded automation throughout and incorporated it within each workstream wherever possible and/or feasible. The goal of this was to improve business efficiency and productivity, as well as to move the team one step closer to making IM “invisible.” Each task was also assigned a type (foundational, MVP, enhancement, operational support), level of effort (low, medium, high), business value (1 (low) to 5 (high) on a Likert scale), and ownership (portfolio vs. individual products). EK marked which tasks already existed in the product roadmaps and which ones were newly recommended to supplement them. By mapping the tasks to the 5 workstreams in both a visual roadmap diagram and an accompanying spreadsheet, the IM team was able to see where tasks were dependent on each other and where overlap was occurring across the portfolio.

An abstracted view of one task from each product’s roadmap, demonstrating how the task type and prioritization factors were assigned for readability.
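The task attributes described above can be sketched as a small data structure. The attribute scales mirror the case study, but the scoring rule (value divided by effort) is our own illustration, not EK’s actual prioritization formula.

```python
from dataclasses import dataclass

@dataclass
class RoadmapTask:
    name: str
    task_type: str   # foundational | MVP | enhancement | operational support
    effort: int      # 1 = low, 2 = medium, 3 = high
    value: int       # business value, 1 (low) to 5 (high)
    ownership: str   # "portfolio" or an individual product name

    def score(self) -> float:
        """Simple value-per-effort ratio for ordering a backlog (illustrative)."""
        return self.value / self.effort

# Invented example tasks, not from the client's roadmaps.
tasks = [
    RoadmapTask("Graph model pilot", "MVP", 2, 5, "portfolio"),
    RoadmapTask("Search tuning", "enhancement", 1, 3, "Product A"),
    RoadmapTask("Metadata cleanup", "foundational", 3, 4, "portfolio"),
]
backlog = sorted(tasks, key=RoadmapTask.score, reverse=True)
```

Encoding tasks this way makes it straightforward to sort, filter by ownership, or flag cross-product dependencies in a spreadsheet export.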

Additionally, as supplemental material to the roadmaps, EK developed a diagram to visualize the team’s technical landscape and provide a reference point for connections between tools and capabilities within the portfolio and the company’s environment, as well as to show dependencies between the products as mapped to EK’s industry standard framework (including layers encompassing user interaction, findability and metadata, and governance and maintenance). The diagram delineated between existing applications and platforms, planned capabilities that haven’t been put in place yet, and recommended capabilities that correspond to EK’s suggested future state tasks from the roadmaps, and clearly marked AI-powered/-assisted capabilities.

 

 

The EK Difference

Throughout the engagement, time with stakeholders was difficult to find. To make sure we were able to engage the right stakeholders, EK developed a 15-minute “roadshow” and interview structure with targeted questions to optimize the time we were able to schedule with participants all across the globe. Our client team praised this during project closeout, claiming that the novel approach enabled more individuals with influence to get in the room with EK, generating more organic awareness of and excitement for the roadmap solutions.

Another key ingredient EK brought to the table was our expertise and insight into AI solutioning, tech and market trends, and success stories from other companies in the energy industry. We injected AI and other automation into the roadmaps wherever we identified the opportunity – prioritizing a strategy that focused on secure and responsible AI solutions, data preparedness, and long-term governance – and were even able to recommend a backlog of 10 unique pilots (with varying levels of automation, depending on the targeted subject and product area) to help the company determine their next steps.

 

The Results

As a result of our roadmap alignment efforts with the IM team, each product manager now has more visibility into what the other products are doing and where they may overlap with, complement, or depend on their own efforts, enabling them to better plan for the future. The Unified Portfolio Roadmap, spanning 5 years, provides the energy company with a single, aligned view of all IM initiatives, accompanied by four Product Roadmaps and a Technical Landscape Diagram, and establishes a balance between internal business demand, external technologies, strategic AI, and best-in-class industry developments.

The energy company also chose to implement two of the pilots EK had recommended – focused on reducing carbon emissions through AI-assisted content deduplication and developing a marketing package to promote their internal business management system – to begin operationalizing their roadmaps immediately.


The Top 5 Reasons for a Semantic Layer
https://enterprise-knowledge.com/the-top-5-reasons-for-a-semantic-layer/ (Wed, 14 Feb 2024)

Implementing a Semantic Layer has become a critical strategic priority for many of our most advanced data clients. A Semantic Layer connects all organizational knowledge assets, including content items (files, videos, media, etc.), via a well-defined and standardized semantic framework. If you are unfamiliar with Semantic Layers, read Lulit Tesfaye’s blog What is a Semantic Layer, which provides a great explanation of the Semantic Layer and how it can be implemented. There are many good reasons for organizations to implement Semantic Layers; my top five are below.

Improved Findability and Confidence in Data

Data continues to grow at an alarming rate. Leaders want their organizations to be data-driven, but their direct reports need to find the data they require and have confidence in it. A Semantic Layer helps with both of these issues. It uses a graph database and the metadata from your data catalog to offer a best-in-class search that returns data in the context of the business need. For example, if you are looking for all the data sets containing information about the average purchase price of a product, a graph-based search would first explain what the purchase price is and then show all of the data sets that contain purchase transactions with price information. Many of our retail clients have multiple data feeds from different purchasing systems. Showing all of this information together helps ensure that none of the feeds is missed.

The information returned in this type of graph-based custom search is not limited to data sets. We have one client who uses the graph to capture the relationships between a dashboard, the dashboard objects, and the data tables that populate each component. Their graph-based search returns not only data sets but also the dashboards and dashboard objects that display results. Their IT staff use this to develop new dashboards with the correct data sets, and their data scientists use it to prioritize the data sets that power the dashboards they already use.

Google has been using graph search for years. Now, this same technology is available in our data environments. 

Enabling AI for Data

AI and ChatGPT are all over the news these days. AI is a budget priority for every company executive I speak with. One of the most exciting use cases for generative AI is the databot. Organizations that implement databots give their business users easy access to the metrics they need to do their jobs. Rather than trying to build dashboards that anticipate users’ needs, databots allow business users to ask questions of any level of complexity and get answers without knowing or understanding anything about the data behind the result. Software companies in the Semantic Layer space are already demonstrating how business users can ask complicated natural language questions of their data and get answers back.

Databots require integration with a generative AI tool (an LLM). This integration will not work well without a Semantic Layer. The Semantic Layer, specifically the metadata, taxonomy, and graph framework, provides the context LLM tools need to properly answer data-specific questions with organizational context. The importance of the Semantic Layer has been demonstrated in multiple studies. In one study, Juan Sequeda, Dean Allemang, and Bryan Jacob of data.world produced a benchmark showing how knowledge graphs affect the accuracy of question answering against SQL databases. You can see the results of this study here. Their benchmark evaluated how LLMs answered both high-complexity and low-complexity questions on both high- and low-schema data sets. The results are below.

  • Low Complexity/Low Schema, knowledge graph accuracy was 71.1% while the SQL accuracy was 25.5%
  • High Complexity/Low Schema, knowledge graph accuracy was 66.9% while the SQL accuracy was 37.4%
  • Low Complexity/High Schema, knowledge graph accuracy was 35.7% while the SQL accuracy was 0%
  • High Complexity/High Schema, knowledge graph accuracy was 38.7% while the SQL accuracy was 0%

As these stats show, organizations implementing a Semantic Layer are better equipped to integrate with an LLM. One of the most striking results is that the schema is much less important than the availability of a knowledge graph in question response accuracy. If your organization is looking to integrate the use of LLMs into your data environment, a Semantic Layer is critical.

Reporting Across Data Domains

The Semantic Layer uses a semantic framework (metadata, taxonomies, ontologies, and knowledge graphs) to map data and related data tools to the entities that business users care about. This approach creates a flexible and more reliable way to manage data across different domains, and it gives business users greater access to the information they need in a format that makes sense.

Reporting on metrics that cross data domains or systems continues to be challenging for large enterprises. Historically, these organizations have addressed this through complex ETL processes and rigid dashboards that attempt to align and aggregate the information for business users. This approach has several problems, including:

  • Slow or problematic ETL processes that erode trust in the information,
  • Over-reliance on a data expert to understand how the data comes together,
  • Problems with changing data over time, and 
  • Lack of flexibility to answer new questions.

Implementing a Semantic Layer addresses each of these issues. Taxonomies provide a consistent way to categorize data across domains. The taxonomies are implemented as metadata in the data catalogs so business users and data owners can quickly find and align information across their current sources. The Knowledge Graph portion of the Semantic Layer maps data sets and data elements to business objects. These maps can be used to pull information back dynamically without the need for ETL processes. When an ETL process is required for performance purposes, how the data is related is defined in the graph and not in the head of your data developers. ETL routines can be developed against the knowledge graph rather than in code. As the data changes, the map can be updated so that the processes that use that data reflect the new changes immediately. 
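As a sketch of graph-defined mapping replacing hard-coded ETL, the snippet below pulls a unified metric from two hypothetical point-of-sale feeds through a mapping table. In a real Semantic Layer, that mapping would live in the knowledge graph and be updated there as sources change; the system and field names here are invented.

```python
# Mapping from a business concept to each source system's field name.
# In practice this mapping is defined in the graph, not in code.
FIELD_MAP = {
    "sale_amount": {"pos_a": "txn_total", "pos_b": "sale_usd"},
}

def unified_sales(records_by_system: dict) -> list[dict]:
    """Align per-system sales records to one 'sale_amount' view,
    resolving each system's field name through the mapping."""
    rows = []
    for system, records in records_by_system.items():
        field = FIELD_MAP["sale_amount"][system]
        rows += [{"system": system, "sale_amount": r[field]} for r in records]
    return rows

data = {"pos_a": [{"txn_total": 10}], "pos_b": [{"sale_usd": 7}]}
unified = unified_sales(data)
```

When a source system renames a field, only the mapping changes; the reporting logic stays untouched, which is the point of defining relationships in the graph rather than in ETL code.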

We developed a Semantic Layer for a retail client. Once it was in place, they could report on sales transactions from six different point-of-sale systems (each with a different format), reporting that previously required time-consuming and complicated ETL processes. They were also able to expand their reporting to show links between third-party sales, store sales, and supply chain issues in a single dashboard. This was impossible before the Semantic Layer because they relied on a small set of developers and on dashboards that addressed only one domain at a time. Instead of constantly building and maintaining complex ETL routines that move data around, our client maps and defines the relationships in the graph and updates the graph or its metadata when changes occur. Business users are seeing more information than they ever have, and they have greater trust in what they see.

Improved Data Governance

Data governance is critical to providing business users with data they can confidently use for decision-making. The velocity and variety of today's data environments make controlling and managing that data seem almost impossible. The tools of the Semantic Layer are built to address the scale and complexity organizations face. Data catalogs use metadata and built-in workflows to let organizations manage similar data sets in similar ways. They also provide data lineage information so that users know how data is used and what has been done to the data files over time. Metadata-driven data catalogs give organizations a way to align similar data sets and a framework for managing them collectively rather than individually.

In addition to data catalogs, ontologies and knowledge graphs can aid in enterprise data governance. Ontologies identify data elements representing the same thing from a business standpoint, even if they come from different source locations or have different field names. Tying similar data elements together in a machine-readable way allows the system to enforce a consistent set of rules automatically. For example, at a large financial institution we worked with, a knowledge graph linked all fields that represented the open date for an account. The customer was a bank with investment accounts, bank accounts, and credit card accounts. Because the ontology linked these fields as account open dates, we could implement constraints ensuring that each field was always filled out, used a standard date format, and held a date in a reasonable timeframe. The ability to automate constraints across many related fields allows data administrators to scale their processes even as the data they collect continues to grow.
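The pattern above, one shared rule applied to every field the ontology links to a concept, can be sketched in a few lines. The field names and date bounds here are invented, and a production system would express such constraints in something like SHACL rather than hand-written Python:

```python
from datetime import date

# Illustrative ontology fragment: three differently named fields, each linked
# to the same AccountOpenDate concept. All field names are invented.
FIELD_CONCEPT = {
    "inv_acct_opened": "AccountOpenDate",
    "bank_open_dt": "AccountOpenDate",
    "card_start_date": "AccountOpenDate",
}

def validate(record):
    """Apply one shared set of constraints to every field the ontology
    identifies as an account open date."""
    errors = []
    for field, concept in FIELD_CONCEPT.items():
        if concept != "AccountOpenDate" or field not in record:
            continue
        value = record[field]
        if value is None:
            errors.append(f"{field}: value is required")
        elif not (date(1950, 1, 1) <= value <= date.today()):
            errors.append(f"{field}: date outside reasonable range")
    return errors

print(validate({"inv_acct_opened": None, "bank_open_dt": date(2099, 1, 1)}))
# → ['inv_acct_opened: value is required', 'bank_open_dt: date outside reasonable range']
```

Adding a fourth open-date field to a new system requires only one new ontology entry; the constraint logic is untouched.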

Stronger Security

The continual growth of data has made controlling access to data sets (a.k.a. entitlements) more challenging than ever. Sensitive data, like HR data, must be limited to those with a need to know. Licensed data may have contractual limits on the number of users and may not be permitted in your organization's data lake. Often, data is combined from multiple sources: what are the security rules for those new data combinations? The number of permutations and rules governing who can see what across an organization's data landscape is daunting.

The Semantic Layer improves the way data entitlements are managed using metadata. Metadata can capture the source of the data (for licensed data) as well as its type, so that sensitive data can be more easily found and flagged. Data administrators can use a data catalog to find licensed data and ensure proper access rules are in place. They can also find data about a sensitive topic, like salaries, and ensure that the proper security measures are applied. Data lineage, a common feature in catalogs, can also help identify when a newly combined data set needs to be secured and who should see it. Catalogs have gone a long way toward solving these security problems, but on their own they are insufficient for the growing security challenges.

Knowledge graphs augment the information about data stored in data catalogs to provide greater insight and inference of data entitlements. Graphs map relationships across data and those relationships can be used to identify related data sets that need similar security rules. Because the graph’s relationships are machine-readable, implementation of many of these security rules can be automated. Graphs can also identify how and where data sets are used to identify potential security mismatches. For example, a graph can identify situations where data sets have different security requirements than the dashboards that display them. These situations can be automatically flagged and exposed to data administrators who can proactively align the security between the data and the dashboard.
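The dashboard-versus-dataset mismatch check described above reduces to a comparison over graph edges. The sketch below uses toy data structures in place of a real graph query; the dashboard names, dataset names, and sensitivity levels are all invented:

```python
# Toy graph edges for the mismatch check described above: datasets carry a
# sensitivity level, dashboards carry an access level, and "displays" edges
# connect them. All names and levels are invented for this sketch.
RANK = {"public": 0, "internal": 1, "restricted": 2}
DATASET_LEVEL = {"store_sales": "internal", "salaries": "restricted"}
DISPLAYS = [  # (dashboard, dashboard access level, dataset shown)
    ("exec_dashboard", "internal", "store_sales"),
    ("hr_dashboard", "internal", "salaries"),
]

def security_mismatches():
    """Flag dashboards whose access level is weaker than a dataset they display."""
    return [(dash, ds) for dash, level, ds in DISPLAYS
            if RANK[level] < RANK[DATASET_LEVEL[ds]]]

print(security_mismatches())  # → [('hr_dashboard', 'salaries')]
```

In a real Semantic Layer, the same check would run as a graph query over the machine-readable relationships, and flagged pairs would be routed to data administrators.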

In Conclusion

Semantic Layers are a natural evolution of the recognition that metadata is a first-class citizen in the battle to get the right data to the right people at the right time. The combination of formal metadata and graphs gives data administrators and data users new ways to find, manage, and work with data.

The post The Top 5 Reasons for a Semantic Layer appeared first on Enterprise Knowledge.

The 5 Key Components of a Semantic Search Experience https://enterprise-knowledge.com/the-5-key-components-of-a-semantic-search-experience/ Wed, 06 Nov 2019 19:16:46 +0000

The post The 5 Key Components of a Semantic Search Experience appeared first on Enterprise Knowledge.

Semantic Search extends meaning and context to your otherwise run-of-the-mill search results. This future-ready phase of search seeks to apply machine-driven understanding of user intent, query context, and the relationships between words. We broke down the primary elements that make search ‘semantic’ in the following infographic to shed some light on the varying concepts and principles in play. 

The 5 key components to build the foundation for a future-ready search strategy are: action-oriented results, faceted taxonomy, knowledge graphs, context, and scale.

Applying any of the principles identified in the above infographic can upgrade your search strategy to a future-ready, semantic experience. Whether you think your search needs a simple update or is ready for a serious upgrade, we can help. EK offers a range of search-specific services that will produce actionable recommendations. Please feel free to contact us for more information.

Using Knowledge Graph Data Models to Solve Real Business Problems https://enterprise-knowledge.com/using-knowledge-graph-data-models-to-solve-real-business-problems/ Mon, 10 Jun 2019 19:35:28 +0000

The post Using Knowledge Graph Data Models to Solve Real Business Problems appeared first on Enterprise Knowledge.

A successful business today must possess the capacity to quickly glean valuable insights from massive amounts of data and information coming from diverse sources. The scale and speed at which companies are generating data and information, however, often makes this task seem overwhelming.

An Enterprise Knowledge Graph allows organizations to connect and show meaningful relationships between data regardless of type, format, size, or where it is located. This allows us to view and analyze an organization’s knowledge and data assets in a format that is understood by both humans and machines.

While most organizations have been built to organize and manage data by department and type, a Knowledge Graph allows us to view and connect data the way our brain relates and infers information to answer specific business questions, without making another copy of the data from its original sources.

How much of your data and information is currently dispersed across business units, departments, systems, and knowledge domains? How many clicks or reports do you currently navigate to find an answer to a single business problem or get relevant results to your search? Below, I will share a selection of real business problems that we are able to tackle more efficiently by using knowledge graph data models, as well as examples of how we have used knowledge graphs to better serve our clients.

Aggregate Disparate Data in Order to Spot Trends and Make Better Investment Decisions

A vast amount of the data we create and work with is unstructured, in the form of emails, webpages, video files, financial reports, images, etc. Our own assessments of organizations find that as much as 85% of an organization's information exists in an unstructured form. Organizing all of this data is a necessary undertaking for institutions large and small that want to extract meaning and value from their information. One way we have found to make this manageable is to leverage semantic models and technologies to automatically extract and classify unstructured text, making it machine-readable and machine-processable. This allows us to relate the classified content to other data sources in order to define relationships, understand patterns, and quickly obtain holistic insights on a given topic from varied sources, regardless of where the data and content live (business units, departments, systems, and locations).
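At its simplest, taxonomy-driven classification means matching a concept's preferred label and synonyms against raw text. The sketch below shows that idea only; the concepts and synonyms are invented, and a real pipeline would use NLP entity extraction rather than substring matching:

```python
# Minimal taxonomy-driven classifier: tag raw text with any concept whose
# preferred label or synonym appears in it. Concepts and synonyms are invented.
TAXONOMY = {
    "Supply Chain": ["supply chain", "logistics"],
    "Investment": ["investment", "funding"],
    "Retail": ["point of sale", "storefront"],
}

def classify(text):
    """Return the taxonomy concepts matched in a piece of unstructured text."""
    lowered = text.lower()
    return sorted(concept for concept, labels in TAXONOMY.items()
                  if any(label in lowered for label in labels))

print(classify("Funding decisions hinge on logistics and point-of-sale data."))
# → ['Investment', 'Supply Chain']
```

Note that the hyphenated "point-of-sale" is missed by naive substring matching, which is exactly why production classifiers lean on richer NLP and synonym handling.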

One of the largest supply chain clients we work with needed to give its business users a way to obtain quick answers from very large and varied data sets. The goal was to bring meaningful information and facts closer to the business to support funding and investment decisions. By extracting topics, places, people, and other entities from a given file, we were able to develop an ontology describing the key types of things business users were interested in and how they relate to each other. We mapped the various data sets to the ontology and leveraged semantic Natural Language Processing (NLP) capabilities to recognize user intent, link concepts, and dynamically generate the data queries that provide the response. This enabled non-technical users to uncover the answers to critical business questions such as:

  • Which of our products or services are most profitable and perform best?
  • Which investments are successful, and when are they successful?
  • How much of a given product did we deliver in a given timeframe?
  • Who were our most profitable customers last year?
  • How can we align products and services with the right experts, locations, delivery methods, and timing?
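To illustrate the intent-to-query step, here is a deliberately tiny sketch: one recognized question pattern is turned into a parameterized lookup over linked data. The product names, figures, and the single regex pattern are all invented; the real system used full semantic NLP rather than a regex:

```python
import re

# Toy intent handler: recognize one question pattern and turn it into a
# parameterized lookup over linked data. Products and figures are invented.
DELIVERIES = {("widgets", "2024"): 1500, ("widgets", "2023"): 1200,
              ("gears", "2024"): 300}

def answer(question):
    """Map a recognized natural-language pattern to a structured query."""
    m = re.search(r"how many (\w+) did we deliver in (\d{4})", question.lower())
    if not m:
        return None  # a real system would fall back to broader NLP
    product, year = m.groups()
    return DELIVERIES.get((product, year))

print(answer("How many widgets did we deliver in 2024?"))  # → 1500
```

The point is the separation of concerns: intent recognition produces parameters, and the graph (here, a dict) answers the structured query.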

Discover Hidden Facts in Data to Predict and Reduce Operational Risks

By allowing organizations to collect, integrate, and identify user interest and intent, ontologies and knowledge graphs build the foundations for Artificial Intelligence (AI) to allow organizations to analyze different paths jointly, describe their connectivity from various angles, and discover hidden facts and relationships through inferences in related content that would have otherwise gone unnoticed.

What this means for our large engineering and manufacturing partner is that, by connecting internal data to analyze relationships and further mining external data sources (e.g. social media, news, help-desk, forums, etc.), they were able to gain a holistic view of products and services to influence operational decisions. Examples include the ability to:

  • Predict breakdowns and detect service failures early so they can schedule preventive maintenance, minimize downtime, and maximize service life;
  • Maintain the right levels of operators, experts, and inventory;
  • Estimate the remaining life of an asset or determine the right warranty period; and
  • Prevent the risk of a negative brand image and damage to customers' lifetime loyalty.

Facilitate Employee Engagement, Knowledge Discovery, and Retention

Most organizations have accumulated vast amounts of structured and unstructured data that are not easy to share, use, or reuse among staff. This difficulty leads to diminished retention of institutional knowledge and to rework. Our work with a global development bank, for instance, was driven by the need to better disseminate information and expertise to staff so that projects would be more efficient and successful, and so that employees would have a simple knowledge-sharing tool for solving complex project challenges without rework or knowledge loss. We developed a semantic hub, leveraging a knowledge graph, that collects organizational content, user context, and project activities. This information powers a recommendation engine that suggests relevant articles and information when an email or calendar invite is sent on a given topic, or during searches on that topic, and it will eventually power a chatbot as part of a larger AI strategy. These outputs were also published on the bank's website to improve knowledge retention and to showcase expertise via Google recognition and search optimization. Using knowledge graphs based on this linked-data strategy enables the organization to connect all of their knowledge assets in a meaningful way to:

  • Increase the relevancy and personalization of search;
  • Enable employees to discover content across unstructured content types, such as webinars, classes, or other learning materials based on factors like location, interest, role, seniority level, etc.; and
  • Further facilitate connections between people who share similar interests, expertise, or location.
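The recommendation engine described above can be reduced to one core idea: rank content by how many taxonomy concepts it shares with the user's current context. This sketch uses invented article titles and concept tags, and plain set overlap in place of the richer graph signals a production system would use:

```python
# Toy recommender in the spirit of the semantic hub above: rank content by
# how many taxonomy concepts it shares with the user's current context.
# Titles and concept tags are invented.
ARTICLES = {
    "Credit modeling 101": {"risk", "modeling"},
    "Loan risk primer": {"risk", "lending"},
    "Irrigation case study": {"agriculture", "water"},
}

def recommend(user_concepts, top_n=2):
    """Return the top_n articles with the most concepts in common with the user."""
    scored = sorted(ARTICLES.items(),
                    key=lambda item: len(item[1] & user_concepts),
                    reverse=True)
    return [title for title, tags in scored[:top_n] if tags & user_concepts]

print(recommend({"risk", "modeling"}))
# → ['Credit modeling 101', 'Loan risk primer']
```

In the semantic hub, the user's concept set comes from their context (the topic of an email, a calendar invite, or a search query), so the same scoring serves every channel.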

Implement Scalable Data Management and Governance Models

The size, variety, and complexity of the data businesses are integrating make it difficult for IT and data management teams to keep up using a traditional data management structure. Most organizations currently struggle to efficiently map the ownership and usage of their various data sources, to track changes, and to determine the right levels of access and security. For example, we worked with a large US Federal Agency that needed a better way to manage the thousands of data sets it uses to develop economic models that drive US policy. We developed a semantic data model that captures data sets, their metadata, how they relate to one another, and information about how they are used. This information is stored in a triple store that sits behind a custom web application. Through this ontology, a front-end semantic search application, and an administrative dashboard, the model provided economists and the Agency:

  • A single unified data view that clearly represents the knowledge domains of the organization without copying or duplicating data from the authoritative, managed sources;
  • Machine-processable metadata from documents, images, and other unstructured content, as well as from relational databases, data lakes, and NoSQL sources, revealing relationships across data sets; and
  • An automated log of usage so that the Agency can better evaluate the internal value of a data set, which in turn allows it to negotiate better deals with data vendors.
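The registry-plus-usage-log idea in the last bullet can be sketched as follows. The data set names, vendors, and domain labels are invented, and a Counter stands in for the triple store's usage records:

```python
from collections import Counter

# Sketch of the data set registry and usage log described above. Data set,
# vendor, and domain names are invented for illustration.
CATALOG = [  # (data set, predicate, object) triples
    ("gdp_quarterly", "source", "VendorA"),
    ("gdp_quarterly", "domain", "Macroeconomics"),
    ("housing_starts", "source", "VendorB"),
]
usage = Counter()

def record_use(dataset):
    """Log one use of a data set (e.g. from a model run or a search click)."""
    usage[dataset] += 1

def vendor_report():
    """Total recorded uses per vendor, to inform license negotiations."""
    vendor_of = {s: o for s, p, o in CATALOG if p == "source"}
    report = Counter()
    for ds, n in usage.items():
        report[vendor_of[ds]] += n
    return dict(report)

record_use("gdp_quarterly"); record_use("gdp_quarterly"); record_use("housing_starts")
print(vendor_report())  # → {'VendorA': 2, 'VendorB': 1}
```

Because usage is joined to vendors through the catalog's triples, the report adapts automatically when data sets are re-sourced, with no change to the logging code.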

Lay the Foundations for AI Strategy

Considered the primary catalyst for what is now being termed the 4th industrial revolution, Artificial Intelligence (AI) is expected to drive half of the world's economic gains within a decade. Where KM and AI meet, we speak of Knowledge Artificial Intelligence, a key element of this forthcoming revolution. Organizations looking to pursue a pragmatic and scalable AI strategy need a solid data practice that lays down the infrastructure necessary to sow and reap the advantages of data across disparate sources, as well as to drive scale and efficient governance through graph-based machine learning.

The key trait the business problems above share is that knowledge graph modeling enables the application of some form of AI to transform the productivity and growth of these organizations, with both “hard” and “soft” business returns. These include improvements to processes through staff augmentation and task automation, employee and customer satisfaction through personalized sales and marketing, risk avoidance through anomaly detection and predictive analytics, and staff intelligence through natural ways to ask the toughest business questions.

Closing

Data lakes and data warehouses have gotten us this far by allowing organizations to integrate and analyze large data sets to drive business decisions, but the practicality of data consolidation will always be a limiting factor for enterprise implementations of these technologies. As business agility and high volumes of data become the ingredients for success, the need for speed, exploration, scalability, and optimization of data is becoming an undertaking that traditional relational data models struggle to keep up with.

Backed by operational convenience, semantic data models give organizations the power to synthesize real-time decisions, make relevant recommendations, and facilitate knowledge sharing with limited administrative burden and a proven potential for scale. What further sets graph models apart is that they rely on context from human knowledge, structure, and reasoning, which is necessary to relate knowledge to language in a natural way. Graph data models can leverage machine learning to apply this knowledge by collecting and automatically classifying knowledge from various sources into machine-readable formats, allowing the organization to benefit from deeper layers of AI, such as natural language processing, image recognition, predictive analytics, and much more.

If your organization, like many, is facing challenges in surfacing, exploring, and managing data in an understandable and actionable manner, learn more on ways to get started or contact us directly.
