Graph Solutions PoC to Production: Overcoming the Barriers to Success (Part I)

Part I: A Review of Why Graph PoCs Struggle to Demonstrate Success or Progress to Production

This is Part 1 of a two-part series on graph database PoC success and production deployment.

 

Introduction

I began my journey with graphs around 2014 when I discovered network theory and tools like NetworkX and Neo4j. As our world becomes increasingly connected, it makes sense to work with data by leveraging its inherent connections. Soon, every problem I faced seemed ideally suited for graph solutions.

Early in my career, I worked with a biotech startup, exploring how graphs could surface insights into drug-protein interactions (DPI). The team was excited about graphs’ potential to reveal latent signals that traditional analytics missed. With a small budget, we created a Proof-of-Concept (PoC) to demonstrate the “art of the possible.” After a quick kick-off meeting, we loaded data into a free graph database and wrote queries exploring the DPI network. In just three months, we established novel insights that advanced the team’s understanding.

Despite what we considered success, the engagement wasn’t extended. More troubling, I later learned our PoC had been put into a production-like environment where it failed to scale in performance or handle new data sources. What went wrong? How had we lost the potential scientific value of what we’d built?

This experience highlights a common problem in the graph domain: many promising PoCs never make it to production. Through reflection, I’ve developed strategies for avoiding these issues and increasing the likelihood of successful transitions to production. This blog explores why graph PoCs fail and presents a holistic approach for success. It complements the blog Why Graph Implementations Fail (Early Signs & Successes).

Why Graph Database Solutions and Knowledge Graph PoCs Often Fail

Organizational Challenges

Lack of Executive Sponsorship and Alignment

Successful production deployments require strong top-level support. Without executive buy-in, graph initiatives seldom become priorities or receive funding. Executives often don’t understand the limitations of existing approaches or the paradigm shift that graphs represent.

The lack of sponsorship is compounded by how graph practitioners approach stakeholders. We often begin with technical explanations of graph theory, ontologies, and the differences between Resource Description Framework (RDF) and Label Property Graphs (LPG), rather than focusing on business value. No wonder executives struggle to understand why graph initiatives deserve funding over other projects. I’ve been guilty of this myself, starting conversations with “Let me tell you about Leonhard Euler and graph theory…” instead of addressing business problems directly.

Middle Management Resistance and Data Silos

Even with executive interest, mid-level managers can inhibit progress. Many have vested interests in existing systems and fear losing control over their data domains. They’re comfortable with familiar relational databases and may view knowledge graphs as threats to their “systems of record.” This presents an opportunity to engage managers and demonstrate how graphs can integrate with existing systems and support their goals. For example, a graph database may load data “just in time” to perform a connected data analysis and then drop the data after returning the analytic results. This is an ephemeral use of graph analytics.
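To make that ephemeral pattern concrete, here is a minimal sketch in Python using NetworkX; the transaction rows and column names are hypothetical stand-ins for data pulled from an existing system of record.

```python
# A minimal sketch of "ephemeral" graph analytics: build the graph just in
# time from an existing system of record, answer one connected-data question,
# and let the graph go out of scope when the function returns.
import networkx as nx

def account_exposure(transactions: list) -> dict:
    """Load transaction rows, score accounts by betweenness, then discard."""
    G = nx.Graph()
    for row in transactions:                 # e.g., rows from a SQL query
        G.add_edge(row["from_account"], row["to_account"])
    # A connected-data analytic that relational joins answer poorly:
    return nx.betweenness_centrality(G)      # graph is dropped on return

rows = [
    {"from_account": "A", "to_account": "B"},
    {"from_account": "B", "to_account": "C"},
    {"from_account": "C", "to_account": "D"},
]
print(account_exposure(rows))  # intermediary accounts B and C score highest
```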

Bureaucracy and Data Duplication Concerns

Large organizations have lengthy approval processes for new technologies. Infrastructure teams may be reluctant to support experimental technology without an established return on investment (ROI).

A critical but often undiscussed factor is that graph databases typically require extracting data from existing sources and creating another copy—raising security risks, infrastructure costs, and data synchronization concerns. This is the Achilles heel of graph databases. However, emerging trends in decoupling data from query engines may offer alternatives. A new paradigm is emerging in which data in data lakes can be analyzed through a graph lens at rest, without an ETL ingestion into a graph database; graph query engines enable the same data to be viewed through both traditional relational and connected-data lenses.

Isolated Use Cases and Limited Understanding

Many graph initiatives start as isolated projects tackling narrow use cases. While this limits upfront risk, it can make the impact seem trivial. Conventional technologies might solve that single problem adequately, leading skeptics to question whether a new approach is needed. The real value of knowledge graphs emerges when connecting data across silos—something that’s hard to demonstrate in limited-scope PoCs.

A practical approach I’ve found effective is asking stakeholders to diagram their problem at a whiteboard. This naturally reveals how they’re already thinking in graph terms, making it easier to demonstrate the value of a graph approach.

Talent and Skills Gap

Graph technologies require specialized skills that are in short supply. Learning curve issues affect even experienced developers, who must master new query languages and paradigms. This shortage of expertise can lead to reliance on a few key individuals, putting projects at risk if they leave.

 

Technical Challenges

Complex Data Modeling

Graph data models require a different mindset than relational schemas. Designing an effective graph schema or ontology is complex, and mistakes can lead to poor performance. Equally, an effective semantic layer is critical to understanding the meaning of an organization’s data. The schema-less flexibility of graphs can be a double-edged sword—without careful planning, a PoC might be built ad-hoc and prove inefficient or lead to data quality issues when scaled up. Refactoring a graph model late in development can be a major undertaking that casts doubt on the technology itself.
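To give a sense of what “designing a graph schema or ontology” involves at the smallest scale, here is a minimal RDFS sketch using Python and rdflib; the drug-protein domain and all names are illustrative, not a recommended model, and a production ontology would be developed with SMEs against real competency questions.

```python
# A minimal RDFS ontology sketch: declare the types of things and the
# properties that relate them, then add instance data that conforms.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/biotech/")
g = Graph()
g.bind("ex", EX)

# Schema: classes and a property with domain/range constraints.
g.add((EX.Drug, RDF.type, RDFS.Class))
g.add((EX.Protein, RDF.type, RDFS.Class))
g.add((EX.inhibits, RDF.type, RDF.Property))
g.add((EX.inhibits, RDFS.domain, EX.Drug))
g.add((EX.inhibits, RDFS.range, EX.Protein))

# Instance data conforming to the schema.
g.add((EX.aspirin, RDF.type, EX.Drug))
g.add((EX.COX1, RDF.type, EX.Protein))
g.add((EX.aspirin, EX.inhibits, EX.COX1))

print(g.serialize(format="turtle"))
```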

Integration Challenges

Enterprise data rarely lives in one place. Integrating graphs and other solutions with legacy systems requires extensive data mapping and transformation. Without smooth interoperability via connectors, APIs, or virtualization layers, the graph becomes an isolated silo with limited production value. The decoupled approaches mentioned above address this challenge by offering graph and connected data analytics as a standalone feature of graph query engines. Tooling optimized for graphs is also making ETL and integration of graph databases easier and more efficient.

Performance Trade-offs

Graph databases excel at traversing complex relationships but may underperform for simple transactions compared to optimized relational databases. In a PoC with a small dataset, this may not be immediately noticeable, but production workloads expose these limitations. As data volumes grow, even traversals that were fast in the PoC can slow significantly, requiring careful performance tuning and possibly hybrid approaches.

Evolving Standards and Tooling

The graph ecosystem is still evolving, with multiple database models and query languages (Cypher, Gremlin, SPARQL). More recently, decoupled graph query engines enable analyzing tabular and columnar data as if it were a graph, supporting the concept of “Single Copy Analytics” and concurrently increasing the breadth of options for graph analytics. Unlike the relational world with SQL and decades of tooling, graph technologies lack standardization, making it difficult to find mature tools for monitoring, validation, and analytics integration. This inconsistency means organizations must develop more in-house expertise and custom tools. 

Production Readiness Gaps

Production deployment requires high availability, backups, and disaster recovery—considerations often overlooked during PoCs. Some graph databases lack battle-tested replication, clustering, and monitoring solutions. Integrating with enterprise logging and DevOps pipelines requires additional effort that can derail production transitions. In the next blog on this topic, we will present strategies for integrating logging into a PoC and production releases.

Scaling Limitations

Graph databases often struggle with horizontal scaling compared to relational databases. While this isn’t apparent in small PoCs, production deployment across multiple servers can reveal significant challenges. As graphs grow larger and more complex, query performance can degrade dramatically without careful tuning and indexing strategies. We will explore how to thoughtfully scale graph efforts in the next blog on taking projects from PoC to Production.

 

Security and Compliance Challenges

Access Control Complexity

Graphs connect data in ways that complicate fine-grained access control. In a relational system, you might restrict access to certain tables; in a graph, queries traverse multiple node types and relationships. Implementing security after the fact is tremendously complex. Demonstrating that a graph solution can respect existing entitlements and implement role-based access control is crucial. 
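One way to reason about this is to filter traversals by entitlement before they run, so relationships are only followed into nodes the caller is cleared to see. The sketch below, in Python with NetworkX, uses a hypothetical node-level classification attribute; a real deployment would reuse the organization’s existing entitlement system.

```python
# Entitlement-aware traversal: restrict the graph to permitted nodes first,
# then traverse. Labels and the clearance model are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_node("acct:1", classification="public")
G.add_node("acct:2", classification="restricted")
G.add_node("person:ann", classification="public")
G.add_edge("person:ann", "acct:1")
G.add_edge("person:ann", "acct:2")

def visible_neighborhood(G, start, allowed):
    """Return nodes reachable from `start` through permitted nodes only."""
    permitted = {n for n, d in G.nodes(data=True)
                 if d.get("classification") in allowed}
    return nx.node_connected_component(G.subgraph(permitted), start)

print(visible_neighborhood(G, "person:ann", {"public"}))
# {'person:ann', 'acct:1'} -- the restricted account never enters the traversal
```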

Sensitive Data and Privacy Risks

Graphs can amplify privacy concerns because of their connected nature. An unauthorized user gaining partial access might infer much more from relationship patterns. This interconnectedness raises security stakes—you must protect not just individual data points but relationships as well.

Regulatory Compliance

Regulations like GDPR, HIPAA, or PCI present unique challenges for graphs. For instance, GDPR’s “right to be forgotten” is difficult to implement when deleting a node might leave residual links or inferred knowledge. Auditing requires tracking which relationships were traversed, and demonstrating data lineage becomes complex. If compliance wasn’t planned for in the PoC, retrofitting it can stall production deployment.
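A small sketch of why erasure is harder in a graph: deleting a node removes its edges, but facts previously materialized by inference over those edges survive elsewhere unless tracked and purged explicitly. The data and the `derived` store below are hypothetical.

```python
# "Right to be forgotten" in a graph: node removal is not enough when
# inferred facts have been materialized by earlier jobs.
import networkx as nx

G = nx.Graph()
G.add_edge("user:42", "device:7")
G.add_edge("device:7", "user:99")

# A derived fact, materialized by a hypothetical earlier inference run.
derived = {("user:42", "user:99"): "shares_device"}

G.remove_node("user:42")          # NetworkX also removes incident edges
print(list(G.edges()))            # [('device:7', 'user:99')]

# The erasure job must also sweep materialized inferences:
derived = {pair: rel for pair, rel in derived.items() if "user:42" not in pair}
print(derived)                    # {}
```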

 

Financial and ROI Challenges

Unclear Business Value

Justifying a graph project financially is tricky, especially when benefits are long-term or indirect. A PoC might show an interesting capability, but translating that into clear ROI is difficult if only one use case is demonstrated. Without a strong business case tied to measurable Key Performance Indicators (KPIs), projects struggle to secure production funding.

Scaling Costs

PoCs often leverage free or low-cost resources. However, production deployment requires enterprise licenses, robust infrastructure, and high-availability configurations. An enterprise-level knowledge graph spanning multiple use cases can incur significant long-term costs. These financial requirements can shock organizations that didn’t plan for them.

Operational and Talent Expenses

Beyond technology costs, successfully operating a knowledge graph requires specialized talent—data engineers, knowledge engineers, and graph database administrators. While a PoC might be built by a single person or small team, maintaining a production graph could require several dedicated staff. This represents a significant ongoing expense that organizations often underestimate.

Competing Priorities

Every project competes for finite resources. Graph initiatives promise strategic long-term benefits but may seem less immediately impactful than customer-facing applications. Organizations focused on quarterly results may abandon graph projects if they don’t show quick wins. Breaking the roadmap into phased deliverables demonstrating incremental value can help maintain support.

 

Data Governance and Scalability Challenges

Ontology and Data Stewardship

Knowledge graphs require consistent definitions across the enterprise. Many organizations lack ontology expertise, leading to inconsistent data modeling. Strong governance is essential to manage how data elements are defined, connected, and updated. Without data stewards responsible for accuracy, production graphs can become unreliable or inconsistent, undermining user trust.

 

Conclusion

Transitioning a graph database or knowledge graph from PoC to production involves multiple challenges across organizational, technical, security, financial, governance, and talent dimensions. Many promising PoCs fail to cross this “last mile” due to one or more of these issues.

In Part Two, I’ll outline a holistic strategy for successful graph initiatives that can effectively transition to production—incorporating executive alignment, technical best practices, emerging trends like GraphRAG and semantic layers, and the critical people-process factors that make the difference between a stalled pilot and a thriving production deployment.

Enterprise Knowledge Graphs: The Importance of Semantics

Heather Hedden, Senior Consultant at Enterprise Knowledge, presented “Enterprise Knowledge Graphs: The Importance of Semantics” on May 9, 2024, at the annual Data Summit in Boston. 

In her presentation, Hedden describes the components of an enterprise knowledge graph and provides further insight into the semantic layer – or knowledge model – component, which includes an ontology and controlled vocabularies, such as taxonomies, for controlled metadata. While data experts tend to focus on the graph database components (RDF triple store or a label property graph), Hedden emphasizes they should not overlook the importance of the semantic layer.

Explore the presentation to learn:

  • The definition and benefits of an enterprise knowledge graph
  • The components of a knowledge graph
  • The fundamentals of graph databases
  • The basic features of taxonomies and ontologies
  • The role of taxonomies and ontologies in knowledge graphs
  • How an enterprise knowledge graph is built

Synergizing Knowledge Graphs with Large Language Models (LLMs): A Path to Semantically Enhanced Intelligence

Why do Large Language Models (LLMs) sometimes produce unexpected or inaccurate results, often referred to as ‘hallucinations’? What challenges do organizations face when attempting to align the capabilities of LLMs with their specific business contexts? These pressing questions underscore the complexities and potential problems of LLMs. Yet, the integration of LLMs with Knowledge Graphs (KGs) offers promising avenues to not only address these concerns but also to revolutionize the landscape of data processing and knowledge extraction. This paper delves into this innovative integration, exploring how it shapes the future of artificial intelligence (AI) and its real-world applications.

 

Introduction

Large Language Models (LLMs) have been trained on diverse and extensive datasets containing billions of words to understand, generate, and interact with human language in a way that is remarkably coherent and contextually relevant. Knowledge Graphs (KGs) are a structured form of information storage that utilizes a graph database format to connect entities and their relationships. KGs translate the relationships between various concepts into a mathematical and logical format that both humans and machines can interpret. The purpose of this paper is to explore the synergetic relationship between LLMs and KGs, showing how their integration can revolutionize data processing, knowledge extraction, and artificial intelligence (AI) capabilities. We explain the complexities of LLMs and KGs, showcase their strengths, and demonstrate how their combination can lead to more efficient and comprehensive knowledge processing and improved performance in AI applications.

 

Understanding Generative Large Language Models

LLMs can generate text that closely mimics human writing. They can compose essays, poems, and technical articles, and even simulate conversation in a remarkably human-like manner. LLMs use deep learning, specifically a form of neural network architecture known as transformers. This architecture allows the model to weigh the importance of different words in a sentence, leading to a better understanding of language context and syntax. One of the key strengths of LLMs is their ability to understand and respond to context within a conversation or a text. This makes them particularly effective for applications like chatbots, content creation, and language translation. However, despite the many capabilities of LLMs, they have limitations. They can generate incorrect or biased information, and their responses are influenced by the data they were trained on. Moreover, they do not possess true understanding or consciousness; they simply simulate this understanding based on patterns in data.

 

Exploring Knowledge Graphs

KGs are a powerful way to represent and store information in a structured format, making it easier for both humans and machines to access and understand complex datasets. They are used extensively in various domains, including search engines, recommendation systems, and data integration platforms. At their core, knowledge graphs are made up of entities (nodes) and relationships (edges) that connect these entities.  This structure allows for the representation of complex relationships between different pieces of data in a way that is both visually intuitive and computationally efficient. KGs are often used to integrate structured and unstructured data from multiple sources. This integration provides a more comprehensive understanding of the data by providing a unified view. One of the strengths of KGs is the ease with which they can be queried. Technologies like SPARQL (a query language for graph databases) enable users to efficiently extract complex information from a knowledge graph. KGs find applications in various fields, including search engines (like Google’s Knowledge Graph), social networks, business intelligence, and artificial intelligence.
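As a brief illustration of the querying the paragraph describes, here is a SPARQL pattern run through Python’s rdflib library; the namespace and triples are toy data.

```python
# Querying a small knowledge graph with SPARQL via rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.worksFor, EX.acme))
g.add((EX.acme, RDF.type, EX.Company))

query = """
SELECT ?person ?org
WHERE {
    ?person a ex:Person ;
            ex:worksFor ?org .
    ?org a ex:Company .
}
"""
for person, org in g.query(query, initNs={"ex": EX}):
    print(person, "works for", org)
```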

 

Enhancing Knowledge Graph Creation with LLMs

KGs make implicit human knowledge explicit and allow inferences to be drawn from the information they are provided. The ontology, or graph model, serves as an anchor or constraint on these inferences. Once created and validated, KGs can be trusted as a source of truth because they make inferences based on the semantics and structure of their model (ontology). Because of this element of human intervention, humans can ensure that the interpretation of information is correct for the given context, in particular alleviating the ‘garbage in, garbage out’ phenomenon. However, that same human intervention comes at a cost. KGs are built with one of a few types of graph database frameworks, generally depend on some form of human curation, and are created by individuals with a specialized skill set and/or specialized software. To access the information in a knowledge graph, it must be stored in an appropriate graph database platform and queried with specialized query languages. Because of these specialized skills and the high degree of human intervention, knowledge graphs can be time-consuming and labor-intensive to create.

Enhancing KG Creation with LLMs through Ontology Prompting

There is an established process for creating a complete knowledge graph. After data collection, LLM processing and structuring for the knowledge graph make up the bulk of the work.

Through a technique known as ontology prompting, LLMs can effectively parse through vast amounts of unstructured text, accurately identify and extract pertinent entities, and discern the intricate relationships between these entities. By understanding and leveraging the context in which data appears, these models are not only capable of recognizing diverse entity types (such as people, places, organizations, etc.) but can also delineate the nuanced relationships that connect these entities. This process significantly streamlines the creation and enrichment of KGs, transforming raw, unstructured data into a structured, interconnected web of knowledge that is both accessible and actionable. The integration of LLMs into KG construction not only enriches the data but also significantly augments the utility and accuracy of the knowledge graphs in various applications, ranging from semantic search and content recommendation to advanced analytics and decision-making support.
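A minimal sketch of what ontology prompting can look like in code: the ontology’s allowed entity types and relations are embedded in the prompt, and extracted triples are validated against them afterward. `call_llm` is a hypothetical stand-in for whatever chat-completion API is in use, and the ontology itself is a toy.

```python
# Ontology prompting: constrain LLM triple extraction to a known model.
import json

ONTOLOGY = {
    "entity_types": ["Person", "Organization", "Place"],
    "relations": [
        ("Person", "worksFor", "Organization"),
        ("Organization", "locatedIn", "Place"),
    ],
}

def build_prompt(text: str) -> str:
    """Embed the ontology so extraction stays within the model."""
    return (
        "Extract triples from the text below. Use ONLY these entity types "
        "and relations:\n"
        + json.dumps(ONTOLOGY, indent=2)
        + '\nAnswer as a JSON list: [{"subject": "...", "predicate": "...", '
        '"object": "..."}]\n\nText: ' + text
    )

def extract_triples(text: str, call_llm) -> list:
    """`call_llm` is a hypothetical stand-in for any chat-completion API."""
    raw = call_llm(build_prompt(text))
    triples = json.loads(raw)
    allowed = {rel for _, rel, _ in ONTOLOGY["relations"]}
    return [t for t in triples if t.get("predicate") in allowed]  # validate
```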

 

Improving LLM Performance with Knowledge Graphs

The integration of KGs into LLMs offers substantial performance improvements, particularly in enhancing contextual understanding, reducing biases, and boosting accuracy. KGs inject a semantic layer of contextual depth into LLMs, enabling these models to grasp and process language with a more nuanced understanding of the subject matter. This interaction significantly enhances the comprehension capabilities of LLMs, as they become more adept at interpreting and responding to complex queries with enhanced precision. Moreover, the structured nature of KGs aids in mitigating biases inherent in LLMs. By providing a balanced and factual representation of information, KGs help neutralize skewed perspectives and promote a more objective and informed generation of content. Finally, the incorporation of KGs into LLMs has been instrumental in enhancing the accuracy and reliability of the output generated by LLMs.

A contextual framework for enhancing large language models with knowledge graphs. Knowledge Graphs boost accuracy & reliability, reduce bias, improve comprehension, inject contextual depth, and provide a semantic layer of context for LLMs.

The validated data from KGs serve as a solid foundation, and reduce ambiguities and errors in the information processed by LLMs, thereby ensuring a higher quality of output that is trustworthy, traceable, and contextually coherent.
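The grounding idea can be sketched in a few lines: retrieve validated facts from the graph first, then instruct the model to answer only from them. Again, `call_llm` is a hypothetical stand-in and the triples are illustrative; a real system would retrieve only the subgraph relevant to the question rather than shipping every fact.

```python
# Grounding an LLM answer in validated knowledge-graph facts.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.ProductX, EX.launchYear, Literal(2019)))
g.add((EX.ProductX, EX.owner, EX.TeamAlpha))

def grounded_answer(question: str, call_llm) -> str:
    # Naive retrieval: ship every fact. A real system would select only the
    # subgraph relevant to the question (e.g., via entity linking).
    facts = "\n".join(f"{s} {p} {o}" for s, p, o in g)
    prompt = (
        "Answer using ONLY the facts below; reply 'unknown' otherwise.\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )
    return call_llm(prompt)  # hypothetical LLM call
```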

 

Case Studies and Applications

The integration of LLMs and KGs is making significant advances across various industries and transforming how we process and leverage information. For instance, in the finance sector, LLMs combined with KGs are used for risk assessment and fraud detection. These systems analyze transaction patterns, detect anomalies, and understand the relationships between different entities, helping financial institutions mitigate risks and prevent fraudulent activities. Another example is personalized recommendation systems: e-commerce platforms like Amazon utilize KGs and LLMs to understand customer preferences, search histories, and purchase behaviors, allowing for highly personalized product and content recommendations that improve customer experience and increase sales and engagement. In the legal industry, LLMs and KGs are used to analyze legal documents, case law, and statutes. They help in summarizing legal documents, extracting relevant clauses, and conducting research, thereby saving time for legal professionals and improving the accuracy of legal advice.

The potential of LLM and KG integrations is vast, promising transformative advancements across sectors. For example, leveraging LLMs and KGs can transform educational platforms, guiding learners through tailored and personalized educational journeys. In healthcare, sophisticated virtual assistants are revolutionizing telemedicine, offering preventive care and preliminary diagnoses. Urban planning and management stand to gain immensely from this technology, enabling smarter city planning through the analysis of diverse data sources, from traffic patterns to social media sentiment. Moreover, research and development is set to accelerate, with LLMs and KGs synergizing to automate literature reviews, foster novel research ideas, and predict experimental outcomes.

The impact of large language models and knowledge graph integration is far reaching. It affects a wide range of industries, including healthcare, urban planning, research & development, finance, law, education, and e-commerce.

Challenges and Considerations

While the integration of LLMs and KGs is promising, it is accompanied by significant challenges and considerations. From a technical perspective, merging LLMs with KGs requires sophisticated algorithms capable of handling the complexity of KG structures and the nuances of natural language processed by LLMs. For example, ensuring data compatibility, maintaining real-time data synchronization, and managing the computational load are difficult tasks that require advanced solutions and ongoing innovation. Ethical and privacy concerns are also among the top challenges of this integration. The use of LLMs and KGs involves processing vast amounts of data, some of which may be sensitive or personal. Ensuring that these technologies adhere to privacy laws and regulations, maintain data confidentiality, and make ethically sound decisions is a continuous challenge. There is also the risk of perpetuating biases present in the LLM’s training data, which requires meticulous oversight and the implementation of bias-mitigation strategies. Furthermore, the sustainability of these advanced technologies cannot be overlooked. The energy consumption associated with training and running large-scale LLMs and maintaining extensive KGs poses significant environmental concerns. As demand for these technologies grows, finding ways to minimize their carbon footprint and developing more energy-efficient models is important. Addressing these technical, ethical, and sustainability challenges is crucial for the responsible and effective implementation of LLM and KG integrations.

 

Conclusion

In this white paper, we explored the dynamic interplay between LLMs and KGs, unraveling the profound impact of their integration on various industries. We delved into the transformative capabilities of LLMs in enhancing the creation and enrichment of KGs, highlighting automated data extraction, contextual understanding, and data enrichment. Conversely, we discussed how KGs can improve LLM performance by imparting contextual depth, mitigating biases, enabling source traceability, and increasing accuracy and reliability. We also showcased the practical benefits and revolutionary potential of this synergy. In conclusion, the combination of LLMs and KGs stands at the forefront of technological advancement and directs us toward an era of enhanced intelligence and informed decision-making. However, fostering continued research, encouraging interdisciplinary collaboration, and nurturing an ecosystem that prioritizes ethical considerations and sustainability is important.

Want to jumpstart your organization’s use of LLMs? Check out our Enterprise LLM Accelerator and contact us at info@enterprise-knowledge.com for more information! 

 

About this article

This is an article within a linked series written to provide a straightforward introduction to getting started with large language models (LLMs) and knowledge graphs (KGs). You can find the next article in the series here.

Top Graph Use Cases and Enterprise Applications (with Real World Examples)

Graph solutions have gained momentum due to their wide-ranging applications across multiple industries. Gartner predicts that graph technologies will be used in 80% of data and analytics innovations by 2025, up from 10% in 2021. Several factors are driving the adoption of knowledge graphs: the increasing amount of data being generated and collected, and the need to make sense of it, as well as their use in artificial intelligence and machine learning, which benefit from the structured data and context that knowledge graphs provide.

For many organizations, however, the question remains, “Is it the right solution for us?” We get this question regularly. Here, I will draw upon our own experience from client projects and lessons learned to provide a selection of optimal use cases for knowledge graphs and semantic solutions along with real world examples of their applications.

Top Graph Use Cases and Enterprise Applications

Use Case #1: Customer 360 / Enterprise 360


Customer data is typically spread across multiple applications, departments, and regions. Each team and system needs to keep diverse sets of data about their customers in order to play their specific role, inadvertently leading to siloed experiences. A graph solution allows us to create a connection layer that facilitates consistent aggregation and ingestion of diverse information types from sources internal or external to the organization. Graphs boost knowledge discovery and efficient data-driven analytics to understand a company’s relationship with customers and personalize marketing, products, and services.

Real World Examples:

Customer 360 for a Commercial Real-Estate Company

“We lost a multi-million-dollar customer after one of our regional sales reps offered the customer a property that the customer already owned. How do we get better at understanding our customers? We would like to be able to quickly answer questions like:

  • Who is our repeat customer in North America over the last 10 years?”

Customer 360 for a Global Digital Marketing and Technology Firm

“Our customer databases contain records for more than 2 billion distinct consumers (supposed to be reflecting an estimated 240 million real world individuals) – we need to understand how many versions of ‘Customer A’ we have in order to integrate the intelligence gathered from different data sources to fully understand each customer.”

Solution Outcomes: Lead generation and sales cycles are improved through faster access to content and improved customer intelligence (and the ability to customize materials); a 1% decrease in time spent searching for customer information by sales reps resulted in $6.24M in cost savings annually. Increased awareness of, and the ability to leverage, customer connections within these companies helps foster positive customer relationships.

Use Case #2: Content Personalization

The next critical step after understanding customers is to personalize and recommend relevant content to them. With the growing size of data and the dropping attention spans of online users, digital personalization has become one of the top priorities for companies’ business models. Especially with third-party cookies being phased out, companies need innovative ways to understand and target their online customers with relevant and personalized content. Graph analytics provide a meaningful way to aggregate information about a customer and relate it to your solutions and services in order to decide what information is right to share with that customer.

Real World Examples:

Customer Journey Map for a Healthcare Training and Information Provider

“We want to understand a patient’s journey to serve the next best content and information using the right channel and cadence.”

“We want to deliver tailored training content and course recommendations based on our audience and their setting so that we can connect users with the exact learning content that would help them better master key competencies.”

Solution Outcomes: A semantic recommendation service that beats accuracy benchmarks and replaces manual content-aggregation processes, supporting higher-quality, more advanced, and targeted recommendations with clear reasons. Rich metadata and semantic modeling continue to drive the matching of 50K training materials to specific curricula, powering new, data-driven, audience-based marketing efforts that show the recommender service achieving increased engagement and performance from over 2.3 million users.
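For intuition, the core of such a recommender can be reduced to a graph pattern: score items by the curriculum concepts they share with what the learner already uses. The sketch below uses Python and NetworkX with toy nodes; the production service described above is far richer, with weighting, metadata quality, and explanation all mattering.

```python
# Graph-based content recommendation by shared-concept overlap.
import networkx as nx

G = nx.Graph()
G.add_edge("course:intro", "topic:dosage")
G.add_edge("course:intro", "topic:safety")
G.add_edge("video:refresher", "topic:dosage")
G.add_edge("guide:advanced", "topic:dosage")
G.add_edge("guide:advanced", "topic:safety")

def recommend(G, seed, k=3):
    """Rank other items by how many topics they share with `seed`."""
    scores = {}
    for topic in G.neighbors(seed):
        for item in G.neighbors(topic):
            if item != seed:
                scores[item] = scores.get(item, 0) + 1  # shared-topic count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(G, "course:intro"))  # ['guide:advanced', 'video:refresher']
```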

Use Case #3: Supply Chain and Environmental Social Governance (ESG)

Having a plan for ESG is no longer an option. Many organizations now have a goal to establish a standardized, central platform to get insights on the environmental impacts associated with their supply chain processes. However, this information is typically stored in disparate locations, often hidden within departmental documents or applications. Additionally, there is usually no standardized vocabulary used across different industries, leading to inconsistent understandings of key business and supply chain concepts. Graphs reconcile such data, continuously crawled from diverse sources, to support interactive queries and provide a graphic representation or model of the elements within the supply chain, aiding in pathfinding and the ability to semantically enrich complex machine learning (ML) algorithms and decision making.

Real World Examples:

Aggregating Data to Reduce Carbon Footprints of Supply Chain for a Global Consultancy 

“We are at a pivotal time in ESG where our clients are coming to us to answer questions like:

  • What’s the best material we can use to package Product x?
  • What shipping route is the most fuel efficient?
  • Who was my most ESG compliant plant in 2020?”

Solution Outcomes: The graph embedded machine-readable relationships between key supply chain and ESG concepts in a way that does not require tables and complex joins, enabling the firm to leverage its extensive knowledge base of methods for reducing environmental impact and guiding it in building a centralized database of this knowledge. Consultants can leverage insights that are certified and aligned with industry standards to provide clients with a strategy that generates profit while supporting their sustainability mission and impact, detect patterns, and provide market intelligence to their clients.

Use Case #4: Financial Risk Detection and Prediction

The financial industry is made up of a network of markets and transactions. A risk issue in one financial institution can create a domino effect for many. As such, most large financial organizations have moved their data to a data lake or a data warehouse to understand and manage financial risk in one place. Yet risk analysis continues to suffer from the lack of a scalable way of understanding how data is interrelated. A graph or network enables institutions to model and visualize these connections as a collection of nodes and edges that specifies the exact link between certain financial concepts and entities. Graph-based solutions further leverage the relationships among the entities involved to create semantically enhanced machine learning models.

Real World Examples:

Financial Risk Reporting for a Federal Financial Regulator 

“Data scientists and economists were finding it difficult to make efficient use of siloed data sources in order to easily access and interpret data and support regulatory functions, including answering questions like:

  • What are the compliance forms and reporting requirements for Bank X?
  • Which financial institutions have filed similar risk compliance issues?
  • Which financial institutions are behind on their risk reporting and filings this year?
  • What’s the revision history and the corresponding policies and procedures for a given regulation?”

Real-Time Fraud Detection for a Multinational e-Commerce Company

“We want to tap into our extensive historic listing data to understand the relationship between packages being rerouted, listings, and merchants to ultimately detect shipping scams so that we can minimize the fraud risk for online merchants from ‘unpaid’ and fraudulent purchases on their listing items.”

Solution Outcomes: Graph data that enables exploration, linking, and understanding of entities such as products, categories, customers, and orders, supporting risk and fraud pattern detection for the organization’s risk engine algorithm. Ultimately this resulted in:

  • Real-time fraud detection: risk and fraud pattern detections that the organization’s risk engines can onboard.
  • Non-disruptive fraud prevention: helping the company identify and stop fraudulent transactions before they take place without impacting legitimate business transactions.

Use Case #5: Mergers and Acquisitions

Many factors can impact the success of mergers and acquisitions (M&A) and their successful integration, as merging with or acquiring new companies inevitably brings another ecosystem of applications, operations, data/content, and vernacular. The process of knowledge transfer and the challenge of enabling strategic alignment of processes and data are rising concerns for the already delicate success of M&As. For a knowledge graph, data relationships are first-class citizens. Thus, graphs offer ways to semantically harmonize, store, and connect similar or related organizational concepts. The approach further represents information in the way people speak, using taxonomies and ontological schemas that allow for storing data with organizational context.

Real World Examples:

Product/Solution Alignment for the World’s Leading Provider of Content Management and Intellectual Property Services

“We have gone through multiple M&As over the past 5 years. We are looking for a way to connect and standardize the data we have across 40 systems that have some overlapping applications, data, and users.”

“On our e-commerce platforms, it’s not clear what our specific products or solutions are. We are losing business due to our inability to consistently name and describe our solution offerings across the organization. How can we align our terminology on our products and solutions company wide?”

Solution Outcome: A graph solution allows for explicitly capturing and aligning the knowledge and data models by providing a comprehensive and structured representation of entities and their relationships. This aids the due diligence process by allowing for the quick identification and analysis of key stakeholders, competitors, and potential synergies. Additionally, the graph serves as a useful tool for gaining a better understanding of the complexities involved in mergers, helps prevent duplicated work and the loss of information and intelligence across the merging organizations, and enables context-based decision making.

Use Case #6: Data Quality and Governance

The size and complexity of data sources and datasets is making traditional data dictionaries and Entity Relationship Diagrams (ERDs) inadequate. Knowledge graphs provide structure for all types of data, either serving as a semantic layer or as a domain mapping solution, and enable the creation of multilateral relations across data sources, explicitly capturing how the data is being used and what changes are being made to it. Knowledge graphs thus support data governance and quality inspection by providing a contextual understanding of enterprise data: where it is, who can access it and where, and how it will be shared or changed over time. Data governance strategies that leverage knowledge graph solutions have increased data accessibility and improved data quality and observability at scale.

Real World Examples:

Graph for Data Quality at a Global Digital Marketer

“Our enterprise has over 20 child organizations that:

  • Lack transparency over which common data sets were available for use,
  • Did not understand the quality of the data available,
  • Have drastically different definitions of key terms, and 
  • Use a database of consumer data containing over 10 billion records, with dirty data and millions of duplicates.”

Solution Outcome: The graph creation and mapping process alone reduced the record count from ~10 billion to ~4 billion, with matching algorithms that optimized the QA process, resulting in 80% record deduplication with 95% accuracy.
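The match-and-merge idea behind numbers like these can be sketched simply: block records on a cheap key, then compare candidates within each block. The fields, threshold, and string-similarity measure below are illustrative; production matching uses far richer features and tuned models.

```python
# Record deduplication sketch: blocking plus pairwise string similarity.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Jon Smith",  "zip": "20001"},
    {"id": 2, "name": "John Smith", "zip": "20001"},
    {"id": 3, "name": "Ana Perez",  "zip": "33101"},
]

def blocks(records):
    """Group records by a cheap blocking key to avoid all-pairs comparison."""
    out = {}
    for r in records:
        out.setdefault(r["zip"], []).append(r)   # blocking key: zip code
    return out.values()

matches = [
    (a["id"], b["id"])
    for block in blocks(records)
    for a, b in combinations(block, 2)
    if SequenceMatcher(None, a["name"], b["name"]).ratio() > 0.85
]
print(matches)   # [(1, 2)]
```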

Use Case #7: Data as a Product (and Data Interoperability)

Every enterprise data strategy strives to facilitate the flexibility that will allow data to move between current and future systems, minimize the limitations of proprietary solutions, and avoid vendor lock-in. To do so, data needs to be created based on shared terminology, web standards, and security protocols. The Financial Industry Business Ontology (FIBO) from the EDM Council is an example of a conceptual graph model that provides common vocabulary and meaning for key concepts and terms in the financial industry and a way to align and harmonize data irrespective of its source. As a standards-based data model, graphs allow for consistent ingestion of diverse information types from sources internal or external to the organization (e.g., Linked Data, subscriptions, purchased datasets, etc.), ultimately allowing organizations to handle large data coming from various sources, including public sources, and boost knowledge discovery, industry compliance, and efficient data-driven analytics.

Real World Examples:

Data-as-a-Product for a Global Veterinary Company that Provides a Comprehensive Suite of Products, Software, and Services for Veterinary Professionals

“Most of our highly interrelated data is stuck behind 4-5 legacy data platforms and it’s hard to unify and understand our data which is slowing down our engineering processes. Ultimately, we need a way to model and describe business processes and data flow between individual veterinary practices and enrich and align their data with industry standards. This will allow us to normalize services, improve efficiency and create the ability to report on the data across practices as well as trends within a specific practice.”

Solution Outcome: A taxonomy/ontology was used as a schema to generate the graph and to describe the key types of ‘things’ vet partners were interested in and how they relate to each other. This ensures the use of a common vocabulary across all veterinary practices submitting data, resulting in:

  • Automation of data normalization,
  • Identification of potential drug targets and understanding the relationships between different molecules, and
  • Enablement of the company to provide the ontological data model as a product and a shareable industry standard

Use Case #8: Semantic Search

“Search doesn’t work” is a common sentiment at organizations that only leverage keywords to determine what search results should look like. Semantic search, at its core, is search that provides results based on context and meaning. Search relevance, or a search engine’s ability to return results that match user intent, isn’t possible without semantic understanding. Knowledge graphs thus create a machine-readable structure that allows systems to explicitly capture context, and thus search engines to understand concepts, entities, and the relationships between them.

Today, many of the search engines we use such as Google, Amazon, Airbnb, etc., all leverage multiple knowledge graphs, along with natural language processing (NLP) and machine learning (ML) to go beyond basic keyword-based searching. Understanding semantic search is becoming fundamental to providing a good search experience that’s rooted in a deep understanding of users and ultimately driving the intended digital experience that garners trust and adoption (be it knowledge transfer, enterprise learning, employee/customer retention, or increased sales).
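One building block of semantic search can be shown in a few lines: expanding a keyword query with taxonomy synonyms and narrower terms before matching. The toy taxonomy and documents below are illustrative.

```python
# Taxonomy-driven query expansion for semantic search.
TAXONOMY = {
    "fastener": {"synonyms": ["fixing"], "narrower": ["bolt", "rivet", "screw"]},
    "coating":  {"synonyms": ["paint"],  "narrower": ["primer", "enamel"]},
}

def expand(query: str) -> set:
    """Return the query term plus its synonyms and narrower terms."""
    terms = {query}
    entry = TAXONOMY.get(query, {})
    terms.update(entry.get("synonyms", []))
    terms.update(entry.get("narrower", []))
    return terms

docs = {"doc1": "enamel coating spec for part 1956", "doc2": "bolt torque table"}
hits = {d for d, text in docs.items() if expand("coating") & set(text.split())}
print(hits)   # {'doc1'} -- matched via the narrower term "enamel"
```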

Real World Examples:

Expert Finder for a Federal Engineering Research Institute 

“We have a retiring workforce and are facing challenges with brain drain. We would like to be able to get quick answers to questions like:

  • What type of paint did we use to manufacture this engineering part in 1956?”

Solution Outcomes: A graph model enables browsing and discovery of previously uncaptured relationships between people, roles, projects, organizations, and engineering materials, which can be aggregated and returned in search results. This provides a unified view of institutional information and reduces the time to find an expert and project information from 3-4 weeks to 5-10 minutes.

Use Case #9: Context and Reasoning for AI and ML

Most enterprise AI projects stall due to the lack of a strategy for getting data and knowledge. AI efforts typically start with data scientists hired to explore and figure out what’s in the data. They often get stuck after some exploration with fundamental questions like: what problem am I solving, or how do I know this is good training data? This results in mistakes in the algorithms, bad AI errors, loss of trust, and ultimately abandonment of AI efforts. Data on its own does not explain itself or its journey; data is only valuable in the context of what it means to end users. Knowledge graphs give ML and AI a knowledge modeling approach that accelerates the data exploration, connection, and feature extraction process and provides automated, context-based data classification during data preparation.

Real World Examples:

A Semantic Recommendation Service for a Scientific Products and Software Services Supplier

“We need to improve our ML algorithms to automate the aggregation of products and related marketing and manuals, videos, etc. to make personalized content recommendations to our customers investing in our products. This is currently a manual process that requires significant time investment and resources from Marketing, Products, IT. This is becoming business critical for us to manage at a global scale.”

Solution Outcomes: Graph provides a comprehensive and organized view of data, helping improve the performance and explainability of models, and automating several tasks. Specifically, the graph is supporting:

  • Data integration/preparation: integrating and organizing data from various sources, such as marketing content platforms and the Product Information Management (PIM) application, making it easier for ML and AI models to access and understand the data by encoding context through metadata and taxonomies.
  • Automation: support the automation of tasks such as data annotation, data curation, data pre-processing and so on, which can help save time and resources.
  • Explanation: a way to understand and explain the decisions made by ML and AI models, increasing trust and transparency.
  • Reasoning: the graph is used to perform reasoning and inferences, which help the ML and AI algorithms to make more accurate predictions on content recommendations.
  • Personalization: using the knowledge graph, AI extracts users’ preferences and behavior to provide personalized services for a given product.

For more details and use cases visit Enterprise Knowledge.

EK’s Chris Marino to Present at This Year’s Data Architecture Online Event

Chris Marino, a Principal Solution Consultant at Enterprise Knowledge (EK), is a featured speaker at this year’s Data Architecture Online event organized by Dataversity. Marino will present his webinar “Learning 360: Crafting a Comprehensive View of Learning Content Using a Graph” on July 20th at 12:30 pm Eastern Standard Time (EST). The webinar will feature a case study on how a major U.S. retailer is transforming how they organize and manage various learning material types, forms, and systems. In his webinar presentation, Marino will take participants through the entire Graph development process, including planning, designing, and developing the new tool, as well as highlight benefits to the organization and lessons learned throughout the process.

Data Architecture Online is an annual, free online event hosted by Dataversity designed to teach data architects, modelers, database administrators, and IT managers about the latest strategies in building, utilizing, and managing modern Data Architecture.

 

Register for the free online event here: Data Architecture Online

Sessions will run on Data Architecture Online from 8:00 am – 1:30 pm EST (2:00 pm – 7:30 pm Central European Time).

 

What is a Semantic Architecture and How do I Build One?

Can you access the bulk of your organization’s data through simple search or navigation using common business terms? If so, your organization may be one of the few that is reaping the benefits of a semantic data layer. A semantic layer provides the enterprise with the flexibility to capture, store, and represent simple business terms and context as a layer sitting above complex data. This is why most of our clients typically give this architectural layer an internal nickname, referring to it as “The Brain,”  “The Hub,” “The Network,” “Our Universe,” and so forth. 

As such, before delving deep into the architecture, it is important to align on and understand what we mean by a semantic layer and its foundational ability to solve business and traditional data management challenges. In this article, I will share EK’s experience designing and building semantic data layers for the enterprise, the key considerations and potential challenges to look out for, and also outline effective practices to optimize, scale, and gain the utmost business value a semantic model provides to an organization.

What is a Semantic Layer?

A semantic layer is not a single platform or application, but rather the realization or actualization of a semantic approach to solving business problems by managing data in a manner that is optimized for capturing business meaning and designed for the end user experience. At its core, a standard semantic layer comprises one or more of the following semantic approaches:

  • Ontology Model: defines the types of things that exist in your business domain and the properties that can be used to describe them. An ontology provides a flexible and standard model that organizes structured and unstructured information through entities, their properties, and the way they relate to one another.
  • Enterprise Knowledge Graph: uses an ontology as a framework to add in real data and enable a standard representation of an organization’s knowledge domain and artifacts so that it is understood by both humans and machines. It is a collection of references to your organization’s knowledge assets, content, and data that leverages a data model to describe the people, places, and things and how they are related. 

A semantic layer thus pulls in these flexible semantic models to allow your organization to map disparate data sources into a single schema or a unified data model that provides a business representation of enterprise data in a “whiteboardable” view, making large data accessible to both technical and nontechnical users. In other words, it provides a business view of complex knowledge, information, and data and their assorted relationships in a way that can be visually understood.

How Does a Semantic Layer Provide Business Value to Your Organization?

Organizations have been successfully utilizing data lakes and data warehouses in order to unify enterprise data in a shared space. A semantic data layer delivers the best value for enterprises that are looking to support the growing consumers of big data, business users, by adding the “meaning” or “business knowledge” behind their data as an additional layer of abstraction, or as a bridge between complex data assets and front-end applications such as enterprise search, business analytics and BI dashboards, chatbots, natural language processing, etc. For instance, if you ask a non-semantic chatbot, “what is our profit?” and it recites the definition of “profit” from the dictionary, it does not have a semantic understanding or context of your business language and what you mean by “our profit.” A chatbot built on a semantic layer would instead respond with something like a list of revenue generated per year and the respective percentage of your organization’s profit margins.

Visual representation of how a semantic layer draws connections between your data management and storage layers.
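To make that chatbot contrast concrete, here is a minimal sketch of a semantic layer as a lookup from business terms to governed definitions and formulas; the metric, fields, and figures are all illustrative.

```python
# A toy semantic layer: business terms resolve to governed definitions,
# which in turn map to concrete fields in the underlying data.
SEMANTIC_LAYER = {
    "profit": {
        "definition": "revenue minus cost, by fiscal year",
        "formula": lambda row: row["revenue"] - row["cost"],
    },
}

facts = [
    {"year": 2023, "revenue": 10_000_000, "cost": 8_500_000},
    {"year": 2024, "revenue": 12_000_000, "cost": 9_000_000},
]

def answer(term: str) -> dict:
    metric = SEMANTIC_LAYER[term]                 # business term -> meaning
    return {row["year"]: metric["formula"](row) for row in facts}

print(answer("profit"))   # {2023: 1500000, 2024: 3000000}
```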

With a semantic layer as part of an organization’s Enterprise Architecture (EA), the enterprise will be able to realize the following key business benefits: 

  • Bringing Business Users Closer to Data: business users and leadership are closer to data and can independently derive meaningful information and facts to gain insights from large data sources without the technical skills required to query, cleanup, and transform large data.   
  • Data Processing: greater flexibility to quickly modify and improve data flows in a way that is aligned to business needs and the ability to support future business questions and needs that are currently unknown (by traversing your knowledge graph in real time). 
  • Data Governance: unification and interoperability of data across the enterprise minimizes the risk and cost associated with migration or duplication efforts to analyze the relationships between various data sources. 
  • Machine Learning (ML) and Artificial Intelligence (AI): serves as the source of truth that provides definitions of the business data to machines, laying the foundation for deep learning and analytics that help the business answer or predict business challenges.

Building the Architecture of a Semantic Layer

A semantic layer consists of a wide array of solutions, ranging from the organizational data itself, to data models that support object- or context-oriented design, semantic standards to guide machine understanding, as well as tools and technologies to enable and facilitate implementation and scale.

[Image: Semantic layer architecture, showing the path from data sources, through data modeling, transformation, unification, and standardization, to graph storage and a unified taxonomy, to the semantic layer itself and the business outcomes it enables.]

The five foundational steps we have identified as critical to building a scalable semantic layer within your enterprise architecture are: 

1. Define and prioritize your business needs: In building semantic enterprise solutions, clearly defined use cases provide the key question or business reason your semantic architecture will answer for the organization. This in turn drives an understanding of the users and stakeholders, articulates the business value or challenge the solution will address, and enables the definition of measurable success criteria. Active engagement with subject matter experts (SMEs) to validate that their business knowledge and understanding of the data are properly represented is critical to success. Skipping this foundational step will result in missed opportunities for organizational alignment and return on investment (ROI). 

2. Map and model your relevant data: Many organizations we work with support a data architecture based on relational databases, data warehouses, and/or a wide range of content management cloud or hybrid-cloud applications and systems that drive data analysis and analytics capabilities. This does not necessarily mean that these organizations need to start from scratch or overhaul a working enterprise architecture in order to adopt semantic capabilities. For these organizations, it is more effective to increase the focus on data modeling and design by adding models and standards that capture business meaning and context (see the section below on web standards), which provides the least disruptive starting point. In such scenarios, we typically select the most effective approach to model data and map from source systems, employing the relevant transformation and unification processes (Extract, Transform, Load, or ETL) as well as model-mapping best practices (think ‘virtual model’ versus a data model stored in graph stores such as graph databases or property graphs). That choice is based on the organization’s use cases, enterprise architecture capabilities, and staff skill sets, and above all should provide the highest flexibility for data governance and evolving business needs.

The state of an organization’s data typically comes in various formats and from disparate sources. Start with a small use case, and plan for an upfront clean-up and transformation effort; this is a good investment that starts organizing your data and sets stakeholder expectations while demonstrating the value of your model early.
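As a sketch of what such a mapping step can look like, the snippet below extracts rows from a hypothetical relational customers table and transforms them into RDF triples using Python’s sqlite3 and rdflib libraries. The table, columns, and namespace are assumptions made for the example, not a prescribed model.

```python
# A minimal ETL sketch: extract rows from a relational table and
# transform them into RDF triples. The table and namespace are
# hypothetical, for illustration only.
import sqlite3
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("https://example.com/data#")  # hypothetical namespace

# Extract: a stand-in relational source (normally an existing database).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
db.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Acme Corp", "EMEA"), (2, "Globex", "APAC")],
)

# Transform + Load: map each row onto the graph model.
g = Graph()
g.bind("ex", EX)
for cust_id, name, region in db.execute("SELECT id, name, region FROM customers"):
    node = EX[f"customer/{cust_id}"]
    g.add((node, RDF.type, EX.Customer))
    g.add((node, RDFS.label, Literal(name)))
    g.add((node, EX.inRegion, EX[region]))

print(g.serialize(format="turtle"))
```

The same pattern scales up: each source system gets a mapping from its rows or documents onto the shared model, so the graph becomes the unified view without disturbing the source systems themselves.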

3. Leverage semantic web standards to ensure interoperability and governance: Despite the agility required to evolve data management practices, organizations need to think long term about scale and governance. Semantic web standards provide the fundamentals that enable you to adopt standard frameworks and practices when kicking off or advancing your semantic architecture. The most relevant practices for the enterprise are to: 

  • Employ an established data description framework to add business context to your data to enable human understanding and natural language meaning of data (think taxonomies, data catalogs, and metadata); 
  • Use standard approaches to manage and share the data through core data representation formats and a set of rules for formalizing data to ensure your data is both human-readable and machine-readable (examples include XML/RDF formats); 
  • Apply a flexible logic or schema to map and represent relationships, knowledge, and hierarchies between your organization’s data (think ontologies/OWL);
  • Adopt a semantic query language to access and analyze the data for natural language and artificial intelligence systems (think SPARQL; see the sketch after this list); and
  • Start with available existing or open-source semantic models and ecosystems to serve as a low-risk, high-value stepping stone (think Linked Open Data/Schema.org). For instance, organizations in the financial industry can start their journey with a starter ontology such as the Financial Industry Business Ontology (FIBO), while we have used the Gene Ontology as a jumping-off point for biopharma organizations to enrich or tailor a model to their specific needs.
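To illustrate the query-language point, the short sketch below runs a SPARQL query over a handful of triples using rdflib; the data and the “ex” namespace are invented for the example.

```python
# A small SPARQL sketch with rdflib; data and namespace are invented.
from rdflib import Graph

TTL = """
@prefix ex: <https://example.com/data#> .
ex:jane  a ex:Person ; ex:worksOn ex:sales_dashboard .
ex:omar  a ex:Person ; ex:worksOn ex:sales_dashboard .
ex:sales_dashboard a ex:Project .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Who works on which project?
results = g.query("""
    PREFIX ex: <https://example.com/data#>
    SELECT ?person ?project
    WHERE { ?person a ex:Person ; ex:worksOn ?project . }
""")
for person, project in results:
    print(person, "->", project)
```

Because both the data format (RDF/Turtle) and the query language (SPARQL) are W3C standards, the same query would run unchanged against any standards-compliant triple store.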

4. Scale with semantic tools: Semantic technology components in a more mature semantic layer include graph management applications that serve as middleware, powering the storage, processing, and retrieval of your semantic data. In most scaled enterprise implementations, the architecture for a semantic layer includes a graph database for storing the knowledge and relationships within your data (i.e. your ontology and knowledge graph); an enterprise taxonomy/ontology management tool or a data cataloging tool for effective application and governance of your metadata across enterprise applications such as content management systems; and text analytics or extraction tools to support advanced capabilities such as machine learning (ML) or natural language processing (NLP), depending on the use cases you are working with. 

5. “Plug in” your customer- and employee-facing applications: The most practical and scalable semantic architecture will successfully support upstream customer- or employee-facing applications such as enterprise search, data visualization tools, end services/consuming systems, and chatbots, to name a few. This way you can “plug” semantic components into other enterprise solutions, applications, and services. With this as your foundation, your organization can start taking advantage of advanced artificial intelligence (AI) capabilities such as knowledge/relationship and text extraction tools that enable Natural Language Processing (NLP), machine-learning-based pattern recognition to enhance the findability and usability of your content, and automated categorization of your content to augment your data governance practices. 

The cornerstone of a scalable semantic layer is the capability for controlling and managing versions, governance, and automation. Continuous integration pipelines, including standardized APIs and automated ETL scripts, should be considered part of the DNA to ensure consistent connections for structured input from tested and validated sources.

Conclusion

In summary, a semantic layer works best as a natural integration framework for enabling interoperability of organizational information assets. Get started by focusing on valuable, business-centric use cases that justify the move to semantic solutions. Further, it is worth considering a semantic layer as a complement to other technologies, including relational databases, content management systems (CMS), and other front-end web applications that benefit from easy access to an intuitive representation of your content and data, including your enterprise search, data dashboards, and chatbots.

If you are interested in learning more to determine if a semantic model fits within your organization’s overall enterprise architecture or if you are embarking on the journey to bridge organizational silos and connect diverse domains of knowledge and data that accelerate enterprise AI capabilities, read more or email us.   


The post What is a Semantic Architecture and How do I Build One? appeared first on Enterprise Knowledge.

KM Showcase 2020: Leveraging KM as the Foundation for AI https://enterprise-knowledge.com/km-showcase-2020-leveraging-km-as-the-foundation-for-ai/ Mon, 09 Mar 2020 21:35:32 +0000 https://enterprise-knowledge.com/?p=10763

This presentation from Joe Hilger, Founder and COO of Enterprise Knowledge was presented at the KM Showcase 2020 in Arlington, VA on March 5th. The presentation addresses why knowledge management (KM) is the foundation for successful artificial intelligence (AI). Hilger provides reasoning and examples for why taxonomy, content strategy, governance, and KM leadership are foundational requirements for organization’s pursuing recommender systems, chat bots, and much more. Lastly, he defines Knowledge Artificial Intelligence and provides a brief overview of knowledge graphs.

The post KM Showcase 2020: Leveraging KM as the Foundation for AI appeared first on Enterprise Knowledge.

What is Artificial Intelligence (AI) for the Enterprise? https://enterprise-knowledge.com/what-is-artificial-intelligence-ai-for-the-enterprise/ Thu, 12 Dec 2019 14:00:21 +0000 https://enterprise-knowledge.com/?p=10082

Artificial intelligence (AI) is set to be the key source of transformation, disruption, and competitive advantage in today’s fast-changing economy. Gartner estimates that AI will create $2.9 trillion in business value and 6.2 billion hours of worker productivity in 2021. As a result, numerous early adopters are buying into AI within organizations across diverse industries. But many are already encountering challenges, as the vast majority of AI initiatives fail to meet expectations or deliver solid returns on investment. For these organizations, the setback typically originates from the lack of a foundation on which to build AI capabilities. Enterprise AI projects end up being isolated endeavors without the needed strategic change to support business practices and operations across the organization. So, how can your organization avoid these pitfalls? It may help to first define what successful AI transformation looks like for the enterprise.

Deconstructing Artificial Intelligence: What are Enterprise AI Applications?

Enterprise AI entails leveraging advanced machine and cognitive capabilities to discover and deliver organizational knowledge, data, and information in a way that closely aligns with how humans look for and process information.

In order to succeed with AI, organizations first need to identify which of their current enterprise information and data management challenges are a good fit for an AI solution, keeping in mind that AI is not a magic bullet that can solve all business problems. After selecting appropriate use cases, organizations must then build the foundational competencies to structure their information in a manner that is machine-readable. From our experience, the best-suited enterprise use cases for advanced capabilities such as artificial intelligence and machine learning include:  

  • Semantic Search & Natural Language Processing (NLP): Semantic search seeks to understand the meaning and context behind searched terms as opposed to just executing queries against keywords. It takes into consideration language, word variation, synonymous terms, location, and user preferences to simplify the user experience, describing information closer to how the user would describe it to another person. For the enterprise, this is made possible through semantic technologies and enterprise knowledge graphs that provide the architecture to discover and surface knowledge across disparate data sources, with the flexibility to quickly modify and improve data flows. This further makes it easier to sustainably add new data sources (without making extensive changes) and support future business questions that are currently unknown. Successful organizations leverage semantic search to develop human-centered applications using simple natural language (think applications such as chatbots and question answering systems).
  • Scaled Data Governance through automated organization: Auto-tagging and classification automatically route and organize content and data to the right channel(s) to enable findability and discoverability and to optimize enterprise information and data/content governance. The most successful data categorization solutions put in place consistent follow-up processes to manage and access data in the right place, removing the manual (and usually error-prone) burden from humans and enabling the enterprise to consistently organize data based on predefined access and security requirements for reliable risk management and compliance purposes.
  • Augmented categorization and classification of data: Augmented categorization leverages machine logic to organize data based on similarities between content, context, and/or users, and further enables the automatic assignment of non-topical concepts to documents, such as sentiment (e.g. positive, negative). Once the enterprise determines the relevant categories and relationship model (think taxonomies and ontologies), the machine can learn to organize and manage concepts that are unlikely to be explicitly mentioned in a particular document (e.g. emails, helpdesk requests, etc.). The most relevant business problems we have seen here include enabling intelligent routing for handling support tickets, determining if an email needs a follow-up response, and recommending the relevant response. 
  • Discover relationships across disparate sources through recommender systems: A recommendation system works by defining relationships between information or content to provide a better understanding of how things fit together. It also tracks what is relevant, adds context to otherwise unconnected data, and suggests relevant information and content based on the similarity of users, the similarity of content, and the relationships between users and content. Recommendation systems using knowledge graphs and machine-readable logic pick up on patterns that enable users to discover new facts and knowledge that would have otherwise remained hidden (see the sketch after this list). 
  • Advanced Analytics: Unlike basic analytics, advanced analytics uses machine learning and large sets of quantitative data to empower organizations to efficiently mine information, discover hidden facts, and identify patterns at a large scale. With this capability, the enterprise is able to understand the business through insights from large and disparate data sources to make relevant and timely decisions, as well as forecast or predict future outcomes.
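As a simplified illustration of the recommender idea above, the sketch below suggests content a user has not yet seen, ranked by overlap with similar users’ histories. The data is invented, and a production system would traverse a knowledge graph with far richer relationships than these in-memory sets.

```python
# A toy recommender sketch: suggest content a user has not seen, ranked
# by how often it co-occurs with their history across similar users.
# The data is invented; a production system would traverse a knowledge
# graph with far richer relationships.
from collections import Counter

views = {
    "alice": {"taxonomy-101", "ontology-guide", "graph-poc"},
    "bala":  {"taxonomy-101", "ontology-guide", "sparql-basics"},
    "chen":  {"ontology-guide", "sparql-basics", "graph-poc"},
}

def recommend(user: str, top_n: int = 2) -> list[str]:
    seen = views[user]
    scores: Counter[str] = Counter()
    for other, their_views in views.items():
        if other == user:
            continue
        overlap = len(seen & their_views)  # similarity: shared items
        for item in their_views - seen:    # candidates the user hasn't seen
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("alice"))  # e.g. ['sparql-basics']
```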

Why Does the Enterprise Need to Invest in AI?

The most common business drivers for Enterprise AI include: 

Business Pace and Agility: The need to cope with rapid change and the speed of business, while successfully balancing effective change management and user experience with increased personalization, knowledge retention, sustainability, and scalability over time, is becoming one of the cornerstones of competitive advantage. This, for the enterprise, requires impeccable harmonization and autonomous operation of disparate data, content, and information management solutions.

Data Dynamism, Governance, and Scale: According to Forbes, 90% of the data and information we have today was created in just the past two years. The volume and dynamism of organizational data and content (structured and unstructured) is growing exponentially, and organizations need significant efficiency to glean meaningful insights and value from it to make better decisions.

Aging Technology and Infrastructure: Most organizations have been built to organize and manage data and information by type, department, or business function. To add to this complexity, many enterprise leaders say that their systems don’t talk to each other. Increased digitization, coupled with fast-aging systems, further fuels these silos and disparate sources, making it harder for technological solutions to continue providing meaningful support for business problems.  

Why Organizations Fail with AI

The meaning and value of AI in the context of enterprise solutions is continuously evolving. Perceptions of AI have ranged from a robot that will answer all of our questions to the silver-bullet application that will automate processes and augment analysis capabilities to predict and improve our future. However, many businesses make the key mistake of assuming that an organization can start and succeed with AI the moment it is given the green light. 

From our experience, organizations in various industries are leveraging or experimenting with some form of AI capabilities and seeing remarkable results. However, many have yet to gain any value from their AI investments. Here is a selection of reasons why: 

  • Lack of clear business applications and relevant use cases: Much akin to any large and disruptive transformation, every organization should first understand how advanced intelligence can impact their business or help solve relevant business problems. Kicking off an AI effort without a business focus leads to a lack of a strategic approach, which fuels misalignment and the lack of operational and cultural changes required to make it a success. 
  • The assumption that AI is a “Single Technology” solution: AI is not a single technology solution. It is a combination of related technologies that address multi-layered advancement requirements within an organization such as analysis, automation, perception, prediction, etc. Organizations looking to “plug-and-play AI” need to reset their expectations and plan for a multi-phase design, development, and integration effort.
  • AI is not fully “there” yet: Although automation has started to relieve the burden of repetitive organizational tasks such as tagging and classification/categorization, AI is still an emerging and evolving field. As such, it will continue to require human validation in order to scale effectively, especially in use cases that require a high degree of accuracy. 
  • Enterprise information and data is not ready for AI: Machines need to learn a human way of thinking and how an organization operates in order to provide the right solutions. To this end, the information and knowledge we work with on a daily basis needs to be machine readable for AI technologies to do anything with it. “Garbage in, garbage out” is a common refrain among AI practitioners; without high quality, well organized and tagged data, AI applications will not deliver effective results. 

What are the Steps to Getting Started with Enterprise AI?

In a previous blog, I shared how to organize your data by building a knowledge graph, creating the foundations necessary for a successful AI initiative. 

From our experience, the following key considerations continue to commonly deliver a scalable and adaptable AI capability for the enterprises we work with:

  1. Define an overarching vision that outlines a clear meaning, use case definition, and business value of artificial intelligence for your enterprise. This step serves as the institutional footing for setting end-user expectations and for solidifying the internal capabilities needed to synchronize the “design and build” process.
  2. Understand organizational information maturity, including assessments of current capabilities; the current state of your content or data, tools, processes, and skill sets; and an evaluation of any existing AI efforts.
  3. Develop an artificial intelligence strategy to align AI use cases across functions and departments, as well as define a delivery process that supports the organization’s long term strategy and allows for incremental delivery with regular validation of assumptions.
  4. Develop a prioritized backlog to incrementally prove and deliver on Enterprise AI initiatives.
  5. Plan for sustainability and governance. Create a scalable AI project prioritization and backlog creation process for future AI initiatives, establish data collection Standard Operating Procedures (SOPs) or data mining processes, and confirm data quality and tracking policies for existing data sources.
  6. Iterate and scale with each new business question and data source. 

Many large and successful initiatives we’ve led started small, with defined business goal(s), and were delivered incrementally to validate assumptions and drive enterprise alignment one use case at a time. Whatever your industry, our AI strategy approach, user-centered design approach, and in-house technical expertise can help you get started with a 1 to 2-Day Enterprise AI Foundations workshop that will help you understand artificial intelligence capabilities and their relevance to your unique business needs, as well as develop a shared vision with a strategy/roadmap to drive practical development.


The post What is Artificial Intelligence (AI) for the Enterprise? appeared first on Enterprise Knowledge.

Webinar: Making KM Clickable (Powering Great Enterprise Search with KM) https://enterprise-knowledge.com/webinar-making-km-clickable-powering-great-enterprise-search-with-km/ Fri, 22 Nov 2019 16:33:43 +0000 https://enterprise-knowledge.com/?p=10036 Presented by EK’s Zach Wahl and Joe Hilger, this webinar discusses how KM enables Advanced Enterprise Search.

The post Webinar: Making KM Clickable (Powering Great Enterprise Search with KM) appeared first on Enterprise Knowledge.

How to Build a Knowledge Graph in Four Steps: The Roadmap From Metadata to AI https://enterprise-knowledge.com/how-to-build-a-knowledge-graph-in-four-steps-the-roadmap-from-metadata-to-ai/ Mon, 09 Sep 2019 13:19:48 +0000 https://enterprise-knowledge.com/?p=9527

The scale and speed at which data and information are being generated today makes it challenging for organizations to effectively capture valuable insights from massive amounts of information and diverse sources. We rely on Google, Amazon, Alexa, and other chatbots because they help us find and act on information in the same way that we typically think about things. As organizations explore the next generation of scalable data management approaches, leveraging advanced capabilities such as automation becomes a competitive advantage. Think about the multiple times organizations have undergone robust technological transformations. Despite developing a business case, a strategy, and a long-term implementation roadmap, many still fail to effect or embrace the change. The most common challenges we see facing the enterprise in this space today include:

  • Limited understanding of the business application and use cases to define a clear vision and strategy.
  • Not knowing where to start, in terms of selecting the most relevant and cost-effective business use case(s) and identifying supportive business or functional teams to enable rapid validation.
  • There are multiple initiatives across the organization that are not streamlined or optimized for the enterprise.
  • Enterprise data and information is disparate, redundant, and not readily available for use.
  • Lack of the required skill sets and training.

Our experience at Enterprise Knowledge demonstrates that most organizations are already either developing or leveraging some form of Artificial Intelligence (AI) capabilities to enhance their knowledge, data, and information management. Commonly, these capabilities fall under existing functions or titles within the organization, such as data science or engineering, business analytics, information management, or data operations. However, given today’s technological advancements and the increasing value of organizational knowledge and data in our work and the marketplace, organizational leaders who treat their information and data as an asset, and invest strategically to augment and optimize it, have already started reaping the benefits: their staff focus on higher-value tasks and contribute to the complex analytical work that builds the business. The most pragmatic approaches for developing a tailored strategy and roadmap toward AI begin by looking at existing capabilities and foundational strengths in your data and information management practices, such as metadata, taxonomies, ontologies, and knowledge graphs, as these will serve as foundational pillars for AI. Below, I share in detail a series of steps and successful approaches that will serve as key considerations for turning your information and data into foundational assets for the future of technology.

What is AI?

At EK, we see AI in the context of leveraging machines to imitate human behaviors and deliver organizational knowledge and information in real and actionable ways that closely align with the way we look for and process knowledge, data, and information.

What is a Knowledge Graph?

An Enterprise Knowledge Graph provides a representation of an organization’s knowledge, domain, and artifacts that is understood by both humans and machines. To this end, Knowledge Graphs serve as a foundational pillar for AI, and AI provides organizations with optimized solutions and approaches to achieve overarching business objectives, either through automation or through enhanced cognitive capabilities.

Getting Started…

Step 1: Identify Your Use Cases for Knowledge Graphs and AI

As an enterprise considers undergoing critical transformations, it becomes evident that most efforts compete for the same resources, priorities, and funds. Identifying a solid business case for knowledge graph and AI efforts therefore becomes the foundational starting point for gaining support and buy-in. Effective business applications and use cases are those that are driven by strategic goals, have defined business value either for a particular function or a cross-functional team, and make processes or services more efficient and intelligent for the enterprise. Prioritization and selection of use cases should be driven by the foundational value proposition of the use case for future implementations, technical and infrastructure complexity, stakeholder interest, and availability to support implementation. The most relevant use cases for implementing knowledge graphs and AI include:

  • Intuitive search using natural language;
  • Discovering related content and information, structured or unstructured;
  • Reliable content and data governance;
  • Compliance and operational risk prediction.

[Image: Relevant use cases for knowledge graphs and AI.]

For more information regarding the business case for AI and knowledge graphs, you can download our whitepaper that outlines the real-world business problems that we are able to tackle more efficiently by using knowledge graph data models.

Once your most relevant business question(s) or use cases have been prioritized and selected, you are now ready to move into the selection and organization of relevant data or content sources that are pertinent to provide an answer or solution to the business case.

Step 2: Inventory and Organize Relevant Data

The majority of the content that organizations work with is unstructured in the form of emails, articles, text files, presentations, etc. Taxonomy, metadata, and data catalogs allow for effective classification and categorization of both structured and unstructured information for the purposes of findability and discoverability. Specifically, developing a business taxonomy provides structure to unstructured information and ensures that an organization can effectively capture, manage, and derive meaning from large amounts of content and information.

There are a few approaches for inventorying and organizing enterprise content and data. If you are faced with the challenging task of inventorying millions of content items, consider using tools to automate the process. A great starting place we recommend is to conduct user- or Subject Matter Expert (SME)-focused design sessions, coupled with bottom-up analysis of selected content, to determine which facets of content are important to your use case. Taxonomies and metadata that are intuitive and close to business processes and culture tend to yield faster adoption and more useful terms for structuring your content. Organizing your content and data in this way gives your organization a stepping stone toward having information in machine-readable format, laying the foundation for semantic models, such as ontologies, to understand and use the organization’s vocabulary and start mapping relationships to add context and meaning to disparate data.
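As a deliberately simplified sketch of how a taxonomy begins to structure unstructured content, the snippet below tags documents by matching taxonomy labels’ synonyms against text. The taxonomy and document are invented, and real implementations rely on NLP and text analytics rather than substring matching.

```python
# A deliberately simple auto-tagging sketch: match taxonomy terms (and
# synonyms) against document text. Invented data; real systems use NLP,
# not substring matching.

TAXONOMY = {
    "Risk Management": ["risk", "compliance", "audit"],
    "Data Governance": ["metadata", "stewardship", "data quality"],
    "Search":          ["findability", "query", "relevance"],
}

def tag(document: str) -> list[str]:
    """Return the taxonomy labels whose synonyms appear in the text."""
    text = document.lower()
    return [
        label
        for label, synonyms in TAXONOMY.items()
        if any(term in text for term in synonyms)
    ]

doc = "Improving metadata stewardship reduces compliance risk."
print(tag(doc))  # ['Risk Management', 'Data Governance']
```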

Step 3: Map Relationships Across Your Data

Ontologies leverage taxonomies and metadata to provide the knowledge for how relationships and connections are to be made between information and data components (entities) across multiple data sources. Ontology data models further enable us to map relationships in a single location at varying levels of detail and layers. This, in turn, sets the groundwork for more intelligent and efficient AI capabilities, such as text mining and identifying context-based recommendations. These relationship models further allow for:

  • Increasing reuse of “hidden” and unknown information;
  • Managing content more effectively;
  • Optimizing search; and
  • Creating relationships between disparate and distributed information items.

Tapping the power of ontologies to define the types of relationships and connections for your data provides the template to map your knowledge into your data and the blueprint needed to create a knowledge graph.
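As a sketch of what relationship mapping can look like in practice, the snippet below joins two invented source datasets, an HR feed and a document index, into a single list of typed relationships: the raw material of a knowledge graph. The dataset names, fields, and relation labels are all illustrative assumptions.

```python
# A relationship-mapping sketch: two invented source datasets are
# joined into typed (subject, relation, object) edges, the raw material
# of a knowledge graph.

hr_feed = [  # e.g. from an HR system
    {"employee": "jane", "department": "Analytics"},
    {"employee": "omar", "department": "Engineering"},
]
doc_index = [  # e.g. from a content management system
    {"doc": "q3-forecast", "author": "jane"},
    {"doc": "etl-runbook", "author": "omar"},
]

edges = []
for row in hr_feed:
    edges.append((row["employee"], "memberOf", row["department"]))
for row in doc_index:
    edges.append((row["author"], "authored", row["doc"]))

# A derived relationship made possible by connecting the two sources:
# each document is implicitly connected to its author's department.
for author, _, doc in [e for e in edges if e[1] == "authored"]:
    for employee, _, dept in [e for e in edges if e[1] == "memberOf"]:
        if employee == author:
            edges.append((doc, "producedBy", dept))

for edge in edges:
    print(edge)
```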

Step 4: Conduct a Proof of Concept – Add Knowledge to your Data Using a Graph Database

Because of their structure, knowledge graphs allow us to capture related data the way the human brain processes information through the lens of people, places, processes, and things. Knowledge graphs, backed by a graph database and a linked data store, provide the platform required for storing, reasoning, inferring, and using data with structure and context. This plays a fundamental role in providing the architecture and data models that enable machine learning (ML) and other AI capabilities such as making inferences to generate new insights and to drive more efficient and intelligent data and information management solutions.

Start small. Conduct a proof of concept or rapid prototype in a test environment based on the use cases selected and prioritized and the dataset or content source selected. This will give you the flexibility needed to iteratively validate the ontology model against real data/content, fine-tune the tagging of internal and external sources to enhance your knowledge graph, deliver a working proof of concept, and continue to demonstrate the benefits while showing progress quickly. Testing a knowledge graph model and a graph database within such a confined scope will enable your organization to gain perspective on value and complexity before investing big.
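One way to see those hidden relationships surface during a proof of concept is a simple traversal. The sketch below finds the shortest chain of relationships between two entities in a toy graph; the data is invented, and in a real PoC the equivalent query would run inside the chosen graph database.

```python
# A PoC-style traversal sketch: find how two entities connect through
# a toy graph. Invented data; in a real PoC the equivalent query would
# run inside the graph database itself.
from collections import deque

graph = {
    "jane":            ["sales_dashboard"],
    "sales_dashboard": ["jane", "omar"],
    "omar":            ["sales_dashboard", "etl-runbook"],
    "etl-runbook":     ["omar"],
}

def shortest_path(start: str, goal: str) -> list[str] | None:
    """Breadth-first search for the shortest relationship chain."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(shortest_path("jane", "etl-runbook"))
# ['jane', 'sales_dashboard', 'omar', 'etl-runbook']
```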

This approach will position you to adjust and incrementally add more use cases to reach a larger audience across functions. As you continue to enhance and expand the knowledge across your content and data, you are layering in the flexibility to add more advanced features and intuitive solutions, such as semantic search with natural language processing (NLP), chatbots, and voice assistants, getting your enterprise closer to a Google- or Amazon-like experience.

Ready for AI? Automate, Optimize, and Scale.

Core AI features, such as ML, NLP, predictive analytics, inference, etc., lend themselves to robust information and data management capabilities. There is a mutual relationship between quality content/data and AI. The cleaner and more optimized our data is, the easier it is for AI to leverage that data and, in turn, help the organization get the most value out of it. Within the context of information and data management, AI provides the organization with the most efficient and intelligent business applications and value, including:

  • Semantic search that provides flexible and faster access to your data through the ability to use natural language to query massive amounts of both unstructured and structured content. Leveraging auto-tagging, categorization, and clustering capabilities further enables continuous enhancement and governance of taxonomies/ontologies and knowledge graphs.
  • Discover hidden facts and relationships based on patterns and inferences that allow for large-scale analysis and identification of related topics and things.
  • Optimize data management and governance through machine-trained workflows, data quality checks, security, and tracking.

Organizations that approach large initiatives toward AI with small (one or two) use cases, and iteratively prototype to make adjustments, tend to deliver value incrementally and continue to garner support throughout. Achieving this organizational maturity also requires sustained efficiency and continuous demonstration of value in order to scale. As your organization looks to invest in a new and robust set of tools, the most fundamental evaluation question becomes ensuring the tool will be able to make extensive use of AI.

If you are exploring pragmatic ways to benefit from knowledge graphs and AI within your organization, we can bring proven experience and tested approaches to help you realize and embrace their value.


The post How to Build a Knowledge Graph in Four Steps: The Roadmap From Metadata to AI appeared first on Enterprise Knowledge.
