What is a Large Language Model (LLM)?

Note: The image accompanying this post was generated using Dall-E 3 (via ChatGPT).


In late November of 2022, artificial intelligence (AI) research and development company OpenAI released ChatGPT, an AI chatbot powered by a Large Language Model (LLM). In the following year, the world witnessed a meteoric rise in the usage of ChatGPT and other LLMs across a diverse array of industries and applications. However, what large language models actually are and what they are capable of is often misunderstood. In this blog, I will define LLMs, explore how they work, explain their strengths and weaknesses, and elaborate on a few of the most common LLM use cases for the enterprise.


So, what is a Large Language Model?

In short, a Large Language Model is an advanced AI model designed to perform Natural Language Processing (NLP) tasks, including interpreting, translating, predicting, and generating coherent, contextually relevant text. LLMs require extensive training on vast textual datasets containing trillions of words, drawn from sources like Wikipedia and GitHub, which teaches the model to recognize patterns in text. An LLM such as OpenAI’s GPT-4 isn’t doing any “reasoning” like a human does, at least not yet – it is merely generating output that fits the patterns it has learned through training. It can be thought of as making very sophisticated predictions about which words, in which context, go in what order.


How does a Large Language Model work? 

All LLMs operate by leveraging immense, layered networks of interconnected nodes that process and transmit information. The structure of the networks draws inspiration from the interconnectedness of the human brain’s network of neurons. Within this framework, LLMs use so-called transformer models – consisting of an encoder and a decoder – to turn input into output. 

In the process of handling a sequence of input text, a tokenizer algorithm first converts the text into a machine-readable format by breaking it down into small, discrete units called “tokens” for analysis; tokens are often whole words, subwords, or individual characters.

For example, the sentence “Hello, world!” can be tokenized into [“Hello”,  “,”,  “world”,  “!”]. 
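As a rough illustration, here is a minimal Python sketch of tokenization. It uses a toy rule (splitting on words and punctuation); real LLM tokenizers use learned subword vocabularies such as byte-pair encoding, so this is illustrative rather than representative of any particular model.

```python
import re

def tokenize(text):
    # Toy tokenizer: split into words and punctuation marks.
    # Production LLMs use subword tokenizers (e.g., byte-pair encoding).
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Hello, world!"))  # ['Hello', ',', 'world', '!']
```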


These tokens are then converted into numerical values known as embedding vectors, which is the format expected by the transformer model. However, because transformers can’t inherently understand the order of words, each embedding vector is combined with a positional encoding. This step ensures the order of the words is taken into account by the model.
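For a concrete picture, the sketch below implements one well-known scheme, the sinusoidal positional encoding from the original Transformer paper, and adds it to a toy embedding matrix. The dimensions are arbitrary toy values, and many modern LLMs use learned or rotary encodings instead, so treat this as illustrative only.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal scheme: even dimensions use sine, odd dimensions use cosine,
    # at wavelengths that vary geometrically with the dimension index.
    positions = np.arange(seq_len)[:, None]   # (seq_len, 1)
    dims = np.arange(d_model)[None, :]        # (1, d_model)
    angles = positions / np.power(10000, (2 * (dims // 2)) / d_model)
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])
    encoding[:, 1::2] = np.cos(angles[:, 1::2])
    return encoding

embeddings = np.random.rand(4, 8)                     # 4 tokens, 8-dim embeddings (toy sizes)
model_input = embeddings + positional_encoding(4, 8)  # word order is now encoded
```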

After the input text is tokenized and embedded, it is passed through the encoder to create attention vectors, which are numerical values that help the model determine the relevance and relationship of each token to the others in the input. This helps the LLM capture dependencies and relationships between tokens, giving it the ability to process the context of each token in the sequence.
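The computation behind attention vectors can be sketched as scaled dot-product attention: each token’s “query” is scored against every token’s “key” to weight the “values”. This minimal NumPy version omits the learned projection matrices, multiple heads, and masking that production transformers add, so it is a teaching sketch rather than a full implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) matrices derived from the token embeddings
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # relevance of every token to every other
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                            # context-aware representation of each token
```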

The attention vectors are then passed to the decoder, which produces output embeddings that are converted back into tokens. The decoder process continues until a “STOP” token is output by the transformer, indicating that no more output text should be generated. This process ensures that the generated output considers the relevant information from the input, maintaining coherence and context in the generated text. This is similar to how a human might receive a question, automatically identify its most important aspects, and give an appropriate response that addresses those aspects.
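Putting the pieces together, generation is a greatly simplified autoregressive loop: the model repeatedly predicts the next token and appends it to the sequence until the stop token appears. The sketch below assumes hypothetical model and tokenizer objects; predict_next, encode, decode, and stop_id are invented placeholder names, not any real library’s API.

```python
def generate(model, tokenizer, prompt, max_new_tokens=50):
    # Hypothetical objects: tokenizer.encode/decode map text to/from token ids,
    # and model.predict_next returns the id of the most likely next token.
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        next_id = model.predict_next(tokens)
        if next_id == tokenizer.stop_id:  # the "STOP" (end-of-sequence) token
            break
        tokens.append(next_id)
    return tokenizer.decode(tokens)
```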


Strengths

Large language models exhibit several strengths that businesses can capitalize on:

  • LLMs excel in advanced tasks that require complex NLP like text summarization, content generation, and translation, all of which demonstrate their high level of proficiency in intricate linguistic tasks and creative text manipulation. This enables them to generate human-like output, carry on long conversations regarding almost any topic, recall details from previous messages in the same context, and even be given specific instructions on how they should respond and react to input. 
  • Similarly, large language models learn rapidly and adapt to the context of a conversation without the need for changing the underlying model architecture. This means they quickly grasp concepts without requiring an extensive number of examples. Supplied with enough detail by a user, LLMs can provide support to that user in solving particular or niche problems without ever having been specifically trained to tackle those kinds of problems.
  • Beyond learning human languages, LLMs can also be trained to perform tasks like writing code, retrieving information, and classifying the sentiment of text, among others. Their adaptability extends to a wide array of use cases that can benefit the enterprise in numerous ways, including saving time, increasing efficiency, and enabling employees to work more effectively.
  • Multimodal LLMs can both break down and generate a variety of media content, including images and videos, with natural language prompts. These models have been trained on existing media to understand their components and then use this understanding to create new content or answer questions about visual content. For example, the image at the top of this blog was generated using Dall-E 3 with the prompt “Please design an image representing a large language model, apt for a professional blog post about LLMs, using mostly purple hues”. This prompt was purposefully vague to allow Dall-E 3 to creatively interpret what an LLM could be represented as.


Weaknesses

In spite of their strengths, LLMs have numerous weaknesses:

  • During training, LLMs will learn from whatever input they are given. This means that training on low quality input data will cause the LLM to generate low quality output content.  Businesses need to be strict with the management of the data that the model is learning from to avoid the garbage in, garbage out problem. Similarly, businesses should avoid training LLMs on content generated by LLMs, which can lead to irreversible defects in the model and further reduce the quality of the generated output.
  • During training, LLMs will ignore copyright, plagiarize written content, and ingest proprietary data if given access to that kind of content, which can raise concerns about potential copyright infringement issues.
  • The training process and operation of an LLM demand substantial computational resources, which not only limits their applicability to high-power, high-tech environments but also imposes considerable financial burdens on businesses seeking to develop their own models. Building, scaling, and maintaining LLMs can therefore be extremely costly and resource-intensive, and requires expertise in deep learning and transformer models, which poses a significant hurdle.
  • LLMs have a profound double-edged sword in their tendency to generate “hallucinations”. This means they sometimes produce outputs that are factually false or diverge from user intent, as they are only able to predict syntactically correct phrases without a comprehensive understanding of human meaning and truth. However, without hallucination, LLMs would not be able to creatively generate output, so businesses must weigh the cost of hallucinations against the creative potential of the LLM, and determine what level of risk they are willing to take.


LLM Use Cases for the Enterprise

Large language models have many applications that utilize their strengths. However, their weaknesses manifest across all use cases, so businesses must make considerations to prevent complications and mitigate risks. These are some of the most common use cases where we have employed LLMs:

Content generation:

  • LLMs can generate human-like content for articles, blogs, and other written materials. As such, they can act as a starting point for businesses to create and publish content. 
  • LLMs can assist in generating code based on natural language descriptions, aiding developers in their work and making programming more accessible to business-oriented, non-technical people (see the sketch below).
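As one hedged illustration of the code-generation use case, the snippet below asks a hosted LLM to draft a function from a natural language description using the OpenAI Python client. The model name and prompts are placeholders to adapt to your environment, and generated code should always be reviewed by a developer before use.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; substitute any chat model available to you
    messages=[
        {"role": "system", "content": "You write small, well-documented Python functions."},
        {"role": "user", "content": "Write a function that validates an email address."},
    ],
)
print(response.choices[0].message.content)
```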

Information Retrieval:

  • LLMs can improve search engine results by better understanding the linguistic meaning of user queries and generating more natural responses that pertain to what the user is actually searching for.
  • LLMs can extract information from large training datasets or knowledge bases to answer queries in an efficient, conversational style, improving access and understanding of organizational information.

Text Analysis:

  • LLMs can generate concise and coherent summaries of longer texts, making them valuable for businesses to quickly extract key information from articles, documents, or conversations.
  • LLMs can analyze text data to determine the sentiment behind it, which is useful for businesses to gauge customer opinions, as well as for social media monitoring and market research.
  • LLMs can be used to do customer and patient intakes, and to perform basic problem solving, in order to save employees time for dealing with more complicated issues.


Conclusion

In the past year, large language models have seen an explosion in adoption and innovation, and they aren’t going anywhere any time soon – ChatGPT alone reached 100 million active users in January 2023, and continues to see nearly 1.5 billion website visits per month. The enormous popularity of LLMs is supported by their obvious utility in interpreting, generating, and summarizing text, as well as their applications in a variety of technical and non-technical fields. However, LLMs come with downsides that cannot be brushed aside by any business seeking to use or create one. Due to their non-deterministic and emergent capabilities, businesses should prioritize working with experts in order to properly mitigate risks and capitalize on the strengths of a large language model.

Want to jumpstart your organization’s use of LLMs? Check out our Semantic LLM Accelerator and contact us at info@enterprise-knowledge.com for more information! 

Top Knowledge Management Use Cases (with Real World Examples)
Knowledge Management (KM) is presently experiencing a rebirth, with greater executive interest and organizational commitment. Driven by the post-Covid transition to hybrid and remote work, the employee churn during the great resignation, and the explosion of AI driven by knowledge graphs and large language models, the value that KM offers is better understood, and KM initiatives are being prioritized like never before.

At EK, we’ve worked tirelessly over the last decade to ensure KM is understood for the business value it offers and considered in terms of practical solutions and measurable results. We’ve found that defining use cases is one of the best ways to help organizations understand this value, and moreover, prioritize what really matters for them, their mission, and their stakeholders.

There is a lot of writing about KM outcomes, or features, but there is a surprising dearth of writing regarding Knowledge Management Use Cases. In this article, I’ve captured the top ten use cases for KM Transformations. It’s important to note that these aren’t enabling features or software like search or online discussions, nor are they expected outcomes or results like improved findability, knowledge sharing, or knowledge retention. Rather, these are the actual use cases, the reasons why people do KM and what it actually means to the organization’s operations and people.

[Infographic: Zach Wahl’s top ten use cases for knowledge management transformations. Six classics: knowledge sharing and retention, onboarding and learning, customer support, self-service, idea generation and innovation, and regulatory compliance. Four new ones: digital transformation, measurement and efficacy, content assembly and customization, and powering AI.]

Of the top ten Knowledge Management use cases, six are what I’d consider “classics,” in that they’ve been present as the leading use cases for much of the short history of KM. The other four are the cool new ones, which are presently driving the industry and will be top organizational priorities for the near future as well.

The following six are the “classic” KM use cases. They hold as much value today, if not more, as they always have. The solutions and technologies we can apply against these use cases have changed over the years, though the use cases themselves remain largely the same.


Use Case #1: Knowledge Sharing and Retention


Perhaps no use case is more central to KM than Knowledge Sharing and Retention. Organizations have consistently sought to retain the knowledge and expertise held within the minds of their employees. This tacit knowledge walks out the door with employees if not properly shared with others or transferred into an explicit format that may be captured and managed. KM for Knowledge Sharing and Retention will counteract this “brain drain,” ensuring that information is shared, used, and reused, thereby improving the collective knowledge of the organization and saving countless hours that otherwise would have been spent re-creating lost knowledge (or worse yet, repeating mistakes already made).

Real World Example:

We recently led a “Knowledge Transfer Menu” effort with a global development bank, which was concerned with their increasing attrition at all levels, recognizing their expertise and associated experience was walking out the door. Given the wide number of potential knowledge transfer and sharing techniques, we worked with them to develop a menu of different techniques, allowing them to quickly test a selection of each, ranging in scale, type, and complexity (none, limited, or high technology) to determine what would be most natural to their unique business and employees—to facilitate knowledge sharing either one-to-one, one-to-many, or as a larger community. As a result, the organization adopted a subset of the Knowledge Transfer techniques to establish enterprise wide, and we helped them implement a simple knowledge base to ensure this new knowledge would be managed and easily findable. 

Use Case #2: Onboarding and Learning


To be fair, onboarding and learning could easily be considered two separate use cases, but both are solved by many of the same solutions and tend to produce similar outcomes. The Onboarding and Learning use cases are all about providing the employee with everything they need to perform, from day one of their job through the entire employee journey. This is much more than employee self-service. The Onboarding and Learning use cases cover the delivery of knowledge at the point of need, connecting learners with teachers or experts, and guiding a learner to develop and extend their knowledge. Effective Onboarding and Learning will result in employee satisfaction, which further leads to higher employee retention.

Real World Example:

A large public agency was seeking to transform from a traditional to modern learning environment. With a highly distributed workforce, the agency needed an innovative means of helping their employees of all levels and tenures connect and learn from each other. We designed and implemented an advanced learning platform for them that integrated all elements of their learning and performance ecosystem so that any employee could intuitively find and discover classes, self-serve learning content, experts, and cohorts with which to engage. The result was a single place for employees to craft their learning journey and managers to track and manage these journeys, while also capturing new knowledge through communities of practice and learning cohorts.

Use Case #3: Customer Support


The Customer Support use case covers call centers, help desks, and the associated knowledge bases responding agents use to deliver concise, complete, and consistent customer service. This use case addresses common customer complaints that they were “bounced around” from person to person, given inconsistent guidance, had to repeat themselves, and were stuck in “on hold” purgatory. Just as the Onboarding and Learning use case results in employee satisfaction and retention, the Customer Support use case results in customer satisfaction and retention. Moreover, appropriate customer support can yield higher revenues and faster deal close times, depending on the industry.

Real World Example:

One of the largest insurance companies in the United States found that nearly half of their help desk calls were unresolved, requiring escalation or follow-up. Not surprisingly, less than 30% of their customers were satisfied with overall customer support. We helped to reengineer their customer support systems by designing a new tagging structure and taxonomy for their knowledge base, implementing new content governance for their knowledge base content, and creating new incentives and measurements for their agents to identify and improve lower-end knowledge base content. As a result, their Tier 1 call resolution improved to nearly 80%, with similar trends for overall customer satisfaction.

Use Case #4: Self-Service


This use case connects to the two preceding, in that it may be either employee self-service or customer self-service. As the name states, this use case is creating easy and intuitive mechanisms to get the right people the information they need, without having to make a call, send an email, or otherwise rely on others. Generationally, end users are increasingly wanting to get their questions answered or complete their desired action via self-service. This use case delivers the right knowledge, information, and data at the point of need, and when delivered properly, does so quickly and intuitively. Self-service can be a major cost saver for an organization, while also improving user satisfaction and delivering a more complete and more customized “answer.”

Real World Example:

One of the world’s largest employers recognized their host of employees was struggling to get fast answers to simple HR and payroll questions. As part of a 360-Degree Employee Satisfaction effort, we helped them redesign their employee self-service system, including the ability for employees to crowd-source questions and answers to a variety of questions, up-vote ideas and priorities for management, and independently track and complete simple changes to their benefits and employee data.

Use Case #5: Idea Generation and Innovation


In an increasingly distributed world, the opportunity for water cooler meetings and moments of professional kismet are waning. The Idea Generation and Innovation use case counteracts those trends, creating KM-driven opportunities to share knowledge, ideas, problems, and challenges, invoking the collective expertise of individuals to help. This use case isn’t just about coming up with the next “Flamin’ Hot Cheetos” scale of idea, but also about day-to-day problem solving, small ideas, and incremental wins that can collectively make a big difference for an organization and its performance.

Real World Example:

A leading software company dramatically shifted to a largely remote work environment following the pandemic. However, they recognized that this sudden and permanent shift would eliminate many of the natural opportunities for the “water cooler talk,” moments of happenstance collaboration, and whiteboard innovation that had previously helped them to thrive. We helped them envision and execute a complete menu of techniques to help them transition to remote work while continuing to collaborate and innovate. This included new traditions like virtual brown bags and topic-based online meetups, new tools for synchronous and asynchronous collaboration, and new processes to capture knowledge at the point of generation and deliver it at the point of need.

Use Case #6: Regulatory Compliance and Preparedness


This use case isn’t about addressing the lack of knowledge, information, or data within an organization; rather, it’s about addressing the proliferation of bad knowledge, information, and data. Many organizations have a poor handle on their content, maintaining years of old, obsolete, and incorrect legacy content, which exposes them to undue risk. For some organizations, this presents a regulatory risk, which if unaddressed can result in millions of dollars of fines, lawsuits, and accidents. Successfully implementing this use case not only addresses these types of risks and costs, but it also vastly improves the findability and manageability of content by removing all the potential wrong answers and dead ends.

Real World Example:

During an audit, a global manufacturing company in a highly regulated industry identified a major risk to their operations, finding scores of obsolete, outdated, and incorrect information hosted on their servers. We helped them engineer a content audit and cleanup process to confront this risk, which included a blend of system analytics, automated semantic analysis, and targeted expert reviews to eliminate their “bad” content and also enhance and promote “good” content to new systems so that it could be better leveraged. This process not only removed or archived over 80% of their total content, it also highlighted hidden gems of lessons learned and thought pieces that helped the organization maintain their expertise.


The final four KM use cases are the cool new ones, which have gained great traction only recently and promise high value and organizational prioritization for years to come. These are the ones that are getting the largest budgets and most attention today.

Use Case #7: Getting the Most out of a Digital Transformation


The previous decade was marked by massive investments in digital transformations, with organizations seeking to fundamentally update their processes and systems with streamlined technologies and more “online” modes of work. Millions were invested in these transformations, but few organizations reported the results they were expecting. The missing piece for many was KM. Though systems and processes had been updated, insufficient attention was paid to the core content and means of harnessing organizational expertise. This use case focuses on ameliorating those omissions by helping the state of an organization’s knowledge, information, and data “catch up” with the digital transformation.

Real World Example:

A global pharmaceutical company invested millions of dollars in a digital transformation—modernizing, consolidating, and integrating their assorted document, content, and data management systems and implementing a leading enterprise search product. As the multi-year transformation was well underway, however, the organization was not realizing the return on investment they’d anticipated. We conducted an assessment of the organization and helped them pinpoint the critical points of knowledge, information, and data management that would help them realize true business value. These included design and implementation of taxonomy/ontology with auto-tagging of content, improved knowledge capture workflows, design and implementation of search hit types, and a knowledge retention measurement plan. Within six months, they were capturing the returns they’d initially expected from the overall digital transformation.

Use Case #8: Measurement and Efficacy


Keying off of previous use cases, with millions invested in digital transformations, more advanced delivery systems and interfaces, and greater customization, organizations are asking whether they’re getting the promised returns and desired impacts to their business. Though always an important question for a business to answer, in the face of a recession, this becomes all the more critical. This use case seeks to answer that question by delivering detailed insights into how people are using an organization’s content and what they’re doing as a result. Are people learning as they should? Are they taking the appropriate actions as a result? This use case delivers a comprehensive view not just into usage, but impacts, allowing organizations to make the right decisions as a result.

Real World Example:

A global financial organization had invested heavily in the creation of an array of new multichannel learning and performance content, as well as improved analytics to track its usage. They had the new content and data, but they lacked the insights to understand it and make decisions as a result of it. We worked with them to go beyond measurement of usage to measurement of efficacy, plotting out the desired impacts and outcomes of each learning topic, then creating measures of performance for each. In cases where the outcomes weren’t reached, we developed processes to engage internal stakeholders and external subject matter experts to help address the underlying issues with the learning content, creating a consistent and positive learning loop to help the organization’s learning environment evolve. We also identified opportunities to use this same process to identify gaps in organizational knowledge and leveraged the same approach to proactively fill those gaps.

Use Case #9: Content Assembly and Customization

The Content Assembly and Customization use case addresses one of the long-standing issues with many organizations’ content. Too frequently, organizations that have focused on systems and processes have overlooked the reengineering of their content to be more consistent, readable, and customized to the user. This use case focuses on deconstructing content into more structured components that may then be reassembled in more intuitive and personalized ways. There are many potential applications and audiences for this, but the binding concept is that every individual will get just that which applies to them in a clear and concise way.

Real World Example:

A big box retailer with highly complex logistics and a massive fluctuation of employees was constantly struggling to keep key documents like employee handbooks, required safety documents, learning materials, and store-specific guidelines up to date. We worked with the organization to deconstruct this content and then build a content assembly engine to automatically create customized documents for each individual based on their home store, geography, role, and dozens of other factors. This drastically reduced human error and administrative burden while delivering a more customized experience to the individual stakeholders.

Use Case #10: Powering AI


The hottest new use case is no doubt around AI. Nearly every conversation seems to involve an organization’s vision for its own ChatGPT, bespoke large language models, and organizational Artificial Intelligence. The questions tend to be the same in many of these organizations: How do we make AI happen? This use case is the enabler for organizational AI, creating the structured content, knowledge maps and ontologies, and integrated systems that will truly make AI real for these organizations in their production environments across the enterprise.

Real World Example:

For many years, we’ve worked with a global development bank to progress towards Knowledge Management maturity. Over the years, we helped them clean up their content, design and apply an enterprise taxonomy, improve their knowledge capture techniques (for consistency and completeness), and implement more consistent information architectures. Each of these KM initiatives yielded its own value for the organization, but the culmination of the work was the creation of a knowledge graph powered by an ontology and connected to the key content and data stores across 12 different applications. This solution allowed them to realize an array of Artificial Intelligence capabilities, including an intelligent chatbot, a recommendation engine, and an application to identify at-risk knowledge topics in the organization, triggering prioritized knowledge transfer and capture techniques.


At this moment, the organizational understanding of KM continues to increase, executives show a growing willingness to support and invest in it, and the associated technologies continue to progress to help KM Transformations become a reality. These use cases can all be faster, easier, better, and more tangible than in the past, making the overall opportunity for meaningful KM Transformations as high as it has ever been.

For more details and use cases visit Enterprise Knowledge.

Top Graph Use Cases and Enterprise Applications (with Real World Examples)
Graph solutions have gained momentum due to their wide-ranging applications across multiple industries. Gartner predicts that graph technologies will be used in 80% of data and analytics innovations by 2025, up from 10% in 2021. Several factors are driving the adoption of knowledge graphs: the increasing amount of data being generated and collected, the need to make sense of it, and its use in artificial intelligence and machine learning, which benefit from the structured data and context that knowledge graphs provide.

For many organizations, however, the question remains, “Is it the right solution for us?” We get this question regularly. Here, I will draw upon our own experience from client projects and lessons learned to provide a selection of optimal use cases for knowledge graphs and semantic solutions along with real world examples of their applications.


Use Case #1: Customer 360 / Enterprise 360


Customer data is typically spread across multiple applications, departments, and regions. Each team and system needs to keep diverse sets of data about their customers in order to play their specific role – inadvertently leading to siloed experiences. A graph solution allows us to create a connection layer that facilitates consistent aggregation and ingestion of diverse information types from sources internal or external to the organization. Graphs boost knowledge discovery and efficient data-driven analytics, helping a company understand its relationships with customers and personalize marketing, products, and services.

Real World Examples:

Customer 360 for a Commercial Real-Estate Company

“We lost a multi-million-dollar value customer after one of our regional sales reps offered the customer a property that the customer already owned. How do we get better with understanding our customers? We would like to be able to quickly answer questions like:

  • Who is our repeat customer in North America over the last 10 years?”

Customer 360 for a Global Digital Marketing and Technology Firm

“Our customer databases contain records for more than 2 billion distinct consumers (supposed to be reflecting an estimated 240 million real world individuals) – we need to understand how many versions of ‘Customer A’ we have in order to integrate the intelligence gathered from different data sources to fully understand each customer.”

Solution Outcomes: Lead generation and sales cycles are improved through faster access to content and improved customer intelligence (and the ability to customize materials); a 1% decrease in time spent searching for customer information by sales reps resulted in $6.24M in cost savings annually. Increased awareness of, and ability to leverage, customer connections within these companies helps foster positive customer relationships.

Use Case #2: Content Personalization

The next critical step after understanding customers is to personalize and recommend relevant content to them. With the growing size of data and the dropping attention spans of online users, digital personalization has become one of the top priorities for companies’ business models. Especially with third-party cookies being phased out, companies need innovative ways to understand and target their online customers with relevant and personalized content. Graph analytics provide a meaningful way to aggregate information about a customer and create relationships with your solutions and services, helping determine what information is right to share with a customer.

Real World Examples:

Customer Journey Map for a Healthcare Training and Information Provider

“We want to understand a patient’s journey to serve the next best content and information using the right channel and cadence.”

“We want to deliver tailored training content and course recommendations based on our audience and their setting so that we can connect users with the exact learning content that would help them better master key competencies.”

Solution Outcomes: A semantic recommendation service that beats accuracy benchmarks and replaces manual content-aggregation processes, supporting higher-quality, more advanced, and targeted recommendations with clear reasons. Rich metadata and semantic modeling continue to drive the matching of 50K training materials to specific curricula, powering new, data-driven, audience-based marketing efforts that demonstrate how the recommender service is achieving increased engagement and performance from over 2.3 million users.

Use Case #3: Supply Chain and Environmental Social Governance (ESG)

Having a plan for ESG is no longer an option. Many organizations now have a goal to establish a standardized, central platform to get insights on environmental impacts associated with their supply chain processes. However, this information is typically stored in disparate locations, often hidden within departmental documents or applications. Additionally, there is usually no standardized vocabulary used across different industries, leading to inconsistent understandings of key business and supply chain concepts. Graphs reconcile such data, continuously crawled from diverse sources, to support interactive queries and provide a graphic representation or model of the elements within the supply chain, aiding in pathfinding and the ability to semantically enrich complex machine learning (ML) algorithms and decision making.

Real World Examples:

Aggregating Data to Reduce Carbon Footprints of Supply Chain for a Global Consultancy 

“We are at a pivotal time in ESG where our clients are coming to us to answer questions like:

  • What’s the best material we can use to package Product x?
  • What shipping route is the most fuel efficient?
  • Who was my most ESG compliant plant in 2020?”

Solution Outcomes: The graph embedded machine-readable relationships between key supply chain and ESG concepts in a way that does not require tables and complex joins, enabling the firm to leverage its extensive knowledge base of methods for reducing environmental impact and guiding it in building a centralized database of this knowledge. Consultants can leverage certified insights that align with industry standards to provide clients with a strategy that can generate profit while supporting sustainability missions, detect patterns, and provide market intelligence to their clients.

Use Case #4: Financial Risk Detection and Prediction

The financial industry is made up of a network of markets and transactions. A risk issue in one financial institution could result in a domino effect for many. As such, most large financial organizations have moved their data to a data lake or a data warehouse to understand and manage financial risk in one place. Yet risk analysis continues to suffer from the lack of a scalable way of understanding how data is interrelated. A graph or network enables institutions to model and visualize these connections as a collection of nodes and edges that specifies the exact link between certain financial concepts and entities. Graph-based solutions further leverage the relationships among the entities involved to create a semantically enhanced machine learning model.

Real World Examples:

Financial Risk Reporting for a Federal Financial Regulator 

“Data scientists and economists were finding it difficult to make efficient use of siloed data sources in order to easily access, interpret, and support regulatory functions, including answering questions like:

  • What are the compliance forms and reporting requirements for Bank X?
  • Which financial institutions have filed similar risk compliance issues?
  • Which financial institutions are behind on their risk reporting and filings this year?
  • What’s the revision history and the corresponding policies and procedures for a given regulation?”

Real-Time Fraud Detection for a Multinational e-Commerce Company

“We want to tap into our extensive historic listing data to understand the relationship between packages being rerouted, listings, and merchants to ultimately detect shipping scams so that we can minimize the fraud risk for online merchants from ‘unpaid’ and fraudulent purchases on their listing items.”

Solution Outcomes: Graph data enables the exploration, linking, and understanding of entities such as products, categories, customers, and orders, supporting fraud pattern detection for the organization’s risk engine algorithm and ultimately resulting in:

  • Real-time fraud detection: fraud patterns that risk engines can onboard and act on in real time.
  • Non-disruptive fraud prevention: helping the company identify and stop fraudulent transactions before they take place without impacting legitimate business transactions.

Use Case #5: Mergers and Acquisitions

Many factors can impact the success of mergers and acquisitions (M&A) and their integration, as merging with or acquiring new companies inevitably brings another ecosystem of applications, operations, data/content, and vernacular. The process of knowledge transfer and the challenge of enabling strategic alignment of processes and data is a rising concern for the already delicate success of M&As. For a knowledge graph, data relationships are first-class citizens. Thus, graphs offer ways to semantically harmonize, store, and connect similar or related organizational concepts. The approach further represents information in the way people speak, using taxonomies and ontological schemas that allow for storing data with organizational context.

Real World Examples:

Product/Solution Alignment for the World’s Leading Provider of Content Management and Intellectual Property Services

“We have gone through multiple M&As over the past 5 years. We are looking for a way to connect and standardize the data we have across 40 systems that have some overlapping applications, data, and users.”

“On our e-commerce platforms, it’s not clear what our specific products or solutions are. We are losing business due to our inability to consistently name and describe our solution offerings across the organization. How can we align our terminology on our products and solutions company wide?”

Solution Outcome: A graph solution allows for explicitly capturing and aligning the knowledge and data models by providing a comprehensive and structured representation of entities and their relationships. This aids in the due diligence process by allowing for the quick identification and analysis of key stakeholders, competitors, and potential synergies. Additionally, the graph serves as a useful tool for gaining a better understanding of the complexities involved in mergers, helps prevent duplicated work and the loss of information and intelligence across the merged organizations, and enables context-based decision making.

Use Case #6: Data Quality and Governance

The size and complexity of data sources and datasets is making traditional data dictionaries and Entity Relationship Diagrams (ERD) inadequate. Knowledge graphs provide structure for all types of data – either serving as a semantic layer or as a domain mapping solution – and enable the creation of multilateral relations across data sources, explicitly capturing how the data is being used and what changes are being made to it. As such, knowledge graphs support data governance and quality inspection by providing a contextual understanding of enterprise data: where it is, who can access it and where, and how it will be shared or changed over time. Data governance strategies that leverage knowledge graph solutions have accordingly increased data accessibility and improved data quality and observability at scale.

Real World Examples:

Graph for Data Quality at a Global Digital Marketer

“Our enterprise has over 20 child organizations that:

  • Lack transparency over which common data sets were available for use,
  • Did not understand the quality of the data available,
  • Have drastically different definitions of key terms, and 
  • Use a database of consumer data containing over 10 billion records, with dirty data and millions of duplicates.”

Solution Outcome: The graph creation and mapping process alone reduced the record count from ~10 billion to ~4 billion, with matching algorithms that optimized the QA process, resulting in 80% record deduplication with 95% accuracy.

Use Case #7: Data as a Product (and Data Interoperability)

Every enterprise data strategy strives to facilitate the flexibility that will allow data to move between current and future systems, minimize the limitations of proprietary solutions, and avoid vendor lock-in. To do so, data needs to be created based on shared terminology, web standards, and security protocols. The Financial Industry Business Ontology (FIBO) from the EDM Council is an example of a conceptual graph model that provides common vocabulary and meaning for key concepts and terms in the financial industry, and a way to align and harmonize data irrespective of its source. As a standards-based data model, graphs allow for consistent ingestion of diverse information types from sources internal or external to the organization (e.g. Linked Data, subscriptions, purchased datasets, etc.), ultimately allowing organizations to handle large data coming from various sources, including public ones, and to boost knowledge discovery, industry compliance, and efficient data-driven analytics.

Real World Examples:

Data-as-a-Product for a Global Veterinary Company that Provides a Comprehensive Suite of Products, Software, and Services for Veterinary Professionals

“Most of our highly interrelated data is stuck behind 4-5 legacy data platforms and it’s hard to unify and understand our data which is slowing down our engineering processes. Ultimately, we need a way to model and describe business processes and data flow between individual veterinary practices and enrich and align their data with industry standards. This will allow us to normalize services, improve efficiency and create the ability to report on the data across practices as well as trends within a specific practice.”

Solution Outcome: A taxonomy/ontology was used as a schema to generate the graph and to describe the key types of ‘things’ vet partners were interested in and how they relate to each other. This ensures the use of a common vocabulary across all veterinary practices submitting data, resulting in:

  • Automation of data normalization,
  • Identification of potential drug targets and understanding the relationships between different molecules, and
  • Enablement of the company to provide the ontological data model as a product and a shareable industry standard

Use Case #8: Semantic Search

“Search doesn’t work” is a common sentiment at organizations that leverage only keywords to determine what search results should look like. Semantic search, at its core, is search that provides results based on context and meaning. Search relevance – a search engine’s ability to return results matching user intent – isn’t possible without semantic understanding. Knowledge graphs create a machine-readable structure that explicitly captures context, allowing search engines to understand concepts, entities, and the relationships between them.

Today, many of the search engines we use such as Google, Amazon, Airbnb, etc., all leverage multiple knowledge graphs, along with natural language processing (NLP) and machine learning (ML) to go beyond basic keyword-based searching. Understanding semantic search is becoming fundamental to providing a good search experience that’s rooted in a deep understanding of users and ultimately driving the intended digital experience that garners trust and adoption (be it knowledge transfer, enterprise learning, employee/customer retention, or increased sales).
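To make this concrete, here is a minimal sketch using Python’s rdflib of how explicitly captured relationships support questions that keyword matching cannot answer. The namespace, entities, and properties are invented for the example, not drawn from any client solution.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("https://example.com/")
g = Graph()

# Explicitly captured relationships: an expert, a project, and a material
g.add((EX.alice, RDF.type, EX.Expert))
g.add((EX.alice, EX.workedOn, EX.projectX))
g.add((EX.projectX, EX.usedMaterial, EX.epoxyPaint))

# "Which expert worked on a project that used epoxy paint?" -- a
# relationship-based question a pure keyword engine cannot answer directly.
results = g.query("""
    PREFIX ex: <https://example.com/>
    SELECT ?person WHERE {
        ?person a ex:Expert ;
                ex:workedOn ?project .
        ?project ex:usedMaterial ex:epoxyPaint .
    }
""")
for row in results:
    print(row.person)  # https://example.com/alice
```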

Real World Examples:

Expert Finder for a Federal Engineering Research Institute 

“We have a retiring workforce and are facing challenges with brain drain. We would like to be able to get quick answers to questions like:

  • What type of paint did we use to manufacture this engineering part in 1956?”

Solution Outcomes: A graph model enables browsing and discovery of previously uncaptured relationships between people, roles, projects, organizations, and engineering materials, which can be aggregated and returned in search results. This provides a unified view of institutional information, reducing the time to find an expert and project information from 3-4 weeks to 5-10 minutes.

Use Case #9: Context and Reasoning for AI and ML

Most enterprise AI projects stall due to the lack of a strategy for getting data and knowledge. AI efforts typically start with data scientists being hired to explore and figure out what’s in the data. They often get stuck after some exploration with fundamental questions like: What problem am I solving? How do I know this is good training data? This leads to mistakes in the algorithms, bad AI errors, and ultimately a lack of trust and the abandonment of AI efforts. Data on its own does not explain itself or its journey; data is only valuable in the context of what it means to end users. Knowledge graphs provide ML and AI with a knowledge modeling approach that accelerates the data exploration, connection, and feature extraction process and provides automated, context-based data classification during data preparation for AI and ML.

Real World Examples:

A Semantic Recommendation Service for a Scientific Products and Software Services Supplier

“We need to improve our ML algorithms to automate the aggregation of products and related marketing and manuals, videos, etc. to make personalized content recommendations to our customers investing in our products. This is currently a manual process that requires significant time investment and resources from Marketing, Products, IT. This is becoming business critical for us to manage at a global scale.”

Solution Outcomes: The graph provides a comprehensive and organized view of data, helping improve the performance and explainability of models and automating several tasks. Specifically, the graph supports:

  • Data integration/preparation: integrating and organizing data from various sources, such as marketing content platforms and the Product Information Management (PIM) application, making it easier for ML and AI models to access and understand the data by encoding context through metadata and taxonomies.
  • Automation: supporting the automation of tasks such as data annotation, curation, and pre-processing, which helps save time and resources.
  • Explanation: providing a way to understand and explain the decisions made by ML and AI models, increasing trust and transparency.
  • Reasoning: performing reasoning and inference over the graph, which helps the ML and AI algorithms make more accurate content recommendations.
  • Personalization: using the knowledge graph, the AI extracts users’ preferences and behavior to provide personalized services for a given product.

For more details and use cases visit Enterprise Knowledge.

Data Catalog Evaluation Criteria
Data catalogs have risen in adoption and popularity in the past several years, and it’s no coincidence as to why. The amount of data, and therefore metadata, is exploding at a rapid pace and will certainly not slow down anytime soon; it is difficult to manage and make sense of it all, which pushes the need for a cloud solution that creates a source of truth for data and information. Moreover, many organizations are not sure what the best use of all this data is for their businesses. There are many data catalog vendors out there, all seemingly with the same message – that they are the right choice for you – but that isn’t always the case. Choosing the right data catalog for your business depends on several criteria. Before looking at vendors and selection criteria, let’s narrow down what is important for your data catalog solution to have.

Know Your Use Cases and Users

Before delving into what criteria and vendor you want for your data catalog, thoroughly consider the Use Cases and Users of your business, because they are the main drivers of getting the most efficient use of your data catalog solution.

Use Cases: Consider the root problem that led your business to decide they need a data catalog solution. Beyond the fact that you have siloed data sources that you want to bring together in one centralized location, what are the true needs behind this? Are you trying to enable discovery, governance, data quality, analytics and/or delivery of your data assets? While all data catalog vendors share the common goal of merging your siloed data sources, each vendor will have a tailored functionality that answers one or more of the previous questions.

Users: Who will be accessing your data catalog? Your users should align with your use cases, and knowing who they are will help you focus on the most pertinent criteria for your data catalog. Do you need a platform for data scientists and engineers to build and monitor ETL processes? Are business users using the data catalog as a go-to discovery platform for insights and answers? Some example users of your data catalog might be:

  • Casual Users: Conduct broad searches and perform data discovery.
  • Data Stewards: Make governance decisions about data within their domain.
  • Data Analysts: Analyze data sets to generate insights and trends.
  • Data Architects/Engineers: Build data transformation pipelines (ETL).
  • System Administrators: Monitor system performance, use and other metrics.
  • Mission Enablers: Transform data and information into insights within analysis and reports to support objectives.

Selection Criteria

In the previous section, I listed some potential use cases and users your organization may be focused on, depending on the root cause of your need for a data catalog. Let’s dive deeper into the six criteria you should prioritize when evaluating your data catalog solution.

Availability & Discovery

To maximize the value of your data, you need to understand what you have and how it relates to other data. Increased availability means catalog users spend less time looking for data, reducing time to insight and analysis. Discovery allows your data professionals greater creativity and innovation with the data and metadata within your infrastructure, making your business more efficient. For example, a client I am supporting with a data catalog implementation needs their casual end users to be able to search for keywords and documents from separate databases and see all related results in one place, reducing time spent searching through multiple databases for the same information.

Interoperability

Interoperability pertains to the data catalog’s ability to integrate with your siloed information platforms and aggregate them into one centralized location. Data catalog vendors do not serve every database, data warehouse, or data lake on the market; rather, they often target one or a few particular business software suites. Integration compatibility across your current environment is necessary to maximize the user experience, and indeed to make the data catalog usable at all. In addition to considering system interoperability, evaluate the data interoperability of the catalog. I recommend using a data catalog that will store and relate your data using graphs and Semantic Web standards, which help transform unstructured and semi-structured data at scale into meaningful, human-readable relationships. Before choosing your catalog, assess the ease of configuring and linking the data catalog to your current environment. For example, if your environment spans multiple data storage providers such as AWS, Google, or Microsoft, it’s important that your data catalog can aggregate information from all mission-critical sources.

Governance

Businesses wrap their data in complicated security processes and rules, typically enforced by a specialized data governance team. These security processes and rules are enforced with a top-down approach that slows down your work. Modern data frameworks highlight the need for governance to be a bottom-up approach that reduces bottlenecks in discovery and analysis. Choose a data catalog that provides governance features prioritizing catalog setup, data quality, data access, and end-to-end data lineage. A few key governance features to consider are data ownership, curation, workflow management, and policy/usability controls. These features streamline and consolidate efforts to provide proper data access and management for users, with an easy-to-use interface that spans all data within the catalog. The right data catalog solution for your business will contain and specialize in the governance features needed by your user personas, such as allowing system administrators to control data access for users based on role, responsibility, and sensitivity. For more information regarding metadata governance, check out my colleague’s post on the Best Practices for Successful Metadata Governance.

Sub-header: "Analytics & Reporting '

Analytics and reporting pertain to the ability to develop, automate, and deliver analytical summaries and reports about your data. Internally or through integration, your data catalog needs to expand beyond being a centralized repository for your assets and provide analytical insights about how your data is being consumed and what business outcomes it is helping to drive. Insights of interest to many organizations include which datasets are most popular, which users consume particular datasets, and the overall quality of the data contained within the catalog. The most sought-after insight I see in client implementations is data usage by user type: analyzing which users consume particular datasets to better understand which data has the most business impact.
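To make the usage-by-user-type insight concrete, here is a minimal sketch in Python with pandas over an entirely hypothetical access-log export; real catalogs surface this through their own reporting features or APIs.

    import pandas as pd

    # Hypothetical access-log export from a data catalog.
    log = pd.DataFrame({
        "user_type": ["analyst", "analyst", "engineer", "executive"],
        "dataset":   ["sales_q3", "sales_q3", "sensor_raw", "sales_q3"],
    })

    # Which user types consume which datasets, and how often?
    usage = log.groupby(["user_type", "dataset"]).size().rename("views")
    print(usage.sort_values(ascending=False))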

Sub-header: "Metadata Management"

Metadata often outlasts the data it describes, persisting after the data is deprecated, replaced, or deleted. Some of the key components of metadata management are availability, quality, lineage, and licensing.

  • Availability: Metadata needs to be stored where it can be accessed, indexed, and discovered in a timely manner.
  • Quality: Metadata needs to have consistency in its quality so that the consumers of the data know it can be trusted.
  • Historical Lineage: Metadata needs to be kept over time to be able to track data curation and deprecation.
  • Proper Licensing: Metadata needs to contain proper licensing information to ensure proper use by the appropriate users.

Depending on your use cases and personas, some of these components will take priority over others. Ensure that your data catalog contains, collects, and analyzes the metadata your business needs. During data catalog implementations, one feature I notice clients usually need is data lineage; if historical lineage is a must-have for you, that requirement alone will help narrow your data catalog search.
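As a rough sketch of how these four components might be tracked, the following hypothetical Python dataclass reports which components a metadata record is missing so they can be prioritized per use case; the field names are illustrative, not taken from any catalog product.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class MetadataRecord:
        """Hypothetical metadata record covering the four key components."""
        location: Optional[str] = None   # availability: where it can be accessed
        quality_checked: bool = False    # quality: has it passed review?
        lineage: List[str] = field(default_factory=list)  # historical lineage
        license: Optional[str] = None    # proper licensing information

        def gaps(self) -> List[str]:
            """List the components this record is missing."""
            missing = []
            if not self.location:
                missing.append("availability")
            if not self.quality_checked:
                missing.append("quality")
            if not self.lineage:
                missing.append("historical lineage")
            if not self.license:
                missing.append("proper licensing")
            return missing

    record = MetadataRecord(location="s3://warehouse/orders", license="internal-only")
    print(record.gaps())  # ['quality', 'historical lineage']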

Sub-header: "Enterprise Scale"

Enterprise scale is the capability for widespread use across multiple organizations, domains, and projects. Your data catalog will need to scale vertically with the amount of data that is ingested, as well as horizontally to continually serve new business ventures within your roadmap. Evaluate how you foresee your data catalog growing in the coming years. Vertical scaling reflects a need to continually add more data to the catalog, whereas horizontal scaling reflects a need to spread the reach of your data catalog to more users.

Visual diagram comparing vertical vs. horizontal scaling

Conclusion

Now that you have an idea of the criteria that matter most when selecting your data catalog vendor, it’s time to explore your options further. Take advantage of demos offered by data catalog vendors to get a feel for which catalogs fit your use cases and users. Carefully consider the pros and cons of each vendor’s platform and how it can meet your business goals. If a data catalog is the right fit for your business but you’re still not sure which one is right for you, reach out to us at Enterprise Knowledge and we can help you evaluate your use cases and recommend the right data catalog solution for you!

 

The post Data Catalog Evaluation Criteria appeared first on Enterprise Knowledge.

]]>
Knowledge Graph Use Cases are Priceless https://enterprise-knowledge.com/knowledge-graph-use-cases-are-priceless/ Wed, 30 Nov 2022 15:48:47 +0000 https://enterprise-knowledge.com/?p=16878 At Knowledge Graph Forum 2022, Lulit Tesfaye, Partner and Division Director, and Sara Nash, Senior Consultant, presented on the importance of establishing valuable and actionable use cases for knowledge graph efforts. The talk was on September 29, 2022 in New … Continue reading

The post Knowledge Graph Use Cases are Priceless appeared first on Enterprise Knowledge.

]]>
At Knowledge Graph Forum 2022, Lulit Tesfaye, Partner and Division Director, and Sara Nash, Senior Consultant, presented on the importance of establishing valuable and actionable use cases for knowledge graph efforts. The talk was on September 29, 2022 in New York City. 

Tesfaye and Nash drew on lessons learned from several knowledge graph development efforts to define how to diagnose a bad use case, and outlined the impact bad use cases have on initiatives – including strained relationships with stakeholders, time spent reworking priorities, and team turnover. They also shared guidance on how to navigate these scenarios and provided a checklist for assessing a strong use case.

The post Knowledge Graph Use Cases are Priceless appeared first on Enterprise Knowledge.

]]>
Elevating Your Point Solution to an Enterprise Knowledge Graph https://enterprise-knowledge.com/elevating-your-point-solution-to-an-enterprise-knowledge-graph/ Wed, 16 Nov 2022 16:08:39 +0000 https://enterprise-knowledge.com/?p=16825 I am fortunate to be able to speak with many vendors in the Graph space, as well as company executives and leaders in IT and KM departments around the world. So many of these people are excited about the power … Continue reading

The post Elevating Your Point Solution to an Enterprise Knowledge Graph appeared first on Enterprise Knowledge.

]]>
I am fortunate to be able to speak with many vendors in the Graph space, as well as company executives and leaders in IT and KM departments around the world. So many of these people are excited about the power of knowledge graphs and the graph databases that power them. They want to know how to turn their point solution into an enterprise-wide knowledge graph powering AI solutions and solving critical problems for their clients or their companies. I have answered this question enough times that I thought I would share the answer in a blog post for others to learn from.

Knowledge graphs are new and exciting tools. They provide a different way of managing information and can be used to solve a wide range of problems. Early adopters of this technology typically start with a small, targeted solution to “try it out.” This is a smart way to learn about any new technology, but all too often the project stops at a point solution or becomes pigeonholed as solving one problem when it could be used to solve so many more. The organizations that can grow and expand their graph solution have three things in common:

  • A backlog of use cases,
  • An enterprise ontology, and
  • Marketing and change management.

Knowledge graphs can solve many different types of problems. They can be recommendation engines, search enhancers, AI engines, data fabrics, or knowledge portals. That first solution that an organization picks only does one of these things, and it may also be targeted to just one department or one problem. This is a great way to start, but it can also lead to a stovepipe solution that misses some of the real power of graphs. 

When we start knowledge graph projects with new clients, we always run a workshop with business users from across the organization. During this workshop, we share examples of what can be done with knowledge graphs and help them identify a backlog of use cases that their new knowledge graph can solve. This approach creates excitement for the new technology and gives the project team and the business a vision for how to add to what was built as part of the first solution. Once the first solution is effectively launched, the organization has a roadmap for what is next. If you have already launched your solution and do not have a backlog of use cases, that is okay. You can host a graph workshop at any time to create a list of the next projects. The most important thing is to get that backlog in place and begin to share it with your leadership team so that they can budget for the next project.

The structure of a graph is defined by an ontology. Think of an ontology as a model describing the information assets of the business and how they fit together. Graph databases are easy to change, so organizations can get started with simple knowledge graphs that solve targeted problems without an ontologist. The problem is that the resulting solution will be designed around a specific problem rather than aligned with the business as a whole. A good ontologist will design a model that both solves the initial problem being addressed and aligns with the larger business model of the organization. For example, a graph-enhanced search at a manufacturing company may have products, customers, factories, parts, employees, and designs. The search could be augmented with a simple knowledge graph that describes parts. An ontologist would use this opportunity to model the relationships of all of the organization’s entities up front. This more inclusive approach would allow for a wider range of search results and could serve as the baseline for a number of other projects. This same graph could fuel a recommendation service or chatbot for customers. It could also be used as the map for data elements to create a data fabric that simplifies the way people access data within the organization. One graph, properly designed, can easily expand to become the enterprise backbone for a number of different enterprise-centric applications.
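As a hedged illustration of modeling the business up front, here is a minimal rdflib sketch in Python; the manufacturer’s namespace, classes, and properties are hypothetical.

    from rdflib import Graph, Namespace, RDF, RDFS

    MFG = Namespace("http://example.com/mfg/")  # hypothetical namespace
    g = Graph()

    # Model all core entity types up front, not just what search needs today.
    for cls in ("Product", "Customer", "Factory", "Part", "Employee", "Design"):
        g.add((MFG[cls], RDF.type, RDFS.Class))

    # Relationships that later use cases (recommendations, data fabric) reuse.
    g.add((MFG.madeOf, RDFS.domain, MFG.Product))
    g.add((MFG.madeOf, RDFS.range, MFG.Part))
    g.add((MFG.producedAt, RDFS.domain, MFG.Product))
    g.add((MFG.producedAt, RDFS.range, MFG.Factory))

The search feature only needs parts, but because products, factories, and customers are already in the model, a later recommendation or data fabric project starts from the same graph instead of a redesign.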

Building a backlog of use cases and creating a proper ontology helps ensure that there is a framework and plan to grow. The final challenge in turning a point solution into an enterprise knowledge graph has to do with marketing the solution. Knowledge graphs and graph databases are still new, and the number of things they can do is very broad (see Using Knowledge Graph Data Models to Solve Real Business Problems). As a result, executives often do not know what to do with knowledge graphs. It is important to set success criteria for your point solution and regularly communicate the value it adds to the business. This brings attention to the solution and opens the door for discussions about expanding the knowledge graph. Once you have executives’ attention, educate them on what knowledge graphs can do through industry literature and the backlog of use cases you have already gathered. This will allow executives to see how they can get even greater value from their investment and drive more funding for your knowledge graph.

Knowledge graphs are powerful information management tools that are only now becoming fully understood. The leading graph database vendors offer free downloads of their software so that organizations can start to understand the true power of these tools. Unfortunately, too often these downloads are used only for small projects that disappear over time. The simple steps I have described above can pave the way to turn your initial project into an enterprise platform powering numerous, critical Artificial Intelligence solutions.

Learn more about how we enable this for our clients by contacting us at info@enterprise-knowledge.com.

The post Elevating Your Point Solution to an Enterprise Knowledge Graph appeared first on Enterprise Knowledge.

]]>
Digital Twins and Knowledge Graphs https://enterprise-knowledge.com/digital-twins-and-knowledge-graphs/ Thu, 05 May 2022 15:56:04 +0000 https://enterprise-knowledge.com/?p=15402 Enterprise knowledge graphs are one of the fastest growing trends in knowledge management. Their intuitive and flexible structure, combined with their emphasis on relationships between entities, make knowledge graphs a natural fit for use cases that require aggregating content from … Continue reading

The post Digital Twins and Knowledge Graphs appeared first on Enterprise Knowledge.

]]>
Enterprise knowledge graphs are one of the fastest growing trends in knowledge management. Their intuitive and flexible structure, combined with their emphasis on relationships between entities, make knowledge graphs a natural fit for use cases that require aggregating content from multiple disconnected systems. For example, many organizations are beginning to use knowledge graphs to establish a “360 degree view” of assets, service levels, issue resolution, and customer experiences to get a full picture of business critical operations. One area where knowledge graphs are quickly adding value is the use of digital twins to model real world assets, such as buildings and equipment. These digital twins enable organizations to visualize, understand, and analyze information about assets across multiple data sources, improving efficiency and decision making across systems.

What’s a Digital Twin?

A digital twin is a digital representation of a real-world object using structured, machine-readable data. Digital twins can be used to gain a better understanding of complex entities or components with high quantities of real-time data, and are often used in asset-based industries like manufacturing, utilities, real estate, defense, automotive and aerospace engineering, information technology, and networking communications. A crucial component of the industrial internet of things (IIoT), digital twins leverage the real-time data generated by sensors and monitoring systems, combining this with other information about an asset, such as maintenance data, service management data, and even third-party data such as weather conditions. This creates a rich representation of not only individual assets, but their relationships and dependencies, giving you a 360-degree view of your entire system and inventory.

These digital representations of real world objects can be used to meet a variety of needs for organizations with complex networks of assets, including:

Fault detection 

Many fault detection systems generate too many false positives, leading to alarm fatigue that can limit the response to real issues. Digital twins can be used to find real faults based on real data. For example, buildings often leverage sensor data to identify system outages that may impact building tenants, such as issues with plumbing or electrical systems. Faulty sensors can trigger erroneous outage alarms, which result in technicians responding onsite to remediate an issue that doesn’t actually exist. Using digital twins, sensor data can be cross-referenced against data from multiple sources to determine whether the sensor data is accurate. This reduces false-positive alarms and identifies faulty sensors, allowing organizations to optimize the assignment of scarce maintenance resources.
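A minimal sketch of this cross-referencing logic in Python, assuming a hypothetical feed of latest readings in which None represents a reported outage:

    def is_credible_outage(alarm_sensor_id, readings):
        """Treat an outage alarm as real only if another source corroborates it.

        `readings` maps sensor id -> latest value; None means the sensor
        itself reports an outage. Purely illustrative logic.
        """
        return any(
            value is None
            for sensor_id, value in readings.items()
            if sensor_id != alarm_sensor_id
        )

    readings = {"flow-1": None, "flow-2": 3.2, "pressure-1": 101.5}
    print(is_credible_outage("flow-1", readings))  # False: likely a faulty sensor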

Predictive maintenance

Using digital twins, you can combine information about an asset’s current condition with its maintenance history, allowing you to efficiently target assets due for maintenance and achieve condition-based maintenance or reliability-centered maintenance goals. For example, digital twins of automotive fleets can be used to ensure that each vehicle is optimally maintained, given that vehicle’s maintenance history, mileage, the weather conditions in which it’s been operating, and real-time sensor data from each of its components. 

Improved resiliency

Having a 360 view of your systems can help you identify weak points, quickly repair issues, and even use predictive algorithms to prevent failures from occurring in the first place. All of this produces more resilient systems. For example, digital twins are often used in building information management (BIM) systems. These systems can model all of the components of a building and its subsystems, such as electrical, plumbing, and HVAC. Using past performance metrics and current sensor data, you can establish a full view of your building and determine which aspects may be at risk for failure, enabling you to proactively forecast potential issues and create a more resilient system.

Energy reduction

Combining real-time energy usage with historical data through digital twins allows you to plan for and optimize energy usage across your system. For example, digital twins can be used to analyze energy consumption patterns across a network of buildings and equipment and compare this with energy prices throughout the day. This information allows you to buy and store energy at cheaper costs, and then supply this energy to the appropriate parts of your system based on need.

Cyber security resilience

Digital twins allow you to capture and visualize your cyber vulnerabilities so you can remedy issues and fulfill reporting obligations, improving compliance with federal and industry standards. For example, digital twins enable you to perform network analysis across interlinked IT systems, identifying vulnerabilities and quarantining issues to prevent spread across the system and reduce the impact of cyber security threats.

Service level management verification 

For organizations that outsource critical system operations and maintenance, digital twins can be used as an effective monitoring and measurement tool for service level compliance. 


In addition to identifying which components of a system are due for maintenance, real time data collected via digital twins can enable verification that service level management agreements are being carried out and having the desired effect. For example, large facilities such as universities and military bases often outsource operation and maintenance of utilities, like water treatment plants, to a third party. However, with large facilities, it can be difficult and time consuming to verify that these service management contracts are being appropriately fulfilled. Digital twins provide effective tools for real-time performance monitoring of facility operations, with visual representations of critical system performance that can be used to ensure vendors are meeting or exceeding contract expectations. 


Improved situational awareness

Digital twins allow you to visualize and analyze real-time data to assess the health not only of individual components in a system but of the overall system itself, helping you understand and improve your operational readiness. 

What do knowledge graphs have to do with digital twins?

Digital twins can be created with a variety of technologies, including document databases, time series databases, and relational databases. Because of the different types and amounts of data necessary to model a real world entity in a digital environment, the information required to construct a digital twin is often split across multiple systems. Digital twins that involve real time sensor data can generate massive amounts of information that may need to be stored in dedicated systems for security or scalability purposes. These systems are often siloed and can’t be easily connected to other sources of information about the real world entity that they measure, leading to an incomplete picture of your assets and networks and making it difficult to perform complex analyses.

Semantic knowledge graphs can serve as the “connective tissue” between these disparate systems, connecting data across different platforms. Using an ontology, or graph data model, you can create a schema that allows you to combine data from across multiple systems, acting as a blueprint to show how and where data should be connected. When combined with data transformation processes or virtualization technologies, this ontology can be used to create your knowledge graph, aggregating information about a real-world entity from multiple sources and producing a 360-degree view of your system.
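Here is a minimal rdflib sketch of that aggregation in Python, with a hypothetical namespace and asset; the shared identifier is what lets descriptions from two siloed systems converge on one node.

    from rdflib import Graph, Literal, Namespace, RDF

    TWIN = Namespace("http://example.com/twin/")  # hypothetical ontology
    g = Graph()

    # From the IIoT/sensor system:
    g.add((TWIN.pump_17, RDF.type, TWIN.Pump))
    g.add((TWIN.pump_17, TWIN.latestVibration, Literal(4.7)))

    # From the maintenance management system:
    g.add((TWIN.pump_17, TWIN.lastServiced, Literal("2022-01-10")))

    # One node now carries the 360-degree view of the asset.
    for predicate, value in g.predicate_objects(TWIN.pump_17):
        print(predicate, value)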

In addition to the technical limitations of connecting disparate systems in non-graph technologies, these different systems often use different data models and terminology to describe the same thing, making it hard to connect data even if it’s been aggregated in a single data warehouse or data lake. Developing an enterprise taxonomy allows you to “connect the dots” between different systems, linking system or department-specific terms to standardized concepts that can be applied across your organization and breaking down terminology barriers. 

Additionally, because digital twins often involve complex networks of entities and relationships, knowledge graphs are an ideal way to model digital twin data. Knowledge graphs excel at capturing the complexity of the real world, and they store information in a way that’s intuitive, because data is stored in the same way that human beings think about and visualize information. Knowledge graphs are also much more flexible than traditional relational data models, making it easy to add, change, and remove data in order to respond to real world changes. This reduces the technical burden of creating data, and makes it easier for technical and business users alike to retrieve and understand information. Since digital twins are often used to create a representation of systems involving high degrees of connectivity and interdependence, the flexibility and intuitive nature of knowledge graphs make them a logical fit.

While generating a connected network of your explicit data often unlocks many analysis opportunities, semantic knowledge graphs can also allow you to create logical rules that are leveraged by machines to reason over data and draw inferences, generating new insights. This allows you to see how different components of a system may be connected without having to explicitly state every dependency and interaction. Employing inference over a graph network can be extremely powerful, uncovering cascading effects of changes of which you were previously unaware.
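Dedicated reasoners apply such rules automatically, but a hand-rolled sketch makes the idea concrete. Below, a hypothetical transitive dependsOn property is closed with a simple fixpoint loop over an rdflib graph, surfacing a cascading dependency that was never explicitly stated.

    from rdflib import Graph, Namespace

    TWIN = Namespace("http://example.com/twin/")
    g = Graph()
    g.add((TWIN.hvac_unit, TWIN.dependsOn, TWIN.power_circuit))
    g.add((TWIN.power_circuit, TWIN.dependsOn, TWIN.main_feed))

    # Rule: if A dependsOn B and B dependsOn C, infer A dependsOn C.
    changed = True
    while changed:
        changed = False
        for a, b in list(g.subject_objects(TWIN.dependsOn)):
            for c in list(g.objects(b, TWIN.dependsOn)):
                if (a, TWIN.dependsOn, c) not in g:
                    g.add((a, TWIN.dependsOn, c))
                    changed = True

    # The inferred cascading dependency is now queryable.
    print((TWIN.hvac_unit, TWIN.dependsOn, TWIN.main_feed) in g)  # True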

Outcomes

Using a knowledge graph to implement digital twins enables several key outcomes, including:

“What if” analysis

Simulation and modeling are critical tools for any engineering organization. Modeling the dependencies and interactions inside of a system using digital twins and knowledge graphs allows you to test out the potential effects of changes to the system before making them. For example, imagine you’re building a car, but one of the parts you’re using is climbing in cost, so you’d like to replace it with a substitute part from a different manufacturer. A digital twin would allow you to determine which subsystems of the car are impacted and analyze the requirements you’d need to meet in a replacement part.
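A minimal sketch of that impact analysis using Python’s networkx library, with a hypothetical dependency graph; a production digital twin would run the equivalent traversal over the knowledge graph itself.

    import networkx as nx

    # Hypothetical dependency graph: an edge A -> B means "B depends on A".
    deps = nx.DiGraph()
    deps.add_edge("alternator_v1", "electrical_subsystem")
    deps.add_edge("electrical_subsystem", "dashboard")
    deps.add_edge("electrical_subsystem", "ignition")

    # Everything downstream of the part we want to swap out:
    print(nx.descendants(deps, "alternator_v1"))
    # {'electrical_subsystem', 'dashboard', 'ignition'}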

Identify and fix problems in complex systems

Using real-time sensor data, digital twins can quickly alert you to any issues in your system, enabling you to meet condition-based maintenance and reliability-centered maintenance needs. With real-time data logged to a digital twin, you can combine this information with other data about an asset, such as maintenance records and equipment specifications, allowing you to more efficiently identify the problem and quickly find a solution via data alone. 

Identify problems before they occur

With streaming, real-time data, digital twins may allow you to identify potential problems before they occur. Combining real time data with historical data via a knowledge graph can enable predictive analytics, allowing you to discover patterns that may indicate an impending fault or failure. For example, analyzing past data may show that an HVAC unit consuming more power than usual while also missing the desired temperature setting by a few degrees may be in a degraded state and ready for service or a replacement. Taking proactive action to address the failing unit can prevent an actual failure, as well as the negative effects caused downstream to other components of your system that an HVAC failure might cause. 
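A minimal sketch of such a rule in Python; the thresholds are illustrative assumptions, not field-validated values.

    def hvac_degraded(power_kw, baseline_kw, temp_actual, temp_setpoint,
                      power_margin=1.2, temp_tolerance=2.0):
        """Flag a unit drawing excess power while missing its setpoint."""
        over_consuming = power_kw > baseline_kw * power_margin
        missing_setpoint = abs(temp_actual - temp_setpoint) > temp_tolerance
        return over_consuming and missing_setpoint

    # 9.5 kW against a 7 kW baseline, 3.5 degrees off setpoint: flag it.
    print(hvac_degraded(9.5, 7.0, temp_actual=75.5, temp_setpoint=72.0))  # True

In practice the baseline and tolerances would come from the unit’s historical data in the knowledge graph rather than hard-coded constants.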

Conclusion

When combined with a knowledge graph, digital twins can be a powerful tool for capturing the interconnectivity and complexity of real world systems. The highly relational and flexible nature of knowledge graphs allows them to integrate data across multiple systems, giving you a full 360 view of your assets and their dependencies, increasing your situational awareness and improving the resilience of your systems. Need help getting started with digital twins and knowledge graphs? Contact us today.

 

The post Digital Twins and Knowledge Graphs appeared first on Enterprise Knowledge.

]]>
Taxonomy Use Cases: How To Estimate Effort and Complexity https://enterprise-knowledge.com/taxonomy-use-cases-how-to-estimate-effort-and-complexity/ Thu, 27 Aug 2020 13:00:01 +0000 https://enterprise-knowledge.com/?p=11787 When asked to define taxonomy, I like to define it as a method rather than a thing. I typically say taxonomy is a way of categorizing things hierarchically, from general to more specific. Sounds simple enough, right? After all, who … Continue reading

The post Taxonomy Use Cases: How To Estimate Effort and Complexity appeared first on Enterprise Knowledge.

]]>
When asked to define taxonomy, I like to define it as a method rather than a thing. I typically say taxonomy is a way of categorizing things hierarchically, from general to more specific. Sounds simple enough, right? After all, who hasn’t been grouping together things that have something in common, and slapping a name on that group since they first learned to speak? Every store, every house, every website has a way of categorizing and labeling stuff so that everything belongs in a place. Everyone does it, so it should be easy… right?

As any seasoned taxonomist, librarian, or knowledge manager will tell you: it depends. Specifically, it depends on the purpose of the taxonomy and its intended users. Even subtle differences in purpose or audience in similar environments can lead to vastly different results. Have you ever completely failed to find something in someone else’s kitchen? This is because it was not organized for you, just like you organized your kitchen with your own purposes and needs in mind. The use case, then, is intertwined with an audience or persona and a goal.

This white paper explores taxonomy use cases as an indicator of complexity, and how they can be used to determine the amount of effort that may be required for an organization to design a taxonomy. Effort refers to the amount of dedicated work and brain power that will be needed in order to design for a taxonomy’s complexity, particularly the effort to maintain the taxonomy in the long run and ensure its future success. 

Use Cases

Use cases establish the scope and purpose of a taxonomy. Defining complete and detailed use cases will make a difference in planning out a taxonomy design effort. Use cases identify who will be using a taxonomy, how they will be using it, and why; these can be similar to user stories in the Agile methodology. Once defined, use cases delineate relevant scope by defining Minimum Viable Product features and help decide the direction of a taxonomy (an MVP is like a prototype: what are the bare minimum efforts and features we need to put into this product in order to learn the most about its impact and iteratively expand it?). There may be numerous use cases for a single taxonomy, so it will be necessary to prioritize and create a backlog of use cases that will drive future iterations. Since we typically focus on First Implementable Versions (a taxonomy MVP), we want to first focus on use cases that are easily compatible with each other and attainable, recognizing that taxonomies can grow to incorporate future use cases once the foundation is built.

A use case can be broken down into three parts: the persona, action, and goal. The persona represents an archetype of user that will be interacting with the taxonomy; this could also be a specific role, such as a Sales Representative. The action includes the steps a persona is taking while using the taxonomy; this should also include a specific system in which the taxonomy will be implemented. The goal is the persona’s purpose for using the taxonomy. 

Persona - who is the user? Action - what is the user doing? Goal - what is the goal?

An example use case can be: 

Clark the Customer (persona) needs to be able to use brand, color, and size facets on the customer shop of Shirts.com (action) so that they can find the perfect shirt for their upcoming interview (goal).

These specific details provide clear indicators of a successful taxonomy: we know that our taxonomy must describe clothing through facets (brand, color, size), including styles that are appropriate for interviews (a little extra detail, but it brings the use case to life). We know that the taxonomy must be implemented in faceted search and navigation on a specific system, so we must confirm whether that system has this capability; there may also be implicit back-end systems (such as databases) we need to account for. Lastly, we will need a better understanding of how users currently use this system to achieve their goals, and what a taxonomy can do to improve the situation.
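The three-part structure lends itself to a simple record; here is a hypothetical Python sketch capturing the example above, useful when managing a backlog of use cases.

    from dataclasses import dataclass

    @dataclass
    class TaxonomyUseCase:
        persona: str  # who is the user?
        action: str   # what are they doing, and in which system?
        goal: str     # why are they doing it?

    shirt_shopper = TaxonomyUseCase(
        persona="Clark the Customer",
        action="uses brand, color, and size facets on the Shirts.com shop",
        goal="find the perfect shirt for an upcoming interview",
    )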

Classic Taxonomy Use Cases

Classic use cases for taxonomy include tagging and faceted search for content, basic reporting or analytics, and creating organizational or navigational structures. These use cases are typically applied in content management repositories such as intranets and learning portals, or in other front-facing interfaces such as retail websites. 

Classic use cases are people focused: a customer needs a navigational structure to be clear so that they can find what they’re looking for when they need it. An employee needs to be able to search effectively to find the relevant training on the company learning portal in order to improve at their job. A revenue team needs to be able to classify products and services in one category in order to run reports on their profitability. A data governance team similarly needs to definitively classify data entities and attributes in a single category that corresponds to a business unit, in order to identify data stewards and owners for compliance purposes (such as GDPR or CCPA).

Classic use cases may appear less complex and therefore easier, but this is deceptive and not always true. Classic use cases can easily multiply into several use cases if it turns out there are multiple personas involved. For instance, you may have customers, sales representatives, and third-party vendors involved in a retail search and navigation use case, in which each group has different needs from the taxonomy. Perhaps third-party vendors need a way of managing product metadata, and sales representatives need to be able to track sales, while customers need clear facets to find products. 

That being said, classic use cases are “classic” because they’ve been implemented time and time again in systems that most organizations already have (unless they are adding an enterprise taxonomy tool to the mix, which will make a long-term effort smoother); taxonomists and developers have reliable previous efforts to lean on and may have a reusable methodology for each use case. Classic use cases therefore tend to have a more predictable level of effort, though estimates should also account for other factors such as the complexity of the domain, the level of specificity or breadth of concepts possible, and the type of content the taxonomy will primarily organize.

Advanced Use Cases

Advanced use cases tend to delve into ontologies, knowledge graphs, and artificial intelligence, but taxonomy is still a foundational aspect of these technologies. These use cases include text parsing and automated classification, predictive analytics, insight inferencing, chatbots, and recommendation engines. While people will still benefit from the end result of these use cases, the complexity of these taxonomies is amplified by the fact that they are primarily meant to be utilized by machine learning processes that humans can’t effectively reproduce, on a massive volume of data. A taxonomy meant purely for text parsing and auto-classification will not be directly intuitive or usable by people, since such taxonomies tend to be significantly larger, highly specific, or repetitive as a way to disambiguate concepts, and therefore highly complex. They may also have polyhierarchy or semantic relationships that go beyond hierarchy.

Advanced use cases will require a higher level of effort than classic use cases. The barrier to entry is much higher, requiring specific knowledge of machine learning and other semantic capabilities. Advanced use cases also rely on specific technology that many organizations don’t yet have, so new tools may need to be purchased and added to the system architecture. This is also an actively developing field within artificial intelligence; while there are of course demonstrated successes, these use cases are open to experimentation as the field develops and may face a higher degree of uncertainty (see my previous blog on NLP and Taxonomy Design to learn more about an example of an advanced use case). 

System Use Case Limitations

Systems that are in scope for taxonomy implementation should be noted as part of the action of a use case, in which a persona uses a system to interact with a taxonomy. The added element of a specific in-scope system opens the possibility of certain limitations that can dictate the design of a taxonomy, and will restrict other use cases. For example, some systems do not handle hierarchical values easily. If a taxonomy informs the values of a metadata field in this kind of system, that field will not be able to fully represent the hierarchy of the taxonomy. 

This means the implementation of a taxonomy will have to get creative, but it also means the usable fields of the taxonomy may be limited to a certain level. In other words, only the lowest level of the taxonomy can be used as metadata values. The taxonomy must conform to this level across the board, and all areas of the taxonomy must reach the same depth in order to be used. A good rule of thumb for taxonomies with classic use cases is three levels. 
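As a rough illustration, the following Python sketch checks whether every branch of a nested-dictionary taxonomy reaches exactly three levels; the structure and sample concepts are hypothetical.

    def leaf_depths(taxonomy, depth=1):
        """Yield the depth of every leaf concept in a nested-dict taxonomy."""
        for child in taxonomy.values():
            if child:   # non-empty dict: an intermediate concept
                yield from leaf_depths(child, depth + 1)
            else:       # empty dict: a leaf, usable as a metadata value
                yield depth

    taxonomy = {
        "Apparel": {
            "Shirts": {"Dress Shirts": {}, "T-Shirts": {}},
            "Shoes": {"Sneakers": {}},
        }
    }
    print(set(leaf_depths(taxonomy)) == {3})  # True: uniform three-level depth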

A strict hierarchy and a strict number of levels that are both imposed by a system is great for classic use cases, but it will not be ideal for advanced use cases like text parsing. A limitation like this can make fulfilling an advanced use case exceedingly difficult, since certain levels of specificity will have to be sacrificed. This means that certain classic and advanced use cases are incompatible and may require different designs.

System limitations can restrict different types of use cases that may make them incompatible

While system limitations don’t necessarily change the level of effort of a taxonomy design, not knowing system limitations in advance carries a risk of more effort if significant rework is needed (which can still be accounted for ahead of time if we plan for constant iteration). However, as mentioned above, system limitations will affect other use cases. As a result, the more systems selected to be part of a taxonomy effort, the higher the chance that system limitations will impact design decisions; this may increase the level of effort and restrict the taxonomy’s ability to fulfill other types of use cases, especially if each system roughly corresponds to a classic or advanced use case.

Mixed Use Cases as Indicators of Complexity

Multiple use cases for a taxonomy can be a sign of complex business needs, often because multiple groups of users or even departments rely on a single taxonomy to achieve their specific goals. Likewise, multiple in-scope systems can indicate multiple groups of users or departments that use their own designated systems, each with its own capabilities and limitations that may need to be accounted for. 

Depending on the nature of these departments, even if they share the same use case, they may require different concepts or structures in the taxonomy. For example, if a global enterprise needs a taxonomy for its products and services, it is usually the case that regional offices offer unique services and products, or engage in markets and industries specific to their regions but not others. 

The implication is that this taxonomy will have parts that are not relevant to specific regions. This increases the potential for misalignment and lower adoption if not identified early on by establishing thorough use cases for each region; regions may need the ability to designate the sections of a master enterprise taxonomy that are relevant to them.

While some use cases are very compatible with each other, every distinct use case for a taxonomy runs the risk of changing the nature or content of the taxonomy, thus potentially increasing the effort required. A taxonomy intended for search and navigation may be a different shape than a taxonomy for reporting, because these entail different users with different goals, even if the taxonomy models the same information domain. The more use cases that are introduced to a single taxonomy effort without planning accordingly, the higher the risk of failing to meet expectations, thus lowering adoption.

Conclusion

It’s important to emphasize again that we use terms like “First Implementable Version” and “Initial Design” for a reason: to set expectations that a taxonomy is necessarily iterative, and you don’t need to tackle all possible use cases at once on Day 1. Similarly, expecting to achieve all of your possible use cases within a few months’ initial design project is unrealistic. A sustained effort can be grown as value is realized with an MVP, and then more use cases, as well as the advanced use cases, can eventually be explored. Start small, prioritize the first use cases to the ones that are compatible and attainable, realize and demonstrate the value of your MVP, and grow as necessary.

Taxonomy is incredibly flexible, and it can be designed in many different ways to suit your users’ needs. Taxonomy is an elegant solution to complex, wide ranging yet common problems in the information world. Identifying and analyzing use cases, and considering the potential complexity represented by them, can be used as a way to estimate the effort required for an enterprise taxonomy. From here, a viable long-term roadmap can be created with realistic expectations and priorities. 

Know you need a taxonomy, but unsure where to start? Contact Enterprise Knowledge’s team of expert taxonomists and KM consultants to learn more.

The post Taxonomy Use Cases: How To Estimate Effort and Complexity appeared first on Enterprise Knowledge.

]]>
What is the Roadmap to Enterprise AI? https://enterprise-knowledge.com/enterprise-ai-in-5-steps/ Wed, 18 Dec 2019 14:00:57 +0000 https://enterprise-knowledge.com/?p=10153 Artificial Intelligence technologies allow organizations to streamline processes, optimize logistics, drive engagement, and enhance predictability as the organizations themselves become more agile, experimental, and adaptable. To demystify the process of incorporating AI capabilities into your own enterprise, we broke it … Continue reading

The post What is the Roadmap to Enterprise AI? appeared first on Enterprise Knowledge.

]]>
Artificial Intelligence technologies allow organizations to streamline processes, optimize logistics, drive engagement, and enhance predictability as the organizations themselves become more agile, experimental, and adaptable. To demystify the process of incorporating AI capabilities into your own enterprise, we broke it down into five key steps in the infographic below.

An infographic about implementing AI (artificial intelligence) capabilities into your enterprise.

If you are exploring ways your own enterprise can benefit from implementing AI capabilities, we can help! EK has deep experience in designing and implementing solutions that optimize the way you use your knowledge, data, and information, and can produce actionable and personalized recommendations for you. Please feel free to contact us for more information.

The post What is the Roadmap to Enterprise AI? appeared first on Enterprise Knowledge.

]]>