Artificial Intelligence Articles - Enterprise Knowledge https://enterprise-knowledge.com/tag/artifical-intelligence/ Fri, 09 May 2025 19:52:59 +0000

Top Knowledge Management Trends – 2025 https://enterprise-knowledge.com/top-knowledge-management-trends-2025/ Tue, 21 Jan 2025 17:35:24 +0000

The post Top Knowledge Management Trends – 2025 appeared first on Enterprise Knowledge.


EK Knowledge Management Trends for 2025

The field of Knowledge Management continues to experience a period of rapid evolution, and with it, a growing opportunity to redefine value and reorient decision-makers and stakeholders toward the business value the field offers. The nature of work continues to shift in a post-Covid world, the "AI Revolution" dominates conversations with instances of Generative AI seemingly everywhere, and the fields of Knowledge, Information, Data, and Content Management continue to connect in new ways.

As in years past, my annual report on Top Knowledge Management Trends for 2025 is based on an array of factors and inputs. As the largest global KM consultancy, EK is in a unique position to identify where KM is and where it is heading. Along with my colleagues, I interview clients and map their priorities, concerns, and roadmaps. We also sample the broad array of requests and inquiries we receive from potential clients and analyze various requests for proposal and information (RFPs and RFIs). In addition, we attend conferences, not just for KM but more broadly across industries and related fields, to understand where the "buzz" is. I then supplement these and other inputs with interviews with leaders in the field and input from EK's Expert Advisory Board (EAB). From that, I identify what I see as the top trends in KM.

You can review each of these annual blogs for 2024, 2023, 2022, 2021, 2020, and 2019 to get a sense of how the world of KM has rapidly progressed and to test my own track record. Now, here’s the list of the Top Knowledge Management trends for 2025.

 

1) AI-KM Symbiosis – Everyone is talking about AI and we’re seeing massive budgets allocated to make it a reality for organizations, rather than simply something that demonstrates well but generates too many errors to be trusted. Meanwhile, many KM practitioners have been asking what their role in the world of AI will be. In last year’s KM Trends blog I established the simple idea that AI can be used to automate and simplify otherwise difficult and time-consuming aspects of KM programs, and equally, KM design and governance practices can play a major role in making AI “work” within organizations. I doubled down on this idea during my keynote at last year’s Knowledge Summit Dublin, where I presented the two sides of the coin, KM for AI, and AI for KM, and more recently detailed this in a blog while introducing the term Knowledge Intelligence (KI).

In total, this can be considered as the mutually beneficial relationship between Artificial Intelligence and Knowledge Management, which all KM professionals should be seizing upon to help organizations understand and maximize their value, and of which the broader community is quickly becoming aware. Core KM practices and design frameworks address many of the reliability, completeness, and accuracy issues organizations are reporting with AI – for instance, taxonomy and ontology to enable context and categorization for AI, tacit knowledge capture and expert identification to deliver rich knowledge assets for AI to leverage, and governance to ensure the answers are correct and current.

AI, on the other hand, delivers inference, assembly, delivery, and machine learning to speed up and automate otherwise time-intensive, human-based tasks that were rife with inconsistencies. AI can help deliver the right knowledge to the right people at the moment of need through automation and inference, automate tasks like tagging, and even improve tacit knowledge capture, which I cover below in greater detail as a unique trend.

 

2) AI-Ready Content – Zeroing in on one of the greatest gaps in high-performing AI systems, a key role for KM professionals this year will be to establish and guide the processes and organizational structures necessary to ensure content ingested by an organization’s AI systems is connectable and understandable, accurate, up-to-date, reliable, and eminently trusted. There are several layers to this, in all of which Knowledge Management professionals should play a central role. First is the accuracy and alignment of the content itself. Whether we’re talking structured or unstructured, one of the greatest challenges organizations face is the maintenance of their content. This has been a problem long before AI, but it is now compounded by the fact that an AI system can connect with a great deal of content and repackage it in a way that potentially looks new and more official than the source content. What happens when an AI system is answering questions based on an old directive, outdated regulation, or even completely wrong content? What does it do if it finds multiple conflicting pieces of information? This is where “hallucinations” start appearing, with people quickly losing trust in AI solutions.

In addition to the issues of quality and reliability, there are also content issues related to structure and state. AI solutions perform better when content in all forms has been tagged consistently with metadata and certain systems and use cases benefit from consistent structure and state of content as well. For organizations that have previously invested in their information and data practices, leveraging taxonomies, ontologies, and other information definition and categorization solutions, trusted AI solutions will be a closer reality. For the many others, this must be an area of focus.

Notably, we’ve even seen a growing number of data management experts making a call for greater Knowledge Management practices and principles in their own discipline. The world is waking up to the value of KM. In 2025, there will be a growing priority on this age-old problem of getting an organization’s content, and content governance, in order so that those materials surfaced through AI will be consistently trusted and actionable.

 

3) Filling Knowledge Gaps – All systems, AI-driven or otherwise, are only as smart as the knowledge they can ingest. As systems leverage AI more and transcend individual silos to operate for the entire enterprise, there's a great opportunity to better understand what people are asking for. This goes beyond analytics, though analytics are a part of it; the focus is on understanding what was asked that couldn't be answered. Once enterprise-level knowledge assets are united, these AI and Semantic Layer solutions have the ability to identify knowledge gaps.

This creates a massive opportunity for Knowledge Management professionals. A key role of KM professionals has always been to proactively fill these knowledge gaps, but in so many organizations, simply knowing what you don’t know is a massive feat in itself. As systems converge and connect, however, organizations will suddenly have an ability to spot their knowledge gaps as well as their potential “single points of failure,” where only a handful of experts possess critical knowledge within the organization. This new map of knowledge flows and gaps can be a tool for KM professionals to prioritize filling the most critical gaps and track their progress for the organization. This in turn can create an important new ability for KM professionals to demonstrate their value and impact for organizations, showing how previously unanswerable questions are now addressed and how past single points of failure no longer exist. 

To paint the picture of how this works, imagine a united organization that could receive regular, automated reports on the topics for which people were seeking answers but the system was unable to provide. The organization could then prioritize capturing tacit knowledge, fostering new communities of practice, generating new documentation, and building new training around those topics. For instance, if a manufacturing company had a notable spike in user queries about a particular piece of equipment, the system would be able to notify the KM professionals, allowing them to assess why this was occurring and begin creating or curating knowledge to better address those queries. The most intelligent systems would be able to go beyond content and even recognize when an organization’s experts on a particular topic were dwindling to the point that a future knowledge gap might exist, alerting the organization to enhance knowledge capture, hiring, or training. 
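As a minimal sketch of the reporting described above, the snippet below counts topics from a hypothetical log of queries the system could not answer and flags those that spike past a threshold. The topic names, log format, and cutoff are all illustrative assumptions, not a real system's output.

```python
from collections import Counter

# Hypothetical log of topics for queries the system could not answer.
unanswered = [
    "pump-3 maintenance", "pump-3 maintenance", "pump-3 maintenance",
    "export controls", "pump-3 maintenance", "onboarding checklist",
]

SPIKE_THRESHOLD = 3  # illustrative cutoff for a "notable" gap

def find_knowledge_gaps(topics, threshold=SPIKE_THRESHOLD):
    """Return topics whose unanswered-query count meets the threshold."""
    counts = Counter(topics)
    return {t: n for t, n in counts.items() if n >= threshold}

print(find_knowledge_gaps(unanswered))  # {'pump-3 maintenance': 4}
```

A real deployment would feed these counts from search or chatbot logs and route the flagged topics to KM professionals for knowledge capture or curation.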

 

4) AI-Assisted Tacit Knowledge Capture – Since the late 1990s, I've seen people in the KM field seek to automate the process of tacit knowledge capture. Despite many demos and good ideas over the decades, I've never found a technical solution that approximates a human-driven knowledge capture approach. I believe that will change in the coming years, but for now the trend isn't automated knowledge capture; it is AI-assisted knowledge capture. There's a role for both KM professionals and AI solutions to play in this approach. The human's responsibilities are to identify high-value moments of knowledge capture, understand who holds that knowledge and what specifically we want to be able to answer (and for whom), and then facilitate the conversations and connections to have that knowledge transferred to others.

That’s not new, but it is now scalable and easier to digitize when AI and automation are brought into the processes. The role of the AI solution is to record and transcribe the capture and transfer of knowledge, automatically ingesting the new assets into digital form, and then leveraging it as part of the new AI body of knowledge to serve up to others at the point of need. By again considering the partnership between Knowledge Management professionals and the new AI tools that exist, practices and concepts that were once highly limited to human interactions can be multiplied and scaled to the enterprise, allowing the KM professional to do more that leverages their expertise, and automating the drudgery and low-impact tasks.

 

5) Enterprise Semantic Layers – Last year in this KM Trends blog, I introduced the concept of the Semantic Layer. I identified it as the next step for organizations seeking enterprise knowledge capabilities beyond the maturity of knowledge graphs, and as a foundational framework that can make AI a reality for your organization. Over the last year, we saw the term enter firmly into the conversation and begin to move into production for many large organizations. In 2025, that trend will continue to grow as organizations move from prototyping and piloting semantic layers to putting them into production. The most mature organizations will leverage their semantic layers for multiple front-end solutions, including AI-assisted search, intelligent chatbots, recommendation engines, and more.

 

6) Access and Entitlements – So what happens when, through a combination of semantic layers, enterprise AI, and improved knowledge management practices, an organization actually achieves what it has been seeking and connects knowledge assets of all different types, spread across the enterprise in different systems and representing different eras of the organization? The potential is phenomenal, but there is also a major risk. Many organizations struggle mightily with appropriate access and entitlements to their knowledge assets. Legacy file drives and older systems hold dark content and data that should be secured but isn't. This largely goes unnoticed when those materials are "hidden" by poor findability and confused information architectures. All of a sudden, as those issues melt away thanks to AI and semantic layers, knowledge assets that should be secured will be exposed. Though not specifically a knowledge management problem, the work of knowledge managers and others within organizations to break down silos, connect content in context, and improve enterprise findability and discoverability will surface this security and access issue. It will need to be addressed proactively lest organizations find themselves exposing materials they shouldn't.

I anticipate this will be a hard lesson learned for many organizations in 2025. As they succeed in the initial phases of production AI and semantic layer efforts, there will be unfortunate exposures. Rather than delivering the right knowledge to the right people, the wrong knowledge will be delivered to the wrong people. The potential risk and impact of this is profound. It will require KM professionals to help identify this risk, not to solve it independently, but to partner with others in the organization to recognize it and plan to avoid it.

 

7) More Specific Use Cases (and Harder ROI) – In 2024, we heard a lot of organizations saying "we want AI," "we need a semantic layer," or "we want to automate our information processes." As these solutions become more real and organizations become more educated about the "how" and "why," we'll see growing maturity around these requests. Rather than broad statements about technology and associated frameworks, more organizations will formulate cohesive use cases and speak in terms of outcomes and value. This will help move these initiatives from interesting nice-to-have experiments to recession-proof, business-critical solutions. The knowledge management professional's responsibility is to guide these conversations: zero your organization in on the "why," ensure you can connect the solution and framework to the specific business problems they will solve, and then to the measurable value they will deliver for the organization.

Knowledge Management professionals are poised to play a major role in these new KM trends. Many of the trends, as you read above, pull on long-standing KM responsibilities and skills, ranging from tacit knowledge capture to taxonomy and ontology design, as well as governance and organizational design. The most successful KM'ers in 2025 will be those who merge these traditional skillsets with a deeper understanding of semantics and the associated technologies, continuing to connect the fields of Knowledge, Content, Information, and Data Management as the connectors and silo busters for organizations.

Where does your organization currently stand with each of these trends? Are you in a position to ensure you’re at the center of these solutions for your organization, leading the way and ensuring knowledge assets are connected and delivered with high-value and high-reliability context? Contact us to learn more and get started.

The Role of AI in the Semantic Layer https://enterprise-knowledge.com/the-role-of-ai-in-the-semantic-layer/ Wed, 29 May 2024 14:30:11 +0000

The post The Role of AI in the Semantic Layer appeared first on Enterprise Knowledge.

Two significant recent trends in knowledge management, artificial intelligence (AI) and the semantic layer, are reshaping the way organizations interact with their data and leverage it to derive actionable insights. Taking advantage of the interplay between AI and the semantic layer is key to driving advancements in data extraction, organization, interpretation, and application at the enterprise level. By integrating AI techniques with semantic components and understanding, EK is empowering our clients to break down data silos, connect vast amounts of their organizational data assets in all forms, and comprehensively transform their knowledge and information landscapes. In this blog, I will walk through how AI techniques such as named entity recognition, clustering and similarity algorithms, link detection, and categorization facilitate data curation for input into a semantic layer, which feeds into downstream applications for advanced search, classification, analytics, chatbots, and recommendation capabilities.

[Figure: AI algorithms feed into the semantic layer, which in turn feeds downstream applications]

Understanding the Semantic Layer

The semantic layer is a standardized framework that serves as a bridge between raw data and user-facing applications by organizing, abstracting, and connecting data and knowledge from structured, unstructured, and semi-structured formats. It encompasses components such as taxonomies, ontologies, business glossaries, knowledge graphs, and related tooling to provide organizations with a unified and contextualized view of their data and information. This enables intuitive user interactions and empowers analysis and decision-making informed by business context. 

AI Techniques in the Semantic Layer

The following AI techniques are useful tools for the curation of semantic layer input and powering downstream applications:

1. Named Entity Recognition

Named entity recognition (NER) is a natural language processing (NLP) technique that involves identifying and categorizing entities within text, such as people, organizations, or locations. By leveraging NER, organizations can automate the extraction process of common entities from large amounts of unstructured textual data to quickly identify key information. Identifying, extracting, and labeling common entities consistently across different datasets streamlines the normalization of data in varied formats from disparate sources for seamless data integration to the semantic layer. These enriched semantic representations of data enable organizations to connect information and surface contextual insights from vast amounts of complex data. 
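To make the extraction step concrete, here is a deliberately simple, dictionary-based NER sketch. Production NER relies on trained statistical models rather than lookup lists; the gazetteer entries, labels, and sample sentence below are all illustrative assumptions.

```python
import re

# Toy gazetteer; a real system would use a trained NER model instead.
# These entity names and labels are illustrative assumptions.
GAZETTEER = {
    "PERSON": ["Ada Lovelace"],
    "ORG": ["Enterprise Knowledge"],
    "LOCATION": ["Seattle"],
}

def extract_entities(text):
    """Return sorted (entity, label) pairs found in the text."""
    found = []
    for label, names in GAZETTEER.items():
        for name in names:
            if re.search(re.escape(name), text):
                found.append((name, label))
    return sorted(found)

doc = "Ada Lovelace spoke for Enterprise Knowledge in Seattle."
print(extract_entities(doc))
```

The extracted (entity, label) pairs become structured metadata that can be attached to the source document and loaded into a knowledge graph.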

For a federally funded engineering research center, EK leveraged taxonomy and ontology models to automatically extract key entities from a vast repository of unstructured documents and add structured metadata, ultimately building an enterprise knowledge graph for the organization’s semantic layer. This supported a semantic search platform for users to conduct natural language searches and intuitively browse and navigate through documents by key entities such as person, project, and topic, reducing time spent searching from several days to 5 minutes.

2. Clustering and Similarity Algorithms

Clustering algorithms (e.g., K-means, DBSCAN) partition datasets by creating distinct groups of similar objects to identify patterns and find commonalities between unlabeled or uncategorized data elements, whereas similarity algorithms (e.g., cosine similarity, Euclidean distance, Jaccard similarity) are used to measure the similarity or dissimilarity between two objects or sets of objects. Clustering and similarity algorithms are crucial in various semantic layer-driven use cases and applications, such as chatbots, recommendation engines, and semantic search.
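As a minimal illustration of the similarity side, the sketch below computes cosine similarity over toy bag-of-words count vectors. The vocabulary and vectors are invented for the example; real pipelines would use learned embeddings and a library implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Two near-duplicate risk descriptions vs. an unrelated one, as toy
# bag-of-words count vectors (the vocabulary is an assumption).
risk_a = [2, 1, 0, 1]   # e.g., "unauthorized access to accounts"
risk_b = [2, 1, 0, 0]   # e.g., "unauthorized account access"
risk_c = [0, 0, 3, 0]   # e.g., "market volatility exposure"

# The two similar descriptions score higher than the unrelated pair.
print(cosine_similarity(risk_a, risk_b) > cosine_similarity(risk_a, risk_c))  # True
```

Clustering algorithms such as K-means then group descriptions whose pairwise similarities are high, which is the grouping step subject matter experts reviewed in the engagement described below.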

As part of a global financial institution’s semantic layer implementation for their risk management initiatives, EK led taxonomy development efforts by implementing a semi-supervised clustering algorithm to group a large volume of inconsistent and unwieldy free-text risk descriptions based on their semantic similarity. The firm’s subject matter experts used the results to identify common and relevant themes that informed the design of a standard risk taxonomy that will significantly streamline their risk identification and assessment processes.

Identifying groups and patterns within and across datasets is also beneficial for advanced analytics and reporting needs. EK leveraged these AI techniques for a biotechnology company to aggregate and normalize disparate data from multiple legacy systems, providing a full-scale view for comprehensive analysis. EK incorporated this data into the semantic layer and, in turn, automated the generation of regulatory reports and detailed process analytics.

3. Link Detection

Link detection algorithms identify relationships and connections between entities or concepts within a dataset. Uncovering these links enables the construction of semantic networks or graphs, providing a structured representation of an organization’s knowledge domain. Link detection surfaces connections and patterns across data to enhance navigation and semantic search capabilities, ultimately facilitating comprehensive knowledge discovery and efficient information retrieval.
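One simple form of link detection is co-occurrence counting: entities that appear in the same documents are connected by weighted edges. The sketch below builds such an edge list; the document contents and entity names are hypothetical, and real systems apply far richer relationship-extraction models.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical entity sets extracted from three documents.
docs = [
    {"Project Apollo", "Jane Smith", "Propulsion"},
    {"Project Apollo", "Jane Smith"},
    {"Propulsion", "Test Rig 7"},
]

def detect_links(entity_sets, min_cooccurrence=1):
    """Build a weighted edge list from entity co-occurrence within documents."""
    weights = defaultdict(int)
    for entities in entity_sets:
        for pair in combinations(sorted(entities), 2):
            weights[pair] += 1
    return {p: w for p, w in weights.items() if w >= min_cooccurrence}

edges = detect_links(docs)
print(edges[("Jane Smith", "Project Apollo")])  # 2
```

The resulting weighted edges can seed a semantic network or knowledge graph, with higher weights suggesting stronger candidate relationships.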

[Figure: The process of extracting relationships via link detection for the semantic layer]

For a global scientific solutions and services provider, EK utilized link detection and prediction algorithms to develop a context-based recommendation system. The semantic layer established links between product data and product marketing content, expediting content aggregation and enabling personalization in the recommender interface for an intuitive and tailored user experience. 

4. Categorization

Categorization involves automatically classifying data or text into predefined categories or topics based on their content or features. Auto-tagging and classification are powerful techniques to organize and typify content from multiple sources that can then be fed into a single repository within the semantic layer. This streamlines information management processes for enhanced data access, connectivity, findability, and discoverability. 
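A rule-based sketch of auto-tagging is shown below: text is tagged with every category whose keywords it contains. The category names and keyword lists are assumptions for illustration; production categorization typically combines taxonomy terms with trained classifiers.

```python
# Toy tagging rules; real deployments would combine taxonomy terms with
# a trained classifier (these categories and keywords are assumptions).
RULES = {
    "finance": ["invoice", "budget", "payroll"],
    "hr": ["benefits", "onboarding", "payroll"],
}

def categorize(text, rules=RULES):
    """Tag text with every category whose keywords appear in it."""
    lowered = text.lower()
    return sorted(tag for tag, words in rules.items()
                  if any(w in lowered for w in words))

print(categorize("Q3 budget and payroll summary"))  # ['finance', 'hr']
```

The same pattern extends to sensitivity labeling: rules keyed to sensitive terms can flag content for access review, as in the engagement described below.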

EK leverages AI-based categorization to enable our clients to consistently organize data based on defined access and security requirements applicable within their organizations. For example, a leading federal research organization struggled with managing large amounts of unstructured data across various platforms, resulting in inefficiencies in data access and an increased risk of sensitive data exposure. EK automated content categorization based on predefined sensitivity rules and built a dashboard powered by the semantic layer to streamline the identification and remediation of access issues for overshared sensitive content. With the initial proof of concept, EK successfully automated the scanning and analyzing of about 30,000 documents to identify disparities in sensitivity labels and access designations without burdensome manual efforts.

Conclusion

AI techniques can be used to facilitate data aggregation, standardization, and semantic enrichment for curated input into the semantic layer, as well as to build upon the semantic layer for advanced downstream applications from semantic search to recommendation engines. By harnessing the combined power of AI and semantic layer components, organizations can accelerate the digital transformation of their knowledge and data landscapes and establish truly data-driven processes across the enterprise. Contact EK to see if your organization is ready for AI and learn how you can get started with your own AI-driven semantic layer initiatives. 

IA Fast-Track to Search-Focused AI Solutions: Information Architecture Conference 2024 https://enterprise-knowledge.com/ia-fast-track-to-search-focused-ai-solutions-information-architecture-conference-2024/ Tue, 30 Apr 2024 13:28:54 +0000

The post IA Fast-Track to Search-Focused AI Solutions: Information Architecture Conference 2024 appeared first on Enterprise Knowledge.

Sara Mae O’Brien-Scott and Tatiana Baquero Cakici, Senior Consultants at Enterprise Knowledge (EK), presented “AI Fast Track to Search-Focused AI Solutions” at the Information Architecture Conference (IAC24) that took place on April 11, 2024 in Seattle, WA.

In their presentation, O’Brien-Scott and Cakici focused on what Enterprise AI is, why it is important, and what it takes to empower organizations to get started on a search-based AI journey and stay on track. The presentation explored the complexities of enterprise search challenges and how IA principles can be leveraged to provide AI solutions through the use of a semantic layer. O’Brien-Scott and Cakici showcased a case study where a taxonomy, an ontology, and a knowledge graph were used to structure content at a healthcare workforce solutions organization, providing personalized content recommendations and increasing content findability.

In this session, participants gained insights about the following:

  • Most common types of AI categories and use cases;
  • Recommended steps to design and implement taxonomies and ontologies, ensuring they evolve effectively and support the organization’s search objectives;
  • Taxonomy and ontology design considerations and best practices;
  • Real-world AI applications that illustrated the value of taxonomies, ontologies, and knowledge graphs; and
  • Tools, roles, and skills to design and implement AI-powered search solutions.

EK’s Ethan Hamilton and Heather Hedden to Speak at the Virtual PoolParty Summit 2024 https://enterprise-knowledge.com/eks-ethan-hamilton-heather-hedden-to-speak-at-the-virtual-poolparty-summit-2024/ Thu, 29 Feb 2024 16:45:40 +0000

The post EK’s Ethan Hamilton and Heather Hedden to Speak at the Virtual PoolParty Summit 2024 appeared first on Enterprise Knowledge.

Enterprise Knowledge will have speakers in two separate sessions of the free, virtual PoolParty Summit, scheduled for March 19 – 21. This is the third edition of the online event, which explores how PoolParty users are harnessing the power of Explainable AI and semantics.

EK’s Senior Consultant Heather Hedden will present “Challenges in Creating Taxonomies for Learning & Development” in a joint session with Walmart’s Senior Manager, KM & Capacity, Amber Simpson on March 19 at 12:00 noon EDT. Taxonomies can help provide a foundation to align skills to roles and to learning content. Simpson will discuss Walmart’s current effort to push learning content to associates, and Hedden will look at issues involved in developing a taxonomy of skills, considering the different taxonomy users, how skills can be linked to roles in a simple ontology, and the challenges in application front-end design.

EK’s Data Engineer Ethan Hamilton will be speaking on a panel of thought leaders, “A Roundtable Discussion: Using Knowledge Graphs to Bridge the Gaps in Generative AI,” on March 19 at 1:30 pm EDT. Despite their popularity, many worry about the risks Generative AI and LLMs pose: trustworthiness, ethics, missing information, suitability for workplace decisions, etc. This roundtable will seek to answer some of these questions and present another alternative: Explainable AI. Panelists will speak to the strengths of combining these generative technologies with knowledge graphs that can help explain the results and support the generation of high-quality data to train the models.

The event is free to anyone interested. Learn more and register for the Summit here.

What is a Large Language Model (LLM)? https://enterprise-knowledge.com/what-is-a-large-language-model-llm/ Wed, 21 Feb 2024 17:00:37 +0000

The post What is a Large Language Model (LLM)? appeared first on Enterprise Knowledge.

[Header image generated using Dall-E 3 (via ChatGPT)]

 

In late November of 2022, artificial intelligence (AI) research and development company OpenAI released ChatGPT, an AI chatbot powered by a Large Language Model (LLM). In the following year, the world witnessed a meteoric rise in the usage of ChatGPT and other LLMs across a diverse array of industries and applications. However, what large language models actually are and what they are capable of is often misunderstood. In this blog, I will define LLMs, explore how they work, explain their strengths and weaknesses, and elaborate on a few of the most common LLM use cases for the enterprise.

 

 

So, what is a Large Language Model?

In short, a Large Language Model is an advanced AI model designed to perform Natural Language Processing (NLP) tasks, including interpreting, translating, predicting, and generating coherent, contextually relevant text. LLMs require extensive training on vast textual datasets containing trillions of words, drawn from sources like Wikipedia and GitHub, which teaches the model to recognize patterns in text. An LLM such as OpenAI’s GPT-4 isn’t doing any “reasoning” like a human does, at least not yet – it is merely generating output that fits the patterns it has learned through training. It can be thought of as making very sophisticated predictions about which words, in which context, go in what order.

 

How does a Large Language Model work? 

All LLMs operate by leveraging immense, layered networks of interconnected nodes that process and transmit information. The structure of the networks draws inspiration from the interconnectedness of the human brain’s network of neurons. Within this framework, LLMs use so-called transformer models – consisting of an encoder and a decoder – to turn input into output. 

In the process of handling a sequence of input text, a tokenizer algorithm first converts the text into a machine-readable format by breaking it down into small, discrete units called “tokens” for analysis; tokens are often whole words, subwords, or individual characters.

For example, the sentence “Hello, world!” can be tokenized into [“Hello”,  “,”,  “world”,  “!”]. 
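A word-and-punctuation tokenizer reproducing that example can be sketched in a few lines. Note this is a simplification: production LLMs use subword tokenizers (e.g., byte-pair encoding) rather than a regular expression.

```python
import re

def tokenize(text):
    """Split text into word and punctuation tokens (a simplified scheme;
    real LLMs use subword tokenizers such as byte-pair encoding)."""
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Hello, world!"))  # ['Hello', ',', 'world', '!']
```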

 

These tokens are then converted into numerical values known as embedding vectors, which is the format expected by the transformer model. However, because transformers can’t inherently understand the order of words, each embedding vector is combined with a positional encoding. This step ensures the order of the words is taken into account by the model.
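The sinusoidal positional encoding from the original Transformer design illustrates how order information can be injected: each position is mapped to a vector of alternating sine and cosine values at different frequencies, which is then added to the token's embedding. The small dimension sizes below are chosen only for readability.

```python
import math

def positional_encoding(position, d_model):
    """Sinusoidal positional encoding: even dimensions use sine, odd use
    cosine, with frequencies decreasing across the embedding dimension."""
    return [
        math.sin(position / 10000 ** (i / d_model)) if i % 2 == 0
        else math.cos(position / 10000 ** ((i - 1) / d_model))
        for i in range(d_model)
    ]

# At position 0 every sine term is 0 and every cosine term is 1.
print(positional_encoding(0, 4))  # [0.0, 1.0, 0.0, 1.0]
```

Because each position yields a distinct vector, adding it to the embedding lets the model distinguish "dog bites man" from "man bites dog."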

After the input text is tokenized, it is passed through the encoder to create attention vectors, which are numerical values that help the model determine the relevance and relationship of each token to the others in the input. This helps the LLM capture dependencies and relationships between tokens, giving it the ability to process the context of each token in the sequence. 

The attention vectors are then passed to the decoder, which produces output embeddings that are converted back into tokens. The decoder process continues until a “STOP” token is output by the transformer, indicating that no more output text should be generated. This process ensures that the generated output considers the relevant information from the input, maintaining coherence and context in the generated text. This is similar to how a human might receive a question, automatically identify the most important aspects of the question, and give an appropriate response that addresses those aspects.
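The token-by-token decoding loop with a stop condition can be sketched as follows. The `next_token` function here is a hypothetical stand-in with canned responses; in a real model, this step runs the decoder over the full context and picks the next token from a probability distribution.

```python
def next_token(context):
    # Stand-in for one decoder step: a real model would score every token in
    # its vocabulary given the context and sample the most likely one.
    canned = {
        "": "Hello",
        "Hello": ",",
        "Hello ,": "world",
        "Hello , world": "!",
        "Hello , world !": "<STOP>",
    }
    return canned.get(" ".join(context), "<STOP>")

def generate(max_tokens=20):
    output = []
    for _ in range(max_tokens):
        token = next_token(output)
        if token == "<STOP>":  # the decoder signals no more output
            break
        output.append(token)
    return output

print(generate())  # → ['Hello', ',', 'world', '!']
```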

 

 

Strengths

Large language models exhibit several strengths that businesses can capitalize on:

  • LLMs excel in advanced tasks that require complex NLP like text summarization, content generation, and translation, all of which demonstrate their high level of proficiency in intricate linguistic tasks and creative text manipulation. This enables them to generate human-like output, carry on long conversations regarding almost any topic, recall details from previous messages in the same context, and even be given specific instructions on how they should respond and react to input. 
  • Similarly, large language models learn rapidly and adapt to the context of a conversation without the need for changing the underlying model architecture. This means they quickly grasp concepts without requiring an extensive number of examples. Supplied with enough detail by a user, LLMs can provide support to that user in solving particular or niche problems without ever having been specifically trained to tackle those kinds of problems.
  • Beyond learning human languages, LLMs can also be trained to perform tasks like writing code, retrieving information, and classifying the sentiment of text, among others. Their adaptability extends to a wide array of use cases that can benefit the enterprise in numerous ways, including saving time, increasing efficiency, and enabling employees to work more effectively.
  • Multimodal LLMs can both break down and generate a variety of media content, including images and videos, with natural language prompts. These models have been trained on existing media to understand their components and then use this understanding to create new content or answer questions about visual content. For example, the image at the top of this blog was generated using Dall-E 3 with the prompt “Please design an image representing a large language model, apt for a professional blog post about LLMs, using mostly purple hues”. This prompt was purposefully vague to allow Dall-E 3 to creatively interpret what an LLM could be represented as.

 

Weaknesses

In spite of their strengths, LLMs have numerous weaknesses:

  • During training, LLMs will learn from whatever input they are given. This means that training on low-quality input data will cause the LLM to generate low-quality output content. Businesses need to be strict about managing the data the model learns from to avoid the garbage in, garbage out problem. Similarly, businesses should avoid training LLMs on content generated by LLMs, which can lead to irreversible defects in the model and further reduce the quality of the generated output.
  • During training, LLMs will ignore copyright, plagiarize written content, and ingest proprietary data if given access to such content, raising potential copyright-infringement and data-ownership concerns.
  • The training process and operation of an LLM demands substantial computational resources, which not only limits their applicability to high-power, high-tech environments but also imposes considerable financial burdens on businesses seeking to develop their own models. Building, scaling, and maintaining LLMs can therefore be extremely costly, resource-intensive, and requires expertise in deep learning and transformer models, which poses a significant hurdle.
  • LLMs present a double-edged sword in their tendency to generate “hallucinations”: they sometimes produce outputs that are factually false or diverge from user intent, as they are only able to predict syntactically correct phrases without a comprehensive understanding of human meaning and truth. However, without the capacity to hallucinate, LLMs would not be able to creatively generate output, so businesses must weigh the cost of hallucinations against the creative potential of the LLM and determine what level of risk they are willing to take.

 

LLM Use Cases for the Enterprise

Large language models have many applications that utilize their strengths. However, their weaknesses manifest across all use cases, so businesses must make considerations to prevent complications and mitigate risks. These are some of the most common use cases where we have employed LLMs:

Content generation:

  • LLMs can generate human-like content for articles, blogs, and other written materials. As such, they can act as a starting point for businesses to create and publish content. 
  • LLMs can assist in generating code based on natural language descriptions, aiding developers in their work, and making programming more accessible for more business-oriented, non-technical people.

Information Retrieval:

  • LLMs can improve search engine results by better understanding the linguistic meaning of user queries and generating more natural responses that pertain to what the user is actually searching for.
  • LLMs can extract information from large training datasets or knowledge bases to answer queries in an efficient, conversational style, improving access and understanding of organizational information.

Text Analysis:

  • LLMs can generate concise and coherent summaries of longer texts, making them valuable for businesses to quickly extract key information from articles, documents, or conversations.
  • LLMs can analyze text data to determine the sentiment behind it, which is useful for businesses to gauge customer opinions, as well as for social media monitoring and market research.
  • LLMs can be used to do customer and patient intakes, and to perform basic problem solving, in order to save employees time for dealing with more complicated issues.

 

Conclusion

In the past year, large language models have seen an explosion in adoption and innovation, and they aren’t going anywhere any time soon – ChatGPT alone reached 100 million active users in January 2023, and continues to see nearly 1.5 billion website visits per month. The enormous popularity of LLMs is supported by their obvious utility in interpreting, generating, and summarizing text, as well as their applications in a variety of technical and non-technical fields. However, LLMs come with downsides that cannot be brushed aside by any business seeking to use or create one. Due to their non-deterministic and emergent capabilities, businesses should prioritize working with experts in order to properly mitigate risks and capitalize on the strengths of a large language model.

Want to jumpstart your organization’s use of LLMs? Check out our Semantic LLM Accelerator and contact us at info@enterprise-knowledge.com for more information! 

The post What is a Large Language Model (LLM)? appeared first on Enterprise Knowledge.

]]>
Top Knowledge Management Trends – 2024 https://enterprise-knowledge.com/top-knowledge-management-trends-2024/ Mon, 29 Jan 2024 15:00:01 +0000 https://enterprise-knowledge.com/?p=19613 For the last several years, I’ve written this article on the Top Knowledge Management Trends of the year. As CEO of the world’s largest Knowledge Management consulting company, I’m lucky to get to witness these trends forming each year. As … Continue reading

The post Top Knowledge Management Trends – 2024 appeared first on Enterprise Knowledge.

]]>

Knowledge Management Trends in 2024

For the last several years, I’ve written this article on the Top Knowledge Management Trends of the year. As CEO of the world’s largest Knowledge Management consulting company, I’m lucky to get to witness these trends forming each year. As in past years, I brought together EK’s KM consultants and thought leaders to guide the development of this list. I looked at the rising themes we see from our clients and prospective clients, including burgeoning topics in requests for proposals we receive. As we’ve helped many of our leading clients develop multi-year KM Transformation roadmaps, or develop their annual priorities and budgets, key themes have taken shape. I’ve supplemented those insights with a series of interviews from KM leaders and practitioners (both internal and external), reviewed topics and discussions from the world’s KM conferences and publications, and evaluated briefings and product roadmaps from vendors across the KM, information management, content management, and data management software spaces. 

You can review my recent annual blogs for 2023, 2022, 2021, 2020, and 2019 to get a sense of how the world of KM has rapidly progressed.

The following are the top KM trends for 2024.

Artificial Intelligence, Obviously – It would be a real failure in thought leadership to simply present AI as a new KM trend and leave it as that, especially since I first identified that growing trend for KM back in 2019. However, the specific interplay and overlap between the two disciplines does merit some additional discussion, as there are new and exciting things happening here. 

First, organizations are continuing to recognize that their AI initiatives will fail without the appropriate building blocks in place. We’re not talking about black box AI here, but rather explainable AI that can be trusted by even the most risk-averse organizations. In these cases, traditional KM disciplines including knowledge capture and digital communities (to get the expert knowledge in a digital and ingestible form); content structuring (to ensure it is machine readable and configurable); taxonomies, ontologies, and content tagging (to ensure it can be categorized, related, and contextualized); and information governance (to ensure only the correct and appropriate information is utilized by the AI solution) can provide the necessary building blocks to make AI work. None of these KM topics are new, in fact most are decades old, but collectively they can lay the foundation for enterprise AI. With AI as a top executive priority, but many AI initiatives stalled or experiencing early failures, executives are open to revisiting the benefits of KM.

With AI as a top executive priority, but many AI initiatives stalled or experiencing early failures, executives are open to revisiting the benefits of KM.

The second KM and AI trend flips the first and focuses on leveraging AI to enhance and improve some traditional KM practices. Over the course of my now quarter-century in KM, there have been several early stumbling blocks to successful KM transformations, largely borne out of the highly labor-intensive nature of KM tasks like content cleanup, content tagging, and content restructuring. These tasks are critical to achieving high KM maturity for an organization, but they can take a massive effort to accomplish effectively. AI offers a solution by automating these and other critical but monotonous KM tasks, speeding up transformations while still delivering a high level of accuracy. This trend promises to drastically improve the speed and/or completion of KM and Digital Transformations.

As we progress more deeply into the KM/AI trend, there are three primary use cases that I expect will continue. The first, and most common we’ve seen move to production, is customized learning, where AI is being used to automatically assemble individual learning paths. This involves assembling formal and informal learning, access to experts, knowledge assets, and job aids into customized curricula for each individual learner. The second use case is leveraging AI for the assembly and creation of new knowledge articles, combining an organization’s knowledge, content, and data, into newer, richer, and more actionable knowledge assets. I address that in greater detail when speaking about Semantic Layers and Conversational KM later on. The last, and one that I’m particularly excited about, is the use of AI to identify and capture new knowledge from experts. This is not a new idea, but we’re beginning to see investment in this space that will allow solutions to identify risks related to human expertise leaving an organization, as well as the appropriate moments and mechanisms to capture that expertise so it may be preserved within the organization. This is an early trend, but it’s one I think we’ll be seeing a lot more of a year from now.

Focus on KM Doing What AI Can’t – I discuss above how KM can be an enabling factor for AI, and AI can be an accelerator for KM as well. Equally, there has been a lot of discussion about which jobs AI will replace. Though AI will do a lot to facilitate and accelerate KM efforts, the role of the (human) Knowledge Management Expert has never been more important within an organization. Though I have no doubt it will get there in time, AI simply can’t do what we can. To that end, the 2024 KM Trend here is a focus on these key gaps, largely 1) capturing expertise with context and interpretation, ensuring an organization is relying on accurate, current, and trusted information, 2) relating knowledge and facilitating people in ways that will foster collaboration, learning, and innovation, and 3) defining the ontologies and large language models to deliver a digital map of how business is done and the relationships that exist therein. None of these are new skills or topics, but at present they are deeply in demand and are highly valued by mature organizations. The first point will be of particular prominence this year; as organizations are increasingly successful in harnessing their collective knowledge, information, and data, the importance of tacit knowledge capture will surge for many organizations hoping to fill gaps in their organizational intelligence.

It is important to note that what the generative AI community cutely calls hallucinations are actually extremely problematic for an organization seeking AI. A hallucination is actually either a poorly designed connection, a gap in knowledge, or more likely an incorrect input (as in, old, obsolete, or just plain incorrect information). Knowledge Management professionals should be an organization’s hallucination assassins.

Knowledge Management professionals should be an organization’s hallucination assassins.

Content Structure and Quality – At the beginning of my career, the simple selling point for taxonomies and tagging was adding structure to unstructured information. Now, twenty-five years later, we’re still seeking to enhance our content, but the definition of content has broadened, and the structure we’re seeking is much more mature. The key theme here from the content perspective, as I covered last year, is that KM now covers all forms of content, from tacit to explicit, information to data, and including people, products, and processes all as discrete knowledge assets that can be included as part of an organization’s KM ecosystem. Structure has also progressed from the simple topics of taxonomy and tagging to the design of enterprise-level ontologies, content types, text analytics, and natural language processing to drive not just an understanding of each individual knowledge asset, but the relationships between them and within them.

Building the Semantic Layer – The semantic layer is also not a new concept, but it is quickly becoming one of the biggest trends in the overlapping space between KM, Data, and AI. In past years, I’ve written about the trend of Knowledge Graphs and how they’re enabling AI, and now semantic layers are set up to be the next, more powerful, step in that progression. A semantic layer is a standardized framework that organizes and abstracts organizational data (structured, unstructured, semi-structured) and serves as a connector for data and knowledge. It combines many of the core design elements of KM, namely information architecture, taxonomies, ontologies, metadata, and content types, along with traditionally data-centric elements like business glossaries and data catalogs to deliver highly contextualized, integrated knowledge at the point of need. If this is a new term for you, get ready to hear it a lot more. It is more than an enabler for AI, setting organizations up to realize longstanding KM goals of breaking down silos; connecting all forms of data, information, and knowledge with the people who need it; and leveraging analytics to fill gaps in knowledge and performance. In short, it is the solution that may finally deliver true enterprise knowledge for an organization.

Renewed Executive Interest and Openness – I’ve noted in the past that executives were already more open to investment in KM due to the pandemic and subsequent trend toward remote/hybrid work, in addition to the “Great Resignation” and battle for talent. Adding to that, at present, is the massive focus on AI. The key to this trend is that it likely means bigger budgets for knowledge management, but only if you know what to listen for. Executives will be asking for the big AI solution, and they will be more specifically seeking automated content assembly, content cleanup, learning, and knowledge layers. The letters “K” and “M” may not be in the request, but mature KM professionals need to understand the ask and know the central role they play in delivering on it. Put simply, KM’ers can finally be the cool kids, but only if they know how to position KM where it belongs within the organization.

KM’ers can finally be the cool kids, but only if they know how to position KM where it belongs within the organization.

Conversational Knowledge Management – We’ve all been amazed by the conversational nature of ChatGPT that allows novice users to ask for answers to questions, images, ideas, and even code, conversing to clarify exactly what you want. As we jointly mature in KM and AI, we’re trending toward “conversational” KM solutions that expand on advanced search, knowledge portals, and intelligent chatbots to allow any user to interact with an organization’s knowledge assets and get increasingly pertinent and customized answers that will help them complete their mission. We’ve already delivered this for some of our more advanced customers, but both the associated technologies and the organizational use cases are hitting a point of inflection where conversational KM capabilities will quickly become the norm.

KM for Risk Identification and Mitigation – Historically, the value of KM has been difficult to express, but we’ve actually made great progress in that area by focusing on the business outcomes of KM, including improved productivity, cost reduction, employee retention, faster and better onboarding and learning, and customer retention, to name a few of the big ones. A new trend in KM comes with a new type of business value: risk identification and mitigation. KM can help identify and mitigate risks by leveraging comprehensive KM solutions to spot improperly secured or incorrect content. This is of particular value for highly regulated industries or those dealing with confidential information. The ROI on this is clear, as one accidental release of proprietary information can cost millions. Worse yet, the wrong information delivered about how to use a product can cost lives. There are other use cases specifically about spotting gaps in knowledge before they become dire, but in short, an enterprise approach to KM can allow an organization to better understand all of their knowledge, content, and data, allowing them to proactively address measurable risks that might occur. This final trend is particularly noteworthy given how easy it is to justify investment, delivering major impact for organizations.

Do you need help understanding and harnessing the value of these trends? Contact us to learn more and get started.

The post Top Knowledge Management Trends – 2024 appeared first on Enterprise Knowledge.

]]>
The Role of Ontologies with LLMs https://enterprise-knowledge.com/the-role-of-ontologies-with-llms/ Tue, 09 Jan 2024 16:30:43 +0000 https://enterprise-knowledge.com/?p=19451 In today’s world, the capabilities of artificial intelligence (AI) and large language models (LLMs) have generated widespread excitement. Recent advancements have made natural language use cases, like chatbots and semantic search, more feasible for organizations. However, many people don’t understand … Continue reading

The post The Role of Ontologies with LLMs appeared first on Enterprise Knowledge.

]]>
In today’s world, the capabilities of artificial intelligence (AI) and large language models (LLMs) have generated widespread excitement. Recent advancements have made natural language use cases, like chatbots and semantic search, more feasible for organizations. However, many people don’t understand the significant role that ontologies play alongside AI and LLMs. People often ask: do LLMs replace ontologies or complement them? Are ontologies becoming obsolete, or are they still relevant in this rapidly evolving field? 

In this blog, I will explain the continuing importance of ontologies in your organization’s quest for better knowledge retrieval and in augmenting the capabilities of LLMs.

Defining Ontologies and LLMs

Let’s start with quick definitions to ensure we have the same background information.

What is an Ontology

An example ontology for Enterprise Knowledge could include the following entity types: Clients, People, Policies, Projects, and Tools. Additionally, the ontology contains the relationships between types, such as people work on projects, people are experts in tools, and projects are with clients.

An ontology is a data model that describes a knowledge domain, typically within an organization or particular subject area, and provides context for how different entities are related. For example, an ontology for Enterprise Knowledge could include the following entity types:

  • Clients
  • People
  • Policies
  • Projects
  • Experts 
  • Tools

The ontology includes properties about each type, i.e., people’s names and projects’ start and end dates. Additionally, the ontology contains the relationships between types, such as people work on projects, people are experts in tools, and projects are with clients. 

Ontologies define the model often used in a knowledge graph, the database of real-world things and their connections. For instance, the ontology describes types like people, projects, and client types, and the corresponding knowledge graph would contain actual data, such as information about James Midkiff (Person), who worked on semantic search (Project) for a multinational development bank (Client).
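The distinction between the ontology (the model) and the knowledge graph (the data) can be sketched as a set of schema-level triples alongside instance-level facts. The entity and relationship names below echo the example above; the `objects` query helper is a hypothetical simplification of what a graph database query language such as SPARQL or Cypher would do.

```python
# Ontology (schema) level: which relationships are allowed between which types.
ontology = {
    ("Person", "works_on", "Project"),
    ("Person", "expert_in", "Tool"),
    ("Project", "with", "Client"),
}

# Knowledge graph (instance) level: actual facts conforming to the ontology.
graph = [
    ("James Midkiff", "works_on", "Semantic Search"),
    ("Semantic Search", "with", "Multinational Development Bank"),
]

def objects(subject, predicate):
    """Return all objects linked to a subject by a given predicate."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects("James Midkiff", "works_on"))  # → ['Semantic Search']
```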

What is an LLM


An LLM is a model trained to understand human sentence structure and meaning. The model can understand text inputs and generate outputs that adhere to correct grammar and language. To briefly describe how an LLM works, LLMs represent text as vectors, known as embeddings. Embeddings act like a numerical fingerprint, uniquely representing each piece of text. The LLM can mathematically compare embeddings of the training set with embeddings from the input text and find similarities to piece together an answer. For example, an LLM can be provided with a large document and asked to summarize it. Since the model can understand the meaning of the large document, transforming it into embeddings, it can easily compile an answer from the provided text.
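The “mathematically compare embeddings” step is typically done with cosine similarity. The sketch below uses tiny hand-picked four-dimensional vectors to keep the arithmetic visible; real embeddings have hundreds or thousands of dimensions and are produced by the model itself, so these particular numbers are purely illustrative.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction,
    # 0.0 means unrelated. This is the standard way to compare embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: "dog" and "puppy" point in similar directions,
# while "car" points elsewhere.
dog = [0.9, 0.1, 0.8, 0.2]
puppy = [0.85, 0.15, 0.75, 0.25]
car = [0.1, 0.9, 0.2, 0.8]

print(cosine_similarity(dog, puppy) > cosine_similarity(dog, car))  # → True
```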

Organizations can take advantage of open-source LLMs like Llama2, BLOOM, and BERT, as developing and training custom LLMs can be prohibitively expensive. While utilizing these models, organizations can fine-tune (extend) them with domain-specific information to help the LLM understand the nuances of a particular field. The tuning process is much less expensive to perform and can improve the accuracy of a model’s output.

Integrating Ontologies and LLMs

When an organization begins to utilize LLMs, several common concerns emerge:

  1. Hallucinations: LLMs are prone to hallucinate, returning incorrect results based on incomplete or outdated training data or by making statistically-based best guesses.
  2. Knowledge Limitation: Out of the box, LLMs can only answer questions from their training set and the provided input text.
  3. Unclear Traceability: LLMs return answers based on their training data and statistics, and it is often unclear if the provided answer is a fact pulled from input training data or if it is a guess.

These concerns are all addressed by providing LLMs with methods to integrate information from an organization’s knowledge domain.

Fine-tuning with a Knowledge Graph

Ontologies model the facts within an organization’s knowledge domain, while a knowledge graph populates these models with actual, factual values. We can leverage these facts to customize and fine-tune the language model to align with the organization’s manner of describing and interconnecting information. This fine-tuning enables the LLM to answer domain-specific questions, accurately identify named entities relevant to the field, and generate language using the organization’s vocabulary. 

A knowledge graph can be leveraged to customize fine-tuning of the language model to answer domain-specific questions.

Training an LLM with factual information presents challenges similar to those encountered with the original LLM: The training data can become outdated, leading to incomplete or inaccurate responses. To address this, fine-tuning an LLM should be considered a continuous process. Regularly updating the LLM with new and existing relevant information is necessary to maintain up-to-date language usage and factual accuracy. Additionally, it’s essential to diversify the training material fed into the LLM to provide a sample of content in various forms. This involves combining ontology-based facts with varied content and data from the organization’s domain, creating a training set to ensure the LLM is balanced and unbiased toward any specific dataset.
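One simple way to feed ontology-grounded facts into a fine-tuning pipeline is to verbalize knowledge graph triples into natural-language sentences, which can then be mixed with other domain content. The triples and phrasing below are hypothetical; a real pipeline would also vary the sentence templates to avoid biasing the model toward one pattern.

```python
# Hypothetical triples drawn from an organization's knowledge graph.
triples = [
    ("James Midkiff", "works on", "Semantic Search"),
    ("Semantic Search", "is a project with", "a multinational development bank"),
]

def to_training_examples(triples):
    """Verbalize graph facts into sentences for a fine-tuning dataset."""
    return [f"{s} {p} {o}." for s, p, o in triples]

for example in to_training_examples(triples):
    print(example)
# James Midkiff works on Semantic Search.
# Semantic Search is a project with a multinational development bank.
```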

Retrieval Augmented Generation

The primary method used to avoid stale or incomplete LLM responses is Retrieval Augmented Generation (RAG). RAG is a process that augments the input fed into an LLM with relevant information from an organization’s knowledge domain. Using RAG, an LLM can access information beyond its original training set, utilizing this information to produce more accurate answers. RAG can draw from diverse data sources, including databases, search engines (semantic or vector search), and APIs. An additional benefit of RAG is its ability to provide references for the sources used to generate responses.

RAG can enhance an LLM’s responses, producing a cleaner answer

We aim to leverage the ontology and knowledge graph to extract facts relevant to the LLM’s input, thereby enhancing the quality of the LLM’s responses. By providing these facts as inputs, the LLM can explicitly understand the relationships within the domain rather than discerning them statistically. Furthermore, feeding the LLM with specific numerical data and other relevant information increases the LLM’s ability to respond to complex queries, including those involving calculations or relating multiple pieces of information. With accurately tailored inputs, the LLM will provide validated, actionable insights rooted in the organization’s data.
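The retrieve-then-augment flow can be sketched end to end. The fact store and the word-overlap retriever below are deliberately naive stand-ins; production RAG systems retrieve from a knowledge graph or a vector index using embedding similarity, and the assembled prompt would be sent to an actual LLM rather than printed.

```python
# Hypothetical fact store; in practice this would be a knowledge graph,
# database, or vector index.
facts = [
    "James Midkiff worked on semantic search for a multinational development bank.",
    "Semantic layers combine taxonomies, ontologies, and knowledge graphs.",
    "Retrieval Augmented Generation supplements prompts with retrieved facts.",
]

def retrieve(question, k=2):
    """Toy retriever: rank facts by word overlap with the question.
    Real systems use vector search over embeddings instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        facts,
        key=lambda f: len(q_words & set(f.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question):
    # Prepend the retrieved facts so the LLM answers from grounded context
    # rather than relying solely on its training data.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Who worked on semantic search?"))
```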

For an example of RAG in action, see the LLM input and response below using a GenAI stack with Neo4j.

A chatbot interface showing a user question and the LLM’s response, which uses RAG to cite Stack Overflow links as sources – an example of how RAG can improve results from an LLM by producing an accurate answer with references in the footnotes.

Conclusion

LLMs are an exciting tool that enable us to effectively interpret and utilize an organization’s knowledge, and quickly access valuable answers and insights. Integrating ontologies and their corresponding knowledge graphs ensures that the LLM accurately uses the language and factual content of an organization’s knowledge domain when generating responses. Are you interested in leveraging your organization’s knowledge with an LLM? Contact us for more information on how we can get started.

The post The Role of Ontologies with LLMs appeared first on Enterprise Knowledge.

]]>
KM Fast Track to Search-Focused AI Solutions https://enterprise-knowledge.com/km-fast-track-to-search-focused-ai-solutions/ Thu, 02 Nov 2023 14:32:36 +0000 https://enterprise-knowledge.com/?p=19116 In today’s rapidly evolving digital landscape, the ability to quickly locate and connect critical information is key to organizational success. As organizations struggle with ever-expanding datasets and information silos, the need for effective search-focused artificial intelligence (AI) solutions becomes vital. … Continue reading

The post KM Fast Track to Search-Focused AI Solutions appeared first on Enterprise Knowledge.

]]>
In today’s rapidly evolving digital landscape, the ability to quickly locate and connect critical information is key to organizational success. As organizations struggle with ever-expanding datasets and information silos, the need for effective search-focused artificial intelligence (AI) solutions becomes vital. This infographic takes you on a journey, drawing an analogy with trains, to emphasize the crucial role of Taxonomies, Ontologies, and Knowledge Graphs in improving knowledge findability. These three elements represent your tickets to the fast track. They can propel your organization’s information search into high-speed efficiency to enhance information retrieval and achieve decision-making excellence.

KM Fast Track to Search-Focused AI Solutions

If your organization is considering the adoption of Search-Focused AI solutions, EK is here to help. With our extensive expertise, we specialize in crafting and deploying customized and actionable solutions that enhance organizations’ information search and knowledge findability. Please feel free to contact us for more information.

The post KM Fast Track to Search-Focused AI Solutions appeared first on Enterprise Knowledge.

]]>
Lulit Tesfaye Speaking at Upcoming Webinar “Unlocking AI: Preparing for ChatGPT and Microsoft 365 Copilot” https://enterprise-knowledge.com/lulit-tesfaye-speaking-at-upcoming-webinar-unlocking-ai-preparing-for-chatgpt-and-microsoft-365-copilot/ Thu, 07 Sep 2023 21:26:08 +0000 https://enterprise-knowledge.com/?p=18885 Lulit Tesfaye, Partner and Vice President of Knowledge and Data Services at Enterprise Knowledge, will join Olga Martí, Technical Product Manager at ClearPeople and Gabriel Karawani, Co-Founder of ClearPeople for the live webinar “Unlocking AI: Preparing for ChatGPT & Microsoft … Continue reading

The post Lulit Tesfaye Speaking at Upcoming Webinar “Unlocking AI: Preparing for ChatGPT and Microsoft 365 Copilot” appeared first on Enterprise Knowledge.

]]>
Lulit Tesfaye, Partner and Vice President of Knowledge and Data Services at Enterprise Knowledge, will join Olga Martí, Technical Product Manager at ClearPeople and Gabriel Karawani, Co-Founder of ClearPeople for the live webinar “Unlocking AI: Preparing for ChatGPT & Microsoft 365 Copilot.” The expert panelists will discuss challenges and strategies to harness the full potential of these two AI technologies to optimize team productivity and creativity, exploring:
  • AI roadmap
  • Real-world use cases
  • Dos and don’ts
  • Best practices for getting your data ready for AI
Join them for “Unlocking AI: Preparing for ChatGPT and Microsoft 365 Copilot” on September 28 at 15:00 BST (10:00 EDT). Register here!


]]>
Is ChatGPT Ready for the Enterprise? https://enterprise-knowledge.com/is-chatgpt-ready-for-the-enterprise/ Wed, 15 Feb 2023 17:06:08 +0000 https://enterprise-knowledge.com/?p=17538 Recently, we were visiting a client showing the latest version of our application when a participant asked, “Why aren’t we using ChatGPT?” It was a good and logical question with the attention that ChatGPT and other AI-based solutions are warranting … Continue reading

The post Is ChatGPT Ready for the Enterprise? appeared first on Enterprise Knowledge.

]]>
Recently, we were visiting a client to demo the latest version of our application when a participant asked, “Why aren’t we using ChatGPT?” It was a good and logical question, given the attention that ChatGPT and other AI-based solutions are receiving these days. While these tools, built on complex machine learning components like large language models (LLMs) and neural networks, offer much promise, their implementation in today’s enterprise should be weighed carefully.

ChatGPT and similar AI-powered solutions have rightfully created quite a buzz in the industry. What they can already do is impressive, and they hold much future promise. Since those of us in the technology world have been inundated with questions and remarkable tales about ChatGPT and similar tools, I took it upon myself to run a little experiment.

The Experiment

As a die-hard Cubs fan, I hopped over to the ChatGPT site and asked: “Which Cubs players have won MVPs?”

It provided a list of names that, on the surface, appeared correct. However, a few minutes spent on Google confirmed that one answer was factually wrong, as were some of the supporting facts about correctly identified players.

Impressively, a subsequent question: “Are there any others?” provided another seemingly accurate list of results. ChatGPT remembered the context of my first query and answered appropriately. Despite this, further investigation confirmed that, once again, not all of the information returned was correct.

As even this tiny sample shows, any organization needs to tread carefully when considering implementing ChatGPT and other AI-powered solutions in their current form. It’s quite possible that they lead to more problems than they solve.
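A lightweight guardrail against this kind of factual drift is to validate model answers against an authoritative reference before surfacing them to users. A minimal sketch of that idea follows; the roster and the model output here are illustrative stand-ins, not verified MVP data:

```python
def validate_answers(model_answers, authoritative_set):
    """Split a model's answers into verified and unverified,
    using a trusted reference as the source of truth."""
    verified = [a for a in model_answers if a in authoritative_set]
    unverified = [a for a in model_answers if a not in authoritative_set]
    return verified, unverified

# Hypothetical data: a trusted roster vs. what a model returned.
trusted = {"Ernie Banks", "Ryne Sandberg", "Sammy Sosa", "Kris Bryant"}
model_output = ["Ernie Banks", "Ryne Sandberg", "Ron Santo"]

ok, flagged = validate_answers(model_output, trusted)
print("Confirmed by reference:", ok)
print("Needs human review:", flagged)
```

The hard part in practice is, of course, building and maintaining that authoritative set, which is exactly the curation work a keyword search index already forces you to do.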

Here is a list of the top issues to consider before embarking on an AI-based search solution like ChatGPT.

Accuracy Issues

For all their potential, current implementations are haunted by one fact: they can return blatantly false information. As shown above, a sizable portion of the answers were wrong, especially in the follow-up response. Unfortunately, this is a common experience.

Further, there is no reference information returned with the result. This produces more questions than it does answers. What is the “source of truth” for the query response? What authoritative document states this information that can be referenced and verified?

Granted, when you perform a search on a traditional keyword search engine, you can sometimes get nefarious, outdated, or incorrect results. Still, these search engines are not selling the promise that they’re returning the single, definitive answer to your question. You are presented with a list to sift through and make your final decision on what is relevant to your particular needs.

While it’s entertaining to ask ChatGPT –  “What is Hamlet’s famous spoken line and repeat it back to me in a pirate’s voice” – would you really want to base an important business decision on feedback that is often inaccurate and unverifiable? All it takes is being burnt by one wrong answer for your users to lose faith in the system.

Complexity and Expense

I like to joke with my clients that we can build any solution quickly, cheaply, or impressively, but that they have to pick two of the three. With an AI-based solution like ChatGPT, you may only get to pick one. Implementing an AI solution is inherently complex and expensive, and there’s no “point and click, out of the box” option. Relevant tasks to prepare AI for the enterprise include:

  • Designing and planning for both hardware and software,
  • Collecting relevant and accurate data to feed into the system,
  • Building relevant models and training them about your domain-specific knowledge,
  • Developing a user interface,
  • Testing and analyzing your results, then iterating, perhaps multiple times, to make improvements; and
  • Operationalizing the system into your existing infrastructure, including data integration, support, and monitoring.

Additionally, projects like these require developers with niche, advanced skills. It’s difficult enough to find experienced developers to implement basic keyword search solutions, let alone advanced AI logic. Developers who can successfully build these AI-based solutions are few and far between, and in software development, the time of highly skilled developers comes at a significant cost.

Lack of Explainability

AI-based solutions like ChatGPT tend to be “black box” solutions: although powerful, the logic they use to return results is virtually impossible to explain to a user, if that logic is even available.

With traditional search engines, the scoring algorithms to rank results are easier to understand. A developer can compare the scores between documents in the result set and quickly understand why one appears higher than the other. Most importantly, this process can be explained to the end user, and adjustments to the scoring can be made easily through search relevancy tuning.
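To make the contrast concrete, here is a toy sketch of the kind of per-term score breakdown a traditional engine can expose. This is a simplified TF-IDF scorer written for illustration, not the ranking formula of any particular product:

```python
import math
from collections import Counter

def explainable_scores(query, docs):
    """Score documents against a query with a per-term TF-IDF breakdown,
    so the ranking can be explained to an end user."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    results = []
    for i, tokens in enumerate(tokenized):
        tf = Counter(tokens)
        breakdown = {}
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)  # document frequency
            if df == 0:
                continue
            idf = math.log(1 + n / df)
            breakdown[term] = tf[term] * idf  # each term's contribution
        results.append((i, sum(breakdown.values()), breakdown))
    return sorted(results, key=lambda r: -r[1])

docs = ["enterprise search tuning", "search relevancy tuning guide", "cooking recipes"]
for doc_id, score, why in explainable_scores("search tuning", docs):
    print(doc_id, round(score, 3), why)
```

Because every document’s score is the sum of visible per-term contributions, a developer can show a user exactly why one result outranks another, and adjust weights during relevancy tuning. No comparable breakdown exists for an LLM’s generated answer.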

Searching in the enterprise is a different paradigm than the impersonal world of Google, Amazon, and e-commerce search applications. Your users are employees, and you must ensure they are empowered to have productive search experiences. If users can’t intuitively understand why a particular result is showing up for their query, they’re more likely to question the tool’s accuracy. This is especially true for certain users, like librarians, legal assistants, or researchers, who have very specific search requirements and need to understand the logic of the search engine before they trust it.

User Experience and Readiness

The user experience of a tool like ChatGPT is markedly different from that of traditional enterprise search. For starters, many of the rich features users have grown accustomed to (faceting, hit highlighting, phrase searching) are currently unavailable in ChatGPT.

Furthermore, consider whether your users are actually ready to leverage an AI-based solution. For example, how do they normally search? Are they entering one or two keywords, or are they advanced enough to ask natural language questions? If they’re accustomed to using keywords, a single-term query won’t produce markedly better results in an AI-based solution than in a traditional search engine.
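One way to gauge that readiness is to classify queries from your existing search logs by style. The heuristic below is a rough sketch (the length threshold and question-word list are assumptions you would tune against your own logs):

```python
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how", "which"}

def classify_query(query):
    """Rough heuristic: flag natural-language questions as candidates for
    an AI-based pipeline, and short keyword queries for traditional search."""
    tokens = query.lower().rstrip("?").split()
    if len(tokens) >= 4 and (tokens[0] in QUESTION_WORDS or query.endswith("?")):
        return "natural_language"
    return "keyword"

print(classify_query("quarterly report"))                   # short keyword query
print(classify_query("Which Cubs players have won MVPs?"))  # full question
```

If the vast majority of logged queries come back as `keyword`, that is a signal your users may not yet get much benefit from a conversational interface.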

Conclusion

Although the current version of ChatGPT may not deliver immediate value to your organization, it still has significant potential. We’re focusing our current research on a couple of areas in particular. First, its capabilities around categorization and auto-summarization are very promising and could easily be leveraged in tandem with the more ubiquitous keyword search engines. Categorization lets you tag your content with key terms and provides rich metadata that powers functionality like facets. Meanwhile, auto-summarization creates short abstracts of your lengthy documents. These abstracts, properly indexed into your search engine, can serve as the basis for providing more accurate search results.
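As a sketch of that hybrid approach, imagine an LLM (stubbed out below with a deterministic fake) returning tags and an abstract that are then stored as ordinary metadata in a keyword index. The function names and the enrichment logic are illustrative assumptions, not a specific product API:

```python
def llm_enrich(text):
    """Stub standing in for an LLM call that categorizes and summarizes.
    A real pipeline would call a model; here we fake it deterministically."""
    words = [w.strip(".,").lower() for w in text.split()]
    tags = sorted({w for w in words if len(w) > 8})  # crude "categorization"
    abstract = text[:60].rstrip() + "..."            # crude "auto-summarization"
    return {"tags": tags, "abstract": abstract}

def index_document(index, doc_id, text):
    """Store enriched metadata alongside the raw text so a keyword engine
    can facet on the tags and match queries against the abstract."""
    enriched = llm_enrich(text)
    index[doc_id] = {"text": text, **enriched}

index = {}
index_document(index, "km-001",
               "Taxonomies and ontologies improve enterprise findability "
               "by powering categorization and faceted navigation.")
print(index["km-001"]["tags"])
print(index["km-001"]["abstract"])
```

The point of the design is that the LLM only enriches the index offline; the user-facing search remains the familiar, explainable keyword engine.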

It’s perfectly acceptable to be equally impressed by the promise of tools like ChatGPT yet skeptical of how well their current offerings will meet your real-world search needs. If your organization is grappling with this decision, contact us, and we can help you navigate through this exciting journey.


]]>