Semantic Search Advisory and Implementation for an Online Healthcare Information Provider https://enterprise-knowledge.com/semantic-search-advisory-and-implementation-for-an-online-healthcare-information-provider/ Tue, 22 Jul 2025 14:13:12 +0000 https://enterprise-knowledge.com/?p=24995 The medical field is an extremely complex space, with thousands of concepts that are referred to by vastly different terms. These terms can vary across regions, languages, areas of practice, and even from clinician to clinician. Additionally, patients often communicate ... Continue reading


The Challenge

The medical field is an extremely complex space, with thousands of concepts that are referred to by vastly different terms. These terms can vary across regions, languages, areas of practice, and even from clinician to clinician. Additionally, patients often communicate with clinicians using language that reflects a more elementary understanding of health. This complicates the experience for patients trying to find resources relevant to topics such as medical conditions or treatments, whether through search, chatbots, recommendations, or other discovery methods, and can lead to confusion during stressful situations, such as trying to find the right specialist or treat an uncommon condition.

A major online healthcare information provider engaged EK to improve both their consumer-facing and clinician-facing natural language search and discovery platforms in order to deliver faster and more relevant results and recommendations. Their consumer-facing web pages aimed to connect consumers with healthcare providers when searching for a condition, with consumers often using terms or phrases that may not exactly match medical terminology. In contrast, the clinicians who purchased licenses to the provider’s content required a fast and accurate method of searching for content regarding various conditions. They work in time-sensitive settings where rapid access to relevant content could save a patient’s life, and they often use synonymous acronyms or domain-specific jargon that complicates the search process. The client desired a solution that could disambiguate between concepts and match certain concepts to a list of potential conditions. EK was tasked with refining these search processes to provide both sets of end users with accurate content recommendations.

The Solution

Leveraging both industry and organizational taxonomies for clinical topics and conditions, EK architected a search solution that could take both the technical terms preferred by clinicians and the more conversational language used by consumers and match them to conditions and relevant medical information. 

To improve search while maintaining a user-friendly experience, EK worked to:

  1. Enhance keyword search through metadata enrichment;
  2. Enable natural language search using large language models (LLMs) and vector search techniques; and
  3. Introduce advanced search features post-initial search, allowing users to refine results with various facets.

The core components of EK’s semantic search advisory and implementation included:

  1. Search Solution Vision: EK collaborated with client stakeholders to determine and implement business and technical requirements with associated search metrics. This would allow the client to effectively evaluate LLM-powered search performance and measure levels of improvement. This approach focused on making the experience faster for clinicians searching for information and for consumers seeking to connect with a doctor. This work supported the long-term goal of improving the overall experience for consumers using the search platform. The choice of LLM and associated embeddings played a key role: by selecting the right embeddings, EK could improve the association of search terms, enabling more accurate and efficient connections, which proved especially critical during crisis situations. 
  2. Future State Roadmap: As part of the strategy portion of this engagement, EK worked with the client to create a roadmap for deploying the knowledge panel to the consumer-facing website in production. This roadmap involved deploying and hosting the content recommender, further expanding the clinical taxonomy, adding additional filters to the knowledge panel (such as insurance networks and location data), and adding search features such as autocomplete and type-ahead search. Setting future goals after implementation, EK suggested the client use machine learning methods to classify consumer queries based on language and predict their intent, as well as establish a way to personalize the user experience based on collected behavioral data and characteristics.
  3. Keyword and Natural Language Search Enhancement: EK developed a gold standard template for client experts in the medical domain to provide the ideal expected search results for particular clinician queries. This gold standard served as the foundation for validating the accuracy of the search solution in pointing clinicians to the right topics. Additionally, EK used semantic clustering and synonym analysis in order to identify further search terms to add as synonyms into the client’s enterprise taxonomy. Enriching the taxonomy with more clinician-specific language used when searching for concepts with natural language improved the retrieval of more relevant search results.
  4. Semantic Search Architecture Design and LLM Integration: EK designed and implemented a semantic search architecture to support the solution’s search features, connecting the client’s existing taxonomy and ontology management system (TOMS), the client’s search engine, and a new LLM. Leveraging the taxonomy stored in the TOMS and using the LLM to match search terms to taxonomy concepts based on similarity enriched the accuracy and contextualization of search results (a minimal sketch of this kind of matching follows this list). EK also wrote custom scripts to evaluate the LLM’s understanding of medical terminology and generate evaluation metrics, allowing for performance monitoring and continuous improvement to keep the client’s search solution at the forefront of LLM technology. Finally, EK created a bespoke, reusable benchmark for LLM scores, evaluating how well a given model matched natural language queries to clinical search terms and allowing the client to select the highest-performing model for consumer use.
  5. Semantic Knowledge Panel: To demonstrate the value this technology would bring to consumers, EK developed a clickable, action-oriented knowledge panel that showcased the envisioned future-state experience. Designed to support consumer health journeys, the knowledge panel guides users through a seamless flow – from conversational search (e.g., "I think I broke my ankle"), to surfacing relevant contextual information (such as web content related to terms and definitions drawn from the taxonomy), to connecting users to recommended clinicians and their scheduling pages based on their ability to treat the condition being searched (e.g., an orthopedist for a broken ankle). EK’s prototype leveraged a taxonomy of tagged keywords and provider expertise, with a scoring algorithm that assessed how many, and how well, those tags matched the user’s query. This scoring informed a sorted display of provider results, enabling users to take direct action (e.g., scheduling an appointment with an orthopedist) without leaving the search experience.
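The sketch below illustrates the kind of query-to-concept matching described above: a conversational consumer phrase is embedded and compared against clinical taxonomy concepts enriched with synonyms. It is a minimal illustration only; the model name ("all-MiniLM-L6-v2" via the sentence-transformers package) and the sample taxonomy entries are assumptions for demonstration, not the client’s actual LLM or taxonomy.

```python
# Minimal sketch: match a conversational query to clinical taxonomy concepts by
# embedding similarity. Model name and taxonomy entries are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

taxonomy_concepts = {
    "Ankle fracture": ["broken ankle", "fractured ankle"],
    "Myocardial infarction": ["heart attack", "MI"],
    "Hypertension": ["high blood pressure", "HTN"],
}

# Embed each concept together with its synonyms so consumer phrasing can match clinical terms.
labels = [f"{concept}: {', '.join(synonyms)}" for concept, synonyms in taxonomy_concepts.items()]
concept_vectors = model.encode(labels, convert_to_tensor=True)

query = "I think I broke my ankle"
query_vector = model.encode(query, convert_to_tensor=True)

similarities = util.cos_sim(query_vector, concept_vectors)[0]
best = int(similarities.argmax())
print(f"Best match: {list(taxonomy_concepts)[best]} (score {float(similarities[best]):.2f})")
```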

The EK Difference

EK’s expertise in the semantic layer, solution architecture, artificial intelligence, and enterprise search came together to deliver a bespoke, unified solution that returned more accurate, context-aware information for clinicians and consumers. By collaborating with key medical experts to enrich the client’s enterprise taxonomy, EK’s semantic experts combined their insights on LLMs with their experience applying taxonomies and semantic similarity in natural language search use cases, placing the client in the best position to enable accurate search. EK also upskilled the client’s technical team on semantic capabilities and the architecture of the knowledge panel through knowledge transfers and paired programming, so that they could continue to maintain and enhance the solution in the future.

Additionally, EK’s solution architects, possessing deep knowledge of enterprise search and artificial intelligence technologies, were uniquely positioned to provide recommendations on the most advantageous way to seamlessly integrate the client’s TOMS and existing search engine with an LLM specifically developed for information retrieval. While a general-purpose LLM could perform these tasks to some extent, EK helped design a purpose-built semantic search solution leveraging a specialized LLM that better identified and disambiguated user terms and phrases.

Finally, EK’s search experts defined and monitored key search metrics with the client’s team, enabling them to track improvement over time, identify trends, and suggest enhancements to match. These improvements resulted in a solution the client could be confident in and trust to be accurate.

The Results

The delivery of a semantic search prototype with a clear path to a production, web-based solution opened the door to greatly augmented search capabilities across the organization’s products. Overall, this solution allows both healthcare patients and clinicians to find exactly what they are looking for using a wide variety of terms.

As a result of EK’s semantic search advisory and implementation efforts, the client was able to:

  1. Empower potential patients to use a web-based semantic search platform to search for specialists who can treat their conditions and to quickly and easily find care;
  2. Streamline the content delivery process in critical, time-sensitive situations such as emergency rooms by providing rapid and accurate content that highlights and elaborates on potential diagnoses and treatments to healthcare professionals; and
  3. Identify potential data and metadata gaps in the healthcare information database that the client relies on to populate its website and recommend content to users.

Looking to improve your organization’s search capabilities? Want to see how LLMs can power your semantic ecosystem? Learn more from our experience or contact us today.

IA Fast-Track to Search-Focused AI Solutions: Information Architecture Conference 2024 https://enterprise-knowledge.com/ia-fast-track-to-search-focused-ai-solutions-information-architecture-conference-2024/ Tue, 30 Apr 2024 13:28:54 +0000 https://enterprise-knowledge.com/?p=20419 Sara Mae O’Brien-Scott and Tatiana Baquero Cakici, Senior Consultants at Enterprise Knowledge (EK), presented “AI Fast Track to Search-Focused AI Solutions” at the Information Architecture Conference (IAC24) that took place on April 11, 2024 in Seattle, WA. In their presentation, … Continue reading

Sara Mae O’Brien-Scott and Tatiana Baquero Cakici, Senior Consultants at Enterprise Knowledge (EK), presented “AI Fast Track to Search-Focused AI Solutions” at the Information Architecture Conference (IAC24) that took place on April 11, 2024 in Seattle, WA.

In their presentation, O’Brien-Scott and Cakici focused on what Enterprise AI is, why it is important, and what it takes to empower organizations to get started on a search-based AI journey and stay on track. The presentation explored the complexities of enterprise search challenges and how IA principles can be leveraged to provide AI solutions through the use of a semantic layer. O’Brien-Scott and Cakici showcased a case study where a taxonomy, an ontology, and a knowledge graph were used to structure content at a healthcare workforce solutions organization, providing personalized content recommendations and increasing content findability.

In this session, participants gained insights about the following:

  • The most common AI categories and use cases;
  • Recommended steps to design and implement taxonomies and ontologies, ensuring they evolve effectively and support the organization’s search objectives;
  • Taxonomy and ontology design considerations and best practices;
  • Real-world AI applications that illustrate the value of taxonomies, ontologies, and knowledge graphs; and
  • Tools, roles, and skills needed to design and implement AI-powered search solutions.

Content Engineering for Personalized Product Release Notes https://enterprise-knowledge.com/content-engineering-for-personalized-product-release-notes/ Tue, 14 Nov 2023 16:36:46 +0000 https://enterprise-knowledge.com/?p=19222 The Challenge A global software vendor with a vast portfolio of cloud software products needed to deliver more personalized, timely, and relevant release notes of upcoming product changes in a continuous implementation, continuous delivery (CI/CD) environment to both internal and … Continue reading


The Challenge

A global software vendor with a vast portfolio of cloud software products needed to deliver more personalized, timely, and relevant release notes for upcoming product changes in a continuous implementation, continuous delivery (CI/CD) environment to both internal and external end users. The way release notes were structured and delivered made the content hard to find and of limited relevance to external client organizations’ product administrators and to individual customers within their product instances. The CI/CD delivery model also meant a constant stream of releases and updates to system operations, increasing the likelihood of disruptions when introducing the latest change.

Customers struggled to understand product changes and reported they did not receive enough information to prepare their organization for continual product changes, resulting in diminished customer trust in the software vendor.

The Solution

As part of this end-to-end advanced content engagement, EK defined the Current and Target State of content management and the content model while accounting for new content authoring processes, the software vendor’s unique brand voice, and their long-term content operations vision. EK focused on developing a comprehensive content model supporting structured and componentized release note content, improving user experience (UX) interactions, and leveraging the organization’s taxonomy to filter the content for more personalized delivery.

EK facilitated a series of focus groups across the software vendor’s various teams of content authors, marketing, technical SMEs, and executive leadership to define the current state of content authoring processes and content management, and to ensure cross-team alignment on the target state for authoring, content management, and structured content model design. EK’s team collaborated with the client’s SMEs and the vendor of the solution’s CMS when designing the high-level solution architecture through a content engineering and content strategy lens, leveraging the client’s existing tech stack and enabling the implementation of a structured and componentized content model in a headless CMS environment. EK carefully considered stakeholder requirements in its delivery of the following:

Holistic Solution Architecture Design – EK collaboratively developed a solution architecture for data integration and data flow between existing systems in the client organization’s technology ecosystem. The solution design also supports structured content authoring, publishing processes, and multi-channel contextualized delivery for three specific delivery channels.

Content Operations Enhancement – EK leveraged human-centered design practices when developing content governance around authoring processes that streamlined authoring workflows and when contributing to change management planning for author training. EK’s efforts included developing and designing a structured content model and taxonomy design that enabled content reuse in multiple personalized contexts without duplication. 

Content Personalization Strategy – EK’s consultants defined a content strategy that accounted for the complexities around content, process, and personalization requirements gathered in stakeholder sessions to transform a legacy content model into structured content components. EK’s structured content design for in-product release notes provides comprehensive and personalized information about each new feature or fix within the specific product version purchased by an enterprise customer, in addition to delivering content about relevant features within an individual subscriber’s product version.

The componentized content model and newly enhanced metadata definition enabled tailored multi-channel delivery of product release notifications and contextualized in-product announcements. The comprehensive content model also supports a product administration interface that provides comprehensive and personalized information about each new feature or fix relevant to the customer’s specific instance. EK’s team developed the high-level solution architecture for data integration and data flow between the content management system and the internal environments of the vendor’s portfolio of products to support new authoring and publishing processes.
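To make the idea of componentized release-note content more concrete, the minimal sketch below models a release-note component whose metadata (products, audiences, channels) drives personalized, multi-channel delivery. The field names, taxonomy values, and example notes are illustrative assumptions, not the vendor’s actual content model.

```python
# Minimal sketch of a componentized release-note model; field names and taxonomy
# values are illustrative assumptions, not the vendor's actual content model.
from dataclasses import dataclass, field

@dataclass
class ReleaseNoteComponent:
    title: str
    body: str
    change_type: str                                     # e.g. "feature", "fix", "deprecation"
    products: list[str] = field(default_factory=list)    # taxonomy: affected products/versions
    audiences: list[str] = field(default_factory=list)   # taxonomy: "admin", "end_user", ...
    channels: list[str] = field(default_factory=list)    # "in_product", "email", "docs_site"

def select_components(components, product, audience, channel):
    """Filter reusable components down to what one customer sees in one channel."""
    return [
        c for c in components
        if product in c.products and audience in c.audiences and channel in c.channels
    ]

notes = [
    ReleaseNoteComponent("New export API", "Admins can now export audit logs...",
                         "feature", ["analytics-cloud-9.2"], ["admin"], ["in_product", "email"]),
    ReleaseNoteComponent("Chart rendering fix", "Fixed stacked bar chart rendering...",
                         "fix", ["analytics-cloud-9.2"], ["end_user"], ["in_product"]),
]

# An admin viewing version 9.2 in-product sees only the export API note.
print(select_components(notes, "analytics-cloud-9.2", "admin", "in_product"))
```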

The EK Difference

EK consultants leveraged their vast knowledge and expertise in advanced content management practices to solve the client’s content and system design challenges, producing authoring, modeling, and delivery architecture design requirements as well as content operations processes. EK’s expert consultants conducted multiple interviews, facilitated collaborative working sessions, and engaged in supplemental conversations to align disparate stakeholders across various product and content teams. The outcome of this human-centered design approach enabled our consultants to holistically understand stakeholders’ needs and bridge gaps across stakeholders’ diverse goals and the company’s long-term vision.

EK’s approach to stakeholder alignment merged personal stakeholder interaction, which enabled the development of a comprehensive content strategy, with a technical solution architecture that supported its recommendations. EK’s technology experts leveraged existing system architecture processes and data flows in our end-to-end delivery of content engineering, content strategy, and content architecture design in order to reduce technology costs and training burden.

EK collaboratively worked with the client when defining the distinct internal authoring experience, external end-user experience, and content experiences necessary for specific delivery channels. The outcome of EK’s end-to-end engagement provided the client with the right content structure, system architecture, and content strategy to successfully deliver personalized and relevant content promptly to end-users within a continuous implementation, continuous delivery (CI/CD) environment.

The Results

EK’s content design and implementation guidance enabled personalized and consistent delivery of product release notifications to the software vendor’s numerous product administrators and customers at the point of need through distinct delivery channels. 

Content governance and editorial guidelines ensure authoring processes and experiences are now intuitive and flexible. The structured content model for componentized content and enhanced metadata made content creation more efficient by surfacing accurate references to related content and components for reuse in the authoring environment, increasing content authors’ awareness of existing content and reducing rework.

The new componentized content model supports customized and contextualized end-user experiences, which improved overall customer engagement. Since implementing the multi-channel delivery strategy, structured content model, and system architecture integration, the software vendor has seen a 300% increase in product release notification engagement. Customers now consume only relevant product release notifications and announcement content, and they receive it in a more timely manner, giving them more time to prepare for changes and mitigate organizational risk. End users can now quickly reference content within product applications or the comprehensive product documentation web page, and they interact with enriched information on each change, how to prepare for it, who it impacts, and when to expect it.

Knowledge Portal Architecture Explained https://enterprise-knowledge.com/knowledge-portal-architecture-explained/ Thu, 09 Nov 2023 16:00:14 +0000 https://enterprise-knowledge.com/?p=19157 At its core, a Knowledge Portal consists of five components: Web UI, API Layer, Enterprise Search Engine, Knowledge Graph, and Taxonomy/Ontology Management System. Continue reading

In today’s data-driven world, the need for efficient knowledge management and dissemination has never been more critical. Users are faced with an overwhelming amount of content and information, and thus need an efficient, intuitive, and structured way to retrieve it. Additionally, organizational knowledge is often inconsistent, incomplete, and dispersed among various systems.

The solution? A Knowledge Portal: a dynamic and interconnected system designed to transform the way we manage, access, and leverage knowledge. This provides users with a comprehensive Enterprise 360 view of all of the information they need to successfully do their jobs. At its core, a Knowledge Portal consists of five components: Web UI, API Layer, Enterprise Search Engine, Knowledge Graph, and Taxonomy/Ontology Management System. 

A Knowledge Portal consists of five main components, described below:

  1. Web UI: Provides users with a way to interact with the portal’s content, incorporating features such as search functionality, aggregation pages, and navigation menus.
  2. API Layer: Serves the Web UI consolidated, streamlined information via various endpoints. Enables other client applications to integrate with and consume the connected, cleaned Knowledge Portal content.
  3. Enterprise Search Engine: Indexes and retrieves relevant information to display in the Knowledge Portal based on user queries. Allows relevant results from all integrated enterprise repositories to be discovered in the Portal.
  4. Knowledge Graph: Represents the structure and connections of the organization’s knowledge. Captures concepts, entities, attributes, and their relationships in a graph database format. Enhances search results by providing contextual information and connected content.
  5. Taxonomy and Ontology Manager: Defines and maintains controlled vocabularies, taxonomies, and ontologies, which allow for consistent and relevant metadata tagging and content organization. Ensures search precision and accuracy.

The diagram below displays how these five components interact within the context of an Enterprise Knowledge Portal implementation.

This diagram displays how the components of a Knowledge Portal interact with one another. At the bottom of the diagram, there are various data repositories, content management systems, and other enterprise data stores. Content from these repositories will be indexed by the Enterprise Search Engine and categorized/tagged by the Taxonomy and Ontology Manager. The tagged/categorized content will be ingested into the Knowledge Graph where it can be associated and linked to more organizational knowledge. The search engine can also index content from the Knowledge Graph. Then the backend API layer exposes and serves this tagged, indexed content from the Search Engine and Knowledge Graph. The API layer can be leveraged by various existing or future client applications. For the Knowledge Portal specifically, the API Layer serves content to the Knowledge Portal Web UI, which ultimately provides the end user an Enterprise 360 view of their organization’s content and knowledge.
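As a rough illustration of the API layer’s role in this flow, the following sketch shows a single endpoint that merges search-engine hits with knowledge graph context before serving the Web UI. The framework choice (FastAPI), the endpoint path, and the helper functions are assumptions for demonstration; a real Knowledge Portal would call its actual search engine and graph database here.

```python
# Hypothetical sketch of the API layer: one endpoint that merges search-engine hits
# with knowledge graph context before serving the Web UI. Framework choice, endpoint
# path, and helper functions are assumptions for illustration.
from fastapi import FastAPI

app = FastAPI()

def search_engine_results(query: str) -> list[dict]:
    # Placeholder for a call to the enterprise search engine's index.
    return [{"title": "Onboarding guide", "source": "Document repository", "score": 0.92}]

def knowledge_graph_context(query: str) -> dict:
    # Placeholder for a graph query returning related entities and definitions.
    return {"entity": "Onboarding", "related": ["HR policies", "IT setup checklist"]}

@app.get("/api/portal/search")
def portal_search(q: str) -> dict:
    """Aggregate search hits and graph context into one response for the portal UI."""
    return {
        "query": q,
        "results": search_engine_results(q),
        "context": knowledge_graph_context(q),
    }
```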

Collectively, these components create a unified platform, empowering both organizations and individuals to discover information, break down organizational silos, and make informed decisions. 

EK has expertise in Knowledge Portal implementations, and we would love to help you take the next step on your knowledge management journey. Please contact us for more information.

Special thank you to Adam Eltarhoni for his contributions to this infographic! 

Wahl Keynoting Global Tacit KM Conference https://enterprise-knowledge.com/wahl-keynoting-global-tacit-km-conference/ Wed, 01 Nov 2023 20:13:11 +0000 https://enterprise-knowledge.com/?p=19180 Enterprise Knowledge CEO Zach Wahl will be delivering the opening keynote address at the upcoming Global Tacit Knowledge Management Forum (GO-TKM) conference. This is the first year of the conference, which will be held in Davos, Switzerland on the 13th … Continue reading


Enterprise Knowledge CEO Zach Wahl will be delivering the opening keynote address at the upcoming Global Tacit Knowledge Management Forum (GO-TKM) conference. This is the first year of the conference, which will be held in Davos, Switzerland on the 13th and 14th of November. The conference will be a hybrid event, available both in person and online.

Wahl will be delivering a talk titled “Seeing the Unseen: Empowering Progress through Tacit Knowledge.” The talk will discuss how to identify high value moments of knowledge capture, remove the impediments to knowledge sharing, and develop a mature knowledge sharing culture and organization. The talk will also cover the trending topics of artificial intelligence, knowledge graphs, and large language models in capturing, managing, enhancing, and sharing tacit knowledge in order for it to deliver the greatest value possible.

Wahl is also being recognized by the GO-TKM foundation as one of the 50 Most Influential People in Tacit Knowledge Management.

For more information on the conference, to view the complete agenda, and to register, visit: https://go-tkm.org/davos-forum/

Exploring Vector Search: Advantages and Disadvantages https://enterprise-knowledge.com/exploring-vector-search-advantages-and-disadvantages/ Thu, 26 Oct 2023 17:22:14 +0000 https://enterprise-knowledge.com/?p=19120 The search for information is at the core of enhancing productivity and decision-making in the enterprise. In today’s digital age, searching for information has become more intuitive. With just a few clicks, we can explore vast knowledge and gain once-inaccessible … Continue reading

The search for information is at the core of enhancing productivity and decision-making in the enterprise. In today’s digital age, searching for information has become more intuitive. With just a few clicks, we can explore vast knowledge and gain once-inaccessible insights. The ability to search for information empowers individuals and organizations to stay informed, make educated decisions, and ultimately drive success. The introduction of numerous search strategies and frameworks has made access easier, but it has also presented a difficult choice for companies that must select among various search systems to deliver knowledge to their consumers at their point of need. Vector search is one of the latest enterprise search frameworks that leverages the power of large language models (LLMs) to index and retrieve content. In this article, I will examine the main advantages and disadvantages of vector search to consider when choosing the framework for your enterprise search initiative.

 

Advantages of Vector Search

One of vector search’s main advantages is its ability to deliver highly relevant and accurate search results. Unlike traditional keyword-based search systems, which only match exact words or phrases, vector search considers the semantic meaning and context of the search query. For example, if you search for “apple stock”, keyword search will retrieve content related to those keywords, which may include food recipes or references to the “Big Apple”, while vector search will retrieve content in the financial domain. Moreover, even if the user does not use exact keywords, the system can still understand the query’s intent and return relevant results based on semantic similarity and context. This functionality dramatically improves the user experience and increases the likelihood of quickly and efficiently finding the desired information. Furthermore, vector search systems are well-suited to handling conversational search queries and understanding user intent, thus enhancing user engagement.
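A minimal sketch of the “apple stock” example follows, assuming the sentence-transformers package and a small public embedding model; it contrasts naive keyword overlap with embedding similarity and is illustrative only, not a production search pipeline.

```python
# Minimal sketch of the "apple stock" example: naive keyword overlap vs. embedding
# similarity. The model name is an assumption; any sentence-embedding model would do.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "apple stock"
documents = [
    "Apple Inc. shares rose after the quarterly earnings report.",
    "A simple recipe for apple pie with a flaky crust.",
    "Visiting the Big Apple: a weekend guide to New York City.",
]

# Keyword matching: count exact term overlap (the recipe and travel guide also match "apple").
keyword_scores = [sum(term in doc.lower() for term in query.split()) for doc in documents]

# Vector search: cosine similarity between query and document embeddings.
query_vec = model.encode(query, convert_to_tensor=True)
doc_vecs = model.encode(documents, convert_to_tensor=True)
semantic_scores = util.cos_sim(query_vec, doc_vecs)[0]

for doc, kw, sem in zip(documents, keyword_scores, semantic_scores):
    print(f"keyword={kw}  semantic={float(sem):.2f}  {doc}")
```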

Another main advantage of vector search is the versatility of the content and use cases it can accommodate by leveraging the multiple language tasks its underlying LLM can perform. Three main features drive the primary differentiation between vector search and other search methods:

  • Multilingual Capabilities: Vector search engines have LLMs that interpret and process linguistic nuances, ensuring accurate information retrieval even in complex multilingual settings. Additionally, the multilingual capabilities of vector search engines make them invaluable tools for cross-lingual information retrieval, facilitating knowledge sharing and collaboration across language barriers.
  • Summarization, Named Entity Recognition (NER), and Natural Language Generation (NLG): Advanced vector search engines can efficiently summarize top results or lengthy documents. They can also use named entity recognition (NER) to identify, extract, and classify named entities such as people, organizations, and locations from unstructured content. Furthermore, generative AI enables these search engines to generate human-like text, which makes them useful for tasks such as content creation and automated report generation. These features benefit chatbots, virtual assistants, and customer support applications.
  • Recommendation System and Content-to-Content Search: Vector search engines go beyond retrieving search results; they can also power recommendation systems and content-to-content search. By representing content as vectors and measuring their similarity, vector search can efficiently identify duplicate or closely related documents. It is a valuable tool for organizations aiming to maintain content quality and integrity and those seeking to deliver relevant content recommendations to their users. This capability allows vector search engines to excel in plagiarism detection, content recommendation, and content clustering applications (a minimal sketch of this similarity check follows this list).
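As a minimal illustration of the content-to-content use case, the sketch below computes pairwise cosine similarity over document vectors and flags near-duplicates above a threshold. The vectors are hard-coded stand-ins; real ones would come from an embedding model.

```python
# Minimal sketch of content-to-content similarity for duplicate detection. The vectors
# are hard-coded stand-ins; real ones would come from an embedding model.
import numpy as np

doc_ids = ["user-guide-v1", "user-guide-v1-copy", "pricing-faq"]
vectors = np.array([
    [0.90, 0.10, 0.40],
    [0.88, 0.12, 0.41],   # nearly identical to user-guide-v1
    [0.05, 0.95, 0.20],
])

# Normalize rows so the dot product equals cosine similarity.
normalized = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
similarity = normalized @ normalized.T

threshold = 0.95
for i in range(len(doc_ids)):
    for j in range(i + 1, len(doc_ids)):
        if similarity[i, j] >= threshold:
            print(f"Possible duplicates: {doc_ids[i]} <-> {doc_ids[j]} ({similarity[i, j]:.2f})")
```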

In summary, the advantages of vector search are numerous and compelling. Its ability to provide highly relevant and context-aware search results, its versatility in accommodating diverse language tasks, and its support for summarization, named entity recognition, natural language generation, recommendation systems, and content-to-content searches demonstrate its relevance as part of a comprehensive search strategy. However, it’s also essential to explore this approach’s potential drawbacks. Let’s shift our focus to the challenges of vector search to gain a more comprehensive understanding of its implications and limitations in various contexts.

 

Disadvantages of Vector Search

Vector search undoubtedly presents a wide range of opportunities but has challenges and limitations. One of the main disadvantages is the complex implementation process required for vector search. It can require significant computational power and expertise to properly design and implement the algorithms and models needed for vector search. It is essential to have a solid understanding of these drawbacks to conduct an accurate and thorough analysis of the viability of vector search in various settings. Here are some additional disadvantages of vector search:

  • Loss of Transparency and Hidden Bias: The inner workings of vector search engines are often opaque since they rely on pre-trained LLMs to vectorize the content. This lack of transparency can be a drawback in scenarios where you must explain or justify search results, such as in regulatory compliance or auditing processes. In these situations, the inability to explain clearly how the vector search engine arrived at specific results can raise concerns regarding bias or unfairness. Additionally, the lack of transparency can hinder efforts to identify and rectify potential issues or biases in the search algorithm.
  • Challenges in Specialized and Niche Contexts: Vector search encounters difficulties with rare or niche items, struggles to capture nuanced semantic meanings, and may lack precision in highly specialized fields. This limitation can lead to suboptimal search results in industries where precise terminology is crucial, such as legal, healthcare, or scientific research. In these cases, a graph-based semantic search engine would be ideal because it could leverage an ontology to capture the intricate relationships and connections between specialized terms and concepts defined in an industry or enterprise taxonomy.
  • Performance vs. Accuracy Trade-off: LLM-based content vectorization can produce vectors of varying dimensions. The higher the dimensionality, the more information can be kept in vectors, resulting in more exact search results. The high dimensionality, however, comes at a higher processing cost and slower response times. As a result, vector search engines use approximate nearest neighbor (ANN) techniques to accelerate the process while sacrificing some search precision. These algorithms provide outcomes similar, but not identical, to an exhaustive nearest-neighbor search. It is a trade-off between speed and precision, and organizations must decide how much precision they are willing to give up for faster search speeds (a minimal sketch of this trade-off follows this list).
  • Privacy Concerns: Handling sensitive or personal data with vector search engines, especially when using APIs to access and train LLM services, may raise privacy concerns. If not carefully managed, the training and utilization of such models could result in unintentional data exposure, leading to data breaches or privacy violations.
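The speed-versus-precision trade-off can be illustrated with a small sketch, assuming the faiss library: an exhaustive (exact) index and an HNSW-based approximate index are built over the same random vectors, and the ANN index’s recall against the exact results quantifies the precision given up for speed.

```python
# Minimal sketch of the speed/precision trade-off, assuming the faiss library:
# an exhaustive (exact) index vs. an HNSW approximate (ANN) index over the same vectors.
import numpy as np
import faiss

dim = 384
rng = np.random.default_rng(0)
doc_vectors = rng.random((10_000, dim)).astype("float32")
query = rng.random((1, dim)).astype("float32")

exact_index = faiss.IndexFlatL2(dim)        # exhaustive search: exact but slower at scale
exact_index.add(doc_vectors)

ann_index = faiss.IndexHNSWFlat(dim, 32)    # graph-based ANN: faster, approximate
ann_index.add(doc_vectors)

k = 5
_, exact_ids = exact_index.search(query, k)
_, ann_ids = ann_index.search(query, k)

# Recall@k of the ANN index against the exact results quantifies the precision given up for speed.
recall = len(set(exact_ids[0]) & set(ann_ids[0])) / k
print(f"ANN recall@{k}: {recall:.2f}")
```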

Overall, the complex implementation process demands computational power and expertise, while the lack of transparency and potential hidden biases can raise concerns, particularly in compliance- and fairness-sensitive contexts. Vector search struggles in specialized fields and encounters a trade-off between search speed and precision when employing approximate nearest-neighbor algorithms to deal with high vector dimensionality and content at scale. Furthermore, handling sensitive data poses privacy risks if not carefully managed. Understanding these disadvantages is pivotal to making informed decisions regarding adopting vector search.

 

Conclusion

In conclusion, vector search represents a significant leap in search technology but requires careful assessment to maximize its benefits and mitigate potential limitations in diverse applications. As knowledge management and AI continue to evolve, the right search strategy can be a game-changer in unlocking the full potential of your organization’s knowledge assets. At EK, we recognize that adopting vector search should align with the organization’s goals, resources, and data characteristics. We recently worked with one of our clients to iteratively develop the vector search process and training algorithms to help them take advantage of their multilingual content and varied unstructured and structured data. Contact us to have our search experts work closely with you to understand your specific requirements and design a tailored search solution that optimizes the retrieval of relevant and accurate information.

Enterprise Knowledge Playing Central Role at KMWorld 2023 https://enterprise-knowledge.com/enterprise-knowledge-playing-central-role-at-kmworld-2023/ Tue, 24 Oct 2023 17:03:11 +0000 https://enterprise-knowledge.com/?p=19096 Enterprise Knowledge (EK) is once again playing a central role at the upcoming KMWorld Conference in Washington, DC. This year, EK is delivering eleven sessions throughout KMWorld and the associated events, including Taxonomy Boot Camp, Enterprise Search & Discovery, Enterprise … Continue reading

Enterprise Knowledge (EK) is once again playing a central role at the upcoming KMWorld Conference in Washington, DC. This year, EK is delivering eleven sessions throughout KMWorld and the associated events, including Taxonomy Boot Camp, Enterprise Search & Discovery, Enterprise AI World, and Text Analytics Forum. EK is offering an array of thought leadership sessions sharing approaches and methodologies. Notably, EK is also speaking with clients, delivering advanced case studies on knowledge graphs, enterprise learning solutions, and content assembly.  

 

  • ASML’s Taxonomy Adventure – Taxonomy Boot Camp – November 6th – Tatiana Baquero Cakici, EK Senior Consultant co-presenting with Daniel Canter, Learning & Knowledge Management, KM Program Manager, ASML and Dutch KM Network
  • Taxonomy Roller Coasters: Techniques to Keep Stakeholders on the Ride – Taxonomy Boot Camp – November 6th – Laurie Gray, EK Principal Consultant, co-presenting with Kate Vilches, Knowledge Management Lead, Ulteig
  • Taxonomy Showdown—Point/Counterpoint With Taxonomy Experts – Taxonomy Boot Camp – November 7th – Zach Wahl, EK CEO, and Tatiana Baquero Cakici, EK Senior Consultant
  • Nonprofit KM Journey to Success: Lessons & Learnings – KMWorld – November 7th – Sara Duane, EK Consultant
  • Enterprise KM & Learning – KMWorld – November 9th – Taylor Paschal, EK KM Consultant
  • Building KM Archetypes – KMWorld – November 9th – Taylor Paschal, EK KM Consultant
  • Enterprise AI: Using Knowledge Graphs & Data Fabric as a Pillar for AI – Enterprise AI World – November 9th – Lulit Tesfaye, EK VP & Partner, Knowledge & Data Services co-presenting with Jans Aasman, CEO, Franz Inc
  • KM & eLearning: Improving Learner Experience & Business Outcomes – KMWorld – November 9th – Deneena Lanius, EK Senior Consultant co-presenting with Melinda Geist, Digital Operations Manager, Sales and Marketing, Intel
  • Enterprise Cognitive Architectures: How LLMs Like ChatGPT Will Integrate With the Enterprise – Text Analytics Forum – November 9th – Ethan Hamilton, EK Analyst co-presenting with Joe Hilger, EK COO
  • Scaling Knowledge Graph Architectures With AI – Text Analytics Forum – November 9th – Sara Nash, EK Principal Consultant, co-presenting with Urmi Majumder, EK Principal Consultant
  • Road Maps for Future Enterprise Search & Discovery – Enterprise Search & Discovery – November 9th – EK COO Joe Hilger

To hear what our speakers are discussing during KMWorld 2023 and what you shouldn’t miss, check out this special episode of Knowledge Cast: KMWorld Speakers 2023.

EK is also hosting a booth at KMWorld and serving as a sponsor for the event. EK’s active presence continues a long-running tradition of thought leadership. EK’s book, ‘Making Knowledge Management Clickable,’ published by Springer, Knowledge Cast, the #1 rated KM podcast, and nearly 500 free and open items in EK’s knowledge base all supplement our active participation in the world’s leading KM conferences and events.

For more information on the event and to register, visit: https://secure.infotoday.com/RegForms/KMWorld/

 

About Enterprise Knowledge 

Enterprise Knowledge (EK) is a services firm that integrates Knowledge Management, Information and Data Management, Information Technology, and Agile Approaches to deliver comprehensive solutions. Our mission is to form true partnerships with our clients, listening and collaborating to create tailored, practical, and results-oriented solutions that enable them to thrive and adapt to changing needs. At the heart of these services, we always focus on working alongside our clients to understand their needs, ensuring we can provide practical and achievable solutions on an iterative, ongoing basis. 

Secure LLM Powering Semantic Search for a Multinational Development Bank https://enterprise-knowledge.com/secure-llm-powering-semantic-search-for-a-multinational-development-bank/ Tue, 17 Oct 2023 16:04:39 +0000 https://enterprise-knowledge.com/?p=19072 The Challenge A multinational development bank with 48 member countries provides loans, grants, and technical assistance to its members for a wide range of development projects to reduce poverty and inequality in the region. The projects cover many topics, including … Continue reading


The Challenge

A multinational development bank with 48 member countries provides loans, grants, and technical assistance to its members for a wide range of development projects to reduce poverty and inequality in the region. The projects cover many topics, including infrastructure, education, healthcare, and environmental sustainability. Throughout the lifecycle of their projects, their subject matter experts (SMEs) create content in the form of blogs, articles, research papers, reports, and lessons learned.

All information related to a prospective project, including details about the client and performance and experience from comparable projects, must be evaluated before proceeding, especially during the proposal phase, to plan and price the project correctly. However, it has proven challenging to make all content findable within their current search experience, which is based entirely on keyword search. The content is typically stored in a nested file system, with each project having its own location and owner, resulting in information silos. Furthermore, with four official languages (English, Spanish, French, and Portuguese), surfacing content across languages and topics has proven inefficient and time-consuming for most of their users, whose query patterns range from single keywords to lengthy phrases.

The bank recognized it needed a better search experience to deliver its knowledge assets to its users at their point of need. The solution required implementing a unified search system that could seamlessly navigate through siloed content and break down language barriers. Additionally, it needed to provide efficient and accurate results for users with diverse query patterns, regardless of the length or complexity of their search queries. 

The Solution

EK designed, developed, and deployed a hybrid search engine to address these issues. The search engine uses large language models (LLMs) to understand and interpret user requests in various languages. Using LLMs, the search engine can deliver accurate results in all four languages, eliminating the need for users to switch between languages or run separate queries. Furthermore, the hybrid approach combines traditional keyword-based search with natural language processing algorithms to provide a comprehensive and efficient search experience for customers with a wide range of query patterns.

Instead of relying solely on keywords, the search engine allows users to enter their queries more conversationally, allowing them to ask questions or provide specific details. Now, users can find the information they need quickly with an accuracy improvement in the search results, making the search engine easier to use and more intuitive for the user, resulting in an overall improvement in the quality of the user experience.

The solution is powered by an open-source vector database that uses a fine-tuned language model to automatically vectorize and index content for quick retrieval. The same language model is used at query time to vectorize the query and match it against the indexed content. Because the LLM vectorizes both the content and the query, the search engine can return relevant results based on the query’s semantic meaning. At the same time, a second process, keyword search, is initiated. The final result is a weighted score between keyword-based and vector-based search results, with weights based on several query attributes, such as length or terminology.
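The weighted blending described above can be sketched in a few lines of plain Python. The score normalization, the weighting heuristic based on query length, and the example documents are illustrative assumptions, not the bank’s actual ranking logic.

```python
# Minimal sketch of blending keyword and vector scores. Assumes both result sets are
# already normalized to 0-1; the query-length weighting heuristic is illustrative only.
def hybrid_scores(keyword_hits, vector_hits, query):
    """Combine per-document scores from keyword and vector search into one ranking."""
    # Longer, conversational queries lean on semantic similarity;
    # short, keyword-like queries lean on exact term matching.
    vector_weight = 0.7 if len(query.split()) > 3 else 0.3
    keyword_weight = 1.0 - vector_weight

    combined = {}
    for doc_id in set(keyword_hits) | set(vector_hits):
        combined[doc_id] = (
            keyword_weight * keyword_hits.get(doc_id, 0.0)
            + vector_weight * vector_hits.get(doc_id, 0.0)
        )
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)

# A conversational query favors the semantically similar documents.
print(hybrid_scores(
    {"doc-1": 0.9, "doc-2": 0.2},
    {"doc-2": 0.8, "doc-3": 0.6},
    "projects about rural water infrastructure financing",
))
```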

The EK Difference

EK formed a team of search experts, data engineers, and data scientists to plan and test the solution and provide a holistic infrastructure for the hybrid search feature. This team was able to evaluate different methods and components to ensure each element was best-in-class for the end solution. The process involved testing several vector databases and pre-trained language models, and fine-tuning a multilingual language model on the bank’s corpus to optimize the search algorithm.

Furthermore, the team researched user behavior and feedback extensively to understand users’ specific needs and preferences. Combining their expertise with the insights gathered, EK developed a highly efficient and accurate hybrid search feature that provides relevant search results and understands the context and intent behind user queries. This hybrid search feature has dramatically improved the overall user experience and made it easier for the bank’s users to find the information they require at the point of need.

The Results

The result is a powerful and user-friendly search tool that enhances productivity and eliminates language barriers for users across the bank’s subsidiaries in member countries. This search tool has been instrumental in streamlining information sharing and collaboration among users from different teams, allowing for seamless knowledge sharing and problem-solving. Furthermore, combining keyword and vector searches powered by LLMs ensures that search results are not only curated with AI augmentation but also highly accurate and tailored to each user’s specific needs, further improving the overall experience.

Recommendation Engine Automatically Connecting Learning Content & Product Data https://enterprise-knowledge.com/recommendation-engine-automatically-connecting-learning-content-product-data/ Tue, 19 Sep 2023 15:14:24 +0000 https://enterprise-knowledge.com/?p=18980 The EK Difference EK’s hallmark is rooted in the proficiency of advanced AI and knowledge graph technologies, as well as our profound commitment to client relationships. Working closely with the company’s content and data teams, EK displayed a robust understanding … Continue reading


The Challenge

A bioscience technology provider – and a leader in scientific research and solutions – identified a pivotal challenge within their digital ecosystem, particularly on their public-facing e-commerce website. While the platform held an extensive reservoir of both product information and associated educational content, the content and data existed disjointedly, spread across more than five systems. As a result, their search interface failed to offer users a holistic and enriching experience. The primary issues at hand were:

  • The search capability was largely driven by keywords, limiting its potential to be actionable.
  • The platform’s search functionality didn’t seamlessly integrate all available resources, leading to underutilized assets and a compromised user experience.
  • The painstaking manual process of collating content posed internal challenges in governance and hindered scalability.
  • In the absence of a cohesive content classification system, there was a disjunction between product information and corresponding educational content.
  • Inconsistencies plagued the lifecycle management of marketing content.
  • The array of platforms, managed by different product teams, exposed alignment challenges and prevented a unified user experience.

From a business perspective, the challenges were even more dire. The company faced potential revenue losses because users could not gain enough insight to make buying decisions. The user experience became frustrating due to irrelevant content and inefficient searches, while manual processes limited employees and impeded data-driven decision-making regarding the value of the site’s content. This caused both employees and customers to resort to Google searches that routed them back to the site to find what they needed.

The company engaged EK to help bridge the gap between product data and marketing and educational content to ultimately improve the search experience on their platform. 

The Solution

Assessing Current Content and Asset Capabilities at Scale

EK commenced its engagement by comprehensively assessing the company’s current content and asset capabilities. This deep dive included a data mapping and augmented corpus analysis effort into the content and technologies that power their website, such as Adobe AEM (marketing content), a Learning Management System (LMS) with product-related educational content, a Product Information Management (PIM) solution with over 70,000 products, and Salesforce for storing customer data. This provided a clear picture of the existing content and data landscape.

A Semantic Data Model 

With a deeper understanding of the content’s diversity and the need for efficient classification, EK defined and implemented a robust taxonomy and ontology system. This provided a structured way to classify and relate content, making it more discoverable and actionable for users. To tangibly demonstrate the potential of knowledge graphs, EK implemented a POC. This POC aimed to bridge the silos between the different systems, allowing for a more unified and cohesive content experience that connected product and marketing information.
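As a rough illustration of how a taxonomy and ontology can connect product data to educational content in a knowledge graph, the sketch below defines a tiny RDF model with the rdflib package. The namespace, classes, and instances are invented for demonstration and are not the client’s actual ontology.

```python
# Hypothetical sketch of an ontology relating products to learning content, using rdflib.
# The namespace, classes, and instances are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("https://example.com/ontology/")
g = Graph()
g.bind("ex", EX)

# Ontology: two classes and a relationship between them.
g.add((EX.Product, RDF.type, RDFS.Class))
g.add((EX.LearningContent, RDF.type, RDFS.Class))
g.add((EX.hasLearningContent, RDFS.domain, EX.Product))
g.add((EX.hasLearningContent, RDFS.range, EX.LearningContent))

# Instances: a product linked to a webinar that explains it.
g.add((EX.qpcrKit123, RDF.type, EX.Product))
g.add((EX.qpcrKit123, RDFS.label, Literal("qPCR Master Mix Kit")))
g.add((EX.webinar42, RDF.type, EX.LearningContent))
g.add((EX.webinar42, RDFS.label, Literal("Getting started with qPCR")))
g.add((EX.qpcrKit123, EX.hasLearningContent, EX.webinar42))

# Traverse the link the way a recommender or search index would.
for product, _, content in g.triples((None, EX.hasLearningContent, None)):
    print(g.value(product, RDFS.label), "->", g.value(content, RDFS.label))
```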

Integrated Data Resources and Knowledge Graph Embeddings

EK utilized an integrated data set to counter data fragmentation across different platforms. A more cohesive content resource was built by combining Adobe AEM and LMS data with manually curated data and extracted information from the rendered website. However, the critical leap came when the entire knowledge graph, which encapsulated this unified data set, was loaded into memory. This in-memory knowledge graph paved the way for real-time processing and analysis, which is essential for generating meaningful embeddings.

Similarity Index and Link Classifier: Two-Fold Search Enhancement

  • Similarity Index: EK’s Enterprise AI and Search experts worked together to convert the in-memory knowledge graph into vector embeddings. These embeddings, teeming with intricate data relationships, were harnessed to power a similarity index; this index stands as a testament to AI’s potential, offering content recommendations rooted in contextual relevance and similarity metrics.
  • Link Classifier: Building upon the embeddings, EK introduced a machine learning (ML) classifier. This tool was meticulously trained to discern patterns and relationships within the embeddings, establishing connections between products and content. Consequently, the system gained the capability to recommend content corresponding to a user’s engagement with a product or related content, transforming the user journey and enriching it with timely and pertinent content suggestions (a simplified sketch of this classifier idea follows this list).
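The sketch below shows the general shape of such a link classifier, assuming scikit-learn and using random vectors as stand-ins for the real product and content embeddings drawn from the knowledge graph; the feature construction (concatenating the two embeddings) and the model choice are illustrative assumptions.

```python
# Simplified sketch of a link classifier, assuming scikit-learn. Random vectors stand in
# for the real product and content embeddings derived from the knowledge graph.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
dim = 64

product_vecs = rng.normal(size=(200, dim))
content_vecs = rng.normal(size=(200, dim))
labels = rng.integers(0, 2, size=200)        # 1 = this content should link to this product

# Represent each candidate (product, content) pair as a concatenated feature vector.
pair_features = np.hstack([product_vecs, content_vecs])

classifier = LogisticRegression(max_iter=1000).fit(pair_features, labels)

# At serving time, score a new pair and recommend the content if the probability is high.
new_pair = np.hstack([rng.normal(size=(1, dim)), rng.normal(size=(1, dim))])
print("link probability:", classifier.predict_proba(new_pair)[0, 1])
```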

ML-Infused User Experience Enhancement

Venturing beyond conventional methodologies, EK incorporated ML, knowledge graphs, taxonomy, and ontology to redefine the user experience. This allowed users to navigate and discover important content through an ML-powered content discovery system, yielding suggestions that resonated with their needs and browsing history.

Unified Platform Management via Predictive Insights

Addressing the multifaceted challenge of various teams steering different platforms, EK integrated the machine learning classifier with predictive insights. This fusion empowered teams with the foresight to gauge user preferences, allowing them to align platform features and fostering a cohesive and forward-looking digital landscape.

Search Dashboard Displaying ML-based Results

Concluding the engagement, EK presented a search dashboard. This dashboard, designed to exhibit two distinct types of results – similarity index and link classifier – served as a window for the organization to witness and evaluate the dual functionalities. The underlying intent was to give the e-commerce website’s backend avenues to elevate its search capabilities, providing a comparative view of multiple ML-based systems.

The EK Difference

EK’s hallmark is rooted in the proficiency of advanced AI and knowledge graph technologies, as well as our profound commitment to client relationships. Working closely with the company’s content and data teams, EK displayed a robust understanding of the technological necessities and the organizational dynamics at play. Even when the level of effort and need from the solution extended beyond the initial scope of work, EK’s flexible approach allowed for open dialogue and iterative development and value demonstration. This ensured that the project’s progression aligned closely with the evolving needs of our client.

Recognizing the intricacy of the project and the importance of a well-documented process, EK meticulously enhanced the documentation of both the delivery process and development. This created transparency and ensured that all the resources needed to carry forward, modify, or scale the implemented solution are in place for the future.

Moreover, given the complexity and nuances involved in such large-scale implementations, EK provided a repeatable framework to validate AI results with stakeholders and maintain integrity and explainability of solutions with human-in-the-loop development throughout the engagement. This was achieved through iterative sessions, ensuring the final system met technical benchmarks and resonated with the company’s organizational context and language.

The Results

The engagement equipped the organization with a state-of-the-art, context-based recommendation system specifically tailored for their vast and diverse digital ecosystem. This solution drastically improved content discoverability, relevance, and alignment, fundamentally enhancing the user experience on their product website.

The exploratory nature of the project further unveiled opportunities for additional enhancements, particularly in refining the data, optimizing the system, and exposing areas where the firm had gaps in content creation or educational materials related to its products. Other notable results include:

  • An automated framework to standardize metadata across systems for over 70,000 product categories;
  • A Proof of Concept (POC) that bridged content silos across 4+ different systems, demonstrating the potential of knowledge graphs;
  • A machine learning classifier that expedited the content aggregation and metadata application process through automation; and
  • Increased user retention and better product discovery, leading to 6 figures in closed revenue.

Expert Analysis: How Do I Build an Expert Finder? https://enterprise-knowledge.com/expert-analysis-how-do-i-build-an-expert-finder/ Thu, 29 Jun 2023 16:16:23 +0000

The search for subject matter experts (SMEs) is an age-old problem, whether you are looking for assistance and mentorship or building a team around a subject matter. Expert Finders solve this problem by allowing users to search for colleagues with specific expertise. Rather than searching for content itself, Expert Finders leverage information found in content, tangential to the content, or from other organizational data sources to determine the people most relevant to a search query. Access to Experts is a key component of a Learning and Performance Ecosystem approach to workforce development, and Expert Finders are a technical tool that can enable this access.

In this blog, two of our technical consultants, Jade McDaniels and James Midkiff, answer common questions about Expert Finders, focusing on how to build and leverage them in an organization.

What defines an expert at my organization, and how do I generate that information?

Jade McDaniels

Experts are typically recognized as individuals who possess knowledge, skills, and experience regarding a subject area beyond that of a knowledgeable audience. Experts demonstrate a deep understanding of a specific field of study and are capable of wielding that knowledge to educate others, solve problems, and discover new opportunities for skill application. Within an organization, a few general markers to identify an Expert are:

  • project participation records;
  • contributions to the knowledge base for their subject field;
  • certifications; and
  • the types of institutional groups they engage with.

An important point is that not all of these markers are indicators of what people have learned (e.g., certifications). Many of the indicators of expertise are performance-based (e.g., participation in projects that align with the topic).

Most, if not all, of the information that describes expertise and that will fuel the Expert Finder already exists within your organization and does not need to be created. For example, enterprise HR systems hold person-specific details on employees, including their name, role, and records of past project participation and the capacity in which they served on each project. You can leverage wikis and document management systems like Confluence, SharePoint, and Smartsheet to aggregate metadata supporting expert-specific criteria. Expert Finders serve as the integration point between HR systems and enterprise knowledge repositories, aligning an expert with their sphere of influence and the work they have produced, and saving the cost of building that data manually from scratch (a rough sketch of this aggregation step follows below).
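
As a loose illustration of that aggregation step, the sketch below merges expertise signals from two hypothetical connectors, fetch_hr_projects and fetch_wiki_contributions. These functions and the record shapes they return are assumptions standing in for real HR and wiki integrations, not actual system APIs.

```python
# Hypothetical sketch of aggregating expertise signals from existing systems.
from collections import defaultdict

def build_expert_profiles(fetch_hr_projects, fetch_wiki_contributions):
    profiles = defaultdict(lambda: {"projects": [], "contributions": [], "topics": set()})
    for record in fetch_hr_projects():       # e.g. {"person": ..., "project": ..., "topics": [...]}
        person = profiles[record["person"]]
        person["projects"].append(record["project"])
        person["topics"].update(record.get("topics", []))
    for doc in fetch_wiki_contributions():   # e.g. {"author": ..., "title": ..., "tags": [...]}
        person = profiles[doc["author"]]
        person["contributions"].append(doc["title"])
        person["topics"].update(doc.get("tags", []))
    return profiles
```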

James Midkiff

For most organizations, the biggest hurdle to Expert Finders is identifying who is an expert in a given subject matter. Experts are rarely self-proclaimed and often need to be identified by their experiences. Someone may be considered an expert if they have:

  • Demonstrated related learning, such as having:
    • received a degree in a given topic,
    • taken or led training courses, or
    • earned certificates related to the subject matter.
  • Demonstrated related performance, such as having:
    • presented or authored a blog, book, or other publication on a subject,
    • contributed to the organization’s knowledge base or communities of practice on the subject, or
    • worked (or currently working) on a relevant project at the organization.

There are many use cases for determining expertise, and we need to consider all of them when generating a list of experts. Initially, we can aggregate information by pulling metadata from project or client management software, querying relevant data points from the enterprise data warehouse, or performing a project document analysis. Then, we leverage the aggregated information to generate confidence scores between individuals and subjects that measure how likely an individual is to be an expert in a subject. This approach may involve a custom data model (ontology) to capture how an individual relates to topics and give us precise methods for identifying relationships. Confidence scores are how we fine-tune our calculation of expertise and, ultimately, Expert Finder results (a minimal scoring sketch follows below).
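
A minimal sketch of that scoring step, assuming illustrative signal types and weights that would in practice be tuned with stakeholders against a set of known experts:

```python
# Illustrative confidence scoring between an individual and a subject.
# Signal types and weights are assumptions, not a prescribed model.
WEIGHTS = {"project": 0.5, "publication": 0.3, "training": 0.2}

def confidence_score(person_signals, subject):
    """person_signals: list of (signal_type, subject) tuples for one individual."""
    raw = sum(WEIGHTS.get(signal_type, 0.1)
              for signal_type, signal_subject in person_signals
              if signal_subject == subject)
    # Squash into [0, 1) so individuals with very different activity levels remain comparable.
    return raw / (raw + 1.0)

signals = [("project", "knowledge graphs"), ("publication", "knowledge graphs"),
           ("training", "search")]
print(confidence_score(signals, "knowledge graphs"))  # ~0.44
```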

Why do we need an Expert Finder?

Jade McDaniels

Expert Finders are essential to a unified view of an organization’s knowledge assets. The information that lives in professionals’ heads, also known as tacit knowledge, is valuable and should be captured in a way that peers and leaders can easily find and connect with. Experts represent pillars of knowledge and confirm the strong points that exist within business units. The ability to recognize knowledge authorities and their areas of expertise expands opportunities for collaboration, innovative solutions, and knowledge retention.

Experts also maximize learning and performance outcomes in communities of practice, mentoring, and other peer-to-peer learning frameworks. While anyone at any level can contribute to a knowledge space, experts lead it because they have built their knowledge through experience and applied it in practice. This voice of experience bolsters performance outcomes.

James Midkiff

Expert Finders are critical to helping people grow. Expertise is a source of knowledge, and connecting people over a shared or new area of interest enables individuals to learn from their peers in a natural way. With the rapid growth of hybrid work environments, it is even more important to intentionally support Social Learning, enabling virtual colleagues, potentially new to the organization, to meet and socialize with peers who share their interests.

For team management, Expert Finders enable project managers to build teams with the right knowledge, skills, and abilities for success. Experts are easily located and given opportunities to share their wealth of knowledge and experience with their team members. Team members can identify peer mentors to help in their own individual growth.

Expert Finders are a natural extension to existing search tools that let individuals search for expertise, whether they’re looking for information or individuals. Depending on the system, we recommend integrating an Expert Finder with pre-existing search portals to help raise awareness of expertise at an organization.

Do I have to buy an expert finder software/system? What tools are available for Expert Finders?

Jade McDaniels

In the technology marketplace, there are a few existing out-of-the-box (OOTB) tools to leverage when searching for people within an organization. However, it’s important to note that the features advertised by available systems may only address some of your institution’s use cases and business needs. If you decide on a ready-made solution, base your decision on which system best meets your user requirements and the business goals you wish to achieve.

Do you have to buy an Expert Finder system? The short answer is no. Off-the-shelf software offers many advantages, including rapid implementation and no or low maintenance and support costs. Nonetheless, ready-made software is often limited in scalability and customization, and these systems are not guaranteed to integrate seamlessly with your existing technology infrastructure. Instead, you can build your own Expert Finder to meet your business’s unique needs for scalability, user requirements, and integration with existing systems.

James Midkiff

Investing in an Expert Finder is important, whether OOTB or through custom development. There are a handful of available tools that provide the ability to search for people, including Glean and IBM Watson, or human resource platforms like Sift, Workday, and WorkForce Now. It is valuable to look into these options to see if any tools can provide all the features necessary to fit your use case. However, building your own Expert Finder will enable the most flexibility in how you model and curate expertise.

Designing an ontology will give you the best understanding of how to generate individual expertise. Once you develop the model, I recommend pulling in a sample of the data to instantiate the ontology and leverage a search engine to quickly retrieve experts. The data pipelines generating metadata about an individual and the relevance query used in the search engine are completely open for you to tune to fit your expertise needs. Free Lucene-based search engines (like Elasticsearch or Solr) are easy to get up and running. If you’re feeling adventurous, you can experiment with a vector search engine (like Milvus, Qdrant, or Weaviate) since they enable more semantic search capabilities and the ability to understand more about the input search query.
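
As a minimal sketch of that setup, the example below indexes a single expert profile in Elasticsearch and runs a boosted multi_match query against it. The index name, field names, and sample document are illustrative assumptions, not a prescribed schema.

```python
# Minimal Expert Finder retrieval sketch using the Elasticsearch Python client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index one expert profile built from aggregated expertise signals (fields are hypothetical).
es.index(index="experts", id="jdoe", document={
    "name": "Jane Doe",
    "role": "Data Engineer",
    "topics": ["knowledge graphs", "semantic search"],
    "evidence": ["Led taxonomy migration project", "Authored internal search tuning guide"],
})

# Retrieve likely experts for a free-text query; boosting the topics field keeps
# explicit expertise ahead of incidental mentions in supporting evidence.
response = es.search(index="experts", query={
    "multi_match": {
        "query": "semantic search",
        "fields": ["topics^3", "evidence", "role"],
    }
})
for hit in response["hits"]["hits"]:
    print(hit["_source"]["name"], hit["_score"])
```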

How do I get started building an Expert Finder?

Jade McDaniels

The first step in building an Expert Finder is determining whether or not your organization would benefit from implementing one. Before expending resources on a ready-made platform or investing in a custom code solution, confirm that an Expert Finder aligns with the problems you want to alleviate, behaviors you want to enable, and business values you wish to promote.

From there, approach building your Expert Finder from 4 different perspectives.

  • Source System(s): Consider if and where the data that determines expertise lives within your organization. Identifying the source data repositories will help you determine the technology you need to integrate with those systems.
  • Data Processing: This point focuses on how the system will process information to infer relationships between individuals and areas of expertise and determine who is an expert. Establishing these relationships requires data modeling and, thus, a tool that can facilitate those connections and processes.
  • Search: The purpose of an Expert Finder is to support finding information and individuals. It’s essential to understand how a user will search for that information and those individuals, and confirm that the chosen search application complements their search patterns.
  • Design: Effectively connecting searchers to information or people requires the thoughtful delivery of those details. Consider how knowledge should be shaped and delivered as a search result for optimal engagement and action by end users.

James Midkiff

I tend to like lists, so here are my 5 steps to build an Expert Finder.

  1. Brainstorm Expert Finder Use Cases: Ideate who could benefit from expert information and how they might interact with an Expert Finder. What business value does an Expert Finder have for your use case?
  2. Identify Expertise Data: Determine the data (generated or made available for processing) to leverage for determining expertise in a subject area.
  3. Design the Interactions: Once you have brainstormed the use cases and identified supporting information, try designing what that use case could look like if it were implemented.
  4. Architect the Integrations: Now that we know the goal, take a step back and determine the location of supporting information and how to pull that data into one location for the Expert Finder.
  5. Implement the Use Case: Once the use cases (and business value) are documented, the designs are in place, and the architecture is ready to go, start putting the pieces together.

Make sure to start small, prioritizing architecture implementation to enhance search relevance. Iteratively update the data pipelines and search queries to ensure expertise searches match expectations.

Conclusion

Expert Finders are search interfaces that leverage enterprise learning and performance data to make expertise findable across your organization. Expert Finders enable individuals to find mentors, communities of practice to find SME leads, and business leaders to identify emerging organizational leaders. The decision to build an Expert Finder from scratch or utilize an OOTB solution is largely based on your business’s needs. If you are looking for help determining which solution is best for your organization, please contact us.
