Enterprise Search Articles - Enterprise Knowledge
https://enterprise-knowledge.com/category/enterprise-search/

Semantic Search Advisory and Implementation for an Online Healthcare Information Provider
https://enterprise-knowledge.com/semantic-search-advisory-and-implementation-for-an-online-healthcare-information-provider/ (July 22, 2025)


The Challenge

The medical field is an extremely complex space, with thousands of concepts that are referred to by vastly different terms. These terms can vary across regions, languages, areas of practice, and even from clinician to clinician. Additionally, patients often communicate with clinicians using language that reflects their more elementary understanding of health. This complicates the experience for patients when trying to find resources relevant to certain topics such as medical conditions or treatments, whether through search, chatbots, recommendations, or other discovery methods. This can lead to confusion during stressful situations, such as when trying to find a topical specialist or treat an uncommon condition.

A major online healthcare information provider engaged EK to improve both their consumer-facing and clinician-facing natural language search and discovery platforms in order to deliver faster and more relevant results and recommendations. Their consumer-facing web pages aimed to connect consumers with healthcare providers when searching for a condition, with consumers often using terms or phrases that may not exactly match medical terms. In contrast, the clinicians who purchased licenses to the provider’s content required a fast and accurate method of searching for content regarding various conditions. They work in time-sensitive settings where rapid access to relevant content could save a patient’s life, and often use synonymous acronyms or domain-specific jargon that complicates the search process. The client desired a solution that could disambiguate between concepts and match certain concepts to a list of potential conditions. EK was tasked with refining these search processes to provide both sets of end users with accurate content recommendations.

The Solution

Leveraging both industry and organizational taxonomies for clinical topics and conditions, EK architected a search solution that could take both the technical terms preferred by clinicians and the more conversational language used by consumers and match them to conditions and relevant medical information. 

To improve search while maintaining a user-friendly experience, EK worked to:

  1. Enhance keyword search through metadata enrichment;
  2. Enable natural language search using large language models (LLMs) and vector search techniques; and
  3. Introduce advanced search features post-initial search, allowing users to refine results with various facets.

The core components of EK’s semantic search advisory and implementation included:

  1. Search Solution Vision: EK collaborated with client stakeholders to determine and implement business and technical requirements with associated search metrics. This would allow the client to effectively evaluate LLM-powered search performance and measure levels of improvement. This approach focused on making the experience faster for clinicians searching for information and for consumers seeking to connect with a doctor. This work supported the long-term goal of improving the overall experience for consumers using the search platform. The choice of LLM and associated embeddings played a key role: by selecting the right embeddings, EK could improve the association of search terms, enabling more accurate and efficient connections, which proved especially critical during crisis situations. 
  2. Future State Roadmap: As part of the strategy portion of this engagement, EK worked with the client to create a roadmap for deploying the knowledge panel to the consumer-facing website in production. This roadmap involved deploying and hosting the content recommender, further expanding the clinical taxonomy, adding additional filters to the knowledge panel (such as insurance networks and location data), and adding search features such as autocomplete and type-ahead search. Setting future goals after implementation, EK suggested the client use machine learning methods to classify consumer queries based on language and predict their intent, as well as establish a way to personalize the user experience based on collected behavioral data and characteristics.
  3. Keyword and Natural Language Search Enhancement: EK developed a gold standard template for client experts in the medical domain to provide the ideal expected search results for particular clinician queries. This gold standard served as the foundation for validating the accuracy of the search solution in pointing clinicians to the right topics. Additionally, EK used semantic clustering and synonym analysis to identify further search terms to add as synonyms in the client’s enterprise taxonomy. Enriching the taxonomy with more of the clinician-specific language used when searching for concepts with natural language improved the retrieval of more relevant search results.
  4. Semantic Search Architecture Design and LLM Integration: EK designed and implemented a semantic search architecture to support the solution’s search features, connecting the client’s existing taxonomy and ontology management system (TOMS), the client’s search engine, and a new LLM. Leveraging the taxonomy stored in the TOMS and using the LLM to match search terms and taxonomy concepts based on similarity enriched the accuracy and contextualization of search results. EK also wrote custom scripts to evaluate the LLM’s understanding of medical terminology and generate evaluation metrics, allowing for performance monitoring and continuous improvement to keep the client’s search solution at the forefront of LLM technology. Finally, EK created a bespoke, reusable benchmark for LLM scores, evaluating how well a certain model matched natural language queries to clinical search terms and allowing the client to select the highest-performing model for consumer use.
  5. Semantic Knowledge Panel: To demonstrate the value this technology would bring to consumers, EK developed a clickable, action-oriented knowledge panel that showcased the envisioned future-state experience. Designed to support consumer health journeys, the knowledge panel guides users through a seamless journey – from conversational search (e.g. “I think I broke my ankle”), to surfacing relevant contextual information (such as web content related to terms and definitions drawn from the taxonomy), to connecting users to recommended clinicians and their scheduling pages based on their ability to treat the condition being searched (e.g. an orthopedist for a broken ankle). EK’s prototype leveraged a taxonomy of tagged keywords and provider expertise, with a scoring algorithm that assessed how many, and how well, those tags matched the user’s query (a simplified sketch of this scoring approach follows this list). This scoring informed a sorted display of provider results, enabling users to take direct action (e.g. scheduling an appointment with an orthopedist) without leaving the search experience.
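
To make the scoring approach concrete, here is a minimal, illustrative sketch of tag-overlap scoring. This is not EK's production algorithm; the taxonomy entries, providers, and weights are invented for illustration.

```python
# Minimal sketch of tag-based provider scoring (hypothetical data and
# weights, not EK's production algorithm). Providers are tagged with
# taxonomy concepts; queries are matched against concept labels and synonyms.

TAXONOMY = {
    # concept: consumer phrasings and synonyms (illustrative only)
    "ankle fracture": {"broken ankle", "broke my ankle", "fractured ankle"},
    "ankle sprain": {"twisted ankle", "rolled ankle", "sprained ankle"},
}

PROVIDERS = [
    {"name": "Dr. A (Orthopedics)", "tags": {"ankle fracture": 1.0, "ankle sprain": 0.8}},
    {"name": "Dr. B (Family Medicine)", "tags": {"ankle sprain": 0.5}},
]

def match_concepts(query: str) -> set[str]:
    """Return taxonomy concepts whose label or a synonym appears in the query."""
    q = query.lower()
    return {
        concept
        for concept, synonyms in TAXONOMY.items()
        if concept in q or any(s in q for s in synonyms)
    }

def score_providers(query: str) -> list[tuple[str, float]]:
    """Score each provider by how many matched concepts they treat, and how well."""
    concepts = match_concepts(query)
    scored = [
        (p["name"], sum(p["tags"].get(c, 0.0) for c in concepts))
        for p in PROVIDERS
    ]
    return sorted((s for s in scored if s[1] > 0), key=lambda s: -s[1])

print(score_providers("I think I broke my ankle"))
# [('Dr. A (Orthopedics)', 1.0)]
```

A production version would replace the substring matching with embedding-based similarity from the LLM, but the ranking principle (aggregate the strength of matched tags per provider) is the same.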

The EK Difference

EK’s expertise in semantic layers, solution architecture, artificial intelligence, and enterprise search came together to deliver a bespoke, unified solution that returned more accurate, context-aware information for clinicians and consumers. While collaborating with key medical experts to enrich the client’s enterprise taxonomy, EK’s semantic experts shared unique insights and knowledge on LLMs, combined with their experience applying taxonomy and semantic similarity in natural language search use cases, to place the client in the best position to enable accurate search. EK also upskilled the client’s technical team on semantic capabilities and the architecture of the knowledge panel through knowledge transfers and paired programming, so that they could continue to maintain and enhance the solution in the future.

Additionally, EK’s solution architects, possessing deep knowledge of enterprise search and artificial intelligence technologies, were uniquely positioned to provide recommendations on the most advantageous way to seamlessly integrate the client’s TOMS and existing search engine with an LLM specifically developed for information retrieval. While a general-purpose LLM could perform these tasks to some extent, EK helped design a purpose-built semantic search solution leveraging a specialized LLM that better identified and disambiguated user terms and phrases.

Finally, EK’s search experts defined key search metrics with the client’s team, enabling them to closely monitor improvement over time, identify trends, and suggest refinements to match. These search improvements resulted in a solution the client could be confident in and trust to be accurate.

The Results

The delivery of a semantic search prototype with a clear path to a production, web-based solution opened the door to greatly augmented search capabilities across the organization’s products. Overall, this solution allowed both healthcare patients and clinicians to find exactly what they are looking for using a wide variety of terms.

As a result of EK’s semantic search advisory and implementation efforts, the client was able to:

  1. Empower potential patients to use a web-based semantic search platform to search for specialists who can treat their conditions and to quickly and easily find care;
  2. Streamline the content delivery process in critical, time-sensitive situations such as emergency rooms by providing rapid and accurate content that highlights and elaborates on potential diagnoses and treatments to healthcare professionals; and
  3. Identify potential data and metadata gaps in the healthcare information database that the client relies on to populate its website and recommend content to users.

Looking to improve your organization’s search capabilities? Want to see how LLMs can power your semantic ecosystem? Learn more from our experience or contact us today.

Extracting Knowledge from Documents: Enabling Semantic Search for Pharmaceutical Research and Development
https://enterprise-knowledge.com/extracting-knowledge-from-documents-enabling-semantic-search/ (March 3, 2025)


The Challenge

A major pharmaceutical research and development company faced difficulty creating regulatory reports and files based on years of drug experimentation data. Their regulatory intelligence teams and drug development chemists spent dozens of hours searching through hundreds of thousands of documents to find past experiments and their results in order to fill out regulatory compliance documentation. The company’s internal search platform enabled users to look for documents, but required exact matches on specific keywords to surface relevant results, and lacked useful search filters. Additionally, due to the nature of chemistry and drug development, many documents were difficult to understand at a glance and required scientists to read through them in order to determine if they were relevant or not.

The Solution

EK collaborated with the company to improve their internal search platform by enhancing Electronic Lab Notebook (ELN) metadata, thereby increasing the searchability and findability of critical research documents, and created a strategy for leveraging ELNs in AI-powered services such as chatbots and LLM-generated document summaries. EK worked with the business stakeholders to evaluate the most important information within ELNs and understand the document structure, and developed semantic models in their taxonomy management system with more than 960 relevant concepts designed to capture the way their expert chemists understand the experimental activities and molecules referenced in the ELNs.

With the help of the client’s technical infrastructure team, EK developed a new corpus analysis and ELN autotagging pipeline that leveraged the taxonomy management system’s built-in document analyzer and integrated the results with their data warehouse and search schema. Through three rounds of testing, EK iteratively improved the extraction of metadata from ELNs using the concepts in the semantic model, providing additional metadata on over 30,000 ELNs to be leveraged within the search platform.

EK wireframed six new User Interface (UI) features and enhancements for the search platform designed to leverage the additional metadata provided by the autotagging pipeline, including search-as-you-type functionality and improved search filters, and socialized them with the client’s UI/User Experience (UX) team. Finally, EK supported the client with strategic guidance for leveraging their internal LLM service to create accurate regulatory reports and AI summaries of ELNs within the search platform.
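
To illustrate the core idea behind autotagging, the sketch below matches taxonomy concepts and their synonyms against document text. The actual pipeline used the taxonomy management system's built-in document analyzer; the concepts and the ELN excerpt here are invented.

```python
import re

# Hypothetical slice of the semantic model: preferred concept labels mapped
# to alternative labels chemists might use in ELNs (invented examples).
CONCEPTS = {
    "recrystallization": ["recrystallisation", "recryst"],
    "acetylsalicylic acid": ["aspirin", "ASA"],
}

def autotag(text: str) -> list[str]:
    """Return preferred labels of concepts whose label or synonym appears in the text."""
    tags = []
    for preferred, alt_labels in CONCEPTS.items():
        labels = [preferred] + alt_labels
        if any(re.search(rf"\b{re.escape(label)}\b", text, re.IGNORECASE) for label in labels):
            tags.append(preferred)
    return tags

eln_text = "Purified the ASA batch by recrystallisation from ethanol."
print(autotag(eln_text))  # ['recrystallization', 'acetylsalicylic acid']
```

The resulting tags become the additional metadata fields written to the data warehouse and search schema, which is what enables filtering and at-a-glance relevance judgments in the search platform.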

    The EK Difference

    EK leveraged its understanding of the capabilities and features of enterprise search platforms and the functionality of taxonomy management systems to advise the organization on industry standards and best practices for managing its taxonomy and optimizing search with semantics. Furthermore, EK’s experience working with other pharmaceutical institutions and large organizations in the development of semantic models benefited the client by ensuring their semantic models were comprehensively and specifically tailored to meet their needs for the development of their semantic search platform and generative AI use cases. Throughout the engagement, EK incorporated an Agile project approach focused on iterative development and regular insight gathering from client stakeholders to quickly prototype enhancements to the autotagging pipeline, semantic models, and search platform that the client could present to internal stakeholders to gain buy-in for future expansion.

    The Results

    EK’s expertise in knowledge extraction, semantic modeling, and implementation, along with a user-focused strategy that ensured improvements to the search platform were grounded in stakeholder needs, enabled EK to deliver a major update to the client’s search experience. As a result of the engagement, the client’s newly established autotagging pipeline is enhancing tens of thousands of critical research documents with much-needed additional metadata, enabling dynamic, context-aware searches and providing users of the search platform with at-a-glance insight into what information an ELN contains. The semantic models powering the upgraded search experience allow users to look for information using natural, familiar language by capturing synonyms and alternative spellings of common search terms, ensuring that users can find what they are looking for without having to run multiple searches.

    The planned enhancements to the search platform will save scientists at the company hours every week otherwise spent searching for information and judging whether specific ELNs are useful for their purposes, reducing reliance on individual employee knowledge and the need for the regulatory intelligence team to rediscover institutional knowledge. Furthermore, the company is equipped to move forward with leveraging the combined power of semantic models and AI to improve the speed and efficiency of document understanding and use. By utilizing the improved document metadata provided by the autotagging pipeline in conjunction with their internal LLM service, they will be able to generate factual document summaries in the search platform and automate the creation of regulatory reports in a secure, verifiable, and hallucination-free manner.

Semantic Maturity Spectrum: Search with Context
https://enterprise-knowledge.com/semantic-maturity-spectrum-search-with-context/ (November 26, 2024)

    EK’s Urmi Majumder and Madeleine Powell jointly delivered the presentation ‘Semantic Maturity Spectrum: Search with Context’ at the MarkLogic World Conference on September 24, 2024.

    Semantic search has long proven to be a powerful tool in creating intelligent search experiences. By leveraging a semantic data model, it can effectively understand the searcher’s intent and the contextual meaning of the terms to improve search accuracy. In this session, Majumder and Powell presented case studies for three different organizations across three different industries (finance, pharmaceuticals, and federal research) that started their semantic search journey at three very different maturity levels. For each case study, they described the business use case, solution architecture, implementation approach, and outcomes. Finally, Majumder and Powell rounded out the presentation with a practical guide to getting started with semantic search projects using the organization’s current maturity in the space as a starting point.

Expert Analysis: Keyword Search vs Semantic Search – Part Two
https://enterprise-knowledge.com/expert-analysis-keyword-search-vs-semantic-search-part-two/ (August 15, 2024)

    In Part 1 of this series from two of our senior consultants, Fernando Islas and Chris Marino, the focus was on the differences between Keyword Search and Semantic Search. In Part 2 of this Expert Analysis blog, Islas and Marino are back to focus on the different tools available for each method, as well as discuss what it takes to move them from the design phase to a full-blown production system. They also offer some advice on how to realize the benefits of each in the same solution. 

    Note: We recognize that in the industry, semantic search is used interchangeably for any search method that can infer the users’ intent or context from their queries, beyond keywords. In this blog, we will be specifically discussing vector search.

    What are the different tools available for Keyword Search and Vector Search?

    Keyword (Chris Marino)

    There are many tools available to choose from when implementing keyword search, both open-source (or “free and open”) and proprietary. These distinct options offer the classic choice in software development – “build vs. buy.”  

    Do you prefer the flexibility of building your search solution from the ground up while investing in developer time and resources? Or, do you prefer to buy a solution which you can start quickly and leverage many built-in features, though at the cost of a substantial subscription? 

    The main benefit of choosing the “build” option is that you have absolute flexibility in how you design and build your search system. You control all aspects of the solution – ingesting content, structuring your search index, developing your Search UI – and can tailor this to your exact specifications. The downside of this approach is that you have to account for additional time and resources – you are literally building your search solution from the ground up.

    There are many excellent options for the build option, including:

    • Elasticsearch – most widely-known solution which comes in many different flavors tailored to your needs
    • Solr – established open-source tool, though not quite as popular
    • OpenSearch – offering from Amazon, derived from Elasticsearch

    On the buy side, there are a vast number of tools that can be purchased on a subscription basis to provide excellent search capability for your enterprise. These full-featured applications come equipped with the functionality you expect from an enterprise search solution:

    • Connectors to integrate with your internal systems for indexing content 
    • Standard UI for displaying search results 
    • Pre-built suite of search features including faceting, auto-completion, and suggestions

    The drawbacks to these solutions are that the subscription prices are quite steep and you lack the flexibility to design these systems exactly as your organization needs. 

    Some of the most popular tools include:

    • Squirro
    • Elastic Cloud (subscription-based Elasticsearch environment)
    • Sinequa
    • Mindbreeze
    • Glean
    • Algolia
    • Coveo
    • Lucidworks

    A logical third category is simply using Microsoft as a solution, which offers its own proprietary set of search tools. If your focus is solely on search within the Microsoft ecosystem, it makes sense to leverage their search offerings. However, trying to integrate external content into the Microsoft search experience can be very challenging. 

    Vector (Fernando Islas)

    In contrast to keyword search, which relies on sparse vectors (derived from the corpus vocabulary), vector search relies on dense vectors: encodings produced by a language model to capture the semantic meaning and context of the text. Both are still vectors, however, and existing keyword search engines have been adding dense-vector functionality to their feature sets, so it is common to find tools and vendors available for both search types.
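
    The following sketch shows what dense vectors buy you in practice: a query with no keyword overlap can still match the right document. It assumes the open-source sentence-transformers library; the model choice and documents are illustrative.

```python
# Minimal dense-vector (semantic) matching sketch, assuming the
# sentence-transformers library; model choice and documents are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

docs = [
    "How to reset your account password",
    "Quarterly revenue grew by four percent",
]
query = "I forgot my login credentials"

doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Cosine similarity scores semantic closeness even though the query shares
# no keywords with the best-matching document.
scores = util.cos_sim(query_vec, doc_vecs)[0]
for doc, score in zip(docs, scores):
    print(f"{score.item():.2f}  {doc}")
```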

    We can find tools such as Elasticsearch, Vectorsearch, Weaviate, Pinecone, Milvus, and Faiss among the most popular in the vendor landscape. Microsoft and AWS also have vector search offerings: Microsoft provides vector search within Azure Cognitive Search and vector search on embeddings in Azure Cosmos DB for MongoDB vCore, while AWS offers vector search collections in Amazon OpenSearch Service.

    When selecting a vector search solution, factors such as pricing, security, user-friendliness, content type (e.g., text, images, media), scalability requirements, and ease of integration into existing infrastructure should be carefully evaluated. Additionally, the solution’s efficiency, relevance ranking capabilities, and the availability of support and maintenance should be assessed to ensure it aligns with your specific use case and long-term goals.

    What are the steps from development to deployment for Keyword Search and Vector Search?

    Keyword (Chris Marino)

    Developing and deploying a search solution is straightforward and fits well within an iterative process. One of the benefits of this approach is that you follow a set of repeatable steps per source system. As you proceed with additional systems, you become more familiar with the process and adept at executing it.

    The steps include:

    • Analyzing the content in your source system to know what type of information it holds (structured, semi-structured, unstructured) and accounting for any security considerations like permissions and access controls.
    • Setting up your connectors which control your indexing routines to access the data from the source system, aggregate it, and index it into your search engine.
    • Configuring your search engine, including mapping your fields correctly, storing the content in the most efficient manner for querying, and accounting for your security requirements. 
    • Developing your Search UI which revolves around your search result pages – the look and feel of the results, the incorporation of action-oriented results, faceting and other rich search features.
    • Testing for performance and relevancy to ensure that users are getting the results they expect from queries and that the system responds in a timely, intuitive manner.
    • Iterating by incorporating user feedback and making modifications in any of the previous steps to improve the overall experience.

    As a general rule, we estimate that it takes 2-4 weeks per source system, depending on its complexity, which is affected by factors such as permissions and access controls, volume of data, and disparity of data.
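
    To make the indexing and querying steps above concrete, here is a minimal sketch using the official Python client for Elasticsearch (8.x-style API). The index name, fields, and documents are invented.

```python
# Minimal keyword-search sketch against a local Elasticsearch instance,
# using the official Python client (8.x-style API). Index name, fields,
# and documents are invented for illustration.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document (in practice, connectors feed this step per source system).
es.index(index="policies", id="1", document={
    "title": "Remote Work Policy",
    "body": "Employees may work remotely up to three days per week.",
    "business_unit": "HR",
})

# A keyword query with a metadata filter, corresponding to the engine
# configuration and faceting steps described above.
results = es.search(index="policies", query={
    "bool": {
        "must": {"match": {"body": "remote work"}},
        "filter": {"term": {"business_unit.keyword": "HR"}},
    }
})
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```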

    Vector (Fernando Islas)

    Every search initiative should start with investigating the organization’s current search landscape, including analytics on users’ queries, the expected content to be retrieved, the content’s relevance, and the users’ interaction with the search platform. This analysis will surface the content types, users, use cases, and teams or divisions that would benefit the most from introducing vector search. With this in mind, a typical pipeline for vector search instantiation would include the following:

    • Data Preparation and Processing: Begin by locating and identifying all the content that needs to be indexed in the search engine and ensure it’s properly cleaned, structured, and formatted for vectorization and indexing.
    • Large Language Model (LLM) Selection: Choose an appropriate large language model (LLM) that aligns with your data type and use case, considering factors like pre-trained embeddings, architecture, and scalability. 
    • LLM Fine-Tuning (Optional but Recommended): Fine-tune the selected LLM if necessary, using domain-specific data to improve its performance in capturing semantic similarities within your content.
    • Content Vectorization and Indexing: To convert your content into dense vectors, implement the vectorization process. Then, create an index structure to efficiently store and retrieve these vectors, selecting suitable indexing and optimization algorithms (a sketch of this step follows the list).
    • Scalability and User Interaction: Ensure the deployed vector search system is scalable to accommodate increasing data and user loads. Focus on user interaction aspects, providing documentation, training, and mechanisms for user feedback to optimize and enhance the search experience continually. 
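
    A minimal sketch of the vectorization-and-indexing step, using Faiss (one of the libraries named above). The embeddings here are random stand-ins; in practice they would come from your selected language model.

```python
# Vector indexing and retrieval sketch with Faiss. Embeddings are random
# stand-ins; real ones would come from the selected language model.
import faiss
import numpy as np

dim = 384                       # embedding dimensionality (model-dependent)
doc_vecs = np.random.rand(1000, dim).astype("float32")
faiss.normalize_L2(doc_vecs)    # normalize so inner product equals cosine similarity

index = faiss.IndexFlatIP(dim)  # exact search; swap in IVF/HNSW variants at scale
index.add(doc_vecs)

query_vec = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query_vec)
scores, doc_ids = index.search(query_vec, 5)  # top-5 nearest documents
print(doc_ids[0], scores[0])
```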

    Is there a way to leverage both Keyword and Vector Search in a solution?

    Vector (Fernando Islas)

    A hybrid search approach combines traditional keyword search with advanced vector search techniques to provide users with more accurate and relevant results. The user submits a single query that retrieves keyword-based and vector-based results independently. Then, the results from keyword and vector searches are merged into a unified ranking, offering users a comprehensive view of content relevance based on the presence of keywords and semantic similarities. While this hybrid approach provides superior search precision and context awareness, it requires ongoing tuning to optimize the balance between keyword and vector-based relevance, which may not only vary from use case to use case but often on a query-to-query basis.
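
    One widely used way to merge the two ranked lists is reciprocal rank fusion (RRF), sketched below; the constant k=60 is a commonly cited default, and the document IDs are invented.

```python
# Merging keyword and vector result lists with reciprocal rank fusion (RRF),
# one common approach to hybrid ranking. Document IDs are invented.
def rrf_merge(keyword_hits: list[str], vector_hits: list[str], k: int = 60) -> list[str]:
    """Fuse two ranked lists of document IDs into a single ranking."""
    scores: dict[str, float] = {}
    for hits in (keyword_hits, vector_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # from the keyword engine
vector_hits = ["doc1", "doc9", "doc3"]   # from the vector index
print(rrf_merge(keyword_hits, vector_hits))
# ['doc1', 'doc3', 'doc9', 'doc7'] -- documents found by both methods rise to the top
```

    Weighting the two lists differently (or tuning k) is one lever for the per-use-case and per-query balancing described above.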

    Keyword (Chris Marino)

    As Fernando explains, the concept of hybrid search has become very popular with the introduction of vector-based, semantic search. A search application benefits from the best of both worlds – melding the “tried and tested” nature of keyword search with the more modern semantic search experience. One contributing factor towards a successful implementation is keeping in mind where each method excels.

    Keyword search excels for:

    • Boolean searches
    • Exact phrase matching
    • Search lookup by metadata (specific attributes like ZIP Code, State, Business Unit, etc.)

    Semantic Search excels for:

    • Searches requiring context
    • Searches which return relevant results even when the exact search terms are not found within the content

    Conclusion

    In this Expert Analysis, we covered the differences between Keyword and Semantic Search from a tooling and production deployment perspective. Choosing the right technologies and tools for your search project can be challenging and requires a thoughtful, reasoned approach.

    If you are embarking on a search project and need proven expertise to help guide you to success, contact us!

Analyzing and Optimizing Content for the Semantic Layer
https://enterprise-knowledge.com/analyzing-and-optimizing-content-for-the-semantic-layer/ (June 18, 2024)

    As I wrote in my previous post, Adding Context to Content in the Semantic Layer, the organizational challenge of effectively generating, managing, and distributing content can be addressed by integrating content into a semantic layer. The semantic layer enriches the content by incorporating data about it, using metadata to describe the context, topics, and entities represented in the content. Content, once enriched, can be interpreted and analyzed along with other data sources to support discoverability and distribution. To maximize the potential for content in the semantic layer, begin by doing a content analysis to assess its readiness and prepare it for ingestion.

    In this post, I share the factors that affect whether your content is ready for the semantic layer, how those factors are assessed (including nuances related to some sample use cases), and the steps to remediate the issues and gaps found in the content audit.

    Content Analysis Defined

    Content analysis, or content auditing, is the process of assessing content against a set of defined, measurable criteria and the needs of the business and the content’s audiences. The analysis also considers the content operations surrounding the planning, creation, production, and distribution of the content to all the systems that consume it, whether those be web content management systems, knowledge portals, digital asset management systems, enterprise search, or AI-enabled features. 

    When conducted in advance of the integration of content into a semantic layer—a structured representation of data, content, knowledge assets, and the relationships among them—content analysis becomes a powerful tool for preparing content for understanding, categorization, and enrichment.

    The outcome of an audit is insight into the content’s quantity, quality, and structure, and a set of recommendations for content improvement. When the semantic layer adds context and metadata to that analysis of the content, you have a content source that is well-optimized to support whatever your organizational needs may be. Examples of how content integrated into a semantic layer can be leveraged include:

    • Knowledge discovery, as in a semantic search program or knowledge portal;
    • Content generation—for example, the automated creation of reports or the development of chatbots; and
    • Content recommenders that use the rich data about the content combined with user behavior to suggest relevant content.

    Factors Affecting Content Readiness

    When preparing content for a semantic layer, the quality of your overall content ecosystem is essential. Factors such as content duplication, recency, and availability will affect the content’s readiness. Within that ecosystem, targeted content will need to be assessed along several dimensions to ensure effectiveness and usability:

    • Structure: Content that has been modeled and structured in its source system allows for easier categorization and relationship identification.
    • Semantic richness: Through techniques such as semantic tagging and entity extraction, the content can be enriched with metadata that describes its context, topics, and entities.
    • Presence and quality of metadata: Metadata, including tags, keywords, and annotations, is vital in describing the content’s context, meaning, and relationships.
    • Consistency and standardization: Content should adhere to consistent formatting, naming conventions, and data standards to ensure interoperability and ease of integration within the semantic layer. Consistency facilitates accurate data interpretation and enables effective knowledge extraction.
    • Content quality: Content that is well-written, current, accurate, and useful not only affects the end-user experience but is also critical for reliable semantic analysis and inference.
    • Volume and complexity: The amount of ingested content can affect the performance and scalability of consuming systems.

    By addressing these aspects of content preparation, you can enhance the effectiveness and value of the semantic layer, enabling more accurate, intelligent, and context-aware knowledge representation and discovery.

    Auditing Content for Readiness

    Designing a content audit to assess the readiness of content for ingestion into a semantic layer involves taking a structured approach to evaluating the content. General principles for auditing content begin with setting objectives and scope, including the metrics by which you will measure success; inventorying and categorizing the content sources for the semantic layer; assessing content structure; determining how well understood, available, and consistently used metadata is across content sources; analyzing domain-specific content; and evaluating the overall quality of the content.
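
    Some of these checks can be scripted. The toy sketch below measures metadata completeness across content sources; the required fields and inventory records are invented.

```python
# Toy sketch of one audit check: metadata completeness across content
# sources. Field names and records are invented; a real audit would pull
# from each source system's API or export.
from collections import Counter

REQUIRED_FIELDS = ["title", "author", "published", "keywords"]

inventory = [
    {"source": "CMS", "title": "Onboarding Guide", "author": "J. Doe",
     "published": "2024-01-10", "keywords": ["HR"]},
    {"source": "Wiki", "title": "API Overview", "author": None,
     "published": None, "keywords": []},
]

missing = Counter()
for item in inventory:
    for field in REQUIRED_FIELDS:
        if not item.get(field):  # treats None, "", and [] as missing
            missing[f"{item['source']}:{field}"] += 1

for key, count in missing.most_common():
    print(f"{key}: {count} item(s) missing")
```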

    When auditing content readiness for different use cases involving a semantic layer—such as knowledge discovery, chatbot development, content recommendation engine development, or content generation—you will need to tailor the steps to address each application’s unique requirements and goals. Here’s how you might frame the audit steps differently for each use case:


    General Principles for All Use Cases

    • Flexibility: Each step must be adapted to the specific needs and characteristics of the use case. EK worked with the marketing operations team at a global telecommunications company to define the structure of products, the relationship between product components, and the taxonomy necessary to traverse those relationships. The product content model now enables the intelligent assembly of sales collateral, as well as multi-channel publishing to multiple user experiences including sales enablement portals, marketing websites, mobile applications, and social media.
    • User-centric approach: Consider the end-user experience and how content will be consumed or interacted with. For example, EK recently worked with a global software vendor who needed to deliver more personalized, timely, and relevant release notes of upcoming product changes in a continuous integration/continuous delivery (CI/CD) environment to both internal and external end-users. EK focused on developing a comprehensive content model supporting structured and componentized release note content, improving user experience (UX) interactions, and leveraging the organization’s taxonomy to filter the content for more personalized delivery. We leveraged human-centered design practices and facilitated a series of focus groups across the teams of content authors, marketing, technical SMEs, and executive leadership to define the current state of content authoring processes and content management and ensure cross-team alignment on the target state for authoring, content management, and structured content model design. EK carefully considered the stakeholder requirements in our delivery of the solution.
    • Feedback loops: Implement robust feedback mechanisms to continuously improve content readiness and alignment with the semantic layer’s goals. EK worked with a bioscience technology provider who needed EK’s help to bridge the gap between product data and marketing and educational content to ultimately improve the search experience on their platform. EK incorporated ML, knowledge graphs, taxonomy, and ontology to redefine the user experience, allowing users to discover important content through an ML-powered content discovery system, yielding suggestions that resonated with their needs and browsing history. EK’s flexible approach allowed for open dialogue and iterative development and value demonstration, ensuring that the project’s progression aligned closely with the evolving needs of our client.

    Taking Action on Your Audit

    The outcome of your audit, as stated above, will be a set of issues to address that were uncovered by the analysis. To address those issues and ensure that your content is well-prepared for integration into the semantic layer and optimized for various AI-enabled applications, follow these steps.

    1. Optimize content structure: For easier categorization and better integration within the semantic layer, reorganize and reformat content to adhere to standardized formats like JSON, XML, or DITA. Break down large content pieces into smaller, reusable components that can be individually tagged for targeted, dynamic assembly and delivery. For example, you might convert a set of technical manuals from PDF format to a structured content model that breaks out individual topics for reassembly and reuse in multiple contexts.
    2. Ensure consistency and standardization: Improve consistency and interoperability of content by developing and enforcing content standards and naming conventions across the content domains. Implement governance as part of your content operations, by creating a style guide, training content creators, and providing templates that automatically enforce these standards.
    3. Enhance metadata quality: To improve searchability and context-awareness in consuming systems, add missing metadata and improve existing entries. Use tools for semantic tagging and entity extraction to automatically generate and add metadata tags (for example, author names, publication dates, keywords, and abstract summaries for a set of articles) where they are missing.
    4. Increase semantic richness: To enhance automated reasoning and inferencing capabilities, add detailed annotations and semantic tags to content. Use a natural language processing (NLP) tool to identify and tag entities and relationships in a database of content, and add these annotations directly into the content’s metadata fields (a sketch of this step follows the list).
    5. Improve content quality: Achieve higher reliability and accuracy by implementing a quality assurance program that includes regular audits to review content and plan for updates. To make this process more efficient, consider using a content management tool that scans content to find and correct errors and inconsistencies and identifies duplicated or outdated content. 
    6. Optimize scalability and performance: Enable efficient and reliable content operations that manage ingestion and processing by scaling up and optimizing tool capabilities and refining workflows for handling large volumes of content.
    7. Implement iterative improvements: Enhance content readiness and alignment with semantic layer requirements on an ongoing basis. Establish a continuous improvement cycle based on feedback and audit findings. Set up regular review meetings with content teams to assess progress and adjust strategies. Analyze user feedback and audit reports. Track improvements and report status to content stakeholders.
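
    For step 4, entity extraction can be as simple as running content through a pretrained NLP pipeline and recording the entities as metadata. The sketch below uses spaCy's small English model (installed via `python -m spacy download en_core_web_sm`); the metadata structure is an invented example.

```python
# Sketch of step 4: extracting entities with spaCy to generate metadata
# tags. Assumes the small English model is installed; the metadata
# structure is invented for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")

text = "Enterprise Knowledge presented the new search roadmap in Seattle in April 2024."
doc = nlp(text)

metadata = {"entities": [{"text": ent.text, "type": ent.label_} for ent in doc.ents]}
print(metadata)
# e.g., {'entities': [{'text': 'Enterprise Knowledge', 'type': 'ORG'},
#                     {'text': 'Seattle', 'type': 'GPE'}, ...]}
```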

    Summary

    Incorporating a semantic layer into content management and distribution is a transformative approach to addressing the organizational challenges of handling vast amounts of information. As outlined above, the process begins with a comprehensive content analysis or audit to evaluate the readiness of content for integration into the semantic layer. This step is critical for enhancing the content’s structure, semantic richness, metadata quality, consistency, and overall quality.

    By enriching content with metadata and contextual information, organizations can significantly boost the capabilities of consuming systems such as AI-enabled chatbots, knowledge management systems, recommendation engines, and enterprise search. The audit process identifies gaps and provides insights into the content’s quantity, quality, and structure, along with actionable recommendations for improvement.

    Key factors such as structure, semantic richness, metadata quality, consistency, content quality, and scalability play vital roles in ensuring the effectiveness of content within the semantic layer. Properly structured and componentized content enriched with detailed metadata ensures accurate categorization and relationship identification, facilitating automated reasoning and inferencing.

    Addressing these aspects during content preparation optimizes the content for semantic enrichment and enhances downstream distribution in relevant contexts. This leads to more accurate, intelligent, and context-aware knowledge representation and discovery, ultimately maximizing the content’s potential in the semantic layer.

    Follow these audit steps and address the outlined factors to unlock your content’s full potential to drive better decision-making, improve user experiences, and achieve greater operational efficiency.

    Looking for expert assistance with your content audit project? Contact us.

Adding Context to Content in the Semantic Layer
https://enterprise-knowledge.com/adding-context-to-content-in-the-semantic-layer/ (May 22, 2024)

    Content is a critical organizational asset. Whether product documentation, sales and marketing materials, industry rules and regulations, employee policies, or learning materials, content forms the backbone of operations, decision-making, and end-user experiences. Organizations are challenged to generate content and to structure and manage it effectively for efficiency, discoverability, and compliance. The semantic layer addresses this challenge by adding context to content, capturing its relationships, and integrating it with organizational data into a unified view.

    Understanding the Semantic Layer

    Before we explore the role of content in the semantic layer, a quick refresher on what a semantic layer is. In a recent post by EK’s Lulit Tesfaye, Partner and the VP for Knowledge & Data Services and Engineering, she proposes this definition:

    A semantic layer is a standardized framework that organizes and abstracts organizational data (structured, unstructured, semi-structured) and serves as a data connector for data and knowledge. Larger than a data fabric, which is more focused on structured data, a semantic layer connects all organizational knowledge assets via a well-defined and standardized semantic framework, including content items, data, files, and media. It allows organizations to represent organizational knowledge and domain meaning to systems and applications, defining the relationship between content and data. 

    Beyond the data fabric, a semantic layer that incorporates content makes it possible to organize and enrich content with semantic meaning, empowering consuming systems and end users with advanced content management, discovery, and analytical capabilities.

    Illustration of the elements of the semantic layer

    Enriching Content in the Semantic Layer

    The semantic layer provides contextual understanding by capturing the meaning, relationships, and domain-specific knowledge embedded within content and by incorporating data about the content. Through techniques such as semantic tagging and entity extraction, the content is enriched with metadata that describes its context, topics, and entities. This contextual understanding enables users to interpret and analyze content and other integrated data sources in a more meaningful way, leading to deeper insights and better decision-making.

    When content is broken down into a dynamic content model, it becomes possible to realize the benefits of the semantic layer at a granular level. Dynamic content lets you use your content as data and realize the full value of your business assets by making those structured content elements available to various consuming systems and experiences. When those structured content elements are individually enriched with metadata like tags, categories, and keywords, they become building blocks that can be targeted and assembled at the point of need, creating contextually relevant content experiences. The semantic layer allows you to find the content that has an answer and directs you to the answer itself. Combining dynamic content and the semantic layer creates opportunities for improved content operations processes through the content lifecycle. Examples of content that can be managed in this structured manner include rules and regulations, employee handbooks, technical documentation, sales enablement and product marketing copy, and learning content.

    Content Operations and the Semantic Layer

    Content integrated into a semantic layer supports the capabilities of content operations systems in several ways. The content can be better structured and organized in a content or digital asset repository for more efficient and relevant access. Recommendation engines can use the data to deliver content that is aligned with user interests and needs. Entity recognition and topic modeling facilitate content analysis to inform strategy and decision-making. When fed into consuming systems, the combination of the content and the semantic layer facilitates effective data management, knowledge discovery, and decision-making processes within an organization.

    Illustration of the layers in a semantic publishing framework

    When content is incorporated into a semantic framework, organizations have a powerful tool for unlocking the full potential of their content investment as it is integrated with the full range of organizational data and knowledge.

    Use Cases for Content in the Semantic Layer

    The need for actionable insights and efficient information retrieval can be supported by the integration of a content graph within a semantic layer, consolidating disparate content sources into a cohesive knowledge repository. This integration process incorporates standardized metadata and establishes mappings of the relationships and dependencies between different concepts and entities. It facilitates the automation of content generation processes, leveraging advanced algorithms and structured data to produce tailored content at scale.

    Let’s look at how this capability can enable applications in knowledge discovery, automated content generation, and content recommendations.

    Knowledge Discovery

    Represented through ontologies, taxonomies, and knowledge graphs, the content within the semantic layer embodies organizational knowledge and domain expertise. This rich knowledge representation facilitates exploration and discovery, allowing users to uncover valuable insights and patterns hidden within interconnected content.

    When that content is modeled and structured in the source systems, it can be enriched with metadata at the component level. The structure of content in a content repository plays a crucial role in facilitating data organization and abstraction within the semantic layer regardless of its original format or source. By incorporating the content metadata into a common semantic framework with other types of data, the semantic layer enables integration and interoperability between different data types, allowing users to access and leverage all relevant information from a single source. That source is then optimized for the kind of system that consumes it, whether a web content management system, a learning management system, a marketing campaign management system, or another use case.

    In a recent project, EK worked with a federal space research institute to develop an enterprise knowledge graph to connect the dots between people, projects, engineering components, and engineering topics. Leveraging and enriching an existing taxonomy and ontology at the organization, EK automatically extracted key entities from a repository of unstructured documents, adding structured metadata to these text files. That knowledge graph was then incorporated into a semantic search platform to enable faceted search and navigation across individuals, projects, and unstructured text documents, significantly reducing time spent finding information.

    Automated Content Generation

    Managing content within a semantic layer can also facilitate content generation for highly structured content types such as those used in industries such as pharma (e.g., rules, regulations, definitions, reports). By organizing content elements and their relationships in a structured format, the semantic layer provides the foundation for understanding the context and meaning of data. This contextual understanding enables intelligent algorithms to analyze existing content, identify patterns, and generate new content autonomously.

    Using Natural Language Processing (NLP) techniques, algorithms can extract insights from structured content and generate human-readable text, such as articles, reports, or summaries, based on predefined templates or rules. By leveraging machine learning models trained on vast amounts of data, these algorithms can adapt and improve over time, producing increasingly accurate and relevant content.

    Additionally, the semantic layer facilitates the reuse of existing content components in a structured content environment, such as paragraphs, sections, or entire documents, to create new content. Automated systems can dynamically assemble tailored content pieces by identifying and retrieving relevant content based on semantic relationships and metadata.
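
    The assembly idea can be sketched in a few lines: reusable components carry semantic tags, and a request pulls and stitches together the components whose tags match. The component store, tags, and report section below are invented.

```python
# Toy sketch of dynamic content assembly: components are retrieved by
# semantic tags and stitched into a new document. All data is invented.
COMPONENTS = [
    {"id": "c1", "tags": {"drug-x", "safety"}, "text": "Drug X showed no adverse events in trial A."},
    {"id": "c2", "tags": {"drug-x", "efficacy"}, "text": "Drug X reduced symptoms by a measured margin."},
    {"id": "c3", "tags": {"drug-y", "safety"}, "text": "Drug Y requires liver-function monitoring."},
]

def assemble(required_tags: set[str]) -> str:
    """Concatenate every component whose tag set includes all required tags."""
    parts = [c["text"] for c in COMPONENTS if required_tags <= c["tags"]]
    return "\n\n".join(parts)

# Assemble the safety section of a hypothetical Drug X report.
print(assemble({"drug-x", "safety"}))
```

    In a real semantic layer the tag lookup would run against a knowledge graph or taxonomy service rather than an in-memory list, but the retrieval-then-assembly pattern is the same.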

    The combination of structured content management in a semantic layer and advanced algorithms for automated content generation opens up possibilities for streamlining content creation processes, reducing manual effort, and delivering personalized and timely content at scale.

    EK recently worked with a pharmaceutical company to automate regulatory report generation. We created an ontological data model and knowledge graph to map experiment management and results data across the product development lifecycle. By extracting and connecting information in the semantic layer, analysts can quickly generate reports needed for regulatory filings and use templatized content models to embed those reports seamlessly.

    Content Recommendations

    The semantic layer enriches content with additional metadata and annotations to enhance its discoverability, relevance, and usability. By analyzing content for relevant keywords, topics, and sentiment, the semantic layer enables targeted content recommendations and tailored user experiences. Leveraging ontologies and knowledge graphs, content can be organized within the semantic layer, making it easier to surface relevant content to users based on their interests and browsing history. Through continuous analysis of user interactions and feedback, content recommenders can adapt and refine their recommendations in real time, ensuring relevance and timeliness.

    With content integrated into the semantic layer, organizations can implement next-generation content recommenders, offering hyper-personalized experiences by leveraging enriched metadata and advanced algorithms to support targeted content delivery and recommend similar content. Personalized content recommendations increase user satisfaction and drive higher engagement metrics such as click-through rates, time spent on site, and conversion rates.

    Semantic layers can identify related content items by analyzing the similarity between pieces of content. This similarity analysis enables the creation of features such as “You May Also Like” recommendations or relevant course materials based on user identity and behavior, enhancing engagement and content discovery.
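
    A minimal version of this similarity analysis can be sketched with TF-IDF vectors; the recommenders described here would also fold in taxonomy tags and user behavior, and the course titles below are invented.

```python
# Minimal "You May Also Like" sketch using TF-IDF similarity from
# scikit-learn. Course titles are invented; production recommenders would
# also use taxonomy tags and user behavior.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = [
    "Infection Control in Hospitals",
    "Hand Hygiene and Infection Prevention",
    "Fire Safety in the Workplace",
    "Workplace Fire Drill Procedures",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(titles)
sims = cosine_similarity(tfidf)

# Recommend the most similar other item for each course.
for i, title in enumerate(titles):
    sims[i, i] = 0.0  # ignore self-similarity
    print(f"{title} -> {titles[sims[i].argmax()]}")
```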

    EK recently worked with an organization that provides online healthcare compliance training solutions to build a cloud-based recommendation engine microservice supported by a semantic data layer that unifies, relates, and organizes the source data (course titles, descriptions, tags, and more). The engine generates recommendations for compliance terms for tagging content to appear in recommended courses for customers to ensure that they comply with regulations critical to their role, setting, and jurisdiction.

    Summary

    Structuring and organizing content within the semantic layer empowers organizations to extract valuable insights, standardize metadata, identify relationships, and enhance searchability. This facilitates the seamless integration of diverse content sources into applications such as recommendation engines, dashboards, and end-user-facing content. This integration empowers users to access and leverage all relevant information from a single source, driving better decision-making and knowledge discovery.

IA Fast-Track to Search-Focused AI Solutions: Information Architecture Conference 2024
https://enterprise-knowledge.com/ia-fast-track-to-search-focused-ai-solutions-information-architecture-conference-2024/ (April 30, 2024)

    Sara Mae O’Brien-Scott and Tatiana Baquero Cakici, Senior Consultants at Enterprise Knowledge (EK), presented “AI Fast Track to Search-Focused AI Solutions” at the Information Architecture Conference (IAC24) that took place on April 11, 2024 in Seattle, WA.

    In their presentation, O’Brien-Scott and Cakici focused on what Enterprise AI is, why it is important, and what it takes to empower organizations to get started on a search-based AI journey and stay on track. The presentation explored the complexities of enterprise search challenges and how IA principles can be leveraged to provide AI solutions through the use of a semantic layer. O’Brien-Scott and Cakici showcased a case study where a taxonomy, an ontology, and a knowledge graph were used to structure content at a healthcare workforce solutions organization, providing personalized content recommendations and increasing content findability.

    In this session, participants gained insights about the following:

    • Most common types of AI categories and use cases;
    • Recommended steps to design and implement taxonomies and ontologies, ensuring they evolve effectively and support the organization’s search objectives;
    • Taxonomy and ontology design considerations and best practices;
• Real-world AI applications that illustrate the value of taxonomies, ontologies, and knowledge graphs; and
    • Tools, roles, and skills to design and implement AI-powered search solutions.

    The post IA Fast-Track to Search-Focused AI Solutions: Information Architecture Conference 2024 appeared first on Enterprise Knowledge.

Expert Analysis: Top 5 Considerations When Building a Modern Knowledge Portal https://enterprise-knowledge.com/expert-analysis-top-5-considerations-when-building-a-modern-knowledge-portal/ Tue, 30 Jan 2024 16:30:38 +0000 https://enterprise-knowledge.com/?p=19544

    Knowledge Portals aggregate and present various types of content – including unstructured content, structured data, and connections to people and enterprise resources. This facilitates the creation of new knowledge and discovery of existing information.

    The following article highlights five key factors that design and implementation teams should consider when building a Knowledge Portal for their organizations.

    Kate Erfle and Gui Galdamez

    Sources of Truth

Guillermo Galdamez

    We define ‘sources of truth’ as the various systems responsible for generating data, recording transactions, or storing key documents about the vital business processes of an organization. These systems are fundamental to the day-to-day operations and long-term strategic objectives of the business. 

    In a modern enterprise, the systems supporting diverse business processes can number in the dozens, if not hundreds, depending on the organization’s size. However, from the business perspective of a Knowledge Portal implementation, it is critical to prioritize integrations with each source based on appropriate criteria. Drawing from our experience, we’ve identified three key factors that Knowledge Portal leaders should consider:

    • Business value. The source must contain data that is fundamental to both the business and to the Knowledge Portal’s objectives, aligning with the users’ expectations.
    • Data readiness. The data within the source should be in a state ready for ingestion and aggregation (more on this in the next section). 
    • Technical readiness. It may sound obvious, but the source systems need to be capable of providing data to the Knowledge Portal. In some cases, a source system might be under active development (and not yet operational), or it might have limited functionality for exporting the necessary data for the Knowledge Portal’s use cases.

    Kate Erfle

A Knowledge Portal should draw from well-defined information sources that are recognized as being authoritative and trusted. A Knowledge Portal isn’t intended to act as the “source of truth” itself, but rather to aggregate and meaningfully connect data sources and repositories, providing a cohesive “view of truth.”

    As Guillermo pointed out, there are several key data and technical readiness factors to consider when integrating source systems within a Knowledge Portal ecosystem. For a successful implementation, the source systems should meet the following technical criteria:

    • The data must be clean, consistent, and standardized (more on this in the next section).
    • The data should be accessible in a standard, compatible format, either via an API or manual export.
    • The data must be protected by necessary and appropriate security measures, or it should provide data points that can be used to implement and enforce these security measures.

    Once a data source meets the established criteria for quality, import/export capabilities, and security, it becomes eligible for integration with the Knowledge Portal. Within the portal, it may be possible to create or update content, but the data source remains its own “source of truth”. All changes made within the Knowledge Portal should be reflected in the corresponding source system to maintain consistency, accuracy, and integrity of the source system data. During the design and implementation of a Knowledge Portal, it is critical to consider the impact of user actions and to ensure that any changes are accurately reflected in the source data. This approach ensures the continued accuracy and reliability of data from the source systems.

    Information Quality


    Guillermo Galdamez

One of the most common issues I encounter when talking to our clients is the perception that their data and unstructured content are unreliable. This could be due to various issues: the data might be incomplete, duplicative, outdated, or just plain wrong. As a result, employees can spend hours not only searching for information and data but also tracking down people who can confirm its reliability and usability.

In discussing content and data quality, one of the foundational steps is taking inventory of the ‘stuff’ contained within your prioritized sources of truth. Though the maxim “You can’t manage what you can’t measure” has often sparked debate about its merits, this is one occasion where it is notably relevant. It is important for the implementation team, as well as the business, to have visibility into the data and content it means to ingest and display through the Knowledge Portal. Performing a content analysis is key to providing the Knowledge Portal team with the information they need to ensure that the information provided by the Knowledge Portal is consistent, reliable, and timely.

A content inventory and audit often reveal areas where data and content need to be remediated, migrated, archived, or disposed of. The Knowledge Portal team should take this opportunity to perform data and content cleanup. During development, the Knowledge Portal team can collaborate with various teams to improve data and content quality. Even following its launch, the Portal, by aggregating and presenting information in new ways, can reveal gaps or inconsistencies across its sources. It is helpful to define feedback mechanisms between users, the Knowledge Portal team, and data and content owners to address instances where data and content need to be maintained.

    Gaining and sustaining user trust is crucial for Knowledge Portals. Users will continuously visit the Portal as long as they perceive that it solves their previous challenges. If the Portal becomes a new ‘junk drawer’ for data, engagement will decline rapidly. To avoid this, implement a strong change management and communications strategy to continually remind users about the Portal’s capabilities and value.

Kate Erfle

    Maintaining high-quality data and content is crucial for a Knowledge Portal’s success. As Guillermo stated, the implementation phase of a Knowledge Portal offers the perfect opportunity for data cleanup.

    To begin, it’s important for individual system owners and administrators to do what is feasible within their systems to ensure high-quality data. Before it’s provided to the Portal, several transformation and cleaning steps can be applied directly to the source system data. The Knowledge Portal implementation team should collaborate closely with the various data repository teams to ensure the required data fields are standardized, cleaned, and validated before being exported. By working together, these teams can assess the current state of the data, identify missing fields, spot discrepancies, and address inconsistencies.

    If the data from source systems still contains imperfections, a few remediation strategies can be applied to prepare it for integration: 

    • Removing Placeholder or Dummy Data: If the data source team is unable to remediate placeholder or dummy data, the Portal team can compile a list of these “dummy values” and remove them entirely. Displaying a field as “empty” is preferable to showing a fake or false value.
• Normalizing Terms with a Controlled Vocabulary: In cases where the source system lacks a controlled vocabulary, the Portal team can align certain data fields with the Knowledge Portal’s taxonomy and ontology, using synonyms to map various representations of the same concept to a single preferred term (see the sketch following this list).
    • Enforcing Data Standards through APIs: The Portal team’s APIs can be configured to expect and enforce specific data standards, models, and types, ensuring that only accurately conforming data is ingested and displayed to the end user. Such enforcement can also highlight required fields and alert data teams when essential data is missing, which increases visibility into the underlying issues associated with bad data.
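Below is a minimal sketch of the first two remediation strategies, assuming illustrative dummy values and a hypothetical synonym ring; neither reflects any particular source system’s vocabulary.

```python
# A minimal data-remediation sketch: drop placeholder values, then
# normalize variants to a preferred taxonomy label. All values hypothetical.
DUMMY_VALUES = {"n/a", "tbd", "xxx", "none", "placeholder"}

# Hypothetical synonym ring mapping variants to a preferred label.
SYNONYMS = {
    "hr": "Human Resources",
    "human resources dept": "Human Resources",
    "human resources": "Human Resources",
}

def remediate(field_value: str | None) -> str | None:
    """Drop placeholder values and normalize variants to preferred labels."""
    if field_value is None:
        return None
    value = field_value.strip().lower()
    if value in DUMMY_VALUES:
        return None  # an empty field is preferable to a fake or false value
    return SYNONYMS.get(value, field_value.strip())

print(remediate("TBD"))                   # None
print(remediate("human resources dept"))  # Human Resources
```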

    Guillermo emphasized the importance of remedying data issues to build and maintain user trust and buy-in. Effectively addressing bad data is also critical to avoid significant issues:

    • Preventing Unauthorized Access to Information: Without proper security measures and clear definitions of user identities and access rights, there’s a high risk of sensitive information being exposed. The data needs to clearly indicate who should be granted access, and users need to be uniquely and consistently identifiable across systems.
• Ensuring Full Functionality of the Knowledge Portal: If data is incomplete or untrustworthy, it impedes the use of the Knowledge Portal’s advanced capabilities and functionalities. Reliable data is foundational for realizing its full potential.

    Business Meaning and Context


    Guillermo Galdamez

    As mentioned earlier, Knowledge Portals aggregate information from diverse sources and present it to users, introducing a new capability to the organization. It’s essential for the Knowledge Portal team to fully understand the data and information being presented to the users. This includes knowing its significance and business value, its origin, how it is generated, and its connection to other business processes. Keep in mind that this information is seldom presented to users all at once, so they will likely face a learning curve to utilize the Knowledge Portal effectively. This challenge can be mitigated through thoughtful design, change management, training, and communication. 

Designs for a Knowledge Portal need to strategically organize different information elements. This involves not only prioritizing these elements based on relative importance, but also ensuring they align with business logic and are linked to related data, information, and people. In other words, the design needs to be understandable to all intended users at a single glance. Achieving this requires clear, prioritized use cases tailored to the Knowledge Portal’s audiences, combined with thorough user research that informs user expectations. With this understanding, it becomes easier to design with user needs and objectives in mind and to fit the Portal more seamlessly into users’ daily workflows and activities.

    Effective change management, training, and communications help reinforce the purpose and the value of a Knowledge Portal, which might not always be intuitive to everyone across the organization; some users may be resistant to change, preferring to stick to familiar routines. It’s crucial for the Knowledge Portal team to understand these users’ motivations, their hesitations, and what they value. Clearly articulating the individual benefits users will gain from the Portal, setting clear expectations, and providing guidance on using the Portal successfully are crucial for new users to adopt it and appreciate its value in their work.

Kate Erfle

    It is essential to provide context to the information available on the portal, especially within a specific business or industry setting. This involves adding metadata, descriptions, and categorizations to data, allowing siloed, disconnected information to be associated and helping users discover content relevant to their needs quickly and efficiently. 

A robust metadata system and a well-defined taxonomy can aid in organizing and presenting content in a meaningful way. It’s important to evaluate the current state of existing taxonomies and controlled vocabularies across each source system, as well as to assess the prevalence and consistency of metadata applied to content within these systems. These evaluations help determine the level of effort required to standardize and connect content effectively. To obtain the full benefits of a Knowledge Portal (creating an Enterprise 360 view of the organization’s assets, knowledge, and data), this content needs to be well-defined, categorized, and described.

    Security and Governance


    Guillermo Galdamez

    One of the most common motivations driving the implementation of Knowledge Portals is the user’s need to quickly find specific information required for their work. However, users often overlook the equally important aspect of securing this information. 

    Often, information is shared through unsecured channels like email, chat, or other common communication methods at users’ disposal. This approach places the responsibility entirely on the sender to ascertain and decide if a recipient is authorized to receive the information. Sometimes senders mistakenly send information to the wrong person, or they may need additional time to verify the recipient’s access rights. Furthermore, senders may need to redact parts of the information that the recipient isn’t permitted to see, which adds another time-consuming step. 

    The Knowledge Portal implementation must address this organizational challenge. At times, the Knowledge Portal team will need to guide the organization in establishing a clear framework for access control. This is especially necessary when the Knowledge Portal creates new types of information and data by aggregating, repackaging, and delivering them to users.

Kate Erfle

    Security and governance are paramount in the construction of a Knowledge Portal. They profoundly influence various implementation details and are critical for ensuring the confidentiality, integrity, and availability of information within the portal.

    The first major piece of security and governance is user authentication, which involves verifying a user’s identity. Several options for implementing user authentication include traditional username and password, Multi-Factor Authentication (MFA), and Single Sign-On (SSO). These choices will be influenced by the existing authentication and identity management systems in use within the client organization. Solidifying these design decisions early in the architecting process is critical as they affect many facets of the portal’s implementation.

    The second major piece of security and governance is user authorization, which involves granting users permission to access specific resources based on their identity, as established through user authentication. Multiple authorization models may be necessary based on the level of fine-grained access control required. Popular models include: 

    • Role-Based Access Control (RBAC): This model involves defining roles (e.g., admin, user, manager) and assigning specific permissions to each. Users are then assigned to these roles, which determine their access level.
    • Attribute-Based Access Control (ABAC): In this model, access decisions are based on user attributes (e.g., department, location, job title), with policies that specify the conditions for access.

    Depending on the organization’s use case, one or a combination of these may be used to manage user access and ensure sensitive data is secured. The difficulty and complexity of the implementation will be directly correlated with the current and target state of identity and security management across the organization, as well as the breadth and depth of data classification applied to the organization’s data.
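As a minimal, hypothetical sketch of how such a combination might look, the snippet below layers an ABAC attribute check on top of an RBAC permission check; in practice, a Knowledge Portal would delegate these decisions to the organization’s identity and access management platform. The roles, permissions, and attribute policy are illustrative assumptions.

```python
# A minimal sketch combining RBAC and ABAC access checks.
# Roles, permissions, and the department policy are hypothetical.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "manager": {"read", "write"},
    "user": {"read"},
}

def can_access(user: dict, action: str, resource: dict) -> bool:
    """Grant access only if the user's role allows the action (RBAC)
    and the user's attributes satisfy the resource's policy (ABAC)."""
    role_ok = action in ROLE_PERMISSIONS.get(user["role"], set())
    # ABAC: a resource restricted to a department is visible only to
    # users in that department; unrestricted resources pass the check.
    attr_ok = resource.get("department") in (None, user.get("department"))
    return role_ok and attr_ok

user = {"role": "manager", "department": "Finance"}
print(can_access(user, "write", {"department": "Finance"}))  # True
print(can_access(user, "write", {"department": "Legal"}))    # False
```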

    Information Seeking and Action


    Guillermo Galdamez

    Knowledge Portal users will approach their quest for information in a variety of ways. Users may prefer to browse through content during exploratory sessions, or they may leverage search when they know precisely what they need. Often, users employ a combination of these approaches depending on their specific needs for data or content. 

For instance, in a recent Knowledge Portal project, our user research revealed that individuals rarely searched for documents directly. Instead, they searched for various business entities and then browsed through related documents. This prompted the team to reevaluate the prioritization of documents in search results and the necessary data points that should be displayed alongside these documents to provide meaningful context.

    In summary, having a strong user research strategy is essential to understand what type of data and information users are seeking, their reasons for needing it, their subsequent use of it, and how this supports the broader organization’s processes and objectives.

Kate Erfle

Knowledge Portals are designed to provide users with access to a broader range of information and resources than is available in any single source system, and they should facilitate users in both finding necessary information and taking meaningful actions based on that information.

    Information Seeking Involves:

    • Search Functionality: A robust search engine matches user queries to the most relevant content. This involves keyword relevance, search and ranking algorithms, and user-specific parameters. Tailoring these parameters to the organization’s specific business use cases improves search accuracy. The incorporation of taxonomies and ontologies for content categorization, tagging, and filtering further refines search results, aligning them with organization-specific terminology, and enables users to sift through results using familiar business vernacular.
• Browsing and Navigation: Well-structured categories, facets, menus, and user-friendly navigation features help users discover not just the information they directly seek, but also related, relevant content they may not have anticipated. This can be done through various interfaces, including mobile applications, enhancing the portal’s accessibility and user interaction (a minimal facet-filtering sketch follows this list).
    • Dynamic Content Aggregation and Personalization: A standout feature of Knowledge Portals is their ability to aggregate data from various sources into a single page, which can be highly personalized. For instance, a project aggregator page might include sections on related projects, prioritizing those relevant to the user’s department.
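The facet-filtering sketch referenced above might look like the following; the facet names and result set are hypothetical, and a production portal would typically rely on its search engine’s native aggregation and filtering features.

```python
# A minimal sketch of taxonomy-driven facet filtering over tagged
# search results; all titles and facet values are hypothetical.
results = [
    {"title": "Q3 Budget Review", "facets": {"type": "report", "dept": "Finance"}},
    {"title": "Onboarding Guide", "facets": {"type": "guide", "dept": "HR"}},
    {"title": "Audit Checklist", "facets": {"type": "report", "dept": "HR"}},
]

def apply_facets(items: list, selected: dict) -> list:
    """Keep only results whose tags match every selected facet value."""
    return [
        item for item in items
        if all(item["facets"].get(f) == v for f, v in selected.items())
    ]

print([r["title"] for r in apply_facets(results, {"type": "report"})])
# ['Q3 Budget Review', 'Audit Checklist']
```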

    Action Involves:

    • Integration with Source Systems or Applications: Providing seamless links to source systems within the Knowledge Portal allows users to easily find content and perform CRUD (Create, Read, Update, Delete) operations on the original content.
    • Task Support: Tools for document generation, data visualization, workflow automation, and more, assist users in their daily tasks and enable them to make the most of source data and optimize business processes.
• Learning and Performance Support: Dynamic content and interactive features encourage users to actively engage with content, which strengthens their comprehension and absorption of information.
    • Feedback Mechanism: Enabling users to contribute feedback on content and documents within the portal fosters continuous improvement and enhances the portal’s effectiveness over time.

    Closing

    The business and technical considerations outlined here are essential for creating a Knowledge Portal that intuitively delivers information to its users. Keep in mind that these considerations are interconnected, and a well-designed Knowledge Portal should strike a balance between them to provide users with a seamless and enriching experience. Should your organization aspire to implement a Knowledge Portal, our team of experts can guide you through these challenges and intricacies, ensuring a successful deployment.

    The post Expert Analysis: Top 5 Considerations When Building a Modern Knowledge Portal appeared first on Enterprise Knowledge.

Content Engineering for Personalized Product Release Notes https://enterprise-knowledge.com/content-engineering-for-personalized-product-release-notes/ Tue, 14 Nov 2023 16:36:46 +0000 https://enterprise-knowledge.com/?p=19222


    The Challenge

A global software vendor with a vast portfolio of cloud software products needed to deliver more personalized, timely, and relevant release notes for upcoming product changes in a continuous integration, continuous delivery (CI/CD) environment to both internal and external end-users. The existing delivery and structure of release notes made the content hard to find and less relevant to external client organizations’ product administrators and to individual customers in their product instances. The CI/CD environment for content delivery presented the challenge of continual releases and updates to system operations, increasing the likelihood of disruptions when introducing the latest change.

    Customers struggled to understand product changes and reported they did not receive enough information to prepare their organization for continual product changes, resulting in diminished customer trust in the software vendor.

    The Solution

As part of this end-to-end content engineering engagement, EK defined the Current and Target State of content management and the content model while accounting for new content authoring processes, the software vendor’s unique brand voice, and their long-term content operations vision. EK focused on developing a comprehensive content model supporting structured and componentized release note content, improving user experience (UX) interactions, and leveraging the organization’s taxonomy to filter content for more personalized delivery.

    EK facilitated a series of focus groups across the software vendor’s various teams of content authors, marketing, technical SMEs, and executive leadership to define the current state of content authoring processes and content management and ensure cross-team alignment on the target state for authoring, content management, and structured content model design. EK’s team collaborated with the SMEs at the client organization and the vendor of the solution’s CMS when designing the high-level solution architecture through a content engineering and content strategy lens, leveraged the client’s existing tech stack, and enabled the implementation of a structured and componentized content model in a headless CMS environment. EK carefully considered the stakeholder requirements in our delivery of the following:

    Holistic Solution Architecture Design – EK collaboratively developed a solution architecture for data integration and data flow between existing systems in the client organization’s technology ecosystem. The solution design also supports structured content authoring, publishing processes, and multi-channel contextualized delivery for three specific delivery channels.

Content Operations Enhancement – EK leveraged human-centered design practices when developing content governance around authoring processes that streamlined authoring workflows and when contributing to change management planning for author training. EK’s efforts included designing a structured content model and taxonomy that enabled content reuse in multiple personalized contexts without duplication.

Content Personalization Strategy – EK’s consultants defined a content strategy that accounted for the complexities around content, process, and personalization requirements gathered in stakeholder sessions to transform a legacy content model into structured content components. EK’s structured content design for in-product release notes provides comprehensive, personalized information about each new feature or fix within the specific product version purchased by an enterprise customer, in addition to delivering content about relevant features within an individual subscriber’s product version.

The componentized content model and newly enhanced metadata definition enabled tailored multi-channel delivery of product release notifications and contextualized in-product announcements. The content model also supports a product administration interface that provides comprehensive, personalized information about each new feature or fix relevant to the customer’s specific instance. EK’s team developed the high-level solution architecture for data integration and data flow between the content management system and the internal environments of the vendor’s portfolio of products to support new authoring and publishing processes.
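To illustrate the general idea (not the client’s actual content model), the following sketch filters componentized release-note content, tagged with hypothetical product versions and audiences, down to what is relevant for a given customer instance.

```python
# A minimal sketch of componentized release-note personalization.
# Component names, versions, and audiences are hypothetical.
from dataclasses import dataclass

@dataclass
class ReleaseNoteComponent:
    feature: str
    versions: set   # product versions this change applies to
    audiences: set  # e.g. {"admin", "subscriber"}

NOTES = [
    ReleaseNoteComponent("New SSO option", {"v5", "v6"}, {"admin"}),
    ReleaseNoteComponent("Redesigned dashboard", {"v6"}, {"admin", "subscriber"}),
]

def notes_for(version: str, audience: str) -> list:
    """Select only the components relevant to a customer's instance and role."""
    return [n.feature for n in NOTES
            if version in n.versions and audience in n.audiences]

print(notes_for("v6", "subscriber"))  # ['Redesigned dashboard']
print(notes_for("v5", "admin"))       # ['New SSO option']
```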

    The EK Difference

EK consultants leveraged their vast knowledge and expertise in advanced content management practices to solve the client’s content and system design challenges, producing authoring, modeling, and delivery architecture design requirements as well as content operation processes. EK’s expert consultants conducted multiple interviews, facilitated collaborative working sessions, and engaged in supplemental conversations to align disparate stakeholders across various product and content teams. The outcome of this human-centered design approach enabled our consultants to holistically understand stakeholders’ needs and bridge gaps across stakeholders’ diverse goals and the company’s long-term vision.

EK’s approach to stakeholder alignment paired direct stakeholder interaction, which informed a comprehensive content strategy, with a technical solution architecture that supported its recommendations. EK’s technology experts leveraged existing system architecture, processes, and data flows in our end-to-end delivery of content engineering, content strategy, and content architecture design, reducing technology costs and training burden.

EK collaboratively worked with the client when defining the distinct internal authoring experience, external end-user experience, and content experiences necessary for specific delivery channels. The outcome of EK’s end-to-end engagement provided the client with the right content structure, system architecture, and content strategy to successfully deliver personalized and relevant content promptly to end-users within a continuous integration, continuous delivery (CI/CD) environment.

    The Results

    EK’s content design and implementation guidance enabled personalized and consistent delivery of product release notifications to the software vendor’s numerous product administrators and customers at the point of need through distinct delivery channels. 

Content governance and editorial guidelines ensure authoring processes and experiences are now intuitive and flexible. The structured content model for componentized content and enhanced metadata made content creation more efficient by providing accurate references to related content and components for reuse in the authoring environment, increasing content authors’ awareness of existing content and reducing rework.

The new componentized content model supports customized and contextualized end-user experiences, which improved overall customer engagement. Since implementation of the multi-channel delivery strategy, structured content model, and system architecture integration, the software vendor has seen a 300% increase in product release notification engagement. Customers now consume only relevant product release notifications and announcement content, and in a more timely manner, giving them more time to prepare for changes and mitigate organizational risk. End-users can quickly reference content within product applications or the comprehensive product documentation web page, and now interact with enriched information on each change, how to prepare for it, who it impacts, and when to expect it.


    The post Content Engineering for Personalized Product Release Notes appeared first on Enterprise Knowledge.

KM Fast Track to Search-Focused AI Solutions https://enterprise-knowledge.com/km-fast-track-to-search-focused-ai-solutions/ Thu, 02 Nov 2023 14:32:36 +0000 https://enterprise-knowledge.com/?p=19116

    In today’s rapidly evolving digital landscape, the ability to quickly locate and connect critical information is key to organizational success. As organizations struggle with ever-expanding datasets and information silos, the need for effective search-focused artificial intelligence (AI) solutions becomes vital. This infographic takes you on a journey, drawing an analogy with trains, to emphasize the crucial role of Taxonomies, Ontologies, and Knowledge Graphs in improving knowledge findability. These three elements represent your tickets to the fast track. They can propel your organization’s information search into high-speed efficiency to enhance information retrieval and achieve decision-making excellence.

    KM Fast Track to Search-Focused AI Solutions

    If your organization is considering the adoption of Search-Focused AI solutions, EK is here to help. With our extensive expertise, we specialize in crafting and deploying customized and actionable solutions that enhance organizations’ information search and knowledge findability. Please feel free to contact us for more information.

    The post KM Fast Track to Search-Focused AI Solutions appeared first on Enterprise Knowledge.
