Chris Marino, Author at Enterprise Knowledge
https://enterprise-knowledge.com

Incorporating Unified Entitlements in a Knowledge Portal
https://enterprise-knowledge.com/incorporating-unified-entitlements-in-a-knowledge-portal/
Wed, 12 Mar 2025

The post Incorporating Unified Entitlements in a Knowledge Portal appeared first on Enterprise Knowledge.

Recently, we have had a great deal of success developing a certain breed of application for our customers—Knowledge Portals. These knowledge-centric applications holistically connect an organization’s information—its data, content, people, and knowledge—from disparate source systems. These portals provide a “single pane of glass” to enable an aggregated view of the knowledge assets that are most important to the organization.

The ultimate goal of the Knowledge Portal is to provide the right people access to the right information at the right time. This blog focuses on the first part of that statement—“the right people.” This securing of information assets is called entitlements. As our COO Joe Hilger eloquently points out, entitlements are vital in “enabling consistent and correct privileges across every system and asset type in the organization.” The trick is to ensure that an organization’s security model is maintained when aggregating this disparate information into a single view so that users only see what they are supposed to.


The Knowledge Portal Security Challenge

The Knowledge Portal’s core value lies in its ability to aggregate information from multiple source systems into a single application. However, any access permissions established outside of the portal—whether in the source systems or an organization-wide security model—need to be respected. There are many considerations to take into account. For example, how does the portal know:

  • Who am I?
  • Am I the same person specified in the various source systems?
  • Which information should I be able to see?
  • How will my access be removed if my role changes?

Once a user has logged in, the portal needs to know that the user has Role A in the content management system, Role B in the HR system, and Role C in the financial system. Since the portal aggregates information from all of these systems, it uses this knowledge to ensure that what a user sees in the portal reflects what they would see in any of the individual systems.


The Tenets of Unified Entitlements in a Knowledge Portal

At EK, we have a common set of principles that guide us when implementing entitlements for a Knowledge Portal. They include:

  • Leveraging a single identity via an Identity Provider (IdP).
  • Creating a universal set of groups for access control.
  • Respecting access permissions set in source systems when available.
  • Developing a security model for systems without access permissions.


Leverage an Identity Provider (IdP)

When I first started working in search over 20 years ago, most source systems had their own user stores—the feature that allows a user to log into a system and uniquely identifies them within the system. One of the biggest challenges for implementing security was correctly mapping a user’s identity in the search application to their various identities in the source systems sending content to the search engine.

Thankfully, enterprise-wide Identity Providers (IdPs) like Okta, Microsoft Entra ID (formerly Azure Active Directory), and Google Cloud Identity are ubiquitous these days. An Identity Provider (IdP) is like a digital doorkeeper for your organization: it identifies who you are and shares that information with your organization’s applications and systems.

By leveraging an IdP, I can present myself to all my applications with a single identifier such as “cmarino@enterprise-knowledge.com.” For the sake of simplicity in mapping my identity within the Knowledge Portal, I’m not “cmarino” in the content management system, “marinoc” in the HR system, and “christophermarino” in the financial system.

Instead, all of those systems, including the Knowledge Portal, recognize me as “cmarino@enterprise-knowledge.com.” The portal needs to know who I am in all systems to make these determinations, and with a single identity, its subsequent decision to provide or deny access to information is greatly simplified.


Create Universal Groups for Access Control

Working hand in hand with an IdP, the establishment of a set of universally used groups for access control is a critical step to enabling Unified Entitlements. These groups are typically created within your IdP and should reflect the common groupings needed to enforce your organization’s security model. For instance, you might choose to create groups based on a department, a project, or a business unit. Most systems provide great flexibility in how these groups are created and managed.

These groups are used for a variety of tasks, such as:

  • Associating relevant users to groups so that security decisions are based on a smaller, manageable number of groups rather than on every user in your organization.
  • Enabling access to content by mapping appropriate groups to the content.
  • Serving as the unifying factor for security decisions when developing an organization’s security model.

As an example, we developed a Knowledge Portal for a large global investment firm that used Microsoft Entra ID as their IdP. Within Entra ID, we created a set of groups based on structures like business units, departments, and organizational roles. Access permissions were applied to content via these groups, whether done in the source system or in an external security model that we developed. When a user logged in to the portal, we identified them and their group membership and combined that with the content’s permissions to determine access. Best of all, once they moved off a project or into a different department or role, a simple change to their group membership in the IdP cascaded down to their access permissions in the Knowledge Portal.


Respect Permissions from Source Systems

The first two principles have focused on identifying a user and their roles. However, the second key piece to the entitlements puzzle rests with the content. Most source systems natively provide the functionality to control access to content by setting access permissions. Examples include SharePoint documents restricted to certain staff, ServiceNow tickets available only to a certain group, or Confluence pages viewable only by a specific project team.

When a security model already exists within a source system, the goal of integrating that content within the Knowledge Portal is simple: respect the permissions established in the source. The key here is syncing your source systems with your IdP and then leveraging the groups managed there. When specifying access to content in the source, use the universal groups. 

Thus, when the Knowledge Portal collects information from the source system, it pulls not only the content and its applicable metadata but also the content’s security information. The permissions are stored alongside the content in the portal’s backend and used to determine whether a specific user can view specific content within the portal. The permissions become just another piece of metadata by which the content can be filtered.
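As a sketch of that filtering idea (the group names and document IDs below are hypothetical, and a real portal would apply this as a filter inside its search backend rather than in application code):

```python
def visible_documents(user_groups, documents):
    """Keep only documents whose allowed groups overlap the user's groups."""
    return [doc for doc in documents
            if set(doc["allowed_groups"]) & set(user_groups)]

documents = [
    {"id": "policy-001", "allowed_groups": ["finance", "executives"]},
    {"id": "handbook", "allowed_groups": ["all-employees"]},
    {"id": "m-and-a-memo", "allowed_groups": ["executives"]},
]

# Group membership resolved from the IdP at login
user_groups = ["finance", "all-employees"]
results = visible_documents(user_groups, documents)
# results contains policy-001 and handbook, but not m-and-a-memo
```

Because the permissions travel with the content as metadata, the same mechanism works no matter which source system the document came from.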


Develop a Security Model for Unsupported Systems

Occasionally, there will be source systems where access permissions are not or cannot be supported. In this case, you will have to apply your own security model, either by developing one or by using an entitlements tool. Instead of being stored within the source system, the entitlements will be managed through this internal model.

The steps to accomplish this include:

  • Identify the tools needed to support unified entitlements;
  • Build the models for applying the security rules; and
  • Develop the integrations needed to automate security with other systems. 

The process to implement this within the Knowledge Portal would remain the same: store the access permissions with the content (mapped using groups) and use these as filters to ensure that users see only the information they should.


Conclusion

Getting unified entitlements right for your organization plays a large part in a successful Knowledge Portal implementation. If you need proven expertise to help manage access to your organization’s valuable information, contact us.

The “right people” in your organization will thank you.

Expert Analysis: Keyword Search vs Semantic Search – Part One
https://enterprise-knowledge.com/expert-analysis-keyword-search-vs-semantic-search-part-one/
Mon, 20 Mar 2023

The post Expert Analysis: Keyword Search vs Semantic Search – Part One appeared first on Enterprise Knowledge.

For a long time, keyword search was the predominant method to provide search to an enterprise application. In fact, it is still a tried-and-true means to help your users find what they are looking for within your content. However, semantic search has recently gained wider acceptance as a viable alternative to keyword search. In this Expert Analysis blog, two of our senior consultants, Fernando Aguilar and Chris Marino, explain these different methods and provide guidance on when to choose one over the other.

What’s the difference between a keyword search system and a semantic search system?

Keyword Search (Chris Marino)

The heart of a keyword search system is a data structure called an “inverted index.” You can think of it as a two-column table. Each row in the table corresponds to a term found in your corpus of documents. One column contains the term, and the other column contains a list of all your documents (by ID) where that particular term appears. The process of filling up this table with the content in your documents is called “indexing.”
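A minimal sketch of that indexing process (document IDs and tokenization are simplified here; real engines also normalize case, strip punctuation, and store term positions):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document IDs that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    1: "vacation policy for new employees",
    2: "expense policy and reporting",
    3: "vacation request form",
}
index = build_inverted_index(docs)
# index["policy"] -> {1, 2}; index["vacation"] -> {1, 3}
```

Looking up a query term is then a single dictionary access, which is what makes keyword search fast even over large corpora.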

When a user performs a search in a keyword system, the search engine takes the words from their query and looks for an exact match in the inverted index. Then, it returns the list of matching documents. However, instead of returning them in random order, it applies a ranking (or scoring) algorithm to ensure that the more relevant documents appear first. This ranking algorithm is normally based on a couple of factors: “term frequency” (the number of times a term appears in the document) and the rarity of the term across your entire corpus of documents. For example, if you search for “vacation policy” in your company’s documents, “vacation” most likely appears in fewer documents than “policy,” so documents containing “vacation” should receive a higher score.
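That two-factor scoring idea is often formalized as TF-IDF; a simplified sketch follows (production engines such as Lucene use refined variants like BM25):

```python
import math
from collections import Counter

def tf_idf_score(query, docs):
    """Score each document as the sum of tf * idf over the query terms."""
    n = len(docs)
    tokenized = {doc_id: text.lower().split() for doc_id, text in docs.items()}
    scores = {}
    for doc_id, terms in tokenized.items():
        counts = Counter(terms)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized.values() if term in t)  # rarity
            if df and counts[term]:
                tf = counts[term] / len(terms)  # term frequency
                idf = math.log(n / df)          # rare terms weigh more
                score += tf * idf
        scores[doc_id] = score
    return scores

docs = {
    1: "vacation policy and travel policy",
    2: "expense policy",
    3: "office locations",
}
scores = tf_idf_score("vacation policy", docs)
# Document 1 ranks first: it contains the rare term "vacation" plus "policy"
```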

Semantic Search (Fernando Aguilar)

Semantic search, also known as vector search, is a type of search method that goes beyond traditional keyword-based search and attempts to understand the intent and meaning behind the user’s query. It uses natural language processing (NLP) and machine learning algorithms to analyze the context and relationships between words and concepts in a query, and to identify the most relevant results based on their semantic meaning. This approach is often used in applications such as chatbots, virtual assistants, and enterprise search to provide more accurate and personalized results to users.

In contrast to keyword search, which relies on matching specific keywords or phrases in documents or databases, semantic search is able to understand the underlying meaning of the query and identify related concepts, synonyms, and even ambiguous terms. This enables it to provide more comprehensive and relevant results, especially in cases where the user’s intent may not be well-defined or where multiple meanings are possible.
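At retrieval time, both the query and the content are represented as vectors, and relevance reduces to vector similarity. A toy illustration with hand-made three-dimensional vectors (a real system derives embeddings with hundreds of dimensions from a language model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings for three pieces of content
embeddings = {
    "How much time off do I get?": [0.9, 0.1, 0.2],
    "Vacation policy": [0.8, 0.2, 0.1],
    "Quarterly revenue report": [0.1, 0.9, 0.7],
}
query_vec = [0.85, 0.15, 0.15]  # hypothetical embedding of "annual leave rules"
ranked = sorted(embeddings,
                key=lambda d: cosine_similarity(query_vec, embeddings[d]),
                reverse=True)
# The revenue report ranks last despite no keyword overlap driving the result
```

Note that “annual leave rules” shares no keywords with “Vacation policy,” yet the two vectors point in nearly the same direction, which is exactly the behavior keyword search cannot provide.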

What are the Pros and Cons of using Keyword Search vs Semantic Search?

Keyword Search (Chris Marino)

Keyword search is a workhorse application that has been around for decades. This fact makes it a natural choice for many search solutions. It tends to be easier to implement because it’s a more familiar application. It’s been battle-tested, and there are a wealth of developers out there who know how to integrate it. As with many legacy systems, there are ample thought pieces, documentation, pre-built components, and sample applications available via a Google search (or just ask ChatGPT).

Another benefit of keyword search is its interpretability – the ability for a user to understand why a certain result matched the query. You can easily see the terms you have searched for in your results. Although there is an algorithm performing the scoring and ranking, a search developer can quickly discern why a certain result appeared before another and make tweaks to impact the algorithm. Conversely, the logic behind semantic search results is more of a “black box” variety. It’s not always readily apparent why a particular result was returned. This has a significant impact on overall user experience; when users understand why they’re getting a search result, they trust the system and feel more positive towards it.

The biggest drawback of keyword search is that it lacks the ability to determine the proper context of your searches. Instead of seeing your search terms as concepts or things, it sees them simply as strings of characters. Take for instance the following query:

“What do eagles eat?”

Keyword search processes and searches for each term individually. It has no concept that you are asking a question or that “what” and “do” are unimportant. Further, there are many different concepts known as “Eagles”: the bird of prey, the 1970s rock group, the Philadelphia football team, and the Boston College sports teams. While a person can surmise that you’re interested in the bird, keyword search is simply looking for any mention of the letter string “e-a-g-l-e.”

Semantic Search (Fernando Aguilar)

Semantic search has gained popularity in recent years due to its ability to understand the intent and meaning behind the user’s query, resulting in more relevant and personalized results. However, not all use cases benefit from it. Understanding the advantages, limitations, and the trade-offs between semantic and keyword search can help you choose the best approach for your organization’s specific needs.

Pros:

  • Semantic search makes search results more comprehensive and inclusive by identifying and matching term synonyms and variations.
  • Vector search provides more relevant results by considering a query’s context, allowing it to differentiate between “Paris” the location and “Paris” a person’s name. It also understands the relationships between a query’s terms, using techniques such as part-of-speech (POS) tagging to identify terms as verbs, adjectives, adverbs, or nouns.
  • It enables the user to express their intent more accurately by allowing them to make queries using natural language phrases, synonyms, or variations of terms and misspellings, leading to a more user-friendly search experience.

Cons:

  • Calculating similarity metrics to retrieve search results is computationally intensive. Optimization algorithms are generally needed to speed up the process. However, faster search times come at the cost of decreased accuracy.
  • The search results can be less relevant if users are accustomed to searching using one- or two-term queries instead of using search phrases. Therefore, it is essential to analyze current search patterns before implementing vector search.
  • Pre-trained language models need to be fine-tuned to learn and understand the relationships between words in the context of your business domain. Fine-tuning a language model will improve the accuracy of the search results, but training is usually time-consuming and resource intensive.

How do the use cases for each type of search differ?

Keyword Search (Chris Marino)

In general, any search use case is a good case for keyword search. It has been around for many years and, when configured correctly, can provide solid results at a reasonable cost. However, there are a few use cases that are particularly well-suited for keyword search: academic and legal search, primarily by librarians. It’s been my experience that these types of searchers have very exact, complex queries. Characteristics of these queries might include:

  • Exact phrase matching
  • Multi-field searches (“show me documents with X in Field 1, Y in Field 2, Z in Field 3 …”)
  • Heavy boolean searches (“show me this OR these AND those but NOT that”)

In these instances, the user needs to ensure and validate that each result matches their exact query. They are not looking for suggestions. Precision (“show me exactly what I asked for”) is more important than recall (“show me things I may be interested in but didn’t specifically request”).
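Under the hood, these exact boolean queries reduce to set operations over the posting lists of an inverted index (the document IDs below are hypothetical):

```python
# Posting lists: term -> set of matching document IDs
postings = {
    "contract": {1, 2, 5},
    "lease": {2, 3, 5},
    "draft": {5},
}

# "contract OR lease, AND NOT draft"
results = (postings["contract"] | postings["lease"]) - postings["draft"]
# results == {1, 2, 3}
```

Because every operation is exact set algebra, the user can verify that each returned document satisfies the query precisely, which is why this style of search suits librarians and legal researchers so well.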

Semantic Search (Fernando Aguilar)

The primary use case differentiator will be determined by how search users format their queries. Semantic search will prove best for users who submit search phrases where context, word relationships, and term variations are present, rather than a couple of exact terms. Hence, beyond the traditional search box, chatbots, virtual assistants, and customer service applications are great examples, since users ask questions conversationally.

What are the cool features found in keyword search vs semantic search?

Keyword Search (Chris Marino)

There are a number of features that keyword search provides to improve a searcher’s overall experience. Some of the main ones include facets, phrase-searching, and snippets.

Facets

Facets are filters that let you refine your search results to only view items that are of particular interest to you based on a common characteristic. Think of the left-hand side of an Amazon search results page. They are based on the metadata associated with your documents, so the richer your metadata, the better options you can provide to your users. In an enterprise setting, common facets are geography-based ones (State, Country), enterprise-based ones (Department, Business Unit), and time-based ones (Published Date, Modified Date – whose values can even contain relative values like “Today”, “Last 7 days”, or “This Year”).
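Since facets are just aggregations over result metadata, facet counting can be sketched as follows (the field names and values are illustrative):

```python
from collections import Counter

# Each search result carries metadata; facet counts aggregate over it
results = [
    {"title": "Q3 report", "department": "Finance", "country": "US"},
    {"title": "Hiring plan", "department": "HR", "country": "US"},
    {"title": "Q3 forecast", "department": "Finance", "country": "UK"},
]
facets = {field: Counter(r[field] for r in results)
          for field in ("department", "country")}
# facets["department"] -> Counter({"Finance": 2, "HR": 1})
```

Selecting a facet value then simply filters the result list to rows matching that value and recomputes the counts.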

Phrase searching

Phrase searching allows you to find exact phrases in your documents, normally by including the phrase within quotation marks (“”). A search for “tuition reimbursement” will only return documents that match this exact phrase, and not documents that only mention “tuition” or “reimbursement” independent from one another.
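One simple way to implement this is a positional check: a document matches only when the terms appear consecutively (real engines use positional indexes rather than rescanning text):

```python
def phrase_match(text, phrase):
    """True if the phrase occurs as consecutive terms in the text."""
    words = text.lower().split()
    target = phrase.lower().split()
    return any(words[i:i + len(target)] == target
               for i in range(len(words) - len(target) + 1))

# Matches only the exact consecutive phrase:
# phrase_match("our tuition reimbursement program", "tuition reimbursement") -> True
# phrase_match("tuition and reimbursement rules", "tuition reimbursement") -> False
```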

Snippets

Snippets are small sections from your document that include your search terms and are displayed on the search results page. They show the search terms in the context of the overall document, e.g., the main sentence that contains the terms. They provide a visual cue that helps the searcher understand why this particular document appears. Normally, the search results page displays the title of the document, but often your search term does not appear in the title. By displaying the snippet, with the search term highlighted, the user feels validated that the search has returned relevant information. We refer to this as “information scent.”
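A simplified sketch of snippet generation with term highlighting (the `<em>` markup is illustrative; real engines work on the indexed token stream and handle stemming and multiple hits):

```python
def snippet(text, term, window=5):
    """Return a short window of words around the first hit, term highlighted."""
    words = text.split()
    for i, w in enumerate(words):
        if term.lower() in w.lower():
            start = max(0, i - window)
            chunk = (words[start:i] + ["<em>" + w + "</em>"]
                     + words[i + 1:i + window + 1])
            return "… " + " ".join(chunk) + " …"
    return ""

doc = "Employees may apply for tuition reimbursement once per fiscal year."
result = snippet(doc, "tuition")
# "… Employees may apply for <em>tuition</em> reimbursement once per fiscal year. …"
```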

Semantic Search (Fernando Aguilar)

Currently, semantic search is one of the most promising techniques for improving search and organizing information. While semantic methods have already proven effective in a variety of fields, such as computer vision and natural language processing, there are several cool features that make semantic search an exciting area to watch for enterprise search. Some examples include:

  • Concurrent Multilingual Capabilities: Vector search can leverage multilingual language models to retrieve content regardless of the language of the content or the query itself.
  • Text-to-Multimodal Search: Natural language queries can retrieve un-tagged video, image, or audio content, depending on the model used to create the vectors.
  • Content Similarity Search: Semantic search can also take content as query input, so applications can retrieve similar content to the one the user is currently viewing.

Conclusion

If perfecting the relevancy of your search results isn’t directly tied to your organization’s revenue or mission achievement, keyword search provides an efficient, proven, and effective method for implementing search in your application. On the other hand, semantic search will be a better solution when your users are describing what they are looking for in natural language, when the content to be retrieved is not all text-based, or when an API (not a person) is consuming your search.

Check out some of our other thought leadership pieces on search:

5 Steps to Enhance Search with a Knowledge Graph

Dashboards – The Changing Face of Search

And if you are embarking on your own search project and need proven expertise to help guide you to success, contact us!

Is ChatGPT Ready for the Enterprise?
https://enterprise-knowledge.com/is-chatgpt-ready-for-the-enterprise/
Wed, 15 Feb 2023

The post Is ChatGPT Ready for the Enterprise? appeared first on Enterprise Knowledge.

Recently, we were visiting a client, showing the latest version of our application, when a participant asked, “Why aren’t we using ChatGPT?” It was a good and logical question, given the attention that ChatGPT and other AI-based solutions are warranting these days. While these tools, built using complex machine-learning components like large language models (LLMs) and neural networks, offer much promise, their implementation in today’s enterprise should be weighed carefully.

Rightfully so, ChatGPT and similar AI-powered solutions have created quite a buzz in the industry. It really is impressive what they currently do, and they offer much future promise. Since those of us in the technology world have been inundated with questions and remarkable tales about ChatGPT and similar tools, I took it upon myself to do a little experiment.

The Experiment

As a die-hard Cubs fan, I hopped over to the ChatGPT site and asked: “Which Cubs players have won MVPs?”

It provided a list of names that, on the surface, appeared correct. However, a few minutes spent on Google confirmed that one answer was factually wrong, as were some of the supporting facts about correctly identified players.

Impressively, a subsequent question: “Are there any others?” provided another seemingly accurate list of results. ChatGPT remembered the context of my first query and answered appropriately. Despite this, further investigation confirmed that, once again, not all of the information returned was correct.

As shown from this tiny sample, any organization needs to tread carefully when considering implementing ChatGPT and other AI-powered solutions in their current form. It’s quite possible that they lead to more problems than they solve.

Here is a list of the top issues to consider before embarking on an AI-based search solution like ChatGPT.

Accuracy Issues

For all their potential, their current implementations are haunted by one fact – they can return blatantly false information. As shown above, a sizable portion of the answers were wrong, especially during the follow-up question. Unfortunately, this is a common experience.

Further, there is no reference information returned with the result. This produces more questions than it does answers. What is the “source of truth” for the query response? What authoritative document states this information that can be referenced and verified?

Granted, when you perform a search on a traditional keyword search engine, you can sometimes get nefarious, outdated, or incorrect results. Still, these search engines are not selling the promise that they’re returning the single, definitive answer to your question. You are presented with a list to sift through and make your final decision on what is relevant to your particular needs.

While it’s entertaining to ask ChatGPT – “What is Hamlet’s famous spoken line? Repeat it back to me in a pirate’s voice” – would you really want to base an important business decision on feedback that is often inaccurate and unverifiable? All it takes is being burnt by one wrong answer for your users to lose faith in the system.

Complexity and Expense

I like to joke with my clients that we can build any solution quickly, cheaply, and impressively, but that they have to pick two of the three. With an AI-based solution like ChatGPT, you may only get to pick one. Implementing an AI solution is inherently complex and expensive, and there’s no “point and click, out of the box” option. Relevant tasks to prepare AI for the enterprise include:

  • Designing and planning for both hardware and software,
  • Collecting relevant and accurate data to feed into the system,
  • Building relevant models and training them about your domain-specific knowledge,
  • Developing a user interface,
  • Testing and analyzing your results, then iterating, perhaps multiple times, to make improvements; and,
  • Operationalizing the system into your existing infrastructure, including data integration, support, and monitoring.

Additionally, projects like these require developers with niche, advanced skills. It’s difficult enough finding experienced developers to implement basic keyword search solutions, let alone advanced AI logic. Those who can successfully build these AI-based solutions are few and far between, and in software development, the time of highly skilled developers comes at a significant cost.

Lack of Explainability

AI-based solutions like ChatGPT tend to be “black box” solutions: although powerful, the logic they use to return results is virtually impossible to explain to a user, if it’s even available.

With traditional search engines, the scoring algorithms to rank results are easier to understand. A developer can compare the scores between documents in the result set and quickly understand why one appears higher than the other. Most importantly, this process can be explained to the end user, and adjustments to the scoring can be made easily through search relevancy tuning.

Searching in the enterprise is a different paradigm than the impersonal world of Google, Amazon, and e-commerce search applications. Your users are employees, and you must ensure they are empowered to have productive search experiences. If users can’t intuitively understand why a particular result is showing up for their query, they’re more likely to question the tool’s accuracy. This is especially true for certain users, like librarians, legal assistants, or researchers, who have very specific search requirements and need to understand the logic of the search engine before they trust it.

User Experience and Readiness

The user experience for a tool like ChatGPT will be markedly different. For starters, many of the rich features to which users have grown accustomed – faceting, hit highlighting, phrase-searching – are currently unavailable in ChatGPT.

Furthermore, consider whether your users are actually ready to leverage an AI-based solution. For example, how do they normally search? Are they entering one or two keywords, or are they advanced enough to ask natural language questions? If they’re accustomed to using keywords, a single-term query won’t produce markedly better results in an AI-based solution than in a traditional search engine.

Conclusion

Although the current version of ChatGPT may not deliver immediate value to your organization, it still has significant potential. We’re focusing our current research on a couple of areas in particular. First, its capabilities around categorization and auto-summarization are very promising and could easily be leveraged in tandem with the more ubiquitous keyword search engines. Categorization lets you tag your content with key terms and provides rich metadata that powers functionality like facets. Meanwhile, auto-summarization creates short abstracts of your lengthy documents. These abstracts, properly indexed into your search engine, can serve as the basis for providing more accurate search results.

It’s perfectly acceptable to be equally impressed by the promise of tools like ChatGPT yet skeptical of how well their current offerings will meet your real-world search needs. If your organization is grappling with this decision, contact us, and we can help you navigate through this exciting journey.

Learning 360: Crafting a Comprehensive View of Learning Content Using a Graph
https://enterprise-knowledge.com/learning-360-crafting-a-comprehensive-view-of-learning-content-using-a-graph/
Wed, 03 Aug 2022

The post Learning 360: Crafting a Comprehensive View of Learning Content Using a Graph appeared first on Enterprise Knowledge.

Chris Marino, a Principal Solution Consultant at Enterprise Knowledge (EK), was a featured speaker at this year’s Data Architecture Online event organized by Dataversity. Marino presented his webinar “Learning 360: Crafting a Comprehensive View of Learning Content Using a Graph” on July 20, 2022. In his presentation, Marino took participants through the entire Graph development process, including planning, designing, and developing the new tool, highlighting benefits to the organization and lessons learned throughout the process.

Dashboards – The Changing Face of Search
https://enterprise-knowledge.com/dashboards-the-changing-face-of-search/
Fri, 22 Apr 2022

The post Dashboards – The Changing Face of Search appeared first on Enterprise Knowledge.

[Graphic: a computer visualizing dashboards]

In my twenty-plus years of search consulting, I’ve seen the technology move from something that hopefully worked on a good day to a generally acceptable, if unremarkable, experience. For a long time, search was all about words; you’d enter keywords and get text links returned. Though serviceable, this experience is far from natural or intuitive to how we as humans think and want to interact with an enterprise search solution.

We’re now in a period in which search is maturing from words to a more interactive experience – one in which relevant information can be delivered without the searcher having to ask for it. Dashboards drive this transition from an experience that requires an initial action on behalf of the searcher to a more interactive one, where more information can be gleaned directly from the search results screen and more actions can be taken to advance the user’s search goals.

For the longest time, Enterprise Search has looked and acted the same way. Most of us can relate to the experience:

  1. Open a browser and navigate to your application.
  2. Enter some keywords in a Search Box.
  3. View a list of results rendered as blue links with perhaps a list of facets or filters running down the left-hand side of the results.
  4. Move back and forth between the results page and the actual results by clicking on a possible result and skimming the content.

In this instance, the search results are simply a list of links. Serviceable, sure, but not very intuitive: in most cases they provide only a sliver of a hint about what the searcher will find once they click, and they seldom provide any actionable information without forcing the user to click. Truthfully, this is still an effective means of finding content and shouldn’t be entirely disregarded. There are still many times when your searcher will want the comfort of this experience. But that doesn’t mean it should or can be the only way to find content.

Introducing dashboards to an application’s Search UI provides three main benefits:

  • Returning answers, not simply documents.
  • Pushing information to a searcher instead of requiring them to pull it.
  • Providing a more visually appealing experience that isn’t just easier to understand, but more pleasant as well.

Let’s take a look at each in more detail, looking at what it is and why it’s beneficial to an organization.

Returning Answers, Not Simply Documents

One thing that I have learned in my time advising clients in this area is that not every answer to a user’s search is a document. Effective search is about finding and returning answers in all their forms. We talk about this in terms of the complete spectrum of content, or the complete ecosystem of knowledge resources. Unlike traditional enterprise search, today’s searcher is not always looking for a document; she’s looking for information. Perhaps a document, but perhaps a mix of results that could include other media as well as experts with whom to connect, communities with which to interact, and learning resources to enhance one’s performance.

Dashboards allow the presentation of content in a variety of formats, not just textual. For example, if I’m a call center agent indexing all my customer inquiries, the answer to which of my stores received the most complaints is not contained in any single document. However, a chart showing all of my calls broken down by store will provide that information immediately. Getting the right answers to my searchers at the right time makes them substantially more productive.
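The call-center scenario above is really an aggregation over many records rather than a lookup of one. A minimal sketch in Python illustrates the idea; the record layout and field names ("store", "type") are illustrative assumptions, standing in for whatever schema your index actually uses:

```python
from collections import Counter

# Hypothetical call-center records as they might sit in a search index;
# the field names are illustrative, not a real schema.
calls = [
    {"store": "Downtown", "type": "complaint"},
    {"store": "Downtown", "type": "inquiry"},
    {"store": "Airport", "type": "complaint"},
    {"store": "Downtown", "type": "complaint"},
    {"store": "Airport", "type": "inquiry"},
]

# The "answer" is an aggregation over many records, not any single document.
complaints_by_store = Counter(
    call["store"] for call in calls if call["type"] == "complaint"
)

top_store, complaint_count = complaints_by_store.most_common(1)[0]
print(top_store, complaint_count)  # -> Downtown 2
```

A dashboard chart is simply a visual rendering of this kind of aggregation, computed by the search engine’s facet or aggregation machinery rather than by application code.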

Pushing information

Incorporating dashboards provides relevant information to a searcher without them having to take any action. Gone are the days when a searcher has to spend time crafting the perfect query before seeing results on the page. Relevant content appears on the home page in the form of dashboard components as soon as a searcher visits the page. This provides value to an organization as an employee spends less time looking for information, and more time acting on the relevant content they have found.

Visually appealing

Instead of presenting a staid page of links, dashboards provide the ability to present an interactive, immersive experience to a searcher. Built on the structured data gleaned from your content, these dashboards let a searcher click through geo-location-based maps, drill in and out of charts and graphs, and explore hot topics via heat maps to surface the information that’s important to them. This benefits the organization by increasing adoption of the search platform. A more appealing experience naturally leads to greater use, which produces informed, productive employees.

Now that we’ve seen the benefits of incorporating dashboards into your search UIs, look for an upcoming blog where I talk about how your organization can get there, and what the key pieces are that you need to have in place to leverage their capabilities.

In the meantime, contact us for any assistance with your search and content needs.

The post Dashboards – The Changing Face of Search appeared first on Enterprise Knowledge.

]]>
Integrating Search and Knowledge Graphs Series Part 1: Displaying Relationships https://enterprise-knowledge.com/integrating-search-and-knowledge-graphs-series-part-1-displaying-relationships/ Mon, 19 Oct 2020 14:21:09 +0000 https://enterprise-knowledge.com/?p=12090 I’ve spent many years helping clients implement enterprise search solutions and am constantly looking for new ways to improve a user’s search experience so that they can find relevant content as well as discover new content they may not have … Continue reading

The post Integrating Search and Knowledge Graphs Series Part 1: Displaying Relationships appeared first on Enterprise Knowledge.

]]>
I’ve spent many years helping clients implement enterprise search solutions and am constantly looking for new ways to improve a user’s search experience so that they can find relevant content as well as discover new content they may not have previously known existed. Recently, EK has spent time designing and building knowledge graphs, which has given me ample opportunity to ponder some practical ways to integrate these two powerful technologies.

In this three-part series, I’ll lead you through a phased approach for integrating knowledge graphs with your current search solution. The series will begin with simple, practical steps to help you get started quickly and conclude with an exploration of more advanced functionalities that truly harness the semantic capabilities provided by a knowledge graph:

  • Part 1:  Showing Relationships – Expose the relationships in your content via search results.
  • Part 2:  Enhance Results – Enhance search results by presenting facts, not just returning documents.
  • Part 3:  Provide Context – Leverage the power of the knowledge graph to provide context to your search and improve relevancy.

What is a Knowledge Graph?

A knowledge graph consists of nodes (entities) and edges (relationships), and provides a means to model your domain and expose the rich relationships found within your content. Imagine a tool that lets you show the people, places, and things that make up your domain and the myriad ways each piece of content is connected to another.

Showing Relationships to Enhance the Search Experience

One of the simplest ways to begin integrating your knowledge graph with search is exposing the relationships between content when displaying search results. This increases the discoverability of your search results page by displaying not only the results that matched your search, but also valuable, connected content. To accomplish this, find the existing relationships stored in your Knowledge Graph and add them as fields to your search records.

A real world example helps illustrate the necessary steps. Take, for instance, the case of a legal publisher whose content consists of a connected web of statutes, regulations, cases, legal guidance, commentary articles, and a variety of other knowledge assets. The publisher’s knowledge graph contains a node for each piece of content with applicable properties like Title, Published Date, Author, and Code section. Further, a series of edges show the varied relationships between the nodes such as:

  • Regulation A derives its authority from Statute B
  • Court Case C involves interpretation of Regulation A
  • Article E contains references to Court Case C
  • Legal Guidance G was mentioned in Article E

Knowledge Graph connecting concepts
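The publisher’s graph above can be represented minimally as a set of triples; the entity and relation names here simply mirror the prose, and a real implementation would use a graph database rather than an in-memory set:

```python
# A deliberately tiny, illustrative triple set for the publisher example.
triples = {
    ("Regulation A", "derives_authority_from", "Statute B"),
    ("Court Case C", "interprets", "Regulation A"),
    ("Article E", "references", "Court Case C"),
    ("Legal Guidance G", "mentioned_in", "Article E"),
}

def neighbors(entity):
    """Everything directly connected to an entity, in either direction."""
    outgoing = {(p, o) for s, p, o in triples if s == entity}
    incoming = {(p, s) for s, p, o in triples if o == entity}
    return outgoing | incoming

# Content connected to Regulation A: the statute it derives from
# and the court case that interprets it.
print(sorted(neighbors("Regulation A")))
```

Traversals like `neighbors` are exactly what graph query languages (SPARQL, Cypher, and the like) do efficiently at scale.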

Prioritizing Which Relationships Enhance Search

The first step in the integration process is to design the look of the individual search results – that is, deciding which relationships to show for each result. While it’s desirable to show more detail than just the document title as a link, screen space is limited, so don’t show all possible relationships. Instead, focus on those that convey additional context or meaning to the user: the items that would lead them to follow these links and discover new content.

At EK, we are big proponents of “action-oriented” search, which emphasizes understanding what action a user wants to take with the search results. In an Action-Oriented Search Workshop, we design how the result should appear on the page.

A common scenario is a user wanting to see documents related to the result they’re currently viewing. Referring to the example from the previous section, a user viewing Regulation A as a result may very well want to see related statutes or court cases that reference “Regulation A.” Instead of presenting a static list of links, the search results can easily be enhanced to display this related content, giving the user the ability to discover and investigate information they may not have known existed.

search results example

Integrating Relationships Into the Search Engine

Now that we know which relationships we want to display, the next step involves integrating the relationships from our knowledge graph into our search engine. Since we’re only using the relationships for display purposes at this point, all we need to do is store these as additional fields in our search record.

At this point, there are two approaches to integrating the content in your knowledge graph into your search application. One method is to add logic to the search application code that queries the knowledge graph for each search result to obtain the required information. However, this requires a call to a second system (the knowledge graph), which is not as performant as a highly tuned search engine. The extra call can degrade search performance, which easily leads to user frustration as they wait for the results to render.

The preferred approach is to integrate content from the knowledge graph during the indexing process. Indexing a record involves accessing a piece of content from a source system, processing it into a format that the search engine understands, and then pushing it to the search index for querying. The component that handles this processing is called a “connector.” Normally, we develop our own connectors using custom code to ensure they meet the exact needs of our clients. 

In this approach, the connector code would not only query the source system for content, but also query the knowledge graph to obtain the relevant relationships for that piece of content. The two sets of information would then be merged in order to add those relationships as fields on the current record. Further, the indexing process is offline, so it’s acceptable if a record takes a little longer to reach the search engine. Once indexed, the search engine will return results at blazing speed.
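The merge step at the heart of the connector can be sketched as follows. The two fetch functions are hypothetical stand-ins: in a real connector they would call your source system and your graph database (e.g. via SPARQL or Cypher), and the field names are illustrative:

```python
def fetch_content(doc_id):
    # Stand-in for retrieving a document from the source system (CMS, file share, etc.).
    return {"id": doc_id, "title": "Section 23.2(c)(ii)", "body": "..."}

def fetch_relationships(doc_id):
    # Stand-in for querying the knowledge graph for this document's relationships.
    return {"derives_from": ["Statute 123.456"]}

def build_search_record(doc_id):
    """Merge source content and graph relationships into one indexable record."""
    record = fetch_content(doc_id)
    record.update(fetch_relationships(doc_id))  # relationships become plain fields
    return record

record = build_search_record("reg-23-2-c-ii")
# The record now carries its relationships as ordinary search fields,
# so rendering them at query time requires no second system call.
```

Because this enrichment happens at indexing time, the query-time path stays a single call to the search engine.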

For example, let’s say the current record was a specific regulation – “Section 23.2(c)(ii).” By querying the knowledge graph for this particular section, you discover the following relationships exist:

  • “Regulation 23.2(c)(ii)” derives from “Statute 123.456.”
  • The article “Common Issues related to New Requirements” mentions “Regulation 23.2(c)(ii).”
  • The court case “Wyatt v. Hilger, March 7, 2020” interprets “Regulation 23.2(c)(ii).”

json code sample
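The enriched record for that regulation, as it might sit in the search index, could look something like the following JSON document; the field names are illustrative, not a fixed schema:

```json
{
  "id": "reg-23-2-c-ii",
  "title": "Section 23.2(c)(ii)",
  "type": "Regulation",
  "derives_from": ["Statute 123.456"],
  "mentioned_in": ["Common Issues related to New Requirements"],
  "interpreted_by": ["Wyatt v. Hilger, March 7, 2020"]
}
```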

Storing these relationships in the search index allows us to render a much richer result type to the user.

search index example

Summary

Leveraging stored relationships for display alongside your search results is a simple, effective first step to integrating your knowledge graph with your search application. The relationships from your knowledge graph are captured as additional fields in your search records to render intuitive, action-oriented results.

In summary, follow these steps:

  • Determine which relationships add the most value to search;
  • Design the search UX to display those relationships; and
  • Integrate the relationships into the search engine.

In the next part of the series, we’ll investigate how you can enhance your search results page even further by combining facts stored in the Knowledge Graph with the documents stored in your search engine.

In the meantime, contact us for any assistance with your search and knowledge graph needs.

The post Integrating Search and Knowledge Graphs Series Part 1: Displaying Relationships appeared first on Enterprise Knowledge.

]]>
A Knowledge Graph Feast https://enterprise-knowledge.com/a-knowledge-graph-feast/ Wed, 21 Nov 2018 00:38:56 +0000 https://enterprise-knowledge.com/?p=7943 As Thanksgiving nears, many of us will be scouring recipe books looking for the perfect meal to prepare for family or friends. Visions of golden, brown turkey, creamy mashed potatoes, and savory stuffing (along with a challenge from one of … Continue reading

The post A Knowledge Graph Feast appeared first on Enterprise Knowledge.

]]>
As Thanksgiving nears, many of us will be scouring recipe books looking for the perfect meal to prepare for family or friends. Visions of golden-brown turkey, creamy mashed potatoes, and savory stuffing (along with a challenge from one of our principals) gave me an interesting idea – concoct the perfect Knowledge Graph recipe.

Enterprise Knowledge Graphs are becoming extremely popular as many organizations look to introduce them into their information ecosystems. Combined with some commonly used tools, it’s possible to develop a solution that satisfies the hunger pangs of even the most mature organizations by addressing pain points like poor findability and irrelevant or outdated content.

Recipe for Knowledge Graph Feast

As with any good recipe, we need a list of ingredients from which to create our final product. What are the individual parts that make up the whole feast?

There are four key components to a successful Knowledge Graph implementation:

Ingredients

1 Content Management System (CMS)

1 Taxonomy Management Tool

1 Search Engine

1 Knowledge Graph

Step 1 – Leverage a Content Management System

Your CMS should be more than just a repository for Word documents, PDFs, and other pieces of content. Enhance your content with the perfect complement – metadata. Metadata adds color to your content – an author, a created date, a subject, a description. Further, this metadata builds the foundation for functionality, which you can expose later on in your search engine and Knowledge Graph.

Additionally, make good use of Content Types which provide a means to logically organize your content by applying metadata. My colleague offered a very helpful explanation of what they are and why you need to use them.

Step 2 – Use a Taxonomy Management Tool

Creating good metadata, as suggested above, requires a pre-existing enterprise taxonomy to normalize the terms you use to describe your content. There are many tools on the market that allow you to easily store and manage your taxonomy in a centralized location. Thus, you can author your taxonomy in one place, yet leverage it in many places by integrating with other systems and applications.

Further, many of these tools provide you the ability to introduce artificial intelligence (AI) into your enterprise through auto-tagging and categorization capabilities. Tags can be simply and effectively applied to your content via these techniques.
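At its simplest, auto-tagging matches taxonomy terms (and their synonyms) against a piece of content. The sketch below is a deliberately naive illustration of that idea – the taxonomy entries are made up, and commercial tools layer far more sophisticated NLP and categorization rules on top of this:

```python
# Minimal illustrative taxonomy: preferred term -> matching phrases.
taxonomy = {
    "tuition": {"tuition", "tuition assistance"},
    "professional development": {"professional development", "training"},
}

def auto_tag(text):
    """Return the taxonomy terms whose phrases appear in the text."""
    text = text.lower()
    return {term for term, phrases in taxonomy.items()
            if any(phrase in text for phrase in phrases)}

tags = auto_tag("Our tuition assistance program supports ongoing training.")
print(sorted(tags))  # -> ['professional development', 'tuition']
```

Even this crude matching shows why a centralized taxonomy matters: the tags applied to content stay consistent with the controlled vocabulary, wherever the tagging runs.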

Step 3 – Implement a Search Engine

A search engine is a key part of the recipe to capture and expose your company’s content, both structured and unstructured. Properly designed and configured, a search engine can ingest content from any repository – turning siloed data into findable, actionable information.

A search engine provides a single application where users can find any information, regardless of the original source.

  • Policies and procedures in Word documents or PDFs sitting on a file system
  • Blogs and news items in your web Content Management Systems
  • Reports aggregated from your database system

Step 4 – Build Your Knowledge Graph

With a CMS, taxonomy management tool, and search engine in place, it’s time to add a semantic flavor to the feast. A Knowledge Graph lets you model your domain and expose it to a wider audience. Most importantly, it exposes the connections and relationships that exist between the entities that comprise your organization. From a technical perspective, it can traverse these relationships and find new ones quickly and efficiently.

As an example, you can store information on your employees (Employee Name, Employee ID, Department, Location) as well as information on your projects (Project Name, Project ID, Description, and Subject) all in one centralized location – one of the key capabilities of a knowledge graph. Further, from those connections, you can gain valuable insights, such as: Employee A is likely proficient in Subject B, since many of their projects were tagged with Subject B.
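That kind of inference – spotting likely expertise from project tags – boils down to counting subjects across the projects an employee is connected to. A minimal sketch, using made-up project data in plain Python rather than a real graph store:

```python
from collections import Counter

# Illustrative data: projects tagged with a subject and staffed by employees.
projects = [
    {"name": "Alpha", "subject": "Search", "staff": ["Employee A"]},
    {"name": "Beta", "subject": "Search", "staff": ["Employee A", "Employee B"]},
    {"name": "Gamma", "subject": "Taxonomy", "staff": ["Employee B"]},
]

def subjects_for(employee):
    """Count how often an employee's projects were tagged with each subject."""
    return Counter(p["subject"] for p in projects if employee in p["staff"])

# Employee A worked on two Search-tagged projects - a hint of proficiency in Search.
print(subjects_for("Employee A"))
```

In a real knowledge graph, the same question is a short traversal from an employee node through its project edges to the projects’ subject tags.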

The Finished Product

Like every good recipe, we can conclude with a sneak peek at the finished product. How do the individual pieces discussed previously fit together?

Imagine this Search Results page on your organization’s intranet.

Wireframe of a mock company intranet

Further, suppose an employee is looking to expand their professional development and wants to find information on your tuition benefit policy. If you followed the recipe, you would have done things like:

  • Created a Company Policy content type that includes metadata like title, description, and topic, and populated it with your company policies, like tuition assistance.
  • Added relevant terms to your business taxonomy which describe your company’s content like “tuition”, “professional development”, “FAQ”, and “article.”
  • Provided a single application for easily retrieving relevant content by indexing a vast array of content like FAQs and company news articles into a search engine.
  • Created a Knowledge Graph and populated it with the “things” that comprise your organization – employees, projects, policies, etc. – along with the connections between them.

Thus, when an employee performs a search on the company intranet, an information feast appears.

  • Search results containing relevant items like FAQs and company news articles, indexed and served from the search engine.
  • Facets (left-hand side) powered by the search engine but managed in the taxonomy and exposed to the search engine via the CMS, which allow employees to filter on the content that meets their needs.
  • Relevant concepts (right-hand side) returned from querying the Knowledge Graph which allows employees to quickly and easily take action.

As with preparing any fine meal, crafting the perfect Knowledge Graph solution requires great care and effort. But with the proper guidance, the daunting becomes surmountable. And the finished meal is well worth it. So learn more and contact us – we’re here to help.

The post A Knowledge Graph Feast appeared first on Enterprise Knowledge.

]]>
Improving Findability using Content Deconstruction https://enterprise-knowledge.com/improving-findability-using-content-deconstruction/ Tue, 10 Apr 2018 15:22:35 +0000 https://enterprise-knowledge.com/?p=7284 Lengthy documents spanning tens or hundreds of pages are commonplace in today’s enterprises. However, they provide a host of issues when trying to integrate into a company’s intranet. These issues can be solved by the use of content deconstruction - breaking large documents into more granular, manageable units of content - for improved tagging and findability. Continue reading

The post Improving Findability using Content Deconstruction appeared first on Enterprise Knowledge.

]]>
At EK, one of our core tenets when considering content strategy, content architecture, and broader content management is the definition of “chewable chunks” of content. Our approach is to break content into smaller, more manageable pieces, both for easier management and for improved findability and usability. We refer to this process as “content deconstruction.”

The wide use of Microsoft Office has made it easy to create long, voluminous documents that can be stored as a PDF and published to your site. But think about how that document would be surfaced on your website or intranet. As an example, let’s take a document which is very common in all organizations – the employee handbook. Our EK handbook is 46 pages, and we’re not a large company.

Let’s suppose I’m interested in finding out more information about EK’s tuition assistance program – hmm, nice perk! My first step would be to search our intranet for a keyword like “tuition”. As expected, one of the top hits (if the relevance of the search results is tuned correctly) is the handbook.

The link in the results page opens the PDF, but not at a location that reinforces my search. I’m sitting and staring at the cover page of the handbook. Unfortunately, there’s nothing about tuition on the cover page. Therefore, I must either page down to the table of contents looking for information on tuition or search within the document for “tuition” until I reach the relevant section.

Our recommended approach would leverage your existing content management system to deconstruct the handbook into a series of smaller, more manageable records as opposed to a single lengthy document. Instead of surfacing the whole document, I now have the ability to display the individual sections on their own.

With these smaller, individual sections of content, I could apply tags – “tuition,” “professional development” – in a more granular, directed manner. Further, the links on my search results page will now take me directly to the relevant section of the handbook – “Tuition and Training Assistance” instead of opening a PDF. Finally, I have created a series of content that can be used similarly to Google’s featured snippets – blocks of relevant content that appear at the top of the search results page.
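The core move – splitting one long document into section-level records by its headings – can be sketched as follows. The heading convention, the sample handbook text, and the field names are illustrative assumptions; a real pipeline would read from your CMS or document export rather than a string literal:

```python
import re

# Toy handbook excerpt; real content would come from the source system.
handbook = """\
## Holidays
EK observes ten paid holidays.
## Tuition and Training Assistance
Employees may request tuition reimbursement.
"""

def deconstruct(text):
    """Split a document into one record per '## '-style section heading."""
    sections = re.split(r"^## ", text, flags=re.MULTILINE)
    records = []
    for section in sections:
        if not section.strip():
            continue  # skip the empty chunk before the first heading
        heading, _, body = section.partition("\n")
        records.append({"title": heading.strip(), "body": body.strip()})
    return records

records = deconstruct(handbook)
print([r["title"] for r in records])
# -> ['Holidays', 'Tuition and Training Assistance']
```

Each resulting record can then be tagged and indexed on its own, so a search for “tuition” lands directly on the “Tuition and Training Assistance” section rather than the whole PDF.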

At EK, we recommend looking for several key characteristics to identify candidate documents for deconstruction. Characteristics to consider are:

  • Length. The document should be 20 or more pages.
  • Structured, hierarchical. The document should have some structure or hierarchy that can easily be captured – as simple as section headings, or as complex as a formal outline.
  • Relevant. The document should be high-value. Transforming a document takes time and effort, and you probably won’t be able to transform them all. Use analytics or your own knowledge of your content to pick good candidates.
  • Multiple audiences. Consider documents that have been written for multiple audiences. Often, we see content for a specific audience buried deep within a larger, broader document, forcing interested readers to wade through pages of material to find what applies to them.

Overall for the organization, this translates to:

  • Increased productivity. Smaller documents lead to more precise tagging, which means workers spend less time looking for content. Instead of adding a wide variety of tags to a single document, you can add a single tag to the specific content where the topic is discussed.
  • Improved findability. The relevancy algorithms underlying most modern search engines work best when dealing with documents of similar size; lengthy documents can skew the results. Users will spend less time looking for the right content and more time acting upon it.
  • Enhanced usability. It’s much cleaner to display a few paragraphs within the context of an intranet than a large PDF, which requires a viewer or takes the user out of the site by opening a separate application. With deconstructed content chunks, users spend less time sifting through documents and more time getting answers.

Overall, replacing large, unwieldy documents with a series of smaller, flexible sections leads to more efficient content management and improved productivity, findability, and usability for the organization. In short, this means time and cost saved for the organization and better support, upskilling, and learning for your end users.

Looking for help getting started? EK is here to help you begin the deconstruction process! Contact us.

The post Improving Findability using Content Deconstruction appeared first on Enterprise Knowledge.

]]>