Technology Solution Articles - Enterprise Knowledge

How to Inject Organizational Knowledge in AI: 3 Proven Strategies to Achieve Knowledge Intelligence
Generative AI (GenAI) has made Artificial Intelligence (AI) more accessible to businesses, specifically by empowering organizations to leverage large language models (LLMs) for a wide range of applications. From enhancing customer support to automating content creation and operational processes, investment in AI has surged in the past two years, primarily through proofs of concept (POCs) and pilot projects.

For many organizations, however, these efforts have failed to yield the anticipated results proportional to their investments. According to Gartner’s recently published “Hype Cycle for Artificial Intelligence, 2024”, AI has entered the “trough of disillusionment.” 

We’re witnessing this firsthand as organizations hire EK to address AI projects that have stalled due to content and data challenges. Many are still grappling with how to ensure the quality and diversity of their AI products, with the biggest hurdle being the lack of institutional and domain knowledge that AI requires to deliver meaningful results tailored to a specific organization.

Various inputs can be included in an AI solution, including experts, structured data, and unstructured data.

The reality is that algorithms trained at one company or on public data sets may not work well on organization- and domain-specific problems, especially in domains where industry-specific preferences are relevant. Thus, organizational knowledge is a prerequisite for success, not just for generative AI, but for all applications of enterprise AI and data science solutions. This is where experience and best practices in knowledge and data management lend the AI space proven approaches for sharing domain and institutional knowledge effectively. Below, I have picked the top three ways we have been working to embed domain and organizational knowledge and empower enterprise AI solutions.


1. Semantic Layer

Without context, raw data or text is often messy, outdated, redundant, and unstructured, making it difficult for AI algorithms to extract meaningful insights. The key step in addressing this problem is the ability to connect all types of organizational knowledge assets: shared language, experts, related data, content, videos, best practices, and operational insights from across the organization. In other words, to fully benefit from an organization’s knowledge and information, both structured and unstructured information, as well as expert knowledge, must be represented in a form machines can understand.

A semantic layer provides AI with a programmatic framework to make organizational context, content, and domain knowledge machine readable. Techniques such as data labeling, taxonomy development, business glossaries, and ontology and knowledge graph creation make up the semantic layer and facilitate this process.


How a Semantic Layer Provides Context for Structured Data

  • Contextual Metadata & Business Glossaries: A semantic layer provides a framework to add contextual metadata and glossaries to structured data through definitions, usage examples, data lineage, category tags, etc. This enrichment helps analytics and AI teams understand organizational nomenclature, signals the relative importance of one dataset compared to another, and improves their ability to align on using the right data for their analytics, metrics, and AI models.
  • Hierarchical Structures (Taxonomies): Implementing hierarchical structures within the semantic layer allows for the categorization and sub-categorization of data. This structure helps AI models identify broader/narrower relationships and dependencies within the data, making it easier for AI algorithms to derive and understand organizational frameworks. For instance, product or service categories allow AI models to analyze and understand relationships between data points in those domains and recommend similar or related data or services. This allows data teams to understand and incorporate implicit business concepts for AI, as well as discover new information they would not otherwise have looked for or known existed.
  • Encoding Business Logic (Ontologies): Using standardized data modeling schemas, such as ontologies, a semantic layer allows for programmatically applying business rules and logic that govern data relationships, entities, and constraints. By incorporating this logic, AI models gain a deeper understanding of the operational context in which the data exists, leading to more relevant and actionable insights. For example, our clients in the pharmaceutical industry use ontologies to explicitly define their domains and connect information about drugs, diseases, and biological pathways. This enables AI models to identify potential drug targets, predict drug interactions or adverse effects, and accelerate their drug discovery process.
  • Data Aggregation & Semantic Mapping (Knowledge Graphs): A knowledge graph aggregates data from multiple structured sources (like databases, data warehouses, CRM systems, etc.) into a unified view without the need to physically move or migrate data. In so doing, it provides a comprehensive view of organizational knowledge assets for AI models, enabling AI to draw knowledge and insights from broader sources. Furthermore, knowledge graphs allow organizations to create semantic mappings between different data schemas (e.g., mapping “customer ID” in one system to “client ID” in another), helping AI understand the meaning and relationships of data fields and ensuring models interpret data consistently while normalizing data quality across sources (see the sketch below).
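To make the semantic mapping idea concrete, here is a minimal sketch using Python and rdflib. It is illustrative only: the namespaces, the shared customerIdentifier property, and the CRM/ERP field names are assumptions, not any particular client’s schema.

```python
# A minimal sketch using rdflib (pip install rdflib). All namespaces and
# property names below are illustrative assumptions, not a real schema.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("https://example.org/ontology/")
CRM = Namespace("https://example.org/crm/")
ERP = Namespace("https://example.org/erp/")

g = Graph()

# Declare one shared business concept in the enterprise ontology.
g.add((EX.customerIdentifier, RDF.type, OWL.DatatypeProperty))
g.add((EX.customerIdentifier, RDFS.label, Literal("Customer Identifier")))

# Map "customer ID" in the CRM and "client ID" in the ERP to that concept.
g.add((CRM.customer_id, OWL.equivalentProperty, EX.customerIdentifier))
g.add((ERP.client_id, OWL.equivalentProperty, EX.customerIdentifier))

print(g.serialize(format="turtle"))
```

With these equivalences in the graph, a reasoner or query layer can treat both source fields as the same business concept without moving any data.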


How a Semantic Layer Extracts Knowledge from Unstructured Content 

  • Natural Language Processing (NLP): Natural language models help analyze unstructured text to identify entities, concepts, and relationships across a large corpus of unstructured content, extracting key phrases or sentiments from documents and enabling AI to understand context and meaning. For instance, at a global policy research institute, we are leveraging LLMs for NLP by monitoring industry news and social media and extracting information about news trends to build taxonomy structures and inform a recommendation engine that provides targeted policy recommendations.
  • Named Entity Recognition (NER) and Classification: Organizations are augmenting knowledge model development by automatically identifying and classifying entities such as people, places, and things within unstructured content to create enterprise taxonomies and knowledge graphs. For example, by extracting and classifying entities like patient names, diagnoses, medications, and procedures from unstructured medical records, healthcare providers are connecting and applying the knowledge associated with these entities for clinical research and improving patient care outcomes. The structured representation of entities within text allows AI to leverage information for more precise responses and analysis.
  • Taxonomy, Ontology, and Graph Construction: By defining relationships between concepts, domain-specific ontologies organize unstructured content into a structured framework, enabling AI solutions to understand and infer knowledge from the content more effectively. A semantic layer can also build knowledge graphs from unstructured data, linking entities (extracted using NLP/NER) and their attributes. This map-like representation of information helps AI systems navigate complex relationships and generate insights by making explicit how the data is interconnected (see the sketch below).
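As a simple illustration of extracting entities from unstructured text and turning them into graph-ready triples, here is a minimal sketch assuming spaCy and its small English model are installed. The sample sentence, document ID, and “mentions” predicate are illustrative; generic models like this one will miss domain-specific entities (e.g., drug names), which is why organizations often train custom NER models.

```python
# A minimal sketch assuming spaCy is installed along with its small English
# model (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

text = (
    "Dr. Alice Rivera treated a patient at Mercy General Hospital "
    "in Sacramento on Tuesday."
)

doc = nlp(text)

# Turn each recognized entity into a (subject, predicate, object) triple
# that links it back to the source document.
triples = [("doc-001", "mentions", (ent.text, ent.label_)) for ent in doc.ents]

for triple in triples:
    print(triple)
```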


2. Domain Knowledge Capture and Expertise 

AI systems need to learn from explicit content and data as well as the insights and intuition of human experts. This is where knowledge management (KM) and AI are becoming increasingly intertwined. 

On the one hand, the traditional challenges of capturing, sharing, and transferring knowledge are becoming more pronounced, as many organizations struggle with a retiring workforce, high turnover rates, slow upskilling processes, and the limitations of AI systems that often fall short of expectations. Knowledge capture and transfer are becoming even more integral for organizations to enable knowledge flow among experts and to capture, disseminate, and use their institutional knowledge.

On the other hand, the expanding landscape of AI is opening up new possibilities for KM, especially in automating knowledge capture and transfer approaches. For instance, in many of our projects, experienced domain experts and AI engineers are collaborating to define rules and heuristics that reflect organizational wisdom – by creating decision trees, developing rule-based solutions, or using NLP and semantics to extract and infer expert knowledge from documents and conversations. Specific approaches that enable this process include: 

  • Mining Expert Libraries: Mining repositories of case studies and use cases that illustrate domain expertise in action, including extracting knowledge from images and videos, builds a structured repository of facts and relationships and enables AI to learn from real-world applications and scenarios.
  • Expert Annotations: Engaging subject matter experts to annotate datasets with contextual information and business logic or operational concepts (using metadata, taxonomies, ontologies) enhances understanding for AI models by making tacit knowledge explicit. 
  • Automated Knowledge Capture: Advanced applications of the expert annotation approach also include using AI-powered knowledge discovery tools that automatically analyze and extract knowledge from text or voice using NLP techniques, augmenting the development of knowledge graphs. This approach surfaces knowledge that is tacit in the relationships between pieces of content so it can be systematically provided for AI training.
  • Embedded Feedback Loops: To ensure alignment with domain knowledge, KM/AI solutions should incorporate a feedback loop. This involves giving domain experts an embedded process and tools to review AI outputs and provide corrections or enhancements. That feedback can then be used to refine models based on real-world applications and organizational changes (a minimal sketch follows this list).
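To illustrate what an embedded feedback loop might capture, here is a minimal sketch using only the Python standard library. The field names, the review queue, and the sample correction are all illustrative assumptions; in practice the record would feed a fine-tuning pipeline or a corrected entry in the knowledge base.

```python
# A minimal sketch of a feedback record; all field names and sample values
# are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExpertFeedback:
    query: str              # what the user asked
    model_output: str       # what the AI returned
    expert_correction: str  # the domain expert's corrected answer
    reviewer: str           # who reviewed it
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

review_queue: list[ExpertFeedback] = []

# A domain expert flags an outdated answer for later model refinement.
review_queue.append(ExpertFeedback(
    query="What is our current travel reimbursement rate?",
    model_output="$0.56 per mile",
    expert_correction="$0.67 per mile, effective January 2024",
    reviewer="finance-sme",
))
```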


3. Retrieval Augmented Generation (RAG) 

Retrieval Augmented Generation (RAG) is an architecture for providing LLMs with relevant, current organizational information and knowledge. This matters because LLMs trained on outdated content have produced decision-making mistakes with real consequences for an organization’s bottom line.

Several RAG architectures are used for domain-specific knowledge transfer within the organizations we work with. The table below compares these approaches and the scenarios where each is most effective.

[Table: comparison of RAG approaches and ideal scenarios for their use]

Many organizations are seeing better results from employing hybrid approaches that cater to specific use cases and solutions. For example, one of the top tax and financial services firms we are working with is leveraging “semantic routing” techniques to respond with the most accurate and specific information in its search solutions: evaluating each user’s query and determining the best route among these approaches to fetch, combine, and deliver a response (see the sketch below).
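Here is a minimal sketch of the semantic routing idea, assuming the sentence-transformers library is installed. The route names, example utterances, and model choice are illustrative assumptions; in a real solution each route would dispatch to a different retrieval back end.

```python
# A minimal sketch assuming sentence-transformers is installed; route names
# and example utterances are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Each route is described by a few representative utterances.
routes = {
    "structured_lookup": ["What is the standard mileage rate?"],
    "document_search": ["Find the latest audit methodology guide."],
    "expert_referral": ["Who handles international tax questions?"],
}

route_embeddings = {
    name: model.encode(examples, convert_to_tensor=True)
    for name, examples in routes.items()
}

def route_query(query: str) -> str:
    """Return the route whose examples are most similar to the query."""
    q = model.encode(query, convert_to_tensor=True)
    return max(
        route_embeddings,
        key=lambda name: util.cos_sim(q, route_embeddings[name]).max().item(),
    )

# The chosen route then determines which back end fetches the answer.
print(route_query("When are quarterly estimated taxes due?"))
```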

Institutional or domain knowledge provides the specific context in which a given AI model will be applied within the enterprise.


Conclusion

Successfully injecting organizational knowledge into AI is not just a technical challenge but also a strategic organizational decision that requires a shift in mindset and collaboration across knowledge, content, data, and AI teams and solutions. 

  • Organizational experts know how to interpret data and how to handle missing values and outliers. They are crucial for identifying relevant data sources, interpreting information with contextual nuance, and helping address data quality and bias issues.
  • KM and content teams need AI literacy for effective collaboration – to provide expertise in knowledge retrieval and ensure content readiness and quality for AI solutions. 
  • Data and AI teams need to have a deep understanding of the organization’s domain knowledge, business objectives, and access to reliable data regardless of its type and location. 

The semantic layer and the knowledge extraction/application approaches discussed above facilitate this integration and ensure that AI can operate not just as a tool, but as an intelligent organizational partner that understands the unique nuances of an organization and enables knowledge intelligence.

Is your AI initiative stalled? Does it lack the knowledge necessary to make it trustworthy and valuable? Contact us to learn how to put your knowledge at the center of your AI.

Consolidation in the Semantic Software Industry
As a technology SME in the KM space, I am excited about the changes happening in the semantic software industry. Just two years ago, in my book, I provided a complete analysis of the leading providers of taxonomy and ontology management systems, as well as graph providers, auto-tagging systems, and more. While the software products I evaluated are still around, most of them have new owners. The amount of change that has happened in just two years is incredible.

We recognized the importance of these products early on at EK. Enterprise-scale knowledge management cannot work without technology solutions that capture and align information and make it discoverable. We partnered with organizations like The Semantic Web Company, Synaptica, OntoText, TopQuadrant, and Neo4j to help some of the world’s most well-known companies solve some of the most complex knowledge management problems.

Now the rest of the industry is realizing the importance of semantics and the semantic layer. Well-funded software companies are acquiring many of the independent software vendors in this space so that they can offer a more comprehensive semantic layer solution to their customers. MarkLogic, an enterprise NoSQL database, bought Semaphore (a taxonomy/ontology management platform) and both were later acquired by Progress Software. Squirro (an Artificial Intelligence (AI) enabled search platform) bought Synaptica (taxonomy/ontology management software) and has also purchased meetsynthia.ai (prompt engineering solution for AI). Fluree (a graph data management platform) bought Mondeca (a taxonomy/ontology management software). In this same timeframe, Samsung Electronics acquired Oxford Semantic (an RDF graph database). In each case, these vendors are looking to offer their customers a single integrated solution for the semantic layer.

Examples of various semantic solution vendors being consolidated - including Semaphore, MarkLogic, Synaptica, and many others.

Organizations that purchased these products suddenly face new risks to their software investment. The new product owner could choose to take the software in a different direction that does not support your use case. You could have new points of contact who do not understand your organization’s needs. Out of caution, senior leadership in your organization may want to minimize further investment in a tool whose vendor was just acquired. While less likely in this case, the vendor could also choose to wind down support for the product. On a more positive note, these larger, better-funded companies might add compelling new features, or they might integrate the product with their existing tools to provide a comprehensive solution you would otherwise have had to assemble yourself.

There are two primary drivers behind all of this industry change. The first is the explosion of generative AI. Companies are trying to implement generative AI projects, and they are failing. According to Gartner, 85% of AI projects fail. Data quality and a proper understanding of AI tools and capabilities are two of the most common causes of these failures. The semantic tools we are talking about address these issues. Taxonomy and ontology management systems along with graph databases organize and curate information so that data quality problems are minimized. In addition, many of these tools are now offering frameworks for generative AI solutions. Think of them as a configurable engine on which generative AI solutions can be built. Every software vendor is looking for a way to provide AI solutions to their customers. Our favorite semantic software tools are being bought up so that the vendors can provide a single integrated AI solution to their clients.

The second major driver is the semantic layer. As data continues to grow exponentially, the need to map data in a way that the business understands has become even more critical. One of our retail clients had 12 different point-of-sale systems. Answering a simple question like “What is an average sales transaction?” was incredibly complicated. A knowledge graph can map each of these data elements to a transaction and a transaction amount in a machine-readable way. Business leaders can ask for this information in a way that makes sense to them, and the knowledge graph automatically generates the answers from the data where it sits. As more organizations understand the power of a semantic layer, the need for semantic tools continues to grow. Data vendors see this opportunity and are purchasing semantic tools that they can integrate into their current solution stacks.
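As a minimal sketch of that retail example, here is how a knowledge graph can answer the average-transaction question across sources with one query, using Python and rdflib. The namespace, Transaction class, amount property, and sample values are all illustrative assumptions, not the client’s actual model.

```python
# A minimal sketch using rdflib; the class, property, and sample values
# are illustrative.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("https://example.org/retail/")
g = Graph()

# Transactions mapped into the graph from two of the point-of-sale systems.
for txn_id, amount in [("pos1-t1", 19.99), ("pos2-t1", 42.50), ("pos2-t2", 7.25)]:
    txn = EX[txn_id]
    g.add((txn, RDF.type, EX.Transaction))
    g.add((txn, EX.amount, Literal(amount, datatype=XSD.decimal)))

# One query answers "What is an average sales transaction?" across sources.
results = g.query("""
    PREFIX ex: <https://example.org/retail/>
    SELECT (AVG(?amt) AS ?avgSale)
    WHERE { ?t a ex:Transaction ; ex:amount ?amt . }
""")

for row in results:
    print(row.avgSale)
```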

Given all of the momentum in this area, we will continue to see more acquisitions of semantic software solutions. Our team at EK is watching this industry closely to guide our customers so that they have the best vendors with the best solutions during this time of change. If you are thinking about purchasing a semantic software product, we have a proprietary matrix of semantic solutions, developed over 10 years, with over 200 requirements for semantic capabilities. If you are concerned because your product was purchased, we know all of the players in the industry and can guide you to the best possible path forward. Contact us so that we can give you the right guidance both now and in the long term.

Is ChatGPT Ready for the Enterprise?
Recently, we were visiting a client to show the latest version of our application when a participant asked, “Why aren’t we using ChatGPT?” It was a good and logical question, given the attention that ChatGPT and other AI-based solutions are warranting these days. While these tools, built using complex machine-learning components like large language models (LLMs) and neural networks, offer much promise, their implementation in today’s enterprise should be weighed carefully.

ChatGPT and similar AI-powered solutions have created quite a buzz in the industry, and rightfully so. It really is impressive what they can currently do, and they offer much future promise. Since those of us in the technology world have been inundated with questions and remarkable tales about ChatGPT and similar tools, I took it upon myself to do a little experiment.

The Experiment

As a die-hard Cubs fan, I hopped over to the ChatGPT site and asked: “Which Cubs players have won MVPs?”

It provided a list of names that, on the surface, appeared correct. However, a few minutes spent on Google confirmed that one answer was factually wrong, as were some of the supporting facts about correctly identified players.

Impressively, a subsequent question, “Are there any others?”, provided another seemingly accurate list of results. ChatGPT remembered the context of my first query and answered appropriately. Despite this, further investigation confirmed that, once again, not all of the information returned was correct.

As this tiny sample shows, any organization needs to tread carefully when considering implementing ChatGPT and other AI-powered solutions in their current form. It’s quite possible that they create more problems than they solve.

Here is a list of the top issues to consider before embarking on an AI-based search solution like ChatGPT.

Accuracy Issues

For all their potential, their current implementations are haunted by one fact – they can return blatantly false information. As shown above, a sizable portion of the answers were wrong, especially during the follow-up question. Unfortunately, this is a common experience.

Further, there is no reference information returned with the result. This produces more questions than it does answers. What is the “source of truth” for the query response? What authoritative document states this information that can be referenced and verified?

Granted, when you perform a search on a traditional keyword search engine, you can sometimes get nefarious, outdated, or incorrect results. Still, these search engines are not selling the promise that they’re returning the single, definitive answer to your question. You are presented with a list to sift through and make your final decision on what is relevant to your particular needs.

While it’s entertaining to ask ChatGPT to recite Hamlet’s most famous line in a pirate’s voice, would you really want to base an important business decision on feedback that is often inaccurate and unverifiable? All it takes is being burnt by one wrong answer for your users to lose faith in the system.

Complexity and Expense

I like to joke with my clients that we can build any solution quickly, cheaply, and impressively, but that they have to pick two of the three. With an AI-based solution like ChatGPT, you may only get to pick one. Implementing an AI solution is inherently complex and expensive, and there is no “point and click, out of the box” option. Relevant tasks to prepare AI for the enterprise include:

  • Designing and planning for both hardware and software,
  • Collecting relevant and accurate data to feed into the system,
  • Building relevant models and training them about your domain-specific knowledge,
  • Developing a user interface,
  • Testing and analyzing your results, then iterating, perhaps multiple times, to make improvements; and,
  • Operationalizing the system into your existing infrastructure, including data integration, support, and monitoring.

Additionally, projects like these require developers with niche, advanced skills. It’s difficult enough finding experienced developers to implement basic keyword search solutions, let alone advanced AI logic. Those who can successfully build these AI-based solutions are few and far between, and in software development, the time of highly skilled developers comes at a significant cost.

Lack of Explainability

AI-based solutions like ChatGPT tend to be “black box” solutions, meaning that, although powerful, the logic they use to return results is virtually impossible to explain to a user, if it’s even available.

With traditional search engines, the scoring algorithms to rank results are easier to understand. A developer can compare the scores between documents in the result set and quickly understand why one appears higher than the other. Most importantly, this process can be explained to the end user, and adjustments to the scoring can be made easily through search relevancy tuning.
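As a minimal illustration of that transparency, here is a sketch using the Elasticsearch Python client’s explain API, which breaks a document’s relevance score into its components. The index name, document ID, and query are illustrative assumptions.

```python
# A minimal sketch assuming the elasticsearch Python client and a local
# cluster with an index named "docs"; all names are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Ask the engine to explain exactly how one document was scored for a query.
explanation = es.explain(
    index="docs",
    id="42",
    query={"match": {"body": "cubs mvp winners"}},
)

# The response decomposes the BM25 score term by term, which a developer
# can walk through and relay to the end user or use for relevancy tuning.
print(explanation["explanation"])
```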

Searching in the enterprise is a different paradigm than the impersonal world of Google, Amazon, and e-commerce search applications. Your users are employees, and you must ensure they are empowered to have productive search experiences. If users can’t intuitively understand why a particular result is showing up for their query, they’re more likely to question the tool’s accuracy. This is especially true for certain users, like librarians, legal assistants, or researchers, who have very specific search requirements and need to understand the logic of the search engine before they trust it.

User Experience and Readiness

The user experience for a tool like ChatGPT will be markedly different. For starters, many of the rich features to which users have grown accustomed – faceting, hit highlighting, phrase-searching – are currently unavailable in ChatGPT.

Furthermore, consider if your users are actually ready to leverage an AI-based solution. For example, how do they normally search? Are they entering one or two keywords, or are they advanced enough to ask natural language questions? If they’re accustomed to using keywords, a single-term query won’t produce markedly better results in an AI-based solution than in a traditional search engine.

Conclusion

Although the current version of ChatGPT may not deliver immediate value to your organization, it still has significant potential. We’re focusing our current research on a couple of areas in particular. First, its capabilities around categorization and auto-summarization are very promising and could easily be leveraged in tandem with the more ubiquitous keyword search engines. Categorization lets you tag your content with key terms and provides rich metadata that powers functionality like facets. Meanwhile, auto-summarization creates short abstracts of your lengthy documents. These abstracts, properly indexed into your search engine, can serve as the basis for providing more accurate search results.
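As a sketch of how auto-summarization could feed a keyword search engine, consider the following. The generate_summary function is a hypothetical stand-in for whatever summarization model or service you would use, and the Elasticsearch index and field names are illustrative assumptions.

```python
# A minimal sketch; generate_summary is a hypothetical placeholder for an
# LLM or summarization service, and the index/field names are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def generate_summary(text: str) -> str:
    """Hypothetical call to a summarization model or service."""
    raise NotImplementedError("wire this to your summarization back end")

def index_with_abstract(doc_id: str, title: str, body: str) -> None:
    # Store a short abstract alongside the full text so the search engine
    # can index, boost, and display it.
    es.index(
        index="documents",
        id=doc_id,
        document={
            "title": title,
            "body": body,
            "abstract": generate_summary(body),
        },
    )
```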

It’s perfectly acceptable to be equally impressed by the promise of tools like ChatGPT yet skeptical of how well their current offerings will meet your real-world search needs. If your organization is grappling with this decision, contact us, and we can help you navigate through this exciting journey.

The Importance of a Semantic Layer in a Knowledge Management Technology Suite
One of the most common Knowledge Management (KM) pitfalls at any organization is the inability to find fresh, reliable information at the time of need. 

One of the most prominent causes of this inability to quickly find information that EK has seen recently, if not the most prominent, is that an organization possesses multiple content repositories that lack a clear intention or purpose. As a result, users are forced to visit each repository within their organization’s technology landscape one at a time in order to search for the information that they need. Further, this problem is often exacerbated by other KM issues, such as a lack of proper search techniques, organizational mismanagement of content, and content sprawl and duplication. In addition to a loss in productivity, these issues lead to rework, individuals making decisions on outdated information, employees losing precious working time trying to validate information, and users relying on experts for information they cannot find on their own.

Along with a solid content management and KM strategy, EK recommends that clients experiencing these types of findability issues also seek solutions at the technical level. It is critical that organizations take advantage of the opportunity to streamline the way their users access the information they need to do their jobs; this reduces the time and effort users spend searching for information and eases the aforementioned challenges. This blog will explain how organizations can proactively mitigate the challenges of information siloed in different applications by instituting a unique set of technical solutions, including taxonomy management systems, metadata hubs, and enterprise search.

With the abundance and variety of content that organizations typically possess, it is often unrealistic to have one repository that houses all types of content. There are very few, if any, content management systems on the market that can optimally support the storage of every type of content an organization may have, let alone possess the search and metadata capabilities required for proper content management. Organizations can address this dilemma by having a unified, centralized search experience that is able to search all content repositories in a secure and safe manner. This is achieved through the design and implementation of a semantic layer: a combination of unique solutions that work together to give users one place to go to search for content while, behind the scenes, returning results from multiple locations.

In the following sections, I will illustrate the value of Taxonomy Management Systems, Enterprise Search, and Metadata Hubs that make up the semantic layer, which collectively enable a unique and highly beneficial set of solutions.

As seen in the image above, the semantic layer is made up of three main systems/solutions: a Taxonomy Management System (TMS), an Enterprise Search (ES) tool, and a Metadata Hub.

Taxonomy Management Systems

In order to pull consistent data values back from different sources and filter, sort, and facet that data, there must be a taxonomy in place that applies to all content, in all locations. This is achieved by the implementation of an enterprise TMS, which can be used to create, manage, and apply an enterprise-wide taxonomy to content in every system. This is important because it’s likely there are already multiple, separate taxonomies built into various content repositories that differ from one another and therefore cannot be leveraged in one system. An enterprise-wide taxonomy allows for the design of a taxonomy that applies to all content, regardless of its type or location. An additional benefit of having an enterprise TMS is that organizations can utilize the system’s auto-tagging capabilities to assist in the tagging of content in various repositories. Most, if not all, major contenders in the TMS industry provide auto-tagging capabilities, and organizations can use them to significantly reduce the burden on content authors and curators to manually apply metadata to content. Once integrated with content repositories, the TMS can automatically parse content, assign metadata based on a controlled vocabulary (stored in the enterprise taxonomy), and return those tags to a central location (a simplified sketch follows).
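To give a feel for auto-tagging, here is a deliberately simplified sketch in Python. Real TMS products use far more sophisticated matching (stemming, synonyms, machine learning); the taxonomy terms and trigger words here are illustrative.

```python
# A deliberately simplified auto-tagger; the controlled vocabulary and
# trigger words are illustrative.
taxonomy = {
    "Travel Policy": ["travel", "reimbursement", "per diem"],
    "Audit": ["audit", "assurance", "controls"],
    "Onboarding": ["onboarding", "new hire", "orientation"],
}

def auto_tag(text: str) -> list[str]:
    """Return the taxonomy terms whose trigger words appear in the text."""
    lowered = text.lower()
    return [
        term
        for term, triggers in taxonomy.items()
        if any(trigger in lowered for trigger in triggers)
    ]

print(auto_tag("Checklist of internal controls to review before the audit"))
# ['Audit']
```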

Metadata Hub

The next piece of this semantic layer puzzle is a metadata hub. We often find that one or more content repositories in an organization’s KM ecosystem lack the metadata capabilities necessary to describe and categorize content. This matters because metadata facilitates the efficient indexing and retrieval of content. A metadata hub can alleviate this dilemma by effectively giving those systems the metadata capabilities they need, as well as creating a single place to store and manage that metadata. The metadata hub, when integrated with the TMS, can apply the taxonomy, tag content from each repository, and store those tags in a single place for a search tool to index.

The metadata hub acts as a ‘manage in place’ solution: it points to content in its source location. Tags and metadata that are generated are stored only in the metadata hub and are not ‘pushed’ down to the source repositories. This pushing down of tags can be achieved with additional development, but is generally avoided so as not to disrupt the integrity of content within its respective repository. The main goal is to have one place that contains metadata about all content in all repositories, with that metadata based on a shared, enterprise-wide taxonomy (a minimal sketch of such a record follows).
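Here is a minimal sketch of what one ‘manage in place’ record in the hub might look like, using only the Python standard library. The field names, source system, and URI are illustrative assumptions.

```python
# A minimal sketch of a manage-in-place metadata record; all field names
# and values are illustrative.
from dataclasses import dataclass

@dataclass
class MetadataRecord:
    source_system: str   # e.g., "SharePoint" or "Confluence"
    source_uri: str      # link back to the content in its home repository
    content_type: str    # controlled value from the enterprise taxonomy
    tags: list[str]      # taxonomy terms applied by the TMS auto-tagger

record = MetadataRecord(
    source_system="SharePoint",
    source_uri="https://intranet.example.com/sites/policies/travel.docx",
    content_type="Policy",
    tags=["Travel", "Reimbursement"],
)
```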

Enterprise Search

The final component of the semantic layer is Enterprise Search (ES). This is the piece that allows individuals to perform a single search, as opposed to visiting multiple systems and performing multiple searches, which is far from the optimal search experience. The ES solution is the tool individuals will use to execute queries for content across multiple systems, and it includes the ability to filter, facet, and sort content to narrow down search results. In order for the search tool to function properly, there must be integrations between the source repositories, the metadata hub, and the TMS solution. Once these connectors are established, the search tool will be able to query each source repository with the search criteria provided by the user and then return metadata and additional information made available by the TMS and metadata hub solutions. The result is a faceted search solution similar to what we are all familiar with at Amazon and other leading e-commerce websites. These three systems work together not only to alleviate the issues created by a lack of metadata functionality in source repositories, but also to give users a single place to find anything and everything related to their search criteria (see the sketch below).
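As a minimal sketch of that unified, faceted search, here is a query against an Elasticsearch index assumed to hold the metadata hub’s records. The index name, field names, and facet fields are illustrative assumptions.

```python
# A minimal sketch assuming the metadata hub's records are indexed in a
# local Elasticsearch cluster; index and field names are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="metadata-hub",
    query={"match": {"tags": "travel"}},
    aggs={  # facets for narrowing results by source system and content type
        "by_source": {"terms": {"field": "source_system.keyword"}},
        "by_type": {"terms": {"field": "content_type.keyword"}},
    },
)

for hit in response["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["content_type"], doc["source_system"], doc["source_uri"])
```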

Bringing It All Together

The value of a semantic layer can be exemplified through a common use case:

Let’s say you are trying to find more information about a certain topic within your organization. You would love to perform a single search for everything related to this topic, but realize that you have to visit multiple systems to do so. One of your content repositories stores digital media, i.e., videos and pictures; another stores scholarly articles; and another stores information on individuals who are experts on the topic. There could be many more repositories, and you must visit and search each one separately to gather the information you need. This takes considerable time and effort and, in a best-case scenario, makes for a painstakingly long search process. In a worst-case scenario, content is missed and the research is incomplete.

With the introduction of the semantic layer, searchers only have to visit one location and perform a single search. When doing so, they see the results from each individual repository all in one place. Additionally, searchers have extensive metadata on each piece of content with which to filter results and ensure they find the information they are looking for. Normally, when we build these semantic layers, the search allows users to narrow results by source system, content type (article, person, digital media), date created or modified, and more. Once searchers have found their desired content, a convenient link takes them directly to the content in its respective repository.

Closing

The increasingly common issue of having multiple, disparate content repositories in a KM technology stack causes organizations to lose valuable time and effort, and denies employees the mature, proven metadata and search capabilities they need to find information efficiently. Enterprise Knowledge (EK) specializes in the design and implementation of the exact systems mentioned above and has proven experience building these types of technologies for clients. If your company is facing issues with the findability of your content, struggling with having to search for content in multiple places, or simply finding that searching for information is a cumbersome task, we can help. Contact us with any questions about how we can improve the way your organization searches for and finds information within your KM environment.
