Top Semantic Layer Use Cases and Applications (with Real World Case Studies)
https://enterprise-knowledge.com/top-semantic-layer-use-cases-and-applications-with-realworld-case-studies/

Today, most enterprises are managing multiple content and data systems or repositories, often with overlapping capabilities such as content authoring, document management, or data management (typically averaging three or more). This leads to fragmentation and data silos, creating significant inefficiencies. Finding and preparing content and data for analysis takes weeks, or even months, resulting in high failure rates for knowledge management, data analytics, AI, and big data initiatives. Ultimately, this negatively impacts decision-making capabilities and business agility.

To address these challenges, over the last few years, the semantic layer has emerged as a framework and solution to support a wide range of use cases, including content and data organization, integration, semantic search, knowledge discovery, data governance, and automation. By connecting disparate data sources, a semantic layer enables richer queries and supports programmatic knowledge extraction and modernization.

A semantic layer functions by utilizing metadata and taxonomies to create structure, business glossaries to align on the meaning of terms, ontologies to define relationships, and a knowledge graph to uncover hidden connections and patterns within content and data. This combination allows organizations to understand their information better and unlock greater value from their knowledge assets. Moreover, AI is tapping into this structured knowledge to generate contextual, relevant, and explainable answers.
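To make these building blocks concrete, here is a minimal, illustrative sketch in Python using the open-source rdflib library. The namespace, concepts, and relationships are hypothetical examples rather than a production model: taxonomy terms are expressed as SKOS concepts, an ontology property defines a relationship, and the resulting triples form a small knowledge graph that can be traversed.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()

# Taxonomy: a controlled term with a preferred label (metadata for tagging)
g.add((EX.ClinicalTrial, RDF.type, SKOS.Concept))
g.add((EX.ClinicalTrial, SKOS.prefLabel, Literal("Clinical Trial")))

# Ontology: an explicitly defined, named relationship
g.add((EX.investigates, RDF.type, RDF.Property))
g.add((EX.investigates, RDFS.label, Literal("investigates")))

# Knowledge graph: instances connected through that relationship
g.add((EX.TrialNCT001, RDF.type, EX.ClinicalTrial))
g.add((EX.TrialNCT001, EX.investigates, EX.DrugX))

# Traversing the graph surfaces connections that flat metadata cannot
for trial, _, drug in g.triples((None, EX.investigates, None)):
    print(f"{trial} investigates {drug}")
```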

So, what are the specific problems and use cases organizations are solving with a semantic layer? The case studies and use cases highlighted in this article are drawn from our own experience from recent projects and lessons learned, and demonstrate the value of a semantic layer not just as a technical foundation, but as a strategic asset, bridging human understanding with machine intelligence.

 

 

Semantic Layer Advancing Search and Knowledge Discovery: Getting Answers with Organizational Context

Over the past two decades, we have completed 50-70 semantic layer projects across a wide range of industries. In nearly every case, the core challenges revolve around age-old knowledge management and data quality issues—specifically, the findability and discoverability of organizational knowledge. In today’s fast-paced work environment, simply retrieving a list of documents as ‘information’ is no longer sufficient. Organizations require direct answers to discover new insights. Most importantly, organizations are looking to access data in the context of their specific business needs and processes. Traditional search methods continue to fall short in providing the depth and relevance required to make quick decisions. This is where a semantic layer comes into play. By organizing and connecting data with context, a semantic layer enables advanced search and knowledge discovery, allowing organizations to retrieve not just raw files or data, but answers that are rich in meaning, directly tied to objectives, and action-oriented. For example, supported by descriptive metadata and explicit relationships, semantic search, unlike keyword search, understands the meaning and context of our queries, leading to more accurate and relevant results by leveraging relationships between entities and concepts across content, rather than just matching keywords. This powers enterprise search solutions and question-answering systems that can understand and answer complex questions based on your organization’s knowledge. 

Case Study: For our clients in the pharmaceuticals and healthcare sectors, clinicians and researchers often face challenges locating the most relevant medical research, patient records, or treatment protocols due to the vast amount of unstructured data. A semantic layer facilitates knowledge discovery by connecting clinical data, trials, research articles, and treatment guidelines to enable context-aware search. By extracting and classifying entities like patient names, diagnoses, medications, and procedures from unstructured medical records, our clients are advancing scientific discovery and drug innovation. They are also improving patient care outcomes by applying the knowledge associated with these entities in clinical research. Furthermore, domain-specific ontologies organize unstructured content into a structured network, allowing AI solutions to better understand and infer knowledge from the data. This map-like representation helps systems navigate complex relationships and generate insights by clearly articulating how content and data are interconnected. As a result, rather than relying on traditional, time-consuming keyword-based searches that cannot distinguish between entities (e.g., “drugs manufactured by GSK” vs. “what drugs treat GSK?”), users can perform semantic queries that comprehend meaning (e.g., “What are the side effects of drug X?” or “Which pathways are affected by drug Y?”), leveraging the relationships between entities to obtain precise and relevant answers more efficiently.
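As a rough illustration of the difference, the sketch below (Python with rdflib; the entities and predicates are invented for the example) shows how “What are the side effects of drug X?” becomes a query over explicit relationships rather than a keyword match:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/med/")
g = Graph()
g.add((EX.DrugX, EX.hasSideEffect, EX.Nausea))
g.add((EX.DrugX, EX.hasSideEffect, EX.Headache))

# The question traverses the hasSideEffect relationship directly, so it
# cannot be confused with other, unrelated mentions of "drug X" in text
results = g.query("""
    PREFIX ex: <http://example.org/med/>
    SELECT ?effect WHERE { ex:DrugX ex:hasSideEffect ?effect . }
""")
for row in results:
    print(row.effect)
```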

 

Semantic Layer as a Data Product: Unlocking Insights by Aligning & Connecting Knowledge Assets from Complex Legacy Systems

The reality is that most organizations face disconnected data spread across complex, legacy systems. Despite well-intended investments in enterprise knowledge and data management, typical repositories often remain outdated, including legacy applications, email, shared network drives, folders, and information saved locally on desktops or laptops. Global investment banks, for instance, struggle with multiple outdated record management, risk, and compliance tracking systems, while healthcare organizations continue to contend with disparate electronic health record (EHR) and electronic medical record (EMR) systems. These legacy systems hinder the ability to communicate and share data with newer, more advanced systems and are typically not designed to handle the growing demands of modern data, leaving businesses grappling with siloed information that makes regulatory reporting onerous, manual, and time-consuming. The solution lies in treating the semantic layer as an abstracted data product itself, whereby organizations employ semantic models to connect fragmented data from legacy systems, align shared terms across those systems, provide descriptive metadata and meaning, and empower users to query and access data with additional context, relevance, and speed. This approach not only streamlines decision-making but also modernizes data infrastructure without requiring a complete overhaul of existing systems.

Case Study: We are currently working with a global financial firm to transform their risk management program. The firm manages 21 bespoke legacy applications, each handling different aspects of their risk processes. Compiling a comprehensive risk report typically took up to two months, and answering key questions like, “What are the related controls and policies relevant to a given risk in my business?” was a complex, time-consuming task. The firm engaged us to augment their data transformation initiatives with a semantic layer and ecosystem. We began by piloting a conceptual graph model of their risk landscape, defining core risk taxonomies to connect disparate data across the ecosystem. We used ontologies to explicitly capture the relationships between risks, controls, issues, policies, and more. Additionally, we leveraged large language models (LLMs) to summarize and reconcile over 40,000 risks, which had previously been described by assessors using free text.

This initiative provided the firm with a simplified, intuitive view where users could quickly look up a risk and find relevant information in seconds via a graph front-end. Just 1.5 years later, the semantic layer is powering multiple key risk management tools, including a risk library with semantic search and knowledge panels, four recommendation engines, and a comprehensive risk dashboard featuring threshold and tolerance analysis. The early success of the project was due to a strategic approach: rather than attempting to integrate the semantic data model across their legacy applications, the firm treated it as a separate data product. This allowed risk assessors and various applications to use the semantic layer as modular “Lego bricks,” enabling flexibility and faster access to critical insights without disrupting existing systems.
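For a sense of how such a graph answers the firm’s question about related controls and policies, consider this simplified sketch (the risk, control, and policy names, and the relationship labels, are illustrative rather than the firm’s actual ontology):

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/risk/")
g = Graph()

# Explicit relationships captured by the ontology
g.add((EX.VendorOutage, EX.mitigatedBy, EX.FailoverControl))
g.add((EX.FailoverControl, EX.mandatedBy, EX.ResiliencePolicy))

# "What controls and policies are relevant to this risk?" as one traversal
results = g.query("""
    PREFIX ex: <http://example.org/risk/>
    SELECT ?control ?policy WHERE {
        ex:VendorOutage ex:mitigatedBy ?control .
        ?control ex:mandatedBy ?policy .
    }
""")
for control, policy in results:
    print(f"Control: {control} | Policy: {policy}")
```

A lookup that previously required stitching together answers across 21 applications becomes a single query over the shared model.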

 

Semantic Layer for Data Standards and Interoperability: Navigating the Dynamism of Data & Vendor Limitations 

Various data points suggest that, today, the average tenure of an S&P 500 technology company has dropped dramatically from 85 years to just 12-15 years. This rapid turnover reflects the challenges organizations face with the constant evolution of technology and vendor solutions. The ability to adapt to new tools and systems, while still maintaining operational continuity and reducing risk, is a growing concern for many organizations. One key solution to this challenge is using frameworks and standards created to ensure data interoperability, offering the flexibility to reorganize data and abstract it away from system and vendor limitations. A proper semantic layer employs universally adopted semantic web (W3C) and data modeling standards to design, model, implement, and govern knowledge and data assets within organizations and across industries.

Case Study: A few years ago, one of our clients faced a significant challenge when their graph database vendor was acquired by another company, leading to a sharp increase in both license and maintenance fees. To mitigate this, we were able to swiftly migrate all of their semantic data models from the old graph database to a new one in less than a week (the fastest migration we’ve ever experienced). This move saved the client approximately $2 million over three years. The success of the migration was possible because their data models were built using semantic web standards (RDF-based), ensuring standards-based models and interoperability regardless of the underlying database or vendor. This case study highlights a fundamental shift in how organizations approach data management.
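The mechanics of that portability are worth a short sketch. Because RDF is a W3C standard, a model exported from one store loads unchanged into any other standards-compliant store; the example below uses Python’s rdflib with an invented one-triple model:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
old_store = Graph()
old_store.add((EX.Customer, EX.holds, EX.Account))

# Export the model to Turtle, a standard RDF serialization
ttl = old_store.serialize(format="turtle")

# Any standards-compliant graph database can re-load the identical model,
# which is what makes vendor-to-vendor migration a data transfer rather
# than a remodeling project
new_store = Graph()
new_store.parse(data=ttl, format="turtle")
print(len(new_store), "triples migrated without transformation")
```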

 

Semantic Layer as the Framework for a Knowledge Portal 

The growing volume of data, the need for efficient knowledge sharing, and the drive to enhance employee productivity and engagement are fueling a renewed interest in knowledge portals. Organizations are increasingly seeking a centralized, easily accessible view of information as they adopt more data-driven, knowledge-centric approaches. A modern Knowledge Portal consolidates and presents diverse types of organizational content, ranging from unstructured documents and structured data to connections with people and enterprise resources, offering users a comprehensive “Enterprise 360” view of related knowledge assets to support their work effectively.

While knowledge portals fell out of favor in the 2010s due to issues like poor content quality, weak governance, and limited usability, today’s technological advancements are enabling their resurgence. Enhanced search capabilities, better content aggregation, intelligent categorization, and automated integrations are improving findability, discoverability, and user engagement. At its core, a Knowledge Portal comprises five key components that are now more feasible than ever: a Web UI, API layers, enterprise search engine, knowledge graph, and taxonomy/ontology management tools—half of which form part of the semantic layer.

Case Study: A global investment firm managing over $250 billion in assets partnered with us to break down silos and improve access to critical information across its 50,000-employee organization. Investment professionals were wasting time searching for fragmented, inconsistent knowledge stored across disparate systems, often duplicating efforts and missing key insights. We designed and implemented a Knowledge Portal integrating structured and unstructured content, AI-powered search, and a semantic layer to unify data from over 12 systems, including their primary CRM (DealCloud) and additional internal and external systems, while respecting complex access permissions and entitlements. A central part of the portal was its semantic layer architecture, which included the rollout of metadata and taxonomy design, ontology and graph modeling and storage, and an agile development process that ensured high user engagement and adoption. Today, the portal connects staff to both information and experts, enabling faster discovery, improved collaboration, and reduced redundancy. As a result, the firm saw measurable gains in productivity, staff and client onboarding efficiency, and knowledge reuse. The company continues to expand the solution to advanced applications such as semantic search and broader global use cases.

 

Semantic Layer for Analytics-Ready Data 

For many large-scale organizations, it takes weeks, sometimes months, for analytics teams to develop “insights” reports and dashboards that fulfill data-driven requests from executives or business stakeholders. Navigating complex systems and managing vast data volumes has become a point of friction between established software engineering teams managing legacy applications and emerging data science/engineering teams focused on unlocking analytics insights or data products. Such challenges persist as long as organizations work within complex infrastructures and proprietary platforms, where data is fragmented and locked in tables or applications with little to no business context. This makes it extremely difficult to extract useful insights, handle the dynamism of data, or manage the rising volumes of unstructured data, all while trying to ensure that data is consistent and trustworthy. 

Picture this scenario and use case from a recent engagement: a global retailer with close to 40,000 store locations had recently migrated its data to a data lake in an attempt to centralize its data assets. Despite the investment, the retailer still faced persistent challenges when new data requests came from leadership, particularly around store performance metrics. Here’s a breakdown of the issues:

  • Each time a leadership team requested a new metric or report, the data team had to spin up a new project and develop new data pipelines.
  • A data analyst needed 5-6 months to understand the content and data related to these metrics, which often involved petabytes of raw data.
  • The process involved managing over 1,500 ETL pipelines, which led to inefficiencies (what we jokingly called “death by 2,000 ETLs”).
  • Producing a single dashboard for C-level executives cost over $900,000.
  • Even after completing the dashboard, they often discovered that the metrics were being defined and used inconsistently. Terms like “revenue,” “headcount,” or “store performance” were frequently understood differently depending on who worked on the report, making output reports unreliable and unusable. 

This is one example of why organizations are now seeking and investing in a coherent, integrated way to understand their vast data ecosystems. Because organizations often work with complex systems, ranging from CRMs and ERPs to data lakes and cloud platforms, extracting meaningful insights requires an integrated view that bridges these gaps. The semantic layer serves as that pragmatic bridge, streamlining processes and transforming how data is used across departments. For these use cases specifically, semantic data is gaining significant traction across diverse pockets of the organization as the standard interpreter between complex systems and business goals.

 

Semantic Layer for Delivering Knowledge Intelligence 

Another reality many organizations are grappling with today is that basic AI algorithms trained on public data sets may not work well on organization- and domain-specific problems, especially in domains where industry conventions matter. Thus, organizational knowledge is a prerequisite for success, not just for generative AI, but for all applications of enterprise AI and data science. This is where experience and best practices in knowledge and data management lend the AI space effective, proven approaches to sharing domain and institutional knowledge. Technical teams tasked with making AI “work” or provide value for their organization are looking for programmatic ways to explicitly model relationships between data entities, provide business context to tabular data, and extract knowledge from unstructured content, ultimately delivering what we call Knowledge Intelligence.

A well-implemented semantic layer abstracts the complexities of underlying systems and presents a unified, business-friendly view of data. It transforms raw data into understandable concepts and relationships, as well as organizes and connects unstructured data. This makes it easier for both data teams and business users to query, analyze, and understand their data, while making this organizational knowledge machine-ready and readable. The semantic layer standardizes terminology and data models across the enterprise, and provides the required business context for the data. By unifying and organizing data in a way that is meaningful to the business, it ensures that key metrics are consistent, actionable, and aligned with the company’s strategic objectives and business definitions.
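As a simplified illustration of what standardized terminology with business context can look like in practice, the sketch below defines a governed metric once and has every consumer resolve to it. The fields, values, and registry are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str         # business-friendly term
    definition: str   # agreed glossary meaning
    expression: str   # canonical calculation against governed sources
    steward: str      # accountable owner

registry = {
    "revenue": MetricDefinition(
        name="Revenue",
        definition="Gross merchandise sales net of returns, per fiscal month",
        expression="SUM(sales.amount) - SUM(returns.amount)",
        steward="finance-data-team",
    )
}

def resolve_metric(term: str) -> MetricDefinition:
    # Every dashboard and pipeline resolves terms through the registry,
    # so "revenue" means the same thing regardless of who builds the report
    return registry[term.lower()]

print(resolve_metric("Revenue").expression)
```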

Case Study: With the aforementioned global retailer, as their data and analytics teams worked to integrate siloed data and unstructured content, we partnered with them to build a semantic ecosystem that streamlined processes and provided the business context needed to make sense of their vast data. Our approach included: 

  • Standardized Metadata and Vocabularies: We developed standardized metadata and vocabularies to describe their key enterprise data assets, especially for store metrics like sales performance and revenue. This ensured that everyone in the organization used the same definitions and language when discussing key metrics.
  • Explicitly Defined Concepts and Relationships: We used ontologies and graphs to define the relationships between various domains such as products, store locations, store performance, etc. This created a coherent and standardized model that allowed data teams to work from a shared understanding of how different data points were connected.
  • Data Catalog and Data Products: We helped the retailer integrate these semantic models into a data catalog that made data available as “data products.” This allowed analysts to access predefined, business-contextualized data directly, without having to start from scratch each time a new request was made.

This approach reduced report generation steps from 7 to 4 and cut development time from 6 months to just 4-5 weeks. Most importantly, it enabled the discovery of previously hidden data, unlocking valuable insights to optimize operations and drive business performance.

 

Semantic Layer as a Foundation for Reliable AI: Facilitating Human Reasoning and Explainable Decisions

Emerging technologies (like GenAI or Agentic AI) are democratizing access to information and automation, but they also contribute to the “dark data” problem—data that exists in an unstructured or inaccessible format yet contains valuable, sensitive, or erroneous information. While LLMs have garnered significant attention in conversational AI and content generation, organizations are now recognizing that their data management challenges require more specialized, nuanced, and ‘grounded’ approaches that address gaps in explainability and precision and align AI with organizational context and business rules. Without this organizational context, raw data or text is often messy, outdated, redundant, and unstructured, making it difficult for AI algorithms to extract meaningful information. The key step in addressing this problem is connecting all types of organizational knowledge assets through a shared language: experts, related data, content, videos, best practices, lessons learned, and operational insights from across the organization. In other words, to fully benefit from an organization’s knowledge and information, both structured and unstructured information, as well as expert knowledge, must be represented in a form machines can understand. A semantic layer provides AI with a programmatic framework to make organizational context, content, and domain knowledge machine-readable. Techniques such as data labeling, taxonomy development, business glossaries, ontology design, and knowledge graph creation make up the semantic layer and facilitate this process.

Case Study: We have been working with a global foundation whose previous AI experiments had failed, following a mandate from their CEO for the data teams to “figure out a way” to adopt LLMs to evaluate the impact of their investments on strategic goals by synthesizing publicly available domain data, internal investment documents, and internal investment data. The challenge in those earlier efforts lay in connecting diverse, unstructured information to structured data and ensuring that the insights generated were precise, explainable, reliable, and actionable for executive stakeholders. To address these challenges, we took a hybrid approach that augmented LLMs with advanced graph technology and a semantic RAG (Retrieval Augmented Generation) agentic workflow. To provide the relevant organizational metrics and connection points in a structured manner, the solution leveraged an Investment Ontology as a semantic backbone underpinning their disconnected source systems, ensuring that all investment-related data (from structured datasets to narrative reports) is harmonized under a common language. This semantic backbone supports both precise data integration and flexible query interpretation. To convey the value of this hybrid approach, we built a chatbot interface that let users toggle between a basic GPT model and the graph RAG solution. The graph RAG solution consistently outperformed the basic LLM on complex questions, demonstrating the value of semantics for providing organizational context and alignment; it delivered coherent and explainable insights that bridged structured and unstructured investment data, and provided a transparent AI mapping that allowed stakeholders to see exactly how each answer was derived.
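The overall pattern can be sketched in a few lines. This is a deliberately simplified illustration, not the foundation’s implementation: the tiny graph, the retrieval function, and the `call_llm` placeholder stand in for the Investment Ontology, the semantic retrieval step, and whichever model API is actually used.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/invest/")
g = Graph()
g.add((EX.GrantA, EX.supportsGoal, EX.MalariaEradication))
g.add((EX.GrantA, EX.reportedOutcome, EX.CaseReductionReport2024))

def retrieve_context(investment) -> str:
    # Semantic retrieval: pull ontology-grounded facts instead of raw text,
    # so every fact handed to the model is traceable to a source triple
    return "\n".join(f"{s} {p} {o}"
                     for s, p, o in g.triples((investment, None, None)))

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for the model provider's API")

question = "What impact did Grant A have on our malaria goal?"
prompt = (
    "Answer using ONLY the facts below and cite each fact you use.\n"
    f"Facts:\n{retrieve_context(EX.GrantA)}\n\nQuestion: {question}"
)
# answer = call_llm(prompt)  # the required citations keep answers explainable
```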

 

Closing 

Now more than ever, the understanding and application of semantic layers are rapidly advancing. Organizations across industries are increasingly investing in solutions to enhance their knowledge and data management capabilities, driven in part by growing interest in benefiting from advanced AI capabilities.

The days of relying on a single, monolithic tool are behind us. Enterprises are increasingly investing in semantic technologies to not only work with the systems of today but also to future-proof their data infrastructure for the solutions of tomorrow. A semantic layer provides the standards that act as a universal “music sheet,” enabling data to be played and interpreted by any instrument, including emerging AI-driven tools. This approach ensures flexibility, reduces vendor lock-in, and empowers organizations to adapt and evolve without being constrained by legacy systems.

If you are looking to learn more about how organizations are approaching semantic layers at scale, or are seeking to unstick a stalled initiative, you can learn more from our case studies or contact us if you have specific questions.

Incorporating Unified Entitlements in a Knowledge Portal
https://enterprise-knowledge.com/incorporating-unified-entitlements-in-a-knowledge-portal/

Recently, we have had a great deal of success developing a certain breed of application for our customers—Knowledge Portals. These knowledge-centric applications holistically connect an organization’s information—its data, content, people and knowledge—from disparate source systems. These portals provide a “single pane of glass” to enable an aggregated view of the knowledge assets that are most important to the organization. 

The ultimate goal of the Knowledge Portal is to provide the right people access to the right information at the right time. This blog focuses on the first part of that statement—“the right people.” This securing of information assets is called entitlements. As our COO Joe Hilger eloquently points out, entitlements are vital in “enabling consistent and correct privileges across every system and asset type in the organization.” The trick is to ensure that an organization’s security model is maintained when aggregating this disparate information into a single view so that users only see what they are supposed to.

 

The Knowledge Portal Security Challenge

The Knowledge Portal’s core value lies in its ability to aggregate information from multiple source systems into a single application. However, any access permissions established outside of the portal—whether in the source systems or an organization-wide security model—need to be respected. There are many considerations to take into account when doing this. For example, how does the portal know:

  • Who am I?
  • Am I the same person specified in the various source systems?
  • Which information should I be able to see?
  • How will my access be removed if my role changes?

Once a user has logged in, the portal needs to know that the user has Role A in the content management system, Role B in the HR system, and Role C in the financial system. Since the portal aggregates information from all of these systems, it uses this information to ensure that what users see in the portal reflects what they would see in any of the individual systems.

 

The Tenets of Unified Entitlements in a Knowledge Portal

At EK, we have a common set of principles that guide us when implementing entitlements for a Knowledge Portal. They include:

  • Leveraging a single identity via an Identity Provider (IdP).
  • Creating a universal set of groups for access control.
  • Respecting access permissions set in source systems when available.
  • Developing a security model for systems without access permissions.

 

Leverage an Identity Provider (IdP)

When I first started working in search over 20 years ago, most source systems had their own user stores—the feature that allows a user to log into a system and uniquely identifies them within the system. One of the biggest challenges for implementing security was correctly mapping a user’s identity in the search application to their various identities in the source systems sending content to the search engine.

Thankfully, enterprise-wide Identity Providers (IdPs) like Okta, Microsoft Entra ID (formerly Azure Active Directory), and Google Cloud Identity are ubiquitous these days. An Identity Provider (IdP) is like a digital doorkeeper for your organization. It identifies who you are and shares that information with your organization’s applications and systems.

By leveraging an IdP, I can present myself to all my applications with a single identifier such as “cmarino@enterprise-knowledge.com.” For the sake of simplicity in mapping my identity within the Knowledge Portal, I’m not “cmarino” in the content management system, “marinoc” in the HR system, and “christophermarino” in the financial system.

Instead, all of those systems recognize me as “cmarino@enterprise-knowledge.com” including the Knowledge Portal. And the subsequent decision by the portal to provide or deny access to information is greatly simplified. The portal needs to know who I am in all systems to make these determinations.
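In practice, this single identity typically arrives as a signed token from the IdP. The sketch below (Python with the PyJWT library) self-issues a token purely so the example runs end to end; in a real deployment the IdP signs the token, and the application must verify that signature against the IdP’s published keys.

```python
import jwt  # pip install PyJWT

# In production the IdP issues and signs this after login; we self-issue
# one here only so the sketch is runnable
id_token = jwt.encode(
    {"email": "cmarino@enterprise-knowledge.com", "groups": ["deals-team"]},
    "demo-secret",
    algorithm="HS256",
)

# Signature verification is skipped for brevity ONLY; real code must verify
claims = jwt.decode(id_token, options={"verify_signature": False})

user_id = claims["email"]           # the one identifier every system shares
groups = claims.get("groups", [])   # group claims, if the IdP sends them
print(user_id, groups)
```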

 

Create Universal Groups for Access Control

Working hand in hand with an IdP, establishing a set of universally used groups for access control is a critical step toward enabling Unified Entitlements. These groups are typically created within your IdP and should reflect the common groupings needed to enforce your organization’s security model. For instance, you might choose to create groups based on a department, a project, or a business unit. Most systems provide great flexibility in how these groups are created and managed.

These groups are used for a variety of tasks, such as:

  • Associating relevant users to groups so that security decisions are based on a smaller, manageable number of groups rather than on every user in your organization.
  • Enabling access to content by mapping appropriate groups to the content.
  • Serving as the unifying factor for security decisions when developing an organization’s security model.

As an example, we developed a Knowledge Portal for a large global investment firm which used Microsoft Entra ID as their IdP. Within Entra ID, we created a set of groups based on structures like business units, departments, and organizational roles. Access permissions were applied to content via these groups whether done in the source system or an external security model that we developed. When a user logged in to the portal, we identified them and their group membership and used that in combination with the permissions of the content. Best of all, once they moved off a project or into a different department or role, a simple change to their group membership in the IdP cascaded down to their access permissions in the Knowledge Portal.

 

Respect Permissions from Source Systems

The first two principles have focused on identifying a user and their roles. However, the second key piece to the entitlements puzzle rests with the content. Most source systems natively provide the functionality to control access to content by setting access permissions. Examples are SharePoint for your organization’s sensitive documents, ServiceNow for tickets only available to a certain group, or Confluence pages only viewable by a specific project team. 

When a security model already exists within a source system, the goal of integrating that content within the Knowledge Portal is simple: respect the permissions established in the source. The key here is syncing your source systems with your IdP and then leveraging the groups managed there. When specifying access to content in the source, use the universal groups. 

Thus, when the Knowledge Portal collects information from the source system, it pulls not only the content and its applicable metadata but also the content’s security information. The permissions are stored alongside the content in the portal’s backend and used to determine whether a specific user can view specific content within the portal. The permissions become just another piece of metadata by which the content can be filtered.
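A stripped-down sketch of that filtering step is below. The document store and group names are invented; the point is that the content’s permitted groups travel with it into the portal’s backend and are intersected with the user’s groups at query time.

```python
# Each indexed item carries its source-system permissions as metadata
documents = [
    {"title": "Q3 Deal Memo",  "allowed_groups": {"deals-team", "executives"}},
    {"title": "Benefits FAQ",  "allowed_groups": {"all-employees"}},
    {"title": "HR Case Notes", "allowed_groups": {"hr-department"}},
]

def search(query: str, user_groups: set) -> list:
    hits = [d for d in documents if query.lower() in d["title"].lower()]
    # Security trimming: permissions act as just another metadata filter
    return [d for d in hits if d["allowed_groups"] & user_groups]

print(search("deal", {"deals-team"}))      # returns the memo
print(search("deal", {"hr-department"}))   # returns nothing
```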

 

Develop Security Model for Unsupported Systems

Occasionally, there will be source systems where access permissions are not or cannot be supported. In this case, you will have to leverage your own internal security model, either by developing one or by using an entitlements tool. Instead of entitlements stored within the source system, the entitlements will be managed through this internal model.

The steps to accomplish this include:

  • Identify the tools needed to support unified entitlements;
  • Build the models for applying the security rules; and
  • Develop the integrations needed to automate security with other systems. 

The process to implement this within the Knowledge Portal would remain the same: store the access permissions with the content (mapped using groups) and use these as filters to ensure that users see only the information they should.

 

Conclusion

Getting unified entitlements right for your organization plays a large part in a successful Knowledge Portal implementation. If you need proven expertise to help guide managing access to your organization’s valuable information, contact us.

The “right people” in your organization will thank you.

Galdamez and Cross to Speak at the APQC 2025 Process & Knowledge Management Conference
https://enterprise-knowledge.com/guillermo-galdamez-benjamin-cross-will-be-presenting-at-apqc-2025/

Guillermo Galdamez, Principal Knowledge Management Consultant, and Benjamin Cross, Project Manager, will be presenting “Knowledge Portals: Manifesting A Single View Of Truth For Your Organization” at the APQC 2025 Process & Knowledge Management Conference on April 10th.

In this presentation, Galdamez and Cross will go into an in-depth explanation of Knowledge Portals, their value to organizations, the technical components that make up these solutions, lessons learned from their implementation across multiple clients in different industries, how and when to make the case to get started on a Knowledge Portal design and implementation effort, and how these solutions can become a catalyst for a knowledge transformation within organizations.

Find out more about the event and register at the conference website.

The APQC 2025 Process & Knowledge Management Conference will be hosted in Houston, Texas, April 9 and 10. The conference theme is: Integrate, Influence, Impact. EK consultants Guillermo Galdamez and Benjamin Cross are featured speakers.

Knowledge Portal Architecture Explained
https://enterprise-knowledge.com/knowledge-portal-architecture-explained/

In today’s data-driven world, the need for efficient knowledge management and dissemination has never been more critical. Users are faced with an overwhelming amount of content and information, and thus need an efficient, intuitive, and structured way to retrieve it. Additionally, organizational knowledge is often inconsistent, incomplete, and dispersed among various systems.

The solution? A Knowledge Portal: a dynamic and interconnected system designed to transform the way we manage, access, and leverage knowledge. This provides users with a comprehensive Enterprise 360 view of all of the information they need to successfully do their jobs. At its core, a Knowledge Portal consists of five components: Web UI, API Layer, Enterprise Search Engine, Knowledge Graph, and Taxonomy/Ontology Management System. 

A Knowledge Portal consists of five main components described below:

  1. Web UI: Provides users with a way to interact with the portal’s content, incorporating features such as search functionality, aggregation pages, and navigation menus.
  2. API Layer: Serves the Web UI consolidated, streamlined information via various endpoints, and enables other client applications to integrate with and consume the connected, cleaned Knowledge Portal content.
  3. Enterprise Search Engine: Indexes and retrieves relevant information to display in the Knowledge Portal based on user queries, allowing relevant results from all integrated enterprise repositories to be discovered in the Portal.
  4. Knowledge Graph: Represents the structure and connections of the organization’s knowledge, capturing concepts, entities, attributes, and their relationships in a graph database format. Enhances search results by providing contextual information and connected content.
  5. Taxonomy and Ontology Manager: Defines and maintains controlled vocabularies, taxonomies, and ontologies, which allow for consistent and relevant metadata tagging and content organization. Ensures search precision and accuracy.

The diagram below displays how these five components interact within the context of an Enterprise Knowledge Portal implementation.

This diagram displays how the components of a Knowledge Portal interact with one another. At the bottom of the diagram, there are various data repositories, content management systems, and other enterprise data stores. Content from these repositories will be indexed by the Enterprise Search Engine and categorized/tagged by the Taxonomy and Ontology Manager. The tagged/categorized content will be ingested into the Knowledge Graph where it can be associated and linked to more organizational knowledge. The search engine can also index content from the Knowledge Graph. Then the backend API layer exposes and serves this tagged, indexed content from the Search Engine and Knowledge Graph. The API layer can be leveraged by various existing or future client applications. For the Knowledge Portal specifically, the API Layer serves content to the Knowledge Portal Web UI, which ultimately provides the end user an Enterprise 360 view of their organization’s content and knowledge.
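To make the flow above concrete, here is a deliberately tiny sketch of the API layer’s consolidation job, with in-memory stand-ins for the search engine and knowledge graph (all names and fields are illustrative):

```python
# Stand-ins for the enterprise search index and the knowledge graph
SEARCH_INDEX = {
    "product-42": {"title": "Product 42 Spec Sheet", "type": "document"},
}
KNOWLEDGE_GRAPH = {
    "product-42": {"expert": "Jane Doe", "related": ["product-7"]},
}

def get_enterprise_360(entity_id: str) -> dict:
    """What an API endpoint would return to the Web UI: indexed content
    enriched with the entity's graph relationships."""
    response = dict(SEARCH_INDEX.get(entity_id, {}))
    response["context"] = KNOWLEDGE_GRAPH.get(entity_id, {})
    return response

print(get_enterprise_360("product-42"))
```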

Collectively, these components create a unified platform, empowering both organizations and individuals to discover information, break down organizational silos, and make informed decisions. 

EK has expertise in Knowledge Portal implementations, and we would love to help you take the next step on your knowledge management journey. Please contact us for more information.

Special thank you to Adam Eltarhoni for his contributions to this infographic! 

Five Lessons in Developing and Deploying a Modern Knowledge Portal
https://enterprise-knowledge.com/five-lessons-in-developing-and-deploying-a-modern-knowledge-portal/

Knowledge Portals are some of the most exciting and promising emerging KM solutions. They create new capabilities to deliver a wide range of unstructured content, structured data, and connections to people and resources in the context of users’ work. Over the past year, my team has been working on one of EK’s largest implementation projects to date; we designed, built, and deployed a Knowledge Portal for a global investment firm to provide a 360-degree view into its operations and investment lifecycle.

At the project’s onset, I was presented with the opportunity to act as the KM Lead to help guide the team and our client in KM best practices and align the solution to business needs. My role later evolved to include Scrum Product Owner responsibilities as well, working with multiple stakeholders to establish priorities for development and paving a path for our development team to release valuable iterations of our solution.

The journey of developing a Knowledge Portal has had its ups and downs. There were also many hidden dangers, latent mistakes, and sudden risks along the way. It could have been very easy to go down the wrong path and waste our team’s efforts and resources. In this blog, I will share 5 of the lessons I’ve collected over the past year in the implementation of a Knowledge Portal – hopefully these will serve you well in your own Knowledge Portal journey. 

 

1. Pragmatism Over Idealism

When we first begin talking about Knowledge Portals with our stakeholders, you can see their eyes light up at the doors of opportunity thrown open. Business leaders and their staff have so many needs and ideas that could potentially be addressed through a Knowledge Portal.

However, Knowledge Portals are highly complex solutions that incorporate multiple technologies and data from upstream systems that must all work in concert to achieve stakeholders’ needs and expectations. Throughout the development of the Knowledge Portal, we ran into a multitude of blockers, some outside our control, and many of which prevented us from achieving our initial priorities.

In the spirit of Agile, our team’s focus was to deliver value and results as early as possible, so we got creative. Features that were once fourth on our backlog rose to the top. If we had stayed the course and worked to remove immediate roadblocks, we may have delayed our releases by half a year. Showing early business value was critical to sustain engagement with our key stakeholders and maintain the project’s momentum. Our first release incorporated all of our underlying technologies, giving us a very solid foundation to release further functionality quicker in the future. 

 

2. Content Quality and Governance Takes a Front Seat

A Knowledge Portal is a powerful tool to surface content and data that was previously “locked away” in information silos, guarded by bureaucracy and obscure procedures. We knew from the beginning that displaying accurate and reliable content and data would be a key to the Knowledge Portal’s success.

Throughout the development process, we found that some of the data was not ready to be displayed on the Portal. This slowed us down, but we worked with different data owners to help fix the erroneous data. To be fair, prior to the Portal, it was cumbersome for them to get a holistic view of their data, especially if it was spread across multiple repositories. The success of a Knowledge Portal is dependent on the quality, consistency, and accessibility of the underlying content and data. It is critical to plan for this, and to expect time and effort for cleanup and enhancements. To an extent, our development of the Knowledge Portal encouraged our client to find solutions to some of their data quality issues, as well as create new processes and guidance on data management. This process ensures that data is maintained and users perceive it as being reliable.

Expect issues with data and content quality. It will be helpful for you to know who data owners are and to have a protocol in place for raising and escalating issues when they occur. This is something I would recommend doing early on in the project – at the kickoff, or before, if possible. Later, incorporate data checks throughout the entire development lifecycle; use actual sample data in design wireframes, get previews of data during development, and dedicate time during testing to validate whether the incoming data meets user wants and expectations.

 

3. Technical Partnerships Are Key

Given the complexity of Knowledge Portals, it can be a humbling experience to tackle their enterprise development. It is difficult for any one person to have the depth of expertise in the range of knowledge domains it covers: UX, taxonomy and ontology, search, strategic business alignment, data management, content modeling, systems infrastructure and architecture, and many more.

A successful Knowledge Portal implementation requires a combination of design, business, and technology skills. While I could focus on some of the definition and prioritization of use cases and features, my technical partners could focus on the nuts-and-bolts of how these features would come together. This being said, the idiom “it takes a village” really applies to the development of a Knowledge Portal. Being part of the EK Team was a real advantage because of the quick access to a depth of expertise in every necessary aspect to build a Knowledge Portal: UX designers, engineers and developers, data scientists, ontologists and taxonomists, technical analysts, and Agilists to support our work. 

There were a few additional technical partnerships which were key to the Knowledge Portal’s success. Our relationship with infrastructure and security teams facilitated our adherence to internal standards and requirements that the Portal needed to meet. Although we are technology-agnostic, our partnerships with technology vendors facilitated our work as well. They aided in both maximizing the use of the features of each individual component of the Knowledge Portal and expediting the resolution of any unforeseen issues or questions that our development team had.

Just as important as it was to bring data and content together in the Portal, we brought together disparate teams and individuals to make the Knowledge Portal a reality. 

 

4. Early Technology Choices Are Critical

When we started the project, the organization lacked the full suite of technologies needed to implement the Knowledge Portal, so one of the first steps we took was defining and prioritizing business and technical requirements. We then took a close look at the leaders in each of the spaces to evaluate and choose the technologies that would satisfy these requirements.

Even though early iterations of the Knowledge Portal did not make full use of each of the technical solutions’ capabilities, smart choices early on future-proofed our Knowledge Portal. The technologies, coupled with our design approach (focused on solving business problems as opposed to delivering features), resulted in an easily extensible Knowledge Portal. As new requirements come up, we have been able to fulfill them without the need for further technology investments. 

As we move forward, I am sure new requirements and priorities will surface that will necessitate the incorporation of the latest technologies; however, I am confident our experts will provide guidance to stakeholders on emergent technologies that can continue to extend the overall solution.

 

5. If You Build It… They May Not Come

Being a KM practitioner, I find the concept of Knowledge Portals thrilling. Most people in an organization will not share this sentiment, and it will lead to disappointment if your target users are not adopting and frequently using the solution you’ve been working on. I’ve found users mostly want to get on with their work, and changing habits is hard. Even if you are providing them with new capabilities for their teams, they may not have the additional attention and energy to embrace a new way of doing things. Deploying any new KM system requires a robust change management and communications strategy to drive adoption of tools and healthy KM practices.

For success on any initiative of this scale, change management and communications are critical. We worked together with our stakeholders and users to establish a virtuous cycle of feedback, where our development is now more responsive to users’ needs, while at the same time we are able to comprehensively communicate the value and expectations of the Knowledge Portal.

We assembled a cross-functional group of stakeholders to inform our design and refine our messaging and promotion around the Knowledge Portal. We also leveraged different data and knowledge management stakeholders to champion our cause and help articulate to their colleagues the value that the Knowledge Portal brought to their team work. Ultimately, we need to demonstrate how the Knowledge Portal is not only aligned to the overall organization’s strategic objectives, but how it will make each individual user’s life easier. 

 

Closing

The development of a Knowledge Portal is quite the endeavor. It requires the effort of a multidisciplinary team to get all of its components working in harmony and aligned to tangible business objectives. If you are considering implementing a Knowledge Portal in your organization, please contact us! We have quite a few more lessons to share. 

When a Knowledge Portal Becomes a Learning and Performance Portal
https://enterprise-knowledge.com/when-a-knowledge-portal-becomes-a-learning-and-performance-portal/

EK’s CEO, Zach Wahl, previously published Knowledge Portals Revisited, a blog that spells out an integrated suite of systems that actually puts users’ needs at the center of a knowledge management solution. We’ve long acknowledged that content may need to live in specialized repositories all across the enterprise, but we finally have a solution that gives users one place to search for and discover meaningfully contextualized knowledge within one portal.

Knowledge Portals, as described in Wahl’s blog, integrate data and information from multiple sources so that organizations can more efficiently generate insights and make data-driven decisions. However, if our goal is not just to enable knowledge insights but also to improve learning and performance, there are some additional design imperatives:

  • Findability of content by task or competency
  • Focus on the actions which enable learning
  • Measurement of learning and performance

 

Findability of Content for Learning and Performance

When designing a Knowledge Portal, one of the key considerations is how content is organized to optimize findability. Recently, the EK team designed and developed a few Knowledge Portals whose information architecture and metadata strategies centered on business-specific concepts:

  • For a global investment firm, the key organizing principle was deals and investments. Employees of the organization needed to see all of the data and information about a particular deal in one place so they could spot trends and analyze relationships between data points more effectively.
  • For a manufacturing company, the key organizing principle was products and solutions. The Knowledge Portal needed to dynamically aggregate all of the information about a product, from the technical specifications to the customer success stories all in one place. That place became one dynamically assembled page for each product.

A Knowledge Portal gives you the ability to see all of the diverse knowledge assets in context. But developing the skills and abilities to apply and solve complex problems using those knowledge assets – that requires dedicated learning and performance improvement strategies. When the goal of the Portal is not knowledge but instead improving learning and performance, the organizing principles change from business concepts to competencies or tasks.

  • If the primary driver is to improve learning, the organizing principle becomes competencies. For example, Indeed.com identifies eighteen essential sales professional competencies. These competencies, such as upselling, negotiation, and product knowledge, would serve as an excellent navigational structure and Primary Metadata Fields for organizing a learning-focused portal for the organization.
  • If the primary driver is to improve performance, the organizing principle becomes tasks. Put yourself in the shoes of a busy sales professional trying to complete a simple task, such as preparing and delivering a product demo, unsure of the correct process. The sales professional needs to quickly find the performance support and supporting knowledge necessary to complete their task – they don’t want to wade through a competency-based navigational structure about higher-level concepts like upselling. Instead, they seek a system that allows them to find action-oriented, task-focused information at the point of need.

 

The technical applications of these concepts manifest in a few ways. In its simplest form, the key organizing principle informs the navigation menu. Competencies or tasks can also serve as the top level of a hierarchical taxonomy, enabling users to filter search results by competency or task.

If we want to get beyond search and navigation and start to use AI to automate recommendations of content, we can build an ontological data model as the foundation of this functionality. In this instance, the key organizing principle must become central to our ontology. This can often be achieved by having many entities of a particular category or class. For example, in a Learning Portal, there would be more competency entities than entities of any other category or class. In a Performance Portal, we would emphasize task entities in the ontology design.
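A toy version of such an ontology might look like the following (Python with rdflib; the competencies, content items, and `teaches` relationship are invented for illustration). Note how Competency entities outnumber everything else, and how a content recommendation is simply a graph traversal:

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/learning/")
g = Graph()

# Many competency entities: the central class of a Learning Portal ontology
for comp in ("Upselling", "Negotiation", "ProductKnowledge"):
    g.add((EX[comp], RDF.type, EX.Competency))

# Content is linked to the competency it develops
g.add((EX.NegotiationVideo, EX.teaches, EX.Negotiation))
g.add((EX.ObjectionHandlingCourse, EX.teaches, EX.Negotiation))

def recommend_content(competency):
    # Recommending content for a competency is a one-hop traversal
    return [s for s, _, _ in g.triples((None, EX.teaches, competency))]

print(recommend_content(EX.Negotiation))
```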

Actions to Enable Learning and Performance

Findability of knowledge assets solves the problem of access to knowledge, but to develop skills and abilities, users must invest a bit more effort. Through active engagement with new information and content, individuals can enhance their understanding, retention, and application of knowledge. A Learning and Performance Portal builds on the foundation of a Knowledge Portal by not only aggregating information but also incorporating features that encourage active engagement and interaction.

A typical level of engagement and interaction might be reviewing a piece of learning content (reading a summary of a process and or watching a video about it) and then answering a question. E-Learning courses often handle this through multiple choice, true/false, or matching questions. These types of assessment questions add multiple benefits at once – we collect formative or summative data about learner performance, and we also provide the learner an opportunity to pause and reflect on what they just read. Instructional designers have a lot of tricks up their sleeves to promote interaction and reflection, including branching scenarios and gamification. These types of dynamic interactions must be incorporated into our Learning and Performance Portals rather than simply enabling users to see all of the information in one place.

Another way we can promote dynamic interaction and reflection with new ideas is by enabling interactions with real people. When our Learning and Performance Portals aggregate all of the information about a competency or task in one place, we should include relevant Communities of Practice (CoPs) and contact information for Subject Matter Experts (SMEs). What better way is there to actively engage with new concepts than by asking questions and engaging in dialogue? What better way to learn a new task or process than by helping to collaboratively improve it?

Measuring Learning and Performance

In a Knowledge Portal, typical data points collected include the number of unique visitors to the portal, which pages they’re spending time on, which pages have high bounce rates, and what terms users are frequently searching for. This data helps us understand the performance of the content and the portal itself. But in a Learning and Performance Portal, we need to understand the specific learning content users were exposed to and, ideally, data that indicates mastery of concepts and/or successful performance of tasks.

In compliance-focused situations, data must be able to confirm that a specific employee fulfilled the requirement of accessing the correct information, serving the purpose of liability avoidance. Data indicating that the employee completed a course or watched a video suffices in these cases. EK developed a Learning and Performance Portal for a client that captured Experience API (xAPI) activity statements for each individual, tracking not only whether they started an instructional video but whether they watched it all the way to the end. We were able to generate reports showing which users never opened the video, opened it but never pressed play, played a portion of it, or watched it to completion.
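For readers unfamiliar with xAPI, an activity statement is simply a JSON record of who did what to which object. Below is a minimal sketch of the kind of video-playback statement an LRS might store; the actor and object identifiers are hypothetical, while the verb and result extension follow the published xAPI video profile.

```python
# A minimal sketch of an xAPI activity statement for video tracking.
# The actor and object IDs are hypothetical; the verb and the time
# extension come from the xAPI video profile.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "A. Learner"},
    "verb": {
        "id": "https://w3id.org/xapi/video/verbs/played",
        "display": {"en-US": "played"},
    },
    "object": {
        "id": "https://example.org/videos/product-demo-intro",
        "definition": {"name": {"en-US": "Product Demo Intro"}},
    },
    "result": {
        # Playback position in seconds, so reports can distinguish a user
        # who never pressed play from one who watched to the end.
        "extensions": {"https://w3id.org/xapi/video/extensions/time": 84.2}
    },
}

print(json.dumps(statement, indent=2))
```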

While compliance remains an important requirement, utilizing Learning and Performance Portals enables organizations to go beyond simply checking off completion. They allow for a more comprehensive assessment of learners’ knowledge and skills, providing a more holistic view of their learning journey and progress. Depending on the learning strategy employed, Learning and Performance Portals can capture additional data beyond mere completion status. This may include tracking whether learners correctly answered formative or summative assessment questions and the number of attempts it took them to do so. Furthermore, the portals can record any earned badges or certifications as a result of completing specific learning activities. xAPI activity statements can be used to track whether or not a learner connected with an SME, joined a CoP, chatted with a mentor, or performed well in a multiplayer game.

By capturing this data within Learning and Performance Portals, organizations can gain insights into learners’ proficiency levels, their progress in mastering specific topics, and their overall engagement with the learning materials. This data can be valuable for assessing the effectiveness of the learning programs, identifying areas where additional support or resources may be required, and finding and recognizing individuals who have demonstrated competence through job performance.

Summary

A modern learning ecosystem requires a diverse body of learning content, from eLearning courses and webinars to performance support and communities of practice. We often need multiple systems to best enable this diverse learning content. A Learning and Performance Portal can provide a single entry point for learners, so they can find everything they need to develop a new competency or perform a task in one place. Further, this learning content is automatically aggregated, removing a manual content maintenance burden from your instructional designers and trainers. If you could use some support in the design and development of a Learning and Performance Portal, Enterprise Knowledge can help.

The post When a Knowledge Portal Becomes a Learning and Performance Portal appeared first on Enterprise Knowledge.

EK at Ten: Looking Back on Our Greatest Knowledge Management Solutions https://enterprise-knowledge.com/ek-at-ten-looking-back-on-our-greatest-knowledge-management-solutions/ Wed, 19 Jul 2023 15:14:56 +0000 https://enterprise-knowledge.com/?p=18383 We recently celebrated the ten year anniversary of Enterprise Knowledge (EK). As founders, this was very special for both Zach and I. I’ve been reflecting on the amazing people, clients, and projects we have been involved with and pondering all … Continue reading

The post EK at Ten: Looking Back on Our Greatest Knowledge Management Solutions appeared first on Enterprise Knowledge.


We recently celebrated the ten-year anniversary of Enterprise Knowledge (EK). As founders, this was very special for both Zach and me. I've been reflecting on the amazing people, clients, and projects we have been involved with and pondering all we've accomplished in a short ten years. I could spend hours talking about the incredible people who work at, and have worked at, EK (the EKers), and nearly as long talking about our wonderful clients. These EKers and our clients have partnered to create a number of cutting-edge and impactful solutions, many of which offer great insight into the possibilities and future of knowledge management. In the spirit of Knowledge Sharing, I'd like to share some of these solutions with you.


Multichannel Customized Content Management

I always dreamed of building a content management system (CMS) that looked and worked like Google Docs for the authors. This tool would also componentize content automatically and enable authors to publish content in multiple formats. About four years ago, we got a chance to build this dream solution. 

Our client needed to make sure that critical information gathered at all hours of the day could be collaboratively captured and shared with key employees around the world. In addition, they wanted to store the information so that they could analyze trends about each of the alerts in their publications. For example, they might want to see all alerts about a specific subject so that they could identify problems or patterns of behavior that might be missed over time. Groups of people authored this content, so a collaborative editing solution like Google Docs was crucial.

The EK team developed a content management solution using a combination of open-source products, including Drupal, ProseMirror, MongoDB, and MySQL. We used Drupal as the CMS for our clients to administer users, manage taxonomies, and add and publish content. ProseMirror provided the collaborative editing library that allowed multiple users to edit a document at the same time. Each piece of content looked like a regular Word document with a title, headings, and bulleted lists; this was done without fields or forms to give the authors the feel of actually working in MS Word. Behind the scenes, the content was saved as JSON in MongoDB so that the title, headers, bulleted items, and any metadata were stored separately and could be queried as needed. Finally, we developed a custom publishing process that combined the content into different formats and delivered it across 10 different channels.
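As a rough illustration of why the JSON storage mattered, here is a minimal sketch using pymongo; the collection schema and field names are hypothetical stand-ins for the client's actual data model.

```python
# A minimal sketch of storing componentized content as JSON so individual
# components and metadata can be queried. Schema and names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
alerts = client["cms"]["alerts"]

# Each document stores the title, body components, and metadata separately.
alerts.insert_one({
    "title": "Port Disruption Alert",
    "components": [
        {"type": "heading", "text": "Summary"},
        {"type": "bullet", "text": "Terminal closed until 06:00 UTC"},
    ],
    "metadata": {"subject": "shipping", "region": "EMEA"},
})

# Query across all alerts on a subject to surface trends over time.
for alert in alerts.find({"metadata.subject": "shipping"}):
    print(alert["title"])
```

Because each heading, bullet, and metadata field is stored as its own queryable element, the same content can be recombined into any of the downstream publishing formats.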

The Result

Our client now has a tool that allows their teams to collaboratively develop content immediately and deliver it to people around the world in seconds. In addition, they have a repository of the structured content they have collected that can be queried to identify trends or related information.

AI for the Enterprise

This next solution was developed six years ago and is still in use today. It was one of the first Knowledge Graph solutions that we developed for an entire organization, cementing us as a global leader in designing and implementing Knowledge Graphs at scale.

Our client invests in and manages projects all over the world. As such, they have experts in different countries and different types of projects. These experts found that they were regularly being tripped up by the things they did not know, and they needed a tool that would proactively provide news about the country they were planning a project in, or updates on solutions similar to the one(s) they were discussing. A standard search solution would not solve this problem, because these experts often did not know that they even needed to search for information. Our client needed a solution that pushed the right information to people at the right time, even if they did not ask for it directly.

Our solution to their problem was a semantic hub based on a Knowledge Graph. This Knowledge Graph stored metadata about news and projects from over 12 different systems, and captured information about the areas of interest or responsibilities of all of their employees. The semantic hub used this information to automatically push content to people based on what they needed to know.

The first implementation of the semantic hub was integrated with Microsoft Exchange. As people scheduled meetings, they could add the semantic hub (or "brain") as an invitee. The semantic hub read the meeting invitation and its attendee list to select content relevant to the topic being discussed, which was then sent to the meeting organizer in advance of the meeting so that they were prepared with the latest information about the country or solution under discussion.
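The flow is easier to see in code. Here is a minimal sketch in Python; the in-memory indexes and string matching are hypothetical simplifications, since the real hub resolved topics and interests against the Knowledge Graph.

```python
# A minimal sketch of meeting-driven content recommendation. The data
# structures and matching logic are hypothetical simplifications of the
# semantic hub, which queried a Knowledge Graph instead of dictionaries.

# Graph-derived indexes: concept -> content, and person -> interests.
CONTENT_BY_CONCEPT = {
    "kenya": ["Kenya infrastructure news digest"],
    "solar": ["Solar project lessons learned", "Solar financing update"],
}
INTERESTS_BY_PERSON = {"jsmith": ["solar"]}

def recommend_for_meeting(subject: str, attendees: list[str]) -> list[str]:
    # Match meeting text against known concepts...
    topics = {c for c in CONTENT_BY_CONCEPT if c in subject.lower()}
    # ...then add each attendee's areas of interest or responsibility.
    for person in attendees:
        topics.update(INTERESTS_BY_PERSON.get(person, []))
    # Collect every piece of content tagged with a matched concept.
    return [item for topic in topics for item in CONTENT_BY_CONCEPT[topic]]

# The result is emailed to the organizer before the meeting starts.
print(recommend_for_meeting("Kenya solar deal review", ["jsmith"]))
```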

This recommendation service was then used to power a chatbot, enhance search, and provide regular mailings on topics of interest to employees within the organization. This is a key point: a well-designed Knowledge Graph can be applied again and again to power different solutions.

The Result

The Semantic Hub gave our client a way to push the right information to people at the right time. As a result, our client’s people are more prepared and productive in their conversations with one another and with the people their organization works with.

A True Data Fabric

The next story is a more recent one, completed in the last few years. What I like about this one is that it solves some of the most challenging issues in business intelligence. KM has historically focused on documents and other unstructured or semi-structured information; however, structured data, often stored in data warehouses and data lakes, is just as important a source of knowledge for modern workers.

Our client had a 300-petabyte data lake with data sets from across 10 different business units. The data was available, but the lake was so large that business users struggled to find the information they needed, and there were constant challenges with data quality. The size and complexity of the repository meant that a data catalog alone could not solve these problems. We worked with our client to design an ontology for the key information assets in the data lake, and instantiated it as a Knowledge Graph (a data fabric) that mapped the critical entities of the organization to the data elements in each data set across the lake.

For example, if business users wanted to know when a customer started doing business with them, they had to search across each of the data sets representing each type of account that was opened. The name of the data element and the structure of each of these open dates were all different; this is a common problem that many of our data clients deal with. The Knowledge Graph solved it by mapping the entity Account Open Date to each of the tables and fields where that information was stored. Business users could now ask the Knowledge Graph questions about the account open date, and it would query all of the relevant tables automatically. As tables were added or changed, the mapping was updated, and users could keep asking business questions without having to know the structure of the data.
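The mechanism can be sketched in a few lines. In the simplified example below, the table and column names are hypothetical, and a real data fabric holds these mappings in the graph itself rather than in application code.

```python
# A minimal sketch of entity-to-field mapping in a data fabric. Table and
# column names are hypothetical; real mappings live in the Knowledge Graph.

# The business entity "Account Open Date" appears under different names
# and structures in each source data set.
MAPPINGS = {
    "account_open_date": [
        ("checking_accounts", "opened_on"),
        ("brokerage_accounts", "open_dt"),
        ("loan_accounts", "origination_date"),
    ],
}

def query_for_entity(entity: str, customer_col: str = "customer_id") -> str:
    """Generate one SQL query spanning every table that holds the entity."""
    selects = [
        f"SELECT {customer_col}, {column} AS {entity} FROM {table}"
        for table, column in MAPPINGS[entity]
    ]
    return "\nUNION ALL\n".join(selects)

# A business user asks about account open dates; the mapping layer
# writes the SQL across every relevant table automatically.
print(query_for_entity("account_open_date"))
```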

The amazing thing about this solution is not the technology we developed: the idea of using a Knowledge Graph as the middle layer between users and the data they need is one that Gartner has been pushing for a while. The amazing part of this solution is the impact it had on both business and IT. Business users could ask for information in a sensible fashion without having to rely on complex SQL statements or any deep knowledge of the data sets that they were querying.

The Result

Our client developed an ontology that not only maps key data entities to all of the data in their data lake, but also ensures that new data meets predetermined quality standards. As new data sets are submitted to the data lake, the data elements are mapped to the appropriate entities and the data is checked for compliance with the expected data values defined in the graph. Our client now has a higher-quality and more accessible data lake for their business users.

Knowledge Portal (A.K.A. Enterprise 360)

The ability to pull data of any structure from multiple systems and display it in a single screen has been a goal of every forward-thinking CIO that I have ever worked with. Knowledge Graphs finally give us the ability to do this by mapping information assets from across the enterprise to the entities that matter to the business. Customer 360 has been a hot term for a few years now, but Knowledge Portals, powered by graphs, allow business users to see more than just customers: they can see consolidated data about their people, products, projects, and anything of importance to running the business. We recently implemented a Knowledge Portal that serves as an Enterprise 360 solution for a large sovereign wealth fund.

Information about the deals our client was investigating was spread across 8 different systems, and investment information was spread across 12. They also had very little ability to see information about the deals and investments their employees worked on, the companies they worked with, the people in those companies, and the banks they worked alongside. We proposed a Knowledge Portal and have been implementing it for just over a year now.

The graph behind the portal maps key data to the most important pieces of information wherever that information resides. This information is then pulled together as part of a nightly feed, or dynamically, so that their employees can see all of the most important information about their deals and investments in a single application. We are working on future releases that will provide information about each employee and the deals and investments that they worked on, as well as information about the banks, companies, and executives that deals and investments were made with. When completed, our client will have a single place to see the information they need for all of their most important decisions: an Enterprise 360 knowledge system.

The Result

Our client has called this new system their most important investment in KM. They have rolled it out to all of the investors in their organization and are now providing rewards to the people who share the most information with one another. Our client has also started talking to others about the value this Knowledge Portal presents for all of their employees. This is a KM solution that impacts every area of their business, allowing people to make more informed decisions.

Enterprise-Wide Learning Ecosystem

The way in which people work and engage with their employers has changed a lot in the past few years. Between the hybrid working model and the impacts of the Great Resignation, companies are recognizing the critical importance of training. An effective learning strategy involves more than a series of courses, either taken one time or annually as part of a certification. The best organizations invest in learning solutions that provide multiple methods for learning, delivering knowledge and performance support at the point of need. We had a chance to develop one of these next-generation learning portals for one of our clients.

Our client was a federal agency with people working in remote areas around the country. Their employees are highly independent and passionate about their work, and our client needed a better way to disseminate information, improve collaboration, and lower the costs associated with in-person training. We designed, developed, and helped roll out a new kind of learning portal. This portal offered many different ways to learn, including:

  • Structured learning in the form of online classes and links to in-person classes;
  • Access to experts;
  • Job aids, instructional videos, and interactive learning activities; and
  • Communities of practice and discussion forums.

The new learning portal was built using common open-source tools like WordPress and Elasticsearch. The portal enables employees to learn from the organization and their peers through a front page with a Google-like search bar, making the act of finding information quick and easy. Users also have the option to navigate through topics of interest or discussion forums where employees can share thoughts with one another. We developed taxonomies to categorize learning content and make it more intuitively findable at the point of need.
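To illustrate how taxonomy and search work together, here is a minimal sketch using the Elasticsearch Python client (v8-style API); the index name, field names, and taxonomy values are hypothetical.

```python
# A minimal sketch of taxonomy-filtered search. Index, fields, and
# taxonomy values are hypothetical; assumes a local Elasticsearch node.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="learning-content",
    query={
        "bool": {
            # Free-text match from the Google-like search bar...
            "must": {"match": {"body": "product demo"}},
            # ...filtered by a taxonomy term the user selected as a facet.
            "filter": {"term": {"taxonomy.task": "deliver-product-demo"}},
        }
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_source"]["title"])
```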

In addition to making learning content more findable, the learning portal's features enabled a modernized learning strategy, blending multiple facets of an andragogically sound learning and performance ecosystem. The discussion forums included gamification features such as user ratings of documents and content shared in the communities. Users who frequently shared highly rated content (as evaluated by their peers) earned badges, which were incorporated into the user interface of the learning portal, rewarding users for their contributions.

The agency's learning technology stack already included a Learning Management System (LMS), and the existing structured learning it held was made findable within the context of the additional, diverse learning content. The learning portal itself, however, became the content repository for informal and social learning resources such as job aids and communities of practice. To track informal learning activity, such as learners commenting in communities of practice or watching instructional videos, the learning portal also included a Learning Record Store (LRS) to store xAPI activity statements. This enabled powerful dashboards so that trainers and learners alike could evaluate the impact of informal and social learning on workplace performance.

The Result

This learning portal gave our client a new way to share information across their entire workforce. It was an immediate success, with users actively engaging in the portal and in the discussion forums, where they could share their experiences with one another. This new tool lowered the overall cost of training by prioritizing in-person structured learning for the most critical workplace tasks and enabling asynchronous learning options for performance support of less business-critical tasks. It was so successful that our client decided to make parts of the portal public so that volunteers and partner agencies could also access the learning content developed and shared by their employees.

Conclusion

Writing this article reminds me how much KM has changed over the years. By connecting the historically disparate fields of KM, information management, data management, and IT, we are now able to use cutting-edge technology solutions to break down information silos and deliver the right information to the right people at the right time. The impact of these systems is greater than ever before: these types of solutions are moving KM from a small back-office service to an enterprise capability that allows our clients to be leaders in their industries. With new advancements in AI like ChatGPT and enhanced machine learning, I am more excited than ever to see what our team and our clients will come up with as the next generation of KM enablers.

The post EK at Ten: Looking Back on Our Greatest Knowledge Management Solutions appeared first on Enterprise Knowledge.
