knowledge graphs Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/knowledge-graphs/

Semantic Layer Symposium 2025: Knowledge Graphs Panel — The Rising Star of the Knowledge Management Toolkit
https://enterprise-knowledge.com/semantic-layer-symposium-2025-knowledge-graphs-panel-the-rising-star-of-the-knowledge-management-toolkit/ (Tue, 18 Nov 2025)

In October of this year, Enterprise Knowledge held our annual Semantic Layer Symposium (SLS) in Copenhagen, Denmark, bringing together industry thought leaders, data experts, and practitioners to explore the transformative potential, and reflect on the successful implementation, of semantic layers. With a focus on practical applications, real-world use cases, actionable strategies, and proven paths to delivering measurable value, the symposium provided attendees with tangible insights they can apply within their organizations.

We’re excited to continue to release these discussions for viewing: next up, a panel moderated by Barry Byrne of Novartis, featuring Kurt Kragh Sørensen (Novartis), Daan Hannessen (Shell), and Sara Mae O’Brien-Scott (Enterprise Knowledge). And check out Daan’s pre-SLS Knowledge Cast episode!

Panel – Knowledge Graphs: The Rising Star of the Knowledge Management Toolkit

Panel Moderator: Barry Byrne (Novartis)
Panelists: Kurt Kragh Sørensen (Novartis), Daan Hannessen (Shell), and Sara Mae O’Brien-Scott (Enterprise Knowledge)

Leading organizations are increasingly turning to knowledge graphs to connect information, enable intelligent discovery, and unlock new business value. In this panel, world-class practitioners share real stories of how they have implemented knowledge graphs as part of their knowledge management strategies. Expect practical lessons, proven approaches, and insights into why graphs are quickly becoming an essential part of the enterprise toolkit.

How KM Leverages Semantics for AI Success
https://enterprise-knowledge.com/how-km-leverages-semantics-for-ai-success/ (Wed, 03 Sep 2025)

To get the most out of Large Language Model (LLM)-driven AI solutions, you need to provide them with structured, context-rich knowledge that is unique to your organization. Without purposeful access to proprietary terminology, clearly articulated business logic, and consistent interpretation of enterprise-wide data, LLMs risk delivering incomplete or misleading insights. This infographic highlights how KM incorporates semantic technologies and practices across scenarios to enhance AI capabilities, and when they’re foundational — empowering your organization to strategically leverage semantics for more accurate, actionable outcomes while cultivating sound knowledge intelligence practices and investing in your enterprise’s knowledge assets.

Use Case: Expert Elicitation - Semantics used for AI Enhancement
Efficiently capture valuable knowledge and insights from your organization’s experts about past experiences and lessons learned, especially when these insights have not yet been formally documented. By using ontologies to spot knowledge gaps and taxonomies to clarify terms, an LLM can capture and structure undocumented expertise—storing it in a knowledge graph for future reuse.
Example: Capturing a senior engineer’s undocumented insights on troubleshooting past system failures to streamline future maintenance.

Use Case: Discovery & Extraction - Semantics used for AI Enhancement
Quickly locate key insights or important details within a large collection of documents and data, synthesizing them into meaningful, actionable summaries, and delivering these directly back to the user. Ontologies ensure concepts are recognized and linked consistently across wording and format, enabling insights to be connected, reused, and verified outside an LLM’s opaque reasoning process.
Example: Scanning thousands of supplier agreements to locate variations of key contract clauses—despite inconsistent wording—then compiling a cross-referenced summary for auditors to accelerate compliance verification and identify high-risk deviations.

Use Case: Context Aggregation - Semantics for AI Foundations
Gather fragmented information from diverse sources and combine it into a unified, comprehensive view of your business processes or critical concepts, enabling deeper analysis, more informed decisions, and previously unattainable insights. Knowledge graphs unify fragmented information from multiple sources into a persistent, coherent model that both humans and systems can navigate. Ontologies make relationships explicit, enabling the inference of new knowledge that reveals connections and patterns not visible in isolated data.
Example: Integrating financial, operational, HR, and customer support data to predict resource needs and reveal links between staffing, service quality, and customer retention for smarter planning.

Use Case: Cleanup and Optimization - Semantics used for AI Enhancement
Analyze and optimize your organization’s knowledge base by detecting redundant, outdated, or trivial (ROT) content—then recommend targeted actions or automatically archive and remove irrelevant material to keep information fresh, accurate, and valuable. Leverage taxonomies and ontologies to recognize conceptually related information even when expressed in different terms, formats, or contexts, allowing the AI to uncover hidden redundancies, spot emerging patterns, and make more precise recommendations than could be justified by keyword or RAG search alone.
Example: Automatically detecting and flagging outdated or duplicative policy documents—despite inconsistent titles or formats—across an entire intranet, streamlining reviews and ensuring only current, authoritative content remains accessible.

Use Case: Situated Insight - Semantics used for AI Enhancement
Proactively deliver targeted answers and actionable suggestions uniquely aligned with each user’s expressed preferences, behaviors, and needs, enabling swift, confident decision-making. Use taxonomies to standardize and reconcile data from diverse systems, and apply knowledge graphs to connect and contextualize a user’s preferences, behaviors, and history, creating a unified, dynamic profile that drives precise, timely, and highly relevant recommendations.
Example: Instantly curating a personalized learning path (complete with recommended modules, mentors, and practice projects) based on an employee’s recent performance trends, skill gaps, and long-term career goals, accelerating both individual growth and organizational capability.

Use Case: Context Mediation and Resolution - Semantics for AI Foundations
Bridge disparate contexts across people, processes, technologies, and more into a common, resolved, machine-readable understanding that preserves nuance while eliminating ambiguity. Semantics establish a shared, machine-readable understanding that bridges differences in language, structure, and context across people, processes, and systems. Taxonomies unify terminology from diverse sources, while ontologies and knowledge graphs capture and clarify the nuanced relationships between concepts—eliminating ambiguity without losing critical detail.
Example: Reconciling varying medical terminologies, abbreviations, and coding systems from multiple healthcare providers into a single, consistent patient record—ensuring that every clinician sees the same unambiguous history, enabling faster diagnosis, safer treatment decisions, and more effective care coordination.

To learn more about our work with AI and semantics and how it can help your organization make the most out of these investments, don’t hesitate to reach out at https://enterprise-knowledge.com/contact-us/.
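
As a small illustration of the Expert Elicitation pattern above, the Python sketch below stores a captured expert insight as knowledge graph triples using the open-source rdflib library. The `ex:` ontology namespace, class, and property names are hypothetical stand-ins for whatever model your organization defines, so treat this as a sketch of the idea rather than a prescribed implementation.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

# Hypothetical ontology namespace; a real project would use its own model.
EX = Namespace("https://example.com/ontology/")

g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# An insight elicited from an expert (e.g., via an LLM-guided interview),
# now made explicit and machine-readable.
insight = EX["insight/pump-cavitation-checklist"]
expert = EX["person/senior-engineer-01"]
topic = EX["concept/pump-cavitation"]  # taxonomy term that clarifies wording

g.add((insight, RDF.type, EX.LessonLearned))
g.add((insight, EX.capturedFrom, expert))
g.add((insight, EX.aboutConcept, topic))
g.add((topic, SKOS.prefLabel, Literal("Pump cavitation", lang="en")))
g.add((insight, EX.summary, Literal(
    "Check suction-side pressure before replacing impellers; "
    "cavitation noise is often a symptom, not the root cause.")))

# The structured insight is now queryable alongside the rest of the graph.
print(g.serialize(format="turtle"))
```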

Scaling Knowledge Graph Architectures with AI
https://enterprise-knowledge.com/scaling-knowledge-graph-architectures-with-ai/ (Thu, 30 Nov 2023)

Sara Nash and Urmi Majumder, Principal Consultants at Enterprise Knowledge, presented “Scaling Knowledge Graph Architectures with AI” on November 9th, 2023 at KM World in Washington D.C. In this presentation, Nash and Majumder defined a Knowledge Graph architecture and reviewed how AI can support the creation and growth of Knowledge Graphs. Drawing from their experience in designing enterprise Knowledge Graphs based on knowledge embedded in unstructured content, Nash and Majumder defined approaches for entity and relationship extraction depending on Enterprise AI maturity and highlighted other key considerations to incorporate AI capabilities into the development of a Knowledge Graph. Check out the presentation below to learn how to: 

  • Assess entity and relationship extraction readiness according to EK’s Extraction Maturity Spectrum and Relationship Extraction Maturity Spectrum.
  • Utilize knowledge extraction from content to translate important insights into organizational data.
  • Extract knowledge with three approaches (a minimal sketch of the RegEx approach appears after this list):
    • RegEx Rule
    • Auto-Classification Rule
    • Custom ML Model
  • Examine key factors such as how to leverage SMEs, iterate AI processes, define use cases, and invest in establishing robust AI models.
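
To make the first approach concrete, below is a minimal, hypothetical sketch of a RegEx rule in Python: a pattern captures project codes mentioned in unstructured text, and each match becomes a candidate entity that could be loaded into a knowledge graph. The pattern and entity type are illustrative assumptions, not the actual rules from the presentation.

```python
import re

# Hypothetical RegEx rule: project codes look like "PRJ-1234".
PROJECT_CODE_RULE = re.compile(r"\bPRJ-\d{4}\b")

def extract_project_codes(text: str) -> list[dict]:
    """Apply the RegEx rule and emit candidate entities for the graph."""
    return [
        {"type": "Project", "id": code, "source": "regex-rule"}
        for code in sorted(set(PROJECT_CODE_RULE.findall(text)))
    ]

doc = "The redesign (PRJ-1042) reused lessons learned from PRJ-0987."
print(extract_project_codes(doc))
# [{'type': 'Project', 'id': 'PRJ-0987', ...}, {'type': 'Project', 'id': 'PRJ-1042', ...}]
```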

Knowledge Management Trends in 2023
https://enterprise-knowledge.com/knowledge-management-trends-in-2023/ (Tue, 24 Jan 2023)

As CEO of the world’s largest Knowledge Management consulting company, I am fortunate to possess a unique view of KM trends. For each of the last several years, I’ve written an annual list of these KM trends, and looking back, I’m pleased to have (mostly) been on point, having successfully identified such KM trends as Knowledge Graphs, the confluence of KM and Learning, the increasing focus on KM Return on Investment (ROI), and the use of KM as the foundation for Artificial Intelligence.

Every year in order to develop this list, I engage EK’s KM consultants and thought leaders to help me identify what trends merit inclusion. We consider factors including themes in requests for proposals and requests for information; the strategic plans and budgets of global organizations; priorities for KM transformations; internal organizational surveys; interviews with KM practitioners, organizational executives, and business stakeholders; themes from the world’s KM conferences and publications; interviews with fellow KM consultancies and KM software leaders; and the product roadmaps for leading KM technology vendors.

The following are the seven KM trends for 2023:

KM at a Crossroads – The last several years have seen a great deal of attention and funding for KM initiatives. Both the pandemic and the great resignation caused executives to realize their historical lack of focus on KM resulted in knowledge loss, unhappy employees, and an inability to efficiently upskill new hires. At the same time, knowledge graphs matured to the point where KM systems could offer further customization and the ability to integrate multiple types of content from disparate systems more easily.

In 2023, much of the world is bracing for a recession, with the United States and Europe likely to experience a major hit. Large organizations have been preparing for this already, with many proactively reducing their workforce and cutting costs. Historically, organizations have drastically reduced KM programs, or even cut them out entirely, during times of economic stress. In 2008-2009, for instance, organizational KM spending was gutted, and many in-house KM practitioners were laid off.

I anticipate many organizations will do the same this year, but far fewer than in past recessions. The organizations that learned their lessons from the pandemic and staffing shortages will continue to invest in KM, recognizing the critical business value offered. KM programs are much more visible and business critical than they were a decade ago, thanks to maturation in KM practices and technologies. Knowledge Management programs can deliver business resiliency and competitive advantage, ensure that knowledge is retained in the organization, and enable employee and customer satisfaction and resulting retention. The executives that recognize this will continue their investments in KM, perhaps scaled down or more tightly managed, but continued nonetheless. 

Less mature organizations, on the other hand, will repeat the same mistakes of the past, cutting KM, and with it, walking knowledge out the door, stifling innovation, and compounding retention issues, all for minimal and short-term savings. This KM trend, put simply, will be the divergence between organizations that compound their existing issues by cutting KM programs and those that keep calm and KM on.

Focus on Business Value and ROI – Keying off the previous trend, and revisiting a trend I’ve identified in past years, 2023 will bring a major need to quantify the value of KM. In growth years when economies are booming, we’ve typically seen a greater willingness for organizations to invest in KM efforts. This year, there will be a strong demand to prove the business value of KM.

For KM practitioners, this means being able to measure business outcomes instead of just KM outcomes. Examples of KM outcomes are improved findability and discoverability of content, increased use and reuse of information, decreased knowledge loss, and improved organizational awareness and alignment. All of these things are valuable, as no CEO would say they don’t want them for their organization, and yet none of them are easily quantifiable and measurable in terms of ROI. Business outcomes, on the other hand, can be tied to meaningful and measurable savings, decreased costs, or improved revenues. Business outcomes resulting from KM transformations can include decreased storage and software license costs, improved employee and customer retention, faster and more effective employee upskilling, and improved sales and delivery. The KM programs that communicate value in terms of these and other business outcomes will be those that thrive this year.

This KM trend is a good one for the industry, as it will require that we put the benefits to the organization and end users at the center of any decision.

Knowledge Portals – Much to the surprise, if not disbelief, of many last year, I predicted that portals would make a comeback from their heyday in the early 2000’s. The past year validated this prediction, with more organizations making multi-year and multi-million dollar investments in KM transformations with a Knowledge Portal (or KM Portal) at the center of the effort. As I wrote about recently, both the critical awareness of KM practices as well as the technology necessary to make a Knowledge Portal work have come a long way in the last twenty years. Steered further by the aforementioned drivers of remote work and the great resignation, organizations are now implementing Knowledge Portals at the enterprise level.

The use cases for Knowledge Portals vary, with some treating the system as an intranet or knowledge base, others using it as a hub for learning or sales, and still others using it more for tacit knowledge capture and collaboration. Regardless of the use cases, what makes these Knowledge Portals really work is the usage of Knowledge Graphs. Knowledge Graphs can link information assets from multiple applications and display them on a single screen without complicated and inflexible interface development. CIOs now have a way to do context-driven integration, and business units can now see all of the key information about their most critical assets in a single location. What this means is that Knowledge Portals can now solve the problem of application information silos, enabling an organization to collectively understand everything its people need to know about its most important knowledge assets.

Context-Driven KM – We’ve all heard the phrase, “Content is King,” but in today’s KM systems, Context is the new reigning monarch. The new trend in advanced knowledge systems is for them to be built not just around information architecture and content quality, but around knowledge graphs that provide a knowledge map of the organization. A business model and knowledge map expressed as an ontology delivers a flexible, expandable means of relating all of an organization’s knowledge assets, in context, and revealing them to users in a highly intuitive, customized manner. Put simply, this means that any given user can find what they’re looking for and discover that which they didn’t even know existed in ways that feel natural. Our own minds work in the same way as this technology, relating different memories, experiences, and thoughts. A system that can deliver on this same approach means an organization can finally harness the full breadth of information they possess across all of their locations, systems, and people for the purposes of collaboration, learning, efficiency, and discovery. Essentially, it’s what everyone has always wanted out of their information systems, and now it’s a reality.

Data Firmly in KM – Historically, most organizations have drawn a hard line between unstructured and structured information, managing them under different groups, in different systems, with different rules and governance structures. As the thinking around KM continues to expand, and KM systems continue to mature, this dichotomy will increasingly be a thing of the past. The most mature organizations today are looking at any piece of information, structured or unstructured, physical or digital, as a knowledge asset that can be connected and contextualized like any other. This includes people and their expertise, products, places, and projects. The broadening spectrum of KM is being driven by knowledge graphs and their expanding use cases, but it also means that topics like data governance, metadata hubs, data fabric, data mesh, data science, and artificial intelligence are entering the KM conversation. In short, the days of arguing that an organization’s data is outside the realm of a KM transformation are over.

Push Over Pull – When considering KM systems and technology, the vast majority of the discussion has centered around findability and discoverability. We’ve often talked about KM systems making it easier for the right people to find the information they need to do their jobs. As KM technologies mature, the way we think about connecting people and the knowledge they need is shifting. Rather than just asking, “How can we enable people to find the right information?”, we can also think more seriously about how we proactively deliver the right information to those people. This concept is not new, but the ability to deliver on it is increasingly real and powerful.

When we combine an understanding of all of our content in context, with an understanding of our people and analytics to inform us how people are interacting with that content and what content is new or changing, we’re able to begin predictively delivering content to the right people. Sometimes, this is relatively basic, providing the classic “users who looked at this product also looked at…” functionality by matching metadata and/or user types, but increasingly it can leverage graphs and analytics to recognize when a piece of content has changed or a new piece of content of a particular type or topic has been created, triggering a push to the people the system predicts could use that information or may wish to be aware of it. Consider a user who last year leveraged twelve pieces of content to research a report they authored and published. An intelligent system can recognize the author should be notified if one of the twelve pieces of source content has changed, potentially suggesting to the content author they should revisit their report and update it.
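
As a rough sketch of how such a trigger could work, the Python snippet below uses rdflib and a hypothetical `ex:` vocabulary to find authors whose reports cite a source document that changed after publication. It illustrates the idea rather than any specific product’s implementation.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("https://example.com/km/")  # hypothetical vocabulary

g = Graph()
# Toy data: a report, its author, a cited source, and change dates.
g.add((EX.report1, RDF.type, EX.Report))
g.add((EX.report1, EX.author, EX.alice))
g.add((EX.report1, EX.cites, EX.source7))
g.add((EX.report1, EX.publishedOn, Literal("2022-06-01", datatype=XSD.date)))
g.add((EX.source7, EX.lastModified, Literal("2023-01-15", datatype=XSD.date)))

# Find authors to notify: a cited source changed after the report was published.
query = """
PREFIX ex: <https://example.com/km/>
SELECT ?author ?report ?source WHERE {
  ?report ex:author ?author ;
          ex:cites ?source ;
          ex:publishedOn ?published .
  ?source ex:lastModified ?modified .
  FILTER (?modified > ?published)
}
"""
for author, report, source in g.query(query):
    print(f"Notify {author}: {source} changed after {report} was published.")
```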

Overall, the trend we’re seeing here is about Intelligent Delivery of content and leveraging AI, Machine Learning, and Advanced Content Analytics in order to deliver the right content to individuals based on what we know and can infer about them. We’re seeing this much more as a prioritized goal within organizations but also as a feature software vendors are seeking to include in their products.

Personalized KM – With all the talk of improved technology, delivery, and context, the last trend is more of a summary of trends. KM, and KM systems, are increasingly customized to the individual being asked to share, create, or find/leverage content. Different users have different missions, with some more consumers of knowledge within an organization and others more creators or suppliers of that knowledge. Advanced KM processes and systems will recognize a user’s responsibility and mandates and will enable them to perform and deliver in the most intuitive and seamless way possible.

This trend has a lot to do with content assembly and flexible content delivery. It means that, with the right knowledge about the user, today’s KM solutions can assemble only that information that pertains to the user, removing all of the detritus that surrounds it. For instance, an employee doesn’t need to wade through hundreds of pages of an employee handbook that aren’t pertinent to them; instead, they should receive an automatically generated version specifically for their location, role, and benefits.
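
One simple way to picture this content assembly, as a hedged sketch with made-up metadata fields: handbook sections carry audience metadata, and only the sections pertinent to the reader’s location and role are assembled.

```python
# Hypothetical handbook sections tagged with audience metadata.
HANDBOOK_SECTIONS = [
    {"title": "Code of Conduct", "locations": ["*"], "roles": ["*"]},
    {"title": "UK Benefits", "locations": ["UK"], "roles": ["*"]},
    {"title": "US Benefits", "locations": ["US"], "roles": ["*"]},
    {"title": "Manager Duties", "locations": ["*"], "roles": ["manager"]},
]

def assemble_handbook(location: str, role: str) -> list[str]:
    """Assemble only the sections pertinent to this employee."""
    return [
        s["title"]
        for s in HANDBOOK_SECTIONS
        if ("*" in s["locations"] or location in s["locations"])
        and ("*" in s["roles"] or role in s["roles"])
    ]

print(assemble_handbook(location="UK", role="engineer"))
# ['Code of Conduct', 'UK Benefits']
```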

The customized KM trend isn’t just about consuming information, however. More powerfully, it is also about driving knowledge sharing behaviors. For example, any good project manager should capture lessons learned at the end of a project, yet we often see organizations fail to get their PMs to do this consistently. A well-designed KM system will recognize an individual as a PM, understand the context of the projects they are managing, and be able to leverage data to know when that project is completed, thereby prompting the user with a specific lessons learned template at the appropriate time to capture that new set of information as content. That is customized KM. It becomes part of the natural work and operations of systems, and it makes it easier for a user to “do the right thing” because the processes and systems are engineered specifically to the roles and responsibilities of the individual.

Another way of thinking about these trends is by invoking the phrase “KM at the Point of Need,” derived from a phrase popularized in the learning space (Learning at the Point of Need). We’re seeing KM head toward delivering highly contextualized experiences and knowledge to the individual user at the time and in the way they need it and want it. What this means is that KM becomes more natural, more simply the way that business is done rather than a conscious or deliberate act of “doing KM.” This is exciting for the field, and it represents true business value and transformation.

Do you need help understanding and harnessing the value of these trends? Contact us to learn more and get started.

5 Steps to Enhance Search with a Knowledge Graph
https://enterprise-knowledge.com/5-steps-to-enhance-search-with-a-knowledge-graph/ (Tue, 24 Jan 2023)

As search engines and portals evolve, users have come to expect more advanced features common to popular websites like Google or Amazon. Users expect search engines to understand what they are asking for and give them the ability to easily scan and drill down to the desired information.

Knowledge graphs are commonly paired with enterprise search to meet these expectations, enabling users to explore connections between information and extend search results with contextual data. To help get started enhancing your search results with a knowledge graph, we put together the following five-step process that adheres to search, knowledge graph, and search design best practices.
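
As a minimal illustration of extending a search result with contextual data from a graph, the Python sketch below looks up entities related to a matched document in a small rdflib graph. The `ex:` properties and data are hypothetical.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import RDFS

EX = Namespace("https://example.com/search/")  # hypothetical vocabulary

g = Graph()
g.add((EX.doc42, RDF.type, EX.Document))
g.add((EX.doc42, RDFS.label, Literal("Q3 Supplier Risk Review")))
g.add((EX.doc42, EX.mentions, EX.acmeCorp))
g.add((EX.acmeCorp, RDFS.label, Literal("Acme Corp")))
g.add((EX.doc42, EX.authoredBy, EX.maria))
g.add((EX.maria, RDFS.label, Literal("Maria (Procurement SME)")))

def contextualize(hit):
    """Given a search hit (a document URI), pull related entities from the graph."""
    for _, predicate, obj in g.triples((hit, None, None)):
        label = g.value(obj, RDFS.label)
        if label:
            print(f"  related via {predicate.split('/')[-1]}: {label}")

print("Result: Q3 Supplier Risk Review")
contextualize(EX.doc42)
```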

For a deeper dive into each of the five steps, check out my corresponding white paper on the topic. EK has expertise in enterprise search, ontology design, and knowledge graph implementations, and we would love to work with you on your next search journey. Please feel free to contact us for more information.

Constructing KM Technology: Tips for Implementing Your KM Technology Solutions
https://enterprise-knowledge.com/tips-for-implementing-km-technology-solutions/ (Mon, 15 Aug 2022)

In the digital age that we now live in, making Knowledge Management (KM) successful at any organization relies heavily on the technologies used to accomplish everyday tasks. Companies are recognizing the importance of providing their workforce with smarter, more efficient, and highly specialized technological tools so that employees can maximize productivity in their everyday work. There’s also the expectation for a KM system, like SharePoint, to act as an all-in-one solution. Companies in search of software solutions often make the mistake of thinking a single system can effectively fulfill all of their needs including content management, document management, AI-powered search, automated workflows, etc., which simply isn’t the case. The reality is that multi-purpose software tools may be able to serve more than one business function, but in doing so only deliver basic features that lack necessary specifications and result in a sub-par product. More information on the need for a multi-system solution can be found in this blog about the importance of a semantic layer in a knowledge management technology suite.

In our experience at Enterprise Knowledge (EK), we consider the following to be core and essential systems for most integrated KM technology solutions:

  • Content Management Systems
  • Taxonomy Management Systems
  • Enterprise Search Tools
  • Knowledge Graphs

The systems mentioned above are essential tools to enable successful and mature KM, and when integrated with one another can serve to revolutionize the interaction between an organization’s staff and its information. EK has seen the most success with client organizations once they have understood the need for a blended set of technological tools and taken the steps to implement and integrate them with one another.

Once this need for a combined set of specialized solutions is realized, the issue of how to implement these solutions becomes ever-present and must be approached with a specific strategy for design and deployment. This blog will help to outline some of the key tips and guidelines for the implementation of a KM technology solution, regardless of its current state.

Prioritizing Your Technology Needs

When thinking about the approach to implementing an organization’s identified technology solutions, there is often an inclination to prioritize solutions that are considered “state-of-the-art” or “cooler” than others. This is understandable, especially with the new-age technology that is on the market and able to create a “wow” factor for a business’ employees and customers. However, it is important to remember that the order in which systems are implemented relies heavily on the current makeup of the organization’s technology stack. For example, although it might be tempting to take on the implementation of an AI-powered knowledge graph or a chat-bot that has Natural Language Processing (NLP) capabilities, the quality of your results and real-world usability of the product will increase dramatically if you also include other technologies such as a graph database to provide the foundation for a knowledge graph, or a Taxonomy Management System to allow for the design and curation of an enterprise taxonomy and/or ontology.

Depending on your organization’s level of maturity with respect to its technology ecosystem, the order in which systems are implemented must be strategically defined so that one system can build off of and enhance the previous. Typically, if an organization does not possess a solidified instance of any of the core KM technologies, the logical first step is to implement a Content Management System (CMS) or Document Management System (DMS), or in some cases, both. Following the “content first” approach, commonly used in web design and digitalization, organizations must first have a place in which they can effectively store, manage, and access their content, as an organization’s content is arguably one of its most valuable assets. Furthermore, one could argue that all core KM technologies are centered around an organization’s content and exist to improve/enhance that content whether it is adding to its structure, creating ways to more efficiently store and describe it, or more effectively searching and retrieving it at the time of need.

Once an organization has a solidified CMS solution in place, the next step is to implement tools geared towards the enhancement and findability of that content. One system in particular that helps to drastically improve the quality of an organization’s content by managing and deploying enterprise-wide taxonomies and ontologies is a Taxonomy Management System (TMS). TMS solutions are integrated with an organization’s CMS and search tools and serve as a place to create, deploy, and manage poly-hierarchical taxonomies in a single place. TMS tools allow organizations to add structure to their content, describe it in a way that significantly improves organization, and fuel search by providing a set of predefined values from a controlled vocabulary that can be used to create facets and other forms of search-narrowing instruments. A common approach to implementing your technology ecosystem involves the simultaneous implementation of an enterprise search solution alongside the TMS implementation. Once again, the idea of one solution building off another is present here, as enterprise search tools feed off of the previously implemented CMS instance by utilizing Access Control List (ACL) specifications, security trimming considerations, content structure details, and more. Once these three systems are in place, organizations can afford to look into additional tools such as Knowledge Graphs, AI-powered chatbots, and Metadata Catalogs.
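
To make the integration idea tangible, here is a hypothetical connector sketch in Python that pulls taxonomy terms from a TMS REST API and applies them as tags on a CMS item. Every endpoint and field name here is an illustrative assumption; real TMS and CMS products each expose their own APIs.

```python
import requests

TMS_API = "https://tms.example.com/api/terms"    # hypothetical endpoint
CMS_API = "https://cms.example.com/api/content"  # hypothetical endpoint

def fetch_taxonomy_terms(facet: str) -> list[str]:
    """Pull the controlled vocabulary for one facet from the TMS."""
    resp = requests.get(TMS_API, params={"facet": facet}, timeout=30)
    resp.raise_for_status()
    return [term["label"] for term in resp.json()["terms"]]

def tag_content(item_id: str, terms: list[str]) -> None:
    """Write the selected terms back to a CMS item's metadata."""
    resp = requests.patch(
        f"{CMS_API}/{item_id}",
        json={"metadata": {"topics": terms}},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    topics = fetch_taxonomy_terms("topics")
    # In practice, term selection would come from auto-tagging or an editor.
    tag_content("item-123", topics[:3])
```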

Defining Business Logic and Common Uses

There is a great deal of preparation involved with the implementation of KM technologies, especially when considering the envisioned use of the system by organizational staff. As part of this preparation, a thorough analysis of existing business processes and standard operating procedures must be executed to account for the specific needs of users and how those needs will influence the design of the target system. Although it is not always initially obvious, the way in which a system is going to be used will heavily impact how that system is designed and implemented. As such, the individuals responsible for implementation must have a well-documented, thorough understanding of what end users will need from the tool, combined with a comprehensive list of core use cases. These types of details are most commonly elicited through a set of analysis activities with the system’s expected users.

Without these types of preliminary activities, the implementation process will seldom go as planned. This is because various detours will have to be taken to accommodate the business process details that are unique to the organization and therefore not ‘pre-baked’ into software solutions. These considerations sometimes come in the form of taxonomy/controlled list requirements, customizable workflows, content type specifications, and security concerns, to name a few.

If the proper arrangements aren’t made before implementing software and integrating with additional systems, it will almost always affect the scope of your implementation effort. Software implementation is not a “one size fits all” type of effort; there are certain design elements that are based on the business and functional requirements of the target solution, and these must be identified in the initial stages of the project. EK has seen how the lack of these preparatory activities can have impacts on project timelines, most commonly because of delays due to unforeseen circumstances. This results in extended deadlines, change requests, additional investment, and other general inefficiencies.

Recruiting the Proper Resources

In addition to the activities needed before implementation, it is absolutely essential to ensure that the appropriate resources are assigned to the project. This too can create issues down the road if not given the appropriate amount of time and attention before beginning the project. Generally speaking, there are a few standard roles that are necessary for any implementation project, regardless of the type or complexity of the effort. These roles are listed and described below:

  • KM Designer/Consultant: Regardless of the type of system to be implemented, having a KM consultant on board is needed for various reasons. A KM consultant will be able to assist with the non-developmental areas of the project, for example designing taxonomies/ontologies, content types, search experiences, and/or governance structures.
  • Senior Solutions Architect: Depending on the level of integration required, a Senior Solutions Architect is likely required. This is ideally a person with considerable experience working with multiple types of technologies that are core to KM. This person should have a thorough and comprehensive understanding of how to arrange systems into a technology suite and how each component works, both alone and as part of a larger, combined solution. Familiarity with REST, SOAP, and RPC APIs, along with other general knowledge about the communication between software is a must.
  • Technology Subject Matter Expert (SME): This role is absolutely critical to the success of the implementation, as there will be a need for someone who specializes in the type of software being implemented. For example, if an organization is working to implement a TMS and integrate it with other systems, the project will need to staff a TMS integration SME to ensure the system is installed according to implementation best practices. This person will also be responsible for a large portion of the “installment” of the software, meaning they will be heavily involved with the initial set up and configuration based on the organization’s specific use of the system.
  • KM Project Manager: As is common with all projects, there will be a need for a project manager to coordinate meetings, ensure the project is on schedule, and facilitate the ongoing alignment of all engaged parties. This person should be familiar with KM so that they can align efforts with best practices and help facilitate KM-related decisions.
  • API Developer(s): Depending on the level of integration required, a developer may be needed to develop code to serve as a connector between systems. This individual must be familiar with the communication logic needed between systems and have a thorough understanding of APIs as well. The programming language in which any custom coding is needed will vary from organization to organization, but it is required that the developer has experience with the identified language.

The list above is by no means exhaustive, nor does it contain resources that are commonly assumed to be a part of any implementation effort. These roles are simply the unique ones that help with successful implementations. Also, depending on the level of effort required, there may be a need for multiple resources at each role, such as the developer or SME role. This type of consideration is important, as the project will need to have ample resources according to the project’s defined timeline.

Defining a Realistic Timeline

One final factor to consider when preparing for a technology solution implementation effort is the estimated time with which the project is expected to be completed. Implementation efforts are notoriously difficult to estimate in terms of time and resources needed, which often results in the over- or under- allocation of financing for a given effort. As a result of this, it’s recommended to err on the side of caution and incorporate more time than is initially estimated for the project to reach completion. If similar efforts have been completed in the past, utilize informal benchmarking. If available resources have experience implementing similar solutions, bring them to the forefront. The best way to estimate the level of effort and time needed to complete certain tasks is to look at historical data, which in this case would be previous implementation efforts.

In EK’s experience implementing large scale and highly complex software and custom solutions, we have learned that it is important to prepare for the unexpected to ensure the expected timeline is not derailed by unanticipated delays. For example, one common consideration we have encountered many times and one that has created significant delays is the need to get individuals appropriate access to certain systems or organizational resources. This is especially relevant with third-party consultants and when the system(s) in question have high security requirements. Additionally, there are several KM-related considerations that can unexpectedly lengthen a project’s timeline, such as the quality/readiness of content, governance standards and procedures that may be lacking, and/or change management preparations.

Conclusion

There are many factors that go into an implementation effort and, unfortunately, a lot of ways one can go wrong. Projects like these are seldom executed to perfection, and when they fail or go awry, it is usually due to one or a combination of the factors mentioned above. The good news, and the common theme across these considerations, is that these pitfalls can mostly be avoided with proper planning, preparation, and estimates (with regard to both time and resources). The initial stages of an implementation effort are the most critical, as this is when project planners need to be honest and realistic with their projections. There is often a tendency to begin development as soon as possible and to skip most of the preparatory activities out of eagerness to get started. It is important to remember that successful implementation efforts require the necessary legwork, even if it may seem superfluous at the time. Does your company need assistance implementing a piece of technology, and are you unsure how to get started? EK provides end-to-end services beginning with strategy and design and ending with the implementation of fully functional KM systems. Reach out to us with any questions or general inquiries.

A Data Scientist Perspective on Knowledge Graphs (Part 1): the Data-Driven Challenge
https://enterprise-knowledge.com/a-data-scientist-perspective-on-knowledge-modeling-part-1/ (Tue, 12 Apr 2022)

Photographer: Johannes Plenio

This series of articles offers a data scientist’s, and data engineer’s, perspective on knowledge graphs. It is intended not only for other data scientists and engineers (the nerdy role in the office that no one truly understands), but also for executives and business groups, who ultimately decide where to steer the organization and are inundated with a multitude of use cases and business capabilities, as well as for project managers, who are tasked with leading cross-functional teams to move their data projects into successful efforts.

The goal of this series of articles is not to formally define data science, data engineering, or machine learning, but to depict what these are in practice and why we hear about these distinct names, or roles, that intricately work together. In small teams, these roles are typically executed by the same person, hence the confusion.


This article, part 1, focuses on the data scientist’s path from knowledge discovery to solving a business challenge. To understand their perspective, we will explore the challenges facing data scientists and discuss knowledge management as a solution. In the second article of this series, part 2, we will see how knowledge graphs apply to data science and machine learning in the enterprise or business context.

A Scientific Approach to Business

The important word in data science, from a data scientist’s perspective, is science, not data. And science starts with a question (or hypothesis). Data comes in secondly to validate or refute a hypothesis, significantly answering the question, or not.

From a business perspective, this approach means that what matters first is the initial question. Asking the wrong question will always lead to the wrong answer.

From our experience within the enterprise, it means that a data scientist might have a cross-functional role based on the question asked, or the problem exposed, as the data needed to answer the question, or solve the problem, might be spread among different units.

Simple problems usually require simple solutions. Data scientists might fit in a single department but we gather that they are often involved in more than one, sitting between departments.

The key role of a data scientist is to answer questions with data, find the model that best suits a problem, assess its performance by developing quality assurance tests (statistical tests), and determine what is needed (often better or more data) to improve the model.

Most of the research executed by a data scientist will consist of refining the initial question asked, or redefining the initial challenge exposed, usually uncovering other questions or challenges and iteratively making them more precise and more contextualized.

Here are a few examples of the questions or hypotheses that data scientists are confronted with:

  • What will be our revenue next year?
  • How can we maximize profits?
  • How can we increase sales?
  • Which products or services should we prioritize?
  • Which marketing campaign brings in more customers?

As we can see, the extent to which these questions apply is vast and they are mainly business economics questions, although data science can apply to more operational or organizational questions as well:

  • How can we improve our processes?
  • How can we increase service uptime?
  • How can we optimize tasks among employees?

Taking a scientific approach, the initial question that gave life to this series of articles shall be: What is a data scientist’s perspective on knowledge modeling and engineering in a business/enterprise context?

Let’s contextualize it in order to further refine this question.

The Data-Driven Challenge

Businesses are more and more confronted with AI, which is now becoming ubiquitous. We hear about data scientists and engineers, sometimes AI or machine learning (ML) engineers, automating business processes, developing predictive models, and many other algorithmic things.

Artificial intelligence is now involved in many, if not most, business processes. Indeed, there are questions to answer, and challenges to overcome, at all levels and in every department of a company. Executives have strategic problems – where to go, how to innovate? Businesses have business problems – how to earn or do more with less. A level lower, organizations have organizational problems – who needs to know what, what do we need where, etc.

“Companies must re-examine the ways that they think about data as a business asset of their organizations. Data flows like a river through any organization. It must be managed from capture and production through its consumption and utilization at many points along the way.” – Randy Bean, Harvard Business Review, “Why Is It So Hard to Become a Data-Driven Company?”

Businesses now have data lakes, because data is structurally siloed across their company. As the amount of data gathered is tremendous, they hired data scientists and engineers in order to make sense of it all.

In theory, everybody knows that. In practice, it is never as easy as it sounds, with people typically not knowing where to start. In the next section, we will dive into the data scientist’s path forward.

Photographer: Rodrigo Pederzini

The Data Scientist Path

Although business should prevail over technology, the very empirical nature of economics forces it the other way around when it comes to data science and engineering applied to business. The term data-driven depicts this best, but the issue here is that a data scientist can be left with only data and a simple “find something” instruction. We will indicate this extreme case of pure discovery at the very beginning, on the far left of our path, as shown in the following figure.

A line diagram with Discovery on the far left and "Precise Question / Well-defined problem" on the right

The more we move to the right side of the data science path depicted above, the closer we get to a precise business question, or challenge, and consequently the closer we get to an applicable solution.

Although its path might not be as straightforward, a data scientist strives to move from left to right.

The data scientist’s path is similar for most questions or challenges and consists of data quality assessments (is the data appropriate, sufficient, accessible and discoverable?), exploratory data analysis (can we extract patterns or trends from the data?), and feasibility checks (can we get actionable results or business insights?)

A line diagram with 3 branches coming off: 1) Data quality assessment, 2) exploratory data analysis, 3) feasibility checks

In practice, data assessment and exploratory data analysis are always about the same process, and the answers will depend on the initial question asked. As most cases follow the same path, the only difference is the time it takes and the end result, both of which depend on the question asked, or challenge definition.

Let’s have a first look at the two extreme scenarios, two very different questions, or problems: “find something” and “find this”, and see how they differ in practice. We will also see how machine learning algorithms fit between these two. And finally, we will picture, for each scenario, the best and worst cases we can expect from them.

Scenario 1: No direction, purely discovery 

The enterprise generates tremendous amounts of data, about customers, employees, or resources, and we want to make sense of it all, or simply extract some business insights out of it.

The data scientist gets confronted with an overly general demand: “Find something in our data.”

In machine learning terms, most models in use are of the unsupervised learning family that will end up in a classification, or categorization, problem. This is commonly called information or knowledge discovery or retrieval.
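
A minimal sketch of what “find something” often looks like in code: unsupervised clustering of documents with scikit-learn. The toy corpus is illustrative; real discovery work involves far more data preparation and evaluation.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "invoice overdue payment supplier",
    "supplier payment terms invoice",
    "server outage incident response",
    "incident postmortem outage root cause",
]

# Vectorize the text, then let the algorithm propose groupings ("find something").
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for doc, label in zip(docs, labels):
    print(label, doc)  # two clusters emerge: finance vs. incident documents
```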

Regardless of the output, businesses end up with the same tricky question: how to ensure value out of it? And tackling this issue is fairly simple: define value.

Indeed, this scenario requires at least some business context and objectives, otherwise the team might dig into pointless directions.

Worst case: the project ends up in endless research or inapplicable findings, impacting the ability to retain the team with a value proposition.

Best case: the project ends up in a classification problem, or categorization, leading to the next scenario; more precise questions, better defined challenges.

To avoid the worst case and achieve the best results, enterprises and managers should contextualize, or structure, the research and findings. This shall ultimately lead to more specific questions, which is the second scenario, presented hereafter.


Scenario 2: Precise business question

The organization has a specific question, or goal. The data scientist’s job will be to study the feasibility of the question regarding data availability and quality, and the potential answers based on statistical significance and information availability (engineering), which together determine whether we are able to conclude, potentially providing an answer to the initial question.

Because we don’t know in advance whether we shall be able to conclude on a question – often due to a lack of data or poor data quality – the result, the answer to a question, is uncertain.

Therefore, this scenario requires data scientists to have a list of (multiple) precise (well-defined) business questions that address specific business challenges or use cases. Having multiple questions increases the chance of gaining valuable insights from the analysis.

The machine learning models at play here are typically of the supervised learning family, although many solutions require only simple, more intricate, or time series regressions, depending on the problem.
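
As a small illustration of this supervised end of the spectrum, the sketch below fits a seasonal regression for a “what will revenue be next month?” style question, using a linear trend plus month-of-year dummy features. The data is synthetic, and consistent with the rule of thumb mentioned below, it assumes at least two full yearly cycles of history.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
months = np.arange(36)                         # three years of monthly history
season = 10 * np.sin(2 * np.pi * months / 12)  # yearly seasonality
revenue = 100 + 0.5 * months + season + rng.normal(0, 2, 36)

# Features: a linear trend plus one-hot month-of-year dummies for seasonality.
X = np.column_stack([months, np.eye(12)[months % 12]])
model = LinearRegression().fit(X, revenue)

next_month = 36
x_next = np.column_stack([[next_month], np.eye(12)[[next_month % 12]]])
print(f"Forecast for month {next_month}: {model.predict(x_next)[0]:.1f}")
```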

The result is a list of feasibility checks, prototypes, or proofs of concept.

Worst case: the data is not as good as expected, or is simply not available at the moment. The problem gets postponed to when data is available.

For example, a monthly prediction requires, due to yearly seasonality, at least 24 months of data.

Best case: the data is good and the model works. We have a proof-of-concept that can lead to an implementation or integration phase.

Again, to avoid the worst case and achieve the best results, enterprises and managers should contextualize, or structure, the questions and answers.


Closing

The differences between the two scenarios discussed above are, of course, the output (whether it ends up as a classification or a predictive problem) and how time-consuming (and expensive) they are, both depending on the initial question, or challenge definition.

A line diagram with 2 additional branches titled "Proof of concept" and "Classification (or categorization)"

In reality, the difference between “find something” and “find this” is rather significant.

“Find something” can lead to unnecessary answers, such as solutions without a problem. We will see later in this paper how to avoid that situation.

“Find this”, “Find why this is”, “Find how this is”, or “Find a solution to this”, are already more precise and tangible questions but require “this” to be defined.

Companies will often place value in being data-driven, or following the data-driven approach, but we’ve seen here that an organization can be data-driven yet still ask the wrong questions. The value of a data science project is defined by the initial question. The data-driven approach is most valuable when the initial question is valuable to the business, meaning the answer to the question can be leveraged and have an impact on the enterprise.

A successful data science project starts with a good question. It does not necessarily mean that you will get a valid answer to your question, but rather that you will be able to answer the question with the data that you have.

Overall, we ensure value from a data science approach by being able to use information or knowledge extracted from discovery in order to tackle precise business questions or well-defined problems.

The way our data scientist’s path fits within the company will be the subject of the second article, part 2, of this series. We will first put our data scientist’s path within the enterprise context, and second, we will see how knowledge graphs come in handy for that matter. Indeed, we will see how discovery in data science naturally leads to knowledge modeling and in turn how knowledge modeling helps define better, more precise, questions.

Knowledge Graph Accelerator for Data Standardization and Environmental Social Governance (ESG)
https://enterprise-knowledge.com/knowledge-graph-accelerator-for-data-standardization-and-environmental-social-governance-esg/ (Fri, 04 Mar 2022)

The Challenge

A global management consulting firm needed to establish a standardized, central platform for consultants to see insights on environmental impacts associated with supply chain processes. Historically, this information was stored in disparate locations, often hidden within departmental documents or client deliverables. Additionally, there was no standardized vocabulary used across these different data sources, leading to inconsistent understandings of key business concepts. These challenges caused consultants to lose valuable time searching for data and experts that could provide relevant insights on a new project. Thus, the firm recognized a need to create a centralized location to find data on past projects that would help consultants support future recommendations to their clients. This challenge required rigorous knowledge modeling that would be scalable and provide a reliable access point for information retrieval at the time of need.

The Solution

EK employed our Knowledge Graph Accelerator model to rapidly create a semantic data layer and web application connecting consultants to both the relevant data and the right people. To build the semantic data layer, EK created an ontology and starter taxonomy to establish relationships between core concepts and standardize the knowledge model. From there, the team implemented a knowledge graph datastore using data catalog and transformation tooling in partnership with data.world. EK established parameterized queries to enable self-serve analysis for consultants within data.world, and built a web app on top of these queries with faceted navigation driven by the taxonomy, surfacing relevant insights so business users can make decisions quickly.
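For illustration, one such parameterized query might look like the sketch below. The ek: namespace, class, and property names are hypothetical stand-ins rather than the firm’s actual model, and the "Packaging" literal represents the value a consultant would supply as the parameter:

    PREFIX ek:   <https://example.com/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    # Find past projects that measured the environmental impact of a given
    # supply chain process, along with the experts who worked on them.
    SELECT ?project ?metric ?value ?expertName
    WHERE {
      ?project a ek:ClientProject ;
               ek:involvesProcess  ?process ;
               ek:hasImpactMeasure ?measurement ;
               ek:hasContributor   ?expert .
      ?process rdfs:label "Packaging" .   # the parameter a consultant supplies
      ?measurement ek:metric ?metric ;
                   ek:value  ?value .
      ?expert rdfs:label ?expertName .
    }
    ORDER BY DESC(?value)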

EK then provided a clear roadmap and guidelines for the firm to implement and scale knowledge graphs across multiple practices and data sources. As part of defining this roadmap, EK included the following recommendations:

  • Guidance on how to make changes or adjustments to the application’s taxonomies
  • Technology recommendations for taxonomy and ontology management, cloud-based graph storage, data cataloging, and ETL workflows (See Figure 1)
  • Roles, responsibilities, and an organizational structure to make graph application adoption successful
Figure 1: EK Consultants developed a general process for uploading data into a graph and making it accessible through a web-based application:

  1. Upload source data and create intermediary datasets as needed for mapping.
  2. Map the source data to the ontology model by defining specific mapping rules.
  3. Export the file containing the mapping rules and use it to materialize a physical graph store.
  4. Query the knowledge graph through APIs that feed downstream applications accessible to users.
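As a sketch of step 2 above, one common way to express mapping rules (not necessarily the approach used on this engagement) is a SPARQL CONSTRUCT query that reshapes naively loaded source rows into ontology-conformant triples. The src: staging fields and ek: terms below are hypothetical:

    PREFIX src: <https://example.com/source/>    # hypothetical staging namespace
    PREFIX ek:  <https://example.com/ontology/>  # hypothetical target ontology

    CONSTRUCT {
      ?project a ek:ClientProject ;
               ek:involvesProcess  ?process ;
               ek:hasImpactMeasure [ ek:metric ?metric ; ek:value ?value ] .
    }
    WHERE {
      # Source rows loaded as naive triples, one resource per spreadsheet row
      ?row src:project_id   ?projectId ;
           src:process      ?process ;
           src:metric_name  ?metric ;
           src:metric_value ?value .
      BIND(IRI(CONCAT("https://example.com/project/", STR(?projectId))) AS ?project)
    }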

The EK Difference

EK’s extensive experience in designing and implementing knowledge graph solutions enabled the team to demonstrate tangible, clickable results in less than two months. Together with the consulting firm, the team implemented a starter taxonomy, ontology, and frontend application, all of which were architected in a way that allowed for an easily scaled solution and rapid future growth of this product. EK partnered with the firm’s sustainability consulting group to ensure a transparent and collaborative process and utilized an agile development method to develop and gather feedback concurrently. After initial development, EK provided support in migration of this software into different environments as well as continued recommendations on strategy and how to achieve rapid adoption across the firm’s larger practices.

The Results

EK enabled the firm to advise on efficient measures to limit environmental impact and optimize cost. The firm can now leverage their extensive knowledge base around methods to reduce environmental impact and scale a centralized database of previously tacit knowledge surrounding sustainability solutions. In the end, the firm left with the following outcomes:

  • Greater Connectivity: Embedded, machine-readable relationships between concepts that capture hierarchy directly, without requiring join tables or complex joins.
  • Scalability: A data model designed for organic growth that can absorb new relationships, new analytics, and data from multiple sources without extensive rework.
  • Performance: A performant data store that delivers query results quickly and enables integration with multiple downstream applications. 
  • Interoperability: A data model that is based on web standards, minimizing vendor lock, ensuring ease of management, long term scale, and interoperability across the firm’s systems and platforms.
  • Quality and Reputability: Consultants can leverage insights that are certified and aligned with industry standards to provide clients with a strategy that can generate profit while supporting sustainability missions and impact.
  • Industry Intelligence: The firm will be able to connect knowledge and data related to environmental sustainability to detect patterns and provide market intelligence to their clients.

Through this use case, the firm was also able to see the value of graph data in action, arming them with a demo-able product for testing how well graph modeling fits their solutions. This drove the firm to expand knowledge graph products to new areas of the organization to support standardization and connectivity.

Improving Product Documentation With a CCMS and a Knowledge Graph
https://enterprise-knowledge.com/improving-product-documentation-with-a-ccms-and-a-knowledge-graph/ (Tue, 18 Jan 2022)


The Challenge

A financial solutions provider wanted to improve and personalize customer interaction with support and technical documentation as well as streamline the content authoring process to ensure consistent messaging and avoid duplicated information. Before the EK team’s involvement, technical product documentation was developed and maintained by siloed groups, creating inconsistent vocabularies and content experiences. As a result, customers were often frustrated and confused as they tried to leverage the product documentation to perform tasks. The organization was looking to improve customer experience by providing personalized content and intuitive content navigation.

The Solution

EK collaborated with the financial provider to better leverage a componentized content management system (CCMS), enabling the organization to consistently model, create, and reuse content previously curated and authored by siloed business units. The “componentized” feature was critical to allow the organization to produce reusable product documentation segments. EK integrated an auto-tagging system to more accurately and efficiently apply consistent vocabularies to each of these segments so that the organization could assemble and reassemble content components, aligning them with customer profiles to create personalized customer experiences. Once product documentation content was more meaningfully structured and tagged with consistent vocabularies, EK developed a series of APIs to deliver the content for display in various omnichannel front-end delivery platforms and experiences.
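To make this concrete, a tagged, reusable content segment might be modeled as in the sketch below. The fin: namespace, the segment identifier, and the tag values are hypothetical illustrations, not the provider’s actual vocabulary:

    PREFIX fin:  <https://example.com/content/>   # hypothetical namespace
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX dct:  <http://purl.org/dc/terms/>

    INSERT DATA {
      # One reusable documentation segment, tagged for personalization
      fin:segment-0421 a fin:ContentComponent ;
          dct:title            "Configuring Wire Transfer Limits" ;
          fin:appliesToTier    fin:PremiumTier ;
          fin:appliesToVersion "4.2" ;
          dct:subject          fin:topic-wire-transfers .   # applied by auto-tagging

      fin:topic-wire-transfers a skos:Concept ;
          skos:prefLabel "Wire Transfers" .
    }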

Additionally, EK worked with organizational subject matter experts to design and implement a taxonomy that enables authors to associate content segments with financial topics, solutions, and user-facing views. These associations help drive the personalization of content for each user and organization. On top of the CCMS, EK architected and implemented a knowledge graph to store the content segments and their metadata for quick reference when building content and front-end views. Segments are stored separately and pulled together from the graph to create a seamless search and documentation experience for users.
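At delivery time, the API layer can then pull segments together for a given user profile with a query along these lines (again a sketch over the hypothetical model above, with the tier and version bound from the user’s profile):

    PREFIX fin: <https://example.com/content/>
    PREFIX dct: <http://purl.org/dc/terms/>

    SELECT ?segment ?title
    WHERE {
      ?segment a fin:ContentComponent ;
               dct:title            ?title ;
               fin:appliesToTier    ?tier ;
               fin:appliesToVersion ?version .
      # Bound from the requesting user's profile:
      VALUES (?tier ?version) { (fin:PremiumTier "4.2") }
    }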

The EK Difference

The EK team leveraged our experience with taxonomy design, content management, ontology design, knowledge graphs, and enterprise search to strategize and implement the CCMS solution for the organization. Our taxonomists and ontologists collaborated with the organization’s subject matter experts to capture and translate existing metadata and authoring processes into taxonomies and content models that could be leveraged by the CCMS and knowledge graph. Our technical expertise guided the organization in building a flexible knowledge graph foundation that could be leveraged through a bespoke API layer to power their vision of content curation, delivery, and search across their user-facing platforms.

The Results

The combination of a CCMS platform and a knowledge graph allows for dynamic assembly of content components and data elements to produce a flexible, adaptive, and customized client experience for the financial solutions provider. The new solution connects clients to the content and data they need to perform their work in a vastly more efficient manner that saves time for both the financial solutions provider as well as their clients. The CCMS platform is providing the organization with the ability to create individual content segments, tag them with a consistent taxonomy, and produce personalized views for end-users. With personalized content and connected data, users are provided only the documentation relevant to the version and tier of service they have purchased. Additionally, we can ensure that product information is described with a consistent vocabulary across all documentation. This removes the previous need to comb through irrelevant and inconsistent information to find the relevant documentation. Leveraging a knowledge graph allows us to connect content with relevant data elements to produce richer and more detailed documentation.

Expert Analysis: Does My Organization Need a Graph Database?
https://enterprise-knowledge.com/expert-analysis-does-my-organization-need-a-graph-database/ (Fri, 14 Jan 2022)

As EK works with our clients to integrate knowledge graphs into their technical ecosystems, client stakeholders often ask, “Why should we leverage knowledge graphs?” and more specifically, “Do we need a graph database?” Our consultants then collaborate with stakeholders to weigh the pros and cons of using a knowledge graph and graph database to solve their findability and discoverability use cases. In this blog, two of our senior technical consultants, Bess Schrader and James Midkiff, answer common questions about knowledge graphs and graph databases, focusing on how to best fit them into your organization’s ecosystem without overengineering the solution.

Why should I leverage knowledge graphs?

Bess Schrader

Knowledge graphs model information in the same way human beings think about information, making it easier to organize, store, and logically find data. This reduces the silos between technical users and business users, reduces the ambiguity about what data and information mean, and makes knowledge more sustainable and accessible. Knowledge graphs are the implementation of an ontology, a critical design component for understanding your organization. 

Many graph databases also support inference, allowing you to explore previously uncaptured relationships in your data, based on logic developed in your ontology. This reasoning capability can be an incredibly powerful tool, helping you gain insights from your business logic.
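For example, if an ontology declares subclasses of a general concept, a reasoner can return the specific instances wherever the general concept is requested; even without a reasoner, plain SPARQL can approximate subclass reasoning with property paths. A minimal sketch, assuming a hypothetical ek: class hierarchy:

    PREFIX ek:   <https://example.com/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    # Returns every item whose type is ek:Project or any subclass of it,
    # by walking the class hierarchy explicitly with a property path.
    SELECT ?item
    WHERE {
      ?item a ?type .
      ?type rdfs:subClassOf* ek:Project .
    }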

James Midkiff

Knowledge graphs are a concept, a way of thinking; they aren’t necessarily tied to a graph database. Even if you are against adopting a graph database, you should design an ontology for your organization’s data to visualize and align with how you and your colleagues think. Modeling your organization gives you the complete view and vision for how to best leverage your organization’s content. This vision is your knowledge graph: an innovative way for your organization to tackle the latest data problems. However, this ontology doesn’t have to be implemented in a graph database. The technical implementation should be built with technologies that efficiently support the use cases and are easy to maintain.

Does my use case require a graph database?

Bess Schrader

Any organization that wants to map its internal data to external data would benefit from a graph. If your use case includes publishing your data and connecting to other data sets, a knowledge graph and graph database (particularly one that uses the Resource Description Framework, or RDF) are the way to go to ensure the data is flexible and interoperable. Even if you do not intend to connect and/or publish data, storing robust definitions alongside data in a graph is one of the best ways to ensure that the meaning behind fields is not lost. With the addition of RDF*, the expressivity of a graph to describe organizational data is unmatched by other data formats.
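As an illustration of that expressivity, RDF* (RDF-star) lets you attach statements to other statements. The sketch below assumes a hypothetical ex: model and a database with RDF-star support; it annotates a single data point with its provenance:

    PREFIX ex:  <https://example.com/ontology/>
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

    INSERT DATA {
      ex:customer-42 ex:pqf true .
      # Annotate the statement itself with provenance (requires RDF-star support)
      << ex:customer-42 ex:pqf true >> ex:assessedBy ex:underwritingTeam ;
                                       ex:assessedOn "2021-11-03"^^xsd:date .
    }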

When your ontology and instance data are all in the same place (a graph database), technical and non-technical users alike can always determine what a given field is supposed to mean. This ensures that your data is sustainable and maintainable. For example, many organizations use acronyms or abbreviations when setting up relational databases or nested data structures like JSON or XML. However, the definition and usage notes for these fields are often not included alongside the data itself. This leads to situations where data consumers and developers may find, for example, a field called “pqf” in a SQL table or JSON file created several years ago by a former employee. If no one at the organization knows what “pqf” means or what downstream systems might be using this field, this data becomes an unusable maintenance burden.

With well-formed ontologies and RDF knowledge graphs, however, the property “pqf” becomes a “first-class citizen” with its own properties, including a label (“Prequalified for Funding”) and a definition (“This field indicates whether a customer has been prequalified for a financial product. A value of ‘true’ indicates that the customer has been preapproved”), explaining what the property means and how it should be used. This reduces ambiguity and confusion for both developers and data consumers.
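Concretely, declaring “pqf” as a first-class property might look like the following sketch in SPARQL Update; the ex: namespace is hypothetical, while the label and definition are the ones given above:

    PREFIX ex:   <https://example.com/ontology/>
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

    INSERT DATA {
      ex:pqf a rdf:Property ;
          rdfs:label      "Prequalified for Funding" ;
          skos:definition "Indicates whether a customer has been prequalified for a financial product. A value of 'true' means the customer has been preapproved." ;
          rdfs:range      xsd:boolean .
    }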

James Midkiff

A majority of knowledge graph use cases involve information discovery and search. Graph databases are flexible, allowing you to easily adapt the model as new data and use cases are considered. Additionally, graphs make it painless to aggregate data from separate sources and combine it into a single view of an employee, topic, or other important entity at your organization. Below are some questions to ask when making this decision:

  • Does the use case require data model flexibility and are the use cases going to adapt or change over time?
  • Do you need to combine data from multiple sources into a single view?
  • Do you need to be able to search for multiple types of information simultaneously?

If you answer yes to any of the above, graph databases are a good solution. Some use cases do not require cross-entity examination (i.e. asking questions across relationships) or are not easily calculated in a graph. In these cases, you should not invest in learning and implementing a graph database. As an alternative, you can create a dynamic model inside of a NoSQL database and provide search functionality via a search engine. You can also do network-based and machine learning calculations in your programming language of choice after a small data transformation. As stated previously, implementations should be largely driven by the use cases they are supporting and will support in the future.
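Returning to the single-view scenario above: once, say, HR and project data have been loaded into the graph, one query can assemble an employee profile across both sources. A minimal sketch, assuming a hypothetical ek: model:

    PREFIX ek:   <https://example.com/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?name ?projectLabel ?skillLabel
    WHERE {
      ?employee a ek:Employee ;
                rdfs:label ?name .
      OPTIONAL { ?employee ek:assignedTo ?project .
                 ?project  rdfs:label    ?projectLabel }   # from the project system
      OPTIONAL { ?employee ek:hasSkill ?skill .
                 ?skill    rdfs:label  ?skillLabel }       # from the HR system
    }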

I’m nervous about migrating to a new data format. Why should my team learn about and invest in graph database technologies?

Bess Schrader

In addition to the advantages described above, one major benefit of using RDF-compliant graph databases is that they’re built on standards, including RDF and SPARQL, that the W3C has maintained for more than two decades to promote the long-term growth of the web. In other words, RDF is not a trendy new format that may disappear in five years, and you can be confident when investing in learning this technology. The use of standards provides freedom from proprietary vendor tools, enabling you to effortlessly create, move, integrate, and share your data between different standards-compliant software. Using semantic web standards also enables you to seamlessly connect your content and data to a taxonomy (whether internal or external), as most taxonomies are created and stored in an RDF format.

Similarly, SPARQL, the RDF query language, is based on pattern matching and can be easier for non-technical users to learn than more complex programming languages. SPARQL also allows for federated queries, which enable a user to query across multiple knowledge graphs stored in different graph databases, as long as the databases are RDF-compliant. Using federated queries, you could query your own data (e.g. a dataset of articles discussing the stock market and finances) in combination with public knowledge graphs like Wikidata, a free and openly accessible RDF knowledge graph used by Wikipedia. This would allow you to take any article that mentions a stock ticker symbol, follow that symbol to the corresponding Wikidata entry, and pull back the size and industry of the organization to which the ticker refers. You could then leverage this information to filter your articles by industry or company size, without needing to gather that information yourself. In other words, federated queries allow you to query beyond the bounds of your own organization’s knowledge.
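A sketch of such a federated query appears below. The local ex: model (articles annotated with the ticker strings they mention) is hypothetical; the Wikidata endpoint and property identifiers (P414 for stock exchange listing, P249 for ticker symbol, P452 for industry, P1128 for employee count) are real at the time of writing, though worth verifying against current Wikidata documentation:

    PREFIX ex:   <https://example.com/ontology/>   # hypothetical local model
    PREFIX p:    <http://www.wikidata.org/prop/>
    PREFIX pq:   <http://www.wikidata.org/prop/qualifier/>
    PREFIX wdt:  <http://www.wikidata.org/prop/direct/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?article ?ticker ?industryLabel ?employees
    WHERE {
      ?article a ex:Article ;
               ex:mentionsTicker ?ticker .          # local data
      SERVICE <https://query.wikidata.org/sparql> {
        ?company p:P414 ?listing .                  # stock exchange listing
        ?listing pq:P249 ?ticker .                  # ticker symbol qualifier
        OPTIONAL { ?company wdt:P452 ?industry .
                   ?industry rdfs:label ?industryLabel .
                   FILTER(LANG(?industryLabel) = "en") }
        OPTIONAL { ?company wdt:P1128 ?employees }  # employee count
      }
    }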

James Midkiff

Many organizations do not need to externally share the knowledge graph data they create. The data may support externally-facing use cases, like chatbots, search, and knowledge panels, and this is usually more than sufficient to meet an organization’s knowledge graph needs. Taxonomies can be transformed and imported into any relational or NoSQL database in much the same way that we translate other data formats into RDF when building a graph. While graph databases can make this connection more seamless, they are by no means the only way to implement a taxonomy. Relational and NoSQL databases are more commonly used, making it easier to find the necessary skill sets to implement and maintain them. With so many developers used to query languages like SQL, SPARQL’s pattern-based nature can be difficult to learn and adopt.

To be clear, graph databases are an investment. They’re a shift in how we approach and integrate data, which can lead to some adoption costs. However, they can also bring advantages to an organization in addition to what Bess mentioned above. 

  • Comprehensive, Connected Data – Graphs provide descriptive data models and the ability to query and combine multiple graphs together painlessly, without requiring the join tables, intermediary schemas, or rules often required by relational databases.
  • Extendable Foundation – Knowledge graphs and graph databases enable the reuse of existing information as well as the flexibility to add more types of data, properties, and relationships to implement new use cases with minimal effort. 
  • Lower Costs – The upfront investment (licensing fees, the cost of migrating data, and the cost of hiring or growing the appropriate skill sets) can balance out in the long term given the flexibility to adapt the data model with evolving data and use cases.

Graph technologies are important to consider when building for the future and scale of data at your organization.

Conclusion

Like any major data architecture component, graph databases have their fair share of both pros and cons, and the choice to use them will ultimately come down to what fits the needs of each organization. If you’re looking for help in determining whether a graph database is a good fit for your organization, contact us.
