Enterprise Knowledge Graphs Articles - Enterprise Knowledge http://enterprise-knowledge.com/tag/enterprise-knowledge-graphs/ Tue, 21 Oct 2025

The Resource Description Framework (RDF) https://enterprise-knowledge.com/the-resource-description-framework-rdf/ Mon, 24 Feb 2025

The post The Resource Description Framework (RDF) appeared first on Enterprise Knowledge.

Simply defined, a knowledge graph is a network of entities, their attributes, and how they’re related to one another. While these networks can be captured and stored in a variety of formats, most implementations leverage a graph-based tool or database. However, within the world of graph databases, there are a variety of syntaxes or flavors that can be used to represent knowledge graphs. One of the most popular and ubiquitous is the Resource Description Framework (RDF), which provides a means to capture meaning, or semantics, in a way that is interpretable by both humans and machines.

What is RDF?

The Resource Description Framework (RDF) is a semantic web standard used to describe and model information for web resources or knowledge management systems. RDF consists of “triples,” or statements, with a subject, predicate, and object that resemble an English sentence. For example, take the English sentence: “Bess Schrader is employed by Enterprise Knowledge.” This sentence has:

  • A subject: Bess Schrader
  • A predicate: is employed by 
  • An object: Enterprise Knowledge

Bess Schrader and Enterprise Knowledge are two entities that are linked by the relationship “employed by.” An RDF triple representing this information would look like this:
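The same statement can be sketched in a few lines of Python. This is purely illustrative (plain strings stand in for RDF terms, and no real RDF library is used); real RDF identifies each term with a URI, as discussed later in this article:

```python
# A "triple" is a (subject, predicate, object) statement.
# Plain Python strings stand in for RDF terms here; real RDF
# uses URIs for each term.
triple = ("Bess Schrader", "employed by", "Enterprise Knowledge")

subject, predicate, obj = triple
print(f"{subject} --[{predicate}]--> {obj}")
```

Reading the triple back out, subject, predicate, and object map directly onto the parts of the English sentence.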

What is the goal of using RDF?

RDF is a semantic web standard, and thus has the goal of representing meaning in a way that is interpretable by both humans and machines. As humans, we process information through a combination of our experience and logical deduction. For example, I know that “Washington, D.C.” and “Washington, District of Columbia” refer to the same concept based on my experience in the world – at some point, I learned that “D.C.” was the abbreviation for “District of Columbia.” On the other hand, if I were to encounter a breathing, living object that has no legs and moves across the ground in a slithering motion, I’d probably infer that it was a snake, even if I’d never seen this particular object before. This determination would be based on the properties I associate with snakes (animal, no legs, slithers).

Unlike humans, machines have no experience on which to draw conclusions, so everything needs to be explicitly defined in order for a machine to process information this way. For example, if I want a machine to infer the type of an object based on properties (e.g. “that slithering object is a snake”), I need to define what a snake is and what properties it has. If I want a machine to reconcile that “Washington, D.C.” and “Washington, District of Columbia” are the same thing, I need to define an entity that uses both of those labels.

RDF allows us to create robust semantic resources, like ontologies, taxonomies, and knowledge graphs, where the meaning behind concepts is well defined in a machine readable way. These resources can then be leveraged for any use case that requires context and meaning to connect and unify data across disparate formats and systems, such as semantic layers and auto-classification.

How does RDF work?

Let’s go back to our single triple representing the fact that “Bess Schrader is employed by Enterprise Knowledge.”

We can continue building out information about the entities in our (very small) knowledge graph by giving all of our subjects and objects types (which indicate the general category/class that an entity belongs to) and labels (which capture the language used to refer to the entity).

[Diagram: RDF graph building on the “Bess Schrader employed by Enterprise Knowledge” triple, adding Person and Organization types]

These types and labels are helping us define the semantics, or meaning, of each entity. By explicitly stating that “Bess Schrader” is a person and “Enterprise Knowledge” is an organization, we’re creating the building blocks for a machine to start to make inferences about these entities based on their types.
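To make this concrete, here is a minimal Python sketch of the expanded graph. Plain tuples stand in for triples, and the property names (`type`, `label`) are illustrative stand-ins for the RDF vocabulary terms, not a real RDF library:

```python
# The expanded graph: every subject and object now carries a type
# (its general category/class) and a label (the language used to
# refer to it). Property names are illustrative stand-ins.
graph = {
    ("BessSchrader", "employedBy", "EnterpriseKnowledge"),
    ("BessSchrader", "type", "Person"),
    ("BessSchrader", "label", "Bess Schrader"),
    ("EnterpriseKnowledge", "type", "Organization"),
    ("EnterpriseKnowledge", "label", "Enterprise Knowledge"),
}

def types_of(entity, triples):
    """All asserted types for an entity: the hook a reasoner would
    use to begin drawing type-based inferences."""
    return {o for s, p, o in triples if s == entity and p == "type"}

print(types_of("BessSchrader", graph))  # {'Person'}
```

Once types are explicit, a machine can answer "what kind of thing is this?" by simple lookup, which is the first step toward inference.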

Similarly, we can create a more explicit definition of our relationship and attributes, allowing machines to better understand what the “employed by” relationship means. While the above diagram represents our predicate (or relationship) as a straight line between two entities, in RDF, our predicate is itself an entity and can have its own properties (such as type, label, and description). This is often referred to as making properties “first class citizens.”
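The "first class citizens" idea can be sketched the same way: because the predicate is itself an entity, it can appear as the subject of its own triples. The property names below are illustrative:

```python
# In RDF, the predicate is itself an entity, so it can be the
# subject of other triples. Here "employedBy" carries its own
# label and description (names are illustrative).
graph = {
    ("BessSchrader", "employedBy", "EnterpriseKnowledge"),
    ("employedBy", "label", "employed by"),
    ("employedBy", "description",
     "Relates a person to the organization that employs them."),
}

# Because the predicate is a first-class entity, we can look up
# what it means exactly as we would for any other node:
facts_about_predicate = {(p, o) for s, p, o in graph if s == "employedBy"}
```

A machine consuming this graph can discover what "employed by" means by querying the predicate itself, rather than relying on out-of-band documentation.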

Uniform Resource Identifiers (URIs)

But how do we actually make this machine readable? Diagrams in a blog are great for helping humans understand concepts, but machines need this information in a machine readable format. To make our graph machine readable, we’ll need to leverage unique identifiers.

One of the key elements of any knowledge graph (RDF or otherwise) is the principle of “things, not strings.” As humans, we often use ambiguous labels (e.g. “D.C.”) when referring to a concept, trusting that our audience will be able to use context to determine our meaning. However, machines often don’t have sufficient context to disambiguate strings – imagine “D.C.” has been applied as a tag to an unstructured text document. Does “D.C.” refer to the capital city of the US, the comic book publisher, “direct current,” or something else entirely? Knowledge graphs seek to reduce this ambiguity by using entities or concepts that have unique identifiers and one or more labels, instead of relying on labels themselves as unique identifiers.

RDF is no exception to this principle – all RDF entities are defined using a Uniform Resource Identifier (URI), which can be used to connect all of the labels, attributes, and relationships for a given entity.

Using URIs, our RDF knowledge graph would look like this:

[Diagram: sample knowledge graph using URIs to connect concepts]

These URIs make our triples machine readable by creating unambiguous identifiers for all of our subjects, predicates, and objects. URIs also enable interoperability and the ability to share information across multiple systems – because these URIs are globally unique, any two systems that reference the same URI should be referring to the same entity.
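The “things, not strings” principle can be sketched in a few lines of Python: entities are keyed by URI, and an ambiguous string like “D.C.” may resolve to more than one entity. The URIs and labels below are hypothetical:

```python
# "Things, not strings": one URI, many labels. The URIs below are
# hypothetical; any globally unique identifier scheme works.
labels = {
    "http://example.com/id/washington-dc": [
        "Washington, D.C.", "Washington, District of Columbia", "D.C.",
    ],
    "http://example.com/id/dc-comics": ["DC Comics", "D.C."],
}

def entities_for_label(label):
    """Return every URI carrying the given label. An unambiguous
    label matches one entity; a string like 'D.C.' matches several,
    which is exactly the ambiguity URIs exist to resolve."""
    return [uri for uri, names in labels.items() if label in names]

print(entities_for_label("Washington, District of Columbia"))
print(entities_for_label("D.C."))  # ambiguous: two candidate URIs
```

Systems that exchange the URI rather than the label sidestep the disambiguation problem entirely.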

What are the advantages of using RDF?

The RDF Specification has been maintained by the World Wide Web Consortium (W3C) for over two decades, meaning it is a stable, well documented framework for representing data. This makes it easy for applications and organizations to develop RDF data in an interoperable way. If you create RDF data in one tool and share it with someone else using a different RDF tool, they will still be able to easily use your data. This interoperability allows you to build on what’s already been done — you can combine your enterprise knowledge graph with established, open RDF datasets like Wikidata, jump-starting your analytic capabilities. This also makes data sharing and migration between internal RDF systems simple, enabling you to unify data and reducing your dependency on a single tool or vendor.

The ability to treat properties as “first-class citizens” with their own properties allows you to store your data model along with your data, explaining what properties mean and how they should be used. This reduces ambiguity and confusion for data creators, developers, and data consumers alike. This ability to treat properties as entities also allows organizations to standardize and connect existing data. RDF data models can store multiple labels for the same property, enabling them to act as a “Rosetta Stone” that translates metadata fields and values across systems. Connecting these disparate metadata values is crucial to being able to effectively retrieve, understand, and use enterprise data.

Many implementations of RDF also support inference and reasoning, allowing you to explore previously uncaptured relationships in your data, based on logic developed in your ontology. This reasoning capability can be an incredibly powerful tool, helping you gain insights from your business logic. For example, inference and reasoning can capture information about employee expertise – a relationship that’s notoriously difficult to explicitly store. While many organizations attempt to have employees self-select their skills or areas of expertise, the completion rate of these self-selections is typically low, and even those that do complete the selection often don’t keep them up to date. Reasoning in RDF can leverage business logic to automatically infer expertise based on your organization’s data. For example, if a person has authored multiple documents that discuss a given topic, an RDF knowledge graph may infer that this person has knowledge of or expertise in that topic.
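The expertise-inference rule described above can be sketched as a simple rule over triples. This is an illustrative Python simulation with hypothetical names and an arbitrary threshold; real RDF systems would express such logic in an ontology or a SPARQL rule rather than hand-written code:

```python
# Sketch of the rule: "if a person authored several documents about
# a topic, infer hasExpertiseIn". Names and threshold are illustrative.
from collections import Counter

triples = [
    ("alice", "authored", "doc1"), ("doc1", "about", "taxonomy"),
    ("alice", "authored", "doc2"), ("doc2", "about", "taxonomy"),
    ("alice", "authored", "doc3"), ("doc3", "about", "taxonomy"),
    ("alice", "authored", "doc4"), ("doc4", "about", "sparql"),
]

def infer_expertise(triples, threshold=3):
    """Materialize (person, hasExpertiseIn, topic) triples for every
    person who authored at least `threshold` documents on a topic."""
    topic_of = {s: o for s, p, o in triples if p == "about"}
    counts = Counter(
        (s, topic_of[o]) for s, p, o in triples
        if p == "authored" and o in topic_of
    )
    return {(person, "hasExpertiseIn", topic)
            for (person, topic), n in counts.items() if n >= threshold}

print(infer_expertise(triples))
```

The inferred triple never appears in the source data; it is derived from business logic, which is the essence of reasoning over a knowledge graph.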

What are the disadvantages of using RDF?

To fully leverage the benefits of RDF, entities must be explicitly defined (see best practices below), which can require burdensome overhead. The volume and structure of these assertions, combined with the length and format of Uniform Resource Identifiers (URIs), can make getting started with RDF challenging for information professionals and developers used to working with more straightforward (albeit more ambiguous) data models. While recent advancements in generative AI have great potential to make RDF’s learning curve less onerous via human-in-the-loop RDF creation processes, learning to create and work with RDF still poses a challenge for many organizations.

Additionally, the “triple” format (subject – predicate – object) used by RDF only allows you to connect two entities at a time, unlike labeled property graphs. For example, I can assert that “Bess Schrader -> employed by -> Enterprise Knowledge,” but it’s not very straightforward in RDF to then add additional information about that relationship, such as what role I perform at Enterprise Knowledge, my start and end dates of employment, etc. While a proposed extension called RDF-star (RDF*) has been developed to address this, it has not yet been officially adopted by the W3C, and support for RDF-star in RDF-compliant tools remains ad hoc.
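One common workaround in standard RDF is to promote the relationship itself to a node (an n-ary relation pattern), hanging the extra details off that intermediate node. A sketch, with all names, roles, and dates hypothetical:

```python
# Standard-RDF workaround for statement-level metadata: model the
# relationship as its own node. All names and values are hypothetical.
graph = {
    ("employment1", "type", "Employment"),
    ("employment1", "employee", "BessSchrader"),
    ("employment1", "employer", "EnterpriseKnowledge"),
    ("employment1", "role", "Consultant"),       # hypothetical role
    ("employment1", "startDate", "2019-01-01"),  # hypothetical date
}

# The trade-off: the direct, readable triple
# (BessSchrader, employedBy, EnterpriseKnowledge) is replaced by a
# cluster of triples hanging off the intermediate "employment1" node.
details = {p: o for s, p, o in graph if s == "employment1"}
```

This pattern works in any RDF-compliant store today, at the cost of a less direct graph shape, which is precisely the ergonomic gap RDF-star aims to close.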

What are some best practices when using RDF to create a knowledge graph?

RDF, and knowledge graphs in general, are well known for their flexibility – there are very few restrictions on how data must be structured or what properties must be used for their implementation. However, there are some best practices when using RDF that will enable you to maximize your knowledge graph’s utility, particularly for reasoning applications.

All concepts should be entities with a URI

The guiding principle is “things, not strings.” If you’re describing something with a label that might have its own attributes, it should be an entity, not a literal string.

All entities should have a label

Using URIs is important, but a URI without at least one label is difficult to interpret for both humans and machines.

All entities should have a type

Again, remember that our goal is to allow machines to process information similarly to humans. To do this, all entities should have one or more types explicitly asserted (e.g. “Washington, D.C.” might have the type “City”).

All entities should have a description

While using URIs and labels goes a long way in limiting ambiguity (see our “D.C.” example above), adding descriptions or definitions for each entity can be even more helpful. A well-written description for an entity will leave little to no question around what this entity represents.

Following these best practices will help with reuse, governance, and reasoning.
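These practices can even be checked mechanically. Below is a small illustrative lint pass in Python over a tuple-based graph; the property names (`label`, `type`, `description`) are stand-ins for the corresponding RDF vocabulary terms, not a real validation framework (RDF itself would typically use SHACL for this):

```python
# A small lint pass over a graph, checking the best practices above:
# every entity should carry at least one label, type, and description.
# Property names are illustrative stand-ins for RDF vocabulary terms.
REQUIRED = ("label", "type", "description")

def missing_properties(graph):
    """Map each subject entity to the required properties it lacks."""
    subjects = {s for s, p, o in graph}
    report = {}
    for entity in subjects:
        have = {p for s, p, o in graph if s == entity}
        gaps = [p for p in REQUIRED if p not in have]
        if gaps:
            report[entity] = gaps
    return report

graph = {
    ("washingtonDC", "label", "Washington, D.C."),
    ("washingtonDC", "type", "City"),
    # no description yet
}
print(missing_properties(graph))  # {'washingtonDC': ['description']}
```

Running a check like this during governance reviews keeps gaps from accumulating as the graph grows.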

Want to learn more about RDF, or need help getting started? Contact us today.

Knowledge Graph University https://enterprise-knowledge.com/knowledge-graph-university-product/ Thu, 09 Feb 2023

The post Knowledge Graph University appeared first on Enterprise Knowledge.

If you’re looking to build out competency with semantics and knowledge graphs on your team, EK’s Knowledge Graph University is a great place to start. This five-module intensive training is taught by our expert instructors, who will coach your team in the various aspects of knowledge graphs, from the basic concepts and business applications through to the technical skills needed to create your own knowledge graph from scratch. We tailor the course to your audience and skillset needs, highlighting the information you need to successfully achieve your business goals. EKGU’s five training modules are offered at three levels (basic, intermediate, and advanced), customized for the following audiences:

  • Information Analysts – Foundational principles around taxonomies, ontologies, and semantics, and common business applications of ontologies based on real case studies. Provides approaches for supporting simple to complex semantic solutions by analyzing content and data sources to discover relationships and make connections across large datasets.
  • Taxonomists/Ontologists – Gain hands-on experience designing and implementing complex ontology solutions that integrate taxonomies, ontologies, and knowledge graphs. Hands-on modeling and practice labs offer practical experience documenting ontology designs in a subset of industry-leading semantic tools for ontology management.
  • Data/Knowledge Managers – An outcomes-based, user-centric approach to ontology design and graph implementation, grounded in real-world applications. Includes specific approaches for aligning technical requirements with organizational ROI.
  • Knowledge Graph Engineers/Implementers – Hands-on experience leading and supporting the technical implementation of semantic solutions, leveraging best-in-class, field-proven taxonomy/ontology management tools and graph databases. Master key concepts around semantic inference, structured and unstructured data, auto-tagging, SPARQL, advanced validation with SHACL, and the implementation of knowledge graph and Artificial Intelligence (AI) solutions.
Download the Knowledge Graph University Brochure

Topics Covered 

  • Ontology Basics 
  • Ontology Design 
  • Advanced Ontology Design and Data Modeling 
  • Ontology to Graph Implementation 
  • Advanced Knowledge Graph Implementation

Business Outcomes

  • Build Capacity – Empower your staff to lead, execute, and support strategic efforts with advanced data and knowledge engineering to solve enterprise AI needs and challenges.
  • Leverage Data and Information to Answer Business Questions – Help leaders at different levels of your organization answer strategic questions by training business and technical employees on how to use semantic technologies to make sense of multiple, disparate, and diverse information sources. Make your organization’s data work for you to answer key business questions in real time while leveraging institutional knowledge and human capital.
  • Gain Competitive Advantage – Build the foundations for enterprise AI to create 360-degree views of your customers, employees, products, and services through analytics dashboards and advanced search solutions, outperforming your competitors in areas such as findability and discoverability, knowledge and data management, and customer and employee engagement.

Interested in bringing Knowledge Graph University to your team or organization? Contact us here.

Knowledge Graph Accelerator https://enterprise-knowledge.com/knowledge-graph-accelerator-product/ Wed, 08 Feb 2023

The post Knowledge Graph Accelerator appeared first on Enterprise Knowledge.

Knowledge graphs can make Enterprise AI a reality for your organization. The Knowledge Graph Accelerator offering with Enterprise Knowledge (EK) establishes a practical, standards-based roadmap and prototype to quickly realize the potential of your knowledge and data.

What Can Knowledge Graphs Do for You?

Download the Knowledge Graph Accelerator Brochure

A knowledge graph orchestrates data in a way that represents how humans think about information. Common use cases include:

  • Findability and discovery – Understand the meaning and context behind searched terms, as opposed to just executing queries
  • Knowledge and data management – Integrate structured, semi-structured, and unstructured information into your data model to surface knowledge
  • Data analytics – Connect your data with a more flexible model and find insights hidden in relationships

Outcomes

EK’s Knowledge Graph Accelerator can help your organization: 

  • Better understand the foundations of knowledge graphs, including graph data model, data mapping, and data management
  • Design a first implementable version (FIV) knowledge graph leveraging a graph-based data management solution that can be scaled and enhanced
  • Identify a strategy for your organization to make Enterprise AI a reality

There are three sizes of Knowledge Graph Accelerator engagements, built to fit your organization’s budget and needs (see the attached flyer for more). Interested in accelerating your knowledge graph? Contact us here.

Knowledge Management Trends in 2023 https://enterprise-knowledge.com/knowledge-management-trends-in-2023/ Tue, 24 Jan 2023

The post Knowledge Management Trends in 2023 appeared first on Enterprise Knowledge.


As CEO of the world’s largest Knowledge Management consulting company, I am fortunate to possess a unique view of KM trends. For each of the last several years, I’ve written an annual list of these KM trends, and looking back, I’m pleased to have (mostly) been on point, having successfully identified such KM trends as Knowledge Graphs, the confluence of KM and Learning, the increasing focus on KM Return on Investment (ROI), and the use of KM as the foundation for Artificial Intelligence.

Every year, in order to develop this list, I engage EK’s KM consultants and thought leaders to help me identify which trends merit inclusion. We consider factors including themes in requests for proposals and requests for information; the strategic plans and budgets of global organizations; priorities for KM transformations; internal organizational surveys; interviews with KM practitioners, organizational executives, and business stakeholders; themes from the world’s KM conferences and publications; interviews with fellow KM consultancies and KM software leaders; and the product roadmaps of leading KM technology vendors.

The following are the seven KM trends for 2023:

 

KM at a Crossroads – The last several years have seen a great deal of attention and funding for KM initiatives. Both the pandemic and the Great Resignation caused executives to realize that their historical lack of focus on KM had resulted in knowledge loss, unhappy employees, and an inability to efficiently upskill new hires. At the same time, knowledge graphs matured to the point where KM systems could offer further customization and the ability to integrate multiple types of content from disparate systems more easily.

In 2023, much of the world is bracing for a recession, with the United States and Europe likely to experience a major hit. Large organizations have been preparing for this already, with many proactively reducing their workforce and cutting costs. Historically, organizations have drastically reduced KM programs, or even cut them out entirely, during times of economic stress. In 2008-2009, for instance, organizational KM spending was gutted, and many in-house KM practitioners were laid off.

I anticipate many organizations will do the same this year, but far fewer than in past recessions. The organizations that learned their lessons from the pandemic and staffing shortages will continue to invest in KM, recognizing the critical business value offered. KM programs are much more visible and business critical than they were a decade ago, thanks to maturation in KM practices and technologies. Knowledge Management programs can deliver business resiliency and competitive advantage, ensure that knowledge is retained in the organization, and enable employee and customer satisfaction and resulting retention. The executives that recognize this will continue their investments in KM, perhaps scaled down or more tightly managed, but continued nonetheless. 

Less mature organizations, on the other hand, will repeat the same mistakes of the past, cutting KM, and with it, walking knowledge out the door, stifling innovation, and compounding retention issues, all for minimal and short-term savings. This KM trend, put simply, will be the divergence between organizations that compound their existing issues by cutting KM programs and those that keep calm and KM on.

 

Focus on Business Value and ROI – Keying off the previous trend, and revisiting a trend I’ve identified in past years, 2023 will bring a major need to quantify the value of KM. In growth years when economies are booming, we’ve typically seen a greater willingness for organizations to invest in KM efforts. This year, there will be a strong demand to prove the business value of KM.

For KM practitioners, this means being able to measure business outcomes instead of just KM outcomes. Examples of KM outcomes are improved findability and discoverability of content, increased use and reuse of information, decreased knowledge loss, and improved organizational awareness and alignment. All of these things are valuable, as no CEO would say they don’t want them for their organization, and yet none of them are easily quantifiable and measurable in terms of ROI. Business outcomes, on the other hand, can be tied to meaningful and measurable savings, decreased costs, or improved revenues. Business outcomes resulting from KM transformations can include decreased storage and software license costs, improved employee and customer retention, faster and more effective employee upskilling, and improved sales and delivery. The KM programs that communicate value in terms of these and other business outcomes will be those that thrive this year.

This KM trend is a good one for the industry, as it will require that we put the benefits to the organization and end users at the center of any decision.

 

Knowledge Portals – Much to the surprise, if not disbelief, of many last year, I predicted that portals would make a comeback from their heyday in the early 2000s. The past year validated this prediction, with more organizations making multi-year, multi-million dollar investments in KM transformations with a Knowledge Portal (or KM Portal) at the center of the effort. As I wrote about recently, both the critical awareness of KM practices and the technology necessary to make a Knowledge Portal work have come a long way in the last twenty years. Steered further by the aforementioned drivers of remote work and the Great Resignation, organizations are now implementing Knowledge Portals at the enterprise level.

The use cases for Knowledge Portals vary, with some treating the system as an intranet or knowledge base, others using it as a hub for learning or sales, and still others using it more for tacit knowledge capture and collaboration. Regardless of the use cases, what makes these Knowledge Portals really work is the usage of Knowledge Graphs. Knowledge Graphs can link information assets from multiple applications and display them on a single screen without complicated and inflexible interface development. CIOs now have a way to do context-driven integration, and business units can now see all of the key information about their most critical assets in a single location. What this means is that Knowledge Portals can now solve the problem of application information silos, enabling an organization to collectively understand everything its people need to know about its most important knowledge assets.

 

Context-Driven KM – We’ve all heard the phrase, “Content is King,” but in today’s KM systems, Context is the new reigning monarch. The new trend in advanced knowledge systems is for them to be built not just around information architecture and content quality, but around knowledge graphs that provide a knowledge map of the organization. A business model and knowledge map expressed as an ontology delivers a flexible, expandable means of relating all of an organization’s knowledge assets, in context, and revealing them to users in a highly intuitive, customized manner. Put simply, this means that any given user can find what they’re looking for and discover that which they didn’t even know existed in ways that feel natural. Our own minds work in the same way as this technology, relating different memories, experiences, and thoughts. A system that can deliver on this same approach means an organization can finally harness the full breadth of information they possess across all of their locations, systems, and people for the purposes of collaboration, learning, efficiency, and discovery. Essentially, it’s what everyone has always wanted out of their information systems, and now it’s a reality.

 

Data Firmly in KM – Historically, most organizations have drawn a hard line between unstructured and structured information, managing them under different groups, in different systems, with different rules and governance structures. As the thinking around KM continues to expand, and KM systems continue to mature, this dichotomy will increasingly be a thing of the past. The most mature organizations today are looking at any piece of information, structured or unstructured, physical or digital, as a knowledge asset that can be connected and contextualized like any other. This includes people and their expertise, products, places, and projects. The broadening spectrum of KM is being driven by knowledge graphs and their expanding use cases, but it also means that topics like data governance, metadata hubs, data fabric, data mesh, data science, and artificial intelligence are entering the KM conversation. In short, the days of arguing that an organization’s data is outside the realm of a KM transformation are over.

 

Push Over Pull – When considering KM systems and technology, the vast majority of the discussion has centered around findability and discoverability. We’ve often talked about KM systems making it easier for the right people to find the information they need to do their jobs. As KM technologies mature, the way we think about connecting people and the knowledge they need is shifting. Rather than just asking, “How can we enable people to find the right information?”, we can also think more seriously about how we proactively deliver the right information to those people. This concept is not new, but the ability to deliver on it is increasingly real and powerful.

When we combine an understanding of all of our content in context, with an understanding of our people and analytics to inform us how people are interacting with that content and what content is new or changing, we’re able to begin predictively delivering content to the right people. Sometimes, this is relatively basic, providing the classic “users who looked at this product also looked at…” functionality by matching metadata and/or user types, but increasingly it can leverage graphs and analytics to recognize when a piece of content has changed or a new piece of content of a particular type or topic has been created, triggering a push to the people the system predicts could use that information or may wish to be aware of it. Consider a user who last year leveraged twelve pieces of content to research a report they authored and published. An intelligent system can recognize the author should be notified if one of the twelve pieces of source content has changed, potentially suggesting to the content author they should revisit their report and update it.

Overall, the trend we’re seeing here is about Intelligent Delivery of content and leveraging AI, Machine Learning, and Advanced Content Analytics in order to deliver the right content to individuals based on what we know and can infer about them. We’re seeing this much more as a prioritized goal within organizations but also as a feature software vendors are seeking to include in their products.

 

Personalized KM – With all the talk of improved technology, delivery, and context, the last trend is more of a summary of trends. KM, and KM systems, are increasingly customized to the individual being asked to share, create, or find and leverage content. Different users have different missions, with some acting more as consumers of knowledge within an organization and others more as creators or suppliers of it. Advanced KM processes and systems will recognize a user’s responsibilities and mandates and will enable them to perform and deliver in the most intuitive and seamless way possible.

This trend has a lot to do with content assembly and flexible content delivery. It means that, with the right knowledge about the user, today’s KM solutions can assemble only that information that pertains to the user, removing all of the detritus that surrounds it. For instance, an employee doesn’t need to wade through hundreds of pages of an employee handbook that aren’t pertinent to them; instead, they should receive an automatically generated version specifically for their location, role, and benefits.

The customized KM trend isn’t just about consuming information, however. More powerfully, it is also about driving knowledge sharing behaviors. For example, any good project manager should capture lessons learned at the end of a project, yet we often see organizations fail to get their PMs to do this consistently. A well-designed KM system will recognize an individual as a PM, understand the context of the projects they are managing, and be able to leverage data to know when that project is completed, thereby prompting the user with a specific lessons learned template at the appropriate time to capture that new set of information as content. That is customized KM. It becomes part of the natural work and operations of systems, and it makes it easier for a user to “do the right thing” because the processes and systems are engineered specifically to the roles and responsibilities of the individual.

Another way of thinking about these trends is by invoking the phrase “KM at the Point of Need,” derived from a phrase popularized in the learning space (Learning at the Point of Need). We’re seeing KM head toward delivering highly contextualized experiences and knowledge to the individual user at the time and in the way they need it and want it. What this means is that KM becomes more natural, more simply the way that business is done rather than a conscious or deliberate act of “doing KM.” This is exciting for the field, and it represents true business value and transformation.

 

Do you need help understanding and harnessing the value of these trends? Contact us to learn more and get started.

 

The post Knowledge Management Trends in 2023 appeared first on Enterprise Knowledge.

]]>
Climbing the Ontology Mountain to Achieve a Successful Knowledge Graph https://enterprise-knowledge.com/climbing-the-ontology-mountain-to-achieve-a-successful-knowledge-graph/ Mon, 21 Nov 2022 20:58:59 +0000 https://enterprise-knowledge.com/?p=16851 Tatiana Baquero Cakici, Senior KM Consultant, and Jennifer Doughty, Senior Solution Consultant from Enterprise Knowledge’s Data and Information Management (DIME) Division presented at the Taxonomy Boot Camp (KMWorld 2022) on November 17, 2022. KMWorld is the world’s leading knowledge management … Continue reading

The post Climbing the Ontology Mountain to Achieve a Successful Knowledge Graph appeared first on Enterprise Knowledge.

]]>
Tatiana Baquero Cakici, Senior KM Consultant, and Jennifer Doughty, Senior Solution Consultant from Enterprise Knowledge’s Data and Information Management (DIME) Division presented at the Taxonomy Boot Camp (KMWorld 2022) on November 17, 2022. KMWorld is the world’s leading knowledge management event that takes place every year in Washington, DC.

Their presentation “Climbing the Ontology Mountain to Achieve a Successful Knowledge Graph” focused on how ontologies have gained momentum as a strong foundation for resolving business challenges through semantic search solutions, recommendation engines, and AI strategies. Cakici and Doughty explained that taxonomists are now faced with the challenge of gaining knowledge and experience in designing and documenting complex solutions that involve the integration of taxonomies, ontologies, and knowledge graphs. They also emphasized that taxonomists are well poised to learn how to design user-centric ontologies, analyze and map data from various systems, and understand the technological architecture of knowledge graph solutions. After describing the key roles and responsibilities needed for a team to successfully implement Knowledge Graph projects, Cakici and Doughty shared practical ontology design considerations and best practices based on their own experience. Lastly, Cakici and Doughty reviewed the most common use cases for knowledge graphs and presented real world applications through a case study that illustrated ontology design and the value of knowledge graphs.

The post Climbing the Ontology Mountain to Achieve a Successful Knowledge Graph appeared first on Enterprise Knowledge.

]]>
JPL’s Institutional Knowledge Graph II: A Foundation for Constructing Enterprise and Domain-Specific Semantic Data Sets https://enterprise-knowledge.com/jpls-institutional-knowledge-graph-ii-a-foundation-for-constructing-enterprise-and-domain-specific-semantic-data-sets/ Mon, 21 Nov 2022 20:15:08 +0000 https://enterprise-knowledge.com/?p=16843 Previously at KMWorld 2021, EK joined JPL to share the vision, approach, and delivery of the Institutional Knowledge Graph (IKG), a centrally maintained, ever-evolving knowledge graph identifying and describing JPL’s enterprise-wide concepts, such as people, organizations, projects, and facilities, and … Continue reading

The post JPL’s Institutional Knowledge Graph II: A Foundation for Constructing Enterprise and Domain-Specific Semantic Data Sets appeared first on Enterprise Knowledge.

]]>
Previously at KMWorld 2021, EK joined JPL to share the vision, approach, and delivery of the Institutional Knowledge Graph (IKG), a centrally maintained, ever-evolving knowledge graph identifying and describing JPL’s enterprise-wide concepts, such as people, organizations, projects, and facilities, and the relationships between them. Since August 2020, the IKG has offered a single source of enterprise information that other JPL applications can leverage to reduce redundancy and out-of-date or inaccurate data. In production for 2 years and now with several releases under its belt, the IKG is beginning to fulfill its promise as a foundational layer in the semantic pyramid for additional taxonomies and knowledge graphs to build upon.

At KM World 2022, Bess Schrader, Senior Solutions Consultant at EK, and Ann Bernath, Software Systems Engineer at JPL, shared a follow-up to the IKG journey including a description of the Enterprise Semantic Platform, a look at new taxonomies and knowledge graphs at JPL (enterprise-wide, others specific to engineering, technical, or science domains) and how they are beginning to leverage the IKG’s foundation of JPL concepts to enrich their dataset into a broader context. This presentation discussed different techniques to federate or synchronize multiple knowledge graphs and how these diverse integrations benefit not only the new datasets, but also the IKG as it continues to pursue its overarching dream–providing answers to questions such as, “Who did what when?”, “Who should you call?”, and “Where is the Robotics Lab?”

The post JPL’s Institutional Knowledge Graph II: A Foundation for Constructing Enterprise and Domain-Specific Semantic Data Sets appeared first on Enterprise Knowledge.

]]>
Why Invest in a Knowledge Graph? Your Digital Transformation and Enterprise AI Initiatives Depend on It https://enterprise-knowledge.com/why-invest-in-a-knowledge-graph-your-digital-transformation-and-enterprise-ai-initiatives-depend-on-it/ Thu, 02 Jun 2022 21:54:53 +0000 https://enterprise-knowledge.com/?p=15578 The scope and success for digital transformations and advanced enterprise solutions is driven by a few factors, namely, defined use cases, available data, and people and SMEs. For many organizations, the biggest hurdle is knowing where to start. Starting with … Continue reading

The post Why Invest in a Knowledge Graph? Your Digital Transformation and Enterprise AI Initiatives Depend on It appeared first on Enterprise Knowledge.

]]>
The scope and success of digital transformations and advanced enterprise solutions are driven by a few factors, namely defined use cases, available data, and people and SMEs. For many organizations, the biggest hurdle is knowing where to start. Starting with defined problems for which the organization has the most valuable data has proven to be the most successful path toward scale and adoption for such solution-based initiatives. 

This has been supported by an interesting trend that I have been seeing in digital transformations and enterprise artificial intelligence (AI) implementations recently: knowledge management is becoming one of the fastest-growing areas of AI spending. Why? Because just 21% of firms are able to adopt AI at scale, and the primary challenge is the lack of clear strategies for sourcing the knowledge that AI requires from data. 

Our experience at EK also supports this trend. If we look at where our clients are in their AI journey, we are seeing that the organizations that have made the most progress in digitizing their core business processes and institutional knowledge are also on the leading edge of AI adoption. The image below provides a high-level maturity spectrum and where our clients are on the path toward enterprise AI.

[Image: Percent of clients in each stage of the AI journey. Pre-AI: 9%; Foundations: 30%; Prototyping: 50%; Enterprise AI: 11%]

If you look closely at the findings, what’s interesting is that more organizations have moved past the foundational stage and are conducting pilots or prototypes. These organizations are finding that this is a quick way to show value, create visibility, and reach go/no-go decisions faster. 

Still, we have yet to see many organizations move to enterprise AI scale. Out-of-the-box solutions such as IBM Watson Discovery (recently discontinued) have not been enough to give organizations the unique collective organizational knowledge they need in order to understand and combine the volumes of data they have so they can discover, infer, and recommend information. 

A knowledge graph, a machine-readable model for applying business context (semantics) and capturing relationships between your data and content, allows organizations to create a machine-readable version of their collective institutional knowledge. A knowledge graph uses an abstraction layer that aggregates data that is heavily dependent on context, connectivity, and relationships – in the way people think about and describe data – even if it is siloed and in diverse formats.
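To make this concrete, here is a minimal sketch of the triple-based idea in plain Python. All entity and relationship names below are invented for illustration; a real implementation would use a graph database and standards such as RDF rather than hand-rolled tuples.

```python
# A tiny knowledge graph as (subject, predicate, object) triples.
# Data and the metadata that describes it live in the same structure,
# so a machine can traverse context across "systems".
graph = {
    ("doc-17", "type", "Report"),
    ("doc-17", "topic", "Knowledge Graphs"),
    ("doc-17", "author", "person-3"),
    ("person-3", "name", "B. Schrader"),
    ("person-3", "worksFor", "org-1"),
    ("org-1", "name", "Enterprise Knowledge"),
}

def describe(entity):
    """Everything the graph states about a single entity."""
    return {p: o for s, p, o in graph if s == entity}

# Traverse from a document to its author to the author's organization.
author = describe("doc-17")["author"]
org = describe(author)["worksFor"]
print(describe(org)["name"])  # Enterprise Knowledge
```

The same traversal works regardless of which source system each fact originally came from, which is the point of the abstraction layer.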

Three Ways a Knowledge Graph Helps Address Digital Transformation and AI Challenges

Applying organizational context and knowledge [in systems]

Deep learning and machine learning practices have been providing significant value in advancing organizational capabilities and allowing organizations to get started with the exploration of big data analysis to figure out what is in their data that spans multiple years and legacy systems. Yet, data teams are starting to recognize challenges with that approach and are getting stalled after some exploration. These challenges include: 

  • Spending a good chunk of their time preparing and modeling information that typically requires expertise in information architecture principles and knowledge management;
  • Requiring additional domain expertise to help them understand or organize how to label data based on business needs; 
  • Not having a clear answer to fundamental questions such as “what problem are we solving?”, “What is considered good training data?”, and “Are we creating the right algorithms?”; and
  • Introducing rework and mistakes in analysis of algorithms, which not only results in bad errors but also creates a lack of trust for their solutions and can ultimately lead to abandonment of AI initiatives.

The best way to make data easier to organize and manage is to enhance it with rich, descriptive metadata. Knowledge graphs leverage taxonomy and ontology to provide the most intuitive description for your organization and enforce a standard vocabulary and user experience to more easily and efficiently structure your content. A knowledge graph helps achieve this by:

  • Normalizing content across disparate systems and processes;
  • Enforcing consistent tags for content;
  • Mapping relationships between disparate content;
  • Optimizing the search experience with synonyms;
  • Streamlining discovery with faceting;
  • Providing the foundational structure for advanced capabilities like natural language processing (NLP) and machine learning (ML); and
  • Organizing and integrating structured and unstructured data.
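As a small, hedged illustration of the consistent-tagging and synonym points above (the vocabulary and document names are invented, and real taxonomy management tools operate at a much larger scale):

```python
# A hypothetical controlled vocabulary mapping synonyms and variant
# spellings to one preferred taxonomy label.
PREFERRED = {
    "ml": "Machine Learning",
    "machine-learning": "Machine Learning",
    "machine learning": "Machine Learning",
    "nlp": "Natural Language Processing",
    "natural language processing": "Natural Language Processing",
}

def normalize_tag(raw):
    """Map a raw tag to its preferred label, if the vocabulary knows it."""
    key = raw.strip().lower()
    return PREFERRED.get(key, raw.strip())

# Content tagged inconsistently across systems...
docs = {
    "intro.pdf": ["ML", "nlp"],
    "guide.docx": ["machine-learning"],
}

# ...normalized to consistent tags, enabling synonym search and faceting.
tagged = {doc: sorted({normalize_tag(t) for t in tags})
          for doc, tags in docs.items()}
print(tagged["intro.pdf"])  # ['Machine Learning', 'Natural Language Processing']
```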

Providing the missing link across siloed and diverse data and content 

Over the last decade, a sizable number of organizations have invested in data lakes and warehouses with the goal to integrate and access data at scale. Many organizations are realizing that moving data into one physical location doesn’t really provide answers as to how data is connected or related. Advanced analytics and enterprise AI need to understand the relationship between diverse and siloed data to make any meaningful decisions. This means that enterprise solutions need to go above and beyond simple search and physical integrations of data to allow for natural language answers and specific recommendations. Imagine searching for “who worked on this project with Company XYZ?” and getting a running list of links or irrelevant documents. This is unhelpful and does not address the context behind the question.

To answer these natural language questions with a specific response (without relying on personal networks), organizations need a way to understand and connect their relational data, such as people or customer data usually stored in customer relationship management systems (CRMs), directories, or HR systems, with semi-structured data like project data typically tracked in project management applications, and with unstructured data such as reports in PDF or video form usually stored in digital asset management systems (DAMs). 

How a knowledge graph helps address these AI challenges: Graphs allow for aggregation of information from multiple disparate solutions, linking information that exists in multiple locations and formats using context, relationships, and metadata. This shifts the focus of AI solutions from the data itself to an abstraction layer – the metadata and other meaningful context that people use to find, understand, and interact with their content across systems. While metadata and taxonomy allow us to standardize the descriptions and labels for our data, ontologies and knowledge graphs give us the ability to connect them. As such, a knowledge graph serves as a semantic data fabric layer and works best as a natural integration framework for enabling interoperability of organizational information assets. A knowledge graph:

  • Allows organizations to store data with context (i.e., metadata and data are stored together);
  • Connects structured data, products, and services with unstructured content like marketing materials and videos;
  • Is built on semantic web standards that help organizations avoid proprietary solutions and freely move from current systems to future ones; and
  • Benefits from open data and can ingest large datasets from various sources to augment institutional knowledge with industry insights.
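To sketch the earlier "who worked on this project with Company XYZ?" question end to end, the records below stand in for three siloed systems and are lifted into one triple graph. All system names, fields, and values are invented for illustration:

```python
# Hypothetical records from three siloed systems.
crm_contacts = [{"person": "Alice", "client": "Company XYZ"}]    # CRM (structured)
project_records = [{"name": "Phoenix", "client": "Company XYZ",
                    "team": ["Alice", "Bob"]}]                   # PM tool (semi-structured)
assets = [{"file": "phoenix-report.pdf", "project": "Phoenix"}]  # DAM (unstructured assets)

# Lift everything into a single graph of (subject, predicate, object) triples.
graph = set()
for c in crm_contacts:
    graph.add((c["person"], "isContactFor", c["client"]))
for p in project_records:
    graph.add((p["name"], "hasClient", p["client"]))
    for member in p["team"]:
        graph.add((member, "workedOn", p["name"]))
for a in assets:
    graph.add((a["file"], "documents", a["project"]))

def who_worked_with(client):
    """Which people worked on any project delivered for `client`?"""
    client_projects = {s for s, p, o in graph if p == "hasClient" and o == client}
    return {s for s, p, o in graph if p == "workedOn" and o in client_projects}

print(sorted(who_worked_with("Company XYZ")))  # ['Alice', 'Bob']
```

The answer comes back as specific entities rather than a list of documents, because the relationships between the sources are part of the data.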

Trust in and predictability of the solution

Advanced data and knowledge management solutions are successful when used in areas of the business that create the most value and solve real business problems. However, scalability challenges persist since organizations need to be able to concretely understand and explain how their advanced AI or data solutions arrived at a given recommendation or decision. Over 90% of our clients face challenges with explainability. Errors from advanced solutions are compounded as organizations work with large volumes of data, and so the ability for users to explain or at least predict the path remains a key component. What this means is that in order for any advanced solution or machine to deliver value out of data or information, it first needs to have the context, knowledge, and understanding of how a person would describe, process, or interpret information. 

While the main solutions here are engaging domain experts, using vetted training data, and creating a repeatable way to validate results, organizations need the capabilities to influence machine decisions and transparency through knowledge representations and machine-readable, domain-based explanations.

How a knowledge graph helps address these AI challenges: Knowledge graphs allow organizations to encode context, expose connections and relationships, and support reasoning and inference, thus serving as logic representation layers for advanced AI, data analytics, and machine learning solutions. Knowledge graphs achieve this by:

  • Encoding the semantics behind data for text analytics and natural language understanding;
  • Serving as a better model to capture hierarchical relationships and providing logic for deciphering neural networks; and
  • Encoding knowledge as context to data and structuring ML models in an interpretable way.
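A toy example of the reasoning-and-inference point, using a deliberately simplified transitive rule (real ontology reasoners built on standards like RDFS/OWL are far more principled; every name here is invented):

```python
# Hypothetical facts about organizational containment.
facts = {
    ("Alice", "memberOf", "Data Team"),
    ("Data Team", "partOf", "Engineering"),
    ("Engineering", "partOf", "AcmeCorp"),
}

def infer_membership(triples):
    """Materialize inferred memberOf facts via a simplified rule:
    if X memberOf/partOf Y and Y partOf Z, infer X memberOf Z."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(a, "memberOf", c)
               for a, p1, b in inferred if p1 in ("memberOf", "partOf")
               for b2, p2, c in inferred if b2 == b and p2 == "partOf"}
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

closure = infer_membership(facts)
# The explicit facts never said so, but the graph can now explain that
# Alice is (transitively) a member of AcmeCorp.
print(("Alice", "memberOf", "AcmeCorp") in closure)  # True
```

Because each inferred fact is derived from named facts and a named rule, the system can show its work, which is exactly the explainability property discussed below.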

Closing 

Organizations are embracing digital transformation and advancing technological capabilities to stay relevant and competitive. Companies with good scaling practices spend half their digital transformation and enterprise AI budgets planning for integration and adoption. A successful advanced data or knowledge management solution is one that the organization understands and can support. This means that it is not enough for advanced digital solutions to be cool and exciting for the data or technical teams; they also need to be compelling to the business. Organizations need to design systems that employees can appropriately understand, manage, and trust. A knowledge graph, or a semantic approach, is one proven way to get to a working solution. Check out our diverse set of case studies to learn more about how our clients are adopting graph solutions to solve real-world problems. Contact us if you are looking to get started or want to advance your digital capabilities.

The post Why Invest in a Knowledge Graph? Your Digital Transformation and Enterprise AI Initiatives Depend on It appeared first on Enterprise Knowledge.

]]>
A Data Scientist Perspective on Knowledge Graphs (Part 1): the Data-Driven Challenge https://enterprise-knowledge.com/a-data-scientist-perspective-on-knowledge-modeling-part-1/ Tue, 12 Apr 2022 15:00:37 +0000 https://enterprise-knowledge.com/?p=15176 Photographer: Johannes Plenio This series of articles is a data scientist, and data engineer, perspective on knowledge graphs, which is intended not only for other data scientists and engineers, the nerdy role in the office that no one truly understands; … Continue reading

The post A Data Scientist Perspective on Knowledge Graphs (Part 1): the Data-Driven Challenge appeared first on Enterprise Knowledge.

]]>
[Image: a fisherman on a boat. Photographer: Johannes Plenio]

This series of articles is a data scientist, and data engineer, perspective on knowledge graphs, which is intended not only for other data scientists and engineers, the nerdy role in the office that no one truly understands; but also for executives and business groups, who ultimately decide where to steer the organization, and are inundated with a multitude of use cases and business capabilities; as well as for project managers, who are tasked with leading a group of cross-functional teams to move their data projects into successful efforts.

The goal of this series of articles is not to describe what data science, engineering, or machine learning is; rather, it will ultimately depict how these distinct roles intricately work together, and why we hear so many different names for them. In small teams, these roles are typically executed by the same person, hence the confusion.


This article, part 1, will be focusing on the data scientist path from knowledge discovery to solving a business challenge. To understand their perspective, we will explore the challenges facing data scientists and discuss knowledge management as a solution. In the second article of this series, part 2, we will see how knowledge graphs apply to data science and machine learning in the enterprise or business context.

A Scientific Approach to Business

The important word in data science, from a data scientist’s perspective, is science, not data. And science starts with a question (or hypothesis). Data comes in second, to validate or refute the hypothesis, answering the question with statistical significance, or not.

From a business perspective, this approach means that what matters first is the initial question. Asking the wrong question will always lead to the wrong answer.

[Diagram: the data science process]

From our experience within the enterprise, it means that a data scientist might have a cross-functional role based on the question asked, or the problem exposed, as the data needed to answer the question, or solve the problem, might be spread among different units.

Simple problems usually require simple solutions. Data scientists might fit in a single department, but we find that they are often involved in more than one, sitting between departments.

The key role of a data scientist is to answer questions with data, find the model that best suits a problem, assess its performance by developing quality assurance tests (statistical tests), and determine what is needed (often better or more data) to improve the model.

Most of the research executed by a data scientist consists of refining the initial question asked, or redefining the initial challenge exposed, usually uncovering other questions or challenges and iteratively making them more precise and more contextualized.

Here are a few examples of the questions or hypotheses that data scientists are confronted with:

  • What will be our revenue next year?
  • How can we maximize profits?
  • How can we increase sales?
  • Which products or services should we prioritize?
  • Which marketing campaign brings in more customers?

As we can see, the extent to which these questions apply is vast and they are mainly business economics questions, although data science can apply to more operational or organizational questions as well:

  • How can we improve our processes?
  • How can we increase service uptime?
  • How can we optimize tasks among employees?

Taking a scientific approach, the initial question that gave life to this series of articles shall be: What is a data scientist’s perspective on knowledge modeling and engineering in a business/enterprise context?

Let’s contextualize it in order to further refine this question.

The Data-Driven Challenge

Businesses are more and more confronted with AI which is now becoming ubiquitous. We hear about data scientists and engineers, sometimes AI or machine learning (ML) engineers, automating business processes, developing predictive models, and many other algorithmic things.

Artificial intelligence is now involved in many, if not most, business processes. Indeed, there are questions to answer, and challenges to overcome, at all levels and in every department of a company. Executives have strategic problems – where to go, how to innovate? Businesses have business problems – how to earn or do more with less. A level lower, organizations have organizational problems – who needs to know what, what do we need where, etc.

"Companies must re-examine the ways that they think about data as a business asset of their organizations. Data flows like a river through any organization. It must be managed from capture and production through its consumption and utilization at many points along the way." – Randy Bean, Harvard Business Review, "Why Is It So Hard to Become a Data-Driven Company?"

Businesses now have data lakes, because data is structurally siloed across their company. As the amount of data gathered is tremendous, they hired data scientists and engineers in order to make sense of it all.

In theory, everybody knows that. In practice, it is never as easy as it sounds, with people typically not knowing where to start. In the next section, we will dive into the data scientist’s path forward.

[Image: a man swimming underwater with some fish. Photographer: Rodrigo Pederzini]

The Data Scientist Path

Although business should prevail over technology, the very empirical nature of economics forces it the other way around when it comes to data science and engineering applied to business. The term data-driven depicts it best, but the issue here is that a data scientist can be left with only data and a simple “find something” instruction. We will place this extreme case of pure discovery at the very beginning, on the far left of our path, as shown in the following figure.

[Diagram: a line with “Discovery” on the far left and “Precise question / well-defined problem” on the far right]

The more we move to the right side of the data science path depicted above, the closer we get to a precise business question, or challenge, and consequently the closer we get to an applicable solution.

Although the path might not be so straightforward in practice, a data scientist strives to move from left to right.

The data scientist’s path is similar for most questions or challenges and consists of data quality assessments (is the data appropriate, sufficient, accessible, and discoverable?), exploratory data analysis (can we extract patterns or trends from the data?), and feasibility checks (can we get actionable results or business insights?).

[Diagram: the same line with three branches coming off it: 1) data quality assessment, 2) exploratory data analysis, 3) feasibility checks]

In practice, data quality assessment and exploratory data analysis follow much the same process, and the answers will depend on the initial question asked. As most cases follow the same path, the only differences are the time it takes and the end result, both of which depend on the question asked, or challenge definition.
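The data quality assessment step can be illustrated with a very small sketch (the records are invented; real assessments cover many more dimensions, such as freshness, duplicates, and schema conformance):

```python
# A hypothetical monthly sales extract with a gap.
records = [
    {"month": "2024-01", "sales": 120.0},
    {"month": "2024-02", "sales": None},   # missing value
    {"month": "2024-03", "sales": 135.5},
    {"month": "2024-04", "sales": 141.0},
]

def quality_report(rows, field):
    """Report row count, missing values, and completeness for one field."""
    values = [r[field] for r in rows]
    present = [v for v in values if v is not None]
    return {
        "rows": len(rows),
        "missing": len(values) - len(present),
        "completeness": len(present) / len(rows),
    }

print(quality_report(records, "sales"))
# {'rows': 4, 'missing': 1, 'completeness': 0.75}
```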

Let’s have a first look at the two extreme scenarios, two very different questions, or problems: “find something” and “find this”, and see how they differ in practice. We will also see how machine learning algorithms fit between these two. Finally, we will picture the best and worst cases we can expect from each scenario.

Scenario 1: No direction, purely discovery 

The enterprise generates tremendous amounts of data, about customers, employees, or resources, and we want to make sense of it all, or simply extract some business insights out of it.

The data scientist gets confronted with an overly general demand: “Find something in our data.”

In machine learning terms, most models in use here belong to the unsupervised learning family and end up framing a clustering, or categorization, problem. This is commonly called information or knowledge discovery, or retrieval.
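For intuition, here is a tiny unsupervised example: a one-dimensional k-means written from scratch that groups hypothetical order values into two clusters with no labels given. This is a sketch only; real work would use a library such as scikit-learn.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """A minimal 1-D k-means: assign values to the nearest center,
    then recompute centers, repeating for a fixed number of iterations."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical order values: the model "finds something" on its own,
# namely a small-purchase group and a large-purchase group.
orders = [9, 11, 10, 95, 105, 100, 12, 98]
print(kmeans_1d(orders, k=2))  # [10.5, 99.5]
```

The algorithm surfaces structure, but whether the two groups mean anything to the business is exactly the "define value" question raised next.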

Regardless of the output, businesses end up with the same tricky question: how to ensure value out of it? And tackling this issue is fairly simple: define value.

Indeed, this scenario requires at least some business context and objectives, otherwise the team might dig into pointless directions.

Worst case: the project ends up in endless research or inapplicable findings, making it hard to justify a value proposition and retain the team.

Best case: the project ends up as a classification, or categorization, problem, leading to the next scenario: more precise questions and better-defined challenges.

To avoid the worst case and achieve the best results, enterprises and managers should contextualize, or structure, the research and findings. This shall ultimately lead to more specific questions, which is the second scenario presented hereafter.


Scenario 2: Precise business question

The organization has a specific question, or goal. The data scientist’s job will be to study the feasibility of the question regarding data availability and quality, and to evaluate potential answers based on statistical significance and information availability (engineering), which together shall make it possible to conclude, potentially providing an answer to the initial question.

Because we don’t know in advance whether we shall be able to conclude on a question (often due to a lack of data, or poor data), the result, the answer, is uncertain.

Therefore, this scenario requires data scientists to have a list of (multiple) precise (well-defined) business questions that address specific business challenges or use cases. Having multiple questions increases the chance of gaining valuable insights from the analysis.

The machine learning models at play here are typically of the supervised learning family, although many solutions require only simple regressions, more intricate ones, or time series regressions, depending on the problem.
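A minimal supervised example: an ordinary least squares line fit from scratch on invented revenue history, answering a precise "find this" question about next month. This is a sketch; real forecasting would also need to handle seasonality and uncertainty.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in pure Python."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical revenue history for months 1..6 with a clear trend.
months = [1, 2, 3, 4, 5, 6]
revenue = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
a, b = fit_line(months, revenue)

# "Find this": what will revenue be in month 7?
print(a * 7 + b)  # 22.0
```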

The result is a list of feasibility checks, prototypes, or proofs of concept.

Worst case: the data is not as good as expected, or is simply not available at the moment. The problem gets postponed to when data is available.

For example, a monthly prediction requires, due to yearly seasonality, at least 24 months of data.

Best case: the data is good and the model works. We have a proof-of-concept that can lead to an implementation or integration phase.

Again, to avoid the worst case and achieve the best results, enterprises and managers should contextualize, or structure, the questions and answers.


Closing

The differences between the two scenarios discussed above are, of course, the output (whether it ends up as a classification or a predictive problem) and how time-consuming (and expensive) they are, both depending on the initial question, or challenge definition.

[Diagram: the same line with two additional branches titled “Proof of concept” and “Classification (or categorization)”]

In reality, the difference between “find something” and “find this” is rather significant.

“Find something” can lead to unnecessary answers, such as solutions without a problem. We will see later in this series how to avoid that situation.

“Find this”, “Find why this is”, “Find how this is”, or “Find a solution to this”, are already more precise and tangible questions but require “this” to be defined.

Companies will often place value in being data-driven, or following the data-driven approach, but we’ve seen here that an organization can be data-driven yet still ask the wrong questions. The value of a data science project is defined by the initial question. The data-driven approach is most valuable when the initial question is valuable to the business, meaning the answer to the question can be leveraged and have an impact on the enterprise.

A successful data science project starts with a good question. It does not necessarily mean that you will get a valid answer to your question, but rather that you will be able to answer the question with the data that you have.

Overall, we ensure value from a data science approach by being able to use the information or knowledge extracted from discovery to tackle precise business questions or well-defined problems.

The way our data scientist’s path fits within the company will be the subject of the second article, part 2, of this series. We will first put our data scientist’s path within the enterprise context, and second, we will see how knowledge graphs come in handy for that matter. Indeed, we will see how discovery in data science naturally leads to knowledge modeling and in turn how knowledge modeling helps define better, more precise, questions.

 

The post A Data Scientist Perspective on Knowledge Graphs (Part 1): the Data-Driven Challenge appeared first on Enterprise Knowledge.

]]>
Using Wireframes to Define and Visualize Enterprise Knowledge Graphs https://enterprise-knowledge.com/wireframes-visualize-knowledge-graphs/ Tue, 04 May 2021 14:00:03 +0000 https://enterprise-knowledge.com/?p=13096 As a complement to our enterprise search engagements with clients, we often end up exploring how the implementation of a knowledge graph can establish the foundation for more advanced KM and data efforts, such as smart search or AI capabilities … Continue reading

The post Using Wireframes to Define and Visualize Enterprise Knowledge Graphs appeared first on Enterprise Knowledge.

]]>
As a complement to our enterprise search engagements with clients, we often end up exploring how the implementation of a knowledge graph can establish the foundation for more advanced KM and data efforts, such as smart search or AI capabilities like chatbots and recommender engines. However, while knowledge graphs are relatively easy to understand from a conceptual perspective, many organizations aren’t sure how a knowledge graph can specifically benefit them and their working processes. Applying semantic meaning to data? Great! But what does that look like?

Between the newness of this powerful technology and the limitlessness of how it can be applied, I’ve seen many organizations struggle to define exactly what they want a knowledge graph to do for them. To better communicate all that a knowledge graph can do, I’ve found wireframing/interface design and user experience flow definition activities to be powerful tools in helping our clients define what exactly they hope to get out of leveraging a knowledge graph. These interface designs are all the more helpful when they’re a result of collaborative design sessions with project stakeholders and subject matter experts (SMEs). 

In this blog, I’ll be focusing specifically on how wireframing can help define knowledge graph-specific use cases from a search perspective, but know that knowledge graphs enhance a wide variety of KM-adjacent efforts at your organization and aren’t solely relegated to the realm of enterprise, asset-based search.

Defining Knowledge Graph Use Cases

As previously mentioned, an organization may be ready for knowledge graph implementation, but may not yet know why they need one or how they may benefit. Most simply, a knowledge graph imparts meaning to data and information, and from that point of view, the possible use cases can seem limitless. Consider the following use cases from a variety of our past clients spanning various industries:

1. As a researcher, I need to identify experts in a particular field by browsing related webinars, publications, conferences, committees, HR data, and other such entities, and be able to determine if I should invite that individual to join a committee. 

2. As a lab equipment purchaser, I need to access all available content about a specific product category so that I can make the most informed buying decision possible.

3. As a data scientist, I need to see how various financial institutions replied to a specific question on the same regulatory form and be able to easily traverse the relationships that exist between the data, institutions, and other such forms.

An example of a knowledge panel for the cardiovascular system, with three main sections: an overview, latest news and updates, and a list of popular courses.

From the search perspective, facilitated collaborative design sessions not only define the appearance and interaction points of various knowledge graph-supported functionalities, but also surface the types of questions users expect these functionalities to answer. Understanding the users’ goals allows us to structure, model, and build the data in a way that best answers those questions. Search enhancements like action-oriented results and knowledge panels can serve as a great stepping stone to defining use cases that are increasingly specific to your organization and the particular needs of your users.

Additionally, some of these initial design sessions can clarify what the knowledge graph shouldn’t do. For example, in recently collaborating with a provisioner of scientific instrumentation, software, and services, a design session started with the intention of defining the information assets and data entities to be represented in knowledge panels (meant to appear alongside search results). As part of that same session, search aggregator pages were also drafted and designed to feature any and all information, both knowledge- and product-specific, pertaining to a particular scientific technique. As the session continued, use cases better suited to the client’s goals materialized, and another series of wireframes, this time featuring a variety of page recommenders and topic classifiers, were devised to address those needs.

Identify Knowledge Graph-Supported Functionalities

Similar to use case identification, design sessions can also play a significant role in feature and functionality identification. There are not only many forms in which a knowledge graph’s data can be presented, but also a limitless variety of ways in which that data can be collated, organized, and ultimately surfaced. Oftentimes, feature and functionality definition happens alongside use case definition and interface design, as use cases define the user’s action and its associated goal, while the interface’s built-in functionality maps how that goal will be achieved.

At this step in the knowledge graph-focused use case definition process, some data assessment work is often necessary. Once knowledge graph-specific use cases have been identified, the quality and availability of the data to be accessed by the graph and presented in the interface must be assessed, both to gauge the feasibility of implementation and to determine whether a data clean-up effort is required.

Examples of Knowledge Graph Design in Practice

Below are two abbreviated examples of how EK used design sessions to define and meet user needs as they related to knowledge graph-supported functionalities and features.

Example 1

For a nonprofit and nongovernmental research organization, designs and use cases were iterated upon in parallel. As the project continued, both the designs and the use cases shifted, partly as a result of an iterative approach that allowed us to surface additional stakeholder thoughts and concerns, and partly because some of the featured data was discovered to need clean-up or was missing altogether. To ensure that the project was delivered on time and that expectations for the knowledge graph were defined and met, design-to-use-case mapping became a regular part of the working project cadence. For example, after each design walkthrough where additional requirements were identified or a use case was refined, those changes were mapped to the wireframe, allowing all project members to visually track how the proposed interface supported a use case, regardless of how much that use case may have changed. Mapping the goals of the use cases to specific actions and processes supported by the interface kept the project team’s forward momentum user focused and feasible. These design validation sessions also allowed EK to simultaneously train project stakeholders on how they could use wireframes to demonstrate the value of a knowledge graph throughout their organization.

Use Case to Interface Mapping: a walkthrough of the new design requirements and additions from the wireframe walkthroughs.

Example 2

At an accounting and tax services firm, staff were unable to see all content related to a specific client, a problem exacerbated by that content taking many forms, such as survey results and individual documents. To address this need, EK led a series of iterative design sessions to identify the types of data to be featured in both action-oriented results and search aggregator pages. Conducting these sessions iteratively means that each consecutive group of participants provides feedback on, and builds upon, the inputs supplied by the previous session. At the end of such a series, the resulting designs have been refined and validated multiple times by participants’ peers. For example, we worked with stakeholders across the organization to design the client page (featured here), which aggregates information from the organization’s content, HR, and CRM systems to ensure that employees could easily access information like engagement letters, active jobs, survey results, industry intelligence, and more, all as they related to that specific client. Iterative design in collaboration with individuals across the organization allowed us to identify all of these different types and sources of data and information. This repeatable process of refining and validating allows EK to confidently understand how a knowledge graph should be constructed, while also reassuring project stakeholders that the recommendations reflect exactly what users expect and will find valuable.

An overview of the client, their active jobs, their survey results, their success story, issues, services provided, and individuals associated with the client.

Example of a search aggregator page iteratively designed by workshop participants.

Conclusion

As you can see, there are a variety of decisions to make prior to implementing a knowledge graph, and design sessions with a focus on wireframing can help define the use cases, interface, and necessary features and functionalities to be supported by that interface. Running design workstreams parallel to or ahead of development and integration efforts ensures that those efforts are directed towards a high-value goal and, ultimately, an increased return on investment. If your organization is ready to explore the benefits of knowledge graph implementation but isn’t sure where to start, reach out to us.

The post Using Wireframes to Define and Visualize Enterprise Knowledge Graphs appeared first on Enterprise Knowledge.

]]>
What I’m Looking Forward to Learning at SEMANTiCS Austin 2020 https://enterprise-knowledge.com/what-im-looking-forward-to-learning-at-semantics-austin-2020/ Mon, 03 Feb 2020 16:43:16 +0000 https://enterprise-knowledge.com/?p=10379 SEMANTiCS Austin 2020 is the inaugural SEMANTiCS U.S. conference that will bring together knowledge graphs, ontologies, and Enterprise AI. These topics, among others, are of particular interest to my work in search and semantics, and I am excited to see … Continue reading

The post What I’m Looking Forward to Learning at SEMANTiCS Austin 2020 appeared first on Enterprise Knowledge.

]]>
SEMANTiCS Austin 2020 is the inaugural SEMANTiCS U.S. conference that will bring together knowledge graphs, ontologies, and Enterprise AI. These topics, among others, are of particular interest to my work in search and semantics, and I am excited to see how other organizations are leveraging semantic technologies. Below are the top four things I am looking forward to learning about at SEMANTiCS Austin 2020. 

Improving Business Processes with Linked Data

Organizations are beginning to work with linked data to improve business processes. There are two types of linked data to consider: open linked data and enterprise linked data. Open linked data is publicly available data that businesses can use to connect and extend their own information with pre-defined entities used across the internet. For example, one of our clients pulled in a hierarchical list of US states, counties, and cities in order to map their organizational sectors to geographic locations. This allowed them to quickly identify sectors based on an input street address. By connecting to open linked data sources, you can jumpstart the design of your domain model, pre-populate controlled lists, and improve your business taxonomy. In contrast, enterprise linked data is an organization’s internal knowledge graph. Internal knowledge graphs can improve data analysis and be a stepping stone on the path towards enterprise artificial intelligence (AI). As linked data becomes more common, new use cases are constantly emerging. SEMANTiCS Austin 2020 is a great opportunity to explore some of these new use cases and understand how other organizations are leveraging relationships in data.
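To make the idea concrete, here is a minimal sketch, in plain Python rather than a real RDF toolkit, of what linking internal records to open linked data can look like: internal entities are connected to public URIs via “sameAs”-style statements. All entity names and the open-data URI below are hypothetical placeholders, not real GeoNames or Wikidata identifiers.

```python
# Toy triple statements as (subject, predicate, object) tuples.
# "ex:" names are invented internal identifiers for this sketch.
triples = [
    ("ex:sector-042", "ex:locatedIn", "ex:travis-county"),
    ("ex:travis-county", "ex:withinState", "ex:texas"),
    # Placeholder standing in for a real open linked-data URI
    ("ex:travis-county", "owl:sameAs", "https://open-data.example/county/travis"),
]

def same_as(entity, triples):
    """Return the open linked-data URIs asserted equivalent to an entity."""
    return [o for s, p, o in triples if s == entity and p == "owl:sameAs"]

def located_in(entity, triples):
    """Follow ex:locatedIn one hop to find an entity's containing place."""
    return [o for s, p, o in triples if s == entity and p == "ex:locatedIn"]
```

In a real implementation the placeholder URI would resolve to a published geographic entity, letting the organization reuse its hierarchy of states, counties, and cities instead of maintaining one internally.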

Visualizing Knowledge Graphs to Explore Data

Enterprise knowledge graphs are a growing technology trend that helps businesses explore and interpret their data by visualizing relationships. Visualizing an ontology, the data domain model, helps organizations discover hidden data relationships and understand how enterprise-wide content is related, even when that content is siloed across multiple systems and teams. For one of our clients, we created a custom web application that allows them to visualize their disparate data and traverse between different data sets and institutions. The web application enables them to easily navigate content that was previously siloed and unstructured. The ability to see relationships and access all of the information about individual entities empowers organizations to make better, more informed business decisions. At SEMANTiCS Austin 2020, I want to explore how other organizations use the power of data visualization to support their knowledge graph and search efforts.

Building Extendable Semantic Applications

As technology stacks continue to change, the architecture of semantic applications has to adapt to meet organizational needs. When you begin a semantic application project, you may start with an MVP search application or an expert finder. How do you build and support an application that both drives your MVP and is easily extendable in the future? As the scope of the project grows, new data sources should be easy to add and new applications should be able to plug-n-play. Some organizations are building semantic middleware to deliver data and content to end systems for consumption. Other organizations implement GraphQL across their systems in order to make data more accessible. I look forward to learning how organizations are building, defending, and supporting the semantic technology stack as part of their search and KM initiatives at SEMANTiCS Austin 2020. 
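As a rough illustration of that plug-and-play goal, here is a hedged Python sketch of a middleware layer in which each back-end system registers a fetcher behind a single query surface. The names (`register_source`, `query_all`) and the toy HR and CRM sources are invented for this example, not a real product API.

```python
from typing import Callable, Dict, List

# Registry of data sources; adding a system is one register_source call.
_sources: Dict[str, Callable[[str], List[dict]]] = {}

def register_source(name: str, fetcher: Callable[[str], List[dict]]) -> None:
    """Plug a new back-end system in without touching consumer code."""
    _sources[name] = fetcher

def query_all(entity_id: str) -> Dict[str, List[dict]]:
    """Fan a single entity lookup out to every registered source."""
    return {name: fetch(entity_id) for name, fetch in _sources.items()}

# Hypothetical sources: an HR system and a CRM system plug in as functions.
register_source("hr", lambda eid: [{"id": eid, "title": "Data Scientist"}])
register_source("crm", lambda eid: [{"id": eid, "accounts": 3}])
```

The design choice being illustrated is that the MVP application and any later additions consume the same `query_all` surface, so extending the stack means registering new fetchers rather than rewriting consumers.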

Leveraging Semantic Technologies for Search

As I mentioned in the last section, semantic applications are commonly used for search. Using an enterprise knowledge graph, you can identify and describe the people, places, content, or other domain-specific entities of your business. These descriptions help organizations build their own knowledge panels and identify potential action-oriented search use cases. Additionally, semantic technologies are used to extend taxonomies, enable auto-tagging of content, and power machine learning processes to better understand user search queries. With the increased approachability of natural language processing and machine learning tools, there are a number of ways to improve search with semantic technologies. SEMANTiCS Austin 2020 will explore the benefits of semantic search from design to implementation.
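To illustrate how such descriptions can back a knowledge panel, here is a toy Python sketch (not a real triple store or production search stack) that groups everything known about a subject into a panel structure. It reuses the “Bess Schrader is employed by Enterprise Knowledge” example from earlier on this page; the identifiers are otherwise made up.

```python
from collections import defaultdict

# Toy descriptions of an entity as (subject, predicate, object) triples.
triples = [
    ("ex:bess", "ex:name", "Bess Schrader"),
    ("ex:bess", "ex:employedBy", "ex:enterprise-knowledge"),
    ("ex:bess", "ex:authored", "ex:rdf-article"),
]

def knowledge_panel(subject, triples):
    """Collect every predicate/object pair describing one subject."""
    panel = defaultdict(list)
    for s, p, o in triples:
        if s == subject:
            panel[p].append(o)
    return dict(panel)
```

A search application would render the resulting predicate groups as the sections of a panel shown alongside results, which is what lets a single entity lookup answer “who is this person and what have they produced?” in one place.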

Want to explore the potential of semantic technologies in your organization? Join us for the talks, tutorials, and workshops at SEMANTiCS US in April 2020. As a bonus, if you say “I’m a Rockstar” to an EK employee at the conference, there may be some prizes available!

 

The post What I’m Looking Forward to Learning at SEMANTiCS Austin 2020 appeared first on Enterprise Knowledge.

]]>