semantic web standards Articles - Enterprise Knowledge
http://enterprise-knowledge.com/tag/semantic-web-standards/

The Semantic Exchange: Why Your Taxonomy Needs SKOS Webinar
https://enterprise-knowledge.com/the-semantic-exchange-why-your-taxonomy-needs-skos-webinar/
Wed, 18 Jun 2025


Enterprise Knowledge is pleased to introduce a new webinar series, The Semantic Exchange. We’re kicking off a five-part series where we invite fellow practitioners to tune in and hear about our published work from the authors themselves. In these moderated sessions, we invite you to ask the authors questions in a short, accessible format. Think of the series as a chance for a little semantic snack!

This session is designed for a variety of audiences, from taxonomists and ontologists working in the semantic space to folks who are just starting to learn about structured data and content and how they fit into broader initiatives around artificial intelligence or knowledge graphs.

This 30-minute session invites you to engage with Bonnie Griffin’s infographic, Why Your Taxonomy Needs SKOS. Come ready to hear and ask about: 

  • Why SKOS is the W3C-recommended format for taxonomies 
  • How SKOS unlocks more value than a simple term list 
  • What your organization misses out on with non-SKOS taxonomies

This webinar will take place on Wednesday, June 25th, from 1:00–1:30 PM EDT. Can’t make it? The session will also be recorded and published to our knowledge base afterward. View the recording of the first session here!

Enhancing Taxonomy Management Through Knowledge Intelligence
https://enterprise-knowledge.com/enhancing-taxonomy-management-through-knowledge-intelligence/
Wed, 30 Apr 2025

In today’s data-driven world, managing taxonomies has become increasingly complex, requiring a balance between precision and usability. The Knowledge Intelligence (KI) framework – a strategic integration of human expertise, AI capabilities, and organizational knowledge assets – offers a transformative approach to taxonomy management. This blog explores how KI can revolutionize taxonomy management while maintaining strict compliance standards.

The Evolution of Taxonomy Management

Traditional taxonomy management has long relied on Subject Matter Experts (SMEs) manually curating terms, relationships, and hierarchies. While this time-consuming approach ensures accuracy, it struggles with scale. Modern organizations generate millions of documents across multiple languages and domains, and manual curation simply cannot keep pace with the variety and velocity of organizational data while maintaining the necessary precision. Even with well-defined taxonomies, organizations must continuously analyze massive amounts of content to verify that their taxonomic structures accurately reflect and capture the concepts present in their rapidly growing data repositories.

In a scenario like this, traditional AI tools might help classify new documents, but an expert-guided recommender brings intelligence to the process.

KI-Driven Taxonomy Management

KI represents a fundamental shift from traditional AI systems, moving beyond data processing to true knowledge understanding and manipulation. As Zach Wahl explains in his blog, From Artificial Intelligence to Knowledge Intelligence, KI enhances AI’s capabilities by making systems contextually aware of an organization’s entire information ecosystem and creating dynamic knowledge systems that continuously evolve through intelligent automation and semantic understanding.

At its core, KI-driven taxonomy management works through a continuous cycle of enrichment, validation, and refinement. This approach integrates domain expertise at every stage of the process:

1. During enrichment, SMEs guide AI-powered discovery of new terms and relationships.

2. In validation, domain specialists ensure accuracy and compliance of all taxonomy modifications.

3. Through refinement, experts interpret usage patterns to continuously improve taxonomic structures.

By systematically injecting domain expertise into each stage, organizations transform static taxonomies into adaptive knowledge frameworks that continue to evolve with user needs while maintaining accuracy and compliance. This expert-guided approach ensures that AI augments rather than replaces human judgement in taxonomy development.
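As a rough sketch, the three-stage cycle can be expressed in code. Everything here (function names, the approval threshold, and the sample data) is illustrative, not a real taxonomy-management API:

```python
# Illustrative sketch of the enrichment -> validation -> refinement cycle.

def enrich(taxonomy, corpus):
    """SME-seeded, AI-assisted discovery of candidate terms (stubbed).

    A real pipeline would call NLP and topic-modeling services here.
    """
    return [term for term in corpus if term not in taxonomy]

def validate(candidates, approve):
    """Domain specialists approve or reject each candidate change."""
    return [term for term in candidates if approve(term)]

def refine(taxonomy, usage_log):
    """Promote frequently searched terms missing from the taxonomy."""
    return [term for term, count in usage_log.items()
            if count >= 3 and term not in taxonomy]

taxonomy = {"credit risk", "market risk"}
corpus = ["credit risk", "liquidity risk", "operational risk"]

# Enrichment, then expert validation of the AI-suggested candidates.
candidates = enrich(taxonomy, corpus)
approved = validate(candidates, approve=lambda t: t.endswith("risk"))
taxonomy.update(approved)

# Refinement driven by (hypothetical) search-usage counts.
usage_log = {"op risk": 5, "market risk": 2}
taxonomy.update(refine(taxonomy, usage_log))
print(sorted(taxonomy))
```

In practice each stage would be a human-in-the-loop workflow rather than a lambda, but the control flow is the same: AI proposes, experts dispose, usage data feeds the next iteration.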

[Image: A taxonomy management system using Knowledge Intelligence]

Enrichment: Augmenting Taxonomies with Domain Intelligence

When augmenting the taxonomy creation process with AI, SMEs begin by defining core concepts and relationships, which then serve as seeds for AI-assisted expansion. Using these expert-validated foundations, systems employ Natural Language Processing (NLP) and Generative AI to analyze organizational content and extract relevant phrases that relate to existing taxonomy terms. 

Topic modeling, a set of algorithms that discover abstract themes within collections of documents, further enhances this enrichment process. Topic modeling techniques like BERTopic, which uses transformer-based language models to create coherent topic clusters, can identify concept hierarchies within organizational content. The experts evaluate these AI-generated suggestions based on their specialized knowledge, ensuring that automated discoveries align with industry standards and organizational needs. This human-AI collaboration creates taxonomies that are both technically sound and practically useful, balancing precision with accessibility across diverse user groups.
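To make the clustering idea concrete, here is a toy sketch of grouping documents by embedding similarity. A real pipeline would use transformer-generated embeddings, as BERTopic does; the vectors and the similarity threshold below are invented for illustration:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy document embeddings; a real system would produce these with a
# transformer model rather than hand-picked numbers.
docs = {
    "credit default exposure": [0.9, 0.1, 0.0],
    "counterparty credit limits": [0.8, 0.2, 0.1],
    "phishing email incident": [0.1, 0.9, 0.2],
}

# Greedy single-link clustering: join a cluster if similarity to any
# member exceeds 0.8, otherwise start a new cluster.
clusters = []
for name, vec in docs.items():
    for cluster in clusters:
        if any(cosine(vec, docs[member]) > 0.8 for member in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)
```

The two credit-related documents end up in one cluster and the security incident in another, mirroring how topic modeling surfaces candidate concept groupings for SMEs to review.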

Validation: Maintaining Compliance Through Structured Governance

What sets the KI framework apart is its unique ability to maintain strict compliance while enabling taxonomy evolution. Every suggested change, whether generated through user behavior or content analysis, goes through a structured governance process that includes:

  • Automated compliance checking against established rules;
  • Human expert validation for critical decisions;
  • Documentation of change justifications; and
  • Version control with complete audit trails.
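A minimal sketch of such a governance gate, with the four steps above marked in comments; the class names, rules, and data are assumptions, not a real product API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRequest:
    term: str
    justification: str
    status: str = "pending"

@dataclass
class Governance:
    banned_terms: set
    audit_trail: list = field(default_factory=list)
    version: int = 1

    def submit(self, req, approver):
        # 1. Automated compliance checking against established rules.
        if req.term.lower() in self.banned_terms:
            req.status = "rejected:compliance"
        # 2. Human expert validation for critical decisions.
        elif not approver(req):
            req.status = "rejected:expert"
        else:
            req.status = "approved"
            self.version += 1  # 4. Version control on every approved change.
        # 3. Documentation of change justifications, kept in the audit trail.
        self.audit_trail.append(
            (datetime.now(timezone.utc).isoformat(), req.term,
             req.status, req.justification)
        )
        return req.status

gov = Governance(banned_terms={"misc"})
print(gov.submit(ChangeRequest("Liquidity Risk", "seen in 40 documents"),
                 approver=lambda r: True))
print(gov.submit(ChangeRequest("misc", "catch-all bucket"),
                 approver=lambda r: True))
```

Every submission, approved or rejected, leaves a timestamped record, which is what makes the audit trail complete.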

[Image: The structured taxonomy governance process]

Organizations implementing KI-driven taxonomy management see transformative results, including improved search success rates and reduced time required for taxonomy updates. More importantly, taxonomies become living knowledge frameworks that continuously adapt to organizational needs while maintaining compliance standards.

Refinement: Learning From Usage to Improve Taxonomies

By systematically analyzing how users interact with taxonomies in real-world scenarios, organizations gain invaluable insights into potential improvements. This intelligent system extends beyond simple keyword matching—it identifies emerging patterns, uncovers semantic relationships, and bridges gaps between formal terminology and practical usage. This data-driven refinement process:

  • Analyzes search patterns to identify semantic relationships;
  • Generates compliant alternative labels that match user behavior;
  • Routes suggestions through appropriate governance workflows; and
  • Maintains an audit trail of changes and justifications.
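The search-pattern analysis step can be sketched as follows; the search log, preferred labels, and the naive prefix-matching heuristic are all illustrative:

```python
from collections import Counter

# Hypothetical search log and preferred labels for a small taxonomy.
search_log = ["op risk", "op risk", "operational risk", "op risk", "fx risk"]
pref_labels = {"operational risk", "market risk"}

# Count queries that do not match any preferred label.
misses = Counter(q for q in search_log if q not in pref_labels)

# For frequent misses, suggest alternative labels via a naive prefix match;
# in practice, suggestions would route through governance workflows before
# being added to the taxonomy.
suggestions = {
    query: [p for p in pref_labels if p.startswith(query.split()[0])]
    for query, count in misses.items() if count >= 2
}
print(suggestions)
```

Here the frequent shorthand "op risk" is mapped to the preferred label "operational risk" as a candidate alternative label, while the one-off "fx risk" query falls below the frequency threshold.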

[Image: An example of KI for risk analysts]

The refinement process analyzes the conceptual relationship between terms, evaluates usage contexts, and generates suggestions for terminological improvements. These suggestions—whether alternative labels, relationship modifications, or new term additions—are then routed through governance workflows where domain experts validate their accuracy and compliance alignment. Throughout this process, the system maintains a comprehensive audit trail documenting not only what changes were made but why they were necessary and who approved them. 

[Image: KI-driven taxonomy evolution]

Case Study: KI in Action at a Global Investment Bank

To demonstrate the practical application of this continuous, knowledge-enhanced taxonomy management cycle, the following section describes a real-world implementation at a global investment bank.

Challenge

The bank needed to standardize risk descriptions across multiple business units, creating a consistent taxonomy that would support both regulatory compliance and effective risk management. With thousands of risk descriptions in various formats and terminology, manual standardization would have been time-consuming and inconsistent.

Solution

Phase 1: Taxonomy Enrichment

The team began by applying advanced NLP and topic modeling techniques to analyze existing risk descriptions. Risk descriptions were first standardized through careful text processing. Using the BERTopic framework and sentence transformers, the system generated vector embeddings of risk descriptions, allowing for semantic comparison rather than simple keyword matching. This AI-assisted analysis identified clusters of semantically similar risks, providing a foundation for standardization while preserving the important nuances of different risk types. Domain experts guided this process by defining the rules for risk extraction and validating the clustering approach, ensuring that the technical implementation remained aligned with risk management best practices.

Phase 2: Expert Validation

SMEs then reviewed the AI-generated standardized risks, validating the accuracy of clusters and relationships. The system’s transparency was critical: experts could see exactly how risks were being grouped. This human-in-the-loop approach ensured that:

  • All source risk IDs were properly accounted for;
  • Clusters maintained proper hierarchical relationships; and
  • Risk categorizations aligned with regulatory requirements.

The validation process transformed the initial AI-generated taxonomy into a production-ready, standardized risk framework, approved by domain experts.

Phase 3: Continuous Refinement

Once implemented, the system began monitoring how users actually searched for and interacted with risk information. The bank recognized that users often do not know the exact standardized terminology when searching, so the team developed a risk recommender that displayed semantically similar risks based on both text similarity and risk dimension alignment. This approach allowed users to navigate the taxonomy effectively despite being unfamiliar with standardized terms. By analyzing search patterns, the system continuously refined the taxonomy with alternative labels reflecting actual user terminology, creating a dynamic knowledge structure that evolved based on real usage.
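A toy version of such a recommender, blending token-overlap text similarity with risk-dimension alignment; the weights, risk descriptions, and dimensions are invented for illustration, not taken from the bank's implementation:

```python
def jaccard(a, b):
    """Token-overlap similarity between two short texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Hypothetical standardized risks, each tagged with a risk dimension.
risks = [
    ("Unauthorized trading activity", "conduct"),
    ("Trading system outage", "technology"),
    ("Payment system outage", "technology"),
]

def recommend(query, query_dim, top_n=2):
    # Blend text similarity with dimension alignment; the 0.7/0.3 weights
    # are illustrative and would be tuned in a real system.
    scored = [
        (0.7 * jaccard(query, text) + 0.3 * (dim == query_dim), text)
        for text, dim in risks
    ]
    return [text for score, text in sorted(scored, reverse=True)[:top_n]]

print(recommend("system outage", "technology"))
```

A user searching the non-standard phrase "system outage" is steered to the two standardized technology risks, even though neither matches the query exactly.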

This case study demonstrates the power of knowledge-enhanced taxonomy management, combining domain expertise with AI capabilities through a structured cycle of enrichment, validation, and refinement to create a living taxonomy that serves both regulatory and practical business needs.

Taxonomy Standards

For taxonomies to be truly effective and scalable in modern information environments, they must adhere to established semantic web standards and follow best practices developed by information science experts. Modern taxonomies need to support enterprise-wide knowledge initiatives, break down data silos, and enable integration with linked data and knowledge graphs. This is where standards like the Simple Knowledge Organization System (SKOS) become essential. By using universal standards like SKOS, organizations can:

  • Enable interoperability between systems and across organizational boundaries
  • Facilitate data migration between different taxonomy management tools
  • Connect taxonomies to ontologies and knowledge graphs
  • Ensure long-term sustainability as technology platforms evolve
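For illustration, a single concept modeled in SKOS might look like this in Turtle (the `ex:` namespace, labels, and definition are hypothetical):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.com/taxonomy/> .

ex:OperationalRisk a skos:Concept ;
    skos:prefLabel "Operational Risk"@en ;
    skos:altLabel "Op Risk"@en ;
    skos:definition "The risk of loss resulting from failed internal processes, people, or systems."@en ;
    skos:broader ex:Risk ;
    skos:inScheme ex:RiskTaxonomy .
```

Because `skos:prefLabel`, `skos:altLabel`, and `skos:broader` are standard properties, any SKOS-aware tool can consume this concept without custom mapping, which is exactly the interoperability benefit described above.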

Beyond SKOS, taxonomy professionals should be familiar with related semantic web standards such as RDF and SPARQL, especially as organizations move toward more advanced semantic technologies like ontologies and enterprise knowledge graphs. Well-designed taxonomies following these standards become the foundation upon which more advanced Knowledge Intelligence capabilities can be built. By adhering to established standards, organizations ensure their taxonomies remain both technically sound and semantically precise, capable of scaling effectively as business requirements evolve.

The Future of Taxonomy Management

The future of taxonomy management lies not just in automation, but in intelligent collaboration between human expertise and AI capabilities. KI provides the framework for this collaboration, ensuring that taxonomies remain both precise and practical. 

For organizations considering this approach, the key is to start with a clear understanding of their taxonomic needs and challenges, and to ensure their taxonomy efforts are built on solid foundations of semantic web standards like SKOS. These standards are essential for taxonomies to effectively scale, support interoperability, and maintain long-term value across evolving technology landscapes. Success comes not from replacement of existing processes, but from thoughtful integration of KI capabilities into established workflows that respect these standards and best practices.

Ready to explore how KI can transform your taxonomy management? Contact our team of experts to learn more about implementing these capabilities in your organization.

 

The Resource Description Framework (RDF)
https://enterprise-knowledge.com/the-resource-description-framework-rdf/
Mon, 24 Feb 2025

Simply defined, a knowledge graph is a network of entities, their attributes, and how they’re related to one another. While these networks can be captured and stored in a variety of formats, most implementations leverage a graph-based tool or database. However, within the world of graph databases, there are a variety of syntaxes, or flavors, that can be used to represent knowledge graphs. One of the most popular and ubiquitous is the Resource Description Framework (RDF), which provides a means to capture meaning, or semantics, in a way that is interpretable by both humans and machines.

What is RDF?

The Resource Description Framework (RDF) is a semantic web standard used to describe and model information for web resources or knowledge management systems. RDF consists of “triples,” or statements, with a subject, predicate, and object that resemble an English sentence. For example, take the English sentence: “Bess Schrader is employed by Enterprise Knowledge.” This sentence has:

  • A subject: Bess Schrader
  • A predicate: is employed by 
  • An object: Enterprise Knowledge

Bess Schrader and Enterprise Knowledge are two entities that are linked by the relationship “employed by.” An RDF triple representing this information would look like this:
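In Turtle, one of the standard RDF serializations, the triple could be written as follows (the `ex:` namespace is illustrative):

```turtle
@prefix ex: <http://example.com/> .

ex:BessSchrader ex:employedBy ex:EnterpriseKnowledge .
```

Subject, predicate, and object appear in order, just as in the English sentence.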

What is the goal of using RDF?

RDF is a semantic web standard, and thus has the goal of representing meaning in a way that is interpretable by both humans and machines. As humans, we process information through a combination of our experience and logical deduction. For example, I know that “Washington, D.C.” and “Washington, District of Columbia” refer to the same concept based on my experience in the world – at some point, I learned that “D.C.” was the abbreviation for “District of Columbia.” On the other hand, if I were to encounter a breathing, living object that has no legs and moves across the ground in a slithering motion, I’d probably infer that it was a snake, even if I’d never seen this particular object before. This determination would be based on the properties I associate with snakes (animal, no legs, slithers).

Unlike humans, machines have no experience on which to draw conclusions, so everything needs to be explicitly defined in order for a machine to process information this way. For example, if I want a machine to infer the type of an object based on properties (e.g. “that slithering object is a snake”), I need to define what a snake is and what properties it has. If I want a machine to reconcile that “Washington, D.C.” and “Washington, District of Columbia” are the same thing, I need to define an entity that uses both of those labels.

RDF allows us to create robust semantic resources, like ontologies, taxonomies, and knowledge graphs, where the meaning behind concepts is well defined in a machine readable way. These resources can then be leveraged for any use case that requires context and meaning to connect and unify data across disparate formats and systems, such as semantic layers and auto-classification.

How does RDF work?

Let’s go back to our single triple representing the fact that “Bess Schrader works at Enterprise Knowledge.”

We can continue building out information about the entities in our (very small) knowledge graph by giving all of our subjects and objects types (which indicate the general category/class that an entity belongs to) and labels (which capture the language used to refer to the entity).

RDF diagram building off of the Bess Schrader Employed by Enterprise Knowledge Triple, including Person and Organization types

These types and labels are helping us define the semantics, or meaning, of each entity. By explicitly stating that “Bess Schrader” is a person and “Enterprise Knowledge” is an organization, we’re creating the building blocks for a machine to start to make inferences about these entities based on their types.

Similarly, we can create a more explicit definition of our relationship and attributes, allowing machines to better understand what the “employed by” relationship means. While the above diagram represents our predicate (or relationship) as a straight line between two entities, in RDF, our predicate is itself an entity and can have its own properties (such as type, label, and description). This is often referred to as making properties “first class citizens.”

Uniform Resource Identifiers (URIs)

But how do we actually make this machine readable? Diagrams in a blog are great for helping humans understand concepts, but machines need this information in a machine-readable format. To make our graph machine readable, we’ll need to leverage unique identifiers.

One of the key elements of any knowledge graph (RDF or otherwise) is the principle of “things, not strings.” As humans, we often use ambiguous labels (e.g. “D.C.”) when referring to a concept, trusting that our audience will be able to use context to determine our meaning. However, machines often don’t have sufficient context to disambiguate strings – imagine “D.C.” has been applied as a tag to an unstructured text document. Does “D.C.” refer to the capital city of the US, the comic book publisher, “direct current,” or something else entirely? Knowledge graphs seek to reduce this ambiguity by using entities or concepts that have unique identifiers and one or more labels, instead of relying on labels themselves as unique identifiers.

RDF is no exception to this principle – all RDF entities are defined using a Uniform Resource Identifier (URI), which can be used to connect all of the labels, attributes, and relationships for a given entity.

Using URIs, our RDF knowledge graph would look like this:

[Image: A sample knowledge graph using URIs between concepts]

These URIs make our triples machine readable by creating unambiguous identifiers for all of our subjects, predicates, and objects. URIs also enable interoperability and the ability to share information across multiple systems – because these URIs are globally unique, any two systems that reference the same URI should be referring to the same entity.
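Putting the pieces together, a Turtle sketch of this small graph, with types, labels, and the predicate itself described as an entity, might look like this (the `ex:` namespace is illustrative):

```turtle
@prefix ex:   <http://example.com/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:BessSchrader a ex:Person ;
    rdfs:label "Bess Schrader" ;
    ex:employedBy ex:EnterpriseKnowledge .

ex:EnterpriseKnowledge a ex:Organization ;
    rdfs:label "Enterprise Knowledge" .

# The predicate is a first-class entity with its own type and label.
ex:employedBy a rdf:Property ;
    rdfs:label "employed by" .
```

Note that `ex:employedBy` appears both as a predicate in one triple and as a subject in others; that is what “properties as first-class citizens” means in practice.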

What are the advantages to using RDF?

The RDF Specification has been maintained by the World Wide Web Consortium (W3C) for over two decades, meaning it is a stable, well documented framework for representing data. This makes it easy for applications and organizations to develop RDF data in an interoperable way. If you create RDF data in one tool and share it with someone else using a different RDF tool, they will still be able to easily use your data. This interoperability allows you to build on what’s already been done — you can combine your enterprise knowledge graph with established, open RDF datasets like Wikidata, jump-starting your analytic capabilities. This also makes data sharing and migration between internal RDF systems simple, enabling you to unify data and reduce your dependency on a single tool or vendor.

The ability to treat properties as “first-class citizens” with their own properties allows you to store your data model along with your data, explaining what properties mean and how they should be used. This reduces ambiguity and confusion for data creators, developers, and data consumers. Moreover, this ability to treat properties as entities allows organizations to standardize and connect existing data. RDF data models can store multiple labels for the same property, enabling them to act as a “Rosetta Stone” that translates metadata fields and values across systems. Connecting these disparate metadata values is crucial to being able to effectively retrieve, understand, and use enterprise data.

Many implementations of RDF also support inference and reasoning, allowing you to explore previously uncaptured relationships in your data, based on logic developed in your ontology. This reasoning capability can be an incredibly powerful tool, helping you gain insights from your business logic. For example, inference and reasoning can capture information about employee expertise – a relationship that’s notoriously difficult to explicitly store. While many organizations attempt to have employees self-select their skills or areas of expertise, the completion rate of these self-selections is typically low, and even those that do complete the selection often don’t keep them up to date. Reasoning in RDF can leverage business logic to automatically infer expertise based on your organization’s data. For example, if a person has authored multiple documents that discuss a given topic, an RDF knowledge graph may infer that this person has knowledge of or expertise in that topic.
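The expertise example can be illustrated with a toy forward-chaining rule over triples. Real RDF reasoning would use OWL semantics or a rules engine; the pure-Python version below only sketches the idea, and all names and the two-document threshold are assumptions:

```python
from collections import Counter

# A tiny RDF-style graph as (subject, predicate, object) tuples.
triples = {
    ("ex:Bess", "ex:authored", "ex:Doc1"),
    ("ex:Bess", "ex:authored", "ex:Doc2"),
    ("ex:Doc1", "ex:hasTopic", "ex:KnowledgeGraphs"),
    ("ex:Doc2", "ex:hasTopic", "ex:KnowledgeGraphs"),
}

# Rule: authored >= 2 documents on a topic  =>  hasExpertiseIn that topic.
# Count (person, topic) pairs by joining authorship and topic triples.
authored_topics = Counter(
    (person, topic)
    for person, pred, doc in triples if pred == "ex:authored"
    for doc2, pred2, topic in triples
    if doc2 == doc and pred2 == "ex:hasTopic"
)

# Materialize the inferred expertise triples.
inferred = {
    (person, "ex:hasExpertiseIn", topic)
    for (person, topic), n in authored_topics.items() if n >= 2
}
print(inferred)
```

From two authorship facts and two topic facts, the rule materializes a new triple asserting expertise in knowledge graphs, a relationship no one ever stated explicitly.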

What are the disadvantages to using RDF?

To fully leverage the benefits of RDF, entities must be explicitly defined (see best practices below), which can require burdensome overhead. The volume and structure of these assertions, combined with the length and format of Uniform Resource Identifiers (URIs), can make getting started with RDF challenging for information professionals and developers used to working with more straightforward (albeit more ambiguous) data models. While recent advancements in generative AI have great potential to make RDF’s learning curve less steep via human-in-the-loop RDF creation processes, learning to create and work with RDF still poses a challenge for many organizations.

Additionally, the “triple” format (subject – predicate – object) used by RDF only allows you to connect two entities at a time, unlike labeled property graphs. For example, I can assert that “Bess Schrader -> employed by -> Enterprise Knowledge,” but it’s not very straightforward in RDF to then add additional information about that relationship, such as what role I perform at Enterprise Knowledge, my start and end dates of employment, etc. While a proposed modification to RDF called RDF* (RDF-star) has been developed to address this, it has not been officially adopted by the W3C, and implementation of RDF* in RDF-compliant tools has occurred only on an ad hoc basis.
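For reference, RDF-star’s quoted-triple syntax would let you annotate the employment triple directly; the role and date below are invented for illustration:

```turtle
@prefix ex:  <http://example.com/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# The quoted triple inside << >> becomes the subject of further statements.
<< ex:BessSchrader ex:employedBy ex:EnterpriseKnowledge >>
    ex:role      "Consultant" ;
    ex:startDate "2019-01-01"^^xsd:date .
```

In plain RDF, capturing the same information typically requires reifying the relationship into its own node (e.g. an “Employment” entity), which is more verbose but works in any compliant tool.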

What are some best practices when using RDF to create a knowledge graph?

RDF, and knowledge graphs in general, are well known for their flexibility – there are very few restrictions on how data must be structured or what properties must be used for their implementation. However, there are some best practices when using RDF that will enable you to maximize your knowledge graph’s utility, particularly for reasoning applications.

All concepts should be entities with a URI

The guiding principle is “things, not strings”. If you’re describing something with a label that might have its own attributes, it should be an entity, not a literal string.

All entities should have a label

Using URIs is important, but a URI without at least one label is difficult to interpret for both humans and machines.

All entities should have a type

Again, remember that our goal is to allow machines to process information similarly to humans. To do this, all entities should have one or more types explicitly asserted (e.g. “Washington, D.C.” might have the type “City”).

All entities should have a description

While using URIs and labels goes a long way in limiting ambiguity (see our “D.C.” example above), adding descriptions or definitions for each entity can be even more helpful. A well written description for an entity will leave little to no question around what this entity represents.
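A single entity that follows all four practices might look like this in Turtle, using the “D.C.” example from above (the `ex:` namespace and description text are illustrative):

```turtle
@prefix ex:   <http://example.com/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# URI: ex:WashingtonDC uniquely identifies the entity.
ex:WashingtonDC a ex:City ;                          # type
    rdfs:label "Washington, D.C." ,                  # labels
               "Washington, District of Columbia" ;
    dct:description "The capital city of the United States of America." .
```

With a URI, two labels, a type, and a description in place, both the “D.C. vs. District of Columbia” reconciliation problem and the ambiguity problem are resolved in a machine-readable way.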

Following these best practices will help with reuse, governance, and reasoning.

Want to learn more about RDF, or need help getting started? Contact us today.
