Ontologies Articles - Enterprise Knowledge

How to Leverage LLMs for Auto-tagging & Content Enrichment

When working with organizations on key data and knowledge management initiatives, we've often found that a common roadblock is an organization's lack of quality (relevant, meaningful, or up-to-date) content. Stakeholders may be excited to get started with advanced tools as part of their initiatives, like graph solutions, personalized search solutions, or advanced AI solutions; however, without a strong backbone of semantic models and context-rich content, these solutions are significantly less effective. For example, without proper tags and content types, a knowledge portal development effort can't fully demonstrate the value of faceting and aggregating pieces of content and data together in 'knowledge panes'. With a more semantically rich set of content to work with, the portal can begin showing value through search, filtering, and aggregation, leading to further organizational and leadership buy-in.

One key step in preparing content is the application of metadata and organizational context to pieces of content through tagging. There are several tagging approaches an organization can take to enrich pre-existing content with metadata and organizational context, including manual tagging, automated tagging capabilities from a taxonomy and ontology management system (TOMS), using apps and features directly from a content management solution, and various hybrid approaches. While many of these approaches, in particular acquiring a TOMS, are recommended as a long-term auto-tagging solution, EK has recommended and implemented Large Language Model (LLM)-based auto-tagging capabilities across several recent engagements. Due to LLM-based tagging’s lower initial investment compared to a TOMS and its greater efficiency than manual tagging, these auto-tagging solutions have been able to provide immediate value and jumpstart the process of re-tagging existing content. This blog will dive deeper into how LLM tagging works, the value of semantics, technical considerations, and next steps for implementing an LLM-based tagging solution.

Overview of LLM-Based Auto-Tagging Process

Similar to existing auto-tagging approaches, the LLM suggests tags by parsing a piece of content, processing and identifying key phrases, terms, or structures that give the document context. Through prompt engineering, the LLM is then asked to compare the similarity of key semantic components (e.g., named entities, key phrases) with various term lists, returning a set of terms that could be used to categorize the piece of content. The tagging workflow can be configured to return only terms that meet a specific similarity score. These tagging results are then exported to a data store and applied to the content source. Many factors, including the particular LLM used, the knowledge the LLM is working with, and the source location of content, can greatly impact tagging effectiveness and accuracy. In addition, adjusting parameters, taxonomies/term lists, and/or prompts to improve precision and recall can ensure tagging results align with an organization's needs. The final step is the auto-tagging itself: the application of the tags in the source system, typically via a script or workflow that applies the stored tags to pieces of content.

Figure 1: High-level steps for LLM content enrichment
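To make these steps more concrete, below is a minimal sketch of the suggestion-and-filtering portion of the workflow. The prompt wording, the JSON response shape, and the `complete` function (a stand-in for whichever LLM client your organization uses) are all illustrative assumptions, not a prescribed implementation:

```python
import json

def suggest_tags(content: str, taxonomy_terms: list[str], complete, threshold: float = 0.7) -> list[dict]:
    """Ask an LLM to match a document against a controlled term list.

    `complete` is any function that sends a prompt to your chosen LLM
    and returns its text response (provider-specific, assumed here).
    """
    prompt = (
        "You are a content tagger. Compare the document below against the "
        "allowed terms. Respond with a JSON list of objects, each with a "
        "'term' and a 'score' between 0 and 1 for how well the term fits.\n"
        f"Allowed terms: {', '.join(taxonomy_terms)}\n"
        f"Document:\n{content[:4000]}"  # truncate to stay within context limits
    )
    suggestions = json.loads(complete(prompt))
    # Keep only suggestions that meet the similarity threshold and are
    # actually in the controlled vocabulary (guards against hallucinated tags).
    return [s for s in suggestions
            if s["score"] >= threshold and s["term"] in taxonomy_terms]
```

A production workflow would add response validation, retries, and logging before the accepted tags are exported to a data store and applied in the source system.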

EK has put these steps into practice, for example, when engaging with a trade association on a content modernization project to migrate and auto-tag content into a new content management system (CMS). The organization had been struggling with content findability, standardization, and governance, particularly around the language used to describe the diverse areas of work the association covers. As part of this engagement, EK first worked with the organization's subject matter experts (SMEs) to develop new enterprise-wide taxonomies and controlled vocabularies integrated across multiple platforms to be utilized by both external and internal end-users. To operationalize and apply these common vocabularies, EK developed an LLM-based auto-tagging workflow utilizing the four high-level steps above to auto-tag metadata fields and identify content types. This content modernization effort set up the organization for document workflows, search solutions, and generative AI projects, all of which are able to leverage the added metadata on documents.

Value of Semantics with LLM-Based Auto-Tagging

Semantic models such as taxonomies, metadata models, ontologies, and content types can all be valuable inputs to guide an LLM on how to effectively categorize a piece of content. When preparing an LLM for auto-tagging content, greater emphasis needs to be placed on organization-specific context. If using a taxonomy as a training input, organizational context can be added by weighting specific terms, increasing the number of synonyms/alternative labels, and providing organization-specific definitions. For example, by providing organizational context through a taxonomy or business glossary that the term "Green Account" refers to accounts that have met a specific environmental standard, the LLM would not accidentally tag content related to the color green or an account that is financially successful.

Another benefit of an LLM-based approach is the ability to evolve both the semantic model and the LLM as tagging results are received. As sets of tags are generated for an initial set of content, the taxonomies and content models being used to train the LLM can be refined to better fit the specific organizational context. This could look like adding alternative labels, adjusting the definitions of terms, or adjusting the taxonomy hierarchy. Similarly, additional tools and techniques, such as weighting and prompt engineering, can tune the results provided by the LLM to achieve higher recall (the rate at which the LLM includes the correct terms) and precision (the rate at which the LLM selects only the correct terms) when recommending terms. One example of this is adding a weighting from 0 to 10 for all taxonomy terms and assigning a higher score to terms the organization prefers to use. The workflow developed alongside the LLM can use this context to include or exclude a particular term.
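As an illustration of the weighting technique described above, the sketch below blends a hypothetical 0-10 preference weight with the LLM's similarity score before the final cut is made; the term list, weights, blend ratio, and cutoff are all assumptions an organization would tune to its own context:

```python
# Hypothetical organization-assigned preference weights (0-10) per term.
TERM_WEIGHTS = {"Green Account": 9, "Sustainability": 7, "Account": 2}

def apply_weights(suggestions, weights, cutoff=0.5):
    """Adjust LLM tag suggestions ({'term', 'score'}) by preference weights."""
    results = []
    for s in suggestions:
        # Blend the LLM similarity score with the normalized preference
        # weight; the 70/30 blend ratio is a tunable assumption.
        adjusted = 0.7 * s["score"] + 0.3 * (weights.get(s["term"], 5) / 10)
        if adjusted >= cutoff:
            results.append({**s, "adjusted_score": adjusted})
    return sorted(results, key=lambda s: s["adjusted_score"], reverse=True)
```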

Implementation Considerations for LLM-Based Auto-Tagging 

Several factors, such as the timeframe, volume of information, necessary accuracy, types of content management systems, and desired capabilities, inform the complexity and resources needed for LLM-based content enrichment. The following sections expand upon the factors an organization must weigh for effective LLM content enrichment.

Tagging Accuracy

The accuracy of tags from an LLM directly impacts the end-users and systems (e.g., search instances or dashboards) that utilize the tags. Safeguards need to be implemented to ensure end-users can trust the accuracy of the tagged content they are using; otherwise, users may mistakenly access or use the wrong documents, or grow frustrated by the results they get. To mitigate both of these concerns, high recall and precision scores in LLM tagging improve overall accuracy and lower the chance of miscategorization. This can be done by investing in human test-tagging and input from SMEs to create a gold-standard set of tagged content as training data for the LLM. The gold-standard set can then be used to adjust how the LLM weights or prioritizes terms, based on the organizational context it captures. These practices help avoid hallucinations (factually incorrect or misleading content) that could otherwise appear in applications utilizing the auto-tagged set of content.
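A gold-standard set also makes recall and precision directly measurable. The sketch below shows one simple way to score LLM-suggested tags against SME-approved tags; the documents and tags are invented for illustration:

```python
def evaluate_tagging(gold: dict[str, set], predicted: dict[str, set]) -> dict:
    """Compare LLM tags against an SME-built gold standard, per document."""
    tp = sum(len(gold[d] & predicted.get(d, set())) for d in gold)  # correct tags
    fp = sum(len(predicted.get(d, set()) - gold[d]) for d in gold)  # extra tags
    fn = sum(len(gold[d] - predicted.get(d, set())) for d in gold)  # missed tags
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}

# Example: two documents tagged by SMEs vs. the LLM
gold = {"doc1": {"Green Account", "Compliance"}, "doc2": {"Sustainability"}}
pred = {"doc1": {"Green Account"}, "doc2": {"Sustainability", "Finance"}}
print(evaluate_tagging(gold, pred))  # precision and recall are both 2/3 here
```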

Content Repositories

One factor that greatly adds technical complexity is accessing the various types of content repositories that an LLM solution, or any auto-tagging solution, needs to read from. The best content management practice for auto-tagging is to read content in its source location, limiting the risk of duplication and the effort needed to download and then read content. When developing a custom solution, each content repository often needs a distinctive approach to read and apply tags. A content or document repository like SharePoint, for example, has a robust API for reading content and seamlessly applying tags, while a less widely adopted platform may not have the same level of support. It is important to account for the unique needs of each system in order to limit the disruption end-users may experience when embarking on a tagging effort.
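For instance, tags can be written back to a SharePoint list item through the Microsoft Graph list-item fields endpoint. The sketch below assumes a simple multi-value text column named "Topics" and placeholder IDs and token handling; managed-metadata columns require a different payload shape, and other repositories will need their own connectors:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def apply_tags_sharepoint(site_id: str, list_id: str, item_id: str,
                          tags: list[str], token: str) -> None:
    """Write tags to a hypothetical 'Topics' text column on a list item."""
    url = f"{GRAPH}/sites/{site_id}/lists/{list_id}/items/{item_id}/fields"
    resp = requests.patch(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={"Topics": "; ".join(tags)},  # column name is an assumption
    )
    resp.raise_for_status()
```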

Knowledge Assets

When considering the scalability of the auto-tagging effort, it is also important to evaluate the breadth of knowledge asset types being analyzed. While the ability of LLMs to process several types of knowledge assets has been growing, each step of additional complexity, particularly evaluating multiple asset types, can require additional resources and time to read and tag documents. A PDF with 2-3 pages of content will take an LLM far fewer tokens and resources to read than a long visual or audio asset. Moving from a tagging workflow for structured knowledge assets to tagging unstructured content will increase the overall time, resources, and custom development needed to run the workflow.

Data Security & Entitlements

When utilizing an LLM, it is recommended that an organization invest in a private or in-house LLM to complete analysis, rather than leveraging a publicly available model. This does not mean the LLM needs to be 'on-premises'; several providers offer options for hosting LLMs within your company's own environment. This ensures a higher level of document security and additional features for customization. Particularly when tackling use cases with higher levels of personal information and access controls, a robust mapping of content and an understanding of what needs to be tagged is imperative. As an example, if a publicly facing LLM were reading confidential documents on how to develop a company-specific product, this information could then be leveraged in other public queries and has a higher likelihood of being accessed outside of the organization. In an enterprise data ecosystem, running an LLM-based auto-tagging solution can raise red flags around data access, controls, and compliance. These challenges can be addressed through a Unified Entitlements System (UES) that creates a centralized policy management system for both end users and the LLM solutions being deployed.

Next Steps

One major consideration with an LLM tagging solution is maintenance and governance over time. For some organizations, after completing an initial enrichment of content by the LLM, a combination of manual tagging and forms within each CMS helps them maintain tagging standards over time. However, a more mature organization that is dealing with several content repositories and systems may want to either operationalize the content enrichment solution for continued use or invest in a TOMS. With either approach, completing an initial LLM enrichment of content is a key method to prove the value of semantics and metadata to decision-makers in an organization. 
Many technical solutions and initiatives that excite both technical and business stakeholders can be actualized by an LLM content enrichment effort. With content that is tagged and adheres to semantic standards, solutions like knowledge graphs, knowledge portals, semantic search engines, and even enterprise-wide LLM solutions can demonstrate even greater organizational value.

If your organization is interested in upgrading your content and developing new KM solutions, contact us!

Semantic Layer Strategy: The Core Components You Need for Successfully Implementing a Semantic Layer

Today's organizations are flooded with opportunities to apply AI and advanced data experiences, but many struggle with where to focus first. Leaders are asking questions like: "Which AI use cases will bring the most value? How can we connect siloed data to support them?" Without a clear strategy, the abundance of quick-start vendors and tools makes it easy to spin wheels on experiments that never scale. As more organizations recognize the value of meaningful, connected data experiences via a Semantic Layer, many find themselves unsure of how to begin their journey, or how to sustain meaningful progress once they begin.

A well-defined Semantic Layer strategy is essential to avoid costly missteps in planning or execution, secure stakeholder alignment and buy-in, and ensure long-term scalability of models and tooling.

This blog outlines the key components of a successful Semantic Layer strategy, explaining how each component supports a scalable implementation and contributes to unlocking greater value from your data.

What is a Semantic Layer?

The Semantic Layer is a framework that adds rich structure and meaning to data by applying categorization models (such as taxonomies and ontologies) and using semantic technologies like graph databases and data catalogs. Your Semantic Layer should be a connective tissue that leverages a shared language to unify information across systems, tools, and domains. 

Data-rich organizations often manage information across a growing number of siloed repositories, platforms, and tools. The lack of a shared structure for how data is described and connected across these systems ultimately slows innovation and undermines initiatives. Importantly, your semantic layer enables humans and machines to interpret data in context and lays the foundation for enterprise-wide AI capabilities.    

 

What is a Semantic Layer Strategy?

A Semantic Layer Strategy is a tailored vision outlining the value of using knowledge assets to enable new tools and create insights through semantic approaches. This approach ensures your organization’s semantic efforts are focused, feasible, and value-driven by aligning business priorities with technical implementation. 

Regardless of your organization’s size, maturity, or goals, a strong Semantic Layer Strategy enables you to achieve the following:

1. Articulate a clear vision and value proposition.

Without a clear vision, semantic layer initiatives risk becoming scattered and mismanaged, with teams pulling in different directions and the value to the organization left unclear. The Semantic Layer vision serves as the "North Star," or guiding principle, for planning, design, and execution. Organizations can realize a variety of use cases via a Semantic Layer (including advanced search, recommendation engines, personalized knowledge delivery, and more), and a Semantic Layer Strategy helps define and align on what a Semantic Layer can solve for your organization.

The vision statement clearly answers three core questions:

  • What is the business problem you are trying to solve?
  • What outcomes and capabilities are you enabling?
  • How will you measure success?

These three items create a strategic narrative that business and technical stakeholders alike can understand, enabling discussions that gain executive buy-in and prioritize initiative efforts.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the development of a data strategy for operational risk for a bank seeking to create a unified view of highly regulated data dispersed across siloed repositories. By framing a clear vision statement for the Bank's semantic layer, EK guided the firm to establish a multi-year program to expand the scope of data and continually enable new data insights and capabilities that were previously impossible. For example, users of a risk application could access information from multiple repositories in a single knowledge panel within the tool rather than hunting for it in siloed applications. The Bank's Semantic Layer vision is contained in a single, easy-to-understand one-pager that has been used repeatedly as a rallying point to communicate value across the enterprise, win executive sponsorship, and onboard additional business groups into the semantic layer initiative.

2. Assess your current organizational semantic maturity.

A semantic maturity assessment looks at the semantic structures, programs, processes, knowledge assets, and overall awareness that already exist at your organization. Understanding where your organization lies on the semantic maturity spectrum is essential for setting realistic goals and sequencing a path to greater maturity.

  • Less mature organizations may lack formal taxonomies or ontologies, or may have taxonomies and ontologies that are outdated, inconsistently applied, or not integrated across systems. They have limited (or no) semantic tooling and few internal semantic champions. Their knowledge assets are isolated, inconsistently tagged (or untagged) documents that require human interpretation to understand and are difficult for systems to find or connect.
  • More mature organizations typically have well-maintained taxonomies and/or ontologies, have established governance processes, and actively use semantic tooling such as knowledge graphs or business glossaries. More than likely, there are individuals or groups who advocate for the adoption of these tools and processes within the organization. Their knowledge assets are well-structured, consistently tagged, and interconnected pieces of content that both humans and machines can easily discover, interpret, and reuse.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK conducted a comprehensive semantic maturity assessment of the current state of the Bank’s semantics program to uncover strengths, gaps, and opportunities. This assessment included:

  • Knowledge Asset Assessment: Evaluated the connectedness, completeness, and consistency of existing risk knowledge assets, identifying opportunities to enrich and restructure them to support redesigned application workflows.
  • Ontology Evaluation: Reviewed existing ontologies describing risk at the firm to assess accuracy, currency, semantic standards compliance, and maintenance practices.
  • Category Model Evaluation: Created a taxonomy tracker to evaluate candidate categories for a unified category management program, focusing on quality, ownership, and ongoing governance.
  • Architecture Gap Analysis and Tooling Recommendation: Reviewed existing applications, APIs, and integrations to determine whether components should be reused, replaced, or rebuilt.
  • People & Roles Assessment: Designed a target operating model to identify team structures, collaboration patterns, and missing roles or skills that are critical for semantic growth.

Together, these evaluations provided a clear benchmark of maturity and guided a right-sized strategy for the bank. 

3. Create a shared conceptual knowledge asset model. 

When it comes to strategy, executive stakeholders don't want to see exhaustive technical documentation; they want to see impact. A high-level visual model of what your Semantic Layer will achieve brings a Semantic Layer Strategy to life by showing how connected knowledge assets can enable better decisions and new insights.

Your data model should show, in broad strokes, what kinds of data will be connected at the conceptual level. For example, your data model could show that people, business units, and sales reports can be connected to answer questions like, “How many people in the United States created documents about X Law?” or “What laws apply to me when writing a contract in Wisconsin?” 

In sum, it should focus on how people and systems will benefit from the relationships between data, enabling clearer communication and shared understanding of your Semantic Layer’s use cases. 
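To show how such a conceptual model translates into answerable questions, here is a minimal sketch using the rdflib Python library to pose a version of the first example question against a toy graph. Every name and predicate here (ex:locatedIn, ex:created, ex:about) is illustrative, not a recommended model:

```python
from rdflib import Graph

# A toy graph: two people, their locations, and the documents they created
ttl = """
@prefix ex: <https://example.org/> .
ex:alice ex:locatedIn ex:UnitedStates ; ex:created ex:doc1 .
ex:bob   ex:locatedIn ex:Wisconsin    ; ex:created ex:doc2 .
ex:doc1  ex:about ex:XLaw .
ex:doc2  ex:about ex:XLaw .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

# "How many people in the United States created documents about X Law?"
query = """
PREFIX ex: <https://example.org/>
SELECT (COUNT(DISTINCT ?person) AS ?n) WHERE {
    ?person ex:locatedIn ex:UnitedStates ;
            ex:created ?doc .
    ?doc ex:about ex:XLaw .
}
"""
for row in g.query(query):
    print(row.n)  # -> 1 (only Alice is tied to the United States here)
```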

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK collaborated with data owners to map out core concepts and their relationships in a single, digestible diagram. The conceptual knowledge asset model served as a shared reference point for both business and technical stakeholders, grounding executive conversations about Semantic Layer priorities and guiding onboarding decisions for data and systems. 

By simplifying complex data relationships into a clear visual, EK enabled alignment across technical and non-technical audiences and built momentum for the Semantic Layer initiative.

4. Develop a practical and iterative roadmap for implementation and scale.

With your vision, assessment, and foundational conceptual model in place, the next step is translating your strategy into execution. Your Semantic Layer roadmap should be outcome-driven, iterative, and actionable. A well-constructed roadmap provides not only a starting point for your Semantic Layer initiative, but also a mechanism for continuous alignment as business priorities evolve. 

Importantly, your roadmap should not be a rigid set of instructions; rather, it should act as a living guide. As your semantic maturity increases and business needs shift, the roadmap should adapt to reflect new opportunities while keeping long-term goals in focus. While the roadmap may be more detailed and technically advanced for highly mature organizations, less mature organizations may focus their roadmap on broader strokes such as tool procurement and initial category modeling. In both cases, the roadmap should be tailored to the organization’s unique needs and maturity, ensuring it is practical, actionable, and aligned to real priorities.

Enterprise Knowledge Case Study (Risk Mitigation for a Wall Street Bank): EK led the creation of a roadmap focused on expanding the firm’s existing semantic layer. Through planning sessions, EK identified the necessary categories, ontologies, tooling, and architecture uplifts needed to chart forward on their Semantic Layer journey. Once a strong foundation was built, additional planning sessions centered on adding new categories, onboarding additional data concepts, and refining ontologies to increase coverage and usability. Through sessions with key stakeholders responsible for the growth of the program, EK prioritized high-value expansion opportunities and recommended governance practices to sustain long-term scale. This enabled the firm to confidently evolve its Semantic Layer while maintaining alignment with business priorities and demonstrating measurable impact across the organization.

 

Conclusion

A successful Semantic Layer Strategy doesn’t come from technology alone; it comes from a clear vision, organizational alignment, and intentional design. Whether you’re just getting started on your semantics journey or refining your Semantic Layer approach, Enterprise Knowledge can support your organization. Contact us at info@enterprise-knowledge.com to discuss how we can help bring your Semantic Layer strategy to life.

Building Your Information Shopping Mall – A Semantic Layer Guide

Imagine your organization’s data as a vast collection of goods scattered across countless individual stores, each with its own layout and labeling system. Finding exactly what you need can feel like an endless, frustrating search. This is where a semantic layer can help. Think of it as your organization’s “Information Shopping Mall.” 

Just as a physical mall provides a cohesive structure for shoppers to find stores, browse items, and make purchases, a semantic layer creates a unified environment for business users. It allows them to easily discover datasets from diverse sources, review connected information, and gain actionable insights. It brings together a variety of data providers (our "stores") and their data (their "goods") into a single, intuitive location, enabling end-users, including people, analytics tools, and agentic solutions (our "shoppers"), to find and consume precisely what they need to excel in their roles.

This analogy of the Semantic Layer as an Information Shopping Mall has proven incredibly helpful for our teams and clients. In this blog post, we’ll use this familiar background to explore the foundational elements required to build your own Semantic Layer Shopping Mall and share key lessons learned along the way. 

 

1. Building the Mall: Creating the Structural Foundations

Before any stores can open their doors, a shopping mall needs fundamental structural elements: floors, walls, escalators, and walkways. Similarly, a semantic layer demands a well-designed technology architecture to support a seamless, connected data experience.

The core infrastructure of your semantic layer is formed by powerful tools such as Graph Databases, which connect complex relationships; Taxonomy Management Systems, for organizing data with consistent vocabularies; and Data Catalogs, which provide a directory of your data assets. Just like physical malls, no two semantic layers are identical. The unique goals and existing technological landscape of your organization will dictate the specific architecture required to build your bespoke information shopping mall. For example, an organization with a variety of data sensitivity levels and goals of creating agentic solutions may require an Identity and Access Management solution to ensure security across uses, or an organization that is keen on creating fraud detection solutions on top of a plethora of information may require a graph analytics tool. 

 

2. Creating the Directory: Developing Categorization Models

With your Information Shopping Mall's infrastructure in place, the next crucial step is to design its interior layout and create a clear map for your shoppers. A well-designed store directory allows a shopper to quickly scan product types like clothing, electronics, and toys and effortlessly navigate to the right section or store.

Your semantic layer needs precisely this type of robust core categorization model to direct your tools, systems, and people to the specific information they seek. This is achieved by establishing and consistently applying a common vocabulary across all of your systems. Within the semantic layer context, we leverage taxonomies (hierarchical lists of values) and ontologies (formal maps of concepts and their relationships) to provide this essential direction. Taxonomies may be used where we want to categorize stores as alike (Payless, DSW, and Foot Locker may be interchangeable as shoe stores), whereas ontologies, thanks to their multi-relational nature, can help tell us which stores make sense to visit for a certain occasion: Staples for school supplies, followed by Gap for back-to-school clothes.
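For readers who want to see what these two structures look like in practice, here is a minimal sketch expressed in Turtle and loaded with the rdflib Python library; the store names and the `ex:relevantFor` property are purely illustrative:

```python
from rdflib import Graph

# A SKOS-style taxonomy groups like stores under a shared concept, while
# an ontology property relates stores to a separate concept (an occasion).
ttl = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <https://example.org/> .

ex:ShoeStores a skos:Concept ; skos:prefLabel "Shoe Stores" .
ex:Payless    skos:broader ex:ShoeStores .
ex:DSW        skos:broader ex:ShoeStores .
ex:FootLocker skos:broader ex:ShoeStores .

ex:BackToSchool a ex:Occasion .
ex:Staples ex:relevantFor ex:BackToSchool .
ex:Gap     ex:relevantFor ex:BackToSchool .
"""

g = Graph()
g.parse(data=ttl, format="turtle")
print(len(g), "triples loaded")
```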

Developing an effective semantic layer directory demands two key considerations: 

  • Achieving a Consensus on Terminology: Imagine a mall directory where “Footwear” and “Shoes” are used in different sections, or where “Electronics” and “Gadgets” demand their own spaces. This negates the purpose of categorization and causes confusion. A semantic layer requires careful negotiation with stakeholders to agree on common concepts. Investing the time to navigate organizational differences and build consensus on metadata and taxonomy terms before implementation significantly mitigates technical challenges down the line. 
  • Designing an Extensible Model: For a semantic layer to thrive, its underlying data model must be capable of growing over time. As new data providers (“stores”) join your mall and new use cases emerge, the model must seamlessly integrate without ‘breaking’ previous work. Employing ontology design best practices and engaging with seasoned professionals ensures that your semantic layer is an accurate reflection of your organization’s reality and can evolve flexibly with both new information and demands. 

At Enterprise Knowledge, we advocate for initiating this phase with a small group of pilot use cases. These pilots typically focus on building out scoped taxonomies or ontologies tied to high-value, priority use cases and serve as a proving ground for onboarding initial data providers. Starting small allows for agile iteration, refinement, and stakeholder alignment before scaling. 

 

3. Store Tenant Recruitment: Driving Adoption & Buy-In

Once the mall’s structure is complete, the focus shifts to a dual objective: attracting sought-after stores (data providers) to occupy the spaces and convincing customers (business users) to come and shop. A successful mall developer must persuasively demonstrate the benefits to retailers, such as high foot traffic, convenience, and access to a wider audience, to secure their commitment. A clear articulation of value is essential to get retailers on board.

When deploying your semantic layer, robust stakeholder buy-in is key. Strategically position your semantic layer initiative as an effort to significantly enhance your knowledge-connectedness and enable decision-making across the organization. Summarizing this information in a cohesive Semantic Layer Strategy is key to quickly convincing providers and customers. 

An effective Semantic Layer Strategy should focus on: 

  • Establishing a Clear Product Vision: To attract both data providers and consumers, the strategy must have a well-defined product vision. This involves articulating what the semantic layer will become, who it will serve, and what core problems it will solve. This strategic clarity ensures that all stakeholders understand the overarching purpose and direction, fostering alignment and shared purpose.
  • Defining Measurable Outcomes: To truly gain adoption, your strategy should demonstrably link to tangible business outcomes. It is paramount to build compelling reasons for stakeholders to both contribute information and consume insights from the semantic layer. This involves identifying and communicating the specific, high-impact results (e.g., increased efficiency, reduced risk, enhanced insights) that the semantic layer will deliver.

 

4. Grand Opening: Populating Data & Unveiling Use Cases

With the foundation built, the directory mapped, and the tenants recruited, it’s finally time for the grand unveiling of your Information Shopping Mall. This phase involves connecting applications to your semantic layer and populating it with data.

A successful grand opening requires:

  • Robust Data Pipelines: Just like a mall needs efficient distributors to stock its stores, your semantic layer needs APIs and data transformation pipelines. These are critical conduits that connect various source applications (like CRMs, Content Management Systems, and traditional databases) to your semantic layer, ensuring a continuous flow of high-quality data (a simplified sketch of such a pipeline follows this list).
  • Secure Entitlement Structures: Paramount to any successful mall is ensuring security of its goods. For your semantic layer, this translates to establishing secure entitlement structures. This involves defining who has access to what information and ensuring sensitive information remains protected while still enabling necessary access for relevant business users.
  • Coordinated Capability Development: A seamless launch is the result of close coordination between technology teams, product owners, and stakeholders. This collaboration is vital for building the necessary technical capabilities, shaping an intuitive user experience, and managing expectations across the organization as new semantic-powered use cases arise.
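As referenced above, the sketch below outlines the extract-transform-load shape of such a pipeline in Python. Every endpoint, payload field, and authentication detail is a placeholder; real CMS and semantic layer APIs will differ:

```python
import requests

def sync_documents(cms_base: str, layer_base: str, token: str) -> None:
    """Extract documents from a (hypothetical) CMS API, map them to the
    shared vocabulary, and load them into a (hypothetical) semantic layer API."""
    headers = {"Authorization": f"Bearer {token}"}
    docs = requests.get(f"{cms_base}/api/documents", headers=headers).json()
    for doc in docs:
        payload = {
            "id": doc["id"],
            "title": doc["title"],
            # Transform free-text CMS labels toward controlled taxonomy terms
            "topics": [label.strip().title() for label in doc.get("labels", [])],
        }
        requests.post(f"{layer_base}/api/assets", json=payload,
                      headers=headers).raise_for_status()
```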


Conclusion 

Building an Information Shopping Mall – your Semantic Layer – transforms disjointed data into an invaluable, accessible asset. This empowers your business with clarity, efficiency, and insight.

At Enterprise Knowledge, we specialize in guiding organizations through every phase of this complex journey, turning the vision of truly connected knowledge into a tangible reality. For more information, reach out to us at info@enterprise-knowledge.com.

The Role of Ontologies with LLMs

In today’s world, the capabilities of artificial intelligence (AI) and large language models (LLMs) have generated widespread excitement. Recent advancements have made natural language use cases, like chatbots and semantic search, more feasible for organizations. However, many people don’t understand the significant role that ontologies play alongside AI and LLMs. People often ask: do LLMs replace ontologies or complement them? Are ontologies becoming obsolete, or are they still relevant in this rapidly evolving field? 

In this blog, I will explain the continuing importance of ontologies in your organization’s quest for better knowledge retrieval and in augmenting the capabilities of LLMs.

Defining Ontologies and LLMs

Let’s start with quick definitions to ensure we have the same background information.

What is an Ontology

Figure: An example ontology for Enterprise Knowledge, showing entity types and the relationships between them.

An ontology is a data model that describes a knowledge domain, typically within an organization or particular subject area, and provides context for how different entities are related. For example, an ontology for Enterprise Knowledge could include the following entity types:

  • Clients
  • People
  • Policies
  • Projects
  • Experts 
  • Tools

The ontology includes properties about each type, e.g., people's names and projects' start and end dates. Additionally, the ontology contains the relationships between types, such as people work on projects, people are experts in tools, and projects are with clients.

Ontologies define the model often used in a knowledge graph, the database of real-world things and their connections. For instance, the ontology describes types like people, projects, and clients, and the corresponding knowledge graph would contain the actual data, such as information about James Midkiff (Person), who worked on semantic search (Project) for a multinational development bank (Client).
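To show the distinction in code, the sketch below uses the rdflib Python library to record both model-level statements and the actual facts from the example; the namespace and property names are illustrative:

```python
from rdflib import Graph, Namespace, RDF

EK = Namespace("https://example.org/ek/")
g = Graph()

# Ontology-level statements: the relationships the model allows
g.add((EK.worksOn, RDF.type, RDF.Property))   # Person -> Project
g.add((EK.isWith,  RDF.type, RDF.Property))   # Project -> Client

# Knowledge-graph-level statements: the actual facts from the example
g.add((EK.JamesMidkiff, RDF.type, EK.Person))
g.add((EK.SemanticSearch, RDF.type, EK.Project))
g.add((EK.DevelopmentBank, RDF.type, EK.Client))
g.add((EK.JamesMidkiff, EK.worksOn, EK.SemanticSearch))
g.add((EK.SemanticSearch, EK.isWith, EK.DevelopmentBank))

print(g.serialize(format="turtle"))
```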

What is an LLM


An LLM is a model trained to understand human sentence structure and meaning. The model can understand text inputs and generate outputs that adhere to correct grammar and language. To briefly describe how an LLM works: the model represents text as vectors, known as embeddings. Embeddings act like a numerical fingerprint, uniquely representing each piece of text. The LLM can mathematically compare embeddings of its training set with embeddings from the input text and find similarities to piece together an answer. For example, an LLM can be provided with a large document and asked to summarize it. Since the model can understand the meaning of the large document, transforming it into embeddings, it can easily compile an answer from the provided text.
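A small sketch of this embedding comparison, assuming the open-source sentence-transformers package (any embedding model would work the same way):

```python
from sentence_transformers import SentenceTransformer
from numpy import dot
from numpy.linalg import norm

# A small open model, used here purely as an example
model = SentenceTransformer("all-MiniLM-L6-v2")

a, b = model.encode([
    "People work on projects for clients.",
    "Staff are assigned to client engagements.",
])
# Cosine similarity: semantically close sentences score close to 1
cosine = dot(a, b) / (norm(a) * norm(b))
print(f"Similarity: {cosine:.2f}")
```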

Organizations can take advantage of open-source LLMs like Llama2, BLOOM, and BERT, as developing and training custom LLMs can be prohibitively expensive. While utilizing these models, organizations can fine-tune (extend) them with domain-specific information to help the LLM understand the nuances of a particular field. The tuning process is much less expensive to perform and can improve the accuracy of a model’s output.

Integrating Ontologies and LLMs

When an organization begins to utilize LLMs, several common concerns emerge:

  1. Hallucinations: LLMs are prone to hallucinate, returning incorrect results based on incomplete or outdated training data or by making statistically-based best guesses.
  2. Knowledge Limitation: Out of the box, LLMs can only answer questions from their training set and the provided input text.
  3. Unclear Traceability: LLMs return answers based on their training data and statistics, and it is often unclear if the provided answer is a fact pulled from input training data or if it is a guess.

These concerns are all addressed by providing LLMs with methods to integrate information from an organization’s knowledge domain.

Fine-tuning with a Knowledge Graph

Ontologies model the facts within an organization’s knowledge domain, while a knowledge graph populates these models with actual, factual values. We can leverage these facts to customize and fine-tune the language model to align with the organization’s manner of describing and interconnecting information. This fine-tuning enables the LLM to answer domain-specific questions, accurately identify named entities relevant to the field, and generate language using the organization’s vocabulary. 

A knowledge graph can be leveraged to customize fine-tuning of the language model to answer domain-specific questions.

Training an LLM with factual information presents challenges similar to those encountered with the original LLM: the training data can become outdated, leading to incomplete or inaccurate responses. To address this, fine-tuning an LLM should be considered a continuous process. Regularly updating the LLM with new and existing relevant information is necessary to maintain up-to-date language usage and factual accuracy. Additionally, it's essential to diversify the training material fed into the LLM to provide a sample of content in various forms. This involves combining ontology-based facts with varied content and data from the organization's domain, creating a training set that ensures the LLM is balanced and not biased toward any specific dataset.
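One lightweight way to turn knowledge graph facts into fine-tuning material is to render triples through natural-language templates. The sketch below is illustrative only: the triples and templates are invented, and a real pipeline would draw facts from the knowledge graph and pair them with varied organizational content:

```python
# Templates keyed by predicate; each renders a (subject, predicate, object)
# fact as a natural-language training sentence.
TEMPLATES = {
    "worksOn": "{s} works on the {o} project.",
    "expertIn": "{s} is an expert in {o}.",
    "isWith": "The {s} project is with {o}.",
}

triples = [
    ("James Midkiff", "worksOn", "semantic search"),
    ("James Midkiff", "expertIn", "knowledge graphs"),
    ("semantic search", "isWith", "a multinational development bank"),
]

training_sentences = [TEMPLATES[p].format(s=s, o=o) for s, p, o in triples]
for line in training_sentences:
    print(line)
```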

Retrieval Augmented Generation

The primary method used to avoid stale or incomplete LLM responses is Retrieval Augmented Generation (RAG). RAG is a process that augments the input fed into an LLM with relevant information from an organization’s knowledge domain. Using RAG, an LLM can access information beyond its original training set, utilizing this information to produce more accurate answers. RAG can draw from diverse data sources, including databases, search engines (semantic or vector search), and APIs. An additional benefit of RAG is its ability to provide references for the sources used to generate responses.

A RAG can enhance an LLM to produce a cleaner answer
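Below is a minimal sketch of the retrieval half of RAG, using the sentence-transformers package for vector search over a handful of knowledge graph facts; the facts are invented, and the assembled prompt would then be passed to whichever LLM your organization uses:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Facts pulled from the knowledge graph (illustrative)
facts = [
    "James Midkiff worked on semantic search for a development bank.",
    "Enterprise Knowledge develops taxonomies and ontologies.",
]
fact_embeddings = model.encode(facts, convert_to_tensor=True)

def build_rag_prompt(question: str, top_k: int = 1) -> str:
    """Retrieve the most relevant facts and prepend them to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, fact_embeddings, top_k=top_k)[0]
    context = "\n".join(facts[h["corpus_id"]] for h in hits)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("Who worked on semantic search?"))
```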

We aim to leverage the ontology and knowledge graph to extract facts relevant to the LLM’s input, thereby enhancing the quality of the LLM’s responses. By providing these facts as inputs, the LLM can explicitly understand the relationships within the domain rather than discerning them statistically. Furthermore, feeding the LLM with specific numerical data and other relevant information increases the LLM’s ability to respond to complex queries, including those involving calculations or relating multiple pieces of information. With accurately tailored inputs, the LLM will provide validated, actionable insights rooted in the organization’s data.

For an example of RAG in action, see the LLM input and response below using a GenAI stack with Neo4j.

An example of how RAG may improve results from an LLM: a chatbot interface showing a user question and an accurate response that cites Stack Overflow links as sources in the footnotes.

Conclusion

LLMs are an exciting tool that enables us to effectively interpret and utilize an organization's knowledge and quickly access valuable answers and insights. Integrating ontologies and their corresponding knowledge graphs ensures that the LLM accurately uses the language and factual content of an organization's knowledge domain when generating responses. Are you interested in leveraging your organization's knowledge with an LLM? Contact us for more information on how we can get started.

Constructing KM Technology: Tips for Implementing Your KM Technology Solutions

In the digital age that we now live in, making Knowledge Management (KM) successful at any organization relies heavily on the technologies used to accomplish everyday tasks. Companies are recognizing the importance of providing their workforce with smarter, more efficient, and highly specialized technological tools so that employees can maximize productivity in their everyday work. There's also the expectation for a KM system, like SharePoint, to act as an all-in-one solution. Companies in search of software solutions often make the mistake of thinking a single system can effectively fulfill all of their needs, including content management, document management, AI-powered search, automated workflows, etc., which simply isn't the case. The reality is that multi-purpose software tools may be able to serve more than one business function, but in doing so they only deliver basic features that lack necessary specifications, resulting in a sub-par product. More information on the need for a multi-system solution can be found in this blog about the importance of a semantic layer in a knowledge management technology suite.

In our experience at Enterprise Knowledge (EK), we consider the following to be core and essential systems for most integrated KM technology solutions:

  • Content Management Systems
  • Taxonomy Management Systems
  • Enterprise Search Tools
  • Knowledge Graphs

The systems mentioned above are essential tools to enable successful and mature KM, and when integrated with one another can serve to revolutionize the interaction between an organization’s staff and its information. EK has seen the most success with client organizations once they have understood the need for a blended set of technological tools and taken the steps to implement and integrate them with one another.

Once this need for a combined set of specialized solutions is realized, the issue of how to implement these solutions becomes ever-present and must be approached with a specific strategy for design and deployment. This blog will help to outline some of the key tips and guidelines for the implementation of a KM technology solution, regardless of its current state.

Figure: Core KM technology systems (CMS, TMS, and Search Engine).

Prioritizing Your Technology Needs

When thinking about the approach to implementing an organization’s identified technology solutions, there is often an inclination to prioritize solutions that are considered “state-of-the-art” or “cooler” than others. This is understandable, especially with the new-age technology that is on the market and able to create a “wow” factor for a business’ employees and customers. However, it is important to remember that the order in which systems are implemented relies heavily on the current makeup of the organization’s technology stack. For example, although it might be tempting to take on the implementation of an AI-powered knowledge graph or a chat-bot that has Natural Language Processing (NLP) capabilities, the quality of your results and real-world usability of the product will increase dramatically if you also include other technologies such as a graph database to provide the foundation for a knowledge graph, or a Taxonomy Management System to allow for the design and curation of an enterprise taxonomy and/or ontology.

Depending on your organization’s level of maturity with respect to its technology ecosystem, the order in which systems are implemented must be strategically defined so that one system can build off of and enhance the previous. Typically, if an organization does not possess a solidified instance of any of the core KM technologies, the logical first step is to implement a Content Management System (CMS) or Document Management System (DMS), or in some cases, both. Following the “content first” approach, commonly used in web design and digitalization, organizations must first have a place in which they can effectively store, manage, and access their content, as an organization’s content is arguably one of its most valuable assets. Furthermore, one could argue that all core KM technologies are centered around an organization’s content and exist to improve/enhance that content whether it is adding to its structure, creating ways to more efficiently store and describe it, or more effectively searching and retrieving it at the time of need.

Once an organization has a solidified CMS solution in place, the next step is to implement tools geared towards the enhancement and findability of that content. One system in particular that helps to drastically improve the quality of an organization's content by managing and deploying enterprise-wide taxonomies and ontologies is a Taxonomy Management System (TMS). TMS solutions are integrated with an organization's CMS and search tools and serve as a place to create, deploy, and manage poly-hierarchical taxonomies in a single place. TMS tools allow organizations to add structure to their content, describe it in a way that significantly improves organization, and fuel search by providing a set of predefined values from a controlled vocabulary that can be used to create facets and other forms of search-narrowing instruments. A common approach to implementing your technology ecosystem involves the simultaneous implementation of an enterprise search solution alongside the TMS implementation. Once again, the idea of one solution building off another is present here, as enterprise search tools build on the previously implemented CMS instance by utilizing Access Control List (ACL) specifications, security trimming considerations, content structure details, and more. Once these three systems are in place, organizations can afford to look into additional tools such as knowledge graphs, AI-powered chatbots, and metadata catalogs.

Defining Business Logic and Common Uses

There is a great deal of preparation involved with the implementation of KM technologies, especially when considering the envisioned use of the system by organizational staff. As part of this preparation, a thorough analysis of existing business processes and standard operating procedures must be executed to account for the specific needs of users and how those needs will influence the design of the target system. Although it is not always initially obvious, the way in which a system is going to be used will heavily impact how that system is designed and implemented. As such, the individuals responsible for implementation must have a well-documented, thorough understanding of what end users will need from the tool, combined with a comprehensive list of core use cases. These types of details are most commonly elicited through a set of analysis activities with the system’s expected users.

Without these types of preliminary activities, the implementation process will seldom go as planned. This is because various detours will have to be taken to accommodate the business process details that are unique to the organization and therefore not ‘pre-baked’ into software solutions. These considerations sometimes come in the form of taxonomy/controlled list requirements, customizable workflows, content type specifications, and security concerns, to name a few.

If the proper arrangements aren’t made before implementing software and integrating with additional systems, it will almost always affect the scope of your implementation effort. Software implementation is not a “one size fits all” type of effort; there are certain design elements that are based on the business and functional requirements of the target solution, and these must be identified in the initial stages of the project. EK has seen how the lack of these preparatory activities can have impacts on project timelines, most commonly because of delays due to unforeseen circumstances. This results in extended deadlines, change requests, additional investment, and other general inefficiencies.

Recruiting the Proper Resources

In addition to the activities needed before implementation, it is absolutely essential to ensure that the appropriate resources are assigned to the project. This too can create issues down the road if not given the appropriate amount of time and attention before beginning the project. Generally speaking, there are a few standard roles that are necessary for any implementation project, regardless of the type or complexity of the effort. These roles are listed and described below:

  • KM Designer/Consultant: Regardless of the type of system to be implemented, having a KM consultant on board is needed for various reasons. A KM consultant will be able to assist with the non-developmental areas of the project, for example designing taxonomies/ontologies, content types, search experiences, and/or governance structures.
  • Senior Solutions Architect: Depending on the level of integration required, a Senior Solutions Architect is likely required. This is ideally a person with considerable experience working with multiple types of technologies that are core to KM. This person should have a thorough and comprehensive understanding of how to arrange systems into a technology suite and how each component works, both alone and as part of a larger, combined solution. Familiarity with REST, SOAP, and RPC APIs, along with other general knowledge about the communication between software is a must.
  • Technology Subject Matter Expert (SME): This role is absolutely critical to the success of the implementation, as there will be a need for someone who specializes in the type of software being implemented. For example, if an organization is working to implement a TMS and integrate it with other systems, the project will need to staff a TMS integration SME to ensure the system is installed according to implementation best practices. This person will also be responsible for a large portion of the “installment” of the software, meaning they will be heavily involved with the initial set up and configuration based on the organization’s specific use of the system.
  • KM Project Manager: As is common with all projects, there will be a need for a project manager to coordinate meetings, ensure the project is on schedule, and facilitate the ongoing alignment of all engaged parties. This person should be familiar with KM so that they can align efforts with best practices and help facilitate KM-related decisions.
  • API Developer(s): Depending on the level of integration required, a developer may be needed to develop code to serve as a connector between systems. This individual must be familiar with the communication logic needed between systems and have a thorough understanding of APIs as well. The programming language in which any custom coding is needed will vary from organization to organization, but it is required that the developer has experience with the identified language.

The list above is by no means exhaustive, nor does it contain resources that are commonly assumed to be a part of any implementation effort. These roles are simply the unique ones that help with successful implementations. Also, depending on the level of effort required, there may be a need for multiple resources at each role, such as the developer or SME role. This type of consideration is important, as the project will need to have ample resources according to the project’s defined timeline.

Defining a Realistic Timeline

One final factor to consider when preparing for a technology solution implementation effort is the estimated time in which the project is expected to be completed. Implementation efforts are notoriously difficult to estimate in terms of time and resources needed, which often results in the over- or under-allocation of financing for a given effort. As a result, it's recommended to err on the side of caution and incorporate more time than is initially estimated for the project to reach completion. If similar efforts have been completed in the past, utilize informal benchmarking. If available resources have experience implementing similar solutions, bring them to the forefront. The best way to estimate the level of effort and time needed to complete certain tasks is to look at historical data, which in this case would be previous implementation efforts.

In EK’s experience implementing large scale and highly complex software and custom solutions, we have learned that it is important to prepare for the unexpected to ensure the expected timeline is not derailed by unanticipated delays. For example, one common consideration we have encountered many times and one that has created significant delays is the need to get individuals appropriate access to certain systems or organizational resources. This is especially relevant with third-party consultants and when the system(s) in question have high security requirements. Additionally, there are several KM-related considerations that can unexpectedly lengthen a project’s timeline, such as the quality/readiness of content, governance standards and procedures that may be lacking, and/or change management preparations.

Conclusion

There are many factors that go into an implementation effort and, unfortunately, a lot of ways one can go wrong. Very seldom are projects like these executed to perfection, and when they fail or go awry, it is usually due to one or a combination of the factors mentioned above. The good news, and the common theme among these considerations, is that these pitfalls can mostly be avoided with the proper planning, preparation, and estimates (with regards to both time and resources). The initial stages of an implementation effort are the most critical, as these are the times when project planners need to be honest and realistic with their projections. There is often a tendency to begin development as soon as possible and to skip most of the preparatory activities due to an eagerness to get started. It is important to remember that successful implementation efforts require the necessary legwork, even if it may seem superfluous at the time. Does your company need assistance implementing a piece of technology without being sure how to get started? EK provides end-to-end services beginning with strategy and design and ending with the implementation of fully functional KM systems. Reach out to us with any questions or general inquiries.

The post Constructing KM Technology: Tips for Implementing Your KM Technology Solutions appeared first on Enterprise Knowledge.

]]>
How to Explain Ontologies to Any Audience https://enterprise-knowledge.com/how-to-explain-ontologies-to-any-audience/ Tue, 19 Apr 2022 14:00:00 +0000 https://enterprise-knowledge.com/?p=15256 Recently, I’ve been thinking a lot about how we can better explain complex, semantic concepts like ontologies, knowledge graphs, and data fabrics to others in an approachable, easy-to-understand manner. I’ve been in any number of situations where I’ve struggled to … Continue reading

The post How to Explain Ontologies to Any Audience appeared first on Enterprise Knowledge.

]]>
Recently, I've been thinking a lot about how we can better explain complex semantic concepts like ontologies, knowledge graphs, and data fabrics to others in an approachable, easy-to-understand manner. I've been in any number of situations where I've struggled to explain ontologies, and most often it's because I jumped too quickly into the details, or because I didn't have clear outcomes or goals for the conversation.

Whether you are a consultant, an individual contributor, or the manager of a technical team, you will find yourself explaining what an ontology is, how it will benefit an organization, and the best ways to get started. While this blog focuses on tips and examples for explaining ontologies, I've found these tips helpful for any conversation about semantic concepts.

Set Intentions and Outline the Big Picture

Every successful presentation or productive conversation begins with setting and recognizing intentions. Understanding both your own and your audience's intentions will help you tailor your message and ensure understanding.

[Image: A scale of audience personas ranging from Technical (Deep & Detailed) to Big Picture (Vision & Outcomes). Semantic Engineers, Ontologists, and IT sit on the Technical side; Executives, Directors, and Management sit on the Big Picture side; Business SMEs and end users fall in the middle.]

As a presenter, what outcomes am I looking for from this discussion?

For the audience, what are they looking to learn, understand, apply, or receive?

There are many reasons why ontologists and other semantic SMEs need to explain ontologies, including interviewing for a new role or job, pitching new project ideas, securing funding for new tools, and growing their teams. In each of these situations, the audience may need to understand the essentials of ontologies to varying degrees, and it is the presenter's role to ensure the audience reaches the level of understanding they need.

For example, I recently worked on a project that sought funding for ontology management tooling to support federated data access and governance across a large organization. The project lead needed a strong case to present to leadership to elicit support and funding. In this case, their WHY is not just needing tooling, but needing tooling that supports a critical business process (federated contributions to the ontology) in an approachable way, embedding governance directly within the daily workflow.

A common mistake is diving too quickly into the weeds without first framing the discussion for your audience. Many of the topics you touch on may be new to them, or still mainly theoretical in nature. It's your responsibility as the presenter to ensure they can take a theoretical concept from your presentation and apply it to their work or their projects. There are a few questions I always ask myself to help craft the big picture:

1.  What do you need out of this conversation?

Are you looking for alignment on terminology? Resources for a new project? A chance to share best practices? A detailed modeling conversation? Funding for an ontology management tool?

2.  Who is your audience?

Are you presenting to Data Scientists? Ontologists? Executives? What is their background and what level of understanding do they walk in with?

3.  What do they need to know in order for you to achieve your goals or to answer your question?

Our project lead above was looking for funding for new ontology management tooling, and in order to elicit support, they would need to explain the need, the business value, and likely some technical concepts like RDF, OWL, or SHACL to describe what the tooling is ultimately for. Depending on the audience, the amount of explanation in each area may vary.
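When a technical detour like this is warranted, a small, concrete artifact often lands better than definitions alone. The sketch below is a hypothetical example using Python's rdflib library (the namespace and names are invented for illustration); it shows the kind of RDF/OWL statements that ontology management tooling stores and governs, while SHACL shapes would add validation rules on top of definitions like these.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace for illustration
EX = Namespace("https://example.com/ontology/")

g = Graph()
g.bind("ex", EX)

# Define a class and a property as RDF triples, the kind of
# artifact an ontology management tool stores and governs
g.add((EX.Customer, RDF.type, OWL.Class))
g.add((EX.Customer, RDFS.label, Literal("Customer")))
g.add((EX.phoneNumber, RDF.type, OWL.DatatypeProperty))
g.add((EX.phoneNumber, RDFS.domain, EX.Customer))
g.add((EX.phoneNumber, RDFS.label, Literal("Customer Phone Number")))

# Serialize to Turtle, a human-readable RDF syntax
print(g.serialize(format="turtle"))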

Drive Interaction and Engagement

Now that you’ve set intentions and outlined the big picture, it is important to keep this vision in mind throughout the conversation. There are specific actions you can take within a conversation that help to achieve this goal.

Use examples, anecdotes, and visuals

Never underestimate the importance of being able to tie conceptual conversations to practical, day-in-the-life examples. If you are trying to gain alignment across an enterprise on the importance of using ontologies for data standardization, for example, remind your audience of real-life examples of messy, duplicative, or hard-to-find data. Show them how having a single definition of a data element across all departments will add value in reporting across functions, as will having a 360-degree view of your customers, data, or technologies. For example, if you have multiple systems that produce or consume customer data, the same attribute may have many names depending on the system. An ontology can help standardize this data across systems by creating a single definition and a common, human-readable name for Customer Phone Number.

[Image: A node labeled "Customer Phone Number" with child branches labeled "PH_NUM_CUST_MOBILE", "PRIMARY_NUMBER", and "PhoneNum".]
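As a rough sketch of what that standardization looks like in practice, the hypothetical Python snippet below (field names taken from the diagram above; real systems will vary) maps each system-specific attribute name to the single, human-readable label the ontology defines:

# Hypothetical system-specific field names mapped to the single
# label the ontology defines; real systems will vary.
FIELD_ALIASES = {
    "PH_NUM_CUST_MOBILE": "Customer Phone Number",  # e.g., a CRM export
    "PRIMARY_NUMBER": "Customer Phone Number",      # e.g., a billing system
    "PhoneNum": "Customer Phone Number",            # e.g., a support desk
}

def standardize_record(record):
    """Rename system-specific fields to their canonical ontology labels,
    leaving unmapped fields unchanged."""
    return {FIELD_ALIASES.get(field, field): value for field, value in record.items()}

# A record from the billing system, before and after standardization
print(standardize_record({"PRIMARY_NUMBER": "+1-555-0100", "account": "A-123"}))
# -> {'Customer Phone Number': '+1-555-0100', 'account': 'A-123'}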

Anecdotal, real-life examples are critical for ensuring your audience walks away with a practical understanding of how your presentation applies to them.

Co-create and make your audience part of the conversation

In addition to anecdotes and visuals, guide your audience through hands-on activities that apply the complex concepts of ontology design, tooling, or governance. This not only helps them better understand, but also gives them a sense of ownership in the final outcome – they’ve had a hand in reaching the goal. For ontology design, this might be a quick demo of an initial design, live prototyping of a feature that will help them complete a task better in the future, or an activity to collaboratively define personas and user stories. 

Engaging your audience isn't limited to the initial conversation; it's also relevant when following up afterward. Re-engaging your audience might include sending updates after a tool has been acquired, inviting them to demos or to participate in design, and sending memos about the outcome of the ontology after it's been "realized."

Pause often and check in on your audience

Last, but definitely not least, build in time to pause and give your audience time to digest, to ask questions, or to repeat back to you, in their own words, what they have come to understand from your conversation. Remember an audience member’s intention and background when answering their questions (i.e., try to think about what they need to know rather than what they are specifically asking). When formulating your response to an audience member’s question, think about the following prompts before jumping into the details:

Why might they be asking this question? 

How does it relate to their goals? 

What do they need to know from my answer to achieve their needs?

Remembering both your own and your audience's intentions from beginning to end will ensure a more productive conversation built on solid understanding.

Following these tips has helped me have productive conversations with non-technical audiences about very complex topics related to ontologies and knowledge graphs. For more tips, or for help presenting the business case for knowledge graphs, tooling, and more, reach out to us and learn more.

The post How to Explain Ontologies to Any Audience appeared first on Enterprise Knowledge.

]]>
Knowledge Cast Product Spotlight – Andreas Blumauer of Semantic Web Company https://enterprise-knowledge.com/knowledge-cast-product-spotlight-andreas-blumauer-of-semantic-web-company/ Wed, 15 Dec 2021 14:43:56 +0000 https://enterprise-knowledge.com/?p=14006 In this episode of Product Spotlight, EK COO Joe Hilger speaks with Andreas Blumauer of Semantic Web Company. Andreas has been CEO and managing partner of Semantic Web Company (SWC) for more than 15 years. At SWC, he is responsible … Continue reading

The post Knowledge Cast Product Spotlight – Andreas Blumauer of Semantic Web Company appeared first on Enterprise Knowledge.

]]>
In this episode of Product Spotlight, EK COO Joe Hilger speaks with Andreas Blumauer of Semantic Web Company.

Andreas has been CEO and managing partner of Semantic Web Company (SWC) for more than 15 years. At SWC, he is responsible for corporate strategy and strategic business development. Andreas has been a pioneer in the field of Semantic AI since 2001.


If you would like to be a guest on Knowledge Cast, Contact Enterprise Knowledge for more information.

The post Knowledge Cast Product Spotlight – Andreas Blumauer of Semantic Web Company appeared first on Enterprise Knowledge.

]]>
Knowledge Cast Product Spotlight – David Clarke of Synaptica https://enterprise-knowledge.com/knowledge-cast-product-spotlight-david-clarke-of-synaptica/ Mon, 13 Dec 2021 14:21:44 +0000 https://enterprise-knowledge.com/?p=13966 In this episode of Product Spotlight, EK COO Joe Hilger speaks with David Clarke of Synaptica. David is the CEO and Co-founder of Synaptica, an enterprise software application for building controlled vocabularies, including taxonomies, thesauri, ontologies and name authority files, … Continue reading

The post Knowledge Cast Product Spotlight – David Clarke of Synaptica appeared first on Enterprise Knowledge.

]]>
In this episode of Product Spotlight, EK COO Joe Hilger speaks with David Clarke of Synaptica. David is the CEO and Co-founder of Synaptica, an enterprise software application for building controlled vocabularies, including taxonomies, thesauri, ontologies, and name authority files, and for integrating them with corporate content management systems.

Product Spotlight is a series in which we talk about KM technologies and how different products on the market meet KM challenges and provide new and improved ways to meet tomorrow's challenges.


If you would like to be a guest on Knowledge Cast, Contact Enterprise Knowledge for more information.

The post Knowledge Cast Product Spotlight – David Clarke of Synaptica appeared first on Enterprise Knowledge.

]]>
EK Again Listed on KMWorld’s AI 50 Leading Companies https://enterprise-knowledge.com/ek-again-listed-on-kmworlds-ai-50-leading-companies/ Fri, 09 Jul 2021 19:59:07 +0000 https://enterprise-knowledge.com/?p=13483 Enterprise Knowledge (EK) has been listed on KMWorld’s 2021 list of leaders in Artificial Intelligence, the “AI 50: The Companies Empowering Intelligent Knowledge Management.” This is the second year in a row EK has been included. To help spotlight innovation … Continue reading

The post EK Again Listed on KMWorld’s AI 50 Leading Companies appeared first on Enterprise Knowledge.

]]>
Enterprise Knowledge (EK) has been listed on KMWorld’s 2021 list of leaders in Artificial Intelligence, the “AI 50: The Companies Empowering Intelligent Knowledge Management.” This is the second year in a row EK has been included. To help spotlight innovation in knowledge management, KMWorld developed the annual KMWorld AI 50, a list of vendors that are helping their customers excel in an increasingly competitive marketplace by imbuing products and services with intelligence and automation.

EK is one of the few dedicated consultancies included on the list, offering end-to-end technology selection, strategy, design, implementation, and support services for the full range of Enterprise AI components, including knowledge graphs, natural language processing, ontologies, and machine learning tools.

“A spectrum of AI technologies, including machine learning, natural language processing, and workflow automation, is increasingly being deployed by sophisticated organizations,” stated KMWorld Group Publisher Tom Hogan, Jr. “Their goal is simple. These organizations seek to excel in an increasingly competitive marketplace by improving decision making, enhancing customer interactions, supporting remote workers, and streamlining their processes. To showcase knowledge management solution providers that are imbuing their offerings with intelligence and automation, KMWorld created the ‘AI 50: The Companies Empowering Intelligent Knowledge Management.’”

Lulit Tesfaye, EK’s Practice Leader for Data and Information Management, shared, “Given our continued leadership in this space, and the growth of our team and its capabilities, I’m proud to be recognized in this way. We are increasingly seeing our work with customers grow from initial assessments and prototypes into enterprise engagements that are transforming the way they do business. I’m proud to be leading in this exciting space.”

EK CEO Zach Wahl added, “Thanks to KMWorld for this recognition. KM and AI are increasingly coming together, and we’re pleased to be leading organizations in their transformations to intelligent knowledge organizations.”

To read more about the recognition, visit Lulit’s AI Spotlight article on KMWorld and explore EK’s knowledge base for the latest thought leadership.

About Enterprise Knowledge

Enterprise Knowledge (EK) is a services firm that integrates Knowledge Management, Information Management, Information Technology, and Agile Approaches to deliver comprehensive solutions. Our mission is to form true partnerships with our clients, listening and collaborating to create tailored, practical, and results-oriented solutions that enable them to thrive and adapt to changing needs.

About KMWorld

KMWorld is the leading information provider serving the Knowledge Management systems market and covers the latest in Content, Document and Knowledge Management, informing more than 21,000 subscribers about the components and processes – and subsequent success stories – that together offer solutions for improving business performance.

KMWorld is a publishing unit of Information Today, Inc.

The post EK Again Listed on KMWorld’s AI 50 Leading Companies appeared first on Enterprise Knowledge.

]]>
EK Listed on KMWorld’s AI 50 Leading Companies https://enterprise-knowledge.com/ek-listed-on-kmworlds-ai-50-leading-companies/ Tue, 07 Jul 2020 15:54:34 +0000 https://enterprise-knowledge.com/?p=11510 Enterprise Knowledge (EK) has been listed on KMWorld’s inaugural list of leaders in Artificial Intelligence, the AI 50: The Companies Empowering Intelligent Knowledge Management. KMWorld developed the list to help shine a light on innovative knowledge management vendors that are … Continue reading

The post EK Listed on KMWorld’s AI 50 Leading Companies appeared first on Enterprise Knowledge.

]]>
2020 KMWorld AI 50

Enterprise Knowledge (EK) has been listed on KMWorld’s inaugural list of leaders in Artificial Intelligence, the AI 50: The Companies Empowering Intelligent Knowledge Management. KMWorld developed the list to help shine a light on innovative knowledge management vendors that are incorporating AI and cognitive computing technologies into their offerings.

As a services provider and thought leader in Enterprise AI, Knowledge Management, and Semantic Search, EK is one of the few dedicated services organizations included on the list. EK was uniquely recognized for our leadership in this area, including our AI Readiness Benchmark and range of functional demos that harness knowledge graphs, natural language processing, ontologies, and machine learning tools.

“As the drive for digital transformation becomes an imperative for companies seeking to compete and succeed in all industry sectors, intelligent tools and services are being leveraged to enable speed, insight, and accuracy,” said Tom Hogan, Group Publisher at KMWorld. “To showcase organizations that are incorporating AI and an assortment of related technologies—including natural language processing, machine learning, and computer vision—into their offerings, KMWorld created the ‘AI 50: The Companies Empowering Intelligent Knowledge Management.’”

Lulit Tesfaye, EK’s Practice Leader for Data and Information Management stated, “We are thrilled for this recognition and extremely proud of the cutting edge solutions we’re able to deliver for organizations looking to optimize their data and Knowledge AI initiatives. This recognition demonstrates EK’s ability to leverage our real-world experience and define the enterprise success factors for maturity and readiness for AI, bringing the focus back to business values, and the tangible applications of AI for the enterprise. Allowing organizations to go past the common AI limitations is what helps us show where we are leading.”

EK CEO Zach Wahl added, “Thanks to KMWorld for this recognition and congratulations to my amazing colleagues for their thought leadership. Alongside our recognition as one of the top 100 Companies That Matter in Knowledge Management for the sixth year in a row, this demonstrates EK’s leadership position at the nexus of KM and AI.”

About Enterprise Knowledge

Enterprise Knowledge (EK) is a services firm that integrates Knowledge Management, Information Management, Information Technology, and Agile Approaches to deliver comprehensive solutions. Our mission is to form true partnerships with our clients, listening and collaborating to create tailored, practical, and results-oriented solutions that enable them to thrive and adapt to changing needs.

About KMWorld

KMWorld is the leading information provider serving the Knowledge Management systems market and covers the latest in Content, Document and Knowledge Management, informing more than 21,000 subscribers about the components and processes – and subsequent success stories – that together offer solutions for improving business performance.

KMWorld is a publishing unit of Information Today, Inc.


The post EK Listed on KMWorld’s AI 50 Leading Companies appeared first on Enterprise Knowledge.

]]>