Linked Data Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/linked-data/

Webinar: Semantic Graphs in Action – Bridging LPG and RDF Frameworks
https://enterprise-knowledge.com/semantic-graphs-in-action-bridging-lpg-and-rdf-frameworks/
Wed, 27 Aug 2025

As organizations increasingly prioritize linked data capabilities to connect information across the enterprise, selecting the right graph framework has become more important than ever. In this webinar, graph technology experts from Enterprise Knowledge (Elliott Risch, James Egan, David Hughes, and Sara Nash) shared the best ways to manage and apply a selection of these frameworks to meet enterprise needs.

The discussion began with an overview of enterprise use cases for each approach, implementation best practices, and a live demo combining LPG and RDF frameworks. During a moderated discussion, panelists also tackled questions such as:

  • What are the key benefits RDF graphs and LPGs provide?
  • What are the important questions an enterprise architect should ask when designing a graph solution?
  • How are recent developments in the AI space and new AI frameworks influencing when to use graph frameworks?

If your organization is exploring linked data capabilities, new AI frameworks, or semantic model development, or is ready to kick off its next graph project, contact us to get started.

The post Webinar: Semantic Graphs in Action – Bridging LPG and RDF Frameworks appeared first on Enterprise Knowledge.

Semantic Layer Strategy for Linked Data Investigations
https://enterprise-knowledge.com/semantic-layer-strategy-for-linked-data-investigations/
Thu, 08 May 2025


The Challenge

A government organization sought to more effectively exploit the breadth of data generated by its investigations of criminal networks for comprehensive case building and threat trend analysis. The agency struggled to meaningfully connect structured and unstructured data from multiple siloed data sources, each with misaligned naming conventions and inconsistent data structures and formats. Users needed an existing understanding of the underlying data models and had to jump between multiple system views to answer core investigation analysis questions, such as “What other drivers have been associated with this vehicle involved in an inspection at the border?” or “How often has this person in the network traveled to a known suspect storage location in the past six months?”

These challenges manifested in data ambiguity across the organization, complex and resource-intensive integration workflows, and underutilized data assets lacking meaningful context, all resulting in significant cognitive load and burdensome manual effort for users conducting intelligence analyses. The organization recognized the need to define a robust semantic layer solution grounded in data modeling, architecture frameworks, and governance controls to unify, contextualize, and operationalize data assets via a “single pane of intelligence” analysis platform.

The Solution

To address these challenges, EK engaged with the client to develop a strategy and product vision for their semantic solution, paired with foundational semantic data models for meaningful data categorization and linking, architecture designs and tool recommendations for integrating and leveraging graph data, and entitlements designs for adhering to complex security standards. With phased implementation plans for incremental delivery, these components lay the foundations for the client’s solution vision for advanced entity resolution and analytics capabilities. The overall solution will power streamlined consumption experiences and data-driven insights through the “single pane of intelligence.”  

The core components of EK’s semantic advisory and solution development included:

Product Vision and Use Case Backlog:
EK collaborated with the client to shape a product vision anchored around the solution’s purpose and long-term value for the organization. Complemented with a strategic backlog of priority use cases, EK’s guidance resulted in a compelling narrative to drive stakeholder engagement and organizational buy-in, while also establishing a clear and tangible vision for scalable solution growth.

Solution Architecture Design:
EK’s solution architects gathered technical requirements to propose a modular solution architecture consisting of multiple, self-contained technology products that will provision a comprehensive analytic ecosystem to the organization’s user base. The native graph architecture involves a graph database, entity resolution services, and a linked data analysis platform to create a unified, interactive model of all of their data assets via the “single pane of intelligence.”
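As a concrete illustration of the kind of question a unified graph makes answerable, the sketch below models one of the investigation questions above as a single graph hop. The entities, relationship labels, and data are entirely hypothetical stand-ins for a graph database query:

```python
# Hypothetical edge list standing in for the graph database; labels and
# entities are illustrative, not the client's actual model.
edges = [
    ("driver:ana",  "DROVE",        "vehicle:123"),
    ("driver:ben",  "DROVE",        "vehicle:123"),
    ("vehicle:123", "INSPECTED_AT", "border:crossing-7"),
]

def drivers_of(vehicle):
    """All drivers linked to a vehicle: one hop in the graph,
    versus jumping across multiple system views in the siloed setup."""
    return [s for s, rel, o in edges if rel == "DROVE" and o == vehicle]

print(drivers_of("vehicle:123"))  # ['driver:ana', 'driver:ben']
```

In a production graph database the same question is a one-line traversal query; the point is that linked data collapses a multi-system lookup into a single hop.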

Tool Selection Advisory:
EK guided the client on selecting and successfully gaining buy-in for procurement of a graph database and a data analysis and visualization platform with native graph capabilities to plug into the semantic and presentation layers of the recommended architecture design. This selection moves the organization away from a monolithic, document-centric platform to a data-centric solution for dynamic intelligence analysis in alignment with their graph and network analytics use cases. EK’s experts in unified entitlements and industry security standards also ensured the selected tooling would comply with the client’s database, role, and attribute-based access control requirements.

Taxonomy and Ontology Modeling:
In collaboration with intelligence subject matter experts, EK guided the team from a broad conceptual model to an implementable ontology and starter taxonomy designs to enable a specific use case for prioritized data sources. EK advised on mapping the ontology model to components of the Common Core Ontologies to create a standard, interoperable foundation for consistent and scalable domain expansion.

Phased Implementation Plan:
Through dedicated planning and solutioning sessions with the core client team, EK developed an iterative implementation plan to scale the foundational data model and architecture components and unlock incremental technical capabilities. EK advised on identifying and defining starter pilot activities, outlining definitions of done, necessary roles and skillsets, and required tasks and supporting tooling from the overall architecture to ensure the client could quickly start on solution implementation. EK is directly supporting the team on the short-term implementation tasks while continuing to advise and plan for the longer-term solution needs.

 

The EK Difference

Semantic Layer Solution Strategy:
EK guided the client in transforming existing experimental work in the knowledge graph space into an enterprise solution that can scale and bring tangible value to users. From strategic use case development to iterative semantic model and architecture design, EK provided the client with repeatable processes for defining, shaping, and productionalizing components of the organization’s semantic layer.

LPG Analytics with RDF Semantics:
To support the client’s complex and dynamic analytics needs, EK recommended an LPG-based solution for its flexibility and scalability. At the same time, the client’s need for consistent data classification and linkage still pointed to the value of RDF frameworks for taxonomy and ontology development. EK is advising on how to bridge these models for the translation and connectivity of data across RDF and LPG formats, ultimately enabling seamless data integration and interoperability in alignment with semantic standards.
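One common bridging pattern, shown here as a simplified sketch rather than EK's actual pipeline, maps RDF datatype properties onto LPG node properties and RDF object properties onto typed edges:

```python
# Toy RDF-style triples: (subject, predicate, object).
# Resources carry the "ex:" prefix; plain strings are literals.
triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:name", "Alice"),
    ("ex:bob",   "ex:name", "Bob"),
]

def to_lpg(triples):
    """Fold RDF triples into LPG-style nodes (with properties) and typed edges."""
    nodes, edges = {}, []
    for s, p, o in triples:
        nodes.setdefault(s, {})
        if o.startswith("ex:"):
            # Object property -> typed edge between two nodes.
            nodes.setdefault(o, {})
            edges.append((s, p.split(":")[1], o))
        else:
            # Datatype property -> property on the subject node.
            nodes[s][p.split(":")[1]] = o
    return nodes, edges

nodes, edges = to_lpg(triples)
print(nodes)  # {'ex:alice': {'name': 'Alice'}, 'ex:bob': {'name': 'Bob'}}
print(edges)  # [('ex:alice', 'knows', 'ex:bob')]
```

The reverse direction (LPG back to RDF) is the harder half in practice, since edge properties and labels need an agreed-upon vocabulary to round-trip cleanly.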

Semantic Layer Tooling:
EK has extensive experience advising on the evaluation, selection, procurement, and scalable implementation of semantic layer technologies. EK’s qualitative evaluation of linked data analysis platforms for the organization was supplemented by a proprietary structured matrix measuring down-selected tools against 50+ functional and non-functional factors, providing a quantitative view of each tool’s ability to meet the organization’s specific needs.

Semantic Modeling and Scalable Graph Development:
Working closely with the organization’s domain experts, EK provided expert advisory in industry standards and best practices to create a semantic data model that will maximize graph benefits in the context of the client’s use cases and critical data assets. In parallel with model development, EK offered technical expertise to advise on the scalability of the resulting graph and connected data pipelines to support continued maintenance and expansion.

Unified Entitlements Design:
Particularly when working with a highly regulated government agency, EK understands the critical need for unified entitlements to provide a holistic definition of access rights, enabling consistent and correct privileges across every system and asset type in the organization. EK offered comprehensive entitlements design and development support to ensure access rights would be properly implemented across the client’s environment, closely tied to the architecture and data modeling frameworks.

Organizational Buy-In:
Throughout the engagement, EK worked closely with project sponsors to craft and communicate the solution product vision. EK tailored product communication components to different audiences by detailing granular technical features for tool procurement conversations and formulating business-driven, strategic value statements to engage business users and executives for organizational alignment. Gaining this buy-in early on is critical for maintaining development momentum and minimizing future roadblocks as wider user groups transition to using the productionalized solution.

The Results

With initial core semantic models, iterative solution architecture design plans, and incremental pilot modeling and engineering activities, the organization is equipped to stand up key pieces of the solution as they procure the graph analytics tooling for continued scale. The phased implementation plan provides the core team with tangible and achievable steps to transition from their current document-centric ways of working to a truly data-centric environment. The full resulting solution will facilitate investigation activities with a single pane view of multi-sourced data and comprehensive, dynamic analytics. This will streamline intelligence analysis across the organization with the enablement of advanced consumption experiences such as self-service reporting, text summarization, and geospatial network analysis, ultimately reducing the cognitive load and manual efforts users currently face in understanding and connecting data. EK’s proposed strategy has been approved for implementation, and EK will publish the results from the MVP development as a follow-up to this case study.


The post Semantic Layer Strategy for Linked Data Investigations appeared first on Enterprise Knowledge.

Enterprise AI Readiness Assessment
https://enterprise-knowledge.com/enterprise-ai-readiness-assessment/
Thu, 02 Jul 2020

Understand your organization’s priority areas before committing resources to mature your information and data management solutions. Enterprise Knowledge’s AI Readiness Assessment considers your organization’s business and technical ecosystem and identifies specific priority and gap areas to help you make targeted investments and gain tangible value from your data and information.

A wide range of organizations have placed AI on their strategic roadmaps, with C-levels commonly listing Knowledge AI among their biggest priorities. Yet many are already encountering challenges, as the vast majority of AI initiatives fail to show results, meet expectations, and provide real business value. For these organizations, the setbacks typically originate from the lack of a foundation on which to build AI capabilities. Enterprise AI projects too often end up as isolated endeavors, lacking the necessary foundations to support business practices and operations across the organization. So, how can your organization avoid these pitfalls? There are three key questions to ask when developing an Enterprise AI strategy: Do you have clear business applications? Do you understand the state of your information? And what in-house capabilities do you possess?

Enterprise AI entails leveraging advanced machine learning and cognitive capabilities to discover and deliver organizational knowledge, data, and information in a way that closely aligns with how humans look for and process information.

With our focus and expertise in knowledge, data, and information management, Enterprise Knowledge (EK) developed this proprietary Enterprise Artificial Intelligence (AI) Readiness Assessment to enable organizations to understand where they are and where they need to be to begin leveraging today’s technologies and AI capabilities for knowledge and data management.

The assessment evaluates your organization across four factors: organizational readiness, the state of data and content, skill sets and technical capabilities, and change readiness. Based on our experience conducting strategic assessments as well as designing and implementing Enterprise AI solutions, we have identified these four factors as the most common indicators of, and foundations for, an organization’s ability to evaluate its current capabilities and understand what it takes to invest in advanced capabilities.

This assessment leverages over thirty measurements across these four Enterprise AI Maturity factors as categorized under the following aspects. 

1. Organizational Readiness

Does your organization have the vision, support, and drive to enable successful Enterprise AI initiatives?

The foundational requirement for any organization to undergo an Enterprise AI transformation stems from alignment on vision and the business applications and justifications for launching successful initiatives. The Organizational Readiness factor includes the assessment of appropriate organizational designs, leadership willingness, and mandates that are necessary for success. This factor evaluates topics including:

  • The need for vision and strategy for AI and its clear application across the organization.
  • If AI is a strategic priority with leadership support.
  • If the scope of AI is clearly defined with measurable success criteria.
  • If there is a sense of urgency to implement AI.

With a clear picture of your organizational needs, the Organizational Readiness factor will allow you to determine whether your organization meets the requirements to pursue AI-related initiatives, while surfacing potential risks and preparing you to mitigate them.

2. The State of Organizational Data and Content

Is your data and content ready to be used for Enterprise AI initiatives?

The volume and dynamism of data and content (structured and/or unstructured) are growing exponentially, and organizations need to be able to securely manage and integrate that information. Enterprise AI requires quality of, and access to, this information. This assessment factor focuses on the extent to which existing structured and unstructured data is in a machine-consumable format and the level to which it supports business operations within the enterprise. This factor considers topics including:

  • The extent to which the organization’s information ecosystems allow for quick access to data from multiple sources.
  • The scope of organizational content that is structured and in a machine-readable format.
  • The state of standardized organization of content/data such as business taxonomy and metadata schemes and if it is accurately applied to content.
  • The existence of metadata for unstructured content. 
  • Access considerations including compliance or technical barriers.

AI needs to learn the human way of thinking and how an organization operates in order to provide the right solutions. Understanding the full state of your current data and content will enable you to focus on the content and data with the highest business impact and help you develop a strategy to get your data into an organized and accessible format. Without high-quality, well-organized, and tagged data, AI applications will not deliver high-value results for your organization.

3. Skill Sets and Technical Capabilities

Does your organization have the technical infrastructure and resources in place to support AI?

With the increased focus on AI, demand has grown both for individuals with the technical skills to engineer advanced machine learning and intelligent solutions, and for business knowledge experts who can transform data into a paradigm that aligns with how users and customers communicate knowledge. Further, over the years, cloud computing capabilities, web standards, open-source training models, and linked open data for a number of industries have emerged to help organizations craft customized Enterprise AI solutions for their business. This means an organization looking to start leveraging AI no longer has to start from scratch. This assessment factor evaluates the organization’s existing capabilities to design, manage, operate, and maintain an Enterprise AI solution. Some of the factors we consider include:

  • The state of existing enterprise ontology solutions and enterprise knowledge graph capabilities that optimize information aggregation and governance. 
  • The existence of auto-classification and automation tools within the organization.
  • Whether roles and skill sets for advanced data modeling or knowledge engineering are present within the organization.
  • The availability and capacity to commit business and technical SMEs for AI efforts.

Understanding the current gaps and weaknesses in existing capabilities and defining your targets are crucial to developing a practical AI roadmap. This factor also plays a foundational role in giving your organization the key considerations to ensure AI efforts kick off on the right track, such as leveraging web standards that enable interoperability, and starting with available existing and open-source semantic models and ecosystems to avoid short-term delays while establishing long-term governance and strategy.

4. Change Threshold 

Is your organization prepared to support the operational and strategic changes that will result from AI initiatives?

The success of Enterprise AI relies heavily on the adoption of new technologies and new ways of doing business. Organizations that fail to succeed with AI often struggle to understand the full scope of the change AI will bring to their business and organizational norms. This usually manifests itself as fear, either of changes to job roles or of creating wrong or unethical AI results that expose the organization to higher risk. Most organizations also struggle with the understanding that AI requires a few iterations to get it “right”. As such, this assessment factor focuses on the organization’s appetite, willingness, and threshold for understanding and tackling the cultural, technical, and business challenges required to achieve the full benefits of AI. This factor evaluates topics including:

  • Business and IT interest and desire for AI.
  • Existence of resource planning for the individuals whose roles will be impacted. 
  • Education and clear communication to facilitate adoption. 

The success of any technical solution is highly dependent on the human and cultural factors in an organization, and each organization has its own threshold for dealing with change. Understanding and planning for this factor will enable your organization to integrate change management that addresses negative implications, avoids unnecessary resistance or weak AI results, and provides proper navigation through issues that arise.

How it Works

This Enterprise AI readiness assessment and benchmarking leverages the four factors above, which comprise more than 30 points on which each organization can be evaluated and scored. We apply this proprietary maturity model to help assess your Enterprise AI readiness and clearly define success criteria for your target AI initiatives. Our steps include:

  • Knowledge Gathering and Current State Assessment: We leverage a hybrid model that includes interviews and focus groups, supported by content/data and technology analysis, to understand where you are and where you need to be. This gives us a complete understanding of your current strengths and weaknesses across the four factors, allowing us to provide the right recommendations and guidance to drive success, business value, and long-term adoption.
  • Strategy Development and Roadmapping: Building on the established focus on the assessment factors, we work with you to develop a strategy and roadmap that outlines the necessary work streams and activities needed to achieve your AI goals. It combines our understanding of your organization with proven best practices and methodologies into an iterative work plan that ensures you can achieve the target state while quickly and consistently showing interim business value.
  • Business Case Development and Alignment Support: We further compile our assessment of potential project ROI based on increased revenue, cost avoidance, and risk and compliance management. We then balance these against perceived business needs and wants by determining the areas that would have the biggest business impact at the lowest cost, and focus our discussions and explorations on the areas with the greatest need and highest interest.
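To make the scoring idea concrete, here is a toy sketch of a weighted, factor-based maturity score. The weights, criteria counts, and scores below are invented for illustration and are not EK's proprietary model:

```python
# Hypothetical factor weights and per-criterion scores (1-5 scale).
factors = {
    "organizational_readiness": {"weight": 0.3, "scores": [4, 3, 5, 2]},
    "data_and_content":         {"weight": 0.3, "scores": [2, 3, 2]},
    "skills_and_tech":          {"weight": 0.2, "scores": [3, 4]},
    "change_threshold":         {"weight": 0.2, "scores": [3, 3, 4]},
}

def readiness(factors):
    """Weighted mean of per-factor averages: one overall readiness score."""
    total = 0.0
    for f in factors.values():
        factor_score = sum(f["scores"]) / len(f["scores"])
        total += f["weight"] * factor_score
    return round(total, 2)

print(readiness(factors))  # 3.12
```

A real assessment would also report the per-factor averages, since those are what reveal where the foundational gaps sit.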

Keys to Our Assessment  

Over the past several years, we have worked with diverse organizations to enable them to strategize, design, pilot, and implement scaled Enterprise AI solutions. What makes our priority assessment unique is that it is developed based on years of real-world experience supporting organizations in their knowledge and data management. As such, our assessment offers the following key differentiators and values for the enterprise: 

  • Recognition of Unique Organizational Factors: This assessment recognizes that no Enterprise AI initiative is exactly the same. It is designed in such a way that it recognizes the unique aspects of every organization, including priorities and challenges to then help develop a tailored strategy to address those unique needs.
  • Emphasis on Business Outcomes: Successful AI efforts result in tangible business applications and outcomes. Every assessment factor is tied to specific business outcomes with corresponding steps on how the organization can use it to better achieve practical business impact.
  • A Tangible Communication and Education Tool: Because this assessment provides measurable scores and over 30 tangible criteria for assessment and success, it serves as an effective tool for communicating up to leadership and quickly garnering buy-in, helping organizations understand both the cost and the tangible value of AI efforts.

Results

As a result of this effort, you will have a complete view of your AI readiness, gaps, and required ecosystem, along with an understanding of the potential business value that could be realized once the target state is achieved. Taken as a whole, the assessment allows an organization to:

  • Understand strengths and weaknesses, and overall readiness to move forward with Enterprise AI compared to other organizations and the industry as a whole;
  • Judge where foundational gaps may exist in the organization in order to improve Enterprise AI readiness and likelihood of success; and
  • Identify and prioritize next steps in order to make immediate progress based on the organization’s current state and defined goals for AI and Machine Learning.

 


Taking the first step toward gaining this invaluable insight is easy:

1. Take 10-15 minutes to complete your Enterprise AI Maturity Assessment by answering a set of questions pertaining to the four factors; and
2. Submit your completed assessment survey and provide your email address to download a formal PDF report with your customized results.

The post Enterprise AI Readiness Assessment appeared first on Enterprise Knowledge.

What I’m Looking Forward to Learning at SEMANTiCS Austin 2020
https://enterprise-knowledge.com/what-im-looking-forward-to-learning-at-semantics-austin-2020/
Mon, 03 Feb 2020

SEMANTiCS Austin 2020 is the inaugural SEMANTiCS U.S. conference that will bring together knowledge graphs, ontologies, and Enterprise AI. These topics, among others, are of particular interest to my work in search and semantics, and I am excited to see how other organizations are leveraging semantic technologies. Below are the top four things I am looking forward to learning about at SEMANTiCS Austin 2020. 


Improving Business Processes with Linked Data

Organizations are beginning to work with linked data to improve business processes. There are two types of linked data to consider: open linked data and enterprise linked data. Open linked data is publicly available data that businesses can use to connect and extend their own information with pre-defined entities used across the internet. For example, one of our clients pulled in a hierarchical list of US states, counties, and cities in order to map their organizational sectors to geographic locations. This allowed them to quickly identify sectors based on an input street address. By connecting to open linked data sources, you can jumpstart the design of your domain model, pre-populate controlled lists, and improve your business taxonomy. In contrast, enterprise linked data is an organization’s internal knowledge graph. Internal knowledge graphs can improve data analysis and be a stepping stone on the path towards enterprise artificial intelligence (AI). As linked data becomes more common, new use cases are constantly developing. SEMANTiCS Austin 2020 is a great opportunity to explore some of these new use cases and understand how other organizations are leveraging relationships in data.
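For instance, a hierarchy pulled from an open linked data source can back a simple location lookup like the address-to-sector mapping described above. The sketch below uses a hand-made stand-in for data that would normally come from a public endpoint; the place names are illustrative:

```python
# Toy open-linked-data hierarchy (state -> county -> cities), standing in
# for data pulled from a public source such as a SPARQL endpoint.
hierarchy = {
    "Texas": {
        "Travis County": ["Austin", "Pflugerville"],
        "Harris County": ["Houston"],
    },
    "Virginia": {
        "Arlington County": ["Arlington"],
    },
}

def locate(city):
    """Resolve a city name to its (state, county) pair, or None."""
    for state, counties in hierarchy.items():
        for county, cities in counties.items():
            if city in cities:
                return state, county
    return None

print(locate("Austin"))  # ('Texas', 'Travis County')
```

With the hierarchy sourced externally, the controlled list stays current without the organization maintaining it by hand.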

Visualizing Knowledge Graphs to Explore Data

Enterprise knowledge graphs are a growing technology trend that help businesses explore and interpret their data by visualizing relationships. Visualizing an ontology, the data domain model, helps organizations discover hidden data relationships and understand how enterprise-wide content is related, even though the content may be siloed across multiple systems and teams. For one of our clients, we created a custom web application that allows them to visualize their disparate data and traverse between different data sets and institutions. The web application enables them to easily navigate their content that was previously siloed and unstructured. The ability to see relationships and access all of the information about individual entities empowers organizations to make better, more informed business decisions. At SEMANTiCS Austin 2020, I want to explore how other organizations use the power of data visualization to support their knowledge graph and search efforts. 

Building Extendable Semantic Applications

As technology stacks continue to change, the architecture of semantic applications has to adapt to meet organizational needs. When you begin a semantic application project, you may start with an MVP search application or an expert finder. How do you build and support an application that both drives your MVP and is easily extendable in the future? As the scope of the project grows, new data sources should be easy to add and new applications should be able to plug-n-play. Some organizations are building semantic middleware to deliver data and content to end systems for consumption. Other organizations implement GraphQL across their systems in order to make data more accessible. I look forward to learning how organizations are building, defending, and supporting the semantic technology stack as part of their search and KM initiatives at SEMANTiCS Austin 2020. 
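The middleware idea can be sketched as a thin registry of pluggable data sources behind one query interface; all class, source, and field names here are illustrative, not a real product's API:

```python
# Minimal sketch of semantic middleware: each data source registers a
# fetcher, and downstream applications query one uniform interface.
class SemanticMiddleware:
    def __init__(self):
        self._sources = {}

    def register(self, name, fetcher):
        """Add a new data source without changing consuming applications."""
        self._sources[name] = fetcher

    def query(self, entity_id):
        """Aggregate what every registered source knows about an entity."""
        return {name: fetch(entity_id)
                for name, fetch in self._sources.items()}

mw = SemanticMiddleware()
mw.register("hr", lambda eid: {"name": "Alice", "dept": "Research"})
mw.register("projects", lambda eid: {"active": ["Graph MVP"]})
print(mw.query("emp-42"))
```

The plug-and-play property comes from the registry: adding a source is one `register` call, and every consumer immediately sees the new data in its query results.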

Leveraging Semantic Technologies for Search

As I mentioned in the last section, semantic applications are commonly used for search. Using an enterprise knowledge graph, you can identify and describe the people, places, content, or other domain-specific entities of your business. These descriptions help organizations build their own knowledge panels and identify potential action-oriented search use cases. Additionally, semantic technologies are used to extend taxonomies, enable auto-tagging of content, and power machine learning processes to better understand user search queries. With the increased approachability of natural language processing and machine learning tools, there are a number of ways to improve search with semantic technologies. SEMANTiCS Austin 2020 will explore the benefits of semantic search from design to implementation.
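As a minimal illustration of taxonomy-driven auto-tagging, the sketch below uses plain string matching against taxonomy labels and synonyms; it is a toy stand-in for the NLP-based approaches described above, and the taxonomy itself is invented:

```python
# Toy taxonomy: preferred concept label -> matchable labels and synonyms.
taxonomy = {
    "Knowledge Graph": ["knowledge graph", "enterprise graph"],
    "Search": ["search", "findability"],
}

def auto_tag(text):
    """Return every taxonomy concept whose labels appear in the text."""
    text = text.lower()
    return [concept for concept, labels in taxonomy.items()
            if any(label in text for label in labels)]

print(auto_tag("Improving findability with an enterprise graph"))
# ['Knowledge Graph', 'Search']
```

Even this crude matcher shows why synonym lists matter: the text never contains the preferred label "Knowledge Graph", yet the content is still tagged with it.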

Want to explore the potential of semantic technologies in your organization? Join us for the talks, tutorials, and workshops at SEMANTiCS US in April 2020. As a bonus, if you say “I’m a Rockstar” to an EK employee at the conference, there may be some prizes available!

 

The post What I’m Looking Forward to Learning at SEMANTiCS Austin 2020 appeared first on Enterprise Knowledge.

The Next Level of Integrated Enterprise Knowledge: PoolParty 6.0
https://enterprise-knowledge.com/the-next-level-of-integrated-enterprise-knowledge-poolparty-6-0/
Fri, 16 Jun 2017
A couple of weeks ago, Semantic Web Company (SWC) released a new version, 6.0, of their already extensive PoolParty Semantic Suite with some exciting new additions. Why does this matter? The PoolParty Semantic Suite is further solidifying its spot at the forefront of semantic technologies, taxonomy and ontology management tools, auto-classification, KM machine learning, and semantic data integration platforms. In short, it is a tool that helps organizations integrate, connect, and analyze their structured and unstructured information, as well as leverage machine learning capabilities to auto-classify content and gain efficiency in the content capture and discovery phases of knowledge management.

Below I briefly highlight some of the new features and how they impact the knowledge management business.

1. Implicit Content Relationships (a.k.a. Shadow Concept Extraction)

One of the main goals PoolParty strives for is to help organizations relate their content objects, thereby building organizational knowledge. In past versions, PoolParty achieved this by relating content items where concepts were mentioned explicitly within them. In other words, through its algorithms and Natural Language Processing capabilities, PoolParty extracted the main concepts that it found in the content.

With 6.0, however, PoolParty takes this capability to the next level. In addition to being able to learn the organization’s knowledge domain and suggest the most relevant tags for content items in an auto-classification style, PoolParty now automatically identifies topics and concepts that are closely related to a piece of content, even if the topic or concept is not explicitly mentioned in the content. In other words, the system can now recommend semantically related content based on indirect, implicit relationships that it identifies based on its own content analysis.

Below is a screenshot of the view a Knowledge Manager would see during the fine-tuning process to identify concepts and shadow concepts. This is not the front end user interface, but it does provide a quick and easy way to test content excerpts against the analysis engine:

Shadow Concepts

In the screenshot above, the items in grey denote the identified shadow concepts.

In more technical terms, the shadow concept functionality allows you to see how a document relates to a concept in your knowledge graph even if the document does not mention that concept explicitly. Additionally, PoolParty allows you to see why that relationship was suggested by providing the found terms and their relationship to the shadow concept.
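SWC has not published the underlying algorithm, but the general idea of surfacing a concept from its related terms can be sketched as follows. The thesaurus, threshold, and matching logic here are purely illustrative, not PoolParty's actual implementation.

```python
# Hypothetical sketch of "shadow concept" identification: score a concept
# against a document by counting mentions of the concept's *related* terms,
# so a concept can surface even when its own label never appears.

RELATED = {
    "machine-learning": {"classifier", "training data", "neural network"},
    "taxonomy": {"broader term", "narrower term", "controlled vocabulary"},
}

def shadow_concepts(document, min_hits=2):
    doc = document.lower()
    results = {}
    for concept, terms in RELATED.items():
        hits = {t for t in terms if t in doc}
        # Keep the matched terms as evidence, so we can explain *why*
        # the concept was suggested.
        if concept not in doc and len(hits) >= min_hits:
            results[concept] = sorted(hits)
    return results

doc = "We built a classifier and curated training data for the project."
print(shadow_concepts(doc))  # {'machine-learning': ['classifier', 'training data']}
```

The evidence terms returned alongside each concept mirror what the article describes: the system not only suggests the relationship but shows the found terms that justify it.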

Why It Matters

The clearest benefit of this technology is its ability to help your users discover relationships between knowledge objects they didn’t know existed. In the background and based on your organization’s content, PoolParty now builds its own network of related terms in addition to your organization’s explicit ontology or thesaurus. This allows PoolParty to recommend related items based on its comprehensive organizational knowledge even if the user did not specifically know of the relationship (or that the recommended concept even existed). Think of the Amazon recommender system, but for your organization and based on your own content.

2. Centralized Interface for All Your PoolParty Data Integrations (a.k.a. Semantic Middleware Configurator)

PoolParty was already well-connected. Its latest version provides additional integration options with Linked Data sources, search and graph search engines, graph databases (i.e. triple stores), and visualization engines. To aid in managing all these connections, PoolParty introduces the Semantic Middleware Configurator, which provides a single view and easy management of every integration channel to systems connected with PoolParty. Knowledge managers and engineers will find this new functionality extremely helpful. The ability to see all connections, their status, and their settings at a glance is a significant improvement in clarity and efficiency.


PoolParty’s new Semantic Middleware Configurator interface showing all connections and integrations in one location.

Why It Matters

With the growing number of data sources and tools that allow organizations to interconnect, visualize, and analyze their knowledge, the ability to manage all these connections in one place becomes critical. With this in mind, the new Semantic Middleware Configurator feature is a solid contender to become the enterprise semantic data integration hub.

3. Visualization

With larger thesauri, and especially ontologies, it quickly becomes difficult to manage relationships and knowledge models, and even more so to communicate them to stakeholders. The 6.0 release of PoolParty tackles this challenge in two ways: improved native visualization, and integration with visualization engines like webVOWL.

The enhancements to PoolParty's native visualization capability help knowledge managers view relationships between types of concepts (i.e. classes), as well as the custom properties, or attributes, associated with a knowledge object.


PoolParty native visualization of concepts and relationships.

With the introduction of integration capabilities to visualization tools like webVOWL, knowledge managers can now produce a visual model of the full ontology to help them in their ontology design efforts as well as communicate the model with other stakeholders.
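Visualization tools like webVOWL work from a serialized ontology, so the handoff is essentially "serialize the model, then load it in the visualizer." In practice you would use an RDF library to do the serialization; the stdlib-only sketch below merely illustrates the shape of the Turtle output, with invented class and property names.

```python
# Sketch: serialize a tiny ontology model to Turtle so it can be loaded
# into a visualization tool. Class and property names are illustrative only;
# a real workflow would use an RDF library rather than string building.

PREFIXES = (
    "@prefix ex: <http://example.org/> .\n"
    "@prefix owl: <http://www.w3.org/2002/07/owl#> .\n"
    "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n"
)

def to_turtle(classes, object_properties):
    """Emit Turtle for a list of classes and (property, domain, range) triples."""
    lines = [PREFIXES]
    for cls in classes:
        lines.append(f"ex:{cls} a owl:Class .")
    for prop, domain, rng in object_properties:
        lines.append(
            f"ex:{prop} a owl:ObjectProperty ; "
            f"rdfs:domain ex:{domain} ; rdfs:range ex:{rng} ."
        )
    return "\n".join(lines)

ttl = to_turtle(["Person", "Document"], [("authored", "Person", "Document")])
print(ttl)
```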


PoolParty ontology visualized through webVOWL integration.

Why It Matters

One of the issues inhibiting the proliferation of business ontologies is their perceived complexity. With scale, the challenge of designing, collaborating on, and communicating the knowledge model, i.e. the ontology, grows. However, the ability to visually represent concepts, their attributes, and how they are connected significantly simplifies these tasks. As a result, we are looking at shorter design and onboarding times, and thus reduced costs.

4. Additional Linked Data Sources

We all try to avoid reinventing the wheel where possible. Or in KM terms, why recreate knowledge if it already exists? With that in mind, even in past versions, PoolParty provided automatic integration with public sources of information, such as definitions and alternative names for terms and topics. PoolParty 6.0 includes additional sources of shared data that organizations can utilize to quickly and easily enrich their internal content, as well as bring in externally managed and updated content, without additional content management load.

With PoolParty 6.0, these sources of Linked Open Data now include additions like:

  • PermID – A Thomson Reuters Permanent Identifiers source that provides company information, mainly in the finance industry;
  • Getty Vocabularies – A linked open data source of “structured terminology for art and other material culture, archival materials, visual surrogates, and bibliographic materials”;
  • DBpedia in Dutch and Russian – PoolParty continues to add support for new language versions of one of the most popular and active linked data sources.
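The enrichment pattern behind these integrations is straightforward: query the external endpoint for labels, then merge them into the local concept. The sketch below simulates the endpoint response rather than making a network call, and the SPARQL query, resource URI, and data are illustrative only.

```python
# Sketch of enriching a local thesaurus concept with labels from a Linked
# Open Data source. A real implementation would POST the SPARQL query to a
# public endpoint (e.g. DBpedia's); here the response is simulated.

SPARQL = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?altLabel WHERE {
  <http://dbpedia.org/resource/Taxonomy_(general)> rdfs:label ?altLabel .
}
"""

def fetch_alt_labels(query):
    # Simulated endpoint response; a real call would use urllib or requests.
    return ["Taxonomie", "Taxonomy"]

def enrich(concept, query):
    """Add external labels as synonyms, skipping duplicates of the preferred label."""
    external = fetch_alt_labels(query)
    merged = set(concept.get("altLabels", []))
    merged |= {label for label in external if label != concept["prefLabel"]}
    concept["altLabels"] = sorted(merged)
    return concept

concept = {"prefLabel": "Taxonomy", "altLabels": ["Classification scheme"]}
print(enrich(concept, SPARQL))
```

The payoff described in the article (richer synonyms without extra content management load) comes from re-running this kind of merge whenever the external source updates.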

Why It Matters

Linked Open Data sources are gaining popularity and can be utilized to enhance your content with minimal additional effort. Leveraging public shared data allows you to expand your thesauri by adding new terms, definitions, and synonyms, thus increasing the quality and richness of your knowledge. This in turn helps make your enterprise search results and relationships more accurate and relevant.

Closing Thoughts

As a supervised learning system, PoolParty 6.0 takes the next step toward integrating siloed structured and unstructured content, identifying relationships, and augmenting organizational knowledge. The result is more relevant and findable content for your users.

 

The post The Next Level of Integrated Enterprise Knowledge: PoolParty 6.0 appeared first on Enterprise Knowledge.

]]>