Sara Nash, Author at Enterprise Knowledge
https://enterprise-knowledge.com

Building a Semantic Layer of your Data Platform
https://enterprise-knowledge.com/building-a-semantic-layer-of-your-data-platform/
Tue, 25 Jun 2024

Enterprise Knowledge’s Joe Hilger, COO, and Sara Nash, Principal Consultant, presented “Building a Semantic Layer of your Data Platform” at Data Summit Workshop on May 7th, 2024 in Boston, Massachusetts.

This presentation delved into the importance of the semantic layer and detailed four real-world applications. Hilger and Nash explored how a robust semantic layer architecture optimizes user journeys across diverse organizational needs, including data consistency and usability, search and discovery, reporting and insights, and data modernization. The practical use cases span a variety of industries, including biotechnology, financial services, and global retail.

Check out this presentation to gain answers to the following questions:

  • What is a semantic layer? 
  • What problems does a semantic layer solve?
  • What does a semantic layer enable? 
  • What are the key components of a semantic layer?
  • What are top enterprise use cases for a semantic layer?

What Isn’t a Semantic Layer?
https://enterprise-knowledge.com/what-isnt-a-semantic-layer/
Fri, 23 Feb 2024

Semantic layers allow organizations to embed context with their organizational data in a way that is systematically interoperable and intuitive to business users. With developments in data and AI, there is a compelling need (and opportunity) for semantic layer development, but that comes along with noise around what semantic layers are and what they offer. My colleague Lulit defined the semantic layer framework, its components, and a sample architecture in her blog “What is a Semantic Layer?”

To expand on the definition of a semantic layer, and moreover to eliminate the noise and confusion around it, I present to you what it is not. Debunking these misconceptions steers organizations away from common detours and roadblocks when building semantic layers, such as:

  • Vendor lock – expecting a silver bullet solution to provide the connectivity and interoperability that a componentized semantic layer offers; 
  • Lack of adoption – building out semantic solutions in a technology-driven silo with little interaction or ownership by the business; and 
  • Limited enterprise reach – restricting the domain to single departments or use cases.

While you should be wary of the misconceptions about Semantic Layers, their value as tangible solutions that are driving enterprise AI, reporting, search, and personalization initiatives couldn’t be more real. Starting small with a Semantic LLM Accelerator or a 2-day Semantic Architecture Workshop can help to show value quickly while building a foundation for future scale. If you’ve got foundational components and are looking to see how other organizations have achieved the Semantic Layer, check out our case studies or contact us with questions. 

Scaling Knowledge Graph Architectures with AI
https://enterprise-knowledge.com/scaling-knowledge-graph-architectures-with-ai/
Thu, 30 Nov 2023

Sara Nash and Urmi Majumder, Principal Consultants at Enterprise Knowledge, presented “Scaling Knowledge Graph Architectures with AI” on November 9th, 2023 at KM World in Washington D.C. In this presentation, Nash and Majumder defined a Knowledge Graph architecture and reviewed how AI can support the creation and growth of Knowledge Graphs. Drawing from their experience in designing enterprise Knowledge Graphs based on knowledge embedded in unstructured content, Nash and Majumder defined approaches for entity and relationship extraction depending on Enterprise AI maturity and highlighted other key considerations to incorporate AI capabilities into the development of a Knowledge Graph. Check out the presentation below to learn how to: 

  • Assess entity and relationship extraction readiness according to EK’s Extraction Maturity Spectrum and Relationship Extraction Maturity Spectrum.
  • Utilize knowledge extraction from content to translate important insights into organizational data.
  • Extract knowledge with three approaches:
    • RegEx Rule
    • Auto-Classification Rule
    • Custom ML Model
  • Examine key factors such as how to leverage SMEs, iterate AI processes, define use cases, and invest in establishing robust AI models.

What is A Data Fabric Architecture and What Are The Design Considerations?
https://enterprise-knowledge.com/what-is-a-data-fabric-architecture-and-what-are-the-design-considerations/
Thu, 31 Aug 2023

[Diagram: Components of the data fabric architecture]

Introduction

In today’s data-driven world, effective data management is crucial for businesses to remain competitive. A modern approach to data management is the use of data fabric architecture. A data fabric is a data management solution that connects and manages data in a federated way, employing a logical data architecture that captures connections relevant to the business. Data fabrics help businesses make sense of their data by organizing it in a domain-centric way without physically moving data from source systems. What makes this possible is a shift in focus to metadata, as opposed to the data itself. At a high level, a semantic data fabric leverages a knowledge graph as an abstraction architecture layer to provide connectivity between diverse metadata. The knowledge graph enriches metadata by aggregating, connecting, and storing relationships between unstructured and structured data in a standardized, domain-centric format. Using a graph-based data structure helps businesses embed their data with context, drive information discovery and inference, and lay a foundation for scale.

Unlike a monolithic solution, a data fabric facilitates the alignment of different toolsets to enable domain-centric, integrated data as a service to multiple downstream applications. A data fabric architecture consists of five main components:

  1. A data/metadata model
  2. Entity extraction
  3. Relationship extraction
  4. Data pipeline orchestration
  5. Persistent graph data storage

While there are a number of approaches to designing all of these components, there are best practices to ensure the quality and scalability of a data fabric. This blog post will enumerate the approaches for each architectural component, discuss how to achieve a data fabric implementation from a technical approach and tooling perspective that suits a wide variety of business needs, and ultimately detail how data fabrics support the development of artificial intelligence (AI).

Data/Metadata Model

Data models – specifically, ontologies and taxonomies – play a vital role in building a data fabric architecture. An ontology is a central aspect of a data fabric that defines concepts, attributes, and relationships in a domain that is encoded in a machine and human-readable graph format. Similarly, a taxonomy is essential for metadata management in a data fabric, storing extracted entities and defining controlled vocabularies for core business domains like products, business lines, services, and skills. By creating relationships between domains and data, businesses can help users find insights and discover content more easily. Therefore, to effectively manage taxonomies and ontologies, business owners need a Taxonomy/Ontology Management System (TOMS) that provides a user-friendly platform and interface. A good TOMS should:

  • Help users build data models that follow common standards like RDF (Resource Description Framework), OWL (Web Ontology Language), and SKOS (Simple Knowledge Organization System); 
  • Let users configure the main components of a data model such as classes, relationships, attributes, and labels that define the concepts, connections, and properties in the domain; 
  • Add metadata about the data model itself through annotations, such as its name, description, version, creator, etc.;
  • Support automated governance, supporting quality checks for errors;
  • Allow for easy portability of the data model in different ways, serving multiple enterprise use cases; and
  • Allow users to link to external data models that already exist and can be reused.

Organizations that do not place their data modeling and management at the forefront of their data fabric introduce the risk of scalability issues, limited user-friendly schema views, and hampered utilization of linked open data. Furthermore, the absence of formal metadata management poses a risk of inadequate alignment with business needs and hinders flexible information discovery within the data fabric. There are different ways of creating and using data models with a TOMS to avoid these risks. One way is to use code or scripts to generate and validate the data model based on the rules and requirements of the domain. Using subject matter expertise input helps to further validate the data model and confirm that it aligns with business needs. 
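As an illustration of what such a script produces, the sketch below (plain Python, no TOMS or RDF library; all names and URIs are hypothetical) assembles a few SKOS triples for a small product taxonomy and serializes them in simplified Turtle:

```python
# Minimal sketch: a SKOS taxonomy as subject-predicate-object triples,
# serialized in (very) simplified Turtle. All names are hypothetical.

PREFIXES = {
    "skos": "http://www.w3.org/2004/02/skos/core#",
    "ex": "http://example.com/taxonomy/",
}

# Each triple uses prefixed names (CURIEs); literals are quoted strings.
triples = [
    ("ex:Products", "skos:prefLabel", '"Products"@en'),
    ("ex:Laptops", "skos:prefLabel", '"Laptops"@en'),
    ("ex:Laptops", "skos:broader", "ex:Products"),   # hierarchy link
    ("ex:Products", "skos:narrower", "ex:Laptops"),
]

def to_turtle(triples, prefixes):
    """Serialize triples as simplified Turtle text."""
    lines = [f"@prefix {p}: <{uri}> ." for p, uri in prefixes.items()]
    lines += [f"{s} {p} {o} ." for s, p, o in triples]
    return "\n".join(lines)

print(to_turtle(triples, PREFIXES))
```

In practice a TOMS, or a library such as rdflib, would handle serialization, validation, and standards compliance; the point here is only the triple-based shape of the model.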

Entity Extraction 

One of the functions of building your data fabric is to perform entity extraction. This is the process of identifying and categorizing named entities in both structured and unstructured data, such as person names, locations, organizations, dates, etc. Entity extraction enriches the data with additional information and enables semantic analysis. Identifying Named Entity Recognition (NER) tools and performing text preprocessing (e.g., tokenization, stop words elimination, coreference resolution) is recommended before determining an entity extraction approach, of which there are several: rule-based, machine learning-based, or a hybrid of both. 

  • Rule-based approaches rely on predefined rules that use syntactic and lexical cues to extract entities. They require domain expertise to develop and maintain, and may not adapt well to new or evolving data. 
  • Machine learning-based approaches use deep learning models that can learn complex patterns in the data and extrapolate to unseen cases. However, they may require large amounts of labeled data and computational resources to train and deploy. 
  • Hybrid approaches (Best Practice) combine rule-based and machine learning-based methods to leverage the strengths of both. Hybrid approaches are recommended for businesses that foresee expanding their data fabric solutions. 
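To make the distinction concrete, here is a minimal sketch of the rule-based half of a hybrid extractor — a regex rule for dates plus a curated gazetteer for organizations — in plain Python. The gazetteer entries and sample text are illustrative; a production hybrid system would add an ML-based NER pass on top.

```python
import re

# Rule-based pass: a regex for ISO dates plus a gazetteer lookup for
# known organizations. Entries and sample text are hypothetical.

DATE_RULE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")          # rule-based
ORG_GAZETTEER = {"Enterprise Knowledge", "Acme Corp"}      # curated list

def extract_entities(text):
    entities = [(m.group(), "DATE") for m in DATE_RULE.finditer(text)]
    for org in ORG_GAZETTEER:
        if org in text:
            entities.append((org, "ORGANIZATION"))
    return entities

sample = "Enterprise Knowledge published the report on 2023-08-31."
print(extract_entities(sample))
# → [('2023-08-31', 'DATE'), ('Enterprise Knowledge', 'ORGANIZATION')]
```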

Relationship Extraction 

Relationship extraction is the process of identifying and categorizing semantic relationships that occur between two entities in text, such as who works for whom, what business line sells which product, what is located in what place, and so on. Relationship extraction helps construct a knowledge graph that represents the connections and interactions among entities, which enables semantic analysis and reasoning. However, relationship extraction can be challenging due to the diversity and complexity of natural language. There are again multiple approaches, including rule-based, machine learning-based, or hybrid.

  • Rule-based approaches rely on predefined rules that use word-sequence patterns and dependency paths in sentences to extract relationships. They require domain expertise to develop and maintain, and they may not capture all the possible variations and nuances of natural language. 
  • One machine learning approach is to use an n-ary classifier that assigns a probability score to each possible relationship between two entities and selects the highest one. This supports capturing the variations and nuances of natural language and handling complex and ambiguous cases. However, machine learning approaches may require large amounts of labeled data and computational resources to train and deploy. 
  • Hybrid approaches (Best Practice) employ a combination of ontology-driven relationship extraction and machine learning approaches. Ontology-driven relationship extraction uses a predefined set of relationships that are relevant to the domain and the task. This helps avoid generating a sparse relationship matrix that results in a non-traversable knowledge graph.
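A minimal sketch of the ontology-driven idea, in plain Python with hypothetical patterns and entity types: relationships are extracted only when the ontology defines them, which keeps the resulting graph dense and traversable.

```python
import re

# Ontology-driven relationship extraction sketch: only relationships the
# ontology defines are extracted. Patterns and types are hypothetical.

# The ontology constrains which (subject type, relation, object type)
# combinations are valid.
ONTOLOGY = {("Person", "worksFor", "Organization")}

PATTERNS = [
    # (regex with two capture groups, relation, subject type, object type)
    (re.compile(r"(\w+ \w+) works for (\w+ \w+)"),
     "worksFor", "Person", "Organization"),
]

def extract_relationships(text):
    found = []
    for pattern, relation, s_type, o_type in PATTERNS:
        if (s_type, relation, o_type) not in ONTOLOGY:
            continue  # skip relations the ontology does not define
        for m in pattern.finditer(text):
            found.append((m.group(1), relation, m.group(2)))
    return found

print(extract_relationships("Sara Nash works for Enterprise Knowledge."))
# → [('Sara Nash', 'worksFor', 'Enterprise Knowledge')]
```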

Data Pipeline Orchestration

Data pipeline orchestration is the driving force of data fabric creation that brings all the components together. This is the process of integrating data sources with two or more applications or services to populate the knowledge graph initially and update it regularly thereafter. It involves coordinating and scheduling various tasks, such as data extraction, transformation, loading, validation, and analysis, and helps ensure data quality, consistency, and availability across the knowledge graph. Data pipeline orchestration can be performed using different approaches, such as a manual implementation, an open source orchestration engine, or using a vendor-specific orchestration engine / cloud service provider. 

  • A manual approach involves executing each step of the workflow manually, which is time-consuming, error-prone, and costly. 
  • An open source orchestration engine approach involves managing ETL pipelines as directed acyclic graphs (DAGs) that define the dependencies and order of execution of each task. This helps automate and monitor the workflow and handle failures and retries. Open source orchestration engines may require installation and configuration, and businesses need to take into account the required features and integrations before opting to use one.
  • Third-party vendors or cloud service providers can leverage the existing infrastructure and services and provide scalability and reliability. However, vendor specific orchestration engines / cloud service providers may have limitations in terms of customization and portability.
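The core pattern that an orchestration engine automates can be sketched with Python’s standard-library graphlib: each task declares its upstream dependencies, and a topological sort yields a valid execution order. Task names here are hypothetical.

```python
from graphlib import TopologicalSorter

# DAG-based pipeline orchestration sketch: tasks and their upstream
# dependencies, ordered by topological sort. Task names are hypothetical.

dag = {
    "extract": set(),              # no upstream dependencies
    "transform": {"extract"},      # runs after extract
    "load_graph": {"transform"},   # populate the knowledge graph
    "validate": {"load_graph"},    # quality checks after loading
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['extract', 'transform', 'load_graph', 'validate']
```

An engine like Apache Airflow layers scheduling, monitoring, and retry handling on top of this same dependency-ordering idea.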

Persistent Graph Data Storage 

One of the central ideas behind a data fabric is the ability to store metadata and core relationships centrally while connecting to source data in a federated way. This manage-in-place approach enables data discovery and integration without moving or copying data. Persistent graph data storage is the glue that brings all the components together, storing extracted entities and relationships according to the ontology, and persisting the connected data for use in any downstream applications. A graph database helps preserve the semantic relationships among the data and enables efficient querying and analysis. However, not all graph databases are created equal. When selecting a graph database, there are four key characteristics to consider. Graph databases should be: standards-based, ACID compliant, widely used, and editable and explorable via a UI.

  • Standards-based involves making sure the graph database follows a common standard, such as RDF (Resource Description Framework), to ensure interoperability so that it is easier to transition from one tool to another. 
  • ACID compliant means the graph database ensures Atomicity, Consistency, Isolation, and Durability of the data transactions, which protects the data from infrastructure failures. 
  • Strong user and community support ensures that developers will have access to good documentation and feedback.
  • Explorable via a UI supports verification by experts to ensure data quality and alignment with domain and use case needs. 

Common approaches for graph databases include RDF-based graph databases, labeled property graphs, and custom implementations.

  • RDF-based graph databases use RDF which is the standard model for representing and querying data. 
  • Labeled property graph databases use nodes, edges, and properties as the basic elements for storing and querying data. 
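To illustrate the pattern-matching style of querying that an RDF graph database exposes through SPARQL, here is a small in-memory stand-in in plain Python (all data is hypothetical):

```python
# In-memory stand-in for an RDF triple store. A real graph database
# would expose SPARQL; the pattern-matching idea is the same.

triples = {
    ("ex:Customer1", "ex:purchased", "ex:ProductA"),
    ("ex:ProductA", "ex:hasStyle", "ex:Style1"),
    ("ex:ProductB", "ex:hasStyle", "ex:Style1"),
}

def match(pattern):
    """Return triples matching a pattern; None acts as a wildcard,
    like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which products share Style1?" -- analogous to
# SELECT ?product WHERE { ?product ex:hasStyle ex:Style1 }
products = sorted(t[0] for t in match((None, "ex:hasStyle", "ex:Style1")))
print(products)  # → ['ex:ProductA', 'ex:ProductB']
```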

Data Fabric Architecture for AI

A mature data fabric architecture built upon the preceding standards plays a pivotal role in supporting the development of artificial intelligence (AI) for businesses by providing a solid foundation for harnessing the power of data. The data fabric’s support for data exploration, data preparation, and seamless integration empowers businesses to harness the transformative and generative power of AI.

By leveraging an existing data fabric architecture, businesses can seamlessly integrate structured and unstructured data, capturing the relationships between them within a standardized, domain-centric format. With the help of the knowledge graph at the core of the data fabric, businesses can empower AI algorithms to discover patterns, make informed decisions, and generate valuable insights by traversing and navigating the graph. This capability allows AI models to uncover valuable insights that are not immediately apparent in isolated data silos. 

Furthermore, data fabrics facilitate the process of data preparation and feature engineering, which are crucial steps in AI development. The logical architecture of the data fabric allows for efficient data transformation, aggregation, and enrichment. By streamlining the data preparation pipeline, AI practitioners can focus more on modeling and algorithm development, accelerating the overall AI development process. AI models often need continuous refinement and adaptation, and data fabrics enable seamless integration of new data sources and updates to ensure that AI models have the most up-to-date information.

Conclusion

A data fabric is a modern approach to data management that is crucial for businesses to remain competitive in a data-driven world. However, a data fabric is not a monolithic solution and the supporting architecture and technical approach can vary based on the state of sources, supporting use cases, and existing tooling at an organization. It’s important to prove out the value of the solutions before investing in costly tool procurement. We recommend starting small and iterating, beginning with a targeted domain in mind and sample source systems to lay a foundation for an enterprise data fabric. Once a data fabric has been established, businesses can unlock the full potential of their data assets, enabling AI algorithms to make intelligent predictions, discover hidden insights, and drive valuable business outcomes. Looking for a kick-start to get your solutions off the ground? Contact us to get started.

Knowledge Graph Use Cases are Priceless
https://enterprise-knowledge.com/knowledge-graph-use-cases-are-priceless/
Wed, 30 Nov 2022

At Knowledge Graph Forum 2022, Lulit Tesfaye, Partner and Division Director, and Sara Nash, Senior Consultant, presented on the importance of establishing valuable and actionable use cases for knowledge graph efforts. The talk was on September 29, 2022 in New York City. 

Tesfaye and Nash drew on lessons learned from several knowledge graph development efforts to define how to diagnose a bad use case and outline its impact on initiatives – including strained relationships with stakeholders, time spent reworking priorities, and team turnover. They also shared guidance on how to navigate these scenarios and provided a checklist for assessing a strong use case.

What Team Do You Need for Successful Knowledge Graph Development?
https://enterprise-knowledge.com/what-team-do-you-need-for-successful-knowledge-graph-development/
Wed, 24 Aug 2022

Many organizations look to take advantage of knowledge graphs to aggregate and align data from siloed systems, as well as enable explainable artificial intelligence solutions, but can get stalled if they don’t have enough experience building and scaling knowledge graphs. Design and development teams for knowledge graphs often operate similarly to other enterprise data product teams, requiring collaboration between analysts, engineers, team leads, and stakeholders. However, since knowledge graphs are standards-based and place a premium on business intelligence, they require a team that has strong facilitation and analytical skills with a specific foundation in information management and semantic web standards. This ensures that knowledge graph solutions will be relevant to users and interoperable within technical environments. 

The technical expertise required for implementing a knowledge graph can be acquired in various ways – through hiring, upskilling, or in-house consulting. Organizations often focus on their core, domain-specific capabilities, and these may not include skill sets in knowledge engineering. EK can enhance an organization’s capabilities through collaborative delivery or in-house consulting, working closely with domain experts to build and maintain knowledge graph services. We also offer a Knowledge Graph University to build knowledge graph design and development competencies on your team. Depending on the nature of your organization, your team structure may vary. However, we find that the following roles and skill groups provide the core capabilities for many organizations to create a solution and program that is user-centered, standards-based, and well-integrated into your enterprise architecture.

[Graphic: the three areas of skill sets needed for knowledge graph development; see text below for details]

High-Level Teams for Knowledge Graph Development

 

What Are the Teams and Skill Sets You Need?

Product Success and Coordination

This team ensures that the knowledge graph aligns with business needs and technical requirements. These individuals provide leadership and decision-making to the delivery team while communicating outcomes to stakeholders.

It’s important to have a Business/Product Lead and Technical Lead that have experience with enterprise data solutions and can both guide internal development teams and be a point of contact for organizational leadership and other stakeholders. They are responsible for scoping and executing knowledge graph initiatives and should leverage product management best practices to be successful.

This team will make sure that your solution is providing tangible value by translating business challenges into actionable use cases and guiding the knowledge graph in a sustainable and relevant fashion.  

Knowledge Modeling and Data Preparation

This team designs, maintains, and grows the ontology and taxonomy models that are the foundation for the knowledge graph. These key team members ensure the alignment and readiness of integrating source data with the knowledge model. 

The knowledge modeling should be led by experienced ontologists, information architects, and taxonomists. These facilitators should interact with SMEs to design taxonomy and ontology models while applying semantic web standards (RDF, RDF*, SKOS, OWL) to ensure applicability and interoperability of the models and schemas. They collaborate closely with technical analysts who guide the data inventorying and define how data should be transformed and integrated according to the foundational schemas.

This team is central to successful knowledge graph development, making sure that use cases in data standardization, artificial intelligence, search, and more can be achieved through defined data concepts and relationships. This team is successful when knowledge models embed business concepts in a machine-readable manner and when source data relevant to the use cases can be ingested, transformed, and stored according to the model.

Data and Software Engineering

These roles are responsible for implementing the pipelines and algorithms required for populating the graph solution, as well as building the infrastructure and connectivity between the graph and downstream applications.

Semantic data engineers who have programming ability in data extraction/transformation and experience in data-centric applications are key to successful knowledge graph development. It’s important that they have a skill set in querying and data manipulation languages, like SQL and SPARQL, and general-purpose languages, like Python, alongside experience working with graph-based data formats and standards, like XML, RDF, JSON, and OWL.

Having an experienced data engineering team will accelerate your knowledge graph development, ensuring that technical solutions are high quality and well integrated into the enterprise architecture. One of the primary advantages of graph-based solutions is their flexibility and extensibility, and this team makes that tangible for the organization. 

Establishing Your Team

There is not a one-size-fits-all model for establishing a knowledge graph development team. The team structure and needs will differ depending on relevant use cases, where the knowledge graph fits in the enterprise architecture, and the complexity of the domain area. Fulfilling roles can happen in multiple ways. It is possible that a single person may fulfill multiple roles or that each role may be fulfilled by one or more people. Regardless of the team structure, it’s critical to have the right sets of skills represented to successfully achieve your knowledge graph use cases.

At EK, we support many organizations in building out their capabilities to design and implement knowledge graph solutions, closely partnering with teams to close skill and experience gaps. If you’d like to work with us through in-house consulting, advising, training, and coaching, reach out to info@enterprise-knowledge.com.

How to Quickly Prototype a Scalable Graph Architecture: A Framework for Rapid Knowledge Graph Implementation
https://enterprise-knowledge.com/how-to-quickly-prototype-a-scalable-graph-architecture-a-framework-for-rapid-knowledge-graph-implementation/
Fri, 20 May 2022

Sara Nash and Thomas Mitrevski, Consultants in Enterprise Knowledge’s Data and Information Management Division, presented on May 4, 2022 at the Knowledge Graph Conference in New York City. The talk, How to Quickly Prototype a Scalable Graph Architecture: A Framework for Rapid Knowledge Graph Implementation, covered the toolkit needed to scope and execute knowledge graph prototypes successfully in a matter of weeks. The framework includes the development of a foundational semantic model (e.g., taxonomies and ontologies), the resources and skill sets needed for knowledge graph products to scale, and the data architecture and tooling required (e.g., orchestration and storage) for enterprise-scale implementation. Nash and Mitrevski shared success stories from past experiences as well as the critical steps to transition a successful prototype into a production system.

 

Where Does a Knowledge Graph Fit Within the Enterprise?
https://enterprise-knowledge.com/where-does-a-knowledge-graph-fit-within-the-enterprise/
Thu, 21 Apr 2022

Our clients often assume that building a knowledge graph requires that all data be managed in a single place for it to be effective. That is not the case. There are a variety of ways that organizations can solve for their knowledge-first and relationship-based use cases while maintaining aspects of their existing data architecture. In this way, graph data is not a “one size fits all” solution. The spectrum of leveraging graph data models spans from using a graph database as the primary data storage to using an ontology model as the blueprint for a relational data schema.

Graph Database as Primary Storage: All data and the ontology are stored within the graph database, ingesting all relevant source data and enabling inference and reasoning capabilities.

[Diagram: a graph database contains several entities (Customer, Product A, Product B, Style) and their relationships to each other]

Graph Database as Relationship Management and Taxonomy Integration: Relationships between core concepts and content metadata (like taxonomy tags on documents) are stored within the graph, but actual content and descriptive metadata are stored within other systems and are connected to the graph via virtualization.

The graph database is connected to CRM, PIM, and Taxonomy management systems, each containing data on entities such as Customer, Product A, Product B, and Style.
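A hedged sketch of the virtualization pattern: here the graph stores only relationships and pointers into the owning systems, while the `crm` and `pim` dicts below are stand-ins for real CRM and PIM APIs (all names are invented for illustration):

```python
# Virtualization sketch: the graph holds relationships and pointers;
# attributes stay in the source systems and are fetched on demand.
# The source-system dicts stand in for real APIs.

crm = {"cust-1": {"name": "Acme Corp", "segment": "Retail"}}
pim = {"prod-A": {"name": "Product A", "price": 120}}

# Graph edges: (subject pointer, predicate, object pointer)
edges = [(("crm", "cust-1"), "purchased", ("pim", "prod-A"))]

sources = {"crm": crm, "pim": pim}

def resolve(pointer):
    """Dereference a graph pointer into the attributes held by the
    owning source system -- the 'virtualization' step."""
    system, record_id = pointer
    return sources[system][record_id]

def expand(edge):
    """Materialize one relationship with full attributes from each silo."""
    subject, predicate, obj = edge
    return resolve(subject), predicate, resolve(obj)

print(expand(edges[0]))
```

The design point is that the graph never copies the attributes: updating the customer record in the CRM is immediately reflected the next time the pointer is resolved.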

Graph Data Model (Ontology) as Relational Data Schema: The ontology model serves as an Entity Relationship Diagram (ERD) that sets the “vision” for how to connect and leverage data stored in a relational database.

A relational database contains the entities (Customer, Product, Style) and their associated data. Each entity is then linked to other entities through relationships.
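The same entities can be expressed the third way, with the ontology acting purely as the blueprint for a relational schema. This sqlite3 sketch uses hypothetical table and column names derived from the Customer/Product/Style diagram; relationships from the ontology become foreign keys and join tables:

```python
# Sketch of "ontology as relational schema": the graph model only
# shapes the tables. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE style    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE product  (id INTEGER PRIMARY KEY, name TEXT,
                           style_id INTEGER REFERENCES style(id));
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    -- The ontology's "purchased" relationship becomes a join table.
    CREATE TABLE purchased (customer_id INTEGER REFERENCES customer(id),
                            product_id  INTEGER REFERENCES product(id));
""")
conn.executemany("INSERT INTO style VALUES (?, ?)", [(1, "Classic")])
conn.executemany("INSERT INTO product VALUES (?, ?, ?)",
                 [(1, "Product A", 1), (2, "Product B", 1)])
conn.execute("INSERT INTO customer VALUES (1, 'Customer')")
conn.execute("INSERT INTO purchased VALUES (1, 1)")

# Traversing the "graph" is now a SQL join rather than a graph query.
rows = conn.execute("""
    SELECT c.name, p.name, s.name
    FROM purchased pu
    JOIN customer c ON c.id = pu.customer_id
    JOIN product  p ON p.id = pu.product_id
    JOIN style    s ON s.id = p.style_id
""").fetchall()
print(rows)  # [('Customer', 'Product A', 'Classic')]
```

The tradeoff is visible in the query: relationship traversal requires explicit joins, and inference and reasoning are not available, but the organization keeps its existing relational tooling.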

Organizations can see the value of capturing relationships in a machine-readable way, even when not all of the data relevant to the use case is captured in a graph database. The model that makes sense for your organization and your use case is dependent on factors including: 

  • Restrictiveness of source data systems; 
  • Volume and scale of data;
  • Enterprise architecture maturity; 
  • Inference and reasoning needs; and
  • Integration needs with downstream systems.

At EK, we design graph-based architectures in a way that leverages your organization’s specifications and conventions, while introducing best practices and standards from the industry. Looking to get started? Contact us.

The post Where Does a Knowledge Graph Fit Within the Enterprise? appeared first on Enterprise Knowledge.

]]>
Presentation: Introduction to Knowledge Graphs https://enterprise-knowledge.com/presentation-introduction-to-knowledge-graphs/ Tue, 07 Jul 2020 16:16:18 +0000 https://enterprise-knowledge.com/?p=11507 This workshop presentation from Joe Hilger, Founder and COO, and Sara Nash, Technical Analyst, was delivered on June 8, 2020 as part of the Data Summit 2020 virtual conference. The 3-hour workshop provided an interdisciplinary group of participants with a … Continue reading

The post Presentation: Introduction to Knowledge Graphs appeared first on Enterprise Knowledge.

]]>
This workshop presentation from Joe Hilger, Founder and COO, and Sara Nash, Technical Analyst, was delivered on June 8, 2020 as part of the Data Summit 2020 virtual conference. The 3-hour workshop provided an interdisciplinary group of participants with a definition of what a knowledge graph is, how it is implemented, and how it can be used to increase the value of an organization’s data. This slide deck gives an overview of the KM concepts that are necessary for the implementation of knowledge graphs as a foundation for Enterprise Artificial Intelligence (AI). Hilger and Nash also outlined four use cases for knowledge graphs, including recommendation engines and natural language query on structured data.

The post Presentation: Introduction to Knowledge Graphs appeared first on Enterprise Knowledge.

]]>
Content Silos: Causes, Problems, and Solutions https://enterprise-knowledge.com/content-silos-causes-problems-and-solutions/ Mon, 16 Dec 2019 14:00:02 +0000 https://enterprise-knowledge.com/?p=10115 Organizations are producing exponentially more content than ever before. In most businesses today, every employee is a content creator which has led to multiple, uncoordinated content management systems (CMS). Siloed systems cause content management issues such as duplication of content, … Continue reading

The post Content Silos: Causes, Problems, and Solutions appeared first on Enterprise Knowledge.

]]>
Organizations are producing exponentially more content than ever before. In most businesses today, every employee is a content creator, which has led to multiple, uncoordinated content management systems (CMSs). Siloed systems cause content management issues such as duplication of content, inconsistent messaging, access risks, and version control problems. The result is lost productivity, a risk of inaccurate content, and low user satisfaction. Fortunately, there are ways to break down silos without incurring serious costs or adopting burdensome processes.

Causes: Why is Content Siloed?

Organizations introduce new content management systems as a quick fix when their existing systems can't support new functionality. It seems easier to add a new technology than to confront the root issues. There are a variety of use cases that prompt companies to add a CMS:

  • Expanding Content Types: Legacy systems do not have the capability to accommodate new types of content. Enter new platforms to manage data, videos, and social media.
  • Collaboration: Companies are seeing the benefit of collaborating on documents in real time rather than exchanging edits over email. Enter a new collaboration tool.
  • Permissions Control: Not everyone at a company should have access to particularly sensitive documents. Enter a new legal document management system.
  • New Ways to Present Content: People are increasingly consuming content on their phones and watches, not just through print and the web. Enter a new CMS to publish to each channel.

Problems: Implications of Content Silos

Disconnected, siloed systems make it much more difficult to get value from your content. There are a number of implications to content silos, including:

  • Duplication: Companies want to reuse their content, but with multiple systems there is no straightforward way to do so. A common quick fix is creating copies of content in each CMS. If one of those copies needs to be updated or removed, there is no reliable way to reach every version across disconnected systems.
  • Findability: Users have a hard time finding content when it is stored in multiple systems. Not only do they have to alter their search behavior per system, but they also spend extra time searching in each system separately.
  • Lost Relationships: When content is managed in silos, organizations cannot capture relationships between content, like a blog sharing the same topic as a video or a legal document substantiating a claim in a deliverable. These relationships create important context that increases the usability of content.

Solutions: Breaking Down the Silos  

There are ways to address content silos other than trying to force all of your content into a single traditional CMS. Content migration to a monolithic CMS is typically a lengthy and costly process, and still only fulfills a limited set of requirements. Fortunately, there are alternatives:

Introduce a Headless CMS: A Headless CMS takes a content-first approach to give you flexibility in the expanding ways that your organization will use its content. Going headless centralizes and normalizes the way content is modeled across the enterprise, though its feasibility depends on how tightly your workflows are coupled to existing systems. There is a short-term tradeoff of cost, time, and disruption, but in the long term it reduces the risk of future silos and accommodates a wide set of requirements.

Develop a Metadata Hub: Without moving content from existing systems, organizations can aggregate metadata, or the descriptive information associated with content. This solution is beneficial if workflows and logic are highly embedded into existing systems and cannot be decoupled. The best way to implement this solution is through a graph database that contextualizes content by pulling the associated metadata from siloed systems. This is foundational for creating a streamlined search experience and even allows organizations to capture the relationships between content required for recommendation engines.
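As a simplified sketch of the metadata hub idea (the silo dicts below are stand-ins for real CMS APIs, and all names are invented), descriptive metadata is harvested from each system into one queryable set of triples while the content itself stays where it is:

```python
# Metadata hub sketch: harvest descriptive metadata from each silo
# into one graph-like set of (content_id, field, value) triples,
# without moving the underlying content.

cms_a = {"doc-1": {"title": "Q3 Report", "topic": "Finance"}}
cms_b = {"vid-9": {"title": "Q3 Webinar", "topic": "Finance"}}

def harvest(system_name, records):
    """Turn one silo's metadata into triples, namespaced by system."""
    return {
        (f"{system_name}/{cid}", field, value)
        for cid, meta in records.items()
        for field, value in meta.items()
    }

hub = harvest("cms_a", cms_a) | harvest("cms_b", cms_b)

def related_by(field, value):
    """Cross-silo query: every item sharing one metadata value."""
    return {cid for cid, f, v in hub if f == field and v == value}

print(related_by("topic", "Finance"))
```

Even in this toy form, the hub reveals a relationship, two items about the same topic in different systems, that neither silo could surface on its own; a production hub would use a graph database and controlled vocabularies rather than Python sets.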

Implement Enterprise Search: Introducing an enterprise-wide search layer on top of content silos is an effective way to uncover and address issues with duplication and findability. This solution allows organizations to maintain their current systems while aggregating the content inside of them. Enterprise Search is less disruptive than the above solutions because it does not require migration or database management, but it does not allow users to manage content across systems. However, creating a streamlined and intuitive search experience can be exactly what an organization needs to increase productivity and reveal inconsistencies.
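A minimal sketch of that search layer, with invented silo and document names: content stays in each system, a shared index aggregates it, and a content hash surfaces cross-system duplicates:

```python
# Enterprise search sketch: an inverted index over several silos,
# plus a content hash that flags duplicated documents.
import hashlib
from collections import defaultdict

silos = {
    "intranet": {"d1": "annual leave policy"},
    "wiki":     {"p7": "annual leave policy"},   # duplicate text
    "drive":    {"f3": "travel expense guide"},
}

index = defaultdict(set)         # term -> {(silo, doc_id)}
seen_hashes = defaultdict(list)  # content hash -> locations

for silo, docs in silos.items():
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add((silo, doc_id))
        digest = hashlib.sha256(text.encode()).hexdigest()
        seen_hashes[digest].append((silo, doc_id))

def search(term):
    """One query hits every silo at once."""
    return index[term]

duplicates = [locs for locs in seen_hashes.values() if len(locs) > 1]
print(search("leave"), duplicates)
```

A real deployment would use a search platform with connectors, relevance ranking, and security trimming, but the core move is the same: one index, many sources, and duplication made visible.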

Design a Business Taxonomy: A Business Taxonomy standardizes the way that content is described and organized across siloed systems. This gives users a consistent experience across different CMSs and is foundational for improving findability and productivity. It does not address challenges with duplicative content or lost relationships, but it significantly lowers the barriers that organizations face with content management. Implementing a Business Taxonomy is also one of the best ways to see improvements quickly.
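A toy example of how a shared taxonomy yields that consistency (the term hierarchy below is invented): a term applied in any system expands to the same ancestor path, so every silo can present identical facets:

```python
# Taxonomy sketch: a child term maps to its parent, and a tag expands
# to its full ancestor path for consistent faceting across systems.

parent = {
    "Parental Leave": "Leave Policies",
    "Leave Policies": "Human Resources",
}

def facet_path(term):
    """Walk up the taxonomy from a term to its top-level concept."""
    path = [term]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

print(facet_path("Parental Leave"))
# ['Parental Leave', 'Leave Policies', 'Human Resources']
```

In practice the taxonomy would live in a dedicated taxonomy management system and be pushed to each CMS, but the effect is the one sketched here: the same term means the same thing, in the same place in the hierarchy, everywhere.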

Breaking down content silos streamlines the ways that users access content to increase productivity and reduce the risk of having duplicate content across systems. Introducing a solution to address your organization’s content silos immediately adds value to your content and is critical to anticipate future uses. Not sure how to get started? We’re here to help! Reach out to info@enterprise-knowledge.com.

The post Content Silos: Causes, Problems, and Solutions appeared first on Enterprise Knowledge.

]]>