Lulit Tesfaye, Author at Enterprise Knowledge
https://enterprise-knowledge.com

Knowledge Cast – Daan Hannessen, Global Head of Knowledge Management at Shell – Semantic Layer Symposium Series
https://enterprise-knowledge.com/knowledge-cast-daan-hannessen-global-head-of-knowledge-management-at-shell/
Mon, 29 Sep 2025


Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Daan Hannessen, Global Head of Knowledge Management at Shell. He has over 20 years of experience in Knowledge Management for large knowledge-intensive organizations in Europe, Australia, and the USA, spanning continuous improvement programs, KM transformations, lessons learned solutions, digital workplaces, AI-driven expert bots, enterprise search, and much more.

In their conversation, Lulit and Daan discuss the importance of senior leadership support in ensuring the success of KM initiatives, emphasizing “speaking their language” as key to implementing KM and the semantic layer at a global scale. They also touch on how to measure the success of AI, when AI-generated content can be considered valuable insights, and why to invest in a semantic layer in the first place, as well as Daan’s talk at the upcoming Semantic Layer Symposium.

For more information on the Semantic Layer Symposium, check it out here!

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

Knowledge Cast – Ben Clinch, Chief Data Officer & Partner at Ortecha – Semantic Layer Symposium Series
https://enterprise-knowledge.com/knowledge-cast-ben-clinch-chief-data-officer-partner-at-ortecha/
Thu, 11 Sep 2025


Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Ben Clinch, Chief Data Officer and Partner at Ortecha and Regional Lead Trainer for the EDM Council (EMEA/India). He is a sought-after public speaker and thought leader in data and AI, having held numerous senior roles in architecture and business at some of the world’s largest financial and telecommunications institutions over his 25-year career, with a passion for helping organizations thrive with their data.

In their conversation, Lulit and Ben discuss Ben’s personal journey into the world of semantics, their data architecture must-haves in a perfect world, and how to calculate the value of data and knowledge initiatives. They also preview Ben’s talk at the Semantic Layer Symposium in Copenhagen this year, which will cover combining semantics with LLMs and neurosymbolic AI.

For more information on the Semantic Layer Symposium, check it out here!

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

When Should You Use An AI Agent? Part One: Understanding the Components and Organizational Foundations for AI Readiness
https://enterprise-knowledge.com/when-should-you-use-an-ai-agent/
Thu, 04 Sep 2025

It has long been recognized that organizations spend as much as 30-40% of their time searching for or recreating information. Now, imagine a dedicated analyst who doesn’t just look for or analyze data for you but also roams the office, listens to conversations, reads emails, and proactively sends you updates while spotting outdated data, summarizing new information, flagging inconsistencies, and prompting follow-ups. That’s what an AI agent does: it autonomously monitors content and data platforms, collaboration tools like Slack, Teams, and even email, and suggests updates or actions—without waiting for instructions. Instead of sending you on a massive data hunt to answer “What’s the latest on this client?”, an AI agent autonomously pulls CRM notes, emails, and contract changes, and summarizes them in Slack or Teams or publishes the findings as a report. It doesn’t just react; it takes initiative.

The potential of AI agents for productivity gains within organizations is undeniable—and it’s no longer a distant future. However, the key question today is: when is the right time to build and deploy an AI agent, and when is simpler automation the more effective choice?

While the idea of a fully autonomous assistant handling routine tasks is appealing, AI agents require a complex framework to succeed. This includes breaking down silos, ensuring knowledge assets are AI-ready, and implementing guardrails to meet enterprise standards for accuracy, trust, performance, ethics, and security.

Over the past couple of years, we’ve worked closely with executives who are navigating what it truly means for their organizations to be “AI-ready” or “AI-powered”, and as AI technologies evolve, this challenge has only become more complex and urgent for all of us.

To move forward effectively, it’s crucial to understand the role of AI agents compared to traditional or narrow AI, automation, or augmentation solutions. Specifically, it is important to recognize the unique advantages of agent-based AI solutions, identify the right use cases, and ensure organizations have the best foundation to scale effectively.

In the first part of this two-part series, I’ll outline the core building blocks for organizations looking to integrate AI agents. The goal of this series is to provide insights that help set realistic expectations and contribute to informed decisions around AI agent integration—moving beyond technical experiments—to deliver meaningful outcomes and value to the organization.

Understanding AI Agents

AI agents are goal-oriented autonomous systems built from large language and other AI models, business logic, guardrails, and the supporting technology infrastructure needed to carry out complex, resource-intensive tasks. Agents are designed to learn from data, adapt to different situations, and execute tasks autonomously. They understand natural language, take initiative, and act on behalf of humans and organizations across multiple tools and applications. Unlike traditional machine learning (ML) and AI automations (such as virtual assistants or recommendation engines), AI agents offer initiative, adaptability, and context-awareness by proactively accessing, analyzing, and acting on knowledge and data across systems.
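To make the pattern concrete, here is a minimal sketch of an agent loop in Python. The event sources, tool names, and routing rule are placeholders invented for illustration, and the reasoning step is stubbed where a production agent would call a language model.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    source: str   # e.g., "crm", "email", "slack"
    payload: str  # raw text of the update

def summarize_client_activity(payload: str) -> str:
    # Placeholder tool: a real agent would call a CRM API or search index here.
    return f"Summary posted to Teams: {payload[:60]}..."

def flag_outdated_record(payload: str) -> str:
    # Placeholder tool: a real agent would update a record or open a ticket here.
    return f"Flagged for review: {payload[:60]}..."

TOOLS: Dict[str, Callable[[str], str]] = {
    "summarize": summarize_client_activity,
    "flag_outdated": flag_outdated_record,
}

def decide_action(event: Event) -> str:
    """Stand-in for the reasoning step (an LLM call in a real system)."""
    return "flag_outdated" if "outdated" in event.payload.lower() else "summarize"

def run_agent(events: List[Event]) -> List[str]:
    results = []
    for event in events:
        action = decide_action(event)                 # reason: choose a tool
        results.append(TOOLS[action](event.payload))  # act: execute the tool
    return results

if __name__ == "__main__":
    inbox = [
        Event("crm", "Renewal notes look outdated; contract terms changed last week."),
        Event("email", "Client asked about the Q3 rollout timeline."),
    ]
    for line in run_agent(inbox):
        print(line)
```

The point of the sketch is the observe-reason-act cycle; everything around it would differ by organization and platform.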

 

[Infographic: what AI agents are, when to use them, and their limitations]

 

Components of Agentic AI Framework

1. Relevant Language and AI Models

Language models are the agent’s cognitive core, essentially its “brain”, responsible for reasoning, planning, and decision-making. While not every AI agent requires a Large Language Model (LLM), most modern and effective agents rely on LLMs and reinforcement learning to evaluate strategies and select the best course of action. LLM-powered agents are especially adept at handling complex, dynamic, and ambiguous tasks that demand interpretation and autonomous decision-making.

Choosing the right language model also depends on the use case, task complexity, desired level of autonomy, and the organization’s technical environment. Some tasks are better served by simpler, more deterministic workflows or specialized algorithms. For example, an expertise-focused agent (e.g., a financial fraud detection agent) is more effective when developed with purpose-built algorithms than with a general-purpose LLM because the subject area requires hyper-specific, non-generalizable knowledge. Similarly, well-defined, repetitive tasks, such as data sorting, form validation, or compliance checks, can be handled by rule-based agents or classical machine learning models, which are cheaper, faster, and more predictable. LLMs, meanwhile, add the most value in tasks that require flexible reasoning and adaptation, such as orchestrating integrations with multiple tools, APIs, and databases to perform real-world actions like handling dynamic customer service processes, placing trades, or interpreting incomplete and ambiguous information. In practice, we are finding that a hybrid approach works best.
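As a rough illustration of that hybrid routing, the sketch below sends well-defined validation work through deterministic rules and only falls back to a stubbed LLM call for ambiguous requests. The rule checks, the `call_llm` stub, and the task shapes are assumptions made for the example, not a reference implementation.

```python
import re

def validate_form(record: dict) -> str:
    """Deterministic path: cheap, fast, predictable rule checks."""
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
        return "rejected: invalid email"
    if record.get("amount", 0) <= 0:
        return "rejected: amount must be positive"
    return "accepted"

def call_llm(prompt: str) -> str:
    # Placeholder for whichever LLM client the organization actually uses.
    return f"[LLM reasoning over]: {prompt}"

def route_task(task: dict) -> str:
    """Send well-defined tasks to rules; ambiguous ones to the language model."""
    if task["type"] == "form_validation":
        return validate_form(task["record"])
    # Anything open-ended (e.g., interpreting an incomplete customer request)
    # needs flexible reasoning, so it goes to the LLM-backed path.
    return call_llm(task["text"])

print(route_task({"type": "form_validation",
                  "record": {"email": "a@b.com", "amount": 120}}))
print(route_task({"type": "customer_request",
                  "text": "Client says the 'usual order' should ship earlier this month."}))
```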

2. Semantic Layer and Unified Business Logic

AI agents need access to a shared, consistent view of enterprise data to avoid conflicting actions, poor decision-making, or the reinforcement of data silos. Increasingly, agents will also need to interact with external data and coordinate with other agents, which compounds the risk of misalignment, duplication, or even contradictory outcomes. This is where a semantic layer becomes critical. By standardizing definitions, relationships, and business context across knowledge and data sources, the semantic layer provides agents with a common language for interpreting and acting on information, connecting agents to a unified business logic. Across several recent projects, implementing a semantic layer has improved the accuracy and precision of initial AI results from around 50% to between 80% and 95%, depending on the use case.

The semantic layer includes metadata management, business glossaries, and taxonomy/ontology/graph data schemas that work together to provide a unified and contextualized view of data across typically siloed systems and business units, enabling agents to understand and reason about information within the enterprise context. These semantic models define the relationships between data entities and concepts, creating a structured representation of the business domain the agent is operating in. Semantic models form the basis for understanding data and how it relates to the business. By incorporating two or more of these semantic model components, the semantic layer provides the foundation for building robust and effective agentic perception, cognition, action, and learning that can understand, reason, and act on organization-specific business data. For any AI, but especially for AI agents, a semantic layer is critical in providing access to:

  • Organizational context and meaning to raw data to serve as a grounding ‘map’ for accurate interpretation and agent action;
  • Standardized business terms that establish a consistent vocabulary for business metrics (e.g., defining “revenue” or “store performance”), preventing confusion and ensuring the AI uses the same definitions as the business; and
  • Explainability and trust through metadata and lineage to validate and track why agent recommendations are compliant and safe to adopt.

Overall, the semantic layer ensures that all agents are working from the same trusted source of truth, and enables them to exchange information coherently, align with organizational policies, and deliver reliable, explainable results at scale. In a multi-agent system with multiple domain-specific agents, not all agents may work off the same semantic layer, but each will have the organizational business context to interpret messages from the others, courtesy of its domain-specific semantic layer.

The bottom line is that, without this reasoning layer, the “black box” nature of agents’ decision-making processes erodes trust, making it difficult for organizations to adopt and rely on these systems.
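For a sense of what standardized business terms with explicit relationships can look like in practice, here is a small sketch using the open-source rdflib library to express two business terms as SKOS concepts. The namespace, labels, and definitions are invented for illustration; a real glossary would be governed by the business and far richer.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/glossary/")  # illustrative namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

# "Revenue" as a governed business term with a single agreed definition.
g.add((EX.Revenue, RDF.type, SKOS.Concept))
g.add((EX.Revenue, SKOS.prefLabel, Literal("Revenue", lang="en")))
g.add((EX.Revenue, SKOS.definition,
       Literal("Gross income from sales before returns and discounts.", lang="en")))

# "Store performance" is explicitly related to revenue, making the link machine-readable.
g.add((EX.StorePerformance, RDF.type, SKOS.Concept))
g.add((EX.StorePerformance, SKOS.prefLabel, Literal("Store Performance", lang="en")))
g.add((EX.StorePerformance, SKOS.related, EX.Revenue))

print(g.serialize(format="turtle"))
```

Serializing the result as standard Turtle is what lets the same definitions be consumed by people, agents, and downstream tools alike.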

3. Access to AI-Ready Knowledge Assets and Sources

Agents require accurate, comprehensive, and context-rich organizational knowledge assets to make sound decisions. Without access to high-quality, well-structured data, agents, especially those powered by LLMs, struggle to understand complex tasks or reason effectively, often leading to unreliable or “hallucinated” outputs. In practice, this means organizations making strides with effective AI agents need to:

  • Capture and codify expert knowledge in a machine-readable form that is readily interpretable by AI models so that tacit know-how, policies, and best practices are accessible to agents, not just locked in human workflows or static documents;
  • Connect structured and unstructured data sources, from databases and transactional systems to documents, emails, and wikis, into a connected, searchable layer that agents can query and act upon; 
  • Provide semantically enriched assets with well-managed metadata, consistent labels, and standardized formats to make them interoperable with common AI platforms; 
  • Align and organize internal and external data so agents can seamlessly draw on employee-facing knowledge (policies, procedures, internal systems) as well as customer-facing assets (product documentation, FAQs, regulatory updates) while maintaining consistency, compliance, and brand integrity; and
  • Enable access to AI assets and systems while maintaining strict controls over who can use it, how it is used, and where it flows.

Beyond static access to knowledge, agents must also query and interact dynamically with various sources of data and content. This includes connecting to applications, websites, content repositories, and data management systems, and taking direct actions, such as reading from and writing to enterprise applications, updating records, or initiating workflows.

Enabling this capability requires a strong design and engineering foundation, allowing agents to integrate with external systems and services through standard APIs, operate within existing security protocols, and respect enterprise governance and record compliance requirements. A unified approach, bringing together disparate data sources into a connected layer (see semantic layer component above), helps break down silos and ensures agents can operate with a holistic, enterprise-wide view of knowledge.
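The sketch below illustrates one way such an action path can respect entitlements: a hypothetical write tool first checks the agent’s role-based permissions and only then calls a placeholder REST endpoint using the requests library. The endpoint URL, role map, and permission names are all invented for the example.

```python
import requests

# Illustrative entitlement map; a real system would consult an identity provider.
AGENT_ROLES = {"client-briefing-agent": {"crm:read", "crm:write"}}

def agent_can(agent_id: str, permission: str) -> bool:
    return permission in AGENT_ROLES.get(agent_id, set())

def update_crm_note(agent_id: str, record_id: str, note: str, token: str) -> int:
    """Write a note to a hypothetical CRM endpoint only if the agent is entitled to."""
    if not agent_can(agent_id, "crm:write"):
        raise PermissionError(f"{agent_id} is not allowed to write CRM records")
    response = requests.post(
        f"https://crm.example.com/api/records/{record_id}/notes",  # placeholder URL
        json={"note": note},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    return response.status_code
```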

4. Instructions, Guardrails, and Observability

Organizations are largely unprepared for agentic AI due to several factors: the steep leap from traditional, predictable AI to complex multi-agent orchestration, persistent governance gaps, a shortage of specialized expertise, integration challenges, and inconsistent data quality, to name a few. Most critically, the ability to effectively control and monitor agent autonomy remains a fundamental barrier—posing significant security, compliance, and privacy risks. Recent real-world cases highlight how quickly things can go wrong, including tales of agents deleting valuable data, offering illegal or unethical advice, and amplifying bias in hiring decisions or in public-sector deployments. These failures underscore the risks of granting autonomous AI agents high-level permissions over live production systems without robust oversight, guardrails, and fail-safes. Until these gaps are addressed, autonomy without accountability will remain one of the greatest barriers to enterprise readiness in the agentic AI era.

As such, for AI agents to operate effectively within the enterprise, they must be guided by clear instructions, protected by guardrails, and monitored through dedicated evaluation and observability frameworks.

  • Instructions: Instructions define an AI agent’s purpose, goals, and persona. Agents don’t inherently understand how a specific business or organization operates. Instead, that knowledge comes from existing enterprise standards, such as process documentation, compliance policies, and operating models, which provide the foundational inputs for guiding agent behavior. LLMs can interpret these high-level standards and convert them into clear, step-by-step instructions, ensuring agents act in ways that align with organizational expectations. For example, in a marketing context, an LLM can take a general directive like, “All published content must reflect the brand voice and comply with regulatory guidelines”, and turn it into actionable instructions for a marketing agent. The agent can then assist the marketing team by reviewing a draft email campaign, identifying tone or compliance issues, and suggesting revisions to ensure the content meets both brand and regulatory standards.
  • Guardrails: Guardrails are safety measures that act as the protective boundaries within which agents operate. Agents need guardrails across different functions to prevent them from producing harmful, biased, or inappropriate content and to enforce security and ethical standards. These include relevance and output validation checks; personally identifiable information (PII) filters that detect unsafe inputs and prevent leakage of PII; reputation and brand alignment checks; privacy and security guardrails that enforce authentication, authorization, and access controls to prevent unauthorized data exposure; and defenses against prompt attacks, along with content filters for harmful topics (a minimal sketch of input/output guardrails follows this list).
  • Observability: Even with strong instructions and guardrails, agents must be monitored in real time to ensure they behave as expected. Observability includes logging actions, tracking decision paths, monitoring model outputs, cost monitoring and performance optimization, and surfacing anomalies for human review. A good starting point for managing agent access is mapping operational and security risks for specific use cases and leveraging unified entitlements (identity and access control across systems) to apply strict role-based permissions and extend existing data security measures to cover agent workflows.
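Here is the minimal guardrail sketch referenced above: a regex-based PII redactor combined with a simple output-validation check against a blocked-topic list. The patterns and policy list are illustrative placeholders; production guardrails typically layer dedicated classifiers and policy engines on top of checks like these.

```python
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}\b"),
}
BLOCKED_TOPICS = {"insider trading", "password sharing"}  # illustrative policy list

def redact_pii(text: str) -> str:
    """Guardrail: mask common PII patterns before text leaves the system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def check_output(draft: str) -> tuple[bool, str]:
    """Output validation guardrail: block drafts that touch prohibited topics."""
    lowered = draft.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"blocked: mentions '{topic}'"
    return True, redact_pii(draft)

ok, result = check_output("Send the summary to jane.doe@example.com by Friday.")
print(ok, result)  # True  Send the summary to [REDACTED EMAIL] by Friday.
```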

Together, instructions, guardrails, and observability form a governance layer that ensures agents operate not only autonomously, but also responsibly and in alignment with organizational goals. To achieve this, it is critical to plan for and invest in AI management platforms and services that define agent workflows, orchestrate these interactions, and supervise AI agents. Key capabilities to look for in an AI management platform include: 

  • Prompt chaining where the output of one LLM call feeds the next, enabling multi-step reasoning; 
  • Instruction pipelines to standardize and manage how agents are guided;
  • Agent orchestration frameworks for coordinating multiple agents across complex tasks; and 
  • Evaluation and observability (E&O) monitoring solutions that offer features like content and topic moderation, PII detection and redaction, and protection against prompt injection or “jailbreaking” attacks. Furthermore, because model training involves iterative experimentation, tuning, and distributed computation, it is paramount to have benchmarks and business objectives defined from the outset in order to optimize model performance through evaluation and validation.

In contrast to the predictable expenses of standard software, AI project costs are highly dynamic and often underestimated during initial planning. Many organizations are grappling with unexpected AI cost overruns due to hidden expenses in data management, infrastructure, and maintenance for AI. This can severely impact budgets, especially for agentic environments. Tracking system utilization, scaling resources dynamically, and implementing automated provisioning allows organizations to maintain consistent performance and optimization for agent workloads, even under variable demand, while managing cost spikes and avoiding any surprises.

Many traditional enterprise observability tools are now extending their capabilities to support AI-specific monitoring. Lifecycle management tools such as MLflow, Azure ML, Vertex AI, or Databricks help with the management of this process at enterprise scale by tracking model versions, automating retraining schedules, and managing deployments across environments. As with any new technology, the effective practice is to start with these existing solutions where possible, then close the gaps with agent-specific, fit-for-purpose tools to build a comprehensive oversight and governance framework.
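As one example of leaning on an existing lifecycle tool rather than building oversight from scratch, the sketch below logs basic agent-run telemetry to MLflow. The experiment name, metric names, and run structure are illustrative assumptions that a team would align with its own evaluation framework.

```python
import mlflow

def log_agent_run(run_name: str, decision_path: list[str],
                  guardrail_blocks: int, tokens_used: int, latency_s: float) -> None:
    """Record one agent execution so anomalies can be surfaced for human review."""
    mlflow.set_experiment("agent-observability")  # illustrative experiment name
    with mlflow.start_run(run_name=run_name):
        mlflow.log_param("decision_path", " -> ".join(decision_path))
        mlflow.log_metric("guardrail_blocks", guardrail_blocks)
        mlflow.log_metric("tokens_used", tokens_used)
        mlflow.log_metric("latency_seconds", latency_s)

log_agent_run("client-briefing-2025-09-01",
              ["fetch_crm_notes", "summarize", "post_to_teams"],
              guardrail_blocks=0, tokens_used=1840, latency_s=3.2)
```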

5. Humans and Organizational Operating Models

There is no denying it—the integration of AI agents will transform ways of working worldwide. However, a significant gap still exists between the rapid adoption plans for AI agents and the reality on the ground. Why? Because too often, AI implementations are treated as technological experiments, with a focus on performance metrics or captivating demos. This approach frequently overlooks the critical human element needed for AI’s long-term success. Without a human-centered operating model, AI deployments continue to run the risk of being technologically impressive but practically unfit for organizational use.

Human Intervention and Human-In-the-Loop Validation: One of the most pressing considerations in integrating AI into business operations is the role of humans in overseeing, validating, and intervening in AI decisions. Agentic AI has the power to automate many tasks, but it still requires human oversight, particularly in high-risk or high-impact decisions. A transparent framework for when and how humans intervene is essential for mitigating these risks and ensuring AI complies with regulatory and organizational standards. Emerging practices are showing early success when agent autonomy is combined with human checkpoints, wherein subject matter experts (SMEs) are identified and designated as part of the “AI product team” from the outset to define requirements and ensure that AI agents consistently focus on and meet the right organizational use cases throughout development.

Shift in Roles and Reskilling: For AI to truly integrate into an organization’s workflow, a fundamental shift in the fabric of its roles and operating model is becoming necessary. Many roles as we know them today are shifting—even for the most seasoned software and ML engineers. Organizations are starting to rethink their structure to blend human expertise with agentic autonomy. This involves redesigning workflows to allow AI agents to automate routine tasks while humans focus on strategic, creative, and problem-solving roles.

Implementing and managing agentic AI requires specialized knowledge in areas such as AI model orchestration, agent–human interaction design, and AI operations. These skill sets are often underdeveloped in many organizations and, as a result, AI projects are failing to scale effectively. The gap isn’t just technical; it also includes a cultural shift toward understanding how AI agents generate results and the responsibility associated with their outputs. To bridge this gap, we are seeing organizations start to invest in restructuring data, AI, content, and knowledge operations/teams and reskilling their workforce in roles like AI product management, knowledge and semantic modeling, and AI policy and governance.

Ways of Working: To support agentic AI delivery at scale, it is becoming evident that agile methodologies must also evolve beyond their traditional scope of software engineering and adapt to the unique challenges posed by AI development lifecycles. Agentic AI requires an agile framework that is flexible, experimental, and capable of iterative improvements. This further requires deep interdisciplinary collaboration across data scientists, AI engineers, software engineers, domain experts, and business stakeholders to navigate complex business and data environments.

Furthermore, traditional CI/CD pipelines, which focus on code deployment, need to be expanded to support continuous model training, testing, human intervention, and deployment. Integrating ML/AI Ops is critical for managing agent model drift and enabling autonomous updates. The successful development and large-scale adoption of agentic AI hinges on these evolving workflows that empower organizations to experiment, iterate, and adapt safely as both AI behaviors and business needs evolve.

Conclusion 

Agentic AI will not succeed through technology advancements alone. Given the inherent complexity and autonomy of AI agents, it is essential to evaluate organizational readiness and conduct a thorough cost-benefit analysis when determining whether an agentic capability is truly needed or merely a nice-to-have.

Success will ultimately depend on more than just cutting-edge models and algorithms. It also requires dismantling artificial, system-imposed silos between business and technical teams, while treating organizational knowledge and people as critical assets in AI design. Therefore, a thoughtful evolution of the organizational operating model and the seamless integration of AI into the business’s core is critical. This involves selecting the right project management and delivery frameworks, acquiring the most suitable solutions, implementing foundational knowledge and data management and governance practices, and reskilling, attracting, hiring, and retaining individuals with the necessary skill sets. These considerations make up the core building blocks for organizations to begin integrating AI agents.

The good news is that when built on the right foundations, AI solutions can be reused across multiple use cases, bridge diverse data sources, transcend organizational silos, and continue delivering value beyond the initial hype. 

Is your organization looking to evaluate AI readiness? How well does it measure up against these readiness factors? Explore our case studies and knowledge base on how other organizations are tackling this or get in touch to learn more about our approaches to content and data readiness for AI.

Knowledge Cast – Dawn Brushammar, Independent Knowledge Management Consultant & Programme Chair of KMWorld Europe – Semantic Layer Symposium Series
https://enterprise-knowledge.com/knowledge-cast-dawn-brushammar-independent-knowledge-management-consultant-programme-chair-of-kmworld-europe/
Thu, 28 Aug 2025


Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Dawn Brushammar, currently an independent KM consultant, advisor, and frequent contributor at industry events. She has spent her 25+ year career connecting people to relevant knowledge and information. Her experience across industries and geographies includes leading an internal Knowledge Management team at McKinsey and Company, building databases for the Oprah Winfrey Show, running research services for a division of American Express, and managing academic librarianship at several universities and an environmental and sustainability research institute.

In their conversation, Lulit and Dawn discuss the similarities between their early career paths and KM journeys, the evolving role of the modern librarian, and how KM and semantics support AI technologies. They also define what a “knowledge-first organization” should look like, and touch on Dawn’s upcoming talk at the Semantic Layer Symposium on the rising importance of library science to the Semantic Layer.

For more information on the Semantic Layer Symposium, check it out here!

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

Knowledge Cast – Paco Nathan, Principal DevRel Engineer at Senzing – Semantic Layer Symposium Series
https://enterprise-knowledge.com/knowledge-cast-paco-nathan-principal-devrel-engineer-at-senzing/
Tue, 26 Aug 2025


Enterprise Knowledge’s Lulit Tesfaye, VP of Knowledge & Data Services, speaks with Paco Nathan, Developer Relations (DevRel) Leader for the Entity Resolved Knowledge Graph Practice at Senzing. He is a computer scientist with over 40 years of tech industry experience and core expertise in data science, natural language, graph technologies, and cloud computing. He’s the author of numerous books, videos, and tutorials about these topics. He also hosts the monthly “Graph Power Hour!” webinar.

In their conversation, Lulit and Paco discuss Paco’s background in the graph space, as well as current graph trends and scalable use cases for the Semantic Layer. They also touch on how to convince organizations to prioritize investments in semantic technologies and data management, and Paco shares more details on his talk about financial crimes and Semantic Layers at the upcoming Semantic Layer Symposium in Copenhagen.

For more information on the Semantic Layer Symposium, check it out here!

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

The Evolution of Knowledge Management & Organizational Roles: Integrating KM, Data Management, and Enterprise AI through a Semantic Layer
https://enterprise-knowledge.com/the-evolution-of-knowledge-management-km-organizational-roles/
Thu, 31 Jul 2025

On June 23, 2025, at the Knowledge Summit Dublin, Lulit Tesfaye and Jess DeMay presented “The Evolution of Knowledge Management (KM) & Organizational Roles: Integrating KM, Data Management, and Enterprise AI through a Semantic Layer.” The session examined how KM roles and responsibilities are evolving as organizations respond to the increasing convergence of data, knowledge, and AI.

Drawing from multiple client engagements across sectors, Tesfaye and DeMay shared patterns and lessons learned from initiatives where KM, Data Management, and AI teams are working together to create a more connected and intelligent enterprise. They highlighted the growing need for integrated strategies that bring together semantic modeling, content management, and metadata governance to enable intelligent automation and more effective knowledge discovery.

The presentation emphasized how KM professionals can lead the way in designing sustainable semantic architectures, building cross-functional partnerships, and aligning programs with organizational priorities and AI investments. Presenters also explored how roles are shifting from traditional content stewards to strategic enablers of enterprise intelligence.

Session attendees walked away with:

  • Insight into how KM roles are expanding to meet enterprise-wide data and AI needs;
  • Examples of how semantic layers can enhance findability, improve reuse, and enable automation;
  • Lessons from organizations integrating KM, Data Governance, and AI programs; and
  • Practical approaches to designing cross-functional operating models and governance structures that scale.

Top Semantic Layer Use Cases and Applications (with Real World Case Studies)
https://enterprise-knowledge.com/top-semantic-layer-use-cases-and-applications-with-realworld-case-studies/
Thu, 01 May 2025

Today, most enterprises are managing multiple content and data systems or repositories, often with overlapping capabilities such as content authoring, document management, or data management (typically averaging three or more). This leads to fragmentation and data silos, creating significant inefficiencies. Finding and preparing content and data for analysis takes weeks, or even months, resulting in high failure rates for knowledge management, data analytics, AI, and big data initiatives. Ultimately, this negatively impacts decision-making capabilities and business agility.

To address these challenges, over the last few years, the semantic layer has emerged as a framework and solution to support a wide range of use cases, including content and data organization, integration, semantic search, knowledge discovery, data governance, and automation. By connecting disparate data sources, a semantic layer enables richer queries and supports programmatic knowledge extraction and modernization.

A semantic layer functions by utilizing metadata and taxonomies to create structure, business glossaries to align on the meaning of terms, ontologies to define relationships, and a knowledge graph to uncover hidden connections and patterns within content and data. This combination allows organizations to understand their information better and unlock greater value from their knowledge assets. Moreover, AI is tapping into this structured knowledge to generate contextual, relevant, and explainable answers.

So, what are the specific problems and use cases organizations are solving with a semantic layer? The case studies and use cases highlighted in this article are drawn from our own experience from recent projects and lessons learned, and demonstrate the value of a semantic layer not just as a technical foundation, but as a strategic asset, bridging human understanding with machine intelligence.

 

 

Semantic Layer Advancing Search and Knowledge Discovery: Getting Answers with Organizational Context

Over the past two decades, we have completed 50-70 semantic layer projects across a wide range of industries. In nearly every case, the core challenges revolve around age-old knowledge management and data quality issues—specifically, the findability and discoverability of organizational knowledge. In today’s fast-paced work environment, simply retrieving a list of documents as ‘information’ is no longer sufficient. Organizations require direct answers to discover new insights. Most importantly, organizations are looking to access data in the context of their specific business needs and processes. Traditional search methods continue to fall short in providing the depth and relevance required to make quick decisions. This is where a semantic layer comes into play. By organizing and connecting data with context, a semantic layer enables advanced search and knowledge discovery, allowing organizations to retrieve not just raw files or data, but answers that are rich in meaning, directly tied to objectives, and action-oriented. For example, supported by descriptive metadata and explicit relationships, semantic search, unlike keyword search, understands the meaning and context of our queries, leading to more accurate and relevant results by leveraging relationships between entities and concepts across content, rather than just matching keywords. This powers enterprise search solutions and question-answering systems that can understand and answer complex questions based on your organization’s knowledge. 

Case Study: For our clients in the pharmaceuticals and healthcare sectors, clinicians and researchers often face challenges locating the most relevant medical research, patient records, or treatment protocols due to the vast amount of unstructured data. A semantic layer facilitates knowledge discovery by connecting clinical data, trials, research articles, and treatment guidelines to enable context-aware search. By extracting and classifying entities like patient names, diagnoses, medications, and procedures from unstructured medical records, our clients are advancing scientific discovery and drug innovation. They are also improving patient care outcomes by applying the knowledge associated with these entities in clinical research. Furthermore, domain-specific ontologies organize unstructured content into a structured network, allowing AI solutions to better understand and infer knowledge from the data. This map-like representation helps systems navigate complex relationships and generate insights by clearly articulating how content and data are interconnected. As a result, rather than relying on traditional, time-consuming keyword-based searches that cannot distinguish between entities (e.g., “drugs manufactured by GSK” vs. “what drugs treat GSK”?), users can perform semantic queries that are more relevant and comprehend meaning (e.g., “What are the side effects of drug X?” or “Which pathways are affected by drug Y?”), by leveraging the relationships between entities to obtain precise and relevant answers more efficiently.
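To ground the idea of querying relationships rather than keywords, here is a toy example using rdflib: a handful of invented triples about a fictional "Drug X" and a SPARQL query that asks for its side effects by following a hasSideEffect relationship. The ontology namespace and facts are made up for illustration only and do not reflect any client data.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

MED = Namespace("https://example.org/med/")  # illustrative ontology namespace

g = Graph()
g.bind("med", MED)

# A tiny, invented slice of a clinical knowledge graph.
g.add((MED.DrugX, RDF.type, MED.Drug))
g.add((MED.DrugX, RDFS.label, Literal("Drug X")))
g.add((MED.DrugX, MED.hasSideEffect, MED.Nausea))
g.add((MED.DrugX, MED.hasSideEffect, MED.Dizziness))
g.add((MED.Nausea, RDFS.label, Literal("Nausea")))
g.add((MED.Dizziness, RDFS.label, Literal("Dizziness")))

# "What are the side effects of Drug X?" expressed over relationships, not keywords.
query = """
PREFIX med: <https://example.org/med/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?effectLabel WHERE {
    med:DrugX med:hasSideEffect ?effect .
    ?effect rdfs:label ?effectLabel .
}
"""
for row in g.query(query):
    print(row.effectLabel)
```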

 

Semantic Layer as a Data Product: Unlocking Insights by Aligning & Connecting Knowledge Assets from Complex Legacy Systems

The reality is that most organizations face disconnected data spread across complex, legacy systems. Despite well-intended investments and efforts in enterprise knowledge and data management, typical repositories often remain outdated, including legacy applications, email, shared network drives, folders, and information saved locally on desktops or laptops. Global investment banks, for instance, struggle with multiple outdated record management, risk, and compliance tracking systems, while healthcare organizations continue to contend with disparate electronic health record (EHR) and electronic medical record (EMR) systems. These legacy systems hinder the ability to communicate and share data with newer, more advanced platforms, are typically not designed to handle the growing demands of modern data, and leave businesses grappling with siloed information that makes regulatory reporting onerous, manual, and time-consuming. The solution to these issues lies in treating the semantic layer as an abstracted data product itself, whereby organizations employ semantic models to connect fragmented data from legacy systems, align shared terms across those systems, and provide descriptive metadata and meaning, empowering users to query and access data with additional context, relevance, and speed. This approach not only streamlines decision-making but also modernizes data infrastructure without requiring a complete overhaul of existing systems.

Case Study: We are currently working with a global financial firm to transform their risk management program. The firm manages 21 bespoke legacy applications, each handling different aspects of their risk processes; compiling a comprehensive risk report typically took up to two months, and answering key questions like, “What are the related controls and policies relevant to a given risk in my business?” was a complex, time-consuming task. The firm engaged us to augment their data transformation initiatives with a semantic layer and ecosystem. We began by piloting a conceptual graph model of their risk landscape, defining core risk taxonomies to connect disparate data across the ecosystem. We used ontologies to explicitly capture the relationships between risks, controls, issues, policies, and more. Additionally, we leveraged large language models (LLMs) to summarize and reconcile over 40,000 risks, which had previously been described by assessors using free text.

This initiative provided the firm with a simplified, intuitive view where users could quickly look up a risk and find relevant information in seconds via a graph front-end. Just 1.5 years later, the semantic layer is powering multiple key risk management tools, including a risk library with semantic search and knowledge panels, four recommendation engines, and a comprehensive risk dashboard featuring threshold and tolerance analysis. The early success of the project was due to a strategic approach: rather than attempting to integrate the semantic data model across their legacy applications, the firm treated it as a separate data product. This allowed risk assessors and various applications to use the semantic layer as modular “Lego bricks,” enabling flexibility and faster access to critical insights without disrupting existing systems.

 

Semantic Layer for Data Standards and Interoperability: Navigating the Dynamism of Data & Vendor Limitations 

Various data points suggest that, today, the average tenure of an S&P 500 technology company has dropped dramatically from 85 years to just 12-15 years. This rapid turnover reflects the challenges organizations face with the constant evolution of technology and vendor solutions. The ability to adapt to new tools and systems, while still maintaining operational continuity and reducing risk, is a growing concern for many organizations. One key solution to this challenge is adopting frameworks and standards created to ensure data interoperability, offering the flexibility to organize data and abstract it from system and vendor limitations. A proper semantic layer employs universally adopted semantic web (W3C) and data modeling standards to design, model, implement, and govern knowledge and data assets within organizations and across industries.

Case Study: A few years ago, one of our clients faced a significant challenge when their graph database vendor was acquired by another company, leading to a sharp increase in both license and maintenance fees. To mitigate this, we were able to swiftly migrate all of their semantic data models from the old graph database to a new one in less than a week (the fastest migration we’ve ever experienced). This move saved the client approximately $2 million over three years. The success of the migration was made possible because their data models were built using semantic web standards (RDF-based), ensuring standards-based data models and interoperability regardless of the underlying database or vendor. This case study highlights a fundamental shift in how organizations approach data management.
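A small sketch of why W3C standards made that migration straightforward: RDF serialized from one store can be parsed by any standards-compliant library or database without remodeling. The example below round-trips a couple of invented triples through Turtle using rdflib; the namespace and facts are placeholders.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/risk/")  # illustrative namespace

# Model built against W3C standards in the outgoing graph database.
old_store = Graph()
old_store.add((EX.MarketRisk, RDF.type, SKOS.Concept))
old_store.add((EX.MarketRisk, SKOS.prefLabel, Literal("Market Risk", lang="en")))

# Export as standard Turtle: no vendor-specific format involved.
turtle_dump = old_store.serialize(format="turtle")

# Any standards-compliant store or library can load the same dump unchanged.
new_store = Graph()
new_store.parse(data=turtle_dump, format="turtle")
print(f"Migrated {len(new_store)} triples without remodeling.")
```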

 

Semantic Layer as the Framework for a Knowledge Portal 

The growing volume of data, the need for efficient knowledge sharing, and the drive to enhance employee productivity and engagement are fueling a renewed interest in knowledge portals. Organizations are increasingly seeking a centralized, easily accessible view of information as they adopt more data-driven, knowledge-centric approaches. A modern Knowledge Portal consolidates and presents diverse types of organizational content, ranging from unstructured documents and structured data to connections with people and enterprise resources, offering users a comprehensive “Enterprise 360” view of related knowledge assets to support their work effectively.

While knowledge portals fell out of favor in the 2010s due to issues like poor content quality, weak governance, and limited usability, today’s technological advancements are enabling their resurgence. Enhanced search capabilities, better content aggregation, intelligent categorization, and automated integrations are improving findability, discoverability, and user engagement. At its core, a Knowledge Portal comprises five key components that are now more feasible than ever: a Web UI, API layers, enterprise search engine, knowledge graph, and taxonomy/ontology management tools—half of which form part of the semantic layer.

Case Study: A global investment firm managing over $250 billion in assets partnered with us to break down silos and improve access to critical information across its 50,000-employee organization. Investment professionals were wasting time searching for fragmented, inconsistent knowledge stored across disparate systems, often duplicating efforts and missing key insights. We designed and implemented a Knowledge Portal integrating structured and unstructured content, AI-powered search, and a semantic layer to unify data from over 12 systems, including their primary CRM (DealCloud) and additional internal and external systems, while respecting complex access permissions and entitlements. A major part of the portal was its semantic layer architecture, which included the rollout of metadata and taxonomy design, ontology and graph modeling and storage, and an agile development process that ensured high user engagement and adoption. Today, the portal connects staff to both information and experts, enabling faster discovery, improved collaboration, and reduced redundancy. As a result, the firm saw measurable gains in productivity, staff and client onboarding efficiency, and knowledge reuse. The company continues to expand the solution to more advanced applications, such as semantic search, and to broader global use cases.

 

Semantic Layer for Analytics-Ready Data 

For many large-scale organizations, it takes weeks, sometimes months, for analytics teams to develop “insights” reports and dashboards that fulfill data-driven requests from executives or business stakeholders. Navigating complex systems and managing vast data volumes has become a point of friction between established software engineering teams managing legacy applications and emerging data science/engineering teams focused on unlocking analytics insights or data products. Such challenges persist as long as organizations work within complex infrastructures and proprietary platforms, where data is fragmented and locked in tables or applications with little to no business context. This makes it extremely difficult to extract useful insights, handle the dynamism of data, or manage the rising volumes of unstructured data, all while trying to ensure that data is consistent and trustworthy. 

Picture this scenario from a recent engagement: a global retailer with close to 40,000 store locations had recently migrated its data to a data lake in an attempt to centralize its data assets. Despite the investment, they still faced persistent challenges when new data requests came from their leadership, particularly around store performance metrics. Here’s a breakdown of the issues:

  • Each time a leadership team requested a new metric or report, the data team had to spin up a new project and develop new data pipelines.
  • It took a data analyst 5-6 months to understand the content and data related to these metrics—often involving petabytes of raw data.
  • The process involved managing over 1500 ETL pipelines, which led to inefficiencies (what we jokingly called “death by 2,000 ETLs”).
  • Producing a single dashboard for C-level executives cost over $900,000.
  • Even after completing the dashboard, they often discovered that the metrics were being defined and used inconsistently. Terms like “revenue,” “headcount,” or “store performance” were frequently understood differently depending on who worked on the report, making output reports unreliable and unusable. 

This is one example of why organizations are now seeking and investing in a coherent, integrated way to understand their vast data ecosystems. Because organizations often work with complex systems, ranging from CRMs and ERPs to data lakes and cloud platforms, extracting meaningful insights requires a unified view that bridges these gaps. This is where the semantic layer serves as a pragmatic tool to streamline processes and transform how data is used across departments. Specifically for these use cases, semantic data is gaining significant traction across diverse pockets of the organization as the standard interpreter between complex systems and business goals.
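A minimal sketch of the "define it once" principle that a semantic layer enforces for metrics: each metric carries its agreed name, business definition, and computation in a single registry that every report draws from, so "revenue" cannot quietly mean different things in different dashboards. The field names, definitions, and sample rows are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class Metric:
    name: str
    definition: str                        # the agreed business meaning
    compute: Callable[[List[dict]], float]

METRICS: Dict[str, Metric] = {
    "revenue": Metric(
        name="revenue",
        definition="Gross sales before returns and discounts, in USD.",
        compute=lambda rows: sum(r["gross_sales"] for r in rows),
    ),
    "store_performance": Metric(
        name="store_performance",
        definition="Revenue per labor hour.",
        compute=lambda rows: sum(r["gross_sales"] for r in rows)
                / max(sum(r["labor_hours"] for r in rows), 1),
    ),
}

sample = [{"gross_sales": 1200.0, "labor_hours": 40},
          {"gross_sales": 800.0, "labor_hours": 32}]

for key in ("revenue", "store_performance"):
    m = METRICS[key]
    print(f"{m.name}: {m.compute(sample):.2f}  ({m.definition})")
```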

 

Semantic Layer for Delivering Knowledge Intelligence 

Another reality many organizations are grappling with today is that basic AI algorithms trained on public data sets may not work well on organization- and domain-specific problems, especially in domains where industry-specific nuances matter. Thus, organizational knowledge is a prerequisite for success, not just for generative AI, but for all applications of enterprise AI and data science solutions. This is where experience and best practices in knowledge and data management lend the AI space effective and proven approaches to sharing domain and institutional knowledge. Technical teams tasked with making AI “work” or provide value for their organization are, in particular, looking for programmatic ways to explicitly model relationships between various data entities, provide business context to tabular data, and extract knowledge from unstructured content, ultimately delivering what we call Knowledge Intelligence.

A well-implemented semantic layer abstracts the complexities of underlying systems and presents a unified, business-friendly view of data. It transforms raw data into understandable concepts and relationships, as well as organizes and connects unstructured data. This makes it easier for both data teams and business users to query, analyze, and understand their data, while making this organizational knowledge machine-ready and readable. The semantic layer standardizes terminology and data models across the enterprise, and provides the required business context for the data. By unifying and organizing data in a way that is meaningful to the business, it ensures that key metrics are consistent, actionable, and aligned with the company’s strategic objectives and business definitions.

Case Study: With the aforementioned global retailer, as their data and analytics teams worked to integrate siloed data and unstructured content, we partnered with them to build a semantic ecosystem that streamlined processes and provided the business context needed to make sense of their vast data. Our approach included: 

  • Standardized Metadata and Vocabularies: Developed standardized metadata and vocabularies to describe their key enterprise data assets, especially for store metrics like sales performance, revenue, etc. This ensured that everyone in the organization used the same definitions and language when discussing key metrics. 
  • Explicitly Defined Concepts and Relationships: We used ontologies and graphs to define the relationships between various domains such as products, store locations, store performance, etc. This created a coherent and standardized model that allowed data teams to work from a shared understanding of how different data points were connected.
  • Data Catalog and Data Products: We helped the retailer integrate these semantic models into a data catalog that made data available as “data products.” This allowed analysts to access predefined, business-contextualized data directly, without having to start from scratch each time a new request was made.

This approach reduced report generation steps from 7 to 4 and cut development time from 6 months to just 4-5 weeks. Most importantly, it enabled the discovery of previously hidden data, unlocking valuable insights to optimize operations and drive business performance.

 

Semantic Layer as a Foundation for Reliable AI: Facilitating Human Reasoning and Explainable Decisions

Emerging technologies (like GenAI or Agentic AI) are democratizing access to information and automation, but they also contribute to the “dark data” problem—data that exists in an unstructured or inaccessible format but may contain valuable, sensitive, or poor-quality information. While LLMs have garnered significant attention in conversational AI and content generation, organizations are now recognizing that their data management challenges require more specialized, nuanced, and somewhat ‘grounded’ approaches that address the gaps in explainability, precision, and the ability to align AI with organizational context and business rules. Without this organizational context, raw data or text is often messy, outdated, redundant, and unstructured, making it difficult for AI algorithms to extract meaningful information. The key step in addressing this problem is the ability to connect all types of organizational knowledge assets, i.e., using a shared language to link experts, related data, content, videos, best practices, lessons learned, and operational insights from across the organization. In other words, to fully benefit from an organization’s knowledge and information, both structured and unstructured information, as well as expert knowledge, must be represented and understood by machines. A semantic layer provides AI with a programmatic framework to make organizational context, content, and domain knowledge machine-readable. Techniques such as data labeling, taxonomy development, business glossaries, ontologies, and knowledge graph creation make up the semantic layer that facilitates this process.

Case Study: We have been working with a global foundation that had previously been through failed AI experiments as part of a mandate from their CEO for their data teams to “figure out a way” to adopt LLMs to evaluate the impact of their investments on strategic goals by synthesizing information from publicly available domain data, internal investment documents, and internal investment data. The challenge for the previously failed efforts lay in connecting diverse and unstructured information to structured data and ensuring that the insights generated were precise, explainable, reliable, and actionable for executive stakeholders. To address these challenges, we took a hybrid approach that leveraged LLMs augmented with advanced graph technology and a semantic RAG (Retrieval Augmented Generation) agentic workflow. To provide the relevant organizational metrics and connection points in a structured manner, the solution leveraged an Investment Ontology as a semantic backbone that underpins their disconnected source systems, ensuring that all investment-related data (from structured datasets to narrative reports) is harmonized under a common language. This semantic backbone supports both precise data integration and flexible query interpretation. To effectively convey the value of this hybrid approach, we leveraged a chatbot that served as a user interface to toggle back and forth between the basic GPT model and the graph RAG solution. The graph RAG solution consistently outperformed the basic/naive LLM for complex questions, demonstrating the value of semantics for providing organizational context and alignment and, ultimately, delivering coherent and explainable insights that bridged structured and unstructured investment data, along with a transparent AI mapping that allowed stakeholders to see exactly how each answer was derived.
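The retrieval side of such a graph-augmented (RAG) workflow can be illustrated in a few lines: facts about an investment are pulled from a small rdflib graph, turned into grounding context, and prepended to the question before the model is called. The ontology terms, the sample facts, and the `call_llm` stub are placeholders; the production solution used a governed Investment Ontology and an agentic workflow around the model.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

INV = Namespace("https://example.org/investment/")  # stand-in for an Investment Ontology

g = Graph()
g.add((INV.Grant42, RDF.type, INV.Investment))
g.add((INV.Grant42, RDFS.label, Literal("Grant 42")))
g.add((INV.Grant42, INV.supportsGoal, INV.MaternalHealth))
g.add((INV.Grant42, INV.reportedOutcome, Literal("Clinic coverage up 12% in 2024")))
g.add((INV.MaternalHealth, RDFS.label, Literal("Maternal Health")))

def local(term) -> str:
    """Crude local-name extraction for readable output (illustrative only)."""
    return str(term).split("/")[-1].split("#")[-1]

def retrieve_context(subject) -> str:
    """Collect the facts attached to a subject as plain-text grounding for the model."""
    lines = []
    for _, pred, obj in g.triples((subject, None, None)):
        lines.append(f"- {local(pred)}: {local(obj)}")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    # Placeholder for whichever model client is in use; the real workflow adds agentic steps.
    return f"[model answer grounded in]\n{prompt}"

question = "How is Grant 42 contributing to the maternal health goal?"
prompt = f"Context:\n{retrieve_context(INV.Grant42)}\n\nQuestion: {question}"
print(call_llm(prompt))
```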

 

Closing 

Now more than ever, the understanding and application of semantic layers are rapidly advancing. Organizations across industries are increasingly investing in solutions to enhance their knowledge and data management capabilities, driven in part by growing interest in benefiting from advanced AI capabilities.

The days of relying on a single, monolithic tool are behind us. Enterprises are increasingly investing in semantic technologies to not only work with the systems of today but also to future-proof their data infrastructure for the solutions of tomorrow. A semantic layer provides the standards that act as a universal “music sheet,” enabling data to be played and interpreted by any instrument, including emerging AI-driven tools. This approach ensures flexibility, reduces vendor lock-in, and empowers organizations to adapt and evolve without being constrained by legacy systems.

If you are looking to learn more about how organizations are approaching semantic layers at scale, or are seeking to unstick a stalled initiative, explore our case studies or contact us if you have specific questions.


What are the Different Types of Graphs? The Most Common Misconceptions and Understanding Their Applications https://enterprise-knowledge.com/what-are-the-different-types-of-graphs-the-most-common-misconceptions-and-understanding-their-applications/ Fri, 14 Mar 2025 19:16:19 +0000


Over 80% of enterprise data remains unstructured, and with the rise of artificial intelligence (AI), traditional relational databases are becoming less effective at capturing the richness of organizational knowledge assets, institutional knowledge, and interconnected data. In modern enterprise data solutions, graphs have become an essential topic and a growing solution for organizing and leveraging vast amounts of this disparate, diverse, but interconnected data. Especially for technical teams tasked with making AI “work” or provide value for their organization, graphs offer a programmatic way to explicitly model relationships between data entities, provide business context to tabular data, and extract knowledge from unstructured content – ultimately delivering what we call Knowledge Intelligence.

Despite their growing popularity, misconceptions about the scope and capabilities of different graph solutions persist, and many organizations continue to struggle to fully understand the diverse types of graphs available and their specific use cases.

As such, before investing in the modeling and implementation of a graph solution, it is important to understand the different types of graphs used within the enterprise, the distinct purposes they serve, and the specific business needs they support. There are various types of graphs that are built for purpose, but the most common categories are metadata graphs, knowledge graphs, and analytics graphs. Collectively, we refer to these as a “semantic network” and the core components of a semantic layer: they all represent interconnected entities and relationships, allowing for richer data interpretation and analysis through semantic metadata and contextual understanding – essentially, a network of information where the connections between data points hold significant meaning.

Below, I explore the most common types of graphs and the misconceptions that surround them, describe their respective use cases, and highlight how each can be applied to real-world business challenges.

  1. Knowledge Graph: Organizes and links information based on its business meaning and context. It represents organizational entities (e.g., people, products, places, things) and the relationships between them in a way that is understandable both to machines and humans. By integrating heterogeneous data from multiple touchpoints and systems into a unified knowledge model, it serves as a knowledge and semantic abstraction layer over enterprise data, where relationships between different datasets are explicitly defined using ontologies and standards (e.g., RDF, OWL). 
    1. When to Use: A knowledge or semantic graph is best suited for semantic understanding, contextualization, and enriched insights. It is a key solution in enterprise knowledge and data management as it allows organizations to capture, store, and retrieve tacit and explicit knowledge in a structured way and provide a holistic view of organization-specific domains such as customers, products, and services, ultimately supporting customer 360, sales, marketing, and knowledge and data management efforts. Additionally, enterprise knowledge graphs power AI capabilities such as natural language processing (NLP) and autonomous but explainable AI agents by providing context-aware knowledge that can be used for machine-specific tasks like entity recognition, question answering, and content and data categorization. RDF-based tools such as Graphwise GraphDB and Stardog enable the scale and efficiency of knowledge graph modeling and management. 
  2. Metadata Graph: Captures the structure and descriptive properties of data by tracking business, technical, and operational metadata attributes, such as process, ownership, security, and privacy information across an organization, providing a unified repository of metadata and a connected view of data assets. 
    1. When to Use: A metadata graph is best used for managing and tracking the metadata (data about data) across the enterprise. It helps ensure that data is properly classified, stored, governed, and accessible. As such, it’s ideal for data governance, lineage and data quality tracking, and metadata management. Building a metadata graph simplifies and streamlines data and metadata management practices and is pertinent for data discovery, governance, data cataloging, and lineage tracking use cases. Advanced metadata modeling and management solutions such as data catalogs and taxonomy/ontology management tools (e.g., data.world, TopQuadrant, Semaphore, etc.) facilitate the development and scale of metadata graphs.
  3. Analytics Graph: Supports analytics by connecting and modeling relationships between different data entities to uncover insights and identify trends, patterns, and correlations – enabling users to perform sophisticated queries on large, complex datasets with interrelationships that may not be easily captured in traditional tabular models.
    1. When to Use: Graph analytics supports advanced analytics use cases, including in-depth data exploration to uncover relationships, enabling data analytics teams to identify trends, anomalies, and correlations that may not be immediately apparent through standard reporting tools. It also plays a critical role in recommendation systems by analyzing user behavior, preferences, and interactions. We have seen the most success when analytics graphs are used to power investigative analysis and pattern detection use cases in industries such as e-commerce, media, manufacturing, and engineering, and in fraud detection for financial institutions (a minimal sketch of this kind of analysis follows this list). Tools like Neo4j – a widely adopted labeled property graph (LPG) database with built-in algorithms for finding communities/clusters in a graph – facilitate the storage and processing of such large-scale graph data, whereas visualization tools (like Linkurious or GraphAware Hume) help interpret and explore these complex relationships more intuitively.
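As referenced in the analytics graph item above, here is a minimal sketch of that kind of analysis, assuming a toy co-purchase dataset. It uses NetworkX to detect product communities and surface a simple “customers also bought” recommendation; the node names and relationships are invented for illustration.

```python
import networkx as nx
from networkx.algorithms import community

# Toy co-purchase graph: an edge means the two products were bought together
G = nx.Graph()
G.add_edges_from([
    ("laptop", "laptop_bag"), ("laptop", "mouse"), ("mouse", "mouse_pad"),
    ("espresso_machine", "coffee_beans"), ("coffee_beans", "grinder"),
    ("laptop_bag", "mouse"),
])

# Pattern detection: cluster products that are frequently purchased together
clusters = community.greedy_modularity_communities(G)
for i, cluster in enumerate(clusters, start=1):
    print(f"Cluster {i}: {sorted(cluster)}")

# Simple recommendation: neighbors of neighbors that aren't already co-purchased
def recommend(product, graph):
    direct = set(graph.neighbors(product))
    candidates = {n for d in direct for n in graph.neighbors(d)} - direct - {product}
    return sorted(candidates)

print("Customers who bought a laptop may also like:", recommend("laptop", G))
```

In production, the same traversal and clustering patterns would typically run inside a property graph database such as Neo4j rather than in application memory.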

Each type of graph, whether metadata, analytics, or knowledge/semantic, plays a critical role in enhancing the usability and accessibility of enterprise knowledge assets. One important consideration to keep in mind, especially with analytics graphs, is that they are often built without integration with a domain model or knowledge graph, which limits the shared business context behind the patterns they surface.

Understanding the distinct functions of these graph types enables organizations to effectively leverage their power across a wide range of applications. In many of our enterprise solutions, a combination of these graphs is employed to achieve more comprehensive outcomes. These graphs leverage semantic technologies to capture relationships, hierarchies, and context in a machine-readable format, providing a foundation for more intelligent data interactions. For instance, metadata and knowledge graphs rely on RDF (Resource Description Framework), which is essential for structuring, storing, and querying semantic data and for representing complex relationships between entities. This requires semantic web standard-compliant technologies that support RDF, such as triplestore graph databases (e.g., GraphDB or Stardog) and SPARQL endpoints to query RDF data.
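As a rough sketch of what querying such a triplestore over its SPARQL endpoint can look like, the snippet below uses the SPARQLWrapper library; the endpoint URL and the graph’s vocabulary are hypothetical placeholders, not a real service or standard ontology.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint exposed by a triplestore such as GraphDB or Stardog
sparql = SPARQLWrapper("https://graph.example.com/repositories/enterprise")

# Ask the knowledge graph which datasets describe a given business concept
sparql.setQuery("""
    PREFIX ex: <http://example.org/ontology/>
    SELECT ?dataset ?owner WHERE {
        ?dataset ex:describesConcept ex:Customer ;
                 ex:ownedBy ?owner .
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["dataset"]["value"], "owned by", row["owner"]["value"])
```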

Within a semantic layer, a combination of these graphs is used to organize and manage knowledge in a way that enables better querying, integration, and analysis across various systems and data sources. For example, at one of our financial institution clients, the risk and compliance department is one of the key users of the semantic layer: a metadata graph is used in a federated approach to track regulatory data and compliance requirements across 20+ systems and 8+ business lines and to help identify data quality issues in their knowledge graph. Meanwhile, the knowledge graph contextualizes this data by linking it to business operations and transactions – providing an end-to-end, connected view of their risk assessment process. The data and analytics team then utilizes an analytics graph to analyze historical data in support of their ML/fraud detection use cases, using a subset of information from the knowledge graph. This integrated semantic layer approach is helping the organization ensure both regulatory compliance and proactive risk management. It demonstrates a growing trend and best practice where an enterprise knowledge graph provides a holistic view of enterprise knowledge assets, metadata graphs enable data governance, and analytics graphs support advanced and potentially transient analytical use cases.

Understanding your specific business needs and implementing an effective graph solution is an iterative process. If you are just embarking on this path, or you have already started and need further assistance with approach, design and modeling, or proven practices, learn more from our case studies or reach out to us directly. 


Data Management and Architecture Trends for 2025 https://enterprise-knowledge.com/data-management-and-architecture-trends-for-2025/ Mon, 27 Jan 2025 19:21:11 +0000


Today, many organizational leaders are focused on AI readiness, and as the AI transformation accelerates, so do the trends that define how businesses find, store, secure, and leverage data and content.

The future of enterprise data management and architecture is evolving rapidly in some areas and returning to core principles in others. Through our engagements across industries, diverse projects and client use cases, and our vendor partnerships, we continue to have the opportunity to observe and address the dynamic challenges organizations face in managing and getting value out of their data. These interactions, coupled with input from our advisory board, give us a clear picture of the evolving landscape.

Drawing from these sources, I have identified the key trends in the data management and architecture space that we expect to see in 2025. Overall, these trends highlight how organizations are adapting to technological advancements while shifting towards a more holistic approach – focusing on people, processes, and standards – to maximize returns on their data investments.

1. Wider Adoption of a Business or Domain-Focused Data Strategy

The conventional approach to data management architecture often involved a monolithic architecture, with centralized data repositories and standardized reporting systems that served the entire organization. While this worked for basic reporting and operational needs, the last decade has proven that such a solution couldn’t keep pace with the complexities of modern businesses. In recent years, a more agile and dynamic approach has gained momentum (and adoption) – one that puts the business first. This shift is driven by the growing need not only to manage vast and diverse data but also to address the persistent challenge of minimizing data duplication while making data actionable, relevant, and directly aligned with the needs of specific business users.

A Business or Domain-Focused data strategy approach emphasizes decentralized data ownership and federated governance across various business domains (e.g., customer service, HR, sales, operations) – where each domain or department owns “fit-for-purpose” tools and the data within. As a result, data is organized and managed by the business function it supports, rather than by data type or format. 

 

This has been an emerging trend for a couple of years and part of the data mesh architecture. It is now gaining traction through the wider adoption of business-aligned data products or data domains in support of business processes – where data products empower individual business units to standardize and contextualize their data and derive actionable insights without heavy reliance on central IT, data teams, or enterprise-wide platforms. Why is this happening now? We are seeing two key drivers fueling the growing adoption of this strategy:

  1. The shift in focus from the physical data to descriptive metadata, and the advancement in the corresponding solutions that enable this approach (such as a semantic layer or data fabric architectures that connect domain-specific data platforms without the need for data duplication or migration); and 
  2. The rise of Artificial Intelligence (AI), specifically Named Entity Recognition (NER), Natural Language Processing (NLP), Large Language Models (LLMs) and Machine Learning (ML) – playing a pivotal role in augmenting organizational capabilities with automation.

As a result, we are starting to see the traditional method of relying on static reports and dashboards becoming obsolete. By integrating the federated capabilities and trends discussed below, we anticipate organizations moving beyond static reporting dashboards to the ability to “talk” to their data in a more dynamic and reliable way.

2. Semantic Layer Data Architecture

One of the key concepts that is significantly fueling the adoption of modern data stacks today is the “zero-copy” principle – building a data architecture that greatly reduces or eliminates the need to copy data from one system to another, thus allowing organizations to access and analyze data from multiple sources in real-time without duplicating it. This principle is changing how organizations manage and interact with their data.

In 2020, I first discussed Semantic Layer Architecture in a white paper I published, What is a Semantic Architecture and How Do I Build One?. In 2021, Gartner dubbed it “a data fabric/data mesh architecture and key to modernizing enterprise data management.” As the field continues to evolve, advancing technical capabilities are maturing semantic solutions. A semantic layer in data architecture takes a metadata-first approach and is becoming an essential component of modern data architectures, enabling organizations to simplify data access, improve consistency, and enhance data governance.

 

From an architect’s point of view, a semantic layer architecture adds significant value to modern data architecture and it is becoming a trend organizations are embracing – primarily because it provides the framework for addressing these traditional challenges for the data organization:

  • Business alignment through standardized metadata, translating business context and the relationships within raw data into metadata and ontologies, making it ‘machine reliable’; 
  • Simplified data access for business users through shared vocabulary (taxonomy);
  • Enhanced data connection and interoperability through virtualized access and a central, connected “view” that links data (through metadata) from various sources without requiring the physical movement of data; 
  • Improved data governance and security by enforcing the application of consistent business definitions, metrics, and data access rules to data; and
  • The flexibility to future-proof data architecture by decoupling the complexities of data storage from data presentation, which facilitates a zero-copy principle and ensures data remains where it is stored, without unnecessary duplication or replication. This helps organizations create a virtualized layer to address the challenges of working with diverse data from multiple sources while maintaining consistency and usability (a toy sketch of this business-term-driven, virtualized access follows this list).
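As referenced in the last bullet above, here is a toy sketch of the “shared vocabulary plus virtualized view” idea: a business term is mapped to fields in two source systems and resolved against both in place. The system names, field mappings, and data are invented for illustration; a real semantic layer would use governed glossaries and federated query engines rather than Python dictionaries.

```python
# Hypothetical business glossary: one business term mapped to fields in two source systems
GLOSSARY = {
    "customer_lifetime_value": {
        "crm_system": {"table": "accounts", "field": "clv_usd"},
        "billing_warehouse": {"table": "revenue_summary", "field": "lifetime_revenue"},
    }
}

# Stand-ins for live connections; in practice these would be federated queries, not copies
SOURCES = {
    "crm_system": {"accounts": [{"account_id": "A1", "clv_usd": 1200.0}]},
    "billing_warehouse": {"revenue_summary": [{"account_id": "A1", "lifetime_revenue": 1185.5}]},
}

def resolve(term: str):
    """Answer a business-term query by reading each mapped source in place (zero copy)."""
    mappings = GLOSSARY[term]
    results = {}
    for system, mapping in mappings.items():
        rows = SOURCES[system][mapping["table"]]
        results[system] = [row[mapping["field"]] for row in rows]
    return results

print(resolve("customer_lifetime_value"))
# {'crm_system': [1200.0], 'billing_warehouse': [1185.5]}
```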

This trend reflects a broader shift from legacy application/system-centric architecture to a more data-centric approach where data doesn’t lose its meaning and context when taken out of a spreadsheet, a document, SQL table, or a data platform – helping organizations unlock the true potential of their knowledge and data.

3. Consolidation & Rebundling of Data Platforms 

The enterprise data technology landscape has been pursuing the “modern data stack” strategy, characterized by a best-of-breed approach, where organizations adopt specialized tools from various vendors to fulfill different needs – be it data storage, analytics, data cataloging and discovery, or AI. However, with the growing complexity of managing multiple platforms and tighter budgets, organizations are facing mounting pressure to optimize.

Much like the rebundling we are seeing within the TV streaming industry, the landscape of data technologies is undergoing a significant shift of its own. This change is primarily driven by the need to simplify data management solutions in order to handle increasing organizational data complexity, optimize the costs associated with data storage and IT infrastructure across multiple vendors, and enhance the ability to experiment with and extract value from AI.

As a result, we are seeing the pace of technology bundling and mergers and acquisitions accelerating as large, well-established data platforms are acquiring smaller, specialized vendors and offering integrated, end-to-end solutions that aim to simplify data management. One good, well-publicized example of this is Salesforce’s recent bundling with and acquisition of various vendors to unveil the Unlimited Edition+ bundle, which provides access across Slack, Tableau, Sales Cloud, Service Cloud, Einstein AI, Data Cloud, and more, all in a single offering. In a recent article, my colleague further discussed the ongoing consolidation in the semantic data software industry, highlighting how the sector is increasingly recognizing the importance of semantics and how well-funded software companies are acquiring many independent vendors in this space to provide more comprehensive semantic layer solutions to their customers.

In 2025, we expect more acquisitions to be on the horizon. For CIOs and CDAOs looking to take advantage of this trend, there are important factors to consider. 

Limitations and Known Challenges:

  • Complexity in data migration: Migrating data from multiple platforms into a unified one is a resource-intensive process. Such transitions typically introduce disruptions to business operations, leading to downtime or performance issues during the shift. 
  • Data interoperability: The ability of different data systems, platforms, applications, and organizations to exchange, interpret, and use data seamlessly across various environments is paramount in today’s data landscape. This interoperability ensures data flows without losing its meaning, whether within an organization (e.g., between departments and various systems) or externally (e.g., regulatory reporting). Single-vendor technology bundles are often optimized for internal use, and they can limit data exchange with external systems or other vendors’ tools. This creates challenges and costs when trying to integrate non-vendor systems or migrate to new platforms. To mitigate these risks, it’s important for organizations to adopt solutions based on standardized data formats and protocols, invest in middleware and APIs for integration, and leverage cloud-based systems that support open standards and external system compatibility.
  • Potential vendor lock-in: By committing to a single platform, organizations often become overly dependent on a specific vendor’s technology for all their data needs. This limits the data organization’s flexibility, especially when new tools or platforms are required, forcing the use of a proprietary solution that may no longer meet evolving business needs. Relying on one platform also restricts data access and complicates integration with other systems, hindering the ability to gain holistic insights across your organizational data assets.

Benefit Areas:

  • Better control over security and compliance: As businesses integrate AI and other advanced technologies into their data stacks, having a consolidated security framework is particularly top of mind. Facilitating this simplification through a unified platform reduces the risks associated with managing security across multiple platforms and helps ensure better compliance with regulatory data security requirements.
  • Streamlined access and entitlement management: Consolidating the management of organizational access to data, roles, and permissions allows administrators to unify user access to data and content across applications within a suite – typically from a central dashboard, making it easier to enforce consistent access policies across all connected applications. This streamlined management helps prevent unauthorized access to critical data and ensures that only authorized users have the appropriate access to diverse types of data, including AI models, algorithms, and media, strengthening the organization’s overall security posture.
  • Simplified vendor management: Using a single vendor for a bundled suite reduces the administrative complexity of managing multiple vendors, which sometimes involves different support processes, protocols, and system compatibility issues. A unified data platform provides a more streamlined approach to handling data across systems and a single point of contact for support or troubleshooting.

When properly managed, bundling has its benefits; the focus should be on finding the balance, ensuring that data interoperability concerns are addressed while still leveraging the advantages of bundled solutions. Depending on the priority for your organization, this trend will be beneficial to watch (and adopt) for your streamlined data landscape and architecture.

4. Refocused Investments in Complementary AI Technologies (Beyond LLMs)

While LLMs have garnered significant attention in conversational AI and content generation, organizations are now recognizing that their data management challenges require more specialized, nuanced, and somewhat ‘traditional’ AI tools that address the gaps in explainability, precision, and the ability to align LLMs with organizational context and business rules. 

 

Despite the draw to AI’s potential, many organizations prioritize the reliability and trustworthiness of traditional knowledge assets. They also want to integrate human intelligence, ensuring that an organization’s collective knowledge – including people’s experience and expertise – is fully captured. We refer to this as Knowledge Intelligence (KI) rather than just AI, to indicate the integration of tacit knowledge and human intelligence with AI, thereby capturing the deepest and most valuable information within an organization.

As such, organizations have started reinvesting in Natural Language Processing (NLP), Named Entity Recognition (NER), and Machine Learning (ML) capabilities, realizing that these complementary AI tools are just as essential in tackling their complex enterprise data and knowledge management (KM) use cases. Specifically, we see this trend reemerging as organizations embrace advancements in AI capabilities to enable the following key priorities for the enterprise (a brief knowledge-extraction sketch follows the list).

  • Expert Knowledge Capture & Transfer: Programmatically encoding expert knowledge and business context in structured data & AI;
  • Knowledge Extraction: Federated connection and aggregation of organizational knowledge assets (unstructured, structured, and semi-structured sources) for knowledge extraction; and
  • Business Context Embedding: Providing standardized meaning and context to data and all knowledge assets in a machine-readable format.
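As a minimal sketch of the knowledge-extraction capability referenced above, the snippet below runs off-the-shelf named entity recognition with spaCy over a lessons-learned note. The text is invented, the small English model (en_core_web_sm) is assumed to be installed, and a production pipeline would map the extracted entities to the organization’s taxonomy and ontology rather than stop at raw labels.

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

lesson_learned = (
    "During the Q3 rollout in Houston, Acme Corp's field team found that the "
    "legacy SCADA integration delayed commissioning by two weeks."
)

doc = nlp(lesson_learned)
for ent in doc.ents:
    # Print each recognized entity with its predicted label (ORG, GPE, DATE, etc.)
    print(f"{ent.text!r:35} -> {ent.label_}")

# A next step would be linking these entities (people, places, orgs, dates)
# to taxonomy concepts so the extracted knowledge carries business context.
```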

We see this renewed focus on holistic AI technologies as more than just a passing shift; it marks a pivotal trend in the world of enterprise data management and a strategic move toward more reliable, intelligent, and efficient information and data management.

For organizations looking to enhance their ability to extract value from experts and diverse data and content assets, the trend in comprehensive AI capabilities facilitates this integration and ensures that AI can operate not just as a tool, but as an intelligent organizational partner that understands the unique nuances of an organization – ultimately delivering knowledge and intelligence to the data organization.

5. A Unified Approach to Data and Content Management: Data & Analytics Teams Meet Unstructured Content & Knowledge Management

One of the most subtle yet significant changes we have been seeing over the last 2-3 years is the blending of traditionally siloed data management functions. In particular, the boundary between data and knowledge management teams is increasingly dissolving, with data and analytics professionals now addressing challenges that were once primarily the domain of KM. This shift is largely due to the growing recognition that organizations need a more cohesive approach to handling both structured and unstructured content.

Just a few years back, data management was largely a function of structured data, confined to databases and well-defined formats and handled by data engineers, data analysts, and data governance officers. Knowledge and content management, on the other hand, dealt primarily with unstructured content such as documents, emails, and multimedia, managed by different teams including knowledge officers and document management specialists. 

However, in 2025, as organizations continue to strive for a more flexible approach to benefit from their overall organizational knowledge assets, we are witnessing a convergence where data teams are now actively engaged in managing unstructured knowledge. With advancements in GenAI, machine learning, NER, and NLP technologies, data and analytics teams are now expected to not only manage and analyze structured data but also tackle the complexities of unstructured content – ranging from documents, emails, text, and social media posts to contracts and video files. 

By bridging the gap between data teams and business-oriented KM teams, organizations are able to better connect technical initiatives to actual use cases for employees, customers, and their stakeholders. For example, we are seeing successful adoption of this trend with the data & analytics teams at a large global retailer, where we are supporting the content and information management teams in enabling the data teams with a knowledge and semantic framework to aggregate and connect traditionally siloed data and unstructured content. The KM team is doing this by providing knowledge models and semantic standards such as metadata, business glossaries, and taxonomy/ontology (as part of a semantic layer architecture) – explicitly providing business context for data, categorizing and labeling unstructured content (as sketched below), and supplying the business logic and context for data used in their AI algorithms.
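As a simple, hedged sketch of what taxonomy-driven labeling of unstructured content can look like at its most basic, the snippet below tags documents by matching taxonomy terms and their synonyms. The taxonomy slice and documents are invented, and real implementations typically rely on NLP-based auto-classification rather than exact string matching.

```python
# Hypothetical slice of a retail taxonomy: preferred label -> synonyms
TAXONOMY = {
    "Supply Chain": ["logistics", "fulfillment", "warehouse"],
    "Customer Experience": ["customer service", "loyalty", "returns"],
}

documents = {
    "memo_014": "Warehouse delays are affecting fulfillment times in the northeast region.",
    "memo_027": "The loyalty program reduced returns and improved customer service scores.",
}

def tag(text: str) -> list[str]:
    """Return taxonomy labels whose preferred term or synonyms appear in the text."""
    lowered = text.lower()
    return [
        label
        for label, synonyms in TAXONOMY.items()
        if label.lower() in lowered or any(s in lowered for s in synonyms)
    ]

for doc_id, body in documents.items():
    print(doc_id, "->", tag(body))
# memo_014 -> ['Supply Chain'] ; memo_027 -> ['Customer Experience']
```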

In 2025, we expect this trend to become more common as organizations look to enable cross-functional collaboration, with traditional data and IM offices starting to converge and professionals from diverse backgrounds working together to manage both structured and unstructured data.

6. Shift in Organizational Roles: From Governance to Enablement

This trend reflects how the previously mentioned shifts are becoming a reality within enterprises. As organizations embrace a more integrated approach to connecting overall organizational knowledge assets, the roles within the organization are also shifting. Traditionally, data governance teams, officers, and compliance specialists have been the gatekeepers of data quality, privacy, and security. While these roles remain crucial, the focus is increasingly shifting toward enablement rather than control.

Additionally, knowledge managers are steadily growing beyond their traditional role of providing the framework for sharing, applying, and managing the knowledge and information of an organization. They now also serve as providers of business context to data teams and to advancements in Artificial Intelligence (AI). This heightened visibility for KM has pushed the industry to identify more optimized ways to organize teams and to measure and convey their value to organizational leaders. On top of that, AI has been fueling the democratization of knowledge and data, leading to a growing recognition of the interdependence between data, information, and knowledge management teams.

This is what is driving the evolution of roles within KM and data from governance and control to enablement. These roles are moving away from strict oversight and regulation and toward fostering collaboration, access, and self-sufficiency across the organization. Data officers and KM teams will continue to play a critical role in setting the standards for data quality, privacy, and security. However, as their roles shift from governance to enablement, these teams will increasingly focus on establishing frameworks that support transparency, collaboration, and compliance across a more data-centric enterprise – making self-service analytics tools available so that even non-technical staff can analyze data and generate insights independently.

As we enter 2025, the landscape of enterprise data management is being reshaped by shifts in strategy, architecture, platform focus, and the convergence of data and knowledge management teams. These changes reflect how organizations are moving from siloed approaches to a more connected, enablement-driven model. By leveraging a combination of AI-powered tools, self-service capabilities, and evolving governance practices, organizations are unlocking the full value of their data and knowledge assets. This transformation will enable faster, more informed decision-making, helping companies stay ahead in an increasingly competitive and rapidly evolving business environment.


How do these trends translate to your specific data organization and landscape? Is your organization embracing these trends? Read more or contact us to learn more and grow your data organization.


Why Graph Implementations Fail (Early Signs & Successes) https://enterprise-knowledge.com/why-graph-implementations-fail-early-signs-successes/ Thu, 09 Jan 2025 15:35:57 +0000


Organizations continue to invest heavily in efforts to unify institutional knowledge and data from multiple sources. This typically involves copying data between systems or consolidating it into a new physical location such as data lakes, warehouses, and data marts. With few exceptions, these efforts have yet to deliver the connections and context required to address complex organizational questions and deliver usable insights. Moreover, the rise of Generative AI and Large Language Models (LLMs) continues to increase the need to ground AI models in factual, enterprise context. The result has been a renewed interest in standard knowledge management (KM) and information management (IM) principles.

Over the last decade, enterprise knowledge graphs have been rising to the challenge, playing a transformational role in providing enterprise 360 views and content and product personalization, improving data quality and governance, and making organizational knowledge available in a machine-readable format. Graphs offer a more intuitive, connected view of organizational data entities as they shift the focus from the physical data itself to the context, meaning, and relationships between data – providing a connected representation of an organization’s knowledge and data domains without the need to make copies or incur expensive migrations – and, most importantly today, delivering Knowledge Intelligence to enterprise AI.

While this growing interest in graph solutions has been long anticipated and is certainly welcome, it is also yielding some stalled implementations, unmet expectations, and, in some cases, complete initiative abandonment. Understanding that every organization has its own unique priorities and challenges, there can be various reasons why an investment in graph solutions did not yield the desired results. In this article, I draw on my observations and experience with industry lessons learned to pinpoint the most common culprits topping the list. The signs are often subtle but can be identified if you know where to look. These indicators typically emerge as misalignments between technology, processes, and the organization’s understanding of data relationships. Below are the top tell-tale signs that suggest a trajectory of failure:

1. Treated as Traditional, Application-Focused Efforts (As Technology/Software-Centric Programs)

If I take one data point from my observations of the organizations we work with, the biggest hurdle to adopting graph solutions isn’t whether the approach itself works – many top companies have already shown it does. The real challenge lies in the mindset and historical approach that organizations have developed over many years when it comes to managing information and technology programs. The complex questions we are asking of our content and data today are no longer answered by the mental models and legacy solutions organizations have been working with for the last four or five decades.

Traditional applications and databases, like relational or flat file systems, are built to handle structured, tabular data, not complex, interwoven relationships. The real power of graphs lies in their ability to define organizational entities and data objects (people, customers, products, places, etc.) independent of the technology they are stored in. Graphs are optimized to handle highly interconnected use cases (such as networks of related business entities, supply chains, and recommendation systems), which traditional systems cannot represent efficiently. Adopting such a framework requires a shift from a legacy application/system-centric to a data-centric approach where data doesn’t lose its meaning and context when taken out of a spreadsheet, a document, or a SQL table.

Sticking with such traditional models, and relying on legacy systems and implementation approaches that don’t support relationship modeling to make graph models work, results in an incomplete or superficial understanding of the data – leading to isolated or incorrect decisions, performance bottlenecks, and ultimately a lack of trust and failed efforts. Organizations that do not recognize that graph solutions often represent a significant shift in how data is viewed and used within an organization are the first to abandon the solution or incur significant technical debt.

Early Signs of Failure

  • Implementation focuses excessively on selecting the best and latest graph database technologies without the required focus on data modeling standards. In such scenarios, the graph technology is deployed without a clear connection to key business goals or critical data outcomes. This ultimately results in misalignment between business objectives and graph implementation – often leading to vendor lock-in. 
  • Graph initiatives are treated as isolated IT projects where only highly technical users get a lot out of the solution. This results in little cross-functional involvement from departments outside of the data or IT teams (e.g., marketing, customer service, product development), where stakeholders and subject matter experts (SMEs) are not engaged or cannot easily access or contribute to the solution throughout modeling/validation and analysis – leading the intended end users to abandon the solution altogether.
  • Lack of organizational ownership of data model quality and standards. Engineering teams often rely on traditional relational models, creating custom and complex relationships. However, no one is specifically responsible for ensuring the consistency, quality, or structure of these data models. This leads to problems such as inconsistent data formats, missing information, or incomplete relationships within the graph, which ultimately hinders the scalability and performance needed to effectively support the organization.

What Success Looks Like

Graph models rely on high-quality, well-structured business context and data to create meaningful relationships. As such, a data-centric approach requires a holistic view of organizational knowledge and data. If the initiative remains isolated, the organization will miss opportunities to fully leverage the relationships within its data across functions. To tackle this challenge, one of the largest global financial institutions is investing in a semantic layer and a connected graph model to enable comprehensive and complex risk management across the firm. As a heavily regulated financial services firm, it requires accurate, timely, and detailed data and information for its risk management processes to work. By shifting risk operations from application-centric to data-centric, the firm is investing in standardized terminology and relationship structures (taxonomies, ontologies, and graph analytics solutions) that foster consistency, accuracy, and connected data usage across the organization’s 20+ legacy risk management systems. These consumer-grade semantic capabilities are in production environments aiding in business processes, where the knowledge graph connects multiple applications and provides a centralized, interconnected view of risk and related data such as policies, controls, and regulations (without the need for data migration), facilitating better analysis and decision-making. These advancements are empowering the firm to proactively identify, assess, and mitigate risks, improve regulatory reporting, and foster a more data-driven culture across the firm.

2. Limited Understanding of the Cost-Benefit Equation 

The initial cost of discovering and implementing graph solutions to support early use cases can appear high due to the upfront work required – such as the preliminary setup, data wrangling, aggregation, and fine-tuning required to contextualize and connect what is otherwise siloed and disparate data. On top of this, the traditional mindset of ‘deploy a cutting-edge application once and you’re done’ can make these initial challenges feel even more cumbersome. This is especially true for executives who may not fully understand the shift from focusing on applications to investing in data-driven approaches, which can provide long-term, compounding benefits. This misunderstanding often leads to the premature abandonment of graph projects, causing organizations to miss out on their full potential far too early. Here’s a common scenario we often encounter when walking into stalled graph efforts:  

The leadership team or an executive champion leading the innovation arm of a large corporation decides to experiment with building data models and graph solutions to enhance product recommendations and improve data supply chain visibility. The data science team, excited by the possibilities, sets up a pilot project, hoping to leverage the graph’s ability to uncover non-obvious (inexplicit) relationships between products, customers, and inventory. Significant initial costs arise as they invest in graph databases, reallocate resources, and integrate data. Executives grow concerned over mounting costs and the lack of immediate, measurable results. The data science team struggles to show quick value as they uncover data quality issues, lack access to stakeholders/domain experts or to the right type of knowledge needed to provide a holistic view, and likely lack graph modeling expertise. Faced with escalating costs and no immediate payoff, some executives push to pull the plug on the initiative.

Early Signs of Failure:

  • There are no business cases or KPIs tied to a graph initiative, or success measures are centered around short-term ROI expectations such as immediate performance improvements. Graph databases are typically more valuable over time as they uncover deep, complex relationships and generate insights that may not be immediately obvious.
  • Graph development teams are not showing incremental value, leading to misalignment with business goals – and ultimately to stakeholders losing interest or becoming risk-averse toward the solution. 
  • Overemphasis on up-front technical investment where initial focus is only on costs related to software, talent, and infrastructure – overlooking data complexity and stakeholder engagement challenges, and without recognizing the economies of scale that graph technologies provide once they are up and running.
  • The application of graphs to non-optimal use cases (e.g., a single application rather than interconnected data) – leading to the project team and executives not seeing the impact and overarching business outcomes that are pertinent (e.g., providing AI with organizational knowledge) and that affect the organization’s bottom line.

What Success Looks Like

A sizable number of large-scale graph transformation efforts have proven that once the foundational model is in place, the marginal cost of adding new data and making graph-based queries drops significantly. For example, for a multinational pharmaceutical company, this was measured as a six-figure increase in revenue within the first quarter of a production release, resulting from the data quality and insights gained within their drug development process. Internal end users are able to uncover the answers to critical business questions, and the graph data model is poised to become a shareable industry standard. Organizations that invested early and are realizing the transformational value of graph solutions today understand this compounding nature of graph-powered insights and have made showing short-term, incremental value part of the success factors for their initial pilots to maintain buy-in and momentum.

 

3. Skillset Misalignment and Resistance to Change

The success of any advanced solution heavily depends on the skills and training of the teams that will be asked to implement and operationalize it. The reality is that the majority of data and IT professionals today, including database administrators, data analysts, and data/software engineers, are often trained in relational databases, and many may have limited exposure to graph theory, graph databases, graph modeling techniques, and graph query languages. 

This challenge is compounded by the limited availability of effective training resources that are specifically tailored to organizational needs, particularly considering the complexity of enterprise infrastructure (as opposed to research or academia). As a result, graph technologies have gained a reputation for having a steep learning curve, particularly within the developer community. This is because many programming languages do not natively support graphs or graph algorithms in a way that seamlessly integrates with traditional engineering workflows. 

Moreover, organizations that adopt graph technologies and databases (such as RDF-based GraphDB, Stardog, Amazon Neptune, or property-graph technologies like Neo4j) often do so without ensuring their teams receive proper training on the specific tools and platforms needed for successful scaling. This lack of preparation frequently limits the team’s ability to design effective graph data models, engineer the necessary content or pipelines for graph consumption, and integrate graph solutions with existing systems. As a result, organizations face slow development cycles, inefficient or incorrect graph implementations, performance issues, and poor scalability – all of which can lead to resistance, pushback, and ultimately the abandonment of the solution.

Early Signs of Failure:

  • Missing data schema or inappropriate data structures, such as a lack of ontology (a recurring theme, especially for property graphs), incorrect edge direction, or missing connections where important relationships between nodes are not represented in the graph – leading to incomplete information, flawed analysis, and governance overhead (see the validation sketch after this list). 
  • The project team doesn’t have the right interdisciplinary team representation. The team tasked with supporting graph initiatives lacks the diversity in expertise, such as domain experts, knowledge engineers, content/system owners, product owners, etc.
  • Inability to integrate graph solutions with existing systems as a result of inefficient query design. Queries that are not optimized to leverage the structure of the graph result in slow execution times and inefficient data retrieval, and data copied into the graph results in redundant data storage – exacerbating the complexity and inefficiency of managing overall data quality and integrity. 
  • Scalability limitations. As the size of the graph increases, the processing time and memory requirements become substantial, making it difficult to perform operations on large datasets efficiently.
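As referenced in the first item of this list, here is a rough illustration of how an explicit schema can catch missing connections before they undermine trust in the graph. The sketch validates a toy RDF graph against a SHACL shape using rdflib and pySHACL; the vocabulary and the "every product needs a supplier" constraint are invented for illustration.

```python
from rdflib import Graph
from pyshacl import validate

# Toy data: one product is missing its supplier relationship
data = Graph().parse(format="turtle", data="""
    @prefix ex: <http://example.org/> .
    ex:WidgetA a ex:Product ; ex:suppliedBy ex:AcmeSupply .
    ex:WidgetB a ex:Product .
""")

# Minimal schema/shape: every Product must have at least one supplier
shapes = Graph().parse(format="turtle", data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/> .
    ex:ProductShape a sh:NodeShape ;
        sh:targetClass ex:Product ;
        sh:property [ sh:path ex:suppliedBy ; sh:minCount 1 ] .
""")

conforms, _, report = validate(data, shacl_graph=shapes)
print("Conforms:", conforms)   # False: WidgetB has no ex:suppliedBy edge
print(report)
```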

What Success Looks Like

By addressing the skills gap early, planning for the right team composition, aligning teams around a shared understanding of the value of graph solutions, and investing in comprehensive training ecosystems, organizations can avoid the common pitfalls that lead to missed opportunities, abandonment, or failure of a graph initiative. A leading global retail chain, for example, invested in graph solutions to aid their data and analytics teams in enhancing reporting. We worked with their data engineering teams to conduct a skills gap analysis and develop tailored training workshops and curricula for their various workstreams. The approach consisted of a five-module intensive training taught by our ontology and graph experts, plus a learning ecosystem that supported various learning formats: persona/role-based training, practice labs, use-case-based hands-on training, Ask Me Anything (AMA) sessions, industry talks, on-demand job aids and tutorials, and train-the-trainer modules. 

Employees were further provided with programmatic approaches to tag their knowledge and data more effectively through the creation of a standard set of metadata, tags, and data cataloging processes, and they received training on proper data tagging for an easier search experience. As a result, the chain acquired best practices for organizing and creating knowledge and data models for their data and analytics transformation efforts, so that less time and productivity is wasted by employees searching for solutions in siloed locations. This approach significantly minimized overlapping steps and reduced the time it takes data teams to develop a report from six weeks to a matter of days.

Closing 

Graph implementations require both an upfront investment and a long-term vision. Leaders who recognize this are more likely to support the project through its early challenges, ensuring the organization eventually benefits fully. A key to success is having a champion who understands the entire value of the solution, can drive the shift to a data-centric mindset, and ensures that roles, systems, processes, and culture align with the power of connected data. With the right approach, graph technologies unlock the power of organizational knowledge and intelligence in the age of AI.

If your project shows any of the early signs of failure listed in this article, it behooves you to pause the project and revisit your approach. Organizations that embark on a graph initiative without understanding or planning for the foundations discussed here frequently end up with stalled or failed projects that never provide the true value a graph project can deliver. Are you looking to get started and learn more about how other organizations are approaching graphs at scale, or are you seeking to unstick a stalled initiative? Read more from our case studies or contact us if you have specific questions.

