semantic data layer Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/semantic-data-layer/

The Top 5 Reasons for a Semantic Layer
https://enterprise-knowledge.com/the-top-5-reasons-for-a-semantic-layer/ | Wed, 14 Feb 2024

The post The Top 5 Reasons for a Semantic Layer appeared first on Enterprise Knowledge.

Implementing a Semantic Layer has become a strategic priority for many of our most advanced data clients. A Semantic Layer connects all organizational knowledge assets, including content items (files, videos, media, etc.), via a well-defined and standardized semantic framework. If you are unfamiliar with Semantic Layers, read Lulit Tesfaye's blog, What is a Semantic Layer?, which provides a great explanation of the Semantic Layer and how it can be implemented. There are many good reasons for organizations to implement a Semantic Layer; my top five are below.

Improved Findability and Confidence in Data

Data continues to grow at an alarming rate. Leaders want their organizations to be data-driven, but their teams need to be able to find the data they require and have confidence in it. A Semantic Layer helps with both of these issues. It uses a graph database and the metadata from your data catalog to offer a best-in-class search that returns data in the context of the business need. For example, if you are looking for all the data sets containing information about the average purchase price of a product, a graph-based search would return a result explaining what the purchase price is and then show all of the data sets that contain purchase transactions with price information. Many of our retail clients have multiple data feeds from different purchasing systems; showing all of this information together helps ensure that none of the feeds is missed.
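The idea behind this kind of search can be illustrated with a minimal, plain-Python sketch: business concepts carry definitions, and datasets are linked to the concepts they contain, so one query returns both the explanation and every matching source. All names and relationships here are invented for illustration; a real Semantic Layer would store these as triples in a graph database and query them with SPARQL.

```python
# A tiny "graph" as subject-predicate-object triples. The vocabulary
# (containsConcept, definition) is hypothetical, not a standard.
triples = [
    ("PurchasePrice", "type", "BusinessConcept"),
    ("PurchasePrice", "definition",
     "The price a customer paid for a product at checkout."),
    ("posTransactions", "containsConcept", "PurchasePrice"),
    ("ecommerceOrders", "containsConcept", "PurchasePrice"),
]

def search(concept):
    """Return the concept's definition plus every dataset that carries it."""
    definition = next(o for s, p, o in triples
                      if s == concept and p == "definition")
    datasets = [s for s, p, o in triples
                if p == "containsConcept" and o == concept]
    return definition, datasets

definition, datasets = search("PurchasePrice")
print(definition)
for ds in datasets:
    print("-", ds)
```

Because both purchasing feeds are linked to the same concept, neither can be missed by a search that starts from the business term rather than from a table name.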

The information returned in this type of graph-based custom search is not limited to data sets. We have one client who uses the graph to capture the relationships between a dashboard, the dashboard's objects, and the data tables that populate each component. Their graph-based search returns not only data sets, but also the dashboards and dashboard objects that display results. Their IT staff use this to develop new dashboards with the correct data sets, and their data scientists use it to prioritize the data sets that power the dashboards they already rely on.

Google has been using graph search for years. Now, this same technology is available in our data environments. 

Enabling AI for Data

AI and ChatGPT are all over the news these days, and Generative AI is a budget priority for every company executive I speak with. One of the most exciting use cases for Generative AI is the databot. Organizations that implement databots give their business users easy access to the metrics they need to do their jobs. Rather than trying to build dashboards that anticipate users' needs, databots allow business users to ask questions of any level of complexity and get answers without knowing or understanding anything about the data behind the result. Software companies in the Semantic Layer space are already demonstrating how business users can ask their data complicated natural language questions and get answers back.

Databots require integration with a Generative AI tool, typically a Large Language Model (LLM). This integration will not work well without a Semantic Layer. The Semantic Layer, specifically the metadata, taxonomy, and graph framework, provides the context LLM tools need to properly answer data-specific questions with organizational context. The importance of the Semantic Layer has been demonstrated in multiple studies. In one, Juan Sequeda, Dean Allemang, and Bryon Jacob of data.world produced a benchmark showing how knowledge graphs affect the accuracy of question answering against SQL databases. You can see the results of this study here. Their benchmark evaluated how LLMs answered both high-complexity and low-complexity questions against both high- and low-complexity schemas. The results are below.

  • Low Complexity/Low Schema, knowledge graph accuracy was 71.1% while the SQL accuracy was 25.5%
  • High Complexity/Low Schema, knowledge graph accuracy was 66.9% while the SQL accuracy was 37.4%
  • Low Complexity/High Schema, knowledge graph accuracy was 35.7% while the SQL accuracy was 0%
  • High Complexity/High Schema, knowledge graph accuracy was 38.7% while the SQL accuracy was 0%

As these stats show, organizations implementing a Semantic Layer are better equipped to integrate with an LLM. One of the most striking results is that schema complexity matters far less than the availability of a knowledge graph: even on the high-complexity schemas, where SQL-only accuracy dropped to 0%, the knowledge graph still produced correct answers more than a third of the time. If your organization is looking to integrate LLMs into your data environment, a Semantic Layer is critical.
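The mechanism behind these results can be sketched in a few lines: rather than handing the model a raw SQL schema, the Semantic Layer supplies business-level facts and definitions pulled from the graph, and the question is answered against those. The triples, prompt format, and metric name below are all hypothetical, and the actual LLM call is omitted; this only shows how graph context gets assembled.

```python
# Illustrative graph facts about one metric. In practice these would be
# retrieved from the knowledge graph for the concepts the question mentions.
graph_context = [
    ("avg_purchase_price", "isDefinedAs",
     "mean of transaction price across all purchasing feeds"),
    ("avg_purchase_price", "computedFrom", "posTransactions.price"),
    ("avg_purchase_price", "computedFrom", "ecommerceOrders.price"),
]

def build_prompt(question, context):
    """Ground a natural-language question in facts from the graph."""
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in context)
    return (
        "Answer using only these facts from the knowledge graph:\n"
        f"{facts}\n\nQuestion: {question}"
    )

prompt = build_prompt("What is the average purchase price?", graph_context)
print(prompt)
```

The model no longer has to guess which of several price columns is meant; the graph has already resolved the business term to its sources, which is the gap the benchmark measured.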

Reporting Across Data Domains

The Semantic Layer uses a semantic framework (metadata, taxonomies, ontologies, and knowledge graphs) to map data and related data tools to the entities that business users care about. This approach creates a flexible and more reliable way to manage data across different domains, and it gives business users greater access to the information they need in a format that makes sense.

Reporting on metrics that cross data domains or systems continues to be challenging for large enterprises. Historically, these organizations have addressed this through complex ETL processes and rigid dashboards that attempt to align and aggregate the information for business users. This approach has several problems, including:

  • Slow or problematic ETL processes that erode trust in the information,
  • Over-reliance on a data expert to understand how the data comes together,
  • Problems with changing data over time, and 
  • Lack of flexibility to answer new questions.

Implementing a Semantic Layer addresses each of these issues. Taxonomies provide a consistent way to categorize data across domains. The taxonomies are implemented as metadata in the data catalogs so business users and data owners can quickly find and align information across their current sources. The Knowledge Graph portion of the Semantic Layer maps data sets and data elements to business objects. These maps can be used to pull information back dynamically without the need for ETL processes. When an ETL process is required for performance purposes, how the data is related is defined in the graph and not in the head of your data developers. ETL routines can be developed against the knowledge graph rather than in code. As the data changes, the map can be updated so that the processes that use that data reflect the new changes immediately. 
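The mapping idea can be sketched in plain Python: each source-specific field is mapped to a shared business attribute, so a cross-domain report is assembled dynamically from the map rather than through a hard-coded ETL job. All system names, field names, and values below are invented for illustration.

```python
# Graph-style mapping: (source system, source field) -> business attribute.
# In a real Semantic Layer this lives in the knowledge graph, not in code.
field_map = {
    ("pos_system", "txn_amt"): "saleAmount",
    ("ecommerce", "order_total"): "saleAmount",
    ("pos_system", "store_id"): "storeId",
}

# Rows arriving from two differently shaped purchasing systems.
records = [
    ("pos_system", {"txn_amt": 19.99, "store_id": "S1"}),
    ("ecommerce", {"order_total": 42.50}),
]

def normalize(source, row):
    """Rename source-specific fields to the shared business vocabulary."""
    return {field_map[(source, k)]: v
            for k, v in row.items() if (source, k) in field_map}

unified = [normalize(src, row) for src, row in records]
total_sales = sum(r["saleAmount"] for r in unified)
print(unified)
print(round(total_sales, 2))  # 62.49
```

When a source field changes, only the map entry changes; every report built on the business vocabulary picks up the correction immediately, which is the flexibility described above.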

We developed a Semantic Layer for a retail client. Once it was in place, they could report on sales transactions from six different point-of-sale systems (each with a different format), replacing what had previously required time-consuming and complicated ETL processes. They were also able to expand their reporting to show links between third-party sales, store sales, and supply chain issues in a single dashboard. This was impossible before the Semantic Layer was in place, because they were overly reliant upon a small set of developers and on dashboards that only addressed one domain at a time. Instead of constantly building and maintaining complex ETL routines that move data around, our client maps and defines the relationships in the graph and updates the graph or their metadata when changes occur. Business users are seeing more information than they ever have, and they have greater trust in what they are seeing.

Improved Data Governance

Data governance is critical to providing business users with data they can confidently use for decision-making. The velocity and variety of today's data environments make controlling and managing that data seem almost impossible. Semantic Layer tools are built to address the scale and complexity organizations face. Data catalogs use metadata and built-in workflows to allow organizations to manage similar data sets in similar ways. They also provide data lineage information so that users know how data is used and what has been done to the data files over time. Metadata-driven data catalogs give organizations a way to align similar data sets and a framework for managing them collectively rather than individually.

In addition to data catalogs, ontologies and knowledge graphs can aid in enterprise data governance. Ontologies identify data elements representing the same thing from a business standpoint, even if they come from different source locations or have different field names. Tying similar data elements together in a machine-readable way allows the system to enforce a consistent set of rules automatically. For example, at a large financial institution we worked with, a knowledge graph linked all fields that represented the open date for an account. The customer was a bank with investment accounts, bank accounts, and credit card accounts. Because the ontology linked these fields as account open dates, we could implement constraints ensuring that these fields were always filled out, used a standard date format, and contained a date in a reasonable timeframe. The ability to automate constraints across many related fields allows data administrators to scale their processes even as the data they collect continues to grow.
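The banking example above can be sketched as a shared validator: because the ontology links every "account open date" field, one set of constraints covers all of them. (In practice this is often expressed declaratively, e.g. as SHACL shapes, rather than in application code.) The field names, date bounds, and records below are illustrative assumptions.

```python
from datetime import date

# Ontology-derived linkage: each account type's own name for the same
# business concept, "account open date". Names are invented for illustration.
open_date_fields = {
    "investment_acct": "opened_on",
    "bank_acct": "open_dt",
    "credit_card": "acct_open_date",
}

def violations(account_type, record):
    """Apply the shared open-date constraints to whichever field is linked."""
    field = open_date_fields[account_type]
    value = record.get(field)
    problems = []
    if value is None:
        problems.append(f"{field} is missing")
    elif not isinstance(value, date):
        problems.append(f"{field} is not a standard date")
    elif not (date(1950, 1, 1) <= value <= date.today()):
        problems.append(f"{field} is outside a reasonable timeframe")
    return problems

print(violations("bank_acct", {"open_dt": date(2015, 6, 1)}))  # []
print(violations("credit_card", {}))  # ['acct_open_date is missing']
```

Adding a fourth account system only requires adding one entry to the linkage; the constraints themselves never change, which is what lets governance scale.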

Stronger Security

The continual growth of data has made controlling access to data sets (also known as entitlements) more challenging than ever. Sensitive data, like HR data, must be restricted to those with a genuine need to know. Licensed data can carry contractual limitations on the number of users and may not even reside in your organization's data lake. Often, data is combined from multiple sources; what are the security rules for those new data combinations? The number of permutations and rules governing who can see what across an organization's data landscape is daunting.

The Semantic Layer improves the way data entitlements are managed using metadata. The metadata can define the source of the data (for licensed data) as well as the type of data, so that sensitive data can be more easily found and flagged. Data administrators can use a data catalog to find licensed data and ensure proper access rules are in place. They can also find data about a sensitive topic, like salaries, and ensure that the proper security measures are in place. Data lineage, a common feature in catalogs, can also help identify when a newly combined data set needs to be secured and who should see it. Catalogs have gone a long way toward solving these security problems, but on their own they are insufficient for the growing security challenges.

Knowledge graphs augment the information about data stored in data catalogs to provide greater insight and inference of data entitlements. Graphs map relationships across data and those relationships can be used to identify related data sets that need similar security rules. Because the graph’s relationships are machine-readable, implementation of many of these security rules can be automated. Graphs can also identify how and where data sets are used to identify potential security mismatches. For example, a graph can identify situations where data sets have different security requirements than the dashboards that display them. These situations can be automatically flagged and exposed to data administrators who can proactively align the security between the data and the dashboard.
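The dashboard/dataset mismatch check described above reduces to a comparison over graph relationships: if a dashboard's audience is broader than the sensitivity of a dataset feeding it, flag it for a data administrator. The classification levels, dashboards, and datasets below are illustrative assumptions, not a real scheme.

```python
# Graph-derived facts: dataset sensitivity, dashboard feeds, and audience.
dataset_class = {"salaries": "restricted", "store_sales": "internal"}
dashboard_feeds = {
    "hr_overview": ["salaries"],
    "sales_summary": ["store_sales"],
}
dashboard_audience = {"hr_overview": "internal", "sales_summary": "internal"}

# Higher rank means more tightly controlled.
RANK = {"public": 0, "internal": 1, "restricted": 2}

def mismatches():
    """Flag dashboards whose audience is wider than a source dataset allows."""
    flags = []
    for dash, feeds in dashboard_feeds.items():
        for ds in feeds:
            if RANK[dataset_class[ds]] > RANK[dashboard_audience[dash]]:
                flags.append((dash, ds))
    return flags

print(mismatches())  # [('hr_overview', 'salaries')]
```

Because the feed relationships are machine-readable in the graph, a check like this can run automatically whenever a dashboard or dataset changes, rather than relying on an administrator to notice the conflict.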

In Conclusion

Semantic Layers are a natural evolution of the recognition that metadata is a first-class citizen in the battle to get the right data to the right people at the right time. The combination of formal metadata and graphs gives data administrators and data users new ways to find, manage, and work with data.

Ivanov and White Speaking at a Joint Webinar Series on Semantic Data Fabric
https://enterprise-knowledge.com/ivanov-and-white-speaking-at-a-joint-webinar-series-on-semantic-data-fabric/ | Wed, 10 Jun 2020

The post Ivanov and White Speaking at a Joint Webinar Series on Semantic Data Fabric appeared first on Enterprise Knowledge.

Enterprise Knowledge’s Senior Consultants, Yanko Ivanov and Ben White, will be speaking at a series of joint webinars hosted by Semantic Web Company titled: “The Road to AI Requires Semantic Data Fabric.” The webinar series focuses on the value and need for a semantic layer to power smarter, more relevant, and more contextualized Artificial Intelligence (AI) applications. As part of building a semantic data fabric, White discusses the approach and methodology for developing a unified, relevant, and intuitive business glossary necessary for ensuring a common language. Ivanov then furthers the discussion by revealing the role standardized semantic taxonomies and ontologies play in laying the groundwork for introducing achievable and realistic AI solutions to organizations. The webinar series concludes with a complete demo application of the concepts discussed during the webinar. Join the webinar series here.

The webinar series is co-hosted along with Semantic Web Company’s Director of Vertical Solutions, Florian Bauer, and Data.World’s Chief Technology Officer, Bryon Jacob.

Leveraging Ontologies and Biomedical Standards for Data Governance, Process Optimization, and Product Findability
https://enterprise-knowledge.com/leveraging-ontologies-for-data-governance/ | Thu, 28 May 2020

The post Leveraging Ontologies and Biomedical Standards for Data Governance, Process Optimization, and Product Findability appeared first on Enterprise Knowledge.


The Challenge

A global veterinary company that provides a comprehensive suite of products, software, and services for veterinary professionals needed an effective taxonomy and ontology design to provide easy and consistent ways to find and use data, in order to inform machine learning activities and derive business intelligence. Specifically, the organization needed a way to model and describe its business processes and the data flow between individual veterinary practices, and to enrich and align its ontology and data model with industry standards as part of its content and data normalization services and team initiatives. The goals were to improve efficiency and to enable reporting on data across practices, as well as on trends within a specific practice, such as adherence to prescribed medications.

Their two primary stakeholders included:

  • Data Scientists, who analyze and interpret data to manage relationships at more granular levels with minimal effort, inform machine learning activities, and derive business intelligence; and
  • Organizational Divisions, which need easy and consistent ways to find and use data.

The Solution

Enterprise Knowledge (EK) partnered with the organization to implement an enterprise taxonomy/ontology design, data strategy, management tool, and governance processes that provide consistent and effective approaches for data management and interactions. EK first enriched the taxonomy through the addition of synonyms, definitions, and scope notes to ensure broader and more accurate application of tags to the data being mapped. Then, leveraging the taxonomy, EK built an ontology, managed in an enterprise taxonomy/ontology management tool, to effectively link symptoms, illnesses, treatments, medications, and other related products. Synonyms and new terms were harvested and compared from multiple industry sources, including Plumb's Veterinary Drugs and Hill's Prescription Diets. Additional metadata, such as breed definitions, was integrated with the tool through linked data, specifically DBpedia. Next, EK mapped the provided data sets to the ontology and leveraged industry benchmarking to ensure the interoperability of the ontology design. Finally, EK recommended a customized ontology governance plan, including roles, processes, and cadences, that leveraged best practices for maintaining and managing the ontology at scale.
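The value of synonym enrichment can be shown with a small sketch: a taxonomy term carrying alternative labels (in SKOS terms, altLabels alongside the prefLabel) matches text that the preferred label alone would miss. The terms, synonyms, and note below are invented for illustration and are not drawn from the client taxonomy; real auto-tagging also uses far more robust matching than substring search.

```python
# A two-term taxonomy fragment with synonym (altLabel) enrichment.
taxonomy = {
    "Osteoarthritis": {"altLabels": {"OA", "degenerative joint disease"}},
    "NSAID": {"altLabels": {"non-steroidal anti-inflammatory"}},
}

def tag(text):
    """Return taxonomy terms whose preferred or alternative label appears."""
    found = set()
    lowered = text.lower()
    for term, info in taxonomy.items():
        labels = {term} | info["altLabels"]
        if any(label.lower() in lowered for label in labels):
            found.add(term)
    return found

note = "Patient with degenerative joint disease, prescribed an NSAID."
print(sorted(tag(note)))
```

Without the altLabels, this note would receive no tags at all, which is exactly the gap that harvesting synonyms from industry sources closed for the mapped data.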

An example knowledge panel for the veterinarian ontology

The EK Difference

EK’s ontology industry experience and linked open data expertise ensured that the taxonomy and ontology design developed for the organization was not only relevant to the veterinary domain within their realm, but also relevant to the veterinary industry at large. We worked collaboratively with SMEs at the organization to identify and leverage industry standards and models within the ontology design, customizing where necessary but ensuring interoperability.

The Results

By extracting products, brands, drugs, species, and other controlled vocabulary lists from internal taxonomies and external veterinary science industry standards, EK enhanced the organization’s taxonomy and developed an ontology to describe the key types of things vet partners were interested in and how they relate to each other. The new taxonomy also ensures the use of a common vocabulary from all veterinary practices submitting data, reducing the burden of data normalization. In doing so, internal end-users are able to uncover the answers to critical business questions and the ontology is poised to become a shareable industry standard.

An example workflow for ontology governance decision-making.

Building a Semantic Enterprise Architecture
https://enterprise-knowledge.com/building-a-semantic-enterprise-architecture/ | Thu, 28 May 2020

The post Building a Semantic Enterprise Architecture appeared first on Enterprise Knowledge.


The Challenge

A federally funded engineering and research facility had a diverse set of applications and data stores with overlapping functionalities that produced identical or nearly identical information and data assets. The inability to identify this overlap resulted in a complex application portfolio full of redundant applications and a disconnected technology environment. The agency needed a better understanding of the redundancy in its existing application portfolio and of which applications were ready to be retired. It also needed a better understanding of how the applications in its portfolio aligned with the broader strategic technological direction of the enterprise.

The Solution

Enterprise Knowledge (EK) worked with the agency to define a Semantic Enterprise Architecture strategy that provided a big-picture view of the constraints and limitations imposed by duplicate application functionalities, data assets, and information assets. Developing the strategy involved working with the agency's numerous directorates to define the current-state architecture and assess existing applications against a maturity matrix. Defining the current state meant mapping relationships between applications, data assets, information assets, business processes, and organizational roles; with this in hand, the agency had a clear understanding of the applications that existed across the enterprise and their purposes. The maturity assessment then highlighted clear opportunities for improvement, helping the agency identify applications with overlapping purposes and modernize applications whose functionality did not align with its strategic goals.

A visual model of the ontology developed for this agency's Semantic Enterprise Architecture, describing relationships between their applications, data assets, information assets, business processes, and organizational roles.

The EK Difference

Using a hybrid analysis approach, consisting of a combination of user-driven research (facilitated workshops, focus groups, and interviews) and technology-driven research (in-depth analysis of the existing technology), EK captured the current Enterprise Architecture using our Semantic Enterprise Architecture metamodel, inspired by The Open Group Architecture Framework (TOGAF). In addition to the EA maturity assessment matrix, EK captured the architecture in a knowledge graph repository. This knowledge-graph-driven approach to Enterprise Architecture allowed us to iteratively capture relationships between the application layer, business layer, information layer, and technology layer. Using these approaches, EK was able to capture critical information about systems and application functionality within the agency's application portfolio and enterprise infrastructure, including:

  • Data usage and storage; 
  • Security; and 
  • Integration. 

The maturity matrix provided valuable information on the relevance of existing applications. 
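One of the questions an architecture graph like this can answer directly is which applications produce the same asset and are therefore consolidation candidates. The sketch below shows the shape of that query in plain Python over "produces" edges; the application and asset names are invented for illustration.

```python
# Architecture-graph edges: application -> data asset it produces.
produces = [
    ("TimeTrackerA", "timesheet_records"),
    ("TimeTrackerB", "timesheet_records"),
    ("LabNotebook", "experiment_logs"),
]

def redundancy_candidates(edges):
    """Group applications by the asset they produce; keep only duplicates."""
    by_asset = {}
    for app, asset in edges:
        by_asset.setdefault(asset, []).append(app)
    return {asset: apps for asset, apps in by_asset.items() if len(apps) > 1}

print(redundancy_candidates(produces))
# {'timesheet_records': ['TimeTrackerA', 'TimeTrackerB']}
```

In the graph repository the same question is a one-line traversal, and it can be re-asked at any time as directorates add or retire applications.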

The Results

Leveraging EK’s Semantic Enterprise Architecture approach in combination with the maturity matrix, the agency now has clear architectural descriptions of applications, information assets, data assets, business processes, and organizational roles, cleanly organized in a flexible graph database. This allows for better short-term and long-term strategic decision-making around data, security, integration, new design requirements, sustainability, and future support. Further, the agency now has clear visibility into which applications across the organization need to be updated or retired. This visibility, combined with the current-state architecture and the maturity assessment, allows the agency to see not only when an application needs to be updated or retired, but also how it addresses business problems and who in the organization would be impacted by its retirement.

Natural Language Search on Big Data
https://enterprise-knowledge.com/natural-language-search-on-big-data/ | Tue, 12 May 2020

The post Natural Language Search on Big Data appeared first on Enterprise Knowledge.


The Challenge

One of the largest global supply chain companies needed to provide its business users and leadership with a way to directly access and glean quick insights from large and disparate data sources using natural language search. They also wanted to ensure that their data analysts had the tools and processes available to manage and analyze this data. The data sets were stored in a large RDBMS data warehouse with little to no context attached, making it difficult to gauge their value, understand which information to use, and determine what questions the data could answer. The organization wanted to bring meaningful information and facts together to overcome these challenges and to make more timely and informed funding and investment decisions.

The Solution

By extracting key entities or metadata fields, such as topic, place, person, customer, plant, etc. from their sample files and data sets, Enterprise Knowledge (EK) developed an ontology to describe the key questions business users were interested in and how they, and their answers, relate to each other. EK then mapped the various data sets to the ontology and a knowledge graph, and leveraged semantic Natural Language Processing (NLP) capabilities to recognize user intent, link concepts, and dynamically generate the data queries that provide the response. 

The EK Difference

Our experts worked closely with the organization's own data subject matter experts (SMEs) throughout the endeavor. We facilitated knowledge transfers and design sessions to refine use cases, reach a clear definition of key information entities and their relationships to each other, and unlock the value of data context and meaning for the business. We then leveraged our data science expertise and efficient Extract, Transform, and Load (ETL) logic to drive a rapid alignment of data elements with the natural language structure of English questions to identify user intent. Simultaneously, EK leveraged a semantic data layer, allowing for the flexible mapping of disparate data source schemas into a single, unified data model that is easily digestible and accessible to both technical and nontechnical users.
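A highly simplified sketch of the intent-identification step described above: recognized words in the question are linked to ontology references, and the resulting structure drives query generation. The entity index, token matching, and intent shape are all invented for illustration; production NLP uses real entity linking rather than word lookup.

```python
# Hypothetical index from question words to ontology references.
entity_index = {
    "profitable": ("metric", "profit"),
    "customers": ("entity", "Customer"),
    "products": ("entity", "Product"),
}

def parse(question):
    """Map recognized tokens to ontology references to capture user intent."""
    links = [entity_index[w.strip("?").lower()]
             for w in question.split()
             if w.strip("?").lower() in entity_index]
    return {
        "entities": [value for kind, value in links if kind == "entity"],
        "metrics": [value for kind, value in links if kind == "metric"],
    }

intent = parse("Who were my most profitable customers?")
print(intent)  # {'entities': ['Customer'], 'metrics': ['profit']}
```

From a structure like this, the system can generate the data query (e.g. aggregate profit grouped by customer) without the user ever seeing the warehouse schema.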

The Results

Ontologies and knowledge graphs provided the foundation for Artificial Intelligence (AI) by allowing the company to collect, integrate, and identify user interest and intent. They enable the joint analysis of different entity paths, the ability to describe connectivity from various angles, and the discovery of hidden facts and relationships through inference over related content that would otherwise have gone unnoticed. By connecting internal data to analyze relationships and further mine disparate data sources, this supply chain and manufacturing company now has a holistic view of its products and services that it can leverage to influence operational decisions. The interface through which they interact with the knowledge graph enables non-technical users to uncover the answers to a variety of critical business questions, such as:

  • Which of your products or services are most profitable and perform best?
  • What investments are successful, and when are they successful?
  • How much of a given product did we deliver in a given timeframe?
  • Who were my most profitable customers last year?
  • How can we align products and services with the right experts, locations, delivery method, and timing?

Semantic Data Portal/Data Catalog
https://enterprise-knowledge.com/semantic-data-portal-data-catalog/ | Tue, 05 May 2020

The post Semantic Data Portal/Data Catalog appeared first on Enterprise Knowledge.


The Challenge

As part of its efforts to improve overall data quality, data usage, and coordination, the Chief Data Officer of a federal agency sought to analyze current data management practices and identify ways to improve the office's existing processes. One of the most pressing challenges identified was that data scientists and economists found it difficult to make efficient use of siloed data sources in order to easily access, interpret, and track data and its history. Each researcher had anecdotal knowledge of what data resources were available, and they were often recreating similar data manipulations and research that other analysts had done previously in their own departments. The agency needed a more efficient and centralized way to capture key contextual information to drive the use, discovery, and reuse of data. The solution had to enhance and modernize their metadata management practices through improved access and visibility across agency data resources, while maintaining appropriate security measures.

The Solution

Enterprise Knowledge (EK) started this effort with a strategy engagement to define an overarching strategy, identify business requirements with prioritized use cases, and design a roadmap to guide the scale of the overall effort. 

As an initial step towards implementing that strategy, EK led the development of an advanced, semantic data catalog prototype, leveraging a knowledge graph to provide key contextual and descriptive information that helped map relationships across the agency’s regulatory data sources, including collected data, metadata repositories, and publicly available financial information. For the data catalog, EK also developed an intuitive front-end, user interface that enabled end-users and data researchers to explore and access the data within the model. To support the catalog application, the knowledge graph was custom modeled to integrate information from a variety of data sources, such as structured databases, regulatory manuals, existing metadata repositories, and public websites. The integrated semantic layer contained the relationships that a user could leverage to explore and traverse information on relevant datasets, regulatory language, financial institutions, and data elements, so that they could discover what they needed regardless of their starting point.

The EK Difference

EK has designed taxonomies, ontologies, and data/metadata models to enable data integrations and modernizations for dozens of large and complex organizations. Relying on this experience, EK was able to rapidly develop the knowledge graph in this solution while providing the agency with a sustainable and scalable model that could readily integrate their data and information across departments, processes, and systems. We were also able to simultaneously ensure the graph’s effective governance, standardization, and, most importantly, cross-department usability.

Graphic outlining a use case

Beyond the model, we also drew upon our end-to-end technology solutions capabilities to develop the catalog application using full-stack development methodologies. This development competency allowed our solution to include not only the remodeling and provision of a new data layer for the agency, but also a working prototype that highlighted the deep value of the knowledge graph in a tangible and interactive way.

The Results

The catalog serves as a visual demonstration of the value of having a semantic data layer to organize, relate, and standardize metadata use at the agency. It also makes it easy for business users to find and connect relevant data and view key information at a glance. Overall, the agency continues to realize the capabilities and associated business outcomes of the phased implementation of a data solution that provides an array of the agency's stakeholders and business users with the ability to:

  • Capture information about data and relationships between data to power the findability and usability of data sets across the agency. The intuitive, front-end user interface reduced the amount of time data scientists and other SMEs spent tracking or processing data for non-technical users, as they can now directly access and explore the data for their own decision-making purposes.
  • Ensure that only the appropriate people have access to data assets and the information about those data assets.
  • Understand the history, context, and processes behind each data set.
  • Relate data elements more easily and more consistently. The tool allows data analysts and researchers to access the agency’s data resources in a single tool that makes data stored in multiple locations available without having to move or copy the data. 
  • Enhance the overall quality and efficiency of the agency’s data through improved awareness, collaboration, and consistency.

The data portal continues to serve as a scalable model of how the agency can modernize its metadata management practices while making its data easier and more readily available for use.
