Semantic Articles - Enterprise Knowledge https://enterprise-knowledge.com/tag/semantic/ Tue, 26 Aug 2025 18:32:23 +0000

Auto-Classification for the Enterprise: When to Use AI vs. Semantic Models https://enterprise-knowledge.com/auto-classification-when-ai-vs-semantic-models/ Tue, 26 Aug 2025 18:19:23 +0000

The post Auto-Classification for the Enterprise: When to Use AI vs. Semantic Models appeared first on Enterprise Knowledge.

Auto-classification is a valuable process for adding context to unstructured content. As a note on terminology, some practitioners distinguish between auto-classification (placing content into pre-defined categories from a taxonomy) and auto-tagging (assigning unstructured keywords or metadata, sometimes generated without a taxonomy). In this article, I use ‘auto-classification’ in the broader sense, encompassing both approaches. While it can take many forms, its primary purpose remains the same: to automatically enrich content with metadata that improves findability, helps users immediately determine relevance, and provides crucial information on where content came from and when it was created. And while tagging content is always a recommended practice, it is not always scalable when human time and effort are required to perform it. To solve this problem, we have been helping organizations automate this process and minimize the amount of manual effort required, especially in the age of AI, where organized and well-labeled information is the key to success.

This includes designing and implementing auto-classification solutions that save time and resources – using methods such as natural language processing, machine learning, and rapidly-evolving AI models such as large language models (LLMs). In this article, I will demonstrate how auto-classification processes can deliver measurable value to organizations of diverse sizes and industries, using real-world examples to illustrate the costs and benefits. I will then give an overview of common methods for performing auto-classification, comparing their high-level strengths and weaknesses, and conclude by discussing how incorporating semantics can significantly enhance the performance of these methods.

How Can Auto-Classification Help My Organization?

It’s a good bet that your organization possesses a large repository of unstructured information such as documents, process guides, and informational resources, whether meant for internal use or for display on a public webpage. Such a collection of knowledge assets is valuable – but only as valuable as the organization’s ability to effectively access, manage, and utilize them. That’s where auto-classification can shine: by serving as an automated processor of your organization’s unstructured content and applying tags, an auto-classifier quickly adds structure that provides value in multiple ways, as outlined below.

Time Savings

First, an auto-classifier saves time in two key ways. For one, manually reading through documents and applying metadata tags to each individually can be tedious, taking time away from content creators’ other responsibilities – as a solution, auto-classification can free up time for more crucial tasks. On the other end of the process, auto-classification and the use of metadata tags can improve findability, saving employees time when searching for documents. When paired with a taxonomy or set list of terms, an auto-classifier can standardize the search experience by ensuring content is consistently tagged with a standard set of terms.

Content Management and Strategy

These standard tags can also play a role in more content strategy-focused efforts, such as identifying gaps in content and content deduplication. For example, if some taxonomy terms feature no associated content, content strategists and managers may identify an organizational gap that needs to be filled via the authoring of new content. In contrast, too many content pieces identified as having similar themes can be deduplicated so that the most valuable content is prioritized for end users. These analytics-based decisions can help organizations maximize the efficacy of their content, increase content reach, and cut down on the cost of storing duplicate content. 
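As a rough illustration of how tag analytics can surface both gaps and deduplication candidates, here is a minimal sketch (the documents, tags, and the three-document threshold are invented for the example):

```python
from collections import Counter

def content_coverage(taxonomy_terms, tagged_docs):
    """Flag taxonomy terms with no tagged content (gaps) and terms
    applied to many documents (candidates for deduplication review)."""
    counts = Counter(tag for tags in tagged_docs.values() for tag in tags)
    gaps = [t for t in taxonomy_terms if counts[t] == 0]
    crowded = [t for t, n in counts.items() if n >= 3]  # threshold is arbitrary
    return gaps, crowded

docs = {
    "doc1": {"onboarding"}, "doc2": {"onboarding"},
    "doc3": {"onboarding"}, "doc4": {"benefits"},
}
gaps, crowded = content_coverage(["onboarding", "benefits", "travel"], docs)
```

Here "travel" would surface as a content gap, while "onboarding" would be flagged for a deduplication review.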

Ensuring Security

Finally, we have seen auto-classification play a key role in keeping sensitive content and information secure. Auto-classifiers can determine what content should be tagged with certain sensitivity classifications (for example, employee addresses being tagged as visible by HR only). One example of this is through dark data detection, where an auto-classifier parses through all organizational content to identify information that should not be visible to all end users. Assigning sensitivity classifications to content through auto-tagging can help to automatically address security concerns and ensure regulatory compliance, saving organizations from the reputational and legal costs associated with data leaks. 
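As an illustrative sketch of rules that assign sensitivity classifications, the patterns below are deliberately simplified examples, not production-grade PII detection:

```python
import re

# Simplified, illustrative sensitivity rules (not production-grade PII detection).
SENSITIVITY_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "HR-only"),                      # US SSN-like number
    (re.compile(r"(?i)\b\d{1,5}\s+\w+\s+(street|ave|road)\b"), "HR-only"),  # street address
    (re.compile(r"(?i)\bconfidential\b"), "restricted"),
]

def classify_sensitivity(text):
    """Return every sensitivity label whose rule fires; default to public."""
    labels = {label for pattern, label in SENSITIVITY_RULES if pattern.search(text)}
    return labels or {"public"}
```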

Common Auto-Classification Methods

An infographic about the six common auto-classification methods: rules-based tagging, regular expressions tagging, frequency-based tagging, natural language processing, machine learning-based tagging, LLM-based tagging

So, how do we go about tagging content automatically? Organizations can choose to employ one of a number of methods as a standalone solution, or combine them as part of a hybrid solution. Below, I will give a high-level overview of six of the most commonly used methods in auto-classification, along with some considerations for each.

1. Rules-Based Tagging: Uses deterministic rules to map content to tags. Rules can be built from dictionaries/keyword lists, proximity or co-occurrence patterns (e.g., “treatment” within 10 words of “disorder”), metadata values (author, department), or structural cues (headings, templates).

  • Considerations: Highly transparent and auditable; great for regulated/compliance use cases and domain terms with stable phrasing. However, rules can be brittle, require ongoing maintenance, and may miss implied meaning or novel phrasing unless rules are continually expanded.
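A minimal sketch of rules-based tagging, combining keyword lists with the proximity pattern mentioned above (the rule names and terms are invented for the example):

```python
import re

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def proximity_hit(text, term_a, term_b, window=10):
    """True when term_a occurs within `window` words of term_b."""
    toks = words(text)
    pos_a = [i for i, w in enumerate(toks) if w == term_a]
    pos_b = [i for i, w in enumerate(toks) if w == term_b]
    return any(abs(i - j) <= window for i in pos_a for j in pos_b)

def rules_based_tags(text, keyword_rules, proximity_rules):
    """Apply keyword-list rules and proximity rules; return all fired tags."""
    toks = set(words(text))
    tags = {tag for tag, kws in keyword_rules.items() if toks & set(kws)}
    tags |= {tag for tag, (a, b) in proximity_rules.items() if proximity_hit(text, a, b)}
    return tags

doc = "New treatment guidance for patients with an anxiety disorder."
tags = rules_based_tags(
    doc,
    keyword_rules={"Clinical": ["patients", "clinic"]},
    proximity_rules={"Treatment Guidance": ("treatment", "disorder")},
)
```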

2. Regular Expression (RegEx) Tagging: A specialized form of rules-based tagging that applies RegEx patterns to detect and tag structured strings (for example, SKUs, case numbers, ICD-10 codes, dates, or email addresses).

  • Considerations: Excellent precision for well-formed patterns and semi-structured content; lightweight and fast. Can produce false positives without careful validation of results. Best combined with other methods (such as frequency or NLP) for context checks.
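A sketch of RegEx tagging for structured strings; these patterns are simplified illustrations, and real formats (such as the full ICD-10 grammar) are stricter and should be validated against their specifications:

```python
import re

# Simplified illustrative patterns, not full implementations of each format.
PATTERNS = {
    "icd10-code": re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iso-date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def regex_tags(text):
    """Tag content with every pattern name that matches somewhere in the text."""
    return {tag for tag, pattern in PATTERNS.items() if pattern.search(text)}
```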

3. Frequency-Based Tagging: Frequency-based tagging considers the number of times that a certain term (or variations of said term) appears in a document, and assigns the most frequently appearing tags to the content. Early search engines, website indexers, and tag-mining software relied heavily on this approach for its simplicity and transparency; however, the frequency of a term does not always guarantee its importance.

  • Considerations: Works well with a well-structured taxonomy with ample synonyms for terms, as well as content that has key terms appear frequently. Not as strong a method when meaning is implied/terms are not explicitly used or terms are excessively repeated.
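A minimal frequency-based tagger, counting a term and its synonym variants and returning the most frequent (the toy taxonomy here is invented for the example):

```python
import re
from collections import Counter

def frequency_tags(text, taxonomy, top_n=2):
    """taxonomy: preferred term -> list of single-word synonyms/variants.
    Scores each term by how often it (or a variant) appears in the text,
    and returns the top-scoring terms."""
    toks = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {
        term: sum(toks[v] for v in [term] + variants)
        for term, variants in taxonomy.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [term for term in ranked[:top_n] if scores[term] > 0]
```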

4. Natural Language Processing (NLP): Uses text-processing techniques such as tokenization and semantic similarity scoring to find the best matches by meaning between two pieces of text (such as a content piece and terms in a taxonomy).

  • Considerations: Can work well for terms that are not organization/domain-specific, but struggles with acronyms/more specific terms. Better than frequency-based tagging at determining implied meaning.
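As a highly simplified stand-in for NLP-based matching, the sketch below scores token overlap (Jaccard similarity) between content and candidate terms; real pipelines would add lemmatization, embeddings, or similar:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "for", "and", "to", "in", "with"}

def tokens(text):
    """Lowercase, split on letters, and drop stopwords."""
    return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS

def best_match(content, candidate_terms):
    """Pick the candidate term whose meaning-bearing tokens overlap most
    with the content, scored by Jaccard similarity."""
    doc = tokens(content)

    def jaccard(term):
        t = tokens(term)
        union = doc | t
        return len(doc & t) / len(union) if union else 0.0

    return max(candidate_terms, key=jaccard)
```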

5. Machine Learning-Based Tagging: Machine learning methods allow for the training of models on pre-tagged content, empowering organizations to improve models iteratively for better results. By comparing new content against patterns they have already learned/been trained on, machine learning models can infer the most relevant concepts and tags to a content piece and apply them consistently. User input can help refine the classifier to identify patterns, trends, and domain-specific terms more accurately.

  • Considerations: A stock model may initially perform at a lower-than-expected level, while a well-trained model can deliver high-grade accuracy. However, this can come at the expense of time and computing resources.
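A toy multinomial Naive Bayes tagger trained on pre-tagged examples illustrates the learn-from-examples loop; a production setup would use an established library and far more training data:

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

class NaiveBayesTagger:
    """Multinomial Naive Bayes with add-one smoothing, trained on
    pre-tagged documents."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)
        self.doc_counts = Counter()
        self.vocab = set()

    def train(self, text, tag):
        toks = tokenize(text)
        self.word_counts[tag].update(toks)
        self.doc_counts[tag] += 1
        self.vocab.update(toks)

    def classify(self, text):
        def score(tag):
            total = sum(self.word_counts[tag].values())
            # Log prior from how often each tag was seen in training
            logp = math.log(self.doc_counts[tag] / sum(self.doc_counts.values()))
            for w in tokenize(text):
                # Add-one smoothing keeps unseen words from zeroing out a tag
                logp += math.log((self.word_counts[tag][w] + 1) / (total + len(self.vocab)))
            return logp
        return max(self.doc_counts, key=score)

tagger = NaiveBayesTagger()
tagger.train("invoice payment billing amount due", "Finance")
tagger.train("vacation leave benefits payroll policy", "HR")
```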

6. Large Language Model (LLM)-Based Tagging: The newest form of auto-classification, this involves providing a large language model with a tagging prompt, content to tag, and a taxonomy/list of terms if desired. As interest around generative AI and LLMs grows, this method has become increasingly popular for its ability to parse more complex content pieces and analyze meaning deeply.

  • Considerations: Tags content like a human would, meaning results may vary or become inconsistent if the same corpus is tagged multiple times. While LLMs can be smart regarding implied meaning and content sensitivity, they can be inconsistent without specific model tuning and prompt engineering. Additionally, they suffer from accuracy and precision issues when fed a large taxonomy.
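A sketch of the plumbing around LLM-based tagging: constraining the prompt to a taxonomy and validating the model's output against the controlled list. The prompt wording and JSON response format are assumptions; how the prompt is sent (client, model name) depends on whichever LLM API you use.

```python
import json

def build_tagging_prompt(content, taxonomy_terms):
    """Build a prompt that constrains the model to a controlled tag list."""
    return (
        "You are a content tagger. Choose tags ONLY from this taxonomy:\n"
        + "\n".join(f"- {t}" for t in taxonomy_terms)
        + "\n\nReturn a JSON list of the applicable tags.\n\nContent:\n"
        + content
    )

def parse_tags(llm_response, taxonomy_terms):
    """Validate the model's answer against the taxonomy, since LLMs can
    return near-miss terms that aren't in the controlled list."""
    try:
        proposed = json.loads(llm_response)
    except json.JSONDecodeError:
        return []
    allowed = set(taxonomy_terms)
    return [t for t in proposed if t in allowed]
```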

Some taxonomy and ontology management systems (TOMS), such as Graphwise PoolParty or Progress Semaphore, also offer auto-classification add-ons or extensions to their platforms that make use of one or more of these methods.

The Importance of Semantics in Auto-Classification

Imagine your repository of content as a bookstore, and your auto-classifier as the diligent (but easily confused!) store manager. You have a large number of books you want to sort into different categories, such as their audience (children, teen, adult) and genre (romance, fantasy, sci-fi, nonfiction).

Now, imagine if you gave your manager no instructions on how to sort the books. They start organizing too specifically. They put four books together on one shelf that says “Nonfiction books about history in 1814.” They put another three books on a shelf that says “Romance books in a fantasy universe with dragons.” They put yet another five books on a shelf that says “Books about knowledge management.” 

Before you know it, your bookstore has 1,098 shelves, and no happy customers. 

Therein lies the danger of tagging content without a taxonomy, leading to what’s known as semantic drift. While tagging without a taxonomy and creating an initial set of tags can be useful in some circumstances, such as when trying to generate tags or topics to later organize into a hierarchy as part of a taxonomy, it has its limitations. Tags often become very specific and struggle to maintain alignment in a way that makes them useful for search or for grouping larger amounts of content together. And, as I mentioned at the beginning of this article, auto-classification without a taxonomy in place is not auto-classification in the true sense of the word; rather, such approaches are auto-tagging, and may not produce the results business leaders/decision-makers expect.

I’ve seen this in practice when testing auto-classification methods with and without a taxonomy. When an LLM was given the same content corpus of 100 documents to tag, but in one case generated its own terms and in the other was given a taxonomy, the results differed greatly. The LLM without a taxonomy generated 765 extremely domain-specific terms that often applied to only a single content piece. In contrast, the LLM given a taxonomy tagged the content with 240 terms, allowing the same tags to apply to multiple content pieces. This created topic clusters and groups of similar content that users can easily browse, search, and navigate, making discovery faster, more intuitive, and less fragmented than when every piece is labeled with unique, one-off terms.

Bar graph showing the precision, recall, and accuracy of LLMs with and without semantics

Overall, incorporating a taxonomy into LLM-based auto-classification transforms fragmented, messy one-off tags into consistent topic clusters and hierarchies that make content easier to browse, search, and discover.

This illustrates the utility of a taxonomy in auto-classification. When you give your employee a list of shelves to stock in the store, they can avoid the “overthinking” of semantic drift and place books onto more well-architected shelves (e.g., Young Adult, Sci-Fi). A well-defined taxonomy acts as the blueprint for organizing content meaningfully and consistently using an auto-tagger.

 

When Should I Use AI, Semantic Models, or Both?

Bar graph about the accuracy of different auto-tagging methods

 

Bar graph showing the precision of different auto-classification methods

 

Bar graph showing the recall of different auto-classification methods
While results may vary by use case, methods including both AI and semantic models tend to score higher across the board. These images demonstrate results from one specific content corpus we tested internally.
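For readers reproducing this kind of evaluation, precision, recall, and accuracy for tag sets can be computed by treating each (document, term) pair as a binary decision against a gold standard, as in this sketch:

```python
def tagging_metrics(predicted, gold, taxonomy):
    """Micro-averaged precision/recall/accuracy over doc -> tag-set
    mappings, treating each (document, term) pair as one binary decision."""
    tp = fp = fn = tn = 0
    for doc, gold_tags in gold.items():
        pred_tags = predicted.get(doc, set())
        for term in taxonomy:
            p, g = term in pred_tags, term in gold_tags
            tp += p and g          # correctly applied tag
            fp += p and not g      # spurious tag
            fn += g and not p      # missed tag
            tn += not p and not g  # correctly omitted tag
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy
```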


As demonstrated above, tags created by generative AI models without any semantic model in place can become unwieldy and excessive, as LLMs look to create the best tag for an individual content piece rather than a tag that can serve as an umbrella term for multiple pieces of content. However, that does not completely eliminate AI as a standalone solution for all tagging use cases. These auto-tagging models and processes can prove helpful in the early stages of creating a term list, as a method of identifying common themes across a content corpus and forming initial topic clusters that can later bring structure to a taxonomy, either in the form of hierarchies or facets. Once again, while not true auto-classification as the industry defines it, auto-tagging with AI alone can work well for domains where topics don’t neatly fit within a hierarchy, or where domain models and knowledge evolve so quickly that a hierarchical structure would be infeasible.

On the other hand, semantic models are a great way to add the aforementioned structure to an auto-classification process, and they work very well for exact or near-exact term matching. When combined with a frequency-based, NLP, or machine learning-based auto-classifier in these situations, they tend to excel in terms of precision, applying very few incorrect tags. Additionally, these methods perform well in situations where content contains domain-specific jargon or acronyms found within semantic models, as they tag with a greater emphasis on these exact matches.

Semantic models alone can prove to be a more cost-effective option for auto-classification as well, as lighter, less compute-heavy models that do not require paid cloud hosting can tag some content corpora with a high level of accuracy. Finally, semantic models can assist greatly in cases where security and compliance are paramount, as leading AI models are generally cloud-hosted, and most methods using semantics alone can be run on-premises without introducing privacy concerns.

Nonetheless, semantic models and AI can combine as part of auto-classification solutions that are more robust and well-equipped for complex use cases. LLMs can extract meaning from complex documents where topics may be implied and compare content against a taxonomy or term list, which helps ensure content is easy to organize and consistent with an organization’s model for knowledge. However, one key consideration with this method is taxonomy size – if a taxonomy grows too large (terms in the thousands, for example), an LLM may face difficulties finding/applying the right tag in a limited context window without mitigation strategies such as retrieving tags in batches. 
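One way to sketch the batching mitigation: split the taxonomy into context-window-sized slices, prompt once per slice, and merge the results. Here `tag_batch` stands in for a hypothetical per-slice LLM call, and the demo replaces it with a trivial verbatim matcher:

```python
def batched_tag_selection(content, taxonomy, tag_batch, batch_size=50):
    """Mitigate context-window limits by prompting over taxonomy slices
    and merging the results. tag_batch(content, terms) stands in for a
    function that asks an LLM to pick tags from one slice."""
    selected = set()
    for i in range(0, len(taxonomy), batch_size):
        selected |= set(tag_batch(content, taxonomy[i:i + batch_size]))
    return selected

# Demo with a stand-in "model" that just matches terms appearing verbatim:
taxonomy = [f"term{i}" for i in range(120)]
fake_batch = lambda content, terms: [t for t in terms if t in content.split()]
picked = batched_tag_selection("term3 term75 term119", taxonomy, fake_batch)
```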

In more advanced use cases, an LLM can also be paired with an ontology, which can help LLMs understand more about interrelationships between organizational topics, concepts, and terms, and apply tags to content more intelligently. For example, a knowledge base of clinical notes and guidelines could be paired with a medical ontology that maps symptoms to potential conditions, and conditions to recommended treatments. An LLM that understands this ontology could tag a physician’s notes with all three layers (symptoms, conditions, and treatments) so when a doctor searches for “persistent cough,” the system retrieves not just symptom references, but also likely diagnoses (e.g., bronchitis, asthma) and corresponding treatment protocols. This kind of ontology-guided tagging makes the knowledge base more searchable and user-friendly and helps surface actionable insights instead of isolated pieces of information.
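The clinical example above could be sketched as a tag-expansion step over a toy ontology fragment. The mappings below are illustrative only, not medical guidance:

```python
# Toy ontology fragment: illustrative relationships only, not medical advice.
SYMPTOM_TO_CONDITIONS = {
    "persistent cough": ["bronchitis", "asthma"],
}
CONDITION_TO_TREATMENTS = {
    "bronchitis": ["rest and fluids"],
    "asthma": ["inhaled bronchodilator"],
}

def ontology_expand(symptom_tags):
    """Expand symptom tags into condition and treatment tags, so a search
    for a symptom can also surface likely diagnoses and protocols."""
    conditions, treatments = set(), set()
    for symptom in symptom_tags:
        for condition in SYMPTOM_TO_CONDITIONS.get(symptom, []):
            conditions.add(condition)
            treatments.update(CONDITION_TO_TREATMENTS.get(condition, []))
    return {"symptoms": set(symptom_tags),
            "conditions": conditions,
            "treatments": treatments}
```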

In some cases, privacy or security concerns may dictate that AI cannot be used alongside a semantic model. In others, an organization may lack a semantic model and may only have the capacity to tag content with AI as a start. However, as a whole, the majority of use cases for auto-classification benefit from a well-architected solution that combines AI’s ability to intelligently parse content with the structure and specific context that semantic models provide.

Conclusion

Auto-classification adds an important step in automation to organizations looking to enrich their content with metadata – whether it be for findability, analytics, or understanding. While there are many methods to choose from when exploring an auto-classification solution, they all rely on semantics in the form of a well-designed taxonomy to function to the best of their ability. Once implemented and governed correctly, these automated solutions can serve as key ways to unblock human efforts and direct them away from tedious tagging processes, allowing your organization’s experts to get back to doing what matters most. 

Looking to set up an auto-classification process within your organization? Want to learn more about auto-classification best practices? Contact us!


Top 5 EK Blogs of 2020 https://enterprise-knowledge.com/top-5-ek-blogs-of-2020/ Mon, 21 Dec 2020 14:00:01 +0000 https://enterprise-knowledge.com/?p=12456 As we enter the holiday season, EK reflects on all that we have to be thankful for. Amid this global pandemic, many companies were forced to close their doors. Conversely, EK has been fortunate enough to see continued growth over …

The post Top 5 EK Blogs of 2020 appeared first on Enterprise Knowledge.


As we enter the holiday season, EK reflects on all that we have to be thankful for. Amid this global pandemic, many companies were forced to close their doors. Conversely, EK has been fortunate enough to see continued growth over the last year. We’ve increased revenue, expanded engagements with existing clients, won new contracts, and hired many new employees.

This forced experiment around remote working due to COVID-19 certainly impacted how we did business in 2020 but had little impact on our culture. We’re proud of our team’s continued productivity and dedication, and we’re looking forward to a prosperous new year.

With our focus on thought leadership, we published over 70 new blogs, white papers, podcasts, and infographics in our knowledge base this year. Below are the top 5 most viewed articles written by EK employees during 2020:

1) What’s the Difference Between an Ontology and a Knowledge Graph?

2) Knowledge Management Trends in 2020

3) From Taxonomy to Ontology

4) What is a Semantic Architecture, and How do I Build One?

5) RDF*: What is It and Why do I Need It?

Which EK blogs were your favorites, and what topics would you like to see us write about in 2021? Let us know on Twitter, or reach out at info@enterprise-knowledge.com.


Enterprise AI Readiness Assessment https://enterprise-knowledge.com/enterprise-ai-readiness-assessment/ Thu, 02 Jul 2020 14:46:25 +0000 https://enterprise-knowledge.com/?p=11483 Understand your organization’s priority areas before committing resources to mature your information and data management solutions. Enterprise Knowledge’s AI Readiness Assessment considers your organization’s business and technical ecosystem, and identifies specific priority and gap areas to help you make targeted investments and gain tangible value from your data and information.

The post Enterprise AI Readiness Assessment appeared first on Enterprise Knowledge.

A wide range of organizations have placed AI on their strategic roadmap, with C-levels commonly listing Knowledge AI amongst their biggest priorities. Yet, many are already encountering challenges, as a vast majority of AI initiatives fail to show results, meet expectations, and provide real business value. For these organizations, the setbacks typically originate from the lack of a foundation on which to build AI capabilities. Enterprise AI projects too often end up as isolated endeavors, lacking the necessary foundations to support business practices and operations across the organization. So, how can your organization avoid these pitfalls? There are three key questions to ask when developing an Enterprise AI strategy: Do you have clear business applications? Do you understand the state of your information? And what in-house capabilities do you possess?

Enterprise AI entails leveraging advanced machine learning and cognitive capabilities to discover and deliver organizational knowledge, data, and information in a way that closely aligns with how humans look for and process information.

With our focus and expertise in knowledge, data, and information management, Enterprise Knowledge (EK) developed this proprietary Enterprise Artificial Intelligence (AI) Readiness Assessment in order to enable organizations to understand where they are and where they need to be in order to begin leveraging today’s technologies and AI capabilities for knowledge and data management. 

Assess your organization across four factors: enterprise readiness, state of data and content, skill sets and technical capabilities, and change readiness

Based on our experience conducting strategic assessments as well as designing and implementing Enterprise AI solutions, we have identified four key factors that serve as the most common indicators and foundations organizations can use to evaluate their current capabilities and understand what it takes to invest in advanced capabilities.

This assessment leverages over thirty measurements across these four Enterprise AI Maturity factors as categorized under the following aspects. 

1. Organizational Readiness

Does your organization have the vision, support, and drive to enable successful Enterprise AI initiatives?

The foundational requirement for any organization to undergo an Enterprise AI transformation stems from alignment on vision and the business applications and justifications for launching successful initiatives. The Organizational Readiness Factor includes the assessment of appropriate organizational designs, leadership willingness, and mandates that are necessary for success. This factor evaluates topics including:

  • The need for vision and strategy for AI and its clear application across the organization.
  • If AI is a strategic priority with leadership support.
  • If the scope of AI is clearly defined with measurable success criteria.
  • If there is a sense of urgency to implement AI.

With a clear picture of what your organizational needs are, your Organizational Readiness assessment factor will allow you to determine if your organization meets the requirements to consider AI related initiatives while surfacing and preparing you for potential risks to better mitigate failure.

2. The State of Organizational Data and Content

Is your data and content ready to be used for Enterprise AI initiatives?

The volume and dynamism of data and content (structured and/or unstructured) are growing exponentially, and organizations need to be able to securely manage and integrate that information. Enterprise AI requires quality of, and access to, this information. This assessment factor focuses on the extent to which existing structured and unstructured data is in a machine-consumable format and the level to which it supports business operations within the enterprise. This factor considers topics including:

  • The extent to which the organization’s information ecosystems allow for quick access to data from multiple sources.
  • The scope of organizational content that is structured and in a machine-readable format.
  • The state of standardized organization of content/data such as business taxonomy and metadata schemes and if it is accurately applied to content.
  • The existence of metadata for unstructured content. 
  • Access considerations including compliance or technical barriers.

AI needs to learn the human way of thinking and how an organization operates in order to provide the right solutions. Understanding the full state of your current data and content will enable you to focus on the right content/data with the highest business impact and help you develop a strategy to get your data in an organized and accessible format. Without high quality, well organized and tagged data, AI applications will not deliver high-value results for your organization.

3. Skill Sets and Technical Capabilities

Does your organization have the technical infrastructure and resources in place to support AI?

With the increased focus on AI, demand has grown both for individuals who have the technical skills to engineer advanced machine learning and intelligent solutions, and for business knowledge experts who can transform data into a paradigm that aligns with how users and customers communicate knowledge. Further, over the years, cloud computing capabilities, web standards, open source training models, and linked open data for a number of industries have emerged to help organizations craft customized Enterprise AI solutions for their business. This means an organization that is looking to start leveraging AI for its business no longer has to start from scratch. This assessment factor evaluates the organization’s existing capabilities to design, manage, operate, and maintain an Enterprise AI solution. Some of the factors we consider include:

  • The state of existing enterprise ontology solutions and enterprise knowledge graph capabilities that optimize information aggregation and governance. 
  • The existence of auto-classification and automation tools within the organization.
  • Whether roles and skill sets for advanced data modeling or knowledge engineering are present within the organization.
  • The availability and capacity to commit business and technical SMEs for AI efforts.

Understanding the current gaps and weaknesses in existing capabilities and defining your targets are crucial elements to developing a practical AI Roadmap. This factor also plays a foundational role in giving your organization the key considerations to ensure AI efforts kick off on the right track, such as leveraging web standards that enable interoperability, and starting with available existing/open-source semantic models and ecosystems to avoid short-term delays while establishing long-term governance and strategy. 

4. Change Threshold 

Is your organization prepared to support the operational and strategic changes that will result from AI initiatives?

The success of Enterprise AI relies heavily on the adoption of new technologies and ways of doing business. Organizations that fail to succeed with AI often struggle to understand the full scope of the change that AI will bring to their business and organizational norms. This usually manifests itself in the form of fear (either of change in job roles, or of creating wrong or unethical AI results that expose the organization to higher risks). Most organizations also struggle with the understanding that AI requires a few iterations to get it “right”. As such, this assessment factor focuses on the organization’s appetite, willingness, and threshold to understand and tackle the cultural, technical, and business challenges in order to achieve the full benefits of AI. This factor evaluates topics including:

  • Business and IT interest and desire for AI.
  • Existence of resource planning for the individuals whose roles will be impacted. 
  • Education and clear communication to facilitate adoption. 

The success of any technical solution is highly dependent on the human and cultural factors in an organization, and each organization has a threshold for dealing with change. Understanding and planning for this factor will enable your organization to integrate change management that addresses negative implications, avoids unnecessary resistance or weak AI results, and provides proper navigation through issues that arise.

How it Works

This Enterprise AI readiness assessment and benchmarking leverages the four factors that have over 30 different points upon which each organization can be evaluated and scored. We apply this proprietary maturity model to help assess your Enterprise AI readiness and clearly define success criteria for your target AI initiatives. Our steps include: 

  • Knowledge Gathering and Current State Assessment: We leverage a hybrid model that includes interviews and focus groups, supported by content/data and technology analysis, to understand where you are and where you need to be. This gives us a complete understanding of your current strengths and weaknesses across the four factors, allowing us to provide the right recommendations and guidance to drive success, business value, and long-term adoption.
  • Strategy Development and Roadmapping: Building on the established focus on the assessment factors, we work with you to develop a strategy and roadmap that outlines the necessary work streams and activities needed to achieve your AI goals. It combines our understanding of your organization with proven best practices and methodologies into an iterative work plan that ensures you can achieve the target state while quickly and consistently showing interim business value.
  • Business Case Development and Alignment Support: We further compile our assessment of potential project ROI based on increased revenues, cost avoidance, and risk and compliance management. We then balance those against the perceived business needs and wants by determining the areas that would have the biggest business impact at the lowest cost, and focus our discussions and explorations on the areas with the greatest need and highest interest.

Keys to Our Assessment  

Over the past several years, we have worked with diverse organizations to enable them to strategize, design, pilot, and implement scaled Enterprise AI solutions. What makes our priority assessment unique is that it is developed based on years of real-world experience supporting organizations in their knowledge and data management. As such, our assessment offers the following key differentiators and values for the enterprise: 

  • Recognition of Unique Organizational Factors: This assessment recognizes that no Enterprise AI initiative is exactly the same. It is designed in such a way that it recognizes the unique aspects of every organization, including priorities and challenges to then help develop a tailored strategy to address those unique needs.
  • Emphasis on Business Outcomes: Successful AI efforts result in tangible business applications and outcomes. Every assessment factor is tied to specific business outcomes with corresponding steps on how the organization can use it to better achieve practical business impact.
  • A Tangible Communication and Education Tool: Because this assessment provides measurable scores across more than 30 tangible criteria and success factors, it serves as an effective tool for communicating up to leadership and quickly garnering buy-in, helping organizations understand both the cost and the tangible value of AI efforts. 

Results

As a result of this effort, you will have a complete view of your AI readiness, gaps, and required ecosystem, along with an understanding of the potential business value that could be realized once the target state is achieved. Taken as a whole, the assessment allows an organization to:

  • Understand strengths and weaknesses, and overall readiness to move forward with Enterprise AI compared to other organizations and the industry as a whole;
  • Judge where foundational gaps may exist in the organization in order to improve Enterprise AI readiness and likelihood of success; and
  • Identify and prioritize next steps in order to make immediate progress based on the organization’s current state and defined goals for AI and Machine Learning.

 


Taking the first step toward gaining this invaluable insight is easy:

1. Take 10-15 minutes to complete your Enterprise AI Maturity Assessment by answering a set of questions pertaining to the four factors; and
2. Submit your completed assessment survey and provide your email address to download a formal PDF report with your customized results.

The post Enterprise AI Readiness Assessment appeared first on Enterprise Knowledge.

]]>
Keys to Successful Ontology Design https://enterprise-knowledge.com/keys-to-successful-ontology-design/ Thu, 07 May 2020 18:13:01 +0000 https://enterprise-knowledge.com/?p=11073 Ontologies can capture highly complex ideas and business logic, provide more intuitive ways to structure information, and can ultimately power new use cases, such as semantic search, recommendation engines, and AI. While many organizations aim to leverage an ontology, they … Continue reading

The post Keys to Successful Ontology Design appeared first on Enterprise Knowledge.

]]>
Ontologies can capture highly complex ideas and business logic, provide more intuitive ways to structure information, and can ultimately power new use cases, such as semantic search, recommendation engines, and AI. While many organizations aim to leverage an ontology, they lack the strategic expertise and the in-house technical skills required to design or implement it. 

In order to get you started, here are some tips to ensure your efforts result in a quality ontology design. Even though they sound simple, these practical design considerations will have a huge impact on the reusability and scalability of your ontology.

Infographic for Ontology Design Steps

1. Identify a Clear Use Case. 

At the beginning of any ontology design effort, identify the 1-2 critical questions that the ontology needs to answer. Modeling for these specific use cases will help you show immediate value by having a working model implemented quickly. If you attempt to model a full domain, you may be modeling indefinitely with no clear return on investment for the time spent. As we know, ontologies are never ‘complete’ and can always be expanded for additional use cases and domain coverage, so it is important to identify the first few use cases that will show an immediate return on investment. 

Some high-return example use cases we’ve worked on with success are: 

  • Related Content Recommendations: Using relationships between similar content and shared attributes, like the topic or author of a document, can support a recommendation engine that surfaces content to users. 
  • Natural Language Processing & Semantic Search: By storing the ontology as RDF triples, we can instantiate the model and traverse our data via its relationships, answering natural language style questions with SPARQL queries that follow the same pattern. For example, if our model contains a relationship between two concepts:
PersonName isAuthorOf BookTitle
  We can instantiate examples based on our domain and data like these:
Jane Austen isAuthorOf Pride and Prejudice 
Jane Austen isAuthorOf Emma
Jane Austen isAuthorOf Northanger Abbey
  Then we can ask a question in search such as, “What books has Jane Austen written?” and return results from our dataset: Pride and Prejudice, Emma, and Northanger Abbey.
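A question like this translates directly into a SPARQL query over those triples. As a minimal sketch (the `ex:` namespace for the resources and the `isAuthorOf` property is illustrative, not from a real vocabulary):

```sparql
PREFIX ex: <http://example.org/>

# "What books has Jane Austen written?"
SELECT ?book
WHERE {
  ex:JaneAusten ex:isAuthorOf ?book .
}
```

Run against the three triples instantiated above, this query binds `?book` to Pride and Prejudice, Emma, and Northanger Abbey.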

Showing immediate value with a concrete use case can help to ensure participation and support from stakeholders and end users on additional use cases and ontology design efforts.

2. Reuse Standards & Existing Vocabularies.

Look first for models and standards that already exist and may inform your design. One of the most important benefits of using standards for ontology development is the interoperability that comes with open linked data. Depending on the industry, well-developed models may already exist. For example, the Veterinary Extension of SNOMED CT from the Veterinary Terminology Services Laboratory (VTSL) Terminology Browser is packed with defined vocabulary and classes for Procedures, Clinical Findings, Events, and more for the veterinary industry. Another industry-specific model is the Unified Medical Language System (UMLS), which gathers many biomedical and clinical vocabularies in one web browser interface. Finding and leveraging an existing vocabulary or model for your industry can jumpstart your design and ensure that you are in line with the industry, even if your model is tailored or customized in some areas. One resource for finding industry- or domain-specific models is Linked Open Vocabularies, a collection of open-source vocabularies. 

Non-industry specific standards are also important and can be key for saving time, such as pulling in descriptions and alternative labels from DBpedia or Wikidata, classes and relationships from Friend of a Friend (FOAF) or Schema.org, and modeling standards like W3C’s Web Ontology Language (OWL) and Resource Description Framework Schema (RDFS) for consistency. These standards will also ensure interoperability with any applications or datasets that are also using semantic web standards.

3. Leverage Consistent Naming.

Follow a naming convention to ensure that your resources are easily understood and referenceable by others. If your naming conventions are inconsistent, it can make it much harder to integrate with organizational tools or reference parts of the model. On the other hand, if the naming conventions clearly differentiate between classes, properties, and instances, it will be immediately obvious which type of resource someone is looking at or trying to return in a query or API call. Luckily, the World Wide Web Consortium (W3C) has already defined some simple conventions:

  • Classes: Upper camel case, starting with a capital letter. Examples: Place, Person
  • Properties: Start with a lowercase letter, then continue in camel case. Examples: inverseOf, authorOf, hasBroader
  • Instances: For proper names, capitalize the first letter of each word. Examples: United States of America, Jane Austen
These naming conventions will improve the clarity and quality of your model, and, as always, support interoperability.

4. Define Classes and Instances.

Two important components of any ontology design are classes and instances. These components allow us to model both the broad types of things and the specific examples of those things. The OWL standard defines these as:

Classes provide an abstraction mechanism for grouping resources with similar characteristics. Like RDF classes, every OWL class is associated with a set of individuals, called the class extension. The individuals in the class extension are called the instances of the class.

The differences between classes and instances can be tricky to define when designing a new ontology, especially if you are building off of an existing taxonomy or thesaurus. My colleague Ben describes how the top level of a well constructed taxonomy can often be repurposed as the classes of your ontology in his blog, From Taxonomy to Ontology. Taxonomies that include metadata fields like Content Type, Person, and Company can transition to ontological classes with the narrower terms, like Proposal, Jenni Doughty, and Enterprise Knowledge as instances of those classes, respectively. 

Ontology Example

It’s important to understand which of your taxonomy terms are candidates for classes, subclasses, or instances. A good rule of thumb is to distinguish terms that are types of things from terms that are examples of individual things. For example, a Quarterly Report is a type of report (a subclass of the Report class), while the 2020 Q3 Quarterly Report is an instance, or a specific example, of a Quarterly Report. The distinction is important for ensuring your ontology model is complete and can be implemented successfully.

In OWL and RDFS, there are many useful axioms that can help ontologists express different types of relationships and classes and allow applications using the model to infer different things and further define a class. Some of these include:

  • rdfs:subClassOf – Can define a class as a narrower or child class of another, allowing the inheritance of properties and additional inferences based on this relationship.
  • owl:equivalentClass – Can indicate that a class is equivalent to another, meaning the two classes contain exactly the same instances (the same class extension). This is weaker than owl:sameAs, which asserts that two classes are identical and share the same intensional meaning. 
  • owl:disjointWith – Can restrict classes from overlapping and containing the same instance in more than one class, reducing ambiguity when tagging or recommending content. For example, if we have an ontology with Animal and Car classes, we can disjoint these classes which will prevent the same instance of Jaguar from appearing in both classes.

Understanding which axioms to use will ensure that the ontology models not just the classes, but also contains information about those classes that further characterize the instances that fit within and how they relate to others.
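As a minimal Turtle sketch of these axioms (the `ex:` namespace and class names are illustrative, not from a real vocabulary):

```turtle
@prefix ex:   <http://example.org/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Quarterly Report is a narrower (child) class of Report
ex:QuarterlyReport rdfs:subClassOf ex:Report .

# A specific report is an instance of the Quarterly Report class
ex:2020Q3QuarterlyReport a ex:QuarterlyReport .

# Two classes asserted to contain exactly the same instances
ex:Person owl:equivalentClass ex:Human .

# Animal and Car may never share an instance, so "Jaguar" stays unambiguous
ex:Animal owl:disjointWith ex:Car .
```

With these axioms in place, a reasoner can infer, for example, that ex:2020Q3QuarterlyReport is also an instance of ex:Report.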

5. Implement Iteratively.

Once you’ve designed the ontology model for your use case, it is time to begin mapping data sources to the ontology, or instantiating and implementing it with graph technology. As with all our design and implementation projects, EK recommends implementing an ontology iteratively, starting with 1-2 data sources or a high-level set of data from each intended source that is relevant to your priority use cases. When mapping or ingesting data, there are multiple best practices for Enterprise Knowledge Graph Design to assist in completing this task, including deciding what data to actually move and store in the knowledge graph, and which to map through a virtual graph.

The best way to ensure sustainable and scalable implementation is to start small, move quickly, and seek continuous feedback from end users and stakeholders. These stakeholders will be instrumental in ensuring that the resulting ontology and implementation meet business needs and end user expectations. A good rule of thumb is to engage a wide variety of stakeholders from all business areas that will benefit from, or be engaged in, any part of the design, implementation, maintenance, or end user processes. Finally, through facilitated conversations or working sessions, these stakeholders will not only assist in the development of the ontology, but will also feel that they have a stake in what has been designed. They can become your champions for future work and capabilities.

These five keys can help to ensure a strong, standards-based foundation for your ontology design that will result in an intuitive and interoperable model. For more information on how to begin designing an ontology, consider EK’s Two-Day Design Workshop or contact us at info@enterprise-knowledge.com.

The post Keys to Successful Ontology Design appeared first on Enterprise Knowledge.

]]>
What is a Semantic Architecture and How do I Build One? https://enterprise-knowledge.com/what-is-a-semantic-architecture-and-how-do-i-build-one/ Thu, 02 Apr 2020 13:00:48 +0000 https://enterprise-knowledge.com/?p=10865 Can you access the bulk of your organization’s data through simple search or navigation using common business terms? If so, your organization may be one of the few that is reaping the benefits of a semantic data layer. A semantic … Continue reading

The post What is a Semantic Architecture and How do I Build One? appeared first on Enterprise Knowledge.

]]>
Can you access the bulk of your organization’s data through simple search or navigation using common business terms? If so, your organization may be one of the few that is reaping the benefits of a semantic data layer. A semantic layer provides the enterprise with the flexibility to capture, store, and represent simple business terms and context as a layer sitting above complex data. This is why many of our clients give this architectural layer an internal nickname, referring to it as “The Brain,” “The Hub,” “The Network,” “Our Universe,” and so forth. 

As such, before delving deep into the architecture, it is important to align on and understand what we mean by a semantic layer and its foundational ability to solve business and traditional data management challenges. In this article, I will share EK’s experience designing and building semantic data layers for the enterprise, the key considerations and potential challenges to look out for, and also outline effective practices to optimize, scale, and gain the utmost business value a semantic model provides to an organization.

What is a Semantic Layer?

A semantic layer is not a single platform or application, but rather the realization or actualization of a semantic approach to solving business problems by managing data in a manner that is optimized for capturing business meaning and designing it for the end user experience. At its core, a semantic layer comprises one or more of the following semantic approaches: 

  • Ontology Model: defines the types of things that exist in your business domain and the properties that can be used to describe them. An ontology provides a flexible and standard model that organizes structured and unstructured information through entities, their properties, and the way they relate to one another.
  • Enterprise Knowledge Graph: uses an ontology as a framework to add in real data and enable a standard representation of an organization’s knowledge domain and artifacts so that it is understood by both humans and machines. It is a collection of references to your organization’s knowledge assets, content, and data that leverages a data model to describe the people, places, and things and how they are related. 

A semantic layer thus pulls in these flexible semantic models to allow your organization to map disparate data sources into a single schema or a unified data model that provides a business representation of enterprise data in a “whiteboardable” view, making large data accessible to both technical and nontechnical users. In other words, it provides a business view of complex knowledge, information, and data and their assorted relationships in a way that can be visually understood.

How Does a Semantic Layer Provide Business Value to Your Organization?

Organizations have been successfully utilizing data lakes and data warehouses to unify enterprise data in a shared space. A semantic data layer delivers the best value for enterprises that are looking to support the growing consumers of big data, business users, by adding the “meaning” or “business knowledge” behind their data as an additional layer of abstraction, or as a bridge between complex data assets and front-end applications such as enterprise search, business analytics and BI dashboards, chatbots, and natural language processing. For instance, if you ask a non-semantic chatbot, “what is our profit?” and it recites the definition of “profit” from the dictionary, it does not have a semantic understanding or context of your business language and what you mean by “our profit.” A chatbot built on a semantic layer would instead respond with something like a list of revenue generated per year and the respective percentage of your organization’s profit margins.

Visual representation of how a semantic layer draws connections between your data management and storage layer

With a semantic layer as part of an organization’s Enterprise Architecture (EA), the enterprise will be able to realize the following key business benefits: 

  • Bringing Business Users Closer to Data: Business users and leadership can independently derive meaningful information and facts from large data sources, gaining insights without the technical skills required to query, clean up, and transform large data.   
  • Data Processing: greater flexibility to quickly modify and improve data flows in a way that is aligned to business needs and the ability to support future business questions and needs that are currently unknown (by traversing your knowledge graph in real time). 
  • Data Governance: unification and interoperability of data across the enterprise minimizes the risk and cost associated with migration or duplication efforts to analyze the relationships between various data sources. 
  • Machine Learning (ML) and Artificial Intelligence (AI): Serves as the source of truth that provides business definitions of data to machines, laying the foundation for deep learning and analytics that help the business answer questions and predict challenges.

Building the Architecture of a Semantic Layer

A semantic layer consists of a wide array of solutions, ranging from the organizational data itself, to data models that support object- or context-oriented design, semantic standards to guide machine understanding, and the tools and technologies that enable and facilitate implementation and scale.

Visual representation of semantic layer architecture: from data sources, to data modeling, transformation, unification, and standardization, to graph storage and a unified taxonomy, to the semantic layer and its business outcomes.

The foundational steps we have identified as critical to building a scalable semantic layer within your enterprise architecture are: 

1. Define and prioritize your business needs: In building semantic enterprise solutions, clearly defined use cases provide the key question or business reason your semantic architecture will answer for the organization. This in turn drives an understanding of the users and stakeholders, articulates the business value or challenge the solution will solve for your organization, and enables the definition of measurable success criteria. Active SME engagement and validation to ensure proper representation of their business knowledge and understanding of their data is critical to success. Skipping this foundational step will result in missed opportunities for ensuring organizational alignment and return on your investment (ROI). 

2. Map and model your relevant data: Many organizations we work with support a data architecture that is based on relational databases, data warehouses, and/or a wide range of content management cloud or hybrid cloud applications and systems that drive data analysis and analytics capabilities. This does not necessarily mean that these organizations need to start from scratch or overhaul their working enterprise architecture in order to adopt/implement semantic capabilities. For these organizations, it is more effective to start increasing the focus on data modeling and designing efforts by adding models and standards that will allow for capturing business meaning and context (see section below on Web Standards) in a manner that provides the least disruptive starting point. In such scenarios, we typically select the most effective approach to model data and map from source systems by employing the relevant transformation and unification processes (Extract, Transform, Load – ETLs) as well as model-mapping best practices (think ‘virtual model’ versus stored data model in graph storages like graph databases, property graphs, etc.) that are based on the organization’s use cases, enterprise architecture capabilities, staff skill sets, and primarily provide the highest flexibility for data governance and evolving business needs.

The state of an organization’s data typically comes in various formats and from disparate sources. Start with a small use case and plan for an upfront clean-up and transformation effort that will serve as a good investment to start organizing your data and set stakeholder expectations while demonstrating the value of your model early.

3. Leverage semantic web standards to ensure interoperability and governance: Despite the required agility to evolve data management practices, organizations need to think long term about scale and governance. Semantic Web standards provide the fundamentals that enable you to adopt standard frameworks and practices when kicking off or advancing your semantic architecture. The most relevant standards-based practices for the enterprise are to: 

  • Employ an established data description framework to add business context to your data to enable human understanding and natural language meaning of data (think taxonomies, data catalogs, and metadata); 
  • Use standard approaches to manage and share the data through core data representation formats and a set of rules for formalizing data to ensure your data is both human-readable and machine-readable (examples include XML/RDF formats); 
  • Apply a flexible logic or schema to map and represent relationships, knowledge, and hierarchies between your organization’s data (think ontologies/OWL);
  • Use a semantic query language to access and analyze the data for natural language and artificial intelligence systems (think SPARQL); and
  • Start with available existing/open-source semantic models and ecosystems to serve as a low-risk, high-value stepping stone (think Linked Open Data/Schema.org). For instance, organizations in the financial industry can start their journey with the Financial Industry Business Ontology (FIBO), while we have used the Gene Ontology as a jumping-off point for biopharma organizations, enriching or tailoring the model to the specific needs of their organization.
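To make these standards concrete, here is a hypothetical Turtle snippet that describes a book using open Schema.org terms; the same data is human-readable, machine-readable, and queryable with SPARQL (the `ex:` resources are illustrative):

```turtle
@prefix schema: <https://schema.org/> .
@prefix ex:     <http://example.org/> .

# A book and its author, described with open-standard Schema.org terms
ex:PrideAndPrejudice
    a schema:Book ;
    schema:name   "Pride and Prejudice" ;
    schema:author ex:JaneAusten .

ex:JaneAusten
    a schema:Person ;
    schema:name "Jane Austen" .
```

Because the classes and properties come from a shared open vocabulary, any standards-aware application or dataset can interpret and link to this data without custom mapping.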

4. Scale with Semantic Tools: Semantic technology components in a more mature semantic layer include graph management applications that serve as middleware, powering the storage, processing, and retrieval of your semantic data. In most scaled enterprise implementations, the architecture for a semantic layer includes a graph database for storing the knowledge and relationships within your data (i.e. your ontology), an enterprise taxonomy/ontology management or a data cataloging tool for effective application and governance of your metadata on enterprise applications such as content management systems, and text analytics or extraction tools to support  advanced capabilities such as Machine Learning (ML) or natural language processing (NLP) depending on the use cases you are working with. 

5. “Plug in” your customer- and employee-facing applications: The most practical and scalable semantic architecture will successfully support upstream customer- or employee-facing applications such as enterprise search, data visualization tools, end services/consuming systems, and chatbots, to name a few. This way you can “plug” semantic components into other enterprise solutions, applications, and services. With this as your foundation, your organization can start taking advantage of advanced artificial intelligence (AI) capabilities such as knowledge/relationship and text extraction tools to enable Natural Language Processing (NLP), Machine Learning based pattern recognition to enhance the findability and usability of your content, as well as automated categorization of your content to augment your data governance practices. 

The cornerstone of a scalable semantic layer is ensuring the capability for controlling and managing versions, governance, and automation. Continuous integration pipelines including standardized APIs and automated ETL scripts should be considered as part of the DNA to ensure consistent connections for structured input from tested and validated sources.

Conclusion

In summary, a semantic layer works best as a natural integration framework for enabling interoperability of organizational information assets. It is important to get started by focusing on valuable business-centric use cases that justify the move into semantic solutions. Further, it is worth considering a semantic layer as a complement to other technologies, including relational databases, content management systems (CMS), and other front-end web applications that benefit from easy access to and an intuitive representation of your content and data, including your enterprise search, data dashboards, and chatbots.

If you are interested in learning more to determine if a semantic model fits within your organization’s overall enterprise architecture or if you are embarking on the journey to bridge organizational silos and connect diverse domains of knowledge and data that accelerate enterprise AI capabilities, read more or email us.   


 

The post What is a Semantic Architecture and How do I Build One? appeared first on Enterprise Knowledge.

]]>
Ivanov to Speak at Knowledge Graph Conference https://enterprise-knowledge.com/ivanov-to-speak-at-knowledge-graph-conference/ Thu, 26 Mar 2020 13:00:14 +0000 https://enterprise-knowledge.com/?p=10808 Yanko Ivanov, a Senior Solution Architect at Enterprise Knowledge, will be presenting at the upcoming global, digital-first, multi-day Knowledge Graph Conference being held on May 6th and 7th.  Ivanov’s presentation, “The Curious Case of the Semantic Data Catalog,” will discuss … Continue reading

The post Ivanov to Speak at Knowledge Graph Conference appeared first on Enterprise Knowledge.

]]>
Yanko Ivanov, a Senior Solution Architect at Enterprise Knowledge, will be presenting at the upcoming global, digital-first, multi-day Knowledge Graph Conference being held on May 6th and 7th. 

Ivanov’s presentation, “The Curious Case of the Semantic Data Catalog,” will discuss the application of knowledge graphs as an enterprise semantic data catalog. With the popularization of enterprise knowledge graphs in recent years and the inherent power of the semantic technology to integrate structured and unstructured information, the application of knowledge graphs as data catalogs seems to be a foregone conclusion. In his presentation, Ivanov will discuss key considerations and business value of semantic data catalogs. He will also review a specific use case where EK implemented a semantic data catalog for a government organization to help them track, discover, and govern a large number of data sets.

You can register for this remote Conference here

A photo of speaker Yanko Ivanov alongside the name and dates of the conference

The post Ivanov to Speak at Knowledge Graph Conference appeared first on Enterprise Knowledge.

]]>
Speakers, Companies, and Topics You Shouldn’t Miss at SEMANTiCS 2020 https://enterprise-knowledge.com/speakers-companies-and-topics-you-shouldnt-miss-at-semantics-2020/ Thu, 06 Feb 2020 14:00:32 +0000 https://enterprise-knowledge.com/?p=10473 I just finished putting together the final touches on the agenda for this year’s SEMANTiCS conference in Austin, TX. As Conference Chair, I couldn’t be more excited about the speakers, companies, and the topics they will be speaking about. We … Continue reading

The post Speakers, Companies, and Topics You Shouldn’t Miss at SEMANTiCS 2020 appeared first on Enterprise Knowledge.

]]>
I just finished putting together the final touches on the agenda for this year’s SEMANTiCS conference in Austin, TX. As Conference Chair, I couldn’t be more excited about the speakers, companies, and the topics they will be speaking about. We received over 60 speaker submissions for fewer than 40 slots. We faced some difficult choices, but the talks we selected are going to be amazing. I can’t wait for people to hear all of the great presentations on topics like semantic technologies, knowledge graphs, machine learning, and artificial intelligence. 

The conference has three primary tracks: 

  • Case studies; 
  • Methodologies and best practices; and, 
  • Technologies. 

The case studies will provide real-life stories about how organizations around the world have implemented semantic solutions. The speakers will share what worked and what they would have done differently so that attendees can learn from their experiences. Organizations like NASA Jet Propulsion Laboratory (JPL) and the German Aerospace Center will talk about how knowledge graphs help send people to the moon. Presenters from Wells Fargo and Intuit will explain how they use knowledge graphs to make data more accessible and accurate. Organizations like eBay, Healthstream, and the Inter-American Development Bank will provide lessons learned as they have developed recommendation engines using semantic technologies.

We also have a great group of speakers who will share their methodologies and best practices. Experienced taxonomists like Heather Hedden and Gary Carlson will give guidance on the best ways to develop taxonomies, ontologies, and chatbots. Dean Allemang and Lulit Tesfaye will share their best practices for publishing linked data to the web and developing a semantic data strategy, respectively. Finally, Melissa Orozko intends to provide guidance on how to create a global ontology, while Bob Kasenchak will talk about transforming taxonomies to knowledge graphs.

Semantic technologies are changing rapidly. We have a number of speakers who will keep you up to date on the latest technologies and trends in the industry. Kurt Cagle, from Forbes, and Andreas Blumauer will speak about knowledge graphs for human beings and how to “cook” a knowledge graph. Brian Platz will share guidance on how to integrate time into semantic solutions. There will also be talks on multilingual tagging approaches and identifying digital twins.

This is just a shortlist of the kinds of companies, speakers, and topics that will be at the first US SEMANTiCs conference in Austin, TX from April 21-23. I hope to see you there!

Austin skyline with Semantics conference information such as date

The post Speakers, Companies, and Topics You Shouldn’t Miss at SEMANTiCS 2020 appeared first on Enterprise Knowledge.

]]>
What I’m Looking Forward to Learning at SEMANTiCS Austin 2020 https://enterprise-knowledge.com/what-im-looking-forward-to-learning-at-semantics-austin-2020/ Mon, 03 Feb 2020 16:43:16 +0000 https://enterprise-knowledge.com/?p=10379 SEMANTiCS Austin 2020 is the inaugural SEMANTiCS U.S. conference that will bring together knowledge graphs, ontologies, and Enterprise AI. These topics, among others, are of particular interest to my work in search and semantics, and I am excited to see … Continue reading

The post What I’m Looking Forward to Learning at SEMANTiCS Austin 2020 appeared first on Enterprise Knowledge.

]]>
SEMANTiCS Austin 2020 is the inaugural SEMANTiCS U.S. conference that will bring together knowledge graphs, ontologies, and Enterprise AI. These topics, among others, are of particular interest to my work in search and semantics, and I am excited to see how other organizations are leveraging semantic technologies. Below are the top four things I am looking forward to learning about at SEMANTiCS Austin 2020. 


Improving Business Processes with Linked Data

Organizations are beginning to work with linked data to improve business processes. There are two types of linked data to consider: open linked data and enterprise linked data. Open linked data is publicly available data that businesses can use to connect and extend their own information with pre-defined entities used across the internet. For example, one of our clients pulled in a hierarchical list of US states, counties, and cities in order to map their organizational sectors to geographic locations. This allowed them to quickly identify sectors based on an input street address. By connecting to open linked data sources, you can jumpstart the design of your domain model, pre-populate controlled lists, and improve your business taxonomy. In contrast, enterprise linked data is an organization’s internal knowledge graph. Internal knowledge graphs can improve data analysis and be a stepping stone on the path towards enterprise artificial intelligence (AI). As linked data becomes more common, new use cases are constantly developing. SEMANTiCS Austin 2020 is a great opportunity to explore some of these new use cases and understand how other organizations are leveraging relationships in data.
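To make the open linked data pattern above concrete, here is a minimal Python sketch of walking a geographic hierarchy (city to county to state) to resolve a business sector from an input city. All place names, predicates, and the sector mapping are hypothetical stand-ins for illustration, not the client's actual data or a real linked data source.

```python
# Triples in (subject, predicate, object) form, as they might arrive from an
# open linked data source such as a public gazetteer (hypothetical sample).
TRIPLES = [
    ("Texas", "hasCounty", "Travis County"),
    ("Travis County", "hasCity", "Austin"),
    ("Texas", "hasCounty", "Harris County"),
    ("Harris County", "hasCity", "Houston"),
]

# Organization-specific mapping of states to business sectors (illustrative).
STATE_TO_SECTOR = {"Texas": "South Central Region"}

def broader(term):
    """Return the parent of a term by walking the hierarchy upward."""
    for subject, _, obj in TRIPLES:
        if obj == term:
            return subject
    return None

def sector_for_city(city):
    """Climb city -> county -> state, then look up the business sector."""
    county = broader(city)
    state = broader(county) if county else None
    return STATE_TO_SECTOR.get(state)

print(sector_for_city("Austin"))  # -> South Central Region
```

In practice the triples would be pulled from an open linked data endpoint rather than hard-coded, but the hierarchy walk is the same: the open data supplies the states-counties-cities structure so the organization only maintains its own sector mapping.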

Visualizing Knowledge Graphs to Explore Data

Enterprise knowledge graphs are a growing technology trend that helps businesses explore and interpret their data by visualizing relationships. Visualizing an ontology, the data domain model, helps organizations discover hidden data relationships and understand how enterprise-wide content is related, even when that content is siloed across multiple systems and teams. For one of our clients, we created a custom web application that allows them to visualize their disparate data and traverse between different data sets and institutions. The web application enables them to easily navigate content that was previously siloed and unstructured. The ability to see relationships and access all of the information about individual entities empowers organizations to make better, more informed business decisions. At SEMANTiCS Austin 2020, I want to explore how other organizations use the power of data visualization to support their knowledge graph and search efforts. 
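To sketch the traversal behind that kind of visualization, here is a minimal, hypothetical Python example: records from two formerly siloed sources are merged into one edge list, and a breadth-first search surfaces the path connecting entities that neither silo could relate on its own. All entity names are invented for the illustration.

```python
from collections import deque

# Two siloed record sets, merged into one edge list (hypothetical data).
HR_EDGES = [("Ana", "memberOf", "Data Team")]
PROJECT_EDGES = [("Data Team", "owns", "Search Revamp"),
                 ("Search Revamp", "uses", "Taxonomy Service")]

def find_path(start, goal, edges):
    """BFS over an undirected view of the edge list; returns the node path."""
    neighbors = {}
    for s, _, o in edges:
        neighbors.setdefault(s, []).append(o)
        neighbors.setdefault(o, []).append(s)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("Ana", "Taxonomy Service", HR_EDGES + PROJECT_EDGES))
# -> ['Ana', 'Data Team', 'Search Revamp', 'Taxonomy Service']
```

A graph visualization front end essentially renders this same edge list and lets users follow such paths interactively instead of querying each silo separately.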

Building Extendable Semantic Applications

As technology stacks continue to change, the architecture of semantic applications has to adapt to meet organizational needs. When you begin a semantic application project, you may start with an MVP search application or an expert finder. How do you build and support an application that both drives your MVP and is easily extendable in the future? As the scope of the project grows, new data sources should be easy to add and new applications should be able to plug and play. Some organizations are building semantic middleware to deliver data and content to end systems for consumption. Other organizations implement GraphQL across their systems in order to make data more accessible. I look forward to learning how organizations are building, extending, and supporting the semantic technology stack as part of their search and KM initiatives at SEMANTiCS Austin 2020. 
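One way to picture the plug-and-play idea behind semantic middleware is a small registry pattern: each data source registers itself under an entity type, and consuming applications ask one entry point for entities without knowing which backend holds them. The sketch below is purely illustrative; the source names and entity types are hypothetical, and it does not describe any particular middleware product.

```python
# Registry mapping entity types to fetch functions (illustrative only).
REGISTRY = {}

def register(entity_type):
    """Decorator that plugs a new data source into the middleware."""
    def wrap(fetch_fn):
        REGISTRY[entity_type] = fetch_fn
        return fetch_fn
    return wrap

@register("person")
def fetch_person(entity_id):
    # Hypothetical backend: an HR knowledge graph.
    return {"id": entity_id, "type": "person", "source": "hr-graph"}

@register("document")
def fetch_document(entity_id):
    # Hypothetical backend: a content management system.
    return {"id": entity_id, "type": "document", "source": "cms"}

def get_entity(entity_type, entity_id):
    """Single entry point: dispatch to whichever source is registered."""
    fetch = REGISTRY.get(entity_type)
    if fetch is None:
        raise KeyError(f"no source registered for {entity_type!r}")
    return fetch(entity_id)

print(get_entity("person", "e42"))  # served by hr-graph, invisibly to the caller
```

Adding a new source is then a matter of registering one more fetch function; no consuming application has to change, which is the same decoupling a GraphQL layer provides at a larger scale.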

Leveraging Semantic Technologies for Search

As I mentioned in the last section, semantic applications are commonly used for search. Using an enterprise knowledge graph, you can identify and describe the people, places, content, or other domain-specific entities of your business. These descriptions help organizations build their own knowledge panels and identify potential action-oriented search use cases. Additionally, semantic technologies are used to extend taxonomies, enable auto-tagging of content, and power machine learning processes to better understand user search queries. With the increased approachability of natural language processing and machine learning tools, there are a number of ways to improve search with semantic technologies. SEMANTiCS Austin 2020 will explore the benefits of semantic search from design to implementation.
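As a rough illustration of how an entity lookup can power a knowledge panel, here is a minimal Python sketch: if a query mentions a known entity, the search layer returns that entity's panel fields alongside ordinary keyword results. The entity index and its facts are invented for the example.

```python
# A tiny entity index, standing in for an enterprise knowledge graph.
ENTITIES = {
    "austin": {"label": "Austin", "type": "City",
               "facts": {"state": "Texas", "hosts": "SEMANTiCS US 2020"}},
}

def search(query):
    """Match query tokens against the entity index; build a panel if found."""
    panel = None
    for token in query.lower().split():
        if token in ENTITIES:
            panel = ENTITIES[token]
            break
    return {"query": query, "panel": panel}

result = search("events in Austin this spring")
print(result["panel"]["label"])  # -> Austin
```

Real implementations would use natural language processing and taxonomy-driven auto-tagging rather than exact token matching, but the flow is the same: recognize an entity in the query, then surface its graph description as an action-oriented panel.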

Want to explore the potential of semantic technologies in your organization? Join us for the talks, tutorials, and workshops at SEMANTiCS US in April 2020. As a bonus, if you say “I’m a Rockstar” to an EK employee at the conference, there may be some prizes available!

 

Upcoming Webinar: Knowledge Graphs, AI and Semantics: How do these technologies fit together? https://enterprise-knowledge.com/upcoming-webinar-knowledge-graphs-ai-and-semantics-how-do-these-technologies-fit-together/ Tue, 14 Jan 2020 16:27:54 +0000 https://enterprise-knowledge.com/?p=10331 Confused about these new technologies and why companies like Google, Amazon, and Microsoft rely on them? Join a panel of experts, including Alan Morrison (Sr. Research Fellow, Emerging Tech, PwC), Aaron Bradley (Knowledge Graph Strategist, Electronic Arts), Kurt Cagle (Contributing … Continue reading

The post Upcoming Webinar: Knowledge Graphs, AI and Semantics: How do these technologies fit together? appeared first on Enterprise Knowledge.

Confused about these new technologies and why companies like Google, Amazon, and Microsoft rely on them? Join a panel of experts, including Alan Morrison (Sr. Research Fellow, Emerging Tech, PwC), Aaron Bradley (Knowledge Graph Strategist, Electronic Arts), Kurt Cagle (Contributing writer to Forbes Magazine), Lulit Tesfaye (EK Data and Information Management Practice Lead), and Yanko Ivanov (EK Technology Partner Manager) as they answer questions about how AI, Knowledge Graphs, and Semantic tools work together to solve complicated business problems. Additionally, the webinar will provide insight into the key topics and questions thought leaders will explore at this year’s SEMANTiCS conference in Austin, TX.

The webinar will be held on Thursday, January 30th from 1:00 – 2:00 PM EST. 

View event details and register here.

 

 

SEMANTiCs US to Promote Next Generation of Transparent AI https://enterprise-knowledge.com/semantics-us-to-promote-next-generation-of-transparent-ai/ Fri, 27 Dec 2019 16:11:18 +0000 https://enterprise-knowledge.com/?p=10181 Teaming with Semantic Web Company, Enterprise Knowledge is sponsoring the inaugural SEMANTiCs US conference to be held in Austin, TX, from April 21-23 at the AT&T Conference Center. The multi-day conference will showcase proven practices and real-life case studies of … Continue reading

The post SEMANTiCs US to Promote Next Generation of Transparent AI appeared first on Enterprise Knowledge.

Teaming with Semantic Web Company, Enterprise Knowledge is sponsoring the inaugural SEMANTiCS US conference, to be held in Austin, TX, from April 21-23 at the AT&T Conference Center. The multi-day conference will showcase proven practices and real-life case studies of companies using semantics to build artificial intelligence solutions that matter to their business. Additionally, SEMANTiCS US will feature three simultaneous tracks to ensure active engagement for all levels of interest and experience, from novice to expert.

The conference will cover topics such as Explainable AI, Knowledge Graphs, Semantic Technologies, and Data Governance. A number of renowned experts will speak at the conference, including the following highlights:

  • Aaron Bradley, Senior Manager for Web Channel Strategy at Electronic Arts.
  • Alan Morrison, Senior Research Fellow, Emerging Technologies at PricewaterhouseCoopers.
  • Amit Sheth, Founding Director of the Artificial Intelligence Institute and Professor of Computer Science & Engineering at the University of South Carolina.
  • Ruben Verborgh, Technology Advocate for Inrupt.

Joe Hilger, COO of Enterprise Knowledge and Chair of SEMANTiCS US, is excited about the speakers already lined up: “We have fantastic speakers joining the conference. It will be a great opportunity for people to engage with some of the leading experts and business stakeholders in the field and learn more about Knowledge Graphs and Explainable AI.”

More information on SEMANTiCS 2020 can be found here: https://2020-us.semantics.cc/.

 

About Semantics Conference Series

SEMANTiCS is an established knowledge hub where technology professionals, industry experts, researchers, and decision makers can learn about new technologies, innovations, and enterprise implementations in the fields of Knowledge Graphs, Linked Data, and Semantic AI. Founded in 2005, SEMANTiCS is the only European conference at the intersection of research and industry.

In 2020, SEMANTiCS will be held in Austin, TX, in April and in Amsterdam, NL, in September.

 

About EK

Enterprise Knowledge (EK) is a services firm that integrates Knowledge Management, Information Management, Information Technology, and Agile Approaches to deliver comprehensive solutions. Our mission is to form true partnerships with our clients, listening and collaborating to create tailored, practical, and results-oriented solutions that enable them to thrive and adapt to changing needs.

 

About Semantic Web Company

A leading provider of graph-based metadata, search, and analytics solutions, Semantic Web Company helps Global 500 customers manage corporate knowledge models, extract useful knowledge from big data sets, and integrate both structured and unstructured data to recommend evolved strategies for organizing information at scale. Founded in 2004, Semantic Web Company is the vendor of the PoolParty Semantic Suite (www.poolparty.biz) and was named to KMWorld’s prestigious list of “100 Companies that Matter in Knowledge Management” every year from 2016 to 2019. The company was recently added to Gartner’s Magic Quadrant for Metadata Management Solutions as a Visionary, and founder and CEO Andreas Blumauer has been nominated to the Forbes Technology Council.
