Enterprise Articles - Enterprise Knowledge (http://enterprise-knowledge.com/tag/enterprise/)

5 Steps For Building Your Enterprise Semantic Recommendation Engine
https://enterprise-knowledge.com/5-steps-for-building-your-enterprise-semantic-recommendation-engine/ (Tue, 10 Oct 2023)

In today’s digital landscape, where content overload is a constant challenge, recommendation engines (also known as personalized content recommenders, or just “recommenders”) have emerged as powerful solutions that help users cut through the noise and discover relevant information. These intelligent tools enhance user experiences by surfacing personalized recommendations based on user activity or specific input selections at the point of need. While people may be most familiar with recommendation engines utilized by the likes of Amazon and Netflix, recommendation engines have been successfully implemented in various domains beyond big tech. Businesses can utilize internal recommendation engines to streamline knowledge management processes, ensuring employees have access to the most relevant resources and expertise within the organization. 

Traditional Approach vs. Semantic Approach

There are two mainstream frameworks for developing a recommendation engine: traditional recommendation engines and semantic recommendation engines. It is important to select the right framework based on your use case, scale and availability of data, and internal resources.

Traditional recommender approaches involve leveraging user analytics to establish statistical associations between content and user interests. However, these approaches come with challenges such as the “cold start” problem, arising from a lack of historical user data when a recommender is initialized, making it challenging to provide accurate and tailored recommendations from the outset. Additionally, these approaches rely heavily on machine learning resources and subject matter expertise, further adding to the complexity and investment required in building and maintaining such systems.

In contrast, the semantic recommender approach focuses on constructing a finite-scale semantic model and leveraging its capabilities to recommend desired content. This approach has several advantages, including the ability to work with better scoped data sets and to utilize an organization’s existing taxonomies and metadata as contextual information for machine learning algorithms. With less emphasis on machine learning expertise and a faster development time, the semantic recommender approach provides a practical and efficient solution to building recommenders that meet immediate business needs.

Establishing a Semantic Recommendation Engine

When using a semantic approach, the process of building the recommendation engine should be specifically tailored to capitalize on the advantages of these solutions. EK has successfully used the following five iterative steps to launch production-grade semantic recommendation engines and applications that help businesses make targeted recommendations and improve their content personalization journey.

1. Define the Use Cases

When building a semantic recommendation engine, clearly defining the use cases is crucial for successful implementation. This involves identifying the corresponding personas, problem statement, and the necessary inputs and outputs. To initiate recommendations, the system requires trigger inputs, typically consisting of 1-3 user-provided inputs that serve as the basis for generating recommendations. Having known information about the trigger inputs provides a starting place and can facilitate the recommendation process by narrowing down the scope and improving the relevance of the suggestions. 

For example, one of our healthcare provider clients had a semantic recommendation engine use case for enabling clinicians to search for content related to illnesses based on patient information, so that they can pass on that content to patients and assist them in treatment and recovery. The inputs in this case would be the patient information and diagnosis, and the outputs would be content related to those inputs.

The main output of the recommendation engine should provide the minimum information required for users to understand what the recommender offers and to make an informed decision based on the presented information. Alongside the main output, having known information about the recommendations can provide additional context and details to enhance the user experience or help to validate the results. By carefully defining the use case and considering these elements, businesses will effectively plan to generate meaningful and relevant recommendations for users.

2. Create Supporting Data Models

Once the use case has been defined, the next step in building a semantic recommendation engine is to create supporting data models, including taxonomies and ontologies. Taxonomies establish hierarchical structures and relationships between various content items, enabling efficient content classification based on shared characteristics. On the other hand, ontologies define the complex relationships and dependencies between different entities, attributes, and concepts, fostering a deeper understanding of the data. By leveraging these relationship-based structures, the semantic recommendation engine can provide more contextually relevant and personalized suggestions.

To effectively design ontologies and supporting taxonomies for the selected use case, an iterative approach is crucial. This involves strategically designing the models to capture the shared characteristics between different content types. During the design process, the models are refined to ensure accurate representation of the relationships and connections between different data elements. Shared metadata can then be incorporated into the ontologies, enriching the content with valuable context and enabling the engine to make more informed recommendations based on those shared metadata. In our healthcare provider example, shared metadata such as content type, content keywords, publication date, and author information could be used to provide context to medical content and improve the relevancy of content recommendations. 
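
To make this concrete, here is a minimal sketch (not any client's actual model) of how such an ontology could be expressed in Python with rdflib. The namespace, class names, and property names are illustrative assumptions based on the healthcare example above.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Illustrative namespace; any IRI your organization controls would work here.
EX = Namespace("https://example.org/healthcare/")

g = Graph()
g.bind("ex", EX)

# Core classes drawn from the healthcare use case: diagnoses, medical subjects, and content.
for cls in ("Diagnosis", "MedicalSubject", "Content"):
    g.add((EX[cls], RDF.type, RDFS.Class))

# Relationships connecting the classes (the ontology's "edges").
g.add((EX.relatesToSubject, RDFS.domain, EX.Diagnosis))
g.add((EX.relatesToSubject, RDFS.range, EX.MedicalSubject))
g.add((EX.aboutSubject, RDFS.domain, EX.Content))
g.add((EX.aboutSubject, RDFS.range, EX.MedicalSubject))

# Shared metadata fields that enrich content and provide context for recommendations.
for prop in ("contentType", "keyword", "publicationDate", "author"):
    g.add((EX[prop], RDFS.domain, EX.Content))

print(g.serialize(format="turtle"))
```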

3. Construct the Graph

After creating the supporting data models, the next step in building a semantic recommendation engine is to construct the graph. The graph acts as a database of nodes and connections between nodes (called edges) that houses all of the content relationships defined in the ontology model.

Building the graph involves both ingesting and enriching source data. Ingestion maps raw data to nodes and edges in the graph. Enrichment appends additional attributes, tags, and metadata to enhance the data. This enriched data is then transformed into semantic triples, which are subject-predicate-object structures that capture relationships. In our example, the healthcare provider could transform their enriched data into triples that capture the relationships between diagnoses and medical subjects, and between medical subjects and content. 

Converting data into a web of semantic triples and loading it into the graph enables efficient querying. The knowledge graph’s flexibility also enables continuous integration of new data to keep recommendations relevant. This means the healthcare provider can query their graph to find medical content that relates to specific diagnoses, and they can continue to add content to their graph database without affecting the existing content, or changing the schema by which they have to run queries.
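
As an illustration of what loading enriched data as triples can look like, the sketch below stores one hypothetical enriched record in an RDF graph using rdflib. The record fields and property names are invented for the example and reuse the illustrative namespace from the ontology sketch above.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("https://example.org/healthcare/")
g = Graph()

# A hypothetical enriched record produced by the ingestion and enrichment steps above.
record = {
    "diagnosis": "TypeTwoDiabetes",
    "subject": "GlucoseManagement",
    "content": "patient-guide-042",
    "contentType": "PatientGuide",
}

diagnosis = EX[record["diagnosis"]]
subject = EX[record["subject"]]
content = EX[record["content"]]

# Each g.add() call stores one subject-predicate-object triple.
g.add((diagnosis, RDF.type, EX.Diagnosis))
g.add((subject, RDF.type, EX.MedicalSubject))
g.add((content, RDF.type, EX.Content))
g.add((diagnosis, EX.relatesToSubject, subject))   # diagnosis -> medical subject
g.add((content, EX.aboutSubject, subject))         # content -> medical subject
g.add((content, EX.contentType, Literal(record["contentType"])))

print(f"{len(g)} triples loaded into the graph")
```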

4. Define and Apply Recommendation Logic

Once the graph is constructed, the next step in building a semantic recommendation engine is to define and apply the logic for generating recommendations. This involves selecting an appropriate approach – either deterministic or statistical – and implementing the corresponding recommendation logic. 

In the deterministic approach, various techniques such as path finding, matching, and scoring logic are used to leverage the metadata embedded within the graph to generate recommendations. An example of this approach is content-based recommendation, where recommendations are generated based on the similarity of content attributes. 

On the other hand, the statistical approach utilizes graph neural networks or classifiers. These models leverage the power of graph-based machine learning algorithms to determine the probability of matches through weighting algorithms. An example of this approach is link prediction recommendation, where the likelihood of connections between items is predicted based on their graph structure. 

By defining and implementing the appropriate logic, the semantic recommendation engine can effectively generate recommendations that align with user preferences and maximize their relevance and utility. Our example healthcare provider may choose a deterministic recommendation approach if they want to recommend content based on how closely its metadata matches the clinician-provided inputs.
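
For instance, a deterministic, metadata-driven recommendation could be expressed as a SPARQL query over the graph sketched earlier. This is only one possible formulation; the property names and the simple keyword-count scoring are illustrative assumptions, not the provider's actual logic.

```python
# A deterministic, path-based recommendation expressed as SPARQL: walk from a
# clinician-provided diagnosis to medical subjects and on to content, then rank
# content by the number of keyword tags it carries (a stand-in for real scoring logic).
RECOMMEND = """
PREFIX ex: <https://example.org/healthcare/>
SELECT ?content (COUNT(?keyword) AS ?score)
WHERE {
  ex:TypeTwoDiabetes ex:relatesToSubject ?subject .   # trigger input
  ?content ex:aboutSubject ?subject ;
           ex:keyword ?keyword .
}
GROUP BY ?content
ORDER BY DESC(?score)
LIMIT 5
"""

for row in g.query(RECOMMEND):  # `g` is the rdflib graph built in the previous sketch
    print(row.content, row.score)
```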

5. Integrate with Enterprise Systems/Applications

Integration with enterprise systems enables users to interact with the graph through various channels such as a custom web application, an analytics tool, or a machine learning pipeline, supporting use cases including semantic search and chatbots. By integrating the semantic recommendation engine into enterprise systems, businesses can scale existing data models to multiple use cases and provide users with enhanced experiences, facilitate knowledge discovery, and promote streamlined knowledge management processes.

For example, the healthcare provider is able to integrate the recommendation engine on the backend with their internal database of content and on the frontend with their clinician portal to let clinicians search for and receive content recommendations in their browser, without having to manually look through their database.

Conclusion

Ultimately, semantic recommendation engines have emerged as essential tools in the digital era, offering personalized content discovery and driving user engagement. Leveraging relationship-based data models, these engines provide contextual suggestions without relying on large volumes of usage data.

To implement an impactful semantic recommendation engine with these 5 steps, key factors include:

  • Building the case for the importance of graph-based recommendations
  • Acquiring the necessary tooling and architecture
  • Establishing and training key roles like data engineers, semantic engineers, and architects

By considering these elements, businesses can efficiently build semantic recommenders that empower them to reap the benefits in navigating today’s data-saturated landscape. 

Does your organization need support building your recommender toolkit for success? Contact us at info@enterprise-knowledge.com.

 

 

Elevating Your Point Solution to an Enterprise Knowledge Graph
https://enterprise-knowledge.com/elevating-your-point-solution-to-an-enterprise-knowledge-graph/ (Wed, 16 Nov 2022)

I am fortunate to be able to speak with many vendors in the Graph space, as well as company executives and leaders in IT and KM departments around the world. So many of these people are excited about the power of knowledge graphs and the graph databases that power them. They want to know how to turn their point solution into an enterprise-wide knowledge graph powering AI solutions and solving critical problems for their clients or their companies. I have answered this question enough times that I thought I would share it in a blog post for others to learn from.

Knowledge graphs are new and exciting tools. They provide a different way of managing information and can be used to solve a wide range of problems. Early adopters of this technology typically start with a small, targeted solution to “try it out.” This is a smart way to learn about any new technology, but all too often the project stops at a point solution or becomes pigeonholed for solving one problem when it can be used to solve so many more. The organizations that can grow and expand their graph solution have three things in common:

  • A backlog of use cases,
  • An enterprise ontology, and
  • Marketing and change management.

Knowledge graphs can solve many different types of problems. They can be recommendation engines, search enhancers, AI engines, data fabrics, or knowledge portals. That first solution that an organization picks only does one of these things, and it may also be targeted to just one department or one problem. This is a great way to start, but it can also lead to a stovepipe solution that misses some of the real power of graphs. 

When we start knowledge graph projects with new clients, we always run a workshop with business users from across the organization. During this workshop, we share examples of what can be done with knowledge graphs and help them identify a backlog of use cases that their new knowledge graph can solve. This approach creates excitement for the new technology and gives the project team and the business a vision for how to add to what was built as part of the first solution. Once the first solution is effectively launched, the organization has a roadmap for what is next. If you have already launched your solution and do not have a backlog of use cases, that is okay. You can host a graph workshop at any time to create a list of the next projects. The most important thing is to get that backlog in place and begin to share it with your leadership team so that they can budget for the next project.

The structure of a graph is defined by an ontology. Think of an ontology as a model describing the information assets of the business and how they fit together. Graph databases are easy to change, so organizations can get started with simple knowledge graphs that solve targeted problems without an ontologist. The problem is, the solution will be designed to solve a specific problem rather than being aligned with the business as a whole. A good ontologist will design a model that both solves the initial problem being addressed and aligns with the larger business model of the organization. For example, a graph-enhanced search at a manufacturing company may have products, customers, factories, parts, employees, and designs. The search could be augmented with a simple knowledge graph that describes parts. An ontologist would use this opportunity to model the relationships of all of the organization’s entities up front. This more inclusive approach would allow for a wider range of search results and could serve as the baseline for a number of other projects. This same graph could fuel a recommendation service or chatbot for their customers. It could also be used as the map for their data elements to create a data fabric that simplifies the way people access data within the organization. One graph, properly designed, can easily expand to become the enterprise backbone for a number of different enterprise-centric applications.
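
As a rough sketch of what that broader, up-front model can look like, the snippet below expresses the manufacturing entities and their relationships as a small RDF graph in Python with rdflib. The relationship names are invented for illustration, not a prescribed enterprise ontology.

```python
from rdflib import Graph

# The same part-centric graph, modeled up front alongside the organization's other
# entities so later use cases (recommendations, chatbots, data fabric) can reuse it.
enterprise_model = """
@prefix ex: <https://example.org/manufacturing/> .

ex:Part     ex:usedIn         ex:Product .
ex:Part     ex:specifiedBy    ex:Design .
ex:Product  ex:orderedBy      ex:Customer .
ex:Product  ex:manufacturedAt ex:Factory .
ex:Employee ex:worksAt        ex:Factory .
"""

g = Graph()
g.parse(data=enterprise_model, format="turtle")
print(len(g), "relationships in the starter enterprise model")
```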

Building a backlog of use cases and creating a proper ontology helps ensure that there is a framework and plan to grow. The final challenge in turning a point solution into an enterprise knowledge graph has to do with marketing the solution. Knowledge graphs and graph databases are still new, and the number of things they can do is very broad (see Using Knowledge Graph Data Models to Solve Real Business Problems). As a result, executives often do not know what to do with knowledge graphs. It is important to set success criteria for your point solution and regularly communicate the value it adds to the business. This brings attention to the solution and opens the door for discussions about expanding the knowledge graph. Once you have the executive’s attention, educate them as to what knowledge graphs can do through the industry literature and the backlog of use cases that you have already gathered. This will allow executives to see how they can get even greater value from their investment and drive more funding for your knowledge graph.

Knowledge graphs are powerful information management tools that are only now becoming fully understood. The leading graph database vendors offer free downloads of their software so that organizations can start to understand the true power of these tools. Unfortunately, too often these downloads are used only for small projects that disappear over time. The simple steps I have described above can pave the way to turn your initial project into an enterprise platform powering numerous, critical Artificial Intelligence solutions.

Learn more about how we enable this for our clients by contacting us at info@enterprise-knowledge.com.

Knowledge Cast – Bryan Yee of Amgen
https://enterprise-knowledge.com/knowledge-cast-bryan-yee-of-amgen/ (Tue, 02 Aug 2022)

In this episode of Knowledge Cast, Enterprise Knowledge CEO Zach Wahl speaks with Bryan Yee, Director of Knowledge Management at Amgen, one of the world's largest biotechnology companies. Bryan has been at Amgen for over 16 years and has served as the Director of Knowledge Management since 2019.

His team is focused on tapping into the collective genius of drug developers, with a specific focus on fostering a culture of psychological safety and leveraging data science to reduce the friction of knowledge sharing and discovery.


 

 

If you would like to be a guest on Knowledge Cast, Contact Enterprise Knowledge for more information.

6 Questions to Help Determine Where to Start Your KM Transformation
https://enterprise-knowledge.com/6-questions-to-help-determine-where-to-start-your-km-transformation/ (Wed, 21 Apr 2021)

“Where do we start?” It’s a question that can seem daunting for the organizations that EK works with as they contemplate moving from developing a Knowledge Management (KM) Strategy to implementation. This question invites uncertainty and even skepticism as leadership reflects on what resources will be required, how much time it will take out of their staff’s days, and past KM efforts that have commenced and stopped multiple times without showing value. As we work with organizations to understand their current state of KM maturity and develop a Target State Vision and Roadmap for how to better connect their people to the knowledge and information they need to do their jobs, our job as KM consultants is to ensure that there is no ambiguity around this question.  

One way we do this at EK is by defining a series of pilots, which are limited-scope efforts focused on quickly demonstrating value to organizational stakeholders by solving targeted issues and exploring new technologies and practices. Each pilot is intended to validate that the KM Strategy approach we've developed will work for the organization and to determine how that pilot can be scaled. These pilots also serve to drive incremental change and excitement for "what could be." This exercise in defining pilots raises the question, though: "How do we know where to get started?" 

Here are 6 questions that help us determine the best approach for an organization to start their KM transformation. 

1. Where is the low-hanging fruit?

"Low-hanging fruit" is a commonly used metaphor; what I mean by it here is that we're looking to identify the simplest activity to implement within an organization that will produce immediate, tangible value. What this means from a practical standpoint is that the pilot has a low level of complexity. There are a few ways to judge this:

  • The pilot can be conducted using only the internal expertise and experience of the organization's staff. In this scenario, no external subject matter expertise or consultancy is required. The organization can get started today with the skills and competencies it has in house.
  • The pilot involves one department (or business area) or up to two closely-aligned departments. Scoping the pilot to one or two departments allows an organization to test a methodology or process within a specific function before it’s adapted and scaled for the enterprise’s benefit. 
  • The pilot is building off and enhancing a pre-existing technology or practice. We’re always looking for examples of “good KM” when we’re conducting our Current State Assessments because we know that there are strengths that can be leveraged. Some of our pilots do just that – they improve something that is already in place that has the potential to be transformative if modified or if the right incentives are in place to increase adoption.

2. What does the organization care about, and what would get them excited?

At the onset of a KM Strategy project, we ask staff at different levels of the organization, “If you had easier access to the people and information you need to effectively execute your daily tasks and responsibilities, what would that mean for you? How would that help you be successful?” Ultimately, we’re trying to understand the downstream effects and business value of KM for the organization. 

In every organization, the downstream impacts and business value of KM can vary depending on the teams and departments whose insights are being solicited. For those in Sales roles, for example, it could be access to accurate, current, and competitive market information that is going to help them pursue and close sales deals. For those in Customer Service positions, it could be having the ability to find customer and account information to provide the right level of service to customers based on what the organization has done for them in the past. For other organizations, it's maintaining continuity of operations by ensuring that knowledge does not walk out the door when employees leave or retire. 

It’s these value statements that help us think through what pilots can serve to further these goals:

  • Does the organization need a pilot around content clean-up to ensure that when people do come across information, they have confidence that it’s up-to-date and accurate, and they can use it to take action or make a decision?
  • Could we come up with a pilot that helps to define what customer-facing staff would want to see when searching for past information on customers and accounts? 
  • Do we need to consider a pilot around experimenting with knowledge transfer techniques to support colleagues in sharing what they know throughout their tenure with an organization?

My colleague, Mary Little, discusses the importance of aligning KM with your organization’s strategic goals and this can start as early as the pilot definition phase. 

3. Who is interested in being an early adopter of KM, or is equipped with the capabilities and resources to support a pilot immediately?

If we're conducting a KM Strategy project at the enterprise level, we always ask to speak with staff who represent different functions and departments within the organization. We do this for a variety of reasons, one being that it helps us understand those pockets within the organization that are acutely experiencing a KM challenge and are eager to see change. This approach not only helps us brainstorm options for what recommendations and pilots we will define for the organization, but it also helps us identify who might want to be a part of a pilot. Identifying early adopters in the form of a department, group, or team helps the organization drive interest in and momentum for its KM initiatives. This is critical for the long-term adoption and sustainability of a holistic KM program, which will be focused on solving different challenges over time and will necessitate changes in how people work. 

Another angle to consider is whether there is a department or group who has the capabilities and resources needed to support a pilot immediately. Part of this involves exploring what skill sets will be needed to perform associated responsibilities and whether the organization can draw on current employees with specific expertise to support the implementation of a pilot. Conversely, it is also important to gain an understanding of an organization’s internal processes around approving funding for projects. It can be beneficial to have these conversations to determine whether departments have their own pool of funding to use at their discretion or whether projects have to go through a more formal review process that happens at different intervals throughout the year.  

4. Is there an existing organizational initiative that we can align a KM pilot to?

In developing a KM Strategy, we look at five different dimensions within an organization: People, Process, Content, Culture, and Technology. Because we’re looking across these dimensions, we often hear about other initiatives that are going on in the organization. We love to hear about these because they can be tangential to what we’re doing and there are opportunities for alignment. In the past, these tangential initiatives have taken the form of:

  • Data inventory and governance efforts.
  • Enterprise search projects.
  • Process improvement efforts.
  • Initiatives to consolidate content management or customer relationship management systems.  
  • Records management implementations.
  • Selection and implementation of a learning management system. 
  • Sunsetting legacy knowledge repositories and related content migration efforts.

Just as it can be easier to secure support for a pilot if it’s tied to an organization’s strategic objective, it can be easier to secure support for a pilot if you can communicate how it will support the success of another initiative. By aligning a KM pilot to another relevant initiative, you’re helping to ensure the maximal effectiveness of both.

5. How many people will the proposed pilot impact?

In considering what pilots we recommend prioritizing as part of a KM transformation, we're thinking about what is going to drive the biggest return on investment. Part of that has to do with how many people will be affected by the proposed change. Early on in our KM Strategy engagements, we request an overview of our client's organizational structure, their departments, and which departments have interdependencies. This gives us a sense of how big the departments are in relation to each other and which work closely with one another. In return, as we conduct interviews, focus groups, and workshops, we start to understand the degree to which staff are experiencing similar KM challenges regardless of where they sit in the organization, and which KM challenges are most pressing. Armed with this information, we can think through how to prioritize our pilots based on how many people they will positively impact. These pilots often end up being holistic efforts that will benefit all departments over time, as they are scaled. 

6. How foundational is the pilot?

When developing pilots and recommendations, we are also outlining a roadmap across which these can take place. Our roadmaps span different timeframes based on an organization's needs and resources, but they can include both "foundational" and "advanced" pilots. A foundational pilot is one that helps establish the success of subsequent efforts in the roadmap. This could include, for example, developing metrics that monitor the success of KM pilots, enable alignment across different initiatives, and allow the organization to make data-driven decisions on how to adapt its KM Strategy, as needed. We may also include, if the organization is ready, advanced pilots that lay the groundwork for AI applications – for example, developing a knowledge graph to connect and show meaningful relationships between data regardless of where it is located. While the advanced pilots can sometimes be more "exciting" work, we want to ensure an organization is laying the foundation to explore advanced AI capabilities in the right way and in a way that will be scalable and sustainable. Prioritizing foundational pilots on your organization's KM Strategy Roadmap is essential to building that infrastructure.

Closing

Regardless of how big your company is, how many millions of documents your organization might maintain, or how widely disparate the processes are between staff to capture critical information, we know it can be overwhelming to contemplate the question “Where do we start?” But it doesn’t have to be. We’re here to help! Contact Us at Enterprise Knowledge to navigate this ambiguity and jump start your KM transformation.  

 

Webinar: Building a Connected Search Experience: Bringing KM and AI Together to Fuel Findability
https://enterprise-knowledge.com/webinar-building-a-connected-search-experience-bringing-km-and-ai-together-to-fuel-findability/ (Fri, 15 May 2020)

Presented by Joe Hilger, COO, and Stephon Harris, Senior Developer, on Wednesday, May 13th. In this video, Hilger and Harris discuss how advanced search can leverage KM and AI in order to maximize an organization’s search capabilities and create user-centered, highly intuitive results.

What is a Semantic Architecture and How do I Build One?
https://enterprise-knowledge.com/what-is-a-semantic-architecture-and-how-do-i-build-one/ (Thu, 02 Apr 2020)

Can you access the bulk of your organization’s data through simple search or navigation using common business terms? If so, your organization may be one of the few that is reaping the benefits of a semantic data layer. A semantic layer provides the enterprise with the flexibility to capture, store, and represent simple business terms and context as a layer sitting above complex data. This is why most of our clients typically give this architectural layer an internal nickname, referring to it as “The Brain,”  “The Hub,” “The Network,” “Our Universe,” and so forth. 

As such, before delving deep into the architecture, it is important to align on and understand what we mean by a semantic layer and its foundational ability to solve business and traditional data management challenges. In this article, I will share EK’s experience designing and building semantic data layers for the enterprise, the key considerations and potential challenges to look out for, and also outline effective practices to optimize, scale, and gain the utmost business value a semantic model provides to an organization.

What is a Semantic Layer?

A semantic layer is not a single platform or application, but rather the realization or actualization of a semantic approach to solving business problems by managing data in a manner that is optimized for capturing business meaning and designing it for end user experience. At its core, a standard semantic layer comprises one or more of the following semantic approaches: 

  • Ontology Model: defines the types of things that exist in your business domain and the properties that can be used to describe them. An ontology provides a flexible and standard model that organizes structured and unstructured information through entities, their properties, and the way they relate to one another.
  • Enterprise Knowledge Graph: uses an ontology as a framework to add in real data and enable a standard representation of an organization’s knowledge domain and artifacts so that it is understood by both humans and machines. It is a collection of references to your organization’s knowledge assets, content, and data that leverages a data model to describe the people, places, and things and how they are related. 

A semantic layer thus pulls in these flexible semantic models to allow your organization to map disparate data sources into a single schema or a unified data model that provides a business representation of enterprise data in a “whiteboardable” view, making large data accessible to both technical and nontechnical users. In other words, it provides a business view of complex knowledge, information, and data and their assorted relationships in a way that can be visually understood.

How Does a Semantic Layer Provide Business Value to Your Organization?

Organizations have been successfully utilizing data lakes and data warehouses in order to unify enterprise data in a shared space. A semantic data layer delivers the best value for enterprises that are looking to support the growing consumers of big data, business users, by adding the "meaning" or "business knowledge" behind their data as an additional layer of abstraction, or as a bridge between complex data assets and front-end applications such as enterprise search, business analytics and BI dashboards, chatbots, natural language processing, etc. For instance, if you ask a non-semantic chatbot, "what is our profit?" and it recites the definition of "profit" from the dictionary, it does not have a semantic understanding or context of your business language and what you mean by "our profit." A chatbot built on a semantic layer would instead respond with something like a list of revenue generated per year and the respective percentage of your organization's profit margins.

[Figure: visual representation of how a semantic layer draws connections between your data management and storage layers]

With a semantic layer as part of an organization’s Enterprise Architecture (EA), the enterprise will be able to realize the following key business benefits: 

  • Bringing Business Users Closer to Data: business users and leadership are closer to data and can independently derive meaningful information and facts to gain insights from large data sources without the technical skills required to query, clean up, and transform large data.   
  • Data Processing: greater flexibility to quickly modify and improve data flows in a way that is aligned to business needs, and the ability to support future business questions and needs that are currently unknown (by traversing your knowledge graph in real time). 
  • Data Governance: unification and interoperability of data across the enterprise minimizes the risk and cost associated with migration or duplication efforts to analyze the relationships between various data sources. 
  • Machine Learning (ML) and Artificial Intelligence (AI): serves as the source of truth for providing definitions of the business data to machines and enables the foundation for deep learning and analytics to help the business answer or predict business challenges.

Building the Architecture of a Semantic Layer

A semantic layer consists of a wide array of solutions, ranging from the organizational data itself, to data models that support object- or context-oriented design, semantic standards to guide machine understanding, as well as tools and technologies to enable and facilitate implementation and scale.

[Figure: semantic layer architecture, moving from data sources, to data modeling, transformation, unification, and standardization, to graph storage and a unified taxonomy, to the semantic layer itself, along with the business outcomes it supports]

The five foundational steps we have identified as critical to building a scalable semantic layer within your enterprise architecture are: 

1. Define and prioritize your business needs: In building semantic enterprise solutions, clearly defined use cases provide the key question or business reason your semantic architecture will answer for the organization. This in turn drives an understanding of the users and stakeholders, articulates the business value or challenge the solution will solve for your organization, and enables the definition of measurable success criteria. Active SME engagement and validation to ensure proper representation of their business knowledge and understanding of their data is critical to success. Skipping this foundational step will result in missed opportunities for ensuring organizational alignment and return on your investment (ROI). 

2. Map and model your relevant data: Many organizations we work with support a data architecture that is based on relational databases, data warehouses, and/or a wide range of content management cloud or hybrid cloud applications and systems that drive data analysis and analytics capabilities. This does not necessarily mean that these organizations need to start from scratch or overhaul their working enterprise architecture in order to adopt/implement semantic capabilities. For these organizations, it is more effective to start increasing the focus on data modeling and designing efforts by adding models and standards that will allow for capturing business meaning and context (see section below on Web Standards) in a manner that provides the least disruptive starting point. In such scenarios, we typically select the most effective approach to model data and map from source systems by employing the relevant transformation and unification processes (Extract, Transform, Load – ETLs) as well as model-mapping best practices (think ‘virtual model’ versus stored data model in graph storages like graph databases, property graphs, etc.) that are based on the organization’s use cases, enterprise architecture capabilities, staff skill sets, and primarily provide the highest flexibility for data governance and evolving business needs.

The state of an organization’s data typically comes in various formats and from disparate sources. Start with a small use case and plan for an upfront clean-up and transformation effort that will serve as a good investment to start organizing your data and set stakeholder expectations while demonstrating the value of your model early.
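
The sketch below illustrates the kind of mapping step described here, transforming rows from a hypothetical relational source into triples that follow a unified model. The table columns, namespace, and property names are assumptions made for the example, not a prescribed mapping.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("https://example.org/model/")

# Hypothetical rows pulled from a relational source (e.g., a customers table).
rows = [
    {"customer_id": "C-1001", "name": "Acme Corp", "region": "EMEA"},
    {"customer_id": "C-1002", "name": "Globex", "region": "APAC"},
]

g = Graph()
for row in rows:
    customer = EX[f"customer/{row['customer_id']}"]
    g.add((customer, RDF.type, EX.Customer))            # map each row to a typed node
    g.add((customer, EX.name, Literal(row["name"])))     # map columns to properties
    g.add((customer, EX.region, Literal(row["region"])))

# The resulting triples can then be loaded into the graph store behind the semantic layer.
print(g.serialize(format="turtle"))
```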

3. Leverage semantic web standards to ensure interoperability and governance: Despite the required agility to evolve data management practices, organizations need to think long term about scale and governance. Semantic Web Standards provide the fundamentals that enable you to adopt standard frameworks and practices when kicking off or advancing your semantic architecture. The most relevant standards-based practices for the enterprise are to: 

  • Employ an established data description framework to add business context to your data to enable human understanding and natural language meaning of data (think taxonomies, data catalogs, and metadata); 
  • Use standard approaches to manage and share the data through core data representation formats and a set of rules for formalizing data to ensure your data is both human-readable and machine-readable (examples include XML/RDF formats); 
  • Apply a flexible logic or schema to map and represent relationships, knowledge, and hierarchies between your organization’s data (think ontologies/OWL);
  • Adopt a semantic query language to access and analyze the data for natural language and artificial intelligence systems (think SPARQL); a minimal sketch of RDF and SPARQL working together follows this list.
  • Start with available existing/open-source semantic models and ecosystems for your organization to serve as a low-risk, high-value stepping stone (think Open Linked Data/Schema.org). For instance, organizations in the financial industry can start their journey by using a starter ontology such as the Financial Industry Business Ontology (FIBO), while we have used the Gene Ontology as a jumping-off point for biopharma, enriching or tailoring the model for the specific needs of the organization.
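
To make these standards concrete, the following minimal sketch describes two assets in RDF (serialized as Turtle and reusing schema.org terms) and queries them with SPARQL via rdflib. The instance data is invented for illustration.

```python
from rdflib import Graph

# Describe two assets in RDF, serialized as Turtle and reusing schema.org terms...
data = """
@prefix schema: <https://schema.org/> .
@prefix ex:     <https://example.org/> .

ex:report1 a schema:Report ;
    schema:name   "Quarterly Revenue Report" ;
    schema:author ex:jane .

ex:jane a schema:Person ;
    schema:name "Jane Doe" .
"""

g = Graph()
g.parse(data=data, format="turtle")

# ...then ask a question of the data with SPARQL.
query = """
PREFIX schema: <https://schema.org/>
SELECT ?reportName ?authorName WHERE {
  ?report a schema:Report ;
          schema:name   ?reportName ;
          schema:author ?author .
  ?author schema:name   ?authorName .
}
"""
for row in g.query(query):
    print(row.reportName, "-", row.authorName)
```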

4. Scale with Semantic Tools: Semantic technology components in a more mature semantic layer include graph management applications that serve as middleware, powering the storage, processing, and retrieval of your semantic data. In most scaled enterprise implementations, the architecture for a semantic layer includes a graph database for storing the knowledge and relationships within your data (i.e. your ontology), an enterprise taxonomy/ontology management or a data cataloging tool for effective application and governance of your metadata on enterprise applications such as content management systems, and text analytics or extraction tools to support  advanced capabilities such as Machine Learning (ML) or natural language processing (NLP) depending on the use cases you are working with. 

5. "Plug in" your customer/employee-facing applications: The most practical and scalable semantic architecture will successfully support upstream customer- or employee-facing applications such as enterprise search, data visualization tools, end services/consuming systems, and chatbots, just to name a few potential applications. This way you can "plug" semantic components into other enterprise solutions, applications, and services. With this as your foundation, your organization can now start taking advantage of advanced artificial intelligence (AI) capabilities such as knowledge/relationship and text extraction tools to enable Natural Language Processing (NLP), Machine Learning-based pattern recognition to enhance the findability and usability of your content, as well as automated categorization of your content to augment your data governance practices. 

The cornerstone of a scalable semantic layer is ensuring the capability for controlling and managing versions, governance, and automation. Continuous integration pipelines including standardized APIs and automated ETL scripts should be considered as part of the DNA to ensure consistent connections for structured input from tested and validated sources.

Conclusion

In summary, semantic layers work best as a natural integration framework for enabling interoperability of organizational information assets. It is important to get started by focusing on valuable business-centric use cases that drive getting into semantic solutions. Further, it is worth considering a semantic layer as a complement to other technologies, including relational databases, content management systems (CMS), and other front-end web applications that benefit from having easy access and an intuitive representation of your content and data including your enterprise search, data dashboards, and chatbots.

If you are interested in learning more to determine if a semantic model fits within your organization’s overall enterprise architecture or if you are embarking on the journey to bridge organizational silos and connect diverse domains of knowledge and data that accelerate enterprise AI capabilities, read more or email us.   

Enterprise Knowledge's Upcoming Webinars and Virtual Round Tables
https://enterprise-knowledge.com/enterprise-knowledges-upcoming-webinars-and-virtual-round-tables/ (Mon, 30 Mar 2020)

EK has announced a new schedule of webinars to promote thought leadership during this time of remote work. In place of in-person Meetups and KM Wine & Design Workshops, EK will host a series of Webinars and Online Round Tables over the next several weeks. 

The goal for these virtual sessions is two-fold. 

  1. Ensure EK continues to discuss the value of Knowledge Management, Enterprise AI, and Advanced Search in clear and actionable ways; and
  2. Provide the Knowledge Management community with a continued outlet for connection and learning. 

The Value of KM for Remote Work

Date: April 2nd, 2020

Time: 1:00 – 2:00 PM EST

Presenters: Zach Wahl and Mary Little

With the current global COVID-19 pandemic, companies big and small, global and local, have found themselves in a much different reality and have been forced into remote work situations. Knowledge Management, when well-designed and implemented, can play a major role in helping an organization maintain the three c’s of organizational health: connections, collaboration, and culture. In this webinar, Zach Wahl and Mary Little will discuss how KM supports effective remote work, and will offer recommendations for how organizations can improve their KM and remote work immediately.

Register Today

KM: Nothing To Wine About

Date: April 7th, 2020

Time: 5:30 – 6:30 PM EST

Moderator: Zach Wahl

Panelists: Joe Hilger, Mary Little, Lulit Tesfaye, and Rebecca Wyatt

At its core, KM is all about connections. Are you missing your KM community at this point? Pour yourself a drink and join us for an informal talk about KM value, current challenges, and where we see the future of Knowledge and Information Management. This round table discussion will be facilitated by Zach Wahl, EK's CEO, and cover all things KM. We'll also be taking questions from the audience, so be prepared to join the conversation.

Register Today

Enterprise AI Readiness: Ensure You’re Prepared to Meet Your Organization’s AI Needs

Date: April 15th, 2020

Time: 1:00 – 2:00 PM EST

Presenters: Joe Hilger and Lulit Tesfaye

Virtually every large organization has placed AI on their strategic roadmap, with C-levels commonly listing Knowledge AI amongst their biggest priorities, but are you prepared to deliver on the promise of AI? How can you ensure your organization has taken the necessary steps regarding organization, content, architecture, and technology to make AI a reality that returns hard ROI rather than an expensive and high-profile failure? This webinar, presented by Lulit Tesfaye and Joe Hilger, will share first-hand experience from real-world, cutting-edge global organizations that have already put Enterprise AI into production by harnessing knowledge graphs, ontologies, and natural language processing. It will provide a checklist of foundational elements necessary to prepare for Knowledge AI, discuss business cases and budgeting for AI roadmaps, and cover the first steps of your Enterprise AI journey to ensure you're demonstrating real business value and validating your organization's priorities. 

Register Today

Building a Connected Search Experience: Bringing KM and AI Together to Fuel Findability

Date: May 13th, 2020

Time: 1:00 – 2:00 PM EST

Presenters: Joe Hilger and Stephon Harris

An effective search experience is one of the most impactful means of surfacing the knowledge, information, and data within your organization. Though search has been around for years, many organizations are still struggling to make it work for their end users. Good KM design and practices can play a major role in enabling search, and increasingly, the latest concepts and technologies in Artificial Intelligence are being brought to bear to enable truly connected search experiences. In this webinar, Joe Hilger and Stephon Harris will lay out the foundations of good search, offer strategies to improve your search immediately, and discuss a roadmap to leveraging AI to fuel your (near) future search experience.

Register Today

How To Optimize Search Relevance With Feature Signaling
https://enterprise-knowledge.com/how-to-optimize-search-relevance-with-feature-signaling/ (Thu, 26 Mar 2020)

When your users are looking for content, they rarely have time for or patience with search. This often leads to frustration for both end users and those who control the search application. Users expect search to just "work" and are highly disappointed when what they think is a simple search provides them with irrelevant results – or worse, no results at all. A possible reason for some of these search challenges is that search engineers often think that users are searching whole documents. The truth is that users are actually just searching the fields that make up those documents. In order to deliver search that aligns your users' intentions with what is stored in your search application, we have to fix the fields that make up your documents.

Fixing search involves tweaking the terms by properly tokenizing and analyzing text, as well as suggesting corrections to terms that may be misspelled. In this first part of my three-part blog series about optimizing search relevance, I'll explain some technical steps on how to improve the precision of search. Specifically, I'll be discussing the importance of feature signaling in your enterprise search solution, using specific examples from Elasticsearch. If you would like to test the technical features on your own, follow the 'Getting Started with Elasticsearch' guide to set up an Elasticsearch instance and test out running commands against the Elasticsearch REST API.

Feature Signaling

How you put data into your system is important; as the saying goes, "garbage in, garbage out." One way to improve the way you put data into your system is feature signaling. Feature signaling means crafting the data you index into the ways your users will search for things, so that the fields that are searched provide strong relevancy signals. For example, when a user enters "bbq near 77489-3175", a relevant search would match documents related to "bbq", "barbeque", or "barbecue", as well as understand that the 9-digit number is the zip code for a neighborhood in Missouri City, TX. Crafting relevant signals could involve relating the term "bbq" with its synonyms "barbecue" and "barbeque," or classifying a 9-digit number as a zip code and storing the 9- and 5-digit versions in particular fields. These examples of feature signaling take into account the different words users leverage to search for the same thing, boosting relevancy and ensuring a seamless search experience. 

Analyzers

[Listing 1 – Elasticsearch Standard Analyzer] The standard analyzer includes two filters that transform query terms by lowercasing them and removing stop words. Stop words are terms that typically do not hold significant meaning, such as "the" and "is".

Since data is stored and queried via fields, matching relevant documents based on signals in fields starts with aligning users' search terms with the indexed data through analyzers. Analyzers play a crucial part in matching terms to documents during both the indexing of text and the querying process. The search engine first indexes the documents by breaking them up – usually on whitespace characters – and transforming each segment into a token using a tokenizer. Tokens are sub-strings of text that relate back to the original terms indexed or queried. In order to match query terms with the terms stored in the search index, search engines compare the characters within the term, the position of the terms in the data, and even the number of characters in a term. Finally, documents that contain the terms are returned based on each document's relevancy. 

[Listing 2 – Terms are matched after being analyzed; the position and length of terms are factors in the comparison of terms.]

Analyzers transform text using filters. Filters either replace, drop, or pass text on as tokens through various transformations. Take, for example, the word "flowers." There are filters called stemmers which will stem terms down to a root form based on heuristics. This means that the term "flowers" would become "flower." Analyzers that use these filters allow various forms of a term to be queried and still match what is in the search index. Each field can be assigned its own analyzer, whose filters transform the text indexed and queried in that field. This is why analyzers are at the heart of crafting strong signals in fields.
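
As an illustration, the sketch below defines a custom analyzer that lowercases, removes stop words, and stems terms, then checks how a phrase is tokenized. It assumes a local Elasticsearch instance without security enabled (as in the getting-started guide mentioned earlier); the index and field names are placeholders.

```python
import requests

# Index settings with a custom analyzer: lowercase, remove English stop words, and
# stem terms so that "flowers" and "flower" produce the same token.
index_settings = {
    "settings": {
        "analysis": {
            "analyzer": {
                "english_stemmed": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "stop", "stemmer"],
                }
            }
        }
    },
    "mappings": {
        "properties": {"body": {"type": "text", "analyzer": "english_stemmed"}}
    },
}

requests.put("http://localhost:9200/articles", json=index_settings)

# Inspect how a phrase is tokenized by the new analyzer.
resp = requests.post(
    "http://localhost:9200/articles/_analyze",
    json={"analyzer": "english_stemmed", "text": "The flowers are blooming"},
)
print([t["token"] for t in resp.json()["tokens"]])
```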

Synonyms

[Listing 3 – A synonym filter that maps terms to synonyms when indexed] This is an example of a synonym analyzer: it maps terms found in "mysynonyms.txt" to their synonyms using a synonym graph filter when documents are indexed and queried.

Synonyms are a good example of the power and importance of analyzers in fields. For example, say you have a state field that users can search based on both abbreviations and the full state name. You would want references to "VA" to tie to references to "Virginia". Using a synonym analyzer, search terms can map to multiple words that have a similar meaning, based on mappings you define. This way, your search can have more roads that lead to relevant content returned back to your users.
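
A hedged sketch of the state example might look like the following, defining the synonym rules inline rather than in a synonyms file; the synonym entries, index, and field names are illustrative.

```python
import requests

# A "state" field whose analyzer treats "VA" and "Virginia" as the same term.
synonym_index = {
    "settings": {
        "analysis": {
            "filter": {
                "state_synonyms": {
                    "type": "synonym",
                    "synonyms": ["VA, Virginia", "TX, Texas"],
                }
            },
            "analyzer": {
                "state_analyzer": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "state_synonyms"],
                }
            },
        }
    },
    "mappings": {
        "properties": {"state": {"type": "text", "analyzer": "state_analyzer"}}
    },
}

requests.put("http://localhost:9200/locations", json=synonym_index)
```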

Misspellings

Sometimes the flaws lie outside the data within the search index. Spelling errors are a major reason why users do not receive relevant search results. However, there is no reason to fret. Misspellings can be handled while also giving your users clarity on the syntax of terms. A helpful way to handle misspellings is to use correctly tokenized words to perform a mini-search and suggest to a user those query terms that are found in the index.

The search suggester below takes a string and checks whether its tokens are found in the index. The terms “checking” and “search” are returned as suggested terms found in the index.

Listing 4 – The search suggester can be used as a way to auto suggest corrections to misspellings.
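
A sketch of a comparable request, assuming Elasticsearch’s term suggester (the index name, field name, and misspelled input are all illustrative):

POST /my-index/_search
{
  "suggest": {
    "spelling": {
      "text": "chekcing serach",
      "term": {
        "field": "content"
      }
    }
  }
}

For each token in the text, the response returns candidate terms that actually exist in the index, ranked by similarity to the input and document frequency, so corrections like “checking” and “search” can be surfaced back to the user.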

In the example above, the “suggest” keyword performs a search looking for terms in the index that are similar to the terms queried. Those are then brought back and can be surfaced to users as possible suggestions to search on.

Data in Various Formats

Crafting feature signals in fields normalizes the terms users enter to search with the terms that lie in the search index. Normalization is important in transforming not just words, but data that may be represented in many formats. Take, for example, the many formats in which a single phone number may be represented: (123) 456-7890, 456-7890, +11234567890. You can use analyzers to match these various forms when a user searches for a phone number. Acronyms are another format of data that should be transformed for more relevant search results. Users may enter acronyms with or without periods, and you would still want relevant results to show. Modeling the intent of users’ queries through analyzers is foundational to getting data in various formats matched in fields.

Listing 5 – A pattern capture filter uses regular expressions to match text and classify it in a field.
The “phone_num_parts” filter is a pattern_capture filter that captures the 7-digit and 10-digit forms of a phone number, and keeps the original format through the preserve_original option.
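
A sketch of such a configuration, assuming Elasticsearch (the index, analyzer, and char_filter names are illustrative, and the regular expressions are simplified): a pattern_replace character filter first strips non-digit characters, and the pattern_capture filter then emits the 10-digit and 7-digit endings alongside the original value.

PUT /my-index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "digits_only": {
          "type": "pattern_replace",
          "pattern": "[^0-9]",
          "replacement": ""
        }
      },
      "filter": {
        "phone_num_parts": {
          "type": "pattern_capture",
          "preserve_original": true,
          "patterns": ["(\\d{10})$", "(\\d{7})$"]
        }
      },
      "analyzer": {
        "phone_analyzer": {
          "type": "custom",
          "char_filter": ["digits_only"],
          "tokenizer": "keyword",
          "filter": ["phone_num_parts"]
        }
      }
    }
  }
}

Indexing “+1 (123) 456-7890” through this analyzer stores the tokens “11234567890”, “1234567890”, and “4567890”, so a user searching “456-7890” or “(123) 456-7890” through the same analyzer still matches.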

Conclusion

Hopefully this blog has given you some search best practices as well as items on which to take immediate action in your own environments. Understanding search can feel like an overwhelming sea. However, when you quiet the tide by focusing on what your users are looking for, you can offer a lighthouse of query terms to help surface the documents they are seeking with more precision. 

Remember that how text is tokenized in a search index is fundamental to what can be found in document fields. Analyzers help map synonyms and capture data in various formats, and tools like the suggester API increase the precision of queries by better aligning the terms of your users’ queries with the data in your search engine when misspellings occur. If you would like to discuss how Enterprise Knowledge could help you steer through this ocean of search, contact us.

The post How To Optimize Search Relevance With Feature Signaling appeared first on Enterprise Knowledge.

The Value of Knowledge Management for Remote Work https://enterprise-knowledge.com/the-value-of-knowledge-management-for-remote-work/ Fri, 20 Mar 2020 15:55:43 +0000 https://enterprise-knowledge.com/?p=10789 With the current global COVID-19 pandemic, companies big and small, global and local, have found themselves in a much different reality and have been forced into remote work situations. We, at EK, are amongst them. Though we always offered a … Continue reading

The post The Value of Knowledge Management for Remote Work appeared first on Enterprise Knowledge.

With the current global COVID-19 pandemic, companies big and small, global and local, have found themselves in a much different reality and have been forced into remote work situations. We, at EK, are amongst them. Though we always offered a rather liberal work from home policy, we found that the vast majority of people chose to come to the office, so we’ve had little opportunity, except for the odd snow day, to test what is now our current reality. Despite the lack of practice, however, it has been notable for me to see that we have adapted rather well to 100% remote work.  

Many have heard me utter one of my favorite phrases, that “happy employees are good at KM, and good KM makes happy employees.” I’ve always held EK up as the ultimate example of this pithy phrase, and it now seems that I can expand it to also note that good KM enables productive and natural remote work.  

With EK’s definition of what KM is and what value it offers, the linkages of how KM powers remote work become immediately apparent:

KM Enables Findability – With remote work, your employees no longer have the ability to walk down the hall in order to ask someone for the information or guidance they need to do their job.

I often reference the human search engines of an organization, those people who seem to know where all the right answers are hidden; but what if those people aren’t available? With remote work, employees need to be empowered to act more independently, but nevertheless act on the correct information. Good KM ensures they can find that actionable and accurate information. This begins with the foundational elements of KM, namely taxonomies, content cleanup and governance, and content types. These elements will power findability and discoverability in your organization, ensuring the right employees are finding the right information they need in order to do their jobs. This will maximize productivity and minimize risk for your organization regardless of whether you’re remote working or not; but at its core, this is particularly necessary for organizations that are suddenly finding themselves with employees in makeshift home offices (i.e. their couches), rather than an office full of experienced and tenured employees to serve as their human search engines.

KM Enables Connections – With remote work, your employees no longer have the ability to meet at the water cooler and discover a like interest or form a learning/coaching relationship.

Whether your organization is local or global, face-to-face interactions remain a critical component of how people connect, how they find mentors/mentees, and how they learn from one another. When we lose the opportunity to discover each other in the same physical space, how do we ensure the right connections still happen? One key KM answer is to ensure that all your organization’s knowledge objects (content, data, people, etc.) are tagged consistently so that your employees can discover connections by traversing from one type of knowledge to another. For instance, perhaps one of your employees goes searching for an answer to a question about Enterprise Artificial Intelligence (AI). With consistent tagging and enterprise search, we want them to find the simple, short definition of Enterprise AI, then not only discover white papers and blogs on the topic, but also connect from that content to an expert on the topic (or a group of experts) from whom they can learn and with whom they can engage. In this case, the technological side of KM plays a potentially major role by creating digital communities of practice and expert finder tools and then harnessing advanced search to allow your employees to find the right materials. One of my favorite clients tells the story of two experts, both focused on the same obscure sub-topic of their field, who were never aware of each other during more than 20 years at the organization until we created a digital community of practice where they discovered each other. As one of them put it, “I thought I was alone for so long, and now I have someone I can learn from.” Now, the two of them are building that community of practice to mentor the next generation of experts. Like I said, good KM makes happy employees, and happy employees make good KM.

KM Enables Collaboration – With remote work, your employees can’t duck into a conference room and whiteboard ideas for a solution together.

I have to admit, I was a slow adopter of the concept of remote work. As a CEO, I was worried about my colleagues’ ability to work together and learn from each other when they weren’t physically together. I am confident, however, that the right KM tools and processes vastly improve the effectiveness of collaboration in remote work situations.

Working from home but modeling (all of) the EK spirit!

In order to make the vast field of KM more digestible, we at EK divide it into five categories: People, Process, Content, Culture, and Technology. An effective KM strategy enables remote collaboration by:

  1. Identifying the people who hold the knowledge and the people who need it, and making the appropriate connections so the right people find each other and work together;
  2. Creating processes by which knowledge can move through the organization and ensuring that, as people are collaborating, there are the appropriate guides in place to guarantee organizational policies are followed and the right knowledge is graduating as corporate information. This is particularly important for remote work situations, where managers may have less ability to stay on top of day-to-day work efforts;
  3. Making the right content available as starting points, templates, and guides so that collaborative exercises are fruitful from the beginning; 
  4. Creating a corporate culture of knowledge sharing that encourages collaboration and support for one another, regardless of whether employees are in the same room or on the other side of the globe, resulting in willingness and rewards for employees lending their time to support one another; and
  5. Constructing the appropriate technologies to allow people to collaborate remotely in ways that feel natural to them. For many organizations, the default answer here is SharePoint or Google Docs, and with the right governance and design, both of those are fine options, but they are just two of many options in what might be an organization’s KM Suite of Technologies. Over the last week, for instance, we’ve been keeping in touch and sharing pictures of our home offices on Slack, collaborating on deliverables in Google Docs, and holding live meetings in Zoom. Any piece of technology is a KM tool if harnessed properly.

KM Enables Culture – With remote work, your employees can’t witness the behavior of corporate leaders or follow the cues of more tenured employees.

With a focus on corporate culture, I’ll end where I started: happy employees are good at KM, and good KM makes happy employees. I believe one of the reasons we’ve so successfully transitioned to remote work at this time is that we’d already laid the foundation of corporate kindness and collaboration, and we’d already demonstrated that a culture of knowledge sharing and collegial support will be rewarded. As a result, our employees have found ways to stay in touch; for instance, today a group suggested we institute a 15-minute open conference call just so everyone could check in. We, as leaders, have also played our part, leveraging our online collaborative platforms to get everyone talking, announcing new wins via fun videos that we’d normally share in person, and scheduling a virtual “happy hour” tomorrow for the whole team to get on video and catch up on the week. The same can be made true for any organization, regardless of whether you’ve laid that foundation or not. With the right KM strategy, organizations can create the right expectations and rewards for their employees to stay engaged with the organization, and more importantly, with each other during periods of remote work. This, in turn, will result in higher productivity, employee satisfaction, and retention. 

Are you playing catchup with KM at this point? Do you need help to get started in the face of the current remote work reality? Contact us, and we’ll set up a (virtual) meeting.

The post The Value of Knowledge Management for Remote Work appeared first on Enterprise Knowledge.

What is the Roadmap to Enterprise AI? https://enterprise-knowledge.com/enterprise-ai-in-5-steps/ Wed, 18 Dec 2019 14:00:57 +0000 https://enterprise-knowledge.com/?p=10153 Artificial Intelligence technologies allow organizations to streamline processes, optimize logistics, drive engagement, and enhance predictability as the organizations themselves become more agile, experimental, and adaptable. To demystify the process of incorporating AI capabilities into your own enterprise, we broke it … Continue reading

The post What is the Roadmap to Enterprise AI? appeared first on Enterprise Knowledge.

Artificial Intelligence technologies allow organizations to streamline processes, optimize logistics, drive engagement, and enhance predictability as the organizations themselves become more agile, experimental, and adaptable. To demystify the process of incorporating AI capabilities into your own enterprise, we broke it down into five key steps in the infographic below.

An infographic about implementing AI (artificial intelligence) capabilities into your enterprise.

If you are exploring ways your own enterprise can benefit from implementing AI capabilities, we can help! EK has deep experience in designing and implementing solutions that optimize the way you use your knowledge, data, and information and can produce actionable and personalized recommendations for you. Please feel free to contact us for more information.

The post What is the Roadmap to Enterprise AI? appeared first on Enterprise Knowledge.
