Elliott Risch, Author at Enterprise Knowledge
https://enterprise-knowledge.com
Mon, 03 Nov 2025 21:25:55 +0000

How KM Leverages Semantics for AI Success
https://enterprise-knowledge.com/how-km-leverages-semantics-for-ai-success/
Wed, 03 Sep 2025 19:08:31 +0000

This infographic highlights how KM incorporates semantic technologies and practices across scenarios to enhance AI capabilities.

The post How KM Leverages Semantics for AI Success appeared first on Enterprise Knowledge.


To get the most out of Large Language Model (LLM)-driven AI solutions, you need to provide them with structured, context-rich knowledge that is unique to your organization. Without purposeful access to proprietary terminology, clearly articulated business logic, and consistent interpretation of enterprise-wide data, LLMs risk delivering incomplete or misleading insights. This infographic highlights how KM incorporates semantic technologies and practices across scenarios, showing where they enhance AI capabilities and where they are foundational. It empowers your organization to strategically leverage semantics for more accurate, actionable outcomes while cultivating sound knowledge intelligence practices and investing in your enterprise's knowledge assets.

Use Case: Expert Elicitation (Semantics for AI Enhancement)
Efficiently capture valuable knowledge and insights from your organization's experts about past experiences and lessons learned, especially when these insights have not yet been formally documented. By using ontologies to spot knowledge gaps and taxonomies to clarify terms, an LLM can capture and structure undocumented expertise, storing it in a knowledge graph for future reuse.
Example: Capturing a senior engineer's undocumented insights on troubleshooting past system failures to streamline future maintenance.

Use Case: Discovery & Extraction (Semantics for AI Enhancement)
Quickly locate key insights or important details within a large collection of documents and data, synthesize them into meaningful, actionable summaries, and deliver these directly back to the user. Ontologies ensure concepts are recognized and linked consistently across wording and format, enabling insights to be connected, reused, and verified outside an LLM's opaque reasoning process.
Example: Scanning thousands of supplier agreements to locate variations of key contract clauses, despite inconsistent wording, then compiling a cross-referenced summary for auditors to accelerate compliance verification and identify high-risk deviations.

Use Case: Context Aggregation (Semantics for AI Foundations)
Gather fragmented information from diverse sources and combine it into a unified, comprehensive view of your business processes or critical concepts, enabling deeper analysis, more informed decisions, and previously unattainable insights. Knowledge graphs unify fragmented information from multiple sources into a persistent, coherent model that both humans and systems can navigate. Ontologies make relationships explicit, enabling the inference of new knowledge that reveals connections and patterns not visible in isolated data.
Example: Integrating financial, operational, HR, and customer support data to predict resource needs and reveal links between staffing, service quality, and customer retention for smarter planning.

Use Case: Cleanup and Optimization (Semantics for AI Enhancement)
Analyze and optimize your organization's knowledge base by detecting redundant, outdated, or trivial (ROT) content, then recommend targeted actions or automatically archive and remove irrelevant material to keep information fresh, accurate, and valuable. Leverage taxonomies and ontologies to recognize conceptually related information even when it is expressed in different terms, formats, or contexts, allowing the AI to uncover hidden redundancies, spot emerging patterns, and make more precise recommendations than could be justified by keyword or RAG search alone.
Example: Automatically detecting and flagging outdated or duplicative policy documents, despite inconsistent titles or formats, across an entire intranet, streamlining reviews and ensuring only current, authoritative content remains accessible.

Use Case: Situated Insight (Semantics for AI Enhancement)
Proactively deliver targeted answers and actionable suggestions uniquely aligned with each user's expressed preferences, behaviors, and needs, enabling swift, confident decision-making. Use taxonomies to standardize and reconcile data from diverse systems, and apply knowledge graphs to connect and contextualize a user's preferences, behaviors, and history, creating a unified, dynamic profile that drives precise, timely, and highly relevant recommendations.
Example: Instantly curating a personalized learning path (complete with recommended modules, mentors, and practice projects) based on an employee's recent performance trends, skill gaps, and long-term career goals, accelerating both individual growth and organizational capability.

Use Case: Context Mediation and Resolution (Semantics for AI Foundations)
Bridge disparate contexts across people, processes, technologies, and more into a common, resolved, machine-readable understanding that preserves nuance while eliminating ambiguity. Semantics establish a shared, machine-readable understanding that bridges differences in language, structure, and context across people, processes, and systems. Taxonomies unify terminology from diverse sources, while ontologies and knowledge graphs capture and clarify the nuanced relationships between concepts, eliminating ambiguity without losing critical detail.
Example: Reconciling varying medical terminologies, abbreviations, and coding systems from multiple healthcare providers into a single, consistent patient record, ensuring that every clinician sees the same unambiguous history and enabling faster diagnosis, safer treatment decisions, and more effective care coordination.

To learn more about our work with AI and semantics and to help your organization make the most of these investments, don't hesitate to reach out at: https://enterprise-knowledge.com/contact-us/
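To make the taxonomy and knowledge-graph mechanics described above a bit more concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the taxonomy entries, contract names, and triple structure are hypothetical, not drawn from the infographic): a small synonym taxonomy normalizes inconsistent wording before facts are stored as triples in a toy knowledge graph, so that two documents using different terms resolve to the same concept.

```python
# Toy illustration: a taxonomy normalizes variant wording, and the
# normalized concepts are stored as (subject, predicate, object) triples.

# Hypothetical taxonomy: preferred label -> known variant terms.
TAXONOMY = {
    "termination clause": ["termination provision", "exit clause"],
    "liability cap": ["limitation of liability", "liability limit"],
}

# Invert the taxonomy into a variant -> preferred-label lookup.
NORMALIZE = {variant: preferred
             for preferred, variants in TAXONOMY.items()
             for variant in variants}

def normalize(term: str) -> str:
    """Map a variant term to its preferred taxonomy label."""
    term = term.lower().strip()
    return NORMALIZE.get(term, term)

# A toy knowledge graph as a set of triples.
graph = set()

def add_fact(subject: str, predicate: str, obj: str) -> None:
    """Store a fact, normalizing the object term through the taxonomy."""
    graph.add((subject, predicate, normalize(obj)))

# Two documents use different wording for the same concept...
add_fact("contract:A", "contains", "Termination Provision")
add_fact("contract:B", "contains", "exit clause")

# ...but both resolve to the same normalized concept in the graph.
assert ("contract:A", "contains", "termination clause") in graph
assert ("contract:B", "contains", "termination clause") in graph
```

In a real deployment this role is played by managed taxonomies (e.g., SKOS vocabularies) and an RDF knowledge graph rather than Python dictionaries, but the underlying pattern (normalize terminology first, then connect facts) is the same.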

LLM Solutions PoC to Production: From RAGs to Riches (Part 1)
https://enterprise-knowledge.com/llm-solutions-poc-to-production-from-rags-to-riches-part-1/
Wed, 30 Jul 2025 19:14:23 +0000

The post LLM Solutions PoC to Production: From RAGs to Riches (Part 1) appeared first on Enterprise Knowledge.

In the past year, many of the organizations EK has partnered with have been developing Large Language Model (LLM)-based Proofs-of-Concept (PoCs). These projects are often pushed for by an enthusiastic IT team or an internal initiative, with the low barrier to entry and low cost of LLM development making them easy projects for executives to greenlight. Despite initial optimism, these LLM PoCs rarely reach the enterprise-grade implementations promised, due to factors such as limited organizational buy-in, technical complexity, security concerns, misalignment on content readiness for AI solutions, and a lack of investment in key infrastructure. For example, Gartner has predicted that 30% of generative AI projects will be abandoned after PoC by the end of 2025. This blog provides an overview of EK's approach to evaluating and roadmapping an LLM solution from PoC to production, and highlights several dimensions important to successfully scaling an LLM-based enterprise solution.

 

Organizational Implementation Considerations:

Before starting on the technical journey from "RAGs to Riches", there are several considerations an organization should address before, during, and after creating a production solution. Taking each of these into account gives a production LLM solution a much higher chance of success.

Before: Aligning Business Outcomes

Prior to building out a production LLM solution, a team will have developed a PoC LLM solution that is able to answer a limited set of use cases. Before production development begins, it is imperative that business outcomes and the priorities of key stakeholders are aligned with project goals. This often looks like mapping business outcomes (such as enhanced customer interactions, operational efficiency, or reduced compliance risk) to quantifiable measures such as shorter response times and improved findability of information. It is important to ensure these business goals carry over from development to production and adoption by customers. Beyond meeting technical functionality, setting clear customer and organizational goals will help ensure the production LLM solution continues to have organizational support throughout its entire lifecycle.

During: Training Talent and Proving Solutions

Building out a production LLM solution will require a team with specialized skills in natural language processing (NLP), prompt engineering, semantic integration, and embedding strategies. In addition, EK recommends investing resources into content strategists and SMEs who understand the state of their organization’s data and/or content. These roles in particular are critical to help prepare content for AI solutions, ensuring the LLM solution has comprehensive and semantically meaningful content. Organizations that EK has worked with have successfully launched and maintained production LLM solutions by proactively investing in these skills for organizational staff. This helps organizations build resilience in the overall solution, driving success in LLM solution development.

After: Infrastructure Planning and Roadmapping

To maintain a production LLM solution after it has been deployed to end users, organizations must account for the infrastructure investments and operational costs needed, as well as necessary content and data maintenance. Some of these resources might include enterprise licensing, additional software infrastructure, and ongoing support costs. While many of these additional costs can be mitigated by effectively aligning business outcomes and training organizational talent, there still needs to be a roadmap for, and investment in, the future infrastructure (both systems and content) of the production LLM solution.

 

Technical Criteria for Evaluating LLM PoCs:

In parallel with the organizational implementation considerations, and drawing on EK's depth of experience in developing LLM MVPs, designing enterprise AI architecture, and implementing more advanced LLM solutions such as Semantic RAG, EK has developed 7 key dimensions that can be used to evaluate the effectiveness of an LLM PoC:

Figure 1: Dimensions for Evaluating an LLM Solution

1. Depth of Interaction: refers to how deeply and dynamically users can engage with the LLM solution. At a lower level, interaction might simply involve asking questions and receiving direct answers, while at the highest level, intelligent agents act on behalf of the user autonomously to leverage multiple tools and execute tasks.

2. Freshness of Information: describes how frequently the content and data behind the semantic search solution are updated and how quickly users receive these updates. While lower freshness implies data updated infrequently, at higher freshness levels data is updated frequently or even continuously, which helps ensure users are always interacting with the most current and accurate information available.

3. Level of Explanation: refers to how transparently the LLM solution communicates the rationale behind its responses. At a lower level of explanation, users simply receive answers without clear reasoning. In contrast, a high level of explanation would include evidence, citations, audit trails, and a clear path showing how information was retrieved.

4. Personalization, Access & Entitlements Requirements: describes how specifically content and data are tailored and made accessible based on user identity, roles, behavior, or needs. At lower levels, content is available to all users without personalization or adaptations. At higher levels, content personalization is integrated with user profiles, entitlements, and explicit access controls, ensuring users only see highly relevant, permissioned content. 

5. Accuracy of Information: refers to how reliably and correctly the LLM solution can answer user queries. At lower levels, users receive reasonable answers that may have minor ambiguities or occasional inaccuracies. At the highest accuracy level, each response is traced back to original source materials and cross-validated with authoritative sources.

6. Enterprise Agentic Support: describes how the LLM solution interacts with the broader enterprise AI ecosystem, and coordinates with other AI agents. At the lowest level, the solution acts independently without any coordination with external AI agents. At the highest level, the solution seamlessly integrates as a consumer and provider within an ecosystem of other intelligent agents.

7. Enterprise Embedding Strategy: refers to how the LLM solution converts information into vector representations (embeddings) to support retrieval. At a lower level, embeddings are simple vector representations with minimal or no structured metadata. At the highest levels, embeddings include robust metadata and are integrated with enterprise context through semantic interpretation and ontology-based linkages.
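To illustrate the difference between the low and high ends of the Enterprise Embedding Strategy dimension, here is a minimal, hypothetical Python sketch. The embedding function is a deterministic stand-in (a real system would use a trained embedding model), and all field names and records are invented: the point is simply that pairing each vector with structured metadata lets retrieval filter on enterprise context rather than relying on vector similarity alone.

```python
# Toy sketch: embeddings paired with structured metadata, so retrieval
# can filter on context (here, document status) before ranking by
# similarity. The embedding function is a stand-in for a real model.
import hashlib
import math

def toy_embedding(text: str, dims: int = 8) -> list:
    """Deterministic stand-in for a real embedding model."""
    digest = hashlib.sha256(text.lower().encode()).digest()
    vec = [b / 255 for b in digest[:dims]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    """Cosine similarity of two unit-normalized vectors."""
    return sum(x * y for x, y in zip(a, b))

# Each record pairs a vector with metadata (hypothetical fields).
index = [
    {"text": "Quarterly maintenance procedure for pump systems",
     "vector": toy_embedding("Quarterly maintenance procedure for pump systems"),
     "metadata": {"department": "operations", "status": "current"}},
    {"text": "Retired 2019 maintenance checklist",
     "vector": toy_embedding("Retired 2019 maintenance checklist"),
     "metadata": {"department": "operations", "status": "archived"}},
]

def search(query: str, status: str) -> list:
    """Rank by similarity, but only among records whose metadata matches."""
    qv = toy_embedding(query)
    candidates = [r for r in index if r["metadata"]["status"] == status]
    return sorted(candidates, key=lambda r: cosine(qv, r["vector"]), reverse=True)

# Metadata filtering guarantees only "current" content is ever returned.
results = search("pump maintenance", status="current")
assert all(r["metadata"]["status"] == "current" for r in results)
```

Ontology-based linkages would extend this further, e.g., expanding a query about "pumps" to related equipment concepts before retrieval, but even metadata filtering alone moves a solution up this dimension.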

For an organization, each of the technical criteria will be weighed differently based on the unique use cases and requirements of the LLM solution. For example, an organization working on a content generation use case might place greater emphasis on Level of Explanation and Freshness of Information, while an organization working on an information retrieval use case may care more about Personalization, Access & Entitlements Requirements. This is an integral part of the evaluation process, with the organization coming to agreement on the level of proficiency needed within each factor. Leveraging this standard, EK has worked with organizations across various industries and diverse LLM use cases to optimize their solutions.
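As a sketch of how such per-use-case weighting might be operationalized, consider the following Python fragment. The dimension names come from this post, but the 0-5 scale, the weights, and the assessment scores are all invented for illustration:

```python
# Toy sketch: weighting the seven evaluation dimensions by use case.
# Scale, weights, and scores below are invented for illustration.

DIMENSIONS = [
    "Depth of Interaction",
    "Freshness of Information",
    "Level of Explanation",
    "Personalization, Access & Entitlements Requirements",
    "Accuracy of Information",
    "Enterprise Agentic Support",
    "Enterprise Embedding Strategy",
]

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-dimension scores (each on a 0-5 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total_weight

# Hypothetical weights for a content generation use case, emphasizing
# Level of Explanation and Freshness of Information.
weights = {d: 1.0 for d in DIMENSIONS}
weights["Level of Explanation"] = 3.0
weights["Freshness of Information"] = 2.0

# Hypothetical PoC assessment (0 = absent, 5 = enterprise-grade).
scores = {d: 2.0 for d in DIMENSIONS}
scores["Level of Explanation"] = 1.0  # weak explanations drag the total down

overall = weighted_score(scores, weights)
```

Because the weights encode stakeholder priorities, the same PoC can score well for one use case and poorly for another, which is exactly why agreeing on weights up front is part of the evaluation process.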

Additionally, EK recommends that an organization undergoing an LLM PoC evaluation also conduct an in-depth analysis of content relevant to their selected use case(s). This enables them to gain a more comprehensive understanding of the content's quality (including factors like completeness, relevancy, and currency) and can help unearth gaps in what the LLM may be able to answer. All of this informs the testing phase by guiding the creation of each test and its expected outcomes, and the findings can generally be categorized across three main areas of remediation:

  • Content Quality – The content regarding a certain topic doesn't explicitly exist or is not standardized; this may necessitate creating dummy data to enable certain types of tests.
  • Content Structure – The way certain content is structured varies; we can likely posit that one particular structure will give more accurate results than another. This may necessitate creating headings to indicate clear hierarchy on pages, and templates to consistently structure content.
  • Content Metadata – Contextual information that may be useful to users is missing from content. This may necessitate establishing a taxonomy to tag content with a controlled vocabulary, or an ontology to establish relationships between concepts.
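One lightweight way to surface candidates for the Content Quality bucket, such as near-duplicate documents, is word-shingle Jaccard similarity. The following Python sketch is illustrative only: the documents and the 0.6 threshold are invented, and a production content analysis would combine this kind of surface similarity with the semantic techniques described above.

```python
# Toy sketch: flag near-duplicate documents using word-shingle
# Jaccard similarity. The threshold is invented for illustration.

def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: overlap divided by union."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical corpus: one pair differs only by trivial formatting.
docs = {
    "policy_v1": "All staff must submit expense reports within 30 days of travel.",
    "policy_v1_copy": "All staff must submit expense reports within 30 days of travel .",
    "unrelated": "The cafeteria menu rotates weekly and is posted each Monday.",
}

def near_duplicates(corpus: dict, threshold: float = 0.6) -> list:
    """Return document-name pairs whose similarity meets the threshold."""
    names = sorted(corpus)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if jaccard(shingles(corpus[a]), shingles(corpus[b])) >= threshold:
                pairs.append((a, b))
    return pairs
```

Flagged pairs then become inputs to the testing plan, e.g., verifying whether the LLM answers consistently when near-duplicate sources conflict.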

 

Technical Evaluation of LLM PoCs In Practice:

Putting the organizational implementation and technical considerations into practice, EK recently completed an engagement with a leading semiconductor manufacturer, employing this standard process to evaluate their PoC LLM search solution. The organization had developed a PoC search solution that was being utilized for answering questions against a series of user-selected PDFs relating to the company's technical onboarding documentation. EK worked with the organization to align on key functional requirements via a capability assessment for a production LLM solution based on the 7 dimensions EK has identified. Additionally, EK completed a simultaneous analysis of in-scope content for the use case. The results of this content evaluation informed which content components should be prioritized as candidates for the testing plan.

After aligning on priority requirements (in this case, accuracy and freshness of information), EK developed and conducted a testing plan for parts of the PoC LLM solution. To operationalize the testing plan, EK created a four-phase RAG Evaluation & Optimization Workflow to turn the testing plan into actionable insights. This workflow helped produce a present-state snapshot of the LLM solution, a target-state benchmark, and a bridging roadmap that prioritizes retriever tuning, prompt adjustments, and content enrichment. Based on the workflow results, stakeholders at the organization were able to easily interpret how improved semantics, content quality, structure, and metadata would improve the results of their LLM search solution.

In the following blogs of the "RAGs to Riches" series, EK will explain the process for developing a capability assessment and testing plan for LLM-based PoCs. These blogs will expand further on how each of the technical criteria can be measured, as well as how to develop a long-term strategy for production solutions.

 

Conclusion

Moving an LLM solution from proof-of-concept to enterprise production is no small feat. It requires careful attention to organizational alignment, strong business cases, technical planning, compliance readiness, content optimization, and a commitment to ongoing talent development. Addressing these dimensions systematically will ensure that your organization will be well positioned to turn AI innovation into a durable competitive advantage.

If you are interested in having EK evaluate your LLM-based solution and help build out an enterprise-grade implementation, contact us here.

Getting Control of Your Content: AI Solutions to Streamline and Optimize Your Digital Assets
https://enterprise-knowledge.com/getting-control-of-your-content-ai-solutions-to-streamline-and-optimize-your-digital-assets/
Tue, 22 Oct 2024 14:44:39 +0000

The post Getting Control of Your Content: AI Solutions to Streamline and Optimize Your Digital Assets appeared first on Enterprise Knowledge.

Paula Land, Principal Consultant for Content Strategy and Operations, and Elliott Risch, Semantic AI Consultant, presented a LavaCon pre-conference webinar on the topic of AI-assisted content analysis titled Getting Control of Your Content: AI Solutions to Streamline and Optimize Your Digital Assets.

Land and Risch discussed how LLM-powered solutions can be harnessed to overcome the challenges of content overload and duplication. They explored how GenAI technologies can be leveraged to automatically identify and eliminate duplicate or near-duplicate content, enabling organizations to streamline their repositories and retain only the most relevant and valuable materials. 

The talk included real-world case studies from Enterprise Knowledge, illustrating how GenAI has been successfully deployed to help our clients analyze and act on insights about the current state of their content. 

The LavaCon Conference on Content Strategy and Technical Communication Management will be held October 27-30 in Portland, Oregon. EK will be sponsoring and presenting a session at the conference.
