Content Personalization Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/content-personalization/

Improving Customer Experience in a Personalized Customer Resource Portal
https://enterprise-knowledge.com/improving-customer-experience-in-a-personalized-customer-resource-portal/ (Tue, 06 Feb 2024)


The Challenge

The customer support team of a global corporation recently undertook an initiative to transform the content in their Customer Resource Portal from unstructured, long-form PDFs to structured content in a DITA-based Component Content Management System (CCMS). This Customer Resource Portal contains detailed manuals and information about the hundreds of products and services this corporation offers across the globe, many of which are specific to regions or countries. Customers rely on the portal to locate specific information about the products and services available in their location. 

When they approached EK, the organization was struggling to achieve ROI from this massive undertaking, and customer experience (CX) remained stubbornly unsatisfactory. The poor CX of the system resulted in an 8% drop in daily user access and one in five customers reporting that they didn’t use the system at all. Additionally, half of users felt that the search functionality of the system was unsatisfactory. To remediate this, the organization created a complex, multi-year plan to optimize customer experience through the display of dynamic content updates in the front end of the portal. However, when EK conducted a technical assessment, our technical consultants found that the off-the-shelf product in use for the presentation layer could not meet the robust needs of the structured content. 

Further, the content model employed in the CCMS did not adequately enrich the technical documentation with metadata suitable for personalization. Users struggled to find information because their search terms did not always match the language used in the content. Content and search results were not personalized to the needs of the user, and search queries returned large amounts of irrelevant information. For the internal content team, tagging and authoring content was a manual effort: authors were spending more time adding tags and metadata to documentation than publishing fresh information to the portal. Furthermore, content that was not properly tagged became especially difficult to find within the sheer volume of content in the portal.

The Solution

EK worked with the customer care team at this corporation to architect and implement content personalization improvements to the customer portal. This work began with an assessment and redesign of the key taxonomies needed to better personalize the content in the portal. EK’s taxonomy experts worked with SMEs at the corporation to create intuitive taxonomies for products, services, and user entitlements that best reflected the current state of the business. Then, EK developers and taxonomists worked together to execute bulk tagging of prioritized existing content in the portal with the newly updated taxonomies. Additionally, EK developed a custom auto-tagging authoring tool so that content authors could quickly and easily apply tags to new content being published in the portal. To support the new taxonomies and tagging processes, EK taxonomists drafted a Taxonomy and Tagging Governance Plan to facilitate the long-term, ongoing maintenance of metadata within the system. 
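The case study does not detail the internals of the auto-tagging authoring tool, but the core idea, matching taxonomy labels against content to suggest tags, can be sketched in a few lines of Python. Everything below (the taxonomy terms and the sample document) is illustrative, not the client's actual taxonomy:

```python
# Illustrative taxonomy of product and service labels (not the client's actual taxonomy)
TAXONOMY = {
    "products": ["Widget Pro", "Widget Lite"],
    "services": ["Installation", "Extended Warranty"],
}

def suggest_tags(content: str) -> dict:
    """Suggest taxonomy tags whose labels appear in the content (case-insensitive)."""
    text = content.lower()
    return {
        facet: [term for term in terms if term.lower() in text]
        for facet, terms in TAXONOMY.items()
    }

doc = "Installation guide for the Widget Pro, including Extended Warranty terms."
print(suggest_tags(doc))
```

A production auto-tagger would add stemming, synonym handling, and integration with the taxonomy management tool, but this matching step is the heart of the process that replaces manual tagging.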

During this time EK architects and semantic specialists were also helping the corporation steward the implementation of their newly acquired Taxonomy and Ontology Management Tool. This work involved designing roles for the tool, providing an orientation on how to use the tool, uploading the new taxonomies into the system, and providing an integration plan for how the tool should integrate with the customer service portal and its component technologies. This work set the groundwork for the corporation to be able to manage the taxonomies that would underpin personalization from a centralized repository, and then push updates to the systems that leverage them. 

Once taxonomies had been updated, stored in the correct systems, and applied to content, EK architected a solution that would personalize search results relevant to a user’s entitlements and geographical location. The customer service portal boosts search results that are tagged with products, services, and other tags that are related to the user’s profile. 

The EK Difference

EK’s experts in taxonomy, content, and technical implementation came together to provide a comprehensive, innovative content personalization solution for the customer support team. Because EK employs experts in semantic technologies, EK was able to provide expertise and support for the corporation as they were implementing their new taxonomy tool, including providing training for their staff on how to use it. EK technology experts also partnered with the corporation’s technology teams to ensure there was alignment on and understanding of the personalization solution and its components across all teams that would be interacting with or managing it. 

EK solution architects and content engineers assessed the corporation’s technology stack on its fitness to support advanced content functionality and found that the current configuration and technologies were inadequate. To address this, EK assessed both internal and external tools available to the corporation and provided them with an architecture that would meet their content needs. This included the vetting of a new front end tool that would significantly improve both CX and search experience for users. With the new architecture in hand, technology leadership could make informed, confident decisions about their future content technology plans and investments.

The Results

EK was able to provide the customer support team with a sustainable solution for personalizing content in their customer service portal. Currently published content in the system is now tagged for search boosting, and content authors have an intuitive and quick way to tag newly created content. Improvements made by the EK team have rendered promising user experience outcomes, including reduced confusion and customer service inquiries by having 100% of user entitlements aligned with product taxonomies, and by removing 43 outdated or irrelevant terms. The personalized search boosting also increased click through rates of search results by 13%, indicating that users are being served content that is more relevant to them. Additionally, content authors will save approximately 125 hours a year leveraging the new, faster tagging process instead of manually tagging content. 


QA for Personalized Content
https://enterprise-knowledge.com/qa-for-personalized-content/ (Thu, 16 Nov 2023)

We’ve all sifted through dense technical documentation which has way too much detail about features and products that aren’t even relevant to us – just to get to that one nugget of information we’re actually looking for (often on page 245 of a 500-page product manual). If we deliver personalized content to our end users, we can solve this problem by only showing people the information that’s relevant to them.

However, delivering personalized content introduces a specific set of Quality Assurance (QA) considerations. QA is the maintenance of a desired level of quality in a service or product, especially through attention to every stage of the delivery or production process. It is vital to the success of application development and product delivery that QA and User Acceptance Testing (UAT) are carried out as an ongoing part of an application's release cadence. Performed continuously, QA and UAT surface issues throughout development before they accrue as technical debt, while ensuring the application is built with personalization in mind. 

That said, there should be a clear set of standards in a QA test plan to capture and evaluate how personalization is implemented throughout development. In this blog, I’ll highlight some key tenets for a development team to follow to ensure personalization is continuously captured and reinforced through the thoughtful use of QA.

Agile Release Cadence and QA Scripting

When developing an application, releases are structured to follow a set cadence as defined by a release period. Each release contains the features, patches, and other deliverables developed by the team's engineers during that period. It is important to ensure proper UAT is carried out during each release period and accepted before the application is released to production. This starts with defining a QA script for users to follow when executing UAT, exhaustively testing the code contributions and their associated use cases within the application. When personalization comes into play, the UAT script must include criteria not only to verify that code contributions work as intended, but also to validate that what's delivered achieves personalization.

Criteria that evaluate personalization or personalized content delivery must be properly defined within these QA scripts and in a way that is easily executable by the users testing proposed changes to the application. These criteria should be translated from the business requirements and personalization goals the engagement tries to solve. For example, consider search engineering for a complex technical resource center for a large business with many departments. Without personalization, UAT could be passed by searching multiple facets and seeing results that are tagged or belong to said facets. With personalization in the QA script, UAT for those same search results is extended to cover things such as boosting results based on a user’s location and content findability based on a user’s role.
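For instance, the location-boost criterion above can be expressed as an executable check. A minimal sketch, where the toy `search` function and the result data are stand-ins for whatever search platform is actually under test:

```python
# Toy search results tagged with regions (illustrative data, not a real index)
RESULTS = [
    {"title": "EU Returns Policy", "region": "EMEA"},
    {"title": "US Returns Policy", "region": "AMER"},
]

def search(query: str, user: dict) -> list:
    """Stand-in for the search platform: rank results matching the user's region first."""
    return sorted(RESULTS, key=lambda r: r["region"] != user["region"])

# UAT criterion: content from the user's own region must be ranked first
emea_user = {"name": "Test User", "region": "EMEA"}
top_result = search("returns policy", emea_user)[0]
assert top_result["region"] == "EMEA", "Personalization criterion failed: region boost"
```

Writing the criterion this way forces it to be specific: a tester either sees the regional result ranked first or they do not, with no ambiguity about what "personalized" means for this feature.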

A successful development team knows the business requirements, personalization efforts, and use cases implemented by ongoing development. As a result, they will have a clearer understanding of how to structure QA scripts so that personalization is fully and accurately evaluated. In this way, the team can give users a testing plan that uncovers more potential bugs, shortcomings in the features developed, and gaps in the application's end-to-end functionality, all while keeping personalization measurable and fully understood by the entire team. Uncovering these shortcomings early reduces technical debt by lowering the chance of discovering them after the application has gone live.

QA Processes and Execution

After QA scripts have been properly defined and distributed to users, it is time to carry out QA and ensure UAT is successful for the application to move to production. 

As stated above, the development team has a much deeper understanding of the features proposed in the release, which could negatively impact the integrity of QA execution. An engineer, for example, executing QA scripts could produce a false sense of a feature being intuitive, given their foundational understanding of how it works. Instead, execution should be left to the users as they will offer the most critical and valuable feedback. There should also be a well-defined place for users to indicate a pass or fail status, along with a dedicated space for feedback and replication steps of any issue.

Suppose an engineer has developed a feature that maps content metadata to user roles after publishing the content. The code involved has been rigorously unit-tested and functionally tested by the engineer, giving them a biased lens of an intuitive feature because of their familiarity. A user will provide unbiased feedback on the feature, as they will be exposed to it with a fresh set of eyes. This allows the user to evaluate the feature for accurate personalization, creating a continuous feedback loop that confirms business goals are being accomplished throughout the QA process.

Meaningful Results and Feedback

Upon completion of QA and UAT, it is time to review the feedback that users left throughout their execution process. While it is imperative that analysts on the development team track, assess, and record feedback and success statuses of QA scripts, engineers should also be directly involved in this review process. Engineer involvement with a QA review is similar to user-driven story development. User-driven story development builds and estimates work from the viewpoint of those using the application. Engineer-driven QA review analyzes the work from the viewpoint of those familiar with the acceptance criteria of the work. As a result, engineer-driven QA increases the likelihood of an application’s success by fulfilling the needs of the use case, business goals, and quantifiable measurements of successful personalization. 

Similarly, an engineer’s involvement in reviewing feedback and UAT results takes this principle further. By reviewing feedback, an engineer will have the opportunity to surface more thorough use cases given by actual users throughout the QA process. 

Consider a subscription-based web form that allows users to subscribe themselves and their colleagues to content being delivered by an application. This feature has just been developed, and users have finished their acceptance testing. While nothing indicated a failure or blocker to move code into production, multiple users had left feedback asking for this web form to handle distribution lists and other types of pre-defined mailboxes. Since the engineers were heavily involved with the QA execution of this release, they can quickly refine and estimate new work so that it can be pulled into future sprints. Not only will this save planning time, but it also presents a learning opportunity for the engineers to better understand the process of user-driven development and business cases being solved by the application.

Conclusion

User Acceptance Testing and Quality Assurance plans are foundational to building and delivering an application that fully aligns with the business goals that drive personalization. They allow users to get the most out of the product and ensure the integrity of the application as a whole. This is especially important when considering how personalization improves content delivery. EK's advanced content team has experience across a wide range of areas in personalized content delivery. Through our thoughtful approach to QA, we ensure that no use case is left behind and the correct audiences are met with the correct content. Contact us if you see opportunities for personalization in your business! 

Breaking it Down: What is Personalization?
https://enterprise-knowledge.com/breaking-it-down-what-is-personalization/ (Fri, 13 Jan 2023)

I consistently hear “personalized” used to describe technical features, and it is not always clear what the term means. The term “personalized” often leads us to ask:

  • How is the feature personalized, i.e. what factors lead to the outcome?
  • At what level is the feature personalized, for a group or individual?
  • What criteria can we use to measure the personalized results?

Personalization applies to content, learning materials, notifications, and several other features in everyday software. When implemented effectively, personalization improves user engagement, experience, and retention. In this blog, I will define personalization, discuss the goal and benefits of personalization, and provide a few examples of how to implement personalization successfully.

Defining Personalization

As Joe Hilger mentioned in his recent content personalization blog, “A common phrase in Knowledge Management is ‘Deliver people the right information at the right time!’.” Personalization can be defined by the shorter phrase, “deliver people the right information.” The right information is determined by adapting results based on the following:

  • Rules: Changing what is shown based on known facts about the individual (e.g. their background, interests, or department in the organization).
  • Behavior: Identifying patterns and similarities between users to show likely results.
  • Predictions: Learning from rules and known behaviors to make best-guess predictions about an individual.

When marketers sell products, they have more success selling to consumers who can relate to their purchases. If a marketer intentionally connects their product with a consumer based on a profile of the consumer's behavior, characteristics, and preferences, the consumer is more likely to purchase the product. Personalization leverages the wants, needs, and attributes of a consumer or end user to provide relevant content, learning materials, or other digital touchpoints like notifications. If content design uses end-user information to customize content for a particular user or group of users, those users are more likely to engage with the content.

The Benefits of Personalization

As a result of the information age, the amount of content available to users grows exponentially, and finding key information in the vast knowledge space of an organization becomes more difficult each day. The steps we can take to curate the data that employees or consumers must sift through are critical to reducing time spent searching and improving satisfaction with results.

According to McKinsey, a personalized experience increases employee engagement by 20-30%, and SmarterHQ reinforces that with their finding that 72% of consumers say they only engage with personalized messaging. There are numerous benefits to using personalization, including an enhanced user experience, improved performance of daily tasks, and less time lost to searching through cluttered information. The benefits of leveraging personalization vary depending on the use case, so let’s take a look at some examples of how to apply personalization impactfully.

Personalization Use Cases

A common use of personalization is in a learning ecosystem when developing a course (or learning path) for an individual learner. In this case, the learning materials and assignments presented to the learner can change based on the learner’s preferences, previously proven abilities, or other background information. Every learner grasps concepts differently, and the ability to adapt courses to optimize comprehension and retention of course material so learners can accurately utilize the information is key to meeting success criteria. An example of this is personalizing learner feedback based on how a learner reacted to an interactive prompt. If a learner answers a formative assessment question incorrectly, we may want to provide more specific resources or a plan to reinforce relevant material. Going beyond the course, learners may want to personalize their coursework timeline by configuring the timing and set up of microlearning assets and notifications to remind them to re-engage and strengthen lesson reinforcement.

“Delivering personalized digital experiences is another tool in your knowledge management toolbox – helping deliver the right information to the right people at the right time.” – Enterprise Knowledge (on building a personalization proof of concept)

Another emerging personalization trend in the knowledge management space is using personalized search results in company public or internal-facing search platforms. When a user enters a search query, search platforms can leverage more than just the query to produce the result. The system may also recognize the user’s

  • department,
  • geographic region,
  • company or project role, and
  • current projects.

By using all of this information, we can boost search results produced by the same department, related to the user's region, or matching objectives tied to the user's current role and projects. The personalization approach and the specifics of boosting should be workshopped, iterated on, and tested to optimize results, and tied to a taxonomy that best describes the knowledge domain.
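As a sketch, profile-based boosting can be as simple as adding a weight for each attribute a result shares with the user's profile. The fields and weights below are illustrative only; a real implementation would tune them inside the search platform itself:

```python
# Illustrative boost weights per profile attribute (real weights would be tuned in testing)
BOOSTS = {"department": 2.0, "region": 1.5, "role": 1.0}

def boosted_score(base_score: float, result_tags: dict, profile: dict) -> float:
    """Add a boost for every profile attribute the result shares with the user."""
    boost = sum(
        weight
        for field, weight in BOOSTS.items()
        if result_tags.get(field) == profile.get(field)
    )
    return base_score + boost

profile = {"department": "Engineering", "region": "EMEA", "role": "Analyst"}
result_tags = {"department": "Engineering", "region": "EMEA", "role": "Manager"}
score = boosted_score(1.0, result_tags, profile)  # department and region boosts apply
```

Here a result sharing the user's department and region, but not their role, gains the first two boosts only; results sharing nothing keep their base relevance score.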

Conclusion

Personalization is an important aspect of improving the findability and discoverability of information. With successful application, personalization provides individuals access to relevant information that was previously buried or obscure. If you or your organization have questions about starting your personalization journey or have a topic of interest you would like us to cover next in our “Breaking it down” blog series, contact us.

Transforming Tabular Data into Personalized, Componentized Content using Knowledge Graphs in Python
https://enterprise-knowledge.com/transforming-tabular-data-into-personalized-componentized-content-using-knowledge-graphs-in-python/ (Tue, 22 Mar 2022)

My colleagues Joe Hilger and Neil Quinn recently wrote blogs highlighting the benefits of leveraging a knowledge graph in tandem with a componentized content management system (CCMS) to curate personalized content for users. Hilger set the stage by explaining the business value of a personalized digital experience and the logistics of these two technologies supporting one another to create it. Quinn made these concepts more tangible by processing sample data into a knowledge graph in Python and querying the graph to find tailored information for a particular user. This post will again show the creation and querying of a knowledge graph in Python; this time, however, the same sample data will be sourced from external CSV files.

A Quick Note on CSVs

CSV files, or comma-separated values files, are widely used to store tabular data. If your company uses spreadsheet applications such as Microsoft Excel or Google Sheets, or relational databases, then it is likely you have encountered CSV files before. This post will help you take the existing CSV-formatted data throughout your company, transform it into a usable knowledge graph, and resurface relevant pieces of information to users in a CCMS. Although this example uses CSV files as the tabular dataset format, the same principles apply to Excel sheets and SQL tables alike.

Aggregating Data

The diagram below is a visual model of the knowledge graph we will create from data in our example CSV files.

Diagram showing customers, products and parts

In order to populate this graph, just as in Quinn’s blog, we will begin with three sets of data about:

  • Customers and the products they own
  • Products and the parts they are composed of
  • Parts and the actions that need to be taken on them

This information is stored in three CSV files, Customer_Data.csv, Product_Data.csv and Part_Data.csv:

Customers

Customer ID | Customer Name | Owns Product
1           | Stephen Smith | Product A
2           | Lisa Lu       | Product A

Products

Product ID | Product Name | Composed of
1          | Product A    | Part X
1          | Product A    | Part Y
1          | Product A    | Part Z

Parts

Part ID | Part Name | Action
1       | Part X    |
2       | Part Y    |
3       | Part Z    | Recall

To create a knowledge graph from these tables, we will need to

  • Read the data tables from our CSV files into DataFrames (an object representing a 2-D data structure, such as a spreadsheet or table)
  • Transform the DataFrames into RDF triples and add them to the graph

In order to accomplish these two tasks, we will be utilizing two Python libraries. Pandas, a data analysis and manipulation library, will help us serialize our CSV files into DataFrames and rdflib, a library for working with RDF data, will allow us to create RDF triples from the data in our DataFrames.

Reading CSV Data

This first task is quite easy to accomplish using pandas. Pandas has a read_csv method for ingesting CSV data into a DataFrame. For this use case, we only need to provide two parameters: the CSV’s file path and the number of rows to read. To read the Customers table from our Customer_Data.csv file:

import pandas as pd

customer_data = pd.read_csv("Customer_Data.csv", nrows=2)

The value of customer_data is:

       Customer ID      Customer Name     Owns Product
0                1      Stephen Smith        Product A
1                2            Lisa Lu        Product A

We repeat this process for the Products and Parts files, altering the filepath_or_buffer and nrows parameters to reflect the respective file’s location and table size.
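Those repeated calls look like the following. To keep this sketch runnable on its own, io.StringIO objects holding the same table contents stand in for the files on disk; with the real files, the arguments would simply be the file paths:

```python
import io
import pandas as pd

# Stand-ins for Product_Data.csv and Part_Data.csv so this snippet runs without the files
product_csv = io.StringIO(
    "Product ID,Product Name,Composed of\n"
    "1,Product A,Part X\n"
    "1,Product A,Part Y\n"
    "1,Product A,Part Z\n"
)
part_csv = io.StringIO(
    "Part ID,Part Name,Action\n"
    "1,Part X,\n"
    "2,Part Y,\n"
    "3,Part Z,Recall\n"
)

# With the real files these would be pd.read_csv("Product_Data.csv", nrows=3), etc.
product_data = pd.read_csv(product_csv, nrows=3)
part_data = pd.read_csv(part_csv, nrows=3)
```

Note that the empty Action cells for Part X and Part Y arrive as NaN values, which is why the translation code below checks pd.isnull before creating a triple.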

Tabular to RDF

Now that we have our tabular data stored in DataFrame variables, we are going to use rdflib to create subject-predicate-object triples for each column/row entry in the three DataFrames. I would recommend reading Quinn’s blog prior to this one as I am following the methods and conventions that he explains in his post. 

Utilizing the Namespace module will provide us a shorthand for creating URIs, and the create_eg_uri method will url-encode our data values.

from rdflib import Namespace, URIRef
from urllib.parse import quote

EG = Namespace("http://example.com/")

def create_eg_uri(name: str) -> URIRef:
    """Take a string and return a valid example.com URI"""
    quoted = quote(name.replace(" ", "_"))
    return EG[quoted]

The columns in our data tables will need to be mapped to predicates in our graph. For example, the Owns Product column in the Customers table will map to the http://example.com/owns predicate in our graph. We must define the column to predicate mappings for each of our tables before diving into the DataFrame transformations. Additionally, each mapping object contains a “uri” field which indicates the column to use when creating the unique identifier for an object.

customer_mapping = {
    "uri": "Customer Name",
    "Customer ID": create_eg_uri("customerId"),
    "Customer Name": create_eg_uri("customerName"),
    "Owns Product": create_eg_uri("owns"),
}

product_mapping = {
    "uri": "Product Name",
    "Product ID": create_eg_uri("productId"),
    "Product Name": create_eg_uri("productName"),
    "Composed of": create_eg_uri("isComposedOf"),
}

part_mapping = {
    "uri": "Part Name",
    "Part ID": create_eg_uri("partId"),
    "Part Name": create_eg_uri("partName"),
    "Action": create_eg_uri("needs"),
}

uri_objects = ["Owns Product", "Composed of", "Action"]

The uri_objects variable created above indicates which columns from the three data tables should have their values parsed as URI References, rather than Literals. For example, Composed of maps to a Part object. We want to make the <Part> object in the triple EG:Product_A EG:isComposedOf <Part> a URI pointing to/referencing a particular Part, not just the string name of the Part. Whereas the Product Name column creates triples such as EG:Product_A EG:productName “name” and “name” is simply a string, i.e. a Literal, and not a reference to another object.

Now, using all of the variables and methods declared above, we can begin the translation from DataFrame to RDF. For the purposes of this example, we create a global graph variable and a reusable translate_df_to_rdf function which we will call for each of the three DataFrames. With each call to the translate function, all triples for that particular table are added to the graph.

from rdflib import URIRef, Graph, Literal
import pandas as pd

graph = Graph()

def translate_df_to_rdf(customer_data, customer_mapping):
    # Counter variable representing current row in the table
    i = 0
    num_rows = len(customer_data.index)

    # For each row in the table
    while i < num_rows:
        # Create URI subject for triples in this row using the mapping's "uri" column
        name = customer_data.loc[i, customer_mapping["uri"]]
        row_uri = create_eg_uri(name)

        # For each column/predicate mapping in mapping dictionary
        for column_name, predicate in customer_mapping.items():
            # Skip the "uri" entry; it names the identifier column, not a predicate
            if column_name == "uri":
                continue

            # Grab the value at this specific row/column entry
            value = customer_data.loc[i, column_name]

            # Strip extra characters from value
            if isinstance(value, str):
                value = value.strip()

            # Check if the value exists
            if not pd.isnull(value):
                # Determine if object should be a URI or Literal
                if column_name in uri_objects:
                    # Create URI object and add triple to graph
                    uri_value = create_eg_uri(value)
                    graph.add((row_uri, predicate, uri_value))
                else:
                    # Create Literal object and add triple to graph
                    graph.add((row_uri, predicate, Literal(value)))
        i = i + 1

In this case, we make three calls to translate_df_to_rdf:

translate_df_to_rdf(customer_data, customer_mapping)
translate_df_to_rdf(product_data, product_mapping)
translate_df_to_rdf(part_data, part_mapping)

Querying the Graph

Now that our graph is populated with the Customers, Products, and Parts data, we can query it for personalized content of our choosing. So, if we want to find all customers who own products that are composed of parts that need a recall, we can create and use the same query from Quinn’s previous blog:

sparql_query = """SELECT ?customer ?product
WHERE {
  ?customer eg:owns ?product .
  ?product eg:isComposedOf ?part .
  ?part eg:needs eg:Recall .
}"""

results = graph.query(sparql_query, initNs={"eg": EG})
for row in results:
    print(row)

As you would expect, the results printed in the console are two ?customer ?product pairings:

(rdflib.term.URIRef('http://example.com/Stephen_Smith'), rdflib.term.URIRef('http://example.com/Product_A'))
(rdflib.term.URIRef('http://example.com/Lisa_Lu'), rdflib.term.URIRef('http://example.com/Product_A'))

Summary

By transforming our CSV files into RDF triples, we created a centralized, connected graph of information, enabling the simple retrieval of very granular and case-specific data. In this case, we simply traversed the relationships in our graph between Customers, Products, Parts, and Actions to determine which Customers needed to be notified of a recall. In practice, these concepts can be expanded to meet any personalization needs for your organization.

Knowledge Graphs are an integral part of serving up targeted, useful information via a Componentized Content Management System, and your organization doesn’t need to start from scratch. CSVs and tabular data can easily be transformed into RDF and aggregated as the foundation for your organization’s Knowledge Graph. If you are interested in transforming your data into RDF and want help planning or implementing a transformation effort, contact us here.
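
To make that summary concrete, here is a minimal, stdlib-only sketch of the same idea – flattening tabular rows into subject-predicate-object triples (no pandas or rdflib; the mapping shape and helper names are simplified stand-ins for the ones used above):

```python
import csv
import io
from urllib.parse import quote

EG = "http://example.com/"

def to_uri(name: str) -> str:
    """Turn a label like 'Stephen Smith' into an example.com URI string."""
    return EG + quote(name.replace(" ", "_"))

def rows_to_triples(rows, subject_column, mapping):
    """Flatten tabular rows into (subject, predicate, object) triples.

    `mapping` pairs each column name with the predicate it should become.
    """
    triples = []
    for row in rows:
        subject = to_uri(row[subject_column])
        for column, predicate in mapping.items():
            value = row.get(column, "").strip()
            if value:  # skip empty cells, mirroring the pd.isnull check above
                triples.append((subject, EG + predicate, to_uri(value)))
    return triples

# A miniature 'customers.csv' as it might arrive from a sales system
csv_text = "customer,product\nStephen Smith,Product A\nLisa Lu,Product A\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
triples = rows_to_triples(rows, "customer", {"product": "owns"})
```

The same function can then be pointed at the products and parts files with their own mappings, exactly as the three `translate_df_to_rdf` calls do above.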

The post Transforming Tabular Data into Personalized, Componentized Content using Knowledge Graphs in Python appeared first on Enterprise Knowledge.

Knowledge Management Technology to Improve Learning Outcomes https://enterprise-knowledge.com/knowledge-management-technology-to-improve-learning-outcomes/ Fri, 25 Feb 2022 14:30:16 +0000 https://enterprise-knowledge.com/?p=14452 Learning Ecosystems should be designed to not only present educational information, but to truly promote learning. There are many factors that improve learning outcomes within learning ecosystems, but two of those factors most strongly impacted by knowledge management technology are … Continue reading

The post Knowledge Management Technology to Improve Learning Outcomes appeared first on Enterprise Knowledge.

Learning Ecosystems should be designed to not only present educational information, but to truly promote learning. There are many factors that improve learning outcomes within learning ecosystems, but two of those factors most strongly impacted by knowledge management technology are motivation and attention.

Motivation

Motivation is a complex factor to understand, but psychologists, neuroscientists, and learning theorists have amassed quite a body of research. We know that motivation can be positively influenced by intrinsic drivers, experiences of success, and an overall positive system user experience.

Learning ecosystems that leverage curiosity and interest to drive intrinsic motivation create much better learning outcomes than learning ecosystems which depend upon compulsory training or fear. There are obviously a lot of process and cultural elements that influence curiosity and interest-driven learning, but knowledge management technology has a role to play as well. In knowledge management, we talk a lot about the findability and discoverability of information:

  • Findability describes the ability of a system user (in this case a learner) to find the information for which they came to the system. If I want to learn the basics of graphic design, I might execute a search for “graphic design basics.” Findability refers to my ability to find a beginner eLearning course so I can get started.
  • Discoverability describes the ability of the learner to discover new information in the system which is useful – but for which they weren’t even searching. In the example above, I search for “graphic design basics” and find an eLearning course, but I also find an entire training plan with multiple levels of graphic design proficiency and supporting learning assets for each. I didn’t know those additional resources were there, but I’m thankful to discover them as they provide me not only with the course, but with a roadmap to continue advancing my skills.

A well-designed knowledge management portal supports both findability and discoverability of learning assets. Enabling the discoverability of additional learning assets and learning paths inspires curiosity and helps create an intrinsic motivation to learn.
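
As a toy illustration of the distinction, imagine learning assets tagged with metadata: findability is the direct hit on a query, while discoverability surfaces assets that share tags with it (the asset names and tags below are invented for the example):

```python
# Hypothetical learning assets, each tagged with metadata
assets = {
    "Graphic Design Basics (eLearning)": {"graphic design", "beginner"},
    "Graphic Design Proficiency Path": {"graphic design", "learning path"},
    "Advanced Typography Workshop": {"graphic design", "advanced"},
    "Budget Reporting Checklist": {"finance"},
}

def find(query_tags):
    """Findability: assets matching every queried tag."""
    return [name for name, tags in assets.items() if query_tags <= tags]

def discover(query_tags):
    """Discoverability: assets sharing at least one tag with the query,
    excluding the direct hits themselves."""
    hits = set(find(query_tags))
    return [name for name, tags in assets.items()
            if name not in hits and tags & query_tags]

direct = find({"graphic design", "beginner"})
related = discover({"graphic design", "beginner"})
```

The learner searching for the basics course also surfaces the proficiency path and advanced workshop, which is the curiosity-driving effect described above.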

Research shows that learners who experience success are also motivated to keep learning. Knowledge management technology can build success experiences into your organization’s learning ecosystem by automatically conferring certificates when learners complete metadata-enabled learning paths. Knowledge management technology can also create personalization of feedback by leveraging some of the same tools we use to deliver a multitude of content personalization experiences – componentized content and a robust metadata strategy.

Motivation is also strongly linked to the overall user experience a learner has with the learning technology. Learning is a process which requires sustained attention and effort, and if a learner is frustrated with outdated information, a lack of cues to guide attention, or visual clutter which creates cognitive overload, motivation is greatly reduced.

Attention

It is difficult for learners to sustain attention, and many learning activities take place in an online environment where there is fierce competition for that attention. Many traditional training approaches rely on unrealistic expectations of our ability to pay attention. Full-day, instructor-led workshops or even hour-long webinars are examples where learner attention can drop off drastically.

Knowledge management technology can provide solutions to this problem. Componentized content can enable the chunking of educational content in such a way that the same core components of content are reusable across multiple learning contexts. SCORM packages promised this benefit, but SCORM was only designed for reuse within eLearning courses. With ever-increasing demands on learner attention, we know that diverse learning opportunities – including informal learning and social learning – are absolutely critical. Componentized content in a CCMS can actually enable the reuse of content in any context – not just in courses.

A Headless CMS delivery architecture can provide further benefits and allow for the personalized delivery of these reusable learning asset components across multiple learner experiences. If you’ve created a reusable learning asset that explains how to create a budget report, a Headless CMS would enable you to publish that information to:

  • A checklist that provides context for a project manager to create and update the report; and
  • An explanatory reference sheet for a department director that explains how to apply the information in the report for department-level strategic planning.
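
A minimal sketch of that headless pattern – one stored component, with the rendering chosen at delivery time (the component fields and audience names are invented for illustration):

```python
# One reusable learning component, stored once in a (hypothetical) CCMS
component = {
    "title": "Creating a Budget Report",
    "steps": ["Export last quarter's actuals", "Apply the budget template",
              "Review variances with your team"],
}

def render(component, audience):
    """Deliver the same component in a form suited to each audience."""
    if audience == "project_manager":
        # Checklist view: actionable steps with checkboxes
        lines = [f"[ ] {step}" for step in component["steps"]]
        return f"Checklist: {component['title']}\n" + "\n".join(lines)
    if audience == "department_director":
        # Reference view: a compact summary for strategic planning
        return (f"Reference: {component['title']} - "
                f"{len(component['steps'])} steps; see full procedure.")
    raise ValueError(f"unknown audience: {audience}")

checklist = render(component, "project_manager")
reference = render(component, "department_director")
```

Because the component is stored once, updating the budget-report steps updates every delivered experience at the same time.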

When we leverage the latest knowledge management technology to create reusable, componentized learning assets, which can be reused across multiple learning experiences, we allow ourselves to create shorter, varied, and personalized learning experiences, which will help our learners sustain their attention and improve learning outcomes.

Summary

A modern workforce faces many demands for their time and attention, and it’s easy for learning to get put on the back burner – even for those of us who love learning. When designing a learning ecosystem, it’s important to remember the learning theory that helps us best support learners and set them up for successful learning outcomes. Supporting learner motivation and attention are key. Knowledge management technology has the potential to improve the motivation and attention of learners – and thereby improve learning outcomes. If you’d like to apply knowledge management best practices to the design and development of your learning ecosystem, EK can help.

Content Personalization with Knowledge Graphs in Python https://enterprise-knowledge.com/content-personalization-with-knowledge-graphs-in-python/ Mon, 14 Feb 2022 15:00:14 +0000 https://enterprise-knowledge.com/?p=14361 In a recent blog post, my colleague Joe Hilger described how a knowledge graph can be used in conjunction with a componentized content management system (CCMS) to provide personalized content to customers. This post will show the example data from Hilger’s post being loaded into a knowledge graph and queried to find the content appropriate ... Continue reading

The post Content Personalization with Knowledge Graphs in Python appeared first on Enterprise Knowledge.

In a recent blog post, my colleague Joe Hilger described how a knowledge graph can be used in conjunction with a componentized content management system (CCMS) to provide personalized content to customers. This post will show the example data from Hilger’s post being loaded into a knowledge graph and queried to find the content appropriate for each customer, using Python and the rdflib package. In doing so, it will help make these principles more concrete, and help you in your journey towards content personalization.

To follow along, a basic understanding of Python programming is required.

Aggregating Data

Hilger’s article shows the following visualization of a knowledge graph to illustrate how the graph connects data from many different sources and encodes the relationships between them.

Diagram showing customers, products and parts

To show this system in action, we will start out with a few sets of data about:

  • Customers and the products they own
  • Products and the parts they are composed of
  • Parts and the actions that need to be taken on them

In practice, this information would be pulled from the sales tracking, product support, and other systems it lives in via APIs or database queries, as described by Hilger.

customers_products = [
    {"customer": "Stephen Smith", "product": "Product A"},
    {"customer": "Lisa Lu", "product": "Product A"},
]

products_parts = [
    {"product": "Product A", "part": "Part X"},
    {"product": "Product A", "part": "Part Y"},
    {"product": "Product A", "part": "Part Z"},
]

parts_actions = [{"part": "Part Z", "action": "Recall"}]

We will enter this data into a graph as a series of subject-predicate-object triples, each of which represents a node (the subject) and its relationship (the predicate) to another node (the object). RDF graphs use uniform resource identifiers (URIs) to provide a unique identifier for both nodes and relationships, though an object can also be a literal value.

Unlike the traditional identifiers you may be used to in a relational database, URIs in RDF conventionally use a URL format (typically beginning with http:// or https://), although a URI is not required to point to an existing website. The base part of this URI is referred to as a namespace, and it’s common to use your organization’s domain as part of this. For this tutorial we will use http://example.com as our namespace.

We also need a way to represent these relationship predicates. For most enterprise RDF knowledge graphs, we start with an ontology, which is a data model that defines the types of things in our graph, their attributes, and the relationships between them. For this example, we will use the following relationships:

  • Customer’s ownership of a product: http://example.com/owns
  • Product being composed of a part: http://example.com/isComposedOf
  • Part requiring an action: http://example.com/needs

Note the use of camelCase in these predicate names – for more best practices in ontology design, including how to incorporate open standard vocabularies like SKOS and OWL into your graph, see here.

The triple representing Stephen Smith’s ownership of Product A in rdflib would then look like this, using the URIRef class to encode each URI:

from rdflib import URIRef

triple = (
    URIRef("http://example.com/Stephen_Smith"),
    URIRef("http://example.com/owns"),
    URIRef("http://example.com/Product_A"),
)

Because typing out full URLs every time you want to add or reference a component of a graph can be cumbersome, most RDF-compliant tools and development resources provide some shorthand way to refer to these URIs. In rdflib, that’s the Namespace class. Here we create our own namespace for example.com, and use it to more concisely create that triple:

from rdflib import Namespace

EG = Namespace("http://example.com/")

triple = (EG["Stephen_Smith"], EG["owns"], EG["Product_A"])

We can further simplify this process by defining a function to transform these strings into valid URIs using the quote function from the urllib.parse module:

from urllib.parse import quote

def create_eg_uri(name: str) -> URIRef:
    """Take a string and return a valid example.com URI"""
    quoted = quote(name.replace(" ", "_"))
    return EG[quoted]
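
As a quick sanity check of what quote contributes here: spaces are replaced first, and any remaining reserved characters are percent-encoded (the "AT&T" value is just an invented example):

```python
from urllib.parse import quote

# Spaces become underscores before quoting, so nothing is left to encode
print(quote("Stephen Smith".replace(" ", "_")))  # Stephen_Smith
# Reserved characters like '&' are percent-encoded into a valid URI segment
print(quote("AT&T".replace(" ", "_")))           # AT%26T
```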

Now, let’s create a new Graph object and add these relationships to it:

from rdflib import Graph

graph = Graph()

owns = create_eg_uri("owns")
for item in customers_products:
    customer = create_eg_uri(item["customer"])
    product = create_eg_uri(item["product"])
    graph.add((customer, owns, product))

is_composed_of = create_eg_uri("isComposedOf")
for item in products_parts:
    product = create_eg_uri(item["product"])
    part = create_eg_uri(item["part"])
    graph.add((product, is_composed_of, part))

needs = create_eg_uri("needs")
for item in parts_actions:
    part = create_eg_uri(item["part"])
    action = create_eg_uri(item["action"])
    graph.add((part, needs, action))

Querying the Graph

Now we are able to query the graph, in order to find all of the customers that own a product containing a part that requires a recall. To do this, we’ll construct a query in SPARQL, the query language for RDF graphs.

SPARQL has some features in common with SQL, but works quite differently. Instead of selecting from a table and joining others, we will describe a path through the graph based on the relationships each kind of node has to another:

sparql_query = """SELECT ?customer ?product
WHERE {
  ?customer eg:owns ?product .
  ?product eg:isComposedOf ?part .
  ?part eg:needs eg:Recall .
}"""

The WHERE clause asks for:

  1. Any node that has an owns relationship to another – the subject is bound to the variable ?customer and the object to ?product
  2. Any node that has an isComposedOf relationship to the ?product from the previous line, the subject of which is then bound to ?part
  3. The ?part node from the previous line, which must have a needs relationship to the eg:Recall node.

Note that we did not at any point tell the graph which of the URIs in our graph referred to a customer. By simply looking for any node that owns something, we were able to find the customers automatically. If we had a requirement to be more explicit about typing, we could add triples to our graph describing the type of each entity using the RDF type relationship, then refer to these in the query.
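
If that explicit typing were needed, the query could constrain ?customer by class. Here is a sketch of what the amended query string might look like (eg:Customer is a hypothetical class name; "a" is SPARQL shorthand for rdf:type):

```python
# The same query as above, with an added type constraint on ?customer.
# This assumes triples like (eg:Stephen_Smith, rdf:type, eg:Customer)
# have been added to the graph.
typed_query = """SELECT ?customer ?product
WHERE {
  ?customer a eg:Customer .
  ?customer eg:owns ?product .
  ?product eg:isComposedOf ?part .
  ?part eg:needs eg:Recall .
}"""
```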

We can then execute this query against the graph, using the initNs argument to map the “eg:” prefixes in the query string to our example.com namespace, and print the results:

results = graph.query(sparql_query, initNs={"eg": EG})
 
for row in results:
    print(row)

This shows us the URIs for the affected customers and the products they own:

(rdflib.term.URIRef('http://example.com/Stephen_Smith'), rdflib.term.URIRef('http://example.com/Product_A'))
(rdflib.term.URIRef('http://example.com/Lisa_Lu'), rdflib.term.URIRef('http://example.com/Product_A'))

These fields could then be sent back to our componentized content management system, allowing it to send the appropriate recall messages to those customers!

Summary

The concepts and steps described in this post are generally applicable to setting up a knowledge graph in any environment, whether in-memory using Python or Java, or with a commercial graph database product. By breaking your organization’s content down into chunks inside a componentized content management system and using the graph to aggregate this data with your other systems, you can ensure that the exact content each customer needs to see gets delivered to them at the right time. You can also use your graph to create effective enterprise search systems, among many other applications.

Interested in best-in-class personalization using a CCMS plus a knowledge graph? Contact us.

Taking Content Personalization to the Next Level: Graphs and Componentized Content Management https://enterprise-knowledge.com/taking-content-personalization-to-the-next-level-graphs-and-componentized-content-management/ Tue, 14 Dec 2021 14:30:07 +0000 https://enterprise-knowledge.com/?p=13971 Content personalization is no longer optional for companies. A personalized digital experience is essential to creating loyal customers, partners, and employees. The leading technology companies have created an expectation of highly contextualized information that answers customer questions and anticipates future … Continue reading

The post Taking Content Personalization to the Next Level: Graphs and Componentized Content Management appeared first on Enterprise Knowledge.

Content personalization is no longer optional for companies. A personalized digital experience is essential to creating loyal customers, partners, and employees. The leading technology companies have created an expectation of highly contextualized information that answers customer questions and anticipates future needs. Fortunately, some of the latest technology trends address this challenge and allow organizations to personalize information in a meaningful and cost effective manner. Two of the most important tools for effective content personalization are:

  • Componentized Content Management Systems (CCMS) and 
  • Knowledge Graphs (Graph).

A CCMS allows organizations to create and manage content so that people receive only the information they need. A Graph allows organizations to better target what information should be delivered. In the rest of this blog post, I am going to explain how these two tools work to provide the best possible content personalization experience. To keep things simple, I am going to refer to the receiver of this personalized information as a customer, although these concepts could easily apply to personalization for partners and employees as well.

Tool 1: Bite-Sized Content Components

A CCMS supports the content side of the personalization equation. It allows organizations to author and manage content in components or sections rather than long documents. Componentized content is structured content that represents a portion of a larger document (typically a chapter or section) that can be combined to build documents dynamically. A great example of this is your car’s owner’s manual. This manual has instructions for changing a tire, filling the wiper fluid, and jump-starting the car. Dividing the content of the owner’s manual into components allows the manufacturer to send only the jump-starting instructions when that is what the customer needs.

Truly effective personalization requires this division of content so that the customer gets only the information they need and not extraneous information. Imagine you are a product manufacturer and you sell multiple products, many of which use the same or similar parts. As you interact with customers, it is important that the information you provide is limited to products they own or related products that would be of interest. A CCMS allows organizations to deliver individual components to directly answer questions or to dynamically assemble larger manuals that include only information relevant to the products the customer owns. This level of personalization reminds the customer that you understand who they are and that you value their time. A common phrase in Knowledge Management is “Deliver people the right information at the right time!” A CCMS helps ensure that the right information can be delivered in a way that is specific and relevant to the user.
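
As a toy sketch of that dynamic assembly, content components can be tagged with the products they apply to and filtered down to what a given customer owns (the products and components below are invented):

```python
# Hypothetical content components, tagged with the products they document
components = [
    {"title": "Changing a Tire", "products": {"Model S", "Model T"}},
    {"title": "Filling Wiper Fluid", "products": {"Model S"}},
    {"title": "Jump Starting the Car", "products": {"Model S", "Model T"}},
]

def assemble_manual(owned_product):
    """Assemble a personalized manual from only the relevant components."""
    return [c["title"] for c in components if owned_product in c["products"]]

manual = assemble_manual("Model T")  # omits the wiper-fluid section
```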

Tool 2: Knowledge Graphs

Once the information is componentized the personalization platform needs a way to decide what information is the right information to share with the customer. This is where Knowledge Graphs provide a level of control over personalization that is new and much more powerful than prior technologies. Knowledge graphs deliver on personalization through two key features: aggregation and inference.

Graphs are very good at aggregating information from multiple sources. This is the use case that Google demonstrates with its famous knowledge panels (the information about people, places, and things that shows on the right side of the search results). A graph can pull information about customers from multiple sources such as the CRM system, product support tickets, and sales information. This aggregated view means that the graph can be used to determine which content chunks should be assembled together to best align with the purchases and concerns of the customer. The CCMS produces the chunked content and the graph assembles the “right” chunks of content and delivers them to the customer based on what the graph knows about the customer.

A graph can aggregate more than just information about a customer. Graphs are also used to aggregate information about the products and services of an organization. This information can come from the product information management system, the product catalog, and ticketing systems. With this information, the graph can look at the products a customer owns and the latest information about those products so that highly targeted and almost predictive information can be delivered to customers. For example, a customer may own a product that has had a recent recall. The CCMS stores information about how to get the recall and how to install the solution. The graph is able to proactively see that the customer owns this product from the customer portion of the graph and then identify that the product has a recall from the product portion of the graph. The organization can then send a highly personalized and targeted message that explains what the recall is and how to install the solution. This type of proactive personalization can turn a potentially negative situation into a more positive engagement. A simplified example of how the graph provides this information can be seen in the image below.

An example of a knowledge graph applied to a product recall use case.

Additional personalization can be offered through a feature in graphs called inference. Inference occurs when two entities in the graph are seen as related because they have relationships in common. For example, two products that a company offers might use a part that lowers noise. A third product might use a different part, but one that also offers noise canceling features. Even though these products are not directly related we can infer that they are similar because the parts that they use have a similar characteristic (see the image below).

An example of inference through knowledge graphs.

Inference allows organizations to personalize recommendations in a way that could not easily be done with older technologies. This opens a new path for personalization that allows for even more proactive content interactions with both customers and prospects.
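
The inference idea can be sketched even without a graph engine: if two parts share a characteristic, the products that use them can be inferred to be similar (the product, part, and feature names are invented to mirror the diagram):

```python
# Explicit relationships: each product uses a part, each part has a feature
product_parts = {
    "Product A": "Part 1",
    "Product B": "Part 1",
    "Product C": "Part 2",
    "Product D": "Part 3",
}
part_features = {
    "Part 1": "noise canceling",
    "Part 2": "noise canceling",
    "Part 3": "vibration damping",
}

def infer_similar(product):
    """Infer products similar to `product` because their parts
    share a characteristic, even when the parts themselves differ."""
    feature = part_features[product_parts[product]]
    return sorted(
        other for other in product_parts
        if other != product
        and part_features[product_parts[other]] == feature
    )

similar = infer_similar("Product A")
```

Product B is related directly through the shared Part 1, while Product C is only reachable through the shared noise-canceling characteristic – the inferred relationship described above.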

Summary

Componentized Content Management Systems and Knowledge Graphs are foundational elements that are key to providing content personalization. Organizations that personalize customer, employee, and partner experiences with these tools can create compelling digital experiences that surprise and delight. Our content management and graph specialists can help your organization build a truly cutting-edge personalization platform. If you are interested in learning more about our services in this area, you can reach us at info@enterprise-knowledge.com.
