security Articles - Enterprise Knowledge
https://enterprise-knowledge.com/tag/security/

The Journey to Unified Entitlements
https://enterprise-knowledge.com/the-journey-to-unified-entitlements/ (Thu, 24 Jul 2025)

Now, more than ever, organizations need a clear and consistent way to ensure that the access permissions for all their data are applied consistently across the enterprise. We call this unified entitlements, and a perfect storm of events is driving the need for it.

  • AI tools make data in all forms more accessible than ever before.
  • Data is captured in a broader range of tools (both in the cloud and on-premises), each with its own security model.
  • Hackers are more sophisticated than ever, and the need for highly decentralized information repositories with strong security models is now seen as a critical way to deter them.

In the same way that we now have technologies that enable better information access, we also have technologies that make securing this information more robust and scalable. You can learn more about how this is done in our blog post, “Inside the Unified Entitlements Architecture,” which describes how a Unified Entitlements Service (UES) can be set up to consistently replicate information access rules from a central source across a wide range of products so that these rules are the same throughout the organization.

As with most problems, technology is only part of the solution. Implementing a UES is not merely a technical project, but a transformational journey. As part of this journey, organizations typically progress through several maturity stages:

  • Discovery and Assessment: Mapping the current entitlement landscape across platforms and identifying the highest-risk inconsistencies.
  • Policy Standardization: Creating a unified policy framework that translates business rules into technical controls.
  • Incremental Implementation: Rolling out UES capabilities gradually, starting with the most critical data sources and expanding over time.
  • Continuous Improvement: Refining policies, enhancing performance, and expanding coverage to new data platforms as they enter the enterprise ecosystem.

The Discovery and Assessment stage is critical to understanding the complexity of implementing unified entitlements across an organization. During this stage, analysts identify which repositories hold content that needs specific entitlement rules, what those rules are, and how they will be implemented. Most organizations focus on securing their datasets and SharePoint Online. While that is a good starting point, there are many other repositories that likely need to be properly secured. Information like contracts, client data, pricing, and product specifications may all require their own security policies. It is important to put together a list of these repositories and their business owners so that the true scope of the problem is understood. Once this list is in place, the security rules (or policies) can be enumerated. These rules might look like the following:

Limit access to client team members, the project sponsor, and senior leadership only

This list of rules for different information assets should be understandable by both business and technical people and is often quite lengthy. Having discovered the repositories and established the rules, it is important to identify who is responsible for ensuring these rules are in place both at the time of the analysis and in the future. Once this discovery work is complete, the entitlements team can start to move into iterative project implementation.
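Rules like these are easiest to keep consistent when they are captured in a structured, reviewable form. Below is a minimal sketch of one rule expressed as data; the repository, group, and owner names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class EntitlementRule:
    """One access rule, readable by both business and technical reviewers."""
    repository: str        # where the governed content lives
    description: str       # the rule exactly as the business states it
    allowed_groups: list   # groups whose members satisfy the rule
    owner: str             # who is accountable for keeping the rule current

contracts_rule = EntitlementRule(
    repository="Client Contracts",
    description="Limit access to client team members, the project sponsor, "
                "and senior leadership only",
    allowed_groups=["client-team", "project-sponsors", "senior-leadership"],
    owner="legal-ops@example.com",
)
```

Keeping the business wording alongside the technical groups and the accountable owner makes a lengthy rule list auditable by both audiences.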

After defining the repositories and rules, the Policy Standardization process begins. During this stage, the security rules defined in the first stage are aligned with the systems to which they apply, and the security policy models are developed. Each system has its own way of managing security, and the new security policy models need to account for these requirements. Since most security models are either role-based or attribute-based, the new policy models typically address requirements for groups and attributes at an enterprise level. One of the key outputs of this stage is a set of guidelines for how groups need to be managed and which personal attributes need to be captured, managed, and shared with other applications.
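For attribute-based models, the policy check itself can be pictured in a few lines. This is only an illustrative sketch; the attribute names and values are hypothetical:

```python
def is_allowed(user_attrs: dict, policy: dict) -> bool:
    """Grant access only when every attribute the policy requires
    matches one of the values the user holds (simplified ABAC)."""
    return all(
        user_attrs.get(attr) in allowed_values
        for attr, allowed_values in policy.items()
    )

# A policy admitting finance or audit users located in EMEA.
policy = {"department": {"finance", "audit"}, "region": {"EMEA"}}
```

A finance user in EMEA passes this check, while a sales user in the same region is denied, which is why the enterprise-level guidelines for capturing and sharing user attributes matter so much.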

After a core set of policies are defined, the Incremental Implementation stage can begin. During this stage, IT works with repository owners to automate the application of entitlements using the UES. This is a collaborative effort where IT implements the rules to automate entitlements, and business users identify the exceptions that inevitably arise. Both parties then work through the exceptions until the entitlements are correct. Then, this process is repeated with other repositories across the enterprise, focusing on the most critical repositories first.

The Continuous Improvement stage begins once the initial implementations are completed. Information management should never be static. As new information types are captured, new systems are implemented, and new security policies are required, the entitlements must be updated. We help our clients define a repeatable process to update their UES with the latest policies to keep their entitlements aligned with continuously changing business needs.

This journey yields progressive benefits at each stage, from reduced administrative overhead to enhanced security and an improved compliance posture. Organizations that successfully navigate this transformation gain not just better governance but a strategic advantage: the ability to safely democratize data access while maintaining robust protection for sensitive information.

Our Unified Entitlements team has helped others through this journey. If you want to solve your entitlement problems, please contact our team for guidance at info@enterprise-knowledge.com.

 

Entitlements Within a Semantic Layer Framework: Benefits of Determining User Roles Within a Data Governance Framework
https://enterprise-knowledge.com/entitlements-within-a-semantic-layer-framework/ (Tue, 25 Mar 2025)

The importance of data governance grows as the number of users with permission to access, create, or edit content and data within organizational ecosystems increases. An organization may have a plan for data governance and software to support it, but as tens to thousands of users cycle in and out each month, it becomes unwieldy for an administrator to manage permissions, define the needs around permission types, and ultimately decide what requirements users must meet to access information as they come and go. If the group of users is small (<20), it may be easy for an administrator to determine what permissions each user should have. But what if thousands of users within an organization need access to the data in some capacity? And what if there are different levels of visibility to the data depending on the user’s role within the organization? These questions are harder for administrators to answer on their own, and they cause bottlenecks in data access for users.

An entitlement management model is an important part of data governance. Unified entitlements provide a holistic definition of access rights. You can read more about the value of unified entitlements here. This model can be designed and implemented within a semantic layer, providing an organization with roles and associated permissions for different types of data users. Below is an example of an organizational entitlements model with roles, and explanations of an example role for fictional user Carol Jones.


Having a consistent and predictable approach to entitlements within a semantic layer framework makes decisions easier for the administrators who operate a data governance framework. It alleviates questions about how users can gain access to information they need for projects but are not yet entitled to. Clearly defined, consistent, and transparent entitlements give users easier access and give the organization stronger security controls. The combination of reduced risk and reduced lost time makes entitlements an essential area of any enterprise semantic layer framework.

Efficiency

An administrator with a clear understanding of the permissions a new user needs can onboard that user with the correct permissions sooner. As the user’s role evolves, they can submit requests for increased permissions.

Risk Mitigation

Administrators and business leads at a high level within the framework are able to see all of the users in a business area and their associated permissions within the semantic layer framework. If the needs of the user change, or as users leave the company, the administrator can quickly and easily remove permissions from the user account. This method of “pruning” permissions within an entitlements model reduces risk by mitigating the chance of users maintaining permissions to information they no longer need.

Diagnostics

In a data breach, the point of entry can be quickly identified.

Identify Points of Contact

Users who can see the governance model can quickly identify points of contact for specific business areas within an organization’s semantic layer framework, which facilitates communication and collaboration across permission areas in the organization.

An entitlement management model addresses the issue of “which users can do what” with the organization’s data. This is commonly addressed by considering which users should be able to access (read), edit (write, update), or create and delete data, often abbreviated as CRUD. Another facet of the data that must be considered is the visibility users should have. If there are parts of the data that should not be seen by all users, this must be accounted for in the model. There may be different groups of users with read permissions, but not for all the same data. These permissions will be assigned via roles, granted by users with an administrative role. 

C=Create, R=Read, U=Update, D=Delete
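Permissions assigned via roles can be pictured as a simple role-to-permission matrix. The role names below are hypothetical; real systems define their own:

```python
# Hypothetical role-to-permission matrix mapping roles to CRUD actions.
ROLE_PERMISSIONS = {
    "viewer":      {"R"},
    "contributor": {"C", "R", "U"},
    "editor":      {"C", "R", "U", "D"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role includes a CRUD action ("C", "R", "U", or "D")."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The administrator grants a role; the role, not the individual, carries the CRUD permissions, which is what keeps the model manageable at scale.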

One method to solve this problem is to develop a set of heuristics for users that the administrator can reference and revise. By keeping examples of the use cases for which they have granted permissions, administrators can reference these when deciding what permissions to grant new users within a model, or users whose data needs have evolved. It is difficult to predict all individual user needs, especially as an organization grows and as technology advances. Implementing a set of user heuristics allows administrators to be consistent in granting user permissions to semantically linked data, mitigating risk while providing appropriate access to users within the organization. The table below shows some common heuristics, whom they apply to, and whether the entitlement needs further review. A similar approach is the Adaptable Rule Framework (ARF).
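A heuristic set like this can itself be kept as data that the administrator revises over time. Below is a minimal sketch; the roles, data classifications, and decisions are hypothetical:

```python
# Illustrative heuristics an administrator might encode; the specific
# roles, data classes, and decisions vary by organization.
HEURISTICS = [
    # (condition on the request, decision)
    (lambda req: req["role"] == "intern",             "review"),
    (lambda req: req["data_class"] == "restricted",   "review"),
    (lambda req: req["role"] == "analyst"
                 and req["data_class"] == "internal", "grant"),
]

def decide(request: dict) -> str:
    """Return the first matching decision; escalate to review otherwise."""
    for condition, decision in HEURISTICS:
        if condition(request):
            return decision
    return "review"
```

Requests that match no heuristic fall through to human review, so consistency for the common cases never comes at the cost of silently granting an unusual one.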

This method serves as a precursor to documenting a formal entitlement process, which should include the steps, sequence, requirements, and timelines by which users are granted access to data augmented by a semantic layer. These entitlements determine where in the semantic layer framework users can go and how their actions can affect the framework. Deciding on and documenting these process elements provides consistency across an organization for managing entitlements.

Enterprise Knowledge (EK) has over 20 years of experience providing strategic knowledge management services. If your organization is looking for advice on cutting-edge solutions to data governance issues, contact us!

Incorporating Unified Entitlements in a Knowledge Portal
https://enterprise-knowledge.com/incorporating-unified-entitlements-in-a-knowledge-portal/ (Wed, 12 Mar 2025)

Recently, we have had a great deal of success developing a certain breed of application for our customers—Knowledge Portals. These knowledge-centric applications holistically connect an organization’s information—its data, content, people and knowledge—from disparate source systems. These portals provide a “single pane of glass” to enable an aggregated view of the knowledge assets that are most important to the organization. 

The ultimate goal of the Knowledge Portal is to provide the right people access to the right information at the right time. This blog focuses on the first part of that statement—“the right people.” This securing of information assets is called entitlements. As our COO Joe Hilger eloquently points out, entitlements are vital in “enabling consistent and correct privileges across every system and asset type in the organization.” The trick is to ensure that an organization’s security model is maintained when aggregating this disparate information into a single view so that users only see what they are supposed to.

 

The Knowledge Portal Security Challenge

The Knowledge Portal’s core value lies in its ability to aggregate information from multiple source systems into a single application. However, any access permissions established outside of the portal—whether in the source systems or an organization-wide security model—need to be respected. There are many considerations to take into account when doing this. For example, how does the portal know:

  • Who am I?
  • Am I the same person specified in the various source systems?
  • Which information should I be able to see?
  • How will my access be removed if my role changes?

Once a user has logged in, the portal needs to know that the user has Role A in the content management system, Role B in the HR system, and Role C in the financial system. Since the portal aggregates information from these systems, it uses this knowledge to ensure that what the user sees in the portal reflects what they would see in any of the individual systems.

 

The Tenets of Unified Entitlements in a Knowledge Portal

At EK, we have a common set of principles that guide us when implementing entitlements for a Knowledge Portal. They include:

  • Leveraging a single identity via an Identity Provider (IdP).
  • Creating a universal set of groups for access control.
  • Respecting access permissions set in source systems when available.
  • Developing a security model for systems without access permissions.

 

Leverage an Identity Provider (IdP)

When I first started working in search over 20 years ago, most source systems had their own user stores—the feature that allows a user to log into a system and uniquely identifies them within the system. One of the biggest challenges for implementing security was correctly mapping a user’s identity in the search application to their various identities in the source systems sending content to the search engine.

Thankfully, enterprise-wide Identity Providers (IdPs) like Okta, Microsoft Entra ID (formerly Azure Active Directory), and Google Cloud Identity are ubiquitous these days. An IdP is like a digital doorkeeper for your organization: it identifies who you are and shares that information with your organization’s applications and systems.

By leveraging an IdP, I can present myself to all my applications with a single identifier such as “cmarino@enterprise-knowledge.com.” For the sake of simplicity in mapping my identity within the Knowledge Portal, I’m not “cmarino” in the content management system, “marinoc” in the HR system, and “christophermarino” in the financial system.

Instead, all of those systems recognize me as “cmarino@enterprise-knowledge.com” including the Knowledge Portal. And the subsequent decision by the portal to provide or deny access to information is greatly simplified. The portal needs to know who I am in all systems to make these determinations.
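To picture the simplification, here is a sketch in which hypothetical legacy per-system accounts all resolve to the one IdP identifier (in practice, the IdP performs this resolution itself rather than an application-side lookup table):

```python
# Hypothetical mapping from legacy per-system accounts to the single
# IdP identity; real deployments resolve this within the IdP.
LEGACY_ACCOUNTS = {
    ("cms", "cmarino"):               "cmarino@enterprise-knowledge.com",
    ("hr", "marinoc"):                "cmarino@enterprise-knowledge.com",
    ("finance", "christophermarino"): "cmarino@enterprise-knowledge.com",
}

def resolve_identity(system: str, local_user: str) -> str:
    """Resolve a system-local username to the canonical IdP identifier."""
    return LEGACY_ACCOUNTS.get((system, local_user), local_user)
```

Once every system agrees on the canonical identifier, the portal never has to guess whether three different usernames are the same person.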

 

Create Universal Groups for Access Control

Working hand in hand with an IdP, establishing a set of universally used groups for access control is a critical step toward enabling unified entitlements. These groups are typically created within your IdP and should reflect the common groupings needed to enforce your organization’s security model. For instance, you might choose to create groups based on a department, a project, or a business unit. Most systems provide great flexibility in how these groups are created and managed.

These groups are used for a variety of tasks, such as:

  • Associating relevant users to groups so that security decisions are based on a smaller, manageable number of groups rather than on every user in your organization.
  • Enabling access to content by mapping appropriate groups to the content.
  • Serving as the unifying factor for security decisions when developing an organization’s security model.

As an example, we developed a Knowledge Portal for a large global investment firm which used Microsoft Entra ID as their IdP. Within Entra ID, we created a set of groups based on structures like business units, departments, and organizational roles. Access permissions were applied to content via these groups whether done in the source system or an external security model that we developed. When a user logged in to the portal, we identified them and their group membership and used that in combination with the permissions of the content. Best of all, once they moved off a project or into a different department or role, a simple change to their group membership in the IdP cascaded down to their access permissions in the Knowledge Portal.
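The cascade described above can be sketched in a few lines; the group, user, and document names are hypothetical:

```python
# Minimal sketch: access flows from IdP group membership, so removing a
# user from a group in the IdP revokes their access everywhere at once.
idp_groups = {"emea-investments": {"cmarino@enterprise-knowledge.com"}}
content_acl = {"q3-deal-memo": {"emea-investments"}}

def can_view(user: str, doc: str) -> bool:
    """A user may view a document if they belong to any group in its ACL."""
    return any(user in idp_groups.get(g, set()) for g in content_acl[doc])
```

While the user belongs to the group, the check passes; remove them from the group in the IdP and the same check fails, with no per-document changes required.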

 

Respect Permissions from Source Systems

The first two principles have focused on identifying a user and their roles. However, the second key piece of the entitlements puzzle rests with the content. Most source systems natively provide functionality to control access to content by setting access permissions; examples include SharePoint libraries holding your organization’s sensitive documents, ServiceNow tickets available only to a certain group, or Confluence pages viewable only by a specific project team.

When a security model already exists within a source system, the goal of integrating that content within the Knowledge Portal is simple: respect the permissions established in the source. The key here is syncing your source systems with your IdP and then leveraging the groups managed there. When specifying access to content in the source, use the universal groups. 

Thus, when the Knowledge Portal collects information from the source system, it pulls not only the content and its applicable metadata but also the content’s security information. The permissions are stored alongside the content in the portal’s backend and used to determine whether a specific user can view specific content within the portal. The permissions become just another piece of metadata by which the content can be filtered.
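Treating permissions as just another piece of metadata might look like the following sketch, where each indexed item carries a hypothetical "acl" field and results are filtered by the user’s groups:

```python
# Sketch: permissions stored as metadata alongside each indexed item,
# then applied as a filter at query time (field and group names hypothetical).
index = [
    {"id": 1, "title": "Holiday calendar", "acl": ["all-staff"]},
    {"id": 2, "title": "M&A pipeline",     "acl": ["corp-dev"]},
]

def visible_to(user_groups: set) -> list:
    """Return only the items whose ACL intersects the user's groups."""
    return [doc for doc in index if user_groups & set(doc["acl"])]
```

The same mechanism that filters results by topic or date filters them by entitlement, so security enforcement never needs a separate code path.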

 

Develop Security Model for Unsupported Systems

Occasionally, there will be source systems where access permissions are not or cannot be supported. In this case, you will have to leverage your own internal security model, either by developing one or by using an entitlements tool. Instead of being stored within the source system, the entitlements will be managed through this internal model.

The steps to accomplish this include:

  • Identify the tools needed to support unified entitlements;
  • Build the models for applying the security rules; and
  • Develop the integrations needed to automate security with other systems. 

The process to implement this within the Knowledge Portal would remain the same: store the access permissions with the content (mapped using groups) and use these as filters to ensure that users see only the information they should.

 

Conclusion

Getting unified entitlements correct for your organization plays a large part in a successful Knowledge Portal implementation. If you need proven expertise to help you manage access to your organization’s valuable information, contact us.

The “right people” in your organization will thank you.

The Phantom Data Problem: Finding and Managing Secure Content
https://enterprise-knowledge.com/the-phantom-data-problem-finding-and-managing-secure-content/ (Fri, 10 Sep 2021)

Are you actually aware of the knowledge, content, and information you have housed on your network? Does your organization have content that should be secured so that not everyone can see it? Are you confident that all of the content that you should be securing is actually in a secure location? If someone hacked into your network, would you be worried about the information they could access?

Every organization has content/information that needs to be treated as confidential. In some cases, it’s easy to know where this content is stored and to make sure that it is secure. In many other cases, this sensitive or confidential content is created and stored on shared drives or in insecure locations that employees could stumble upon or hackers could take advantage of. Especially in larger organizations that have been in operation for decades, sensitive content and data left and forgotten in unsecured locations is a common, high-risk problem. We call this hidden and risky content ‘Phantom Data’ to express that it is often unknown or unseen and also has the strong potential to hurt your organization’s operations. Most organizations have a Phantom Data problem, and very few know how to solve it. We have helped a number of organizations address this problem, and I am going to share our approach so that others can be protected from the exposure of confidential information that could lead to fines, a loss of reputation, and/or potential lawsuits.

We’ve consolidated our recommended approach to this problem into four steps. This approach offers better ways to defend against hackers, unwanted information loss, and unintended information disclosures.

  1. Identify a way to manage the unmanaged content.
  2. Implement software to identify Personally Identifiable Information (PII) and Personal Health Information (PHI).
  3. Implement an automated tagging solution to further identify secure information.
  4. Design ongoing content governance to ensure continued compliance.

Manage Unmanaged Content

Shared drives and other unmanaged data sources are the most common cause of the Phantom Data problem. If possible, organizations should have well-defined content management systems (document management, digital asset management, and web content management solutions) to store their information. These systems should be configured with a security model that is auditable and aligns with the company’s security policies.

Typically we work with our clients to define a security model and an information architecture for their CMS tools, and then migrate content to the properly secured infrastructure. The security model needs to align with the identity and access management tools already in place. The information architecture should be defined in a way that makes information findable for staff across business departments/units, but also makes it very clear as to where secure content should be stored. Done properly, the CMS will be easy to use and your knowledge workers will find it easier to place secure content in the right place.

In some cases, our clients need to store content in multiple locations and are unable to consolidate it onto a single platform. In these cases, we recommend a federated content management approach using a metadata store or content hub. This is a solution we have built for many of our clients. The hub stores the metadata and security information about each piece of content and points to the content in its central location. The image below shows how this works.

Metadata hub

Once the hub is in place, the business can now see which content needs security and ensure that the security of the source systems matches the required security identified in the hub.
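One hub entry can be pictured as a record carrying descriptive metadata, the required security, and a pointer back to the source system. The field names and values below are hypothetical:

```python
# Illustrative shape of a metadata-hub record: descriptive metadata,
# required security, and a pointer to where the content actually lives.
hub_record = {
    "source_system": "shared-drive",
    "source_uri": "smb://files/contracts/acme-msa.docx",
    "metadata": {"client": "Acme", "type": "contract"},
    "required_groups": ["legal", "acme-client-team"],
}

def security_gap(record: dict, source_groups: list) -> list:
    """Groups the hub requires that the source system does not yet enforce."""
    return sorted(set(record["required_groups"]) - set(source_groups))
```

Comparing the hub’s required groups against what the source system actually enforces surfaces exactly the mismatches the business needs to close.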

Implement PII and PHI Software

There are a number of security software solutions that are designed to scan content to identify PII and PHI information. These tools look at content to identify the following information:

  • Credit card and bank account information
  • Passport or driver’s license information
  • Names, DOBs, phone numbers
  • Email addresses
  • Medical conditions
  • Disabilities
  • Information about relatives

These are powerful tools that are worth implementing as part of this solution set. They are focused on one important part of the Phantom Data issue, and can deliver a solution with out-of-the-box software. In addition, many of these tools already have pre-established connectors to common CMS tools.

Once integrated, these tools provide a powerful alert function to the existence of PII and PHI information that should be stored in more secure locations.
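As a toy illustration of the idea (commercial scanners use checksums, context, and machine learning rather than the two simple hypothetical patterns below):

```python
import re

# Two illustrative patterns only; real PII/PHI scanners detect far more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def scan_for_pii(text: str) -> list:
    """Return the PII categories detected in a piece of text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]
```

Content that triggers any category can then be routed to an alert queue for relocation into a secure repository.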

Implement an Automated Tagging Solution

Many organizations assume that a PII and PHI scanning tool will completely resolve the problem of finding and managing Phantom Data. Unfortunately, PII and PHI are only part of the problem. There is a lot of content that needs to be secured or controlled that does not have personal or health information in it. As an example, at EK we have content from clients that describes internal processes, which should not be shared. There is no personal information in it, but it still needs to be stored in a secure environment to protect our clients’ confidentiality. Our clients may also have customer or product information that needs to be secured. Taxonomies and auto-tagging solutions can help identify these files. 

We work with our clients to develop taxonomies (controlled vocabularies) that can be used to identify content that needs to be secured. For example, we can create a taxonomy of client names to spot content about a specific client. We can also create a topical taxonomy that identifies the type of information in the document. Together, these two fields can help an administrator see content whose topic and text suggest that it should be secured.

The steps to implement this tagging are as follows:

  1. Identify and procure a taxonomy management tool that supports auto-tagging.
  2. Develop one or more taxonomies that can be used to identify content that should be secured.
  3. Implement and tune auto-tagging (through the taxonomy management tool) to tag content.
  4. Review the tagging combinations that most likely suggest a need for security, and develop rules to notify administrators when these situations arise.
  5. Implement notifications to content/security administrators based on the content tags.

Once the tagging solution is in place, your organization will have two complementary methods to automatically identify content and information that should be secured according to your data security policy.
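The two complementary signals, a client-name taxonomy and a topical taxonomy, can be sketched as follows; the vocabularies and the flagging rule are hypothetical:

```python
# Hypothetical controlled vocabularies; real taxonomies are managed in a
# dedicated taxonomy management tool, not in code.
CLIENT_TAXONOMY = {"Acme Corp", "Globex"}
TOPIC_TAXONOMY = {"pricing", "contract", "press release"}

def tag(text: str) -> dict:
    """Tag text against the client and topic vocabularies."""
    low = text.lower()
    return {
        "clients": sorted(c for c in CLIENT_TAXONOMY if c.lower() in low),
        "topics": sorted(t for t in TOPIC_TAXONOMY if t in low),
    }

def needs_review(tags: dict) -> bool:
    """Flag content that names a client AND a sensitive topic."""
    sensitive = {"pricing", "contract"}
    return bool(tags["clients"]) and bool(sensitive & set(tags["topics"]))
```

A document mentioning a client and pricing gets flagged for a security review even though it contains no PII at all, which is precisely the gap this second method closes.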

Design and Implement Content Governance

The steps described above provide a great way to get started solving your Phantom Data problem. Each of these tools is designed to provide automated methods to alert users about this problem going forward. The solution will stagnate if a governance plan is not put in place to ensure that content is properly managed and the solution adapts over time.

We typically help our clients develop a governance plan and framework that:

  • Identifies the roles and responsibilities of people managing content;
  • Provides auditable reports and metrics for monitoring compliance with security requirements; and
  • Provides processes for regularly testing, reviewing, and enhancing the tagging and alerting logic so that security is maintained even as content adapts.

The governance plan gives our clients step-by-step instructions, showing how to ensure ongoing compliance with data protection policies to continually enhance the process over time.

Beyond simply creating a governance plan, the key to success is to implement it in a way that is easy to follow and difficult to ignore. For instance, content governance roles and processes should be implemented as security privileges and workflows directly within your systems.

In Summary

If you work in a large organization with any sort of decentralized management of confidential information, you likely have a Phantom Data problem. Exposure of Phantom Data can cost organizations millions of dollars, not to mention the loss of reputation that organizations can suffer if the information security failure becomes public.

If you are worried about your Phantom Data risks and are looking for an answer, please do not hesitate to reach out to us.

Three Things You can do Today to Get the Most out of Microsoft 365’s Project Cortex
https://enterprise-knowledge.com/three-things-you-can-do-today-to-get-the-most-out-of-microsoft-365s-project-cortex/ (Thu, 13 Aug 2020)

Project Cortex is Microsoft’s new AI offering as part of its Microsoft 365 Suite. It will have several exciting features that organizations can leverage to make connections between content, data, and experts. Cortex will be able to surface knowledge that would have otherwise been buried in people’s mailboxes, SharePoint libraries, meetings, and conversations by generating topic cards, topic pages, people cards, and integrating content into the Microsoft Graph.

Harnessing Project Cortex’s full power, however, will not be as simple as turning on a switch, and there are also pitfalls that could hamper its rollout at your organization. Below, you will find three actions you can take right now to maximize the usefulness of Project Cortex once it is enabled in your organization.

Tidy Up Your Content

One of Cortex’s main strengths is its ability to surface content – it can deliver contextual knowledge in the form of topic cards to users as they work in various Microsoft 365 apps, including SharePoint, Teams, and Outlook. Cards will generally consist of a short description, a directory of related experts, and links to associated content. Some cards will be curated by knowledge managers, while others will be generated automatically by AI.

All too often, organizations and teams have their content in disarray: duplicates (or near duplicates) of content items, outdated information, and obsolete guidance are scattered throughout their sites. By cleaning up your content, you will make it easier for knowledge managers to load topic cards and topic pages with high-quality knowledge, and more importantly, reduce the risk that Cortex will display inaccurate or incomplete information when the AI automatically brings together associated content. Tidying up content is not magic, but it does take planning and effort. As the old adage goes, garbage in, garbage out.
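A content audit of this kind can be partially automated. The sketch below flags exact duplicates (by content hash) and stale items; it assumes documents have been exported with their metadata, and the two-year staleness threshold is an arbitrary example to adjust for your retention policies.

```python
import hashlib
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=2 * 365)  # example threshold: two years

def audit_content(docs, now=None):
    """Flag exact duplicates and stale items.

    `docs` is a list of dicts with 'name', 'content' (bytes), and
    'modified' (datetime) keys -- a simplified stand-in for items
    pulled from a document library.
    """
    now = now or datetime.now()
    seen, duplicates, stale = {}, [], []
    for doc in docs:
        digest = hashlib.sha256(doc["content"]).hexdigest()
        if digest in seen:
            # Record the duplicate alongside the first copy we saw.
            duplicates.append((doc["name"], seen[digest]))
        else:
            seen[digest] = doc["name"]
        if now - doc["modified"] > STALE_AFTER:
            stale.append(doc["name"])
    return duplicates, stale
```

The output gives knowledge managers a shortlist to review rather than a library to read end to end; near-duplicate detection would require fuzzier comparison than a hash.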

Maximize the Use of Taxonomies

Cortex will be smart enough to detect important terms in emails, uploaded documents, SharePoint pages, and other Microsoft 365 content. For each of these important terms, Cortex can create topic cards and topic pages automatically. However, we can help Cortex’s AI algorithm by teaching it which topics are important and relevant. We do so by creating a taxonomy of topics within Microsoft 365’s Managed Metadata Services. Say, for instance, a manufacturing company creates a taxonomy of their product offerings – Microsoft 365 would know to create a topic card and a topic page for each of their products, and surface product information for users as they read their emails or contribute to a Word document without having to interrupt their work. If you want better topic cards appearing on users’ screens as they work, then you need a well-designed taxonomy complementing Cortex’s AI capabilities.

In addition to powering topic cards and topic pages, the taxonomy will also serve as a common tagging scheme across Microsoft 365 apps including SharePoint, Word, OneDrive, Teams, and Yammer. Essentially, the Managed Metadata Services delivers on the promise of taxonomies by providing a layer that connects users to similar resources and to other experts on the topics relevant to their work. 
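For teams that want to manage taxonomies programmatically, term sets can be provisioned through the Microsoft Graph term store endpoints. The sketch below only builds the URL and request body for `POST /sites/{site-id}/termStore/sets`; the site id, group id, and product names are hypothetical, and the field names should be verified against the current Graph reference before use.

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def term_set_request(site_id, group_id, set_name, terms, language="en-US"):
    """Build the URL and JSON body for creating a term set.

    Follows the Microsoft Graph termStore resource shape; whether child
    terms can be created inline varies, so validate against the docs.
    """
    url = f"{GRAPH_BASE}/sites/{site_id}/termStore/sets"
    body = {
        "parentGroup": {"id": group_id},
        "localizedNames": [{"languageTag": language, "name": set_name}],
        "children": [
            {"labels": [{"languageTag": language, "name": t, "isDefault": True}]}
            for t in terms
        ],
    }
    return url, json.dumps(body)
```

Keeping taxonomy definitions in code like this makes it easier to review and replicate them across environments, instead of rebuilding them by hand in the Term Store management tool.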

Set up the Right Levels of Access to your Content

Even though Project Cortex will excel at surfacing knowledge and documents, it will not do so indiscriminately. Indeed, it will respect each individual content item’s security and access policies. Users referencing a topic card or topic page will not be able to read or access restricted documents or private conversations. The challenge for organizations is twofold: setting up access frameworks that are neither too restrictive nor too permissive, and creating a site architecture that allows those frameworks to be implemented.

Finding a balance can be difficult. At one end of the spectrum, teams can be too risk-averse and overprotective of their content, restricting access to a fault and excluding others across the organization who would benefit from leveraging that content. Organizations may also set up their SharePoint sites, Teams channels, etc., and their respective access to mimic their org chart – essentially replicating knowledge and information silos within Microsoft 365. For example, a project team may keep all of their expenses within a completely restricted document library – somebody in accounting who has a valid reason to read and use this information may be unable to access it. Regardless of the rationale behind limiting access, frameworks that are too restrictive may impair the delivery of useful knowledge to users who have a valid (and sometimes urgent) reason to reference it, thereby rendering the full power of Project Cortex and the Microsoft Graph moot.

At the other end of the spectrum, there are frameworks that are completely open. There may be numerous reasons for this openness. Sometimes it may be a deliberate design to encourage knowledge-sharing and transparency, but on other occasions, openness may be a result of weak governance roles and procedures or the lack of guidance for site and content owners. Regardless of the underlying reasons for the lack of security, the risk in this scenario is that Cortex may prove too good at surfacing knowledge and deliver confidential or sensitive information to users who may not have the appropriate privileges to view it.

The right security and access framework will ensure that knowledge is shared with users who need it while protecting the organization’s sensitive information.
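One way to operationalize that balance is a periodic access review that flags sites at either extreme. The sketch below classifies simplified site permission summaries; the field names and thresholds are illustrative only, and a real review would draw on your tenant's actual permissions reports.

```python
def review_site_access(site):
    """Flag sites whose access model sits at either extreme.

    `site` is a dict with 'everyone_access' (bool), 'member_count'
    (int), and optionally 'has_sensitive_content' (bool) -- a
    simplified stand-in for a real permissions report.
    """
    flags = []
    # Fully open sites holding sensitive material are the "too good
    # at surfacing" risk described above.
    if site["everyone_access"] and site.get("has_sensitive_content", False):
        flags.append("too_open")
    # Tiny, fully locked-down sites may be replicating org-chart silos.
    if not site["everyone_access"] and site["member_count"] <= 2:
        flags.append("possibly_too_restricted")
    return flags
```

Sites flagged at either end become candidates for a conversation with their owners, rather than automatic policy changes.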

Conclusion

Project Cortex is one of the most exciting developments for Microsoft 365 in recent years. Its potential to empower organizations to leverage their own institutional knowledge is unlike anything Microsoft has ever released. Although it is just rolling out this year, and many organizations have had limited exposure to it so far, these three tips can help ensure that you hit the ground running once you enable it. You can start today no matter how far along your organization is with Microsoft 365.

Interested in making the most of your SharePoint and Microsoft 365 instance? Contact Enterprise Knowledge to learn more.
