5 Indicators You're Doing DataSecOps Wrong
I love hiking and outdoor navigation. What I find especially useful for these trips is not only planning my waypoints but also searching in advance for points of confusion: points that will tell me if I have gone off route. These hiking trips taught me that watching for points of confusion is a useful way to catch errors in any aspect of life, so I developed this guide to help you recognize when you are going against DataSecOps principles.
Before discussing situations in which you are off route, let’s review DataSecOps.
DataSecOps: Embedding Security Into Data Operations
DataSecOps is a relatively new concept which embeds security and governance into data operations in order to enable data democratization while reducing security and compliance risks. DataSecOps is designed to be a collaborative framework between security, GRC (Governance, Risk Management, & Compliance), data engineering, and other teams, so security is embedded into data processes in the organization.
Having a “DataSecOps mindset” is crucial for creating data-driven value for companies. For example, when allowing more teams and users to access data in the organization (data democratization), it is important to ensure security is a top priority. If security considerations are not involved in the entire process, from design to monitoring, it can lead to adverse effects like project delays (when security issues are finally revealed), or, even worse, it can cause compliance and security risks.
So what is DataSecOps? DataSecOps is an agile, holistic, and security-embedded approach to coordinating the ever-changing data and its users which aims to deliver quick data-to-value while keeping data private, safe, and well-governed.
Now that we have had a short introduction to DataSecOps and understand its importance, let’s examine five indicators that you may be doing something wrong on your way to becoming a “DataSecOps organization.”
Point 1: Data Operations Are Separate from Security
If your data operations, or data engineering, are managed without strong collaboration with the security and GRC teams, you are definitely off route. In many companies, data engineering teams attempt to “do their work” with as little interaction with security teams as possible, so that the latter do not slow down their projects. This strategy is, of course, very short-sighted. I have seen quite a few projects where security was not involved until it was too late, and all of them could have, in hindsight, been improved by earlier security considerations.
In many cases, this problem is caused by security teams who lack the capacity or resources to deal with data projects until someone forces them to (e.g. by requiring them to perform an audit).
Nevertheless, in any case, there must be a very strong collaboration between all teams involved with a data project, especially between the data engineering and security teams. Questions like what types of sensitive data are accessed in the project, which teams are consuming the data, and what these people do with the data need to be resolved (as much as possible) in the design phase. If new data security questions arise, they should also be solved using this strong collaborative mindset.
Point 2: Changes in Data Are Not Reflected in Security
Another sign that you can improve your DataSecOps mindset is when changes in data occur without being properly reflected in security. An example of this occurs when data is copied to new locations, but security controls like data masking are not considered. Another example is when data that was stored in a certain way becomes stored in a different way without verifying that security controls are still functioning optimally (e.g. changing data to be stored as semi-structured data).
Since data is changed in an agile way, it can “drift away” from the security controls we think we have implemented. Once-functioning controls, like data masking or row-level security, may no longer work due to changes in the data or in the way it is stored or processed. When designing data, we should put safeguards in place, which may include dynamic masking, automated data discovery, or automation of security tests (e.g. an automated test that verifies staged data against an up-to-date security configuration so that a data analyst cannot retrieve PII data).
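One way to guard against this drift is to make the masking check itself an automated test that runs whenever data or views change. The sketch below is a minimal, illustrative example: it uses sqlite3 as a stand-in for a real warehouse, and the table and view names are hypothetical.

```python
import re
import sqlite3

# Minimal sketch of an automated security test: stage sample data, query the
# masked view an analyst would actually use, and assert no raw PII leaks
# through. sqlite3 stands in for your real warehouse here.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "alice@example.com"), (2, "bob@example.com")])

# The masked view analysts are supposed to query: email local part redacted.
conn.execute("""
    CREATE VIEW customers_masked AS
    SELECT id, '***@' || substr(email, instr(email, '@') + 1) AS email
    FROM customers
""")

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def masked_view_leaks_pii(connection) -> bool:
    """Return True if any row in the analyst-facing view contains a full email."""
    rows = connection.execute("SELECT email FROM customers_masked").fetchall()
    return any(EMAIL_RE.fullmatch(value) for (value,) in rows)

assert not masked_view_leaks_pii(conn), "PII leaked through the masked view!"
print("masking check passed")
```

If someone later copies `customers` to a new location or recreates the view without the masking expression, a test like this fails immediately instead of right before the audit.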
Point 3: Your Data Access Permissions Are a One-Way Road
This is a very common phenomenon where data access is granted when needed by consumers but is either never revoked or revoked only on special occasions (like right before the annual audit). The problem with not revoking data access is that users or teams retain access to data they no longer use or derive any value from. This creates a risk leak: your risk level continuously increases as more data access is granted, with no sufficient clean-up of that risk.
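Turning this clean-up into a recurring job is straightforward once you can export access records. The sketch below assumes you can pull (user, dataset, last_accessed) tuples from your platform's access logs; the grant records and the 90-day threshold are illustrative, not from a real system.

```python
from datetime import datetime, timedelta

# Sketch of a periodic access clean-up pass over exported grant records.
# A grant with last_accessed of None was granted but never used.

STALE_AFTER = timedelta(days=90)
now = datetime(2021, 6, 1)

grants = [
    {"user": "ana",  "dataset": "sales",   "last_accessed": datetime(2021, 5, 20)},
    {"user": "ben",  "dataset": "hr_pii",  "last_accessed": datetime(2020, 11, 2)},
    {"user": "carl", "dataset": "finance", "last_accessed": None},  # never used
]

def stale_grants(grants, now, threshold):
    """Grants that were never used, or not used within the threshold."""
    return [g for g in grants
            if g["last_accessed"] is None or now - g["last_accessed"] > threshold]

for g in stale_grants(grants, now, STALE_AFTER):
    # In a real pipeline this would open a revocation ticket or call the
    # platform's revoke API instead of printing.
    print(f"revoke candidate: {g['user']} on {g['dataset']}")
```

Running a pass like this on a schedule turns the one-way road into a loop: access that stops delivering value gets flagged and removed instead of accumulating as risk.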
Point 4: You Become Too Platform-Dependent
Another very common phenomenon is locking yourself into specific platforms for the wrong reasons. The problem arises when your organization keeps using data platforms it would rather not use (for various reasons like cost or technical fit) because the security controls are buried deep within the platform itself.
Often, this issue is caused by ad-hoc security configurations that are over-complicated, making it difficult to grasp how to migrate to a different platform while maintaining the same (or better) level of security risk.
Point 5: You Are Unsure of Where Your Sensitive Data Is Located
This point is last, but perhaps first in terms of priority. If you are not sure where your sensitive data (e.g. PII, PHI, or business secrets) is located, you need to fix this immediately. The first, and obvious, reason to fix this problem is that knowing where sensitive data is located is part of several compliance requirements and security audits. And scrambling to find this information only right before the audit is definitely not good practice.
The second reason to know where sensitive data is located is that, in most organizations, there is a shortage of resources for both data engineering and security teams. Knowing where your sensitive data is can help you prioritize projects and resources in a much better way. After all, if your operational latency reports are exposed, that is an inconvenience, but, if your customers' or employees' private information is exposed, it is a disaster.
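A first step toward this inventory is automated discovery: sample values from each column and flag those matching common PII patterns. The sketch below is deliberately naive; real classifiers use many more signals (column names, checksums, context), and the patterns and sample data here are illustrative only.

```python
import re

# Naive sketch of sensitive-data discovery over sampled column values.

PII_PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def classify_column(samples, min_ratio=0.8):
    """Return a PII label if enough sampled values match one of the patterns."""
    for label, pattern in PII_PATTERNS.items():
        hits = sum(1 for v in samples if pattern.match(str(v)))
        if samples and hits / len(samples) >= min_ratio:
            return label
    return None

# Illustrative sample: one column of contact emails, one of latency metrics.
table_sample = {
    "contact": ["a@x.com", "b@y.org", "c@z.net"],
    "latency_ms": [12, 97, 430],
}

inventory = {col: classify_column(vals) for col, vals in table_sample.items()}
print(inventory)  # {'contact': 'email', 'latency_ms': None}
```

Even a rough inventory like this lets you prioritize: the `contact` column gets masking and tight access controls first, while the latency metrics can safely be democratized.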
By reviewing these points of confusion, I hope it becomes easier for you to locate areas of improvement in order to build a strong DataSecOps mindset. If you think of any additional signs that you may be off track, I would love to hear them. Also, if you find yourself in one, some, or all the above situations, do not feel discouraged, but rather recognize the immense opportunity that adopting a DataSecOps mindset can bring to you and your organization.
If you'd like to understand how Satori helps organizations achieve effective DataSecOps, fill out the form below to set up a quick demo call.
Schedule a Demo
Ready for better data access governance and universal data protection? Schedule a quick, private demo today!