The Data Challenge: How to Map Data in a Distributed World
By Dotan Nahum, Head of Developer-First Security at Check Point Software Technologies
Here’s a quick-fire question: do you know where all your sensitive data is? As businesses of all sizes generate, accumulate, store, and process more data records in more places than ever, it’s increasingly challenging to classify and track all that data – not to mention make use of it.
On the one hand, enterprises rush into digital transformation with their isolated data silos and outdated legacy code. On the other hand, 86% of developers admit they do not consider application security a top priority when coding. Somewhere in between are CISOs facing burnout as they attempt to embed code security best practices, privacy regulations, and compliance standards into the chaotic process that is the software development lifecycle.
In this post, we’ll look at why mapping your distributed data is necessary, what challenges you’ll face along the way, and how you can overcome them.
Why is data scattered in the first place?
Whether you like it or not, most data produced, stored, and processed by business applications is distributed by nature. Both logical and physical data distribution is necessary for any application to scale in functionality and performance. Organizations store different data types across different files and databases for various purposes.
The classic example of data distribution within a company is buyer and client data. One SME can have data on leads, warehouse orders, CRM, and social media monitoring spread over dozens of internally developed and third-party SaaS applications. These applications read and write data at different intervals and formats to owned and shared repositories. In many cases, each also has various schemas and field names to store the exact same data.
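Because each system can store the exact same data under a different schema and field name, mapping usually starts by agreeing on canonical names. Below is a minimal sketch of that idea; the alias table and field names are hypothetical, invented for illustration rather than taken from any particular CRM or warehouse product.

```python
# Hypothetical example: the same customer email lives under different
# field names in a CRM and a warehouse system. A small alias table lets
# us normalize records into one canonical schema before mapping flows.
FIELD_ALIASES = {
    "email": {"email", "email_address", "contact_email"},
    "full_name": {"name", "full_name", "customer_name"},
}

def normalize(record: dict) -> dict:
    """Rename known aliases to canonical field names; keep unknown fields."""
    out = {}
    for key, value in record.items():
        canonical = next(
            (canon for canon, aliases in FIELD_ALIASES.items() if key in aliases),
            key,
        )
        out[canonical] = value
    return out

crm_row = {"contact_email": "a@example.com", "customer_name": "Ada"}
warehouse_row = {"email_address": "a@example.com", "order_id": 42}

# Both rows now expose the same canonical "email" field.
print(normalize(crm_row))
print(normalize(warehouse_row))
```

In practice this normalization layer would be generated from the data map itself, so that a renamed column in one system surfaces as a mapping gap rather than a silent mismatch.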
Application development processes distribute a significant portion of data within the application architecture, especially regarding serverless, microservice-based architectures, APIs, and third-party (open source) code integration. So, the critical question isn’t why we distribute data in our applications. Instead, it’s how we can manage it effectively and securely throughout its lifecycle in our application.
Mapping distributed data: is the effort worth the reward?
“Shift left” application security, big data security, code security, and privacy engineering are not new concepts. However, software engineers and developers are only beginning to adopt tools and methodologies that ensure their code and data are safe from malefactors, mainly because, until recently, security tools were designed and built for information security teams rather than developers.
Privacy by design is nothing new either, but in today’s hectic velocity and delivery-driven developer culture, data privacy still tends to be neglected. It often remains ignored until regulatory standards (like GDPR, PCI, and HIPAA) become business priorities. Alternatively, in the aftermath of a data breach, the C-suite may demand that all relevant departments take responsibility and introduce preventative measures.
It would be great if all software services and algorithms were developed with privacy by design principles. We’d have systems planned and built in a way that makes data management a breeze, which would streamline access control throughout the application architecture and bake compliance and code security into the product from day one. In short, it’d be absolutely fantastic. But that’s not the case in most development teams today. Where do you even start if you want to be proactive about data privacy?
The first step in protecting data is knowing where it resides, who accesses it, and where it goes. This seemingly simple process is called data mapping. It involves discovering, assessing, and classifying your application’s data flows.
Data mapping entails using manual, semi-automated, and fully automated tools to survey and list every service, database, storage, and third-party resource that makes up your data processes and touches data records.
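The output of that survey is an inventory you can query. The sketch below shows one minimal shape such an inventory might take; the fields (kind, location, owner, sensitivity) are assumptions about what a useful record would track, not a standard schema.

```python
from dataclasses import dataclass

# A minimal, illustrative inventory entry for a data map.
@dataclass
class DataAsset:
    name: str
    kind: str            # "database", "object store", "third-party API", ...
    location: str        # region, cluster, or vendor
    owner: str           # team accountable for the asset
    sensitive: bool = False

inventory = [
    DataAsset("orders-db", "database", "eu-west-1", "payments", sensitive=True),
    DataAsset("clickstream", "object store", "us-east-1", "analytics"),
    DataAsset("mailer", "third-party API", "vendor-hosted", "growth", sensitive=True),
]

# A data map is only useful if you can answer questions with it,
# e.g. "which assets hold sensitive data, and who owns them?"
sensitive_assets = [(a.name, a.owner) for a in inventory if a.sensitive]
print(sensitive_assets)
```

Even a flat list like this makes the next steps (access review, regulatory scoping) a query rather than an archaeology project.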
Mapping your application data flows will give you a holistic view of your app’s moving parts and help you understand the relationships between different data components, regardless of storage format, owner, or location (physical or logical).
Don’t expect an easy ride.
Mapping your data for compliance, security, interoperability, or integration purposes is easier said than done. Here are the hurdles you can expect to face.
Depiction of a moving target
Depending on your application’s overall size and complexity, a manual data mapping process can take weeks or even months. Since most applications that require data mapping are thriving, growing projects, you’ll often find yourself chasing the velocity of codebase expansion and the deployment of additional data stores across microservices and distributed data processing tasks. However you spin it, your data map is obsolete as soon as it’s complete.
The ease of data distribution
Why do new data stores pop up faster than you can map them? Because it’s so easy to deploy new data-based features, microservices, and workflows using cloud-based tools and services. As your application grows, so does the number of data-touching services. Furthermore, since developers love to experiment with new technologies and frameworks, you may find yourself dealing with a complex containerized infrastructure (with Docker and Kubernetes clusters) that may have been a breeze to deploy, but is a nightmare to map.
The horrors of legacy code
As enterprises undertake digital transformation of their legacy systems, they must address the data used and created by those systems. In many cases, especially with established enterprises, whoever originally wrote and maintained the legacy code is no longer with the company. So it’s up to you to explore the intricacies of service interconnectivity and data standardization in an outdated environment with limited visibility or documentation.
Integrating security and privacy engineering in your applications
It’s no secret that data is stolen every day. So much so that you can pretty much guarantee that your email address is included in one or more datasets for sale on the dark web.
What can you do to protect your application and data from the greed of cyber criminals and the scrutiny of regulators?
Scan your code to map your data.
Modern CI/CD pipelines and processes employ Static Application Security Testing (SAST) tools to identify code issues, security vulnerabilities, and code secrets accidentally pushed to public-facing repositories. You can employ a similar static code analysis technique to discover and map out data flows in your application.
This approach identifies the code components that can access, process, and store data, mapping the data flows without crawling the contents of any database or data store.
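To make the idea concrete, here is a deliberately toy sketch of static scanning for data flows: grep source text for identifiers that suggest sensitive data and for calls that suggest a data sink, and flag lines where they meet. Real SAST tools parse the AST and track taint across functions; the regex patterns here are assumptions chosen purely for illustration.

```python
import re

# Identifiers that hint at sensitive data (illustrative, not exhaustive).
SENSITIVE = re.compile(r"\b(ssn|email|password|card_number)\b", re.IGNORECASE)
# Calls that hint at a data sink such as a database write or object upload.
SINKS = re.compile(r"\b(execute|insert|save|put_object)\s*\(")

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs where sensitive names meet a sink."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SENSITIVE.search(line) and SINKS.search(line):
            hits.append((lineno, line.strip()))
    return hits

sample = '''
def create_user(db, email, password):
    db.execute("INSERT INTO users VALUES (?, ?)", (email, password))
'''
print(scan(sample))
```

Each hit is a candidate edge in your data-flow map: a place in the code where sensitive data leaves the application for a store you need to inventory.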
Enforce clear boundaries for microservices.
In a microservice architecture, each microservice should (ideally) be autonomous, for better or worse. But where, with respect to sensitive data, does one microservice end and another begin?
You can identify the boundaries of each microservice by focusing on the application’s logical domain models and the data each model owns. Then, attempt to minimize the coupling between those microservices.
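One way to keep that coupling small and auditable is to pass a deliberately minimal data-transfer object across the boundary instead of the full domain record. The sketch below illustrates the idea with hypothetical customer and billing services; the types and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Customer:          # owned by the (hypothetical) customers service
    customer_id: str
    email: str
    home_address: str
    payment_token: str

@dataclass
class BillingContact:    # the only shape the billing service accepts
    customer_id: str
    payment_token: str

def to_billing_contact(c: Customer) -> BillingContact:
    """The boundary: an explicit, reviewable projection of shared data."""
    return BillingContact(customer_id=c.customer_id,
                          payment_token=c.payment_token)

c = Customer("c-1", "a@example.com", "1 Main St", "tok_123")
print(to_billing_contact(c))
```

Because the billing service never sees the email or home address, the data map for that service stays short, and a reviewer can see at a glance exactly which sensitive fields cross the boundary.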
Shift left for privacy in a distributed world
Data security and privacy are rarely a priority for application developers. So it’s no surprise that application data can float around your cloud assets and on-premises devices uncatalogued and unmanaged. However, in 2023 you can’t afford to neglect data privacy laws and potential data security threats lurking in your code.
Mapping the data flows in and out of your application is the first step to shifting privacy left and integrating privacy engineering, compliance, and code security in your CI/CD pipeline.
About the Author
Dotan Nahum is the Head of Developer-First Security at Check Point Software Technologies. He was co-founder and CEO of Spectralops, which was acquired by Check Point Software. Dotan is an experienced hands-on technologist and major open-source contributor, with deep expertise in React, Node.js, Go, React Native, and distributed systems and infrastructure (Hadoop, Spark, Docker, AWS, etc.). Dotan can be reached online at dotann@checkpoint.com, at https://twitter.com/jondot, and at our company website https://www.checkpoint.com/.