    It would be hard to miss all the articles on the topic of GDPR and the various, terrifying sanctions that can be imposed on an entity for non-compliance. Few, however, delve into important details such as the significance of anonymization or data retention, which make it possible to avoid these sanctions and significantly ease the work of developers. For this reason, we decided to explain in an accessible way what anonymization and retention of personal data are, and to show why their proper implementation matters so much in the software development process. Today, let us tackle anonymization.

    What is anonymization?

    Anonymization is a process that permanently removes the link between personal data and the person to whom the data relates. Thanks to this, what was previously deemed personal data ceases to be so.

    What does it look like in practice?

    The definition above becomes less complicated when presented with an example. Let us imagine, for example, Superman, a comic-book hero from Krypton who wants to hide his identity and blend into the crowd.

    Name: Superman
    Occupation: Superhero
    Origin: Krypton

    During the anonymization process, Superman enters the telephone booth, puts on glasses and a tweed suit, and becomes Clark Kent, a reporter from Kansas.

    Name: Clark Kent
    Occupation: Reporter
    Origin: Kansas, USA

    Through the anonymization process, Superman’s data turned into Clark Kent’s, and there is no connection between these two people. This is fictitious data that can be safely used, e.g. in test environments.
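    In code, this transformation can be sketched as a simple field-by-field replacement. The sketch below is purely illustrative and not the output of any particular tool; the replacement table is hard-coded for clarity, whereas a real pipeline would use value generators.

```python
# Illustrative sketch: each identifying field is replaced with a
# fictitious but plausible value; unknown values pass through unchanged.
REPLACEMENTS = {
    "name": {"Superman": "Clark Kent"},
    "occupation": {"Superhero": "Reporter"},
    "origin": {"Krypton": "Kansas, USA"},
}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with identifying fields replaced."""
    return {
        field: REPLACEMENTS.get(field, {}).get(value, value)
        for field, value in record.items()
    }

original = {"name": "Superman", "occupation": "Superhero", "origin": "Krypton"}
print(anonymize(original))
# → {'name': 'Clark Kent', 'occupation': 'Reporter', 'origin': 'Kansas, USA'}
```

    Note that the original record is left untouched; the function returns an anonymized copy, which makes it safe to use when producing test data sets.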

    The example above illustrates the process of anonymization itself. Let us now consider why it is important that the anonymization is of good quality.

    Irreversibility

    The foundation of anonymization is its irreversibility. Based on the anonymized data, it should never be possible to determine what the original data looked like. Clark’s associates should not be able to discover his true identity.

    When we anonymize a data set, usually only a fragment of the data will undergo change. However, we must ensure that the non-anonymized data does not allow the anonymization to be reversed for the entire set. In our example, we would not need to change Superman’s favorite color; but if we failed to anonymize his origin, we would certainly cause a sensation.
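    This risk can be checked mechanically. The sketch below, a simplified take on the k-anonymity idea, counts how many records share each combination of non-anonymized "quasi-identifier" fields; a group of size one singles out a single person, which is exactly the "Krypton" problem. The sample records are invented for illustration.

```python
from collections import Counter

def unique_quasi_identifiers(records, fields):
    """Return the quasi-identifier combinations that occur exactly once.

    A combination appearing only once singles out one person, so the
    anonymization of the remaining fields can be undone for that record.
    """
    counts = Counter(tuple(r[f] for f in fields) for r in records)
    return [combo for combo, n in counts.items() if n == 1]

records = [
    {"name": "Clark Kent", "favorite_color": "blue", "origin": "Krypton"},
    {"name": "Jane Doe",   "favorite_color": "blue", "origin": "Kansas, USA"},
    {"name": "John Smith", "favorite_color": "red",  "origin": "Kansas, USA"},
]

# "origin" was left untouched: Krypton appears only once, so despite the
# new name, the first record is trivially re-identifiable.
print(unique_quasi_identifiers(records, ["origin"]))
# → [('Krypton',)]
```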

    True to reality

    An important qualitative measure of anonymization is also how well it imitates reality. If Superman and all other people in the data set are anonymized as follows:

    Name: X
    Occupation: Y
    Origin: Z

    we have no doubt that the process is irreversible, but its usefulness is questionable. Person X does not look like anyone who exists in reality, and the nature of the original data has not been preserved: the lengths of the names were not kept, and the data itself looks implausible, with every person sharing the same name. In an IT system, a tester using such data would run into many issues; they would not even be able to distinguish one person from another.
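    Realistic anonymization typically draws replacements from pools of plausible values while preserving the shape of the original data (length, capitalization, format). A minimal sketch, with a hard-coded name pool purely for illustration (production tools ship large, locale-aware dictionaries for this):

```python
import random
import re

# Illustrative value pools - a real tool would use much larger dictionaries.
FIRST = ["Clark", "Lois", "Jimmy", "Perry", "Diana"]
LAST = ["Kent", "Lane", "Olsen", "White", "Prince"]

def fake_name(rng: random.Random) -> str:
    """Draw a plausible full name instead of an opaque placeholder like 'X'."""
    return f"{rng.choice(FIRST)} {rng.choice(LAST)}"

def fake_phone(phone: str, rng: random.Random) -> str:
    """Replace every digit while preserving the original format exactly."""
    return re.sub(r"\d", lambda _: str(rng.randint(0, 9)), phone)

rng = random.Random(42)  # seeded here only to make the example reproducible
print(fake_name(rng))               # a name that looks like a real person
print(fake_phone("555-0123", rng))  # same shape: three digits, dash, four digits
```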

    Repeatability

    Another feature of good anonymization is its repeatability. When anonymizing a data set, we want to be sure it is anonymized in the same way every time. We want Superman to always become Clark Kent, whether it is the first anonymization or the tenth. This is especially important from the point of view of Quality Assurance: testers often create test cases based on specific data, and if that data changed with every refresh, the testers’ work would certainly be more difficult!
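    Repeatability is commonly achieved by deriving the replacement deterministically from the original value plus a secret key, rather than drawing it at random. The sketch below uses a keyed hash (HMAC) to pick a replacement from a name pool: the same input and key always yield the same output, run after run. The key and the name pool here are illustrative assumptions, not part of any specific tool.

```python
import hashlib
import hmac

NAMES = ["Clark Kent", "Lois Lane", "Jimmy Olsen", "Perry White"]
SECRET_KEY = b"illustrative-secret"  # in practice: stored securely, never hard-coded

def anonymize_name(original: str) -> str:
    """Deterministically map an original name to a replacement.

    The same original always maps to the same replacement, so test cases
    built on anonymized data keep working after every refresh.
    """
    digest = hmac.new(SECRET_KEY, original.encode("utf-8"), hashlib.sha256).digest()
    return NAMES[int.from_bytes(digest[:4], "big") % len(NAMES)]

# Superman maps to the same replacement on every run and in every system
# that shares the key - the basis of cross-system consistency as well.
assert anonymize_name("Superman") == anonymize_name("Superman")
print(anonymize_name("Superman"))
```

    Because the mapping depends only on the input and the key, any system holding the same key produces the same replacement, which also addresses consistency across integrated systems.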

    Integrated systems

    Today’s IT world consists of countless interconnected systems. Hardly any application functions as a single organism: systems connect with each other, exchange data, and use each other’s services. Therefore, when approaching anonymization, we must consider the process not for one system in isolation, but for many systems at once. The challenge is for anonymized data to be consistent throughout the entire ecosystem. This means that if the Daily Planet (Clark’s workplace) has both a human resources system and a blog, then Superman becomes Clark Kent in both applications.

    Efficiency

    The last key parameter affecting the quality of anonymization, from my point of view, is performance. IT systems process huge data sets, measured in gigabytes or even terabytes, and anonymizing such databases can be time-consuming. We must therefore ensure not only security but also good speed of the anonymization process. One of the things Superman learned after arriving on Earth is that time is money; this saying rings even more true in modern IT.

    I invite everyone interested in the topic of data retention to read my next article, which I plan on publishing shortly.

    Artur Żórawski, Founder & CTO

    Good-quality tests require good data – data that represents reality as accurately as possible. A copy of production data is very often used for this purpose, and such a dedicated test environment commonly serves for reproducing tickets, debugging data issues, and performing stress tests. Setting aside the fact that this practice is usually incompatible with the GDPR: while the production environment is monitored and audited like a fortress, with only a few people having access to it, non-production environments are treated far less restrictively. The number of people with access to them (not counting the users) is also much larger. Many serious leaks of personal data were caused not by hacking into the “fortress”, but by abuse of these “unprotected settlements”.

    In the area of test data, there are usually two extremes: either personal data is processed by testers and developers in copies of production databases, or we wait half a year to refresh test environments with artificial, usually poorly prepared, data. The solution to this problem could be the implementation of anonymization, but as it turns out, this is not an easy task.

    Challenges associated with designing the anonymization process

    Simple data masking can work in simple cases, but you quickly see that it is not enough for the applications we usually work with every day. On the other hand, when reviewing existing solutions, we noticed that they did not meet our needs – most often they did not support mechanisms for maintaining data consistency between different databases. It was also difficult to find a solution that supported automating the anonymization process. The most popular tools did not allow defining custom generators, either for a single record or ones that take the distribution of the data into account. Anyone implementing a solution that meets these requirements themselves will quickly encounter obstacles.

    Happy medium

    However, there exists a happy medium – ensuring free access to high-quality data reflecting the characteristics of production data, while ensuring the security of the solution and compliance with legal regulations. This happy medium is Nocturno – a data anonymization tool that we designed together as a team. While working on this solution, we decided to take care of:

    – Maintaining full data consistency – not only within the schema or database, but all data sources within the organization (databases of various suppliers, LDAP, file sources, etc.)

    What do we gain by implementing good-quality anonymization?

    By implementing anonymization, we are able to reduce the number of people who have access to personal data to the absolute minimum. Due to the good quality of the anonymized data, its use for software development purposes is transparent and compliant with the GDPR. The process based on Nocturno is easily configurable and maintainable by developers – it can be simultaneously developed in the same codebase as the application.

    Nocturno supports two main implementation scenarios:

    The picture above portrays Nocturno’s role in the automatic process of providing anonymized copies of databases.

    More information about Nocturno can be found here: https://wizards.io/en/nocturno-en/. If you have questions about the anonymization process, please feel free to reach out.

    Marcin Gorgoń, Senior Software Engineer

    Soon it will be twenty years since I joined the world of IT. During this time, I have observed how the environment has changed, how development processes have evolved, and what new tools have come into use. Over time, many processes, including repetitive tasks, were automated, and companies implemented Continuous Integration and Continuous Delivery. All of this change has been motivated by a single thought: let software developers focus on system and business development.

    Enter GDPR

    The entry of the GDPR into force shook the IT world and changed the rules of the game. The development process became more complicated, and operating on personal data became a big risk that had to be addressed. Working in a software house, we saw these issues clearly because they occurred in each of our projects. In theory, we were prepared for the GDPR: we completed the appropriate courses, and the company was armed with documents and records. In practice, it turned out that legal restrictions and the uncertainty associated with the regulation’s entry into force impacted our everyday work. Gone was my dream of unhindered development, where we could focus solely on producing quality software.

    Shortly after the appearance of GDPR regulations, we started looking for available solutions. The tools that we were able to find did not meet our project needs because every day we developed entire integrated ecosystems created in various technologies that exchanged personal data. I felt as if I had travelled two decades backwards in time.

    Change of status quo

    Ultimately, a group of people in the company emerged that set themselves the goal of changing the status quo. We knew what was required and how our plan could be implemented, though we had never faced such a challenge before. Together, however, we managed to create a set of tools that ended up being a godsend for us.

    Anonymization of data

    We started by anonymizing data in test environments. We created a tool that was able to handle many applications at once, taking into account the specificity of Polish law, and do its work efficiently.

    The solution we created was meant to support all of our projects, so high configurability and adaptability to various requirements were the priority. We included anonymization in our Continuous Integration processes and quickly implemented it in our projects. It turned out that the most painful aspects of the GDPR were now handled automatically and no longer caused sleepless nights for the development team.

    Retention of personal data

    The next step was the retention of personal data, which is necessary in almost every system. Taking care of this aspect in a single application is easy. Performing data retention in ten integrated systems is much more difficult, and in a hundred – virtually impossible. It was clear to us that we did not want to repeat the same functionality in all systems that we produce. This is how another tool was born, relieving us of this burden.

    Everything was back on track, just as I had dreamed. Fortunately, the GDPR turned out to be only a bump in the road for our projects.

    Wizards

    With all of this in mind, we founded a startup. We came to the conclusion that the problems we had been dealing with were being experienced by many development teams, and we now had the ready solution.

    That is why we decided to create Nocturno and Oblivio, about which you will be able to read more soon on our company profile.

    Artur Żórawski, Founder & CTO of Wizards