On 25 April 2018, the Commission announced that, together with the Member States, it was working on a coordinated plan on AI, to be finalised by the end of the year. Its objectives are to maximise the impact of investments at EU and national level, to encourage cooperation across the EU, and to exchange best practices and define the way forward together, so as to ensure the EU’s global competitiveness in AI. To that end, public and private AI investments in the EU should be increased, preparations should be made for the socio-economic changes brought about by AI, and an appropriate ethical and legal framework should be ensured.
With the arrival of the GDPR on 25 May, the first obstacles to this “appropriate ethical and legal framework” are materialising, namely the provisions on data minimisation and on automated individual decision-making.
Data minimisation
The GDPR stipulates that personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’, article 5.1.c GDPR).
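In engineering terms, one common way to operationalise this principle is to whitelist, per processing purpose, the fields that are actually necessary and discard everything else before storage. The sketch below illustrates the idea; the purposes and field names are hypothetical, and this is of course an illustration, not legal advice.

```python
# Hypothetical mapping: processing purpose -> fields deemed necessary.
NECESSARY_FIELDS = {
    "shipping": {"name", "street", "city", "postal_code"},
    "newsletter": {"email"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields necessary for the given purpose."""
    allowed = NECESSARY_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

customer = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "street": "1 Main St",
    "city": "Brussels",
    "postal_code": "1000",
    "date_of_birth": "1980-01-01",  # necessary for neither purpose
}

print(minimise(customer, "newsletter"))  # only the email field survives
```

The tension with AI is visible immediately: such a whitelist presupposes that the necessary fields are known in advance, which, as the next paragraph explains, is exactly what machine learning cannot guarantee.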
Research and development in AI, and AI solutions themselves, depend on massive quantities of data, including personal data, for machine learning to function and for algorithms to be developed. Machine learning is a type of artificial intelligence that enables computer systems to learn from examples, data and experience: instead of following pre-programmed rules, the machine itself detects patterns, relevance and correlations in data sets. As a consequence, the quantity and the relevance of the data needed to feed an algorithm are not always known beforehand. A strict application of the GDPR principle of data minimisation therefore jeopardises the development of AI technologies.
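A minimal sketch of “learning from examples instead of pre-programmed rules” is a one-nearest-neighbour classifier: nothing in the code states what distinguishes the two classes; the pattern is recovered from the training data itself. The data points below are made up for illustration.

```python
import math

# Training examples: (feature vector, label). In practice these are
# the large data sets the article refers to.
training = [
    ((1.0, 1.0), "A"),
    ((1.2, 0.8), "A"),
    ((4.0, 4.2), "B"),
    ((3.8, 4.0), "B"),
]

def predict(x):
    """Classify x by copying the label of the closest training example.
    No hand-written rule defines class 'A' or 'B'; the decision
    boundary emerges entirely from the data."""
    _, label = min(
        ((math.dist(x, xi), yi) for xi, yi in training),
        key=lambda pair: pair[0],
    )
    return label

print(predict((1.1, 0.9)))  # "A"
print(predict((4.1, 4.1)))  # "B"
```

Note that every training example contributes to every prediction, which is precisely why it is hard to say in advance which data points were “necessary” in the sense of article 5.1.c.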
Automated individual decision-making, including profiling
The GDPR stipulates that the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her (article 22.1 GDPR).
This right does not apply if the decision:
- is necessary for entering into, or performance of, a contract between the data subject and a data controller;
- is based on the data subject’s explicit consent; or
- is authorised by EU or Member State law (article 22.2 GDPR).
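The structure of article 22 can be expressed as a simple boolean check, which makes the two-step logic (scope of the prohibition, then exceptions) explicit. This is a hedged illustration with hypothetical field names; a real compliance analysis is a legal, not a programming, exercise.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool        # no meaningful human involvement
    significant_effect: bool      # legal or similarly significant effect
    necessary_for_contract: bool  # article 22.2(a)
    authorised_by_law: bool       # article 22.2(b)
    explicit_consent: bool        # article 22.2(c)

def permitted_under_article_22(d: Decision) -> bool:
    # Article 22.1 only bites if the decision is solely automated
    # AND produces legal or similarly significant effects.
    if not (d.solely_automated and d.significant_effect):
        return True
    # Otherwise, one of the article 22.2 exceptions must apply.
    return d.necessary_for_contract or d.authorised_by_law or d.explicit_consent

# A fully automated credit refusal with no exception applying:
refusal = Decision(True, True, False, False, False)
print(permitted_under_article_22(refusal))  # False
```

Read as a prohibition (the WP29 view discussed below), the function returning False means the processing may simply not take place, rather than merely being open to objection by the data subject.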
Moreover, the recent WP29 guidelines on article 22 GDPR make the obstacle to AI almost insuperable. According to WP29, article 22.1 should be read not as a right to be invoked by the data subject, but as a general prohibition subject only to the three aforementioned exceptions.
This strict interpretation implies that as soon as AI involves automated decisions with legal or similarly significant effects (which will often be the case), such decisions are prohibited unless there is a contract, explicit consent or a legal exemption. That goes beyond what the European legislator stipulated in article 22.
To promote AI technology in Europe, the EU or the Member States should therefore adopt legislation regulating how data subjects may (or may not) opt out of AI applications that involve automated decisions with legal or similarly significant effects.
That is, in fact, a bigger priority than introducing the ePrivacy Regulation.