The texts in this article were partly generated by artificial intelligence and corrected and revised by us. The following services were used for the generation:
In our last article on working with generative texts, we pointed out initial ethical concerns surrounding the use of artificial intelligence. In this article, we would therefore like to examine these concerns in more detail and present different perspectives on the use of artificial intelligence.
The goal is not a blanket classification of this technology as positive or negative, but a holistic view that illuminates its effects on the various areas concerned.
People, companies and society
For workers in particular, artificial intelligence is a double-edged sword. On the one hand, as the Parliament’s Think Tank reports, the automation of trivial tasks increases productivity, giving individuals more time to work on more creative and higher-quality content and products.
11–37%: estimated increase in labour productivity attributable to AI by 2035 (Parliament’s Think Tank, 2020)
On the other hand, the same automation leads to the loss of jobs. Artificial intelligence does not necessarily replace a profession entirely; it can also support and enhance it, as the think tank Denkfabrik Digitale Arbeitsgesellschaft reports. What impact this could have on the labour market is the subject of a forecast by the European Parliament’s Think Tank:
14% of jobs in OECD countries are highly automatable, and another 32% could face substantial changes (estimate by Parliament’s Think Tank, 2020)
In this context, it should not be overlooked that the labour landscape will change permanently and new professions will emerge. Whether these will be able to compensate for the increasing job insecurity of the working population remains to be seen.
Data protection, copyright and security
The potential for fundamental conflict between data protection and artificial intelligence has been public knowledge at least since the brief ban on OpenAI’s ChatGPT in Italy.
The State Commissioner for Data Protection (LfD) of Lower Saxony, Barbara Thiel, comes to a similar assessment of the potential for conflict in a recently published article by BigData-Insider:
The use of AI systems usually means a deep intrusion into the fundamental rights and freedoms of the data subjects, as this often involves mass processing of data and automated decision-making (Barbara Thiel 2023 via BigData-Insider)
GitHub Copilot is an example of the conflict between personal data and its use for artificial intelligence. The AI code assistant was trained on public repositories by developers and companies and caused controversy shortly after its release.
As the online publication Bleeping Computer reports in a detailed article, there have been numerous reports of licence violations and leaked company secrets.
According to the report, GitHub Copilot ignored the authors and the licences associated with the repositories when generating code snippets. Furthermore, various users reported that GitHub Copilot also reproduced API credentials found in public repositories when prompted.
Please note that the exposure of these API credentials is, in the first instance, a consequence of negligence by the developers and companies concerned, who committed secrets to public repositories instead of following basic operational security practices.
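A common safeguard against exactly this kind of leak is scanning code for hard-coded credentials before it is pushed. The following is a minimal sketch of such a scan in Python; the patterns shown are illustrative assumptions, and real scanners (such as gitleaks or truffleHog) use far more extensive rule sets, including entropy checks.

```python
import re

# Illustrative patterns only; the AWS key-ID prefix "AKIA" is a documented
# format, the generic api_key pattern is a simplified assumption.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)api[_-]?key[\"']?\s*[:=]\s*[\"'][A-Za-z0-9]{16,}[\"']"),
]


def find_secrets(text: str) -> list:
    """Return all substrings that look like hard-coded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits


# Example: a config line that should never be committed to a public repo.
sample = 'config = {"api_key": "A1b2C3d4E5f6G7h8I9j0"}'
print(find_secrets(sample))
```

A check like this can run as a pre-commit hook, so a leaked key is caught on the developer's machine rather than in a public repository where a trained model might later reproduce it.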
Ethics, responsibility and success
To pick up on the ethical concerns from our last article: Artificial intelligence can only be developed for the benefit of all if transparent and respectful handling of the processed data and their products is guaranteed.
This requires not only the consideration of the points mentioned above, but also poses new challenges for the development of artificial intelligence. One of these challenges will be the new legal framework for the use of artificial intelligence. Since 21 April 2021, the European Parliament has been working on the proposal for this very framework presented by the Commission. The aim is to create a harmonised legal framework for artificial intelligence within the EU.
This would also involve creating legal certainty for companies and protecting citizens and intellectual property. In addition, it is vital to regulate the extent to which automated systems penetrate and influence the daily lives of EU citizens.
The forthcoming legal framework will not only define more clearly the responsibilities in dealing with artificial intelligence, but will also grant companies and citizens rights regarding their products, data and intellectual property.
Under these conditions, the use of artificial intelligence can not only be safe, but also successful. This means that artificial intelligence creates added value for citizens and society without violating ethical principles or fundamental rights. To achieve this, artificial intelligence and its applications must be transparent, reliable, secure and responsible.
Artificial intelligence offers both opportunities and risks for various areas of life. Beyond the aspects discussed here, the evolving legal situation in particular will deserve continued attention.