What exactly does the data processing policy that I agreed to without ever reading actually say? Somehow it is a bit disturbing that an advertisement on this newspaper's website links to the denim jacket I tagged with a heart at the online clothing retailer I trust. And how comfortable am I with the idea that doctors, engineers and lawyers might have cheated their way through university with the help of AI software?
Questions like these and many others are addressed by the broad field of digital ethics.
Digital ethics concerns digital technologies in the broadest sense and their impact on individuals, society and the environment. The question is not only what is legally permissible, but what is the 'morally right' thing to do. Because - and we all know this from everyday life - not everything that can be done should be done. The goal, then, is not to map out what is legally possible, but to determine what should be possible at all from an ethical perspective.
But digital ethics is a broad field that is difficult to grasp, and not only because of its wide range of topics. On top of questions that are rarely clear-cut to begin with, the forward-looking nature of the subject adds a further layer of complexity. It is not enough to consider what is acceptable today and what is not; we must also anticipate which software and use cases could have negative implications in the future and therefore require regulation. Clearly, this is an undertaking that cannot always succeed.
Why do we need digital ethics?
By now, everyone is familiar with the advantages of digital technologies. But it is no secret that digitization also brings a number of characteristic disadvantages, or at least challenges.
One of these, which may have serious consequences for society as a whole, is the increasing concentration of power among a few players, namely the 'Big 4' of the Internet: Google, Apple, Facebook and Amazon. They are now involved in almost all of our activities on the Internet. Even those who do not use the Internet intensively and have no social media profiles, for example, still leave their mark - be it through online shopping, hotel bookings, ticket purchases, streaming services, or a simple Google search.
Yet the 'simple Google search' is ultimately wishful thinking, because in practice it does not exist. Every search not only returns a more or less satisfactory result; it also generates usage data that is collected, stored and exploited. Large companies such as the aforementioned Big 4 amass vast amounts of information about us, our usage behavior and our needs - and every click adds another piece to a puzzle that can be assembled into a bigger picture, one that helps capture our attention more effectively and influence our behavior more successfully.
This ultimately gives those who develop these applications power over those who merely use them.
Behind the analysis of this data are algorithms. Besides personalizing advertising, they are designed to provide an individually tailored user experience: they filter our flow of information on the Internet and thus play a major role in determining how we see the world. Website operators naturally pursue economic interests, and these are best served when users interact with their services for as long as possible. People tend to disengage if, whenever they open an app such as Instagram, they are repeatedly confronted with viewpoints that contradict their own. If it were just about entertainment, this might be patronizing but not much of a problem. It becomes problematic, however, when more and more people get the majority of their news via social media, because this leads to the well-known filter bubbles in which we are only shown content that confirms our opinions. As a consequence, we are increasingly isolated from other world views.
The rapid development of AI raises new concerns
Issues of digital ethics also keep arising in connection with ChatGPT, which OpenAI launched last November, and with AI applications more broadly. In addition to privacy concerns (which led to the software being temporarily blocked in Italy), there is an ongoing discussion about how to treat content generated with the help of AI. ChatGPT can do a lot of work for its users, including writing job applications and even academic papers. This carries the risk that students who write their own work will be disadvantaged compared to fellow students who use the AI software, and possibly even that people who do not meet professional requirements will be helped to a degree. For this reason, standardized rules on whether and to what extent the use of ChatGPT is permitted, and how it should be labeled, should be created as quickly as possible, and reliable methods for detecting AI-generated work should be developed.
What to do
It is increasingly imperative for digital service providers to cultivate trust in their products and in their handling of sensitive data. Acceptance of new technologies and digital business models can only be achieved if people feel that their concerns are taken seriously and that the companies they do business with know how to address the potential threats.
Companies must be able to say who has access to which data and for what purposes it is used. Convincing solutions must be developed and communicated transparently. A hidden, incomprehensible subordinate clause disclosing purposes of use that the person affected would probably never agree to is not sufficient.
In order not only to proclaim their sense of digital responsibility but also to demonstrate it through action, companies can, for example