Digital Ethics

As digital technologies advance, so does the need for regulation to protect users' rights and limit negative social impacts. This is the focus of digital ethics.

What exactly does the data processing policy that I agreed to without ever reading actually say? It is somewhat unsettling that an advertisement on this newspaper's website links to the very denim jacket I marked with a heart at my trusted online clothing retailer. And how comfortable am I with the idea that doctors, engineers and lawyers might have cheated their way through university with the help of AI software?

Questions like these and many others are addressed by the broad field of digital ethics.

It deals with ethics in the digital domain, i.e. digital technologies in the broadest sense and their impact on individuals, society and the environment. The question is not only what is legally permissible, but what is the 'morally right' thing to do. Because - as we all know from everyday life - not everything that can be done should be done. The goal, then, is not to map out what is legally possible, but to determine what should be possible at all from an ethical perspective.

Digital ethics, however, is a broad field that is difficult to grasp, and not just because of its wide range of topics. On top of questions that are rarely clear-cut to begin with, the forward-looking nature of the subject adds a further level of complexity. It is not enough to consider what is acceptable today and what is not; we must also anticipate which software and use cases could have negative implications in the future and will therefore require regulation - an undertaking that clearly cannot always succeed.


Why do we need digital ethics?


By now, everybody is familiar with the advantages of digital technologies. But it is no secret that digitization also brings a number of characteristic disadvantages, or at least challenges.

One of these (which may have serious consequences for society as a whole) is the increasing concentration of power among a few players, namely the Big 4 of the Internet: Google, Apple, Facebook and Amazon. They are now involved in almost all of our activities on the Internet. Even those who do not use the Internet very intensively - who do not, for example, have profiles on social media - still leave their mark, be it through online shopping, hotel bookings, ticket purchases, streaming services or simple Google searches.

Yet even the 'simple Google search' is ultimately wishful thinking, because in practice it does not exist. Every search not only returns a more or less satisfactory result; it also generates usage data that is collected, stored and exploited. Large companies such as the aforementioned Big 4 amass vast amounts of information about us, our usage behavior and our needs - and every click adds another piece to a puzzle that, assembled into a whole, helps them capture our attention more effectively and influence our behavior more precisely.

This ultimately gives those who develop these applications power over those who merely use them.


The analysis of this data is driven by algorithms. Besides personalizing advertising, they are designed to provide an individually tailored user experience. In this way, they filter our flow of information on the Internet and play a major role in determining how we see the world. Website operators naturally pursue economic interests, and these are best served when users interact with their services for as long as possible. People are less inclined to do so if, every time they open an app (e.g. Instagram), they are confronted with viewpoints that contradict their own. If it were just about entertainment, that might be patronizing, but not too much of a problem. It becomes problematic, however, when more and more people receive the majority of their news via social media, because this leads to the well-known filter bubbles in which we are only shown content that agrees with our opinion. Consequently, we become increasingly isolated from other world views.
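To make this feedback loop concrete, here is a deliberately minimal sketch in Python, with made-up data and a made-up similarity rule (no real recommender works this simply): items closest to past clicks are ranked first, each click pulls the inferred profile further in the same direction, and the viewpoint diversity of the feed collapses compared with a random selection.

```python
import random

# Toy model of an engagement-optimizing feed (hypothetical, illustration only).
# Each item has a 'viewpoint' in [-1, 1]; the user profile is a running
# average of the viewpoints of items the user engaged with.

random.seed(0)
items = [random.uniform(-1, 1) for _ in range(1000)]  # item viewpoints

def recommend(profile, items, k=10):
    # Rank items by similarity to the inferred profile: content close to
    # what the user engaged with before is shown first.
    return sorted(items, key=lambda v: abs(v - profile))[:k]

profile = 0.1  # near-neutral starting point
for _ in range(50):
    feed = recommend(profile, items)
    clicked = feed[0]                        # user clicks the most agreeable item
    profile = 0.9 * profile + 0.1 * clicked  # profile drifts toward past clicks

baseline = random.sample(items, 10)          # what a non-personalized feed shows
feed = recommend(profile, items)
print(f"viewpoint spread, random feed:    {max(baseline) - min(baseline):.2f}")
print(f"viewpoint spread, optimized feed: {max(feed) - min(feed):.2f}")
```

The point is not the specific numbers but the structure: an objective of maximizing engagement, left unchecked, systematically narrows the range of viewpoints a user encounters.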


The rapid development of AI raises new concerns

Digital ethics issues also keep arising in connection with ChatGPT, which OpenAI launched in November 2022 - and with AI applications more broadly. In addition to privacy concerns (which led to the software being temporarily blocked in Italy), there is an ongoing discussion about how to treat content generated with the help of AI. ChatGPT can do a lot of work for its users, including writing job applications and even academic papers. This carries the risk that students who write their own work will be disadvantaged compared to fellow students who use the AI software, and possibly even that people who do not meet professional requirements will be helped to graduate. For this reason, standardized rules on whether and to what extent the use of ChatGPT is permitted, and how it must be labeled, should be created as quickly as possible, and reliable methods for detecting AI-generated work should be developed.


What to do


It is increasingly imperative for digital service providers to cultivate trust in their products and in their handling of sensitive data. Acceptance of new technologies and digital business models can only be achieved if people feel their concerns are taken seriously and are confident that the companies they do business with know how to tackle the potential threats.

Companies must be able to answer who has access to what data and for what purposes it is used. Convincing solutions must be developed and communicated transparently. A purpose-of-use clause hidden in incomprehensible fine print, to which the person affected would probably never knowingly agree, is not sufficient.
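One way to make 'who has access to what data, and why' answerable is to keep that mapping explicit and machine-checkable. The following Python sketch is a hypothetical illustration (the categories, roles and purposes are invented), not a reference to any real compliance tool:

```python
# Hypothetical machine-readable data-access register: for every data
# category, the company records who may access it and for which purposes.
ACCESS_REGISTER = {
    "customer_email": {
        "roles": {"support", "billing"},
        "purposes": {"order processing", "invoicing"},
    },
    "usage_analytics": {
        "roles": {"product_team"},
        "purposes": {"feature improvement"},
    },
}

def access_allowed(data_category: str, role: str, purpose: str) -> bool:
    # Deny by default: access requires an explicitly registered role AND purpose.
    entry = ACCESS_REGISTER.get(data_category)
    return bool(entry) and role in entry["roles"] and purpose in entry["purposes"]

print(access_allowed("customer_email", "billing", "invoicing"))      # True
print(access_allowed("customer_email", "marketing", "ad targeting")) # False
```

The deny-by-default design choice matters here: any access that was never explicitly justified with a purpose simply does not happen.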

In order not only to declare their digital sense of responsibility but also to demonstrate it through action, companies can, for example:

  • establish guidelines for data privacy and the handling of personal data,
  • create transparency on digital issues,
  • adopt binding guidelines for internal and external online communication,
  • design processes so that decisions based on algorithms can be corrected by humans (see the sketch after this list),
  • appoint an internal ethics officer or commission an external one, and
  • sensitize employees to digital ethics issues and train them accordingly (especially colleagues who handle employee data and sensitive customer data).
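
For the human-correction point, here is a minimal sketch (hypothetical names and thresholds throughout) of what such a process can look like: the algorithm's output is treated as a recommendation, and low-confidence or contested cases are escalated to a person who has the last word.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    approved: bool
    confidence: float              # model confidence in [0, 1]
    reviewed_by: Optional[str] = None

def model_decision(application: dict) -> Decision:
    # Stand-in for a real scoring model (assumption: score in [0, 1]).
    score = min(1.0, application["income"] / 50_000)
    return Decision(approved=score > 0.5, confidence=abs(score - 0.5) * 2)

def human_review(application: dict) -> bool:
    # Placeholder: in practice this opens a case for a trained reviewer.
    return True

def decide(application: dict, reviewer: str, objection: bool = False) -> Decision:
    decision = model_decision(application)
    # Escalate whenever the model is unsure or the affected person objects.
    if decision.confidence < 0.3 or objection:
        decision.approved = human_review(application)  # human has the last word
        decision.reviewed_by = reviewer
    return decision

print(decide({"income": 28_000}, reviewer="j.doe", objection=True))
```

The key property is that the override path is part of the normal process rather than an afterthought: every automated decision carries enough context for a human to revisit it.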

Most individuals and companies do not set out to engage in unethical behavior. Rather, in the contest between competing interests, ethical concerns can quickly fall behind, so that over time processes get underway whose means and implications have not been sufficiently considered.


As important as the ethical actions of companies are on the one hand, and the individual behavior of users on the other, they do little to solve the structural problems of our digital world. Solutions here would have to be developed at the macro level - not just by economically motivated actors, but by a range of interest groups, especially those who currently have little say in shaping the digital space.


Digital ethics starts with digital thinkers


This right to have a say (apart from the legislature, which by its very nature will always lag a little behind the latest developments) lies primarily with the 'digital thinkers.' This is where technological innovation begins, but this is also where moral responsibility begins.

In the education of 'digital thinkers,' however, this moral responsibility is rarely addressed. They are confronted with a purpose-driven, short-term perspective on the digital sphere that barely considers the social, economic and cultural implications of their own innovations.

The goal is to bring a great idea to life and give the world something special, original or even revolutionary (and to get rich). This focus on bringing something great and impressive to fruition clouds the view of the problems that could arise: negative consequences and potential misuse that could harm others are either not recognized at all or glossed over.

In the end, it will often be up to the legislature to regulate areas and types of use and to ensure that users' rights are protected. Unfortunately, legislation will always lag behind the latest developments, leaving a window in which dysfunctional usage practices can take hold. After all, a problem must first be recognized before it can be solved - and by the time it is recognized, it has often existed for quite a while.


Conclusion


As digitization advances, companies want not only to manage the transformation but, ideally, to emerge stronger from it. At the same time, users are increasingly skeptical of, or frustrated by, the (lack of) ethical practices of some digital players.

At the moment, it is essentially economically motivated interest groups that are shaping the digital realm. However, even well-intentioned ideas are not always in the real interest of users, nor do they always lead to desirable developments at the societal level.

After all, digital technologies do not exist in isolation in some sphere of their own; they are now an integral part of the reality of our lives and play an increasingly significant role in shaping it. Technology and its application cannot be separated. In this respect, it is only natural that innovators have a duty not only to ensure that their projects function and fulfill their intended purpose, but also to ensure that they do not harm individuals or society - or at least to help develop measures that contain negative consequences and harmful use cases.

Building on this, the next article will take a closer look at AI applications such as ChatGPT.

