Use of AI: What IT companies should bear in mind
As an IT company, we are constantly confronted with new developments and trends in artificial intelligence (AI) – especially when it comes to AI on mobile devices. But the topic goes far beyond the use of useful apps or functions on smartphones and tablets – AI has now reached almost all areas of daily life. Reason enough to take a critical look at AI applications in the following article. We look at what the term ‘AI’ actually means and what impact it has on everyday working life and the environment. We then attempt to provide an objective assessment of the topic to enable you to weigh up your own use of AI.
Artificial intelligence: What exactly is it?
If you conducted a street survey, 80-90% of respondents would probably say they know what AI means. But it is not that simple: the term ‘intelligence’ is abstract in itself, and there is no universal definition. Commonly, however, ‘intelligence’ is understood as the ability to think, learn and solve problems. The degree of intelligence is often measured by how well a person can reason deductively, plan, solve problems, understand and execute complex ideas, and learn from experience. When you consider these factors and then use an application such as ChatGPT, you can quickly get the impression that you are dealing with a kind of intelligence.
However, AI does not act on its own — it does not generate new ideas or concepts, but merely imitates human behaviour and thinking. The basis for this is a huge amount of data that AI must be trained with in order to deliver results.
In the case of medical AI, for example, this data comes from patient databases in which, for example, the probabilities of developing certain types of cancer can be evaluated for specific risk factors and population groups. Other AI, on the other hand, is based on texts and other content from which it feeds its answers. This is not always done in accordance with data protection and copyright laws.
So when we talk about artificial intelligence today, we usually mean an LLM (large language model). However, this less appealing name is rarely used; on the contrary, the pioneers of the AI wave, such as Meta and OpenAI, have promoted the term artificial intelligence from the outset, and it has quickly become a hyped marketing term. Many companies, especially start-ups, are still jumping on this bandwagon, yet what exactly lies behind their products often remains obscure, as does the added value. It therefore always makes sense to ask: what is someone trying to sell me under the guise of “artificial intelligence”?
Typical areas of application for artificial intelligence
For the sake of simplicity, however, we will continue to use the term AI in this article to shed more light on the topic. So what are the typical areas of application for AI?
As briefly mentioned above, AI is primarily used where large amounts of data need to be processed in order to extract important information. Without AI, the evaluation of medical records, for example, would in some cases take several years. AI is also used in industry, for example in optical inspections, process analyses, predictive maintenance and energy management. Another area of application is administration, where AI systems such as chatbots can form an initial customer interface. Email programmes that automatically group messages by topic, and tools that facilitate personnel planning, are further popular applications.
AI has also been used in nursing care for some time now, for example in the form of assistance robots and speech recognition systems. We are all familiar with AI in our everyday lives. The best examples are e-commerce sites that show us personalised product recommendations or provide virtual shopping advisors. AI is also found elsewhere: in autonomous driving, digital voice assistants, smartphone facial recognition and more. AI has become an integral part of our everyday lives. However, none of these applications are based on true intelligence, but rather on an automated learning process that only takes place because users provide their data.
AI in the workplace: What are possible implications?
Since the beginning of the AI hype, there have been fears that entire professions, above all those in the creative industries, will disappear or at least undergo complete transformation. However, it is not quite that simple. Of course, employees must prepare for widespread change, and those who cannot (or do not want to) do so will sooner or later find themselves in other positions. Nevertheless, a sink-or-swim mentality is short-sighted: yes, AI can ‘take over’ many tasks, but the various AI systems have repeatedly been shown to be prone to errors.
The best example of this is news coverage of AI-generated reports in which the majority of the data and forecasts provided were simply wrong. The same applies to creative professions, which are often declared dead because anyone can now use AI to generate illustrations, designs and even entire websites. Leaving aside the question of whether it is ethically acceptable to ‘harvest’ the creative work of others without compensating them, the practical problem lies elsewhere: AI currently cannot iterate. If I give an AI a creative task and afterwards only need a few details changed, the result is unconvincing, because the concept of creative iteration is not ‘understood’: the AI generates each piece of content from scratch every time. It therefore makes sense, if anything, to have AI generate only an initial creative draft and then continue on your own, which in turn means that the person writing the prompt still has to be an expert in their field. AI alone is not enough.
We are also observing several critical developments in branding. On the one hand, many companies jump on new AI trends, with the result that graphics on websites, social media posts and marketing materials all look very similar. Apart from the fact that this reflects a lack of creativity, companies in today’s attention economy should consider whether they can afford not to stand out from hundreds of similar-looking competitors. On the other hand, consumers are well aware that the main goal of companies using AI-generated content is to save money. Design is thereby degraded to a product to be produced as quickly and cheaply as possible, but this does not work: design thrives on psychology and empathy (e.g. UX design), and AI cannot evaluate the human component. Without humans, quality suffers, which in turn affects the brand’s reputation, and low quality is noticeable: according to a study by Yahoo, 62% of consumers reject products when AI is used in marketing. And because AI is omnipresent, the mere mention of it triggers ‘AI fatigue’ in many users: a weariness with the constant presence of the technology.
Fundamentally, one argument in favour of using AI is that it can speed up standardised processes, and many companies were already using automated, data-driven systems before the AI boom, as described above. The use of AI can save time and free up resources for other tasks, but it should not replace genuine expertise.
AI and data protection: Are you going full risk?
Due to the ongoing AI boom, many companies understandably feel compelled to use AI applications as well. It is, in any case, almost impossible to escape AI, as it is often already an integral part of new laptops and smartphones. Nevertheless, it is important to bear in mind that AI can only deliver suitable results if it is fed with the company’s own data, i.e. data from the company and its employees.
But where do you draw the line? How do you find the best practice for your company in terms of data protection? Which AI should employees use (be allowed to use)? And: What remains prohibited?
Companies should ask themselves all these questions in order to avoid coming into conflict with the GDPR.
Do AIs violate the GDPR?
There are basically two problem areas that need to be considered here:
On the one hand, the processing of customer or employee data by the company using AI, e.g. for personalised offers, and on the other hand, the use of AI in the company to work with internal company data. Although the term ‘artificial intelligence’ does not explicitly appear in the GDPR, it is formulated in such a way that it is ‘technology-neutral’. This means that newer technologies are also included. If companies want to train their own generative AI, i.e. AI that generates its own original content from existing data, for example a chatbot, they must bear in mind that the processing of personal data always falls under the GDPR.
This means that even existing data cannot simply be used to train AI, as it was originally collected for a different purpose. Users would therefore have to give their explicit consent again – for example, by agreeing to amended terms and conditions. However, when processing purchased data or data obtained through scraping, this consent cannot usually be obtained because the owners of the data are generally unknown to the company.
In order to train internal AI in a legally compliant manner, the solution lies in using anonymised or aggregated data that no longer allows conclusions to be drawn about individual persons. In addition, the company’s data protection information and processing information, among other things, must be adapted.
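In practice, this can be approached as a pre-processing step that strips or hashes direct identifiers and aggregates records to group level before they ever reach a training pipeline. A minimal sketch (the field names and the salted-hash approach are illustrative assumptions, not legal advice; note that pseudonymised data still counts as personal data under the GDPR, whereas properly aggregated data may not):

```python
import hashlib
from collections import defaultdict

# Illustrative record layout; real schemas will differ.
customers = [
    {"name": "Alice Example", "email": "alice@example.com", "region": "North", "order_value": 120.0},
    {"name": "Bob Example",   "email": "bob@example.com",   "region": "North", "order_value": 80.0},
    {"name": "Carol Example", "email": "carol@example.com", "region": "South", "order_value": 200.0},
]

def pseudonymise(record, salt="rotate-me-regularly"):
    """Drop direct identifiers and replace them with a salted hash.
    This is pseudonymisation, not anonymisation: whoever holds the
    salt could still link records back to individuals."""
    cleaned = {k: v for k, v in record.items() if k not in ("name", "email")}
    cleaned["subject_id"] = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:12]
    return cleaned

def aggregate_by_region(records):
    """Aggregate to group level so that no row maps back to a person."""
    totals = defaultdict(lambda: {"count": 0, "sum": 0.0})
    for r in records:
        t = totals[r["region"]]
        t["count"] += 1
        t["sum"] += r["order_value"]
    return {region: {"count": t["count"], "avg_order_value": t["sum"] / t["count"]}
            for region, t in totals.items()}

pseudonymised = [pseudonymise(r) for r in customers]
aggregates = aggregate_by_region(customers)
```

Only the aggregated output (counts and averages per region) would be a candidate for training data that no longer allows conclusions about individuals; the pseudonymised records remain within the scope of the GDPR.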
Furthermore, the use of AI in companies harbours other pitfalls that can prove costly in serious cases: lack of transparency with regard to data processing, violation of the right to erasure, and automated decision-making without the user’s explicit consent. Data security is also not always guaranteed, as AI systems can be vulnerable to cyber attacks. In terms of liability, AI cannot, of course, be prosecuted if automated decisions result in data protection violations. Accordingly, companies must define responsibilities.
There are also major data protection concerns regarding the use of external AI tools in companies, for example to write texts, edit photos and graphics, or create business forecasts. Many of these applications run on US-based cloud servers, where compliance with all GDPR requirements cannot always be guaranteed. For greater data protection, AI can instead be hosted locally, so that data never leaves the company’s own infrastructure. With the EU AI Act of 2024, the EU is taking an important step towards regulating AI applications; one of its goals is to enable small and medium-sized enterprises to identify risks quickly and reliably when using AI and to be protected, at least to a certain extent.
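As an illustration of local hosting: many self-hosted models expose an OpenAI-compatible HTTP interface (for example an Ollama or llama.cpp server). The endpoint, port and model name below are assumptions to be adapted; the point is that the URL resolves inside your own network, so prompts containing internal data never reach a third-party cloud:

```python
import json
import urllib.request

# Assumed address of a self-hosted, OpenAI-compatible server
# (e.g. an Ollama instance); adjust host, port and model name.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt, model="llama3"):
    """Build the HTTP request for a locally hosted chat model.
    Because the URL points into the company network, the prompt
    (and any internal data it contains) stays on-premise."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request requires a running local server, e.g.:
# with urllib.request.urlopen(build_chat_request("Summarise this internal memo: ...")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Whether this is sufficient still depends on who can access the server and its logs; local hosting removes the cloud-transfer problem, not the need for internal access controls.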
AI applications: Not all that glitters is gold
When using AI applications in companies, it is not only data protection that must be taken into account. Most AIs have been trained with a seemingly endless amount of data — the collection of which is not always traceable. This means that AIs may violate copyright law under certain circumstances, for example, if existing brands and slogans are suggested for a new company or content is reproduced almost identically.
However, it is also the responsibility of users to ensure that AI does not create a carbon copy of successful branding measures. Furthermore, as little internal and sensitive information as possible should be passed on to AI, as this can in turn be used for training purposes. When collecting internal statistics and creating reports or other texts, the facts provided by the AI should always be checked. AI can also ‘hallucinate’ data or present connections where none exist, thereby distorting the information.
In the competitive environment between companies, it is also important to ensure that AI is not misused, for example to spread false or misleading information about competitors. Companies therefore also need internal guidelines on the proper use of AI applications. Here, too, it is important for companies to consider which applications they really need and to check that they comply with company guidelines. In addition, employees must be trained in the use of AI applications. In areas with particularly high data protection requirements, we recommend that AI should not be used at all if possible and should also be deactivated on mobile devices. We have summarised how you can implement this in this article.
Conclusion
Whether the AI hype is just a bubble or here to stay is currently impossible to predict. Nevertheless, employees and companies need to familiarise themselves with the innovations that the use of AI brings so as not to fall behind. From a data protection perspective, however, not every task that arises in a company should be placed in the hands of AI. Ultimately, every AI output still requires a person who can contextualise and verify the results with expertise. It is also generally advisable to formulate internal guidelines for dealing with AI, specifying which applications may be used, to what extent, and which may not. Companies should also be aware that the use of AI will present them with new legal challenges. Finally, the use of AI is a double-edged sword, especially for brand building: although it saves money and time, users and potential customers often perceive it as inauthentic and untrustworthy.