How AI technologies are changing our world
With the help of artificial intelligence, many work processes can already be optimized and automated today. But what exactly is AI? What potential does it offer? And what are the risks of using it?
ChatGPT, Midjourney, Google Bard: AI bots have been hugely popular for some time now. But the technology is not new. On the contrary, the world's first AI program was created back in 1956. The program, called the “Logic Theorist,” was not only able to prove 38 theorems from “Principia Mathematica” by Bertrand Russell and Alfred North Whitehead, but even found completely new and shorter proofs for some of them. An absolute novelty at the time.
A lot has happened since then: in 1966, the first chatbot saw the light of day; in 1972, AI entered the medical sector; and in 1986, the first AI-based computer voice was created.
Artificial intelligence has therefore been around for quite a while. Since the technology is constantly evolving, scientists divide the various AIs into different types. So-called weak AI is used to find solutions to specific problems. It is programmed for specific requirements and cannot operate outside these limits. Examples of applications of weak AI include navigation systems and voice recognition.
A strong AI, on the other hand, is able to take on complex tasks and acquire knowledge on its own, at least in theory, because no one has yet succeeded in developing such an AI. Popular examples of strong AIs from pop culture include HAL 9000 (“2001: A Space Odyssey”) and the “Terminator.”
In addition to this distinction between weak and strong AI, AIs can be roughly divided into four categories:
1. Reactive AI
This category describes the most basic form of artificial intelligence. These AIs are designed for a specific task and respond exclusively to external input.
IBM's chess computer “DeepBlue” is an example of a reactive AI. The computer reacts to the opponent's moves on the chessboard and responds by repositioning one of its own pieces. In 1997, this AI succeeded in beating the reigning world chess champion Garry Kasparov. In 2016, DeepMind's AI “AlphaGo” also defeated a world champion: winning four out of five games, it beat Lee Sedol at “Go,” a board game of Chinese origin that is considered the most complex in the world.
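Stripped to its essence, a reactive AI maps the current state of its environment to an action without remembering anything between turns. The following Python sketch is a deliberately simplified illustration of that principle using tic-tac-toe rather than chess; it is not DeepBlue's actual algorithm, and all names in it are made up for this example.

```python
# A tiny "reactive" agent: it receives the current board, evaluates it,
# and returns a move. It keeps no memory between calls.
from typing import List, Optional, Tuple

Board = List[List[Optional[str]]]  # 3x3 grid of "X", "O" or None

def _wins(b: Board, mark: str) -> bool:
    """Check whether `mark` occupies a full row, column or diagonal."""
    lines = [[(r, c) for c in range(3)] for r in range(3)]                 # rows
    lines += [[(r, c) for r in range(3)] for c in range(3)]                # columns
    lines += [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]  # diagonals
    return any(all(b[r][c] == mark for r, c in line) for line in lines)

def reactive_move(board: Board, player: str) -> Tuple[int, int]:
    """Pick a move based solely on the board passed in right now."""
    opponent = "O" if player == "X" else "X"

    def winning_move(b: Board, mark: str) -> Optional[Tuple[int, int]]:
        # Try every empty cell and check whether placing `mark` there wins.
        for r in range(3):
            for c in range(3):
                if b[r][c] is None:
                    b[r][c] = mark
                    won = _wins(b, mark)
                    b[r][c] = None
                    if won:
                        return (r, c)
        return None

    # 1) win if possible, 2) block the opponent, 3) otherwise take any free cell.
    move = winning_move(board, player) or winning_move(board, opponent)
    if move:
        return move
    return next((r, c) for r in range(3) for c in range(3) if board[r][c] is None)

# The agent only ever sees the board of the current turn.
board = [["X", "X", None], ["O", None, None], [None, "O", None]]
print(reactive_move(board, "X"))  # -> (0, 2): completes the top row and wins
```

Because the function receives the complete game state as input and stores nothing, calling it twice with the same board always yields the same move, which is exactly what "reactive" means here.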
2. AI with limited memory
AIs with limited memory are able to react to current situations to a limited extent, using data that was collected or programmed in the past.
Self-driving cars are one example of this type of AI. This form of artificial intelligence has a limited memory in which particular objects and situations have already been modeled. The AI knows what cars and people look like and how they behave, and it is also familiar with the road traffic regulations. This data is compared with the current traffic situation, and the comparison leads to the AI's decision about what to do next.
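Put very simply, such a system matches what its sensors register right now against object models and rules stored beforehand and derives an action from that. The Python sketch below is a hypothetical, heavily simplified illustration of this compare-then-decide loop; it is not the logic of any real driving system, and all object names and thresholds are invented for the example.

```python
# Simplified sketch: a "limited memory" agent holds a fixed set of learned
# object models and rules, matches the current observation against them,
# and derives an action. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Observation:
    object_type: str    # what the perception stack believes it sees
    distance_m: float   # distance to the object in metres
    traffic_light: str  # "red", "yellow", "green" or "none"

# The "limited memory": previously modelled knowledge about object behaviour.
KNOWN_OBJECTS = {
    "pedestrian": {"min_safe_distance_m": 15.0},
    "car":        {"min_safe_distance_m": 8.0},
    "cyclist":    {"min_safe_distance_m": 12.0},
}

def decide(obs: Observation) -> str:
    """Compare the current observation with stored models and traffic rules."""
    # Rule knowledge: a red light always means stop.
    if obs.traffic_light == "red":
        return "stop"

    model = KNOWN_OBJECTS.get(obs.object_type)
    if model is None:
        # Unknown object: the stored models offer no guidance, so be cautious.
        return "slow_down"

    if obs.distance_m < model["min_safe_distance_m"]:
        return "brake"
    return "continue"

print(decide(Observation("pedestrian", 10.0, "green")))  # -> brake
print(decide(Observation("car", 20.0, "none")))          # -> continue
```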
At this stage, reactive AI and AI with limited memory are the only types that actually exist. For the next two types, however, there are already concrete ideas about how such AIs would behave.
3. Artificially intelligent machines
Type 3 AIs can perceive and understand people's emotions and adapt their own behavior accordingly. Other human aspects, such as memory and perception of the world, would also be part of these AIs.
The difficulty in developing such an AI lies in the countless variables that the human brain involves. All its different facets, such as emotions, values and norms, are highly complex.
4. Self-perception
This is the highest form of AI. Such an AI is conscious and perceives all facets of the world. Its level of intelligence is equal to or higher than that of humans, and it is able to perform any task that humans can perform equally well or better.
So much for the various definitions. But what role do AIs play in today's society, and what impact do they have on us, our lives and our everyday routines?
Where artificial intelligence is already part of everyday life
There are various areas in which artificial intelligence already supports us today. One of the latest and most eye-catching applications is, of course, ChatGPT from OpenAI. Using generative AI, the chatbot communicates with the user via text messages. Modern machine learning technology enables the AI to formulate appropriate answers on a given topic of conversation. The range of possible uses is anything but small. For example, ChatGPT can help create content by “researching” relevant facts and presenting them clearly. The AI is also useful for formulating and revising texts. With the help of text inputs, so-called prompts, written drafts can be rephrased in a more emotional, funny or neutral way, for example.
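For developers, the same kind of text transformation is also available programmatically. The snippet below is a minimal sketch, assuming the official openai Python package (version 1.x) and an API key in the OPENAI_API_KEY environment variable; the model name and the example prompt are purely illustrative.

```python
# Minimal sketch of sending a rephrasing prompt to a generative chat model.
# Assumes: `pip install openai` (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

draft = "Our support team answers requests within 24 hours."
prompt = f"Rewrite the following sentence in a friendlier, more emotional tone:\n{draft}"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```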
But the technology is not completely flawless. ChatGPT may, for example, present false claims as facts. Independent verification of the alleged facts by the user is therefore necessary so as not to end up spreading false information. You should also not adopt the chatbot's texts one-to-one: ChatGPT does not simply “invent” its own sentences but draws on the data that was made available to it for training. Depending on how closely the output follows that data, the resulting sentences may be subject to copyright. It is therefore important to rephrase the texts in your own words.
Another exciting development is Google Duplex. Presented in 2018 by Alphabet CEO Sundar Pichai, this AI is an extension of Google Assistant and makes it possible to automatically make calls and book appointments. Interestingly, the AI also uses filler words such as “um” or “mm-hmm” to sound more human.
Banks in the USA are now increasingly using algorithms to decide which customers get a loan and which do not. When making decisions, the AI refers to the data sets on which it was previously trained. This highlights a major problem: the data, which is usually extracted from the Internet, reflects social distortions and is not free of prejudice. Common prejudices, such as racism and sexism, are sometimes adopted by the AI. In the case of lending in the USA, this has led to loan decisions that depended on the skin color of the customer concerned, while other factors, such as income and debt, were not taken into account or were given far less weight.
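The underlying mechanism can be reproduced in a few lines of code: a model trained on biased historical decisions simply learns the bias. The following Python sketch uses entirely synthetic data and scikit-learn purely as an illustration; it is not based on any real lending data set, and all variable names and thresholds are invented.

```python
# Illustration only: a classifier trained on biased historical decisions
# reproduces that bias. The data is synthetic; no real lending data is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(50, 15, n)     # applicant income in thousands (synthetic)
group = rng.integers(0, 2, n)      # protected attribute, 0 or 1 (synthetic)

# Biased "historical" decisions: approval depends on income, but applicants
# from group 1 were systematically disadvantaged in the past.
approved = (income + rng.normal(0, 5, n) - group * 20 > 45).astype(int)

# Train on income AND the protected attribute (a common mistake).
X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two identical applicants who differ only in the protected attribute:
applicant_a = np.array([[50, 0]])
applicant_b = np.array([[50, 1]])
print("approval probability, group 0:", model.predict_proba(applicant_a)[0, 1])
print("approval probability, group 1:", model.predict_proba(applicant_b)[0, 1])
# The model assigns a much lower approval probability to group 1, even though
# income is identical: the historical bias has been adopted by the AI.
```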
AI has also been increasingly used for so-called deep fakes in recent years. Deep fakes are media content, such as images, video or audio recordings, that have been digitally modified with the help of AI. They are often used for disinformation or manipulation. The American actor and comedian Jordan Peele demonstrates in a video how realistic deep fakes can already look today.
EU launches AI regulation
In order to counteract the risks of unequal treatment and manipulation, to provide guidelines for further research into these technologies and thus to limit the risks for society, the EU has taken various measures.
So that deep fakes can be recognized as such, the EU Commission requires companies to label artificially created content. Since Microsoft and Google rely on generative AI with their search offerings “Bingchat” and “Bard,” these services are also covered.
In addition, the EU has launched an AI regulation intended to govern artificial intelligence. Under it, all AI technologies are to respect the fundamental rights of citizens and must neither manipulate nor discriminate against them, nor otherwise restrict or disregard their fundamental rights.
Furthermore, greater transparency of developments and greater accountability are to be ensured. In the case of generative AIs, it must be disclosed that content was generated by AI, and the AI must be designed in such a way that it does not produce illegal content. Summaries of the copyrighted material used for AI training purposes must also be published.
The regulation is also intended to create a uniform framework that makes it easier for companies to develop and use such technologies. According to the regulation, AI systems are to be classified according to their risk potential: minimal risk, limited risk, high risk and prohibited.
Technologies such as “social scoring” systems, for example, are to be prohibited, since the evaluation of human behavior and comprehensive surveillance using real-time biometric data lead to massive human rights violations. AIs with a lower risk potential, such as ChatGPT, will remain permitted.
At the beginning of December 2023, a political agreement was reached on the law. Once the European Parliament and the Council have formally adopted the act, the regulation will be published in the Official Journal and will become binding 20 days later.
Between euphoria and regulation
Artificial intelligence is not entirely new in human history. However, the rapid pace at which new applications are being developed and the constantly evolving technologies have now reached a new level. Medicine, industry, science and many other areas are already benefiting from the new applications. AIs have also long since found their way into the private sphere: be it Amazon Alexa, Google Bard or Midjourney, AIs are hugely popular.
But the technology also poses risks. Manipulation and deception through artificial intelligence pose a real threat to all citizens. Although labeling AI-generated content, as required by the EU, is a step in the right direction, it will not be enough to prevent disinformation entirely. It is therefore more important than ever to consciously question information from the Internet and to put it into context.
Guidelines and laws that establish certain ground rules, such as ethics, equal treatment and transparency, lay a foundation for regulating the further development and use of AI technologies and minimize some of the risks. However, further legislation will be needed in the future to keep pace with the rapid progress of AI.
Dr. Moritz Liebeknecht
IP Dynamics GmbH
Billstraße 103
D-20539 Hamburg