OP-ED | The ethical peril of generative AI

If you haven’t yet tried your hand at verbal jousting with ChatGPT, you’re already one step behind. And unless you’ve spent the last thirty days in a cave with no internet access, you’ve probably seen people sharing their exchanges with ChatGPT all over your LinkedIn newsfeed.

Let’s start with a quick reminder. GPT-3, launched in 2020, is a language model: an artificial intelligence tool designed to produce text. ChatGPT is simply its chat version, conversational and with a simplified user experience. You can now interact with the tool and, depending on what you ask it, it will produce a response of varying length and sophistication.
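
For readers curious to see just how simple that interaction is in practice, here is a minimal sketch of a single exchange, assuming the OpenAI Python SDK; the model name and prompt are illustrative choices, not part of the original article.

    # A minimal sketch of one conversational turn with a language model.
    # Assumes the OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model family behind the original ChatGPT
        messages=[{"role": "user",
                   "content": "Explain, in two sentences, what a language model is."}],
    )
    print(response.choices[0].message.content)

Everything the tool does, from a one-line answer to a long essay, flows through this same request-and-response loop.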

GPT is generative artificial intelligence: algorithms that draw on existing knowledge to generate new content. It is an absolutely fascinating tool that all of us, professionals and private individuals alike, need to learn how to use. But it is so powerful that it raises a number of questions, not least ethical ones.

Those who feared version 3 are in for a rude awakening. Its successor, GPT-4, is rumoured to have as many as 100 trillion parameters for analysing and responding to requests, a figure OpenAI has never confirmed; GPT-3 has “only” 175 billion. And for those who would like to put the brakes on this innovation, it is already too late. Meta has designed its own large language model, OPT-175B. Google presented its LaMDA bot two years ago. France is not to be outdone, with BLOOM, a large language model built by the French-led BigScience research project, while China has Wu Dao, its “path of consciousness”, which is around ten times larger than GPT-3.

AI: From technology to language

Two elements amplify the power of any technology: convergence and invisibilisation. Together they make language models such as GPT, and their successive iterations, staggering.

Convergence combines the strengths of several technologies. Take a digital avatar generated by a GAN (generative adversarial network), combine it with ChatGPT, add behavioural neuroscience, present this avatar in augmented reality (AR) or in a video broadcast on a social network such as Instagram, all imagined and delivered by a malicious lobbying firm. The result is a convincing, photorealistic “expert” offering a distorted reading of reality for the purposes of manipulation.

Invisibilisation, on the other hand, allows the technology itself to be forgotten. It is the result of the miniaturisation and integration of technologies: think of the number of services packed into your iPhone. In the case of our malicious expert, the strategy borrows from cinema, and from George Lucas in particular: the “willing suspension of disbelief”. In narratology, this means accepting what you are shown as real; in epistemology, taking it for truth. Put simply, you tend to forget that the expert in front of you is an artificial intelligence. Spike Jonze’s film Her offers a vivid illustration.

When this type of technology interferes in our lives without our being able to distinguish human from machine production, and when the public, whether citizens, consumers or opinion leaders such as journalists, executives and elected representatives, can be fooled by machines, we have a major problem.

The need for the human hand

I asked the party concerned, ChatGPT itself, what ethical problems it posed when it came to communicating on social networks. Its answer was honest but short: manipulation of public opinion, discrimination, invasion of privacy, false content. It is a consistent list. Criticism of these language models by researchers such as Timnit Gebru and Margaret Mitchell, then co-leads of Google’s AI ethics team, cost them their jobs…

It is important to understand that within ten years at most, it will be impossible to tell what is real from what is artificial in language, photos and video. Only the providers of these services, Big Tech, will be in a position to warn you, in return for payment, that a piece of content came from their generative models. Which gives full force to the phrase “truth has a price”. It is also, in politics, an illustration of democratic centralism. According to NewsGuard, ChatGPT produced false narratives in response to 80% of the prompts it was tested on covering sensitive subjects such as COVID-19 and the war in Ukraine.

Let’s face it: large language models such as GPT are already capable of instantly analysing a piece of legislation or a document thousands of pages long. They can detect objective flaws and then produce thousands of requests (amendments) that obstruct debate by creating noise (spam). In other words, we are witnessing the automation of lobbying, with a disproportionate volume effect. The aim is to saturate the human or organisational capacity to process information, and we know that “information obesity” slows down or even prevents decision-making.

Now imagine this same “spam” approach applied to all of a government’s or a party’s legislation, or to all of a brand’s statements, on Twitter, Instagram and in the press, all at the same time, multiplying the “spam effect”. You suffocate the sender of the messages and blur any possible understanding of their argument. And who is to say we don’t already have a Cambridge Analytica 2.0 on our hands…

GPT-3 was developed within OpenAI, originally a non-profit organisation created by Elon Musk and investors such as Sam Altman, its current CEO; it has since adopted a for-profit structure. There is no escaping the parallels with ARPANET, the university research network that became the Internet, or with the computing ethics of the MIT pioneers. The original ideals were derailed by the excesses of the commercial Internet, from social networks to the abuses of the dark web.

Technology needs to be manipulated, this time in the medieval Latin sense of “leading by the hand”. It should simplify the work of human beings and enable them to move up the value chain. Thinking that it can do your thinking for you, especially when it comes to foresight or progress, is a mistake you should never make. At JIN, we invite all our consultants to use GPT, but we require them to stay in control. We must master new technologies, and we must never forget that they are there to serve people.
