AI-generated image of an AI Tsunami
Over the past months, I have been diving deep into the discussions around the potential social implications of the accelerating dynamics in AI technology.
I am generally skeptical of anything that sounds like a conspiracy theory. But if there were something like a global seismic measurement of potential social earthquakes, I think we would now be at the point of raising the alarm about the risk of a devastating “AI Tsunami” rolling over our societies.
It is difficult to predict how soon it will arrive. It could be in three months, it could be in twelve. But there are many reasons to expect it to hit us before the 2024 elections in the U.S.
AI-powered technologies are already benefiting us in many respects, and they have the potential to benefit us much more. But if we do not prepare well for this Tsunami, the real harms might far outweigh the real benefits.
AI is a wide field; this post focuses on the impact of generative AI models, which have been widely discussed since the breakthrough successes of large language models (LLMs) like ChatGPT.
Why the analogy of a tsunami?
Tsunamis are waves that run through the ocean at a much higher speed than normal waves, while being hard to detect due to their long wavelength. At the same time, the water in a tsunami moves much deeper below the surface than in a normal wave. When a tsunami reaches shallower water, it slows down and can thereby rise to enormous heights.
Until now, we have been able to look at the ways in which AI impacts our societies as somewhat separate, fast-developing phenomena. But I think they are better understood as parallel, self-accelerating “waves” running through the same “ocean” of society.
The moment any of these waves puts our societies under significant stress, our social fabric becomes less elastic, comparable to the water getting shallower.
When this happens, the waves that so far seemed separate could start reinforcing each other, amounting to an “AI Tsunami” that might hit our realities and spread panic and confusion more quickly and profoundly than the Covid-19 pandemic did.
What will this AI Tsunami look like?
The first little waves are already hitting the shores of public awareness: deepfake pictures and videos are going viral, people are complaining about AI-generated spam and forum posts, and journalists are uncovering campaigns of “scam robocalls” asking people for donations.
At the same time, developers are marveling at the ability to write more code in less time with the help of AI. The number of emails we receive from software companies offering their services has gone up from 1-2 per week to 3-5 per day.
While this might still seem like the usual ups and downs in the introduction of a new technology, there are signs that each of the “waves” is growing exponentially, indicating that there will be a “surprise point” where they put more stress on our societies than normal processes can absorb.
The challenge is that each of the waves by itself involves complex social implications. While many lists of the potential risks of AI have been published, it is difficult to think through how their social implications interact.
The closest to a comprehensive perspective that I have seen is Aza Raskin’s and Tristan Harris’s video on “The AI Dilemma”.
If you have not watched it yet, it might be an hour of your time very well invested.
Still, there needs to be more discussion about the combined impact of the different waves. It’s hard to predict the full potential impact of AI, since no one knows what new capabilities will emerge in upcoming AI models. But at least five self-accelerating waves can already be observed:
Wave 1/5: Massive job losses
This is the most obvious and widely discussed wave. Whilst much of the discussion looks at sectors and job types, I think that a large share of this wave will actually come in the shape of thousands of companies going out of business.
The reason is that, with their increasing capabilities, AI tools could give unprecedented advantages to companies that are quicker and better at adopting them. At least in the short run, these companies are likely to see substantial growth in turnover without creating many new jobs.
At the same time, the large remainder of less agile companies might find themselves with little chance to catch up. The challenge is that AI can give quick adopters a type of “super-advantage”, creating efficiency gains across all business processes.
Still, I believe that this might be the “slowest” of the five waves. It could, nonetheless, contribute to a general uncertainty, making people more sensitive and reactive towards the other waves.
Wave 2/5: Extreme inflation of high-quality digital content
This may sound less harmful than it is. Everyone who writes text can now produce far more text in the same time. Everyone who designs graphics can design more graphics, and anyone who produces videos can produce more videos. And since this requires fewer specialized skills, the number of people who can produce quality content rises at the same time.
On top of that, thousands of companies and startups are eager to profit from the AI gold rush, resulting in a parallel inflation of the AI-based tools themselves. This might multiply an already exponential growth of content production.
The hyper-inflation risk comes from the fact that all this can be automated. At the current pace, it seems to be a matter of months until it is normal for anyone to mass-create and mass-spread content without much technical knowledge.
At the moment, this wave is still slowed down by the inaccuracy of current LLMs, which makes them less useful for auto-producing text and code. But at the current pace of AI development, it’s probably a question of months rather than years before we have models able to generate auto-fact-checked, high-quality content that is far better than what the average human could produce.
How could this be harmful and why couldn’t it be stopped?
Now this falls on societies that have been trained by the attention economy for nearly two decades. Hundreds of millions of people are used to swiping, sharing, liking, and commenting on digital content that grabs their attention.
And the new, mass-produced content will be fascinating stuff, copying, combining, and varying the styles of the most-liked human creators. Like any inflation, this might result in a loss of perceived value of all content, making it harder for artists, influencers, and others to have their work stand out.
The platforms that might have the technical capacity to restrict the inflationary spread of content are unlikely to do so. Doing so would mean leaving the inflow of attractive content to their competitors. We have already seen how such a race to the bottom plays out with the inflation of short videos on TikTok, Instagram, Facebook, YouTube, etc.
Neither is it likely that governments will regulate this risk quickly and decisively enough. Of course, there are vibrant discussions and proposals on updated intellectual property rights. But how do you regulate unpredictable, automated combinations and variations of sources? And how do you push through such complex policies soon enough?
While the extreme inflation of quality content may not create major harm by itself, the accompanying feelings of overload, disorientation, and loss of value could further weaken the social fabric for the other waves to hit harder.
Wave 3/5: Extreme inflation of fake content
Much of the discussion about AI-generated content tends to focus on biases and misinformation resulting from the technical design of LLMs and the data fed into them. However, these concerns might soon be dwarfed by the impact of intentionally produced fake content generated with AI-based tools.
As in Wave 2, the “Tsunami point” might arrive when the automated, decentralized creation and spread of advanced deepfakes of photos, videos, music, voice messages, video calls, etc. becomes available at basically no cost. On top of the overload of quality content from Wave 2, this might create a degree of confusion our societies are not well prepared to cope with.
Some experts and journalists who follow the trends may raise early warnings when this high-end spam wave starts accelerating, but the vast majority of people will have responded to, forwarded, and ingested hundreds of messages before they realize that something deeper is going on. By then, confusion and distrust might have reached levels that make it difficult for any balanced response to get through.
For you, it might be the moment when you get a phone call from a good friend telling you that they lost their phone and wallet and urgently need your help. You hear the agitation and urgency in your friend’s voice and recognize the slang and self-ironic jokes you’re so familiar with. And just by chance this very friend is sitting next to you...
In that moment, your friend receives a phone call from a colleague, but you can see in their face that they’re not so sure it’s really their colleague they’re talking to…
The main problem might not be the individual cases of fraud and deception, but rather the general distrust this seeds across our societies.
Can’t this be stopped before it spreads too far?
Three reasons will make it very hard to stop the wave of fake content once it has reached the level where it becomes impossible to ignore.
1. It is increasingly hard to identify deepfakes. There is an emerging industry of fake detectors, but even these companies admit that they are likely to lag behind the developments in deepfake creation: https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html
2. The sources will be too decentralized. The fake content won’t just be produced by a few bot farms run by autocratic governments or ruthless companies. Rather, any person who wants to make some easy money or to push their political views on others will be able to produce and spread high-quality fake content en masse.
3. Even centralized cloud services and internet providers might have a hard time blocking this content, because doing so could risk halting legitimate communication and the servers needed for normal business processes.
This is related to the fourth wave that could make the AI Tsunami actually deadly:
Wave 4/5: Major fallouts in global supply chains
We’ve seen what can happen when one large cargo ship gets stuck in the Suez Canal.
Almost all processes in the global supply chains rely on digital communication in some form. If many of the common channels of communication suddenly become unreliable, it might take a while for everyone around the world to adapt.
Of course, many companies will set up encrypted communication channels if they haven’t done so yet. But there might be enough that aren’t well prepared to create significant chaos for a few days or weeks. And this time window might be enough for terrorist groups, industry mafias, secret services, individual spammers, and anyone else who seeks to spread chaos or take advantage of it.
This risk gets amplified by everyone trying to keep pace with the AI gold rush. It’s hard to see how hastily developed and deployed AI-based software and processes will not create new security loopholes. And this would not even require malicious (AI?) agents intentionally building loopholes into the code.
The economic implications of disturbed supply chains tend to hit the poorest the hardest, but they might also affect the availability of goods in wealthy societies. This might further spread a sense of uncertainty and fuel shortsighted egoism, as we have seen in the Covid-19 pandemic.
Wave 5/5: A global mental health crisis
I believe this is by far the most dangerous of the five waves.
Yuval Noah Harari has offered strong arguments for why it could create fundamental risks to give machines the capacity to generate and automate human language, the “operating system” of our societies.
He is far from the only one voicing serious concerns that the ability of generative AI tools to build up intimate relationships with humans creates unprecedented opportunities for voter manipulation, leading some to warn that the 2024 elections might be the last democratic elections in the U.S.
This risk to democracy is real and urgently needs comprehensive responses.
But there is an even deeper risk
The combined impact of the other four waves might leave societies with levels of mistrust and uncertainty that create fertile ground for AI-powered tools to dig deep into people’s minds.
“AI companions” and other tools that generate individually targeted emotional engagement might have destabilizing effects on our capacities for real human connections, with unpredictable consequences for the human mind.
This might result in a global mental health crisis, which could radically amplify many of the social ills we already have. In this sense, it could become a “super-problem” that at the same time blocks our capacity to address those ills.
We are about to enter a gigantic psychological experiment with very little control, let alone a responsible research design.
A self-accelerating process hitting ill-prepared societies
Thousands of marketing agencies and product designers around the world are working hard to get their heads around how to use the new generative AI tools for their work. Everyone is under pressure to use them more effectively than the rest.
This might result in a decentralized, competitive race for effective (emotional) manipulation, similar to what we have seen in the attention economy. No malicious intentions are needed for the “AI arms race” to trickle down right into our brains.
Our societies are still influenced by the after effects of the Covid-19 pandemic, which has “led to a worldwide increase in mental health problems” and “further widened the mental health treatment gap“ according to the WHO.
This is not a good starting point for such an experiment.
This wave might be the hardest to control
It might be much harder to implement any useful policy when growing parts of our societies are losing their wits.
At an individual level, it might be just as difficult to abstain from potentially intrusive AI-based products. The bots can learn our emotional weaknesses and deliver exactly the stories and interactions that feel good. And we have no long-established social norms to regulate such “emotional porn”.
It took decades to even halfway effectively regulate the worst strategies of the sugar and tobacco industries. We still haven’t done so successfully with social media. How can we expect our societies to deal with a fundamentally new type of “unintended side effects” quickly enough?
So how to prepare for the AI Tsunami?
In short, the AI Tsunami could hit in the form of widespread uncertainty, distrust, and mental health deterioration. Multiple seemingly independent “waves” might contribute to a situation where these effects overwhelm our societies’ coping mechanisms.
In such a situation, the most important goods will be real human trust and real human connections. I believe that our efforts should focus on how we can use technologies, as well as the social and organizational tools we already have at hand, to protect and nourish these crucial goods.
The biggest mistake would be to lean back and hope that the potential benefits of AI will “solve” these problems for us. For these benefits to pay off, AI would need to be integrated in balanced, constructive ways, and the chaos created along the way might simply rule out this possibility.
I will dedicate another article to “Preparing for the AI Tsunami”. For the moment, my best response is what we are trying to do at CosyAI: building a coalition of like-minded organizations to create a sincerely cooperative approach to AI.
Cooperatives have a strong track record in fostering human trust and protecting human communities from unhealthy pressures.
Join us, if you want to co-shape these efforts.