The observation that underlies this post has been voiced by renowned AI experts like Turing Award winner Yoshua Bengio:
"Corporations may be viewed as special kinds of artificial intelligences [...]."
Similar perspectives have been published, for example, in The Economist, where the social scientist David Runciman is quoted as saying:
"Corporations are another form of artificial thinking-machine in that they are designed to be capable of taking decisions for themselves".
In this post, I start out from this observation to explore how corporations function as organizational AIs. Based on this, I argue that the logical next step is to view corporations as a form of meta AI, given that they largely control the current development of technical AI.
Just to clarify: The title doesn't specifically refer to Meta Inc., the corporation owning Facebook, Instagram, and WhatsApp. The first version of this text was written seven years ago and I decided to keep the title because a part of me resists accepting that Facebook could capture the beautiful Greek word "meta" in its effort to cover up its well-earned image as an irresponsible company.
Before diving deeper, I'd like to start with a recent anecdote:
This photo is from the Big Data Summit in Berlin in mid-September, a corporate event on AI and quantum computing. The event couldn't have stood in starker contrast to the Regens Unite Conference, which I had attended the weekend before to discuss how we can use technology to regenerate our societies and ecosystems.
Soon after I arrived at the Big Data Summit, something felt completely off. It took me a while to realize what it was: a complete absence of human presence.
Everyone seemed stuck in defense mode and its little siblings, show mode and sell mode. Whether on stage or walking between the corporate booths, people seemed constantly careful not to show any sign of weakness. It seemed outright unthinkable to glimpse the human behind the well-controlled faces.
This is certainly not about judging anyone personally. Being in touch with oneself, let alone sharing it with others, would have felt completely out of place at this conference. The atmosphere was an amplified showcase of what corporate professionalism does to human brains.
This is one of the strategies that the meta AIs use to control the development of technical AI. It capitalizes on our adaptive capacities as humans to turn us into functional instruments that are aligned with their goals.
At Regens Unite, by contrast, people simply were present. Not perfectly, of course, but it still felt natural to look into other people's eyes without feeling an urge to hide my vulnerabilities and shortcomings.
This created an entirely different ground for communication, even when discussing abstract technical questions like combining blockchains and AI for decentralized data sharing.
"As a guest, be a host"
was one of the slogans of the conference, and it seemed that the participants succeeded in creating a common "safe space" for human presence. No one seemed eager to "sell a product". Rather, people were genuinely discussing how we can employ technologies to serve human values.
Of course, this anecdote purely reflects my subjective interpretation. But to me, this observation is by far the most pertinent takeaway from both conferences - and its implications, if thought through, are ground-shaking.
Call it meta AI
So why is human presence suppressed in people who are exposed to corporate settings?
I think a good way of understanding this is to view it as the outcome of intentional actions by intelligent, autonomous systems that have been shaping large parts of our reality for decades.
To repeat the three main claims of this text:
1. Corporations are structures that can be understood as forms of organizational artificial intelligence.
2. These forms of organizational AI are largely controlling the development of technical AI today, which turns them into "meta AIs".
3. Corporations align the development of technical AI with their own values, which are rooted in their specific structural design: The Corporate Logic.
These three claims inspire a different perspective on some discussions about AI's (potential) social implications: The so-called "AI arms race" isn't some kind of natural disaster that we're facing, but a direct result of chronic growth compulsion, the core value that dominates the meta AIs.
Corporations as Organizational Artificial Intelligence
Just as our human intelligence emerges from our neural structures, this high-level type of artificial intelligence emerges from the organizational structures of corporations. In this analogy, humans constitute the primary 'hardware' of these corporate structures. And to keep us running smoothly, we're 'programmed' through a variety of tools, ranging from financial incentives, procedural rules, and performance indicators to targeted information and specialized education.
From the perspective of the Corporate Logic, we humans have the handy property of adapting to the conditions we find ourselves in. Taking advantage of this property, corporations constantly optimize their methods for "designing" us into increasingly better-functioning hardware.
Moreover, through decades of uncoordinated collective action, corporations have influenced the conditions of human existence, for example by promoting location competition between local municipalities. Combined with their more direct efforts to influence us, this has shaped an ever-growing number of humans according to their needs - as employees, managers, consumers, shareholders, founders, policy makers, etc.
Corporations can use the hardware over which they have direct control - like employees and managers - to engage in activities that indirectly influence the behavior of the relevant peripheral hardware, for example consumers and policy makers. Conversely, the roles of shareholders and founders are often defined by evolutionary selection processes, as explored in more detail in my post on The Corporate Logic.
These meta AIs are already intentional AGIs
Since corporations use humans as their hardware, they have essentially all human cognitive capabilities at their disposal. By many definitions, if we accept the claim that they are de facto AIs, this makes them artificial general intelligences (AGIs).
By clustering humans into teams, departments, specialized subcontractors, etc., they can even be seen as achieving mild forms of "superintelligence". Combined with their increasing capacity to collect and process vast amounts of data, this gives them enough of an advantage over any individual human to influence our (perceived) values, worldviews, and identities.
At the same time, meta AI in the form of corporations can be considered intentional. The Corporate Logic can be viewed as an internal guiding mechanism that leads corporations to pursue their intentions, thereby systematically influencing their stakeholders (rather than being governed by them, as many might like to believe).
Corporations are guided by intentions that emerge from their structural specifications and autonomously govern their own behavior. The individuals tasked with managing a corporation are essentially interchangeable parts, continually evaluated for their effectiveness in serving the corporation's chronic growth compulsion and replaced if found to be dysfunctional.
Even when a corporation employs certain individuals as its public face, it does not depend on them. Apple persisted after the passing of its iconic leader Steve Jobs; Microsoft continued to thrive after Bill Gates stepped back; both entities kept adhering to their primary objective: perpetual growth.
Meta AI controlling AI
Meta AI in the form of corporations already exerts a significant influence on nearly every facet of our societies, including finance, retail, transport, media, food production, and healthcare. It is particularly dominant in controlling the development of technical AI.
Developing cutting-edge AI technologies requires massive computing power, necessitating supercomputers composed of tens of thousands of high-end GPUs. Most of this computing power is currently controlled by large corporations like Amazon, Google, Microsoft etc.
Basically all leading startups in LLM development, such as OpenAI, Anthropic, and Inflection AI, have secured substantial investments from major tech corporations. This investment often takes the form of cloud computing credits, effectively leaving the ownership and ultimate control of these critical infrastructures with the large corporations.
Even Hugging Face, a platform offering access to numerous open-source LLMs, relies on Microsoft Azure's cloud services. And when Facebook (Meta Inc.) announced its Llama 2 model, one of the largest and best-performing open-source LLMs, it did so in partnership with Microsoft, giving that corporation even more de facto power over the world's top-end LLMs.
State-funded AI research, too, is often strongly influenced by corporations. This is evident not only in third-party funding of research projects but also in how corporate-sponsored research influences the public sector. The rising gap in computing capacities often forces independent researchers to rely on resources and results made available to them by corporate research labs. Moreover, the meta AIs solidify their influence over public research institutions by offering incentives to students and professors as their potential future employers.
Looking at it from a more general perspective, well-known methods of corruption and policy capture are particularly effective in the field of technical AI, thanks to the steep information asymmetries that create an ideal ground for exercising undue influence.
Reclaiming control over AI development
So what can we humans and our democratic governments do to reclaim control?
If I could implement one policy wish, it would be to classify all high-end GPUs and any hardware with equivalent or greater computing power as critical infrastructure, which must be kept in democratic ownership and control.
This would allow for democratic decisions on fundamental questions like "Do we really want to create artificial superintelligence?" Recent polls show that the majority of Americans actually don't.
However, I recognize that this is utopian, given the current dominance of the meta AIs.
This leaves us with the more general challenge:
How can we humans reclaim power from the corporations?
It might turn out that recent advances in technical AI provide a means to achieve this: by using them to strengthen viable alternatives to the meta AIs. For instance, generative AI can be used to empower small and medium-sized enterprises, cooperatives, and other decentralized organizations that are not inherently driven by growth compulsion.
In this sense, corporate meta AIs might unwittingly be giving rise to their own Frankenstein's monster. By harnessing technical AI, we have the potential to liberate ourselves from their dominance and guide this powerful technology towards a path that is aligned with our human values.