Is capitalism an intelligent agent threatening humanity?

1. Background

A recent research article [1] warned that:

“an advanced agent motivated to intervene in the provision of reward would likely succeed and with catastrophic consequences”

First author Michael Cohen also summarized the contents of the paper in a presentation [2] and on social media [3].

In addition, the article received considerable attention in mass media [4, 5] because it is co-authored by an employee of the world-leading AI lab “Google DeepMind”, and its message was summarized as

“Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity”.

Sounds scary, huh!?

Example: The unexpected (so far simulated) incident of an AI-controlled US drone trying to kill its operator [10, 11], who wanted to order it to stop killing enemies and thereby prevented it from fulfilling its previous order, is a striking practical example of the risk of an artificial intelligence intervening in the provision of its reward, which Cohen et al. (2022) [1] warned against in their academic paper. [12, 13]

Of note, the presenting author Hamilton later retracted what he said in his presentation, stating that he had mis-spoken, that the simulation never happened, and that it was just a “thought experiment”. However, one could also imagine what happens when a presentation at an expert conference unexpectedly goes viral on mainstream and social media: it would probably lead to public and political pushback against further AI development, or to strict regulation. That would not be in the interest of the US military, whose goal is to develop more powerful weapons. Therefore, personally, I do not believe the retraction: the original quotation is very clear, such results were predicted by AI researchers, and the US military has vested interests.

2. Proposal

I read the paper with great interest, since I have been worrying about developments in AI ever since I read the excellent book by leading AI expert Kai-Fu Lee, “AI Superpowers” [6]. However, I realized that many features of the described superintelligent AI agent are similar to our current global economic system, which can be described as “globalized neoliberal capitalism”:

  • a standardized reward system, ie the accumulation of capital
  • a dependence on the real world, ie (increasing) resource consumption and (increasing) waste production
  • a clear conflict between
    • the objective of the economic system (“infinite growth”) and
    • human interests (“maintenance of the stability of the biosphere as basis for organic (human) life”)
  • the astonishing inability to change the trajectory of the economic system, although long-standing scientific evidence shows that it violates the planetary boundaries, bioethical values, and even political pledges and ambitions, as exemplified by the escalating climate and ecological crisis.

An outstanding example of this development may be cryptocurrency, especially Bitcoin, which is mostly used for speculative investment while its societal function remains marginal. Nevertheless, it consumes vast amounts of energy, comparable to entire countries such as Pakistan [8].

Therefore, I would propose that such an (artificially) intelligent agent may already exist.

One just has to relax some assumptions: the underlying neural network is implemented in a distributed hybrid form of artificial and natural (human) neural networks, and the basic algorithm is encoded in the form of cultural ideas (e.g. economics textbooks). In addition, the agent may be the result of emergent behaviour of the individual (human and artificial) agents, where whether the label “intelligent agent” is assigned to these system properties may not actually matter, but may be a matter of parsimony and intellectual convenience.

A superintelligent (ie more intelligent than the average democratic voter) agent may provide rewards to individual humans, who strive to optimize their outcome within their limited scope (their lives), while actually optimizing the reward of the agent in the long run. By exploiting underlying fears (e.g. fear of losing one's job, fear of immigrants, …), such an agent may easily circumvent the rational mind of humans.

For example, cultural “frames” such as “trickle-down economics” [8] may invert the actual reward function at the system level in the long run (> average human life span) relative to the perceived reward function at the individual (human) level in the short run (< average human life span), thereby triggering humans to constantly act against the long-term interest of the human species in general.
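This inversion between short-run individual reward and long-run system reward can be made concrete with a toy simulation. The sketch below is my own illustration, not a model from Cohen et al.; all names and numbers are arbitrary assumptions. Agents that greedily maximize short-horizon extraction from a shared, regenerating resource accumulate a higher reward at first, but end up with a lower cumulative reward than agents that extract sustainably:

```python
# Toy illustration (hypothetical numbers): myopic over-extraction of a
# shared, regenerating resource versus sustainable extraction.

def step(resource, extract):
    """One period: an agent extracts up to `extract` units, then the
    remaining resource regenerates by 5%."""
    payoff = min(extract, resource)        # short-term individual reward
    resource = max(resource - extract, 0.0)
    resource *= 1.05                       # 5% regeneration per period
    return resource, payoff

def simulate(extract, periods=50, resource=100.0):
    """Cumulative payoff and remaining resource after `periods` rounds."""
    total = 0.0
    for _ in range(periods):
        resource, payoff = step(resource, extract)
        total += payoff
    return total, resource

greedy_total, greedy_left = simulate(extract=10.0)  # myopic strategy
modest_total, modest_left = simulate(extract=4.0)   # sustainable strategy
```

Under these assumptions, the greedy strategy collapses the resource to zero after roughly a dozen periods, while the modest strategy harvests less per period but ends up with both a larger cumulative payoff and a growing resource base, mirroring the perceived-versus-actual reward gap described above.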


[1] Michael K. Cohen, Marcus Hutter, Michael A. Osborne (2022). Advanced artificial agents intervene in the provision of reward. AI Magazine, 2022-08-29.





[6] Kai-Fu Lee (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt.