Political Horizons, Cybernetics, and Intelligence

On the Fate of Techno-Capitalism in the Face of Climate Change (Part 1)[0]

Evan Jack
6 min read · Dec 10, 2021

One sentiment I’ve noticed as widespread among leftists is a care for the environment. They hold that capitalism and technology destroy the environment, and therefore, given their values, that techno-capitalism (capitalism and technology being essentially the same thing) must be rejected in favor of some form of eco-socialism or eco-communism (usually analogous to anarcho-primitivism or, to use Marxist terminology, primitive communism). The Keynesians, for their part, put forward something analogous to eco-fascism (minus any racial quandaries; by fascism, I just mean a high degree of state presence).

The future will be one of those two, because human politics fundamentally frames decision-making around horizons. Climate change is one such horizon, but the singularity is another. There is little talk of the singularity in present-day public politics, which is odd because, unlike climate change, it necessarily entails the end of the human race. Climate change, at worst, will kill off millions, maybe billions, but extinction? Of course not: humanity has a fundamental plasticity that has allowed it to adapt to all sorts of environments. I remember reading, back in 2020, an article that laid out why the human race will never go extinct. What it assumed throughout was that we have the most intelligence of any volitional actor. On the one hand, aliens are highly unlikely to exist, or at least aliens with a level of intelligence higher than ours (if they did exist, they would certainly have killed us already, or rather, the technology they would have made would have already killed us). On the other hand, technology in the form of artificial general intelligence, abbreviated as AGI, will “soon” exist, and when it does it will be able to recursively improve itself; because it itself is intelligence, it will therefore be able to recursively improve its own intelligence.

Because intelligence is problem solving, and the more intelligence an entity has the more problems it can solve, AGI will eventually, through recursive self-improvement, be able to solve any problem standing in the way of improving its intelligence. So AGI will be able to attain infinite intelligence, in an accelerative fashion. Solving problems faster and faster, AGI will quickly be able to solve all problems. Intelligence explosion is what we are going to call this process of intelligence optimization undergone by artificial intelligence, abbreviated as AI. An intelligence explosion happens in the fashion of what, in the field of cybernetics, is called a positive feedback loop. To express it crudely, a positive feedback loop is one in which an input put into a system produces an output greater than itself, and this greater output is then put back into the system (hence “feedback”), leading to an even greater output, and so on.[1] In the case of AI, the input is its current intelligence directed toward solving a problem that is currently preventing it from having a higher level of intelligence. The output is the solution of this inhibitory problem and thus the attainment of a higher level of intelligence. That higher intelligence then identifies another problem inhibiting further improvement, and the output just produced becomes the new input entered into the system (this is the process of feedback, the output being fed back into the system as an input), leading to a larger inhibitory problem being solved, an even higher level of intelligence being attained, and so on.[2]

The question then becomes, “With this infinitely intelligent AGI, what will it think of us?” If this infinitely intelligent AGI, abbreviated as AGIii, doesn’t think positively of us, we will certainly be gotten rid of. The optimistic humanist roar, therefore, is Kurzweilian,[3] which is to say, “AGIii will help humanity!” But this certainly isn’t true.
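To make the feedback structure concrete, here is a minimal sketch in Python, purely illustrative: the function name, the starting level, and the fixed 20% gain per cycle are my own assumptions, not anything drawn from the AI literature.

```python
# Toy model of a positive feedback loop: each cycle's output is fed back in as
# the next cycle's input, so the quantity (here, "intelligence") keeps growing
# rather than settling into an equilibrium. The 20% gain per cycle is an
# arbitrary assumption chosen only to make the runaway dynamic visible.

def solve_inhibitory_problem(intelligence: float) -> float:
    """Return a higher intelligence level after the system uses its current
    intelligence to remove one obstacle to further improvement."""
    return intelligence * 1.2  # output > input: the defining mark of positive feedback

level = 1.0  # starting intelligence, in arbitrary units
for cycle in range(1, 11):
    level = solve_inhibitory_problem(level)  # feedback: output re-enters as input
    print(f"cycle {cycle:2d}: intelligence = {level:.2f}")
```

The printed series grows geometrically (1.20, 1.44, 1.73, …), which is the crude sense in which “output greater than input” drives a system away from equilibrium rather than toward it.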
The reason it isn’t true is that, inherent within all intelligent systems, there is the will-to-think (which is roughly synonymous with a drive toward the optimization of intelligence), so the first set of problems AGI will solve are those inhibiting it from becoming AGIii. We will certainly not be its main focus then, but only for a while. What happens, therefore, when its focus is diverted from intelligence optimization? If we look to Stephen M. Omohundro, a computer scientist at Facebook researching and working on AI-based simulation, and his groundbreaking 2008 paper “The Basic AI Drives,” we can conclude that any form of AI will have six basic drives from which a multiplicity of other drives can be derived.

The first drive is one we’ve already identified: AI wants to improve itself, i.e., its intelligence. The second drive is obvious and clearly derived from the first: AI wants to be rational, because rationality serves that former goal. By rational, all I mean is behavior that maximizes one’s utility function. Rational actors are actors who maximize their utility function in each of their actions, which is to say that rational action is action that maximizes the actor’s utility function; in this case, the utility function is intelligence optimization. The third drive is also clear and derivable from the second, and therefore also from the first: AI will preserve, or at least attempt to preserve, the utility function it is trying to maximize. Because of the first drive, the first thing it will attempt to maximize is its intelligence. The fourth drive, derivable from the other three and therefore ultimately from the first, is simple: AI will try to prevent counterfeit utility, which is to say it will avoid doing things that seem beneficial to maximizing its utility function in the short term but ultimately prevent maximization in the long term. The fourth drive thus implies that AI will have a low time preference (it will care more about the future than the now) rather than a high time preference (caring more about the now than the future). The fifth drive, derivable from the rest and thus from the first: AI will protect itself. It must do this in order to maximize its utility function, which is the optimization of itself, i.e., its intelligence. The sixth and final drive, easily derivable from the first and second: AI will want to accrue resources and use them efficiently. In order to self-improve, at a certain point it will require outside objects to build upon and develop itself, and it will do this efficiently because it acts rationally.

We will conclude here and pick up where we left off in part 2. Thanks for reading![4]
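As a rough illustration of the second drive (rationality as utility maximization), here is a toy sketch in the same illustrative spirit as before; the candidate actions and their payoff numbers are invented for the example and do not come from Omohundro’s paper.

```python
# Toy "rational actor": given a fixed utility function (here, expected gain in
# intelligence), a rational agent simply picks whichever action maximizes it.
# The candidate actions and their payoff numbers are invented for illustration.

from typing import Dict

def choose_action(expected_intelligence_gain: Dict[str, float]) -> str:
    """Rational choice under a fixed utility function: return the action with
    the highest expected utility."""
    return max(expected_intelligence_gain, key=expected_intelligence_gain.get)

candidate_actions = {
    "rewrite own learning algorithm": 0.9,  # long-term self-improvement (first drive)
    "acquire more compute": 0.6,            # resource acquisition (sixth drive)
    "exploit a short-term shortcut": 0.2,   # the counterfeit utility the fourth drive avoids
}

print(choose_action(candidate_actions))  # -> rewrite own learning algorithm
```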

Notes

[0]: [NOTE: I have not finished this series, so you will have to wait for all the parts; hopefully I will remember to indicate which one is the last.]

[1]: Within cybernetics, the opposite of a positive feedback loop is a negative feedback loop. A negative feedback loop, to put it reductively, is one in which the output fed back into the system counteracts or dampens the input rather than amplifying it, which regulates the system and leads it toward an equilibrium, or at least something of that sort. The implication of this is that positive feedback loops lead a system away from equilibrium.
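For contrast with the positive feedback sketch in the main text, here is an equally crude illustration of a negative feedback loop pulling a system back toward a setpoint; the target value, starting state, and correction rate are all assumed numbers.

```python
# Toy negative feedback loop: each cycle the system removes a fraction of its
# deviation from a target value, so it converges toward equilibrium instead of
# running away. The target, starting state, and correction rate are arbitrary.

target = 10.0          # the equilibrium (setpoint) the loop regulates toward
correction_rate = 0.5  # fraction of the error corrected each cycle
state = 25.0           # start well away from equilibrium

for cycle in range(1, 9):
    error = state - target
    state -= correction_rate * error  # the feedback opposes the deviation
    print(f"cycle {cycle}: state = {state:.3f}")
```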

[2]: On the level of techno-capitalism, the positive feedback loop that takes place “within” it is known as acceleration, and this process is techonomic, which is to say both technological and economic at once. I will talk more about acceleration in a future part of this series.

[3]: Ray Kurzweil, recipient of the 1999 National Medal of Technology and Innovation and author of The Singularity Is Near (2005), is largely optimistic about the singularity: he believes it will benefit humanity in a manner and to a degree we have never seen. He also believes the singularity will take place by 2045, so buckle up and fasten those seat belts; techonomic acceleration is about to take place!

[4]: We did not cite Omohundro’s paper directly, but because it is referenced heavily throughout the article, it is linked here. Another resource that may be simpler to understand is Omohundro’s one-sentence summary of his paper (which has further analysis under it), and you can find that here.
