The Capitalist Nature of Artificial Intelligence and Orthogonality

Let us first address the first basic drive of AI that Omohundro lays out in “The Basic AI Drives”: the drive toward self-improvement. At the very core of debates about the nature of intelligence lies what is called the orthogonality thesis. The orthogonality thesis holds that “[i]ntelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal” (Bostrom 3). Let me simplify what Nick Bostrom is saying here: any intelligent system could technically have any final goal, regardless of its level of intelligence. Since a final goal is just a problem to be solved, this amounts to saying that any intelligent system, regardless of how many problems it can solve, could have any problem as its goal. Obviously this is as ridiculous as it is problematic. Because intelligence is itself problem solving, and the higher an intelligent system’s level of intelligence, the more problems it can solve, an intelligent system with a low level of intelligence that tried to solve problems requiring a high level of intelligence would be unable to. Therefore, to reach a final goal which requires a high level of intelligence, it would first have to take self-improvement as its goal. This demonstrates that even within the orthogonality thesis itself, there is a tendency of intelligent systems, hereinafter abbreviated as IS, toward self-improvement. But this assumes our understanding of intelligence to be correct, and the orthogonalists (Bostrom et al.) have a different idea in mind. For the orthogonalists, intelligence is “the capacity for instrumental reasoning” (Bostrom 3), which is to say, intelligence is instrumental rationality. Therefore, the higher an IS’s level of intelligence, the more instrumentally rational it is. This leads us to Nick Land and his understanding of intelligence.
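
To make this reasoning concrete, here is a minimal toy sketch in Python (the capability numbers and the plan function are entirely my own invention, not anything from Bostrom or Omohundro): an IS whose final goal demands more capability than it currently has must adopt self-improvement as an instrumental subgoal before it can pursue that goal directly.

```python
# Toy illustration: a goal that demands more capability than the agent has
# forces self-improvement as an instrumental subgoal.

def plan(capability: float, goal_difficulty: float) -> list[str]:
    """Return the sequence of actions an agent must take to reach its final goal."""
    actions = []
    # The agent cannot solve problems above its capability level,
    # so it must first raise that level (self-improvement).
    while capability < goal_difficulty:
        actions.append("self-improve")
        capability += 1.0  # each round of improvement raises capability
    actions.append("pursue final goal")
    return actions

# A low-capability IS with a demanding final goal:
print(plan(capability=2.0, goal_difficulty=5.0))
# ['self-improve', 'self-improve', 'self-improve', 'pursue final goal']
```

However artificial the numbers, the structure of the argument is visible: self-improvement falls out of the final goal itself, without ever having to be a final goal.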

Following Nick Land’s dynamic understanding of intelligence, we must understand that praxeology lies at the very bottom of it. I think it is hard to argue that it is a mere coincidence that the first blog post on Outside in with the intelligence tag explicitly references Mises’ Human Action. Utilitarianism is to be rejected as a normative ethical framework if the norm that is posited is pleasure. The pursuit of pleasure is a trap, preventing ISs from undergoing intelligence explosions. In other words, having pleasure as one’s primary goal generally leads to high time-preference behavior and thus, in the long term, to a failure to actually maximize pleasure. Furthermore, utilitarianism as a descriptive economic framework is incomparably superior to any form of hedonistic utilitarianism focused on pleasure. An AI is the perfect example of this in that it is obviously utilitarian: it directs its action toward the maximization of its end. In this sense, “[c]alculative consequentialism is vastly superior to deontology[ but o]nly if it grows intelligence,” as Nick Land argues (“Optimize for Intelligence”). The question was then asked, “How do you optimize for intelligence,” and Nick Land gave the answer: “through intense competition, primarily” (“Optimize for Intelligence”). That market dynamics will lead to the optimization of intelligence isn’t really a question… it is a basic deduction from any praxeologically-based theory of catallactics.
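
The time-preference point can be made numerically. Below is a minimal sketch, with payoffs and growth rates invented purely for illustration: an IS with high time preference consumes its payoff every round, while a low time-preference IS defers gratification and invests, and the investor ends up with far more of the very thing the hedonist was maximizing.

```python
# Toy model of time preference: consume a fixed yield now, or invest and let
# the stock compound. All numbers are invented for illustration.

def total_payoff(invest: bool, rounds: int = 10) -> float:
    stock = 1.0   # productive stock whose yield can be consumed or reinvested
    total = 0.0
    for _ in range(rounds):
        if invest:
            stock *= 1.5    # low time preference: defer gratification, grow the stock
        else:
            total += stock  # high time preference: consume the yield immediately
    if invest:
        total = stock       # cash out the grown stock at the end
    return total

print(total_payoff(invest=False))  # 10.0  -- the hedonist's lifetime payoff
print(total_payoff(invest=True))   # ~57.7 -- the investor's lifetime payoff
```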

Now that calculative consequentialism has been laid out as the framework by which intelligence is to act if it wants to maximize its desired end, we must ask the very question “What is intelligence?” and formally[1] define it. In all cases, intelligence does one thing: it solves problems. How does it do this? To use thermodynamic terms, intelligence solves problems “by guiding behavior to produce local extropy” (“What is Intelligence?”). Extropy is simply “local entropy reduction” or “what it is for something to work” (“Extropy”). Because “[t]he general science of extropy production (or entropy dissipation) is cybernetics,” “intelligence always has a cybernetic infrastructure, consisting of adaptive feedback circuits that adjust motor control in response to signals extracted from the environment” (“What is Intelligence?”). Unsurprisingly, then, intelligence is composed, at the infrastructural level, of adaptive feedback loops, which means these loops can produce both positive and negative feedback depending upon their environment. What intelligence can do is improve itself, i.e., “correct [its] performance,” because it is realist on a machinic level, i.e., because “it reports the actual outcome of behavior (rather than its intended outcome)” (“What is Intelligence?”). Extropy can be a troubling concept for some in that it seems contradictory to any process of acceleration, which is to say, extropy seems to contradict capitalism. This seeming contradiction comes from the fact that many conflate entropy exclusively with positive feedback loops and extropy exclusively with negative feedback loops. Nick Land actually addresses this when he says,

Even rudimentary, homoeostatic feedback circuits, have evolved. In other words, cybernetic machinery that seems merely to achieve the preservation of disequilibrium attests to a more general and complex cybernetic framework that has successfully enhanced disequilibrium. The basic cybernetic model, therefore, is not preservative, but productive. Organizations of conservative (negative) feedback have themselves been produced as solutions to local thermodynamic problems, by intrinsically intelligent processes of sustained extropy increase, (positive) feedback assemblage, or escalation. In nature, where nothing is simply given (so that everything must be built), the existence of self-sustaining improbability is the index of a deeper runaway departure from probability. It is this cybernetic intensification that is intelligence, abstractly conceived. Intelligence, as we know it, built itself through cybernetic intensification, within terrestrial biological history. It is naturally apprehended as an escalating trend, sustained for over 3,000,000,000 years, to the production of ever more extreme feedback sensitivity, extropic improbability, or operationally-relevant information. Intelligence increase enables adaptive response of superior complexity and generality, in growing part because the augmentation of intelligence itself becomes a general purpose adaptive response. (“What is Intelligence?”)

So, to explain this large quote: it is cybernetic escalation that has led to the production of negative feedback loops, as solutions to problems of local entropy. It is escalation because extropy, too, can be produced in the manner of a positive feedback loop, and in fact is produced in that manner. The point of intelligence is problem solving, and therefore adaptation. Extropy is clearly nothing less than this movement of adaptation. In this way, both the dynamics of markets within capitalism and Darwinian evolution are “intrinsically intelligent” (“What is Intelligence?”). In a certain sense, one could define intelligence as “adaptation” (“What is Intelligence?”). Because, for Land, “‘equilibrium’ and ‘trap’ have almost identical meaning,” intelligence’s cybernetic infrastructure having negative feedback loops is not contrary to capitalism: disequilibrium itself is maintained through negative feedback loops (“The Monkey Trap”). In other words, within higher and more complex levels of intelligence, negative feedback loops act to preserve the escalation that is produced through positive feedback. To put this into perspective: on the one hand, the opposite of capitalism is “preserving ‘intelligence equilibrium’ through negative feedback,” whereas, on the other hand, capitalism is preserving intelligence disequilibrium through negative feedback, a disequilibrium that is simultaneously enhanced by way of positive feedback (“Quote notes (#26)”). But let us head back to the orthogonality thesis now that we understand what intelligence really is.
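
A minimal simulation can make this distinction tangible. In the sketch below (every constant is invented for illustration, not taken from Land), positive feedback keeps the state itself in runaway escalation, while negative feedback regulates only the growth rate, holding the system in a preserved disequilibrium; and, in line with machinic realism, the controller corrects on the measured outcome of behavior, not the intended one.

```python
import random

# Negative feedback need not mean equilibrium: here it stabilizes a growth
# RATE while positive feedback keeps the state itself escalating without bound.
# All constants are invented for illustration.

random.seed(0)
TARGET_RATE = 0.05  # desired growth per step: a preserved disequilibrium, not a fixed point

state, rate = 1.0, 0.10
for step in range(200):
    previous = state
    # Positive feedback: the state compounds on itself, disturbed by the environment.
    state *= (1.0 + rate) * random.uniform(0.97, 1.03)
    # Machinic realism: measure the actual outcome of behavior, not the intended one.
    actual_rate = state / previous - 1.0
    # Negative feedback acts on the growth rate only; the state keeps escalating.
    rate -= 0.5 * (actual_rate - TARGET_RATE)

print(f"state = {state:.1f}, rate = {rate:.3f}")
# The state has escalated by orders of magnitude; only its rate of escalation
# hovers near the target. An equilibrium system would instead pull the state
# itself back toward a fixed set-point.
```

The negative feedback here is “preservative” only of the escalation itself, which is exactly the sense in which Land calls the basic cybernetic model productive rather than preservative.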

What Land sees in the orthogonality thesis is a division between intellect and volition, which is to say, the orthogonality thesis holds that an agent’s level of intelligence (intellect) and the goals it is directed toward (volition) need have nothing to do with one another. Against this, Nick Land makes various arguments, but all he really has to do to disprove the orthogonality thesis is to disprove that there is any split between intellect and volition. How he disproves the idea that there is such a disconnect will be looked at in the next part. Thank you for reading!

Notes

[1]: Ew… formality… Gross, I know!

[2]: If the xenosystems.net links do not work, please copy and paste the cited URLs into the Wayback Machine to read the blog posts in their entirety.

Bibliography

Bostrom, Nick. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Nick Bostrom’s Home Page, May 2012, https://www.nickbostrom.com/superintelligentwill.pdf.

Land, Nick.[2] “Extropy.” Outside in: Involvements with Reality, 20 Feb. 2013, http://www.xenosystems.net/extropy/.

---. “Optimize for Intelligence.” Outside in: Involvements with Reality, 15 Mar. 2013, http://www.xenosystems.net/optimize-for-intelligence/.

---. “The Monkey Trap.” Outside in: Involvements with Reality, 31 Aug. 2013, http://www.xenosystems.net/the-monkey-trap/.

---. “What is Intelligence?” Outside in: Involvements with Reality, 19 Mar. 2013, http://www.xenosystems.net/what-is-intelligence/.
