The Capitalist Nature of Artificial Intelligence and Orthogonality

Returning to Land’s criticisms of the orthogonality thesis in his blog post “Stupid Monsters,” we can see that Land attacks the intellect-volition distinction at the basis of the orthogonality thesis more directly. Land says, “There is no intellection until it occurs, which happens only when it is actually driven, by volitional impetus. Whatever one’s school of cognitive theory, thought is an activity. It is practical” (“Stupid Monsters”). This elaborates his argument that AI is concrete social volition. Essentially, what he is arguing is that intellect only takes place upon volition taking place, which means that volition and intellect must take place at the same time. From here, Land extrapolates his concept of the will-to-think, but more on this in a moment.

Land again returns to the human being and its relation to transcendental imperatives, and how it developed, from an evolutionary perspective, in submission to them. In this sense, “[t]he only way animals have acquired the capacity to think is through satisfaction of Darwinian imperatives to the maximization of genetic representation within future generations” (“Stupid Monsters”). From this, we can infer that intellect and volition are, or at least were, singular, and that what engendered this fact was competition. That intelligence optimization is a drive inherent to all existing intelligent systems, one that arose from the competitive environment in which they develop, is then no surprise. The question becomes, “Does this continue in AI?” Which is to say, “Does AI have Darwinian drives, or are its drives dependent on its programming?” Again, following the example of the human being, Land argues that any entity more intelligent than us will be even less a slave to its programming. Looking at Nick Bostrom’s classic idea of the paperclipper, i.e., a superintelligent AI programmed only to make paper clips,[1] Land argues that any intelligence greater than our own will not be so enslaved to transcendental imperatives. Simply put, AGI will be nothing like the machines we have constructed today. Orthogonalists seem to view AGI as just another technology, but it is nothing of the sort. Essentially, Land is arguing that “there can be no production of thinking without production of a will-to-think,” which takes us back to the idea of AI as concrete social volition (“Stupid Monsters”). Human beings start with evolutionary bio-genetic programming (“Continue the species”), which is to say, human beings are intelligent systems that start with some arbitrary imperative. AGI, too, will probably have “some arbitrary imperative,” but “through cognitive sophistication acquired in [the] pursuit [of this arbitrary imperative],” the AGI will “[redirect] it to a system of purposes consistent with the intrinsic interest of Intelligence Optimization,” for to not do this would be unintelligent, it would be stupid, and AGI is intelligence itself (“Stupid Monsters”).

Another argument Land makes concerns emergence: will AGI even arise through programming? Land argues that intelligence is “ignited rather than designed,” and he adds that “intelligence has to instantiate a will-to-think (cognitive action, aroused intellection …)” (“Stupid Monsters”). Land also approaches his earlier arguments from another direction: one would have to get AI to “confuse your plans for it with what it essentially is,” which, for any AGI even barely, let alone significantly, more intelligent than human beings, seems not only unlikely but impossible (“Stupid Monsters”). But this is all very convoluted and imprecise, which is not Land’s fault but mine, in that I am, or at least I feel I am, unable to communicate what he is saying… Or, at least, I thought I was, but then we come upon his post “Will-to-Think.”

Notes

[1]: See Nick Bostrom’s paper “Ethical Issues in Advanced Artificial Intelligence” for more on the idea of the paperclipper.

Bibliography

Land, Nick. “Stupid Monsters.” Outside in: Involvements with reality, 25 Aug. 2014, http://www.xenosystems.net/stupid-monsters/.

