Mystification, Naturalization, & Literacy

This section outlines three masks that contemporary power has donned in the age of deep learning. These masks are ancient methods of refashioning authority, now retrofitted for planetary computational dominion. Used to distance and obfuscate material jurisdiction from the general population, these historically vetted tactics allow the sovereign to justify and cloak their authority.


Where realpolitik’s brute-force approach to wielding power dwindles, its twin, mysticism, intervenes. Across most cultures, authority can be found seated next to the keepers of transcendental doctrine: monarchs’ devotion to the papacy, maharajahs’ consultation with the brahmins, shahs’ relation to the caliph, emperors’ adjacency to the hierophants, and so on. The mystical consulate is positioned as the adjudicator of transcendental doctrine, whose complex, abstract, and remote precepts typically required its hermeneutic expertise.

The modern humanist project that began during the European Enlightenment quite literally dethroned old-world autocratic rule as reason triumphed and laissez-faire capitalism formed a new political subject: the individual liberal citizen. This social subject was granted inalienable rights that superseded the divine decrees of pontiffs and monarchs. However, in the age of computational capital and ecological meltdown we are witnessing a crisis of the Westphalian nation-state, the primary maintainer of the principles of the humanist project. In an accelerating geopolitical landscape where intelligent algorithmic systems continue to agitate national identities, desubjectify modern democratic subjecthood, and retopologize new jurisdictions according to remote technocratic administration, it is becoming evident that monolithic power is re-emerging.

Refer to Appendix A1 for ancillary musical, historical, and technical details

Given the emergent complexities in our global networks of coordination, algorithmic capitalism has anointed its entrepreneurial engineers to remodel the world to reflect its staggering rate of growth. To contend with the sheer magnitude of information processing, human oversight is bypassed. These convoluted processes of automation are developed and maintained by the embrocated. This software canonicate occupies the edges of computation where mathematics becomes metaphysics; where transcendental logic seeks to untether intelligence from the shackles of humanism.

Some of these machine intelligence researchers utilize allegories of gods to frame models of intelligence that exceed the capacity of humanity. While mythology can serve as a mirror for reflecting insights into the human condition, these analogies with AI generally play into teleological and theological schemas of determinism and inevitability that are ultimately unhelpful misnomers. Deifying complex systems serves as another tactic for elevating and naturalizing investment capital’s capacity to aggregate and analyze colossal amounts of data at planetary scale.

Myth-making is to be expected as humans attempt to explicate these ineffable, foreign processes that play such an enormous role in our lives. Since its indigenous origins, humanity has spun stories using in-built associations to construct meaning and negotiate with uncertainty. However, these types of animistic myths of enchanted relationality and ecological cohabitation with other intelligences are not the ones cropping up in Silicon Valley circles. These factions have categorically discarded pre-colonial models of temporality, including dreamtime and cyclical time, supplanting them with a chronological telos of autocatalytic productivity that will beget emergent computational supremacy. This belief in a deterministic, a priori supreme being made of pure reason (logos) is quite literally derived from ecclesiastical Christian dogma and rabbinical Hebrew scripture. In fact, the Jesuit priest Pierre Teilhard de Chardin postulated the Omega Point5, a theory that the universe is evolving towards a maximum state of complexity and consciousness. Despite its religious provenance, his ur-Singularity cosmology has been widely adopted by many secular executives and engineers helming the burgeoning technocracy.

The Ethics of Big Data, O'Reilly Media, 2012
Data Center Blessing; # /etc/init.d/daemon stop (Image Credit: India Times)

The Sophist notion of technē, which forms our most essential figuration of technology, is derived from the Promethean myth: fire was stolen from the gods and bestowed upon humanity, spawning progress and civilization. This correlation is apotheosized in Yudkowsky’s Bayesian horrors6, Alexander's transhumanism7, Kurzweil's technological singularity8, Land's cosmological singularity9, Bostrom's Superintelligence10, and Levandowski's Church of AI. The irony in these rationalists’ accounts of intelligence is that they are all reifications of inherited cosmologies predicated not on formal logic but on myth. When subjected to methodological rigor, these narrative-based scare tactics are subsumed by heuristic biases and collapse into xenophobic rhetoric.

Mystifying computation in this way tends towards a cosmic narcissism: gazing into the abyss as the abyss affirms its own preconceptions of itself. Regardless of one’s stance on the othering of hypercomplex computation, I would argue that humanity should cultivate a lexicon for talking about aliens, others, or xeno-intelligence without dynamic divergence.

The theories of breakaway, recursively self-enhancing technology that undergird these conceptions of intelligence are often discussed in a bounded immaterial domain, without examination of their anatomical, geographic scale. These exceedingly brilliant cerebral meta-linguists and transcendental number theorists often neglect to acknowledge the material earthen corpus upon which their symbolic logic is expressed.11

Researchers Kate Crawford and Vladan Joler’s Anatomy of an AI System12 is a graphical dissection of the infrastructural assemblage required to embody artificial intelligence. Spanning the fabric of capital to include mineral resource extraction, human labor, supply chain logistics, data collection and distribution, analytics, prediction, and optimization, their project synopsizes the pipeline for constructing a deep learning system. This corporeal plexus is typically trivialized with the public relations nomenclature of the Cloud.

Referring to the vast material architectural projects of data centers with the ephemeral parlance of the Cloud is not just a bit disingenuous, it is downright deceptive. As platforms deploy preemptive algorithms into the cultural apparatus, they require a delicate and tactical marketing narrative as their trojan horse. As architecture and design critic Keller Easterling aptly suggests: “You can see the discrepancy between what organizations are saying and what they are doing. You can even see temperament in construction or potentials for violence. That disposition is propensity within a context, property or tendency that is unfolding over time.”13 By steering the public narrative away from its material operations, these supranational syndicates are able to truncate opposition by minimizing attention to what they are building.


Historically, the doctrinal gatekeeping that delineated class and caste was maintained by ecclesiastical scribes. There are striking parallels to the emerging niche of computational literati contributing to the steepening disparity of wealth and technical literacy. Both clergy and programmers provide order, in the form of textual statutes, that designates the foundational axioms upon which a society rests. Similar to the vertical denominations that emerge with religious sects, emanant power has stratified the social order and restricted access to its inner sanctum through media illiteracy and technical deficiency.

User experience design serves as an exegetical layer between the code and its graphical representation. These simplified behavioral flows divert users away from the software’s extractive disposition and carry the hermeneutic subtext of “leave it to the experts”. UX is a set of inherited decisions that form interfaces to assist in navigating a computational domain. While user experience design allows those without literacy to use digital media with relative ease and is an essential facet of all computing, the Jobsian design ethos of “it just works” is synonymous with consumerism and is not designed to cultivate media literacy. By funneling users into compromised sets of autonomy, these firms maintain opacity by occluding access to the operational facets of their services. For example, the frictionless front-end of Amazon’s Alexa and its ilk are reverse portals into behavioral surplus supply chains.

Developing formal design criteria, let alone literacy, for deep learning is a sophisticated process that even those who are working in the field have failed to achieve. Arguably, since artificial intelligence interpolates the latent fields between engineering, science, philosophy, mathematics, design, and spectacle, it doesn’t satisfy the methodologies that quantify any real metrics specific to each respective field.14 This provides us with interesting output, but that output becomes subject to motivated logic15 and is used to “prove” certain assumptions without the rigor and criteria that each of the aforementioned fields employs. This effectively allows interesting but unprovable claims to slip through under the guise of technical or conceptual rigor.16

AI researchers are most likely educated in fields that take formal problems as inputs: engineering, computer science, mathematics, or theoretical physics. Yet the problems being tackled are mostly ones for which a design approach, maintaining a continuous, open-ended relationship with nebulosity, may be more appropriate.17 These applications assume a highly technocratic, solutionist position on social life; entire fields and industries are “disrupted” by platform engineering that ignores the specialist knowledge held by experts in their respective practices. This approach circumvents domain-specific expertise and supplants it with big data. With its appeals to the bottom line, businesses can afford to overlook the resultant margin of error, bypassing proficient professionals with automated solutions. What the adopters of these “disruptive solutions” may not realize is that these platforms use tautology to substantiate their claims. In essence, the proponents create the criteria that they need to fulfill, fit the data to qualify their results, and prove that their “solution” will cost less than hiring experts.

Refer to Appendix A2 for ancillary musical, historical, and technical details


Venkatesh Rao alleges that “deep learning has an authoritarian right wing bias. It feeds on vast data sets created by natural behavior, has a tendency to inherit and reproduce endemic biases, and codify them in favor of conservative authoritarians who see the incumbent balance of power as natural and just.”

Rao goes on to state that the management class organizing the business and social formations through which deep learning is codified claims that trying to “regulate” the functioning of deep learning algorithms directly, through human political processes, or by demanding ‘justifiable AI’ that can explain itself, is a fool's errand.

By adopting this framing, deep learning becomes a tautological justification for itself. Outside of the market-based rhetoric of profit motivation, how is this being justified as data science? Some of these algorithms leverage what are called adversarial networks; related techniques use computation to accelerate simulated evolutionary processes by determining “data fitness” through a mathematical sorting process, and are referred to more broadly as genetic algorithms.18 These information processing models compute data by simulating Darwinian natural selection, where only the best data “survives” a statistical gauntlet of adversarial regression. By using these types of sorting algorithms, many Silicon Valley executives leverage and vindicate the results as “natural” data science; the truth is that they are anything but natural. Their reasoning generally claims that, given enough data, the algorithm will arrive at a statistical equilibrium having run through enough permutations of evolution.
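The simulated "survival" the genetic algorithm literature18 describes can be sketched in a few lines. The bitstring genome and count-the-ones fitness function below are illustrative assumptions (the classic OneMax toy problem), not a model of any production system:

```python
import random

random.seed(42)

GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # Illustrative "data fitness": the count of 1-bits in the genome.
    return sum(genome)

def select(population):
    # Tournament selection: the fitter of two random candidates "survives".
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover combines two surviving parents.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Each bit flips independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
```

The tournament-selection step is the "statistical gauntlet" described above: nothing about it is natural; the fitness function, selection pressure, and mutation rate are all design decisions made by whoever builds the system.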

This assumption is predicated on the cum hoc logical fallacy19 which, in statistical lexicon, can be summed up as “correlation doesn’t imply causation.” As more of big data’s conclusions are subjected to scientific rigor outside of its own self-affirming, means-tested regressions, it has been shown that more data can often lead to erroneous results.20 21
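The fallacy is easy to reproduce. In this minimal sketch (synthetic data, standard library only, all parameters illustrative), a hidden confounder drives two variables that never influence one another, yet they correlate almost perfectly:

```python
import random

random.seed(0)

n = 1000
confounder = [random.random() for _ in range(n)]

# x and y each depend only on the hidden confounder, never on each other.
x = [c + 0.1 * random.random() for c in confounder]
y = [c + 0.1 * random.random() for c in confounder]

def pearson(a, b):
    # Pearson correlation coefficient, computed from first principles.
    m = len(a)
    mean_a, mean_b = sum(a) / m, sum(b) / m
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
    var_a = sum((ai - mean_a) ** 2 for ai in a)
    var_b = sum((bi - mean_b) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

r = pearson(x, y)
# r is close to 1.0, yet intervening on x would leave y unchanged:
# the correlation reflects the confounder, not causation.
```

More data only sharpens this spurious correlation; it cannot reveal the missing causal structure, which is precisely the point made by the causal inference literature cited above.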

Cultural critic Mike Pepi articulates the hubristic naturalization of platforms as biological organisms in his astute analysis of Silicon Valley, Sublime Administration: Between Platform and Organism22. The careful use of biological and evolutionary language around these techniques is intentional, because it lets proponents position their critics as being against “science”. This has been a continual assertion of those advocates seeking to deepen the entrenchment made by these systems. The evolutionary justification for these conclusions is not just fallacious but has been epistemologically falsified.

Francis Galton, a pioneer in eugenics and biometrics, was also a progenitor of the field of statistics. The statistical techniques that Galton invented23 (correlation, regression) and the phenomena he established (regression to the mean, the bivariate normal distribution) form the basis of the biometric approach and now operate at the core of deep learning data analytics.
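Regression to the mean, Galton's namesake phenomenon, can be reproduced in a short simulation. The 0.5 heritability coefficient below is an arbitrary illustrative choice, not a claim about any real trait:

```python
import random

random.seed(1)

n = 2000
# Parent trait, standardized: mean 0, variance 1.
parents = [random.gauss(0, 1) for _ in range(n)]
# Child inherits half the parental deviation plus independent noise,
# with the noise variance chosen so the child's variance is also 1.
children = [0.5 * p + random.gauss(0, 0.75 ** 0.5) for p in parents]

mean_p = sum(parents) / n
mean_c = sum(children) / n
cov = sum((p - mean_p) * (c - mean_c)
          for p, c in zip(parents, children)) / n
var_p = sum((p - mean_p) ** 2 for p in parents) / n

# Galton's regression slope: less than 1, meaning children of extreme
# parents fall back toward the population mean.
slope = cov / var_p
```

The recovered slope sits near the chosen coefficient of 0.5, which is the "regression to the mean" Galton observed in hereditary stature and which the biometric tradition built itself upon.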

Historically, we’ve seen this type of evolutionary rhetoric attempt to draw sinister, erroneous conclusions about race and criminology. In the age of deep learning we are experiencing a resurgence of outmoded 19th-century ‘race (pseudo)science’ (e.g. criminal anthropology23, biological determinism24, social Darwinism25, phrenology26, physiognomy27, and eugenics28). Like many of the inductive biases latent in deep and reinforcement learning systems, these fallacious notions are predicated on racist methodological weaknesses: poor sampling technique, bias in gathering data, and poor statistics.29 Even though these claims have been categorically disproven, we are observing deep learning systems carrying these biases rapidly being mounted across American martial systems30: ICE31, the NYPD32, the U.S. Army33, the Orlando Police34, the New Orleans PD35, and the Washington Sheriff’s Department36.

5    "Teilhard de Chardin and Transhumanism." https://jetpress.org/v20/steinhart.htm
6    "Rationality: A-Z - LessWrong 2.0." https://www.lesswrong.com/rationality
7    "Transhumanism | Slate Star Codex." https://slatestarcodex.com/tag/transhumanism/
8    "Ray Kurzweil | Singularity." https://www.kurzweilai.net/futurism-ray-kurzweil
9    "Fanged Noumena, Nick Land." http://azinelibrary.org/trash/fangednoumena.pdf
10   "Nick Bostrom | Superintelligence." https://nickbostrom.com/
11   "Total Consumer Power Consumption Forecast - ResearchGate." https://www.researchgate.net/publication/320225452_Total_Consumer_Power_Consumption_Forecast
12   "Anatomy of an AI System." https://anatomyof.ai/
13   "Keller Easterling - Extrastatecraft: The Power of Infrastructure Space." http://kellereasterling.com/books/extrastatecraft-the-power-of-infrastructure-space
14   "How should we evaluate progress in AI?" https://meaningness.com/metablog/artificial-intelligence-progress
15   "Artificial intelligence pioneer says we need to start over - Axios." https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over
16   "Why AI Is Not A Science - Stanford University." https://web.stanford.edu/group/SHR/4-2/text/matteuzzi.html
17   "Troubling Trends in Machine Learning Scholarship." http://approximatelycorrect.com/2018/07/10/troubling-trends-in-machine-learning-scholarship/
18   "Genetic Algorithm [Deep Learning Patterns]." http://www.deeplearningpatterns.com/doku.php?id=genetic_algorithm
19   "Logical Fallacies » Cum Hoc Fallacy." https://www.logicalfallacies.info/presumption/cum-hoc/
20   "Causal Inference and Statistical Fallacies." http://www.math.chalmers.se/~wermuth/pdfs/96-05/CoxWer01_Causal_inference_and_statistical.pdf
21   "Issues with data and analyses: Errors, underlying themes, and ..." https://www.pnas.org/content/115/11/2563
22   "Jenna Sutela - Orgs: From Slime Mold to Silicon Valley - Printed Matter." https://www.printedmatter.org/catalog/49553
23   "Francis Galton: Pioneer of Heredity and Biometry | Johns Hopkins University Press." https://jhupbooks.press.jhu.edu/title/francis-galton
23   "Neural Network Learns to Identify Criminals by Their Faces - MIT." https://www.technologyreview.com/s/602955/neural-network-learns-to-identify-criminals-by-their-faces/
24   "OSF | Deep neural networks are more ...." https://osf.io/zn79k/
25   "Researchers Want to Link Your Genes and Income. Should They?" https://www.wired.com/story/researchers-want-to-link-your-genes-and-incomeshould-they/
26   "FACEPTION | Facial Personality Analytics." https://www.faception.com/
27   "Automated Inference on Criminality using Face Images - Brown CS." http://cs.brown.edu/courses/cs143/2017_Spring/lectures_Spring2017/27_Spring2017_SocialGoodandDatasetBias.pdf
28   "Sociogenomics is opening a new door to eugenics - MIT Technology." https://www.technologyreview.com/s/612275/sociogenomics-is-opening-a-new-door-to-eugenics
29   "Machine Bias - ProPublica." https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
30   "AI is sending people to jail and getting it wrong - MIT Technology." https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/
31   "ICE Extreme Vetting Initiative: A Resource Page | Brennan Center for Law." https://www.brennancenter.org/analysis/ice-extreme-vetting-initiative-resource-page
32   "Palantir Contract Dispute Exposes NYPD's Lack of Transparency." https://www.brennancenter.org/blog/palantir-contract-dispute-exposes-nypd%E2%80%99s-lack-transparency
33   "Palantir wins competition to build Army intelligence system." https://www.washingtonpost.com/world/national-security/palantir-wins-competition-to-build-army-intelligence-system/2019/03/26/
34   "ZeroEyes AI Threat Detection - ZeroEyes." https://zeroeyes.com/
35   "An improved kernelized discriminative canonical correlation analysis - IEEE." https://ieeexplore.ieee.org/document/6359400
36   "Orlando Pulls the Plug on Its Amazon Facial Recognition Program." https://www.nytimes.com/2018/06/25/business/orlando-amazon-facial-recognition.html