3.2 Retuning, Retraining, and Recalibrating



Historically, imperial econometrics and religious edicts offer strong parallels to some of the perils of our contemporary condition of data-driven probabilistic normativity, but the emerging set of social conditions enacted through this hegemonic imposition of standards is unprecedented. Empirical analysis of complex systems like big data, machine learning, and financial markets can be reduced to two parts: the model and the data. The dialogue between model and data is one of circular iteration: data is collected, this data updates the model, and more data is then collected to improve and refactor the model. This cycle continues ad infinitum, using probability to statistically truncate erroneous data from the model and arrive at a high likelihood of predicting the outcome the model is designed to solve for. In the parlance of a data scientist this cycle is called optimization; for a musician the process is called tuning. Optimizing a deep learning pipeline has strong similarities to tuning an instrument. For example, a musician tunes to a fixed pitch (often 440 Hz, concert A) and a data scientist tunes toward 99.99...% accuracy. Sometimes during the process of tuning the musician’s pitch or the data scientist’s model accuracy begins to deviate from the reference, but musicians with good ears and data scientists with the right techniques will observe this overcorrection and update their errors to arrive closer to the desired calibration point. With enough tuning, both musicians and data scientists generally end up with a harmonic unison or a low fault tolerance.
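
To make the analogy concrete, here is a minimal sketch of such a feedback loop in Python; the reference pitch, correction gain, and tolerance are illustrative assumptions rather than values drawn from any real pipeline:

TARGET_HZ = 440.0   # concert A, the fixed calibration point
TOLERANCE = 0.01    # "low fault tolerance": stop when close enough
GAIN = 0.3          # size of each correction; too large and the loop overcorrects

pitch = 452.0       # an initial, out-of-tune estimate
iterations = 0
while abs(pitch - TARGET_HZ) > TOLERANCE:
    error = pitch - TARGET_HZ      # measure the deviation from the reference
    pitch -= GAIN * error          # apply a proportional correction
    iterations += 1

print(f"converged to {pitch:.3f} Hz after {iterations} corrections")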

Unlike the measurement of acoustic phenomena, which has been subjected to centuries of scientific rigor, the processes of knowledge formation in deep learning are relatively new and rest on many assumptions that aren’t yet properly vetted as credible science. Data scientists often make use of inherited datasets that have been subjected to processes outside of their control. For example, the cleaning and meta-tagging of data for analytics is often crowdsourced to platform labor markets compensated with micropayments67, with no scientific oversight into the validity of the result. When these types of prediction methodologies are used in applications that require the highest standard of credibility, we cannot afford even a modicum of dissonance, to borrow a musical term.
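
As a purely hypothetical illustration of why this lack of oversight matters, the short simulation below assumes that a small fraction of crowdsourced annotations are simply wrong and shows how that alone caps the accuracy any downstream model can honestly claim; the 3% error rate is an assumed figure, not a measured one:

import random

random.seed(0)
true_labels = [random.randint(0, 1) for _ in range(10_000)]

NOISE_RATE = 0.03  # assumed share of crowdsourced annotations that are wrong
observed_labels = [y if random.random() > NOISE_RATE else 1 - y for y in true_labels]

# Even a model that recovers every true label appears to err wherever the
# inherited annotation was wrong, so measured accuracy is capped from the start.
apparent_accuracy = sum(t == o for t, o in zip(true_labels, observed_labels)) / len(true_labels)
print(f"accuracy ceiling against noisy labels: {apparent_accuracy:.3f}")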

Statistics is a well-defined field, and these insufficiencies often aren’t occurring at that phase of knowledge production in deep learning. The endemic biases are usually produced at the methodological level: where, how, and why the data was sampled and labeled. This process, when unexamined, often politicizes and instrumentalizes science to justify industry interests. Prior to big data, we saw statistical analytics leveraged by big pharmaceutical, big oil, and big tobacco companies to “prove” that their products weren’t harmful, because they controlled the data aggregation pipeline and could produce results that substantiated their interests. When will our regulators place big data and deep learning in this category?

My optimistic take is that these processes are tuning to the wrong pitch, or optimizing for the wrong things. As the German physicist and acoustician Ernst Chladni demonstrated68: resonance begets form.

Information is immaterial, mathematical, and metaphysical, and thus cannot itself resonate: it is not explicitly material but is embodied in matter. However, when data is decoded by our devices and formatted for perception via our sense organs, it is subjected to physical processes, translated from its semiotic substrate and encoded into synaptic modulations. Our neural pathways are immediately modulated and refactored, and our brains become proxy nodes in broader networks. After updating and refactoring our behavioral models we transmit these signals of ourselves into the resonant econo-networks shaping our culture and cities. In Norbert Wiener’s 1954 opus on cybernetics, The Human Use of Human Beings: Cybernetics and Society69, he ominously foretold a world where physics and information networks cross-modulate each other through biological human learning systems. This is our world. Building public literacy about, and oversight over, these tuning parameters is imperative.

When extrapolating the subjects of harmony and tuning into the socio-political dimension of protocol design and rule of law, it is possible for these concepts to take on hegemonic dynamics. Canonizing some of the aforementioned machine learning techniques into a political process is certainly not without equally extreme existential risks. Historically, migrations of deep learning from private enterprise to nationalized models have been adopted by high-modernist authoritarian regimes. National projects that attempt to treat governance as a tuning system typically leverage feedback to supplant, rather than augment, participatory processes. Xi Jinping has written about leveraging cybernetics as a modality for centralizing power in China.70

Albeit unrealized, there have been anti-authoritarian national projects in Chile and elsewhere for tuning real-time democracy. Project CyberSyn71, a formal initiative by the Chilean government to implement a democratic viable system model, approached the tuning of politics and economics through computer networks and consensual, bottom-up polling in places of labor, but the government was overthrown in a CIA-backed military coup d’état72 before the project could be fully implemented. Estonia is currently working on algorithmic methodologies for democratic political mediation73 in its national model, although, given their nascence, their credibility remains to be seen. Creating technical specifications through democratic processes of legislative oversight and subjecting enterprise to due process may end up being a more efficacious role for our regulatory systems than one of direct cultural intervention.

However, the very notion of jurisprudence is being refigured and circumvented in the post-cybernetic, or algorithmic intelligence, age. When the contingency of things becomes absolute, there is one problem: namely, that there is no law; the law becomes fictional. In a data regime in which the law is inductive and the empirical mode is dominant, the process begins with evidence in order to create general law. This legal process is about gathering evidence but forgoes formal integrative reasoning.

Luciana Parisi has noted that “given the current legal crisis, with cybernetics and intelligent automation replacing the law, which is now abstracted from its subject, the question arises as to whether it is possible not to simply relinquish or abandon the law within this dominant realm of data recombination, where there is no law unless it is governed by case findings and the gathering of evidence, but instead to theorize a break from the code of the law by making a reconstruction of it.”74 The crisis of induction she is foregrounding is not just political but epistemological: complex feedback systems, as previously outlined, cannot recall their state or justify their reasoning because they are constantly ‘learning’.

Control can be understood in two primary ways: first as domination, and second as methodologies of self-regulation through feedback. If we are to move away from despotic systems but continue to operate at scale, our systems will still require mechanisms for adjusting flows of resources and information. If we avoid conflating self-regulatory control systems with systems of subordination, I believe it is possible to salvage the practice of political calibration, provided its (hyper)parameters become the locus of traditional legalist procedures.
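
To sketch what I mean by control as self-regulation, the toy loop below (a minimal sketch, with hypothetical names and values) adjusts a flow automatically while its setpoint, the (hyper)parameter, sits outside the loop, exactly where deliberative and legal procedures could act on it:

def regulate(flow: float, setpoint: float, gain: float = 0.5, steps: int = 50) -> float:
    """Nudge a resource or information flow toward an externally chosen setpoint."""
    for _ in range(steps):
        error = setpoint - flow    # feedback: measure the deviation
        flow += gain * error       # self-regulation: apply a proportional correction
    return flow

# The setpoint is not decided inside the controller; it is the kind of
# parameter that, per the argument above, could pass through legalist procedures.
legislated_setpoint = 100.0
print(regulate(flow=250.0, setpoint=legislated_setpoint))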

Retuning the cultural apparatus into a habitable form will certainly be an Atlantean task considering the adversities that imperil our species. However, if approached like parametric luthiery, we might be able to move away from our existing discordant, extractive modalities. In the next section I will outline some material forms these ideas could potentially assume.




67    "Using Amazon's Mechanical Turk for Data Projects - ProPublica" https://www.propublica.org/article/how-we-use-mechanical-turk-to-do-data-driven-reporting-and-how-you-can-too
68    "Chladni Plates | Harvard Natural Sciences Lecture Demonstrations." https://sciencedemonstrations.fas.harvard.edu/presentations/chladni-plates
69    "Greek musical writings | Cambridge University Press, 1989." https://www.worldcat.org/title/greek-musical-writings/oclc/10022960
70    "An Institutional Analysis of Xi Jinping's Centralization of Power" https://www.tandfonline.com/doi/abs/10.1080/10670564.2016.1245505?journalCode=cjcc20
71    "Big Data, Algorithmic Regulation, and Project CyberSyn" https://www.mdpi.com/2076-0760/7/4/65/htm
72   "Documenting U.S. Role in Democracy’s Fall and Dictator’s Rise in Chile - The New York Times" https://www.nytimes.com/2017/10/14/world/americas/chile-coup-cia-museum.html
73    "e-Governance — e-Estonia." https://e-estonia.com/solutions/e-governance/
74   "Sternberg Press - Perhaps It Is High Time for a Xeno-architecture to Match" http://www.sternberg-press.com/index.php?pageId=1853&l=en&bookId=747&sort=year