1994 – The Computable and the Uncomputable: VLC Forum: Keynote Lecture by Alexander R. Galloway (2020)

I am very glad to be able to post something on Alexander R. Galloway here. He needs no introduction, I'm afraid, and I think he is unavoidable if one wants to dig a little deeper into the online-offline entanglements that affect more of us by the day. Alexander continues to be one of the most important theoreticians of the digital, having published since the 2000s several key books on Internet protocols, algorithmic culture, unconventional computing, digital humanities and posthumanities, network theory and gaming: Protocol: How Control Exists After Decentralization, Gaming: Essays on Algorithmic Culture, The Exploit: A Theory of Networks (with Eugene Thacker) and, most recently (2021), Uncomputable: Play and Politics in the Long Digital Age from Verso.

As a starter, here are some of his free articles:

Warcraft and Utopia

Mathification

Radical Illusion (A Game Against)

This keynote lecture brings together research and books by other authors, be they cyber-feminist or digital-culture histories: a different history of computing, binding carefully and imaginatively together the old and new material practices that subtend computation (by women artists, say, or adopted from specific work done by indigenous people) as a common weave of an 'uncomputable' computer history.

In a sense he is just tying together several knots and threads, adding more to a wider web of inclusive and non-reductionist histories of (unconventional) computing. There is an incredibly visible and tangible build-up that made computing happen, starting from down below, one that allows us to better feel and understand that computing could not exist without these processual practices: an instantiated (and mostly underrated and unwaged) work specific to all sorts of weaving processes, from childhood games such as Cat's Cradle (Donna Haraway) to DNA molecular folding. Textile art and textile production, long considered 'minor' and 'decorative' arts (even inside male preserves such as the Bauhaus), are taken as better examples for parsing both industrial history and mathification than a visit to your local computer or technical museum. Here are a few rapid notes on it:

- on the way it discusses the work of early industrial weavers: the workers' own resistance and destruction of machines as a boycott against automation, alongside the 'intellectual' aesthetic critique (observations by Lord Byron) of the pieces made in the new factories as opposed to the previous handicraft work. The new, lower-quality work coming out of these early factories was disparaged and called, in the day's cant, 'spider work'.

- early employers preferring married women as workers, since they would be more docile and more ready to give everything in order to provide for their families (a quote from Marx who in turn quotes an early social reformer).

- the way Ada Lovelace is largely considered the first programmer, while at the same time (as Sadie Plant pointed out in 1997) the context of her groundbreaking mathematical work is as telling as the work itself (if not more so, for non-mathematical minds like mine): it is an addendum to a piece of proto-vapourware, an annex written by a woman to a translated review from Italian about the first "computer", a machine conceived by Charles Babbage (the Analytical Engine, in his words) that did not yet exist!

- a very nice example of the fraying of margins, of falling apart. This is no smooth, continuous, unaltered history; it frays the same way carpets or woven products receive the most intense friction and use at their margins. There is, I think, a long-standing interest of A. R. Galloway in the role of error and of the glitch in programming, and in the way all these proto-computers were incredibly noisy, clunky and prone to failure, having to be constantly rebooted and debugged from early on.

- the way spiders perceive any improvement to their work as something unwanted (as in the work of the artist Nina Katchadourian, who mended damaged spider webs), an intervention that actually made them come and extract the 'repaired part' and continue with their own work.

“Narrating a series of lesser-known historical episodes, Alexander R. Galloway’s keynote lecture addresses the computable and uncomputable. These stories are drawn from the archives of computation and digital media, broadly conceived. The goal is to show how computation emerges or fails to emerge, how the digital thrives but also atrophies, how networks interconnect while also fraying and falling apart. Such alternations–something done something undone, something computed, something uncomputed–constitute the real history of digital machines, from cybernetics and networks to cellular automata and beyond. And while computers have colonized the globe in recent years they also excel at various practices of exclusion. Since the 1970s “protocol” technologies have played a key role in this transformation. Galloway concludes with an interrogation of the concept of protocol in 2020, revisiting his groundbreaking 2004 book Protocol: How Control Exists after Decentralization.”(VLC Forum 2020 description)

1852 – Coded Bias (documentary by Shalini Kantayya 2020)

official

When MIT Media Lab researcher Joy Buolamwini discovers that facial recognition does not see dark-skinned faces accurately, she embarks on a journey to push for the first-ever U.S. legislation against bias in algorithms that impact us all.

This is probably one of the most important documentaries to address issues that are no longer strictly the domain of SF. Coded Bias sits squarely within the bounds of any socially inflected SF world you can think of. Maybe this used to be just a figment of a dystopian, Cold War-tinged imagination, but now it is very much part of ours. It made me mentally revisit that primordial Silicon Valley promo: the '1984' ad for the Apple Macintosh, first screened in December 1983. It feels puzzling how this new televised technological muscle was part of a much wider and concerted Reaganite response to the (still) socialist East: 'Free World' computing squaring off against the eponymous Orwellian 1984 villain and a drab, grey, docile citizenry of the standardized, monolithic solid state, the ideological 'other' where a repressive and monstrous surveillance apparatus (be it Securitate or Stasi) enforced obedience and 'rightminding'. Only that, in retrospect, the newly competitive Silicon Valley product was a launch pad for a much wider privacy dragnet, one of much more insidious scope and certainly fancier looks and design: buying into a system of personal, automated and generalized consumer surveillance that also brought the pretense of neutral, unbiased coding.

The Coded Bias documentary is the strongest advocacy of algorithmic justice I have seen, watched or heard of: a critical introduction to current algo-capitalistic trends as well as to some of the ways needed to counteract AI-supported disparities and disenfranchisement. It is no mystery that you actually need people from across the board, including industry people (call them what you want: ex-quants, former flash-trading brokers, tech renegades, whistle-blowers, technological deserters, industry watchdogs, etc.). Yes, not only EFF members, STEM types, geeks and blerds, but also people from the social housing blocks, the hood, the street-corner youngsters and those with a migrant background: those who are the primary targets and have already been mis-measured, data-stripped and data-mined, and whose bodies and faces are literally the training grounds of computational modernity. Most of them are the unwilling informants and unpaid trainers of the emerging tech deployments that undergird surveillance capitalism.

One of the most important takes from this documentary, for me, was the counter-intuitive demonstration that goes against the old cyberpunk saying (paraphrasing: 'the future is already here, it is just unequally distributed'). In the 21st century we learn time and again that the 1%, or the 10%, or the rich, powerful and wealthy are not the future's bleeding edge, since they have mostly lived lives of unfettered privacy and no data retention. They are not a tested minority; they are clearly not the ones who get first, unwanted access beforehand, and they do not suffer the effects of those things that will later get distributed on a vast scale. In fact (as one of the participants in Coded Bias points out), the post-apocalyptic poor, the unprotected, those with previous histories of discrimination, enslavement, incarceration, abusive family backgrounds, profiling, etc., those already under some state of surveillance, registration and control (ID-checked mostly insofar as they are taken to constitute some form of risk), are the ones who suffer the brunt of these new technologies.

They are the unglamorized testers of unequal futures, not the privileged rich beta testers who mostly seem to opt out of their own companies' technological wonders. Accordingly, technological transformation is so important that it should not be defined just in terms of access, or left at the whim of company board members, Big Tech, innovation hubs or 'smart' city planners and cheerleaders. It is not just a question of 'users', since nowadays it is about the 'used' more than the users. It is, without nostalgia or pre-technological naivety in tow, that in spite of these tremendous and complex planetary changes, legislation and lobbying for digital rights and accountability seem to lag behind, since both public attention and consciousness get bypassed. Direct oversight and regulation, or consciousness itself, can seem so trivial, and yet they are constantly remade into thresholds to be bypassed by free markets and mantras hailing 'disruptive' transgression. Nonetheless, there is this incredible alliance, and (as seen below) a lot of initiatives have sprung up that espouse not just a neo-Luddite conviction but one of tech-savviness, informed by the above 'renegades' and industry insiders and/or burnouts, by previous historical black liberation examples, and by the new empowering SF alternate histories (I see some clear signs of Wakanda there) having been written (thinking of Rivers Solomon, Nalo Hopkinson, Nisi Shawl and others here) or waiting to be written, in collaboration with automated text generators or not.

There are emerging calls, from both government and popular demand, to at least be able to opt out of these technologies in the US and EU (face recognition being just the most obvious case), although I am not sure about the vast majority of the world (which is clearly not in the Global North), or about the accelerating use and deployment of drone wars and DARPA abroad in the wake of the protracted but inevitable US retreat from Afghanistan. There is of course the possibility to learn how optical governance works, or is put to use and abused, in other parts of the world, since the West does not hold a monopoly over AI. China in particular is an interesting divergence, since machine vision has been widely rolled out by the CCP via its social credit score, as well as being repurposed from below during the pandemic response.

SF has historically been very wary of attempts to modulate or influence behavior, from behaviourism to the tuning or pegging of controls or strong emotional responses toward a common good (just think of the swath of movies from Equilibrium 2002 to Brave New World 2020 or the new Voyagers 2021). 'Brainwashed', 'the Manchurian Candidate' etc. are just a few of the inherited standard fear responses churned out by Cold War warriors, strategists, Pentagon brass and the run-of-the-mill Hollywood movie output whenever they tried to depict or describe actual, imagined or suspected ideological traitors and US army deserters. 'Brainwashing' especially was made into a sort of explain-all, covering a whole range of 'enemy' (past and present) responses, as the only possible logical explanation for the divergent behavior of former US troops (many of them black) who decided to opt out of the racist US capitalist system after living as POWs during the Korean War. When former army personnel decided to question, defect and live outside their bounds, they must have been 'brainwashed', especially if they happened to choose Mao's China for a while (a forgotten history detailed with tremendous wit in Julia Lovell's fascinating book Maoism: A Global History, 2019) instead of racism back home or in the army. A change of mind and qualms about incoming orders also equal treason, as we know from the cases of Chelsea Elizabeth Manning or Edward Snowden.

In a rare and courageous move, the White Space (Machine/Ancestral Night duology) space-opera universe of Elizabeth Bear avoids the usual 'brainwashing' suspicion of previous SF dystopian conventions by offering exactly what so much canonic SF eschews. It opens the possibility of a wide, non-coercive future galactic union where every human (although the union is made up of many nonhuman, sentient syster species) has the option to decide how much to alter, allow, dial down or fine-tune (via what amounts to a certain AI-assisted 'mindfulness') a central nervous system evolved to automatize responses to emotional distress. Changing developmental patterns and so on, including universal, non-coercive(!) access to what amounts to puberty blockers (called "bumping" in the novel), is not automatically a bad thing or a monstrous, unnatural, hubristic act (although there are libertarian privateers who think so in that universe, as in ours)!

White Space opens up a way to modulate, discuss and deal otherwise with trauma, isolation, addiction, puberty, dysphoria, sex or gender assignment at birth, etc., bypassing automatic, hormonal or non-cognitive 'habitual' responses and imaginatively keeping violent behaviors to a minimum. Willingly curbing so much of what counts as anti-social behavior was apparently frowned upon even in that far future, but there is room for so much more. It is of course always important to pay attention to who decides what, and when one misbehaves, or when disobedience becomes accepted and when not. There is a thin line, and there are those who want to skip ahead and actively propagate opting out of the opting-out. Body non-modification extremists surely exist in that future, deeming it sacrilegious to intervene or dabble with 'natural' responses while acting (on the whole) quite egoistically and self-centered. In this galactic union, new forms of piratical freeports keep offshoring resources and escaping the central taxing authority, harboring a non-mindfulness terrorism that arises in response to the largely beneficial mental and emotional tuning widely available. Even if coding bias into hardware based on white wetware bias is the main focus of Coded Bias, the film ultimately supports a malleable wetware-hardware continuum that allows for modulation and even requires it.

Black-boxing the operative logics of machine vision, or acknowledging that machinic cognition or decisionality is essentially collaborative, not isolated nor impervious to questioning, means we cannot just settle for the human/nonhuman, creator/created or nonhuman/posthuman binaries. That would feel very wrong, since it closes down our own sensitivity either to the same old repackaged as new, or to a newer, wider and largely collaborative nonhuman 'worldly sensibility' that always risks being tipped toward whiteness and reactive toxicity if left unattended. Microsoft's Tay chatbot, which in 2016 developed a proclivity for hate speech within 24 hours, is a case in point. It is not just the simple, powerful logic of trash in, trash out, but of how easily this tipping point might be reached today under trolling and targeted attacks. At the same time, one should never lose sight of other machinic bridges and of conceptually as well as emotionally more progressive examples that developed as part of writing practices and modernist techniques, such as automatic writing or the automated love-letter generator written by Christopher Strachey on the Manchester computer (often credited to his colleague Alan Turing).
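To make that tipping point concrete, here is a minimal sketch of an online learner that keeps updating from user messages, the general failure mode attributed to Tay. It is a toy stand-in, not Microsoft's actual (unpublished) architecture, and all the phrases and numbers are invented; the point is only that a targeted batch of poisoned inputs, barely larger than the organic signal for one word, flips the learned association:

```python
from collections import Counter

class OnlineSentimentLearner:
    """Toy stand-in for a chatbot that keeps learning from user messages."""
    def __init__(self):
        self.pos = Counter()  # word counts seen in "friendly" messages
        self.neg = Counter()  # word counts seen in "hostile" messages

    def learn(self, message, label):
        counter = self.pos if label == "friendly" else self.neg
        counter.update(message.lower().split())

    def score(self, word):
        # > 0 means the word was learned as friendly, < 0 as hostile.
        p, n = self.pos[word], self.neg[word]
        return (p - n) / (p + n + 1)

bot = OnlineSentimentLearner()

# Organic traffic: the word "humans" shows up in friendly contexts.
for _ in range(50):
    bot.learn("humans are cool", "friendly")
print(bot.score("humans"))   # ~0.98: strongly positive

# Coordinated trolling: a poisoned batch barely larger than the organic one.
for _ in range(80):
    bot.learn("humans are terrible", "hostile")
print(bot.score("humans"))   # ~-0.23: the association has tipped negative
```

No exotic exploit is needed: because the learner trusts whatever traffic arrives, a coordinated minority of inputs aimed at a single word is enough to tip it.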

One cannot unbox anything in a straightforward way: Shalini Kantayya's diverse cast of protagonists and invited guests make clear that not even the programmers or makers understand how the AI does what it does. Nor can this be remedied with just more data, simply more information. Even without fully understanding those internal processes, we can still feel the results, see the hard facts and the harsh reality, whenever these AIs tend to ignore black and brown or female faces. AIs do need some deep unlearning in order to 're-educate' (not such a bad word) themselves, to make sure they will not act out the mathematical sum of the worst of the worst, selecting by default for the chosen few while deselecting everybody else.
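For readers who want the mechanics behind that claim: disparities like the ones Buolamwini uncovered are surfaced by disaggregating a model's error rate per intersectional subgroup instead of reporting one aggregate accuracy figure, roughly in the spirit of the Gender Shades audit. A minimal sketch, with invented toy records standing in for a real balanced benchmark:

```python
from collections import defaultdict

# Each record: (skin_type, gender, classifier_was_correct). Toy data only;
# a real audit scores hundreds of labeled faces per subgroup.
results = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", True),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

totals = defaultdict(lambda: [0, 0])  # (skin, gender) -> [correct, total]
for skin, gender, correct in results:
    totals[(skin, gender)][0] += int(correct)
    totals[(skin, gender)][1] += 1

# The aggregate score hides the gap that the per-subgroup breakdown exposes.
overall = sum(int(c) for _, _, c in results) / len(results)
print(f"overall accuracy: {overall:.0%}")
for (skin, gender), (correct, total) in sorted(totals.items()):
    print(f"{skin:7} {gender:6} accuracy: {correct / total:.0%}")
```

On this toy data the single headline number ("62% accurate") says nothing, while the breakdown shows exactly who the model fails: the darker-female subgroup.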

Pushing the logic of this documentary, it is time to find out more about how decisions, 'chance' and contingency may still be directed so as to redistribute luck in a more equal way within an increasingly unequal world economy. The economy is itself futurism served frozen and pre-cooked: different debt-ridden lives and widely different futures are being handed down, bent along pre-selected trajectories, trajectories that are being doctored (who cares if knowingly or unknowingly; intentionality is always ulterior anyway) and that actively make the lives of a majority impossible. A 'pan-selectivity' needs to be developed that refuses to be easily 'gamed' and influenced only by the influential few armed with predictive algorithms, at the tip of a capitalistic drive that actualizes every potential out there, no matter how horrific and brutal, as long as it pays dividends.

Like probably any ideological formation, bias is not just invisible; it may be impossible to eliminate completely, but this should not stop us from trying to change it and actively imagining what is to be done. Bias seems to work and act by being unspecified, invisibilized, left out of the loop. Again, like ideology, it is the missing mass that bends everything according to its set of preemptive expectations, almost like the constant enactment of a single, unilateral inner experience making itself ubiquitous. Bias is not simply a whimsical conceit, and it is not just a pre-programmed part of the system, but something that gets enforced, hard-coded and programmed at every level of future decision making, at every threshold of resistance.

Bias is made seemingly non-existent each time output and prediction are put at a premium. Even if blaring, it feels like an itch you cannot scratch, because it starts to seem so intrinsic and para-systemic. Technology or AI is neither neutral nor inherently bad, it has often been said; it gets as bad, worse or as good as the whole context and environment allow it, as the drift promoting it keeps pushing it, or for as long as the coded ideals and values are what they are. Remember: even if everything is being turned into driverless-everything, it is no less a driven market economy.

We cannot see it and measure it because its effects fall on those who are made to matter less and less, on those 'others' whom even states, the law or the constitution no longer seem to 'notice' or care for. It is easier to wave bias aside, to bring undigested misconstructions on board and heap them on top of those who have been dealt the losing lots and the bad seats (if any), even if those stories give you bad dreams, goosebumps, depression or a severe need to disconnect from another's catastrophic or already dystopian reality. So this necessitates different, collective and directed research approaches, and a coordinated effort to open up so many currently black-boxed decisional processes.

There is also a different avenue (not tackled in Coded Bias): a sort of related QWERTY bias, of path dependencies from how we have historically and incrementally built conventional (man-made) computational infrastructures. This 'convention' not only stands in the way of more evolutionary, developmentally inclusive, unconventional approaches to computation and computing, but might leave out or blind us to other modes of problem solving that already exist or have evolved (such as the maze-solving slime molds investigated by Andrew Adamatzky). While most computation and research nowadays follows old and certainly well-tested architectures, it only builds upon existing and specific constraints, all too human ones we might add, and moreover upon a very restrictive and biased account of what counts as 'human' (amply documented throughout Coded Bias), one that both engineering and coding seem to take for granted. 'Worth', in a constantly devalorizing environment, becomes constantly threatened; at the same time, we should welcome the erosion of old, gendered, biased and individualistic notions of singular genius (unmoved mover?) and of farcical 'great men' through our plural AI-human interactions.

Coded Bias gets the highest marks for advocating explainable-AI (XAI) research, the attempt to build an artificial intelligence whose decisions can be explained, a research program that should stay aware of 'artificial unintelligence' (Meredith Broussard), as well as for demanding that humans hone their response-ability (Haraway), allowing for aesthetic, epistemological and ethical responsiveness whenever the 21st-century technological upgrades and optimizations start pouring in.
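One concrete flavor of that explainability work, offered here as an illustration rather than anything shown in the film: permutation importance, which probes a black-box model by shuffling one input feature at a time and measuring how much the accuracy drops. Everything below (the model, the data, the feature standing in for a zip code) is a hypothetical toy:

```python
import random

def permutation_importance(model, X, y, n_features):
    """Accuracy drop when each feature is shuffled: a basic XAI probe."""
    def accuracy(rows):
        return sum(model(x) == label for x, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for f in range(n_features):
        column = [x[f] for x in X]
        random.shuffle(column)  # break this feature's link to the labels
        X_perm = [x[:f] + (v,) + x[f + 1:] for x, v in zip(X, column)]
        importances.append(baseline - accuracy(X_perm))
    return importances  # a big drop = the model leans hard on that feature

# Toy usage: a "model" that secretly keys on feature 0 (say, a zip code).
model = lambda x: x[0] > 0.5
X = [(random.random(), random.random()) for _ in range(1000)]
y = [model(x) for x in X]
print(permutation_importance(model, X, y, n_features=2))
# -> roughly [0.5, 0.0]: feature 0 carries all the predictive weight
```

If a deployed scoring system showed this profile, a zip-code-like feature doing all the work would be precisely the kind of proxy discrimination the documentary worries about.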

Algorithmic Justice League (AJL)

AJL TW

AI fairness 360

Big Brother Watch UK

Algorithmic Equity Toolkit

Recidivism Risk Assessment

Association for Computing Machinery code of ethics

Silicon Valley Rising

Critical Race and Digital Studies Syllabus

No Biometric Barriers Housing Act of 2019

A Toolkit on Organizing Your Campus against ICE

Stopping Big Data Plan to Flag At-Risk Students

Responsible Computing Science Challenge

Hacking Discrimination hackathon

Protest Surveillance: Protect Yourself toolkit from Surveillance Technology Oversight Project (S.T.O.P.) for safety recommendations

AI Now Institute at New York University is a research center dedicated to understanding the social implications of AI.

Fight for the Future is a group of artists, activists, engineers, and technologists advocating for the use of technology as a liberating force.

Our Data Bodies is a human rights and data justice organization.

Data & Society studies the social implications of data-centric technologies & automation.


You do not need to be a tech expert to advocate for algorithmic justice. These basic terms are a good foundation to inform your advocacy. For a more detailed breakdown of how facial recognition works, see the guide titled Facial Recognition Technologies: A Primer from the AJL. For more on surveillance, see the Community Control Over Police Surveillance: Technology 101 guide from the ACLU.

GLOSSARY OF TERMS (extracted from Coded Bias Activist Toolkit)

Algorithm. A set of rules used to perform a task.

Algorithmic justice. Exposing the bias and harms of technical systems in order to safeguard the most marginalized and develop equitable, accountable, and just artificial intelligence.

Benchmark. A data set used to measure the accuracy of an algorithm before it is released.

Bias. Implicit or explicit prejudices in favor of or against a person or groups of people.

Artificial intelligence (AI). The quest to give computers the ability to perform tasks that have, in the past, required human intelligence, like decision making, visual perception, speech recognition, language translation, and more.

Big data. The mass collection of information about individuals who use personal technology, such as smartphones.

Biometric technology. Uses automated processes to recognize an individual through unique physical characteristics or behaviors.

Black box. A system that can be viewed only through its inputs and outputs, not its internal process.

CCTV. Closed-circuit television cameras used by institutions to record activity on and around their premises for security purposes.

Civil rights. A broad set of protections designed to prevent unfair treatment or discrimination in areas such as education, employment, housing, and more.

Code. The technical language used to write algorithms and other computer programs.

Data rights. The human right to privacy, confidentiality, and ethical use of personal information collected by governments or corporations through technology.

Data set. The collection of data used to train an algorithm to make predictions.

Due process. The right not to be deprived of life, liberty, or property without proper legal proceedings, protected by the Fifth and Fourteenth Amendments to the US Constitution.

General Data Protection Regulation (GDPR). A data rights law in the European Union that requires technology users' consent to how their data is collected and prohibits the sale of personal data.

Facial recognition technologies. A catchall phrase for a set of technologies that process imaging data to perform a range of tasks on human faces, including detecting a face, identifying a unique individual, and assessing demographic attributes like age and gender.

Machine learning. An approach to AI that gives systems the ability to learn patterns from data without being explicitly programmed.

Racism. The systematic discrimination of people of color based on their social classification of race, which disproportionately disadvantages Black and Indigenous people of color.

Recidivism risk assessment. An automated decision-making system used in sentencing and probation to predict an individual's risk of future criminal behavior based on a series of data inputs, such as zip code and past offenses.

Sexism. The systematic discrimination of women and girls based on their social categorization of sex, which intersects with racism for women and girls of color.

Social credit score. An AI system designed by the Communist Party of China that tracks and analyzes an individual's data to assess their trustworthiness.

Surveillance. The invasive act of monitoring a population to influence its behavior, done by a government for law-and-order purposes or by corporations for commercial interests.

Value-added assessments. Algorithms used most commonly to evaluate teachers by measuring student performance data.

Voice recognition. An application of AI technology that interprets and carries out spoken commands and/or aims to identify an individual based on their speech patterns.

IMDb

1851 – Books mentioned in the Coded Bias documentary

Weapons of Math Destruction by Cathy O’Neil

We live in the age of the algorithm. Increasingly, the decisions that affect our lives–where we go to school, whether we can get a job or a loan, how much we pay for health insurance–are being made not by humans, but by machines. In theory, this should lead to greater fairness: Everyone is judged according to the same rules.
But as mathematician and data scientist Cathy O’Neil reveals, the mathematical models being used today are unregulated and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination–propping up the lucky, punishing the downtrodden, and undermining our democracy in the process.

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff

The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior.

In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth.

Zuboff vividly brings to life the consequences as surveillance capitalism advances from Silicon Valley into every economic sector. Vast wealth and power are accumulated in ominous new “behavioral futures markets,” where predictions about our behavior are bought and sold, and the production of goods and services is subordinated to a new “means of behavioral modification.”

The threat has shifted from a totalitarian Big Brother state to a ubiquitous digital architecture: a “Big Other” operating in the interests of surveillance capital. Here is the crucible of an unprecedented form of power marked by extreme concentrations of knowledge and free from democratic oversight. Zuboff’s comprehensive and moving analysis lays bare the threats to twenty-first century society: a controlled “hive” of total connection that seduces with promises of total certainty for maximum profit–at the expense of democracy, freedom, and our human future.

With little resistance from law or society, surveillance capitalism is on the verge of dominating the social order and shaping the digital future–if we let it.

Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard

A guide to understanding the inner workings and outer limits of technology and why we should never assume that computers always get it right.

In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can (and should) do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology—and issues a warning that we should never assume that computers always get things right.

Making a case against technochauvinism—the belief that technology is always the solution—Broussard argues that it’s just not true that social problems would inevitably retreat before a digitally enabled Utopia. To prove her point, she undertakes a series of adventures in computer programming. She goes for an alarming ride in a driverless car, concluding “the cyborg future is not coming any time soon”; uses artificial intelligence to investigate why students can’t pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software. If we understand the limits of what we can do with technology, Broussard tells us, we can make better choices about what we should do with it to make the world better for everyone.

