2174 – After Blue aka Dirty Paradise (2022)

spacetime coordinates: on a distant Earth-like planet called After Blue, in the far, far future

directed by Bertrand Mandico (famous for The Wild Boys, 2017).

Synopsis: “In a faraway future, on a wild and untamed female inhabited planet called After Blue, a lonely teenager named Roxy (Paula Luna) unknowingly releases a mystical, dangerous, and sensual assassin from her prison. Roxy and her mother Zora (Elina Löwensohn) are held accountable, banished from their community, and forced to track down the murderer named Kate Bush. Haunted by the spirits of her murdered friends, Roxy sets out on a long and strange journey across the supranatural territories of this filthy paradise. The newest vision from Bertrand Mandico (The Wild Boys) plays like a lesbian El Topo (in space!) with stunning 35mm in-camera practical effects, otherworldly set pieces, and a dazzling score by Pierre Desprats.”

If there is somebody who takes the tradition of Euro-sleaze or Euro-trash to new (exoplanetary) heights, then it must be Bertrand Mandico. What has usually been dismissed as a “low brow” form of European entertainment cinema under various labels – Giallo (visually immersive and excessive Italian mystery/horror), the Euro-spy or Eurocop movie – has also yielded a few Euro horror sci-fi gems (think Mario Bava) falling squarely under what Linda Williams has called ‘body genres’ (the weepies, pornographic and horror movies). Combining softcore porn camp iconography with day-glo FX and artificial lighting (black-light or fluorescent makeup) results in a completely neo-psychedelic dirty mystical experience, one with an abstruse plot that basically screams altered states with every shot. The whole movie seems a collection of obsessions (including a Kate Bush mania that seems unrelated to the recent Stranger Things revival) – and it looks and feels more sword & sorcery than science fiction. In my mind, it has more to do with a recent neo-Ralph Bakshiesque animation – one that I have been reviewing here. There is also something familiar from Andrzej Żuławski’s On the Silver Globe and its planetary crash-landing, semi-mystical science fiction future (revived by Raised by Wolves or Battlestar Galactica?) – in its insistence that the future might not be just about boys and their high-tech gimmicks but also about nakedness, visions, dirt, rags and bricoleurs.

It is really a plot I could not follow (maybe because I stuck with the French original – which left me completely spellbound and suspended in this excessive lava-lamp imagery) – and it was somehow hard to take in all the female witches, their obscure conflicts and the various unrealistic ham characters that seemed not only to pop out of nowhere but also to be explicitly & thankfully out of tune with today’s SF canon. It all gave me a slight feeling of nausea that seems (for me) to pervade this whole cinematic Mandico experience. I was somehow unable to watch the whole movie in one go and kept drifting in and out of it, almost like I just wanted to wake up and see if I could randomly piece it together or if everything would melt down into a shimmering haze.

Finally, the slightly familiar & utterly strange exoplanetary landscapes did not just feel made-up or artificial but also touchable – an expansion of inner worlds and possibly of LSD-drenched trips. They are not just green-screen backdrops added in post-production by anonymous studios (as in the majority of today’s lavish special effects movies) but hand-made spectacles of low-brow alien-made (?!) candor and uneasy (sleazy) embodiment. Maybe this is about glamour and ‘the auratic’ after-effects of celebs in the age of digital (post-mechanical) reproduction (pace W. Benjamin) – a bit like Blood Machines (another recent Frenchwave SF ‘sploitation, directed by Seth Ickerman, the duo of Raphaël Hernandez and Savitri Joly-Gonfard, which combines a lot of actual props with hand-made sets and FX), After Blue delights in simple light effects, low illusionism and practical effects reminiscent of Georges Méliès’s early SF (like the Jules Verne-inspired A Trip to the Moon, 1902). They are elaborate yet basic imaginary (more like dime show) sets that belie all the current high-budget CGI show-off mega-spectacles (think Marvel blockbusters). In its literary form, I find this sensibility akin to the one that seamlessly combines inner and outer landscapes in such recent SF works as Chris Beckett’s Beneath the World, A Sea.

1852 – Coded Bias (documentary by Shalini Kantayya 2020)

official

When MIT Media Lab researcher Joy Buolamwini discovers that facial recognition does not see dark-skinned faces accurately, she embarks on a journey to push for the first-ever U.S. legislation against bias in algorithms that impact us all.

This is probably one of the most important documentaries to address issues that are no longer strictly the domain of SF. Coded Bias sits squarely within the bounds of any socially inflected SF world you can think of. Maybe this used to be just a figment of dystopian, Cold War-tinged imagination, but now it is very much part of ours. It made me mentally revisit that primordial Silicon Valley promo – the 1984 ad for the Apple Macintosh, first aired in December 1983. It feels puzzling how this new televised technological muscle was part of a much wider and concerted Reaganite response to the (still) Socialist East. ‘Free World’ computing was easily turned to face off the eponymous Orwellian 1984 villain: a drab, grey, docile citizenry of the standardized monolithic solid state, the ideological ‘other’ where a repressive & monstrous surveillance apparatus (be it Securitate or Stasi) enforced obedience & ‘rightminding’. Only that, in retrospect, the newly competitive Silicon Valley product was the launch-pad for a much wider privacy dragnet – far more insidious in scope and certainly fancier in looks & design. Buying in meant buying into a system of personal, automated & generalized consumer surveillance that also brought along the pretense of neutral, unbiased coding.

The Coded Bias documentary is the strongest advocacy for algorithmic justice I have seen, watched or heard of. It is a critical introduction to current algo-capitalistic trends as well as to some of the ways needed to counteract AI-supported disparities & disenfranchisement. It is no mystery that you actually need people from across the board, including industry people (call them what you want: ex-quants, former flash-trading brokers, tech renegades, whistle-blowers, technological deserters, industry watchdogs, etc.). Yes, not only EFF members, STEMs, geeks and blerds, but also people from the social housing blocks, the hood, the street-corner youngsters and those with migrant backgrounds – those who are primary targets and have already been mis-measured, data-stripped and data-mined, and whose bodies and faces are literally the training grounds of computational modernity. Most of them are the unwilling informants and unpaid trainers of the emerging tech deployments that undergird surveillance capitalism.

One of the most important takeaways from this documentary – for me – was the counter-intuitive demonstration that goes against the old cyberpunk saying (paraphrasing: ‘the future is already here, it is just unequally distributed’). In the 21st century we learn time and time again that the 1%, or 10%, or the rich, powerful and wealthy are not at the future’s bleeding edge – since they have mostly lived lives of unfettered privacy and non-retention of their data. They are not a tested minority, clearly not the ones who get first unwanted access beforehand, and they do not suffer the effects of those things that will later get distributed on a vast scale. In fact (as one of the participants of Coded Bias points out), the post-apocalyptic poor, the unprotected, those with previous histories of discrimination, enslavement, incarceration, abusive family backgrounds, profiling etc. – those already under some state of surveillance, registration and control (ID-checked mostly in terms of constituting some form of risk) – are the ones who suffer the brunt of these new technologies.

They are the un-glamorized testers of unequal futures, and not the privileged rich beta testers who mostly seem to opt out of their own companies’ technological wonders. Accordingly, technological transformation is so important that it should not be defined just in terms of access – or left at the whim of company board members, Big Tech, innovation hubs or ‘smart’ city planners & cheerleaders. It is not just a question of ‘users’ – since nowadays it is about the ‘used’ more than the users. And yet – without nostalgia or pre-technological naivety in tow – in spite of these tremendous and complex planetary changes, legislation and lobbying for digital rights & accountability seem to lag behind, since both public attention and consciousness get bypassed. Direct oversight and regulation, or consciousness itself, seem so trivial, and yet they are constantly remade into thresholds to be bypassed by the free markets & mantras hailing ‘disruptive’ transgressions. Nonetheless, there is this incredible alliance, and (as seen below) a lot of initiatives have sprung up that espouse not just a neo-Luddite conviction but one of tech-savviness, informed by the above ‘renegades’ and industry insiders and/or burnouts, by previous historical black liberation examples, and by the new empowering SF alternate histories (I see some clear signs of Wakanda there) already written (thinking of Rivers Solomon, Nalo Hopkinson, Nisi Shawl & others here) or waiting to be written, in collaboration with automated text generators or not.

There are emerging calls, from both government and popular demand, to at least be able to opt out of these technologies in the US and the EU (face recognition being just the most obvious case), although I’m not sure about the vast majority of the world (which is clearly not the Global North), or about the accelerating use & deployment of drone wars & DARPA abroad in the wake of the protracted but inevitable US retreat from Afghanistan. There is of course the possibility to learn how optical governance works, or is put to use/abused, in other parts of the world, since the West does not hold the monopoly over AI. China in particular is an interesting divergence, since machine vision has been widely rolled out by the CCP via its social credit score, as well as being repurposed from below during the Pandemic response. SF has historically been very wary of attempts to modulate or influence behaviors – behaviourism, the tuning or pegging of controls or of strong emotional responses towards a common good (just think of a swath of movies from Equilibrium 2002 to Brave New World 2020 or the new Voyagers 2021). ‘Brainwashed’, The Manchurian Candidate etc. are just a few of the inherited standard fear responses churned out by Cold War warriors, strategists, Pentagon brass and the run-of-the-mill Hollywood movie output whenever they tried to depict or describe actual, imagined or suspected ideological traitors and US army deserters. ‘Brainwashing’ especially was made into a sort of explain-all – covering a whole range of ‘enemy’ (past & present) responses, as the only possible logical explanation for the divergent behavior of former US troops (many of them black) who decided to opt out of the racist US capitalist system after living as POWs during the Korean War.
When former army personnel decided to question, defect & live outside their bounds, they must have been ‘brainwashed’, especially if they happened to choose Mao’s China for a while (a forgotten history detailed with tremendous wit in Julia Lovell’s fascinating book Maoism: A Global History, 2019) instead of racism back home or in the army. A change of mind and qualms about incoming orders also spell treason, as we know from the cases of Chelsea Elizabeth Manning or Edward Snowden.

In a rare and courageous move, the White Space (Machine/Ancestral Night duology) space opera universe of Elizabeth Bear avoids the usual ‘brainwashing’ suspicion of previous SF dystopian conventions by offering exactly what so much canonic SF eschews. It opens the possibility of a wide, non-coercive future galactic union where every human (although the union is made up of many non-sapient but sentient syster species) has the option to decide how much they alter, allow, dial down or fine-tune (what amounts to a certain AI-assisted ‘mindfulness’) a central nervous system evolved to automatize responses to emotional distress. Changing developmental patterns etc., including universal, non-coercive(!) access (called “bumping” in the novel) to what amounts to puberty blockers, is not automatically a bad thing or a monstrous, unnatural, hubristic act (although there are libertarian privateers who think so in that universe, like in ours)!

White Space opens up a way to modulate, discuss and otherwise deal with trauma, isolation, addiction, puberty, dysphoria, sex or gender assignment at birth etc., bypassing automatic, hormonal or non-cognitive ‘habitual’ responses and imaginatively keeping violent behaviors to a minimum. Willingly curbing so much of what is anti-social behavior was apparently frowned upon even in that far future, but there is room for so much more. It is of course always important to pay attention to who decides what counts as misbehavior, and when disobedience becomes accepted & when not. There is a thin line, of course, and there are those who want to skip out and actively propagate opting out of the opting out. Body (non-)modification extremists surely exist in that future who deem it sacrilegious to intervene or dabble with ‘natural’ responses, while acting (on the whole) quite egoistically and self-centered. In this galactic union, new forms of piratical freeports keep offshoring resources and escaping the central taxing authority, thus harboring the non-mindfulness terrorism that arises in response to a largely beneficial, widely available mental & emotional tuning. Even if coding bias into hardware based on white wetware bias is the main focus of Coded Bias, it ultimately supports a malleable wetware-hardware continuum that allows for modulation and even requires it.

Black-boxing the operative logics of machine vision – or acknowledging that machinic cognition or decisionality is essentially collaborative, not isolated, nor impervious to questioning – thus cannot just settle for the human/nonhuman, creator/created or nonhuman/posthuman binaries. That would feel very wrong, since it closes down our own sensitivity either to the same old repackaged as new, or to a newer, wider & largely collaborative nonhuman ‘worldly sensibility’ that always risks being tipped towards whiteness and reactive toxicity if left unattended. Microsoft’s 2016 Tay chatbot, which developed a proclivity for hate speech within 24 hours, is a case in point. It is not just the simple, powerful logic of trash in, trash out, but a matter of how easily this tipping point might be reached today under trolling & targeted attacks. At the same time, one should never lose sight of other machinic bridges & conceptually as well as emotionally more progressive examples that developed as part of writing practices & modernist techniques, such as automatic writing or the automated love letter generator Christopher Strachey wrote in 1952 on the Manchester computer associated with Alan Turing.
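Tay’s actual architecture was far more complex and never made public, but the tipping-point dynamic can be caricatured in a few lines of Python: a system that learns indiscriminately from its inputs ends up speaking with the voice of whoever floods it. Everything below (the `EchoBot` class, the phrases, the counts) is invented for illustration, not a model of any real chatbot.

```python
from collections import Counter

class EchoBot:
    """A naive bot that 'learns' by tallying user phrases and
    repeating the most frequent one -- no filtering, no curation."""

    def __init__(self):
        self.counts = Counter()

    def learn(self, phrase):
        # Every input counts equally, whoever sends it.
        self.counts[phrase] += 1

    def reply(self):
        # The bot's "voice" is simply its most common input.
        return self.counts.most_common(1)[0][0]

bot = EchoBot()
# Ordinary users chat with the bot...
for phrase in ["hello there", "nice weather", "hello there"]:
    bot.learn(phrase)
# ...until a coordinated trolling campaign floods it.
for _ in range(50):
    bot.learn("TOXIC SLOGAN")

print(bot.reply())  # the flood, not the ordinary chatter, now speaks
```

The point of the caricature is that no individual malicious input "breaks" the system; it is the unweighted accumulation that tips it.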

One cannot unbox anything in a straightforward way: Shalini Kantayya’s diverse cast of protagonists and invited guests makes clear that not even programmers or makers understand how the AI does what it does. Nor is this something that can be remedied with just more data, simply more information. Even without fully understanding those internal processes, we can still feel the results, see the hard facts and harsh reality whenever these AIs tend to ignore black and brown or female faces. AIs do need some deep unlearning in order to ‘re-educate’ (not such a bad word) themselves and make sure they will not act out just the mathematical sum of the worst of the worst, selecting by default for the chosen few while deselecting everybody else.
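The documentary’s central empirical move – disaggregating a single headline accuracy figure into per-group error rates – can be sketched in a few lines. The group labels and counts below are invented, though they loosely echo the kind of disparity Buolamwini’s audits reported:

```python
from collections import defaultdict

def error_rates(records):
    """Compute per-group error rates from (group, correct) audit records."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented audit data: the model is right 99/100 times on one group
# and only 65/100 times on another.
records = (
    [("lighter-skinned men", True)] * 99
    + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 65
    + [("darker-skinned women", False)] * 35
)

rates = error_rates(records)
print(rates)  # aggregate accuracy looks high; the errors cluster in one group
```

The aggregate accuracy here is 82%, which sounds respectable until the 1% versus 35% split is made visible – which is exactly why audits report disaggregated numbers.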

Pushing the logic of this documentary, it is time to find out more about how decisions, ‘chance’ and contingency may still be directed so as to redistribute luck in a more equal way in an increasingly unequal world economy. The economy is itself futurism served frozen & pre-cooked, and different debt-ridden lives and widely different futures are being handed down, bent along pre-selected trajectories – trajectories that are being doctored (who cares if knowingly or unknowingly, intentionality is always ulterior anyway) to actively make the lives of a majority impossible. A ‘pan-selectivity’ needs to be developed that refuses to be ‘gamed’ easily and influenced only by the influential few armed with predictive algorithms – at the tip of a capitalistic drive that actualizes every potential out there, no matter how horrific and brutal, as long as it pays dividends.

Like probably any ideological formation, bias is not just invisible – it may be impossible to eliminate completely, but this should not stop us from trying to change it and actively imagining what is to be done. Bias seems to work and act by being unspecified, invisibilized, left out of the loop. Again, like ideology, it is the missing mass that bends everything according to its set of preemptive expectations, almost like a constant enactment of a single, unilateral inner experience, making itself ubiquitous. Bias is not simply a whimsical conceit, nor just a pre-programmed part of the system, but something that gets enforced, hard-coded and programmed at every level of future decision making, at every threshold of resistance.

Bias is made to seem non-existent each time output and prediction are put at a premium. Even when blaring, it feels like an itch you cannot scratch, because it starts to seem so intrinsic & para-systemic. Technology or AI is neither neutral nor inherently bad, it has often been said; it gets as bad, or worse, or as good as the whole context/environment allows, as the drift promoting it keeps pushing it, or as long as the coded ideals and values are what they are. Remember: even if everything is being turned into ‘driverless’ everything, it is no less a market-driven economy.

We cannot see it and measure it because its effects are measured on those who are made to matter less and less, on those ‘others’ that even states, the law or the constitution do not seem to ‘notice’ or care for any longer. It is easier to wave bias aside, to bring undigested misconstructions on board and heap them on top of those being dealt the losing lots and the bad seats (if any), even when those stories just give you bad dreams, goosebumps, depression or a severe need to disconnect from another’s catastrophic or already dystopian reality. So this necessitates different, collective and directed research approaches & a coordinated effort to unpack the ‘black boxing’ of so many current decisional processes.

There is also a different avenue (not tackled in Coded Bias) – a sort of related QWERTY bias, of path dependencies, wherever we have historically & incrementally built conventional (man-made) computational infrastructures. This ‘convention’ not only stands in the way of more evolutionary, developmentally inclusive, unconventional approaches to computation & computing, but might leave out or blind us to other venues or other modes of problem solving, existing or evolved (such as those investigated by Andrew Adamatzky in his studies of maze-solving slime molds). While most computation & research nowadays follows old & certainly well-tested architectures, it only builds upon existing & specific constraints – all too human ones, we might add, and moreover a very restrictive & biased account of what counts as ‘human’ (amply documented throughout Coded Bias), one that both engineering and coding seem to take for granted. ‘Worth’, in a constantly devalorizing environment, becomes constantly threatened; at the same time we should welcome the erosion of old, gender-biased and individualistic notions of singular genius (unmoved mover?) and farcical ‘great men’ through our plural AI-human interactions.

Coded Bias gets the highest marks for advocating XAI research – the attempt to build explainable artificial intelligence – research that should be aware of ‘artificial unintelligence’ (Meredith Broussard), as well as for demanding that humans hone their response-ability (Haraway), allowing for aesthetic, epistemological and ethical responsiveness whenever the 21st century’s technological upgrades and optimizations start pouring in.

Algorithmic Justice League (AJL)

AJL TW

AI fairness 360

Big Brother Watch UK

Algorithmic Equity Toolkit

Recidivism Risk Assessment

Association for Computing Machinery code of ethics

Silicon Valley Rising

Critical Race and Digital Studies Syllabus

No Biometric Barriers Housing Act of 2019

A Toolkit on Organizing Your Campus against ICE

stopping big data plan to flag at risk students

Responsible Computing Science Challenge

Hacking Discrimination hackaton

Protest Surveillance: Protect Yourself toolkit from Surveillance Technology Oversight Project (S.T.O.P.) for safety recommendations

AI Now Institute at New York University is a research center dedicated
to understanding the social implications of AI.

Fight for the Future is a group of artists, activists, engineers, and technologists
advocating for the use of technology as a liberating force.

Our Data Bodies is a human rights and data justice organization.

Data & Society studies the social implications of data-centric technologies & automation.


You do not need to be a tech expert to advocate for algorithmic justice. These basic terms are a good foundation to inform your advocacy. For a more detailed breakdown of how facial recognition works, see the guide titled Facial Recognition Technologies: A Primer from the AJL. For more on surveillance, see the Community Control Over Police Surveillance: Technology 101 guide from the ACLU.

GLOSSARY OF TERMS (extracted from Coded Bias Activist Toolkit)

Algorithm. A set of rules used to perform a task.

Algorithmic justice. Exposing the bias and harms from technical systems in order to safeguard the most marginalized and develop equitable, accountable, and just artificial intelligence.

Benchmark. Data set used to measure accuracy of an algorithm before it is released.

Bias. Implicit or explicit prejudices in favor of or against a person or groups of
people.

Artificial intelligence (AI). The quest to give computers the ability to perform
tasks that have, in the past, required human intelligence like decision making,
visual perception, speech recognition, language translation, and more.

Big data. The mass collection of information about individuals who
use personal technology, such as smartphones.

Biometric technology. Uses automated processes to recognize an individual through unique physical characteristics or behaviors.

Black box. A system that can be viewed only through its inputs and outputs, not its internal process.

CCTV. Closed-circuit television cameras are used by institutions to record activity on and around their premises for security purposes.

Civil rights. A broad set of protections designed to prevent unfair treatment or
discrimination in areas such as education, employment, housing, and more.

Code. The technical language used to write algorithms and other computer programs.

Data rights. The human right to privacy, confidentiality, and ethical use of personal information collected by governments or corporations through technology.

Data set. The collection of data used to train an algorithm to make predictions.

Due process. The right not to be deprived of life, liberty, or property without
proper legal proceedings, protected by the Fifth and Fourteenth Amendments to the US Constitution.

General Data Protection Regulation (GDPR). A data rights law in the European Union that requires that technology users consent to how their data is collected and prohibits the sale of personal data.

Facial recognition technologies. A catchall phrase to describe a set of technologies that process imaging data to perform a range of tasks on human faces, including detecting a face, identifying a unique individual, and assessing demographic attributes like age and gender.

Machine learning. An approach to AI that provides systems the ability to learn
patterns from data without being explicitly programmed.

Racism. The systematic discrimination of people of color based on their social
classification of race, which disproportionately disadvantages Black and
Indigenous people of color.

Recidivism risk assessment. An automated decision-making system used in sentencing and probation to predict an individual’s risk of future criminal behavior based on a series of data inputs, such as zip code and past offenses.

Sexism. The systematic discrimination of women and girls based on their social
categorization of sex, which intersects with racism for women and girls of color.

Social credit score. An AI system designed by the Communist Party of China
that tracks and analyzes an individual’s data to assess their trustworthiness.

Surveillance. The invasive act of monitoring a population to influence its
behavior, done by a government for law and order purposes or by corporations for commercial interests.

Value-added assessments. Algorithms used most commonly to evaluate teachers by measuring student performance data.

Voice recognition. An application of AI technology that interprets and carries out spoken commands and/or aims to identify an individual based on their speech patterns.

imdb