Dr Strangecode — how I learned to stop worrying and love the algorithm

Timmy Maiello
Published in UX Collective
7 min read · Aug 5, 2020


Illustration by Timmy Maiello

“Algorithms evolve, push us aside and render us obsolete.

This means war with your creator”

Algorithm, Muse (Simulation Theory, 2018)

During a design project with my team last year I stumbled on a peculiar situation. As we were trying to figure out how people decide which endangered species to save, and according to what system of values, we noticed that in 2017 the Department of Conservation in New Zealand let an algorithm (the Threatened Species Strategy algorithm) decide the fate of more than 150 species, based on factors like social contribution to the people, contribution to securing the widest range of taxonomic lineages, conservation status, rate of decline and conservation dependency. Put more simply, New Zealand blindly entrusted an environmental crisis to just a few lines of code, and apparently those lines are doing a great job.

Relying on an algorithm has become today’s fashion. And why shouldn’t it? With a database in hand and a machine able to read instructions, you too can do anything! Do you want to sell a painting for $432,500? Here’s the code! Do you want to advertise your product to a specific persona? Say no more! Do you need to put a stop to your country’s crime rate? Try this code!

The problem, however, lies in the belief that people do not have the right to question or doubt those decision processes, even when the outcome is fundamentally important to their lives. Because algorithms are considered something sophisticated and mathematically intellectual, people feel they lack the authority to have an opinion about them. So those algorithms take on a life of their own in terms of authenticity and, as in some kind of cyber religion, they rule masses of followers who trust any kind of belief just because “an algorithm said so”. The rise of these WMDs (widespread, mysterious and destructive algorithms, to borrow Cathy O’Neil’s framing) is increasingly having an impact on our lives, making decisions that most people accept as undoubtedly right. So who gets hired or fired? Who gets that loan? Who gets insurance? Are you admitted into the college you wanted to get into? Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison.

The purpose of this essay is to analyze the role of algorithms: whether they act fairly, in what cases they might be weaponized, and whether we can do something about it before you and I end up on an endangered species list.

Portrait of Edmond de Belamy by Obvious

A biased code

First things first: what even is an algorithm? We can see it as a set of instructions, typically used to solve a class of problems, perform calculations, process data and carry out other tasks. Technically speaking, it does not rely on machines to be executed, nor does it need to be complicated. In a game, for example, the player is the one who executes an algorithm. They are given a task (winning a match, finishing first in a race, reaching the last level) and, to accomplish it, they have to observe the outcomes and input decisions, then read the results displayed by the computer. The player has to execute an algorithm in order to win. The similarity between the actions expected from the player and computer algorithms is too uncanny to be dismissed. Lev Manovich sees an algorithm as one “half that, combined with data structures, forms the ontology of the world according to a computer: they are equally important for a program to work”. That means that in any case, in any situation (machine or human), the one thing necessary to make an algorithm work is a database: a structured collection of data where every item is equally significant. In effect, the composition and structure of the database is the one thing (often underestimated) that we should put under inspection, as the quality of its components (or, worse, the lack of it) determines the source of most problems.
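To make this concrete, here is a minimal sketch (my own toy example, with invented species scores, and in no way the actual Threatened Species Strategy code): the “algorithm” is just a trivial ranking rule, and the ranking it produces depends entirely on the database it is fed.

```python
def prioritize(species, scores):
    """A toy conservation 'algorithm': rank species by a single
    pre-computed score. The instructions are trivial — the real
    decisions live in the database of scores."""
    return sorted(species, key=lambda s: scores[s], reverse=True)

# Hypothetical database — change one number and the 'objective'
# output changes with it.
db = {"kakapo": 9.1, "kiwi": 7.4, "tuatara": 8.2}
ranking = prioritize(db, db)  # ['kakapo', 'tuatara', 'kiwi']
```

Swap one score and a different species rises to the top: the output looks authoritative, but it only mirrors whoever filled in the numbers.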

Joy Buolamwini in “How I’m fighting bias in algorithms”

Let’s imagine a design product where an algorithm is tasked with scanning people’s faces. If the training database is made up, whether intentionally or not, of a non-diverse set of people, any face that deviates too much from that set will be harder, if not impossible, to detect. However, there are only so many times we can consider something “unintentional”. Too often people of colour, Muslims, immigrants and LGBTQ communities are the ones forgotten or, worse, targeted. In all those cases we are talking about algorithmic bias. We must not forget that those codes are set up by the people in power, by companies and by people who simply want to help themselves. Even when something has been left on “default”, that means a careless way of acting and, consequently, a harmful one. After all, artefacts have politics, and most of them will probably not align with the varied spectrum of thinking of their users. The tentacles of the matrix of domination have a comfortable home here, and unlike human biases, the digital ones spread much further and faster. However, not all is lost. Those databases do not materialize from nothing. We have to act, and we have to act fast: if we do not intervene, they will keep increasing inequality and discriminatory practices. Luckily we still have one line of defence, and it is our best weapon yet: us designers.
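A toy simulation (entirely hypothetical: a single made-up “face feature” number, nothing like a real face-recognition pipeline) shows how a skewed training database produces exactly this failure. The detector below works almost perfectly on the over-represented group and barely at all on the under-represented one.

```python
import random

def train_detector(training_faces, tolerance=0.15):
    """'Train' a toy detector: it only recognises feature values
    close to the mean of its training data."""
    mean = sum(training_faces) / len(training_faces)
    return lambda face: abs(face - mean) <= tolerance

random.seed(0)
# Skewed database: 95% of samples come from one narrow group.
group_a = [random.gauss(0.2, 0.05) for _ in range(95)]
group_b = [random.gauss(0.8, 0.05) for _ in range(5)]
detector = train_detector(group_a + group_b)

# Detection rates on fresh faces from each group.
rate_a = sum(detector(f) for f in (random.gauss(0.2, 0.05) for _ in range(100))) / 100
rate_b = sum(detector(f) for f in (random.gauss(0.8, 0.05) for _ in range(100))) / 100
# rate_a is near 1.0; rate_b is near 0.0 — the under-represented
# group is effectively invisible to the system.
```

Nobody wrote “ignore group B” anywhere in the code; the discrimination lives entirely in the composition of the database.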

Our Role

The first thing to keep in mind is that accountability does not lie with the algorithm (it is easy to cry wolf at an inanimate code) but with the designer behind it. Who designs the database matters more than the database itself. Are they creating a full-spectrum database with diverse individuals who can check each other’s blind spots? Are they projecting a moral value onto the final result of the artefact? Are they thinking about the right opinion to introduce into this already full ecosystem? Of course, we are only human, and as humans we can make many, many mistakes. The catch is that errors can be corrected, codes can be changed and software can be updated. But when our design has the power to do immense harm to people, factors like social health, equality and representation have to be set as priorities, not as an afterthought.

Something else we should keep in mind is why we design. Having the chance to design an algorithm does not mean that we have to do it. Do we really need to build a code capable of revealing sexual orientation just because we can? Do we have the right to decide whether height has something to do with education? All those cases are, and always will be, soaked in bias, to the point where the “magical and unquestionable code” could even create new biases of its own. In a pessimistic world like the one described here, is there even hope? When an algorithm tells me the fastest path to choose, the perfect partner to date or the most probable food I am going to eat tonight, what even is free choice? Is my opinion still relevant? Are we just numbers in a database? One could rightly decide that we should get rid of those algorithms once and for all, but I (like probably the entire population of Silicon Valley) disagree: we should change our idea of algorithms rather than trying to delete them. The reason is simple: those algorithms will not go away, but we can still do something to avoid a crisis.
We should see these codes for what they really are: tools (and not creators) that can help us with our tasks. Look no further than projects like Tulipmania, a beautiful and critical piece of art by Anna Ridler that uses an algorithm to make a point about systems of social and economic currency. It is easy to overlook the labour that goes into making a database, ignoring the network of human decisions involved in machine learning, but here the artist does not want you to forget that what you see is real: real flowers, hand-picked in a real market, photographed, labelled and arranged by her decisions. Here the algorithm is an instrument in the hands of a creative, not the dictator of an absolute truth. An essential part of the artwork, not the maker of it.

Tulipmania by Anna Ridler

In conclusion

In creating a world where we value inclusion and where technology works for all of us, not just some of us, algorithms need to be put in their place. We should regulate their use, apply laws where needed and retain the right to doubt and question the morals behind those codes. We are not dealing with any supernatural force: the algorithm is human, imperfect and often driven by secondary motives. We should remember to put equality first and not substitute algorithms for any kind of human work: we should coexist with them and use them as instruments to make our lives easier.

Bibliography

Agüera y Arcas, B. (2018) ‘Do algorithms reveal sexual orientation or just expose our stereotypes?’, Medium [accessed 18/6/2019].

Costanza-Chock, S. (2019) ‘Design Justice, A.I., and Escape from the Matrix of Domination’, Journal of Design and Science [accessed 19/6/2019].

Graeme, E. (2017) ‘Threatened Species Strategy Algorithm’, Department of Conservation New Zealand [accessed 19/6/2019].

Hosanagar, K. (2018) ‘Free Will in an Algorithmic World’, Medium [accessed 20/6/2019].

Manovich, L. (1999) ‘The Digital’, Millennium Film Journal №34.

Obvious (2018) ‘Portrait of Edmond de Belamy’.

O’Neil, C. (2016) ‘Weapons of Math Destruction’, Crown Books.

O’Neil, C. (2016) ‘Death by Algorithm’, PBS [accessed 19/6/2019].

Ridler, A. (2019) ‘Tulipmania’.

The UX Collective donates US$1 for each article published on our platform. This story contributed to UX Para Minas Pretas (UX For Black Women), a Brazilian organization focused on promoting equity for Black women in the tech industry through initiatives of action, empowerment, and knowledge sharing. Silence against systemic racism is not an option. Build the design community you believe in.
