From scientific racism to inclusive design

Bias, brains, and skulls.

Stacie Sheldon
UX Collective

--

The impact of bias in every aspect of our lives — services, technology, and government — is again at the forefront of our cultural psyche. Experts and advocates have been raising the alarm about the implications of bias in data and how it feeds into every part of our society. But where did it come from, and what can we do about it? Understanding the enormous impact racism has had on the political, cultural, linguistic, and technological landscape requires looking back at its legacy in scientific investigation.

Language has power

During my undergraduate study in literature, “Introduction to Literary Theory” was a required course. The title certainly didn’t inspire, and I assumed the class would be a meaningless slog. However, at some point, as we started talking about all the competing viewpoints on how text is interpreted and by whom, it suddenly clicked for me: language has power.

I think we all know this, as most of us have been hurt by words. As an American Indian kid growing up in northern Michigan, I certainly knew it. But this course is where I realized how intrinsic that power is, and how it might help explain some of the cultural and political forces that had always been present. The words we use and how they’re composed, the stories they tell — and don’t tell — are the sum of a culture rather than the relatively innocuous product of an individual author. In a self-feeding cycle, those texts in turn build and reinforce the existing culture. They are a reflection of who we are and what we believe, reflecting and reinforcing biases that we ignore or are ashamed of.

Protestors holding a sign that says “Save a walleye…spear an indian”
Protestors from the 1989 Wisconsin Walleye Wars. Conflict regarding fishing rights has occurred on all five Great Lakes, and as recently as May 2020, tribal citizens of Wisconsin exercising treaty fishing rights have been shot at and harassed.

This intersection of texts and cultural identity resonated with me as a person navigating many different cultural and political identities: Native American, female, American, raised in a rural area and now living in an urban setting, and so on. In the United States, American Indians were forbidden by federal law to teach or use their own languages until the passing of the Native American Languages Act of 1990. So words and language have always been important to me, and now I was really grasping the scope of the thing.

What evil is possible?

Fast forward to the present, where I have enjoyed a lengthy career as a User Experience professional (who happens to have a degree in Literature). A few years ago I attended a World Information Architecture Day event with a session on web analytics. The presentation focused on the practical aspects of using insights derived from web analytics but also touched on the ethics of data collection. I was intrigued, and kept thinking about the ethics of data and the power of data analysts to decide whom and what to include and exclude in data models. Gender, race, ethnicity, how you ask for data or don’t ask for it at all — language has power was back, this time in Tech.

There is more and more conversation about this now — bias built into non-transparent algorithms that determine who can buy a house, who gets selected for a job interview, who has the best chance at college acceptance, and so on. As User Experience professionals, we often participate in the research and design of these systems and forms. I felt a sense of alarm over the implications of this power and its manifestation in technology: what evil is possible when you start measuring human beings?

Why do I speak of the evil that’s possible when you measure human beings? Though not the first or last person to make broad, sweeping generalizations about swaths of people based on assumptions and on tests designed to confirm those biases, Samuel G. Morton is a widely cited example of how data can be used for evil, with centuries of repercussions. Morton was an American scientist, physician, and writer who was active from the 1820s to the 1850s. He is considered the father of scientific racism.

Portrait of Samuel Morton next to a sketch of human heads and skulls from his notebooks
Portrait of Samuel George Morton, American Philosophical Society, and an excerpt from “Crania Americana” showing the supposed differences between the skulls of different races.

In the nineteenth century, Americans were intensely interested in theories that connected people’s looks to their basic character, including their intelligence, moral sense, and capacities for leadership and violence. Phrenology, the study of the shape and size of the cranium in relation to mental abilities, was very popular as support for such theories. Interestingly, the word “phrenology” translates as the “study of the mind,” not the “study of the head” or the “study of the skull.” The word was invented by Franz Gall in 1811, only a decade before Morton started using it, so it was very much a fad — a dangerous one. Ultimately it served to reinforce white superiority. And as we will see, it was not completely left behind in the nineteenth century.

Reinforcing cultural norms and beliefs

The context of Morton’s work is important for understanding how this belief system reinforced cultural norms and beliefs. To give you that context: Morton’s phrenological work Crania Americana was published in 1839.

Here is what the United States looked like in 1839:

Map of the United States as it looked in 1839
Source: La Chuleta Congela, Territorialism in the U.S., 1789-Present (Maps)

Here are some highlights of what was happening in the United States in 1839:

  • Martin Van Buren is President of the United States. He had been Vice President under Andrew Jackson, who served two terms.
  • Jackson’s administration was responsible for the Indian Removal Act, and Van Buren’s administration enforced it aggressively.
  • Tens of thousands of American Indians were forcibly removed from their lands and thousands died on the Trail of Tears.
  • The Seminole in Florida territory refused to leave, sparking the Second Seminole War (1835–1842).
  • The Amistad rebellion took place in 1839, leading to a Supreme Court case, decided in 1841, that ultimately freed the people who had been enslaved.
  • The first anti-slavery party, the Liberty Party, convened in New York in 1839, two decades before the Civil War.

Now, prior to Morton’s work, the prevailing scholarly theory was monogenism: that all people share a single Creation. However, this belief could not reinforce the privilege and superiority Morton and his contemporaries enjoyed. Against this landscape of slavery and genocide, and within the context of his own life and experiences, Morton set out to understand race and the origin of man.

Skulls, pepper seeds, and lead shot

Morton’s studies supported and reinforced a different theory: polygenism. In Morton’s view, the five accepted races had separate creations, a position supported by his understanding of Christianity and his interpretation of the Bible at that time. Through this lens, Morton developed the study he is now infamous for, attempting to correlate the size and shape of skulls with innate human and racial abilities. He collected nearly a thousand skulls and measured their cranial capacity by filling each skull with white pepper seeds; he later changed his method to use lead shot. From this work he concluded that the size of a person’s skull was an accurate measure of their intelligence: the bigger the skull, the bigger the brain, which he equated with intelligence. Morton then ranked the five races he identified according to the size of their skulls.

Five skulls from Samuel Morton’s collection.
Skulls from Samuel George Morton’s collection at Penn Museum
1. Caucasian: “The highest intellectual endowments”

2. Mongolian: “Ingenious, imitative and highly susceptible of cultivation”

3. Malay: “Active and ingenious”

4. American (American Indian): “Averse to cultivation, slow in acquiring knowledge, restless, fond of war”

5. Ethiopian: “Joyous, flexible, and indolent”

(Note: in Morton’s scheme, “Caucasian” covered the United States, Europe, the Middle East, and North Africa; “Mongolian” meant Asian and Inuit peoples; “Malay” meant Southeast Asian; “Ethiopian” meant sub-Saharan African, including enslaved people in the United States.)

He also wrote that the Indian is “incapable of servitude, and thus his spirit sank at once in captivity, and with it his physical energy,” while describing “the more pliant Negro, yielding to his fate, and accommodating himself to his condition.”

These findings, that American Indians could not integrate successfully into “modern” society, were central to Indian Removal and to overall Indian policy. And I am sure they eased many a conscience over slavery. Now we’re back to that literary theory: text as reflecting and creating culture.

A bias within a bias?

Morton’s book came out in 1839, and he was considered a leading empirical scientist until he died in 1851. He missed the 1859 publication of Darwin’s On the Origin of Species, which blew his work out of the water, and, of course, the Civil War. People mostly forgot about Morton until 1978, when Stephen Jay Gould wrote in Science magazine, and again in 1981 in his book The Mismeasure of Man, that Morton’s work was biased in a number of ways. One of Gould’s claims was that Morton unconsciously packed more seeds into the skulls he wanted to hold more. Gould made Morton into the poster child of bias.

Now, years later, a 2018 study published in PLOS Biology presented evidence that Gould was himself biased and that Morton’s capacity measurements were, in fact, accurate. The real problem was that Morton didn’t have measurements for the bodies that went with the skulls, and we know today that bigger people have bigger brains. We also know that the size of the brain has nothing to do with intelligence; indeed, Einstein’s brain weighed less than the average adult male brain. And regardless, the overall bias of Morton’s work is indisputable. The lesson here is that we are probably never going to be perfect. Like Gould, who argued that unconscious bias is ubiquitous in science, we must constantly be on the lookout for our own unconscious bias and how it may impact our work.

Curly hair, big feet, and lasting damage

We should also be aware of the lasting damage Morton’s work did. For example, in the 1910s, while the United States Justice Department was trying to sort out land allotments at the White Earth (Minnesota) Chippewa reservation for a lawsuit regarding land fraud, it contracted Albert Jenks, professor of anthropology at the University of Minnesota, and Aleš Hrdlička of the Smithsonian to distinguish mixed-blood from full-blooded Chippewa. Full-blooded Chippewa were believed to be incompetent to sell their land allotments, and so who was a full-blood and who was a mixed-blood became a political question. The White Earth Chippewa themselves believed that if you shared their lifeway, you were a full-blood; if you did not, you were a mixed-blood. This was not good enough for the United States government, and so Jenks and Hrdlička were brought in as experts.

In his article “Curly Hair and Big Feet: Physical Anthropology and the Implementation of Land Allotment on the White Earth Chippewa Reservation,” David L. Beaulieu details how the two so-called experts claimed quantitative evidence that full-bloods never had curly hair and that their feet were smaller, the “natural form of people who do little manual labor.” They managed to reduce the number of full-bloods at White Earth from over 5,000 to 127, vastly diminishing what the United States government had to pay in the land fraud lawsuit. They claimed scientific objectivity while practicing pseudoscience, and the people of White Earth lost millions of dollars’ worth of timber rights. Worse yet, their blood quantum measurements still make up the database used to determine blood quantum and tribal enrollment today.

It is no coincidence that, not long after this, the Nazis engaged in the same kind of quantitative examination of skulls and other body parts in their comparisons of Aryan to non-Aryan people. Again and again, people want to use data to classify, measure, and rank humans.

So what about now? We’d love to be past all of that. Unfortunately, we are not.

Sample web form about gender next to a photo of Matthew Shephard
Sample web form by Sarai Rosenburg. Photo of Matthew Wayne Shepard, a gay American student at the University of Wyoming who was beaten, tortured, and left to die near Laramie on the night of October 6, 1998

There are consequences to how we ask for, collect, and use data. Who are we including and who are we excluding? Consider the demographic form example pictured above. Demographic questions are ubiquitous on the web and in applications. What if, every one of the gazillion times you are asked to identify yourself, you don’t see a place for yourself? That would be a small, repeated hurt — a microaggression happening all of the time. It would also prevent others from seeing you; you’d be invisible, on the margins. And there is no getting around the fact that this contributes to a culture where someone can be killed because they don’t fit in, because they are on those margins.
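
What might a more inclusive version of that question look like in practice? Below is a minimal sketch in TypeScript; the labels, options, and type names are hypothetical illustrations rather than a standard, and a real form should be designed and tested with the communities it asks about.

```typescript
// A minimal sketch of a more inclusive gender question.
// All names and options here are hypothetical illustrations.

interface GenderQuestion {
  label: string;
  options: string[];          // fixed choices shown to the user
  allowSelfDescribe: boolean; // free-text "let me describe myself"
  allowDecline: boolean;      // answering should always be optional
}

const genderQuestion: GenderQuestion = {
  label: "What is your gender?",
  options: ["Woman", "Man", "Non-binary"],
  allowSelfDescribe: true,
  allowDecline: true,
};

// Store what the user actually said instead of forcing a bucket.
type GenderAnswer =
  | { kind: "option"; value: string }
  | { kind: "selfDescribed"; value: string }
  | { kind: "declined" };

function display(answer: GenderAnswer): string {
  switch (answer.kind) {
    case "option":
    case "selfDescribed":
      return answer.value; // the user's own words survive intact
    case "declined":
      return "not provided";
  }
}

console.log(genderQuestion.label);
console.log(display({ kind: "selfDescribed", value: "Two-Spirit" }));
```

The structural choices are the point: a free-text option so no one is forced into someone else’s categories, an explicit way to decline, and storage that preserves the user’s own words.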

A closer look at ourselves

It has now been over 20 years since Matthew Shepard was so brutally murdered because he was gay, and America was shocked into thinking about how powerful the consequences of excluding people can be. The last 20 years have also brought enormous changes in Tech and how we interact with it.

Here are some recent statistics from Towards Data Science about how we commonly consume (and create) data every day.

  • 7% of users produce 50% of the posts on Facebook.
  • 4% of users produce 50% of the reviews on Amazon.
  • 0.04% of Wikipedia’s registered editors (about 2,000 people) originated half the entries of English Wikipedia.

When you consider that 70% of US adults use Facebook and 36% of adults get their news from Facebook, this concentration is worrisome. Who has power over what is seen, what is thought about, and who we are? We need to think about that.

We are in a unique period in which Big Data and the Internet of Things add to all of these considerations. And we are already seeing how social media, connected devices, and AI are impacting us. Here are some examples:

  • Filter Bubbles. A term coined by internet activist Eli Pariser for the state of intellectual isolation that can result from personalized searches, in which a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click behavior, and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. Have your clicks, likes, and shares turned your Facebook account into a bubble of your own making? (A toy sketch of this feedback loop follows this list.)
  • Fake News. False or misleading information presented as news, often with the aim of damaging the reputation of a person or entity, or of making money through advertising revenue. Media scholar Nolan Higdon has offered a broader definition of fake news as “false or misleading content presented as news and communicated in formats spanning spoken, written, printed, electronic, and digital communication.”
  • Predictive Policing. There are two kinds of predictive policing, which the Brennan Center for Justice defines as follows: place-based predictive policing, the most widely practiced method, typically uses pre-existing crime data to identify places and times that have a high risk of crime; person-based predictive policing, on the other hand, attempts to identify individuals or groups who are likely to commit a crime — or to be the victim of one — by analyzing risk factors such as past arrests or victimization patterns. Both kinds of algorithm lack transparency and can infringe on civil liberties.
  • Alexa and Siri Voice Assistants. A 2019 UNESCO study found that “The ‘female’ obsequiousness — and the servility expressed by so many other digital assistants projected as young women — provides a powerful illustration of gender biases coded into technology products… The more that culture teaches people to equate women with assistants, the more real women will be seen as assistants — and penalized for not being assistant-like.”

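To make the filter-bubble mechanism described above concrete, here is a toy sketch in TypeScript. It is emphatically not any real platform’s algorithm; the topics and numbers are invented. It shows only how ranking content purely by past engagement can lock a user into whatever they happened to click first.

```typescript
// Toy model of a filter bubble: rank content by past clicks only.
type Topic = "politicsA" | "politicsB" | "sports" | "science";

const catalog: Topic[] = ["politicsA", "politicsB", "sports", "science"];

// The user starts with a single click on one political viewpoint.
const clicks: Record<Topic, number> = {
  politicsA: 1, politicsB: 0, sports: 0, science: 0,
};

// "Personalization": show whatever the user has engaged with most.
function recommend(): Topic {
  return [...catalog].sort((a, b) => clicks[b] - clicks[a])[0];
}

// Ten sessions in which the user reads whatever is ranked first.
for (let i = 0; i < 10; i++) {
  const shown = recommend();
  clicks[shown] += 1; // engagement feeds straight back into the ranking
}

// politicsA ends up with all the engagement; the other topics,
// including the opposing viewpoint, are never surfaced again.
console.log(clicks); // { politicsA: 11, politicsB: 0, sports: 0, science: 0 }
```

Nothing in this loop is malicious; the isolation emerges from the feedback structure itself, which is part of why demanding audits of real recommendation systems matters.
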
When technology is changing this rapidly, there can be a tendency to feel as though it is all an inevitable force we can’t do anything about. But we could demand that companies audit their algorithms for fairness, legality, and accuracy. We demand other forms of responsibility from companies, and this should be no different; the consequences are certainly just as dire. In her book Weapons of Math Destruction, Cathy O’Neil writes, “When automatic systems sift through our data to size us up for an e-score, they naturally project the past into the future. As we saw in recidivism sentencing models and predatory loan algorithms, the poor are expected to remain poor forever and are treated accordingly — denied opportunities, jailed more often, and gouged for services and loans.”
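
What might such an audit look like at its very simplest? One common first check compares selection rates across groups, in the spirit of the “four-fifths rule” used in US employment law. The sketch below is in TypeScript with entirely hypothetical data; a real audit would also have to examine error rates per group, the training data, and the legality of the model’s inputs.

```typescript
// A minimal fairness check: compare selection rates across groups.
// The 0.8 threshold follows the "four-fifths rule"; the data is invented.

interface Decision {
  group: string;     // a demographic group label
  selected: boolean; // did the model approve / shortlist this person?
}

function selectionRates(decisions: Decision[]): Map<string, number> {
  const counts = new Map<string, { selected: number; total: number }>();
  for (const d of decisions) {
    const c = counts.get(d.group) ?? { selected: 0, total: 0 };
    c.total += 1;
    if (d.selected) c.selected += 1;
    counts.set(d.group, c);
  }
  const rates = new Map<string, number>();
  for (const [group, c] of counts) rates.set(group, c.selected / c.total);
  return rates;
}

// Flag when any group's rate is below 80% of the best-treated group's.
function failsFourFifthsRule(decisions: Decision[]): boolean {
  const rates = [...selectionRates(decisions).values()];
  const best = Math.max(...rates);
  return rates.some((rate) => rate / best < 0.8);
}

// Hypothetical audit data: the model approves group B half as often.
const audit: Decision[] = [
  ...Array.from({ length: 100 }, (_, i) => ({ group: "A", selected: i < 60 })),
  ...Array.from({ length: 100 }, (_, i) => ({ group: "B", selected: i < 30 })),
];

console.log(selectionRates(audit));      // A: 0.6, B: 0.3
console.log(failsFourFifthsRule(audit)); // true, since 0.3 / 0.6 < 0.8
```

A check like this catches only the crudest disparities, but it makes the system auditable at all, which is exactly what opaque scoring models currently avoid.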

And while every day we are measured, categorized, and scored by a myriad of data models, we are also inviting digital voice assistants into our homes that offer convenience, safety features, and entertainment. But as we know, language has power, and humans naturally build bias into any technology they create.

Why does Siri sound white?

Product designer AmberNechole Hart spoke at the 2020 World Usability Day event on “Why Does Siri Sound White?” Hart talked about the Black Voice and how it has been relegated to use outside of professional and academic spaces. Voice assistant products such as Siri and Alexa, which are quickly becoming mainstays of our daily lives, are based on data models that include only Standard American English. Again, who are we leaving out? And why? Design teams could work with linguists to update and change these models to offer a more inclusive experience. It is important to think about the historical context of language here.

“In 1619, when the first ship carrying Africans landed in what would become the United States of America, black people were given a design brief…. Some of the requirements: you have to be subservient, and you have to use someone else’s voice to communicate. A limit is that, again, you can’t use your own voice and that you’re disconnected from the land, you’re disconnected from your culture, and many times you’re disconnected from your community.”

- AmberNechole Hart

Once again, language has power, right here in our homes. Disconnecting people from their land, culture, and communities always results in socioeconomic crises: high rates of disease, poverty, educational dropout, youth suicide, and violence.

Wall mural of George Floyd that says “I can breathe now.”
Wall mural at the corner of 38th Street and Chicago Avenue South in Minneapolis, Minnesota, the spot where George Floyd was arrested.

And as I write this article in 2021, we have witnessed the emergence of the Black Lives Matter movement in a powerful way. Many companies are finally shifting attention to Inclusion and Diversity in a more meaningful way.

And what is Inclusive Design? There are many definitions available, and the best ones are verb- or method-oriented. For example, I like this definition from Microsoft because it points out that Inclusive Design is a method, a way of thinking:

“Inclusive Design is a methodology, born out of digital environments, that enables and draws on the full range of human diversity. Most importantly, this means including and learning from people with a range of perspectives.”

-Microsoft

Inclusive design makes people feel welcome, safe, and valued. It prevents frustration and demeaning experiences. It remembers that language has power.

Guiding principles for data in tech

While doing research for this article, I came across these Guiding Principles that IDEO put together, which I like very much:

1. Data is not truth. Data is human-driven. Data can be biased through what is included or excluded, how it is interpreted, and how it is presented.

2. Don’t presume the desirability of AI. Just because AI can do something doesn’t mean that it should.

3. Respect privacy and the collective good. We must hold ourselves to a higher standard than “will we get sued?”

4. Unintended consequences of AI are opportunities for design. Use unanticipated consequences and new unknowns as starting points for iteration.

As a UX professional, I know our work has an impact on people’s lives. And I know our discipline is built on the notion of Human-Centered design. So let’s work toward the inclusion of all humans. Let’s seek out exclusions (such as those AmberNechole Hart has pointed out in voice assistants) to create great new ideas and inclusive designs. It may be surprising, but every decision we make in User Experience and other fields of technology can either raise or lower barriers to participation for people in our communities and with our products, services, and experiences. Let’s be as aware as we can of our bias and the impact it may have, and acknowledge the collective responsibility we have to lower those barriers.

The UX Collective donates US$1 for each article published on our platform. This story contributed to Bay Area Black Designers: a professional development community for Black people who are digital designers and researchers in the San Francisco Bay Area. By joining together in community, members share inspiration, connection, peer mentorship, professional development, resources, feedback, support, and resilience. Silence against systemic racism is not an option. Build the design community you believe in.

--


UX strategist, researcher and designer, published author, mentor, and American Indian language advocate. Member of Mackinac Bands of Chippewa & Ottawa Indians