It’s Good* That Word Embeddings Are Sexist

A lot of news has been fluttering around about word embeddings being racist and sexist, e.g.:

This is a good thing, but not in the sense that sexism and racism are good. It’s good because people who work on quantitative problems don’t believe things are real without quantitative evidence. This is quantitative evidence that sexism and racism are real.

My initial reaction was surprise at how much alarm there is about this. When you live in a world that is glaringly *ist, take data from that world, and learn from it in an unsupervised manner, you’re going to acquire *ist knowledge. But then again, I did my graduate education in a linguistics department with a strong group of sociolinguists. I was exposed to these ideas years ago and was taught to have an awareness of and sensitivity to these issues, and to be critically aware of how language can construct and reinforce racist and sexist norms, especially through prescriptivism.
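
To make the mechanism concrete, here’s a minimal sketch in Python of the kind of probe those news pieces describe: querying pretrained word2vec vectors for analogies and watching the learned associations fall out. The gensim library and its "word2vec-google-news-300" download are real; the specific analogy is my choice for illustration, and the exact words that surface may vary.

```python
# A minimal sketch: probe pretrained embeddings for learned associations.
# Assumes gensim is installed; the Google News vectors (roughly 1.6 GB)
# are fetched through gensim's downloader on first use.
import gensim.downloader as api

# 300-dimensional word2vec vectors trained (unsupervised) on Google News text.
model = api.load("word2vec-google-news-300")

# Solve "man is to doctor as woman is to ___" by vector arithmetic:
# vec(doctor) - vec(man) + vec(woman), then find the nearest vocabulary words.
for word, similarity in model.most_similar(
        positive=["woman", "doctor"], negative=["man"], topn=3):
    print(f"{word}\t{similarity:.3f}")

# Gendered occupation terms (e.g. "nurse") tend to rank near the top here,
# not because anyone programmed that in, but because the model absorbed the
# associations present in its training text.
```

Nobody supervised that association into the model; it was learned, unsupervised, from the text of the world as it is.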

I suspect a lot of the shock is coming from the stronger CS end of things, a side of the university that is more strictly quantitative. My undergrad was in physics, which I suspect has a similar distribution of social science coursework: namely, just what the university requires. A student might take sociology or anthropology, but only if the university requires it. Mine did not; I took macroeconomics in lieu of either of those.

When you’re in a quantitative program, there are a lot of hidden assumptions. One is that quantitative analysis is the only way to do anything, and that any other way of approaching any problem of any kind is bullshit. The reasoning goes that any other approach can involve biases a researcher is unaware of, while abstraction and measurement help remove the researcher’s preferences from the process, mitigating the effect of their biases. The procedure and the numbers are what count.

Hard-core context control.

This works great for particles in a vacuum, for problems where the context can be completely controlled. But the assumption that these standards can be universally maintained bleeds into other problems where such control is realistically impossible, and the air of non-bias around quantitative methods persists even though the conditions that purged the bias in the first place are gone.

This assumption of non-bias carries into AI research: the belief that a machine built on quantitative principles will be capable of arriving, logically and deductively, at perfect, unbiased truth, the objective truth that’s obscured by those pesky, confounding social factors.

If only Tay had taken advice from Dr. Dre: "I don't smoke weed or sess / Cause it's known to give a brother brain damage / And brain damage on the mic don't manage nothing / But making a sucker and you equal..."

This hope is at odds with AI’s Dark Secret, the one that never seems to make it into the press alongside claims about AI’s up-and-coming “singularity”: solutions to the most interesting problems in AI rely entirely on training data. Some of the learning is supervised, some of it is unsupervised, but it all still depends on the data it’s fed. With that, it comes to replicate whatever it’s been provided: garbage in, garbage out.

And so, this is where the shock comes from. For the first time, white, male quantitative researchers are smacked upside the head with the reality that the world exhibits sexist and racist tendencies. The data they’ve provided gets digested, and its biases get learned along with everything else. It turns out that building a perfectly logical, deductive system free from bias, a consciousness liberated from the social confines of human existence, isn’t just hard but possibly impossible.

This isn’t a bad thing, though it’s perhaps disappointing to a slowly dying vision of AI. The upside is that the majority of the evidence to date for *ist tendencies in society has been qualitative. You have to trust individuals’ synopses of their aggregate subjective experiences to accept that privilege and bias exist. Right here, we’re seeing quantitative evidence that supports their testimonies.

There’s a twofold effect there. Hopefully, it opens quantitative researchers up to better acknowledging the validity of qualitative research. Simultaneously, it confirms the findings of much of that qualitative research by discovering the same things from a totally different angle. That sort of independent confirmation is ideal in scientific work, and this convergence is exactly that: decades of social science research supported by evidence from entirely different methods. In a discipline filled with men, this is unequivocal evidence, derived from the discipline’s own methods, that there are issues that need to be addressed. Sexism and racism suck, but with AI finally bumping into them and providing firm support for them as real issues, perhaps we can have better luck garnering public support in the larger social sphere.

2 Replies to “It’s Good* That Word Embeddings Are Sexist”

  1. Some excellent points! I do have one tiny quibble: “For the first time, quantitative researchers are smacked upside the head with the reality that the world exhibits sexist and racist tendencies.”

    This might perhaps be better revised to “white, male quantitative researchers”. Female and minority researchers are all too aware of the realities of sexism/racism. I remember one particularly chilling recent poll where almost half of the Black and Latina female scientists polled reported being mistaken for custodial staff: https://thesocietypages.org/socimages/2015/07/02/nearly-half-of-black-and-latina-stem-workers-mistaken-for-janitors-and-assistants/
