It’s Good* That Word Embeddings Are Sexist

A lot of news has been fluttering around about word embeddings being racist and sexist.

This is a good thing, but not in the sense that sexism and racism are good. It’s good because people who work on quantitative problems don’t believe things are real without quantitative evidence, and this is quantitative evidence that sexism and racism are real.

My initial reaction was surprise at how much alarm there is about this. When you live in a world that is glaringly *ist, take data from that world, and learn in an unsupervised manner, you’re going to acquire *ist knowledge. But then again, I did my graduate education in a linguistics department with a strong group of sociolinguists. I was exposed to these ideas years ago and was taught to be aware of and sensitive to these issues, and to be critically aware of how language can construct and reinforce racist and sexist norms, especially through prescriptivism.

I suspect a lot of the shock is coming from the stronger CS end of things–a side of the university that is more strictly quantitative. My undergrad was in physics, which I suspect has a similar distribution of social science coursework–namely, just what the university requires, if anything. Mine required none; I took macroeconomics in lieu of sociology or anthropology.

When you’re in a quantitative program, there are a lot of hidden assumptions. One is that quantitative analysis is the only way to do anything–that any other way of approaching any problem is bullshit, because any other approach can involve biases the researcher is unaware of. Abstraction and measurement help remove the preferences of the researcher from the process, mitigating the effect of their biases. The procedure and the numbers are what count.

Hard-core context control.

This works great for particles in a vacuum–for problems where the context can be completely controlled–but the assumption that these standards can be universally maintained bleeds into other problems where maintaining them is realistically impossible. The air of non-bias around quantitative methods remains even though the conditions that purged that bias in the first place are gone.

This assumption of non-bias carries into AI research: that a machine built on quantitative principles will be capable of arriving, logically and deductively, at perfect, unbiased truth–the objective truth that’s obscured by those pesky, confounding social factors.

If only Tay had taken advice from Dr. Dre: "I don't smoke weed or sess / Cause it's known to give a brother brain damage / And brain damage on the mic don't manage nothing / But making a sucker and you equal..."

This hope is at odds with AI’s dark secret–the one that never seems to make it into the press alongside its claims about AI’s up-and-coming “singularity”: solutions to the most interesting problems in AI rely entirely on training data. Some of it is supervised, some of it unsupervised, but it all still relies on the data it’s fed. With that, it comes to replicate whatever it’s been provided: garbage in, garbage out.
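The mechanics are easy to see even in miniature. Below is a toy sketch of the projection researchers use to measure gender bias in word vectors: project a word onto a he-she axis and see which way it leans. Every vector here is invented for illustration–real embeddings would come from a trained model, not from my hand.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 2-d "embeddings", entirely made up for illustration: dimension 0
# loosely tracks a he-she axis, dimension 1 a profession axis.
vecs = {
    "he":     (1.0, 0.0),
    "she":    (-1.0, 0.0),
    "doctor": (0.6, 0.8),   # skewed toward "he" in this invented data
    "nurse":  (-0.6, 0.8),  # skewed toward "she"
}

# Project each profession onto the he-she direction; a nonzero score
# means the (toy) corpus statistics tied the word to one gender.
axis = tuple(h - s for h, s in zip(vecs["he"], vecs["she"]))
bias = {w: round(cosine(vecs[w], axis), 2) for w in ("doctor", "nurse")}
print(bias)   # doctor leans positive (he), nurse negative (she)
```

With real pretrained vectors in place of the toy ones, this same projection is what produced the skewed analogies the press picked up on: the professions land measurably off-center on the he-she axis.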

And so, this is where the shock comes from. For the first time, white, male quantitative researchers are smacked upside the head with the reality that the world exhibits sexist and racist tendencies. The data they’ve provided is digested, and its biases are learned. It turns out that building a perfectly logical, deductive system free from bias–a consciousness liberated from the social confines of human existence–isn’t just hard, but possibly impossible.

This isn’t a bad thing–perhaps disappointing to a slowly dying vision of AI. The upside is that, until now, the majority of the evidence for *ist tendencies in society has been qualitative: you have to trust individuals’ synopses of their aggregate subjective experiences that privilege and bias exist. Right here, we’re seeing quantitative evidence that supports their testimonies.

The effect is twofold: hopefully, it opens up quantitative researchers to better acknowledging the validity of qualitative research. Simultaneously, it confirms the findings of a lot of that qualitative research by discovering the same things from a totally different angle. That sort of independent confirmation is ideal in scientific work, and this convergence is exactly that: decades of social science research supported by evidence from entirely different methods. In a discipline filled with men, this is unequivocal evidence, derived from the methods within that discipline, that there are issues that need to be addressed. Sexism and racism suck, but with AI finally bumping into them and providing firm support for them as real issues, perhaps we can have better luck garnering public support in the larger social sphere.

Twitter Takes Characters, Gives Access in Return

I avoided Twitter for a long time because I’m long-winded. “140 characters? How on Earth could I get my point across?!?” I like to set the context for what I’m talking about and make my points clear.

When everyone gets to do that, though, the result is a lot of stuff to sift through–like the entire rest of the Internet. This blog, for example, is some obscure, long-winded fringe colony. Few visit, and those who do don’t take the time to read it. Hell, I wouldn’t.

By limiting complete freedom of speech to 140 characters, though, everyone gets the same amount of space to get their point across. Even if they break their post into pieces, they still have to make a conscious decision to do so; it’s a lot of work to string together one of those multi-posts.

The end result is increased access. Because a lot of noise is stripped away, you can sift through what’s interesting quickly and get to the point. It also cuts down on the influence of noisier individuals and allows better access to people who get large streams of stuff sent to them regularly, like celebrities. Folks with whom I’d never have had a chance to interact might get to see–because they have less to sift through–some clever shit I came up with.


Of course, you can’t get every point across on such a short-winded platform, but you’re not supposed to. That’s what other platforms are for, like this one. As for a public forum that provides access to those who would otherwise be out of reach, Twitter is amazing.

Arranging Fundamentals

A friend of mine who does some work I shouldn’t talk about–DC living for you–once explained something that struck me as odd. Take two unclassified documents, document A and document B, and staple them together–now you may have a classified document. The mere juxtaposition of two pieces of information is itself information, enough to change the status of the documents.

This was so striking, in fact, that I dwelled on it for a bit and began to realize that this is literally what I do as a computational linguist: re-arrange strings in meaningful ways.

Once you start thinking about this, it appears in a lot of things aside from the arrangement of textual information. Geographic location is just this in action: real estate is valued by what it’s next to; if I own a car on a different continent, that car is essentially worthless unless I’m on that continent; I’m the Emperor of the Moon of the Wholly Circumferential Lunar Empire, yet they refuse to give me a diplomat plate.

Why should information feel so different? After all, re-arranging information is fundamentally what you learn to do in school, and I’ve been in school for 21 years.

I suppose it’s just that. Especially having studied physics, one learns to boil any problem down to its principal components, down to the key relationships that apply, and to demonstrate with those relationships how the facts have come to be. You become a master at re-arranging information in the right way, and when you become a master at re-arranging information, re-arranging information feels cheap; knowing the principal components is the key to finding the solution.

I suppose this is why the juxtaposition of two documents being meaningful is so striking: if you had access to the two documents before, you had access to the principal components. As a master of re-arrangement, nothing else should be required.

This couldn’t be further from the truth, though. The arrangement of things does matter, and it’s often incredibly complex. If the fundamental components were all that mattered, then memorizing this chart of the fundamental particles of matter would tell you everything there is to know about everything.

[Chart: the Standard Model of elementary particles]

But knowing this chart, you don’t know everything about everything. The particles’ combinations allow a certain freedom, and how the uncertainties left by that freedom are realized is also interesting.

Those uncertainties are just arrangements, but they’re important. They explain how cats are different from birds, why cruising in the passing lane makes you a complete asshole, and why I keep writing this essay despite having far more pressing shit on my plate. All of these things are, in many individual ways, due to arrangements of arrangements of arrangements of fundamental particles, so far removed that the black box of the atomic nucleus has little (obvious) bearing on the outcome of their combination, aside from making it possible amongst another infinitude of possibilities.

This line of thinking is common outside of physics. For example, after recent events at UVa, some have argued in favor of shutting down fraternities indefinitely. Counter-arguments in the comments, however, went along the lines of “well, if you kick them out of the frats, they’re still rapists.” These commenters treat the members of the organizations as fundamental, principal components, and argue that dividing the components up does nothing to negate the evil contained in them.

There’s a lot of places this could go, but I’ve made my point here, loosely enough. Arrangement is information, and it matters. Fundamental components are good to know–they give space for juxtaposition to happen–but the interesting stuff happens in how things are arranged. It’s why we’re more than quarks.

The Distortion That is Learning

They were talking about Ada Lovelace on the radio the other day. They pointed out that she was the first person–working under Charles Babbage–to describe the computer as more than an adding machine. It could add, of course, but more importantly, it could follow instructions. It was–more than a mere calculator–a decision maker.

I’d been thinking about this the last few months–that the role of a programmer isn’t just to give instructions, but to bestow meaning into the machine. Of the infinitudes of programs accessible to a programmer, they choose the ones which are meaningful. Otherwise, we could just generate programs at random and call it a day.

There seemed to be something profound about this convergence of thoughts–that Lovelace and I had been thinking about the same thing, as if the whole universe pointed to my own thoughts.

That, of course, is absurdly egotistical at best. I’ve seen myself wander into this thought a number of times, though, and I’ve paid more attention as I’ve seen it arise. In some cases, it’s something I’d probably heard before but, at the time, had nothing particularly interesting to do with. With nothing to peg the idea to, it wandered back to hyperuranium. Only when I had some context to apply it to–a probabilistic dimple in my brain etched deep enough to pull the idea in–did the fact suddenly seem so profound.

It’s a lot like digging through a pile of Legos, wherein the digger develops an ever-changing myopia. With a certain problem at hand, some Legos are extremely prominent, relevant to the problem that needs to be solved at that moment. Others are just noise and join the irrelevant static of the rest of the pile.

As the digger builds, though, the process changes. The experience gained from building–or simply the progress made–changes the needs of the process. What was once a piece of noise becomes very valuable once one sees where it fits in what’s being built. The digger’s own perspective, through the learning done by building, becomes distorted from what it had previously been.

The same goes for any other learning process. As one works, one’s apparent needs change, and what once seemed irrelevant can suddenly pop out as a solution. The way one sees the world literally changes as learning occurs; the world, though, is the same as it was.

Actors and Actions

This summer–out of town, meeting many new people–I encountered far more often the unenviable dilemma of explaining my dissertation topic. Unintentionally, though, I turned it into an experiment.

Linguistics: where talking about an experiment becomes another experiment.

Typically, when introducing the topic, I presented a set of verbs–“arrest, search, apprehend, try, convict”–and asked what nouns came to mind. Most folks drew a blank. At first I thought it was a fluke, but after a sustained near-0% success rate, failing so frequently to explain to so many people what I was doing, I got my head out of my ass and admitted that I was explaining it wrong.

So instead of giving them verbs and asking what nouns came to mind, I gave them “police and suspect” and asked what words came to mind. “Arrest, search…” It worked like a charm.

It’s easy to think of the actors and the actions associated with them as interchangeable, and then to emphasize the extracted product of the process (Chambers and Jurafsky 2009). After all, that list of verbs is a project result. However, coreference chains–strings of co-referring nouns–are employed in the very first step, so it’s more sensible to convey the process nouns-first. Then, in a way, the listener becomes the project, and that’s way more interesting for them and for you.

Furthermore, this may signal a need to alter the schema construction process. Verbs are compared to one another, and though their similarity depends on their coreferent arguments, the choice of comparison depends on grammatical/referent collocations of verbs, not the juxtaposition of two actors. In this respect, the pair of actors I prompted listeners with is closer to the approach of Balasubramanian et al. (2013), which retains a pairwise relationship between role fillers throughout the extraction process.
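For contrast, here is a minimal sketch of what actor-pair-first bookkeeping looks like. The triples and names are invented for illustration; a real system would draw them from dependency parses plus coreference resolution.

```python
from collections import Counter, defaultdict

# Hypothetical (subject head, verb, object head) triples; invented data
# standing in for the output of a parser and coreference resolver.
triples = [
    ("police", "arrest", "suspect"),
    ("police", "search", "suspect"),
    ("police", "detain", "suspect"),
    ("police", "question", "witness"),
]

# Key the bookkeeping on the actor pair, keeping the two role fillers
# together through extraction, rather than comparing verb to verb.
by_pair = defaultdict(Counter)
for subj, verb, obj in triples:
    by_pair[(subj, obj)][verb] += 1

print(sorted(by_pair[("police", "suspect")]))  # ['arrest', 'detain', 'search']
```

The difference from a verb-first process is where the grouping happens: the actor pair is the unit from the start, so the relationship between role fillers never has to be reconstructed afterward.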

In the end, it’s the nouns I’m interested in. In my second Qualifying Paper, I looked at narratives related to police. Fundamentally, I was interested in what the system told me about police and how they interacted with other argument types: suspects, bystanders, etc. A noun-centric generation process may provide results better suited to this sort of analysis.

A noun-centric process may also improve performance in more challenging domains. Analyzing movie reviews, I noticed that, while the means of describing films and reviewer sentiment varied, particular roles remained constant throughout the domain: the reviewer, the director, characters in a plot synopsis, the film itself. Since that’s where I’m headed, that seems to be the way to think about things.

Synchronous Narratives, Small Data, and Measure Veracity

I’m, at the moment, looking for a particular problem to work on for my dissertation. It feels a bit backwards the way I’m going about it–I know what kind of solution I want to deploy, but I’m looking for a problem to solve with it. It’s a bit like running around the house with a hammer, looking for nails to hit, or running around with a new saw, cutting up wood into bits for the hell of it. The danger is that I could end up cutting all my wood up into tiny shavings, having had a blast with the saw but finding myself homeless at the end of the day.

My tool in this case isn’t a saw but the abstraction of narrative schemata. The idea is that, using dependency parses and coreference chains, you can extract which verbs are likely to co-occur with a shared referent. For example, arrest, search, and detain often share role fillers of some kind–police, suspect, or something referring to one of those two.
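The extraction idea can be sketched in a few lines, with the parsing and coreference steps already done by hand. The chains and verbs below are invented for illustration: count how often two verbs share a coreference chain, then score the association with pointwise mutual information.

```python
from collections import Counter
from itertools import combinations
from math import log

# Hypothetical pre-processing output: for each coreference chain, the set
# of verbs that take some mention in the chain as an argument.
chains = [
    {"arrest", "search", "detain"},
    {"arrest", "convict"},
    {"search", "detain"},
    {"eat", "sleep"},
]

pair_counts = Counter()
verb_counts = Counter()
for chain in chains:
    verb_counts.update(chain)
    for a, b in combinations(sorted(chain), 2):
        pair_counts[(a, b)] += 1

n = sum(verb_counts.values())

def pmi(a, b):
    """Association between two verbs that share a referent."""
    p_ab = pair_counts[tuple(sorted((a, b)))] / len(chains)
    return log(p_ab / ((verb_counts[a] / n) * (verb_counts[b] / n)))

print(pmi("search", "detain"))  # positive: they repeatedly share referents
```

Chambers and Jurafsky score verb pairs in roughly this way (folding in the grammatical role of the shared argument) and then chain the highest-scoring verbs together into schemata.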

A corpus of news contains all kinds of relationships like those, buried inside the language data itself. Ideally, these represent some sort of shared world knowledge that can be applied to other tasks. To demonstrate that this isn’t mere idealism is what I’m looking to do my dissertation on at the moment.

Back in the spring, I took my first attempt at this, and it went OK. My hypothesis–one of convenience, mostly–didn’t pan out, but there were interesting trends in the data. That left me with two things to sort out, though: was my hypothesis wrong, or was the measure I used to test it unsuitable? There was some minor evidence that the measure was suitable, but nothing conclusive.

Instead, I started sniffing around for other hypotheses–things someone else had already thought of that might be demonstrable with narrative schemata as an overlying application. Per my typical procrastination, I stumbled upon a recent article on Salon critiquing national press coverage of Rick Perry, claiming that the narratives presented in the national press diverge wildly from those presented in Texas papers.

Since the author has shown this qualitatively, it’s ripe for quantitative replication. It would make a great experiment for demonstrating the veracity of whatever measure I end up devising.

The difficulty comes in with corpus building. There isn’t a corpus of these texts lying around; I’d have to dig them up myself from numerous scattered sources. Additionally, the number of sources is likely to be limited–I may be able to obtain a few hundred articles if I’m relentless, whereas prior work on schemata began with millions. The robustness of the approach may be questionable in this case.

Of course, the difference in size may be the source of an interesting result in and of itself, but it’s not what I set out to demonstrate when searching for a problem that shows the veracity of my measure.