AI Snake Oil (Part 3): Evaluation

In the last post, I discussed training data: mostly that you ought to have it or a way of getting it. If someone pitches you an idea without even a reasonable vision of what the training data would be, they’ve got a lot less credibility. In other words, if you can’t even envision training data for a given task, then the task itself may be impractical.

A ridiculous image to get the idea of “AI evaluation” into your head. (No offense to the creators of this actual robot; the giant red letter “F” is not an actual evaluation of your robot. I just needed an image with a CC license. I hope it ping-ponged well.)

Next, let’s talk about evaluation with respect to application development, namely, if someone pitches an AI application idea to you:

Question: Do they have an evaluation procedure built into their application development process?

Arguably, evaluation is more important than training data. I chose to discuss training data in the first post because thinking in terms of training data gives you intuitions about what’s possible. It eliminates the infinite, but still leaves you with dreams. Evaluation is where your dreams are torn to shreds, whether or not you have training data.

Fundamentally, I want to cover three things here: why we evaluate, how we evaluate, and how we score the results. Understanding these three things is essential to understanding what makes a suitable evaluation; a crappy evaluation sows false confidence, which is worse than no evaluation at all.
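To make "an evaluation procedure built into the development process" a bit more concrete, here is a minimal sketch of the usual shape of one: hold out examples the model never trains on, then score predictions against them. Everything here (the function names, the plain accuracy metric) is a placeholder for illustration, not a prescription.

import random

def evaluate(examples, train_fn, predict_fn, holdout=0.2, seed=0):
    # Shuffle a copy and split; the model never sees the held-out test set.
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    cut = int(len(examples) * (1 - holdout))
    train, test = examples[:cut], examples[cut:]

    model = train_fn(train)

    # Score with plain accuracy here; the right metric depends on the task.
    correct = sum(1 for x, y in test if predict_fn(model, x) == y)
    return correct / len(test)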

Continue reading “AI Snake Oil (Part 3): Evaluation”

AI Snake Oil (Part 2): Training Data

First in this series, I want to address the simplest and most important question to ask about a machine learning start-up or application:

Question: Is there existing training data? If not, how do they plan on getting it?

To sufficiently understand the answers to this question, you have to understand what training data is and, from there, what tasks or ideas would be extremely difficult to capture within training data. I’ll be addressing those in this post.

Most useful AI applications require training data: examples of the phenomenon they’re trying to replicate with the computer. If some start-up or group proposes a solution to a problem and they don’t have training data, you should be much more skeptical of their proposed solution; it’s now meandering into magic and/or expensive.

I like to think of training data as artificial intelligence’s dirty secret. It never gets mentioned in the press, but it is the topic of Day 1 of any machine learning class and forms the theoretical basis for what you learn the rest of the semester. Techniques that use training data are often called statistical methods, since they gather statistics about the data they’re provided in order to make predictions; this is in contrast to the rule-driven methods that came before them.
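To illustrate the contrast, here is a hedged sketch on a toy sentiment task I made up purely for this post: the rule-driven approach is a rule a human wrote by hand, while the statistical approach gets everything it knows by counting the training examples it is given.

from collections import Counter

# Training data: examples of the phenomenon we want the computer to replicate.
training_data = [
    ("what a great movie", "pos"),
    ("great acting and a great script", "pos"),
    ("what a terrible movie", "neg"),
    ("a terrible ending", "neg"),
]

# Rule-driven: a human writes the rule.
def rule_based(text):
    return "pos" if "great" in text else "neg"

# Statistical: gather statistics (word counts per label) from the training data.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in training_data:
    counts[label].update(text.split())

def statistical(text):
    return max(counts, key=lambda label: sum(counts[label][w] for w in text.split()))

print(rule_based("a terrible movie"), statistical("a terrible movie"))  # both print "neg"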

Continue reading “AI Snake Oil (Part 2): Training Data”

AI Snake Oil (Part 1): Golden Lunar Toilet

A lot of over-hyped AI claims are being thrown around right now. Leveraging this hype, some people make promises they can’t keep, no matter how dedicated or incredibly talented they are as developers. Steve Jobs may have had a so-called “reality distortion field,” but that didn’t ever spawn a conscious AI, and neither will these people.

What I do want to describe is how to tell if someone is trying to sell you AI snake oil—bullshit claims about what they can actually achieve on a realistic time and budget. Sure, with infinite resources, I could build you a gold toilet on the moon, but no one has that kind of cash lying around. Shit needs to get done, and the time and material for doing so are finite.

Anything is possible. I will make this happen for $412 billion. Please provide it in gold bullion so I can melt it down into the toilet of my own secret Swiss bank account.

If you’re approached by someone trying to sell you artificial intelligence-related software, or you read a piece in the popular press about what profession AI will uncannily crush in the next year, these are the questions you should ask. Depending on the answers, you can determine whether they’re bluffing or whether they’ve done their homework and are worth taking seriously.

I was originally going to make this one post, but it’s grown too large to fit into one. In this series, each post is centered around a question you should ask when someone wants to do something in the real world with natural language processing, machine learning, or other AI components. These questions are:

Is there existing training data? If not, how do they plan on getting it?
Do they have an evaluation procedure built into their application development process?

Each post will detail what you should expect for an answer. As I write, I might add to or revise some of these questions, so don’t consider this list definitive quite yet.

When all is said and done, there are some really great things happening in AI right now; it’s part of why I chose to invest six years of my life getting involved in computational linguistics as a field. With any big wave of technology, however, there’s also a big wave of exploitation. When people exploit the gap in knowledge between researchers and the public with hyperbole, it comes back to hurt those of us who work so hard to actually make shit that works. I hope these posts can help non-researchers think more critically about AI and give researchers a way to inform the public without dragging them through the equivalent of graduate-level coursework.

It’s Good* That Word Embeddings Are Sexist

A lot of news has been fluttering around about word embeddings being racist and sexist, e.g.:

This is a good thing, but not in the sense that sexism and racism are good. It’s good because people who work on quantitative problems don’t believe things are real without quantitative evidence. This is quantitative evidence that sexism and racism are real.

My initial reaction was surprise at how much alarm there is about this. When you live in a world that is glaringly *ist, take data from that world, and learn in an unsupervised manner, you’re going to acquire *ist knowledge. But then again, I did my graduate education in a linguistics department with a strong group of sociolinguists. I was exposed to these ideas years ago and was taught to be aware of and sensitive to these issues, and to think critically about how language can construct and reinforce racist and sexist norms, especially through prescriptivism.

I suspect a lot of the shock is coming from the stronger CS end of things–a side of the university that is more strictly quantitative. My undergrad was in physics, which I suspect has a similar distribution of social science coursework–namely, just whatever the university requires. A student might have to take sociology or anthropology, but only if the university makes them. Mine did not; I took macroeconomics in lieu of either of those.

When you’re in a quantitative program, there are a lot of hidden assumptions. One is that quantitative analysis is the only way to do anything–any other way of approaching any problem of any kind is bullshit. This is because any other approach can involve biases that a researcher is unaware of. Abstraction and measurement help remove the researcher’s preferences from the process, mitigating the effect of their biases. The procedure and the numbers are what count.

Hard-core context control.

This works great for particles in a vacuum, for problems where the context can be completely controlled, but the assumption that these standards can be universally maintained bleeds into other problems where maintaining them is realistically impossible. The air of non-bias around quantitative methods remains, even though the conditions that purged the bias in the first place are gone.

This assumption of non-bias carries over into AI research: the hope that a machine built on quantitative principles will be capable of arriving, logically and deductively, at perfect, non-biased truth–the objective truth that’s obscured by those pesky, confounding social factors.

If only Tay had taken advice from Dr. Dre: "I don't smoke weed or sess / Cause it's known to give a brother brain damage / And brain damage on the mic don't manage nothing / But making a sucker and you equal..."

This hope is at odds with AI’s Dark Secret–the one that never seems to make it into the press alongside claims about AI’s up-and-coming “singularity”: solutions to the most interesting problems in AI rely entirely on training data. Some of it is supervised, some of it is unsupervised, but it all still relies on the data it’s fed. With that, it comes to replicate whatever it’s been provided: garbage in, garbage out.
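You can poke at this yourself with a few lines of analogy arithmetic over pretrained embeddings, which is roughly the kind of probing behind those headlines. This is a minimal sketch assuming you have gensim installed and some pretrained vectors in word2vec format on disk; the file path is a placeholder, and whatever words come back were learned from the training text, not decreed by the algorithm.

from gensim.models import KeyedVectors

# Load pretrained embeddings (placeholder path; any word2vec-format file works).
vectors = KeyedVectors.load_word2vec_format("pretrained-vectors.bin", binary=True)

# "man is to doctor as woman is to ...?" -- plain vector arithmetic.
# The answers reflect the text the embeddings were trained on.
print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=5))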

And so, this is where the shock comes from. For the first time, white, male quantitative researchers are smacked upside the head with the reality that the world exhibits sexist and racist tendencies. The data they’ve provided gets digested, and its biases get learned. It turns out that building that perfectly logical, deductive system, free from bias–a consciousness liberated from the social confines of human existence–isn’t just hard, but possibly impossible.

This isn’t a bad thing–perhaps disappointing to a slowly dying vision of AI. The upside, though, is that the majority of the evidence to date for *ist tendencies in society has been qualitative. You have to trust individuals’ synopses of their aggregate subjective experiences to believe that privilege and bias exist. Right here, we’re seeing quantitative evidence that supports their testimonies.

There’s a twofold effect here: hopefully, it opens quantitative researchers up to better acknowledging the validity of qualitative research. Simultaneously, it confirms the findings of a lot of that qualitative research by arriving at the same conclusions from a totally different angle. That sort of independent confirmation is ideal in scientific work, and this convergence is exactly that. We’re seeing decades of social science research supported by evidence from entirely different methods. In a discipline filled with men, this is unequivocal evidence, derived from that discipline’s own methods, that there are issues that need to be addressed. Sexism and racism suck, but with AI finally bumping into them and providing firm support that they are real issues, perhaps we can have better luck garnering public support in the larger social sphere.

Master Key

When it comes to infosec, the magnitude of ignorance amongst people astounds me. People like this actually get taken seriously, requesting backdoors in encryption algorithms so government officials can take a peek once they get a warrant. That sounds like a good idea when he frames it that way, but encryption, data, and computers in general are really abstract. Let me give you an analogy that’s a little more concrete, and then I wanna poke at why they even want this shit in the first place.

Let’s say FBI Guy were proposing a mandate for a national master key, built into every door in the country. With a warrant, an officer of the law could get a copy of the national master key and open the door to any house.

Totally creepy, of course, knowing that at any time some guy could just show up with a magic key that opens the door to your house. Even ignoring the potential for abuse–“pretty please, we promise not to abuse our national master key privileges”–there’s the inevitability that someone will figure out what the national master key is. If there’s one of these things built into every house in the country–even if there’s a special master key for each house–there’s some pattern to figure out. Someone’s gonna want to find that pattern, because all the national mandate has done is create a puzzle to crack.
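To make the “special master key for each house” point concrete, here is a hedged sketch of how such schemes are typically built: every per-house key is derived from a single master secret, so whoever recovers that one secret can re-derive every house key. This is a generic HMAC-based derivation for illustration, not anyone’s actual proposal.

import hashlib
import hmac

MASTER_SECRET = b"national-master-key"  # the single secret everything hangs on

def house_key(house_id: str) -> bytes:
    # Every house gets a distinct key, but each one is a pure function of the
    # master secret: recover that secret once and you can open every door.
    return hmac.new(MASTER_SECRET, house_id.encode(), hashlib.sha256).digest()

print(house_key("123 Any Street, Anytown USA").hex())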

And these kinds of puzzles always get cracked. Especially when the prize is so big–access to literally every house in the country–it will get cracked. The solution will get plastered all over the Internet as a big “fuck you” right back at the people that failed to grasp the consequences of their poorly planned policies. It’s happened before, and it will happen again.

If the consequences are so bad–the neutering of every lock in the country–why do the NSA, the FBI, and seemingly every other three-letter agency want something like this?

Roughly speaking though, the FBI already has that national master key–a state monopoly on coercive force. With a warrant, they can kick down your door, shoot your dog, throw you in jail, and throw all your personal belongings into duffel bags to get torn apart in a forensics lab.

They can’t do that with encrypted data, not without millions of computer hours for decryption. That’s not as easy as kicking your door down and seizing all of your shit. That’s what they really want; from their point of view, encrypted data is a domain beyond the reach of brute force, and they want to reel it back in.

Maybe, in the end, they shouldn’t be focused on breaking encryption, but on strengthening it for everyone, including themselves. While the FBI was busy petitioning for laws that break encryption, another massive government data breach was revealed, probably including personal information about Mr. Steinbach–the very official begging for weaker standards. We’re stuck with 20th century barons imposing 20th century standards on 21st century problems.

Arranging Fundamentals

A friend of mine who does some work I shouldn’t talk about–DC living for you–once explained something that struck me as odd. Take two unclassified documents, document A and document B, and staple them together–now you may have a classified document. Just the juxtaposition of two pieces of information is information, enough to change the status of the documents.

This was so striking, in fact, that I dwelled on it for a bit and began to realize that this is literally what I do as a computational linguist: re-arrange strings in meaningful ways.
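A trivially small illustration of the point (the sentences are mine, purely for the example): the same words, in the same quantities, carry different information depending on how they are arranged. A bag of words cannot tell the two apart; the ordering can.

from collections import Counter

a = "dog bites man".split()
b = "man bites dog".split()

# Identical parts: the two sentences have exactly the same bag of words.
print(Counter(a) == Counter(b))  # True

# Different arrangement: the bigrams differ, and so does the meaning.
print(list(zip(a, a[1:])) == list(zip(b, b[1:])))  # False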

Once you start thinking about this, it appears in a lot of things, aside from the arrangement of textual information. Geographic location is just this in action: real estate is valued by what real estate is next to it; if I own a car on a different continent, that car is essentially worthless unless I’m on that continent; I’m the Emperor of the Moon of the Wholly Circumferential Lunar Empire, yet they refuse to give me a diplomat plate.

Why should information feel so different? After all, re-arranging information is fundamentally what you learn to do in school, and I’ve been in school for 21 years.

I suppose it’s just that. Especially having studied physics, one learns to boil any problem down to its principal components, down to the key relationships that apply, and to demonstrate with those relationships how the facts have come to be. You become a master at re-arranging information in the right way, and when you become a master at re-arranging information, re-arranging information feels cheap; knowing the principal components is the key to finding the solution.

I suppose this is why the juxtaposition of two documents as something meaningful is so striking–if you had access to the two documents beforehand, you had access to the principal components. As a master of re-arrangement, nothing else is required.

This couldn’t be further from the truth, though. The arrangement of things does matter, and it’s often incredibly complex. If the fundamental components were all that mattered, then if you memorized this chart of the fundamental particles of matter, you would know everything there is to know about everything.

The Standard Model of elementary particles.

But knowing this chart, you don’t know everything about everything. The particles’ combinations allow a certain freedom, and how the uncertainties left by that freedom are realized is also interesting.

Those uncertainties are just arrangements, but they’re important. They explain how cats are different from birds, why cruising in the passing lane makes you a complete asshole, and why I keep writing this essay despite having far more pressing shit on my plate. All of these things, each in its own way, are due to arrangements of arrangements of arrangements of fundamental particles, so far removed that the black box of the atomic nucleus has little (obvious) bearing on the outcome of their combination, aside from making it possible amongst an infinitude of other possibilities.

This line of thinking is common outside of physics. For example, after recent events at UVa, some have argued in favor of shutting down fraternities indefinitely. Counter-arguments in the comments, however, went along the lines of “well, if you kick them out of the frats, they’re still rapists.” They’re treating the members of the organizations as fundamental, principal components, and arguing that by dividing the principal components up, you do nothing to negate the evil contained in those components.

There’s a lot of places this could go, but I’ve made my point here, loosely enough. Arrangement is information, and it matters. Fundamental components are good to know–they give space for juxtaposition to happen–but the interesting stuff happens in how things are arranged. It’s why we’re more than quarks.

~/.ssh/config Noob Problems

I did a computer rebuild a month or two ago, and I couldn’t seem to get my ssh config file to work. I set up some aliases for a few servers I connect to, and nothing would happen when I actually tried to connect. However, if I typed the whole address in from the command line, no problems.

As an example, one of these was called “armstrong.” Turning on verbose mode made the problem clear: when I used the alias, ssh tried to connect to a different IP address than it did when I typed out the full hostname.

ssh refuses to use a config file unless the permissions for that file are set appropriately–that is, only if the user who owns the file can read and write to it.

How could that be the problem? I’m the only user on this machine.

But I’m not. When I created the files, I used sudo, because sudo is magic computer sauce that makes everything work. So technically, the ~/.ssh/config file belonged to the root user, not to me, and because of that, ssh refused to use it.

So, sudo is magic sauce. It works pretty well on a lot of things, but for some things, it ruins them.
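If you’ve already made this mistake, the fix is just handing the file back to yourself and tightening its permissions. Something along these lines should do it (the “armstrong” alias is the one from above; check the ls -l output first to confirm root really owns the file):

ls -l ~/.ssh/config              # look for root as the owner
sudo chown "$USER" ~/.ssh/config # hand the file back to your own user
chmod 600 ~/.ssh/config          # read/write for you, nothing for anyone else
ssh -v armstrong                 # verbose mode again, to confirm the alias now works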

And don’t forget,

~/.ssh/config

must be made with

vim ~/.ssh/config

NOT

sudo vim ~/.ssh/config

.