A lot of over-hyped AI claims are being thrown around right now. Leveraging this hype, some individuals make promises they can’t keep, no matter how dedicated or talented they are as developers. Steve Jobs may have had a so-called “reality distortion field,” but it never spawned a conscious AI, and neither will these people.
What I do want to describe is how to tell if someone is trying to sell you AI snake oil—bullshit claims about what they can actually achieve on a realistic time and budget. Sure, with infinite resources, I could build you a gold toilet on the moon, but no one has that kind of cash lying around. Shit needs to get done, and the time and materials for doing so are finite.
If you’re approached by someone trying to sell you artificial intelligence-related software, or you read a piece in the popular press about what profession AI will uncannily crush in the next year, these are the questions you should ask. Depending on the answers, you can determine whether they’re bluffing or whether they’ve done their homework and are worth taking seriously.
I was originally going to make this a single post, but it grew too large. In this series, each post is centered on a question you should ask when someone wants to do something in the real world with natural language processing, machine learning, or other AI components. These questions are:
- Is there existing training data? If not, how do they plan on getting it?
- Do they have an evaluation procedure built into their application development process?
- Does their proposed application rely on unprecedentedly high performance on specific AI components?
- Do the proposed solutions rely on attested, reliable phenomena?
- If using pre-packaged AI components, do they have a clear plan on how they will go from using those components to having meaningful application output?
Each post will detail what you should expect for an answer. As I write, I might add to or revise some of these questions, so don’t consider this list definitive quite yet.
All said and done, there are some really great things happening in AI right now; it’s part of why I chose to invest six years of my life in computational linguistics as a field. However, with any big wave of technology comes a big wave of exploitation. When people exploit the gap in knowledge between researchers and the public with hyperbole, it comes back to hurt those of us who work so hard to actually make shit that works. I hope that these posts can help non-researchers think more critically about AI and give researchers a way to inform the public without dragging them through the equivalent of graduate-level coursework.