You know that a tech trend is growing when there are more conferences and training programs than you can shake a stick at. And when the trend gets picked up by the amazing Science Friday, you get to hear about some interesting developments and future directions.
One thing you really don't hear about often, though, is the "hidden truths." The Verge recently wrote a very nice article highlighting three places where AI falls short – training, the bane of specialization, and the black box of how the AI works.
Machine learning in 2016 is creating brilliant tools, but they can be hard to explain, costly to train, and often mysterious even to their creators.
Source: These are three of the biggest problems facing today’s AI – The Verge
I had the good fortune to work with some very talented data scientists who were regularly using machine learning on healthcare data to understand patient behavior. Also, at IBM, I was able to learn a lot about how Watson thought and how well it worked. In all cases, the three hidden truths that The Verge had commented on were evident.
Teach me
The Verge article starts by pointing out the need for plenty of data to train models. True. But for me, the real training issue is that it's never "machine learning" in the sense of the machine learning on its own. Machine learning always requires a human: to provide training and test data, to validate the learning process, to select the parameters to fit, to nudge the machine to learn in the right direction. This inevitably leads to human bias in the system.
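To make that concrete, here is a minimal sketch (using scikit-learn, with a hypothetical data-loading helper) of where the human shows up in "machine" learning. Every step below is a human choice, and every choice is a place bias can creep in.

```python
# A minimal sketch, assuming scikit-learn and a hypothetical data loader,
# showing where human choices enter the pipeline.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Human choice #1: which examples to collect and how to label them.
X, y = load_labeled_patient_features()  # hypothetical helper, not a real API

# Human choice #2: how to split the data into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Human choice #3: which model family and which parameters to fit.
model = LogisticRegression(C=1.0, max_iter=1000)
model.fit(X_train, y_train)

# Human choice #4: deciding whether the result is "good enough" to use.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```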
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” said University of Utah computer science researcher Suresh Venkatasubramanian in a recent statement.
Source: Computer Programs Can Be as Biased as Humans
This bias means that no matter how well created or how smart the AI is, it will show the bias of the data scientists involved. The article quoted above references the issue in the context of resume scanning. No, the machine won't be less biased than the human.
Taking that thought further, I am not only concerned with bias; I suspect that, with the methods we currently have, the AI cannot be smarter than the human. Yes, an AI can see patterns across huge data sets, automate certain specific complex actions, and come to conclusions – but I do not think these conclusions are any better than a well-trained human's. Indeed, my biggest question with machine learning in healthcare is whether all the sophisticated models and algorithms are really any better than a well-trained nurse. Watson really isn't better than a doctor.
But that's OK. These AIs can help humans sift through huge data sets, highlight things that might be missed, and point humans to more information that informs the human decision. Just as Google helps us remember more, AIs can help us make more informed decisions. And, yes, Watson, in this way, is actually pretty good.
The hedgehog of hedgehogs
The Verge also points out that AIs need to be hyper-specialized to work. Train the AI on one thing and it does it well. But then the AI can’t be generalized or repurposed to do something similar.
I've seen this in action. We had a product that was great at mimicking the medical billing coding a human could do. After training the system for a specific institution, using that institution's own data, the system would perform poorly when given data from another institution. We always had to train to the specific conditions to get useful results. And this applied to all our machine learning models: we always had to retrain for the specific (localized) data set. Rarely were results decent on novel, though related, data sets.
Alas, this cuts both ways. It allows us to train systems on local data to get the best results, but it also means we need people and time (and money) every time we shift to another data set.
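In code, the pattern we kept falling back to looked roughly like the sketch below – one model per institution, trained only on that institution's data. The data structures and helper here are hypothetical; the point is that every new site means a fresh (and costly) training run.

```python
# A rough sketch of per-institution retraining, assuming scikit-learn and
# hypothetical per-site data; every new site means another training run.
from sklearn.ensemble import GradientBoostingClassifier

def train_site_model(site_features, site_labels):
    """Fit a fresh model on one institution's data only."""
    model = GradientBoostingClassifier()
    model.fit(site_features, site_labels)
    return model

# institution_data is a hypothetical dict: {site_name: (features, labels)}
site_models = {
    site_name: train_site_model(features, labels)
    for site_name, (features, labels) in institution_data.items()
}
```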
This reminds me of Minsky's Society of Mind. We can often create hybrid models that offer multiple facets to be fitted to the data, letting the hybrid collection decide which sub-models reflect the data better. Might we not also use a society of agents – a hybrid collection of highly specialized AIs that collaborate and promote the best of the collection to provide the output?
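As a sketch of that idea (not anything we shipped), a "society" could be as simple as cross-validating each specialist on the new data set and promoting the one that fits it best:

```python
# A hedged sketch of a "society of agents": a collection of specialized
# models where, for each new data set, the member that explains the data
# best (by cross-validation) is promoted to produce the output.
from sklearn.model_selection import cross_val_score

def promote_best_specialist(specialists, X, y):
    """Given {name: model}, return (name, fitted_model) for the best scorer."""
    scored = [
        (cross_val_score(model, X, y, cv=5).mean(), name)
        for name, model in specialists.items()
    ]
    best_score, best_name = max(scored, key=lambda pair: pair[0])
    best_model = specialists[best_name]
    return best_name, best_model.fit(X, y)  # refit the winner on all the data
```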
Black box AI
The third and last point the Verge article makes is about showing your work. I've been in many customer meetings where we were asked: what are the parameters, what is the algorithm, how does the model think? We always waved our hands: "the pattern arises from the data," "the model is so complex, it matches reality in its own way." But at the same time, the output we'd see – the things the machine would say – clearly showed that the model could sometimes approximate the reality of the data, but not reality itself. We'd see this in the healthcare models and would need to have the output validated and the model tweaked (by a human, of course) to better reflect reality.
While black-boxing the thinking in AI isn't terrible, it makes it hard to correct any misconceptions. The example in the Verge article on recognizing windows with curtains is a great one: the AI wasn't recognizing windows with curtains, it was correlating rooms that contained beds with windows that had curtains.
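One partial remedy – and only a sketch here – is to probe the black box after the fact. Permutation importance, for instance, measures how much a trained model's score drops when each feature is shuffled; if a "curtain detector" leans hardest on a bed-related feature, you've found the correlation rather than a curtain recognizer. The model, data, and feature names below are hypothetical placeholders.

```python
# A hedged sketch: use permutation importance to see which features a
# trained (black-box) model actually relies on. trained_model, X_test,
# y_test, and feature_names are hypothetical placeholders.
from sklearn.inspection import permutation_importance

result = permutation_importance(
    trained_model, X_test, y_test, n_repeats=10, random_state=0
)
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.3f}")
```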
AI is not about the machine
The human is critical in the building and running of AIs. And, for me, AIs should be built to help me be smarter and make better decisions. Some of the hidden truths listed above become less concerning when we realize that, for now, we should stick to making AI as smart as a puppy rather than imbuing it with supposed powers of cognition beyond its human creators. AIn't gonna happen any time soon. And it will only annoy the humans.
Image from glasseyes view