Artificial intelligence (AI) is highly effective at parsing extreme volumes of data and making decisions based on information that is beyond the limits of human comprehension. But it suffers from one serious flaw: it cannot explain how it arrives at its conclusions, at least not in a way that most people can understand.
This “black box” characteristic is starting to throw some serious kinks into the applications that AI powers, particularly in medical, financial, and other critical fields, where the “why” behind any particular action is often more important than the “what.”