The “Black Box” approach to AI may well be good enough for simple applications, but what happens when life-or-death decisions are entrusted to neural networks? Without baked-in “explainability”, can we really trust the results the machines are giving us?
If those decisions are for the purpose of, say, a tank identifying a target, then they need to be understandable. We will show how the explainability ethos can be built into the design of AI architectures, replacing large black-box models with smaller explainable components. Ensembles and hierarchies of these smaller AI components each perform a dedicated task, enabling a decision audit trail to be maintained so that decisions can be tracked and errors diagnosed. We will discuss the difference between training and inferencing and how this affects deployment options, from the cloud to dedicated servers to embedded devices such as smartphones. Low-latency requirements can be met by efficient components optimised with TensorRT and deployed on an embedded device, while at the other end of the scale we will demonstrate how we use large-scale batch deconvolution inferencing to explain decisions made by CNNs on state-of-the-art enterprise GPU cards.
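To give a flavour of this component-based ethos, the minimal sketch below chains small, single-purpose components and records each verdict in an audit trail, so a final decision can be traced back through every stage. All names (Component, AuditEntry, Pipeline, the example stages) are our own illustrative placeholders, not the implementation presented in the talk.

```python
# Minimal sketch: a hierarchy of small, dedicated components where every stage
# appends its verdict to an audit trail, so the final decision can be traced
# back and errors diagnosed per component. Names are illustrative placeholders.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple

@dataclass
class AuditEntry:
    component: str      # which component produced this verdict
    output: Any         # what it decided
    confidence: float   # how sure it was

@dataclass
class Component:
    name: str
    infer: Callable[[Any], Tuple[Any, float]]  # returns (output, confidence)

@dataclass
class Pipeline:
    stages: List[Component]
    trail: List[AuditEntry] = field(default_factory=list)

    def run(self, x: Any) -> Any:
        # Each small model handles one dedicated task; its result is logged
        # before being handed to the next stage, forming the audit trail.
        for stage in self.stages:
            x, conf = stage.infer(x)
            self.trail.append(AuditEntry(stage.name, x, conf))
        return x

# Illustrative stages standing in for small trained models.
detector   = Component("object_detector", lambda img: ("vehicle", 0.97))
classifier = Component("vehicle_classifier", lambda obj: ("tank", 0.91))
assessor   = Component("threat_assessor", lambda cls: ("engage" if cls == "tank" else "hold", 0.88))

pipeline = Pipeline([detector, classifier, assessor])
decision = pipeline.run("frame_0042.png")
for entry in pipeline.trail:   # the trail explains how the decision was reached
    print(f"{entry.component}: {entry.output} ({entry.confidence:.2f})")
```

Because each entry in the trail names the component, its output and its confidence, a wrong final decision can be attributed to a specific small model rather than lost inside one monolithic network.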
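For the explanation side, deconvolution-style methods project a CNN's evidence for a class back to input pixels. The hedged sketch below approximates that idea with batched input gradients in PyTorch; it is a stand-in technique with a placeholder model, not the deconvolution pipeline described in the talk.

```python
# Hedged sketch of deconvolution-style explanation: backproject a CNN's
# class evidence to the input, approximated here with plain input gradients
# computed over a whole batch on the GPU. Model and shapes are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in CNN; any trained model works
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

images = torch.rand(64, 3, 224, 224, device=device, requires_grad=True)  # one batch
scores = model(images)
top_class = scores.argmax(dim=1)

# Backpropagate each image's top-class score to the input: the per-pixel
# gradients form a saliency map showing which pixels drove the decision.
# Running this over large batches is what makes explaining every decision
# feasible on enterprise GPU cards.
scores.gather(1, top_class.unsqueeze(1)).sum().backward()
saliency = images.grad.abs().amax(dim=1)   # (64, 224, 224) heat maps
```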
We will discuss hybrid deployment of AI components for real-world use cases from the automotive, insurance and voice industries, and demonstrate how clever architecture and explainable AI components can provide instantaneous feedback to the user, improving the quality of the data collected for both inferencing and training.