We'll describe our work at Intelligent Voice on explainable AI. We are working to separate AI technology into smaller components so it can be more easily explained, to build explainability into AI architecture design, and to make it possible for AI to progress within the confines of current regulation. New GDPR regulations in Europe, which affect any company with European consumers, give people the right to challenge computer-aided decisions and to have those decisions explained. We'll discuss how existing technology can make it difficult to provide such an explanation, and how that inhibits AI adoption in customer-facing fields such as insurance, health, and financial services.
The “black box” approach to AI may well be good enough for simple applications, but what happens when life-or-death decisions are entrusted to neural networks? Without baked-in “explainability”, can we really trust the results the machines are giving us?
When AI supports decisions as consequential as a tank identifying a target, those decisions need to be understandable. We will show how the explainability ethos can be built into the design of AI architectures, replacing large black-box models with smaller explainable components. Ensembles and hierarchies of these smaller AI components can perform dedicated tasks, enabling a decision audit trail to be maintained so that decisions can be tracked and errors diagnosed. We will discuss the difference between training and inferencing and how this affects possible deployment targets, from the cloud, to the dedicated server, to embedded devices such as smartphones. Low-latency requirements can be met by efficient components optimised with TensorRT and deployed on an embedded device, while at the other end of the scale we will demonstrate how we use large-scale batch deconvolution inferencing to explain decisions made by CNNs, using state-of-the-art enterprise GPU cards.
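The audit-trail idea above can be illustrated with a minimal sketch: a pipeline of small, dedicated components that records every intermediate input and output so a decision can be traced back step by step. The component names and logic here are hypothetical stand-ins for real trained models, not Intelligent Voice's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class AuditEntry:
    """One logged step: which component ran, what it saw, what it produced."""
    component: str
    inputs: object
    output: object


@dataclass
class ExplainablePipeline:
    """A chain of small, dedicated components; every step is logged."""
    components: List[Tuple[str, Callable]]
    trail: List[AuditEntry] = field(default_factory=list)

    def run(self, x):
        # Each component's input and output is appended to the audit trail.
        for name, fn in self.components:
            y = fn(x)
            self.trail.append(AuditEntry(name, x, y))
            x = y
        return x

    def explain(self):
        # Render the audit trail as human-readable decision steps.
        return [f"{e.component}: {e.inputs!r} -> {e.output!r}" for e in self.trail]


# Hypothetical components standing in for small trained models.
pipeline = ExplainablePipeline(components=[
    ("normalise", lambda v: [x / max(v) for x in v]),
    ("score",     lambda v: sum(v) / len(v)),
    ("decide",    lambda s: "approve" if s > 0.5 else "refer"),
])
decision = pipeline.run([3.0, 4.0, 5.0])  # -> "approve"
```

Because each component is small and its inputs and outputs are logged, an erroneous decision can be localised to the step where the intermediate result first went wrong, rather than being buried inside one monolithic model.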
We will discuss hybrid deployment of AI components for real-world use cases from the automotive, insurance and voice industries, and demonstrate how clever architecture and explainable AI components can provide instantaneous feedback to the user, resulting in improved data collection quality for inferencing and training purposes.
Deep learning, assisted by GPU acceleration, is pervading many sectors, and the insurance space is no exception. We'll illustrate how deep learning applications in image and speech recognition are forming the backbone of innovative applications in the insurance industry. Real-world examples of image and speech deep learning technology are presented, demonstrating how ground-breaking applications have been engineered in the industry to automate decision support, assist humans, improve customer experiences, and reduce costs.
In contrast to demands for less regulation in the US, European financial institutions face new MiFID II and GDPR regulations which fundamentally affect how records are stored, retrieved, and destroyed. 50% of all corporate data will have a voice component in the next five years, which implies that companies need to know not only where data is being held, but also what is being said in it, and who is saying it. Part of this talk will showcase the solution produced by Telefonica/O2 and Intelligent Voice to capture, index, and analyse mobile phone calls, and to introduce them into a compliance and monitoring workflow for MiFID II. We will also show how machine learning can be applied to analysing real-time voice conversations to help spot fraud at an accuracy level on a par with humans.
A time-synchronous Viterbi search algorithm for automatic speech recognition is implemented using a counter-intuitive single-CUDA-block approach. Decoding of a single utterance is carried out on a single streaming multiprocessor (SM), and multiple utterances are decoded simultaneously using CUDA streams. The single-CUDA-block approach is shown to be substantially more efficient, and enables overlapping of CPU and GPU computation by merging tens of thousands of separate CUDA kernel calls for each utterance. The proposed approach has the disadvantage of a large GPU global-memory requirement because of the simultaneous decoding feature. However, the latest GPU cards with up to 12GB of global memory fulfil this requirement, and full utilisation of the GPU card is possible using all available SMs.
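The recursion being parallelised above is the standard time-synchronous Viterbi search: all states are advanced one frame at a time, which is what maps each utterance naturally onto one block of threads. As a reference for the computation (not the CUDA implementation itself), here is a minimal NumPy sketch over a toy HMM; the model values are illustrative only.

```python
import numpy as np


def viterbi(log_trans, log_emit, log_init):
    """Time-synchronous Viterbi: advance all states one frame at a time.

    log_trans: (S, S) log transition probabilities
    log_emit:  (T, S) per-frame log emission scores
    log_init:  (S,)   initial log state probabilities
    """
    T, S = log_emit.shape
    score = log_init + log_emit[0]           # best log-score per state so far
    back = np.zeros((T, S), dtype=int)       # backpointers for traceback
    for t in range(1, T):                    # one synchronous step per frame
        cand = score[:, None] + log_trans    # candidate scores (prev, next)
        back[t] = cand.argmax(axis=0)        # best predecessor of each state
        score = cand.max(axis=0) + log_emit[t]
    # Trace the best path backwards from the final frame.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(score.max())


# Toy two-state model: frames 0-1 favour state 0, frame 2 favours state 1.
log = np.log
trans = log(np.array([[0.7, 0.3], [0.3, 0.7]]))
emit = log(np.array([[0.8, 0.2], [0.8, 0.2], [0.2, 0.8]]))
init = log(np.array([0.5, 0.5]))
path, best = viterbi(trans, emit, init)  # path: [0, 0, 1]
```

In the GPU version described in the abstract, the per-frame update for one utterance runs inside a single CUDA block on one SM, and independent utterances like this one are decoded concurrently on separate CUDA streams.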