In the last session of the FINDER inclusive digital innovation week, Jeremie Abiteboul, Chief Technology Advisor at DreamQuark, explained how explainable AI works, its benefits, and how DreamQuark is implementing it with customers.
What is explainable AI?
Explainable AI refers to making the decision-making process of a machine-learning model transparent and understandable to a human observer. This includes which data was used as input and how much each variable contributes to the model's decision.
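To make the idea concrete, here is a minimal sketch of per-variable contributions, assuming a simple linear scoring model; the variable names, weights, and inputs are illustrative, not DreamQuark's actual model:

```python
# Illustrative linear scoring model: the weights and customer values
# below are hypothetical examples, not real model parameters.
weights = {"age": 0.8, "premium": -0.5, "claims_history": 1.2}
customer = {"age": 0.4, "premium": 1.0, "claims_history": 0.9}  # standardized inputs

# Each variable's contribution is weight * value; the score is their sum.
contributions = {v: weights[v] * customer[v] for v in weights}
score = sum(contributions.values())

# Rank variables by absolute contribution to explain this decision.
ranked = sorted(contributions, key=lambda v: abs(contributions[v]), reverse=True)
```

In a linear model the contributions decompose the score exactly, which is why such models are often used to illustrate explainability; more complex models need dedicated attribution techniques to produce comparable breakdowns.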
Why do we need explainable AI?
There are multiple reasons why explainable AI is needed. Firstly, we need to know whether the input data is biased, because biased data leads to AI that reproduces that bias. Secondly, we need to know which variables the model weights most heavily, since these could be variables that discriminate against particular groups of people. Thirdly, an explainable AI model enables companies to address accountability and to be prepared for regulatory reporting.
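The second point above, inspecting which variables carry the most weight, can be sketched as a simple audit step; the sensitive-variable list, weights, and threshold here are hypothetical, chosen only to show the shape of such a check:

```python
# Illustrative audit: flag sensitive variables that carry large weight.
# The variable names and threshold are hypothetical examples.
SENSITIVE = {"gender", "postcode"}
model_weights = {"income": 0.6, "gender": 0.9, "tenure": 0.3, "postcode": 0.7}

def flag_sensitive(weights, threshold=0.5):
    """Return sensitive variables whose absolute weight exceeds the threshold."""
    return sorted(v for v, w in weights.items()
                  if v in SENSITIVE and abs(w) > threshold)

flags = flag_sensitive(model_weights)  # candidates for exclusion or review
```

A check like this only works when the model's weights are inspectable in the first place, which is exactly what explainability provides.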
How to implement explainable AI in insurance?
The prominent business cases that AI addresses in insurance are cross-selling and up-selling, targeted recommendations, and churn prevention. Compared with non-explainable AI, explainable AI enables customers to place greater trust in the system, validate the business relevance of the model, discover new insights in the data, check for variables that should be excluded, and meet regulatory requirements.
If you would like to learn more about explainable AI in insurance, please reach out to Jeremie Abiteboul.