Consumers Challenge AI-Based Insurance Claims Review
Only 17% of consumers say they would be comfortable with insurance claims for their home, renters, or vehicles being reviewed exclusively by AI. That’s according to a new survey commissioned by Policygenius, which also found that 60% of consumers would rather change insurance companies than have AI review their claims.
The results indicate a general reluctance to trust AI systems – especially “black box” systems that lack an explainability component. For example, only 12% of people polled in a recent AAA report said they would be comfortable riding in a self-driving car. High-profile AI failures of the past few years have not instilled much confidence, with AI-based recruiting tools showing bias against women, algorithms unfairly downgrading student grades, and facial recognition technology leading to false arrests.
The survey suggests that in the insurance industry, people – especially drivers and homeowners – are reluctant to sacrifice privacy, even if it earns them policy discounts. More than half (58%) of auto insurance customers told Policygenius that no savings would be worth using an app that collects data about their driving behavior and location. And only one in three respondents (32%) said they would be willing to install a smart home device that collects personal data, like a doorbell camera, water sensor, or smart thermostat.
The results are consistent with another survey – this one by insurance company Breeze – which found that 56% of consumers don’t think insurance companies should be allowed to use “big data” (for example, personal daily health data and purchasing behavior) to determine insurance policy pricing. According to Policygenius P&C insurance expert Pat Howard, consumer sentiment has not changed much in this regard.
“We’re seeing home and auto insurers integrating various data collection and analysis technologies into policy distribution, pricing, and claims, but it’s clear consumers aren’t ready to share personal data or give up the human touch for marginal savings,” Howard said in a press release.
Importance of explainability
In a recent report, McKinsey predicted that insurance would shift from its current state of “detect and fix” to “predict and prevent,” transforming all aspects of the industry. As AI increasingly becomes part of the industry, carriers must position themselves to respond to the changing business landscape, the company wrote, while insurance executives must understand the factors that will contribute to this change.
As an online insurance marketplace, Policygenius has a horse in the race. But its survey is notable in light of efforts by the European Commission’s High Level Expert Group on AI (HLEG) and the US National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” Explainability continues to present major hurdles for companies adopting AI. According to FICO, 65% of employees cannot explain how AI models arrive at their decisions or predictions.
That’s not to say that all experts are convinced AI can become truly “trustworthy.” But researchers like Manoj Saxena, who chairs the consultancy Responsible AI Institute, say “controls” can ensure awareness of the context in which AI will be used and of the conditions that could produce biased results. By engaging product owners, risk assessors, and users (e.g., policyholders) in conversations about potential AI flaws, processes can be created to expose, test, and correct those flaws.
For the insurance market in particular, the Dutch Association of Insurers (DAI) offers a possible model for responsible adoption of AI. The organization’s ethical framework for applying AI in the insurance industry, which became binding in January, requires companies to think about how best to explain the results of AI or other data-driven applications to customers before those applications are deployed.
“Human governance is extremely important; there cannot be complete dependence on technology and algorithms. Human involvement is essential for lifelong learning and for responding to questions and dilemmas that will inevitably arise,” DAI CEO Richard Weurding told KPMG, which worked with DAI on an educational campaign around the deployment of the framework. “Businesses want to use technology to build trust with customers, and human involvement is essential to achieve this.”
Responsible AI practices can deliver major business value. A Capgemini study found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to stand up for them. Companies that don’t approach the problem thoughtfully could face both reputational risk and a direct impact on their bottom line, according to Saxena.
“[Stakeholders need to] make sure that potential biases are understood and that the data used by these models is representative of the different populations the AI will impact,” Saxena told VentureBeat in a recent interview. “[They also need to] invest more to ensure that the teams designing the systems are diverse.”