First introduced in 1998, the technologies monitor various aspects of customers' behavior so the insurer can better determine their level of risk. For instance, drivers install sensors in their cars to monitor their driving habits, or wear Fitbit-like devices to track their physical activity. Insurance companies gather and analyze the data, then use it to offer premium discounts to safe drivers or to people who keep themselves in better condition.
According to economic theory, people should be more than happy to sign up for these usage-based insurance (UBI) contracts. In theory, the devices combat moral hazard, the idea that insurance encourages riskier behavior because policyholders know they will be bailed out if something goes wrong. The devices can also save consumers money. But customers have proven resistant: monitoring devices on automobiles, for instance, have only about 5% market penetration globally.
“These were supposed to be the wave of the future, but they haven’t caught on,” said Richard Peter, a professor of finance at the Tippie College of Business and an insurance expert who wondered why so many customers pass up the chance to save money on auto insurance. In a newly published study, he presents a theoretical model suggesting that the algorithm insurance companies use to determine the discounts is too confusing for most people to understand. Since they can’t see what happens inside the algorithm’s “black box,” policyholders worry they will be misclassified as bad drivers even when they don’t take unnecessary risks. The fact that many companies outsource these algorithms to third parties only adds to the confusion.
Since the whole process seems so mysterious, they take a pass.
“Consumers say forget it, I don’t need this technology, I’m sticking with the old-time contract I’ve always had,” Peter said.
The sensors also don’t understand the context of what might at first seem to be dangerous driving. A driver may have to swerve suddenly to avoid a collision, for instance; the algorithm could ding the driver solely because the maneuver was erratic, without recognizing that it was necessary to avoid a crash.
Peter said this leads to other problems for the insurance company, pointing to a German firm that piloted UBI automobile contracts. It eventually dropped the initiative because customer service representatives were overwhelmed with phone calls from drivers trying to explain why they shouldn’t be penalized for something.
Peter’s study, “Mitigating moral hazard with usage-based insurance,” was co-authored by Julia Holzapfel and Andreas Richter of Ludwig Maximilian University of Munich. It will be published in a forthcoming issue of the Journal of Risk and Insurance, the flagship journal of the American Risk and Insurance Association.