
Your Next Home Insurance Nightmare: AI, Drones, and Surveillance

It was already a hectic day when my insurance broker left me a frantic voicemail informing me that my homeowners insurance had lapsed. I felt nauseous and naked. Suddenly, any leak, fire, or falling tree limb at the century-old Hudson Valley home that had been in my family for nearly 40 years could drain my bank account. I felt ashamed. How could I have let this happen? Did I forget to update a credit card? Did I miss a bill? Did I botch something with the policy? But when I checked my records, and even the Travelers website, there was nothing.

A few hours later, my panic turned to bewilderment. When I finally contacted my insurance broker, he explained the reason Travelers had revoked my policy: AI-assisted drone surveillance. My finances were apparently being jeopardized by faulty code.

I take privacy and surveillance very seriously, so much so that I started one of the leading think tanks on the topic, the Surveillance Technology Oversight Project. But while I study surveillance threats across the country for a living, I had no idea that my own insurance company was using my premiums to spy on me. Travelers not only uses aerial photography and AI to monitor its customers’ rooftops, it has also filed patents on the technology—nearly 50 patents in fact. And it may not be the only insurer spying from the sky.

It not only seemed creepy and invasive, but also abnormal. Literally abnormal: there was nothing wrong with my roof.

I’m a lazy homeowner. I hate gardening and I don’t clean as often as I should. But I still take care of the essentials. Whether it’s upgrading the electrical or installing a new HVAC system, I try to make sure my home is safe. But for Travelers’ AI, it turned out that my laziness was too big a risk to insure. Its algorithm didn’t detect any foundation issues or leaky pipes. Instead, as my broker revealed to me, the ominous threat that voided my insurance was nothing more than moss.

Where there’s moisture, there’s moss, and if a lot of it sits on a roof for an extended period, it can shorten the roof’s life. A small amount is largely harmless, and the treatment couldn’t be simpler. Sure, I could have gotten rid of the moss sooner, but life got busy, and the chore kept slipping (as the moss kept growing) through the cracks. Finally, in June, weeks before I knew my roof was being monitored, I went to the hardware store, spent $80 on a moss killer, hooked the white bottle of chemicals up to the garden hose, and sprayed the stuff on the roof. The whole thing took about five minutes. A few days later, to my relief, the moss was dying. I thought that was the end of a completely unremarkable story.

Who knows. Maybe if I had done this a month ago, Travelers’ technology would never have flagged me, never said I was an insurance risk. But one of the deep frustrations of the age of AI surveillance is that as companies and governments increasingly track our lives in ever-greater detail, we rarely know we’re being monitored. At least not until it’s too late to change our minds.

While it’s impossible to know exactly how many other Travelers customers have been targeted by the company’s surveillance program, I’m certainly not the first. In February, ABC’s Boston affiliate reported on a customer who was threatened with policy cancellation if she didn’t replace her roof. The roof was well past its expected lifespan and the customer had no leaks; she was told, however, that without a roof replacement, she would be uninsured. She said she was being forced to pay $30,000 to replace a slate roof that experts estimated could have lasted another 70 years.

Insurers need to be extremely careful about how they build their AI models. No AI can actually predict the future; the technology is trained to make guesses based on changing roof colors and grainy aerial images. Even the best models will often get their predictions wrong, especially at scale, and especially when guessing about the future of radically different roof designs on countless buildings in diverse environments. For the insurance companies designing the algorithms, that means a lot of judgment calls about when to tilt the scales in favor of or against the homeowner. And the insurance companies will have huge incentives to choose against the homeowner every time.

Think about it: Every time AI greenlights a roof that has a problem, the insurance company picks up the tab. Every time that happens, the insurance company can add that data point to its model and train it to be even more risk-averse. But when homeowners are threatened with cancellation, they pick up the tab for repairs, even if they’re unnecessary. If the Boston homeowner throws out a slate roof that has 70 years of life left, the insurance company never knows it was wrong to remove it. It never updates the model to be less aggressive on similar homes.

Over time, insurance companies will have every incentive to make their models increasingly unforgiving, threatening more and more Americans with the risk of losing coverage and potentially incurring millions or even billions of dollars in unnecessary home repairs. And as insurers face mounting losses from the climate crisis and inflation, the pressure to force unnecessary preventative repairs on customers will only increase.

Perhaps the strangest part of this ordeal is what Travelers said when I sent the company a detailed list of fact-checking questions and a request for an interview. In response, a spokesperson sent a terse denial: “AI analysis/modeling and drone surveillance are not part of our underwriting decision-making process. When this information is available, our underwriters may refer to high-resolution aerial imagery as part of an overall property condition review.”

At first, the denial made no sense given what was written on Travelers’ website and in its patent applications. Then the careful precision of the language began to stand out. What exactly counts as an “underwriting decision-making process”? When Travelers boasts online that its employees “rely on algorithms and aerial imagery to identify the shape of a roof—a typically time-consuming process for customers—with nearly 90 percent accuracy,” doesn’t that classification count as part of underwriting? And even though Travelers has conducted tens of thousands of drone flights, aren’t those part of underwriting too? And if AI and drones don’t actually affect customers, why file so many patent applications with titles like “Systems and Methods for Analyzing Roof Deterioration Using Artificial Intelligence (AI)”? I felt like the company was trying to have it both ways, boasting about using the latest and greatest technology while avoiding liability for its mistakes. When I asked the company these follow-up questions, Travelers did not respond.


Thankfully, my own roof isn’t going anywhere anytime soon, at least not yet. A few hours into my ordeal with Travelers and after I began scrambling to find new coverage, the situation resolved itself. Travelers admitted it made a mistake. It never admitted its AI was wrong to tag me. But it did reveal the reason I couldn’t find my cancellation notice: The company never sent it.

Travelers may have invested huge sums in neural networks and drones, but it apparently never updated its billing software to reliably handle the basics. Without a notice of non-renewal, the company couldn’t legally cancel my coverage. Bad cutting-edge technology hurt me; bad basic software saved me.

What’s disturbing about this whole episode is its opaqueness. When Travelers flew a drone over my house, I never knew. When they decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use increasingly opaque forms of AI to decide the course of our lives, we’re all at risk. AI may give companies a quick way to save money, but when these systems use our data to make decisions about our lives, we’re the ones who bear the risk. As infuriating as dealing with a human insurance agent is, it’s clear that AI and surveillance are not good substitutes. And unless lawmakers act, things are only going to get worse.

The reason I still have insurance is simple: consumer protection laws. New York State doesn’t allow Travelers to revoke my insurance without notice. But why are we letting companies like Travelers use AI on us in the first place without any protections? A century ago, lawmakers saw the need to regulate the insurance market and make policies more transparent, but today, updated laws are needed to protect us from AI trying to decide our fate. Otherwise, the future looks ominous. Insurance is one of the few things that protect us from the risks of modern life. Without AI safeguards, algorithms will rob us of what little peace of mind our policies offer.


Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or STOP, a civil rights and privacy advocacy group based in New York.