How Insurance Collides With Climate Change: The Risky Business of Predicting Where Climate Disaster Will Hit

Courtesy of Bloomberg, a look at how climate tech companies can tell you the odds that a flood or wildfire will ravage your home. But what if their odds are all different?

Humans have tried to predict the weather for as long as there have been floods and droughts. But in recent years, climate science, advanced computing and satellite imagery have supercharged that ability. Computer models can now gauge the likelihood of fire, flooding or other perils at the scale of a single building lot, looking decades into the future. Startups that develop these models have proliferated, buoyed by venture capital and private equity.

The models are already guiding the decisions of companies around the US and across the global economy. Hoping to climate-proof their assets, government-sponsored mortgage behemoth Fannie Mae, global insurance broker Aon Plc, major insurers such as Allstate Corp. and Zurich Insurance Group AG, large banks, consulting firms, real estate companies and public agencies have flocked to modelers for help. Two ratings giants, Moody’s Corp. and S&P Global Inc., have brought risk modeling expertise in-house through acquisitions.

There’s no doubt that this future-facing information is badly needed. The Federal Emergency Management Agency — often criticized for the inadequacy of its own flood maps — will now require local governments to assess future flood risk if they want money to build back after a disaster.

But there’s a big catch. Most private risk modelers closely guard their intellectual property, which means their models are essentially black boxes. They’re often not transparent enough to allow for rigorous independent vetting. A White House scientific advisers’ report warned last year that climate risk predictions from a “burgeoning” new industry were sometimes “of questionable quality.” And the research nonprofit CarbonPlan puts it even more starkly: Decisions informed by models that can’t be inspected “are likely to affect billions of lives and trillions of dollars.”

Everyone on the planet is exposed to climate risk, and modeling is an indispensable tool for understanding the biggest consequences of rising temperatures in the decades ahead. But zooming in closer with forecasting models, as the policymakers and insurers adopting these tools are already doing, could leave local communities vulnerable to unreliable data that can’t be double checked. As black-box models become the norm across industries, including housing and insurance, there’s real danger that decisions made with these tools can harm people with fewer resources or less ability to afford higher costs.

One telltale sign of this uncertainty can be seen in the startlingly different outcomes from a pair of models designed to measure the same climate risk.

A Bloomberg Green analysis of two flood-risk models, based on new academic research, finds they clash with each other more than they agree. When compared only on a single, relatively simple metric, the models match just 21% of the time.

Bloomberg Green compared two different models showing areas in California’s Los Angeles County that are vulnerable to flooding in a once-in-a-century flood event. The analysis considers only current flood risk.
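A head-to-head comparison of this kind can be sketched in a few lines. The snippet below computes one plausible agreement metric, the share of parcels flagged by either model that both models flag (intersection over union), on randomly generated stand-in grids. Bloomberg's exact metric and the models' real flood footprints are not public, so the arrays and the 8% flag rate here are purely illustrative.

```python
import numpy as np

# Hypothetical binary grids: True where each model flags a parcel as
# exposed in a 100-year flood. Real comparisons would use the models'
# actual footprints; these random arrays are illustrative only.
rng = np.random.default_rng(0)
model_a = rng.random((100, 100)) < 0.08
model_b = rng.random((100, 100)) < 0.08

# Of all parcels flagged by either model, what share is flagged by both?
both = np.logical_and(model_a, model_b).sum()
either = np.logical_or(model_a, model_b).sum()
agreement = both / either
print(f"Agreement on flagged parcels: {agreement:.1%}")
```

Note that this metric ignores the many parcels neither model flags; a naive cell-by-cell accuracy score would look far higher simply because most of any county is not in a floodplain, which is one reason agreement rates like 21% are a more honest summary of how much two flood models actually overlap.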

One model was released in 2020 by New York-based First Street Technology Inc. If the American public has any familiarity with climate risk modeling, it’s thanks to First Street risk scores, which have been integrated into millions of online real-estate listings. The other, new model was created by a team led by researchers at the University of California at Irvine. Those researchers undertook their own, more detailed comparison of their model with First Street’s, which was published last month in the peer-reviewed journal Earth’s Future. The Irvine team focused on First Street in part because its data is used by many government agencies.

There can be real-world consequences from these data disagreements. Depending on which models a government office or insurer considers, it could mean building protective drainage in a relatively safe area or raising premiums in the wrong neighborhood.

“If you see a flood model, how much confidence can you have in the information?” asks Brett Sanders, a civil and environmental engineering professor at Irvine and a co-author of the study. He and his colleagues sought “to reveal there are significant differences in models trying to characterize the same thing.” The researchers, using slightly different datasets than Bloomberg, found an agreement rate between their model and First Street’s of about one in four. They also found contrasting social patterns in where the models spot risk, with Irvine’s pointing to higher exposure for Black and disadvantaged populations and First Street’s for White and more affluent communities.

Model inputs — that is, which metrics are fed into a black box and how they’re weighted — obviously affect the outputs. Sanders and his colleagues created their model specifically for Los Angeles County. It incorporates high-resolution ground-elevation data, as well as granular information on local drainage infrastructure, like storm sewers. First Street’s model, on the other hand, covers the whole US. It accounts for coastal flooding caused by waves and storm surge, whereas Sanders’ team considered only rain-induced and riverine flooding.

In their paper, Sanders and his co-authors — Jochen Schubert, associate research specialist at Irvine, and Katharine Mach, a professor of environmental science and policy at the University of Miami — assert that their work is “likely” more accurate than First Street’s, due to their use of fine-resolution data and efforts to check the model against outside sources. But the researchers also acknowledge a level of overall uncertainty that only emphasizes the need to validate different flood models. (Sanders and Schubert hold an equity interest in risk modeler Zeppelin Floods LLC.)

Noting that Irvine and First Street came to “nearly polar opposite” findings regarding the demographics of communities each sees as exposed to high risk, the researchers observe that such large discrepancies “could radically reshape assessment” of where in Los Angeles should be prioritized for flood-defense projects and funding. Poor and minority communities have a history of being neglected when it comes to flood prevention.

First Street disputes the Irvine team’s conclusions. In written responses to Bloomberg Green, founder and Chief Executive Officer Matthew Eby said the best way to validate models is to check them against real-world flooding events, not other models. He said First Street’s predictions correlate more closely than Irvine’s with flood damage claims made to FEMA’s Individual Assistance program: “This shows our model has more skill.”

Eby attributed the demographic disparity to the Irvine researchers “erroneously” modeling the area along the Los Angeles River — a result, he said, of not accounting for how mid-20th-century flood-control work changed streamflow and water levels. “This leads to an overestimation of flood risk in the area, and given the demographic makeup of the surrounding neighborhoods, drives the material divergence.”

To fill out the picture, Bloomberg reviewed a third model, created by the property information firm CoreLogic Inc., one of the largest business-to-business information providers in the US. CoreLogic’s data is used by many banks and insurers, and by the federal National Flood Insurance Program to help set rates for some 5 million US homeowners. CoreLogic’s model agrees with First Street’s or Irvine’s less than 50% of the time on which properties are at high or extreme risk.

Anand Srinivasan, a CoreLogic executive in climate risk analytics, and Mahmoud Khater, the company’s chief climate risk and hazard officer, readily acknowledged the differences among the models’ findings. CoreLogic’s approach, they said, incorporates detailed information about the structures on a parcel of land, and its risk scores are based on estimated financial losses caused by flooding events, not just the flooding itself. That relationship is non-linear: a house with two inches of standing water might sustain four times as much damage as a house with one inch of floodwater.
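The non-linear depth-to-damage relationship CoreLogic describes can be illustrated with a toy curve. The function below is a made-up convex example, not CoreLogic's actual model, chosen so that doubling the water depth from one inch to two quadruples the estimated loss, mirroring the example above.

```python
# Illustrative depth-damage curve (values are invented, not CoreLogic's):
# losses grow faster than linearly with depth, because deeper water
# reaches wiring, drywall and appliances at different heights.
def damage_fraction(depth_inches: float) -> float:
    """Toy convex loss curve: share of structure value lost."""
    if depth_inches <= 0:
        return 0.0
    return min(1.0, 0.05 * depth_inches ** 2)

# Two inches of water causes four times the loss of one inch here.
print(damage_fraction(1))  # 0.05
print(damage_fraction(2))  # 0.2
```

Because the loss curve is convex, two models that disagree only slightly on predicted water depth can disagree substantially on predicted dollar losses, which is one way hazard-level agreement and financial-risk agreement come apart.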

A new analysis by CarbonPlan also suggests that some private risk models are out of sync with each other. The research nonprofit compared two models provided by Jupiter Intelligence Inc. and XDI Pty Ltd., looking at three different risks: fire, coastal flooding and river and rainfall flooding. Companies that use Jupiter data or invest in the firm include BP Plc and Liberty Mutual Group Inc.; XDI has worked with BlackRock Inc. and HSBC Holdings Plc.

Both models estimate that California’s fire risk will grow this century at a third of sampled locations, but they agree on only 12% of locations where that risk will increase. Projecting the risk of coastal flooding in New York City in 2100, Jupiter and XDI see vulnerable properties concentrated near the coast — but the share of locations where both concur on rising risk is only 21%.

Karl Mallon, co-founder of XDI, called CarbonPlan’s project a useful one and said much of the variation may come down to modelers’ varying sets of expertise — climate science, building codes, insurance, engineering, hydrology — and how they’re applied.

Josh Hacker, co-founder and chief scientist for Jupiter, agreed that projections can differ for many reasons. As for private climate models being black boxes, he said, “When you’re in business, you have to protect some things.” He added, “We’ve spent a lot of time and money building something that we believe could be useful in the right business context and regulatory context as well.”

Experts are well versed in modeling methods and understand how slight variations might influence the estimates. But that’s not the case for all users and potential users of such tools, including ordinary homebuyers.

Asked whether risk models convey a false precision that could skew decisions and prices, First Street’s Eby said the key “is understanding model limitations and the proper application of them given these limitations. This is why we publish our methodologies freely.”

The company publishes the methodology behind its model on its website. Yet First Street declined to share a high-resolution version of its most recent flood model, citing it as a main revenue source. Formerly a nonprofit, First Street launched a public benefit corporation earlier this year and has raised $46 million in Series A funding, aiming to scale its model globally. Investors include Innovation Endeavors and Galvanize Climate Solutions.

First Street has attained the highest profile of any climate analytics startup in the US by getting its work in front of a mass audience. The company says it can predict the risks of flooding, fire, extreme heat and high winds at the level of an individual property, even 30 years out. Those predictions are translated into simple risk scores for US addresses, which anyone can check online for free at riskfactor.com. The real estate brokerage Redfin and the listings website realtor.com include these scores in home listings.

Redfin Chief Economist Daryl Fairweather said that climate risk information is an increasingly important tool for homebuyers, and while Redfin doesn’t test First Street’s data, “we have found it to be accurate” when, for example, neighborhoods deemed high risk did indeed flood.

One person experienced with applying climate risk models to real-world circumstances is Mark Pestrella, director of Los Angeles County Public Works. Members of his staff consider multiple models, including FEMA’s and Irvine’s, to gauge flood risk around the county. Staff hydrologists also do their own data analysis.

Communicating risk responsibly requires nuance, Pestrella says, and categories like “severe risk” must be understood within the context of likelihood over a century. While the public needs to know the risks they face, there’s also the potential for properties and neighborhoods to be stigmatized.

“You have to be careful how you convey the information,” he says, so as not to create hysteria and unwittingly harm “those who can least afford to buy flood insurance, who can least afford their mortgage.”


Understanding climate risk is a bit like taking a vision test at the eye doctor. At the largest scale — the big “E” at the top of the chart, or the whole globe — the signs are the clearest. Refined by scientists over decades, climate models have proved very reliable at what they were designed to do, projecting the global effects of rising greenhouse gases.

But at progressively smaller scales and over longer time horizons — as the characters on the eye chart shrink — clarity gives way to fuzziness: You’re sure it’s a letter, just not which one. Climate models can be like that. They are simply better at projecting averages than extremes. Outlier events, like 1-in-100-year storms, are still hard to predict.

“It’s a sad irony that the higher the impact, the greater the uncertainty,” says Katharine Hayhoe, a climate scientist at Texas Tech University and chief scientist for the Nature Conservancy.

In addition, scientists have not reached consensus on whether climate change will tilt the Pacific toward more of its warming El Niño phases or its cooling La Niña phases, a periodic oscillation now swinging from a fading El Niño to an emerging La Niña. Such uncertainty makes projections that much harder.

Whatever the climate influence on risk is, it may not matter as much as on-the-ground details. The risk of flooding on a particular city block, for example, could be influenced by micro factors such as roof age, the size of water pipes and the height of nearby levees, and these can have an outsized effect on local impacts. And layering projections of climate change over the next two or three decades on top of flood-risk estimates that already disagree with each other is unlikely to clarify matters.

“There’s so much uncertainty in even what the uncertainties are,” says Daniel Swain, a climate scientist at the University of California at Los Angeles who advises two analytics startups, Reask Pty Ltd. and ClimateCheck Inc.

This doesn’t mean the models are useless. What scientists mean by “uncertainty” isn’t paralyzing ignorance; it’s the range of possible answers they’re confident about. But that’s why it’s all the more important for models to be scrutinized and compared, the Irvine authors argue.

Until a few years ago, much of climate science fell to government or university researchers. They would gather all their knowledge about the climate, write it into computer software models, run their simulations and then publish their findings. Methods and data were left open for review. Incorrect equations and buggy code could be managed, because scientists were the inventors, developers and users of the products.

That’s changed with the recent wave of climate models that are privately held intellectual property. Methods and data can “stay hidden within a black box and cannot be subject to consumer scrutiny or peer review,” Madison Condon, who teaches at Boston University’s law school, warned in a 2023 paper.

CarbonPlan analysts said they approached nine analytics companies to participate in their comparative study, specifically requesting small data samples to avoid burdening them. Only two fulfilled the request. Perhaps the most striking aspect of the project was “how little data we received,” they wrote.

A lack of outside scrutiny may incentivize companies “to over-claim their accuracy or bury inconvenient findings with potential liability implications,” Justin Mankin, a climate scientist at Dartmouth College, wrote in a New York Times op-ed earlier this year.

Although modelers can now render risk levels at a very fine geographic scale, that doesn’t necessarily make the predictions reliable — and they often aren’t, according to multiple scientists. “You can get very precise and detailed outcomes, but not accurate,” says Giuliano Di Baldassarre, a hydrology professor who researches the science of disaster risk at Uppsala University in Sweden. With the property-level models in general, he says, “I see a tendency to be precisely wrong, rather than being approximately right.”

Some experts and clients of risk-analytics firms would like to see the creation of a standard-setting body that could review and validate private models. “I would hope that, down the road, there is some kind of third party, nonprofit, government agency that does start to rank these,” Redfin’s Fairweather said on a panel hosted by the University of Pennsylvania’s Wharton School in 2021. “It’s pretty much untestable in the short term, to know which of these risks is accurate.”

Ultimately, there is plenty more science to do. The question is whether it should occur “in an open source context or in a walled garden in a profit-driven corporation,” Mankin says. “I think that choice matters.”


In wildfire-scarred California, insurance companies are pushing for the right to look into the future.

By law and with few exceptions, insurers in the state have had to rely only on historical data to gauge a property’s risk of experiencing fire, flooding and other perils. As the planet warms and weather becomes more extreme, old data becomes less useful. Companies want the ability to use forward-looking projections to set rates.

The state insurance commission has offered insurers a deal: It would allow them to use catastrophe models in exchange for writing more policies in under-insured areas. The proposed regulatory change hasn’t been adopted yet.

Not everyone is on board. The advocacy group Consumer Watchdog warned that models can be “inconsistent” and potentially contain biases, citing how algorithms have led to racial discrimination in criminal sentencing and mortgage lending. The group is calling for California to create a public climate risk model instead of allowing insurers to set rates with black-box science.

The debate over how businesses and governments should consult climate risk models is already moving from the relatively narrow confines of California insurance regulation to the national scale. Some experts, including Mankin at Dartmouth, want to see the US government create a public prediction engine of its own, enabled by federal investment. That could give analysts and consumers a benchmark for comparing the outputs of private models. The report released last year by the White House’s science advisers argued for more transparency and additional ways to compare risk models used in the private sector. Allowing for greater scrutiny would improve the data underpinning them and bring financial risks from climate change into tighter focus.

Yet nothing on the horizon is likely to stop planners, insurers and others from making more decisions that rely on black-box risk models. The number of people who want to know what climate risk means for their homes, communities and businesses will only grow as weather becomes more volatile. So too will the number of companies offering answers for a price. It may not be clear for years or decades — until the flood or fire arrives — whether their predictions are right.



This entry was posted on Friday, August 9th, 2024 at 7:15 pm and is filed under Predictive Analytics.



ABOUT
BLACK SWANS GREEN SHOOTS
Black Swans / Green Shoots examines the collision between urbanization and resource scarcity in a world affected by climate change. It identifies opportunities to build sustainable cities and resilient infrastructure through revolutionary capital, increased awareness, innovative technologies, and smart design in the face of global and local climate perils.

'Black Swans' are highly improbable events that come as a surprise, have major disruptive effects, and that are often rationalized after the fact as if they had been predictable to begin with. In our rapidly warming world, such events are occurring ever more frequently and include wildfires, floods, extreme heat, and drought.

'Green Shoots' is a term used to describe signs of economic recovery or positive data during a downturn. It references a period of growth and recovery, when plants start to show signs of health and life, and, therefore, has been employed as a metaphor for a recovering economy.

It is my hope that Black Swans / Green Shoots will help readers understand both climate-activated risk and opportunity so that you may invest in, advise, or lead organizations in the context of increasing pressures of global urbanization, resource scarcity, and perils relating to climate change. I believe that the tools of business and finance can help individuals, businesses, and global society make informed choices about who and what to protect, and I hope that this blog provides some insight into the policy and private sector tools used to assess investments in resilient reinforcement, response, or recovery.