Science | 2026-02-23 | 3 min read

Catastrophe Modeling Needs a Rebuild -- CERCat Is Starting the Work

Lehigh and Rice universities joined major insurance and reinsurance companies to form a consortium targeting the data gaps and model failures behind disaster loss estimates

When a major hurricane, flood, or earthquake strikes, the question that follows almost immediately -- how much will this cost? -- depends on computational models built from decades of engineering research, historical loss data, and probabilistic assumptions about how structures fail. Those models drive insurance pricing, reinsurance contracts, and government risk assessments covering trillions of dollars in exposure.

The models are also imperfect. High-profile disasters have repeatedly produced losses that diverged sharply from pre-event estimates, sometimes by a factor of two or more. Hurricane Ian in 2022 is one example; the 2011 Thailand floods, which cost the global supply chain far more than modelers anticipated, are another. The gap between what catastrophe models predict and what disasters actually cost is not merely an academic concern -- it affects the availability and pricing of insurance, the solvency of carriers, and the fiscal exposure of governments.

Building the Partnership

The Consortium for Enhancing Resilience and Catastrophe Modeling -- CERCat -- is a formal research partnership between Lehigh University and Rice University, backed by some of the largest companies in the global insurance and reinsurance market, including Swiss Re, Munich Re, Guy Carpenter, FM Global, Zurich Insurance, and Aon. The consortium held a two-day convening at Rice University in February 2025, bringing academic researchers and industry partners together to identify research priorities and sharpen collaboration.

The founding premise is that the most important improvements in catastrophe modeling require data and expertise that no single institution holds. Academic researchers have methodological depth and access to peer-reviewed science. Industry partners have proprietary claims data, operational experience with model outputs, and direct knowledge of where models fail in practice. CERCat is designed to connect those two worlds.

Where Models Break Down

Catastrophe models consist of four modules: a hazard component characterizing the physical event (wind speed, flood depth, ground motion), a vulnerability component estimating how different building types respond to that hazard, an exposure component mapping at-risk assets, and a financial component translating physical damage into insured loss. Errors or data gaps in any module propagate through to the final loss estimate.
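To make the four-module structure concrete, here is a deliberately minimal sketch in Python. Every number and curve in it is an invented placeholder for illustration -- real catastrophe models sample hazard intensities from stochastic event sets and use calibrated fragility functions, not the toy values assumed here.

```python
from dataclasses import dataclass

@dataclass
class Site:
    building_type: str    # exposure: what kind of asset is at risk
    insured_value: float  # exposure: replacement value in dollars
    deductible: float     # financial: policy deductible

def hazard_intensity(site: Site) -> float:
    """Hazard module: peril intensity at the site (e.g. peak gust, m/s).
    A real model would draw this from a probabilistic event catalog."""
    return 55.0  # assumed hurricane-force wind, illustration only

def damage_ratio(building_type: str, intensity: float) -> float:
    """Vulnerability module: fraction of value damaged at this intensity.
    These slopes are invented placeholders, not calibrated curves."""
    slope = {"wood_frame": 0.004, "masonry": 0.002}
    return min(1.0, slope.get(building_type, 0.003) * intensity)

def insured_loss(site: Site) -> float:
    """Financial module: translate physical damage into insured loss."""
    ground_up = damage_ratio(site.building_type, hazard_intensity(site)) * site.insured_value
    return max(0.0, ground_up - site.deductible)

# Exposure module: the portfolio of at-risk assets
portfolio = [Site("wood_frame", 400_000, 5_000), Site("masonry", 600_000, 10_000)]
total_loss = sum(insured_loss(s) for s in portfolio)
```

The chain structure makes the error-propagation point visible: a biased damage curve or a stale exposure record feeds directly into every downstream loss figure.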

CERCat's initial research focus includes secondary perils that standard models handle poorly -- post-wildfire debris flows, storm surge interactions with riverine flooding -- along with the vulnerability of aging and mixed-construction building stock that does not map cleanly onto model archetypes. Demand surge, the price inflation that follows a major disaster and drives actual repair costs above pre-event estimates, is another target area.
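Demand surge is often approximated as a multiplier on repair costs that grows with the size of the event, since larger disasters strain labor and materials more. The piecewise function and the thresholds below are hypothetical values chosen purely for illustration, not figures from CERCat or any production model.

```python
def demand_surge_factor(industry_loss_bn: float) -> float:
    """Hypothetical surge multiplier keyed to total industry loss (USD bn):
    small events see no surge; larger events inflate repair costs."""
    if industry_loss_bn < 10:
        return 1.0
    if industry_loss_bn < 50:
        return 1.0 + 0.005 * (industry_loss_bn - 10)   # ramps up to +20%
    return 1.20 + 0.002 * (industry_loss_bn - 50)       # slower growth beyond

# A $300k pre-event repair estimate during an assumed $30bn industry event
adjusted_cost = 300_000 * demand_surge_factor(30.0)
```

Even this crude form shows why pre-event estimates undershoot: the same physical damage costs more to repair after a large event than the baseline unit costs imply.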

Climate change adds further difficulty. Historical loss data, which underpins most model calibration, may no longer accurately reflect the frequency or intensity of future events. Integrating forward-looking climate projections into models traditionally calibrated on the past is a methodological frontier the consortium intends to address systematically.

Claims Data as a Research Resource

One of the most significant potential contributions of industry partners within CERCat is access to post-event claims data -- detailed records of what actually broke, in what kind of building, under what kind of hazard conditions. This data has historically been proprietary and unavailable to academic researchers, creating a persistent gap between model assumptions and empirical reality.

The consortium is developing data-sharing protocols that would allow anonymized and aggregated claims information to be used in academic research. Industry partners have also committed to sharing model validation challenges -- cases where internal models produced estimates that deviated substantially from observed losses -- so that researchers can focus on the specific failure modes that matter most in practice.

"The value of this data for improving model performance cannot be overstated," said Rodrigo Costa, director of the CERCat initiative at Lehigh University. "We have the engineering methods. What we often lack is the empirical feedback loop that tells us where our assumptions break down at scale."

Resilience as a Research Frame

Beyond improving loss estimate accuracy, CERCat frames its research around community resilience -- the capacity of people, systems, and infrastructure to absorb disruption and recover. Studies consistently find that every dollar spent on pre-event mitigation saves several dollars in post-event recovery, but translating that general finding into specific, site-level recommendations that influence actual policy and investment decisions requires model granularity that current tools rarely achieve.

Several working groups emerging from the February convening will focus on this translation problem -- developing methods for quantifying the risk-reduction value of specific interventions in ways credible to both regulators and insurance markets.

The problems CERCat is addressing have been recognized for decades. Its three-year initial roadmap -- graduate student fellowships, joint research projects, an annual conference, and shared data infrastructure -- represents a more systematic attempt to solve them than has previously existed.

Source: Consortium for Enhancing Resilience and Catastrophe Modeling (CERCat), Lehigh University and Rice University. Industry partners include Swiss Re, Munich Re, Guy Carpenter, FM Global, Zurich Insurance, and Aon.