Exposure Management in Practice: Making It Work for Reinsurance Underwriting

Exposure Management (EM) is in something of a spotlight. In a recent Global Reinsurance article, Sanjiv Sharma, Head of Actuarial and Exposure Management at the Lloyd’s Market Association (LMA), said: “Exposure management has always been technical, but now we’re moving towards something more advanced.” His point is being echoed across the industry, not least because many modern perils – cyber being the most obvious – are proving hard to quantify with classic modelling alone. EM is often seen as a technical discipline that runs separately from underwriting. But at a recent roundtable hosted by Allphins, a group of senior EM specialists and underwriters argued for a more modern approach: that EM can be a central tool in strategic underwriting—particularly in helping reinsurers understand not just what they’re writing, but the drivers that lead to the expansion (or indeed contraction) of those risks.

A tool, not a rulebook

From the outset, our contributors agreed on one principle: EM is not the only tool in underwriting—it’s just an increasingly important one. And in the reinsurance market, it allows for more nuanced comparison of books.

“It’s one tool in your array,” says Vanessa Jones, Head of Exposure Management at Dale Underwriting Partners. “You’re trying to create relativity - relative benchmarks - so you can look at an array of risks and say which ones fit your target portfolio better.”

James Simpson, Head of Exposure Management at Blenheim Syndicate, echoed that sentiment. “The problem is when people get obsessed by setting limits based solely on EM data,” he says. “It’s best to use EM to get the bigger picture. A risk might price badly in isolation but might add valuable diversification to your portfolio.”

That diversification angle - EM as a lens on how risks are amplified or mitigated when books are combined – was a repeated refrain. Rather than being used solely as a gatekeeper, EM can give reinsurers a strategic edge by helping them assess relative positioning, understand overlaps and concentrations, and model portfolio resilience under stress scenarios.
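
To make that portfolio-level view concrete, a minimal sketch is shown below, assuming nothing more than a list of cedant exposures by peril region. All names, limits and stress factors are invented for illustration; this is not a description of any contributor’s actual workflow.

```python
from collections import defaultdict

# Hypothetical inward exposures: (cedant, peril region, limit in USD millions)
exposures = [
    ("Cedant A", "US Wind - Florida", 40.0),
    ("Cedant B", "US Wind - Florida", 25.0),
    ("Cedant B", "EU Windstorm", 30.0),
    ("Cedant C", "US Quake - California", 15.0),
]

# Aggregate limit per peril region: the overlaps and concentrations a
# combined book creates, which a single-risk view would not show.
by_region = defaultdict(float)
for _cedant, region, limit in exposures:
    by_region[region] += limit

# A deterministic stress scenario: apply an assumed damage factor to the
# regions the scenario touches (illustrative factors, not calibrated).
stress_factors = {"US Wind - Florida": 0.35, "EU Windstorm": 0.10}
stressed_loss = sum(by_region[r] * f for r, f in stress_factors.items())

for region, total in sorted(by_region.items(), key=lambda kv: -kv[1]):
    print(f"{region}: {total:.1f}m aggregate limit")
print(f"Scenario loss: {stressed_loss:.1f}m")
```

The point of the sketch is simply that this kind of aggregation answers a different question from a probabilistic model: not “how likely is the loss?” but “where is the exposure, and what does a given scenario do to it?”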

Limitations of modelling

As our roundtable guests were all EM and modelling professionals, it’s no surprise that much of the discussion centred on the limitations of probabilistic modelling. Our group broadly agreed that models are too often mistaken for crystal balls.

“People think they can model a risk to within decimal points,” says Mathias Borjesson, SVP of Underwriting at RenaissanceRe, “when in reality, history tells us we have considerably higher model uncertainty.”

Jones also highlighted the danger of letting “personal view of risk” tweaks to vendor models become gospel, even though the practice seems endemic. “We really resist tweaking,” she says, “because if you’ve got version changes coming out each year, all those adjustments shift. You end up with a house of cards.” One distinctive value of EM is that it deals in observed exposure numbers rather than predictions, so the temptation to tinker that always hangs over models is removed.

Instead, the consensus was to use EM and modelling together to build a more holistic view. Models can point to broad patterns or validate portfolio-level risk assumptions, but EM can reveal subtleties the models miss—like concentrations in unexpected regions, vintage or quality issues in the data, or exposure that doesn’t show up in standard peril definitions.

A modern tool for modern risks

But EM is also a tool whose time has come. In our favourite summary, Borjesson described EM as a “great hedge against model uncertainty”.

He says that EM picks up the slack in emerging or complex lines, such as cyber or terror, where models either don’t exist or carry so many more variables that they are harder to rely on than NatCat models. “Everyone thought the first thing they had to do was build a probabilistic cyber model,” Borjesson continues. “Now there’s a clear movement away from that. We’re focusing more on segmentation and understanding where the exposure is—like where data centres are. You don’t need a model for that. You just need clean data and a footprint.”

Simpson added: “I’ve never licensed a probabilistic terror model. I just don’t believe you can model terror probabilistically. The nature of the risk is too erratic.”
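
Borjesson’s footprint point lends itself to an equally simple sketch: if each insured is tagged with the data-centre provider it depends on, the exposure question becomes a straightforward aggregation. Provider names and limits below are hypothetical, and the choice of a pandas table is ours, not the contributors’.

```python
import pandas as pd

# Hypothetical cyber policies tagged with the data-centre / cloud provider
# each insured depends on - the "footprint" in Borjesson's sense.
policies = pd.DataFrame([
    {"insured": "Retailer X", "provider": "Provider Alpha", "limit_m": 10.0},
    {"insured": "Bank Y",     "provider": "Provider Alpha", "limit_m": 25.0},
    {"insured": "SaaS Z",     "provider": "Provider Beta",  "limit_m": 8.0},
])

# Total limit exposed to an outage at each provider - no probabilistic
# model required, just clean data and the footprint.
footprint = policies.groupby("provider")["limit_m"].sum().sort_values(ascending=False)
print(footprint)
```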

Data first, always

Of course, just as with modelling, EM relies on data: what you get out is absolutely dependent on what you put in. And EM faces some of the same data challenges as modelling – too little data, too much, lack of interoperability and more (read the article in which our contributors discuss these challenges in detail). One senior EM Manager shared a timely example from the recent CrowdStrike cyber event: “We weren’t capturing the operating systems our clients were using, so we had no idea what our exposure was,” he says. “Now we’re pushing our cover-holders to include that.”
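
That CrowdStrike lesson translates directly into a data requirement: once the operating system is captured on the bordereau, the exposure question is a simple filter, and any record where the field is missing shows up as unquantified exposure. The sketch below is hypothetical; field names and figures are invented.

```python
import pandas as pd

# Hypothetical bordereau rows; "os" is the newly requested field and is
# still missing on some records.
bordereau = pd.DataFrame([
    {"client": "Client 1", "os": "Windows", "limit_m": 12.0},
    {"client": "Client 2", "os": None,      "limit_m": 20.0},
    {"client": "Client 3", "os": "Linux",   "limit_m": 5.0},
])

# Exposure to a Windows-specific event, and how much of the book cannot
# be assessed because the field was never captured.
windows_limit = bordereau.loc[bordereau["os"] == "Windows", "limit_m"].sum()
unknown_limit = bordereau.loc[bordereau["os"].isna(), "limit_m"].sum()
print(f"Known Windows exposure: {windows_limit:.1f}m")
print(f"Exposure with unknown OS: {unknown_limit:.1f}m")
```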

“One of the barriers we have is actually having any data at all,” added another EM Manager. “On our retro book, it’s extremely rare to get useful exposure data. So we have to make a lot of assumptions.” Retrocession, in particular, poses significant difficulties. “Even if you get the data, you often don’t know what’s in it,” says Simpson. “You’re two or three parties down the line. Do you get all the peril regions…? It’s really tricky.”

That creates problems when trying to use EM to its fullest extent—because without sufficient granularity, you’ll be relying on proxies or making decisions in the dark. “If we’re writing 50% of our accounts in retro, and we only have data for half of them—can you even model that?” Simpson asked. “I’d say no.”

Our contributors were cautiously optimistic that data is improving, but only in parts of the market. “The US is generally good,” Simpson continues. “But outside that, it gets very patchy—regionally and by line of business.”

Borjesson pointed to the role of market strength: “As we’ve got bigger, we’ve been able to demand better data. But that’s not the case for everyone. Data quality is still driven by who’s asking—and how much leverage they have.”

One of our attendees also emphasised the importance of making it easy for brokers and underwriters to capture data: “Set up your underwriting systems so they can do it quickly. You have to make it painless.” Jones suggested that regulation could help: “If Lloyd’s mandated more granular data, we (EM professionals) could act as the ‘good cop’ internally: we know this is painful, but we’ll build the systems to make it easier.”

That said, meaningful change often comes from underwriters themselves – especially if it affects business. Simpson adds, “We weren’t getting exposure data on our terror book. Then one of our underwriters said to the brokers: ‘If we don’t get the EDM, we’re not quoting.’ Within a year or two, we had 90% of accounts sending proper data.”

Organisational Buy-In and Cultural Shifts

Simpson’s point matters because the other key theme was the relationship between EM and underwriting – which can range from close, sympathetic alignment to a challenging gulf. Even the best data and the most nuanced tools are useless if underwriters don’t buy in. Our contributors agreed that exposure management only works when there is cultural alignment between EM teams and underwriters.

“You need underwriters who understand the models and their limitations,” says Jones. “If EM is embedded in the way people think about risk, then it becomes a genuinely useful tool.” Simpson agreed, saying that exposure management had, in the past, been a tough sell. “But now, there’s real buy-in, especially with all the requirements from Lloyd’s. Exposure management is now recognised as a key function—like claims or capital management.”

Even so, some lines of business remain harder than others. “In casualty, the conversation is all about losses,” notes Borjesson. “It’s very hard to shift that dialogue towards portfolio concentration or exposure.”

Key Takeaways

Exposure management is more than a tool for compliance or capital modelling. It’s a strategic compass—guiding reinsurers through a landscape that’s increasingly unpredictable, multi-class, and data-driven.

EM is most effective when used alongside modelling—not as a replacement, but as a complementary lens that challenges false precision. And its success depends entirely on data: clean, complete, and timely.

It must also be embedded in the culture of underwriting teams, something that seems to be happening quite rapidly as the profile of the lines being written changes. Exposure Management’s role is not just to quantify risk, but to illuminate it—and increasingly, to shape the decisions that define reinsurance performance.
