Data opportunities in Specialty Insurance, Part 3: Risk does not equal appetite

  • Risk should equal appetite, but if we care to look, we often find otherwise

    When large groups of policies are segmented by income, urbanicity, class, or business size, huge differentials in Loss Ratio will often be found. These differentials are usually missed by standard analyses.

    Any disparity in Loss Ratio that cannot be explained by business or marketing choices implies an inconsistency in price-to-risk assessment, and an appetite-risk model in need of refinement. Any difference between the actual Loss Ratio (PULR) and the target Loss Ratio (TULR), for any significant grouping, can be read as a financial opportunity.

    Risk should equal appetite however the data is sliced. Every difference means either the company is paying out more than it wants to, or quoting higher than it needs to. While a higher-than-ideal Loss Ratio is easily seen as dollars lost to a bad risk, a lower-than-ideal Loss Ratio is also a lost opportunity, indicating reduced market share and, perhaps, a poor quote-to-bind ratio.
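    The gap described above can be sketched in a few lines. A minimal illustration, in which the segment names, premiums, losses, and target ratio are all invented for the example:

```python
# Hypothetical illustration: flag segments whose actual loss ratio
# deviates from the target, and size the dollar gap either way.

segments = {
    # segment: (earned_premium, incurred_losses) -- invented figures
    "Urban / small business": (12_000_000, 9_000_000),
    "Rural / small business": (8_000_000, 4_400_000),
    "Urban / mid-market":     (20_000_000, 12_600_000),
}
TARGET_LR = 0.60  # target Loss Ratio (TULR)

for name, (premium, losses) in segments.items():
    actual_lr = losses / premium                    # actual Loss Ratio
    gap_dollars = (actual_lr - TARGET_LR) * premium
    direction = "paying out too much" if gap_dollars > 0 else "quoting too high"
    print(f"{name}: LR {actual_lr:.0%}, {direction} by ${abs(gap_dollars):,.0f}")
```

    Either sign of the gap is money on the table: a positive gap is losses above appetite, a negative gap is premium priced above the risk.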

    Unlocking the value in Data

    The mature specialty companies may not be taking the risk of data-focused competition seriously enough. Insurtechs are often not seen as a real challenge to the large Specialty Insurers. It is perhaps careless to disregard the threat of lean, data-focused start-ups, which are free to cherry-pick their markets. At the very least, these upstart challengers can be a call to action, adding some urgency to improvement.

    A key factor that differentiates a mature insurer from small (and startup) insurers is the availability of a long risk history; however, that history can only manifest its value through value-focused curation.

    Clean data + 3rd party data

    Risk modeling is based on:

    • submission information

    • underwriter knowledge

    • 3rd party data

    • policy/claim history

    Opportunity lies both in improving access to usable internal data and in the suitable use of 3rd party data. In my experience, improvements to internal data (especially making Limits, Attachments, Deductibles, Renewals, and Class Codes consistent and accessible) are often discussed, but practical efforts tend to be siloed, slow, and marginal.
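    As a toy example of the kind of cleanup involved, a minimal sketch of normalizing policy limits; the input formats and the helper name are invented, and real source systems will differ:

```python
import re

def parse_limit(raw: str) -> int:
    """Normalize limits like '1M', '$1,000,000' or '500K' to integer dollars."""
    s = raw.strip().upper().replace("$", "").replace(",", "")
    match = re.fullmatch(r"(\d+(?:\.\d+)?)([MK]?)", s)
    if not match:
        raise ValueError(f"unrecognized limit: {raw!r}")
    value, suffix = match.groups()
    scale = {"M": 1_000_000, "K": 1_000, "": 1}[suffix]
    return int(float(value) * scale)

# The same limit, as it might arrive from three different source systems:
assert parse_limit("1M") == parse_limit("$1,000,000") == parse_limit("1000000")
```

    Once the values agree, segmentation by limit band becomes possible across the whole book.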

    It is not enough for 3rd party data to be limited to submissions/on-boarding and claims. 3rd party data needs to be added alongside the policy history to create a risk context. 3rd party data vendors may offer thousands of data points, but much can be done with a handful, such as:

    • Unified Business ID

    • Business size/age

    • Clean Risk addresses

    • Credit scores/files

    • Building age (for Property)
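    A minimal sketch of what attaching such data points alongside the policy history might look like, keyed on a unified business ID; all field names and values are invented:

```python
# Two spellings of the same insured resolve to one business ID,
# so its 3rd party attributes attach to every policy in the group.

policies = [
    {"policy_id": "P-001", "insured_name": "Acme Hardware LLC", "biz_id": "B-100"},
    {"policy_id": "P-002", "insured_name": "ACME Hardware",     "biz_id": "B-100"},
    {"policy_id": "P-003", "insured_name": "Bolt Foods Inc",    "biz_id": "B-200"},
]

third_party = {  # keyed on the unified business ID
    "B-100": {"employees": 12,  "years_in_business": 9,  "credit_score": 71},
    "B-200": {"employees": 240, "years_in_business": 31, "credit_score": 88},
}

enriched = [{**p, **third_party.get(p["biz_id"], {})} for p in policies]

# The unified ID groups both 'Acme' spellings into one risk context.
acme = [p for p in enriched if p["biz_id"] == "B-100"]
assert len(acme) == 2 and all(p["credit_score"] == 71 for p in acme)
```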

    The choice of metrics can vary over time: a simple Credit Score is a necessary and ideal first step. Once the Credit Score has been assimilated, and its uses validated, one can start accessing the complete credit file to find better, more fundamental measures, which can themselves be validated (as improvements) against the simple Credit Score.
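    One way to make "validated as an improvement" concrete is to compare how sharply each measure separates good risks from bad ones on the same book. A minimal sketch; the separation metric, band names, and figures are all invented simplifications:

```python
def loss_ratio_spread(policies, band_key):
    """Spread between the worst and best band's loss ratio for a measure.
    A wider spread means the measure separates risk more sharply."""
    bands = {}
    for p in policies:
        prem, loss = bands.get(p[band_key], (0.0, 0.0))
        bands[p[band_key]] = (prem + p["premium"], loss + p["losses"])
    ratios = [loss / prem for prem, loss in bands.values()]
    return max(ratios) - min(ratios)

book = [  # invented policies, banded by both candidate measures
    {"premium": 1000, "losses": 800, "score_band": "low",  "file_band": "weak"},
    {"premium": 1000, "losses": 500, "score_band": "low",  "file_band": "strong"},
    {"premium": 1000, "losses": 550, "score_band": "high", "file_band": "weak"},
    {"premium": 1000, "losses": 450, "score_band": "high", "file_band": "strong"},
]

# Adopt the full-credit-file measure only if it beats the simple score.
assert loss_ratio_spread(book, "file_band") > loss_ratio_spread(book, "score_band")
```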

    External data has explicit upfront costs which can seem significant, but the data is necessary in order to mine the insights within the policy history. A mature insurance company can only get full value from this differential advantage with good 3rd party data.

    Baseline for data

    Specialty insurance data is often a melange of data structures from numerous independent businesses. While the data that must align to the bank ledger will be well established, other data will often need to be cleaned to be useful.

    Internal Data - clean & consistent

    • Limits/Attachments/SIRs

    • Renewal data (needed for Lifetime Value/Prior Losses studies)

    • Class Codes (aligned with NAICS/ISO)

    3rd Party data

    • Clean risk addresses (for any location analysis such as Urban/Rural risk differentials)

    • Unified Business ID - e.g. all Walmart policies can be grouped

    • Credit Score

    • Various factors appropriate to the line, e.g. Building Age or Years of Experience

    Building useful tools with non-ideal data

    Many of these elements can be proven without ideal data. A Minimum Viable Product can be achieved with an analytical tool that escapes the need for ideal data, or SOX compliance. Once a viable tool is built, using as much data at maximal granularity as possible, it can prove its usefulness both as an upper-management big-picture view and as an adjunct to the established methods of analysis at any stage of the data flow, whether in Submissions, Underwriting, Actuarial, or Claims. From there the tool can be iteratively improved.

    In the absence of 3rd party data, proxies can often be used: the ZIP code area is a good proxy for urbanicity, and the count of Class Codes is a good proxy for business size.
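    A minimal sketch of both proxies; the ZIP prefixes, thresholds, and function names are invented for illustration (a real urbanicity lookup would be driven by census density data):

```python
# 100xx, 606xx, and 941xx cover Manhattan, central Chicago, and San Francisco.
DENSE_ZIP3_PREFIXES = {"100", "606", "941"}

def urbanicity_proxy(zip_code: str) -> str:
    """Rough urban/rural flag from the 3-digit ZIP prefix."""
    return "urban" if zip_code[:3] in DENSE_ZIP3_PREFIXES else "rural"

def size_proxy(class_codes: list[str]) -> str:
    """Use the count of distinct class codes as a crude business-size proxy."""
    n = len(set(class_codes))
    return "large" if n >= 4 else "mid" if n >= 2 else "small"

assert urbanicity_proxy("10013") == "urban"   # Manhattan ZIP
assert urbanicity_proxy("59718") == "rural"   # Bozeman, MT ZIP
assert size_proxy(["91340"]) == "small"
```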

    Opportunity costs from not improving data

    An example of a largely overlooked data opportunity is suggested by Paul Brand, CEO of Convex Insurance. In four years the London-based Convex Insurance has grown its Specialty business from zero to $6b GWP.

    Paul Brand states: "You should have the lifetime value of your clients in both your rating and client assessment through all phases of the market place. The [Specialty] people who think it is all about expected loss costs in a 12 month period; it is as if they are playing rugby when they should be playing soccer."

    Whether we want a lifetime value factor or simply an accurate loss history, both are best built on 3rd party data, a unified business entity, and clean renewal data.

    Perfect analysis is the enemy of good analysis

    Data Services can make improvements, but from the business perspective those efforts may appear very slow, expensive, and conservative. The maxim 'Everything is easy until you do it' is very true for data. Data Services' commonly heard refrains are 'whatever we do has to be 100% correct' and 'who will keep the systems running 24/7/365?'.

    Analysis is not always equal, and it does not need to be equal. As discussed in Part 1, an analysis that is used for definitive financial decisions will need to balance to the ledger and fall under SEC/SOX requirements. Secondary tools used to support those same decisions can avoid those burdens. Business analysis is often a matter of being an improvement on the existing options (if any), rather than having to be perfect.

    New tools such as Alteryx and Power BI have enabled 'shadow-IT' options. At first, tools will appear alongside the existing workflow and be used to illuminate and guide choices for early adopters. In that way Minimum Viable Products can be built to be useful rather than to be exact. In their early forms they will be far from perfect; often they only need to be better than nothing. Measures of accuracy can even be built in, so that the user gets an immediate estimate of the trustworthiness of the data being viewed at any time. Obviously, care must always be taken to avoid misleading analyses, or creating tools that cannot be maintained.