Protected classes and analytics in Specialty Insurance
Analysing risk with new methods will force insurers to be more proactive in finding the inevitable pockets of discrimination.
https://www.datadive.systems/protected-classes1
It is a fundamental principle of US insurance, often enforced by state and federal law, that protected classes, such as race, national origin, sex, or religion, must not be subject to unfair discrimination.
Typical forms of unfair discrimination are unbalanced premiums and redlining. Unbalanced premiums occur where the same risk is priced or accepted differently based on factors that lead to protected classes being treated negatively. A commonly cited example of redlining is the Home Owners’ Loan Corporation, a government agency from the 1930s that used maps limiting where it would lend; those maps often delineated areas that were overwhelmingly racially distinct.
Legally, the concept of ‘unfair discrimination’ refers to making explicit choices, independent of risk, that infringe on a protected group. However, discrimination may also take the form of ‘disparate impact’.
Disparate impact can occur without overt discrimination. It can be an indirect effect of choices that appear independent of the protected classes but nonetheless affect them negatively. Factors such as ZIP codes or income may appear to be actuarially sound criteria for setting rates, but in practice they can easily become proxies for race. Black-box AI models can easily create disparate impact because it is generally impossible to make their internal calculations transparent.
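A minimal sketch of the proxy effect, using entirely synthetic data: the rating plan below prices only on ZIP code and never sees any protected attribute, yet because ZIP and group membership happen to be correlated in the synthetic book, average rates still diverge by group.

```python
# Synthetic illustration only: all rates, ZIPs, and group labels are invented.
# Base rate per ZIP, set purely on loss experience; no protected attribute used.
rate_by_zip = {"10001": 1.00, "10002": 1.45}

# Synthetic policyholders: ZIP happens to correlate with group membership.
policyholders = [
    {"zip": "10001", "group": "A"},
    {"zip": "10001", "group": "A"},
    {"zip": "10001", "group": "B"},
    {"zip": "10002", "group": "B"},
    {"zip": "10002", "group": "B"},
    {"zip": "10002", "group": "A"},
]

def mean_rate(group):
    """Average rate charged to members of a group under the ZIP-only plan."""
    rates = [rate_by_zip[p["zip"]] for p in policyholders if p["group"] == group]
    return sum(rates) / len(rates)

# Although the rating plan never referenced "group", mean_rate("B") comes out
# higher than mean_rate("A"): the ZIP factor has acted as a proxy.
```

The point of the sketch is that nothing in the rating logic is overtly discriminatory; the disparity only becomes visible when outcomes are summarised by group after the fact.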
While laws vary between states and by insurance type, the fear of creating an unintentional disparate impact can be used as an argument against searching for complex risk relationships, whether by human-supervised methods or black-box AI. Keeping the analysis of risk simple is seen as protecting the insurer against claims of discrimination.
With so many policies being sold in so many ways, there will inevitably be areas where, if only by chance, some evidence of discrimination could be discovered somewhere, at some level. Claiming that the insurer did not know, because it chose not to proactively check for unintentional pockets of discrimination, is probably a weak defence. Yet allowing pockets of discrimination to exist by default is still seen as better than looking for those inevitable pockets. As things stand, ignorance is a winning strategy.
However, the push towards using more complex analysis to improve risk performance, whether by traditional means or by black-box AI, challenges the ignore strategy.
As analytics looks at multi-level and multi-factor risk profiles, the opportunity for unintended discrimination increases.
AI black-boxes will identify associations of risk, but the causality will be opaque. Every AI black-box used in pricing and rate-making will need an anti-discrimination algorithm checking its conclusions. The process cannot be monitored from within, so it will need to be monitored afterwards.
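One simple form such after-the-fact monitoring could take is a disparate impact ratio on the model's decisions. The sketch below is an assumption, not a regulatory standard for insurance: the 0.8 threshold echoes the "four-fifths rule" from US employment law, and insurance regulators may apply different tests entirely.

```python
def adverse_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of favourable-outcome rates between two groups.

    outcomes_a / outcomes_b: lists of booleans, True meaning a favourable
    decision (e.g. the quote was offered or the policy bound).
    Returns a value in (0, 1]; 1.0 means the groups fare identically.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    low, high = sorted([rate_a, rate_b])
    return low / high

def flag_for_review(outcomes_a, outcomes_b, threshold=0.8):
    """Flag the model's output for human review when the ratio falls below
    the (assumed) 0.8 threshold; the threshold is a policy choice."""
    return adverse_impact_ratio(outcomes_a, outcomes_b) < threshold
```

For example, if group A binds 90% of quotes and group B only 60%, the ratio is 0.6/0.9 ≈ 0.67, which falls below 0.8 and would be flagged for review. The check deliberately treats the model as a black box: it looks only at decisions, not at internals.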
This proactive checking is perhaps overdue even with the relatively simplistic risk modelling we presently use. We will have to actively search for discrimination, rather than passively assume that there is none.
Once we start discovering pockets of discrimination, we will need appropriate strategies to fix them. For instance, we may proactively push the offending metric, whether it is premium, quote-to-bind ratio, or marketing spend, towards the mean value of the whole population.
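That mean-reversion idea can be sketched in a few lines. This is a hypothetical remediation, not an established actuarial procedure: the `weight` parameter (my addition) controls how far the affected segment's values are pulled towards the population mean, with 1.0 snapping them fully onto it.

```python
def adjust_to_population_mean(segment_values, population_mean, weight=1.0):
    """Pull each value in an affected segment towards the population mean.

    weight = 0.0 leaves values unchanged; weight = 1.0 replaces every value
    with the population mean; intermediate weights blend the two.
    Hypothetical sketch; real remediation would need actuarial and legal review.
    """
    return [v + weight * (population_mean - v) for v in segment_values]

# E.g. premiums of 100 and 200 in a flagged segment, population mean 150:
# weight=0.5 moves them halfway, to 125 and 175.
```

In practice the choice of weight would itself be a judgement call, trading off how quickly the disparity is closed against how much actuarially justified differentiation is given up.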
We could have started building these active strategies to look for the inevitable pockets of discrimination sooner, but if we want the benefits of complex risk analysis we will have to move away from the ‘ignore’ strategy.