More and more data is available to insurers, but organising it for effective decision-making is no easy task. Amit Tiwari, EMEA & APAC president, Xceedance, looks at ways the industry can overcome its data challenges.
Data is the crux of the insurance industry and its most significant value generator. Having the right data is essential for every step of the insurance journey, from risk assessment to pricing and claims adjudication.
Underwriting
Data is key to the success of every underwriting function. Underwriters need a wide variety of data to balance their portfolios across high-, medium- and low-risk business.
For example, when insuring a cargo vessel, the underwriter has to consider risk exposures such as the type of goods being transported, the condition of the vessel and the geopolitical situation, as well as potential liabilities, such as oil spillages or accidents involving explosive materials, that could damage the environment.
Underwriters must take into account not only data from past events and claims history, but also future and emerging risks arising from advances in technology. These include cyber breaches and the risks posed by lithium batteries in electric vehicles and household gadgets. Underwriters need the best and most up-to-date data to assess potential losses effectively.
Compliance, claims and investments
Insurers must also comply with regulatory changes and take relevant data into consideration when designing an insurance product or making a claims payout.
Data about current events affects an insurer’s claims readiness and reserving, as well as its investment strategy.
While underwriting profits are critical to a company’s bottom line, a good investment strategy also enhances profitability, and the latest, most accurate data is essential so that investments align with how markets are behaving or a situation is unfolding.
The problem of inconsistent data
The entire insurance industry is built on data captured over several decades. As it stands, the actuarial function curates and collates that data to produce insights and guidelines that can be used in underwriting and to inform investment decisions.
However, it would be transformative for the industry if insurers could readily access one set of clean and accurate data rather than extracting it from various documents and from several different parties in the value chain.
Mistakes and omissions can creep in when the insured engages with their broker or when the broker presents risk information to various carriers.
In addition, each carrier does its own assessment and there is another layer of underwriting at reinsurance stage. At every point where data changes hands, a reconciliation effort is made to ensure output sent to each counterparty is aligned.
The problem is that the data being passed along the chain is unstructured and drawn from various sources. In addition to the direct data used to assess risk, there is also a great deal of proxy information available. If the original information submitted by the insured or the broker is incorrect, a bad customer experience is waiting down the line when a claim is denied.
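As a minimal illustration of the reconciliation effort made at each hand-off, the sketch below (with purely hypothetical field names and values) compares a broker’s version of a submission against a carrier’s re-keyed copy and flags the fields that no longer agree.

```python
# Hypothetical records: the broker's submission and the carrier's re-keyed copy.
BROKER_RECORD = {
    "insured_name": "Acme Shipping Ltd",
    "vessel_type": "container ship",
    "cargo": "lithium batteries",
    "sum_insured": 25_000_000,
}

CARRIER_RECORD = {
    "insured_name": "Acme Shipping Ltd",
    "vessel_type": "container ship",
    "cargo": "general cargo",   # mis-keyed during re-entry
    "sum_insured": 2_500_000,   # dropped a zero
}

def reconcile(source: dict, target: dict) -> list[str]:
    """Return the fields whose values differ between two versions of a record."""
    mismatches = []
    for field in source.keys() | target.keys():
        if source.get(field) != target.get(field):
            mismatches.append(
                f"{field}: broker={source.get(field)!r} carrier={target.get(field)!r}"
            )
    return mismatches

for issue in reconcile(BROKER_RECORD, CARRIER_RECORD):
    print("Mismatch ->", issue)
```

In practice this check is repeated at every link in the chain, which is exactly the duplication a shared, clean data set would remove.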
The veracity of data plays a key role in the insurance life cycle, but one player’s version of the truth is not necessarily the same as another’s. The ready availability of data is also important, yet it often has to be extracted from different documents to make sensible inferences, which can be time-consuming and expensive.
Central reusable data repository/hub
What is needed is a platform that enables and establishes rules for best practice within organisations: one that establishes a global exposure repository, including data from third parties, and thereby enables data summaries, auto-mapping and visualisation tools.
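One way to picture such a repository is as a set of standardised exposure records that keep provenance alongside the values, so downstream users can judge the quality of each data point. The sketch below is only an assumption of what one record might look like; all names and values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExposureRecord:
    """One hypothetical entry in a shared exposure repository."""
    exposure_id: str
    line_of_business: str          # e.g. "marine", "aviation", "terrorism", "cyber"
    insured_name: str
    sum_insured: float
    currency: str
    source: str                    # broker, carrier or third-party feed
    as_of: date
    attributes: dict = field(default_factory=dict)  # line-specific detail

# Example marine record combining submission data with a third-party vessel attribute.
record = ExposureRecord(
    exposure_id="MAR-2024-0001",
    line_of_business="marine",
    insured_name="Acme Shipping Ltd",
    sum_insured=25_000_000,
    currency="USD",
    source="broker submission",
    as_of=date(2024, 6, 30),
    attributes={"flag_state": "Panama", "cargo": "lithium batteries"},
)
print(record.line_of_business, record.sum_insured, record.attributes)
```

Keeping the source and as-of date on every record is what would allow auto-mapping and visualisation tools to sit on top of the hub without each carrier re-validating the data from scratch.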
The idea of a single global hub fulfilling this function may be utopian, but there could be curated data hubs for individual lines of business, such as marine, aviation, terrorism and cyber.
There are already insurtechs and data providers that are seeking to create a universal third-party data repository that can be used by multiple players. Personal lines insurance is a frontrunner in establishing such databases. One example is the ability to input a vehicle’s registration to check previous accident and claims history as well as other data about the vehicle make and model.
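A minimal sketch of that personal-lines lookup is shown below. The data source here is a local dictionary standing in for a hypothetical third-party vehicle-history service; real providers expose this kind of lookup through their own APIs, and the registration, make and claims figures are invented for illustration.

```python
# Stand-in for a third-party vehicle-history service (all entries hypothetical).
VEHICLE_HISTORY = {
    "AB12 CDE": {
        "make": "Ford",
        "model": "Fiesta",
        "prior_claims": 2,
        "last_recorded_accident": "2022-11-04",
    },
}

def lookup_vehicle(registration: str) -> dict | None:
    """Return accident/claims history and vehicle details for a registration, if known."""
    return VEHICLE_HISTORY.get(registration.strip().upper())

details = lookup_vehicle("ab12 cde")
if details:
    print(f"{details['make']} {details['model']}: "
          f"{details['prior_claims']} prior claims, "
          f"last accident {details['last_recorded_accident']}")
else:
    print("No history found for that registration")
```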
There is more work to be done in the commercial insurance space, but brokers can play an important role as data providers. They are custodians of large amounts of data as they work directly with the insured and pass this single-source information to several carriers.
Underwriting acumen is also important for creating these data banks, as underwriters look for proxy data from other sources to give themselves an edge when pricing risk. Re/insurers should expand their data capability, with data teams digging deeper into alternative sources such as third-party subscriptions, large data providers, social media and exchange-related information.
Although establishing such platforms and repositories is a challenge, as the industry embraces new technology and new ways of working, obstacles that once seemed insurmountable are now being overcome, and we should see significant advances within the next five years. Artificial intelligence has the potential to power these platforms and process ever larger amounts of data with ease, to the benefit of the entire industry.
By Amit Tiwari, EMEA & APAC president, Xceedance