
Risks of Straight-through Processing (STP) in Underwriting and Catastrophe Modeling


The idea of automatically submitting and processing Statements of Values (SOVs) to minimize manual work and reduce errors is undoubtedly an enchanting one, but beware, for not all data is as it seems. Automating this process during a hard market without proper checks and balances is like casting a spell on a dragon with a broken wand: the consequences can be catastrophic. Before you start uttering the magic words to invoke Straight-through Processing (STP), there are a few risks to consider.


First, the lack of human review (regardless of how sophisticated the AI might be) allows critical errors to creep into the pricing methodology, leaving the underwriter unaware of the erroneous output unless the input and output are compared manually, line by line. To avoid the ‘Garbage In, Garbage Out’ (GIGO) scenario, it is crucial to understand which components of an unstructured SOV were used, or discarded, as input for the pricing model, and why. Using CAT models for account-level pricing is not an exact science; in reality, it is analogous to throwing darts at a board and rarely hitting the bullseye. AI with a human in the loop is necessary to avoid a Terminator-style doomsday scenario in the art and science of catastrophe modeling for pricing.
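The "which fields were used or discarded" question above can be made concrete with a simple audit. The sketch below, with illustrative field names that are not tied to any specific vendor or model, compares the columns present in a source SOV against those an extraction tool actually mapped, so the underwriter can see what the pricing model never saw:

```python
# Hypothetical sketch: audit which SOV fields an extraction tool actually
# mapped into the pricing model, and which were silently discarded.
# Field names are illustrative only.

def audit_sov_mapping(source_fields, mapped_fields):
    """Return the fields used by the model and those silently discarded."""
    used = sorted(set(source_fields) & set(mapped_fields))
    discarded = sorted(set(source_fields) - set(mapped_fields))
    return {"used": used, "discarded": discarded}

source = ["address", "construction", "occupancy", "year_built",
          "roof_geometry", "sprinklered", "tiv"]
mapped = ["address", "construction", "occupancy", "tiv"]

report = audit_sov_mapping(source, mapped)
print("Used:     ", report["used"])
print("Discarded:", report["discarded"])
```

Surfacing the discarded list to the underwriter, rather than burying it in a log file, is precisely the kind of line-by-line comparison that pure STP skips.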


Another critical risk of STP in catastrophe modeling is added secondary uncertainty in the model outputs. Errors or inaccuracies in the structured input data can significantly distort the model's output, leading to incorrect risk assessments and potential losses while skewing the resulting premium levels unfavorably. Since the SOV data received is rarely complete, STP tools tend to err on the side of caution by assigning the most conservative values to unknown data points, so the resulting output is seldom as accurate as expected.
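To illustrate that conservative back-filling, here is a minimal sketch of how an STP tool might substitute worst-case values for missing SOV attributes. The field names, codes, and defaults are made up for illustration; real tools use vendor-specific vulnerability classes:

```python
# Illustrative sketch: an STP tool back-fills unknown SOV attributes with
# the most conservative (highest-vulnerability) value, which skews modeled
# loss, and hence premium, upward. All codes below are invented.

CONSERVATIVE_DEFAULTS = {
    "construction": "unknown_frame",  # worst-case construction class
    "roof_geometry": "gable",         # higher wind vulnerability
    "year_built": 1900,               # oldest plausible vintage
}

def backfill(location):
    """Fill missing attributes with conservative defaults; track what was guessed."""
    filled, guessed = dict(location), []
    for field, default in CONSERVATIVE_DEFAULTS.items():
        if filled.get(field) is None:
            filled[field] = default
            guessed.append(field)
    return filled, guessed

loc = {"construction": "masonry", "roof_geometry": None, "year_built": None}
filled, guessed = backfill(loc)
print("Guessed fields:", guessed)  # these will be treated pessimistically
```

Keeping the `guessed` list alongside the model output tells the underwriter exactly where the secondary uncertainty was introduced, instead of letting conservative assumptions masquerade as data.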


STP systems are designed to work within a specific set of parameters, which limits their flexibility: if the data being processed or the analysis needs to change, the system may require significant reconfiguration to accommodate it. And because the entire process is automated, it can be difficult to see how the data is processed or analyzed; this lack of transparency makes errors and inaccuracies in the data harder to identify.


While employing AI and ML to scrub and harmonize unstructured data is excellent, insurers should carefully consider the risks associated with STP. To avoid GIGO, insurers should manually review the structured data before it enters the CAT modeling stage. By doing so, insurers can capture the benefits of AI while implementing robust quality control processes, investing in data quality and completeness, and empowering underwriters to exercise their judgment and expertise in assessing and pricing risks, thus maximizing the chances of issuing a policy at minimal risk to the insurer.
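The review step described above can be framed as a simple routing gate: records that are complete enough flow straight through, while thin ones are diverted to an underwriter before they ever reach the CAT model. The key fields and the completeness threshold below are illustrative assumptions, not a standard:

```python
# Minimal human-in-the-loop gate: a location missing too many key SOV
# attributes is routed to manual review instead of straight through to
# the CAT model. KEY_FIELDS and the 75% threshold are illustrative.

KEY_FIELDS = ["construction", "occupancy", "year_built", "tiv"]

def route(location, min_complete=0.75):
    """Return 'cat_model' if the record is complete enough, else 'manual_review'."""
    present = sum(1 for f in KEY_FIELDS if location.get(f) is not None)
    score = present / len(KEY_FIELDS)
    return "cat_model" if score >= min_complete else "manual_review"

print(route({"construction": "frame", "occupancy": "office",
             "year_built": 1998, "tiv": 2_500_000}))  # cat_model
print(route({"construction": "frame", "tiv": 2_500_000}))  # manual_review
```

A gate like this preserves most of the throughput gains of automation while guaranteeing that the weakest data, the very records most likely to trigger GIGO, always gets human eyes.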


In conclusion, while STP is a convenient tool, it can impose significant limitations on underwriters, who often rely on experience, expertise, and intuition to assess risks that the models may only partially capture. When insurers rely solely on STP, underwriters' ability to apply their judgment may be constrained, potentially leading to overly conservative or overly optimistic risk assessments. The risks of STP in underwriting and catastrophe modeling are significant and should not be overlooked. By carefully balancing the benefits of automation with human expertise and judgment, insurers can minimize the risk of errors, inaccuracies, and security breaches, ultimately taming the dragon while maximizing their chances of hitting the bullseye.


