
7 Data Sins Series: Insufficient Model Risk Management

As a continuation of our 7 Data Sins series, we spoke with John Matway, CEO and founding partner at Red Swan Risk. During the discussion, we explored whether data models and data assignments are reliable enough to be trusted to navigate you through risky waters.

 

Q: What are the challenges around modelling securities, and why is it so difficult? When a company has just bought a risk system, doesn’t it handle coverage out of the box?

A: Sometimes there is no suitable model, or the right data is not readily at hand (yet), which prompts one to resort to proxying. Here one wants to tread even more carefully to avoid creating additional model risk. Generally speaking, model risk occurs when models don’t behave as they ought to. This may be due to an inadequate analytical model, misuse of the model, or plain input errors such as bad market data, incorrect terms and conditions, or incorrectly chosen reference data such as sector classifications, ratings, etc.

Why is this so important?

Models can misbehave at the security level for long periods before showing up at the portfolio level. Perhaps the size of the hedge was small and has grown larger, or the volatility changed. This may suddenly create distortions at the portfolio, benchmark, or higher aggregate level. These problems often surface during times of market stress and can be very resource-intensive to troubleshoot at a critical time.

Q: Why is it so resource-intensive to change, troubleshoot, and manage data?

A: When rules are hardcoded or implemented in an inflexible manner (i.e. when model queries and scripts are based on rigid, narrowly defined model schemas and inputs with too few degrees of freedom), the problem is often exacerbated, making it truly difficult to interrogate and correct the mappings when changes are critically required. Too often, the developer or analyst is given a set of functional requirements that is too narrowly defined, based on the current state of holdings and securities.

Given the dynamic nature of portfolio holdings, OTC instruments, available market data, and model improvements, it is essential to have a very flexible mapping process with transparent and configurable rules that make it much easier to identify modelling issues and resolve them efficiently. A unified data model that tracks the data lineage of both model inputs and outputs (including risk statistics, stress tests, and simulations), model choices, mapping rules, and portfolio holdings provides a highly robust and efficient framework for controlling this process. The benefit of working with a commercial tool is that it has been designed to address a very wide range of instrument types, data fields, and market data sources, so you won’t outgrow its utility. In essence, a unified model combined with data lineage capabilities means less digging and troubleshooting for the business user.
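To make the idea of transparent, configurable mapping rules concrete, here is a minimal sketch in Python of rule-driven mapping with basic lineage. The field names, curve names, and rule structure are illustrative assumptions, not the schema of any particular risk system:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Callable, Optional

# A mapping rule is plain data: a predicate over security attributes plus the
# market-data assignment it implies. Rules can be listed, audited, and edited
# without touching any code.
@dataclass
class MappingRule:
    name: str
    applies_to: Callable[[dict], bool]   # predicate over security attributes
    curve: str                           # market-data series to assign

# The result records which rule fired and when: the start of a lineage trail.
@dataclass
class MappingResult:
    security_id: str
    rule: Optional[str]
    curve: Optional[str]
    as_of: date = field(default_factory=date.today)

# Rules are evaluated in order, most specific first (hypothetical examples).
RULES = [
    MappingRule("usd_ig_corp",
                lambda s: s["type"] == "CORP_BOND" and s["rating_bucket"] == "IG",
                curve="USD_IG_SECTOR_CURVE"),
    MappingRule("usd_hy_corp",
                lambda s: s["type"] == "CORP_BOND",
                curve="USD_HY_SECTOR_CURVE"),
]

def map_security(security: dict) -> MappingResult:
    """Return the first matching assignment; unmapped securities are flagged."""
    for rule in RULES:
        if rule.applies_to(security):
            return MappingResult(security["id"], rule.name, rule.curve)
    return MappingResult(security["id"], None, None)   # coverage gap, surface it

print(map_security({"id": "XS0000001", "type": "CORP_BOND", "rating_bucket": "IG"}))
```

Because the rules are ordinary data, adding an instrument type or swapping a curve is a configuration change that leaves a record of which rule produced which assignment, which is the kind of audit trail a unified data model with lineage is meant to provide.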

Q: Can we discuss some real-life examples perhaps?

A: Some examples are…

  • Corporate bond credit risk derived from equity volatility using the CreditGrades model can cause significant distortions. A more direct method uses the observed pricing of single-name CDS or a sector-based credit curve. However, these must be properly assigned to the security, with either the correct CDS RED code or a waterfall structure for assigning the sector credit curve (see the first sketch after this list). In the case of capital structure arbitrage, where there are corporate bonds at various seniorities and CDS, it is very important that the mapping rules are consistent, so that both the bond and the CDS have the same market data inputs.
  • A similar issue occurs when constant maturity commodity curves are used for convenience, since they are easier to maintain than assigning the correct futures data set each time. Calendar spread risk is underestimated with constant maturity curves that share data. The negative front-month crude prices in April 2020 are an example of why a constant maturity approach would have significantly underestimated the risk. (I like this example because PassPort is a good solution for managing commodity futures curve names in RiskMetrics.)
  • Changing over to the new curves replacing LIBOR will likely be a very painful process for banks unless they have a very flexible mapping process that can easily be configured to assign the new curves to the right security types. (This is a simple procedure with the Map Editor and PassPort; see the second sketch after this list.)
  • But perhaps a more benign example is modelling one’s complete book with the right mapping for each individual security (i.e. choosing the right risk factors as well as the correct reference data, such as ratings and sector classifications), while neglecting to do the same for its benchmark. This modelling inconsistency between portfolio and benchmark introduces a tracking-error risk that can be attributed entirely to inconsistent data mapping rather than true market dynamics.
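To illustrate the waterfall logic from the first example, here is a minimal Python sketch. The RED code, sector keys, and curve names are hypothetical, and the lookup tables stand in for whatever market-data service actually supplies the curves:

```python
from typing import Optional

# Hypothetical lookup tables standing in for real market-data sources.
SINGLE_NAME_CDS = {"RED123AB": "CDS_CURVE_RED123AB"}              # keyed by RED code
SECTOR_CURVES = {("Financials", "USD"): "USD_FIN_SECTOR_CURVE"}   # keyed by (sector, ccy)

def assign_credit_curve(issuer: dict) -> Optional[str]:
    """Waterfall: observed single-name CDS first, then a sector/currency curve."""
    red_code = issuer.get("red_code")
    if red_code in SINGLE_NAME_CDS:
        return SINGLE_NAME_CDS[red_code]   # most specific source available
    key = (issuer.get("sector"), issuer.get("currency"))
    if key in SECTOR_CURVES:
        return SECTOR_CURVES[key]          # sector fallback
    return None                            # coverage gap: flag it, do not guess

# Both legs of a capital-structure trade resolve through the same issuer record,
# so the bond and the CDS cannot end up on different spread inputs.
issuer = {"red_code": "RED123AB", "sector": "Financials", "currency": "USD"}
bond_curve = cds_curve = assign_credit_curve(issuer)
print(bond_curve, cds_curve)
```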
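The LIBOR transition in the third example amounts to a similar configuration edit: the rule table changes while the mapping engine stays the same. A hypothetical sketch, not the actual Map Editor or PassPort workflow:

```python
# Curve assignment as configuration. Curve and type names are illustrative only.
CURVE_MAP = {
    ("IRS", "USD"): "USD_LIBOR_3M",
    ("IRS", "GBP"): "GBP_LIBOR_3M",
}

# The transition: point the same security types at the new risk-free-rate curves.
CURVE_MAP.update({
    ("IRS", "USD"): "USD_SOFR",
    ("IRS", "GBP"): "GBP_SONIA",
})

print(CURVE_MAP[("IRS", "USD")])   # USD_SOFR
```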

In summary, to model things properly, be it a simple proxy or something more granular and exact, one needs a setup that can dynamically configure the user's modelling choices and data mapping logic. And as market conditions and data availability evolve over time, one should have a system that can adapt. Both Gresham and Red Swan allow users to control their model and data mapping choices in a very flexible, transparent, user-friendly, and visual way. This doesn't just help you during a setup or implementation phase; perhaps more importantly, it drastically improves your ever-evolving modelling choices and (proxy) coverage over time, as well as ongoing operational efficiency. In short, it enables greater control over your model risk management.