
Around the World in 30 Days: Data Management Insights from Around the Globe

Different regions have different financial priorities and initiatives. During our Summer Series, we’re stopping in six countries to discuss the top issues they’re facing when it comes to financial services and new regulations.

Scratch your travel itch and come along with us over the next 30 days to gain a new perspective on your approach to data management.

 

Putting ESG Data to Work: Overcoming Data Management and Data Quality Challenges

Environmental, Social and Governance (ESG)-based investing is growing rapidly. The data landscape to support ESG use cases includes screening indicators such as board composition and energy use, third-party ratings, as well as primary data such as waste and emissions. There is a wide range of primary data sources, aggregators and reporting standards. ESG ratings in particular are widely dispersed, reflecting different methodologies, input data and weights, which means investors need to go to the underlying data for their decision-making.

 

Role of ESG in Investment Operations

Depending on the investment style, ESG information plays a key role in research, fund product development, external manager selection, asset selection, performance tracking, client reporting, regulatory reporting, as well as voting. In short, ESG data is needed through the entire chain and must be made available to different stakeholders across the investment process.

Increasingly, ESG is becoming an investment factor in its own right. This means ESG indicators and ESG-based selection criteria need to be distilled from a broader set of primary data points, self-declarations in annual reports and third-party assessments. Additionally, ESG information needs to be standardized so that company-level information can be rolled up to portfolio level and ESG criteria can be tracked against third-party indices or external reporting requirements. However, many corporates do not (yet) report sufficient information, creating a need to proxy or estimate missing data points or to leave those companies outside investment consideration altogether.
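As a minimal sketch of that last step, the snippet below proxies a missing emissions figure from sector peers and flags it as an estimate; the library choice (pandas), company names, columns and values are all illustrative assumptions, not a prescribed method:

```python
import pandas as pd

# Hypothetical company-level ESG data; all names, columns and values are illustrative.
companies = pd.DataFrame({
    "issuer":       ["AlphaCo", "BetaCorp", "GammaPLC", "DeltaAG"],
    "sector":       ["Utilities", "Utilities", "Industrials", "Industrials"],
    "scope1_tco2e": [120_000, None, 45_000, 52_000],   # BetaCorp has not reported
})

# Proxy the missing data point with the sector median and flag it as estimated,
# so downstream users can distinguish reported figures from proxies.
sector_median = companies.groupby("sector")["scope1_tco2e"].transform("median")
companies["scope1_is_estimate"] = companies["scope1_tco2e"].isna()
companies["scope1_tco2e"] = companies["scope1_tco2e"].fillna(sector_median)

print(companies)
```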

 

Data Management Challenges

Legislatures are promoting sustainable investment by creating taxonomies that specify which economic activities can be viewed as environmentally sustainable. From a data management perspective, this classification refines, and acts as an additional lens on, the traditional industry sector classifications.

Other ingredients are hard numbers such as carbon footprints (detailing scope 1, 2 and 3 emissions, clarifying whether scope 3 is upstream or downstream, and so on), gender diversity, water usage and board composition. More qualitative data elements include sustainability scores, ratings and other third-party assessments that condense many underlying indicators into summary statistics. A key requirement is the accurate linking of financial instruments to entities.
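To illustrate that linking requirement, here is one possible, purely hypothetical data model in which the ESG attributes hang off the legal entity and instruments reference them via their issuer; every identifier, field name and value is invented for the example:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EsgProfile:
    # Hard numbers
    scope1_tco2e: Optional[float] = None          # direct emissions
    scope2_tco2e: Optional[float] = None          # purchased energy
    scope3_upstream_tco2e: Optional[float] = None
    scope3_downstream_tco2e: Optional[float] = None
    female_board_share: Optional[float] = None
    water_usage_m3: Optional[float] = None
    # Qualitative / third-party assessments
    ratings: dict = field(default_factory=dict)   # e.g. {"ProviderX": "BBB"}

@dataclass
class LegalEntity:
    lei: str                                      # legal entity identifier (illustrative value below)
    name: str
    sector: str
    esg: EsgProfile = field(default_factory=EsgProfile)

@dataclass
class Instrument:
    isin: str
    issuer: LegalEntity                           # the instrument-to-entity link the ESG data hangs off

# ESG attributes live on the entity; instruments reach them via the issuer link.
acme = LegalEntity(lei="529900EXAMPLE0000001", name="Acme Corp", sector="Industrials",
                   esg=EsgProfile(scope1_tco2e=45_000, ratings={"ProviderX": "BBB"}))
bond = Instrument(isin="XS0000000001", issuer=acme)
print(bond.issuer.esg.scope1_tco2e)
```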

As ESG investment criteria become operationalized, ESG data management is rapidly evolving. Whenever new data categories or metrics are introduced, data management practices typically start with improvisation through desk-level tools, including spreadsheets, local databases and other workarounds. This is gradually streamlined, centralized, operationalized and ultimately embedded into core processes to become business as usual (BAU). Currently, the investment management industry is somewhere halfway through that process.

 

What Is Required to Fully Embed ESG Data into Investment Operations?

To overcome these data quality issues, firms need a process that seamlessly acquires, integrates and verifies ESG information. The data management function should facilitate the discoverability of information and effective integration into business user workflows. In short, data management should service users from the use case down, not from the technology and data sets up.

ESG data management capabilities should facilitate the easy roll-up of information from instrument to portfolio and blend ESG with pricing and reference data sets, so it becomes an integral part of the end-to-end investment management process.
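A minimal sketch of such a roll-up, assuming pandas and hypothetical holdings and ESG reference data keyed on ISIN, could look like this:

```python
import pandas as pd

# Hypothetical portfolio holdings and ESG reference data, keyed on ISIN.
holdings = pd.DataFrame({
    "isin":         ["XS0001", "XS0002", "XS0003"],
    "market_value": [4_000_000, 3_500_000, 2_500_000],
})
esg_ref = pd.DataFrame({
    "isin":             ["XS0001", "XS0002", "XS0003"],
    "carbon_intensity": [310.0, 120.0, 95.0],   # e.g. tCO2e per unit of revenue (illustrative)
})

# Blend the ESG data set with the holdings and roll it up to portfolio level
# as a market-value-weighted average.
blended = holdings.merge(esg_ref, on="isin", how="left")
weights = blended["market_value"] / blended["market_value"].sum()
portfolio_carbon_intensity = (weights * blended["carbon_intensity"]).sum()
print(f"Portfolio carbon intensity: {portfolio_carbon_intensity:.1f}")
```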

Data derivation capabilities and business rules can spot gaps and highlight outliers, whether against historical patterns or within a peer group, industry or portfolio. Additionally, historical data to run scenarios can help with adequate risk and performance assessment of ESG factors. Having these capabilities in-house is good news for all users across the investment management process.
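As a simple illustration of such a rule, the sketch below flags values that deviate strongly from their sector peer group; the data and the threshold are invented for the example and are not a recommended screening rule:

```python
import pandas as pd

# Hypothetical reported data points with a sector peer group; values are made up.
df = pd.DataFrame({
    "issuer":         ["A", "B", "C", "D", "E", "F"],
    "sector":         ["Utilities"] * 3 + ["Industrials"] * 3,
    "water_usage_m3": [1_000, 1_100, 9_500, 400, 450, 430],
})

# Flag values that sit far from their peer-group median; the 3x band is an
# arbitrary illustration, not a recommended tolerance.
peer_median = df.groupby("sector")["water_usage_m3"].transform("median")
ratio = df["water_usage_m3"] / peer_median
df["outlier"] = (ratio > 3) | (ratio < 1 / 3)

print(df[df["outlier"]])   # issuer C stands out against its Utilities peers
```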

 

Post-Brexit, Post-Pandemic London

For the City of London, the last few years have been eventful, to say the least. Midway through the worldwide Covid pandemic, Brexit finally landed with a free trade agreement agreed on Christmas Eve 2020. A Memorandum of Understanding on Financial Services was agreed at the end of March. However, this remains to be signed and is entirely separate from any decisions on regulatory equivalence.

Large international banks prepared for the worst and the possibility of a hard Brexit by strengthening their European operations in the years leading up to Brexit. However, the discussion on the materiality of EU-based operations will continue to rage for some time. ESMA adopted decisions to recognize the three UK CCPs under EMIR. These recognition decisions took effect the day following the end of the transition period and continue to apply while the equivalence decision remains in force, until 30 June 2022. One immediate effect of Brexit was a sharp drop in share trading volumes in January, with volume moving to continental Europe. For other sectors, Singapore and New York are well-positioned to nibble at the City’s business.

Financial services, together with industries such as fisheries, remain among the most politicized topics in the EU-UK relationship. The UK government must consider to what extent it should diverge from the EU’s system of financial services regulation. It is unlikely that any announcement on equivalence decisions will be forthcoming in the short term. A decision to grant full regulatory equivalence would depend upon UK alignment to EU regulation on a forward-looking basis, which would defeat the whole point of Brexit. Equivalence may not be worth the loss of rulemaking autonomy that is likely to be a condition of any EU determination. The longer equivalence decisions are delayed, the less valuable they become as firms adapt to the post-Brexit landscape.

As the financial services sector comes to terms with the post-Brexit reality, it must prepare for regulatory divergence, with the degree of dispersion still an open question. Differences can emerge in clearing relationships, pre- and post-trade transparency, investor protection, requirements on (managed services) providers, derivatives reporting, solvency rules, and future ESG disclosure requirements. Having a flexible yet rigorous data management infrastructure in place, and using suppliers with operations in both the UK and the EU, will mitigate the impact of this divergence and prepare firms for the future.

 

FRTB: The Need to Integrate Data Management and Analytics

After some delays, the deadline for FRTB implementation is now approaching fast. As of January 1, 2023, banks are expected to have implemented the newly required processes and to begin reporting based on the new Fundamental Review of the Trading Book (FRTB) standards. With the Libor transition also taking place over the next few years, it is a busy time in the market data world.

FRTB poses material new demands on the depth and breadth of market data, risk calculations, and data governance. A successful FRTB implementation will need to address new requirements in market data, analytical capabilities, organizational alignment, supporting technology and overall governance. In this blog, I focus on the need for integrated data management and analytics.

FRTB requires additional market data history and sufficient observations for internal model banks to ascertain whether risk factors are modellable. These observations can be committed quotes or transactions, sourced from a bank’s internal trading systems and supplemented with external sources. Apart from trade-level data, additional referential information is needed on the liquidity horizon and on whether risk factors are in the reduced set.
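As an illustration only, a simplified eligibility check could count real price observations over the past year against minimum-observation criteria; the thresholds below echo the commonly cited 24-observation / 90-day-window / 100-observation criteria but are placeholders, and the applicable rule text should be the reference:

```python
from datetime import date, timedelta

def is_modellable(obs_dates, asof,
                  min_obs=24, window_days=90, min_per_window=4, alt_min_obs=100):
    """Simplified risk factor eligibility check; thresholds are placeholders,
    not the authoritative regulatory text."""
    start = asof - timedelta(days=365)
    obs = sorted(d for d in obs_dates if start <= d <= asof)

    # Alternative criterion: a large number of observations over the past year.
    if len(obs) >= alt_min_obs:
        return True
    if len(obs) < min_obs:
        return False

    # Every 90-day window within the year must contain enough observations.
    day = start
    while day + timedelta(days=window_days) <= asof:
        in_window = sum(1 for d in obs if day <= d < day + timedelta(days=window_days))
        if in_window < min_per_window:
            return False
        day += timedelta(days=1)
    return True

# Roughly two committed quotes or transactions per month over 2022:
dates = [date(2022, 1, 15) + timedelta(days=15 * i) for i in range(24)]
print(is_modellable(dates, asof=date(2022, 12, 31)))   # True under these thresholds
```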

The market data landscape continues to broaden. Apart from the traditional enterprise data providers, many firms that collect market data and trade-level information as part of their business now offer this data directly, including brokerages, clearinghouses and central securities depositories. Different data marketplaces have been developed, providing further sourcing options for market data procurement. Effectively sourcing the required additional data, and monitoring its usage to get the most out of the market data spend, is becoming a key capability.

Organizational alignment between front office, risk and finance is required as well. Many firms still run different processes to acquire, quality-proof and derive market data. This often leads to failures in backtesting and in reconciling front-office and mid-office data. FRTB raises the cost of inconsistency. Regulatory considerations aside, clearly documenting and using the same curve definitions, cut-off times to snap market data prices, and models to calculate risk factors can reduce operational cost as well. Clean and consistent market data makes for more effective decision-making and risk and regulatory reporting.
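As a small illustration of that single source of truth, a curve definition could be captured once and referenced by all three functions; the sketch below is hypothetical and every field and value is made up:

```python
# A single, documented curve definition that front office, risk and finance all
# reference, instead of each function maintaining its own version. Every field
# name and value here is illustrative.
USD_SWAP_CURVE = {
    "curve_id":      "USD-SOFR-OIS",
    "snap_time":     "17:00 America/New_York",   # one agreed cut-off for price snaps
    "instruments":   ["SOFR futures", "OIS swaps 2Y-30Y"],
    "interpolation": "monotone convex",
    "day_count":     "ACT/360",
    "owner":         "market data services",
}
```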

FRTB accelerates the need for market data and analytics to be more closely integrated. Advanced analytics is no longer mostly used at the end-point of data flows (e.g. by quants and data scientists using desk-level tools); it is now increasingly used in intermediate steps in day-to-day business processes, including risk management.

Data quality management, too, is increasingly automated. Algorithms can deal with many exceptions (e.g. automatically triggering requests to additional data sources), and with a feedback loop, described below, the proportion of exceptions requiring human eyes can go down. To successfully prepare data for machine learning, data management is a foundational capability. Regulators take a much closer look at data quality and the processes that operate on the data before it is fed into a model, scrutinizing provenance, audit and quality controls.

Important to improving any process is a feedback loop that provides built-in learning to adjust the mix of data sources and business rules. In data quality management, this learning has to be both:

  • Continuous and bottom-up: Persistent quality issues should lead to a review of data sources, for example using false positives or information from subsequent manual intervention to tune the screening rules. Rules that look for deviations against market levels, taking into account prevailing volatility, will naturally self-adjust (see the sketch after this list).
  • Periodic and top-down: This could, for example, include looking at trends in data quality numbers, the relative quality of different data feeds and the demands of different users downstream. It also includes a review of the SLAs and KPIs of managed data services providers.
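As a sketch of that self-adjusting behaviour, a screening tolerance can be tied to recent realised volatility, as below; the window length and multiplier are arbitrary assumptions:

```python
import pandas as pd

# Hypothetical daily price series containing one suspect jump.
prices = pd.Series([100.0, 100.4, 99.8, 100.9, 100.2, 107.0, 100.6, 100.1])
returns = prices.pct_change()

# The screening tolerance scales with recent realised volatility, so the rule
# self-adjusts as market conditions change; the 5-day window and 4x multiplier
# are arbitrary illustrations.
rolling_vol = returns.rolling(5, min_periods=3).std().shift(1)
flags = returns.abs() > 4 * rolling_vol

print(prices[flags])   # flags the jump to 107.0 for review
```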

 

If you cannot assess the accuracy, correctness and timeliness of your data sets, or access them, slice and dice them, and cut them up as granularly as you need for risk and control purposes, then how can you do what matters: make the correct business calls based on that same data?

Data management and analytics are both key foundational capabilities for any business process in banks, but most definitely for risk management and finance, the functions where all data streams come together to enable enterprise-level reporting.