They go by many names: business-managed applications (BMAs), user-developed controls, among others. Their labels are as bespoke as the tasks they perform. But it's what they have in common that makes them stand out. They are everywhere. Annoying. Seemingly unavoidable. And, perhaps most of all: they are spreading.
For many banking institutions, these BMAs and controls have grown out of accumulation and operational improvisation. Newly developed business units, acquisition flurries, and the tangle of reporting requirements introduced over the post-crisis period have all left banks managing the puzzle of reconciliation with legacy processors that were never designed for this era. As a result, banks have developed workarounds, typically housed in spreadsheets well outside the reach of any enterprise data management framework, to manually complete the work that their systems cannot. These applications and controls typically number into the thousands. Earlier this year, one global bank reported having roughly 60,000 of them running: around one BMA for every four people working there.
That is a striking figure even at a firm employing almost a quarter of a million people, and giant or not, firms are realizing that BMA proliferation isn't the right answer. Coping with it is one of the leading drivers of the new enterprise control efforts, including at boardroom level, that have gained steam in 2018. For new control chiefs and their teams, identifying the root of the problem isn't difficult; the trouble comes in sniffing out the many BMAs that now riddle reconciliation processes, understanding the technology problem that caused them, and, just as complicated, finding the right data-centric solution to that problem (rather than potentially making it worse).
Fixed Models Fail
Many of these apps and controls grow out of a combination of circumstances. To begin with, they tend to surround trading in instruments, products, or transactions with new, complex attributes that existing systems simply weren't designed for and can't handle. These situations also often involve markets where automation around trade confirmation and settlement isn't well developed, or fully agreed and standardized. At the extreme, this can mean emailing key documents, or even using a fax, to receive essential data and then rekeying it by hand. There is a sense of "this is already slow; why bother at all?"
For those that do bother, their legacy vendors will take many weeks, often months, to figure out how to configure and implement the new control, and given the machinations already involved, that is quite simply too long. It comes down to the fixed data models those vendors are saddled with. Most of this time is spent writing and running ETL (extract, transform, load) tools to prepare and pull in the new data and make it translatable for matching processes. Unsurprising, then, that bank operations personnel come up with their own fixes, not only to manage transaction data and attributes, but also to incorporate derived data calculations, conditional logic, and customer-specific enrichment processes pulling from a security master or other reference data. All too often, what is intended to be a temporary BMA becomes permanent.
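To make that concrete, here is a minimal sketch of the kind of logic such a fix typically encodes. All field names and the reference source are hypothetical, not any real system's schema: a conditional rule, a derived-data calculation, and enrichment from a separate security master, all applied before records can even reach a matching engine.

```python
# Hypothetical sketch of spreadsheet-style BMA logic: enrichment,
# derived data, and conditional rules applied ahead of matching.

SECURITY_MASTER = {  # stand-in for an external reference data source
    "XS0112345678": {"currency": "USD", "day_count": 360},
}

def prepare_for_matching(trade: dict) -> dict:
    """Transform a raw trade record into a matchable row."""
    ref = SECURITY_MASTER.get(trade["isin"], {})

    # Derived data: accrued interest, using the enriched day-count basis.
    accrued = (
        trade["notional"] * trade["coupon"] * trade["days_held"]
        / ref.get("day_count", 365)
    )

    # Conditional logic: buy-backs settle against a different account.
    account = "BUYBACK_SUSPENSE" if trade.get("is_buyback") else "MAIN"

    return {
        "match_key": f'{trade["isin"]}|{trade["trade_date"]}',
        "currency": ref.get("currency", trade.get("currency")),
        "expected_cash": round(trade["notional"] + accrued, 2),
        "account": account,
    }

row = prepare_for_matching({
    "isin": "XS0112345678", "trade_date": "2018-11-02",
    "notional": 1_000_000, "coupon": 0.035, "days_held": 45,
})
```

Innocuous in isolation; multiplied across thousands of spreadsheets, logic like this sits invisible to enterprise control.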
Structured Products, Sec Lending
Take two illustrations we’ve recently analyzed at Gresham: equity-linked notes and securities lending.
The former are structured products, e.g. equity-linked put options or principal-guaranteed notes, and are typically held to maturity. But they involve a sequence of rules governing the conditions of buy-backs and the relationship of the product's current value to its underlying. Securities lending, meanwhile, has become increasingly popular for borrowers (who will typically be shorting), lenders (institutions), and sell-side facilitators. Here again, the data attributes surrounding the specific obligations of a three-party lending arrangement need to be accurately consumed and reconciled, including combined balance and transaction validation, as the sketch below illustrates.
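The following is a hedged sketch of that combined balance and transaction validation, with assumed field names rather than any standard schema: the day's loan movements, summed, must explain the change in each party's reported balance.

```python
# Hypothetical three-party lending check: reported balances must
# reconcile with reported movements, within a cash tolerance.

def validate_lending_position(opening: float,
                              closing: float,
                              movements: list[float],
                              tolerance: float = 0.01) -> bool:
    """Check that reported balances agree with reported movements."""
    expected_closing = opening + sum(movements)
    return abs(expected_closing - closing) <= tolerance

# Borrower, lender, and facilitator each report their own view; all
# three must pass, and the movements must net to zero across parties.
borrower_ok = validate_lending_position(0.0, 10_000.0, [10_000.0])
lender_ok = validate_lending_position(50_000.0, 40_000.0, [-10_000.0])
```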
Implementing an Excel-based or AVTL control around either of these examples involves numerous challenges, and creates some of its own. Buying an ELPO or lending out stock requires significantly more initial workflow than trading in a liquid market or conventional asset movements. Processing the data associated with these activities is clunky and bound to generate exceptions that need to be effectively managed. And accurately assessing the risk they pose against the broader balance sheet is both complex and increasingly important, meaning they must be accessible to, and plugged into, systems across the institution. All of this serves both to complicate reconciliation processes and to heighten the consequences of getting them wrong.
Beyond these examples, the same issues apply to a wider spectrum of over-the-counter (OTC) derivatives. Pile the flavors and transactions up, across desks and business units, over time, with new controls potentially generated for even a single transaction, and it isn't difficult to see how BMAs become troublesome. At the largest institutions, they can require millions in annual operating cost to prop up, and they add significant operational risk.
Smarter Heuristics
While they may be purpose-fit to today's transaction, the problem with surviving on BMAs is that they rarely anticipate what comes next or, for that matter, properly integrate into broader enterprise control and compliance. The stronger approach to dealing with legacy shortcomings, and the scourge of BMAs left in their wake, is to introduce technology that can be implemented both far more quickly and far more sustainably: deploying smarter heuristics and training data models to fit the control requirement in an ongoing agile process, with built-in documentation of the transformation process from initial input to reconciliation engine, as sketched below.
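One way to picture that approach, purely as a sketch and not any particular vendor's implementation, is matching rules expressed as data rather than fixed code, with every transformation logged so the documentation trail from initial input to reconciliation engine comes built in. Rule format and field names here are illustrative assumptions.

```python
# Sketch: rules as data, each application logged for auditability.
from typing import Callable

audit_log: list[str] = []

def logged(name: str, fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
    """Wrap a transformation so every application is recorded."""
    def wrapper(record: dict) -> dict:
        out = fn(record)
        audit_log.append(f"{name}: {record} -> {out}")
        return out
    return wrapper

# Supporting a new product attribute means a new rule entry,
# not a months-long vendor configuration project.
RULES: list[Callable[[dict], dict]] = [
    logged("normalize_ccy", lambda r: {**r, "ccy": r["ccy"].upper()}),
    logged("derive_key", lambda r: {**r, "key": f'{r["isin"]}|{r["ccy"]}'}),
]

def to_match_key(record: dict) -> str:
    for rule in RULES:
        record = rule(record)
    return record["key"]

# Two sides of a trade match when their derived keys agree.
side_a = to_match_key({"isin": "XS0112345678", "ccy": "usd"})
side_b = to_match_key({"isin": "XS0112345678", "ccy": "USD"})
assert side_a == side_b
```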
Like New Year’s resolutions, too many BMAs spell trouble. Retiring these controls would be a great start for 2019.
Jan's original article can be found on LinkedIn.