Commentaries and insightful analyses on the world of finance, technology and IT.


To Help Prevent the Next Market Disaster, Raise the Bar on Testing Standards

We have yet to learn all the details behind the US stock market's 45 minutes of terror on August 1, during which Knight Capital's automated trading systems spewed out erroneous orders on the New York Stock Exchange.  In the meantime, we can draw a preliminary conclusion from what we do know.  Knight's acknowledgement of a bug in software released the night before the event points directly to the need for higher testing standards.


Much of the commentary in the media has focused on the hazards of "wayward" or "rogue" algorithms, the negative impact of high frequency trading, and our overly complex market structure.  But these broader issues, while critical to the future of the financial markets, are secondary to the immediate and very tangible risk posed by poorly implemented modifications - even seemingly minor ones - to complex electronic trading systems.  The magnitude of that risk has just become patently obvious and requires our urgent collective attention. 


The risk manager's goal is to identify and mitigate possible adverse outcomes (e.g., a major market disruption) of a known event (e.g., a code change).  Testing helps mitigate risk by exposing bugs and other hidden problems that could lead to such unintended consequences.  Testing code changes to complex systems is seldom straightforward; only a carefully crafted testing procedure will allow the risk manager to discover previously unidentified risks.  But even the most extensive testing program will be imperfect.  In the initial phase of risk discovery, before testing has actually begun, it is nearly impossible to identify every input that may adversely impact the system being tested.  And because most testing takes place in simulation, which never behaves exactly like production, testing even the known inputs presents its own challenges.  A truly comprehensive test procedure is an unattainable goal, and the best-laid testing plans always leave some risk on the table.


Still, testing remains a very effective risk management tool.  Working within technical limitations, and accounting for business constraints such as budgets and time-to-market, all stakeholders of a system change must commit at the outset of a project to define and implement the highest possible testing standards.  Risk managers can draw on their experience and intuition to create optimal test procedures with the resources available to them.  IT and business leaders should partner to determine and prioritize risks, create optimal testing plans, monitor test results, and jointly certify the system change or new release.


Testing is truly effective only when it is performed end-to-end.  When tested in isolation, individual modules that behave as expected may still fail to produce the outputs required by a dependent system.  The end-to-end requirement, which is difficult enough, makes fully automated testing very challenging, if not impossible.  For example, an Infosys team currently managing testing for a client's electronic trading platform will only certify a product after performing both automated and manual tests, end-to-end.
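To make the point concrete, here is a minimal Python sketch, with invented module names and values, of how two components can each pass their own unit tests and still produce the wrong result when wired together:

```python
# Hypothetical illustration: each module is "correct" in isolation,
# but their units of measure disagree when composed.

def compute_limit_price(mid: float, offset: float) -> float:
    """Pricing module: returns a limit price in dollars."""
    return round(mid + offset, 2)

def build_order(symbol: str, qty: int, price_cents: int) -> dict:
    """Gateway module: expects the price in integer cents."""
    return {"symbol": symbol, "qty": qty, "price": price_cents}

# Unit tests in isolation: both pass.
assert compute_limit_price(100.00, 0.05) == 100.05
assert build_order("KCG", 100, 10005)["price"] == 10005

# End-to-end: compose the modules as production would.
order = build_order("KCG", 100, compute_limit_price(100.00, 0.05))

# The integrated path is wrong -- dollars were fed where cents were
# expected -- and only an end-to-end check would surface it.
print(order["price"])  # 100.05, not the intended 10005
```

The isolated tests never exercise the hand-off between the two modules, which is exactly where the defect lives.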


Often, end-to-end testing extends beyond the walls of an individual firm.  When two or more institutions are stakeholders in the same code release, as when a broker or market maker prepares to trade on a new venue, testing and monitoring become more difficult to manage but even more critical.  Beyond providing technical specifications and a testing environment, market centers should partner with their clients on test procedures, establish a minimum standard for certification and, following certification, jointly schedule a go-live procedure in which they and their client will monitor initial flows.  In the lifecycle of an automated trade, exchanges are, after all, the risk managers of last resort.  Similarly, a broker that believes it has completed testing with a trading venue should consider the following guidelines for Day One: Establish well-defined accountability for problem resolution, maintain a state of heightened alert for any sign of a human or coding error and, most importantly, tread lightly.  In the first wave of production trading, both parties should plan for a trickle rather than a torrent. 
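The "trickle rather than a torrent" discipline can itself be enforced in code.  The sketch below is one possible shape for a Day One throttle, with entirely invented thresholds: cap the order rate during initial production flows and halt on the first sign of error so a human can take over.

```python
# Hypothetical Day One throttle: rate cap plus error kill switch.
# All class names and limits are illustrative, not any firm's actual controls.
from collections import deque

class GoLiveThrottle:
    def __init__(self, max_orders_per_sec: int = 2, max_errors: int = 1):
        self.max_rate = max_orders_per_sec
        self.max_errors = max_errors
        self.errors = 0
        self.sent = deque()  # timestamps of orders sent in the last second

    def allow(self, now: float) -> bool:
        """Permit an order only within the rate cap and error budget."""
        if self.errors >= self.max_errors:
            return False  # kill switch: stop trading, escalate to a human
        while self.sent and now - self.sent[0] >= 1.0:
            self.sent.popleft()  # drop sends older than one second
        if len(self.sent) >= self.max_rate:
            return False  # trickle, not torrent
        self.sent.append(now)
        return True

    def record_error(self) -> None:
        self.errors += 1

throttle = GoLiveThrottle(max_orders_per_sec=2, max_errors=1)
decisions = [throttle.allow(t) for t in (0.0, 0.1, 0.2)]
print(decisions)  # [True, True, False]: third order in the same second is held
throttle.record_error()
print(throttle.allow(5.0))  # False: a single error trips the kill switch
```

The design choice worth noting is that the throttle fails closed: once the error budget is spent, no further orders flow until someone with well-defined accountability intervenes.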


Who, then, should set testing standards for the industry, and can adoption be self-policed or must it be enforced by regulation?  Considering all the recent high-profile trading technology mishaps, which have impacted players on all sides of the industry, the electronic trading industry as a whole should be inspired to collaborate on a voluntary solution.  The newly updated FIX Protocol Risk Control Guidelines are a good place to start.  They could form the basis for an industry-wide standard of "Best Testing Practices" which, like the FIX messaging standard itself, may ultimately be accepted by all market participants, from the buy side to the liquidity venues.  Software testing standards adopted by other industries should also be evaluated. 
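As a flavor of what guideline-driven controls look like in practice, here is a sketch of two pre-trade checks in the spirit of the FIX risk control recommendations, a maximum order quantity and a price collar.  The limits and function names are invented for illustration and are not taken from the FIX documents themselves.

```python
# Hypothetical pre-trade risk checks; thresholds are invented.
MAX_ORDER_QTY = 10_000
PRICE_COLLAR_PCT = 0.05  # reject prices more than 5% from the reference

def pre_trade_check(qty: int, limit_price: float, reference_price: float) -> list:
    """Return the list of violated checks; an empty list means the order may pass."""
    violations = []
    if qty > MAX_ORDER_QTY:
        violations.append("max_order_qty")
    if abs(limit_price - reference_price) > PRICE_COLLAR_PCT * reference_price:
        violations.append("price_collar")
    return violations

print(pre_trade_check(500, 100.50, 100.00))     # [] -> order passes
print(pre_trade_check(50_000, 120.00, 100.00))  # ['max_order_qty', 'price_collar']
```

An industry-wide testing standard would, among other things, require that checks like these be exercised end-to-end before certification, not merely documented.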


We cannot be certain that fuller testing or more careful implementation could have prevented this dubious milestone in the history of electronic trading.  But in the current drought of market volume and confidence, raising the priority and the quality of testing is the very least we can do.  Let's put this unanticipated adverse outcome to good use.
