For the past couple of years, contactless payments have been waiting in the wings. According to IE Market Research, the global volume of mobile payment transactions is expected to grow from USD 37.4 billion in 2009 to over USD 1.13 trillion in 2014. While the first mobile contactless payment in the UK dates back to 2003, the technology never reached mass-market adoption. Wave-and-pay cards have a processing time of under a second, making them a quick and easy way to pay, and with mobile phones now carrying NFC (Near Field Communication) capabilities, this seems like a recipe for success. However, contactless payments have grown more slowly than expected due to multiple factors: the lack of a supporting infrastructure, users' apprehension around security, the absence of unified standards, and more.
Contactless payments require merchants to provide the supporting infrastructure, but this has been a classic chicken-and-egg dilemma: merchants will invest in the required infrastructure once contactless payments reach the mass market, while consumers will adopt the technology only once the infrastructure is in place at the merchants. This vicious circle now seems to have been broken, with retailers joining the contactless bandwagon, McDonald's, Starbucks and Subway being some of the bigger names. Further, major banks including Citibank, The Royal Bank of Scotland Group, Barclays and HSBC are offering contactless-enabled credit cards to their customers. With every sales terminal at the London 2012 Olympics offering a 'wave and pay' option, the technology will get a big publicity boost. Post Offices across Europe are also introducing contactless payments, making them among the biggest users of the technology.
Another major roadblock for the growth of this technology has been consumers' security concerns. Consumers are generally apprehensive about new technologies, especially those involving financial transactions. Thomas Skora, a German security researcher, recently developed an application that can read the details of a contactless card using an NFC-enabled handset, highlighting vulnerabilities in contactless payment technology. Visa and MasterCard claim that even if data is copied it would be of no use, as multiple other security systems are in place, and that consumers would in any case be fully covered against fraud. Even so, such security concerns deter retailers from adopting the technology, since they would be exposed to fraud liability and the associated risks. To prevent illicit third-party intercepts, higher-layer cryptographic protocols need to be used to establish a secure channel between the NFC device and the reader. Securing NFC data requires cooperation from all stakeholders: device manufacturers, consumers, application providers and transaction parties.
In addition to the standard fraud detection and prevention methods used with magnetic-stripe cards, new methods need to be put in place. One of the latest is to add dynamic data to each transaction, such as generating a new PIN for every payment. If an unauthorized person accesses a customer's card information, they can make at most one payment with the stolen data. However, one must not forget that fraudsters are also constantly inventing new ways to deceive. Financial institutions therefore need to continuously evolve their fraud prevention and management systems, ensuring that for a fraudster the cost of committing a fraud exceeds the reward.
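The "dynamic data" idea above can be sketched with an HMAC-based one-time code in the style of RFC 4226 (HOTP). This is a simplified illustration, not the actual EMV dynamic-authentication scheme; the shared-secret setup and function name are assumptions for the sketch.

```python
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short one-time code from a shared secret and a
    per-transaction counter (HOTP-style, after RFC 4226)."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226's published test secret; counter 0 yields the documented
# vector "755224". Each transaction bumps the counter, so a code
# captured by an eavesdropper buys at most one payment.
secret = b"12345678901234567890"
assert one_time_code(secret, 0) == "755224"
```

Real deployments derive the value inside the card's secure element, and EMV computes a cryptogram over full transaction data rather than a bare counter, but the property is the same: stolen static card data cannot be replayed.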
We have yet to learn all the details behind the US stock market's 45 minutes of terror on August 1, during which Knight Capital's automated trading systems spewed out erroneous orders on the New York Stock Exchange. In the meantime, we can draw a preliminary conclusion from what we do know. Knight's acknowledgement of a bug in software released the night before the event points directly to the need for higher testing standards.
Though barely noted in the financial news media, the evolution of the US equities market structure recently reached a milestone. On July 5th, the New York Stock Exchange received approval from the SEC to create a separate, private exchange for retail investors. This was hardly headline news compared to the Libor scandal, not to mention the likely discovery by physicists of the Higgs boson ("the God particle"). But to observers of the securities markets, especially those watching trends in electronic trading, it was a significant event, marking NYSE's deepest entry to date into the "dark liquidity" market.
The NYSE initiative, called the Retail Liquidity Program (RLP), is designed to offer retail customers better pricing than they would receive on the NYSE floor, or in other transparent markets such as NASDAQ, Direct Edge, or BATS. A market in which the bids and offers of participants are matched with the potential for price improvement, while the sizes and prices of those orders remain invisible, is known as a "dark pool." It is by no means a new phenomenon; for well over a decade, brokers, exchanges, and independent entities have offered electronic "matching" or "crossing" to their institutional stock-trading customers.
For investment managers, the advantages of trading "in the dark" include price improvement, greater anonymity, and reduced "market impact," which occurs when stock prices move adversely during the course of trading, often the result of large quotes appearing in the "lit" market. These benefits are especially pronounced for small-cap stocks, which tend to be thinly traded and often have wide bid/offer spreads. When a match occurs in a dark pool, each party pays a fraction - often half - of the publicly displayed spread at that moment. So, if a stock is bid at 30.01 and offered at 30.02 in the displayed market, a dark pool transaction may be executed at a price of 30.015, a half-penny saving for both buyer and seller. Even if the spread in the displayed market is only a penny, as is typical for the most liquid stocks, over time there can be substantial cost savings for institutions trading sizeable orders. Also, because the price and size of an order entered in a dark venue are hidden, there should be - in theory, at least - no "information leakage" to the market regarding the intentions of an institution establishing or reducing a position in a particular name.
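The half-spread arithmetic in this paragraph is easy to make concrete. A minimal sketch (the function name and share quantity are my own, for illustration):

```python
def midpoint_cross(bid: float, ask: float, shares: int):
    """Price a dark-pool cross at the midpoint of the displayed bid/ask
    and report the total saving versus trading at the touch."""
    mid = (bid + ask) / 2
    saving_per_share = (ask - bid) / 2   # each side saves half the spread
    return mid, saving_per_share * shares

# The example from the text: bid 30.01, offered 30.02.
price, total_saving = midpoint_cross(30.01, 30.02, 10_000)
print(f"executed at {price:.4f}, saving ${total_saving:.2f} per side")
```

On a 10,000-share cross, that half-penny per share compounds into a meaningful execution-cost saving for each counterparty, which is the institutional appeal described above.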
While the buy side can lower its execution costs by trading in the dark, sell-side dark pool operators benefit from essentially free execution on orders they are able to cross. To avoid incurring exchange fees, a bank running a dark pool will seek to "internalize" as many flows as possible. Subject to compliance restrictions, they will pass orders from institutional and retail customers, as well as from internal trading desks, through their dark pool en route to the exchanges. Whatever they can cross internally, even if it is a fraction of an order, represents top-line savings; the more shares crossed the better. That cost savings may also translate into lower commissions for their customers.
It would seem, then, that everyone gains from dark trading. But critics of the dark pools have accused them of creating a two-tiered marketplace, one for the public and the other for professionals trading privately amongst themselves without adequate oversight. They portray dark pools as the enemy of the retail investor. Economically, public exchanges such as NYSE and NASDAQ continue to feel the impact of dark trading. Rising dark pool market share over the last few years - now between 13 and 14 percent in the US, according to Rosenblatt Securities - has cut into exchanges' revenues, already under great pressure due to diminished volumes.
Ironically, though perhaps not coincidentally, NYSE CEO Duncan Niederauer outlined his objections to dark pools in a July 4th Financial Times Op-Ed titled "It's time to bring 'dark pools' into the daylight," a day before NYSE received the SEC's approval for the RLP. Some commentators view Niederauer's statement as a blatant contradiction to the RLP initiative, but I believe that NYSE's actions are completely consistent with its statements on this issue. In his essay, Niederauer emphasized the harm inflicted on retail traders by a market fragmented into dark and light layers: "Retail investors are put at a disadvantage as more and more information is outside the public view and excluded from the price discovery process." In NYSE's view, apparently shared by the SEC, the RLP is designed to redress this imbalance by offering retail customers - within a highly regulated environment - the same price improvement opportunities currently available to institutions trading in dark pools. In one important respect, it should be noted, the RLP will be more transparent than a typical dark pool. Among its features will be an indicator displaying to the public when a retail price-improvement order is available, though only the symbol and side (buy or sell) will be revealed. Dark pools do not typically disseminate such indications to the market.
Retail flow is considered golden by dark pool operators and their participants. Retail orders tend to be larger than institutional orders, which are commonly sliced by algorithms into lots as small as 100 shares. Orders from retail customers are also perceived as lacking the "toxicity" of those from more sophisticated sources, such as high-frequency trading firms, whose advanced techniques may result in adverse prices for their trading partners. Wholesalers will often pay retail brokers for their customers' orders.
The RLP is an attempt by the NYSE to regain some of its lost retail market share. Just as in 2007 it finally adopted the hybrid electronic/open-outcry model after years of being squeezed by NASDAQ and other electronic markets, NYSE is embracing dark trading on a qualitatively new level and it is targeting retail, the most coveted slice of the dark liquidity pie. At the same time, it is promoting this adaptation as an effort to level the playing field between Wall Street and Main Street, between the institutional trader and the everyman investor. Whether or not the market buys into their model remains to be seen.
Time and again, the Value at Risk (VaR) model has been threatened, but it has somehow survived financial turmoil owing to the market's resilience and inertia to change. Now, however, VaR seems to be on its deathbed, with the Basel Committee all set to replace it.
VaR proponents had always backed it for its simple and objective approach, but it failed miserably during the financial crisis, owing to weaknesses in the basic model itself as well as its blindness to black swans, which exposes banks and financial institutions to catastrophic countercyclical scenarios. Events like Black Friday or the recent USD 2 billion loss reported by JP Morgan have exposed the vulnerabilities of VaR and its blindness to fat-tail risk.
VaR as a model does not venture beyond the 1% threshold. Ironically, the widespread use of VaR as a risk-control metric prompted traders to push risk into the tail beyond the 1% VaR threshold. While this ensured compliance with the trading-book rules, it eventually exposed the system to the catastrophic and extreme losses that VaR was never intended to address. VaR leaves ample room for capital arbitrage, as it ignores the pattern and severity of losses in the extremes of the tails.
Is weakness in the model to be blamed for the catastrophic losses, or is it the financial system's urge to look out for every possible arbitrage opportunity? The crisis arising from VaR's vulnerabilities could have been subdued had the industry taken cognizance of its widely known pitfalls and used stress testing and scenario analysis in conjunction with VaR.
Nonetheless, the Basel Committee is all set to mend the trading-book rules and has proposed replacing the Value at Risk model with the Expected Shortfall approach, which, unlike VaR, is a coherent risk measure. Expected Shortfall is sub-additive, so it does not penalize diversification, thereby overcoming a severe deficiency of VaR. Though Expected Shortfall is theoretically superior to VaR and addresses tail risk better (at least as a conditional expectation), the operational details can be quite challenging, entailing a painful system overhaul. Outliers and black swans reside in the tail of the distribution, and Expected Shortfall sets out to address this world of ambiguity, a disturbing prospect for many risk professionals.
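The difference between the two measures is easy to demonstrate with historical simulation: two books can share the same 99% VaR while their Expected Shortfalls diverge wildly. A minimal sketch (the P&L vectors are contrived for illustration):

```python
import math

def var_and_es(pnl, alpha=0.99):
    """Historical-simulation VaR and Expected Shortfall.

    VaR is the loss at the alpha quantile; Expected Shortfall averages
    the losses from that quantile outward, so it sees into the tail
    that VaR ignores.
    """
    losses = sorted(-p for p in pnl)              # positive numbers = losses
    cutoff = math.ceil(alpha * len(losses)) - 1   # index of the alpha quantile
    tail = losses[cutoff:]
    return losses[cutoff], sum(tail) / len(tail)

# Two contrived one-year P&L histories: a thin-tailed book and a
# black-swan-prone one. Both breach 1.0 on exactly one day in a hundred.
thin = [-1.0] * 99 + [-2.0]
fat = [-1.0] * 99 + [-50.0]
assert var_and_es(thin)[0] == var_and_es(fat)[0]   # identical 99% VaR (1.0)
# ES separates them: 1.5 for the thin book versus 25.5 for the fat one.
```

Because ES averages over the whole tail, restructuring positions cannot hide a black swan beyond the quantile, which is precisely the capital-arbitrage gap described above.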
The risk fraternity may back a particular model, but the measuring scale named "VaR" has certainly lived its life and calls for replacement by a new scale called "Expected Shortfall". Moreover, this new landscape presents ample business process change opportunities for the IT industry.
In my next blog post, I will dive deeper into some of the opportunities that exist for the IT industry. In the meantime, I look forward to hearing your views on VaR vis-à-vis Expected Shortfall.
The recent Greek elections have given some hope that the country will remain within the European Union (EU). With this, the global financial world has heaved a huge sigh of relief. However, deep down, the possible exit of Greece and other weaker economies from the euro still haunts many. The German chancellor's view that weaker economies have to do much more than they promise is gaining support in the market. Though no one wants Greece to exit (at least for now), the fundamental principles that have supported a single currency and a unified business region are coming under increasing pressure.
The concept of a single currency, the potential for increased intra-region trade and improved labour mobility within the EU were expected to deliver sustainable growth, lower unemployment and improve disposable incomes. What has been accomplished is far from these objectives. Hot money flows spurred demand for houses, created a housing bubble, weakened the banking systems and could not sustain employment. The increased money flow worldwide has not improved the efficiency, productivity or macro variables of the economy. The perceived reinsurance by the European Commission moved multi-billion-euro investments into Greece. The irony is that the debt itself has become the core of the problem, threatening the fragile equilibrium of the world economy.
Though not immediately, there is a possibility that some of these economies, such as Greece and Spain, opt out of the euro. While it is difficult to estimate the effect of the contagion on the global banking business, it may be far less complicated to understand the impact of these potential events on the information technology landscape of the banking industry. From the business of retail banking to the complex operations of investment management and investment banking, it is important that CIOs have a clear view of what to do if such events occur. Based on my interactions with CIOs of large banks, I have tried to capture the key areas that call for immediate attention:
Overall, bank CIOs along with their CEOs need to start preparing their organization to face worst-case scenarios such as the exit of an EU member. In the end, worst-case scenarios may not happen and the financial world may live happily ever after. However, thoughtful preparedness that addresses the question of "what to do" if that happens may well save the day for the business.
With operational risk management, organisations aim for complete coverage, precise numbers and the foresight to represent the contingent. Numbers often grab centre stage, manifesting as milestones, benchmarks or proof of financial dominance. In the financial disciplines this could not be more true; risk management is no exception.
In its quest for precision, every organisation inevitably commits the cardinal sins of delimiting the unbounded, quantifying the abstruse and postulating the unknown.
For a discipline forced to cope with imperfection emanating from a source that is disembodied yet braided through a majority of other event types, namely 'the people factor', this can often be a tough ask.
In many ways, the 'people' facet of ORM is like stumbling straight onto the end of a book, only to find it abominable. Let's face it, there can be nothing complete, accurate or predictable about people risk. The real question is how many organisations care to flip through the book, ending notwithstanding. It's like proposing to travel back in time with a future unaffected by any change to the past. But why wouldn't you just enjoy the ride?
Of the people risks internal to the organisation, quite a few (fraud, rogue trading), albeit not all (who are we kidding here?), can be negated through an appropriate combination of system and process controls, properly implemented. That such incidents have surfaced even in the recent past is a knock-out punch for the 'compliance' paradigm of risk.
On the causes of people risk itself: churn, though painful, is a lesser concern for organisations than an apathetic workforce. Holding onto that thought, let's ponder the following...
Risk culture can shape employees' risk awareness and, as a result, the risk profile of the organisation. While risk culture and awareness are all-permeating, arguably the former flows top-down while the latter is bottom-heavy; either way, they are people-driven, people-communicated, people-actioned structures in every facet of risk management.
Whilst every organisation might have an ethically sound and perceivably fair set of policies, whether merely drafted or genuinely adopted, it is adherence to them, and everyday actions, that set the management's tone towards risk culture. And when I say management, I also mean senior and middle management, as they often communicate the tone at the top.
Given risk management's heavy reliance on decentralisation in identifying, tackling and reporting risks, or at a bare minimum being cognizant of them in the course of daily business, the contribution of risk culture to risk awareness cannot be emphasised enough.
Now, back to my point on the 'apathetic' workforce. This is precisely where organisations may shoot themselves in the foot by hopelessly holding onto policies rather than using them as guidelines. If legislation drafted by experts isn't foolproof, neither can an organisation's policies be. Employees may start to drag their heels, stick narrowly to the job and contribute much less to managing risk, while still remaining within the 'policy-defined terms of employment'.
In today's world of complexly muddled financial engineering, two remedial calls are growing louder: one for more regulatory impositions (which, understandably, will be reactive, like Batman solving Riddler puzzles without the forewarnings), and the other for organisations to be 'risk-smart', i.e. to own up to risk management. With the latter, agreeably, it is not as if the entire organisation is contriving to profit by dodgy means. Au contraire, more often than not it is a single employee or a team. But hang on: accountability doesn't mean the employee concerned has a moral epiphany, in fact far from it; it means the other employees are sufficiently motivated to 'rat out' (excuse the phrase) the wrongdoers!
Employees are much like a financial instrument: risk and return all packaged in one. As long as the organisation's handling of this living organism deters risk or enables its identification, it's all good!
As the saying goes, "Experience is the name everyone gives to their mistakes." If one goes by this, the financial regulators are more experienced now than they were in 2008. That is undeniably healthy, as there is nothing better than learning from the past and emerging better and stronger. This is precisely what one can observe in today's overhauled regulatory landscape, with Basel III proposing stringent capital adequacy ratios, Dodd-Frank endorsing laws around consumer protection and HIRE-FATCA modifying rules to enhance internal revenues. These evolving regulations make it ever more challenging for Financial Institutions (FIs) to remain compliant.
In these changing times, it is of utmost importance for FIs to be adaptable. This requires every FI to select the right solution for its unique problem. The time and effort spent at an early stage in analyzing and opting for the right solution will go a long way towards the ultimate goal of being compliant. To understand this, let's take the example of a recent enhancement to the regulations.
We all know the role that Credit Rating Agencies (CRAs) played in the 2008 financial crisis. Inappropriate ratings of financial instruments such as Mortgage Backed Securities and Credit Default Swaps created havoc for FIs. What we see in the aftermath is a series of regulatory advancements. The Basel Committee realized that the loan risk weights used to calculate Capital Adequacy Ratios (CARs) depended directly on the credit ratings awarded by CRAs, which had proved inaccurate. The committee enhanced these calculations, resulting in better CARs, which we see as part of Basel III today. Similarly, realizing the impact of credit ratings on consumers, it became natural for the Dodd-Frank committee to bring Nationally Recognized Statistical Rating Organizations (NRSROs), the CRAs whose ratings may be used for regulatory purposes, under the umbrella of its regulations. In Title IX, the act mandated the creation of the Office of Credit Ratings (OCR) to oversee and regulate the NRSROs.
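The dependency chain described here, ratings to risk weights to risk-weighted assets to CAR, can be sketched in a few lines. The weights below follow the Basel II standardised approach for corporate exposures; the portfolio and capital figures are invented for illustration:

```python
# Simplified Basel II standardised-approach risk weights for corporate
# exposures (AAA/AA 20%, A 50%, BBB/BB 100%, below BB- 150%).
RISK_WEIGHT = {"AAA": 0.20, "AA": 0.20, "A": 0.50,
               "BBB": 1.00, "BB": 1.00, "B": 1.50}

def capital_adequacy_ratio(capital: float, exposures) -> float:
    """CAR = eligible capital / risk-weighted assets (RWA)."""
    rwa = sum(amount * RISK_WEIGHT[rating] for rating, amount in exposures)
    return capital / rwa

# Invented portfolio: an inflated AAA rating understates RWA, so the
# same capital produces a flattering CAR...
before = capital_adequacy_ratio(80.0, [("AAA", 400.0), ("BBB", 600.0)])
# ...which shrinks once the tranche is re-rated to BBB.
after = capital_adequacy_ratio(80.0, [("BBB", 400.0), ("BBB", 600.0)])
assert after < before
```

This is exactly why inaccurate ratings rippled through to capital adequacy: the ratio looked healthy right up until the re-rating.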
The examples above depict the ever-changing nature of the regulatory landscape and how a common factor can cause significant changes in two completely different regulations. These changes are inevitable, and FIs will have to comply with regulations more stringent than earlier ones. Clearly, with every change, implementations of compliance systems cannot be restarted from scratch. This requires FIs to choose smarter technologies, solutions and products that are flexible to implement and open to modification. They also need an enterprise-wide view of the problem rather than looking at it in silos or at a line-of-business level. Selecting the right solution is key; however, given their different sizes, structures and strategies, no single solution fits all FIs. A technology partner with a blend of business knowledge can make a difference, helping FIs select the right solution and get ahead of their peers in these evolving times.
The recent $2 billion trading loss reported by JP Morgan has again jolted the financial services industry and raised several questions about the regulatory environment and the way banks and financial institutions manage risk. While the investigating agencies are yet to produce their final report and the quantum of loss is yet to be determined, it is quite evident that banks are still placing huge bets and continuing to adopt unregulated means of maximizing profits and hedging financial risk. Although history has shown that such events do not occur in isolation and can ripple out to related parties, it is hoped that the losses are contained to a bearable figure.
This and similar past events call for a closer look at regulatory reforms that are still on the table, waiting to be implemented. The Volcker Rule is one such regulation, which can limit and regulate complex OTC derivatives trading, if not completely ban it. The rule has been drafted to impose a ban on proprietary trading. It would also impose more stringent capital adequacy regulations and greater transparency in reporting, making financial institutions more resilient and accountable.
Though the objective of regulation is to minimize the risk of big banks failing, will it ever be possible to completely regulate the complex financial world and its trading practices? Is it not the banks' own responsibility to create internal regulations strong enough to shield hard-earned consumer deposits from losing value, and to avoid bailouts funded by taxpayers?
The onus of managing risk within banks and financial institutions should be shared equally among all lines of business and at all levels of the hierarchy. Designating a Chief Risk Officer (CRO) and establishing a risk management unit under the CRO will not solve the problem unless all lines of business work cohesively to determine the risk mitigation strategy. Internal buffers need to be added on top of federal regulations when arriving at the bank's risk management guidelines. This will protect the bank from marginal errors arising out of trading practices.
Banks need to adopt a comprehensive approach to risk management, encompassing risk identification, measurement and mitigation. An integrated framework should be designed to enable "people", implement "processes" and leverage "technology" to streamline the risk management function across the firm's operations. People from all sections of the bank should be sensitized to the risk parameters to be considered while executing transactions, with a broader perspective on the impact on the bank's financial health. Processes should be implemented with the objective of creating discipline in the operating model, minimizing errors and helping ascertain the risk exposure at any point in time. Lastly, technology can act as the enabler for implementing processes and educating people about smart and effective risk management.
A continuous focus on improving risk management techniques and processes is imperative. Can risk be completely avoided? No; even granting a small consumer loan carries risk. Can risk be better managed? Yes, if banks take ownership of managing it.
In my previous blogs, I have attempted to touch upon some not-so-widely-debated facets of Basel II and its impact on the crisis. While I presented selected aspects of Basel III as retrospectively applying to the Basel II era, I noted the obvious: the evolution of Basel, both proactive and reactive. Having said that, I wanted to stress test one of my older concepts, the need for a unique technology framework to cater to this evolution, in a more comprehensive fashion. So, here goes...
Whenever I am handed a new assignment, Lao Tzu crosses my mind: "A good traveller has no fixed plans and is not intent on arriving". Sadly enough, neither my boss nor the clients ever seem to agree...
...which leads to some memoirs from "The Chronic Traveller's Diary". Having seemingly survived Continental Europe as a vegan (yes, including the inevitable sacrifice of piquancy) and with just English in my linguistic arsenal, I was exultant to be in the land of the Brits. Apart from the cold weather, which you could easily mistake for a North Indian winter, I was pretty much at home. And with that comes the ability to find vegan food, especially curry, in abundance. One day, I was out with a good ol' friend for lunch. Even as the food arrived, we were still not done jabbering, when suddenly I tasted something unpleasant. I looked down to find my pizza strewn with chunks of hot dog (possibly! c'mon, how could I know?). I gestured to the waitress. "Is this Veneziana?" I queried as she came up. "Oh! This is the Americano your friend ordered," she said, and coolly switched our plates. "Everything's OK now?" she remarked with a beaming smile.
As we dragged ourselves back to work, I couldn't help wondering whether this one experience would put me off visiting this pizzeria. If so, would it be because of the bad initial service encounter or because of their failure to realize their mistake? In the financial services world, the operational risk pundits would have loved to pounce on it and call it a "Reputational Risk Event". But hang on: what if you were marooned on an island, waiting for Captain James Cook to make contact, and this was the only food source; hypothetically? Well, there's no bothering about a risk if it's not going to blow up into a loss, is there?
With an over-arching risk like reputational risk, an unimaginable number of factors come into play: market dynamics like competition, which in effect is governed by entry and exit barriers, which in turn rest on a series of other factors. How feasible is it going to be to set up a new restaurant on an island, especially given that island tribes aren't known to be particularly fond of pizza? [Sarcasm] In banking parlance, this effect is magnified. For one, we don't have financial mom-and-pop stores cropping up on every street corner; regulatory and financial barriers are high! We can, however, most certainly be glad that the banking industry ain't a monopoly.
While banks may not be worried about a new competitor, they do need to fear 'churn'. All said, reputational risk is really a function of market perception of operational risk management failures within the bank. (Now... why didn't I say "function of operational risk management"?... Hmm.) A single risk management failure casts a cloud of cognitive bias over the perceived risk management capabilities of the bank. Here, the assumption that such a reputational risk event would lead to losses rests on the notion of revenue thinning on account of customer churn. But what happens when every competitor (or at least the majority of the big boys) has a similar breakdown of risk management? Ah, well... the credit crisis wasn't so long ago, was it? We could go with "everyone's guilty and hence even", I suppose!
Clearly, as something of concern to a bank, reputational risk is narrower than it may sound, referring only to those events that negatively affect its revenue stream. For instance, a bank failing to fulfil its 'social' responsibilities, though probably taking a hammering to its 'reputation', would not suffer a reputational risk, as the failure casts no slur on its 'money-making competencies'. Rather, it should act as a point in its favour, since the money can be deployed for better purposes. How do mortgage-backed securities sound, for a start? [Sarcasm]
Turns out, with pleasant snowy weather outside the window, I am more distracted than I thought; so how about we continue this next year? ;-)
Here's wishing you a great year ahead, folks !
"Cost and Worth are very different things" - Luke Brandon
"Is it worth the risk?" - Pessimist Populace
OK; let's back up a little here. The economic concepts of cost and worth are often taken as synonyms. While cost is the factual, quantitative measure of the monetary value expended to achieve or acquire something, worth, though quantifiable, is purely subjective and at the mercy of a vantage point.
From the data at hand, while I can't draw a conclusion on how Basel III will fare in the world of black swans, what I can say with certainty is that movies offer good analogies for the point I am trying to make. In Confessions of a Shopaholic, Rebecca Bloomwood tries to buy a scarf by spreading the price across cash and multiple cards, but with one card declined, she is still $20 short. She rushes to a hot dog vendor, going to the front of the line, begging the vendor to give her cash back on a check, even offering to buy all of his hot dogs. Luke Brandon, the man at the front of the line, gives her twenty dollars to get her out of the way so he can get his hot dog, telling her there is a difference between cost and worth.
Back in the world of operational risk, cost and worth still have clear distinctions. The cost of a risk would, for me, really be the cost of the control (including the opportunity cost of capital and resources from their use elsewhere) that can leave the residual risk in the realm of low probability and impact. Besides the 'tail' risks (high-impact, low-probability items like catastrophes), this would be feasible for the vast majority. Well, but hold on, isn't the cost of a risk the damages resulting from its graduation into a loss? No, because, for one, that figure is variable and can range to infinity depending on the dynamics of its occurrence; and for another, the cost of each risk in relation to another would become disproportionate, limiting the evaluation of its economics.
Now, how much is a risk worth? Ah, this one would include everything from the simplest estimated tangible damages to the slightly more complicated-to-quantify reputational risk arising from a probable loss event, after setting off return-on-capital savings due to better management of risk.
Quantification of reputational risk, eh? - Easier said than done! While there is more than one logical approach, the question is really: how close to comprehensive can / should you get? (My next post would be on this!)
In short, cost would be the expense of combating the risk, while worth would be the penalty if it is not combated. Or rather, the latter would be the implied benefit from preventing the manifestation of risk into loss.
In the clamour for limited resources, the opportunity cost component needs to clearly reflect the preference for implementing a control for one risk over another. Unfortunately, the only element considered for such decision making today is the estimated loss for a risk based on historical frequency and severity. While what I have talked about above also includes this as part of worth (as, incorrect as it may be, it is for obvious reasons the only key statistical data that can be extrapolated from the past), consideration of other factors lowers its weight in the overall decision.
By the definition above, while the cost (of a risk) is singular across all its occurrences, worth should factor in the effect across the entire organisation. This would add more sanity to the prioritization of risks.
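To make the cost-versus-worth prioritization concrete, here is a minimal sketch. All figures, risk names and the organisation-wide multiplier are hypothetical illustrations, not anything from a real RCSA:

```python
# Illustrative sketch: prioritising controls by comparing the cost of a
# control against the 'worth' of the risk it mitigates. All figures and
# names below are hypothetical.

risks = [
    # (name, control_cost, expected_loss, org_wide_multiplier)
    ("payment fraud",   50_000, 120_000, 1.5),
    ("data breach",     80_000,  90_000, 2.0),
    ("process failure", 20_000,  30_000, 1.0),
]

def worth(expected_loss, multiplier):
    """Worth = implied benefit of preventing the loss, scaled across the organisation."""
    return expected_loss * multiplier

# Rank by net benefit (worth minus control cost), not by historical
# expected loss alone - the point made in the text above.
ranked = sorted(risks, key=lambda r: worth(r[2], r[3]) - r[1], reverse=True)
for name, cost, loss, mult in ranked:
    print(f"{name}: cost={cost}, worth={worth(loss, mult)}, net={worth(loss, mult) - cost}")
```

The ranking flips once the organisation-wide effect is factored in: a risk with a modest standalone expected loss can outrank a 'bigger' one if its worth spans more of the organisation.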
Outside the core business applications landscape, the Cloud is doing well: from the "Chrome-Alone" browser-based OS, streaming OSes, game checkpoints on the cloud and multiple-device syncing, to virtual graphics. Heck, you can even run your own makeshift private social network. However, with business applications, it hasn't gained much steam. With the current approach in this area being shifting the entire application to, and delivering it from, the cloud, the key is to discern which chunks make better sense on the cloud. In the case of risk management, anonymized "data" (cost, worth and plenty of others - more on this later) is a winner. External loss data has been in use for many years now - this in reality extends the horizon of the 'types of data' available, facilitating better and faster intelligence, sans the data warehouses or the 'middle-men' data providers.
Risk management decisions are much like their other economic counterparts (albeit with their own versions of cost and worth), and hence it would make a great deal of sense to enhance the risk and controls assessment process (RCSA / RCA) in an endeavour to factor in the same.
My experiments on application of cost-worth principles in social relationships have often met with harsh criticism, resistance and the classic "What's wrong with you?" reactions deterring further work in this area [Sarcasm]. Anyways, for the socially challenged, there are other options, like just lighting up your Blu e-cigarettes.
What would you do if you were really inspired by that late-night movie? 'Transform' an Optimus from trash cans? Improvise Tony Stark's Mark II Suit in a way that would rival 'War Machine' Rhodey? Attempt to become a real-life superhero? Develop an idée fixe of enigmatic moments to your special number in a way that would trounce Scott Fahlman? Or, armed with the same obsession, perhaps better Edward O. Thorp in 'counting' (for) some greenback?
I can reminisce about my Professors' analogies better than the classes - Sometimes it's nice to goof around the point. Anyways, circling back to the key topic - Expressing his anguish on the stance of the
Beyond financial and reputational loss for the victim organisations and, maybe, the backlog of Black Ops missions for users, the steady rise of such incidents in recent times is a real threat to banks, credit card issuers, payment networks and insurance providers. Popular opinion has it that many of these have to do with the target organisations (incl. Sony, MasterCard, Visa etc.) developing enemies in dark corners of the internet. Personal and financial data have been made away with in some cases, like Sony and Citi. While debit and credit card holders, with limited timely action from their side, are absolved of any large liabilities through various regulations (like the Consumer Credit Act in the UK and the Truth in Lending Act in the US), the real contention is: who bears the loss then? If you aren't going to be paying for those shady transactions on your card, someone else is going to have to.
Unfortunately, the story doesn't end there. Financial data apart, the loss of personal data and unencrypted passcodes, which users tend to rampantly reuse across cyberspace, heightens the potential for these incidents to magnify into large-scale identity theft. While the scene remains the same for customers, save for a lot of paperwork, the financial institutions are still the losing participant in the zero-sum game. It remains to be seen whether and how victim organisations will be held responsible by the financial ecosystem for such write-offs.
When the users of the victimized organisation's service are spread across the globe (e.g. PSN), another key issue remains: the maturity of financial practices and the adroitness of the information systems supporting them are not on an even platform, making it challenging to prevent, or alert on, misuse. For instance, many countries do not have a unique resident ID or a central credit bureau, and the individual card issuers themselves may not have the necessary infrastructure to effect pattern-based intelligence, leaving the card holder to foot the losses, except where certain classes of cards are insured against such mishaps, passing the buck back to the financial system.
Little comfort can be drawn from the fact that the loss from one of the breaches was just out-dated credit card information, since it begs the question of compliance with PCI-DSS Requirement 3.1, which emphasises minimal (in amount and time) retention of cardholder data and secure deletion of data beyond what is dictated by business needs. This would hopefully drive other online merchants to refrain from obsessively storing credit card information on opening an account, or to comply with PCI-DSS. Well, it wouldn't hurt to at least provide an option enabling risk-averse / infrequent users to feed payment information on a per-transaction basis; after all, it's their dough!
With these jeopardies no longer being surreal, the financial system has to bother about risks beyond its control, be it in the form of money (assuming non-recovery from victim organisations), procedural overhead or demand on its resources.
In the event of the bank or card issuer having to bear the monetary brunt, this would be yet another un-modelled scenario from an Operational Risk standpoint; well, it ain't a "catastrophe" which is what is bucketed / budgeted under 'external events' and even there, very few institutions factor in the far-side of the risk quadrant whilst assessing the extent of their exposure.
To re-quote Sheldon Cooper, even given the stolen identity, the hackers couldn't become Green Lantern unless they were chosen by the guardians of Oa, but given enough start-up capital and adequate research facilities, they could be Batman! The odds of witnessing the 'Dark' Knight seem oh so real now.
Even as I write this, news of infiltration into 72 world organisations, targeting commercial and state secrets and intellectual property, flows in. In this age, nothing is safe from the digital Jack Sparrow. But, hey, we can at least do our part!
14.28% of your life is an awful lot of time to be spent loathing, I thought as I was driving to work, on a Monday morning. Despite my Le(a)d (Zeppelin) foot affliction, a biker managed to keep pace through the highway. Cometh the city, he zipped ahead, wriggled through the traffic...till he got boxed in a corner, as I sped away gazing through my rear-view mirror. Inching closer to work (place), got me thinking, isn't that how most software products have b(f)oxed their customers?
Risk Management has traditionally been 'Compliance' coloured, which has painted the software in the shade of 'Products'. With our local weather channel dude not being able to foretell a day's weather, how could we predict black swans and the accompanying regulatory reaction? The funny thing with probability (for high-impact items) is that we end up having to plan for them, no matter how minimal the likelihood! The question remains: did the organisations plan for Basel III? Well, majorly for changes to their risk management systems...
Years back, when the action started in this space, custom developed applications were obviously not the way to go - Clunky as the 'car', not set-up for success and besides did not offer the swift go-to-market and "clone the Basel handbook" advantage as the canned software product 'bikes'. However, swiftness doesn't mean much in the Basel 'box' - You could be waiting on your product vendor to support Basel 'n+1'. I did speak about a best of both worlds approach, kind of like a Batpod, you know rip through those cars in your way!
These decisions involve a tough choice, like, for instance, selecting between a regular and an express check-out lane at the supermarket. While there are a variety of factors to consider, contrary to popular belief, the regular lane (fewer people, more items per person) is quicker than its counterpart (more people, fewer items per person). A study identifies the hidden culprit as the 'tender time': 48 seconds for every additional customer translates to 17 additional items scanned at 2.8 seconds per item. Plus, I would assume the "Express" lane naming convention and the halo effect of people flocking to it have accentuated this misconstrued belief. Sounds familiar? Either way, this amount of thought and reasoning is required when determining the IT strategy for Risk Management. The vendor non-dependency of these new-age componentized products transforms regulatory (or pretty much any) change into a simple configuration affair. You can cut short entire enhancement / product release lifecycles, and needless to say, the cost. You could argue that there are huge timelines for adoption of the new(er) accords; however, these are better spent on realigning business - IT should scale to support business, rather than being expended upon to make scale for business.
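The study's tender-time arithmetic checks out, by the way; a two-line sanity check:

```python
# Checking the checkout-lane arithmetic from the study cited above:
# one extra customer adds ~48 seconds of 'tender time', equivalent to
# scanning ~17 extra items at 2.8 seconds each.
tender_time = 48          # seconds added per additional customer
scan_time_per_item = 2.8  # seconds per scanned item

items_equivalent = tender_time / scan_time_per_item
print(round(items_equivalent, 1))  # roughly 17 items' worth of delay
```

So the "regular lane with fewer, fuller carts" beats the express lane precisely because each extra customer costs as much as seventeen extra items.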
Extensive configurability, ability to piggyback upon existing IT investments, knack of jazzing up on upgrades to the piggybacked, infrequent need for enhancements to the 'core', elimination of vendor dependency, variety of interfaces; all clearly point to these componentized product frameworks being the better approach, unless of course Chief John Anderton starts a 'Pre-Basel' team.
P.S. I love going to work on a Monday morning. Drive Safe.
What would you do if you had a time machine to travel back in the past? Show Da Vinci an iPhone? Invest in the start-up Google? Hang out with yourself chatting about your life in the future? Accelerate singularity by teaching science to cave men? Well, the list is endless...
But, if BCBS had travelled back in time with Basel III, what would have become of the financial ecosystem that surrounds us? Would the crisis have been averted? An assessment of what was lacking in Basel II and hence a wish-list in its newer successor would explicate this.
As we know, reliance on credit ratings to determine the purportedly low Basel II capital through RWA led to the 'manufacture' of AAA-rated CDOs backed by lousy sub-prime mortgages, which fuelled the crisis. Basel III deals with specific problem areas in risk weighting: increased risk weights for super-senior tranches of (re)securitization products; elimination of regulatory arbitrage between the banking and trading books, by treating securitization exposures on the latter on par with the former; and strengthened requirements on OTC derivatives and repos through capital for mark-to-market losses via Credit Valuation Adjustments. It also addresses the quantum and quality of capital, through higher tangible common equity and the capital conservation and counter-cyclical buffers. The larger issue, however, pertains to the concept of risk weighting itself. This approach still urges the banks to "find" apparently risk-free assets, which can be leveraged much higher than their riskier counterparts - we may be witness to some whacky financial engineering, yet again!
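The incentive to "find" low-risk-weight assets is just arithmetic: the capital charge applies to risk-weighted assets, so the lower the weight, the more exposure a unit of capital supports. A sketch, with simplified illustrative weights and the standard 8% minimum ratio:

```python
# Sketch of why risk weighting invites "finding" low-weight assets:
# capital is charged on exposure * risk_weight, so a low weight lets the
# same capital support a far larger exposure. Weights are illustrative.

CAPITAL_RATIO = 0.08  # minimum capital as a fraction of risk-weighted assets

def capital_required(exposure, risk_weight):
    return exposure * risk_weight * CAPITAL_RATIO

# How much exposure does 100 units of capital support at each weight?
for weight in (1.0, 0.5, 0.2, 0.0):
    if weight == 0.0:
        print(f"weight {weight:.0%}: unlimited exposure (zero capital charge)")
    else:
        max_exposure = 100 / (weight * CAPITAL_RATIO)
        print(f"weight {weight:.0%}: exposure up to {max_exposure:,.0f}")
```

At a zero weight (the sovereign case discussed next), the capital charge vanishes entirely, which is exactly the loophole the text worries about.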
While the zero risk weight assumption for AAA and AA-rated sovereigns (which fed the Sovereign debt crisis) has been acknowledged as faulty, it has been let be. Well, the governments which put Basel III together needed some incentive, didn't they? - Cheap borrowing!
While the oligopoly of rating agencies and the Gaussian Copula-powered symbiotic growth of CDSs and CDOs played their part in harmonised synchronicity, the use of internal rating models finished things off. The dumbed-down simplification of VaR garnered attention for expressing and interpreting individual and firm-wide risk as a single figure for any asset class; its limitations, however, were forgotten. The assumption that the bank was in the best position to measure its own risk, when coupled with VaR's "normal", no-extremities market, failed to pay off. Risk-based compensation in this case proved counter-productive, further encouraging managers to paint a low-risk picture.
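That "normal, no-extremities market" assumption is easy to see in the textbook parametric (variance-covariance) VaR, sketched below with hypothetical portfolio figures. The normal quantile in the formula is precisely the piece that understates the tails:

```python
# Minimal sketch of the "normal market" VaR the paragraph criticises:
# parametric VaR assumes normally distributed returns, so tail losses
# beyond the chosen quantile are simply not in the picture.
from statistics import NormalDist

def parametric_var(portfolio_value, daily_vol, confidence=0.99):
    """One-day VaR under a normal-returns assumption (illustrative only)."""
    z = NormalDist().inv_cdf(confidence)  # ~2.33 at 99% confidence
    return portfolio_value * daily_vol * z

# A hypothetical 1,000,000 portfolio with 2% daily volatility:
print(round(parametric_var(1_000_000, 0.02)))
```

The single number it produces is seductively simple, which is exactly why its assumptions were so easily forgotten: a fat-tailed market can lose multiples of this figure while the model reports business as usual.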
The back-stop, non-risk-based measure, viz. the leverage ratio, is a step in the right direction, albeit a small one. If the past is any indication: Lehman was levered 31:1, whereas the current Basel III rules peg the limit at 33:1. Ultimately, this treads a fine line - what cost in economic growth is a fair price for curbing risk?
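The 33:1 figure is just the reciprocal of the 3% minimum leverage ratio, which makes the comparison with Lehman uncomfortably direct:

```python
# The back-stop leverage ratio as simple arithmetic: a 3% minimum
# capital / total-exposure ratio caps leverage at roughly 33:1.
min_leverage_ratio = 0.03

max_leverage = 1 / min_leverage_ratio
print(f"Max leverage under Basel III: {max_leverage:.0f}:1")

lehman_leverage = 31  # Lehman's reported leverage before its collapse
print(f"Lehman at {lehman_leverage}:1 would have sat inside the limit")
```

In other words, the back-stop as calibrated would not even have flagged the most famous failure it was designed in response to.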
Lehman's folding was a result of liquidity problems from the unwinding of huge derivative positions. The 30-day stressed Liquidity Coverage Ratio, the encouragement of medium- to long-term funding through the Net Stable Funding Ratio, and the variety of monitoring tools do well here. However, there are arguments that the LCR's bias toward government bonds could hamper credit to small businesses - interesting, given that they are the ones who do not have access to capital markets and hence turn to banks for fundraising, where their 'unrated' status again tends to extend the 'halo effect'.
Let's assume BIS travels into the future with Basel III: would it have avoided a recurrence? There's no telling with Black Swans, much like Miles Dyson did not know his neural-net processor would create Skynet and bring about Judgement Day - no one ever sees it happening; that's why it does! That really explains the two Basel accords prior, and the ones after... hopefully not!
All said, Basel III is one of the swiftest and smoothest regulations in modern history.