Commentaries and insightful analyses on the world of finance, technology and IT.


March 27, 2015

How to comply with new regulations such as fast pay-out? - A banker's dilemma

The new guidelines on fast pay-out issued by the UK financial regulator, the FSA, are changing the perspective of banks and other deposit-taking financial institutions on achieving a consolidated, comprehensive view of their customers and their activities, irrespective of touch points. These guidelines, and the consultative framework the FSA is building, will significantly speed up the processing of depositors' claims. The consultation prescribes a mandatory period of seven days to process and settle claims. An important element of the proposals is the introduction of a clause requiring banks to be able to furnish a Single Customer View (SCV), ensuring they are in a position to provide the aggregate balance held by each eligible depositor (FSA UK).

In its attempt to solve depositors' difficulties in getting their money back from banks, as well as to ensure that banks do not suffer runs, the FSA is asking banks a few fundamental questions. These can be summarised as:

a. How do banks store and retrieve all their customer information?
b. Are the systems and applications that banks have built over the years capable of extracting vast numbers of data attributes to create meaningful information?
c. Can the banking and other financial organisations realistically establish a relationship between depositors A and B when they are the same or are interconnected with transactions?
d. How do banks manage their customer information particularly in the context of mergers and acquisitions?
e. Can the systems of the acquired and the acquiring bank be integrated in a way that enables a single view of their customers and their activities?

Traditionally, banks have organised themselves in silos created on the basis of products/services or geographies. Product innovation has made it harder still to share customer information seamlessly between different SBUs. In addition, disparate systems exist between different divisions of a bank, making it all the more difficult to extract information on a real-time basis and understand depositors' exposure to the bank. Though in the last few years banks have spent significant effort and money implementing robust CRM systems and other applications, such as KYC, to meet internal and external compliance requirements, a comprehensive view providing greater depth of knowledge about their customers remains far from reality. The key stumbling blocks to achieving a single view of customer data across products and service lines are the lack of an information bridge between business architecture and technology architecture, and the difficulty of building a common symbology across source systems.

Historically, organisations have approached the solution from the perspective of building large data warehouses: load customer information into large-scale databases, then analyse it through data marts and data-processing applications built as additional layers to create meaningful reports and views of customers and their activities. However, issues such as duplication and re-creation of customer data, the effort involved, and the requirement to maintain structured and unstructured data with real-time updates of changes have limited the benefits of these data warehouses.

To be in a position to meet the FSA's deadline, banks now need to re-examine their entire IT landscape. Sooner or later, the IT management of these banks will have to take a deeper look at the multiple databases they have built over time to maintain and manage their customers across the globe. They need to be able to seamlessly distribute and redistribute information as, when, and where needed. To comply with the FSA, banks need to take a few first but important steps.

Step 1: Build an enterprise-wide roadmap for master data quality: It is well known in the industry that large organisations hold multiple formats and versions of master data. Having a defined view of how customer-related information will be captured and maintained is the first foundation stone. De-duplicating customer information and building a standardised format through which customer information can be acquired is critical to the intended strategy of building an SCV.
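As a toy illustration of the de-duplication idea (not any bank's actual tooling; real SCV programmes layer fuzzy matching and data-stewardship workflows on top), duplicates can be grouped under a normalised match key built from a few identifying attributes. All records and field names below are invented:

```python
# Minimal rule-based customer de-duplication sketch (illustrative only).
from collections import defaultdict

def normalise(record):
    """Build a simple match key from name, date of birth and postcode."""
    name = "".join(record["name"].lower().split())
    postcode = record["postcode"].replace(" ", "").upper()
    return (name, record["dob"], postcode)

def deduplicate(records):
    groups = defaultdict(list)
    for rec in records:
        groups[normalise(rec)].append(rec)
    # Keep one "golden" record per key; the rest would be flagged for review.
    return [recs[0] for recs in groups.values()]

customers = [
    {"name": "John Smith",  "dob": "1970-01-01", "postcode": "EC1A 1BB"},
    {"name": "JOHN  SMITH", "dob": "1970-01-01", "postcode": "ec1a1bb"},
    {"name": "Jane Doe",    "dob": "1985-06-15", "postcode": "SW1A 2AA"},
]
print(len(deduplicate(customers)))  # 2 unique customers
```

The point of the sketch is the defined, standardised format: once capture is normalised at acquisition time, de-duplication becomes a grouping problem rather than a forensic one.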

Step 2: Build an information architecture: Within banking organisations, different business and technology architectures exist. The missing link has been a clear vision for a unified information architecture. Defining the process for building a common symbology to serve as a single source for cross-reference is critical. This will not only help in the seamless update of all downstream systems but also play a significant role in how information is received from upstream systems, without manual, intervention-based data cleansing. Limiting manual intervention can significantly reduce the errors that typically occur during the creation of customer information.
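At its simplest, a common symbology is a cross-reference table from each source system's local identifier to one master identifier. The sketch below (system names and ids are invented) shows how the SCV's aggregate depositor balance then falls out naturally:

```python
# Illustrative cross-reference: each source system's local customer id
# maps to a single master symbol shared by all downstream consumers.
xref = {
    ("core_banking", "CB-1001"): "MASTER-42",
    ("cards",        "CRD-77"):  "MASTER-42",
    ("mortgages",    "MG-310"):  "MASTER-42",
}

def master_id(system, local_id):
    return xref[(system, local_id)]

def aggregate_balance(balances):
    """Sum balances reported by source systems under their master id."""
    totals = {}
    for (system, local_id), amount in balances.items():
        mid = master_id(system, local_id)
        totals[mid] = totals.get(mid, 0) + amount
    return totals

balances = {("core_banking", "CB-1001"): 5000,
            ("cards", "CRD-77"): -250,
            ("mortgages", "MG-310"): 0}
print(aggregate_balance(balances))  # {'MASTER-42': 4750}
```

In practice the cross-reference itself is the hard part to build and govern; once it exists, aggregation across product silos is mechanical.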

Step 3: Define the view on solution choice: Data warehouses and SOA both provide ways to achieve a single view of customers. Depending on the number of source systems, the data volume, and the integration complexities, an organisation needs a clear perspective on a solution that not only caters to current needs but also addresses business and customer growth in the foreseeable future. If the choice is to build a large data warehouse, it is important to understand how a single update can also propagate to the other databases that store customer information.

In a nutshell, there is no silver bullet for these requirements. To choose the tools and technologies that give an organisation a single view of customers, it needs to weigh cost-effectiveness, flexibility, and analytical requirements. Building a single view of customers will help organisations benefit well beyond regulatory compliance. Finally, it is a deep understanding of the customer that differentiates organisations and propels competitive advantage.


April 10, 2013

The Jumbled Pizza Cataclysm

Whenever I am handed a new assignment, Lao Tzu always crosses my mind: "A good traveller has no fixed plans and is not intent on arriving". Sadly enough, neither my boss nor the clients ever seem to agree...

...which leads to some memoirs from "The Chronic Traveller's Diary". Having seemingly survived Continental Europe as a vegan (yes, including the inevitable sacrifice of piquancy) and with just English in my linguistic arsenal, I was exultant to land in the Land of the Brits. Apart from the cold weather, which you could easily mistake for a North Indian winter, I was pretty much at home. And with that comes the ability to find vegan food, especially curry, in abundance. One day, I was out with a good ol' friend for lunch. Even as the food arrived, we were still not done jabbering; when suddenly, I tasted something unpleasant. I gestured to the waitress. "Is this Veneziana?" I queried as she came up. "Oh! This is the Americano your friend ordered," she said, and coolly switched our plates. "Everything ok now?" she remarked with a beaming smile.

As we dragged ourselves back to work, I couldn't help wondering whether this one experience would put me off visiting this pizzeria. If so, would it be because of the bad initial service encounter, or perhaps their failure to realise their mistake? In the financial services world, the operational risk pundits would have loved to pounce on it and call it a "Reputational Risk Event". But hang on: what if you were marooned on an island, waiting for Captain James Cook to make contact, and this was the only food source; hypothetically? Well, there's no bothering about a risk if it's not going to blow up into a loss, is there?

With an over-arching risk like reputational risk, an unimaginable number of factors come into play: market dynamics like competition, which in effect is governed by entry and exit barriers, which are in turn predicated on a series of other factors. How feasible would it be to set up a new restaurant on an island, especially given that island tribes aren't known to be terribly fond of pizzas? [Sarcasm] In banking parlance, this effect is magnified. For one, we don't have financial mom-and-pop stores cropping up on every street corner; the regulatory and financial barriers are high! We can, however, most certainly be glad that the banking industry ain't a monopoly.

While banks may not be worried about a new competitor, they do need to fear 'churn'. All said, reputational risk is really a function of the market's perception of operational risk management failures within the bank. (Now... why didn't I just say "function of operational risk management"?... Hmm.) A single risk management failure casts a cloud of cognitive bias over the risk management capabilities of the bank. Here, the assumption that such a reputational risk event would lead to losses rests on the notion of revenue thinning on account of customer churn. But what happens when every competitor (or at least a majority of the big boys) has a similar breakdown of risk management? Ah, well... the credit crisis wasn't so long ago, was it? We could go with "everyone's guilty and hence even", I suppose!

Clearly, as something of concern to a bank, reputational risk is narrower than it may sound, referring only to those events that negatively affect its revenue stream. For instance, a bank failing to fulfil its 'social' responsibilities, though probably taking a hammering on its 'reputation', would not suffer a reputational risk, as it does not cast a slur on its 'money-making competencies'. Rather, it should act as a point in its favour, since the money can be deployed for better purposes. How do mortgage-backed securities sound, for a start? [Sarcasm]

April 5, 2013

The Risk Scrutiny Galvanization

With operational risk management, organisations aim for an imperforate ambit, exactitude of the numbers and providence to emblematize the contingent. Numbers often grab centre stage, manifesting as milestones, unsurpassable; or financial dominance, resounding. With financial disciplines, this couldn't be more veracious; risk management is no exception.

In its quest for precision, every organisation inevitably commits the cardinal sins of delimiting the unbounded, quantifying the abstruse, and postulating the unknown.

For a discipline forced to cope with imperfection emanating from a source, disembodied, yet simultaneously braided within a majority of other event types, aka 'the people factor'; this can often be a tough ask.

In many ways, the 'people' facet of ORM is like stumbling straight onto the end of a book, only to find it abominable. Let's face it, there can be nothing complete, accurate, or predictable about people risk. The real question is how many organisations care to flip through the book, ending notwithstanding. It's like proposing travel back in time, with a future un-impacted by any change to the past. But why wouldn't you just enjoy the ride?

Of the people risks internal to the organisation, quite a few (frauds, rogue trading), albeit not all (who are we kidding here), can be negated through an appropriate combo of system and process controls, properly implemented. That such incidents have surfaced even in the recent past is a knock-out punch for the 'compliance' paradigm of risk management.

On the causes of people risk itself: churn, though afflictive, is a lesser cause of concern for organisations than an apathetic workforce. Holding onto that thought, let's ponder the below...

Risk culture can shape the risk awareness of employees and, as a result, the risk profile of the organisation. While risk culture and awareness are all-permeating, arguably the former flows top-down while the latter is bottom-heavy; either way, both are agreeably people-driven, people-communicated, people-actioned structures in any facet of risk management.

Whilst every organisation might have an ethical-sounding and perceivably fair set of policies, whether actualised or merely adopted, its adherence to them and its everyday actions set the management's tone towards risk culture. And when I say management, I also mean senior and middle management, as they often communicate the tone at the top.

Given risk management's heavy reliance on decentralisation in identifying, tackling, and reporting risks, or at a bare minimum being cognisant of them in the course of daily business, the contribution of risk culture to risk awareness cannot be emphasised enough.

Now, back to my point on the 'apathetic' workforce: this is precisely where organisations may shoot themselves in the foot by hopelessly clinging to policies rather than using them as guidelines. If legislation drafted by experts isn't fool-proof, neither can an organisation's policies be. Employees may start to drag their heels, stick to the job description, and contribute much less to managing risks, whilst still remaining within the 'policy-defined terms of employment'.

In the current world of complexly muddled financial engineering, two remedial calls are growing louder: one for more regulatory impositions (which, understandably, will be reactive - like Batman solving Riddler puzzles, albeit without the forewarnings), and the other for organisations to be 'risk-smart', i.e. to own up to risk management. With the latter, agreeably, it's not as if the entire organisation is contriving to profit by dodgy means. Au contraire, more often than not it's a single employee or a team. But hang on, accountable doesn't mean the employee concerned has a moral epiphany, in fact far from it; it means the other employees are sufficiently motivated to 'rat out' (excuse the phrase) the wrongdoers!

Employees are much like a financial instrument, risk and return all packaged in one, and as long as the organisation's handling of the living organism deters risk or enables its identification, it's all good!

August 24, 2012

To Help Prevent the Next Market Disaster, Raise the Bar on Testing Standards

We have yet to learn all the details behind the US stock market's 45 minutes of terror on August 1, during which Knight Capital's automated trading systems spewed out erroneous orders on the New York Stock Exchange.  In the meantime, we can draw a preliminary conclusion from what we do know.  Knight's acknowledgement of a bug in software released the night before the event points directly to the need for higher testing standards.


April 16, 2012

The Basel III Greenhorn

In my previous blogs, I have attempted to touch upon some not-so-widely-debated facets of Basel II and its impact on the crisis. While I presented selected aspects of Basel III as applying retrospectively to the Basel II era, I noted the obvious: the evolution of Basel (proactive and reactive). Having said that, I wanted to stress-test one of my older concepts (the need for a unique technology framework to cater to this evolution) in a more comprehensive fashion. So, here goes...


[If you have been reading my earlier posts, you may want to directly skip to pages 4 and 5]

September 9, 2011

The Risk Cost-Worth Postulate

"Cost and Worth are very different things" - Luke Brandon

"Is it worth the risk?" - Pessimist Populace

Ok; let's back up a little bit here. Often, the economic concepts of cost and worth are treated as synonyms. While cost is the factual and quantitative measure of the monetary value expended to achieve or acquire something, worth, though quantifiable, is purely subjective and at the mercy of a vantage point.

From the data at hand, while I can't draw a conclusion on how Basel III will fare in the world of black swans, what I can say with certainty is that movies offer good analogies for conveying the point I am trying to make. In this particular movie (Confessions of a Shopaholic), Rebecca Bloomwood tries to buy a scarf by spreading the price over cash and multiple cards, but with one card being declined, is still $20 short. She rushes to a hot dog vendor, going to the front of the line, begging the vendor to give her cash back on a check, even offering to buy all of his hot dogs. Luke Brandon, the man at the front of the line, gives her twenty dollars to get her out of the way so he can get his hot dog, telling her there is a difference between cost and worth.

Back in the world of operational risk, cost and worth still have clear distinctions. The cost of a risk, for me, would really be the cost of the control (including the opportunity cost of capital and resources from their use elsewhere) that can leave the residual risk in the realm of low probability and impact. Barring the 'tail' risks (high-impact, low-probability items like catastrophes), this is feasible for the vast majority. But hold on, isn't the cost of a risk the damage resulting from its graduation into a loss? No: for one, that is variable and can range to infinity depending on the dynamics of its occurrence; and for another, the cost of each risk in relation to another would become disproportionate, limiting the evaluation of its economics.

Now, how much is a risk worth? Ah, this one would include everything from the simplest estimated tangible damages to the slightly more complicated-to-quantify reputational risk arising from a probable loss event, after setting off the return on capital savings due to better management of risk.

Quantification of reputational risk, eh? Easier said than done! While there is more than one logical approach, the question really is: how close to comprehensive can / should you get? (My next post will be on this!)

In short, cost would be the expense of combating the risk, while worth would be the penalty if you don't. Or, rather, the latter would be the implied benefit from preventing the manifestation of risk into loss.

In the clamour for limited resources, the opportunity cost component needs to clearly reflect the preference for implementing a control for one risk over another. Unfortunately, the only element considered in such decision-making today is the estimated loss for a risk based on historical frequency and severity. While what I have described above also includes this as a part of worth (for obvious reasons, incorrect as it may be, it is the only key statistical data that can be extrapolated from the past), the consideration of other factors lowers its weight in the overall decision.
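The conventional frequency-severity calculation described above can be sketched in a few lines; all risk names and figures below are invented for illustration:

```python
# Expected loss from historical frequency and severity - the single input
# to prioritisation that the post argues is insufficient on its own.
risks = {
    "payment fraud":    {"frequency": 12,  "avg_severity": 8_000},
    "data entry error": {"frequency": 150, "avg_severity": 300},
    "system outage":    {"frequency": 2,   "avg_severity": 40_000},
}

def expected_loss(risk):
    """Annualised expected loss = event frequency x average severity."""
    return risk["frequency"] * risk["avg_severity"]

# Ranking on expected loss alone ignores the organisation-wide 'worth'
# argued for above; it is shown here only as the status quo.
ranked = sorted(risks, key=lambda name: expected_loss(risks[name]), reverse=True)
print(ranked)  # ['payment fraud', 'system outage', 'data entry error']
```

A cost-worth view would weight this ranking against the cost of each control and the organisation-wide effect of the risk, rather than treating the expected-loss figure as the whole story.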

By the definition above, while the cost of a risk is singular across all its occurrences, worth should factor in the effect across the entire organisation. This would add more sanity to the prioritisation of risks.

Outside the core business applications landscape, the cloud is doing well: from the "Chrome-alone" browser-based OS, streaming OSes, game checkpoints in the cloud, and multiple-device syncing to virtual graphics. Heck, you can even run your own makeshift private social network. With business applications, however, it hasn't gained much steam. With the current approach in this area being to shift the entire application to, and deliver it from, the cloud, the key is to discern which chunks make better sense on the cloud. In the case of risk management, anonymised "data" (cost, worth and plenty of others - more on this later) is a winner. External loss data has been in use for many years now; this in reality extends the horizon of the 'types of data' available, facilitating better and faster intelligence, sans the data warehouses or the 'middle-men' data providers.

Risk management decisions are much like their other economic counterparts (albeit with their own versions of cost and worth), and hence it would make a great deal of sense to enhance the risk and control self-assessment process (RCSA / RCA) in an endeavour to factor in the same.

My experiments with applying cost-worth principles to social relationships have often met with harsh criticism, resistance, and the classic "What's wrong with you?" reactions, deterring further work in this area. [Sarcasm] Anyway, for the socially challenged, there are other options, like just lighting up your Blu e-cigarettes.

August 12, 2011

I'm Batman...

What would you do if you were really inspired by that late-night movie? 'Transform' an Optimus from trash cans? Improvise Tony Stark's Mark II suit in a way that would rival 'War Machine' Rhodey? Attempt to become a real-life superhero? Develop an idée fixe of adding enigmatic moments to your special number in a way that would trounce Scott Fahlman? Or, armed with the same obsession, perhaps better Edward O. Thorp in 'counting' (for) some greenbacks?

I can reminisce about my professors' analogies better than their classes - sometimes it's nice to goof around the point. Anyway, circling back to the key topic: expressing his anguish at the US government's stance of preventing civilians from launching into space, the protagonist Charles Farmer, who has a rocket under construction in his barn, remarks that as a kid he was told he could be anything he wanted to be, and that he believes it. The recent string of hacks may lead one to believe that someone's interpretation of The Astronaut Farmer is skewed in the direction of irrationalism.

Beyond the financial and reputational loss for the victim organisations and, maybe, the backlog of Black Ops missions for users, the steady rise of such incidents in recent times is a real threat to banks, credit card issuers, payment networks, and insurance providers. Popular opinion has it that many of these have to do with the target organisations (incl. Sony, MasterCard, Visa etc.) developing enemies in dark corners of the internet. Personal and financial data have been made away with in some cases, like Sony and Citi. While debit and credit card holders, with limited timely action on their side, are absolved of any large liabilities through various regulations (like the Consumer Credit Act in the UK and the Truth in Lending Act in the US), the real contention is: who bears the loss then? If you aren't going to be paying for those shady transactions on your card, someone else is going to have to.

Unfortunately the story doesn't end there. Financial data apart, the loss of personal data and unencrypted passcodes, which users tend to rampantly reuse across cyberspace, heightens the potential for these incidents to magnify into large-scale identity theft. While the scene remains the same for customers, save for a lot of paperwork, the financial institutions are still the losing participants in the zero-sum game. It remains to be seen whether and how victim organisations will be held responsible by the financial ecosystem for such write-offs.

When the users of the victimised organisation's service are spread across the globe (e.g. PSN), another key issue is that the maturity of financial practices and the adroitness of the information systems supporting them are not on an even platform, making it challenging to provide a fighting chance of preventing, or alerting on, misuse. For instance, many countries do not have a unique resident id or a central credit bureau, and individual card issuers themselves may not have the necessary infrastructure to effect pattern-based intelligence, leaving the card holder to foot the losses, except where certain classes of cards are insured against such mishaps, passing the buck back to the financial system.

Little comfort can be drawn from the fact that the loss from one of the breaches was just out-dated credit card information, since it begs the question of compliance with PCI DSS Requirement 3.1, which emphasises minimal (in amount and time) retention of cardholder data and secure deletion of data beyond what is dictated by business needs. This would hopefully drive other online merchants to refrain from obsessively storing credit card information on opening an account, or to comply with PCI DSS. Well, it wouldn't hurt to at least provide an option enabling risk-averse / infrequent users to feed in payment information on a per-transaction basis; after all, it's their dough!

With these jeopardies no longer being surreal, the financial system has to worry about risks beyond its control, be it in the form of money (assuming non-recovery from victim organisations), procedural overhead, or demands on its resources.

In the event of the bank or card issuer having to bear the monetary brunt, this would be yet another un-modelled scenario from an operational risk standpoint; well, it ain't a "catastrophe", which is what is bucketed / budgeted under 'external events', and even there, very few institutions factor in the far side of the risk quadrant whilst assessing the extent of their exposure.

To re-quote Sheldon Cooper: even given the stolen identity, the hackers couldn't become Green Lantern unless they were chosen by the Guardians of Oa; but given enough start-up capital and adequate research facilities, they could be Batman! The odds of witnessing the 'Dark' Knight seem oh so real now.

Even as I write this, news flows in of infiltration into 72 world organisations, targeting commercial and state secrets and intellectual property. In this age, nothing is safe from the digital Jack Sparrow. But hey, we can at least do our part!

April 18, 2011

The Waiting Line Conundrum

14.28% of your life is an awful lot of time to spend loathing, I thought as I was driving to work on a Monday morning. Despite my Le(a)d (Zeppelin) foot affliction, a biker managed to keep pace on the highway. Come the city, he zipped ahead and wriggled through the traffic... till he got boxed into a corner, as I sped away gazing through my rear-view mirror. Inching closer to work(place) got me thinking: isn't that how most software products have b(f)oxed their customers?

Risk management has traditionally been 'compliance'-coloured, which has painted the software in the shade of 'products'. With our local weather channel dude unable to foretell a day's weather, how could we predict black swans and the accompanying regulatory reaction? The funny thing with probability (for high-impact items) is that we end up having to plan for them, no matter how minimal the likelihood! The question remains: did the organisations plan for Basel III? Well, mainly for changes to their risk management systems...

Years back, when the action started in this space, custom-developed applications were obviously not the way to go - clunky as the 'car', not set up for success, and besides, they did not offer the swift go-to-market and "clone the Basel handbook" advantages of the canned software product 'bikes'. However, swiftness doesn't mean much in the Basel 'box' - you could be left waiting on your product vendor to support Basel 'n+1'. I did speak about a best-of-both-worlds approach, kind of like a Batpod, you know, to rip through those cars in your way!

These decisions involve a tough choice, like, for instance, selecting between a regular and an express check-out lane at the supermarket. While there are a variety of factors to consider, contrary to popular belief the regular lane (fewer people, more items per person) is quicker than its counterpart (more people, fewer items per person). A study identifies the hidden culprit as the 'tender time': 48 seconds for every additional customer translates to 17 additional items scanned at 2.8 seconds per item. Plus, I would assume the "Express" lane naming convention and the halo effect of people flocking to it have accentuated this misconstrued belief. Sounds familiar? Either way, this amount of thought and reasoning is required when determining the IT strategy for risk management. The vendor non-dependency of these new-age componentised products transforms regulatory (or pretty much any) change into a simple configuration affair. You can cut out entire enhancement / product-release lifecycles, and needless to say, the cost. You could argue that there are huge timelines for adoption of the new(er) accords; however, these are better spent on realigning the business - IT should scale to support the business, rather than being expended upon to make itself scale for the business.
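The tender-time arithmetic above can be checked in a few lines; the 48 s and 2.8 s figures come from the quoted study, while the queue sizes are invented to contrast the two lanes:

```python
# Tender-time arithmetic: every extra customer adds a fixed ~48 s payment
# overhead, roughly equivalent to 17 items scanned at 2.8 s each.
TENDER_TIME = 48.0  # seconds of fixed overhead per customer
SCAN_TIME = 2.8     # seconds per item scanned

def wait(customers_ahead, items_per_customer):
    return customers_ahead * (TENDER_TIME + items_per_customer * SCAN_TIME)

express = wait(customers_ahead=5, items_per_customer=8)   # more people, fewer items
regular = wait(customers_ahead=2, items_per_customer=25)  # fewer people, more items
print(round(TENDER_TIME / SCAN_TIME, 1))  # 17.1 items' worth of overhead per customer
print(express > regular)                  # True: the regular lane wins here
```

The fixed per-customer overhead is what the "Express" label hides, which is the same trap as judging a product purely on its headline go-to-market speed.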

Extensive configurability, the ability to piggyback on existing IT investments, the knack of jazzing up upgrades to the piggybacked, the infrequent need for enhancements to the 'core', the elimination of vendor dependency, and a variety of interfaces all clearly point to these componentised product frameworks being the better approach - unless, of course, Chief John Anderton starts a 'Pre-Basel' team.

P.S. I love going to work on a Monday morning. Drive Safe.

January 10, 2011

The Time-Travel Hypothesis

What would you do if you had a time machine to travel back in the past?  Show Da Vinci an iPhone? Invest in the start-up Google? Hang out with yourself chatting about your life in the future? Accelerate singularity by teaching science to cave men? Well, the list is endless...

But if the BCBS had travelled back in time with Basel III, what would have become of the financial ecosystem that surrounds us? Would the crisis have been averted? An assessment of what was lacking in Basel II, and hence a wish-list for its successor, would explicate this.

As we know, reliance on credit ratings to determine the purportedly low Basel II capital through RWA led to the 'manufacture' of AAA-rated CDOs backed by lousy sub-prime mortgages, which fuelled the crisis. Basel III does deal with specific problem areas in risk weighting - increased risk weights for super-senior tranches of (re)securitisation products; the elimination of regulatory arbitrage between the banking and trading books, by treating securitisation exposures in the latter on par with the former; and strengthened requirements on OTC derivatives and repos, through capital for mark-to-market losses via the Credit Valuation Adjustment - and with the quantum and quality of capital, through higher tangible common equity plus the capital conservation and counter-cyclical buffers. But the larger issue pertains to the concept of risk weighting itself. This approach still urges banks to "find" apparently risk-free assets which can be leveraged much higher than their riskier counterparts - we may be witness to some whacky financial engineering yet again!
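To see why risk weighting invites the hunt for apparently risk-free assets, here is a stylised RWA calculation; the weights are only indicative of the standardised approach's flavour, not actual Basel schedules:

```python
# Stylised risk-weighted assets: the lower an asset's weight, the more of
# it a bank can hold per unit of capital. Weights are illustrative only.
exposures = [
    ("AAA sovereign bond", 100, 0.00),  # zero-weighted: 'free' leverage
    ("retail mortgage",    100, 0.35),
    ("BBB corporate loan", 100, 1.00),
]

rwa = sum(amount * weight for _, amount, weight in exposures)
required_capital = 0.08 * rwa  # 8% minimum capital against RWA
print(round(rwa, 2), round(required_capital, 2))  # 135.0 10.8
```

Note that the zero-weighted sovereign holding demands no capital at all, despite being a third of the book, which is exactly the incentive the paragraph above warns about.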

While the zero risk weight assumption for AAA and AA-rated sovereigns (which fed the sovereign debt crisis) has been acknowledged as faulty, it has been let be. Well, the governments which put Basel III together needed some incentive, didn't they? Cheap borrowing!

While the oligopoly of rating agencies and the Gaussian copula-powered symbiotic growth of CDSs and CDOs played their part in harmonised synchronicity, the use of internal rating models finished things off. The dumbed-down simplification of VaR garnered attention for expressing individual and firm-wide risk as a single figure for any asset class; its limitations, however, were forgotten. The assumption that a bank is in the best position to measure its own risk, coupled with VaR's "normal", no-extremities market, failed to pay off. Risk-based compensation in this case proved counter-productive, further encouraging managers to paint a low-risk picture.

The backstop non-risk-based measure, viz. the leverage ratio, is a step in the right direction, albeit set low. If the past is any indication: Lehman was levered 31-to-1, whereas the current Basel III rules peg the requirement at around 33-to-1. Ultimately, this treads a fine line - what cost in economic growth is a fair price for curbing risk?
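The 31-to-1 versus 33-to-1 comparison is just the reciprocal of a capital-to-exposure ratio; a quick sketch, assuming the 3% Basel III minimum:

```python
# Leverage both ways round: a leverage multiple (exposure/capital) and the
# Basel III leverage ratio (capital/exposure), with a 3% floor assumed.
def leverage_multiple(capital, exposure):
    return exposure / capital

def leverage_ratio(capital, exposure):
    return capital / exposure

print(round(leverage_multiple(3, 100), 1))  # 33.3x at the 3% floor
print(round(1 / 31, 3))  # 0.032: Lehman-style 31:1 implies ~3.2%, clearing a 3% floor
```

In other words, a bank levered the way Lehman was would still have satisfied the new minimum, which is why the bar reads as low.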

Lehman's folding was a result of liquidity problems from the unwinding of huge derivative positions. The 30-day stressed Liquidity Coverage Ratio, the encouragement of medium- to long-term funding through the Net Stable Funding Ratio, and the variety of monitoring tools do well here. However, there are arguments that the LCR's bias toward government bonds could hamper credit to small businesses - which is also interesting given that they are the ones without access to capital markets, who hence turn to banks for fundraising, where their 'unrated' status again tends to extend the 'halo effect'.
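A miniature of the LCR computation, for concreteness; the balances are invented, while the 75% cap on inflows follows the Basel III standard:

```python
# Liquidity Coverage Ratio in miniature: high-quality liquid assets (HQLA)
# must cover net cash outflows over a 30-day stress window.
def lcr(hqla, outflows_30d, inflows_30d):
    # Inflows may offset at most 75% of gross outflows under the standard.
    capped_inflows = min(inflows_30d, 0.75 * outflows_30d)
    return hqla / (outflows_30d - capped_inflows)

print(round(lcr(hqla=120, outflows_30d=200, inflows_30d=100), 2))  # 1.2
# A ratio >= 1.0 meets the requirement. HQLA is dominated by government
# bonds, hence the small-business crowding-out argument above.
```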

Let's assume BIS travels into the future with Basel III - would it have avoided a recurrence? There's no telling Black Swans, much like Miles Dyson did not know his neural-net processor would create Skynet and bring about Judgement Day - no one ever sees it happening; that's why it does! That really explains the two Basel accords prior, and perhaps the ones after...hopefully not!

All said, Basel III is one of the swiftest and smoothest regulations in modern history.

August 12, 2010

Products - A Business Perspective through Technology Coloured Glasses

As I was browsing through my news feeds for the day, I began to wonder how starkly a couple of my favourites paralleled the current state of business software products. On one hand, my personal favourite, engadget, abuzz with new developments, innovation and product launches in gadgets and allied areas (leaving me wondering how often something similar happens on the business software front); on the other, lifehacker, flooded as usual with tips on maximising productivity and making the most of various software (something most organisations have paid limited attention to, leaving vendors with little impetus to tread in that direction). Traditional business software products have only accentuated these miseries, ending up as nothing more than isolated blocks in the organisation's application portfolio.

How often have you crumpled under the social pressure (or a mad zest?!) of buying a hot new gadget, only to leave it lying unused? The state of risk management software hasn't been much different, with the coercive factor being 'compliance'.

All the recent financial happenings have only heightened the debate about the ability - or rather, agility - of organisations to respond to change. The real question is how to make the empowering technology agile. In the world of risk management in particular, the ability of the business to flex technology, and of technology to respond, makes all the difference.

Imagine your house as a non-separable whole where you can't add anything new (say, furniture) any more than you can remove anything existing. Well, that's today's products for you!

Do you think you can always get saas-y? (We'll put that in perspective in a later post)

ADM (application development and maintenance) remains painful in these cases - while I may be in the market for a new PC, I don't wish to witness how the platters in a hard disk are put together, or the die on the processor...

From a technology perspective, most business needs can, to a great extent, be addressed by a cogent organisation of a set of configurable components. For instance, in a business scenario pertinent to risk management, Basel business hierarchies, risk rating, issue remediation, LDAs and EVTs translate into the likes of simple tree builders, workflows, rules engines, analytics, reporting tools and so on. Leaving the configuration of every element in its own silo makes upgradeability and portability a cinch. Loosely couple these together with the business logic, standardise the data access layer (with, say, Hibernate) to make it database agnostic, factor in flexibility at the UI layer, and you have a componentised product framework in your hands. Want only select functionalities? No problem - just toss out the components you don't need and retain only the relevant configurations in the rest - flex that 'modular' muscle.
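A minimal sketch of that componentised idea - the component names (WorkflowEngine, RulesEngine, Platform) are hypothetical, not any real framework:

```python
# Features as pluggable components behind a common interface;
# "toss out" what you don't need without touching the rest.
class Component:
    name = "base"

    def handle(self, event):
        raise NotImplementedError

class WorkflowEngine(Component):
    name = "workflow"

    def handle(self, event):
        return f"routed: {event}"

class RulesEngine(Component):
    name = "rules"

    def handle(self, event):
        return f"evaluated: {event}"

class Platform:
    """Loose coupling: the platform only knows the Component interface."""

    def __init__(self, components):
        self.components = {c.name: c for c in components}

    def dispatch(self, name, event):
        return self.components[name].handle(event)

    def remove(self, name):
        # The "modular muscle": drop a component, the rest keeps working.
        self.components.pop(name, None)

platform = Platform([WorkflowEngine(), RulesEngine()])
platform.remove("rules")  # slimmed-down product, workflow still intact
```

Swap the toy classes for real engines and the shape stays the same: one interface, a registry, and removal as a first-class operation.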

'Shared Infrastructure' is an undeniable value proposition. Apart from saving tons of money in duplicate investments, it provides the much-needed business (process & system) integration that product silos can't. To illustrate: if your organisation happens to purchase or upgrade, say, the intelligence engine, you can squeeze out every penny by making it available to all applications and, where needed, by sharing the intelligence across the board. At least with intelligence, that's how it's really meant to be, isn't it? And what's more, your products remain as recent as their newest updated component.

Business software products have transcended being applications and become the 'platform'.

If there is a lingering thought about whether and how this would work, rest assured - at Infosys, these methodologies have been tried and tested.

The obvious question is: what does all of this have to do with risk and compliance? Well, to sum up my previous blogs: a brave new world; a changed paradigm; a new breed of approaches...and this is technology catching up.

Having planted the thought, I take a pause before examining its particular significance in the risk(y) business...

July 7, 2010

An Incentivized Anarchy

Are you conservative ...or a 'branded' pessimist? Are you ignored for being sceptical?

That's what you get, for being a wet blanket in bright times - Pretty much the status of risk managers in the times leading up to the crisis. You could argue that they were simply doing what they were paid for. Think again - So were the people who ushered them out.

Way before the August 2007 trading freeze, certain banks had started accumulating heavy mortgage loan losses and property prices had saturated. Did these go unnoticed?

I often catch myself arguing that the drawbacks of my latest impulse gadget purchase aren't significant - likewise, the business had strong "incentives" to paint the bad news as a temporary blip, with normalcy soon to be restored in the sub-prime arena.

During the thick of events, banks had become simple "production houses", working overtime to 'produce' loans to meet the demand for securitized instruments; and with a purely revenue-driven incentive structure, the lenders took the plunge.

In my last blog, I spoke about the organisational 'motivation', or rather the lack thereof, in ensuring an accurate capture of Oprisk events (owing to the same capital charge irrespective of risk bucket). This time around, I thought of looking at the individual 'motivation' elements, viz. compensation and incentives, and their role in fuelling the crisis.

Misaligned incentive structures are anything but new, and any sane individual exploits every such opportunity to limit his or her financial or career downside. Despite yielding beneficial results in the short term, this sabotages competitive edge and performance sustainability in the longer run, owing to the high degree of uncertainty it breeds.

The prime reason accountability cropped up is to ensure effective decentralisation. As pretty as that may sound, the current state clearly evinces the need for a mechanism to enforce accountability - which, in effect, is the incentive structure.

And decentralisation is one area where technology has proved counter-productive, but has the potential to do far more. If Basel can factor in the risk of assets, why not factor the risk in an income stream? This would allow risk tolerances to be set and tracked at an individual level while staying aligned with changes in risk appetite at the organisational level, instead of leaving them as mere values on paper, as they are today - technology can prevail as a differentiator and an integrator, all at once. Besides, as a metric, risk-adjusted income helps in evolving a better incentive structure, one that is consistent with, and enforces the realisation of, the business objectives. (At Infosys, these transcend mere concepts: technology has been harnessed to integrate the chain from operational risk strategy to execution.)
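One way to "factor the risk in an income stream" is a RAROC-style metric. A hedged sketch - the desk figures are invented and the formula is the textbook shape, not any firm's actual compensation rule:

```python
# Risk-adjusted return on capital: strip expected losses and costs from
# revenue, then scale by the economic capital the income stream consumes.
def raroc(revenue, expected_loss, costs, economic_capital):
    return (revenue - expected_loss - costs) / economic_capital

# Two hypothetical desks with identical raw revenue...
safe_desk = raroc(10.0, expected_loss=1.0, costs=4.0, economic_capital=25.0)
risky_desk = raroc(10.0, expected_loss=5.0, costs=4.0, economic_capital=50.0)
# ...and a tenfold gap in risk-adjusted return.
```

Compensate on the second number rather than the first, and the incentive to paint a low-risk picture starts working the other way.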

At the firm level, a concept of risk-adjusted profits can be introduced to indicate to stakeholders the extent of risk undertaken to achieve the said profits. This would eventually boil down to how the business compensates the employee and the inducement for the employee to undertake risks. There is a lot to debate here, which is better saved for later.

So, should you ask your boss for the pay hike you've always wanted?!

June 22, 2010

Blurring Risk boundaries

It's been a while since my last post - I was travelling to Europe, which in recent times could turn out to be quite nightmarish. In fact, apart from leaving the airlines drooling in losses running into billions of dollars, the volcanic ash posed a big operational risk for the project I was travelling for! Quite a loss continuum, as I was remarking in my previous blog, but this time not within one entity, but across several industries like tourism, air freight and...of course, insurance (I'll get to that in a bit). Businesses which rely on airways for distribution of products would have met with, in financial terms, an "un-modelled scenario". Interestingly, it bears quite a resemblance to the credit crisis, where CDOs and CDSs resulted in an un-factored systemic risk owing to their risk-cascading nature, which I spoke of in my initial post.

The event being a supervening impossibility, the insurance companies weren't obliged to pay claims to the airlines, but they did, in an attempt to reduce their reputational risk exposure. Now, the pertinent question is: where would these costs be booked? Clearly, the airlines couldn't sue the insurance companies, which rules out the Oprisk loss head, leaving it to something strategic, aimed at preserving reputation, client relationships and future revenue streams. Back in the Banking and Basel world, the problem at hand draws sharp parallels to the recent spark of discussions that the economic crisis was accentuated by operational risk sources. Keeping the Basel approach aside, the simple question is: "Where do we draw the line between various risks?" How do we separate a poor lending choice (operational) from a genuine default (credit)? How do we distinguish the voracity to make profits, or poor investment choices (operational), from sudden market fluctuations (market)? Before this can be answered, we need to understand who makes these decisions - more often than not, the person recording the loss is the one responsible for it. So much for decentralisation!

The whole purpose is to embrace the "once bitten, twice shy" phenomenon, particularly given that the same capital charge applies irrespective of classification into the credit, operational or market risk buckets. Furthermore, it also provides the business case for an operational improvement, which would otherwise be sunk, much like the loss.

It would only be fair to say that the risk culture, structure and processes of the organisation could either make the deal or break it - no greys, just black or white! I will be delving into key areas under each of these in the coming days. In the meanwhile, let me know what you think...

May 21, 2010

Op Risk: Containing the loss continuum

A few days back, my boss asked me.... (Argghh!!...Let me stop myself right there! Are too many people using this line? Hmm.... the bosses get the brightest of ideas, after all)..."Have you ever thought about how a ship or submarine's damage control mechanism works?" What resulted is one of our finest implementations yet.

Traditionally, in the OpRisk world, the loss from a risk was thought of as one isolated event (say, loss of revenue from system downtime). The reality, however, is much different - take rogue trading, which in many a case has resulted in a series of losses before being unearthed. The key to effective control is to monitor and react whilst ensuring there is always a fallback for the worst-case scenario. The biggest risk is being blithely unaware of a looming vulnerability. Let's take the popular instance of laptop theft - sure, data encryption could be prescribed and implemented, and physical security beefed up; but that does not rule out the possibility of the laptop going missing or the data remaining unsecured. In that case, even if access prevention, remote data purging and the like fail, the business could make arrangements to reduce or eliminate some costs, say legal ones (e.g. for breach of data confidentiality).

In this context, we created a concept, "Contained Damage", which is also an integral part of our risk management platform. Imagine an entire gamut of systems working like a neural network - the slightest signs of trouble sensed, impact points delineated (by process maps), damages estimated (using algos), a slew of preventive mechanisms kicked in (based on the criticality of impact areas and predicted losses) and the relevant people notified. (I'll save how this works for a later post.) Contrast this with a vessel - a series of 'flood gates' is activated on impact and damage is contained to a small portion between two gates. When one gate fails, the next provides resistance, still limiting the damage. Meanwhile, sirens blare and parallel evacuations ensue.
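The flood-gate cascade can be sketched as a toy model - the absorption fractions below are invented, and this is an illustration of the idea, not the actual platform:

```python
# Each "gate" is a control layer that absorbs part of the loss;
# whatever leaks through hits the next gate in the cascade.
def contain(loss, gates):
    """Run a loss through successive gates; return the residual loss."""
    for absorb_fraction in gates:
        loss *= (1 - absorb_fraction)
        if loss <= 0:
            break
    return loss

# A 1m loss event against three hypothetical gates (60%, 50%, 50% absorbed).
residual = contain(1_000_000, [0.6, 0.5, 0.5])
# Three failing-but-resisting gates still shrink the hit by 90%.
```

Even when no single gate holds, the cascade limits the damage - exactly the vessel analogy.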

In short, be aware of vulnerability, contain the damage, have a safe 'wrangle out' strategy.

Have you ever thought of it this way?

April 30, 2010

Comprehend(ed), manage(d)!

I welcome you to the Risk and Compliance blog! For starters, I intend this first post to introduce operational risk, and its share in the crisis, by simply defining it.

During the thick of the crisis, while my team and I were working with clientele to tidy up their risk management infrastructure, I could only wonder at how credit and liquidity risks had subsumed the others and gained prominence. I often compare this to RCI (Root Cause Identification) analysis - simply look beyond the obvious to find the true reason, much like the five whys. Penetrating this shroud reveals the true cause - operational risk - of which RCI itself is a key tenet!

The usage of complex instruments and strategies, which was behind much of the crisis, had two key impact elements: their risks weren't clearly analysed or captured (for instance, CDOs and CDSs were sliced into, and treated as, ordinary bonds with a set duration and interest rate), and their systemic impact was never clearly understood.

Be it slicing down a complex securitization, feeding in oversimplified data and overly optimistic assumptions, or building risk models on an unusually long-term trend, all of these made sure the alarms didn't sound early enough. The banks had their reason - they wanted to keep the limits imposed on the trading desk stable by maintaining a constant capital figure.

Before moving any further, let's quote the Basel definition of Oprisk once again - "...risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events".

In effect, the whole scheme of things points to the failure in managing the acts of employees (management included), botched internal control processes and system checks.

Also, new products carry more risk. Period. Hence, the models should have imposed a penalty on assets that are complex, difficult to understand or rarely traded, which wasn't to be.
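As an illustration only - the multipliers below are invented, not regulatory figures - such a penalty could look like this:

```python
# Scale a model's risk estimate up by add-ons for complexity and
# illiquidity, so a novel instrument can never look safer than a plain one.
def penalised_risk(base_risk, complex_instrument=False, rarely_traded=False):
    multiplier = 1.0
    if complex_instrument:
        multiplier += 0.5   # illustrative add-on
    if rarely_traded:
        multiplier += 0.5   # illustrative add-on
    return base_risk * multiplier

# A complex, thinly traded product carries double the modelled risk figure.
penalty_example = penalised_risk(100, complex_instrument=True, rarely_traded=True)
```

Crude, yes - but even a crude penalty would have sounded the alarms earlier than the "treat it as an ordinary bond" approach did.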

"Sound Practices for the Management and Supervision of Operational Risk", published in February 2003, clearly outlines a fundamental principle: "Banks should identify and assess the operational risk inherent in all material products, activities, processes and systems. Banks should also ensure that before new products, activities, processes and systems are introduced or undertaken, the operational risk inherent in them is subject to adequate assessment procedures" - voilà, these new financial products should have been evaluated for their inherent risks and subjected to proper assessment and monitoring.

Further, principles 1, 2 and 3 outline the responsibilities of the management and the board, while principles 8 and 9 chalk out the role of supervisory oversight - all of these work together to rule out weak links in the 'subjective' Oprisk chain. This turned out to be a failure as systemic as the risk of the new instruments themselves.

Operational risk needs to be viewed as a 'horizontal', pervasive across all transactions, and unless this attitude seeps through the organisational culture, such events are bound to continue...