Infrastructure services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead of the game? Read on to learn more on our Infra Matters blog.

January 27, 2014

Access management - Road to user experience nirvana?

(Posted by Praveen Vedula)

It's a bright Monday morning and today is the first day at your new job. You are excited as you are shown to your desk. After filling in all the mandatory forms, you try to get down to business... only to realize that you have to raise a multitude of requests just to get access to the necessary applications. Most of you have been there and done that already, and can understand what a harrowing experience it can be.

Now consider this: it is possible to reorient this entire process in a way that is user friendly and in accordance with IT requirements; all it requires is a careful analysis of the access product life cycle and how it overlaps with the service catalog from an ITSM point of view.

There is a thin line between role management and entitlement management. Role management deals with the administrative nature of roles, while entitlement management deals with the functional aspect of access, though both fall under the umbrella of identity and access management (IAM).

Control, accountability and transparency are the central tenets of identity and access management. So, how do we control or detect access violations? Most organizations depend on IT service management to provide a seamless process for ordering products through a service catalog. However, managing the user access lifecycle remains a challenge, given the sheer volume and structure of the authorizations involved. There are several products in the market, such as Axiomatics and Securent (acquired by Cisco), which manage authorizations. However, as Earl Perkins of Gartner Research points out in his blog, it will be a while before we have an end-to-end entitlement management product.

Having said that, there are three key issues which need to be addressed while managing access roles and entitlements:

  • How do we present access roles as orderable items in the service catalog?
  • How do we enforce the policies and rules for the access roles while ordering them?
  • How do we update the CMDB with relevant entitlement data to drive IT service management?

One of the most important aspects of a service catalog is the ease with which it can be accessed and browsed. The key challenge here is to transform an access product into an orderable item that can be accessed by users who have the requisite rights as determined by their roles. Given the flexibility of cloud-based ITSM tools, it is quite possible to manage the search parameters on the front end while a compliance check is run by authorization tools in the back end. The governing rules of the access products can be centrally defined and managed at the application layer, making it simpler to manage them in one place. To make life easy for business users, the orderable access items can also be grouped by job level, job description or any other parameter based on the organizational structure.
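The role-based grouping described above can be sketched as a simple filter over catalog entries. The catalog structure, item names and role names below are illustrative assumptions, not taken from any particular ITSM product:

```python
# Hypothetical sketch: presenting only the access products a user may order,
# based on the roles attached to each catalog entry.

CATALOG = [
    {"item": "CRM - Read Only", "roles": {"sales_analyst", "sales_manager"}},
    {"item": "CRM - Full Access", "roles": {"sales_manager"}},
    {"item": "HR Portal - Self Service", "roles": {"all_employees"}},
]

def orderable_items(user_roles):
    """Return catalog entries a user may order, based on role overlap."""
    user_roles = set(user_roles) | {"all_employees"}  # everyone gets the basics
    return [entry["item"] for entry in CATALOG if entry["roles"] & user_roles]

# A sales analyst sees the read-only CRM item plus the common self-service item;
# the full-access CRM item stays hidden.
print(orderable_items({"sales_analyst"}))
```

The compliance check that the text places in the back end would sit behind this filter; here it is reduced to a set intersection purely for illustration.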

So, going back to the first example: a new employee simply selects the required access products from the service catalog. Based on his or her role, the right levels of access can then be assigned to the user. In one shot, a pleasant user experience and adherence to IT policies can be achieved. This approach has been a success story at a large reinsurance firm in Europe, which was recognized at the European Identity & Cloud Awards 2013 for its access management project using cloud and authorization tools.

January 21, 2014

Hybrid ITSM: Evolution and Challenges

(Posted by Manu Singh)

When you compare an ITSM solution based on the public cloud with an on-premise solution, there is no way to determine which one is superior. Although public cloud-based ITSM solutions provide on-demand self-service, flexibility at a reduced cost is not the only factor that should be considered while choosing deployment options.

Customization has always been a major issue while deploying a cloud-based ITSM solution. While every organization has its own way of handling incident, problem, change and release management, it is the business needs that determine how the IT service management solution is deployed. Cloud-based ITSM solutions can be inflexible at times - a kind of one-size-fits-all proposition. Any change or customization must go through testing across the entire user base for all clients, which leads to unnecessary delay in deploying the required functionality. In some cases, a release may not be implemented at all if a few users do not approve of the change.

In other words, using a standard application across multiple users gives limited options for changes in configuration. Organizations may face a risk as requirements continue to change as dictated by a dynamic business environment. Limited options to change configuration settings may not be the best solution in such a scenario.

Another reason organizations are unlikely to stick with a cloud-only solution is that it gets expensive as the years go by. Analysts have also predicted that SaaS-based ITSM tools may not remain the preferred option, as the effort invested in implementing, integrating, operating and maintaining these tools would likely increase actual costs rather than reduce them.

But this does not mean that the cloud-based ITSM model is likely to vanish. It will still be a good bet for organizations that have limited IT skills on-site and are only looking to standardize their processes, without much customization or dynamic integration requirements.
It stands to reason that organizations would prefer to have both options - i.e. a cloud-based ITSM offering that can be rapidly deployed, and an on-premise one that supports ongoing customization and dynamic integration.

Hybrid ITSM combines the best of both worlds, i.e. public and on-premise/private clouds. It focuses on increasing scalability, dependability and efficiency by merging shared public resources with private dedicated resources.
However, implementing a hybrid model is not as easy as it seems, as it comes with its own set of challenges, some of which are listed below:

  • Management and visibility of resources that fall outside the scope of managed services
  • Ensuring the consistency of changes implemented between the on-premise and the cloud service provider
  • Supporting open tool access with consistency in the data / look and feel
  • Managing shared network between the on-premise data center and the cloud service provider
  • Seamless integration between on-premise and cloud infrastructure in order to share workload at peak times

Looking at the above challenges, it is clear that organizations need to do a thorough due diligence to identify:

  • Data and network security requirements (data encryption, firewalls etc.)
  • Tool usage and storage requirements (on-premise or cloud)
  • Robust change management in order to track and coordinate changes between the two environments
  • A fail-safe infrastructure set-up so that the availability of the tool is not hampered
  • Robust asset and configuration management to track the assets within business control and their dependencies on assets in the public cloud
  • A framework defining governance, statutory and support requirements

Ideally, organizations need to follow an approach that incorporates the aforesaid requirements early on during the discovery and design phase.
My next post will cover the implementation approach for Hybrid ITSM along with the mitigation strategies for some of the common challenges.

November 7, 2013

DevOps: Befriending the alien

We are living in interesting times where organizations are re-thinking their commitment to value. There is more focus on the question: "Are we doing this right and creating value?"


Globally, production cycle times are being reduced and time to market for any product is expected to be "quick". There is a concerted focus on automation, which has given birth to relatively new jargon such as DevOps, orchestration etc. Despite decades of process and policy re-engineering, we run the risk of missing out on success without automation systems to support us.

So what are we trying to bring into our lives?

DevOps is an interesting practice that has silently invaded the IT professional's sphere of influence - almost like the friendly neighborhood alien made famous by Steven Spielberg.

Let us take a minute to figure out what DevOps is and why it makes a difference to us here.

DevOps is associated with a collaborative set of practices that "influence" IT development, testing and operations/service management staff to join hands and deliver high-quality IT services and applications to end users "more frequently and consistently".

Based on my recent experiences with clients and prospects, I can say that nearly every global IT organization has heard about it and is interested in exploring this 'jargon' (I call it jargon since there is no official framework authoritative enough to set a global definition and prescription for implementing DevOps).


I have a rather radical observation and interpretation here. Innovations in technology, mainly in the cloud, mobility and social space, have taken a big lead over people-practice maturity levels. There are expectations now to roll out frequent application/service releases - in some cases, a daily release.

This has resulted in the need for more "people centric" development methodologies that sometimes require radical shifts in organizational philosophy. For example, how many of us have actually seen application development and IT operations members sitting together in the same room, working daily to roll out regular releases?


In the next couple of years, the debate about how much local collaboration is required to make DevOps a realistic goal is likely to continue. Again, technology has moved ahead of the game here, as can be seen among the DevOps tool vendors who aggressively claim to make this happen.

There are tools in the market that claim to automate the entire software delivery, provided the developer writes the code (often using reusable components), the tester has uploaded the test cases by the time the code is written, and the environment/infrastructure is readily available on demand. In a nutshell, you can check the code into a repository and go home; the rest is taken care of by the systems, which make the new release available to the end users.
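The hands-off flow described above can be illustrated with a toy pipeline that runs each stage in order and stops at the first failure. The stage names and checks below are assumptions for illustration, not any vendor's actual tooling:

```python
# Toy delivery pipeline: check in the code, and the stages run unattended.
# Each stage is a function returning True (pass) or False (fail).

def build(change):
    return "code" in change                      # something was checked in

def run_tests(change):
    return change.get("tests_uploaded", False)   # test cases must exist

def provision_environment(change):
    return change.get("env_on_demand", True)     # infra assumed on demand

def deploy(change):
    return True                                  # release to end users

PIPELINE = [build, run_tests, provision_environment, deploy]

def release(change):
    """Run every stage in order; a release succeeds only if all pass."""
    for stage in PIPELINE:
        if not stage(change):
            return f"failed at {stage.__name__}"
    return "released"

print(release({"code": "v2.1", "tests_uploaded": True}))  # released
```

Note how the pipeline enforces the discipline the text mentions: if the tester has not uploaded test cases alongside the code, the release stops at the testing stage rather than going out anyway.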

But reality is different. It will not be so easy to adopt this disciplined practice - it is like applying moral science lessons in a chemistry lab!

The success of this concept hinges on the level of understanding of all the stakeholders involved and befriending this concept - however alien it may sound. (In the end, even the scientist ended up liking ET!)

October 28, 2013

Traditional Data versus Machine Data: A closer look

(Posted on behalf of Pranit Prakash)

You have probably heard this one a lot - Google's search engine processes approximately 20 petabytes (1 PB=1000 TB) of data per day and Facebook scans 105 terabytes (1 TB=1000 GB) of data every 30 minutes.
Predictably, very little of this data can fit into the rows and columns of conventional databases, given its unstructured nature and volume. Data of this complexity is commonly referred to as Big Data.

The question then arises - how is this type of data different from system generated data? What happens when we compare system generated data - logs, syslogs and the like - with Big Data?

We all understand that conventional data warehouses are ones where data is stored in the form of table-based structures, and useful business insights can be derived from this data by employing a relational business intelligence (BI) tool. However, analysis of Big Data is not possible using conventional tools, owing to the sheer volume and complexity of the data sets.
Machine or system generated data refers to the data generated from IT operations and infrastructure components such as server logs, syslogs, APIs, applications, firewalls etc. This data also requires special analytics tools to provide smart insights related to infrastructure uptime, performance, threats and vulnerabilities, usage patterns etc.

So where does system data differ from Big Data or traditional data sets?
1. Format: Traditional data is stored in the form of rows and columns in a relational database, whereas system data is stored in the form of loosely structured or even unstructured text. Big Data remains highly unstructured and can even contain raw data that is generally not categorized, but is partitioned in order to be indexed and stored.
2. Indexing: In traditional data sets, each record is identified by a key, which is also used as the index. In machine data, each record has a unique time-stamp that is used for indexing, unlike Big Data, where there are no fixed criteria for indexing.
3. Query type: In traditional data analysis, there are pre-defined questions and searches conducted using a structured query language. In system or machine data, there is a wide variety of queries, mostly on the basis of source type, logs and time-stamps, while in Big Data there is no limit to the number of queries, which depend on how the data is configured.
4. Tools: Typical SQL and relational database tools are used to handle traditional data sets. For machine data, there are specialized log collection and analysis tools like Splunk, Sumo Logic and eMite, which install an agent/forwarder on the devices to collect data from IT applications and devices and then apply statistical algorithms to process this data. In Big Data, there are several categories of tools, ranging from storage and batch processing (such as Hadoop) to aggregation and access (such as NoSQL) to processing and analytics (such as MapReduce).
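The indexing difference in point 2 can be sketched in a few lines: machine data records are keyed by their time-stamp rather than by a relational primary key. The log format below is an assumed, simplified syslog-like line, not any specific product's format:

```python
# Minimal sketch: parse machine-data lines and index records by time-stamp.

from datetime import datetime

def parse_log_line(line):
    """Split an assumed '<timestamp> <source> <message>' log line."""
    ts, source, message = line.split(" ", 2)
    return {
        "timestamp": datetime.fromisoformat(ts),
        "source": source,
        "message": message,
    }

index = {}
for line in [
    "2013-10-28T09:15:02 web01 GET /home 200",
    "2013-10-28T09:15:07 web02 GET /login 401",
]:
    record = parse_log_line(line)
    index[record["timestamp"]] = record  # the time-stamp acts as the index key

print(sorted(index)[0])  # earliest event: 2013-10-28 09:15:02
```

Queries by time range then reduce to sorting or slicing on these keys, which is essentially what the log analysis tools named above do at scale.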

When a user logs in to a social networking site, details such as name, age and other attributes entered by the user get stored as structured data and constitute traditional data - i.e. stored in the form of neat tables. On the other hand, data that is generated automatically during a user transaction, such as the time stamp of a login, constitutes system or machine data. This data is amorphous and cannot be modified by end users.

While analysis of some of the obvious attributes - name, age etc. - gives an insight into consumer patterns, as evidenced by BI and Big Data analysis, system data can also yield information at the infrastructure level. For instance, server log data from internet sites is commonly analyzed by webmasters to identify peak browsing hours, heat maps and the like. The same can be done for an application server as well.


October 22, 2013

The Lean ITIL connection

(Posted on behalf of Manu Singh)

While trying to improve IT operations, the application of ITIL best practices alone does not necessarily guarantee effectiveness and efficiency in IT processes. ITIL, in fact, recognizes this, and for that reason the ITIL v3 framework defines a service lifecycle stage - Continual Service Improvement (CSI) - intended to measure and improve processes and services over time. However, the seven-step improvement process defined in CSI is perhaps too focused on improvements as opposed to reducing wasted effort.
A significant amount of effort is wasted while performing routine tasks and activities. So, any activity that does not focus on delivering value to the customer is a potential waste and needs to be removed, or at least reduced.

And this is where Lean comes in.

Lean principles were originally developed to improve quality and reduce costs in manufacturing. But, over time, Lean principles have been used in the services industry as well.  Lean thinking has now evolved towards improving quality, eliminating waste, reducing lead times for implementations and, ultimately, reducing costs.

So, how do Lean principles complement IT service management?

Let me give you an example: IT organizations around the globe follow the same practices, i.e. detailing client requirements, designing the solution and getting it approved. At the next stage, they build the solution, take it live and support it. In a way, all the ITSM processes are followed; however, the extent to which these processes are detailed will depend on many factors, such as the size of the organization, support requirements, geographic spread (to incorporate business rules for different countries) etc. Some of these processes may include wasteful effort that does not really add any value.

Lean helps in identifying 'fit for purpose' ITSM processes, i.e. identifying the right fit based on organizational requirements and removing those activities that are irrelevant to the business or create unnecessary overheads. In this way, the correlation of Lean and ITSM principles can be seen as a natural progression towards delivering value in IT services - while Lean focuses on waste reduction in alignment with client requirements, ITSM focuses on delivering services that meet client expectations.

The best approach to embarking on a Lean ITSM journey is to first identify what the business (internal and external stakeholders) perceives as value-adds and non-value-adds, and then define a "To-Be" value stream which will act as a baseline for the future improvement journey. This "To-Be" value stream would take inputs from the corporate business strategy along with current and future business requirements.

Another important aspect is to define the change management and roll-out strategy so that the new or improved processes make sense to the process stakeholders. For this, organizations need to focus on incremental process roll-outs, bundling them in a logical manner, and involve all stakeholders in solution design; resistance to change is reduced when everyone has the opportunity to contribute to the definition of the solution.

Over a period of time the incorporation of Lean principles in IT service management has evolved towards improving support efficiency, accelerated issue management and reducing costs through better allocation and utilization of support staff and budget funds. 
In the current market scenario, where IT spending is expected to slow significantly, it makes even more sense to apply Lean to gain cost advantages.

(Manu Singh is a Senior Consultant with the Service Transformation practice at Infosys. He has more than 8 years of experience in the industry and is focused on Service Transition, Program Management, IT service management, Process design and gap analysis.)

October 1, 2013

Transforming Enterprise IT through the cloud

Cloud technologies offer several benefits. With solutions that are quick to adopt, always accessible and extremely scalable, along with the capex-to-opex shift that CFOs like, clouds have come of age. In the coming years, IT organizations are going to increasingly see the cloud as a normal way to consume software, hardware and services. Clouds are transformational. Several companies are already enjoying a competitive business advantage over their competition by being early adopters of this transformational paradigm. Today, however, we hear similar questions on near-term and long-term adoption from IT leaders, which can be summarized as follows:

  • Where do I get started?
  • What are the quick wins?
  • What should we be doing in the next three years?

I have tried to address some of these common questions below. These can be thought of as basic transformational patterns and a strategy for cloud adoption within the enterprise.

  • Start with Software as a Service (SaaS) - Explore ready-to-use solutions in key areas such as CRM, HR and IT service management. SaaS solutions such as Salesforce, Workday and ServiceNow have been in existence for some time now and deploy proven engagement models, so these will be quick wins to consider. Pick functionality, an app and a suitably sized footprint, so as to have a project scope that can create a real organizational-level impact. From a time-to-market perspective, SaaS is arguably the quickest way to attain the benefits of the cloud.
  • Explore the private cloud - Take virtualization to the next level by identifying an impactful area within the organization to start a private cloud project. One example with an immediate benefit is enabling application development teams to automate requests for development and test environments. Getting this done through a service catalog front end connected to a private cloud back end can cut provisioning times by 50% or more, with the added benefit of freeing resources to focus on supporting production environments. There are different product architectures to consider, with pros and cons that are beyond the scope of this note - choose one that works for the organization and get going.
  • Explore data center and infrastructure consolidation - Many large Fortune 500 organizations today have to deal with equipment sprawl. Sprawl can manifest itself in technology rooms and closets, in co-located space, and even in entire data centers, and can involve the full stack of IT equipment, from servers, network switches and storage devices to desktops. Private clouds can be used as a vehicle to consolidate, reduce footprint and increase the overall levels of control and capacity of this infrastructure. Added benefits can include higher performance, lower energy costs and replacement of obsolete equipment.
  • Identify specific public cloud use cases - Depending on the business, some areas can benefit from adopting public clouds. For example, requiring a large amount of computing horsepower for a limited duration to do data analytics is a common requirement for organizations in the pharmaceutical, healthcare and financial industries. This is a challenging use case for traditional IT, as it is capital and resource inefficient. Public clouds are the perfect answer to these types of workloads: the business unit pays for what it uses and is not limited by the equipment available in-house.
  • Create a multi-year roadmap for expansion - These initiatives are only the beginning. IT leaders need to create a three-year roadmap that plans how these initiatives can be expanded to their fullest potential within the organization. Enabling a strong project management practice and a proven project team will go a long way to ensuring success during execution. Create a financial cost-benefit analysis for each of these areas and ensure a positive Net Present Value (NPV) case for each one. Identify what partners bring to the table that is proven, yet unique to the cloud solution at hand. Go with the base assumption that, in spite of replacements of some legacy tools and solutions, these cloud initiatives will more or less continue alongside current in-house IT and management practices.
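As one hedged illustration of the private cloud pattern in the second bullet above, a catalog request for a dev/test environment might flow through an automated approval and provisioning back end like this. All names, sizes and rules are hypothetical:

```python
# Illustrative sketch: a service catalog request for a dev/test environment,
# auto-approved and provisioned by a (hypothetical) private cloud back end.

def approve(request):
    """Auto-approve small environments; larger ones need a human."""
    return request["size"] in {"small", "medium"}

def provision(request):
    """Pretend to carve out an environment in the private cloud."""
    return {"env_id": f"dev-{request['team']}-01", "status": "ready"}

def handle_catalog_request(request):
    """Approve and provision a dev/test environment request end to end."""
    if not approve(request):
        return {"status": "needs manual approval"}
    return provision(request)

print(handle_catalog_request({"team": "payments", "size": "small"}))
# {'env_id': 'dev-payments-01', 'status': 'ready'}
```

The point of the sketch is the shape of the flow, not the rules themselves: the catalog front end collects a structured request, and automation replaces the manual ticket queue that makes provisioning slow today.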

In summary, it is important to have a pragmatic view. Cloud is not the silver bullet that will solve all IT problems, and neither will an organization automatically attain the promised benefits. No two organizations are alike, even those in the same industry; hence, understanding what to do and which steps to take first will put IT on a course to being 'in the cloud'.

September 26, 2013

7 steps to a smarter IT Front End


We often praise a particular front end as compared to another. The world of graphical user interfaces has moved from PCs to Macs to smartphones. But quite often the IT department ignores the 'front end' that the modern user expects from IT. Most are fixated on the service desk as that all-empowering front end. Even ITIL has prescriptive definitions. One can argue that this is not at all the case, especially from an end-user perspective.


We often hear complaints of IT being slow, ineffective or behind on support commitments. Though there may be some truth to this, much of it has to do with perceptions that have built up over time in users' minds. So what is that 'front end'? I would define it as a cohesive combination of resources, service desk response times, average speed of resolution, an automated service catalog and a comprehensive knowledge base.


So how does an organization build that smart IT front end? Here are 7 steps to get going:


1)     Handle all actionable Service Requests through a single service catalog - Basically, 100% of service requests should go centrally into one service catalog. Insist that a service should not exist if it does not exist in the service catalog! Obviously this requires a major change to sunset all kinds of tools and manual services, but the effort to consolidate on one clean interface is worth the time.

2)     Support the Service Catalog through an automated back end - All actionable service requests should flow through an automated back end, working their way through approvals, procurement, provisioning and fulfillment. Of course, automating all of this is the ideal and the holy grail! But make the move towards that goal and measure progress. Again, shoot for 100% of back-end processes and you will reach a high mark, e.g. new user accounts, requesting a development environment, licenses, adding application access etc.
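The back-end flow in step 2 can be pictured as a simple ordered state machine. The stage names mirror the text; the implementation is purely illustrative:

```python
# Toy state machine: every actionable request walks the same ordered stages.

STAGES = ["submitted", "approved", "procured", "provisioned", "fulfilled"]

def advance(request):
    """Move a request to the next stage, if any remain."""
    i = STAGES.index(request["stage"])
    if i < len(STAGES) - 1:
        request["stage"] = STAGES[i + 1]
    return request

req = {"item": "new user account", "stage": "submitted"}
while req["stage"] != "fulfilled":
    req = advance(req)

print(req["stage"])  # fulfilled
```

In a real ITSM tool each transition would trigger work (an approval task, a purchase order, a provisioning job); the value of modelling it this way is that progress toward "100% of back-end processes" becomes measurable per stage.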

3)      Enable Problem to Incident (P2I) conversions - Resolving a problem is not the end of the story. Confirming that Level 1 teams understand what to do if the incident rears up again is a must. Consistently enforcing this policy of P2I connection and conversion will work wonders over a defined duration, resulting in more incidents resolved faster and more efficiently at Level 1 itself.

4)      100% self service for user induced incidents - Set up a self-service gateway to manage all such common incidents. This will dramatically improve speed of response. Examples include account lock-outs, password changes and resets, information/document uploads, profile changes etc.

5)     Set up and maintain a corporate wiki - Information discovery and ease of information consumption should play a key role in the roadmap of the IT front end. Too often we see a lack of information on how-tos, problems with finding the right document, and obsolescence. An annual check on all key docs, along with the users' ability to edit and update docs, will foster a sense of shared ownership within the user community. Enable access through all devices, especially smartphones. Experts will bubble up to the top and become allies of IT.

6)     100% of software installs via end users - Through the self-service capability and service catalog automation, enable users to receive a temporary download link for software they are allowed to install. In the long run, diminish the need for this install capability through adoption of Software as a Service and/or internal web applications, e.g. Office 365, SharePoint Online and Lync.

7)     Periodic user engagement - IT often gets flak for not being there when it matters, or simply not being around. Enabling user feedback, technology awareness sessions and periodic formal internal training can go a long way in bringing IT closer to the business community.


The organization of tomorrow requires a smart technology front end. Transforming from now to then requires an investment of time, effort and resources. These steps can get you started. And there may be more. Do you have a take on additional steps? Then do write in.

September 24, 2013

Service Redundancy - 5 Lessons from the Nirvanix blowout


Last week, Nirvanix, one of the most prominent cloud storage providers, gave a shocker to the tech community when it announced that it is going out of business. Customers and partners were asked to stop replicating data to its storage infrastructure immediately and move out their data in about two weeks. I have been fascinated by this story; here are the facts.

-          Nirvanix pulled in about $70 Mn in venture funding during its lifetime, starting September 2007

-          Its key backers kept up five rounds of funding, right up to May 2012

-          It was rated well by industry analysts and media

-          The cloud storage service was sold through several enterprise software resellers and service providers

-          The service pulled in several key customers and was challenging the likes of Amazon's AWS S3 storage service


What is evident is that the company was burning cash faster than it was generating revenue, and it all came to an abrupt end when it could not find any buyers or execute an exit strategy. One would have thought that enough value (IP or otherwise) would have been generated for a potential buyer in six years of existence, but no more detail seems to be available. Nirvanix is by no means the first to go belly up - EMC pulled the plug on its Atmos Online service in 2010, though that was perceived as far smaller in impact.


From the enterprise standpoint, if an organization had been using Nirvanix's services, these two weeks are a time to scramble. Moving data out of the cloud is one tall order; finding a new home for the data is the second. And what if the data was being used in real time as the back end of some application? More trouble and pain. So here is my take on the top five areas clients and providers can address for service redundancy (leaning more on cloud storage services):


1)     Architect multiple paths into the cloud - Have a redundant storage path into the cloud, i.e. host data within two clouds at once. This depends on the app that is using the service, the geography and users, the primary/secondary configuration, communication links and costs. For example, a client could have an architecture where the primary cloud storage was on Nirvanix and the secondary on AWS. Throw in the established value from traditional in-house options and established disaster recovery providers.

2)     Be prepared to move data at short notice - Based on the bandwidth available from source to target and the size of the data in question, we can easily compute how much time it would take to move data out of a cloud. Add a factor of 50% efficiency (which could happen due to everyone trying to move data out at once) and frequent testing, and we have a realistic estimate of how long data migration will take. Given that the two weeks from Nirvanix is a new benchmark, clients may choose to use this as a measure of how much data to store in one cloud - i.e. if it takes more than two weeks to move important data, then consider paying for better communication links or including a new provider in the mix.
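The back-of-the-envelope calculation described above can be written down directly. The figures in the example (100 TB over a 1 Gbps link) are illustrative assumptions:

```python
# Estimate how long it takes to move data out of a cloud, applying the
# 50% efficiency factor on the nominal link bandwidth.

def migration_days(data_tb, link_gbps, efficiency=0.5):
    """Days needed to move data_tb terabytes over a link_gbps link."""
    data_bits = data_tb * 1e12 * 8             # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency
    return data_bits / effective_bps / 86400   # seconds -> days

# 100 TB over a 1 Gbps link at 50% efficiency:
print(round(migration_days(100, 1), 1))  # ~18.5 days - more than the
# two-week window Nirvanix customers were given
```

Run against the two-week benchmark, this says that at 1 Gbps an organization should cap any single cloud at well under 100 TB of critical data, or budget for fatter links.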

3)     Consume through a regular service provider - Utilize the service through a regular managed services provider. This has two benefits for clients: a) the ability to enter into an enterprise-type contract with the provider and ensure service levels; b) the ability to gain financially in case of a breach of service levels. Of course, service providers in turn have to be vigilant and ensure that they have proper flow-down provisions in their contracts with cloud providers, and that there is an alternative to the service in case of issues.

4)     Establish a periodic service review process - Too often we see the case of buy once and then forget. A regular review of the service will often provide early warning of issues to come. For example, in the case of a cloud storage provider, tracking how much storage it is adding (new storage growth %) and the new client labels signed on will give a good indication of any issues on the capacity front. This may in turn point to a lack of available investment for growth.

5)     Understand provider financials - Cloud providers today focus on ease of use and new gee-whiz features; this often masks how exactly they are doing financially. And the market continues to value them on revenue growth. It does not matter whether the company is public or private: as a client, and under confidentiality, there exists a right to understand the financial performance and product roadmap, even if only at a high level.


Cloud solutions offer multiple benefits, but at the end of the day they are still one business serving another as a service, and there are issues with all services - cloud-based or traditional. Was Nirvanix an outlier or a warning of things to come? We don't know that yet, but as service providers and clients, this may be the 'heads-up' we need to stay focused on the essentials for effective transformation of infrastructure and storage services.

September 19, 2013

Does your customer know what you do?

(Published on behalf of Praveen Vedula)


IT departments are constantly engaged in a battle to provide quality services to their customers. Communication about those services is equally important, given the high stakes involved. With the advent of SaaS-based tools like ServiceNow, automation has been the mantra for the management of various processes. Communicating service outages or application downtimes to IT and business stakeholders is one such area that can be improved using the automation engine. Every organization has different communication needs depending on its core business.

If IT is to be considered an enabler of business instead of a cost center, it needs to act as any service provider would - ensuring its service levels are articulated with no room for error. It goes without saying that any miscommunication or improper communication about mission-critical services could impact the business. There have been numerous instances of big outages that could have been averted through effective communication to the service desk.

Okay, so we know how important it is to communicate - how do we go about it?

The key focus of most ITSM tools has been the traditional ITIL processes - incident, problem, change and release management, underpinned by the CMDB - and, predictably, communication based solely on these modules leaves a lot to be desired.
SaaS-based ITSM tools have been game changers when it comes to integrating the communication required for ITSM processes. Thanks to flexible tools like ServiceNow, an integrated communication model can be envisioned that caters to specific customer requirements across ITSM processes. The flexibility to package these communications and showcase them on an IT portal dashboard has been a notable payoff of automation.

As a consultant, I witnessed the potency of this approach in a recent engagement. For a client, pre-defined communication templates were used to integrate the communication for incident, release and deployment management processes for both unplanned and planned outages. This made it very simple for the communication teams to enter the details in the incident or release record and propagate the communication at the click of a button. The templates could also send ad-hoc communications through the IT portal when the details of an outage were not recorded in an incident, or when the mail servers were down.
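The template idea is straightforward to picture. The sketch below uses Python's standard `string.Template` to render an outage notice; the field names and message format are hypothetical illustrations, not actual ServiceNow fields or the client's real templates:

```python
from string import Template

# Hypothetical outage-notification template; the placeholder names
# (priority, app, start, eta, ticket) are illustrative only.
OUTAGE_TEMPLATE = Template(
    "[$priority] $app is experiencing an unplanned outage.\n"
    "Start: $start | Estimated restore: $eta\n"
    "Reference: $ticket"
)

def render_notification(incident):
    """Fill the template from a dict of incident details."""
    return OUTAGE_TEMPLATE.substitute(incident)

msg = render_notification({
    "priority": "P1", "app": "Payroll Portal",
    "start": "09:14 UTC", "eta": "11:00 UTC", "ticket": "INC0012345",
})
```

Because the message is generated from record fields rather than typed by hand, the same data can feed an email, a portal banner, or an ad-hoc broadcast without re-entry.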

The engagement posed an interesting challenge, as application subscription data was not readily available. To drive the communication objectives, it was important to provide a platform for IT and business users to subscribe to applications for communication and to be kept informed about weekly maintenance and infrastructure updates. With this in mind, the result was a dynamic visual dashboard that summarized the various outages, planned or unplanned, while also enabling personalized email communication based on application subscription data. It was a great exercise in how the integration of ITSM processes for communication can be managed when subscription data is scattered across various tools and repositories. This reminds me of another intriguing topic for my next post: application access role data management and its implications for ITSM processes.

(Praveen Vedula has over 7 years of industry experience. He specializes in ITIL best practice oriented process designs and implementation)

September 16, 2013

Analytics in IT Operations - Defining the roadmap

(Published on behalf of Arun Kumar Barua)

As the speed of business accelerates, visibility into IT operations becomes far more critical. However, getting that information into a form you can use to drive faster, better-informed decisions is a major challenge. Visibility into operations is one thing; turning massive amounts of low-level, complex data into understandable and useful intelligence is another. The data must be cleansed, summarized, reconciled and contextualized in order to support informed decisions.

Now let's think about it: what if organizations were able to effortlessly integrate their data, both structured and unstructured, across the organization? What if it were easy and simple for the business to access it all? Think of a situation where the data acquisition process is predictable and consistent, and business insight is linked to a framework for quick decision-making and made available to all who require it.

In a previous post, we looked at the importance of the data that is generated daily through IT operations. Recognizing the importance of this data and analytics is essential, but putting in place the processes and tools needed to deliver relevant data and analytics to business decision-makers is a different matter.

Predictive analytics encompasses a variety of techniques from statistics and machine learning applied to data sets to predict outcomes. It is not about absolutes; rather, it is about likelihoods. For example: there is a 76% chance that the primary server will fail over to the secondary in XY days; there is a 63% chance that Mr. Smith will buy at a given price; there is an 89% chance that certain hardware will need to be replaced in XY days. Good stuff, but it can be difficult to understand and complex to implement.
It's worth it, though. Organizations that use predictive analytics can reduce risk, challenge competitors, and save a great deal of money along the way.

Predictive Analytics can be used in multiple ways, for example: 

  • Capacity Planning: Helping the organization determine hardware requirements proactively and forecast energy consumption
  • Root Cause Analysis: Detecting abnormal patterns in events, thus aiding the search for single points of failure and mitigating them against future occurrences
  • Monitoring: Enhanced monitoring of vital components that can sense impending system failures and prevent outages
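To make the capacity-planning use case concrete, here is a deliberately minimal trend forecast: fit a least-squares line to historical utilisation and extrapolate. The utilisation figures are invented, and a real capacity model would also account for seasonality and confidence intervals:

```python
def linear_forecast(history, periods_ahead):
    """Fit a least-squares line to a utilisation series and extrapolate.

    A simple sketch of trend-based capacity planning; not a substitute
    for a full forecasting model.
    """
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
        / sum((x - x_mean) ** 2 for x in xs)
    )
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

# Monthly storage utilisation (%): when will it cross a procurement threshold?
projected = linear_forecast([62, 65, 69, 72, 76], periods_ahead=3)
```

Even this naive projection answers the proactive question - "will we breach 85% within a quarter?" - months before a monitoring alert would fire.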

Selecting an apt tool will enable you to use reports and dashboards to monitor activities as they happen in real time, then drill down into events and perform root-cause analysis to understand why something happened. This post talks a bit more about the selection of such a tool.
By identifying patterns and correlations in the events being monitored, you can predict future activity. With this information, you can proactively send alerts based on thresholds and investigate what-if scenarios to compare alternatives.
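Threshold-based alerting, the simplest building block of the above, can be sketched in a few lines; the metric names and limits here are hypothetical examples:

```python
def check_thresholds(metrics, thresholds):
    """Return an alert message for each monitored metric breaching its threshold."""
    return [
        f"ALERT: {name} at {value} exceeds threshold {thresholds[name]}"
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    ]

# Hypothetical snapshot of monitored metrics vs. their alert thresholds.
alerts = check_thresholds(
    {"cpu_pct": 91, "disk_pct": 72, "error_rate": 0.04},
    {"cpu_pct": 85, "disk_pct": 90, "error_rate": 0.05},
)
# Only cpu_pct breaches its limit here.
```

The predictive layer then sits on top of exactly this mechanism: instead of alerting on the current value, alert when the forecast value is projected to cross the threshold.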

The shortest road to meaningful operational intelligence is to generate relevant business insights from the explosion of operational data. The idea is to move from reactive to proactive methods for analyzing structured and unstructured operational data in an integrated manner. Without these additional insights, IT management will likely continue to struggle in a downward spiral.

Now would be a good time to tap into the data analytics fever and turn it inward. 

 (Arun Barua is a Senior Consultant at Infosys with more than 9 years of diverse experience in IT Infrastructure, Service and IT Process Optimization. His focus areas include IT Service Management solution development & execution, Strategic Program Management, Enterprise Governance and ITIL Service Delivery.)