Infrastructure services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead of the game? Read on to learn more on our Infra Matters blog.


August 7, 2015

Driving down IT and Business risk from unsecured endpoints

In the Cloud and Bring Your Own Device (CBYOD) age, securing endpoints outside the corporate network is just as important as securing those inside it. This is where Secure-Ops comes into play, i.e., the combination of secure practices integrated within regular service operations.


In the past I have written about how to deal with privileged IT endpoints. Again, practicing sound IT Risk Management will lead one to look at compensating controls, which is what this post deals with.


Consistent processes drive effective controls. Change management is unique in that it is both a process and a control. The 10 questions for Enterprise change will open up key areas within IT Service Management for further analysis, and will complement evolving Trust Models for effective governance.


The 2015 Annual GRC Conference is a collaboration between the Institute of Internal Auditors (IIA) and the Information Systems Audit and Control Association (ISACA).


The conference is being held from Aug 17th to 19th in Phoenix, AZ and will be a great forum to learn more about emerging trends in IT compliance and controls.


I'm equally excited to have the opportunity to speak at my session on Aug 17th, 'Attesting IT Assets and key Configuration items as a pre-audit measure: The why and the how'.


More in my next post.

October 1, 2013

Transforming Enterprise IT through the cloud

Cloud technologies offer several benefits. From solutions that are quick to adopt, always accessible and available at extreme scale, to the capex-to-opex shift that the CFO likes, clouds have come of age. In the coming years, the IT organization is going to increasingly see the Cloud as a normal way to consume software, hardware and services. Clouds are transformational. Several companies today are already enjoying a competitive business advantage over their competition by being early adopters of this transformational paradigm. Today, however, we hear similar questions on near-term and long-term adoption from IT leaders, which can be summarized as follows- 

  • Where do I get started?
  • What are the quick wins?
  • What should we be doing in the next 3 years?

I have tried to address some of these common questions below. These can be thought of as basic transformational patterns and a strategy for cloud adoption within the enterprise.

  • Start with Software as a Service (SaaS)- explore ready-to-use solutions in key areas such as CRM, HR and IT Service Management. Some SaaS solutions such as Salesforce, Workday and ServiceNow have been in existence for some time now and deploy proven engagement models, so these will be quick wins to consider. Pick functionality, an app and a suitably sized footprint, so as to have a project scope that can create a real organizational-level impact. From a time-to-market perspective, SaaS is arguably the quickest way to attain the benefits of cloud.
  • Explore the private cloud- Take virtualization to the next level by identifying an impactful area within the organization to start a private cloud project. One example with an immediate benefit is enabling application development teams to automate requests for development and test environments. Getting this done through a service catalog front end connected to a private cloud back end can cut provisioning times by 50% or more, with the added benefit of freeing resources to focus on supporting production environments. There are different product architectures to consider, with pros and cons that are beyond the scope of this note- choose one that works for the organization and get going.
  • Explore Data Center and Infrastructure consolidation- Many large Fortune 500 organizations today have to deal with equipment sprawl. Sprawl can manifest itself anywhere from technology rooms and closets to co-located space to entire data centers, and can involve the full stack of IT equipment, from servers, network switches and storage devices to desktops. Private clouds can be used as a vehicle to consolidate, reduce footprint and increase overall levels of control and capacity of this infrastructure. Added benefits can include higher performance, lower energy costs and replacement of obsolete equipment.
  • Identify specific public cloud use cases- Depending on the business, some areas can benefit from adopting public clouds. For example, requiring a large amount of computing horsepower for a limited duration to do data analytics is a common need for organizations in the pharmaceutical, healthcare, and financial industries. This is a challenging use case for traditional IT as it is capital- and resource-inefficient. Public clouds are the perfect answer to these types of workloads. The business unit can pay for what it uses and is not limited by the equipment available in-house.
  • Create a multi-year roadmap for expansion - These initiatives are only the beginning. IT leaders need to create a 3-year roadmap that plans how these initiatives can be expanded to their fullest potential within the organization. Enabling a strong project management practice and a proven project team will go a long way to ensuring success during execution. Create a financial cost-benefit analysis for each of these areas and ensure a positive Net Present Value (NPV) case for each one. Identify what partners bring to the table that is proven, yet unique to the cloud solution at hand. Go with the base assumption that, in spite of replacing some legacy tools and solutions, these cloud initiatives will continue alongside current in-house IT and management practices.
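The NPV check mentioned above can be sketched in a few lines. The cash-flow figures and the 10% discount rate below are illustrative assumptions, not real project data.

```python
# Net Present Value check for a cloud initiative (illustrative figures only).
def npv(rate, cashflows):
    """NPV of yearly cashflows; cashflows[0] is the upfront (year 0) amount."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical 3-year SaaS migration: $500k upfront, then rising yearly savings.
flows = [-500_000, 220_000, 260_000, 300_000]
value = npv(0.10, flows)   # 10% discount rate (assumed hurdle rate)
print(f"NPV: ${value:,.0f}")
if value > 0:
    print("Positive NPV: the initiative clears the hurdle rate")
```

A positive NPV at the organization's hurdle rate is the go/no-go signal; running the same function per initiative makes the comparison across the roadmap consistent.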

In summary, it is important to have a pragmatic view. Cloud is not a silver bullet that will solve all IT problems, and neither will an organization automatically attain the promised benefits. No two organizations are alike, even those in the same industry, and hence understanding what to do and which steps to take first will put IT on a course to being 'in the cloud'.

September 26, 2013

7 steps to a smarter IT Front End


We often praise a particular front end as compared to another. The world of graphical user interfaces has evolved from PCs to Macs to smartphones. But quite often the IT department ignores the 'front end' that the modern user expects from IT. Most are fixated on the Service Desk as that all-empowering front end. Even ITIL has prescriptive definitions. One can argue that this is not at all the case, especially from an end user perspective.


We often hear complaints of IT being slow, ineffective or behind on support commitments. Though there may be some truth to this, much of it has to do with perceptions that have built up over time in users' minds. So what is that 'front end'? I would define it as a cohesive combination of resources, Service Desk response times, average speed of resolution, an automated Service Catalog and a comprehensive knowledge base.


So how does an organization build up that smart IT front end? Here are 7 steps to get going-


1)     Handle all actionable Service Requests through a single service catalog- Basically, 100% of Service Requests should go centrally into one service catalog. Insist that the service should not exist if it does not exist on the Service Catalog! Obviously this requires a major change to sunset all kinds of tools and manual services, but consolidating on one clean interface is worth the time and effort.

2)     Support the Service Catalog through an automated back end - All actionable Service Requests should flow through an automated back end, working their way through approvals, procurement, provisioning and fulfillment. Of course, automating all of this is the holy grail! But make the move towards that goal and measure progress. Again, shoot for 100% of backend processes; you will reach a high mark. Examples: new user accounts, requesting a development environment, licenses, adding application access etc.
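The approvals-to-fulfillment flow above can be sketched as a simple stage pipeline. The stage names and the request item below are hypothetical placeholders, not taken from any specific ITSM product.

```python
# Sketch of an automated service-request back end; the stage names below are
# hypothetical, not a real catalog API.
STAGES = ["submitted", "approved", "procured", "provisioned", "fulfilled"]

class ServiceRequest:
    def __init__(self, item):
        self.item = item
        self.stage = "submitted"

    def advance(self):
        """Move the request to the next back-end stage, stopping at the last."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        return self.stage

req = ServiceRequest("development environment")
while req.stage != "fulfilled":
    req.advance()
print(req.item, "->", req.stage)
```

In a real deployment each `advance()` would be triggered by an approval event or a provisioning callback rather than a loop, but the point stands: every request moves through the same measurable stages, which is what makes "100% of backend processes" a trackable goal.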

3)      Enable Problem to Incident (P2I) conversions- Resolving a problem is not the end of the story. Confirming that Level 1 teams understand what to do if the incident rears up again is a must. Consistently enforcing this policy of P2I connection and conversions will work wonders over a defined duration, resulting in more incidents resolved faster and more efficiently at Level 1 itself.

4)      100% self service for user induced incidents- Set up a self service gateway to manage all such common incidents. This will dramatically improve speed of response. Examples include account lockouts, password changes and resets, information/document uploads, profile changes etc.

5)     Setup and maintain a corporate Wiki- Information discovery and ease of information consumption should play a key role in the roadmap of the IT front end. Too often we see a lack of information on how-tos, problems with finding the right document, and obsolescence. An annual check on all key docs, along with the user's ability to edit and update docs, will foster a sense of shared ownership within the user community. Enable access through all devices, especially smartphones. Experts will bubble up to the top and become allies of IT.

6)     100% of software installs via end users- Through the self-service capability and service catalog automation, enable users to receive a temporary download link to software that they are allowed to install. In the long run, diminish the need for this install capability through adoption of Software as a Service and/or internal web applications, e.g., Office 365, SharePoint Online and Lync.

7)     Periodic user engagement- IT often gets flak for not being there when it matters or simply not being around. Enabling user feedback, technology awareness sessions and formal internal training periodically can go to a great extent in bringing IT closer to the business community.


The organization of tomorrow requires a smart technology front end. Transforming from now to then requires investment of time, effort and resources. These steps can get you started, and there may be more. Do you have a take on additional steps? Then do write in.

September 16, 2013

Analytics in IT Operations - Defining the roadmap

(Published on behalf of Arun Kumar Barua)

As the speed of business accelerates, it is all the more critical to have visibility into IT operations. However, getting that information in a form that you can use to drive faster and more informed decisions is a major challenge. Visibility into operations is one thing, but turning massive amounts of low-level, complex data into understandable and useful intelligence is another. It must be cleansed, summarized, reconciled and contextualized in order to influence informed decisions.

Now let's think about it: what if organizations were able to effortlessly integrate their data, both structured and unstructured, across the organization? What if it were easy and simple for businesses to access it all? Think of a situation where this data acquisition process is predictable and consistent, and business insight is linked to a framework for quick decision-making and made available to all who require it.

In a previous post, we looked at the importance of the data that is generated on a daily basis through IT operations. Recognizing the importance of this data and analytics is essential, but putting in place the processes and tools needed to deliver relevant data and analytics to business decision-makers is a different matter.

Predictive analytics is not about absolutes; rather, it is about likelihoods. It encompasses a variety of techniques, from statistics to machine learning algorithms applied to data sets, to predict outcomes. For example: there is a 76% chance that the primary server may fail over to the secondary in XY days; there is a 63% chance that Mr. Smith will buy at a probable price; there is an 89% chance that certain hardware will be replaced in XY days. Good stuff, but it's difficult to understand and even more complex to implement.
It's worth it, though. Organizations that use predictive analytics can reduce risk, challenge competitors, and save tons of money along the way.

Predictive Analytics can be used in multiple ways, for example: 

  • Capacity Planning: Helping the organization determine hardware requirements proactively and forecast energy consumption
  • Root Cause Analysis: Detecting abnormal patterns in events, thus aiding the search for single points of failure and mitigating them against future occurrences
  • Monitoring: Enhanced monitoring of vital components, which can sense system failures and prevent outages
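As a toy illustration of the capacity-planning use case, a simple least-squares trend can project when utilization will cross a threshold. The utilization series and the 80% threshold below are made-up numbers, and real capacity models would of course use richer techniques than a straight line.

```python
# Least-squares trend on monthly CPU utilization (%) to project capacity needs.
# The utilization series below is synthetic, for illustration only.
def linear_fit(ys):
    """Fit y = a + b*x for x = 0..n-1 and return (a, b)."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

usage = [52, 55, 57, 61, 63, 66, 70, 72]   # last 8 months, average CPU %
a, b = linear_fit(usage)
month_80 = (80 - a) / b    # month index at which the trend crosses 80% load
print(f"Trend: +{b:.2f} points/month; ~80% load around month {month_80:.0f}")
```

Even this naive projection turns raw monitoring numbers into a proactive question ("do we order hardware before month N?") rather than a reactive one.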

The selection of an apt tool will enable you to use reports and dashboards to monitor activities as they happen in real time, then drill into events and perform root-cause analysis to understand why something happened. This post talks a bit more about the selection of such a tool.
By identifying various patterns and correlations with events that are being monitored, you can predict future activity. With this information, you can proactively send alerts based on thresholds and investigate through what-if analysis to compare various scenarios.

The shortest road to meaningful operational intelligence comes from generating relevant business insights from the explosion of operational data. The idea is to move from reactive to proactive methods to analyze structured and unstructured operational data in an integrated manner. Without additional insights, it is likely that IT management will continue to struggle in a downward spiral.

Now would be a good time to tap into the data analytics fever and turn it inward. 

 (Arun Barua is a Senior Consultant at Infosys with more than 9 years of diverse experience in IT Infrastructure, Service and IT Process Optimization. His focus areas include IT Service Management solution development & execution, Strategic Program Management, Enterprise Governance and ITIL Service Delivery.)

September 3, 2013

Testing the test environment - Infrastructure testing for non-production environments

(Published on behalf of Divya Teja Bodhanapati)

In our previous post, we looked at the perils of ignoring the non-production environments and focusing only on production environments.
In order to achieve a reduction in the total cost of operations, an optimized, robust, reliable and "fit-for-purpose" non-production environment is essential.

The question is when can an environment be called "fit-for-purpose" or "reliable"?

The answer is "when all the components (infrastructure, middleware and the applications) involved in the environment perform as per the user defined requirements."

When we look at the largest component, i.e. infrastructure, three elements stand out - storage, network and computing components. Testing applications is a well-established function, but how do we ensure the underlying infrastructure is also working as required?

Not many organizations have given serious consideration to testing their infrastructure before putting it to use. Over the past years, it has been observed that outages and downtimes in environments are primarily due to infrastructure issues.
The Quorum Disaster Recovery Report, Q1 2013 says that "55% of the failures are at hardware level (network and storage components)". This is not surprising.

In July 2012, micro-blogging site Twitter had to post this message on their blog after a short downtime, blaming an "infrastructure double whammy". This outage affected millions of Tweeters across the globe, a day before the Olympics were to begin in London.

But what is the impact of downtime in non-production environments?

Any system downtime will end up shortening the development and testing cycle, as these processes will have to wait till the environment is up and running again. Due to insufficient time, the development and testing stages may not be conducted properly, leading to a vicious cycle of possible defects in the final product. This would ultimately result in outages in the production environment as well - the consequences of such an outage can be even more devastating to the business, as seen in the Twitter case above.

Infrastructure testing essentially involves testing, verifying and validating that all components of the underlying infrastructure are operating as per the stipulated configurations. It also tests whether the environment is stable under heavy loads and different combinations of integrations and configurations.
Infrastructure testing includes all stages of software testing - unit testing, system and integration testing, user acceptance testing and performance testing - applied to the infrastructure layer. By bringing the rigor of testing into the infrastructure space, infrastructure testing eliminates inconsistencies in infrastructure configurations that are the main cause of outages and downtimes in non-production environments.

A thorough testing process is crucial to the success of any rollout - by testing the underlying infrastructure of the non-production environment, the probability of defects due to incomplete testing can be greatly reduced.


(Divya Teja is an Associate Consultant at Infosys with close to 4 years of experience in the IT industry. Her focus areas include Non-Production Environment Management and Infrastructure Automation.)


July 30, 2013

Picking the right tools for IT operations analytics

(Published on behalf of Pranit Prakash)

In my earlier post, I discussed leveraging IT operations analytics for intelligent IT operations. In this post, we will discuss some differentiators that can help your organization select the right tools for adopting IT operations analytics.
A Gartner prediction shows how critical this is: "Through 2015, more than 90% of business leaders contend information is a strategic asset, yet fewer than 10% will be able to quantify its economic value."
With organizations keen on mining knowledge from data generated by IT operations, the market has a wide assortment of vendors in the IT operations analytics space. Many of the tools on offer are narrowly focused, most addressing just one of the many facets of IT operations. A few such tools address event correlation leading to optimized application performance, or configuration analytics to identify discrepancies in standard parameters. Fewer still have the capability to sift through textual log data to find patterns that can enable proactive monitoring.

Factors for Selection
With a multitude of niche tools and exciting innovations, the IT operations analytics market offers organizations wide choice to harness a wealth of information for intelligent IT operations. The differentiator lies in selecting the right framework and strategy to implement IT operations analytics. Some critical success factors to identify the right tool and partner in line with enterprise requirements include:
• Application use-case
Enterprise IT may need operations intelligence to cover multiple areas and use-cases. However, to start with, the organization must identify the applicable business scenarios to help choose the right tool and technology. For instance, business cases may vary from limited drill-down capabilities into operational issues and lack of visibility into customer experience to revenue erosion from data-loss.
It is rare to find a single tool that offers effective solutions in all these areas. However, several of the available analytics tools have proven case studies in handling the individual challenges of IT operations.

• Data handling and statistical capabilities
All analytics tools offer algorithms to integrate and analyze data from available sources. However, a simple logger tool may not be of much use without the capability to churn and correlate massive amounts of data to deliver insights relevant to decision making. The slice-and-dice capabilities of most of these tools help organizations mine their way to the right results. Organizations need to evaluate the unique point of differentiation for these tools by mapping tool features to their current challenges and business cases in the longer term.

• Visual capabilities
An IT ops analytics tool must be smart enough to present information in a consumable format. Basic dashboards and standard graph capabilities are integral to such tools, but what really adds value is the drill-down and customization capability of the tool. Many such tools provide the capability to define role-based access. This allows organizations to define key success factors that can be monitored on the go through alerts and threshold settings integrated with mobile devices.

Getting IT Right
In the evolving and innovative landscape of IT analytics, it is important for organizations to focus on the right strategy: one that can identify the strengths and weaknesses of the tools available in the market and their underlying technologies, and accurately assess the vendor's long-term execution and support capabilities.

July 22, 2013

The four knights of service excellence

The saga of service excellence continues!

In my last post, I mentioned that there are four 'key ingredients' that act as the pillars for setting up the service excellence office, which I refer to as the SEO. The SEO is the "central owner" that drives sustained and predictable IT operations. A central owner may be a typical Service Manager or Process Manager within the IT operational universe; the difference is the elevated set of responsibilities entrusted to the SEO, unlike other roles.
There are many examples of the roles that the SEO enacts. These are explained in detail in our paper titled "Creating a DNA of Service Excellence across IT".
Let me highlight here the key ingredients for a successful SEO setup - the four major cornerstones that drive sustained and predictable optimized IT operations in organizations:

 Our first ingredient is "A comprehensive Structure" - this structure needs to be established with certain responsibilities. Defining the SEO's role and responsibility by formalizing a structure where all stakeholders are clearly identified is a must. The SEO can be involved in any phase of service management, i.e., from the Service Strategy phase to the Service Operations phase. For instance, the SEO can act as a "Solution Architect" whenever an innovative idea is to be generated, designed and implemented. This role lays a path for the technical gurus to think with a customer-centric approach. The SEO becomes the architect by differentiating between what clients "want" and what clients "need". For instance, a client may want a shorter 'Time to Resolve (TTR)' as a positive impact for end users. But the actual need is to improve end user satisfaction. This need could be accomplished by addressing the pain points of its end users, and a reduced TTR may be just one step in achieving this. The SEO is responsible for developing solutions that resolve client issues completely. So the SEO can wear any hat and can be accountable for transformation, stabilization of operations or driving continual improvement. The important part is deciding which hat the SEO must wear.

The next ingredient is the use of "Innovative Toolsets" - In my earlier example of the SEO being a Solution Architect, I spoke about understanding the difference between client "wants" and "needs". To differentiate the two, we need to do the required due-diligence. It is through this due-diligence that we get to freeze on an innovative solution that would help address a specific problem.

The third ingredient involves "Meaningful Measurements" - A typical problem in IT operations occurs when IT is not able to link results with the relevant business metrics. The result is a complete miss in communications with the business and the value delivered by IT is undermined.

"Articulation of Value" is the fourth ingredient - As I explained above, we need to articulate to the stakeholder in a language they understand. The success or failure of an IT program is judged on the value it delivers to the business. Without a clear articulation, this value is unclear. The SEO ensures that the metrics are clearly defined and measured, thus leading to a precise articulation of the impact. By emphasizing on the value delivered, the SEO also helps set internal benchmarks and the roadmap for continuous improvement.

What do you think? Let me hear those thoughts! 

July 11, 2013

Of Service Level Agreements and Non-Production environments

(Posted on behalf of Pradeep Kumar Lingamallu)

In one of my earlier engagements, I was asked to review the service level agreement (SLA) of a service provider for a financial organization. Despite the said organization having a large non-production environment, the existing SLA did not have any service levels or priorities listed for this environment. A long round of negotiations ensured that due importance was given to the non-production environment in the SLA.

This is what happens most of the time - when a service level agreement (SLA) is drafted, all the focus is on production support, resolution of production issues and availability of the production environment. Neither the non-production environment nor support for this environment is documented properly in the SLA. This may be largely due to a lack of recognition of the importance of the non-production environment, especially in the case of new application roll-outs and training.

A typical non-production environment consists of a development environment, a test environment and a training environment. Each one plays a crucial role during new deployments and rollouts of applications. However, incidents in non-production environments are generally not given the priority they deserve. Just as any incident in the development/testing environment will have a critical impact on the release calendar, any incident in the training environment will have a severe impact on the scheduled training sessions. Application deployments are affected by both the release and the training of personnel. Delays in either one of the environments are bound to have an impact on the release schedule.

I have seen this happen in a number of engagements that I have been part of. In one incident, the test servers were down and testing activities were delayed. As a result the entire release calendar was affected - in this case, it was a major release, so you can imagine the impact on business.

In another case, a downtime in the training environment again resulted in a delay in the release since the personnel could not complete their training on schedule. This may appear to be a small incident from a provider perspective, but for the organization, it is a significant delay.

Any downtime in the non-production environment is bound to affect production as well - but this fact is generally ignored to the buyer's peril. By specifying SLAs for support of non-production environments, organizations have an additional safeguard against any unplanned downtime that could affect the quality of service.


(Pradeep Kumar Lingamallu is a Senior Consultant at Infosys. He has over 18 years of experience in the field of IT service management including certifications in ITIL, CISA, CISM and PMP)


July 8, 2013

Service Excellence as a way of life

"Consciously or unconsciously, every one of us does render some service or another. If we cultivate the habit of doing this service deliberately, our desire for service will steadily grow stronger, and will make not only our own happiness, but that of the world at large." - Mahatma Gandhi

Mahatma Gandhi highlighted that excellence is an accumulation of righteous 'habits' which, if inculcated, will drive greater growth. This is applicable in IT as well.
It is possible to accumulate the right set of habits to drive growth within enterprises. This can be done by setting up dynamic mechanisms that identify, embed and replicate such habits efficiently across individuals and teams.

But who should be made accountable for organizations to focus on Service Excellence? Can this person or entity bring in flexibility in operations to cater to the changing business environments? Can such flexibility be managed and governed? Can innovation be embedded on this path to achieving excellence?
We believe that setting up a Service Excellence Office (SEO), comprising a dedicated pool of process consultants, helps bring in the rigor, focus and accountability needed to achieve service excellence. The SEO plays a dual role of internal and external consultant in the organization:
1 - As an internal consultant, the SEO identifies initiatives or practices that ensure that the project (or program) goals and commitments made to the client are achieved
2 - As an external consultant, the SEO ensures that solutions deployed are customer centric, i.e., addressing customer pain points

The SEO focuses on identifying levers for improvement and enablers for change that tie back to business value, so that the progress and effort spent to drive benefits can be measured at every step. The emphasis is on overcoming challenges around demonstrating measurable value to both sets of customers - internal and external.

We've identified the four key ingredients which are pillars in establishing an SEO within the IT organization. These will be explained in the next blog post. 

July 4, 2013

IT Operations Analytics - Tapping a goldmine of data within IT

(Posted on behalf of Pranit Prakash)

Enterprise IT faces a continuous challenge to maximize availability of systems while optimizing the cost of operations. The need to maintain uptime while managing complex tiers of infrastructure forces organizations to spend a considerable amount of time and money identifying the root cause of failures leading to downtime.

According to Gartner's top 10 critical technology predictions for 2013, "for every 25% increase in functionality of a system, there is a 100% increase in complexity."

On the other hand, IT operations generate massive amount of data every second of every day. As of 2012, about 2.5 exabytes of data was created every single day and this number is said to double every 40 months.
This data remains mostly unstructured, in the form of log files, but can contain a wealth of information not only for infrastructure and operations but also in areas related to end-user experience, vendor performance and business capacity planning.

As organizations face increasing complexity in their IT operations, they are unable to decipher any patterns that could actually help them improve current operations. This is because existing toolsets are largely confined to silos and are unable to analyze and consolidate details - For example, there are toolsets for IT service management, application performance management etc. that do not talk to each other. The result is a mass of unstructured data that if analyzed in detail, could provide valuable actionable information.

And this is where IT Operations Analytics comes in.

IT Operations Analytics enables enterprises to collect, index, parse and harness data from any source, such as system logs, network traffic, monitoring tools and custom applications. These data points are churned through a set of smart algorithms to produce meaningful insights for IT operations, leading to real-time trending, proactive monitoring that ensures higher uptime, and correlation across data sources.
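The collect-parse-correlate idea can be sketched in miniature with syslog-style lines. The log lines, field layout and alert threshold below are illustrative assumptions; real deployments feed millions of such lines through dedicated platforms.

```python
# Minimal sketch of operations-log analytics: parse raw log lines, count error
# events per host, and flag hosts breaching a threshold. The log lines and the
# threshold are illustrative assumptions.
import re
from collections import Counter

LOG_LINES = [
    "2013-07-04T10:01:12 host-a app ERROR disk latency high",
    "2013-07-04T10:01:30 host-b app INFO heartbeat ok",
    "2013-07-04T10:02:02 host-a app ERROR disk latency high",
    "2013-07-04T10:02:41 host-a db  ERROR write timeout",
    "2013-07-04T10:03:05 host-b app INFO heartbeat ok",
]

# Named groups give the unstructured line a queryable structure.
PATTERN = re.compile(
    r"^(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<src>\S+)\s+(?P<level>\w+)\s+(?P<msg>.*)$"
)

errors = Counter()
for line in LOG_LINES:
    m = PATTERN.match(line)
    if m and m.group("level") == "ERROR":
        errors[m.group("host")] += 1

THRESHOLD = 2
for host, count in errors.items():
    if count >= THRESHOLD:
        print(f"ALERT: {host} logged {count} errors")  # proactive alerting
```

Correlating counts per host (rather than reading lines one by one) is the step that turns raw log volume into an actionable signal.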

But IT Operations Analytics is not the same as traditional BI, which deals with structured data only. The real capability of this exciting field lies in two factors -

a. Comprehensiveness to include any amount of data, in any format, from anywhere, and
b. Capability to correlate the data to provide a centralized view at the finest level of detail
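The correlation capability in (b) can be illustrated with a minimal sketch: pairing application errors with monitoring alerts from a second, siloed source by timestamp proximity. The log lines, formats and hosts below are purely illustrative, not from any specific tool.

```python
from datetime import datetime, timedelta

# Hypothetical log lines from two siloed tools:
# an application server and an infrastructure monitoring agent.
app_logs = [
    "2013-09-30 10:01:05 ERROR checkout timeout",
    "2013-09-30 10:01:07 INFO login ok",
]
monitor_logs = [
    "2013-09-30 10:01:04 ALERT cpu=97% host=web01",
]

def parse(line):
    """Split a log line into its timestamp and message."""
    ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
    return ts, line[20:]

# Correlate: pair each application error with any monitoring alert
# raised within a 5-second window around it.
window = timedelta(seconds=5)
correlated = []
for ts_a, msg_a in map(parse, app_logs):
    if "ERROR" not in msg_a:
        continue
    for ts_m, msg_m in map(parse, monitor_logs):
        if abs(ts_a - ts_m) <= window:
            correlated.append((msg_a, msg_m))

print(correlated)
```

Here the checkout error lines up with a CPU spike on web01 - the kind of cross-source insight that siloed toolsets cannot surface on their own.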

The possibilities of using analytics to gather actionable insights from configuration and transactional data are endless. In my next set of posts I will explore key use cases for operational analytics and how organizations can adopt this evolving technology trend to their advantage.

(Pranit is a Lead Consultant with Infosys with rich experience in IT infrastructure and consulting services. He is a certified CISA, a Certified DC Professional (CDCP) and a qualified BSI ISO 27001 Lead Auditor.)

June 29, 2013

Outcome Sourcing - Buying results rather than services

In his interview with DNA India magazine, Chandrashekhar Kakal, SVP and Head of the Business IT Services unit at Infosys, talks about the rise of outcome-based sourcing as opposed to the traditional mode of 'outsourcing'. Traditional IT outsourcing has focused on cutting input costs, with the buyer seeking to offload repetitive activities to a third party. However, organizations today are already looking at the next big thing.

As enterprises continue to spend the majority of their budgets on running and maintaining IT infrastructure, they need to look for new ways to innovate and transform the business in ways that lead to growth.

Outcome sourcing allows organizations to buy results that matter to business growth rather than just focus on reducing the input costs. So, they look for "strategic partners" who can be entrusted with the end to end management of business applications and processes, as opposed to suppliers or vendors. As the emphasis moves towards delivering results that matter to the client, outcome sourcing can foster innovation and a closer alignment of a vendor's incentives with business requirements.

Click here to read the complete article.

May 15, 2013

Automating automation

(Published on behalf of Shalini Chandrasekharan)


In their ever-present struggle against time and resource constraints, automation is one of the most potent weapons being harnessed by CIOs. Small wonder, then, to see organizations such as Morgan Stanley plan to spend as much as $250 million on automation software in order to improve overall customer experience.

However, simply investing in automation software may not be enough. While most organizations have basic automation strategies in place for common repetitive tasks, especially alert monitoring, ticket handling and troubleshooting, an expert-system-based tool could help automate the whole process of automation itself, leading to even greater efficiencies. Staff engaged in routine activities can then be reassigned to higher-value work.

Against this backdrop, Chandrashekhar Kakal, SVP and head of the Business IT Services unit at Infosys, discusses the implications of autonomics for IT operations in his blog piece on InfyTalk.

Click here to read the full post.

March 26, 2013

Windows 8 - Read the fine print

(Posted on behalf of Atul Kumar)

Much has been said about the adoption of Windows 8 by enterprises. There is no doubt that Windows 8 is a brilliant platform that comes with a multitude of stunning features. However, every time a new operating system comes to market, the first questions asked are "what about system requirements?" and "do I need to get rid of my PC?"

While most operating systems do not require a major hardware overhaul, Windows 8 is packed with features that are likely to necessitate changes. And, if you are migrating from an older version - Windows XP, the entire hardware stack may need to be upgraded.

Windows XP, Windows Vista and Windows 7 can all be upgraded to Windows 8, but there is a limit on how much hardware can be carried forward. Essentially, enterprises on Windows 7 can move to Windows 8 any time, but it is recommended that the hardware have higher specifications in order to run additional programs effectively and provide a better user experience.

This article from Microsoft explains the system requirements for Windows 8 in detail. Windows 8 is the first Windows operating system designed to support mobile computing in an enterprise environment. This is significant for organizations dealing with the increasing infiltration of mobile computing and employee-owned devices into the workplace, as it will enable them to standardize their operating environment. One of the key issues inhibiting the adoption of enterprise mobility has been the lack of a standard operating environment that organizations can control, and the launch of Windows 8 offers a way to integrate traditional desktop and mobile computing in the enterprise.

However, there are several inconspicuous requirements and recommendations that one should be aware of. For instance, there are major changes in the licensing norms for Windows 8 on virtual machines, as per Microsoft's Product Use Rights (PUR) document. These will impact the licensing costs associated with Windows 8.

In the paper titled 'Is your IT Infrastructure ready for Windows 8?', we explore six important considerations for evaluating the readiness of the IT infrastructure for the move to Windows 8. The article can be accessed here.

March 21, 2013

How green is your data center?

(Posted on behalf of Niyati Kamthan)


Did you know that 90% of the data in the world has been created in just the last two years? Walmart's warehouse alone handles 2.5 petabytes of information, roughly equal to half the letters delivered by the US Postal Service in 2010! As our world gets digitized, we are creating a stupendous amount of data. Whenever we need information on anything under the sun, it appears on our screens within seconds, at a single click.
And all the information we use today sits somewhere in giant facilities called data centers. There are tens of thousands of data centers using computing power day and night to make information available to us at lightning speed. However, these data centers are increasingly becoming more of a curse than a boon.


Continue reading "How green is your data center?" »

March 13, 2013

Achieving and sustaining Service Excellence

One of the most difficult challenges is identifying where the onus for driving Service Excellence lies. Should it be driven top-down or bottom-up? In an IT service scenario, most service delivery governance or management teams are busy 'fixing' operational defects and managing people. Driving Service Excellence is largely reactive - it gets triggered only when something goes wrong.

The challenge increases manifold if the service provider embarks on the journey towards Service Excellence in a multi-vendor environment. The main challenge here is aligning the path to Service Excellence with the client's organizational priorities and objectives. IT delivery teams that work in individual silos in the hope of achieving such objectives end up impairing the process itself. One has to be vigilant in identifying 'improvement opportunities' and carry out the necessary due diligence to understand their impact. If the impact is widespread and positive for the client's business operations, such opportunities should be quickly exploited.

For a large program where Infosys embarked on this journey, the focus was on bringing transparency into service performance, and we emphasized the value delivered by explicitly linking the efforts of IT delivery teams to business outcomes.

Our next set of posts will reveal more on where to focus and how to overcome challenges. Watch this space for more!

March 5, 2013

Service Excellence... What's that?

I have been contemplating writing something on the Service Excellence Office (SEO) for a while now. And finally, here it is!

At Infosys, we recently completed one of our most successful SEO implementations for one of our largest clients. There was tremendous learning along this two-year journey. From this experience, and from the queries that followed from Infosys programs across industry verticals, I realized that the most logical way to decode SEO was to start by defining what we mean by "Service Excellence" in the first place!

Service excellence is about embedding predictability and standardization in the way we deliver services to our clients (be it one-time services or continual improvements). It is all about delighting our clients by giving them a feel of the "true value" that these services deliver.

Imagine your favorite mechanic or handyman - why trust him and not the others? It is all in the predictability and quality of the service he delivers! Service excellence is about setting and exceeding our own internal benchmarks - persistently competing with ourselves to create a compelling service experience. In an IT setup, it is about enabling mechanisms to share best practices across various IT services and reaping the benefits together. It is about creating healthy competition between different IT services working towards a common goal.

Service excellence is neither science nor art; it is a mix of both. It is scientific in that it brings in predictability through mechanisms that demonstrate measurable business value. The artistic aspect comes in with the innovation, creativity and passion of the teams.

In an IT organization's daily routine there are multiple opportunities to increase client satisfaction, reduce TCO and demonstrate value from IT. It is here that the SEO works closely with the operations team to increase client satisfaction by driving process efficiencies, improving response and resolution statistics and remediating client pain points.

Further, the SEO supports TCO reduction by helping achieve cost savings through rationalization of teams, optimization of processes and reduction of overheads for better resource utilization. Finally, the SEO is most useful in demonstrating IT value to business; this is where the rubber meets the road. The SEO does this by identifying key improvement levers that positively impact efficiency and effectiveness, building agile measurement frameworks and communicating the benefits achieved to the business in a timely and impactful manner.

In the next post we will gradually unravel the different types of challenges that come in the way of achieving Service Excellence and the kind of organizational focus required to meet them. Stay tuned!

February 14, 2013

Opportunities for data center outsourcing - Gathering momentum

(Published on behalf of Shalini Chandrasekharan)

According to the Symantec State of the Data Center survey, 2012, complexity in data centers is 'pervasive', leading to a host of problems including longer lead times, increasing costs and reduced agility in responding to business requirements.

Small wonder, then, that organizations are looking for options that bring higher scalability and agility.

Data center outsourcing is one such option. In his interview with Steve Wexler of the IT-TNA magazine, Chandrashekhar Kakal, senior VP and global head of Business IT Services at Infosys, talks about how data center outsourcing can enable organizations to attain higher levels of scalability and agility as demands on IT outpace budgets and resources. The reality of today's situation is that organizations must increasingly look to data center consolidation and outsourcing as critical levers to achieve agility, as well as manage costs and service levels.

The article also quotes inputs from Gartner and IDC on the proliferation of data center outsourcing models including Cloud and Infrastructure-as-a-Service or IaaS offerings.

The full article can be accessed here.




February 4, 2013

Automation as the key to environment management

(Posted on behalf of Ruchira Anekar)


For years, software environments have played a key role in the software development lifecycle. Now, with the demand for agile software delivery and increased testing, environments need to be built and tested frequently in order to support rapid delivery timelines and maintain quality.

However, the applications and infrastructure landscape has become much more complex. This has increased the challenges of managing software environments, including planning, provisioning, configuration, deployment and testing.

According to Gartner, 40% of estimated effort during the software development lifecycle is wasted on resolving environment-related issues. CIOs will therefore look to reduce the cost of managing these complex environments and keeping them up to date so that the business gets the support it requires.

Automation can play a major role in enabling organizations to cope with these challenges. Automation relevant to environment management includes a combination of pre-configured tools and scripts supporting the end-to-end environment management lifecycle.
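An end-to-end lifecycle of this kind can be sketched as a pipeline of chained stages. This is a minimal illustration only - the stage names, environment fields and checks below are hypothetical, not from any specific environment-management product.

```python
# A toy environment-management pipeline: plan -> provision -> configure -> test.
# Chaining the stages automates the whole lifecycle, not just individual tasks.

def plan(env):
    env["status"] = "planned"
    return env

def provision(env):
    # Stand-in for creating VMs; a real stage would call an infrastructure API.
    env["vms"] = [f"{env['name']}-vm{i}" for i in range(env["size"])]
    env["status"] = "provisioned"
    return env

def configure(env):
    env["config"] = {"app_version": env["app_version"]}
    env["status"] = "configured"
    return env

def smoke_test(env):
    # A real pipeline would run health checks; here we just verify the shape.
    assert env["vms"] and env["config"]
    env["status"] = "ready"
    return env

pipeline = [plan, provision, configure, smoke_test]

env = {"name": "qa2", "size": 2, "app_version": "1.4"}
for step in pipeline:
    env = step(env)

print(env["status"], env["vms"])
```

The point of the sketch is the chaining: once every stage is scripted, adding or rebuilding an environment becomes a single repeatable run rather than a series of manual hand-offs.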


Continue reading "Automation as the key to environment management" »

January 23, 2013

Windows 8 for a mobile world

(Posted on behalf of Shalini Chandrasekharan)

Microsoft launched its latest OS version, simply named Windows 8, in 2012. Its predecessor, Windows 7, was launched in 2009 and has since stabilized quite well, with market adoption edging close to 50%. According to estimates from market watcher NetApplications, Windows 8 seems to have garnered close to 1% of the market.

Much has already been said about this revamped version - its unique 'Metro' style UI, revamped BitLocker, Windows To Go and Dynamic Access Control, to name a few features. However, depending on whom you read, Windows 8 is either the best thing ever to hit desktop computing or expected to fizzle out as a damp squib.

The reason for this has to do with the nature of the changes in Windows 8 as opposed to its predecessor. Windows 7 was a clear step up from Windows XP, with well-defined upgrades that also required a hardware refresh in most cases. The migration from Windows XP to Windows 7 is already underway in most organizations, as it is an extensive process involving application migration and remediation.

In contrast, Windows 8 retains the core underpinnings of Windows 7, with software license compliance and policy management in particular remaining the same. The major difference is that Windows 8 looks beyond the PC to other form factors such as tablets and smartphones. Its touch-based support enables organizations to support employee-owned devices and BYOD programs.

While the invasion of consumer IT into business has definitely begun with the advent of smart devices, the uptake of allowing employees to bring their own devices has been slow for various reasons, security being one of them. With Windows 8, Microsoft enables organizations to choose from multiple device form factors to suit users with different requirements. But this could lead to a different problem, since most of the installed base comprises traditional laptops and desktops that cannot fully leverage the touch-enabled 'Metro' UI. In this sense, Gartner may be right - 'Windows 8 is a big gamble for Microsoft'.

However, most experts believe it may be too early to comment on the success or failure of the launch, since it usually takes organizations about 10-18 months to pilot, analyze and decide on migration. The deadline of April 2014, when Microsoft will formally end extended support for Windows XP, is another consideration for organizations. Windows 7 may be an easier fit than Windows 8; however, in the longer run, Windows 8 may prove the better choice, especially if a new version of Windows is expected.



January 10, 2013

Storage Virtualization - the road ahead...

(Posted on behalf of Vaibhav Jain)


A decade ago the management of disk storage was simple and uncomplicated; if we needed more space, we replaced a low-capacity disk drive with a high-capacity one. But as organizations grew, data grew, so we started adding multiple disk drives. Then finding and managing multiple disk drives became harder, so we developed RAID (Redundant Array of Independent Disks), NAS (Network Attached Storage) and SAN (Storage Area Network). The next step in storage technology is storage virtualization, which adds a new layer of software and hardware between storage and servers.

In a heterogeneous storage infrastructure, with different types of storage devices, there is the added complexity of managing and utilizing these storage resources effectively. As storage networking technology matures, larger and more complex implementations are becoming more common. Specialized technologies are required to provide an adaptable infrastructure with a reduced cost of management.

Storage virtualization resolves this problem by pooling physical storage from multiple network storage devices into a single virtual storage device that can be managed from a central console. This helps in performing regular tasks of backup, archiving and recovery in a quicker and more efficient way, which in turn provides enhanced productivity and better utilization and management of the storage infrastructure. Various options are available in the market for storage virtualization, using hardware appliances, software or hybrid approaches.
Well-known hardware appliances for storage virtualization include EMC VPLEX, the Hitachi Data Systems USP V, USP VM and VSP systems, NetApp V-Series appliances and IBM Storwize V7000. On the software side, we have the likes of HP StoreVirtual VSA software and EMC Invista.

I have seen environments where organizations have decided to move away from multi-vendor storage systems to a single-vendor system in the desire to reduce complexity. However, there may be no need to discard a multi-vendor storage setup, for it is possible to achieve the same efficiency, scalability and flexibility with a heterogeneous storage environment.
What you need to do is choose a storage system that supports storage virtualization technology, make it the front-end storage system and connect the rest of the existing multi-vendor storage systems at the back end. With this, the same LUNs (Logical Unit Numbers) are maintained through the new virtualized storage system, and you can scale up capacity without any need to migrate data from the old storage to the new system.
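The front-end arrangement above can be illustrated with a toy model: a virtualizer presents stable LUN ids to hosts while placing them on whichever back-end array has capacity. The array names, capacities and placement logic are illustrative only, not a vendor's actual implementation.

```python
# Toy model of storage virtualization: the front-end system keeps LUN ids
# stable while mapping them onto heterogeneous back-end arrays.

class Virtualizer:
    def __init__(self):
        self.backends = {}   # array name -> free capacity in GB
        self.lun_map = {}    # LUN id -> (array, size_gb)

    def add_backend(self, name, capacity_gb):
        self.backends[name] = capacity_gb

    def create_lun(self, lun_id, size_gb):
        # Place the LUN on the first back-end array with room;
        # the host only ever sees the stable LUN id.
        for array, free in self.backends.items():
            if free >= size_gb:
                self.backends[array] = free - size_gb
                self.lun_map[lun_id] = (array, size_gb)
                return array
        raise RuntimeError("no capacity")

v = Virtualizer()
v.add_backend("vendorA-array", 100)
v.create_lun("LUN-0", 80)

# Scaling up is just adding another back-end array, from any vendor;
# existing LUN ids persist and no data migration is modeled.
v.add_backend("vendorB-array", 500)
placed = v.create_lun("LUN-1", 200)
print(placed, sorted(v.lun_map))
```

The second LUN lands on the newly attached array while the first LUN's id and placement are untouched - which is the essence of scaling capacity behind a virtualized front end without migrating data.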

This way, you have utilized your old infrastructure with no downtime while leveraging the benefits of a virtualized storage system: reduced downtime, enhanced scalability, flexibility in managing resources and an overall reduction in the complexity of the IT environment.

January 8, 2013

Data center transformation - The next wave?

According to this report from Gartner, organizations are increasingly looking for ways to transform their data centers. In the report, titled "Competitive Landscape: Data Center Transformation Services", Gartner reveals that the number of searches on the Gartner website for the topic has increased by about 358%.

The report also suggests that organizations are looking beyond a simple hardware refresh to re-engineering operating models and processes in order to increase functionality while optimizing costs. In this sense, there is greater demand for consulting-led solutions that organizations can streamline to their requirements.

At Infosys, our value proposition involves helping organizations transform their data centers with a consulting-led approach. In addition, our intellectual property and solution accelerators help us provide a great framework for enterprises to gain the best return from their IT Infrastructure.

In this report, Infosys is one of several vendors profiled. The report can be accessed here.

November 29, 2012

CIO Mandate: Optimize

Posted on behalf of Shalini Chandrasekharan

In his second post on CIO mandates, SVP and head of Business IT Services, Chandra Shekhar Kakal continues to emphasize the imperatives for CIOs.

Optimizing current IT operations is a perennial challenge for CIOs. With IT operations consuming 60-70% of the average IT spend, it is no surprise that focus on IT infrastructure, and on ways to gain additional leverage from investments already made, is gaining momentum.

Continue reading "CIO Mandate: Optimize" »

November 27, 2012

Transformation at the heart of IT

Posted on behalf of Shalini Chandrasekharan

In a fragile global economy, as enterprises find their value chains increasingly pressured, CIOs - alongside business - are taking on the transformation mandate and helping strengthen the value chain.

Continue reading "Transformation at the heart of IT" »

November 15, 2012

Infrastructure and Big Data analytics

Posted on behalf of Debashish Mohanty, MFG

The internet has spawned an explosion in data growth in the form of data sets, called Big Data, that are so large they are difficult to store, manage and analyze using traditional database and storage architectures. Not only is this new data heavily unstructured, it is voluminous, streaming rapidly and difficult to harness.

Continue reading "Infrastructure and Big Data analytics" »

November 2, 2012

Chasing the G2C Dreams on Cloud

With the emergence of mobile and cloud-based technologies, governments in emerging countries can now provide government-to-citizen (G2C) services in a cost-effective manner.

Drawing on the power of cloud-based services, governments can improve governance and drive up the quality of the services they provide to the public. The convergence of growing internet and mobile penetration with the emergence of the cloud as a pervasive computing platform gives governments an opportunity to use these disruptions as a medium for achieving their social and development goals.

Recent announcements by Amazon on creating a compute platform that enables federal agencies in the US to move workloads to the cloud are an example of public/private partnership in enabling a safe, low-cost and agile platform for government computing. All this will ultimately improve governance.

However, emerging nations lag behind developed nations in the adoption of such cloud-based government-to-citizen (G2C) services. In this feature, I discuss the challenges to G2C adoption in emerging economies and a roadmap for the evolution of G2C services in such economies.

Continue reading "Chasing the G2C Dreams on Cloud" »

October 30, 2012

IT service management in the times of social media

Published on behalf of Shalini Chandrasekharan


Social media has made a big splash - and IT Service Management is no exception.

Continue reading "IT service management in the times of social media" »

July 25, 2012

The Infrastructure Testing Inception Story ...

Today is an exciting day for us - Infosys has formally launched its Infrastructure Testing offering. This post is the story of how this offering came into being.

Continue reading "The Infrastructure Testing Inception Story ..." »

March 21, 2012

Is System z an unexploited platform for Cloud solutions..?


Having been in the mainframe space for many years and having worked on one of the most scalable and secure computing environments, I found myself wondering - is System z missing the Cloud buzz?
The current platforms from IBM, such as the zBX and zEnterprise servers, can support heterogeneous workloads spanning mainframe, Unix, Java and more, can scale up or down seamlessly, and can create new images without system downtime. All this with the unparalleled security of EAL5-level certification!
Given all of this, would System z be a competitive platform for a cloud environment? What do you think?

September 17, 2008

IT Hype Fatigue and the Economic Downturn

I don't know about you, but I have worked in the IT industry for a while and I still see a lot of jazz and hype around what IT can do for companies. Here are some typical statements:

“IT can transform your business”
“IT working with you as a strategic business partner”
“Achieving competitive advantage in business with IT”

Innovation is another term we all love to use. Innovation, to me, should be more than just good-quality creative solutions based on industry practice to suit the client environment; it should be transformative. Why not read more and join my POLL...

Continue reading "IT Hype Fatigue and the Economic Downturn" »

June 18, 2008

The Matrix of Corporate Meetings

Posted by Anurag Bahal, Senior Consultant, Infosys

Purpose – Accomplish more from corporate meetings

The Matrix is my favorite movie. It depicts advanced technology with an underlying philosophical message. Technology and philosophy - what a cinematic combination! Morpheus, Neo's spiritual leader and guide, tells us in The Matrix: "Fate, it seems, is not without a sense of irony". I have seen that organizational meetings have a fate, and the irony is in the participants' behavior.
Being a consultant, I organize meetings and am invited to quite a few business and technology meetings. In many instances I feel that the meetings could have been organized better and would then have achieved better results. One theme I have found particularly challenging is the chameleon factor.

Continue reading "The Matrix of Corporate Meetings" »