Infrastructure Services are undergoing a major transformation. How do you navigate the web of emerging technology trends and stay ahead of the game? Read on to learn more on our Infra Matters blog.


March 26, 2013

Windows 8 - Read the fine print

(Posted on behalf of Atul Kumar)

Much has been said about the adoption of Windows 8 by enterprises. There is no doubt that Windows 8 is a brilliant platform that comes with a multitude of stunning features. However, every time a new operating system comes to market, the first questions are "what about system requirements?" and "do I need to get rid of my PC?"

While most operating systems do not require a major hardware overhaul, Windows 8 is packed with features that are likely to necessitate changes. And if you are migrating from an older version such as Windows XP, the entire hardware stack may need to be upgraded.

Windows XP, Windows Vista and Windows 7 can all be upgraded to Windows 8, but there is a limit on how much hardware can be carried forward. Essentially, enterprises on Windows 7 can move to Windows 8 at any time, but hardware with higher specifications is recommended to run additional programs effectively and provide a better user experience.

This article from Microsoft explains the system requirements for Windows 8 in detail. Windows 8 is the first operating system with the ability to support mobile computing in an enterprise environment. This is a significant achievement for organizations dealing with the increasing infiltration of mobile computing and employee-owned devices in the workplace, as it will enable them to standardize their operating environment. One of the key issues inhibiting the adoption of enterprise mobility has been the lack of a standard operating environment that would let organizations control the environment, and the launch of Windows 8 offers a way to integrate traditional desktop and mobile computing in the enterprise.
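For teams taking stock of their estate, the published minimums (a 1 GHz processor, 1 GB of RAM for 32-bit or 2 GB for 64-bit, 16 GB or 20 GB of free disk, and DirectX 9 graphics with a WDDM driver) can be turned into a simple readiness check. The sketch below is illustrative only - the `Machine` record and its field names are assumptions, not a real inventory API:

```python
# Hedged sketch: compare an inventoried machine against the published
# Windows 8 minimums. The Machine record is hypothetical.
from dataclasses import dataclass

WIN8_MINIMUMS = {
    "32": {"cpu_ghz": 1.0, "ram_gb": 1, "disk_gb": 16},
    "64": {"cpu_ghz": 1.0, "ram_gb": 2, "disk_gb": 20},
}

@dataclass
class Machine:
    name: str
    arch: str          # "32" or "64"
    cpu_ghz: float
    ram_gb: float
    free_disk_gb: float
    directx9_wddm: bool

def win8_ready(m: Machine) -> list:
    """Return a list of shortfalls; an empty list means the machine qualifies."""
    req = WIN8_MINIMUMS[m.arch]
    gaps = []
    if m.cpu_ghz < req["cpu_ghz"]:
        gaps.append("CPU below 1 GHz")
    if m.ram_gb < req["ram_gb"]:
        gaps.append(f"RAM below {req['ram_gb']} GB")
    if m.free_disk_gb < req["disk_gb"]:
        gaps.append(f"free disk below {req['disk_gb']} GB")
    if not m.directx9_wddm:
        gaps.append("no DirectX 9 / WDDM graphics")
    return gaps

old_xp_box = Machine("XP-desk-014", "32", 1.6, 0.5, 40, False)
print(win8_ready(old_xp_box))
```

Run against an inventory export, a check like this quickly separates the machines that can carry forward from those that need replacement.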

However, there are several inconspicuous requirements and recommendations that one should be aware of. For instance, there are major changes in the licensing norms for Windows 8 on virtual machines as per Microsoft's Product Use Rights (PUR) document. These will impact the licensing costs associated with Windows 8.

In the paper titled 'Is your IT Infrastructure ready for Windows 8?', we explore six important considerations for evaluating the readiness of the IT infrastructure for the move to Windows 8. The article can be accessed here.

March 21, 2013

How green is your data center?

(Posted on behalf of Niyati Kamthan)


Did you know that 90% of the data in the world was created in just the last two years? Walmart's data warehouse alone handles 2.5 petabytes of information - roughly equal to half the letters delivered by the US Postal Service in 2010! As our world gets digitized, we are creating a stupendous amount of data. Whenever we need information on anything under the sun, it appears on our screen within seconds of a click.
And all the information we use today sits somewhere in giant facilities called data centers. There are tens of thousands of data centers using computing power day and night to make information available to us at lightning speed. However, these data centers are increasingly becoming more of a curse than a boon.


Organizations are under pressure to serve information instantly so that there is no compromise on the user experience. For this reason, they run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid. A close look at the power consumption of these widespread data centers suggests that these digital warehouses use billions of watts of electricity - enough to run a mid-sized US town for a year. Moreover, during power failures these data centers depend on diesel-fueled generators, which pollute the environment to the extent that many data centers appear on government bodies' radar.

Accountability to the environment is one thing, but more often than not businesses argue that organizations do not run on the premise of "what serves the planet best". It has to make sense and enable them to grow. Here is an eye-opener: Gartner says that 20% of total IT spend goes to data centers, with average data center utilization somewhere around 7-12%. These figures are alarming, and it is apparent that these energy-hungry data centers are a huge cost drain. In addition, sales can suffer as many end consumers consider the environmental record of their suppliers. Shareholders also prefer to invest in companies that support sustainability.
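Those utilization numbers translate directly into wasted spend. A purely illustrative back-of-the-envelope calculation (the dollar figure is hypothetical; only the 7-12% utilization range comes from the Gartner figure above):

```python
# Illustrative arithmetic only: with servers running 24x7 regardless of
# demand, the effective cost per *useful* compute-hour is the all-in
# hourly cost divided by the utilization rate.
def cost_per_useful_hour(hourly_cost, utilization):
    """All-in hourly server cost divided by the fraction doing useful work."""
    return hourly_cost / utilization

# A server costing a hypothetical $1/hour to run and cool, at ~10%
# utilization, effectively costs $10 per hour of useful work.
print(cost_per_useful_hour(1.0, 0.10))   # 10.0
print(round(cost_per_useful_hour(1.0, 0.60), 2))
```

At 60% utilization the same server's useful hour costs roughly a sixth as much, which is the economic argument behind the green strategies below.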

So, greener data centers are vital to meet business demands and to reduce the impact on the environment. Green strategies such as virtualization, standardization, and automation can enable the data center to deliver the same level of service with a smaller footprint. Server consolidation and application renewal can extend the life of existing systems and limit investment in new equipment.
Cloud-based offerings can help IT shrink its carbon footprint even as demand for IT processing continues to rise.
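As a rough sketch of why virtualization shrinks the footprint: workloads idling at low utilization can be repacked onto fewer, busier hosts. The numbers below are illustrative assumptions, not client data:

```python
# Hedged back-of-the-envelope model of server consolidation through
# virtualization: total load from many underused servers is repacked
# onto hosts driven at a higher target utilization.
import math

def hosts_after_consolidation(n_servers, avg_util, target_util):
    """Physical hosts needed once n_servers' load is repacked at target_util."""
    total_load = n_servers * avg_util          # in 'whole-server' units
    return math.ceil(total_load / target_util)

# 200 physical servers at 10% average utilization, repacked onto hosts
# driven at 60%, need only a few dozen machines.
print(hosts_after_consolidation(200, 0.10, 0.60))   # 34
```

That is an ~83% reduction in physical footprint before any hardware refresh, which is the same order of magnitude as the space and emission reductions in the Ricoh story below.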

Here is an example. When the IT environment at Ricoh Europe was spiraling out of control, they envisioned a datacenter strategy to support their business transactions and process transformation program. The challenges that prevented Ricoh Europe from being agile, flexible and scalable included a large number of physical machines adding to the footprint in energy-inefficient datacenters, limited DR capability, long environment provisioning times for projects and slow response to growing business needs.

To solve these problems, Ricoh Europe partnered with Infosys in January 2011. Infosys designed and implemented a private cloud with two fully resilient tier 3 super hubs to deliver a platform on modern infrastructure with 100% virtualization, along with a transparent IT services model, while reducing total cost of ownership. The engagement delivered extraordinary benefits to the client: datacenter space was reduced by 75%, carbon emissions by approximately 85%, and the disaster recovery time objective was cut from 12 hours to 3.5 hours. Further, Infosys and Ricoh Europe were chosen as joint winners of the prestigious Green IT Award for 2012 in the Cloud/Virtualisation Project of the Year category. You can read the full story here.

The Ricoh story shows the power of bringing 'green' principles to data center design. The benefits of 'greening' a data center go beyond saving money or saving the planet. Green data centers can provide higher productivity from less infrastructure, easier maintenance and upgrades, and reduced downtime.

March 17, 2013

Software defined everything!!!

The Software Defined Datacenter (SDDC) chatter is seemingly everywhere. Think about the possibility of virtualizing network and storage the way we have virtualized CPU and memory (compute). Think of virtually managed computing, networking, storage and security delivered as a service. The provisioning and operation of the infrastructure could be entirely automated by software, leading to the Software Defined Datacenter.

Most of today's networking and storage gear mixes data and control functions, making it hard to add or adjust network and storage infrastructure when adding virtual machines (VMs) to enterprise data centers. By adding a well-defined programming interface between the two, networking and storage hardware will become progressively more commoditized, just as servers (compute and memory) have.

This will occur because all the value associated with configuration and management will get sucked out of the hardware and into other layers. This means that in the new world, manageability, control and optimization will sit at a layer above the actual hardware - leading to software-defined, software-managed and software-controlled enterprise compute, storage and network!

We can expect independent software appliances or current hypervisors (VMware, Hyper-V) to gain the ability to abstract and centralize management functions to deliver provisioning, configuration management, automation, performance optimization, capacity utilization and reporting. It is all but certain that the SDDC will require a completely new management stack - one suited to the highly dynamic nature of an environment in which every key resource (CPU, memory, networking and storage) is abstracted from its underlying hardware.
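The idea of management value "sucked out of the hardware" can be pictured as desired-state reconciliation: the infrastructure is declared as data, and a software control layer computes the provisioning actions needed to make reality match. This is a toy sketch of mine with hypothetical resource names - no specific vendor API is implied:

```python
# Toy illustration of software-defined provisioning (all names invented):
# desired infrastructure is declared as data; software reconciles the
# actual environment toward it instead of operators configuring devices.
desired = {
    "vm-app-01": {"cpus": 4, "ram_gb": 16, "vlan": 100, "storage_gb": 200},
    "vm-db-01":  {"cpus": 8, "ram_gb": 64, "vlan": 200, "storage_gb": 500},
}

actual = {
    "vm-app-01": {"cpus": 4, "ram_gb": 8, "vlan": 100, "storage_gb": 200},
}

def reconcile(desired, actual):
    """Return the provisioning actions that make 'actual' match 'desired'."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

for action in reconcile(desired, actual):
    print(action)
```

The point of the sketch is the shape of the loop, not the detail: every resource type - compute, network, storage, security - becomes an entry in the desired-state data rather than a box to be cabled and configured.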

The many things we will need to see in the new management stack include data protection, security, configuration management, performance management and automation. The key feature I would like to see is datacenter analytics: every layer of management software for the SDDC will be generating data at a rate that will demand a big data store just to keep up with and index it, so that bits from the various sources can be compared to each other at the right time - Big Data for data center management!

SDDC offers challenges and opportunities for product vendors, IT organizations, service providers and integrators alike. The journey has already started. Have you reconsidered your data center strategy yet? Let me know your thoughts.

March 14, 2013

April 8, 2014 is no fool's day

(Published on behalf of Vivin George)


Did you know that the etymology of the word "April" is the Latin "aperire", meaning "to open"? It is the season when trees and flowers begin to "open". However, it is ironic that April 2014 will be known for the 'closure' of Windows XP.

We all agree that Windows XP has been a great success for Microsoft and users alike, but every good thing comes to an end to make way for the next best thing. This article from the Microsoft blog explains in great detail the impact of not migrating from Windows XP. You'll be amazed by the figures quoted in this article.
The biggest whopper: if you stay on Windows XP beyond 2014, it is likely to cost five times more than running Windows 7 - consider the hole that will create in your IT budget!
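To put that claim in perspective, here is a deliberately simple cost model. The $200-per-PC figure below is hypothetical; only the 5x ratio comes from the article:

```python
# Illustrative only: if staying on XP costs 'ratio' times as much per PC
# as running Windows 7, the extra annual spend is (ratio - 1) times the
# Windows 7 baseline. The per-PC cost here is a made-up placeholder.
def xp_overstay_penalty(win7_annual_cost_per_pc, n_pcs, ratio=5):
    """Extra annual spend from staying on XP instead of moving to Windows 7."""
    return (ratio - 1) * win7_annual_cost_per_pc * n_pcs

# At a hypothetical $200/PC/year on Windows 7, a 1,000-PC estate pays an
# extra $800,000 a year by staying on XP.
print(xp_overstay_penalty(200, 1000))   # 800000
```

Even with conservative assumptions, the penalty scales linearly with estate size, which is why large organizations feel the deadline most acutely.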

But the death of Windows XP is not the only thing we have to deal with. In October 2012, Microsoft launched a new and radically different platform - Windows 8. With its release, now we have two options to choose from - Windows 7 and Windows 8. Both of these platforms have their own pluses and minuses. This calls for a good analysis of what you want from your operating system (OS).
Windows 8 is a product which could mean remarkably different things to organizations, based on usage, budget and user demographics. If yours is the kind of organization where mobility and social media have been infiltrating the workplace, you can leverage Windows 8 to integrate mobile computing with traditional desktop based computing.

On the other hand, Windows 7 is an excellent platform and is popular with consumers and enterprises alike. It has been around for four years and is a stable OS, with most independent software vendors (ISVs) having certified their applications for the platform. If you are looking for a faster upgrade, reduced risk and a standardized computing environment, Windows 7 is the way to go.

Given that Windows 8 has plenty of new features and needs ample support from the hardware perspective, it is clear that migration from Windows XP to Windows 8 is a big jump. Moreover, most of the folks involved in the last rollout have either retired or are part of the C-suite today!

So it is imperative that organizations carefully evaluate business and technical factors to ensure a successful migration.
There is no single solution or rule of thumb for migrating from Windows XP. In fact, this migration can open a "window" of opportunity for organizations to mull over how they see themselves in the next 10 years. Can the migration be combined with a cleanup of legacy systems or applications? Should the organization embrace enterprise mobility and BYOD? When organizations think through these basic questions, they can pick the platform that fits the bill.

March 13, 2013

Achieving and sustaining Service Excellence

One of the most difficult challenges is identifying where the onus lies for driving Service Excellence. Should it be driven using a top-down or a bottom-up approach? In an IT services scenario, most service delivery governance or management teams are involved in 'fixing' operational defects and in people management. Driving Service Excellence is largely reactive - it gets triggered only when something goes wrong.

The challenge increases manifold if the service provider embarks on the journey towards Service Excellence while managing a multi-vendor environment. The main challenge here is to align the path to Service Excellence with the client's organizational priorities and objectives. IT delivery teams that work as individual silos in the hope of achieving such objectives end up impairing the process of achieving Service Excellence itself. One has to be vigilant in identifying 'improvement opportunities' and carry out the necessary due diligence to understand their impact. If the impact is widespread and positive for client business operations, such opportunities should be quickly exploited.

For a large program where Infosys embarked on the journey of achieving service excellence, the focus was on bringing transparency to service performance, and we emphasized the value delivered by explicitly linking the efforts of IT delivery teams to business outcomes.

Our next set of posts will reveal more on where to focus and how to overcome challenges. Watch this space for more!

March 7, 2013

Smart Moves

In my last post, I spoke about the three available options - relocation, replication and rebuild - when migrating data centers. In this post, I will explore the concept of data center relocation. Relocation essentially means moving the same hardware from the old data center to the new one. All the components of the data center are shifted lock, stock and barrel to a new facility.
Sounds simple - right? Well it is. 


Typically, a relocation may take place because of any of the following reasons:

• Expiring lease tenure or contractual obligations
• Inadequate budgets for a full scale transformation program
• Legacy applications and dependency on legacy hardware or platforms

These six basic steps can ensure a smooth migration process:

1. Due diligence, where parameters such as asset functionality and possible bottlenecks are identified.

2. Identifying stakeholders, who would typically be experts in domains such as IT operations, network management and facilities management, to name a few. A comprehensive stakeholder matrix with clearly defined roles, responsibilities and scope of work is then prepared.

3. Application bundling involves identifying applications and their interdependencies, then tracing down every piece of hardware related to them. To ensure there is no lag in application performance, performance testing is conducted before the move.

4. The actual move should preferably be done in phases. An accurate back up and a robust disaster recovery plan are vital to a smooth migration. Business and IT support teams need to coordinate closely during the actual move.

5. Storage of the IT hardware equipment involves specialized containers that are well designed to protect the equipment from any static, shocks or jerks during the movement. Hardware vendors such as IBM have strict specifications for the transport and storage of their hardware products.

6. Auditing the new data center is the last step towards a complete migration. This ensures that the migration has been a success and meets all the business requirements specified in the scope of the project. It is a mandatory step if a third party was used to make the move.
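Step 3 above, application bundling, is at heart a grouping problem: applications that share hardware, directly or through a chain of shared devices, must travel in the same move bundle. A minimal sketch (application and hardware names are invented) that forms the bundles with a small union-find:

```python
# Group applications into move bundles: apps that share hardware,
# directly or transitively, end up in the same bundle. Names are
# hypothetical examples, not a real CMDB.
from collections import defaultdict

def bundle(app_hardware):
    """Return sorted bundles of apps that must be relocated together."""
    hw_to_apps = defaultdict(set)
    for app, hw_list in app_hardware.items():
        for hw in hw_list:
            hw_to_apps[hw].add(app)

    parent = {app: app for app in app_hardware}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    # Apps attached to the same device belong together.
    for apps in hw_to_apps.values():
        ordered = sorted(apps)
        for other in ordered[1:]:
            union(ordered[0], other)

    groups = defaultdict(set)
    for app in app_hardware:
        groups[find(app)].add(app)
    return sorted(sorted(g) for g in groups.values())

deps = {
    "payroll":   ["db-srv-1", "san-a"],
    "hr-portal": ["db-srv-1"],          # shares db-srv-1 with payroll
    "intranet":  ["web-srv-2"],
}
print(bundle(deps))   # [['hr-portal', 'payroll'], ['intranet']]
```

Each bundle can then be scheduled as one phase of the move, which is exactly why step 4 recommends moving in phases.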

My next post will focus on the process of replicating a data center. While the basic process is largely the same, unlike a simple relocation, replicating an environment is an intensive process and needs to be executed with great care.

March 5, 2013

Service Excellence... What's that?

I have been contemplating writing something on the Service Excellence Office (SEO) for a while now. And finally, here it is!

At Infosys, we finished one of our most successful SEO implementations for one of our largest clients. As we went along this journey, there was tremendous learning over the last two years. From this experience, and from the queries that followed from different Infosys programs spanning industry verticals, I realized that the most logical way to de-codify SEO was to start by defining what we mean by "Service Excellence" in the first place!

Service excellence is about embedding predictability and standardization in the way we deliver services to our clients (be it one-time services or continual improvements). It is all about delighting our clients by giving them a feel of the "true value" that these services deliver.

Imagine your favorite mechanic or handyman - why trust him and not the others? It is all in the predictability and quality of the service he delivers! Service excellence is all about setting and exceeding our own internal benchmarks - about persistently competing with ourselves to create a compelling service experience. In an IT setup, it is about enabling mechanisms to share best practices across various IT services and reaping the benefits together. It is about creating healthy competition between different IT services working towards a common goal.

Service excellence is neither science nor art; it is a mix of both. It is scientific in that it brings in predictability through mechanisms that demonstrate measurable business value. The artistic aspect comes in with the innovation, creativity and passion of the teams.

In an IT organization's daily routine there are multiple occasions where opportunities arise to increase client satisfaction, reduce TCO and demonstrate value from IT. It is here that the SEO works closely with operations teams to increase client satisfaction by driving process efficiencies, improving response and resolution statistics and remediating client pain points.

Further, the SEO supports TCO reduction by helping achieve cost savings through rationalization of teams, optimization of processes and optimal resource utilization by reducing overheads. Finally, the SEO is most useful in demonstrating IT value to business; this is where the rubber meets the road. The SEO does this by identifying key improvement levers that positively impact efficiency and effectiveness, building agile measurement frameworks and communicating the benefits achieved to business in a timely and impactful manner.
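One of the statistics mentioned above, resolution time, is easy to make measurable. A small illustrative sketch (the ticket data is invented) of the kind of before-and-after metric an SEO might track and report to the business:

```python
# Hedged sketch of a resolution-time metric. Tickets are (opened,
# resolved) pairs in hours since some epoch; the data is made up.
def mean_time_to_resolve(tickets):
    """Average resolution time in hours for (opened, resolved) pairs."""
    durations = [resolved - opened for opened, resolved in tickets]
    return sum(durations) / len(durations)

# A week of tickets before and after a process-efficiency drive; the
# drop is the sort of measurable value an SEO reports back to business.
before = [(0, 10), (5, 17), (8, 20)]
after  = [(0, 4), (5, 9), (8, 13)]
print(round(mean_time_to_resolve(before), 1))
print(round(mean_time_to_resolve(after), 1))
```

Tracking a handful of such metrics consistently, rather than anecdotally, is what turns "service excellence" from a slogan into something a measurement framework can demonstrate.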

In the next post, we will gradually unravel the different types of challenges that come in the way of achieving Service Excellence and the kind of organizational focus required to meet them. Stay tuned!