Infrastructure services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead of the game? Read on to learn more on our Infra Matters blog.


May 29, 2014

Application Portability - Going beyond IaaS and PaaS - Part 1

Renjith Sreekumar

The other day, I was studying economies of scale for virtual machine (VM) deployments for a client and tried to benchmark these against standard cloud deployment models, both on and off premise. A VM provides a dedicated run-time environment for a specific application based on the supported stack. While VM densities keep increasing, driven by processing and compute innovations at the hardware layer, application densities are not, because a single OS cannot support multiple stacks and runtimes.
I tried to analyze this scenario against use cases we have addressed with client virtualization solutions such as Citrix XenApp and App-V. Application virtualization essentially achieves this kind of isolation: applications are packaged and streamed to the client OS irrespective of the local OS preference. As a result, multiple versions of the same application, or applications certified on different stacks, can be run virtually from the same client OS.

How can this process be replicated within an enterprise? As I pondered the issue, I happened to meet an old friend and Linux geek in the subway. Our discussion led to deeper insights into the next wave of virtualization, which we believe will redefine the way applications are delivered in future - a model that enables a cost-effective PaaS offering with massive scale, security and portability for applications. It prompted the following questions:

  • How can we enable standard VMs to support multiple apps - each having its own run-time dependencies?
  • Today's application teams need a plethora of development frameworks and tools. Can these be supported in one "box"?
  • How are PaaS providers delivering the environments today? How do they achieve the economies of scale while supporting these needs - or do they?
  • How complicated is it to use sandboxes to isolate application runtimes?
  • Can the Dev team get a library that is not in the standard "build" that the technology team offers today? Do we need to update technology stacks such that this library can be supported?

To put it simply: do we give application teams a fully layered OS image, complete with libraries, in a VM - or would a "light-weight" container, in which the application is packaged with its dependencies (libraries and run-time needs) and isolated from the underlying OS and hardware, suffice?

Welcome to the world of Application Containers. Here is how it works:

The infrastructure team creates a container image, the application team layers it with its dependencies, and the container is then handed back to the infrastructure team for management. The application is packaged as if it were running on its own host, with no conflicts with other applications that may also be running on the same machine.
Instead of 100 VMs per server, we are now talking about thousands of isolated applications running on a single physical or virtual host. Such container technologies package an application and its dependencies in a virtual container that can run on any virtual or physical server. This enables flexibility and portability in where the application can run - on-premise, public cloud, private cloud or otherwise.
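The post promises details of the underlying technology in a follow-up, so purely as an illustrative aside: one popular container runtime today is Docker, and a minimal sketch of the workflow above - assuming a local Docker daemon, the Docker SDK for Python, and a hypothetical image named myapp:1.0 that the application team has already packaged with its dependencies - might look like this:

```python
# Illustrative sketch only: assumes a local Docker daemon, the Docker SDK for
# Python ("pip install docker"), and a hypothetical image "myapp:1.0" that the
# application team has packaged with all of its runtime dependencies.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run the packaged application as an isolated container. Because the image
# carries its own libraries and runtime, the same image runs unchanged on any
# physical or virtual host - on-premise or in a public/private cloud.
container = client.containers.run("myapp:1.0", detach=True, name="myapp-instance-1")

print(container.name, container.short_id, container.status)
```

The infrastructure team can then manage the running container - stop it, restart it, or move it to another host - without needing to care which stack is packaged inside it.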

In my next post, I will talk more about the underlying technology, its benefits, strategies around adoption and migrations.

July 2, 2013

Data center replication - a key step in disaster recovery

The Atlantic hurricane season has officially started, which means that most organizations will have made some kind of disaster recovery plan for their data centers. This usually involves developing failover centers and backing up data in the event that the primary data center goes down. And this is where data center replication comes in.
As I mentioned in my last blog post, replication is the preferred option in scenarios where the target data center will be equipped with hardware and OS identical to those of the source data center - resulting in a complete replica of the environment itself.

In tech-speak, we define replication as a method in which various infrastructure computing devices (both physical and virtual) share information across data centers so as to ensure consistency between resources and improve reliability and fault tolerance.
As with any other kind of migration, replication of a data center involves a set of key considerations, primarily around the volume and type of data to be replicated and the number of sites to be migrated:

1. Size/volume of data to be replicated, velocity/speed required to replicate that data, and variety/type of data (see the back-of-the-envelope sketch after this list).
2. Distance from the source data center, and the number of sites/domains and security policies to be migrated.
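To make the velocity dimension concrete, here is a rough back-of-the-envelope sketch in Python; the daily change volume, replication window and compression ratio are purely illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope estimate of the WAN bandwidth needed for replication.
# All figures below are illustrative assumptions.

daily_change_tb = 2.0         # data changed per day that must be replicated (TB)
replication_window_hours = 8  # nightly window in which replication must finish
compression_ratio = 0.6       # fraction of data actually sent after compression/dedup

bytes_to_send = daily_change_tb * 1e12 * compression_ratio
seconds = replication_window_hours * 3600
required_mbps = (bytes_to_send * 8) / seconds / 1e6

print("Sustained bandwidth required: %.0f Mbps" % required_mbps)
# ~333 Mbps for these assumptions; distance then adds latency, which matters
# most if synchronous replication between the sites is required.
```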

The migration itself may be done in several steps. For instance, a data center may be replicated with the same physical and virtual environment and the same storage, or the storage may be replicated separately. In either case, the migration process requires proper planning, with a focus on bundling applications for relocation and managing licenses for both physical and virtual environments. There is a lot of potential to reduce both capex and opex at this stage. One sure-shot way would be to use virtualization to reduce the server count and thereby the associated license costs.
The importance of being prepared with DR cannot be stressed enough. A quick check of the news headlines - including the Amazon outage in 2011 and the more recent outage of Chorus, the French government's accounts payable system - makes it clear that system downtime can be crippling for any organization. Replicating a data center to create a new DR site offers organizations a plausible approach to balancing risk with business needs.

 

Continue reading "Data center replication - a key step in disaster recovery" »

March 21, 2013

How green is your data center?

(Posted on behalf of Niyati Kamthan)

 

Did you know that 90% of the data in the world has been created in just the last two years? Walmart's data warehouse alone handles 2.5 petabytes of information - roughly equal to half the letters delivered by the US Postal Service in 2010! As our world gets digitized, we are creating a stupendous amount of data. Whenever we need information on anything under the sun, it is one click away and appears within seconds on our screens.
And all the information we use today sits somewhere in giant facilities called data centers. There are tens of thousands of data centers using computing power day and night to make information available to us at lightning speed. However, these data centers are increasingly becoming more of a curse than a boon.


Continue reading "How green is your data center?" »

March 17, 2013

Software defined everything!!!

The Software Defined Datacenter (SDDC) chatter is seemingly everywhere. Think about the possibility of virtualizing network and storage in a manner similar to how we have virtualized CPU and memory (compute). Think of virtually managed computing, networking, storage and security delivered as a service. The provisioning and operation of the infrastructure could be entirely automated by software, leading to the Software Defined Datacenter.

Most of today's networking and storage gear mixes data and control functions, making it hard to add or adjust network and storage infrastructure when adding virtual machines (VMs) to enterprise data centers. By adding a well-defined programming interface between the two, networking and storage hardware will become progressively more commoditized, just as servers (compute and memory) have.

This will occur because all the value associated with configuration and management will get sucked out of the hardware and into other layers. In the new world, manageability, control and optimization will sit at a layer above the actual hardware - leading to software-defined, software-managed and software-controlled enterprise compute, storage and network!
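To illustrate the idea of control and optimization living in a software layer above commoditized hardware, here is a deliberately simplified, hypothetical sketch; none of the classes or methods correspond to any real vendor API:

```python
# Hypothetical sketch of "control above the hardware": provisioning decisions
# live in a software layer, while hardware only supplies raw pooled capacity.

class ComputePool:
    def __init__(self, vcpus, memory_gb):
        self.vcpus, self.memory_gb = vcpus, memory_gb

    def allocate(self, vcpus, memory_gb):
        self.vcpus -= vcpus                  # track remaining pooled capacity
        self.memory_gb -= memory_gb
        return {"vcpus": vcpus, "memory_gb": memory_gb}

class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb

    def allocate(self, size_gb, tier):
        self.capacity_gb -= size_gb
        return {"size_gb": size_gb, "tier": tier}

class NetworkFabric:
    def attach(self, vm, policy):
        vm["network_policy"] = policy        # firewall/QoS expressed as software rules

class SoftwareDefinedDatacenter:
    """Control plane: one software interface over commoditized hardware pools."""
    def __init__(self):
        self.compute = ComputePool(vcpus=512, memory_gb=4096)
        self.storage = StoragePool(capacity_gb=200000)
        self.network = NetworkFabric()

    def provision(self, app, vcpus, memory_gb, disk_gb, policy):
        vm = self.compute.allocate(vcpus, memory_gb)
        vol = self.storage.allocate(disk_gb, tier="gold")
        self.network.attach(vm, policy)
        return {"app": app, "vm": vm, "volume": vol}

sddc = SoftwareDefinedDatacenter()
print(sddc.provision("payroll", vcpus=4, memory_gb=16, disk_gb=200, policy="dmz"))
```

The point of the sketch is that the request never names a specific switch, array or server - the software layer decides where the capacity comes from.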

We can expect independent software appliances or current hypervisors (VMware, Hyper-V) to come with the ability to abstract and centralize management functions to deliver provisioning, configuration management, automation, performance optimization, capacity utilization and reporting. It is almost certain that the SDDC will require a completely new management stack - one suited to the highly dynamic nature of an environment in which every key resource (CPU, memory, networking and storage) is abstracted from its underlying hardware.

Among the many things we might need to see in the new management stack are data protection, security, configuration management, performance management and automation. The key feature I would like to see is datacenter analytics: every layer of management software for the SDDC will generate data at a rate that will demand a big data store just to keep up with and index it, so that bits from the various sources can in fact be compared to each other at the right time - Big Data for data center management!

SDDC offers challenges and opportunities for product vendors, IT organizations, service providers and integrators alike. The journey has already started. Have you reconsidered your data center strategy yet? Let me know your thoughts.

March 7, 2013

Smart Moves

In my last post, I spoke about the three available options of relocation, replication and rebuild when migrating data centers. In this post, I will explore the concept of data center relocation. Relocation essentially means the movement of the same hardware from the old data center to the new one. All the components of the data center are shifted lock, stock and barrel to a new facility.
Sounds simple - right? Well it is. 
  


Typically, a relocation may take place because of any of the following reasons:

• Lease tenure ending or contractual obligations
• Inadequate budgets for a full-scale transformation program
• Legacy applications and dependency on legacy hardware or platforms

Continue reading "Smart Moves" »

February 28, 2013

VMware vs. Microsoft - is Microsoft leading the race?

(Posted on behalf of Akshay Sharma)

The race to lead the virtualization and private cloud space has been a close contest between two industry stalwarts - Microsoft and VMware.

But with the launch of Windows Server 2012, Microsoft seems to have stolen a lead over VMware - or has it?

Microsoft launched Windows Server 2012 last year and is positioning it as a cloud operating system. Windows Server 2012, with Hyper-V 3.0 and System Center 2012, is an impressive package that seems to have given Microsoft a significant lead in its cloud strategy.

As organizations seek to deploy private clouds, one of the key requirements is to ensure that sufficient storage and computing power is available in the form of servers and storage systems. To be cost effective, they require large virtual servers and virtual storage that can be easily added or removed with minimal disruption.

In response to these challenges, Microsoft's Windows Server 2012 with Hyper-V 3.0 brings a wide collection of features and capabilities. It has built-in storage virtualization which lets the user configure storage into a single elastic and efficient storage pool. It also provides high availability for applications by setting up failover systems within the datacenter or at a remote location. With the new Hyper-V, it is possible to virtualize extensive workloads, with support for up to 64 virtual processors per virtual machine (VM) and up to 4TB of memory per host.

However, VMware, the original kingpin of the virtualization space, has the vCloud Suite 5.1 - complete with all the requisite cloud characteristics. With their new offering, they are working towards pooling industry-standard hardware and running each layer of the datacenter as software-defined services. vSphere 5.1, the VMware core hypervisor, is packed with new features such as live migration without shared storage, and supports virtual machines that are double the size of those previously possible.

Talk about a knockout round!

Continue reading "VMware vs. Microsoft - is Microsoft leading the race?" »

February 14, 2013

Opportunities for data center outsourcing - Gathering momentum

(Published on behalf of Shalini Chandrasekharan)

According to the Symantec State of the Data Center Survey 2012, complexity in data centers is 'pervasive', leading to a host of problems including longer lead times, increasing costs and reduced agility in responding to business requirements.

Small wonder, then, that organizations are looking for options that will bring higher scalability and agility.

Data center outsourcing is one such option. In his interview with Steve Wexler of the IT-TNA magazine, Chandrashekhar Kakal, senior VP and global head of Business IT Services at Infosys, talks about how data center outsourcing can enable organizations to attain higher levels of scalability and agility as demands on IT outpace budgets and resources. The reality of today's situation is that organizations must increasingly look to data center consolidation and outsourcing as critical levers to achieve agility, as well as manage costs and service levels.

The article also quotes inputs from Gartner and IDC on the proliferation of data center outsourcing models including Cloud and Infrastructure-as-a-Service or IaaS offerings.

The full article can be accessed here.

 

 

 

January 10, 2013

Storage Virtualization - the road ahead...

(Posted on behalf of Vaibhav Jain)

 

A decade ago, the management of disk storage was simple and uncomplicated; if we needed more space, we replaced a low-capacity disk drive with a high-capacity one. But as organizations grew, data grew, so we started adding multiple disk drives. Finding and managing multiple disk drives then became harder, so we developed RAID (Redundant Array of Independent Disks), NAS (Network Attached Storage) and SAN (Storage Area Network). The next step in storage technology is storage virtualization, which adds a new layer of software and hardware between storage and servers.

In a heterogeneous storage infrastructure, where you have different types of storage devices, there is added complexity in managing and utilizing these storage resources effectively. As storage networking technology matures, larger and more complex implementations are becoming common. Specialized technologies are required to provide an adaptable infrastructure with a reduced cost of management.

Storage virtualization resolves this problem by pooling physical storage from multiple network storage devices into a single virtual storage device that can be managed from a central console. This helps in performing regular tasks of backup, archiving and recovery in a quicker and more efficient way, which in turn provides enhanced productivity and better utilization and management of the storage infrastructure. Various hardware and software appliance options are available in the market for storage virtualization.
Well-known hardware appliances for storage virtualization include EMC VPLEX, the Hitachi Data Systems USP V, USP VM and VSP systems, NetApp V-Series appliances and IBM Storwize V7000. On the software side, we have the likes of HP StoreVirtual VSA and EMC Invista.


I have seen environments where organizations have decided to move away from multi-vendor storage systems to a single-vendor system in the desire to reduce complexity. However, there may be no need to discard a multi-vendor setup, because the same efficiency, scalability and flexibility can be achieved with a heterogeneous storage environment.
What you need to do is choose a storage system that supports storage virtualization, make it the front-end storage system, and connect the rest of the existing multi-vendor storage systems behind it. The same LUNs (Logical Unit Numbers) are maintained through the new virtualized storage system, and you can scale up capacity without having to migrate data from the old storage to a new system.
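Conceptually, the front-end virtualization layer is little more than a mapping table between the virtual LUNs it presents to hosts and the LUNs that still live on the existing arrays. The sketch below, with made-up array names and sizes, illustrates that idea and not any vendor's implementation:

```python
# Conceptual sketch (not a vendor API) of a storage virtualization front end:
# present one pool of virtual LUNs while mapping them onto LUNs that still
# live on the existing multi-vendor arrays behind it.

backend_arrays = {
    "emc_cx4":    {"lun_001": 500, "lun_002": 750},   # backend LUN -> size in GB
    "hds_ams":    {"lun_101": 1000},
    "netapp_fas": {"lun_201": 250, "lun_202": 250},
}

# The virtualization appliance keeps a mapping table; hosts only ever see the
# virtual LUN, so data need not be migrated when capacity is added at the back.
virtual_lun_map = {}
for array, luns in backend_arrays.items():
    for lun, size_gb in luns.items():
        virtual_id = "vlun_%03d" % (len(virtual_lun_map) + 1)
        virtual_lun_map[virtual_id] = {"array": array, "backend_lun": lun, "size_gb": size_gb}

total_gb = sum(v["size_gb"] for v in virtual_lun_map.values())
print("Single virtual pool of %d GB across %d arrays" % (total_gb, len(backend_arrays)))
for vlun, target in sorted(virtual_lun_map.items()):
    print(vlun, "->", target["array"], target["backend_lun"])
```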

This way, you continue to use your old infrastructure with no downtime while leveraging the benefits of a virtualized storage system - enhanced scalability, flexibility in managing resources and an overall reduction in the complexity of the IT environment.

January 8, 2013

Data center transformation - The next wave?

According to this report from Gartner, organizations are increasingly looking for ways to transform their data centers. In the report titled "Competitive Landscape: Data Center Transformation Services", Gartner reveals that the number of searches on the Gartner website for the topic has increased by about 358%.

The report also suggests that organizations are looking beyond a simple hardware refresh to re-engineering operating models and processes in order to increase functionality while optimizing costs. In this sense, there is greater demand for consulting-led solutions that organizations can streamline according to their requirements.

At Infosys, our value proposition involves helping organizations transform their data centers with a consulting-led approach. In addition, our intellectual property and solution accelerators help us provide a great framework for enterprises to gain the best return from their IT Infrastructure.

Infosys is one of several vendors profiled in the report. The report can be accessed here.

December 21, 2012

Onward to 2013- IT Infrastructure's home is more important than ever

A couple of weeks ago I attended the Gartner Data Center Conference 2012, by far one of the largest conferences focused on infrastructure services and the data center market. The conference had about 2,500 visitors and was very well organized on all fronts. Through that time, and after discussions with clients and partners, some trends are becoming more evident than ever as 2012 comes to a close. I won't try to expand on each of these observations in this short blog; suffice it to say that numerous examples around these areas exist today and are growing rapidly. Call it a crystal ball or some food for thought for year-end reading, so here goes:

• The emergence of the software-defined and software-managed modular data center

• Enterprises' growing struggles to realize public cloud benefits within an enterprise construct

• The expanding PaaS-ification of public clouds - note the announcement of Amazon Redshift earlier this month to challenge the likes of IBM in the data warehouse market

• The infrastructure automation market looks quite diverse and is just getting started - no single pure-play company is likely to wield any significant influence

• Data centers are growing and growing - more space is being built out for DCs in spite of virtualized and converged infrastructure

• Data center operators' greater reliance on the power grid and on self-generation measures is inviting more scrutiny from power regulators

• Converged infrastructure is enabling faster and more frequent application rollouts - expect more along the lines of DevOps. It is also fueling faster private cloud adoption

• Converged and commoditized infrastructure is starting to find its way into the enterprise

• Fully operational and scalable hybrid clouds are still more hype than reality

• Converged cloud services are SaaS-ifying and commoditizing CRM, HRM, financials and related business functions - e.g. Oracle Fusion cloud

• Lots of initiatives are planned within enterprises on Big Data, along with the emergence of Big Data-only startups - e.g. Hortonworks

• Big Data is leading to the 'Internet of Things' - e.g. sensors and cameras recording vehicle license plates at toll booths - raising questions on what to do with, and how to store, this data tsunami

• Talent within IT organizations is getting redefined - more cross-skilled, senior resources within infrastructure dealing with cross-skilled developers, and forecasts of a talent crunch to meet demand

• The increasing impact of flash players in the storage market - Fusion-io, Violin Memory, etc. Major changes in the storage backend are expected to lead to faster apps in general, and perhaps faster business cycles

So, one thing is really clear: businesses continue to see IT infrastructure services and the data center as the center of gravity for all technology operations, whether for a 2-person startup or a global corporation. And as a result, things are looking really bright for technology professionals in 2013 and beyond.

 

December 3, 2012

Why automating your datacenter makes sense

Over the years, datacenters have come a long way from the days when behemoth machines were housed in large rooms with tonnes of cabling and traditional power-hogging air-conditioning systems. With the complexity of datacenters increasing by the day, and given the diverse infrastructure landscape prevalent within today's datacenter, the governance and maintenance of datacenters have gained importance.

Continue reading "Why automating your datacenter makes sense" »

November 29, 2012

CIO Mandate: Optimize

Posted on behalf of Shalini Chandrasekharan

In his second post on CIO mandates, Chandra Shekhar Kakal, SVP and head of Business IT Services, continues to emphasize the imperatives for CIOs.

Optimizing current IT operations is an ongoing challenge for CIOs. With IT operations consuming 60-70% of the average IT spend, it is no surprise that the focus on IT infrastructure, and on ways to gain additional leverage from investments already made, is gaining momentum.

Continue reading "CIO Mandate: Optimize" »

October 25, 2012

Maximize Benefits of Datacenter Initiatives

In recent times, many organizations have undertaken large transformation programs in areas such as data center consolidation and hardware virtualization, for reasons ranging from legacy modernization and the journey to cloud to reducing costs through automation and standardization. Yet many of these transformation programs fall short of the desired goals.

 

Continue reading "Maximize Benefits of Datacenter Initiatives" »

October 18, 2012

Who moved my datacenter?

Datacenter Migrations are critical activities for a reason - so much needs to be considered and so much can easily go wrong.

A lot of thought goes into the planning phase itself, starting with why and where we are moving our datacenters. Many factors - growth, performance, legacy resources, operational cost, green initiatives, real estate changes and staffing - have a bearing on a datacenter migration. Understanding the reasons for the migration helps us prepare the right migration plan.
 

For instance, are we looking to relocate, rebuild or just replicate our present conventional data center? Do we need cloud-based services? Is a hardware refresh included, or just virtualization?
 

Continue reading "Who moved my datacenter?" »

March 13, 2012

Depleting IPv4 addresses: Is it time to start transitioning to IPv6? - part 2

In my previous post, I discussed some of the probable solutions to the IPv4 address depletion problem. In this blog, I would like to list some of the most popular alternatives being adopted and try to arrive at a best fit.

a. Carrier Grade NAT (CGN): Traditionally, enterprises have used NAT (Network Address Translation) as a mechanism to allow multiple internal "private" machines to share a single public IP address. This 'blankets' the enterprise network from the internet and provides a layer of security. The same concept is replicated in CGN on a larger scale: the ISP assigns a single public IPv4 address to multiple customers, who in turn share this address between the systems on their local networks. Even though this might provide a temporary stop-gap, in the long term it may not be scalable and could result in increased complexity and overhead in managing the networks.
b. Purchase additional IPv4 addresses: By the looks of it, this doesn't seem a very encouraging alternative. The lack of IPv4 addresses has given rise to a vibrant market for trading addresses, and efforts are on to put in place policies for legitimate address trading. The drawback is that organizations holding more IP addresses than they need can hoard them. Again, this would only defer the crisis until a more viable long-term solution is available.
c. Migrate to IPv6: During the early 1990s, when it was realized that IPv4 would eventually run out, work started on a new version of the IP protocol, and in 1998 the IETF (Internet Engineering Task Force) came out with the first version of the new IPv6 protocol. An IPv6 address is 128 bits long, giving approximately 3.4 x 10^38 addresses. To simplify understanding, consider an analogy: if the total IPv6 address space were the size of the Earth, two IPv4 address spaces would fit inside a single tennis ball! This is practically a limitless supply of IP addresses.
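The arithmetic behind those numbers is easy to check with the Python standard library (the /64 prefix below is simply a typical subnet size, used here for illustration):

```python
# Quick arithmetic behind the address-space claims above.
import ipaddress

ipv4_total = 2 ** 32    # ~4.3 billion addresses
ipv6_total = 2 ** 128   # ~3.4 x 10^38 addresses

print("IPv4 addresses: %e" % ipv4_total)
print("IPv6 addresses: %e" % ipv6_total)
print("IPv6/IPv4 ratio: %e" % (ipv6_total / ipv4_total))

# Even a single customer-sized IPv6 subnet dwarfs the whole IPv4 internet:
subnet = ipaddress.ip_network("2001:db8::/64")
print("Addresses in one /64: %e" % subnet.num_addresses)
```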

Running through parameters like capex, opex, scalability, flexibility, long-term growth and extensibility, CGN appears to be the least appealing, probably scoring only on capex. Evaluating the second option - purchasing additional IPv4 addresses - doesn't tip the scales either way once these parameters are taken into consideration. The only option that fits the bill is IPv6 migration.

Since the early IPv6 days, network equipment vendors have worked on incorporating IPv6 support into their product suites, and the major players have continually released newer versions of their products with built-in IPv6 support. But end users and enterprises have mostly been unaware and complacent: unless there is a compelling reason to shift, most would rather stick to easier alternatives. This is what has happened, and there has been no real demand for IPv6. If equipment vendors are IPv6-compliant but the service providers and enterprises that deploy the equipment are not ready to migrate, it does not amount to much; it is a lost cause unless there are collective efforts from all quarters. It is pretty much a vicious circle, with each side waiting for the other to make the move, and IPv6 adoption suffering in the long run.

In the next blog, I will explore the common IPv6 transition techniques and methodologies.

March 1, 2012

Depleting IPv4 addresses: Is it time to start transitioning to IPv6? - part 1

The other day I was searching for something on the internet, and just when I thought I'd found what I was looking for, the website I opened threw a '404 - Page not found' error.
This got me thinking: what if you woke up one morning, tried to connect to the internet and found that everything was down? I know this sounds extremely far-fetched, but there is no denying that the internet is so closely intertwined with our daily lives that even small glitches or changes have the potential to snowball into major disruptions.

Continue reading "Depleting IPv4 addresses: Is it time to start transitioning to IPv6? - part 1" »

May 3, 2010

Manage the assets' end-of-life in a "Green" Way

Summer is at its peak in India and the temperature has already surpassed its previous high. This has motivated me further to continue writing blogs on Green IT. I would like to continue from my comment in the previous blog, where I said that "maximizing utilization of existing assets is better than purchasing new environment-friendly assets". There are two ways to look at it:

Continue reading "Manage the assets' end-of-life in a "Green" Way" »

April 26, 2010

"Greening" the IT Asset Usage

The usage phase of an IT asset's lifecycle is where most of the industry's "Greening" efforts are concentrated. Organizations are utilizing technologies like virtualization and centralized power management to increase the utilization of existing assets, reduce carbon emissions and cut down on operational expenses.

Continue reading ""Greening" the IT Asset Usage" »

April 5, 2010

Paint the Asset Lifecycle Green for Sustainable IT

When I chose to blog on this topic, I remembered a recent visit to a restaurant and thought of starting with that analogy.

Continue reading "Paint the Asset Lifecycle Green for Sustainable IT" »