Infrastructure services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead? Read on to learn more on our Infra Matters blog.


April 13, 2015

Hybrid ITSM - Key points to consider

IT service management (ITSM) tools play a pivotal role in managing diverse enterprise environments. There is a concerted movement towards hybrid IT, where enterprises may leverage the cloud for some workloads in addition to traditional data centers. As pointed out in my earlier post, a "one size fits all" approach does not work for ITSM tool requirements. Each organization has its own requirements and faces unique challenges during implementation.

Let's look at some of the key parameters that need special attention while implementing an ITSM tool in hybrid IT environments:

Integration is one of the key areas that can foretell the success of an ITSM implementation, particularly when deciding on a cloud option for your deployment. Enterprises have a wide range of tools deployed across their IT; sometimes each department may have different tools for similar tasks. For instance, some departments may choose Nagios while others use SCOM to monitor the infrastructure. However, no ITSM tool can exist in isolation; it must work in cohesion with the other tools.
The key considerations here include the availability of plug-ins for integration with other tool sets, the ability to provide an end-to-end view of services to the customer (a single source of truth), and support for effective problem management by integrating disconnected environments and platforms.

Security is another important aspect of implementing ITSM, especially if the deployment is on a cloud-based platform. The cloud provider's practices need to be assessed against the enterprise's regulatory, audit and compliance requirements.
In addition, the security of the connection between the cloud and on-premise environments needs to be assessed, especially with respect to the ability to move workloads from the cloud to on-premise data centers as the business requires. If the data is confidential, it is better stored in an on-premise data center.

Configuration Management in hybrid IT environments is another factor to keep in mind while implementing ITSM tools. Cloud is known for its elasticity, multi-tenancy and ability to assign resources on demand. With such dynamic change, it becomes difficult to track configuration changes and, at the same time, assess their impact on cloud services. So it is imperative that a robust CMDB strategy is in place to ensure cloud services don't fail due to inadvertent configuration changes. A simple way of tracking is to deploy server-based agents that provide real-time machine-level statistics, or to use monitoring tools to generate alerts across the hybrid environment. These alerts can be routed to a central repository where they can be analyzed and acted upon.
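As a sketch of that central-repository idea: the alert schema, tool names and field names below are purely illustrative assumptions, not any particular product's API.

```python
from collections import defaultdict

def normalize(source, raw):
    """Map a tool-specific alert dict onto one common schema
    (each real tool - Nagios, SCOM etc. - would need its own adapter)."""
    return {
        "source": source,
        "host": raw.get("host", "unknown"),
        "severity": raw.get("severity", "info"),
        "message": raw.get("message", ""),
    }

class AlertRepository:
    """Central store that indexes alerts by host for impact analysis."""
    def __init__(self):
        self.by_host = defaultdict(list)

    def ingest(self, alert):
        self.by_host[alert["host"]].append(alert)

    def critical_hosts(self):
        """Hosts with at least one critical alert, across all source tools."""
        return sorted(h for h, alerts in self.by_host.items()
                      if any(a["severity"] == "critical" for a in alerts))

repo = AlertRepository()
repo.ingest(normalize("nagios", {"host": "web01", "severity": "critical",
                                 "message": "disk full"}))
repo.ingest(normalize("scom", {"host": "db01", "severity": "warning",
                               "message": "high memory usage"}))
print(repo.critical_hosts())  # ['web01']
```

The point of the normalization step is that analysis and action can then ignore which monitoring tool raised the alert.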

As enterprises move workloads across hybrid environments, process control and governance become major issues. In many cases, enterprises may have decentralized operations, with multiple tools for a similar process across locations. Needless to say, this makes it difficult to visualize process outcomes at any given time. A governance layer that defines the responsibilities of each vendor, puts SLAs and OLAs in place to assign responsibilities to the relevant teams, and covers service reporting can avoid issues like delays, outages and inefficient operations.

An integrated approach to IT service management spanning hybrid environments allows the enterprise to govern its entire IT through a single lens. Process maturity is a key consideration here.


Posted by Manu Singh

May 29, 2014

Application Portability - Going beyond IaaS and PaaS - Part 1

Renjith Sreekumar

The other day, I was studying economies of scale for virtual machine (VM) deployments for a client and tried to benchmark these against standard cloud deployment models, both on and off premise. A VM provides a dedicated run-time environment for a specific application based on the supported stack. Today, while VM densities are increasing thanks to processing and compute innovations at the hardware layer, application densities are not, owing to the inability to support multiple stacks/runtimes on a single OS.
I tried to analyze this scenario against use cases we have addressed with client virtualization solutions such as Citrix XenApp and App-V. Application virtualization essentially achieves this kind of isolation: applications are packaged and streamed to the client OS irrespective of the local OS preference. This allows multiple versions of the same application, and applications certified on multiple stacks, to be run virtually from the same client OS.

How can this process be replicated within an enterprise? As I pondered this, I happened to meet an old friend, a Linux geek, on the subway. Our discussions led to deeper insights into the next wave of virtualization, which we believe will redefine the way applications are delivered in future: a model enabling a cost-effective PaaS that provides massive scale, security and portability for applications. This prompted the following questions:

  • How can we enable standard VMs to support multiple apps, each with its own run-time dependencies?
  • Today's application teams need a plethora of development frameworks and tools. Can these be supported in one "box"?
  • How are PaaS providers delivering the environments today? How do they achieve the economies of scale while supporting these needs - or do they?
  • How complicated is it to use sandboxes to isolate application runtimes?
  • Can the Dev team get a library that is not in the standard "build" that the technology team offers today? Do we need to update technology stacks such that this library can be supported?

To put it simply: do we give application teams a fully layered OS image, complete with libraries, in a VM, or would a "light-weight" container suffice, in which the application is packaged with its dependencies (libraries and run-time needs) isolated from the underlying OS and hardware?

Welcome to the world of Application Containers. Here is how it works:

The infrastructure team creates a container image that the application team layers with dependencies and later hands back to the infrastructure team for management. The application is now packaged as if it were running on its own host, with no conflicts with other applications that may be running on the same machine.
Instead of 100 VMs per server, I am now talking about thousands of isolated applications running on a single physical or virtual host. Such application container technologies package an application and its dependencies in a virtual container that can run on any virtual or physical server. This enables flexibility and portability in where the application can run, be it on-premise, public cloud, private cloud or otherwise.
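The isolation benefit can be sketched in a few lines of Python: each container bundles its own dependency versions, so two applications needing conflicting library versions can share a host without clashing. The manifest format and version numbers below are invented for illustration.

```python
# Illustrative container manifests: each app carries its own dependency
# versions, isolated from the host OS and from each other.
containers = [
    {"app": "billing", "deps": {"openssl": "1.0.2", "python": "2.7"}},
    {"app": "reporting", "deps": {"openssl": "3.0", "python": "3.11"}},
]

def conflicts(a, b):
    """Libraries that would clash if both apps shared one OS image."""
    shared = set(a["deps"]) & set(b["deps"])
    return {lib for lib in shared if a["deps"][lib] != b["deps"][lib]}

# On a shared OS image these two apps conflict on both libraries...
print(sorted(conflicts(*containers)))  # ['openssl', 'python']
# ...in containers, each simply runs against its own bundled versions.
```

The same check run against two containerized apps is moot: each resolves its libraries from its own bundle, not from a shared OS.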

In my next post, I will talk more about the underlying technology, its benefits, strategies around adoption and migrations.

October 28, 2013

Traditional Data versus Machine Data: A closer look

(Posted on behalf of Pranit Prakash)

You have probably heard this one a lot: Google's search engine processes approximately 20 petabytes (1 PB = 1,000 TB) of data per day, and Facebook scans 105 terabytes (1 TB = 1,000 GB) of data every 30 minutes.
Predictably, very little of this data fits into the rows and columns of conventional databases, given its unstructured nature and volume. Data of this volume and complexity is commonly referred to as Big Data.

The question then arises: how is this type of data different from system-generated data? What happens when we compare system-generated data - logs, syslogs and the like - with Big Data?

We all understand conventional data warehouses: data is stored in table-based structures, and useful business insights can be drawn from it using a relational business intelligence (BI) tool. However, analysis of Big Data is not possible with conventional tools, owing to the sheer volume and complexity of the data sets.
Machine or system-generated data refers to data generated by IT operations and infrastructure components: server logs, syslogs, APIs, applications, firewalls etc. This data also requires special analytics tools to provide smart insights into infrastructure uptime, performance, threats and vulnerabilities, usage patterns and so on.

So where does system data differ from Big Data or traditional data sets?
1. Format: Traditional data is stored as rows and columns in a relational database, whereas system data is stored as loosely structured or even unstructured text. Big Data remains highly unstructured and includes raw data that is generally not categorized, but is partitioned for indexing and storage.
2. Indexing: In traditional data sets, each record is identified by a key, which is also used as the index. In machine data, each record has a unique time-stamp that is used for indexing, unlike Big Data, where there are no fixed criteria for indexing.
3. Query type: Traditional data analysis uses pre-defined questions and searches expressed in a structured language. Queries on system or machine data vary widely, mostly based on source type, logs and time-stamps, while in Big Data there is no limit to the number of queries; it depends on how the data is configured.
4. Tools: Typical SQL and relational database tools handle traditional data sets. For machine data, specialized log collection and analysis tools like Splunk, Sumo Logic and eMite install an agent/forwarder on devices to collect data from IT applications and devices, then apply statistical algorithms to process it. For Big Data, there are several categories of tools, ranging from storage and batch processing (such as Hadoop) to aggregation and access (such as NoSQL) to processing and analytics (such as MapReduce).
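To make the indexing difference concrete, here is a minimal sketch of turning a loosely structured, timestamp-indexed machine data record into fields; the log line and regex are simplified illustrations, not a full syslog (RFC 3164/5424) parser.

```python
import re
from datetime import datetime

# A loosely structured syslog-style line (invented sample).
line = "2013-10-28T09:14:02 web01 sshd[4122]: Failed password for root"

# Pull out the fields; the timestamp - not a database key - becomes the index.
pattern = re.compile(
    r"(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<proc>\w+)\[(?P<pid>\d+)\]:\s+(?P<msg>.*)"
)
record = pattern.match(line).groupdict()
record["ts"] = datetime.fromisoformat(record["ts"])  # time-stamp index key
print(record["host"], "-", record["msg"])  # web01 - Failed password for root
```

This is essentially what the agent/forwarder in a log analysis tool does before statistics are applied: impose just enough structure on free text to index it by time.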

When a user logs in to a social networking site, details such as name, age and other attributes entered by the user are stored as structured data and constitute traditional data - i.e. stored in the form of neat tables. On the other hand, data generated automatically during a user transaction, such as the time stamp of a login, constitutes system or machine data. This data is amorphous and cannot be modified by end users.

While analysis of some of the obvious attributes - name, age and so on - gives insight into consumer patterns, as evidenced by BI and Big Data analysis, system data can also yield information at the infrastructure level. For instance, webmasters commonly analyze server log data from internet sites to identify peak browsing hours, heat maps and the like. The same can be done for an application server.
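The peak-browsing-hours analysis mentioned above boils down to a simple hourly aggregation of log timestamps; the timestamps below are invented sample data.

```python
from collections import Counter
from datetime import datetime

# Hypothetical login timestamps pulled from a web server log.
timestamps = [
    "2013-10-28T09:14:02", "2013-10-28T09:47:31",
    "2013-10-28T14:05:10", "2013-10-28T09:59:58",
]

# Bucket events by hour of day, then pick the busiest bucket.
hourly = Counter(datetime.fromisoformat(t).hour for t in timestamps)
peak_hour, hits = hourly.most_common(1)[0]
print(f"peak hour: {peak_hour}:00 with {hits} requests")  # peak hour: 9:00 with 3 requests
```

Real log analysis tools do the same thing at scale, with the timestamp index doing the heavy lifting.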


October 1, 2013

Transforming Enterprise IT through the cloud

Cloud technologies offer several benefits. With solutions that are quick to adopt, always accessible and extremely scalable, plus the capex-to-opex shift that CFOs like, clouds have come of age. In the coming years, IT organizations will increasingly see the cloud as a normal way to consume software, hardware and services. Clouds are transformational: several companies are already enjoying a competitive business advantage over their competition by being early adopters of this transformational paradigm. Today, however, we hear similar questions from IT leaders on near-term and long-term adoption, which can be summarized as follows:

  • Where do I get started?
  • What are the quick wins?
  • What should we be doing in the next 3 years?

I have tried to address some of these common questions below. These can be thought of as basic transformational patterns and a strategy for cloud adoption within the enterprise.

  • Start with Software as a Service (SaaS) - explore ready-to-use solutions in key areas such as CRM, HR and IT service management. SaaS solutions such as Salesforce, Workday and ServiceNow have been around for some time and deploy proven engagement models, so these are quick wins to consider. Pick a functionality, an app and a suitably sized footprint, so the project scope can create a real organization-level impact. From a time-to-market perspective, SaaS is arguably the quickest way to attain the benefits of cloud.
  • Explore the private cloud - take virtualization to the next level by identifying an impactful area within the organization to start a private cloud project. One example with immediate benefit is enabling application development teams to automate requests for development and test environments. Doing this through a service catalog front end connected to a private cloud back end can cut provisioning times by 50% or more, with the added benefit of freeing resources to focus on supporting production environments. There are different product architectures to consider, with pros and cons beyond the scope of this note; choose one that works for the organization and get going.
  • Explore data center and infrastructure consolidation - many large Fortune 500 organizations today have to deal with equipment sprawl. Sprawl can manifest itself in technology rooms and closets, in co-located space, and even in entire data centers, across the full stack of IT equipment from servers, network switches and storage devices to desktops. Private clouds can be used as a vehicle to consolidate, reduce footprint and increase the overall level of control and capacity of this infrastructure. Added benefits can include higher performance, lower energy costs and replacement of obsolete equipment.
  • Identify specific public cloud use cases - depending on the business, some areas can benefit from adopting public clouds. For example, organizations in the pharmaceutical, healthcare and financial industries often require a large amount of computing horsepower for a limited duration to do data analytics. This is a challenging use case for traditional IT, as it is capital- and resource-inefficient. Public clouds are a perfect answer to these types of workloads: the business unit pays for what it uses and is not limited by the equipment available in-house.
  • Create a multi-year roadmap for expansion - these initiatives are only the beginning. IT leaders need to create a three-year roadmap that plans how these initiatives can be expanded to their fullest potential within the organization. A strong project management practice and a proven project team will go a long way toward ensuring success during execution. Create a financial cost-benefit analysis for each of these areas and ensure a positive Net Present Value (NPV) case for each one. Identify what partners bring to the table that is proven, yet unique to the cloud solution at hand. Assume that, in spite of replacing some legacy tools and solutions, these cloud initiatives will continue alongside current in-house IT and management practices.
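The NPV check in the roadmap step is a one-line formula; the cash flows and discount rate below are purely hypothetical figures, not benchmarks.

```python
def npv(rate, cashflows):
    """Net Present Value: cashflows[0] is the (negative) year-0 outlay,
    later entries are the net benefits in years 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical case: $500k up-front migration cost, then $220k in
# annual net savings over the three-year roadmap, at an 8% discount rate.
case = npv(0.08, [-500_000, 220_000, 220_000, 220_000])
print(round(case, 2))  # positive -> the initiative clears the hurdle rate
```

A cloud initiative with a negative NPV at the organization's hurdle rate belongs lower on the roadmap, whatever its technical appeal.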

In summary, it is important to have a pragmatic view. Cloud is not a silver bullet that will solve all IT problems, and no organization will automatically attain the promised benefits. No two organizations are alike, even within the same industry; understanding what to do and which steps to take first will put IT on a course to being 'in the cloud'.

September 24, 2013

Service Redundancy - 5 Lessons from the Nirvanix blowout


Last week, Nirvanix, one of the most prominent cloud storage providers, gave the tech community a shock when it announced that it is going out of business. Customers and partners were asked to stop replicating data to its storage infrastructure immediately and to move their data out within about two weeks. I have been fascinated by this story; here are the facts:

  • Nirvanix raised about $70 Mn in venture funding during its lifetime, starting September 2007.

  • Its key backers kept up five rounds of funding right up to May 2012.

  • It was rated well by industry analysts and media.

  • The cloud storage service was sold through several enterprise software resellers and service providers.

  • The service pulled in several key customers and was challenging the likes of Amazon's AWS S3 storage service.


What is evident is that the company was burning cash faster than it generated revenue, and it all came to an abrupt end when it could not find a buyer or execute an exit strategy. One would have thought that enough value (IP or otherwise) would have accrued to a potential buyer in six years of existence, but no more detail seems available. Nirvanix is by no means the first to go belly up: EMC pulled the plug on its Atmos Online service in 2010, though that was perceived as far smaller in impact.


From the enterprise standpoint, if an organization had been using Nirvanix's services, these two weeks are a scramble. Moving data out of the cloud is one tall order; finding a new home for the data is another. And what if the data was being used in real time as the back end of some application? More trouble and pain. So here is my take on the top five areas clients and providers can address for service redundancy (leaning toward cloud storage services):


1) Architect multiple paths into the cloud - have a redundant storage path into the cloud, i.e. host data within two clouds at once. This also depends on the app that is using the service, geography and users, a primary/secondary configuration, communication links and costs. For example, a client could have an architecture where the primary cloud storage was on Nirvanix and the secondary on AWS. Throw in the established value from traditional in-house options and established disaster recovery providers.
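The primary/secondary idea can be sketched as a dual-write routine; the `CloudStore` class below is a stand-in for a real storage client, not an actual provider API.

```python
class CloudStore:
    """Toy stand-in for a cloud storage client (e.g. an S3-style API)."""
    def __init__(self, name):
        self.name, self.objects, self.up = name, {}, True

    def put(self, key, data):
        if not self.up:
            raise IOError(f"{self.name} unavailable")
        self.objects[key] = data

def redundant_put(primary, secondary, key, data):
    """Write to both clouds; succeed as long as at least one copy lands."""
    stored = []
    for store in (primary, secondary):
        try:
            store.put(key, data)
            stored.append(store.name)
        except IOError:
            pass  # keep going - the other path may still be healthy
    if not stored:
        raise IOError("no replica written")
    return stored

a, b = CloudStore("primary"), CloudStore("secondary")
a.up = False  # simulate the primary provider shutting down
print(redundant_put(a, b, "backup.tar", b"...payload..."))  # ['secondary']
```

In practice the second write adds communication and storage costs, which is exactly the trade-off against the app, geography and link considerations above.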

2) Be prepared to move data at short notice - based on the bandwidth available from source to target and the size of the data in question, we can easily compute how much time it would take to move data out of a cloud. Add a factor of 50% efficiency (plausible when everyone is trying to move data out at once) and frequent testing, and we have a realistic estimate of how long data migration will take. Given that Nirvanix's two weeks is a new benchmark, clients may choose to use this as a measure of how much data to store in one cloud: if it takes more than two weeks to move important data, consider paying for better communication links or bringing a new provider into the mix.
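That back-of-the-envelope calculation looks like this; the figures are illustrative (decimal terabytes, a 1 Gbps link, the 50% efficiency derating mentioned above).

```python
def migration_days(data_tb, link_mbps, efficiency=0.5):
    """Days to move data_tb terabytes over a link_mbps link, derated by
    an efficiency factor for contention and protocol overhead."""
    bits = data_tb * 1e12 * 8                       # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # effective throughput
    return seconds / 86_400

# 100 TB over a 1 Gbps link at 50% efficiency:
print(round(migration_days(100, 1000), 1))  # ~18.5 days -> over the 2-week window
```

By this estimate, 100 TB behind a 1 Gbps link already fails the two-week test, which argues for either a fatter link or splitting the data across providers.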

3) Consume through a regular service provider - utilize the service through a regular managed services provider. This has two benefits for clients: a) the ability to enter into an enterprise-type contract with the provider and ensure service levels, and b) the ability to gain financially in case of a breach of service levels. Of course, service providers in turn have to be vigilant, ensuring proper flow-down provisions in their contracts with cloud providers and an alternative to the service in case of issues.

4) Establish a periodic service review process - often we see a case of buy once, then forget. A regular review of the service will often provide early warning of issues to come. For example, in the case of a cloud storage provider, tracking how much storage it is adding (new storage growth %) and the new client labels signed on will give a good indication of any issues on the capacity front; these may in turn point to a lack of available investment for growth.

5) Understand provider financials - cloud providers today focus on ease of use and gee-whiz features, which often masks how they are actually doing financially, and the market continues to value them on revenue growth. Whether the company is public or private, as a client and under confidentiality, there exists a right to understand the financial performance and product roadmap, even if only at a high level.


Cloud solutions offer multiple benefits, but at the end of the day they still serve another business as a service, and there are issues with all services, cloud-based or traditional. Was Nirvanix an outlier or a warning of things to come? We don't know yet, but as service providers and clients, this may be the 'heads-up' we need to stay focused on the essentials for effective transformation of infrastructure and storage services.

February 28, 2013

VMware vs. Microsoft - is Microsoft leading the race?

(Posted on behalf of Akshay Sharma)

The race to lead the virtualization and private cloud space has been a close finish between two industry stalwarts - Microsoft and VMware.

But with the launch of Windows Server 2012, Microsoft seems to have stolen a lead over VMware - or has it?

Microsoft launched Windows Server 2012 last year and is positioning it as a cloud operating system. Windows Server 2012, with Hyper-V 3.0 and System Center 2012, is an impressive package that seems to have given Microsoft a significant lead in its cloud strategy.

As organizations seek to deploy private clouds, one of the key requirements is sufficient storage and computing power in the form of servers and storage systems. To be cost-effective, they require large virtual servers and virtual storage that can be added or removed with minimal disruption.

In response to these challenges, Microsoft's Windows Server 2012 with Hyper-V 3.0 brings a wide collection of features and capabilities. Built-in storage virtualization lets the user configure storage into a single elastic and efficient pool. It also provides high availability for applications by setting up failover systems within the datacenter or at a remote location. With the new Hyper-V, it is possible to virtualize extensive workloads, with support for up to 64 virtual processors per virtual machine (VM) and up to 4 TB of memory per host.

However, VMware, the original kingpin of the virtualization space, has the vCloud Suite 5.1, complete with all the requisite cloud characteristics. With this new offering, VMware is working toward pooling industry-standard hardware and running each layer of the datacenter as software-defined services. vSphere 5.1, VMware's core hypervisor, is packed with new features like live migration without shared storage, and supports virtual machines double the size of existing ones.

Talk about a knockout round!

Continue reading "VMware vs. Microsoft - is Microsoft leading the race?" »

November 2, 2012

Chasing the G2C Dreams on Cloud

With the emergence of mobile and cloud-based technologies, governments in emerging countries can now provide government-to-citizen (G2C) services in a cost-effective manner.

Drawing on the power of cloud-based services, governments can improve governance and raise the quality of the services they provide to the public. The convergence of growing internet and mobile penetration with the emergence of cloud as a pervasive computing platform gives governments an opportunity to use these disruptions as a medium for achieving their social and development goals.

Amazon's recent announcements on creating a compute platform that enables federal agencies in the US to move workloads to the cloud are an example of public/private partnership in enabling a safe, low-cost and agile platform for government computing. All of this will ultimately improve governance.

However, emerging nations lag behind developed nations in the adoption of such cloud-based government-to-citizen (G2C) services. In this feature, I discuss the challenges to G2C adoption in emerging economies and a roadmap for the evolution of G2C services in such economies.

Continue reading "Chasing the G2C Dreams on Cloud" »

June 20, 2012

Service Design for Cloud

The second session of the Special Interest Group (SIG) on "Cloud Service Management" from ITSMF Australia, Victoria Chapter, was conducted on 17th June 2012 at the Infosys Australia Docklands office. The topic of discussion was "Service Design for Cloud". What followed was a fantastic, focused group discussion, with participants putting forward very innovative points on key aspects of service design for clouds: service catalogues, cloud services and service integration management. Here is an excerpt of a few interesting points discussed...

Continue reading "Service Design for Cloud" »

March 21, 2012

Is System z an unexploited platform for Cloud solutions?


Having been in the mainframe space for many years and having worked on one of the most scalable and secure computing environments, I was wondering: is System z missing the cloud buzz?
Current platforms from IBM, such as zBX and zEnterprise servers, can support heterogeneous workloads of mainframe, Unix, Java etc., have a seamless capability to scale up or down, and are packed with the ability to create new images without system downtime. All this with unparalleled EAL5-level security!
Given all of this, would System z be a competitive platform for a cloud environment? What do you think?

ITSMF Australia "Cloud Service Mgmt SIG" launched in Victoria

The ITSMF Australia SIG on "Cloud Service Management" was successfully launched on 14th March at the Infosys Docklands office in Melbourne. We had a great group of around 25 participants gathered to share their experiences of cloud service management. Topics discussed included the various cloud service models (IaaS, PaaS, SaaS, BPaaS!), their impact on service management functions such as financial management, compliance issues, and participants' experiences of cloud services, among others. What a session it was!

Continue reading "ITSMF Australia "Cloud Service Mgmt SIG" launched in Victoria" »

June 23, 2010

Non-production environment (NPE) - Is it a potential value storehouse?

Recently I was asked to embark on an NPE assessment and optimization engagement for one of our telecom clients. The aim of the engagement was to streamline the client's pre-production environments by first discovering and creating an inventory of their non-production infrastructure portfolio, which had been procured and deployed in a pretty ad hoc fashion. The key objective was to orchestrate a governance framework to help the client manage their non-production infrastructure effectively.

In the context of this engagement, I instantly remembered what my colleague Bruno said in one of his blogs: 'Organizations, especially those with mature production ITSM processes, tend to show interest in their pre-production operations covering development, test and assurance functions.' Ah... it's so true!

It's atypical for organizations to focus on their non-production environments as much as they care for their production environments. This can be attributed to poor visibility of the gains from proactively managing non-production environments - in other words, the lack of a criterion for quantifying the value extracted. Moreover, production environments are more controlled because the risk of an uncontrolled production environment is apparent, whereas the risks of uncontrolled development or test environments are less obvious.

I think it would be uncontroversial to say that organizations are missing an opportunity to save costs by not controlling their non-production environments. The same is true for virtualized or cloud-based test or development infrastructure, since the number of groups/projects consuming the dev, test and staging environments remains the same, and the monetary gains realized by leveraging technology (virtualization, cloud computing) will be offset by non-standard pre-production environment operations.

We can safely conclude that not many organizations' cultures and operating models account for optimization of their non-production environments. Different silos manage hardware and software resources, and dev/test teams provision and set up their own hardware and software environments in a largely unregulated fashion. There is a need for a consolidated view of the pre-production environment landscape; this visibility of dev and test infrastructure is a key aspect of the NPE standardization journey. A pre-production environment inventory serves as a master key for unlocking the potential incentives of proactively tracking and managing the test and dev environments in an organization.

In addition to the inventory, a well-laid-out operations structure for non-production environments will enable:

  • A structured environment requisition and allocation process

  • Reuse of pre-production infrastructure

  • Freeing up blocked/unused test and dev resources

  • Effective utilization of software licenses used for testing

  • An updated environment inventory

I'm sure we can add at least a dozen more to the above list, as we delve deeper.

Coming back to the absence of a standard criterion for quantifying the value extracted: I have attempted to list a few KPIs/metrics that organizations might want to track for a standardized non-production environment operations process. This list is by no means exhaustive.

  • Number of opportunities identified to leverage existing test and dev infrastructure

  • Approximate capital investment ($) avoided on pre-production infrastructure

  • Cost saved by leveraging existing software licenses for testing

  • Reduction in % of environmental incidents related to deployment/configuration mismatches

I would like to leave you with these thoughts for now and solicit your valuable comments on the topic.

Until next time... Happy reading!

Appreciate your time!