Infrastructure services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead of the game? Read on in our Infra Matters blog.


April 13, 2015

Hybrid ITSM - Key points to consider

IT service management (ITSM) tools play a pivotal role in managing diverse enterprise environments. There is a concerted movement towards hybrid IT, where enterprises leverage cloud for some workloads in addition to traditional data centers. As pointed out in my earlier post, a "one size fits all" approach does not work for ITSM tool requirements. Each organization has its own requirements and faces unique challenges during implementation.

Let's look at some of the key parameters that need special attention while implementing an ITSM tool in hybrid IT environments:


While deciding on the cloud option for your ITSM deployment, integration is one of the key areas that can determine the success of the implementation. Enterprises have a wide range of tools deployed across their IT estate; sometimes each department may have different tools to perform similar tasks. For instance, some departments may use Nagios while others use SCOM for monitoring the infrastructure. No ITSM tool can exist in isolation; it has to work in cohesion with these other tools.
The key considerations here include the availability of plug-ins for integration with other tool sets, the ability to provide an end-to-end view of services to the customer (a single source of truth), and support for effective problem management by integrating disconnected environments and platforms.
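As an illustration of the integration point, here is a minimal sketch of forwarding a monitoring alert into an ITSM tool to create an incident. It assumes a ServiceNow-style REST table API; the instance URL, credentials and field names are illustrative, and the same pattern applies to plug-ins for other tool sets.

import requests

# A minimal sketch: turn a monitoring alert (e.g. from Nagios or SCOM) into an
# ITSM incident via a REST call. URL, credentials and fields are illustrative.
ITSM_URL = "https://example.service-now.com/api/now/table/incident"
AUTH = ("integration_user", "password")        # placeholder credentials

def create_incident(host, check, severity, details):
    payload = {
        "short_description": f"{check} failed on {host}",
        "urgency": severity,                   # map monitoring severity to ITSM urgency
        "description": details,
    }
    response = requests.post(ITSM_URL, auth=AUTH, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["result"]["sys_id"]  # identifier of the incident record

# Example: called from the monitoring tool's alert handler
# create_incident("web01", "CPU load check", "2", "Load average above threshold for 15 minutes")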


Security is another important aspect of implementing ITSM, especially if the deployment is on a cloud-based platform. The cloud provider's practices need to be assessed against the enterprise's regulatory, audit and compliance requirements.
In addition, the security of the connection between the cloud and on-premise environments needs to be assessed, especially with respect to the ability to move workloads from the cloud back to on-premise data centers as required by the business. If the data is confidential, it is better stored in the on-premise data center.

Configuration management in hybrid IT environments is another factor to keep in mind while implementing ITSM tools. Cloud is known for its elasticity, multi-tenancy and ability to assign resources on demand. With such dynamic change it becomes difficult to track configuration changes and, at the same time, assess their impact on cloud services. It is therefore imperative that a robust CMDB strategy is in place to ensure cloud services don't fail due to inadvertent configuration changes. A simple way of tracking is to have server-based agents that provide real-time machine-level statistics, or to use monitoring tools to generate alerts across the hybrid environment. These alerts can be routed to a central repository where they can be analyzed and appropriate action taken.
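A minimal sketch of such a server-based agent is shown below: it collects machine-level statistics and posts them to a central repository. It assumes the psutil package and a hypothetical collector endpoint; in practice the data would feed the CMDB or monitoring platform.

import socket, time
import psutil                      # assumed third-party package for machine statistics
import requests

COLLECTOR_URL = "https://cmdb-collector.example.com/api/metrics"   # hypothetical endpoint

def snapshot():
    """Collect a machine-level snapshot for the central repository."""
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

def report_once():
    # Route the reading to the central repository for analysis and alerting.
    requests.post(COLLECTOR_URL, json=snapshot(), timeout=5).raise_for_status()

# In practice this would run on a schedule (e.g. every minute) on each server.
# report_once()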


As enterprises move workloads across hybrid environments, process control and governance become major issues. In many cases, enterprises have decentralized operations with multiple tools for the same process across locations. Needless to say, this makes it difficult to visualize process outcomes at any given time. A governance layer that defines the responsibilities of each vendor, puts SLAs and OLAs in place to assign responsibilities to the relevant teams, and establishes service reporting can avoid issues like delays, outages and inefficient operations.
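As an illustration of the service reporting piece, here is a minimal sketch that computes SLA attainment from resolved tickets. The SLA targets and ticket data are purely illustrative; a real report would draw from the ITSM tool across all vendors and locations.

# A minimal sketch of cross-vendor SLA reporting: percentage of tickets resolved
# within target, per priority. Targets (in hours) and ticket data are illustrative.
SLA_TARGET_HOURS = {"P1": 4, "P2": 8, "P3": 24}

tickets = [
    {"id": "INC001", "priority": "P1", "resolution_hours": 3.5, "vendor": "Vendor A"},
    {"id": "INC002", "priority": "P2", "resolution_hours": 10.0, "vendor": "Vendor B"},
    {"id": "INC003", "priority": "P3", "resolution_hours": 20.0, "vendor": "Vendor A"},
]

def sla_attainment(tickets):
    """Return {priority: % of tickets resolved within the SLA target}."""
    met, total = {}, {}
    for t in tickets:
        p = t["priority"]
        total[p] = total.get(p, 0) + 1
        if t["resolution_hours"] <= SLA_TARGET_HOURS[p]:
            met[p] = met.get(p, 0) + 1
    return {p: 100.0 * met.get(p, 0) / total[p] for p in total}

print(sla_attainment(tickets))   # e.g. {'P1': 100.0, 'P2': 0.0, 'P3': 100.0}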


An integrated approach to IT service management spanning hybrid environments allows the enterprise to govern its entire IT through a single lens. Process maturity is a key consideration here.

 

Posted by Manu Singh

September 29, 2014

The foundation for effective IT Security Management

Of late, the news on the IT security front has been dominated by mega hacks. Retailers in particular have taken the brunt of the bad press, with a large US home improvement company the latest to admit to being compromised. In all these cases the cyber criminals took away credit card data belonging to retail customers. This in turn has set off a chain reaction in which financial services firms are battling the growth of credit card fraud. The resulting bad press and loss of reputation and trust have affected the companies and their businesses.

The tools and exploits in these attacks were new; the overall pattern is not. Cyber criminals have a vested interest in finding new ways to penetrate the enterprise, and that is not going to go away anytime soon. What enterprises can do is lower the risk of such events happening. That seems a simple enough view, but in reality the implementation is complex. Reactive responses to security breaches involve investigations, in collaboration with law enforcement, into the nature of the breach, its source, the type of exploit used, locations, devices, third-party access and so on. But that alone does not address the issue of enterprise risk.

Yes, a comprehensive approach is required. Many pieces of enterprise security have to come together and work as a cohesive force to reduce the risk of future attacks. These components include Security Awareness and Training, Access Control, Application Security, Boundary Defense and Incident Response, amongst others. But effective IT Security Management is incomplete without addressing one vital element. As an enterprise, the understanding of 'what we own', 'in what state', 'where' and 'by whom' is often lost between the discussions and practices of penetration testing, discovery and audit.

These four elements, coupled with a fifth, 'management', on a 24x7 basis, typically sit in an area outside IT Security. They sit within IT Service Management (ITSM), in Asset & Configuration Management (ACM). The foundation for effective IT security begins with strong collaboration and technology integration with the ACM practice. Without a capable ACM presence, IT Security Management is left to answer these questions by itself.
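To make those four elements concrete, here is a minimal sketch of the kind of record an ACM practice maintains for each asset or configuration item. The fields are illustrative and not tied to any particular CMDB schema.

from dataclasses import dataclass
from datetime import date

# A minimal sketch of an asset/configuration item record covering the four
# questions above: what we own, in what state, where, and owned by whom.
@dataclass
class AssetRecord:
    asset_id: str          # what we own
    asset_type: str        # e.g. "server", "network switch", "application"
    state: str             # in what state: "in use", "in repair", "retired"
    location: str          # where: data center, office, cloud region
    owner: str             # by whom: accountable team or individual
    last_verified: date    # when the record was last reconciled with reality

server = AssetRecord(
    asset_id="SRV-0042",
    asset_type="server",
    state="in use",
    location="DC-East, rack 12",
    owner="Payments Platform team",
    last_verified=date(2014, 9, 1),
)
print(server)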

So why have enterprises ignored ACM or settled for a weak ACM practice? Over the last decade there have been several reasons: technological, structural and business-related. From a technology standpoint, the available solutions had partial answers and long implementation times, and were not seen as robust enough. From a structural standpoint, the focus within ITSM was on 'services', with Incident Management taking the lion's share of the budget and attention. From a business standpoint, multi-sourcing has played a huge role in the compartmentalization of the enterprise. Rightly so, service providers' focus is on achieving their service levels and doing what they are contracted to do, and no more.

I would also argue that effective ACM is a key pillar of effective IT governance. The ability to know exactly what areas are being governed and how, from a non-strategic view, also depends on a sound ACM practice. Again, in a software-centric world there is no application software without effective Software Configuration Management (SCM) and tools like Git and Subversion. Ignoring ACM undermines the very functionality and availability of the software.

But our focus is on IT security, so where does one start? Depending on the state of the ACM practice in the enterprise, there may be a need to fund this central function, expand its scope and bring greater emphasis on tools, technology and people. More in my next blog.

June 24, 2014

Application Portability - Going beyond IaaS and PaaS - Part 2

Renjith Sreekumar

 

In my last post, we looked at the concept of running thousands of isolated applications on a single physical or virtual host. Applications are isolated from one another, each thinking it has the whole machine dedicated to itself. Container technology allows an application and its dependencies to be packaged in a virtual container that can run on any server running the same OS. This enables flexibility and portability in where the application can run - on-premise, public cloud, private cloud or bare metal. Sharing the common base image layers reduces the application footprint and start-up time, while improving performance and multi-tenancy.

Should we build multiple VMs to isolate applications, or build complicated abstractions and layering on a VM to achieve this? This is where container technology can help. Docker is one such container technology: it uses native Linux kernel capabilities such as cgroups and namespaces to create isolated environments, each with its own allocation of memory, storage, CPU and network.

The base OS image is customized using Docker to create a custom image, and Docker's layered file system merges the various layers of customization with the base image at run-time. Because the container abstracts the underlying OS, it does not require a VM and can run on a bare-metal OS as well. Containers may well be the next VM!
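As a small illustration of packaging and running an isolated application, here is a sketch using the Docker SDK for Python. It assumes the docker package is installed and a Docker daemon is running on the host; the image name and resource limit are illustrative.

import docker

# A minimal sketch: run an application in its own isolated container on a shared host OS.
client = docker.from_env()

container = client.containers.run(
    "python:3-slim",                    # base image, shared across containers on this host
    command=["python", "-c", "print('hello from an isolated app')"],
    mem_limit="256m",                   # container-level memory allocation
    detach=True,
)
container.wait()                        # let the short-lived app finish
print(container.logs().decode())        # output from inside the container
container.remove()                      # clean up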

Let's look at some of the practical use cases:

1) PaaS delivery: Today, most PaaS providers use sandbox methodologies for application colocation on a single OS instance. Adopting container technology makes it much easier to abstract application environments, support multiple languages and databases, and improve manageability and security.

2) DevOps: Container-based PaaS gives app developers the flexibility to build and deploy application environments with much more ease. This reduces provisioning lead time and removes worries about OS and middleware management, allowing developers to focus on just their applications.

3) Scale-out and DR: While most hypervisor technologies allow moving apps around in VMs, we need a compatible hypervisor to run those VMs. Virtual containers, however, can run on any server running the same OS - on-premise, public cloud, private cloud or bare metal - allowing scale-out to any cloud that supports the same OS.

Finally, what benefits and changes can we anticipate?
The logical boundary of the application ecosystem will move from VMs to containers, while application mobility will extend beyond single-hypervisor zones:
• A container holding just the application binaries reduces the complexity in provisioning and managing applications
• The coexistence of isolated apps in the same physical or virtual servers will reduce the platform development and management cost
• The use of standard frameworks, instead of platform-specific (sandbox-driven) APIs, will improve user adoption
• A container-based application is entirely self-contained, making it inherently more secure

So what do you think? Are you considering container based Application isolation and delivery as well?

October 1, 2013

Transforming Enterprise IT through the cloud

Cloud technologies offer several benefits. With solutions that are quick to adopt, always accessible and available at extreme scale, and with the capex-to-opex shift that the CFO likes, clouds have come of age. In the coming years, the IT organization is going to increasingly see the cloud as a normal way to consume software, hardware and services. Clouds are transformational. Several companies are already enjoying a competitive business advantage over their competition by being early adopters of this transformational paradigm. Today, however, we hear similar questions from IT leaders on near-term and long-term adoption, which can be summarized as follows:

  • Where do I get started?
  • What are the quick wins?
  • What should we be doing in the next 3 years?

I have tried to address some of these common questions below. They can be thought of as basic transformation patterns and a strategy for cloud adoption within the enterprise.

  • Start with Software as a Service (SaaS) - explore ready-to-use solutions in key areas such as CRM, HR and IT service management. SaaS solutions such as Salesforce, Workday and ServiceNow have been around for some time and come with proven engagement models, so these are quick wins to consider. Pick the functionality, an app and a suitably sized footprint, so that the project scope can create a real organization-level impact. From a time-to-market perspective, SaaS is arguably the quickest way to attain the benefits of cloud.
  • Explore the private cloud - Take virtualization to the next level by identifying an impactful area within the organization to start a private cloud project. One example with an immediate benefit is enabling application development teams to automate requests for development and test environments. Getting this done through a service catalog front end connected to a private cloud back end can cut provisioning times by 50% or more, with the added benefit of freeing resources to focus on supporting production environments. There are different product architectures to consider, with pros and cons beyond the scope of this note - choose one that works for the organization and get going.
  • Explore data center and infrastructure consolidation - Many large Fortune 500 organizations have to deal with equipment sprawl. Sprawl can manifest itself anywhere from technology rooms and closets to co-located space to entire data centers, and it spans the full stack of IT equipment, from servers, network switches and storage devices to desktops. Private clouds can be used as a vehicle to consolidate, reduce footprint and increase the overall level of control and capacity of this infrastructure. Added benefits can include higher performance, lower energy costs and replacement of obsolete equipment.
  • Identify specific public cloud use cases - Depending on the business, some areas can benefit from adopting public clouds. For example, requiring a large amount of computing horsepower for a limited duration to do data analytics is common in the pharmaceutical, healthcare and financial industries. This is a challenging use case for traditional IT, as it is capital- and resource-inefficient. Public clouds are a natural answer to these types of workloads: the business unit pays for what it uses and is not limited by the equipment available in-house.
  • Create a multi-year roadmap for expansion - These initiatives are only the beginning. IT leaders need to create a three-year roadmap that plans how these initiatives can be expanded to their fullest potential within the organization. A strong project management practice and a proven project team will go a long way to ensuring success during execution. Create a financial cost-benefit analysis for each of these areas and ensure a positive Net Present Value (NPV) case for each one (a simple NPV sketch follows this list). Identify what partners bring to the table that is proven, yet unique to the cloud solution at hand. Go in with the base assumption that, in spite of the replacement of some legacy tools and solutions, these cloud initiatives will continue alongside current in-house IT and management practices.
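Here is the NPV sketch referred to above: a minimal cost-benefit check for one cloud initiative. The cash flows and discount rate are purely illustrative.

def npv(discount_rate, cash_flows):
    """Net Present Value of yearly cash flows; cash_flows[0] is the upfront
    investment (negative), later entries are the yearly net savings."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Example: $500k upfront migration cost, $220k net savings per year for 3 years, 10% discount rate.
cash_flows = [-500_000, 220_000, 220_000, 220_000]
print(f"NPV: {npv(0.10, cash_flows):,.0f}")   # about 47,000 - a positive case supports the initiative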

In summary, it is important to have a pragmatic view. Cloud is not a silver bullet that will solve all IT problems, nor will an organization automatically attain the promised benefits. No two organizations are alike, even those in the same industry, so understanding what to do and which steps to take first will put IT on course to being 'in the cloud'.

September 26, 2013

7 steps to a smarter IT Front End

 

We often praise one front end over another. The world of graphical user interfaces has evolved from PCs to Macs to smartphones. Yet quite often the IT department ignores the 'front end' that the modern user expects from IT. Most are fixated on the Service Desk as that all-empowering front end; even ITIL has prescriptive definitions. One can argue that this is not the case at all, especially from an end-user perspective.

 

We often hear complaints of IT being slow, ineffective or behind on support commitments. Though there may be some truth to this, much of it has to do with perceptions that have built up over time in users' minds. So what is that 'front end'? I would define it as a cohesive combination of resources, Service Desk response times, average speed of resolution, an automated service catalog and a comprehensive knowledge base.

 

So how does an organization build up that smart IT front end? Here are 7 steps to get going-

 

1)     Handle all actionable Service Requests through a single service catalog - Basically, 100% of Service Requests should flow centrally into one service catalog. Insist that a service does not exist if it is not on the Service Catalog! Obviously this requires a major change to sunset all kinds of tools and manual services, but consolidating on one clean interface is worth the time and effort.

2)     Support the Service Catalog through an automated back end - All actionable Service Requests should flow through an automated back end, working their way through approvals, procurement, provisioning and fulfillment (a minimal sketch of such a flow follows this list). Automating all of this is the holy grail, but make the move towards that goal and measure progress. Again, shoot for 100% of back-end processes and you will reach a high mark. Examples: new user accounts, requesting a development environment, licenses, adding application access, etc.

3)      Enable Problem to Incident (P2I) conversions - Resolving a problem is not the end of the story. Confirming that Level 1 teams understand what to do if the incident rears up again is a must. Consistently enforcing this policy of P2I connection and conversion will work wonders over time, resulting in more incidents resolved faster and more efficiently at Level 1 itself.

4)      100% self-service for user-induced incidents - Set up a self-service gateway to manage all such common incidents. This will dramatically improve speed of response. Examples include account lockouts, password changes and resets, information/document uploads and profile changes.

5)     Set up and maintain a corporate wiki - Information discovery and ease of information consumption should play a key role in the roadmap of the IT front end. Too often we see a lack of how-to information, problems finding the right document, and obsolescence. An annual check on all key documents, along with users' ability to edit and update them, will foster a sense of shared ownership within the user community. Enable access through all devices, especially smartphones. Experts will bubble up to the top and become allies of IT.

6)     100% of software installs by end users - Through the self-service capability and service catalog automation, enable users to receive a temporary download link for software they are allowed to install. In the long run, diminish the need for this install capability through adoption of Software as a Service and/or internal web applications, e.g. Office 365, Sharepoint Online and Lync.

7)     Periodic user engagement - IT often gets flak for not being there when it matters, or simply not being around. Enabling user feedback, technology awareness sessions and periodic formal internal training can go a long way in bringing IT closer to the business community.
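Here is the sketch referred to in step 2: a minimal model of an automated back end that walks each catalog request through approval, provisioning and fulfillment. The stages and fields are illustrative and not tied to any particular ITSM product.

from dataclasses import dataclass, field

STAGES = ["submitted", "approved", "provisioned", "fulfilled"]

@dataclass
class ServiceRequest:
    requester: str
    item: str                               # e.g. "development environment"
    stage: str = "submitted"
    history: list = field(default_factory=list)

    def advance(self):
        """Move the request to the next stage and record the transition."""
        position = STAGES.index(self.stage)
        if position < len(STAGES) - 1:
            self.stage = STAGES[position + 1]
            self.history.append(self.stage)

req = ServiceRequest(requester="jdoe", item="development environment")
while req.stage != "fulfilled":
    req.advance()                           # approval -> provisioning -> fulfillment
print(req.stage, req.history)               # fulfilled ['approved', 'provisioned', 'fulfilled']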

 

The organization of tomorrow requires a smart technology front end. Transforming from now to then requires an investment of time, effort and resources. These steps can get you started, and there may be more. If you have a take on additional steps, do write in.

September 24, 2013

Service Redundancy - 5 Lessons from the Nirvanix blowout

 

Earlier last week, Nirvanix, one of the most prominent cloud storage providers, gave the tech community a shocker when it announced that it would be going out of business. Customers and partners were asked to stop replicating data to its storage infrastructure immediately and to move their data out in about two weeks. I have been fascinated by this story; here are the facts.

-          Nirvanix pulled in about $70 Mn in venture funding during its lifetime, starting in September 2007

-          Its key backers kept up five rounds of funding right up to May 2012

-          Rated well by industry analysts and media

-          The cloud storage service was sold through several enterprise software resellers and service providers.

-          The service pulled in several key customers and was challenging the likes of Amazon's AWS S3 storage services.

 

What is evident is that the company was burning cash faster than it was generating revenue, and it all came to an abrupt end when it could not find a buyer or execute an exit strategy. One would have thought that six years of existence would have generated enough value (IP or otherwise) for a potential buyer, but no further detail seems available. Nirvanix is by no means the first to go belly up - EMC pulled the plug on its Atmos Online service in 2010, though that was perceived as far smaller in impact.

 

From the enterprise standpoint, if an organization had been using their services, these two weeks are a scramble. Moving data out of the cloud is a tall order. The second issue is finding a new home for the data. And what if the data was being used in real time as the back end of some application? More trouble and pain. So here is my take on the top five areas clients and providers can address for service redundancy (leaning more on cloud storage services):

 

1)     Architect multiple paths into the cloud - Have a redundant storage path into the cloud, i.e. host data within two clouds at once. This also depends on the app that is using the service, geography and users, a primary/secondary configuration, communication links and costs. For example, a client could have an architecture where the primary cloud storage was on Nirvanix and the secondary on AWS. Throw in the established value of traditional in-house options and established disaster recovery providers.

2)     Be prepared to move data at short notice - Based on the bandwidth available from source to target and the size of the data in question, we can easily compute how much time it would take to move data out of a cloud (a back-of-the-envelope sketch follows this list). Add a factor of 50% efficiency (which could happen with everyone trying to move data out at once) and frequent testing, and we have a realistic estimate of how long data migration will take. Given that the two weeks from Nirvanix is a new benchmark, clients may choose to use this as a measure of how much data to store in one cloud - i.e. if it takes more than two weeks to move important data, consider paying for better links or bringing a second provider into the mix.

3)     Consume through a regular service provider - Utilize the service through a regular managed services provider. This has two benefits for clients: a) the ability to enter into an enterprise-type contract with the provider and ensure service levels, and b) the ability to gain financially in case of a breach of service levels. Of course, service providers in turn have to be vigilant and ensure that they have proper flow-down provisions in their contracts with cloud providers, and that there is an alternative to the service in case of issues.

4)     Establish a periodic service review process - Too often we see the case of buy once and then forget. A regular review of the service will often provide early warning of issues to come. For example, in the case of a cloud storage provider, tracking how much storage they are adding (new storage growth %) and the new clients they are signing on will give a good indication of any issues on the capacity front. This may in turn point to a lack of available investment for growth.

5)     Understand provider financials - Cloud providers today focus on ease of use and new gee-whiz features; this often masks how they are actually doing financially. And the market continues to value them on revenue growth. It does not matter whether the company is public or private: as a client, and under confidentiality, there exists a right to understand the financial performance and product roadmap, even if only at a high level.
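Here is the back-of-the-envelope sketch referred to in point 2: estimating how many days it would take to move a given amount of data out of a cloud over a given link, with a 50% efficiency factor. The numbers are illustrative.

def migration_days(data_tb, link_mbps, efficiency=0.5):
    """Days needed to move data_tb terabytes over a link_mbps link,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    data_bits = data_tb * 8 * 10**12            # terabytes -> bits
    usable_bps = link_mbps * 10**6 * efficiency
    return data_bits / usable_bps / 86_400      # seconds -> days

# 200 TB over a 1 Gbps link at 50% efficiency:
print(f"{migration_days(200, 1000):.1f} days")  # about 37 days - well beyond a 2-week window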

 

Cloud solutions offer multiple benefits, but at the end of the day they still serve another business as a service, and there are issues with all services, cloud-based or traditional. Was Nirvanix an outlier or a warning of things to come? We don't know yet, but as service providers and clients, this may be the 'heads-up' we need to stay focused on the essentials for effective transformation of infrastructure and storage services.

September 12, 2013

The palest ink is better than the best memory

(Published on behalf of Vishal Narsa)

Enterprises operating multiple testing environments often fail to realize the need for a comprehensive knowledge management solution that can cater to the information needs of the users and support groups for these test environments, or non-production environments as they are called. As the effort and money spent on building these complex test environments are substantial, it makes complete business sense to quantify the value extracted from these investments.

One of the principal criteria for quantification is the availability and uptime of these non-production environments for the testing teams. Any downtime on non-production environments delays releases and leaves testing resources underutilized - a serious implication for the business itself.

Consider the case of the testing environment for a systems integration project in which multiple teams are jointly responsible for building and testing the entire business solution. The stakeholders range from application to infrastructure teams, from environment architects to development teams, from third-party vendors to testing teams. It is not surprising that the environment information exists in silos, given the diverse set of teams involved.

Any downtime on an integrated testing environment means that a component, or group of components, has malfunctioned, and this hampers testing of the complete business functionality. Restoring the environment needs a coordinated effort by the different component teams involved.
At this juncture, the lack of a comprehensive knowledge base of the environment landscape emerges as one of the major challenges. The component teams often depend heavily on each other's personnel for their respective component configuration information and other technical details about the environment. This information is absolutely critical to the component teams' ability to restore the environment.

The lack of a consolidated environment knowledge base leads to delays in restoring the environment, which in turn has a far-reaching impact on development, testing and release schedules, resulting in unplanned costs to the business.

Given the criticality of knowledge management and the positive influence it can have on achieving overall business objectives, it is important for organizations to deploy a standardized and well-governed methodology to capture and reuse environment-related knowledge assets. In organizations with mature ITIL practices in place, the environment knowledge base can be incorporated as a subset of the enterprise Service Knowledge Management System (SKMS). Some of the palpable benefits of a mature knowledge management system include:
• Accelerated application delivery, contributing to reduced time-to-market
• Reduced environment provisioning time due to readily available baseline configuration information
• Early resolution of environment incidents leveraging past knowledge
• Improved staff productivity due to enhanced knowledge sharing

The title of this post says it all - Knowledge written down and captured in a proper format will always be more accurate and valuable than referring to a collective organization memory.

September 3, 2013

Testing the test environment - Infrastructure testing for non-production environments

(Published on behalf of Divya Teja Bodhanapati)

In our previous post, we looked at the perils of ignoring the non-production environments and focusing only on production environments.
In order to achieve a reduction in the total cost of operations, an optimized, robust, reliable and "fit-for-purpose" non-production environment is essential.

The question is when can an environment be called "fit-for-purpose" or "reliable"?

The answer is "when all the components (infrastructure, middleware and the applications) involved in the environment perform as per the user defined requirements."

When we look at the largest component, i.e. infrastructure, three elements stand out: storage, network and compute. Testing applications is a well-established function, but how do we ensure the underlying infrastructure is also working as required?

Not many organizations have given serious consideration to testing their infrastructure before putting it to use. Over the past years it has been observed that outages and downtime in environments are primarily due to infrastructure issues.
The Quorum Disaster Recovery Report, Q1 2013 says that "55% of the failures are at hardware level (network and storage components)". This is not surprising.

In July 2012, the micro-blogging site Twitter had to post this message on its blog after a short downtime, blaming an "infrastructure double whammy". This outage affected millions of users across the globe, a day before the Olympics were to begin in London.

But what is the impact of downtime in non-production environments?

Any system downtime ends up shortening the development and testing cycle, as these processes have to wait till the environment is up and running again. Due to insufficient time, the development and testing stages may not be conducted properly, leading to a vicious cycle of possible defects in the final product. This would ultimately result in outages in the production environment as well - and the consequences of such an outage can be even more devastating to the business, as seen in the Twitter case above.


Infrastructure testing essentially involves testing, verifying and validating that all components of the underlying infrastructure are operating as per the stipulated configurations. It also tests whether the environment is stable under heavy loads and different combinations of integrations and configurations.
Infrastructure testing includes all the stages of software testing - unit testing, system and integration testing, user acceptance testing and performance testing - applied to the infrastructure layer. By bringing the rigor of testing into the infrastructure space, infrastructure testing eliminates inconsistencies in infrastructure configurations, which are the main cause of outages and downtime in non-production environments.
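As a small illustration, here is a minimal sketch of an infrastructure check that verifies key components of a non-production environment are reachable and respond within an expected time. The hosts, ports and timeout are illustrative; a real suite would also validate the stipulated configurations themselves.

import socket, time

# Components of the test environment to verify; hosts and ports are illustrative.
CHECKS = [
    ("app-server.test.example.com", 8080),   # application tier
    ("db-server.test.example.com", 5432),    # database tier
    ("mq-server.test.example.com", 5672),    # messaging tier
]

def port_is_open(host, port, timeout=3.0):
    """Return (reachable, elapsed_seconds) for a TCP connection attempt."""
    start = time.time()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.time() - start
    except OSError:
        return False, time.time() - start

for host, port in CHECKS:
    ok, elapsed = port_is_open(host, port)
    print(f"{'OK' if ok else 'FAIL':4} {host}:{port} ({elapsed:.2f}s)")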

A thorough testing process is crucial to the success of any rollout - by testing the underlying infrastructure of the non-production environment, the probability of defects due to incomplete testing can be greatly reduced.

 

(Divya Teja is an Associate Consultant at Infosys with close to 4 years of experience in the IT industry. Her focus areas include Non-Production Environment Management and Infrastructure Automation.)

 

June 29, 2013

Outcome Sourcing - Buying results rather than services

In his interview with DNA India magazine, Chandrashekhar Kakal, SVP and Head of the Business IT Services unit at Infosys, talks about the rise of outcome-based sourcing as opposed to the traditional mode of 'outsourcing'. Traditional IT outsourcing has focused on cutting input costs, with the buyer seeking to offload repetitive activities to a third party. However, organizations today are already looking at the next big thing.

As enterprises continue to spend the majority of their budgets on running and maintaining IT infrastructure, they need to look at new ways to innovate and transform the business in ways that lead to growth.

Outcome sourcing allows organizations to buy results that matter to business growth rather than just focus on reducing input costs. So they look for "strategic partners" who can be entrusted with the end-to-end management of business applications and processes, as opposed to suppliers or vendors. As the emphasis moves towards delivering results that matter to the client, outcome sourcing can foster innovation and a closer alignment of a vendor's incentives with business requirements.

Click here to read the complete article.

March 17, 2013

Software defined everything !!!

The Software Defined Datacenter (SDDC) chatter is seemingly everywhere. Think about the possibility of virtualizing network and storage in a manner similar to how we have virtualized CPU and memory (compute). Think of virtually managed computing, networking, storage and security delivered as a service. The provisioning and operation of the infrastructure could be entirely automated by software, leading to the Software Defined Datacenter.

Most of today's networking and storage gear mixes data and control functions, making it hard to add or adjust network and storage infrastructure when adding virtual machines (VMs) to enterprise data centers. By adding a well-defined programming interface between the two, networking and storage hardware will become progressively more commoditized, just as servers (compute and memory) have become commoditized.

This will occur because all the value associated with configuration and management will get sucked out of the hardware and into other layers. In the new world, manageability, control and optimization would sit at a layer above the actual hardware - leading to software-defined, software-managed and software-controlled enterprise compute, storage and network!

We can expect independent software appliances or current hypervisors (VMware, Hyper-V) to come with the ability to abstract and centralize the management functions that deliver provisioning, configuration management, automation, performance optimization, capacity utilization and reporting. It is also certain that the SDDC will require a completely new management stack - one suited to the highly dynamic nature of an environment in which every key resource (CPU, memory, networking and storage) is abstracted from its underlying hardware.

Among the many things we might need to see in the new management stack are data protection, security, configuration management, performance management and automation. The key feature I would like to see is datacenter analytics: every layer of management software for the SDDC will be generating data at a rate that will demand a big data store just to keep up with and index it, so that bits from the various sources can in fact be compared to each other at the right time - Big Data for data center management!

SDDC offers challenges and opportunities for product vendors, IT organizations, service providers and integrators alike. The journey has already started. Have you reconsidered your data center strategy yet? Let me know your thoughts.

February 28, 2013

VMware vs. Microsoft - is Microsoft leading the race?

(Posted on behalf of Akshay Sharma)

The race to lead the virtualization and private cloud space has been a close contest between two industry stalwarts - Microsoft and VMware.

But with the launch of Windows Server 2012, Microsoft seems to have stolen a lead over VMware - or has it?

Microsoft launched Windows Server 2012 last year and is positioning it as a cloud operating system. Windows Server 2012, with Hyper-V 3.0 and System Center 2012, is an impressive package that seems to have given Microsoft a significant lead in its cloud strategy.

As organizations seek to deploy private clouds, one of the key requirements is to ensure that sufficient storage and computing power is available in the form of servers and storage systems. To be cost-effective, they require large virtual servers and virtual storage that can be easily added or removed with minimal disruption.

In response to these challenges, Microsoft's Windows Server 2012 with Hyper-V 3.0 brings a wide collection of features and capabilities. It has built-in storage virtualization which lets the user configure storage into a single elastic and efficient storage pool. It also provides high availability for applications by setting up failover systems within the datacenter or at a remote location. With the new Hyper-V, it is possible to virtualize extensive workloads, with support for up to 64 virtual processors per virtual machine (VM) and up to 4TB of memory per host.

However, VMware, the original kingpin of the virtualization space, has the vCloud Suite 5.1, complete with all the requisite cloud characteristics. With their new offering, they are working towards pooling industry-standard hardware and then running each layer of the datacenter as software-defined services. vSphere 5.1, the VMware core hypervisor, is packed with new features like live migration with shared storage and supports virtual machines double the size of existing ones.

Talk about a knockout round!

Continue reading "VMware vs. Microsoft - is Microsoft leading the race?" »

February 20, 2013

The Silver bullet for BYOD

Mobile workers and customers create new challenges and opportunities in the network space. BYOD is forcing a rapid WLAN evolution to handle the onslaught of new devices and applications on the access network, and BYOD adoption seems to be accelerating by the day. Letting the iPad and other tablets onto your wireless local area network (WLAN) can create bandwidth competition that slows everyone down.

If the WLAN can't natively differentiate between vetted and potentially infected devices, the network is at risk. Wireless access points (APs) can be saturated with rogue devices, and DHCP servers can run out of addresses if IT does not have good visibility into how many devices are connecting to the network. The BYOA trend demands a wireless LAN prepared to handle not just new devices, but the slew of new applications being brought into the workplace.

Organizations are trying to address this by redesigning and upgrading their WLAN topology, looking at solutions to improve bandwidth and coverage, and using WLAN analyzers, spectrum analyzers and wireless intrusion prevention systems (WIPS) to identify and measure the network impact.

So the question is: do we have the silver bullet yet? No - IT shops are instead patching together various tools ranging from mobile device management (MDM) to network access control (NAC) and traditional management systems. How long will it take to find the perfect solution? How are you trying to address this at your organization?

 

January 8, 2013

Data center transformation - The next wave?

According to this report from Gartner, organizations are increasingly looking for ways to transform their data centers. In the report titled "Competitive Landscape: Data Center Transformation Services", Gartner reveals that the number of searches on its website for the topic has increased by about 358%.

The report also suggests that organizations are looking beyond a simple hardware refresh to re-engineering operating models and processes to increase functionality while optimizing costs. In this sense, there is greater demand for consulting-led solutions that organizations can tailor to their requirements.

At Infosys, our value proposition involves helping organizations transform their data centers with a consulting-led approach. In addition, our intellectual property and solution accelerators help us provide a great framework for enterprises to gain the best return from their IT Infrastructure.

Infosys is one of several vendors profiled in the report. The report can be accessed here.

December 21, 2012

Onward to 2013- IT Infrastructure's home is more important than ever

A couple of weeks ago I attended the Gartner Data Center Conference 2012. It is by far one of the largest conferences focused on infrastructure services and the data center market; it had about 2,500 visitors and was very well organized on all fronts. Through that time, and after discussions with clients and partners, some trends are becoming more evident than ever as 2012 comes to a close. I won't try to expand on each of these observations in this short blog; suffice it to say that numerous examples around these areas exist today and are growing rapidly. Call it a crystal ball or some food for thought for year-end reading, so here goes:

·         The emergence of the software-defined and software-managed modular data center

·         Enterprises' growing struggles to realize public cloud benefits within an enterprise construct

·         The expanding PaaS-ification of public clouds - note the announcement of Amazon Redshift earlier this month to challenge the likes of IBM in the data warehouse market

·         The infrastructure automation market looks quite diverse and just getting started - no single pure-play company is likely to wield significant influence

·         Data centers are growing and growing - more space is being put out for DCs in spite of virtualized and converged infrastructure

·         Data center operators' greater reliance on the power grid and self-generation measures is inviting more scrutiny from power regulators

·         Converged infrastructure is enabling faster and more frequent application rollouts - expect more along the lines of DevOps. It is also fueling faster private cloud adoption

·         Converged and commoditized infrastructure is starting to find its way into the enterprise

·         Fully operational and scalable hybrid clouds are still more hype than reality

·         Converged cloud services are SaaS-ifying and commoditizing CRM, HRM, Financials and related business functions - e.g. Oracle Fusion Cloud

·         Lots of initiatives are planned within enterprises on Big Data, along with the emergence of Big Data-only startups, e.g. Hortonworks

·         Big Data is leading to the 'Internet of things' - e.g. sensors and cameras recording vehicle license plates at toll booths - and to questions on what to do with, and how to store, this data tsunami

·         Talent within IT organizations is getting redefined - more cross-skilled, senior resources within infrastructure dealing with cross-skilled developers, and forecasts of a talent crunch to meet demand

·         The increasing impact of flash players in the storage market - Fusion-io, Violin Memory, etc. Major changes in the storage back end are expected to lead to faster apps in general, and perhaps faster business cycles

So one thing is really clear: businesses continue to see IT infrastructure services and the data center as the center of gravity for all technology operations, whether for a two-person startup or a global corporation. As a result, things are looking bright for technology professionals in 2013 and beyond.

 

November 29, 2012

CIO Mandate: Optimize

Posted on behalf of Shalini Chandrasekharan

In his second post on CIO mandates, SVP and head of Business IT Services, Chandra Shekhar Kakal continues to emphasize the imperatives for CIOs.

Optimizing current IT operations is an ongoing challenge for CIOs. With IT operations consuming 60-70% of the average IT spend, it is no surprise that focus on IT infrastructure, and on ways to gain additional leverage from investments already made, is gaining momentum.

Continue reading "CIO Mandate: Optimize" »

November 27, 2012

Transformation at the heart of IT

Posted on behalf of Shalini Chandrasekharan

In a fragile global economy, as enterprises find their value chains increasingly pressured, CIOs - alongside business - are taking on the transformation mandate and helping strengthen the value chain.

Continue reading "Transformation at the heart of IT" »

November 15, 2012

Infrastructure and Big Data analytics

Posted on behalf of Debashish Mohanty, MFG

The internet has spawned an explosion in data growth in the form of data sets, called Big Data, that are so large they are difficult to store, manage and analyze using traditional database and storage architectures. Not only is this new data heavily unstructured, it is voluminous, arrives as rapid streams and is difficult to harness.

Continue reading "Infrastructure and Big Data analytics" »

October 25, 2012

Maximize Benefits of Datacenter Initiatives

In recent times, many organizations have undertaken large transformation programs in areas such as data center consolidation and hardware virtualization, for reasons ranging from legacy modernization and the journey to cloud to reducing costs through automation and standardization. Many of these transformation programs fall short of the desired goals.

 

Continue reading "Maximize Benefits of Datacenter Initiatives" »

October 1, 2012

IT Governance - a lot more than what meets the (I) eye

Imagine the best traffic system - wide roads, well-lit signals, separate lanes for each type of vehicular movement, pedestrian and bicycle paths, and so on. Wouldn't it seem to be the best place to drive? And doesn't it seem like one of the best traffic management systems too? Well, it is true.
 
But come to think of it, even if you have the best traffic management system, how would you ensure that folks keep to their lanes or that they do not jump signals? There would be CCTVs installed to monitor the functioning of this traffic management system. 

That is Governance.

 

Not only does this show the difference between Management and Governance, it also shows that even the best management practice needs governance in place. For more about Governance and Management, read on...
 

Continue reading "IT Governance - a lot more than what meets the (I) eye" »

September 25, 2012

Can you find that transformation pattern?


We live in a day and age of infrastructure transformation. All kinds of IT infrastructure - servers, networks, databases and management applications - are in the midst of a giant transformation, primarily triggered by cloud technologies.

Continue reading "Can you find that transformation pattern?" »

August 28, 2012

Knowledge Management: Through a Consultant's Prism

Knowledge, as they say, leads up to wisdom. This sounds like an obvious transition at a spiritual level. For an organisation, however, 'knowledge' - if stored, managed, controlled and governed effectively - itself could prove more important to have than wisdom. Simply put, my view is that knowledge for today's organisations is a 'must have', and wisdom is a 'nice to have' entity...

Continue reading "Knowledge Management: Through a Consultant's Prism" »

May 15, 2012

Pseudo-wires over MPLS network - Dawn of a new era?

Have you ever thought about why Pseudo-wire configurations over an MPLS cloud are getting popular?

Continue reading "Pseudo-wires over MPLS network - Dawn of a new era?" »