The Infosys Labs research blog tracks trends in technology with a focus on applied research in Information and Communication Technology (ICT)


Server Consolidation - Key Considerations

Next Generation Data Centers (NGDC) will combine shared, virtualized, automated, and fully managed adaptive infrastructures.

Virtualization, one of the key capabilities of such data centers, can be leveraged to reduce energy and hardware costs through server consolidation.

Server virtualization slices large, underutilized physical servers into smaller virtual ones. It decouples applications from the underlying hardware and hides the details of server resources from users.


One of the key problems faced when adopting virtualization is drawing an accurate performance model that enables application consolidation and optimal usage of server resources. A simple approach followed by several service providers is to evaluate the workload of each application, estimate the peak resource requirement of each workload, and then sum the peak requirements across the group of workloads. This approach leads to over-provisioning, as it does not take resource sharing into account (one of the key benefits of virtualization). The following key points should be taken care of when doing server consolidation:


- The usage profile of each application should be captured, typically over a period of 6-12 months.

- Applications with workload patterns that complement each other should be clubbed together.

- The overhead of the virtualization layer, which can make applications behave somewhat differently, should be taken into account. (This overhead varies with the virtualization technique used.)

- Performance modeling of each application should be done to identify application-level resource needs.

- A simple scaling factor for hardware may not be effective, for several reasons:

  - Applications can exhibit different levels of overhead depending on the rate and type of I/O being performed, so a simple multiplication factor won't work.

  - Only limited SPEC benchmarks are available for virtualized environments.
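The contrast between the naive peak-summing approach and sizing for complementary workloads can be sketched in a few lines of code. The workload figures and the 10% virtualization overhead factor below are hypothetical, purely for illustration:

```python
# Illustrative sketch (hypothetical data): summing per-application peaks
# over-provisions compared with sizing for the peak of the combined workload.

def sum_of_peaks(workloads):
    """Naive sizing: add up each application's individual peak demand."""
    return sum(max(w) for w in workloads)

def peak_of_sum(workloads, overhead=1.10):
    """Consolidated sizing: peak of the combined time series,
    inflated by an assumed virtualization overhead factor (here 10%)."""
    combined = [sum(samples) for samples in zip(*workloads)]
    return max(combined) * overhead

# Hourly CPU demand (in cores) for two applications with complementary
# usage profiles: one peaks during the day, the other at night.
day_app   = [2, 8, 10, 9, 8, 3]
night_app = [9, 3, 2, 2, 3, 8]

naive = sum_of_peaks([day_app, night_app])        # 10 + 9 = 19 cores
consolidated = peak_of_sum([day_app, night_app])  # max(combined) * 1.10

print(naive, round(consolidated, 1))  # prints: 19 13.2
```

Because the two workloads peak at different times, sizing for the combined peak (plus an overhead allowance) requires far less capacity than summing the individual peaks, which is exactly the resource-sharing benefit the naive approach misses.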

