

Parallelism - Scalability and Amdahl's Law

From a performance engineering point of view, the scalability of a system is its ability to use additional resources judiciously and keep its performance parameters within acceptable limits. Load testing can be used to determine whether a system progressively uses the additional hardware available to it and holds its Non-Functional Requirement (NFR) metrics constant under increased user load. A performance test engineer can determine whether the software scales well by looking at how three parameters of the system behave: Resource Utilization, Throughput and Response Time.


Ideally, with an increase in user load, the system should be able to progressively increase Resource Utilization: a graph plotting Resource Utilization against User Load should have a positive slope. Likewise, for a scalable application, Throughput should also increase with user load. Response Time, on the other hand, should remain more or less constant, adhering to the NFR; its graph would ideally have a slope of zero, though in a real-world scenario a deviation of up to 15 percent is considered acceptable. Here is an excellent article on Scalability from MSDN.
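To make the check concrete, here is a minimal sketch in Python using entirely hypothetical load-test figures and the roughly 15 percent response-time tolerance mentioned above; the numbers and the tolerance threshold are illustrative assumptions, not output from a real test.

    # Hypothetical load-test results (illustrative numbers only):
    # (user load, CPU utilization %, throughput in req/s, response time in s)
    results = [
        (100, 20, 150, 1.00),
        (200, 38, 290, 1.04),
        (400, 72, 560, 1.10),
        (800, 95, 700, 1.35),  # utilization saturating, response time drifting
    ]

    baseline_rt = results[0][3]  # response time at the lowest load
    for load, util, tput, rt in results:
        deviation = (rt - baseline_rt) / baseline_rt
        verdict = "OK" if deviation <= 0.15 else "NFR breach"
        print(f"load={load:4d}  util={util:3d}%  tput={tput:4d}/s  "
              f"rt={rt:.2f}s  deviation={deviation:+.0%}  {verdict}")

In this made-up data set, utilization and throughput rise with load while the response-time deviation stays within tolerance until the last row, which is exactly the pattern the graphs described above would show for a system approaching its scalability limit.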

Today's software is radically distributed, and it is important that it be intrinsically scalable. How well software performs in parallel, i.e. how well it scales, is determined by the percentage of its code that can be parallelized. No software is fully scalable; there is always some code that cannot be parallelized. Amdahl's Law states that

Speedup = 1 / (s + p / N)

where N is the number of processors, s is the fraction of time spent (by a serial processor) on the serial parts of the program, and p is the fraction of time spent (by a serial processor) on the parts that can be done in parallel, with the total time normalized so that s + p = 1.

As you can see, increasing the number of processors (N) improves the speedup only through the term influenced by the parallel fraction 'p'; the serial fraction 's' caps the achievable speedup at 1/s no matter how large N grows. Systems designed to operate in parallel environments should therefore keep the code under 's' to a minimum.
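A short Python sketch shows the law in action; it assumes s and p are normalized fractions (s + p = 1), as in the statement above.

    def amdahl_speedup(s: float, n: int) -> float:
        """Speedup on n processors for a program whose serial fraction is s
        (the parallel fraction is p = 1 - s)."""
        p = 1.0 - s
        return 1.0 / (s + p / n)

    # For example, with only 5% serial code (s = 0.05):
    for n in (1, 2, 8, 64, 1024):
        print(f"N={n:5d}  speedup={amdahl_speedup(0.05, n):6.2f}")

Even with 95 percent of the code parallelizable, the speedup never exceeds 1/s = 20 however many processors are added, which is why minimizing the code under 's' matters so much.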
