
High Performance Computing: An Overview

High Performance Computing (HPC) is today one of the fastest-growing fields in the IT industry. A recent IDC survey asked companies across industry verticals such as aerospace, energy, life sciences and financial services how they would be impacted without access to HPC. The results are telling: 47% of the companies said they could not exist as a business without HPC, and another 34% said they would not be able to compete and would face cost and time-to-market problems in its absence. HPC is clearly a key technology for companies that want to innovate and compete.

What is HPC? HPC brings three elements together: computers, software, and the right expertise to utilize them. HPC technologies are used to solve problems that have traditionally been known to be difficult, such as the complex simulations encountered in financial systems for portfolio risk assessment, in life sciences for gene sequence matching, or in seismic imaging of the earth's sub-surface by an oil services company. These applications are very time-consuming and require enormous computational power, so the goal is to accelerate their execution by making the best use of the hardware at hand.

This is where modern multi-core and many-core processors, such as those from Intel, AMD and Nvidia, come to the rescue by accelerating compute- and data-intensive applications. With 8-16 cores on an Intel multi-core CPU and somewhere around 400-500 cores on an Nvidia GPU chip, enormous compute power is available, with processing rates on the order of GFLOPS to TFLOPS. To tap that power, applications must be designed to exploit the hardware. However, most mainstream and scientific applications were developed using sequential algorithms and cannot effectively utilize the computation power at their disposal. The only way to make the best use of this processing power is to port or rewrite these applications using parallel algorithms. By multithreading an application across the cores of a processor, thereby dividing the work, significant gains in throughput and performance can be achieved. This is the essence of HPC: break a problem into pieces and work on those pieces at the same time.
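As a concrete illustration of this "divide the work" idea, here is a minimal C++11 sketch (my own illustrative example, not taken from any specific HPC product) that splits a large summation into per-core chunks, runs each chunk on its own thread, and combines the partial results:

    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<double> data(10000000, 1.0);   // a large dataset of ones

        // One worker per hardware core (fall back to 4 if undetectable).
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 4;

        std::vector<double> partial(cores, 0.0);
        std::vector<std::thread> workers;
        std::size_t chunk = data.size() / cores;

        for (unsigned c = 0; c < cores; ++c) {
            std::size_t begin = c * chunk;
            std::size_t end = (c + 1 == cores) ? data.size() : begin + chunk;
            // Each thread sums its own slice into its own slot: no locks needed.
            workers.emplace_back([&partial, &data, c, begin, end] {
                partial[c] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0.0);
            });
        }
        for (std::thread& w : workers) w.join();

        double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::cout << "sum = " << total << "\n";    // expect 1e+07
    }

On a quad-core machine the chunks run concurrently on all four cores; the frameworks discussed below automate exactly this kind of decomposition and scheduling.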

With processors of such large computing capacity available, we need the right set of technologies to harness them effectively. This is where the 'software' element of HPC that I mentioned earlier comes to the fore. There is a vast set of HPC technologies to choose from. They can be categorized in multiple ways, but based on the architectural differences of the underlying hardware, two categories stand out: multicore CPU technologies and many-core GPU technologies.
1. Multicore Technologies: This category includes the software tools and libraries that help utilize multicore CPU infrastructure efficiently. It suits compute-intensive workloads where the computation can be divided into individual tasks, each performing a specific function. Some of the multicore technologies are Microsoft's .NET 4 parallel framework, Windows HPC Server 2008 and Intel's Parallel Studio. Windows HPC Server is a cluster-based solution that divides work into many chunks and distributes the workload across all the compute machines in the cluster; the granularity of the processing can go down to an individual core of a processor.
2. Many-core Technologies: Under this category, the most prominent and revolutionary technology is "GPU computing", which is widely used by the scientific computing community and by many industry verticals. With hundreds of processing cores on a single chip, GPUs have vast potential for parallelism. GPU computing means using GPUs for general-purpose computations instead of the graphics/visualization work they have traditionally been known for. This class of technologies suits data-intensive applications where a computation has to be performed on large datasets and each data element can be processed in parallel (a minimal sketch follows this list). GPUs are also more energy efficient than CPUs of similar compute capacity, which makes them all the more attractive for HPC given the large-scale data processing involved. Some of the important GPU technologies are Nvidia's CUDA, Microsoft's C++ AMP and OpenCL.
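To make the data-parallel model concrete, here is a minimal CUDA sketch (an illustrative example of the general GPU computing pattern, not code from any of the toolkits named above) that launches one GPU thread per element to scale a million-element array in place:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread scales exactly one array element.
    __global__ void scale(float* data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;   // guard against the ragged last block
    }

    int main() {
        const int n = 1 << 20;                 // ~1 million elements
        const size_t bytes = n * sizeof(float);

        // Host data: all ones, so the result is easy to check.
        float* h = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        // Copy to the GPU, run the kernel, copy the result back.
        float* d;
        cudaMalloc(&d, bytes);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

        const int threads = 256;                         // threads per block
        const int blocks = (n + threads - 1) / threads;  // blocks to cover n
        scale<<<blocks, threads>>>(d, 2.0f, n);

        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
        printf("h[0] = %.1f\n", h[0]);                   // expect 2.0

        cudaFree(d);
        free(h);
        return 0;
    }

Every block of 256 threads executes the same kernel over a different slice of the data, which is precisely the "same operation on many data elements" style of parallelism described above; CUDA, C++ AMP and OpenCL each expose a variation of this model.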

HPC today is an indispensable area for companies seeking to gain competitive advantage, innovate and build products that were once considered solely the province of big science. The wide variety of tools available in the market is helping democratize HPC usage.
