The Infosys Labs research blog tracks trends in technology with a focus on applied research in Information and Communication Technology (ICT)


Is Parallel Computing Rocket Science or Esoteric? Part 3

Having said a lot about the hardware evolution and intricacies that have influenced Parallel Computing in my previous posts (Part 1 & Part 2), the question to ask now is: is Parallel Computing really rocket science? Is Parallel Computing esoteric? The answer may be both yes and no. Bill Gates' keynote at the Supercomputing 05 conference was titled "The Future of Computing in the Sciences"; the title seems apt, as Parallel Computing evolved mainly out of the computational requirements of complex, advanced scientific problems that demanded high performance. Solving them involved huge clusters and supercomputers, which is why this class of computing is rightly named High Performance Computing (HPC). Understandably, this class of applications was aimed at the toughest and most convoluted problems of diverse sciences such as astronomy, biology and mathematics. Owing to the complexity and specialization those subjects entail, HPC does seem esoteric here.

But thanks to today's hardware advances, we have servers approaching teraflops speeds, so the realization of a "supercomputer on your desktop" may not be far from reality. Desktops today ship with multicore processors, and languages now support porting functionality from legacy serial applications to parallel ones. These parallel languages are powerful yet simple and intelligible even to a novice programmer. So it would not diminish the power that Parallel Computing brings to say that parallel programming is becoming easier. The hard part lies in migrating the complex logic inherent in a legacy application while porting it from serial to parallel. Thus, owing to ever simpler programming paradigms, Parallel Computing is not rocket science after all, HPC-class problems aside.

The future looks to be an adventurous ride given present technology trends. We are in times of shaping new horizons and touching new frontiers. Let's not shun Parallel Computing on the assumption that it is rocket science and esoteric; let's embrace it with open arms, because there is always a middle path for us to choose.



I think the question also is: will the power of parallel languages be sufficient to create programs with parallelization in them? Will we end up with dummies who have written the code but, if it fails to work or doesn't give the right results, have no clue on how or what to fix?

As someone I know says, "a fool with a tool is still a fool".

Hi Atul,

As you have quoted, "a fool with a tool is still a fool" is a general truth. But what I have attempted to answer is more a contrast of the past with the present, and ultimately with the future one perceives for parallel computing.

Parallel programming languages are continuously evolving from what used to be niche knowledge, which required toying with the graphics pipeline in GPUs, to structured paradigms such as CUDA and OpenCL. Likewise, programming on multiprocessors has evolved from writing explicit threading code to using a simple parallelized 'for' loop with the Task Parallel Library in .NET 4.0, Intel Cilk, etc. The new Microsoft C++ AMP lets you code in a C-like fashion using just templates and lambda expressions.

I do agree one needs a good understanding to evaluate the applicability of Parallel Computing, which may also need some domain expertise. But what I am trying to bring to light is that parallel programming, i.e. Parallel Computing, is approaching the mainstream and is no longer a difficult thing to learn. The code written by a dummy wouldn't be the most optimized; however, given the simplified constructs, the dummy could convert simple serial code to parallel, making it relevant to a larger user base rather than something esoteric.

Parallel languages, I think, are still evolving towards a stage where they will abstract away most of the plumbing work needed to parallelize application code. Most of the mature ones are still low-level programming languages. But to answer your question: by no means will a dumb programmer do. Identifying the right portions to parallelize, designing or redesigning for a parallel architecture, and deciding what is best suited for control-plane versus data-plane activities are some of the things still left to a developer to analyse. Parallel language tools take over only after this initial analysis and design are completed.
