Hello Parallel Computing!
Now that I (hopefully) have your attention, let me ask a question: is parallel computing really as complex as it sounds? Unfortunately, the answer is yes. Designing and developing an error-free parallel program is hard work, both because thinking in parallel is not something most of us are trained for, and because the process takes a good amount of time.
In the world of HPC, parallel computing is a must in order to use computing resources like multi-core CPUs, GPUs and other accelerators. Program design varies from one computing resource to another, because each resource works best for a particular class of parallelism. At a high level, parallelism can be classified as task parallelism and data parallelism. To design a well-optimized parallel program, the first step is to identify whether the problem is task parallel, data parallel, or a mix of both. The next step is to pick the appropriate computing resource: a data parallel program is well suited to a GPU, while a CPU is best for task parallel programs. With the decision on the hardware made comes the next step of actually designing the parallel program, developing it using the appropriate programming model, and running it to check for correctness. That is a mammoth step and an important achievement. And then comes the star step: to quench the thirst for speed, the program must be fine-tuned, or optimized, to achieve the much sought-after speed-up.
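To make the task/data distinction concrete, here is a minimal sketch in Python using the standard-library `concurrent.futures` module. The function names (`square`, `total`, `largest`) are illustrative inventions of mine, not part of any library: data parallelism applies the *same* operation to many elements, while task parallelism runs *different*, independent operations concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Data parallelism: the SAME operation (squaring) is applied
# independently to every element of the input.
def square(x):
    return x * x

def data_parallel(values):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(square, values))

# Task parallelism: DIFFERENT, independent tasks run concurrently
# over the same input.
def total(values):
    return sum(values)

def largest(values):
    return max(values)

def task_parallel(values):
    with ThreadPoolExecutor() as pool:
        sum_future = pool.submit(total, values)
        max_future = pool.submit(largest, values)
        return sum_future.result(), max_future.result()

if __name__ == "__main__":
    data = [1, 2, 3, 4]
    print(data_parallel(data))   # [1, 4, 9, 16]
    print(task_parallel(data))   # (10, 4)
```

On real HPC hardware the data-parallel pattern would map naturally to a GPU kernel (one thread per element) and the task-parallel pattern to separate CPU cores; the thread pool here just illustrates the shape of each pattern.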
Things get really exciting on my side of the world, the HPC world. Stay tuned to read about ways to solve the mysteries of parallel computing.