Parallel Computing: Its Evolution and What You Should Be Doing About It
September 16, 2012 Editor
If you are a “mobile app” programmer, holding a smartphone in your hand and looking deeply into its lovely interface, there is something you need to know about its past and how your relationship will blossom.
“I think OpenCL in the mobile is going to be fundamental to bring parallel computation to mobile devices… and into the web through WebCL,” says Neil Trevett, a member of The Khronos Group and an engineer at NVidia.
A time had to come when the push for ever-higher PC central processing unit (CPU) clock speeds would stop. Every doubling of speed and shrinking of the processor meant that the heat produced increased several-fold. So, to cut the power consumption of CPUs, most of which was turning into heat, several slower CPUs, now called “cores”, were placed together on one silicon “chip” package. The same tasks once carried out by one fast CPU in its own package, fitting into a motherboard socket, are now carried out by a number of slower cores encased in a package much like the old one, fitting into a socket of its own. The computer chip maker Intel cleverly turned this shift into a marketing brand: Intel Core.
But since 2010, mainstream computing has no longer been orchestrated by Intel’s and AMD’s x86 processors alone. The graphics processing unit (GPU) became an important part of the computer, initially to present computer games at the best possible resolution and frame rate. At that time, information flowed one way, from the CPU to the GPU. But with rapid developments in chip fabrication and data transport technologies, GPUs can now process data and communicate back and forth with the CPU. This means the GPU can help the CPU carry out general-purpose computing instructions like “get me a cup of tea” or “get rid of the trash”, and not only the job GPUs were built for: “I want this drawing done NOW!”
So GPUs have become very useful for ultra-fast computations on the large data sets that are now the norm on the Internet. The wait for cheap supercomputers has quickly come to an end, as most of the top 500 “supercomputers” are migrating from being CPU-based to a mixed CPU-GPU architecture. This happened rapidly between 2009 and 2011.
Thus, the computer hardware market has been severely disrupted, and companies are actively defending market share on different levels, RAM, storage and networking being the main areas. Others are branching out into new territory, such as Intel and NVidia now producing processors for mobile devices.
The Internet has evolved, and new infrastructural developments like “clusters”, “virtualization” and “clouds” are not just buzzwords but demands on the underlying computer hardware that makes the Internet possible. This hardware has to cater for distributed operating systems, databases and processes.
To illustrate “distributed”, think of your computer’s operating system, which appears as one desktop on your screen. Now imagine that screen being connected to 50 CPU boxes humming quietly behind your desk, and imagine that those boxes are wired to each other as well. There is one file on the desktop. That file might have been saved on one hard disk in one box, or saved as 50 pieces spread across the hard disks of all the boxes. As for the geeks, they get excited explaining the difference between a “virtual computer” and a “cloud”. Walk away.
Consumption of information, on the Internet and off, is moving from the PC to new products like smartphones and tablets. These can now run almost any software, and users now view mobile devices as extensions of themselves, expecting to be always connected to the Internet.
When the iPhone started to gain popularity, the cohesion of its software and its user-friendliness were believed to be the reasons for its success. What it actually started was a competition based mostly on hardware (such as the multi-touch screen and a fast “mobile” processor), in which mobile devices went through their evolution at least twice as fast as PCs did, driven by rising customer demands. Older mobile phone makers who cannot keep up are dying off or forging new partnerships to remain relevant. And this is happening as most tablets and smartphones get multi-core processors built into them.
A New Computing Paradigm
In the fading single-core CPU era, “serial computing” was the norm. In this type of computing, only one instruction executes at a time; when that instruction finishes, the next one executes, and so on. Now, with the introduction of multi-core CPUs and GPUs, the way software is programmed is beginning to change. The new concept is Parallel Computing. Here, one instruction can start off a number of other instructions that are sent to different cores, producing a number of results. Alternatively, an instruction can run as a single instruction on a single core, or be divided into small “sub-instructions” that are sent to different cores and later collected back into one result.
So, as an “mVitu” programmer, strutting and beating your chest over the latest mKitu, if you don’t keep up and maintain your “mobile edge” by moving into this new era, you and I will be overtaken, and this time there will be no “leap” to catch up. The relevant programming skills will be “parallel programming” and “massively parallel programming”.
The new standard for this is OpenCL, the Open Computing Language.
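For a taste of what OpenCL code looks like, here is a minimal kernel sketch that adds two arrays element by element. It is illustrative only: a kernel like this cannot run on its own, as it needs host code and an OpenCL-capable device; the name `vector_add` is an assumption, not part of the standard. Each “work-item” handles exactly one index, so thousands of additions can run at once across the cores of a CPU or GPU.

```c
/* Illustrative OpenCL C kernel: each work-item adds one pair of elements. */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *result) {
    int i = get_global_id(0);   /* this work-item's unique index */
    result[i] = a[i] + b[i];
}
```

Notice there is no loop: the loop is implicit in how many work-items the host launches, which is the “massively parallel” style of thinking the article is urging you to learn.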