This is the first of a series of articles to introduce you to the power of CUDA -- through working code -- and to the thought process that will help you map applications onto multi-threaded hardware (such as GPUs) to get big performance increases. Of course, not all problems can be mapped efficiently onto multi-threaded hardware, so part of my thought process will be to distinguish what will and what won't work, plus provide a common-sense idea of what might work "well enough."

"CUDA programming" and "GPGPU programming" are not the same (although CUDA runs on GPUs). CUDA lets you work with familiar programming concepts while developing software that can run on a GPU. It also avoids the performance overhead of graphics-layer APIs by compiling your software directly to the hardware (GPU assembly language, for instance), thereby providing great performance.
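To make "familiar programming concepts" concrete, here is a minimal sketch of what CUDA code looks like: a standard C program with one extra function qualifier and a launch syntax. The kernel name `vecAdd` and the sizes used are illustrative, not from any particular application; the sketch assumes a CUDA-capable GPU and the CUDA toolkit.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// __global__ marks a function that runs on the GPU.
// Each GPU thread computes one element of the result.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n)                                      // guard against extra threads
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified memory keeps the example short; explicit
    // cudaMalloc/cudaMemcpy transfers are the other common pattern.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Launch syntax: <<<blocks, threads-per-block>>>.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[10] = %f\n", c[10]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Apart from the `__global__` qualifier and the `<<<...>>>` launch, everything here is ordinary C -- which is exactly the point: no graphics API in sight.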