We've discussed differences between parallel, computational systems and concurrent, event-handling systems. Areas of difference include the following (a short sketch after the list makes the first two concrete):
Determinism: desirable vs impossible
Sign of parallel safety: same results vs correct results
Parallelism bugs: easy to pinpoint vs impossible to define
Queues: implementation detail vs part of the interface
Preemption: nearly useless vs nearly essential
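The sketch below is minimal and illustrative, not taken from the post (it assumes Scala 2.12, where parallel collections ship with the standard library; on 2.13+ the separate scala-parallel-collections module is needed). A data-parallel fold yields the same result on every run, while a pair of concurrently logged events has no fixed order:

```scala
import java.util.concurrent.ConcurrentLinkedQueue

object ParallelVsConcurrent extends App {
  // Parallel, computational: splitting the fold across cores cannot
  // change the answer, so "same results as the serial run" is the
  // safety test, and the outcome is deterministic.
  val total = (1 to 1000).toVector.par.map(n => n * n).sum
  println(total) // 333833500 on every run, whatever the scheduling

  // Concurrent, event-handling: the arrival order of events differs
  // from run to run, so determinism is off the table; all we can ask
  // for is a *correct* result (both names logged, in some order).
  val events = new ConcurrentLinkedQueue[String]()
  val workers = Seq("ping", "pong").map { name =>
    new Thread(new Runnable { def run(): Unit = { events.add(name); () } })
  }
  workers.foreach(_.start())
  workers.foreach(_.join())
  println(events) // [ping, pong] or [pong, ping], depending on the run
}
```

The queue here stands in for the queues that are "part of the interface" in event-handling systems: correctness is defined over the set of logged events, not over their order.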
In an earlier post I mentioned that one goal of the new introductory curriculum at Carnegie Mellon is to teach parallelism as the general case of computing, rather than as an esoteric, specialized subject for advanced students. Many people are incredulous when I tell them this, because it immediately conjures up in their minds the myriad complexities…
With the advent of multi-core processors, concurrent programming is becoming indispensable. Scala's primary concurrency construct is actors. Actors are basically concurrent processes that communicate by exchanging messages. Actors can also be seen as a form of active objects where invoking a method corresponds to sending a message. The Scala Actors library provides both asynchronous and synchronous message sends (the latter are implemented by exchanging several asynchronous messages). Moreover, actors may communicate using futures, where requests are handled asynchronously but return a representation (the future) that allows one to await the reply. This tutorial is mainly designed as a walk-through of several complete example programs. Our first example consists of two actors that exchange a bunch of messages and then terminate. The first actor sends "ping" messages to the second actor, which in turn sends "pong" messages back (one "pong" message for each "ping" message received).
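The excerpt stops before the tutorial's listing, so the following is a rough sketch of how such a ping-pong pair might look with the classic scala.actors API (long since deprecated in favor of Akka actors); the message count and class names here are illustrative, not the tutorial's own:

```scala
import scala.actors.Actor
import scala.actors.Actor._

case object Ping
case object Pong
case object Stop

// Replies to every Ping with a Pong until told to Stop.
class PongActor extends Actor {
  def act() {
    while (true) {
      receive {
        case Ping => sender ! Pong
        case Stop => println("pong: stop"); exit()
      }
    }
  }
}

// Sends `count` Pings, one per Pong received, then stops both actors.
class PingActor(count: Int, pong: Actor) extends Actor {
  def act() {
    var left = count
    pong ! Ping
    while (true) {
      receive {
        case Pong =>
          left -= 1
          if (left > 0) pong ! Ping
          else { println("ping: stop"); pong ! Stop; exit() }
      }
    }
  }
}

object PingPong extends App {
  val pong = new PongActor
  pong.start()
  new PingActor(3, pong).start()
}
```

Each `receive` block suspends the actor until a matching message arrives; `sender ! Pong` replies to whoever sent the last message, so `PongActor` needs no reference to its partner.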
This is the first of a series of articles to introduce you to the power of CUDA -- through working code -- and to the thought process to help you map applications onto multi-threaded hardware (such as GPUs) to get big performance increases. Of course, not all problems can be mapped efficiently onto multi-threaded hardware, so part of my thought process will be to distinguish what will and what won't work, plus provide a common-sense idea of what might work "well enough". "CUDA programming" and "GPGPU programming" are not the same (although CUDA runs on GPUs). CUDA lets you work with familiar programming concepts while developing software that can run on a GPU, and it avoids the performance overhead of graphics-layer APIs by compiling your software directly for the hardware (GPU assembly language, for instance), thereby providing great performance.
J. Swalens, J. De Koster, and W. De Meuter. 30th European Conference on Object-Oriented Programming (ECOOP 2016), volume 56 of Leibniz International Proceedings in Informatics (LIPIcs), pages 23:1--23:28. Schloss Dagstuhl--Leibniz-Zentrum für Informatik, Dagstuhl, Germany, 2016.
D. Bonetta, L. Salucci, S. Marr, and W. Binder. Proceedings of the 2016 ACM International Conference on Object Oriented Programming Systems Languages & Applications, pages 531--547. ACM, 2016. (Acceptance rate: 25%.)
B. Daloze, S. Marr, D. Bonetta, and H. Mössenböck. Proceedings of the 2016 ACM International Conference on Object Oriented Programming Systems Languages & Applications, pages 642--659. ACM, 2016. (Acceptance rate: 25%.)
S. Abramsky. Electronic Notes in Theoretical Computer Science, 2006. Proceedings of the Workshop "Essays on Algebraic Process Calculi" (APC 25).