This post is again about a set of parallel algorithm strategy patterns in OPL: Loop Parallelism, Task Queue, and Graph Partitioning.
Loop parallelism deals specifically with executing loop iterations concurrently. This is a common form of optimization in compilers and processors. Mark Murphy went through the common techniques in this pattern. It is interesting that the author states it is usually easiest to develop a program as sequential code and then gradually refactor parallelism into it. It echoes various other readings we have in CS527 that say getting parallelism right is hard. This implementation approach also stresses the need for a good unit testing suite, which I feel is one of the most important enablers of refactoring.
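As a minimal sketch of that refactoring idea (not taken from the pattern text, and using Java parallel streams purely for illustration): a loop whose iterations touch disjoint data can be rewritten from a plain sequential loop into a parallel one almost mechanically. The array contents and the computation here are made up.

```java
import java.util.stream.IntStream;

public class LoopParallelism {
    public static void main(String[] args) {
        double[] in = new double[1_000_000];
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) in[i] = i * 0.5;

        // Each iteration reads only in[i] and writes only out[i], so there is
        // no loop-carried dependence and the iterations can run concurrently.
        IntStream.range(0, in.length)
                 .parallel()
                 .forEach(i -> out[i] = Math.sqrt(in[i]) * 2.0);

        System.out.println(out[42]);
    }
}
```

The sequential version is the same loop without `.parallel()`, which is what makes the "write it sequentially first, then refactor" advice practical when a unit test suite can confirm the two versions still agree.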
Task queue is also a familiar concept to me, although in my experience the task queues I have encountered all use FIFO scheduling since it is the simplest. Here Ekaterina Gonina also mentions more intelligent scheduling, such as using multiple queues (to preserve locality) or assigning priorities (to control ordering).
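Here is a rough sketch of the priority-scheduling variant in Java, assuming a `PriorityBlockingQueue` and a single worker thread; the `Task` type and the priority values are invented for illustration, not taken from Gonina's pattern.

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityTaskQueue {
    // A task with an explicit priority; a lower value is dequeued first.
    record Task(int priority, Runnable work) implements Comparable<Task> {
        public int compareTo(Task other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();

        // Enqueue tasks out of priority order.
        queue.put(new Task(5, () -> System.out.println("low-priority task")));
        queue.put(new Task(1, () -> System.out.println("urgent task")));

        // The worker repeatedly takes the highest-priority task and runs it, so
        // the urgent task executes first even though it was submitted later.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().work().run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
        Thread.sleep(200);  // let the worker drain the queue before main exits
    }
}
```

A FIFO queue would simply be a `LinkedBlockingQueue` with the same worker loop; the multiple-queue variant gives each worker its own queue so that related tasks stay on the same thread.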
The last pattern is about graph partitioning. As I commented on the Graph Algorithms pattern, coming up with a good representation of the problem is more than half of the battle. After that, it is a matter of digging out and applying the algorithms learned in theory class. Just as with the Graph Algorithms pattern, Mark Murphy gives a good list of partitioning algorithms that we can use.
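Purely to illustrate the idea (this is not one of the algorithms from the pattern), here is a naive BFS-based partitioner in Java that only balances block sizes; real partitioners such as Kernighan-Lin or multilevel schemes like METIS also try to minimize the number of edges cut between blocks.

```java
import java.util.*;

public class NaiveGraphPartition {
    // Partition the vertices of an undirected graph (adjacency list) into k
    // blocks of roughly equal size by growing each block with a BFS from an
    // unassigned seed vertex. This sketch ignores the cut size entirely.
    static int[] partition(List<List<Integer>> adj, int k) {
        int n = adj.size();
        int[] block = new int[n];
        Arrays.fill(block, -1);
        int target = (n + k - 1) / k;  // max vertices per block
        int current = 0;

        for (int seed = 0; seed < n && current < k; seed++) {
            if (block[seed] != -1) continue;
            Deque<Integer> frontier = new ArrayDeque<>();
            frontier.add(seed);
            block[seed] = current;
            int placed = 1;
            while (!frontier.isEmpty() && placed < target) {
                int v = frontier.poll();
                for (int w : adj.get(v)) {
                    if (block[w] == -1 && placed < target) {
                        block[w] = current;
                        placed++;
                        frontier.add(w);
                    }
                }
            }
            current++;
        }
        // Any leftover vertices (e.g. in disconnected graphs) go to the last block.
        for (int v = 0; v < n; v++) if (block[v] == -1) block[v] = k - 1;
        return block;
    }

    public static void main(String[] args) {
        // A 6-cycle 0-1-2-3-4-5-0, split into 2 blocks of 3 vertices each.
        List<List<Integer>> adj = List.of(
            List.of(1, 5), List.of(0, 2), List.of(1, 3),
            List.of(2, 4), List.of(3, 5), List.of(4, 0));
        System.out.println(Arrays.toString(partition(adj, 2)));
    }
}
```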