
Wednesday, July 3, 2013

Don't Panic: Parallel processing for intelligent dummies.


Multitasking is a family of four living in a house with one television set.
Parallel processing is a family of four living in a house with four television sets.[1]


The distinction between multitasking and parallel processing is important. But the distinction in humans (if it applies at all) is much more complex than it is in computers. It's embedded within the knotty questions surrounding "attention."[2] As far as I know, these questions regarding attention for the human mind/brain are far from being settled, and they're a significant part of the subtext in this thread. Fortunately, the case is not (yet) as difficult to understand for the computer as it is for the human.[3]

In contrast to parallel (simultaneous) processing, multitasking is a kind of serial (single time line) processing. But things happen so fast inside a computer that the two are indistinguishable to the user – and are meant to be. So it helps (especially for my purpose here) to slow things down to a sub-computer (human?) speed to watch what's going on. This slowing down, in the example I'm about to give, will reveal the action of the well-known permutation algorithm, the perfect shuffle, presented in the Kevin Houston video I used at the beginning of the "Jeu de Cartes" post.

The following is a variation on an example from a 1971 paper by Harold S. Stone, "Parallel Processing with the Perfect Shuffle." The paper was written when parallel processing was in a relatively early stage of development. It gives four interesting applications of the perfect shuffle: the fast Fourier transform[4], polynomial evaluation, sorting, and matrix transposition. The example here will be polynomial evaluation, because the only math needed to understand the action of the perfect shuffle in this setting is multiplication.

In a deck of eight cards labeled 1, 2, 3, 4, 5, 6, 7, 8, the perfect out shuffle looks like this:

1 2 3 4 5 6 7 8  →  1 5 2 6 3 7 4 8

The "one-line" notation for this ("coil") permutation is (15263748), and the cyclic notation is (1)(8)(253)(467). The 1 and 8 are "fixed points" – they don't change in the permutation and so stay on the "outside" after the shuffle, hence the "out" in the designation. 2-5-3 and 4-6-7 cycle through together during consecutive shuffles, so after only three shuffles an eight-card deck will return to its original order:

1 2 3 4 5 6 7 8
1 5 2 6 3 7 4 8
1 3 5 7 2 4 6 8
1 2 3 4 5 6 7 8

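The shuffle itself is easy to slow down and watch in a few lines of Python (a hypothetical helper of my own, not anything from Stone's paper):

```python
def out_shuffle(deck):
    """Perfect out shuffle: interleave the two halves, keeping the top card on top."""
    half = len(deck) // 2
    shuffled = []
    for top, bottom in zip(deck[:half], deck[half:]):
        shuffled.extend([top, bottom])
    return shuffled

deck = [1, 2, 3, 4, 5, 6, 7, 8]
once = out_shuffle(deck)
print(once)                                    # [1, 5, 2, 6, 3, 7, 4, 8]
print(out_shuffle(out_shuffle(once)) == deck)  # True: back home after three shuffles
```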
Remembering that the objects being permuted need not be the usual consecutive integers, but can be any "objects," we make a substitution mapping all the odd integers 1, 3, 5, 7 to "0" and the even integers 2, 4, 6, 8 to "1". Leaving off the return permutation, we then have the map:

0 1 0 1 0 1 0 1
0 0 1 1 0 0 1 1
0 0 0 0 1 1 1 1

Now, an "object" in the context here need not be a "noun thing" like a number or an apple or a pitch or a word. It can also be an instruction such as "Do X" or the response to a question such as "Should I do X?" This will be important in a future post.

So we can then treat the perfect shuffle of 0's and 1's as a "mask" or "sieve" so that a "1" tells the computer to do something to a piece of data before letting it pass through to the next step, while a "0" tells the computer to let the datum pass through without doing anything to it.
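A single masked stage can be sketched like so (the row of 5's and the choice x = 2 are my own illustration, not Stone's data):

```python
mask = [0, 0, 1, 1, 0, 0, 1, 1]   # one row of the shuffled 0/1 map
data = [5, 5, 5, 5, 5, 5, 5, 5]   # eight identical data, just for illustration
x = 2

# a "1" tells the computer: multiply the datum by x before passing it through;
# a "0" tells it: let the datum pass through untouched
passed = [d * x if m else d for d, m in zip(data, mask)]
print(passed)   # [5, 5, 10, 10, 5, 5, 10, 10]
```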

In the example taken from Stone's paper we want to evaluate a 6th-degree polynomial, which has a formidable look to it:

$$p(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5 + a_6x^6$$

If we were to try this using only our human serial/multitasking abilities, or work it on a computer that could only perform SISD (single instruction, single data) operations, we would have to go one step at a time from left to right. The parallel processing suggested by Stone in 1971 feeds all the data through eight processors simultaneously. At the first stage, each processor asks the mask, "Should I multiply the datum I was given by x?" If the answer is "1", it multiplies by x and passes the answer on to the next stage. If it's "0", it simply passes on what it was given without doing anything to it. Then the computer shuffles that mask to produce the next mask, which answers the question, "Should I multiply by x-squared or not?" Then a final shuffle to answer "Should I multiply by x-to-the-4th or not?"[5]
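The whole three-stage pipeline can be sketched in Python (a sequential stand-in of my own devising: one loop plays the role of Stone's eight simultaneous processors, and the function names are hypothetical):

```python
def out_shuffle(deck):
    """Perfect out shuffle: interleave the two halves, top card stays on top."""
    half = len(deck) // 2
    shuffled = []
    for top, bottom in zip(deck[:half], deck[half:]):
        shuffled.extend([top, bottom])
    return shuffled

def evaluate(coeffs, x):
    """Evaluate a0 + a1*x + ... + a7*x**7 using three shuffled masks."""
    data = list(coeffs)               # "processor" k holds coefficient a_k
    mask = [0, 1, 0, 1, 0, 1, 0, 1]   # the odd->0 / even->1 substitution
    multiplier = x                    # then x**2, then x**4
    for _ in range(3):
        # each processor asks the mask: "Should I multiply my datum?"
        data = [d * multiplier if m else d for d, m in zip(data, mask)]
        mask = out_shuffle(mask)      # the next mask is the shuffle of this one
        multiplier *= multiplier
    # processor k now holds a_k * x**k; summing gives the polynomial's value
    return sum(data)
```

For a 6th-degree polynomial, pad the coefficient list with a final 0 so all eight processors have a datum; reading each processor's column of mask bits bottom-to-top spells out k in binary, which is why processor k ends up multiplying by exactly x-to-the-k.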



So if we wanted to evaluate this polynomial for x=2


it would go through in three steps:

Finally, as any Hitchhiker can predict, when you sum all the terms in the end (remember to leave out the 256), the answer will be 42.



Still, there's more to life than 42. The idea to take away from all this is not a more efficient and faster way for a computer to evaluate polynomials, but the idea of permutations of instructions. And that will now lead to some music-world observations about voice leading, neo-Riemannian transformations, and cycles of sonorities in atonal settings – and then move on to a fantasy on the question, "What did you do in the war, Uncle Miltie?"



_____________________

[1] I guess I should add this more technical distinction taken from ComputerUser Dictionary:

Multitasking:
In computing, multitasking is a technique used in an operating system for sharing a single processor between several independent jobs. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves the problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch. When context switches occur frequently enough, the illusion of parallelism is achieved. Even on computers with more than one CPU, multitasking allows many more tasks to be run than there are CPUs.
Parallel processing:
Parallel processing is a computing architecture within a single computer that performs more than one operation at the same time. Parallel processing can also be achieved by using multiple computers clustered together to process one part of a large function simultaneously to obtain results faster.
[2] Recommended reading: a summary of philosophies of attention can be found in the online Stanford Encyclopedia of Philosophy.
[3] A helpful tutorial on parallel processing is on the Lawrence Livermore National Laboratory web site.
[4] There is a humorous tale of one man's encounter with fast Fourier transforms at John the Math Guy, a blog devoted to a lighthearted but informative approach to many topics in math – a highly recommended blog.
[5] Why do these particular multipliers in this particular order make this work?
Hint 1: remember from elementary algebra that $x^a \cdot x^b = x^{a+b}$, so, for instance, $x \cdot x^2 \cdot x^4 = x^{1+2+4} = x^7$
Hint 2: read the mask from top to bottom: 000, 100, ..., 111



