Topic : Parallel Programming
Author : LUPG


Parallel Programming - Basic Theory For The Unwary



Preface - Why Should You Care?

Writing sequential programs is what most, if not all, programmers are initially trained to do. At a certain phase in their life they discover that there are other models for programs. One of them is using parallelism. This means that instead of having your program carried out in a sequence, one instruction at a time, it is executed by several different entities simultaneously. This can sometimes make the program simpler to design, and it may also run faster than a matching sequential program.

For example, if you have a computer with several processors, each of them might run a part of the program simultaneously, and thus complete it faster than if a single processor had to run the whole program.

Another example is if you have a program that needs to read data from a disk, and meanwhile make some heavy calculations on it. Since the disk can transfer data to memory without intervention of the CPU, it makes sense to split your program into two parts running in parallel: one handles the I/O and reads data into memory, while the other does the heavy calculations. You could do it all in one sequential program that mostly does the calculations and from time to time goes off to read another block of data, but it is harder to write it this way, and harder to make it efficient (how will the program know when the previous block of data has finished arriving, and it is time to ask for the next one?).
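As a rough illustration, here is a minimal sketch of this idea using two POSIX threads - one reading blocks, the other crunching them. The file name, block size and the process_block() routine are hypothetical place-holders, not something from the original text:

    /* Overlapping I/O with computation, using two POSIX threads.
     * Compile with:  cc overlap.c -lpthread
     */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    static char buf[BLOCK_SIZE];      /* one block handed from reader to calculator */
    static ssize_t buf_len = 0;       /* number of bytes currently in buf           */
    static int done = 0;              /* set by the reader when it hits end-of-file */
    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  full  = PTHREAD_COND_INITIALIZER;   /* buf holds data    */
    static pthread_cond_t  empty = PTHREAD_COND_INITIALIZER;   /* buf was consumed  */

    static void process_block(const char *p, ssize_t n)
    {
        /* ... the heavy calculations go here (hypothetical) ... */
    }

    /* the I/O part - runs in its own thread */
    static void *reader(void *arg)
    {
        int fd = *(int *)arg;
        char local[BLOCK_SIZE];
        ssize_t n;

        while ((n = read(fd, local, sizeof(local))) > 0) {
            pthread_mutex_lock(&lock);
            while (buf_len > 0)                  /* wait until the last block was used */
                pthread_cond_wait(&empty, &lock);
            memcpy(buf, local, n);
            buf_len = n;
            pthread_cond_signal(&full);
            pthread_mutex_unlock(&lock);
        }
        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_signal(&full);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);     /* hypothetical input file */
        pthread_t tid;

        if (fd < 0) {
            perror("open");
            return 1;
        }
        pthread_create(&tid, NULL, reader, &fd);

        /* the calculation part - runs in the main thread */
        for (;;) {
            pthread_mutex_lock(&lock);
            while (buf_len == 0 && !done)        /* sleep until a block is ready */
                pthread_cond_wait(&full, &lock);
            if (buf_len == 0 && done) {
                pthread_mutex_unlock(&lock);
                break;                           /* no more data */
            }
            process_block(buf, buf_len);
            buf_len = 0;
            pthread_cond_signal(&empty);
            pthread_mutex_unlock(&lock);
        }
        pthread_join(tid, NULL);
        return 0;
    }

Note how the reader thread can keep calling read() while the main thread is busy calculating - that is exactly the overlap described above.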




Scope Of This Tutorial

This document attempts to illustrate the terms and principles commonly used in parallel systems. It is by no means a replacement for a good parallel programming course. Yet, it may make it easier for people without this background to read and understand the various parallel programming tutorials, and to start writing parallel programs. I'd still advise that they eventually take a course or two about parallel and/or distributed programming. I'd like to hear from you if this tutorial really achieved its purpose for you, if it was too theoretical, too short, or was flawed in some other way.

Part of this tutorial concentrates on the underlying hardware and operating system software used in parallel systems. The last section tries to give a few insights as to when to try to make a system parallel, how to design it properly, and what kinds of tools and frameworks one could expect to find to ease such development.




Entities Of Parallel Systems

A parallel system is a system (software and/or hardware) that allows one to write programs whose different parts are carried out in different threads of execution.
In order to better understand what a parallel (or parallelic) system is, we should examine the different components such a system is made of.




Processes

A process is an entity that executes a computer program, and manipulates other resources in order to fulfill the mission of the program. Several (or many) processes may be running on the same computer, or on different computers, but usually a single process runs on a single computer. Also, each process usually has its own address space, as well as a few other private resources (a private execution stack, for example).

Note - what we call here a 'process' is a broader entity than what you might know as a Unix process. We should really have called it a 'runnable'; in practice it could be a Unix process, a Unix thread, and so on.




Resources

A resource is an entity that can be used by a process to perform some specific operation. There could be physical resources, as well as logical (or virtual) resources.
A physical resource could be, for example, a disk, which is used for saving files. Or a keyboard, which is used to read data from the user.
A logical resource could be a shared memory area, which several processes may access in order to write data to, or read data from.
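To make that last example concrete, here is a minimal sketch of creating and attaching such a shared memory area using the System V shared memory calls; the key value and the message written into the segment are hypothetical:

    /* Create a shared memory segment and write a string into it.
     * Any other process attaching the same segment will see the data.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* create (or find) a 4KB shared memory segment */
        int shm_id = shmget((key_t)0x1234, 4096, IPC_CREAT | 0600);
        if (shm_id == -1) {
            perror("shmget");
            exit(1);
        }

        /* map the segment into our own address space */
        char *shm_addr = shmat(shm_id, NULL, 0);
        if (shm_addr == (char *)-1) {
            perror("shmat");
            exit(1);
        }

        /* the logical resource in action - data visible to other processes */
        strcpy(shm_addr, "hello from one process");

        shmdt(shm_addr);    /* detach - the segment itself stays around */
        return 0;
    }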




Executers

An executer is a special resource that is used to execute a process. Usually this is a CPU, so we'll use the term CPU from now on. Note that not all CPUs are equal - some are general-purpose, and might carry out the program as a whole. Others are specialized for different tasks - a CPU that only does mathematical floating-point operations; a CPU that only handles graphical operations (commonly found on graphics cards, where it serves as a graphics accelerator), and so on.




Schedulers

A scheduler is an entity that specifies when processes will run, on which CPUs (executors) they will run, and in what order. Such a scheduler may control a single computer, or several computers (see clustering below).




Synchronizers

A synchronizer is an entity used to control the access of processes to resources. Some synchronizers are used to make sure only a single process uses a given resource at any one time. Other synchronizers are used to make sure a given resource is accessed in a given order of priority by different processes. There is a host of other types of synchronizers to be dealt with. You will most likely bump into synchronizers such as Mutexes, Semaphores, Monitors, Conditions, etc.




Short Terms Dictionary

There are a few terms used in conjunction with all types of parallel systems. We will describe them here, divided into several categories in some sort of a (hopefully) logical order. For each term, we'll try to give a real-life example making it easier to grasp the concept, and to remember it (I'd call this kind of example a "mantra". Forgive me for abusing this word).




Accessing Resources

Mutual Exclusion

A mechanism used to make sure no two processes are trying to use a resource at the same time. Used to avoid corrupting the internal state of this resource. (mantra - that great big nurse sitting in front of the doctor's room, making sure no one gets in until the previous person comes out).
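As a minimal sketch, this is how mutual exclusion typically looks with a POSIX threads mutex; the shared counter and the two worker threads are hypothetical:

    /* Mutual exclusion with a pthreads mutex.
     * Compile with:  cc mutex.c -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;                                   /* the shared resource */
    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&counter_lock);    /* only one thread may enter */
            counter++;                            /* the protected operation   */
            pthread_mutex_unlock(&counter_lock);  /* let the next one in       */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);        /* always 200000 */
        return 0;
    }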

Deadlock

A situation in which a group of two or more processes are all waiting for a set of resources currently held by other processes in the same group, or waiting for events that are supposed to be sent from other processes in the group. (mantra - when was the last time you were anxiously dialing your boyfriend, finding his phone line constantly busy, while he was trying to call you back all that time? Each of you was keeping one phone busy while trying to dial the other's number. If neither of you had let go of the phone, you'd have been in a deadlock. The phone is, indeed, a valuable resource).
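The phone mantra translates almost directly into code. Here is a minimal sketch (with hypothetical "phone" mutexes) of two threads that deadlock because each grabs one mutex and then waits forever for the one the other already holds:

    /* A deliberate deadlock between two threads.
     * Compile with:  cc deadlock.c -lpthread
     */
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t my_phone   = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t your_phone = PTHREAD_MUTEX_INITIALIZER;

    static void *me(void *arg)
    {
        pthread_mutex_lock(&my_phone);     /* pick up my phone...            */
        sleep(1);                          /* ...you do the same meanwhile   */
        pthread_mutex_lock(&your_phone);   /* blocks forever - you hold it   */
        pthread_mutex_unlock(&your_phone);
        pthread_mutex_unlock(&my_phone);
        return NULL;
    }

    static void *you(void *arg)
    {
        pthread_mutex_lock(&your_phone);   /* pick up your phone...          */
        sleep(1);
        pthread_mutex_lock(&my_phone);     /* blocks forever - I hold it     */
        pthread_mutex_unlock(&my_phone);
        pthread_mutex_unlock(&your_phone);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, me, NULL);
        pthread_create(&t2, NULL, you, NULL);
        pthread_join(t1, NULL);            /* never returns - a deadlock */
        pthread_join(t2, NULL);
        return 0;
    }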

Starvation

A situation where a process that tries to access some resource is never granted access to that resource. (mantra - think of the last time you went into your fast-food restaurant, approached the counter, and were always pushed back to the end of the line by the crowd, left "starving to death").

Race Condition

A situation in which two (or more) processes are doing some competing operations at the same time, and the results might come out wrong due to the collisions. (mantra - think of two people trying to get through the same door at the same time, jamming each other's way in).
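A classic minimal sketch of a race condition: two threads increment a shared counter with no synchronization (the counter and loop count are hypothetical), so some of the read-modify-write operations collide and updates get lost:

    /* A race condition on a shared counter.
     * Compile with:  cc race.c -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;          /* shared, and NOT protected by any lock */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++)
            counter++;               /* load, add 1, store - may collide */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* often prints less than 200000 - some updates were lost */
        printf("counter = %d\n", counter);
        return 0;
    }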

Busy Wait

A situation in which a process that is waiting for a resource to become free enters a loop of constantly polling the resource in order to find out if it is free. In the process, it consumes all available CPU power on its constant polls. (mantra - when you are waiting for the cable guy to come and connect your new apartment, and keep looking out the window all the time to see whether the technician has arrived).
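As a minimal sketch, a busy wait might look like this - the loop keeps polling (here, for the existence of a hypothetical file) instead of blocking, and burns CPU time until the "resource" shows up:

    /* A busy wait - keep "looking out the window" until the file appears. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        while (access("/tmp/cable_guy_arrived", F_OK) != 0) {
            /* no sleep() here - this constant polling is exactly
             * what makes it a busy wait */
        }
        printf("finally!\n");
        return 0;
    }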




Handling Events And Notifications

Asynchronous Notification

A method for one process to notify a second process about some state change, while the second process is executing some unrelated operations. This is usually done using hardware interrupts, or using Unix signals (see the Unix signal programming tutorial). In properly designed programs, this notification method is not commonly used. (mantra - you go downtown to do some business, when suddenly your cellular phone rings, with your dad on the other end of the line, telling you his PC says he is "bad and invalid", and asking you what to do...).
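For illustration, here is a minimal sketch of asynchronous notification using a Unix signal: the process goes on with its unrelated work loop, and the handler runs whenever some other process sends it SIGUSR1 (e.g. with "kill -USR1 <pid>"); the work loop and the messages are hypothetical:

    /* Asynchronous notification via SIGUSR1. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_notice = 0;

    static void on_usr1(int sig)
    {
        got_notice = 1;        /* just note the event - handlers should be short */
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_usr1;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        printf("my pid is %d\n", (int)getpid());
        for (;;) {             /* the "unrelated operations" */
            if (got_notice) {
                printf("dad called...\n");
                got_notice = 0;
            }
            sleep(1);          /* pretend to do some work */
        }
    }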

Synchronous Notification

A method for one process to notify a second process about some state change, which the second process will receive only when it specifically polls for such notifications. This is done using functions such as select() (see the internetworking with Unix sockets tutorial) in some sort of an event loop; a small select()-based sketch is given below, under "Event Loop".

Event Loop

A control loop constantly executed by an event-driven program - its structure is:
1. Wait for a new event, suspending execution until one arrives from another process (or from the operating system).
2. Check which event type we got.
3. Invoke a function that can handle this type of event.
4. When that function returns, go back to step 1.
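Below is a minimal sketch of such an event loop built around select(), which also shows the synchronous notification style mentioned above; the two input sources (standard input and a pipe descriptor obtained elsewhere) and their handler functions are hypothetical:

    /* An event loop built around select(). */
    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    static void handle_stdin(void)  { /* read and handle user input (hypothetical)      */ }
    static void handle_pipe(int fd) { /* read and handle the other process' data        */ }

    int main(void)
    {
        int pipe_fd = -1;   /* hypothetical - would normally come from pipe(), a socket, etc. */

        for (;;) {                                    /* the event loop itself */
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(STDIN_FILENO, &readfds);           /* event source 1 */
            if (pipe_fd >= 0)
                FD_SET(pipe_fd, &readfds);            /* event source 2 */
            int maxfd = (pipe_fd > STDIN_FILENO) ? pipe_fd : STDIN_FILENO;

            /* 1. sleep until at least one event arrives */
            if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0) {
                perror("select");
                break;
            }
            /* 2. check which event we got, and 3. invoke its handler */
            if (FD_ISSET(STDIN_FILENO, &readfds))
                handle_stdin();
            if (pipe_fd >= 0 && FD_ISSET(pipe_fd, &readfds))
                handle_pipe(pipe_fd);
            /* 4. loop back and wait for the next event */
        }
        return 0;
    }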
