Topic : Parallel Programming
Author : LUPG
Page : << Previous 3  Next >>

between CPUs). An example of such a system is Silicon Graphics' Origin 2000 system. CC-NUMA systems are usually harder to implement than SMP systems (and thus more expensive), and thus normally not found in low-end or mid-sized systems.


Clustering is a technique used to make several computers act as one larger machine, splitting tasks among them. It allows one to take several cheap workstations and combine them into a larger system. It also adds redundancy to the system: if one machine in the cluster dies, the other machines can take over its work until the malfunctioning machine is repaired, all without bringing the whole system down. This type of setup is thus common in systems that must run 24 hours non-stop.

Clustering is often implemented in software, often using a message-passing package named PVM (Parallel Virtual Machine) to communicate between the different machines. Examples of such systems are Beowulf, for Linux systems, or the clustering systems by Tandem Corporation.

Tools And Methods For Parallel Applications Development

Writing a parallel application takes a different approach than writing a sequential program. After we decide what needs to be done, we need to decide who gets to do what, and find points where extra parallelism would be beneficial. We then need to decide how our different runnables are going to communicate with one another; sometimes a whole slew of different communication methods is used in one large parallel application, each of which best fits a particular need. We then come to the art of debugging parallel applications, which requires some techniques not needed when debugging sequential applications. You will note that similar techniques are also used when debugging device drivers, and even windowing GUI applications.

Note that we're not trying to teach the whole methodology in a few paragraphs, but rather just to point out a few places where one might search for more information and wisdom.

Designing A Parallel Application

The first step in designing a parallel application is determining what level of parallelism, if any, is beneficial to the problem our application tries to solve. In many cases, parallelism would add much more overhead than benefit. An important factor is the experience of the programmers with parallel systems. This is not a factor when you're trying to learn, of course, but it is a factor if you want to get something done in a reasonable amount of time. Take into account some extra overhead needed to fix hard bugs that stem from timing problems, race conditions, deadlocks and the like.

Once we have decided to use parallel programming, we should work on decomposing our system into units that would logically belong to a single runnable. Sometimes we find very natural divisions; other times only experience will help us, or better, looking at other similar applications for which we can find some record of success. If we're programming in order to learn, we should mostly experiment, write code, test it, dump bad ideas, and be ready to write again from scratch. If we see that our design leads to new complexities, it's probably time for a change.

Communications Frameworks

A very important factor for the success of a parallel application is choosing an appropriate communications framework. There are several such frameworks in common use, and for anything but simplistic and experimental work, we should consider using one of them. We'll show here a few examples, though of course other methods (including methods implemented by various commercial products) exist.

ONC RPC - Remote Procedure Call

Remote Procedure Calls (RPC) are a method, originally developed by Sun Microsystems, that allows one process to activate a procedure in a second process, passing it parameters and optionally getting a result back from the call.

The set of procedures supported by a process is defined in a file, using a notation called 'RPC Language', and is pre-processed by a tool named 'rpcgen', which creates two groups of files forming two 'stubs'. One stub defines functions whose invocation causes a message to be sent to the remote process, with a request to invoke a certain procedure. Such a function is invoked by the first (client) process, and returns when a reply arrives from the second (server) process, carrying the value it returned. The second stub contains declarations of functions that need to be implemented by the second (server) process, in order to actually implement the procedures.
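As a sketch of what such a definition looks like, here is a minimal, hypothetical 'RPC Language' file (the program name, numbers and procedure are illustrative, not taken from this text):

```
/* add.x - a hypothetical interface definition in RPC Language. */
struct intpair {
    int a;
    int b;
};

program ADD_PROG {
    version ADD_VERS {
        int ADD(intpair) = 1;   /* procedure number 1 */
    } = 1;                      /* version number */
} = 0x20000001;                 /* program number */
```

Running 'rpcgen' on such a file generates, among other files, the client stub and a skeleton for the server stub, leaving us to implement only the body of the ADD procedure on the server side.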

Over the years, new RPC variants were created, most notably 'DCE RPC', which is part of the 'Distributed Computing Environment', now maintained by The Open Group.

OMG's CORBA - Common Object Request Broker Architecture

CORBA (Common Object Request Broker Architecture) started as an attempt by several hundred companies to define a standard that allows clients to invoke methods on specific objects running in remote servers.

This framework defines a language-neutral protocol that allows processes to communicate even if they are written in different programming languages or running on different operating systems. A declarative language, named IDL (Interface Definition Language), was defined to allow specifying language-neutral interfaces. Each interface has a name, and contains a set of methods and attributes. The interface is then pre-processed by some tool, which generates client and server stubs, similarly to how it is done with 'rpcgen' for RPC. An entity named an 'ORB' (Object Request Broker) is then used to allow these clients and servers to communicate.
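As an illustration, a minimal IDL interface might look like this (the interface and its members are hypothetical, not from this text):

```
// account.idl - a hypothetical CORBA IDL interface.
interface Account {
    readonly attribute float balance;   // an attribute
    void deposit(in float amount);      // a method
    void withdraw(in float amount);
};
```

An IDL compiler would turn this definition into client stubs and server skeletons in the implementation language of choice, and the ORB would carry the invocations between them.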

Above this basic interface, a set of standard services was defined, supplying features that are commonly required: a naming service, to allow client processes to locate remote objects by name; an event service, to allow different objects to register for events sent by other objects; and so on. These services are collectively known as 'Horizontal CORBA Services'.
Yet other services are being defined for different areas of computing, for instance services to be used by medical applications. These are called 'vertical services'.

For more information about CORBA, please refer to the Object Management Group's web site. You may also check the various 'free CORBA' pages on the web to locate and download 'free' implementations of CORBA.

Microsoft's DCOM - Distributed Component Object Model

I might annoy some of the readers here, but although we are dealing with Unix programming, we cannot ignore what appears to be the currently more developed distributed objects framework, DCOM (Distributed Component Object Model). DCOM gives us services rather similar to what CORBA does, and people usually argue that its range of services is smaller than what CORBA provides. However, DCOM served as the foundation for the ActiveX set of interfaces, which are used to allow one Windows application to activate another one and fully control it. This is most commonly used to allow one application to embed an object created by another application, and to allow the user to manipulate the embedded object by invoking the second application when required. Of course, the ActiveX interface allows much more than that.

There are several reasons why DCOM is also important for Unix programmers:

In many cases, one needs to make Unix and Windows programs communicate. For that reason, a standard for a CORBA-to-DCOM interface has been defined by the OMG, and you, as a Unix programmer, might find yourself needing to interface with such programs.
There are various ideas in DCOM that are worth looking at and implementing on top of native Unix frameworks (such as CORBA). The rule here is: "don't throw out the baby with the bath water".
The Win32 API is being ported to the Unix environment by several companies, and COM (the non-distributed version of DCOM) is already available for various Unix platforms. DCOM is also being ported to these platforms. When this is done, Unix programmers will be able to employ DCOM without the need to use a Microsoft operating system.

Third-Party Libraries Supporting Process/Thread Abstractions

Various third-party libraries exist whose purpose is to ease the development of cross-platform applications. Of those, several libraries try to make multi-process and multi-threaded programming easier.

ACE (Adaptive Communications Environment) is a large C++ library, developed at Washington University in St. Louis. ACE attempts to supply abstractions for many system programming concepts, including sockets, pipes, shared memory, processes and threads. These abstractions allow the same source code to be compiled by different compilers on different operating systems, from PCs running Linux, BSD and Windows systems, through most types of Unix workstations, and up to IBM's MVS Open Edition, not to forget several real-time operating systems, such as VxWorks and LynxOS. There is also a version of ACE ported to Java, named JACE.

Rogue Wave is a company known for writing commercial libraries that ease the development of applications. One of their libraries is named 'Threads++', and is used to make multi-threaded programming easier. This library is something to consider when developing a commercial multi-threaded application. Refer to Rogue Wave's home page for more information.

Debugging And Logging Techniques

Last, but not least, comes the host of problems
