CS 358. Concurrent Object-Oriented Programming
Spring 1996

Reference: D.C. Luckham, J. Vera, D. Bryan, L. Augustin and F. Belz, Partial orderings of event sets and their application to prototyping concurrent timed systems, J. Systems and Software 21(3): 253-265, 1993.

Lectures 9-10. Rapide

Rapide is a programming system developed at Stanford by a group headed by David Luckham, in collaboration with Frank Belz and others from TRW. The system consists of three interrelated languages: an architecture language, a types language, and a concurrent, reactive programming language. The motivating application for Rapide is the design and prototyping of concurrent systems, possibly subject to real-time constraints. The architecture language is used to describe the basic modules, or components, of a system, together with their interconnections. The types language is shared between the architecture language, where it is used to define components and their interfaces, and the concurrent programming language. The concurrent, reactive programming language is used to define systems, or prototypes of systems, that realize the desired architectures. One idea that was influential in the early design stages of Rapide was that a prototype of a software, or software/hardware, system could be written in the Rapide language. Then, after some evaluation and refinement of the design, modules could be reimplemented in another language (such as C, C++ or Ada).

We will be primarily interested in the concurrent, reactive programming language, in combination with the form of objects, called modules, provided by the types language. The main properties of interest, from the point of view of this course, are:

We will also take a quick look at some example partial order specifications, since these are useful in discussing the properties of concurrent programs.

The specification language includes a pattern language for describing the partially ordered sets of events that must, or must not, occur during any computation. For example, a pattern might specify that if a deposit is made to a bank account, then this causes an increased bank balance to be reported to the account holder.

Rapide Computation

Rapide computations define partial orders of events, where the order relation intuitively corresponds to causality. (See Lectures 2-3.) It is also possible to maintain a temporal order between events, for modeling the real-time behavior of programs, but we will not go into this.

Events are generated by basic operations that correspond to communication between modules. Two examples are actions, which provide asynchronous broadcast communication, and remote procedure calls, which provide synchronous communication between modules. When an action is performed by one module, any other module may detect that event using a pattern expression that matches the form of the action. In particular, more than one module may observe, or react to, a single event. This can be useful in system development, for example, where one module is added to the system to observe the behavior (input and output actions) of another module and report errors.

Partial Orders in Simulation and Debugging

The following partial order might arise in a computation:
                   Msg_In
                   /    \
                  /      \
              Msg_In    Send
                /  \    / \
               /    \  /   \
           Msg_In   Send   Receive
              \     /  \     /
               \   /    \   /
                Send    Receive
                  \       /
                   \     /
                   Receive
This partial order corresponds to several linear traces, such as
    Msg_In, Send, Receive, Msg_In, Send, Receive, Msg_In, Send, Receive

    Msg_In, Msg_In, Msg_In, Send, Send, Send, Receive, Receive, Receive
If we execute a program with "causality" corresponding to the partial order, then either of these traces might result. Neither gives as much information about the synchronization of events in the program as the partial order. In fact, to get the same information, we would have to look at a large number of linear traces, each containing all 9 events.
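
To make "a large number" concrete, here is a small Python sketch (ours, not part of Rapide; the names M1..R3 are just labels for the nine events in the diagram) that counts the linear extensions of the partial order above:

    from functools import lru_cache

    # The nine events of the diagram: three messages pass through the stages
    # Msg_In -> Send -> Receive, and each stage handles messages in order.
    events = ["M1","M2","M3","S1","S2","S3","R1","R2","R3"]
    edges  = [("M1","M2"), ("M2","M3"),               # Msg_In chain
              ("S1","S2"), ("S2","S3"),               # Send chain
              ("R1","R2"), ("R2","R3"),               # Receive chain
              ("M1","S1"), ("M2","S2"), ("M3","S3"),  # Send depends on Msg_In
              ("S1","R1"), ("S2","R2"), ("S3","R3")]  # Receive depends on Send
    preds = {e: frozenset(a for a, b in edges if b == e) for e in events}

    @lru_cache(maxsize=None)
    def count(done: frozenset) -> int:
        """Number of ways to extend the trace prefix 'done' to a full trace."""
        if len(done) == len(events):
            return 1
        ready = (e for e in events if e not in done and preds[e] <= done)
        return sum(count(done | {e}) for e in ready)

    print(count(frozenset()))   # prints 42: this one poset has 42 linear traces

So this single nine-event poset already stands for 42 distinct traces, and the gap grows combinatorially with the number of concurrent events.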

If we simulate a concurrent program and "randomly" make all nondeterministic choices using a random number generator, then we would have to repeat the execution of the program a potentially large number of times to make sure we have a good sample of possible orders of events. The number might actually have to be very large if we want to make sure we detect all synchronization errors.

For the purposes of testing and debugging a concurrent program, it is therefore more useful to have a system that actually constructs the partial order. This may take significantly more computation time and space than producing a single linear trace, but it is potentially much more efficient than repeating the computation a large number of times to make sure that we have seen enough linear traces to consider the program correct.

An innovation in Rapide is that not only does the Rapide run-time system produce a partial order, but the program itself may, at some point in the computation, include statements that depend on the partial order produced so far. This has significant advantages and disadvantages.

But remember: the meaning of a parallel program is a set of partial orders, not a single partial order. Therefore, diagnostics run on a single partial order cannot tell you the program is correct; you have to try all possible partial orders. Is it true that there are fewer partial orders than traces? Since the trace semantics blurs distinctions between partial orders, trace-based analysis might actually be algorithmically better for highly nondeterministic programs.

Language Overview

Types, interfaces and specifications

The type and module language provides constructs for structuring systems into components and specifying the interfaces between them. Rapide modules are a general aggregation and encapsulation construct. A module may represent a single integer, or a large subsystem consisting of many communicating submodules.

The type of a Rapide module is called an interface. There may be many modules with the same interface. In particular, we can create more than one module of the same type, or consider two modules to have the same type due to structural subtyping.

An interface may declare types, submodules and actions. An interface may also contain constraints, written in the specification language, that may be used to automatically check the poset computations of modules.

For example, the following declaration defines an interface and binds it to the type name Channels. Modules with this interface may receive actions called Take_In and produce actions called Deliver. The match clause is a constraint on patterns of Take_In and Deliver actions, explained below.

type Channels is interface
   in  action Take_In(Msg: String);
   out action Deliver(Msg: String);
   match (?S : String;  Take_In(?S)  =>  Deliver(?S))*~
end Channels
The keyword match in the constraint specifies that the partial order of Channels events must match the given specification exactly. The specification may be parsed into two parts: a pattern constraint and a pair of postfix operators (* and ~). The pattern constraint ?S : String; Take_In(?S) => Deliver(?S) consists of the declaration ?S : String, which declares that the placeholder ?S matches any string, and the pattern Take_In(?S) => Deliver(?S), which matches a pair of events, a Take_In and a causally dependent Deliver, both involving the same string. The postfix operator * indicates that the partial order of events involving this module may include any number of Take_In and Deliver pairs, and the operator ~ means that the pairs must be disjoint. In particular, one Take_In(Bob) followed by two Deliver(Bob)'s would be incorrect, since the only way for this to match Take_In(?S) => Deliver(?S) twice would be for the two pairs to share the same Take_In event.
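
As a rough illustration of what the constraint demands (a Python sketch of our own, not Rapide's actual matcher; the integer event ids and the causes set are our encoding of a poset), the check below asks whether the Take_In and Deliver events of a computation can be split into disjoint, causally ordered pairs carrying the same string:

    # A rough illustration of the (Take_In(?S) => Deliver(?S))*~ constraint:
    # every event must belong to exactly one disjoint pair, a Take_In and a
    # causally later Deliver carrying the same string.
    def satisfies_channel_constraint(take_ins, delivers, causes):
        """take_ins, delivers: lists of (event_id, string) pairs;
        causes: set of (earlier_id, later_id) in the transitive causal order."""
        if len(take_ins) != len(delivers):
            return False                      # events left over: no partition
        def match(pending, free):
            if not pending:
                return True
            (t, s), rest = pending[0], pending[1:]
            for i, (d, s2) in enumerate(free):
                if s == s2 and (t, d) in causes:     # same string, t causes d
                    if match(rest, free[:i] + free[i+1:]):
                        return True
            return False
        return match(take_ins, delivers)

    # One Take_In(Bob) and two Deliver(Bob)s: three events cannot be split
    # into disjoint Take_In/Deliver pairs, so the constraint is violated.
    print(satisfies_channel_constraint(
        [(1, "Bob")], [(2, "Bob"), (3, "Bob")], {(1, 2), (1, 3)}))   # False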

The names given in the interface determine the visibility of declarations in a module. If a function name, for example, is declared in a module of type Channels, then it is private to the module since it does not appear in the interface.

Modules and the Executable Language

A module may declare a set of types, functions, and reactive processes. The reactive processes in a module may execute independently. The main syntactic form for defining a reactive process is
    when  pattern  do
             statement list
    end when
which is triggered by any events that match the indicated pattern. When this process definition occurs within a module, the pattern is matched against the poset of events that are either generated within the module (by locally nested modules or internal action calls) or communicated from outside the module by calls to in actions listed in the module interface.

An example is a simple reactive process for a Channel (as defined above):

    ?M : String;
    when  Take_In(?M) do
               Deliver(?M);
    end when;
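
The following Python fragment (our own emulation, not the Rapide runtime) mimics this reactive process: each Take_In event triggers the body, which generates a causally dependent Deliver event, and a small log records the resulting poset edges.

    class Channel:
        def __init__(self):
            self.log = []                     # (event, cause) pairs: a tiny poset

        def take_in(self, msg):               # the module's 'in' action
            e = ("Take_In", msg)
            self.log.append((e, None))        # event arrives from the environment
            self.deliver(msg, cause=e)        # the body of the 'when' runs

        def deliver(self, msg, cause):        # the module's 'out' action
            self.log.append((("Deliver", msg), cause))

    c = Channel()
    c.take_in("hello")
    print(c.log)   # [(('Take_In', 'hello'), None),
                   #  (('Deliver', 'hello'), ('Take_In', 'hello'))]
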
Can combine processes using two forms:
	Parallel
	  when
	||
	  when
	end;
and
	await
	  pattern => action;
	  pattern => action;
	end 
The await can appear inside a loop, but the parallel cannot (for some obscure reason). Since when forks a new thread, the posets obtained from the two constructs are often quite different. (- Francois)

-----------------

  * The syntax for when is more like: 
	when pattern do
	  statement list
	end when
  * Placeholders like ?m can only be declared in a pattern.
  * References are declared as k : var Integer;
Also, the await statement is not a way to combine processes, but a way to wait for several possible matches (like select in Occam). (Sorry if my previous message was not very clear.)

-----------

Examples

Example 1

Question: for a nondeterministic program, how does the size of the set of all posets (i.e., the sum of the sizes of the distinct posets) compare with the size of the set of all traces?
TYPE Producer IS INTERFACE ACTION OUT Emit( n : Integer ); END Producer;
TYPE Consumer IS INTERFACE ACTION IN Source( n : Integer ); END Consumer;

MODULE New_Producer( min : Integer ) RETURN Producer IS
   FUNCTION Compute( n : Integer ) RETURN Integer IS
      BEGIN RETURN n + 1; END FUNCTION Compute;
INITIAL
   Emit(min);
PARALLEL
   WHEN (?x IN Integer) Emit(?x) WHERE ?x < 20 DO 
      Emit( Compute( ?x ));
   END WHEN;
END MODULE New_Producer;

MODULE New_Consumer() RETURN Consumer IS
   FUNCTION Use( n : Integer ) IS BEGIN NULL; END FUNCTION;
PARALLEL
   WHEN (?y IN Integer) Source(?y) DO Use(?y); END WHEN;
END MODULE New_Consumer;

ARCHITECTURE ProdCon() IS
   Prod : Producer IS New_Producer(4);
   Cons : Consumer IS New_Consumer();
CONNECT
   (?n IN Integer)
   Prod.Emit(?n) TO Cons.Source(?n);
END ARCHITECTURE ProdCon;
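
Informally, this architecture generates a chain of Emit events carrying 4, 5, ..., 20, and the CONNECT rule forwards each Emit(n) to the consumer as Source(n). Here is a Python rendering of ours (Rapide's actual run-time poset construction is more elaborate) of the causal edges it produces:

    # Emulate ProdCon(4): INITIAL Emit(min), the producer's WHEN rule, and the
    # CONNECT rule that turns every Prod.Emit(n) into Cons.Source(n).
    def run_prod_con(minimum=4):
        edges = []                        # causal edges of the resulting poset
        def source(n, cause):             # consumer IN action; Use(n) is a no-op
            edges.append((cause, ("Source", n)))
        def emit(n, cause=None):
            e = ("Emit", n)
            if cause is not None:
                edges.append((cause, e))
            source(n, e)                  # CONNECT Prod.Emit(?n) TO Cons.Source(?n)
            if n < 20:                    # WHEN Emit(?x) WHERE ?x < 20
                emit(n + 1, e)            # DO Emit(Compute(?x)), i.e. Emit(?x + 1)
        emit(minimum)                     # INITIAL Emit(min)
        return edges

    print(len(run_prod_con(4)))   # 33 edges: 16 Emit->Emit and 17 Emit->Source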

Example 2

One powerful but sometimes difficult aspect of Rapide programming is the interaction between shared variables and reactive processes. For example, consider the following module, which uses assignable variables to communicate between reactive processes.
    k : var Integer;
    parallel 
        when (?M: String); Take_In(?M)  where k.odd() do
                   Deliver_1(?M); k := k+1; 
        end when;
    ||
        when (?M: String); Take_In(?M)  where k.even() do
                   Deliver_2(?M); k := k+1; 
        end when;
    end parallel;
This "channel splitter" takes in actions with string data and alternates between sending them out to two different targets.

Some tricky issues: can the second process start before the first finishes? The Rapide decision is yes, for increased concurrency. But then what if we have two processes that are enabled when k is even and two when k is odd? See below. The same issue occurs with the code above if a second invocation of one process definition can proceed before the first completes.

    k : var Integer;
    parallel 
        when (?M: String);  Take_In(?M)  where k.odd() do
                   Deliver_1(?M); k := k+1; 
        end when;
    ||
        when (?M: String);  Take_In(?M)  where k.odd() do
                   Deliver_1(?M); k := k+1; 
        end when;
    ||
        when (?M: String);  Take_In(?M)  where k.even() do
                   Deliver_2(?M); k := k+1; 
        end when;
    end parallel;
Race condition --- it is possible for one process to evaluate k-j while the other is between its assignments to j and k, as in the two variants below.
    j, k : var Integer;
    
    when (?M: String); Take_In(?M) where odd(k.minus(j)) do
               Deliver_1(?M);  ... ; j := j+1; k := k+1; 
    end when;

    when (?M: String); Take_In(?M) where even(k.minus(j)) do
               Deliver_2(?M); ... ;  j := j+1; k := k+1; 
    end when;

    j, k : var Integer;
    parallel 
        when (?M: String); Take_In(?M)  where odd(k.minus(j)) do
                   Deliver_1(?M);  ... ; j := j+1; ... ; k := k+1; 
        end when;
    ||
        when (?M: String); Take_In(?M)  where even(k.minus(j)) do
                   Deliver_2(?M);   ... ; j := j+1; ... ; k := k+1; 
        end when;
    end parallel;
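
The race is easy to reproduce outside Rapide. In the Python sketch below (our construction, using ordinary threads rather than Rapide processes), a reader evaluating k - j while an updater sits between its two assignments observes transient values that no quiescent state of the program ever has:

    import threading

    j, k = 0, 0
    observed = set()

    def updater():                 # plays the role of a triggered process body
        global j, k
        for _ in range(100_000):
            j += 1                 # a guard evaluated right here...
            k += 1                 # ...sees a transient value of k - j

    def reader():                  # plays the role of the where-clause guard
        for _ in range(100_000):
            observed.add(k - j)

    t1 = threading.Thread(target=updater)
    t2 = threading.Thread(target=reader)
    t1.start(); t2.start(); t1.join(); t2.join()
    print(observed)                # typically contains transient values such as
                                   # -1 in addition to 0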

Implementation Issues

Rapide raises some interesting implementation issues. In particular, in a distributed implementation, the partial order must be maintained in a distributed fashion and communicated between processors as needed. There are also some interesting issues in the implementation of clocks for real-time constraints (not discussed here).

Constructing the causality partial order

Rapide computation involves explicit manipulation of a partial order of events. Therefore, the implementation must encode and store a partial order. An algorithm developed independently by Fidge [Fidge 88, Fidge 91] and Mattern [Mattern 88] may be used to encode the partial order without extra synchronization, communication links, or events, and without a central timestamping process. However, if there are n processes, the algorithm requires vectors of length n. Although the number of Rapide processes may change during computation, we describe this algorithm as if the number, n, is fixed.

In the Fidge-Mattern algorithm, each of the n processes maintains an integer vector of length n. Intuitively, the ith component of this vector represents an approximation of the ith process's event counter. The FM algorithm proceeds as follows: each process increments its own component of the vector at every local event; every message carries the sender's current vector as a timestamp; and on receiving a message, a process replaces its vector with the componentwise maximum of its own vector and the message's timestamp. The vector held by a process when an event occurs serves as that event's timestamp.

With this algorithm in place, event e comes before event f in the causal ordering iff every component of the e vector is less than or equal to the corresponding component of the f vector, and at least one component is strictly less.
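
Here is a compact Python sketch of these rules (our own rendering of the standard Fidge-Mattern update; the process and event names are made up):

    class Process:
        def __init__(self, pid, n):
            self.pid, self.v = pid, [0] * n

        def local_event(self):
            self.v[self.pid] += 1                # step this process's counter
            return tuple(self.v)                 # the event's timestamp

        def send(self):
            return self.local_event()            # timestamp travels on the message

        def receive(self, stamp):
            self.v = [max(a, b) for a, b in zip(self.v, stamp)]
            return self.local_event()            # componentwise max, then step

    def causally_precedes(e, f):
        """e -> f iff e's vector <= f's componentwise, strictly somewhere."""
        return (all(a <= b for a, b in zip(e, f)) and
                any(a <  b for a, b in zip(e, f)))

    p, q = Process(0, 2), Process(1, 2)
    e1 = p.send()             # (1, 0)
    f1 = q.local_event()      # (0, 1): concurrent with e1
    f2 = q.receive(e1)        # (1, 2): causally after e1
    print(causally_precedes(e1, f2), causally_precedes(e1, f1))  # True False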

Rapide does not need to compare arbitrary events for causality. Since each module must declare its in and out actions, it is possible to analyze the set of actions that must be transmitted from one process to another. Combined with the fact that Rapide only requires the causal order between events arriving at a common receiver, this makes several optimizations possible. Some optimizations that have been explored (see [MSV91]) are:

These optimizations can make a significant difference. On an example where an unoptimized prototype required 1024 vector components, optimization made it possible to reduce this to 118 vector components.

Maintaining "orderly observation"

Fidge-Mattern vectors make it possible to determine causality between two events. However, in a distributed implementation, there is still the real possibility that events may arrive out of causal sequence. This can create problems for constraints such as "when an A event happens, then a B event must occur before a C event can happen" -- a violation of this constraint might be erroneously detected if C arrives before B. (It is possible for a reactive process to execute in response to a failure of such a constraint, I believe.)

The problem of arranging the arrival of events so that each process sees events in a manner consistent with the global partial order is called "orderly observation."

There are several ways that this problem could be approached. One is to ensure that events arrive at each processor in a temporal order that is consistent with the partial order. Another is to provide some sort of rollback mechanism, in case a later event invalidates an earlier computation. (Potentially very expensive in a distributed environment, I think, since a rollback of one process may trigger rollbacks of others.) A third is for each event to contain enough information to determine whether there are causally earlier events.
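
To illustrate the third approach, here is a Python sketch of our own. It assumes each event carries a vector timestamp in the usual causal-delivery convention, where component i counts the events generated so far by process i; an event is buffered until all of its causal predecessors have been delivered.

    class OrderlyObserver:
        def __init__(self, n):
            self.delivered = [0] * n   # events delivered so far, per process
            self.buffer = []

        def deliverable(self, sender, stamp):
            # the next event from its sender, and no missing predecessors
            return (stamp[sender] == self.delivered[sender] + 1 and
                    all(stamp[i] <= self.delivered[i]
                        for i in range(len(stamp)) if i != sender))

        def receive(self, sender, stamp):
            self.buffer.append((sender, stamp))
            progress = True
            while progress:            # drain everything that has become ready
                progress = False
                for item in list(self.buffer):
                    if self.deliverable(*item):
                        self.buffer.remove(item)
                        self.delivered[item[0]] = item[1][item[0]]
                        print("delivered", item)
                        progress = True

    obs = OrderlyObserver(2)
    obs.receive(1, (1, 1))   # buffered: needs process 0's first event first
    obs.receive(0, (1, 0))   # delivers (1, 0), then the buffered (1, 1)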

The current Rapide implementation (as of the reference consulted) uses a global FIFO queue. Processes must synchronize with the queue when they generate and receive events. This sounds like a serious bottleneck, but the actual consequences of this have not been explored and compared with other approaches.

Possible term project: What changes in the pattern language would make it possible to simplify or eliminate this problem? For example, drop a "negation" operator and only look at "monotonic" patterns. (A pattern is "monotonic" if, whenever it matches a partial order, it also matches all larger partial orders.)

Clocks

(See Section 4.3 of D.C. Luckham, J. Vera, D. Bryan, L. Augustin and F. Belz, Partial orderings of event sets and their application to prototyping concurrent timed systems.)

References

[Fidge 88] C.J. Fidge, Timestamps on message-passing systems that preserve the partial ordering. Australian Computer Science Communications 10(1):55-66, Feb 1988.
[Fidge 91] C.J. Fidge, Logical time in distributed systems. Computer 24(8):28-33, Aug 1991.
[Mattern 88] F. Mattern, Virtual time and global states of distributed systems. In M. Cosnard (ed.), Proc. Parallel and Distributed Algorithms, Elsevier Science Publications, 1988. Also Report SFB124P38/88, Dept. of Computer Science, Univ. of Kaiserslautern.
[MSV91] S. Meldal, S. Sankar and J. Vera, Exploiting locality in maintaining potential causality. In Proc. 10th ACM Symp. on Principles of Distributed Computing, Montreal, Canada, Aug 1991, pages 231-239.

--------

Can we consider a remote procedure call as a pair of events, a call event and a return event? Maybe if we also have a "suspend" or blocking primitive. But should they really be implemented the same way? (See related discussion in the context of Actors.)

Compare to Linda's tuple space.
