Real-Time Computing
For Human Computer Interfacing

November 5-7, 1996

Copyright 1996, Perry R. Cook, Princeton University


* Permission to make digital or hard copies of part or all
of this work for personal or classroom use is granted with
or without fee provided that copies are not made or
distributed for profit or commercial advantage and that copies
bear this notice and full citation on the first page. To copy
otherwise, to republish, to post on services, or to redistribute
to lists, requires specific permission and/or a fee.




I. What's Real-Time?

There is considerable disparity in the definition of a "Real-Time" computer system, depending on the particular industry and field of use, the history of computer usage in that industry, and many other factors.

The general agreement is that there is some notion of time-critical processing. That is, certain tasks that the computer performs must be done in a timely fashion. "Timely" can be defined somewhat loosely, as in the case of a request for an account balance at a bank automatic teller machine, or more precisely, as in the case of the control systems in a modern jet airplane. For simplicity, we'll separate real-time systems into three groups:

1) Business Data Processing

This first group is in many ways simply a speed improvement on the classical information batch-processing used by banks, insurance companies, etc., where queries are posted to a central computer or network, and responses come back containing the desired information. The real-time aspect is that a human can post the query using a terminal, the waiting time is short enough that the human can wait at the terminal until the response comes back, and the response is displayed on a terminal or other device near the human operator.

2) Communications Switching and Process Control

This second group contains two types of systems, but the similarity is that some process is taking place which usually involves flow (data, water in a pipe, parts on a conveyor belt, etc.), and a computer is used to control parameters of that flow. The parameters could be connections or addresses, as in the case of telephone communications, rates of flow, as in the case of a commercial gas pump, or both, as in the case of the complex routing and flow-control systems of a modern oil refinery or beer brewery.

3) Closed Loop Control Systems

This third system type involves online monitoring of processes (usually physical) and external inputs, and closing a feedback path with a computer which controls the processes to minimize some error criterion. This can be closely related to the Process Control system of the second type listed above.

Of course, real-world systems are often combinations of these basic types.

A telephone service provider would use systems of both the first and second types, for example. Long distance phone calls would first be connected to a local switch, then information would be posted as queries to the Business Data Processing system (to check billing information), then further switching systems would take over to connect the call to the final destination. Every month the business data processing system would generate bills for each customer, but probably not in real time. At any time, however, customers might be able to call in and check how many hours remain on a prepaid long distance calling plan.

A jet airplane would likely include systems of both the second and third types, with closed loop control being used to ensure accuracy and stability in steering, etc., and process control being used to ensure steady and economical fuel flow to the engines.

The Encyclopedia of Computer Science (Van Nostrand Reinhold, 1993) gives a simple table of average response times for various processing modes and models:

    Computer Processing Modes and Times:

    Card Oriented Batch                100-10000 s
    Keyboard Oriented Batch            1-100 s
    Interactive Computing              1-10 s
    Online Inquiry and Transactions    1-10 s
    Message Switching                  0.1-10 s
    Data Acquisition and Control       0.01-10 s

Which of these are real-time? Clearly, by modern standards, response times of 5 seconds or less would be required of any business data processing system carrying the label of Real-Time. But by the Nyquist Sampling Criterion, a control system with a 5 second response time (a sampling rate of 0.2 Hz) could at best control continuous processes which change no faster than once every 10 seconds. To ensure stability and control of rapidly varying processes, many modern feedback control systems operate at thousands of samples per second. So the lesson we can take from this disparity in "real times" is that different applications may have very different definitions.

Desktop computer companies, which have recently become more interested in the delivery of graphics and sound, have yet other definitions of real-time. The Windows 95 and NT programming literature basically describes the successful delivery of media in real time by saying "graphics and audio appear smooth, no objectionable jerkiness in visual appearance, not too many clicks or interruptions in the audio presentation, pre-recorded game sound effects should play with no noticeable delay, etc." (Here I'm paraphrasing many different sources and references for programmers.)

Silicon Graphics specifies real-time as the ability to synchronize the output of multiple types of media, specifically graphics animation, digital sound, and MIDI commands, accurate to a single audio sample (1/48000 of a second). At first this might seem like an extremely stringent criterion, but there is at least one missing ingredient in the description, and that is "latency." Again there are disagreements on this definition as well, but I'll use latency to mean the delay between an external control input and the measurable response at the display device(s). Display is used here to mean audio, video, or any other real-time output device. If media on a desktop computer is being controlled by inputs from a human or another computer, it is necessary to think about latency as well as the ability to synchronize.

Other computer companies that have paid significant attention to media display, such as Apple and NeXT, have slightly different definitions and specifications for real-time, and quite different means of addressing the problems of smoothness (not losing data), synchronization, and latency. The undeniable fact, however, is that ensuring smoothness and synchronization, especially in a multi-tasking environment, requires a tradeoff toward longer latency times. We will delve a little more deeply into this later in these notes.




II. Control Systems vs. Media Delivery

Control systems and some process control systems have the most stringent time requirements. In order to ensure stability and robust behavior, these systems must maintain an accurate periodic sampling of the system variables. The media delivery task described above can be viewed as a process control system, with the objective being to provide samples or frames of audio, video, MIDI control data, etc. to the output of the system in a predictable way. The basic media delivery task differs from most process control systems, however, in that once the parameters are selected (audio sampling rate, video frame rate, video resolution, etc.) the system usually doesn't control the rates of flow. The system does, however, potentially need to arbitrate resources between the media delivery tasks and other tasks which must be accomplished.

Some basic working requirements of Media Delivery could be summarized: Make it look/sound smooth. No clicks or pops in audio. No jerks in video. If things do get behind, degrade gracefully.

A basic working requirement for a Control System might be: Close a feedback loop with sensors and motors. Guarantee Stability.

Modern HCI is really both of these: we're interested in combining these two areas. We want to collect user input, process it in a timely fashion, and display something smoothly and convincingly (with small or controllable delay) in response. The human and computer make up a complete closed-loop feedback system. Many modern HCI systems are powered feedback control systems, and the human is capable of adding energy/gain to the system. Thus stability can be a consideration.
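
To make the "degrade gracefully" requirement a bit more concrete, here is a minimal sketch in C of one common policy: keep the audio queue fed first (an empty audio buffer is an audible click), and drop stale video frames when the system falls behind. The routines and counters (audio_blocks_queued, render_video_frame, etc.) are invented stand-ins for whatever a real platform provides, stubbed out here so the fragment compiles and runs.

    #include <stdio.h>

    /* Hypothetical platform calls, stubbed so the sketch compiles and runs. */
    static int queued_audio  = 0;   /* audio blocks waiting at the output    */
    static int frames_behind = 3;   /* pretend the video has fallen behind   */

    static int  audio_blocks_queued(void) { return queued_audio; }
    static void output_audio_block(void)  { queued_audio++; puts("audio block computed"); }
    static void render_video_frame(void)  { frames_behind = 0; puts("video frame drawn"); }

    #define AUDIO_LOW_WATER 2       /* refill whenever fewer blocks are queued */

    /* One pass of a media loop that degrades gracefully: audio is kept fed   */
    /* first, and stale video frames are dropped so only the newest is drawn. */
    static void media_loop_step(void)
    {
        while (audio_blocks_queued() < AUDIO_LOW_WATER)
            output_audio_block();

        if (frames_behind > 1)
            printf("dropping %d stale video frames\n", frames_behind - 1);
        render_video_frame();
    }

    int main(void)
    {
        media_loop_step();
        return 0;
    }

A real system would make these decisions against a clock and the actual output devices, but the ordering of concerns is the point.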




III. Control Systems:

More on this from Putnam at Stanford.




IV. Synchronous vs. Asynchronous Models
Polling vs. Interrupts

Synchronous real time systems are based on time-ordered external events. The processor is assumed to respond instantaneously to these external events. In practice, this condition is said to be met if the response time is much less than the time between external events.

Asynchronous real-time systems assume that external events occur at times which are elements of the real numbers (with the possibility of extremely dense event clusters), and the system is responsible for responding within some specified time bound.

To implement either type of system, we can poll, use interrupts, or use a combination. Each of these two forms of event input, however, lends itself more naturally to one or the other type of system. In a polled system, a computer loop looks periodically for external events, and if anything has changed since the last time, actions are taken. This lends itself quite naturally to synchronous real-time, because we are forcing the inputs to occur on specific time boundaries. Unless the loop is carefully written, however, we're not sure that the periodic checking is truly regular in time. Also, events which are shorter than the time between polls can be missed entirely without some type of hardware buffering. In interrupt-level processing, any external event causes the processor to yield to an Interrupt Service Routine (ISR), which is usually either a short program that is guaranteed to finish quickly, before any other tasks must be performed, or an even shorter program that simply places the event into a queue and returns execution to a higher-level program.
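
Here is a minimal sketch of the polled style in C. The hardware input register is simulated by a read_switches() stub (a hypothetical stand-in that plays back a canned sequence of switch states), and the loop simply compares the current reading with the previous one on each pass; an event that comes and goes between two polls is exactly the kind this structure can miss.

    #include <stdio.h>

    /* Hypothetical stand-in for reading a hardware input register;      */
    /* it just plays back a canned sequence of switch states.            */
    static unsigned char read_switches(void)
    {
        static const unsigned char canned[] = { 0x00, 0x00, 0x01, 0x01, 0x03, 0x02 };
        static int i = 0;
        return canned[i < 5 ? i++ : i];
    }

    int main(void)
    {
        unsigned char prev = read_switches();
        int poll;

        /* Polled loop: check the inputs periodically; if anything has    */
        /* changed since the last poll, take action.                      */
        for (poll = 0; poll < 6; poll++) {
            unsigned char now = read_switches();
            unsigned char changed = (unsigned char)(now ^ prev);  /* bits that flipped */
            if (changed)
                printf("poll %d: switches changed, bits 0x%02x\n", poll, changed);
            prev = now;
            /* a real loop would sleep or busy-wait here to fix the poll rate */
        }
        return 0;
    }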




V. Data and Processing Modules

The basic model of events that must be serviced leads to the notion of a Queue. A Queue can be as simple as a FIFO (first-in first-out) buffer, where input requests are placed in the buffer and serviced in order of their occurrence. This only makes sense if all inputs are of the same type or priority, and all tasks that are to be executed are also of the same type or priority.
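
A minimal sketch of such a FIFO, written as a fixed-size ring buffer in C (the integer event type and the buffer size are arbitrary choices for illustration):

    #include <stdio.h>

    #define QSIZE 8                 /* capacity; one slot is kept empty   */

    static int queue[QSIZE];
    static int head = 0;            /* next slot to read                  */
    static int tail = 0;            /* next slot to write                 */

    /* Returns 0 on success, -1 if the queue is full.                     */
    static int enqueue(int event)
    {
        int next = (tail + 1) % QSIZE;
        if (next == head)
            return -1;              /* full: caller decides what to drop  */
        queue[tail] = event;
        tail = next;
        return 0;
    }

    /* Returns 0 on success, -1 if the queue is empty.                    */
    static int dequeue(int *event)
    {
        if (head == tail)
            return -1;              /* nothing waiting                    */
        *event = queue[head];
        head = (head + 1) % QSIZE;
        return 0;
    }

    int main(void)
    {
        int e;
        enqueue(10);
        enqueue(20);
        enqueue(30);
        while (dequeue(&e) == 0)
            printf("servicing event %d\n", e);   /* serviced in arrival order */
        return 0;
    }

One slot is deliberately left unused so that a full queue and an empty queue can be told apart without keeping a separate count.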

A more common type of event queue also includes a notion of priority. In this type of queue, events would be placed in the queue along with an explicit device number or address, a priority number, and an execution deadline (a time beyond which, if the event hasn't been serviced, the system is considered to have failed). There can be many processes, all of which can place events into the queue, service them, and take them out. Or there can be many processes capable of "posting" requests to the queue, and only one master process which looks at the queue, determines which task should be performed next, allocates resources, and removes events once they have been serviced. Schemes abound for handling queues, but whatever architecture is used, debugging and verification should be part of the design. The more elaborate and complex a system for handling events in real time, perhaps using multiple processes which can write to and read from a common event queue and data pool, the more likely the system is to encounter fatal and hard-to-find errors like deadlocks, lost messages, recursive conditions that never halt, etc.
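
The fragment below sketches the kind of event record and selection rule such a prioritized queue implies. The fields and the tie-breaking rule (highest priority first, earliest deadline among equal priorities) are illustrative choices rather than a prescription; a real system would also check deadlines against a clock and flag any that have already been missed.

    #include <stdio.h>

    /* One pending request in the event queue.                            */
    struct event {
        int  device;                /* which input or output it concerns  */
        int  priority;              /* larger number = more urgent        */
        long deadline;              /* must be serviced before this time  */
        int  pending;               /* 1 while waiting to be serviced     */
    };

    /* Pick the next event to service: highest priority, and among equal  */
    /* priorities the earliest deadline.  Returns an index, or -1 if      */
    /* nothing is pending.                                                 */
    static int pick_next(struct event *tab, int n)
    {
        int best = -1, i;
        for (i = 0; i < n; i++) {
            if (!tab[i].pending)
                continue;
            if (best < 0
                || tab[i].priority >  tab[best].priority
                || (tab[i].priority == tab[best].priority
                    && tab[i].deadline < tab[best].deadline))
                best = i;
        }
        return best;
    }

    int main(void)
    {
        struct event tab[3] = {
            { 1, 1, 500, 1 },       /* low priority, generous deadline    */
            { 2, 3, 900, 1 },       /* high priority                      */
            { 3, 3, 200, 1 },       /* high priority, tighter deadline    */
        };
        int i;

        while ((i = pick_next(tab, 3)) >= 0) {
            printf("servicing device %d (priority %d, deadline %ld)\n",
                   tab[i].device, tab[i].priority, tab[i].deadline);
            tab[i].pending = 0;     /* remove the event once serviced     */
        }
        return 0;
    }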

If we are using interrupts to handle critical inputs, there are many options for interrupt architecture and handling. Interrupts can be single-priority or multi-level, in hardware or software. Interrupts can be nestable, where during one ISR another interrupt can cause a branch to a second ISR; upon returning from the second ISR, the first ISR picks up where it left off. These options and more can be implemented in hardware, software, or a mixture. Some dedicated processors designed specifically for real-time computing are differentiated from other processors specifically by the hardware they include to deal with interrupts.

If we are to use multiple processes in a real-time system, they must communicate at one or more levels. The queue is one mechanism of communication between logical processes, but often processes need to relay information regarding state and data. Shared memory is one mechanism, especially in a single-processor system, but it can make a system hard to debug because it may be difficult to determine which process changed memory in an undesirable way. Message passing is a more modern, object-oriented way of passing information between processes. For multi-processor systems, shared memory rapidly becomes expensive, and a hardware data-streaming method called Direct Memory Access (DMA) is often used.
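
As one concrete, if simplified, picture of message passing, the sketch below sends a small fixed-size message structure from one process to another through a POSIX pipe created before a fork(). The message fields are invented for illustration; real systems use whatever the operating system provides (mailboxes, message queues, sockets, and so on), but the discipline is the same: state travels in explicit messages rather than through memory that both sides can silently modify.

    #include <stdio.h>
    #include <unistd.h>

    /* A small fixed-size message; the fields are illustrative only.      */
    struct msg {
        int sender;                 /* logical process id                 */
        int type;                   /* what kind of request this is       */
        int value;                  /* one word of payload                */
    };

    int main(void)
    {
        int fd[2];
        struct msg m;

        if (pipe(fd) != 0)
            return 1;

        if (fork() == 0) {          /* child: post one message and exit   */
            struct msg out = { 2, 7, 42 };
            write(fd[1], &out, sizeof out);
            return 0;
        }

        /* parent: block until a message arrives, then act on it          */
        if (read(fd[0], &m, sizeof m) == (ssize_t)sizeof m)
            printf("got message type %d, value %d, from process %d\n",
                   m.type, m.value, m.sender);
        return 0;
    }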

Also for multi-processor systems, we must decide what topology to use to connect the different system components. This brings us to the question of networks. The most common connection schemes are the Star (one master and many slave processors; the master has a separate port for each slave), the Ring (each processor gets data from the neighbor on its left and gives data to the neighbor on its right, with the ends wrapped around to close the ring), and busses, including the Common-Bus (all processors connect to a common bus; only one can write to it at any given time, but all can read) and the Multi-Bus (every processor connects to every other processor via dedicated busses; not very practical for large numbers of processors, because N*(N-1) pathways are required for N processors).
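
Just to put numbers on the N*(N-1) figure quoted above, the short program below prints the pathway count for the fully connected Multi-Bus scheme next to the single shared link of the Common-Bus scheme, for a few processor counts.

    #include <stdio.h>

    int main(void)
    {
        int n;

        /* Dedicated pathways needed to fully interconnect N processors    */
        /* (the N*(N-1) figure above), versus one shared bus.              */
        for (n = 2; n <= 32; n *= 2)
            printf("N = %2d   multi-bus pathways = %4d   common-bus = 1\n",
                   n, n * (n - 1));
        return 0;
    }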




VI. Operating Systems for Real-time

The past few sections have just scratched the surface of the level of complexity that can be encountered when designing and programming real-time systems. Online, while the system is running, attention must be paid to scheduling and queueing, interrupt handling, load monitoring, etc., plus the computation required to accomplish the required tasks. During development, a system must aid in debugging, optimization, etc. All of this points to the need for an operating system. There are, of course, a large number of real-time operating systems that have been created over the years. These are as varied as the applications they were designed to serve, or perhaps as varied as the theorems they were designed to test and verify. A short list of host-based real-time operating systems includes VxWorks, OS-9, VRTX, LynxOS, Chimera, and RT Mach. A short list of DSP operating systems (see the later section on DSP in these notes) includes MWave, VCOS, and SPOX.




VII. More on Multiple Processes

A common software model for handling multiple processes involves the use of a main loop and low-level interrupts. The main loop typically polls the less-critical inputs, services the queue by looking for tasks that need to be accomplished and passing control to processes which do the required work, takes events off the queue once they are completed, and otherwise waits around a lot. One or more Interrupt Service Routines respond to critical inputs and output requests. Variations on this model abound, with one very common system using a single clocked ISR as the master queue servicing routine. It is possible to construct a system which uses only interrupts (once the system is configured by a startup routine), or which operates only under the control of a main loop.
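
Below is a skeleton of that model in C. Since a portable C program has no real hardware interrupts, the "ISR" is simply called as an ordinary function; on real hardware it would be installed on an interrupt vector. It does nothing but post the event, and the main loop drains the queue and dispatches the work, as described above. The queue is the same sort of FIFO sketched in section V.

    #include <stdio.h>

    /* A tiny event FIFO, the same idea as the one sketched in section V. */
    #define QSIZE 8
    static int q[QSIZE];
    static int qhead = 0, qtail = 0;

    static int post(int ev)                /* called from the "ISR"        */
    {
        int next = (qtail + 1) % QSIZE;
        if (next == qhead)
            return -1;                     /* queue full                   */
        q[qtail] = ev;
        qtail = next;
        return 0;
    }

    static int fetch(int *ev)              /* called from the main loop    */
    {
        if (qhead == qtail)
            return -1;                     /* queue empty                  */
        *ev = q[qhead];
        qhead = (qhead + 1) % QSIZE;
        return 0;
    }

    /* "ISR": on real hardware this runs in response to an interrupt.  It */
    /* does the minimum possible work -- post the event -- and returns.   */
    static void input_isr(int raw_input)
    {
        post(raw_input);
    }

    int main(void)
    {
        int ev, i;

        for (i = 1; i <= 3; i++)
            input_isr(i * 100);            /* pretend the hardware fired   */

        /* Main loop: drain the queue, dispatch the work, then poll the    */
        /* less critical inputs and otherwise wait around.                 */
        while (fetch(&ev) == 0)
            printf("main loop: dispatching event %d\n", ev);

        return 0;
    }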


VIII. More on Multiple Processors

The use of multiple processors in real-time systems is motivated by many factors. The main ones include:

1) The overall task is segmented, in good engineering practice, into smaller sub-problems which can be more easily dealt with conceptually, and the processor types can be matched more appropriately to the functions they execute.

2) Response times can be improved, because a local processor can collect and interpret data before determining whether to disturb any other processors in the system.

3) Cost can be saved, because by matching the processors to the tasks at hand, a minimum cost-per-function can be achieved.

The use of multiple processors brings potential difficulties as well, however, including:

1) Multiple processors can increase complexity and indeterminacy. Debugging systems with multiple asynchronous processors, running different algorithms, possibly sharing memory or at least communicating information to each other, is difficult. If the multiple processors are not all of the same type, multiple sets of development tools which are not integrated with each other may need to be used simultaneously.

2) Depending on the selection of connection topology and hardware capabilities of the individual processors, response times can be degraded rather than improved. Passing data, synchronization, arbitration for busses and memory, etc. can all degrade the performance of a multi-processor system, compared to a single processor system.




IX. DSP, DSP Chips, Demos (Putnam at Stanford)




X. Microcontrollers

As with all definitions related to hardware and performance throughout the history of computing, the definition of a microcontroller has also changed somewhat. Common features have remained relatively constant, however. In the past, microcontrollers typically exhibited:

1) Small Word Size and Integer Math: This keeps the size, cost and power consumption down.

2) Low Level Language Interface: Typically Assembler, maybe C

3) Hardware Interrupt Support

4) Peripheral Devices Required for Inputs and Outputs

5) Low Cost

Examples of microcontrollers from the 1970's - 1980's include:


H8, 68xx, 65xx, Zx
8 bit data, 8 bit instruction, 2+K address
Clock Rate: 1-2 MHz
Language and Interface: Assembler via TTY or Hex Keypad
Typical System cost: $100.00

Those readers with some consumer microcomputer system experience might recognize that the 6800, 6502, Z80, and others from these families were actually the host processors resident in desktop computers like the early Apple I and II, the Commodore Vic and 64, the Atari 400 and 800, various Tandy and Radio Shack computers, etc. One relatively common thread in microcontroller history is that as a processor ends its life cycle as a main processor, it may just be beginning its life as a microcontroller. A mature processor that can be manufactured cheaply, and which has a long history of reliable software and tools often makes an excellent choice for a microcontroller.

Modern microcontrollers include many updated versions of historically popular microprocessors, and also some new processors designed specifically for use as microcontrollers. Many basic goals and features still persist, with some new additions:


1) Small Word Size (relatively), Integer Math (or not)
2) Higher Level Language Support: Assembler, C, Forth, BASIC, with tools
3) Peripheral Inputs Integrated
4) Low Cost

Examples of currently available microcontrollers:


Updated versions of 68xx, 65xx, Zx, + 680xx family
PIC Chip ($4-10.00)
The BASIC Stamp = PIC + More
16 bit data, high level instructions, 2K memory
Clock Rate: 4-20 MHz
Language and Interface: BASIC via PC Serial Port
System cost: $10-35.00




References:

Some Web References:


Educational Research Groups and Projects:


http://www.eecs.umich.edu/RTCL/ U. Michigan Real-Time Computing Lab
http://www.cs.cmu.edu/Groups/real-time/cmu.html Carnegie Mellon Real-Time Groups
http://www.elec-eng.leeds.ac.uk/realtime/List.html A Good OS List from Leeds

Commercial:
http://www.heurikon.com/whitePapers/ChoosingOS.html Heurikon
http://www.realtime-os.com/rtresour.html Resource List Compiled by E. Douglas Jensen

Some Non-Web References Made from Dead Trees:


"Encyclopedia of Computer Science"
A. Ralston and E. Reilly, eds.
New York: Van Nostrand Reinhold, 1993


"Real-Time Programming : Neglected Topics"
Caxton C. Foster.
Reading, Mass. : Addison-Wesley Pub. Co., c1981.


"Real-Time Software for Control : Program Examples in C"
David M. Auslander, Cheng H. Tham.
Englewood Cliffs, N.J. : Prentice Hall, c1990.


"Real-time systems, Specification, Verification, and Analysis,"
Mathai Joseph, ed.
Englewood Cliffs, N.J. : Prentice Hall, 1996


"Introduction to Real-time Software Design,"
S.T. Allworth and R.N. Zobel
New York, Springer Verlag, 1989