Multitasking is the ability to execute multiple separate tasks in a
fashion that is seemingly simultaneous. Note the phrase "seemingly
simultaneous." Short of a multiprocessor system, there is no way to
make a single processor execute multiple tasks at the same time.
However, there is a way to create a system that seems to execute
multiple tasks at the same time.
The secret is to divide up the processor's time, spending a slice of
time on each of the tasks on a regular basis. The result is
the appearance that the processor is executing multiple tasks, when in
actuality the processor is just switching between the tasks too quickly
to be noticed.
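As a minimal sketch of this idea (the task functions and slice counts here are hypothetical), a scheduler can simply rotate through the tasks, giving each one a short turn per pass:

```c
#include <assert.h>

/* Hypothetical sketch: four tasks, each advanced one small step per turn.
   Rotating quickly through them makes all four appear to run at once. */

#define NUM_TASKS 4

static int progress[NUM_TASKS];     /* work completed by each task */

static void run_task_slice(int id)  /* one time slice of work for task id */
{
    progress[id]++;                 /* stand-in for the task's real work */
}

/* Divide processor time by rotating through the tasks, slice by slice. */
static void scheduler(int total_slices)
{
    for (int slice = 0; slice < total_slices; slice++)
        run_task_slice(slice % NUM_TASKS);
}
```

After 40 slices, each of the four tasks has received exactly 10 turns, so all of them make steady, roughly equal progress.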
As an example, consider four cars driving on a freeway. Each car has
a driver and a desired destination, but no engine. A repair truck
arrives, but it only has one engine. For each car to move toward its
destination, it must use a common engine, shared with the other cars on
the freeway. (See Figure 2.1 below.)
Now in one scenario, the engine could be given to a single car,
until it reaches its destination, and then transferred to the next car
until it reaches its destination, and so on until all the cars get
where they are going. While this would accomplish the desired result,
it does leave the other cars sitting on the freeway until the car with
the engine finishes its trip. It also means that the cars would not be
able to interact with each other during their trips.
A better scenario would be to give the engine to the first car for a
short period of time, then move it to the second for a short period,
then the third, then the fourth, and then back to first, continuing the
rotation through the cars over and over. In this scenario, all of the
cars make progress toward their destinations.
They won't make the same rate of progress that they would if they
had exclusive use of the engine, but they all do move together. This
has several advantages: the cars travel at a similar rate, all of
the cars complete their trips at approximately the same time, and the
cars are close enough during their trips to interact with each other.
Figure 2.1. Automotive multitasking
This scenario is, in fact, the common method for multitasking in an
operating system. A task is granted a slice of execution time, then
halted, and the next task begins to execute. When its time runs out, a
third task begins executing, and so on.
While this is an over-simplification of the process, it is the basic
underlying principle of a multitasking operating system: multiple
programs operating within small slices of time, with a central control
that coordinates the changes. The central control manages the switching
between the various tasks, handles communications between the tasks,
and even determines which tasks should run next.
This central control is in fact the multitasking operating system.
If we plan to develop software that can multitask without an operating
system, then our design must include all of the same elements of an
operating system to accomplish multitasking.
Four Basic Requirements of Multitasking
The three basic requirements of a multitasking system are context
switching, communications, and managing priorities. To these three
functions, a fourth, timing control, must be added to manage
multitasking in a real-time environment. Functions to handle each of these
requirements must be developed within a system for that system to be
able to multitask in real time successfully.
To better understand the requirements, we will start with a general
description of each requirement, and then examine how the two main
classes of multitasking operating systems handle the requirements.
Finally, we'll look at how a stand-alone system can manage the
requirements without an operating system.
Context Switching. When a processor is
executing a program, several registers contain data associated with the
execution. They include the working registers, the program counter, the
system status register, the stack pointer, and the values on the stack.
For a program to operate correctly, each of these registers must have
the right data and any changes caused by the execution of the program
must be accurately retained. There may also be additional data: variables
used by the program, intermediate values from a complex calculation, or
even hidden variables used by utilities from a higher-level language
used to generate the program. All of this information is considered the
program, or task, context.
When multiple tasks are multitasking, it is necessary to swap in and
out all of this information or context, whenever the program switches
from one task to another. Without the correct context, the program that
is loaded will have problems: RETURNs will not go to the right address,
comparisons will give faulty results, or the microcontroller could even
lose its place in the program.
To make sure the context is correct for each task, a specific
function in the operating system, called the Context Switcher, is needed. Its
function is to collect the context of the previous task and save it in
a safe place. It then has to retrieve the context of the next task and
restore it to the appropriate registers. In addition to the context
switcher, a block of data memory sufficient to hold the context of each
task must also be reserved for each task operating.
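The context storage described above can be sketched as follows. The register set is a placeholder, since the actual registers to be saved depend on the processor:

```c
#include <assert.h>

/* Hypothetical sketch of per-task context storage. The register names
   are placeholders; a real context switcher saves whatever registers
   the particular CPU requires. */

#define NUM_TASKS 3

typedef struct {
    unsigned pc;          /* saved program counter   */
    unsigned status;      /* saved status register   */
    unsigned sp;          /* saved stack pointer     */
    unsigned w[4];        /* saved working registers */
} context_t;

static context_t ctx[NUM_TASKS];  /* one reserved block per task       */
static context_t cpu;             /* stand-in for the live registers   */
static int current = 0;

/* Save the outgoing task's context, then restore the incoming one. */
static void context_switch(int next)
{
    ctx[current] = cpu;           /* collect the previous task's context */
    cpu = ctx[next];              /* restore the next task's context     */
    current = next;
}
```

Each task's block must be large enough to hold the full register set, which is why context storage grows with the number of tasks.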
When we talk about multitasking with an operating system in the next
section, one of the main differentiating points between operating systems
is the event that triggers the context switcher, and the effect that
event has on both the context switcher and the system in general.
Communications. Another requirement of a
multitasking system is the ability of the various tasks in the system
to reliably communicate with one another. While this may seem to be a
trivial matter, it is the very nature of multitasking that makes the
communications between tasks difficult. Not only do the tasks never
execute simultaneously, but the receiving task may not be ready to
receive when the sending task transmits.
The rate at which the sending task is transmitting may be faster
than the receiving task can accept the data. The receiving task may not
even accept the communications. These complications, and others, result
in the requirement for a communications system between the
various tasks. Note: the generic term "intertask communications" will
typically be used when describing the data passed through the
communications system and the various handshaking protocols used.
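One simple form of intertask communication is a flagged mailbox: the sender posts data only when the box is empty, and the receiver takes it only when it is full. This sketch is illustrative, not a specific operating system's API:

```c
#include <assert.h>

/* Hypothetical two-flag handshake between tasks that never execute
   simultaneously: the flag tells each side whether the other is ready. */

typedef struct {
    int data;
    int full;          /* 1 = data waiting, 0 = mailbox empty */
} mailbox_t;

static mailbox_t mbox;

static int send(mailbox_t *m, int value)   /* returns 1 on success */
{
    if (m->full)
        return 0;      /* receiver has not taken the last value: hold */
    m->data = value;
    m->full = 1;
    return 1;
}

static int receive(mailbox_t *m, int *value)  /* returns 1 on success */
{
    if (!m->full)
        return 0;      /* nothing waiting */
    *value = m->data;
    m->full = 0;
    return 1;
}
```

A sender that transmits faster than the receiver accepts simply sees `send` fail and retries on a later slice, which is the handshaking the text refers to.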
Managing Priorities. The priority manager operates in
concert with the context switcher, determining which tasks should be
next in the queue to have execution time. It bases its decisions on the
relative priority of the tasks and the current mode of operation for
the system. It is in essence an arbitrator, balancing the needs of the
various tasks based on their importance to the system at any given time.
In larger operating systems, system configuration, recent
operational history, and even statistical analysis of the programs can
be used by the priority manager to set the system's priorities. Such a
complicated system is seldom required in embedded programming, but some
method for shifting emphasis from one task to another is needed for the
system to adapt to its changing needs.
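A minimal priority manager can be sketched as a search for the highest-priority ready task; the priority values here are arbitrary examples, and shifting emphasis is just a matter of rewriting them:

```c
#include <assert.h>

/* Hypothetical sketch: pick the ready task with the highest priority.
   Reassigning the priority values shifts emphasis between tasks as the
   system's mode of operation changes. */

#define NUM_TASKS 4

static int priority[NUM_TASKS] = { 2, 5, 1, 4 };  /* example values */
static int ready[NUM_TASKS]    = { 1, 1, 1, 1 };  /* 1 = wants to run */

static int next_task(void)      /* returns -1 if nothing is ready */
{
    int best = -1;
    for (int t = 0; t < NUM_TASKS; t++)
        if (ready[t] && (best < 0 || priority[t] > priority[best]))
            best = t;
    return best;
}
```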
Timing Control. The final requirement for
real-time multitasking is timing control. It is
responsible for the timing of the task's execution. Now, this may sound
like just a variation on the priority manager, and the timing control
does interact with the priority manager to do its job.
But while the priority manager determines which tasks are next, it
is the timing control that determines the order of execution, setting
when the task executes. The distinction between the roles can be
somewhat fuzzy. However, the main point to remember is that the timing
control determines when a
task is executed, and it is the priority manager that determines if
the task is executed.
Balancing the requirements of the timing control and the priority
manager is neither simple nor easy. After all, real-time systems often
have multiple asynchronous tasks, operating at different rates,
interacting with each other and the asynchronous real world. However,
careful design and thorough testing can produce a system with a
reasonable balance between timing and priorities.
To better understand the requirements of multitasking, let's take a
look at how two different types of operating systems handle
multitasking. The two types of operating system are preemptive and cooperative. Both utilize a context switcher to
swap one task for another; the difference is the event that triggers
the context switch.
A preemptive operating system typically uses a
timer-driven interrupt, which calls the context switcher through the
interrupt service routine. A cooperative operating system
relies on subroutine calls by the task to periodically invoke the
context switcher. Both systems employ the stack to capture and retrieve
the return address; it is just the method that differs. However, as we
will see below, this creates quite a difference in the operation of the
two systems.
Of the two systems, the more familiar is the preemptive style of
operating system. This is because it uses the interrupt mechanism
within the microcontroller in much the same way as an interrupt service
routine does.
When the interrupt fires, the current program counter value is
pushed onto the stack, along with the status and working registers. The
microcontroller then calls the interrupt service
routine, or ISR, which determines the cause of the interrupt,
handles the event, and then clears the interrupt condition. When the
ISR has completed its task, the return address, status and register
values are then retrieved and restored, and the main program continues
on without any knowledge of the ISR's execution.
The difference between the operation of the ISR and a preemptive
operating system is that the main program that the ISR returns to is
not the same program that was running when the interrupt occurred.
That's because, during the interrupt, the context switcher swaps in the
context for the next task to be executed. So, basically, each task is
operating within the ISR of every other task. And just like the program
interrupted by the ISR, each task is oblivious to the execution of all
the other tasks. The interrupt driven nature of the preemptive
operating system gives rise to some advantages that are unique to the
preemptive operating system:
- The slice of time that each task is allocated is strictly
regulated. When the interrupt fires, the current task loses access to
the microcontroller and the next task is substituted. So, no one task
can monopolize the system by refusing to release the microcontroller.
- Because the transition from one task to the next is driven by
hardware, it is not dependent upon the correct operation of the code
within the current task. A fault condition that corrupts the program
counter in one task is unlikely to corrupt the other tasks,
provided the corrupted task does not trample on another task's variable
space. The other tasks in the system should still operate, and the
operating system should still swap them in and out on time. Only the
corrupted task should fail. While this is not a guarantee, the
interrupt nature of the preemptive system does offer some protection.
- The programming of the individual tasks can be linear, without
any special formatting to accommodate multitasking. This means
traditional programming practices can be used for development, reducing
the amount of training required to bring on-board a new designer.
However, because the context switch is asynchronous to the task
timing, meaning it can occur at any time during the task execution,
complex operations within the task may be interrupted before they
complete, so a preemptive operating system suffers from some
disadvantages as well:
- Multibyte updates to variables and/or peripherals may not
complete before the context switch, leaving variable updates and
peripheral changes incomplete. This is the reason preemptive operating
systems have a communications manager to handle all communications. Its
job is to only pass on updates and changes that are complete, and hold
any that did not complete.
- Absolute timing of events in the task cannot rely on execution
time. If a context switch occurs during a timed operation, the time
between actions may include the execution time of one or more other
tasks. To alleviate this problem, timing functions must rely on an
external hardware function that is not tied to the task's execution.
- Because the operating system does not know which variables
are in use when the context switch occurs, any and all variables used
by the task, including any variables specific to the high-level
language, must be saved as part of the context. This can significantly
increase the storage requirements for the context switcher.
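The multibyte-update hazard from the first disadvantage above can be sketched as a 16-bit value written one byte at a time, with a completion flag standing in for the communications manager (the names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of a torn update: a 16-bit value stored as two
   bytes. If a context switch lands between the two writes, a reader in
   another task would see a half-updated value; a completion flag lets
   the "manager" pass on only finished updates. */

static uint8_t lo, hi;        /* the shared 16-bit value, byte by byte */
static int     complete = 1;  /* 0 while an update is in progress      */

static void write_low(uint8_t v)  { complete = 0; lo = v; }
static void write_high(uint8_t v) { hi = v; complete = 1; }

/* Release the value only when the update is complete; otherwise the
   caller keeps its previous copy. */
static int read_if_complete(uint16_t *out)
{
    if (!complete)
        return 0;
    *out = (uint16_t)((hi << 8) | lo);
    return 1;
}
```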
While the advantages of the preemptive operating system are
attractive, the disadvantages can be a serious problem in a real-time
system. The communications problems will require a communications
manager to handle multibyte variables and interfaces to peripherals.
Any timed event will require a much more sophisticated timing control
capable of adjusting the task's timing to accommodate specific timing
requirements.
And, the storage requirements for the context switcher can run
upwards of 10 to 30 bytes per task, no small amount of memory when 5
to 10 tasks are running at the same time. All in all, a preemptive
system operates well for a PC, which has large amounts of data memory
and plenty of program memory to hold special communications and timing
handlers. However, in real-time microcontroller applications, the
advantages are quickly outweighed by the operating system's complexity.
The second form of multitasking system is the cooperative operating system.
In this operating system, the event triggering the context switch is a
subroutine call to the operating system by the task currently
executing. Within the operating system subroutine, the current context
is stored and the next is retrieved. So, when the operating system
returns from the subroutine, it will be to an entirely different task,
which will then run until it makes a subroutine call to the operating
system. This places the responsibility for timing on the tasks
themselves. They determine when they will release the microcontroller
by the timing of their call to the operating system, thus the name
cooperative. This solves some of the more difficult problems
encountered in the preemptive operating system:
- Multibyte writes to variables and peripherals can be completed
prior to releasing the microcontroller, so no special communications
handler is required to oversee the communications process.
- The timed events, performed between calls to the
operating system, can be based on execution time, eliminating the need
for external hardware-based delay systems, provided a call
to the operating system is not made between the start and end of the
timed event.
- The context storage need only save the current address and the
stack. Any variables required for statement execution, status, or even
task variables do not need to be saved as all statement activity is
completed before the statement making the subroutine call is executed.
This means that a cooperative operating system has a significantly
smaller context storage requirement than a preemptive system. This also
means the context switcher does not need intimate knowledge about
register usage in the high-level language to provide context storage.
However, the news is not all good; there are some drawbacks to the
cooperative operating system that can be just as much a problem as the
preemptive operating system:
- Because the context switch requires the task to make a call to
the operating system, any corruption of the task execution, due to EMI,
static, or programming errors, will cause the entire system to fail.
Without the voluntary call to the operating system, a context switch
cannot occur. Therefore, a cooperative operating system will typically
require an external watchdog function to detect and recover from system
failures.
- Because the time of the context switch is dependent on the flow
of execution within the task, variations in the flow of the program can
introduce variations into the system's long-term timing. Any timed
events that span one or more calls to the operating system will still
require an external timing function.
- Because the periodic calls to the operating system are the means
of initiating a context switch, it falls to the designer to evenly
space the calls throughout the programming for all tasks. It also means
that if a significant change is made in a task, the placement of the
calls to the operating system may need to be adjusted. This places a
significant overhead on the designer to ensure that the execution times
allotted to each task are reasonable and approximately equal.
As with the preemptive system, the cooperative system has several
advantages, and several disadvantages as well. In fact, if you examine
the lists closely, you will see that the two systems have some
advantages and disadvantages that are mirror images of each other. The
preemptive system's context switch can occur anywhere within a task,
creating completion problems. The cooperative system gives the designer
the power to determine where and when the context switch occurs, but it
suffers in its handling of fault conditions. Both suffer from
complexity in relation to timing issues, both require some specialized
routines within the operating system to execute properly, and both
require some special design work by the designer to implement and
maintain.
The third way: state machine multitasking
So, if preemptive and cooperative systems have both good and bad
points, and neither is the complete answer to writing multitasking
software, is there a third alternative? The answer is yes, a compromise
system designed in a cooperative style with elements of the preemptive
system. Specifically, the system uses state machines for the individual
tasks with the calls to the state machine regulated by a
hardware-driven timing system. Priorities are managed based on the
current value in the state variables and the general state of the
system. Communications are handled through a simple combination of
handshaking protocols and overall system design.
The flowchart of the collective system is shown in Figure 2.2.
Within a fixed infinite loop, each state machine is called based on its
current priority and its timing requirements. At the end of each state,
the state machine executes a return and the loop continues onto the
next state machine. At the end of the loop, the system pauses, waiting
for the start of the next pass, based on the timeout of a hardware
timer. Communications between the tasks are handled through variables,
employing various protocols to guarantee the reliable transfer of data.
Figure 2.2. State machine multitasking
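The loop of Figure 2.2 can be sketched as follows. The two-task, two-state machines are hypothetical placeholders, and a simple pass counter stands in for the hardware timer that paces the loop:

```c
#include <assert.h>

/* Hypothetical sketch: each task is a state machine that executes one
   state per call and returns; its state variable is the only context
   carried between calls. */

#define NUM_TASKS 2

static int state[NUM_TASKS];  /* per-task state variable (the whole context) */
static int work[NUM_TASKS];   /* visible progress, for illustration only     */

static void task(int id)      /* one short state per call, then return */
{
    switch (state[id]) {
    case 0: work[id] += 1; state[id] = 1; break;  /* first step  */
    case 1: work[id] += 2; state[id] = 0; break;  /* second step */
    }
}

/* The fixed loop: call every state machine once per pass. In a real
   system the loop would idle at the bottom until the hardware timer
   starts the next pass. */
static void run_loop(int passes)
{
    for (int tick = 0; tick < passes; tick++)
        for (int id = 0; id < NUM_TASKS; id++)
            task(id);
}
```

Each task resumes exactly where its state variable says it left off, which is why the state variable is the only context that must survive between passes.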
As with both the preemptive and cooperative systems, there are also
a number of advantages to a state machine-based system:
- The entry and exit points are fixed by the design of the
individual states in the state machines, so partial updates to
variables or peripherals are a function of the design, not the timing
of the context switch.
- A hardware timer sets the timing of each pass through the system
loop. Because the timing of the loop is constant, no specific delay
timing subroutines are required for the individual delays within the
task. Rather, counting passes through the loop can be used to set
individual task delays.
- Because the individual segments within each task are accessed via
a state variable, the only context that must be saved is the state
variable itself.
- Because the design leaves slack time at the end of the loop and
the start of the loop is tied to an external hardware timer, reasonable
changes to the execution time of individual states within the state
machine do not affect the overall timing of the system.
- The system does not require any third-party software to
implement, so no license fees or specialized software are required to
generate the system.
- Because the designer designs the entire system, it is completely
scalable to whatever program and data memory limitation may exist.
There is no minimal kernel required for operation.
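The pass-counting delay from the second advantage above can be sketched like this, assuming a hypothetical fixed loop period of 1 ms:

```c
#include <assert.h>

/* Hypothetical sketch: because every pass through the system loop takes
   one fixed timer period, a task can time a delay simply by counting
   passes. Assumes a 1 ms loop period, so 250 passes = 250 ms. */

#define LOOP_MS 1               /* assumed fixed loop period in ms */

static int delay_count;

/* Called once per loop pass; returns 1 when the delay has elapsed. */
static int delay_done(int ms)
{
    if (++delay_count >= ms / LOOP_MS) {
        delay_count = 0;        /* reset for the next timed event */
        return 1;
    }
    return 0;
}
```

No dedicated delay subroutine or extra hardware timer is needed for such a delay; the loop's own pacing provides the time base.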
However, just like the other operating systems, there are a few
disadvantages to the state machine approach to multitasking:
- Because the system relies on the state machine returning at the
end of each state, EMI, static, and programming flaws can take down all
of the tasks within the system. However, because the state variable
determines which state is being executed, and it is not affected by a
corruption of the program counter, a watchdog timer-driven reset can
recover and restart the uncorrupted tasks without a complete restart of
the entire system.
- Additional design time is required to create the state machines,
communications, timing, and priority control system.
The resulting state machine-based multitasking system is a
collection of tasks already broken into convenient, function-sized
time slices, with fixed hardware-based timing and a simple priority and
communication system specific to the design. Because the overall design
for the system is geared specifically to the needs of the system, and
not generalized for all possible designs, the operation is both simple
and reliable if designed correctly.
To read Part 1 in this series, go to "... simple MCU state machine
constructs."
Author Keith Curtis is principal applications engineer at Microchip.
Used with the permission of the publisher, Newnes/Elsevier, this
series of two articles is based on Chapter 2, "Basic Embedded
Programming Concepts," of "Embedded multitasking with small
microcontrollers," by Keith Curtis.