Physical systems are controlled by measuring several physical quantities, which are used to compute a feedback control law that determines how the actuators influence the physical system. The feedback control law is typically computed on a microprocessor, since digital implementations offer many advantages over analog ones (accuracy, flexibility, etc.).
Under digital implementations, one important question is to determine when to measure the physical quantities of interest, when to execute the control law, and when to update the actuators in order to achieve desired levels of control performance. Traditionally, engineers and researchers have opted for conservative strategies, such as periodic sampling of signals and periodic execution of control laws. However, due to the growing complexity of systems, more efficient strategies are required, since resources are usually shared between several subsystems. Our goal is to go beyond the periodic paradigm and, drawing inspiration from event-triggered control, to develop self-triggered feedback control laws that decide their next execution time based on the current state of the system. This approach considerably reduces resource utilization while ensuring stability and desired levels of control performance.
To illustrate some of the advantages of self-triggered control, let's look at a simple example: the control of a jet engine compressor. Below we can see the evolution of the pressure rise and the mass flow under a stabilizing feedback control law (the goal is to steer both quantities to the origin).
The performance of the two strategies is very similar: the blue and black lines, showing the evolution of the physical system under the self-triggered strategy, are almost indistinguishable from the red and green lines, showing its evolution under a periodic strategy. Yet when we observe the evolution of the actuator values over time, the differences become apparent.
At the beginning, both strategies update the actuator at the same rate, but as the system approaches the equilibrium point, the self-triggered strategy lengthens the times between executions.
The self-triggered strategy adapts the inter-execution times according to the current state of the system, and thus leads to a lower number of executions.
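The idea can be sketched in a few lines of code. The following is a minimal illustration for a scalar plant, not the jet engine model above: the plant, the feedback gain, and the triggering rule (wait longer when the state is close to the origin) are all illustrative choices, not the actual triggering conditions used in our work.

```python
import math

# Minimal sketch of a self-triggered control loop for the scalar plant
# x' = A*x + B*u with linear feedback u = K*x. All constants and the
# triggering heuristic below are illustrative.
A, B, K = 1.0, 1.0, -2.0          # plant parameters and feedback gain
TAU_MIN, TAU_MAX = 0.01, 0.5      # bounds on the inter-execution times

def next_execution_time(x):
    """Made-up triggering rule: the smaller |x|, the longer we can
    safely wait before executing the control law again."""
    if abs(x) < 1e-9:
        return TAU_MAX
    return min(TAU_MAX, max(TAU_MIN, 0.1 / abs(x)))

def simulate(x0, horizon=5.0):
    t, x = 0.0, x0
    executions = []                    # (execution time, chosen tau)
    while t < horizon:
        u = K * x                      # execute the control law now
        tau = next_execution_time(x)   # self-trigger: pick next time
        executions.append((t, tau))
        # hold u constant until the next execution (exact ZOH update)
        x = math.exp(A * tau) * x + (math.exp(A * tau) - 1) / A * (B * u)
        t += tau
    return executions

execs = simulate(1.0)
```

Running the sketch from x0 = 1 shows the behavior described above: the inter-execution times start small and grow as the state converges to the origin.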
Real-Time Scheduling of Self-Triggered Tasks
So far we haven't discussed an important point: how do we guarantee that self-triggered control tasks are executed before their deadlines, given that these deadlines change with the state of the system? And how do we take advantage of the reduced processor utilization that self-triggered control tasks provide? We are currently investigating several generalizations of the control server model, introduced by Cervin and Eker in 2003 [CE03], to address the self-triggered case.
Here is an illustration of these techniques for the scheduling of two jet engine control tasks, one periodic hard real-time task, and several soft real-time tasks. At the beginning, both control tasks require frequent executions to achieve the desired performance, and hence there is no spare CPU time for the soft tasks. As time evolves and the system approaches the equilibrium point, the relative deadlines grow, freeing CPU time for the soft tasks. At t=0.7s, a disturbance steers the system far away from the origin, and the scheduler tightens the relative deadlines accordingly to guarantee the required control performance, at the expense of delaying the soft tasks.
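The mechanism at play can be sketched as follows. This is a deliberately simplified model, not our scheduler: a single hard control-job stream is scheduled earliest-deadline-first, its relative deadline comes from a made-up function of the distance to equilibrium (with a disturbance injected at t = 0.7 s), and whatever slack remains goes to soft tasks.

```python
import heapq

# Illustrative EDF sketch: hard control jobs carry state-dependent
# relative deadlines (laxer near equilibrium, tighter after a
# disturbance); soft tasks consume only the remaining slack.
# All numbers are illustrative.
SLOT = 0.01  # scheduling quantum in seconds; each job needs one slot

def relative_deadline(dist):
    """Made-up rule: tighter deadlines far from equilibrium."""
    return min(0.2, max(0.02, 0.04 / max(dist, 0.2)))

def run(horizon=1.0):
    t, soft_time = 0.0, 0.0
    ready = []               # heap of (absolute deadline, remaining work)
    next_release = 0.0
    dist = 1.0               # distance to equilibrium (decays over time)
    while t < horizon:
        # disturbance at t = 0.7 s pushes the system away from the origin
        dist = 1.0 if abs(t - 0.7) < SLOT / 2 else dist * 0.95
        if t >= next_release:             # release a new control job
            d = relative_deadline(dist)
            heapq.heappush(ready, (t + d, SLOT))
            next_release = t + d          # next job when this one is due
        if ready:                         # EDF: earliest deadline first
            deadline, work = heapq.heappop(ready)
            work -= SLOT
            if work > 0:
                heapq.heappush(ready, (deadline, work))
        else:
            soft_time += SLOT             # idle slot goes to soft tasks
        t += SLOT
    return soft_time

slack = run()
```

As in the figure, control jobs are released frequently right after the disturbance and sparsely near equilibrium, so a substantial fraction of the CPU is left over for the soft tasks.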
Simulating Self-Triggered Control Tasks
Simulating different scheduling strategies for self-triggered control tasks is also challenging, since the real-time properties of these tasks, such as deadlines, depend on the evolution of the physical system. We thus have to integrate the simulation of the physical system with the simulation of the real-time scheduler and of all the other tasks requiring processor time. We are currently extending the real-time simulation environment TrueTime [CHL+03] to the context of self-triggered control tasks. The files for the example described above can be downloaded here. A brief manual is also available here.
To learn more about these techniques, please see the links below (in reverse chronological order).