
Logical Execution Time for Control Applications

The most obvious approach for parallelizing or porting control software to multi-core systems is based on a "last is best" model that relies on shared memory and global variables. However, this approach is prone to jitter, which results in non-deterministic runtime behaviour, and it scales poorly with the number of cores. This article discusses how the Logical Execution Time (LET) model can be used to parallelize control software for multi-core systems while overcoming these limitations.

Inter-Task Communication Models

Last Is Best (LIB)

Currently, most control software uses the Last Is Best (LIB) inter-task communication model: the last available values are read from and written to a shared memory. Data consistency is ensured at task level by synchronization mechanisms (such as spin locks), so that a component always reads values of the same age, e.g. temperature and speed. For parallel processing, different components are executed on different cores or in different tasks.[1]
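The following sketch illustrates the LIB model under these assumptions: two tasks exchange the latest sensor values through shared global variables guarded by a spin lock. The variable and function names are illustrative and not taken from any particular control stack.

```c
/* Illustrative sketch of the Last Is Best (LIB) model:
 * both tasks access the same shared variables, protected by a spin lock. */
#include <pthread.h>

typedef struct {
    double temperature;   /* latest sensor sample            */
    double speed;         /* latest actuator command / state */
} shared_state_t;

static shared_state_t     g_state;   /* central shared memory                          */
static pthread_spinlock_t g_lock;    /* guards g_state; initialized via pthread_spin_init() */

/* Sensor task: overwrites the shared values; the previous sample is lost ("last is best"). */
void publish_sample(double temperature, double speed)
{
    pthread_spin_lock(&g_lock);
    g_state.temperature = temperature;
    g_state.speed       = speed;
    pthread_spin_unlock(&g_lock);
}

/* Control task: reads whatever values happen to be newest at this instant.
 * How old they are depends on scheduling and system load, i.e. on jitter. */
void read_sample(shared_state_t *out)
{
    pthread_spin_lock(&g_lock);
    *out = g_state;   /* consistent snapshot, but of unknown age */
    pthread_spin_unlock(&g_lock);
}
```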

One problem of the LIB model is sampling and response-time jitter. When jitter occurs, a component either accesses obsolete data or its control decisions become available too late. Both effects degrade the performance of the control software.

Another disadvantage of jitter is that the system no longer behaves deterministically. Whether jitter occurs depends on the execution time of each task and on the system load. Since utilization and execution times vary, it is effectively random whether old or current data is used.

The jitter problem grows with the number of cores, since more components compete simultaneously for access to the shared memory. This is one reason why the LIB model scales poorly with an increasing number of cores.

Logical Execution Time (LET)

To address the problem of deterministic execution of control software, the concept of Logical Execution Time (LET) was introduced as an alternative to the LIB model. In this model, read and write accesses are executed in a deterministic order, so that it is already determined at compile time whether a component works on old or current data.
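A minimal sketch of the LET idea for a single periodic component, assuming one input and one output buffer (names illustrative): inputs are copied at the logical start of the period and results are published only at the logical end, so the data ages do not depend on when the computation actually finishes inside the period.

```c
/* Illustrative LET execution of one periodic component.
 * No matter how long the computation actually runs inside the period,
 * it always works on the input snapshot taken at the period start and
 * its result becomes visible only at the period end. */

typedef struct { double in;  } input_t;
typedef struct { double out; } output_t;

static input_t  input_buffer;    /* written by other components              */
static output_t output_buffer;   /* read by other components                 */

static input_t  local_in;        /* private copy for this component          */
static output_t local_out;       /* private result, published at period end  */

void let_period(void)
{
    /* Logical start of the period: read inputs (deterministic point). */
    local_in = input_buffer;

    /* Physical execution may happen anywhere inside the period; jitter
     * here does not change which data was read or when the result
     * becomes visible. */
    local_out.out = 0.5 * local_in.in;   /* placeholder control law */

    /* Logical end of the period: publish outputs (deterministic point). */
    output_buffer = local_out;
}
```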

emmtrix Parallel Studio (ePS) has been using the Logical Execution Time (LET) concept for over five years. ePS calculates a static schedule for parallelization; deterministic and correct execution takes priority over raw performance. After parallelization, it is guaranteed that the LET is maintained at runtime. Lock-free queues between the cores additionally reduce waiting times on the cores.

The end user specifies the LET using a sequential program that serves as the input for parallelization. This logical sequence implicitly determines which module accesses data from the current or the previous time step. The concept can easily be extended so that, for each module, the user defines whether data from the current or the previous time step is used.
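As a hedged example of how the sequential input program fixes the data ages, assume two components, sensor_filter() and controller(), called in this order in the control loop (all names hypothetical): because controller() follows sensor_filter() in the logical order, it works on the filtered value of the current step, while sensor_filter() can only see the controller output of the previous step.

```c
/* Illustrative sequential reference program; its statement order
 * defines the logical execution order used for parallelization. */

double read_adc(void);                                        /* hypothetical sensor read */
void sensor_filter(double raw, double cmd, double *filtered); /* hypothetical component   */
void controller(double filtered, double *cmd);                /* hypothetical component   */

static double filtered_value;   /* output of sensor_filter() */
static double actuator_cmd;     /* output of controller()    */

void control_step(void)
{
    /* First in the logical order: can only see actuator_cmd
     * from the PREVIOUS time step. */
    sensor_filter(read_adc(), actuator_cmd, &filtered_value);

    /* Second in the logical order: sees filtered_value from
     * the CURRENT time step. */
    controller(filtered_value, &actuator_cmd);
}
```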

[1] D. Ziegenbein and A. Hamann, "Timing-aware control software design for automotive systems," in 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), 2015, pp. 1–6.

Advantages that Result from the Use of LET

1. Single Deterministic Schedule

In contrast to the LIB model, the schedule does not depend on the runtimes of the individual components. A single static schedule is determined at compile time, which ensures an identical program flow on every run, without random variation (see the sketch below).
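One way to picture such a schedule (an illustrative sketch, not the scheduler generated by ePS): each core executes a fixed, compile-time sequence of component steps in every period, expressed here as arrays of function pointers; the component names are hypothetical.

```c
/* Illustrative static schedule: each core executes a fixed, compile-time
 * sequence of component steps in every period. */
#include <stddef.h>

typedef void (*component_fn)(void);

void filter_step(void);        /* hypothetical components */
void controller_step(void);
void diagnostics_step(void);
void logger_step(void);

/* Schedules for core 0 and core 1, fixed at compile time. */
static component_fn core0_schedule[] = { filter_step, controller_step };
static component_fn core1_schedule[] = { diagnostics_step, logger_step };

void run_period_core0(void)
{
    /* Same order in every period, independent of actual runtimes. */
    for (size_t i = 0; i < sizeof core0_schedule / sizeof core0_schedule[0]; ++i)
        core0_schedule[i]();
}
```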

2. Deterministic Data Access (without Jitter)

The deterministic schedule also ensures deterministic data accesses. This means it is already defined at compile time whether a component accesses data from the current or previous time step. An analysis of the closed-loop properties is thus significantly simplified.

3. Scalability through Distributed Memory

The data is no longer stored in a central memory; instead, a local copy of the data is kept per core. The bottleneck of a central memory with central spin locks is eliminated, which offloads the central memory bus and ensures the scalability of the concept.

4. Scalability through Decentralized Communication

Since the central memory is eliminated, communication is also organized in a decentralized manner. With 1:1 communication, data exchange can be realized efficiently, e.g. using lock-free data structures, as sketched below. Moreover, the concept even scales on systems without a central bus, such as manycore systems.
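A minimal sketch of what such a lock-free 1:1 channel can look like, using a generic single-producer/single-consumer ring buffer with C11 atomics; this is a common pattern, not the data structure actually used by ePS.

```c
/* Illustrative single-producer/single-consumer ring buffer (C11 atomics).
 * One such queue per directed core-to-core connection removes the need
 * for a central shared memory and for spin locks. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_SIZE 16   /* must be a power of two */

typedef struct {
    double         data[QUEUE_SIZE];
    _Atomic size_t head;   /* advanced by the consumer */
    _Atomic size_t tail;   /* advanced by the producer */
} spsc_queue_t;

/* Producer core: returns false if the queue is full. */
bool spsc_push(spsc_queue_t *q, double value)
{
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail - head == QUEUE_SIZE)
        return false;                          /* full */
    q->data[tail % QUEUE_SIZE] = value;
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

/* Consumer core: returns false if the queue is empty. */
bool spsc_pop(spsc_queue_t *q, double *value)
{
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (tail == head)
        return false;                          /* empty */
    *value = q->data[head % QUEUE_SIZE];
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}
```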

5. Guarantee of Real-Time Capability

The single deterministic schedule significantly simplifies the analysis of real-time capability. With spin locks, by contrast, the overhead of acquiring or polling the locks grows significantly with the system load; with a static LET schedule, this load-dependent overhead disappears and worst-case timing can be derived directly from the schedule.
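To make the contrast concrete, a simple bound from generic schedulability reasoning (not taken from the article): under a static LET schedule, the worst-case finishing time of a core per period is simply the sum of the worst-case execution times of the components assigned to it, with no load-dependent blocking terms.

```latex
% S_k: components statically assigned to core k
% C_i: worst-case execution time of component i
R_k \le \sum_{i \in S_k} C_i
% With lock-based LIB communication, each C_i would additionally have to be
% inflated by a blocking term B_i that grows with the system load.
```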

About the Author

Dr. Timo Stripf

Dr.-Ing. Timo Stripf studied computer science at the Karlsruhe Institute of Technology and wrote his doctoral thesis on compiler construction in the field of electronic engineering. Between 2012 and 2015, he jointly coordinated the ALMA EU project with Prof. Jürgen Becker, which investigated automated parallel software development for embedded multi-core systems. Since 2016, he has been the technical managing director of emmtrix Technology GmbH, which provides automated software development and parallelization tools for heterogeneous multi-core processor systems.
