Logical Execution Time for Control Applications

The most obvious approach for parallelizing or porting control software to multi-core systems is based on the Last Is Best (LIB) model, which relies on shared memory and global variables. However, this approach is prone to jitter, which results in non-deterministic runtime behavior, and it scales poorly with the number of cores. This article discusses the use of Logical Execution Time (LET) for parallelizing control software on multi-core systems in a way that overcomes these limitations.

Inter-task communication model

Last is best (LIB)

Currently, most control software uses the Last Is Best (LIB) inter-task communication model: the latest available values are read from and written to shared memory. Data consistency is ensured at task level by synchronization mechanisms (such as spin locks), so that a component always reads values of the same age, e.g. temperature and speed sampled at the same time. In parallel processing, different components are executed on different cores or tasks. [1]

One problem of the LIB model is sampling and response time jitter. When jitter occurs, a component either accesses obsolete data or its control decisions become available too late. Both effects degrade the performance of the control software.

Another disadvantage of jitter is that the system no longer behaves deterministically. Whether jitter occurs depends on the execution time of each task and on the system load. Since utilization and execution times vary, it is effectively a matter of chance whether old or current data is used.

The problem of jitter grows with the number of cores, since more components compete simultaneously for access to the shared memory. This is one reason why the LIB model scales poorly with increasing core counts.

Logical Execution Time (LET)

To address the problem of deterministic execution of control software, the concept of Logical Execution Time (LET) was introduced as an alternative to the LIB model. In this model, read and write accesses are executed in a deterministic order, so it is already determined at compile time whether a component works on old or current data.

emmtrix Parallel Studio (ePS) has been using the Logical Execution Time (LET) concept for over five years. ePS calculates a static schedule for parallelization; deterministic and correct execution takes priority over raw performance. After parallelization, it is guaranteed that the LET is maintained at runtime. By using lock-free queues between the cores, waiting times on the cores are also reduced.

The end user specifies the LET via a sequential program that serves as input for parallelization. The logical sequence of this program implicitly determines which module accesses data from the current or the previous time step. The concept can easily be extended so that, for each module, the user can define whether data from the current or the previous time step is used.

The following advantages result from the use of the LET:

  1. Single deterministic schedule

In contrast to the LIB model, the schedule does not depend on the runtimes of the individual components. A single static schedule is determined at compile time. This guarantees an identical program flow every time, with no element of chance.

  2. Deterministic data access (without jitter)

The deterministic schedule also ensures deterministic data accesses. This means it is already defined at compile time whether a component accesses data from the current or previous time step. An analysis of the closed-loop properties is thus significantly simplified.

  3. Scalability by distributed memories

The data is no longer stored in a central memory. Instead, a local copy of the data is kept on each core. The bottleneck of a central memory with central spin locks is eliminated, which offloads the central memory bus and ensures the scalability of the concept.

  4. Scalability by decentralized communication

Due to the elimination of central memory, communication is also organized decentrally. With 1:1 communication, data exchange can be implemented efficiently, e.g. using lock-free data structures. In addition, the concept even scales on systems without a central bus, such as manycore systems.

  5. Guarantee of real-time capability

The single deterministic schedule significantly simplifies the analysis of real-time capability. With spin locks, by contrast, the overhead of acquiring or polling the locks grows significantly as the system load increases.


[1] D. Ziegenbein and A. Hamann, "Timing-aware control software design for automotive systems", in 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), 2015, pp. 1–6.

Dr. Timo Stripf

Dr.-Ing. Timo Stripf studied computer science at the Karlsruhe Institute of Technology and wrote his doctoral thesis on compiler construction in the field of electronic engineering. Between 2012 and 2015, he jointly coordinated the ALMA EU project with Prof. Jürgen Becker, which investigated automated parallel software development for embedded multi-core systems. Since 2016, he has been technical managing director of emmtrix Technology GmbH, which provides automated software development and parallelization tools for heterogeneous multi-core processor systems.