Automatic Optimizations for Runtime Verification Specifications

Jan Baumeister, Bernd Finkbeiner, Matthis Kruse, Stefan Oswald, Noemi Passing, Maximilian Schwenger

Some medical implants act autonomously: they assess the current health status of a patient and administer treatment when appropriate. An improper treatment, however, can cause serious harm. Here, the decision logic leading to the treatment relies on data obtained from sensors, an inherently imperfect medium. Coping with these inaccuracies requires the logic to be robust in the sense that slight perturbations in the measurements do not significantly alter the decision. Automatically determining the extent to which an algorithm is robust does not scale well for complex and opaque components; this is particularly problematic when machine learning is involved. Yet, the analysis is feasible for simpler safety-related components such as a runtime monitor, which observes the system and intervenes in the treatment when necessary. The monitor's significantly lower complexity generally makes it possible to provide static guarantees on its runtime behavior. Complementing these guarantees with a robustness analysis constitutes a major step toward certifiable medical cyber-physical systems controlled by opaque, machine-learned components. Hence, this paper reports on ongoing research toward a robustness analysis for the runtime monitoring framework RTLola.
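One way to make the robustness requirement above concrete is a bounded-perturbation condition; this is an illustrative sketch only, and the symbols $d$ (decision logic), $x$ and $x'$ (measurement vectors), $\varepsilon$ (perturbation bound), and $\delta$ (decision tolerance) are assumptions of this sketch rather than notation taken from the paper:

\[
\|x - x'\| \le \varepsilon \;\Longrightarrow\; \|d(x) - d(x')\| \le \delta(\varepsilon),
\]

that is, whenever two sensor readings differ by at most $\varepsilon$, the resulting decisions differ by at most a bounded amount $\delta(\varepsilon)$.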

6th Workshop on Monitoring and Testing of Cyber-Physical Systems (MT-CPS 2021).
