A microservice-based framework for multi-level testing of cyber-physical systems

In recent years, the use of microservice architectures has been spreading in the Cyber-Physical Systems (CPSs) and Internet of Things (IoT) domains. CPSs are systems that integrate digital cyber computations with physical processes. The development of software for CPSs demands constant maintenance to support new requirements, fix bugs, and deal with hardware obsolescence. Code testing is key in this process, all the more so when the code is fragmented during the development of CPSs; this process is challenging and time-consuming. In this paper, we report on the experience of instantiating a microservice-based architecture for DevOps of CPSs to test elevator dispatching algorithms across different test levels (i.e., SiL, HiL, and Operation). Such an architecture allows for continuous deployment, monitoring, and validation of CPSs. By applying the approach to a real industrial case study, we demonstrate that it significantly reduces the time needed in the testing process and, consequently, the economic cost of the entire process.

DOI: https://doi.org/10.1007/s11219-023-09639-z

Authors: Iñigo Aldalur, Aitor Arrieta, Aitor Agirre, Goiuria Sagardui and Maite Arratibel.

Title of the source: Software Quality Journal

Publisher: Springer

Relevant pages:

Year: 2023

Evolutionary generation of metamorphic relations for cyber-physical systems

A problem when testing Cyber-Physical Systems (CPSs) is the difficulty of determining whether a particular system output or behaviour is correct. Metamorphic testing alleviates this problem by reasoning on the relations expected to hold among multiple executions of the system under test, known as Metamorphic Relations (MRs). However, developing effective MRs is often challenging and requires the involvement of domain experts. This paper summarizes our recent publication "Generating Metamorphic Relations for Cyber-Physical Systems with Genetic Programming: An Industrial Case Study", presented at ESEC/FSE 2021. In that publication we presented GAssertMRs, the first technique to automatically generate MRs for CPSs, leveraging genetic programming (GP) to explore the space of candidate solutions. We evaluated GAssertMRs in an industrial case study, in which it outperformed other baselines.
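The core idea of an MR can be illustrated with a textbook example unrelated to the paper's elevator domain (this sketch is purely illustrative and is not GAssertMRs' output): the identity sin(x) = sin(π − x) lets us check an implementation of sine by comparing two related executions, without knowing the expected value of any single output.

```python
import math

def mr_holds(f, x, tol=1e-9):
    # Metamorphic relation for sine: sin(x) should equal sin(pi - x).
    # Instead of comparing one output against a known expected value,
    # we compare two related executions of the function under test.
    return abs(f(x) - f(math.pi - x)) <= tol

# The relation holds for a correct sine across many inputs:
assert all(mr_holds(math.sin, x / 10.0) for x in range(-50, 51))
```

A faulty implementation (or, here, a different function such as `math.cos`) violates the relation, which is how MRs expose defects without a conventional output oracle.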

DOI: https://doi.org/10.1145/3520304.3534077

Authors: Ayerdi, Jon and Terragni, Valerio and Arrieta, Aitor and Tonella, Paolo and Sagardui, Goiuria and Arratibel, Maite

Title of the source: Proceedings of the Genetic and Evolutionary Computation Conference Companion

Publisher: Association for Computing Machinery

Relevant pages: 15–16

Year: 2022

Big data testing techniques: taxonomy, challenges and future trends

Big Data is reforming many industrial domains by providing decision support through the analysis of large data volumes. Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining the performance and quality of data. However, because of the diversity and complexity of the data, testing Big Data is challenging. Though numerous research efforts deal with Big Data testing, a comprehensive review addressing the testing techniques and challenges of Big Data is not yet available. Therefore, we have systematically reviewed the evidence on Big Data testing techniques published in the period 2010–2021. This paper discusses the testing of data processing by highlighting the techniques used in every processing phase. Furthermore, we discuss the challenges and future directions. Our findings show that diverse functional, non-functional, and combined (functional and non-functional) testing techniques have been used to solve specific problems related to Big Data. At the same time, most of the testing challenges have been faced during the MapReduce validation phase. In addition, combinatorial testing is one of the most frequently applied techniques in combination with other techniques (i.e., random testing, mutation testing, input space partitioning, and equivalence testing) to find various functional faults in Big Data systems.
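As a purely illustrative sketch (not from the survey, and with hypothetical parameter names), combinatorial testing starts from the exhaustive cartesian product of parameter values, which covering-array tools then shrink to cover all t-way interactions with far fewer cases:

```python
from itertools import product

def all_combinations(params):
    # Exhaustive cartesian product of the parameter values; combinatorial
    # testing tools shrink this to a much smaller t-way covering array.
    names = list(params)
    return [dict(zip(names, values)) for values in product(*params.values())]

cases = all_combinations({
    "format": ["csv", "json"],        # hypothetical pipeline parameters
    "compression": ["none", "gzip"],
    "nodes": [1, 4],
})
assert len(cases) == 2 * 2 * 2  # 8 exhaustive cases; pairwise would need fewer
```

The exhaustive set grows multiplicatively with each parameter, which is why t-way selection matters for Big Data pipelines with many configuration knobs.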

DOI: https://doi.org/10.32604/cmc.2023.030266

Authors: Iram Arshad, Saeed Hamood Alsamhi, Wasif Afzal

Title of the source: Computers, Materials & Continua

Publisher: Computers, Materials & Continua

Relevant pages: 2739-2770

Year: 2023

Quality assuring the quality assurance tool: applying safety-critical concepts to test framework development

The quality of embedded systems is demonstrated by the tests performed. The quality of such tests often depends on the quality of one or more testing tools, especially in automated testing. Test automation is also central to the success of agile development. It is thus critical to ensure the quality of testing tools. This work explores how industries with agile processes can learn from safety-critical system development with regard to the quality assurance of test framework development. Safety-critical systems typically need adherence to safety standards that often suggest substantial upfront documentation, plans, and a long-term perspective on several development aspects. In contrast, agile approaches focus on quick adaptation, evolving software, and incremental deliveries. This article identifies several approaches to the quality assurance of software development tools in functional safety development and agile development. The extracted approaches are further analyzed and processed into candidate solutions, i.e., principles and practices for test framework quality assurance applicable in an industrial context. An industrial focus group with experienced practitioners further validated the candidate solutions through moderated group discussions. The two main contributions of this study are: (i) 48 approaches and 25 derived candidate solutions for test framework quality assurance in four categories (development, analysis, run-time measures, and validation and verification) with related insights, e.g., that a test framework should be perceived as a tool-chain and not a single tool, and (ii) the perceived value of the candidate solutions in industry as collected from the focus group.

DOI: https://doi.org/10.7717/peerj-cs.1131

Authors: Jonathan Thörn, Per Erik Strandberg, Daniel Sundmark, and Wasif Afzal

Title of the source: PeerJ Computer Science

Publisher: PeerJ

Relevant pages: 1–37

Year: 2022

Digital Twin-based Anomaly Detection with Curriculum Learning in Cyber-physical Systems

Anomaly detection is critical to ensure the security of cyber-physical systems (CPSs). However, due to the increasing complexity of attacks and of CPSs themselves, anomaly detection in CPSs is becoming more and more challenging. In our previous work, we proposed a digital twin-based anomaly detection method, called ATTAIN, which takes advantage of both historical and real-time data of CPSs. However, such data vary significantly in terms of difficulty. Therefore, similar to human learning processes, deep learning models (e.g., ATTAIN) can benefit from an easy-to-difficult curriculum. To this end, in this paper, we present a novel approach, named digitaL twin-based Anomaly deTecTion wIth Curriculum lEarning (LATTICE), which extends ATTAIN by introducing curriculum learning to optimize its learning paradigm. LATTICE assigns each sample a difficulty score before feeding it into a training scheduler. The training scheduler samples batches of training data based on these difficulty scores such that learning can proceed from easy to difficult data. To evaluate LATTICE, we use five publicly available datasets collected from five real-world CPS testbeds. We compare LATTICE with ATTAIN and two other state-of-the-art anomaly detectors. Evaluation results show that LATTICE outperforms ATTAIN and the two other baselines by 0.906%–2.367% in terms of the F1 score. LATTICE also, on average, reduces the training time of ATTAIN by 4.2% on the five datasets and is on par with the baselines in terms of detection delay time.
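The easy-to-difficult scheduling idea can be sketched generically (this is an illustrative assumption, not LATTICE's actual scheduler): sort samples by their difficulty score and widen the pool of eligible samples each epoch, so early batches contain only easy data.

```python
import random

def curriculum_batches(samples, difficulty, batch_size, epochs, seed=0):
    # Generic curriculum scheduler sketch: indices sorted by difficulty
    # score; each epoch the eligible pool grows until it covers all data.
    rng = random.Random(seed)
    order = sorted(range(len(samples)), key=lambda i: difficulty[i])
    for epoch in range(1, epochs + 1):
        cutoff = max(batch_size, len(order) * epoch // epochs)
        pool = order[:cutoff]          # easiest `cutoff` samples so far
        rng.shuffle(pool)              # shuffle within the eligible pool
        for start in range(0, len(pool), batch_size):
            yield [samples[i] for i in pool[start:start + batch_size]]
```

With four samples, two epochs, and `batch_size=2`, the first epoch draws only from the two easiest samples, while the second epoch samples from the full dataset.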

DOI: https://doi.org/10.1145/3582571

Authors: Qinghua Xu, Shaukat Ali, Tao Yue

Title of the source: ACM Transactions on Software Engineering and Methodology

Publisher: ACM

Relevant pages: Just accepted. Online but not included in the journal yet.

Year: 2023

Towards dependable CPS/IoT ecosystem

This thesis defines the concept of a CPS/IoT Ecosystem as a hierarchical structure that governs practices and procedures for the modeling, design, development, execution, and operation of smart systems. We divide these systems into three loosely dependent scopes of operation: the cloud, the fog, and the swarm. Furthermore, we propose a series of methods and approaches that support the dependable design, execution, and operation of CPS/IoT Ecosystems: methods for ensuring the deterministic execution of tasks in safety-constrained applications, a communication-channel virtualization for many-core architectures, and a secure communication architecture for many-core platforms. A CPS/IoT Ecosystem is a highly heterogeneous environment with hardware and software components designed and implemented by multiple organizations. To ensure coherence between different components and to reduce complexity, we propose a continuous integration and deployment (CI/CD) scheme for the CPS/IoT Ecosystem. Furthermore, we demonstrate a runtime verification (RV) mechanism that provides a basis for quality-of-service (QoS) orchestration and dynamic reconfiguration of CPS/IoT applications. As a final step in this thesis, we propose methods to achieve energy-sustainable CPS/IoT Ecosystems. In conclusion, this thesis tries to seed methodological guidelines on how to build dependable CPS/IoT Ecosystems for applications with various confidence requirements. We want to understand the upcoming changes and reduce the eventual effects of ad hoc development, to explain physical environments using mathematical models, and to learn newly emerging behaviors from this massive influx of new data and insights.

DOI: https://doi.org/10.34726/hss.2022.103104

Authors: Haris Isakovic

Title of the source: Doctoral dissertation

Publisher: Technische Universität Wien

Relevant pages: 1-155

Year: 2022

An Energy Sustainable CPS/IoT Ecosystem

This paper provides a short overview of the methods and technologies necessary to build a smart and sustainable Internet of Things (IoT). It observes IoT systems in close relation with data-centered intelligence and its application in cyber-physical systems. At the current rate of growth, IoT devices and the supporting CPS infrastructure will reach extremely high numbers in less than a decade. This will create an enormous overhead on the world's supply of electrical energy. In this paper, we propose a model extension for the estimation of the energy consumed by IoT devices in the next decade. The paper gives a definition of the CPS/IoT Ecosystem as a mutually codependent heterogeneous multidisciplinary structure. Furthermore, we explore a set of methods to reduce energy consumption and make the CPS/IoT Ecosystem sustainable by design. As a case study, we propose an energy-harvesting sensor node implemented as a wildfire early-detection system.
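The scale of the energy concern is easy to appreciate with a back-of-the-envelope calculation (the figures below are hypothetical illustrations, not the paper's model or its estimates):

```python
def fleet_energy_twh_per_year(devices, avg_power_w):
    # Back-of-the-envelope yearly consumption of a device fleet:
    # devices * average power draw * hours per year, converted to TWh.
    hours_per_year = 24 * 365
    return devices * avg_power_w * hours_per_year / 1e12  # Wh -> TWh

# e.g. 30 billion devices averaging 1 W each:
# fleet_energy_twh_per_year(30e9, 1.0) -> 262.8 TWh per year
```

Even a small per-device reduction therefore compounds into very large fleet-wide savings, which is the motivation for sustainable-by-design methods.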

DOI: https://doi.org/10.1007/978-3-030-76063-2_22

Authors: Haris Isakovic, Edgar Azpiazu Crespo, Radu Grosu

Title of the source: Science and Technologies for Smart Cities – 6th EAI International Conference, SmartCity360°

Publisher: Springer

Relevant pages: 305-322

Year: 2020

QoS for Dynamic Deployment of IoT Services

This paper introduces RVAF, a runtime verification (RV) extension of the Arrowhead Framework (AF) with container-based service deployment and runtime enforcement of a desired quality of service (QoS). AF is a service-oriented middleware architecture for IoT applications, consisting of a set of core and auxiliary services and systems. The QoS manager (QoSM) is one of AF's most important auxiliary systems and can be used to guarantee an application's QoS for a wide set of parameters. In RVAF, the QoS offered to a particular IoT application is specified in signal temporal logic and is continuously monitored by the RVAF-QoSM. In case of an imminent violation, RVAF automatically initiates a container-based reconfiguration, which is ensured to maintain the desired QoS. RVAF is beneficial to large IoT applications, where the use of continuous-integration and continuous-deployment tools is not only a recommended practice but also a necessity. Moreover, the use of RVAF is advantageous both during the development of an IoT application and after its deployment. We describe the architecture of RVAF, provide its formal underpinning, and demonstrate its usefulness with an industrial IoT application. The main contribution of this work is to show what it takes to incorporate RV concepts into modern SOA frameworks supporting the development of IoT applications.
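The monitoring idea can be sketched with a toy quantitative monitor for the signal temporal logic formula G(latency < limit) (the function names and thresholds below are hypothetical illustrations, not RVAF's API): robustness is the smallest margin by which the trace stays under the limit, and reconfiguration can be triggered before that margin vanishes.

```python
def always_below(signal, threshold):
    # Quantitative robustness of the STL formula G(x < threshold) over a
    # finite trace: positive means satisfied with margin, negative violated.
    return min(threshold - v for v in signal)

def needs_reconfiguration(latencies_ms, limit_ms=100.0, margin_ms=10.0):
    # Flag an *imminent* violation: act once the robustness margin
    # shrinks below a safety buffer, before the QoS bound is breached.
    return always_below(latencies_ms, limit_ms) < margin_ms
```

Acting on shrinking robustness rather than on outright violation is what lets a container-based reconfiguration keep the application within its QoS bound.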

DOI: https://doi.org/10.1109/ICIT46573.2021.9453670

Authors: Haris Isakovic, Luis Lino Ferreira, Irmin Okic, Adam Dukkon, Zlatan Tucakovic, Radu Grosu

Title of the source: 2021 22nd IEEE International Conference on Industrial Technology (ICIT)

Publisher: IEEE

Relevant pages: 1144-1151

Year: 2021

Uncertainty-aware Robustness Assessment of Industrial Elevator Systems

Industrial elevator systems are software systems commonly used in our daily lives, which operate in uncertain environments such as unpredictable passenger traffic, uncertain passenger attributes and behaviors, and hardware delays. Understanding and assessing the robustness of such systems under various uncertainties enables system designers to reason about uncertainties, especially those leading to low system robustness, and consequently improve their designs and implementations in terms of handling uncertainties. To this end, we present a comprehensive empirical study conducted with industrial elevator systems provided by our industrial partner Orona, which focuses on assessing the robustness of a dispatcher, i.e., a software component responsible for elevators' optimal scheduling. In total, we studied 90 industrial dispatchers. Based on the experience gained from the study, we derived an uncertainty-aware robustness assessment method (named UncerRobua) comprising a set of guidelines on how to conduct the robustness assessment and a newly proposed ranking algorithm for supporting the robustness assessment of industrial elevator systems against uncertainties.


Authors: Liping Han, Shaukat Ali, Tao Yue, Aitor Arrieta and Maite Arratibel

Title of the source: ACM Transactions on Software Engineering and Methodology

Publisher: ACM Journals

Relevant pages:  

Year: 2022

Uncertainty-Aware Transfer Learning to Evolve Digital Twins for Industrial Elevators

Digital twins are increasingly developed to support the development, operation, and maintenance of cyber-physical systems such as industrial elevators. However, industrial elevators continuously evolve due to changes in physical installations, the introduction of new software features, updates to existing ones, and changes due to regulations (e.g., enforcing restricted elevator capacity due to COVID-19). Thus, digital twin functionalities (often built on neural network-based models) need to constantly evolve themselves to stay synchronized with the industrial elevators. Such evolution is preferably automated, as manual evolution is time-consuming and error-prone. Moreover, collecting sufficient data to re-train the neural network models of digital twins can be expensive or even infeasible. To this end, we propose unceRtaInty-aware tranSfer lEarning enriched Digital Twins (RISE-DT), a transfer-learning-based approach capable of transferring knowledge about the waiting-time prediction capability of a digital twin of an industrial elevator across different scenarios. RISE-DT also leverages uncertainty quantification to further improve its effectiveness. To evaluate RISE-DT, we conducted experiments with 10 versions of an elevator dispatching software from Orona, Spain, deployed in a Software in the Loop (SiL) environment. Experimental results show that RISE-DT, on average, improves the Mean Squared Error by 13.131%, and the utilization of uncertainty quantification further improves it by 2.71%.
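The reported gains are relative reductions of the prediction error. As a generic illustration (with hypothetical numbers, not the paper's data), the metric can be computed as:

```python
def mse(y_true, y_pred):
    # Mean squared error between observed and predicted waiting times.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def relative_improvement_pct(mse_baseline, mse_new):
    # Percentage reduction of the baseline's MSE.
    return 100.0 * (mse_baseline - mse_new) / mse_baseline

# e.g. a drop from an MSE of 10.0 to 8.687 is a 13.13% improvement
```

A "13.131% MSE improvement" thus means the transferred model's squared prediction error is about 13% smaller than the baseline's, on average.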

DOI: https://doi.org/10.1145/3540250.3558957

Authors: Qinghua Xu, Shaukat Ali, Tao Yue and Maite Arratibel

Title of the source: ESEC/FSE 2022: Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering

Publisher: Association for Computing Machinery

Relevant pages: 

Year: 2022
