Scientific publications

Using Regression Learners to Predict Performance Problems on Software Updates: a Case Study on Elevators Dispatching Algorithms

Remote software deployment and updating has long been commonplace in many different fields, but the increasing expansion of IoT and CPSoS (Cyber-Physical Systems of Systems) has highlighted the need for additional mechanisms in these systems to ensure the correct behaviour of the deployed software version after deployment. In this sense, this paper investigates the use of Machine Learning algorithms to predict whether the performance of a new software release is acceptable. By monitoring the real performance, unexpected problems can eventually be identified. Based on previous knowledge and actual run-time information, the proposed approach predicts the response time that can be considered acceptable for the new software release, and this information is used to identify problematic releases. The mechanism has been applied to the post-deployment monitoring of traffic algorithms in elevator systems. To evaluate the approach, we used performance mutation testing, obtaining good results. This paper makes two contributions. First, it proposes several regression learners, trained with different types of traffic profiles, to efficiently predict the response time of the traffic dispatching algorithm. This prediction is then compared with the actual response time of the new algorithm release and provides a verdict about its performance. Secondly, a comparison of the different learners is performed.
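
The core idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: a simple regression learner (here, ordinary least squares over a single invented feature, traffic intensity) predicts the acceptable response time, and a tolerance margin turns the prediction into a pass/fail verdict on the new release. All names, numbers, and the 15% tolerance are assumptions.

```python
# Hypothetical sketch (not the paper's implementation): fit a regression
# learner on response times observed under a known-good release, then flag
# a new release whose measured response time exceeds the prediction by a
# tolerance margin.

def fit_linear(xs, ys):
    """Ordinary least squares for a single feature: y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def verdict(predicted, measured, tolerance=0.15):
    """'fail' if the release is more than `tolerance` slower than predicted."""
    return "fail" if measured > predicted * (1 + tolerance) else "pass"

# Traffic intensity (calls/minute) vs. average response time (s) from the
# baseline release; the numbers are made up for illustration.
calls = [10, 20, 30, 40, 50]
resp = [4.1, 5.0, 6.2, 7.1, 8.0]
a, b = fit_linear(calls, resp)

predicted = a * 35 + b  # expected response time at 35 calls/minute
print(verdict(predicted, measured=9.5))  # clearly slower than expected: "fail"
```

In the paper's setting the learner would be trained per traffic profile and fed run-time information; this sketch only shows how a prediction plus a margin yields a verdict.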

DOI: https://doi.org/10.1145/3412841.3441894

Authors: Aitor Gartziandia, Aitor Arrieta, Aitor Agirre, Goiuria Sagardui, Maite Arratibel

Title of the source: Proceedings of the 36th Annual ACM Symposium on Applied Computing

Publisher: ACM

Relevant pages: 135-144

Year: 2021


Anomaly Detection with Digital Twin in Cyber-Physical Systems

Cyber-Physical Systems (CPSs) are susceptible to various anomalies during their operation, so it is important to detect such anomalies; doing so is challenging since it is uncertain when and where anomalies can happen. To this end, we present a novel approach called Anomaly deTection with digiTAl twIN (ATTAIN), which continuously and automatically builds a digital twin with live data obtained from a CPS for anomaly detection. ATTAIN builds a Timed Automaton Machine (TAM) as the digital representation of the CPS and implements a Generative Adversarial Network (GAN) to detect anomalies. The GAN uses a GCN-LSTM-based module as a generator, which can capture temporal and spatial characteristics of the input data and learn to produce realistic unlabeled fake samples. TAM labels these fake samples, which are then fed into a discriminator along with real labeled samples. After training, the discriminator is capable of distinguishing anomalous data from normal data with a high F1 score. To evaluate our approach, we used three publicly available datasets collected from three CPS testbeds. Evaluation results show that ATTAIN improved the performance of two state-of-the-art anomaly detection methods by 2.413%, 8.487% and 5.438% on average on the three datasets, respectively. Moreover, ATTAIN achieved an average 8.39% increase in anomaly detection capability with digital twins, compared with an approach that does not use digital twins.
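
The timed-automaton labeling step can be illustrated very compactly. This is a minimal sketch, not ATTAIN's implementation: a toy TAM accepts an event trace only if every transition is legal and occurs within its time bound, and labels rejected traces as anomalous. The states, events, and bounds are invented.

```python
# Minimal sketch (not ATTAIN's code) of the timed-automaton labeling idea:
# a trace is "normal" only if each event is a legal transition from the
# current state AND arrives within that transition's time bound.

# (state, event) -> (next_state, max_seconds_since_previous_event)
# States, events, and bounds are invented for illustration.
TRANSITIONS = {
    ("idle", "pump_on"): ("filling", 5.0),
    ("filling", "full"): ("idle", 60.0),
}

def label(trace, start="idle"):
    """trace: chronological list of (event, seconds_since_previous_event)."""
    state = start
    for event, dt in trace:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None or dt > nxt[1]:
            return "anomalous"  # illegal transition or timing violation
        state = nxt[0]
    return "normal"

print(label([("pump_on", 1.0), ("full", 30.0)]))   # normal
print(label([("pump_on", 1.0), ("full", 120.0)]))  # anomalous: took too long
```

In ATTAIN this labeling is what supervises the GAN: the TAM provides labels for the generator's fake samples before they reach the discriminator.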

Authors: Qinghua Xu, Shaukat Ali, Tao Yue

Title of the source: IEEE International Conference on Software Testing

Publisher: IEEE

Year: 2021


An Evaluation of Monte Carlo-Based Hyper-Heuristic for Interaction Testing of Industrial Embedded Software Applications

Hyper-heuristics are a new methodology for the adaptive hybridization of meta-heuristic algorithms to derive a general algorithm for solving optimization problems. This work focuses on the selection type of hyper-heuristic, called the exponential Monte Carlo with counter (EMCQ). Current implementations rely on memory-less selection, which can be counterproductive as the selected search operator may not (historically) be the best-performing operator for the current search instance. Addressing this issue, we propose to integrate memory into EMCQ for combinatorial t-wise test suite generation using reinforcement learning based on the Q-learning mechanism, called Q-EMCQ. The limited application of combinatorial test generation to industrial programs can hinder the adoption of techniques such as Q-EMCQ. Thus, there is a need to evaluate this kind of approach against relevant industrial software, with the purpose of showing the degree of interaction required to cover the code as well as finding faults. We applied Q-EMCQ to 37 real-world industrial programs written in the Function Block Diagram (FBD) language, which is used for developing a train control management system at Bombardier Transportation Sweden AB. The results show that Q-EMCQ is an efficient technique for test case generation. Additionally, unlike t-wise test suite generation, which deals with a minimization problem, we have also subjected Q-EMCQ to a maximization problem involving general module clustering to demonstrate the effectiveness of our approach. The results show that Q-EMCQ is also capable of outperforming the original EMCQ as well as several recent meta-/hyper-heuristics, including modified choice function, Tabu high-level hyper-heuristic, teaching-learning-based optimization, sine cosine algorithm, and symbiotic optimization search, in clustering quality within comparable execution time.
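
The combination of Q-learning memory with exponential Monte Carlo acceptance can be sketched as follows. This is an illustrative simplification, not the authors' algorithm: a Q-table remembers how well each low-level operator has performed, the operator with the highest Q-value is selected, and a worsening move is still accepted with a probability that shrinks exponentially with the fitness loss and the iteration count. The reward values, learning rate, and toy objective are assumptions.

```python
import math
import random

# Hypothetical sketch of the Q-EMCQ idea (not the authors' code): Q-values
# act as the "memory" guiding operator selection, while the exponential
# Monte Carlo rule occasionally accepts worsening moves to escape local
# optima.

def q_emcq_step(q, operators, solution, evaluate, iteration,
                alpha=0.1, gamma=0.9):
    op = max(operators, key=lambda o: q[o])  # greedy pick from memory
    candidate = op(solution)
    delta = evaluate(candidate) - evaluate(solution)  # minimization: <0 is better
    if delta < 0:
        reward, solution = 1.0, candidate             # improving move
    elif random.random() < math.exp(-delta * iteration):
        reward, solution = 0.5, candidate             # accepted worsening move
    else:
        reward = 0.0                                  # rejected move
    # Standard Q-learning update for the chosen operator.
    q[op] += alpha * (reward + gamma * max(q.values()) - q[op])
    return solution

# Toy demo: minimize f(s) = |s| with two operators (decrement / increment).
random.seed(0)
operators = [lambda s: s - 1, lambda s: s + 1]
q_table = {op: 0.0 for op in operators}
solution = 10
for it in range(1, 30):
    solution = q_emcq_step(q_table, operators, solution, abs, it)
print(solution)  # converges to the optimum 0 of f(s) = |s|
```

In the paper the operators are meta-heuristic search moves over t-wise test suites (or module clusterings) rather than integer increments; the sketch only shows the selection-plus-acceptance mechanism.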

DOI: https://doi.org/10.1007/s00500-020-04769-z

Authors: Bestoun S. Ahmed, Eduard Enoiu, Wasif Afzal, Kamal Z. Zamli

Title of the source: Soft Computing

Publisher: Springer

Relevant pages: 13929-13954

Year: 2020


Towards a Taxonomy for Eliciting Design-Operation Continuum Requirements of Cyber-Physical Systems

Software systems that are embedded in autonomous Cyber-Physical Systems (CPSs) usually have a long life-cycle, both during development and in maintenance. This software evolves during its life-cycle to incorporate new requirements and bug fixes, and to deal with hardware obsolescence. The current process for developing and maintaining this software is very fragmented, which makes developing new software versions and deploying them in the CPSs extremely expensive. In other domains, such as web engineering, the phases of development and operation are tightly connected, making it possible to easily perform software updates of the system and to obtain operational data that can be analyzed by engineers at development time. However, in spite of the rise of new communication technologies (e.g., 5G) providing an opportunity to adopt Design-Operation Continuum Engineering methods in the context of CPSs, there are still many complex issues that need to be addressed, such as those related to hardware-software co-design. Therefore, the process of Design-Operation Continuum Engineering for CPSs requires substantial changes with respect to the current fragmented software development process. In this paper, we build a taxonomy for Design-Operation Continuum Engineering of CPSs based on case studies from two different industrial domains involving CPSs (elevation and railway). This taxonomy is later used to elicit requirements from these two case studies in order to present a blueprint on adopting Design-Operation Continuum Engineering in any organization developing CPSs.

DOI: https://doi.org/10.1109/RE48521.2020.00038

Authors: Jon Ayerdi, Aitor Garciandia, Aitor Arrieta, Wasif Afzal, Eduard Paul Enoiu, Aitor Agirre, Goiuria Sagardui, Maite Arratibel, Ola Sellin

Title of the source: 28th International Requirements Engineering Conference

Publisher: IEEE

Relevant pages: 280-290

Year: 2020


Detecting Inconsistencies in Annotated Product Line Models

Model-based product line engineering applies the reuse practices from product line engineering with graphical modeling for the specification of software-intensive systems. Variability is usually described in separate variability models, while the implementation of the variable systems is specified in system models that use modeling languages such as SysML. Most of the SysML modeling tools with variability support implement the annotation-based modeling approach. Annotated product line models tend to be error-prone, since the modeler implicitly describes every possible variant in a single system model. To identify variability-related inconsistencies, in this paper we first define restrictions on the use of SysML for annotative modeling in order to avoid situations where resulting instances of the annotated model may contain ambiguous model constructs. Second, inter-feature constraints are extracted from the annotated model, based on relations between elements that are annotated with features. By analyzing the constraints, we can identify whether the combined variability and system model can result in incorrect or ambiguous instances. The evaluation of our prototype implementation shows the potential of our approach by identifying inconsistencies in the product line model of our industrial partner that went undetected through several iterations of the model.
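
The constraint-extraction step can be illustrated with a toy model. This is a sketch, not the paper's tooling: if an element annotated with feature A references an element annotated with feature B, a "requires" constraint is implied, and any variant selecting A without B would contain a dangling reference. The example elements and features are invented.

```python
# Illustrative sketch (not the paper's tooling): derive inter-feature
# "requires" constraints from an annotated model, then check variant
# configurations against them.

# element -> feature annotation (invented example model)
annotations = {"Door": "A", "DoorSensor": "B"}
# references between elements (e.g., SysML part associations)
references = [("Door", "DoorSensor")]

def extract_constraints(annotations, references):
    """Return (feature, required_feature) pairs implied by the references."""
    return {(annotations[src], annotations[dst])
            for src, dst in references
            if annotations[src] != annotations[dst]}

def inconsistent(config, constraints):
    """config: set of selected features; returns the violated constraints."""
    return [(a, b) for a, b in constraints if a in config and b not in config]

cons = extract_constraints(annotations, references)
print(inconsistent({"A"}, cons))       # variant with A but not B is flagged
print(inconsistent({"A", "B"}, cons))  # consistent variant: no violations
```

The paper's analysis covers richer SysML relations and ambiguity checks; this sketch only shows why reference-based constraints expose inconsistent variants.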

DOI: https://doi.org/10.1145/3382025.3414969

Authors: Damir Bilic, Jan Carlson, Daniel Sundmark, Wasif Afzal, Peter Wallin

Title of the source: Proceedings of the 24th ACM Conference on Systems and Software Product Line: Volume A

Publisher: ACM

Relevant pages: 1-11

Year: 2020


Intermittently Failing Tests in the Embedded Systems Domain

Software testing is sometimes plagued with intermittently failing tests and finding the root causes of such failing tests is often difficult. This problem has been widely studied at the unit testing level for open source software, but there has been far less investigation at the system test level, particularly the testing of industrial embedded systems. This paper describes our investigation of the root causes of intermittently failing tests in the embedded systems domain, with the goal of better understanding, explaining and categorizing the underlying faults. The subject of our investigation is a currently-running industrial embedded system, along with the system level testing that was performed. We devised and used a novel metric for classifying test cases as intermittent. From more than a half million test verdicts, we identified intermittently and consistently failing tests, and identified their root causes using multiple sources. We found that about 1-3% of all test cases were intermittently failing. From analysis of the case study results and related work, we identified nine factors associated with test case intermittence. We found that a fix for a consistently failing test typically removed a larger number of failures detected by other tests than a fix for an intermittent test. We also found that more effort was usually needed to identify fixes for intermittent tests than for consistent tests. An overlap between root causes leading to intermittent and consistent tests was identified. Many root causes of intermittence are the same in industrial embedded systems and open source software. However, when comparing unit testing to system level testing, especially for embedded systems, we observed that the test environment itself is often the cause of intermittence.
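
A simple way to picture such a classification is shown below. This is a hypothetical illustration only (the paper defines its own metric): a test is flagged as intermittent when its verdicts flip between pass and fail across consecutive runs on the same code, while a test that fails and never recovers is consistently failing. The flip threshold is an assumption.

```python
# Hypothetical illustration (the paper defines its own metric): count
# verdict "flips" (pass->fail or fail->pass) across consecutive runs of
# one test case; frequent flipping suggests intermittence.

def classify(verdicts, min_flips=2):
    """verdicts: chronological list of 'pass'/'fail' for one test case."""
    flips = sum(1 for a, b in zip(verdicts, verdicts[1:]) if a != b)
    if flips >= min_flips:
        return "intermittent"
    if verdicts and verdicts[-1] == "fail":
        return "consistently failing"
    return "passing"

print(classify(["pass", "fail", "pass", "fail", "pass"]))  # intermittent
print(classify(["pass", "pass", "fail", "fail", "fail"]))  # consistently failing
```

At the scale reported in the paper (over half a million verdicts), any such rule would be applied per test case over its verdict history, grouped by code revision.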

DOI: https://doi.org/10.1145/3395363.3397359

Authors: Per Erik Strandberg, Thomas J. Ostrand, Elaine J. Weyuker, Wasif Afzal, Daniel Sundmark

Title of the source: ISSTA 2020: Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis

Publisher: ACM

Relevant pages: 337-348

Year: 2020
