Abstract—Elevators are among the oldest and most widespread transportation systems, yet their complexity increases rapidly to satisfy customization demands and to meet quality of service requirements. Verification and validation tasks in this context are costly, since they rely on the manual intervention of domain experts at some points of the process. This is mainly due to the difficulty of assessing whether the elevators behave as expected in the different test scenarios, the so-called test oracle problem. Metamorphic testing is a thriving testing technique that alleviates the oracle problem by reasoning on the relations among multiple executions of the system under test, the so-called metamorphic relations. In this practical experience paper, we report on the application of metamorphic testing to verify an industrial elevator dispatcher. Together with domain experts from the elevation sector, we defined multiple metamorphic relations that consider domain-specific quality of service measures. Evaluation results with seeded faults show that the approach is effective at detecting faults automatically.
Authors: Jon Ayerdi, Sergio Segura, Aitor Arrieta, Goiuria Sagardui, Maite Arratibel
Title of the source: IEEE 31st International Symposium on Software Reliability Engineering (ISSRE)
Relevant pages: 104-114
Year: 2020
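To make the metamorphic-relation idea behind this entry concrete, here is a minimal sketch. The toy single-elevator simulation and the specific relation (appending extra hall calls must not decrease total waiting time) are illustrative assumptions, not the relations or the dispatcher from the paper.

```python
# Toy single-elevator SUT: serves hall calls strictly in arrival order (FCFS).
def total_waiting_time(calls):
    """calls: list of requested floors, served FCFS. Waiting time of a call is
    the travel distance accumulated before the elevator reaches its floor."""
    position, clock, total = 0, 0, 0
    for floor in calls:
        clock += abs(floor - position)  # travel to the call floor
        total += clock                  # this passenger waited `clock` units
        position = floor
    return total

def mr_adding_calls_does_not_reduce_waiting(base_calls, extra_calls):
    """Metamorphic relation: appending extra hall calls to a source scenario
    must not decrease the total waiting time (follow-up >= source). No exact
    expected output is needed, which is how MT sidesteps the oracle problem."""
    source = total_waiting_time(base_calls)
    follow_up = total_waiting_time(base_calls + extra_calls)
    return follow_up >= source

assert mr_adding_calls_does_not_reduce_waiting([3, 1, 5], [2])
```

A seeded fault in the dispatcher (e.g. dropping a pending call) would typically violate such a relation even though no ground-truth waiting time is known in advance.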
Microservices for Continuous Deployment, Monitoring and Validation in Cyber-Physical Systems: an Industrial Case Study for Elevators Systems
Abstract—Cyber-Physical Systems (CPSs) are systems that integrate digital cyber computations with physical processes. The software embedded in CPSs has a long life-cycle, requiring constant evolution to support new requirements, bug fixes, and deal with hardware obsolescence. To date, the development of software for CPSs is fragmented, which makes it extremely expensive. This could be substantially enhanced by tightly connecting the development and operation phases, as is done in other software engineering domains (e.g., web engineering through DevOps). Nevertheless, there are still complex issues that make it difficult to use DevOps techniques in the CPS domain, such as those related to hardware-software co-design. To pave the way towards DevOps in the CPS domain, in this paper we instantiate part of the reference architecture presented in the H2020 Adeptness project, which is based on microservices that allow for the continuous deployment, monitoring and validation of CPSs. To this end, we elaborate a systematic methodology that considers as input both domain expertise and a previously defined taxonomy for DevOps in the CPS domain. We obtain a generic microservice template that can be used in any kind of CPS. In addition, we instantiate this architecture in the context of an industrial case study from the elevation domain.
Authors: Aitor Gartziandia, Jon Ayerdi, Aitor Arrieta, Shaukat Ali, Tao Yue, Aitor Agirre, Goiuria Sagardui, Maite Arratibel
Title of the source: IEEE 18th International Conference on Software Architecture Companion (ICSA-C)
Relevant pages: 46-53
Abstract—The abstract test cases generated through model-based testing (MBT) need to be concretized to make them executable on the software under test (SUT). Multiple researchers proposed different solutions, e.g., by utilizing adapters for concretization of abstract test cases and generation of test scripts. In this paper, we propose our Model-Based Test scrIpt GenEration fRamework (TIGER) based on GraphWalker (GW), an open-source MBT tool. The framework is capable of generating test scripts for embedded software controlling functions of a cyber-physical system such as passenger trains developed at Bombardier Transportation AB. The framework follows some defined mapping rules for the concretization of abstract test cases. We have evaluated the generated test scripts using an industrial case study in terms of fault detection. We have induced faults in the model of the SUT based on three mutation operators to generate faulty test scripts. The aim of generating faulty test scripts is to produce failed test steps and to guarantee the absence of faults in the SUT. Moreover, we have also generated the test scripts using the correct version of the model and executed them to analyse the behaviour of the generated test scripts in comparison with manually-written test scripts. The results show that the test scripts generated by GW using the proposed framework are executable, provide 100% requirements coverage and can be used to uncover faults at the software-in-the-loop simulation level of sub-system testing.
Authors: Muhammad Nouman Zafar, Wasif Afzal, Eduard Paul Enoiu, Athanasios Stratis, Ola Sellin
Title of the source: The 17th International Workshop on Advances in Model Based Testing
Relevant pages: 192-198
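The rule-based concretization this entry describes can be illustrated with a small sketch. The mapping table and the `send_command`/`read_signal` command strings below are hypothetical placeholders, not TIGER's actual mapping rules or GraphWalker's API; the point is only the adapter idea of translating abstract model elements into concrete test-script lines.

```python
# Hypothetical mapping rules from abstract model elements (edge/vertex labels)
# to concrete test-script commands. Real frameworks derive such tables from
# the adapter layer between the MBT tool and the SUT.
MAPPING_RULES = {
    "e_openDoors":  "send_command('DOOR', 'OPEN')",
    "e_closeDoors": "send_command('DOOR', 'CLOSE')",
    "v_doorsOpen":  "assert read_signal('DOOR_STATE') == 'OPEN'",
}

def concretize(abstract_path):
    """Translate a sequence of abstract test steps into executable
    test-script lines, failing loudly on unmapped steps."""
    missing = [step for step in abstract_path if step not in MAPPING_RULES]
    if missing:
        raise KeyError(f"no mapping rule for: {missing}")
    return [MAPPING_RULES[step] for step in abstract_path]

script = concretize(["e_openDoors", "v_doorsOpen", "e_closeDoors"])
```

Failing on unmapped steps mirrors a practical concern in such frameworks: an abstract test case is only executable once every model element it traverses has a concretization rule.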
Abstract—Model-based testing (MBT) generates many test cases for validating a system under test against the user-defined requirements. Cloud computing provides powerful resources that can be utilised to execute these many test cases, which would otherwise consume considerable resources locally. Other benefits of utilizing cloud-based resources are their elasticity and the on-demand, rapid provisioning and release of new, potentially value-adding services. Although cloud providers such as Amazon Web Services (AWS) have provided the necessary technologies for successful cloud-based operation, it remains difficult to migrate and hence achieve the realisation of MBT as a service for traditional in-house testing operations, especially for embedded software. In this paper, we present a series of cloud-based architectures powered by AWS and an open-source MBT tool, GraphWalker. These architectures are realized at the simulation testing stage for real-world embedded software and particularly cater for online MBT, whereby the model-based tool is deployed as a RESTful web service, accessible through a number of REST API commands. The presented architectures as well as their realization through AWS can be adopted in future for more advanced levels of simulation testing of embedded software.
Authors: Wasif Afzal, Amirali Piadehbasmenj
Title of the source: 9th International Conference on Cyber-Physical Systems and Internet of Things
Using Machine Learning to Build Test Oracles: an Industrial Case Study on Elevators Dispatching Algorithms
Abstract—The software of elevators requires maintenance over several years to deal with new functionality, correction of bugs or legislation changes. To automatically validate this software, test oracles are necessary. A typical approach in industry is to use regression oracles. These oracles have to execute the test input both in the software version under test and in a previous software version. This practice has several issues when using simulation to test elevator dispatching algorithms at the system level, including a long test execution time and the impossibility of re-using test oracles both at different test levels and in operation. To deal with these issues, we propose DARIO, a test oracle that relies on regression learning algorithms to predict the Quality of Service of the system. The regression learning algorithms of this oracle are trained using data from previously tested versions. An empirical evaluation with an industrial case study demonstrates the feasibility of using our approach in practice. A total of five regression learning algorithms were validated, showing that the regression tree algorithm performed best. For the regression tree algorithm, the accuracy of the verdicts predicted by DARIO ranged between 79% and 87%.
Authors: Aitor Arrieta, Jon Ayerdi, Miren Illarramendi, Aitor Agirre, Goiuria Sagardui, Maite Arratibel
Title of the source: 2nd ACM/IEEE International Conference on Automation of Software Tests
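The learning-based oracle idea in this entry can be sketched in a few lines. The code below is a minimal stand-in, not DARIO: a 1-nearest-neighbour predictor replaces the paper's regression tree, and the feature vectors, QoS values and 15% tolerance are invented for illustration.

```python
# Sketch of a learning-based test oracle: predict the expected QoS of a test
# scenario from data gathered on previously tested versions, then compare it
# with the QoS measured on the version under test.
def predict_qos(training, features):
    """training: list of (feature_vector, qos) pairs from earlier versions.
    Returns the QoS of the most similar past scenario (1-NN stand-in for a
    regression learner)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, qos = min(training, key=lambda pair: sq_dist(pair[0], features))
    return qos

def oracle_verdict(training, features, measured_qos, tolerance=0.15):
    """Fail the test if the measured QoS deviates from the prediction by more
    than `tolerance` (relative). The threshold is an illustrative choice."""
    expected = predict_qos(training, features)
    return "pass" if abs(measured_qos - expected) <= tolerance * expected else "fail"

# Invented history: (passengers, floors-served) -> average waiting time.
history = [((10, 2), 12.0), ((40, 4), 30.0), ((80, 6), 55.0)]
verdict = oracle_verdict(history, (42, 4), measured_qos=31.5)
```

Unlike a regression oracle, this check needs no second execution on a previous software version at test time, which is the re-use and execution-time advantage the abstract highlights.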
Using Regression Learners to Predict Performance Problems on Software Updates: a Case Study on Elevators Dispatching Algorithms
Abstract—Remote software deployment and updating have long been commonplace in many different fields, but now the increasing expansion of IoT and CPSoS (Cyber-Physical Systems of Systems) has highlighted the need for additional mechanisms in these systems to ensure the correct behaviour of the deployed software version after deployment. In this sense, this paper investigates the use of Machine Learning algorithms to predict acceptable behaviour in the system performance of a new software release. By monitoring the real performance, eventual unexpected problems can be identified. Based on previous knowledge and actual run-time information, the proposed approach predicts the response time that can be considered acceptable for the new software release, and this information is used to identify problematic releases. The mechanism has been applied to the post-deployment monitoring of traffic algorithms in elevator systems. To evaluate the approach, we have used performance mutation testing, obtaining good results. This paper makes two contributions. First, it proposes several regression learners that have been trained with different types of traffic profiles to efficiently predict the response time of the traffic dispatching algorithm. This prediction is then compared with the actual response time of the new algorithm release, and provides a verdict about its performance. Secondly, a comparison of the different learners is performed.
Authors: Aitor Gartziandia, Aitor Arrieta, Aitor Agirre, Goiuria Sagardui, Maite Arratibel
Title of the source: Proceedings of the 36th Annual ACM Symposium on Applied Computing
Relevant pages: 135-144
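The post-deployment verdict this entry describes reduces to comparing monitored response times against a predicted acceptable value. The sketch below assumes that prediction is already available (`predicted_acceptable` stands in for the trained regression learner's output); the 20% margin and the sample values are illustrative, not the paper's calibrated thresholds.

```python
# Post-deployment performance check: flag a new release as problematic if the
# response times monitored at run time exceed what a model trained on previous
# releases predicts as acceptable for the same traffic profile.
def release_verdict(monitored_times, predicted_acceptable, margin=0.20):
    """Return "ok" if the mean monitored response time stays within `margin`
    (relative) of the predicted acceptable response time, else "problematic"."""
    mean_observed = sum(monitored_times) / len(monitored_times)
    limit = predicted_acceptable * (1.0 + margin)
    return "ok" if mean_observed <= limit else "problematic"

# Hypothetical monitoring sample for one traffic profile (seconds).
verdict = release_verdict([9.8, 10.4, 10.1], predicted_acceptable=10.0)
```

In a performance-mutation evaluation like the one in the paper, an artificially slowed dispatching algorithm would push the monitored mean past the limit and yield a "problematic" verdict.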