Scientific publications

Detecting Inconsistencies in Annotated Product Line Models

Model-based product line engineering combines the reuse practices of product line engineering with graphical modeling for the specification of software-intensive systems. Variability is usually described in separate variability models, while the implementation of the variable systems is specified in system models that use modeling languages such as SysML. Most SysML modeling tools with variability support implement the annotation-based modeling approach. Annotated product line models tend to be error-prone since the modeler implicitly describes every possible variant in a single system model. To identify variability-related inconsistencies, in this paper we first define restrictions on the use of SysML for annotative modeling in order to avoid situations where resulting instances of the annotated model may contain ambiguous model constructs. Second, inter-feature constraints are extracted from the annotated model, based on relations between elements that are annotated with features. By analyzing the constraints, we can identify whether the combined variability and system model can result in incorrect or ambiguous instances. The evaluation of our prototype implementation shows the potential of our approach by identifying inconsistencies in the product line model of our industrial partner that had gone undetected through several iterations of the model.

DOI: https://doi.org/10.1145/3382025.3414969

Authors: Damir Bilic, Jan Carlson, Daniel Sundmark, Wasif Afzal, Peter Wallin

Title of the source: Proceedings of the 24th ACM Conference on Systems and Software Product Line: Volume A

Publisher: ACM

Relevant pages: 1-11

Year: 2020

Intermittently Failing Tests in the Embedded Systems Domain

Software testing is sometimes plagued with intermittently failing tests and finding the root causes of such failing tests is often difficult. This problem has been widely studied at the unit testing level for open source software, but there has been far less investigation at the system test level, particularly the testing of industrial embedded systems. This paper describes our investigation of the root causes of intermittently failing tests in the embedded systems domain, with the goal of better understanding, explaining and categorizing the underlying faults. The subject of our investigation is a currently-running industrial embedded system, along with the system level testing that was performed. We devised and used a novel metric for classifying test cases as intermittent. From more than a half million test verdicts, we identified intermittently and consistently failing tests, and identified their root causes using multiple sources. We found that about 1-3% of all test cases were intermittently failing. From analysis of the case study results and related work, we identified nine factors associated with test case intermittence. We found that a fix for a consistently failing test typically removed a larger number of failures detected by other tests than a fix for an intermittent test. We also found that more effort was usually needed to identify fixes for intermittent tests than for consistent tests. An overlap between root causes leading to intermittent and consistent tests was identified. Many root causes of intermittence are the same in industrial embedded systems and open source software. However, when comparing unit testing to system level testing, especially for embedded systems, we observed that the test environment itself is often the cause of intermittence.

DOI: https://doi.org/10.1145/3395363.3397359

Authors: Per Erik Strandberg, Thomas J. Ostrand, Elaine J. Weyuker, Wasif Afzal, Daniel Sundmark

Title of the source: ISSTA 2020: Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis

Publisher: ACM

Relevant pages: 337-348

Year: 2020



Model-Based Testing in Practice: An Industrial Case Study using GraphWalker

Model-based testing (MBT) is a test design technique that supports the automation of software testing processes and generates test artefacts based on a system model representing behavioural aspects of the system under test (SUT). Previous research has shown some positive aspects of MBT, such as low-cost test case generation and fault detection effectiveness. However, it is still a challenge for both practitioners and researchers to evaluate MBT tools and techniques in real, industrial settings. Consequently, the empirical evidence regarding the mainstream use of MBT tools, including modelling and test case generation, is limited. In this paper, we report the results of a case study on applying GraphWalker, an open-source tool for MBT, to an industrial cyber-physical system (a Train Control Management System developed by Bombardier Transportation in Sweden), from modelling of real-world requirements and test specifications to test case generation. We evaluate the models of the SUT for completeness and representativeness, compare MBT with manual test cases written by practitioners using multiple attributes, and share our experiences of selecting and using GraphWalker for industrial application. The results show that a model of the SUT created using both requirements and test specifications provides a better understanding of the SUT from the testers' perspective, making it more complete and representative than a model created from the requirements specification alone. The generated model-based test cases are longer in terms of the number of test steps, achieve better edge coverage, and can cover requirements more frequently and in different orders while achieving the same level of requirements coverage as manually created test cases.

DOI: https://doi.org/10.1145/3452383.3452388

Authors: Muhammad Nouman Zafar, Wasif Afzal, Eduard Paul Enoiu, Athanasios Stratis, Aitor Arrieta, Goiuria Sagardui

Title of the source: Innovations in Software Engineering Conference 2021

Publisher: ACM

Relevant pages: 1-11

Year: 2021



Industrial Scale Passive Testing with T-EARS

Passive testing continuously observes the system or system execution logs without any interference or instrumentation to test diverse combinations of functions, resulting in a more thorough evaluation over time. However, reaching a working solution to passive testing is not without challenges. While there have been some efforts to extract information from system requirements to create passive test cases, to our knowledge, no such efforts are mature enough to be applied in a real, industrial safety-critical context. Our passive testing approach uses the Timed Easy Approach to Requirements Syntax (T-EARS) specification language and its accompanying tool-chain. This study reports challenges and solutions in introducing system-level passive testing for a vehicular safety-critical system through industrial data analysis, covering 116 safety-related requirements. Our results show that passive testing using the T-EARS language and its tool-chain can be used for system-level testing in an industrial setting for 64% of the studied requirements. We identified several sources of false positive results and show how to tune test cases to systematically reduce such false positives. Finally, we show the requirement coverage achieved by a manual test session and that passive testing using T-EARS can find a set of injected faults that are considered hard to find with other test techniques.

DOI: https://doi.org/10.1109/ICST49551.2021.00047

Authors: Daniel Flemström, Henrik Jonsson, Eduard Paul Enoiu, Wasif Afzal

Title of the source: IEEE Conference on Software Testing, Validation and Verification 2021

Publisher: IEEE

Relevant pages: 351-361

Year: 2021



Handling Uncertainties in Cyber-Physical Systems during Their Operations with Digital Twins

It is a well-recognized fact that a Cyber-Physical System (CPS) experiences uncertain (including unknown) situations during its operation. Some of these uncertainties could potentially lead to failures of CPS operations. Factors contributing to such uncertainties include 1) the intrinsically unpredictable physical environment of a CPS, 2) the use of communication networks that continuously experience problems (e.g., slower connections than expected), and 3) the increasing use of machine learning algorithms in CPSs, which introduce inherent uncertainties.

No matter how meticulously a CPS is designed and developed, it is impossible to predict all possible uncertain situations it will experience during its operation. Thus, there is a need for new methods for discovering and handling uncertain situations during CPS operation to prevent failure. In this paper, we present our ideas on how digital twins, i.e., "live models" of CPSs, can help discover and handle potentially unsafe situations during operation.

We present the research challenges and potential solutions to develop, deploy, and operate such digital twins.

Authors: Shaukat Ali & Tao Yue, from Simula Research Laboratory, Norway
