| 501 | Bartosz Bogacki and Bartosz Walter. Aspect-oriented Response Injection: an Alternative to Classical Mutation Testing. Software Engineering Techniques: Design for Quality, 2007. |
|
| | Abstract: Due to the increasing importance of test cases in software development, there is a need to verify and assure their quality. Mutation testing is an effective technique for checking whether tests react properly to changes, by introducing alterations to the original source code. A mutant which survives all test cases indicates insufficient or inappropriate testing assertions. The most onerous disadvantage of this technique is the considerable time required to generate and compile mutants and then execute the test cases against each of them. In this paper we propose an aspect-oriented approach to the generation and execution of mutants, called response injection, which eliminates the need for separate compilation of every mutant. |
| | @INPROCEEDINGS{BogackiW07,
author = {Bartosz Bogacki and Bartosz Walter},
title = {Aspect-oriented Response Injection: an Alternative to Classical Mutation Testing},
booktitle = {Software Engineering Techniques: Design for Quality},
year = {2007},
pages = {273-282}
} |
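
The response-injection idea lends itself to a short illustration. The sketch below is a minimal approximation under assumed names, not the authors' implementation: the Account class is hypothetical, and an annotation-style AspectJ around advice (compile-time or load-time weaving assumed) replaces the real return value of getBalance() with a mutated one when a flag is set, so the test suite exercises a simulated mutant without recompiling the class under test.

```java
// Sketch only: hypothetical target class and mutation; requires the AspectJ weaver.
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Hypothetical class under test (stands in for real production code).
class Account {
    int getBalance() { return 100; }
}

@Aspect
public class ResponseInjectionAspect {

    // The test driver flips this flag to activate one simulated mutant at a time.
    public static volatile boolean injectMutant = false;

    @Around("execution(int Account.getBalance())")
    public Object injectResponse(ProceedingJoinPoint pjp) throws Throwable {
        Object original = pjp.proceed();      // run the unmodified method
        if (!injectMutant) {
            return original;                  // normal execution path
        }
        // Simulated mutation: off-by-one on the returned balance.
        return ((Integer) original) + 1;
    }
}
```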
| 502 | Kamel Ayari and Salah Bouktif and Giuliano Antoniol. Automatic Mutation Test Input Data Generation via Ant Colony. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'07), London, England, 7-11 July 2007. |
|
| | Abstract: Fault-based testing is often advocated to overcome limitations of other testing approaches; however, it is also recognized as being expensive. On the other hand, evolutionary algorithms have been proved suitable for reducing the cost of data generation in the context of coverage-based testing. In this paper, we propose a new evolutionary approach based on ant colony optimization for automatic test input data generation in the context of mutation testing, to reduce the cost of such a test strategy. In our approach the ant colony optimization algorithm is enhanced by a probability density estimation technique. We compare our proposal with other evolutionary algorithms, e.g., the Genetic Algorithm. Our preliminary results on Java testbeds show that our approach performed significantly better than other alternatives. |
| | @INPROCEEDINGS{AyariBA07,
author = {Kamel Ayari and Salah Bouktif and Giuliano Antoniol},
title = {Automatic Mutation Test Input Data Generation via Ant Colony},
booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'07)},
year = {2007},
address = {London, England},
month = {7-11 July},
pages = {1074-1081}
} |
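
To make the ant-colony idea concrete, here is a toy sketch (not the authors' algorithm, and with a deliberately simplified fitness function) that searches a small integer domain for an input that kills a single hard-coded mutant. Pheromone over candidate inputs is reinforced whenever an ant samples an input on which the original and mutated functions disagree, and evaporates each iteration.

```java
// Toy ant colony optimization for mutation-killing test input generation (sketch only).
import java.util.Arrays;
import java.util.Random;

public class AcoMutationInputSearch {

    static int original(int x) { return 2 * x; }   // program under test (toy)
    static int mutant(int x)   { return x + 2; }   // hard-coded mutant (toy)

    public static void main(String[] args) {
        final int domain = 100;                    // candidate inputs 0..99
        final int ants = 10, iterations = 20;
        final double evaporation = 0.1, deposit = 1.0;
        double[] pheromone = new double[domain];
        Arrays.fill(pheromone, 1.0);
        Random rnd = new Random(42);

        for (int it = 0; it < iterations; it++) {
            for (int a = 0; a < ants; a++) {
                int x = sample(pheromone, rnd);            // pick an input by pheromone
                if (original(x) != mutant(x)) {            // does this input kill the mutant?
                    pheromone[x] += deposit;               // reinforce killing inputs
                }
            }
            for (int i = 0; i < domain; i++) pheromone[i] *= (1 - evaporation);
        }

        int best = 0;                                      // most reinforced input
        for (int i = 1; i < domain; i++) if (pheromone[i] > pheromone[best]) best = i;
        System.out.println("Best input: " + best
                + " (kills mutant: " + (original(best) != mutant(best)) + ")");
    }

    // Roulette-wheel selection proportional to pheromone levels.
    static int sample(double[] pheromone, Random rnd) {
        double total = 0;
        for (double p : pheromone) total += p;
        double r = rnd.nextDouble() * total;
        for (int i = 0; i < pheromone.length; i++) {
            r -= pheromone[i];
            if (r <= 0) return i;
        }
        return pheromone.length - 1;
    }
}
```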
| 503 | Sergiy Boroday and Alexandre Petrenko and Roland Groz. Can a Model Checker Generate Tests for Non-Deterministic Systems? Proceedings of the 3rd Workshop on Model Based Testing (MBT'07), Braga, Portugal, 31 March-1 April 2007. |
|
| | Abstract: Modern software is increasingly concurrent, timed, distributed, and therefore, non-deterministic. While it is well known that tests can be generated as LTL or CTL model checker counterexamples, we argue that non-determinism creates difficulties that need to be resolved and propose test generation methods to overcome them. The proposed methods rely on fault modeling by mutation and use conventional (closed) and modular (open) model checkers. |
| | @INPROCEEDINGS{BorodayPG07,
author = {Sergiy Boroday and Alexandre Petrenko and Roland Groz},
title = {Can a Model Checker Generate Tests for Non-Deterministic Systems?},
booktitle = {Proceedings of the 3rd Workshop on Model Based Testing (MBT'07)},
year = {2007},
address = {Braga, Portugal},
month = {31 March-1 April},
pages = {3-19}
} |
| 504 | Lijun Shan and Hong Zhu. Generating Structurally Complex Test Cases By Data Mutation: A Case Study Of Testing An Automated Modelling Tool. The Computer Journal, June 2007. |
|
| | Abstract: Generation of adequate test cases is difficult and expensive, especially for testing software systems whose input is structurally complex. This paper presents an approach called data mutation for generating a large set of test data from a few seed test cases. It is inspired by mutation testing methods, but differs from them in its aim and in the way that mutation operators are defined and used. While mutation testing is a method for measuring test adequacy, data mutation is a method of test case generation. In traditional mutation testing, mutation operators are used to transform the program under test. In contrast, mutation operators in our approach are applied to input data to generate test cases, and are hence called data mutation operators. The paper reports a case study applying the method to testing an automated modelling tool, to illustrate the applicability of the proposed method. Experimental data clearly demonstrate that the method is adequate and cost effective, and able to detect a large proportion of faults. |
| | @ARTICLE{ShanZ07,
author = {Lijun Shan and Hong Zhu},
title = {Generating Structurally Complex Test Cases By Data Mutation: A Case Study Of Testing An Automated Modelling Tool},
journal = {The Computer Journal},
year = {2007},
month = {June}
} |
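
As a rough illustration of data mutation (the seed input and the operators below are invented for this sketch, not taken from the paper), mutation operators are applied to a seed input document rather than to the program under test, so each mutated datum becomes one structurally complex test case for the tool being tested.

```java
// Sketch only: invented seed model and data mutation operators.
import java.util.ArrayList;
import java.util.List;

public class DataMutationGenerator {

    // Hypothetical seed input: a tiny textual model fed to the modelling tool.
    static final String SEED = "class Order { id: int; total: double; }";

    public static void main(String[] args) {
        List<String> testInputs = new ArrayList<>();
        testInputs.add(SEED.replace(" id: int;", ""));                 // delete a field
        testInputs.add(SEED.replace("id: int;", "id: int; id: int;")); // duplicate a field
        testInputs.add(SEED.replace("double", "int"));                 // swap a type name
        testInputs.add(SEED.replace("}", ""));                         // drop a closing brace

        // Each mutated datum is one generated test case.
        testInputs.forEach(System.out::println);
    }
}
```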
| 505 | Lydie du Bousquet and Michel Delaunay. Using Mutation Analysis to Evaluate Test Generation Strategies in a Synchronous Context. Proceedings of the 2nd International Conference on Software Engineering Advances (ICSEA'07), Cap Esterel, French Riviera, France, 25-31 August 2007. |
|
| | Abstract: LUTESS is a test data generator dedicated to synchronous software validation. The tool produces test data with respect to an environment description. To check whether this description is really as expected, we use mutation analysis. The key point of the approach is to select a subset of mutants which characterizes certain "interesting situations" that are supposed to be produced often thanks to the environment description. Intuitively, if the preselected mutants are killed "very often" during testing, the environment description is as expected (with respect to these "interesting situations"). |
| | @INPROCEEDINGS{BousquetD07,
author = {Lydie du Bousquet and Michel Delaunay},
title = {Using Mutation Analysis to Evaluate Test Generation Strategies in a Synchronous Context},
booktitle = {Proceedings of the 2nd International Conference on Software Engineering Advances (ICSEA'07)},
year = {2007},
address = {Cap Esterel, French Riviera, France},
month = {25-31 August},
pages = {40}
} |
| 506 | Sean A. Irvine and Tin Pavlinic and Leonard Trigg and John Gerald Cleary and Stuart J. Inglis and Mark Utting. Jumble Java Byte Code to Measure the Effectiveness of Unit Tests. Proceedings of the 3rd Workshop on Mutation Analysis (MUTATION'07), Windsor, UK, 10-14 September 2007. |
|
| | Abstract: Jumble is a byte code level mutation testing tool for Java which interoperates with JUnit. It has been designed to operate in an industrial setting with large projects. Heuristics have been included to speed up the checking of mutations, for example, noting which test fails for each mutation and running that test first in subsequent mutation checks. Significant effort has been put into ensuring that it can test code which uses custom class loading and reflection. This requires careful attention to class path handling and coexistence with foreign class loaders. Jumble is currently used on a continuous basis within an agile programming environment with approximately 370,000 lines of Java code under source control. This environment checks out project code every fifteen minutes and runs an incremental set of unit tests and mutation tests for modified classes. Jumble is being made available as open source. |
| | @INPROCEEDINGS{IrvinePTCIU07,
author = {Sean A. Irvine and Tin Pavlinic and Leonard Trigg and John Gerald Cleary and Stuart J. Inglis and Mark Utting},
title = {Jumble Java Byte Code to Measure the Effectiveness of Unit Tests},
booktitle = {Proceedings of the 3rd Workshop on Mutation Analysis (MUTATION'07)},
year = {2007},
address = {Windsor, UK},
month = {10-14 September},
pages = {169-175}
} |
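
The test-ordering heuristic mentioned in the abstract can be sketched in a few lines. The class below is an assumed reconstruction, not Jumble's actual code: it remembers which test killed each mutation point and moves that test to the front of the run order the next time the same point is mutated, so most mutants die on the first test executed.

```java
// Sketch only: assumed data structures, not Jumble's implementation.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KillerFirstOrdering {

    // Maps a mutation point (e.g. "Account.java:42") to the test that last killed it.
    private final Map<String, String> lastKiller = new HashMap<>();

    /** Returns the test names reordered so the remembered killer runs first. */
    public List<String> order(String mutationPoint, List<String> allTests) {
        List<String> ordered = new ArrayList<>(allTests);
        String killer = lastKiller.get(mutationPoint);
        if (killer != null && ordered.remove(killer)) {
            ordered.add(0, killer);   // run the likely killer first
        }
        return ordered;
    }

    /** Records which test detected (killed) the mutant at this mutation point. */
    public void recordKill(String mutationPoint, String killingTest) {
        lastKiller.put(mutationPoint, killingTest);
    }

    public static void main(String[] args) {
        KillerFirstOrdering ordering = new KillerFirstOrdering();
        ordering.recordKill("Account.java:42", "testOverdraft");
        System.out.println(ordering.order("Account.java:42",
                List.of("testDeposit", "testWithdraw", "testOverdraft")));
        // -> [testOverdraft, testDeposit, testWithdraw]
    }
}
```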
| 507 | Lydie du Bousquet and Michel Delaunay. Mutation Analysis for Lustre programs: Fault Model Description and Validation. Proceedings of the 3rd Workshop on Mutation Analysis (MUTATION'07), Windsor, UK, 10-14 September 2007. |
|
| | Abstract: Mutation analysis is usually used to provide an indication of the fault detection ability of a test set. It is mainly used for unit testing evaluation, but has also been extended to integration testing evaluation. This paper describes the adaptation of mutation analysis to the Lustre programming language, covering both unit and integration testing. The paper focuses on the fault model, which has been extended since our previous work. A validation of the fault model is presented. |
| | @INPROCEEDINGS{BousquetD07b,
author = {Lydie du Bousquet and Michel Delaunay},
title = {Mutation Analysis for Lustre programs: Fault Model Description and Validation},
booktitle = {Proceedings of the 3rd Workshop on Mutation Analysis (MUTATION'07)},
year = {2007},
address = {Windsor, UK},
month = {10-14 September},
pages = {176-184}
} |
| 508 | M. Ellims and D. Ince and M. Petre. The Csaw C Mutation Tool: Initial Results. Testing: Academic and Industrial Conference Practice and Research Techniques - MUTATION (TAICPART-MUTATION 2007), September 2007. |
|
| | Abstract: Available soon... |
| | @INPROCEEDINGS{EllimsIP07,
author = {M. Ellims and D. Ince and M. Petre},
title = {The Csaw C Mutation Tool: Initial Results},
booktitle = {Testing: Academic and Industrial Conference Practice and Research Techniques - MUTATION (TAICPART-MUTATION 2007)},
year = {2007},
month = {September},
pages = {185-192}
} |
| 509 | Shan-Shan Hou and Lu Zhang and Tao Xie and Hong Mei and Jia-Su Sun. Applying Interface-Contract Mutation in Regression Testing of Component-Based Software. Proceedings of the 23rd International Conference on Software Maintenance (ICSM'07), Paris, France, 2-5 October 2007. |
|
| | Abstract: Regression testing, which plays an important role in software maintenance, usually relies on test adequacy criteria to select and prioritize test cases. However, with the wide use and reuse of black-box components, such as reusable class libraries and COTS components, it is challenging to establish test adequacy criteria for testing software systems built on components whose source code is not available. Without source code or detailed documentation, misunderstandings between system integrators and component providers have become a main cause of faults in component-based software. In this paper, we apply mutation to interface contracts, which can describe the rights and obligations between component users and providers, to simulate the faults that may occur in this style of software development. The mutation adequacy score for killing the mutants of interface contracts can serve as a test adequacy criterion. We performed an experimental study on three subject systems to evaluate the proposed approach together with four other existing criteria. The experimental results show that our adequacy criterion is helpful both for selecting good-quality test cases and for scheduling test cases in an order that exposes faults quickly in regression testing of component-based software. |
| | @INPROCEEDINGS{HouZXMS07,
author = {Shan-Shan Hou and Lu Zhang and Tao Xie and Hong Mei and Jia-Su Sun},
title = {Applying Interface-Contract Mutation in Regression Testing of Component-Based Software},
booktitle = {Proceedings of the 23rd International Conference on Software Maintenance (ICSM'07)},
year = {2007},
address = {Paris, France},
month = {2-5 October},
pages = {174-183}
} |
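
A minimal sketch of the interface-contract mutation idea follows, using a hypothetical stack component and an invented relational-operator mutation rather than the paper's own operators: the contract attached to the interface is mutated instead of the component's hidden source code, and a test case kills the contract mutant when the original and mutated contracts disagree on the behaviour the test observes.

```java
// Sketch only: hypothetical component contract and invented mutation operator.
public class ContractMutationExample {

    /** Original postcondition for a hypothetical stack component's push(). */
    static boolean originalPostcondition(int sizeBefore, int sizeAfter) {
        return sizeAfter == sizeBefore + 1;
    }

    /** Mutated postcondition: relational operator replaced (== becomes >=). */
    static boolean mutatedPostcondition(int sizeBefore, int sizeAfter) {
        return sizeAfter >= sizeBefore + 1;
    }

    /** A test kills the contract mutant if the two contracts disagree on what it observes. */
    static boolean kills(int sizeBefore, int sizeAfter) {
        return originalPostcondition(sizeBefore, sizeAfter)
                != mutatedPostcondition(sizeBefore, sizeAfter);
    }

    public static void main(String[] args) {
        System.out.println(kills(3, 5));   // true: size grew by 2, mutant killed
        System.out.println(kills(3, 4));   // false: size grew by 1, mutant survives
    }
}
```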
| 510 | Jeremy S. Bradbury and James R. Cordy and Juergen Dingel. Comparative Assessment of Testing and Model Checking Using Program Mutation. Proceedings of the 3rd Workshop on Mutation Analysis (MUTATION'07), Windsor, UK, 10-14 September 2007. |
|
| | Abstract: Developing correct concurrent code is more difficult than developing correct sequential code. This difficulty is due in part to the many different, possibly unexpected, executions of the program, and it leads to the need for special quality assurance techniques for concurrent programs, such as randomized testing and state space exploration. This paper uses an approach that assesses testing and formal analysis tools with metrics that measure the effectiveness and efficiency of each technique at finding concurrency bugs. Using program mutation, the assessment method creates a range of faulty versions of a program and then evaluates the ability of various testing and formal analysis tools to detect these faults. The approach is implemented and automated in an experimental mutation analysis framework (ExMAn), which allows results to be reproduced more easily. To demonstrate the approach, we present the results of a comparison of testing using the IBM tool ConTest and model checking using the NASA tool Java PathFinder (JPF). |
| | @INPROCEEDINGS{BradburyCD07,
author = {Jeremy S. Bradbury and James R. Cordy and Juergen Dingel},
title = {Comparative Assessment of Testing and Model Checking Using Program Mutation},
booktitle = {Proceedings of the 3rd Workshop on Mutation Analysis (MUTATION'07)},
year = {2007},
address = {Windsor, UK},
month = {10-14 September},
pages = {210-222}
} |