Understanding Regression Failures through Test-Passing and Test-Failing Code Changes
by Roykrong Sukkerd, Ivan Beschastnikh, Jochen Wuttke, Sai Zhang, and Yuriy Brun
Abstract:

Debugging and isolating changes responsible for regression test failures are some of the most challenging aspects of modern software development. Automatic bug localization techniques reduce the manual effort developers spend examining code, for example by focusing attention on the minimal subset of recent changes that results in the test failure, or on changes to components with the most dependencies or the highest churn.

We observe that another subset of changes is worth the developers' attention: the complement of the maximal set of changes that does not produce the failure. While for simple, independent source-code changes, existing techniques localize the failure cause to a small subset of those changes, we find that when changes interact, the failure cause is often in our proposed subset and not in the subset existing techniques identify.

In studying 45 regression failures in a large, open-source project, we find that for 87% of those failures, the subset our technique finds differs from the one existing work identifies, and that for 78% of the failures, our technique identifies relevant changes ignored by existing work. These preliminary results suggest that combining our ideas with existing techniques, as opposed to using either in isolation, can improve the effectiveness of bug localization tools.
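The difference between the two subsets can be shown with a toy sketch. This is not the paper's algorithm: the `fails` predicate is a hypothetical stand-in for running the regression test on a set of applied changes, and the exhaustive search stands in for delta-debugging-style minimization. When two changes independently trigger the failure (an interaction the minimal-subset view handles poorly), a minimal failing subset contains only one culprit, while the complement of the maximal passing set contains both.

```python
from itertools import combinations

CHANGES = {"A", "B", "C"}

def fails(applied):
    # Hypothetical regression test: it fails if change A or change B
    # is applied (two independently faulty changes; C is benign).
    return "A" in applied or "B" in applied

def minimal_failing_subset(changes):
    # Exhaustive stand-in for delta debugging:
    # return the smallest subset of changes that makes the test fail.
    for k in range(1, len(changes) + 1):
        for combo in combinations(sorted(changes), k):
            if fails(set(combo)):
                return set(combo)
    return set(changes)

def maximal_passing_complement(changes):
    # Proposed focus: find the largest subset of changes that still
    # passes the test, then report its complement.
    for k in range(len(changes), -1, -1):
        for combo in combinations(sorted(changes), k):
            if not fails(set(combo)):
                return changes - set(combo)
    return set(changes)

print(minimal_failing_subset(CHANGES))      # a single culprit: {'A'}
print(maximal_passing_complement(CHANGES))  # both culprits: A and B
```

Here the minimal failing subset is {A}, so a developer inspecting only that subset would miss the second faulty change B; the complement of the maximal passing set ({C} passes, so the complement is {A, B}) surfaces both, matching the abstract's point that the two subsets complement each other when changes interact.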

Citation:
Roykrong Sukkerd, Ivan Beschastnikh, Jochen Wuttke, Sai Zhang, and Yuriy Brun, Understanding Regression Failures through Test-Passing and Test-Failing Code Changes, in Proceedings of the New Ideas and Emerging Results Track at the 35th International Conference on Software Engineering (ICSE), 2013, pp. 1177–1180.
Bibtex:
@inproceedings{Sukkerd13icse-nier,
  author = {Roykrong Sukkerd and Ivan Beschastnikh and Jochen Wuttke and Sai
  Zhang and Yuriy Brun},
  title =
  {\href{http://people.cs.umass.edu/brun/pubs/pubs/Sukkerd13icse-nier.pdf}{Understanding
  Regression Failures through Test-Passing and Test-Failing Code Changes}},
  booktitle = {Proceedings of the New Ideas and Emerging Results Track at the
  35th International Conference on Software Engineering (ICSE)},  
  venue = {ICSE NIER},
  address = {San Francisco, CA, USA},
  month = {May},
  date = {22--24},
  year = {2013},
  pages = {1177--1180},
  doi = {10.1109/ICSE.2013.6606672},
  note = {\href{http://dx.doi.org/10.1109/ICSE.2013.6606672}{DOI: 10.1109/ICSE.2013.6606672}},
  accept = {$\frac{31}{143} \approx 22\%$},

  abstract = {<p>Debugging and isolating changes responsible for
  regression test failures are some of the most challenging aspects of
  modern software development. Automatic bug localization techniques
  reduce the manual effort developers spend examining code, for
  example by focusing attention on the minimal subset of recent
  changes that results in the test failure, or on changes to
  components with the most dependencies or the highest churn.</p>

  <p>We observe that another subset of changes is worth the developers'
  attention: the complement of the maximal set of changes that does
  not produce the failure. While for simple, independent source-code
  changes, existing techniques localize the failure cause to a small
  subset of those changes, we find that when changes interact, the
  failure cause is often in our proposed subset and not in the subset
  existing techniques identify.</p>

  <p>In studying 45 regression failures in a large, open-source project,
  we find that for 87% of those failures, the subset our technique
  finds differs from the one existing work identifies, and that for
  78% of the failures, our technique identifies relevant changes
  ignored by existing work. These preliminary results suggest that
  combining our ideas with existing techniques, as opposed to using
  either in isolation, can improve the effectiveness of bug
  localization tools.</p>},
}