Seldonian Toolkit: Building Software with Safe and Fair Machine Learning
by Austin Hoag, James E. Kostas, Bruno Castro da Silva, Philip S. Thomas, Yuriy Brun
Abstract:

We present the Seldonian Toolkit, which enables software engineers to integrate provably safe and fair machine learning algorithms into their systems. Software systems that use data and machine learning are routinely deployed in a wide range of settings, including medical applications, autonomous vehicles, the criminal justice system, and hiring processes. These systems, however, can produce unsafe and unfair behavior, such as suggesting potentially fatal medical treatments, making racist or sexist predictions, or facilitating radicalization and polarization. To reduce these undesirable behaviors, software engineers need the ability to easily integrate their machine-learning-based systems with domain-specific safety and fairness requirements defined by domain experts, such as doctors and hiring managers. The Seldonian Toolkit provides special machine learning algorithms that enable software engineers to incorporate such expert-defined safety and fairness requirements into their systems, while provably guaranteeing those requirements will be satisfied. A video demonstrating the Seldonian Toolkit is available at https://youtu.be/wHR-hDm9jX4/.
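The guarantee described above comes from the Seldonian framework underlying the toolkit: an algorithm proposes a candidate model, and a held-out safety test deploys it only if a high-confidence upper bound on each constraint's expected violation is at most zero; otherwise it returns "no solution found." The sketch below illustrates that safety-test step using a Hoeffding bound. It is a minimal illustration under assumed conventions (function names, and constraint values scaled into a bounded range with g ≤ 0 meaning "requirement satisfied"), not the toolkit's actual API.

```python
import math

def hoeffding_upper_bound(samples, delta=0.05, lo=0.0, hi=1.0):
    """One-sided (1 - delta)-confidence upper bound on the mean of
    i.i.d. samples bounded in [lo, hi], via Hoeffding's inequality."""
    n = len(samples)
    mean = sum(samples) / n
    return mean + (hi - lo) * math.sqrt(math.log(1.0 / delta) / (2 * n))

def safety_test(g_samples, delta=0.05, lo=-1.0, hi=1.0):
    """Seldonian-style safety test (illustrative): accept the candidate
    model only if, with probability at least 1 - delta, the expected
    constraint value E[g] is <= 0 (here g <= 0 means the expert-defined
    safety or fairness requirement holds)."""
    return hoeffding_upper_bound(g_samples, delta, lo, hi) <= 0.0

# Constraint evaluations of a candidate model on held-out safety data:
clearly_safe = [-0.5] * 1000   # requirement satisfied with margin
borderline = [0.1] * 1000      # requirement violated on average

print(safety_test(clearly_safe))  # candidate passes and may be deployed
print(safety_test(borderline))    # candidate rejected: no solution found
```

Because the bound shrinks as the safety dataset grows, the test rejects safe candidates less often with more data, while the 1 − delta guarantee on deployed models holds at any sample size.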

Citation:
Austin Hoag, James E. Kostas, Bruno Castro da Silva, Philip S. Thomas, and Yuriy Brun, Seldonian Toolkit: Building Software with Safe and Fair Machine Learning, in Proceedings of the Demonstrations Track at the 45th International Conference on Software Engineering (ICSE), 2023.
Bibtex:
@inproceedings{Hoag23icse-demo,
  author = {Austin Hoag and James E. Kostas and Bruno Castro da Silva and 
  Philip S. Thomas and Yuriy Brun},
  title =
  {\href{http://people.cs.umass.edu/brun/pubs/pubs/Hoag23icse-demo.pdf}{Seldonian Toolkit: 
  {Building} Software with Safe and Fair Machine Learning}},
  booktitle = {Proceedings of the Demonstrations Track at the 45th International Conference on Software Engineering (ICSE)},
  venue = {ICSE Demo},
  address = {Melbourne, Australia},
  month = {May},
  date = {14--20},
  year = {2023},
  accept = {$\frac{38}{80} \approx 48\%$},

  abstract = {<p>We present the Seldonian Toolkit, which enables software
  engineers to integrate provably safe and fair machine learning algorithms
  into their systems. Software systems that use data and machine learning are
  routinely deployed in a wide range of settings, including medical
  applications, autonomous vehicles, the criminal justice system, and hiring
  processes. These systems, however, can produce unsafe and unfair behavior,
  such as suggesting potentially fatal medical treatments, making racist or
  sexist predictions, or facilitating radicalization and polarization. To
  reduce these undesirable behaviors, software engineers need the ability to
  easily integrate their machine-learning-based systems with domain-specific
  safety and fairness requirements defined by domain experts, such as doctors
  and hiring managers. The Seldonian Toolkit provides special machine
  learning algorithms that enable software engineers to incorporate such
  expert-defined safety and fairness requirements into their systems,
  while provably guaranteeing those requirements will be satisfied. A video
  demonstrating the Seldonian Toolkit is available at
  https://youtu.be/wHR-hDm9jX4/.</p>},
}