2024 + Forthcoming

Weighted Notions of Fairness with Binary Supermodular Chores

Vignesh Viswanathan and Yair Zick
U5

Abstract

We study the problem of allocating indivisible chores among agents with binary supermodular cost functions. In other words, each chore has a marginal cost of 0 or 1 and chores exhibit increasing marginal costs (or decreasing marginal utilities). In this note, we combine the techniques of Viswanathan and Zick (2022) and Barman et al. (2023) to present a general framework for fair allocation with this class of valuation functions. Our framework allows us to generalize the results of Barman et al. (2023) and efficiently compute allocations which satisfy weighted notions of fairness like weighted leximin or min weighted p-mean malfare for any p≥1.
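
For reference (our notation, not the paper's): the weighted p-mean malfare of a cost profile c = (c_1, ..., c_n) under entitlement weights w = (w_1, ..., w_n) is standardly written as

M_p(c; w) = \left( \sum_{i=1}^{n} w_i \, c_i^{\,p} \right)^{1/p}, \qquad p \ge 1.

Minimizing M_p interpolates between the weighted average cost at p = 1 and, informally, the cost of the worst-off agent as p grows large; weighted leximin can be viewed as an egalitarian refinement of that limit.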

Axiomatic Aggregations of Abductive Explanations

Gagan Biradar, Yacine Izza, Elita Lobo, Vignesh Viswanathan and Yair Zick
C46 Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024)

Abstract

The recent criticisms of the robustness of post hoc model approximation explanation methods (like LIME and SHAP) have led to the rise of model-precise abductive explanations. For each data point, abductive explanations provide a minimal subset of features that are sufficient to generate the outcome. While theoretically sound and rigorous, abductive explanations suffer from a major issue -- there can be several valid abductive explanations for the same data point. In such cases, providing a single abductive explanation can be insufficient; on the other hand, providing all valid abductive explanations can be incomprehensible due to their size. In this work, we solve this issue by aggregating the many possible abductive explanations into feature importance scores. We propose three aggregation methods: two based on power indices from cooperative game theory and a third based on a well-known measure of causal strength. We characterize these three methods axiomatically, showing that each of them uniquely satisfies a set of desirable properties. We also evaluate them on multiple datasets and show that these explanations are robust to the attacks that fool SHAP and LIME.
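
As a toy illustration of the aggregation idea (a hypothetical sketch, not the power-index or causal-strength measures characterized in the paper), one could score each feature by the fraction of abductive explanations that contain it:

from collections import Counter

def coverage_scores(explanations, num_features):
    """Score each feature by the fraction of abductive explanations
    (minimal sufficient feature subsets) that contain it."""
    counts = Counter()
    for subset in explanations:          # each explanation is a set of feature indices
        counts.update(subset)
    return [counts[f] / len(explanations) for f in range(num_features)]

# Three hypothetical abductive explanations for a single data point.
print(coverage_scores([{0, 2}, {0, 3}, {2, 3}], num_features=4))
# -> roughly [0.67, 0.0, 0.67, 0.67]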


2023

A Learning Framework for Distribution-Based Game-Theoretic Solution Concepts

TJ Jha and Yair Zick
J12 ACM Transactions on Economics and Computation, January 2023

Abstract

The past few years have seen several works exploring learning economic solutions from data; these include optimal auction design, function optimization, stable payoffs in cooperative games and more. In this work, we provide a unified learning-theoretic methodology for modeling such problems, and establish tools for determining whether a given solution concept can be efficiently learned from data. Our learning theoretic framework generalizes a notion of function space dimension — the graph dimension — adapting it to the solution concept learning domain. We identify sufficient conditions for efficient solution learnability, and show that results in existing works can be immediately derived using our methodology. Finally, we apply our methods in other economic domains, yielding learning variants of competitive equilibria and Condorcet winners.

Percentile Criterion Optimization in Offline Reinforcement Learning

Cyrus Cousins, Elita Lobo, Marek Petrik and Yair Zick
C45 Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023)

Abstract

In reinforcement learning, robust policies for high-stakes decision-making problems with limited data are usually computed by optimizing the percentile criterion. The percentile criterion is optimized by constructing an uncertainty set that contains the true model with high probability and optimizing the policy for the worst model in the set. Since the percentile criterion is non-convex, constructing uncertainty sets is often challenging. Existing works use Bayesian credible regions as uncertainty sets, but they are often unnecessarily large and result in learning overly conservative policies. To overcome these shortcomings, we propose a novel Value-at-Risk based dynamic programming algorithm to optimize the percentile criterion without explicitly constructing any uncertainty sets. Our theoretical and empirical results show that our algorithm implicitly constructs much smaller uncertainty sets and learns less conservative robust policies.
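
A heavily simplified, hypothetical rendering of the idea (not the paper's algorithm): replace the worst-case Bellman backup over an uncertainty set with a low quantile, i.e. a value-at-risk, of the backed-up values across posterior model samples.

import numpy as np

def var_backup(value, reward, sampled_models, state, action, alpha=0.05, gamma=0.95):
    """One VaR-style backup for a single (state, action) pair.
    sampled_models: array of shape (num_samples, S, A, S) holding transition
    probabilities drawn from the posterior over models; value: length-S array."""
    backups = np.array([
        reward[state, action] + gamma * P[state, action] @ value
        for P in sampled_models
    ])
    # The alpha-quantile plays the role of the "worst plausible" model,
    # without explicitly constructing an uncertainty set.
    return np.quantile(backups, alpha)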

The Good, the Bad and the Submodular: Fairly Allocating Mixed Manna Under Order-Neutral Submodular Preferences

Cyrus Cousins, Vignesh Viswanathan and Yair Zick
C44 Proceedings of the 19th Conference on Web and Internet Economics (WINE 2023)

Abstract

We study the problem of fairly allocating indivisible goods (positively valued items) and chores (negatively valued items) among agents with decreasing marginal utilities over items. Our focus is on instances where all the agents have simple preferences; specifically, we assume the marginal value of an item can be either −1, 0 or some positive integer c. Under this assumption, we present an efficient algorithm to compute leximin allocations for a broad class of valuation functions we call order-neutral submodular valuations. Order-neutral submodular valuations strictly contain the well-studied class of additive valuations but are a strict subset of the class of submodular valuations. We show that these leximin allocations are Lorenz dominating and approximately proportional. We also show that, under further restriction to additive valuations, these leximin allocations are approximately envy-free and guarantee each agent their maximin share. We complement this algorithmic result with a lower bound showing that the problem of computing leximin allocations is NP-hard when c is a rational number.
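
For readers unfamiliar with the leximin order used here and in several papers below: allocation A is leximin-preferred to B if its sorted utility vector is lexicographically larger. A minimal illustrative comparator (our sketch):

def leximin_key(utilities):
    """Allocation A beats B in the leximin order iff
    leximin_key(utils_A) > leximin_key(utils_B)."""
    return tuple(sorted(utilities))

# (1, 3, 3) is leximin-preferred to (1, 2, 5), despite the lower total utility.
print(leximin_key([1, 3, 3]) > leximin_key([1, 2, 5]))  # True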

Dividing Good and Better Items Among Agents with Submodular Valuations

Cyrus Cousins, Vignesh Viswanathan and Yair Zick
C43 Proceedings of the 19th Conference on Web and Internet Economics (WINE 2023)

Abstract

We study the problem of fairly allocating a set of indivisible goods among agents with bivalued submodular valuations -- each good provides a marginal gain of either a or b (a < b) and goods have decreasing marginal gains. This is a natural generalization of two well-studied valuation classes -- bivalued additive valuations and binary submodular valuations. We present a simple sequential algorithmic framework, based on the recently introduced Yankee Swap mechanism, that can be adapted to compute a variety of solution concepts, including leximin, max Nash welfare (MNW) and p-mean welfare maximizing allocations when a divides b. This result is complemented by an existing result on the computational intractability of leximin and MNW allocations when a does not divide b. We further examine leximin and MNW allocations with respect to two well-known properties -- envy freeness and the maximin share guarantee. On envy freeness, we show that neither the leximin nor the MNW allocation is guaranteed to be envy free up to one good (EF1). This is surprising since for the simpler classes of bivalued additive valuations and binary submodular valuations, MNW allocations are known to be envy free up to any good (EFX). On the maximin share guarantee, we show that MNW and leximin allocations guarantee each agent 1/4 and a/(b+3a) of their maximin share respectively when a divides b. This fraction improves to 1/3 and a/(b+2a) respectively when agents have bivalued additive valuations.

Into the Unknown: Assigning Reviewers to Papers with Uncertain Affinities

Cyrus Cousins, Justin Payan and Yair Zick
C42 Proceedings of the 16th International Symposium on Algorithmic Game Theory (SAGT 2023)

Abstract

Peer review cannot work unless qualified and interested reviewers are assigned to each paper. Nearly all automated reviewer assignment approaches estimate real-valued affinity scores for each paper-reviewer pair that act as proxies for the predicted quality of a future review; conferences then assign reviewers to maximize the sum of these values. This procedure does not account for noise in affinity score computation -- reviewers can only bid on a small number of papers, and textual similarity models are inherently probabilistic estimators. In this work, we assume paper-reviewer affinity scores are estimated using a probabilistic model. Using these probabilistic estimates, we bound the scores with high probability and maximize the worst-case sum of scores for a reviewer allocation. Although we do not directly recommend any particular method for estimation of probabilistic affinity scores, we demonstrate how to robustly maximize the sum of scores across multiple different models. Our general approach can be used to integrate a large variety of probabilistic paper-reviewer affinity models into reviewer assignment, opening the door to a much more robust peer review process.
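
A hypothetical, much-simplified sketch of the robust objective: lower-bound each paper-reviewer score with high probability (here via a crude normal-approximation bound) and assign on the bounds. The actual setting assigns multiple reviewers per paper and supports richer probabilistic affinity models; the one-reviewer-per-paper matching below is only illustrative.

import numpy as np
from scipy.optimize import linear_sum_assignment

def robust_assignment(mean_scores, std_scores, z=1.96):
    """mean_scores, std_scores: (num_papers, num_reviewers) arrays of
    estimated affinity means and standard deviations."""
    lower_bounds = mean_scores - z * std_scores          # high-probability lower bounds
    papers, reviewers = linear_sum_assignment(lower_bounds, maximize=True)
    return list(zip(papers, reviewers)), lower_bounds[papers, reviewers].sum()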

A General Framework for Fair Allocation with Matroid Rank Valuations

Vignesh Viswanathan and Yair Zick
C41 Proceedings of the 24th ACM Conference on Economics and Computation (EC 2023)

Abstract

We study the problem of fairly allocating a set of indivisible goods among agents with matroid rank valuations -- every good provides a marginal value of 0 or 1 when added to a bundle and valuations are submodular. We generalize the Yankee Swap algorithm to create a simple framework, called General Yankee Swap, that can efficiently compute allocations that maximize any justice criterion (or fairness objective) satisfying some mild assumptions. Along with maximizing a justice criterion, General Yankee Swap is guaranteed to maximize utilitarian social welfare, ensure strategyproofness and use at most a quadratic number of valuation queries. We show how General Yankee Swap can be used to compute allocations for five different well-studied justice criteria: (a) Prioritized Lorenz dominance, (b) Maximin fairness, (c) Weighted leximin, (d) Max weighted Nash welfare, and (e) Max weighted p-mean welfare. In particular, our framework provides the first polynomial time algorithms to compute weighted leximin, max weighted Nash welfare and max weighted p-mean welfare allocations for agents with matroid rank valuations.

Yankee Swap: a Fast and Simple Fair Allocation Mechanism for Matroid Rank Valuations

Vignesh Viswanathan and Yair Zick
C40 Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023).

Abstract

We study fair allocation of indivisible goods when agents have matroid rank valuations. Our main contribution is a simple algorithm based on the colloquial Yankee Swap procedure that computes provably fair and efficient Lorenz dominating allocations. While there exist polynomial time algorithms to compute such allocations, our proposed method improves on them in two ways. (a) Our approach is easy to understand and does not use complex matroid optimization algorithms as subroutines. (b) Our approach is scalable; it is provably faster than all known algorithms to compute Lorenz dominating allocations. These two properties are key to the adoption of algorithms in any real fair allocation setting; our contribution brings us one step closer to this goal.

For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods (Extended Abstract)

Jonathan Scarlett, Nicholas Teh and Yair Zick
S2 Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023).

Abstract

Traditionally, research into the fair allocation of indivisible goods has focused on individual fairness and group fairness. In this paper, we explore the co-existence of individual envy-freeness (i-EF) and its group counterpart, group weighted envy-freeness (g-WEF). We propose several polynomial-time algorithms that can provably achieve i-EF and g-WEF simultaneously in various degrees of approximation under three different conditions on the agents’ valuation functions: (i) when agents have identical additive valuation functions, i-EFX and g-WEF1 can be achieved simultaneously; (ii) when agents within a group share a common valuation function, an allocation satisfying both i-EF1 and g-WEF1 exists; and (iii) when agents’ valuations for goods within a group differ, we show that while maintaining i-EF1, we can achieve a 1/3-approximation to g-WEF1 in expectation. In addition, we introduce several novel fairness characterizations that exploit inherent group structures and their relation to individuals, such as proportional envy-freeness and group stability. We show that our algorithms can guarantee these properties approximately in polynomial time. Our results thus provide a first step toward connecting individual and group fairness in the allocation of indivisible goods.

2022

I Will Have Order! Optimizing Orders for Fair Reviewer Assignment

Justin Payan and Yair Zick
C39 Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022).

Abstract

Scientific advancement requires effective peer review. Papers should be reviewed by experts in the subject area, but it is equally important that reviewer quality is fairly distributed amongst papers. We model reviewer assignment as an instance of a fair allocation problem, presenting an extension of the classic round-robin mechanism, called Reviewer Round Robin (RRR). Round-robin mechanisms are a standard tool to ensure envy-free up to one item (EF1) allocations. However, fairness often comes at the cost of decreased efficiency. To overcome this challenge, we carefully select an approximately optimal round-robin order. Applying a relaxation of submodularity, γ-weak submodularity, we show that greedily inserting papers into an order yields a (1 + γ²)-approximation to the maximum welfare attainable by our round-robin mechanism under any order. Our approach outputs highly efficient EF1 allocations for three real conference datasets, outperforming several state-of-the-art paper assignment methods in fairness, efficiency and runtime.
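
A hypothetical sketch of the outer greedy loop only (the full RRR mechanism and the welfare evaluation are not shown; welfare(order) is treated as a black box returning the welfare of running round robin under that paper order):

def greedy_order(papers, welfare):
    """Greedily append, at each step, the paper whose insertion at the end of
    the current order yields the largest round-robin welfare."""
    order, remaining = [], set(papers)
    while remaining:
        best = max(remaining, key=lambda p: welfare(order + [p]))
        order.append(best)
        remaining.remove(best)
    return order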

Model Explanations with Differential Privacy

Neel Patel, Reza Shokri and Yair Zick
C38 Proceedings of the 5th ACM Conference on Fairness, Accountability and Transparency (FAccT 2022).

Abstract

Complex machine learning models are regularly used in critical decision-making domains, as black-box algorithms. This has given rise to several calls for algorithmic explainability. The challenge is that explanations can leak sensitive information about their datasets, which are used to locally approximate the black-box models. To address this issue, we design differentially private explanation algorithms, for interactive and non-interactive settings. Our model explanations offer both privacy and explanation quality guarantees: their distance from their non-differentially private counterparts is small, and decreases in the size of the dataset used to generate them. To showcase the effectiveness of our approach, and the tradeoffs between explainability and privacy, we conduct several experiments on real-life datasets.
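
A rough, hypothetical illustration of the non-interactive flavour of this idea (not the paper's exact mechanism): release a locally fitted explanation vector with Laplace noise calibrated to a sensitivity bound, yielding epsilon-differential privacy with respect to the dataset used to fit the local approximation. Deriving a valid sensitivity bound is the substantive step and is simply assumed here.

import numpy as np

def private_explanation(explanation, sensitivity, epsilon, rng=None):
    """Add Laplace noise with scale sensitivity/epsilon to each coordinate of
    a (locally fitted) explanation vector."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(scale=sensitivity / epsilon, size=len(explanation))
    return np.asarray(explanation) + noise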

2021

Weighted Envy-freeness in Indivisible Item Allocation

Mithun Chakraborty, Ayumi Igarashi, Warut Suksompong and Yair Zick
J11 ACM Transactions on Economics and Computation, Volume 9, Issue 3, September 2021, Article No.: 18, pp. 1–39

Abstract

We introduce and analyze new envy-based fairness concepts for agents with weights that quantify their entitlements in the allocation of indivisible items. We propose two variants of weighted envy-freeness up to one item (WEF1): strong, where envy can be eliminated by removing an item from the envied agent’s bundle, and weak, where envy can be eliminated either by removing an item (as in the strong version) or by replicating an item from the envied agent’s bundle in the envying agent’s bundle. We show that for additive valuations, an allocation that is both Pareto optimal and strongly WEF1 always exists and can be computed in pseudo-polynomial time; moreover, an allocation that maximizes the weighted Nash social welfare may not be strongly WEF1, but it always satisfies the weak version of the property. Moreover, we establish that a generalization of the round-robin picking sequence algorithm produces in polynomial time a strongly WEF1 allocation for an arbitrary number of agents; for two agents, we can efficiently achieve both strong WEF1 and Pareto optimality by adapting the adjusted winner procedure. Our work highlights several aspects in which weighted fair division is richer and more challenging than its unweighted counterpart.
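
To make the strong variant concrete: agent i weighted-envies agent j if v_i(A_i)/w_i < v_i(A_j)/w_j, and strong WEF1 requires that some item can be removed from A_j to eliminate that envy. A hypothetical checker for additive valuations and positive weights:

def is_strong_wef1(bundles, valuations, weights):
    """bundles[i]: items held by agent i; valuations[i][g]: additive value of
    item g to agent i; weights[i]: positive entitlement of agent i."""
    n = len(bundles)
    for i in range(n):
        own = sum(valuations[i][g] for g in bundles[i]) / weights[i]
        for j in range(n):
            if i == j:
                continue
            other = sum(valuations[i][g] for g in bundles[j])
            if not bundles[j]:
                ok = own >= other / weights[j]
            else:
                ok = any(own >= (other - valuations[i][g]) / weights[j]
                         for g in bundles[j])
            if not ok:
                return False
    return True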

Finding Fair and Efficient Allocations for Matroid Rank Valuations

Nawal Benabbou, Mithun Chakraborty, Ayumi Igarashi and Yair Zick
J10 ACM Transactions on Economics and Computation, Volume 9, Issue 4, December 2021, Article No.: 21, pp. 1–41

Abstract

In this article, we present new results on the fair and efficient allocation of indivisible goods to agents whose preferences correspond to matroid rank functions. This is a versatile valuation class with several desirable properties (such as monotonicity and submodularity), which naturally lends itself to a number of real-world domains. We use these properties to our advantage; first, we show that when agent valuations are matroid rank functions, a socially optimal (i.e., utilitarian social welfare-maximizing) allocation that achieves envy-freeness up to one item (EF1) exists and is computationally tractable. We also prove that the Nash welfare-maximizing and the leximin allocations both exhibit this fairness/efficiency combination by showing that they can be achieved by minimizing any symmetric strictly convex function over utilitarian optimal outcomes. To the best of our knowledge, this is the first valuation function class not subsumed by additive valuations for which it has been established that an allocation maximizing Nash welfare is EF1. Moreover, for a subclass of these valuation functions based on maximum (unweighted) bipartite matching, we show that a leximin allocation can be computed in polynomial time. Additionally, we explore possible extensions of our results to fairness criteria other than EF1 as well as to generalizations of the above valuation classes.

Privacy Risks of Interpretable Machine Learning

Reza Shokri, Martin Strobel and Yair Zick
C37 Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES 2021).

Abstract

Can an adversary exploit transparent machine learning to infer information about the training set? To investigate this question, we focus on the basic membership inference attack: given a data point and the output of a transparent machine learning model, the attacker's goal is to decide whether the point belongs to the training data. We study this problem for two popular transparency methods: gradient-based attribution methods and record-based influence measures. For gradient-based methods, we develop an attack that can be executed by an attacker with very limited power, yet obtains accuracy comparable to existing membership inference attacks. Record-based influence measures can also be exploited for membership inference; moreover, we demonstrate that they can be effectively used to recover significant parts of the training set. Finally, our results indicate that minorities and outliers are more vulnerable to these types of attacks than majority groups.

High-Dimensional Model Explanations: an Axiomatic Approach

Neel Patel, Martin Strobel and Yair Zick
C36 Proceedings of the 4th ACM Conference on Fairness, Accountability and Transparency (FAccT 2021)

Abstract

Complex black-box machine learning models are regularly used in critical decision-making domains. This has given rise to several calls for algorithmic explainability. Many explanation algorithms proposed in the literature assign importance to each feature individually. However, such explanations fail to capture the joint effects of sets of features. Indeed, few works so far formally analyze high-dimensional model explanations. In this paper, we propose a novel high-dimensional model explanation method that captures the joint effect of feature subsets. We propose a new axiomatization for a generalization of the Banzhaf index; our method can also be thought of as an approximation of a black-box model by a higher-order polynomial. In other words, this work justifies the use of the generalized Banzhaf index as a model explanation by showing that it uniquely satisfies a set of natural desiderata and that it is the optimal local approximation of a black-box model. Our empirical evaluation of our measure highlights how it manages to capture desirable behavior, whereas other measures that do not satisfy our axioms behave in an unpredictable manner.
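
One standard Monte Carlo form of a Banzhaf-style interaction index for a feature subset S (shown purely for illustration; the paper's axiomatized generalization may differ in details) is the expected discrete derivative of a set function f with respect to S over a uniformly random context:

import itertools, random

def banzhaf_interaction(f, features, subset, num_samples=1000, rng=random):
    """Estimate the interaction index of `subset` for a set function f that
    maps frozensets of features to real numbers."""
    rest = [x for x in features if x not in subset]
    subset = list(subset)
    total = 0.0
    for _ in range(num_samples):
        context = frozenset(x for x in rest if rng.random() < 0.5)  # uniform random context T
        for r in range(len(subset) + 1):
            for chosen in itertools.combinations(subset, r):
                total += (-1) ** (len(subset) - r) * f(context | frozenset(chosen))
    return total / num_samples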

The Price is (Probably) Right: Learning Market Equilibria From Samples

Omer Lev, Vignesh Viswanathan, Neel Patel and Yair Zick
C35 Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021)

Abstract

Equilibrium computation in markets usually considers settings where player valuation functions are known. We consider the setting where player valuations are unknown; using the probably approximately correct (PAC) learning framework, we examine some classes of common valuation functions, and provide algorithms which output PAC equilibrium allocations. Since there exist trivial PAC market outcomes with an unbounded worst-case efficiency loss, we lower-bound the efficiency of our algorithms. While the efficiency loss under general distributions is rather high, we show that under unit-demand valuations and samples drawn from product distributions, it is possible to find a PAC market equilibrium with significantly better utility.

Learning Cooperative Solution Concepts From Voting Behavior: A Case Study on the Israeli Knesset

Omer Lev, Wei Lu, Alan Tsang and Yair Zick
S2 Short paper. Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021)

Abstract

Most frameworks for computing solution concepts in hedonic games are theoretical in nature, and require complete knowledge of all agent preferences, an impractical assumption in real-world settings. Recent work lays the theoretical foundation of Probably Approximately Correct (PAC) stability to model stability under uncertainty through sampling. This paper presents the first application of strategic hedonic game models on real-world data. We show that PAC stable solutions can reflect the political positions of Members of Knesset at large. Moreover, these comparisons also reveal politicians who are known to deviate from party lines. Finally, we show that PAC hedonic game models compare favorably to both k-means clustering and stochastic block models, which do not account for strategic behavior, for uncovering voting patterns.

2020

Human-computer Coalition Formation in Weighted Voting Games

Moshe Mash, Roy Fairstein, Yoram Bachrach, Kobi Gal and Yair Zick
J9 ACM Transactions on Intelligent Systems and Technology 11(6): 73:1-73:20 (2020)

Abstract

This article proposes a negotiation game, based on the weighted voting paradigm in cooperative game theory, where agents need to form coalitions and agree on how to share the gains. Despite the prevalence of weighted voting in the real world, there has been little work studying people’s behavior in such settings. This work addresses this gap by combining game-theoretic solution concepts with machine learning models for predicting human behavior in such domains. We present a five-player online version of a weighted voting game in which people negotiate to create coalitions. We provide an equilibrium analysis of this game and collect hundreds of instances of people’s play in the game. We show that a machine learning model with features based on solution concepts from cooperative game theory (in particular, an extension of the Deegan-Packel Index) provides a good prediction of people’s decisions to join coalitions in the game. We designed an agent that uses the prediction model to make offers to people in this game and was able to outperform other people in an extensive empirical study. These results demonstrate the benefit of incorporating concepts from cooperative game theory in the design of agents that interact with people in group decision-making settings.

The Price of Quota-based Diversity in Assignment Problems

Nawal Benabbou, Mithun Chakraborty, Xuan-Vinh Ho, Jakub Sliwinski and Yair Zick
J8 ACM Transactions on Economics and Computation 8(3): 14:1-14:32 (2020)

Abstract

In this article, we introduce and analyze an extension to the matching problem on a weighted bipartite graph (i.e., the assignment problem): Assignment with Type Constraints. Here, the two parts of the graph are each partitioned into subsets, called types and blocks, respectively; we seek a matching with the largest sum of weights under the constraint that there is a pre-specified cap on the number of vertices matched in every type-block pair. Our primary motivation stems from the large-scale public housing program run by the state of Singapore, accounting for over 70% of its residential real estate. To promote ethnic diversity within its housing projects, Singapore imposes ethnicity quotas: The population is divided into ethnicity-based groups and each new housing development into blocks of flats such that each group must not own more than a certain percentage of flats in a block. However, other domains use similar hard capacity constraints to maintain diversity: These include matching prospective students to schools or medical residents to hospitals. Limiting agents’ choices for ensuring diversity in this manner naturally entails some welfare loss. One of our goals is to study the tradeoff between diversity and (utilitarian) social welfare in such settings. We first show that, while the classic assignment program is polynomial-time computable, adding diversity constraints makes the problem computationally intractable; however, we identify a 1/2-approximation algorithm, as well as reasonable assumptions on the structure of utilities (or weights) that permit poly-time algorithms. Next, we provide two upper bounds on the price of diversity—a measure of the loss in welfare incurred by imposing diversity constraints—as functions of natural problem parameters. We conclude the article with simulations based on publicly available data from two diversity-constrained allocation problems—Singapore Public Housing and Chicago School Choice—which shed light on how the constrained maximization as well as lottery-based variants perform in practice.
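
Stated compactly (the symbols below, including the caps λ_{t,b}, are our notation rather than the paper's), Assignment with Type Constraints is the integer program

\max \sum_{i,j} w_{ij} x_{ij}
\text{s.t.}\quad \sum_{j} x_{ij} \le 1 \;\;\forall i, \qquad \sum_{i} x_{ij} \le 1 \;\;\forall j,
\qquad \sum_{i \in t}\sum_{j \in b} x_{ij} \le \lambda_{t,b} \;\;\forall (t,b), \qquad x_{ij} \in \{0,1\}.

Dropping the type-block cap constraints recovers the classic (polynomial-time) assignment problem, so the hardness discussed above enters only through the diversity caps.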

Finding Fair and Efficient Allocations When Valuations Don't Add Up

Nawal Benabbou, Mithun Chakraborty, Ayumi Igarashi and Yair Zick
C34 Proceedings of the 13th Symposium on Algorithmic Game Theory (SAGT 2020)

Abstract

In this paper, we present new results on the fair and efficient allocation of indivisible goods to agents that have monotone, submodular, non-additive valuation functions over bundles. In particular, we show that, if such a valuation function additionally has binary marginal gains, a socially optimal (i.e., utilitarian social welfare-maximizing) allocation that achieves envy-freeness up to one item --- a property that has been called "unreasonable fairness" in prior work --- exists and can be computed efficiently. We also prove that the Nash welfare-maximizing and the leximin allocations both exhibit this fairness-efficiency combination, by showing that they can be achieved by minimizing any symmetric strictly convex function over utilitarian optimal outcomes. Moreover, for a subclass of these valuation functions based on maximum (unweighted) bipartite matching, we show that a leximin allocation can be computed in polynomial time.

A Learning Framework for Distribution-Based Game-Theoretic Solution Concepts

Tushant Jha and Yair Zick
C33 Proceedings of the 21st ACM Conference on Economics and Computation (EC 2020)

Abstract

The past few years have seen several works establishing PAC frameworks for solving various problems in economic domains; these include optimal auction design, approximate optima of submodular functions, stable partitions and payoff divisions in cooperative games and more. In this work, we provide a unified learning-theoretic methodology for modeling these problems, and establish some useful tools for determining whether a given economic solution concept can be learned from data. Our learning theoretic framework generalizes a notion of function space dimension --- the graph dimension --- adapting it to the solution concept learning domain. We identify sufficient conditions for the PAC learnability of solution concepts, and show that results in existing works can be immediately derived using our general methodology. Finally, we apply our methods in other economic domains, yielding a novel notion of PAC competitive equilibrium and PAC Condorcet winners.

Keeping Your Friends Close: Land Allocation With Friends

Edith Elkind, Neel Patel, Alan Tsang and Yair Zick
C32 Proceedings of the 29th International Conference on Artificial Intelligence (IJCAI), 2020.

Abstract

We examine the problem of assigning plots of land to prospective buyers who prefer living next to their friends. In this setting, each agent’s utility depends on the plot she receives and the identities of the agents who receive the adjacent plots. We are interested in mechanisms without money that guarantee truthful reporting of both land values and friendships, as well as Pareto optimality and computational efficiency. We explore several modifications of the Random Serial Dictatorship (RSD) mechanism, and identify one that performs well according to these criteria. We also study the expected social welfare of the assignments produced by our mechanisms when agents’ values for the land plots are binary; it turns out that we can achieve good approximations to the optimal social welfare, but only if the agents value the friendships highly.

Threshold Task Games: Theory, Platform and Experiments

Kobi Gal, Ta Duy Nguyen, Quang Tran-Nhat and Yair Zick
C31 Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2020.

Abstract

Threshold task games (TTGs) are a class of cooperative games in which participants form coalitions to complete tasks associated with different rewards and thresholds for success. We provide efficient algorithms for computing approximately optimal coalition structures in TTGs. We also present non-trivial bounds on the cost of stability for this class, and for weighted voting games (WVGs) — a subclass of TTGs. We put our theoretical results to practice; we design a web-based framework which allows human players to interact in a collaborative task-based model. Our analysis of human play shows that players succeed in general to form optimal coalition structures, and converge to approximately stable payoff divisions. We find that the power relationship in the game is a key determinant in responders’ decision to join coalitions.

On Weighted Envy-Freeness in Indivisible Item Allocation

Mithun Chakraborty, Ayumi Igarashi, Warut Suksompong and Yair Zick
C30 Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2020.

Abstract

In this paper, we introduce and analyze new envy-based fairness concepts for agents with weights: these weights regulate their mutual envy in a situation where indivisible goods are allocated to the agents. We propose two variants of envy-freeness up to one item for the weighted setting: in the strong variant, the envy can be eliminated by removing an item from the envied agent's bundle, whereas in the weak variant, envy can be eliminated by either removing an item from the envied agent's bundle or by replicating an item from the envied agent's bundle in the envying agent's bundle. We prove that for additive valuations, a strongly weighted envy-free allocation up to one item always exists and can be efficiently computed by means of a weight-based picking sequence. For two agents, we can also efficiently achieve strong weighted envy-freeness up to one item in conjunction with Pareto optimality using a weighted version of the classic adjusted winner algorithm. In addition, we show that an allocation that maximizes the weighted Nash social welfare always satisfies weak weighted envy-freeness up to one item, but may fail to satisfy the strong version of this property.

2019

Group Fairness in Influence Maximization

Alan Tsang, Bryan Wilder, Eric Rice, Milind Tambe, and Yair Zick
C29 The 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019.

Abstract

Influence maximization is a widely used model for information dissemination in social networks. Recent work has employed such interventions across a wide range of social problems, spanning public health, substance abuse, and international development (to name a few examples). A critical but understudied question is whether the benefits of such interventions are fairly distributed across different groups in the population; e.g., avoiding discrimination with respect to sensitive attributes such as race or gender. Drawing on legal and game-theoretic concepts, we introduce formal definitions of fairness in influence maximization. We provide an algorithmic framework to find solutions which satisfy fairness constraints, and in the process improve the state of the art for general multi-objective submodular maximization problems. Experimental results on real data from an HIV prevention intervention for homeless youth show that standard influence maximization techniques oftentimes neglect smaller groups which contribute less to overall utility, resulting in a disparity which our proposed algorithms substantially reduce.

Fairness Towards Groups of Agents in the Allocation of Indivisible Items

Nawal Benabbou, Mithun Chakraborty, Edith Elkind and Yair Zick
C28 The 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019.

Abstract

In this paper, we study the problem of matching a set of items to a set of agents partitioned into types so as to balance fairness towards the types against overall utility. We extend multiple desirable properties of an allocation of indivisible goods to our model and investigate the possibility and hardness of achieving combinations of these properties, e.g., we prove that maximizing utilitarian social welfare under constraints of typewise envy-freeness up to one item (TEF1) is computationally intractable. In particular, we define a new concept of waste for this setting; we show experimentally that augmenting an existing algorithm with a marginal utility maximization heuristic can produce a TEF1 solution with reduced waste, and also provide a polynomial-time algorithm for computing a non-wasteful TEF1 allocation for binary agent-item utilities.

Statistically Stable Communities with Limited Interactions

Ayumi Igarashi, Jakub Sliwinski and Yair Zick
C27 The 33rd National Conference on Artificial Intelligence (AAAI), 2019.

Abstract

We consider the PAC stabilizability of hedonic games with an underlying graph communication structure. In this setting, there exists some interaction network connecting players, and players would only consider joining groups that are connected components of the underlying graph. We show that when the interaction network is a tree, one can efficiently compute PAC stable coalition structures; that is, player partitions that are likely to be resistant against group deviation. This result is particularly interesting as computing stable coalition structures on tree-restricted hedonic games is computationally intractable. We further show that when the underlying interaction graph is not a tree, efficient PAC stabilizability is no longer achievable. Thus, our results completely characterize when one can leverage the underlying graph structure in order to compute PAC stable outcomes for hedonic games.

A Characterization of Monotone Influence Measures for Data Classification

Martin Strobel, Jakub Sliwinski and Yair Zick
C26 The 33rd National Conference on Artificial Intelligence (AAAI), 2019.

Abstract

In this work we focus on the following question: how important was the i-th feature in determining the outcome for a given datapoint? We identify a family of influence measures; functions that, given a datapoint x, assign a value to every feature i, which roughly corresponds to feature i's importance in determining the outcome for x. This family is uniquely derived from a set of axioms: desirable properties that any reasonable influence measure should satisfy. Departing from prior work on influence measures, we assume no knowledge of -- or access to -- the underlying classifier labelling the dataset. In other words, our influence measures are based on the dataset alone, and do not make any queries to the classifier. While this requirement naturally limits the scope of explanations we provide, we show that it is effective on real datasets.

Fairness and Diversity in Public Resource Allocation Problems.

Nawal Benabbou, Mithun Chakraborty and Yair Zick
J7 IEEE Data Engineering Bulletin 42(3), pages 64-75 (2019)

Abstract

In this article, we address important extensions to the problem of allocating indivisible items to a population of agents: The agents are partitioned into disjoint groups on the basis of attributes (e.g., ethnicity) and we want the overall utility of the allocation to respect some notion of diversity and/or fairness with respect to these groups. We study two specific incarnations of this general problem. First, we address a constrained optimization problem, inspired by diversity quotas in some real-world allocation problems, where the items are also partitioned into blocks and there is an upper bound on the number of items from each block that can be assigned to agents in each group. We theoretically analyze the price of diversity – a measure of the overall welfare loss due to these capacity constraints – and report experiments based on two real-world data sets (Singapore public housing and Chicago public school admissions) comparing this constrained optimization-based approach with a lottery mechanism with similar quotas. Next, instead of imposing hard constraints, we cast the problem as a variant of fair allocation of indivisible goods – we treat each group of agents as a single entity receiving a bundle of items whose valuation is the maximum total utility of matching agents in that group to items in that bundle; we present algorithms that achieve a standard relaxation of envy-freeness in conjunction with specific efficiency criteria.

Analyzing Power in Weighted Voting Games with Super-Increasing Weights.

Yuval Filmus, Joel Oren, Yair Zick and Yoram Bachrach
J6 Theory of Computing Systems 63(1), pages 150-174 (2019)

Abstract

Weighted voting games (WVGs) are a class of cooperative games that capture settings of group decision making in various domains, such as parliaments or committees. Earlier work has revealed that the effective decision-making power, or influence, of agents in WVGs is not necessarily proportional to their weight. This gave rise to measures of influence for WVGs. However, recent work in the algorithmic game theory community has shown that computing agent voting power is computationally intractable. In an effort to characterize WVG instances for which polynomial-time computation of voting power is possible, several classes of WVGs have been proposed and analyzed in the literature. One of the most prominent of these is super-increasing weight sequences. Recent papers show that when agent weights are super-increasing, it is possible to compute the agents’ voting power (as measured by the Shapley value) in polynomial time. We provide the first set of explicit closed-form formulas for the Shapley value for super-increasing sequences. We bound the effects of changes to the quota, and relate the behavior of voting power to a novel function. This set of results constitutes a complete characterization of the Shapley value in weighted voting games, and answers a number of open questions presented in previous work.

Cooperative games with overlapping coalitions: Charting the tractability frontier

Yair Zick, Georgios Chalkiadakis, Edith Elkind and Evangelos Markakis
J5 Artificial Intelligence, Vol. 271, June 2019, pages 74-97 (subsumes the AAMAS 2011 and AAAI 2012 papers below).

Abstract

The framework of cooperative games with overlapping coalitions (OCF games), which was proposed by Chalkiadakis et al., generalizes classic cooperative games to settings where agents may belong to more than one coalition. OCF games can be used to model scenarios where agents distribute resources, such as time or energy, among several tasks, and then divide the payoffs generated by these tasks in a fair and/or stable manner. As the framework of OCF games is very expressive, identifying settings that admit efficient algorithms for computing ‘good’ outcomes of OCF games is a challenging task. In this work, we put forward two approaches that lead to tractability results for OCF games. First, we propose a discretized model of overlapping coalition formation, where each agent i has a weight and may allocate an integer amount of weight to any task. Within this framework, we focus on the computation of outcomes that are socially optimal and/or stable. We discover that the algorithmic complexity of this task crucially depends on the amount of resources that each agent possesses, the maximum coalition size, and the pattern of communication among the agents. We identify several constraints that lead to tractable subclasses of discrete OCF games, and supplement our tractability results by hardness proofs, which clarify the role of our constraints. Second, we introduce and analyze a natural class of (continuous) OCF games—the Linear Bottleneck Games. We show that such games always admit a stable outcome, even assuming a large space of feasible deviations, and provide an efficient algorithm for computing such outcomes.

2018

Resource Based Cooperative Games: Optimization, Fairness and Stability

Ta-Duy Nguyen and Yair Zick
C25 The 11th International Symposium on Algorithmic Game Theory (SAGT), 2018 (short paper).

Abstract

We introduce the class of resource-based coalitional games, a novel and complete representation of cooperative games extending the threshold task games introduced by Chalkiadakis et al. Starting from the class of weighted voting games (the simplest example of resource-based coalitional games), we provide efficient algorithms which compute solution concepts for resource-based coalitional games; these include approximately optimal coalition structures and the Shapley value. We also present non-trivial bounds on the cost of stability for this class; in particular, we improve upon the bounds given in Bachrach et al. for weighted voting games.

Diversity Constraints in Public Housing Allocation

Nawal Benabbou, Mithun Chakraborty, Vinh Ho Xuan, Jakub Sliwinski and Yair Zick
C24 The 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2018.

Abstract

The state of Singapore employs a unique large-scale public housing program, accounting for over 80 percent of its residential real-estate. In addition to providing a social benefit to its citizens and permanent residents in the form of subsidized housing, Singapore uses its housing allocation program to ensure ethnic diversity in its neighborhoods. It does so by imposing ethnic quotas: every new housing development may not house more than a certain predetermined percentage of households from each ethnic group. Imposing such quotas ensures that every housing block contains members from each of Singapore's main ethnicities; however, limiting people's ability to freely choose apartments incurs some welfare loss. Our work studies this problem via the computational economics lens. The Singapore public housing allocation is an extension of the classic assignment problem, to which diversity constraints are added. We begin by showing that adding these constraints makes the problem computationally intractable; however, we identify a 2-approximation algorithm, as well as realistic agent utility models which admit a polynomial-time algorithm. In addition, we study the price of diversity: this is the loss in welfare incurred by imposing diversity constraints; we show that while the price of diversity can be unbounded if the number of ethnicities is unbounded, it is a constant if the number of ethnicities is small (as is the case for Singapore). Finally, by analyzing public data from Singapore's Housing Development Board, we create a simulated framework testing the welfare loss due to diversity constraints in realistic large-scale scenarios.

Context-Aware Fusion for Continuous Biometric Authentication

Divya Sivasankaran, Mona Ragab, Terrence Sim and Yair Zick
C23 The 11th IAPR International Conference on Biometrics (ICB 2018)

Abstract

Continuous authentication using biometrics is receiving renewed attention owing to recent advances in mobile technology. However, the context in which biometric inputs are acquired can affect the quality of information available for authentication. For example, in multi-speaker environments, face or gait could be better authenticators than voice. Unfortunately, existing fusion methods do not take this into account. In this paper, we propose a novel fusion method that accounts for context, and that can operate at both decision and score levels. Theoretical bounds on the proposed method are presented along with experiments on synthetic and real multi-modal biometric data. The results show that our proposed method is better than commonly used fusion methods, even when using state-of-the-art deep learners. Moreover our method outperforms score-level fusion methods even at the decision-level, debunking the common myth that decision-level fusion is inferior, and showcasing the power of contextual learning.

2017

Which is the Fairest (Rent Division) of Them All?

Ya'akov Gal, Moshe Mash, Ariel D. Procaccia and Yair Zick
J4 Journal of the ACM, Vol. 64, No. 6, Article 39, November 2017 (subsumes the EC 2016 version below).

Abstract

What is a fair way to assign rooms to several housemates and divide the rent between them? This is not just a theoretical question: many people have used the Spliddit website to obtain envy-free solutions to rent division instances. But envy freeness, in and of itself, is insufficient to guarantee outcomes that people view as intuitive and acceptable. We therefore focus on solutions that optimize a criterion of social justice, subject to the envy-freeness constraint, in order to pinpoint the “fairest” solutions. We develop a general algorithmic framework that enables the computation of such solutions in polynomial time. We then study the relations between natural optimization objectives and identify the maximin solution, which maximizes the minimum utility subject to envy freeness, as the most attractive. We demonstrate, in theory and using experiments on real data from Spliddit, that the maximin solution gives rise to significant gains in terms of our optimization objectives. Finally, a user study with Spliddit users as subjects demonstrates that people find the maximin solution to be significantly fairer than arbitrary envy-free solutions; this user study is unprecedented in that it asks people about their real-world rent division instances. Based on these results, the maximin solution has been deployed on Spliddit since April 2015.
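
As a concrete rendering of the algorithmic framework (our simplified notation): once a suitable (welfare-maximizing) room assignment σ is fixed, the maximin envy-free prices p for total rent R solve the linear program

\max\; m \quad \text{s.t.}\quad v_{i,\sigma(i)} - p_{\sigma(i)} \ge v_{i,\sigma(j)} - p_{\sigma(j)} \;\;\forall i,j \;\;\text{(envy-freeness)},
\qquad v_{i,\sigma(i)} - p_{\sigma(i)} \ge m \;\;\forall i, \qquad \sum_{k} p_k = R.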

Learning Hedonic Games

Jakub Sliwinski and Yair Zick
C22 The 26th International Joint Conference on Artificial Intelligence (IJCAI), 2017

Abstract

Coalitional stability in hedonic games has usually been considered in the setting where agent preferences are fully known. We consider the setting where agent preferences are unknown; we lay the theoretical foundations for studying the interplay between coalitional stability and (PAC) learning in hedonic games. We introduce the notion of PAC stability — the equivalent of core stability under uncertainty — and examine the PAC stabilizability and learnability of several popular classes of hedonic games.

How to Form Winning Coalitions in Mixed Human-Computer Settings

Moshe Mash, Yair Zick, Yoram Bachrach, and Kobi Gal
C21 The 26th International Joint Conference on Artificial Intelligence (IJCAI), 2017

Abstract

This paper proposes a new negotiation game, based on the weighted voting paradigm in cooperative game theory, where agents need to form coalitions and agree on how to share the gains. Despite the prevalence of weighted voting in the real world, there has been little work studying people's behavior in such settings. We show that solution concepts from cooperative game theory (in particular, an extension of the Deegan-Packel Index) provide a good prediction of people's decisions to join coalitions in an online version of a weighted voting game. With this insight in mind, we design an agent that combines supervised learning with decision theory to make offers to people in this game. We show that the agent was able to obtain higher shares from coalitions than did people playing other people, without reducing the acceptance rate of its offers. We also find that people display certain biases in weighted voting settings, like creating unnecessarily large coalitions, and not rewarding strong players. These results demonstrate the benefit of incorporating concepts from cooperative game theory in the design of agents that interact with other people in weighted voting systems.

2016

Voting Rules as Error-Correcting Codes

J3 Artificial Intelligence Volume 231, February 2016, Pages 1-16

Abstract

We present the first model of optimal voting under adversarial noise. From this viewpoint, voting rules are seen as error-correcting codes: their goal is to correct errors in the input rankings and recover a ranking that is close to the ground truth. We derive worst-case bounds on the relation between the average accuracy of the input votes, and the accuracy of the output ranking. Empirical results from real data show that our approach produces significantly more accurate rankings than alternative approaches.

Which is the Fairest (Rent Division) of them All?

Kobi Gal, Moshe Mash, Ariel D. Procaccia and Yair Zick
C20 The 17th ACM Conference on Economics and Computation (EC), 2016, Pages 67-84

Abstract

What is a fair way to assign rooms to several housemates, and divide the rent between them? This is not just a theoretical question: many people have used the Spliddit website to obtain envy-free solutions to rent division instances. But envy freeness, in and of itself, is insufficient to guarantee outcomes that people view as intuitive and acceptable. We therefore focus on solutions that optimize a criterion of social justice, subject to the envy freeness constraint, in order to pinpoint the “fairest” solutions. We develop a general algorithmic framework that enables the computation of such solutions in polynomial time. We then study the relations between natural optimization objectives, and identify the maximin solution, which maximizes the minimum utility subject to envy freeness, as the most attractive. We demonstrate, in theory and using experiments on real data from Spliddit, that the maximin solution gives rise to significant gains in terms of our optimization objectives. Finally, a user study with Spliddit users as subjects demonstrates that people find the maximin solution to be significantly fairer than arbitrary envy-free solutions; this user study is unprecedented in that it asks people about their real-world rent division instances. Based on these results, the maximin solution has been deployed on Spliddit since April 2015.

Analyzing Power in Weighted Voting Games with Super-Increasing Weights

C19 The 9th International Symposium on Algorithmic Game Theory (SAGT), 2016, pages 169-181

Abstract

Weighted voting games (WVGs) are a class of cooperative games that capture settings of group decision making in various domains, such as parliaments or committees. Earlier work has revealed that the effective decision-making power, or influence, of agents in WVGs is not necessarily proportional to their weight. This gave rise to measures of influence for WVGs. However, recent work in the algorithmic game theory community has shown that computing agent voting power is computationally intractable. In an effort to characterize WVG instances for which polynomial-time computation of voting power is possible, several classes of WVGs have been proposed and analyzed in the literature. One of the most prominent of these is super-increasing weight sequences. Recent papers show that when agent weights are super-increasing, it is possible to compute the agents’ voting power (as measured by the Shapley value) in polynomial time. We provide the first set of explicit closed-form formulas for the Shapley value for super-increasing sequences. We bound the effects of changes to the quota, and relate the behavior of voting power to a novel function.

A Characterization of Voting Power for Discrete Weight Distributions

C18 The 25th International Joint Conference on Artificial Intelligence (IJCAI), 2016, pages 74-80

Abstract

Weighted voting games model decision-making bodies where decisions are made by a majority vote. In such games, each agent has a weight, and a coalition of agents wins the game if the sum of the weights of its members exceeds a certain quota. The Shapley value is used as an index for the true power held by the agents in such games. Earlier work has studied the implications of setting the value of the quota on the agents’ power under the assumption that the game is given with a fixed set of agent weights. We focus on a model where the agent weights originate from a stochastic process, resulting in weight uncertainty. We analyze the expected effect of the quota on voting power given the weight generating process. We examine two extreme cases of the balls and bins model: uniform and exponentially decaying probabilities. We show that the choice of a quota may have a large influence on the power disparity of the agents, even when the governing distribution is likely to result in highly similar weights for the agents. We characterize various interesting repetitive fluctuation patterns in agents’ power as a function of the quota.

Misrepresentation in District Voting

C17 The 25th International Joint Conference on Artificial Intelligence (IJCAI), 2016, pages 81-87

Abstract

Voting systems in which voters are partitioned to districts encourage accountability by providing voters an easily identifiable district representative, but can result in a selection of representatives not representative of the electorate's preferences. In some cases, a party may have a majority of the popular vote, but lose the elections due to districting effects. We define the Misrepresentation Ratio which quantifies the deviation from proportional representation in a district-based election, and provide bounds for this ratio under various voting rules. We also examine probabilistic models for election outcomes, and provide an algorithm for approximating the expected Misrepresentation Ratio under a given probabilistic election model. Finally, we provide simulation results for several such probabilistic election models, showing the effects of the number of voters and candidates on the misrepresentation ratio.

Optimal Interdiction of Illegal Network Flow

Qingyu Guo, Bo An, Yair Zick and Chunyan Miao
C16 The 25th International Joint Conference on Artificial Intelligence (IJCAI), 2016, pages 2507-2513

Abstract

Large scale smuggling of illegal goods is a long-standing problem, with $1.4b and thousands of agents assigned to protect the borders from such activity on the US-Mexico border alone. Illegal smuggling activities are usually blocked via inspection stations or ad-hoc checkpoints/roadblocks. Security resources are insufficient to man all stations at all times; furthermore, smugglers regularly conduct surveillance activities. This paper makes several contributions toward the challenging task of optimally interdicting an illegal network flow: i) A new Stackelberg game model for network flow interdiction; ii) A novel Column and Constraint Generation approach for computing the optimal defender strategy; iii) Complexity analysis of the column generation subproblem; iv) Compact convex nonlinear programs for solving the subproblems; v) Novel greedy and heuristic approaches for subproblems with good approximation guarantee. Experimental evaluation shows that our approach can obtain a robust enough solution outperforming the existing methods and heuristic baselines significantly and scale up to realistic-sized problems.

Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

Anupam Datta, Shayak Sen and Yair Zick
C15 The 37th IEEE Symposium on Security and Privacy (Oakland), 2016, pages 598-617

Abstract

Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque: it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g., loan decisions) and the marginal influence of individual inputs within such a set (e.g., income). Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting. Further, since transparency reports could compromise privacy, we explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise. Our empirical validation with standard machine learning algorithms demonstrates that QII measures are a useful transparency mechanism when black-box access to the learning system is available. In particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private while preserving accuracy.
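The Shapley-style aggregation step described above can be illustrated with a short sketch. The code below assumes a caller-supplied set_influence function that returns the joint influence of a feature set, and aggregates it into per-feature scores by exact enumeration; it illustrates the aggregation idea only, and is not the QII implementation itself (which defines specific quantities of interest and, as noted, relies on efficient approximation rather than enumeration).

from itertools import combinations
from math import factorial

def shapley_aggregate(features, set_influence):
    """Aggregate a set-valued influence measure into per-feature scores
    using the Shapley value (exact enumeration; exponential in |features|).

    `set_influence(S)` should return the joint influence of the feature
    set S (a frozenset). Illustrative sketch, not the QII implementation.
    """
    n = len(features)
    scores = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                S = frozenset(S)
                marginal = set_influence(S | {f}) - set_influence(S)
                scores[f] += weight * marginal
    return scores

# Toy usage: a set is influential only if it contains both 'age' and 'income'.
if __name__ == "__main__":
    infl = lambda S: 1.0 if {"age", "income"} <= S else 0.0
    print(shapley_aggregate(["age", "income", "zip"], infl))
    # 'age' and 'income' each get roughly 0.5, 'zip' gets roughly 0.0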

Tracking Performance and Forming Study Groups for Prep Courses Using Probabilistic Graphical Models (Extended Abstract)

S1 The 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2016, pages 1359-1361

Abstract

Efficient tracking of class performance across topics is an important aspect of classroom teaching; this is especially true for psychometric general intelligence exams, which test a varied range of abilities. We develop a framework that uncovers a hidden thematic structure underlying student responses to a large pool of questions, using a probabilistic graphical model.

2015

Incentivizing Peer Grading in MOOCs: An Audit Game Approach

Alejandro Carbonara, Anupam Datta, Arunesh Sinha and Yair Zick
C14 The 24th International Joint Conference on Artificial Intelligence (IJCAI), 2015, pages 497-503

Abstract

In Massively Open Online Courses (MOOCs), TA resources are limited; most MOOCs use peer assessment to grade assignments. Students have to divide their time between working on their own homework and grading others' work. If there is no risk of being caught and penalized, students have no reason to spend any time grading others. Course staff want to incentivize students to balance their time between course work and peer grading. They may do so by auditing students, ensuring that they perform grading correctly. One would not want students to invest too much time in peer grading, as this would result in poor course performance. We present the first model of strategic auditing in peer grading, modeling the student's choice of effort in response to a grader's audit levels as a Stackelberg game with multiple followers. We demonstrate that computing the equilibrium for this game is computationally hard. We then provide a PTAS for computing an approximate solution to the problem of allocating audit levels. However, we show that this allocation does not necessarily maximize social welfare; in fact, there exist settings where course auditor utility is arbitrarily far from optimal under an approximately optimal allocation. To circumvent this issue, we present a natural condition under which approximately optimal TA allocations guarantee approximately optimal welfare for the course auditors.

Learning Cooperative Games

C13 The 24th International Joint Conference on Artificial Intelligence (IJCAI), 2015, pages 475-481

Abstract

This paper explores a PAC (probably approximately correct) learning model in cooperative games. Specifically, we are given m random samples of coalitions and their values, taken from some unknown cooperative game; can we predict the values of unseen coalitions? We study the PAC learnability of several well-known classes of cooperative games, such as network flow games, threshold task games, and induced subgraph games. We also establish a novel connection between PAC learnability and core stability: for games that are efficiently learnable, it is possible to find payoff divisions that are likely to be stable using a polynomial number of samples.
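To make the setting concrete, the sketch below fits one of the game classes mentioned above, induced subgraph games, where v(S) is the total weight of edges whose endpoints both lie in S. Such values are linear in pairwise membership indicators, so ordinary least squares recovers the edge weights from sampled coalitions; this is only an illustrative sketch of the learning setup under that assumption, not the paper's PAC analysis or algorithm.

import numpy as np
from itertools import combinations

def learn_induced_subgraph_game(samples, n):
    """Fit an induced subgraph game, v(S) = sum of edge weights inside S,
    from (coalition, value) samples via least squares.

    `samples` is a list of (set_of_players, value) pairs with players 0..n-1.
    Illustrative sketch only; the paper studies PAC guarantees for several
    game classes rather than this particular estimator.
    """
    pairs = list(combinations(range(n), 2))
    X = np.array([[1.0 if (i in S and j in S) else 0.0 for (i, j) in pairs]
                  for S, _ in samples])
    y = np.array([value for _, value in samples])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(pairs, w))

def predict_value(edge_weights, S):
    """Predicted value of a (possibly unseen) coalition S."""
    return sum(w for (i, j), w in edge_weights.items() if i in S and j in S)

# Toy usage: 4 players, one weighted edge (0, 1); train on all pairs plus the
# grand coalition, then predict the value of the unseen coalition {0, 1, 2}.
if __name__ == "__main__":
    true_value = lambda S: 3.0 if {0, 1} <= S else 0.0
    train = [set(c) for c in combinations(range(4), 2)] + [{0, 1, 2, 3}]
    samples = [(S, true_value(S)) for S in train]
    model = learn_induced_subgraph_game(samples, n=4)
    print(round(predict_value(model, {0, 1, 2}), 2))  # 3.0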

Non-Myopic Negotiators See What's Best

C12 The 24th International Joint Conference on Artificial Intelligence (IJCAI), 2015, pages 2047-2053

Abstract

We consider revenue negotiation problems in iterative settings. In our model, a group of agents has some initial resources, used in order to generate revenue. Agents must agree on some way of dividing resources, but there is a twist: at every time step, the revenue shares received at time t become the agents' resources at time t + 1, and the game is repeated. The key issue here is that the way resources are shared has a dramatic effect on long-term social welfare, so in order to maximize individual long-term revenue, one must consider the welfare of others, a behavior not captured by other models of cooperation and bargaining. Our work focuses on homogeneous production functions. We identify conditions that ensure that the socially optimal outcome is an ε-Nash equilibrium. We apply our results to some families of utility functions, and discuss their strategic implications.

Influence in Classification via Cooperative Game Theory

C11 The 24th International Joint Conference on Artificial Intelligence (IJCAI), 2015, pages 511-517

Abstract

A dataset has been classified by some unknown classifier into two types of points. What were the most important factors in determining the classification outcome? In this work, we employ an axiomatic approach in order to uniquely characterize an influence measure: a function that, given a set of classified points, outputs a value for each feature corresponding to its influence in determining the classification outcome. We show that our influence measure takes on an intuitive form when the unknown classifier is linear. Finally, we employ our influence measure in order to analyze the effects of user profiling on Google's online display advertising.

Voting Rules as Error-Correcting Codes

C10 The 29th AAAI Conference on Artificial Intelligence (AAAI), 2015, pages 1000-1006

Abstract

We present the first model of optimal voting under adversarial noise. From this viewpoint, voting rules are seen as error-correcting codes: their goal is to correct errors in the input rankings and recover a ranking that is close to the ground truth. We derive worst-case bounds on the relation between the average accuracy of the input votes, and the accuracy of the output ranking. Empirical results from real data show that our approach produces significantly more accurate rankings than alternative approaches.

2014

Arbitration and Stability in Cooperative Games with Overlapping Coalitions

J2 Journal of Artificial Intelligence Research, Volume 50, 2014, pages 847-884

Abstract

Overlapping Coalition Formation (OCF) games, introduced by Chalkiadakis, Elkind, Markakis, Polukarov and Jennings in 2010, are cooperative games where players can simultaneously participate in several coalitions. Capturing the notion of stability in OCF games is a difficult task: deviating players may continue to contribute resources to joint projects with non-deviators, and the crucial question is what payoffs the deviators expect to receive from such projects. Chalkiadakis et al. introduce three stability concepts for OCF games — the conservative core, the refined core, and the optimistic core — that are based on different answers to this question. In this paper, we propose a unified framework for the study of stability in the OCF setting, which encompasses the stability concepts considered by Chalkiadakis et al. as well as a wide variety of alternative stability concepts. Our approach is based on the notion of arbitration functions, which determine the payoff obtained by the deviators, given their deviation and the current allocation of resources. We provide a characterization of stable outcomes under arbitration. We then conduct an in-depth study of four types of arbitration functions, which correspond to four notions of the core; these include the three notions of the core considered by Chalkiadakis et al. Our results complement those of Chalkiadakis et al. and answer questions left open by their work. In particular, we show that OCF games with the conservative arbitration function are essentially equivalent to non-OCF games, by relating the conservative core of an OCF game to the core of a non-overlapping cooperative game, and use this result to obtain a strictly weaker sufficient condition for conservative core non-emptiness than the one given by Chalkiadakis et al.

Arbitration and Stability in Cooperative Games

Yair Zick and Edith Elkind
J1 SIGecom Exchanges 12.2, 2014, pages 847-884

Abstract

We present a formal framework for handling deviation in settings where players divide resources among multiple projects, forming overlapping coalition structures. Having formed such a coalition structure, players share the revenue generated among themselves. Given a profit division, some players may decide that they are underpaid, and deviate from the outcome. The main insight of the work presented in this survey is that when players want to deviate, they must know how the non-deviators would react to their deviation: after the deviation, the deviators may still work with some of the non-deviators, which presents an opportunity for the non-deviators to exert leverage on deviators. We extend the overlapping coalition formation (OCF) model of Chalkiadakis et al. [2010] for cooperation with partial coalitions, by introducing arbitration functions, a general framework for handling deviation in OCF games. We review some interesting aspects of the model, characterizations of stability in this model, as well as methods for computing stable outcomes.

2013

On Random Quotas and Proportional Representation in Weighted Voting Games

Yair Zick
C9 The 23rd International Joint Conference on Artificial Intelligence (IJCAI), 2013, pages 432-439

Abstract

Weighted voting games (WVGs) model decision-making bodies such as parliaments and councils. In such settings, it is often important to provide a measure of the influence a player has on the vote. Two highly popular such measures are the Shapley-Shubik power index and the Banzhaf power index. Given a power measure, proportional representation is the property of having players' voting power proportional to the number of parliament seats they receive. Approximate proportional representation (w.r.t. the Banzhaf power index) can be ensured by changing the number of parliament seats each party receives; this is known as Penrose's square root method. However, a discrepancy between player weights and parliament seats is often undesirable or infeasible; a simpler way of achieving approximate proportional representation is by changing the quota, i.e. the number of votes required in order to pass a bill. It is known that a player's Shapley-Shubik power index is proportional to his weight when one chooses a quota at random; that is, under a random quota, proportional representation holds in expectation. In our work, we show that not only does proportional representation hold in expectation, it also holds for many quotas. We do so by providing bounds on the variance of the Shapley value when the quota is chosen at random, assuming certain weight distributions. We further explore the case where weights are sampled from i.i.d. binomial distributions; for this case, we show good bounds on an important parameter governing the behavior of the variance, and substantiate our claims with empirical analysis.
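The proportionality-in-expectation fact referenced above admits a short derivation (a standard argument, stated here under the assumption that the quota is drawn uniformly from (0, w(N)], where w(N) = \sum_j w_j): for any permutation π, player i is pivotal exactly when the quota lands in an interval of length w_i, so

\[
\mathbb{E}_{q \sim U(0,\, w(N)]}\bigl[\varphi_i(q)\bigr]
\;=\; \mathbb{E}_{\pi}\Bigl[\Pr_{q}\bigl(w(P_i^{\pi}) < q \le w(P_i^{\pi}) + w_i\bigr)\Bigr]
\;=\; \frac{w_i}{w(N)},
\]

where P_i^{\pi} denotes the set of players preceding i in π. The paper's contribution concerns the variance of \varphi_i(q) around this mean, i.e., how often individual quotas deviate from proportionality.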

Bounding the Cost of Stability in Games Over Interaction Networks

C8 The 27th AAAI Conference on Artificial Intelligence (AAAI), 2013, pages 690-696

Abstract

We study the stability of cooperative games played over an interaction network, in a model that was introduced by Myerson ['77]. We show that the cost of stability of such games (i.e., the subsidy required to stabilize the game) can be bounded in terms of natural parameters of their underlying interaction networks. Specifically, we prove that if the treewidth of the interaction network H is k, then the relative cost of stability of any game played over H is at most k + 1, and if the pathwidth of H is k', then the relative cost of stability is at most k'. We show that these bounds are tight for all k ≥ 2 and all k' ≥ 1, respectively.

Dynamic Weighted Voting Games

C7 The 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2013, pages 515-522

Abstract

We initiate the study of dynamic cooperative games—cooperative games where the characteristic function may change over time. We introduce two types of algorithmic problems for such games: computing a given solution concept at time t, and checking that a certain function of the game (e.g., the Shapley value of a given player or the value of the least core) remains within given bounds during time interval [t0, t1]. We then investigate the complexity of these problems for dynamic weighted voting games, where the weight of each player and the quota are functions of time that are given by low-degree polynomials with integer coefficients. We provide pseudopolynomial algorithms for problems of both types, for a variety of solution concepts. We then use our results to investigate the changes in power distribution in the Council of the European Union over the next 50 years.

Taxation and Stability in Cooperative Games

C6 The 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2013, pages 523-530

Abstract

Cooperative games are a useful framework for modeling multiagent behavior in environments where agents must collaborate in order to complete tasks. Having jointly completed a task and generated revenue, agents need to agree on some reasonable method of sharing their profits. One particularly appealing family of payoff divisions is the core, which consists of all coalitionally rational (or stable) payoff divisions. Unfortunately, it is often the case that the core of a game is empty, i.e. there is no payoff scheme guaranteeing each group of agents a total payoff at least as high as what they can get on their own. As stability is a highly attractive property, various methods of achieving it have been proposed in the literature. One natural way of stabilizing a game is via taxation, i.e. reducing the value of some coalitions in order to decrease their bargaining power. Existing taxation methods include the ε-core, the least-core and several others. However, taxing coalitions is in general undesirable: one would not wish to overly tamper with a given coalitional game, or overly tax the agents. Thus, in this work we study minimal taxation policies, i.e. those minimizing the amount of tax required in order to stabilize a given game. We show that games that minimize the total tax are to some extent a linear approximation of the original games, and explore their properties. We demonstrate connections between the minimal tax and the cost of stability, and characterize the types of games for which it is possible to obtain a tax-minimizing policy using variants of the notion of the ε-core, as well as those for which it is possible to do so using reliability extensions.

On Manipulation in Multi-Winner Elections Based on Scoring Rules

C5 The 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2013, pages 359-366

Abstract

Multi-winner elections model scenarios where voters must select a fixed-size group of candidates based on their individual preferences. In such scenarios, it is often the case that voters are incentivized to manipulate, i.e. misreport their preferences in order to obtain a better outcome. In this paper, we study the complexity of manipulating multi-winner elections under scoring rules, with a particular focus on the role of tie-breaking rules. We consider both lexicographic tie-breaking rules, which break ties according to a fixed ordering of the candidates, and a natural randomized tie-breaking rule. We describe polynomial-time manipulation algorithms for several special cases of our problem. Specifically, we show that finding a successful manipulation is easy if the underlying voting rule is k-Approval or the number of candidates to be elected is bounded by a constant (for both types of tie-breaking rules), as well as if the manipulator’s utility function only takes values in {0, 1} and the ties are broken in the manipulator’s favor.

2012

Stability via Convexity and LP Duality in Games with Overlapping Coalitions

C4 The 26th AAAI Conference on Artificial Intelligence (AAAI), 2012, pages 1506-1512

Abstract

The core is a central solution concept in cooperative game theory, and therefore it is important to know under what conditions the core of a game is guaranteed to be non-empty. Two notions that prove to be very useful in this context are Linear Programming (LP) duality and convexity. In this work, we apply these tools to identify games with overlapping coalitions (OCF games) that admit stable outcomes. We focus on three notions of the core defined in (Chalkiadakis et al. 2010) for such games, namely, the conservative core, the refined core and the optimistic core. First, we show that the conservative core of an OCF game is non-empty if and only if the core of a related classic coalitional game is non-empty. This enables us to improve the result of (Chalkiadakis et al. 2010) by giving a strictly weaker sufficient condition for the non-emptiness of the conservative core. We then use LP duality to characterize OCF games with non-empty refined core; as a corollary, we show that the refined core of a game is non-empty as long as the superadditive cover of its characteristic function is convex. Finally, we identify a large class of OCF games that can be shown to have a non-empty optimistic core using an LP based argument.

Overlapping Coalition Formation Games: Charting the Tractability Frontier

C3 The 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2012, pages 787-794

Abstract

Cooperative games with overlapping coalitions (OCF games) [3, 23] model scenarios where agents can distribute their resources among several tasks; each task generates a profit which may be freely divided among the agents participating in the task. The goal of this work is to initiate a systematic investigation of algorithmic aspects of OCF games. We propose a discretized model of overlapping coalition formation, where each agent i ∈ N has a weight wi ∈ N and may allocate an integer amount of weight to any task. Within this framework, we focus on the computation of outcomes that are socially optimal and/or stable. We discover that the algorithmic complexity of the associated problems crucially depends on the amount of resources that each agent possesses, the maximum coalition size, and the pattern of interaction among the agents. We identify several constraints that lead to tractable subclasses of OCF games, and provide efficient algorithms for games that belong to these subclasses. We supplement our tractability results by hardness proofs, which clarify the role of our constraints.

2011

The Shapley Value as a Function of the Quota in Weighted Voting Games

C2 The 22nd International Joint Conference on Artificial Intelligence (IJCAI), 2011, pages 490-496

Abstract

In weighted voting games, each agent has a weight, and a coalition of players is deemed to be winning if its weight meets or exceeds the given quota. An agent’s power in such games is usually measured by her Shapley value, which depends both on the agent’s weight and the quota. [Zuckerman et al., 2008] show that one can alter a player’s power significantly by modifying the quota, and investigate some of the related algorithmic issues. In this paper, we answer a number of questions that were left open by [Zuckerman et al., 2008]: we show that, even though deciding whether a quota maximizes or minimizes an agent’s Shapley value is coNP-hard, finding a Shapley value-maximizing quota is easy. Minimizing a player’s power appears to be more difficult. However, we propose and evaluate a heuristic for this problem, which takes into account the voter’s rank and the overall weight distribution. We also explore a number of other algorithmic issues related to quota manipulation.

Arbitrators in Overlapping Coalition Formation Games

Yair Zick and Edith Elkind
C1 The 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2011, pages 55-62

Abstract

Overlapping Coalition Formation (OCF) games [3, 4] are cooperative games where the players can simultaneously participate in several coalitions. Capturing the notion of stability in OCF games is a difficult task: a player may deviate by abandoning some, but not all of the coalitions he is involved in, and the crucial question is whether he then gets to keep his payoff from the unaffected coalitions. In [4] the authors introduce three stability concepts for OCF games—the conservative, refined, and optimistic core—that are based on different answers to this question. In this paper, we propose a unified framework for the study of stability in the OCF setting, which encompasses the concepts considered in [4] as well as a wide variety of alternative stability concepts. Our approach is based on the notion of an arbitrator, which can be thought of as an external party that determines the payoff to deviators. We give a complete characterization of outcomes that are stable under arbitration. In particular, our results provide a criterion for the outcome to be in the refined or optimistic core, thus complementing the results in [4] for the conservative core, and answering questions left open in [4]. We also introduce a notion of the nucleolus for arbitrated OCF games, and argue that it is non-empty. Finally, we extend the definition of the Shapley value [12] to the OCF setting, and provide an axiomatic characterization for it.