Toru Kitagawa

Department of Economics, University College London


Constrained classification and policy learning


Statistics Seminar


18th December 2020, 3:00 pm – 4:00 pm


(joint work with Shosei Sakaguchi and Aleksey Tetenov)

Modern machine learning approaches to classification, including AdaBoost, support vector machines, and deep neural networks, utilize surrogate-loss techniques to circumvent the computational complexity of minimizing the empirical classification risk. These techniques are also useful for causal policy learning problems, since estimation of individualized treatment rules can be cast as weighted (cost-sensitive) classification. Consistency of these surrogate-loss approaches, studied in Zhang (2004) and Bartlett et al. (2006), crucially relies on the assumption of correct specification, meaning that the specified class of policies is rich enough to contain a first-best policy. This assumption is, however, less credible when the class of policies is constrained by interpretability or fairness, leaving the applicability of surrogate-loss-based algorithms unknown in such second-best scenarios. This paper analyses the consistency of surrogate-loss procedures over a constrained set of policies without assuming correct specification. We show that in the setting where the constraint restricts only the classifier's prediction set, hinge losses (i.e., l1-support vector machines) are the only surrogate losses that preserve consistency in second-best scenarios. If the constraint additionally restricts the functional form of the classifiers, consistency of the surrogate-loss approach is not guaranteed even with the hinge loss. We therefore characterize conditions on the constrained set of classifiers that guarantee consistency of hinge-risk minimizing classifiers. We illustrate the implications and uses of our theoretical results in monotone classification by proposing computationally attractive procedures that are robust to misspecification.
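For readers unfamiliar with the weighted-classification formulation mentioned above, the sketch below illustrates how a hinge-risk minimizing linear policy can be computed by projected subgradient descent, with an optional nonnegativity projection as one simple way to impose a monotonicity constraint. This is only an illustrative sketch: the function name, the projection step, and all tuning parameters are assumptions for exposition, not the procedures proposed in the paper.

```python
import numpy as np

def weighted_hinge_policy(X, y, w, lr=0.1, n_iter=2000, monotone=False, seed=0):
    """
    Minimize the weighted (cost-sensitive) hinge risk
        (1/n) * sum_i w_i * max(0, 1 - y_i * (X_i @ beta + b))
    over linear policies sign(X @ beta + b) via projected subgradient descent.
    If monotone=True, beta is projected onto the nonnegative orthant each step,
    a simple (illustrative) way to force the policy to be monotone in every covariate.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = rng.normal(scale=0.1, size=d)
    b = 0.0
    for t in range(n_iter):
        margin = y * (X @ beta + b)
        active = (margin < 1).astype(float) * w      # subgradient is nonzero only where the hinge is active
        grad_beta = -(active * y) @ X / n
        grad_b = -np.sum(active * y) / n
        step = lr / np.sqrt(t + 1)
        beta -= step * grad_beta
        b -= step * grad_b
        if monotone:
            beta = np.maximum(beta, 0.0)             # projection onto the constrained (second-best) class
    return beta, b

# Toy usage: y in {-1, +1}; w_i are nonnegative classification weights,
# e.g. estimated treatment-effect magnitudes in a policy-learning problem.
X = np.random.default_rng(1).normal(size=(500, 3))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * np.random.default_rng(2).normal(size=500))
w = np.abs(np.random.default_rng(3).normal(size=500)) + 0.5
beta_hat, b_hat = weighted_hinge_policy(X, y, w, monotone=True)
```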





Organiser: Juliette Unwin
