Dr. Song Liu and Mingxuan Yi have had two papers accepted at top international machine learning conferences.
The first paper is entitled “Estimating the Arc Length of the Optimal ROC Curve and Lower Bounding the Maximal AUC” and the second is “Sliced Wasserstein Variational Inference”. Huge congratulations to Song and Mingxuan on this achievement.
Details of both papers are below:
Title: “Estimating the Arc Length of the Optimal ROC Curve and Lower Bounding the Maximal AUC”
Author: Song Liu
Why is this research important?
Classification is a fundamental problem in machine learning, and studying its performance metrics is of great importance because they reveal quantities with which we can fine-tune our classification algorithms. In this paper, we study a commonly used classification performance metric, the ROC curve, and reveal previously unknown features of it that are linked to binary classification.
Abstract: In this paper, we show that the arc length of the optimal ROC curve is an f-divergence and can be approximated from samples using positive and negative datasets. We show this estimator is also a consistent estimator of the arctangent ratio between the positive and negative density functions. Moreover, using this estimator, we can lower-bound the maximal Area Under the Curve (AUC) using a surface integral. This maximal AUC lower bound shows good potential in imbalanced classification tasks.
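For readers unfamiliar with the quantity involved, the following is a minimal background sketch built only from the textbook definition of arc length and the classical Neyman-Pearson characterization of the optimal score; it is context for the abstract, not the paper's derivation.

```latex
% The ROC curve traces (FPR(t), TPR(t)) as the decision threshold t sweeps
% over the real line, so its arc length is the usual line integral:
\[
  \mathcal{L}
  = \int \sqrt{\Big(\frac{d\,\mathrm{FPR}}{dt}\Big)^{2}
             + \Big(\frac{d\,\mathrm{TPR}}{dt}\Big)^{2}}\, dt .
\]
% A classical fact: if the score is the likelihood ratio
% r(x) = p(x | y = 1) / p(x | y = 0) (optimal by Neyman--Pearson),
% then the slope of the ROC curve at the operating point with threshold t is
\[
  \frac{d\,\mathrm{TPR}}{d\,\mathrm{FPR}}\bigg|_{t} = t .
\]
% Substituting this slope into the arc-length integral expresses the arc
% length purely in terms of the positive and negative class-conditional
% densities, which is how an arc length can turn out to be a divergence
% between the two class distributions.
```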
This paper has been accepted at NeurIPS 2022 (25% acceptance rate).
Title: “Sliced Wasserstein Variational Inference”
Authors: Mingxuan Yi (Math PhD student) and Song Liu
Why is this research important?
Variational inference finds an approximation to a target distribution that captures the latent patterns in our dataset. The conventional approach, which minimizes a Kullback-Leibler (KL) divergence, is known to miss important latent patterns. It is therefore vital to find computationally efficient alternatives to the KL divergence for variational inference.
Abstract: The Wasserstein distance is a popular metric for measuring the difference between two distributions. However, computing the Wasserstein distance has long been prohibitively expensive for many big-data applications. This paper shows that a “sliced” variant of the Wasserstein distance, the Sliced Wasserstein distance (Bonneel et al., 2015), can be an effective and computationally efficient tool for variational inference.
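To illustrate why the sliced variant is cheap to compute, here is a minimal sketch of the standard Monte Carlo estimator of the sliced Wasserstein distance between two sample sets. The function name, parameters, and equal-sample-size assumption are ours for illustration; this is not the paper's variational inference algorithm.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two empirical distributions given as samples x (n, d) and y (m, d).
    Assumes n == m so 1D samples can be matched by sorting alone."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Draw random directions uniformly on the unit sphere S^{d-1}.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction (1D pushforwards).
    x_proj = x @ theta.T  # shape (n, n_projections)
    y_proj = y @ theta.T
    # In 1D, optimal transport matches sorted samples, so W_2^2 is the
    # mean squared difference of order statistics along each direction.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    w2_sq = np.mean((x_sorted - y_sorted) ** 2, axis=0)
    # Average the squared 1D distances over directions, then take the root.
    return np.sqrt(np.mean(w2_sq))

# Example: distance between two Gaussian sample clouds in 5 dimensions.
rng = np.random.default_rng(1)
x = rng.standard_normal((500, 5))
y = rng.standard_normal((500, 5)) + 2.0
print(sliced_wasserstein(x, y))
```

Each projection reduces the problem to a one-dimensional optimal transport problem, solvable by sorting in O(n log n), which is what makes the estimator scale to large sample sizes.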
This paper has been accepted at the Asian Conference on Machine Learning (ACML) 2022 (30% acceptance rate).
Bonneel et al., Sliced and Radon Wasserstein Barycenters of Measures, Journal of Mathematical Imaging and Vision, 51(1):22–45, 2015.