
Learning in high dimensions with asymmetric costs

Reeve, Henry

[Thesis]. Manchester, UK: The University of Manchester; 2019.

Abstract

A classifier making predictions in noisy environments with limited data will inevitably make some mistakes. However, some types of error are more harmful than others. For example, the cost of misidentifying an explosive as a benign electronic device is far greater than the cost of misidentifying a laptop as an explosive. In such cases it is of paramount importance that the relative costs of the different types of error are taken into account when constructing and assessing algorithms for making predictions under uncertainty. This thesis develops a distribution-dependent theory of learning with asymmetric costs. For comprehensive coverage we consider four learning scenarios: (1) supervised learning with a fixed cost matrix, (2) weakly supervised cost-sensitive learning with unknown costs, (3) sequential cost-sensitive learning with bandit feedback, and (4) Neyman-Pearson classification for qualitatively asymmetric costs. In each of these scenarios we seek to identify the key properties of the data distribution which influence the difficulty of the learning problem, including the distribution's dimensionality, its smoothness and the level of noise. We pay particular attention to minimax rates, which give the minimum risk attainable by a classification algorithm, uniformly over a given family of distributions.

We begin our analysis with a supervised setting in which the costs of the possible prediction errors are given by a known cost matrix. In this setting we give a natural generalisation of Tsybakov's margin condition which quantifies the level of noise in the data in relation to the optimal decision boundaries for the cost matrix. We then identify the minimax rates, with an exponent which depends solely upon the dimensionality of the distribution, the smoothness of the regression function and the level of noise. Significantly, these rates hold for all compact sub-manifolds and for any reasonable cost matrix. We then show that the minimax rates are attained by a simple cost-sensitive k-nearest neighbour method; a sketch of such a rule follows.
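
The snippet below is a minimal sketch of a cost-sensitive k-nearest neighbour rule of this general kind, not the thesis's exact estimator: class probabilities are estimated from the k nearest neighbours and the prediction minimises the estimated expected cost under a user-supplied cost matrix. All names and parameters are illustrative.

```python
import numpy as np

def cost_sensitive_knn_predict(X_train, y_train, X_test, C, k=10):
    """Predict the class minimising the estimated expected cost.

    C[y, j] is the cost of predicting class j when the true class is y;
    y_train is an integer label array. A minimal sketch only.
    """
    n_classes = C.shape[0]
    preds = np.empty(len(X_test), dtype=int)
    for i, x in enumerate(X_test):
        # Distances from the query point to every training point.
        dists = np.linalg.norm(X_train - x, axis=1)
        # Labels of the k nearest neighbours.
        neighbours = y_train[np.argsort(dists)[:k]]
        # Plug-in estimate of the class probabilities eta(x).
        eta = np.bincount(neighbours, minlength=n_classes) / k
        # Expected cost of predicting class j is sum_y eta[y] * C[y, j];
        # the cost-sensitive decision rule picks the minimiser.
        preds[i] = int(np.argmin(eta @ C))
    return preds

# Example cost matrix: misclassifying a true class 1 as class 0
# (e.g. an explosive as benign) is ten times as costly as the reverse.
C = np.array([[0.0, 1.0],
              [10.0, 0.0]])
```
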
Departing from the fully supervised setting, we turn to a class of weakly supervised cost-sensitive learning problems with stochastic, feature-dependent costs. We show that the minimax rates have the same exponent as their supervised counterparts and, once again, that these rates are attainable by simple k-nearest neighbour methods.

We then move to an online scenario in which the learner sequentially makes a prediction and incurs the cost of that prediction, without ever observing either the true class label or the costs that would have been incurred had an alternative prediction been made. This problem is equivalent to the problem of multi-armed bandits with covariates. We present the conceptually simple k-NN UCB algorithm for this problem, which combines the k-nearest neighbour method from supervised learning with the UCB algorithm for multi-armed bandits. We establish a minimax optimal regret bound for this algorithm (up to logarithmic terms), which demonstrates that the method adapts automatically to the unknown intrinsic dimensionality of the distribution.

An alternative approach to learning with asymmetric costs is the Neyman-Pearson paradigm. Here the learner is given samples from both a null class and a positive class, and the goal is to construct a classifier which minimises the false negative rate whilst satisfying a user-specified upper bound on the false positive rate. This approach to classification is appropriate in applications, such as medical diagnosis, where the different types of error are of a fundamentally different category. We present a non-parametric approach to Neyman-Pearson learning which side-steps the need for density estimation and applies when the data is distributed on an unknown manifold within a high-dimensional ambient feature space. We prove a high-probability, distribution-dependent performance bound. Finally, we present the first minimax lower bounds in the Neyman-Pearson setting.

Overall, this thesis presents a clear picture of the way in which the properties of the distribution affect the optimal learning rates for high-dimensional classification problems with asymmetric costs. Moreover, we demonstrate that these optimal rates are often attained by non-parametric techniques which are both conceptually simple and straightforward to implement.
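
As a concrete illustration of the k-NN UCB idea described in the abstract, here is a minimal sketch under simplifying assumptions: a finite arm set, a fixed hand-picked k, and an illustrative confidence width. The algorithm analysed in the thesis chooses its constants and neighbourhood sizes to achieve the stated regret bound, so this is a sketch of the idea rather than the thesis's exact method.

```python
import numpy as np

class KNNUCB:
    """Sketch of a k-NN UCB rule for bandits with covariates.

    For each arm we average the rewards of the k nearest past contexts
    at which that arm was played, then add an optimism bonus combining
    sampling noise with the radius of the neighbourhood (a bias term).
    The bonus constants here are illustrative.
    """

    def __init__(self, n_arms, k=10, beta=1.0):
        self.n_arms = n_arms
        self.k = k
        self.beta = beta                        # scales the exploration bonus
        self.X = [[] for _ in range(n_arms)]    # past contexts, per arm
        self.r = [[] for _ in range(n_arms)]    # past rewards, per arm

    def select(self, x):
        ucb = np.full(self.n_arms, np.inf)      # unplayed arms stay maximally optimistic
        for a in range(self.n_arms):
            if not self.X[a]:
                continue
            X_a = np.asarray(self.X[a])
            dists = np.linalg.norm(X_a - x, axis=1)
            idx = np.argsort(dists)[: self.k]
            mean = float(np.mean([self.r[a][i] for i in idx]))
            # Noise term shrinks with the neighbour count; the distance
            # term accounts for smoothness bias of the local average.
            bonus = self.beta * np.sqrt(np.log(len(dists) + 1) / len(idx)) + dists[idx].max()
            ucb[a] = mean + bonus
        return int(np.argmax(ucb))

    def update(self, arm, x, reward):
        self.X[arm].append(np.asarray(x, dtype=float))
        self.r[arm].append(float(reward))
```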

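For the Neyman-Pearson paradigm, the sketch below shows the generic plug-in recipe rather than the thesis's manifold-adaptive procedure: score each point with a non-parametric estimate of class membership (here an illustrative k-NN score) and calibrate the decision threshold on a held-out null sample so that the empirical false positive rate stays below the target alpha. All function names and parameters are illustrative, and only the empirical rate is controlled here, whereas the thesis proves high-probability control of the true rate.

```python
import numpy as np

def np_threshold(null_scores, alpha):
    """Choose a threshold t so that at most a fraction alpha of the
    held-out null scores exceed t (declare positive when score > t).
    Assumes 0 < alpha < 1. A minimal plug-in sketch.
    """
    s = np.sort(np.asarray(null_scores, dtype=float))
    n = len(s)
    m = int(np.floor(alpha * n))    # allowed null exceedances
    return s[n - m - 1]             # at most m null scores lie strictly above

def knn_score(X_null, X_pos, x, k=10):
    """Illustrative score: fraction of the k nearest labelled points
    (null and positive pooled) that come from the positive class."""
    X = np.vstack([X_null, X_pos])
    y = np.concatenate([np.zeros(len(X_null)), np.ones(len(X_pos))])
    dists = np.linalg.norm(X - x, axis=1)
    return float(np.mean(y[np.argsort(dists)[:k]]))

# Usage sketch (variable names hypothetical): calibrate, then classify.
# cal_scores = [knn_score(X_null_tr, X_pos_tr, x) for x in X_null_cal]
# t = np_threshold(cal_scores, alpha=0.05)
# predict_positive = knn_score(X_null_tr, X_pos_tr, x_new) > t
```
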
Bibliographic metadata

Type of resource:
Content type:
Form of thesis:
Type of submission:
Degree type:
Doctor of Philosophy
Degree programme:
PhD Computer Science (CDT)
Publication date:
2019
Location:
Manchester, UK
Total pages:
217
Thesis main supervisor(s):
Thesis co-supervisor(s):
Language:
en

Institutional metadata

University researcher(s):

Record metadata

Manchester eScholar ID:
uk-ac-man-scw:318266
Created by:
Reeve, Henry
Created:
5th February, 2019, 11:45:12
Last modified by:
Reeve, Henry
Last modified:
8th February, 2019, 13:28:08
