### Single-Solution Hypervolume Maximization and its use for Improving Generalization of Neural Networks

**2016-02-03**

1602.01164 | cs.LG

This paper introduces hypervolume maximization with a single solution as an
alternative to mean loss minimization. The relationship between the two
problems is proved through bounds on the cost function when an optimal solution
to one problem is evaluated on the other, with a hyperparameter controlling
the similarity between the two problems. The same hyperparameter places
higher weight on samples with higher loss when computing the hypervolume's
gradient, whose normalized version can range from the mean loss to the max
loss. An experiment on MNIST with a neural network validates the theory,
showing that hypervolume maximization can behave similarly to mean loss
minimization and can also provide better performance, resulting in a 20%
reduction of the classification error on the test set.
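The sample-weighting behaviour described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: we assume the log-hypervolume of a single solution with per-sample losses l_i and a reference point mu (with mu above every loss) takes the form sum_i log(mu - l_i), so its gradient weights each sample by 1/(mu - l_i). The function name and the choice of mu are ours.

```python
def hypervolume_weights(losses, mu):
    """Normalized per-sample weights from the gradient of the
    log-hypervolume sum_i log(mu - l_i).

    `mu` plays the role of the reference point: it must exceed
    every loss, and it controls how much extra weight the
    highest-loss samples receive.
    """
    if mu <= max(losses):
        raise ValueError("reference point must dominate all losses")
    raw = [1.0 / (mu - l) for l in losses]
    total = sum(raw)
    return [w / total for w in raw]

losses = [0.1, 0.5, 2.0]

# A very large reference point makes the weights nearly uniform,
# recovering mean loss minimization.
print(hypervolume_weights(losses, mu=1e6))

# A reference point just above the max loss concentrates almost all
# of the weight on the hardest sample, approaching max loss behaviour.
print(hypervolume_weights(losses, mu=2.1))
```

Varying mu between these two extremes interpolates between the mean loss and the max loss, which is the range the abstract attributes to the normalized gradient.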


# Related Articles

**2017-06-12**

1706.03847 | cs.LG

RNNs have been shown to be excellent models for sequential data and in
particular for data that is g…

**2018-10-10**

1810.04650 | cs.LG

In multi-task learning, multiple tasks are solved jointly, sharing inductive
bias between them. Mult…

**2017-01-23**

1701.06264 | cs.CV

In this paper, we present the Lipschitz regularization theory and algorithms
for a novel Loss-Sensit…

**2018-06-26**

1806.09842 | cs.LG

We introduce a new convex optimization problem, termed quadratic decomposable
submodular function mi…

**2018-11-19**

1811.07591 | cs.LG

Learning a deep neural network requires solving a challenging optimization
problem: it is a high-dim…

**2015-10-01**

1510.00452 | cs.LG

We address the problem of aggregating an ensemble of predictors with known
loss bounds in a semi-sup…

**2018-03-23**

1803.08661 | cs.LG

We propose a Bayesian optimization algorithm for objective functions that are
sums or integrals of e…

**2018-07-10**

1807.03858 | cs.LG

Model-based reinforcement learning (RL) is considered to be a promising
approach to reduce the sampl…

**2018-02-21**

1802.07595 | cs.LG

The top-k error is a common measure of performance in machine learning and
computer vision. In pract…

**2016-02-07**

1602.02355 | stat.ML

Most models in machine learning contain at least one hyperparameter to
control for model complexity.…

**2017-03-23**

1703.07950 | cs.LG

In recent years, Deep Learning has become the go-to solution for a broad
range of applications, ofte…

**2018-09-05**

1809.01275 | math.OC

We propose a new primal-dual homotopy smoothing algorithm for a linearly
constrained convex program,…

**2019-02-19**

1902.07234 | cs.LG

We present algorithms for efficiently learning regularizers that improve
generalization. Our approac…

**2019-04-12**

1904.06373 | cs.CV

One-stage object detectors are trained by optimizing classification-loss and
localization-loss simul…

**2018-10-25**

1810.11071 | cs.LG

Ensemble techniques are powerful approaches that combine several weak
learners to build a stronger o…

**2019-02-02**

Despite the non-convex nature of their loss functions, deep neural networks
are known to generalize …