
Journal of Machine Learning Research 18 (2017) 1-65 Submitted 11/14; Revised 10/16; Published 4/17

Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles

Yann Ollivier yann.ollivier@lri.fr


CNRS & LRI (UMR 8623), Université Paris-Saclay
91405 Orsay, France
Ludovic Arnold arnold@lri.fr
Univ. Paris-Sud, LRI
91405 Orsay, France
Anne Auger anne.auger@inria.fr
Nikolaus Hansen nikolaus.hansen@inria.fr
Inria & CMAP, Ecole polytechnique
91128 Palaiseau, France

Editor: Una-May O’Reilly

Abstract
We present a canonical way to turn any smooth parametric family of probability distributions
on an arbitrary search space 𝑋 into a continuous-time black-box optimization method on
𝑋, the information-geometric optimization (IGO) method. Invariance as a major design
principle keeps the number of arbitrary choices to a minimum. The resulting IGO flow is
the flow of an ordinary differential equation conducting the natural gradient ascent of an
adaptive, time-dependent transformation of the objective function. It makes no particular
assumptions on the objective function to be optimized.
The IGO method produces explicit IGO algorithms through time discretization. It
naturally recovers versions of known algorithms and offers a systematic way to derive new
ones. In continuous search spaces, IGO algorithms take a form related to natural evolution
strategies (NES). The cross-entropy method is recovered in a particular case with a large
time step, and can be extended into a smoothed, parametrization-independent maximum
likelihood update (IGO-ML). When applied to the family of Gaussian distributions on R𝑑 ,
the IGO framework recovers a version of the well-known CMA-ES algorithm and of xNES.
For the family of Bernoulli distributions on {0, 1}𝑑 , we recover the seminal PBIL algorithm
and cGA. For the distributions of restricted Boltzmann machines, we naturally obtain
a novel algorithm for discrete optimization on {0, 1}𝑑 . All these algorithms are natural
instances of, and unified under, the single information-geometric optimization framework.
The IGO method achieves, thanks to its intrinsic formulation, maximal invariance
properties: invariance under reparametrization of the search space 𝑋, under a change of
parameters of the probability distribution, and under increasing transformation of the
function to be optimized. The latter is achieved through an adaptive, quantile-based
formulation of the objective.
Theoretical considerations strongly suggest that IGO algorithms are essentially charac-
terized by a minimal change of the distribution over time. Therefore they have minimal loss
in diversity through the course of optimization, provided the initial diversity is high. First
experiments using restricted Boltzmann machines confirm this insight. As a simple consequence, IGO seems to provide, from information theory, an elegant way to simultaneously
explore several valleys of a fitness landscape in a single run.
Keywords: black-box optimization, stochastic optimization, randomized optimization,
natural gradient, invariance, evolution strategy, information-geometric optimization

1. Introduction
Optimization problems are at the core of many disciplines. Given an objective function
𝑓 : 𝑋 → R, to be optimized on some space 𝑋, the goal of black-box optimization is to
find solutions 𝑥 ∈ 𝑋 with small (in the case of minimization) value 𝑓 (𝑥), using the least
number of calls to the function 𝑓 . In a black-box scenario, knowledge about the function
𝑓 is restricted to the handling of a device (e.g., a simulation code) that delivers the value
𝑓 (𝑥) for any input 𝑥 ∈ 𝑋. The search space 𝑋 may be finite, discrete infinite, or continuous.
However, optimization algorithms are often designed for a specific type of search space,
exploiting its specific structure.
One major design principle in general and in optimization in particular is related to
invariance. Invariance extends performance observed on a single function to an entire
associated invariance class, that is, it generalizes from a single problem to a class of problems.
Thus it hopefully provides better robustness w.r.t. changes in the presentation of a problem.
For continuous search spaces, invariance under translation of the coordinate system is
standard in optimization. Invariance under general affine-linear changes of the coordinates
has been—we believe—one of the keys to the success of the Nelder-Mead Downhill Simplex
method (Nelder and Mead, 1965), of quasi-Newton methods (Deuflhard, 2011) and of the
covariance matrix adaptation evolution strategy, CMA-ES (Hansen and Ostermeier, 2001;
Hansen and Auger, 2014). While these relate to transformations in the search space, another
important invariance concerns the application of monotonically increasing transformations
to 𝑓 , so that it is indifferent whether the function 𝑓 , $f^3$ or $f \times |f|^{-2/3}$ is minimized. This
way some non-convex or non-smooth functions can be as “easily” optimised as convex ones.
Invariance under 𝑓 -transformation is not uncommon, e.g., for evolution strategies (Schwefel,
1995) or pattern search methods (Hooke and Jeeves, 1961; Torczon, 1997; Nelder and Mead,
1965); however it has not always been recognized as an attractive feature.
Many stochastic optimization methods have been proposed to tackle black-box optimiza-
tion. The underlying (often hidden) principle of these stochastic methods is to iteratively
update a probability distribution 𝑃𝜃 defined on 𝑋, parametrized by a set of parameters 𝜃.
At a given iteration, the distribution 𝑃𝜃 represents, loosely speaking, the current belief about
where solutions with the smallest values of the function 𝑓 may lie. Over time, 𝑃𝜃 is expected
to concentrate around the minima of 𝑓 . The update of the distribution involves querying
the function with points sampled from the current probability distribution 𝑃𝜃 . Although
implicit in the presentation of many stochastic optimization algorithms, this is the explicit
setting for the wide family of estimation of distribution algorithms (EDA) (Larranaga and
Lozano, 2002; Baluja and Caruana, 1995; Pelikan et al., 2002). Updates of the probability
distribution often rely on heuristics (nevertheless in Toussaint 2004 the possible interest of
information geometry to exploit the structure of probability distributions for designing better
grounded heuristics is pointed out). In addition, in the EDA setting we can distinguish two
theoretically founded approaches to update 𝑃𝜃 .


First, the cross-entropy method consists in taking 𝜃 minimizing the Kullback–Leibler divergence between 𝑃𝜃 and the indicator of the best points according to 𝑓 (de Boer et al., 2005).
Second, one can transfer the objective function 𝑓 to the space of parameters 𝜃 by taking
the average of 𝑓 under 𝑃𝜃 , seen as a function of 𝜃. This average is a new function from a Euclidean space to R and is minimal when 𝑃𝜃 is concentrated on minima of 𝑓 . Consequently,
𝜃 can be updated by following a gradient descent of this function with respect to 𝜃. This has
been done in various situations such as 𝑋 = {0, 1}𝑑 and the family of Bernoulli measures
(Berny, 2000a) or of Boltzmann machines (Berny, 2002), or on 𝑋 = R𝑑 for the family of
Gaussian distributions (Berny, 2000b; Gallagher and Frean, 2005) or the exponential family
also using second order information (Zhou and Hu, 2014; E. Zhou, 2016).
However, taking the ordinary gradient with respect to 𝜃 depends on the precise way
a parameter 𝜃 is chosen to represent the distribution 𝑃𝜃 , and does not take advantage of
the Riemannian metric structure of families of probability distributions. In the context of
machine learning, Amari noted the shortcomings of the ordinary gradient for families of
probability distributions (Amari, 1998) and proposed instead to use the natural gradient
with respect to the Fisher metric (Rao, 1945; Jeffreys, 1946; Amari and Nagaoka, 2000). In
the context of optimization, this natural gradient has been used for exponential families
on 𝑋 = {0, 1}𝑑 (Malagò et al., 2008, 2011) and for the family of Gaussian distributions on
𝑋 = R𝑑 with so-called natural evolution strategies (NES, Wierstra et al. 2008; Sun et al.
2009; Glasmachers et al. 2010; Wierstra et al. 2014).
However, none of the previous attempts using gradient updates captures the invariance
under increasing transformations of the objective function, which is instead, in some cases,
enforced a posteriori with heuristic arguments.
Building on these ideas, this paper overcomes the invariance problem of previous attempts
and provides a consistent, unified picture of optimization on arbitrary search spaces via
invariance principles. More specifically, we consider an arbitrary search space 𝑋, either
discrete or continuous, and a black-box optimization problem on 𝑋, that is, a function
𝑓 : 𝑋 → R to be minimized. We assume that a family of probability distributions 𝑃𝜃 on 𝑋
depending on a continuous multicomponent parameter 𝜃 ∈ Θ has been chosen. A classical
example is to take 𝑋 = R𝑑 and to consider the family of all Gaussian distributions 𝑃𝜃 on R𝑑 ,
with 𝜃 = (𝑚, 𝐶) the mean and covariance matrix. Another simple example is 𝑋 = {0, 1}𝑑
equipped with the family of Bernoulli measures: $\theta = (\theta_i)_{1 \le i \le d}$ and $P_\theta(x) = \prod_i \theta_i^{x_i} (1-\theta_i)^{1-x_i}$ for 𝑥 = (𝑥𝑖 ) ∈ 𝑋.
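To fix ideas, here is a minimal sketch (ours, in Python with NumPy; the function names are illustrative and not from the paper) of these two example families, with 𝜃 acting as the parameter of a sampling distribution on 𝑋:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian(theta, n):
    """Gaussian family on R^d: theta = (mean m, covariance matrix C)."""
    m, C = theta
    return rng.multivariate_normal(m, C, size=n)

def sample_bernoulli(theta, n):
    """Bernoulli family on {0,1}^d: theta_i = P(x_i = 1), independently."""
    return (rng.random((n, len(theta))) < theta).astype(int)

print(sample_gaussian((np.zeros(2), np.eye(2)), 3))   # 3 points in R^2
print(sample_bernoulli(np.array([0.2, 0.9]), 3))      # 3 points in {0,1}^2
```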
From this setting, information-geometric optimization (IGO) can be defined in a natural
way. At each (continuous) time 𝑡, we maintain a value 𝜃𝑡 of the parameter of the distribution.
The function 𝑓 to be optimized is transferred to the parameter space Θ by means of a
suitable time-dependent transformation based on the 𝑃𝜃𝑡 -levels of 𝑓 (Definition 3). The IGO
flow, introduced in Definition 4, follows the natural gradient of the expected value of this
function of 𝜃𝑡 in the parameter space Θ, where the natural gradient derives from the Fisher
information metric. The IGO flow is thus the flow of an ordinary differential equation in
space Θ. This continuous-time gradient flow is turned into a family of explicit IGO algorithms
by taking an Euler time discretization of the differential equation and approximating the
distribution 𝑃𝜃𝑡 by using samples. From the start, the IGO flow is invariant under strictly
increasing transformations of 𝑓 (Proposition 17); we also prove that the sampling procedure


is consistent (Theorem 6). IGO algorithms share their final algebraic form with the natural
evolution strategies (NES) introduced in the Gaussian setting (Wierstra et al., 2008; Sun
et al., 2009; Glasmachers et al., 2010; Wierstra et al., 2014); the latter are thus recovered
in the IGO framework as an Euler approximation to a well-defined flow, without heuristic
arguments.
The IGO method also has an equivalent description as an infinitesimal maximum likelihood
update (Theorem 10); this reveals a new property of the natural gradient and does not require
a smooth parametrization by 𝜃 anymore. This also establishes a specific link (Theorem 12)
between IGO, the natural gradient, and the cross-entropy method (de Boer et al., 2005).
When we instantiate IGO using the family of Gaussian distributions on R𝑑 , we naturally
obtain versions of the well-known covariance matrix adaptation evolution strategy (CMA-ES,
Hansen and Ostermeier 2001; Hansen and Kern 2004; Jastrebski and Arnold 2006) and of
natural evolution strategies. With Bernoulli measures on the discrete cube {0, 1}𝑑 , we recover
(Proposition 14) the well-known population-based incremental learning (PBIL, Baluja and
Caruana 1995; Baluja 1994) and the compact genetic algorithm (cGA, Harik et al. 1999); this
derivation of PBIL or cGA as a natural gradient ascent appears to be new, and emphasizes
the common ground between continuous and discrete optimization.
From the IGO framework, it is (theoretically) immediate to build new optimization
algorithms using more complex families of distributions than Gaussian or Bernoulli. As an
illustration, distributions associated with restricted Boltzmann machines (RBMs) provide
a new but natural algorithm for discrete optimization on {0, 1}𝑑 which is able to handle
dependencies between the bits (see also Berny 2002). The probability distributions associated
with RBMs are multimodal; combined with the specific information-theoretic properties of
IGO that guarantee minimal change in diversity over time, this allows IGO to reach multiple
optima at once in a natural way, at least in a simple experimental setup (Appendix B).
The IGO framework is built to achieve maximal invariance properties. First, the IGO
flow is invariant under reparametrization of the family of distributions 𝑃𝜃 , that is, it only
depends on 𝑃𝜃 and not on the way we write the parameter 𝜃 (Proposition 18). This
invariance under 𝜃-reparametrization is the main idea behind information geometry (Amari
and Nagaoka, 2000). For instance, for Gaussian measures it should not matter whether
we use the covariance matrix or its inverse or a Cholesky factor as the parameter. This
limits the influence of encoding choices on the behavior of the algorithm. Second, the IGO
flow is invariant under a change of coordinates in the search space 𝑋, provided that this
change of coordinates globally preserves the family of distributions 𝑃𝜃 (Proposition 19). For
instance, for Gaussian distributions on R𝑑 , this includes all affine changes of coordinates.
This means that the algorithm, apart from initialization, does not depend on the precise way
the data is presented. Last, the IGO flow and IGO algorithms are invariant under applying
a strictly increasing function to 𝑓 (Proposition 17). Contrary to previous formulations using
natural gradients (Wierstra et al., 2008; Glasmachers et al., 2010; Akimoto et al., 2010),
this invariance is achieved from the start. Such invariance properties mean that we deal
with intrinsic properties of the objects themselves, and not with the way we encode them
as collections of numbers in R𝑑 . It also means, most importantly, that we make a minimal
number of arbitrary choices.


In Section 2, we define the IGO flow and the IGO algorithm. We begin with standard
facts about the definition and basic properties of the natural gradient, and its connection
with Kullback–Leibler divergence and diversity. We then proceed to the detailed description
of the algorithm.
In Section 3, we state some first mathematical properties of IGO. These include monotone
improvement of the objective function, invariance properties, and the form of IGO for
exponential families of probability distributions.
In Section 4 we explain the theoretical relationships between IGO, maximum likelihood
estimates and the cross-entropy method. In particular, IGO is uniquely characterized by a
weighted log-likelihood maximization property (Theorem 10).
In Section 5, we apply the IGO framework to explicit families of probability distributions.
Several well-known optimization algorithms are recovered this way. These include PBIL
(Sec. 5.1) using Bernoulli distributions, and versions of CMA-ES and other evolutionary
algorithms such as EMNA and xNES (Sec. 5.2) using Gaussian distributions.
Appendix A discusses further aspects, perspectives and implementation matters of
the IGO framework. In Appendix B, we illustrate how IGO can be used to design new
optimization algorithms, by deriving the IGO algorithm associated with restricted Boltzmann
machines. We illustrate experimentally, on a simple bimodal example, the specific influence
of the Fisher information matrix on the optimization trajectories, and in particular on the
diversity of the optima obtained. Appendix C details a number of further mathematical
properties of IGO (such as its invariance properties or the case of noisy objective functions).
Appendix D contains the previously omitted proofs of the mathematical statements.

2. Algorithm Description
We now present the outline of the algorithm. Each step is described in more detail in the
sections below.
The IGO flow can be seen as an estimation of distribution algorithm: at each time 𝑡, we
maintain a probability distribution 𝑃𝜃𝑡 on the search space 𝑋, where 𝜃𝑡 ∈ Θ. The value of
𝜃𝑡 will evolve so that, over time, 𝑃𝜃𝑡 gives more weight to points 𝑥 with better values of the
function 𝑓 (𝑥) to optimize.
A straightforward way to proceed is to transfer 𝑓 from 𝑥-space to 𝜃-space: define a
function 𝐹 (𝜃) as the 𝑃𝜃 -average of 𝑓 and then do a gradient descent for 𝐹 (𝜃) in space Θ
(Berny, 2000a, 2002, 2000b; Gallagher and Frean, 2005). This way, 𝜃 will converge to a point
such that 𝑃𝜃 yields a good average value of 𝑓 . We depart from this approach in two ways:

∙ At each time, we replace 𝑓 with an adaptive transformation of 𝑓 representing how


good or bad observed values of 𝑓 are relative to other observations. This provides
invariance under all monotone transformations of 𝑓 .

∙ Instead of the vanilla gradient for 𝜃, we use the so-called natural gradient given by
the Fisher information matrix. This reflects the intrinsic geometry of the space of
probability distributions, as introduced by Rao and Jeffreys (Rao, 1945; Jeffreys, 1946)
and later elaborated upon by Amari and others (Amari and Nagaoka, 2000). This
provides invariance under reparametrization of 𝜃 and, importantly, minimizes the
change of diversity of 𝑃𝜃 .


The algorithm is constructed in two steps: we first give an “ideal” version, namely, a
version in which time 𝑡 is continuous so that the evolution of 𝜃𝑡 is given by an ordinary
differential equation in Θ. Second, the actual algorithm is a time discretization using a finite
time step and Monte Carlo sampling instead of exact 𝑃𝜃 -averages.

2.1 The Natural Gradient on Parameter Space


We recall suitable definitions of the vanilla and the natural gradient and motivate using the
natural gradient in the context of optimization.

2.1.1 About Gradients and the Shortest Path Uphill


Let 𝑔 be a smooth function from R𝑑 to R, to be maximized. We first recall the interpretation
of gradient ascent as “the shortest path uphill”.
Let 𝑦 ∈ R𝑑 . Define the vector 𝑧 by

$$ z = \lim_{\varepsilon \to 0}\; \arg\max_{z,\ \|z\| \le 1} g(y + \varepsilon z) . \qquad (1) $$

Then one can check that 𝑧 is the normalized gradient of 𝑔 at 𝑦: $z_i = \frac{\partial g/\partial y_i}{\|\partial g/\partial y\|}$. (This holds
only at points 𝑦 where the gradient of 𝑔 does not vanish.)
This shows that, for small 𝛿𝑡, the well-known gradient ascent of 𝑔 given by

$$ y_i^{t+\delta t} = y_i^t + \delta t\, \frac{\partial g}{\partial y_i} $$

realizes the largest increase of the value of 𝑔, for a given step size $\|y^{t+\delta t} - y^t\|$.
The relation (1) depends on the choice of a norm ‖ · ‖ (the gradient of 𝑔 is given by $\partial g/\partial y_i$ only in an orthonormal basis). If we use, instead of the standard metric $\|y - y'\| = \sqrt{\sum_i (y_i - y_i')^2}$ on R𝑑 , a metric $\|y - y'\|_A = \sqrt{\sum_{ij} A_{ij}(y_i - y_i')(y_j - y_j')}$ defined by a positive definite matrix $A_{ij}$, then the gradient of 𝑔 with respect to this metric is given by $\sum_j A^{-1}_{ij}\, \frac{\partial g}{\partial y_j}$. This follows from the textbook definition of gradients by $g(y + \varepsilon z) = g(y) + \varepsilon \langle \nabla g, z \rangle_A + O(\varepsilon^2)$ with ⟨·, ·⟩𝐴 the scalar product associated with the matrix $A_{ij}$ (Schwartz, 1992).
It is possible to write the analogue of (1) using the 𝐴-norm. We then find that the
gradient ascent associated with metric 𝐴 is given by

$$ y^{t+\delta t} = y^t + \delta t\, A^{-1} \frac{\partial g}{\partial y} $$

for small 𝛿𝑡 and maximizes the increment of 𝑔 for a given 𝐴-distance $\|y^{t+\delta t} - y^t\|_A$: it realizes
the steepest 𝐴-ascent. Maybe this viewpoint clarifies the relationship between gradient and
metric: this steepest ascent property can actually be used as a definition of gradients.
In our setting we want to use a gradient ascent in the parameter space Θ of our
distributions 𝑃𝜃 . The “vanilla” gradient $\frac{\partial}{\partial \theta_i}$ is associated with the metric $\|\theta - \theta'\| = \sqrt{\sum_i (\theta_i - \theta_i')^2}$ and clearly depends on the choice of parametrization 𝜃. Thus this metric, and
the direction pointed by this gradient, are not intrinsic, in the sense that they do not depend
only on the distribution 𝑃𝜃 . A metric depending on 𝜃 only through the distributions 𝑃𝜃 can
be defined as follows.


2.1.2 Fisher Information and the Natural Gradient on Parameter Space


Let 𝜃, 𝜃′ ∈ Θ be two values of the distribution parameter. A widely used way to define a
“distance” between two generic distributions1 𝑃𝜃 and 𝑃𝜃′ is the Kullback–Leibler divergence
from information theory, defined (Kullback, 1997) as

$$ \mathrm{KL}(P_{\theta'} \,\|\, P_\theta) = \int_x \ln \frac{P_{\theta'}(\mathrm{d}x)}{P_\theta(\mathrm{d}x)}\, P_{\theta'}(\mathrm{d}x) . $$

When 𝜃′ = 𝜃 + 𝛿𝜃 is close to 𝜃, under mild smoothness assumptions we can expand the Kullback–Leibler divergence at second order in 𝛿𝜃. This expansion defines the Fisher information matrix 𝐼 at 𝜃 (Kullback, 1997):

$$ \mathrm{KL}(P_{\theta+\delta\theta} \,\|\, P_\theta) = \frac{1}{2} \sum_{ij} I_{ij}(\theta)\, \delta\theta_i\, \delta\theta_j + O(\delta\theta^3) . \qquad (2) $$
An equivalent definition of the Fisher information matrix is by the usual formulas (Cover
and Thomas, 2006)

$$ I_{ij}(\theta) = \int_x \frac{\partial \ln P_\theta(x)}{\partial \theta_i}\, \frac{\partial \ln P_\theta(x)}{\partial \theta_j}\, P_\theta(\mathrm{d}x) = - \int_x \frac{\partial^2 \ln P_\theta(x)}{\partial \theta_i\, \partial \theta_j}\, P_\theta(\mathrm{d}x) . $$

The Fisher information matrix defines a (Riemannian) metric on Θ: the distance, in this metric, between two very close values of 𝜃 is given by the square root of twice the Kullback–Leibler divergence. Since the Kullback–Leibler divergence depends only on 𝑃𝜃 and
Kullback–Leibler divergence. Since the Kullback–Leibler divergence depends only on 𝑃𝜃 and
not on the parametrization of 𝜃, this metric is intrinsic.
If 𝑔 : Θ → R is a smooth function on the parameter space, its natural gradient (Amari,
1998) at 𝜃 is defined in accordance with the Fisher metric as

$$ (\widetilde{\nabla}_\theta\, g)_i = \sum_j I^{-1}_{ij}(\theta)\, \frac{\partial g(\theta)}{\partial \theta_j} $$

or more synthetically

$$ \widetilde{\nabla}_\theta\, g = I^{-1} \frac{\partial g}{\partial \theta} . $$

From now on, we will use $\widetilde{\nabla}_\theta$ to denote the natural gradient and $\frac{\partial}{\partial \theta}$ to denote the vanilla gradient.
By construction, the natural gradient descent is intrinsic: it does not depend on the
chosen parametrization 𝜃 of 𝑃𝜃 , so that it makes sense to speak of the natural gradient
ascent of a function 𝑔(𝑃𝜃 ). The Fisher metric is essentially the only way to obtain this
property (Amari and Nagaoka, 2000, Section 2.4).
Given that the Fisher metric comes from the Kullback–Leibler divergence, the “shortest
path uphill” property of gradients mentioned above translates as follows (see also Amari
1998, Theorem 1):
1. Throughout the text we do not distinguish a probability distribution 𝑃 , seen as a measure, and its density
with respect to some unspecified reference measure d𝑥, and so will write indifferently 𝑃 (d𝑥) or 𝑃 (𝑥)d𝑥.
The measure-theoretic viewpoint allows for a unified treatment of the discrete and continuous case.


Proposition 1 The natural gradient ascent points in the direction 𝛿𝜃 achieving the largest
change of the objective function, for a given distance between 𝑃𝜃 and 𝑃𝜃+𝛿𝜃 in Kullback–
Leibler divergence. More precisely, let 𝑔 be a smooth function on the parameter space Θ. Let
𝜃 ∈ Θ be a point where $\widetilde{\nabla} g(\theta)$ does not vanish. Then, if

$$ \delta\theta = \frac{\widetilde{\nabla} g(\theta)}{\|\widetilde{\nabla} g(\theta)\|} $$

is the direction of the natural gradient of 𝑔 (with ‖ · ‖ the Fisher norm), we have

$$ \delta\theta = \lim_{\varepsilon \to 0}\, \frac{1}{\varepsilon}\; \arg\max_{\substack{\delta\theta' \text{ such that} \\ \mathrm{KL}(P_{\theta+\delta\theta'} \,\|\, P_\theta)\, \le\, \varepsilon^2/2}} g(\theta + \delta\theta') . $$

Here we have implicitly assumed that the parameter space Θ is such that no two points
𝜃 ∈ Θ define the same probability distribution, and the mapping 𝑃𝜃 ↦→ 𝜃 is smooth.

2.1.3 Why Use the Fisher Metric Gradient for Optimization? Relationship to Diversity
The first reason for using the natural gradient is its reparametrization invariance, which
makes it the only gradient available in a general abstract setting (Amari and Nagaoka, 2000).
Practically, this invariance also limits the influence of encoding choices on the behavior of
the algorithm (Appendix C.1). The Fisher matrix can also be seen as an adaptive learning
rate for different components of the parameter vector 𝜃𝑖 : components 𝑖 with a high impact
on 𝑃𝜃 will be updated more cautiously.
Another advantage comes from the relationship with Kullback–Leibler distance in view
of the “shortest path uphill” (see also Amari 1998). To minimize the value of some function
𝑔(𝜃) defined on the parameter space Θ, the naive approach follows a gradient descent for 𝑔
using the “vanilla” gradient

$$ \theta_i^{t+\delta t} = \theta_i^t + \delta t\, \frac{\partial g}{\partial \theta_i} $$

and, as explained above, this maximizes the increment of 𝑔 for a given increment $\|\theta^{t+\delta t} - \theta^t\|$. On the other hand, the Fisher gradient

$$ \theta_i^{t+\delta t} = \theta_i^t + \delta t\, \Big(I^{-1} \frac{\partial g}{\partial \theta}\Big)_i $$

maximizes the increment of 𝑔 for a given Kullback–Leibler distance $\mathrm{KL}(P_{\theta^{t+\delta t}} \,\|\, P_{\theta^t})$.
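As a concrete, purely illustrative sketch of this difference, consider a one-dimensional Gaussian family parametrized by 𝜃 = (𝑚, 𝜎), whose Fisher matrix is diag(1/𝜎², 2/𝜎²); the code below (ours, in Python with NumPy) contrasts a vanilla step and a natural step for the same gradient of 𝑔:

```python
import numpy as np

def vanilla_step(theta, grad_g, dt):
    # theta' = theta + dt * dg/dtheta
    return theta + dt * grad_g

def natural_step(theta, grad_g, dt):
    # Fisher matrix of N(m, sigma^2) in the (m, sigma) parametrization
    m, sigma = theta
    I = np.diag([1.0 / sigma**2, 2.0 / sigma**2])
    # theta' = theta + dt * I^{-1} dg/dtheta
    return theta + dt * np.linalg.solve(I, grad_g)

theta = np.array([0.0, 10.0])    # a wide Gaussian: moving m by 1 barely changes P_theta
grad_g = np.array([1.0, -0.1])   # some gradient of g at theta
print(vanilla_step(theta, grad_g, dt=0.01))
print(natural_step(theta, grad_g, dt=0.01))
# The natural step rescales each coordinate by how much it actually moves the
# distribution (here, by sigma^2), instead of by its raw numerical value.
```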


In particular, if we choose an initial value 𝜃0 such that 𝑃𝜃0 covers the whole space 𝑋
uniformly (or a wide portion, in case 𝑋 is unbounded), the Kullback–Leibler divergence
between 𝑃𝜃𝑡 and 𝑃𝜃0 is the Shannon entropy of the uniform distribution minus the Shannon
entropy of 𝑃𝜃𝑡 , and so this divergence measures the loss of diversity of 𝑃𝜃𝑡 with respect to
the uniform distribution.
Proposition 2 Let 𝑔 : Θ → R be a regular function of 𝜃 and let 𝜃0 be such that 𝑃𝜃0 is the uniform distribution on a finite space 𝑋. Let $(\theta^t)_{t \ge 0}$ be the trajectory of the gradient ascent of 𝑔 using the natural gradient. Then for small 𝑡 we have

$$ \theta^t = \arg\max_\theta\, \big\{\, t \cdot g(\theta) + \mathrm{Ent}(P_\theta)\, \big\} + o(t) \qquad (3) $$

where Ent is the Shannon entropy.


(We have stated the proposition over a finite space to have a well-defined uniform
distribution. A short proof, together with the regularity conditions on 𝑔, is given in
Appendix D.)
So following the natural gradient of a function 𝑔, starting at or close to the uniform
distribution, amounts to optimizing the function 𝑔 with minimal loss of diversity, provided
the initial diversity is large. (This is valid, of course, only at the beginning; once one gets too
far from uniform, a better interpretation is minimal change of diversity.) On the other hand,
the vanilla gradient descent does not satisfy Proposition 2: it optimizes 𝑔 with minimal
change in the numerical values of the parameter 𝜃, which is of little interest.
So arguably this method realizes the best trade-off between optimization and loss
of diversity. (Though, as can be seen from the detailed algorithm description below,
maximization of diversity occurs only greedily at each step, and so there is no guarantee
that after a given time, IGO will provide the highest possible diversity for a given objective
function value.)
An experimental confirmation of the positive influence of the Fisher matrix on diversity
is given in Appendix B below.

2.2 IGO: Information-Geometric Optimization


We now introduce a quantile-based rewriting of the objective function. From applying the
natural gradient on the rewritten objective we derive information-geometric optimization.

2.2.1 Quantile Rewriting of 𝑓


Our original problem is to minimize a function 𝑓 : 𝑋 → R. A simple way to turn 𝑓 into a
function on Θ is to use the expected value −E𝑃𝜃 𝑓 (Berny, 2000a; Wierstra et al., 2008), but
expected values can be unduly influenced by extreme values and using them can be rather
unstable (Whitley, 1989); moreover −E𝑃𝜃 𝑓 is not invariant under increasing transformation
of 𝑓 (this invariance implies we can only compare 𝑓 -values, not sum them up).
Instead, we take an adaptive, quantile-based approach by first replacing the function
𝑓 with a monotone rewriting 𝑊𝜃𝑓𝑡 , depending on the current parameter value 𝜃𝑡 , and then
following the gradient of E𝑃𝜃 𝑊𝜃𝑓𝑡 , seen as a function of 𝜃. A suitable choice of 𝑊𝜃𝑓𝑡 allows us to control the range of the resulting values and achieves the desired invariance. Because the
rewriting 𝑊𝜃𝑓𝑡 depends on 𝜃𝑡 , it might be viewed as an adaptive 𝑓 -transformation.
The monotone rewriting entails that if 𝑓 (𝑥) is “small” then 𝑊𝜃𝑓 (𝑥) ∈ R is “large” and
vice versa. The quantitative meaning of “small” or “large” depends on 𝜃 ∈ Θ. To obtain the
value of 𝑊𝜃𝑓 (𝑥) we compare 𝑓 (𝑥) to the quantiles of 𝑓 under the current distribution, as
measured by the 𝑃𝜃 -level fraction in which the value of 𝑓 (𝑥) lies.

Definition 3 The lower and upper 𝑃𝜃 -𝑓 -levels of 𝑥 ∈ 𝑋 are defined as

$$ \begin{aligned} q^{<}_\theta(x) &= \Pr_{x' \sim P_\theta}\big( f(x') < f(x) \big) \\ q^{\le}_\theta(x) &= \Pr_{x' \sim P_\theta}\big( f(x') \le f(x) \big) . \end{aligned} \qquad (4) $$

Let 𝑤 : [0; 1] → R be a non-increasing function, the selection scheme.


The transform $W^f_\theta(x)$ of an objective function 𝑓 : 𝑋 → R is defined as a function of the 𝑃𝜃 -𝑓 -level of 𝑥 as

$$ W^f_\theta(x) = \begin{cases} w\big(q^{\le}_\theta(x)\big) & \text{if } q^{\le}_\theta(x) = q^{<}_\theta(x), \\[1ex] \dfrac{1}{q^{\le}_\theta(x) - q^{<}_\theta(x)} \displaystyle\int_{q = q^{<}_\theta(x)}^{q = q^{\le}_\theta(x)} w(q)\, \mathrm{d}q & \text{otherwise.} \end{cases} \qquad (5) $$

The level functions 𝑞 : 𝑋 → [0, 1] reflect the probability to sample a better value than
𝑓 (𝑥). They are monotone in 𝑓 (if 𝑓 (𝑥1 ) 6 𝑓 (𝑥2 ) then 𝑞𝜃< (𝑥1 ) 6 𝑞𝜃< (𝑥2 ), and likewise for 𝑞 6 )
and invariant under strictly increasing transformations of 𝑓 .
A typical choice for 𝑤 is $w(q) = \mathbb{1}_{q \le q_0}$ for some fixed value 𝑞0 , the selection quantile. In
what follows, we suppose that a selection scheme (weighting scheme) 𝑤 has been chosen
once and for all.
As desired, the definition of 𝑊𝜃𝑓 is invariant under a strictly increasing transformation
of 𝑓 . For instance, the 𝑃𝜃 -median of 𝑓 gets remapped to 𝑤(1/2).
Note that $\mathbb{E}_{x \sim P_\theta} W^f_\theta(x)$ is always equal to $\int_0^1 w$, independently of 𝑓 and 𝜃: indeed, by definition, the 𝑃𝜃 inverse-quantile of a random point under 𝑃𝜃 is uniformly distributed in [0; 1]. In the following, our objective will be to maximize the expected value of $W^f_{\theta^t}$ over 𝜃, that is, to maximize

$$ \mathbb{E}_{P_\theta} W^f_{\theta^t} = \int W^f_{\theta^t}(x)\, P_\theta(\mathrm{d}x) \qquad (6) $$

over 𝜃, where 𝜃𝑡 is fixed at a given step but will adapt over time.
Importantly, 𝑊𝜃𝑓 (𝑥) can be estimated in practice: indeed, the 𝑃𝜃 -𝑓 -levels Pr𝑥′ ∼𝑃𝜃 (𝑓 (𝑥′ ) <
𝑓 (𝑥)) can be estimated by taking samples of 𝑃𝜃 and ordering the samples according to
the value of 𝑓 (see below). The estimate remains invariant under strictly increasing 𝑓 -
transformations.

2.2.2 The IGO Gradient Flow


At the most abstract level, IGO is a continuous-time gradient flow in the parameter space
Θ, which we define now. In practice, discrete time steps (a.k.a. iterations) are used, and
𝑃𝜃 -integrals are approximated through sampling, as described in the next section.
Let 𝜃𝑡 be the current value of the parameter at time 𝑡, and let 𝛿𝑡 ≪ 1. We define 𝜃𝑡+𝛿𝑡
in such a way as to increase the 𝑃𝜃 -weight of points where 𝑓 is small, while not going too
far from 𝑃𝜃𝑡 in Kullback–Leibler divergence. We use the adaptive weights 𝑊𝜃𝑓𝑡 as a way
to measure which points have large or small values. In accordance with (6), this suggests
taking the gradient ascent
$$ \theta^{t+\delta t} = \theta^t + \delta t\, \widetilde{\nabla}_\theta \int W^f_{\theta^t}(x)\, P_\theta(\mathrm{d}x) \qquad (7) $$

where the natural gradient is suggested by Proposition 1.


Note again that we use $W^f_{\theta^t}$ and not $W^f_\theta$ in the integral. So the gradient $\widetilde{\nabla}_\theta$ does not act on the adaptive objective $W^f_{\theta^t}$. If we used $W^f_\theta$ instead, we would face a paradox: right after a move, previously good points do not seem so good any more since the distribution has improved. More precisely, $\int W^f_\theta(x)\, P_\theta(\mathrm{d}x)$ is constant and always equal to the average weight $\int_0^1 w$, and so the gradient would always vanish.


Using the log-likelihood trick $\widetilde{\nabla} P_\theta = P_\theta\, \widetilde{\nabla} \ln P_\theta$ (assuming 𝑃𝜃 is smooth), we get an equivalent expression of the update above as an integral under the current distribution 𝑃𝜃𝑡 ; this is important for practical implementation. This leads to the following definition.

Definition 4 (IGO flow) The IGO flow is the set of continuous-time trajectories in space Θ, defined by the ordinary differential equation

$$ \begin{aligned} \frac{\mathrm{d}\theta^t}{\mathrm{d}t} &= \widetilde{\nabla}_\theta \int W^f_{\theta^t}(x)\, P_\theta(x)\, \mathrm{d}x & (8) \\ &= \int W^f_{\theta^t}(x)\, \frac{\widetilde{\nabla}_\theta P_\theta(x)}{P_{\theta^t}(x)}\, P_{\theta^t}(x)\, \mathrm{d}x & \\ &= \int W^f_{\theta^t}(x)\, \widetilde{\nabla}_\theta \ln P_\theta(x)\, P_{\theta^t}(\mathrm{d}x) & (9) \\ &= I^{-1}(\theta^t) \int W^f_{\theta^t}(x)\, \frac{\partial \ln P_\theta(x)}{\partial \theta}\, P_{\theta^t}(\mathrm{d}x) . & (10) \end{aligned} $$

where the gradients are taken at point 𝜃 = 𝜃𝑡 , and 𝐼 is the Fisher information matrix.

Natural evolution strategies (NES, Wierstra et al. 2008; Glasmachers et al. 2010; Sun
et al. 2009; Wierstra et al. 2014) feature a related gradient descent with 𝑓 (𝑥) instead of
$W^f_{\theta^t}(x)$. The associated flow would read

$$ \frac{\mathrm{d}\theta^t}{\mathrm{d}t} = -\widetilde{\nabla}_\theta \int f(x)\, P_\theta(\mathrm{d}x) , \qquad (11) $$
where the gradient is taken at 𝜃𝑡 (in the sequel when not explicitly stated, gradients in 𝜃
are taken at 𝜃 = 𝜃𝑡 ). However, in the end NESs always implement algorithms using sample
quantiles (via “nonlinear fitness shaping”), as if derived from the gradient ascent of 𝑊𝜃𝑓𝑡 (𝑥).
The update (9) is a weighted average of “intrinsic moves” increasing the log-likelihood of
some points. We can slightly rearrange the update as
$$ \begin{aligned} \frac{\mathrm{d}\theta^t}{\mathrm{d}t} &= \int \underbrace{W^f_{\theta^t}(x)}_{\text{preference weight}}\; \underbrace{\widetilde{\nabla}_\theta \ln P_\theta(x)}_{\text{intrinsic move to reinforce } x}\; \underbrace{P_{\theta^t}(\mathrm{d}x)}_{\text{current sample distribution}} & (12) \\ &= \widetilde{\nabla}_\theta\, \underbrace{\int W^f_{\theta^t}(x)\, \ln P_\theta(x)\, P_{\theta^t}(\mathrm{d}x)}_{\text{weighted log-likelihood}} , & (13) \end{aligned} $$

which provides an interpretation for the IGO gradient flow as a gradient ascent optimization
of the weighted log-likelihood of the “good points” of the current distribution. In the sense
of Theorem 10 below, IGO is in fact the “best” way to increase this log-likelihood.
For exponential families of probability distributions, we will see later that the IGO flow
rewrites as a nice derivative-free expression (21).

2.2.3 IGO Algorithms: Time Discretization and Sampling


The above is a mathematically well-defined continuous-time flow in parameter space. Its
practical implementation involves three approximations depending on two parameters 𝑁
and 𝛿𝑡:


∙ the integral under 𝑃𝜃𝑡 is approximated using 𝑁 samples taken from 𝑃𝜃𝑡 ;

∙ the value $W^f_{\theta^t}$ is approximated for each sample taken from 𝑃𝜃𝑡 ;

∙ the time derivative $\frac{\mathrm{d}\theta^t}{\mathrm{d}t}$ is approximated by a 𝛿𝑡 time increment: instead of the continuous-time IGO flow (8) we use its Euler approximation scheme $\theta^{t+\delta t} \approx \theta^t + \delta t\, \frac{\mathrm{d}\theta^t}{\mathrm{d}t}$, so that the time 𝑡 of the flow is discretized with a step size 𝛿𝑡, which thus becomes the learning rate of the algorithm. (See Corollary 21 for an interpretation of 𝛿𝑡 as a number of bits of information introduced in the distribution 𝑃𝜃𝑡 at each step.)

We also assume that the Fisher information matrix 𝐼(𝜃) and $\frac{\partial \ln P_\theta(x)}{\partial \theta}$ can be computed (see discussion below if 𝐼(𝜃) is unknown).
At each step, we draw 𝑁 samples 𝑥1 , . . . , 𝑥𝑁 under 𝑃𝜃𝑡 . To approximate 𝑊𝜃𝑓𝑡 , we rank
the samples according to the value of 𝑓 . Define rk(𝑥𝑖 ) = #{𝑗 | 𝑓 (𝑥𝑗 ) < 𝑓 (𝑥𝑖 )} and let the
estimated weight of sample 𝑥𝑖 be

$$ \widehat{w}_i = \frac{1}{N}\, w\!\left( \frac{\mathrm{rk}(x_i) + 1/2}{N} \right) , \qquad (14) $$

using the non-increasing selection scheme function 𝑤 introduced in Definition 3 above. (This is assuming there are no ties in our sample; in case several sample points have the same value of 𝑓 , we define ŵ𝑖 by averaging the above over all possible rankings of the ties².)
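In practice the weights (14) depend only on the 𝑁 sampled values of 𝑓 . A minimal sketch (ours, in Python with NumPy) with the truncation scheme $w(q) = \mathbb{1}_{q \le q_0}$, averaging over tied ranks as just described:

```python
import numpy as np

def truncation_scheme(q, q0=0.27):
    """Selection scheme w(q) = 1_{q <= q0}, non-increasing on [0, 1]."""
    return (np.asarray(q) <= q0).astype(float)

def igo_weights(f_values, w=truncation_scheme):
    """Estimated weights w_hat_i of Eq. (14); ties are averaged over their ranks."""
    f_values = np.asarray(f_values)
    N = len(f_values)
    w_hat = np.empty(N)
    for i, fi in enumerate(f_values):
        rk_lt = np.sum(f_values < fi)      # rk^<(x_i)
        rk_le = np.sum(f_values <= fi)     # rk^<=(x_i)
        tied_ranks = np.arange(rk_lt, rk_le)
        w_hat[i] = np.mean(w((tied_ranks + 0.5) / N)) / N
    return w_hat

f = [3.2, 0.1, 7.5, 0.1, 2.0]
print(igo_weights(f))   # the best (smallest-f) points receive the non-zero weights
```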
Then we can approximate the IGO flow as follows.

Definition 5 (IGO algorithms) The IGO algorithm associated with parametrization 𝜃, sample size 𝑁 and step size 𝛿𝑡 is the following update rule for the parameter 𝜃𝑡 . At each step, 𝑁 sample points 𝑥1 , . . . , 𝑥𝑁 are drawn according to the distribution 𝑃𝜃𝑡 . The parameter is updated according to

$$ \begin{aligned} \theta^{t+\delta t} &= \theta^t + \delta t \sum_{i=1}^N \widehat{w}_i\, \widetilde{\nabla}_\theta \ln P_\theta(x_i) \Big|_{\theta = \theta^t} & (16) \\ &= \theta^t + \delta t\, I^{-1}(\theta^t) \sum_{i=1}^N \widehat{w}_i\, \frac{\partial \ln P_\theta(x_i)}{\partial \theta} \Big|_{\theta = \theta^t} & (17) \end{aligned} $$

where $\widehat{w}_i$ is the weight (14) obtained from the ranked values of the objective function 𝑓 .
Equivalently one can fix the weights $w_i = \frac{1}{N}\, w\big( \frac{i - 1/2}{N} \big)$ once and for all and rewrite the update as

$$ \theta^{t+\delta t} = \theta^t + \delta t\, I^{-1}(\theta^t) \sum_{i=1}^N w_i\, \frac{\partial \ln P_\theta(x_{i:N})}{\partial \theta} \Big|_{\theta = \theta^t} \qquad (18) $$

where 𝑥𝑖:𝑁 denotes the sampled point ranked 𝑖th according to 𝑓 , i.e. 𝑓 (𝑥1:𝑁 ) < . . . < 𝑓 (𝑥𝑁:𝑁 ) (assuming again there are no ties). Note that {𝑥𝑖:𝑁 } = {𝑥𝑖 } and {𝑤𝑖 } = {ŵ𝑖 }.

2. A mathematically neater but less intuitive version would be
$$ \widehat{w}_i = \frac{1}{\mathrm{rk}^{\le}(x_i) - \mathrm{rk}^{<}(x_i)} \int_{u = \mathrm{rk}^{<}(x_i)/N}^{u = \mathrm{rk}^{\le}(x_i)/N} w(u)\, \mathrm{d}u \qquad (15) $$
with $\mathrm{rk}^{<}(x_i) = \#\{ j \mid f(x_j) < f(x_i) \}$ and $\mathrm{rk}^{\le}(x_i) = \#\{ j \mid f(x_j) \le f(x_i) \}$.
As will be discussed in Section 5, this update applied to multivariate normal distributions
or Bernoulli measures allows us to neatly recover versions of some well-established algorithms,
in particular CMA-ES and PBIL. Actually, in the Gaussian context updates of the form
(17) have already been introduced (Glasmachers et al., 2010; Akimoto et al., 2010), though
not formally derived from a continuous-time flow with inverse quantiles.
When 𝑁 → ∞, the IGO algorithm using samples approximates the continuous-time IGO
gradient flow, see Theorem 6 below. Indeed, the IGO algorithm, with 𝑁 = ∞, is simply the
Euler approximation scheme for the ordinary differential equation defining the IGO flow (8).
The latter result thus provides a sound mathematical basis for currently used rank-based
updates.
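As an illustration of the update (17), the following sketch (ours, in Python with NumPy; not the authors' reference implementation) performs one IGO step for the Bernoulli family on {0, 1}^d. For this family the Fisher matrix is diagonal with entries 1/(𝜃ᵢ(1 − 𝜃ᵢ)) and ∂ ln 𝑃𝜃(𝑥)/∂𝜃ᵢ = (𝑥ᵢ − 𝜃ᵢ)/(𝜃ᵢ(1 − 𝜃ᵢ)), so the natural-gradient contribution of a sample reduces to 𝑥 − 𝜃; with the truncation scheme this gives a PBIL-type update, in line with Section 5:

```python
import numpy as np

rng = np.random.default_rng(1)

def igo_step_bernoulli(theta, f, N=20, dt=0.1, q0=0.27):
    """One IGO update (17) for the Bernoulli family on {0,1}^d (minimization of f)."""
    d = len(theta)
    xs = (rng.random((N, d)) < theta).astype(float)   # N samples from P_theta
    fx = np.array([f(x) for x in xs])
    ranks = np.argsort(np.argsort(fx))                # rank 0 = smallest f (ties broken arbitrarily)
    w_hat = ((ranks + 0.5) / N <= q0) / N             # weights (14) with w(q) = 1_{q <= q0}
    # I^{-1}(theta) * d ln P_theta(x)/d theta simplifies to (x - theta) for this family
    return theta + dt * np.sum(w_hat[:, None] * (xs - theta), axis=0)

# Example: f(x) = -sum(x), i.e. push all bits towards 1.
theta = np.full(10, 0.5)
for _ in range(300):
    theta = igo_step_bernoulli(theta, lambda x: -x.sum())
print(np.round(theta, 2))
```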

2.2.4 IGO Flow versus IGO Algorithms


The IGO flow (8) is a well-defined continuous-time set of trajectories in the space of
probability distributions 𝑃𝜃 , depending only on the objective function 𝑓 and the chosen family
of distributions. It does not depend on the chosen parametrization for 𝜃 (Proposition 18).
On the other hand, there are several IGO algorithms associated with this flow. Each
IGO algorithm approximates the IGO flow in a slightly different way. An IGO algorithm
depends on three further choices: a sample size 𝑁 , a time discretization step size 𝛿𝑡, and a
choice of parametrization for 𝜃 in which to implement (17).
If 𝛿𝑡 is small enough, and 𝑁 large enough, the influence of the parametrization 𝜃
disappears and all IGO algorithms are approximations of the “ideal” IGO flow trajectory.
However, the larger 𝛿𝑡, the poorer the approximation gets.
So for large 𝛿𝑡, different IGO algorithms for the same IGO flow may exhibit different
behaviors. We will see an instance of this phenomenon for Gaussian distributions: both
CMA-ES and the maximum likelihood update (EMNA) can be seen as IGO algorithms, but
the latter with 𝛿𝑡 = 1 is known to exhibit premature loss of diversity (Section 5.2).
Still, if 𝛿𝑡 is sufficiently small, two IGO algorithms for the same IGO flow will differ less
from each other than from a non-IGO algorithm: at each step the difference is only 𝑂(𝛿𝑡 2 )
(Appendix C.1). On the other hand, for instance, the difference between an IGO algorithm
and the vanilla gradient ascent is, generally, not smaller than 𝑂(𝛿𝑡) at each step, i.e., it can
be roughly as big as the steps themselves.

2.2.5 Unknown Fisher Matrix


The algorithm presented so far assumes that the Fisher matrix 𝐼(𝜃) is known as a function of
𝜃. This is the case for Gaussian or Bernoulli distributions. However, for restricted Boltzmann
machines as considered below, no analytical form is known. Yet, provided the quantity
𝜕
𝜕𝜃 ln 𝑃𝜃 (𝑥) can be computed or approximated, it is possible to approximate the integral

𝜕 ln 𝑃𝜃 (𝑥) 𝜕 ln 𝑃𝜃 (𝑥)
∫︁
𝐼𝑖𝑗 (𝜃) = 𝑃𝜃 (d𝑥)
𝑥 𝜕𝜃𝑖 𝜕𝜃𝑗

using 𝑃𝜃 -Monte Carlo samples for 𝑥. These samples may or may not be the same as those
used in the IGO update (17): in particular, it is possible to use as many Monte Carlo


samples as necessary to approximate 𝐼𝑖𝑗 , at no additional cost in terms of the number of calls to the black-box function 𝑓 to optimize.
Note that each Monte Carlo sample 𝑥 will contribute $\frac{\partial \ln P_\theta(x)}{\partial \theta_i}\, \frac{\partial \ln P_\theta(x)}{\partial \theta_j}$ to the Fisher matrix approximation. This is a rank-1 non-negative matrix³. So, for the approximated Fisher matrix to be invertible, the number of (distinct) samples 𝑥 needs to be at least equal to, and ideally much larger than, the number of components of the parameter 𝜃: $N_{\mathrm{Fisher}} \ge \dim \Theta$.
For exponential families of distributions, the IGO update has a particular form (21)
which simplifies this matter somewhat (Section 3.3). In Appendix B this is exemplified using
restricted Boltzmann machines.
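A minimal sketch (ours, in Python with NumPy) of this Monte Carlo estimate; `sample_P` and `score` are user-supplied routines (our naming) for sampling from 𝑃𝜃 and computing ∂ ln 𝑃𝜃(𝑥)/∂𝜃, and the samples used here may be independent of those used for ranking 𝑓 :

```python
import numpy as np

def fisher_monte_carlo(sample_P, score, theta, n_fisher):
    """Estimate I(theta) as the empirical second moment of the score over P_theta.

    Each sample contributes a rank-1 non-negative matrix, so n_fisher should be
    well above dim(theta) for the estimate to be invertible.
    """
    xs = sample_P(theta, n_fisher)
    S = np.array([score(theta, x) for x in xs])   # shape (n_fisher, dim theta)
    return S.T @ S / n_fisher

# Check on a Bernoulli family, whose exact Fisher matrix is diag(1/(p(1-p))):
rng = np.random.default_rng(2)
p = np.array([0.3, 0.7])
sample_P = lambda theta, n: (rng.random((n, len(theta))) < theta).astype(float)
score = lambda theta, x: (x - theta) / (theta * (1.0 - theta))
print(fisher_monte_carlo(sample_P, score, p, 10_000))
print(np.diag(1.0 / (p * (1.0 - p))))
```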

3. First Properties of IGO


In this section we derive some basic properties of IGO and present the IGO flow for
exponential families.

3.1 Consistency of Sampling


The first property to check is that when 𝑁 → ∞, the update rule using 𝑁 samples converges
to the IGO update rule. This is not a straightforward application of the law of large numbers,
because the estimated weights 𝑤 ̂︀𝑖 depend (non-continuously) on the whole sample 𝑥1 , . . . , 𝑥𝑁 ,
and not only on 𝑥𝑖 .

Theorem 6 (Consistency) When 𝑁 → ∞, the 𝑁 -sample IGO update rule (17):

$$ \theta^{t+\delta t} = \theta^t + \delta t\, I^{-1}(\theta^t) \sum_{i=1}^N \widehat{w}_i\, \frac{\partial \ln P_\theta(x_i)}{\partial \theta} \Big|_{\theta = \theta^t} $$

converges with probability 1 to the update rule (7):

$$ \theta^{t+\delta t} = \theta^t + \delta t\, \widetilde{\nabla}_\theta \int W^f_{\theta^t}(x)\, P_\theta(\mathrm{d}x) . $$

The proof is given in Appendix D, under mild regularity assumptions. In particular we do not require that 𝑤 is continuous. Unfortunately, the proof does not provide an explicit
sample size above which the IGO algorithm would be guaranteed to stay close to the IGO
flow with high probability; presumably such a size would be larger than the typical sample
sizes used in practice.
This theorem may clarify previous claims (Wierstra et al., 2008; Akimoto et al., 2010)
where rank-based updates similar to (7), such as in NES or CMA-ES, were derived from
optimizing the expected value −E𝑃𝜃 𝑓 . The rank-based weights 𝑤 ̂︀𝑖 were then introduced after
the derivation as a useful heuristic to improve stability. Theorem 6 shows that, for large 𝑁 ,
CMA-ES and NES actually follow the gradient flow of the quantity E𝑃𝜃 𝑊𝜃𝑓𝑡 : the update can
be rigorously derived from optimizing the expected value of the inverse-quantile-rewriting
𝑊𝜃𝑓𝑡 .
3. The alternative, equivalent formula $I_{ij}(\theta) = -\int_x \frac{\partial^2 \ln P_\theta(x)}{\partial \theta_i\, \partial \theta_j}\, P_\theta(\mathrm{d}x)$ for the Fisher matrix will not necessarily yield non-negative matrices through Monte Carlo sampling.


3.2 Monotonicity: Quantile Improvement


Gradient descents come with a guarantee that the fitness value decreases over time. Here,
since we work with probability distributions on 𝑋, we need to define a “fitness” of the
distribution 𝑃𝜃𝑡 . An obvious choice is the expectation E𝑃𝜃𝑡 𝑓 , but it is not invariant under
𝑓 -transformation and moreover may be sensitive to extreme values.
It turns out that the monotonicity properties of the IGO gradient flow depend on the
choice of the selection scheme 𝑤. For instance, if $w(u) = \mathbb{1}_{u \le 1/2}$, then the median of 𝑓 under
𝑃𝜃𝑡 improves over time.

Proposition 7 (Quantile improvement) Consider the IGO flow (8), with the weight $w(u) = \mathbb{1}_{u \le q}$ where 0 < 𝑞 < 1 is fixed. Then the value of the 𝑞-quantile of 𝑓 improves over time: if $t_1 \le t_2$ then $Q^q_{P_{\theta^{t_2}}}(f) \le Q^q_{P_{\theta^{t_1}}}(f)$. Here the 𝑞-quantile value $Q^q_P(f)$ of 𝑓 under a probability distribution 𝑃 is defined as the largest number 𝑚 such that $\Pr_{x \sim P}(f(x) \ge m) \ge 1 - q$.
Assume moreover that the objective function 𝑓 has no plateau, i.e. for any 𝑣 ∈ R and any 𝜃 ∈ Θ we have $\Pr_{x \sim P_\theta}(f(x) = v) = 0$. Then for $t_1 < t_2$ either $\theta^{t_1} = \theta^{t_2}$ or $Q^q_{P_{\theta^{t_2}}}(f) < Q^q_{P_{\theta^{t_1}}}(f)$.

The proof is given in Appendix D, together with the necessary regularity assumptions.
Note that on a discrete search space, the objective function has only plateaus, and the
𝑞-quantile will evolve by successive jumps even as 𝜃 evolves continuously.
This property is proved here only for the IGO gradient flow (8) with 𝑁 = ∞ and
𝛿𝑡 → 0. For an IGO algorithm with finite 𝑁 , the dynamics is random and one cannot expect
monotonicity. Still, Theorem 6 ensures that, with high probability, trajectories of a large
enough finite population stay close to the infinite-population limit trajectory.
In Akimoto and Ollivier (2013) this result was extended to finite time steps instead of
infinitesimal 𝛿𝑡, using the IGO-ML framework from Section 4 below.

3.3 The IGO Flow for Exponential Families


The expressions for the IGO update simplify somewhat if the family 𝑃𝜃 happens to be
an exponential family of probability distributions (see also Malagò et al. 2008, 2011 for
optimization using the natural gradient for exponential families). This covers, for instance,
Gaussian or Bernoulli distributions.
Suppose that 𝑃𝜃 can be written as

$$ P_\theta(\mathrm{d}x) = \frac{1}{Z(\theta)} \exp\Big( \sum_i \theta_i T_i(x) \Big)\, H(\mathrm{d}x) $$

where 𝑇1 , . . . , 𝑇𝑘 is a finite family of functions on 𝑋, 𝐻(d𝑥) is an arbitrary reference measure on 𝑋, and 𝑍(𝜃) is the normalization constant. It is well-known (Amari and Nagaoka, 2000, (2.33)) that

$$ \frac{\partial \ln P_\theta(x)}{\partial \theta_i} = T_i(x) - \mathbb{E}_{P_\theta} T_i \qquad (19) $$

so that (Amari and Nagaoka, 2000, (3.59))

$$ I_{ij}(\theta) = \mathrm{Cov}_{P_\theta}(T_i, T_j) . \qquad (20) $$


By plugging this into the definition of the IGO flow (10) we find:

Proposition 8 Let 𝑃𝜃 be an exponential family parametrized by the natural parameters 𝜃 as above. Then the IGO flow is given by

$$ \frac{\mathrm{d}\theta}{\mathrm{d}t} = \mathrm{Cov}_{P_\theta}(T, T)^{-1}\, \mathrm{Cov}_{P_\theta}(T, W^f_\theta) \qquad (21) $$

where $\mathrm{Cov}_{P_\theta}(T, W^f_\theta)$ denotes the vector $(\mathrm{Cov}_{P_\theta}(T_i, W^f_\theta))_i$, and $\mathrm{Cov}_{P_\theta}(T, T)$ the matrix $(\mathrm{Cov}_{P_\theta}(T_i, T_j))_{ij}$.

Note that the right-hand side does not involve derivatives w.r.t. 𝜃 any more. This result
makes it easy to simulate the IGO flow using, e.g., a Gibbs sampler for 𝑃𝜃 : both covariances
in (21) may be approximated by sampling, so that neither the Fisher matrix nor the gradient
term need to be known in advance, and no derivatives are involved.
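A sketch (ours, in Python with NumPy) of this derivative-free estimate: given the sufficient statistics 𝑇(𝑥ⱼ) and the weights 𝑊^𝑓_𝜃(𝑥ⱼ) of a batch of samples 𝑥ⱼ ∼ 𝑃𝜃, both covariances in (21) are replaced by their empirical counterparts (the small ridge term is our own safeguard for numerical invertibility, not part of the paper):

```python
import numpy as np

def igo_direction_exponential(T_samples, W_values, ridge=1e-8):
    """Empirical estimate of d theta/dt = Cov(T,T)^{-1} Cov(T, W^f_theta)  (Eq. 21).

    T_samples: array of shape (N, k), rows T(x_j) for samples x_j ~ P_theta.
    W_values:  array of shape (N,), estimated W^f_theta(x_j) (e.g. N * w_hat_j).
    """
    T_c = T_samples - T_samples.mean(axis=0)
    W_c = W_values - W_values.mean()
    cov_TT = T_c.T @ T_c / len(W_values) + ridge * np.eye(T_samples.shape[1])
    cov_TW = T_c.T @ W_c / len(W_values)
    return np.linalg.solve(cov_TT, cov_TW)
```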
The values of the variables $\bar{T}_i = \mathbb{E}\, T_i$, namely the expected value of 𝑇𝑖 under the current distribution, can often be used as an alternative parametrization for an exponential family (e.g. for a one-dimensional Gaussian, these are the mean 𝜇 and the second moment 𝜇² + 𝜎²). The IGO flow (9) may be rewritten using these variables, using the relation $\widetilde{\nabla}_{\theta_i} = \frac{\partial}{\partial \bar{T}_i}$ for the natural gradient of exponential families (Proposition 29 in Appendix D). One finds:

Proposition 9 With the same setting as in Proposition 8, the expectation variables $\bar{T}_i = \mathbb{E}_{P_\theta} T_i$ satisfy the following evolution equation under the IGO flow

$$ \frac{\mathrm{d}\bar{T}_i}{\mathrm{d}t} = \mathrm{Cov}(T_i, W^f_\theta) = \mathbb{E}(T_i\, W^f_\theta) - \bar{T}_i\, \mathbb{E}\, W^f_\theta . \qquad (22) $$
The proof is given in Appendix D, in the proof of Theorem 12. We shall further exploit
this fact in Section 4.

3.3.1 Exponential Families with Latent Variables.


Similar formulas hold when the distribution 𝑃𝜃 (𝑥) is the marginal of an exponential distri-
bution 𝑃𝜃 (𝑥, ℎ) over a “hidden” or “latent” variable ℎ, such as the restricted Boltzmann
machines of Appendix B.
Namely, with $P_\theta(x) = \frac{1}{Z(\theta)} \sum_h \exp\big( \sum_i \theta_i T_i(x, h) \big)\, H(\mathrm{d}x, \mathrm{d}h)$ we have

$$ \frac{\partial \ln P_\theta(x)}{\partial \theta_i} = U_i(x) - \mathbb{E}_{P_\theta} U_i \qquad (23) $$

where

$$ U_i(x) = \mathbb{E}_{P_\theta}\big( T_i(x, h) \,\big|\, x \big) $$

is the expectation of 𝑇𝑖 (𝑥, ℎ) knowing 𝑥. Then the Fisher matrix is

$$ I_{ij}(\theta) = \mathrm{Cov}_{P_\theta}(U_i, U_j) \qquad (24) $$

and consequently, the IGO flow takes the form

$$ \frac{\mathrm{d}\theta}{\mathrm{d}t} = \mathrm{Cov}_{P_\theta}(U, U)^{-1}\, \mathrm{Cov}_{P_\theta}(U, W^f_\theta) . \qquad (25) $$


3.4 Further Mathematical Properties of IGO


IGO enjoys a number of other mathematical properties that are expanded upon in Ap-
pendix C.

∙ By its very construction, the IGO flow is invariant under a number of transformations of
the original problem. First, replacing the objective function 𝑓 with a strictly increasing
function of 𝑓 does not change the IGO flow (Proposition 17 in Appendix C.1).
Second, changing the parameterization 𝜃 used for the family 𝑃𝜃 (e.g., letting 𝜃 be a
variance or a standard deviation) results in unchanged trajectories for the distributions
𝑃𝜃 under the IGO flow (Proposition 18 in Appendix C.1).
Finally, the IGO flow is insensitive to transformations of the original problem by a
change of variable in the search space 𝑋 itself, provided this transformation can be
reflected in the family of distributions 𝑃𝜃 (Proposition 19 in Appendix C.1); this covers,
for instance, optimizing 𝑓 (𝐴𝑥) instead of 𝑓 (𝑥) in R𝑑 using Gaussian distributions,
where 𝐴 is any invertible matrix.
These latter two invariances are specifically due to the natural gradient and are not
satisfied by a vanilla gradient descent. 𝑓 -invariance and 𝑋-invariance are directly
inherited from the IGO flow by IGO algorithms, but this is only approximately true of
𝜃-invariance, as discussed in Appendix C.1.

∙ The speed of the IGO flow is bounded by the variance of the weight function 𝑤 on [0; 1]
(Appendix C.2, Proposition 20). This implies that the parameter 𝛿𝑡 in the IGO flow is
not a meaningless variable but is related to the maximal number of bits introduced in
𝑃𝜃 at each step (Appendix C.2, Corollary 21).

∙ IGO algorithms still make sense when the objective function 𝑓 is noisy, that is, when
each call to 𝑓 returns a non-deterministic value. This can be accounted for without
changing the framework: we prove in Proposition 22 (Appendix C.3) that IGO for
noisy 𝑓 is equivalent to IGO for a non-noisy 𝑓˜ defined on a larger space 𝑋 × Ω with
distributions 𝑃˜𝜃 that are uniform over Ω. Consequently, theorems such as consistency
of sampling immediately transfer to the noisy case.

∙ The IGO flow can be computed explicitly in the simple case of linear functions on R𝑑
using Gaussian distributions, and its convergence can be proven for linear functions
on {0, 1}𝑑 with Bernoulli distributions (Appendix C.4).

3.5 Implementation Remarks


We conclude this section by a few practical remarks when implementing IGO algorithms.

3.5.1 Influence of the Selection Scheme 𝑤


The selection scheme 𝑤 directly affects the update rule (18).
A natural choice is $w(u) = \mathbb{1}_{u \le q}$. This results, as shown in Proposition 7, in an
improvement of the 𝑞-quantile over the course of optimization. Taking 𝑞 = 1/2 springs
to mind (giving positive weights to the better half of the samples); however, this is often


not selective enough, and both theory and experiments confirm that for the Gaussian case,
efficient optimization requires 𝑞 < 1/2 (see Section 5.2). According to Beyer (2001), on the
sphere function $f(x) = \sum_i x_i^2$, the optimal 𝑞 is about 0.27 if sample size 𝑁 is not larger than the search space dimension 𝑑, and even smaller otherwise (Jebalia and Auger, 2010).
Second, replacing 𝑤 with 𝑤 + 𝑐 for some constant 𝑐 clearly has no influence on the IGO
continuous-time flow (7), since the gradient will cancel out the constant. However, this is
not the case for the update rule (18) with a finite sample of size 𝑁 .
Indeed, adding a constant 𝑐 to 𝑤 adds a quantity $c\, \frac{1}{N} \sum_i \widetilde{\nabla}_\theta \ln P_\theta(x_i)$ to the update. In expectation, this quantity vanishes because the 𝑃𝜃 -expected value of $\widetilde{\nabla}_\theta \ln P_\theta$ is 0 (because $\int (\widetilde{\nabla}_\theta \ln P_\theta)\, P_\theta = \widetilde{\nabla} \int P_\theta = \widetilde{\nabla} 1 = 0$). So adding a constant to 𝑤 does not change the expected value of the update, but it may change, e.g., its variance. The empirical average of $\widetilde{\nabla}_\theta \ln P_\theta(x_i)$ in the sample will be $O(1/\sqrt{N})$. So translating the weights results in a $O(1/\sqrt{N})$ change in the update. See also Section 4 in Sun et al. (2009).
Thus, one may be tempted to introduce a well chosen value of 𝑐 so as to reduce the
variance of the update. However, determining an optimal value for 𝑐 is difficult: the optimal
value minimizing the variance actually depends on possible correlations between ∇ ̃︀ 𝜃 ln 𝑃𝜃
and the function 𝑓 . The only general result is that one should shift 𝑤 such that 0 lies within
its range. Assuming independence, or dependence with enough symmetry, the optimal shift
is when the weights average to 0.

3.5.2 Complexity
The complexity of the IGO algorithm depends much on the computational cost model. In
optimization, it is fairly common to assume that the objective function 𝑓 is very costly
compared to any other calculations performed by the algorithm (Moré et al., 1981; Dolan
and Moré, 2002). Then the cost of IGO in terms of number of 𝑓 -calls is 𝑁 per iteration,
and the cost of using inverse quantiles and computing the natural gradient is negligible.
Setting the cost of 𝑓 aside, the complexity of the IGO algorithm depends mainly on
the computation of the (inverse) Fisher matrix. Assume an analytical expression for this
matrix is known. Then, with 𝑝 = dim Θ the number of parameters, the cost of storage of
the Fisher matrix is 𝑂(𝑝2 ) per iteration, and its inversion typically costs 𝑂(𝑝3 ) per iteration.
However, depending on the situation and on possible algebraic simplifications, strategies
exist to reduce this cost (e.g., Le Roux et al. 2007 in a learning context). For instance,
for CMA-ES the cost is 𝑂(𝑁 𝑝) (Suttorp et al., 2009). More generally, parametrization by
expectation parameters (see above), when available, may reduce the cost to 𝑂(𝑁 𝑝) as well.
If no analytical form of the Fisher matrix is known and Monte Carlo estimation is required,
then complexity depends on the particular situation at hand and is related to the best
sampling strategies available for a particular family of distributions. For Boltzmann machines,
for instance, a host of such strategies are available (Ackley et al., 1985; Salakhutdinov and
Murray, 2008; Salakhutdinov, 2009; Desjardins et al., 2010). Still, in such a situation, IGO
may be competitive if the objective function 𝑓 is costly.

3.5.3 Recycling Old Samples


To compute the ranks of samples in (14), it might be advisable to re-use samples from
previous iterations, so that a smaller number of samples is necessary, see e.g. Sun et al. (2009).


For 𝑁 = 1, this is indispensable. In order to preserve sampling consistency (Theorem 6) the old samples need to be reweighted using the ratio of their likelihood under the current versus old distribution, as in importance sampling.
In evolutionary computation, elitist selection (also called plus-selection) is a common
approach where the all-time best samples are taken into account in each iteration. Elitist
selection can be modelled in the IGO framework by using the current all-time best samples in
addition to samples from 𝑃𝜃 . Specifically, in the (𝜇 + 𝜆)-selection scheme, we set 𝑁 = 𝜇 + 𝜆
and let 𝑥1 , . . . , 𝑥𝜇 be the current all-time 𝜇 best points. Then we sample 𝜆 new points,
𝑥𝜇+1 , . . . , 𝑥𝑁 , from the current distribution 𝑃𝜃 and apply (16) with $w(q) = (N/\mu)\, \mathbb{1}_{q \le \mu/N}$.
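A small sketch (ours, in Python with NumPy) of the corresponding weights: with 𝑁 = 𝜇 + 𝜆 points (the current all-time best 𝜇 together with the 𝜆 fresh samples), the scheme $w(q) = (N/\mu)\, \mathbb{1}_{q \le \mu/N}$ plugged into (14) gives weight 1/𝜇 to each of the 𝜇 best points and 0 to the rest:

```python
import numpy as np

def plus_selection_weights(f_values, mu):
    """Weights (14) for the (mu + lambda) scheme w(q) = (N/mu) * 1_{q <= mu/N}."""
    f_values = np.asarray(f_values)
    N = len(f_values)
    ranks = np.argsort(np.argsort(f_values))       # rank 0 = best (smallest f)
    return ((ranks + 0.5) / N <= mu / N) / mu      # 1/mu on the mu best points, 0 elsewhere

f = [0.2, 0.5, 1.3, 0.1, 0.9, 2.0]                 # N = mu + lambda = 6
print(plus_selection_weights(f, mu=2))             # -> [0.5, 0, 0, 0.5, 0, 0]
```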

3.5.4 Initialization
As with other distribution-based optimization algorithms, it is usually a good idea to initialize
in such a way as to cover a wide portion of the search space, i.e. 𝜃0 should be chosen so that
𝑃𝜃0 has large diversity. For IGO algorithms this is particularly effective, since, as explained
above, the natural gradient provides minimal change of diversity (greedily at each step) for
a given change in the objective function.

4. IGO, Maximum Likelihood, and the Cross-Entropy Method


In this section we generalize the IGO update for settings where the natural gradient may
not exist. This generalization reveals a unique IGO algorithm for finite step-sizes 𝛿𝑡 and a
natural link to the cross-entropy method.

4.1 IGO as a Smooth-time Maximum Likelihood Estimate


The IGO flow turns out to be the only way to maximize a weighted log-likelihood, where
points of the current distribution are slightly reweighted according to 𝑓 -preferences.
This relies on the following interpretation of the natural gradient as a weighted maximum
likelihood update with infinitesimal learning rate. This result singles out, in yet another
way, the natural gradient among all possible gradients. The proof is given in Appendix D.

Theorem 10 (Natural gradient as ML with infinitesimal weights) Let 𝜀 > 0 and


𝜃0 ∈ Θ. Let 𝑊 (𝑥) be a function of 𝑥 and let 𝜃 be the solution of
{︃ preference weight biasing 𝑃𝜃0 }︃
∫︁ ∫︁ ⏞ ⏟
𝜃 = arg max (1 − 𝜀) ln 𝑃𝜃 (𝑥) 𝑃𝜃0 (d𝑥) + 𝜀 ln 𝑃𝜃 (𝑥) 𝑊 (𝑥) 𝑃𝜃0 (d𝑥) . (26)
𝜃 ⏟ ⏞
= const − KL(𝑃𝜃0 ‖𝑃𝜃 ), maximal for 𝜃 = 𝜃0

Then, when 𝜀 → 0 we have


$$\theta = \theta_0 + \varepsilon \int \widetilde\nabla_{\theta} \ln P_\theta(x)\; W(x)\, P_{\theta_0}(\mathrm{d}x) + O(\varepsilon^2). \tag{27}$$

Likewise for discrete samples: with 𝑥1 , . . . , 𝑥𝑁 ∈ 𝑋, let 𝜃 be the solution of


$$\theta = \arg\max_{\theta}\Bigg\{ (1-\varepsilon)\int \ln P_\theta(x)\, P_{\theta_0}(\mathrm{d}x) \;+\; \varepsilon \sum_i W(x_i) \ln P_\theta(x_i) \Bigg\}. \tag{28}$$


Then when 𝜀 → 0 we have


$$\theta = \theta_0 + \varepsilon \sum_i W(x_i)\, \widetilde\nabla_{\theta} \ln P_\theta(x_i) + O(\varepsilon^2). \tag{29}$$

So if 𝑊 (𝑥) = 𝑊𝜃𝑓0 (𝑥) is the weight of the points according to quantilized 𝑓 -preferences,
the weighted maximum log-likelihood necessarily is the IGO flow (9) using the natural
gradient—or the IGO update (17) when using samples.
Thus the IGO flow is the unique flow that, continuously in time, slightly changes the
distribution to maximize the log-likelihood of points with good values of 𝑓 . (In addition,
IGO continuously updates the weight 𝑊𝜃𝑓𝑡 (𝑥) depending on 𝑓 and on the current distribution,
so that we keep optimizing.)
This theorem suggests a way to approximate the IGO flow by enforcing this interpretation
for a given non-infinitesimal step size 𝛿𝑡, as follows.

Definition 11 (IGO-ML algorithm) The IGO-ML algorithm with step size $\delta t$ updates the value of the parameter $\theta^t$ according to
$$\theta^{t+\delta t} = \arg\max_{\theta}\Bigg\{ \Big(1 - \delta t \sum_i \widehat{w}_i\Big) \int \ln P_\theta(x)\, P_{\theta^t}(\mathrm{d}x) \;+\; \delta t \sum_i \widehat{w}_i \ln P_\theta(x_i) \Bigg\} \tag{30}$$
where $x_1, \ldots, x_N$ are sample points drawn according to the distribution $P_{\theta^t}$, and $\widehat{w}_i$ is the weight (14) obtained from the ranked values of the objective function $f$.

The IGO-ML algorithm is obviously independent of the parametrization $\theta$: indeed it only depends on $P_\theta$ itself. Furthermore, the IGO-ML update (30) does not even require a smooth parametrization of the distribution anymore (though in this case, a small $\delta t$ will likely result in stalling: $\theta^{t+\delta t} = \theta^t$ if the set of possible values for $\theta$ is discrete).
Like the cross-entropy method below, the IGO-ML algorithm can be applied only when
the argmax can be computed.
It turns out that for exponential families, IGO-ML is just the IGO algorithm in a
particular parametrization (see Theorem 12).

4.2 The Cross-Entropy Method


Taking 𝛿𝑡 = 1 in (30) above corresponds to a full maximum likelihood update; when using the
truncation selection scheme 𝑤, this is the cross-entropy method (CEM). The cross-entropy
method can be defined in an optimization setting as follows (de Boer et al., 2005). Like
IGO, it depends on a family of probability distributions 𝑃𝜃 parametrized by 𝜃 ∈ Θ, and a
number of samples 𝑁 at each iteration. Let also 𝑁𝑒 = ⌈𝑞𝑁 ⌉ (0 < 𝑞 < 1) be a number of
elite samples.
At each step, the cross-entropy method for optimization samples 𝑁 points 𝑥1 , . . . , 𝑥𝑁
from the current distribution 𝑃𝜃𝑡 . Let 𝑤 ̂︀𝑖 be 1/𝑁𝑒 if 𝑥𝑖 belongs to the 𝑁𝑒 samples with
the best value of the objective function 𝑓 , and 𝑤 ̂︀𝑖 = 0 otherwise. Then the cross-entropy
method or maximum likelihood update (CEM/ML) for optimization is (de Boer et al., 2005, Algorithm 3.1)
$$\theta^{t+1} = \arg\max_{\theta} \sum_i \widehat{w}_i \ln P_\theta(x_i) \tag{31}$$


(assuming the argmax is tractable). This corresponds to 𝛿𝑡 = 1 in (30).


A commonly used version of CEM with a smoother update depends on a step size parameter $0 < \alpha \leq 1$ and is given (de Boer et al., 2005) by
$$\theta^{t+1} = (1-\alpha)\,\theta^t + \alpha \arg\max_{\theta} \sum_i \widehat{w}_i \ln P_\theta(x_i). \tag{32}$$

The standard CEM/ML update is $\alpha = 1$. For $\alpha = 1$, the standard cross-entropy method is independent of the parametrization $\theta$, whereas for $\alpha < 1$ this is not the case.
Note the difference between the IGO-ML algorithm (30) and the smoothed CEM update
(32) with step size 𝛼 = 𝛿𝑡: the smoothed CEM update performs a weighted average of
the parameter value after taking the maximum likelihood estimate, whereas IGO-ML uses
a weighted average of current and previous likelihoods, then takes a maximum likelihood
estimate. In general, these two rules can greatly differ, as they do for Gaussian distributions
(Section 5.2).
This swapping of averaging makes IGO-ML parametrization-independent whereas the
smoothed CEM update is not.
Yet, for exponential families of probability distributions, there exists one particular
parametrization 𝜃 in which the IGO algorithm and the smoothed CEM update coincide. We
now proceed to this construction.

4.3 IGO for Expectation Parameters and Maximum Likelihood


The particular form of IGO for exponential families has an interesting consequence if the
parametrization chosen for the exponential family is the set of expectation parameters. Let
$P_\theta(x) = \frac{1}{Z(\theta)} \exp\big(\sum_j \theta_j T_j(x)\big)\, H(\mathrm{d}x)$ be an exponential family as above. The expectation parameters are $\bar T_j = \bar T_j(\theta) = \mathbb{E}_{P_\theta} T_j$ (denoted $\eta_j$ in Amari and Nagaoka 2000, Eq. 3.56). The notation $\bar T$ will denote the collection $(\bar T_j)$. We shall use the notation $P_{\bar T}$ to denote the probability distribution $P$ parametrized by the expectation parameters.
It is well-known that, in this parametrization, the maximum likelihood estimate for a
sample of points 𝑥1 , . . . , 𝑥𝑁 is just the empirical average of the expectation parameters over
that sample:
$$\arg\max_{\bar T} \frac{1}{N}\sum_{i=1}^{N} \ln P_{\bar T}(x_i) = \frac{1}{N}\sum_{i=1}^{N} T(x_i). \tag{33}$$
In the discussion above, one main difference between IGO and smoothed CEM was
whether we took averages before or after taking the maximum log-likelihood estimate. For
the expectation parameters 𝑇¯𝑖 , we see that these operations commute. (One can say that
these expectation parameters “linearize maximum likelihood estimates”.) After some work
we get the following result.

Theorem 12 (IGO, CEM and maximum likelihood) Let
$$P_\theta(\mathrm{d}x) = \frac{1}{Z(\theta)}\exp\Big(\sum_j \theta_j T_j(x)\Big)\, H(\mathrm{d}x)$$
be an exponential family of probability distributions, where the 𝑇𝑗 are functions of 𝑥 and 𝐻
is some reference measure. Let us parametrize this family by the expected values 𝑇¯𝑗 = E𝑇𝑗 .


Let us assume the chosen weights $\widehat{w}_i$ sum to 1. For a sample $x_1, \ldots, x_N$, let
$$T_j^* = \sum_i \widehat{w}_i\, T_j(x_i).$$

Then the IGO update (17) in this parametrization reads

$$\bar T_j^{\,t+\delta t} = (1-\delta t)\, \bar T_j^{\,t} + \delta t\, T_j^*. \tag{34}$$

Moreover these three algorithms coincide:


∙ The IGO-ML algorithm (30).
∙ The IGO algorithm (17) written in the parametrization 𝑇¯𝑗 (34).
∙ The smoothed CEM algorithm (32) written in the parametrization 𝑇¯𝑗 , with 𝛼 = 𝛿𝑡.

Corollary 13 For exponential families, the standard CEM/ML update (31) coincides with
the IGO algorithm in parametrization 𝑇¯𝑗 with 𝛿𝑡 = 1.

Beware that the expectation parameters $\bar T_j$ are not always the most obvious parameters (Amari and Nagaoka, 2000, Section 3.5). For example, for 1-dimensional Gaussian distributions, the expectation parameters are the mean $\mu$ and the second moment $\mu^2 + \sigma^2$, not the mean and variance. When expressed back in terms of mean and variance, the update (34) boils down to $\mu \leftarrow (1-\delta t)\mu + \delta t\,\mu^*$ and $\sigma^2 \leftarrow (1-\delta t)\sigma^2 + \delta t\,(\sigma^*)^2 + \delta t(1-\delta t)(\mu^* - \mu)^2$, where $\mu^*$ and $\sigma^*$ denote the mean and standard deviation of the samples $x_i$.
On the other hand, when using smoothed CEM with mean and variance as parameters, the new variance is $(1-\delta t)\sigma^2 + \delta t\,(\sigma^*)^2$, which can be significantly smaller for $\delta t \in (0,1)$. This proves, in passing, that the smoothed CEM update in other parametrizations is generally not an IGO algorithm (because it can differ at first order in $\delta t$).
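To make the comparison concrete, here is a small sketch (ours, for 1-dimensional Gaussians, assuming uniform weights on the elite sample) that applies both rules to the same sample; only the IGO-ML rule produces the extra $\delta t(1-\delta t)(\mu^*-\mu)^2$ term in the variance.

```python
import numpy as np

def igo_ml_1d(mu, var, elite, dt):
    """IGO-ML for a 1-D Gaussian: average in the expectation parameters (E x, E x^2)."""
    mu_s, var_s = elite.mean(), elite.var()
    m1 = (1 - dt) * mu + dt * mu_s                              # first moment
    m2 = (1 - dt) * (var + mu**2) + dt * (var_s + mu_s**2)      # second moment
    return m1, m2 - m1**2    # equals (1-dt)*var + dt*var_s + dt*(1-dt)*(mu_s-mu)**2

def smoothed_cem_1d(mu, var, elite, dt):
    """Smoothed CEM in the (mean, variance) parametrization: no cross term."""
    return (1 - dt) * mu + dt * elite.mean(), (1 - dt) * var + dt * elite.var()
```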
The case of Gaussian distributions is further exemplified in Section 5.2 below: in partic-
ular, smoothed CEM in the (𝜇, 𝜎) parametrization almost invariably exhibits a reduction of
variance, often leading to premature convergence.
For these reasons we think that the IGO-ML algorithm is the sensible way to define an
interpolated ML estimate for 𝛿𝑡 < 1 in a parametrization-independent way (see however the
analysis of a critical 𝛿𝑡 in Section 5.2). In Appendix A we further discuss IGO and CEM
and sum up the differences and relative advantages.
Taking 𝛿𝑡 = 1 is a bold approximation choice: the “ideal” continuous-time IGO flow
itself, after time 1, does not coincide with the maximum likelihood update of the best points
in the sample. Since the maximum likelihood algorithm is known to converge prematurely
in some instances (Section 5.2), using the parametrization by expectation parameters with
large 𝛿𝑡 may not be desirable.
The considerable simplification of the IGO update in these coordinates reflects the duality of coordinates $\bar T_i$ and $\theta_i$. More precisely, the natural gradient ascent w.r.t. the parameters $\bar T_i$ is given by the vanilla gradient w.r.t. the parameters $\theta_i$:
$$\widetilde\nabla_{\bar T_i} = \frac{\partial}{\partial \theta_i}$$
(Proposition 29 in Appendix D).


5. CMA-ES, NES, EDAs and PBIL from the IGO Framework


In this section we investigate the IGO algorithms for Bernoulli measures and for multivariate
normal distributions, and show the correspondence to well-known algorithms. Restricted
Boltzmann machines are given as a third, novel example. In addition, we discuss the influence
of the parametrization of the distributions.

5.1 PBIL and cGA as IGO Algorithms for Bernoulli Measures


Let us consider on 𝑋 = {0, 1}𝑑 a family of Bernoulli measures 𝑃𝜃 (𝑥) = 𝑝𝜃1 (𝑥1 ) × . . . × 𝑝𝜃𝑑 (𝑥𝑑 )
with 𝑝𝜃𝑖 (𝑥𝑖 ) = 𝜃𝑖𝑥𝑖 (1 − 𝜃𝑖 )1−𝑥𝑖 , with each 𝜃𝑖 ∈ [0; 1]. As this family is a product of probability
measures 𝑝𝜃𝑖 (𝑥𝑖 ), the different components of a random vector 𝑦 following 𝑃𝜃 are independent
and all off-diagonal terms of the Fisher information matrix are zero. Diagonal terms are given by $1/(\theta_i(1-\theta_i))$. Therefore the inverse of the Fisher matrix is a diagonal matrix with diagonal entries equal to $\theta_i(1-\theta_i)$. In addition, the partial derivative of $\ln P_\theta(x)$ w.r.t. $\theta_i$ is computed in a straightforward manner, resulting in
$$\frac{\partial \ln P_\theta(x)}{\partial \theta_i} = \frac{x_i}{\theta_i} - \frac{1-x_i}{1-\theta_i}.$$
Let 𝑥1 , . . . , 𝑥𝑁 be 𝑁 samples at step 𝑡 with distribution 𝑃𝜃𝑡 and let 𝑥1:𝑁 , . . . , 𝑥𝑁 :𝑁 be
the samples ranked according to 𝑓 value. The natural gradient update (18) with Bernoulli
measures is then
$$\theta_i^{t+\delta t} = \theta_i^t + \delta t\, \theta_i^t(1-\theta_i^t) \sum_{j=1}^{N} w_j \left( \frac{[x_{j:N}]_i}{\theta_i^t} - \frac{1-[x_{j:N}]_i}{1-\theta_i^t} \right) \tag{35}$$

where 𝑤𝑗 = 𝑤((𝑗 − 1/2)/𝑁 )/𝑁 and [𝑦]𝑖 denotes the 𝑖th coordinate of 𝑦 ∈ 𝑋. The previous
equation simplifies to
$$\theta_i^{t+\delta t} = \theta_i^t + \delta t \sum_{j=1}^{N} w_j \left( [x_{j:N}]_i - \theta_i^t \right), \tag{36}$$

or, denoting $\bar w = \sum_{j=1}^{N} w_j$ the sum of the weights,
$$\theta_i^{t+\delta t} = (1 - \bar w\,\delta t)\, \theta_i^t + \delta t \sum_{j=1}^{N} w_j\, [x_{j:N}]_i. \tag{37}$$

The algorithm so obtained coincides with the so-called population-based incremental learning algorithm (PBIL, Baluja and Caruana 1995) for 𝑁 = NUMBER_SAMPLES and the appropriate (usually non-negative) weights 𝑤𝑗, as well as with the compact genetic algorithm (cGA, Harik et al. 1999) for 𝑁 = 2 and 𝑤1 = −𝑤2. Different variants of PBIL correspond to different choices of the selection scheme 𝑤. In cGA, components for which both samples have the same value are unchanged, because 𝑤1 + 𝑤2 = 0. We have thus proved the following.

Proposition 14 The IGO algorithm on {0, 1}𝑑 using Bernoulli measures parametrized by 𝜃
as above, coincides with the compact Genetic Algorithm (cGA) when 𝑁 = 2 and 𝑤1 = −𝑤2 .


Moreover, it coincides with Population-Based Incremental Learning (PBIL) with the following correspondence of parameters. The PBIL algorithm using the 𝜇 best solutions, see Baluja and Caruana (1995, Figure 4), is recovered⁴ using 𝛿𝑡 = lr, $w_j = (1-\mathrm{lr})^{j-1}$ for 𝑗 = 1, . . . , 𝜇, and 𝑤𝑗 = 0 for 𝑗 = 𝜇 + 1, . . . , 𝑁.
If the selection scheme of IGO is chosen as 𝑤1 = 1, 𝑤𝑗 = 0 for 𝑗 = 2, . . . , 𝑁 , IGO
recovers the PBIL/EGA algorithm with update rule towards the best solution (Baluja, 1994,
Figure 4), with 𝛿𝑡 = lr (the learning rate of PBIL) and mut_probability = 0 (no random
mutation of 𝜃).

Interestingly, the parameters 𝜃𝑖 are the expectation parameters described in Section 4: indeed, the expectation of 𝑥𝑖 is 𝜃𝑖. So the formulas above are particular cases of (34). Thus, by Theorem 12, PBIL is both a smoothed CEM in these parameters and an IGO-ML algorithm.
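As a concrete illustration, a minimal sketch of the Bernoulli IGO step (36) in Python (ours; the selection scheme 𝑤 and the objective 𝑓 are supplied by the user, and minimization is assumed) could read:

```python
import numpy as np

def bernoulli_igo_step(theta, f, w, N, dt, rng):
    """One IGO step (36) for Bernoulli measures on {0,1}^d (minimization of f).

    theta : current Bernoulli parameters theta_i, shape (d,)
    w     : selection scheme on (0,1), e.g. lambda q: 5.0 * (q <= 0.2)
    """
    x = (rng.random((N, len(theta))) < theta).astype(float)   # N samples from P_theta
    ranked = x[np.argsort([f(xi) for xi in x])]                # best samples first
    w_j = np.array([w((j + 0.5) / N) / N for j in range(N)])   # w_j = w((j-1/2)/N)/N
    return theta + dt * (w_j[:, None] * (ranked - theta)).sum(axis=0)

# e.g. theta = bernoulli_igo_step(theta, f, lambda q: 5.0 * (q <= 0.2),
#                                 N=50, dt=0.1, rng=np.random.default_rng())
```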
Let us now consider another, so-called “logit” representation, given by the logistic function $P(x_i = 1) = \frac{1}{1+\exp(-\tilde\theta_i)}$. This $\tilde\theta$ is the exponential parametrization of Section 3.3. We find that
$$\frac{\partial \ln P_{\tilde\theta}(x)}{\partial \tilde\theta_i} = (x_i - 1) + \frac{\exp(-\tilde\theta_i)}{1+\exp(-\tilde\theta_i)} = x_i - \mathbb{E}x_i \tag{38}$$
(cf. Eq. 19) and that the diagonal elements of the Fisher information matrix are given by $\exp(-\tilde\theta_i)/(1+\exp(-\tilde\theta_i))^2 = \operatorname{Var} x_i$ (as per Eq. 20). So the natural gradient update (18) with Bernoulli measures in parametrization $\tilde\theta$ reads
$$\tilde\theta_i^{t+\delta t} = \tilde\theta_i^t + \delta t\,\big(1+\exp(\tilde\theta_i^t)\big) \left( -\bar w + \big(1+\exp(-\tilde\theta_i^t)\big) \sum_{j=1}^{N} w_j\, [x_{j:N}]_i \right). \tag{39}$$

To better compare the update with the previous representation, note that $\theta_i = \frac{1}{1+\exp(-\tilde\theta_i)}$ and thus we can rewrite
$$\tilde\theta_i^{t+\delta t} = \tilde\theta_i^t + \frac{\delta t}{\theta_i^t(1-\theta_i^t)} \sum_{j=1}^{N} w_j \left( [x_{j:N}]_i - \theta_i^t \right). \tag{40}$$

So the direction of the update is the same as before and is given by the proportion of
bits set to 0 or 1 in the best samples, compared to its expected value under the current
distribution. The magnitude of the update is different since the parameter 𝜃˜ ranges from
−∞ to +∞ instead of from 0 to 1. We did not find this algorithm in the literature.
These updates also illustrate the influence of setting the sum of weights to 0 or not
(Section 3.5). If, at some time, the first bit is equal to 1 both for a majority of good points
and for a majority of bad points, then the original PBIL will increase the probability of
setting the first bit to 1, which is counterintuitive. If the weights 𝑤𝑖 are chosen to sum to 0
this noise effect disappears; otherwise, it disappears only on average.
4. Note that the pseudocode for the algorithm in Baluja and Caruana (1995, Figure 4) is slightly erroneous
since it gives smaller weights to better individuals. The error can be fixed by updating the probability in
reversed order, looping from NUMBER_OF_VECTORS_TO_UPDATE_FROM to 1. This was confirmed by S. Baluja
in personal communication. We consider here the corrected version of the algorithm.


5.2 Multivariate Normal Distributions (Gaussians)


Evolution strategies (Rechenberg, 1973; Schwefel, 1995; Beyer and Schwefel, 2002) are black-
box optimization algorithms for the continuous search domain, 𝑋 ⊆ R𝑑 (for simplicity we
assume 𝑋 = R𝑑 in the following), which use multivariate normal distributions to sample new
solutions. In the context of continuous black-box optimization, Natural Evolution Strategies
(NES) introduced the idea of using a natural gradient update of the distribution parameters
(Wierstra et al., 2008; Sun et al., 2009; Glasmachers et al., 2010; Wierstra et al., 2014).
Surprisingly, the well-known Covariance Matrix Adaption Evolution Strategy (CMA-ES,
Hansen and Ostermeier 1996, 2001; Hansen et al. 2003; Hansen and Kern 2004; Jastrebski and
Arnold 2006) also turns out to conduct a natural gradient update of distribution parameters
(Akimoto et al., 2010; Glasmachers et al., 2010).
Let 𝑥 ∈ R𝑑 . As the most prominent example, we use mean vector 𝑚 = E𝑥 and covariance
matrix 𝐶 = E(𝑥 − 𝑚)(𝑥 − 𝑚)T = E(𝑥𝑥T ) − 𝑚𝑚T to parametrize a normal distribution via
𝜃 = (𝑚, 𝐶). The IGO update in (17) or (18) in this parametrization can now be entirely
formulated without the (inverse) Fisher matrix, similarly to (34) or (22). The complexity
of the update is linear in the number of parameters (size of 𝜃 = (𝑚, 𝐶), where (𝑑2 − 𝑑)/2
parameters are redundant).
Let us discuss known algorithms that implement updates of this kind.

5.2.1 CMA-ES.
The rank-𝜇-update CMA-ES implements the equations⁵
$$m^{t+1} = m^t + \eta_{\mathrm{m}} \sum_{i=1}^{N} \widehat{w}_i\, (x_i - m^t) \tag{41}$$
$$C^{t+1} = C^t + \eta_{\mathrm{c}} \sum_{i=1}^{N} \widehat{w}_i\, \big( (x_i - m^t)(x_i - m^t)^{\mathrm{T}} - C^t \big) \tag{42}$$
where $\widehat{w}_i$ are the weights based on ranked $f$-values, see (14) and (17).
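In code, the rank-𝜇 update is a direct transcription of (41)–(42); the following sketch (ours, not the reference CMA-ES implementation; it omits step-size control and the rank-one update) uses a common learning rate for mean and covariance as in the IGO algorithm.

```python
import numpy as np

def rank_mu_update(m, C, X, w_hat, eta_m, eta_c):
    """Rank-mu CMA-ES / Gaussian IGO update, equations (41)-(42).

    m, C  : current mean (d,) and covariance matrix (d, d)
    X     : sampled points, shape (N, d); w_hat : rank-based weights, shape (N,)
    """
    Y = X - m                                                   # rows are x_i - m^t
    m_new = m + eta_m * (w_hat[:, None] * Y).sum(axis=0)
    C_new = C + eta_c * sum(w * (np.outer(y, y) - C) for w, y in zip(w_hat, Y))
    return m_new, C_new
```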

Proposition 15 The IGO update (17) for Gaussian distributions in the parametrization by
mean and covariance matrix (𝑚, 𝐶), coincides with the CMA-ES update equations (41) and
(42) with 𝜂c = 𝜂m .

This result is essentially due to Akimoto et al. (2010) and Glasmachers et al. (2010), who
showed that the CMA-ES update with 𝜂c = 𝜂m is a natural gradient update6 .
However, in deviation from the IGO algorithm, the learning rates 𝜂m and 𝜂c are assigned
different values if 𝑁 ≪ dim Θ in CMA-ES7 . Note that the Fisher information matrix is
block-diagonal in 𝑚 and 𝐶 (Akimoto et al., 2010), so that application of the different
5. The CMA-ES implements these equations given the parameter setting 𝑐1 = 0 and 𝑐𝜎 = 0 (or 𝑑𝜎 = ∞, see
e.g. Hansen 2009) that disengages the effect of the rank-one update and of step size control and therefore
of both so-called evolution paths.
6. In these articles the result has been derived for $\theta \leftarrow \theta + \eta\, \widetilde\nabla_{\theta}\, \mathbb{E}_{P_\theta} f$, see (11), leading to $f(x_i)$ in place of $\widehat{w}_i$. No assumptions on $f$ have been used besides that it does not depend on $\theta$. Consequently, by replacing $f$ with $W^f_{\theta^t}$, where $\theta^t$ is fixed, the derivation holds equally well for $\theta \leftarrow \theta + \eta\, \widetilde\nabla_{\theta}\, \mathbb{E}_{P_\theta} W^f_{\theta^t}$.
7. Specifically, let $\sum |\widehat{w}_i| = 1$; then the settings are $\eta_{\mathrm{m}} = 1$ and $\eta_{\mathrm{c}} \approx 1/(d^2 \sum \widehat{w}_i^2)$ (Hansen, 2006b).


learning rates and of the inverse Fisher matrix commute. Moreover, CMA-ES uses a path
cumulation method to adjust the step sizes, which is not covered by the IGO framework
(see Appendix A).

5.2.2 Convenient Reparametrizations Over Time.


For practical purposes, at each step it is convenient to work in a representation of 𝜃 in which
the diagonal Fisher matrix 𝐼(𝜃𝑡 ) has a simple form, e.g., diagonal with simple diagonal
entries. It is generally not possible to obtain such a representation for all 𝜃 simultaneously.
Still it is always possible to find a transformation achieving a diagonal Fisher matrix at a
single parameter 𝜃𝑡 , in multiple ways (it amounts to choosing a basis of parameter space
which is orthogonal in the Fisher metric). Such a representation is never unique and not
intrinsic, yet it still provides a convenient way to write the algorithms.
For CMA-ES, one such representation can be found by sending the current covariance matrix $C^t$ to the identity, e.g., by representing the mean and covariance matrix by
((𝐶 𝑡 )−1/2 𝑚, (𝐶 𝑡 )−1/2 𝐶(𝐶 𝑡 )−1/2 ) instead of (𝑚, 𝐶). Then the Fisher matrix 𝐼(𝜃𝑡 ) at (𝑚𝑡 , 𝐶 𝑡 )
becomes diagonal. The next algorithm we discuss, xNES (Glasmachers et al., 2010), exploits
this possibility in a logarithmic representation of the covariance matrix.

5.2.3 Natural Evolution Strategies.


Natural evolution strategies (NES, Wierstra et al. 2008; Sun et al. 2009) implement (41)
as well, while using a Cholesky decomposition of 𝐶 as the parametrization for the update
of the variance parameters. The resulting update that replaces (42) is neither particularly
elegant nor numerically efficient. The more recent xNES (Glasmachers et al., 2010) chooses
an “exponential” parametrization that naturally depends on the current parameters. This
leads to an elegant formulation where the additive update in exponential parametrization
becomes a multiplicative update for $C$. With $C = AA^{\mathrm{T}}$, the matrix update reads
$$A \leftarrow A \times \exp\left( \frac{\eta_{\mathrm{c}}}{2} \sum_{i=1}^{N} \widehat{w}_i \times (z_i z_i^{\mathrm{T}} - \mathrm{I}_d) \right) \tag{43}$$
where $z_i = A^{-1}(x_i - m)$ and $\mathrm{I}_d$ is the identity matrix. From (43) the updated covariance matrix is $C \leftarrow A \times \exp\left( \eta_{\mathrm{c}} \sum_{i=1}^{N} \widehat{w}_i \times (z_i z_i^{\mathrm{T}} - \mathrm{I}_d) \right) \times A^{\mathrm{T}}$.
Compared to (42), the update has the advantage that also negative weights, 𝑤 ̂︀𝑖 < 0,
always lead to a feasible covariance matrix. By default, xNES sets 𝜂m ̸= 𝜂c in the same
circumstances as in CMA-ES, but contrary to CMA-ES the past evolution path is not taken
into account (Glasmachers et al., 2010).
When 𝜂c = 𝜂m , xNES is consistent with the IGO flow (8), and implements an IGO
algorithm (17) slightly generalized in that it uses a 𝜃𝑡 -dependent parametrization, which
represents the current covariance matrix by 0. Namely, we have:

Proposition 16 (exponential IGO update of Gaussians) Let (𝑚𝑡 , 𝐶 𝑡 ) be the current


mean and covariance matrix. Let 𝐶 𝑡 = 𝐴𝐴T . Let 𝜃 be the time-dependent parametrization of
the space of Gaussian distributions, which parametrizes the Gaussian distribution (𝑚, 𝐶) by

𝜃 = (𝑚, 𝑅), 𝑅 = ln(𝐴−1 𝐶(𝐴T )−1 )


where ln is the logarithm of positive matrices. (Note that the current value 𝐶 𝑡 of 𝐶 is
represented as 𝑅 = 0.)
Then the IGO update (17) in the parametrization 𝜃 is as follows: the mean 𝑚 is updated
as in CMA-ES (41), and the parameter 𝑅 is updated as
$$R \leftarrow \delta t \sum_{i=1}^{N} \widehat{w}_i \times \big( A^{-1}(x_i - m)(x_i - m)^{\mathrm{T}} (A^{\mathrm{T}})^{-1} - \mathrm{I}_d \big) \tag{44}$$
thus resulting in the same update as (43) (with $\eta_{\mathrm{c}} = \delta t$) for the covariance matrix: $C \leftarrow A \exp(R) A^{\mathrm{T}}$.

Proof Indeed, by basic differential geometry, if parametrization 𝜃′ = 𝜙(𝜃) is used, the IGO
update for 𝜃′ is 𝐷𝜙(𝜃𝑡 ) applied to the IGO update for 𝜃, where 𝐷𝜙 is the differential of
𝜙. Here, given the update (42) for 𝐶, to find the update for 𝑅 we have to compute the
differential of the map 𝐶 ↦→ ln(𝐴−1 𝐶(𝐴T )−1 ) taken at 𝐶 = 𝐴𝐴T : for any matrix 𝑀 we
have ln(𝐴−1 (𝐴𝐴T + 𝜀𝑀 )(𝐴T )−1 ) = 𝜀 𝐴−1 𝑀 (𝐴T )−1 + 𝑂(𝜀2 ). So to find the update for the
variable 𝑅 we have to apply 𝐴−1 . . . (𝐴T )−1 to the update (42) for 𝐶.
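A small sketch of this exponential update (ours; any factorization $C = AA^{\mathrm{T}}$ works, and scipy is assumed to be available for the matrix exponential) makes the multiplicative structure explicit; the resulting covariance matrix is positive definite for arbitrary weights.

```python
import numpy as np
from scipy.linalg import cholesky, expm

def exponential_cov_update(m, C, X, w_hat, dt):
    """Covariance update of (43)/(44): C <- A exp(R) A^T, with C = A A^T and eta_c = dt."""
    A = cholesky(C, lower=True)                     # any A with C = A A^T would do
    Z = np.linalg.solve(A, (X - m).T).T             # z_i = A^{-1} (x_i - m)
    R = dt * sum(w * (np.outer(z, z) - np.eye(len(m))) for w, z in zip(w_hat, Z))
    return A @ expm(R) @ A.T                        # positive definite for any weights
```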

5.2.4 Cross-Entropy Method and EMNA


Estimation of distribution algorithms (EDA) and the cross-entropy method (CEM, Rubinstein
1999; Rubinstein and Kroese 2004) estimate a new distribution from a censored sample.
Generally, the new parameter value can be written as
$$\theta_{\mathrm{maxLL}} = \arg\max_{\theta} \sum_{i=1}^{N} \widehat{w}_i \ln P_\theta(x_i) \;\xrightarrow{\,N\to\infty\,}\; \arg\max_{\theta}\; \mathbb{E}_{P_{\theta^t}}\, W^f_{\theta^t} \ln P_\theta \tag{45}$$

by Theorem 6. Here, the weights 𝑤^𝑖 are equal to 1/𝜇 for the 𝜇 best points (censored or elitist
sample) and 0 otherwise. This 𝜃maxLL maximizes the weighted log-likelihood of 𝑥1 , . . . , 𝑥𝑁 ;
equivalently, it minimizes the cross-entropy and the Kullback–Leibler divergence to the
distribution of the best 𝜇 samples8 .
For Gaussian distributions, Equation (45) can be explicitly written in the form
$$m^{t+1} = m^* = \sum_{i=1}^{N} \widehat{w}_i\, x_i \tag{46}$$
$$C^{t+1} = C^* = \sum_{i=1}^{N} \widehat{w}_i\, (x_i - m^*)(x_i - m^*)^{\mathrm{T}}, \tag{47}$$
the empirical mean and variance of the elite sample.


8. Let $P_w$ denote the distribution of the weighted samples: $\Pr(x = x_i) = \widehat{w}_i$ and $\sum_i \widehat{w}_i = 1$. Then the cross-entropy between $P_w$ and $P_\theta$ reads $\sum_i P_w(x_i) \ln 1/P_\theta(x_i)$ and the KL divergence reads $\mathrm{KL}(P_w \,\Vert\, P_\theta) = \sum_i P_w(x_i) \ln 1/P_\theta(x_i) - \sum_i P_w(x_i) \ln 1/P_w(x_i)$. Minimization of both quantities in $\theta$ results in $\theta_{\mathrm{maxLL}}$.


Equations (46) and (47) also define the simplest continuous domain EDA, the estimation
of multivariate normal algorithm (EMNAglobal , Larranaga and Lozano 2002). Interestingly,
(46) and (47) only differ from (41) and (42) (with 𝜂m = 𝜂c = 1) in that the new mean 𝑚𝑡+1
is used instead of 𝑚𝑡 in the covariance matrix update (Hansen, 2006b).
The smoothed CEM (32) in this parametrization thus writes
$$m^{t+\delta t} = (1-\delta t)\, m^t + \delta t\, m^* \tag{48}$$
$$C^{t+\delta t} = (1-\delta t)\, C^t + \delta t\, C^*. \tag{49}$$


Note that this is not an IGO algorithm (i.e., there is no parametrization of the family of
Gaussian distributions in which the IGO algorithm coincides with update Eq. 49): indeed,
all IGO algorithms coincide at first order in 𝛿𝑡 when 𝛿𝑡 → 0 (because they recover the IGO
flow), while this update for 𝐶 𝑡+𝛿𝑡 does not coincide with (42) in this limit, due to the use
of 𝑚* instead of 𝑚𝑡 . This does not contradict Theorem 12: smoothed CEM is an IGO
algorithm only if smoothed CEM is written in the expectation parametrization, which (𝑚, 𝐶)
is not.

5.2.5 CMA-ES, Smoothed CEM, and IGO-ML


Let us compare IGO-ML (30), rank-𝜇 CMA (41)–(42), and smoothed CEM (48)–(49) in
the parametrization with mean and covariance matrix. These algorithms all update the
distribution mean in the same way, while the update of the covariance matrix depends on
the algorithm. With learning rate 𝛿𝑡, these updates are computed to be
$$m^{t+1} = (1-\delta t)\, m^t + \delta t\, m^*, \qquad C^{t+1} = (1-\delta t)\, C^t + \delta t\, C^* + \delta t(1-\delta t)^j\, (m^* - m^t)(m^* - m^t)^{\mathrm{T}}, \tag{50}$$
for different values of 𝑗, where 𝑚* and 𝐶 * are the mean and covariance matrix computed
over the elite sample (with positive weights 𝑤 ̂︀𝑖 summing to one) as above. The rightmost
term of (50) is reminiscent of the so-called rank-one update in CMA-ES (not included in
Eq. 42).
For 𝑗 = 0 we recover the rank-𝜇 CMA-ES update (42), for 𝑗 = 1 we recover IGO-ML,
and for 𝑗 = ∞ we recover smoothed CEM (the rightmost term is absent). The case 𝑗 = 2
corresponds to an update that uses 𝑚𝑡+1 instead of 𝑚𝑡 in (42) (with 𝜂m = 𝜂c = 𝛿𝑡). For
0 < 𝛿𝑡 < 1, the larger 𝑗, the smaller 𝐶 𝑡+1 . For 𝛿𝑡 = 1, IGO-ML and smoothed CEM/EMNA
realize 𝜃maxLL from (45)–(47).
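For reference, all three covariance rules can be generated from a single routine; the following sketch (ours) implements (50) for a given 𝑗, with 𝑗 = 0, 1, ∞ recovering rank-𝜇 CMA-ES, IGO-ML and smoothed CEM respectively.

```python
import numpy as np

def unified_gaussian_update(m, C, X, w_hat, dt, j):
    """Update (50): j = 0 gives rank-mu CMA-ES, j = 1 IGO-ML, j = inf smoothed CEM."""
    m_star = (w_hat[:, None] * X).sum(axis=0)
    C_star = sum(w * np.outer(x - m_star, x - m_star) for w, x in zip(w_hat, X))
    diff = m_star - m
    cross = 0.0 if np.isinf(j) else dt * (1 - dt) ** j * np.outer(diff, diff)
    return (1 - dt) * m + dt * m_star, (1 - dt) * C + dt * C_star + cross
```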
For 𝛿𝑡 → 0, the update is independent of 𝑗 at first order in 𝛿𝑡 if 𝑗 < ∞: this reflects
compatibility with the IGO flow of CMA-ES and of IGO-ML, but not of smoothed CEM.
In the default (full) CMA-ES (as opposed to rank-𝜇 CMA-ES), the coefficient preceding $(m^* - m^t)(m^* - m^t)^{\mathrm{T}}$ in (50) reads approximately $3\delta t$, where the additional $2\delta t$ originate from the so-called rank-one update and are moreover modulated by a “cumulated path” up to a factor of about $\sqrt{d}$ (Hansen and Auger, 2014).

5.2.6 Critical 𝛿𝑡
Let us assume that 𝜇 < 𝑁 weights are set to $\widehat{w}_i = 1/\mu$ and the remaining weights to zero, so that the selection ratio is 𝑞 = 𝜇/𝑁.


[Figure 1 here: plot of the critical learning rate 𝛿𝑡 (vertical axis, from 0 to 1) against the truncation ratio 𝑞 (horizontal axis, from 0 to 0.5), with one curve each for 𝑗 = 1, 𝑗 = 2 and 𝑗 = ∞.]

Figure 1: Critical 𝛿𝑡 versus selection truncation ratio 𝑞 for three values of 𝑗 in (50). With 𝛿𝑡 above the critical 𝛿𝑡, the variance decreases systematically when optimizing a linear function, indicating failure of the algorithm. For CMA-ES/NES where 𝑗 = 0, the critical 𝛿𝑡 for 𝑞 < 0.5 is infinite.

Then there is a critical value of 𝛿𝑡 depending on this ratio 𝑞, such that above this critical
𝛿𝑡 the algorithms given by IGO-ML and smoothed CEM are prone to premature convergence.
Indeed, let 𝑓 be a linear function on R𝑑 , and consider the variance in the direction of the
gradient of 𝑓 . Assuming further 𝑁 → ∞ and 𝑞 ≤ 1/2, then the variance 𝐶 * of the elite
sample is smaller than the current variance 𝐶 𝑡 , by a constant factor. Depending on the
precise update for 𝐶 𝑡+1 , if 𝛿𝑡 is too large, the variance 𝐶 𝑡+1 is going to be smaller than 𝐶 𝑡 by
a constant factor as well. This implies that the algorithm is going to stall, i.e., the variance
will go to 0 before the optimum is reached. (On the other hand, the continuous-time IGO
flow corresponding to 𝛿𝑡 → 0 does not stall, see Appendix C.4.)
We now study the critical $\delta t$ (in the limit $N \to \infty$) below which the algorithm does not stall (this is done similarly to the analysis in Appendix C.4 for linear functions). For IGO-ML ($j = 1$ in Eq. 50, or equivalently for the smoothed CEM in the expectation parameters $(m, C + mm^{\mathrm{T}})$, see Section 4), the variance increases if and only if $\delta t$ is smaller than the critical value $\delta t_{\mathrm{crit}} = q\,b\,\sqrt{2\pi}\, e^{b^2/2}$, where $b$ is the percentile function of $q$, i.e. $b$ is such that $q = \int_b^\infty e^{-x^2/2}/\sqrt{2\pi}\, \mathrm{d}x$. This value $\delta t_{\mathrm{crit}}$ is plotted as a solid line in Fig. 1. For $j = 2$, $\delta t_{\mathrm{crit}}$ is smaller, related to the above by $\delta t_{\mathrm{crit}} \leftarrow \sqrt{1 + \delta t_{\mathrm{crit}}} - 1$, and plotted as a dashed line in Fig. 1. For CEM ($j = \infty$), the critical $\delta t$ is zero (reflecting the non-IGO behavior of CEM in this parametrization). For CMA-ES ($j = 0$), the critical $\delta t$ is infinite for $q < 1/2$. When the selection ratio $q$ is above 1/2, for all algorithms the critical $\delta t$ becomes zero.
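For concreteness, the critical value is easy to evaluate numerically from the formulas above; the following sketch (ours, using only the Python standard library) reproduces the solid (𝑗 = 1) and dashed (𝑗 = 2) curves of Figure 1.

```python
import math
from statistics import NormalDist

def critical_dt(q, j=1):
    """Critical learning rate for truncation ratio 0 < q < 1/2, in the limit N -> infinity."""
    b = NormalDist().inv_cdf(1 - q)        # b such that q = P(Z > b), Z standard normal
    dt1 = q * b * math.sqrt(2 * math.pi) * math.exp(b * b / 2)     # j = 1 (IGO-ML)
    if j == 1:
        return dt1
    if j == 2:
        return math.sqrt(1 + dt1) - 1
    raise ValueError("closed form implemented for j = 1 and j = 2 only")

# critical_dt(0.2, j=1) is about 0.60; critical_dt(0.2, j=2) is about 0.27
```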

5.2.7 Gaussian Distributions with Restricted Parametrization


When considering a restricted parametrization of multivariate normal distributions, IGO
recovers other known algorithms. In particular for sep-CMA-ES (Ros and Hansen, 2008) and
SNES (Schaul et al., 2011), the update has been restricted to the diagonal of the covariance
matrix.

5.3 IGO for Restricted Boltzmann Machines, and Multimodal Optimization


As a third example after Bernoulli and Gaussian distributions, we have applied IGO to
restricted Boltzmann machines (RBMs, Smolensky 1986; Ackley et al. 1985), which are


families of distributions on the search space 𝑋 = {0, 1}𝑑 extending Bernoulli distributions to
allow for dependencies between the bits (see, e.g., Berny 2002 for Boltzmann machines used
in an optimization setting). The details are given in Appendix B. We chose this example for
two reasons.
First, it illustrates how to obtain a novel algorithm from a family of probability distri-
butions in the IGO framework, in a case when there is no fully explicit expression for the
Fisher information matrix.
Second, it illustrates the influence of the natural gradient on diversity and its relevance
to multimodal optimization. The RBM probability distributions on {0, 1}𝑑 are multimodal,
contrary to Bernoulli distributions. Thus, in principle, IGO on such distributions could reach
several optima simultaneously. This allows for a simple testing of the entropy-maximizing
property of the IGO flow (Proposition 2). Accordingly, for a given improvement on the
objective function, we would expect IGO to favor preserving the diversity of 𝑃𝜃 . Indeed
on a simple objective function with two optima, we find that the IGO flow converges to a
distribution putting weight on both optima, while a descent using the vanilla gradient always
only concentrates around one optimum. This might be relevant for multimodal optimization,
where simultaneous handling of multiple hypotheses is seen as useful for optimization (Sareni
and Krähenbühl, 1998; Das et al., 2011), or in situations in which several valleys appear
equally good at first but only one of them contains the true optimum.
We refer to Appendix B for a reminder about Boltzmann machines, for how IGO rolls
out in this case, and for the detailed experimental setup for studying the influence of IGO
in a multimodal situation.

6. Summary and Conclusion


We sum up:

∙ The information-geometric optimization (IGO) framework derives from invariance principles the uniquely defined IGO flow (Definition 4), and allows, via discretization in time and space, to build optimization algorithms from any family of distributions on any search space. In some instances (Gaussian distributions on R𝑑 or Bernoulli distributions on {0, 1}𝑑) it recovers versions of known algorithms (PBIL, CMA-ES, cGA, NES); in other instances (restricted Boltzmann machine distributions) it produces new, hopefully efficient optimization algorithms.

∙ The use of a quantile-based, time-dependent transform of the objective function, Equation (5), provides a rigorous derivation of rank-based update rules, frequently used in current optimization algorithms. Theorem 6 uniquely identifies the infinite-population limit of these update rules.

∙ The IGO flow is singled out by its equivalent description as an infinitesimal weighted
maximum log-likelihood update (Theorem 10). In a particular parametrization and
with a step size of 1, IGO recovers the cross-entropy method (Corollary 13). This
reveals a connection between CEM and the natural gradient, and allows to define
a new, fully parametrization-invariant smoothed maximum likelihood update, the
IGO-ML.


∙ Theoretical arguments suggest that the IGO flow minimizes the change of diversity in
the course of optimization. As diversity usually decreases in the course of optimization,
IGO algorithms tend to exhibit minimal diversity loss for the observed improvement.
In particular, starting with high diversity and using multimodal distributions may
allow simultaneous exploration of multiple optima of the objective function. Prelimi-
nary experiments with restricted Boltzmann machines confirm this effect in a simple
situation.

Thus, the IGO framework provides sound theoretical foundations to optimization algo-
rithms based on probability distributions. In particular, this viewpoint helps to bridge the
gap between continuous and discrete optimization.
The invariance properties, which reduce the number of arbitrary choices, together with
the relationship between natural gradient and diversity, may contribute to a theoretical
explanation of the good practical performance of those currently used algorithms, such as
CMA-ES, which can be interpreted as instantiations of IGO.
We hope that invariance properties will acquire in computer science the importance they
have in mathematics, where intrinsic thinking is the first step for abstract linear algebra
or differential geometry, and in modern physics, where the notions of invariance w.r.t. the
coordinate system and so-called gauge invariance play a central role.

Acknowledgments

The authors would like to thank Michèle Sebag for the acronym and for helpful comments.
We also thank Youhei Akimoto for helpful feedback and inspiration. Y. O. would like to
thank Cédric Villani and Bruno Sévennec for helpful discussions on the Fisher metric. A. A.
and N. H. would like to acknowledge the Dagstuhl Seminar No 10361 on the Theory of
Evolutionary Computation (http://www.dagstuhl.de/10361) for crucially inspiring this
paper, their work on natural gradients and beyond. This work was partially supported by
the ANR-2010-COSI-002 grant (SIMINOLE) of the French National Research Agency.


Appendix A. Further Discussion and Perspectives


This appendix touches upon further aspects and perspectives of the IGO framework and its
implementations.

A.1 A Single Framework for Optimization on Arbitrary Spaces


A strength of the IGO viewpoint is to automatically provide a distinct and arguably in some
sense optimal optimization algorithm from any family of probability distributions on any
given space, discrete or continuous. IGO algorithms feature desired invariance properties
and thereby make fewer arbitrary choices.
In particular, IGO describes several well-known optimization algorithms within a single
framework. For Bernoulli distributions, to the best of our knowledge, PBIL or cGA have never
been identified as a natural gradient ascent in the literature9 . For Gaussian distributions,
algorithms of the same form (17) had been developed previously (Hansen and Ostermeier,
2001; Wierstra et al., 2008) and their close relationship with a natural gradient ascent had
been recognized (Akimoto et al., 2010; Glasmachers et al., 2010). These works, however,
also strongly suggest that IGO algorithms may need to be complemented with further
heuristics to achieve efficient optimization algorithms. In particular, learning rate settings
and diversity control (step-size) deviate from the original framework. These deviations
mostly stem from time discretization and the choice of a finite sample size, both necessary
to derive an IGO algorithm from the IGO flow. Further work is needed to fully understand
this from a theoretical viewpoint.
The wide applicability of natural gradient approaches seems not to be widely known in
the optimization community (though see Malagò et al. 2008).

A.2 About Invariance


The role of invariance is two-fold. First, invariance breaks otherwise arbitrary choices in
the algorithm design. Second, and more importantly, invariance implies generalization of
behavior from single problem instances to entire problem classes (with the caveat to choose
the initial state of the algorithm accordingly), thereby making the outcome of optimization
more predictable.
Optimization problems faced in reality—and algorithms used in practice—are often
far too complex to be amenable to a rigorous mathematical analysis. Consequently, the
judgement of algorithms in practice is to a large extent based in empirical observations,
either on artificial benchmarks or on the experience with real-world problems. Invariance,
as a guarantee of generalization, has an immediate impact on the relevance of any such
empirical observations and, in the same line of reasoning, can be interpreted as a notion of
robustness.

A.3 About Quantiles


The IGO flow in (8) has, to the best of our knowledge, never been defined before. The
introduction of the quantile-rewriting (5) of the objective function provides the first rigorous
9. Thanks to Jonathan Shapiro for giving an early argument confirming this property (personal communica-
tion).


derivation of quantile- or rank- or comparison-based optimization from a gradient ascent in 𝜃-space.
NES and CMA-ES have been claimed to maximize −E𝑃𝜃 𝑓 via natural gradient ascent
(Wierstra et al., 2008; Akimoto et al., 2010). However, we have proved that the NES and
CMA-ES updates actually converge to the IGO flow, not to the similar flow with the gradient
of E𝑃𝜃 𝑓 (Theorem 6). So we find that in reality these algorithms maximize E𝑃𝜃 𝑊𝜃𝑓𝑡 , where
𝑊𝜃𝑓𝑡 is a decreasing transformation of the 𝑓 -quantiles under the current sample distribution.
Moreover, in practice, maximizing −E𝑃𝜃 𝑓 tends to be a rather unstable procedure and
has been discouraged, see for example Whitley (1989) and Sun et al. (2009).

A.4 About Choice of 𝑃𝜃 : Learning a Model of Good Points


The choice of the family of probability distributions 𝑃𝜃 plays a double role.
First, it is analogous to choosing the variation operators (namely mutation or recombina-
tion) as seen in evolutionary algorithms: indeed, 𝑃𝜃 encodes possible moves according to
which new sample points are explored.
Second, optimization algorithms using distributions can be interpreted as learning a
probabilistic model of where the points with good values lie in the search space. With
this point of view, 𝑃𝜃 describes richness of this model: for instance, restricted Boltzmann
machines with ℎ hidden units can describe distributions with up to 2ℎ modes, whereas the
Bernoulli distribution is unimodal. This influences, for instance, the ability to explore several
valleys and optimize multimodal functions in a single run.
More generally, the IGO framework makes it tempting to use more complex models of
where good points lie, inspired, e.g., from machine learning, and adapt them for optimization.
The restricted Boltzmann machines of Appendix B are a first step in this direction. The
initial idea behind these machines is that each hidden unit controls a block of coordinates of
the search space (a block of features), so that the optimization algorithm hopefully builds a
good model of which features must be activated or de-activated together to obtain good
values of 𝑓 . This is somewhat reminiscent of a crossover operator: if observation of good
points shows that a block of features go together, this information is stored in the RBM
structure and this block may be later activated as a whole, thus effectively transferring
blocks of features from one good solution to another. Inspired by models of deep learning
(Bengio et al., 2012), one might be tempted to stack such models on top of each other, so
that optimization would operate on a more and more abstract representation of the problem.
IGO and the natural gradient might help in exploiting the added expressivity that comes
with richer models: in our simple experiment, the vanilla gradient ignores the additional
expressivity of RBMs with respect to Bernoulli distributions (Appendix B). The downside of
a richer model is that either the sample size 𝑁 must be increased or the learning rate 𝛿𝑡 must
be decreased to obtain a stable algorithm (a situation analogous to overfitting in machine
learning: richer models will be quicker to concentrate too much around a few observed good
samples).

A.5 Natural Gradient and Parametrization Invariance


Central to IGO is the use of the natural gradient, which follows from 𝜃-invariance (parametriza-
tion invariance) and makes sense on any search space, discrete or continuous.


While the IGO flow is exactly 𝜃-invariant, for any practical implementation of an
IGO algorithm, a parametrization choice has to be made. Different parametrizations
lead to different algorithms and larger values of 𝛿𝑡 are likely to result in more differing
algorithms. Still, since all IGO algorithms approximate the IGO flow, two parametrizations
in combination with IGO will differ less than the same two parametrizations in combination
with another algorithm (such as the vanilla gradient or the smoothed CEM method)—at
least if the learning rate 𝛿𝑡 is not too large. The chosen parametrization becomes more
relevant as the step size 𝛿𝑡 increases.
On the other hand, natural evolution strategies have not emphasized 𝜃-invariance: the
chosen parametrization (Cholesky, exponential) has been considered as a defining feature.
We believe the term “natural evolution strategy” should rather be used independently of the
chosen parameterization, thereby referring to the usage of the natural gradient as the main
principle for the update of distribution parameters.

A.6 IGO, Maximum Likelihood and Cross-Entropy

The cross-entropy method (CEM, de Boer et al. 2005) can be used to produce optimization
algorithms given a family of probability distributions on an arbitrary space, by performing a
jump to a maximum likelihood estimate of the parameters.
We have seen (Corollary 13) that the standard CEM is an IGO algorithm in a particular
parametrization, with a learning rate 𝛿𝑡 equal to 1. However, it is well-known, both
theoretically and experimentally (Branke et al., 2007; Hansen, 2006b; Wagner et al., 2004),
that standard CEM loses diversity too fast in many situations. The usual solution (de Boer
et al., 2005) is to reduce the learning rate (smoothed CEM, Equation 32), but this breaks
the reparametrization invariance of non-smoothed CEM.
On the other hand, the IGO flow can be seen as a maximum likelihood update with
infinitesimal learning rate (Theorem 10). This interpretation allows to define a particular
IGO algorithm, the IGO-ML (Definition 11): it performs a maximum likelihood update with
an arbitrary learning rate, and keeps the reparametrization invariance. It coincides with
CEM when the learning rate is set to 1, but it differs from smoothed CEM by the exchange
of the order of argmax and averaging (compare Equations 30 and 32), and coincides with
the IGO flow for small learning rates. We argue that this new, fully invariant algorithm is
the conceptually better way to reduce the learning rate and achieve smoothing in CEM.
Standard CEM with rate 1 can lose diversity, yet is a particular case of an IGO algo-
rithm: this illustrates the fact that reasonable values of the learning rate 𝛿𝑡 depend on the
parametrization. We have studied this phenomenon in detail for various Gaussian IGO
algorithms (Section 5.2).
Why would a smaller learning rate perform better than a large one in an optimization
setting? It might seem more efficient to jump directly to the maximum likelihood estimate
of currently known good points, instead of performing an iterated gradient ascent towards
this maximum. Due to having only a limited number of samples, however, optimization
faces a “moving target”, contrary to a learning setting in which the example distribution is
often stationary. Currently known good points heavily depend on the current distribution
and are likely not to indicate the position at which the optimum lies, but, rather, the broad


direction in which the optimum is to be found. After each update, the next elite sample
points are going to be located somewhere new.

A.7 Diversity and Multiple Optima


The IGO framework emphasizes the relation of natural gradient and diversity: we argued
that IGO provides minimal diversity change for a given objective function improvement. In
particular, provided the initial diversity is large, diversity is kept at a maximum after the
update. This theoretical relationship can be observed experimentally for restricted Boltzmann
machines (Appendix B).
On the other hand, using the vanilla gradient does not lead to a balanced distribution
between the two optima in our experiments. Using the vanilla gradient introduces hidden
arbitrary choices between those points (more exactly between moves in Θ-space). This
results in unnecessary and unwelcome loss of diversity, and might also be detrimental at
later stages in the optimization. This may reflect the fact that the Euclidean metric on the
space of parameters, implicitly used in the vanilla gradient, becomes less and less meaningful
for gradient descent on complex distributions.
IGO and the natural gradient are certainly relevant to the well-known problem of
exploration-exploitation balance: as we have seen, arguably the natural gradient realizes
the largest improvement in the objective with the least possible change of diversity in the
distribution.
More generally, like other distribution-based optimization algorithms, IGO tries to learn
a model of where the good points are. This is typical of machine learning, one of the contexts
for which the natural gradient was studied. The conceptual relationship of IGO and IGO-like
optimization algorithms with machine learning is still to be explored and exploited.

We now present some ideas which we believe would be worth exploring.

A.8 Adaptive Learning Rate


Comparing consecutive updates to evaluate a learning rate or step size is an effective measure.
For example, in back-propagation, the update sign has been used to adapt the learning rate
of each single weight in an artificial neural network (Silva and Almeida, 1990). In CMA-ES,
a step size is adapted depending on whether recent steps tended to move in a consistent
direction or to backtrack. This is measured by considering the changes of the mean 𝑚 of
the Gaussian distribution.
For a probability distribution 𝑃𝜃 on an arbitrary search space, in general no notion of
mean may be defined. However, it is still possible to define “backtracking” in the evolution
of 𝜃 as follows.
Consider two successive updates $\delta\theta^t = \theta^t - \theta^{t-\delta t}$ and $\delta\theta^{t+\delta t} = \theta^{t+\delta t} - \theta^t$. Their scalar product in the Fisher metric $I(\theta^t)$ is
$$\langle \delta\theta^t, \delta\theta^{t+\delta t} \rangle = \sum_{ij} I_{ij}(\theta^t)\, \delta\theta_i^t\, \delta\theta_j^{t+\delta t}.$$
Dividing by the associated norms will yield the cosine $\cos\alpha$ of the angle between $\delta\theta^t$ and $\delta\theta^{t+\delta t}$.


If this cosine is positive, the learning rate 𝛿𝑡 may be increased. If the cosine is negative,
the learning rate probably needs to be decreased. Various schemes for the change of 𝛿𝑡 can
be devised; for instance, inspired by step size updates commonly used in evolution strategies,
one can multiply 𝛿𝑡 by exp(𝛽(cos 𝛼)) or exp(𝛽 sign(cos 𝛼)), where 𝛽 ≈ min(𝑁/ dim Θ, 1/2).
As this cosine can be quite noisy, cumulation over several time steps might be advisable.
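A minimal sketch of such a rule (ours; the Fisher matrix 𝐼 at 𝜃^𝑡 is assumed to be available, and 𝛽 is left as a parameter) could read:

```python
import numpy as np

def adapt_step_size(dt, dtheta_prev, dtheta_curr, I, beta):
    """Multiply dt by exp(beta * cos(alpha)), the angle being measured in the Fisher metric I."""
    dot = dtheta_prev @ I @ dtheta_curr
    cos_alpha = dot / np.sqrt((dtheta_prev @ I @ dtheta_prev) *
                              (dtheta_curr @ I @ dtheta_curr))
    return dt * np.exp(beta * cos_alpha)
```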
As before, this scheme is constructed to be robust w.r.t. reparametrization of 𝜃, thanks
to the use of the Fisher metric. However, for large learning rates 𝛿𝑡, in practice the
parametrization might well become relevant.

A.9 Geodesic Parametrization


While the IGO flow is fully invariant under 𝜃-reparametrization, an IGO algorithm does
depend on the choice of parametrization for 𝜃, even if for small 𝛿𝑡 the difference between two
IGO algorithms is 𝑂(𝛿𝑡 2 ), one order of magnitude smaller than between IGO and vanilla
gradient in general.
So one can wonder how to discretize time in the IGO flow in a fully intrinsic way, not
depending at all on a parametrization for 𝜃. A first possibility is given by the IGO-ML
algorithm (Definition 11)—this means, for exponential families, that we can decide to single
out the parametrization by expectation parameters.
Another, more geometric solution is to use geodesics on the statistical manifold. This
means we approximate the trajectories of the IGO flow by successive geodesic segments
of length 𝛿𝑡 in the Fisher metric, where the initial direction of each segment is given by
the direction of the IGO flow (16). This defines an approximation to the IGO flow that
depends on the step size 𝛿𝑡 and sample size 𝑁 , but not on any choice of parametrization.
This approach is fully developed in Bensadon (2015).

A.10 Finite Sample Size and Noisy IGO Flow


The IGO flow is an ideal model of the IGO algorithms. But the IGO flow is deterministic
while IGO algorithms are stochastic, depending on a finite number 𝑁 of random samples.
This might result in important differences in their behavior and one can wonder if there is a
way to reflect stochasticity directly in the definition of the IGO flow.
The IGO update (16) is a stochastic update
$$\theta^{t+\delta t} = \theta^t + \delta t \sum_{i=1}^{N} \widehat{w}_i\, \widetilde\nabla_{\theta} \ln P_\theta(x_i)\Big|_{\theta=\theta^t}$$
because the term $\sum_{i=1}^{N} \widehat{w}_i\, \widetilde\nabla_{\theta} \ln P_\theta(x_i)\big|_{\theta=\theta^t}$ involves a random sample. As such, this term
has an expectation and a variance. So for a fixed 𝑁 and 𝛿𝑡, this random update is a weak
approximation with step size 𝛿𝑡 (Kloeden and Platen, 1992, Chapter 9.7) of a stochastic
differential equation on 𝜃, whose drift is the expectation of the IGO update (which tends to the IGO flow when 𝑁 → ∞), and whose noise term is $\sqrt{\delta t}$ times the square root of the covariance matrix of the update applied to a normal random vector.
Such a stochastic differential equation, defining a noisy IGO flow, might be a better
theoretical object with which to compare the actual behavior of IGO algorithms, than the
ideal noiseless IGO flow.


For instance, this strongly suggests that if we have 𝛿𝑡 → 0 while 𝑁 is kept fixed in an
IGO algorithm, noise will disappear (compare Remark 2 in Akimoto et al. 2012).
Second, for large 𝑁, one expects the variance of the IGO update to scale like 1/𝑁, so that the noise term will scale like $\sqrt{\delta t / N}$. This formally suggests that, within reasonable
bounds, multiplying or dividing both 𝑁 and 𝛿𝑡 by the same factor should result in similar
behavior of the algorithm, so that for instance it should be reasonable to reset 𝛿𝑡 to 10𝛿𝑡/𝑁
and 𝑁 to 10. (Note that the cost in terms of 𝑓 -calls of these two algorithms is similar.)
This dependency is reflected in evolution strategies in several ways, provided 𝑁 is smaller
than the search space dimension. First, theoretical results for large search space dimension on
the sphere function, 𝑓 (𝑥) = ‖𝑥‖2 , indicate that the optimal step size 𝛿𝑡 for the mean vector
is proportional to 𝑁 , provided the weighting function 𝑤 is either truncation selection with a
fixed truncation ratio (Beyer, 2001) or optimal weights (Arnold, 2006). Second, the learning
rate $\delta t$ of the covariance matrix in CMA-ES is chosen proportional to $\big(\sum_{i=1}^{N} \widehat{w}_i\big)^2 \big/ \sum_{i=1}^{N} \widehat{w}_i^2$,
𝑖=1 𝑤
which is again proportional to 𝑁 (Hansen and Kern, 2004). For small enough 𝑁 , the progress
per 𝑓 -call is then in both cases rather independent of the choice of 𝑁 .
These results suggest that in implementations, 𝑁 can be chosen rather freely, whereas
𝛿𝑡 will be set to const · 𝑁 . The constant is chosen small enough to ensure stability, but as
large as possible to maximize speed; still, 𝛿𝑡 ≤ 1 is another constraint for (very) large 𝑁 .

A.11 Influence of the Fisher Geometry of the Statistical Manifold


The global Riemannian geometry of the statistical manifold 𝑃𝜃 might have a bearing on the
behavior of stochastic algorithms exploring this manifold. For instance, the Fisher metric
identifies the set of 1-dimensional normal distributions 𝒩 (𝑚, 𝜎 2 ) with the two-dimensional
hyperbolic plane. The latter has negative curvature. The sign of curvature has a strong
influence on the behavior of random walks in a Riemannian manifold: in particular, in
negative curvature, successive random errors tend to not compensate as much as in the
Euclidean case (because geodesics diverge more quickly); this might be relevant to the
settings of a stochastic optimization algorithm, suggesting to use larger sample size or
smaller steps when curvature is negative; Bensadon (2015) provides first observations in this
direction, associated with negative curvature in the space of Gaussians. This is speculative
and remains to be explored.

Appendix B. Implementing an IGO Algorithm with a New Family of


Probability Distributions: Restricted Boltzmann Machines
In this appendix we show how to apply IGO to restricted Boltzmann machines (RBMs,
Smolensky 1986; Ackley et al. 1985), a family of probability distributions on the discrete
hypercube that extends Bernoulli distributions and can represent correlations between
variables.
This first illustrates how to set up an IGO algorithm from a family of probability
distributions for which no fully explicit expression for the Fisher information matrix is
available.
Second, we test the relationship between IGO and diversity of 𝑝𝜃 , in view of the entropy-
maximizing property of the IGO flow (Proposition 2). We consider a simple function with


Figure 2: The RBM architecture with the observed (x) and latent (h) variables. In our
experiments, a single hidden unit was used.

two optima, and compare the behavior of the IGO flow with that of a vanilla gradient flow.
The latter always converges to a single optimum, even though RBMs allow for multimodal
distributions. On the other hand, IGO seems to always converge to both optima at once,
taking advantage of RBM multimodality.
Finally we interpret this observations by pointing out a non-obvious breach of sym-
metry between 0 and 1 on {0, 1}𝑑 for the vanilla gradient of RBMs; the natural gradient
automatically compensates for this.

B.1 Restricted Boltzmann Machines.


RBMs first define a joint distribution on x ∈ {0, 1}𝑑 together with a hidden or latent
variable h ∈ {0, 1}𝑑ℎ (Ghahramani, 2004). Summation over h provides the distribution over
x. The probability associated with an observation x = (𝑥𝑖 ) ∈ {0, 1}𝑑 and latent variable
h = (ℎ𝑗 ) ∈ {0, 1}𝑑ℎ is

$$P_\theta(\mathbf{x}, \mathbf{h}) = \frac{e^{-E(\mathbf{x},\mathbf{h})}}{\sum_{\mathbf{x}',\mathbf{h}'} e^{-E(\mathbf{x}',\mathbf{h}')}}, \qquad P_\theta(\mathbf{x}) = \sum_{\mathbf{h}} P_\theta(\mathbf{x}, \mathbf{h}), \tag{51}$$
where the energy function $E$ is
$$E(\mathbf{x}, \mathbf{h}) = -\sum_i a_i x_i - \sum_j b_j h_j - \sum_{i,j} w_{ij}\, x_i h_j \tag{52}$$

(compare Section 3.3). The distribution is parametrized by 𝜃 = (a, b, w). The “biases” a
and b can be subsumed into the weights w by introducing variables 𝑥0 and ℎ0 always equal
to one, which we implicitly do from now on.
The hidden variable h introduces correlations between the components of x (see Figure 2).
For instance, with 𝑑ℎ = 1, the distribution on x is the sum of two Bernoulli distributions
with h acting as a “switch”.

B.2 IGO for RBMs.


RBMs constitute an exponential family with latent variables (with the statistics 𝑇 (x, h)
being all the 𝑥𝑖 ℎ𝑗 ). So the IGO flow (10) is given by (25) where the first contribution is the
Fisher matrix (24) and the second contribution is the log-probability derivatives weighted


by the objective function values. However, these expressions are not explicit due to the
expectations under 𝑃𝜃 . Still, these expectations can be replaced with Monte Carlo sampling;
this is what we will do.
Actually, when applying IGO to such distributions to optimize an objective function
𝑓 (x), we have two choices. The first is to see the objective function 𝑓 (x) as a function of
(x, h) where h is a dummy variable; then we can use the IGO algorithm to optimize over
(x, h) using the distributions 𝑃𝜃 (x, h). A second possibility is to marginalize 𝑃𝜃 (x, h) over
the hidden units h as in (51), to define the distribution 𝑃𝜃 (x); then we can use the IGO
algorithm to optimize 𝑓 over x using 𝑃𝜃 (x).
These two approaches yield slightly different algorithms. The Fisher matrix for the
distributions 𝑃𝜃 (x, h) is given by (20) (exponential families) whereas the one for the distri-
butions 𝑃𝜃 (x) is given by (24) (exponential families with latent variables). For instance,
with 𝐼𝑤𝑖𝑗 𝑤𝑖′ 𝑗 ′ denoting the entry of the Fisher matrix corresponding to the components 𝑤𝑖𝑗
and 𝑤𝑖′ 𝑗 ′ of the parameter 𝜃, from (20) we get

$$I_{w_{ij} w_{i'j'}}(\theta) = \mathrm{Cov}_{P_\theta}(x_i h_j,\, x_{i'} h_{j'}) = \mathbb{E}_{P_\theta}[x_i h_j\, x_{i'} h_{j'}] - \mathbb{E}_{P_\theta}[x_i h_j]\, \mathbb{E}_{P_\theta}[x_{i'} h_{j'}] \tag{53}$$

whereas from (24) we get the same expression in which each $h_j$ is replaced with its expectation $\bar h_j$ knowing $\mathbf{x}$, namely $\bar h_j = \mathbb{E}_{P_\theta}[h_j \,|\, \mathbf{x}] = \big(1 + e^{-b_j - \sum_i x_i w_{ij}}\big)^{-1}$, and likewise for $h_{j'}$.
Both versions were tested on a small instance and found to be viable. However the
version using (x, h) is numerically more stable and requires fewer samples to get a reliable
(in particular, invertible) estimate of the Fisher matrix, both in practice and theory10 . For
this reason we selected the first approach: we optimize 𝑓 as a function of (x, h) using IGO
for the probability distributions 𝑃𝜃 (x, h).
A practical IGO update is thus obtained from (21) by replacing the expectations with
Monte Carlo samples (x, h) from 𝑃𝜃 . A host of strategies are available to sample from RBMs
(Ackley et al., 1985; Salakhutdinov and Murray, 2008; Salakhutdinov, 2009; Desjardins et al.,
2010); we simply used Gibbs sampling (Hinton, 2002). So the IGO update reads
$$\theta^{t+\delta t} = \theta^t + \delta t\; \widehat{\mathrm{Cov}}(T, T)^{-1}\, \widehat{\mathrm{Cov}}(T, W^f_{\theta^t}) \tag{54}$$

where $T$ denotes the vector of all statistics $T_{ij}(\mathbf{x}, \mathbf{h}) = x_i h_j$ corresponding to the component $w_{ij}$ of the parameter $\theta$, and where $\widehat{\mathrm{Cov}}$ denotes the empirical covariance over a set of Monte Carlo samples $(\mathbf{x}, \mathbf{h})$ taken from $P_{\theta^t}(\mathbf{x}, \mathbf{h})$. Thus $\hat I = \widehat{\mathrm{Cov}}(T, T)$ is the estimated Fisher matrix, a matrix of size $\dim(\theta)^2$ as in (53). $\widehat{\mathrm{Cov}}(T, W^f_{\theta^t})$ is a vector of size $\dim(\theta)$ giving the correlation, in the Monte Carlo sample, between the statistics $T_{ij}$ and the ranked values $W^f_{\theta^t}$ of the objective function of the points $(\mathbf{x}, \mathbf{h})$ in the sample: this vector is the sum of weighted log-probability derivatives in the IGO update (17), thanks to (19).
Different sample sizes 𝑁Fish and 𝑁 can be used to evaluate Cov(𝑇,̂︂ 𝑇 ) and Cov(𝑇,
̂︂ 𝑊𝜃𝑓𝑡 ).
𝑓
The sample size for Cov(𝑇,
̂︂ 𝑊𝜃𝑡 ) is just the IGO parameter 𝑁 , the number of points on

10. Indeed, if 𝐼1 (𝜃) is the Fisher matrix at 𝜃 in the first approach and 𝐼2 (𝜃) in the second approach, we always
have 𝐼1 (𝜃) > 𝐼2 (𝜃) in the sense of positive-definite matrices. This is because probability distributions
on the pair (x, h) carry more information than their projections on x only, and so the Kullback–Leibler
distances will always be larger. In particular, there exist values of 𝜃 for which the Fisher matrix 𝐼2 is not
invertible whereas 𝐼1 is.

39
Ollivier, Arnold, Auger and Hansen

which the objective function has to be evaluated; it is typically kept small especially if 𝑓
is costly. On the other hand it is important to obtain a reliable (in particular, invertible)
estimate of the Fisher matrix: invertibility requires a number of samples at least, and ideally
much larger than, dim(𝜃), because each point in the sample contributes a rank-1 term to the
empirical covariance matrix Cov(𝑇,
̂︂ 𝑇 ). Increasing 𝑁Fish does not require additional 𝑓 -calls.

B.3 An Experiment with Two Optima: IGO, Diversity, and Multimodal Optimization.
We tested the resulting IGO trajectories on a simple objective function with two optima on
{0, 1}𝑑 , namely, the two-min function based at y defined as
𝑓y (x) = min ( ∑_𝑖 |𝑥𝑖 − 𝑦𝑖 | , ∑_𝑖 |(1 − 𝑥𝑖 ) − 𝑦𝑖 | )    (55)

which, as a function of x, has two optima, one at x = y and the other at its binary
complement x = ȳ. The value of the base point y was randomized for each independent run.
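In code, the two-min objective (55) is simply (a sketch):

```python
import numpy as np

def two_min(x, y):
    # f_y(x), eq. (55): distance of x to the closer of the two optima y and its complement 1 - y
    x, y = np.asarray(x), np.asarray(y)
    return min(np.abs(x - y).sum(), np.abs((1 - x) - y).sum())
```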
We ran both the IGO algorithm as described above, and a version using the vanilla
gradient instead of the natural gradient (that is, omitting the Fisher matrix in the IGO
update).
The dimension was 𝑑 = 40 and we used an RBM with only one latent variable (𝑑ℎ = 1).
Therefore dim(𝜃) = 81. We used a large sample size of 𝑁 = 10, 000 for Monte Carlo sampling,
so as to be close to the theoretical IGO flow behavior. We also tested a smaller, more
realistic sample size of 𝑁 = 10 (still keeping 𝑁Fish = 10, 000), with similar but noisier results.
The selection scheme (Section 2.2) was 𝑤(𝑞) = 1_{𝑞 ≤ 1/5} (cf. Rechenberg 1994) so that the best
20% points in the sample are given weight 1 for the update.
The RBM was initialized so that at startup, the distribution 𝑃𝜃0 is close to uniform on
(x, h), in line with Proposition 2. Explicitly we set 𝑤𝑖𝑗 ← 𝒩 (0, 1/(𝑑·𝑑ℎ )), and then 𝑏𝑗 ← − ∑_𝑖 𝑤𝑖𝑗 /2
and 𝑎𝑖 ← − ∑_𝑗 𝑤𝑖𝑗 /2 + 𝒩 (0, 0.01/𝑑²), which ensures a close-to-uniform initial distribution.
Full experimental details, including detailed setup and additional results, can be found in
a previous version of this article (Ollivier et al., 2011, Section 5). (In particular, IGO runs are
frozen when the estimated Fisher matrix becomes singular or unreliable.) The code used for
these experiments can be found at http://www.ludovicarnold.com/projects:igocode .

The results show that, most of the time, with IGO the distribution 𝑃𝜃𝑡 converges to a
distribution giving positive mass to both optima; on the other hand, over 300 independent
runs, the same algorithm using the vanilla gradient only converges to one optimum at the
expense of the other, so that the vanilla gradient never exploited the possibility offered by
the RBM to create a bimodal probability distribution on {0, 1}𝑑 . Figure 3 shows ten random
runs (out of 300 in our experiments) of the two algorithms: for each of the two optima we
plot its distance to the nearest of the points drawn from 𝑃𝜃𝑡 , as a function of time 𝑡.11
11. Note that the value of 𝛿𝑡 is not directly comparable between the natural and vanilla gradients. Theory
suggests that at startup the IGO trajectory with 𝛿𝑡 is most comparable to the vanilla gradient trajectory
with 4𝛿𝑡, because from (53) most of the diagonal terms of the Fisher matrix are equal to 1/4 and
most off-diagonal terms are 0 at startup. The experiments confirm that this yields roughly comparable
convergence speeds.

[Figure 3 comprises four panels plotting the distance to the closest optimum (top row) and to the second optimum (bottom row) against gradient steps, for the natural gradient (𝛿𝑡 = 0.5, left column) and the vanilla gradient (𝛿𝑡 = 2, right column).]

Figure 3: Distance to the two optima during 10 IGO optimization runs and 10 vanilla
gradient runs. For each optimum, we plot its distance to the nearest point among
the samples from 𝑃𝜃𝑡 found at each step.

This is even clearer when looking at the hidden variable h. With 𝑑ℎ = 1 the two possible
values ℎ = 0 and ℎ = 1 can create a sum of two Bernoulli distributions on x. Figure 4
plots the average value of ℎ in the sample for IGO and for the vanilla gradient. As can be
seen, over IGO optimization the balance between the two modes is preserved, whereas using
vanilla gradient optimization the zero mode is lost.

We interpret this as an illustration of Proposition 2: for a given improvement on the


objective function, IGO will favor preserving the diversity (entropy) of 𝑃𝜃 . Using more richly
multimodal distributions (e.g., RBMs with 𝑑ℎ > 1), this might be useful for multimodal
optimization or for optimization situations in which there are several almost equally deep
valleys only one of which contains the true optimum.

[Figure 4 comprises two panels plotting the ℎ-statistics against gradient steps: natural gradient with 𝛿𝑡 ∈ {0.25, 0.5, 1, 2} (top) and vanilla gradient with 𝛿𝑡 ∈ {1, 2, 4, 8} (bottom).]

Figure 4: Median over 300 runs of the average of ℎ in the sample, for IGO and vanilla
gradient optimization. Error bars indicate the 16th and 84th quantile over the
runs.

B.4 Breach of Symmetry by the Vanilla Gradient.


This experiment reveals that, curiously, the vanilla gradient loses multimodality by always
setting the hidden variable ℎ to 1, not to 0 (Figure 4). So the vanilla gradient for RBMs
seems to favor ℎ = 1.
Of course, exchanging the values 0 and 1 for the hidden variables in a restricted Boltzmann
machine still gives a distribution of another Boltzmann machine. Changing ℎ𝑗 into 1 − ℎ𝑗 is
equivalent to resetting 𝑎𝑖 ← 𝑎𝑖 + 𝑤𝑖𝑗 , 𝑏𝑗 ← −𝑏𝑗 , and 𝑤𝑖𝑗 ← −𝑤𝑖𝑗 .
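A short check makes this explicit. Assuming the usual sign convention for the energy, 𝐸(x, h) = − ∑_𝑖 𝑎𝑖 𝑥𝑖 − ∑_𝑗 𝑏𝑗 ℎ𝑗 − ∑_{𝑖,𝑗} 𝑤𝑖𝑗 𝑥𝑖 ℎ𝑗 (the convention underlying the bias-free form (56) below), substituting ℎ𝑗 = 1 − ℎ′𝑗 for one fixed 𝑗 gives

```latex
E(\mathbf{x},\mathbf{h})
 = -\sum_i a_i x_i - b_j(1-h'_j) - \sum_{j'\neq j} b_{j'} h_{j'}
   - \sum_i w_{ij}\, x_i (1-h'_j) - \sum_{i,\; j'\neq j} w_{ij'}\, x_i h_{j'} \\
 = -\sum_i (a_i + w_{ij})\, x_i \;-\; (-b_j)\, h'_j \;-\; \sum_{j'\neq j} b_{j'} h_{j'}
   \;-\; \sum_i (-w_{ij})\, x_i h'_j \;-\; \sum_{i,\; j'\neq j} w_{ij'}\, x_i h_{j'} \;-\; b_j ,
```

so that, up to the additive constant −𝑏𝑗 absorbed by the normalization constant, this is the energy of the RBM with parameters 𝑎𝑖 + 𝑤𝑖𝑗 , −𝑏𝑗 , −𝑤𝑖𝑗 .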
IGO and the natural gradient are impervious to such a change by Proposition 19. But
the vanilla gradient implicitly relies on the Euclidean norm on parameter space, as explained
in Section 2.1. For this norm, the distance between the RBM distributions (𝑎𝑖 , 𝑏𝑗 , 𝑤𝑖𝑗 )
and (𝑎′𝑖 , 𝑏′𝑗 , 𝑤′𝑖𝑗 ) is ∑_𝑖 |𝑎𝑖 − 𝑎′𝑖 |² + ∑_𝑗 |𝑏𝑗 − 𝑏′𝑗 |² + ∑_{𝑖𝑗} |𝑤𝑖𝑗 − 𝑤′𝑖𝑗 |². However, the change of
variables 𝑎𝑖 ← 𝑎𝑖 + 𝑤𝑖𝑗 , 𝑏𝑗 ← −𝑏𝑗 , 𝑤𝑖𝑗 ← −𝑤𝑖𝑗 does not preserve this Euclidean metric.
Thus, exchanging 0 and 1 for the hidden variables will result in two different vanilla gradient
ascents. The observed asymmetry on ℎ is a consequence of this implicit asymmetry.
Of course it is possible to use parametrizations for which the vanilla gradient will be
more symmetric: for instance, using −1/1 instead of 0/1 for the variables, or rewriting the

energy as

𝐸(x, h) = − ∑_𝑖 𝐴𝑖 (𝑥𝑖 − 1/2) − ∑_𝑗 𝐵𝑗 (ℎ𝑗 − 1/2) − ∑_{𝑖,𝑗} 𝑊𝑖𝑗 (𝑥𝑖 − 1/2)(ℎ𝑗 − 1/2)    (56)

with “bias-free” parameters 𝐴𝑖 , 𝐵𝑗 , 𝑊𝑖𝑗 related to the usual parametrization by 𝑤𝑖𝑗 = 𝑊𝑖𝑗 ,
𝑎𝑖 = 𝐴𝑖 − (1/2) ∑_𝑗 𝑤𝑖𝑗 , and 𝑏𝑗 = 𝐵𝑗 − (1/2) ∑_𝑖 𝑤𝑖𝑗 . The vanilla gradient might perform better in
this parametrization.
However, we adopted the approach of using a family of probability distributions found
in the literature, with the parametrization commonly found in the literature. We then used
the vanilla gradient and the natural gradient on these distributions—and indeed the vanilla
gradient, or an approximation thereof, is routinely applied to RBMs in the literature to
optimize the log-likelihood of data (Hinton, 2002; Hinton et al., 2006; Bengio et al., 2007).
It was not obvious a priori (at least for us) that the vanilla gradient ascent favors ℎ = 1.
Since the first version of this article was written, this phenomenon has been recognized for
Boltzmann machines (Montavon and Müller, 2012).
This directly illustrates the specific influence of the chosen gradient (the two imple-
mentations only differ by the inclusion of the Fisher matrix): the natural gradient offers a
systematic way to recover symmetry from a non-symmetric gradient update.
Symmetry alone does not explain why IGO reaches the two optima simultaneously: a
symmetry-preserving stochastic algorithm could as well end up on either single optimum
with 50% probability in each run. The diversity-preserving property of IGO (Proposition 2)
offers a reasonable interpretation of why this does not happen.

Appendix C. Further Mathematical Properties of the IGO Flow


This appendix provides further mathematical properties of the IGO flow in general and in
specific scenarios.

C.1 Invariance Properties


Here we formally state the invariance properties of the IGO flow under various reparametriza-
tions. Since these results follow from the very construction of the algorithm, the proofs are
omitted.

Proposition 17 (𝑓 -invariance) Let 𝜙 : R → R be a strictly increasing function. Then the


trajectories of the IGO flow when optimizing the functions 𝑓 and 𝜙(𝑓 ) are the same.
The same is true for the discretized algorithm with population size 𝑁 and step size
𝛿𝑡 > 0.

Proposition 18 (𝜃-invariance) Let 𝜃′ = 𝜙(𝜃) be a smooth bijective function of 𝜃 and let


𝑃𝜃′ ′ = 𝑃𝜙−1 (𝜃′ ) . Let 𝜃𝑡 be the trajectory of the IGO flow when optimizing a function 𝑓 using
the distributions 𝑃𝜃 , initialized at 𝜃0 . Then the IGO flow trajectory (𝜃′ )𝑡 obtained from the
optimization of the function 𝑓 using the distributions 𝑃𝜃′ ′ , initialized at (𝜃′ )0 = 𝜙(𝜃0 ), is the
same, namely (𝜃′ )𝑡 = 𝜙(𝜃𝑡 ).

For the algorithm with finite 𝑁 and 𝛿𝑡 > 0, invariance under reparametrization of 𝜃 is
only true approximately, in the limit when 𝛿𝑡 → 0. As mentioned above, the IGO update

(17), with 𝑁 = ∞, is simply the Euler approximation scheme for the ordinary differential
equation (8) defining the IGO flow. At each step, the Euler scheme is known to make an error
𝑂(𝛿𝑡 2 ) with respect to the true flow. This error actually depends on the parametrization of
𝜃.
So the IGO updates for different parametrizations coincide at first order in 𝛿𝑡, and may,
in general, differ by 𝑂(𝛿𝑡 2 ). For instance the difference between the CMA-ES and xNES
updates is indeed 𝑂(𝛿𝑡 2 ), see Section 5.2.
For comparison, using the vanilla gradient results in a divergence of 𝑂(𝛿𝑡) at each step
between different parametrizations, so this divergence could be of the same magnitude as
the steps themselves.
In that sense, one can say that IGO algorithms are “more parametrization-invariant”
than other algorithms. This stems from their origin as a discretization of the IGO flow.
However, if the map 𝜙 is affine then this phenomenon disappears: parametrizations that
differ by an affine map on 𝜃 yield the same IGO algorithm.
The next proposition states that, for example, if one uses a family of distributions on
R𝑑 which is invariant under affine transformations, then IGO algorithms optimize equally
well a function and its image under any affine transformation (up to an obvious change in
the initialization). This proposition generalizes the well-known corresponding property of
CMA-ES (Hansen and Auger, 2014, Proposition 9).
This invariance under 𝑋-transformations only holds provided the 𝑋-transformation
preserves the “shape” of the family of probability distributions 𝑃𝜃 , as follows.
Let us define, as usual, the image (pushforward) of a probability distribution 𝑃 by a
transformation 𝜙 : 𝑋 → 𝑋 as the probability distribution 𝑃 ′ such that 𝑃 ′ (𝑌 ) = 𝑃 (𝜙−1 (𝑌 ))
for any subset 𝑌 ⊂ 𝑋 (Schilling, 2005, Chapter 7). In the continuous domain, the density of
the new distribution 𝑃 ′ is obtained by the usual change of variable formula involving the
Jacobian of 𝜙 (Schilling, 2005, Chapter 15).
We say that a transformation 𝜙 : 𝑋 → 𝑋 globally preserves a family of probability
distributions (𝑃𝜃 ), if the image of any 𝑃𝜃 by 𝜙 is equal to some distribution 𝑃𝜃′ in the same
family, and if moreover the correspondence 𝜃 ↦→ 𝜃′ is locally a diffeomorphism.
Proposition 19 (𝑋-invariance) Let 𝜙 : 𝑋 → 𝑋 be a one-to-one transformation of the
search space which globally preserves the family of measures 𝑃𝜃 . Let 𝜃𝑡 be the IGO flow
trajectory for the optimization of function 𝑓 , initialized at 𝑃𝜃0 . Let (𝜃′ )𝑡 be the IGO flow
trajectory for optimization of 𝑓 ∘ 𝜙−1 , initialized at the image of 𝑃𝜃0 by 𝜙. Then 𝑃(𝜃′ )𝑡 is
the image of 𝑃𝜃𝑡 by 𝜙.
For the discretized algorithm with population size 𝑁 and step size 𝛿𝑡 > 0, the same is
true up to an error of 𝑂(𝛿𝑡 2 ) per iteration. This error disappears if the action of the map 𝜙
on Θ by pushforward is affine.
The latter case of affine transforms is well exemplified by CMA-ES: here, using the
variance and mean as the parametrization of Gaussians, the new mean and variance after
an affine transform of the search space are an affine function of the old mean and variance;
specifically, for the affine transformation 𝐴 : 𝑥 ↦→ 𝐴𝑥 + 𝑏 we have (𝑚, 𝐶) ↦→ (𝐴𝑚 + 𝑏, 𝐴𝐶𝐴T ).
Another example, on the discrete search space 𝑋 = {0, 1}𝑑 , is the exchange of 0 and 1: for
reasonable choices of the family 𝑃𝜃 , the IGO flow and IGO algorithms will be invariant
under such a change in the way the data is presented.

C.2 Speed of the IGO Flow


Proposition 20 The speed of the IGO flow, i.e. the norm of d𝜃𝑡/d𝑡 in the Fisher metric, is at
most √( ∫_0^1 𝑤² − ( ∫_0^1 𝑤 )² ) where 𝑤 is the selection scheme.

The proof is given in Appendix D.


A bounded speed means that the IGO flow will not explode in finite time, or go out-of-
domain if the Fisher metric on the statistical manifold Θ is complete (for instance, the IGO
flow on Gaussian distributions will not yield non-positive or degenerate covariance matrices).
Due to the approximation terms 𝑂(𝛿𝑡 2 ), this may not be true of IGO algorithms.
This speed can be monitored in practice in at least two ways. The first is just to
compute the Fisher norm of the increment 𝜃𝑡+𝛿𝑡 − 𝜃𝑡 using the Fisher matrix; for small
𝛿𝑡 this is close to 𝛿𝑡 ‖d𝜃/d𝑡‖ with ‖ · ‖ the Fisher metric. The second is as follows: since the
Fisher metric coincides with the Kullback–Leibler divergence up to a factor 1/2, we have
KL(𝑃𝜃𝑡+𝛿𝑡 || 𝑃𝜃𝑡 ) ≈ (1/2) 𝛿𝑡² ‖d𝜃/d𝑡‖², at least for small 𝛿𝑡. Since it is relatively easy to estimate
KL(𝑃𝜃𝑡+𝛿𝑡 || 𝑃𝜃𝑡 ) by comparing the new and old log-likelihoods of points in a Monte Carlo
sample, one can obtain an estimate of ‖d𝜃/d𝑡‖.
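A sketch of this second option (the names `sample` and `log_p` are placeholders for the model's sampler and log-likelihood, which are assumed to be available): estimating KL(𝑃𝜃𝑡+𝛿𝑡 || 𝑃𝜃𝑡 ) as the average log-likelihood difference on points drawn from the new distribution, and solving KL ≈ (1/2) 𝛿𝑡² ‖d𝜃/d𝑡‖² for the speed.

```python
import numpy as np

def estimated_speed(theta_new, theta_old, sample, log_p, dt, n=1000):
    # KL(P_new || P_old) estimated from a Monte Carlo sample of P_new
    xs = sample(theta_new, n)
    kl_hat = np.mean(log_p(theta_new, xs) - log_p(theta_old, xs))
    # invert KL ~ (1/2) dt^2 * speed^2
    return np.sqrt(max(2.0 * kl_hat, 0.0)) / dt
```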

Corollary 21 Consider an IGO algorithm with selection scheme 𝑤, step size 𝛿𝑡 and sample
size 𝑁 . Then, with probability 1,

KL(𝑃𝜃𝑡+𝛿𝑡 || 𝑃𝜃𝑡 ) / 𝛿𝑡² ≤ (1/2) Var[0,1] 𝑤 + 𝑜(1)_{(𝛿𝑡,𝑁 )→(0,∞)} .
For instance, with 𝑤(𝑞) = 1_{𝑞 ≤ 𝑞0} and neglecting the error terms, an IGO algorithm
introduces at most (1/2) 𝛿𝑡² 𝑞0 (1 − 𝑞0 ) bits of information (in base 𝑒) per iteration into the
probability distribution 𝑃𝜃 . The proof is given in Appendix D.
Thus, the time discretization parameter 𝛿𝑡 is not just an arbitrary variable: it has an
intrinsic interpretation related to a number of bits introduced at each step of the algorithm.
This kind of relationship suggests, more generally, to use the Kullback–Leibler divergence as
an external and objective way to measure learning rates in those optimization algorithms
which use probability distributions.
The result above is only an upper bound. Maximal speed can be achieved only if all “good”
points point in the same direction. If the various good points in the sample suggest moves
in inconsistent directions, then the IGO update will be much smaller. While non-consistent
moves are generally to be expected if 𝑁 < dim Θ, it may also be a sign that the signal is
noisy, or that the family of distributions 𝑃𝜃 is not well suited to the problem at hand and
should be enriched.
As an example, using a family of Gaussian distributions with unknown mean and fixed
identity variance on R𝑑 , one checks that for the optimization of a linear function on R𝑑 , with
the weight 𝑤(𝑢) = 1_{𝑢<1/2} , the IGO flow moves at constant speed 1/√(2𝜋) ≈ 0.4, whatever
the dimension 𝑑. On a rapidly varying sinusoidal function, the moving speed will be much
slower because there are “good” and “bad” points in all directions.
This may suggest ways to design the selection scheme 𝑤 to achieve maximal speed in some
instances. Indeed, looking at the proof of the proposition, which involves a Cauchy–Schwarz
inequality, one can see that the maximal speed is achieved only if there is a linear relationship

between the weights 𝑊𝜃𝑓 (𝑥) and the gradient ∇̃𝜃 ln 𝑃𝜃 (𝑥). For instance, for the optimization
of a linear function on R𝑑 using Gaussian measures of known variance, the maximal speed
will be achieved when the selection scheme 𝑤(𝑢) is the inverse of the Gaussian cumulative
distribution function. (In particular, 𝑤(𝑢) tends to +∞ when 𝑢 → 0 and to −∞ when
𝑢 → 1.) This is in accordance with known results: the expected value of the 𝑖-th order
statistic of 𝑁 standard Gaussian variates is the optimal 𝑤̂𝑖 value for Gaussians on the sphere
function, 𝑓 (𝑥) = ∑_𝑖 𝑥𝑖², where 𝑑 → ∞ (Beyer, 2001; Arnold, 2006). For 𝑁 → ∞, this order
statistic converges to the inverse Gaussian cumulative distribution function.
statistic converges to the inverse Gaussian cumulative distribution function.

C.3 Noisy Objective Functions


Suppose that the objective function 𝑓 is non-deterministic: each time we ask for the value
of 𝑓 at a point 𝑥 ∈ 𝑋, we get a random result. In this setting we may write the random
value 𝑓 (𝑥) as 𝑓 (𝑥) = 𝑓˜(𝑥, 𝜔) where 𝜔 is an unseen random seed, and 𝑓˜ is a deterministic
function of 𝑥 and 𝜔. Without loss of generality, up to a change of variables we can assume
that 𝜔 is uniformly distributed in [0, 1].
We can still use the IGO algorithm without modification in this context. One might
wonder which properties (consistency of sampling, etc.) still apply when 𝑓 is not deterministic.
Actually, IGO algorithms for noisy functions fit very nicely into the IGO framework: the
following proposition allows to transfer any property of IGO to the case of noisy functions.

Proposition 22 (Noisy IGO) Let 𝑓 be a random function of 𝑥 ∈ 𝑋, namely, 𝑓 (𝑥) =


𝑓˜(𝑥, 𝜔) where 𝜔 is a random variable uniformly distributed in [0, 1], and 𝑓˜ is a deterministic
function of 𝑥 and 𝜔. Then the two following algorithms coincide:

∙ The IGO algorithm (16), using a family of distributions 𝑃𝜃 on space 𝑋, applied to the
noisy function 𝑓 , and where the samples are ranked according to the random observed
value of 𝑓 (here we assume that, for each sample, the noise 𝜔 is independent from
everything else);

∙ The IGO algorithm on space 𝑋 ×[0, 1], using the family of distributions 𝑃˜𝜃 = 𝑃𝜃 ⊗𝑈[0,1] ,
applied to the deterministic function 𝑓˜. Here 𝑈[0,1] denotes the uniform law on [0, 1].

The (easy) proof is given in the Appendix D.


This proposition states that noisy optimization can be modeled as ordinary distribution-
based optimization with a component of the distribution being independent of the control
parameter 𝜃. Conversely, any component of the search space in which a distribution-based
optimization algorithm cannot perform selection or specialization will effectively act as a
random noise on the objective function.
As a consequence of this result, all properties of IGO can be transferred to the noisy case.
Consider, for instance, consistency of sampling (Theorem 6). The 𝑁 -sample IGO update
rule for the noisy case is identical to the non-noisy case (17):

𝜃𝑡+𝛿𝑡 = 𝜃𝑡 + 𝛿𝑡 𝐼^{−1}(𝜃𝑡 ) ∑_{𝑖=1}^{𝑁} 𝑤̂𝑖 𝜕 ln 𝑃𝜃 (𝑥𝑖 )/𝜕𝜃 |_{𝜃=𝜃𝑡}

where each weight 𝑤 ̂︀𝑖 computed from (14) now incorporates noise from the objective
function because the rank of 𝑥𝑖 is computed on the random function, or equivalently
on the deterministic function 𝑓˜: rk(𝑥𝑖 ) = #{𝑗, 𝑓˜(𝑥𝑗 , 𝜔𝑗 ) < 𝑓˜(𝑥𝑖 , 𝜔𝑖 )}.
Consistency of sampling (Theorem 6) thus takes the following form through Proposi-
tion 22: When 𝑁 → ∞, the 𝑁 -sample IGO update rule on the noisy function 𝑓 converges
with probability 1 to the update rule
𝜃𝑡+𝛿𝑡 = 𝜃𝑡 + 𝛿𝑡 ∇̃𝜃 ∫_0^1 ∫ 𝑊𝜃𝑓˜𝑡 (𝑥, 𝜔) 𝑃𝜃 (d𝑥) d𝜔
       = 𝜃𝑡 + 𝛿𝑡 ∇̃𝜃 ∫ 𝑊̄𝜃𝑓𝑡 (𝑥) 𝑃𝜃 (d𝑥)    (57)

where 𝑊̄𝜃𝑓 (𝑥) = E𝜔 𝑊𝜃𝑓˜ (𝑥, 𝜔). Thus when 𝑁 → ∞ the update becomes deterministic as the
effects of noise get averaged, as could be expected.
Thus, applying the IGO algorithm to noisy objective functions as in Proposition 22
requires defining the IGO flow for noisy objective functions as
d𝜃𝑡/d𝑡 = ∇̃𝜃 ∫ 𝑊̄𝜃𝑓𝑡 (𝑥) 𝑃𝜃 (d𝑥)    (58)

where 𝑊̄𝜃𝑓 (𝑥) = E𝜔 𝑊𝜃𝑓˜ (𝑥, 𝜔) is the average weight of 𝑥 over the values 𝑓 (𝑥). Thus, the
effects of noise remain visible even for 𝑁 → ∞, as 𝑊̄ is in general flatter than 𝑊 .
Note that there is another possible way to define the IGO flow for noisy objective
functions. The flow (58) amounts to taking a point 𝑥, computing a (random) value 𝑓 (𝑥),
computing the level 𝑞𝜃< (𝑥) = Pr𝑥′ ∼𝑃𝜃 (𝑓 (𝑥′ ) < 𝑓 (𝑥)) of this random value 𝑓 (𝑥), applying
the selection scheme 𝑤, and then averaging. In this approach the value 𝑞𝜃< (𝑥) is a random
variable depending on the value of 𝑓 (𝑥). Alternatively, we could define the value 𝑞𝜃 (𝑥) by
applying Definition 3 unchanged, namely, 𝑞𝜃< (𝑥) = Pr𝑥′ ∼𝑃𝜃 (𝑓 (𝑥′ ) < 𝑓 (𝑥)) in which 𝑓 (𝑥) itself
is treated as a random variable under the probability Pr, so that 𝑞𝜃< (𝑥) is deterministic and
takes into account all possible values for 𝑓 (𝑥). Then we could apply the selection scheme
𝑤 to this value 𝑞𝜃< (𝑥) as in Definition 3 and define the IGO flow accordingly. The value of
𝑞𝜃< (𝑥) in this second version is the expected value of 𝑞𝜃< (𝑥) in the first version.
When the selection scheme 𝑤 is affine, these two approaches coincide; however this is not
the case in general, as the second version averages the 𝑞-values while the first version averages
the weights 𝑤(𝑞). The second version would be the 𝑁 → ∞ limit of a slightly more complex
algorithm using several evaluations of 𝑓 for each sample 𝑥𝑖 in order to compute noise-free
average ranks and quantiles. The first version has the advantage of leaving the algorithm
unchanged and of inheriting properties from the deterministic case via Proposition 22.

C.4 The IGO Flow for Linear Functions on {0, 1}𝑑 and R𝑑
In this section we take a closer look at the IGO differential equation solutions of (8) for some
simple examples of objective functions, for which it is possible to obtain exact information
about these IGO trajectories.
We start with the discrete search space 𝑋 = {0, 1}𝑑 and linear functions (to be minimized)
defined as 𝑓 (𝑥) = 𝑐 − ∑_{𝑖=1}^{𝑑} 𝛼𝑖 𝑥𝑖 with 𝛼𝑖 > 0. (So maximization of the classical onemax
function 𝑓onemax (𝑥) = ∑_{𝑖=1}^{𝑑} 𝑥𝑖 is covered by setting 𝛼𝑖 = 1.) The differential equation of

the IGO flow (8) for the Bernoulli measures 𝑃𝜃 (𝑥) = 𝑝𝜃1 (𝑥1 ) . . . 𝑝𝜃𝑑 (𝑥𝑑 ) defined on 𝑋 is the
𝛿𝑡 → 0 limit of the IGO-PBIL update (36):
d𝜃𝑖𝑡/d𝑡 = ∫ 𝑊𝜃𝑓𝑡 (𝑥)(𝑥𝑖 − 𝜃𝑖𝑡 ) 𝑃𝜃𝑡 (d𝑥) =: 𝑔𝑖 (𝜃𝑡 ) .    (59)
Although finding the analytical solution of the differential equation (59) for any initial
condition seems a bit intricate, we show that the solution of (59) converges to (1, . . . , 1)
starting from any initial 𝜃 ∈ (0, 1]𝑑 . Note that starting with 𝜃𝑖 = 0 for some 𝑖 prevents IGO
(and PBIL) from sampling any 1 for component 𝑖, so that the components of 𝜃 that are
equal to 0 at startup will always stay so; in that case the IGO flow effectively works as in a
smaller-dimensional space.
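Before turning to the proof, note that (59) is easy to integrate numerically. A minimal Euler/Monte Carlo sketch for the onemax case (𝛼𝑖 = 1, written as minimization of 𝑓 (𝑥) = −∑_𝑖 𝑥𝑖 ), with truncation selection 𝑤(𝑞) = 1_{𝑞 ≤ 𝑞0}, illustrates the convergence to (1, . . . , 1) established next; the finite-sample quantile weights and the tie-breaking by ranking are simplifications.

```python
import numpy as np

def igo_pbil_flow(d=10, q0=0.2, dt=0.05, steps=2000, n=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.full(d, 0.5)
    for _ in range(steps):
        x = (rng.random((n, d)) < theta).astype(float)   # samples from the Bernoulli P_theta
        f = -x.sum(axis=1)                               # onemax, written as a minimization
        W = np.zeros(n)
        W[np.argsort(f)[: int(q0 * n)]] = 1.0            # W_theta^f via empirical quantiles
        g = ((x - theta) * W[:, None]).mean(axis=0)      # Monte Carlo estimate of g(theta) in (59)
        theta = np.clip(theta + dt * g, 0.0, 1.0)
    return theta                                         # approaches (1, ..., 1)
```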
To prove convergence to (1, . . . , 1), we establish the following result:
Lemma 23 Assume that the selection scheme 𝑤 : [0, 1] → R is bounded, that 𝑤 is a
nonincreasing function and that there exists 𝑞0 , 𝑞1 ∈ (0, 1) such that 𝑤(𝑞0 ) ≠ 𝑤(𝑞1 ). On
𝑓 (𝑥) = 𝑐 − ∑_{𝑖=1}^{𝑑} 𝛼𝑖 𝑥𝑖 , the solution of (59) satisfies ∑_{𝑖=1}^{𝑑} 𝛼𝑖 d𝜃𝑖𝑡/d𝑡 ≥ 0; moreover ∑_{𝑖=1}^{𝑑} 𝛼𝑖 d𝜃𝑖𝑡/d𝑡 = 0
if and only if 𝜃 ∈ {0, 1}𝑑 .
if and only 𝜃 ∈ {0, 1}𝑑 .


Proof We compute ∑_{𝑖=1}^{𝑑} 𝛼𝑖 𝑔𝑖 (𝜃𝑡 ) and find that

∑_{𝑖=1}^{𝑑} 𝛼𝑖 d𝜃𝑖𝑡/d𝑡 = ∫ 𝑊𝜃𝑓𝑡 (𝑥) ( ∑_{𝑖=1}^{𝑑} 𝛼𝑖 𝑥𝑖 − ∑_{𝑖=1}^{𝑑} 𝛼𝑖 𝜃𝑖𝑡 ) 𝑃𝜃𝑡 (d𝑥)
                 = ∫ 𝑊𝜃𝑓𝑡 (𝑥) (𝑓 (𝜃𝑡 ) − 𝑓 (𝑥)) 𝑃𝜃𝑡 (d𝑥)
                 = E[𝑊𝜃𝑓𝑡 (𝑥)] E[𝑓 (𝑥)] − E[𝑊𝜃𝑓𝑡 (𝑥)𝑓 (𝑥)]

where the expectations are taken under 𝑃𝜃𝑡 . Using Lemma 24 below, we have that
E[𝑊𝜃𝑓𝑡 (𝑥)] E[𝑓 (𝑥)] − E[𝑊𝜃𝑓𝑡 (𝑥)𝑓 (𝑥)] ≥ 0 and in addition E[𝑊𝜃𝑓𝑡 (𝑥)] E[𝑓 (𝑥)] − E[𝑊𝜃𝑓𝑡 (𝑥)𝑓 (𝑥)] = 0
if and only if 𝜃𝑡 ∈ {0, 1}𝑑 .

Lemma 24 Under the assumptions of Lemma 23


E[−𝑊𝜃𝑓𝑡 (𝑥)𝑓 (𝑥)] ≥ E[−𝑊𝜃𝑓𝑡 (𝑥)] E[𝑓 (𝑥)]    (60)
with equality if and only if 𝜃𝑡 ∈ {0, 1}𝑑 . Here the expectations are taken under 𝑥 ∼ 𝑃𝜃𝑡 .

Proof We want to prove that Cov[−𝑊𝜃𝑓𝑡 (𝑥), 𝑓 (𝑥)] ≥ 0 with equality if and only if 𝜃𝑡 ∈ {0, 1}𝑑 .
Let us define the random variable 𝑍 = 𝑓 (𝑥) when 𝑥 ∼ 𝑃𝜃𝑡 . Remark that 𝑊𝜃𝑓𝑡 (𝑥) = G (𝑍) where

G (𝑧) = ( 1 / (𝐹𝜃≤𝑡 (𝑧) − 𝐹𝜃<𝑡 (𝑧)) ) ∫_{𝐹𝜃<𝑡 (𝑧)}^{𝐹𝜃≤𝑡 (𝑧)} 𝑤(𝑞) d𝑞 ,    (61)

where 𝐹𝜃≤𝑡 (𝑧) = Pr𝑥′ ∼𝑃𝜃𝑡 (𝑓 (𝑥′ ) ≤ 𝑧) and 𝐹𝜃<𝑡 (𝑧) = Pr𝑥′ ∼𝑃𝜃𝑡 (𝑓 (𝑥′ ) < 𝑧). That is, G is the
average of 𝑤 on the interval (𝐹𝜃<𝑡 (𝑧), 𝐹𝜃≤𝑡 (𝑧)). Since 𝑤 is nonincreasing, the function G is also
nonincreasing. We have the following equality
Cov[−𝑊𝜃𝑓𝑡 (𝑥), 𝑓 (𝑥)] = Cov[−G (𝑍), 𝑍]

48
Information-Geometric Optimization

where −G is nondecreasing. Following (Thorisson, 2000, Chapter 1), we write


Cov[−G (𝑍), 𝑍] = (1/2) Cov[−G (𝑍) + G (𝑍 ′ ), 𝑍 − 𝑍 ′ ]

where 𝑍 and 𝑍 ′ are independent following the distribution 𝑃𝜃𝑡 . Given that both the mean
of −G (𝑍) + G (𝑍 ′ ) and 𝑍 − 𝑍 ′ are zero,

Cov[−G (𝑍), 𝑍] = (1/2) E[(−G (𝑍) + G (𝑍 ′ ))(𝑍 − 𝑍 ′ )] = (1/2) E[| − G (𝑍) + G (𝑍 ′ )| |𝑍 − 𝑍 ′ |] ≥ 0 ,
where the last equality holds because −G is nondecreasing. This proves the main statement
of the lemma.
We will now show that if 𝜃𝑡 ∉ {0, 1}𝑑 , then E[| − G (𝑍) + G (𝑍 ′ )| |𝑍 − 𝑍 ′ |] > 0 and thus
consequently Cov[−𝑊𝜃𝑓𝑡 (𝑥), 𝑓 (𝑥)] > 0. We simply need to show that 𝑍 can take with strictly
positive probabilities 𝑝1 and 𝑝2 two distinct values 𝑧1 and 𝑧2 such that G (𝑧1 ) and G (𝑧2 ) are
distinct. We will then have

𝐸[| − G (𝑍) + G (𝑍 ′ )||𝑍 − 𝑍 ′ |] > (| − G (𝑧1 ) + G (𝑧2 )||𝑧1 − 𝑧2 |) 𝑝1 𝑝2 > 0 .

Let us assume that one single 𝜃𝑖 belongs to (0, 1) and all the others are either 0 or 1. We
can assume without loss of generality that 𝜃1 ∈ (0, 1). Then 𝑓 (𝑥) takes (only) two distinct
values with positive probability; let 𝑧1 be the one associated with 𝑥1 = 1 and 𝑧2 with 𝑥1 = 0.
Then 𝑧1 < 𝑧2 thanks to the definition of 𝑓 and because 𝛼𝑖 > 0. Moreover, unravelling the
definitions we find G (𝑧1 ) = (1/𝜃1 ) ∫_0^{𝜃1} 𝑤(𝑞) d𝑞 and G (𝑧2 ) = (1/(1 − 𝜃1 )) ∫_{𝜃1}^{1} 𝑤(𝑞) d𝑞.
The assumption states that there exists 𝑞0 , 𝑞1 ∈ (0, 1) such that 𝑤(𝑞0 ) ≠ 𝑤(𝑞1 ). Assume
without loss of generality that 𝑞0 < 𝑞1 and 𝑤(𝑞0 ) > 𝑤(𝑞1 ). Either 𝑤(𝑞1 ) ≥ 𝑤(𝜃1 ) or
𝑤(𝑞1 ) < 𝑤(𝜃1 ). In the first case, then 𝜃1 > 𝑞0 and

(1/𝜃1 ) ∫_0^{𝜃1} 𝑤(𝑞) d𝑞 = (1/𝜃1 ) ( ∫_0^{𝑞0} 𝑤(𝑞) d𝑞 + ∫_{𝑞0}^{𝜃1} 𝑤(𝑞) d𝑞 ) ≥ (1/𝜃1 ) (𝑤(𝑞0 )𝑞0 + 𝑤(𝜃1 )(𝜃1 − 𝑞0 )) > 𝑤(𝜃1 ) ,
Meanwhile, G (𝑧2 ) = (1/(1 − 𝜃1 )) ∫_{𝜃1}^{1} 𝑤(𝑞) d𝑞 ≤ 𝑤(𝜃1 ) and thus

G (𝑧1 ) > G (𝑧2 ) .

If 𝑤(𝑞1 ) < 𝑤(𝜃1 ), then 𝑞1 > 𝜃1 and


G (𝑧2 ) = (1/(1 − 𝜃1 )) ∫_{𝜃1}^{1} 𝑤(𝑞) d𝑞 = (1/(1 − 𝜃1 )) ( ∫_{𝜃1}^{𝑞1} 𝑤(𝑞) d𝑞 + ∫_{𝑞1}^{1} 𝑤(𝑞) d𝑞 )
       ≤ (1/(1 − 𝜃1 )) (𝑤(𝜃1 )(𝑞1 − 𝜃1 ) + 𝑤(𝑞1 )(1 − 𝑞1 )) < 𝑤(𝜃1 )    (62)
and thus G (𝑧2 ) < 𝑤(𝜃1 ) ≤ G (𝑧1 ) as well.
In the case where we have not only two but 𝑙 distinct fitnesses that can be sampled with
strictly positive probabilities, say 𝑧1 < . . . < 𝑧𝑙 , we can show similarly that G (𝑧1 ) > G (𝑧𝑙 ).

We are now ready to state the convergence of the trajectories of IGO-PBIL (and cGA)
update in the limit of 𝛿𝑡 → 0.

Proposition 25 Assume that the selection scheme 𝑤 : [0, 1] → R is bounded, that 𝑤 is a


nonincreasing function and that there exists 𝑞0 , 𝑞1 ∈ (0, 1) such that 𝑤(𝑞0 ) ̸= 𝑤(𝑞1 ). On the
linear functions 𝑓 (𝑥) = 𝑐 − ∑_{𝑖=1}^{𝑑} 𝛼𝑖 𝑥𝑖 , the critical point 𝜃 = (1, . . . , 1) of the IGO-PBIL
differential equation (59) is stable and for any initial condition 𝜃 ∈ (0, 1]𝑑 , the continuous-time
trajectory solving (59) converges to (1, . . . , 1).

Proof Remark first that if we start at 𝜃0 ∈ [𝜀, 1]𝑑 , then the solution of (59) stays in
[𝜀, 1]𝑑 . Indeed, the trajectory cannot go out of [0, 1]𝑑 and, in addition, for 𝜃 ∈ [𝜀, 1)𝑑 we
have 𝑔𝑖 (𝜃) > 0 so that the trajectory of (59) cannot go out of [𝜀, 1]𝑑 . The proof that
𝑔𝑖 (𝜃) > 0 for 𝜃 ∈ [𝜀, 1)𝑑 is similar to the proof that Cov[−𝑊𝜃𝑓𝑡 (𝑥), 𝑓 (𝑥)] > 0 in Lemma 24:
namely, we can write 𝑔𝑖 (𝜃) = ∫ E[𝑊𝜃𝑓𝑡 (𝑥)|𝑥𝑖 ](𝑥𝑖 − 𝜃𝑖 )𝑝𝜃𝑖 (𝑥𝑖 ) d𝑥𝑖 = 𝜃𝑖 (1 − 𝜃𝑖 )[ℎ(1) − ℎ(0)] where
ℎ(𝑥𝑖 ) = E[𝑊𝜃𝑓𝑡 (𝑥)|𝑥𝑖 ] = E[G (𝑓 (𝑥))|𝑥𝑖 ] where G is defined in (61). Then ℎ(1) > ℎ(0) comes
from the increase of G when going to better function values.
Consider now on [𝜀, 1]𝑑 the non-negative function 𝑉 (𝜃) = ∑_{𝑖=1}^{𝑑} 𝛼𝑖 − ∑_{𝑖=1}^{𝑑} 𝛼𝑖 𝜃𝑖 . Then
𝑉 * (𝜃) := ∇𝑉 (𝜃) · 𝑔(𝜃) ≤ 0, and 𝑉 * (𝜃) equals zero only for 𝜃 ∈ {0, 1}𝑑 according to Lemma 23.
Hence for 𝜃 ∈ [𝜀, 1]𝑑 , the function 𝑉 is a Lyapunov function (Khalil, 1996; Agarwal and
O’Regan, 2008), which is minimal at (1, . . . , 1) and such that 𝑉 * (𝜃) < 0 except at (1, . . . , 1).
Therefore the trajectory of (59) will converge to (1, . . . , 1) as 𝑡 goes to infinity (the proof
is similar to that of (Khalil, 1996, Theorem 4.1)). Given that this holds for any 𝜀 > 0, we
can conclude that the trajectory of (59) converges to (1, . . . , 1) starting from any 𝜃 ∈ (0, 1]𝑑 .

We now consider on R𝑑 the family of multivariate normal distributions 𝑃𝜃 = 𝒩 (𝑚, 𝜎 2 𝐼𝑑 )


with covariance matrix equal to 𝜎 2 𝐼𝑑 . The parameter 𝜃 thus has 𝑑 + 1 components 𝜃 =
(𝑚, 𝜎) ∈ R𝑑 × R. The natural gradient update using this family was derived in Glasmachers
et al. (2010); from this we can derive the IGO differential equation which reads:

d𝑚𝑡/d𝑡 = ∫_{R𝑑} 𝑊𝜃𝑓𝑡 (𝑥)(𝑥 − 𝑚𝑡 ) 𝑃𝒩 (𝑚𝑡 ,(𝜎𝑡 )² 𝐼𝑑 ) (𝑥) d𝑥    (63)

d𝜎̃𝑡/d𝑡 = ∫_{R𝑑} (1/(2𝑑)) { ∑_{𝑖=1}^{𝑑} [ ((𝑥𝑖 − 𝑚𝑡𝑖 )/𝜎𝑡 )² − 1 ] } 𝑊𝜃𝑓𝑡 (𝑥) 𝑃𝒩 (𝑚𝑡 ,(𝜎𝑡 )² 𝐼𝑑 ) (𝑥) d𝑥    (64)

where 𝜎𝑡 and 𝜎̃𝑡 are linked via 𝜎𝑡 = exp(𝜎̃𝑡 ) or 𝜎̃𝑡 = ln(𝜎𝑡 ). Denoting by 𝒩 a random vector
following a centered multivariate normal distribution with identity covariance matrix, we can
rewrite the gradient flow as

d𝑚𝑡/d𝑡 = 𝜎𝑡 E[ 𝑊𝜃𝑓𝑡 (𝑚𝑡 + 𝜎𝑡 𝒩 ) 𝒩 ]    (65)

d𝜎̃𝑡/d𝑡 = E[ (1/2) ( ‖𝒩 ‖²/𝑑 − 1 ) 𝑊𝜃𝑓𝑡 (𝑚𝑡 + 𝜎𝑡 𝒩 ) ] .    (66)

Let us analyze the solution of the previous system on linear functions. Without loss of
generality (thanks to invariance) we can consider the linear function 𝑓 (𝑥) = 𝑥1 . We have

𝑊𝜃𝑓𝑡 (𝑥) = 𝑤(Pr(𝑚𝑡1 + 𝜎 𝑡 𝑍1 < 𝑥1 ))

where 𝑍1 follows a standard one-dimensional normal distribution and thus

𝑊𝜃𝑓𝑡 (𝑚𝑡 + 𝜎 𝑡 𝒩 ) = 𝑤(Pr𝑍1 ∼𝒩 (0,1) (𝑍1 < 𝒩1 )) (67)


= 𝑤(ℱ(𝒩1 )) (68)

with ℱ the cumulative distribution of a standard normal distribution, and 𝒩1 the first
component of 𝒩 . The differential equation thus simplifies into
d𝑚𝑡/d𝑡 = 𝜎𝑡 ( E[𝑤(ℱ(𝒩1 ))𝒩1 ], 0, . . . , 0 )ᵀ    (69)

d𝜎̃𝑡/d𝑡 = (1/(2𝑑)) E[ (|𝒩1 |² − 1) 𝑤(ℱ(𝒩1 )) ] .    (70)

Consider, for example, the truncation selection function, i.e. 𝑤(𝑞) = 1_{𝑞 ≤ 𝑞0} where 𝑞0 ∈
(0, 1). (Within the IGO algorithm, this results in so-called intermediate recombination of
the 𝑞0 × 𝑁 best samples.) We find that

d𝑚𝑡1/d𝑡 = 𝜎𝑡 E[ 𝒩1 1_{𝒩1 ≤ ℱ^{−1}(𝑞0 )} ] =: 𝜎𝑡 𝛽    (71)

d𝜎̃𝑡/d𝑡 = (1/(2𝑑)) ( ∫_0^{𝑞0} ℱ^{−1}(𝑢)² d𝑢 − 𝑞0 ) =: 𝛼 .    (72)

The solution of the IGO flow for the linear function 𝑓 (𝑥) = 𝑥1 is thus given by

𝑚𝑡1 = 𝑚01 + (𝜎0 𝛽/𝛼) exp(𝛼𝑡)    (73)

𝜎𝑡 = 𝜎0 exp(𝛼𝑡) .    (74)

The coefficient 𝛽 is negative for any 𝑞0 < 1. The coefficient 𝛼 is positive if and only if
𝑞0 < 1/2 by a simple calculus argument12 ; this corresponds to selecting less than half of the
sampled points in an ES. In this case the step size 𝜎 𝑡 grows exponentially fast to infinity
and the mean vector moves along the gradient direction towards minus ∞ at the same
rate. If more than half of the points are selected, 𝑞0 > 1/2, the step size will decrease to
zero exponentially fast and the mean vector will get stuck (compare also Grahl et al. 2005;
Hansen 2006a; Pošík 2008).
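The two constants are easy to evaluate numerically; a sketch using SciPy (ℱ = `norm.cdf`, ℱ⁻¹ = `norm.ppf`) confirms the sign change of 𝛼 at 𝑞0 = 1/2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def alpha_beta(q0, d=1):
    # coefficients of (71)-(72) for truncation selection w(q) = 1_{q <= q0}
    z = norm.ppf(q0)                                               # F^{-1}(q0)
    beta = quad(lambda x: x * norm.pdf(x), -np.inf, z)[0]          # E[N1 1_{N1 <= z}]
    # integral in (72) rewritten with the change of variables u = F(x)
    alpha = (quad(lambda x: x**2 * norm.pdf(x), -np.inf, z)[0] - q0) / (2 * d)
    return alpha, beta

print(alpha_beta(0.2))   # alpha > 0: step size grows
print(alpha_beta(0.7))   # alpha < 0: step size collapses
```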
For an analysis of the solutions of the system of differential equations (65) and (66)
on more complex functions, namely convex-quadratic functions and twice continuously
differentiable functions, we refer to Akimoto et al. (2012).
12. Indeed 𝛼 = (1/(2𝑑√(2𝜋))) ∫_{−∞}^{ℱ^{−1}(𝑞0 )} (𝑥² − 1) exp(−𝑥²/2) d𝑥 = (1/(2𝑑√(2𝜋))) 𝑔(ℱ^{−1}(𝑞0 )) where 𝑔(𝑦) = ∫_{−∞}^{𝑦} (𝑥² − 1) exp(−𝑥²/2) d𝑥.
Using 𝑔(0) = 0 and lim_{𝑦→±∞} 𝑔(𝑦) = 0, and studying the sign of 𝑔 ′ (𝑦), we find
that 𝑔 is positive for 𝑦 < 0 and negative for 𝑦 > 0. Since ℱ^{−1}(𝑞0 ) < 0 if and only if 𝑞0 < 1/2, we find
that 𝛼 = (1/(2𝑑√(2𝜋))) 𝑔(ℱ^{−1}(𝑞0 )) is positive if and only if 𝑞0 < 1/2.

Appendix D. Proofs
This final appendix provides longer proofs of propositions and theorems of the paper.

D.1 Proof of Proposition 2


We begin with a calculus lemma which will also be used for the proof of Theorem 10. The
proof is omitted and amounts to writing the maximum of a quadratic function obtained by
the second-order Taylor expansion of 𝑓 .

Lemma 26 Let 𝑓 be real-valued function on a finite-dimensional vector space 𝐸 equipped


with a positive definite quadratic form ‖ · ‖2 . Assume 𝑓 is smooth and has at most quadratic
growth at infinity. Then, for any 𝑥 ∈ 𝐸, we have

∇𝑓 (𝑥) = lim_{𝜀→0} (1/𝜀) arg max_ℎ { 𝑓 (𝑥 + ℎ) − (1/(2𝜀)) ‖ℎ‖² }

where ∇ is the gradient associated with the norm ‖ · ‖. Equivalently,

arg max_𝑦 { 𝑓 (𝑦) − (1/(2𝜀)) ‖𝑦 − 𝑥‖² } = 𝑥 + 𝜀∇𝑓 (𝑥) + 𝑂(𝜀²)

when 𝜀 → 0.

Proposition 2 follows by taking the Fisher information metric at 𝜃0 for the metric ‖ · ‖2 ,
and using the relation KL(𝑃 || 𝑄) = − Ent(𝑃 ) + log #𝑋 if 𝑄 is the uniform distribution on
a finite space 𝑋.

D.2 Proof of Theorem 6 (Convergence of Empirical Means and Quantiles)


Let us give a more precise statement including the necessary regularity conditions.

Proposition 27 Let 𝜃 ∈ Θ. Assume that the derivative 𝜕 ln 𝑃𝜃 (𝑥)/𝜕𝜃 exists for 𝑃𝜃 -almost all
𝑥 ∈ 𝑋 and that E𝑃𝜃 |𝜕 ln 𝑃𝜃 (𝑥)/𝜕𝜃|² < +∞. Assume that the function 𝑤 is non-increasing and
bounded.
Let (𝑥𝑖 )𝑖∈N be a sequence of independent samples of 𝑃𝜃 . Then with probability 1, as
𝑁 → ∞ we have

(1/𝑁 ) ∑_{𝑖=1}^{𝑁} 𝑊̂ 𝑓 (𝑥𝑖 ) 𝜕 ln 𝑃𝜃 (𝑥𝑖 )/𝜕𝜃 → ∫ 𝑊𝜃𝑓 (𝑥) 𝜕 ln 𝑃𝜃 (𝑥)/𝜕𝜃 𝑃𝜃 (d𝑥)

where

𝑊̂ 𝑓 (𝑥𝑖 ) = 𝑤( (rk𝑁 (𝑥𝑖 ) + 1/2)/𝑁 )

with rk𝑁 (𝑥𝑖 ) = #{1 ≤ 𝑗 ≤ 𝑁, 𝑓 (𝑥𝑗 ) < 𝑓 (𝑥𝑖 )}. (When there are 𝑓 -ties in the sample, 𝑊̂ 𝑓 (𝑥𝑖 )
is defined as the average of 𝑤((𝑟 + 1/2)/𝑁 ) over the possible rankings 𝑟 of 𝑥𝑖 .)

Proof Let 𝑔 : 𝑋 → R be any function with E𝑃𝜃 𝑔² < ∞. We will show that (1/𝑁 ) ∑ 𝑊̂ 𝑓 (𝑥𝑖 )𝑔(𝑥𝑖 ) →
∫ 𝑊𝜃𝑓 (𝑥)𝑔(𝑥) 𝑃𝜃 (d𝑥). Applying this with 𝑔 equal to the components of 𝜕 ln 𝑃𝜃 (𝑥)/𝜕𝜃 will yield
the result.
Let us decompose

(1/𝑁 ) ∑ 𝑊̂ 𝑓 (𝑥𝑖 )𝑔(𝑥𝑖 ) = (1/𝑁 ) ∑ 𝑊𝜃𝑓 (𝑥𝑖 )𝑔(𝑥𝑖 ) + (1/𝑁 ) ∑ (𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 ))𝑔(𝑥𝑖 ).

Each summand in the first term involves only one sample 𝑥𝑖 (contrary to 𝑊̂ 𝑓 (𝑥𝑖 ) which
depends on the whole sample). So by the strong law of large numbers, almost surely
(1/𝑁 ) ∑ 𝑊𝜃𝑓 (𝑥𝑖 )𝑔(𝑥𝑖 ) converges to ∫ 𝑊𝜃𝑓 (𝑥)𝑔(𝑥) 𝑃𝜃 (d𝑥). So we have to show that the second
term converges to 0 almost surely.
By the Cauchy–Schwarz inequality, we have

| (1/𝑁 ) ∑ (𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 ))𝑔(𝑥𝑖 ) |² ≤ ( (1/𝑁 ) ∑ (𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 ))² ) ( (1/𝑁 ) ∑ 𝑔(𝑥𝑖 )² )

By the strong law of large numbers, the second term (1/𝑁 ) ∑ 𝑔(𝑥𝑖 )² converges to E𝑃𝜃 𝑔² almost
surely. So we have to prove that the first term (1/𝑁 ) ∑ (𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 ))² converges to 0
almost surely.
Since 𝑤 is bounded by assumption, we can write

(𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 ))² ≤ 2𝐵 |𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 )|
                      = 2𝐵 |𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 )|_+ + 2𝐵 |𝑊̂ 𝑓 (𝑥𝑖 ) − 𝑊𝜃𝑓 (𝑥𝑖 )|_−
where 𝐵 is the bound on |𝑤|. We will bound each of these terms.


Let us abbreviate 𝑞𝑖< = Pr𝑥′ ∼𝑃𝜃 (𝑓 (𝑥′ ) < 𝑓 (𝑥𝑖 )), 𝑞𝑖≤ = Pr𝑥′ ∼𝑃𝜃 (𝑓 (𝑥′ ) ≤ 𝑓 (𝑥𝑖 )), 𝑟𝑖< =
#{𝑗 ≤ 𝑁, 𝑓 (𝑥𝑗 ) < 𝑓 (𝑥𝑖 )}, 𝑟𝑖≤ = #{𝑗 ≤ 𝑁, 𝑓 (𝑥𝑗 ) ≤ 𝑓 (𝑥𝑖 )}.
By definition of 𝑊̂ 𝑓 we have

𝑊̂ 𝑓 (𝑥𝑖 ) = ( 1 / (𝑟𝑖≤ − 𝑟𝑖< ) ) ∑_{𝑘=𝑟𝑖<}^{𝑟𝑖≤ −1} 𝑤((𝑘 + 1/2)/𝑁 )

and moreover 𝑊𝜃𝑓 (𝑥𝑖 ) = 𝑤(𝑞𝑖< ) if 𝑞𝑖< = 𝑞𝑖≤ , or 𝑊𝜃𝑓 (𝑥𝑖 ) = ( 1 / (𝑞𝑖≤ − 𝑞𝑖< ) ) ∫_{𝑞𝑖<}^{𝑞𝑖≤} 𝑤 otherwise.
The Glivenko–Cantelli theorem (Billingsley, 1995, Theorem 20.6) implies that sup𝑖 |𝑞𝑖≤ − 𝑟𝑖≤ /𝑁 |
tends to 0 almost surely, and likewise for sup𝑖 |𝑞𝑖< − 𝑟𝑖< /𝑁 |. So let 𝑁 be large enough so
that these errors are bounded by 𝜀.
Since 𝑤 is non-increasing, we have 𝑤(𝑞𝑖< ) ≤ 𝑤(𝑟𝑖< /𝑁 − 𝜀). In the case 𝑞𝑖< ≠ 𝑞𝑖≤ , we
decompose the interval [𝑞𝑖< ; 𝑞𝑖≤ ] into (𝑟𝑖≤ − 𝑟𝑖< ) subintervals. The average of 𝑤 over each such
subinterval is compared to a term in the sum defining 𝑊̂ 𝑓 (𝑥𝑖 ): since 𝑤 is non-increasing,
the average of 𝑤 over the 𝑘 th subinterval is at most 𝑤((𝑟𝑖< + 𝑘)/𝑁 − 𝜀). So we get
𝑊𝜃𝑓 (𝑥𝑖 ) ≤ ( 1 / (𝑟𝑖≤ − 𝑟𝑖< ) ) ∑_{𝑘=𝑟𝑖<}^{𝑟𝑖≤ −1} 𝑤(𝑘/𝑁 − 𝜀)

so that

𝑊𝜃𝑓 (𝑥𝑖 ) − 𝑊̂ 𝑓 (𝑥𝑖 ) ≤ ( 1 / (𝑟𝑖≤ − 𝑟𝑖< ) ) ∑_{𝑘=𝑟𝑖<}^{𝑟𝑖≤ −1} (𝑤(𝑘/𝑁 − 𝜀) − 𝑤((𝑘 + 1/2)/𝑁 )) .

Let us sum over 𝑖, remembering that there are (𝑟𝑖≤ − 𝑟𝑖< ) values of 𝑗 for which 𝑓 (𝑥𝑗 ) = 𝑓 (𝑥𝑖 ).
Taking the positive part, we get

(1/𝑁 ) ∑_{𝑖=1}^{𝑁} | 𝑊𝜃𝑓 (𝑥𝑖 ) − 𝑊̂ 𝑓 (𝑥𝑖 ) |_+ ≤ (1/𝑁 ) ∑_{𝑘=0}^{𝑁 −1} (𝑤(𝑘/𝑁 − 𝜀) − 𝑤((𝑘 + 1/2)/𝑁 )) .

Since 𝑤 is non-increasing we have

(1/𝑁 ) ∑_{𝑘=0}^{𝑁 −1} 𝑤(𝑘/𝑁 − 𝜀) ≤ ∫_{−𝜀−1/𝑁}^{1−𝜀−1/𝑁} 𝑤

and

(1/𝑁 ) ∑_{𝑘=0}^{𝑁 −1} 𝑤((𝑘 + 1/2)/𝑁 ) ≥ ∫_{1/2𝑁}^{1+1/2𝑁} 𝑤

(we implicitly extend the range of 𝑤 so that 𝑤(𝑞) = 𝑤(0) for 𝑞 < 0 and likewise for 𝑞 > 1).
So we have

(1/𝑁 ) ∑_{𝑖=1}^{𝑁} | 𝑊𝜃𝑓 (𝑥𝑖 ) − 𝑊̂ 𝑓 (𝑥𝑖 ) |_+ ≤ ∫_{−𝜀−1/𝑁}^{1/2𝑁} 𝑤 − ∫_{1−𝜀−1/𝑁}^{1+1/2𝑁} 𝑤 ≤ (2𝜀 + 3/𝑁 )𝐵

where 𝐵 is the bound on |𝑤|.


Reasoning symmetrically with 𝑤(𝑘/𝑁 + 𝜀) and the inequalities reversed, we get a similar
bound for (1/𝑁 ) ∑ | 𝑊𝜃𝑓 (𝑥𝑖 ) − 𝑊̂ 𝑓 (𝑥𝑖 ) |_− . This ends the proof.

D.3 Proof of Proposition 7 (Quantile Improvement)


Let us use the weight 𝑤(𝑢) = 1_{𝑢 ≤ 𝑞} . Let 𝑚 be the value of the 𝑞-quantile of 𝑓 under 𝑃𝜃𝑡 . We
want to show that the value of the 𝑞-quantile of 𝑓 under 𝑃𝜃𝑡+𝛿𝑡 is less than 𝑚, unless the
gradient vanishes and the IGO flow is stationary.
Let 𝑝− = Pr𝑥∼𝑃𝜃𝑡 (𝑓 (𝑥) < 𝑚), 𝑝𝑚 = Pr𝑥∼𝑃𝜃𝑡 (𝑓 (𝑥) = 𝑚) and 𝑝+ = Pr𝑥∼𝑃𝜃𝑡 (𝑓 (𝑥) > 𝑚).
By definition of the quantile value we have 𝑝− + 𝑝𝑚 > 𝑞 and 𝑝+ + 𝑝𝑚 > 1 − 𝑞. Let us assume
that we are in the more complicated case 𝑝𝑚 ̸= 0 (for the case 𝑝𝑚 = 0, simply remove the
corresponding terms).
We have 𝑊𝜃𝑓𝑡 (𝑥) = 1 if 𝑓 (𝑥) < 𝑚, 𝑊𝜃𝑓𝑡 (𝑥) = 0 if 𝑓 (𝑥) > 𝑚, and 𝑊𝜃𝑓𝑡 (𝑥) = (1/𝑝𝑚 ) ∫_{𝑝−}^{𝑝− +𝑝𝑚} 𝑤(𝑢) d𝑢 =
(𝑞 − 𝑝− )/𝑝𝑚 if 𝑓 (𝑥) = 𝑚.
Using the same notation as above, let 𝑔𝑡 (𝜃) = ∫ 𝑊𝜃𝑓𝑡 (𝑥) 𝑃𝜃 (d𝑥). Decomposing this integral
on the three sets 𝑓 (𝑥) < 𝑚, 𝑓 (𝑥) = 𝑚 and 𝑓 (𝑥) > 𝑚, we get that 𝑔𝑡 (𝜃) = Pr𝑥∼𝑃𝜃 (𝑓 (𝑥) <
𝑚) + Pr𝑥∼𝑃𝜃 (𝑓 (𝑥) = 𝑚) (𝑞 − 𝑝− )/𝑝𝑚 . In particular, 𝑔𝑡 (𝜃𝑡 ) = 𝑞.

Since we follow a gradient ascent of 𝑔𝑡 , for 𝛿𝑡 small enough we have 𝑔𝑡 (𝜃𝑡+𝛿𝑡 ) > 𝑔𝑡 (𝜃𝑡 )
unless the gradient vanishes. If the gradient vanishes we have 𝜃𝑡+𝛿𝑡 = 𝜃𝑡 and the quantiles
are the same. Otherwise we get 𝑔𝑡 (𝜃𝑡+𝛿𝑡 ) > 𝑔𝑡 (𝜃𝑡 ) = 𝑞.
Since (𝑞 − 𝑝− )/𝑝𝑚 ≤ ((𝑝− + 𝑝𝑚 ) − 𝑝− )/𝑝𝑚 = 1, we have 𝑔𝑡 (𝜃) ≤ Pr𝑥∼𝑃𝜃 (𝑓 (𝑥) < 𝑚) + Pr𝑥∼𝑃𝜃 (𝑓 (𝑥) =
𝑚) = Pr𝑥∼𝑃𝜃 (𝑓 (𝑥) ≤ 𝑚).
So Pr𝑥∼𝑃𝜃𝑡+𝛿𝑡 (𝑓 (𝑥) ≤ 𝑚) ≥ 𝑔𝑡 (𝜃𝑡+𝛿𝑡 ) > 𝑞. This implies, by definition, that the 𝑞-
quantile value of 𝑃𝜃𝑡+𝛿𝑡 is at most 𝑚. Moreover, if the objective function has no plateau
then Pr𝑥∼𝑃𝜃𝑡+𝛿𝑡 (𝑓 (𝑥) = 𝑚) = 0 and so Pr𝑥∼𝑃𝜃𝑡+𝛿𝑡 (𝑓 (𝑥) < 𝑚) > 𝑞 which implies that the
𝑞-quantile of 𝑃𝜃𝑡+𝛿𝑡 is strictly less than 𝑚.

D.4 Proof of Theorem 10 (Natural Gradient as ML with Infinitesimal


Weights)
The proof of Theorem 10 will use Lemma 26. Let 𝑊 be a function of 𝑥, and fix some 𝜃0 in
Θ.
We need some regularity assumptions: we assume that no two points 𝜃 ∈ Θ define the
same probability distribution and that the map 𝑃𝜃 ↦→ 𝜃 is continuous. We also assume that
the map 𝜃 ↦→ 𝑃𝜃 is smooth enough, so that ∫ ln 𝑃𝜃 (𝑥) 𝑊 (𝑥) 𝑃𝜃0 (d𝑥) is a smooth function of
𝜃. (These are restrictions on 𝜃-regularity: this does not mean that 𝑊 has to be continuous
as a function of 𝑥.)
The two statements of Theorem 10 using a sum and an integral have similar proofs, so
we only include the first. For 𝜀 > 0, let 𝜃 be the solution of
𝜃 = arg max { (1 − 𝜀) ∫ ln 𝑃𝜃 (𝑥) 𝑃𝜃0 (d𝑥) + 𝜀 ∫ ln 𝑃𝜃 (𝑥) 𝑊 (𝑥) 𝑃𝜃0 (d𝑥) } .

Then we have
𝜃 = arg max { ∫ ln 𝑃𝜃 (𝑥) 𝑃𝜃0 (d𝑥) + 𝜀 ∫ ln 𝑃𝜃 (𝑥) (𝑊 (𝑥) − 1) 𝑃𝜃0 (d𝑥) }

  = arg max { ∫ ln 𝑃𝜃 (𝑥) 𝑃𝜃0 (d𝑥) − ∫ ln 𝑃𝜃0 (𝑥) 𝑃𝜃0 (d𝑥) + 𝜀 ∫ ln 𝑃𝜃 (𝑥) (𝑊 (𝑥) − 1) 𝑃𝜃0 (d𝑥) }

(because the added term does not depend on 𝜃)

  = arg max { − KL(𝑃𝜃0 || 𝑃𝜃 ) + 𝜀 ∫ ln 𝑃𝜃 (𝑥) (𝑊 (𝑥) − 1) 𝑃𝜃0 (d𝑥) }

  = arg max { − (1/𝜀) KL(𝑃𝜃0 || 𝑃𝜃 ) + ∫ ln 𝑃𝜃 (𝑥) (𝑊 (𝑥) − 1) 𝑃𝜃0 (d𝑥) }

When 𝜀 → 0, the first term exceeds the second one if KL(𝑃𝜃0 || 𝑃𝜃 ) is too large (because
𝑊 is bounded), and so KL(𝑃𝜃0 || 𝑃𝜃 ) tends to 0. So we can assume that 𝜃 is close to 𝜃0 .
When 𝜃 = 𝜃0 + 𝛿𝜃 is close to 𝜃0 , we have
KL(𝑃𝜃0 || 𝑃𝜃 ) = (1/2) ∑ 𝐼𝑖𝑗 (𝜃0 ) 𝛿𝜃𝑖 𝛿𝜃𝑗 + 𝑂(𝛿𝜃³)

with 𝐼𝑖𝑗 (𝜃0 ) the Fisher matrix at 𝜃0 . (This actually holds both for KL(𝑃𝜃0 || 𝑃𝜃 ) and
KL(𝑃𝜃 || 𝑃𝜃0 ).)
Thus, we can apply Lemma 26 using the Fisher metric ∑ 𝐼𝑖𝑗 (𝜃0 ) 𝛿𝜃𝑖 𝛿𝜃𝑗 , and working on
a small neighborhood of 𝜃0 in 𝜃-space (which can be identified with R^{dim Θ} ). The lemma
states that the argmax above is attained at

𝜃 = 𝜃0 + 𝜀 ∇̃𝜃 ∫ ln 𝑃𝜃 (𝑥) (𝑊 (𝑥) − 1) 𝑃𝜃0 (d𝑥)

up to 𝑂(𝜀2 ), with ∇
̃︀ the natural gradient.
∫︀
Finally, the gradient cancels the constant −1 because (∇̃︀ ln 𝑃𝜃 ) 𝑃𝜃 = 0 at 𝜃 = 𝜃0 . This
0
proves Theorem 10.

D.5 Proof of Theorem 12 (IGO, CEM and IGO-ML)


Let 𝑃𝜃 be a family of probability distributions of the form
𝑃𝜃 (d𝑥) = (1/𝑍(𝜃)) exp( ∑ 𝜃𝑖 𝑇𝑖 (𝑥) ) 𝐻(d𝑥)

where 𝑇1 , . . . , 𝑇𝑘 is a finite family of functions on 𝑋 and 𝐻(d𝑥) is some reference measure


on 𝑋. We assume that the family of functions (𝑇𝑖 )𝑖 together with the constant function
𝑇0 (𝑥) = 1, are linearly independent. This prevents redundant parametrizations where two
values of 𝜃 describe the same distribution; this also ensures, by elementary linear algebra,
that the Fisher matrix Cov(𝑇𝑖 , 𝑇𝑗 ) is invertible.
The IGO update (17) in the parametrization 𝑇¯𝑖 is a sum of terms of the form


∇̃_{𝑇̄𝑖} ln 𝑃 (𝑥).

So we will compute the natural gradient ∇̃_{𝑇̄𝑖} in those coordinates.
Let us start with a proposition giving an expression for the Fisher scalar product between
two tangent vectors 𝛿𝑃 and 𝛿 ′ 𝑃 of a statistical manifold of exponential distributions. It
is one way to express the duality between the coordinates 𝜃𝑖 and 𝑇¯𝑖 (compare (Amari and
Nagaoka, 2000, (3.30) and Section 3.5)).

Proposition 28 Let 𝛿𝜃𝑖 and 𝛿 ′ 𝜃𝑖 be two small variations of the parameters 𝜃𝑖 . Let 𝛿𝑃 (𝑥)
and 𝛿 ′ 𝑃 (𝑥) be the resulting variations of the probability distribution 𝑃 , and 𝛿 𝑇¯𝑖 and 𝛿 ′ 𝑇¯𝑖 the
resulting variations of 𝑇¯𝑖 . Then the scalar product, in Fisher information metric, between
the tangent vectors 𝛿𝑃 and 𝛿 ′ 𝑃 , is

⟨𝛿𝑃, 𝛿 ′ 𝑃 ⟩ = ∑_𝑖 𝛿𝜃𝑖 𝛿 ′ 𝑇̄𝑖 = ∑_𝑖 𝛿 ′ 𝜃𝑖 𝛿 𝑇̄𝑖 .

Proof
First, let us prove that 𝜕 𝑇̄𝑖 /𝜕𝜃𝑗 = 𝐼𝑖𝑗 , the Fisher matrix for 𝜃. Indeed, 𝜕 𝑇̄𝑖 /𝜕𝜃𝑗 = ∫_𝑥 𝑇𝑖 (𝑥) 𝜕𝑃 (𝑥)/𝜕𝜃𝑗 =
∫_𝑥 𝑇𝑖 (𝑥)𝑃 (𝑥) 𝜕 ln 𝑃 (𝑥)/𝜕𝜃𝑗 = ∫_𝑥 𝑇𝑖 (𝑥)𝑃 (𝑥)(𝑇𝑗 (𝑥) − 𝑇̄𝑗 ) by (19); this is equal to Cov(𝑇𝑖 , 𝑇𝑗 ) which
is 𝐼𝑖𝑗 by (20).

Then, by definition of the Fisher metric we have ⟨𝛿𝑃, 𝛿 ′ 𝑃 ⟩ = ∑_{𝑖,𝑗} 𝐼𝑖𝑗 𝛿𝜃𝑖 𝛿 ′ 𝜃𝑗 , but ∑_𝑗 𝐼𝑖𝑗 𝛿 ′ 𝜃𝑗
is equal to 𝛿 ′ 𝑇̄𝑖 by the above, because 𝛿 ′ 𝑇̄𝑖 = ∑_𝑗 (𝜕 𝑇̄𝑖 /𝜕𝜃𝑗 ) 𝛿 ′ 𝜃𝑗 . Thus we find ⟨𝛿𝑃, 𝛿 ′ 𝑃 ⟩ = ∑_𝑖 𝛿𝜃𝑖 𝛿 ′ 𝑇̄𝑖
as needed.

Proposition 29 Let 𝑓 be a function on the statistical manifold of an exponential family as


above. Then the components of the natural gradient w.r.t. the expectation parameters are
given by the vanilla gradient w.r.t. the natural parameters:
∇̃_{𝑇̄𝑖} 𝑓 = 𝜕𝑓 /𝜕𝜃𝑖    (75)

and conversely

∇̃_{𝜃𝑖} 𝑓 = 𝜕𝑓 /𝜕 𝑇̄𝑖 .    (76)
(Beware this does not mean that the gradient ascent in any of those parametrizations is
the vanilla gradient ascent.)
We could not find a reference for this result, though we think it is known (as a consequence
of Amari and Nagaoka 2000, Eq. 3.32).
Proof We saw above that 𝜕 𝑇̄ /𝜕𝜃 = 𝐼. Since ∇̃𝜃 𝑓 = 𝐼^{−1} 𝜕𝑓 /𝜕𝜃, this proves (76) by substituting
𝜕𝑓 /𝜕𝜃 = (𝜕 𝑇̄ /𝜕𝜃)ᵀ 𝜕𝑓 /𝜕 𝑇̄ .
For the first statement (75) (the one needed for Theorem 12) we have to derive the
Fisher matrix for the variables 𝑇̄ . It follows from Proposition 28 that the Fisher matrix in
these variables is 𝐼^{−1} , by considering the Fisher metric ∑ 𝛿𝜃𝑖 𝛿 ′ 𝑇̄𝑖 and substituting 𝐼^{−1} 𝛿 𝑇̄
for 𝛿𝜃. Then (75) is proved along the same lines as (76).
for 𝛿𝜃. Then (75) is proved along the same lines as (76).

Back to the proof of Theorem 12. We can now compute the desired terms:
∇̃_{𝑇̄𝑖} ln 𝑃 (𝑥) = 𝜕 ln 𝑃 (𝑥)/𝜕𝜃𝑖 = 𝑇𝑖 (𝑥) − 𝑇̄𝑖
by (19). This proves the first statement (34) in Theorem 12 about the form of the IGO
update in these parameters.
The other statements follow easily from this together with the additional fact (33) that,
for any set of (positive or negative) weights 𝑎𝑖 with ∑ 𝑎𝑖 = 1, the value 𝑇 * = ∑_𝑖 𝑎𝑖 𝑇 (𝑥𝑖 )
maximizes ∑_𝑖 𝑎𝑖 ln 𝑃 (𝑥𝑖 ).

D.6 Proof of Proposition 20 and Corollary 21 (Speed of IGO)


Lemma 30 Let 𝑋 be a centered 𝐿2 random variable with values in R𝑑 and let 𝐴 be a
real-valued 𝐿2 random variable. Then

‖E(𝐴𝑋)‖ ≤ √(𝜆 Var 𝐴)

where 𝜆 is the largest eigenvalue of the covariance matrix of 𝑋 expressed in an orthonormal


basis.

Proof Let 𝑣 be any vector in R𝑑 ; its norm satisfies

‖𝑣‖ = sup_{𝑤, ‖𝑤‖≤1} (𝑣 · 𝑤)

and in particular

‖E(𝐴𝑋)‖ = sup_{𝑤, ‖𝑤‖≤1} (𝑤 · E(𝐴𝑋))
         = sup_{𝑤, ‖𝑤‖≤1} E(𝐴 (𝑤 · 𝑋))
         = sup_{𝑤, ‖𝑤‖≤1} E((𝐴 − E𝐴) (𝑤 · 𝑋))    since (𝑤 · 𝑋) is centered
         ≤ sup_{𝑤, ‖𝑤‖≤1} √(Var 𝐴) √(E((𝑤 · 𝑋)²))
by the Cauchy–Schwarz inequality.


Now, in an orthonormal basis we have
(𝑤 · 𝑋) = ∑_𝑖 𝑤𝑖 𝑋𝑖

so that
E((𝑤 · 𝑋)²) = E( (∑_𝑖 𝑤𝑖 𝑋𝑖 )(∑_𝑗 𝑤𝑗 𝑋𝑗 ) )
            = ∑_𝑖 ∑_𝑗 E(𝑤𝑖 𝑋𝑖 𝑤𝑗 𝑋𝑗 )
            = ∑_𝑖 ∑_𝑗 𝑤𝑖 𝑤𝑗 E(𝑋𝑖 𝑋𝑗 )
            = ∑_𝑖 ∑_𝑗 𝑤𝑖 𝑤𝑗 𝐶𝑖𝑗

with 𝐶𝑖𝑗 the covariance matrix of 𝑋. The latter expression is the scalar product (𝑤 · 𝐶𝑤).
Since 𝐶 is a symmetric positive-semidefinite matrix, (𝑤 · 𝐶𝑤) is at most 𝜆‖𝑤‖2 with 𝜆 the
largest eigenvalue of 𝐶.

For the IGO flow we have d𝜃𝑡/d𝑡 = E𝑥∼𝑃𝜃 𝑊𝜃𝑓 (𝑥) ∇̃𝜃 ln 𝑃𝜃 (𝑥).
So applying the lemma, we get that the norm of d𝜃/d𝑡 is at most √(𝜆 Var𝑥∼𝑃𝜃 𝑊𝜃𝑓 (𝑥)) where
𝜆 is the largest eigenvalue of the covariance matrix of ∇̃𝜃 ln 𝑃𝜃 (𝑥) (expressed in a coordinate
system where the Fisher matrix at the current point 𝜃 is the identity).
By construction of the quantiles, we have Var𝑥∼𝑃𝜃 𝑊𝜃𝑓 (𝑥) 6 Var[0,1] 𝑤 (with equality
unless there are ties). Indeed, for a given 𝑥, let 𝒰 be a uniform random variable in [0, 1]
independent from 𝑥 and define the random variable 𝑄 = 𝑞 < (𝑥) + (𝑞 6 (𝑥) − 𝑞 < (𝑥))𝒰. Then
𝑄 is uniformly distributed between the upper and lower quantiles 𝑞 6 (𝑥) and 𝑞 < (𝑥) and
thus we can rewrite 𝑊𝜃𝑓 (𝑥) as E(𝑤(𝑄)|𝑥). By the Jensen inequality we have Var 𝑊𝜃𝑓 (𝑥) =
Var E(𝑤(𝑄)|𝑥) 6 Var 𝑤(𝑄). In addition when 𝑥 is taken under 𝑃𝜃 , 𝑄 is uniformly distributed
in [0, 1] and thus Var 𝑤(𝑄) = Var[0,1] 𝑤, i.e., Var𝑥∼𝑃𝜃 𝑊𝜃𝑓 (𝑥) 6 Var[0,1] 𝑤.
Besides, consider the tangent space in Θ-space at point 𝜃𝑡 , and let us choose an orthonor-
mal basis in this tangent space for the Fisher metric. Then, in this basis we have ∇ ˜ 𝑖 ln 𝑃𝜃 (𝑥) =

𝜕𝑖 ln 𝑃𝜃 (𝑥). So the covariance matrix of ∇̃ ln 𝑃𝜃 (𝑥) is E𝑥∼𝑃𝜃 (𝜕𝑖 ln 𝑃𝜃 (𝑥) 𝜕𝑗 ln 𝑃𝜃 (𝑥)), which is
equal to the Fisher matrix by definition. So this covariance matrix is the identity, whose
largest eigenvalue is 1. This proves Proposition 20.
For Corollary 21, by the relationship (2) between Fisher matrix and Kullback–Leibler
divergence, if 𝑣 is the speed of the IGO flow then the Kullback–Leibler divergence between
𝑃𝜃𝑡 and 𝑃𝜃𝑡+𝛿𝑡 (where 𝑃𝜃𝑡+𝛿𝑡 is the trajectory of the IGO flow after a time 𝛿𝑡) is equal to the
square norm of 𝛿𝑡.𝑣 in Fisher metric up to an 𝑂(‖𝛿𝑡.𝑣‖3 ) term. Now if 𝑃𝜃𝑡+𝛿𝑡 is obtained
by a finite-population IGO algorithm, by Theorem 6 the actual 𝑣 from the IGO algorithm
differs from the speed of the IGO flow by an 𝑜(1)𝑁 →∞ term. Collecting terms, we find the
expression in Corollary 21.

D.7 Proof of Proposition 22 (Noisy IGO)


On the one hand, let 𝑃𝜃 be a family of distributions on 𝑋. The IGO algorithm (16) applied
to a random function 𝑓 (𝑥) = 𝑓˜(𝑥, 𝜔) where 𝜔 is a random variable uniformly distributed in
[0, 1] reads
𝜃𝑡+𝛿𝑡 = 𝜃𝑡 + 𝛿𝑡 ∑_{𝑖=1}^{𝑁} 𝑤̂𝑖 ∇̃𝜃 ln 𝑃𝜃 (𝑥𝑖 )    (77)

where 𝑥𝑖 ∼ 𝑃𝜃 and 𝑤^𝑖 is according to (14) where ranking is applied to the values 𝑓˜(𝑥𝑖 , 𝜔𝑖 ),
with 𝜔𝑖 uniform variables in [0, 1] independent from 𝑥𝑖 and from each other.
On the other hand, for the IGO algorithm using 𝑃𝜃 ⊗𝑈[0,1] and applied to the deterministic
function 𝑓˜, 𝑤^𝑖 is computed using the ranking according to the 𝑓˜ values of the sampled points
𝑥
˜𝑖 = (𝑥𝑖 , 𝜔𝑖 ), and thus coincides with the one in (77).
Besides,
𝜕𝜃 ln 𝑃𝜃⊗𝑈[0,1] (𝑥̃𝑖 ) = 𝜕𝜃 ln 𝑃𝜃 (𝑥𝑖 ) + 𝜕𝜃 ln 𝑈[0,1] (𝜔𝑖 ) , where the last term is 0,

and thus, both the vanilla gradients and the Fisher matrix 𝐼 (given by the tensor square
of the vanilla gradients) coincide. This proves that the IGO algorithm update on space
𝑋 × [0, 1], using the family of distributions 𝑃˜𝜃 = 𝑃𝜃 ⊗ 𝑈[0,1] , applied to the deterministic
function 𝑓˜, coincides with (77).

References
D.H. Ackley, G.E. Hinton, and T.J. Sejnowski. A learning algorithm for Boltzmann machines.
Cognitive Science, 9(1):147–169, 1985.

R.P. Agarwal and D. O’Regan. An Introduction to Ordinary Differential Equations. Springer,


2008.

Y. Akimoto and Y. Ollivier. Objective improvement in information-geometric optimization.


In F. Neumann and K. DeJong, editors, Foundations of Genetic Algorithms XII (FOGA
2013), Adelaide, Australia, 2013.

Y. Akimoto, Y. Nagata, I. Ono, and S. Kobayashi. Bidirectional relation between CMA


evolution strategies and natural evolution strategies. In Proceedings of Parallel Problem

Solving from Nature - PPSN XI, volume 6238 of Lecture Notes in Computer Science, pages
154–163. Springer, 2010.

Y. Akimoto, A. Auger, and N. Hansen. Convergence of the continuous time trajectories


of isotropic evolution strategies on monotonic 𝒞 2 -composite functions. In C.A. Coello
Coello, V. Cutello, K. Deb, S. Forrest, G. Nicosia, and M. Pavone, editors, PPSN (1),
volume 7491 of Lecture Notes in Computer Science, pages 42–51. Springer, 2012. ISBN
978-3-642-32936-4.

S.-I. Amari. Natural gradient works efficiently in learning. Neural Comput., 10:251–276,
February 1998. ISSN 0899-7667. doi: 10.1162/089976698300017746. URL http://portal.
acm.org/citation.cfm?id=287476.287477.

S.-I. Amari and H. Nagaoka. Methods of information geometry, volume 191 of Translations
of Mathematical Monographs. American Mathematical Society, Providence, RI, 2000.
ISBN 0-8218-0531-2. Translated from the 1993 Japanese original by Daishi Harada.

D.V. Arnold. Weighted multirecombination evolution strategies. Theoretical computer


science, 361(1):18–37, 2006.

S. Baluja. Population based incremental learning: A method for integrating genetic search
based function optimization and competitve learning. Technical Report CMU-CS-94-163,
Carnegie Mellon Report, 1994.

S. Baluja and R. Caruana. Removing the genetics from the standard genetic algorithm. In
Proceedings of ICML’95, pages 38–46, 1995.

Y. Bengio, P. Lamblin, V. Popovici, and H. Larochelle. Greedy layer-wise training of


deep networks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural
Information Processing Systems 19, pages 153–160. MIT Press, Cambridge, MA, 2007.

Y. Bengio, A.C. Courville, and P. Vincent. Unsupervised feature learning and deep learning:
A review and new perspectives. ArXiv preprints, arXiv:1206.5538, 2012.

J. Bensadon. Black-box optimization using geodesics in statistical manifolds. Entropy, 17


(1):304–345, 2015.

A. Berny. Selection and reinforcement learning for combinatorial optimization. In M. Schoe-


nauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J. Merelo, and H.-P. Schwefel, editors,
Parallel Problem Solving from Nature PPSN VI, volume 1917 of Lecture Notes in Computer
Science, pages 601–610. Springer Berlin Heidelberg, 2000a.

A. Berny. An adaptive scheme for real function optimization acting as a selection operator. In
Combinations of Evolutionary Computation and Neural Networks, 2000 IEEE Symposium
on, pages 140 –149, 2000b. doi: 10.1109/ECNN.2000.886229.

A. Berny. Boltzmann machine for population-based incremental learning. In ECAI, pages


198–202, 2002.

H.-G. Beyer. The Theory of Evolution Strategies. Natural Computing Series. Springer-Verlag,
2001.

H.-G. Beyer and H.-P. Schwefel. Evolution strategies—a comprehensive introduction. Natural
computing, 1(1):3–52, 2002. ISSN 1567-7818.

P. Billingsley. Probability and measure. Wiley Series in Probability and Mathematical


Statistics. John Wiley & Sons Inc., New York, third edition, 1995. ISBN 0-471-00710-2.
A Wiley-Interscience Publication.

J. Branke, C. Lode, and J.L. Shapiro. Addressing sampling errors and diversity loss in UMDA.
In Proceedings of the 9th annual conference on Genetic and evolutionary computation,
pages 508–515. ACM, 2007.

T.M. Cover and J.A. Thomas. Elements of information theory. Wiley-Interscience [John
Wiley & Sons], Hoboken, NJ, second edition, 2006. ISBN 978-0-471-24195-9; 0-471-24195-4.

S. Das, S. Maity, B.-Y. Qu, and P.N. Suganthan. Real-parameter evolutionary multimodal
optimization - a survey of the state-of-the-art. Swarm and Evolutionary Computation, 1
(2):71–88, 2011.

P.-T. de Boer, D.P. Kroese, S. Mannor, and R.Y. Rubinstein. A tutorial on the cross-entropy
method. Annals OR, 134(1):19–67, 2005.

G. Desjardins, A. Courville, Y. Bengio, P. Vincent, and O. Dellaleau. Parallel tempering for


training of restricted Boltzmann machines. In Proceedings of the Thirteenth International
Conference on Artificial Intelligence and Statistics (AISTATS), 2010.

P. Deuflhard. Newton methods for nonlinear problems: affine invariance and adaptive
algorithms, volume 35. Springer, 2011.

E.D. Dolan and J.J. Moré. Benchmarking optimization software with performance profiles.
Mathematical programming, 91(2):201–213, 2002.

E. Zhou and S. Bhatnagar. Gradient-based adaptive stochastic search for simulation optimization
over continuous space. ArXiv preprints, arXiv:1608.00663, 2016.

M. Gallagher and M. Frean. Population-based continuous optimization, probabilistic mod-


elling and mean shift. Evol. Comput., 13(1):29–42, January 2005. ISSN 1063-6560. doi:
10.1162/1063656053583478. URL http://dx.doi.org/10.1162/1063656053583478.

Z. Ghahramani. Unsupervised learning. In O. Bousquet, U. von Luxburg, and G. Rätsch, ed-


itors, Advanced Lectures on Machine Learning, volume 3176 of Lecture Notes in Computer
Science, pages 72–112. Springer Berlin / Heidelberg, 2004.

T. Glasmachers, T. Schaul, Y. Sun, D. Wierstra, and J. Schmidhuber. Exponential nat-


ural evolution strategies. In Proceedings of the 12th annual conference on Genetic and
evolutionary computation GECCO’10, pages 393–400. ACM, 2010.

J. Grahl, S. Minner, and F. Rothlauf. Behaviour of umda c with truncation selection on


monotonous functions. In Evolutionary Computation, 2005. The 2005 IEEE Congress on,
volume 3, pages 2553–2559. IEEE, 2005.

N. Hansen. An analysis of mutative 𝜎-self-adaptation on linear fitness functions. Evolutionary


Computation, 14(3):255–275, 2006a.

N. Hansen. The CMA evolution strategy: a comparing review. In J.A. Lozano, P. Larranaga,
I. Inza, and E. Bengoetxea, editors, Towards a new evolutionary computation. Advances
on estimation of distribution algorithms, pages 75–102. Springer, 2006b.

N. Hansen. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed.


In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary
Computation Conference: Late Breaking Papers, GECCO ’09, pages 2389–2396, New
York, NY, USA, 2009. ACM. ISBN 978-1-60558-505-5. doi: http://doi.acm.org/10.1145/
1570256.1570333. URL http://doi.acm.org/10.1145/1570256.1570333.

N. Hansen and A. Auger. Principled design of continuous stochastic search: From theory to
practice. In Y. Borenstein and A. Moraglio, editors, Theory and Principled Methods for
the Design of Metaheuristics, Natural Computing Series, pages 145–180. Springer Berlin
Heidelberg, 2014. ISBN 978-3-642-33205-0. doi: 10.1007/978-3-642-33206-7_8. URL
http://dx.doi.org/10.1007/978-3-642-33206-7_8.

N. Hansen and S. Kern. Evaluating the CMA evolution strategy on multimodal test functions.
In X. Yao et al., editors, Parallel Problem Solving from Nature PPSN VIII, volume 3242
of LNCS, pages 282–291. Springer, 2004.

N. Hansen and A. Ostermeier. Adapting arbitrary normal mutation distributions in evolution


strategies: The covariance matrix adaptation. In ICEC96, pages 312–317. IEEE Press,
1996.

N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution


strategies. Evolutionary Computation, 9(2):159–195, 2001.

N. Hansen, S.D. Müller, and P. Koumoutsakos. Reducing the time complexity of the deran-
domized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary
Computation, 11(1):1–18, 2003. ISSN 1063-6560.

G.R. Harik, F.G. Lobo, and D.E. Goldberg. The compact genetic algorithm. IEEE
Transactions on Evolutionary Computation, 3(4):287–297, 1999.

G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural
Computation, 14:1771–1800, 2002.

G.E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets.
Neural Computation, 18:1527–1554, 2006.

R. Hooke and T.A. Jeeves. “Direct search” solution of numerical and statistical problems.
Journal of the ACM, 8:212–229, 1961.

G.A. Jastrebski and D.V. Arnold. Improving evolution strategies through active covariance
matrix adaptation. In Evolutionary Computation, 2006. CEC 2006. IEEE Congress on,
pages 2814–2821. IEEE, 2006. ISBN 0780394879.

M. Jebalia and A. Auger. Log-linear convergence of the scale-invariant (𝜇/𝜇𝑤, 𝜆)-ES
and optimal 𝜇 for intermediate recombination for large population sizes. In R. Schaefer
et al., editors, Parallel Problem Solving from Nature (PPSN XI), volume 6239, pages 52–61.
Springer, 2010. URL http://hal.inria.fr/docs/00/49/44/78/PDF/ppsn2010JebaliaAuger.pdf.

H. Jeffreys. An invariant form for the prior probability in estimation problems. Proc. Roy.
Soc. London. Ser. A., 186:453–461, 1946. ISSN 0962-8444.

H.K. Khalil. Nonlinear Systems. Prentice-Hall, Inc., second edition, 1996.

P.E. Kloeden and E. Platen. Numerical solution of stochastic differential equations, volume 23
of Applications of Mathematics (New York). Springer-Verlag, Berlin, 1992. ISBN 3-540-
54062-8.

S. Kullback. Information theory and statistics. Dover Publications Inc., Mineola, NY, 1997.
ISBN 0-486-69684-7. Reprint of the second (1968) edition.

P. Larranaga and J.A. Lozano. Estimation of distribution algorithms: A new tool for
evolutionary computation. Springer Netherlands, 2002. ISBN 0792374665.

N. Le Roux, P.-A. Manzagol, and Y. Bengio. Topmoumoute online natural gradient algorithm.
In NIPS, 2007.

L. Malagò, M. Matteucci, and B. Dal Seno. An information geometry perspective on
estimation of distribution algorithms: boundary analysis. In GECCO (Companion), pages
2081–2088, 2008.

L. Malagò, M. Matteucci, and G. Pistone. Towards the geometry of estimation of distribution
algorithms based on the exponential family. In H.-G. Beyer and W.B. Langdon, editors,
FOGA, Proceedings, pages 230–242. ACM, 2011. ISBN 978-1-4503-0633-1.

G. Montavon and K.-R. Müller. Deep Boltzmann machines and the centering trick. In
G. Montavon, G.B. Orr, and K.-R. Müller, editors, Neural Networks: Tricks of the Trade
(2nd ed.), volume 7700 of Lecture Notes in Computer Science, pages 621–637. Springer,
2012. ISBN 978-3-642-35288-1.

J.J. Moré, B.S. Garbow, and K.E. Hillstrom. Testing unconstrained optimization software.
ACM Transactions on Mathematical Software (TOMS), 7(1):17–41, 1981.

J.A. Nelder and R. Mead. A simplex method for function minimization. The Computer
Journal, 7(4):308–313, 1965.

Y. Ollivier, L. Arnold, A. Auger, and N. Hansen. Information-geometric optimization: A
unifying picture via invariance principles. ArXiv preprints, arXiv:1106.3708v2, 2011.

M. Pelikan, D.E. Goldberg, and F.G. Lobo. A survey of optimization by building and
using probabilistic models. Computational Optimization and Applications, 21(1):5–20, 2002.
ISSN 0926-6003.

P. Pošík. Preventing premature convergence in a simple EDA via global step size setting. In
Parallel Problem Solving from Nature–PPSN X, pages 549–558. Springer, 2008.

C.R. Rao. Information and the accuracy attainable in the estimation of statistical parameters.
Bull. Calcutta Math. Soc., 37:81–91, 1945. ISSN 0008-0659.

I. Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der
biologischen Evolution. Frommann-Holzboog Verlag, Stuttgart, 1973.

I. Rechenberg. Evolutionsstrategie ’94. Frommann-Holzboog Verlag, 1994.

R. Ros and N. Hansen. A simple modification in CMA-ES achieving linear time and space
complexity. In G. Rudolph, T. Jansen, S. Lucas, C. Poloni, and N. Beume, editors,
Proceedings of Parallel Problem Solving from Nature (PPSN X), volume 5199 of Lecture
Notes in Computer Science, pages 296–305. Springer, 2008.

R.Y. Rubinstein. The cross-entropy method for combinatorial and continuous optimization.
Methodology and Computing in Applied Probability, 1:127–190, 1999. ISSN 1387-5841.
doi: 10.1023/A:1010091220143. URL http://dx.doi.org/10.1023/A:1010091220143.

R.Y. Rubinstein and D.P. Kroese. The cross-entropy method: a unified approach to combi-
natorial optimization, Monte-Carlo simulation, and machine learning. Springer-Verlag
New York Inc, 2004. ISBN 038721240X.

R. Salakhutdinov. Learning in Markov random fields using tempered transitions. In Y. Bengio,
D. Schuurmans, J. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neural
Information Processing Systems 22, pages 1598–1606. MIT Press, 2009.

R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks.
In Proceedings of the 25th international conference on Machine learning, ICML ’08,
pages 872–879, New York, NY, USA, 2008. ACM. ISBN 978-1-60558-205-4. URL
http://doi.acm.org/10.1145/1390156.1390266.

B. Sareni and L. Krähenbühl. Fitness sharing and niching methods revisited. IEEE Trans.
Evolutionary Computation, 2(3):97–106, 1998.

T. Schaul, T. Glasmachers, and J. Schmidhuber. High dimensions and heavy tails for
natural evolution strategies. In Proceedings of the 13th annual conference on Genetic and
evolutionary computation, GECCO ’11, pages 845–852, New York, NY, USA, 2011. ACM.
ISBN 978-1-4503-0557-0. doi: 10.1145/2001576.2001692. URL
http://doi.acm.org/10.1145/2001576.2001692.

R.L. Schilling. Measures, integrals and martingales. Cambridge University Press, New York,
2005. ISBN 978-0-521-61525-9; 0-521-61525-9. doi: 10.1017/CBO9780511810886. URL
http://dx.doi.org/10.1017/CBO9780511810886.

L. Schwartz. Analyse. II, volume 43 of Collection Enseignement des Sciences [Collection: The
Teaching of Science]. Hermann, Paris, 1992. Calcul différentiel et équations différentielles,
With the collaboration of K. Zizi.

H.-P. Schwefel. Evolution and Optimum Seeking. Sixth-generation computer technology
series. John Wiley & Sons, Inc. New York, NY, USA, 1995. ISBN 0471571482.

F. Silva and L. Almeida. Acceleration techniques for the backpropagation algorithm. Neural
Networks, pages 110–119, 1990.

P. Smolensky. Information processing in dynamical systems: foundations of harmony theory.
In D. Rumelhart and J. McClelland, editors, Parallel Distributed Processing, volume 1,
chapter 6, pages 194–281. MIT Press, Cambridge, MA, USA, 1986. ISBN 0-262-68053-X.

Y. Sun, D. Wierstra, T. Schaul, and J. Schmidhuber. Efficient natural evolution strategies.
In Proceedings of the 11th Annual conference on Genetic and evolutionary computation,
GECCO ’09, pages 539–546, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-325-9.
doi: 10.1145/1569901.1569976. URL http://doi.acm.org/10.1145/1569901.1569976.

T. Suttorp, N. Hansen, and C. Igel. Efficient covariance matrix update for variable metric
evolution strategies. Machine Learning, 75(2):167–197, 2009.

H. Thorisson. Coupling, Stationarity, and Regeneration. Springer, 2000.

V. Torczon. On the convergence of pattern search algorithms. SIAM Journal on Optimization,
7(1):1–25, 1997.

M. Toussaint. Notes on information geometry and evolutionary processes. ArXiv preprints,
arXiv:nlin/0408040, 2004.

M. Wagner, A. Auger, and M. Schoenauer. EEDA: A new robust estimation of distribution
algorithms. Research Report RR-5190, INRIA, 2004. URL
http://hal.inria.fr/inria-00070802/en/.

D. Whitley. The GENITOR algorithm and selection pressure: Why rank-based allocation of
reproductive trials is best. In Proceedings of the third international conference on Genetic
algorithms, pages 116–121, 1989.

D. Wierstra, T. Schaul, J. Peters, and J. Schmidhuber. Natural evolution strategies. In
IEEE Congress on Evolutionary Computation, pages 3381–3387, 2008.

D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters, and J. Schmidhuber. Natural
evolution strategies. Journal of Machine Learning Research, 15:949–980, 2014. URL
http://jmlr.org/papers/v15/wierstra14a.html.

E. Zhou and J. Hu. Gradient-based adaptive stochastic search for non-differentiable opti-
mization. IEEE Transactions on Automatic Control, 59(7):1818–1832, July 2014. ISSN
0018-9286. doi: 10.1109/TAC.2014.2310052.
