

Journal of Machine Learning Research 18 (2018) 1-43 Submitted 8/17; Published 4/18

Automatic Differentiation in Machine Learning: a Survey

Atılım Güneş Baydin [email protected]
Department of Engineering Science
University of Oxford
Oxford OX1 3PJ, United Kingdom

Barak A. Pearlmutter [email protected]
Department of Computer Science
National University of Ireland Maynooth
Maynooth, Co. Kildare, Ireland

Alexey Andreyevich Radul [email protected]
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, United States

Jeffrey Mark Siskind [email protected]

School of Electrical and Computer Engineering

Purdue University

West Lafayette, IN 47907, United States

Editor: Léon Bottou

Abstract

Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply “autodiff”, is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other’s results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names “dynamic computational graphs” and “differentiable programming”. We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms “autodiff”, “automatic differentiation”, and “symbolic differentiation” as these are encountered more and more in machine learning settings.

Keywords: Backpropagation, Differentiable Programming

©2018 Atılım Güneş Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind.

License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v18/17-468.html.


1. Introduction

Methods for the computation of derivatives in computer programs can be classified into four categories: (1) manually working out derivatives and coding them; (2) numerical differentiation using finite difference approximations; (3) symbolic differentiation using expression manipulation in computer algebra systems such as Mathematica, Maxima, and Maple; and (4) automatic differentiation, also called algorithmic differentiation, which is the subject matter of this paper.

Conventionally, many methods in machine learning have required the evaluation of derivatives and most of the traditional learning algorithms have relied on the computation of gradients and Hessians of an objective function (Sra et al., 2011). When introducing new models, machine learning researchers have spent considerable effort on the manual derivation of analytical derivatives to subsequently plug these into standard optimization procedures such as L-BFGS (Zhu et al., 1997) or stochastic gradient descent (Bottou, 1998). Manual differentiation is time consuming and prone to error. Of the other alternatives, numerical differentiation is simple to implement but can be highly inaccurate due to round-off and truncation errors (Jerrell, 1997); more importantly, it scales poorly for gradients, rendering it inappropriate for machine learning where gradients with respect to millions of parameters are commonly needed. Symbolic differentiation addresses the weaknesses of both the manual and numerical methods, but often results in complex and cryptic expressions plagued with the problem of “expression swell” (Corliss, 1988). Furthermore, manual and symbolic methods require models to be defined as closed-form expressions, ruling out or severely limiting algorithmic control flow and expressivity.

We are concerned with the powerful fourth technique, automatic differentiation (AD). AD performs a non-standard interpretation of a given computer program by replacing the domain of the variables to incorporate derivative values and redefining the semantics of the operators to propagate derivatives per the chain rule of differential calculus. Despite its widespread use in other fields, general-purpose AD has been underused by the machine learning community until very recently [1]. Following the emergence of deep learning (LeCun et al., 2015; Goodfellow et al., 2016) as the state-of-the-art in many machine learning tasks and the modern workflow based on rapid prototyping and code reuse in frameworks such as Theano (Bastien et al., 2012), Torch (Collobert et al., 2011), and TensorFlow (Abadi et al., 2016), the situation is slowly changing, where projects such as autograd [2] (Maclaurin, 2016), Chainer [3] (Tokui et al., 2015), and PyTorch [4] (Paszke et al., 2017) are leading the way in bringing general-purpose AD to the mainstream.

The term “automatic” in AD can be a source of confusion, causing machine learning practitioners to put the label “automatic differentiation”, or just “autodiff”, on any method or tool that does not involve manual differentiation, without giving due attention to the underlying mechanism. We would like to stress that AD as a technical term refers to a specific family of techniques that compute derivatives through accumulation of values during code execution to generate numerical derivative evaluations rather than derivative expressions.

1. See, e.g., https://justindomke.wordpress.com/2009/02/17/automatic-differentiation-the-most-criminally-underused-tool-in-the-potential-machine-learning-toolbox/

2. https://github.com/HIPS/autograd

3. https://chainer.org/

4. http://pytorch.org/


This allows accurate evaluation of derivatives at machine precision with only a small constant factor of overhead and ideal asymptotic efficiency. In contrast with the effort involved in arranging code as closed-form expressions under the syntactic and semantic constraints of symbolic differentiation, AD can be applied to regular code with minimal change, allowing branching, loops, and recursion. Because of this generality, AD has been applied to computer simulations in industry and academia and found applications in fields including engineering design optimization (Forth and Evans, 2002; Casanova et al., 2002), computational fluid dynamics (Muller and Cusdin, 2005; Thomas et al., 2006; Bischof et al., 2006), physical modeling (Ekstrom et al., 2010), optimal control (Walther, 2007), structural mechanics (Haase et al., 2002), atmospheric sciences (Carmichael and Sandu, 1997; Charpentier and Ghemires, 2000), and computational finance (Bischof et al., 2002; Capriotti, 2011).

In machine learning, a specialized counterpart of AD known as the backpropagation algorithm has been the mainstay for training neural networks, with a colorful history of having been reinvented at various times by independent researchers (Griewank, 2012; Schmidhuber, 2015). It has been one of the most studied and used training algorithms since the day it became popular mainly through the work of Rumelhart et al. (1986). In simplest terms, backpropagation models learning as gradient descent in neural network weight space, looking for the minima of an objective function. The required gradient is obtained by the backward propagation of the sensitivity of the objective value at the output (Figure 1), utilizing the chain rule to compute partial derivatives of the objective with respect to each weight. The resulting algorithm is essentially equivalent to transforming the network evaluation function composed with the objective function under reverse mode AD, which, as we shall see, actually generalizes the backpropagation idea. Thus, a modest understanding of the mathematics underlying backpropagation provides one with sufficient background for grasping AD techniques.

In this paper we review AD from a machine learning perspective, covering its origins, applications in machine learning, and methods of implementation. Along the way, we also aim to dispel some misconceptions that we believe have impeded wider recognition of AD by the machine learning community. In Section 2 we start by explicating how AD differs from numerical and symbolic differentiation. Section 3 gives an introduction to the AD technique and its forward and reverse accumulation modes. Section 4 discusses the role of derivatives in machine learning and examines cases where AD has relevance. Section 5 covers various implementation approaches and general-purpose AD tools, followed by Section 6 where we discuss future directions.

2. What AD Is Not

Without proper introduction, one might assume that AD is either a type of numerical or symbolic differentiation. Confusion can arise because AD does in fact provide numerical values of derivatives (as opposed to derivative expressions) and it does so by using symbolic rules of differentiation (but keeping track of derivative values as opposed to the resulting expressions), giving it a two-sided nature that is partly symbolic and partly numerical (Griewank, 2003). We start by emphasizing how AD is different from, and in several aspects superior to, these two commonly encountered techniques of computing derivatives.


Figure 1: Overview of backpropagation. (a) Training inputs xi are fed forward, generating corresponding activations yi. An error E between the actual output y3 and the target output t is computed. (b) The error adjoint is propagated backward, giving the gradient with respect to the weights ∇wi E = (∂E/∂w1, . . . , ∂E/∂w6), which is subsequently used in a gradient-descent procedure. The gradient with respect to the inputs ∇xi E can also be computed in the same backward pass.

2.1 AD Is Not Numerical Differentiation

Numerical differentiation is the finite difference approximation of derivatives using values of the original function evaluated at some sample points (Burden and Faires, 2001) (Figure 2, lower right). In its simplest form, it is based on the limit definition of a derivative. For example, for a multivariate function f : R^n → R, one can approximate the gradient ∇f = (∂f/∂x1, . . . , ∂f/∂xn) using

∂f(x)/∂xi ≈ (f(x + h ei) − f(x)) / h ,   (1)

where ei is the i-th unit vector and h > 0 is a small step size. This has the advantage of being uncomplicated to implement, but the disadvantages of performing O(n) evaluations of f for a gradient in n dimensions and requiring careful consideration in selecting the step size h.
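
To make Eq. 1 concrete, here is a minimal Python sketch (ours, not part of the paper) of forward-difference gradient approximation; the test function and the step size are illustrative choices, and the O(n) cost appears as one extra evaluation of f per input dimension.

import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x (Eq. 1)."""
    x = np.asarray(x, dtype=float)
    fx = f(x)                      # one baseline evaluation
    grad = np.zeros_like(x)
    for i in range(x.size):        # n additional evaluations for n dimensions
        e = np.zeros_like(x)
        e[i] = 1.0
        grad[i] = (f(x + h * e) - fx) / h
    return grad

# Example: f(x) = ln(x1) + x1*x2 - sin(x2), used throughout this survey
f = lambda x: np.log(x[0]) + x[0] * x[1] - np.sin(x[1])
print(numerical_gradient(f, [2.0, 5.0]))   # approximately [5.5, 1.716]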

Numerical approximations of derivatives are inherently ill-conditioned and unstable [5], with the exception of complex variable methods that are applicable to a limited set of holomorphic functions (Fornberg, 1981). This is due to the introduction of truncation [6] and round-off [7] errors inflicted by the limited precision of computations and the chosen value of the step size h.

5. Using the limit definition of the derivative for finite difference approximation commits both cardinal sins of numerical analysis: “thou shalt not add small numbers to big numbers”, and “thou shalt not subtract numbers which are approximately equal”.

6. Truncation error is the error of approximation, or inaccuracy, one gets from h not actually being zero. It is proportional to a power of h.


Figure 2: The range of approaches for differentiating mathematical expressions and computer code, looking at the example of a truncated logistic map (upper left). Symbolic differentiation (center right) gives exact results but requires closed-form input and suffers from expression swell; numerical differentiation (lower right) has problems of accuracy due to round-off and truncation errors; automatic differentiation (lower left) is as accurate as symbolic differentiation with only a constant factor of overhead and support for control flow. The figure contrasts four routes from the function to its derivative: manual differentiation followed by coding, coding followed by symbolic differentiation of the closed form, automatic differentiation of the program, and numerical differentiation of the program. Its panels are as follows.

Original function (truncated logistic map):
l1 = x,  l_{n+1} = 4 l_n (1 − l_n),  f(x) = l4 = 64x(1−x)(1−2x)^2 (1−8x+8x^2)^2

Coding:
f(x):
  v = x
  for i = 1 to 3
    v = 4*v*(1 - v)
  return v
or, in closed form,
f(x):
  return 64*x*(1-x)*((1-2*x)^2)*(1-8*x+8*x*x)^2

Manual differentiation, then coding (exact):
f'(x) = 128x(1−x)(−8+16x)(1−2x)^2(1−8x+8x^2) + 64(1−x)(1−2x)^2(1−8x+8x^2)^2 − 64x(1−2x)^2(1−8x+8x^2)^2 − 256x(1−x)(1−2x)(1−8x+8x^2)^2

Symbolic differentiation of the closed form, then coding (exact):
f'(x):
  return 128*x*(1 - x)*(-8 + 16*x)*((1 - 2*x)^2)*(1 - 8*x + 8*x*x)
       + 64*(1 - x)*((1 - 2*x)^2)*((1 - 8*x + 8*x*x)^2)
       - (64*x*(1 - 2*x)^2)*(1 - 8*x + 8*x*x)^2
       - 256*x*(1 - x)*(1 - 2*x)*(1 - 8*x + 8*x*x)^2

Automatic differentiation, forward mode (exact):
f'(x):
  (v, dv) = (x, 1)
  for i = 1 to 3
    (v, dv) = (4*v*(1-v), 4*dv - 8*v*dv)
  return (v, dv)

Numerical differentiation (approximate):
f'(x):
  h = 0.000001
  return (f(x + h) - f(x)) / h
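
The pseudocode panels of Figure 2 translate directly into Python. The sketch below (ours, not from the paper) evaluates the truncated logistic map together with its derivative using the forward-mode (v, dv) tuple from the lower-left panel, and compares it against the forward-difference approximation from the lower-right panel.

def f(x):
    """Truncated logistic map: l1 = x, l_{n+1} = 4 l_n (1 - l_n), f(x) = l4."""
    v = x
    for _ in range(3):
        v = 4 * v * (1 - v)
    return v

def f_and_df(x):
    """Forward-mode AD: propagate (value, derivative) through the same loop."""
    v, dv = x, 1.0
    for _ in range(3):
        v, dv = 4 * v * (1 - v), 4 * dv - 8 * v * dv
    return v, dv

x0 = 0.2
h = 1e-6
exact = f_and_df(x0)[1]              # exact up to machine precision
approx = (f(x0 + h) - f(x0)) / h     # forward difference, approximate
print(exact, approx)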


Figure 3: Error in the forward (Eq. 1) and center difference (Eq. 2) approximations as a function of step size h, for the derivative of the truncated logistic map f(x) = 64x(1−x)(1−2x)^2(1−8x+8x^2)^2 at x0 = 0.2. Plotted errors are computed using Eforward(h, x0) = |(f(x0 + h) − f(x0))/h − f′(x0)| and Ecenter(h, x0) = |(f(x0 + h) − f(x0 − h))/(2h) − f′(x0)|. (The plot shows both the forward difference and center difference error curves on log–log axes, with a round-off-error-dominant regime at small h and a truncation-error-dominant regime at large h.)

Truncation error tends to zero as h → 0. However, as h is decreased, round-off error increases and becomes dominant (Figure 3).

Various techniques have been developed to mitigate approximation errors in numerical differentiation, such as using a center difference approximation

∂f(x)/∂xi = (f(x + h ei) − f(x − h ei)) / (2h) + O(h^2) ,   (2)

where the first-order errors cancel and one effectively moves the truncation error from first-order to second-order in h [8]. For the one-dimensional case, it is just as costly to compute the forward difference (Eq. 1) and the center difference (Eq. 2), requiring only two evaluations of f. However, with increasing dimensionality, a trade-off between accuracy and performance is faced, where computing a Jacobian matrix of a function f : R^n → R^m requires 2mn evaluations.
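
The behavior in Figure 3 can be reproduced with a short standalone Python sketch (ours, not from the paper); it redefines the truncated logistic map and its exact forward-mode derivative so that it runs on its own, then sweeps the step size h and prints the forward (Eq. 1) and center (Eq. 2) difference errors at x0 = 0.2.

def f(x):                      # truncated logistic map from Figure 2
    v = x
    for _ in range(3):
        v = 4 * v * (1 - v)
    return v

def df_exact(x):               # forward-mode AD tuple; exact up to machine precision
    v, dv = x, 1.0
    for _ in range(3):
        v, dv = 4 * v * (1 - v), 4 * dv - 8 * v * dv
    return dv

x0 = 0.2
for h in [10.0 ** -k for k in range(1, 15)]:
    e_fwd = abs((f(x0 + h) - f(x0)) / h - df_exact(x0))
    e_ctr = abs((f(x0 + h) - f(x0 - h)) / (2 * h) - df_exact(x0))
    print(f"h={h:.0e}  forward={e_fwd:.2e}  center={e_ctr:.2e}")

As h shrinks, both errors first decrease in the truncation-dominated regime and then grow again in the round-off-dominated regime, mirroring Figure 3.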

7. Round-off error is the inaccuracy one gets from valuable low-order bits of the final answer having to compete for machine-word space with high-order bits of f(x + h ei) and f(x) (Eq. 1), which the computer has to store just until they cancel in the subtraction at the end. Round-off error is inversely proportional to a power of h.

8. This does not avoid either of the cardinal sins, and is still highly inaccurate due to truncation.


Other techniques for improving numerical differentiation, including higher-order finite differences, Richardson extrapolation to the limit (Brezinski and Zaglia, 1991), and differential quadrature methods using weighted sums (Bert and Malik, 1996), have increased computational complexity, do not completely eliminate approximation errors, and remain highly susceptible to floating point truncation.

The O(n) complexity of numerical differentiation for a gradient in n dimensions is the main obstacle to its usefulness in machine learning, where n can be as large as millions or billions in state-of-the-art deep learning models (Shazeer et al., 2017). In contrast, approximation errors would be tolerated in a deep learning setting thanks to the well-documented error resiliency of neural network architectures (Gupta et al., 2015).

2.2 AD Is Not Symbolic Differentiation

Symbolic differentiation is the automatic manipulation of expressions for obtaining derivative expressions (Grabmeier and Kaltofen, 2003) (Figure 2, center right), carried out by applying transformations representing rules of differentiation such as

d/dx (f(x) + g(x)) ⇝ d/dx f(x) + d/dx g(x)
d/dx (f(x) g(x)) ⇝ (d/dx f(x)) g(x) + f(x) (d/dx g(x)) .   (3)

When formulae are represented as data structures, symbolically differentiating an expression tree is a perfectly mechanistic process, considered subject to mechanical automation even at the very inception of calculus (Leibniz, 1685). This is realized in modern computer algebra systems such as Mathematica, Maxima, and Maple and machine learning frameworks such as Theano.

In optimization, symbolic derivatives can give valuable insight into the structure of the problem domain and, in some cases, produce analytical solutions of extrema (e.g., solving for d/dx f(x) = 0) that can eliminate the need for derivative calculation altogether. On the other hand, symbolic derivatives do not lend themselves to efficient runtime calculation of derivative values, as they can get exponentially larger than the expression whose derivative they represent.

Consider a function h(x) = f(x) g(x) and the multiplication rule in Eq. 3. Since h is a product, h(x) and d/dx h(x) have some common components, namely f(x) and g(x). Note also that on the right hand side, f(x) and d/dx f(x) appear separately. If we just proceeded to symbolically differentiate f(x) and plugged its derivative into the appropriate place, we would have nested duplications of any computation that appears in common between f(x) and d/dx f(x). Hence, careless symbolic differentiation can easily produce exponentially large symbolic expressions which take correspondingly long to evaluate. This problem is known as expression swell (Table 1).
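
Expression swell is easy to observe with a computer algebra system. The following Python sketch (ours, using SymPy as an illustrative stand-in for the systems named above) symbolically differentiates successive iterations of the logistic map from Table 1 and prints operation counts; the exact numbers depend on SymPy's automatic simplification, so the growth shown is milder than that of fully naive symbolic differentiation, but the trend is the same.

import sympy as sp

x = sp.Symbol('x')
l = x                                    # l1 = x
for n in range(1, 5):
    dl = sp.diff(l, x)                   # symbolic derivative of l_n
    # columns: n, ops in l_n, ops in d l_n/dx, ops after simplification
    print(n, sp.count_ops(l), sp.count_ops(dl), sp.count_ops(sp.simplify(dl)))
    l = 4 * l * (1 - l)                  # l_{n+1} = 4 l_n (1 - l_n)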

When we are concerned with the accurate numerical evaluation of derivatives and not so much with their actual symbolic form, it is in principle possible to significantly simplify computations by storing only the values of intermediate sub-expressions in memory. Moreover, for further efficiency, we can interleave as much as possible the differentiation and simplification steps. This interleaving idea forms the basis of AD and provides an account of its simplest form: apply symbolic differentiation at the elementary operation level and keep intermediate numerical results, in lockstep with the evaluation of the main function. This is AD in the forward accumulation mode, which we shall introduce in the following section.


Table 1: Iterations of the logistic map l_{n+1} = 4 l_n (1 − l_n), l1 = x, and the corresponding derivatives of l_n with respect to x, illustrating expression swell.

n = 1:  l_n = x
        d/dx l_n = 1
        d/dx l_n (simplified form) = 1

n = 2:  l_n = 4x(1 − x)
        d/dx l_n = 4(1 − x) − 4x
        d/dx l_n (simplified form) = 4 − 8x

n = 3:  l_n = 16x(1 − x)(1 − 2x)^2
        d/dx l_n = 16(1 − x)(1 − 2x)^2 − 16x(1 − 2x)^2 − 64x(1 − x)(1 − 2x)
        d/dx l_n (simplified form) = 16(1 − 10x + 24x^2 − 16x^3)

n = 4:  l_n = 64x(1 − x)(1 − 2x)^2 (1 − 8x + 8x^2)^2
        d/dx l_n = 128x(1 − x)(−8 + 16x)(1 − 2x)^2 (1 − 8x + 8x^2) + 64(1 − x)(1 − 2x)^2 (1 − 8x + 8x^2)^2 − 64x(1 − 2x)^2 (1 − 8x + 8x^2)^2 − 256x(1 − x)(1 − 2x)(1 − 8x + 8x^2)^2
        d/dx l_n (simplified form) = 64(1 − 42x + 504x^2 − 2640x^3 + 7040x^4 − 9984x^5 + 7168x^6 − 2048x^7)


3. AD and Its Main Modes

AD can be thought of as performing a non-standard interpretation of a computer program where this interpretation involves augmenting the standard computation with the calculation of various derivatives. All numerical computations are ultimately compositions of a finite set of elementary operations for which derivatives are known (Verma, 2000; Griewank and Walther, 2008), and combining the derivatives of the constituent operations through the chain rule gives the derivative of the overall composition. Usually these elementary operations include the binary arithmetic operations, the unary sign switch, and transcendental functions such as the exponential, the logarithm, and the trigonometric functions.

On the left hand side of Table 2 we see the representation of the computation y = f(x1, x2) = ln(x1) + x1x2 − sin(x2) as an evaluation trace of elementary operations—also called a Wengert list (Wengert, 1964). We adopt the three-part notation used by Griewank and Walther (2008), where a function f : R^n → R^m is constructed using intermediate variables vi such that

• variables vi−n = xi, i = 1, . . . , n are the input variables,
• variables vi, i = 1, . . . , l are the working (intermediate) variables, and
• variables ym−i = vl−i, i = m − 1, . . . , 0 are the output variables.

Figure 4 shows the given trace of elementary operations represented as a computational graph (Bauer, 1974), useful in visualizing dependency relations between intermediate variables.

Evaluation traces form the basis of the AD techniques. An important point to note here is that AD can differentiate not only closed-form expressions in the classical sense, but also algorithms making use of control flow such as branching, loops, recursion, and procedure calls, giving it an important advantage over symbolic differentiation, which severely limits such expressivity.


Figure 4: Computational graph of the example f(x1, x2) = ln(x1) + x1x2 − sin(x2), with input nodes x1 and x2, intermediate nodes v−1, . . . , v5, and output node f(x1, x2). See the primal trace in Tables 2 or 3 for the definitions of the intermediate variables v−1 . . . v5.

This is thanks to the fact that any numeric code will eventually result in a numeric evaluation trace with particular values of the input, intermediate, and output variables, which are the only things one needs to know for computing derivatives using chain rule composition, regardless of the specific control flow path that was taken during execution. Another way of expressing this is that AD is blind with respect to any operation, including control flow statements, that does not directly alter numeric values.

3.1 Forward Mode

AD in forward accumulation mode [9] is the conceptually most simple type. Consider the evaluation trace of the function f(x1, x2) = ln(x1) + x1x2 − sin(x2) given on the left-hand side in Table 2 and in graph form in Figure 4. For computing the derivative of f with respect to x1, we start by associating with each intermediate variable vi a derivative

v̇i = ∂vi/∂x1 .

Applying the chain rule to each elementary operation in the forward primal trace, we generate the corresponding tangent (derivative) trace, given on the right-hand side in Table 2. Evaluating the primals vi in lockstep with their corresponding tangents v̇i gives us the required derivative in the final variable v̇5 = ∂y/∂x1.

This generalizes naturally to computing the Jacobian of a function f : R^n → R^m with n independent (input) variables xi and m dependent (output) variables yj. In this case, each forward pass of AD is initialized by setting only one of the variables ẋi = 1 and setting the rest to zero (in other words, setting ẋ = ei, where ei is the i-th unit vector). A run of the code with specific input values x = a then computes

ẏj = ∂yj/∂xi |x=a ,   j = 1, . . . , m ,

9. Also called tangent linear mode.


Table 2: Forward mode AD example, with y = f(x1, x2) = ln(x1) + x1x2 − sin(x2) evaluated at (x1, x2) = (2, 5) and setting ẋ1 = 1 to compute ∂y/∂x1. The original forward evaluation of the primals is augmented by the tangent operations, where each tangent line complements the corresponding primal line.

Forward Primal Trace
v−1 = x1 = 2
v0 = x2 = 5
v1 = ln v−1 = ln 2
v2 = v−1 × v0 = 2 × 5
v3 = sin v0 = sin 5
v4 = v1 + v2 = 0.693 + 10
v5 = v4 − v3 = 10.693 + 0.959
y = v5 = 11.652

Forward Tangent (Derivative) Trace
v̇−1 = ẋ1 = 1
v̇0 = ẋ2 = 0
v̇1 = v̇−1/v−1 = 1/2
v̇2 = v̇−1 × v0 + v̇0 × v−1 = 1 × 5 + 0 × 2
v̇3 = v̇0 × cos v0 = 0 × cos 5
v̇4 = v̇1 + v̇2 = 0.5 + 5
v̇5 = v̇4 − v̇3 = 5.5 − 0
ẏ = v̇5 = 5.5
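
The numbers in Table 2 can be checked with a few lines of Python (ours, not from the paper), propagating each primal vi together with its tangent v̇i:

import math

x1, x2 = 2.0, 5.0
dx1, dx2 = 1.0, 0.0          # seed: compute ∂y/∂x1

v_m1, dv_m1 = x1, dx1
v0,   dv0   = x2, dx2
v1,   dv1   = math.log(v_m1), dv_m1 / v_m1
v2,   dv2   = v_m1 * v0, dv_m1 * v0 + dv0 * v_m1
v3,   dv3   = math.sin(v0), dv0 * math.cos(v0)
v4,   dv4   = v1 + v2, dv1 + dv2
v5,   dv5   = v4 - v3, dv4 - dv3
print(v5, dv5)               # 11.652..., 5.5  (cf. Table 2)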

giving us one column of the Jacobian matrix

J_f = \begin{bmatrix} \partial y_1/\partial x_1 & \cdots & \partial y_1/\partial x_n \\ \vdots & \ddots & \vdots \\ \partial y_m/\partial x_1 & \cdots & \partial y_m/\partial x_n \end{bmatrix}

evaluated at the point a. Thus, the full Jacobian can be computed in n evaluations. Furthermore, forward mode AD provides a very efficient and matrix-free way of computing Jacobian–vector products

J_f r = \begin{bmatrix} \partial y_1/\partial x_1 & \cdots & \partial y_1/\partial x_n \\ \vdots & \ddots & \vdots \\ \partial y_m/\partial x_1 & \cdots & \partial y_m/\partial x_n \end{bmatrix} \begin{bmatrix} r_1 \\ \vdots \\ r_n \end{bmatrix} ,   (4)

simply by initializing with ẋ = r. Thus, we can compute the Jacobian–vector product in just one forward pass. As a special case, when f : R^n → R, we can obtain the directional derivative along a given vector r as a linear combination of the partial derivatives

∇f · r

by starting the AD computation with the values ẋ = r.

Forward mode AD is efficient and straightforward for functions f : R → R^m, as all the derivatives dyi/dx can be computed with just one forward pass. Conversely, in the other extreme of f : R^n → R, forward mode AD requires n evaluations to compute the gradient

∇f = (∂y/∂x1, . . . , ∂y/∂xn) ,

which also corresponds to a 1 × n Jacobian matrix that is built one column at a time with the forward mode in n evaluations.

In general, for cases f : R^n → R^m where n ≫ m, a different technique is often preferred. We will describe AD in reverse accumulation mode in Section 3.2.


3.1.1 Dual Numbers

Mathematically, forward mode AD (represented by the left- and right-hand sides in Table 2) can be viewed as evaluating a function using dual numbers [10], which can be defined as truncated Taylor series of the form

v + v̇ε ,

where v, v̇ ∈ R and ε is a nilpotent number such that ε^2 = 0 and ε ≠ 0. Observe, for example, that

(v + v̇ε) + (u + u̇ε) = (v + u) + (v̇ + u̇)ε
(v + v̇ε)(u + u̇ε) = (vu) + (v u̇ + v̇ u)ε ,

in which the coefficients of ε conveniently mirror symbolic differentiation rules (e.g., Eq. 3). We can utilize this by setting up a regime where

f(v + v̇ε) = f(v) + f′(v) v̇ ε   (5)

and using dual numbers as data structures for carrying the tangent value together with the primal [11]. The chain rule works as expected on this representation: two applications of Eq. 5 give

f(g(v + v̇ε)) = f(g(v) + g′(v) v̇ ε)
             = f(g(v)) + f′(g(v)) g′(v) v̇ ε .

The coefficient of ε on the right-hand side is exactly the derivative of the composition of f and g. This means that since we implement elementary operations to respect the invariant Eq. 5, all compositions of them will also do so. This, in turn, means that we can extract the derivative of a function by interpreting any non-dual number v as v + 0ε and evaluating the function in this non-standard way on an initial input with a coefficient 1 for ε:

df(x)/dx |x=v = epsilon-coefficient(dual-version(f)(v + 1ε)) .

This also extends to arbitrary program constructs, since dual numbers, as data types, can be contained in any data structure. As long as a dual number remains in a data structure with no arithmetic operations being performed on it, it will just remain a dual number; and if it is taken out of the data structure and operated on again, then the differentiation will continue.

In practice, a function f coded in a programming language of choice would be fed into an AD tool, which would then augment it with corresponding extra code to handle the dual operations so that the function and its derivative are simultaneously computed. This can be implemented through calls to a specific library, in the form of source code transformation where a given source code will be automatically modified, or through operator overloading, making the process transparent to the user. We discuss these implementation techniques in Section 5.
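
As a concrete illustration of the operator-overloading route, here is a minimal dual-number sketch in Python (ours, not from the paper); it supports only the handful of operations needed to differentiate the running example f(x1, x2) = ln(x1) + x1x2 − sin(x2), whereas a production AD tool would cover the full set of elementary operations.

import math

class Dual:
    """Dual number v + v'ε carrying a primal value (val) and a tangent (dot), per Eq. 5."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __sub__(self, other):
        return Dual(self.val - other.val, self.dot - other.dot)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def d_log(a): return Dual(math.log(a.val), a.dot / a.val)
def d_sin(a): return Dual(math.sin(a.val), a.dot * math.cos(a.val))

def f(x1, x2):
    return d_log(x1) + x1 * x2 - d_sin(x2)

# Seed x1 with tangent 1 to get ∂y/∂x1 at (2, 5); cf. Table 2.
y = f(Dual(2.0, 1.0), Dual(5.0, 0.0))
print(y.val, y.dot)    # 11.652..., 5.5

Computing ∂y/∂x2 requires a second evaluation with the seeds swapped (ẋ1 = 0, ẋ2 = 1), which is exactly the one-forward-pass-per-input behavior discussed above.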

10. First introduced by Clifford (1873), with important uses in linear algebra and physics.

11. Just as the complex number written x + yi is represented in the computer as a pair in memory (x, y) whose two slots are reals, the dual number written x + ẋε is represented as the pair (x, ẋ). Such pairs are sometimes called Argand pairs (Hamilton, 1837, p. 107, Eqs. (157) and (158)).


3.2 Reverse Mode

AD in the reverse accumulation mode [12] corresponds to a generalized backpropagation algorithm, in that it propagates derivatives backward from a given output. This is done by complementing each intermediate variable vi with an adjoint

v̄i = ∂yj/∂vi ,

which represents the sensitivity of a considered output yj with respect to changes in vi. In the case of backpropagation, y would be a scalar corresponding to the error E (Figure 1).

In reverse mode AD, derivatives are computed in the second phase of a two-phase process. In the first phase, the original function code is run forward, populating intermediate variables vi and recording the dependencies in the computational graph through a bookkeeping procedure. In the second phase, derivatives are calculated by propagating adjoints v̄i in reverse, from the outputs to the inputs.

Returning to the example y = f(x1, x2) = ln(x1) + x1x2 − sin(x2), in Table 3 we see the adjoint statements on the right-hand side, corresponding to each original elementary operation on the left-hand side. In simple terms, we are interested in computing the contribution v̄i = ∂y/∂vi of the change in each variable vi to the change in the output y. Taking the variable v0 as an example, we see in Figure 4 that the only way it can affect y is through affecting v2 and v3, so its contribution to the change in y is given by

∂y/∂v0 = (∂y/∂v2)(∂v2/∂v0) + (∂y/∂v3)(∂v3/∂v0)   or   v̄0 = v̄2 (∂v2/∂v0) + v̄3 (∂v3/∂v0) .

In Table 3, this contribution is computed in two incremental steps

v̄0 = v̄3 (∂v3/∂v0)   and   v̄0 = v̄0 + v̄2 (∂v2/∂v0) ,

lined up with the lines in the forward trace from which these expressions originate.

After the forward pass on the left-hand side, we run the reverse pass of the adjoints on the right-hand side, starting with v̄5 = ȳ = ∂y/∂y = 1. In the end we get the derivatives ∂y/∂x1 = x̄1 and ∂y/∂x2 = x̄2 in just one reverse pass.

Compared with the straightforwardness of forward accumulation mode, reverse mode AD can, at first, appear somewhat “mysterious” (Dennis and Schnabel, 1996). Griewank and Walther (2008) argue that this is in part because of the common acquaintance with the chain rule as a mechanistic procedure propagating derivatives forward.

An important advantage of the reverse mode is that it is significantly less costly to evaluate (in terms of operation count) than the forward mode for functions with a large number of inputs. In the extreme case of f : R^n → R, only one application of the reverse mode is sufficient to compute the full gradient ∇f = (∂y/∂x1, . . . , ∂y/∂xn), compared with the n passes of the forward mode needed for populating the same. Because machine learning practice principally involves the gradient of a scalar-valued objective with respect to a large number of parameters, this establishes the reverse mode, as opposed to the forward mode, as the mainstay technique in the form of the backpropagation algorithm.

12. Also called adjoint or cotangent linear mode.


Table 3: Reverse mode AD example, with y = f(x1, x2) = ln(x1) + x1x2 − sin(x2) evaluated at (x1, x2) = (2, 5). After the forward evaluation of the primals, the adjoint operations are evaluated in reverse (cf. Figure 1). Note that both ∂y/∂x1 and ∂y/∂x2 are computed in the same reverse pass, starting from the adjoint v̄5 = ȳ = ∂y/∂y = 1.

Forward Primal Trace
v−1 = x1 = 2
v0 = x2 = 5
v1 = ln v−1 = ln 2
v2 = v−1 × v0 = 2 × 5
v3 = sin v0 = sin 5
v4 = v1 + v2 = 0.693 + 10
v5 = v4 − v3 = 10.693 + 0.959
y = v5 = 11.652

Reverse Adjoint (Derivative) Trace (evaluated from the bottom line upward)
x̄1 = v̄−1 = 5.5
x̄2 = v̄0 = 1.716
v̄−1 = v̄−1 + v̄1 ∂v1/∂v−1 = v̄−1 + v̄1/v−1 = 5.5
v̄0 = v̄0 + v̄2 ∂v2/∂v0 = v̄0 + v̄2 × v−1 = 1.716
v̄−1 = v̄2 ∂v2/∂v−1 = v̄2 × v0 = 5
v̄0 = v̄3 ∂v3/∂v0 = v̄3 × cos v0 = −0.284
v̄2 = v̄4 ∂v4/∂v2 = v̄4 × 1 = 1
v̄1 = v̄4 ∂v4/∂v1 = v̄4 × 1 = 1
v̄3 = v̄5 ∂v5/∂v3 = v̄5 × (−1) = −1
v̄4 = v̄5 ∂v5/∂v4 = v̄5 × 1 = 1
v̄5 = ȳ = 1
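
The two-phase procedure of Table 3 can be expressed as a small tape-based reverse mode sketch in Python (ours, not from the paper): the forward pass records, for each intermediate variable, its parents and the local partial derivatives, and the reverse pass accumulates adjoints from the output back to the inputs in reverse topological order.

import math

class Var:
    """A value recorded on the tape, with parents and local partial derivatives."""
    def __init__(self, val, parents=()):
        self.val = val
        self.parents = list(parents)   # pairs (parent Var, ∂self/∂parent)
        self.adj = 0.0                 # adjoint, filled in by the reverse pass

    def __add__(self, o): return Var(self.val + o.val, [(self, 1.0), (o, 1.0)])
    def __sub__(self, o): return Var(self.val - o.val, [(self, 1.0), (o, -1.0)])
    def __mul__(self, o): return Var(self.val * o.val, [(self, o.val), (o, self.val)])

def log(a): return Var(math.log(a.val), [(a, 1.0 / a.val)])
def sin(a): return Var(math.sin(a.val), [(a, math.cos(a.val))])

def backward(y):
    """Reverse pass: propagate adjoints in reverse topological order."""
    order, seen = [], set()
    def topo(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                topo(p)
            order.append(v)
    topo(y)
    y.adj = 1.0
    for v in reversed(order):
        for p, local in v.parents:
            p.adj += v.adj * local     # adjoint of p += adjoint of v times ∂v/∂p

x1, x2 = Var(2.0), Var(5.0)
y = log(x1) + x1 * x2 - sin(x2)        # forward pass records the trace
backward(y)
print(y.val, x1.adj, x2.adj)           # 11.652..., 5.5, 1.716 (cf. Table 3)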

In general, for a function f : R^n → R^m, if we denote the operation count to evaluate the original function by ops(f), the time it takes to calculate the m × n Jacobian by the forward mode is n c ops(f), whereas the same computation can be done via reverse mode in m c ops(f), where c is a constant guaranteed to be c < 6 and typically c ∼ [2, 3] (Griewank and Walther, 2008). That is to say, reverse mode AD performs better when m ≪ n.

Similar to the matrix-free computation of Jacobian–vector products with forward mode (Eq. 4), reverse mode can be used for computing the transposed Jacobian–vector product

J_f^T r = \begin{bmatrix} \partial y_1/\partial x_1 & \cdots & \partial y_m/\partial x_1 \\ \vdots & \ddots & \vdots \\ \partial y_1/\partial x_n & \cdots & \partial y_m/\partial x_n \end{bmatrix} \begin{bmatrix} r_1 \\ \vdots \\ r_m \end{bmatrix} ,

by initializing the reverse phase with ȳ = r.

The advantages of reverse mode AD, however, come with the cost of increased storage requirements growing (in the worst case) in proportion to the number of operations in the evaluated function. It is an active area of research to improve storage requirements in implementations by using advanced methods such as checkpointing strategies and data-flow analysis (Dauvergne and Hascoet, 2006; Siskind and Pearlmutter, 2017).

3.3 Origins of AD and Backpropagation

Ideas underlying AD date back to the 1950s (Nolan, 1953; Beda et al., 1959). Forward mode AD as a general method for evaluating partial derivatives was essentially discovered by Wengert (1964).


It was followed by a period of relatively low activity, until interest in the field was revived in the 1980s mostly through the work of Griewank (1989), also supported by improvements in modern programming languages and the feasibility of an efficient reverse mode AD.

Reverse mode AD and backpropagation have an intertwined history. The essence of the reverse mode, cast in a continuous-time formalism, is the Pontryagin maximum principle (Rozonoer, 1959; Boltyanskii et al., 1960). This method was understood in the control theory community (Bryson and Denham, 1962; Bryson and Ho, 1969) and cast in more formal terms with discrete-time variables topologically sorted in terms of dependency by Werbos (1974). Prior to Werbos, the work by Linnainmaa (1970, 1976) is often cited as the first published description of the reverse mode. Speelpenning (1980) subsequently introduced reverse mode AD as we know it, in the sense that he gave the first implementation that was actually automatic, accepting a specification of a computational process written in a general-purpose programming language and automatically performing the reverse mode transformation.

Incidentally, Hecht-Nielsen (1989) cites the work of Bryson and Ho (1969) and Werbos (1974) as the two earliest known instances of backpropagation. Within the machine learning community, the method has been reinvented several times, such as by Parker (1985), until it was eventually brought to fame by Rumelhart et al. (1986) and the Parallel Distributed Processing (PDP) group. The PDP group became aware of Parker’s work only after their own discovery; similarly, Werbos’ work was not appreciated until it was found by Parker (Hecht-Nielsen, 1989). This tells us an interesting story of two highly interconnected research communities that have somehow also managed to stay detached during this foundational period.

For a thorough review of the development of AD, we advise readers to refer to Rall (2006). Interested readers are highly recommended to read Griewank (2012) for an investigation of the origins of the reverse mode and Schmidhuber (2015) for the same for backpropagation.

4. AD and Machine Learning

In the following, we examine the main uses of derivatives in machine learning and report on a selection of works where general-purpose AD, as opposed to just backpropagation, has been successfully applied in a machine learning context. Areas where AD has seen use include optimization, neural networks, computer vision, natural language processing, and probabilistic inference.

4.1 Gradient-Based Optimization

Gradient-based optimization is one of the pillars of machine learning (Bottou et al., 2016). Given an objective function f : R^n → R, classical gradient descent has the goal of finding (local) minima w∗ = arg min_w f(w) via updates of the form ∆w = −η∇f, where η > 0 is a step size. Gradient-based methods make use of the fact that f decreases most steeply if one goes in the direction of the negative gradient. The convergence rate of gradient-based methods is usually improved by adaptive step-size techniques that adjust the step size η on every iteration (Duchi et al., 2011; Schaul et al., 2013; Kingma and Ba, 2015).
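
As a sketch of how general-purpose AD slots into this update rule, the following Python fragment (ours, not from the paper) uses the autograd package cited earlier to obtain ∇f by reverse mode and then performs plain gradient descent; the quadratic objective, step size, and iteration count are arbitrary illustrative choices.

import autograd.numpy as np      # thinly wrapped NumPy that records operations
from autograd import grad        # reverse mode AD: returns a gradient function

def f(w):                        # illustrative objective (a simple quadratic)
    return np.sum((w - 3.0) ** 2) + 0.1 * np.sum(w ** 2)

grad_f = grad(f)                 # ∇f, obtained without any manual derivation

w = np.zeros(5)                  # initial parameters
eta = 0.1                        # step size
for _ in range(100):
    w = w - eta * grad_f(w)      # ∆w = −η ∇f
print(w)                         # approaches 3/1.1 ≈ 2.727 in each coordinate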


Table 4: Evaluation times of the Helmholtz free energy function and its gradient (Figure 5). Times are given relative to that of the original function with both (1) n = 1 and (2) n corresponding to each column. (For instance, reverse mode AD with n = 43 takes approximately twice the time to evaluate relative to the original function with n = 43.) Times are measured by averaging a thousand runs on a machine with an Intel Core i7-4785T 2.20 GHz CPU and 16 GB RAM, using DiffSharp 0.5.7. The evaluation time for the original function with n = 1 is 0.0023 ms.

n, number of variables        1      8       15      22      29       36       43       50

f, original
  Relative to n = 1           1      5.12    14.51   29.11   52.58    84.00    127.33   174.44

∇f, numerical diff.
  Relative to n = 1           1.08   35.55   176.79  499.43  1045.29  1986.70  3269.36  4995.96
  Relative to n in column     1.08   6.93    12.17   17.15   19.87    23.64    25.67    28.63

∇f, forward AD
  Relative to n = 1           1.34   13.69   51.54   132.33  251.32   469.84   815.55   1342.07
  Relative to n in column     1.34   2.66    3.55    4.54    4.77     5.59     6.40     7.69

∇f, reverse AD
  Relative to n = 1           1.52   11.12   31.37   67.27   113.99   174.62   254.15   342.33
  Relative to n in column     1.52   2.16    2.16    2.31    2.16     2.07     1.99     1.96

As we have seen, for large n, reverse mode AD provides a highly efficient method for computing gradients [13]. Figure 5 and Table 4 demonstrate how gradient computation scales differently for forward and reverse mode AD and numerical differentiation, looking at the Helmholtz free energy function that has been used in the AD literature for benchmarking gradient calculations (Griewank, 1989; Griewank and Walther, 2008; Griewank et al., 2012).

Second-order methods based on Newton’s method make use of both the gradient ∇f and the Hessian Hf, working via updates of the form ∆w = −η Hf⁻¹ ∇f and providing significantly faster convergence (Press et al., 2007). AD provides a way of automatically computing the exact Hessian, enabling succinct and convenient general-purpose implementations [14]. Newton’s method converges in fewer iterations, but this comes at the cost of having to compute Hf in each iteration. In large-scale problems, the Hessian is usually replaced by a numerical approximation using first-order updates from gradient evaluations, giving rise to quasi-Newton methods. A highly popular such method is the BFGS [15] algorithm, together with its limited-memory variant L-BFGS (Dennis and Schnabel, 1996). On the other hand, Hessians arising in large-scale applications are typically sparse.

13. See http://DiffSharp.github.io/DiffSharp/examples-gradientdescent.html for an example of a general-purpose AD-based gradient descent routine using DiffSharp.

14. See http://DiffSharp.github.io/DiffSharp/examples-newtonsmethod.html for an implementation of Newton’s method with the full Hessian.

15. After Broyden–Fletcher–Goldfarb–Shanno, who independently discovered the method in the 1970s.


Figure 5: Evaluation time of the Helmholtz free energy function of a mixed fluid, based on the Peng-Robinson equation of state (Peng and Robinson, 1976),

f(x) = R T Σ_{i=0}^{n} log( xi / (1 − b^T x) ) − ( x^T A x / (√8 b^T x) ) log( (1 + (1 + √2) b^T x) / (1 + (1 − √2) b^T x) ) ,

where R is the universal gas constant, T is the absolute temperature, b ∈ R^n is a vector of constants, A ∈ R^{n×n} is a symmetric matrix of constants, and x ∈ R^n is the vector of independent variables describing the system. The plots show the evaluation time of f and the gradient ∇f with numerical differentiation (central difference), forward mode AD, and reverse mode AD, as a function of the number of variables n. Reported times are relative to the evaluation time of f with n = 1. The lower plot uses logarithmic scale for illustrating the behavior for small n. Numerical results are given in Table 4. (Code: http://DiffSharp.github.io/DiffSharp/misc/Benchmarks-h-grad-v0.5.7.fsx)


This sparsity, along with symmetry, can be readily exploited by AD techniques such as computational graph elimination (Dixon, 1991), partial separability (Gay, 1996), and matrix coloring and compression (Gebremedhin et al., 2009).

In many cases one does not need the full Hessian but only a Hessian–vector product Hv, which can be computed efficiently using a reverse-on-forward configuration of AD, by applying the reverse mode to take the gradient of code produced by the forward mode [16]. Given the function f : R^n → R, the evaluation point x, and the vector v, one can accomplish this by first computing the directional derivative ∇f · v through the forward mode via setting ẋ = v and then applying the reverse mode on this result to get ∇²f · v = Hf v (Pearlmutter, 1994). This computes Hv with O(n) complexity, even though H is an n × n matrix. Availability of robust AD tools may make more sophisticated optimization methods applicable to large-scale machine learning problems. For instance, when fast stochastic Hessian–vector products are available, these can be used as the basis of stochastic Newton’s methods (Agarwal et al., 2016), which have the potential to endow stochastic optimization with quadratic convergence.
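
In code, one convenient way to obtain Hv is to differentiate the scalar ∇f(x) · v once more. The sketch below (ours, not from the paper) uses the autograd package cited earlier and nests two applications of reverse mode rather than the reverse-on-forward configuration described above; footnote 16 notes that these configurations compute the same second-derivative values. The quadratic test function is an arbitrary illustrative choice.

import autograd.numpy as np
from autograd import grad

def hvp(f):
    """Return a function computing the Hessian-vector product H_f(x) v."""
    def gdotv(x, v):
        return np.dot(grad(f)(x), v)   # scalar ∇f(x)·v
    return grad(gdotv)                  # gradient w.r.t. x gives ∇(∇f·v) = Hv

# Illustrative quadratic: f(x) = x^T A x, so H = A + A^T.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
f = lambda x: np.dot(x, np.dot(A, x))
x = np.array([1.0, 2.0])
v = np.array([1.0, 0.0])
print(hvp(f)(x, v))    # equals (A + A.T) @ v, i.e. [4., 1.]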

Another approach for improving the rate of convergence of gradient-based methods is to use gain adaptation methods such as stochastic meta-descent (SMD) (Schraudolph, 1999), where stochastic sampling is introduced to avoid local minima and reduce the computational expense. An example using SMD with AD Hessian–vector products is given by Vishwanathan et al. (2006) on conditional random fields (CRF). Similarly, Schraudolph and Graepel (2003) use Hessian–vector products in their model combining conjugate gradient techniques with stochastic gradient descent.

4.2 Neural Networks, Deep Learning, Differentiable Programming

Training of a neural network is an optimization problem with respect to its set of weights, which can in principle be addressed by using any method ranging from evolutionary algorithms (Such et al., 2017) to gradient-based methods such as BFGS (Apostolopoulou et al., 2009) or the mainstay stochastic gradient descent (Bottou, 2010) and its many variants (Kingma and Ba, 2015; Tieleman and Hinton, 2012; Duchi et al., 2011). As we have seen, the backpropagation algorithm is only a special case of AD: by applying reverse mode AD to an objective function evaluating a network’s error as a function of its weights, we can readily compute the partial derivatives needed for performing weight updates [17].
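
The following sketch (ours, not from the paper) makes this concrete using the autograd package: a small feed-forward network’s loss is written as an ordinary Python function of its weights, and reverse mode AD supplies all the weight gradients for the update loop. The architecture, synthetic data, step size, and iteration count are arbitrary illustrative choices.

import autograd.numpy as np
import autograd.numpy.random as npr
from autograd import grad

npr.seed(0)
X = npr.randn(64, 3)                        # toy inputs
y = np.sin(X).sum(axis=1, keepdims=True)    # toy regression targets

def loss(params, X, y):
    W1, b1, W2, b2 = params
    h = np.tanh(np.dot(X, W1) + b1)         # hidden layer
    pred = np.dot(h, W2) + b2               # output layer
    return np.mean((pred - y) ** 2)         # mean squared error

params = [0.1 * npr.randn(3, 16), np.zeros(16),
          0.1 * npr.randn(16, 1), np.zeros(1)]
loss_grad = grad(loss)                      # reverse mode AD applied to the loss code

for step in range(500):
    g = loss_grad(params, X, y)             # gradients w.r.t. every weight array
    params = [p - 0.05 * dp for p, dp in zip(params, g)]
print(loss(params, X, y))                   # much smaller than at initialization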

The LUSH system (Bottou and LeCun, 2002), and its predecessor SN (Bottou and LeCun, 1988), were the first production systems that targeted efficient neural network simulation while incorporating both a general-purpose programming language and AD. Modern deep learning frameworks provide differentiation capability in one way or another, but the underlying mechanism is not always made clear and confusion abounds regarding the use of the terms “autodiff”, “automatic differentiation”, and “symbolic differentiation”, which are sometimes even used interchangeably.

16. Christianson (2012) demonstrates that the second derivative can be computed with the same arithmetic operation sequence using forward-on-reverse, reverse-on-forward, and reverse-on-reverse. The taping overheads of these methods may differ in implementation-dependent ways.

17. See http://DiffSharp.github.io/DiffSharp/examples-neuralnetworks.html for an implementation of backpropagation with reverse mode AD.


In mainstream frameworks including Theano [18] (Bastien et al., 2012), TensorFlow (Abadi et al., 2016), Caffe (Jia et al., 2014), and CNTK (Seide and Agarwal, 2016) the user first constructs a model as a computational graph using a domain-specific mini language, which then gets interpreted by the framework during execution. This approach has the advantage of enabling optimizations of the computational graph structure (e.g., as in Theano), but the disadvantages of having limited and unintuitive control flow and being difficult to debug. In contrast, the lineage of recent frameworks led by autograd (Maclaurin, 2016), Chainer (Tokui et al., 2015), and PyTorch (Paszke et al., 2017) provides truly general-purpose reverse mode AD of the type we outline in Section 3, where the user directly uses the host programming language to define the model as a regular program of the forward computation. This eliminates the need for an interpreter, allows arbitrary control flow statements, and makes debugging simple and intuitive.

Simultaneously with the ongoing adoption of general-purpose AD in machine learning, we are witnessing a modeling-centric terminology emerge within the deep learning community. The terms define-and-run and static computational graph refer to Theano-like systems where a model is constructed, before execution, as a computational graph structure, which later gets executed with different inputs while remaining fixed. In contrast, the terms define-by-run and dynamic computational graph refer to the general-purpose AD capability available in newer PyTorch-like systems where a model is a regular program in the host programming language, whose execution dynamically constructs a computational graph on-the-fly that can freely change in each iteration [19].

Differentiable programming [20] is another emerging term referring to the realization that deep learning models are essentially differentiable program templates of potential solutions to a problem. These templates are constructed as differentiable directed graphs assembled from functional blocks, and their parameters are learned using gradient-based optimization of an objective describing the task. Expressed in this paradigm, neural networks are just a class of parameterized differentiable programs composed of building blocks such as feed-forward, convolutional, and recurrent elements. We are increasingly seeing these traditional building blocks freely composed in arbitrary algorithmic structures using control flow, as well as the introduction of novel differentiable architectures such as the neural Turing machine (Graves et al., 2014), a range of controller–interface abstractions (Graves et al., 2016; Zaremba et al., 2016; Joulin and Mikolov, 2015; Sukhbaatar et al., 2015), and differentiable versions of data structures such as stacks, queues, and deques (Grefenstette et al., 2015). Availability of general-purpose AD greatly simplifies the implementation of such architectures by enabling their expression as regular programs that rely on the differentiation infrastructure. Although the differentiable programming perspective on deep learning is new, we note that programming with differentiable functions and having differentiation as a language infrastructure has been the main research subject of the AD community for many decades and has been realized in a wide range of systems and languages, as we shall see in Section 5.

18. Theano is a computational graph optimizer and compiler with GPU support and it currently handles derivatives in a highly optimized form of symbolic differentiation. The result can be interpreted as a hybrid of symbolic differentiation and reverse mode AD, but Theano does not use the general-purpose reverse accumulation as we describe in this paper. (Personal communication with the authors.)

19. Note that the terms “static” and “dynamic” here are used in the sense of having a fixed versus non-fixed computational graph topology and not in the sense of data flow architectures.

20. A term advocated by Christopher Olah (http://colah.github.io/posts/2015-09-NN-Types-FP/), David Dalrymple (https://www.edge.org/response-detail/26794), and Yann LeCun (https://www.facebook.com/yann.lecun/posts/10155003011462143) from a deep learning point of view. Note the difference from differential dynamic programming (Mayne and Jacobson, 1970) in optimal control.



There are instances in neural network literature—albeit few—where explicit reference has been made to AD for computing error gradients, such as Eriksson et al. (1998) using AD for large-scale feed-forward networks, and the work by Yang et al. (2008), where the authors use AD to train a neural-network-based proportional-integral-derivative (PID) controller. Similarly, Rollins (2009) uses reverse mode AD in conjunction with neural networks for the problem of optimal feedback control. Another example is given for continuous time recurrent neural networks (CTRNN) by Al Seyab and Cao (2008), where the authors apply AD for the training of CTRNNs predicting dynamic behavior of nonlinear processes in real time and report significantly reduced training time compared with other methods.

4.3 Computer Vision

Since the influential work by Krizhevsky et al. (2012), computer vision has been dominated by deep learning, specifically, variations of convolutional neural networks (LeCun et al., 1998). These models are trained end-to-end, meaning that a mapping from raw input data to corresponding outputs is learned, automatically discovering the representations needed for feature detection in a process called representation learning (Bengio et al., 2013).

Besides deep learning, an interesting area where AD can be applied to computer vision problems is inverse graphics (Horn, 1977; Hinton and Ghahramani, 1997)—or analysis-by-synthesis (Yildirim et al., 2015)—where vision is seen as the inference of parameters for a generative model of a scene. Using gradient-based optimization in inverse graphics requires propagating derivatives through whole image synthesis pipelines including the renderer. Eslami et al. (2016) use numerical differentiation for this purpose. Loper and Black (2014) implement the Open Differentiable Renderer (OpenDR), which is a scene renderer that also supplies derivatives of the image pixels with respect to scene parameters, and demonstrate it in the task of fitting an articulated and deformable 3D model of the human body to image and range data from a Kinect device. Similarly, Kulkarni et al. (2015) implement a differentiable approximate renderer for the task of inference in probabilistic programs describing scenes.

Srajer et al. (2016) investigate the use of AD for three tasks in computer vision and machine learning, namely bundle adjustment (Triggs et al., 1999), Gaussian mixture model fitting, and hand tracking (Taylor et al., 2014), and provide a comprehensive benchmark of various AD tools for the computation of derivatives in these tasks.

Pock et al. (2007) make use of AD in addressing the problems of denoising, segmentation, and recovery of information from stereoscopic image pairs, and note the usefulness of AD in identifying sparsity patterns in large Jacobian and Hessian matrices. In another study, Grabner et al. (2008) use reverse mode AD for GPU-accelerated medical 2D/3D registration, a task involving the alignment of data from different sources such as X-ray images or computed tomography. The authors report a six-fold increase in speed compared with numerical differentiation using center difference (cf. our benchmark with the Helmholtz function, Figure 5 and Table 4).


Barrett and Siskind (2013) present a use of general-purpose AD for the task of video event detection using hidden Markov models (HMMs) and Dalal and Triggs (2005) object detectors, performing training on a corpus of pre-tracked video using an adaptive step size gradient descent with reverse mode AD. Initially implemented with the R6RS-AD package,21 which provides forward and reverse mode AD in Scheme, the resulting gradient code was later ported to C and highly optimized.22

4.4 Natural Language Processing

Natural language processing (NLP) constitutes one of the areas where rapid progress is being made by applying deep learning techniques (Goldberg, 2016), with applications in tasks including machine translation (Bahdanau et al., 2014), language modeling (Mikolov et al., 2010), dependency parsing (Chen and Manning, 2014), and question answering (Kumar et al., 2016). Besides deep learning approaches, statistical models in NLP are commonly trained using general purpose or specialized gradient-based methods and mostly remain expensive to train. Improvements in training time can be realized by using online or distributed training algorithms (Gimpel et al., 2010). An example using stochastic gradient descent for NLP is given by Finkel et al. (2008) optimizing conditional random field parsers through an objective function. Related to the work on video event detection in the previous section, Yu and Siskind (2013) report their work on sentence tracking, representing an instance of grounded language learning paired with computer vision, where the system learns word meanings from short video clips paired with descriptive sentences. The method uses HMMs to represent changes in video frames and meanings of different parts of speech. This work is implemented in C and computes the required gradients using AD through the ADOL-C tool.23

4.5 Probabilistic Modeling and Inference

Inference in probabilistic models can be static, such as compiling a given model to Bayesian networks and using algorithms such as belief propagation for inference; or it can be dynamic, executing a model forward many times and computing statistics on observed values to infer posterior distributions. Markov chain Monte Carlo (MCMC) (Neal, 1993) methods are often used for dynamic inference, such as the Metropolis–Hastings algorithm based on random sampling (Chib and Greenberg, 1995). Meyer et al. (2003) give an example of how AD can be used to speed up Bayesian posterior inference in MCMC, with an application in stochastic volatility. Amortized inference (Gershman and Goodman, 2014; Stuhlmuller et al., 2013) techniques based on deep learning (Le et al., 2017; Ritchie et al., 2016) work by training neural networks for performing approximate inference in generative models defined as probabilistic programs (Gordon et al., 2014).

When model parameters are continuous, the Hamiltonian—or, hybrid—Monte Carlo (HMC) algorithm provides improved convergence characteristics avoiding the slow

21. https://github.com/qobi/R6RS-AD

22. Personal communication.
23. An implementation of the sentence tracker applied to video search using sentence-based queries can be accessed online: http://upplysingaoflun.ecn.purdue.edu/~qobi/cccp/sentence-tracker-video-retrieval.html


exploration of random sampling, by simulating Hamiltonian dynamics through auxiliary “momentum variables” (Duane et al., 1987). The advantages of HMC come at the cost of requiring gradient evaluations of complicated probability models. AD is highly suitable here for complementing probabilistic modeling, because it relieves the user from the manual derivation of gradients for each model.24 For instance, the probabilistic programming language Stan (Carpenter et al., 2016) implements automatic Bayesian inference based on HMC and the No-U-Turn sampler (NUTS) (Hoffman and Gelman, 2014) and uses reverse mode AD for the calculation of gradients for both HMC and NUTS (Carpenter et al., 2015). Similarly, Wingate et al. (2011) demonstrate the use of AD as a non-standard interpretation of probabilistic programs enabling efficient inference algorithms. Kucukelbir et al. (2017) present an AD-based method for deriving variational inference (VI) algorithms.
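To make the connection concrete, the following minimal sketch (not the Stan or DiffSharp implementation; the target density, step size, and trajectory length are placeholder choices) uses the autograd package described in Section 5.3 to obtain the gradient of a log-density for the leapfrog integration at the heart of HMC:

    import autograd.numpy as np
    from autograd import grad

    def log_prob(theta):
        # Placeholder target: an isotropic Gaussian log-density (up to a constant).
        return -0.5 * np.sum(theta ** 2)

    # AD supplies the gradient of the log-density; no manual derivation is needed.
    grad_log_prob = grad(log_prob)

    def leapfrog(theta, momentum, eps=0.1, steps=10):
        # Simulate Hamiltonian dynamics: alternate momentum and position updates.
        momentum = momentum + 0.5 * eps * grad_log_prob(theta)
        for _ in range(steps - 1):
            theta = theta + eps * momentum
            momentum = momentum + eps * grad_log_prob(theta)
        theta = theta + eps * momentum
        momentum = momentum + 0.5 * eps * grad_log_prob(theta)
        return theta, momentum

Swapping in a different log_prob changes the model but leaves the sampler code untouched, which is the property exploited by the HMC-based systems cited above.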

PyMC3 (Salvatier et al., 2016) allows fitting of Bayesian models using MCMC and VI, for which it uses gradients supplied by Theano. Edward (Tran et al., 2016) is a library for deep probabilistic modeling, inference, and criticism (Tran et al., 2017) that supports VI using TensorFlow. Availability of general-purpose AD in this area has enabled new libraries such as Pyro25 and ProbTorch (Siddharth et al., 2017) for deep universal probabilistic programming with support for recursion and control flow, relying, in both instances, on VI using gradients supplied by PyTorch’s reverse mode AD infrastructure.

When working with probabilistic models, one often needs to backpropagate derivatives through sampling operations of random variables in order to achieve stochastic optimization of model parameters. The score-function estimator, or REINFORCE (Williams, 1992), method provides a generally applicable unbiased gradient estimate, albeit with high variance. When working with continuous random variables, one can substitute a random variable by a deterministic and differentiable transformation of a simpler random variable, a method known as the “reparameterization trick” (Williams, 1992; Kingma and Welling, 2014; Rezende et al., 2014). For discrete variables, the REBAR (Tucker et al., 2017) method provides a lower-variance unbiased gradient estimator by using continuous relaxation. A generalization of REBAR called RELAX (Grathwohl et al., 2017) works by learning a free-form control variate parameterized by a neural network and is applicable in both discrete and continuous settings.
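As an illustrative sketch of the reparameterization trick (the objective and the Gaussian parameterization below are arbitrary examples, again written with the autograd package), sampling z ~ N(mu, sigma^2) is rewritten as z = mu + sigma * eps with eps ~ N(0, 1), so that derivatives with respect to mu and sigma flow through a deterministic, differentiable transformation of parameter-free noise:

    import autograd.numpy as np
    import numpy.random as npr
    from autograd import grad

    def expected_cost(params, n_samples=1000):
        mu, log_sigma = params[0], params[1]
        eps = npr.randn(n_samples)              # base noise, independent of the parameters
        z = mu + np.exp(log_sigma) * eps        # reparameterized samples z ~ N(mu, sigma^2)
        return np.mean(z ** 2)                  # a toy expectation E[z^2] to be minimized

    # Monte Carlo estimate of the gradient of E[z^2] with respect to (mu, log_sigma).
    grad_estimate = grad(expected_cost)(np.array([0.5, 0.0]))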

5. Implementations

It is useful to have an understanding of the different ways in which AD can be implemented. Here we cover major implementation strategies and provide a survey of existing tools.

A principal consideration in any AD implementation is the performance overhead introduced by the AD arithmetic and bookkeeping. In terms of computational complexity, AD guarantees that the amount of arithmetic goes up by no more than a small constant factor (Griewank and Walther, 2008). On the other hand, managing this arithmetic can introduce a significant overhead if done carelessly. For instance, naïvely allocating data structures for holding dual numbers will involve memory access and allocation for every arithmetic

24. See http://diffsharp.github.io/DiffSharp/examples-hamiltonianmontecarlo.html for an implementation of HMC with reverse mode AD.

25. http://pyro.ai/


operation, which are usually more expensive than arithmetic operations on modern computers.26

Likewise, using operator overloading may introduce method dispatches with attendant costs, which, compared to raw numerical computation of the original function, can easily amount to a slowdown of an order of magnitude.27

Another major issue is the risk of hitting a class of bugs called “perturbation confusion” (Siskind and Pearlmutter, 2005; Manzyuk et al., 2012). This essentially means that if two ongoing differentiations affect the same piece of code, the two formal epsilons they introduce (Section 3.1.1) need to be kept distinct. It is very easy to have bugs—particularly in performance-oriented AD implementations—that confuse these in various ways. Such situations can also arise when AD is nested, that is, derivatives are computed for functions that internally compute derivatives.
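The canonical worked example from Siskind and Pearlmutter (2005) illustrates the failure mode. Consider evaluating d/dx [x · (d/dy (x + y))] at x = 1, y = 1. The inner derivative is 1 for any x, so the correct answer is d/dx [x] = 1. A forward mode implementation that reuses a single formal epsilon for both differentiations lifts the outer x to 1 + ε and the inner y to 1 + ε with the same ε; the inner differentiation of x + y = 2 + 2ε then reports 2 instead of 1, and the overall result comes out as 2.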

Translation of mathematics into computer code often requires attention to numeric issues. For instance, the mathematical expressions log(1 + x), √(x² + y² + z²), and tan⁻¹(y/x) should not be naïvely translated, but rather expressed as log1p(x), hypot(x, hypot(y, z)), and atan2(y, x). In machine learning, the most prominent example of this is probably the so-called log-sum-exp trick to improve the numerics of calculations of the form log ∑_i exp x_i. AD is not immune to such numeric considerations. For example, code calculating E = ∑_i E_i, processed by AD, will calculate ∇_w E = ∑_i ∇_w E_i. If the system is seeking a local minimum of E, then ∇_w E = ∑_i ∇_w E_i tends to zero as the optimization proceeds, and naïvely adding a set of large numbers whose sum is near zero is numerically fraught. This is to say that AD is not immune to the perils of floating point arithmetic, and can sometimes introduce numeric issues which were not present in the primal calculation. Issues of numeric analysis are outside our present scope, but there is a robust literature on the numerics of AD (e.g., Griewank et al. (2012)) involving using subgradients to allow optimization to proceed despite non-differentiability of the objective, appropriate subgradients and approximations for functions like |·|, ‖·‖_2, and √· near zero, and a spate of related issues.
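As a small, generic illustration of the log-sum-exp trick mentioned above (plain NumPy, not tied to a particular AD tool), subtracting the maximum before exponentiating keeps both the primal computation and anything differentiating through it in a safe floating point range:

    import numpy as np

    def logsumexp(x):
        # Numerically stable log(sum(exp(x))): shift by the maximum so that the
        # largest exponentiated term is exp(0) = 1 and nothing overflows.
        m = np.max(x)
        return m + np.log(np.sum(np.exp(x - m)))

    x = np.array([1000.0, 1001.0, 1002.0])
    print(np.log(np.sum(np.exp(x))))  # naive version overflows to inf
    print(logsumexp(x))               # approximately 1002.41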

One should also be cautious about approximated functions and AD (Sirkes and Tziperman, 1997). In this case, if one has a procedure approximating an ideal function, AD always gives the derivative of the procedure that was actually programmed, which may not be a good approximation of the derivative of the ideal function that the procedure was approximating. For instance, consider e^x computed by a piecewise-rational approximation routine. Using AD on this routine would produce an approximated derivative in which each piece of the piecewise formula gets differentiated. Even if this would remain an approximation of the derivative of e^x, we know that de^x/dx = e^x and the original approximation itself was already a better approximation for the derivative of e^x.28 Users of AD implementations must therefore be cautious to approximate the derivative, not differentiate the approximation. This would require explicitly approximating a known derivative, in cases where a

26. The implementation of forward mode in Julia (Revels et al., 2016b) attempts to avoid this, and some current compilers can avoid this expense by unboxing dual numbers (Leroy, 1997; Jones et al., 1993b; Jones and Launchbury, 1991; Siskind and Pearlmutter, 2016). This method is also used to reduce the memory-access overhead in the implementations of forward mode in Stalingrad and the Haskell ad library.

27. Flow analysis (Shivers, 1991) and/or partial evaluation (Jones et al., 1993a), together with tag stripping (Appel, 1989; Peterson, 1989), can remove this method dispatch. These, together with unboxing, can often make it possible to completely eliminate the memory access, memory allocation, memory reclamation, and method dispatch overhead of dual numbers (Siskind and Pearlmutter, 2016).

28. In modern systems this is not an issue, because e^x is a primitive implemented in hardware.


mathematical function can only be computed approximately but has a well-defined mathematical derivative.

We note that there are similarities as well as differences between machine learning workloads and those studied in the traditional AD literature (Baydin et al., 2016b). Deep learning systems are generally compute-bound and spend a considerable amount of computation time in highly-optimized numerical kernels for matrix operations (Hadjis et al., 2015; Chetlur et al., 2014). This is a situation which is arguably amenable to operator-overloading-based AD implementations on high-level operations, as is commonly found in current machine learning frameworks. In contrast, numerical simulation workloads in traditional AD applications can be bandwidth-bound, making source code transformation and compiler optimization approaches more relevant. Another difference worth noting is that whereas high numerical precision is desirable in traditional application domains of AD such as computational fluid dynamics (Cohen and Molemaker, 2009), in deep learning lower precision is sufficient and even desirable in improving computational efficiency, thanks to the error resiliency of neural networks (Gupta et al., 2015; Courbariaux et al., 2015).

There are instances in recent literature where implementation-related experience from the AD field has been put to use in machine learning settings. One particular area of recent interest is implicit and iterative AD techniques (Griewank and Walther, 2008), which have found use in work incorporating constrained optimization within deep learning (Amos and Kolter, 2017) and probabilistic graphical models and neural networks (Johnson et al., 2016). Another example is checkpointing strategies (Dauvergne and Hascoet, 2006; Siskind and Pearlmutter, 2017), which allow balancing of application-specific trade-offs between time and space complexities of reverse mode AD by not storing the full tape of intermediate variables in memory and reconstructing these as needed by re-running parts of the forward computation from intermediate checkpoints. This is highly relevant in deep learning workloads running on GPUs with limited memory budgets. A recent example in this area is the work by Gruslys et al. (2016), where the authors construct a checkpointing variety of the backpropagation through time (BPTT) algorithm for recurrent neural networks and demonstrate it saving up to 95% memory usage at the cost of a 33% increase in computation time in one instance.
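As a hedged sketch of checkpointing in a current framework (using PyTorch's torch.utils.checkpoint utility; the layer sizes and depth below are arbitrary placeholders), wrapping a segment of the network means its intermediate activations are not stored during the forward pass and are recomputed from the segment's input during the backward pass:

    import torch
    from torch.utils.checkpoint import checkpoint

    # An arbitrary deep segment whose intermediate activations we choose not to store.
    layers = []
    for _ in range(20):
        layers += [torch.nn.Linear(512, 512), torch.nn.ReLU()]
    segment = torch.nn.Sequential(*layers)
    head = torch.nn.Linear(512, 1)

    x = torch.randn(64, 512, requires_grad=True)

    # Forward: only the segment's input and output are kept in memory.
    # Backward: the segment's forward pass is re-run to rebuild the activations.
    loss = head(checkpoint(segment, x)).sum()
    loss.backward()

This trades extra computation for memory, the same time–space trade-off exploited by the BPTT variant of Gruslys et al. (2016).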

In Table 5 we present a review of notable general-purpose AD implementations.29 A thorough taxonomy of implementation techniques was introduced by Juedes (1991), which was later revisited by Bischof et al. (2008) and simplified into elemental, operator overloading, compiler-based, and hybrid methods. We adopt a similar classification for the following part of this section.

5.1 Elemental Libraries

These implementations form the most basic category and work by replacing mathematical operations with calls to an AD-enabled library. Methods exposed by the library are then used in function definitions, meaning that the decomposition of any function into elementary operations is done manually when writing the code.
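A minimal sketch of the elemental style in Python (the function names are hypothetical; the point is that value–derivative pairs are propagated by explicit library calls, and the decomposition of f into elementary operations is written out by hand):

    import math

    # Each elemental returns a (value, derivative) pair with respect to one chosen input.
    def ad_var(x):      return (x, 1.0)
    def ad_const(c):    return (c, 0.0)
    def ad_add(a, b):   return (a[0] + b[0], a[1] + b[1])
    def ad_mul(a, b):   return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])
    def ad_sin(a):      return (math.sin(a[0]), math.cos(a[0]) * a[1])

    # f(x) = x * sin(x) + 2, decomposed manually into elemental calls:
    def f(x):
        x = ad_var(x)
        return ad_add(ad_mul(x, ad_sin(x)), ad_const(2.0))

    value, derivative = f(1.5)   # derivative = sin(1.5) + 1.5 * cos(1.5)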

The approach has been utilized since the early days of AD, with prototypical examples being the WCOMP and UCOMP packages of Lawson (1971), the APL package of Neidinger (1989), and the work by Hinkins (1994). Likewise, Rich and Hill (1992) formulate their implementation of AD in MATLAB using elemental methods.

29. Also see the website http://www.autodiff.org/ for a list of tools maintained by the AD community.


Table 5: Survey of AD implementations. Tools developed primarily for machine learning are highlighted in bold.

Language   | Tool           | Type | Mode | Institution / Project                                   | Reference                          | URL
AMPL       | AMPL           | INT  | F, R | Bell Laboratories                                       | Fourer et al. (2002)               | http://www.ampl.com/
C, C++     | ADIC           | ST   | F, R | Argonne National Laboratory                             | Bischof et al. (1997)              | http://www.mcs.anl.gov/research/projects/adic/
C, C++     | ADOL-C         | OO   | F, R | Computational Infrastructure for Operations Research    | Walther and Griewank (2012)        | https://projects.coin-or.org/ADOL-C
C++        | Ceres Solver   | LIB  | F    | Google                                                  |                                    | http://ceres-solver.org/
C++        | CppAD          | OO   | F, R | Computational Infrastructure for Operations Research    | Bell and Burke (2008)              | http://www.coin-or.org/CppAD/
C++        | FADBAD++       | OO   | F, R | Technical University of Denmark                         | Bendtsen and Stauning (1996)       | http://www.fadbad.com/fadbad.html
C++        | Mxyzptlk       | OO   | F    | Fermi National Accelerator Laboratory                   | Ostiguy and Michelotti (2007)      |
C#         | AutoDiff       | LIB  | R    | George Mason Univ., Dept. of Computer Science           | Shtof et al. (2013)                | http://autodiff.codeplex.com/
F#, C#     | DiffSharp      | OO   | F, R | Maynooth University, Microsoft Research Cambridge       | Baydin et al. (2016a)              | http://diffsharp.github.io
Fortran    | ADIFOR         | ST   | F, R | Argonne National Laboratory                             | Bischof et al. (1996)              | http://www.mcs.anl.gov/research/projects/adifor/
Fortran    | NAGWare        | COM  | F, R | Numerical Algorithms Group                              | Naumann and Riehme (2005)          | http://www.nag.co.uk/nagware/Research/ad_overview.asp
Fortran    | TAMC           | ST   | R    | Max Planck Institute for Meteorology                    | Giering and Kaminski (1998)        | http://autodiff.com/tamc/
Fortran, C | COSY           | INT  | F    | Michigan State Univ., Biomedical and Physical Sci.      | Berz et al. (1996)                 | http://www.bt.pa.msu.edu/index_cosy.htm
Fortran, C | Tapenade       | ST   | F, R | INRIA Sophia-Antipolis                                  | Hascoet and Pascual (2013)         | http://www-sop.inria.fr/tropics/tapenade.html
Haskell    | ad             | OO   | F, R | Haskell package                                         |                                    | http://hackage.haskell.org/package/ad
Java       | ADiJaC         | ST   | F, R | University Politehnica of Bucharest                     | Slusanschi and Dumitrel (2016)     | http://adijac.cs.pub.ro
Java       | Deriva         | LIB  | R    | Java & Clojure library                                  |                                    | https://github.com/lambder/Deriva
Julia      | JuliaDiff      | OO   | F, R | Julia packages                                          | Revels et al. (2016a)              | http://www.juliadiff.org/
Lua        | torch-autograd | OO   | R    | Twitter Cortex                                          |                                    | https://github.com/twitter/torch-autograd
MATLAB     | ADiMat         | ST   | F, R | Technical University of Darmstadt, Scientific Comp.     | Willkomm and Vehreschild (2013)    | http://adimat.sc.informatik.tu-darmstadt.de/
MATLAB     | INTLab         | OO   | F    | Hamburg Univ. of Technology, Inst. for Reliable Comp.   | Rump (1999)                        | http://www.ti3.tu-harburg.de/rump/intlab/
MATLAB     | TOMLAB/MAD     | OO   | F    | Cranfield University & Tomlab Optimization Inc.         | Forth (2006)                       | http://tomlab.biz/products/mad
Python     | ad             | OO   | R    | Python package                                          |                                    | https://pypi.python.org/pypi/ad
Python     | autograd       | OO   | F, R | Harvard Intelligent Probabilistic Systems Group         | Maclaurin (2016)                   | https://github.com/HIPS/autograd
Python     | Chainer        | OO   | R    | Preferred Networks                                      | Tokui et al. (2015)                | https://chainer.org/
Python     | PyTorch        | OO   | R    | PyTorch core team                                       | Paszke et al. (2017)               | http://pytorch.org/
Python     | Tangent        | ST   | F, R | Google Brain                                            | van Merrienboer et al. (2017)      | https://github.com/google/tangent
Scheme     | R6RS-AD        | OO   | F, R | Purdue Univ., School of Electrical and Computer Eng.    |                                    | https://github.com/qobi/R6RS-AD
Scheme     | Scmutils       | OO   | F    | MIT Computer Science and Artificial Intelligence Lab.   | Sussman and Wisdom (2001)          | http://groups.csail.mit.edu/mac/users/gjs/6946/refman.txt
Scheme     | Stalingrad     | COM  | F, R | Purdue Univ., School of Electrical and Computer Eng.    | Pearlmutter and Siskind (2008)     | http://www.bcl.hamilton.ie/~qobi/stalingrad/

F: Forward, R: Reverse; COM: Compiler, INT: Interpreter, LIB: Library, OO: Operator overloading, ST: Source transformation


Elemental libraries still constitute the simplest strategy to implement AD for languages without operator overloading.

5.2 Compilers and Source Code Transformation

These implementations provide extensions to programming languages that automate the decomposition of algorithms into AD-enabled elementary operations. They are typically executed as preprocessors30 to transform the input in the extended language into the original language.

Classical instances of source code transformation include the Fortran preprocessors GRESS (Horwedel et al., 1988) and PADRE2 (Kubo and Iri, 1990), which transform AD-enabled variants of Fortran into standard Fortran 77 before compiling. Similarly, the ADIFOR tool (Bischof et al., 1996), given a Fortran source code, generates an augmented code in which all specified partial derivatives are computed in addition to the original result. For procedures coded in ANSI C, the ADIC tool (Bischof et al., 1997) implements AD as a source code transformation after the specification of dependent and independent variables. A recent and popular tool also utilizing this approach is Tapenade (Pascual and Hascoet, 2008; Hascoet and Pascual, 2013), implementing forward and reverse mode AD for Fortran and C programs. Tapenade itself is implemented in Java and can be run locally or as an online service.31
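To make the idea concrete, here is a hand-written sketch of the kind of code a source transformation tool generates for forward mode (an illustration of the general approach only, not the actual output of Tapenade, ADIFOR, or Tangent):

    import math

    # Original user code:
    def f(x):
        y = x * x
        z = math.sin(y)
        return z

    # Derivative code a source transformation tool might emit: every assignment is
    # augmented with a corresponding tangent ("d"-prefixed) assignment.
    def df(x, dx):
        y = x * x
        dy = dx * x + x * dx
        z = math.sin(y)
        dz = math.cos(y) * dy
        return z, dz

    # df(3.0, 1.0) returns (sin(9.0), 6.0 * cos(9.0)), i.e., f(3) and f'(3).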

In addition to language extensions through source code transformation, there are implementations introducing new languages with tightly integrated AD capabilities through special-purpose compilers or interpreters. Some of the earliest AD tools such as SLANG (Adamson and Winant, 1969) and PROSE (Pfeiffer, 1987) belong to this category. The NAGWare Fortran 95 compiler (Naumann and Riehme, 2005) is a more recent example, where the use of AD-related extensions triggers automatic generation of derivative code at compile time.

As an example of interpreter-based implementation, the algebraic modeling language AMPL (Fourer et al., 2002) enables objectives and constraints to be expressed in mathematical notation, from which the system deduces active variables and arranges the necessary AD computations. Other examples in this category include the FM/FAD package (Mazourik, 1991), based on the Algol-like DIFALG language, and the object-oriented COSY language (Berz et al., 1996) similar to Pascal.

The Stalingrad compiler (Pearlmutter and Siskind, 2008; Siskind and Pearlmutter, 2008a), working on the Scheme-based AD-aware VLAD language, also falls under this category. The newer DVL compiler32 is based on Stalingrad and uses a reimplementation of portions of the VLAD language.

Motivated by machine learning applications, the Tangent library (van Merrienboer et al., 2017) implements AD using source code transformation, and accepts numeric functions written in a syntactic subset of Python and NumPy.

30. Preprocessors transform program source code before it is given as an input to a compiler.
31. http://www-tapenade.inria.fr:8080/tapenade/index.jsp

32. https://github.com/axch/dysvunctional-language


5.3 Operator Overloading

In modern programming languages with polymorphic features, operator overloading provides the most straightforward way of implementing AD, exploiting the capability of redefining elementary operation semantics.
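A minimal sketch of this style in Python (a generic illustration in the spirit of the tools listed below, not the API of any particular one of them) overloads arithmetic on a dual number type so that user code computes derivatives without being rewritten:

    import math

    class Dual:
        """A dual number carrying a value and its derivative."""
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value * other.value,
                        self.value * other.deriv + self.deriv * other.value)
        __rmul__ = __mul__

    def sin(x):
        return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

    def derivative(f, x):
        return f(Dual(x, 1.0)).deriv   # seed the input with derivative 1 and extract

    # d/dx [x * sin(x) + 2] at x = 1.5, without writing the derivative by hand:
    print(derivative(lambda x: x * sin(x) + 2, 1.5))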

A popular tool implemented with operator overloading in C++ is ADOL-C (Walther and Griewank, 2012). ADOL-C requires the use of AD-enabled types for variables, and records arithmetic operations on variables in tape data structures, which can subsequently be “played back” during reverse mode AD computations. The Mxyzptlk package (Michelotti, 1990) is another example for C++ capable of computing arbitrary-order partial derivatives via forward propagation. The FADBAD++ library (Bendtsen and Stauning, 1996) implements AD for C++ using templates and operator overloading. For Python, the ad package33 uses operator overloading to compute first- and second-order derivatives, while the newer autograd package34 provides forward and reverse mode AD with support for higher-order derivatives.

For functional languages, examples include R6RS-AD35 and the AD routines within the Scmutils library36 for Scheme, the ad library37 for Haskell, and DiffSharp38 for F# and C#.

6. Conclusions

Backpropagation and gradient-based optimization are behind virtually all recent successes in machine learning, yielding state-of-the-art results in computer vision, speech recognition and synthesis, and machine translation. We expect these techniques to remain at the core of machine learning for the foreseeable future. Research in the field involves a rapid prototyping and development cycle for testing new models and ideas, using a collection of increasingly higher-quality machine learning frameworks. These frameworks are in the process of transition from coarse-grained (module level) backpropagation towards fine-grained, general-purpose AD, allowing models to be implemented as regular programs in general-purpose programming languages with differentiation as an integral part of the infrastructure. We strongly believe that general-purpose AD is the future of gradient-based machine learning and we expect it to become an indispensable tool in the machine learning toolbox.

It is an exciting time for working at the intersection of AD and machine learning, and there are many opportunities for bringing advanced techniques and expertise from AD literature to bear on machine learning problems. Techniques that have been developed by the AD community such as tape reduction and elimination (Naumann, 2004), fixed-point iterations (Christianson, 1994), utilizing sparsity by matrix coloring (Gebremedhin et al., 2009, 2013), and reverse AD checkpointing (Dauvergne and Hascoet, 2006) are just a few examples that can find potential use in machine learning for increasing performance, improving convergence of optimization, using hardware more efficiently, and even enabling

33. http://pythonhosted.org/ad/

34. https://github.com/HIPS/autograd

35. https://github.com/NUIM-BCL/R6RS-AD

36. http://groups.csail.mit.edu/mac/users/gjs/6946/refman.txt

37. http://hackage.haskell.org/package/ad

38. http://diffsharp.github.io


new types of machine learning models to be implemented. Similarly, exciting new AD modes like direct propagation of the inverse Jacobian (Srinivasan and Todorov, 2015) have emerged from the machine learning community, but have yet to be examined and formalized by the AD community.

An important direction for future work is to make use of nested AD techniques in machine learning, allowing differentiation to be nested arbitrarily deep with referential transparency (Siskind and Pearlmutter, 2008b; Pearlmutter and Siskind, 2008). Nested AD is highly relevant in hyperparameter optimization as it can effortlessly provide exact hypergradients, that is, derivatives of a training objective with respect to the hyperparameters of an optimization routine (Maclaurin et al., 2015; Baydin et al., 2018). Potential applications include Bayesian model selection (Rasmussen and Williams, 2006) and gradient-based tuning of Hamiltonian Monte Carlo step sizes and mass matrices (Salimans et al., 2015). Besides hyperparameters, models internally using higher-order derivatives constitute a straightforward usage case for nested AD. The Riemannian manifold Langevin and Hamiltonian Monte Carlo methods (Girolami and Calderhead, 2011) use higher-order derivative information to more closely track the information geometry of the sampled distribution for faster convergence and exploration. In neural networks, it is very natural to use nested derivatives in defining objective functions that take input transformations into account, such as the Tangent Prop method (Simard et al., 1998) for imposing invariance under a set of chosen transformations.
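As a heavily hedged sketch of a hypergradient (a toy quadratic objective written with the autograd package; real hyperparameter optimization as in Maclaurin et al. (2015) and Baydin et al. (2018) involves more machinery), nested AD differentiates a training loop, which itself calls the gradient of the loss, with respect to the learning rate:

    import autograd.numpy as np
    from autograd import grad

    def loss(w):
        return np.sum((w - 3.0) ** 2)    # toy training objective

    def train_then_evaluate(lr):
        # A short gradient descent run written as an ordinary differentiable program.
        w = np.zeros(1)
        for _ in range(5):
            w = w - lr * grad(loss)(w)   # inner differentiation: gradient of the loss
        return loss(w)                   # objective after training

    # Outer differentiation: derivative of the final objective w.r.t. the learning rate.
    hypergradient = grad(train_then_evaluate)(0.1)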

Acknowledgments

We thank the anonymous reviewers whose comments helped improve this manuscript. This work was supported, in part, by Science Foundation Ireland grant 09/IN.1/I2637, by the Army Research Laboratory, accomplished under Cooperative Agreement Number W911NF-10-2-0060, by the National Science Foundation under Grants 1522954-IIS and 1734938-IIS, and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number D17PC00341. Any opinions, findings, views, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, official policies, or endorsements, either expressed or implied, of the sponsors. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation herein.

References

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

D. S. Adamson and C. W. Winant. A SLANG simulation of an initially strong shock wave downstream of an infinite area change. In Proceedings of the Conference on Applications of Continuous-System Simulation Languages, pages 231–40, 1969.


Naman Agarwal, Brian Bullins, and Elad Hazan. Second order stochastic optimization in linear time. Technical Report arXiv:1602.03943, arXiv preprint, 2016.

R. K. Al Seyab and Y. Cao. Nonlinear system identification for predictive control using continuous time recurrent neural networks and automatic differentiation. Journal of Process Control, 18(6):568–581, 2008. doi: 10.1016/j.jprocont.2007.10.012.

Brandon Amos and J. Zico Kolter. OptNet: Differentiable optimization as a layer in neural networks. arXiv preprint arXiv:1703.00443, 2017.

Marianna S. Apostolopoulou, Dimitris G. Sotiropoulos, Ioannis E. Livieris, and Panagiotis Pintelas. A memoryless BFGS neural network training algorithm. In 7th IEEE International Conference on Industrial Informatics, INDIN 2009, pages 216–221, June 2009. doi: 10.1109/INDIN.2009.5195806.

Andrew W. Appel. Runtime tags aren't necessary. Lisp and Symbolic Computation, 2(2):153–162, 1989.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Daniel Paul Barrett and Jeffrey Mark Siskind. Felzenszwalb-Baum-Welch: Event detection by changing appearance. arXiv preprint arXiv:1306.4746, 2013.

Frederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

Friedrich L. Bauer. Computational graphs and rounding error. SIAM Journal on Numerical Analysis, 11(1):87–96, 1974.

Atılım Gunes Baydin, Barak A. Pearlmutter, and Jeffrey Mark Siskind. DiffSharp: An AD library for .NET languages. In 7th International Conference on Algorithmic Differentiation, Christ Church Oxford, UK, September 12–15, 2016, 2016a. Also arXiv:1611.03423.

Atılım Gunes Baydin, Barak A. Pearlmutter, and Jeffrey Mark Siskind. Tricks from deep learning. In 7th International Conference on Algorithmic Differentiation, Christ Church Oxford, UK, September 12–15, 2016, 2016b. Also arXiv:1611.03777.

Atılım Gunes Baydin, Robert Cornish, David Martínez Rubio, Mark Schmidt, and Frank Wood. Online learning rate adaptation with hypergradient descent. In Sixth International Conference on Learning Representations (ICLR), Vancouver, Canada, April 30–May 3, 2018, 2018.

L. M. Beda, L. N. Korolev, N. V. Sukkikh, and T. S. Frolova. Programs for automatic differentiation for the machine BESM (in Russian). Technical report, Institute for Precise Mechanics and Computation Techniques, Academy of Science, Moscow, USSR, 1959.


Bradley M. Bell and James V. Burke. Algorithmic differentiation of implicit functions and optimal values. In C. H. Bischof, H. M. Bucker, P. Hovland, U. Naumann, and J. Utke, editors, Advances in Automatic Differentiation, volume 64 of Lecture Notes in Computational Science and Engineering, pages 67–77. Springer Berlin Heidelberg, 2008. doi: 10.1007/978-3-540-68942-3_7.

Claus Bendtsen and Ole Stauning. FADBAD, a flexible C++ package for automatic differentiation. Technical Report IMM-REP-1996-17, Department of Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark, 1996.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.

Charles W. Bert and Moinuddin Malik. Differential quadrature method in computational mechanics: A review. Applied Mechanics Reviews, 49, 1996. doi: 10.1115/1.3101882.

Martin Berz, Kyoko Makino, Khodr Shamseddine, Georg H. Hoffstatter, and Weishi Wan. COSY INFINITY and its applications in nonlinear dynamics. In M. Berz, C. Bischof, G. Corliss, and A. Griewank, editors, Computational Differentiation: Techniques, Applications, and Tools, pages 363–5. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1996.

Christian Bischof, Alan Carle, George Corliss, Andreas Griewank, and Paul Hovland. ADIFOR 2.0: Automatic differentiation of Fortran 77 programs. Computational Science Engineering, IEEE, 3(3):18–32, 1996. doi: 10.1109/99.537089.

Christian Bischof, Lucas Roh, and Andrew Mauer-Oats. ADIC: An extensible automatic differentiation tool for ANSI-C. Software Practice and Experience, 27(12):1427–56, 1997.

Christian H. Bischof, H. Martin Bucker, and Bruno Lang. Automatic differentiation for computational finance. In E. J. Kontoghiorghes, B. Rustem, and S. Siokos, editors, Computational Methods in Decision-Making, Economics and Finance, volume 74 of Applied Optimization, pages 297–310. Springer US, 2002. doi: 10.1007/978-1-4757-3613-7_15.

Christian H. Bischof, H. Martin Bucker, Arno Rasch, Emil Slusanschi, and Bruno Lang. Automatic differentiation of the general-purpose computational fluid dynamics package FLUENT. Journal of Fluids Engineering, 129(5):652–8, 2006. doi: 10.1115/1.2720475.

Christian H. Bischof, Paul D. Hovland, and Boyana Norris. On the implementation of automatic differentiation tools. Higher-Order and Symbolic Computation, 21(3):311–31, 2008. doi: 10.1007/s10990-008-9034-4.

V. G. Boltyanskii, R. V. Gamkrelidze, and L. S. Pontryagin. The theory of optimal processes I: The maximum principle. Izvest. Akad. Nauk S.S.S.R. Ser. Mat., 24:3–42, 1960.

Leon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177–186. Springer, 2010.


Leon Bottou and Yann LeCun. SN: A simulator for connectionist models. In Proceedings of NeuroNimes 88, pages 371–382, Nimes, France, 1988. URL http://leon.bottou.org/papers/bottou-lecun-88.

Leon Bottou and Yann LeCun. Lush reference manual, 2002. URL http://lush.sourceforge.net/doc.html.

Leon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.

Leon Bottou. Online learning and stochastic approximations. On-Line Learning in Neural Networks, 17:9, 1998.

Claude Brezinski and M. Redivo Zaglia. Extrapolation Methods: Theory and Practice. North-Holland, 1991.

A. E. Bryson and W. F. Denham. A steepest ascent method for solving optimum programming problems. Journal of Applied Mechanics, 29(2):247, 1962. doi: 10.1115/1.3640537.

Arthur E. Bryson and Yu-Chi Ho. Applied Optimal Control: Optimization, Estimation, and Control. Blaisdell, Waltham, MA, 1969.

Richard L. Burden and J. Douglas Faires. Numerical Analysis. Brooks/Cole, 2001.

Luca Capriotti. Fast Greeks by algorithmic differentiation. Journal of Computational Finance, 14(3):3, 2011.

Gregory R. Carmichael and Adrian Sandu. Sensitivity analysis for atmospheric chemistry models via automatic differentiation. Atmospheric Environment, 31(3):475–89, 1997.

Bob Carpenter, Matthew D. Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, and Michael Betancourt. The Stan math library: Reverse-mode automatic differentiation in C++. arXiv preprint arXiv:1509.07164, 2015.

Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Michael A. Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A probabilistic programming language. Journal of Statistical Software, 20:1–37, 2016.

Daniele Casanova, Robin S. Sharp, Mark Final, Bruce Christianson, and Pat Symonds. Application of automatic differentiation to race car performance optimisation. In George Corliss, Christele Faure, Andreas Griewank, Laurent Hascoet, and Uwe Naumann, editors, Automatic Differentiation of Algorithms, pages 117–124. Springer-Verlag New York, Inc., New York, NY, USA, 2002. ISBN 0-387-95305-1.

Isabelle Charpentier and Mohammed Ghemires. Efficient adjoint derivatives: Application to the meteorological model Meso-NH. Optimization Methods and Software, 13(1):35–63, 2000.

Danqi Chen and Christopher Manning. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750, 2014.


Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.

Siddhartha Chib and Edward Greenberg. Understanding the Metropolis-Hastings algorithm. The American Statistician, 49(4):327–335, 1995. doi: 10.1080/00031305.1995.10476177.

Bruce Christianson. Reverse accumulation and attractive fixed points. Optimization Methods and Software, 3(4):311–326, 1994.

Bruce Christianson. A Leibniz notation for automatic differentiation. In Shaun Forth, Paul Hovland, Eric Phipps, Jean Utke, and Andrea Walther, editors, Recent Advances in Algorithmic Differentiation, volume 87 of Lecture Notes in Computational Science and Engineering, pages 1–9. Springer, Berlin, 2012. ISBN 978-3-540-68935-5. doi: 10.1007/978-3-642-30023-3_1.

William K. Clifford. Preliminary sketch of bi-quaternions. Proceedings of the London Mathematical Society, 4:381–95, 1873.

J. Cohen and M. Jeroen Molemaker. A fast double precision CFD code using CUDA. Parallel Computational Fluid Dynamics: Recent Advances and Future Directions, pages 414–429, 2009.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.

George F. Corliss. Application of differentiation arithmetic, volume 19 of Perspectives in Computing, pages 127–48. Academic Press, Boston, 1988.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3123–3131, 2015.

Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), pages 886–93, Washington, DC, USA, 2005. IEEE Computer Society. doi: 10.1109/CVPR.2005.177.

Benjamin Dauvergne and Laurent Hascoet. The data-flow equations of checkpointing in reverse automatic differentiation. In V. N. Alexandrov, G. D. van Albada, P. M. A. Sloot, and J. Dongarra, editors, Computational Science – ICCS 2006, volume 3994 of Lecture Notes in Computer Science, pages 566–73, Dauvergne, 2006. Springer Berlin.

John E. Dennis and Robert B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia, 1996.


L. C. Dixon. Use of automatic differentiation for calculating Hessians and Newton steps. In A. Griewank and G. F. Corliss, editors, Automatic Differentiation of Algorithms: Theory, Implementation, and Application, pages 114–125. SIAM, Philadelphia, PA, 1991.

Simon Duane, Anthony D. Kennedy, Brian J. Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.

Ulf Ekstrom, Lucas Visscher, Radovan Bast, Andreas J. Thorvaldsen, and Kenneth Ruud. Arbitrary-order density functional response theory from automatic differentiation. Journal of Chemical Theory and Computation, 6:1971–80, 2010. doi: 10.1021/ct100117s.

Jerry Eriksson, Marten Gulliksson, Per Lindstrom, and Per Ake Wedin. Regularization tools for training large feed-forward neural networks using automatic differentiation. Optimization Methods and Software, 10(1):49–69, 1998. doi: 10.1080/10556789808805701.

S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3225–3233. Curran Associates, Inc., 2016.

Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. Efficient, feature-based, conditional random field parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL 2008), pages 959–67, 2008.

Bengt Fornberg. Numerical differentiation of analytic functions. ACM Transactions on Mathematical Software, 7(4):512–26, 1981. doi: 10.1145/355972.355979.

Shaun A. Forth. An efficient overloaded implementation of forward mode automatic differentiation in MATLAB. ACM Transactions on Mathematical Software, 32(2):195–222, 2006.

Shaun A. Forth and Trevor P. Evans. Aerofoil optimisation via AD of a multigrid cell-vertex Euler flow solver. In George Corliss, Christele Faure, Andreas Griewank, Laurent Hascoet, and Uwe Naumann, editors, Automatic Differentiation of Algorithms: From Simulation to Optimization, pages 153–160. Springer New York, New York, NY, 2002. ISBN 978-1-4613-0075-5. doi: 10.1007/978-1-4613-0075-5_17.

Robert Fourer, David M. Gay, and Brian W. Kernighan. AMPL: A Modeling Language for Mathematical Programming. Duxbury Press, 2002.

David M. Gay. Automatically finding and exploiting partially separable structure in nonlinear programming problems. Technical report, Bell Laboratories, Murray Hill, NJ, 1996.


Assefaw H. Gebremedhin, Arijit Tarafdar, Alex Pothen, and Andrea Walther. Efficient computation of sparse Hessians using coloring and automatic differentiation. INFORMS Journal on Computing, 21(2):209–23, 2009. doi: 10.1287/ijoc.1080.0286.

Assefaw H. Gebremedhin, Duc Nguyen, Md Mostofa Ali Patwary, and Alex Pothen. ColPack: Software for graph coloring and related problems in scientific computing. ACM Transactions on Mathematical Software (TOMS), 40(1):1, 2013.

Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the Annual Meeting of the Cognitive Science Society, number 36, 2014.

Ralf Giering and Thomas Kaminski. Recipes for adjoint code construction. ACM Transactions on Mathematical Software, 24:437–74, 1998. doi: 10.1145/293686.293695.

Kevin Gimpel, Dipanjan Das, and Noah A. Smith. Distributed asynchronous online learning for natural language processing. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL '10, pages 213–222, Stroudsburg, PA, USA, 2010. Association for Computational Linguistics.

Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123–214, 2011.

Yoav Goldberg. A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research, 57:345–420, 2016.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.

Andrew D. Gordon, Thomas A. Henzinger, Aditya V. Nori, and Sriram K. Rajamani. Probabilistic programming. In Proceedings of the Future of Software Engineering, pages 167–181. ACM, 2014.

Johannes Grabmeier and Erich Kaltofen. Computer Algebra Handbook: Foundations, Applications, Systems. Springer, 2003.

Markus Grabner, Thomas Pock, Tobias Gross, and Bernhard Kainz. Automatic differentiation for GPU-accelerated 2D/3D registration. In C. H. Bischof, H. M. Bucker, P. Hovland, U. Naumann, and J. Utke, editors, Advances in Automatic Differentiation, volume 64 of Lecture Notes in Computational Science and Engineering, pages 259–269. Springer Berlin Heidelberg, 2008. doi: 10.1007/978-3-540-68942-3_23.

Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, and David Duvenaud. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. arXiv preprint arXiv:1711.00123, 2017.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.


Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1828–1836, 2015.

Andreas Griewank. On automatic differentiation. In M. Iri and K. Tanabe, editors, Mathematical Programming: Recent Developments and Applications, pages 83–108. Kluwer Academic Publishers, 1989.

Andreas Griewank. A mathematical view of automatic differentiation. Acta Numerica, 12:321–98, 2003. doi: 10.1017/S0962492902000132.

Andreas Griewank. Who invented the reverse mode of differentiation? Documenta Mathematica, Extra Volume ISMP:389–400, 2012.

Andreas Griewank and Andrea Walther. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Society for Industrial and Applied Mathematics, Philadelphia, 2008. doi: 10.1137/1.9780898717761.

Andreas Griewank, Kshitij Kulshreshtha, and Andrea Walther. On the numerical stability of algorithmic differentiation. Computing, 94(2-4):125–149, 2012.

Audrunas Gruslys, Remi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. In Advances in Neural Information Processing Systems, pages 4125–4133, 2016.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1737–1746, 2015.

Gundolf Haase, Ulrich Langer, Ewald Lindner, and Wolfram Muhlhuber. Optimal sizing of industrial structural mechanics problems using AD. In Automatic Differentiation of Algorithms, pages 181–188. Springer, 2002.

Stefan Hadjis, Firas Abuzaid, Ce Zhang, and Christopher Re. Caffe con troll: Shallow ideas to speed up deep learning. In Proceedings of the Fourth Workshop on Data analytics in the Cloud, page 2. ACM, 2015.

William Rowan Hamilton. Theory of conjugate functions, or algebraic couples; with a preliminary and elementary essay on algebra as the science of pure time. Transactions of the Royal Irish Academy, 17:293–422, 1837.

Laurent Hascoet and Valerie Pascual. The Tapenade automatic differentiation tool: principles, model, and specification. ACM Transactions on Mathematical Software, 39(3), 2013. doi: 10.1145/2450153.2450158.


Robert Hecht-Nielsen. Theory of the backpropagation neural network. In International Joint Conference on Neural Networks, IJCNN 1989, pages 593–605. IEEE, 1989.

Ruth L. Hinkins. Parallel computation of automatic differentiation applied to magnetic field calculations. Technical report, Lawrence Berkeley Lab., CA, 1994.

Geoffrey E. Hinton and Zoubin Ghahramani. Generative models for discovering sparse distributed representations. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 352(1358):1177–1190, 1997.

Matthew D. Hoffman and Andrew Gelman. The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1351–1381, 2014.

Berthold K. P. Horn. Understanding image intensities. Artificial Intelligence, 8:201–231, 1977.

Jim E. Horwedel, Brian A. Worley, E. M. Oblow, and F. G. Pin. GRESS version 1.0 user's manual. Technical Memorandum ORNL/TM 10835, Martin Marietta Energy Systems, Inc., Oak Ridge National Laboratory, Oak Ridge, 1988.

Max E. Jerrell. Automatic differentiation and interval arithmetic for estimation of disequilibrium models. Computational Economics, 10(3):295–316, 1997.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 675–678. ACM, 2014.

Matthew Johnson, David K. Duvenaud, Alex Wiltschko, Ryan P. Adams, and Sandeep R. Datta. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pages 2946–2954, 2016.

Neil D. Jones, Carsten K. Gomard, and Peter Sestoft. Partial evaluation and automatic program generation. Peter Sestoft, 1993a.

Simon L. Peyton Jones and John Launchbury. Unboxed values as first class citizens in a non-strict functional language. In Conference on Functional Programming Languages and Computer Architecture, pages 636–666. Springer, 1991.

S. L. Peyton Jones, Cordy Hall, Kevin Hammond, Will Partain, and Philip Wadler. The Glasgow Haskell compiler: a technical overview. In Proc. UK Joint Framework for Information Technology (JFIT) Technical Conference, volume 93, 1993b.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.


David W. Juedes. A taxonomy of automatic differentiation tools. In A. Griewank and G. F. Corliss, editors, Automatic Differentiation of Algorithms: Theory, Implementation, and Application, pages 315–29. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1991.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. In The International Conference on Learning Representations (ICLR), San Diego, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.

K. Kubo and M. Iri. PADRE2, version 1—user's manual. Research Memorandum RMI 90-01, Department of Mathematical Engineering and Information Physics, University of Tokyo, Tokyo, 1990.

Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. Automatic differentiation variational inference. Journal of Machine Learning Research, 18(14):1–45, 2017.

Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1378–1387, New York, New York, USA, 20–22 Jun 2016. PMLR.

C. L. Lawson. Computing derivatives using W-arithmetic and U-arithmetic. Internal Computing Memorandum CM-286, Jet Propulsion Laboratory, Pasadena, CA, 1971.

Tuan Anh Le, Atılım Gunes Baydin, and Frank Wood. Inference compilation and universal probabilistic programming. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 54 of Proceedings of Machine Learning Research, pages 1338–1348, Fort Lauderdale, FL, USA, 2017. PMLR.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

G. W. Leibniz. Machina arithmetica in qua non additio tantum et subtractio sed et multiplicatio nullo, diviso vero paene nullo animi labore peragantur. Hannover, 1685.


Xavier Leroy. The effectiveness of type-based unboxing. In TIC 1997: Workshop Types in Compilation, 1997.

Seppo Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's thesis, University of Helsinki, 1970.

Seppo Linnainmaa. Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 16(2):146–160, 1976.

Matthew M. Loper and Michael J. Black. OpenDR: An approximate differentiable renderer. In European Conference on Computer Vision, pages 154–169. Springer, 2014.

Dougal Maclaurin. Modeling, Inference and Optimization with Composable Differentiable Procedures. PhD thesis, School of Engineering and Applied Sciences, Harvard University, 2016.

Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113–2122, 2015.

Oleksandr Manzyuk, Barak A. Pearlmutter, Alexey Andreyevich Radul, David R. Rush, and Jeffrey Mark Siskind. Confusion of tagged perturbations in forward automatic differentiation of higher-order functions. arXiv preprint arXiv:1211.4892, 2012.

David Q. Mayne and David H. Jacobson. Differential Dynamic Programming. American Elsevier Pub. Co., New York, 1970.

Vladimir Mazourik. Integration of automatic differentiation into a numerical library for PC's. In A. Griewank and G. F. Corliss, editors, Automatic Differentiation of Algorithms: Theory, Implementation, and Application, pages 315–29. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1991.

Renate Meyer, David A. Fournier, and Andreas Berg. Stochastic volatility: Bayesian computation using automatic differentiation and the extended Kalman filter. Econometrics Journal, 6(2):408–420, 2003. doi: 10.1111/1368-423X.t01-1-00116.

L. Michelotti. MXYZPTLK: A practical, user-friendly C++ implementation of differential algebra: User's guide. Technical Memorandum FN-535, Fermi National Accelerator Laboratory, Batavia, IL, 1990.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association, 2010.

J. D. Müller and P. Cusdin. On the performance of discrete adjoint CFD codes using automatic differentiation. International Journal for Numerical Methods in Fluids, 47(8-9):939–945, 2005. ISSN 1097-0363. doi: 10.1002/fld.885.

Uwe Naumann. Optimal accumulation of Jacobian matrices by elimination methods on the dual computational graph. Mathematical Programming, 99(3):399–421, 2004.

Uwe Naumann and Jan Riehme. Computing adjoints with the NAGWare Fortran 95 compiler. In H. M. Bücker, G. Corliss, P. Hovland, U. Naumann, and B. Norris, editors, Automatic Differentiation: Applications, Theory, and Implementations, Lecture Notes in Computational Science and Engineering, pages 159–69. Springer, 2005.

Radford M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993.

Richard D. Neidinger. Automatic differentiation and APL. College Mathematics Journal, 20(3):238–51, 1989. doi: 10.2307/2686776.

John F. Nolan. Analytical differentiation on a digital computer. Master's thesis, Massachusetts Institute of Technology, 1953.

J. F. Ostiguy and L. Michelotti. Mxyzptlk: An efficient, native C++ differentiation engine. In Particle Accelerator Conference (PAC 2007), pages 3489–91. IEEE, 2007. doi: 10.1109/PAC.2007.4440468.

David B. Parker. Learning-logic: Casting the cortex of the human brain in silicon. Technical Report TR-47, Center for Computational Research in Economics and Management Science, MIT, 1985.

Valérie Pascual and Laurent Hascoët. TAPENADE for C. In Advances in Automatic Differentiation, Lecture Notes in Computational Science and Engineering, pages 199–210. Springer, 2008. doi: 10.1007/978-3-540-68942-3_18.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS 2017 Autodiff Workshop: The Future of Gradient-based Machine Learning Software and Techniques, Long Beach, CA, US, December 9, 2017, 2017.

Barak A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6:147–60, 1994. doi: 10.1162/neco.1994.6.1.147.

Barak A. Pearlmutter and Jeffrey Mark Siskind. Reverse-mode AD in a functional framework: Lambda the ultimate backpropagator. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(2):1–36, March 2008. doi: 10.1145/1330017.1330018.

Ding-Yu Peng and Donald B. Robinson. A new two-constant equation of state. Industrial and Engineering Chemistry Fundamentals, 15(1):59–64, 1976. doi: 10.1021/i160057a011.

John Peterson. Untagged data in tagged environments: Choosing optimal representations at compile time. In Proceedings of the Fourth International Conference on Functional Programming Languages and Computer Architecture, pages 89–99. ACM, 1989.

F. W. Pfeiffer. Automatic differentiation in PROSE. SIGNUM Newsletter, 22(1):2–8, 1987. doi: 10.1145/24680.24681.

Thomas Pock, Michael Pock, and Horst Bischof. Algorithmic differentiation: Application to variational problems in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(7):1180–1193, 2007. doi: 10.1109/TPAMI.2007.1044.

William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 2007.

Louise B. Rall. Perspectives on automatic differentiation: Past, present, and future? In M. Bücker, G. Corliss, U. Naumann, P. Hovland, and B. Norris, editors, Automatic Differentiation: Applications, Theory, and Implementations, volume 50 of Lecture Notes in Computational Science and Engineering, pages 1–14. Springer Berlin Heidelberg, 2006.

Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian processes for machine learning. MIT Press, 2006.

J. Revels, M. Lubin, and T. Papamarkou. Forward-mode automatic differentiation in Julia. arXiv:1607.07892 [cs.MS], 2016a. URL https://arxiv.org/abs/1607.07892.

Jarrett Revels, Miles Lubin, and Theodore Papamarkou. Forward-mode automatic differentiation in Julia. arXiv preprint arXiv:1607.07892, 2016b.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278–1286, 2014.

Lawrence C. Rich and David R. Hill. Automatic differentiation in MATLAB. Applied Numerical Mathematics, 9:33–43, 1992.

Daniel Ritchie, Paul Horsfall, and Noah D. Goodman. Deep amortized inference for probabilistic programs. arXiv preprint arXiv:1610.05735, 2016.

Elizabeth Rollins. Optimization of neural network feedback control systems using automatic differentiation. Master's thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 2009.

L. I. Rozonoer. L. S. Pontryagin’s maximum principle in the theory of optimum systems—Part II. Automat. i Telemekh., 20:1441–1458, 1959.

David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533, 1986.

Siegfried M. Rump. INTLAB—INTerval LABoratory. In Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht, 1999. doi: 10.1007/978-94-017-1247-7_7.

Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1218–1226, 2015.

John Salvatier, Thomas V. Wiecki, and Christopher Fonnesbeck. Probabilistic programming in Python using PyMC3. PeerJ Computer Science, 2:e55, 2016.

Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. In International Conference on Machine Learning, pages 343–351, 2013.

Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.

Nicol N. Schraudolph. Local gain adaptation in stochastic gradient descent. In Proceedings of the International Conference on Artificial Neural Networks, pages 569–74, Edinburgh, Scotland, 1999. IEE London. doi: 10.1049/cp:19991170.

Nicol N. Schraudolph and Thore Graepel. Combining conjugate direction methods with stochastic approximation of gradients. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.

Frank Seide and Amit Agarwal. CNTK: Microsoft's open-source deep-learning toolkit. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 2135–2135, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2945397.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations 2017, 2017.

Olin Shivers. Control-flow analysis of higher-order languages. PhD thesis, Carnegie Mellon University, 1991.

Alex Shtof, Alexander Agathos, Yotam Gingold, Ariel Shamir, and Daniel Cohen-Or. Geosemantic snapping for sketch-based modeling. Computer Graphics Forum, 32(2):245–53, 2013. doi: 10.1111/cgf.12044.

N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah D. Goodman, Pushmeet Kohli, Frank Wood, and Philip Torr. Learning disentangled representations with semi-supervised deep generative models. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5927–5937. Curran Associates, Inc., 2017.

Patrice Simard, Yann LeCun, John Denker, and Bernard Victorri. Transformation invariance in pattern recognition, tangent distance and tangent propagation. In G. Orr and K. Müller, editors, Neural Networks: Tricks of the Trade. Springer, 1998.

Z. Sirkes and E. Tziperman. Finite difference of adjoint or adjoint of finite difference? Monthly Weather Review, 125(12):3373–8, 1997. doi: 10.1175/1520-0493(1997)125<3373:FDOAOA>2.0.CO;2.

Jeffrey Mark Siskind and Barak A. Pearlmutter. Perturbation confusion and referential transparency: Correct functional implementation of forward-mode AD. In Andrew Butterfield, editor, Implementation and Application of Functional Languages—17th International Workshop, IFL'05, pages 1–9, Dublin, Ireland, 2005. Trinity College Dublin Computer Science Department Technical Report TCD-CS-2005-60.

Jeffrey Mark Siskind and Barak A. Pearlmutter. Using polyvariant union-free flow analysis to compile a higher-order functional-programming language with a first-class derivative operator to efficient Fortran-like code. Technical Report TR-ECE-08-01, School of Electrical and Computer Engineering, Purdue University, 2008a.

Jeffrey Mark Siskind and Barak A. Pearlmutter. Nesting forward-mode AD in a functional framework. Higher-Order and Symbolic Computation, 21(4):361–376, 2008b.

Jeffrey Mark Siskind and Barak A. Pearlmutter. Efficient implementation of a higher-order language with built-in AD. In 7th International Conference on Algorithmic Differentiation, Christ Church Oxford, UK, September 12–15, 2016, 2016. Also arXiv:1611.03416.

Jeffrey Mark Siskind and Barak A. Pearlmutter. Divide-and-conquer checkpointing for arbitrary programs with no user annotation. In NIPS 2017 Autodiff Workshop: The Future of Gradient-based Machine Learning Software and Techniques, Long Beach, CA, US, December 9, 2017, 2017. Also arXiv:1708.06799.

Emil I. Slusanschi and Vlad Dumitrel. ADiJaC—Automatic differentiation of Java classfiles. ACM Transactions on Mathematical Software, 43(2):9:1–9:33, September 2016. ISSN 0098-3500. doi: 10.1145/2904901.

Bert Speelpenning. Compiling Fast Partial Derivatives of Functions Given by Algorithms. PhD thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, 1980.

Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright. Optimization for Machine Learning. MIT Press, 2011.

Filip Srajer, Zuzana Kukelova, and Andrew Fitzgibbon. A benchmark of selected algorithmic differentiation tools on some problems in machine learning and computer vision. In AD2016: The 7th International Conference on Algorithmic Differentiation, Monday 12th–Thursday 15th September 2016, Christ Church Oxford, UK: Programme and Abstracts, pages 181–184. Society for Industrial and Applied Mathematics (SIAM), 2016.

Akshay Srinivasan and Emanuel Todorov. Graphical Newton. Technical Report arXiv:1508.00952, arXiv preprint, 2015.

Andreas Stuhlmüller, Jacob Taylor, and Noah Goodman. Learning stochastic inverses. In Advances in Neural Information Processing Systems, pages 3048–3056, 2013.

Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2440–2448, 2015.

Gerald J. Sussman and Jack Wisdom. Structure and Interpretation of Classical Mechanics. MIT Press, 2001. doi: 10.1063/1.1457268.

Jonathan Taylor, Richard Stebbing, Varun Ramakrishna, Cem Keskin, Jamie Shotton, Shahram Izadi, Aaron Hertzmann, and Andrew Fitzgibbon. User-specific hand modeling from monocular depth sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 644–651, 2014.

Jeffrey P. Thomas, Earl H. Dowell, and Kenneth C. Hall. Using automatic differentiation to create a nonlinear reduced order model of a computational fluid dynamic solver. AIAA Paper, 7115:2006, 2006.

T. Tieleman and G. Hinton. Lecture 6.5—RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.

Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015.

Dustin Tran, Alp Kucukelbir, Adji B. Dieng, Maja Rudolph, Dawen Liang, and David M. Blei. Edward: A library for probabilistic modeling, inference, and criticism. arXiv preprint arXiv:1610.09787, 2016.

Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, and David M. Blei. Deep probabilistic programming. In International Conference on Learning Representations, 2017.

Bill Triggs, Philip F. McLauchlan, Richard I. Hartley, and Andrew W. Fitzgibbon. Bundle adjustment—a modern synthesis. In International Workshop on Vision Algorithms, pages 298–372. Springer, 1999.

George Tucker, Andriy Mnih, Chris J. Maddison, John Lawson, and Jascha Sohl-Dickstein. REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. In Advances in Neural Information Processing Systems, pages 2624–2633, 2017.

Bart van Merriënboer, Alexander B. Wiltschko, and Dan Moldovan. Tangent: Automatic differentiation using source code transformation in Python. arXiv preprint arXiv:1711.02712, 2017.

Arun Verma. An introduction to automatic differentiation. Current Science, 78(7):804–7, 2000.

S. V. N. Vishwanathan, Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pages 969–76, 2006. doi: 10.1145/1143844.1143966.

Andrea Walther. Automatic differentiation of explicit Runge-Kutta methods for optimal control. Computational Optimization and Applications, 36(1):83–108, 2007. doi: 10.1007/s10589-006-0397-3.

Andrea Walther and Andreas Griewank. Getting started with ADOL-C. In U. Naumann and O. Schenk, editors, Combinatorial Scientific Computing, chapter 7, pages 181–202. Chapman-Hall CRC Computational Science, 2012. doi: 10.1201/b11644-8.

Robert E. Wengert. A simple automatic derivative evaluation program. Communications of the ACM, 7:463–4, 1964.

Paul J. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University, 1974.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

J. Willkomm and A. Vehreschild. The ADiMat handbook, 2013. URL http://adimat.sc.informatik.tu-darmstadt.de/doc/.

David Wingate, Noah Goodman, Andreas Stuhlmüller, and Jeffrey Mark Siskind. Nonstandard interpretations of probabilistic programs for efficient inference. Advances in Neural Information Processing Systems, 23, 2011.

Weiwei Yang, Yong Zhao, Li Yan, and Xiaoqian Chen. Application of PID controller based on BP neural network using automatic differentiation method. In F. Sun, J. Zhang, Y. Tan, J. Cao, and W. Yu, editors, Advances in Neural Networks—ISNN 2008, volume 5264 of Lecture Notes in Computer Science, pages 702–711. Springer Berlin Heidelberg, 2008. doi: 10.1007/978-3-540-87734-9_80.

Ilker Yildirim, Tejas D. Kulkarni, Winrich A. Freiwald, and Joshua B. Tenenbaum. Efficient and robust analysis-by-synthesis in vision: A computational framework, behavioral tests, and modeling neuronal representations. In Annual Conference of the Cognitive Science Society, 2015.

Haonan Yu and Jeffrey Mark Siskind. Grounded language learning from video described with sentences. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 53–63, Sofia, Bulgaria, 2013. Association for Computational Linguistics.

Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. In International Conference on Machine Learning, pages 421–429, 2016.

Ciyou Zhu, Richard H. Byrd, Peihuang Lu, and Jorge Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS), 23(4):550–60, 1997. doi: 10.1145/279232.279236.
