Overview · PPDE and Functional Ito Calculus · PDGM Algorithm · Numerical Examples · Summary
PDGM: A Neural Network Approach to Solve Path-Dependent Partial Differential Equations
Zhaoyu Zhang
Department of IEOR, Columbia University
Joint work with Yuri F. Saporito
February 3, 2020
Mathematical Finance Colloquium, USC
Zhaoyu Zhang (Columbia University) PDGM: A Neural Network Approach to Solve PPDEs
Literature Review
- Path-Dependent Partial Differential Equations (PPDEs) and Numerical Methods:
  - B. Dupire [2009].
  - I. Ekren, N. Touzi and J. Zhang [2014, 2016ab].
  - J. Zhang and J. Zhuo [2014].
  - Z. Ren and X. Tan [2017].
  - etc.
- Machine Learning on PDEs:
  - W. E, J. Han, and A. Jentzen [2017].
  - M. Raissi [2018, 2019].
  - A. Jacquier and M. Oumgari [2019].
  - J. Sirignano and K. Spiliopoulos [2018].
  - etc.
Outline
- Review of the notion of Path-Dependent Partial Differential Equations (PPDEs) and of Functional Ito Calculus.
- Review of Neural Networks:
  - Feed-Forward Neural Networks.
  - Recurrent Neural Networks.
- PDGM Architecture and Algorithm.
- Numerical Examples:
  - Linear and Nonlinear PPDEs.
  - Applications to Path-Dependent Options.
Functional Ito Calculus

- Time horizon T > 0 fixed.
- Denote by Λ_t the space of cadlag paths on [0, t] taking values in R^n, and define Λ = ⋃_{t∈[0,T]} Λ_t.
- Capital letters denote elements of Λ (i.e. paths) and lower-case letters denote spot values of paths; e.g. Y_t ∈ Λ means Y_t ∈ Λ_t and y_s = Y_t(s), for s ≤ t.
- We consider here continuity of functionals as the usual continuity in metric spaces with respect to the metric

  d_Λ(Y_t, Z_s) = ‖Y_{t,s−t} − Z_s‖_∞ + |s − t|,

  where, without loss of generality, we assume s ≥ t, and

  ‖Y_t‖_∞ = sup_{u∈[0,t]} |y_u|.

  The norm |·| is the usual Euclidean norm in the appropriate Euclidean space, depending on the dimension of the path being considered.
Functional Ito Calculus
Flat extension of a path:

  Y_{t,δt}(u) = { y_u, if 0 ≤ u ≤ t;  y_t, if t ≤ u ≤ t + δt }.

Bumped path:

  Y_t^h(u) = { y_u, if 0 ≤ u < t;  y_t + h, if u = t }.
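On a discrete time grid, both operations reduce to simple array manipulations. A minimal sketch (the helper names are ours, not from the talk):

```python
import numpy as np

def flat_extension(path, n_steps=1):
    """Flat extension Y_{t,dt}: repeat the spot value y_t for n_steps more points."""
    return np.concatenate([path, np.full(n_steps, path[-1])])

def bump(path, h):
    """Bumped path Y_t^h: perturb only the final (spot) value by h."""
    bumped = path.copy()
    bumped[-1] += h
    return bumped

path = np.array([0.0, 0.5, 0.3])
print(flat_extension(path, 2))  # the last value 0.3 is repeated twice
print(bump(path, 0.1))          # only the last value moves
```

Note that the bump touches only the spot value, so any part of a functional that depends on the strict past of the path is unaffected by it.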
Functional Ito Calculus
- A functional is any function f : Λ −→ R. For such objects, we define, when the limits exist, the time and space functional derivatives, respectively, as

  Δ_t f(Y_t) = lim_{δt→0+} [f(Y_{t,δt}) − f(Y_t)] / δt,

  Δ_x f(Y_t) = lim_{h→0} [f(Y_t^h) − f(Y_t)] / h.

- Our numerical method is based on the following finite-difference approximations of the functional derivatives: for a smooth functional f ∈ C^{1,2}, we use

  Δ_t f(Y_t) = [f(Y_{t,δt}) − f(Y_t)] / δt + O(δt),

  Δ_x f(Y_t) = [f(Y_t^h) − f(Y_t^{−h})] / (2h) + O(h²),

  Δ_xx f(Y_t) = [f(Y_t^h) − 2f(Y_t) + f(Y_t^{−h})] / h² + O(h²).
Theorem (Functional Feynman-Kac Formula; Dupire 2009). Let x be a process given by the SDE

  dx_s = μ(X_s)ds + σ(X_s)dw_s.

Consider functionals g : Λ_T −→ R, λ : Λ −→ R and k : Λ −→ R, and define the functional f as

  f(Y_t) = E[ e^{−∫_t^T λ(X_u)du} g(X_T) + ∫_t^T e^{−∫_t^s λ(X_u)du} k(X_s)ds | Y_t ],

for any path Y_t ∈ Λ, t ∈ [0, T]. Then, if f ∈ C^{1,2} and k, λ, μ and σ are Λ-continuous, f satisfies the (linear) PPDE

  Δ_t f(Y_t) + μ(Y_t)Δ_x f(Y_t) + (1/2)σ²(Y_t)Δ_xx f(Y_t) − λ(Y_t)f(Y_t) + k(Y_t) = 0,

with f(Y_T) = g(Y_T), for any Y_t in the topological support of the stochastic process x.
Feed-Forward Neural Networks

- A set of layers M^ρ_{d,k} with input x ∈ R^d in a feed-forward neural network can be defined as

  M^ρ_{d,k} := {M : R^d → R^k ; M(x) = ρ(Ax + b), A ∈ R^{k×d}, b ∈ R^k}.

- ρ is some activation function, such as

  ρ_tanh(x) := tanh(x),  ρ_s(x) := 1/(1 + e^{−x})  and  ρ_Id(x) := x.

- The set of feed-forward neural networks with ℓ hidden layers is defined as a composition of layers:

  NN^ℓ_{d1,d2} = {M : R^{d1} → R^{d2} ; M = M_ℓ ∘ ··· ∘ M_1 ∘ M_0,
    M_0 ∈ M^ρ_{d1,k1}, M_ℓ ∈ M^ρ_{kℓ,d2}, M_i ∈ M^ρ_{ki,ki+1}, k_i ∈ Z_+, i = 1, ..., ℓ−1}.
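In code, such a composition is just a loop over affine maps and activations. A minimal NumPy sketch (the widths and initialization are our own arbitrary choices, with ρ_Id on the output layer):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(d, k):
    """One layer M(x) = rho(Ax + b), with A in R^{k x d} and b in R^k."""
    return rng.standard_normal((k, d)) * 0.1, np.zeros(k)

def feed_forward(x, params, rho=np.tanh):
    """M = M_l o ... o M_1 o M_0: hidden layers use rho, the output layer is linear."""
    for A, b in params[:-1]:
        x = rho(A @ x + b)
    A, b = params[-1]
    return A @ x + b  # identity activation rho_Id on the last layer

# a network in NN^2_{3,1}: input dimension 3, hidden widths 8 and 8, output dimension 1
params = [layer(3, 8), layer(8, 8), layer(8, 1)]
y = feed_forward(np.array([0.1, 0.2, 0.3]), params)
print(y.shape)  # (1,)
```

In practice one would use an autodiff framework rather than raw NumPy, but the structure of the map is exactly the composition in the definition above.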
Recurrent Neural Networks

- The recurrent neural network (RNN) is powerful for capturing long-range dependence in the data.
- The LSTM network was proposed in Hochreiter & Schmidhuber (1997). It is designed to mitigate the vanishing-gradient problem from which basic RNNs often suffer.
- The LSTM network is a chain of cells. Each LSTM cell is composed of a cell state, which carries information, and three gates, which regulate the flow of information.
LSTM Network

- Mathematically, the rule inside the i-th cell is:

  Forget gate:    Γ^F_i(x_i, a_{i−1}) = ρ_s(A_F x_i + U_F a_{i−1} + b_F),
  Input gate:     Γ^I_i(x_i, a_{i−1}) = ρ_s(A_I x_i + U_I a_{i−1} + b_I),
  Output gate:    Γ^O_i(x_i, a_{i−1}) = ρ_s(A_O x_i + U_O a_{i−1} + b_O),
  Cell state:     c_i = Γ^F_i ⊙ c_{i−1} + Γ^I_i ⊙ ρ_tanh(A_C x_i + U_C a_{i−1} + b_C),
  Output vector:  a_i = Γ^O_i ⊙ ρ_tanh(c_i).

- The set of LSTM networks up to time i is

  LSTM_{i,d,k} = {M : (R^d)^i × R^k × R^k → R^k × R^k ; M(x_{[0,i]}, a_{−1}, c_{−1}) = (a_i, c_i),
    c_i = Γ^F_i ⊙ c_{i−1} + Γ^I_i ⊙ ρ_tanh(A_C x_i + U_C a_{i−1} + b_C),
    a_i = Γ^O_i ⊙ ρ_tanh(c_i),  a_{−1} = c_{−1} = 0},

  where x_{[0,i]} = [x_0, ..., x_i].
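The gate equations translate line by line into code. A self-contained NumPy sketch of one cell, run as a chain with a_{−1} = c_{−1} = 0 (weight shapes and initialization are our own illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, a_prev, c_prev, W):
    """One LSTM step following the gate equations above (⊙ is elementwise *)."""
    gamma_f = sigmoid(W["AF"] @ x + W["UF"] @ a_prev + W["bF"])  # forget gate
    gamma_i = sigmoid(W["AI"] @ x + W["UI"] @ a_prev + W["bI"])  # input gate
    gamma_o = sigmoid(W["AO"] @ x + W["UO"] @ a_prev + W["bO"])  # output gate
    c = gamma_f * c_prev + gamma_i * np.tanh(W["AC"] @ x + W["UC"] @ a_prev + W["bC"])
    a = gamma_o * np.tanh(c)  # output vector
    return a, c

d, k = 1, 4  # scalar input, hidden size 4
rng = np.random.default_rng(0)
W = {}
for g in ("F", "I", "O", "C"):
    W["A" + g] = rng.standard_normal((k, d)) * 0.1  # A in R^{k x d}
    W["U" + g] = rng.standard_normal((k, k)) * 0.1  # U in R^{k x k}
    W["b" + g] = np.zeros(k)                        # b in R^k

# run the chain over a short input sequence with a_{-1} = c_{-1} = 0
a, c = np.zeros(k), np.zeros(k)
for x_i in [np.array([0.5]), np.array([-0.2]), np.array([0.1])]:
    a, c = lstm_cell(x_i, a, c, W)
```

Because the output vector is a sigmoid gate times a tanh of the cell state, each entry of a_i stays in (−1, 1).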
PDGM Architecture
- Time discretization {t_i}_{i=1,...,N}, with δt = t_i − t_{i−1}.
- Approximate f(Y_t) by a feed-forward neural network:

  f(Y_t) ≈ u(Y_{t_i}; θ) = ϕ(t_i, y_{t_i}, a_{t_{i−1}}; θ_f),

  where t_i ≤ t < t_{i+1}.
- Here ϕ ∈ NN^ℓ_{k+2,1}, and a is an output vector from an LSTM network, i.e.

  a_{t_{i−1}} = ψ(y_{t_0}, ..., y_{t_{i−1}}; θ_r),

  for some ψ ∈ LSTM_{i−1,1,k}.
- θ = [θ_f, θ_r] are the neural network's parameters.
PDGM Architecture
- Neural network approximation of the solution:

  u(Y_{t_i}; θ) = ϕ(t_i, y_{t_i}, a_{t_{i−1}}; θ_f),
  u(Y^h_{t_i}; θ) = ϕ(t_i, y_{t_i} + h, a_{t_{i−1}}; θ_f),
  u(Y_{t_i,δt}; θ) = ϕ(t_{i+1}, y_{t_i}, a_{t_i}; θ_f).

- Neural network approximation of the derivatives:

  Δ^[δt]_t u(Y_{t_i}; θ) = [u(Y_{t_i,δt}; θ) − u(Y_{t_i}; θ)] / δt,
  Δ^[h]_x u(Y_{t_i}; θ) = [u(Y^h_{t_i}; θ) − u(Y_{t_i}; θ)] / h,
  Δ^[h]_xx u(Y_{t_i}; θ) = [u(Y^h_{t_i}; θ) − 2u(Y_{t_i}; θ) + u(Y^{−h}_{t_i}; θ)] / h².
PDGM Architecture

Figure: diagrams of the PDGM network architecture, combining the LSTM encoding of the past path with the feed-forward network ϕ.
PDGM Algorithm

- Consider the general class of final-value PPDE problems:

  Δ_t f(Y_t) + Lf(Y_t) = 0,
  f(Y_T) = g(Y_T).

- As an illustration, L could be given by the linear operator

  Lf(Y_t) = μ(Y_t)Δ_x f(Y_t) + (1/2)σ²(Y_t)Δ_xx f(Y_t) − λ(Y_t)f(Y_t) + k(Y_t).
- Given M simulated paths and the time and space discretization parameters δt and h, the loss J is approximated by

  J_{N,M}(θ) = (1/M)(1/N) Σ_{j=1}^M Σ_{i=0}^N ( Δ^[δt]_t u(Y^(j)_{t_i}; θ) + L^[h] u(Y^(j)_{t_i}; θ) )²
             + (1/M) Σ_{j=1}^M ( u(Y^(j)_{t_N}; θ) − g(Y^(j)_{t_N}) )².
Algorithm 1: Path-Dependent DGM (PDGM)

  initialize discretization parameter δt, mini-batch size M and threshold ε;
  while J_{N,M}(θ) > ε do
      generate a mini-batch of M paths {(Y^(j)_{t_i})_{i=0,...,N}}_{j=1,...,M};
      for i ∈ {1, ..., N} do
          calculate u(Y^(j)_{t_i}; θ), Δ^[δt]_t u(Y^(j)_{t_i}; θ), Δ^[h]_x u(Y^(j)_{t_i}; θ) and Δ^[h]_xx u(Y^(j)_{t_i}; θ);
          put them all together to compute L^[h] u(Y^(j)_{t_i}; θ);
      end
      calculate the approximated loss function J_{N,M}(θ);
      minimize J_{N,M}(θ), updating θ by stochastic gradient descent;
  end
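The inner loop of Algorithm 1 can be sanity-checked without any training: if a functional that exactly solves the PPDE is plugged into the loss in place of the network u(·; θ), every residual should vanish. A sketch (all helper names are ours) for the heat-type case L = ½Δ_xx with g(Y_T) = ∫_0^T y_u du, whose exact solution is f(Y_t) = ∫_0^t y_u du + y_t(T − t):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 100, 16
dt = T / N
t = np.linspace(0.0, T, N + 1)

# mini-batch of M standard Brownian motion paths, shape (M, N + 1)
paths = np.concatenate(
    [np.zeros((M, 1)),
     np.cumsum(rng.standard_normal((M, N)) * np.sqrt(dt), axis=1)], axis=1)

def u(i, path):
    """Stand-in for the network u(.; theta): the exact solution of
    Delta_t f + 0.5 * Delta_xx f = 0 with g(Y_T) = integral of y over [0, T]."""
    integral = np.sum((path[1:i + 1] + path[:i]) / 2) * dt
    return integral + path[i] * (T - t[i])

h = 1e-4
residuals, terminal = [], []
for j in range(M):
    p = paths[j]
    for i in range(N):
        # time derivative via flat extension: spot value stays p[i], time moves on
        ext = np.append(p[:i + 1], p[i])
        d_t = (u(i + 1, ext) - u(i, p)) / dt
        # second space derivative via bumping the spot value by +/- h
        up, dn = p.copy(), p.copy()
        up[i] += h
        dn[i] -= h
        d_xx = (u(i, up) - 2.0 * u(i, p) + u(i, dn)) / h**2
        residuals.append((d_t + 0.5 * d_xx) ** 2)
    # terminal penalty: u at t_N against the payoff g
    g = np.sum((p[1:] + p[:-1]) / 2) * dt
    terminal.append((u(N, p) - g) ** 2)

J = np.mean(residuals) + np.mean(terminal)
print(J)  # essentially zero for the exact solution
```

During actual training, u would be the ϕ/ψ networks and θ would be updated by stochastic gradient descent on J; here the point is only that the discretized loss correctly vanishes on a true solution.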
Linear Running Integral

- Consider the class of PPDEs of the form

  Δ_t f(Y_t) + (1/2)Δ_xx f(Y_t) = 0,
  f(Y_T) = g(Y_T).

- As an example, let g(Y_T) = ∫_0^T y_u du. The explicit solution can be calculated as

  f(Y_t) = ∫_0^t y_u du + y_t(T − t).

- The training paths in this subsection are 12800 simulated standard Brownian motion paths with T = 1 and N = 100. We choose mini-batch size M = 128.
- We use a single-layer LSTM network with 64 units, connected to a deep feed-forward neural network consisting of three hidden layers with 64, 128 and 64 units, respectively.
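The explicit solution can also be verified probabilistically: for Brownian motion, f(Y_t) = E[∫_0^T y_u du | Y_t], so continuing the path by Monte Carlo from (t, y_t) should reproduce ∫_0^t y_u du + y_t(T − t). A quick check (our own illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 1.0, 100
dt = T / N
t_grid = np.linspace(0.0, T, N + 1)

# a fixed Brownian path observed up to time t = 0.5 (grid index 50)
i0 = 50
hist = np.concatenate([[0.0], np.cumsum(rng.standard_normal(i0) * np.sqrt(dt))])
past_integral = np.sum((hist[1:] + hist[:-1]) / 2) * dt
y_t = hist[-1]

# explicit formula: f(Y_t) = past integral + y_t (T - t)
f_exact = past_integral + y_t * (T - t_grid[i0])

# Monte Carlo: continue the Brownian motion to T and average the full integral
n_mc = 50_000
steps = rng.standard_normal((n_mc, N - i0)) * np.sqrt(dt)
future = y_t + np.cumsum(steps, axis=1)
full = np.concatenate([np.full((n_mc, 1), y_t), future], axis=1)
future_integral = np.sum((full[:, 1:] + full[:, :-1]) / 2, axis=1) * dt
f_mc = past_integral + future_integral.mean()

print(abs(f_mc - f_exact))  # small Monte Carlo error
```

The agreement follows because E[y_s | y_t] = y_t for s ≥ t, so the expected future contribution is exactly y_t(T − t).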
Linear Running Integral

Figure: A sample of 128 Brownian motion paths.

Figure: Train and test losses for the linear running integral example.
Linear Running Integral

Figure: three test paths and, for each, the true and predicted solutions together with the space derivative Δ_xx f and time derivative Δ_t f.
Linear Running Integral

Figure: a test path outside the training domain and the corresponding true vs. predicted solution. Prediction failure due to the limitation of the training domain.
Linear Running Integral

Figure: three test paths spanning a wider range of values and the true vs. predicted solution for each.
Non-Linear Example

- Consider the PPDE of the form

  Δ_t f(Y_t) + (μ̄ 1_{Δ_x f(Y_t)>0} + μ̲ 1_{Δ_x f(Y_t)<0}) Δ_x f(Y_t)
    + (1/2)(σ̲² 1_{Δ_xx f(Y_t)<0} + σ̄² 1_{Δ_xx f(Y_t)>0}) Δ_xx f(Y_t) + φ(Y_t) = 0,
  f(Y_T) = cos(y_T + I_T),

  with

  φ(Y_t) = (y_t + μ̄) min(sin(y_t + I_t), 0) + (y_t + μ̲) max(sin(y_t + I_t), 0)
    + (σ̲²/2) max(cos(y_t + I_t), 0) + (σ̄²/2) min(cos(y_t + I_t), 0),

  where I_t = ∫_0^t y_u du is the running integral and μ̲ ≤ μ̄, σ̲ ≤ σ̄.

- The closed-form solution is given by

  f(Y_t) = cos(y_t + I_t).
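The bar/underline decorations on μ and σ were lost in the extracted slides; assuming the standard sup-type assignment (μ̄ with a positive gradient, σ̄ with positive convexity, as written above), the closed form can be verified pointwise with the analytic functional derivatives Δ_t f = −y sin(y + I), Δ_x f = −sin(y + I), Δ_xx f = −cos(y + I):

```python
import numpy as np

mu_lo, mu_hi = -0.2, 0.2    # mu underline, mu bar
sig_lo, sig_hi = 0.2, 0.3   # sigma underline, sigma bar

def residual(y, I):
    """PPDE residual at spot y with running integral I, for f = cos(y + I)."""
    s, c = np.sin(y + I), np.cos(y + I)
    d_t, d_x, d_xx = -y * s, -s, -c  # functional derivatives of f (bump at the
                                     # endpoint does not change the integral I)
    drift = (mu_hi if d_x > 0 else mu_lo) * d_x
    diff = 0.5 * (sig_lo**2 if d_xx < 0 else sig_hi**2) * d_xx
    phi = ((y + mu_hi) * min(s, 0.0) + (y + mu_lo) * max(s, 0.0)
           + 0.5 * sig_lo**2 * max(c, 0.0) + 0.5 * sig_hi**2 * min(c, 0.0))
    return d_t + drift + diff + phi

rng = np.random.default_rng(0)
errs = [abs(residual(y, I)) for y, I in rng.uniform(-3, 3, size=(200, 2))]
print(max(errs))  # essentially zero: f solves the nonlinear PPDE
```

The cancellation is exact: the y-terms of φ reproduce −Δ_t f, and the μ̄/μ̲ and σ̄/σ̲ terms of φ cancel the selected drift and diffusion terms sign by sign.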
Non-Linear Example

- Use standard Brownian motion paths to train the neural network.
- Specify the coefficients to be μ̲ = −0.2, μ̄ = 0.2, σ̲ = 0.2 and σ̄ = 0.3.
- The loss reaches around 4 × 10⁻⁶ after 15000 epochs.
- Test path 1 is a realization of a standard Brownian motion path.
  Test path 2 is the smooth path y_t = (1 − 2t)³.
  Test path 3 is y_{t_i} ∼ U(1, 3), i ∈ {1, ..., 100}.
Non-Linear Example

Figure: the three test paths and the true vs. predicted solution for each.
Applications in Mathematical Finance
- We will consider the classical Black-Scholes model, where the spot value follows a geometric Brownian motion with constant parameters:

  dx_t = (r − q)x_t dt + σx_t dw_t.

- Under this model, the price of a general path-dependent financial derivative with maturity T and payoff g : Λ_T −→ R solves the PPDE

  Δ_t f(Y_t) + (r − q)y_t Δ_x f(Y_t) + (1/2)σ²y_t² Δ_xx f(Y_t) − rf(Y_t) = 0,
  f(Y_T) = g(Y_T).

- Geometric Asian option: g(Y_T) = (exp{(1/T)∫_0^T log y_t dt} − K)⁺.
- Lookback option: g(Y_T) = y_T − inf_{0≤t≤T} y_t.
- Barrier option: g(Y_T) = (y_T − K)⁺ 1_{inf_{0≤t≤T} y_t > B}.
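These payoffs are easy to price by plain Monte Carlo under the Black-Scholes dynamics, which gives reference values a PPDE solver can be compared against. A sketch with our own parameter choices (not the talk's):

```python
import numpy as np

def gbm_paths(x0, r, q, sigma, T, n_steps, n_paths, seed=0):
    """Exact simulation of geometric Brownian motion on a uniform grid."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dw = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    log_x = np.log(x0) + np.cumsum((r - q - 0.5 * sigma**2) * dt + sigma * dw, axis=1)
    return np.concatenate([np.full((n_paths, 1), x0), np.exp(log_x)], axis=1)

r, q, sigma, x0, T = 0.03, 0.01, 0.2, 1.0, 1.0
K, B = 0.8, 0.6
paths = gbm_paths(x0, r, q, sigma, T, n_steps=252, n_paths=20_000)
disc = np.exp(-r * T)

# geometric Asian: the grid average of log y approximates (1/T) * int log y dt
asian = disc * np.maximum(np.exp(np.log(paths).mean(axis=1)) - K, 0.0).mean()
# lookback: y_T minus the running minimum of the path
lookback = disc * (paths[:, -1] - paths.min(axis=1)).mean()
# down-and-out barrier call: paid only if the path stays above B
barrier = disc * (np.maximum(paths[:, -1] - K, 0.0) * (paths.min(axis=1) > B)).mean()
vanilla = disc * np.maximum(paths[:, -1] - K, 0.0).mean()

print(asian, lookback, barrier)
```

Since the barrier payoff is the vanilla call payoff multiplied by an indicator, its price is always bounded above by the vanilla call price.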
Down-and-Out Call
- We focus on the case of down-and-out call options. More precisely, the option becomes worthless if the spot value crosses a down barrier B < S_0. Otherwise, the payoff is a call with strike K ≥ B.
- The payoff functional can then be written as

  g(Y_T) = (y_T − K)⁺ 1_{inf_{0≤t≤T} y_t > B}.

- A closed-form solution is available:

  f(Y_t) = 0,  if inf_{0≤u≤t} y_u ≤ B,
  f(Y_t) = C_BS(y_t, T − t) − (y_t/B)^{1−λ} C_BS(B²/y_t, T − t),  if inf_{0≤u≤t} y_u > B,

  where C_BS(y_t, T − t) is the price of a call option with strike K and maturity T at (t, y_t), and λ = 2(r − q)/σ².
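The closed form translates directly into code using only the standard library; a sketch assuming B ≤ K, with parameter values of our own choosing:

```python
from math import exp, log, sqrt, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(y, K, tau, r, q, sigma):
    """Black-Scholes call price C_BS(y, tau) with strike K."""
    if tau <= 0:
        return max(y - K, 0.0)
    d1 = (log(y / K) + (r - q + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return y * exp(-q * tau) * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

def down_and_out_call(y, running_min, K, B, tau, r, q, sigma):
    """Closed-form down-and-out call price from the formula above."""
    if running_min <= B:
        return 0.0  # knocked out: the option is worthless
    lam = 2.0 * (r - q) / sigma**2
    reflection = (y / B) ** (1.0 - lam) * bs_call(B**2 / y, K, tau, r, q, sigma)
    return bs_call(y, K, tau, r, q, sigma) - reflection

p = down_and_out_call(1.0, 1.0, K=0.8, B=0.6, tau=1.0, r=0.03, q=0.01, sigma=0.2)
print(p)  # strictly below the vanilla call price, since the correction is positive
```

The second term is the usual reflection correction, so the barrier price always sits strictly between 0 and the vanilla call price while the barrier is unbreached.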
Down-and-Out Call

- The option becomes worthless once the stock price crosses the barrier, so we modify the loss function.
- The loss for a given sample path j at time t_i is

  J^(j)_{t_i}(θ) = |u(Y^(j)_{t_i}; θ)|,  if inf_{0≤i′≤i} y^(j)_{t_{i′}} < B,
  J^(j)_{t_i}(θ) = ( Δ_t u(Y^(j)_{t_i}; θ) + Lu(Y^(j)_{t_i}; θ) )²,  otherwise.

  The total loss is calculated as

  J_{N,M}(θ) = (1/M)(1/N) Σ_{j=1}^M Σ_{i=0}^N J^(j)_{t_i}(θ)
    + (1/M) Σ_{j=1}^M [ ( u(Y^(j)_{t_N}; θ) − g(Y^(j)_{t_N}) )² 1_{inf_{0≤i≤N} y_{t_i} > B}
    + |u(Y^(j)_{t_N}; θ)| 1_{inf_{0≤i≤N} y_{t_i} < B} ].

- We then minimize this loss objective using the stochastic gradient descent algorithm, updating the parameters θ.
Down-and-Out Call (B = 0.6 and K = 0.8)

Figure: three test paths and the true vs. predicted solution for each.
Heston Model

- A more complex example is the well-known Heston model:

  dx_t = (r − q)x_t dt + √v_t x_t dw_t,
  dv_t = κ(m − v_t)dt + ξ√v_t dw*_t,
  dw_t dw*_t = ρ dt.

- The price at time t of a general path-dependent option with maturity T and payoff g : Λ_T −→ R can be written as the functional f(Y_t, v) and solves the PPDE

  Δ_t f(Y_t, v) + (r − q)y_t Δ_x f(Y_t, v) + (1/2)v y_t² Δ_xx f(Y_t, v) − rf(Y_t, v)
    + κ(m − v)∂_v f(Y_t, v) + (1/2)ξ²v ∂_vv f(Y_t, v) + ρξvy_t Δ_x ∂_v f(Y_t, v) = 0,
  f(Y_T, v) = g(Y_T).

- We consider the geometric Asian option; the generalization of our algorithm to this multidimensional case is straightforward. Specify r = 0.03, q = 0.01, κ = 3, m = 1, ξ = 1, ρ = 0.6, x_0 = v_0 = 1, T = 1.
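As context for the figures that follow, Heston paths can be simulated with a simple Euler scheme (full truncation of the variance), and geometric Asian prices then come from discounted Monte Carlo averages. A sketch with our own discretization choices, not the talk's implementation:

```python
import numpy as np

def heston_paths(x0, v0, r, q, kappa, m, xi, rho, T, n_steps, n_paths, seed=0):
    """Euler scheme for the Heston model with full truncation of the variance."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    v = np.full(n_paths, float(v0))
    out = [x.copy()]
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)  # full truncation keeps the square roots defined
        x = x * (1.0 + (r - q) * dt + np.sqrt(vp * dt) * z1)
        v = v + kappa * (m - vp) * dt + xi * np.sqrt(vp * dt) * z2
        out.append(x.copy())
    return np.array(out).T  # shape (n_paths, n_steps + 1)

r, q, T = 0.03, 0.01, 1.0
paths = heston_paths(1.0, 1.0, r, q, kappa=3.0, m=1.0, xi=1.0, rho=0.6,
                     T=T, n_steps=100, n_paths=20_000)

# discounted Monte Carlo prices of the geometric Asian call for a few strikes
gm = np.exp(np.log(np.maximum(paths, 1e-12)).mean(axis=1))  # geometric average
prices = {K: np.exp(-r * T) * np.maximum(gm - K, 0.0).mean() for K in (0.4, 0.8, 1.0)}
print(prices)
```

Since the payoff is decreasing in the strike pathwise, the resulting prices decrease in K, matching the qualitative shape of the price-vs-strike figure.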
Figure: given a pair of paths (Y_t, V_t), solutions to the Heston model vs. strike prices (Strike = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0), with the terminal value.

Figure: on the left (T = 1): prices vs. strike prices K. On the right (K = 0.4): prices vs. maturity times T.
Summary
- Review of Functional Ito Calculus: Δ_t f(Y_t), Δ_x f(Y_t), Δ_xx f(Y_t); PPDEs and the Functional Feynman-Kac Formula.
- Review of Neural Networks: NN^ℓ_{d1,d2} and LSTM_{i,d,k}.
- PDGM Architecture and Algorithm:

  u(Y_{t_i}; θ) = ϕ(t_i, y_{t_i}, a_{t_{i−1}}; θ_f),
  u(Y^h_{t_i}; θ) = ϕ(t_i, y_{t_i} + h, a_{t_{i−1}}; θ_f),
  u(Y_{t_i,δt}; θ) = ϕ(t_{i+1}, y_{t_i}, a_{t_i}; θ_f).

- Numerical Examples:
  - Linear and Nonlinear PPDEs.
  - Down-and-Out Call and Heston Model.