Journal of Computational Finance


Neural variance reduction for stochastic differential equations

P. D. Hinds and M. V. Tretyakov

  • The use of neural networks to approximate optimal control variates is proposed in order to reduce the variance of Monte Carlo simulations for stochastic differential equations (SDEs) driven by Brownian motion or by a general Lévy process.
  • General optimality conditions for variance reduction for SDEs driven by a Lévy process are established.
  • The constructed numerical algorithms work as a practical variance reduction tool in a black-box fashion, without the need for offline neural network training.
  • Several numerical examples from option pricing are presented, which demonstrate the effectiveness of the proposed algorithms.

Variance reduction techniques are of crucial importance for the efficiency of Monte Carlo simulations in finance applications. We propose the use of neural stochastic differential equations (SDEs), with control variates parameterized by neural networks, in order to learn approximately optimal control variates and hence reduce variance as trajectories of the SDEs are simulated. We consider SDEs driven by Brownian motion and, more generally, by Lévy processes, including those with infinite activity. For the latter, we prove optimality conditions for the variance reduction. Several numerical examples from option pricing are presented.
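The control variates considered here are martingale (stochastic-integral) control variates: for a diffusion, the discounted payoff can be written as its expectation plus a stochastic integral of σ x ∂u/∂x against the Brownian motion, so subtracting a discretized version of that integral leaves the estimator unbiased while shrinking its variance. The sketch below illustrates the mechanism for a European call under geometric Brownian motion; instead of a trained neural network, the optimal integrand is approximated by the known Black–Scholes delta (an assumption made purely for a self-contained illustration, not the paper's method — the paper learns this function with a neural SDE). All parameter values are illustrative.

```python
import numpy as np
from math import erf, sqrt

# Standard normal CDF, vectorized over NumPy arrays.
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

# Illustrative market parameters (assumed, not from the paper).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 20_000, 50
dt = T / n_steps

def bs_delta(t, x):
    # Black-Scholes call delta, standing in for the learned u_x.
    tau = max(T - t, 1e-12)
    d1 = (np.log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return Phi(d1)

rng = np.random.default_rng(0)
X = np.full(n_paths, S0)
cv = np.zeros(n_paths)          # accumulated martingale control variate

for i in range(n_steps):
    t = i * dt
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    # Optimal integrand is sigma * x * u_x; here u_x ~ Black-Scholes delta.
    cv += sigma * X * bs_delta(t, X) * dW
    # Euler-Maruyama step for dX = r X dt + sigma X dW.
    X = X + r * X * dt + sigma * X * dW

disc = np.exp(-r * T)
plain = disc * np.maximum(X - K, 0.0)       # plain Monte Carlo estimator
controlled = plain - disc * cv              # control-variate estimator

print(f"plain:      {plain.mean():.4f}  (var {plain.var():.3f})")
print(f"controlled: {controlled.mean():.4f}  (var {controlled.var():.3f})")
```

Because each stochastic-integral increment has zero conditional mean, the controlled estimator targets the same price as the plain one, but its variance drops by roughly two orders of magnitude here; a neural network can recover a comparable variance reduction when, unlike in this toy setting, the optimal integrand is not known in closed form.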
