Deep hedging (Buehler et al. 2019) is a versatile framework for computing the optimal hedging strategy of derivatives in incomplete markets. However, this optimal strategy is hard to train due to action dependence, that is, the appropriate hedging action at the next step depends on the current action. To overcome this issue, we leverage the idea of a no-transaction band strategy, an existing technique that gives the optimal hedging strategy for European options under exponential utility. We theoretically prove that this strategy is also optimal for a wider class of utilities and derivatives, including exotics. Based on this result, we propose a no-transaction band network, a neural network architecture that facilitates fast training and precise evaluation of the optimal hedging strategy. We experimentally demonstrate that, for European and lookback options, our architecture quickly attains a better hedging strategy than a standard feed-forward network.
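As a rough illustration of the band idea (not the authors' exact implementation), the sketch below has a feed-forward network produce a band around a reference delta, and the new hedge is the previous position clamped into that band; intuitively, trading only to the nearest band edge removes the need to condition the network's output on the current position. The feature set, band centring, and network size are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoTransactionBandNet(nn.Module):
    """Illustrative no-transaction band layer (assumptions only, not the paper's code)."""

    def __init__(self, n_features: int = 3, width: int = 32):
        super().__init__()
        # Small feed-forward net that outputs the two band offsets.
        self.mlp = nn.Sequential(
            nn.Linear(n_features, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 2),  # (lower offset, upper offset)
        )

    def forward(self, features: torch.Tensor, prev_hedge: torch.Tensor,
                reference_delta: torch.Tensor) -> torch.Tensor:
        offsets = self.mlp(features)
        lower = reference_delta - F.softplus(offsets[..., 0])
        upper = reference_delta + F.softplus(offsets[..., 1])
        # Trade only if the previous position lies outside the band,
        # and then only to the nearest band edge.
        return torch.minimum(torch.maximum(prev_hedge, lower), upper)
```

At each hedging date the band is recomputed from the current features, so the network itself never has to take the previous action as an input.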
We consider the problem of neural network training in a time-varying context. Machine learning algorithms have excelled in problems that do not change over time; however, problems encountered in financial markets are often time-varying. We propose the online early stopping algorithm and show that a neural network trained with this algorithm can track a function that changes with unknown dynamics. We compare the proposed algorithm to current approaches for predicting monthly U.S. stock returns and show its superiority. We also show that prominent factors (such as the size and momentum effects) and industry indicators exhibit time-varying stock return predictiveness. We find that during market distress, industry indicators gain importance at the expense of firm-level features, indicating that industries play a role in explaining stock returns during periods of heightened risk.
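The paper's online early stopping rule is not reproduced here; purely to make the tracking setup concrete, the sketch below is a generic rolling refit loop in which the network is warm-started from the previous period's weights and each refit is stopped by a conventional validation-based rule. The data interface, loss, and hyperparameters are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def rolling_warm_start_fit(model: nn.Module, windows, lr: float = 1e-3,
                           max_epochs: int = 50, patience: int = 5):
    """Generic rolling refit with validation-based early stopping (illustrative only).

    `windows` yields (X_train, y_train, X_val, y_val) tensors per period; the model
    is warm-started from the previous period's weights so it can track a drifting target.
    """
    loss_fn = nn.MSELoss()
    val_history = []
    for X_tr, y_tr, X_val, y_val in windows:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        best_val, best_state, stall = np.inf, None, 0
        for _ in range(max_epochs):
            opt.zero_grad()
            loss_fn(model(X_tr), y_tr).backward()
            opt.step()
            with torch.no_grad():
                val = loss_fn(model(X_val), y_val).item()
            if val < best_val:
                best_val, stall = val, 0
                best_state = {k: v.clone() for k, v in model.state_dict().items()}
            else:
                stall += 1
                if stall >= patience:
                    break
        model.load_state_dict(best_state)  # keep the best weights for the next period
        val_history.append(best_val)
    return val_history
```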
We derive backward and forward nonlinear PDEs that govern the implied volatility of a contingent claim whenever the latter is well-defined. This includes at least any contingent claim written on a positive stock price whose payoff at a possibly random time is convex. We also discuss suitable initial and boundary conditions for these PDEs. Finally, we demonstrate how to solve them numerically using an iterative finite-difference approach.
Automated neural network design has received ever-increasing attention with the evolution of deep convolutional neural networks (CNNs), especially regarding their deployment on embedded and mobile platforms. One of the biggest problems that neural architecture search (NAS) confronts is that a large number of candidate neural architectures must be trained, using, for instance, reinforcement learning or evolutionary optimisation algorithms, at vast computational cost. Even recent differentiable neural architecture search (DNAS) samples a small number of candidate neural architectures, based on the probability distribution of learned architecture parameters, to select the final neural architecture. To address this computational complexity issue, we introduce a novel architecture parameterisation based on a scaled sigmoid function, and propose a general Differentiable Neural Architecture Learning (DNAL) method to optimise the neural architecture without the need to evaluate candidate neural networks. Specifically, for stochastic supernets as well as conventional CNNs, we build a new channel-wise module layer whose architecture components are controlled by a scaled sigmoid function. We train these neural network models from scratch. The network optimisation is decoupled into weight optimisation and architecture optimisation. We address the non-convex optimisation problem of neural architecture by the continuous scaled sigmoid method with convergence guarantees. Extensive experiments demonstrate that our DNAL method delivers superior performance in terms of neural architecture search cost. The optimal networks learned by DNAL surpass those produced by state-of-the-art methods on the CIFAR-10 and ImageNet-1K benchmarks in accuracy, model size, and computational complexity.
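A minimal sketch of a scaled-sigmoid channel gate of the kind described above (the exact parameterisation, layer placement, and annealing schedule are assumptions): each channel is multiplied by the sigmoid of a learnable architecture parameter, and the sigmoid's scale is gradually increased so the continuous gate approaches a binary keep/prune decision.

```python
import torch
import torch.nn as nn

class ScaledSigmoidChannelGate(nn.Module):
    """Gates each channel with sigmoid(scale * alpha); illustrative only."""

    def __init__(self, n_channels: int):
        super().__init__()
        # One architecture parameter per channel.
        self.alpha = nn.Parameter(torch.zeros(n_channels))
        # The scale (temperature) is annealed upward during training so the
        # continuous gate converges towards a 0/1 channel selection.
        self.register_buffer("scale", torch.tensor(1.0))

    def set_scale(self, value: float) -> None:
        self.scale.fill_(value)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.scale * self.alpha)   # shape (C,)
        return x * gate.view(1, -1, 1, 1)               # broadcast over N, H, W

# Usage: place the gate after a convolution and raise the scale epoch by epoch.
layer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), ScaledSigmoidChannelGate(16))
```

Decoupling weight and architecture optimisation then amounts to alternating updates of the convolution weights and of the `alpha` parameters.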
We present a multigrid iterative algorithm for solving a system of coupled free boundary problems for pricing American put options with regime-switching. The algorithm is based on our recently developed compact finite difference scheme coupled with Hermite interpolation for solving the coupled partial differential equations consisting of the asset option and the delta, gamma, and speed sensitivities. In the algorithm, we first use the Gauss-Seidel method as a smoother and then implement a multigrid strategy based on the modified cycle (M-cycle) for solving our discretized equations. Hermite interpolation with Newton interpolatory divided differences as the basis is used to estimate the coupled asset, delta, gamma, and speed options in the set of equations. Numerical experiments are performed with two- and four-regime examples and compared with other existing methods to validate the proposed strategy. The results show that the algorithm provides a fast and efficient tool for pricing American put options with regime-switching.
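For readers unfamiliar with the multigrid ingredients mentioned above, here is a generic two-grid sketch for a 1D model problem, using a Gauss-Seidel smoother and an ordinary coarse-grid correction; it is not the paper's M-cycle, compact scheme, or Hermite interpolation, and the model problem is an assumption chosen for brevity.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=3):
    """In-place Gauss-Seidel sweeps for the 1D model problem -u'' = f."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_grid_cycle(u, f, h):
    """One generic coarse-grid correction cycle (not the paper's M-cycle)."""
    u = gauss_seidel(u, f, h)                       # pre-smoothing
    r = np.zeros_like(u)                            # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = r[::2].copy()                              # restrict to the coarse grid
    ec = np.zeros_like(rc)
    ec = gauss_seidel(ec, rc, 2 * h, sweeps=20)     # approximate coarse-grid solve
    # Prolong the coarse error back to the fine grid and correct.
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    u += e
    return gauss_seidel(u, f, h)                    # post-smoothing
```

The paper's algorithm replaces this scalar model problem with the coupled option/delta/gamma/speed system and cycles over several grid levels.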
Textual network embeddings aim to learn a low-dimensional representation for every node in the network so that both the structural and textual information from the networks can be well preserved in the representations. Traditionally, the structural and textual embeddings have been learned by models that rarely take the mutual influences between them into account. In this paper, a deep neural architecture is proposed to effectively fuse the two kinds of information into one representation. The novelties of the proposed architecture lie in a newly defined objective function, the complementary information fusion method for structural and textual features, and the mutual gate mechanism for textual feature extraction. Experimental results show that the proposed model outperforms the competing methods on all three datasets.
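As a generic illustration of a mutual gating idea (the paper's exact mechanism and dimensions are not reproduced here, so the following is an assumption-laden sketch): each modality's features are re-weighted by a sigmoid gate computed from the other modality before the two are fused into one node representation.

```python
import torch
import torch.nn as nn

class MutualGateFusion(nn.Module):
    """Sketch: gate each modality by the other, then fuse (illustrative only)."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate_from_struct = nn.Linear(dim, dim)  # structural features -> gate on text
        self.gate_from_text = nn.Linear(dim, dim)    # textual features -> gate on structure
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, struct_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        gated_text = text_emb * torch.sigmoid(self.gate_from_struct(struct_emb))
        gated_struct = struct_emb * torch.sigmoid(self.gate_from_text(text_emb))
        return self.fuse(torch.cat([gated_struct, gated_text], dim=-1))
```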