Continuously making a profit in the stock market is a difficult task for both professional investors and individual traders. With the development of computer science and deep reinforcement learning, the Buy-and-Hold (B&H) strategy has been surpassed by many artificial-intelligence trading algorithms. However, the market information and processing available to these models are often insufficient, which limits the performance of reinforcement learning algorithms. Thus, we propose a parallel-network continuous quantitative trading model that combines GARCH and PPO to enrich the basic deep reinforcement learning model: parallel deep-network layers process data at three different frequencies (including GARCH information), while the proximal policy optimization (PPO) algorithm exchanges actions and rewards with the stock-trading environment. Experiments on five stocks from the Chinese stock market show that our method earns more excess profit than basic reinforcement learning methods and benchmark models.
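To make the GARCH input concrete, below is a minimal sketch of a GARCH(1,1) conditional-volatility series that could serve as one of the model's input features; the parameter values and the stand-in return series are illustrative assumptions, and the paper's parallel-network layers are not reproduced here.

    # Minimal GARCH(1,1) volatility feature; omega/alpha/beta values are
    # illustrative, not fitted parameters from the paper.
    import numpy as np

    def garch11_vol(returns, omega=1e-6, alpha=0.1, beta=0.85):
        """sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
        sigma2 = np.empty_like(returns)
        sigma2[0] = returns.var()  # initialize at the sample variance
        for t in range(1, len(returns)):
            sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
        return np.sqrt(sigma2)  # conditional volatility, one value per step

    r = np.random.default_rng(0).normal(0.0, 0.01, 500)  # stand-in daily returns
    vol_feature = garch11_vol(r)  # appended to the DRL agent's state vector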
As deep reinforcement learning (DRL) has been recognized as an effective approach in quantitative finance, getting hands-on experience is attractive to beginners. However, training a practical DRL trading agent that decides where to trade, at what price, and in what quantity involves error-prone and arduous development and debugging. In this paper, we introduce a DRL library, FinRL, that helps beginners gain exposure to quantitative finance and develop their own stock trading strategies. Along with easily reproducible tutorials, the FinRL library allows users to streamline their own development and to compare easily with existing schemes. Within FinRL, virtual environments are configured with stock-market datasets, trading agents are trained with neural networks, and extensive backtesting is analyzed via trading performance. Moreover, it incorporates important trading constraints such as transaction costs, market liquidity, and the investor's degree of risk aversion. FinRL features completeness, hands-on tutorials, and reproducibility that favor beginners: (i) at multiple levels of time granularity, FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300; (ii) organized in a layered architecture with a modular structure, FinRL provides fine-tuned state-of-the-art DRL algorithms (DQN, DDPG, PPO, SAC, A2C, TD3, etc.), commonly used reward functions, and standard evaluation baselines to alleviate debugging workloads and promote reproducibility; and (iii) being highly extendable, FinRL reserves a complete set of user-import interfaces. Furthermore, we include three application demonstrations, namely single stock trading, multiple stock trading, and portfolio allocation. The FinRL library is available on GitHub at https://github.com/AI4Finance-LLC/FinRL-Library.
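The workflow that FinRL's layers automate (configure a market environment, train a DRL agent, backtest the result) can be illustrated with a self-contained toy sketch; the environment below is a hypothetical stand-in written against gymnasium and stable-baselines3 (a DRL back end that pipelines of this kind commonly build on), and it is not FinRL's actual API.

    # Toy end-to-end sketch (environment -> PPO agent), standing in for the
    # FinRL pipeline; this is NOT FinRL's API. Assumes gymnasium and
    # stable-baselines3 >= 2.0 are installed.
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import PPO

    class ToyTradingEnv(gym.Env):
        """Observe a window of recent returns; choose a position in [-1, 1]."""
        def __init__(self, n_steps=252, window=5):
            super().__init__()
            self.n_steps, self.window = n_steps, window
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(window,), dtype=np.float32)
            self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.returns = self.np_random.normal(0.0, 0.01, self.n_steps).astype(np.float32)
            self.t = self.window
            return self.returns[: self.window], {}

        def step(self, action):
            # Reward: position times next-step return, minus a crude cost proxy
            pos = float(action[0])
            reward = pos * float(self.returns[self.t]) - 1e-4 * abs(pos)
            self.t += 1
            obs = self.returns[self.t - self.window : self.t]
            return obs, reward, self.t >= self.n_steps, False, {}

    model = PPO("MlpPolicy", ToyTradingEnv(), verbose=0)
    model.learn(total_timesteps=10_000)  # train; backtesting would follow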
We introduce a new formulation of asset trading games in continuous time in the framework of the game-theoretic probability established by Shafer and Vovk (Probability and Finance: It's Only a Game! (2001) Wiley). In our formulation, the market moves continuously, but an investor trades at discrete times, which can depend on the past path of the market. We prove that an investor can essentially force the asset price path to behave with variation exponent exactly equal to two. Our proof is based on embedding high-frequency discrete-time games into the continuous-time game and on the Bayesian strategy of Kumon, Takemura and Takeuchi (Stoch. Anal. Appl. 26 (2008) 1161--1180) for discrete-time coin-tossing games. We also show that the main growth part of the investor's capital processes is clearly described by information quantities, which are derived from the Kullback--Leibler information with respect to the empirical fluctuation of the asset price.
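For reference, a sketch of the standard definition of the variation exponent used above, in our own notation rather than quoted from the paper:

    \[
        V_p(S) \;=\; \sup_{\pi}\; \sum_{[t_i, t_{i+1}] \in \pi} \bigl| S(t_{i+1}) - S(t_i) \bigr|^{p},
        \qquad
        \operatorname{ve}(S) \;=\; \inf\{\, p > 0 \;:\; V_p(S) < \infty \,\},
    \]

with the supremum taken over finite partitions \(\pi\) of the trading horizon. "Variation exponent exactly equal to two" means \(\operatorname{ve}(S) = 2\), the value exhibited by typical Brownian-motion paths; in the game-theoretic sense, "forcing" a property means the investor's capital tends to infinity on any path where the property fails.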
As a fundamental problem in algorithmic trading, order execution aims to fulfill a specific trading order, either liquidation or acquisition, for a given instrument. Towards an effective execution strategy, recent years have witnessed a shift from the analytical view with model-based market assumptions to a model-free perspective, i.e., reinforcement learning, owing to its nature of sequential decision optimization. However, the noisy yet imperfect market information that can be leveraged by the policy makes it quite challenging to build sample-efficient reinforcement learning methods for effective order execution. In this paper, we propose a novel universal trading policy optimization framework to bridge the gap between noisy yet imperfect market states and the optimal action sequences for order execution. In particular, the framework leverages a policy-distillation method in which an oracle teacher with perfect information guides the learning of the common policy towards a practically optimal execution strategy. Extensive experiments show significant improvements of our method over various strong baselines, with reasonable trading actions.
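A hypothetical sketch of the teacher-student distillation idea described above, written in PyTorch; the function name, the discrete action space, and the loss weighting lam are illustrative assumptions rather than the paper's exact formulation.

    # Hypothetical distillation regularizer: the student (noisy-state) policy
    # is pulled toward a teacher trained with oracle information.
    import torch
    import torch.nn.functional as F

    def distilled_loss(rl_loss, student_logits, teacher_logits, lam=0.1):
        # KL(teacher || student) over the trading-action distribution
        kl = F.kl_div(
            F.log_softmax(student_logits, dim=-1),
            F.softmax(teacher_logits, dim=-1),
            reduction="batchmean",
        )
        return rl_loss + lam * kl  # usual RL objective plus distillation term

    # Example with random tensors: batch of 32 states, 5 candidate actions
    loss = distilled_loss(torch.tensor(0.5), torch.randn(32, 5), torch.randn(32, 5))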
Executing a basket of co-integrated assets is an important task facing investors. Here, we show how to do this while accounting for the informational advantage gained from assets within and outside the basket, for the permanent price impact of market orders (MOs) from all market participants, and for the temporary impact that the agent's MOs have on prices. The execution problem is posed as an optimal stochastic control problem; we demonstrate that, under some mild conditions, the value function admits a closed-form solution, and we prove a verification theorem. Furthermore, we use data on five stocks traded on the Nasdaq exchange to estimate the model parameters and use simulations to illustrate the performance of the strategy. As an example, the agent liquidates a portfolio consisting of shares in Intel Corporation (INTC) and Market Vectors Semiconductor ETF (SMH). We show that including the information provided by three additional assets, FARO Technologies (FARO), NetApp (NTAP), and Oracle Corporation (ORCL), considerably improves the strategy's performance; for the portfolio we execute, it outperforms the multi-asset version of Almgren-Chriss by approximately 4 to 4.5 basis points.
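For context, a minimal sketch of the classical single-asset Almgren-Chriss liquidation trajectory that underlies the benchmark named above; the parameter values are illustrative, and the paper's multi-asset closed-form solution is not reproduced here.

    # Single-asset Almgren-Chriss liquidation schedule, shown only as the
    # classical benchmark; parameter values are illustrative.
    import numpy as np

    def almgren_chriss_inventory(x0, T, sigma, eta, lam, n=100):
        """x(t) = x0 * sinh(k (T - t)) / sinh(k T), k = sqrt(lam * sigma**2 / eta):
        risk aversion lam, volatility sigma, temporary-impact coefficient eta."""
        k = np.sqrt(lam * sigma**2 / eta)
        t = np.linspace(0.0, T, n + 1)
        return t, x0 * np.sinh(k * (T - t)) / np.sinh(k * T)

    # Inventory declines from x0 at t = 0 to zero at t = T
    t, x = almgren_chriss_inventory(x0=1e5, T=1.0, sigma=0.3, eta=1e-6, lam=1e-6)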