Financial trading aims to build profitable strategies for making sound investment decisions in financial markets, and it has long attracted interest in the machine learning community. This paper proposes to trade financial assets automatically using feature preprocessing techniques and the Recurrent Reinforcement Learning (RRL) algorithm. The strategy starts from technical indicators extracted from the assets' market information. These indicators are then preprocessed with Principal Component Analysis (PCA) and the Discrete Wavelet Transform (DWT) and finally fed to the RRL algorithm, which performs the trading. Extensive empirical evidence shows that the proposed strategy is not only effective and robust in its performance but also mitigates the drawbacks of trading with RRL alone.
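Below is a minimal sketch of the described preprocessing pipeline (indicators, then PCA, then DWT denoising). The indicator set, window sizes, wavelet, and component count are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def technical_indicators(prices: np.ndarray) -> np.ndarray:
    """Stack a few common indicators into a (T, k) feature matrix (assumed set)."""
    returns = np.diff(prices, prepend=prices[0])
    sma = np.convolve(prices, np.ones(10) / 10, mode="same")  # 10-step moving average
    momentum = prices - np.roll(prices, 5)                     # 5-step momentum
    return np.column_stack([returns, prices - sma, momentum])

def preprocess(features: np.ndarray, n_components: int = 2) -> np.ndarray:
    # 1) PCA to decorrelate and compress the indicators.
    compressed = PCA(n_components=n_components).fit_transform(features)
    # 2) DWT denoising: keep the approximation, zero the detail coefficients.
    denoised = []
    for col in compressed.T:
        approx, detail = pywt.dwt(col, "db4")
        col_hat = pywt.idwt(approx, np.zeros_like(detail), "db4")
        denoised.append(col_hat[: len(col)])
    return np.column_stack(denoised)

prices = np.cumsum(np.random.randn(500)) + 100.0  # synthetic price path
state = preprocess(technical_indicators(prices))   # features passed to the RRL trader
```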
The unpredictability and volatility of the stock market make it challenging to earn a substantial profit with any generalized scheme. This paper presents our machine learning model, which generates a significant profit in the US stock market by performing live trading on the Quantopian platform using only freely available resources. Our best approach uses ensemble learning with four classifiers (Gaussian Naive Bayes, Decision Tree, Logistic Regression with L1 regularization, and Stochastic Gradient Descent) to decide whether to go long or short on a particular stock. Our best model traded daily between July 2011 and January 2019, generating a 54.35% profit. Finally, our work shows that mixtures of weighted classifiers outperform any individual predictor at making trading decisions in the stock market.
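A minimal sketch of the weighted four-classifier ensemble is given below, assuming scikit-learn; the features, long/short labelling, and weights are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import VotingClassifier

ensemble = VotingClassifier(
    estimators=[
        ("gnb", GaussianNB()),
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("l1_logreg", LogisticRegression(penalty="l1", solver="liblinear")),
        ("sgd", SGDClassifier(loss="log_loss")),  # log loss so predict_proba exists
    ],
    voting="soft",
    weights=[1, 1, 2, 1],  # illustrative weights, not the paper's
)

# X: (n_days, n_features) engineered features; y: 1 = go long, 0 = go short.
X = np.random.randn(1000, 8)
y = (np.random.rand(1000) > 0.5).astype(int)
ensemble.fit(X, y)
positions = ensemble.predict(X[-5:])  # 1 -> long, 0 -> short
```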
Prospect theory is widely viewed as the best available descriptive model of how people evaluate risk in experimental settings. According to prospect theory, people are risk-averse with respect to gains and risk-seeking with respect to losses, a phenomenon called loss aversion. Although prospect theory is well developed in behavioral economics at the theoretical level, large-scale empirical studies remain scarce, and most have been undertaken with micro-panel data. Here we analyze over 28.5 million trades made by 81.3 thousand traders of an online financial trading community over 28 months, aiming to explore prospect theory empirically at large scale. By analyzing and comparing the behavior of winning and losing trades and traders, we find clear evidence of loss aversion, a core element of prospect theory. This work hence provides unprecedented large-scale empirical evidence for prospect theory, with immediate implications for financial trading, e.g., developing new trading strategies that minimize the effect of loss aversion. Moreover, we introduce three risk-adjusted metrics inspired by prospect theory to differentiate winning and losing traders based on their historical trading behavior. This offers potential opportunities to augment online social trading, where traders can watch and follow the trading activities of others, by statistically predicting potential winners from their historical trading behavior rather than their trading performance at any given point in time.
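The abstract does not specify the three metrics; as background, the sketch below implements the standard Tversky-Kahneman value function, in which loss aversion appears as losses being weighted roughly 2.25 times more heavily than equal-sized gains.

```python
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   lam: float = 2.25) -> float:
    """Tversky-Kahneman (1992) value of a gain/loss x relative to a reference point.

    The parameter values are the standard experimental estimates, not the
    paper's fitted values.
    """
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Loss aversion shows up as an asymmetric valuation of P&L:
print(prospect_value(100.0))   # ~57.5: subjective value of a $100 gain
print(prospect_value(-100.0))  # ~-129.5: a $100 loss hurts ~2.25x more
```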
This paper presents an innovative approach based on deep reinforcement learning (DRL) to solve the algorithmic trading problem of determining the optimal trading position at any point in time during a trading activity in stock markets. It proposes a novel DRL trading strategy that maximises the resulting Sharpe ratio performance indicator on a broad range of stock markets. Named the Trading Deep Q-Network algorithm (TDQN), this new trading strategy is inspired by the popular DQN algorithm and significantly adapted to the specific algorithmic trading problem at hand. The training of the resulting reinforcement learning (RL) agent is entirely based on the generation of artificial trajectories from a limited set of stock market historical data. To assess the performance of trading strategies objectively, the paper also proposes a novel, more rigorous performance assessment methodology. Following this new performance assessment approach, promising results are reported for the TDQN strategy.
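A minimal sketch of the DQN core that TDQN adapts is shown below: a Q-network that scores the discrete trading positions. The network architecture and the two-action position set are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

ACTIONS = ["short", "long"]  # assumed discrete trading positions

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # one Q-value per trading position
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

q = QNetwork(state_dim=30)
state = torch.randn(1, 30)            # e.g. a window of market features
action = int(q(state).argmax(dim=1))  # greedy position: 0 = short, 1 = long
```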
A dynamic herding model with interactions of trading volumes is introduced. At time $t$, an agent trades with a probability that depends on the ratio of the total trading volume at time $t-1$ to the agent's own trading volume at its last trade. The price return is determined by the volume imbalance and the number of trades. The model successfully reproduces the power-law distributions of the trading volume, number of trades, and price return, as well as the relations among them. Moreover, the generated time series are long-range correlated. We demonstrate that the results are rather robust and do not depend on the particular form of the trading probability.
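A minimal simulation sketch of these herding dynamics follows; the specific trading-probability function, trade-size distribution, and constants are assumptions (the paper reports that its results do not depend on the particular form of the trading probability, and the power-law distributions emerge from its dynamics rather than being imposed as here).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 5000
last_volume = np.ones(N)  # each agent's volume at its last trade
total_prev = float(N)     # total trading volume at time t-1
returns = []

for t in range(T):
    ratio = total_prev / last_volume
    p = 1.0 - np.exp(-0.01 * ratio)            # assumed form of the trade probability
    active = rng.random(N) < p
    volumes = rng.pareto(1.5, size=N) + 1.0    # assumed heavy-tailed trade sizes
    signs = rng.choice([-1.0, 1.0], size=N)    # buy (+1) or sell (-1)
    imbalance = np.sum(signs[active] * volumes[active])
    n_trades = int(active.sum())
    # Price return driven by the volume imbalance, tempered by the number of trades.
    returns.append(imbalance / max(np.sqrt(n_trades), 1.0))
    last_volume[active] = volumes[active]
    total_prev = max(volumes[active].sum(), 1.0)
```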
As deep reinforcement learning (DRL) has been recognized as an effective approach in quantitative finance, getting hands-on experience is attractive to beginners. However, training a practical DRL trading agent that decides where to trade, at what price, and in what quantity involves error-prone and arduous development and debugging. In this paper, we introduce FinRL, a DRL library that helps beginners get started in quantitative finance and develop their own stock trading strategies. Along with easily reproducible tutorials, the FinRL library allows users to streamline their own developments and to compare easily with existing schemes. Within FinRL, virtual environments are configured with stock market datasets, trading agents are trained with neural networks, and trading performance is analyzed via extensive backtesting. Moreover, it incorporates important trading constraints such as transaction cost, market liquidity, and the investor's degree of risk aversion. FinRL is featured with completeness, hands-on tutorials, and reproducibility that favor beginners: (i) at multiple levels of time granularity, FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300; (ii) organized in a layered architecture with a modular structure, FinRL provides fine-tuned state-of-the-art DRL algorithms (DQN, DDPG, PPO, SAC, A2C, TD3, etc.), commonly used reward functions, and standard evaluation baselines to alleviate debugging workloads and promote reproducibility; and (iii) being highly extensible, FinRL reserves a complete set of user-import interfaces. Furthermore, we include three application demonstrations, namely single stock trading, multiple stock trading, and portfolio allocation. The FinRL library is available on GitHub at https://github.com/AI4Finance-LLC/FinRL-Library.
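For illustration, the sketch below shows the kind of train-then-trade workflow that FinRL streamlines, written with plain gymnasium and stable-baselines3 rather than FinRL's own API (see the repository's tutorials for the latter); the toy environment and hyperparameters are assumptions.

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO

class ToyTradingEnv(gym.Env):
    """One-asset environment: observe recent log returns, choose short/flat/long."""

    def __init__(self, prices: np.ndarray, window: int = 10):
        self.prices, self.window = prices, window
        self.action_space = spaces.Discrete(3)  # 0 = short, 1 = flat, 2 = long
        self.observation_space = spaces.Box(-np.inf, np.inf, (window,), np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = self.window
        return self._obs(), {}

    def _obs(self):
        r = np.diff(np.log(self.prices[self.t - self.window : self.t + 1]))
        return r.astype(np.float32)

    def step(self, action):
        position = action - 1  # map {0, 1, 2} -> {-1, 0, +1}
        self.t += 1
        # Reward: position times the next-step log return of the asset.
        reward = position * np.log(self.prices[self.t] / self.prices[self.t - 1])
        done = self.t >= len(self.prices) - 1
        return self._obs(), float(reward), done, False, {}

prices = np.exp(np.cumsum(np.random.default_rng(0).normal(0, 0.01, 2000)))
model = PPO("MlpPolicy", ToyTradingEnv(prices), verbose=0).learn(10_000)
```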