
Qlib: An AI-oriented Quantitative Investment Platform

Published by Weiqing Liu
Publication date: 2020
Research language: English





Quantitative investment aims to maximize return and minimize risk over a sequential trading period across a set of financial instruments. Recently, inspired by the rapid development and great potential of AI technologies to generate remarkable innovation in quantitative investment, AI-driven workflows have been increasingly adopted for quantitative research and practical investment. While enriching the quantitative investment methodology, AI technologies have also raised new challenges for the quantitative investment system. In particular, the new learning paradigms for quantitative investment call for an infrastructure upgrade to accommodate the renovated workflow; moreover, the data-driven nature of AI technologies demands infrastructure with more powerful performance; additionally, applying AI technologies to different tasks in financial scenarios poses its own unique challenges. To address these challenges and bridge the gap between AI technologies and quantitative investment, we design and develop Qlib, which aims to realize the potential, empower the research, and create the value of AI technologies in quantitative investment.
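For orientation, here is a minimal sketch of what a Qlib-based research step might look like, assuming Qlib's documented Python entry points (qlib.init and the data client qlib.data.D); the data path, market universe, field expressions, and date range below are placeholder assumptions that require locally prepared data, and details may vary across Qlib versions.

# Minimal sketch of loading features through Qlib (path, universe, and dates are placeholders).
import qlib
from qlib.data import D

# Point Qlib at a locally prepared data directory (assumed path).
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

# Query a few fields through Qlib's expression engine for a candidate universe.
instruments = D.instruments(market="csi300")
features = D.features(
    instruments,
    ["$close", "$volume", "Ref($close, 1) / $close - 1"],  # price, volume, 1-day return
    start_time="2019-01-01",
    end_time="2020-12-31",
)
print(features.head())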




Read also

Zhuo Jin, Zuo Quan Xu, 2020
We study an optimal dividend problem for an insurer who simultaneously controls investment weights in a financial market, the liability ratio in the insurance business, and the dividend payout rate. The insurer seeks an optimal strategy to maximize her expected utility of dividend payments over an infinite horizon. By applying a perturbation approach, we obtain the optimal strategy and the value function in closed form for log and power utility. We conduct an economic analysis to investigate the impact of various model parameters and risk aversion on the insurer's optimal strategy.
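For readers unfamiliar with this class of problems, the objective described above typically has the following generic infinite-horizon form; this is an illustrative statement only, and the symbols (investment weights $\pi_t$, liability ratio $\kappa_t$, dividend rate $c_t$, discount rate $\delta$) are our notation rather than the paper's exact specification.

% Illustrative objective for the optimal dividend problem (notation is ours, not the paper's).
\[
  V(x) = \sup_{(\pi_t,\,\kappa_t,\,c_t)} \mathbb{E}\!\left[ \int_0^{\infty} e^{-\delta t}\, U(c_t)\, dt \,\middle|\, X_0 = x \right],
  \qquad
  U(c) = \log c
  \quad\text{or}\quad
  U(c) = \frac{c^{1-\gamma}}{1-\gamma},\ \gamma > 0,\ \gamma \neq 1 .
\]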
Xiangrui Zeng, Min Xu, 2019
Cryo-electron tomography (cryo-ET) is an emerging technology for the 3D visualization of structural organizations and interactions of subcellular components at near-native state and sub-molecular resolution. Tomograms captured by cryo-ET contain heterogeneous structures representing the complex and dynamic subcellular environment. Since the structures are not purified or fluorescently labeled, the spatial organization and interaction between both known and unknown structures can be studied in their native environment. Rapid advances in cryo-ET have generated abundant 3D cellular imaging data. However, the systematic localization, identification, segmentation, and structural recovery of subcellular components require efficient and accurate large-scale image analysis methods. We introduce AITom, an open-source artificial intelligence platform for cryo-ET researchers. AITom provides many public as well as in-house algorithms for performing cryo-ET data analysis through both traditional (template-based or template-free) approaches and deep learning approaches. AITom also supports remote interactive analysis. Comprehensive tutorials for each analysis module are provided to guide the user through them. We welcome researchers and developers to join this collaborative open-source software development project. Availability: https://github.com/xulabs/aitom
Recent advances in the fields of machine learning and neurofinance have yielded exciting new research perspectives on the practical inference of behavioural economics in financial markets and microstructure studies. We present here the latest results from a recently published stock market simulator built around a multi-agent system architecture, in which each agent is an autonomous investor trading stocks by reinforcement learning (RL) via a centralised double-auction limit order book. The RL framework allows specific behavioural and cognitive traits known from trader psychology to be implemented, and thus their impact on the whole stock market to be studied at the mesoscale. More precisely, we narrowed our agent design to three such psychological biases known to have a direct correspondence with RL theory, namely delay discounting, greed, and fear. We compared the ensuing simulated data to real stock market data over roughly the past decade, and found that market stability benefits from larger populations of agents prone to delay discounting and, most surprisingly, to greed.
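To illustrate how one of these biases maps onto RL, here is a hypothetical sketch (not the simulator's actual code; class and parameter names are ours): delay discounting corresponds directly to a per-agent discount factor gamma in a one-step Q-learning update, so more myopic traders simply carry a smaller gamma.

# Hypothetical sketch: per-agent delay discounting as an RL discount factor.
# Not taken from the paper's simulator; names and parameters are illustrative.
import random

class BiasedTrader:
    def __init__(self, gamma, lr=0.1, eps=0.1):
        self.gamma = gamma  # smaller gamma = stronger delay discounting (more myopic)
        self.lr = lr        # learning rate
        self.eps = eps      # exploration rate
        self.q = {}         # Q-table keyed by (state, action)

    def act(self, state, actions):
        # Epsilon-greedy choice over a discrete action set (e.g. buy / hold / sell).
        if random.random() < self.eps:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state, actions):
        # One-step Q-learning update; gamma controls how much future P&L matters.
        best_next = max(self.q.get((next_state, a), 0.0) for a in actions)
        target = reward + self.gamma * best_next
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.lr * (target - old)

# A myopic (strong delay-discounting) trader versus a patient one.
myopic = BiasedTrader(gamma=0.5)
patient = BiasedTrader(gamma=0.99)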
We propose an extended public goods interaction model to study the evolution of cooperation in a heterogeneous population. The investors are arranged on the well-known scale-free network of the Barabási-Albert model. Each investor preferentially distributes capital to pools in its portfolio based on knowledge of the pool sizes. The extent to which investors prefer larger pools is determined by an investment strategy denoted by a tunable parameter $\alpha$, with larger $\alpha$ corresponding to a stronger preference for larger pools. For comparison, we also study this interaction model on a square lattice, and find that heterogeneous contacts favor cooperation. Additionally, the influence of local topology on the game dynamics under different $\alpha$ strategies is discussed. It is found that systems with a smaller $\alpha$ strategy can perform comparatively better than those with larger $\alpha$.
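A hypothetical sketch of the allocation rule described above (function and variable names are ours, not the authors' code): each investor splits its capital across the pools in its portfolio with weights proportional to pool size raised to $\alpha$, so a larger $\alpha$ concentrates capital in larger pools.

# Hypothetical sketch: size-preferential capital allocation on a Barabási-Albert network.
# alpha is the tunable preference parameter from the abstract; other names are illustrative.
import networkx as nx

def allocate_capital(graph, investor, capital, alpha):
    # Split capital over the investor's pools (neighbors) in proportion to size**alpha.
    pools = list(graph.neighbors(investor))
    sizes = [graph.degree(p) for p in pools]      # pool "size" proxied here by node degree
    weights = [s ** alpha for s in sizes]
    total = sum(weights)
    return {p: capital * w / total for p, w in zip(pools, weights)}

g = nx.barabasi_albert_graph(n=1000, m=3, seed=0)
print(allocate_capital(g, investor=0, capital=1.0, alpha=2.0))  # larger alpha favors larger pools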
We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle avoidance tasks in three different environments and with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies' performance under various quality-of-flight (QoF) metrics, such as energy consumed, endurance, and average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes those differences and exposes how the choice of onboard compute affects the aerial robot's performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: http://bit.ly/2JNAVb6.
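The mitigation described above (sampling an action latency from the target platform's measured distribution and injecting it as a delay during training) could look roughly like the following hypothetical wrapper; it assumes the classic Gym step API returning (obs, reward, done, info) and is not Air Learning's actual implementation.

# Hypothetical sketch of latency injection during training (not Air Learning's code).
# The wrapper keeps applying the previously issued action for a number of steps drawn
# from a measured latency distribution, approximating on-board compute delay.
import random
import gym

class LatencyWrapper(gym.Wrapper):
    def __init__(self, env, latency_samples, dt=0.05):
        super().__init__(env)
        self.latency_samples = latency_samples  # measured hardware-in-the-loop latencies (seconds)
        self.dt = dt                            # simulator step duration (seconds)
        self.last_action = None

    def reset(self, **kwargs):
        self.last_action = None
        return self.env.reset(**kwargs)

    def step(self, action):
        # Convert one sampled latency into a number of "stale" steps during which the
        # previous action keeps being applied before the new one takes effect.
        delay_steps = int(random.choice(self.latency_samples) / self.dt)
        stale = self.last_action if self.last_action is not None else action
        total_reward, done, info, obs = 0.0, False, {}, None
        for _ in range(delay_steps):
            obs, reward, done, info = self.env.step(stale)
            total_reward += reward
            if done:
                break
        if not done:
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
        self.last_action = action
        return obs, total_reward, done, info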
