
Forecasting open-high-low-close data contained in candlestick chart

Posted by Wenyang Huang
Publication date: 2021
Research field: Economics
Paper language: English





Forecasting the open-high-low-close (OHLC) data contained in candlestick charts is of great practical importance, as exemplified by applications in finance. The inherent constraints in OHLC data pose a great challenge to its prediction; for example, forecasting models may yield unrealistic values if these constraints are ignored. To address this, a novel transformation approach is proposed to relax these constraints, together with its explicit inverse transformation, which ensures that forecasting models produce meaningful open-high-low-close values. A flexible and efficient framework for forecasting OHLC data is also provided. As an example, the detailed procedure for modelling OHLC data via the vector auto-regression (VAR) model and the vector error correction (VEC) model is given. The new approach has high practical utility on account of its flexibility, simple implementation and straightforward interpretation. Extensive simulation studies are performed to assess the effectiveness and stability of the proposed approach. Three financial data sets from the Chinese stock market, namely the Kweichow Moutai stock, the CSI 100 index and the 50 ETF, are employed to document the empirical performance of the proposed methodology.
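The abstract does not spell out the transformation itself, so the following is only a minimal sketch of one plausible constraint-relaxing construction of this kind, written in Python: the low and the range are logged, and the positions of the open and close inside [low, high] are mapped through a logit. The function names, the eps smoothing term and the commented-out VAR step are illustrative assumptions, not the authors' exact recipe; the point is that an explicit, invertible map to unconstrained coordinates lets an off-the-shelf forecasting model return values that always satisfy the OHLC constraints.

```python
import numpy as np

def ohlc_to_unconstrained(o, h, l, c, eps=1e-8):
    """Map OHLC prices (0 < l <= o, c <= h) to four unconstrained reals.

    Illustrative construction, not necessarily the paper's exact transformation.
    """
    y1 = np.log(l)                                # positivity of the low
    y2 = np.log(h - l + eps)                      # enforces high >= low after inversion
    y3 = np.log((o - l + eps) / (h - o + eps))    # logit of open's position in [low, high]
    y4 = np.log((c - l + eps) / (h - c + eps))    # logit of close's position in [low, high]
    return np.column_stack([y1, y2, y3, y4])

def unconstrained_to_ohlc(y):
    """Explicit inverse: any real 4-vector maps back to a valid OHLC tuple."""
    y1, y2, y3, y4 = y[..., 0], y[..., 1], y[..., 2], y[..., 3]
    l = np.exp(y1)
    h = l + np.exp(y2)
    o = l + (h - l) / (1.0 + np.exp(-y3))         # sigmoid keeps open between low and high
    c = l + (h - l) / (1.0 + np.exp(-y4))         # sigmoid keeps close between low and high
    return o, h, l, c

# Hypothetical usage with a VAR, assuming `ohlc` is an (n, 4) array of
# open, high, low, close prices:
#   from statsmodels.tsa.api import VAR
#   z = ohlc_to_unconstrained(ohlc[:, 0], ohlc[:, 1], ohlc[:, 2], ohlc[:, 3])
#   fit = VAR(z).fit(maxlags=5, ic="aic")
#   z_fc = fit.forecast(z[-fit.k_ar:], steps=1)
#   o_fc, h_fc, l_fc, c_fc = unconstrained_to_ohlc(z_fc[0])
```

Any model that forecasts the four transformed series (VAR, VEC or otherwise) can be plugged into this pattern, and the explicit inverse guarantees that the resulting forecasts are valid OHLC values.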




Read also

183 - Joel L. Horowitz 2018
This paper presents a simple method for carrying out inference in a wide variety of possibly nonlinear IV models under weak assumptions. The method is non-asymptotic in the sense that it provides a finite sample bound on the difference between the true and nominal probabilities of rejecting a correct null hypothesis. The method is a non-Studentized version of the Anderson-Rubin test but is motivated and analyzed differently. In contrast to the conventional Anderson-Rubin test, the method proposed here does not require restrictive distributional assumptions, linearity of the estimated model, or simultaneous equations. Nor does it require knowledge of whether the instruments are strong or weak. It does not require testing or estimating the strength of the instruments. The method can be applied to quantile IV models that may be nonlinear and can be used to test a parametric IV model against a nonparametric alternative. The results presented here hold in finite samples, regardless of the strength of the instruments.
188 - Yinchu Zhu 2021
We consider the setting in which a strong binary instrument is available for a binary treatment. The traditional LATE approach assumes the monotonicity condition, stating that there are no defiers (or no compliers). Since this condition is not always obvious, we investigate its sensitivity and testability. In particular, we focus on the question: does a slight violation of monotonicity lead to a small problem or a big problem? We find a phase transition for the monotonicity condition. On one side of the boundary of the phase transition, it is easy to learn the sign of LATE, and on the other side, it is impossible to learn the sign of LATE. Unfortunately, the impossible side of the phase transition includes data-generating processes under which the proportion of defiers tends to zero. This boundary of the phase transition is explicitly characterized in the case of binary outcomes. Outside a special case, it is impossible to test whether the data-generating process is on the nice side of the boundary. However, in the special case that the non-compliance is almost one-sided, such a test is possible. We also provide simple alternatives to monotonicity.
203 - Yeonwoo Rho, Xiaofeng Shao 2018
In unit root testing, a piecewise locally stationary process is adopted to accommodate nonstationary errors that can have both smooth and abrupt changes in second- or higher-order properties. Under this framework, the limiting null distributions of the conventional unit root test statistics are derived and shown to contain a number of unknown parameters. To circumvent the difficulty of direct consistent estimation, we propose to use the dependent wild bootstrap to approximate the non-pivotal limiting null distributions and provide a rigorous theoretical justification for bootstrap consistency. The proposed method is compared through finite sample simulations with the recolored wild bootstrap procedure, which was developed for errors that follow a heteroscedastic linear process. Further, a combination of autoregressive sieve recoloring with the dependent wild bootstrap is shown to perform well. The validity of the dependent wild bootstrap in a nonstationary setting is demonstrated for the first time, showing the possibility of extensions to other inference problems associated with locally stationary processes. A rough illustrative sketch of this bootstrap scheme appears after these listings.
This paper studies the asymptotic convergence of computed dynamic models when the shock is unbounded. Most dynamic economic models lack a closed-form solution, so approximate solutions obtained by numerical methods are used instead. Since the researcher cannot directly evaluate the exact policy function and the associated exact likelihood, it is imperative that the approximate likelihood asymptotically converge to the exact likelihood, and to know the conditions under which it does, in order to justify and validate its usage. In this regard, Fernandez-Villaverde, Rubio-Ramirez, and Santos (2006) show convergence of the likelihood when the shock has compact support. However, compact support implies that the shock is bounded, an assumption not met in most dynamic economic models, e.g., those with normally distributed shocks. This paper provides theoretical justification for most dynamic models used in the literature by showing the conditions for convergence of the approximate invariant measure obtained from numerical simulations to the exact invariant measure, thus providing the conditions for convergence of the likelihood.
Consider a planner who has to decide whether or not to introduce a new policy to a certain local population. The planner has only limited knowledge of the policy's causal impact on this population due to a lack of data, but does have access to the publicized results of intervention studies performed for similar policies on different populations. How should the planner make use of and aggregate this existing evidence to make her policy decision? Building upon the paradigm of 'patient-centered meta-analysis' proposed by Manski (2020; Towards Credible Patient-Centered Meta-Analysis, Epidemiology), we formulate the planner's problem as a statistical decision problem with a social welfare objective pertaining to the local population, and solve for an optimal aggregation rule under the minimax-regret criterion. We investigate the analytical properties, computational feasibility, and welfare regret performance of this rule. We also compare the minimax regret decision rule with plug-in decision rules based upon a hierarchical Bayes meta-regression or stylized mean-squared-error optimal prediction. We apply the minimax regret decision rule to two settings: whether to enact an active labor market policy given evidence from 14 randomized control trial studies, and whether to approve a drug (Remdesivir) for COVID-19 treatment using a meta-database of clinical trials.
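For the Rho and Shao entry above, the dependent wild bootstrap lends itself to a compact illustration. The sketch below is only an assumption about how such a scheme could look (Bartlett-kernel Gaussian multipliers, a plain Dickey-Fuller t-statistic, an arbitrary bandwidth and replication count), not the authors' exact procedure: errors implied by the unit root null are multiplied by a dependent multiplier series, partial sums rebuild bootstrap samples, and the statistic is recomputed on each to approximate its null distribution.

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller t-statistic from regressing dy_t on y_{t-1} (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

def dwb_unit_root_pvalue(y, bandwidth=10, B=499, seed=0):
    """Dependent-wild-bootstrap p-value for the unit root null (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    u = np.diff(y)                       # errors implied by the null y_t = y_{t-1} + u_t
    n = len(u)
    # Multipliers form a Gaussian process with Bartlett-kernel covariance, so
    # nearby multipliers are correlated and local dependence in u is preserved.
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = np.clip(1.0 - lags / bandwidth, 0.0, None)
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    t_obs = df_stat(y)
    t_boot = np.empty(B)
    for b in range(B):
        w = chol @ rng.standard_normal(n)          # dependent wild multipliers
        y_star = np.concatenate(([y[0]], y[0] + np.cumsum(u * w)))
        t_boot[b] = df_stat(y_star)
    return (t_boot <= t_obs).mean()      # left-tailed: small values reject a unit root
```

The key design choice is that the multipliers are correlated across nearby time points, which is what lets the bootstrap mimic the unspecified local dependence in the errors rather than assuming a particular error model.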