We propose a novel Bayesian optimisation procedure for outlier detection in the Capital Asset Pricing Model. We use a parametric product partition model to robustly estimate the systematic risk of an asset. We assume that the returns follow independent normal distributions and we impose a partition structure on the parameters of interest. The partition structure imposed on the parameters induces a corresponding clustering of the returns. We identify via an optimisation procedure the partition that best separates standard observations from the atypical ones. The methodology is illustrated with reference to a real data set, for which we also provide a microeconomic interpretation of the detected outliers.
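A minimal sketch of the underlying idea, not the authors' product partition model: CAPM residuals are split into a "standard" and an "atypical" cluster, the split is scored with a simple two-scale Gaussian likelihood, and a greedy search optimises the partition while re-estimating beta on the standard cluster. The injected outlier positions and the inflated-variance factor sigma_out_scale are illustrative assumptions.

```python
# Hedged illustration (not the paper's product partition model): greedily search for a
# binary partition of CAPM residuals into "standard" and "atypical" observations by
# maximising a two-scale Gaussian likelihood, re-fitting beta on the standard cluster.
import numpy as np

rng = np.random.default_rng(0)

# Simulated market and asset excess returns under CAPM, with a few outliers injected.
n = 120
market = rng.normal(0.0, 0.02, n)
asset = 0.001 + 1.2 * market + rng.normal(0.0, 0.01, n)
asset[[5, 40, 90]] += 0.08                       # atypical observations (assumed positions)

def fit_beta(x, y, keep):
    """OLS alpha/beta using only the observations flagged as standard."""
    X = np.column_stack([np.ones(keep.sum()), x[keep]])
    coef, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return coef                                   # (alpha, beta)

def score(x, y, flags, sigma_out_scale=5.0):
    """Log-likelihood of a partition: standard points get the CAPM noise scale,
    atypical points an inflated scale (a crude stand-in for the outlier cluster)."""
    keep = ~flags
    alpha, beta = fit_beta(x, y, keep)
    resid = y - alpha - beta * x
    sigma = resid[keep].std(ddof=2) + 1e-8
    sd = np.where(flags, sigma_out_scale * sigma, sigma)
    return np.sum(-0.5 * (resid / sd) ** 2 - np.log(sd)), beta

flags = np.zeros(n, dtype=bool)
best, beta_hat = score(market, asset, flags)
improved = True
while improved:                                   # greedy coordinate search over partitions
    improved = False
    for i in range(n):
        trial = flags.copy()
        trial[i] = ~trial[i]
        s, b = score(market, asset, trial)
        if s > best:
            best, beta_hat, flags, improved = s, b, trial, True

print("robust beta estimate:", round(beta_hat, 3))
print("flagged outliers:", np.flatnonzero(flags))
```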
In a network meta-analysis, some of the collected studies may deviate markedly from the others, for example by having very unusual effect sizes. These deviating studies can be regarded as outlying with respect to the rest of the network and can be influential on the pooled results. Thus, it could be inappropriate to synthesize those studies without further investigation. In this paper, we propose two Bayesian methods to detect outliers in a network meta-analysis via: (a) a mean-shifted outlier model and (b) posterior predictive p-values constructed from ad hoc discrepancy measures. The former method uses Bayes factors to formally test whether each study is an outlier, while the latter provides a score of outlyingness for each study in the network, which allows the uncertainty associated with being an outlier to be quantified numerically. Furthermore, we present a simple method based on informative priors as part of the network meta-analysis model to down-weight the detected outliers. We conduct extensive simulations to evaluate the effectiveness of the proposed methodology while comparing it to some alternative, available outlier diagnostic tools. Two real networks of interventions are then used to demonstrate our methods in practice.
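A hedged sketch of the posterior-predictive idea on a plain pairwise random-effects meta-analysis rather than a full network model: each study receives a leave-one-out predictive p-value as a score of outlyingness. The DerSimonian-Laird estimator, the simulated data, and the 0.05 flagging threshold are illustrative choices, not the paper's.

```python
# Hedged sketch (not the paper's full NMA machinery): a pairwise random-effects
# meta-analysis where each study gets a leave-one-out predictive p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated study effect estimates y_i with known standard errors s_i;
# study 7 is shifted to act as an outlier.
k = 10
true_effect, tau = 0.3, 0.1
s = rng.uniform(0.08, 0.2, k)
y = true_effect + rng.normal(0, tau, k) + rng.normal(0, s)
y[7] += 0.8

def dersimonian_laird(y, s):
    """Method-of-moments estimates of the pooled effect, tau^2, and Var(pooled effect)."""
    w = 1 / s**2
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (s**2 + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    return mu, tau2, 1 / np.sum(w_star)

p_values = []
for i in range(k):
    keep = np.arange(k) != i
    mu, tau2, var_mu = dersimonian_laird(y[keep], s[keep])
    # Predictive distribution for study i given the remaining studies.
    pred_sd = np.sqrt(var_mu + tau2 + s[i] ** 2)
    z = (y[i] - mu) / pred_sd
    p_values.append(2 * stats.norm.sf(abs(z)))    # two-sided predictive p-value

for i, p in enumerate(p_values):
    flag = "  <- possible outlier" if p < 0.05 else ""
    print(f"study {i}: p = {p:.3f}{flag}")
```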
A new framework for asset price dynamics is introduced in which the concept of noisy information about future cash flows is used to derive the price processes. In this framework an asset is defined by its cash-flow structure. Each cash flow is modelled by a random variable that can be expressed as a function of a collection of independent random variables called market factors, or X-factors. With each such X-factor we associate a market information process, the values of which are accessible to market agents. Each information process is a sum of two terms; one contains true information about the value of the market factor; the other represents noise. The noise term is modelled by an independent Brownian bridge. The market filtration is assumed to be that generated by the aggregate of the independent information processes. The price of an asset is given by the expectation of the discounted cash flows in the risk-neutral measure, conditional on the information provided by the market filtration. When the cash flows are the dividend payments associated with equities, an explicit model is obtained for the share price, and the prices of options on dividend-paying assets are derived. Remarkably, the resulting formula for the price of a European call option is of the Black-Scholes-Merton type. The information-based framework also generates a natural explanation for the origin of stochastic volatility.
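A simulation sketch of the single-factor case under stated assumptions: the cash flow X takes two values, the market observes xi_t = sigma*t*X plus an independent Brownian bridge, and the price is the discounted conditional expectation of X given xi_t. All parameter values are illustrative.

```python
# Hedged simulation sketch of a single-factor information-based price: the cash flow X
# takes discrete values, the market sees xi_t = sigma*t*X + Brownian bridge, and the
# price is the discounted conditional expectation of X given xi_t.
import numpy as np

rng = np.random.default_rng(2)

T, n_steps, sigma, r = 1.0, 500, 1.5, 0.02        # illustrative parameters
t = np.linspace(0.0, T, n_steps + 1)

x_vals = np.array([0.0, 1.0])                      # possible cash-flow outcomes at time T
p_vals = np.array([0.3, 0.7])                      # prior probabilities
X = rng.choice(x_vals, p=p_vals)                   # realized (but hidden) cash flow

# Brownian bridge on [0, T]: beta_t = W_t - (t/T) * W_T.
dW = rng.normal(0.0, np.sqrt(T / n_steps), n_steps)
W = np.concatenate([[0.0], np.cumsum(dW)])
bridge = W - (t / T) * W[-1]

xi = sigma * t * X + bridge                        # market information process

price = np.empty(n_steps + 1)
for k in range(n_steps + 1):
    if t[k] < T:
        # Conditional probabilities pi_i(t) proportional to
        # p_i * exp[ T/(T-t) * (sigma*x_i*xi_t - 0.5*sigma^2*x_i^2*t) ].
        a = T / (T - t[k])
        logw = np.log(p_vals) + a * (sigma * x_vals * xi[k]
                                     - 0.5 * sigma**2 * x_vals**2 * t[k])
        logw -= logw.max()
        pi = np.exp(logw) / np.exp(logw).sum()
    else:
        pi = (x_vals == X).astype(float)           # cash flow revealed at maturity
    price[k] = np.exp(-r * (T - t[k])) * np.sum(pi * x_vals)

print("terminal cash flow:", X, "| initial and terminal price:", price[0], price[-1])
```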
We consider state estimation for networked systems where measurements from sensor nodes are contaminated by outliers. A new hierarchical measurement model is formulated for outlier detection by integrating the outlier-free measurement model with a binary indicator variable. The binary indicator variable, which is assigned a beta-Bernoulli prior, is utilized to characterize whether the sensor's measurement is nominal or an outlier. Based on the proposed outlier-detection measurement model, both centralized and decentralized information fusion filters are developed. Specifically, in the centralized approach, all measurements are sent to a fusion center where the state and outlier indicators are jointly estimated by employing mean-field variational Bayesian inference in an iterative manner. In the decentralized approach, however, every node shares its information, including the prior and likelihood, only with its neighbors based on a hybrid consensus strategy. Then each node independently performs the estimation task based on its own and shared information. In addition, an approximate distributed solution is proposed to reduce the local computational complexity and communication overhead. Simulation results reveal that the proposed algorithms are effective in dealing with outliers compared with several recent robust solutions.
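A hedged scalar sketch of the centralized approach, not the paper's exact derivation: at each time step a few mean-field variational iterations alternate between a Kalman-style state update and an update of the beta-Bernoulli outlier indicator. Treating the outlier branch as a flat likelihood, and the prior parameters a0, b0, are assumptions made here for brevity.

```python
# Hedged sketch on a scalar random-walk system: a Kalman-style filter where a
# beta-Bernoulli indicator z_t (1 = nominal, 0 = outlier) is estimated by a few
# mean-field variational iterations at each step.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(3)

T, Q, R = 100, 0.05, 0.1            # horizon, process and measurement noise variances
a0, b0 = 9.0, 1.0                   # beta prior: most measurements expected nominal

# Simulate a random-walk state with roughly 10% of measurements hit by gross outliers.
x_true = np.cumsum(rng.normal(0, np.sqrt(Q), T))
y = x_true + rng.normal(0, np.sqrt(R), T)
outliers = rng.random(T) < 0.1
y[outliers] += rng.normal(0, 3.0, T)[outliers]

m, P = 0.0, 1.0                     # filter posterior mean and variance
est = np.empty(T)
for t in range(T):
    m_pred, P_pred = m, P + Q       # prediction step (identity dynamics)
    Ez, a, b = 0.5, a0, b0
    for _ in range(5):              # mean-field fixed-point iterations
        # q(x) update: measurement precision scaled by E[z].
        P = 1.0 / (1.0 / P_pred + Ez / R)
        m = P * (m_pred / P_pred + Ez * y[t] / R)
        # q(z) update: nominal likelihood versus a flat outlier branch (assumption).
        e2 = (y[t] - m) ** 2 + P
        log_odds = (digamma(a) - digamma(b)
                    - 0.5 * e2 / R - 0.5 * np.log(2 * np.pi * R))
        Ez = 1.0 / (1.0 + np.exp(-log_odds))
        # q(pi) update for the beta-Bernoulli prior.
        a, b = a0 + Ez, b0 + 1.0 - Ez
    est[t] = m

print(f"RMSE with VB outlier handling: {np.sqrt(np.mean((est - x_true) ** 2)):.3f}")
```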
We propose an extension of the Cox-Ross-Rubinstein (CRR) model based on q-binomial (or Kemp) random walks, with application to default with logistic failure rates. This model allows us to consider time-dependent switching probabilities varying according to a trend parameter, and it includes tilt and stretch parameters that control increment sizes. Option pricing formulas are written using q-binomial coefficients, and we study the convergence of this model to a Black-Scholes type formula in continuous time. A convergence rate of order O(1/N) is obtained when the tilt and stretch parameters are set equal to one.
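A small sketch showing the ingredients rather than the paper's model: a Gaussian (q-)binomial coefficient helper, which reduces to the ordinary binomial coefficient as q -> 1, next to a standard CRR European call priced with the binomial weights that q-binomial coefficients would replace. The tilt, stretch, and logistic failure-rate parametrisation is not reproduced here.

```python
# Hedged sketch: q-binomial coefficient helper plus a plain CRR European call pricer,
# showing where q-binomial weights would enter an extended lattice model.
import math

def q_binomial(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q, reducing to C(n, k) as q -> 1."""
    if k < 0 or k > n:
        return 0.0
    num = den = 1.0
    for i in range(k):
        num *= 1.0 - q ** (n - i)
        den *= 1.0 - q ** (i + 1)
    return num / den

def crr_call(S0, K, r, sigma, T, N):
    """Standard CRR price of a European call via the binomial-coefficient formula."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)           # risk-neutral up probability
    disc = math.exp(-r * T)
    price = 0.0
    for k in range(N + 1):
        payoff = max(S0 * u**k * d**(N - k) - K, 0.0)
        price += math.comb(N, k) * p**k * (1 - p)**(N - k) * payoff
    return disc * price

print("q-binomial [5 choose 2]_0.9 =", round(q_binomial(5, 2, 0.9), 4))
print("CRR call price:", round(crr_call(100, 100, 0.05, 0.2, 1.0, 200), 4))
```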
We predict asset returns and measure risk premia using a prominent technique from artificial intelligence -- deep sequence modeling. Because asset returns often exhibit sequential dependence that may not be effectively captured by conventional time series models, sequence modeling offers a promising path with its data-driven approach and superior performance. In this paper, we first overview the development of deep sequence models, introduce their applications in asset pricing, and discuss their advantages and limitations. We then perform a comparative analysis of these methods using data on U.S. equities. We demonstrate how sequence modeling can benefit investors in general by incorporating complex historical path dependence, and show that Long Short-Term Memory (LSTM) based models tend to have the best out-of-sample performance.
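A minimal LSTM sketch in PyTorch, trained on synthetic data purely to show the sequence-modeling setup assumed here (a window of past returns in, a next-period return forecast out); it is not the paper's architecture, features, or data.

```python
# Hedged sketch: an LSTM mapping a window of past returns to a next-period forecast.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ReturnLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # forecast from last hidden state

# Synthetic AR(1)-style return series, split into rolling windows.
T, window = 2000, 20
eps = 0.01 * torch.randn(T)
r = torch.zeros(T)
for t in range(1, T):
    r[t] = 0.1 * r[t - 1] + eps[t]
X = torch.stack([r[t - window:t] for t in range(window, T)]).unsqueeze(-1)
y = r[window:]

model = ReturnLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                             # brief full-batch training, for illustration
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse = {loss.item():.6f}")
```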