
Social Learning from Reviews in Non-Stationary Environments

Added by Etienne Boursier
Publication date: 2020
Language: English





Potential buyers of a product or service tend to read reviews from previous consumers before making their decisions. This behavior is modeled by a market of Bayesian consumers with heterogeneous preferences, who sequentially decide whether to buy an item of unknown quality based on previous buyers' reviews. The quality is multi-dimensional and the reviews can take one of several different forms, and can also be multi-dimensional. In simple uni-dimensional settings, the belief about the item's quality is known to converge to its true value. Our paper extends this result to the more general case of a multi-dimensional quality, possibly in a continuous space, and provides anytime convergence rates. In practice, the quality of an item may vary over time, due to a change in the production process or the need to keep up with the competition. This paper also studies the learning dynamics when the unknown quality changes at random times and shows that the cost of learning is rather small.
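
To make the mechanism concrete, here is a minimal simulation sketch; all names and numbers below are illustrative assumptions, not the paper's model. Bayesian buyers with heterogeneous purchase thresholds sequentially observe the binary reviews of earlier buyers, and in this simplified uni-dimensional Beta-Bernoulli version the shared belief about quality converges to the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

true_quality = 0.7            # unknown scalar quality; the paper treats the multi-dimensional case
alpha, beta = 1.0, 1.0        # Beta prior shared by all buyers

beliefs = []
for t in range(5000):
    belief = alpha / (alpha + beta)           # posterior mean of the quality
    threshold = rng.uniform(0.0, 0.9)         # heterogeneous preference of buyer t
    if belief >= threshold:                   # buy only if expected quality clears the threshold
        review = rng.random() < true_quality  # binary review left after consumption
        alpha += review
        beta += 1 - review
    beliefs.append(belief)

print(f"belief after {len(beliefs)} buyers: {beliefs[-1]:.3f} (true quality {true_quality})")
```

In this toy version the reviews are unbiased signals of quality, so convergence is easy to see; per the abstract, the paper's contribution is establishing convergence and anytime rates when quality and reviews are multi-dimensional and quality may change over time.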

Related research

Lévy walks are found in the migratory behaviour patterns of various organisms, and the reason for this phenomenon has been much discussed. We use simulations to demonstrate that learning causes changes in confidence level during decision-making in non-stationary environments, and results in Lévy-walk-like patterns. One inference algorithm involving confidence is Bayesian inference. We propose an algorithm that introduces the effects of learning and forgetting into Bayesian inference, and simulate an imitation game in which two decision-making agents incorporating the algorithm estimate each other's internal models from their opponent's observational data. For forgetting without learning, agent confidence levels remained low due to a lack of information on the counterpart, and Brownian walks occurred for a wide range of forgetting rates. Conversely, when learning was introduced, high confidence levels occasionally occurred even at high forgetting rates, and Brownian walks universally became Lévy walks through a mixture of high- and low-confidence states.
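
A minimal sketch of the learning-and-forgetting idea, under assumed toy parameters (this is not the paper's actual algorithm): past evidence is exponentially discounted before each update, so the effective sample size, a crude proxy for confidence, saturates at a level set by the forgetting rate.

```python
import numpy as np

rng = np.random.default_rng(1)

retain = 0.98                 # fraction of past evidence kept per step (forgetting rate = 1 - retain)
alpha, beta = 1.0, 1.0        # Beta pseudo-counts describing the opponent's behaviour
p_true = 0.8                  # opponent's true probability of playing action "1"

for t in range(2000):
    obs = rng.random() < p_true
    alpha = retain * alpha + obs          # forget, then learn from the new observation
    beta = retain * beta + (1 - obs)

confidence = alpha + beta                 # effective sample size, capped near 1 / (1 - retain)
print(f"estimate {alpha / (alpha + beta):.3f}, confidence {confidence:.1f}")
```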
We investigate stochastic optimization problems under relaxed assumptions on the distribution of noise that are motivated by empirical observations in neural network training. Standard results on optimal convergence rates for stochastic optimization assume either there exists a uniform bound on the moments of the gradient noise, or that the noise decays as the algorithm progresses. These assumptions do not match the empirical behavior of optimization algorithms used in neural network training where the noise level in stochastic gradients could even increase with time. We address this behavior by studying convergence rates of stochastic gradient methods subject to changing second moment (or variance) of the stochastic oracle as the iterations progress. When the variation in the noise is known, we show that it is always beneficial to adapt the step-size and exploit the noise variability. When the noise statistics are unknown, we obtain similar improvements by developing an online estimator of the noise level, thereby recovering close variants of RMSProp. Consequently, our results reveal an important scenario where adaptive stepsize methods outperform SGD.
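
As a rough illustration of adapting the step size to an online noise estimate, here is an RMSProp-style sketch on a toy quadratic with made-up constants; it is not the estimator analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_grad(x, t):
    """Gradient of f(x) = 0.5 * ||x||^2 with gradient noise that grows over time."""
    sigma = 0.1 * np.sqrt(1.0 + t / 100.0)
    return x + sigma * rng.standard_normal(x.shape)

x = np.ones(10)
v = np.zeros(10)                  # running estimate of the gradients' second moment
beta, eps = 0.99, 1e-8

for t in range(5000):
    g = noisy_grad(x, t)
    v = beta * v + (1 - beta) * g ** 2
    v_hat = v / (1 - beta ** (t + 1))          # bias-corrected second-moment estimate
    lr = 0.5 / np.sqrt(t + 1)                  # decaying base step size
    x = x - lr * g / (np.sqrt(v_hat) + eps)    # larger estimated noise -> smaller effective step

print(f"distance to the optimum after adaptive steps: {np.linalg.norm(x):.4f}")
```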
In data stream mining, predictive models typically suffer drops in predictive performance due to concept drift. As enough data representing the new concept must be collected for the new concept to be well learnt, the predictive performance of existing models usually takes some time to recover from concept drift. To speed up recovery from concept drift and improve predictive performance in data stream mining, this work proposes a novel approach called Multi-sourcE onLine TrAnsfer learning for Non-statIonary Environments (Melanie). Melanie is the first approach able to transfer knowledge between multiple data streaming sources in non-stationary environments. It creates several sub-classifiers to learn different aspects from different source and target concepts over time. The sub-classifiers that match the current target concept well are identified and used to compose an ensemble for predicting examples from the target concept. We evaluate Melanie on several synthetic data streams containing different types of concept drift and on real-world data streams. The results indicate that Melanie can deal with a variety of drifts and improve predictive performance over existing data stream learning algorithms by making use of multiple sources.
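
The ensemble idea can be sketched as follows. This is a simplified stand-in, not Melanie itself: two fixed toy sub-classifiers are weighted by their accuracy on a sliding window of recent target examples.

```python
import numpy as np

rng = np.random.default_rng(3)

class WindowWeightedEnsemble:
    """Weight sub-classifiers by their accuracy on recent target examples."""
    def __init__(self, classifiers, window=50):
        self.classifiers, self.window, self.history = classifiers, window, []

    def predict(self, x):
        if self.history:
            X = np.array([h[0] for h in self.history])
            y = np.array([h[1] for h in self.history])
            weights = np.array([np.mean(clf(X) == y) for clf in self.classifiers]) + 1e-9
        else:
            weights = np.ones(len(self.classifiers))
        votes = np.array([clf(x[None, :])[0] for clf in self.classifiers])
        return int(np.round(np.average(votes, weights=weights)))

    def update(self, x, y):
        self.history = (self.history + [(x, y)])[-self.window:]

# toy sub-classifiers standing in for models learnt from different source/target concepts
before_drift = lambda X: (X[:, 0] > 0.5).astype(int)
after_drift = lambda X: (X[:, 0] < 0.5).astype(int)

ensemble = WindowWeightedEnsemble([before_drift, after_drift])
correct = 0
for t in range(1000):
    x = rng.random(2)
    y = int(x[0] > 0.5) if t < 500 else int(x[0] < 0.5)   # concept drift at t = 500
    correct += int(ensemble.predict(x) == y)
    ensemble.update(x, y)
print(f"accuracy on the drifting stream: {correct / 1000:.2f}")
```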
We consider a class of sequential decision-making problems under uncertainty that can encompass various types of supervised learning concepts. These problems have a completely observed state process and a partially observed modulation process, where the state process is affected by the modulation process only through an observation process, the observation process only observes the modulation process, and the modulation process is exogenous to control. We model this broad class of problems as a partially observed Markov decision process (POMDP). The belief function for the modulation process is control invariant, thus separating the estimation of the modulation process from the control of the state process. We call this specially structured POMDP the separable POMDP, or SEP-POMDP, and show it (i) can serve as a model for a broad class of application areas, e.g., inventory control, finance, healthcare systems, (ii) inherits value function and optimal policy structure from a set of completely observed MDPs, (iii) can serve as a bridge between classical models of sequential decision making under uncertainty having fully specified model artifacts and such models that are not fully specified and require the use of predictive methods from statistics and machine learning, and (iv) allows for specialized approximate solution procedures.
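
A small sketch of the separation property under assumed toy dynamics (a two-regime demand modulation in an inventory example; all matrices and numbers are invented for illustration): the belief over the modulation process is updated by a plain HMM filter that never sees the control, and the control then acts on that belief and the observed state.

```python
import numpy as np

rng = np.random.default_rng(4)

P = np.array([[0.95, 0.05],        # transition matrix of the hidden modulation (demand regime)
              [0.10, 0.90]])
obs_lik = np.array([[0.8, 0.2],    # P(observation | regime): the observation process
                    [0.3, 0.7]])

def filter_step(belief, obs):
    """HMM filtering step for the modulation process; it does not depend on the control."""
    predicted = belief @ P
    updated = predicted * obs_lik[:, obs]
    return updated / updated.sum()

belief, regime, inventory = np.array([0.5, 0.5]), 0, 10.0
for t in range(30):
    regime = rng.choice(2, p=P[regime])                    # hidden modulation process
    obs = rng.choice(2, p=obs_lik[regime])                 # observation of the modulation only
    belief = filter_step(belief, obs)                      # estimation: control invariant
    order = max(0.0, 5.0 + 10.0 * belief[1] - inventory)   # control acts on belief and state
    demand = rng.poisson([3.0, 8.0][regime])
    inventory = max(0.0, inventory + order - demand)

print(f"belief over regimes: {np.round(belief, 3)}, inventory: {inventory:.1f}")
```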
We study a non-standard infinite horizon, infinite dimensional linear-quadratic control problem arising in the physics of non-stationary states (see e.g. \cite{BDGJL4,BertiniGabrielliLebowitz05}): finding the minimum energy to drive a given stationary state $\bar x=0$ (at time $t=-\infty$) into an arbitrary non-stationary state $x$ (at time $t=0$). This is the opposite of what is commonly studied in the literature on null controllability (where one drives a generic state $x$ into the equilibrium state $\bar x=0$). Consequently, the Algebraic Riccati Equation (ARE) associated to this problem is non-standard, since the sign of the linear part is opposite to the usual one and since it is intrinsically unbounded. Hence the standard theory of AREs does not apply. The analogous finite horizon problem has been studied in the companion paper \cite{AcquistapaceGozzi17}. Here, similarly to that paper, we prove that the linear self-adjoint operator associated to the value function is a solution of the above-mentioned ARE. Moreover, differently from \cite{AcquistapaceGozzi17}, we prove that such a solution is the maximal one. The first main result (Theorem \ref{th:maximalARE}) is proved by approximating the problem with suitable auxiliary finite horizon problems (which are different from the one studied in \cite{AcquistapaceGozzi17}). Finally, in the special case where the involved operators commute, we characterize all solutions of the ARE (Theorem \ref{th:sol=proj}) and apply this to the Landau-Ginzburg model.
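
For orientation only (this is the textbook finite-dimensional form with $R=I$, not the operator equation actually studied in the paper): the standard infinite-horizon LQ problem with dynamics $x' = Ax + Bu$ and cost $\int_0^{\infty}\big(\langle Qx, x\rangle + \|u\|^2\big)\,dt$ leads to the ARE $A^*P + PA - PBB^*P + Q = 0$. Per the abstract, the equation considered here instead carries the opposite sign on the linear part $A^*P + PA$ and is intrinsically unbounded, which is why the standard well-posedness theory does not apply.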
