
Choice modelling in the age of machine learning

Published by Sander Van Cranenburgh
Publication date: 2021
Language: English





Since its inception, the choice modelling field has been dominated by theory-driven models. The recent emergence and growing popularity of machine learning models offer an alternative, data-driven approach. Machine learning models, techniques and practices could help overcome problems and limitations of the current theory-driven modelling paradigm, such as the ad hoc search for the optimal model specification and the inability of theory-driven choice models to work with text and image data. However, despite the potential value of machine learning for improving choice modelling practices, the choice modelling field has been somewhat hesitant to embrace it. The aim of this paper is to facilitate (further) integration of machine learning in the choice modelling field. To achieve this objective, we make the case that such integration benefits the choice modelling field, and we shed light on where those benefits can be found. Specifically, we take the following approach. First, we clarify the similarities and differences between the two modelling paradigms. Second, we provide a literature overview on the use of machine learning for choice modelling. Third, we reinforce the strengths of the current theory-driven modelling paradigm and compare them with those of the machine learning paradigm. Fourth, we identify opportunities for embracing machine learning for choice modelling, while recognising the strengths of the current theory-driven paradigm. Finally, we put forward a vision of the future relationship between theory-driven choice models and machine learning.
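To make the contrast between the two paradigms concrete, here is a minimal sketch (not from the paper; the simulated data and all variable names are illustrative) fitting a theory-driven binary logit and a data-driven random forest to the same choice data:

```python
# Minimal sketch (illustrative; not the paper's code): a binary logit with
# interpretable taste parameters, estimated by maximum likelihood, next to a
# random forest fitted to the same simulated mode-choice data.
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
cost_diff = rng.normal(size=n)            # cost(alt 1) - cost(alt 0)
time_diff = rng.normal(size=n)            # travel-time difference
X = np.column_stack([cost_diff, time_diff])
utility = -1.0 * cost_diff - 0.5 * time_diff             # assumed "true" tastes
y = (utility + rng.logistic(size=n) > 0).astype(int)     # random utility maximisation

# Theory-driven: maximise the logit log-likelihood over taste parameters.
def neg_loglik(beta):
    v = X @ beta
    return -np.sum(y * v - np.logaddexp(0.0, v))

beta_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x
print("estimated taste parameters:", beta_hat)           # close to [-1.0, -0.5]

# Data-driven: the forest learns the choice rule without a utility specification.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("in-sample forest accuracy:", rf.score(X, y))
```

The logit recovers taste parameters tied to a behavioural theory; the forest trades that interpretability for flexibility, which is the trade-off the paper examines.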




Read also

The Random Utility Maximization model is by far the most widely adopted framework for estimating consumer choice behavior. However, behavioral economics has provided strong empirical evidence of irrational choice behavior, such as halo effects, that is incompatible with this framework. Models belonging to the Random Utility Maximization family may therefore not accurately capture such irrational behavior. Hence, more general choice models that overcome these limitations have been proposed. However, the flexibility of such models comes at the price of an increased risk of overfitting, so estimating them remains a challenge. In this work, we propose an estimation method for the recently proposed Generalized Stochastic Preference choice model, which subsumes the family of Random Utility Maximization models and is capable of capturing halo effects. Specifically, we show how to use partially-ranked preferences to efficiently model rational and irrational customer types from transaction data. Our estimation procedure is based on column generation, where relevant customer types are efficiently extracted by expanding a tree-like data structure containing the customer behaviors. Further, we propose a new dominance rule among customer types whose effect is to prioritize low orders of interactions among products. An extensive set of experiments assesses the predictive accuracy of the proposed approach. Our results show that accounting for irrational preferences can boost predictive accuracy by 12.5% on average when tested on a real-world dataset from a large chain of grocery and drug stores.
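The central object in such rank-based choice models is a probability distribution over customer types, each defined by a (partially) ranked preference list. A minimal sketch of the model-evaluation step (illustrative; this is not the authors' column-generation estimator, and the products, rankings, and weights are hypothetical):

```python
# Minimal sketch (illustrative): choice probabilities from a mixture of
# ranking-based customer types. Each type buys its highest-ranked product
# present in the offer set, or walks away (None) if none of its list is offered.

# Each type: (preference ranking, probability mass). The third type is only
# partially ranked: it buys B or nothing.
types = [(["A", "B", "C"], 0.5),
         (["C", "A", "B"], 0.3),
         (["B"], 0.2)]

def choice_probs(offer_set):
    """Probability that each offered product (or no-purchase, None) is chosen."""
    probs = {p: 0.0 for p in offer_set}
    probs[None] = 0.0
    for ranking, weight in types:
        pick = next((p for p in ranking if p in offer_set), None)
        probs[pick] += weight
    return probs

print(choice_probs({"A", "B"}))   # A: 0.8 (type 2 settles for A), B: 0.2
print(choice_probs({"B", "C"}))   # B: 0.7, C: 0.3
```

The Generalized Stochastic Preference model extends this mixture with "irrational" types that do not simply pick their top available option, which is what lets it capture halo effects.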
Yun Liu, Yeonwoo Rho (2018)
Time averaging has been the traditional approach to handling mixed sampling frequencies. However, it ignores information possibly embedded in the high-frequency data. Mixed data sampling (MIDAS) regression models provide a concise way to utilize the additional information in high-frequency variables. In this paper, we propose a specification test to choose between time averaging and MIDAS models, based on a Durbin-Wu-Hausman test. In particular, a set of instrumental variables is proposed and theoretically validated when the frequency ratio is large. As a result, our method tends to be more powerful than existing methods, as reconfirmed through simulations.
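A minimal sketch of the two regressor constructions the test chooses between (illustrative; the exponential-Almon form is a standard MIDAS weighting scheme, and the theta values below are assumed, not estimated):

```python
# Minimal sketch (illustrative): a time-averaged regressor vs a MIDAS
# regressor built from exponential-Almon weights. Theta values are assumed.
import numpy as np

rng = np.random.default_rng(1)
m = 22                               # high-frequency obs per low-frequency period
x_hf = rng.normal(size=(120, m))     # 120 low-frequency periods of daily data

# Time averaging: equal weights, discarding within-period timing information.
x_avg = x_hf.mean(axis=1)

# MIDAS: a parsimonious lag polynomial lets the data concentrate weight
# on the most informative high-frequency lags.
def exp_almon_weights(theta1, theta2, m):
    j = np.arange(1, m + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

x_midas = x_hf @ exp_almon_weights(0.1, -0.02, m)   # one regressor per period
```

The proposed Durbin-Wu-Hausman-type test then adjudicates between regressions built on these two constructions.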
Based on evidence gathered from a newly built large macroeconomic data set for the UK, labeled UK-MD and comparable to similar datasets for the US and Canada, the most promising avenue for forecasting during the pandemic appears to be allowing for general forms of nonlinearity by using machine learning (ML) methods. But not all nonlinear ML methods are alike. For instance, some cannot extrapolate (like regular trees and forests) and some can (when complemented with linear dynamic components). This and other crucial aspects of ML-based forecasting in unprecedented times are studied in an extensive pseudo-out-of-sample exercise.
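The extrapolation point is easy to demonstrate; a minimal sketch (illustrative, not the paper's exercise):

```python
# Minimal sketch (illustrative): a regression tree predicts a constant outside
# its training range, while a linear model carries the trend forward.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

x = np.linspace(0, 1, 200).reshape(-1, 1)
y = 2.0 * x.ravel()                        # simple upward trend

tree = DecisionTreeRegressor(max_depth=5).fit(x, y)
ols = LinearRegression().fit(x, y)

x_new = np.array([[2.0]])                  # far outside the training support
print(tree.predict(x_new))   # ~2.0: capped at the largest fitted value
print(ols.predict(x_new))    # ~4.0: the linear component extrapolates
```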
We study the impact of weak identification in discrete choice models, and provide insights into the determinants of identification strength in these models. Using these insights, we propose a novel test that can consistently detect weak identification in commonly applied discrete choice models, such as probit, logit, and many of their extensions. Furthermore, we demonstrate that when the null hypothesis of weak identification is rejected, Wald-based inference can be carried out using standard formulas and critical values. A Monte Carlo study compares our proposed testing approach against commonly applied weak identification tests. The results simultaneously demonstrate the good performance of our approach and the fundamental failure of using conventional weak identification tests for linear models in the discrete choice model context. Furthermore, we compare our approach against those commonly applied in the literature in two empirical examples: married women's labor force participation, and US food aid and civil conflicts.
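A minimal sketch of the standard Wald inference that, per the paper, remains valid once weak identification is rejected (illustrative; simulated data, and the weak-identification test itself is not implemented here):

```python
# Minimal sketch (illustrative): standard Wald inference in a probit model.
# The paper's result is that these standard formulas are justified once the
# null of weak identification is rejected; the test itself is not shown here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = (0.8 * x + rng.normal(size=n) > 0).astype(int)   # probit data-generating process

res = sm.Probit(y, sm.add_constant(x)).fit(disp=0)
wald_z = res.params[1] / res.bse[1]                  # Wald statistic for the slope
print(f"beta_hat = {res.params[1]:.3f}, Wald z = {wald_z:.2f}")
```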
Consider a planner who has to decide whether or not to introduce a new policy to a certain local population. The planner has only limited knowledge of the policy's causal impact on this population due to a lack of data, but does have access to the publicized results of intervention studies performed for similar policies on different populations. How should the planner make use of and aggregate this existing evidence to make her policy decision? Building upon the paradigm of 'patient-centered meta-analysis' proposed by Manski (2020; Towards Credible Patient-Centered Meta-Analysis, Epidemiology), we formulate the planner's problem as a statistical decision problem with a social welfare objective pertaining to the local population, and solve for an optimal aggregation rule under the minimax-regret criterion. We investigate the analytical properties, computational feasibility, and welfare regret performance of this rule. We also compare the minimax-regret decision rule with plug-in decision rules based upon a hierarchical Bayes meta-regression or stylized mean-squared-error optimal prediction. We apply the minimax-regret decision rule to two settings: whether to enact an active labor market policy given evidence from 14 randomized control trial studies; and whether to approve a drug (Remdesivir) for COVID-19 treatment using a meta-database of clinical trials.
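A stylized sketch of the minimax-regret logic (deliberately simplified: noiseless study estimates, a small grid of hypothetical states, and an aggregation rule restricted to a fixed weighted average; the paper's actual rule is derived analytically):

```python
# Stylized sketch (not the paper's rule): choosing the weight on two study
# estimates by minimax regret. States of the world are a grid of local
# treatment effects plus assumed study biases; estimates are noiseless.
import numpy as np

local_effects = np.linspace(-1.0, 1.0, 41)          # hypothetical local effects
biases = [(-0.3, 0.2), (0.0, 0.0), (0.2, -0.3)]     # assumed bias scenarios

def max_regret(w):
    """Worst-case welfare regret of the rule: adopt iff w*est1 + (1-w)*est2 > 0."""
    worst = 0.0
    for effect in local_effects:
        for b1, b2 in biases:
            est1, est2 = effect + b1, effect + b2
            adopt = w * est1 + (1.0 - w) * est2 > 0
            welfare = effect if adopt else 0.0       # welfare of the decision
            worst = max(worst, max(effect, 0.0) - welfare)
    return worst

weights = np.linspace(0.0, 1.0, 101)
best_w = weights[np.argmin([max_regret(w) for w in weights])]
print("minimax-regret weight on study 1:", best_w)
```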
