
A Dynamic Bayesian Network Model for Inventory Level Estimation in Retail Marketing

Published by Luis Reyes Castro
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English





Many retailers today employ inventory management systems based on Re-Order Point Policies, most of which rely on the assumption that all decreases in product inventory levels result from product sales. Unfortunately, small but random quantities of the product usually get lost, stolen or broken without record as time passes, e.g., as a consequence of shoplifting. This is common for retailers handling large varieties of inexpensive products, e.g., grocery stores. Over time these discrepancies lead to stock-freezing problems, i.e., situations where the system believes the stock is above the re-order point while the actual stock is at zero, so no replenishments or sales occur. Motivated by these issues, we model the interaction between sales, losses, replenishments and inventory levels as a Dynamic Bayesian Network (DBN), in which the inventory levels are unobserved (i.e., hidden) variables we wish to estimate. We present an Expectation-Maximization (EM) algorithm to estimate the parameters of the sale and loss distributions, which relies on solving a one-dimensional dynamic program for the E-step and two separate one-dimensional nonlinear programs for the M-step.
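To make the E-step concrete, below is a minimal sketch of the one-dimensional dynamic program as a forward filter over discrete inventory levels. The Poisson demand and loss distributions, the shelf capacity CAP, the event ordering (sales, then losses, then replenishment), and all rates are assumptions for illustration; the paper's full EM would alternate such a filtering pass (plus a backward pass) with the two one-dimensional M-step updates of the sale and loss parameters.

```python
import numpy as np
from scipy.stats import poisson

# Minimal sketch of the E-step's one-dimensional dynamic program:
# forward filtering over the hidden inventory level of the DBN.
CAP = 20            # maximum inventory level (assumed)
LAM_SALE = 3.0      # Poisson demand rate (parameter to be estimated)
LAM_LOSS = 0.4      # Poisson loss/shrinkage rate (parameter to be estimated)

def emission(sales, lam_sale):
    """P(observed sales | inventory level x) for x = 0..CAP.

    If sales < x, demand was exactly `sales`; if sales == x, demand
    was censored at the available stock (tail probability)."""
    x = np.arange(CAP + 1)
    return np.where(x > sales,
                    poisson.pmf(sales, lam_sale),              # demand == sales
                    np.where(x == sales,
                             poisson.sf(sales - 1, lam_sale),  # demand >= stock
                             0.0))                             # impossible

def transition(lam_loss):
    """T[x_next, x_after_sales]: losses remove a Poisson number of units,
    truncated at the remaining stock (excess mass lumped at zero)."""
    T = np.zeros((CAP + 1, CAP + 1))
    for x in range(CAP + 1):
        for loss in range(x):
            T[x - loss, x] = poisson.pmf(loss, lam_loss)
        T[0, x] += poisson.sf(x - 1, lam_loss)    # loss >= x empties the shelf
    return T

def forward_filter(sales, repl, lam_sale, lam_loss):
    """Filtered posterior over inventory levels after each period."""
    alpha = np.full(CAP + 1, 1.0 / (CAP + 1))     # uninformative initial belief
    T = transition(lam_loss)
    posteriors = []
    for s_t, r_t in zip(sales, repl):
        alpha = alpha * emission(s_t, lam_sale)   # condition on observed sales
        shifted = np.zeros_like(alpha)            # remove the sold units
        shifted[:CAP + 1 - s_t] = alpha[s_t:]
        alpha = T @ shifted                       # random, unobserved losses
        idx = np.minimum(np.arange(CAP + 1) + r_t, CAP)
        bumped = np.zeros_like(alpha)             # observed replenishment,
        np.add.at(bumped, idx, alpha)             # clipped at shelf capacity
        alpha = bumped / bumped.sum()
        posteriors.append(alpha.copy())
    return np.array(posteriors)

# Example: a run of observed sales and replenishments.
sales = [3, 2, 4, 0, 1, 0, 0, 0]
repl  = [0, 0, 10, 0, 0, 0, 0, 0]
post = forward_filter(sales, repl, LAM_SALE, LAM_LOSS)
print("P(inventory == 0) per period:", post[:, 0].round(3))
```

The filtered probability that the shelf is actually empty is exactly the quantity a stock-freezing monitor would track, even while the re-order-point system still believes stock is available.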



Read also

Self-reinforcing feedback loops in personalization systems are typically caused by users choosing from a limited set of alternatives presented systematically based on previous choices. We propose a Bayesian choice model built on Luce's axioms that explicitly accounts for users' limited exposure to alternatives. Our model is fair, in that it does not impose a negative bias towards unpresented alternatives, and practical, in that preference estimates are accurately inferred upon observing a small number of interactions. It also allows efficient sampling, leading to a straightforward online presentation mechanism based on Thompson sampling. Our approach achieves low regret in learning to present upon exploring only a small fraction of possible presentations. The proposed structure can be reused as a building block in interactive systems, e.g., recommender systems, free of feedback loops.
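As a rough illustration of the presentation mechanism, here is a Thompson-sampling loop under a much simpler Beta-Bernoulli preference model (not the paper's Luce choice model); the item count, slate size, and preference values are made up. The property carried over is that only presented alternatives receive posterior updates, so unpresented ones retain their prior instead of accumulating negative bias.

```python
import numpy as np

# Thompson-sampling presentation loop under an illustrative
# Beta-Bernoulli preference model.
rng = np.random.default_rng(0)
N_ITEMS, SLATE, ROUNDS = 20, 4, 2000
true_pref = rng.uniform(0.05, 0.6, size=N_ITEMS)  # unknown ground truth

alpha = np.ones(N_ITEMS)   # Beta posterior: successes + 1
beta = np.ones(N_ITEMS)    # Beta posterior: failures + 1

for _ in range(ROUNDS):
    theta = rng.beta(alpha, beta)                  # one posterior sample per item
    slate = np.argsort(-theta)[:SLATE]             # present the sampled-best items
    clicks = rng.random(SLATE) < true_pref[slate]  # user interacts with the slate
    alpha[slate] += clicks                         # update ONLY presented items,
    beta[slate] += ~clicks                         # so unpresented ones keep
                                                   # their prior, not a penalty

best = np.argsort(-alpha / (alpha + beta))[:SLATE]
print("estimated top slate:", best)
print("true top slate:     ", np.argsort(-true_pref)[:SLATE])
```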
Measuring and evaluating network resilience has become important since networks are vulnerable to both uncertain disturbances and malicious attacks. Networked systems are often composed of many dynamic components and change over time, which makes it difficult for existing methods to assess the changing state of network resilience. This paper establishes a novel quantitative framework for evaluating network resilience using a Dynamic Bayesian Network. The proposed framework can be used to evaluate a network's multi-stage resilience processes when suffering various attacks and recoveries. First, we define the dynamic capacities of network components and establish the network's five core resilience capabilities to describe the resilient networking stages, including preparation, resistance, adaptation, recovery, and evolution; the five core resilience capabilities consist of rapid response, sustained resistance, continuous running, rapid convergence, and dynamic evolution. Then, we employ a two-time-slice approach based on the Dynamic Bayesian Network to quantify five crucial performances of network resilience based on the core capabilities proposed above. The proposed approach ensures the time continuity of resilience evaluation in time-varying networks. Finally, our evaluation framework is applied to different attack and recovery conditions in typical simulations and a real-world network topology. Results and comparisons with extant studies indicate that the proposed method achieves a more accurate and comprehensive evaluation and can be applied to network scenarios under various attack and recovery intensities.
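A generic two-time-slice calculation, stripped down for illustration: treat each component as an independent up/down node whose slice-to-slice transition probabilities depend on the current stage, and score resilience as the average expected performance across stages. The stage-specific rates and the independence assumption are mine; the paper's framework couples components and the five capabilities within a full DBN.

```python
import numpy as np

# Two-time-slice update: belief that each component is up at t+1
# given its stage-dependent survival and repair probabilities at t.
N_COMP = 50
stages = ["preparation", "resistance", "adaptation", "recovery", "evolution"]
p_stay_up = {"preparation": 0.99, "resistance": 0.80, "adaptation": 0.90,
             "recovery": 0.97, "evolution": 0.99}   # P(up_{t+1} | up_t)
p_repair  = {"preparation": 0.05, "resistance": 0.02, "adaptation": 0.20,
             "recovery": 0.50, "evolution": 0.30}   # P(up_{t+1} | down_t)

p_up = np.full(N_COMP, 1.0)          # belief that each component is up
performance = []
for stage in stages:
    for _ in range(10):              # ten time slices per stage
        p_up = p_up * p_stay_up[stage] + (1 - p_up) * p_repair[stage]
    performance.append(p_up.mean())  # expected fraction of components up

for s, perf in zip(stages, performance):
    print(f"{s:12s} expected performance: {perf:.3f}")
print("overall resilience score:", round(float(np.mean(performance)), 3))
```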
We show theoretical similarities between the Least Squares Support Vector Regression (LS-SVR) model with a Radial Basis Function (RBF) kernel and maximum a posteriori (MAP) inference on Bayesian RBF networks with a specific Gaussian prior on the regression weights. Although previous works have pointed out similar expressions between those learning approaches, we explicitly and formally state the existing correspondences. We empirically demonstrate our result by performing computational experiments with standard regression benchmarks. Our findings open a range of possibilities to improve LS-SVR by borrowing strength from well-established developments in Bayesian methodology.
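The correspondence is easiest to see from the LS-SVR side. Below is a minimal LS-SVR implementation solving the standard KKT linear system; under the paper's result, the resulting dual coefficients can be read as MAP weights of a Bayesian RBF network with a particular Gaussian prior. The hyperparameter values and synthetic data are illustrative.

```python
import numpy as np

def rbf(A, B, sigma):
    """RBF (Gaussian) kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=0.5):
    n = len(y)
    K = rbf(X, X, sigma)
    # KKT system: [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    # alpha plays the role of posterior-mode weights of an RBF network
    # whose Gaussian prior precision corresponds to 1/gamma.
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
predict = lssvr_fit(X, y)
Xq = np.linspace(-3, 3, 5)[:, None]
print("prediction:", predict(Xq).round(2))
print("truth:     ", np.sin(Xq[:, 0]).round(2))
```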
Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network is challenging but essential in applications with limited data and weak signals. Two types of domain knowledge are commonly available in scientific applications: 1. feature sparsity (the fraction of features deemed relevant); 2. the signal-to-noise ratio, quantified, for instance, as the proportion of variance explained (PVE). We show how to encode both types of domain knowledge into the widely used Gaussian scale mixture priors with Automatic Relevance Determination. Specifically, we propose a new joint prior over the local (i.e., feature-specific) scale parameters that encodes knowledge about feature sparsity, and a Stein gradient optimization to tune the hyperparameters so that the distribution induced on the model's PVE matches the prior distribution. We show empirically that the new prior improves prediction accuracy compared to existing neural network priors on several publicly available datasets and in a genetics application where signals are weak and sparse, often outperforming even computationally intensive cross-validation for hyperparameter tuning.
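The "distribution induced on the PVE" is easy to probe by Monte Carlo. The sketch below draws weights from an illustrative Gaussian scale mixture with ARD-style local scales (half-Cauchy, with a simple Bernoulli on/off sparsity mechanism; neither is the paper's exact construction) and reports the PVE distribution the prior induces. Tuning in the paper's sense means adjusting hyperparameters like TAU until this induced distribution matches the stated prior belief.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 500, 100
SPARSITY = 0.1        # prior belief: ~10% of features relevant (assumed)
TAU = 0.3             # global scale, the kind of hyperparameter to be tuned
SIGMA2 = 1.0          # noise variance (assumed)

X = rng.standard_normal((N, D))

def induced_pve(tau, n_draws=2000):
    """Monte Carlo samples of the PVE induced by the prior on weights."""
    pves = np.empty(n_draws)
    for i in range(n_draws):
        relevant = rng.random(D) < SPARSITY         # which features are "on"
        lam = np.abs(rng.standard_cauchy(D))        # local per-feature scales
        w = rng.normal(0.0, tau * lam) * relevant   # Gaussian scale mixture draw
        f = X @ w
        pves[i] = f.var() / (f.var() + SIGMA2)      # proportion of variance explained
    return pves

pve = induced_pve(TAU)
print("induced PVE quantiles (5%, 50%, 95%):",
      np.quantile(pve, [0.05, 0.5, 0.95]).round(3))
```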
Public special events, like sports games, concerts and festivals, are well known to create disruptions in transportation systems, often catching operators by surprise. Although these events are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem is compounded when several events happen concurrently. To address it, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London and Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R^2 and also has explanatory power for its individual components.
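The disaggregation step has a clean form when the additive GP components are handled with exact conjugate inference (the paper uses expectation propagation; the exact-Gaussian shortcut, the synthetic data, and the kernel choices below are simplifications): the posterior mean of each component is its own kernel matrix applied to the shared weight vector.

```python
import numpy as np

# Two-component additive GP: trips = routine(time) + event effect + noise.
def rbf(a, b, ls):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(3)
t = np.linspace(0, 24, 120)                        # hours of the day
routine = 100 + 40 * np.sin(2 * np.pi * t / 24)    # daily commuting pattern
event = 80 * np.exp(-0.5 * ((t - 20) / 1.0) ** 2)  # evening event surge
y = routine + event + 5 * rng.standard_normal(len(t))

K_routine = 40.0 ** 2 * rbf(t, t, ls=6.0)          # smooth daily component
K_event = 80.0 ** 2 * rbf(t, t, ls=0.8)            # short, sharp event component
K = K_routine + K_event + 25.0 * np.eye(len(t))    # additive model + noise

alpha = np.linalg.solve(K, y - y.mean())
routine_hat = K_routine @ alpha + y.mean()         # posterior mean, component 1
event_hat = K_event @ alpha                        # posterior mean, component 2

peak = np.argmax(event_hat)
print(f"event component peaks at t={t[peak]:.1f}h "
      f"with ~{event_hat[peak]:.0f} extra trips")
```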