
Designing Optimal Binary Rating Systems

 Added by Nikhil Garg
 Publication date 2018
Language: English





Modern online platforms rely on effective rating systems to learn about items. We consider the optimal design of rating systems that collect binary feedback after transactions. We make three contributions. First, we formalize the performance of a rating system as the speed with which it recovers the true underlying ranking on items (in a large deviations sense), accounting for both items' underlying match rates and the platform's preferences. Second, we provide an efficient algorithm to compute the binary feedback system that yields the highest such performance. Finally, we show how this theoretical perspective can be used to empirically design an implementable, approximately optimal rating system, and validate our approach using real-world experimental data collected on Amazon Mechanical Turk.
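As a toy illustration of "speed of recovering the true ranking" (not the paper's algorithm; the match rates below are hypothetical), the sketch Monte-Carlo-estimates how the probability of misranking two items decays as each item accumulates more binary ratings.

```python
import random

def simulate_misrank_prob(p_a, p_b, n, trials=2000, seed=0):
    """Estimate the probability that item B's empirical positive-rating
    count ties or exceeds item A's after n binary ratings each, when the
    true positive-rating probabilities satisfy p_a > p_b."""
    rng = random.Random(seed)
    misranks = 0
    for _ in range(trials):
        pos_a = sum(rng.random() < p_a for _ in range(n))
        pos_b = sum(rng.random() < p_b for _ in range(n))
        if pos_b >= pos_a:  # tie or reversal counts as a misranking
            misranks += 1
    return misranks / trials
```

With hypothetical match rates 0.9 vs. 0.5, the misranking probability shrinks rapidly in n; the paper's large-deviations criterion formalizes the exponential rate of exactly this kind of decay.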



Platforms critically rely on rating systems to learn the quality of market participants. In practice, however, these ratings are often highly inflated, and therefore not very informative. In this paper, we first investigate whether the platform can obtain less inflated, more informative ratings by altering the meaning and relative importance of the levels in the rating system. Second, we seek a principled approach for the platform to make these choices in the design of the rating system. First, we analyze the results of a randomized controlled trial on an online labor market in which an additional question was added to the feedback form. Between treatment conditions, we vary the question phrasing and answer choices; in particular, the treatment conditions include several positive-skewed verbal rating scales with descriptive phrases or adjectives providing specific interpretation for each rating level. The online labor market test reveals that current inflationary norms can in fact be countered by re-anchoring the meaning of the levels of the rating system. In particular, the positive-skewed verbal rating scales yield rating distributions that significantly reduce rating inflation and are much more informative about seller quality. Second, we develop a model-based framework to compare and select among rating system designs, and apply this framework to the data obtained from the online labor market test. Our simulations demonstrate that our model-based framework for scale design and optimization can identify the most informative rating system and substantially improve the quality of information obtained over baseline designs. Overall, our study illustrates that rating systems that are informative in practice can be designed, and demonstrates how to design them in a principled manner.
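One way to quantify "more informative about seller quality" is to compare how far apart the rating distributions of high- and low-quality sellers sit under each scale. The sketch below uses KL divergence with entirely made-up distributions (the numbers are illustrative, not the paper's data): an inflated scale piles both seller types onto the top level, while a positive-skewed verbal scale spreads them out and raises the divergence.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) in nats for discrete distributions over rating levels."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical rating distributions over a 5-level scale for
# high- vs low-quality sellers (lowest level first).
inflated_high = [0.01, 0.01, 0.03, 0.15, 0.80]
inflated_low  = [0.02, 0.03, 0.05, 0.20, 0.70]
skewed_high   = [0.02, 0.08, 0.20, 0.40, 0.30]
skewed_low    = [0.15, 0.30, 0.30, 0.20, 0.05]
```

Under these made-up numbers the skewed scale separates the two seller types far more than the inflated one, mirroring the direction of the paper's finding.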
We study learning in a noisy bisection model: specifically, Bayesian algorithms to learn a target value V given access only to noisy realizations of whether V is less than or greater than a threshold theta. At step t = 0, 1, 2, ..., the learner sets threshold theta_t and observes a noisy realization of sign(V - theta_t). After T steps, the goal is to output an estimate V^ which is within an eta-tolerance of V. This problem has been studied predominantly in environments with a fixed error probability q < 1/2 for the noisy realization of sign(V - theta_t). In practice, it is often the case that q can approach 1/2, especially as theta -> V, and little is known when this happens. We give a pseudo-Bayesian algorithm which provably converges to V. When the true prior matches our algorithm's Gaussian prior, we show near-optimal expected performance. Our methods extend to the general multiple-threshold setting where the observation noisily indicates which of k >= 2 regions V belongs to.
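A minimal sketch of the bisection setting, simplified in two ways the paper does not assume: the flip probability q is fixed (rather than worsening as theta approaches V), and the posterior is kept on a discrete grid rather than the paper's Gaussian prior. Each step queries the posterior median and performs a Bayes update on the noisy sign.

```python
import random

def noisy_bisection(v_true, q, steps=80, lo=0.0, hi=1.0, grid=512, seed=1):
    """Grid-based pseudo-Bayesian bisection with fixed flip probability
    q < 1/2. Returns the posterior mean as the estimate of V."""
    rng = random.Random(seed)
    xs = [lo + (hi - lo) * (i + 0.5) / grid for i in range(grid)]
    post = [1.0 / grid] * grid
    for _ in range(steps):
        acc, theta = 0.0, xs[-1]
        for x, p in zip(xs, post):        # threshold = posterior median
            acc += p
            if acc >= 0.5:
                theta = x
                break
        above = v_true > theta            # true sign(V - theta)
        if rng.random() < q:              # observation flipped w.p. q
            above = not above
        post = [p * ((1 - q) if ((x > theta) == above) else q)
                for x, p in zip(xs, post)]
        z = sum(post)
        post = [p / z for p in post]      # renormalize
    return sum(x * p for x, p in zip(xs, post))
```

Querying the posterior median is what makes each (possibly flipped) sign observation maximally informative: it splits the remaining posterior mass in half.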
Sequential learning systems are used in a wide variety of problems from decision making to optimization, where they provide a belief (opinion) to nature, and then update this belief based on the feedback (result) to minimize (or maximize) some cost or loss (conversely, utility or gain). The goal is to reach an objective by exploiting the temporal relation inherent to nature's feedback (state). By exploiting this relation, specific learning systems can be designed that perform asymptotically optimally for various applications. However, if the framework of the problem is not stationary, i.e., nature's state sometimes changes arbitrarily, the past cumulative belief revision done by the system may become useless and the system may fail if it lacks adaptivity. While this adaptivity can be directly implemented in specific cases (e.g., convex optimization), it is mostly not straightforward for general learning tasks. To this end, we propose an efficient optimal mixture framework for general sequential learning systems, which we call the recursive experts for dynamic environments. For this purpose, we design hyper-experts that incorporate the learning systems at our disposal and recursively merge them in a specific way to achieve minimax optimal regret bounds up to constant factors. The multiplicative increases in computational complexity from the initial system to our adaptive system are only logarithmic-in-time factors.
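The standard building block for merging learners is the exponentially weighted (Hedge) forecaster; the paper's recursive construction layers such mixtures to handle non-stationarity. A minimal sketch of the building block alone, with a hypothetical loss matrix and a fixed learning rate eta:

```python
import math

def hedge(expert_losses, eta=0.5):
    """Exponentially weighted forecaster over K experts.

    expert_losses: T x K matrix of per-round losses in [0, 1].
    Returns the forecaster's cumulative expected loss under
    multiplicative-weights updates."""
    k = len(expert_losses[0])
    w = [1.0] * k
    total = 0.0
    for losses in expert_losses:
        z = sum(w)
        # expected loss of playing the current weighted mixture
        total += sum(wi * li for wi, li in zip(w, losses)) / z
        # multiplicative-weights update
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return total
```

If one expert incurs zero loss every round, the forecaster's cumulative loss stays bounded by a constant (the regret), rather than growing with the horizon.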
Matthew Streeter (2019)
We present algorithms for efficiently learning regularizers that improve generalization. Our approach is based on the insight that regularizers can be viewed as upper bounds on the generalization gap, and that reducing the slack in the bound can improve performance on test data. For a broad class of regularizers, the hyperparameters that give the best upper bound can be computed using linear programming. Under certain Bayesian assumptions, solving the LP lets us jump to the optimal hyperparameters given very limited data. This suggests a natural algorithm for tuning regularization hyperparameters, which we show to be effective on both real and synthetic data.
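The paper computes the best hyperparameters by linear programming; as a loose stand-in for that idea, the hypothetical sketch below grid-searches an L2 penalty for a one-dimensional regularized regression and picks the value that performs best out of sample. All data and the candidate grid are made up for illustration.

```python
import random

def ridge_slope(xs, ys, lam):
    """Closed-form L2-regularized slope for the model y ~ w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(xs, ys, w):
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Synthetic data: true slope 2.0, unit Gaussian noise (hypothetical).
rng = random.Random(0)
train = [(x, 2.0 * x + rng.gauss(0, 1))
         for x in (rng.uniform(-1, 1) for _ in range(10))]
test = [(x, 2.0 * x + rng.gauss(0, 1))
        for x in (rng.uniform(-1, 1) for _ in range(200))]
tx, ty = zip(*train)
vx, vy = zip(*test)

# Pick the penalty whose fitted slope generalizes best.
best_lam = min([0.0, 0.1, 1.0, 10.0],
               key=lambda lam: mse(vx, vy, ridge_slope(tx, ty, lam)))
```

The penalty shrinks the fitted slope toward zero; the grid search trades that shrinkage off against fit, which is the role the paper's LP plays analytically.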
Formal models of learning from teachers need to respect certain criteria to avoid collusion. The most commonly accepted notion of collusion-freeness was proposed by Goldman and Mathias (1996), and various teaching models obeying their criterion have been studied. For each model $M$ and each concept class $\mathcal{C}$, a parameter $M$-$\mathrm{TD}(\mathcal{C})$ refers to the teaching dimension of concept class $\mathcal{C}$ in model $M$---defined to be the number of examples required for teaching a concept, in the worst case over all concepts in $\mathcal{C}$. This paper introduces a new model of teaching, called no-clash teaching, together with the corresponding parameter $\mathrm{NCTD}(\mathcal{C})$. No-clash teaching is provably optimal in the strong sense that, given any concept class $\mathcal{C}$ and any model $M$ obeying Goldman and Mathias's collusion-freeness criterion, one obtains $\mathrm{NCTD}(\mathcal{C}) \le M$-$\mathrm{TD}(\mathcal{C})$. We also study a corresponding notion $\mathrm{NCTD}^+$ for the case of learning from positive data only, establish useful bounds on $\mathrm{NCTD}$ and $\mathrm{NCTD}^+$, and discuss relations of these parameters to the VC-dimension and to sample compression. In addition to formulating an optimal model of collusion-free teaching, our main results are on the computational complexity of deciding whether $\mathrm{NCTD}^+(\mathcal{C})=k$ (or $\mathrm{NCTD}(\mathcal{C})=k$) for given $\mathcal{C}$ and $k$. We show some such decision problems to be equivalent to the existence question for certain constrained matchings in bipartite graphs. Our NP-hardness results for the latter are of independent interest in the study of constrained graph matchings.
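The decision problems above reduce to the existence of *constrained* bipartite matchings; the constrained variants are NP-hard, but the unconstrained building block is classical. A minimal sketch of Kuhn's augmenting-path algorithm for maximum bipartite matching (the unconstrained case only, not the paper's reduction):

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.

    adj[u] lists the right-side vertices adjacent to left-side vertex u.
    Returns the size of a maximum matching."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        # Try to match u, recursively displacing previous matches.
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```

The constrained versions studied in the paper add side conditions on which edge sets count as valid matchings, and it is exactly those conditions that make the existence question NP-hard.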
