We study the interpretability of conditional probability estimates for binary classification in the agnostic setting. In the agnostic setting, conditional probability estimates do not necessarily reflect the true conditional probabilities. Instead, they satisfy a certain calibration property: among all data points for which the classifier predicts P(Y = 1|X) = p, a p fraction actually have label Y = 1. For cost-sensitive decision problems, this calibration property provides adequate support for applying Bayes decision theory. In this paper, we define a novel measure of the calibration property together with its empirical counterpart, and prove a uniform convergence result between them. This new measure enables us to formally justify the calibration property of conditional probability estimates, and provides new insights into the problem of estimating and calibrating conditional probabilities.
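The paper's formal measure is not reproduced in this abstract. As a rough illustration of the calibration property it refers to, the sketch below (all names and the equal-width binning scheme are ours, not the paper's) performs a standard binned check: among points with predicted probability near p, roughly a p fraction should have label Y = 1.

```python
import numpy as np

def empirical_calibration(pred_probs, labels, n_bins=10):
    """Binned calibration check: for each probability bin, compare the
    mean predicted probability with the empirical fraction of Y = 1."""
    pred_probs = np.asarray(pred_probs, dtype=float)
    labels = np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.digitize(pred_probs, edges[1:-1])  # bin index 0 .. n_bins-1
    report = []
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            report.append((pred_probs[mask].mean(),  # average predicted p in bin
                           labels[mask].mean(),      # empirical P(Y = 1) in bin
                           int(mask.sum())))         # bin count
    return report

# A classifier that always predicts the base rate is uninformative,
# yet calibrated in exactly this sense.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=10_000)
print(empirical_calibration(np.full(y.shape, 0.3), y))
```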
The sensitivity conjecture is a longstanding and fundamental open problem in the area of complexity measures of Boolean functions and decision tree complexity. The conjecture postulates that the maximum sensitivity of a Boolean function is polynomially related to the other major complexity measures. Despite much attention to the problem and major advances in the analysis of Boolean functions over the past decade, the problem remains wide open, with no positive result toward the conjecture since the work of Kenyon and Kutin from 2004. In this work, we present new upper bounds on various complexity measures in terms of sensitivity, improving the bounds provided by Kenyon and Kutin. Specifically, we show that $\deg(f)^{1-o(1)}=O(2^{s(f)})$ and $C(f) < 2^{s(f)-1} s(f)$; via known results, these in turn imply various corollaries on the relation between sensitivity and other complexity measures, such as block sensitivity. The gap between sensitivity and the other complexity measures remains exponential, but these results are the first improvement on this difficult problem in a decade.
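To make the quantities in these bounds concrete, here is a brute-force sketch (ours, not from the paper) that computes the sensitivity $s(f)$ and polynomial degree $\deg(f)$ of a Boolean function given as a Python predicate on $\{0,1\}^n$; it is feasible only for small $n$.

```python
from itertools import combinations, product

def sensitivity(f, n):
    """s(f): max over inputs x of the number of coordinates whose
    single-bit flip changes f(x)."""
    return max(
        sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        for x in product((0, 1), repeat=n)
    )

def degree(f, n):
    """deg(f): degree of the unique multilinear real polynomial agreeing
    with f on {0,1}^n. The coefficient of the monomial for a set S is
    the Moebius sum c_S = sum_{T subseteq S} (-1)^{|S|-|T|} f(1_T)."""
    def one_at(T):
        return tuple(1 if i in T else 0 for i in range(n))
    deg = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            c = sum((-1) ** (r - k) * f(one_at(T))
                    for k in range(r + 1)
                    for T in combinations(S, k))
            if c != 0:
                deg = max(deg, r)
    return deg

MAJ3 = lambda x: int(sum(x) >= 2)             # 3-bit majority
print(sensitivity(MAJ3, 3), degree(MAJ3, 3))  # 2 3: degree can exceed sensitivity
```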
Given a batch of human computation tasks, a commonly ignored aspect is how the price (i.e., the reward paid to human workers) of these tasks must be set or varied in order to meet latency or cost constraints. Often, the price is set up front and never modified, leading either to a much higher monetary cost than needed (if the price is set too high) or to a much larger latency than expected (if the price is set too low). Leveraging a pricing model from prior work, we develop algorithms to optimally set, and then vary, price over time in order to meet (a) a user-specified deadline while minimizing total monetary cost, or (b) a user-specified monetary budget while minimizing total elapsed time. We leverage techniques from decision theory (specifically, Markov Decision Processes) for both of these problems, and demonstrate that our techniques lead to up to a 30% reduction in cost over schemes proposed in prior work. Furthermore, we develop techniques to speed up the computation, enabling users to run the price-setting algorithms on the fly.
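The pricing model and MDP formulation used in the paper are not given in this abstract. The sketch below is a hypothetical, minimal variant of goal (a): the price grid, the completion model `p_complete`, and the deadline-violation penalty are all invented assumptions; each open task is assumed to complete in a step with probability increasing in price, and finite-horizon dynamic programming picks the cost-minimizing price for each state (time left, tasks left).

```python
import math
from functools import lru_cache

PRICES = [0.05, 0.10, 0.25, 0.50]  # hypothetical price grid ($ per task)
PENALTY = 10.0                     # hypothetical cost per task unfinished at the deadline

def p_complete(price):
    """Hypothetical model: probability a posted task is completed within
    one time step, increasing and concave in price."""
    return 1.0 - math.exp(-8.0 * price)

def binom_pmf(n, k, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

@lru_cache(maxsize=None)
def V(t, k):
    """Minimum expected total cost to finish k open tasks with t steps left."""
    if k == 0:
        return 0.0
    if t == 0:
        return PENALTY * k
    best = float("inf")
    for price in PRICES:
        q = p_complete(price)
        # Expectation over the number of tasks completed at this price.
        exp_cost = sum(
            binom_pmf(k, c, q) * (price * c + V(t - 1, k - c))
            for c in range(k + 1)
        )
        best = min(best, exp_cost)
    return best

def best_price(t, k):
    """Price to post now, given t steps to the deadline and k open tasks."""
    return min(PRICES, key=lambda price: sum(
        binom_pmf(k, c, p_complete(price)) * (price * c + V(t - 1, k - c))
        for c in range(k + 1)))

print(V(5, 20), best_price(5, 20))
```

Re-solving `best_price` at every step as tasks complete is what lets the price vary over time rather than being fixed up front.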
Sensitivity \cite{CD82,CDR86} and block sensitivity \cite{Nisan91} are two important complexity measures of Boolean functions. A longstanding open problem in decision tree complexity, the sensitivity versus block sensitivity question, proposed by Nisan and Szegedy \cite{Nisan94} in 1992, is whether these two complexity measures are polynomially related, i.e., whether $bs(f)=O(s(f)^{O(1)})$. We prove a new upper bound on block sensitivity in terms of sensitivity: $bs(f) \leq 2^{s(f)-1} s(f)$. Previously, the best upper bound on block sensitivity was $bs(f) \leq (\frac{e}{\sqrt{2\pi}}) e^{s(f)} \sqrt{s(f)}$, due to Kenyon and Kutin \cite{KK}. We also prove that if $\min\{s_0(f),s_1(f)\}$ is a constant, then sensitivity and block sensitivity are linearly related, i.e., $bs(f)=O(s(f))$.
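Complementing the sensitivity computation sketched above, here is a brute-force illustration (ours, not the paper's) of block sensitivity: at each input $x$ it enumerates every block whose flip changes $f(x)$ and finds a maximum packing of pairwise disjoint sensitive blocks. It is exponential-time and meant only to make the definition concrete.

```python
from itertools import combinations, product

def flip(x, block):
    """Return x with the coordinates in `block` flipped."""
    return tuple(1 - v if i in block else v for i, v in enumerate(x))

def block_sensitivity(f, n):
    """bs(f): max over inputs x of the largest number of pairwise
    disjoint blocks B (subsets of coordinates) with f(x^B) != f(x)."""
    def max_packing(blocks, used):
        # Largest set of mutually disjoint blocks, by exhaustive search.
        best = 0
        for i, B in enumerate(blocks):
            if not (B & used):
                best = max(best, 1 + max_packing(blocks[i + 1:], used | B))
        return best

    best = 0
    for x in product((0, 1), repeat=n):
        sensitive = [frozenset(B)
                     for r in range(1, n + 1)
                     for B in combinations(range(n), r)
                     if f(flip(x, B)) != f(x)]
        best = max(best, max_packing(sensitive, frozenset()))
    return best

MAJ3 = lambda x: int(sum(x) >= 2)
print(block_sensitivity(MAJ3, 3))  # 2, equal to s(MAJ3) for this simple function
```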