People are often reluctant to sell a house, or shares of stock, below the price at which they originally bought it. While this is generally not consistent with rational utility maximization, it does reflect two strong empirical regularities that are central to the behavioral science of human decision-making: a tendency to evaluate outcomes relative to a reference point determined by context (in this case the original purchase price), and the phenomenon of loss aversion in which people are particularly prone to avoid outcomes below the reference point. Here we explore the implications of reference points and loss aversion in optimal stopping problems, where people evaluate a sequence of options in one pass, either accepting an option and stopping the search or giving up on that option forever. The best option seen so far sets a reference point that shifts as the search progresses, and a biased decision-maker's utility incurs an additional penalty when they accept a later option that is below this reference point. We formulate and study a behaviorally well-motivated version of the optimal stopping problem that incorporates these notions of reference dependence and loss aversion. We obtain tight bounds on the performance of a biased agent in this model relative to the best option obtainable in retrospect (a type of prophet inequality for biased agents), as well as tight bounds on the ratio between the performance of a biased agent and the performance of a rational one. We further establish basic monotonicity results, and show an exponential gap between the performance of a biased agent in a stopping problem with respect to a worst-case versus a random order. As part of this, we establish fundamental differences between optimal stopping problems for rational versus biased agents, and these differences inform our analysis.
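For intuition, the toy simulation below contrasts such a biased agent with the prophet benchmark on uniformly random values. It is only a sketch: the penalized utility form, the penalty parameter, and the fixed-threshold stopping rule are our own illustrative assumptions, not the paper's model of the optimal biased policy.

import random

def biased_stopping(values, loss_penalty=2.0, threshold=0.6):
    # Hypothetical loss-averse agent: the reference point is the best value
    # seen so far, and an option below it incurs an extra penalty of
    # loss_penalty * (reference - value). The agent accepts the first option
    # whose penalized utility clears a fixed threshold.
    reference = float("-inf")
    for v in values:
        utility = v - loss_penalty * max(0.0, reference - v)
        if utility >= threshold:
            return v
        reference = max(reference, v)
    return values[-1]  # forced to take the last option if nothing was accepted

random.seed(0)
trials = 10_000
agent_total, prophet_total = 0.0, 0.0
for _ in range(trials):
    vals = [random.random() for _ in range(10)]
    agent_total += biased_stopping(vals)
    prophet_total += max(vals)
print("biased agent average :", agent_total / trials)
print("prophet average      :", prophet_total / trials)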
We present the first nontrivial approximation algorithm for the bottleneck asymmetric traveling salesman problem. Given an asymmetric metric cost between n vertices, the problem is to find a Hamiltonian cycle that minimizes its bottleneck (or maximum-length edge) cost. We achieve an O(log n / log log n) approximation performance guarantee by giving a novel algorithmic technique to shortcut Eulerian circuits while bounding the lengths of the shortcuts needed. This allows us to build on a related result of Asadpour, Goemans, Mądry, Oveis Gharan, and Saberi to obtain this guarantee. Furthermore, we show how our technique yields stronger approximation bounds in some cases, such as the bounded orientable genus case studied by Oveis Gharan and Saberi. We also explore the possibility of further improvement upon our main result through a comparison to the symmetric counterpart of the problem.
We investigate revenue guarantees for auction mechanisms in a model where a distribution is specified for each bidder, but only some of the distributions are correct. The subset of bidders whose distribution is correctly specified (henceforth, the green bidders) is unknown to the auctioneer. The question we address is whether the auctioneer can run a mechanism that is guaranteed to obtain at least as much revenue, in expectation, as would be obtained by running an optimal mechanism on the green bidders only. For single-parameter feasibility environments, we find that the answer depends on the feasibility constraint. For matroid environments, running the optimal mechanism using all the specified distributions (including the incorrect ones) guarantees at least as much revenue in expectation as running the optimal mechanism on the green bidders. For any feasibility constraint that is not a matroid, there exists a way of setting the specified distributions and the true distributions such that the opposite conclusion holds.
Martin Weitzman's Pandora's problem furnishes the mathematical basis for optimal search theory in economics. Nearly 40 years later, Laura Doval introduced a version of the problem in which the searcher is not obligated to pay the cost of inspecting an alternative's value before selecting it. Unlike the original Pandora's problem, the version with nonobligatory inspection cannot be solved optimally by any simple ranking-based policy, and it is unknown whether there exists any polynomial-time algorithm to compute the optimal policy. This motivates the study of approximately optimal policies that are simple and computationally efficient. In this work we provide the first non-trivial approximation guarantees for this problem. We introduce a family of committing policies such that it is computationally easy to find and implement the optimal committing policy. We prove that the optimal committing policy is guaranteed to approximate the fully optimal policy within a $1-\frac{1}{e} = 0.63\ldots$ factor, and for the special case of two boxes we improve this factor to 4/5 and show that this approximation is tight for the class of committing policies.
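As background for Weitzman's original (obligatory-inspection) problem referenced above, each box's reservation value is the $\sigma$ solving $c = \mathbb{E}[(X-\sigma)^+]$, where $X$ is the box's prize and $c$ its inspection cost. The sketch below estimates this index numerically; the uniform prize distribution and the cost are made-up inputs for illustration, and this is not the committing-policy construction itself.

import random
from statistics import mean

def reservation_value(sample_prize, cost, n_samples=20_000, lo=0.0, hi=100.0):
    # Weitzman's index: the sigma at which the expected inspection surplus
    # E[max(X - sigma, 0)] equals the inspection cost, found here by
    # Monte Carlo estimation plus bisection.
    draws = [sample_prize() for _ in range(n_samples)]
    def surplus(sigma):
        return mean(max(x - sigma, 0.0) for x in draws)
    for _ in range(40):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if surplus(mid) > cost else (lo, mid)
    return (lo + hi) / 2

# A single illustrative box: prize ~ Uniform[0, 10], inspection cost 1.
# Analytically the index solves (10 - sigma)^2 / 20 = 1, i.e. sigma ~ 5.53.
random.seed(0)
print("reservation value:", round(reservation_value(lambda: random.uniform(0, 10), cost=1.0), 2))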
Algorithm configuration methods optimize the performance of a parameterized heuristic algorithm on a given distribution of problem instances. Recent work introduced an algorithm configuration procedure (Structured Procrastination) that provably achieves near-optimal performance with high probability and with nearly minimal runtime in the worst case. It also offers an $\textit{anytime}$ property: it keeps tightening its optimality guarantees the longer it is run. Unfortunately, Structured Procrastination is not $\textit{adaptive}$ to characteristics of the parameterized algorithm: it treats every input like the worst case. Follow-up work (LeapsAndBounds) achieves adaptivity but trades away the anytime property. This paper introduces a new algorithm, Structured Procrastination with Confidence, that preserves the near-optimality and anytime properties of Structured Procrastination while adding adaptivity. In particular, the new algorithm will perform dramatically faster in settings where many algorithm configurations perform poorly. We show empirically both that such settings arise frequently in practice and that the anytime property is useful for finding good configurations quickly.
There are many settings in which a principal performs a task by delegating it to an agent, who searches over possible solutions and proposes one to the principal. This describes many aspects of the workflow within organizations, as well as many of the activities undertaken by regulatory bodies, who often obtain relevant information from the parties being regulated through a process of delegation. A fundamental tension underlying delegation is the fact that the agent's interests will typically differ -- potentially significantly -- from the interests of the principal, and as a result the agent may propose solutions based on their own incentives that are inefficient for the principal. A basic problem, therefore, is to design mechanisms by which the principal can constrain the set of proposals they are willing to accept from the agent, to ensure a certain level of quality for the principal from the proposed solution. In this work, we investigate how much the principal loses -- quantitatively, in terms of the objective they are trying to optimize -- when they delegate to an agent. We develop a methodology for bounding this loss of efficiency, and show that in a very general model of delegation, there is a family of mechanisms achieving a universal bound on the ratio between the quality of the solution obtained through delegation and the quality the principal could obtain in an idealized benchmark where they searched for a solution themselves. Moreover, it is possible to achieve such bounds through mechanisms with a natural threshold structure, which are thus structurally simpler than the optimal mechanisms typically considered in the literature on delegation. At the heart of our framework is an unexpected connection between delegation and the analysis of prophet inequalities, which we leverage to provide bounds on the behavior of our delegation mechanisms.
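The prophet-inequality machinery behind such threshold structures can be illustrated with one classical single-threshold rule (a textbook sketch, not the paper's delegation mechanism itself): for independent non-negative values, setting the threshold to half the expected maximum and stopping at the first value above it earns, in expectation, at least half of the prophet's value.

import random

def threshold_stop(values, threshold):
    # Accept the first value meeting the threshold; get 0 if none does.
    return next((v for v in values if v >= threshold), 0.0)

random.seed(1)
n, trials = 5, 20_000
rounds = [[random.expovariate(1.0) for _ in range(n)] for _ in range(trials)]
expected_max = sum(max(r) for r in rounds) / trials   # the prophet's benchmark
T = expected_max / 2                                  # the classical mean-based threshold
gambler = sum(threshold_stop(r, T) for r in rounds) / trials
print(f"prophet E[max]          = {expected_max:.3f}")
print(f"threshold-rule expected = {gambler:.3f}  (guaranteed >= {expected_max / 2:.3f})")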
Stochastic gradient descent (SGD) is widely used in machine learning. Although it is commonly viewed as a fast but not accurate version of gradient descent (GD), it always finds better solutions than GD for modern neural networks. In order to understand this phenomenon, we take an alternative view that SGD is working on the convolved (thus smoothed) version of the loss function. We show that, even if the function $f$ has many bad local minima or saddle points, as long as for every point $x$, the weighted average of the gradients of its neighborhoods is one-point convex with respect to the desired solution $x^*$, SGD will get close to, and then stay around, $x^*$ with constant probability. More specifically, SGD will not get stuck at sharp local minima with small diameters, as long as the neighborhoods of these regions contain enough gradient information. The neighborhood size is controlled by the step size and the gradient noise. Our result identifies a set of functions on which SGD provably works, which is much larger than the set of convex functions. Empirically, we observe that the loss surface of neural networks enjoys nice one-point convexity properties locally, therefore our theorem helps explain why SGD works so well for neural networks.
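A toy 1-D illustration of the smoothed-loss view (the loss function, noise scale, and step size below are our own choices for the demo, not from the paper): gradient descent on the raw loss gets trapped in a sharp narrow dip, while descending the Gaussian-smoothed version of the same loss, which is how SGD is argued to behave, reaches the wide minimum.

import math
import random

# Toy loss: a wide bowl with its minimum at x = 0 plus a sharp, narrow dip near x = 1.5.
def loss(x):
    return 0.5 * x * x - math.exp(-(x - 1.5) ** 2 / 0.02)

def grad(x, eps=1e-5):
    # Central-difference gradient of the toy loss.
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

def descend(x0, gradient, lr=0.01, steps=3000):
    x = x0
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

# Smoothed gradient: average the gradient over Gaussian perturbations of x,
# mimicking descent on the convolved (smoothed) loss. The noise scale 0.3 is arbitrary.
rng = random.Random(0)
offsets = [rng.gauss(0.0, 0.3) for _ in range(500)]
def smoothed_grad(x):
    return sum(grad(x + d) for d in offsets) / len(offsets)

print("GD on the raw loss      ends at x =", round(descend(1.5, grad), 3))          # trapped near the sharp dip
print("GD on the smoothed loss ends at x =", round(descend(1.5, smoothed_grad), 3)) # reaches the wide minimum near 0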
We derive upper and lower bounds on the degree $d$ for which the Lovász $\vartheta$ function, or equivalently sum-of-squares proofs with degree two, can refute the existence of a $k$-coloring in random regular graphs $G_{n,d}$. We show that this type of refutation fails well above the $k$-colorability transition, and in particular everywhere below the Kesten-Stigum threshold. This is consistent with the conjecture that refuting $k$-colorability, or distinguishing $G_{n,d}$ from the planted coloring model, is hard in this region. Our results also apply to the disassortative case of the stochastic block model, adding evidence to the conjecture that there is a regime where community detection is computationally hard even though it is information-theoretically possible. Using orthogonal polynomials, we also provide explicit upper bounds on $\vartheta(\overline{G})$ for regular graphs of a given girth, which may be of independent interest.
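For concreteness, this style of refutation can be computed from the standard SDP for the Lovász $\vartheta$ function. The sketch below assumes the cvxpy package with the SCS solver is available, and uses a small Erdős–Rényi graph in place of the random regular graphs studied here; by the sandwich inequality $\vartheta(\overline{G}) \le \chi(G)$, any value above $k$ certifies that $G$ is not $k$-colorable.

import itertools
import random
import cvxpy as cp  # assumed available, together with the SCS solver

def lovasz_theta(n, edges):
    # Standard SDP for the Lovasz theta function of a graph on n vertices:
    # maximize sum(B) subject to B PSD, trace(B) = 1, and B_ij = 0 on every edge.
    B = cp.Variable((n, n), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    constraints += [B[i, j] == 0 for i, j in edges]
    problem = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
    problem.solve(solver=cp.SCS)
    return problem.value

random.seed(0)
n, p, k = 30, 0.5, 3
pairs = list(itertools.combinations(range(n), 2))
edges = {e for e in pairs if random.random() < p}
complement_edges = [e for e in pairs if e not in edges]
theta_bar = lovasz_theta(n, complement_edges)     # theta of the complement graph
verdict = "refutes" if theta_bar > k + 1e-6 else "cannot refute"
print(f"theta(G-bar) = {theta_bar:.2f} -> {verdict} {k}-colorability")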
We provide a polynomial time reduction from Bayesian incentive compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism's output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on $\textit{Bernoulli factories}$. In a Bernoulli factory problem, one is given a function mapping the bias of an input coin to that of an output coin, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. This is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline et al. (2015) exactly incentive compatible.
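Two classical Bernoulli factories make the computational model concrete (standard textbook examples, not the specific factories used in this reduction): exact simulation of a coin with bias $p^2$, and von Neumann's simulation of a fair coin, both given only sample access to a coin of unknown bias $p$.

import random

def input_coin(p=0.3):
    # Sample access to a coin of unknown bias p (p is shown here only for the demo).
    return random.random() < p

def squared_coin(coin):
    # Exact factory for f(p) = p^2: flip twice, output 1 iff both flips are heads.
    return coin() and coin()

def fair_coin(coin):
    # Von Neumann's factory for f(p) = 1/2: flip twice until the flips differ,
    # then report the first of the two.
    while True:
        a, b = coin(), coin()
        if a != b:
            return a

random.seed(0)
trials = 200_000
print("empirical bias of squared coin:", sum(squared_coin(input_coin) for _ in range(trials)) / trials)  # ~0.09
print("empirical bias of fair coin   :", sum(fair_coin(input_coin) for _ in range(trials)) / trials)     # ~0.50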
A tri-colored sum-free set in an abelian group $H$ is a collection of ordered triples in $H^3$, $\{(a_i,b_i,c_i)\}_{i=1}^m$, such that the equation $a_i+b_j+c_k=0$ holds if and only if $i=j=k$. Using a variant of the lemma introduced by Croot, Lev, and Pach in their breakthrough work on arithmetic-progression-free sets, we prove that the size of any tri-colored sum-free set in $\mathbb{F}_2^n$ is bounded above by $6\binom{n}{\lfloor n/3\rfloor}$. This upper bound is tight, up to a factor subexponential in $n$: there exist tri-colored sum-free sets in $\mathbb{F}_2^n$ of size greater than $\binom{n}{\lfloor n/3\rfloor}\cdot 2^{-\sqrt{16n/3}}$ for all sufficiently large $n$.
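To make the definition concrete (a brute-force toy, unrelated to the Croot–Lev–Pach technique), one can greedily grow and verify a tri-colored sum-free set in a small $\mathbb{F}_2^n$, with group addition implemented as bitwise XOR and group elements encoded as n-bit integers.

import random
from itertools import product
from math import comb

def is_tricolored_sumfree(triples):
    # Check the defining condition in F_2^n (group addition = bitwise XOR):
    # a_i + b_j + c_k = 0 holds exactly when i = j = k.
    m = len(triples)
    for i, j, k in product(range(m), repeat=3):
        if ((triples[i][0] ^ triples[j][1] ^ triples[k][2]) == 0) != (i == j == k):
            return False
    return True

def greedy_construction(n, attempts=400, seed=0):
    # Randomly grow a tri-colored sum-free set, setting c = a + b so that
    # the required i = j = k case always holds for each new triple.
    rng = random.Random(seed)
    triples = []
    for _ in range(attempts):
        a, b = rng.randrange(2 ** n), rng.randrange(2 ** n)
        candidate = triples + [(a, b, a ^ b)]
        if is_tricolored_sumfree(candidate):
            triples = candidate
    return triples

n = 5
found = greedy_construction(n)
print(f"found a tri-colored sum-free set of size {len(found)} in F_2^{n}")
print(f"upper bound from the theorem: 6*C({n},{n // 3}) =", 6 * comb(n, n // 3))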