
Buying Data Over Time: Approximately Optimal Strategies for Dynamic Data-Driven Decisions

Submitted by: Brendan Lucier
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We consider a model where an agent has a repeated decision to make and wishes to maximize their total payoff. Payoffs are influenced by the action taken by the agent, but also by an unknown state of the world that evolves over time. Before choosing an action each round, the agent can purchase noisy samples about the state of the world. The agent has a budget to spend on these samples, and has flexibility in deciding how to spread that budget across rounds. We investigate the problem of choosing a sampling algorithm that optimizes total expected payoff. For example: is it better to buy samples steadily over time, or to buy samples in batches? We solve for the optimal policy, and show that it is a natural instantiation of the latter. Under a more general model that includes per-round fixed costs, we prove that a variation on this batching policy is a 2-approximation.
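For concreteness, here is a minimal Python simulation harness for comparing sampling schedules. The Gaussian random-walk state, the squared-loss payoff, the Kalman-style belief updates, and all parameter values are illustrative assumptions for this sketch, not the paper's exact model.

import numpy as np

# Minimal harness for comparing sampling schedules under a toy model:
# a hidden state follows a Gaussian random walk, purchased samples are
# noisy observations of it, and the per-round payoff is the negative
# squared error of acting on the posterior mean.

rng = np.random.default_rng(0)
T = 200              # rounds
BUDGET = 50          # total samples the agent may purchase
SIGMA_STATE = 1.0    # per-round drift of the hidden state
SIGMA_NOISE = 2.0    # noise on each purchased sample

def total_payoff(schedule):
    """schedule[t] = number of samples bought in round t."""
    theta = 0.0                 # hidden state of the world
    mean, var = 0.0, 1.0        # Gaussian belief about theta
    payoff = 0.0
    for t in range(T):
        theta += rng.normal(0.0, SIGMA_STATE)   # state evolves
        var += SIGMA_STATE ** 2                 # belief diffuses accordingly
        for _ in range(schedule[t]):            # Bayesian update per sample
            x = theta + rng.normal(0.0, SIGMA_NOISE)
            gain = var / (var + SIGMA_NOISE ** 2)
            mean, var = mean + gain * (x - mean), (1.0 - gain) * var
        payoff -= (mean - theta) ** 2           # act on the posterior mean
    return payoff

# "Steady": single samples at evenly spaced rounds.
steady = np.zeros(T, dtype=int)
steady[np.linspace(0, T - 1, BUDGET).astype(int)] = 1

# "Batch": the same budget spent in a few larger bursts (burst count arbitrary).
batch = np.zeros(T, dtype=int)
batch[np.linspace(0, T - 1, 5).astype(int)] = BUDGET // 5

trials = 500
print("steady:", np.mean([total_payoff(steady) for _ in range(trials)]))
print("batch: ", np.mean([total_payoff(batch) for _ in range(trials)]))

Which schedule wins in this toy model depends on the drift and noise parameters and on the payoff shape; the paper's contribution is to show that, in its model, the optimal policy takes a batched form.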




Read also

We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy; i.e., they not only account for the payments they expect to receive from the mechanism, but also their privacy costs from any information revealed about them by the mechanism's outcome (the computed statistic as well as the payments) when determining their utilities. How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information? We design a differentially private peer-prediction mechanism that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on bits $b_i$, but does not need full knowledge of the marginal distribution on the costs $c_i$, instead requiring only an approximate upper bound. Our mechanism guarantees $\epsilon$-differential privacy to each agent $i$ against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the $n-1$ other agents $j \neq i$. Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent, the cost of running the survey goes to $0$ as the number of agents diverges.
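The differential-privacy ingredient of such a survey can be illustrated in isolation with the standard Laplace mechanism on a count of private bits. This is only a sketch of that one ingredient: the peer-prediction payments are omitted, and the bit distribution and epsilon value below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def private_count(bits, epsilon):
    # Changing one agent's bit changes the count by at most 1 (sensitivity 1),
    # so adding Laplace noise of scale 1/epsilon gives epsilon-differential
    # privacy for the released count.
    return bits.sum() + rng.laplace(scale=1.0 / epsilon)

bits = rng.integers(0, 2, size=1000)   # hypothetical reported bits b_i
print("true count:", bits.sum())
print("eps=0.5 private estimate:", private_count(bits, 0.5))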
We identify the first static credible mechanism for multi-item additive auctions that achieves a constant factor of the optimal revenue. This is one instance of a more general framework for designing two-part tariff auctions, adapting the duality framework of Cai et al. [CDW16]. Given a (not necessarily incentive compatible) auction format $A$ satisfying certain technical conditions, our framework augments the auction with a personalized entry fee for each bidder, which must be paid before the auction can be accessed. These entry fees depend only on the prior distribution of bidder types, and in particular are independent of realized bids. Our framework can be used with many common auction formats, such as simultaneous first-price, simultaneous second-price, and simultaneous all-pay auctions. If all-pay auctions are used, we prove that the resulting mechanism is credible in the sense that the auctioneer cannot benefit by deviating from the stated mechanism after observing agent bids. If second-price auctions are used, we obtain a truthful $O(1)$-approximate mechanism with fixed entry fees that are amenable to tuning via online learning techniques. Our results for first-price and all-pay auctions are the first revenue guarantees for non-truthful mechanisms in multi-dimensional environments, an open question in the literature [RST17].
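A hedged sketch of the entry-fee idea: estimate a bidder's expected surplus in the underlying auction from the type prior alone, by Monte Carlo, and charge a fraction of it up front. The i.i.d. uniform prior, the 1/2 fraction, and the simultaneous second-price format here are illustrative assumptions; the paper derives its fees through a duality framework rather than this heuristic.

import numpy as np

rng = np.random.default_rng(2)
N_BIDDERS, M_ITEMS, SAMPLES = 3, 5, 20000

def expected_surplus_of_bidder_0():
    # Additive values drawn i.i.d. U[0,1]; each item sold by second price.
    vals = rng.uniform(size=(SAMPLES, N_BIDDERS, M_ITEMS))
    others = vals[:, 1:, :].max(axis=1)              # highest competing value
    wins = vals[:, 0, :] >= others                   # items bidder 0 wins
    paid = np.where(wins, others, 0.0)               # second-price payment
    return ((vals[:, 0, :] - paid) * wins).sum(axis=1).mean()

# Fee depends only on the prior, never on realized bids.
entry_fee = 0.5 * expected_surplus_of_bidder_0()
print("personalized entry fee for bidder 0:", entry_fee)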
Samuel N. Cohen (2016)
In stochastic decision problems, one often wants to estimate the underlying probability measure statistically, and then to use this estimate as a basis for decisions. We shall consider how the uncertainty in this estimation can be explicitly and consistently incorporated in the valuation of decisions, using the theory of nonlinear expectations.
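As a toy numeric illustration of the idea: rather than valuing a decision under a single point estimate of the measure, value it as the worst case over a statistically plausible set of measures, a simple instance of a nonlinear expectation. The Bernoulli model and the rough 95% confidence interval below are illustrative assumptions, not the paper's construction.

import numpy as np
from math import sqrt

data = np.random.default_rng(3).integers(0, 2, size=100)   # observed coin flips
p_hat = data.mean()
half_width = 1.96 * sqrt(p_hat * (1 - p_hat) / len(data))  # rough 95% CI
p_set = (max(0.0, p_hat - half_width), min(1.0, p_hat + half_width))

def worst_case_value(payoff_heads, payoff_tails):
    # inf over p in the plausible set of E_p[payoff]; the expectation is
    # linear in p, so the infimum is attained at an interval endpoint.
    return min(p * payoff_heads + (1 - p) * payoff_tails for p in p_set)

print("plausible p:", p_set)
print("worst-case value of a +1/-1 bet on heads:", worst_case_value(1.0, -1.0))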
We study the design of a decentralized two-sided matching market in which agents' search is guided by the platform. There are finitely many agent types, each with (potentially random) preferences drawn from known type-specific distributions. Equipped with knowledge of these distributions, the platform guides the search process by determining the meeting rate between each pair of types from the two sides. Focusing on symmetric pairwise preferences in a continuum model, we first characterize the unique stationary equilibrium that arises given a feasible set of meeting rates. We then introduce the platform's optimal directed search problem, which involves optimizing meeting rates to maximize equilibrium social welfare. We show that incentive issues arising from congestion and cannibalization make the design problem fairly intricate. Nonetheless, we develop an efficiently computable search design whose corresponding equilibrium achieves at least 1/4 of the social welfare of the optimal design. In fact, our construction always recovers at least 1/4 of the first-best social welfare, where agents' incentives are disregarded. Our directed search design is simple and easy to implement, as its corresponding bipartite graph consists of disjoint stars. Furthermore, our design implies that the platform can substantially limit choice and yet induce an equilibrium with approximately optimal welfare. Finally, we show that approximation is likely the best we can hope for, by establishing that the problem of designing optimal directed search is NP-hard to approximate beyond a certain constant factor.
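The disjoint-stars structure can be made concrete with a small sketch: each hub type on one side gets its own exclusive set of leaf types on the other side, so no leaf is shared between hubs. The greedy assignment rule below is a hypothetical stand-in for illustration, not the paper's welfare-approximating construction.

import numpy as np

rng = np.random.default_rng(5)
n_left, n_right = 4, 8
value = rng.uniform(size=(n_left, n_right))   # hypothetical match values

# Assign each right-side type to the single left-side hub that values it
# most; the resulting meeting graph is a union of disjoint stars.
stars = {hub: [] for hub in range(n_left)}
for leaf in range(n_right):
    stars[int(value[:, leaf].argmax())].append(leaf)

# Meeting rates supported only on star edges (disjoint by construction).
rates = np.zeros((n_left, n_right))
for hub, leaves in stars.items():
    for leaf in leaves:
        rates[hub, leaf] = 1.0
print(stars)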
We consider a revenue-maximizing seller with $m$ heterogeneous items and a single buyer whose valuation $v$ for the items may exhibit both substitutes (i.e., for some $S, T$, $v(S \cup T) < v(S) + v(T)$) and complements (i.e., for some $S, T$, $v(S \cup T) > v(S) + v(T)$). We show that the mechanism first proposed by Babaioff et al. [2014] - the better of selling the items separately and bundling them together - guarantees a $\Theta(1/d)$ fraction of the optimal revenue, where $d$ is a measure of the degree of complementarity. Note that this is the first approximately optimal mechanism for a buyer whose valuation exhibits any kind of complementarity, and it extends the work of Rubinstein and Weinberg [2015], which proved that the same simple mechanism achieves a constant-factor approximation when buyer valuations are subadditive, the most general class of complement-free valuations. Our proof is enabled by the recent duality framework developed in Cai et al. [2016], which we use to obtain a bound on the optimal revenue in this setting. Our main technical contributions are specialized to handle the intricacies of settings with complements, and include an algorithm for partitioning edges in a hypergraph. Even nailing down the right model and notion of degree of complementarity needed to obtain meaningful results is of interest, as the natural extensions of previous definitions provably fail.
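The "better of selling separately or as a grand bundle" mechanism is easy to sketch for a single additive buyer, with revenues estimated from samples of the value distribution. The i.i.d. uniform values below are an illustrative assumption; the paper's point is that this simple mechanism remains approximately optimal for far more general valuations with complements.

import numpy as np

rng = np.random.default_rng(4)
M_ITEMS, SAMPLES = 4, 50000
vals = rng.uniform(size=(SAMPLES, M_ITEMS))   # sampled item values

def best_price_revenue(v):
    # Optimal posted price against the empirical distribution: some observed
    # value is an optimal price, and at the k-th highest value at least k of
    # the n samples buy, so revenue there is v_(k) * k / n.
    v = np.sort(v)[::-1]
    return (v * np.arange(1, len(v) + 1) / len(v)).max()

srev = sum(best_price_revenue(vals[:, j]) for j in range(M_ITEMS))  # separately
brev = best_price_revenue(vals.sum(axis=1))                         # grand bundle
print("SRev ~", srev, " BRev ~", brev, " mechanism earns ~", max(srev, brev))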