In the budgeted learning problem, we are allowed to experiment on a set of alternatives (given a fixed experimentation budget) with the goal of picking a single alternative with the largest possible expected payoff. Approximation algorithms for this problem were developed by Guha and Munagala by rounding a linear program that couples the various alternatives together. In this paper we present an index for this problem, which we call the ratio index, that also guarantees a constant factor approximation. Index-based policies have the advantage that a single number (i.e., the index) can be computed for each alternative irrespective of all other alternatives, and the alternative with the highest index is experimented upon. This is analogous to the famous Gittins index for the discounted multi-armed bandit problem. The ratio index has several interesting structural properties. First, we show that it can be computed in strongly polynomial time. Second, we show that with the appropriate discount factor, the Gittins index and our ratio index are constant factor approximations of each other, and hence the Gittins index also gives a constant factor approximation to the budgeted learning problem. Finally, we show that the ratio index can be used to create an index-based policy that achieves an O(1)-approximation for the finite horizon version of the multi-armed bandit problem. Moreover, the policy does not require any knowledge of the horizon (whereas we compare its performance against an optimal strategy that is aware of the horizon). This yields the following surprising result: there is an index-based policy that achieves an O(1)-approximation for the multi-armed bandit problem, oblivious to the underlying discount factor.
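To make the index-based structure concrete, here is a minimal Python sketch (ours, not the paper's construction): a stand-in compute_index function scores each alternative using only that alternative's own state, and each unit of budget is spent experimenting on the highest-index alternative. The Beta-posterior-mean index used here is an illustrative assumption; the ratio index itself is defined differently.

    import random

    def compute_index(arm):
        # Illustrative stand-in for the ratio index: any score computed
        # from this arm's state alone. Here, the posterior mean of a
        # Beta(successes + 1, failures + 1) belief about the payoff rate.
        s, f = arm["successes"], arm["failures"]
        return (s + 1) / (s + f + 2)

    def budgeted_index_policy(arms, budget):
        # Spend the experimentation budget one trial at a time on the
        # alternative with the highest index, then pick the best arm.
        for _ in range(budget):
            best = max(arms, key=compute_index)
            if random.random() < best["true_p"]:  # simulated experiment
                best["successes"] += 1
            else:
                best["failures"] += 1
        return max(arms, key=compute_index)

    arms = [{"true_p": p, "successes": 0, "failures": 0}
            for p in (0.3, 0.5, 0.8)]
    print(budgeted_index_policy(arms, budget=50))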
The programming paradigm Map-Reduce and its main open-source implementation, Hadoop, have had an enormous impact on large scale data processing. Our goal in this expository writeup is two-fold: first, we want to present some complexity measures that allow us to talk about Map-Reduce algorithms formally, and second, we want to point out why this model is actually different from other models of parallel programming, most notably the PRAM (Parallel Random Access Machine) model. We are looking for complexity measures that are detailed enough to make fine-grained distinctions between different algorithms, but which also abstract away many of the implementation details.
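For readers unfamiliar with the paradigm, the following self-contained Python sketch (ours, not from the write-up) expresses the canonical word-count computation as map and reduce functions; a real Hadoop job would implement the same two functions against the Hadoop API, with the framework performing the shuffle that is simulated by sorting here.

    from itertools import groupby
    from operator import itemgetter

    def map_fn(document):
        # Mapper: emit a (word, 1) pair for every word in the document.
        for word in document.split():
            yield (word, 1)

    def reduce_fn(word, counts):
        # Reducer: all counts for one word arrive together; sum them.
        yield (word, sum(counts))

    def mapreduce(documents):
        # Sorting the intermediate pairs by key simulates the shuffle phase.
        pairs = sorted(kv for doc in documents for kv in map_fn(doc))
        return [out
                for key, group in groupby(pairs, key=itemgetter(0))
                for out in reduce_fn(key, (v for _, v in group))]

    print(mapreduce(["the quick fox", "the lazy dog", "the fox"]))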
We consider the well-studied problem of finding a perfect matching in $d$-regular bipartite graphs with $2n$ vertices and $m = nd$ edges. While the best-known algorithm for general bipartite graphs (due to Hopcroft and Karp) takes $O(m\sqrt{n})$ time, in regular bipartite graphs a perfect matching is known to be computable in $O(m)$ time. Very recently, the $O(m)$ bound was improved to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$ expected time, an expression that is bounded by $\tilde{O}(n^{1.75})$. In this paper, we further improve this result by giving an $O(\min\{m, \frac{n^2\ln^3 n}{d}\})$ expected time algorithm for finding a perfect matching in regular bipartite graphs; as a function of $n$ alone, the algorithm takes expected time $O((n\ln n)^{1.5})$. To obtain this result, we design and analyze a two-stage sampling scheme that reduces the problem of finding a perfect matching in a regular bipartite graph to the same problem on a subsampled bipartite graph with $O(n\ln n)$ edges that has a perfect matching with high probability. The matching is then recovered using the Hopcroft-Karp algorithm. While the standard analysis of Hopcroft-Karp gives an $\tilde{O}(n^{1.5})$ running time, we present a tighter analysis for our special case that results in the stronger $\tilde{O}(\min\{m, \frac{n^2}{d}\})$ time mentioned earlier. Our proof of correctness of this sampling scheme uses a new correspondence theorem, which we prove, between cuts and Hall's theorem ``witnesses'' for a perfect matching in a bipartite graph. We believe this theorem may be of independent interest; as another example application, we show that a perfect matching in the support of an $n \times n$ doubly stochastic matrix with $m$ non-zero entries can be found in expected time $\tilde{O}(m + n^{1.5})$.
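The two-stage sampling scheme is the technical core of the paper and is not reproduced here; the Python sketch below only illustrates the overall shape of the reduction: thin the graph down to roughly $n\ln n$ edges, then recover the matching with Hopcroft-Karp (via networkx). The single-stage uniform subsampling and the constant 4 are placeholder assumptions of ours, not the paper's scheme.

    import math
    import random

    import networkx as nx

    def sample_and_match(left, right, edges):
        # Placeholder for the paper's two-stage scheme: keep each edge
        # independently with probability ~ (c * n * ln n) / m, so that
        # roughly c * n * ln n edges survive (c = 4 chosen arbitrarily).
        n, m = len(left), len(edges)
        p = min(1.0, 4 * n * math.log(n) / m)
        sampled = [e for e in edges if random.random() < p]
        G = nx.Graph()
        G.add_nodes_from(left, bipartite=0)
        G.add_nodes_from(right, bipartite=1)
        G.add_edges_from(sampled)
        # Recover the matching on the sparse sampled graph with Hopcroft-Karp;
        # per the paper, it is perfect with high probability.
        return nx.bipartite.hopcroft_karp_matching(G, top_nodes=left)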
Search auctions have become a dominant source of revenue generation on the Internet. Such auctions have typically used per-click bidding and pricing. We propose the use of hybrid auctions where an advertiser can make a per-impression as well as a per-click bid, and the auctioneer then chooses one of the two as the pricing mechanism. We assume that the advertiser and the auctioneer both have separate beliefs (called priors) on the click-probability of an advertisement. We first prove that the hybrid auction is truthful, assuming that the advertisers are risk-neutral. We then show that this auction is superior to the existing per-click auction in multiple ways: 1) It takes into account the risk characteristics of the advertisers. 2) For obscure keywords, the auctioneer is unlikely to have a very sharp prior on the click-probabilities. In such situations, the hybrid auction can result in significantly higher revenue. 3) An advertiser who believes that its click-probability is much higher than the auctioneer's estimate can use per-impression bids to correct the auctioneer's prior without incurring any extra cost. 4) The hybrid auction can allow the advertiser and auctioneer to implement complex dynamic programming strategies. As Internet commerce matures, we need more sophisticated pricing models to exploit all the information held by each of the participants. We believe that hybrid auctions could be an important step in this direction.
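As a minimal sketch of one plausible hybrid scoring rule (our illustrative assumption, not the paper's mechanism): put both bids on a common per-impression scale using the auctioneer's prior click-probability, and let the larger of the two determine both the ad's rank and which pricing mechanism applies.

    def effective_bid(ad, prior_ctr):
        # Convert the per-click bid to per-impression terms using the
        # auctioneer's prior click-probability; the larger value decides
        # the ad's score and the pricing mechanism it is charged under.
        per_click_value = prior_ctr * ad["per_click_bid"]
        if ad["per_impression_bid"] >= per_click_value:
            return ad["per_impression_bid"], "per-impression"
        return per_click_value, "per-click"

    prior_ctr = 0.03  # auctioneer's prior on the click-probability
    ads = [
        {"name": "A", "per_impression_bid": 0.02, "per_click_bid": 1.00},
        {"name": "B", "per_impression_bid": 0.05, "per_click_bid": 0.50},
    ]
    for ad in ads:
        score, pricing = effective_bid(ad, prior_ctr)
        print(ad["name"], round(score, 3), pricing)
    # A scores 0.03 via per-click pricing; B scores 0.05 via per-impression.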
In this paper we further investigate the well-studied problem of finding a perfect matching in a regular bipartite graph. The first non-trivial algorithm, with running time $O(mn)$, dates back to K\H{o}nig's work in 1916 (here $m=nd$ is the number of edges in the graph, $2n$ is the number of vertices, and $d$ is the degree of each node). The currently most efficient algorithm takes time $O(m)$, and is due to Cole, Ost, and Schirra. We improve this running time to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$; this minimum can never be larger than $O(n^{1.75}\sqrt{\ln n})$. We obtain this improvement by proving a uniform sampling theorem: if we sample each edge in a $d$-regular bipartite graph independently with probability $p = O(\frac{n\ln n}{d^2})$, then the resulting graph has a perfect matching with high probability. The proof involves a decomposition of the graph into pieces which are guaranteed to have many perfect matchings but do not have any small cuts. We then establish a correspondence between potential witnesses to the non-existence of a matching (after sampling) in any piece and cuts of comparable size in that same piece. Karger's sampling theorem for preserving cuts in a graph can now be adapted to prove our uniform sampling theorem for preserving perfect matchings. Using the $O(m\sqrt{n})$ algorithm (due to Hopcroft and Karp) for finding maximum matchings in bipartite graphs on the sampled graph then yields the stated running time. We also provide an infinite family of instances to show that our uniform sampling result is tight up to poly-logarithmic factors (in fact, up to $\ln^2 n$).
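Filling in the arithmetic behind the stated bound (our derivation, using only the quantities given above): sampling each of the $m = nd$ edges with probability $p = O(\frac{n\ln n}{d^2})$ leaves
\[
p \cdot m = O\!\left(\frac{n\ln n}{d^2}\right) \cdot nd = O\!\left(\frac{n^2\ln n}{d}\right)
\]
edges in expectation, so Hopcroft-Karp on the sampled graph runs in time
\[
O\!\left(\frac{n^2\ln n}{d} \cdot \sqrt{n}\right) = O\!\left(\frac{n^{2.5}\ln n}{d}\right);
\]
taking the better of this and the existing $O(m)$ algorithm gives the claimed $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$ running time.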
It was shown recently by Fakcharoenphol et al. that arbitrary finite metrics can be embedded into distributions over tree metrics with distortion O(log n). It is also known that this bound is tight since there are expander graphs which cannot be embedded into distributions over trees with better than Omega(log n) distortion. We show that this same lower bound holds for embeddings into distributions over any minor-excluded family. Given a family of graphs F which excludes minor M where |M|=k, we explicitly construct a family of graphs with treewidth-(k+1) which cannot be embedded into a distribution over F with better than Omega(log n) distortion. Thus, while these minor-excluded families of graphs are more expressive than trees, they do not provide asymptotically better approximations in general. An important corollary of this is that graphs of treewidth-k cannot be embedded into distributions over graphs of treewidth-(k-3) with distortion less than Omega(log n). We also extend a result of Alon et al. by showing that for any k, planar graphs cannot be embedded into distributions over treewidth-k graphs with better than Omega(log n) distortion.
We believe the Babcock--Leighton process of poloidal field generation to be the main source of irregularity in the solar cycle. The random nature of this process may make the poloidal field in one hemisphere stronger than that in the other hemisphere at the end of a cycle. We expect this to induce an asymmetry in the next sunspot cycle. We look for evidence of this in the observational data and then model it theoretically with our dynamo code. Since actual polar field measurements exist only from the 1970s, we use the polar faculae number data recorded by Sheeley (1991) as a proxy of the polar field and estimate the hemispheric asymmetry of the polar field at different solar minima during the major part of the twentieth century. This asymmetry is found to have a reasonable correlation with the asymmetry of the next cycle. We then run our dynamo code by feeding in information about this asymmetry at the successive minima and compare with observational data. We find that the theoretically computed asymmetries of different cycles compare favourably with the observational data, the correlation coefficient being 0.73. Due to the coupling between the two hemispheres, any hemispheric asymmetry tends to get attenuated with time. The hemispheric asymmetry of a cycle, whether from observational data or from theoretical calculation, statistically tends to be less than the asymmetry in the polar field (as inferred from the faculae data) at the preceding minimum. This reduction factor turns out to be 0.38 in the observational data and 0.60 in the theoretical simulation.
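As a purely illustrative Python sketch of the correlation analysis described above: the normalized asymmetry measure (N - S)/(N + S) is our assumption about how hemispheric asymmetry might be quantified, and all numbers below are hypothetical placeholders, not the paper's data.

    import numpy as np

    def asymmetry(north, south):
        # Assumed normalized hemispheric asymmetry: (N - S) / (N + S).
        return (north - south) / (north + south)

    # Hypothetical polar-faculae counts at four successive minima and
    # hypothetical strengths of the following cycles (illustrative only).
    faculae_N = np.array([38.0, 52.0, 41.0, 60.0])
    faculae_S = np.array([45.0, 40.0, 50.0, 48.0])
    cycle_N = np.array([70.0, 95.0, 80.0, 110.0])
    cycle_S = np.array([82.0, 78.0, 92.0, 90.0])

    polar_asym = asymmetry(faculae_N, faculae_S)
    cycle_asym = asymmetry(cycle_N, cycle_S)
    r = np.corrcoef(polar_asym, cycle_asym)[0, 1]  # Pearson correlation
    print(round(r, 2))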