
The angel wins

Posted by: Peter Gacs
Publication date: 2007
Language: English
Author: Peter Gacs





The angel-devil game is played on an infinite two-dimensional chessboard. The squares of the board are all white at the beginning. The players, called angel and devil, take turns. When it is the devil's turn, he can turn a square black. The angel always stays on a white square, and when it is her turn she can fly a distance of at most J steps (each of which can be horizontal, vertical or diagonal) to a new white square, where J is a constant. The devil wins if the angel does not find any more white squares to land on. The result of the paper is that if J is sufficiently large then the angel has a strategy such that the devil will never capture her. This deceptively easy-sounding result had been a conjecture, surprisingly, for about thirty years. Several other independent solutions appeared simultaneously, some of which prove that J=2 is sufficient (see the Wikipedia article on the angel problem). Still, it is hoped that the hierarchical solution presented here may prove useful for some generalizations.
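For concreteness, here is a minimal Python sketch of the rules as stated above; the class and its set-of-black-squares representation are illustrative assumptions, not anything from the paper:

```python
# A minimal sketch of the game state as the rules above describe it. The
# class, its names, and the set-of-black-squares representation are
# illustrative assumptions, not taken from the paper.

class AngelDevilGame:
    def __init__(self, power):
        self.power = power          # J: the angel's per-turn flight range
        self.black = set()          # squares the devil has turned black
        self.angel = (0, 0)         # the angel's current (white) square

    def devil_move(self, square):
        """The devil turns one square black. We assume he may not blacken
        the angel's current square; the abstract leaves this detail open."""
        if square != self.angel:
            self.black.add(square)

    def legal_angel_moves(self):
        """All white squares within Chebyshev distance J: at most J steps,
        each horizontal, vertical or diagonal. The angel flies, so black
        squares in between do not block her."""
        x, y = self.angel
        J = self.power
        return [(x + dx, y + dy)
                for dx in range(-J, J + 1)
                for dy in range(-J, J + 1)
                if (dx, dy) != (0, 0)
                and (x + dx, y + dy) not in self.black]

    def angel_move(self, square):
        assert square in self.legal_angel_moves(), "not a reachable white square"
        self.angel = square

    def devil_wins(self):
        """The devil wins when no white square is within the angel's reach."""
        return not self.legal_angel_moves()
```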




Read also

The classification experiments covered by machine learning (ML) are composed of two important parts: the data and the algorithm. As they are a fundamental part of the problem, both must be considered when evaluating a model's performance against a benchmark. The best classifiers need robust benchmarks to be properly evaluated. For this, gold standard benchmarks such as OpenML-CC18 are used. However, data complexity is commonly not considered along with the model during a performance evaluation. Recent studies employ Item Response Theory (IRT) as a new approach to evaluating datasets and algorithms, capable of evaluating both simultaneously. This work presents a new evaluation methodology based on IRT and Glicko-2, together with the decodIRT tool developed to guide the estimation of IRT in ML. It explores IRT as a tool to evaluate the OpenML-CC18 benchmark for its algorithmic evaluation capability and checks whether there is a subset of datasets more efficient than the original benchmark. Several classifiers, from classic to ensemble, are also evaluated using the IRT models. The Glicko-2 rating system was applied together with IRT to summarize the innate ability and performance of the classifiers. It was noted that not all OpenML-CC18 datasets are really useful for evaluating algorithms: only 10% were rated as really difficult. Furthermore, the existence of a more efficient subset containing only 50% of the original size was verified, and Random Forest was singled out as the algorithm with the best innate ability.
Computer science has grown rapidly since its inception in the 1950s, and the pioneers in the field are celebrated annually by the A.M. Turing Award. In this paper, we attempt to shed light on the path to becoming an influential computer scientist by examining the characteristics of the 72 Turing Award laureates. To achieve this goal, we build a comprehensive dataset of the Turing Award laureates and analyze their characteristics, including their personal information, family background, academic background, and industry experience. The FP-Growth algorithm is used for frequent feature mining. A logistic regression plot, pie chart, word cloud and map are generated accordingly for each of the interesting features to uncover insights regarding personal factors that drive influential work in the field of computer science. In particular, we show that the Turing Award laureates are most commonly white, male, married, United States citizens, and holders of a PhD degree. Our results also show that the age at which a laureate won the award has increased over the years; most of the Turing Award laureates did not major in computer science; birth order is strongly related to the winners' success; and the number of citations is not as important as one would expect.
Nakamoto invented the longest chain protocol, and claimed its security by analyzing the private double-spend attack, a race between the adversary and the honest nodes to grow a longer chain. But is it the worst attack? We answer the question in the affirmative for three classes of longest chain protocols, designed for different consensus models: 1) Nakamoto's original Proof-of-Work protocol; 2) the Ouroboros and SnowWhite Proof-of-Stake protocols; 3) the Chia Proof-of-Space protocol. As a consequence, an exact characterization of the maximum tolerable adversary power is obtained for each protocol as a function of the average block time normalized by the network delay. The security analysis of these protocols is performed in a unified manner by a novel method of reducing all attacks to a race between the adversary and the honest nodes.
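To make the race concrete, here is a hedged Python sketch of the private double-spend attack in a simplified Bernoulli block-arrival model. The function name and the model itself are illustrative assumptions; in particular, it ignores network delay, which the paper's analysis makes central, and it closes the post-confirmation race with Nakamoto's gambler's-ruin formula:

```python
import random

def private_attack_success(beta, depth, trials=100_000):
    """Estimate the private double-spend attack's success probability in a
    simplified Bernoulli-race model: each new block is adversarial with
    probability beta, honest otherwise. The adversary mines in private and
    wins if her chain can overtake the honest chain once it is `depth`
    blocks long."""
    wins = 0
    for _ in range(trials):
        adversary = honest = 0
        while honest < depth:          # race until the honest chain confirms
            if random.random() < beta:
                adversary += 1
            else:
                honest += 1
        deficit = honest - adversary
        if deficit <= 0:
            wins += 1                  # already caught up (or ahead)
        else:
            # Nakamoto's gambler's-ruin formula: the chance of ever erasing
            # a deficit z is (beta / (1 - beta))**z when beta < 1/2.
            wins += random.random() < (beta / (1 - beta)) ** deficit
    return wins / trials

# usage: print(private_attack_success(0.3, depth=6))
```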
Mao Ye, Lemeng Wu, Qiang Liu (2020)
Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size. However, the theoretical question of how much we can prune a neural network given a specified tolerance of accuracy drop is still open. This paper provides one answer to this question by proposing a greedy optimization based pruning method. The proposed method has the guarantee that the discrepancy between the pruned network and the original network decays at an exponentially fast rate w.r.t. the size of the pruned network, under weak assumptions that apply in most practical settings. Empirically, our method improves on prior art in pruning various network architectures, including ResNet and MobileNetV2/V3, on ImageNet.
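As a rough illustration of the greedy idea (not the paper's actual algorithm), the following Python sketch greedily selects hidden units of a toy one-hidden-layer ReLU network so as to shrink the discrepancy to the full network's output on probe inputs; all sizes and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 10, 64, 256                    # toy sizes: input dim, width, probes
W1 = rng.normal(size=(h, d))             # hidden-layer weights
w2 = rng.normal(size=h)                  # output weights
X = rng.normal(size=(n, d))              # probe inputs

acts = np.maximum(X @ W1.T, 0.0)         # per-unit ReLU activations, (n, h)
full_output = acts @ w2                  # output of the unpruned network

kept, residual = [], full_output.copy()
remaining = set(range(h))
for _ in range(8):                       # target pruned width (hypothetical)
    best_j, best_err = None, np.inf
    for j in remaining:
        a = acts[:, j] * w2[j]
        c = (a @ residual) / max(a @ a, 1e-12)   # optimal rescaling of unit j
        err = np.sum((residual - c * a) ** 2)
        if err < best_err:
            best_j, best_err = j, err
    a = acts[:, best_j] * w2[best_j]
    residual = residual - ((a @ residual) / max(a @ a, 1e-12)) * a
    kept.append(best_j)
    remaining.remove(best_j)

print("kept units:", kept)
print("relative error:", np.linalg.norm(residual) / np.linalg.norm(full_output))
```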
Deep networks were recently suggested to face a trade-off between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019). Such a dilemma is shown to be rooted in the inherently higher sample complexity (Schmidt et al., 2018) and/or model capacity (Nakkiran, 2019) required for learning a high-accuracy and robust classifier. In view of that, given a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications. Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins? This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a sweet point in co-optimizing model accuracy, robustness and efficiency. Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction. That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematic investigation. We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models.
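The following PyTorch sketch shows the general shape of a multi-exit classifier with input-adaptive early exiting. The confidence-threshold gate and the tiny backbone are common stand-ins assumed here; RDI-Nets' actual backbones, exit-selection rules, and adversarial training losses are described in the paper:

```python
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    """Two-exit toy classifier; multi-exit networks attach several such
    exits to standard backbones and train them jointly, one loss per exit."""
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(16, num_classes))
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, num_classes))
        self.threshold = threshold       # confidence gate (a stand-in rule)

    def forward(self, x):                # assumes a single input, batch size 1
        h = self.block1(x)
        logits1 = self.exit1(h)
        # input-adaptive inference: answer early when the first exit is
        # confident enough, saving the cost of the deeper block
        if logits1.softmax(dim=-1).max() >= self.threshold:
            return logits1
        return self.exit2(self.block2(h))

# usage: MultiExitNet()(torch.randn(1, 3, 32, 32)).shape -> (1, 10)
```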