
Fitts' Law for speed-accuracy trade-off describes a diversity-enabled sweet spot in sensorimotor control

Published by Quanying Liu
Publication date: 2019
Research language: English





Human sensorimotor control exhibits remarkable speed and accuracy, and the trade-off between them is encapsulated in Fitts' Law for reaching and pointing. Although Fitts related this trade-off to Shannon's channel capacity theorem, and despite widespread study of Fitts' Law, no theory has emerged that connects sensorimotor control at the system level to its implementation in hardware. Here we describe a theory that connects hardware (neurons and muscles with inherently severe speed-accuracy trade-offs) with system-level control to explain Fitts' Law for reaching and related laws. The results supporting the theory show that diversity between hardware components is exploited to achieve control that is both fast and accurate despite slow or inaccurate hardware. Such diversity-enabled sweet spots (DESSs) are ubiquitous in biology and technology; they explain why large heterogeneities exist in biological and technical components and how both engineers and natural selection routinely build fast and accurate systems from imperfect hardware.
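For reference, Fitts' Law predicts the movement time for a reach of distance D to a target of width W as MT = a + b * log2(2D/W), where the logarithmic term is the index of difficulty in bits, the quantity Fitts related to Shannon's channel capacity. A minimal numerical sketch (the constants a and b below are illustrative fits, not values from the paper):

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' Law: MT = a + b * log2(2D / W).
    a (seconds) and b (seconds/bit) are illustrative constants, not from the paper."""
    index_of_difficulty = math.log2(2 * distance / width)  # bits
    return a + b * index_of_difficulty

# Reaching 30 cm: narrower targets cost more time than wider ones.
for width_cm in (1.0, 4.0, 16.0):
    mt = fitts_movement_time(distance=30.0, width=width_cm)
    print(f"W = {width_cm:4.1f} cm  ->  predicted MT = {mt:.2f} s")
```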




Read also

Nervous systems sense, communicate, compute and actuate movement using distributed components with severe trade-offs in speed, accuracy, sparsity, noise and saturation. Nevertheless, brains achieve remarkably fast, accurate, and robust control performance due to a highly effective layered control architecture. Here we introduce a driving task to study how a mountain biker mitigates the immediate disturbance of trail bumps and responds to changes in trail direction. We manipulated the time delays and accuracy of the control input from the wheel as a surrogate for manipulating the characteristics of neurons in the control loop. The observed speed-accuracy trade-offs (SATs) motivated a theoretical framework consisting of layers of control loops with components having diverse speeds and accuracies within each physical level, such as nerve bundles containing axons with a wide range of sizes. Our model explains why the errors from two control loops -- one fast but inaccurate reflexive layer that corrects for bumps, and a planning layer that is slow but accurate -- are additive, and shows how the errors in each control loop can be decomposed into the errors caused by the limited speeds and accuracies of the components. These results demonstrate that an appropriate diversity in the properties of neurons across layers helps to create diversity-enabled sweet spots (DESSs) so that both fast and accurate control is achieved using slow or inaccurate components.
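As a rough illustration of the additive error decomposition described above, one can budget each loop's error as a delay-related term plus a quantization-related term and compare single-loop and layered designs. The error model and all numbers below are a simplification assumed for illustration, not the paper's derivation; the key structural assumption is that trail changes are visible in advance (preview) while bumps are immediate.

```python
# Toy error budget (an assumed simplification, not the paper's model): a loop's
# error on a disturbance is its magnitude times the delay not covered by
# advance warning, plus the loop's command quantization step.
def loop_error(magnitude, preview, delay, quant_step):
    return magnitude * max(delay - preview, 0) + quant_step

# Disturbances in the driving task: trail turns are visible ahead of time,
# bumps are immediate.  All magnitudes and steps are arbitrary illustrative units.
trail = dict(magnitude=0.05, preview=10)
bumps = dict(magnitude=1.0,  preview=0)

reflex = dict(delay=1,  quant_step=0.5)    # fast but coarse
plan   = dict(delay=10, quant_step=0.01)   # slow but fine

reflex_only = loop_error(**trail, **reflex) + loop_error(**bumps, **reflex)
plan_only   = loop_error(**trail, **plan)   + loop_error(**bumps, **plan)
layered     = loop_error(**trail, **plan)   + loop_error(**bumps, **reflex)

print(f"reflex only: {reflex_only:.2f}")  # coarse commands spoil trail tracking
print(f"plan only  : {plan_only:.2f}")    # long delay spoils bump rejection
print(f"layered    : {layered:.2f}")      # each layer handles what it is good at
```

In this toy budget the layered design beats either single loop: the fine but slow planner handles the previewed trail, while the coarse but fast reflex handles the bumps.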
We empirically investigate the best trade-off between sparse and uniformly-weighted multiple kernel learning (MKL) using the elastic-net regularization on real and simulated datasets. We find that the best trade-off parameter depends not only on the sparsity of the true kernel-weight spectrum but also on the linear dependence among kernels and the number of samples.
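For context, one common parametrization of the elastic-net regularizer interpolates between an L1 (sparsity-promoting) and an L2 (uniformity-promoting) penalty on the kernel weights, theta*||beta||_1 + (1-theta)/2*||beta||_2^2. A tiny numerical illustration (the weight vectors and parametrization are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def elastic_net(beta, theta):
    """Elastic-net penalty on kernel weights: theta*||b||_1 + (1-theta)/2*||b||_2^2."""
    beta = np.asarray(beta, dtype=float)
    return theta * np.abs(beta).sum() + (1 - theta) / 2 * (beta ** 2).sum()

sparse  = np.array([1.0, 0.0, 0.0, 0.0])  # one kernel carries all the weight
uniform = np.full(4, 0.25)                # weight spread evenly over four kernels

for theta in (0.0, 0.5, 1.0):             # theta=0 -> pure L2, theta=1 -> pure L1
    print(f"theta={theta:.1f}  penalty(sparse)={elastic_net(sparse, theta):.3f}"
          f"  penalty(uniform)={elastic_net(uniform, theta):.3f}")
```

Pure L1 penalizes both solutions equally, while pure L2 favors the uniform one, which is why tuning the trade-off parameter moves the learned kernel weights between the sparse and uniform regimes.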
The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016] and SSD [Liu et al., 2015] systems, which we view as meta-architectures and trace out the speed/accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.
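To make the selection problem concrete, here is a minimal sketch of picking the most accurate detector that fits a latency and memory budget; the configurations are of the kind studied in the paper, but the numbers are hypothetical placeholders, not the paper's measurements.

```python
# Hypothetical operating points: (meta-architecture + feature extractor,
# COCO mAP, latency in ms, GPU memory in MB).  All numbers are placeholders.
detectors = [
    ("SSD + MobileNet",                 19.0,  30,  800),
    ("SSD + Inception V2",              22.0,  42,  950),
    ("R-FCN + ResNet-101",              30.0,  85, 2600),
    ("Faster R-CNN + ResNet-101",       32.0, 106, 3200),
    ("Faster R-CNN + Inception ResNet", 35.0, 240, 5100),
]

def pick(options, max_latency_ms, max_memory_mb):
    """Most accurate detector that satisfies the latency and memory budget."""
    feasible = [d for d in options if d[2] <= max_latency_ms and d[3] <= max_memory_mb]
    return max(feasible, key=lambda d: d[1], default=None)

print(pick(detectors, max_latency_ms=50,  max_memory_mb=1024))  # mobile-style budget
print(pick(detectors, max_latency_ms=300, max_memory_mb=8192))  # accuracy-first budget
```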
As numerous machine learning and other algorithms increase in complexity and data requirements, distributed computing becomes necessary to satisfy the growing computational and storage demands, because it enables parallel execution of smaller tasks that make up a large computing job. However, random fluctuations in task service times lead to straggling tasks with long execution times. Redundancy, in the form of task replication and erasure coding, provides diversity that allows a job to be completed when only a subset of redundant tasks is executed, thus removing the dependency on the straggling tasks. In situations of constrained resources (here a fixed number of parallel servers), increasing redundancy reduces the available resources for parallelism. In this paper, we characterize the diversity vs. parallelism trade-off and identify the optimal strategy, among replication, coding and splitting, which minimizes the expected job completion time. We consider three common service time distributions and establish three models that describe scaling of these distributions with the task size. We find that different distributions with different scaling models operate optimally at different levels of redundancy, and thus may require very different code rates.
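A small Monte Carlo sketch of the two extreme strategies (splitting the job across all servers versus replicating the whole job on every server) under a shifted-exponential service-time model; the scaling model and parameters are assumptions for illustration, not the paper's exact setup:

```python
import random

random.seed(0)
N_SERVERS, TRIALS = 8, 20000

def task_time(size, shift):
    """Shifted-exponential service time for a task of the given size
    (assumed scaling: both the deterministic and random parts scale with size)."""
    return shift * size + random.expovariate(1.0 / size)

def job_time(strategy, shift):
    if strategy == "split":       # n tasks of size 1/n, wait for all of them
        return max(task_time(1 / N_SERVERS, shift) for _ in range(N_SERVERS))
    else:                         # "replicate": full job everywhere, wait for the first
        return min(task_time(1.0, shift) for _ in range(N_SERVERS))

for shift in (0.0, 2.0):          # 0: pure exponential, 2: large deterministic part
    for strategy in ("split", "replicate"):
        avg = sum(job_time(strategy, shift) for _ in range(TRIALS)) / TRIALS
        print(f"shift={shift:.1f}  {strategy:9s}  mean completion time ~ {avg:.3f}")
```

With a purely exponential service time, full replication wins, while a large deterministic component favors splitting, consistent with the observation that different scaling models operate optimally at different levels of redundancy.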
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we decompose the prediction error for adversarial examples (robust error) as the sum of the natural (classification) error and boundary error, and provide a differentiable upper bound using the theory of classification-calibrated loss, which is shown to be the tightest possible upper bound uniform over all probability distributions and measurable predictors. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally in real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge in which we won the 1st place out of ~2,000 submissions, surpassing the runner-up approach by $11.41\%$ in terms of mean $\ell_2$ perturbation distance.
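A minimal PyTorch sketch of a TRADES-style objective as described above: a cross-entropy term for the natural error plus a weighted KL term for the boundary error, with the perturbation found by maximizing that KL term. The hyperparameter values and PGD details are common defaults assumed here, not necessarily the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, eps=0.03, step_size=0.007, steps=10, beta=6.0):
    """TRADES-style surrogate: natural cross-entropy + beta * KL(adversarial || clean).
    Hyperparameters are common defaults, not necessarily the paper's settings."""
    model.eval()
    p_clean = F.softmax(model(x), dim=1).detach()
    # Inner maximization: PGD on the KL term (the boundary-error surrogate).
    x_adv = x.detach() + 0.001 * torch.randn_like(x)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                      reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    model.train()
    # Outer minimization: natural (classification) error + robustness term.
    natural = F.cross_entropy(model(x), y)
    robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(model(x), dim=1), reduction="batchmean")
    return natural + beta * robust
```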