
When the quarter jumps into a cup (and when it does not)

Published by: Arturo C. Marti
Publication date: 2020
Research field: Physics
Paper language: English





While Bernoulli's equation is one of the most frequently mentioned topics in the physics literature and other means of dissemination, it is also one of the least understood. Oddly enough, in the wonderful book Turning the World Inside Out [1], Robert Ehrlich proposes a demonstration that consists of blowing a quarter-dollar coin into a cup, incorrectly explained using Bernoulli's equation. In the present work, we have adapted the demonstration to show situations in which the coin jumps into the cup and others in which it does not, proving that the explanation based on Bernoulli's equation is flawed. Our demonstration is useful for tackling the common misconception, stemming from the incorrect use of Bernoulli's equation, that higher velocity invariably means lower pressure.
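
For reference, the relation at issue, in its standard textbook form (this restatement is mine, not quoted from the paper), is

$$ p + \tfrac{1}{2}\rho v^2 + \rho g h = \text{constant along a streamline}, $$

valid only for steady, inviscid, incompressible flow, with the constant in general differing from one streamline to another. Comparing a fast stream of blown air with the still air beside it therefore does not, by itself, license the conclusion that the faster air is at lower pressure, which is exactly the misconception the demonstration is designed to expose.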




Read also

Numerous papers ask how difficult it is to cluster data. We suggest that the more relevant and interesting question is how difficult it is to cluster data sets that can be clustered well. More generally, despite the ubiquity and the great importance of clustering, we still do not have a satisfactory mathematical theory of clustering. In order to properly understand clustering, it is clearly necessary to develop a solid theoretical basis for the area. For example, from the perspective of computational complexity theory the clustering problem seems very hard. Numerous papers introduce various criteria and numerical measures to quantify the quality of a given clustering. The resulting conclusions are pessimistic, since it is computationally difficult to find an optimal clustering of a given data set if we go by any of these popular criteria. In contrast, the practitioners' perspective is much more optimistic. Our explanation for this disparity of opinions is that complexity theory concentrates on the worst case, whereas in reality we only care about data sets that can be clustered well. We introduce a theoretical framework of clustering in metric spaces that revolves around a notion of good clustering. We show that if a good clustering exists, then in many cases it can be found efficiently. Our conclusion is that, contrary to popular belief, clustering should not be considered a hard task.
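
As a loose illustration of this thesis (my sketch, not the authors' framework): when a good clustering exists by construction, even an off-the-shelf algorithm such as k-means tends to find it quickly. The snippet below assumes scikit-learn is available; make_blobs, KMeans, and adjusted_rand_score are standard parts of that library.

    # Sketch: well-separated ("clusterable") data is recovered easily by k-means.
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    # Three widely separated Gaussian blobs: a good clustering exists by construction.
    X, truth = make_blobs(n_samples=600, centers=3, cluster_std=0.5, random_state=0)

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # An adjusted Rand index near 1.0 means the planted clustering was recovered.
    print("adjusted Rand index:", adjusted_rand_score(truth, labels))

On such data the index comes out essentially at 1.0 in a fraction of a second, consistent with the claim that well-clusterable data is not hard in practice.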
Howard M. Wiseman (2015)
Quantum optics did not, and could not, flourish without the laser. The present paper is not about the principles of laser construction, still less a history of how the laser was invented. Rather, it addresses the question: what are the fundamental features that distinguish laser light from thermal light? The obvious answer, laser light is coherent, is, I argue, so vague that it must be put aside at the start, albeit to revisit later. A more specific, quantum-theoretic version, laser light is in a coherent state, is simply wrong in this context: both laser light and thermal light can equally well be described by coherent states, with amplitudes that vary stochastically in space. Instead, my answer to the titular question is that four principles are needed: high directionality, monochromaticity, high brightness, and stable intensity. Combining the first three of these principles suffices to show, in a quantitative way involving, indeed, very large dimensionless quantities (up to $\sim 10^{51}$), that a laser must be constructed very differently from a light bulb. This quantitative analysis is quite simple, and is easily relatable to coherence, yet is not to be found in any textbook on quantum optics to my knowledge. The fourth principle is the most subtle and, perhaps surprisingly, is the only one related to coherent states in the quantum optics sense: it implies that the description in terms of coherent states is the only simple description of a laser beam. Interestingly, this leads to the (not, as it turns out, entirely new) prediction that narrowly filtered laser beams are indistinguishable from similarly filtered thermal beams. I hope that other educators find this material useful; it may contain surprises even for researchers who have been in the field longer than I have.
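
One standard back-of-the-envelope way to give high brightness quantitative teeth (a textbook estimate, not necessarily the derivation used in the paper) is the mean photon number per mode. Thermal light of frequency $\omega$ at temperature $T$ has occupation $\bar{n} = 1/(e^{\hbar\omega/k_B T} - 1)$ per mode, which at optical frequencies is of order $10^{-4}$ or less even for a filament at $T \approx 3000$ K, while a milliwatt laser concentrates an astronomically larger photon number into essentially a single spatial and spectral mode. The gap of many orders of magnitude in this photon degeneracy is one ingredient in the very large dimensionless quantities mentioned above.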
The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieve superior performance on typical benchmarks. However, a relatively less discussed topic is what happens if more context information is introduced into current top-scoring tagging systems. Although several existing works have attempted to shift tagging systems from the sentence level to the document level, there is still no consensus about when and why this works, which limits the applicability of the larger-context approach in tagging tasks. In this paper, instead of pursuing a state-of-the-art tagging system by architectural exploration, we focus on investigating when and why larger-context training, as a general strategy, can work. To this end, we conduct a thorough comparative study of four proposed aggregators for collecting context information and present an attribute-aided evaluation method to interpret the improvement brought by larger-context training. Experimentally, we set up a testbed based on four tagging tasks and thirteen datasets. We hope that our preliminary observations can deepen the understanding of larger-context training and inspire more follow-up work on the use of contextual information.
Information flow measures, over the duration of a game, the audience's belief of who will win, and thus can reflect the amount of surprise in a game. To quantify the relationship between information flow and the audience's perceived quality, we conduct a case study in which subjects watch one of the world's biggest esports events, LOL S10. In addition to eliciting information flow, we also ask subjects to report their rating for each game. We find that the amount of surprise at the end of the game plays a dominant role in predicting the rating. This suggests the importance of incorporating when the surprise occurs, in addition to the amount of surprise, into perceived-quality models. For content providers, it implies that, everything else being equal, it is better for twists to be more likely to happen toward the end of a show rather than uniformly throughout.
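
To make "surprise" concrete, here is one simple, hypothetical operationalization (the function names and the position-based weighting are mine; the paper's elicitation may differ): treat surprise at each moment as the jump in the audience's win probability, and weight late-game jumps more heavily.

    # Hypothetical sketch: per-moment surprise as the change in win belief,
    # with later moments weighted more. Not the paper's exact definitions.
    def surprise_profile(win_prob):
        """Per-step surprise from a sequence of win probabilities in [0, 1]."""
        return [abs(b - a) for a, b in zip(win_prob, win_prob[1:])]

    def late_weighted_surprise(win_prob):
        """Weight each step's surprise by its relative position in the game."""
        jumps = surprise_profile(win_prob)
        n = len(jumps)
        return sum((i + 1) / n * jump for i, jump in enumerate(jumps))

    steady = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]    # favourite grinds out a win
    comeback = [0.5, 0.6, 0.7, 0.8, 0.2, 0.0]  # late reversal
    print(late_weighted_surprise(steady))      # 0.3: modest score
    print(late_weighted_surprise(comeback))    # 0.8: dominated by the late twist

Under this toy measure the late-reversal game scores far higher, mirroring the finding that end-of-game surprise dominates ratings.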
We study the equilibrium behavior in a multi-commodity selfish routing game with many types of uncertain users, where each user over- or under-estimates their congestion costs by a multiplicative factor. Surprisingly, we find that uncertainties in different directions have qualitatively distinct impacts on equilibria. Namely, contrary to the usual notion that uncertainty increases inefficiencies, network congestion actually decreases when users over-estimate their costs. On the other hand, under-estimation of costs leads to increased congestion. We apply these results to urban transportation networks, where drivers have different estimates of the cost of congestion. In light of the dynamic pricing policies aimed at tackling congestion, our results indicate that users' perception of these prices can significantly impact a policy's efficacy, and that caution in the face of uncertainty leads to favorable network conditions.
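
A toy single-commodity, two-link Pigou-style instance (my illustration, much simpler than the paper's multi-commodity model) already exhibits the counterintuitive direction: when users inflate the congestion term by a factor theta > 1, equilibrium flow moves off the congestible link and the true total cost falls.

    # Toy Pigou-style network; an illustrative sketch, not the paper's model.
    # Link 1 has latency 1 + x (congestible), link 2 has constant latency 2.
    # Total demand is 1; users perceive link 1's congestion term as theta * x.
    def equilibrium_flow(theta):
        # Indifference condition 1 + theta * x = 2 gives x = 1 / theta,
        # capped at the total demand of 1.
        return min(1.0, 1.0 / theta)

    def true_total_cost(x):
        # Actual (unscaled) latency experienced, summed over all users.
        return x * (1 + x) + (1 - x) * 2

    for theta in (0.5, 1.0, 1.5, 2.0):
        x = equilibrium_flow(theta)
        print(f"theta={theta:.1f}  congestible-link flow={x:.2f}  "
              f"true total cost={true_total_cost(x):.3f}")

Here theta = 2.0 reproduces the social optimum (cost 1.75 versus 2.0 at theta = 1.0), while under-estimation (theta < 1) cannot make this particular network any worse only because its baseline equilibrium is already fully congested; exhibiting the harm of under-estimation requires a richer, multi-commodity instance like those the paper studies.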