
How difficult is it to prove the quantumness of macroscopic states?

Posted by Pavel Sekatski
Published: 2014
Research field: Physics
Language: English





General wisdom tells us that if two quantum states are ``macroscopically distinguishable'' then their superposition should be hard to observe. We make this intuition precise and general by quantifying the difficulty of observing the quantum nature of a superposition of two states that can be distinguished without microscopic accuracy. First, we quantify the distinguishability of any given pair of quantum states with measurement devices lacking microscopic accuracy, i.e. measurements suffering from limited resolution or limited sensitivity. Next, we quantify the stability requirements that have to be fulfilled by any measurement setup able to distinguish their superposition from a mere mixture. Finally, by establishing a relation between the stability requirement and the ``macroscopic distinguishability'' of the two superposed states, we demonstrate that indeed, the more distinguishable the states are, the more demanding the stability requirements become.
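
One standard way to make such coarse-grained distinguishability concrete is a state-guessing game; the following is a hedged sketch of the kind of quantity involved, with the measurement class $\mathcal{M}_\sigma$ an assumed notation rather than the authors' exact definition. Given $\rho_A$ or $\rho_B$ with equal prior probability, the optimal guessing probability over unrestricted two-outcome POVMs $\{M,\mathbb{1}-M\}$ is the Helstrom bound

\[ P_g(\rho_A,\rho_B) = \max_{0\le M\le \mathbb{1}} \frac{1}{2}\Bigl[\operatorname{tr}(M\rho_A) + \operatorname{tr}\bigl((\mathbb{1}-M)\rho_B\bigr)\Bigr] = \frac{1}{2}\Bigl(1+\tfrac{1}{2}\lVert \rho_A-\rho_B\rVert_1\Bigr), \]

and restricting the maximization to a class $\mathcal{M}_\sigma$ of measurements whose resolution or sensitivity is limited by a coarse-graining parameter $\sigma$ gives $P_g^{\sigma}\le P_g$. ``Macroscopic distinguishability'' then means that $P_g^{\sigma}$ stays close to $1$ even for coarse $\sigma$.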




Read also

Numerous papers ask how difficult it is to cluster data. We suggest that the more relevant and interesting question is how difficult it is to cluster data sets {\em that can be clustered well}. More generally, despite the ubiquity and the great importance of clustering, we still do not have a satisfactory mathematical theory of clustering. In order to properly understand clustering, it is clearly necessary to develop a solid theoretical basis for the area. For example, from the perspective of computational complexity theory the clustering problem seems very hard. Numerous papers introduce various criteria and numerical measures to quantify the quality of a given clustering. The resulting conclusions are pessimistic, since it is computationally difficult to find an optimal clustering of a given data set, if we go by any of these popular criteria. In contrast, the practitioner's perspective is much more optimistic. Our explanation for this disparity of opinions is that complexity theory concentrates on the worst case, whereas in reality we only care about data sets that can be clustered well. We introduce a theoretical framework of clustering in metric spaces that revolves around a notion of good clustering. We show that if a good clustering exists, then in many cases it can be efficiently found. Our conclusion is that, contrary to popular belief, clustering should not be considered a hard task.
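
As a hedged illustration of why data that clusters well is easy to cluster (a minimal sketch under assumed choices: synthetic well-separated blobs and single-linkage, not the authors' framework), a polynomial-time greedy procedure recovers the good clustering exactly:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two tight, well-separated blobs: every within-cluster distance is much
# smaller than every between-cluster distance, i.e. the data clusters well.
data = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)),
                  rng.normal(5.0, 0.1, size=(50, 2))])

# Single-linkage agglomeration runs in polynomial time and, on data with a
# large separation gap, splits exactly along that gap.
labels = fcluster(linkage(data, method="single"), t=2, criterion="maxclust")
assert len(set(labels[:50])) == 1 and len(set(labels[50:])) == 1

On worst-case inputs the usual objective-driven formulations are NP-hard; the point of the example is that the hardness disappears precisely on inputs admitting a good clustering.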
Sofia Wechsler, 2009
The concept of realism in quantum mechanics means that results of measurement are caused by physical variables, hidden or observable. Local hidden variables were proved unable to explain the results of measurements on entangled particles tested far away from one another. Some physicists then embraced the idea of nonlocal hidden variables. The present article proves that this idea is problematic, in that it runs into an impasse vis-à-vis special relativity.
Palash Dey, Sourav Medya, 2019
Covert networks are social networks that often consist of harmful users. Social Network Analysis (SNA) has played an important role in reducing criminal activities (e.g., counter-terrorism) by detecting the influential users in such networks. There are various popular measures to quantify how influential or central any vertex is in a network. As expected, strategic and influential miscreants in covert networks try to hide themselves and their partners (called {\em leaders}) from being detected via these measures by introducing new edges. Waniek et al. show that the corresponding computational problem, called Hiding Leader, is NP-complete for the degree and closeness centrality measures. We study the popular core centrality measure and show that the problem is NP-complete even when the core centrality of every leader is only $3$. On the contrary, we prove that the problem becomes polynomial-time solvable for the degree centrality measure if the degree of every leader is bounded above by any constant. We then focus on the optimization version of the problem and show that the Hiding Leader problem admits a $2$-factor approximation algorithm for the degree centrality measure. We complement it by proving that one cannot hope for any $(2-\varepsilon)$-factor approximation algorithm for any constant $\varepsilon>0$ unless there is an $\varepsilon/2$-factor polynomial-time algorithm for the Densest $k$-Subgraph problem, which would be considered a significant breakthrough.
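
As a hedged sketch of the two centrality measures in play (standard networkx calls on a stock example graph; the graph and the added edge are illustrative assumptions, not the paper's construction):

import networkx as nx

G = nx.karate_club_graph()
degree = nx.degree_centrality(G)  # degree centrality of every vertex
core = nx.core_number(G)          # core centrality: largest k such that the vertex lies in the k-core

leader = max(degree, key=degree.get)
print(f"leader {leader}: top degree centrality, core number {core[leader]}")

# A "hiding" move adds edges that avoid the leader, raising other vertices'
# centralities so the leader drops in the ranking; a single edge is rarely
# enough, and the problem asks how hard it is to find a cheapest set of them.
G.add_edge(4, 9)  # hypothetical edge between two non-leader vertices
new_degree = nx.degree_centrality(G)
print(sorted(new_degree, key=new_degree.get, reverse=True)[:5])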
We discuss the constraints coming from current observations of type Ia supernovae on cosmological models which allow sudden future singularities of pressure (with the scale factor and the energy density regular). We show that such a sudden singularity may happen in the very near future (e.g. within ten million years) and that its prediction at the present moment of cosmic evolution cannot be distinguished, with current observational data, from the prediction given by the standard quintessence scenario of future evolution. Fortunately, sudden future singularities are characterized by a momentary peak of infinite tidal forces only; there is no geodesic incompleteness, which means that the evolution of the universe may eventually be continued through them until another ``more serious'' singularity such as a Big Crunch or a Big Rip.
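
For concreteness, the scale factor commonly used in this literature to generate a sudden future singularity (Barrow's standard ansatz, quoted here as a hedged illustration rather than this paper's exact parametrization) is

\[ a(t) = 1 + (a_s-1)\Bigl(\frac{t}{t_s}\Bigr)^{q} - \Bigl(1-\frac{t}{t_s}\Bigr)^{n}, \qquad 0<q\le 1, \quad 1<n<2, \]

for which, as $t\to t_s$, both $a\to a_s$ and $\dot a$ stay finite, so the energy density $\rho\propto \dot a^2/a^2$ remains regular, while $\ddot a\to -\infty$ and the pressure $p\propto -\ddot a$ diverges, exactly the behavior described above.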
We study the one-dimensional projection of the extremal Gibbs measures of the two-dimensional Ising model, the Schonmann projection. These measures are known to be non-Gibbsian at low temperatures, since their conditional probabilities, as a function of the two-sided boundary conditions, are not continuous. We prove that they are g-measures, which means that their conditional probabilities depend continuously on the one-sided boundary conditions.
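
For reference, the standard definition behind the result (notation assumed here, not quoted from the paper): a translation-invariant measure $\mu$ on $\{-1,+1\}^{\mathbb{Z}}$ is a g-measure if its one-sided conditional probabilities are given by a continuous function $g$,

\[ \mu\bigl(\sigma_0 \mid \sigma_{-1},\sigma_{-2},\dots\bigr) = g(\sigma_0,\sigma_{-1},\sigma_{-2},\dots), \]

with $g$ continuous in the product topology, whereas Gibbsianness requires continuity of the two-sided conditional probabilities $\mu(\sigma_0 \mid \sigma_i,\ i\neq 0)$. The abstract's point is that the one-sided property can hold even when the two-sided one fails.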