
Photon model of light: Revision of applicability limits

Published by: Dr Yuriy Akimov
Publication date: 2021
Research field: Physics
Paper language: English





The photon model of light has been known for decades to be a self-inconsistent and controversial theory with numerous intrinsic conflicts. This paper revises the model and explores its applicability to the description of classical electromagnetic fields. The revision shows that the photon model fails for fields in current-containing domains, as well as for near fields in current-free regions. This drastically changes the hierarchy of optics theories and the entire landscape of physics. In particular, quantum optics appears to be not the most advanced theory, as is commonly thought, but merely an improved version of geometrical optics with limited applicability, while quantum electrodynamics turns out to provide a truncated description of electromagnetic interactions.




Read also

Rainer W. Kuhne (2004)
Several years ago, I suggested a quantum field theory which has many attractive features. (1) It can explain the quantization of electric charge. (2) It describes symmetrized Maxwell equations. (3) It is manifestly covariant. (4) It describes local four-potentials. (5) It avoids the unphysical Dirac string. My model predicts a second kind of light, which I named "magnetic photon rays". Here I will discuss possible observations of this radiation by August Kundt in 1885, Alipasha Vaziri in February 2002, and Roderic Lakes in June 2002.
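For orientation, the phrase "symmetrized Maxwell equations" usually refers to the textbook duality-symmetric form with magnetic charge density $\rho_m$ and magnetic current density $\mathbf{j}_m$ (Gaussian units shown here; this is the generic symmetrized form, not necessarily the specific formulation of the model above):
$$\nabla\cdot\mathbf{E} = 4\pi\rho_e, \qquad \nabla\cdot\mathbf{B} = 4\pi\rho_m,$$
$$\nabla\times\mathbf{E} = -\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t} - \frac{4\pi}{c}\,\mathbf{j}_m, \qquad \nabla\times\mathbf{B} = \frac{1}{c}\frac{\partial\mathbf{E}}{\partial t} + \frac{4\pi}{c}\,\mathbf{j}_e.$$
Setting $\rho_m = 0$ and $\mathbf{j}_m = 0$ recovers the ordinary Maxwell equations.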
With a modest revision of the mass sector of the Standard Model, the systematics of the fermion masses and mixings can be fully described and interpreted as providing information on matrix elements of physics beyond the Standard Model. A by-product is a reduction of the largest Higgs Yukawa fine structure constant by an order of magnitude. The extension to leptons provides insight into the difference between quark mixing and lepton mixing as evidenced in neutrino oscillations. The large difference between the scale for up-quark and down-quark masses is not addressed. In this approach, improved detail and accuracy of the elements of the current mixing matrices can extend our knowledge and understanding of physics beyond the Standard Model.
Almost sixty years since Landauer linked the erasure of information with an increase of entropy, his famous erasure principle and byproducts like reversible computing are still subject to debate in the scientific community. In this work we use the Liouville theorem to establish three different types of relation between the manipulation of information by a logical gate and the change of its physical entropy, corresponding to three types of final state of the environment. A time-reversible relation can be established when the final states of the environment corresponding to different logical inputs are macroscopically distinguishable, showing a path to reversible computation and erasure of data with no entropy cost. A weak relation, giving an entropy change of $k \ln 2$ for an erasure gate, can be deduced without any thermodynamical argument, requiring only that the final states of the environment be macroscopically indistinguishable. The common strong relation that links entropy cost to heat requires the final states of the environment to be in thermal equilibrium. We argue in this work that much of the misunderstanding around Landauer's erasure principle stems from not properly distinguishing the limits and applicability of these three different relations. Due to new technological advances, we emphasize the importance of taking into account the time-reversible and weak types of relation when linking information manipulation and entropy cost in erasure gates, beyond the consideration of environments in thermodynamic equilibrium.
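For reference, the "strong" relation mentioned above corresponds to the standard thermodynamic statement of Landauer's bound for erasing one bit into an environment in equilibrium at temperature $T$ (notation chosen here for illustration, not taken from the paper):
$$\Delta S_{\mathrm{env}} \ge k \ln 2, \qquad Q_{\mathrm{diss}} \ge k\,T \ln 2,$$
i.e. the environment's entropy increases by at least $k \ln 2$, and at least $k T \ln 2$ of heat is dissipated per erased bit.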
We investigate the Peierls-Feynman-Bogoliubov variational principle to map Hubbard models with nonlocal interactions to effective models with only local interactions. We study the renormalization of the local interaction induced by the nearest-neighbor interaction and assess the quality of the effective Hubbard models in reproducing observables of the corresponding extended Hubbard models. We compare the renormalization of the local interactions as obtained from numerically exact determinant Quantum Monte Carlo to approximate but more generally applicable calculations using dual boson, dynamical mean field theory, and the random phase approximation. These more approximate approaches are crucial for any application with real materials in mind. Furthermore, we use the dual boson method to calculate observables of the extended Hubbard models directly and benchmark these against determinant Quantum Monte Carlo simulations of the effective Hubbard model.
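As a reminder of the technique named above, the Peierls-Feynman-Bogoliubov variational principle bounds the free energy $F$ of the full (extended) Hamiltonian $H$ by that of a trial Hamiltonian $\tilde{H}$, here a purely local Hubbard model (notation assumed for illustration):
$$F \le \tilde{F} \equiv F_{\tilde{H}} + \bigl\langle H - \tilde{H} \bigr\rangle_{\tilde{H}},$$
where $F_{\tilde{H}}$ and $\langle\cdot\rangle_{\tilde{H}}$ are the free energy and thermal average of the trial model. The effective local interaction $U^{*}$ is then fixed by minimizing the right-hand side with respect to the parameters of $\tilde{H}$.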
The progress in building large quantum states and networks requires sophisticated detection techniques to verify the desired operation. A cost- and resource-efficient detection method for this purpose is the time multiplexing of photonic states. This design is assumed to be efficiently scalable; however, it is restricted by inevitable losses and limited detection efficiencies. Here, we investigate the scalability of time-multiplexed detectors under the effects of fiber dispersion and losses. We use the distinguishability of Fock states up to $n=20$ after passing through the time-multiplexed detector as our figure of merit and find that, for realistic setup efficiencies of $\eta=0.85$, the optimal size for time-multiplexed detectors is 256 bins.
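For context, a common idealized model of an $N$-bin time-multiplexed detector assumes each of the $n$ input photons is independently lost with probability $1-\eta$ or routed uniformly into one of the $N$ bins; under this assumption (which ignores dispersion and is not necessarily the exact model used in the paper), the probability of registering $k$ clicks from an $n$-photon Fock state is
$$P(k\,|\,n) = \binom{N}{k} \sum_{j=0}^{k} (-1)^{j} \binom{k}{j} \left( 1-\eta + \eta\,\frac{k-j}{N} \right)^{\!n}.$$
The overlap between such click distributions for neighboring $n$ is one way to quantify the Fock-state distinguishability that serves as the figure of merit above.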