
Answer of the most important unsettled question of physical theory in 20th century

 Added by Junhao Zhang
 Publication date 2001
Fields Physics
Language English
 Authors Z. Junhao





Quantum mechanics takes the sum of the first finite orders of the time-dependent perturbation approximation as a substitute for the exact solution. From a mathematical point of view, this can be correct only within the convergence region of the time-dependent perturbation series. Where is the convergence region of this series? Quantum mechanics has not answered this question, yet it bears directly on whether the Schrodinger equation can be used to describe transition processes. It is therefore the most important unsettled problem of physical theory. We derive the time-dependent approximate solution of arbitrary order and the exact solution, and then prove that: (1) in the neighborhood of energy conservation the series is divergent; the basic error of quantum mechanics is to substitute the sum of the first finite orders of approximate solutions for the exact solution in this divergent region, which leads to an infinite error, so the Fermi golden rule is not a mathematically sound inference from the Schrodinger equation; (2) the transition probability per unit time deduced from the exact solution of the Schrodinger equation cannot describe the transition processes. This paper is only a preliminary discussion.
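For orientation, the standard textbook formulas at stake (quoted here for context, not taken from the paper itself) are the time-dependent perturbation expansion of the transition amplitude and the first-order term from which the Fermi golden rule is usually derived:

\[
c_f(t) = \delta_{fi} + \sum_{n=1}^{\infty} c_f^{(n)}(t),
\qquad
c_f^{(1)}(t) = -\frac{i}{\hbar}\int_0^t \langle f \mid V(t') \mid i \rangle\, e^{i\omega_{fi} t'}\, dt',
\]
\[
\Gamma_{i\to f} = \frac{2\pi}{\hbar}\, \big|\langle f \mid V \mid i \rangle\big|^2\, \rho(E_f),
\qquad
\omega_{fi} = \frac{E_f - E_i}{\hbar},
\]

where \(\rho(E_f)\) is the density of final states. The paper's claim is that keeping only the first few terms of \(\sum_n c_f^{(n)}\) is invalid near \(E_f \approx E_i\) (the neighborhood of energy conservation), because that is precisely where the series is said to diverge.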


Related research

Yurij Baryshev 2015
In the history of cosmology, physical paradoxes have played an important role in the development of contemporary world models. Within the modern standard cosmological model there are both observational and conceptual paradoxes that stimulate the search for their solutions. The confrontation of theoretical predictions of the standard cosmological model with the latest astrophysical observational data is considered, and a review of the conceptual problems of the Friedmann expanding-space models, which lie at the basis of the modern cosmological model, is given. The main paradoxes discussed in the modern literature are the Newtonian character of the exact Friedmann equation, the violation of energy conservation within any comoving local volume, and the violation of the limiting recession velocity of galaxies for observed high-redshift objects. Possible observational tests of the nature of the cosmological redshift are also discussed.
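For reference, the quantities behind the listed paradoxes are standard and are not rederived in the abstract: the Friedmann equation for the scale factor \(a(t)\), which has the form of a Newtonian energy balance, and the Hubble-law recession velocity, which formally exceeds the speed of light beyond the Hubble radius:

\[
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},
\qquad
v_{\mathrm{rec}} = H(t)\, D > c \quad \text{for} \quad D > \frac{c}{H(t)}.
\]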
The dependency between an adequate question formulation and correct answer selection is a very intriguing but still underexplored area. In this paper, we show that question rewriting (QR) of the conversational context allows us to shed more light on this phenomenon, and we also use it to evaluate the robustness of different answer selection approaches. We introduce a simple framework that enables an automated analysis of conversational question answering (QA) performance using question rewrites, and present the results of this analysis on the TREC CAsT and QuAC (CANARD) datasets. Our experiments uncover sensitivity to question formulation in popular state-of-the-art models for reading comprehension and passage ranking. Our results demonstrate that the reading comprehension model is insensitive to question formulation, while the passage ranking changes dramatically with even a little variation in the input question. The benefit of QR is that it allows us to pinpoint and group such cases automatically. We show how to use this methodology to verify whether QA models are really learning the task or just finding shortcuts in the dataset, and to better understand the frequent types of errors they make.
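A minimal sketch of the kind of analysis such a framework performs, assuming hypothetical answer_question and rewrite_question helpers standing in for a QA model and a question-rewriting model (this is not the authors' code, and the TREC CAsT / QuAC pipelines are not reproduced here):

def qr_robustness_report(examples, answer_question, rewrite_question):
    """Compare a QA model's answers on original conversational questions
    vs. their self-contained rewrites.

    examples: list of dicts with 'history', 'question', 'gold_answer'.
    answer_question(history_or_none, question) -> predicted answer string.
    rewrite_question(history, question) -> rewritten, context-free question.
    Both callables are placeholders for real models (assumed, not given here).
    """
    mismatches = []
    for ex in examples:
        with_context = answer_question(ex["history"], ex["question"])
        rewritten_q = rewrite_question(ex["history"], ex["question"])
        without_context = answer_question(None, rewritten_q)
        if with_context != without_context:
            # Collecting these cases is what lets the analysis pinpoint and
            # group sensitivity to question formulation automatically.
            mismatches.append({"question": ex["question"],
                               "rewrite": rewritten_q,
                               "answers": (with_context, without_context)})
    return {"total": len(examples),
            "mismatched": len(mismatches),
            "cases": mismatches}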
In education, open-ended quiz questions have become an important tool for assessing the knowledge of students. Yet, manually preparing such questions is a tedious task, and thus automatic question generation has been proposed as a possible alternative. So far, the vast majority of research has focused on generating the question text, relying on question answering datasets with readily picked answers, and the problem of how to come up with answer candidates in the first place has been largely ignored. Here, we aim to bridge this gap. In particular, we propose a model that can generate a specified number of answer candidates for a given passage of text, which can then be used by instructors to write questions manually or can be passed as an input to automatic answer-aware question generators. Our experiments show that our proposed answer candidate generation model outperforms several baselines.
In e-commerce portals, generating answers for product-related questions has become a crucial task. In this paper, we focus on the task of product-aware answer generation, which learns to generate an accurate and complete answer from large-scale unlabeled e-commerce reviews and product attributes. However, the safe answer problem poses significant challenges to text generation tasks, and the e-commerce question-answering task is no exception. To generate more meaningful answers, we propose a novel generative neural model, called the Meaningful Product Answer Generator (MPAG), which alleviates the safe answer problem by taking product reviews, product attributes, and a prototype answer into consideration. Product reviews and product attributes are used to provide meaningful content, while the prototype answer can yield a more diverse answer pattern. To this end, we propose a novel answer generator with a review reasoning module and a prototype answer reader. Our key idea is to obtain the correct question-aware information from a large-scale collection of reviews and learn how to write a coherent and meaningful answer from an existing prototype answer. To be more specific, we propose a read-and-write memory consisting of selective writing units to conduct reasoning among these reviews. We then employ a prototype reader consisting of comprehensive matching to extract the answer skeleton from the prototype answer. Finally, we propose an answer editor to generate the final answer by taking the question and the above parts as input. Extensive experiments conducted on a real-world dataset collected from an e-commerce platform show that our model achieves state-of-the-art performance in terms of both automatic metrics and human evaluations. Human evaluation also demonstrates that our model can consistently generate specific and proper answers.
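The read-and-write review memory can be illustrated with a much-simplified PyTorch sketch (dimensions and module names assumed; the prototype answer reader and answer editor of the full MPAG model are omitted):

import torch
import torch.nn as nn

class ReviewMemory(nn.Module):
    """Simplified read-and-write memory over review vectors: an attention read
    keyed by the question, then a gated (selective) write back into each slot."""

    def __init__(self, dim):
        super().__init__()
        self.read_proj = nn.Linear(dim, dim)
        self.write_gate = nn.Linear(2 * dim, dim)
        self.write_cand = nn.Linear(2 * dim, dim)

    def forward(self, question, memory):
        # question: (batch, dim); memory: (batch, slots, dim) encoded reviews.
        scores = torch.bmm(memory, self.read_proj(question).unsqueeze(2))  # (B, S, 1)
        weights = torch.softmax(scores, dim=1)
        read = (weights * memory).sum(dim=1)                               # (B, dim)
        # Selective write: interpolate every slot toward a candidate update.
        expanded = read.unsqueeze(1).expand_as(memory)
        pair = torch.cat([memory, expanded], dim=-1)                       # (B, S, 2*dim)
        gate = torch.sigmoid(self.write_gate(pair))
        cand = torch.tanh(self.write_cand(pair))
        return read, gate * cand + (1 - gate) * memory

# Illustrative usage: 2 questions, 5 review slots, 64-dimensional vectors.
memory_module = ReviewMemory(64)
question_vec = torch.randn(2, 64)
review_vecs = torch.randn(2, 5, 64)
read_vec, updated_memory = memory_module(question_vec, review_vecs)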
In this paper, the answer selection problem in community question answering (CQA) is regarded as an answer sequence labeling task, and a novel approach based on a recurrent architecture is proposed for this problem. Our approach first applies convolutional neural networks (CNNs) to learn the joint representation of each question-answer pair, and then uses the joint representation as input to a long short-term memory (LSTM) network to learn the answer sequence of a question and label the matching quality of each answer. Experiments conducted on the SemEval 2015 CQA dataset show the effectiveness of our approach.
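A compact sketch of the architecture described above (hyperparameters and preprocessing assumed; an illustration, not the authors' implementation): a CNN encodes each question-answer pair into a joint vector, and an LSTM reads the sequence of a question's answers to label each one.

import torch
import torch.nn as nn

class AnswerSequenceLabeler(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, channels=128, hidden=128, n_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_labels)

    def encode_pair(self, tokens):
        # tokens: (batch, seq_len) ids of a concatenated question + answer pair.
        x = self.embed(tokens).transpose(1, 2)      # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                # (batch, channels, seq_len)
        return x.max(dim=2).values                  # max-over-time pooling

    def forward(self, pair_tokens):
        # pair_tokens: (batch, n_answers, seq_len), one row of answers per question.
        b, n, l = pair_tokens.shape
        joint = self.encode_pair(pair_tokens.view(b * n, l)).view(b, n, -1)
        states, _ = self.lstm(joint)                # label the answers as a sequence
        return self.out(states)                     # (batch, n_answers, n_labels)

# Illustrative usage: 4 questions, 6 candidate answers each, 40 tokens per pair.
labeler = AnswerSequenceLabeler(vocab_size=30000)
logits = labeler(torch.randint(1, 30000, (4, 6, 40)))  # -> (4, 6, 3)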