
Real Space Renormalization-Group for Configurational Random Walk Models on a Hierarchical Lattice. The Asymptotic End-to-End Distance of a Weakly SARW in Dimension Four

Publication date: 1995
Language: English





We present a real-space renormalization-group map for probabilities of random walks on a hierarchical lattice. From this map, we study the asymptotic behavior of the end-to-end distance of a weakly self-avoiding random walk (SARW), which penalizes the (self-)intersection of two random walks, in dimension four on the hierarchical lattice.
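To see informally why dimension four is special here: the self-repulsion coupling is marginally irrelevant at the upper critical dimension, so a second-order renormalization-group recursion drives it to zero like 1/n and produces logarithmic corrections. The following Python sketch illustrates that mechanism with a generic quadratic recursion; the constant c and the initial coupling are placeholders, not the map constructed in the paper.

    # Toy illustration (not the paper's RG map): a marginally irrelevant
    # coupling lambda flowing as lambda' = lambda - c*lambda^2 decays
    # like 1/(c*n), the standard source of logarithmic corrections in d = 4.
    c = 1.0      # placeholder constant
    lam = 0.1    # placeholder weak initial self-repulsion
    for n in range(1, 10001):
        lam -= c * lam * lam
    print(lam, 1.0 / (c * 10000))  # the two values agree to leading order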



Related research

In [BEI] we introduced a Lévy process on a hierarchical lattice which is four-dimensional, in the sense that the Green's function for the process equals 1/x^2. If the process is modified so as to be weakly self-repelling, it was shown that at the critical killing rate (mass-squared) beta^c, the Green's function behaves like the free one. Here we analyze the end-to-end distance of the model and show that its expected value grows as a constant times sqrt(T) (log T)^{1/8} (1 + O(log log T / log T)), which is the same law as has been conjectured for self-avoiding walks on the simple cubic lattice Z^4. The proof uses inverse Laplace transforms to obtain the end-to-end distance from the Green's function, and requires detailed properties of the Green's function throughout a sector of the complex beta plane. These estimates are derived in a companion paper [math-ph/0205028].
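For a sense of how slowly the logarithmic factor grows, the predicted law can be evaluated numerically; the prefactor below is set to 1 as a placeholder, since the result fixes only the form of the asymptotics, not the constant.

    import math

    # Evaluate c * sqrt(T) * (log T)**(1/8) with the placeholder c = 1;
    # the last column isolates the slowly growing log^{1/8} correction.
    for T in (1e2, 1e4, 1e6, 1e8):
        r = math.sqrt(T) * math.log(T) ** 0.125
        print(int(T), r, r / math.sqrt(T))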
We study the problem of a random walk on a lattice in which the bonds connecting nearest-neighbor sites open and close randomly in time, a situation often encountered in fluctuating media. We present a simple renormalization-group technique to solve for the effective diffusive behavior at long times. For one-dimensional lattices we obtain better quantitative agreement with simulation data than earlier effective-medium results. Our technique works in principle in any dimension, although the amount of computation required rises with the dimensionality of the lattice.
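A direct simulation of the simplest annealed version of this model (each bond redrawn independently at every time step) gives the baseline that the renormalization-group technique refines for correlated or quenched bond dynamics; in this annealed case the effective diffusion constant is just p/2, where p is the probability that a bond is open. Parameters below are illustrative.

    import random

    # Walker on a 1D chain; the attempted bond is open with probability p,
    # redrawn independently each time step (annealed fluctuations).
    def diffusion_constant(p, steps=2000, walkers=500, seed=0):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(walkers):
            x = 0
            for _ in range(steps):
                if rng.random() < p:       # bond open: the step succeeds
                    x += rng.choice((-1, 1))
            total += x * x
        return total / walkers / (2.0 * steps)   # D_eff = <x^2> / (2t)

    print(diffusion_constant(0.5))   # close to p/2 = 0.25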
We discuss some general aspects of renormalization group flows in four dimensions. Every such flow can be reinterpreted in terms of a spontaneously broken conformal symmetry. We analyze in detail the consequences of trace anomalies for the effective action of the Nambu-Goldstone boson of broken conformal symmetry. While the c-anomaly is algebraically trivial, the a-anomaly is non-Abelian, and leads to a positive-definite universal contribution to the S-matrix of 2->2 dilaton scattering. Unitarity of the S-matrix results in a monotonically decreasing function that interpolates between the Euler anomalies in the ultraviolet and the infrared, thereby establishing the a-theorem.
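Schematically, and up to normalization conventions that should be taken from the paper itself, the anomaly-induced low-energy dilaton amplitude and the resulting sum rule read:

    % anomaly contribution to 2 -> 2 dilaton scattering (schematic normalization)
    \mathcal{A}(s,t)\big|_{\text{anomaly}} \;\propto\;
        \frac{a_{\mathrm{UV}}-a_{\mathrm{IR}}}{f^{4}}\,\bigl(s^{2}+t^{2}+u^{2}\bigr),
    \qquad
    a_{\mathrm{UV}}-a_{\mathrm{IR}} \;\propto\;
        f^{4}\!\int_{s>0}\frac{\operatorname{Im}\mathcal{A}(s,0)}{s^{3}}\,ds .

Since the optical theorem makes Im A(s,0) nonnegative, the integral on the right is positive; this is the positive-definite contribution, and the monotonicity, referred to in the abstract.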
Considering the scale-dependent effective spacetimes implied by the functional renormalization group in d-dimensional Quantum Einstein Gravity, we discuss the representation of entire evolution histories by means of a single (d+1)-dimensional manifold furnished with a fixed (pseudo-)Riemannian structure. This scale-space-time carries a natural foliation whose leaves are the ordinary spacetimes seen at a given resolution. We propose a universal form of the higher-dimensional metric and discuss its properties. We show that, under precise conditions, this metric is always Ricci flat and admits a homothetic Killing vector field; if the evolving spacetimes are maximally symmetric, their (d+1)-dimensional representative even has a vanishing Riemann tensor. The non-degeneracy of the higher-dimensional metric which geometrizes a given RG trajectory is linked to a monotonicity requirement for the running of the cosmological constant, which we test in the case of Asymptotic Safety.
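As a rough schematic of the construction (the paper's "universal form" fixes the details), the (d+1)-dimensional scale-space-time can be pictured as a foliated metric whose leaf at scale k carries the effective metric seen at that resolution; the lapse N(k) and signature factor epsilon below are placeholders to be fixed by the Ricci-flatness and homothety conditions quoted in the abstract:

    % schematic foliated ansatz; N(k) and \epsilon are placeholders
    ds^{2}_{(d+1)} \;=\; \epsilon\,N^{2}(k)\,dk^{2}
        \;+\; g^{(k)}_{\mu\nu}(x)\,dx^{\mu}\,dx^{\nu} .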
End-to-end automatic speech recognition (ASR) models, including both attention-based models and the recurrent neural network transducer (RNN-T), have shown superior performance compared to conventional systems. However, previous studies have focused primarily on short utterances that last for just a few seconds or, at most, a few tens of seconds. Whether such architectures remain practical on long utterances that last from minutes to hours is an open question. In this paper, we both investigate and improve the performance of end-to-end models on long-form transcription. We first present an empirical comparison of different end-to-end models on a real-world long-form task and demonstrate that the RNN-T model is much more robust than attention-based systems in this regime. We then explore two improvements to attention-based systems that significantly improve their performance: restricting the attention to be monotonic, and applying a novel decoding algorithm that breaks long utterances into shorter overlapping segments. Combining these two improvements, we show that attention-based end-to-end models can be competitive with RNN-T on long-form speech recognition.
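A minimal sketch of the overlapping-segment idea (the paper's decoding algorithm also merges hypotheses across segment boundaries; the lengths below are placeholders, not the paper's settings):

    # Split a long utterance into fixed-length windows that share
    # `overlap` frames, so adjacent decodes can be stitched together.
    def segments(num_frames, seg_len=3000, overlap=500):
        start = 0
        while start < num_frames:
            end = min(start + seg_len, num_frames)
            yield (start, end)
            if end == num_frames:
                break
            start = end - overlap

    print(list(segments(10000)))
    # [(0, 3000), (2500, 5500), (5000, 8000), (7500, 10000)]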