This paper is concerned with a linear quadratic optimal control problem for a class of singular Volterra integral equations. Under proper convexity conditions, the optimal control uniquely exists, and it can be characterized via the Fréchet derivative of the quadratic functional in a Hilbert space or via maximum-principle-type necessary conditions. However, these (equivalent) characterizations share a shortcoming: the current value of the optimal control depends on future values of the optimal state, which is not feasible in practice. The main purpose of this paper is to obtain a causal state feedback representation of the optimal control.
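The open-loop characterization above (minimizing the quadratic functional, i.e. setting its Fréchet derivative to zero) can be illustrated on a toy discretization. The scalar kernel, cost weights, and grid below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Illustrative discretization (not the paper's construction): a scalar
# controlled Volterra equation x(t) = x0 + \int_0^t k(t,s) u(s) ds with
# the weakly singular kernel k(t,s) = (t-s)^{-1/2}, and the quadratic
# cost J(u) = ||x - x_ref||^2 + ||u||^2 minimized by linear least squares.

N, T = 200, 1.0
h = T / N
t = (np.arange(N) + 0.5) * h               # midpoint grid avoids s = t

# lower-triangular kernel matrix, K[i, j] ~ k(t_i, t_j) * h
K = np.zeros((N, N))
for i in range(N):
    K[i, :i] = h / np.sqrt(t[i] - t[:i])

x0, x_ref = 1.0, 0.0
# x = x0 + K u; setting the gradient of J to zero gives the
# normal equations (K^T K + I) u = K^T (x_ref - x0)
u = np.linalg.solve(K.T @ K + np.eye(N),
                    K.T @ ((x_ref - x0) * np.ones(N)))
x = x0 + K @ u

J_opt = np.sum((x - x_ref) ** 2) + np.sum(u ** 2)
J_zero = np.sum((x0 - x_ref) ** 2) * N     # cost of doing nothing (u = 0)
print(J_opt < J_zero)
```

Note that the resulting `u` is open-loop: each `u[i]` is computed from the whole problem at once, which is exactly the non-causal dependence the paper's feedback representation removes.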
Motivated by recent experiments on the kagome metals $A\text{V}_3\text{Sb}_5$ with $A=\text{K}, \text{Rb}, \text{Cs}$, which see the onset of charge density wave (CDW) order at $\sim 100$ K and superconductivity at $\sim 1$ K, we explore the onset of superconductivity, taking the perspective that it descends from a parent CDW state. In particular, we propose that the pairing comes from Pomeranchuk fluctuations of the reconstructed Fermi surface in the CDW phase. This scenario naturally explains the large separation of energy scales from the parent CDW. Remarkably, the phase diagram hosts double-dome superconductivity near two reconstructed Van Hove singularities, which occur at the Lifshitz transition and the quantum critical point of the parent CDW. The first dome is occupied by $d_{xy}$-wave nematic spin-singlet superconductivity, while $(s+d_{x^2-y^2})$-wave nematic spin-singlet superconductivity develops in the second dome. Our work sheds light on an unconventional pairing mechanism with strong evidence in the kagome metals $A\text{V}_3\text{Sb}_5$.
Deep learning-based image compression methods have recently made significant progress and now outperform traditional approaches, including the latest standard Versatile Video Coding (VVC), in both PSNR and MS-SSIM metrics. Two key components of learned image compression frameworks are the entropy model of the latent representations and the encoding/decoding network architectures. Various entropy models have been proposed, such as autoregressive, softmax, logistic mixture, Gaussian mixture, and Laplacian; existing schemes use only one of them. However, given the vast diversity of images, a single model is not optimal for all images, or even for different regions of one image. In this paper, we propose a more flexible discretized Gaussian-Laplacian-Logistic mixture model (GLLMM) for the latent representations, which can adapt more accurately to the different contents of different images and of different regions within one image. For the encoding/decoding network design, we propose a concatenated residual block (CRB) structure, in which multiple residual blocks are serially connected with additional shortcut connections. The CRB improves the learning ability of the network, which in turn improves compression performance. Experimental results on the Kodak and Tecnick datasets show that the proposed scheme outperforms state-of-the-art learning-based methods and existing compression standards, including VVC intra coding (4:4:4 and 4:2:0), in terms of PSNR and MS-SSIM. The project page is at \url{https://github.com/fengyurenpingsheng/Learned-image-compression-with-GLLMM}.
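The discretized-mixture idea behind GLLMM can be sketched as follows: the probability of an integer latent value is the mass that a mixture CDF assigns to its unit-width bin. The weights, means, and scales below are illustrative stand-ins for parameters the entropy model would predict per latent element:

```python
import math

# Hedged sketch of a discretized Gaussian-Laplacian-Logistic mixture
# likelihood (the GLLMM idea): the probability of integer latent y is
# the mixture-CDF mass of the bin [y - 0.5, y + 0.5].

def gauss_cdf(x, mu, s):
    return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

def laplace_cdf(x, mu, b):
    z = (x - mu) / b
    return 0.5 * math.exp(z) if x < mu else 1.0 - 0.5 * math.exp(-z)

def logistic_cdf(x, mu, s):
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def gllmm_pmf(y, params):
    """params: list of (weight, family_cdf, mu, scale); weights sum to 1."""
    hi = sum(w * cdf(y + 0.5, mu, s) for w, cdf, mu, s in params)
    lo = sum(w * cdf(y - 0.5, mu, s) for w, cdf, mu, s in params)
    return hi - lo

# illustrative (not learned) mixture parameters
mix = [(0.5, gauss_cdf, 0.0, 1.0),
       (0.3, laplace_cdf, 0.0, 1.0),
       (0.2, logistic_cdf, 0.0, 1.0)]

# the probabilities over a wide symbol range sum to ~1
total = sum(gllmm_pmf(y, mix) for y in range(-50, 51))
print(round(total, 6))   # -> 1.0
```

In a real codec these per-symbol probabilities would drive an arithmetic coder; the flexibility claimed in the paper comes from letting the network choose the mixture composition per latent element.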
Ping Lin, Hatem Zaag (2021)
This paper concerns a controllability problem for blowup points of the heat equation. It can be described as follows: in the absence of control, the solution to the linear heat system exists globally in a bounded domain $\Omega$, whereas for a given time $T>0$ and a point $a$ in this domain, we seek a feedback control, acting on an internal subset $\omega$ of the domain, such that the corresponding solution blows up at time $T$ and at the unique point $a$. We show that $a\in \omega$ can be the unique blowup point of the corresponding solution under a certain feedback control, while for any feedback control, $a\in \Omega\setminus \overline{\omega}$ cannot be the unique blowup point.
The Neutral Particle Analyzer (NPA) is one of the crucial diagnostic devices on tokamak facilities, and the stripping unit is one of its main parts. A windowless gas stripping room with two differential pipes is adopted in a parallel electric and magnetic field (E//B) NPA. The pressure distributions in the stripping chamber are simulated with Ansys Fluent together with MolFlow+. Based on the pressure distributions extracted from the simulation, the stripping efficiency of the E//B NPA is studied with GEANT4, whose hadron reaction physics is modified to track the charge state of each particle with a cross-section-based method. The transmission rates ($R$) and the stripping efficiencies $f_{+1}$ are examined for particle energies ranging from 20 to 200 keV at input pressures ($P_0$) ranging from 20 to 400 Pa. Based on the combined global efficiency $R \times f_{+1}$, $P_0 = 240$ Pa is found to be the optimum pressure for the maximum global efficiency over the incident energy range investigated.
Approximations based on perturbation theory are the basis for most of the quantitative predictions of quantum mechanics, whether in quantum field theory, many-body physics, chemistry or other domains. Quantum computing provides an alternative to the perturbation paradigm, but the tens of noisy qubits currently available in state-of-the-art quantum processors are of limited practical utility. In this article, we introduce perturbative quantum simulation, which combines the complementary strengths of the two approaches, enabling the solution of large practical quantum problems using noisy intermediate-scale quantum hardware. The use of a quantum processor eliminates the need to identify a solvable unperturbed Hamiltonian, while the introduction of perturbative coupling permits the quantum processor to simulate systems larger than the available number of physical qubits. After introducing the general perturbative simulation framework, we present an explicit example algorithm that mimics the Dyson series expansion. We then numerically benchmark the method for interacting bosons, fermions, and quantum spins in different topologies, and study different physical phenomena on systems of up to $48$ qubits, such as information propagation, charge-spin separation and magnetism. In addition, we use 5 physical qubits on the IBMQ cloud to experimentally simulate the $8$-qubit Ising model using our algorithm. The result verifies the noise robustness of our method and illustrates its potential for benchmarking large quantum processors with smaller ones.
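As a toy illustration of a truncated Dyson-series propagator (not the paper's algorithm, which interleaves it with quantum hardware), one can compare the exact evolution of a two-level Hamiltonian with its second-order truncation; the Hamiltonian and time step below are arbitrary choices:

```python
import numpy as np

# Toy illustration of a truncated Dyson series: for a time-independent
# 2-level Hamiltonian H, compare the exact propagator exp(-iHt) with its
# second-order truncation I - iHt - (Ht)^2 / 2 at a small time t.

H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)   # toy Hermitian Hamiltonian
t = 0.05

# exact propagator via the spectral decomposition of H
w, P = np.linalg.eigh(H)
U_exact = P @ np.diag(np.exp(-1j * w * t)) @ P.conj().T

# Dyson series truncated at second order in t
I = np.eye(2, dtype=complex)
U_dyson = I - 1j * t * H - 0.5 * (t ** 2) * (H @ H)

err = np.abs(U_exact - U_dyson).max()
print(err < 1e-4)   # truncation error is O(t^3)
```

For time-dependent Hamiltonians the Dyson terms become time-ordered integrals, which is the structure the paper's example algorithm mimics on hardware.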
Layered platinum ditelluride (PtTe2) was recently synthesized with controllable layer numbers down to the monolayer limit. Using ab initio calculations based on the anisotropic Migdal-Eliashberg formalism, we show that by rubidium (Rb) intercalation, the weak superconductivity in bilayer PtTe2 can be significantly boosted, with a superconducting Tc = 8 K in the presence of spin-orbit coupling (SOC). The intercalant, on the one hand, mediates the interlayer coupling and serves as an electron donor, leading to a large density of states at the Fermi energy; on the other hand, it increases the mass-enhancement parameter, with an electron-phonon coupling strength comparable to that of Pt. Potassium-intercalated bilayer PtTe2 has a Tc comparable to the Rb-intercalated case. The relatively high Tc with SOC, combined with experimentally accessible crystal structures, suggests that these superconductors are promising platforms to study the novel quantum physics associated with two-dimensional superconductivity, such as the recently proposed type-II Ising superconductivity.
A thorough backward stability analysis of Hotelling's deflation, an explicit external deflation procedure that computes many eigenpairs of a symmetric matrix through low-rank updates, is presented. Computable upper bounds on the loss of orthogonality of the computed eigenvectors and on the symmetric backward error norm of the computed eigenpairs are derived, and sufficient conditions for the backward stability of the explicit external deflation procedure are revealed. Based on these theoretical results, a strategy for achieving numerical backward stability by dynamically selecting the shifts is proposed. Numerical results corroborate the theoretical analysis and demonstrate the stability of the procedure for computing many eigenpairs of large symmetric matrices arising in applications.
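A minimal numpy sketch of Hotelling's deflation: after an eigenpair of a symmetric matrix is found, a rank-one update shifts that eigenvalue away so the next extreme eigenpair can be computed on the deflated matrix. The test matrix and the shift choice below are illustrative, not the paper's dynamically selected shifts:

```python
import numpy as np

# Hotelling's (explicit external) deflation: given a converged eigenpair
# (lam_max, v1) of symmetric A, subtract shift * v1 v1^T so the next
# eigensolve finds the next eigenpair instead of rediscovering v1.

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                       # symmetric test matrix

lam, V = np.linalg.eigh(A)              # reference spectrum (ascending)
v1 = V[:, -1]                           # eigenvector of the largest eigenvalue

shift = lam[-1] - lam[0]                # moves lam_max down to lam_min
A_defl = A - shift * np.outer(v1, v1)   # rank-one deflation update

lam_defl = np.linalg.eigvalsh(A_defl)
# the largest eigenvalue of A_defl is now the second largest of A
print(np.isclose(lam_defl[-1], lam[-2]))
```

In floating point the computed `v1` is only approximately an eigenvector, so each update injects a small backward error; the paper's analysis bounds how these errors accumulate over many deflation steps and how the shift magnitude controls them.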
Graph-based subspace clustering methods have exhibited promising performance, but they still suffer from several drawbacks: expensive time overhead, failure to explore explicit clusters, and inability to generalize to unseen data points. In this work, we propose a scalable graph learning framework that addresses these three challenges simultaneously. Specifically, it is based on the ideas of anchor points and bipartite graphs. Rather than building an $n\times n$ graph, where $n$ is the number of samples, we construct a bipartite graph to depict the relationship between samples and anchor points. Meanwhile, a connectivity constraint ensures that the connected components directly indicate clusters. We further establish the connection between our method and K-means clustering. Moreover, a model for multi-view data is also proposed, which scales linearly with $n$. Extensive experiments demonstrate the efficiency and effectiveness of our approach compared with many state-of-the-art clustering methods.
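A minimal sketch of the anchor-based bipartite graph idea, assuming Gaussian similarities and random anchor selection in place of the paper's learned graph and connectivity constraint:

```python
import numpy as np

# Sketch (not the paper's exact model): build an anchor-based bipartite
# graph Z of size n x m in O(n*m), instead of a full n x n affinity
# graph; anchor selection is random here for brevity.

rng = np.random.default_rng(0)
n, m, d = 1000, 20, 5
X = rng.standard_normal((n, d))                 # n samples in d dimensions
anchors = X[rng.choice(n, m, replace=False)]    # m << n anchor points

# Gaussian similarity between each sample and each anchor
d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
Z = np.exp(-d2)
Z /= Z.sum(axis=1, keepdims=True)               # row-stochastic bipartite graph

# the full sample-sample graph is implicitly Z D^{-1} Z^T and is never
# formed explicitly, which is what keeps the method linear in n
print(Z.shape)
```

The memory and time saving is the point: `Z` holds $nm$ entries versus $n^2$ for a full graph, so the construction scales linearly with the number of samples.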
Interpretation of Airborne Laser Scanning (ALS) point clouds is a critical procedure for producing various geo-information products such as 3D city models, digital terrain models, and land use maps. In this paper, we present a local and global encoder network (LGENet) for semantic segmentation of ALS point clouds. Adapting the KPConv network, we first extract features by both 2D and 3D point convolutions to allow the network to learn more representative local geometry. Global encoders are then used to exploit contextual information at the object and point levels. We design a segment-based Edge Conditioned Convolution to encode the global context between segments, and apply a spatial-channel attention module at the end of the network, which not only captures the global interdependencies between points but also models interactions between channels. We evaluate our method on two ALS datasets, the ISPRS benchmark dataset and the DFC2019 dataset. On the ISPRS benchmark dataset, our model achieves state-of-the-art results with an overall accuracy of 0.845 and an average F1 score of 0.737. On the DFC2019 dataset, our network achieves an overall accuracy of 0.984 and an average F1 score of 0.834.
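A toy sketch of the channel half of such a spatial-channel attention module, with hypothetical shapes and a parameter-free sigmoid gate standing in for LGENet's learned layers:

```python
import numpy as np

# Toy channel-attention sketch (shapes and gating are assumptions, not
# LGENet's exact design): per-channel gates computed from globally
# pooled point features reweight each feature channel.

rng = np.random.default_rng(0)
n_points, n_ch = 1024, 64
F = rng.standard_normal((n_points, n_ch))   # per-point feature matrix

g = F.mean(axis=0)                          # global average pool over points
gate = 1.0 / (1.0 + np.exp(-g))             # sigmoid channel gates in (0, 1)
F_att = F * gate                            # broadcast: reweight channels

print(F_att.shape)
```

In the actual network the pooled vector would pass through small learned layers before gating, and a spatial branch would analogously reweight points; the sketch only shows the channel-interaction mechanism.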