Multi-photon graph states are a fundamental resource in quantum communication networks, distributed quantum computing, and sensing. These states can in principle be created deterministically from quantum emitters such as optically active quantum dots or defects, atomic systems, or superconducting qubits. However, finding efficient schemes to produce such states has been a long-standing challenge. Here, we present an algorithm that, given a desired multi-photon graph state, determines the minimum number of quantum emitters and precise operation sequences that can produce it. The algorithm itself and the resulting operation sequence both scale polynomially in the size of the photonic graph state, allowing one to obtain efficient schemes to generate graph states containing hundreds or thousands of photons.
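For illustration, here is a minimal Python sketch of one ingredient of such an algorithm. For a fixed photon emission order, the number of emitters needed is lower-bounded by the bipartite entanglement across each sequential cut, and for graph states that entanglement equals the rank over GF(2) of the corresponding off-diagonal block of the adjacency matrix (a standard fact about graph states). The sketch computes that bound; the full algorithm described above also optimizes the ordering and outputs the explicit operation sequence, which is omitted here.

```python
import numpy as np

def gf2_rank(mat):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    m = (mat.copy() % 2).astype(np.int64)
    rank, rows, cols = 0, m.shape[0], m.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]               # eliminate the column entry
        rank += 1
    return rank

def min_emitters_lower_bound(adj):
    """Max GF(2) cut-rank over the sequential cuts of a fixed emission order.

    adj: n x n symmetric 0/1 adjacency matrix of the target graph state,
    with photon indices listed in emission order.
    """
    n = adj.shape[0]
    return max(gf2_rank(adj[:t, t:]) for t in range(1, n))

# Example: a 6-photon linear cluster state -- every sequential cut has
# rank 1, so the bound is a single emitter, matching known protocols.
n = 6
path = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    path[i, i + 1] = path[i + 1, i] = 1
print(min_emitters_lower_bound(path))  # -> 1
```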
We present numerical simulations of scattering-type Scanning Near-Field Optical Microscopy (s-SNOM) of 1D plasmonic graphene junctions. A comprehensive analysis of simulated s-SNOM spectra is performed for three types of junctions. We find conditions under which the conventional interpretation of plasmon reflection coefficients extracted from s-SNOM measurements does not apply. Our results are applicable to other conducting 2D materials and provide a comprehensive understanding of the s-SNOM technique for probing local transport properties of 2D materials.
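As a concrete illustration of the conventional analysis questioned above, the sketch below fits the standard tip-launched plasmon fringe model to a synthetic line profile: the tip both launches and re-collects the plasmon reflected at the junction, giving fringes with half the plasmon wavelength as their period. All parameter values and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe_model(x, s_inf, amp, q_r, q_i, phi):
    """Tip-launched plasmon interference profile near a junction: the tip
    launches and re-collects the reflected plasmon, hence the factor of 2
    in phase and decay; 1/sqrt(x) models circular-wave spreading."""
    return s_inf + (amp / np.sqrt(x)) * np.exp(-2 * q_i * x) * np.cos(2 * q_r * x + phi)

# Synthetic line profile: 200 nm plasmon wavelength, weak damping, noise.
rng = np.random.default_rng(0)
x = np.linspace(50, 1500, 300)               # tip-junction distance (nm)
true_qr, true_qi = 2 * np.pi / 200, 1 / 800  # plasmon momentum (1/nm)
data = fringe_model(x, 1.0, 2.0, true_qr, true_qi, 0.4)
data += 0.005 * rng.standard_normal(x.size)

p0 = [1.0, 1.0, 2 * np.pi / 180, 1 / 1000, 0.0]  # rough initial guesses
popt, _ = curve_fit(fringe_model, x, data, p0=p0)
print("fitted plasmon wavelength (nm):", 2 * np.pi / popt[2])
```

Extracting the complex reflection coefficient from the fitted amplitude and phase requires exactly the kind of calibration assumptions whose breakdown the paper identifies.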
Zhenkun Li, Yi Xie (2021)
Suppose $(M, \gamma)$ is a balanced sutured manifold and $K$ is a rationally null-homologous knot in $M$. It is known that the rank of the sutured Floer homology of $M \backslash N(K)$ is at least twice the rank of the sutured Floer homology of $M$. This paper studies the properties of $K$ when the equality is achieved for instanton homology. As an application, we show that if $L \subset S^3$ is a fixed link and $K$ is a knot in the complement of $L$, then the instanton link Floer homology of $L \cup K$ achieves the minimum rank if and only if $K$ is the unknot in $S^3 \backslash L$.
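In symbols, the rank inequality whose equality case the paper characterizes can be sketched as follows (notation illustrative, sutures suppressed; $SHI$ denotes the sutured instanton Floer homology):

```latex
% Sketch; the paper studies the knots K for which equality holds.
\operatorname{rk} SHI\bigl(M \backslash N(K)\bigr) \;\ge\; 2\,\operatorname{rk} SHI(M).
```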
Kun Li, Peiming Li, Yong Zeng (2021)
Channel knowledge map (CKM) is an emerging technique to enable environment-aware wireless communications, in which databases with location-specific channel knowledge are used to facilitate, or even obviate, real-time channel state information acquisition. One fundamental problem for CKM-enabled communication is how to efficiently construct the CKM from a finite number of measurement data points at limited user locations. Towards this end, this paper proposes a novel map construction method based on the expectation maximization (EM) algorithm, which combines the available measurement data with the expert knowledge of well-established statistical channel models. The key idea is to partition the available data points into different groups, where each group shares the same modelling parameter values to be determined. We show that determining the modelling parameter values can be formulated as a maximum likelihood estimation problem with latent variables, which is then efficiently solved by the classic EM algorithm. Compared to purely data-driven methods such as nearest-neighbor interpolation, the proposed method is more efficient since only a small number of modelling parameters need to be determined and stored. Furthermore, the proposed method is extended to construct a specific type of CKM, namely the channel gain map (CGM), for which closed-form expressions are derived for the E-step and M-step of the EM algorithm. Numerical results show the effectiveness of the proposed map construction method compared to the benchmark of curve fitting with a single model.
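A minimal sketch of this approach, assuming the expert model is the standard log-distance path-loss model with Gaussian shadowing; the grouping, initialization, and parameter names are illustrative, not the paper's exact closed forms. Each measurement is softly assigned to one of K parameter groups in the E-step, and each group's (alpha, beta, sigma) is refit by weighted least squares in the M-step.

```python
import numpy as np

def em_channel_gain_map(d, g, K=2, iters=50, seed=0):
    """EM fit of a K-group log-distance path-loss mixture:
    g ~ Normal(beta_k - 10 * alpha_k * log10(d), sigma_k^2), k latent.
    d: distances; g: measured channel gains (dB) at those locations."""
    rng = np.random.default_rng(seed)
    x = np.log10(d)
    alpha = rng.uniform(2.0, 4.0, K)                 # path-loss exponents
    beta = rng.uniform(-40.0, -20.0, K)              # intercepts (dB)
    sigma = np.full(K, g.std() + 1e-6)               # shadowing std (dB)
    pi = np.full(K, 1.0 / K)                         # mixing weights
    X = np.column_stack([np.ones_like(x), -10 * x])  # regressors: [beta, alpha]
    for _ in range(iters):
        # E-step: posterior responsibility of each group for each sample.
        mu = beta[:, None] - 10 * alpha[:, None] * x[None, :]
        logp = (-0.5 * ((g - mu) / sigma[:, None]) ** 2
                - np.log(sigma[:, None]) + np.log(pi[:, None]))
        logp -= logp.max(axis=0)
        r = np.exp(logp)
        r /= r.sum(axis=0)
        # M-step: closed-form weighted least squares per group.
        for k in range(K):
            w = np.sqrt(r[k])
            coef, *_ = np.linalg.lstsq(X * w[:, None], g * w, rcond=None)
            beta[k], alpha[k] = coef
            resid = g - X @ coef
            sigma[k] = np.sqrt(np.average(resid ** 2, weights=r[k]) + 1e-9)
        pi = r.mean(axis=1)
    return alpha, beta, sigma, pi
```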
Complex reasoning aims to draw correct inferences based on complex rules. As a hallmark of human intelligence, it involves explicit reading comprehension, interpretation of logical knowledge, and complex rule application. In this paper, we take a step forward in complex reasoning by systematically studying the three challenging and domain-general tasks of the Law School Admission Test (LSAT): analytical reasoning, logical reasoning, and reading comprehension. We propose a hybrid reasoning system that integrates these three tasks and achieves impressive overall performance on the LSAT tests. The experimental results demonstrate that our system exhibits a certain level of complex reasoning ability, particularly in fundamental reading comprehension and challenging logical reasoning. Further analysis shows the effectiveness of combining pre-trained models with task-specific reasoning modules, and of integrating symbolic knowledge into discrete, interpretable reasoning steps. We also shed light on potential future directions, such as unsupervised symbolic knowledge extraction, model interpretability, few-shot learning, and comprehensive benchmarks for complex reasoning.
This technical report presents an overview of our solution submitted to the 2021 HACS Temporal Action Localization Challenge on both the Supervised Learning Track and the Weakly-Supervised Learning Track. Temporal Action Localization (TAL) requires not only precisely locating the temporal boundaries of action instances, but also accurately classifying the untrimmed videos into specific categories. Weakly-Supervised TAL (WSTAL), by contrast, requires locating the action instances using only video-level class labels. In this paper, to train a supervised temporal action localizer, we adopt the Temporal Context Aggregation Network (TCANet) to generate high-quality action proposals through local and global temporal context aggregation, together with complementary and progressive boundary refinement. For WSTAL, a novel framework is proposed to handle the poor quality of the class activation sequence (CAS) generated by a simple classification network, which tends to focus only on locally discriminative parts rather than the entire interval of target actions. Inspired by transfer learning, we also adopt an additional module to transfer knowledge from trimmed videos (the HACS Clips dataset) to untrimmed videos (the HACS Segments dataset), aiming to improve classification performance on untrimmed videos. Finally, we employ a boundary regression module with an Outer-Inner-Contrastive (OIC) loss to automatically predict boundaries based on the enhanced CAS. Our proposed scheme achieves 39.91 and 29.78 average mAP on the challenge testing sets of the supervised and weakly-supervised temporal action localization tracks, respectively.
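A hedged numpy sketch of the OIC objective (originating in AutoLoc by Shou et al.), which the report embeds in its boundary regression module; the challenge entry's exact differentiable formulation is not given in the abstract, so this is illustrative only.

```python
import numpy as np

def oic_loss(cas, start, end, inflation=0.25):
    """Outer-Inner-Contrastive loss on a 1D class activation sequence (CAS):
    mean activation in the inflated outer ring minus mean activation inside
    [start, end). Minimizing it rewards intervals with high inner and low
    surrounding activation, i.e., complete action instances."""
    T = len(cas)
    length = end - start
    o_start = max(0, int(round(start - inflation * length)))
    o_end = min(T, int(round(end + inflation * length)))
    inner = cas[start:end].mean()
    outer_vals = np.concatenate([cas[o_start:start], cas[end:o_end]])
    outer = outer_vals.mean() if outer_vals.size else 0.0
    return outer - inner

# Synthetic CAS with one action in frames 40..60: the true interval
# scores a lower (better) loss than a half-covering guess.
cas = np.full(100, 0.1)
cas[40:60] = 0.9
print(oic_loss(cas, 40, 60))   # ~ -0.8
print(oic_loss(cas, 50, 70))   # worse (higher)
```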
We investigate the effectiveness of different machine learning methodologies in predicting economic cycles. We identify a Bi-LSTM with Autoencoder as the most accurate deep learning model for forecasting the beginning and end of economic recessions in the U.S. We adopt commonly available macro and market-condition features to compare the ability of different machine learning models to generate good predictions both in-sample and out-of-sample. The proposed model is flexible and dynamic, allowing both the predictive variables and the model coefficients to vary over time. It provides good out-of-sample predictions for the past two recessions and early warning of the COVID-19 recession.
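One plausible reading of the "Bi-LSTM with Autoencoder" architecture, sketched in Keras: a bidirectional LSTM encoder feeds both a reconstruction decoder (the autoencoder branch) and a recession classification head. Window length, feature count, layer sizes, and loss weights are illustrative assumptions, not the paper's specification.

```python
from tensorflow import keras
from tensorflow.keras import layers

T, F = 12, 8  # assumed: 12-month window of 8 macro/market features

inputs = keras.Input(shape=(T, F))
# Bi-LSTM encoder compresses the window into a latent vector.
h = layers.Bidirectional(layers.LSTM(16))(inputs)
latent = layers.Dense(8, activation="tanh", name="latent")(h)
# Decoder reconstructs the input window (the autoencoder branch).
d = layers.RepeatVector(T)(latent)
d = layers.Bidirectional(layers.LSTM(16, return_sequences=True))(d)
recon = layers.TimeDistributed(layers.Dense(F), name="recon")(d)
# Classification head predicts the recession indicator.
prob = layers.Dense(1, activation="sigmoid", name="recession")(latent)

model = keras.Model(inputs, [recon, prob])
model.compile(optimizer="adam",
              loss={"recon": "mse", "recession": "binary_crossentropy"},
              loss_weights={"recon": 0.5, "recession": 1.0})
model.summary()
```

The reconstruction loss regularizes the latent representation, which is one common rationale for pairing an autoencoder with a recession classifier on scarce labeled data.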
Reliable (accurate and precise) quantification of dose requires reliable absolute quantification of regional activity uptake. This is especially challenging for alpha-particle-emitting radiopharmaceutical therapies (α-RPTs) due to the complex emission spectra, the very low number of detected counts, the impact of stray-radiation-related noise at these low counts, and other image-degrading processes such as attenuation, scatter, and collimator-detector response. Conventional reconstruction-based quantification methods are observed to be erroneous for α-RPT SPECT. To address these challenges, we developed an ultra-low-count quantitative SPECT (ULC-QSPECT) method that incorporates multiple strategies to perform reliable quantification. First, the method directly estimates the regional activity uptake from the projection data, obviating the reconstruction step. This makes the problem more well-posed and avoids reconstruction-related information loss. Next, the method compensates for radioisotope and SPECT physics, including the isotope spectra, scatter, attenuation, and collimator-detector response, using a Monte Carlo-based approach. Further, the method compensates for stray-radiation-related noise, which becomes substantial at these low count levels. The method was validated in the context of three-dimensional SPECT with 223Ra, using realistic simulation studies as well as synthetic and anthropomorphic physical-phantom studies. Across all studies, the ULC-QSPECT method yielded reliable estimates of regional uptake and outperformed conventional ordered-subset expectation maximization (OSEM)-based reconstruction and geometric transfer matrix (GTM)-based partial-volume compensation methods. Further, the method yielded reliable estimates of mean uptake in lesions with varying intra-lesion heterogeneity in uptake.
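A toy sketch of the projection-domain estimation idea: with y ~ Poisson(H a + s), where H maps regional uptakes to projection bins and s is the known stray-radiation mean, the maximum-likelihood uptakes can be found with an MLEM-style fixed point. The actual ULC-QSPECT method builds H with Monte Carlo modeling of the isotope spectra, scatter, attenuation, and collimator-detector response; this sketch assumes H is given.

```python
import numpy as np

def regional_mle(y, H, stray, iters=200):
    """MLEM-style maximum-likelihood estimate of regional activities a from
    projection data y ~ Poisson(H @ a + stray). H maps regions to projection
    bins; stray is the known mean of the stray-radiation-related noise."""
    a = np.ones(H.shape[1])
    sens = H.sum(axis=0)                      # per-region sensitivity
    for _ in range(iters):
        expected = H @ a + stray
        a *= (H.T @ (y / np.maximum(expected, 1e-12))) / sens
    return a

# Toy check: 3 regions, 64 projection bins, Poisson data with background.
rng = np.random.default_rng(1)
H = rng.uniform(0.0, 0.2, size=(64, 3))
a_true = np.array([50.0, 10.0, 120.0])
y = rng.poisson(H @ a_true + 2.0)
print(regional_mle(y, H, stray=2.0))          # ~ a_true, up to noise
```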
Conversational Question Simplification (CQS) aims to simplify self-contained questions into conversational ones by incorporating conversational characteristics such as anaphora and ellipsis. Existing maximum likelihood estimation (MLE) based methods often get trapped in easily learned tokens, as all tokens are treated equally during training. In this work, we introduce a Reinforcement Iterative Sequence Editing (RISE) framework that optimizes the minimum Levenshtein distance (MLD) through explicit editing actions. RISE is able to pay attention to tokens that are related to conversational characteristics. To train RISE, we devise an Iterative Reinforce Training (IRT) algorithm with a Dynamic Programming based Sampling (DPS) process to improve exploration. Experimental results on two benchmark datasets show that RISE significantly outperforms state-of-the-art methods and generalizes well to unseen data.
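Since RISE optimizes the minimum Levenshtein distance through explicit editing actions, the underlying dynamic program is worth making concrete. Below is a sketch of token-level MLD with a backtrace into edit actions; the action inventory and token granularity are illustrative, and RISE itself learns when to apply such actions via reinforcement rather than computing them directly.

```python
def levenshtein_actions(src, tgt):
    """Minimum Levenshtein distance between token lists, with a backtrace
    into one optimal sequence of explicit edit actions (KEEP/SUB/DEL/INS)."""
    n, m = len(src), len(tgt)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            D[i][j] = min(D[i - 1][j - 1] + cost,  # keep / substitute
                          D[i - 1][j] + 1,         # delete src token
                          D[i][j - 1] + 1)         # insert tgt token
    actions, i, j = [], n, m
    while i or j:
        if i and j and D[i][j] == D[i - 1][j - 1] + (src[i - 1] != tgt[j - 1]):
            kind = "KEEP" if src[i - 1] == tgt[j - 1] else "SUB"
            actions.append((kind, src[i - 1], tgt[j - 1]))
            i, j = i - 1, j - 1
        elif i and D[i][j] == D[i - 1][j] + 1:
            actions.append(("DEL", src[i - 1], None))
            i -= 1
        else:
            actions.append(("INS", None, tgt[j - 1]))
            j -= 1
    return D[n][m], actions[::-1]

# Self-contained -> conversational: 2 edits, e.g. deleting one name token
# and substituting the other with the anaphoric pronoun "he".
dist, actions = levenshtein_actions(
    "when was barack obama born".split(),
    "when was he born".split())
print(dist)     # 2
print(actions)  # one optimal KEEP/SUB/DEL alignment
```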
Wave propagation problems have many applications in physics and engineering, and stochastic effects are important for modeling them accurately due to uncertainty in the media. This paper considers and analyzes a fully discrete finite element method for a class of nonlinear stochastic wave equations, where the diffusion term is globally Lipschitz continuous while the drift term is only assumed to satisfy weaker conditions, as in [11]. The novelties of this paper are threefold. First, the error estimates cannot be directly obtained if the numerical scheme in primal form is used. A numerical scheme in mixed form is therefore introduced, and several Hölder continuity results for the strong solution are proved; these are used to establish error estimates in both the $L^2$ norm and the energy norm. Second, two types of discretization of the nonlinear term are proposed to establish the $L^2$ stability and energy stability of the discrete solutions. These two discretizations, together with properly chosen test functions, are designed to overcome the challenges arising from the stochastic time scaling and the nonlinear interaction. These stability results play a key role in proving that the probability of the set on which the error estimates hold approaches one. Third, higher-order moment stability of the discrete solutions is proved based on an energy argument and the underlying energy-decaying property of the method. Numerical experiments are presented to illustrate the stability of the discrete solutions and the convergence rates in various norms.
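For concreteness, the mixed (first-order-in-time) form referred to above can be sketched as follows; this is a generic formulation consistent with the abstract, with the precise assumptions on f and g as in [11]:

```latex
% u = displacement, v = velocity, W = Wiener process;
% g globally Lipschitz, f under the weaker conditions of [11].
\begin{aligned}
  du &= v \, dt, \\
  dv &= \bigl[ \Delta u + f(u) \bigr] dt + g(u) \, dW(t).
\end{aligned}
```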