
We derive a hierarchy of matrix product states (HOMPS) method that is numerically exact and efficient for general non-Markovian dynamics in open quantum systems. HOMPS attacks the exponential-wall issue in the recently developed hierarchy of pure states (HOPS) scheme in two steps: (a) finding an effective time-dependent Schrödinger equation that is equivalent to HOPS, and (b) propagating this equation within the matrix product states/operators (MPS/MPO) representation. HOMPS works in linear form and takes finite-temperature effects into account straightforwardly through the initial pure state. Applications of HOMPS to the spin-boson model, covering both high and low temperatures, demonstrate the validity and efficiency of the new approach.
203 - Xing Gao, Li Guo, Huhu Zhang, 2021
Many years ago, Rota proposed a program on determining the algebraic identities that can be satisfied by linear operators. After an extended period of dormancy, progress on this program has picked up speed in recent years, thanks to perspectives from operated algebras and Gröbner-Shirshov bases. These advances were achieved in a series of papers, from special cases to more general situations. These perspectives also indicate that Rota's insight can be manifested very broadly, for other algebraic structures such as Lie algebras, and further in the context of operads. This paper surveys the motivation, early developments and recent advances on Rota's program for linear operators on associative algebras and Lie algebras. Emphasis is given to applications of rewriting systems and Gröbner-Shirshov bases. Problems, old and new, are proposed throughout the paper to prompt further developments on Rota's program.
In this paper, we obtain some new linear bases of free unitary (modified) weighted differential algebras and of free nonunitary (modified) Rota-Baxter algebras, by means of the method of Gröbner-Shirshov bases.
Graph convolution networks, like message passing graph convolution networks (MPGCNs), have been a powerful tool in representation learning of networked data. However, when data is heterogeneous, most architectures are limited, as they employ a single strategy to handle multi-channel graph signals and typically focus on low-frequency information. In this paper, we present a novel graph convolution operator, termed BankGCN, which keeps the benefits of message passing models but extends their capabilities beyond 'low-pass' features. It decomposes multi-channel signals on graphs into subspaces and handles the particular information in each subspace with an adapted filter. The filters of all subspaces have different frequency responses and together form a filter bank. Furthermore, each filter in the spectral domain corresponds to a message passing scheme, and diverse schemes are implemented via the filter bank. Importantly, the filter bank and the signal decomposition are jointly learned to adapt to the spectral characteristics of the data and to the target applications. Moreover, this is implemented almost without extra parameters compared with most existing MPGCNs. Experimental results show that the proposed convolution operator achieves excellent performance in graph classification on a collection of benchmark graph datasets.
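As a rough illustration of the filter-bank idea described above (not the paper's implementation), the sketch below applies a different polynomial spectral filter to each channel subspace; each power of the Laplacian corresponds to one message-passing hop. The function name, the fixed equal channel split, and the hand-chosen coefficients are all assumptions for illustration; in BankGCN the decomposition and filters are learned.

```python
import numpy as np

def filter_bank_conv(L, X, coeffs):
    """Apply a bank of polynomial spectral filters, one per channel subspace.

    L      : (n x n) graph Laplacian
    X      : (n x C) multi-channel graph signal
    coeffs : list of per-subspace polynomial coefficients [c0, c1, ...];
             the subspace filter is sum_k c_k L^k, and each order-k term
             amounts to a k-hop message-passing step.
    """
    n, _ = X.shape
    # Split channels into equal subspaces (a learned decomposition in the
    # paper; a fixed split here, purely for illustration).
    parts = np.array_split(X, len(coeffs), axis=1)
    out = []
    for c, Xi in zip(coeffs, parts):
        Y = np.zeros_like(Xi)
        Lk = np.eye(n)                # L^0
        for ck in c:
            Y += ck * (Lk @ Xi)       # accumulate c_k * L^k @ Xi
            Lk = L @ Lk               # next power of L
        out.append(Y)
    return np.concatenate(out, axis=1)

# Tiny example: a 2-node path graph, identity filter vs. pure-L filter.
L = np.array([[1.0, -1.0], [-1.0, 1.0]])
X = np.ones((2, 2))
out = filter_bank_conv(L, X, [[1.0], [0.0, 1.0]])
```

The constant signal passes through the identity filter unchanged, while the pure-Laplacian filter annihilates it, showing the two subspaces receiving different frequency responses.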
Lightweight convolutional neural networks are an important research topic in the field of embedded vision. To implement image recognition tasks on a resource-limited hardware platform, it is necessary to reduce the memory size and the computational cost. The contribution of this paper is as follows. First, we propose an algorithm to process a specific network architecture (Condensation-Net) without increasing the maximum memory storage for feature maps. The architecture with virtual feature maps saves 26.5% of memory bandwidth by calculating the results of cross-channel pooling before storing the feature map into memory. Second, we show that cross-channel pooling can improve the accuracy of object detection tasks, such as face detection, because it increases the number of filter weights. Compared with Tiny-YOLOv2, the improvement in accuracy is 2.0% for quantized networks and 1.5% for full-precision networks when the false-positive rate is 0.1. Last but not least, the analysis shows that the overhead of supporting cross-channel pooling with the proposed hardware architecture is negligibly small. The extra memory cost to support Condensation-Net is 0.2% of the total size, and the extra gate count is only 1.0% of the total.
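The memory saving from computing cross-channel pooling before the write-back can be sketched as follows. This is a minimal generic illustration; the function name, grouping factor, and array layout are assumptions, not Condensation-Net's exact design.

```python
import numpy as np

def cross_channel_pool(fmap, group):
    """Max-pool across channels in groups of `group` channels.

    fmap : (C, H, W) feature map, with C divisible by `group`.
    Applying this before storing the feature map shrinks the stored
    tensor by a factor of `group`, which is where the bandwidth
    saving comes from.
    """
    C, H, W = fmap.shape
    return fmap.reshape(C // group, group, H, W).max(axis=1)

x = np.arange(16, dtype=float).reshape(4, 2, 2)   # 4 channels of 2x2
y = cross_channel_pool(x, group=2)
print(y.shape)  # (2, 2, 2): only half the channels need to be stored
```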
187 - Xing Gao, Li Guo, Huhu Zhang, 2021
Various compatibility conditions among replicated copies of operations in a given algebraic structure have appeared in broad contexts in recent years. Taking a uniform approach, this paper gives an operadic study of compatibility conditions for nonsymmetric operads with unary and binary operations, and homogeneous quadratic and cubic relations. This generalizes previous studies of binary quadratic operads. We consider three compatibility conditions, namely the linear compatibility, matching compatibility and total compatibility, with increasingly strict constraints among the replicated copies. The linear compatibility is Koszul dual to the total compatibility, while the matching compatibility is self-dual. Further, each compatibility can be expressed in terms of either one or both of the two Manin square products.
158 - Zhaoxing Gao, Ruey S. Tsay, 2021
This paper proposes a hierarchical approximate-factor approach to analyzing high-dimensional, large-scale heterogeneous time series data using distributed computing. The new method employs a multiple-fold dimension reduction procedure using Principal Component Analysis (PCA) and shows great promise for modeling large-scale data that cannot be stored or analyzed by a single machine. Each computer at the basic level performs a PCA to extract common factors among the time series assigned to it and transfers those factors to one and only one node of the second level. Each 2nd-level computer collects the common factors from its subordinates and performs another PCA to select the 2nd-level common factors. This process is repeated until the central server is reached, which collects common factors from its direct subordinates and performs a final PCA to select the global common factors. The noise terms of the 2nd-level approximate factor model are the unique common factors of the 1st-level clusters. We focus on the case of 2 levels in our theoretical derivations, but the idea can easily be generalized to any finite number of hierarchies. We discuss some clustering methods for when the group memberships are unknown and introduce a new diffusion index approach to forecasting. We further extend the analysis to unit-root nonstationary time series. Asymptotic properties of the proposed method are derived for the diverging dimension of the data in each computing unit and the sample size $T$. We use both simulated data and real examples to assess the performance of the proposed method in finite samples, and compare our method with the commonly used ones in the literature concerning the forecastability of extracted factors.
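The two-level extraction described above can be sketched in a few lines: each node runs PCA on its own series and ships only its factors upward, and the server runs PCA on the pooled factors. This is a minimal single-machine sketch of the data flow, with assumed function names and a synthetic one-factor example, not the paper's estimator or its distributed implementation.

```python
import numpy as np

def extract_factors(X, k):
    """Return the first k principal-component factor series of X (T x n)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered panel; scaled left singular vectors estimate factors.
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * s[:k]

def hierarchical_factors(groups, k_local, k_global):
    """Two-level approximate-factor extraction.

    groups  : list of (T x n_g) arrays, one per first-level computing node
    k_local : number of common factors extracted at each node
    k_global: number of global factors selected at the central server
    """
    # Level 1: each node performs PCA on its own series.
    local = [extract_factors(X, k_local) for X in groups]
    # Level 2 (server): PCA on the pooled local factors.
    return extract_factors(np.hstack(local), k_global)

# Synthetic check: one true global factor drives every group.
rng = np.random.default_rng(0)
T = 200
f = rng.standard_normal((T, 1))
groups = [f @ rng.standard_normal((1, 10))
          + 0.1 * rng.standard_normal((T, 10)) for _ in range(3)]
g = hierarchical_factors(groups, k_local=2, k_global=1)
```

The recovered global factor `g` should be nearly perfectly correlated (up to sign) with the true factor `f`.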
360 - Xiang Gao, Kai Li, Xing Gao, 2021
The first photometric analysis of V811 Cep was carried out. The first complete light curves in the V, R and I bands are given. The analysis was carried out with the Wilson-Devinney (W-D) program, and the results show that V811 Cep is a median-contact binary ($f=33.9(\pm4.9)\%$) with a mass ratio of 0.285. It is a W-subtype contact binary, that is, the component with less mass is hotter than the component with more mass, and the light curves are asymmetric (O'Connell effect), which can be explained by the existence of a hot spot on the less massive component. The orbital inclination is $i=88.3^{\circ}$, indicating that it is a totally eclipsing binary, so the parameters obtained are reliable. Through the O-C analysis, it is found that the orbital period decreases at a rate of $\dot{P}=-3.90(\pm0.06)\times 10^{-7}\,\mathrm{d}\cdot \mathrm{yr}^{-1}$, which indicates that mass transfer occurs from the more massive component to the less massive one.
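For context, the standard link between a quadratic O-C trend and a period-change rate can be worked through numerically. All numbers below are hypothetical, purely to illustrate the arithmetic; they are not the paper's data for V811 Cep.

```python
# If O-C (in days) vs. cycle number E follows O-C = a*E^2, then the period
# changes by dP/dE = 2a per cycle, so the rate per unit time is
# dP/dt = 2a / P (days per day). Hypothetical values for illustration:
P = 0.5        # assumed orbital period in days
a = -2.7e-10   # assumed quadratic O-C coefficient in days per cycle^2
dP_dt = 2 * a / P                  # days per day; negative => period decrease
dP_dt_per_year = dP_dt * 365.25    # convert to days per year
print(dP_dt_per_year)              # on the order of -1e-7 d/yr
```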
Graph neural networks have attracted wide attention for representation learning of graph data in recent works. Complementary to graph convolution operators, graph pooling is crucial for extracting hierarchical representations of graph data. However, most recent graph pooling methods still fail to efficiently exploit the geometry of graph data. In this paper, we propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology. Node proximity is obtained by harmonizing the kernel representation of topology information and node features. The implicit structure-aware kernel representation of topology information allows efficient graph pooling without explicit eigendecomposition of the graph Laplacian. Similarities of node signals are adaptively evaluated with a combination of an affine transformation and the kernel trick using the Gaussian RBF function. Experimental results demonstrate that the proposed graph pooling strategy achieves state-of-the-art performance on a collection of public graph classification benchmark datasets.
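The similarity-evaluation step (affine transformation plus Gaussian RBF kernel) can be sketched as follows. The function name, the optional identity affine map, and the `gamma` value are assumptions for illustration; in the paper the affine transformation is learned.

```python
import numpy as np

def rbf_similarity(H, W=None, gamma=0.5):
    """Pairwise node similarities via an affine map and a Gaussian RBF kernel.

    H : (n x d) node feature matrix
    W : (d x d') affine transformation; identity map if None
    """
    Z = H if W is None else H @ W
    # Squared Euclidean distances between all node pairs.
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

H = np.array([[0.0, 0.0],
              [0.0, 0.1],
              [5.0, 5.0]])
S = rbf_similarity(H)
# Nearby nodes get similarity near 1; distant nodes near 0.
```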
Deploying deep models on embedded devices has been a challenging problem since the great success of deep learning based networks. Fixed-point networks, which represent their data with low-bit fixed-point values and thus give remarkable savings in memory usage, are generally preferred. Even though current fixed-point networks employ relatively low bit widths (e.g. 8 bits), the memory saving is far from sufficient for embedded devices. On the other hand, quantization networks, for example XNOR-Net and HWGQ-Net, quantize the data into 1 or 2 bits, resulting in more significant memory savings but still containing lots of floating-point data. In this paper, we propose a fixed-point network for embedded vision tasks by converting the floating-point data in a quantization network into fixed-point. Furthermore, to overcome the data loss caused by the conversion, we propose to compose floating-point data operations across multiple layers (e.g. convolution, batch normalization and quantization layers) and convert them into fixed-point. We name the fixed-point network obtained through such integrated conversion the Integrated Fixed-point Quantization Network (IFQ-Net). We demonstrate that our IFQ-Net gives 2.16x and 18x more savings in model size and runtime feature-map memory, respectively, with similar accuracy on ImageNet. Furthermore, based on YOLOv2, we design the IFQ-Tinier-YOLO face detector, a fixed-point network with a 256x reduction in model size (246k Bytes) relative to Tiny-YOLO. We illustrate the promising performance of our face detector in terms of detection rate on the Face Detection Data Set and Benchmark (FDDB) and qualitative results on detecting small faces in the Wider Face dataset.
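The basic float-to-fixed-point conversion underlying such networks can be sketched as below. This is a generic signed fixed-point quantizer with assumed function names and bit widths, not the IFQ-Net integrated conversion itself.

```python
import numpy as np

def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Quantize floats to signed fixed-point with frac_bits fractional bits."""
    scale = 1 << frac_bits
    # Representable range of a signed total_bits integer.
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def from_fixed_point(q, frac_bits=8):
    """Recover the float value represented by a fixed-point integer."""
    return q.astype(np.float64) / (1 << frac_bits)

x = np.array([0.5, -1.25, 3.14159])
q = to_fixed_point(x)
print(from_fixed_point(q))  # each value rounded to the nearest 1/256
```

The worst-case rounding error is half a quantization step (1/512 here); composing several floating-point layers before converting, as the abstract describes, is one way to keep such errors from accumulating layer by layer.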