
177 - Yong Wang 2021
The well-known process algebras, such as CCS, ACP and the $\pi$-calculus, capture interleaving concurrency based on bisimilarity semantics. We did some work on truly concurrent process algebras, such as CTC, APTC and $\pi_{tc}$, which capture true concurrency based on truly concurrent bisimilarities, such as pomset bisimilarity, step bisimilarity, history-preserving (hp-) bisimilarity and hereditary history-preserving (hhp-) bisimilarity. Truly concurrent process algebras are generalizations of the corresponding traditional process algebras. In this book, we introduce localities into truly concurrent process algebras, based on the work on process algebra with localities.
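For orientation (a standard illustration, not an example taken from the book): the gap between interleaving and true concurrency is already visible for two independent actions $a$ and $b$. Under interleaving semantics the expansion law identifies the parallel composition with its interleavings,

$$a \parallel b \;\sim\; a.b + b.a,$$

whereas under a truly concurrent semantics the behaviour of $a \parallel b$ is the pomset containing the two events $a$ and $b$ with no causal order between them, so $a \parallel b$ and $a.b + b.a$ are not pomset bisimilar. The truly concurrent bisimilarities listed above all make this distinction.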
Removing hazardous particulate and macromolecular contaminants as well as viruses with sizes from a few nm up to the 100-nm range from water and air is crucial for ensuring sufficient sanitation and hygiene for a growing world population. To this end, high-performance separation membranes are needed that combine high permeance, high selectivity and sufficient mechanical stability under operating conditions. However, design features of separation membranes that enhance permeance reduce selectivity and vice versa. Membrane configurations combining high permeance and high selectivity suffer in turn from a lack of mechanical robustness. These problems may be tackled by using block copolymers (BCPs) as a material platform for the design of separation membranes. BCPs are macromolecules that consist of two or more chemically distinct block segments, which undergo microphase separation yielding a wealth of ordered nanoscopic domain structures. Various methods allow the transformation of these nanoscopic domain structures into customized nanopore systems with pore sizes in the sub-100-nm range and with narrow pore size distributions. This tutorial review summarizes design strategies for state-of-the-art nanoporous BCP separation membranes, their preparation, their device integration and their use for water purification.
77 - Tong Wu, Yong Wang 2021
In this paper, we define deformed Schouten-Van Kampen connections, which are metric connections, and compute sub-Riemannian limits of the Gaussian curvature for a Euclidean $C^2$-smooth surface associated with deformed Schouten-Van Kampen connections with two kinds of distributions in the affine group and the group of rigid motions of the Minkowski plane, away from characteristic points, as well as the signed geodesic curvature for Euclidean $C^2$-smooth curves on surfaces. From the above results, we obtain Gauss-Bonnet theorems associated with the two kinds of deformed Schouten-Van Kampen connections in the affine group and the group of rigid motions of the Minkowski plane.
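For reference (this is the classical statement that such results generalize, not a formula from the paper), the Gauss-Bonnet theorem for a compact surface $M$ with boundary $\partial M$ reads

$$\int_M K \, dA + \int_{\partial M} k_g \, ds = 2\pi \chi(M),$$

where $K$ is the Gaussian curvature, $k_g$ the geodesic curvature of the boundary and $\chi(M)$ the Euler characteristic. As the abstract describes it, the theorems obtained here are of this shape, with $K$ and $k_g$ replaced by the sub-Riemannian limits of the corresponding quantities computed with the deformed Schouten-Van Kampen connections.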
With the help of the largest data samples of $J/\psi$ and $\psi(2S)$ events ever produced in $e^+e^-$ annihilations, the three singlet charmonium states, $\eta_c(1S)$, $\eta_c(2S)$ and $h_c(1P)$, have been extensively studied at the BESIII experiment. In this review, a survey of the most recent results, including a series of precision measurements and observations of new decay modes, is presented; it indicates that further investigations of their decays are needed to understand their decay mechanisms and to provide precision tests of theoretical models. At present, data samples about eight times larger, consisting of 10 billion $J/\psi$ events and 3 billion $\psi(3686)$ events, have been collected with the BESIII detector, and the prospects for the study of these three charmonium states are therefore discussed extensively.
81 - Yong Wang 2021
The well-known process algebras, such as CCS, ACP and the $\pi$-calculus, capture interleaving concurrency based on bisimilarity semantics. We did some work on truly concurrent process algebras, such as CTC, APTC and $\pi_{tc}$, which capture true concurrency based on truly concurrent bisimilarities, such as pomset bisimilarity, step bisimilarity, history-preserving (hp-) bisimilarity and hereditary history-preserving (hhp-) bisimilarity. Truly concurrent process algebras are generalizations of the corresponding traditional process algebras. In this book, we introduce reversibility, probabilism, and guards into the truly concurrent calculus $\pi_{tc}$.
Recently, learning-based algorithms have shown impressive performance in underwater image enhancement. Most of them resort to training on synthetic data and achieve outstanding performance. However, these methods ignore the significant domain gap between the synthetic and real data (i.e., the inter-domain gap), and thus models trained on synthetic data often fail to generalize well to real underwater scenarios. Furthermore, the complex and changeable underwater environment also causes a great distribution gap within the real data itself (i.e., the intra-domain gap). However, almost no research focuses on this problem, and existing techniques often produce visually unpleasing artifacts and color distortions on various real images. Motivated by these observations, we propose a novel Two-phase Underwater Domain Adaptation network (TUDA) to simultaneously minimize the inter-domain and intra-domain gaps. Concretely, a new dual-alignment network is designed in the first phase, including a translation part for enhancing the realism of input images, followed by an enhancement part. By performing image-level and feature-level adaptation in the two parts through joint adversarial learning, the network can better build invariance across domains and thus bridge the inter-domain gap. In the second phase, we perform an easy-hard classification of real data according to the assessed quality of enhanced images, in which a rank-based underwater quality assessment method is embedded. By leveraging implicit quality information learned from rankings, this method can more accurately assess the perceptual quality of enhanced images. Using pseudo labels from the easy part, an easy-hard adaptation technique is then conducted to effectively decrease the intra-domain gap between easy and hard samples.
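A minimal sketch of how the second phase could be organized, based only on the abstract: a (hypothetical) rank-based quality model scores each enhanced image, images above a threshold form the easy subset whose enhanced outputs serve as pseudo labels, and the rest form the hard subset targeted by the easy-hard adaptation. The names enhancer, quality_model and the threshold tau are assumptions for illustration, not the authors' API.

import torch

def easy_hard_split(real_images, enhancer, quality_model, tau=0.5):
    """Illustrative sketch of a phase-two easy-hard split (not the authors'
    code): score each enhanced image with a rank-based quality model and keep
    the enhancer's outputs on high-scoring images as pseudo labels."""
    easy, pseudo_labels, hard = [], [], []
    with torch.no_grad():
        for x in real_images:                 # x: (3, H, W) tensor
            y = enhancer(x.unsqueeze(0))      # enhanced image, (1, 3, H, W)
            score = quality_model(y).item()   # assumed scalar quality score
            if score >= tau:                  # "easy" sample: output looks reliable
                easy.append(x)
                pseudo_labels.append(y.squeeze(0))
            else:                             # "hard" sample: needs adaptation
                hard.append(x)
    return easy, pseudo_labels, hard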
Most deep models for underwater image enhancement resort to training on synthetic datasets based on underwater image formation models. Although promising performance has been achieved, they are still limited by two problems: (1) existing underwater image synthesis models have an intrinsic limitation, in which the homogeneous ambient light is usually randomly generated and many important dependencies are ignored, so the synthesized training data cannot adequately express the characteristics of real underwater environments; (2) most deep models disregard many favorable underwater priors and rely heavily on training data, which severely limits their application range. To address these limitations, a new underwater synthetic dataset is first established, in which a revised ambient light synthesis equation is embedded. The revised equation explicitly defines the complex mathematical relationship among the intensity values of the ambient light in the RGB channels and many dependencies such as surface-object depth, water type, etc., which helps to better simulate real underwater scene appearances. Secondly, a unified framework is proposed, named ANA-SYN, which can effectively enhance underwater images through the collaboration of priors (underwater domain knowledge) and data information (underwater distortion distribution). The proposed framework includes an analysis network and a synthesis network, one for priors exploration and the other for priors integration. To exploit more accurate priors, the significance of each prior for the input image is explored in the analysis network, and an adaptive weighting module is designed to dynamically recalibrate them. Meanwhile, a novel prior guidance module is introduced in the synthesis network, which effectively aggregates the prior and data features and thus provides better hybrid information for more reasonable image enhancement.
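For context, the simplified underwater image formation model that such synthesis pipelines commonly build on (stated here for orientation; the paper's revised equation is not reproduced in the abstract) is

$$I_c(x) = J_c(x)\, t_c(x) + A_c \bigl(1 - t_c(x)\bigr), \qquad t_c(x) = e^{-\beta_c d(x)}, \qquad c \in \{R, G, B\},$$

where $I_c$ is the observed image, $J_c$ the clean scene radiance, $A_c$ the homogeneous ambient light, $d(x)$ the scene depth and $\beta_c$ a wavelength-dependent attenuation coefficient. The revised equation, as described in the abstract, replaces the randomly generated $A_c$ with a term that depends explicitly on quantities such as surface-object depth and water type.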
158 - Yong Wang 2021
The well-known process algebras, such as CCS, ACP and the $\pi$-calculus, capture interleaving concurrency based on bisimilarity semantics. We did some work on truly concurrent process algebras, such as CTC, APTC and $\pi_{tc}$, which capture true concurrency based on truly concurrent bisimilarities, such as pomset bisimilarity, step bisimilarity, history-preserving (hp-) bisimilarity and hereditary history-preserving (hhp-) bisimilarity. Truly concurrent process algebras are generalizations of the corresponding traditional process algebras. In this book, we introduce reversibility, probabilism, and guards into the truly concurrent calculus CTC.
The electrification revolution in the automobile industry and beyond demands an annual battery production capacity at least on the order of $10^2$ gigawatt hours, which presents a twofold challenge: the supply of key materials such as cobalt and nickel, and recycling when the batteries retire. Pyrometallurgical and hydrometallurgical recycling are currently used in industry but suffer from complexity, high costs and secondary pollution. Here we report a direct-recycling method in molten salts (MSDR) that is environmentally benign and value-creating, based on a techno-economic analysis using real-world data and price information. We also experimentally demonstrate the feasibility of MSDR by upcycling a low-nickel polycrystalline LiNi$_{0.5}$Mn$_{0.3}$Co$_{0.2}$O$_2$ (NMC) cathode material, widely used in earlier electric vehicles, into Ni-rich (Ni > 65%) single-crystal NMCs with increased energy density (>10% increase) and outstanding electrochemical performance (>94% capacity retention after 500 cycles in pouch-type full cells). This work opens up new opportunities for closed-loop recycling of electric vehicle batteries and the manufacturing of next-generation NMC cathode materials.
Visualization recommendation, or automatic visualization generation, can significantly lower the barriers for general users to rapidly create effective data visualizations, especially for users without a background in data visualization. However, existing rule-based approaches require tedious manual specification of visualization rules by visualization experts. Other machine learning-based approaches often work like a black box, and it is difficult to understand why a specific visualization is recommended, limiting the wider adoption of these approaches. This paper fills the gap by presenting KG4Vis, a knowledge graph (KG)-based approach for visualization recommendation. It does not require manual specification of visualization rules and can also guarantee good explainability. Specifically, we propose a framework for building knowledge graphs, consisting of three types of entities (i.e., data features, data columns and visualization design choices) and the relations between them, to model the mapping rules between data and effective visualizations. A TransE-based embedding technique is employed to learn the embeddings of both the entities and the relations of the knowledge graph from existing dataset-visualization pairs. Such embeddings intrinsically model the desirable visualization rules. Then, given a new dataset, effective visualizations can be inferred from the knowledge graph with semantically meaningful rules. We conducted extensive evaluations to assess the proposed approach, including quantitative comparisons, case studies and expert interviews. The results demonstrate the effectiveness of our approach.
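A minimal sketch of the standard TransE objective that the abstract builds on (generic TransE, not the authors' implementation): each entity and relation is embedded as a vector, a valid triple (h, r, t) should satisfy h + r ≈ t, and training pushes corrupted triples at least a margin further away. The embedding dimension, margin and batch size below are illustrative assumptions.

import torch
import torch.nn.functional as F

def transe_score(h, r, t):
    # Lower is more plausible: L1 distance between the translated head (h + r) and the tail t.
    return torch.norm(h + r - t, p=1, dim=-1)

def transe_margin_loss(h, r, t, t_corrupt, margin=1.0):
    # Margin ranking loss: valid triples should score lower than corrupted
    # ones (here with a corrupted tail) by at least `margin`.
    pos = transe_score(h, r, t)
    neg = transe_score(h, r, t_corrupt)
    return F.relu(margin + pos - neg).mean()

# Toy usage with random 50-dimensional embeddings for a batch of 8 triples.
h, r, t, t_corrupt = (torch.randn(8, 50) for _ in range(4))
loss = transe_margin_loss(h, r, t, t_corrupt)

In KG4Vis, as I read the abstract, the entities being embedded are data features, data columns and visualization design choices, with the relations between them learned from existing dataset-visualization pairs.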