121 - Jiamin Yu 2021
Big data from autonomous vehicles has long been used for perception, prediction, planning, and control of driving. Naturally, the question of why this big data is not also used for risk management and actuarial modeling is raised with increasing frequency. This article examines the emerging technical difficulties, new ideas, and methods of risk modeling under autonomous driving scenarios. Compared with the traditional risk model, the novel model is more consistent with real road traffic and driving safety performance. More importantly, it provides technical feasibility for realizing risk assessment and car insurance pricing in a computer simulation environment.
295 - Jiamin Yu 2021
Since Claude Shannon founded information theory, it has widely fostered other scientific fields, such as statistics, artificial intelligence, biology, behavioral science, neuroscience, economics, and finance. Unfortunately, actuarial science has hardly benefited from information theory; so far, only one actuarial paper on information theory can be found through academic search engines. Undoubtedly, information and risk, both forms of uncertainty, are constrained by entropy law. Today's insurance big data era means more data and more information. It is unacceptable for risk management and actuarial science to ignore information theory. Therefore, this paper aims to exploit information theory to discover the performance limits of insurance big data systems and to seek guidance for risk modeling and the development of actuarial pricing systems.
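For background, the "entropy law" the abstract appeals to refers to Shannon entropy; the following is the standard textbook definition, stated here only as context and not taken from the paper itself:

```latex
% Shannon entropy of a discrete random variable X with distribution p(x);
% this is the uncertainty measure that both information and risk are
% constrained by in the information-theoretic view sketched above.
H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x) \quad \text{bits}
```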
Water electrolysis is promising for industrial hydrogen production to achieve a sustainable and green hydrogen economy, but the high cost of the technology limits its market share. Developing efficient yet economical electrocatalysts is crucial to decrease the cost of both electricity and the electrolytic cell. Meanwhile, electrolysis in seawater electrolyte can further reduce feedstock cost. Here we synthesize a type of electrocatalyst in which trace precious metals are strongly anchored on a corrosion-resistant matrix. As an example, the produced Pt/Ni-Mo electrocatalyst needs an overpotential of only 113 mV to reach an ultrahigh current density of 2000 mA cm-2 in saline-alkaline electrolyte, the best performance reported so far. It shows high activity and long durability in various electrolytes and under harsh conditions, including strong alkaline and simulated seawater electrolytes and elevated temperatures up to 80 degrees Celsius. This electrocatalyst is produced on a large scale at low cost and shows good performance in a commercial membrane electrode assembly stack, demonstrating its feasibility for practical water electrolysis.
Point clouds captured in real-world applications are often incomplete due to limited sensor resolution, single viewpoints, and occlusion. Recovering complete point clouds from partial ones therefore becomes an indispensable task in many practical applications. In this paper, we present a new method that reformulates point cloud completion as a set-to-set translation problem and design a new model, called PoinTr, which adopts a transformer encoder-decoder architecture for point cloud completion. By representing the point cloud as a set of unordered groups of points with position embeddings, we convert the point cloud into a sequence of point proxies and employ transformers for point cloud generation. To help the transformers better leverage the inductive bias about the 3D geometric structure of point clouds, we further devise a geometry-aware block that models local geometric relationships explicitly. The adoption of transformers enables our model to better learn structural knowledge and preserve detailed information for point cloud completion. Furthermore, we propose two more challenging benchmarks with more diverse incomplete point clouds that better reflect real-world scenarios, to promote future research. Experimental results show that our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones. Code is available at https://github.com/yuxumin/PoinTr
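A minimal sketch of the "point proxy" idea described above, assuming a PyTorch setting: group a partial cloud around sampled centers, embed each group, add a learned position embedding of the center, and feed the resulting token sequence to a standard transformer encoder-decoder. All module names and sizes here are hypothetical illustrations, not the authors' PoinTr implementation (which also includes the geometry-aware block).

```python
import torch
import torch.nn as nn

class PointProxyCompletion(nn.Module):
    """Toy point-proxy pipeline: partial cloud -> proxy tokens -> transformer -> coarse completion."""
    def __init__(self, num_proxies=64, group_size=32, d_model=256):
        super().__init__()
        self.num_proxies, self.group_size = num_proxies, group_size
        self.feat_embed = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.pos_embed = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                          num_encoder_layers=4, num_decoder_layers=4,
                                          batch_first=True)
        self.query = nn.Parameter(torch.randn(num_proxies, d_model))   # learned decoder queries
        self.to_points = nn.Linear(d_model, 3 * group_size)            # each output token -> a patch of points

    def forward(self, xyz):                                 # xyz: (B, N, 3) partial point cloud
        B, N, _ = xyz.shape
        idx = torch.randperm(N)[: self.num_proxies]         # stand-in for furthest point sampling
        centers = xyz[:, idx, :]                            # (B, P, 3) proxy centers
        knn = torch.cdist(centers, xyz).topk(self.group_size, largest=False).indices
        groups = torch.gather(xyz.unsqueeze(1).expand(-1, self.num_proxies, -1, -1),
                              2, knn.unsqueeze(-1).expand(-1, -1, -1, 3))   # (B, P, k, 3)
        # proxy token = pooled local feature + position embedding of its center
        tokens = self.feat_embed(groups).max(dim=2).values + self.pos_embed(centers)
        queries = self.query.unsqueeze(0).expand(B, -1, -1)
        out = self.transformer(tokens, queries)             # (B, P, d_model)
        return self.to_points(out).reshape(B, -1, 3)        # coarse completed cloud

cloud = torch.rand(2, 1024, 3)
print(PointProxyCompletion()(cloud).shape)                  # torch.Size([2, 2048, 3])
```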
Assessing action quality is challenging due to the subtle differences between videos and large variations in scores. Most existing approaches tackle this problem by regressing a quality score from a single video, suffering heavily from the large inter-video score variations. In this paper, we show that the relations among videos can provide important clues for more accurate action quality assessment during both training and inference. Specifically, we reformulate the problem of action quality assessment as regressing a relative score with reference to another video that shares attributes (e.g., category and difficulty), instead of learning unreferenced scores. Following this formulation, we propose a new Contrastive Regression (CoRe) framework to learn relative scores by pairwise comparison, which highlights the differences between videos and guides the model to learn the key hints for assessment. To further exploit the relative information between two videos, we devise a group-aware regression tree that converts conventional score regression into two easier sub-problems: coarse-to-fine classification and regression within small intervals. To demonstrate the effectiveness of CoRe, we conduct extensive experiments on three mainstream AQA datasets: AQA-7, MTL-AQA, and JIGSAWS. Our approach outperforms previous methods by a large margin and establishes a new state of the art on all three benchmarks.
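A toy illustration of the relative-score formulation, assuming pre-extracted video features: a small head predicts the score difference between a query video and an exemplar sharing its attributes, and the exemplar's ground-truth score is added back to obtain an absolute prediction. This is a hypothetical sketch of the formulation, not the CoRe architecture or its group-aware regression tree.

```python
import torch
import torch.nn as nn

class RelativeScoreHead(nn.Module):
    """Predicts score(query) - score(exemplar) from a pair of video features."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, query_feat, exemplar_feat, exemplar_score):
        delta = self.mlp(torch.cat([query_feat, exemplar_feat], dim=-1)).squeeze(-1)
        return exemplar_score + delta                 # predicted absolute score of the query

head = RelativeScoreHead()
q, e = torch.randn(4, 512), torch.randn(4, 512)       # features from some video backbone (assumed)
pred = head(q, e, exemplar_score=torch.tensor([87.3, 91.0, 76.5, 88.2]))
loss = nn.functional.mse_loss(pred, torch.tensor([85.0, 92.5, 80.1, 86.0]))
```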
Link and sign prediction in complex networks greatly aid decision-making and recommender systems, for example in predicting potential relationships or relative status levels. Many previous studies focused on designing specialized algorithms to perform either link prediction or sign prediction. In this work, we propose an effective model-integration algorithm consisting of network embedding, network feature engineering, and an integrated classifier, which can perform both link and sign prediction in the same framework. Network embedding can accurately represent the characteristics of the topological structure, and in cooperation with powerful network feature engineering and an integrated classifier it can achieve better prediction. Experiments on several datasets show that the proposed model achieves state-of-the-art or competitive performance for both link and sign prediction despite its generality. Interestingly, we find that even a very low network-embedding dimension can yield high prediction performance, which significantly reduces computational overhead during training and prediction. This study offers a powerful methodology for multi-task prediction in complex networks.
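A small end-to-end sketch of this kind of pipeline: embed nodes in a low-dimensional space, build edge features from pairs of node embeddings, and train a single classifier for link prediction (sign prediction would reuse the same features with signed edge labels). Spectral embedding, Hadamard edge features, and logistic regression are stand-ins chosen for illustration, not the specific embedding or integrated classifier used in the paper.

```python
import numpy as np
import networkx as nx
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)

# Low-dimensional node embedding (the abstract notes that very few dimensions suffice).
emb = SpectralEmbedding(n_components=4, affinity="precomputed").fit_transform(A)

def edge_feature(u, v):
    return emb[u] * emb[v]            # Hadamard product of the two node embeddings

# Positive examples: existing edges; negatives: random non-edges.
rng = np.random.default_rng(0)
pos = list(G.edges())
neg = [tuple(rng.choice(len(G), 2, replace=False)) for _ in range(len(pos))]
neg = [e for e in neg if not G.has_edge(*e)]

X = np.array([edge_feature(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```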
181 - Yu Zhu, Xinrui Yang, Famin Yu 2021
The low degradability of common polymers composed of light elements has a serious impact on the environment and has become an urgent problem to solve. Degradation is the reverse process of monomer polymerization, so what drives it away from the idealized sequential depolymerization process, producing unexpected degradation products or even hindering further degradation? This is a key question at the atomic level that must be addressed. Herein, we reveal that hydrogen atom transfer (HAT) during degradation, which is usually attributed to the thermal effect, unexpectedly exhibits a strong high-temperature tunnelling effect, offering a possible answer to the above question. High-precision first-principles calculations show that, among the various possible HAT pathways, the lower energy barrier and stronger tunnelling effect make the HAT reaction involving the active end of the polymer occur more easily. In particular, although the energy barrier of the HAT reaction differs from that of depolymerization by only about 0.01 in magnitude, the tunnelling probability of the former can be 14 to 32 orders of magnitude greater than that of the latter. Furthermore, chain scission following HAT leads to a variety of products other than monomers. Our work highlights that quantum tunnelling may be an important source of uncertainty in degradation and provides a direction for regulating the polymer degradation process.
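For orientation only, one standard (textbook) way tunnelling enters a reaction-rate calculation is as a multiplicative correction to the transition-state-theory rate; the lowest-order Wigner form is shown below. This is generic background, not the treatment used in the paper, whose deep-tunnelling regime would require a more sophisticated (e.g., WKB-type) analysis.

```latex
% Lowest-order Wigner tunnelling correction to a transition-state-theory rate;
% \omega^{\ddagger} is the magnitude of the imaginary frequency at the barrier top.
k_{\mathrm{tunnel}}(T) \;=\; \kappa(T)\, k_{\mathrm{TST}}(T),
\qquad
\kappa(T) \;\approx\; 1 + \frac{1}{24}\left(\frac{\hbar\,\omega^{\ddagger}}{k_{\mathrm{B}}T}\right)^{2}
```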
Modern semi-supervised learning methods conventionally assume that labeled and unlabeled data share the same class distribution. In practice, however, unlabeled data may include out-of-class samples, i.e., samples that cannot be assigned one-hot labels from the closed set of classes in the labeled data; in other words, the unlabeled data is an open set. In this paper, we introduce OpenCoS, a method for handling this realistic semi-supervised learning scenario based on a recent framework of contrastive learning. One of our key findings is that out-of-class samples in the unlabeled dataset can be identified effectively via (unsupervised) contrastive learning. OpenCoS utilizes this information to overcome the failure modes of existing state-of-the-art semi-supervised methods, e.g., ReMixMatch or FixMatch. It further improves semi-supervised performance by utilizing soft labels and pseudo-labels on the open-set unlabeled data, learned from contrastive learning. Our extensive experimental results show the effectiveness of OpenCoS, adapting state-of-the-art semi-supervised methods to diverse scenarios involving open-set unlabeled data.
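A toy illustration of the key finding above: compare features from an (unsupervised) contrastive encoder against class prototypes built from the labeled set, and flag unlabeled samples whose best similarity is low as likely out-of-class. The function, threshold, and random features below are hypothetical; this is the detection idea in miniature, not the OpenCoS algorithm.

```python
import numpy as np

def detect_out_of_class(unlabeled_feats, labeled_feats, labeled_y, threshold=0.5):
    """All features are assumed to be L2-normalized contrastive embeddings."""
    prototypes = np.stack([labeled_feats[labeled_y == c].mean(axis=0)
                           for c in np.unique(labeled_y)])
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = unlabeled_feats @ prototypes.T          # cosine similarity to each in-class prototype
    return sim.max(axis=1) < threshold            # True => likely out-of-class sample

rng = np.random.default_rng(0)
lab = rng.normal(size=(100, 128)); lab /= np.linalg.norm(lab, axis=1, keepdims=True)
unl = rng.normal(size=(500, 128)); unl /= np.linalg.norm(unl, axis=1, keepdims=True)
mask = detect_out_of_class(unl, lab, labeled_y=rng.integers(0, 10, 100))
print(mask.sum(), "unlabeled samples flagged as out-of-class")
```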
Precise positioning has become a core topic in wireless communications, facilitating candidate techniques for B5G. Nevertheless, most existing positioning algorithms, categorized into geometric-driven and data-driven approaches, fail to simultaneously fulfill the diversified requirements of practical use, e.g., accuracy, real-time operation, scalability, and maintenance. This article introduces a new principle, called combinatorial data augmentation (CDA), a catalyst for the tight integration of the two approaches. We first explain the concept of CDA and its critical advantages over the two standalone approaches. Then, we confirm CDA's effectiveness through field experiments based on WiFi round-trip time and inertial measurement units. Lastly, we present its potential beyond positioning, where it is expected to play a critical role in B5G.
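As a concrete example of the geometric-driven side mentioned above, the sketch below estimates a 2-D position from WiFi round-trip-time ranges to known access points by nonlinear least squares. It illustrates the kind of building block CDA is said to integrate with data-driven methods; the anchor positions and noise level are made up, and this is not the CDA technique itself.

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])   # known AP positions (m)
true_pos = np.array([3.0, 5.0])
# noisy RTT-derived ranges from the device to each access point
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.3, 4)

residuals = lambda p: np.linalg.norm(anchors - p, axis=1) - ranges
estimate = least_squares(residuals, x0=np.array([5.0, 4.0])).x
print("estimated position:", estimate)
```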
The quantum Fisher information (QFI) represents a fundamental concept in quantum physics. On the one hand, it quantifies the metrological potential of quantum states in quantum-parameter-estimation measurements. On the other hand, it is intrinsically related to the quantum geometry and multipartite entanglement of many-body systems. Here, we explore how the QFI can be estimated via randomized measurements, an approach that has the advantage of being applicable to both pure and mixed quantum states. In the latter case, our method gives access to the sub-quantum Fisher information, which sets a lower bound on the QFI. We experimentally validate this approach using two platforms: a nitrogen-vacancy center spin in diamond and a 4-qubit state provided by a superconducting quantum computer. We further perform a numerical study on a many-body spin system to illustrate the advantage of our randomized-measurement approach in estimating multipartite entanglement, compared to quantum state tomography. Our results highlight the applicability of our method to a wide range of quantum platforms, including solid-state spin systems, superconducting quantum computers, and trapped ions, providing a versatile tool to explore the essential role of the QFI in quantum physics.
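For reference, the standard definition of the quantity being estimated is given below; this is the textbook expression for the QFI, not the randomized-measurement estimator described in the abstract.

```latex
% Quantum Fisher information for a generator A and a state
% \rho = \sum_k \lambda_k |k\rangle\langle k|; for pure states it reduces
% to four times the variance of A.
F_Q[\rho, A] \;=\; 2 \sum_{\substack{k,l \\ \lambda_k + \lambda_l > 0}}
\frac{(\lambda_k - \lambda_l)^2}{\lambda_k + \lambda_l}\,
\bigl|\langle k | A | l \rangle\bigr|^2,
\qquad
F_Q\bigl[|\psi\rangle, A\bigr] = 4\bigl(\langle A^2\rangle - \langle A\rangle^2\bigr)
```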