
We present $\textbf{PyRMLE}$, a Python module that implements Regularized Maximum Likelihood Estimation for the analysis of Random Coefficient models. $\textbf{PyRMLE}$ is simple to use and readily works with data formats that are typical of Random Coefficient problems. The module makes use of Python's scientific libraries $\textbf{NumPy}$ and $\textbf{SciPy}$ for computational efficiency. The main implementation of the algorithm is executed purely in Python code, which takes advantage of Python's high-level features.
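The abstract does not document PyRMLE's interface, so rather than guessing its API, the following is a minimal NumPy/SciPy sketch of the kind of computation it implements: a Tikhonov-regularized maximum likelihood fit of a discretized coefficient density for a one-regressor model. All names and numerical choices (grid size, bandwidth `h`, penalty weight `lam`) are illustrative assumptions, not part of the package.

```python
# Illustrative sketch (not the PyRMLE API): Tikhonov-regularized maximum likelihood
# estimation of the joint density of (beta_0, beta_1) in Y_i = beta_0i + beta_1i * X_i,
# discretized on a grid.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# simulate data from a known one-regressor random coefficient model
n = 500
X = rng.normal(0.0, 1.0, n)
beta = rng.multivariate_normal([1.0, -0.5], [[0.2, 0.0], [0.0, 0.2]], n)
Y = beta[:, 0] + beta[:, 1] * X

# grid on which the joint density f(beta_0, beta_1) is discretized
m = 15
b0, b1 = np.meshgrid(np.linspace(-1.0, 3.0, m), np.linspace(-3.0, 2.0, m))
cell = (b0[0, 1] - b0[0, 0]) * (b1[1, 0] - b1[0, 0])

# each observation constrains (beta_0, beta_1) to the line beta_0 + beta_1 * X_i = Y_i;
# approximate the corresponding Radon-type line integral by a narrow Gaussian band
h = 0.15
resid = Y[:, None] - (b0.ravel()[None, :] + X[:, None] * b1.ravel()[None, :])
W = np.exp(-0.5 * (resid / h) ** 2) / (h * np.sqrt(2.0 * np.pi))   # shape (n, m*m)

def neg_penalized_loglik(theta, lam=1e-3):
    # softmax keeps the discretized density nonnegative and integrating to one
    f = np.exp(theta - theta.max())
    f /= f.sum() * cell
    lik = W @ f * cell                          # approximate likelihood per observation
    return -np.mean(np.log(lik + 1e-12)) + lam * np.sum(f ** 2) * cell  # Tikhonov-type penalty

res = minimize(neg_penalized_loglik, np.zeros(m * m),
               method="L-BFGS-B", options={"maxiter": 200})
f_hat = np.exp(res.x - res.x.max())
f_hat /= f_hat.sum() * cell
print("estimated E[beta_0], E[beta_1]:",
      (f_hat * b0.ravel()).sum() * cell, (f_hat * b1.ravel()).sum() * cell)
```

The softmax parametrization is merely a convenient way to keep the discretized density nonnegative and normalized during unconstrained optimization; a production implementation would solve the constrained problem directly.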
Bian Du, Xiang Gao, Wei Hu (2021)
Point clouds have attracted increasing attention, and significant progress has been made in methods for point cloud analysis, which often require costly human annotation as supervision. To address this issue, we propose a novel self-contrastive learning approach for self-supervised point cloud representation learning, aiming to capture both local geometric patterns and nonlocal semantic primitives based on the nonlocal self-similarity of point clouds. The contributions are two-fold: on the one hand, instead of contrasting among different point clouds as is common in contrastive learning, we exploit self-similar point cloud patches within a single point cloud as positive samples and other patches as negative ones to facilitate the contrastive learning task. On the other hand, we actively learn hard negative samples that are close to positive samples for discriminative feature learning. Experimental results show that the proposed method achieves state-of-the-art performance on widely used benchmark datasets for self-supervised point cloud segmentation and transfer learning for classification.
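As a concrete illustration of the contrastive objective described above, here is a minimal NumPy sketch (not the paper's implementation) of an InfoNCE-style loss in which the positive is a self-similar patch from the same point cloud and other patches from that cloud serve as negatives. For simplicity the sketch keeps only the closest candidates as hard negatives, whereas the paper actively learns hard negatives; the embedding size, temperature, and `k` are assumptions.

```python
# Illustrative sketch of a self-contrastive (InfoNCE-style) loss over patch embeddings
# taken from a single point cloud, with simple hard-negative selection.
import numpy as np

def l2_normalize(z, eps=1e-8):
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + eps)

def self_contrastive_loss(anchor, positive, negatives, temperature=0.1, k=5):
    """anchor, positive: (d,) embeddings of self-similar patches from one cloud;
    negatives: (m, d) embeddings of other patches from the same cloud."""
    a = l2_normalize(anchor)
    pos_sim = a @ l2_normalize(positive)              # similarity to the positive patch
    neg_sims = l2_normalize(negatives) @ a            # similarities to negative patches
    # simple hard-negative mining: keep only the k closest negatives
    # (the paper instead actively *learns* hard negatives close to the positives)
    neg_sims = np.sort(neg_sims)[-k:]
    logits = np.concatenate(([pos_sim], neg_sims)) / temperature
    logits -= logits.max()                            # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# toy usage with random patch embeddings
rng = np.random.default_rng(0)
anchor = rng.normal(size=64)
positive = anchor + 0.1 * rng.normal(size=64)         # a self-similar patch
negatives = rng.normal(size=(16, 64))                 # other patches from the same cloud
print("loss:", self_contrastive_loss(anchor, positive, negatives))
```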
The random coefficients model $Y_i={\beta_0}_i+{\beta_1}_i {X_1}_i+{\beta_2}_i {X_2}_i+\ldots+{\beta_d}_i {X_d}_i$, with $\mathbf{X}_i$, $Y_i$, $\boldsymbol{\beta}_i$ i.i.d. and $\boldsymbol{\beta}_i$ independent of $\mathbf{X}_i$, is often used to capture unobserved heterogeneity in a population. We propose a quasi-maximum likelihood method to estimate the joint density of the random coefficients. This method implicitly involves the inversion of the Radon transform in order to reconstruct the joint distribution, and hence is an inverse problem. Nonparametric estimators of the joint density of $\boldsymbol{\beta}_i=({\beta_0}_i,\ldots,{\beta_d}_i)$ based on kernel methods or Fourier inversion have been proposed in recent years. Most of these methods assume a heavy-tailed design density $f_\mathbf{X}$. To add stability to the solution, we apply a Tikhonov-type regularization method. We analyze the convergence of the method without assuming heavy tails for $f_\mathbf{X}$ and illustrate its performance on simulated and real data.
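The abstract does not state the estimator explicitly; the sketch below gives one plausible form of the Tikhonov-regularized (quasi-)maximum likelihood objective consistent with the Radon-transform structure it describes. The operator $\mathcal{T}$, the density class $\mathcal{F}$, and the penalty weight $\lambda$ are notational assumptions for illustration.

```latex
% Minimal sketch (illustrative, not the paper's exact formulation).
% Conditional on X_i = x, the density of Y_i is a Radon-type hyperplane integral
% of the coefficient density f_beta (up to a known normalization depending on x):
\[
  f_{Y\mid \mathbf{X}}(y \mid \mathbf{x})
  \;=\; (\mathcal{T} f_{\boldsymbol{\beta}})(y,\mathbf{x})
  \;=\; \int_{\{\mathbf{b}\,:\, b_0 + b_1 x_1 + \cdots + b_d x_d = y\}}
        f_{\boldsymbol{\beta}}(\mathbf{b})\, d\mu(\mathbf{b}),
\]
% and a Tikhonov-regularized (quasi-)maximum likelihood estimator maximizes the
% penalized sample log-likelihood over a class of densities F, with lambda > 0:
\[
  \hat{f}_{\boldsymbol{\beta}}
  \;=\; \arg\max_{f \in \mathcal{F}}\;
  \frac{1}{n}\sum_{i=1}^{n} \log\big((\mathcal{T} f)(Y_i, \mathbf{X}_i)\big)
  \;-\; \lambda \,\lVert f \rVert_{L^2}^2 .
\]
```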
Person Re-Identification (re-ID) aims at retrieving images of the same person taken by different cameras. A challenge for re-ID is the performance preservation when a model is used on data of interest (target data) which belong to a different domain from the training data domain (source data). Unsupervised Domain Adaptation (UDA) is an interesting research direction for this challenge as it avoids a costly annotation of the target data. Pseudo-labeling methods achieve the best results in UDA-based re-ID. Surprisingly, labeled source data are discarded after this initialization step. However, we believe that pseudo-labeling could further leverage the labeled source data in order to improve the post-initialization training steps. In order to improve robustness against erroneous pseudo-labels, we advocate the exploitation of both labeled source data and pseudo-labeled target data during all training iterations. To support our guideline, we introduce a framework which relies on a two-branch architecture optimizing classification and triplet loss based metric learning in source and target domains, respectively, in order to allow \emph{adaptability to the target domain} while ensuring \emph{robustness to noisy pseudo-labels}. Indeed, shared low and mid-level parameters benefit from the source classification and triplet loss signal while high-level parameters of the target branch learn domain-specific features. Our method is simple enough to be easily combined with existing pseudo-labeling UDA approaches. We show experimentally that it is efficient and improves performance when the base method has no mechanism to deal with pseudo-label noise or for hard adaptation tasks. Our approach reaches state-of-the-art performance when evaluated on commonly used datasets, Market-1501 and DukeMTMC-reID, and outperforms the state of the art when targeting the bigger and more challenging dataset MSMT.
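As a concrete illustration of the two training signals the framework combines, here is a minimal NumPy sketch (not the paper's code) of a source-domain cross-entropy term and a target-domain triplet term computed on pseudo-labeled embeddings; the margin, embedding size, and equal weighting of the two terms are assumptions.

```python
# Illustrative sketch of the two losses behind the two-branch setup: source-domain
# classification (cross-entropy) and target-domain triplet loss on pseudo-labels.
import numpy as np

def cross_entropy(logits, label):
    """Source branch: softmax cross-entropy for one labeled source image."""
    logits = logits - logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Target branch: hinge triplet loss on embeddings that share / differ in pseudo-label."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# toy example: one source sample and one pseudo-labeled target triplet
rng = np.random.default_rng(0)
source_logits = rng.normal(size=751)      # Market-1501's training split has 751 identities
source_label = 42
anchor, pos, neg = rng.normal(size=(3, 256))
total = cross_entropy(source_logits, source_label) + triplet_loss(anchor, pos, neg)
print("combined two-branch loss:", total)
```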
Complementary metal oxide semiconductor (CMOS) logic circuits at the ultimate scaling limit place the utmost demands on the properties of all materials involved. The requirements for semiconductors are well explored and could possibly be satisfied by a number of layered two-dimensional (2D) materials, such as transition-metal dichalcogenides or black phosphorus. The requirements for the gate insulator are arguably even more challenging and difficult to meet. In particular, the combination of insulator and semiconductor which forms the central element of the metal oxide semiconductor field effect transistor (MOSFET) has to be of superior quality in order to build competitive devices. At the moment, hexagonal boron nitride (hBN) is the most common two-dimensional insulator and is widely considered the most promising gate insulator in nanoscaled 2D material-based transistors. Here, we critically assess the material parameters of hBN and conclude that, while its properties render hBN an ideal candidate for many applications in 2D nanoelectronics, hBN is most likely not suitable as a gate insulator for ultrascaled CMOS devices.
