Surface reconstruction from noisy, non-uniformly sampled, and unoriented point clouds is a fascinating yet difficult problem in computer vision and computer graphics. In this paper, we propose Neural-IMLS, a novel approach that learns a noise-resistant signed distance function (SDF) for reconstruction. Instead of explicitly learning priors from ground-truth signed distance values, our method learns the SDF directly from raw point clouds in a self-supervised fashion by minimizing the discrepancy between a pair of SDFs, one produced by the implicit moving least-squares function (IMLS) and the other by our network. Finally, a watertight and smooth 2-manifold triangle mesh is produced by running Marching Cubes. We conduct extensive experiments on various benchmarks to demonstrate the performance of Neural-IMLS, especially on point clouds with noise.
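To make the self-supervised objective concrete, below is a minimal PyTorch sketch of a loss between the two SDFs, assuming an MLP `sdf_net` mapping 3D points to scalar values; the Gaussian bandwidth `sigma` and the use of normalized network gradients as surrogate normals are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def imls_value(q, points, normals, sigma=0.05):
    """IMLS estimate of the signed distance at query points q."""
    # q: (Q, 3), points: (N, 3), normals: (N, 3)
    diff = q[:, None, :] - points[None, :, :]        # (Q, N, 3)
    dist2 = (diff ** 2).sum(-1)                      # (Q, N)
    w = torch.exp(-dist2 / sigma ** 2)               # Gaussian weights
    proj = (diff * normals[None, :, :]).sum(-1)      # signed distances to tangent planes
    return (w * proj).sum(-1) / (w.sum(-1) + 1e-8)

def neural_imls_loss(sdf_net, queries, points):
    pred = sdf_net(queries).squeeze(-1)
    # Assumption: use normalized network gradients at the raw points as surrogate normals.
    p = points.requires_grad_(True)
    g = torch.autograd.grad(sdf_net(p).sum(), p, create_graph=True)[0]
    normals = torch.nn.functional.normalize(g, dim=-1)
    target = imls_value(queries, points, normals)
    return (pred - target).abs().mean()
```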
Unsupervised Domain Adaptive (UDA) object re-identification (Re-ID) aims at adapting a model trained on a labeled source domain to an unlabeled target domain. State-of-the-art object Re-ID approaches adopt clustering algorithms to generate pseudo-labels for the unlabeled target domain. However, the inevitable label noise caused by the clustering procedure significantly degrades the discriminative power of the Re-ID model. To address this problem, we propose an uncertainty-aware clustering framework (UCF) for UDA tasks. First, a novel hierarchical clustering scheme is proposed to promote clustering quality. Second, an uncertainty-aware collaborative instance selection method is introduced to select images with reliable labels for model training. Combining both techniques effectively reduces the impact of noisy labels. In addition, we introduce a strong baseline that features a compact contrastive loss. Our UCF method consistently achieves state-of-the-art performance in multiple UDA tasks for object Re-ID, and significantly reduces the gap between unsupervised and supervised Re-ID performance. In particular, the performance of our unsupervised UCF method on the MSMT17→Market1501 task is better than that of the fully supervised setting on Market1501. The code of UCF is available at https://github.com/Wang-pengfei/UCF.
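As a hedged illustration of uncertainty-aware instance selection (one plausible realization, not UCF's exact algorithm): two collaborative feature extractors are clustered independently, and a sample is kept only if its two clusters agree strongly. The DBSCAN parameters and Jaccard threshold `tau` below are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def select_reliable(features_a, features_b, tau=0.8):
    """Keep samples whose pseudo-labels from two networks agree."""
    labels_a = DBSCAN(eps=0.6, min_samples=4).fit_predict(features_a)
    labels_b = DBSCAN(eps=0.6, min_samples=4).fit_predict(features_b)
    keep = []
    for i, (la, lb) in enumerate(zip(labels_a, labels_b)):
        if la < 0 or lb < 0:            # DBSCAN marks outliers with -1
            continue
        ca = set(np.where(labels_a == la)[0])
        cb = set(np.where(labels_b == lb)[0])
        # Jaccard overlap of the two clusters as a proxy for label certainty.
        if len(ca & cb) / len(ca | cb) >= tau:
            keep.append(i)
    return keep, labels_a
```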
Entity Resolution (ER) aims to identify whether two tuples refer to the same real-world entity and is well known to be labor-intensive. It is a prerequisite to anomaly detection, as comparing the attribute values of two matched tuples from two different datasets provides one effective way to detect anomalies. Existing ER approaches, due to insufficient feature discovery or error-prone inherent characteristics, are unable to achieve stable performance. In this paper, we present CollaborER, a self-supervised entity resolution framework via multi-feature collaboration. It is capable of (i) obtaining reliable ER results with zero human annotations and (ii) discovering adequate tuple features in a fault-tolerant manner. CollaborER consists of two phases, i.e., automatic label generation (ALG) and collaborative ER training (CERT). In the first phase, ALG is proposed to generate a set of positive tuple pairs and a set of negative tuple pairs. ALG guarantees the high quality of the generated tuple pairs and hence ensures the training quality of the subsequent CERT. In the second phase, CERT is introduced to learn the matching signals by discovering the graph features and sentence features of tuples collaboratively. Extensive experimental results over eight real-world ER benchmarks show that CollaborER outperforms all the existing unsupervised ER approaches and is comparable or even superior to the state-of-the-art supervised ER methods.
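The following sketch shows one common way such automatic label generation can work, assuming L2-normalized tuple embeddings: mutual nearest neighbours across the two tables become positive pairs, and low-similarity partners become easy negatives. The embedding choice and `n_neg` are assumptions for illustration, not CollaborER's exact ALG procedure.

```python
import numpy as np

def generate_labels(emb_a, emb_b, n_neg=3):
    """emb_a: (Na, d), emb_b: (Nb, d) L2-normalized tuple embeddings."""
    sim = emb_a @ emb_b.T                   # cosine similarity matrix
    best_b = sim.argmax(axis=1)             # nearest B-tuple for each A-tuple
    best_a = sim.argmax(axis=0)             # nearest A-tuple for each B-tuple
    # Mutual nearest neighbours are taken as positive pairs.
    positives = [(i, int(j)) for i, j in enumerate(best_b) if best_a[j] == i]
    negatives = []
    for i, _ in positives:
        # Pair each anchor with its lowest-similarity tuples as easy negatives.
        for j in np.argsort(sim[i])[:n_neg]:
            negatives.append((i, int(j)))
    return positives, negatives
```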
The reading of arbitrarily-shaped text has received increasing research attention. However, existing text spotters are mostly built on two-stage frameworks or character-based methods, which suffer from Non-Maximum Suppression (NMS), Region-of-Interest (RoI) operations, or character-level annotations. In this paper, to address the above problems, we propose a novel fully convolutional Point Gathering Network (PGNet) for reading arbitrarily-shaped text in real time. PGNet is a single-shot text spotter, where the pixel-level character classification map is learned with the proposed PG-CTC loss, avoiding the need for character-level annotations. With a PG-CTC decoder, we gather high-level character classification vectors from two-dimensional space and decode them into text symbols without NMS or RoI operations, which guarantees high efficiency. Additionally, by reasoning about the relations between each character and its neighbors, a graph refinement module (GRM) is proposed to optimize the coarse recognition and improve the end-to-end performance. Experiments show that the proposed method achieves competitive accuracy while significantly improving the running speed. In particular, on Total-Text it runs at 46.7 FPS, surpassing previous spotters by a large margin.
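A simplified sketch of the point-gathering decoding idea: character logits are gathered at sampled center-line points and collapsed CTC-style (merge repeats, drop blanks). Variable names here are illustrative, not PGNet's API.

```python
import numpy as np

def pg_ctc_decode(char_map, center_points, blank=0):
    """char_map: (H, W, C) per-pixel character logits;
    center_points: list of (y, x) sampled along a text center line."""
    # Gather the best character class at each center-line point.
    ids = [int(np.argmax(char_map[y, x])) for y, x in center_points]
    out, prev = [], blank
    for c in ids:
        if c != blank and c != prev:    # CTC collapse: skip blanks and repeats
            out.append(c)
        prev = c
    return out                          # sequence of character class indices
```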
Fuzzing is one of the most efficient techniques for vulnerability detection. Since the fuzzing process is computing-intensive and the performance gains from algorithm optimization are limited, recent research seeks to improve fuzzing performance by utilizing parallel computing. However, parallel fuzzing has to overcome challenges such as task conflicts, scalability in a distributed environment, synchronization overhead, and workload imbalance. In this paper, we design and implement UniFuzz, a distributed fuzzing optimization based on dynamic centralized task scheduling. UniFuzz evaluates and distributes seeds in a centralized manner to avoid task conflicts. It uses a request-response scheme to dynamically distribute fuzzing tasks, which avoids workload imbalance. Besides, UniFuzz can adaptively switch the role of computing cores between evaluating and fuzzing, which avoids the potential bottleneck of seed evaluation. To improve synchronization efficiency, UniFuzz shares different kinds of fuzzing information in different ways according to their characteristics, and the average synchronization overhead is only about 0.4%. We evaluated UniFuzz on real-world programs, and the results show that UniFuzz outperforms state-of-the-art tools such as AFL, PAFL, and EnFuzz. Most importantly, the experiments reveal a counter-intuitive result: parallel fuzzing can achieve super-linear acceleration over single-core fuzzing. We provide a detailed explanation and validate it with additional experiments. UniFuzz also discovered 16 real-world vulnerabilities.
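A toy sketch of the request-response scheduling idea: idle workers pull the next seed from a central priority queue instead of receiving a static split, which keeps the workload balanced. The queue discipline and the `evaluate` scoring are assumptions for illustration, not UniFuzz's implementation.

```python
import queue
import threading

task_queue = queue.PriorityQueue()   # (score, seed); lower score = higher priority

def evaluate(seed):
    # Centralized seed-evaluation placeholder; a real scheduler would score
    # coverage novelty, execution speed, etc.
    return len(seed)

def worker(fuzz_one):
    while True:
        score, seed = task_queue.get()    # request: pull a task when idle
        new_seeds = fuzz_one(seed)        # run one fuzzing round on the seed
        for s in new_seeds:               # response: report results centrally
            task_queue.put((evaluate(s), s))
        task_queue.task_done()

def start(n_workers, fuzz_one):
    for _ in range(n_workers):
        threading.Thread(target=worker, args=(fuzz_one,), daemon=True).start()
```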
We demonstrate the super-resolution localization of nitrogen-vacancy centers in diamond by a novel fluorescence photoswitching technique based on coherent quantum control. The photoswitching is realized by quantum phase encoding based on a pulsed magnetic field gradient. We then perform super-resolution imaging and achieve a localization accuracy better than 1.4 nm under a scanning confocal microscope. Finally, we show that the quantum phase encoding plays a dominant role in the resolution, and a resolution of 0.15 nm is achievable under our current experimental conditions. This method can be applied to sub-nanometer-scale addressing and control of qubits based on multiple coupled defect spins.
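As a schematic of why a pulsed gradient enables position-selective photoswitching (a textbook relation, not the paper's exact protocol): a gradient of strength G applied for duration τ imprints a position-dependent phase on each spin,

```latex
% Position-dependent phase under a pulsed field gradient (schematic).
\phi(x) \;=\; \gamma \int_0^{\tau} G\,x\,\mathrm{d}t \;=\; \gamma\,G\,x\,\tau ,
```

where γ is the gyromagnetic ratio, so spins at different positions x acquire distinguishable phases that the control sequence can convert into fluorescence on/off contrast.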
Realizing a long-coherence-time quantum memory is a major challenge of current quantum technology. Here, we report a single Yb ion-qubit memory with over one hour of coherence time, an order-of-magnitude improvement over the state-of-the-art record. The long coherence time is realized by addressing various technical challenges such as ambient magnetic-field noise, phase noise, and leakage of the microwave oscillator. Moreover, we systematically study the decoherence process of our quantum memory by quantum process tomography, which enables us to apply a strict criterion of quantum coherence, the relative entropy of coherence. We also benchmark our quantum memory by its ability to preserve quantum information, i.e., the robustness of quantum memory, which clearly shows that our quantum memory preserves non-classical quantum information for over 6000 s. Our results verify the stability of the quantum memory at the hour level and indicate its versatile applicability in various scenarios.
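For reference, the coherence measure mentioned here has a standard closed form: for a state ρ, the relative entropy of coherence is the entropy gap between the fully dephased state and ρ itself,

```latex
% Relative entropy of coherence (standard definition).
C_r(\rho) \;=\; S\big(\Delta[\rho]\big) - S(\rho),
```

where Δ[ρ] keeps only the diagonal of ρ in the memory basis and S is the von Neumann entropy; a strictly positive C_r certifies non-classical coherence.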
Most greybox fuzzing tools are coverage-guided, as code coverage is strongly correlated with bug coverage. However, since most covered code may not contain bugs, blindly extending code coverage is less efficient, especially for corner cases. Unlike coverage-guided greybox fuzzers, which extend code coverage in an undirected manner, a directed greybox fuzzer spends most of its time budget on reaching specific targets (e.g., the bug-prone zone) without wasting resources stressing unrelated parts. Thus, directed greybox fuzzing (DGF) is particularly suitable for scenarios such as patch testing, bug reproduction, and specialist bug hunting. This paper studies DGF from a broader view, taking into account not only the location-directed type that targets specific code parts, but also the behaviour-directed type that aims to expose abnormal program behaviours. Herein, the first in-depth study of DGF is made, based on an investigation of 32 state-of-the-art fuzzers (78% published after 2019) that are closely related to DGF. A thorough assessment of the collected tools is conducted so as to systemise recent progress in this field. Finally, it summarises the challenges and provides perspectives for future research.
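An illustrative sketch of location-directed seed scheduling in the spirit of AFLGo-style DGF: seeds whose execution traces are closer to the target sites receive more mutation energy. The distance aggregation and energy formula below are simplifications, not taken from any specific tool in the survey.

```python
import math

def seed_distance(trace_blocks, target_dist):
    """Mean static distance of executed basic blocks to the target sites."""
    ds = [target_dist[b] for b in trace_blocks if b in target_dist]
    return sum(ds) / len(ds) if ds else math.inf

def assign_energy(seed_dist, min_d, max_d, base=16):
    """Closer seeds get exponentially more mutation energy."""
    if math.isinf(seed_dist):
        return base
    norm = (seed_dist - min_d) / (max_d - min_d + 1e-9)  # normalize to [0, 1]
    return int(base * 2 ** (4 * (1.0 - norm)))
```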
Magnetic resonance imaging (MRI) is a non-invasive and label-free technique widely used in medical diagnosis and life science research, and its success has benefited greatly from continuing efforts to enhance contrast and resolution. Here we report nanoscale MRI in a single cell using an atomic-size quantum sensor. With a nitrogen-vacancy center in diamond, the intracellular protein ferritin has been imaged with a spatial resolution of ~10 nanometers, and ferritin-containing organelles were co-localized by correlative MRI and electron microscopy. Compared to the micrometer resolution of current state-of-the-art conventional MRI, our approach represents a 100-fold enhancement and paves the way for MRI of intracellular proteins.
Detecting scene text of arbitrary shapes has been a challenging task over the past years. In this paper, we propose a novel segmentation-based text detector, namely SAST, which employs a context-attended multi-task learning framework based on a Fully Convolutional Network (FCN) to learn various geometric properties for the reconstruction of the polygonal representation of text regions. Taking the sequential characteristics of text into consideration, a Context Attention Block is introduced to capture long-range dependencies of pixel information and obtain a more reliable segmentation. In post-processing, a Point-to-Quad assignment method is proposed to cluster pixels into text instances by integrating both high-level object knowledge and low-level pixel information in a single shot. Moreover, the polygonal representation of arbitrarily-shaped text can be extracted much more effectively with the proposed geometric properties. Experiments on several benchmarks, including ICDAR2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text, demonstrate that SAST achieves better or comparable performance in terms of accuracy. Furthermore, the proposed algorithm runs at 27.63 FPS on SCUT-CTW1500 with an Hmean of 81.0% on a single NVIDIA Titan Xp graphics card, surpassing most of the existing segmentation-based methods.
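A rough sketch of the point-to-quad assignment idea: each positive pixel from the segmentation map is assigned to its nearest high-level instance center, so pixels cluster into text instances in a single pass. The nearest-center rule is a simplification of SAST's actual assignment.

```python
import numpy as np

def point_to_quad(text_mask, centers):
    """text_mask: (H, W) boolean text/non-text map;
    centers: (K, 2) predicted instance center points (y, x)."""
    ys, xs = np.nonzero(text_mask)
    pixels = np.stack([ys, xs], axis=1).astype(np.float32)            # (P, 2)
    d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=-1)   # (P, K)
    assign = d.argmin(axis=1)                                         # instance id per pixel
    return [pixels[assign == k] for k in range(len(centers))]         # pixel groups
```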