The decays $K^{\ast}(892)\rightarrow K_{S,L}^{0}\pi$ can be used to study CP violation and CPT violation. The $K^{\ast}(892)$ meson can be produced via $J/\psi$ decays at BESIII. In this paper, we study CP violation and CPT violation in $K^{\ast}(892)\rightarrow K_{S,L}^{0}\pi$ decays at BESIII. For two scenarios, samples of $10^{10}$ and $10^{12}$ $J/\psi$ events, we calculate the expected numbers of observed signal events for CP violation in $J/\psi$ decays with a $K^{\ast}(892)$ meson in the final state, and we find that the BESIII experiment may be able to unambiguously observe CP violation in either case. Under the assumption that no CPT-violating signal is observed, we discuss the upper limits on the absolute value of the CPT violation parameter $Re(z)$ in $J/\psi$ decays involving a $K^{\ast}(892)$ meson in the final state with $10^{10}$ and $10^{12}$ $J/\psi$ events, respectively. With the accumulated $10^{10}$ $J/\psi$ events, upper limits on $\left|Re(z)\right|$ of $10^{-4}-10^{-5}$ can be obtained at the $90\%$ confidence level, which is competitive with the current best result, if the detection efficiency $\varepsilon_{K^0_{L}}$ is assumed to be at the level of $8\times 10^{-3}$ at BESIII. Moreover, with $10^{12}$ $J/\psi$ events, the upper limits on $\left|Re(z)\right|$ would improve on the current best result by about two orders of magnitude.
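For orientation (not stated in the abstract): CPT symmetry requires the two diagonal elements of the effective Hamiltonian of the $K^0-\bar{K}^0$ system to be equal, and the parameter $z$ is built from their difference, so $Re(z)\neq 0$ would signal CPT violation in mixing. In one common convention (normalizations and sign conventions differ between references), the mass eigenstates read $|K_{S}\rangle \propto p\sqrt{1-z}\,|K^{0}\rangle + q\sqrt{1+z}\,|\bar{K}^{0}\rangle$ and $|K_{L}\rangle \propto p\sqrt{1+z}\,|K^{0}\rangle - q\sqrt{1-z}\,|\bar{K}^{0}\rangle$, which reduce to the usual CPT-conserving form at $z=0$, while $|q/p|\neq 1$ encodes CP violation in mixing.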
The high power and variable repetition rate of Yb femtosecond lasers make them very attractive for ultrafast science. However, for capturing sub-200 fs dynamics, efficient, high-fidelity, and high-stability pulse compression techniques are essential. Spectral broadening using an all-solid-state free-space geometry is particularly attractive, as it is simple, robust, and low-cost. However, spatial and temporal losses caused by spatio-spectral inhomogeneities have been a major challenge to date, due to coupled space-time dynamics associated with unguided nonlinear propagation. In this work, we use all-solid-state free-space compressors to demonstrate compression of 170 fs pulses at a wavelength of 1030 nm from a Yb:KGW laser to ~9.2 fs, with a highly spatially homogeneous mode. This is achieved by ensuring that the nonlinear beam propagation in periodic layered Kerr media occurs in soliton modes and confining the nonlinear phase through each material layer to less than 1.0 rad. A remarkable spatio-spectral homogeneity of ~0.87 can be realized, which yields a high efficiency of >50% for few-cycle compression. The universality of the method is demonstrated by implementing high-quality pulse compression under a wide range of laser conditions. The high spatiotemporal quality and the exceptional stability of the compressed pulses are further verified by high-harmonic generation. This work represents the highest efficiency and the best spatio-spectral quality ever achieved by an all-solid-state free-space pulse compressor for few-cycle-pulse generation.
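As a back-of-the-envelope illustration of the per-layer constraint mentioned above, the nonlinear phase accumulated in a thin Kerr plate can be estimated with the usual B-integral $\phi_{NL}\approx 2\pi n_2 I L/\lambda$; the plate thickness, peak intensity, and $n_2$ in the sketch below are placeholder values, not the parameters used in this work.

```python
import math

def nonlinear_phase(n2_m2_per_W, peak_intensity_W_per_m2, thickness_m, wavelength_m):
    """B-integral for a thin Kerr plate, assuming a constant intensity through the plate:
    phi_NL = (2 * pi / lambda) * n2 * I * L
    """
    return 2 * math.pi / wavelength_m * n2_m2_per_W * peak_intensity_W_per_m2 * thickness_m

# Hypothetical example: fused-silica-like plate (n2 ~ 2.5e-20 m^2/W), 1 mm thick,
# 500 GW/cm^2 peak intensity, 1030 nm wavelength (all illustrative values).
phi = nonlinear_phase(
    n2_m2_per_W=2.5e-20,
    peak_intensity_W_per_m2=500e9 * 1e4,   # 500 GW/cm^2 converted to W/m^2
    thickness_m=1e-3,
    wavelength_m=1030e-9,
)
print(f"Nonlinear phase per plate: {phi:.2f} rad")  # ~0.76 rad here, below the ~1 rad per-layer bound
```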
We present Mobile-Former, a parallel design of MobileNet and Transformer with a two-way bridge in between. This structure leverages the advantage of MobileNet at local processing and of the transformer at global interaction, and the bridge enables bidirectional fusion of local and global features. Unlike recent works on vision transformers, the transformer in Mobile-Former contains very few tokens (e.g., fewer than 6 tokens) that are randomly initialized, resulting in low computational cost. Combined with the proposed lightweight cross-attention that models the bridge, Mobile-Former is not only computationally efficient but also has stronger representation power, outperforming MobileNetV3 in the low-FLOP regime from 25M to 500M FLOPs on ImageNet classification. For instance, it achieves 77.9% top-1 accuracy at 294M FLOPs, gaining 1.3% over MobileNetV3 while saving 17% of the computation. When transferred to object detection, Mobile-Former outperforms MobileNetV3 by 8.6 AP.
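To make the bridge idea concrete, here is a minimal PyTorch sketch of a lightweight cross-attention in the Mobile-to-Former direction, in which a handful of randomly initialized global tokens attend over the local feature map. The module name, sizes, single-head layout, and the choice to project only the token (query) side are my illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MobileToFormerBridge(nn.Module):
    """A few randomly initialized global tokens attend over the local feature map.

    Illustrative sketch of one direction of a Mobile-Former-style two-way bridge; to keep
    the cross-attention light, the feature map is used directly as keys/values and only
    the token (query) side is projected. Names and sizes are placeholders.
    """
    def __init__(self, channels: int, num_tokens: int = 6):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, channels))  # learnable global tokens
        self.to_q = nn.Linear(channels, channels)                      # query projection (token side only)
        self.scale = channels ** -0.5

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) local features from the MobileNet branch
        b, c, h, w = feat.shape
        kv = feat.flatten(2).transpose(1, 2)                                # (B, H*W, C), unprojected keys/values
        q = self.to_q(self.tokens).unsqueeze(0).expand(b, -1, -1)          # (B, T, C)
        attn = torch.softmax(q @ kv.transpose(1, 2) * self.scale, dim=-1)  # (B, T, H*W)
        return self.tokens + attn @ kv                                     # updated global tokens, (B, T, C)

bridge = MobileToFormerBridge(channels=64)
out = bridge(torch.randn(2, 64, 14, 14))
print(out.shape)  # torch.Size([2, 6, 64])
```

Keeping the token count tiny is what makes the bridge cheap: the attention cost scales with (number of tokens) x (number of spatial positions) rather than quadratically in the spatial resolution.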
Recent research shows that deep neural networks are vulnerable to different types of attacks, such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them, the backdoor attack is the most cunning and can occur in almost every stage of the deep learning pipeline. Therefore, backdoor attacks have attracted considerable interest from both academia and industry. However, most existing backdoor attack methods are either visible or fragile to some effortless pre-processing such as common data transformations. To address these limitations, we propose a robust and invisible backdoor attack called Poison Ink. Concretely, we first leverage the image structures as target poisoning areas and fill them with poison ink (information) to generate the trigger pattern. As the image structure keeps its semantic meaning during data transformations, such a trigger pattern is inherently robust to data transformations. We then leverage a deep injection network to embed the trigger pattern into the cover image to achieve stealthiness. Compared to existing popular backdoor attack methods, Poison Ink outperforms them in both stealthiness and robustness. Through extensive experiments, we demonstrate that Poison Ink is not only general to different datasets and network architectures but also flexible for different attack scenarios. Besides, it also has very strong resistance against many state-of-the-art defense techniques.
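As a toy illustration of the structure-aligned trigger idea (not the paper's learned injection network, which is what provides invisibility), the snippet below simply extracts image edges and tints them with an "ink" color: because edges move together with the image content, such a trigger is naturally preserved under common transformations. The placeholder image, ink color, and blending strength are my own choices.

```python
import cv2
import numpy as np

def edge_trigger(image_bgr: np.ndarray, ink_bgr=(0, 48, 96), strength=0.15) -> np.ndarray:
    """Toy version of a structure-aligned trigger: tint the image's edge pixels.

    The published method embeds the 'poison ink' with a trained injection network;
    this sketch only illustrates why a structure-aligned trigger survives common
    transforms -- the edge mask moves with the image content itself.
    """
    edges = cv2.Canny(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    mask = (edges > 0)[..., None].astype(np.float32)      # (H, W, 1) edge mask
    ink = np.zeros_like(image_bgr)
    ink[:] = ink_bgr                                      # constant 'ink' color image
    blended = (1 - strength * mask) * image_bgr + strength * mask * ink
    return blended.astype(np.uint8)

img = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)  # placeholder image
poisoned = edge_trigger(img)
print(poisoned.shape, poisoned.dtype)
```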
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks. There has been significant progress in solutions to protect the IP of DNN models in classification tasks. However, little attention has been devoted to the protection of DNNs in image processing tasks. By utilizing consistent invisible spatial watermarks, one recent work first considered model watermarking for deep image processing networks and demonstrated its efficacy in many downstream tasks. Nevertheless, it highly depends on the hypothesis that the embedded watermarks in the network outputs are consistent. When the attacker uses common data augmentation attacks (e.g., rotation, cropping, and resizing) during surrogate model training, the protection fails completely because the underlying watermark consistency is destroyed. To mitigate this issue, we propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed. Specifically, the embedded watermarks are designed to be aligned with physically consistent image structures, such as edges or semantic regions. Experiments demonstrate that our method is much more robust than the baseline method in resisting data augmentation attacks for model IP protection. Beyond that, we further test the generalization ability and robustness of our method against a broader range of circumvention attacks.
Point cloud segmentation is a fundamental task in 3D. Despite recent progress on point cloud segmentation with the power of deep networks, current deep learning methods based on the clean-label assumption may fail with noisy labels. Yet object class labels are often mislabeled in real-world point cloud datasets. In this work, we take the lead in solving this issue by proposing a novel Point Noise-Adaptive Learning (PNAL) framework. Compared to existing noise-robust methods on image tasks, PNAL is noise-rate blind, in order to cope with the spatially variant noise rate problem specific to point clouds. Specifically, we propose a novel point-wise confidence selection to obtain reliable labels based on the historical predictions of each point. A novel cluster-wise label correction is proposed with a voting strategy to generate the best possible label, taking neighbor point correlations into consideration. We conduct extensive experiments to demonstrate the effectiveness of PNAL on both synthetic and real-world noisy datasets. In particular, even with $60\%$ symmetric noisy labels, our proposed method produces much better results than its baseline counterpart without PNAL and is comparable to the ideal upper bound trained on a completely clean dataset. Moreover, we fully re-labeled the validation set of the popular but noisy real-world scene dataset ScanNetV2 to make it clean, for rigorous experiments and future research. Our code and data are available at https://shuquanye.com/PNAL_website/.
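A minimal NumPy sketch of the two ingredients named above, history-based point-wise confidence selection and cluster-wise label voting; the selection criterion, thresholds, and random placeholder data are illustrative and not the exact rules used in PNAL.

```python
import numpy as np

def confident_points(history_probs: np.ndarray, labels: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """history_probs: (T, N, C) softmax outputs over the last T epochs for N points.
    A point is treated as 'reliable' if the averaged probability of its given label
    exceeds tau (an illustrative criterion, not the paper's exact rule)."""
    mean_probs = history_probs.mean(axis=0)                 # (N, C)
    conf = mean_probs[np.arange(labels.size), labels]       # confidence of each point's given label
    return conf > tau

def cluster_vote(cluster_ids: np.ndarray, labels: np.ndarray, reliable: np.ndarray,
                 num_classes: int) -> np.ndarray:
    """Correct labels cluster by cluster: every point in a cluster takes the majority
    label voted by that cluster's reliable points (if there are any)."""
    corrected = labels.copy()
    for c in np.unique(cluster_ids):
        members = cluster_ids == c
        voters = members & reliable
        if voters.any():
            votes = np.bincount(labels[voters], minlength=num_classes)
            corrected[members] = votes.argmax()
    return corrected

# Tiny usage example with random placeholder data.
T, N, C = 5, 1000, 4
hist = np.random.dirichlet(np.ones(C), size=(T, N))        # (T, N, C) fake prediction history
noisy = np.random.randint(C, size=N)                       # fake noisy labels
clusters = np.random.randint(50, size=N)                   # fake cluster assignments
reliable = confident_points(hist, noisy, tau=0.3)
clean = cluster_vote(clusters, noisy, reliable, C)
print(reliable.mean(), (clean != noisy).mean())
```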
Unsupervised pretraining has achieved great success, and many recent works have shown that unsupervised pretraining can achieve comparable or even slightly better transfer performance than supervised pretraining on downstream target datasets. But in this paper, we find this conclusion may not hold when the target dataset has very few labeled samples for finetuning, i.e., few-label transfer. We analyze the possible reason from the clustering perspective: 1) the clustering quality of target samples is of great importance to few-label transfer; 2) though contrastive learning is essential for learning how to cluster, its clustering quality is still inferior to that of supervised pretraining due to the lack of label supervision. Based on this analysis, we interestingly discover that merely involving some unlabeled target-domain data in the unsupervised pretraining can improve the clustering quality and subsequently reduce the transfer performance gap with supervised pretraining. This finding also motivates us to propose a new progressive few-label transfer algorithm for real applications, which aims to maximize the transfer performance under a limited annotation budget. To support our analysis and proposed method, we conduct extensive experiments on nine different target datasets. Experimental results show that our proposed method can significantly boost the few-label transfer performance of unsupervised pretraining.
Although much progress has been made in visual emotion recognition, researchers have realized that modern deep networks tend to exploit dataset characteristics to learn spurious statistical associations between the input and the target. Such dataset characteristics are usually treated as dataset bias, which damages the robustness and generalization performance of these recognition systems. In this work, we scrutinize this problem from the perspective of causal inference, where such a dataset characteristic is termed a confounder that misleads the system into learning the spurious correlation. To alleviate the negative effects brought by the dataset bias, we propose a novel Interventional Emotion Recognition Network (IERN) to achieve the backdoor adjustment, a fundamental deconfounding technique in causal inference. A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
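For reference, the backdoor adjustment in its standard form replaces the observational conditional with an intervention that averages over the confounder, $P(Y\mid do(X)) = \sum_{c} P(Y\mid X, C=c)\,P(C=c)$, where $X$ is the input image, $Y$ the emotion label, and $C$ the confounder (here, the dataset-specific characteristic) summed over its strata $c$; how IERN approximates the strata of $C$ in practice is specified in the paper itself.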
Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under common local synaptic plasticity learning rules limits their application in many practical tasks. Recently, an emerging SNN supervised learning algorithm inspired by backpropagation through time (BPTT) from the domain of artificial neural networks (ANNs) has successfully boosted the accuracy of SNNs and helped improve their practicability. However, current general-purpose processors suffer from low efficiency when performing BPTT for SNNs because of their ANN-tailored optimizations. On the other hand, current neuromorphic chips cannot support BPTT because they mainly adopt local synaptic plasticity rules for simplified implementation. In this work, we propose H2Learn, a novel architecture that achieves high efficiency for BPTT-based SNN learning while ensuring high SNN accuracy. We first characterize the behaviors of BPTT-based SNN learning. Benefiting from the binary spike-based computation in the forward pass and the weight update, we design lookup table (LUT) based processing elements in the Forward Engine and Weight Update Engine to make accumulations implicit and to fuse the computations of multiple input points. Second, benefiting from the rich sparsity in the backward pass, we design a dual-sparsity-aware Backward Engine that exploits both input and output sparsity. Finally, we apply pipeline optimization between the different engines to build an end-to-end solution for BPTT-based SNN learning. Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38x area saving, 5.74-10.20x speedup, and 5.25-7.12x energy saving on several benchmark datasets.
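To illustrate why binary spikes make LUT-based processing elements attractive (an instance of the general idea, not H2Learn's actual microarchitecture), the sketch below precomputes the partial sums of a small weight group for every possible spike pattern, so each group costs one table lookup instead of k multiply-accumulates; the group size and layout here are arbitrary choices.

```python
import numpy as np
from itertools import product

def build_lut(weights: np.ndarray) -> np.ndarray:
    """Precompute partial sums for every possible pattern of k binary spikes.
    weights: (k,) synaptic weights handled by one processing element."""
    k = weights.size
    lut = np.empty(2 ** k, dtype=weights.dtype)
    for idx, bits in enumerate(product((0, 1), repeat=k)):
        lut[idx] = np.dot(np.array(bits, dtype=weights.dtype), weights)
    return lut

def lut_dot(spikes: np.ndarray, luts: list, k: int) -> float:
    """Dot product of a binary spike vector with the weights, using one lookup per k-spike group."""
    total = 0.0
    for g, lut in enumerate(luts):
        group = spikes[g * k:(g + 1) * k]
        index = int("".join(map(str, group)), 2)   # pack k spikes into a LUT index
        total += lut[index]
    return total

k, groups = 4, 8
w = np.random.randn(k * groups).astype(np.float32)
luts = [build_lut(w[g * k:(g + 1) * k]) for g in range(groups)]
s = np.random.randint(0, 2, size=k * groups)       # binary spike vector
assert np.isclose(lut_dot(s, luts, k), float(s @ w), atol=1e-5)
print("LUT-based accumulation matches the direct dot product.")
```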
The large data samples of $B_c$ mesons collected at the LHC and HL-LHC experiments provide us the opportunity to study $B_c$ decays and the related physics. In this paper, we investigate the effect of $K^0-\bar{K}^0$ mixing on the branching ratios, CP violations, and CPT violations in the $B_{c}^{\pm}\rightarrow B^{\pm} K_{S,L}^{0}$ decays. We find that some of the $B_c^{\pm}\rightarrow B^{\pm} K_{S,L}^0\rightarrow f_{B^{\pm}} f_{K_{S,L}^0}$ decay chains have large branching ratios, whose maximum value can exceed the order of $10^{-6}$; the minimum number of $B_c^{\pm}$ events times efficiency needed to observe these decays at the three-standard-deviation (3$\sigma$) level is about $10^6$. We study the CP asymmetries in the $B_c^{\pm}\rightarrow B^{\pm} K_{S,L}^0$ decays and find that the CP asymmetries can exceed the order of $10^{-3}$ and are dominated by $K^0-\bar{K}^0$ mixing. We give the most promising processes for observing the CP violations and the ranges of the number of $B_c^{\pm}$ events times efficiency needed to observe the CP asymmetries at a significance of 3$\sigma$ in these decays. We investigate the possibility of constraining the CPT violation parameter $Re(z)$ in the $B_c^{\pm}\rightarrow B^{\pm} K_{S,L}^0\rightarrow f_{B^{\pm}} f_{K_{S,L}^0}$ decays and give the most promising processes for extracting the parameter $Re(z)$. We find that the sensitivity of the measurement of $Re(z)$ can reach below $10^{-3}$ in the most promising decays if we assume a selection efficiency of $10^{-3}$ and a total number of $B_c^{\pm}$ events collected by the LHCb experiment of $10^{12}$ in the HL-LHC era.
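For scale (an estimate, not a number quoted from the paper): if a single weak amplitude dominates the $B_c^{\pm}\rightarrow B^{\pm}K^0/\bar{K}^0$ transition, the asymmetry between the charge-conjugate rates into a $K_S^0$ is driven by CP violation in $K^0-\bar{K}^0$ mixing, $A_{CP} = \frac{\Gamma(B_c^{+}\rightarrow B^{+}K_S^{0})-\Gamma(B_c^{-}\rightarrow B^{-}K_S^{0})}{\Gamma(B_c^{+}\rightarrow B^{+}K_S^{0})+\Gamma(B_c^{-}\rightarrow B^{-}K_S^{0})} \sim \frac{2\,Re(\epsilon_K)}{1+|\epsilon_K|^{2}} \approx 3\times 10^{-3}$ in magnitude (the precise value and sign depend on the decay amplitudes), which is consistent with the statement above that the mixing-dominated asymmetries can exceed the $10^{-3}$ level.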