Recommender systems usually suffer from severe popularity bias -- the collected interaction data usually exhibits a quite imbalanced or even long-tailed distribution over items. Such a skewed distribution may result from users' conformity to the group, which deviates from their true preference. Existing efforts for tackling this issue mainly focus on completely eliminating popularity bias. However, we argue that not all popularity bias is evil. Popularity bias results not only from conformity but also from item quality, which is usually ignored by existing methods. Some items exhibit higher popularity because they have intrinsically better properties. Blindly removing the popularity bias would lose such an important signal and further deteriorate model performance. To sufficiently exploit this information for recommendation, it is essential to disentangle the benign popularity bias caused by item quality from the harmful popularity bias caused by conformity. Although important, this is quite challenging, as we lack an explicit signal to differentiate the two factors of popularity bias. In this paper, we propose to leverage temporal information, since the two factors exhibit quite different patterns over time: item quality, which reveals an item's inherent property, is stable and static, while conformity, which depends on an item's recent clicks, is highly time-sensitive. Correspondingly, we propose a novel Time-aware DisEntangled framework (TIDE), where a click is generated from three components, namely the static item quality, the dynamic conformity effect, and the user-item matching score returned by any recommendation model. Lastly, we conduct interventional inference such that the recommendation benefits from the benign popularity bias while circumventing the harmful one. Extensive experiments on three real-world datasets demonstrate the effectiveness of TIDE.
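To make the TIDE decomposition concrete, below is a minimal, hypothetical Python sketch of a click score combining a static quality term, a time-decayed conformity term built from recent clicks, and a base user-item matching score; the functional form, the exponential decay, and the interventional rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch of a TIDE-style score decomposition; the combination rule
# and the exponential decay are illustrative assumptions.
def click_score(match_score, quality, recent_click_times, t_now, tau=7.0):
    q = np.tanh(quality)                                    # static, item-inherent quality
    recency = t_now - np.asarray(recent_click_times, dtype=float)
    conformity = np.sum(np.exp(-recency / tau))             # popularity from recent clicks, time-decayed
    c = np.tanh(conformity)                                 # dynamic conformity effect
    return (q + c) * match_score                            # training-time click score

def debiased_score(match_score, quality):
    # Interventional inference: keep the benign quality term, drop the conformity term.
    return np.tanh(quality) * match_score
```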
Knowledge Distillation (KD) aims at transferring knowledge from a larger, well-optimized teacher network to a smaller, learnable student network. Existing KD methods have mainly considered two types of knowledge, namely individual knowledge and relational knowledge. However, these two types of knowledge are usually modeled independently, while the inherent correlations between them are largely ignored. For sufficient student network learning, it is critical to integrate both individual knowledge and relational knowledge while preserving their inherent correlation. In this paper, we propose to distill novel holistic knowledge based on an attributed graph constructed among instances. The holistic knowledge is represented as a unified graph-based embedding obtained by aggregating individual knowledge from relational neighborhood samples with graph neural networks; the student network is then learned by distilling the holistic knowledge in a contrastive manner. Extensive experiments and ablation studies conducted on benchmark datasets demonstrate the effectiveness of the proposed method. The code has been published at https://github.com/wyc-ruiker/HKD
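As a rough illustration of distilling graph-aggregated (holistic) knowledge contrastively, the sketch below uses a simple mean aggregation over an affinity graph and an InfoNCE-style loss; the aggregation rule, the temperature, and the assumption that student and teacher features share the same dimension are simplifications, not the exact HKD design.

```python
import torch
import torch.nn.functional as F

def aggregate(features, adj):
    # features: (N, d) instance embeddings; adj: (N, N) 0/1 affinity graph among instances.
    adj = adj + torch.eye(adj.size(0), device=adj.device)   # keep each instance's own knowledge
    deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
    return (adj / deg) @ features                           # mean over the relational neighborhood

def holistic_kd_loss(student_feat, teacher_feat, adj, temperature=0.1):
    # Contrast each student holistic embedding against all teacher holistic embeddings;
    # the positive pair is the same instance in the student and teacher graphs.
    hs = F.normalize(aggregate(student_feat, adj), dim=-1)
    ht = F.normalize(aggregate(teacher_feat, adj), dim=-1)
    logits = hs @ ht.t() / temperature
    targets = torch.arange(hs.size(0), device=hs.device)
    return F.cross_entropy(logits, targets)
```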
The interplay among anisotropic magnetic terms, such as the bond-dependent Kitaev interactions and single-ion anisotropy, plays a key role in stabilizing the finite-temperature ferromagnetism in the two-dimensional compound $\mathrm{CrSiTe_3}$. While the Heisenberg interaction is predominant in this material, a recent work shows that it is rather sensitive to compressive strain, leading to a variety of phases, possibly including a sought-after Kitaev quantum spin liquid [C. Xu \textit{et al.}, Phys. Rev. Lett. \textbf{124}, 087205 (2020)]. To further understand these states, we establish the quantum phase diagram of a related bond-directional spin-$3/2$ model by the density-matrix renormalization group method. As the Heisenberg coupling varies from ferromagnetic to antiferromagnetic, three magnetically ordered phases, i.e., a ferromagnetic phase, a $120^\circ$ phase and an antiferromagnetic phase, appear consecutively. All the phases are separated by first-order phase transitions, as revealed by the kinks in the ground-state energy and the jumps in the magnetic order parameters. However, no positive evidence of the quantum spin liquid state is found, and possible reasons are discussed briefly.
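The abstract does not reproduce the model explicitly; as a generic sketch under the stated assumptions (Heisenberg coupling $J$, bond-dependent Kitaev coupling $K$ on bond type $\gamma \in \{x,y,z\}$, and a possible single-ion anisotropy $A$), such a bond-directional spin-$3/2$ Hamiltonian on the honeycomb lattice can be written as $$H = \sum_{\langle ij\rangle_\gamma} \left( J\,\mathbf{S}_i\cdot\mathbf{S}_j + K\,S_i^{\gamma} S_j^{\gamma} \right) + A \sum_i \left(S_i^{z}\right)^2,$$ where the phase diagram above is traced by tuning $J$ from ferromagnetic to antiferromagnetic; whether the anisotropy term is retained in the studied model is not specified here.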
Machine learning and wireless communication technologies are jointly facilitating an intelligent edge, where federated edge learning (FEEL) is a promising training framework. As the wireless devices involved in FEEL are resource-limited in terms of communication bandwidth, computing power and battery capacity, it is important to carefully schedule them to optimize the training performance. In this work, we consider an over-the-air FEEL system with analog gradient aggregation, and propose an energy-aware dynamic device scheduling algorithm to optimize the training performance under the energy constraints of devices, where both the communication energy for gradient aggregation and the computation energy for local training are included. The consideration of computation energy makes dynamic scheduling challenging, as devices are scheduled before local training, but the communication energy for over-the-air aggregation depends on the $\ell_2$-norm of the local gradient, which is only known after local training. We thus incorporate estimation methods into scheduling to predict the gradient norm. Taking the estimation error into account, we characterize the performance gap between the proposed algorithm and its offline counterpart. Experimental results show that, under a highly unbalanced local data distribution, the proposed algorithm can increase the accuracy by 4.9% on the CIFAR-10 dataset compared with the myopic benchmark, while satisfying the energy constraints.
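A heavily simplified, hypothetical sketch of such an energy-aware scheduling decision is given below: each device's transmit energy for analog aggregation is taken to grow with its predicted gradient power, and a Lyapunov-style virtual energy queue trades off training utility against energy cost. The field names, the utility choice, and the drift-plus-penalty test are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical sketch of an energy-aware scheduling rule for over-the-air FEEL.
# 'grad_norm_estimate' is the predicted l2-norm of the local gradient (available
# before local training); the weight V and the virtual energy queue are assumptions.
def schedule_devices(devices, V=1.0):
    scheduled = []
    for d in devices:
        g_hat = d['grad_norm_estimate']
        e_comp = d['comp_energy']                    # energy for local training
        e_comm = d['comm_coeff'] * g_hat ** 2        # analog-aggregation TX energy grows with gradient power
        utility = g_hat                              # larger gradients are assumed to help convergence more
        if V * utility >= d['energy_queue'] * (e_comp + e_comm):
            scheduled.append(d['id'])                # drift-plus-penalty style test
    return scheduled
```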
In federated learning (FL), devices contribute to the global training by uploading their local model updates via wireless channels. Due to limited computation and communication resources, device scheduling is crucial to the convergence rate of FL. In this paper, we propose a joint device scheduling and resource allocation policy to maximize the model accuracy within a given total training time budget for latency-constrained wireless FL. A lower bound on the reciprocal of the training performance loss, in terms of the number of training rounds and the number of scheduled devices per round, is derived. Based on the bound, the accuracy maximization problem is solved by decoupling it into two sub-problems. First, given the scheduled devices, the optimal bandwidth allocation suggests allocating more bandwidth to devices with worse channel conditions or weaker computation capabilities. Then, a greedy device scheduling algorithm is introduced, which in each step selects the device consuming the least updating time under the optimal bandwidth allocation, until the lower bound begins to increase, meaning that scheduling more devices would degrade the model accuracy. Experiments show that the proposed policy outperforms state-of-the-art scheduling policies across extensive settings of data distributions and cell radii.
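The greedy scheduling loop described above can be sketched as follows; `update_time_fn` and `perf_bound_fn` stand in for the paper's optimal-bandwidth update time and derived performance bound, which are not reproduced here, so this is only a structural sketch of the stopping rule.

```python
def greedy_schedule(devices, update_time_fn, perf_bound_fn):
    # update_time_fn(S): per-round update time of schedule S under optimal bandwidth allocation (placeholder).
    # perf_bound_fn(S): the derived performance bound for schedule S (placeholder).
    scheduled, prev_bound = [], None
    remaining = list(devices)
    while remaining:
        nxt = min(remaining, key=lambda d: update_time_fn(scheduled + [d]))
        bound = perf_bound_fn(scheduled + [nxt])
        if prev_bound is not None and bound > prev_bound:
            break                      # the bound starts to increase: more devices would hurt accuracy
        prev_bound = bound
        scheduled.append(nxt)
        remaining.remove(nxt)
    return scheduled
```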
Deep Neural Network (DNN) models have become one of the most valuable enterprise assets due to their critical roles in all aspects of applications. With the trend towards privatized deployment of DNN models, data leakage of these models is becoming increasingly serious and widespread. All existing model-extraction attacks can only leak parts of targeted DNN models, with low accuracy or high overhead. In this paper, we first identify a new attack surface -- unencrypted PCIe traffic -- for leaking DNN models. Based on this new attack surface, we propose a novel model-extraction attack, namely the Hermes Attack, which is the first attack to fully steal a whole victim DNN model. The stolen DNN models have the same hyper-parameters, parameters, and semantically identical architecture as the original ones. This is challenging due to the closed-source CUDA runtime, driver, and GPU internals, as well as the undocumented data structures and the loss of some critical semantics in the PCIe traffic. Additionally, there are millions of PCIe packets, with substantial noise and chaotic ordering. Our Hermes Attack addresses these issues through extensive reverse engineering and reliable semantic reconstruction, together with careful packet selection and order correction. We implement a prototype of the Hermes Attack, and evaluate two sequential DNN models (i.e., MNIST and VGG) and one non-sequential DNN model (i.e., ResNet) on three NVIDIA GPU platforms, i.e., NVIDIA GeForce GT 730, NVIDIA GeForce GTX 1080 Ti, and NVIDIA GeForce RTX 2080 Ti. The evaluation results indicate that our scheme is able to efficiently and completely reconstruct ALL of them by observing the traffic of inference on just a single image. Evaluated with the CIFAR-10 test dataset that contains 10,000 images, the experimental results show that the stolen models have the same inference accuracy as the original ones (i.e., lossless inference accuracy).
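Purely as an illustration of the packet selection and order correction step, the sketch below filters captured packets, reorders them by a sequence tag, and reassembles a byte stream for later parsing; every field name here is hypothetical and does not reflect the actual PCIe TLP layout or the paper's reconstruction pipeline.

```python
# Illustrative only: field names ('is_dma_write', 'seq', 'payload') are hypothetical.
def reconstruct_stream(packets):
    selected = [p for p in packets if p.get('is_dma_write')]   # keep packets assumed to carry model data
    selected.sort(key=lambda p: p['seq'])                      # correct chaotic ordering
    return b''.join(p['payload'] for p in selected)            # reassemble for semantic reconstruction
```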
In a vehicular edge computing (VEC) system, vehicles can share their surplus computation resources to provide cloud computing services. The highly dynamic environment of the vehicular network makes it challenging to guarantee the task offloading delay. To this end, we introduce task replication to the VEC system, where the replicas of a task are offloaded to multiple vehicles at the same time, and the task is completed upon the first response among the replicas. First, the impact of the number of task replicas on the offloading delay is characterized, and the optimal number of task replicas is approximated in closed form. Based on the analytical result, we design a learning-based task replication algorithm (LTRA) using combinatorial multi-armed bandit theory, which works in a distributed manner and can automatically adapt itself to the dynamics of the VEC system. A realistic traffic scenario is used to evaluate the delay performance of the proposed algorithm. Results show that, under our simulation settings, LTRA with an optimized number of task replicas can reduce the average offloading delay by over 30% compared with the benchmark without task replication, and at the same time can improve the task completion ratio from 97% to 99.6%.
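A minimal sketch of a combinatorial-bandit replica-placement rule in the spirit of LTRA is shown below; the UCB-style confidence bonus and the delay-minus-bonus index are assumptions for illustration, not the paper's exact algorithm.

```python
import math

class ReplicaScheduler:
    """Pick k candidate vehicles per task with a UCB-style index on observed delays (illustrative)."""
    def __init__(self, vehicle_ids, k_replicas):
        self.k = k_replicas
        self.counts = {v: 0 for v in vehicle_ids}
        self.mean_delay = {v: 0.0 for v in vehicle_ids}
        self.t = 0

    def select(self):
        self.t += 1
        def index(v):
            if self.counts[v] == 0:
                return float('-inf')                          # force initial exploration
            bonus = math.sqrt(1.5 * math.log(self.t) / self.counts[v])
            return self.mean_delay[v] - bonus                 # optimistic (low) delay estimate
        return sorted(self.counts, key=index)[: self.k]

    def update(self, vehicle, observed_delay):
        self.counts[vehicle] += 1
        n = self.counts[vehicle]
        self.mean_delay[vehicle] += (observed_delay - self.mean_delay[vehicle]) / n
```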
As 5G and the Internet of Things (IoT) are deeply integrated into vertical industries such as autonomous driving and industrial robotics, timely status updates are crucial for remote monitoring and control. In this regard, Age of Information (AoI) has been proposed to measure the freshness of status updates. However, it is merely a metric that grows linearly with time and is agnostic to context. We propose a context-based metric, named Urgency of Information (UoI), to measure the nonlinear time-varying importance and the non-uniform context-dependence of the status information. This paper first establishes a theoretical framework for UoI characterization and then provides UoI-optimal status updating and user scheduling schemes in both single-terminal and multi-terminal cases. Specifically, an update-index-based scheme is proposed for a single-terminal system, where the terminal updates and transmits whenever its update index is larger than a threshold. For the multi-terminal case, the UoI of the proposed scheduling scheme is proven to be upper-bounded, and its decentralized implementation via Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is also provided. In simulations, the proposed updating and scheduling schemes notably outperform existing ones such as round-robin and AoI-optimal schemes in terms of UoI, error-bound violation, and control system stability.
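For the single-terminal case, the update-index rule reduces to a simple threshold test; the concrete index below (context weight times squared estimation error) is an assumed form for illustration, not the paper's exact definition.

```python
# Hypothetical form of the update index: context-weighted squared estimation error.
def should_update(context_weight, est_error, threshold):
    update_index = context_weight * est_error ** 2
    return update_index > threshold      # update and transmit only when the index exceeds the threshold
```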
Improving the sparsity of deep neural networks (DNNs) is essential for network compression and has drawn much attention. In this work, we disclose a harmful sparsifying process called filter collapse, which is common in DNNs with batch normalization (BN) and rectified linear activation functions (e.g., ReLU, Leaky ReLU). It occurs even without explicit sparsity-inducing regularizations such as $L_1$. This phenomenon is caused by the normalization effect of BN, which induces a non-trainable region in the parameter space and, as a result, reduces the network capacity. The phenomenon becomes more prominent when the network is trained with large learning rates (LR) or adaptive LR schedulers, and when the network is finetuned. We analytically prove that the parameters of BN tend to become sparser during SGD updates with high gradient noise, and that the sparsifying probability is proportional to the square of the learning rate and inversely proportional to the square of the scale parameter of BN. To prevent undesirable collapsed filters, we propose a simple yet effective approach named post-shifted BN (psBN), which has the same representation ability as BN while being able to automatically make BN parameters trainable again as they saturate during training. With psBN, we can recover collapsed filters and improve model performance on various tasks, such as classification on CIFAR-10 and object detection on MS-COCO2017.
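A hedged sketch of a "post-shifted" BN layer is given below: a small constant shift applied after BN's affine transform, which is absorbable into the bias (so the representation ability matches BN) yet can keep a collapsed filter's pre-activations from being entirely cut off by ReLU. The shift value and its placement are assumptions; the paper's actual psBN design may differ.

```python
import torch.nn as nn

class PostShiftedBN2d(nn.Module):
    """BN followed by a small constant post-shift (illustrative sketch, not the exact psBN)."""
    def __init__(self, num_features, shift=0.01):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features)
        self.shift = shift   # equivalent to shifting BN's bias, so representation ability is unchanged

    def forward(self, x):
        # The added shift keeps some positive pre-activations even when gamma has shrunk toward zero,
        # so gradients can still flow back into the BN parameters through the following ReLU.
        return self.bn(x) + self.shift
```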
Timely status updating is crucial for future applications that involve remote monitoring and control, such as autonomous driving and the Industrial Internet of Things (IIoT). Age of Information (AoI) has been proposed to measure the freshness of status updates. However, it is incapable of capturing critical systematic context information that indicates the time-varying importance of status information and the dynamic evolution of status. In this paper, we propose a context-based metric, namely the Urgency of Information (UoI), to evaluate the timeliness of status updates. Compared to AoI, the new metric incorporates both time-varying context information and dynamic status evolution, which enables the analysis of context-based adaptive status update schemes, as well as more effective remote monitoring and control. The minimization of the average UoI for a status update terminal with an updating frequency constraint is investigated, and an update-index-based adaptive scheme is proposed. Simulation results show that the proposed scheme achieves near-optimal performance with low computational complexity.
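One simple way to respect an updating frequency constraint with an update-index scheme is to adapt the decision threshold with a dual-style update, sketched below; this adaptation rule is an assumption for illustration rather than the scheme derived in the paper.

```python
# Raise the threshold when updating more often than the frequency budget allows,
# lower it otherwise, so the long-run update frequency approaches the budget (illustrative).
def adapt_threshold(threshold, updated, freq_budget, step=0.05):
    return max(0.0, threshold + step * ((1.0 if updated else 0.0) - freq_budget))
```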