Cross-training workers is one of the most efficient ways to achieve flexibility in manufacturing and service systems and to increase responsiveness to demand variability. However, cross-trained employees are generally not as productive as employees originally trained on a specific task. Moreover, the productivity of cross-trained workers depends on when they are cross-trained. In this work, we consider a two-stage model to analyze the effect of variations in workers' productivity levels on cross-training policies. Our results indicate that the most important factor determining the problem structure is the consistency in the productivity levels of workers trained at different times. As long as cross-training can be done in a consistent manner, the productivity differences between cross-trained workers and workers originally trained on the task play a minor role. We also analyze the effect of variability in demand and productivity levels. We show that if the productivity levels of workers trained at different times are consistent, the decision maker is inclined to defer cross-training decisions as the variability of demand or productivity levels increases. However, when the productivities of workers trained at different times differ, the decision maker may prefer to invest more in cross-training earlier as variability increases.
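The two-stage structure described above can be illustrated with a minimal Monte Carlo sketch. All names, parameters, and cost values below are illustrative assumptions, not the model from the abstract: `p1` and `p2` stand for the productivities of workers cross-trained in the first and second stage, `c` for a per-worker training cost, and `penalty` for the per-unit cost of unmet demand. The second-stage recourse simply trains extra workers after demand is observed whenever that is cheaper than paying the shortage penalty.

```python
import random

def expected_cost(x1, p1=1.0, p2=0.8, c=10.0, penalty=25.0,
                  mean_demand=50.0, sigma=10.0, n_sims=10_000, seed=0):
    """Hypothetical two-stage cross-training sketch.

    First stage: cross-train x1 workers now, each with productivity p1.
    Second stage: after demand d is observed, cover any shortfall either
    by training shortfall/p2 additional workers (productivity p2) or by
    paying the shortage penalty, whichever is cheaper.
    Returns the simulated expected total cost.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        d = max(0.0, rng.gauss(mean_demand, sigma))
        shortfall = max(0.0, d - x1 * p1)
        # Recourse decision made with full knowledge of realized demand.
        recourse_train = c * (shortfall / p2)
        recourse_penalty = penalty * shortfall
        total += c * x1 + min(recourse_train, recourse_penalty)
    return total / n_sims
```

Sweeping `x1` for different values of `sigma` (demand variability) and of the gap between `p1` and `p2` lets one probe, in this toy setting, the trade-off the abstract describes: when second-stage training is as effective as first-stage training, higher variability favors deferring the decision, whereas a large productivity gap can make early investment attractive.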
Translating e-commerce product descriptions, a.k.a. product-oriented machine translation (PMT), is essential to serve e-shoppers all over the world. However, due to the domain specialty, the PMT task is more challenging than traditional machine translation.
In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks. We show that TAWT is easy to implement
In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we
Dense retrieval has shown great success in passage ranking in English. However, its effectiveness in document retrieval for non-English languages remains unexplored due to limitations in training resources. In this work, we explore different trans
In a previous work we have detailed the requirements to obtain a maximal performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices for deep learning. This concept of Resistive Processing Unit