Robots may soon play a role in higher education by augmenting learning environments and managing interactions between instructors and learners. Little, however, is known about how the presence of robots in the learning environment will influence academic integrity. This study therefore investigates whether and how college students cheat while engaged in a collaborative sorting task with a robot. We employed a 2x2 factorial design to examine the effects of cheating exposure (exposure to cheating or no exposure) and task clarity (clear or vague rules) on college students' cheating behaviors while interacting with a robot. We find that prior exposure to cheating on the task significantly increased the likelihood of cheating, whereas the tendency to cheat was not affected by the clarity of the task rules. These results suggest that normative behavior by classmates may strongly influence the decision to cheat while engaged in an instructional experience with a robot.
Despite the great success of deep neural networks, adversarial attacks can cheat some well-trained classifiers with small perturbations. In this paper, we propose another type of adversarial attack that can cheat classifiers with significant changes. For example, we can significantly change a face, yet well-trained neural networks still recognize the adversarial and the original example as the same person. Statistically, the existing adversarial attack increases Type II error while the proposed one targets Type I error; they are hence named Type II and Type I adversarial attacks, respectively. The two types of attacks are equally important but essentially different, which we explain intuitively and evaluate numerically. To implement the proposed attack, a supervised variational autoencoder is designed, and the classifier is then attacked by updating the latent variables using gradient information. In addition, with pre-trained generative models, Type I attacks on latent spaces are investigated as well. Experimental results show that our method is practical and effective for generating Type I adversarial examples on large-scale image datasets. Most of these generated examples can pass detectors designed to defend against Type II attacks, and the strengthening strategy is effective only against a specific attack type, both implying that the underlying causes of Type I and Type II attacks are different.
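As a rough illustration of the latent-space attack described above, the sketch below updates a latent code so that the decoded image changes substantially while a classifier's prediction stays fixed on the original label. This is a minimal sketch only: the module names (encoder, decoder, classifier), the loss weighting, and the optimizer settings are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of a Type I adversarial attack via latent-variable updates.
# Assumes pre-trained `encoder`, `decoder`, and `classifier` modules exist
# (illustrative names, not the paper's code).
import torch
import torch.nn.functional as F

def type1_attack(x, encoder, decoder, classifier, steps=200, lr=0.05, lam=10.0):
    y = classifier(x).argmax(dim=1)                    # prediction to preserve
    z = encoder(x).detach().requires_grad_(True)       # latent code to optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_adv = decoder(z)
        # Encourage a large change in image space ...
        change = -torch.mean((x_adv - x) ** 2)
        # ... while penalizing any drift of the classifier away from label y.
        consistency = F.cross_entropy(classifier(x_adv), y)
        loss = change + lam * consistency
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```

The weight `lam` trades off how different the adversarial example looks against how strictly the classifier's output is held to the original class; larger values favor prediction consistency.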
Socially Assistive Robots (SARs) offer great promise for improving outcomes in paediatric rehabilitation. However, the design of software and interactive capabilities for SARs must be carefully considered in the context of their intended clinical use. While previous work has explored specific roles and functionalities to support paediatric rehabilitation, few studies have considered the design of such capabilities in the context of ongoing clinical deployment. In this paper, we present a two-phase in-situ design process for SARs in health care, emphasising stakeholder engagement and on-site development. We explore this in the context of developing the humanoid social robot NAO as a socially assistive rehabilitation aid for children with cerebral palsy. We present and evaluate our design process, the outcomes achieved, and preliminary results from ongoing clinical testing with 9 patients and 5 therapists over 14 sessions. We argue that our in-situ design methodology has been central to the rapid and successful deployment of our system.
Unconditionally secure non-relativistic bit commitment is known to be impossible in both the classical and the quantum worlds. But when committing to a string of $n$ bits at once, how far can we stretch the quantum limits? In this paper, we introduce a framework for quantum schemes where Alice commits a string of $n$ bits to Bob in such a way that she can only cheat on $a$ bits and Bob can learn at most $b$ bits of information before the reveal phase. Our results are two-fold: we show by an explicit construction that in the traditional approach, where the reveal and guess probabilities form the security criteria, no good schemes can exist, in the sense that $a+b$ is at least $n$. If, however, we use a more liberal criterion of security, the accessible information, we construct schemes where $a = 4\log n + O(1)$ and $b = 4$, which is impossible classically. We furthermore present a cheat-sensitive quantum bit string commitment protocol for which we give an explicit tradeoff between Bob's ability to gain information about the committed string and the probability of him being detected cheating.
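For reference, the two regimes quoted above can be written side by side (this merely restates the abstract's claims in the same notation; no new constants are introduced):
$$a + b \;\geq\; n \quad \text{(reveal/guess-probability criterion)}, \qquad a = 4\log n + O(1),\;\; b = 4 \quad \text{(accessible-information criterion)}.$$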
Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past two decades. The majority of prior literature adopted a snapshot view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This snapshot view, however, does not acknowledge that trust is a time-variant variable that can strengthen or decay over time. To fill this research gap, the present study aims to model trust dynamics when a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his or her trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model based on the Beta distribution and learn its parameters using Bayesian inference. Our proposed model adheres to three major properties of trust dynamics reported in prior empirical studies. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a Root Mean Square Error (RMSE) of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinctive types of trust dynamics: the Bayesian decision-maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
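To make the idea of a Beta-distribution trust model concrete, the sketch below maintains trust as the mean of a Beta posterior that is updated after each observed robot success or failure. It is a minimal sketch under stated assumptions: the class name, the per-person gain parameters, and the success/failure update rule are illustrative, not the paper's exact personalized formulation.

```python
# Hedged sketch of a Beta-distribution trust model with a conjugate Bayesian update.
# Illustrative names and update rule only; not the paper's actual model.
from dataclasses import dataclass

@dataclass
class BetaTrust:
    alpha: float = 1.0   # prior pseudo-count of good robot performance
    beta: float = 1.0    # prior pseudo-count of poor robot performance
    gain_s: float = 1.0  # assumed per-person weight on observed successes
    gain_f: float = 1.0  # assumed per-person weight on observed failures

    def update(self, success: bool) -> float:
        # Beta-Bernoulli conjugate update: each observation shifts the posterior.
        if success:
            self.alpha += self.gain_s
        else:
            self.beta += self.gain_f
        return self.trust()

    def trust(self) -> float:
        # Predicted trust at the current time = posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)

# Example: trust rises with successes and drops after a failure.
model = BetaTrust(gain_s=1.0, gain_f=2.0)
for outcome in [True, True, False, True]:
    print(round(model.update(outcome), 3))
```

With separate gains for successes and failures, the same machinery can reproduce asymmetric dynamics such as trust decaying faster after failures than it builds after successes, one of the empirically reported properties the abstract refers to.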
The C/O ratio as traced with C$_2$H emission in protoplanetary disks is fundamental for constraining the formation mechanisms of exoplanets and our understanding of volatile depletion in disks. However, current C$_2$H observations show an apparent bimodal distribution that is not well understood, indicating that the C/O distribution is not described by a simple radial dependence. The transport of icy pebbles has been suggested to alter the local elemental abundances in protoplanetary disks: settling, drift, and trapping in pressure bumps result in a depletion of volatiles at the disk surface and an increase in the elemental C/O ratio. We combine all disks with spatially resolved ALMA C$_2$H observations with high-resolution continuum images and constraints on the CO snowline to determine whether the C$_2$H emission is indeed related to the location of the icy pebbles. We report a possible correlation between the presence of a significant CO-icy dust reservoir and high C$_2$H emission, which is found only in disks with dust rings outside the CO snowline. In contrast, compact dust disks (without pressure bumps) and warm transition disks (with their dust ring inside the CO snowline) are not detected in C$_2$H, suggesting that such disks may never have contained a significant CO ice reservoir. This correlation provides evidence for the regulation of the C/O profile by the complex interplay of the CO snowline and pressure-bump locations in the disk. These results demonstrate the importance of including dust transport in chemical disk models for a proper interpretation of exoplanet atmospheric compositions and a better understanding of volatile depletion in disks, in particular when CO isotopologues are used to determine gas surface densities.