
Reweighting of Binaural Localization Cues Induced by Lateralization Training

Published by Maike Klingel
Publication date: 2020
Research field: Biology
Paper language: English
Author: Maike Klingel





Normal-hearing listeners adapt to alterations in sound localization cues. This adaptation can result from the establishment of a new spatial map of the altered cues or from a stronger relative weighting of unaltered compared to altered cues. Such reweighting has been shown for monaural vs. binaural cues. However, studies attempting to reweight the two binaural cues, interaural differences in time (ITDs) and level (ILDs), yielded inconclusive results. In this study we investigated whether binaural cue reweighting can be induced by lateralization training in a virtual audio-visual environment. Twenty normal-hearing participants, divided into two groups, completed the experiment, which consisted of seven days of lateralization training in a virtual audio-visual environment, preceded and followed by a test measuring the binaural cue weights. During testing, the participants' task was to lateralize 500-ms bandpass-filtered (2-4 kHz) noise bursts containing various combinations of spatially consistent and inconsistent ITDs and ILDs. During training, the task was extended by visual cues reinforcing ITDs in one group and ILDs in the other, as well as by manipulating the azimuthal ranges of the two cues. In both groups, the weight given to the reinforced cue increased significantly from pre- to posttest, suggesting that participants reweighted the binaural cues in the expected direction. This reweighting occurred predominantly within the first training session. The present results are relevant because binaural cue reweighting is, for example, likely to occur when normal-hearing listeners adapt to new acoustic environments. Similarly, binaural cue reweighting might be a factor underlying the low contribution of ITDs to sound localization in cochlear-implant listeners, as they typically do not experience reliable ITD cues with their clinical devices.
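One common way to quantify the relative weight of two spatial cues is to fit lateralization responses as a weighted sum of the azimuths implied by each cue, using trials where the cues are spatially inconsistent. The sketch below illustrates that idea with least squares on hypothetical data; the stimulus azimuths, the simulated listener, and the fitting procedure are all assumptions for illustration, not the paper's actual analysis pipeline.

```python
import numpy as np

# Hypothetical trials: each pairs an azimuth implied by the ITD with a
# (possibly inconsistent) azimuth implied by the ILD, in degrees.
az_itd = np.array([-40.0, -20.0, 0.0, 20.0, 40.0, -30.0, 30.0, 10.0])
az_ild = np.array([-20.0, -40.0, 10.0, 40.0, 20.0, 30.0, -30.0, 10.0])

# Simulated responses from a listener who weights ILD twice as much as ITD.
true_w = np.array([1 / 3, 2 / 3])            # (w_itd, w_ild)
responses = true_w[0] * az_itd + true_w[1] * az_ild

# Least-squares fit of: response = w_itd * az_itd + w_ild * az_ild
X = np.column_stack([az_itd, az_ild])
w, *_ = np.linalg.lstsq(X, responses, rcond=None)

# Normalized ITD weight; an ILD-reinforced training should lower this.
rel_itd_weight = w[0] / (w[0] + w[1])
print(round(rel_itd_weight, 3))              # → 0.333
```

With noiseless simulated data the fit recovers the generating weights exactly; real response data would add trial-to-trial noise and typically an intercept term.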




Read also

Neural populations exposed to a certain stimulus learn to represent it better. However, the process by which local, self-organized rules achieve this is unclear. We address the question of how a periodic neural input can be learned, using the Differential Hebbian Learning framework coupled with a homeostatic mechanism to derive two self-consistency equations that lead to increased responses to the same stimulus. Although all our simulations use simple leaky integrate-and-fire neurons and standard spike-timing-dependent plasticity (STDP) learning rules, our results can be easily interpreted in terms of rates and population codes.
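The combination described above, a spike-timing-based Hebbian update plus homeostatic regulation, can be sketched in a few lines. This is a generic pairwise STDP rule with multiplicative normalization, chosen for illustration; the parameter values and the specific homeostatic mechanism are assumptions, not the ones derived in the paper.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one (dt = t_post - t_pre > 0, in ms), depress otherwise."""
    if dt > 0:
        return w + a_plus * np.exp(-dt / tau)
    return w - a_minus * np.exp(dt / tau)

def homeostatic_scale(weights, target_sum=1.0):
    """Multiplicative normalization keeping total synaptic drive fixed."""
    return weights * (target_sum / weights.sum())

w = np.full(4, 0.25)                 # four synapses, equal initial weights
w[0] = stdp_update(w[0], dt=5.0)     # pre-before-post: potentiation
w[1] = stdp_update(w[1], dt=-5.0)    # post-before-pre: depression
w = homeostatic_scale(w)             # homeostasis restores the total
print(round(w.sum(), 6))             # → 1.0
```

The homeostatic step ensures that stimulus-driven potentiation of some synapses is compensated by a uniform rescaling, so learning reshapes the weight distribution rather than growing it without bound.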
Instances-reweighted adversarial training (IRAT) can significantly boost the robustness of trained models, where data less/more vulnerable to the given attack are assigned smaller/larger weights during training. However, when tested on attacks different from the one simulated in training, robustness may drop significantly (e.g., even below that of no reweighting). In this paper, we study this problem and propose our solution--locally reweighted adversarial training (LRAT). The rationale behind IRAT is that we need not pay much attention to an instance that is already safe under the attack. We argue that safeness should be attack-dependent, so that for the same instance, its weight can change under different attacks on the same model. Thus, if the attack simulated in training is mis-specified, the weights of IRAT are misleading. To this end, LRAT pairs each instance with its adversarial variants and performs local reweighting inside each pair, while performing no global reweighting--the rationale is to fit the instance itself if it is immune to the attack, but not to skip the pair, in order to passively defend against different future attacks. Experiments show that LRAT works better than both IRAT (i.e., global reweighting) and standard AT (i.e., no reweighting) when trained with one attack and tested on different attacks.
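The key mechanism, reweighting inside each instance's pair rather than across the whole batch, can be sketched as a per-pair softmax over the variants' losses. This is a minimal illustration of the local-reweighting idea, assuming a softmax weighting scheme; LRAT's actual objective and weighting function may differ.

```python
import numpy as np

def local_reweight(pair_losses, temperature=1.0):
    """Local reweighting sketch: softmax over the losses of one instance's
    variants, so weights are assigned within each pair, independently of
    every other instance in the batch (no global normalization)."""
    z = np.asarray(pair_losses, dtype=float) / temperature
    z = z - z.max()                      # numerical stability
    w = np.exp(z)
    return w / w.sum()

# Two instances, each with losses for (natural, adversarial) variants.
batch = [np.array([0.2, 1.4]), np.array([0.1, 0.3])]
weights = [local_reweight(pair) for pair in batch]

# Each pair's weights sum to 1 on their own; the harder (higher-loss)
# variant inside a pair gets the larger weight.
print([round(w.sum(), 6) for w in weights])   # → [1.0, 1.0]
```

Because normalization never crosses pair boundaries, an instance that happens to be safe under the simulated attack still contributes its pair to training, which is the passive defense the abstract describes.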
Adversarial training has been empirically proven to be one of the most effective and reliable defense methods against adversarial attacks. However, almost all existing studies of adversarial training focus on balanced datasets, where each class has an equal number of training examples. Research on adversarial training with imbalanced training datasets is rather limited. As an initial effort to investigate this problem, we show that adversarially trained models exhibit two behaviors distinct from naturally trained models on imbalanced datasets: (1) Compared to natural training, adversarially trained models can suffer much worse performance on under-represented classes when the training dataset is extremely imbalanced. (2) Traditional reweighting strategies may lose efficacy in dealing with the imbalance issue for adversarial training. For example, upweighting the under-represented classes drastically hurts the model's performance on well-represented classes, and as a result, finding an optimal reweighting value can be tremendously challenging. To further understand these observations, we theoretically show that poor data separability is one key reason for this strong tension between under-represented and well-represented classes. Motivated by this finding, we propose Separable Reweighted Adversarial Training (SRAT) to facilitate adversarial training under imbalanced scenarios by learning more separable features for different classes. Extensive experiments on various datasets verify the effectiveness of the proposed framework.
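The notion of data separability invoked above can be made concrete with a Fisher-style scatter ratio: between-class spread divided by within-class spread of the learned features. The function below is a sketch of that separability measure for intuition only; it is not SRAT's actual loss, and the toy feature sets are assumptions.

```python
import numpy as np

def fisher_separability(features, labels):
    """Between-class vs. within-class scatter: larger values mean the
    classes' features are more separable."""
    feats = np.asarray(features, dtype=float)
    labs = np.asarray(labels)
    overall = feats.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labs):
        cls = feats[labs == c]
        mu = cls.mean(axis=0)
        between += len(cls) * np.sum((mu - overall) ** 2)
        within += np.sum((cls - mu) ** 2)
    return between / within

# Well-separated 1-D features score far higher than overlapping ones.
sep = fisher_separability([[0.0], [0.1], [5.0], [5.1]], [0, 0, 1, 1])
mix = fisher_separability([[0.0], [5.0], [0.1], [5.1]], [0, 0, 1, 1])
print(sep > mix)   # → True
```

In the imbalanced setting the abstract describes, low separability means the reweighting trade-off is steep: pushing the decision boundary to help a rare class immediately cuts into an overlapping common class.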
Sylvain Hanneton (2009)
Aging or sedentary behavior can decrease motor capabilities, causing a loss of autonomy. Prevention or readaptation programs involving the practice of physical activities can be valuable tools to fight this phenomenon. "Serious" video games have the potential to help people train their bodies, mainly owing to the immersion of the participant in a motivating interaction with virtual environments. We propose here to discuss the results of a preliminary study that evaluated a training program using the well-known WiiFit game and Wii Balance Board device in participants of different ages. Our results showed that participants were satisfied with the program and progressed in their level of performance. The most important observation of this study, however, was that the presence of a real human coach is necessary, particularly for senior participants, for safety reasons but also to help them deal with the difficulties of immersive situations.
Qizhou Wang, Feng Liu, Bo Han (2021)
Reweighting adversarial data during training has recently been shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights. However, existing methods for measuring this closeness are not very reliable: they are discrete and can take only a few values, and they are path-dependent, i.e., they may change given the same start and end points with different attack paths. In this paper, we propose three types of probabilistic margin (PM), which are continuous and path-independent, for measuring the aforementioned closeness and reweighting adversarial data. Specifically, a PM is defined as the difference between two estimated class-posterior probabilities, e.g., the probability of the true label minus the probability of the most confusing label given some natural data. Though the different PMs capture different geometric properties, all three share a negative correlation with the vulnerability of data: data with larger/smaller PMs are safer/riskier and should have smaller/larger weights. Experiments demonstrate that PMs are reliable measurements and that PM-based reweighting methods outperform state-of-the-art methods.
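The PM variant named in the abstract, the true-label probability minus that of the most confusing label, is straightforward to compute from a model's softmax output. The sketch below illustrates that variant and an assumed exponential weighting scheme; the toy probability vectors and the exact mapping from margin to weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def probabilistic_margin(probs, true_label):
    """One PM variant: estimated probability of the true class minus that
    of the most confusing (highest-scoring other) class. Continuous, and
    independent of the attack path that produced the example."""
    p = np.asarray(probs, dtype=float)
    others = np.delete(p, true_label)
    return p[true_label] - others.max()

# Safer point: confident on the true class → large positive margin.
print(round(probabilistic_margin([0.7, 0.2, 0.1], 0), 2))   # → 0.5
# Riskier point: a confusing class wins → negative margin.
print(round(probabilistic_margin([0.3, 0.5, 0.2], 0), 2))   # → -0.2

# Reweighting direction from the abstract: smaller margins (riskier
# data) receive larger training weights.
margins = np.array([0.5, -0.2])
weights = np.exp(-margins) / np.exp(-margins).sum()
```

Because the margin is read off the class posteriors rather than counted in attack steps, two attacks reaching the same adversarial point yield the same weight, which is the path-independence property the abstract emphasizes.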