
Spatially multiplexed picosecond pulse-train generations through simultaneous intra-modal four wave mixing and inter-modal cross-phase modulation

Posted by Julien Fatome
Publication date: 2020
Research field: Physics
Paper language: English





We report on the experimental generation of spatially multiplexed picosecond 40-GHz pulse trains at telecommunication wavelengths by simultaneous intra-modal multiple four-wave mixing and inter-modal cross-phase modulation in km-long bi-modal and 6-LP-mode graded-index few-mode fibers. More precisely, an initial beat signal injected into the fundamental mode is first nonlinearly compressed into well-separated pulses by means of an intra-modal multiple four-wave mixing process, while several group-velocity-matched continuous-wave probe signals injected into higher-order modes develop a similar pulsed profile thanks to an inter-modal cross-phase modulation interaction. Specifically, by simultaneously exciting three higher-order modes (LP11, LP02 and LP31) of a 6-LP-mode fiber at wavelengths group-velocity matched with the fundamental mode, four spatially multiplexed 40-GHz picosecond pulse trains are generated at selective wavelengths with negligible cross-talk between all the modes.
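As a rough illustration of the mechanism described in this abstract, the sketch below propagates a dual-frequency beat signal (pump) together with a weak, group-velocity-matched continuous-wave probe through a standard pair of coupled nonlinear Schrödinger equations, using a symmetric split-step Fourier method. This is a minimal sketch, not the authors' simulation: all fiber and power values are illustrative placeholders, and the inter-modal overlap factor f_x is an assumption.

```python
# Hedged sketch: intra-modal compression of a 40-GHz beat signal (multiple FWM)
# plus inter-modal XPM imprinting on a group-velocity-matched CW probe.
# Coupled scalar NLSEs, symmetric split-step Fourier; illustrative values only.
import numpy as np

n = 4096                                   # samples
t_window = 200e-12                         # 200 ps window (8 beat periods of 25 ps)
dt = t_window / n
t = (np.arange(n) - n // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(n, dt)      # angular-frequency grid

beta2_p = -22e-27   # s^2/m, anomalous dispersion, pump mode (placeholder)
beta2_s = -22e-27   # s^2/m, probe mode; group-velocity matched, so no walk-off term
gamma   = 2e-3      # 1/(W m), effective Kerr coefficient (placeholder)
f_x     = 1.0       # inter-modal XPM overlap factor (assumption)
L, nz   = 1800.0, 2000
dz      = L / nz

delta_nu = 40e9                                              # 40-GHz beating
A_p = np.sqrt(0.5) * np.cos(np.pi * delta_nu * t) + 0j       # dual-frequency beat, ~0.5 W peak
A_s = np.sqrt(1e-3) * np.ones(n, dtype=complex)              # weak 1-mW CW probe

half_p = np.exp(1j * (beta2_p / 2) * w**2 * (dz / 2))        # half dispersion steps
half_s = np.exp(1j * (beta2_s / 2) * w**2 * (dz / 2))
for _ in range(nz):
    A_p = np.fft.ifft(half_p * np.fft.fft(A_p))
    A_s = np.fft.ifft(half_s * np.fft.fft(A_s))
    # Kerr step: SPM on each field plus XPM driven by the other field
    A_p = A_p * np.exp(1j * gamma * (np.abs(A_p)**2 + 2 * f_x * np.abs(A_s)**2) * dz)
    A_s = A_s * np.exp(1j * gamma * (np.abs(A_s)**2 + 2 * f_x * np.abs(A_p)**2) * dz)
    A_p = np.fft.ifft(half_p * np.fft.fft(A_p))
    A_s = np.fft.ifft(half_s * np.fft.fft(A_s))

# |A_p|**2 shows the beat signal compressed into a 40-GHz pulse train; the probe
# acquires a similar 25-ps periodic modulation through cross-phase modulation.
```

In the experiment, each probe sits at a wavelength where its group velocity in a higher-order mode matches that of the pump in the fundamental mode; the sketch idealizes this by simply omitting the walk-off term between the two fields.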


Read also

We report on the generation of four spatially multiplexed picosecond 40-GHz pulse trains in a km-long 6-LP multimode optical fiber. The principle of operation is based on the parallel nonlinear compression of initial beat signals into well-separated pulse trains owing to intra-modal multiple four-wave mixing. A series of four 40-GHz dual-frequency beatings at different wavelengths is simultaneously injected into the LP01, LP11, LP02 and LP12 modes of a 1.8-km-long graded-index few-mode fiber. The combined effects of Kerr nonlinearity and anomalous chromatic dispersion lead to the simultaneous generation of four spatially multiplexed frequency combs, which correspond in the temporal domain to the compression of these beat signals into picosecond pulses. The temporal profiles of the output pulse trains demultiplexed from each spatial mode show that well-separated picosecond pulses with negligible pedestals are generated.
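As a minimal back-of-the-envelope check (standard frequency-comb arithmetic, not taken from the paper itself), each injected beating consists of two spectral lines separated by 40 GHz, and multiple four-wave mixing replicates this spacing across the generated comb:

\[
A(0,t) \propto \cos(\pi \Delta\nu\, t), \qquad \Delta\nu = 40~\text{GHz},
\]

so the input spectrum contains two lines at \(\nu_0 \pm \Delta\nu/2\). Multiple four-wave mixing between them generates new lines at \(\nu_0 \pm (2k+1)\,\Delta\nu/2\), i.e. a comb with line spacing \(\Delta\nu\), which in the time domain corresponds to a pulse train with repetition period

\[
T_\text{rep} = \frac{1}{\Delta\nu} = \frac{1}{40~\text{GHz}} = 25~\text{ps}.
\]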
Yuzhao Mao, Qi Sun, Guang Liu (2020)
Emotion Recognition in Conversations (ERC) is essential for building empathetic human-machine systems. Existing studies on ERC primarily focus on summarizing the context information in a conversation while ignoring the differentiated emotional behaviors within and across different modalities. Designing appropriate strategies that fit these differentiated multi-modal emotional behaviors can produce more accurate emotional predictions. Thus, we propose the DialogueTransformer to explore the differentiated emotional behaviors from the intra- and inter-modal perspectives. For intra-modal, we construct a novel Hierarchical Transformer that can easily switch between sequential and feed-forward structures according to the differentiated context preference within each modality. For inter-modal, we constitute a novel Multi-Grained Interactive Fusion that applies both neuron- and vector-grained feature interactions to learn the differentiated contributions across all modalities. Experimental results show that DialogueTRM outperforms the state-of-the-art by a significant margin on three benchmark datasets.
Duo Peng, Yinjie Lei, Wen Li (2021)
Domain adaptation is critical for success when confronted with the lack of annotations in a new domain. Given the huge time consumption of the labeling process for 3D point clouds, domain adaptation for 3D semantic segmentation is highly desirable. With the rise of multi-modal datasets, large amounts of 2D images are accessible in addition to 3D point clouds. In light of this, we propose to further leverage 2D data for 3D domain adaptation by intra- and inter-domain cross-modal learning. For intra-domain cross-modal learning, most existing works sample the dense 2D pixel-wise features down to the same size as the sparse 3D point-wise features, resulting in the loss of numerous useful 2D features. To address this problem, we propose Dynamic sparse-to-dense Cross Modal Learning (DsCML) to increase the sufficiency of multi-modality information interaction for domain adaptation. For inter-domain cross-modal learning, we further advance Cross Modal Adversarial Learning (CMAL) on 2D and 3D data that contain different semantic content, aiming to promote high-level modal complementarity. We evaluate our model under various multi-modality domain adaptation settings, including day-to-night, country-to-country and dataset-to-dataset, and it brings large improvements over both uni-modal and multi-modal domain adaptation methods in all settings.
Non-negative tensor factorization has been shown to be a practical solution for automatically discovering phenotypes from electronic health records (EHR) with minimal human supervision. Such methods generally require an input tensor describing the inter-modal interactions to be pre-established; however, the correspondence between different modalities (e.g., between medications and diagnoses) can often be missing in practice. Although heuristic methods can be applied to estimate them, they inevitably introduce errors and lead to sub-optimal phenotype quality. This is particularly important for patients with complex health conditions (e.g., in critical care), as multiple diagnoses and medications are simultaneously present in the records. To alleviate this problem and discover phenotypes from EHR with unobserved inter-modal correspondence, we propose the collective hidden interaction tensor factorization (cHITF) to infer the correspondence between multiple modalities jointly with the phenotype discovery. We assume that the observed matrix for each modality is a marginalization of the unobserved inter-modal correspondence, which is reconstructed by maximizing the likelihood of the observed matrices. Extensive experiments conducted on the real-world MIMIC-III dataset demonstrate that cHITF effectively infers clinically meaningful inter-modal correspondence, discovers phenotypes that are more clinically relevant and diverse, and achieves better predictive performance compared with a number of state-of-the-art computational phenotyping models.
We show that the velocity, and thus the frequency, of a signal pulse can be adjusted by the use of a control Airy pulse. In particular, we utilize a nonlinear Airy pulse which, via cross-phase modulation, creates an effective potential for the optical signal. Interestingly, during the interaction, the signal dispersion is suppressed. Importantly, the whole process is controllable, and using Airy pulses with different truncations leads to predetermined values of the frequency shift. Such a functionality might be useful in wavelength-division multiplexing networks.
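A hedged sketch of the mechanism at play here (textbook cross-phase-modulation relations, not the authors' derivation): a control pulse with power profile \(P_c(t,z)\) imprints a nonlinear phase on the co-propagating signal,

\[
\phi_\text{XPM}(t,z) = 2\gamma \int_0^z P_c(t,z')\, dz',
\qquad
\delta\omega(t,z) = -\frac{\partial \phi_\text{XPM}}{\partial t},
\]

so the time-dependent intensity of the accelerating Airy control acts as an effective potential whose temporal slope sets the instantaneous frequency shift of the signal; changing the Airy truncation changes \(P_c\) and hence the accumulated shift.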