
Modelling the Memory of Turbulence-Induced Beam Wander

Added by Mitchell Cox
Publication date: 2019
Research language: English





One of the major challenges for long-range, high-speed Free-Space Optical (FSO) communication is turbulence-induced beam wander. Beam wander causes fluctuations in the received intensity as well as crosstalk in mode-division multiplexed systems. Existing models for beam wander make use of probability distributions and long-term averages, and are not able to accurately model time-dependent intensity fluctuations such as deep fading, where the received intensity is too low to maintain reliable communication for an extended period of time. In this work we present an elegant new memory model that describes the behaviour of beam wander-induced intensity fluctuations and is uniquely capable of accurately simulating deep fading. This is invaluable for the development of optimised error correction coding and digital signal processing to improve the throughput and reliability of FSO systems.
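The abstract does not reproduce the model itself, but the qualitative idea, intensity fluctuations with temporal correlation (memory) rather than independent draws from a long-term distribution, can be illustrated with a short sketch. The snippet below uses a first-order autoregressive (Gauss-Markov) process as a stand-in for the paper's memory model; the sample rate, correlation time, wander variance and fade threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch only: the paper's memory model is not reproduced here.
# An AR(1) (Gauss-Markov) process is used as a stand-in to show how temporal
# correlation ("memory") in beam wander produces extended deep fades that
# i.i.d. draws from the same long-term distribution cannot capture.
# All parameter values below are assumptions chosen for illustration.

rng = np.random.default_rng(0)

fs = 1e3        # simulation sample rate [Hz] (assumed)
T = 10.0        # simulated duration [s]
n = int(fs * T)

sigma_bw = 0.5  # long-term RMS wander displacement, in units of beam radius (assumed)
tau_c = 20e-3   # correlation time of the wander [s] (assumed)
rho = np.exp(-1.0 / (fs * tau_c))  # AR(1) coefficient from the correlation time

# Temporally correlated centroid displacement in x and y (units of the beam radius)
x = np.zeros(n)
y = np.zeros(n)
drive = sigma_bw * np.sqrt(1.0 - rho**2)
for k in range(1, n):
    x[k] = rho * x[k - 1] + drive * rng.standard_normal()
    y[k] = rho * y[k - 1] + drive * rng.standard_normal()

# Received intensity for a Gaussian beam displaced by r from a point-like receiver,
# normalised to the on-axis value: I/I0 = exp(-2 r^2 / w^2), with r in units of w.
intensity = np.exp(-2.0 * (x**2 + y**2))

# Deep fade: intensity below a threshold for a sustained interval.
fade_threshold = 0.1  # assumed threshold
in_fade = intensity < fade_threshold
edges = np.diff(in_fade.astype(np.int8), prepend=0, append=0)
starts = np.flatnonzero(edges == 1)
ends = np.flatnonzero(edges == -1)
durations_ms = 1e3 * (ends - starts) / fs
print("number of fades:", len(durations_ms))
if len(durations_ms):
    print("longest fade [ms]:", durations_ms.max())
```

Drawing the same number of intensity samples independently from the long-term distribution would give the same fade probability but much shorter fades, which is exactly the gap a memory model is meant to close.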



Related research


Beam wander caused by atmospheric turbulence can significantly degrade the performance of horizontal free-space quantum communication links. Classical beam wander correction techniques cannot be applied due to the stronger requirements of transmitting single photons. One strategy to overcome this limitation consists in using a wavelength separate from that of the quantum signal to analyze the beam wander and use this information for its correction. For this strategy to work adequately, both wavelengths should be affected equally by atmospheric turbulence, i.e. no chromatic effects should originate from beam wander. In this letter, a series of experiments is performed to prove that this is the case for λ ≈ 850 nm as the quantum signal and λ ≈ 1550 nm as the tracking signal of a quantum communication system.
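As a rough illustration of the tracking strategy (not of the experiment itself), the sketch below applies the centroid measured on a 1550 nm tracking beam as a correction to the 850 nm quantum channel, under the achromatic-wander assumption the letter sets out to verify; all magnitudes and names are made up for illustration.

```python
import numpy as np

# Hypothetical illustration of dual-wavelength tracking: the tilt measured on the
# 1550 nm tracking beam is used to correct the 850 nm quantum beam. If the wander
# is achromatic, the residual pointing error after correction is zero; a nonzero
# chromatic term leaves a residual. Scales below are illustrative assumptions.

rng = np.random.default_rng(1)
n = 10_000
wander = rng.normal(scale=5e-6, size=(n, 2))           # common turbulence tilt [rad]
chromatic_offset = rng.normal(scale=0.0, size=(n, 2))  # set scale > 0 to model chromatic wander

tracking_beam = wander                    # what the 1550 nm sensor measures
quantum_beam = wander + chromatic_offset  # what the 850 nm photons experience

residual = quantum_beam - tracking_beam   # pointing error left after correction
print("RMS residual after correction [urad]:",
      1e6 * np.sqrt(np.mean(np.sum(residual**2, axis=1))))
```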
Accurate and efficient models for rainfall-runoff (RR) simulations are crucial for flood risk management. Most rainfall-runoff models in use today are process-driven; i.e. they solve either simplified empirical formulas or some variation of the St. Venant (shallow water) equations. With the development of machine-learning techniques, we may now be able to emulate rainfall-runoff models using, for example, neural networks. In this study, a data-driven RR model using a sequence-to-sequence Long Short-Term Memory (LSTM) network was constructed. The model was tested for a watershed in Houston, TX, known for severe flood events. The LSTM network's capability in learning long-term dependencies between the input and output of the network allowed modeling RR with high temporal resolution (15 minutes). Using 10 years of precipitation data from 153 rainfall gages and river channel discharge data (more than 5.3 million data points), and by designing several numerical tests, the developed model's performance in predicting river discharge was evaluated. The model results were also compared with the output of a process-driven model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA). Moreover, the physical consistency of the LSTM model was explored. The results showed that the LSTM model was able to efficiently predict discharge and achieve good model performance. When compared to GSSHA, the data-driven model was more efficient and robust in terms of prediction and calibration. Interestingly, the performance of the LSTM model improved (test Nash-Sutcliffe model efficiency from 0.666 to 0.942) when a subset of rainfall gages, selected based on model performance, was used as input instead of all rainfall gages.
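To make the sequence-to-sequence idea concrete, the PyTorch sketch below maps a lookback window of gage precipitation to a discharge forecast and includes the Nash-Sutcliffe efficiency used as the skill score in the abstract. It is not the authors' architecture: layer sizes, window lengths and the decoder-input scheme are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sequence-to-sequence LSTM rainfall-runoff emulator (illustrative only).
# Inputs: 15-minute precipitation from N_GAGES gages over a lookback window.
# Output: river discharge over a forecast window. Sizes are assumed.

N_GAGES = 153   # number of rainfall gages (from the abstract)
LOOKBACK = 96   # 24 h of 15-minute steps (assumed)
HORIZON = 96    # 24 h forecast horizon (assumed)

class Seq2SeqRR(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(N_GAGES, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, rain, last_discharge):
        # rain: (batch, LOOKBACK, N_GAGES); last_discharge: (batch, 1)
        _, state = self.encoder(rain)
        # Feed the last observed discharge at every decoder step (assumed scheme)
        dec_in = last_discharge.unsqueeze(1).repeat(1, HORIZON, 1)
        out, _ = self.decoder(dec_in, state)
        return self.head(out).squeeze(-1)   # (batch, HORIZON) predicted discharge

def nse(sim, obs):
    """Nash-Sutcliffe model efficiency, the score quoted in the abstract."""
    return 1.0 - ((sim - obs) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

# Smoke test with random tensors
model = Seq2SeqRR()
pred = model(torch.randn(8, LOOKBACK, N_GAGES), torch.randn(8, 1))
print(pred.shape)   # torch.Size([8, 96])
```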
Fiber Kerr nonlinearity is a fundamental limitation to the achievable capacity of long-distance optical fiber communication. Digital back-propagation (DBP) is a primary methodology to mitigate both linear and nonlinear impairments by solving the inverse-propagating nonlinear Schrödinger equation (NLSE), which requires detailed link information. Recently, paradigms based on neural networks (NN) were proposed to mitigate nonlinear transmission impairments in optical communication systems. However, almost all neural network-based equalization schemes incur high computational complexity, which prevents practical implementation in commercial transmission systems. In this paper, we propose a center-oriented long short-term memory network (Co-LSTM) incorporating a simplified mode with a recycling mechanism in the equalization operation, which can mitigate fiber nonlinearity in coherent optical communication systems with ultralow complexity. To validate the proposed methodology, we carry out an experiment of ten-channel wavelength-division multiplexing (WDM) transmission with 64 Gbaud polarization-division-multiplexed 16-ary quadrature amplitude modulation (16-QAM) signals. Co-LSTM and DBP achieve comparable nonlinearity-mitigation performance. However, the complexity of Co-LSTM with the simplified mode is almost independent of the transmission distance, which is much lower than that of DBP. The proposed Co-LSTM methodology presents an attractive approach to low-complexity nonlinearity mitigation with neural networks.
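The sketch below illustrates the general idea of a center-oriented LSTM equalizer, a window of received symbols in, an estimate of the center symbol out. It is not the authors' Co-LSTM: the window length, feature layout and layer sizes are assumptions, and the simplified mode with its recycling mechanism is omitted.

```python
import torch
import torch.nn as nn

# Center-oriented LSTM equalizer sketch (illustrative, not the paper's Co-LSTM).
# The network sees a sliding window of received symbols (I/Q per polarization)
# and predicts only the center symbol of the window.

WINDOW = 41   # odd-length symbol window (assumed)
FEATS = 4     # I and Q for two polarizations (assumed)

class CenterLSTMEq(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(FEATS, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)   # real and imaginary parts of the center symbol

    def forward(self, x):
        # x: (batch, WINDOW, FEATS) received-symbol window
        h, _ = self.lstm(x)
        center = h[:, WINDOW // 2, :]   # keep only the center time step
        return self.out(center)         # equalized center symbol (Re, Im)

eq = CenterLSTMEq()
print(eq(torch.randn(16, WINDOW, FEATS)).shape)   # torch.Size([16, 2])
```

Predicting only the center symbol is what keeps the per-symbol complexity independent of transmission distance, in contrast to DBP, whose cost grows with the number of spans to back-propagate.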
The Dupree-Weinstock renormalization is used to prove that a reactive closure exists for drift wave turbulence in magnetized plasmas. The result is used to explain recent results in gyrokinetic simulations and is also related to the Mattor-Parker closure. The level of closure is found in terms of applied external sources.
We characterize the frequency response of channel-interleaved photonic analog-to-digital converters (CI-PADCs) theoretically and experimentally. The CI-PADC is composed of a photonic frontend for photonic sampling and an electronic backend for quantization. The photonic frontend includes a photonic sampling pulse generator for direct high-speed sampling and an optical time-division demultiplexer (OTDM) for channel demultiplexing. It is found that the frequency response of the CI-PADC is influenced by both the photonic sampling pulses and the OTDM, whose combined impact can be characterized through the demultiplexed pulse trains. First, the frequency response can be divided into multiple frequency intervals, each spanning the repetition rate of the demultiplexed pulse trains. Second, the analog bandwidth of the CI-PADC is determined by the optical spectral bandwidth of the demultiplexed pulse trains, which is broadened in the OTDM. Further, the effect of the OTDM is essential for enlarging the analog bandwidth of a CI-PADC employing photonic sampling pulses with a limited optical spectral bandwidth.
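A back-of-the-envelope sketch of the interval structure described above, using assumed numbers (the aggregate rate and channel count are not taken from the paper): splitting an aggregate photonic sampling rate across the OTDM channels sets the repetition rate of each demultiplexed pulse train, and hence the span of each frequency interval.

```python
# Illustrative numbers only: an assumed 40 GSa/s aggregate photonic sampling rate
# demultiplexed into 4 OTDM channels gives 10 GHz pulse trains per channel, so the
# frequency response is partitioned into 10 GHz intervals.

f_agg = 40e9           # aggregate sampling rate of the photonic frontend [Sa/s] (assumed)
n_ch = 4               # number of OTDM channels (assumed)
f_rep = f_agg / n_ch   # repetition rate of each demultiplexed pulse train [Hz]

for f_in in (3e9, 12e9, 27e9):
    interval = int(f_in // f_rep)
    print(f"input {f_in/1e9:.0f} GHz falls in interval {interval} "
          f"(each interval spans {f_rep/1e9:.0f} GHz)")
```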
