
Fast Scenario Reduction for Power Systems by Deep Learning

Posted by Qiao Li
Publication date: 2019
Paper language: English





Scenario reduction is an important topic in stochastic programming. Because of the random behavior of load and renewable energy, stochastic programming has become a useful technique for optimizing power systems, and scenario reduction has therefore received growing attention in recent years. Many methods have been proposed to reduce the scenario set quickly, yet existing approaches still require at least several seconds to several minutes to complete the reduction. This speed limitation prevents stochastic programming from being applied to real-time optimal control problems. In this paper, a fast scenario reduction method based on deep learning is proposed to address this problem. Inspired by deep learning methods for image processing, recognition, and generation, the scenario data are arranged as 2D image-like data and fed into a deep convolutional neural network (DCNN), whose output is an image representing the reduced scenario set. Since neural networks can process images at very high speed, the resulting scenario reduction is also very fast. Simulation results show that scenario reduction with the proposed DCNN method can be completed at very high speed.
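To make the pipeline concrete, the sketch below illustrates the general idea under assumed dimensions: a stack of convolutions processes a "scenario image" (scenarios x time steps, one channel) and a final layer collapses it to a smaller scenario set. The layer sizes, the 100-to-10 reduction, and the 24-step horizon are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (not the authors' exact DCNN): map a "scenario image"
# (n_in scenarios x t_steps time steps) to a smaller "reduced scenario image"
# (n_out scenarios x t_steps). All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class ScenarioReductionDCNN(nn.Module):
    def __init__(self, n_in=100, n_out=10, t_steps=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Collapse the scenario dimension from n_in to n_out,
        # keeping the time dimension.
        self.reduce = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * n_in * t_steps, n_out * t_steps),
        )
        self.n_out, self.t_steps = n_out, t_steps

    def forward(self, x):                      # x: (batch, 1, n_in, t_steps)
        h = self.features(x)
        out = self.reduce(h)
        return out.view(-1, 1, self.n_out, self.t_steps)

# One forward pass over a random "scenario image" (e.g. 100 load scenarios of
# 24 hourly values each). Inference is a single network pass, which is why the
# reduction can be far faster than iterative reduction algorithms.
model = ScenarioReductionDCNN()
scenarios = torch.rand(1, 1, 100, 24)
reduced = model(scenarios)                     # shape: (1, 1, 10, 24)
print(reduced.shape)
```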




Read also

Yan Yang, Juan Yu, Zhifang Yang (2019)
Probabilistic optimal power flow (POPF) is an important analytical tool for ensuring the secure and economic operation of power systems. POPF requires solving a large number of nonlinear and nonconvex optimization problems, and this heavy computational burden has become the major bottleneck for its practical application. This paper presents a deep learning approach to solve the POPF problem efficiently and accurately. Taking advantage of the deep structure and reconstructive strategy of stacked denoising autoencoders (SDAE), an SDAE-based optimal power flow (OPF) model is developed to extract the high-level nonlinear correlations between the system operating condition and the OPF solution. A training process is designed to learn the features of the POPF problem. The trained SDAE network can then be used to compute the OPF solutions of random samples generated by Monte-Carlo simulation (MCS) without solving any optimization. A modified IEEE 118-bus power system is simulated to demonstrate the effectiveness of the proposed method.
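As a rough illustration of the surrogate idea (not the paper's SDAE architecture or training procedure), the sketch below trains a small network to map an operating condition to an OPF solution and then evaluates Monte-Carlo samples without any optimization; all dimensions and the random training data are placeholders.

```python
# Hedged sketch of a learned OPF surrogate: train offline on (load, dispatch)
# pairs from a conventional OPF solver, then evaluate Monte-Carlo samples with
# forward passes only. Sizes and random tensors are illustrative stand-ins.
import torch
import torch.nn as nn

n_load, n_gen = 118, 54                     # assumed sizes for a 118-bus case
surrogate = nn.Sequential(
    nn.Linear(n_load, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, n_gen),
)

# Offline training; random data stands in for solver-generated samples.
loads = torch.rand(1000, n_load)
dispatch = torch.rand(1000, n_gen)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(loads), dispatch)
    loss.backward()
    opt.step()

# POPF step: push Monte-Carlo load samples through the trained surrogate
# instead of re-solving thousands of OPF problems.
mc_samples = torch.rand(10000, n_load)
with torch.no_grad():
    mc_dispatch = surrogate(mc_samples)     # statistics of OPF outputs
```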
Owing to the complicated characteristics of 5G communication systems, designing RF components through mathematical modeling becomes a challenging task. Moreover, such mathematical models need numerous manual adjustments for various specification requirements. In this paper, we present a learning-based framework to model and compensate Power Amplifiers (PAs) in 5G communication. In the proposed framework, Deep Neural Networks (DNNs) are used to learn the characteristics of the PAs, while corresponding Digital Pre-Distortions (DPDs) are learned to compensate for the nonlinear and memory effects of the PAs. On top of the framework, we further propose two frequency-domain losses that guide the learning process to better optimize the target, compared with a naive time-domain Mean Square Error (MSE). The proposed framework serves as a drop-in replacement for the conventional approach. It achieves an average 56.7% reduction of nonlinear and memory effects, which translates to an average 16.3% improvement over a carefully designed mathematical model, and even reaches a 34% enhancement in severe distortion scenarios.
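A minimal sketch of what a frequency-domain loss can look like is given below: it compares predicted and reference PA output waveforms after an FFT rather than in the time domain. This is an assumption about the general idea, not the paper's exact loss formulation.

```python
# Hedged sketch of a frequency-domain loss for PA/DPD learning (illustrative,
# not the paper's formulation): penalize spectral error instead of time-domain MSE.
import torch

def freq_domain_mse(pred, target):
    # pred, target: complex baseband waveforms, shape (batch, samples)
    pred_f = torch.fft.fft(pred, dim=-1)
    target_f = torch.fft.fft(target, dim=-1)
    return torch.mean(torch.abs(pred_f - target_f) ** 2)

# Random complex waveforms stand in for measured/predicted PA outputs.
pred = torch.randn(4, 1024, dtype=torch.complex64)
target = torch.randn(4, 1024, dtype=torch.complex64)
print(freq_domain_mse(pred, target))
```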
This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time in IA. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs from only a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms conventional IA in beam prediction accuracy in both line of sight (LoS) and non-line of sight (NLoS) mmWave channel conditions.
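The sketch below captures the basic mapping such a scheme learns, under assumed sizes (8 swept beams, a 64-beam codebook): a small classifier takes the reduced RSS vector and outputs the index of the predicted best beam. The network shape and beam counts are illustrative, not the paper's.

```python
# Hedged sketch of RSS-to-best-beam prediction: classify the optimal beam in a
# full codebook from RSS measured on a small subset of beams. Sizes are assumed.
import torch
import torch.nn as nn

n_swept, n_total = 8, 64          # assumed: sweep 8 beams, predict among 64
classifier = nn.Sequential(
    nn.Linear(n_swept, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_total),      # logits over the full beam codebook
)

rss = torch.rand(32, n_swept)                # RSS from the reduced beam sweep
best_beam = classifier(rss).argmax(dim=-1)   # predicted optimal beam indices
print(best_beam.shape)
```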
Ke Ma, Dongxuan He, Hancun Sun (2021)
The huge overhead of beam training imposes a significant challenge in millimeter-wave (mmWave) wireless communications. To address this issue, in this paper we propose a wide-beam-based training approach to calibrate the narrow beam direction according to the channel power leakage. To handle the complex nonlinear properties of the channel power leakage, deep learning is utilized to predict the optimal narrow beam directly. Specifically, three deep-learning-assisted calibrated beam training schemes are proposed. The first scheme adopts a convolutional neural network to implement the prediction based on the instantaneous received signals of wide beam training. Additional narrow beam training is then performed based on the predicted probabilities for further beam direction calibration. However, the first scheme depends on only one wide beam training, which lacks robustness to noise. To tackle this problem, the second scheme adopts a long short-term memory (LSTM) network that tracks the movement of users and calibrates the beam direction according to the received signals of prior beam training, enhancing robustness to noise. To further reduce the overhead of wide beam training, our third scheme, an adaptive beam training strategy, selects only a subset of wide beams to be trained based on the prior received signals. Two criteria, namely an optimal neighboring criterion and a maximum probability criterion, are designed for the selection. Furthermore, to handle mobile scenarios, an auxiliary LSTM is introduced to calibrate the directions of the selected wide beams more precisely. Simulation results demonstrate that our proposed schemes achieve significantly higher beamforming gain with smaller beam training overhead compared with conventional and existing deep-learning-based counterparts.
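As a hedged illustration of the second (LSTM-based) scheme only, the sketch below feeds a sequence of wide-beam received-signal vectors into an LSTM and outputs probabilities over candidate narrow beams; all dimensions and the single-layer structure are assumptions rather than the paper's network.

```python
# Hedged sketch: an LSTM over successive wide-beam training measurements that
# outputs probabilities over candidate narrow beams. Sizes are illustrative.
import torch
import torch.nn as nn

class BeamCalibrationLSTM(nn.Module):
    def __init__(self, n_wide=16, n_narrow=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_wide, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_narrow)

    def forward(self, x):                  # x: (batch, time, n_wide)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # logits for the latest time slot

model = BeamCalibrationLSTM()
wide_rss_seq = torch.rand(8, 5, 16)        # 5 past wide-beam trainings
narrow_probs = model(wide_rss_seq).softmax(dim=-1)
print(narrow_probs.shape)                  # (8, 64)
```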
We consider adversarial machine learning based attacks on power allocation, where the base station (BS) allocates its transmit power to multiple orthogonal subcarriers by using a deep neural network (DNN) to serve multiple user equipments (UEs). The DNN, which corresponds to a regression model, is trained with channel gains as the input and allocated transmit powers as the output. While the BS allocates the transmit power to the UEs to maximize rates for all UEs, there is an adversary that aims to minimize these rates. The adversary may be an external transmitter that aims to manipulate the inputs to the DNN by interfering with the pilot signals that are transmitted to measure the channel gain. Alternatively, the adversary may be a rogue UE that transmits fabricated channel estimates to the BS. In both cases, the adversary carefully crafts adversarial perturbations to manipulate the inputs to the DNN of the BS, subject to an upper bound on the strengths of these perturbations. We consider attacks targeted on a single UE or on all UEs, and compare them with a benchmark in which the adversary simply scales down the input to the DNN. We show that adversarial attacks are much more effective than the benchmark attack in terms of reducing the rate of communications. We also show that adversarial attacks are robust to uncertainty at the adversary, including erroneous knowledge of channel gains and potential errors in exercising the attacks exactly as specified.
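One common way to craft such input perturbations is a gradient-sign attack, sketched below with an illustrative power-allocation DNN and a stand-in adversarial objective; the paper's exact attack construction and model may differ.

```python
# Hedged sketch of a gradient-sign perturbation on the channel-gain input of a
# power-allocation DNN. The network, the adversarial objective, and all sizes
# are illustrative assumptions, not the paper's attack.
import torch
import torch.nn as nn

n_sub = 16                                  # assumed number of subcarriers
alloc_dnn = nn.Sequential(nn.Linear(n_sub, 64), nn.ReLU(),
                          nn.Linear(64, n_sub), nn.Softmax(dim=-1))

gains = torch.rand(1, n_sub, requires_grad=True)   # measured channel gains
powers = alloc_dnn(gains)

# Stand-in sum-rate proxy: the adversary perturbs the input so the DNN places
# power where the channel is weak, reducing this quantity.
rate_proxy = (powers * gains).sum()
rate_proxy.backward()

eps = 0.05                                  # upper bound on perturbation size
perturbed_gains = (gains - eps * gains.grad.sign()).detach()
print(perturbed_gains.shape)
```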
