This paper develops a machine learning-driven portfolio optimization framework for virtual bidding in electricity markets that accounts for both risk constraints and price sensitivity. The algorithmic trading strategy is developed from the perspective of a proprietary trading firm seeking to maximize profit. A recurrent neural network-based Locational Marginal Price (LMP) spread forecast model is developed by leveraging the inter-hour dependencies of the market clearing algorithm. The sensitivity of the LMP spread with respect to net virtual bids is modeled as a monotonic function with the proposed constrained gradient boosting tree. We leverage the proposed algorithmic virtual bid trading strategy to evaluate both the profitability of the virtual bid portfolio and the efficiency of U.S. wholesale electricity markets. A comprehensive empirical analysis on PJM, ISO-NE, and CAISO indicates that the proposed virtual bid portfolio optimization strategy that explicitly considers price sensitivity outperforms the one that neglects it. The Sharpe ratios of the virtual bid portfolios in all three electricity markets are much higher than that of the S&P 500 index. It is also shown that the efficiency of CAISO's two-settlement system is lower than that of PJM and ISO-NE.
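As a minimal illustration of the monotonicity idea described above, the sketch below fits a gradient boosting model whose predicted day-ahead/real-time spread is constrained to be non-increasing in the net virtual bid. It uses scikit-learn's HistGradientBoostingRegressor rather than the paper's own constrained tree, and all data, feature names, and coefficients are synthetic assumptions.

```python
# Sketch: LMP spread response modeled as monotone in the net virtual bid,
# via scikit-learn's monotonic constraints (not the paper's exact model).
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
net_virtual_bid = rng.normal(0.0, 100.0, n)        # MW, hypothetical feature
load_forecast = rng.normal(1000.0, 150.0, n)       # MW, hypothetical feature
X = np.column_stack([net_virtual_bid, load_forecast])

# Synthetic DA-RT spread: decreasing in net virtual bid (more virtual supply
# depresses the day-ahead price), plus noise.
y = -0.05 * net_virtual_bid + 0.01 * (load_forecast - 1000.0) + rng.normal(0, 2.0, n)

# monotonic_cst: -1 forces a non-increasing response in the first feature,
# 0 leaves the second feature unconstrained.
model = HistGradientBoostingRegressor(monotonic_cst=[-1, 0], max_iter=300)
model.fit(X, y)

# The fitted spread is non-increasing in the net virtual bid by construction.
probe = np.column_stack([np.linspace(-300, 300, 7), np.full(7, 1000.0)])
print(model.predict(probe))
```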
Solving the optimal power flow (OPF) problem in the real-time electricity market improves the efficiency and reliability of integrating low-carbon energy resources into the power grid. To address the scalability and adaptivity issues of existing end-to-end OPF learning solutions, we propose a new graph neural network (GNN) framework for predicting the electricity market prices that result from solving OPFs. The proposed GNN-for-OPF framework innovatively exploits the locality property of prices and introduces physics-aware regularization, while attaining reduced model complexity and fast adaptivity to varying grid topology. Numerical tests have validated the learning efficiency and adaptivity improvements of the proposed method over existing approaches.
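The sketch below illustrates the general flavour of a GNN price predictor with a physics-aware penalty: one graph convolution over the normalized grid adjacency, followed by a Laplacian smoothness term on the predicted nodal prices. It is a generic toy model, not the paper's architecture; the network size, penalty weight, and random data are all assumptions.

```python
# Sketch: a one-layer graph convolution over the grid adjacency used to
# predict nodal prices, with a simple neighbour-smoothness penalty.
import torch
import torch.nn as nn

class SimpleGridGNN(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, 1)

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency (N x N); x: nodal features (N x in_dim)
        h = torch.relu(adj_norm @ self.lin1(x))   # aggregate neighbour features
        return self.lin2(h).squeeze(-1)           # one price per bus

def laplacian_penalty(price, adj):
    # Encourage prices at connected buses to stay close (a crude "physics-aware" term).
    lap = torch.diag(adj.sum(dim=1)) - adj
    return price @ lap @ price

# Usage with random data: N buses, F nodal features (e.g. load, generation limits)
N, F = 30, 4
x = torch.randn(N, F)
adj = (torch.rand(N, N) < 0.1).float()
adj = ((adj + adj.T) > 0).float()
adj.fill_diagonal_(0)
adj_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)

model = SimpleGridGNN(F, 16)
price_true = torch.randn(N)                       # placeholder targets
pred = model(x, adj_norm)
loss = nn.functional.mse_loss(pred, price_true) + 1e-3 * laplacian_penalty(pred, adj)
loss.backward()
```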
We design a novel framework to examine market efficiency through out-of-sample (OOS) predictability. We frame the asset pricing problem as a machine learning classification problem and construct classification models to predict return states. The prediction-based portfolios beat the market with significant OOS economic gains. We measure prediction accuracies directly. For each model, we introduce a novel application of the binomial test to assess the accuracy of 3.34 million return state predictions. The tests show that our models can extract useful content from historical information to predict future return states. We provide unique economic insights about OOS predictability and machine learning models.
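A minimal sketch of the accuracy test described above, using SciPy's exact binomial test to ask whether the fraction of correct return-state predictions exceeds a no-skill benchmark; the counts and the 0.5 null rate are illustrative assumptions, not the paper's 3.34 million predictions.

```python
# Sketch: exact binomial test of classification accuracy against a no-skill rate.
from scipy.stats import binomtest

n_predictions = 100_000   # hypothetical number of OOS return-state predictions
n_correct = 51_200        # hypothetical number of correct predictions
p_null = 0.5              # no-skill benchmark for a two-state classification

result = binomtest(n_correct, n_predictions, p=p_null, alternative="greater")
print(f"accuracy = {n_correct / n_predictions:.4f}, p-value = {result.pvalue:.2e}")
```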
The proposed open-source Power Market Tool (POMATO) aims to enable research on interconnected modern and future electricity markets in the context of the physical transmission system and its secure operation. POMATO has been designed to study capacity allocation and congestion management (CACM) policies of European zonal electricity markets, especially flow-based market coupling (FBMC). For this purpose, POMATO implements methods for the analysis of simultaneous zonal market clearing, nodal (N-k secure) power flow computation for capacity allocation, and multi-stage market clearing with adaptive grid representation and redispatch. The computationally demanding N-k secure power flow is enabled via an efficient constraint reduction algorithm. POMATO provides an integrated environment for data read-in, pre- and post-processing, and interactive result visualization. Comprehensive data sets of European electricity systems compiled from Open Power System Data and Matpower Cases are part of the distribution. POMATO is implemented in Python and Julia, leveraging Python's easily maintainable data processing and user interaction features and Julia's readable algebraic modeling language, superior computational performance, and interfaces to open-source and commercial solvers.
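For readers unfamiliar with the underlying mechanics, the sketch below shows the textbook PTDF-based line-flow screening that nodal capacity-allocation studies of this kind build on. It is a generic three-bus illustration and deliberately does not use POMATO's API; all network data and limits are made up.

```python
# Sketch: DC power flow via PTDF, with a simple thermal-limit check.
import numpy as np

# 3-bus example: (from_bus, to_bus, reactance); bus 0 is the slack.
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]
n_bus, n_line = 3, len(lines)
A = np.zeros((n_line, n_bus))          # line-bus incidence matrix
b = np.zeros(n_line)                   # line susceptances
for l, (i, j, x) in enumerate(lines):
    A[l, i], A[l, j], b[l] = 1.0, -1.0, 1.0 / x

Bd = np.diag(b)
B = A.T @ Bd @ A                       # bus susceptance matrix
# PTDF relative to the slack bus: drop the slack row/column before inverting.
B_red_inv = np.linalg.inv(B[1:, 1:])
ptdf = np.zeros((n_line, n_bus))
ptdf[:, 1:] = Bd @ A[:, 1:] @ B_red_inv

injections = np.array([0.0, 0.8, -0.8])   # balanced net nodal injections, p.u.
flows = ptdf @ injections
limits = np.array([0.5, 0.5, 0.5])        # hypothetical thermal limits, p.u.
print("flows:", flows)
print("overloaded lines:", np.where(np.abs(flows) > limits)[0])
```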
This paper considers the scheduling of jobs on distributed, heterogeneous High Performance Computing (HPC) clusters. Market-based approaches are known to be efficient for allocating limited resources to those that are most prepared to pay. This context is applicable to an HPC or cloud computing scenario where the platform is overloaded. In this paper, jobs are composed of dependent tasks. Each job has a non-increasing time-value curve associated with it. Jobs are submitted to and scheduled by a market-clearing centralised auctioneer. This paper compares the performance of several policies for generating task bids. The aim investigated here is to maximise the value for the platform provider while minimising the number of jobs that do not complete (or starve). It is found that the Projected Value Remaining bidding policy gives the highest level of value under a typical overload situation and the lowest number of starved tasks across the space of utilisation examined. It does this by attempting to capture the urgency of tasks in the queue. At high levels of overload, some alternative algorithms produce slightly higher value, but at the cost of a far greater number of starved workflows.
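The sketch below gives one plausible reading of a projected-value-remaining style bid: a task bids the value its job would still earn if it finished at its projected completion time, normalised by the remaining work. The data structures, the per-unit-work normalisation, and the example curve are illustrative assumptions rather than the paper's exact policy.

```python
# Sketch: a projected-value-remaining style bid from a job's time-value curve.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Job:
    value_curve: Callable[[float], float]   # non-increasing value vs. completion time
    remaining_work: float                   # estimated remaining compute time

def projected_value_remaining_bid(job: Job, now: float, service_rate: float = 1.0) -> float:
    """Bid per unit of work, based on the value projected at the estimated finish time."""
    projected_finish = now + job.remaining_work / service_rate
    projected_value = max(job.value_curve(projected_finish), 0.0)
    return projected_value / max(job.remaining_work, 1e-9)

# Usage: a job whose value decays linearly from 100 to 0 over 50 time units.
job = Job(value_curve=lambda t: max(100.0 - 2.0 * t, 0.0), remaining_work=20.0)
print(projected_value_remaining_bid(job, now=10.0))   # value at t=30 is 40 -> bid 2.0 per work unit
```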
In this paper, we model the various wireless users in a cognitive radio network as a collection of selfish, autonomous agents that strategically interact in order to acquire the dynamically available spectrum opportunities. Our main focus is on developing solutions that enable wireless users to successfully compete with each other for the limited and time-varying spectrum opportunities, given the dynamics experienced in the wireless network. We categorize these dynamics into two types: the disturbance due to the environment (e.g., wireless channel conditions and source traffic characteristics) and the impact caused by competing users. To analyze the interactions among users given the environmental disturbance, we propose a general stochastic framework for modeling how the competition among users for spectrum opportunities evolves over time. At each stage of the dynamic resource allocation, a central spectrum moderator auctions the available resources and the users strategically bid for the resources they require. The joint bid actions affect the resource allocation and hence the rewards and future strategies of all users. Based on the observed resource allocations and corresponding rewards from previous stages, we propose a best response learning algorithm that wireless users can deploy to improve their bidding policy at each stage. The simulation results show that, by deploying the proposed best response learning algorithm, wireless users can significantly improve their own performance in terms of both packet loss rate and the cost incurred for the resources used.
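The sketch below outlines a generic best-response-style learning loop of the kind described above: each user tracks the empirical net utility of each discrete bid level, given the other users' recent behaviour, and plays the best one with occasional exploration. It is a simplified stand-in, not the paper's exact algorithm; the bid levels, step size, and reward signal are assumptions.

```python
# Sketch: per-user best-response bid learning from observed auction outcomes.
import random

class BestResponseBidder:
    def __init__(self, bid_levels, step=0.1, explore=0.05):
        self.bid_levels = list(bid_levels)
        self.estimates = {b: 0.0 for b in self.bid_levels}   # estimated net reward per bid
        self.step = step
        self.explore = explore

    def choose_bid(self):
        if random.random() < self.explore:
            return random.choice(self.bid_levels)            # occasional exploration
        return max(self.bid_levels, key=lambda b: self.estimates[b])

    def update(self, bid, reward, cost):
        # Net utility observed after this stage's auction clears.
        net = reward - cost
        self.estimates[bid] += self.step * (net - self.estimates[bid])

# Usage within one auction stage (the moderator's clearing rule, allocation,
# and reward observation are outside this sketch).
bidder = BestResponseBidder(bid_levels=[1, 2, 4, 8])
bid = bidder.choose_bid()
# ... submit bid, observe allocation, reward (e.g. throughput) and payment ...
bidder.update(bid, reward=5.0, cost=float(bid))
```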