
AI (Deep Reinforcement Learning) for Strategic Bidding in Energy Markets

Primary supervisor

Hao Wang

The world’s energy markets are transforming as ever more renewable energy is integrated into the grid. Intermittent renewable supply leads to unexpected demand-supply mismatches and highly volatile energy prices. Energy arbitrage strategically operates energy devices, such as battery storage, to exploit the temporal price spread: buying when prices are low and selling when they are high smooths out price differences in the market while generating revenue. The ancillary service market, in turn, procures frequency regulation services to keep the power system secure despite the limited dispatchability of renewable supply. Because the market behaves in complex ways, prices are hard to forecast and an effective bidding strategy for arbitrage is challenging to develop.
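
As a purely illustrative example of how a temporal price spread generates revenue, the snippet below computes the profit from one charge/discharge cycle of a hypothetical 1 MWh battery; the prices and round-trip efficiency are made-up figures, not market data.

```python
# Toy illustration (not project data): profit from one charge/discharge cycle
# of a hypothetical 1 MWh battery, with made-up prices and efficiency.
charge_price = 30.0      # $/MWh, off-peak price at which the battery charges
discharge_price = 120.0  # $/MWh, peak price at which the battery discharges
efficiency = 0.9         # assumed round-trip efficiency of the storage device
energy = 1.0             # MWh shifted between the two intervals

profit = energy * efficiency * discharge_price - energy * charge_price
print(f"Arbitrage profit for one cycle: ${profit:.2f}")  # -> $78.00
```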

This project aims to design effective bidding strategies that leverage historical market data while supporting secure market operation. Deep reinforcement learning is a branch of artificial intelligence that, much like humans, learns the best actions through trial and error. These features make it well suited to dynamic and stochastic environments such as energy markets. However, most existing studies have considered either a single market using reinforcement learning or multiple markets using optimisation methods that assume market prices are given, leaving a research gap: how to bid optimally in multiple markets in real time.
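
To make the reinforcement-learning framing concrete, here is a minimal sketch, assuming a single energy market and a price-taking battery, of how the bidding problem can be cast as a Markov decision process using the gymnasium API. The class name StorageBiddingEnv, the two-element observation (price and state of charge), and the reward are illustrative simplifications, not the project’s actual formulation.

```python
# Minimal sketch: storage bidding as a Markov decision process (gymnasium API).
# Everything here (state variables, price-taker reward, parameters) is a
# simplified illustration, not the project's model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class StorageBiddingEnv(gym.Env):
    """Hypothetical single-market environment: each step the agent chooses how
    much to charge (-) or discharge (+) and is settled at the spot price."""

    def __init__(self, prices, capacity_mwh=1.0, power_mw=0.5, efficiency=0.9):
        super().__init__()
        self.prices = np.asarray(prices, dtype=np.float32)
        self.capacity = capacity_mwh
        self.power = power_mw
        self.efficiency = efficiency
        # Action: fraction of rated power in [-1, 1] (charge .. discharge).
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        # Observation: [current price, state of charge].
        self.observation_space = spaces.Box(
            low=np.array([-np.inf, 0.0], dtype=np.float32),
            high=np.array([np.inf, capacity_mwh], dtype=np.float32),
        )

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = 0.5 * self.capacity  # start half full
        return self._obs(), {}

    def step(self, action):
        price = float(self.prices[self.t])
        dispatch = float(np.clip(action[0], -1.0, 1.0)) * self.power  # MW over 1 h
        if dispatch >= 0:  # discharge, limited by stored energy
            energy_out = min(dispatch, self.soc)
            self.soc -= energy_out
            reward = energy_out * self.efficiency * price
        else:              # charge, limited by remaining capacity
            energy_in = min(-dispatch, self.capacity - self.soc)
            self.soc += energy_in
            reward = -energy_in * price
        self.t += 1
        terminated = self.t >= len(self.prices)
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        price = self.prices[min(self.t, len(self.prices) - 1)]
        return np.array([price, self.soc], dtype=np.float32)
```

Extending this sketch to joint bidding would mean enlarging the action space with bids for reserve and frequency regulation and adding those revenue streams to the reward.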

#sustainability

Student cohort

Double Semester

Aim/outline

1) Model the electricity market including different value streams, e.g., energy, reserve, and frequency regulation; 

2) Develop deep reinforcement learning algorithms for strategic bidding in multiple markets to maximise the total value captured across markets;

3) Train and test the developed strategy on real-world market data and compare it with baseline methods (see the sketch after this list).
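
As a starting point for steps 2) and 3), the sketch below trains a PPO agent with the stable-baselines3 library (assumed to be installed) on the simplified StorageBiddingEnv from the earlier sketch and compares its test return against a naive price-threshold baseline. The synthetic prices, the threshold, and the training budget are placeholder assumptions; the project would substitute real market data and stronger baselines.

```python
# Illustrative training/evaluation loop. Assumes the StorageBiddingEnv sketch
# above and stable-baselines3; PPO and the threshold rule are placeholders for
# the methods the project would actually study.
import numpy as np
from stable_baselines3 import PPO

rng = np.random.default_rng(0)
train_prices = 50 + 30 * rng.standard_normal(24 * 300)  # synthetic, not real data
test_prices = 50 + 30 * rng.standard_normal(24 * 60)

model = PPO("MlpPolicy", StorageBiddingEnv(train_prices), verbose=0)
model.learn(total_timesteps=50_000)


def evaluate(env, policy):
    """Roll out one episode and return the cumulative reward."""
    obs, _ = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _, _ = env.step(policy(obs))
        total += reward
    return total


rl_policy = lambda obs: model.predict(obs, deterministic=True)[0]
# Naive baseline: discharge when the price is above its long-run mean, else charge.
baseline = lambda obs: np.array([1.0 if obs[0] > 50 else -1.0], dtype=np.float32)

print("RL return:      ", evaluate(StorageBiddingEnv(test_prices), rl_policy))
print("Baseline return:", evaluate(StorageBiddingEnv(test_prices), baseline))
```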

URLs/references

Some recent papers from the team.

1) Hao Wang, B. Zhang, Energy Storage Arbitrage in Real-Time Markets via Reinforcement Learning, IEEE Power & Energy Society General Meeting (PESGM), 2018.

2) M. Anwar*, C. Wang, F. de Nijs, Hao Wang, Proximal Policy Optimization Based Reinforcement Learning for Joint Bidding in Energy and Frequency Regulation Markets, IEEE Power & Energy Society General Meeting (PESGM), 2022.

3) J. Li^, C. Wang, Hao Wang, Optimal Energy Storage Scheduling for Wind Curtailment Reduction and Energy Arbitrage: A Deep Reinforcement Learning Approach, IEEE Power & Energy Society General Meeting (PESGM), 2023.

4) J. Li^, C. Wang, Hao Wang, Deep Reinforcement Learning for Wind and Energy Storage Coordination in Wholesale Energy and Ancillary Service Markets, Energy and AI, 2023.

5) J. Li^, C. Wang, Hao Wang, Attentive Convolutional Deep Reinforcement Learning for Optimizing Solar-Storage Systems in Real-time Electricity Markets, IEEE Transactions on Industrial Informatics, 2024.

6) J. Li^, C. Wang, Y. Zhang, Hao Wang, Temporal-Aware Deep Reinforcement Learning for Energy Storage Bidding in Energy and Contingency Reserve Markets, IEEE Transactions on Energy Markets, Policy, and Regulation, 2024.

(*Thesis students. ^HDR students.)

Required knowledge

Programming in Python (required), Markov decision process and/or reinforcement learning (preferred).