January 13, 2023
Conference Paper

An Efficient Distributed Reinforcement Learning for Enhanced Multi-Microgrid Management

Abstract

Economic dispatch in multi-microgrid (MMG) systems requires coordinating the distributed energy resources (DERs) of different microgrids, which leads to a significant increase in the number of states for energy management. In these cases, traditional reinforcement learning (RL) approaches become computationally expensive or produce solutions that incur extra operating costs for the system. This paper proposes an RL approach that employs local learning agents to interact with microgrid environments in a distributed manner and aggregates the outcomes to train a global agent that learns the policy for the MMG system. This distributed exploration and aggregation process provides an effective solution and guides the global agent to learn the dispatch policy efficiently. Case studies are performed on a system of three microgrids with different types of DERs. Results obtained with the proposed RL approach, together with comparisons against conventional methods, substantiate its effectiveness in terms of operating cost, computation time, and peak-to-average ratio.
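To illustrate the distributed exploration and aggregation pattern described in the abstract, the sketch below shows local tabular Q-learning agents exploring separate toy "microgrid" environments and periodically averaging their Q-tables into a global agent. This is a minimal illustration only, not the authors' implementation: the environment dynamics, state/action discretization, hyperparameters, and the simple averaging rule are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): local Q-learning agents explore
# toy microgrid environments and are aggregated into a global Q-table.
import numpy as np

N_STATES, N_ACTIONS = 24, 5          # assumed hour-of-day states, discrete dispatch levels
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate


def toy_microgrid_step(state, action, rng):
    """Toy stand-in for one microgrid's dispatch dynamics: reward is the
    negative of a made-up operating cost; time advances by one hour."""
    cost = abs(action - (state % N_ACTIONS)) + rng.normal(0, 0.1)
    return (state + 1) % N_STATES, -cost


def local_exploration(q_local, episodes, rng):
    """One local agent interacts with its own environment and updates its Q-table."""
    for _ in range(episodes):
        s = rng.integers(N_STATES)
        for _ in range(N_STATES):  # one-day horizon
            a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(q_local[s]))
            s_next, r = toy_microgrid_step(s, a, rng)
            td_target = r + GAMMA * np.max(q_local[s_next])
            q_local[s, a] += ALPHA * (td_target - q_local[s, a])
            s = s_next
    return q_local


def train_global(n_microgrids=3, rounds=20, episodes_per_round=50, seed=0):
    """Alternate distributed exploration with aggregation into a global Q-table."""
    rng = np.random.default_rng(seed)
    q_global = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(rounds):
        q_locals = [local_exploration(q_global.copy(), episodes_per_round, rng)
                    for _ in range(n_microgrids)]
        q_global = np.mean(q_locals, axis=0)  # simple averaging as the aggregation step
    return q_global


if __name__ == "__main__":
    q = train_global()
    print("Greedy dispatch action per state:", np.argmax(q, axis=1))
```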


Citation

Das, A., Z. Ni, and D. Wu. 2022. An Efficient Distributed Reinforcement Learning for Enhanced Multi-Microgrid Management. In International Joint Conference on Neural Networks (IJCNN 2022), July 18-23, 2022, Padua, Italy, 1-6. Piscataway, New Jersey: IEEE. PNNL-SA-170345. doi:10.1109/IJCNN55064.2022.9892754