January 31, 2023
Conference Paper

Optimal Coordination of Distributed Energy Resources Using Deep Deterministic Policy Gradient

Abstract

Recent studies have shown that reinforcement learning (RL) is a promising approach for the coordination and control of distributed energy resources (DER) under uncertainty. Many existing RL approaches, including Q-learning and approximate dynamic programming, rely on lookup-table representations, which become inefficient when the problem size is large and infeasible when continuous states and actions are involved. In addition, when modeling a battery energy storage system (BESS), the loss of battery life is often not adequately incorporated into the decision-making process. This paper proposes an innovative deep RL method for DER coordination that accounts for BESS degradation. The proposed deep RL method is designed around an adaptive actor-critic architecture and employs an off-policy deterministic policy gradient method to determine the dispatch operation that minimizes operating cost and BESS life loss. Case studies were performed to validate the proposed method and demonstrate the effects of incorporating degradation models into the control design.
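To make the actor-critic mechanics concrete, the following is a minimal sketch of a deterministic policy gradient (DDPG-style) update loop. It is an illustration under stated assumptions, not the paper's implementation: the state layout (e.g., load, PV output, price, state of charge), the action layout (BESS and generator dispatch), the quadratic stand-in reward, and the use of linear actor/critic functions are all simplifying assumptions chosen so the sketch runs with NumPy alone.

```python
import numpy as np

# Illustrative dimensions (assumed): state = [load, PV, price, SoC],
# action = [BESS power, dispatchable generator power].
rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

# Linear actor mu(s) = Wa @ s and linear critic Q(s, a) = wq @ [s; a];
# the paper would use deep networks, but the update rules are the same.
Wa = rng.normal(scale=0.1, size=(action_dim, state_dim))
wq = rng.normal(scale=0.1, size=state_dim + action_dim)
Wa_targ, wq_targ = Wa.copy(), wq.copy()

gamma, tau, lr = 0.99, 0.005, 1e-2  # discount, Polyak rate, learning rate

def actor(W, s):
    return W @ s

def critic(w, s, a):
    return w @ np.concatenate([s, a])

def ddpg_step(s, a, r, s_next):
    """One off-policy actor-critic update on a single transition."""
    global Wa, wq, Wa_targ, wq_targ
    # Critic: TD target bootstraps through the *target* actor and critic,
    # which is what makes the method off-policy.
    a_next = actor(Wa_targ, s_next)
    y = r + gamma * critic(wq_targ, s_next, a_next)
    td_err = critic(wq, s, a) - y
    wq -= lr * td_err * np.concatenate([s, a])  # grad of 0.5 * td_err^2
    # Actor: deterministic policy gradient, chaining dQ/da through dmu/dWa.
    grad_a = wq[state_dim:]          # dQ/da is constant for a linear critic
    Wa += lr * np.outer(grad_a, s)   # gradient *ascent* on Q(s, mu(s))
    # Soft (Polyak) update of the target networks.
    Wa_targ = tau * Wa + (1 - tau) * Wa_targ
    wq_targ = tau * wq + (1 - tau) * wq_targ

# Training loop on synthetic transitions; the reward is a stand-in for the
# negative of operating cost plus BESS life loss in the paper's objective.
for _ in range(200):
    s = rng.normal(size=state_dim)
    a = actor(Wa, s) + 0.1 * rng.normal(size=action_dim)  # exploration noise
    r = -np.sum(a ** 2)                                   # assumed cost proxy
    s_next = rng.normal(size=state_dim)
    ddpg_step(s, a, r, s_next)
```

In a full implementation, transitions would be drawn from a replay buffer rather than generated fresh, and the degradation model would enter through the reward term, so that the learned dispatch trades off operating cost against BESS life loss.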


Citation

Das, A., and D. Wu. 2022. "Optimal Coordination of Distributed Energy Resources Using Deep Deterministic Policy Gradient." In IEEE Electrical Energy Storage Applications and Technologies (EESAT) 2022, Austin, TX, 1-5. Piscataway, New Jersey: IEEE. PNNL-SA-174133. doi:10.1109/EESAT55007.2022.9998046