Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184

Front. Inform. Technol. Electron. Eng    2022, Vol. 23 Issue (3) : 438-451    https://doi.org/10.1631/FITEE.2000446
Original Article
Minimax Q-learning design for H∞ control of linear discrete-time systems
Xinxing LI1, Lele XI2,3, Wenzhong ZHA1, Zhihong PENG2
1. Information Science Academy, China Electronics Technology Group Corporation, Beijing 100086, China
2. School of Automation, Beijing Institute of Technology, Beijing 100081, China
3. Peng Cheng Laboratory, Shenzhen 518052, China
Abstract

The H∞ control method is an effective approach for attenuating the effect of disturbances on practical systems, but it is difficult to obtain the H∞ controller due to the nonlinear Hamilton–Jacobi–Isaacs equation, even for linear systems. This study deals with the design of an H∞ controller for linear discrete-time systems. To solve the related game algebraic Riccati equation (GARE), a novel model-free minimax Q-learning method is developed, on the basis of an offline policy iteration algorithm, which is shown to be Newton's method for solving the GARE. The proposed minimax Q-learning method, which employs off-policy reinforcement learning, learns the optimal control policies for the controller and the disturbance online, using only the state samples generated by the implemented behavior policies. Different from existing Q-learning methods, a novel gradient-based policy improvement scheme is proposed. We prove that the minimax Q-learning method converges to the saddle solution under initially admissible control policies and an appropriate positive learning rate, provided that certain persistence of excitation (PE) conditions are satisfied. In addition, the PE conditions can be easily met by choosing appropriate behavior policies containing certain excitation noises, without causing any excitation noise bias. In the simulation study, we apply the proposed minimax Q-learning method to design an H∞ load-frequency controller for an electrical power system generator that suffers from load disturbance, and the simulation results indicate that the obtained H∞ load-frequency controller has good disturbance rejection performance.

Keywords H∞ control      Zero-sum dynamic game      Reinforcement learning      Adaptive dynamic programming      Minimax Q-learning      Policy iteration
Corresponding Author(s): Wenzhong ZHA   
Issue Date: 01 June 2022
 Cite this article:   
Xinxing LI, Lele XI, Wenzhong ZHA, et al. Minimax Q-learning design for H∞ control of linear discrete-time systems[J]. Front. Inform. Technol. Electron. Eng, 2022, 23(3): 438-451.
 URL:  
https://academic.hep.com.cn/fitee/EN/10.1631/FITEE.2000446
https://academic.hep.com.cn/fitee/EN/Y2022/V23/I3/438