TY - JOUR
A2 - Lolli, Francesco
AU - Xin, Bo
AU - Yu, Haixu
AU - Qin, You
AU - Tang, Qing
AU - Zhu, Zhangqing
PY - 2020
DA - 2020/01/09
TI - Exploration Entropy for Reinforcement Learning
SP - 2672537
VL - 2020
AB - The training process analysis and termination condition of the training process of a Reinforcement Learning (RL) system have always been the key issues to train an RL agent. In this paper, a new approach based on State Entropy and Exploration Entropy is proposed to analyse the training process. The concept of State Entropy is used to denote the uncertainty for an RL agent to select the action at every state that the agent will traverse, while the Exploration Entropy denotes the action selection uncertainty of the whole system. Actually, the action selection uncertainty of a certain state or the whole system reflects the degree of exploration and the stage of the learning process for an agent. The Exploration Entropy is a new criterion to analyse and manage the training process of RL. The theoretical analysis and experiment results illustrate that the curve of Exploration Entropy contains more information than the existing analytical methods.
SN - 1024-123X
UR - https://doi.org/10.1155/2020/2672537
DO - 10.1155/2020/2672537
JF - Mathematical Problems in Engineering
PB - Hindawi
KW -
ER -