Muslim, M. S. M. and Ismail, Z. H. (2021) Stability-certified deep reinforcement learning strategy for UAV and Lagrangian floating platform. In: 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, ECTI-CON 2021, 19 May 2021 - 22 May 2021, Chiang Mai.
Full text not available from this repository.
Official URL: http://dx.doi.org/10.1109/ECTI-CON51831.2021.94546...
Abstract
This paper presents a robust technique that enables an Unmanned Aerial Vehicle (UAV) to fly above a moving platform autonomously. The study investigates the problem of certifying the stability of reinforcement learning policies interconnected with nonlinear dynamical systems, since conventional control methods often fail to properly account for complex effects. Deep reinforcement learning algorithms are designed to maintain robust stability of the UAV's position in three-dimensional space, i.e. its altitude and longitude-latitude location, so that the UAV can fly over a moving platform in a stable manner. Moreover, the input-output policy gradient method is regularized and able to certify a large set of stabilizing controllers by exploiting problem-specific structure, thereby achieving robust stability. Within the stability-certified parameter space, reinforcement learning agents attain high efficiency while exhibiting consistent learning behavior over time, as demonstrated by a numerical assessment on a decentralized control task involving flight formation.
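The core idea in the abstract — running policy-gradient updates while keeping the policy parameters inside a set for which closed-loop stability is certified — can be illustrated with a deliberately simple sketch. The toy plant, the certificate, and all names below are illustrative assumptions, not the paper's actual method or UAV model: a 1-D discrete-time linear system with a scalar gain, where the certificate is the closed-loop contraction condition and each gradient step is followed by a projection back into the certified interval.

```python
# Hypothetical sketch (not from the paper): projected policy-gradient
# learning on a toy 1-D plant x[t+1] = A*x[t] + B*u[t] with linear
# policy u = -k*x. Stability certificate: |A - B*k| < 1.

A, B = 1.2, 0.5          # open-loop unstable plant (|A| > 1)

def certified(k):
    """Closed-loop stability certificate for the toy plant."""
    return abs(A - B * k) < 1.0

def project(k):
    """Project the gain back into the certified interval (A-1)/B < k < (A+1)/B."""
    lo, hi = (A - 1.0) / B + 1e-3, (A + 1.0) / B - 1e-3
    return min(max(k, lo), hi)

def rollout(k, steps=20):
    """Return (negative quadratic cost) of one trajectory under gain k."""
    x, ret = 1.0, 0.0
    for _ in range(steps):
        u = -k * x
        ret -= x * x + 0.1 * u * u   # reward = -(state cost + control cost)
        x = A * x + B * u
    return ret

def train(k=1.0, lr=0.01, iters=200, eps=0.05):
    for _ in range(iters):
        # finite-difference estimate of the return's gradient in k
        grad = (rollout(k + eps) - rollout(k - eps)) / (2 * eps)
        k = project(k + lr * grad)   # gradient ascent step, then projection
    return k

k_star = train()
assert certified(k_star)   # the learned gain never leaves the certified set
```

The projection step is what makes every intermediate policy provably stabilizing; the paper's contribution, per the abstract, is obtaining such a certified set for nonlinear dynamics coupled with a deep RL policy, which is far less trivial than this scalar case.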
| Item Type | Conference or Workshop Item (Paper) |
|---|---|
| Uncontrolled Keywords | deep reinforcement learning, dynamic system, stability-certified |
| Subjects | T Technology > T Technology (General) |
| Divisions | Malaysia-Japan International Institute of Technology |
| ID Code | 95750 |
| Deposited By | Narimah Nawil |
| Deposited On | 31 May 2022 13:18 |
| Last Modified | 31 May 2022 13:18 |