Proceedings of the 2023 International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2023)

Robustness Exploration of the Twin-Delayed Deep Deterministic Policy Gradient Algorithm Under Noise Attack

Authors
Aofeng Xu1, *
1School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, Shandong, 264209, China
*Corresponding author.
Available Online 14 February 2024.
DOI
10.2991/978-94-6463-370-2_64
Keywords
Reinforcement Learning; TD3; Robustness Exploration
Abstract

This research explores the robustness of the Twin Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning algorithm, focusing on its performance under uncertainty, noise, and attacks. Reinforcement learning is a machine learning paradigm in which an agent learns to perform tasks and to optimize long-term reward through interaction with its environment; it has wide applications in areas such as autonomous driving, gaming, and robot control. TD3 is an advanced reinforcement learning algorithm that performs remarkably well across complex tasks and environments, and its distinctive design features, such as the dual Q-critic structure and target policy smoothing, potentially make it robust to uncertainty and noise. Although the robustness of reinforcement learning has been studied extensively, research targeting TD3 specifically remains scarce. This study aims to fill that gap by investigating how TD3's performance changes when different types of noise are added or when it is subjected to attacks. Beyond a deeper understanding of the TD3 algorithm itself, the findings provide support for the theory and practice of robust reinforcement learning and have the potential to drive further advances in the field.
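For context on the mechanisms the abstract names, the minimal Python sketch below illustrates how TD3's clipped double-Q target and target policy smoothing are commonly computed, and one simple way Gaussian noise of the kind studied here might be injected into observations at evaluation time. All names (actor_target, critic1_target, noise_std, etc.) and hyperparameter values are illustrative assumptions, not the paper's implementation.

# Sketch of TD3's clipped double-Q target with target policy smoothing,
# plus a simple Gaussian observation perturbation. Illustrative only.
import torch

def td3_target(actor_target, critic1_target, critic2_target,
               next_obs, reward, done,
               gamma=0.99, policy_noise=0.2, noise_clip=0.5, max_action=1.0):
    """Compute the TD3 critic target for a batch of transitions."""
    with torch.no_grad():
        # Target policy smoothing: add clipped Gaussian noise to the target action.
        base_action = actor_target(next_obs)
        noise = (torch.randn_like(base_action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (base_action + noise).clamp(-max_action, max_action)

        # Clipped double-Q: take the minimum of the two target critics.
        q1 = critic1_target(next_obs, next_action)
        q2 = critic2_target(next_obs, next_action)
        target_q = reward + gamma * (1.0 - done) * torch.min(q1, q2)
    return target_q

def noisy_observation(obs, noise_std=0.1):
    """One possible noise attack: perturb the observation the agent receives."""
    return obs + noise_std * torch.randn_like(obs)

In a robustness study of this kind, noise_std (or the corresponding attack strength) would typically be swept over a range of values while recording the agent's episodic return.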

Copyright
© 2024 The Author(s)
Open Access
This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2023 International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2023)
Series
Advances in Intelligent Systems Research
Publication Date
14 February 2024
ISBN
978-94-6463-370-2
ISSN
1951-6851
DOI
10.2991/978-94-6463-370-2_64

Cite this article

TY  - CONF
AU  - Aofeng Xu
PY  - 2024
DA  - 2024/02/14
TI  - Robustness Exploration of the Twin-Delayed Deep Deterministic Policy Gradient Algorithm Under Noise Attack
BT  - Proceedings of the 2023 International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2023)
PB  - Atlantis Press
SP  - 627
EP  - 634
SN  - 1951-6851
UR  - https://doi.org/10.2991/978-94-6463-370-2_64
DO  - 10.2991/978-94-6463-370-2_64
ID  - Xu2024
ER  -