dc.contributor.author | Issa, Razin Bin | |
dc.contributor.author | Das, Modhumonty | |
dc.contributor.author | Rahman, Md. Saferi | |
dc.contributor.author | Barua, Monika | |
dc.contributor.author | Rhaman, Md. Khalilur | |
dc.contributor.author | Ripon, Kazi Shah Nawaz | |
dc.contributor.author | Alam, Md. Golam Rabiul | |
dc.date.accessioned | 2021-10-20T12:56:31Z | |
dc.date.available | 2021-10-20T12:56:31Z | |
dc.date.created | 2021-10-05T13:26:23Z | |
dc.date.issued | 2021 | |
dc.identifier.citation | Sensors. 2021, 21 (4), Article 1468. | en_US |
dc.identifier.issn | 1424-8220 | |
dc.identifier.uri | https://hdl.handle.net/11250/2824178 | |
dc.description.abstract | Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and Reinforcement Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. This research, however, focuses on the cooperative fusion of supervised and Reinforcement Learning technologies for the autonomous navigation of land vehicles in a dynamic and unknown environment. Faster R-CNN, a supervised learning approach, identifies the ambient environmental obstacles for unhindered maneuvering of the autonomous vehicle, while the training policies of Double Deep Q-Learning, a Reinforcement Learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world. It exhibits overall efficiency and effectiveness in the maneuvering of autonomous land vehicles. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | MDPI | en_US |
dc.rights | Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/deed.no | * |
dc.subject | autonomous vehicle | en_US |
dc.subject | reinforcement learning | en_US |
dc.subject | Double Deep Q-Learning | en_US |
dc.subject | Faster R-CNN | en_US |
dc.subject | object classifier | en_US |
dc.subject | Markov decision process | en_US |
dc.title | Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment | en_US |
dc.type | Peer reviewed | en_US |
dc.type | Journal article | en_US |
dc.description.version | publishedVersion | en_US |
dc.rights.holder | © 2021 by the authors. | en_US |
dc.subject.nsi | VDP::Teknologi: 500 | en_US |
dc.source.volume | 21 | en_US |
dc.source.journal | Sensors | en_US |
dc.source.issue | 4 | en_US |
dc.identifier.doi | 10.3390/s21041468 | |
dc.identifier.cristin | 1943425 | |
dc.source.articlenumber | 1468 | en_US |
cristin.ispublished | true | |
cristin.fulltext | original | |
cristin.qualitycode | 1 | |