ACO-QL: Enhancing ACO Algorithm for Routing in MANETs Using Reinforcement Learning

Authors: Yahia Mohsen Abu Saqer, Khalil Mohammed Eslayyeh, Nasser Majed Abudalu, Aiman A. Abusamra

Journal: International Journal of Engineering and Manufacturing (IJEM)

Issue: Vol. 15, No. 5, 2025.

Free access

ACO-based routing protocols like AntHocNet have emerged as a solution for adaptive routing in MANETs. Likewise, deep Q-learning based protocols are suitable for complex and dynamic environments like MANETs, as they exploit real-time data for better decision-making. However, there is a lack of studies that enhance ACO-based protocols with Q-learning in a new hybrid protocol and compare it with the established ACO-based protocol AntHocNet. By combining ACO's strengths (e.g., multi-agent pathfinding and the historical data encoded in pheromones) with key components of Q-learning, we obtain a promising protocol ready to be compared with AntHocNet. Previous studies have explored integrating machine learning with MANET routing, but few, if any, have explored enhancing ACO using ML techniques. Therefore, we propose two new protocols: ACO-QL and ACO-DQN; the former uses Q-learning and the latter uses deep Q-learning. After running implementations of ACO-DQN, ACO-QL, and AntHocNet in a MANET simulation across many experiments, we found that AntHocNet is superior to ACO-DQN in terms of execution time, end-to-end delay, and path cost in most cases, while ACO-DQN achieved better packet delivery ratio and throughput. Meanwhile, ACO-QL consistently achieved lower packet delivery ratios than AntHocNet while mostly matching its performance on the other metrics, making it a valid lightweight and faster alternative.
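To illustrate the hybrid idea the abstract describes, the sketch below shows a minimal Q-learning routing loop whose reward blends a link-quality term with an ACO-style pheromone value. All names, parameters, and the reward shaping here are illustrative assumptions for exposition; they are not the authors' actual ACO-QL design.

```python
import random

# Hypothetical sketch: a Q-table over (node, next_hop) pairs, updated with the
# standard Q-learning rule, where the reward mixes inverse link delay with a
# pheromone intensity. Parameter values are arbitrary assumptions.
ALPHA = 0.5    # learning rate
GAMMA = 0.8    # discount factor
EPSILON = 0.1  # exploration probability

def q_update(q, state, action, reward, next_state, next_neighbors):
    """One Q-learning update for a routing decision at `state` -> `action`."""
    best_next = max((q.get((next_state, a), 0.0) for a in next_neighbors),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def pheromone_reward(link_delay, pheromone, w=0.5):
    """Assumed reward shaping: blend inverse delay with pheromone intensity."""
    return w * (1.0 / (1.0 + link_delay)) + (1.0 - w) * pheromone

def choose_next_hop(q, state, neighbors, rng=random.random):
    """Epsilon-greedy next-hop selection over the current Q-values."""
    if rng() < EPSILON:
        return random.choice(neighbors)
    return max(neighbors, key=lambda a: q.get((state, a), 0.0))
```

A deep Q-learning variant (as in ACO-DQN) would replace the Q-table lookup with a neural network over a state vector, at the cost of the extra execution time the abstract reports.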


MANET, Machine Learning, Reinforcement Learning, Deep Learning, Q-Learning, ACO, Deep Q-Learning

Short address: https://sciup.org/15019955

IDR: 15019955   |   DOI: 10.5815/ijem.2025.05.03