Drones, Vol. 7, Pages 213: Deep Reinforcement Learning Based Computation Offloading in UAV-Assisted Edge Computing

Drones doi: 10.3390/drones7030213

Authors: Peiying Zhang, Yu Su, Boxiao Li, Lei Liu, Cong Wang, Wei Zhang, Lizhuang Tan

Traditional multi-access edge computing (MEC) often struggles to process large amounts of data when facing highly computation-intensive tasks, so offloading policies are needed to move computation tasks to adjacent edge servers. The computation offloading problem is a non-convex mixed-integer program and is difficult to solve well. Meanwhile, the cost of deploying servers is often high when providing edge computing services in remote areas or over complex terrain. In this paper, the unmanned aerial vehicle (UAV) is introduced into the multi-access edge computing network, and a computation offloading method based on deep reinforcement learning in a UAV-assisted multi-access edge computing network (DRCOM) is proposed. We use the UAV as the aerial base station of the MEC network, and DRCOM decomposes the computation task offloading problem into two sub-problems: finding, via deep reinforcement learning, the optimal binary decision of whether each user device offloads; and allocating resources. We compared our algorithm with three other offloading methods, i.e., LC, CO, and LRA. The maximum computation rate of our algorithm DRCOM is 142.38% higher than LC, 50.37% higher than CO, and 12.44% higher than LRA. The experimental results demonstrate that DRCOM greatly improves the computation rate.
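The two-subproblem decomposition described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual formulation: the rate model, channel values, and the stand-in for the policy network's output are all assumptions made for demonstration. The pattern shown is: a relaxed (continuous) per-device score is quantized into candidate binary offloading decisions (sub-problem 1), then each candidate's computation rate is evaluated and the best is kept (sub-problem 2).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                           # number of user devices (toy value)
h = rng.uniform(0.1, 1.0, N)    # channel gains to the UAV (toy values)

def computation_rate(x, h):
    """Toy rate model (an assumption, not the paper's): local computing
    gives a fixed rate; offloading to the UAV-MEC server scales with
    the device's channel gain."""
    local_rate = 0.3
    offload_rate = 1.2 * h
    return float(np.sum(np.where(x == 1, offload_rate, local_rate)))

# Sub-problem 1: in DRCOM a learned policy would emit a relaxed
# decision in [0, 1] per device; here a random vector stands in for
# the DRL network's output.
relaxed = rng.uniform(0.0, 1.0, N)

# Quantize the relaxed output into N+1 candidate binary decisions by
# offloading the top-k scored devices for each k (order-preserving).
order = np.argsort(-relaxed)
candidates = []
for k in range(N + 1):
    x = np.zeros(N, dtype=int)
    x[order[:k]] = 1
    candidates.append(x)

# Sub-problem 2: evaluate each candidate (where resource allocation
# would happen) and select the one maximizing the computation rate.
best = max(candidates, key=lambda x: computation_rate(x, h))
print("best offloading decision:", best,
      "rate:", round(computation_rate(best, h), 3))
```

Enumerating candidate quantizations and scoring each one keeps the binary decision tractable while leaving the continuous resource-allocation step to a conventional per-candidate evaluation.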
