DROO

Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks

Python code to reproduce our DROO algorithm for wireless-powered mobile-edge computing [1], which takes the time-varying wireless channel gains as input and generates binary offloading decisions.
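
A minimal sketch of one DROO time frame, following the description in [1]: the DNN maps the current channel gains to a relaxed offloading decision, which is quantized into a few candidate binary actions; each candidate is scored by solving the corresponding resource-allocation problem, and the best (channel, action) pair is stored in replay memory to keep training the DNN online. The helper names and signatures below (decode, encode, bisection and their arguments) are illustrative assumptions, not necessarily the exact API of this repository:

    import numpy as np
    from memory import MemoryDNN          # DNN mapping channel gains to relaxed offloading decisions
    from optimization import bisection    # assumed: scores a binary action by solving resource allocation

    N = 10                                # number of wireless devices (WDs)
    K = N                                 # number of quantized candidate actions per time frame
    mem = MemoryDNN(net=[N, 120, 80, N])  # assumed layer sizes: N gains in, N relaxed decisions out

    for t in range(10000):                # online time frames
        h = np.random.exponential(1, N)   # placeholder for the time-varying channel gains at frame t
        m_list = mem.decode(h, K)         # K candidate binary offloading actions from the DNN output
        rewards = [bisection(h, m)[0] for m in m_list]  # assumed: first return value is the computation rate
        best = int(np.argmax(rewards))
        mem.encode(h, m_list[best])       # store the best pair and train the DNN from replay memory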

Cite this work

  1. L. Huang, S. Bi, and Y. J. Zhang, “Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks,” IEEE Trans. Mobile Comput., vol. 19, no. 11, pp. 2581-2593, November 2020.

About authors

L. Huang, S. Bi, and Y. J. Zhang (see the citation above).

Required packages

  • TensorFlow

  • numpy

  • scipy

How the code works

  • For the DROO algorithm, run main.py. If you are using TensorFlow 2 or PyTorch, run mainTF2.py or mainPyTorch.py, respectively.

  • For more DROO demos:

    • For alternating-weight WDs, run demo_alternate_weights.py
    • For on-off WDs, run demo_on_off.py
    • Remember to change the MemoryDNN import from
        from memory import MemoryDNN
      
      to
        from memoryTF2 import MemoryDNN
      
      or
        from memoryPyTorch import MemoryDNN
      
      if you are using TensorFlow 2 or PyTorch, respectively (an automatic alternative is sketched right after this list).
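
If you prefer not to edit the import by hand, one illustrative option (not part of the repository) is to fall back automatically:

    # pick whichever MemoryDNN implementation matches the installed framework
    try:
        from memoryTF2 import MemoryDNN        # TensorFlow 2 implementation
    except ImportError:
        from memoryPyTorch import MemoryDNN    # PyTorch implementation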

The DROO algorithm is coded based on TensorFlow 1.x. If you are new to deep learning, please start with the TensorFlow 2 or PyTorch versions, whose code is much cleaner and easier to follow.
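
For reference, the heart of the PyTorch version is just a small fully connected network that maps the N channel gains to N relaxed offloading decisions in (0, 1). The layer sizes below are illustrative assumptions, and training plus the quantization step from [1] are omitted:

    import torch
    import torch.nn as nn

    N = 10  # number of wireless devices

    # Fully connected network: N channel gains in, N relaxed offloading decisions out.
    model = nn.Sequential(
        nn.Linear(N, 120), nn.ReLU(),
        nn.Linear(120, 80), nn.ReLU(),
        nn.Linear(80, N), nn.Sigmoid(),
    )

    h = torch.rand(1, N)             # placeholder channel gains for one time frame
    relaxed = model(h)               # relaxed offloading decisions in (0, 1)
    binary = (relaxed > 0.5).int()   # simplest possible quantization to one binary action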

About

License: MIT License

