This repo contains all source code for implementing network-assisted adaptive bitrate video streaming and evaluating the system in the CloudLab testbed.

The system has been tested and evaluated in CloudLab, a testbed for researchers to build and test clouds and new architectures. To experiment with CloudLab, you will need an account for access and a profile to instantiate. The profile is specified here as cloudlab_topo.rspec.
The following subsystems are included: the OpenFlow controller with ARIMA forecasting, the caching servers, the OVS switches, and the clients.

The controller component has been tested on a server running Ubuntu 14.04 and has the following dependencies:
- MongoDB
- OpenNetMon - included in this directory and requires the PoX controller
- R for Ubuntu
- The `forecast` package in R
- rpy2 - the Python library used to invoke R functions such as ARIMA from Python
You will need to set up three tables (collections) in a MongoDB database: portmonitor, serv_bandwidth, and cachemiss. portmonitor archives network port statistics on active downstream paths, serv_bandwidth stores the processed ARIMA forecasts along with cache status, and cachemiss archives incoming cache misses used for content placement under the various strategies. For the portmonitor and serv_bandwidth tables, you may need to set the maximum size limit using capped collections as specified here. For the cachemiss table, it is mandatory to enable capped collections in order for the caching functionality to work.

Note: It is recommended to run each of the following processes inside its own screen session.
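As a concrete illustration of the capped-collection setup, the sketch below creates the three collections with pymongo. The database name (`sabr`) and the size limits are assumptions; choose limits that suit your retention needs.

```python
# Capped-collection sizes in bytes; these values are assumptions, tune as needed.
CAPPED_SPECS = {
    "portmonitor": 64 * 1024 * 1024,     # network port statistics archive
    "serv_bandwidth": 64 * 1024 * 1024,  # ARIMA forecasts + cache status
    "cachemiss": 128 * 1024 * 1024,      # cache misses (capped is mandatory here)
}

def ensure_capped_collections(db):
    """Create each collection as capped if it does not already exist."""
    existing = set(db.list_collection_names())
    for name, size in CAPPED_SPECS.items():
        if name not in existing:
            db.create_collection(name, capped=True, size=size)

if __name__ == "__main__":
    from pymongo import MongoClient  # requires a running mongod
    ensure_capped_collections(MongoClient("localhost", 27017)["sabr"])
```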
- Run the controller using the following command:

  ```
  ./pox.py openflow.of_01 --port=<controller_port> log --file=<log_file>,w opennetmon.startup
  ```
- Run the ARIMA forecast and cache status collection module. Copy arima.py to the ext folder inside the PoX directory and issue the following command:

  ```
  ./pox.py openflow.of_01 --port=<port_number> log --file=<log_file>,w arima
  ```
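For reference, invoking R's ARIMA routines from Python via rpy2 looks roughly like the following. This is a sketch, not the actual contents of arima.py; the choice of `auto.arima` and the function name `forecast_bandwidth` are assumptions.

```python
def forecast_bandwidth(samples, horizon=1):
    """Fit an ARIMA model to recent bandwidth samples using R's
    `forecast` package via rpy2, and return the point forecasts."""
    from rpy2.robjects import FloatVector        # requires R + rpy2 installed
    from rpy2.robjects.packages import importr

    forecast = importr("forecast")               # R's forecast package
    fit = forecast.auto_arima(FloatVector(samples))
    fc = forecast.forecast(fit, h=horizon)
    return list(fc.rx2("mean"))                  # "mean" holds the point forecasts

# Hypothetical usage with bandwidth samples in Mbit/s:
# print(forecast_bandwidth([10.2, 9.8, 11.1, 10.5], horizon=3))
```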
- Run the caching component. For the initial setup of the empty caches, the command `python cacher.py` must be run only after the orchestration of the testbed experiments has started.
- Setup Switches - Create OVS bridges on each of the OVS switches (sw1a, sw2a, sw2b, sw3a, sw3b, sw3c, sw3d, sw4a, sw4b, sw4c, and sw4d) and connect them to the controller, using the following commands as a reference:
  a. Create a bridge:

  ```
  sudo ovs-vsctl add-br <bridge_name>
  ```

  b. Add ports, i.e., the interfaces to be controlled:

  ```
  sudo ovs-vsctl add-port <bridge_name> <port_name>
  ```

  c. Connect the bridge to the OpenFlow controller:

  ```
  sudo ovs-vsctl set-controller <bridge_name> tcp:<IP_of_controller>:<port_number>
  ```
Note: For more information on how to configure and work with OVS switches, go here.
The script, automate_sabr_clab.py, can be updated to remotely execute the above commands on switches if desired.
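When scripting this, the three steps can be generated per switch and pushed over SSH. A minimal sketch; the bridge name, interface names, and controller address below are placeholders:

```python
def ovs_setup_commands(bridge, ports, controller_ip, controller_port=6633):
    """Return the ovs-vsctl commands that create a bridge, attach its
    ports, and point it at the OpenFlow controller."""
    cmds = ["sudo ovs-vsctl add-br %s" % bridge]
    cmds += ["sudo ovs-vsctl add-port %s %s" % (bridge, p) for p in ports]
    cmds.append("sudo ovs-vsctl set-controller %s tcp:%s:%d"
                % (bridge, controller_ip, controller_port))
    return cmds

# Example for one switch (interface names and controller IP are assumptions):
for cmd in ovs_setup_commands("sw1a", ["eth1", "eth2"], "10.10.1.1"):
    print(cmd)
```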
Setup Server
- The following dependencies must be installed:
- screen apache2 python-pip python-dev build-essential libssl-dev libffi-dev mongodb
- Python libraries: pymongo scapy scapy_http netifaces. Hint: run the following [script](https://github.com/dbhat/cloudlab_SABR/blob/master/server/server.sh) on the server.
- Insert metadata information into the cache MongoDB using create_mpdinfo.py. This script depends on mpd_insert.py and config_dash.py.
- Use http_capture.py to listen for HTTP requests for caching. This script sniffs HTTP GET requests and works best inside a screen session:

  ```
  python http_capture.py
  ```
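The core of such a sniffer is extracting the request line and Host header from each captured GET. Below is a self-contained sketch of that parsing step only; http_capture.py itself relies on scapy/scapy_http to capture the packets.

```python
def parse_get_request(payload: bytes):
    """Return (path, host) for a raw HTTP GET request payload,
    or None if the payload is not a GET request."""
    text = payload.decode("ascii", errors="ignore")
    if not text.startswith("GET "):
        return None
    # Request line has the form: GET <path> HTTP/1.1
    path = text.split("\r\n", 1)[0].split(" ")[1]
    host = None
    for line in text.split("\r\n")[1:]:
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
            break
    return path, host

# In the real script, scapy's sniff() would feed TCP payloads into a
# function like this, and the (path, host) pairs would drive the cache.
```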
Setup Clients
- The following dependencies must be installed:
- python-pip python-dev build-essential vim screen
- Python libraries: urllib3 httplib2 pymongo netifaces requests numpy sortedcontainers
- In our evaluation, we modify the player available here to obtain the cache map using a query of the following format in a PyMongo client:

  ```
  table.find({"server_ip": str(index), "client_ip": (str(ni.ifaddresses('eth1')[AF_INET][0]['addr'])[:10])}).sort([("_id", pymongo.DESCENDING)]).limit(1)
  ```

  where server_ip is the cache the client wishes to query and client_ip is the client's own IP address, which is required as a filter. The query returns a list of segments and qualities found on the cache, along with the available bandwidth obtained from ARIMA processing.
Note: You can issue GET requests with Mongo queries from any player of choice, since queries to MongoDB are widely supported across a range of languages.
The script automate_sabr_clab.py may be used to automate experiment runs on CloudLab using the remote login capability provided by the Python-based Paramiko library. The script mainly does the following:
a. Runs the client algorithm using AStream
b. Resets the MongoDB caches for each run or set of runs
To run this script on your machine:
a. Install the following Python libraries: numpy, scipy, paramiko, pymongo.
b. Copy your SSH keys locally and provide the login credentials in automate_sabr_clab.py.
c. Replace the server and client lists in automate_sabr_clab.py with your login information.
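Per node, the remote execution in automate_sabr_clab.py can be sketched with Paramiko as below. The hostname, username, key path, and the example command are all placeholders, not the script's actual contents.

```python
def run_on_node(host, user, key_file, command):
    """Run one command on a remote CloudLab node over SSH and return stdout."""
    import paramiko  # deferred import; requires `pip install paramiko`
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key_file)
    try:
        _, stdout, _ = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

# Hypothetical usage: clear the cachemiss collection on each server node.
# for host in ["server1", "server2"]:
#     run_on_node(host, "myuser", "/home/me/.ssh/id_rsa",
#                 "mongo sabr --eval 'db.cachemiss.drop()'")
```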
- A sample Bash script, getmultipleruns_BOLA.sh, is provided to collect the results from the CloudLab client machines. This script retrieves results from 60 clients and saves them in separate folders for parsing. You will need to update it with the login information of your CloudLab clients. For each algorithm, replace the default, BOLAO, with the name of that client algorithm.
- For parsing results and computing the QoE metrics (average quality bitrate, number of quality switches, spectrum, and rebuffering ratio), matplotlib_clab.py may be used. The script contains parsing logic for BOLAO and BOLAO with SABR; you will need to replace the BOLAO paths with those of other algorithms' results.
- Cache hit rates can be computed using the script cdf_hitratio_qual.py. The current example parses BOLAO results for the Quality-based caching case; replace the paths with other content placement result folders for the Global and Local caching cases.
- Total content requests per quality can also be obtained with cdf_hitratio_qual.py, for BOLAO and SQUAD in the Quality-based caching case. Again, replace the paths with other content placement result folders for the Global and Local caching cases.
- The script caching_CDF_SQUAD.m is used to plot CDF and CCDF graphs for the four QoE metrics (Average Quality Bitrate, Number of Quality Switches, Spectrum, and Rebuffering Ratio) for all client algorithms. The same script is modified to generate results for the various content placement algorithms: Baseline, Local, Global, and Quality-based caching.
- The script stacked_requests.m is used to create a stacked bar graph of the total number of hits for the five quality representations, for all content placement strategies.
- The script caching_hit_ratio.m is used to create a bar graph of the hit rate, i.e., (no. of requests served by caches)/(total number of requests), for the five quality representations, for all content placement strategies.