fluxory

Asynchronous high-performance distributed OpenFlow 1.3/1.5 framework in Go and Python.

Goals

  • Distributed OpenFlow framework leveraging multiple CPU cores.
  • Be faster than Ryu in terms of serialization/deserialization of messages.

Major Features

  • Distributed computing
  • Reliable queueing and asynchronous OpenFlow events notification
  • Applications are written in either Go or Python (asyncio)
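The asyncio application model above can be illustrated with a small self-contained sketch. Note this does not use the actual fluxory API: an in-memory `asyncio.Queue` stands in for the RabbitMQ broker, and the event names are hypothetical placeholders.

```python
import asyncio


async def producer(queue: asyncio.Queue) -> None:
    # Stand-in for the broker: publish a few fake OpenFlow event names.
    for event in ("ofp_hello", "ofp_features_reply", "ofp_packet_in"):
        await queue.put(event)
    await queue.put(None)  # sentinel: no more events


async def consumer(queue: asyncio.Queue, seen: list) -> None:
    # An application coroutine reacting to events as they arrive.
    while True:
        event = await queue.get()
        if event is None:
            break
        seen.append(event)


async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    seen: list = []
    await asyncio.gather(producer(queue), consumer(queue, seen))
    return seen


if __name__ == "__main__":
    print(asyncio.run(main()))
```

In the real framework the producer side would be the controller publishing to RabbitMQ; the consumption pattern in the application coroutine stays the same.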

Examples

Running the OpenFlow server (fluxory)

  • Build the binary first:

    mkdir bin
    go build -ldflags "-s -w" -o bin/fluxory

  • Bring up rabbitmq with docker-compose:

    docker-compose up -d

  • Run it:

    ./bin/fluxory

Benchmarks

The following benchmarks compare two code snippets, one in Go and one in Python, each serializing 10 million OpenFlow Hello messages:

Go:

package main

import "github.com/viniarck/fluxory/pkg/ofp15"

func main() {
	for i := 0; i < 1000*10000; i++ {
		ofp15.NewHello(ofp15.OFPP_V15).Encode()
	}
}
Python (bench.py):

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from ryu.ofproto import ofproto_v1_5_parser


def main() -> None:
    """Main function."""
    for i in range(1000 * 10000):
        ofproto_v1_5_parser.OFPHello().serialize()


if __name__ == "__main__":
    main()
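For context on what both snippets measure: a bare OpenFlow Hello message is just the 8-byte common header (version, type, length, xid, all big-endian; version 0x06 for OpenFlow 1.5, type 0 for OFPT_HELLO). A minimal hand-rolled encoding with Python's struct module, for illustration only:

```python
import struct

OFP_VERSION_15 = 0x06  # wire version for OpenFlow 1.5
OFPT_HELLO = 0         # message type for Hello


def encode_hello(xid: int = 0) -> bytes:
    # Common header: version (1B), type (1B), length (2B), xid (4B), big-endian.
    # A Hello with no hello elements is just the 8-byte header.
    return struct.pack("!BBHI", OFP_VERSION_15, OFPT_HELLO, 8, xid)


print(encode_hello(1).hex())  # → 0600000800000001
```

The benchmarked libraries do more than this (element TLVs, object allocation, validation), which is where the serialization cost differences come from.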

The benchmarks were run with hyperfine, Go 1.12, and Python 3.7.3 on my laptop. As expected, the Go code is more than 3 times faster:

❯ hyperfine './bench'
  Time (mean ± σ):      9.608 s ±  0.474 s    [User: 9.694 s, System: 0.035 s]
  Range (min … max):    9.085 s … 10.395 s    10 runs
❯ hyperfine 'python bench.py'
  Time (mean ± σ):     35.028 s ±  0.959 s    [User: 34.938 s, System: 0.021 s]
  Range (min … max):   34.015 s … 36.617 s    10 runs
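The speedup follows directly from the mean times above (using only the means, ignoring the error bars):

```python
go_mean = 9.608       # seconds, Go benchmark mean
python_mean = 35.028  # seconds, Python/Ryu benchmark mean

speedup = python_mean / go_mean
print(f"{speedup:.2f}x")  # → 3.65x
```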

About


License: Apache License 2.0


Languages

Go 100.0%