zhgoh / uniswap_transaction_service

Uniswap transaction prices backend system

This is a proof-of-concept backend service written in Go that fetches Uniswap transactions from Etherscan's API and computes prices for those transactions using Binance's API.

Prerequisites

The software is written in Go and requires the following software to run.

Locally

For deploying locally, the only requirement is the Go compiler.

  • Go (tested against 1.18.1)
  • Make (optional, if using the makefile)

Docker (on Ubuntu 20.04.4 LTS)

For deploying with Docker, the system needs Docker and Docker Compose.

  • Docker Compose (tested against 1.29.2)
  • Docker (tested against 20.10.14, build a224086)
  • Make (optional, if using the makefile)

Features

  • Background polling for Uniswap transactions from Etherscan, with prices computed using Binance’s API.
  • Live transaction prices are computed with Binance’s order book API.
  • Batch jobs pull historical transactions, with prices computed from Binance’s kline API.
  • An HTTP REST API supports starting batch jobs and getting a transaction by hash.
  • Transactions are stored in an SQLite database.
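
As a rough illustration of the data these features revolve around, a stored record might look like the sketch below. The type and field names are assumptions for illustration only; the actual struct and SQLite schema live in the repo.

package store

import "time"

// Transaction is a hypothetical shape for one stored record.
type Transaction struct {
	Hash      string    // Ethereum transaction hash, used for lookups
	Timestamp time.Time // block timestamp reported by Etherscan
	Price     float64   // price computed from Binance data
}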

Quick start

Clone project

# Download project
$ git clone https://gitlab.com/zhgoh/uniswap_transaction_service

# Change into the project directory
$ cd uniswap_transaction_service

Setting up environment variables

# For running locally, export the env variable (on Unix-like systems); on Windows, add it via the environment variable editor instead.
$ export etherscan_api=<your api key>

# For running in docker, create .env file for docker-compose to use
# Example .env file
# ETHERSCAN_API=123123123MYAPIKEY123123
$ nano web/.env # Enter the API key and save

Running locally

With Make

# See all make rules
$ make

# To run the server, run
$ make run

# To test the server, run
$ make test

# To build the server without running, run
$ make build

Without Make

$ cd web

# To run the server, run
$ go run .

# To test the server, run
$ go test .

# To build the server without running, run
$ go build .

Running with Docker

# Run with docker
$ cd web

# With make
$ make docker_run

# or just plain old docker-compose
$ docker-compose up

Testing the service

The backend currently listens on port 5050 by default (both locally and in Docker). Once it is running, point a browser at localhost:5050, or test it with curl using the following command:

$ curl localhost:5050

It should return the list of endpoints that can be called.

To get a transaction by hash that the backend currently stores:

# Replace the hash with the one you need
$ curl --request GET \
    --url 'http://localhost:5050/transactions?hash=0xa893b598641afe65ba380c1fec2a3cc19320146b0324909d4aeebed705587901'

To start a batch job over a given time range:

$ curl --request PUT \
    --url http://localhost:5050/batch \
    --header 'Content-Type: application/json' \
    --data '{
                "start": "2022-04-23T05:55:10.770Z",
                "end": "2022-04-23T05:56:10.770Z"
           }'

To get all transactions that the backend currently stores:

$ curl --request GET \
    --url http://localhost:5050/transactions/all

Design decisions

Programming Language

At the beginning, Python was considered for this proof-of-concept, as it is an easy-to-use language with numerous libraries. However, I was thinking about how to let the backend serve its endpoints while pulling data in the background. There are several ways to solve this, but as I am more familiar with Go at this point, I knew I could simply use a goroutine to fetch the live transactions while serving the backend.

Database

There was no database in use while I was building the initial POC; I intended to add an SQLite backend later on. I chose SQLite because it is highly performant and lightweight (in terms of CPU/memory usage) compared to MySQL or PostgreSQL, though that might change if there is a need to store more data.

Libraries

I chose to use minimal libraries for this POC. The only external library in use is an SQLite driver that does not require cgo. Naturally, I could have gone with a cgo implementation, which is faster (it uses the C bindings), instead of the pure-Go version; however, for testing and POC purposes I decided to use the pure-Go version. As I am going through the database/sql interface provided by Go, it would be relatively easy to switch out the driver implementation.
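
As a minimal sketch of that setup: the README does not name the exact driver, but modernc.org/sqlite is one widely used cgo-free option, and everything below goes through the standard database/sql interface, so the driver could be swapped without touching the queries. The schema is hypothetical.

package main

import (
	"database/sql"
	"log"

	// One widely used cgo-free SQLite driver; an assumption here,
	// since the README does not name the driver the repo uses.
	_ "modernc.org/sqlite"
)

func main() {
	// "sqlite" is the driver name registered by modernc.org/sqlite.
	db, err := sql.Open("sqlite", "transactions.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Hypothetical schema, for illustration only.
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS transactions (
		hash TEXT PRIMARY KEY, ts INTEGER, price REAL)`)
	if err != nil {
		log.Fatal(err)
	}
}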

As for the Etherscan and Binance APIs, I decided to write my own wrappers because I wanted to learn and understand more about the APIs, and I also enjoyed thinking about and finding out how to get the prices for the transactions optimally. In the end, I created my own wrapper covering only the API calls that I need.
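
A thin wrapper in this spirit can be a single helper around net/http and encoding/json. This is a sketch of the general shape, not the repo's actual code; the example call uses Binance's public kline endpoint.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// get fetches a URL and decodes the JSON response body into out.
func get(url string, out interface{}) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	// Binance returns each kline as a JSON array of mixed types.
	var klines [][]interface{}
	err := get("https://api.binance.com/api/v3/klines?symbol=ETHUSDT&interval=1m&limit=2", &klines)
	fmt.Println(len(klines), err)
}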

General flow

When the backend is started, it starts a goroutine in the background to fetch the live transactions from Etherscan while the main service serves the endpoints. The endpoints cover fetching a transaction that is already stored on the service as well as querying all the transactions on the service. One can also specify a duration to trigger a batch job that fetches the transactions during that time frame. The main challenge came in processing the batch job, described below.

The program can largely be thought of in terms of the following functionality.

Background polling of live transactions

A goroutine started at the beginning of the program fetches live transaction data from Etherscan and computes prices using Binance’s order book API. These are stored and can be fetched via the REST API.
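
A minimal sketch of that loop, with fetchTransactions and priceAndStore as hypothetical stand-ins for the repo's actual Etherscan and Binance/SQLite code:

package main

import (
	"log"
	"time"
)

type tx struct{ Hash string }

// Hypothetical stubs; the real code calls Etherscan here and then
// prices the results against Binance's order book before storing them.
func fetchTransactions() ([]tx, error) { return nil, nil }
func priceAndStore(txs []tx)           {}

// pollLive runs in its own goroutine and polls on a fixed interval.
func pollLive(interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		txs, err := fetchTransactions()
		if err != nil {
			log.Println("poll:", err)
			continue
		}
		priceAndStore(txs)
	}
}

func main() {
	go pollLive(10 * time.Second) // interval is an assumption
	select {}                     // stand-in for the real server loop
}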

Web Server

The main routine serves the various API endpoints for getting transaction prices as well as starting batch jobs.
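
Wiring up the endpoints from the Quick start section with the standard library might look roughly like this; the handler bodies are placeholders, not the repo's code, and the port is the README's default of 5050.

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// GET /transactions?hash=0x... returns one stored transaction.
	http.HandleFunc("/transactions", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "lookup %s\n", r.URL.Query().Get("hash"))
	})
	// GET /transactions/all returns everything currently stored.
	http.HandleFunc("/transactions/all", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "all stored transactions")
	})
	// PUT /batch with a JSON {"start": ..., "end": ...} body
	// triggers the historical batch job described below.
	http.HandleFunc("/batch", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":5050", nil))
}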

Batch job

I wanted to achieve the following result: given a start and end time, fetch the transactions in that window and get the closest price for each instant. This meant I wanted finer-grained data, hence I decided to go with Binance’s kline API. Implementing this got a bit tricky due to a few factors:

  1. Etherscan’s API does not allow me to get transactions by date/time (this might need more research).
  2. Binance’s kline API returns at most 1000 results per request.
  3. A batch window might span more klines than one request returns, so more data may need to be pulled to compute the closest price.

To solve these, this is what I did:

  1. Use another API (get closest block by timestamp) to get the closest start and end blocks, and use those to fetch the transactions.
  2. Pull kline data several times until the whole timeframe is covered.
  3. Fit as much kline data into the timeframe as possible to check against the transactions.

For this part, I had to get a little creative to find the closest price for each transaction based on its timestamp and the kline closing data; a sketch of this matching step follows. For this batch job, I worked with 1-minute kline data to be as precise as possible. I admit I could tone it down to daily prices instead, which are not as accurate but close enough, given that the ETH price is relatively stable intraday.
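
A minimal sketch of the matching step, assuming the klines for the window are already fetched; the kline type and function name here are illustrative, not the repo's:

package main

import (
	"fmt"
	"time"
)

// kline holds the two fields this sketch needs from one 1-minute candle.
type kline struct {
	OpenTime time.Time
	Close    float64
}

// closestPrice returns the close of the kline whose open time is
// nearest to the transaction timestamp ts.
func closestPrice(ts time.Time, klines []kline) float64 {
	best, bestDiff := 0.0, time.Duration(1<<62)
	for _, k := range klines {
		diff := ts.Sub(k.OpenTime)
		if diff < 0 {
			diff = -diff
		}
		if diff < bestDiff {
			best, bestDiff = k.Close, diff
		}
	}
	return best
}

func main() {
	now := time.Now()
	ks := []kline{{now.Add(-time.Minute), 3000.5}, {now, 3001.0}}
	fmt.Println(closestPrice(now.Add(-40*time.Second), ks)) // prints 3000.5
}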

Thin wrappers for the Etherscan and Binance APIs

Generic code to handle getting data from the various data sources.

Documents

  • Find the Swagger document in docs/swagger.yaml.
  • To read more notes that I have written, refer to docs/notes.org.
  • insomnia.json contains the environment used for testing this POC.

About

License: MIT License

