LARGE S C A L E


First installation:

Install the required packages:
$ sudo apt-get update; sudo apt-get install mysql-server libmysqlclient-dev python-dev python-virtualenv
(Set a MySQL root password when prompted.)

Run the first-install script:
$ ./first_install.sh

Install the databases:
$ cd db
$ ./install_db.sh
(This will ask for the MySQL root password configured above.)
$ cd ..

Sync the database:
$ source ./env/bin/activate
$ cd web/scalica
$ python manage.py makemigrations micro
$ python manage.py migrate
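
The migrate step above assumes that web/scalica/scalica/settings.py already points Django at the MySQL database created by install_db.sh. A minimal sketch of what that configuration typically looks like (the database name scalica is taken from the restart steps below; the user and password are placeholders, not values from this repo):

```
# scalica/settings.py (sketch) -- exact values depend on install_db.sh
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # MySQL backend built against libmysqlclient-dev
        'NAME': 'scalica',                     # database created by install_db.sh
        'USER': 'scalica_user',                # placeholder
        'PASSWORD': 'changeme',                # placeholder
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}
```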


After the first installation, run the server from the project's directory:
$ source ./env/bin/activate
$ cd web/scalica
$ python manage.py runserver

Access the site at http://localhost:8000/micro

Steps to restart the DB (see the scripted sketch after this list):
1) In web/scalica, run python manage.py flush
2) In web/scalica, run rm -rf micro/migrations/*
3) Drop the scalica database in MySQL
4) Redo the set-up steps in this README starting from ./install_db.sh
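
The same steps can be scripted. A rough sketch, assuming it is run from web/scalica inside the project's virtualenv; the helper name and everything except the commands mirrored from the list above are assumptions:

```
# reset_db.py -- hypothetical helper mirroring the manual restart steps above.
import glob
import os
import shutil
import subprocess

# 1) Flush the Django data (equivalent to: python manage.py flush).
subprocess.check_call(['python', 'manage.py', 'flush', '--noinput'])

# 2) Remove the generated migrations for the micro app
#    (equivalent to: rm -rf micro/migrations/*).
for path in glob.glob('micro/migrations/*'):
    if os.path.isdir(path):
        shutil.rmtree(path)
    else:
        os.remove(path)

# 3) Drop the scalica database; mysql prompts for the root password.
subprocess.check_call(['mysql', '-u', 'root', '-p', '-e', 'DROP DATABASE scalica;'])

# 4) Redo the set-up steps in this README starting from ./install_db.sh.
print('Database reset; re-run the set-up steps starting from ./install_db.sh')
```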

Connecting the Compute Engine instance to the Cloud SQL instance:
- https://cloud.google.com/sql/docs/mysql/connect-compute-engine
- follow the UNIX socket instructions, not the TCP socket instructions
- reload Apache: `service apache2 reload`
- check that the site files are in place:
```
cd /var/www/site/scalica
ls
```
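
With the UNIX socket instructions, Django connects through the socket the Cloud SQL proxy creates under /cloudsql/. A sketch of the corresponding settings.py change, using the instance connection name from the batch-job section below (user and password are placeholders, not values from this repo):

```
# scalica/settings.py on the Compute Engine instance (sketch, not the repo's actual settings)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'scalica',
        'USER': 'scalica_user',    # placeholder
        'PASSWORD': 'changeme',    # placeholder
        # UNIX socket created by the proxy for this instance connection name:
        'HOST': '/cloudsql/windy-watch-186102:us-central1:cora-sql',
    }
}
```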

Running the Batch Job:
To run the job, first have the database accept connections by running the following on the command line:

```
./cloud_sql_proxy -instances=windy-watch-186102:us-central1:cora-sql=tcp:3306
```

Then, run the following Python script in a virtual environment:

```
python trigger_pipeline.py
```
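
With the proxy listening on 127.0.0.1:3306, the batch job talks to the Cloud SQL instance as if it were a local MySQL server. The repository's actual trigger_pipeline.py is not reproduced here; the following is only a hypothetical sketch of that connection pattern (database name, user, password, and table name are assumptions):

```
# Hypothetical sketch -- connect to Cloud SQL through the local proxy.
import MySQLdb  # provided by the MySQL client library installed above

conn = MySQLdb.connect(
    host='127.0.0.1',      # the local cloud_sql_proxy, not the Cloud SQL IP
    port=3306,
    user='scalica_user',   # placeholder
    passwd='changeme',     # placeholder
    db='scalica',
)
cursor = conn.cursor()
cursor.execute('SELECT COUNT(*) FROM micro_post')  # hypothetical table from the micro app
print(cursor.fetchone()[0])
conn.close()
```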


The working directory should contain:
- cloud_sql_proxy (executable)
- creds.json (credentials for Cloud SQL)
- manage.py
- micro
- scalica
- utils
