fabric8-analytics-worker

fabric8-analytics worker for gathering raw data

Fabric8-Analytics Core library and services

This library provides the basic infrastructure for developing services, together with concrete implementations of those services.

The following libraries are provided:

  • Database abstraction
  • Task Queue Worker/Node abstraction
  • Utilities
    • File tree walker with filtering
    • One-to-many dictionary
    • Shell command wrapper with timeout support (see the sketch below)
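
As a rough illustration of that last utility, here is a minimal sketch of a shell command wrapper with timeout support, built only on the Python standard library. This is a concept sketch, not the library's actual API; the function name, defaults, and example command are made up for the illustration.

import subprocess

def run_with_timeout(command, timeout_seconds=300):
    """Run a command, killing it if it exceeds the timeout.

    Illustrative sketch only; not the wrapper shipped in this library.
    """
    try:
        result = subprocess.run(
            command,
            capture_output=True,
            text=True,
            timeout=timeout_seconds,
        )
        return result.returncode, result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        return None, '', 'command timed out after {}s'.format(timeout_seconds)

# Hypothetical usage:
# status, out, err = run_with_timeout(['mvn', 'dependency:tree'], timeout_seconds=60)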

See workers/README.md for a listing of the concrete services.

Contributing

See our contributing guidelines for more info.

Running worker environment with docker-compose

There are two sets of workers: API and ingestion. API workers serve requests passed from the API endpoint, while ingestion workers handle background data ingestion. To run them, use:

$ docker-compose up worker-api worker-ingestion

Running the tests locally

Docker based API testing

Run the tests in a container using the helper script:

$ ./runtests.sh

(The above command assumes you have passwordless docker invocation configured; if you don't, you will need to run it with sudo.)

If you're changing dependencies rather than just editing source code locally, the images need to be rebuilt when invoking runtests.sh. Set the environment variable REBUILD=1 to request an image rebuild.

If the offline virtualenv-based tests have been run, this may complain about mismatched locations in compiled files. Those can be deleted using:

$ find . -name '*.pyc' -delete

NOTE: Running the container based tests is likely to cause any already running local Fabric8-Analytics instance launched via Docker Compose to fall over due to changes in the SELinux labels on mounted volumes, and may also cause spurious test failures.

Virtualenv based offline testing

Test cases marked with pytest.mark.offline may be executed without having a Docker daemon running locally.
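
If you do not yet have a suitable virtualenv, one can be created and activated first; a minimal example using Python 3's standard venv module (the directory name f8a-worker simply matches the example below):

$ python3 -m venv f8a-worker
$ source f8a-worker/bin/activate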

To configure a virtualenv (called f8a-worker in the example) to run these tests:

(f8a-worker) $ python -m pip install -r requirements.txt
(f8a-worker) $ python -m pip install -r tests/requirements.txt

The marked offline tests can then be run as:

(f8a-worker) $ py.test -m offline tests/
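
For reference, an offline test case is just an ordinary pytest test function carrying the offline marker. A minimal sketch (the test name and body are hypothetical, not taken from this repository):

import pytest

@pytest.mark.offline
def test_runs_without_docker():
    # Hypothetical example; the real offline tests live under tests/
    # and must not require a running Docker daemon.
    assert 1 + 1 == 2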

If the Docker container-based tests have been run, this may complain about mismatched locations in compiled files. Those can be deleted using:

(f8a-worker) $ sudo find . -name '*.pyc' -delete

Some tips for running tests locally

Reusing an existing virtualenv for multiple test runs

When a virtualenv is already set up, you can run the tests like so:

source /path/to/python_env/bin/activate
NOVENV=1 ./runtests.sh

This will not create a virtualenv every time.

Forcing image builds while testing

When changes are made to code that affects the Docker image, it is a good idea to rebuild the images locally for testing. The rebuild can be forced like so:

REBUILD=1 ./runtests.sh

Coding standards

  • You can use the scripts run-linter.sh and check-docstyle.sh to check whether the code follows the PEP 8 and PEP 257 coding standards. These scripts can be run w/o any arguments:
./run-linter.sh
./check-docstyle.sh

The first script checks indentation, line lengths, variable names, whitespace around operators, etc. The second script checks all documentation strings for their presence and format. Please fix any warnings and errors reported by these scripts.

Code complexity measurement

The scripts measure-cyclomatic-complexity.sh and measure-maintainability-index.sh are used to measure code complexity. These scripts can be run w/o any arguments:

./measure-cyclomatic-complexity.sh
./measure-maintainability-index.sh

The first script measures the cyclomatic complexity of all Python sources found in the repository. Please see this table for further explanation of how to interpret the results.

The second script measures the maintainability index of all Python sources found in the repository. Please see the following link for an explanation of this measurement.

You can specify the command line option --fail-on-error if you need to check and use the exit code in your workflow. In this case the script returns 0 when no failures have been found, and a non-zero value otherwise.
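
For example, to make a CI step fail when a measurement reports a problem, the option can be passed directly to either script:

./measure-maintainability-index.sh --fail-on-error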

Dead code detection

The script detect-dead-code.sh can be used to detect dead code in the repository. This script can be run w/o any arguments:

./detect-dead-code.sh

Please note that due to Python's dynamic nature, static code analyzers are likely to miss some dead code. Also, code that is only called implicitly may be reported as unused.

Because of these potential problems, only code detected with more than 90% confidence is reported.

Common issues detection

The script detect-common-errors.sh can be used to detect common errors in the repository. This script can be run w/o any arguments:

./detect-common-errors.sh

Please note that only semantic problems are reported.

Check for scripts written in BASH

The script named check-bashscripts.sh can be used to check all BASH scripts (in fact: all files with the .sh extension) for various possible issues, incompatibilities, and caveats. This script can be run w/o any arguments:

./check-bashscripts.sh

Please see the following link for further explanation of how ShellCheck works and which issues can be detected.

License

GNU General Public License v3.0