
# Short Time Historic (aka. Comet)

Join the chat at https://gitter.im/telefonicaid/IoT-STH

## Introduction

The Short Time Historic (STH, aka. Comet) is a component of the FIWARE ecosystem in charge of providing aggregated time series information about the evolution in time of entity attribute values registered using the Orion Context Broker, an implementation of the publish/subscribe context management system exposing the NGSI9 and NGSI10 interfaces.

The aggregated time series information is stored in a MongoDB instance. This information can be generated in 2 main ways:

  1. The STH component can directly subscribe to the Context Broker to receive notifications when the entity attribute values change, calculating the aggregated time series information and storing it in the MongoDB instance. This option is called the minimalist option.
  2. A new sink will be enabled in the Cygnus component to calculate and update the aggregated time series information as the entity attribute values change over time. Using Cygnus adds a set of capabilities not available in the minimalist option, such as advanced filtering regarding the attributes to consider in the time series and advanced flow and congestion management, amongst others. This option is provided as a Cygnus sink just like the other data stores already supported by Cygnus, and it is the formal one.

Since both mechanisms (the formal one using Cygnus and the minimalist one using the STH component directly) update the same database, it is the responsibility of the people or software in charge of creating the needed subscriptions to avoid updating the time series database twice (i.e. to avoid enabling both mechanisms at the same time). This would happen if both mechanisms are enabled for the same attribute of the same entity.

Regarding the aggregated time series information provided by the STH component, there are 4 main concepts which are important to know about:

  • Resolution or aggregation period: The time period by which the aggregated time series information is grouped. Possible valid resolution values are: month, day, hour, minute and second.
  • Origin: For a certain resolution, it is the origin of time for which the aggregated time series information applies. For example, for a resolution of minutes, a valid origin value could be: 2015-03-01T13:00:00.000Z, meaning the 13th hour of March 1st, 2015. The origin is stored using UTC time to avoid locale issues.
  • Offset: For a certain resolution, it is the offset from the origin for which the aggregated time series information applies. For example, for a resolution of minutes and an origin 2015-03-01T13:00:00.000Z, an offset of 10 refers to the 10th minute of the concrete hour pointed to by the origin. In this example, there would be a maximum of 60 offsets from 0 to 59 corresponding to each one of the 60 minutes within the concrete hour (see the sketch after this list).
  • Samples: For a concrete triple of resolution, origin and offset, it is the number of samples, values, events or notifications available for that concrete offset from the origin.
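
As an illustration of how origin and offset relate to a concrete timestamp, the following sketch derives both for the minute resolution. It is plain JavaScript illustrating the definitions above, not code from the STH component:

```javascript
// Derive the origin and offset of a sample timestamp for the 'minute'
// resolution, following the definitions above (UTC time is used throughout).
function originAndOffset(date) {
  // The origin is the timestamp truncated to the enclosing hour...
  const origin = new Date(Date.UTC(
    date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate(),
    date.getUTCHours(), 0, 0, 0));
  // ...and the offset is the minute within that hour (0 to 59).
  const offset = date.getUTCMinutes();
  return { origin: origin.toISOString(), offset };
}

console.log(originAndOffset(new Date('2015-03-01T13:10:27.000Z')));
// → { origin: '2015-03-01T13:00:00.000Z', offset: 10 }
```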

### Why Comet?

Since most of the components which make up the FIWARE ecosystem have astronomical names, we decided to follow that path in the case of the STH too. Since the STH is in charge of collecting historical information about the values attributes took over time, we decided to name it "Comet", in reference to the tails comets leave on their way as they move.

### Consuming raw data

The STH component exposes an HTTP REST API to let external clients query the raw events (aka. raw data) from which the aggregated time series information is generated. A typical URL querying for this information using a GET request is the following:

http://localhost:8666/STH/v1/contextEntities/type/<entityType>/id/<entityId>/attributes/<attrName>?hLimit=3&hOffset=0&dateFrom=2014-02-14T00:00:00.000Z&dateTo=2014-02-14T23:59:59.999Z

The entries between "<" and ">" in the URL path depend on the concrete case (type of data, entity and attribute) being queried.

The requests can make use of the following query parameters:

  • lastN: Only the last lastN entries will be returned. It is a mandatory parameter if neither hLimit nor hOffset is provided.
  • hLimit: In case of pagination, the number of entries per page. It is a mandatory parameter if no lastN is provided.
  • hOffset: In case of pagination, the offset to apply to the requested search of raw data. It is a mandatory parameter if no lastN is provided.
  • dateFrom: The origin of time from which the raw data is desired. It is an optional parameter.
  • dateTo: The end of time until which the raw data is desired. It is an optional parameter.

An example response provided by the STH component to a request such as the previous one could be the following:

```json
{
    "contextResponses": [
        {
            "contextElement": {
                "attributes": [
                    {
                        "name": "attrName",
                        "values": [
                            {
                                "recvTime": "2014-02-14T13:43:33.306Z",
                                "attrValue": "21.28"
                            },
                            {
                                "recvTime": "2014-02-14T13:43:34.636Z",
                                "attrValue": "23.42"
                            },
                            {
                                "recvTime": "2014-02-14T13:43:35.424Z",
                                "attrValue": "22.12"
                            }
                        ]
                    }
                ],
                "id": "entityId",
                "isPattern": false
            },
            "statusCode": {
                "code": "200",
                "reasonPhrase": "OK"
            }
        }
    ]
}
```

Notice that a paginated response has been requested with a limit of 3 entries and an offset of 0 entries (first page).

It is important to note that if a valid query is made but it returns no data (for example because there is no raw data for the specified time frame), a response with code 200 is returned including an empty values property array, since it is a valid query.
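
For instance, the raw data query above can be issued programmatically from Node.js. The following is a minimal sketch assuming Node.js 18+ (for the built-in fetch), an STH instance listening on localhost:8666 and hypothetical entity and service values; adjust the Fiware-Service and Fiware-ServicePath headers (or drop them) to match your deployment:

```javascript
// Minimal sketch: query the raw data endpoint and print the returned entries.
// Run as an ES module (e.g. a .mjs file), since it uses top-level await.
const base = 'http://localhost:8666/STH/v1/contextEntities' +
  '/type/Room/id/Room1/attributes/temperature';

const params = new URLSearchParams({
  hLimit: '3',                            // 3 entries per page...
  hOffset: '0',                           // ...starting at the first page
  dateFrom: '2014-02-14T00:00:00.000Z',
  dateTo: '2014-02-14T23:59:59.999Z'
});

const response = await fetch(`${base}?${params}`, {
  // Hypothetical multi-tenancy headers; adjust or remove as needed.
  headers: { 'Fiware-Service': 'theService', 'Fiware-ServicePath': '/theServicePath' }
});
const body = await response.json();

// The raw entries live in the values array of the requested attribute.
const values = body.contextResponses[0].contextElement.attributes[0].values;
values.forEach(({ recvTime, attrValue }) => console.log(recvTime, attrValue));
```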

Top

### Consuming aggregated time series information

The STH component exposes an HTTP REST API to let external clients query this aggregated time series information. A typical URL querying for this information using a GET request is the following:

http://localhost:8666/STH/v1/contextEntities/type/<entityType>/id/<entityId>/attributes/<attrName>?aggrMethod=sum&aggrPeriod=second&dateFrom=2015-02-22T00:00:00.000Z&dateTo=2015-02-22T23:00:00.000Z

The entries between "<" and ">" in the URL path depend on the concrete case (type of data, entity and attribute) being queried.

The requests can make use of the following query parameters:

  • aggrMethod: The aggregation method. The STH component supports the following aggregation methods: max (maximum value), min (minimum value), sum (sum of all the samples) and sum2 (sum of the square values of all the samples) for numeric attribute values, and occur for attribute values of type string. Combining the information provided by these aggregation methods with the number of samples, it is possible to calculate statistical values such as the average, the variance and the standard deviation (see the sketch further below). It is a mandatory parameter.
  • aggrPeriod: Aggregation period or resolution. A concrete resolution determines the origin time format and the possible offsets. Valid resolution values are: month, day, hour, minute and second. It is a mandatory parameter.
  • dateFrom: The origin of time from which the aggregated time series information is desired. It is an optional parameter.
  • dateTo: The end of time until which the aggregated time series information is desired. It is an optional parameter.

An example response provided by the STH component to a request such as the previous one (for a numeric attribute value) could be the following:

```json
{
    "contextResponses": [
        {
            "contextElement": {
                "attributes": [
                    {
                        "name": "attrName",
                        "values": [
                            {
                                "_id": {
                                    "origin": "2015-02-18T02:46:00.000Z",
                                    "resolution": "second"
                                },
                                "points": [
                                    {
                                        "offset": 13,
                                        "samples": 1,
                                        "sum": 34.59
                                    }
                                ]
                            }
                        ]
                    }
                ],
                "id": "entityId",
                "isPattern": false
            },
            "statusCode": {
                "code": "200",
                "reasonPhrase": "OK"
            }
        }
    ]
}
```

In this example response, aggregated time series information for a resolution of seconds is returned. This information has as its origin the 46th minute of the 2nd hour of February 18th, 2015. It includes data for the 13th second, for which there is 1 sample whose sum (and, therefore, the value of that single sample) is 34.59.
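
As mentioned when describing the aggrMethod query parameter, the sum, sum2 and samples values for the same resolution, origin and offset make it possible to derive basic statistics. A minimal sketch of that calculation (plain JavaScript, independent of the STH component):

```javascript
// Derive the mean, variance and standard deviation of the samples aggregated
// at a concrete offset, given the samples count and the sum and sum2 values
// returned by aggrMethod=sum and aggrMethod=sum2 queries.
function stats(samples, sum, sum2) {
  const mean = sum / samples;
  // Population variance: E[X^2] - (E[X])^2.
  const variance = sum2 / samples - mean * mean;
  return { mean, variance, stdDev: Math.sqrt(variance) };
}

// Using the 3 raw values from the example above (21.28, 23.42 and 22.12):
console.log(stats(3, 66.82, 1490.6292));
// → mean ≈ 22.273, variance ≈ 0.775, stdDev ≈ 0.880
```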

On the other hand, if the attribute value was of type string, a query such as the following (with aggrMethod as occur) sent to the STH component:

http://localhost:8666/STH/v1/contextEntities/type/<entityType>/id/<entityId>/attributes/<attrName>?aggrMethod=occur&aggrPeriod=second&dateFrom=2015-02-22T00:00:00.000Z&dateTo=2015-02-22T23:00:00.000Z

may end up receiving the following payload as a possible response:

```json
{
    "contextResponses": [
        {
            "contextElement": {
                "attributes": [
                    {
                        "name": "attrName",
                        "values": [
                            {
                                "_id": {
                                    "origin": "2015-02-18T02:46:00.000Z",
                                    "resolution": "second"
                                },
                                "points": [
                                    {
                                        "offset": 35,
                                        "samples": 34,
                                        "occur": {
                                            "string01": 7,
                                            "string02": 4,
                                            "string03": 5,
                                            "string04": 6,
                                            "string05": 12
                                        }
                                    }
                                ]
                            }
                        ]
                    }
                ],
                "id": "entityId",
                "isPattern": false
            },
            "statusCode": {
                "code": "200",
                "reasonPhrase": "OK"
            }
        }
    ]
}
```

It is important to note that if a valid query is made but it returns no data (for example because there is no aggregated data for the specified time frame), a response with code 200 is returned including an empty values property array, since it is a valid query.

Another very important aspect is that since the strings are used as properties in the generated aggregated data, the limitations imposed by MongoDB in this regard must be followed. More concretely: "In some cases, you may wish to build a BSON object with a user-provided key. In these situations, keys will need to substitute the reserved $ and . characters. Any character is sufficient, but consider using the Unicode full width equivalents: U+FF04 (i.e. "＄") and U+FF0E (i.e. "．")". Consequently, take into consideration that if the textual values stored in the attributes for which aggregated data is being generated contain the $ or the . characters, they will be substituted by their Unicode full width equivalents: \uFF04 (＄) will be stored instead of $ and \uFF0E (．) instead of the . character.
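
The substitution can be illustrated with a couple of lines of JavaScript. This is a sketch of the behavior described above, not the STH component's actual code:

```javascript
// Replace the characters MongoDB reserves in document keys with their Unicode
// full width equivalents, as described above, so that an attribute value can
// safely be used as a property name in the aggregated data.
function escapeKey(key) {
  return key.replace(/\$/g, '\uFF04')   // $ → ＄ (U+FF04)
            .replace(/\./g, '\uFF0E');  // . → ． (U+FF0E)
}

console.log(escapeKey('temp.sensor$01')); // → 'temp．sensor＄01'
```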

Top

### Updating aggregated time series information

As already mentioned, there are 2 main ways to update the aggregated time series information associated with attributes: the so-called minimalist option and the formal one.

Regarding the formal option (based on using the Cygnus component for the updating), please refer to the documentation available at the Cygnus component repository.

The other option to update the aggregated time series information consists of directly subscribing the STH component to the Orion Context Broker, so it receives the corresponding notifications and generates and updates the aggregated data.

In the minimalist option, the STH component calculates aggregated data grouped at certain resolutions whenever it receives a notification from the Orion Context Broker. To subscribe the STH component to the Orion Context Broker so it receives the attribute values of interest, the following curl command can be used:

```bash
curl orion.contextBroker.host:1026/v1/subscribeContext -s -S \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header 'Fiware-Service: theService' \
  --header 'Fiware-ServicePath: theServicePath' \
  -d @- <<EOF
{
    "entities": [
        {
            "type": "Room",
            "isPattern": "false",
            "id": "Room1-gtv"
        }
    ],
    "attributes": [
        "temperature"
    ],
    "reference": "http://<sth.host>:<sth.port>/notify",
    "duration": "P1M",
    "notifyConditions": [
        {
            "type": "ONCHANGE",
            "condValues": [
                "temperature"
            ]
        }
    ],
    "throttling": "PT1S"
}
EOF
```

With this request, a subscription is made to an instance of the Orion Context Broker listening at orion.contextBroker.host:1026 so that it notifies the value of the temperature attribute of the Room1-gtv entity whenever it changes.

More concretely, the condValues property includes a list of attributes of the entity of interest which should be tracked for changes. If any of them changes, a new notification will be sent to the endpoint set in the reference property including the current values of the attributes specified in the attributes property of the subscription request payload. The condValues and attributes properties can include any attributes of the entity of interest (not necessarily the same in both lists) and their change and latest values will be notified accordingly.

If the list of attributes is empty, this is interpreted by the Orion Context Broker as "notify the values of all the attributes of the selected entities".

It is important to note that subscriptions expire and must be re-enabled. More concretely, the duration property sets the duration of the subscription: one month in the proposed example.

On the other hand, for the time being the STH component is only able to manage notifications in JSON format, and consequently it is very important to set the Accept header to application/json.

Last but not least, the throttling property makes it possible to control the frequency of the notifications. For this concrete example, the Orion Context Broker will send notifications separated by at least 1 second. Depending on the resolution of the aggregated data you are interested in, the throttling should be fine-tuned accordingly.

Top

## Dependencies

The STH component is a Node.js application which depends on certain Node.js modules as stated in the package.json file.

Apart from these Node.js modules, the STH component also needs a running MongoDB instance where the aggregated time series information is stored. Since the STH component uses MongoDB update operators (see http://docs.mongodb.org/v2.6/reference/operator/update/) such as $max and $min, which were introduced in version 2.6, a MongoDB instance of version 2.6 or higher is needed to store the aggregated time series information.
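
To illustrate the dependency, the following sketch shows the kind of upsert that relies on those operators, using the Node.js MongoDB driver. The collection and document layout are a simplified illustration inspired by the aggregated data examples above (one document per offset), not the STH component's actual schema:

```javascript
const { MongoClient } = require('mongodb');

// Fold a new sample into the aggregated data for a concrete resolution,
// origin and offset. The $max and $min update operators used here were
// introduced in MongoDB 2.6, hence the version requirement.
async function aggregateSample(value) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const collection = client.db('sth_orion').collection('sth_example_aggr');
    await collection.updateOne(
      { origin: new Date('2015-02-18T02:46:00.000Z'), resolution: 'second', offset: 13 },
      {
        $inc: { samples: 1, sum: value, sum2: value * value },
        $max: { max: value },   // requires MongoDB >= 2.6
        $min: { min: value }    // requires MongoDB >= 2.6
      },
      { upsert: true }
    );
  } finally {
    await client.close();
  }
}

aggregateSample(34.59).catch(console.error);
```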

Top

## Installation

  1. Clone the repository:
 git clone https://github.com/telefonicaid/fiware-sth-comet.git
  2. Get into the directory where the STH repository has been cloned:
 cd fiware-sth-comet/
  3. Install the Node.js modules and dependencies:
 npm install

The STH component server is ready to be started.

Top

## Automatic deployment using Docker

To ease the testing and deployment of the STH component, we have prepared a Docker repository which can be found at https://registry.hub.docker.com/u/fiwareiotplatform/iot-sth/, including all the information needed to try and to deploy the STH component via the execution of a simple Docker command.

Top

## Running the STH server

  1. To run the STH server, just execute:
 npm start 

The STH component provides 2 mechanisms to configure it to the concrete needs of the user:

  • Environment variables, which can be set assigning values to them or using the sth_default.conf file if a packaged version of the STH component is used.
  • The config.js file located at the root of the STH component code, a JavaScript file including the configuration properties.

It is important to note that environment variables, if set, take precedence over the properties defined in the config.js file.

On the other hand, it is also important to note that the aggregation resolutions can only be configured using the config.js file, which is consequently the preferred way to configure the STH component's behavior. The mentioned resolutions can be configured using the config.server.aggregation property in the config.js file, including the desired resolutions to be used when aggregating data. Accepted resolution values are: month, day, hour, minute and second.
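
For instance, a fragment of the config.js file restricting the aggregation to three resolutions could look as follows. This is a sketch assuming config.server.aggregation takes a list of resolutions; check the comments in the config.js file shipped with the component for the exact syntax:

```javascript
// Sketch of a config.js fragment: only aggregate data by hour, minute and
// second (accepted values: 'month', 'day', 'hour', 'minute' and 'second').
var config = {
  server: {
    aggregation: ['hour', 'minute', 'second']
    // ...rest of the server configuration properties...
  }
  // ...rest of the configuration properties...
};

module.exports = config;
```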

If you prefer using environment variables, the script accepts the following parameters:

  • STH_HOST: The host where the STH server will be started. Optional. Default value: "localhost".
  • STH_PORT: The port where the STH server will be listening. Optional. Default value: "8666".
  • LOG_LEVEL: The logging level of the messages. Messages with a level equal or superior to this will be logged. Accepted values are: "debug", "info", "warn" and "error". Optional. Default value: "info".
  • LOG_TO_CONSOLE: A flag indicating if the logs should be sent to the console. Optional. Default value: "true".
  • LOG_TO_FILE: A flag indicating if the logs should be sent to a file. Optional. Default value: "true".
  • LOG_FILE_MAX_SIZE_IN_BYTES: Maximum size in bytes of the log files. If the maximum size is reached, a new log file is created incrementing a counter used as the suffix for the log file name. Optional. Default value: "0" (no size limit).
  • LOG_DIR: The path to a directory where the log file will be searched for or created if it does not exist. Optional. Default value: "./log".
  • LOG_FILE_NAME: The name of the file where the logs will be stored. Optional. Default value: "sth_app.log".
  • PROOF_OF_LIFE_INTERVAL: The time in seconds between proof of life logging messages informing that the server is up and running normally. Default value: "60".
  • DB_PREFIX: The prefix to be added to the service for the creation of the databases. More information below. Optional. Default value: "sth_".
  • DEFAULT_SERVICE: The service to be used if not sent by the Orion Context Broker in the notifications. Optional. Default value: "orion".
  • COLLECTION_PREFIX: The prefix to be added to the collections in the databases. More information below. Optional. Default value: "sth_".
  • DEFAULT_SERVICE_PATH: The service path to be used if not sent by the Orion Context Broker in the notifications. Optional. Default value: "/".
  • POOL_SIZE: The default MongoDB pool size of database connections. Optional. Default value: "5".
  • WRITE_CONCERN: The write concern policy to apply when writing data to the MongoDB database. Default value: "1".
  • SHOULD_STORE: Flag indicating if the raw and/or aggregated data should be persisted. Valid values are: "only-raw", "only-aggregated" and "both". Default value: "both".
  • SHOULD_HASH: Flag indicating if the raw and/or aggregated data collection names should include a hash portion. This is mostly due to MongoDB's limitation regarding the number of bytes a namespace may have (currently limited to 120 bytes). In case of hashing, information about the final collection name and its correspondence to each concrete service path, entity and (if applicable) attribute is stored in a collection named COLLECTION_PREFIX + "collection_names". Default value: "false".
  • TRUNCATION_EXPIREAFTERSECONDS: Data from the raw and aggregated data collections will be removed if older than the value specified in seconds. In case of raw data the reference time is the one stored in the recvTime property whereas in the case of the aggregated data the reference of time is the one stored in the _id.origin property. Set the value to 0 not to apply this time-based truncation policy. Default value: "0".
  • TRUNCATION_SIZE: The oldest raw data (according to insertion time) will be removed if the size of the raw data collection gets bigger than the value specified in bytes. Set the value to 0 not to apply this truncation policy. Take into consideration that the "size" configuration parameter is mandatory in case size-based collection truncation is desired, as required by MongoDB. Default value: "0". Notice that this configuration parameter does not affect the aggregated data collections, since MongoDB does not currently support updating documents in capped collections which increase the size of the documents. Notice also that in the case of the raw data, the size-based truncation policy takes precedence over the TTL one. More concretely, if "size" is set, the value of "expireAfterSeconds" is ignored for the raw data collections, since MongoDB does not currently support TTL in capped collections.
  • TRUNCATION_MAX: The oldest raw data (according to insertion time) will be removed if the number of documents in the raw data collections goes beyond the specified value. Set the value to 0 not to apply this truncation policy. Default value: "0". Notice that this configuration parameter does not affect the aggregated data collections since MongoDB does not currently support updating documents in capped collections which increase the size of the documents.
  • DB_USERNAME: The username to use for the database connection. Optional. Default value: "".
  • DB_PASSWORD: The password to use for the database connection. Optional. Default value: "".
  • DB_URI: The URI to use for the database connection. This does not include the 'mongodb://' protocol part (see a couple of examples below). Optional. Default value: "localhost:27017".
  • REPLICA_SET: The name of the replica set to connect to, if any. Default value: "".
  • FILTER_OUT_EMPTY: A flag indicating if the empty results should be removed from the response. Optional. Default value: "true".

For example, to start the STH server listening on port 7777, connecting to a MongoDB instance listening on mymongo.com:27777 and without filtering out the empty results, use:

 STH_PORT=7777 DB_URI=mymongo.com:27777 FILTER_OUT_EMPTY=false npm start

On the other hand, in case of connecting to a MongoDB replica set composed of 3 machines with IP addresses 1.1.1.1, 1.1.1.2 and 1.1.1.3 listening on ports 27771, 27772 and 27773, respectively, use:

 DB_URI=1.1.1.1:27771,1.1.1.2:27772,1.1.1.3:27773 npm start

The STH component creates a new database for each service. The name of these databases will be the concatenation of the DB_PREFIX environment variable and the service, using an underscore ("_") as the separator.

As already mentioned, all these configuration parameters can also be adjusted using the config.js file, whose contents are self-explanatory.

It is important to note that there is a limitation of 120 bytes for the namespaces (concatenation of the database name and collection names) in MongoDB (see http://docs.mongodb.org/manual/reference/limits/#namespaces for further information). Related to this, the STH generates the collection names using 2 possible mechanisms:

  1. Plain text: In case the SHOULD_HASH configuration parameter is set to 'false' (the default option), the collection names are generated as a concatenation of the COLLECTION_PREFIX plus the service path plus the entity id plus the entity type plus '.aggr' for the collections storing the aggregated data. The length of the collection name plus the DB_PREFIX plus the database name (or service) should not be more than 120 bytes using UTF-8 format or MongoDB will complain and will not create the collection, and consequently no data would be stored by the STH. A warning message is logged in case this happens.

  2. Hash based: In case the SHOULD_HASH option is set to anything other than 'false', the collection names are generated as a concatenation of the COLLECTION_PREFIX plus a generated hash plus '.aggr' for the collections of the aggregated data. The hash function used is SHA-512 and, to avoid collisions in the generation of these hashes, they are forced to be at least 20 bytes long. Once again, the length of the collection name plus the DB_PREFIX plus the database name (or service) should not be more than 120 bytes using UTF-8, or MongoDB will complain and will not create the collection, and consequently no data would be stored by the STH. A warning message is logged in case this happens.

In case of using hashes as part of the collection names, and to let the user or developer easily recover this information, a collection named COLLECTION_PREFIX + "collection_names" is created and fed with information regarding the mapping between the collection names and the combination of concrete services, service paths, entities and attributes.
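
The 120-byte namespace limit can be checked up front. The following sketch builds a plain-text aggregated data collection name as described above and verifies the resulting namespace length; the '_' separators between the name components are an assumption for illustration purposes:

```javascript
// Build a plain-text aggregated data collection name (COLLECTION_PREFIX plus
// service path plus entity id plus entity type plus '.aggr') and check that
// the full namespace (database name + '.' + collection name) fits MongoDB's
// 120-byte limit. The '_' separators are illustrative assumptions.
function namespaceFits(dbPrefix, service, collectionPrefix, servicePath, entityId, entityType) {
  const database = dbPrefix + service;
  const collection = collectionPrefix +
    [servicePath, entityId, entityType].join('_') + '.aggr';
  const namespace = database + '.' + collection;
  return Buffer.byteLength(namespace, 'utf8') <= 120;
}

console.log(namespaceFits('sth_', 'orion', 'sth_', '/theServicePath', 'Room1', 'Room'));
// → true (this namespace is well below the 120-byte limit)
```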

Top

## Inserting data (random single events and their aggregated data) into the database

The STH component source code includes a set of tests to validate the correct functioning of the component. Amongst these tests, there is a suite to validate the insertion of aggregated time series information into the MongoDB instance.

Preconditions

A running instance of a MongoDB database.

Running the tests

  1. To run the tests, just execute:
 make test-database 

The script accepts the following parameters as environment variables:

  • SAMPLES: The number of random events which will be generated and inserted into the database. Optional. Default value: "5".
  • ENTITY_ID: The id of the entity for which the random event will be generated. Optional. Default value: "entityId".
  • ENTITY_TYPE: The type of the entity for which the random event will be generated. Optional. Default value: "entityType".
  • ATTRIBUTE_NAME: The name of the attribute for which the random event will be generated. Optional. Default value: "attrName".
  • ATTRIBUTE_TYPE: The type of the attribute for which the random event will be generated. Optional. Default value: "attrType".
  • START_DATE: The date from which the random events will be generated. Optional. Default value: the beginning of the previous year, to avoid collisions with the testing of the Orion Context Broker notifications, which uses the current time. For example, if run in 2015, the start date is set to "2015-01-01T00:00:00", UTC time. Be very careful if setting the start date, since these collisions may arise.
  • END_DATE: The date before which the random events will be generated. Optional. Default value: the end of the previous year, to avoid collisions with the testing of the Orion Context Broker notifications, which uses the current time. For example, if run in 2015, the end date is set to "2014-12-31T23:59:59", UTC time. Be very careful if setting the end date, since these collisions may arise.
  • MIN_VALUE: The minimum value associated to the random events. Optional. Default value: "0".
  • MAX_VALUE: The maximum value associated to the random events. Optional. Default value: "100".
  • DB_USERNAME: The username to use for the database connection. Optional. Default value: "".
  • DB_PASSWORD: The password to use for the database connection. Optional. Default value: "".
  • DB_URI: The URI to use for the database connection. This does not include the 'mongodb://' protocol part. Optional. Default value: "localhost:27017".
  • DB_NAME: The name of the database to use. Optional. Default value: "test".
  • CLEAN: A flag indicating if the generated collections should be removed after the tests. Optional. Default value: "true".

For example, to insert 100 samples on a certain date without cleaning up the database after running the tests, use:

SAMPLES=100 START_DATE=2015-02-14T00:00:00 END_DATE=2015-02-14T23:59:59 CLEAN=false make test-database

In case of executing the tests with the CLEAN option set to false, the contents of the database can be inspected using the MongoDB (mongo) shell.
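
For instance, in the mongo shell (the database name below is the DB_NAME default; the exact collection names depend on the naming scheme described in the previous section, so list them first):

```javascript
// Mongo shell sketch: inspect the data left behind by the tests (CLEAN=false).
db = db.getSiblingDB('test');  // DB_NAME defaults to "test"
db.getCollectionNames();       // list the generated collections
// Then query any of the listed collections (the name below is illustrative):
// db.getCollection('sth_/_entityId_entityType').find().pretty();
```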

Top

## STH component complete test coverage

The STH component source code includes a set of tests to validate the correct functioning of the whole set of capabilities exposed by the component. This set includes:

  • Tests to check the connection to the database
  • Tests to check the correct starting of the STH component
  • Tests to check the STH component correctly deals with all the possible requests it may receive (including invalid URL paths (routes) as well as all the combinations of possible query parameters)
  • Tests to check the correct aggregated time series information querying after inserting random events (attribute values) into the database
  • Tests to check the correct aggregated time series information generation when receiving (simulated) notifications from a (fake) Orion Context Broker

Preconditions

A running instance of a MongoDB database.

Running the tests

  1. To run the tests, just execute:
 make test 

The script accepts the following parameters as environment variables:

  • SAMPLES: The number of random events which will be generated and inserted into the database. Optional. Default value: "5".
  • ENTITY_ID: The id of the entity for which the random event will be generated. Optional. Default value: "entityId".
  • ENTITY_TYPE: The type of the entity for which the random event will be generated. Optional. Default value: "entityType".
  • ATTRIBUTE_NAME: The name of the attribute for which the random event will be generated. Optional. Default value: "attrName".
  • ATTRIBUTE_TYPE: The type of the attribute for which the random event will be generated. Optional. Default value: "attrType".
  • START_DATE: The date from which the random events will be generated. Optional. Default value: the beginning of the previous year, to avoid collisions with the testing of the Orion Context Broker notifications, which uses the current time. For example, if run in 2015, the start date is set to "2015-01-01T00:00:00", UTC time. Be very careful if setting the start date, since these collisions may arise.
  • END_DATE: The date before which the random events will be generated. Optional. Default value: the end of the previous year, to avoid collisions with the testing of the Orion Context Broker notifications, which uses the current time. For example, if run in 2015, the end date is set to "2014-12-31T23:59:59", UTC time. Be very careful if setting the end date, since these collisions may arise.
  • MIN_VALUE: The minimum value associated to the random events. Optional. Default value: "0".
  • MAX_VALUE: The maximum value associated to the random events. Optional. Default value: "100".
  • DB_USERNAME: The username to use for the database connection. Optional. Default value: "".
  • DB_PASSWORD: The password to use for the database connection. Optional. Default value: "".
  • DB_URI: The URI to use for the database connection. This does not include the 'mongodb://' protocol part. Optional. Default value: "localhost:27017".
  • DB_NAME: The name of the database to use. Optional. Default value: "test".
  • CLEAN: A flag indicating if the generated collections should be removed after the tests. Optional. Default value: "true".

For example, to run the tests using 100 samples and certain start and end dates, without cleaning up the database after running the tests, use:

SAMPLES=100 START_DATE=2014-02-14T00:00:00 END_DATE=2014-02-14T23:59:59 CLEAN=false make test

In case of executing the tests with the CLEAN option set to false, the contents of the database can be inspected using the MongoDB (mongo) shell.

Top

## Performance tests

The Performance tests section of the repository includes information to run performance tests on the STH component. If you are interested in them, please navigate to that section of the repository for further information.

Top

## Additional resources

The Additional resources section of the repository includes some scripts and utilities which may make the developer's life easier. If you are interested in them, please navigate to that section of the repository for further information.

Top

## How to contribute

Would you like to contribute to the project? This is how you can do it:

  1. Fork this repository by clicking on the "Fork" button on the upper-right area of the page.
  2. Clone your just forked repository:
git clone https://github.com/your-github-username/fiware-sth-comet.git
  3. Add the main fiware-sth-comet repository as a remote to your forked repository (use any name for your remote repository, it does not have to be fiware-sth-comet, although we will use it in the next steps):
git remote add fiware-sth-comet https://github.com/telefonicaid/fiware-sth-comet.git
  4. Synchronize the develop branch in your forked repository with the develop branch in the main fiware-sth-comet repository:

    (step 4.1, just in case you were not in the develop branch yet)
    git checkout develop
    (step 4.2)
    git fetch fiware-sth-comet
    (step 4.3)
    git rebase fiware-sth-comet/develop
  5. Create a new local branch for your new code (currently we use the prefixes feature/ for new features, task/ for maintenance and documentation issues and bug/ for bugs):
git checkout -b feature/some-new-feature
  6. Include your changes and create the corresponding commits.
  7. To ensure that your code will land nicely, repeat steps 4.2 and 4.3 from your feature/some-new-feature branch to synchronize it with the latest code which may have landed in the develop branch of the main fiware-sth-comet repository during your implementation.
  8. Push your code to your forked repository hosted on GitHub:
git push origin feature/some-new-feature
  9. Launch a new pull request from your forked repository to the develop branch of the main fiware-sth-comet repository. You may find the active pull requests at https://github.com/telefonicaid/fiware-sth-comet/pulls.
  10. Assign the pull request to any of the main fiware-sth-comet developers (currently, @gtorodelvalle or @frbattid) for review.
  11. After the review process is successfully completed, your code will land into the develop branch of the main fiware-sth-comet repository. Congratulations!!!

For additional contributions, just repeat these steps from step 4 on.

To further guide you through your first contributions, we have created the label mentored, which is assigned to those bugs and issues simple and interesting enough to be solved by people new to the project. Feel free to assign any of them to yourself and do not hesitate to mention any of the main developers (that is, @gtorodelvalle or @frbattid) in the issue's comments to get help from them during its resolution. They will be glad to help you.

Top

## Contact

Top
