A tool to compute all the intermediate stops (geometry and time range) between each trip of a GTFS feed, in order to emulate their travel
You can see a visualization of the results here
pyenv local 3.10.7
poetry env use 3.10.7
poetry install
Each block is used to compute all the moving stops from the specified GTFS:
- Set the name of your GTFS, without spaces ("study_area_name")
- Copy your data into the input directory, e.g. 'input_data' ("input_data_dir")
- Select the transport modes to compute, as a list ("transport_modes")
- Select the date filter mode: based on "calendar_dates.txt" or "calendar.txt" ("date_mode")
- Define the date to compute ("date")
- Set whether you want to build the "shapes.txt" file ("build_shape_id")
- Set the interpolation threshold: 100 = 1 node interpolated every 100 meters ("interpolation_threshold")
- Set whether you want to process data in multiprocessing mode. Useful only if you have a lot of trips! ("multiprocess")
[
  {
    "study_area_name": "your_area_name",
    "input_data_dir": "input_data",
    "transport_modes": ["train"],
    "date_mode": "calendar_dates",
    "date": "20211125",
    "build_shape_id": true,
    "interpolation_threshold": 3500,
    "multiprocess": false
  },
  {
    "study_area_name": "your_area_name_2",
    "input_data_dir": "input_data_2",
    "transport_modes": ["tramway", "metro"],
    "date_mode": "calendar",
    "date": "20191115",
    "build_shape_id": false,
    "interpolation_threshold": 1000,
    "multiprocess": true
  }
]
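The "interpolation_threshold" behavior can be illustrated with a minimal, dependency-free sketch (not the tool's actual code, and assuming a metric coordinate system): a threshold of 100 means one node is interpolated every 100 meters along a trip's geometry.

```python
import math

def interpolate_nodes(points, threshold):
    """Emit a node every `threshold` meters along the polyline `points`."""
    nodes = [points[0]]
    carry = 0.0  # distance covered since the last emitted node
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        seg_len = math.hypot(x2 - x1, y2 - y1)
        d = threshold - carry  # distance into this segment of the next node
        while d <= seg_len:
            t = d / seg_len
            nodes.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
            d += threshold
        carry = (carry + seg_len) % threshold
    return nodes

# A 350 m straight segment with a 100 m threshold yields nodes
# at 0, 100, 200 and 300 meters.
print(interpolate_nodes([(0, 0), (0, 350)], 100))
```

A larger threshold (like 3500 in the first config block) produces far fewer nodes, trading positional smoothness for a smaller output.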
Copy the 'inputs_attrs.json' file into your data directory (e.g. 'input_data'). This file is used to map column types for each GTFS file.
Now compute your data:
python db_run.py
You'll get 3 Parquet files which can be used with TODO:
- [study_area_name]_moving_stops.parq:
Contains all the interpolated stops
- [study_area_name]_base_stops_data.parq
Ongoing... [TODO]
- [study_area_name]_base_lines_data.parq
Ongoing... [TODO]