Machine readable and standardized data sources for use in the Elastic Maps Service.

Create a new JSON or Hjson file in the appropriate folder in `sources`. The source file must match the schema in `schema/source_schema.json`.
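
For orientation, a hypothetical minimal source file is sketched below. All names and values here (the `finland_regions` layer, the file names, the `id`, and the `createdAt` timestamp) are invented for illustration; `templates/source_template.hjson` and `schema/source_schema.json` remain the authoritative references for the required structure.

```hjson
{
  # Hypothetical source definition; all values are illustrative only.
  note: Regions of Finland derived from Wikidata
  name: finland_regions
  # 17 digit number (the steps below describe one way to generate it)
  id: 15822880000000000
  # UTC timestamp with microsecond precision
  createdAt: "2019-06-01T12:00:00.000000"
  filename: finland_regions_v1.geo.json
  emsFormats: [
    {
      type: geojson
      file: finland_regions_v1.geo.json
      default: true
    }
  ]
  # Other fields, such as the SPARQL query and the human readable
  # names, are omitted here; see the schema for the full list.
}
```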
To validate data sources against the schema, run:

```
yarn test
```

Setting the environment variable `EMS_STRICT_TEST` will perform an additional check to ensure all field definitions are present in all features:

```
EMS_STRICT_TEST=ok yarn test
```

To build manifests and vector data files for all versions, run:

```
yarn build
```
- New feature layers can be developed on the `feature-layers` branch (`git checkout --track upstream/feature-layers`). Jenkins will build and deploy all commits to this branch into a testing bucket on GCP. Test feature layers from this branch in Kibana by adding `map.manifestServiceUrl: http://storage.googleapis.com/elastic-bekitzur-emsfiles-catalogue-dev/v7.2/manifest` to `config/kibana.yml`.
- Pull requests for new feature layers should be made from the `feature-layers` branch against the `master` branch. Pull requests for any other changes should be made on a new branch in your fork, e.g. `git checkout -b my-bugfix`.
- Once merged, Jenkins will run the `deployStaging.sh` script, which places the contents of the `dist` directory into the staging bucket.
- Deploying to production requires manually triggering this Jenkins job to run the `deployProduction.sh` script, which rsyncs files from the staging bucket to the production bucket. To trigger it, log in and click the "Build with Parameters" link. Leave the `branch_specifier` field at its default (`refs/heads/master`).
Whenever possible, new vector layers should be created using a SPARQL query in Sophox.
1. Check out the upstream `feature-layers` branch.
2. If necessary, create a new folder in the `sources` directory with the corresponding two-digit country code (ex. `ru` for Russia).
3. Copy and paste the template source file (`templates/source_template.hjson`) into the new directory you created in step 2. Give it a useful name (ex. `states.hjson`, `provinces.hjson`, etc.).
4. Complete the `note` and `name` fields in the new source file.
5. Copy and paste the `query.sparql` value into the query box on http://sophox.org.
6. Change the `Q33` in the `VALUES ?entity { wd:Q33 }` clause to the corresponding Wikidata ID for the country for which you are adding subdivisions (ex. `Q33` is the Wikidata ID for Finland).
7. Run the SPARQL query and compare the `iso_3166_2` results with the corresponding country's subdivision list on the ISO website, looking for missing `iso_3166_2` codes.
8. The most common reason for missing `iso_3166_2` codes in the query results is an incomplete "contains administrative territorial entity" property on the immediate parent region of the subdivision in Wikidata (usually, but not always, the country). You may need to add the subdivision Wikidata item to this property (ex. https://www.wikidata.org/wiki/Q33#P150).
9. Add `label_*` fields for each official language of the country to the SPARQL query, similar to the `label_en` field.
10. Optionally, add unique subdivision code fields from other sources (ex. `logainm` in Ireland) to the query.
11. Run the SPARQL query and check the map output.
12. Optionally, click the "Simplify" link and drag the slider to reduce the number of vertices (smaller file size).
13. Click the "Export" link on the top right of the map. Choose GeoJSON or TopoJSON as the File Format.
14. Type `rfc7946` in the "command line options" to reduce the precision of the coordinates and click "Export" to download the vector file.
15. Rename the downloaded file with the first supported EMS version number (ex. `_v1`, `_v2`, `_v6.6`) and the vector type (`geo` for GeoJSON, `topo` for TopoJSON) (ex. `russia_states_v1.geo.json`). Copy this file to the `data` directory.
16. Complete the `emsFormats` properties: `type` is either `geojson` or `topojson`, `file` is the filename specified above, and `default` is `true` when there is only one format. Subsequent formats can be added, but only one item in the array can have `default: true`; the other items must be `default: false` or omit `default` entirely (see the sketch after this list).
17. Copy and paste the SPARQL query from Sophox to the `query.sparql` field in the source file.
18. Use the `scripts/wikidata-labels.js` script to list the `humanReadableName` languages from Wikidata (e.g. `node scripts/wikidata-labels.js Q33`). You should spot check these translations, as some languages might lack specificity (e.g. `Provins` rather than `Kinas provinser`).
19. We should maintain the current precedent for title casing `legacyIds` and English labels of the `humanReadableName`. This may need to be manually edited in the source (e.g. Paraguay Departments).
20. All fields used by sources that do not follow the `label_<language_code>` schema must have translations in `schema/fields.hjson`. If necessary, use the `scripts/wikidata-labels.js` script to list translations and copy them to `schema/fields.hjson` (e.g. `node scripts/wikidata-labels.js P5097`).
21. Use the following bash command to generate the timestamp for the `createdAt` field (use `gdate` on Mac OSX): `date -u +"%Y-%m-%dT%H:%M:%S.%6N"`
22. Generate a 17 digit number for the `id` field. A timestamp generated with the following bash command is suitable (use `gdate` on Mac OSX): `date +%s%6N`
23. The `filename` field in the source file should match the name of the file you added to the `data` directory.
24. Run `yarn test` to test for errors.
25. Invalid or non-simple geometry errors that occur during testing can usually be fixed by running the `clean-geom.js` script against the GeoJSON file (e.g. `node scripts/clean-geom.js data/usa_states_v1.geo.json`).
26. Run `./build.sh` to build the manifest and blob files locally.
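
To make the `default` rules in step 16 concrete, here is a hypothetical `emsFormats` sketch (the file names are invented) with two formats, where exactly one item sets `default: true`:

```hjson
{
  emsFormats: [
    {
      type: topojson
      file: finland_regions_v1.topo.json
      default: true
    }
    {
      type: geojson
      file: finland_regions_v1.geo.json
      # default omitted; non-default formats may
      # also set default: false explicitly
    }
  ]
}
```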