IQSS / dataverse-client-r

R Client for Dataverse Repositories

Home Page: https://iqss.github.io/dataverse-client-r

Faster JSON parser

kuriwaki opened this issue

For dataset retrieval, we download and parse the JSON metadata multiple times.
For example, in get_dataframe_by_name, get_fileid.character first finds the dataset id via

jsonlite::fromJSON(httr::content(r, as = "text", encoding = "UTF-8"))[["data"]][["id"]]

and then fetches the list of ids for each file in the dataset at
out <- jsonlite::fromJSON(httr::content(r, as = "text", encoding = "UTF-8"), simplifyDataFrame = FALSE)$data

It turns out the time this takes is non-trivial. Most of it goes to downloading the JSON from the URL; only a small remaining fraction (< 1%) goes to parsing the JSON itself. We could make a minor speed improvement by switching to a faster parser, RcppSimdJson (https://github.com/eddelbuettel/rcppsimdjson), which is about 2-10x faster in my tests, per the benchmark below. The current jsonlite::fromJSON seems well suited to data science pipelines where we work with the data itself, but here we only need bits of metadata. An even bigger improvement would be to download the metadata only once, as sketched below.
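
A rough sketch of the parse-once idea (the helper is hypothetical, not the package's current internals; the /id and /datasetVersion/files paths come from the benchmark queries below, while the dataFile$id layout inside each file entry is my assumption):

library(httr)
library(jsonlite)

# Hypothetical: fetch and parse the dataset metadata once, then reuse the
# parsed list for both the dataset id and the per-file ids.
get_dataset_metadata <- function(url) {
  r <- httr::GET(url)
  httr::stop_for_status(r)
  jsonlite::fromJSON(
    httr::content(r, as = "text", encoding = "UTF-8"),
    simplifyDataFrame = FALSE
  )
}

meta <- get_dataset_metadata(
  "https://demo.dataverse.org/api/datasets/export?exporter=dataverse_json&persistentId=doi%3A10.70122/FK2/PPIAXE"
)
dataset_id <- meta$id  # no second request needed
file_ids <- vapply(
  meta$datasetVersion$files,
  function(f) f$dataFile$id,  # assumed layout of each file entry
  numeric(1)
)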

Switching packages will require changes in at least 20 places where jsonlite is used.

library(jsonlite) # currently used
library(RcppSimdJson) # potential replacement

# sample: https://demo.dataverse.org/file.xhtml?persistentId=doi:10.70122/FK2/PPIAXE/MHDB0O
js_url <- "https://demo.dataverse.org/api/datasets/export?exporter=dataverse_json&persistentId=doi%3A10.70122/FK2/PPIAXE"

# download once
tmp <- tempfile()
download.file(js_url, tmp)

microbenchmark::microbenchmark(
  statusquo = jsonlite::fromJSON(js_url), # what is currently being called
  dl = curl::curl_download(js_url, tempfile()), # separating download from parsing
  jsonlite = jsonlite::fromJSON(tmp),  # parsing, without download
  RcppJson = RcppSimdJson::fload(tmp), # replace with Rcpp
  RcppJson_file = RcppSimdJson::fload(tmp, query = "/datasetVersion/files"), # only files data
  RcppJson_id = RcppSimdJson::fload(tmp, query = "/id"),  # stop at dataset /id
  times = 30
)
#> Unit: microseconds
#>           expr        min         lq        mean      median         uq        max neval
#>      statusquo 365097.709 371235.626 374774.8021 373752.4175 378357.084 387006.459    30
#>             dl 361154.168 364100.750 369091.1201 369528.3965 371835.459 378629.667    30
#>       jsonlite   1487.834   2743.500   3248.0424   2994.1465   3270.959   8380.876    30
#>       RcppJson    186.876    262.001    438.1298    345.3130    468.042   2335.417    30
#>  RcppJson_file    136.292    224.001    334.5173    301.6465    409.376    688.001    30
#>    RcppJson_id    138.459    177.876    287.7714    263.3965    362.792    586.750    30

Created on 2022-01-05 by the reprex package (v2.0.1)

Wow, that's a big difference. I like how your first two benchmarks isolate downloading vs parsing.

If it helps, here are the results from my home desktop; I have fiber internet and a roughly 5-year-old processor. I see the same 7.2x speedup for parsing as you.

Unit: microseconds
          expr        min         lq        mean      median         uq        max neval cld
     statusquo 170548.243 178850.316 182609.5076 182405.6315 186942.899 193960.856    30   c
            dl 168463.594 176699.861 181490.2084 181860.3195 185492.156 195547.737    30   c
      jsonlite   2786.174   3237.758   4953.9265   5831.6465   5977.747   6232.820    30  b 
      RcppJson    349.556    478.450    684.4094    740.9170    895.923   1018.478    30 a  
 RcppJson_file    294.919    548.481    635.1559    642.5110    771.255    884.151    30 a  
   RcppJson_id    256.885    531.879    553.2711    558.8955    597.987    795.705    30 a 

Even though a 7-10x speedup is nice, I'm not sure the user will notice it. The real bottleneck is downloading the file (i.e., about 0.3 sec for you and 0.18 sec for me). The parsing duration is a small fraction of the downloading duration.


This is probably more trouble than it's worth, but I'm thinking aloud: are the two packages essentially interchangeable? That is, do they accept the same parameter (i.e., a URL) and spit out a nested list with the exact same structure?

If so, could the dataverse package use RcppSimdJson if it's available (using requireNamespace("RcppSimdJson", quietly = TRUE)) and fall back to jsonlite if it's not?

RcppSimdJson could be listed in Suggests; this approach is explained well in the "Guarding the use of a suggested package" section of the 2nd edition of R Packages.
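
A minimal sketch of that guard, under the assumption that the two parsers can return equivalent nested lists (the helper name is hypothetical; max_simplify_lvl = "list" is, as far as I can tell, RcppSimdJson's closest analogue to simplifyDataFrame = FALSE):

# Hypothetical internal helper: prefer RcppSimdJson when it is installed,
# fall back to jsonlite otherwise.
parse_json <- function(txt) {
  if (requireNamespace("RcppSimdJson", quietly = TRUE)) {
    # ask for a plain nested list, roughly matching
    # jsonlite::fromJSON(txt, simplifyDataFrame = FALSE)
    RcppSimdJson::fparse(txt, max_simplify_lvl = "list")
  } else {
    jsonlite::fromJSON(txt, simplifyDataFrame = FALSE)
  }
}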

I'm a little concerned RcppSimdJson is not as easily deployed. The RcppSimdJson library has had only three dependencies over the past two years, but I see the current minimum requirements of jsonlite are almost nothing (R won't even work without the methods package). jsonlite's suggested dependencies are almost identical to dataverse's; the sf package is the only real addition.

Overall, I think we can start with the parallel Suggests approach, but given that downloading, rather than parsing, is the real bottleneck, it is not a high priority.

Re:

Are the two packages essentially interchangeable? I mean, do they accept the same parameter (ie, url) and spit out a nested list with the exact same structure?

They are not identical (their object.size values differ slightly and they don't pass base::identical()), but I can't find a concrete difference yet; they may well be identical in the aspects our client package cares about.
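
For anyone who wants to reproduce that check, something like the following should work (tmp is the downloaded file from the benchmark above; waldo is an assumed dev-time helper here, and max_simplify_lvl = "list" is my guess at the closest jsonlite-like output):

a <- jsonlite::fromJSON(tmp, simplifyDataFrame = FALSE)
b <- RcppSimdJson::fload(tmp, max_simplify_lvl = "list")

identical(a, b)       # FALSE, as noted above
all.equal(a, b)       # TRUE, or a description of the differences
waldo::compare(a, b)  # pinpoints exactly where the structures diverge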

Re:

I'm a little concerned RcppSimdJson is not easily deployed. The RcppSimdJson library has only three dependencies in the past two years.

I thought we want to depend on packages which themselves have fewer dependencies? You're right that jsonlite has no real dependencies. RcppSimdJson is also minimal, but it does rely on Rcpp.

Re:

I'm not sure it will be noticed by the user. The real bottleneck is downloading the file

Yes, maybe we should tackle the download first.
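
One possible shape for that (a sketch only, not a committed design; memoise would be a new dependency and fetch_metadata is a hypothetical helper): cache the raw download per URL within a session, so repeated calls re-parse from memory instead of re-downloading.

library(httr)
library(memoise)

# Hypothetical: memoise the metadata request so the download cost is paid
# once per URL per session.
fetch_metadata <- memoise::memoise(function(url) {
  r <- httr::GET(url)
  httr::stop_for_status(r)
  httr::content(r, as = "text", encoding = "UTF-8")
})

txt <- fetch_metadata(js_url)  # first call downloads; repeats return instantly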