Switch data while keeping the service running?
HIRANO-Satoshi opened this issue
It took 23 minutes to start after indexing country.db, region.db, county.db, and locality.db.
Every time the WOF data is updated, we need to restart the pip-server, and the service stops for that period, right?
Is there any way to switch data while the server is serving?
The server occupies 4.38 GB of memory on my iMac right now, so I don't think double buffering is a good idea.
It seems like there are three separate issues here?
- Time to index (23 minutes)
- Memory usage, assuming you are indexing SQLite databases
- Hot-swapping data
Is that correct? Are you also creating an extras/SQLite database (with the `-extras` flag) for appending properties to the results?
Assuming the issues listed above:
- Can you make sure that you are using a version of the code >= 06433dc? If you are not passing the `-extras` flag this shouldn't matter, but if you are, it's a known problem that not enabling the performance tweaks (the so-called `-live-hard-die-fast` flag) in the SQLite code will cause indexing to take forever.
Beyond that, indexing on a Mac has always been less performant than on a Linux machine, and as of this writing I am not sure why. I apologize for the inconvenience; generally all the work so far has been to ensure that it can run on a Mac, but it has been tweaked and tested for a Linux machine.
- That sounds like something somewhere is not letting go of memory. Question: if you let the server sit idle for 5-10 minutes after indexing, does the memory usage go down? I am wondering if this is just a Go garbage collection issue (in which case we could probably just force garbage collection after indexing). Tangentially related: #18
- Currently it is not possible to hot-swap data. It has always been on the list but hasn't happened yet. There is now an open ticket for this: https://github.com/whosonfirst/go-whosonfirst-pip-v2/issues/new
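For what it's worth, if hot-swapping were implemented, one common Go pattern would be to build the replacement index off to the side and publish it with a single atomic pointer swap. This is only a minimal sketch of that pattern; the `Index` type and `lookup` function are illustrative assumptions, not the project's actual API, and note that building the new index before discarding the old one temporarily doubles memory, which is exactly the double-buffering cost raised above.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Index is a stand-in for the in-memory spatial index; the real type in
// go-whosonfirst-pip-v2 differs (this is an illustrative assumption).
type Index struct {
	version int
}

// current holds the index that request handlers read; atomic.Value lets
// readers load it without locks while a new index is swapped in.
var current atomic.Value

// lookup simulates a request handler reading the current index.
func lookup() int {
	idx := current.Load().(*Index)
	return idx.version
}

func main() {
	current.Store(&Index{version: 1}) // initial index, built at startup

	// Build the replacement index off to the side (this is the step that
	// temporarily doubles memory), then publish it with one atomic store;
	// in-flight requests keep using whichever index they already loaded.
	fresh := &Index{version: 2}
	current.Store(fresh)

	fmt.Println(lookup()) // prints 2
}
```

Readers never block and never see a half-built index, but the old index is only reclaimed once no goroutine still holds a reference to it.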
I confirmed that memory usage was reduced after indexing finished.
For a test dataset, it consumes 19.3 GB of memory during indexing and 5.66 GB after indexing.
I will close this issue since you opened #22.