josephg / ShareJS

Collaborative editing in any app

Linear performance issues with larger numbers of operations submitted

Maksims opened this issue · comments

We are submitting a number of operations (from 50 to 500) to different documents in the same collection.
The operations are very simple: an `oi` of an array with 5 numbers.
These operations are submitted by the back-end.
We are using the redis driver and livedb-mongo.
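For reference, an `oi` component in the json0 OT type is an "object insert". A minimal sketch of the kind of op described above (the path name `metrics` is a hypothetical example, not from the original report):

```javascript
// json0 op: a list of components. "oi" inserts a value at path p.
// Here we insert a 5-number array at the (assumed) key "metrics".
var op = [{ p: ['metrics'], oi: [1, 2, 3, 4, 5] }];
```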

Testing on a local machine (redis and mongo installed on the same machine), we are getting some bad performance results:
Updating 50 docs takes ~125ms. The submit operations are all issued in one go; in each callback we increment a counter so we can wait until all operations are done, then report the elapsed time.

Updating 100 docs takes ~250ms. It seems there is some synchronous linear work going on, or some locking mechanism, that leads to linear performance.
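The timing harness described above can be sketched roughly as follows. `submitOp` is a hypothetical stand-in for however the ops are actually submitted (e.g. livedb's `client.submit(collection, docName, opData, callback)`); the counter-and-callback pattern is the part taken from the report:

```javascript
// Issue all submits in one go; count callbacks to know when all are done,
// then report the total elapsed time.
function benchmark(docNames, makeOp, submitOp, done) {
  var remaining = docNames.length;
  var start = Date.now();
  docNames.forEach(function (name) {
    submitOp(name, makeOp(name), function (err) {
      if (err) return done(err);
      if (--remaining === 0) done(null, Date.now() - start);
    });
  });
}
```

If the backend processed submits concurrently, the elapsed time for 100 docs should be close to that for 50 docs rather than double it, which is what makes the ~125ms vs ~250ms numbers look like serialized work.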

We have 2 problems here:

  1. It is not async, and it scales linearly. In our case we perform operations on file paths in our virtual filesystem. During those operations we take a lock over the filesystem in a particular scope (a project), so no other file operations can execute until the lock is released. Because submission takes so long, users see files moving around in a perceivable way and have to wait.
  2. The time it takes even for a single operation is way too high. And even if it must take that long, it could at least run in parallel for different documents. We do a bunch of other direct db requests before submitting operations, and they take 1-3ms on our side, even though large queries are hitting the same db in the meantime.

We would like some insight into whether anything can be done to reduce the timing, or at least to make the submits run in parallel.