dao-xyz / peerbit

P2P database framework with encryption, sharding and search

Home Page: https://peerbit.org

IndexedDB size limitations

marcus-pousette opened this issue

When uploading a 500 MB file on https://files.dao.xyz
you run into interesting problems with IndexedDB. It seems to fail to save all chunks, and IndexedDB shuts down after a while.

Expected behaviour: https://files.dao.xyz should be able to store files of "any" size.
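
For context, the failure mode can be reproduced with something like the sketch below (the database and store names are hypothetical, not the actual files.dao.xyz code):

// Sketch of the failure mode: writing many 1 MiB chunks into IndexedDB
// until the browser quota is exhausted. Names are hypothetical.
const CHUNK_SIZE = 1024 * 1024;

const db = await new Promise<IDBDatabase>((resolve, reject) => {
    const req = indexedDB.open('file-chunks', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('chunks');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
});

for (let i = 0; i < 500; i++) { // ~500 MB in total
    const tx = db.transaction('chunks', 'readwrite');
    tx.objectStore('chunks').put(new Uint8Array(CHUNK_SIZE), i);
    await new Promise<void>((resolve, reject) => {
        tx.oncomplete = () => resolve();
        // Typically surfaces as a QuotaExceededError / aborted transaction
        tx.onabort = () => reject(tx.error);
    });
}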

Also, it's not just single file sizes, correct? All persistence options in the browser have data limits, including localStorage, IndexedDB, and the relatively new Origin Private File System (OPFS). The actual limits aren't even that predictable, from system to system, because different browsers impose slightly different limitations, and many rules factor in device disk usage.

Perhaps one way to work around this is to have peers running in Node.js take on the role of scaling storage, and then let browser peers replicate the shards they need? Is that even feasible? I'm not super familiar with how libp2p works, but I've been wondering about this problem.

> Also, it's not just single file sizes, correct? All persistence options in the browser have data limits, including localStorage, IndexedDB, and the relatively new Origin Private File System (OPFS). The actual limits aren't even that predictable, from system to system, because different browsers impose slightly different limitations, and many rules factor in device disk usage.

Yeap. Currently using IndexedDB through LevelJS. OPFS actually has functions that let you estimate how much storage you have available: https://developer.mozilla.org/en-US/docs/Web/API/StorageManager/estimate
I kind of want to move to a solution where OPFS is used instead of IndexedDB, but have not had time yet to implement a storage adapter for it. Once OPFS is in place, you could throw errors as soon as you reach the storage limit, notify the user about it, and potentially do something about it.
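
As a minimal sketch of that guard (the helper name and the 0.9 threshold are placeholders, not Peerbit API):

// Sketch: refuse further writes once usage approaches the quota.
async function assertStorageHeadroom(): Promise<void> {
    const { usage = 0, quota = 0 } = await navigator.storage.estimate();
    if (quota > 0 && usage / quota > 0.9) {
        throw new Error(
            `Storage nearly full: ${usage} of ~${quota} bytes used`
        );
    }
}

// Call before persisting each chunk and surface the error to the user.
await assertStorageHeadroom();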

> Perhaps one way to work around this is to have peers running in Node.js take on the role of scaling storage, and then let browser peers replicate the shards they need? Is that even feasible? I'm not super familiar with how libp2p works, but I've been wondering about this problem.

This kind of defeats the purpose of doing it p2p, browser-only. The thesis is that if you have an app with 100 users, there should be enough space to replicate all content to a satisfactory degree. However, before you reach that point, or if you cannot really trust people to do any storage work, having dedicated nodes do the work (and always be online) can be preferable. This is possible with Peerbit today: you can create a server or use a home machine, run a node on it, and deploy stateful applications to it so that it holds state.
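
As a rough sketch of that setup (the 'peerbit' import and Peerbit.create() follow the docs, but the database address is a placeholder and the role options just mirror the snippet further down):

// Rough sketch of a dedicated, always-on replicator under Node.js.
import { Peerbit } from 'peerbit';

const peer = await Peerbit.create();

// Open the same database the browser peers use, with no storage limit:
// this node absorbs the bulk of the data, while browser peers replicate
// only what fits under their quota. The address below is a placeholder.
const db = await peer.open('<database-address>', {
    args: {
        role: {
            type: 'replicator'
        }
    }
});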

A solution for this is now implemented. See how here: https://peerbit.org/#/topics/sharding/sharding

Basically, you can now set your limits however you want while maintaining the replication degree. If you want to bound storage by the amount of data you are allowed to persist, you can do something like:

const { quota = 0 } = await navigator.storage.estimate();

await peer.open(db, {
    args: {
        role: {
            type: 'replicator',
            // * 0.8 to stay below the (estimated) quota
            limit: { storage: quota * 0.8 }
        }
    }
});
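
(Note: navigator.storage.estimate() resolves to an object with usage and quota fields rather than a number, and both values are themselves estimates, so the 0.8 factor just leaves headroom rather than guaranteeing you stay under the real limit.)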

Demo is here

https://files.dao.xyz/

Please reopen if you find problems.