denyncrawford / dndb

A Deno 🦕 persistent, embeddable and optimized NoSQL database for JS & TS

Home Page: https://dndb.crawford.ml

App crashes with DnDB under high load!

Hi!
I wrote a simple test and I want to share it with you.

  1. Test code:

[screenshot 1: test code]

In this example I run a simple visit counter.

  2. Run it

[screenshot 2: running the counter]

And it all works! In the DB I see the correct counter value.

But if I press and hold the F5 key (Firefox), this simulates a high request rate per second.
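
For reference, roughly the same load can be generated from a script instead of holding F5; a minimal sketch, assuming the counter app listens on http://localhost:3000/ (the URL and request count here are placeholders, not taken from the original test):

// Hypothetical load generator: fire many concurrent requests, roughly like holding F5.
// The target URL and request count are assumptions for illustration.
const target = "http://localhost:3000/";
const requests = Array.from({ length: 200 }, () =>
  fetch(target).then((res) => res.text()).catch((err) => String(err))
);
await Promise.all(requests);
console.log(`sent ${requests.length} requests to ${target}`);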

DnDB creates a db.update file and the whole app crashes.

[screenshot 3: crash error]

Please tell me, how can I disable db.update or optimize DnDB for high loads?

P.S. (Deno 1.7.0 & dndb 0.2.6)

@michailVestnik Hi buddy, the problem isn't a DnDB issue. It is actually just because you're not awaiting your async function.

Edit: I should mention that DnDB is not a sync module: you have to wait for each operation to finish and commit before running the next one, because we are working with a file state that cannot change while another write task is running. So this happens when you try to perform multiple operations at a time.

I made a minimal example with the fix and it works nicely:

import { Application } from "https://deno.land/x/abc@v1.2.4/mod.ts";
import Datastore from 'https://deno.land/x/dndb@0.2.6/mod.ts'

const app = new Application();

// Persistent datastore backed by ./count.db, loaded on startup.
const count = new Datastore({
  autoload: true,
  filename: './count.db'
})

console.log("http://localhost:3000/");

// Increment the visit counter for a page, awaiting every DnDB operation
// so no write starts before the previous one has been committed.
const increment = async (page) => {
  let results = await count.find({ url: page })
  if (results.length) {
    for (let el of results) {
      let { counter } = el;
      return await count.updateOne({ url: page }, { $set: { counter: counter + 1 } })
    }
  } else {
    // First visit to this page: create its document.
    let doc = {
      url: page,
      counter: 1
    }
    return await count.insert(doc);
  }
}

app.get('/*', async (c) => {
  let page = c.url.pathname;
  page = page === '/' ? 'index' : page
  let pageVisit = await increment(page);
  return JSON.stringify(pageVisit, null, 2)
})

app.start({ port: 3000 })

Also, you don't need to loop through all the documents; you can use Datastore.updateOne() :D I only did it in the example to stay true to your code. And DnDB does not return async iterators, just plain arrays, so you don't need for await.
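
For reference, a shorter version of the increment helper without the loop could look like this (a sketch reusing the same count datastore and the Datastore.updateOne() / insert() calls from the example above):

// Sketch: increment without looping over the result array.
const increment = async (page) => {
  const [existing] = await count.find({ url: page });
  if (existing) {
    return await count.updateOne({ url: page }, { $set: { counter: existing.counter + 1 } });
  }
  return await count.insert({ url: page, counter: 1 });
};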

Does this solve your problem? If it does, I'll proceed to close. Regards 😄.

OK, I'm testing right now on another PC and it crashes with the exact same error even when I'm awaiting the function. Let me check it out ASAP.

I think the problem is in writing the backup file db.updated and the operating system's reaction. If the request rate reaches about 1600 per minute, the application crashes.

Yes, I've been trying to debug this, but I can't find why it fails at that operation rate. My guess is the same as yours; for now I only know that it is related to the temporary file renaming and the way DnDB handles storage. I am working to solve the problem ASAP.

@michailVestnik Hi, buddy, this is the new update:

Basically, the error occurs because when a user makes a request at the same time another user is doing the same, Deno rejects the rename operation because the file is already open by another call. In fact, I made a run queue manager, but it didn't work due to the way the DnDB storage system is managed: DnDB opens and closes the file every time an operation is executed. What I'll probably have to do is keep the file open when initializing DnDB and close it only when renaming. For now I have to do some rewrites, please be patient.
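
For illustration only, the kind of run queue manager mentioned above can be sketched as a promise chain that lets each write start only after the previous one has settled; this is a generic pattern with made-up names, not DnDB's actual internals:

// Hypothetical run queue that serializes async operations.
class RunQueue {
  private tail: Promise<unknown> = Promise.resolve();

  // Each enqueued task starts only after every previously enqueued task has settled.
  enqueue<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(task, task);
    // Keep the chain alive even if a task rejects.
    this.tail = result.catch(() => {});
    return result;
  }
}

// Usage sketch: wrap every write so the temporary-file rename steps never overlap.
// const queue = new RunQueue();
// await queue.enqueue(() => count.updateOne({ url: page }, { $set: { counter: counter + 1 } }));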

@michailVestnik I made some changes and it is working so far:

[video attachment: trim.mp4]

I don't know if it is only working in my environment, so can you run a test with the update using the example?

https://raw.githubusercontent.com/denyncrawford/dndb/main/mod.ts
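
If it helps, pointing the example's import at the main branch should be enough to test the fix (a sketch; everything else in the example stays the same):

// Swap the versioned import for the unreleased main branch to test the fix.
import Datastore from 'https://raw.githubusercontent.com/denyncrawford/dndb/main/mod.ts'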

OK. I'll test it now, wait!

Everything works perfectly! You're doing great!
I tested with large request volumes and DDoS-style load, and everything is fine!
Now DnDB is a reliable database!

Thanks @michailVestnik, it's great that this bug has been discovered and resolved. I'll proceed to close this issue and publish the stable version.