RisingStack / graffiti-mongoose

⚠️ DEVELOPMENT DISCONTINUED - Mongoose (MongoDB) adapter for graffiti (Node.js GraphQL ORM)

Home Page: https://risingstack-graffiti.signup.team/


Pagination that involves ordering

vangavroche opened this issue

Pagination that involves ordering doesn't work.

Do you have any idea how to fix it elegantly?

Thanks.

I will look into this. Thanks for the report.

@vangavroche Are you sure it's not working? I just wrote a passing test.

According to the source, https://github.com/RisingStack/graffiti-mongoose/blob/master/src/query/query.js#L286

the "after" cursor is always resolved by ID (> ID). So if there is any custom ordering, pagination breaks.

Regarding your test: it passes because the ordering (NAME_DESC) happens to coincide with the ID order.

See

beforeEach(async function BeforeEach() {
  motherUser = new User({
    name: 'Mother',
    age: 54,
    bools: [true, true]
  });

  await motherUser.save();

  user1 = new User({
    name: 'Foo',
    age: 28
  });

  await user1.save();

  user2 = new User({
    name: 'Bar',
    age: 28,
    mother: motherUser._id,
    friends: [user1._id],
    objectIds: [user1._id]
  });

  await user2.save();
});

NAME_DESC order: Mother > Foo > Bar

That is exactly the order in which the records were inserted (which is also the order of the generated IDs).
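
To make the failure concrete, here is a hypothetical query (not graffiti-mongoose's actual code) against the fixture above, using NAME_ASC so that the sort order and the ID order diverge:

// With NAME_ASC the expected order is Bar, Foo, Mother, while the
// insertion (and therefore ObjectID) order is Mother, Foo, Bar.
// Paginating "after Bar" (user2) with the ID-based filter:
const page = await User.find({ _id: { $gt: user2._id } })
  .sort({ name: 1 })
  .limit(1);
// page is empty, because user2 has the largest _id, although the next
// record by name ascending should be Foo.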

@vangavroche You are right. I don't see an easy solution without MongoDB aggregations, but using those would be really painful. Do you see other solutions?

Another solution is extending the logic of the GraphQL cursor; this is what cursors were introduced into GraphQL for in the first place.

First solution

Use a MongoDB cursor via Mongoose's query stream and store a stringified stream ID in the GraphQL cursor; with that ID you can get back the live stream and load the next portion of data (a sketch follows the cons below).
Pros:

  • no matter which docs were added or deleted between your requests, you get exactly the next portion of docs

Cons:

  • each live cursor costs memory on MongoDB
  • memory on the Node.js side
  • problematic if a client may be served by several backends
  • too many tricks; heavy to code and test
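
A minimal sketch of this idea, assuming a single Node.js process and Mongoose 4.5+ (where Query#cursor() exists); the registry and function names are made up for illustration:

const crypto = require('crypto');

// In-memory registry of live Mongoose cursors, keyed by an opaque ID.
// This is exactly what breaks with several backends: the next request
// must hit the process that holds the cursor.
const liveCursors = new Map();

async function firstPage(model, sort, limit) {
  const cursor = model.find().sort(sort).cursor();
  const cursorId = crypto.randomBytes(12).toString('hex');
  liveCursors.set(cursorId, cursor);
  return { cursorId, docs: await readPortion(cursor, limit) };
}

async function nextPage(cursorId, limit) {
  const cursor = liveCursors.get(cursorId);
  if (!cursor) throw new Error('cursor expired');
  return readPortion(cursor, limit);
}

async function readPortion(cursor, limit) {
  const docs = [];
  for (let i = 0; i < limit; i += 1) {
    const doc = await cursor.next(); // resolves to null when exhausted
    if (!doc) break;
    docs.push(doc);
  }
  return docs;
}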

Second solution

With custom sorting, store an OFFSET in the GraphQL cursor instead of the ObjectID (see the sketch after this list).
Pros:

  • very easy and quick to implement

Cons:

  • you may get some elements twice, or skip some of them (docs may be added or removed between requests)
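
A sketch of the offset-based cursor, with hypothetical encodeCursor/decodeCursor helpers (GraphQL cursors are opaque strings, so base64-encoded JSON is a common convention):

// Hypothetical helpers: a GraphQL cursor that is just a base64 offset.
function encodeCursor(offset) {
  return Buffer.from(JSON.stringify({ offset })).toString('base64');
}

function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64').toString());
}

async function pageAfter(model, sort, after, limit) {
  const { offset } = decodeCursor(after);
  const docs = await model.find().sort(sort).skip(offset + 1).limit(limit);
  // Each edge's cursor is simply its absolute offset in the sorted result.
  return docs.map((node, i) => ({
    node,
    cursor: encodeCursor(offset + 1 + i),
  }));
}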

Third solution

With custom sorting, run the sorted query from offset 0 and read documents until you hit the ID stored in the cursor; after that, take the needed amount of docs (sketch after this list).
Pros:

  • easy to implement

Cons:

  • performance problems if the requested data sits at a large offset, e.g. 1000 (1000 docs have to be transferred over the network before you reach the needed ones)
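
A sketch of the scan-until-ID approach, again with illustrative names; afterId is the ObjectID parsed from the GraphQL cursor:

async function pageAfterByScan(model, sort, afterId, limit) {
  const cursor = model.find().sort(sort).cursor();
  let doc;
  // Scan from offset 0 until we pass the document the cursor points at.
  while ((doc = await cursor.next())) {
    if (String(doc._id) === String(afterId)) break;
  }
  // Then take the requested number of documents.
  const docs = [];
  while (docs.length < limit && (doc = await cursor.next())) {
    docs.push(doc);
  }
  await cursor.close();
  return docs;
}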

Fourth solution (combination of the 2nd and 3rd solutions)

Store both the OFFSET and the ObjectID in the GraphQL cursor. Add some growth constant, e.g. 100 (this constant may be changed by the developer). Then start the query at OFFSET - 100, read data until the stored ID is reached, and after that read LIMIT documents. (But read no more than 100 + 100 + LIMIT records; if you hit that bound, return the last LIMIT records or start a new query with a full scan.)
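
A sketch of the combined approach; how the 100 + 100 + LIMIT bound is enforced here is my reading of the description above:

function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64').toString());
}

const GROWTH = 100; // slack constant, tunable by the developer

async function pageAfterCombined(model, sort, after, limit) {
  const { offset, id } = decodeCursor(after); // cursor now stores both
  const start = Math.max(0, offset - GROWTH);
  const cursor = model.find().sort(sort).skip(start).cursor();
  let scanned = 0;
  let doc;
  // Scan forward until the stored ID is reached, within a bounded window.
  while ((doc = await cursor.next())) {
    scanned += 1;
    if (String(doc._id) === String(id)) break;
    if (scanned >= 2 * GROWTH + limit) {
      await cursor.close();
      return null; // caller falls back to a full scan (third solution)
    }
  }
  const docs = [];
  while (docs.length < limit && (doc = await cursor.next())) {
    docs.push(doc);
  }
  await cursor.close();
  return docs;
}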

Fifth solution

Aggregation framework

Sixth solution

Use Sphinx, Elasticsearch, or some queue to obtain the next portion of IDs.

So... To be or not to be

Only the developer knows whether they can skip some data, show the same data twice, or afford a full scan on every request, or whether to use a 3rd-party solution. So I think we need to create the ability to plug in adapters for connections.

No Silver Bullet for connections

The silver bullet was found, but its implementation is complex.

The connection sorting problem (sorting not only by ID) is solved in https://github.com/nodkz/graphql-compose-mongoose

The main secret is in the cursors (a sketch follows the list):

  • for every document in the connection, the cursor should store the values of the fields you sort by (e.g. for .sort({ _id: 1, age: 1 }) you construct a cursor like base64({ _id: 34ef85...34, age: 28 }))
  • the sort can only be performed over unique indexes (to avoid overlapping)
  • when processing the before and after arguments, you parse the values out of the cursor and apply them to the query with comparison operators: .where({ _id: { $gt: 34ef85...34 }, age: { $gt: 28 } })
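
A minimal keyset-pagination sketch of this idea, with hypothetical helpers and an illustrative { age: 1, _id: 1 } sort (the unique _id appended as a tie-breaker); this is not graphql-compose-mongoose's actual code. Note that a strict "after" condition on a compound sort needs an $or of prefix equalities rather than an independent $gt on every field:

// Hypothetical helpers: the cursor stores the sort-field values.
function encodeCursor(sortValues) {
  return Buffer.from(JSON.stringify(sortValues)).toString('base64');
}

function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64').toString());
}

async function pageAfter(model, after, limit) {
  const c = decodeCursor(after); // e.g. { age: 28, _id: '34ef85...34' }
  // Strictly "after" the cursor under a compound sort: either the primary
  // field is greater, or it ties and the tie-breaker (_id) is greater.
  return model
    .find({
      $or: [
        { age: { $gt: c.age } },
        { age: c.age, _id: { $gt: c._id } },
      ],
    })
    .sort({ age: 1, _id: 1 })
    .limit(limit);
}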