DmitryTsepelev / graphql-ruby-fragment_cache

graphql-ruby plugin for caching parts of the response

Can this gem work with GraphQL Ruby Multiplex and GraphQL Batch?

jpalumickas opened this issue · comments

Hello,

Using GraphQL Ruby Multiplex with Apollo Batch Http Link, this user resolver works fine:

def user(id:)
  user = User.find(id)

  cache_fragment([user, context[:current_user]]) do
    user
  end
end

but when I try to add GraphQL Batch:

def user(id:)
  Loaders::RecordLoader.for(User).load(id).then do |user|
    cache_fragment([user, context[:current_user]]) do
      user
    end
  end
end

I get this error:

undefined method `map' for nil:NilClass
/Users/jpalumickas/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/graphql-fragment_cache-1.4.0/lib/graphql/fragment_cache/cache_key_builder.rb:147:in `block in path_cache_key'  
/Users/jpalumickas/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/graphql-fragment_cache-1.4.0/lib/graphql/fragment_cache/cache_key_builder.rb:144:in `fetch'                    

Also, this resolver with Batch works fine without Apollo Batch (i.e., when sending a single query).

Hi! It should work fine with batch. Did you try moving the loading inside the cache_fragment block? (We don't want to do anything if the fragment has already been cached.)

Hey,

But I need the user object to know whether the cache is still valid (to check the updated_at field). Or am I missing something?

cache_fragment does not check updated_at; it keeps the nested subtree in the cache forever (or until expires_in passes). This gem was designed to avoid unnecessary queries for entities that do not change often; if we make a database request anyway, we don't get much benefit in terms of time 🙂

In our situation we have, for example, a user query which has a lot of associations. We touch the user when these associations change, so we can cache the user based on updated_at (to return the cached fragment instead of loading the associations). So I guess this gem can provide that? But how can we cache based on updated_at? I think providing a user with cache_key_with_version could work fine with this gem, no?

user query which has a lot of associations

Makes sense; in this case adding updated_at to the cache key should do the trick.
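For reference, here is a plain-Ruby sketch of why this works (a toy stand-in, not the gem's actual key builder): mixing updated_at into the digest means touching the record produces a brand-new key, so the stale fragment is simply never read again.

```ruby
require "digest"

# Stand-in for an ActiveRecord model; NOT the gem's real code.
User = Struct.new(:id, :updated_at)

# Hypothetical key builder that mixes updated_at into the digest.
def fragment_key(user)
  Digest::SHA1.hexdigest("user/#{user.id}/#{user.updated_at.to_i}")
end

user = User.new(1, Time.at(1_000))
old_key = fragment_key(user)

# Touching the user (because an association changed) bumps updated_at...
user.updated_at = Time.at(2_000)
new_key = fragment_key(user)

# ...so the old cached fragment is orphaned and never looked up again.
old_key == new_key # => false
```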

Thanks 🙂 But then I still have the problem with the batched query where this path is sometimes nil.

When I said "It should work fine with batch" I meant the batch loader; I haven't tried it with Apollo Batch, because sometimes it can make the app vulnerable. It sounds like it sends a complex request the gem cannot handle. Could you please take a look at the logs and maybe come up with a failing spec?

OK, so you can try calling GraphQL multiplex directly (without even using a request): https://graphql-ruby.org/queries/multiplex.html#concurrent-execution

query = 'query User($id: ID!) { user(id: $id) { id } }'

context = {
  current_user: current_user,
}

queries = [
  {
    query: query,
    variables: { id: 1 },
    operation_name: nil,
    context: context
  },
  {
    query: query,
    variables: { id: 2 },
    operation_name: nil,
    context: context
  }
]

MySchema.multiplex(queries)

and it's still failing 🙂

Glad this conversation was already started, I was just coming here to discuss!

@DmitryTsepelev, wrapping a graphql-batch call inside of cache_fragment doesn't work, because graphql-batch's loaders return a Promise. Since cache_fragment immediately checks the cache_key, the cache key for the promise is just the object id, like "<DIGEST>/#<Promise:0x00007fc306642a28>".

With promises, the cache value and cache key need to be calculated once they've resolved, but the cache_fragment helper checks the cache_key immediately, and it is memoized.

On the other hand, in order to check whether the value is in the cache, you have to know what the promise is going to resolve to and/or capture the cache key after sync, but at that point the database has already been hit. You save some computation time by returning the RawValue, especially in a nested tree, but it would be nice to avoid the database hit altogether.
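To illustrate the problem (with a toy promise class, not graphql-batch's real Promise), a key computed before sync only reflects the promise object's identity, not the record it will eventually resolve to:

```ruby
# Toy stand-in for graphql-batch's Promise; illustration only.
class FakePromise
  def initialize(&block)
    @block = block
  end

  def sync
    @block.call
  end
end

promise = FakePromise.new { "user_record" }

# Interpolating the unresolved promise yields a key based on the
# object's identity, e.g. "digest/#<FakePromise:0x...>" -- useless
# as a stable cache key.
key_before_sync = "digest/#{promise}"

# Only after sync do we have the value the key should be based on,
# but by then the database has already been hit.
key_after_sync = "digest/#{promise.sync}" # => "digest/user_record"
```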

I tried various methods to get it to work in the graphql-fragment-cache code, but everything seemed like a hack. I'm going to keep trying, but I was hoping you had an easy idea on how this could be handled.

but everything seemed like a hack

Haha, check out this part of the codebase and decide if you can use it as an excuse for your hacks 😀

wrapping a graphql-batch call inside of cache_fragment doesn't work, because graphql-batch's loaders return a Promise

I feel like we're mixing two things here: Justas was talking about client-side batching (when the client collects queries and sends them as a group). However, it's a valid point about server batching. I'll try to dig in, but it looks like it won't happen in the next few weeks since my personal backlog is a bit overloaded 😅

Whoops! My bad. 😅 You're right, I was talking about something else. I'll wait to see if #49 solves our problems, and if not I'll make a new issue.

Hi @jpalumickas! I've added a spec in #57, and it looks like #multiplex works fine! However, I did see that error about map when I accidentally passed an object instead of an array. Could you please make sure that's not your case, or maybe point out what is different between my spec and your setup?

Hey @DmitryTsepelev,

This gem works fine with multiplex; it only fails when I try to preload records with GraphQL Batch before caching:

Not working:

def user(id:)
  Loaders::RecordLoader.for(User).load(id).then do |user|
    cache_fragment(user) do
      user
    end
  end
end

Working:

def user(id:)
  user = User.find(id)

  cache_fragment(user) do
    user
  end
end

So the test written in #57 works fine because it uses cachedPost without GraphQL Batch preloading before caching.

def cached_post(id:)
  cache_fragment { ::Post.find(id) }
end

Cool, I was able to reproduce the issue. So far I've found that context.namespace(:interpreter)[:current_path] becomes nil in the middle of resolution, and we need it to build the key. Let's see where it goes.

@DmitryTsepelev that sounds like the same problem we were having in #53, where the final value of context.namespace(:interpreter) was also coming back nil.

I asked a question in the graphql-ruby repo; it looks like the case is pretty rare.

Thank you @DmitryTsepelev! Hope we will find an answer soon 🙂

@jpalumickas we figured out that the context is not updated because the promise keeps control until it's fully resolved (i.e., until the whole batch is done). There are two workarounds: use the built-in Dataloader (be careful) or use my dirty hack to pass down the context component the gem needs:

def cached_author_inside_batch
  outer_path = context.namespace(:interpreter)[:current_path]

  AuthorLoader.load(object).then do |author|
    context.namespace(:interpreter)[:current_path] = outer_path
    cache_fragment(author, context: context)
  end
end

More detailed explanation is here #58 as well as an example with specs.

@DmitryTsepelev I see, thank you! I think we can close this issue then 🙂

FYI we went down this road and there's also an issue with graphql-ruby when you use cache_fragment inside of a batch loader that returns a list (it works fine with single objects). Unfortunately, individual item caching was horribly slow compared to just batching. ☹️ We are going to look into doing a Redis MGET from the batch results instead.

For example:

BatchLoader::GraphQL.for(object.id).batch(default_value: []) do |user_ids, loader|
  Project.where(user_id: user_ids).each_with_index do |project, index|
    # โŒ This will break without the graphql-ruby fix and the graphql-fragment-cache patch below
    loader.call(project.user_id) { |memo| memo << cache_fragment(project, path: current_path(index)) }
  end
end

We also had to add a patch to Fragment to allow overriding the path option, but we didn't submit a PR because of the performance issue.

diff --git a/lib/graphql/fragment_cache/fragment.rb b/lib/graphql/fragment_cache/fragment.rb
index d2b39e0..2697082 100644
--- a/lib/graphql/fragment_cache/fragment.rb
+++ b/lib/graphql/fragment_cache/fragment.rb
@@ -13,7 +13,7 @@ module GraphQL
       def initialize(context, **options)
         @context = context
         @options = options
-        @path = interpreter_context[:current_path]
+        @path = options[:path] || interpreter_context[:current_path]
       end
 
       NIL_IN_CACHE = Object.new

None of the batch loaders play particularly well with fragment caching, unfortunately.

Unfortunately, individual item caching was horribly slow compared to just batching.

I wonder if migrating to multi_read (#44) would make things faster.
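The idea behind a batched read can be sketched in plain Ruby, with a Hash standing in for the cache store (names here are illustrative, not the gem's actual API): fetch all candidate fragment keys in one round trip, then only compute the fragments that were missing.

```ruby
# A Hash stands in for the real store (Redis, Memcached, etc.);
# read_multi mirrors the shape of an MGET-style API.
STORE = { "project/1" => "<cached fragment 1>" }

def read_multi(store, keys)
  keys.zip(store.values_at(*keys)).to_h
end

keys = ["project/1", "project/2"]
results = read_multi(STORE, keys)

hits   = results.reject { |_, v| v.nil? }
misses = results.select { |_, v| v.nil? }.keys

# hits   => { "project/1" => "<cached fragment 1>" }
# misses => ["project/2"]
# Only the misses need to hit the database and be serialized again,
# instead of one store round trip per item.
```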

I'm going to close the issue since we solved (well, not really) the initial problem. @bbugh please let me know if you want me to fix something once your PRs are merged (and thanks for finding the issue with arrays!).