arthurfiorette / axios-cache-interceptor

📬 Small and efficient cache interceptor for axios. Etag, Cache-Control, TTL, HTTP headers and more!

Home Page: https://axios-cache-interceptor.js.org


Optimization: Reduce cache calls

paescuj opened this issue · comments

What happened?

Thank you very much for this library!

I've done some debugging and noticed there are quite a few cache calls happening for the following scenario:
There's a response in cache which is stale (max-age reached) and it gets re-validated.

I'm wondering whether those could actually be reduced.

I would have expected two cache calls for such a case:

  • Cache Call 1: find to get cached response
  • Do request to revalidate
  • Cache Call 2: set to update the cached response

In fact, 5 calls are made (see the log below, logged as cache: ...).
I guess that's because request configs are updated and written back to the cache before the actual request is carried out. Couldn't that happen in memory, with the result only written to the cache after the new request has been processed?
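For anyone wanting to reproduce the count, a minimal call-counting storage makes these calls visible. The find/set/remove shape below follows the adapter expected by the library's buildStorage() helper (check the docs for the exact signatures); the counting wrapper itself is just a debugging sketch:

```javascript
// Call-counting storage sketch: backs onto a plain Map and tallies every
// find/set/remove. To plug it into the interceptor, pass it through the
// library's buildStorage() helper: setupCache(axios, { storage: buildStorage(countingStorage) }).
const data = new Map();
const counts = { find: 0, set: 0, remove: 0 };

const countingStorage = {
  find(key) {
    counts.find += 1;
    return data.get(key);
  },
  set(key, value) {
    counts.set += 1;
    data.set(key, value);
  },
  remove(key) {
    counts.remove += 1;
    data.delete(key);
  }
};
```

Logging or incrementing counters here shows exactly how many storage round-trips a single stale revalidation triggers.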

axios-cache-interceptor version

v1.3.2

Node / Browser Version

Node.js v18.19.0

Axios Version

v1.6.2

What storage is being used

Another one

Relevant debugging log output

cache: find (key: 1137762219)
cache: set (key: 1137762219)
cache: set (key: 1137762219)
{ id: '1137762219', msg: 'Updated stale request' }
{
  id: '1137762219',
  msg: 'Sending request, waiting for response',
  data: { overrideCache: false, state: 'stale' }
}
cache: find (key: 1137762219)
{
  id: '1137762219',
  msg: 'Useful response configuration found',
  data: {
    cacheConfig: {
      update: {},
      ttl: 300000,
      methods: [Array],
      cachePredicate: [Object],
      etag: true,
      modifiedSince: false,
      interpretHeader: true,
      cacheTakeover: true,
      staleIfError: true,
      override: false,
      hydrate: undefined
    },
    cacheResponse: {
      data: [Object],
      status: 200,
      statusText: 'OK',
      headers: [Object]
    }
  }
}
{
  id: '1137762219',
  msg: 'Found waiting deferred(s) and resolved them'
}
cache: set (key: 1137762219)
{
  id: '1137762219',
  msg: 'Response cached',
  data: {
    cache: {
      state: 'cached',
      ttl: 89000,
      staleTtl: undefined,
      createdAt: 1702166521005,
      data: [Object]
    },
    response: {
      status: 200,
      statusText: 'OK',
      headers: [Object],
      config: [Object],
      request: [ClientRequest],
      data: [Object],
      id: '1137762219',
      cached: true
    }
  }
}

Hey @paescuj, is this directly impacting the performance of your app? The request and response interceptors in axios have completely separate state from each other, which is why two finds happen.

I don't have the time or a compelling reason to do such an optimization: this storage access should be almost irrelevant compared to the network request, which would happen anyway without axios-cache-interceptor.

However, I'm happy to guide you if you wanna work on it.

Let me walk you through what currently happens in your use case.

  1. Request is made; get() is called in the request interceptor.

let cache = await axios.storage.get(config.id, config);

  2. The storage wrapper detects that the storage returned a stale entry with cached status, so set() is called to also update this entry in the db.

await set(key, value, config);

  3. Everything is set up to wait for a network response; a set() call is made to update this entry in the db to loading.

await axios.storage.set(
  config.id,
  {
    state: 'loading',

  4. Request is sent to the network; the event 'Sending request, waiting for response' is emitted.

  5. Response arrives. The response interceptor has no real information about what happened before and no state connection to the request interceptor, so another get() is performed.

const cache = await axios.storage.get(response.id, config);

  6. Response is processed and, before returning, set() is called to insert the response obtained from the network.

await axios.storage.set(response.id, newCache, config);
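The six steps above can be modeled with a toy simulation (the interceptor functions are illustrative, not the library's actual internals) that reproduces the five storage calls seen in the log:

```javascript
// Toy model of the interceptor flow for a stale entry, recording every
// storage call. The function names are hypothetical stand-ins for the
// request/response interceptors described above.
const calls = [];
const storage = new Map([['id1', { state: 'stale' }]]);
const get = (key) => { calls.push('find'); return storage.get(key); };
const set = (key, value) => { calls.push('set'); storage.set(key, value); };

function requestInterceptor(id) {
  const cache = get(id);                        // step 1: find cached entry
  if (cache.state === 'stale') set(id, cache);  // step 2: wrapper re-writes the stale entry
  set(id, { state: 'loading' });                // step 3: mark entry as loading
  // step 4: request goes out to the network here
}

function responseInterceptor(id, response) {
  const cache = get(id);                        // step 5: fresh find, no shared state
  set(id, { state: 'cached', data: response }); // step 6: persist the network response
}

requestInterceptor('id1');
responseInterceptor('id1', { status: 200 });
console.log(calls); // ['find', 'set', 'set', 'find', 'set'] — five calls, matching the log
```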


However, the use case you presented here probably has the highest ratio of db calls per request. Fully cached requests need only a single get().
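The reduction paescuj proposes could be sketched as a write-behind wrapper that buffers intermediate state transitions (stale → loading) in memory and persists only the final state. Everything here (the wrapper, the pending map, flush()) is a hypothetical illustration, not part of the library:

```javascript
// Hypothetical write-behind wrapper: set() buffers values in memory and
// flush() persists only the latest one, so intermediate states never hit
// the backing storage.
function bufferWrites(storage) {
  const pending = new Map();
  return {
    find(key) {
      // Prefer the buffered value so reads stay consistent mid-request.
      return pending.has(key) ? pending.get(key) : storage.find(key);
    },
    set(key, value) {
      pending.set(key, value); // buffered, not yet persisted
    },
    flush(key) {
      if (pending.has(key)) {
        storage.set(key, pending.get(key)); // single real write
        pending.delete(key);
      }
    }
  };
}
```

With such a wrapper, the stale-revalidation flow above would still issue two finds but collapse the three sets into one.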