graphql / graphql-over-http

Working draft of "GraphQL over HTTP" specification

Home Page: https://graphql.github.io/graphql-over-http


Define directives for HTTP compression on the client-side and HTTP caching on the server-side

isocroft opened this issue

I believe it would be pertinent and useful to create GraphQL directives for both the client side and the server side:

CLIENT-SIDE EXAMPLE (@httpCompressed directive):

This directive is used to set up encoding negotiation via the Accept-Encoding: gzip HTTP request header when compressed content is expected. When the mode is set to one-off, the response needs to carry the Content-Encoding: gzip HTTP response header. If the @defer or @stream directives are used in conjunction with @httpCompressed, the response instead needs to carry the Transfer-Encoding: gzip, chunked HTTP response header, and the mode becomes implicitly incremental even if another mode is explicitly specified, because the response will definitely be streamed using HTTP/1.1 streaming (a multipart/mixed response). The setting below tells the GraphQL server that HTTP compression should only be applied when the GraphQL response is larger than 3000 KB.

query GetUserFriendsAndLikes($id: Int!) {
  getUserFriendsAndLikes(id: $id) @httpCompressed(mode: "incremental", above: 3000) {
    first_name
    last_name
    age
    gender
    friends @defer {
      first_name
      last_name
      age
      gender
    }
    likes
  }
}
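
For reference, a possible SDL definition of the proposed directive is sketched below; the argument names, types, default value, and location are assumptions inferred from the example usage, since nothing about this directive is standardised.

# Hypothetical definition, inferred from the example usage above
directive @httpCompressed(
  # "one-off" for a single compressed response, "incremental" for streamed responses (assumed values)
  mode: String = "one-off",
  # minimum response size, in KB, above which compression is applied (assumed unit)
  above: Int
) on FIELD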

SERVER-SIDE EXAMPLE (@httpCached directive):

This directive is used to activate handling of the HTTP Cache-Control header, either setting it or checking it, depending on whether the response passes through a CDN, or whether the resolver function's output is requested over the network and the GraphQL server has caching capabilities built around the HTTP caching spec.

type User {
  name: String!
  age: Int!
  salary: Float!
}

type Query {
  getAllUsers(id: ID!, page: Int!, pageSize: Int!): [User!] @httpCached(action: "check", freshness: 1, validation: 0)
}
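
As before, a possible SDL definition for this directive; the argument semantics are assumptions read off the example usage and the prose above.

# Hypothetical definition; argument semantics are assumed, not specified anywhere
directive @httpCached(
  # "set" the Cache-Control header, or "check" cached freshness/validity (values assumed)
  action: String!,
  # whether/how long the response may be served as fresh (e.g. feeding max-age; semantics assumed)
  freshness: Int!,
  # whether revalidation (e.g. ETag / Last-Modified) is required (semantics assumed)
  validation: Int!
) on FIELD_DEFINITION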

Why does regular HTTP content negotiation not sufficiently fulfil this need?

Hello @benjie, yes, proactive HTTP content negotiation does in fact cover HTTP compression. However, a GraphQL client may want to request a GraphQL response with or without compression (e.g. gzip, because it supports stream compression; think the @defer and @stream directives) depending on the size of the response, so yes, the two can go hand in hand. Also, for most GraphQL clients and servers out in the wild, the most popular Content-Type is application/graphql (which I think is in the process of being standardised) or application/json. In other words, JSON is preferred over, say, XML (which can also be a valid response content type for GraphQL). Compression, however, should remain strictly optional.

Furthermore, a directive might be beneficial for applying HTTP compression on the server side only above a given JSON response size. I also believe it does more for the many ways GraphQL responses can be delivered quickly than deduplication does as an alternative, since deduplication doesn't work with streamed content (HTTP/1.1 streaming): the response has to be complete before deduplication can take place. HTTP compression can also be effectively disabled by default at the GraphQL level (especially if the GraphQL server utilises signed HTTP cookie(s) for authorization and therefore needs to worry about CSRF, and consequently the BREACH attack linked to HTTP-level compression).

Also, @benjie, another option would be to rename the directive from @httpCompressed to @compressed. This would enable deduplication to be used alongside HTTP compression (gzip): the GraphQL server inspects the query, and if it doesn't find a directive that suggests HTTP/1.1 streaming will be used to formulate the GraphQL response (think the @defer and @stream directives), the @compressed directive applies deduplication; otherwise it applies HTTP compression (gzip, which supports stream compression). This could further standardise control over how and when deduplication is applied from the client side.
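
To make this concrete, here is a hypothetical pair of queries using the renamed directive; the server behaviour in the comments is the proposal described above, not anything specified today.

# No @defer/@stream present, so under this proposal the server applies deduplication
query GetUserLikes($id: Int!) {
  getUserFriendsAndLikes(id: $id) @compressed {
    first_name
    likes
  }
}

# @defer present, so the server falls back to HTTP compression (gzip stream compression)
query GetUserFriends($id: Int!) {
  getUserFriendsAndLikes(id: $id) @compressed {
    first_name
    friends @defer {
      first_name
    }
  }
}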

When you say deduplication, what exactly are you referring to?

I mean removing duplicate data that is, or appears, redundant in a given dataset, as it applies to GraphQL response payloads (probably using logic like the one used here).

Again, I think this would be better served by HTTP content negotiation, in this case you could add Accept: application/graphql+jsoncrunch, application/graphql+json, application/json or some such.
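
For illustration, a negotiated request under that suggestion might look like the sketch below (application/graphql+jsoncrunch is a hypothetical media type, as noted above, and example.com is a placeholder):

POST /graphql HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/graphql+jsoncrunch, application/graphql+json, application/json
Accept-Encoding: gzip

{"query": "{ getAllUsers(id: \"1\", page: 1, pageSize: 10) { name } }"}

A server that supports the crunched encoding could then respond with Content-Type: application/graphql+jsoncrunch, while one that doesn't would fall back to application/graphql+json or application/json.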

OK @benjie, you do have a point. It's much better and more conformant to the use case than utilising directives. However, I want to ask: are these MIME types standardised for use via the HTTP RFCs?

Not as far as I know; I don't think this specific spec would contain the GraphQL Crunch encoding, for example. However, this spec should be compatible with someone doing so, much in the same way that the GraphQL Cursor Pagination Specification is not part of the GraphQL Spec but is compatible with it. If changes are required to this spec to make it compatible, I certainly support that 👍

Or if you mean the actual HTTP RFCs (rather than the GraphQL-over-HTTP RFC) then here's a good place to start your research: https://stackoverflow.com/a/11574673/141284

Closing this issue. There appears to be agreement that the solution is a combination of standard HTTP compression and HTTP content negotiation.