lucaswerkmeister / m3api

minimal modern MediaWiki API client

Home Page: https://www.npmjs.com/package/m3api


Handle batchcomplete in continuation

lucaswerkmeister opened this issue

I was initially under the impression that the batchcomplete field in responses is specific to the query API, while continuation is a more general concept, but in fact batchcomplete is also managed by the ApiContinuationManager. In light of this, I wonder if m3api (rather than m3api-query) should offer some functionality to work with this field, or its absence.

One thing I could imagine is a function called requestAndContinueBatch(), which, instead of returning an iterable, returns an iterable of iterables. Each inner iterable would yield responses up to and including a batchcomplete one; the outer iterable would produce inner iterables until continuation finishes (or iteration stops, of course). If each response contains a complete batch, then each inner iterable would only yield a single response.
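
Roughly how I imagine this being consumed, as a sketch only (requestAndContinueBatch() doesn’t exist yet; the name and shape are just the proposal above):

```js
// sketch only: requestAndContinueBatch() does not exist (yet),
// its name and shape are just the proposal above
import Session from 'm3api/node.js';

const session = new Session( 'en.wikipedia.org', {
	formatversion: 2,
}, {
	userAgent: 'requestAndContinueBatch-example',
} );

for await ( const batch of session.requestAndContinueBatch( {
	action: 'query',
	generator: 'allpages',
	prop: 'revisions',
} ) ) {
	// each inner iterable yields partial responses
	// up to and including the batchcomplete one
	for await ( const response of batch ) {
		console.log( response.query.pages.length );
	}
}
```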

This would be more cumbersome to use, so I definitely wouldn’t want it to replace requestAndContinue(); it would just be available as an alternative API, allowing you to write more correct and robust code when you think you need it (typically in a library).

If this is implemented, then support for hiding truncatedresult warnings might belong in this package as well (compare lucaswerkmeister/m3api-query#2). I’m not sure it should happen by default even when using this function – we don’t know if callers will use the API “correctly” – but it stands to reason that users are likely handling truncated results, so we should at least make it easy for them to hide these warnings.
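
For example, something along these lines; I haven’t checked exactly what m3api passes to the warn handler, so treat the details as a sketch:

```js
// sketch only: assumes a warn handler option that receives
// warning objects with a code property; whether this matches
// what m3api currently passes to warn is an assumption here
import Session from 'm3api/node.js';

function warnExceptTruncatedResult( warning ) {
	if ( warning && warning.code === 'truncatedresult' ) {
		return; // caller is expected to handle truncated batches
	}
	console.warn( warning );
}

const session = new Session( 'en.wikipedia.org', {}, {
	userAgent: 'truncatedresult-example',
	warn: warnExceptTruncatedResult,
} );
```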

Hm, not sure the iterable of iterables works. That idea expects callers to iterate sequentially: fully exhaust an inner iterable before stepping the outer one again. But nothing actually enforces this, and the results of not following that rule could be really confusing.

Needs more thought, I think.

I suppose one approach would be to keep control of the iteration in our hands. The function would be called with a reduce-like callback; we call that callback for each partial response, and yield the accumulated value whenever a batchcomplete response is reached.

I think the reduce-like approach could work; in this case, we can probably also hide truncatedresult warnings unconditionally, because we are feeding all the partial results into the reduce function, and it seems fine to assume that it does something useful with them. (If anyone really wants the warning, it’s still available in the response object, too.)
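
In code, the reduce-like version might look something like this (the name and exact signature here are placeholders for illustration, not a settled API):

```js
// sketch only: name and signature are placeholders
// for the reduce-like approach described above
import Session from 'm3api/node.js';

const session = new Session( 'en.wikipedia.org', {
	formatversion: 2,
}, {
	userAgent: 'reduce-batch-example',
} );

for await ( const pages of session.requestAndContinueReducingBatch(
	{
		action: 'query',
		generator: 'allpages',
		prop: 'revisions',
	},
	// called for each partial response; the accumulated value is
	// yielded (and reset) once a batchcomplete response is seen
	( pages, response ) => {
		for ( const page of response.query.pages ) {
			pages.set( page.pageid, page );
		}
		return pages;
	},
	() => new Map(),
) ) {
	console.log( `complete batch of ${ pages.size } pages` );
}
```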

Done, though I’m not super excited about the name.