Consider increasing the buffer size of the runtime output buffer
Nexxkinn opened this issue · comments
`Buffer()`
defaults its size to 4096 bytes and won't allocate more memory unless you call its `grow()`
method. As a result, any response body longer than 4096 bytes gets cut off, and only the first 4096 bytes are sent to `base64.fromUint8Array(body)`.
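To illustrate the truncation described above, here is a minimal sketch in plain TypeScript (no Deno APIs; `readOnce` is a hypothetical helper that mirrors a single `read()` call into a fixed-size buffer):

```typescript
// A single read into a fixed-size buffer copies at most buf.length bytes;
// anything beyond that in the source is silently dropped.
function readOnce(source: Uint8Array, buf: Uint8Array): number {
  const n = Math.min(source.length, buf.length);
  buf.set(source.subarray(0, n));
  return n;
}

const body = new Uint8Array(10_000).fill(1); // a 10,000-byte response body
const small = new Uint8Array(4096);          // default-sized buffer
const n = readOnce(body, small);
console.log(n); // 4096 — everything past the first 4096 bytes is lost
```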
Pre-allocating the buffer is currently my workaround for this problem in my runtime, too. I don't know the official size limit, so I just set it to 33 MB just in case.
Thanks for this. AWS Lambda's maximum response size is 6 MB, so it seems we should set the buffer to that size.
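A quick sketch of sizing the buffer to that ceiling (the constant name is mine, and whether Lambda's limit is counted in MB or MiB is an assumption here):

```typescript
// Pre-allocate the output buffer at Lambda's 6 MB response cap so a
// full-size response never outgrows it.
const LAMBDA_MAX_RESPONSE_BYTES = 6 * 1024 * 1024;
const output = new Uint8Array(LAMBDA_MAX_RESPONSE_BYTES);
console.log(output.length); // 6291456
```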
I tried doing what you said in ba2f182, but I'm not sure if it's working correctly.
You could test it by adding a stock image exceeding 4096 bytes and checking whether it returns properly.
This is the solution I used in my runtime.
It might have some redundancies, but at least it works for me:
```ts
const output = new Deno.Buffer(new Uint8Array(6000000)); // 6 MB
// ...
const bufr = new BufReader(output, output.length);
const tp = new TextProtoReader(bufr);
// ...
let buff = new Uint8Array(bufr.size());
// read() returns the number of bytes read, or null at EOF;
// ?? (rather than ||) keeps a legitimate 0-byte read intact
const size = (await bufr.read(buff)) ?? bufr.size();
const body = buff.slice(0, size);
```
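An alternative that avoids guessing a pre-allocation size is to drain the reader in a loop until it reports end-of-stream. This is a hedged sketch in plain TypeScript, not the runtime's actual code; `Reader` and `MemReader` are hypothetical minimal stand-ins for the buffered reader above:

```typescript
// Minimal reader interface: returns bytes read, or null at EOF.
interface Reader {
  read(buf: Uint8Array): number | null;
}

// Read chunks until EOF, then stitch them into one Uint8Array.
function readAll(r: Reader, chunkSize = 4096): Uint8Array {
  const chunks: Uint8Array[] = [];
  let total = 0;
  for (;;) {
    const buf = new Uint8Array(chunkSize);
    const n = r.read(buf);
    if (n === null) break;
    chunks.push(buf.subarray(0, n));
    total += n;
  }
  const out = new Uint8Array(total);
  let off = 0;
  for (const c of chunks) {
    out.set(c, off);
    off += c.length;
  }
  return out;
}

// In-memory reader used only to exercise the loop.
class MemReader implements Reader {
  private pos = 0;
  constructor(private data: Uint8Array) {}
  read(buf: Uint8Array): number | null {
    if (this.pos >= this.data.length) return null;
    const n = Math.min(buf.length, this.data.length - this.pos);
    buf.set(this.data.subarray(this.pos, this.pos + n));
    this.pos += n;
    return n;
  }
}

const total = readAll(new MemReader(new Uint8Array(10_000))).length;
console.log(total); // 10000 — the full body, not just the first 4096 bytes
```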