vercel-community / deno

🦕 Deno runtime for ▲ Vercel Serverless Functions

Home Page: https://vercel-deno.vercel.app

Consider increasing the size of the runtime output buffer

Nexxkinn opened this issue · comments

commented

https://github.com/TooTallNate/vercel-deno/blob/a033f1dfe0b8adaede606fc98b875db3328267ea/src/runtime.ts#L49

Deno.Buffer() defaults its size to 4096 bytes and won't allocate more memory unless you call its grow() method. Therefore, any response body longer than 4096 bytes will be cut off, and only the first 4096 bytes are passed to base64.fromUint8Array(body).
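The truncation described above can be illustrated with a minimal fixed-capacity buffer in TypeScript. This is a simplified stand-in, not the actual Deno.Buffer implementation; the class and its methods are illustrative only:

```typescript
// Simplified stand-in for a fixed-capacity output buffer (illustrative,
// not the real Deno.Buffer): writes beyond the capacity are silently dropped.
class FixedBuffer {
  private buf: Uint8Array;
  private len = 0;
  constructor(capacity = 4096) {
    this.buf = new Uint8Array(capacity);
  }
  // Writes as many bytes as fit; returns how many were actually written.
  write(chunk: Uint8Array): number {
    const room = this.buf.length - this.len;
    const n = Math.min(room, chunk.length);
    this.buf.set(chunk.subarray(0, n), this.len);
    this.len += n;
    return n;
  }
  bytes(): Uint8Array {
    return this.buf.subarray(0, this.len);
  }
}

const small = new FixedBuffer();            // default 4096-byte capacity
const body = new Uint8Array(5000).fill(97); // 5000 bytes of "a"
small.write(body);
console.log(small.bytes().length);          // 4096 — the tail is lost

const large = new FixedBuffer(6 * 1024 * 1024); // pre-allocated 6 MB
console.log(large.write(body));                 // 5000 — fits entirely
```

Pre-allocating a large enough backing array, as suggested below, sidesteps the problem for any response under the chosen cap.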

Pre-allocating the buffer's memory is currently my solution to this problem in my runtime, too. I don't know the official size limit, so I just set it to 33 MB to be safe.

Thanks for this. AWS Lambda's max response size is 6 MB, so it seems we should set the buffer to that size.

I tried doing what you suggested in ba2f182, but I'm not sure whether it's working correctly.

commented

You could test it by serving a stock image larger than 4096 bytes and checking whether it comes back intact.
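A binary asset isn't strictly necessary; any handler whose body exceeds 4096 bytes will do. Here is a hypothetical test function, written against the standard web Request/Response types for illustration (the actual runtime's handler signature may differ):

```typescript
// Hypothetical handler whose response body (10 000 bytes) exceeds the
// 4096-byte default buffer, so truncation would be immediately visible.
async function handler(req: Request): Promise<Response> {
  const body = new Uint8Array(10_000).fill(0x61); // 10 000 bytes of "a"
  return new Response(body, {
    headers: { "content-length": String(body.length) },
  });
}
```

Deploying this and comparing the received content length against 10 000 shows at a glance whether the output buffer is still capping responses.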

This is the solution I used in my runtime.
It might have some redundancies, but at least it works for me:

const output = new Deno.Buffer(new Uint8Array(6_000_000)); // pre-allocate 6 MB
...
const bufr = new BufReader(output, output.length);
const tp = new TextProtoReader(bufr);
...
const buff = new Uint8Array(bufr.size());
const size = (await bufr.read(buff)) ?? bufr.size(); // read() returns null on EOF
const body = buff.slice(0, size);
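An alternative to guessing a cap up front is to read in chunks and grow the output as needed. This is a minimal sketch, not the runtime's code; the readAll helper and its reader-function parameter are my own names:

```typescript
// Sketch: accumulate a stream of unknown length without a fixed cap.
// `read` fills the passed array and returns the byte count, or null on EOF
// (the same contract as Deno's Reader interface).
async function readAll(
  read: (p: Uint8Array) => Promise<number | null>,
): Promise<Uint8Array> {
  const chunks: Uint8Array[] = [];
  let total = 0;
  const p = new Uint8Array(4096);
  while (true) {
    const n = await read(p);
    if (n === null || n === 0) break; // EOF (or nothing left to read)
    chunks.push(p.slice(0, n));       // copy, since p is reused
    total += n;
  }
  const out = new Uint8Array(total);  // stitch the chunks back together
  let off = 0;
  for (const c of chunks) {
    out.set(c, off);
    off += c.length;
  }
  return out;
}
```

With this approach the response size is bounded only by memory (and the platform's own limit), so no bytes are dropped regardless of the pre-allocated size.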

Verified that #17 fixes the issue (note the "big" property at the bottom of the JSON payload, whose value is "a".repeat(4096) and which by itself exceeds the default buffer size).
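A payload like the one used for verification can be reproduced with a snippet along these lines (the surrounding JSON fields are illustrative; only the "big" property matters):

```typescript
// The "big" property alone is 4096 bytes, so the serialized JSON is
// guaranteed to exceed the old 4096-byte default buffer.
const payload = JSON.stringify({ ok: true, big: "a".repeat(4096) });
console.log(payload.length > 4096); // true — would have been truncated before the fix
```

If the response round-trips with the closing brace intact and JSON.parse succeeds, the buffer is no longer cutting the body short.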