monome / crow

Crow speaks and listens and remembers bits of text. A scriptable USB-CV-II machine

Reduce RAM footprint of core libs

trentgill opened this issue · comments

commented

Lua is allowed to use ~155kB of RAM before we see out-of-memory errors (and occasional crashes).

As of 2.1 we're at ~110kB used by the core libs. That's a lot, but it still leaves ~45kB for the user script to run. With the impending lua libs of v3.0 we're rapidly approaching 155kB in the core libs themselves! Obviously this won't work, and depending on which PRs you're running, there may already be out-of-mem errors.

As a result, we need to prune back the lua libs dramatically. If possible, I would prefer to do this without removing functionality. I'd also like to avoid rewriting large chunks of the libs in C if we can (though I'm happy to do it if necessary).

I've been working through the libs and am finding inconsistent results when trying to reduce the RAM footprint. Unfortunately the RAM cost of each function seems to be quite high, meaning we might need to inline some calls and remove helper functions entirely. Before doing this all over the place I would like an objective method to test whether changes are actually helping.
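As a crude way to quantify this, something like the sketch below could be run on a desktop Lua (same major version as crow's) to see how much heap a given lib retains once loaded. The `measure` helper is purely illustrative and not part of the repo, and the numbers are approximate since GC timing and string interning affect them:

```lua
-- Illustrative helper (not in the repo): load a lib and report how much
-- Lua heap it retains afterwards.
local function measure(path)
  collectgarbage('collect')               -- settle the heap first
  local before = collectgarbage('count')  -- current usage in kB
  local lib = dofile(path)                -- load and run the library
  collectgarbage('collect')               -- discard load-time garbage
  local after = collectgarbage('count')
  print(string.format('%s: %.1f kB retained', path, after - before))
  return lib
end

measure('lua/input.lua')  -- example path
```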

I'm thinking something along the lines of `luac -o a.out lua/input.lua && stat a.out | grep Size` to compare bytecode sizes. I don't think the 32/64-bit difference will matter much. Perhaps there are references people know of regarding the RAM usage characteristics of Lua? We have plenty of CPU headroom, so trading extra CPU for lower memory pressure is an option.
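For comparing bytecode sizes from within Lua itself rather than shelling out to `luac`, something like this should also work; `string.dump` is standard, the optional strip argument exists from Lua 5.3 onward, and the path is again just the example from above:

```lua
-- Rough proxy for on-device cost: compile a file and measure the size of
-- its dumped bytecode, with and without debug info.
local chunk = assert(loadfile('lua/input.lua'))
print('bytecode bytes:', #string.dump(chunk))        -- with debug info
print('stripped bytes:', #string.dump(chunk, true))  -- strip arg needs Lua 5.3+
```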

Would love some input on how folks would tackle this problem. @tehn @csboling @simonvanderveldt @ngwese @catfact

commented

I've gone for somewhat of a brute-force approach that strips debug info (the same approach as `luac -s`) from files as they are loaded. There are obvious downsides to this, but it's a good stopgap. I'm very interested in exploring other options that are more focused. See PR #395
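For anyone curious what strip-on-load looks like in plain Lua terms, here's a minimal sketch under the same idea, not the actual code from #395; `string.dump(f, true)` drops debug info much like `luac -s`, and the file name is only an example:

```lua
-- Minimal sketch: compile a file, dump it without debug info, and reload
-- the stripped binary chunk. The main downside is that stack tracebacks
-- lose line numbers and local variable names.
local function load_stripped(path)
  local chunk = assert(loadfile(path))
  local stripped = string.dump(chunk, true)   -- like luac -s
  return assert(load(stripped, '@' .. path))  -- reload as a binary chunk
end

-- hypothetical usage; file name is only an example
local asl = load_stripped('lua/asl.lua')()
```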

commented

fixed in #395