ModuleIntegration: High memory usage
jarstelfox opened this issue
How do you use Sentry?
Sentry SaaS (sentry.io)
SDK version
3.21.0
Steps to reproduce
We noticed that Fatal Errors were not being reported correctly.
After some digging, we found that the Event Handlers were working correctly, but simply running out of reserved memory.
I did an xdebug profile to understand exactly what was using memory during the FatalErrorHandlerListener.
It looks like Jean85\PrettyVersions::getVersion is the main culprit: it takes 8.1 MB in our case.
Meanwhile, the default reserved memory is only 10 KiB (10,240 bytes).
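For context, the reserved-memory trick used by fatal error handlers works roughly like this (a minimal sketch of the pattern, not the SDK's actual code; names are illustrative):

```php
<?php

// Reserve a block of memory up-front. When PHP hits its memory limit and
// raises a fatal error, freeing this block gives the shutdown handler a
// little headroom to build and send the error report.
$reservedMemory = str_repeat('x', 10 * 1024); // 10 KiB, mirroring the SDK default

register_shutdown_function(function () use (&$reservedMemory): void {
    // Release the reserved block so the code below has memory to work with.
    $reservedMemory = null;

    $error = error_get_last();
    if ($error !== null && \in_array($error['type'], [E_ERROR, E_PARSE, E_CORE_ERROR], true)) {
        // Anything memory-hungry here (resolving package versions,
        // serializing a large JSON payload) can exhaust the freed
        // 10 KiB and silently abort the report.
        error_log(sprintf('Fatal: %s in %s:%d', $error['message'], $error['file'], $error['line']));
    }
});
```

The key constraint: everything the handler does after the fatal error must fit inside that small freed block, which is why a multi-megabyte version lookup at that point is fatal to the report itself.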
As a workaround, we have removed the integration, and everything works as expected.
Expected result
We receive Fatal Errors in Sentry.
Actual result
We do not receive Fatal Errors in Sentry.
I noticed similar issues when profiling the SDK, and we'll address this in the upcoming major we are currently working on.
@cleptric I would say a minor version release may be needed. We had out-of-memory errors that were not reported in production due to the integration.
That integration is used when ModulesIntegration is enabled, which it is by default. A quick workaround would be to disable it.
It is really strange to me, though, that this method uses so much memory; it's a very simple method that does a foreach: https://github.com/Jean85/pretty-package-versions/blob/a0427613ed159314b1aac96952edd515fa2a284f/src/PrettyVersions.php#L54-L58
Maybe you have a very wide dependency graph? I've opened Jean85/pretty-package-versions#47 to track this and see if I can fix it in any way.
@cleptric as you can see from Jean85/pretty-package-versions#47 (comment), it seems there's no way around this memory usage: if you're reading versions, you'll still have that memory peak, because we're reading vendor/composer/installed.php, Composer's source of truth about what's inside the vendor folder.
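To see why merely reading installed.php costs memory, note that the file returns one big PHP array, so the whole thing materializes in memory at once. A self-contained simulation (hypothetical package names, structure modeled on what Composer writes) shows the cost scaling with the number of packages:

```php
<?php

// Simulate vendor/composer/installed.php: one array entry per installed
// package, shaped like the entries Composer writes. Loading the real file
// behaves the same way: the entire array is allocated in one go.
function fakeInstalledPhp(int $packageCount): array
{
    $versions = [];
    for ($i = 0; $i < $packageCount; $i++) {
        $versions["vendor/package-$i"] = [
            'pretty_version' => '1.2.3',
            'version' => '1.2.3.0',
            'reference' => str_repeat('a', 40), // stand-in for a git SHA
            'type' => 'library',
            'install_path' => "/app/vendor/vendor/package-$i",
            'aliases' => [],
            'dev_requirement' => false,
        ];
    }
    return ['versions' => $versions];
}

$before = memory_get_usage();
$installed = fakeInstalledPhp(200);
$after = memory_get_usage();

printf("Array for %d packages: ~%d KiB\n",
    count($installed['versions']),
    ($after - $before) / 1024
);
```

A real installed.php also carries dev requirements, replaced/provided packages, and longer paths, so the real peak on a wide dependency graph can be considerably larger than this toy version suggests.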
So, I would recommend either eagerly fetching that info (before the user's app hits the OOM, so that garbage collection frees the memory afterwards) or disabling that integration when we're reporting an OOM (if possible).
Seconded. In fact, it may be worth seeing if you can apply the eager fetching for all integrations in the first scope so no integration runs into a similar problem.
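A minimal sketch of the eager-fetching idea (hypothetical names, not the SDK's actual API): resolve the module list once at setup time, so the OOM-time code path only reads an already-built array:

```php
<?php

final class EagerModules
{
    /** @var array<string, string>|null */
    private static ?array $cache = null;

    // Call this during SDK initialization, while memory is plentiful.
    // The temporary memory used to parse installed.php is garbage
    // collected afterwards; only the small version map stays cached.
    public static function warmUp(callable $loader): void
    {
        self::$cache = $loader();
    }

    // Called from the fatal error handler: no parsing, no allocation
    // beyond returning the cached array.
    public static function get(): array
    {
        return self::$cache ?? [];
    }
}

// Hypothetical usage at bootstrap; a real loader would read the
// installed package versions here.
EagerModules::warmUp(fn (): array => ['vendor/package' => '1.2.3']);
var_dump(EagerModules::get());
```

The design point is simply moving the expensive work to a moment when hitting the memory limit is unlikely, which is what makes it applicable to other integrations as well.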
I have been trying to trigger this by adding a (in my mind) pretty large number of packages to the composer.json, but I'm not seeing anything like the memory usage you are describing. However, it does look like your project has over 200 direct dependencies? Is it possible to share your composer.json, or the bulk of it?
@stayallive Absolutely. I just sent you an email with it 👍🏻
So, interesting development. I've tested how much memory the modules integration is taking with this script: https://gist.github.com/stayallive/e76b4314f1b9e2141ae7e19ff78ae8a4
The code of the ModulesIntegrationMock was copied from the ModulesIntegration and changed a little to make it static; it should do the same amount of work. However, much to my surprise:
before: 4194304
before peak: 4194304
after: 4194304
after peak: 4194304
Packages resolved: 202
So either I'm missing something in my test, or resolving the modules is not actually consuming any memory, which would mean we have to look elsewhere for the cause of the problems you are seeing. Note that commenting out the call to ModulesIntegrationMock::getComposerPackages() doesn't change the memory usage either.
I've run this on macOS with PHP 8.2.11, FWIW.
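One possible explanation for those flat numbers (an observation of mine, not something established in the thread): 4194304 is exactly 4 MiB, which suggests they come from memory_get_usage(true). With true, PHP reports memory reserved from the OS, which the Zend allocator grows in large chunks, so allocations that fit inside the current chunk are invisible at that granularity. Measuring with false shows bytes actually in use:

```php
<?php

// memory_get_usage(true) reports memory reserved from the OS, which the
// allocator grabs in large chunks, so small allocations may not move the
// number at all. Passing false reports bytes actually in use.
$realBefore = memory_get_usage(true);
$usedBefore = memory_get_usage(false);

$data = range(1, 10000); // some work that allocates

$realAfter = memory_get_usage(true);
$usedAfter = memory_get_usage(false);

printf("real: %d -> %d\n", $realBefore, $realAfter); // may be identical
printf("used: %d -> %d\n", $usedBefore, $usedAfter); // shows the growth
unset($data);
```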
So after a bit more digging, I think I've found the issue and hopefully it's going to be fixed by #1633.
I don't believe the ModuleIntegration is the culprit (although it certainly doesn't help). Your example does have abnormally high memory usage; however, from my testing, this cost is already paid up-front by Composer, so it is not incurred while the OOM error is being captured. It does make the payload bigger, though, so removing the integration can still help keep the JSON size from ballooning.
I have been testing all day, and this problem depends on how much memory the allocation that triggers the OOM requests: a large chunk or a small chunk. In the case of a smaller chunk, we OOM inside our fatal error handler, because the 10 KiB of reserved memory is way too small on its own to allow us to do network calls and serialize large JSON payloads (from testing, we need 30-60 times that for even a small payload). When an OOM occurs for a larger allocation, this is less of an issue: the allocation that caused the OOM is freed before the fatal error handler is called, which already frees up some memory, and our 10 KiB is not really doing much at that point.
The solution is to increase the memory limit by 5 MiB (once) when we encounter an OOM, which should give enough headroom to serialize even the largest payload and should not have much negative effect on the environment.
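A sketch of that approach (simplified and with illustrative names; the actual change lives in the linked PR): parse the current memory_limit, then raise it once from inside the handler via ini_set:

```php
<?php

// Convert a php.ini-style size (e.g. "128M", "1G", "10K") to bytes.
// Simplified: ignores "-1" (unlimited) and assumes a well-formed value.
function iniSizeToBytes(string $value): int
{
    $unit = strtoupper(substr($value, -1));
    $number = (int) $value;
    return match ($unit) {
        'G' => $number * 1024 ** 3,
        'M' => $number * 1024 ** 2,
        'K' => $number * 1024,
        default => $number,
    };
}

// On an out-of-memory fatal error, bump the limit once by 5 MiB so the
// handler has headroom to serialize and send the event. The static flag
// guarantees we never raise it repeatedly.
function increaseMemoryLimitOnce(): void
{
    static $done = false;
    if ($done) {
        return;
    }
    $done = true;

    $current = iniSizeToBytes((string) ini_get('memory_limit'));
    if ($current > 0) {
        ini_set('memory_limit', (string) ($current + 5 * 1024 * 1024));
    }
}
```

Raising memory_limit at runtime with ini_set is allowed even after the limit has been hit, which is what makes this viable inside a fatal error handler.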
It would be great if you could look over the PR and test the branch in your project to see if that helps without needing to disable the ModuleIntegration.