facebookarchive / flashback

Capture and replay real mongodb workloads

fatal error: runtime: out of memory

lukerosenfeld opened this issue · comments

Hello,

I've been stress testing a database using flashback, and the "stress" style does not seem to be working for me. I am running go1.4 and get no data in my "statsfilename" file, and likewise nothing in the "stdout" file.

I have been getting the "out of memory" error that is in the title of this post. Any ideas on what might be causing this?

As "stress" is currently implemented, all ops get loaded into memory before execution. I think the original reason for doing this was that workers were getting starved for ops while reading ops from disk, not sure it applies anymore.

Anyway, if your ops file is larger than system RAM, it will just keep consuming ops until you go OOM. Can you check whether that's the case, and if so, try setting the maxOps flag to limit the number of ops that get loaded into memory? Otherwise, can you link a stack trace where the OOM occurs?
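To make the failure mode concrete, here is a minimal sketch of the loading pattern described above, not flashback's actual code: the identifiers are made up and I'm pretending ops are one-per-line text for simplicity. Everything gets appended to one slice up front, so memory grows with the file unless a maxOps-style cap kicks in.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// loadOps reads ops into memory before any worker runs. maxOps <= 0
// means "load everything", which is where the OOM comes from when
// the ops file is larger than RAM.
func loadOps(path string, maxOps int) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var ops []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		ops = append(ops, scanner.Text())
		if maxOps > 0 && len(ops) >= maxOps {
			break // the cap a maxOps-style flag provides
		}
	}
	return ops, scanner.Err()
}

func main() {
	ops, err := loadOps("ops.json", 100000)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d ops into memory\n", len(ops))
}
```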

@tredman

I think Luke is on to a real and plausible issue (at least for sharded setups).

My thought here would be something like

--opBufferDocs

We could set up a reader channel that watches the buffer level, say 200k docs (or maybe we use size in bytes instead, if that's easier). The reader keeps putting lines into the buffer until it hits that threshold, after which it would sleep until the buffer ran low again. Something like the sketch below.
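Here is a rough sketch of what I mean, assuming a hypothetical --opBufferDocs value and one op per line; this is not a patch against flashback, just the shape of the idea. In Go a buffered channel gives most of this behavior for free: the reader goroutine blocks whenever the buffer is full and resumes as workers drain it (it wakes as soon as a slot frees rather than waiting for the buffer to run low, but the backpressure effect is the same).

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// streamOps feeds ops to workers through a channel whose capacity is
// the proposed --opBufferDocs threshold. Sends block once that many
// ops are buffered and unconsumed, so the reader never gets far ahead
// of the workers and memory use stays bounded.
func streamOps(path string, opBufferDocs int) (<-chan string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	ch := make(chan string, opBufferDocs)
	go func() {
		defer f.Close()
		defer close(ch)
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			ch <- scanner.Text() // blocks while the buffer is full
		}
	}()
	return ch, nil
}

func main() {
	ops, err := streamOps("ops.json", 200000)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for op := range ops {
		_ = op // hand each op to a worker here
	}
}
```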

I will say I have been testing a 30-shard system and haven't needed this myself, but I can see how it could come up.