How to properly manage memory? Leaking memory...
exotexot opened this issue · comments
I'm running the Nakama TypeScript server in Docker, and I can see memory leaking.
The drop in the graph is when I restarted the machine.
Even though I create matches and terminate them, memory keeps accumulating.
I do have logging active. People tell me that disabling logging would save memory, but I feel like something else is wrong.
I use TypeScript template code mainly from your demo repo, and I even optimised my matchLoop to use dispatcher.broadcastMessageDeferred() in order to avoid excessive socket messaging.
let matchLoop: nkruntime.MatchLoopFunction&lt;State&gt; = function (
    ctx: nkruntime.Context,
    logger: nkruntime.Logger,
    nk: nkruntime.Nakama,
    dispatcher: nkruntime.MatchDispatcher,
    tick: number,
    state: State,
    messages: nkruntime.MatchMessage[]
) {
    const unifiedTransforms: { [key: string]: any } = {}

    for (const message of messages) {
        switch (message.opCode) {
            // Transform update
            case OpCode.TRANSFORM_UPDATE: {
                let transformMsg = {} as any
                try {
                    transformMsg = JSON.parse(nk.binaryToString(message.data))
                    unifiedTransforms[message.sender.userId] = transformMsg
                } catch (error) {
                    logger.error("Bad data received: %v", error)
                    continue
                }
                break
            }
            case OpCode.STOP_SESSION:
                return null
            default:
                // No other opcodes are expected from the client, so treat anything else as an error.
                dispatcher.broadcastMessage(OpCode.STATE_UPDATE_REJECTED, null, [message.sender])
                logger.error("Unexpected opcode received: %d", message.opCode)
        }
    }

    state.transforms = combineObjects(state.transforms, unifiedTransforms)

    // Broadcast the unified state to all clients.
    dispatcher.broadcastMessageDeferred(OpCode.UNIFIED_STATE_UPDATE, JSON.stringify(state))

    return { state }
}
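One thing worth ruling out: since state.transforms is only ever merged into via combineObjects, entries for players who have left the match linger forever, so the match state (and the serialized broadcast) can only grow. A minimal sketch of pruning it against the current presences, assuming a hypothetical pruneTransforms helper and a simplified TransformState type (not part of the Nakama API):

```typescript
// Hypothetical shape of the relevant slice of match state.
interface TransformState {
    transforms: { [userId: string]: unknown }
}

// Drop transform entries for users who are no longer present in the match.
// Could be called from matchLoop with the user IDs of the current presences,
// or from matchLeave for the departing presences.
function pruneTransforms(state: TransformState, presentUserIds: string[]): TransformState {
    const present = new Set(presentUserIds)
    const pruned: { [userId: string]: unknown } = {}
    for (const userId of Object.keys(state.transforms)) {
        if (present.has(userId)) {
            // Keep only transforms belonging to connected players.
            pruned[userId] = state.transforms[userId]
        }
    }
    state.transforms = pruned
    return state
}
```

This keeps the per-match state bounded by the number of connected players rather than by everyone who has ever sent a TRANSFORM_UPDATE.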
There was a similar issue on the forums: https://forum.heroiclabs.com/t/memory-leak-profiling/615 - but it seemed Amazon-specific.
Your Environment
- Nakama: 3.21.1
- Database: Postgres 16.2
- Environment name and version: Docker version 26.1.1
- Operating System and version: Ubuntu 24
Hello @exotexot, we'll need more information. Could you please take this over to our forums and open a thread there?