aio-libs / aiomcache

Minimal asyncio memcached client

Closing Client leaks dict

jkeys089 opened this issue · comments

Disclaimer: I'm new to Python and asyncio, so this may just be my own misuse.

I've written some code to integrate with the auto-discovery feature of AWS ElastiCache. Part of this is connecting to a memcached cluster address every 60 seconds (it is important to reconnect each time so that DNS is re-resolved and we end up on a healthy cluster member). Everything is working fine, but it seems this process of frequently connecting and disconnecting is leaking dicts.
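
For context, the reconnect path boils down to something like the sketch below (the hostname, key, and 60-second interval are simplified placeholders standing in for the real auto-discovery logic):

import asyncio
import aiomcache

async def refresh_client(host, port=11211):
    while True:
        # create a fresh client each cycle so the cluster DNS name is re-resolved
        mc = aiomcache.Client(host, port)
        try:
            await mc.set(b"healthcheck", b"ok")
        finally:
            # Client.close() is a coroutine, so it has to be awaited
            await mc.close()
        await asyncio.sleep(60)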

Here is a minimal reproducer using pympler to demonstrate the leak:

from pympler import muppy, summary
import asyncio
import aiomcache

loop = asyncio.get_event_loop()

async def hello_aiomcache():
    mc = aiomcache.Client("127.0.0.1", 11211, loop=loop)
    await mc.set(b"some_key", b"Some value")
    value = await mc.get(b"some_key")
    print(value)
    values = await mc.multi_get(b"some_key", b"other_key")
    print(values)
    await mc.delete(b"another_key")
    mc.close()  # bug: close() is a coroutine and should be awaited (see the resolution below)

# establish a baseline (watch the <class 'dict line)
summary.print_(summary.summarize(muppy.get_objects()))

for i in range(50):
    loop.run_until_complete(hello_aiomcache())

# <class 'dict grows
summary.print_(summary.summarize(muppy.get_objects()))

ds = [ao for ao in muppy.get_objects() if isinstance(ao, dict)]

# leaked dict looks like {'_loop': <_UnixSelectorEventLoop running=False closed=False debug=False>, '_paused': False, '_drain_waiter': None, '_connection_lost': False, '_stream_reader': <StreamReader t=<_SelectorSocketTransport fd=34 read=polling write=<idle, bufsize=0>>>, '_stream_writer': None, '_client_connected_cb': None, '_over_ssl': False}
ds[2364]
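
For what it's worth, that dict looks like the instance __dict__ of asyncio's StreamReaderProtocol, which stays referenced for as long as the underlying connection is open. A rough way to count the leaked protocol objects directly with pympler might be the following (StreamReaderProtocol is an asyncio implementation detail, so treat this as a guess):

from asyncio.streams import StreamReaderProtocol

# each connection that was never properly closed keeps one of these alive
protocols = [ao for ao in muppy.get_objects() if isinstance(ao, StreamReaderProtocol)]
print(len(protocols))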

It looks like these dicts will hang around forever until loop.close() is called. I'm confused by this. I don't think I ever want to close the loop that I borrowed from tornado via tornado.ioloop.IOLoop.current().asyncio_loop. Is there any other way to properly close and clean up these connections without closing the loop?

It turns out the issue was caused by not awaiting the mc.close() call -- definitely my own misuse.
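
For anyone who lands here later, the fix is simply to await the coroutine; wrapping it in try/finally also makes sure the connections are released even if an earlier call raises:

try:
    await mc.set(b"some_key", b"Some value")
    value = await mc.get(b"some_key")
finally:
    # awaiting close() lets the client actually release its connections
    await mc.close()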