openresty / lua-resty-redis

Lua redis client driver for the ngx_lua based on the cosocket API

Share redis instance between workers

samsonradu opened this issue · comments

I'm trying to share the same Redis client between workers:

    --- script/red.lua
    local redis = require "resty.redis"
    local red = redis:new()
    local host = "127.0.0.1"
    local port = 6379

    red:set_timeout(1000) -- 1 sec

    local ok, err = red:connect(host, port)
    if not ok then
        ngx.exit(ngx.HTTP_FORBIDDEN)
        return
    end
    return red

    --- script/auth.lua
    local red = require "script.red"

However, it throws the following error. Am I doing something wrong, or is this impossible to achieve?

    2014/01/17 00:05:34 [error] 7716#0: *1 lua entry thread aborted: runtime error: attempt to yield across C-call boundary
    stack traceback:
    coroutine 0:
        [C]: in function 'require'
        /home/radu/stor/script/auth.lua:2: in function </home/radu/stor/script/auth.lua:1>, client: 127.0.0.1, server: , request: "GET /nolua?AUTH_TOKEN=AAAAAAAAAAAAAAAA_BBBBBBBBBBBBBBBB_MGMGMGMGMGMGMGMG HTTP/1.1", host: "localhost:8888"

@samsonradu You're making two mistakes here:

  1. Cosocket objects cannot outlive the request handler that creates them, so you cannot share cosocket objects among different requests. You can, however, share the underlying connections via the connection pool mechanism (see the set_keepalive method).
  2. You cannot perform nonblocking I/O operations, or any other operations requiring coroutine yields (implicit or explicit), at the top level of a Lua module, because the require() builtin is currently implemented as a C function (in both LuaJIT and the standard Lua 5.1 interpreter), and neither LuaJIT 2.x nor standard Lua 5.1 can yield across the boundary of a C function call. This limitation may be removed from LuaJIT in the future.
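The two points above suggest the following pattern: create the resty.redis object inside the request handler (not at module load time), and return the underlying connection to the built-in pool with set_keepalive instead of sharing the object itself. A minimal sketch, assuming this runs in a handler such as access_by_lua_file (the key name and pool parameters are illustrative):

    -- script/auth.lua (runs per request inside an OpenResty handler)
    local redis = require "resty.redis"

    local red = redis:new()
    red:set_timeout(1000) -- 1 sec

    -- connect() transparently reuses an idle connection from the
    -- per-worker pool when one is available for this host:port
    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect: ", err)
        return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end

    local res, err = red:get("some_key")
    if not res then
        ngx.log(ngx.ERR, "failed to get: ", err)
        return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end

    -- return the connection to the pool rather than closing it:
    -- idle timeout 10s, at most 100 idle connections per worker
    local ok, err = red:set_keepalive(10000, 100)
    if not ok then
        ngx.log(ngx.ERR, "failed to set keepalive: ", err)
    end

This way each request pays only the cost of a pool lookup rather than a fresh TCP handshake, which achieves most of what sharing the client object was meant to accomplish.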

Thank you

So, what's the answer? No?

@velikanov No. It is neither easy nor cheap to pass sockets across the boundary of nginx worker processes, and the potential gain would be very limited.