sewenew / redis-plus-plus

Redis client written in C++

Async futures issue

mlanzuisi opened this issue · comments

Hello,

While trying to implement the async version of the commands with the redis-plus-plus library, I did something like the following:
1 - each command sent returns a future, as follows:

std::future<bool> *myfuture = new std::future<bool>;
*myfuture = m_async_redis_cluster->set(key_sw, value_sw, std::chrono::seconds(seconds));

2 - each future is added to the back of a vector, like this:

m_pending_bool_futures.push_back(std::move(*myfuture));

3 - after a number of operations, results are read this way:

bool ret = m_pending_bool_futures.front().get();
m_pending_bool_futures.erase(m_pending_bool_futures.begin());

The problem is: when I start executing this last part (reading and removing the futures), the get() calls throw the following exceptions:

Caught exception "connection is closing" (only 1, the first error)
Caught exception "No associated state" (many subsequent messages)

Sometimes I receive

Caught exception "failed to connect to Redis (anritsu-redis-server-0.svc-redis:6379): Timeout"

as the first message instead of the "connection is closing" one.

What could this be? Some libuv connection timeout, or something else?

Sorry, but I did some tests and cannot reproduce the problem.

Can you test with the latest code on the master branch? If you still have the problem, please provide a minimal compilable code snippet that reproduces it, so that I can debug it. Thanks!

Regards

Hi,

try with the attached file:
main_futures.zip

  • "failed to connect to Redis (anritsu-redis-server-0.svc-redis:6379): Timeout" means your set command is timed out.
  • "connection is closing" means AsyncRedisCluster is destroyed (destructor is called) before the queued commands are sent to Redis.

I did some research on your code, and it's buggy.

bool ret = m_pending_bool_futures.front().get();       // ---------> (1)
std::printf("Ret set \"%u\"\n", ret);
m_pending_bool_futures.erase(m_pending_bool_futures.begin());   // -----> (2)

If (1) throws, (2) won't be executed, i.e. the front element won't be erased even though future::get has already been called on it. The loop then continues, and (1) executes again on that same front element. However, since future::get has already been called on it, the behavior is undefined.

Since, as you described above, your code does catch exceptions (either the timeout or "connection is closing"), you run into exactly this problem, i.e. future::get gets called twice on the same future object. That's why you see "No associated state".
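
For reference, here is a tiny standalone snippet (plain std::future, no redis-plus-plus involved) that shows the same message. Calling get() twice is undefined by the standard, but common implementations (libstdc++/libc++) throw std::future_error, which is where "No associated state" comes from:

#include <future>
#include <iostream>

int main() {
    std::promise<bool> p;
    std::future<bool> fut = p.get_future();
    p.set_value(true);

    std::cout << "first get(): " << fut.get() << "\n";   // consumes the shared state
    std::cout << "valid(): " << fut.valid() << "\n";     // prints 0 now

    // Calling get() again on an invalid future is undefined by the standard;
    // common implementations throw std::future_error ("No associated state").
    try {
        fut.get();
    } catch (const std::future_error &e) {
        std::cout << "second get(): " << e.what() << "\n";
    }
    return 0;
}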

In order to fix the problem, you can try the following pseudo code:

for (int i = 0; i < key_no; i++) {
  try {
     m_pending_bool_futures.push_back(m_async_redis_cluster->set(key, value));
     if (m_pending_bool_futures.size() > FUTURE_LIMIT / 2) {
        try {
          bool ret = m_pending_bool_futures.front().get();
        } catch (const Error &) {
            // error handling
        }
        // Always erase the front future, whether or not get() threw an exception.
        m_pending_bool_futures.erase(m_pending_bool_futures.begin());
     }
  } catch (const Error &e) {
     // error handling
  }
}
// Ensure get() has been called on every remaining future.
for (auto &fut : m_pending_bool_futures) {
  try {
     bool ret = fut.get();
  } catch (const Error &e) {
      // error handling
  }
}
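
If it helps, below is a more complete, compilable sketch of the same pattern, assuming the default std::future-based build of redis-plus-plus. The host/port, the FUTURE_LIMIT value, and the keys/values are placeholders for your own configuration:

#include <sw/redis++/async_redis++.h>

#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    using sw::redis::AsyncRedisCluster;
    using sw::redis::Error;
    using sw::redis::Future;

    sw::redis::ConnectionOptions opts;
    opts.host = "127.0.0.1";    // placeholder: any node of the cluster
    opts.port = 7000;           // placeholder port
    AsyncRedisCluster cluster(opts);

    constexpr std::size_t FUTURE_LIMIT = 1000;   // placeholder pipeline depth
    std::vector<Future<bool>> pending;

    for (int i = 0; i < 100000; ++i) {
        auto key = "key:" + std::to_string(i);
        pending.push_back(cluster.set(key, "value"));

        // Bound the number of in-flight commands: once half the limit is reached,
        // consume the oldest future. Erase it whether or not get() throws, so it
        // can never be consumed twice.
        if (pending.size() > FUTURE_LIMIT / 2) {
            try {
                bool ret = pending.front().get();
                (void)ret;
            } catch (const Error &e) {
                std::fprintf(stderr, "set failed: %s\n", e.what());
            }
            pending.erase(pending.begin());
        }
    }

    // Drain the remaining futures before `cluster` goes out of scope, so its
    // destructor doesn't close the connection while commands are still queued.
    for (auto &fut : pending) {
        try {
            fut.get();
        } catch (const Error &e) {
            std::fprintf(stderr, "set failed: %s\n", e.what());
        }
    }

    return 0;
}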

Sorry for the late reply...

Regards

Thank you for your reply.
After making the changes you described, I still get the same errors, either "failed to connect to Redis" or "connection is closing", but "No associated state" disappeared, thank you.

Anyway, it seems that if I start calling the set() and get() functions too quickly, these errors appear.
They disappear completely if I add a "sleep(1)" call between the AsyncRedisCluster constructor and the get/set loops.

You should ensure future::get has been called on all pending futures before the AsyncRedisCluster is destroyed. Otherwise, you'll get "connection is closing", since the destructor of AsyncRedisCluster closes the underlying connection.
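
For example, if the cluster and the futures are class members, as in your snippets, you can drain the futures in the destructor body, which runs before any member is destroyed. The member names below come from your snippets; everything else is illustrative:

#include <sw/redis++/async_redis++.h>

#include <memory>
#include <string>
#include <vector>

class RedisWriter {
public:
    explicit RedisWriter(const sw::redis::ConnectionOptions &opts)
        : m_async_redis_cluster(
              std::make_unique<sw::redis::AsyncRedisCluster>(opts)) {}

    ~RedisWriter() {
        // Consume every outstanding result first; only after the destructor body
        // finishes are the members destroyed and the connection closed.
        for (auto &fut : m_pending_bool_futures) {
            try {
                fut.get();
            } catch (const sw::redis::Error &) {
                // error handling
            }
        }
    }

    void async_set(const std::string &key, const std::string &value) {
        m_pending_bool_futures.push_back(m_async_redis_cluster->set(key, value));
    }

private:
    std::unique_ptr<sw::redis::AsyncRedisCluster> m_async_redis_cluster;
    std::vector<sw::redis::Future<bool>> m_pending_bool_futures;
};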

Regards

Thank you, the issue can be considered closed.