Stiffstream / sobjectizer

An implementation of the Actor, Publish-Subscribe, and CSP models in one rather small C++ framework, with performance, quality, and stability proven by years in production.

Home Page: https://stiffstream.com/en/products/sobjectizer.html

Drop oldest agent message limit?

ilpropheta opened this issue · comments

Hi everybody,
speaking of message limits, I know that limit_then_drop makes the agent ignore new messages. I'm wondering whether it's possible to drop the oldest messages instead (as happens with message chains' overflow policy so_5::mchain_props::overflow_reaction_t::remove_oldest).

Many thanks!

Marco

Hi!

No, there is no such possibility. This is a consequence of SO-5's architecture: a message is passed to an event queue provided by a dispatcher, and there is no way to revoke a message after that. This design allows different dispatchers with different delivery policies and different queues (such as delivering via Windows messaging or Asio's posts). Moreover, some dispatchers may not preserve send-time ordering at all, for example when there are priorities tied to message types (or message contents).

As a workaround, you can use a mchain instead of a mbox (via mchain's as_mbox method). You can read and handle the content of that mchain via an ordinary receive, and you can get notified when a message is sent to an empty mchain by setting a not_empty_notificator. Something like this:

class agent_with_mchain : public so_5::agent_t {
  struct msg_chain_not_empty final : public so_5::signal_t {};

  so_5::mchain_t ch_;
  ...
public:
  agent_with_mchain(context_t ctx, ...)
    : so_5::agent_t{std::move(ctx)}
    , ch_{so_environment().make_mchain(
        so_5::make_limited_without_waiting_mchain_params(...)
          .not_empty_notificator( [self_mbox=so_direct_mbox()]() {
            so_5::send<msg_chain_not_empty>(self_mbox);
          })
      )}
  {...}

  void so_define_agent() override {
    so_subscribe_self([this]( mhood_t<msg_chain_not_empty> ) {
      ... // Reading ch_ while it's not empty
    });
    ...
  }
  ...
};

Hi @eao197, many thanks for the details, as usual! I didn't know about not_empty_notificator; it's very convenient.

Related question: what's the most idiomatic way to consume all messages from the chain until it becomes empty?
I usually do the following:

receive( from(ch_).handle_all().no_wait_on_empty(),
         handlers... );

I usually do the following:

It's the right way.

But, if your agent shares its working context with other agents (for example, your agents are bound to the same one_thread or thread_pool dispatcher), then additional care has to be taken: you can spend too much time inside the receive, and that could prevent the execution of events of other agents.

So if you expect a large amount of incoming messages, then handle_n can be a safer way. Something like:

receive( from(ch_).handle_n(some_limit).no_wait_on_empty(), ... );
if( !ch_->empty() )
  so_5::send<msg_chain_not_empty>(*this); // Initiate the next iteration.

Makes sense, thanks for the clarification.

In some scenarios, I have a few agents perpetually receiving from one chain until the program stops. They never share context/threads, and at most two agents work on the same chain.
When there is no work to do, they are just idle. Now, I could actually prevent such agents from being blocked perpetually in receive and use the not_empty_notificator pattern to notify them when it's time to receive.

Do you see any sensible performance benefits in doing this compared to the perpetual receive? Or, behind the scenes, they are more or less the same? I am just speculating, I should test, but I am interested in your opinion/experience.

Of course, from an agent-management point of view, I think the not_empty_notificator pattern is much more flexible.

Do you see any sensible performance benefits in doing this compared to the perpetual receive?

Using not_empty_notificator with a message send inside adds the price of sending the message (or signal) plus invoking an event handler. I think it could add significant overhead only when there are many pauses between the messages sent to the chain. For example, someone sends a couple of messages to the chain, then takes a short pause, then sends another couple of messages, then takes the next short pause, and so on. But that overhead will only be noticeable if you have a message stream of hundreds of thousands of messages per second with short pauses of around a microsecond or less.

I see the point, thanks again for your support!
I'll close the issue.