balena / elixir-sippet

An Elixir library designed to be used as SIP protocol middleware.

Running multiple Cores on different ports

martinos opened this issue · comments

I am wondering if there's a way to run two instances of Sippet that listen on two different ports. Since I am fairly new to Elixir, I can't tell whether it's even possible with the current code base. Am I right?

I can think of two possibilities: adding multiple transports to a single Sippet stack, and/or adding an "umbrella" Sippet.Core with tailored message routing.

Multiple transports, different protocols

The transports are wired up through the configuration, but connecting multiple ones makes sense when you support multiple protocols.

Let me explain. Check the config/config.exs file. There you can find the transports configuration: first the transport is declared and its initialization settings are specified.

# Sets the UDP plug settings:
#
# * `:port` is the UDP port to listen (required).
# * `:address` is the local address to bind (optional, defaults to "0.0.0.0")
config :sippet, Sippet.Transports.UDP.Plug,
  port: 5060,
  address: "127.0.0.1"

The next step is to declare the transport "pool", which basically maintains the connections (UDP is obviously not connection-oriented, but the pool maintains port mappings useful for NAT traversal).

# Sets the message processing pool settings:
#
# * `:size` is the pool size (optional, defaults to
#   `System.schedulers_online/0`).
# * `:max_overflow` is the acceptable number of extra workers under high load
#   (optional, defaults to 0, or no overflow).
config :sippet, Sippet.Transports.Pool,
  size: System.schedulers_online(),
  max_overflow: 0

And finally we declare the protocol mapping:

# Sets the transport plugs, or the supported SIP transport protocols.
config :sippet, Sippet.Transports,
  udp: Sippet.Transports.UDP.Plug

That means whenever a message requires the UDP protocol (per its Via header), the stack will use the Sippet.Transports.UDP.Plug module to dispatch it.

So if you intend to support multiple protocols, this solution will work just fine.
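For instance, a multi-protocol mapping might look like the sketch below. Note that `Sippet.Transports.TCP.Plug` is assumed here purely for illustration; check which plug modules your Sippet version actually ships before using it.

```elixir
# Hypothetical sketch: mapping two protocols to their plug modules.
# The :tcp entry is an assumption for illustration, not a confirmed
# module of the library.
config :sippet, Sippet.Transports,
  udp: Sippet.Transports.UDP.Plug,
  tcp: Sippet.Transports.TCP.Plug
```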

Multiple transports, same protocol

When I wrote the library, I didn't consider having multiple transports of the same protocol in a single Sippet stack instance. I designed it to be used in SIP proxies, tailored services, and gateways, so routing transports simply by the message protocol (taken from the Via header, as originally specified by RFC 3261) was convenient at the time.

But if this limitation doesn't suit you, no problem. The idea is to modify the transport routing mechanism to also consider the ports involved. Most of the transport routing happens in the Sippet.Transports module.
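A minimal sketch of that idea, assuming you extend the lookup key from the protocol alone to a `{protocol, local_port}` pair. All module names here (`MyApp.UDP5060Plug`, `MyApp.UDP5070Plug`) are hypothetical placeholders, not part of the library:

```elixir
# Hypothetical sketch of port-aware transport routing. The real
# Sippet.Transports module routes by protocol only; here the lookup
# key is extended to {protocol, local_port}.
defmodule PortAwareRouting do
  @plugs %{
    {:udp, 5060} => MyApp.UDP5060Plug,
    {:udp, 5070} => MyApp.UDP5070Plug
  }

  # Picks the plug module for a given protocol and local port;
  # raises when no transport is configured for that pair.
  def plug_for(protocol, port) do
    Map.get(@plugs, {protocol, port}) ||
      raise ArgumentError, "no transport for #{protocol}:#{port}"
  end
end
```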

Multiple Sippet.Core instances

Basically, you can create an "umbrella" Sippet.Core that dispatches messages to multiple other processes if you want. You decide entirely how to route the messages; you may use any parameter carried in the SIP messages. The SIP RFCs usually give good hints on how to create these routes.
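As a sketch of the dispatch idea, assuming you route by request method: a single front module forwards each incoming request to one of two underlying handlers. In a real application this module would implement the actual Sippet.Core callbacks; here the handler modules and the message shape are simplified stand-ins.

```elixir
# Simplified stand-in handlers; in practice these would be your own
# Core implementations.
defmodule RegistrarCore do
  def receive_request(_request, _key), do: :registrar
end

defmodule ProxyCore do
  def receive_request(_request, _key), do: :proxy
end

# "Umbrella" core: routes REGISTER requests to one handler and
# everything else to another. The %{method: ...} map is a simplified
# stand-in for a real SIP message struct.
defmodule UmbrellaCore do
  def receive_request(%{method: :register} = req, key),
    do: RegistrarCore.receive_request(req, key)

  def receive_request(req, key),
    do: ProxyCore.receive_request(req, key)
end
```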

As a last idea, instead of a single Elixir application, you can create multiple ones and run them in isolation. Is there any reason to have multiple Cores running side by side in the same Application?

Thanks a lot for taking time to answer me.

I've built a STIR/SHAKEN application (https://transnexus.com/whitepapers/understanding-stir-shaken/) to authenticate and validate calls. Since I tested the authentication part using the validation part and vice versa, I built both in the same library (stir_shaken) to avoid a circular dependency.

The final solution I came up with was two empty apps, each with its own configuration and its own Core implementation, both defined in the stir_shaken library.

My first goal was to make it work in the same app, but that seemed hard to do. I am quite happy with the result; it's just a bit more work to deploy. But that's a small price to pay compared to the benefit I get from using your library.

I am closing the issue because this is just a nice-to-have and it might take too much effort to implement.