PeerSharingServer refactorisation
coot opened this issue
`computePeerSharing` should be defined in `Ouroboros.Network.PeerSharing`. We don't need to export it; it should be used directly by `peerSharingServer`. This will simplify the interface we expose to consensus, e.g. no callback in `daApplicationInitiatorResponderMode`; instead it requires access to `PublicPeerSelectionState`, which is fine, since we do that for other mini-protocols as well. Ideally `daApplicationInitiatorResponderMode` wouldn't take it as an argument; other protocols don't need to pass similar information, if I recall correctly.
IIRC it is only the way it is now because of polymorphism shenanigans and because we need access to the `PeerSelectionState`. Since `peerSharingServer` is used in consensus code when setting up the mini-protocols, and since we need access to `PeerSelectionState`, we have to partially apply `computePeerSharingServer`, which is defined in `ouroboros-network`'s `Diffusion.P2P` (a module that has access to `PeerSelectionState`), and pass it to `daApplicationInitiatorResponderMode`.
If we want to move `computePeerSharing` to `Ouroboros.Network.PeerSharing` and call it directly in `peerSharingServer`, we still need a way to pass an STM action that reads the `PeerSelectionState` to consensus. So maybe the cleanest way to do this is to define a `PeerSharing` consensus API dictionary and use that.
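One way such a dictionary could look, sketched here with stub types rather than the real `ouroboros-network` ones (`PeerSharingAPI`, `mkPeerSharingAPI`, and the field names are hypothetical, not the actual API):

```haskell
import Control.Concurrent.STM

-- Stub standing in for the real record in ouroboros-network.
data PublicPeerSelectionState = PublicPeerSelectionState
  { availableToShare :: [String]  -- stand-in for known peer addresses
  }

-- Hypothetical consensus-facing dictionary: instead of threading a callback
-- through `daApplicationInitiatorResponderMode`, consensus receives a record
-- of actions whose implementation stays on the diffusion side.
data PeerSharingAPI = PeerSharingAPI
  { readPublicState :: STM PublicPeerSelectionState
  }

mkPeerSharingAPI :: TVar PublicPeerSelectionState -> PeerSharingAPI
mkPeerSharingAPI var = PeerSharingAPI { readPublicState = readTVar var }

main :: IO ()
main = do
  var <- newTVarIO (PublicPeerSelectionState ["peer1"])
  st  <- atomically (readPublicState (mkPeerSharingAPI var))
  print (availableToShare st)
```

The point of the dictionary is that consensus only ever sees the `STM` action; who owns and mutates the underlying state remains an implementation detail of diffusion.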
We can pass `TVar PublicPeerSelectionState` to `Diffusion` through `DiffusionArguments` (even better, just pass `atomically . writeTVar publicPeerSelectionStateVar :: PublicPeerSelectionState -> m ()`; this requires some changes, but it's possible). Then we will have access to the `TVar` outside of `Diffusion`, which allows us to create `peerSharingServer` outside of the diffusion context; it will be the responsibility of consensus to create the `TVar` and pass it to the right places. This is already true for other mini-protocols, in that they create some context outside of `diffusion`: e.g. `bracketSyncWithFetchClient` requires a `FetchClientRegistry`, which is part of `NodeKernel`; maybe that's the right place for the `TVar` too.
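A minimal sketch of the writer-callback variant, specialized to `IO` and using stub types (`daUpdatePublicPeerSelectionState` is a hypothetical field name, not the real `DiffusionArguments` record):

```haskell
import Control.Concurrent.STM

data PublicPeerSelectionState = PublicPeerSelectionState
  { knownPeers :: [String]  -- stand-in for the real state
  }

-- Hypothetical slice of `DiffusionArguments`: rather than handing diffusion
-- the TVar itself, consensus passes only a write action, keeping ownership
-- of the variable on the consensus side.
data DiffusionArguments = DiffusionArguments
  { daUpdatePublicPeerSelectionState :: PublicPeerSelectionState -> IO ()
  }

main :: IO ()
main = do
  -- consensus owns the TVar ...
  var <- newTVarIO (PublicPeerSelectionState [])
  -- ... and hands diffusion only the writer
  let args = DiffusionArguments
        { daUpdatePublicPeerSelectionState = atomically . writeTVar var }
  daUpdatePublicPeerSelectionState args (PublicPeerSelectionState ["peer1"])
  st <- readTVarIO var
  print (knownPeers st)
```

Passing only the writer is the stronger encapsulation: diffusion can publish updates but can never read or retain the consensus-owned `TVar`.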
`NodeKernel` contains other pieces of diffusion state. We can also make this more explicit by providing a `State` type (`diffusion` stuff is supposed to be imported qualified, so it will be `Diffusion.State`), e.g.
```haskell
data State m peeraddr hdr blk = State {
      dsFetchClientRegistry      :: FetchClientRegistry (ConnectionId peeraddr) hdr blk m,
      dsPeerSharingRegistry      :: PeerSharingRegistry peeraddr m,
      dsPublicPeerSelectionState :: TVar m PublicPeerSelectionState
      -- ^ the new thing which we add to `NodeKernel`
    }

newState :: MonadSTM m => m (State m peeraddr hdr blk)
newState = ...
```
And then consolidate it in `NodeKernel` in a single field `getDiffusionState :: NodeKernel -> Diffusion.State`. I would define it in an `Ouroboros.Network.Diffusion.State` module and re-export it from `Ouroboros.Network.Diffusion`. This way we can extend `NodeKernel` in the future in a simple way, without many changes to `ouroboros-consensus`.
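A self-contained sketch of that consolidation, again with stub types specialized to `IO` (`DiffusionState`, `newDiffusionState`, and the single-field `NodeKernel` are hypothetical simplifications; the real `Diffusion.State` would carry the registries as well):

```haskell
import Control.Concurrent.STM

-- Stub standing in for the real ouroboros-network type.
data PublicPeerSelectionState = PublicPeerSelectionState

-- Hypothetical `Diffusion.State`, reduced to the one new field.
data DiffusionState = DiffusionState
  { dsPublicPeerSelectionState :: TVar PublicPeerSelectionState
  }

-- Consensus-side kernel exposing all diffusion state behind one accessor.
data NodeKernel = NodeKernel
  { getDiffusionState :: DiffusionState
  }

newDiffusionState :: IO DiffusionState
newDiffusionState = DiffusionState <$> newTVarIO PublicPeerSelectionState

main :: IO ()
main = do
  st <- newDiffusionState
  let kernel = NodeKernel { getDiffusionState = st }
  -- extending NodeKernel later only means adding fields to DiffusionState
  _ <- readTVarIO (dsPublicPeerSelectionState (getDiffusionState kernel))
  putStrLn "ok"
```

With this shape, adding a new piece of diffusion state touches only `Diffusion.State` and its constructor, not the `NodeKernel` record itself.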
We could also define our own `initState` function, similar to `initNodeKernel`. It would fork `blockFetchLogic`; such a function would need to take arguments which are constructed by consensus, so I am not sure if this is worth it. If we do so, then `Diffusion.State` can also contain `getFetchMode`.