openebs / zfs-localpv

Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that are integrated with a backend ZFS data storage stack.

Home Page: https://openebs.io


Cross-Node Pod Access to ZFS Pool in a Multi-Node Cluster

kapilsingh421 opened this issue

I'm working with a 3-node cluster and have set up a ZFS pool on the first node which currently holds our data. My goal is to have 3 pod replicas that can run on all three nodes (node 1, node 2, and node 3) while still being able to access the data from the ZFS pool located on node 1.

Is this setup feasible within the current capabilities? Would I need to implement an NFS share to facilitate this access, or is there native support for such cross-node data accessibility using zfs-localpv? Any guidance or suggestions on how to achieve this would be greatly appreciated.
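
For context, zfs-localpv provisions strictly node-local volumes: the StorageClass is normally pinned to the node that owns the pool, and the resulting PV carries node affinity for that node, so replicas scheduled on node 2 or node 3 cannot mount it directly. A minimal sketch of such a pinned StorageClass, assuming a pool named `zfspv-pool` on a node whose hostname is `node-1`:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"        # assumed pool name
  fstype: "zfs"
allowedTopologies:
  - matchLabelExpressions:
      - key: kubernetes.io/hostname
        values:
          - node-1              # assumed hostname of the node holding the pool
```

Any PV provisioned from this class is only mountable on `node-1`, which is exactly why cross-node pod access needs an extra layer such as NFS.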

I'm coincidentally investigating something similar.
It's a slightly different use case (we want replicated ZFS), but the same requirement towards zfs-localpv:
how do I layer another CSI driver on top of it? In your case, probably NFS.

Of course, if you can get away with manual configuration, you could use https://github.com/kubernetes-csi/csi-driver-nfs

But it would be ideal if that driver could in turn tell zfs-localpv to create and manage a ZFS dataset for each NFS volume.
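
One manually wired version of this, sketched below rather than a supported integration: carve a zfs-localpv PVC on the ZFS node, export it through an in-cluster NFS server pod (deployment omitted here), and point a csi-driver-nfs StorageClass at that server. The Service name `nfs-server.default.svc.cluster.local` and the export path are assumptions:

```yaml
# Backing PVC on the ZFS node; an NFS server pod mounts and exports it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-backend
spec:
  storageClassName: openebs-zfspv   # the zfs-localpv class sketched above
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
---
# csi-driver-nfs StorageClass that hands out RWX volumes from that export.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-on-zfs
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.default.svc.cluster.local  # assumed in-cluster Service
  share: /                                      # assumed export path
reclaimPolicy: Delete
```

This gives pods on all three nodes ReadWriteMany access, at the cost of routing every I/O over the network to node 1, which also remains a single point of failure.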

@kapilsingh421 @aep
Thanks for the feature request.

What you are describing is a single clustered POSIX filesystem image that can span the cluster and be mounted into any pod/container, with its backend storage managed by node-local ZFS zpools on each node's local disks.

Can you please describe what applications you are running in the pods/containers that ALL need to see the same single global filesystem image from anywhere in the cluster?

  • and (I assume) they all need to write into that filesystem too?

What are your Apps that need this?

Actually, we don't need NFS; we use S3 instead.

We do, however, need replicated ZFS for HA in case a node fails. Instead of rolling our own, we're just waiting for Mayastor to support ZFS.

Hi @aep
When you say you want replicated ZFS... that's a little vague once you get into the details.

Which ZFS storage entity do you want to work with as the replicated unit?

  1. a ZFS dataset - i.e. pool/dataset - (which only carries a native ZFS filesystem)
  2. a ZFS zvol - i.e. a block device - (which does not carry a native ZFS filesystem, but a filesystem of your choice)
    • if so, which filesystems do you need supported?
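
For what it's worth, zfs-localpv already exposes both of those entities through the StorageClass `fstype` parameter: `"zfs"` provisions a native dataset, while a foreign filesystem value such as `"ext4"` provisions a zvol and formats it. A sketch of the two variants, with the pool name assumed:

```yaml
# Variant 1: each volume is a native ZFS dataset (pool/dataset).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-dataset
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # assumed pool name
  fstype: "zfs"            # native ZFS filesystem on a dataset
---
# Variant 2: each volume is a ZFS zvol (block device) carrying ext4.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-zvol
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # assumed pool name
  fstype: "ext4"           # zvol formatted with a filesystem of your choice
```

The distinction matters for replication: a dataset maps naturally onto zfs send/receive, whereas replicating a zvol means the replication layer only ever sees opaque blocks.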