utkuozdemir / pv-migrate

CLI tool to easily migrate Kubernetes persistent volumes


Error: all strategies failed from 1.21 (docker) to 1.22 (containerd) in Yandex Cloud

patsevanton opened this issue · comments

Describe the bug
Error: all strategies failed

To Reproduce
Steps to reproduce the behavior:

  1. helm uninstall the application on the old k8s
  2. Get the PersistentVolumeClaim from the old k8s
  3. Create the PersistentVolumeClaim on the new k8s from the one in the old k8s
  4. Run pv-migrate on a laptop; both the old and the new k8s clusters are in Yandex Cloud.
  5. Full pv-migrate command: pv-migrate migrate --source-kubeconfig /home/user/.kube/config --source-context yc-dev --source-namespace dev --dest-kubeconfig /home/user/.kube/config --dest-context yc-core-dev --dest-namespace dev --dest-delete-extraneous-files redis-data-sbppay-redis-master-0 redis-data-sbppay-redis-master-0 --log-level trace

Expected behavior
Copy data

Console output

🚀  Starting migration
❕  Extraneous files will be deleted from the destination
💭  Will attempt 3 strategies: mnt2, svc, lbsvc
🚁  Attempting strategy: mnt2
🦊  Strategy 'mnt2' cannot handle this migration, will try the next one
🚁  Attempting strategy: svc
🦊  Strategy 'svc' cannot handle this migration, will try the next one
🚁  Attempting strategy: lbsvc
🔑  Generating SSH key pair
creating 4 resource(s)
beginning wait for 4 resources with timeout of 1m0s
Service does not have load balancer ingress IP address: dev/pv-migrate-ebdba-src-sshd
Service does not have load balancer ingress IP address: dev/pv-migrate-ebdba-src-sshd
Service does not have load balancer ingress IP address: dev/pv-migrate-ebdba-src-sshd
Service does not have load balancer ingress IP address: dev/pv-migrate-ebdba-src-sshd
Deployment is not ready: dev/pv-migrate-ebdba-src-sshd. 0 out of 1 expected pods are ready
Deployment is not ready: dev/pv-migrate-ebdba-src-sshd. 0 out of 1 expected pods are ready
creating 3 resource(s)
beginning wait for 3 resources with timeout of 1m0s
📂  Copying data...   0%  [0s:0s]
🧹  Cleaning up
uninstall: Deleting pv-migrate-ebdba-src
Starting delete for "pv-migrate-ebdba-src-sshd" Service
Starting delete for "pv-migrate-ebdba-src-sshd" Deployment
Starting delete for "pv-migrate-ebdba-src-sshd" Secret
Starting delete for "pv-migrate-ebdba-src-sshd" ServiceAccount
beginning wait for 4 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-ebdba-src
uninstall: Deleting pv-migrate-ebdba-dest
Starting delete for "pv-migrate-ebdba-dest-rsync" Job
Starting delete for "pv-migrate-ebdba-dest-rsync" Secret
Starting delete for "pv-migrate-ebdba-dest-rsync" ServiceAccount
beginning wait for 3 resources to be deleted with timeout of 1m0s
purge requested for pv-migrate-ebdba-dest
✨  Cleanup done
🔶  Migration failed with this strategy, will try with the remaining strategies
Error: all strategies failed
Usage:
  pv-migrate migrate <source-pvc> <dest-pvc> [flags]

Aliases:
  migrate, m

Flags:
  -C, --dest-context string            context in the kubeconfig file of the destination PVC
  -d, --dest-delete-extraneous-files   delete extraneous files on the destination by using rsync's '--delete' flag
  -H, --dest-host-override string      the override for the rsync host destination when it is run over SSH, in cases when you need to target a different destination IP on rsync for some reason. By default, it is determined by used strategy and differs across strategies. Has no effect for mnt2 and local strategies
  -K, --dest-kubeconfig string         path of the kubeconfig file of the destination PVC
  -N, --dest-namespace string          namespace of the destination PVC
  -P, --dest-path string               the filesystem path to migrate in the destination PVC (default "/")
      --helm-set strings               set additional Helm values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --helm-set-file strings          set additional Helm values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
      --helm-set-string strings        set additional Helm STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
  -t, --helm-timeout duration          install/uninstall timeout for helm releases (default 1m0s)
  -f, --helm-values strings            set additional Helm values by a YAML file or a URL (can specify multiple)
  -h, --help                           help for migrate
  -i, --ignore-mounted                 do not fail if the source or destination PVC is mounted
  -o, --no-chown                       omit chown on rsync
  -b, --no-progress-bar                do not display a progress bar
  -c, --source-context string          context in the kubeconfig file of the source PVC
  -k, --source-kubeconfig string       path of the kubeconfig file of the source PVC
  -R, --source-mount-read-only         mount the source PVC in ReadOnly mode (default true)
  -n, --source-namespace string        namespace of the source PVC
  -p, --source-path string             the filesystem path to migrate in the source PVC (default "/")
  -a, --ssh-key-algorithm string       ssh key algorithm to be used. Valid values are rsa,ed25519 (default "ed25519")
  -s, --strategies strings             the comma-separated list of strategies to be used in the given order (default [mnt2,svc,lbsvc])

Global Flags:
      --log-format string   log format, must be one of: json, fancy (default "fancy")
      --log-level string    log level, must be one of: trace, debug, info, warn, error, fatal, panic (default "info")

❌  Error: all strategies failed

Version

  • Source and destination Kubernetes versions: v1.21.5 and v1.22.6
  • Source and destination container runtimes: docker://20.10.17 and containerd://1.6.6
  • pv-migrate version and architecture: 1.0.1 - Linux x86_64
  • Installation method: binary
  • Source and destination PVC type, size and accessModes: ReadWriteOnce, 8G, N/A -> ReadWriteOnce, N/A, N/A

Additional context
Get PV from Old k8s
kubectl get pv pvc-8b2b4edb-c7be-40e7-850b-a679d000cf9d -o yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: disk-csi-driver.mks.ycloud.io
  creationTimestamp: "2022-12-02T12:35:14Z"
  finalizers:
  - kubernetes.io/pv-protection
  - external-attacher/disk-csi-driver-mks-ycloud-io
  name: pvc-8b2b4edb-c7be-40e7-850b-a679d000cf9d
  resourceVersion: "302287979"
  uid: 5d6214ce-852f-4c2e-b7f9-da674e404258
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: redis-data-sbppay-redis-master-0
    namespace: dev
    resourceVersion: "302287877"
    uid: 8b2b4edb-c7be-40e7-850b-a679d000cf9d
  csi:
    driver: disk-csi-driver.mks.ycloud.io
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1661443585177-8081-disk-csi-driver.mks.ycloud.io
      type: network-hdd
    volumeHandle: epdgffc6esj6t5iqs1mh
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - ru-central1-b
  persistentVolumeReclaimPolicy: Delete
  storageClassName: yc-network-hdd
  volumeMode: Filesystem
status:
  phase: Bound

Get PVC from Old k8s
kubectl get pvc -n dev redis-data-sbppay-redis-master-0 -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: disk-csi-driver.mks.ycloud.io
    volume.kubernetes.io/selected-node: cl1kue8pq3363orhrtk0-uryc
  creationTimestamp: "2022-12-02T12:35:03Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: sbppay
    app.kubernetes.io/name: redis
  name: redis-data-sbppay-redis-master-0
  namespace: dev
  resourceVersion: "302287976"
  uid: 8b2b4edb-c7be-40e7-850b-a679d000cf9d
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: yc-network-hdd
  volumeMode: Filesystem
  volumeName: pvc-8b2b4edb-c7be-40e7-850b-a679d000cf9d
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  phase: Bound

I removed the status and annotations and created the PVC in the new k8s cluster.
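That cleanup step can be sketched as follows, assuming the PVC manifest has been loaded as a Python dict (e.g. from `kubectl get pvc ... -o json`). The helper name is mine, not part of pv-migrate; besides status and annotations, the cluster-specific metadata and `spec.volumeName` also need to go, since they reference objects that only exist in the old cluster:

```python
import copy


def strip_cluster_fields(pvc):
    """Return a copy of a PVC manifest with cluster-specific fields
    removed, so it can be re-created in another cluster."""
    clean = copy.deepcopy(pvc)
    # Runtime state set by the old cluster's controllers.
    clean.pop("status", None)
    meta = clean.get("metadata", {})
    for key in ("annotations", "creationTimestamp", "finalizers",
                "resourceVersion", "uid", "managedFields"):
        meta.pop(key, None)
    # volumeName points at the old cluster's PV; drop it so the new
    # cluster's provisioner binds a fresh volume instead of waiting
    # forever for a PV that does not exist there.
    clean.get("spec", {}).pop("volumeName", None)
    return clean
```

After this, the cleaned manifest can be applied to the destination context and pv-migrate copies the data into the freshly provisioned volume.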

If you are running this on AWS, you probably need additional configuration for the pv-migrate Service, since AWS requires you to specify the load balancer annotations in the YAML file.
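Such annotations could be passed through the `--helm-values`/`--helm-set` flags shown in the usage above. The exact values key depends on the chart version, so check the pv-migrate Helm chart's values.yaml first; the fragment below assumes it exposes the Service annotations under `sshd.service.annotations`:

```yaml
# values.yaml, passed via: pv-migrate migrate ... --helm-values values.yaml
# ASSUMPTION: the chart exposes Service annotations under this key;
# verify against your chart version's values.yaml.
sshd:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
```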

And the error is probably because rsync couldn't establish the SSH connection from the destination to the source: if you set --log-level debug and --log-format json, you will likely see that the source's hostname cannot be resolved from the destination.
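A quick way to check that hypothesis from outside the cluster is to try resolving the load balancer hostname that the lbsvc strategy prints in the debug logs. This is a minimal sketch using Python's resolver; the hostname in the comment is a placeholder, not from the logs above:

```python
import socket


def resolves(hostname: str) -> bool:
    """True if the hostname resolves to at least one address from here."""
    try:
        socket.getaddrinfo(hostname, 22)  # 22: the sshd port lbsvc exposes
        return True
    except socket.gaierror:
        return False


# Substitute the real LB hostname from the debug logs, e.g.:
# resolves("a1b2c3.elb.example.com")  # placeholder name
```

If the hostname resolves from your laptop but not from inside the destination cluster, the --dest-host-override flag listed above can point rsync at a reachable IP instead.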