smartxworks / virtink

Lightweight Virtualization Add-on for Kubernetes


Are there any examples of using a block PVC as the root disk?

weixiao-huang opened this issue

I tried to write a vm.yaml like the one below:

apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-container-rootfs
spec:
  instance:
    memory:
      size: 4Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      persistentVolumeClaim:
        claimName: test-vm
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}

and I have a PVC test-vm like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vm
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      pv: test-vm
  storageClassName: csi-rbd-storageclass
  volumeMode: Block
  volumeName: test-vm
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  phase: Bound

I want to use the PVC test-vm as the root disk for the VM. Are there any examples of how to import a rootfs into this block PVC device?

I tried to use

FROM ubuntu:jammy AS rootfs
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends systemd-sysv udev lsb-release cloud-init sudo openssh-server && \
    rm -rf /var/lib/apt/lists/*

to build a Docker image ubuntu-rootfs, and then ran

docker create --name ubuntu-rootfs ubuntu-rootfs
docker export ubuntu-rootfs | tar -xvf - -C ./my-rootfs-pvc-mount-dir

where ./my-rootfs-pvc-mount-dir is the path where I manually mounted the PVC on the host machine.

After doing this, the block PVC can be used in the VM.
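The manual import above can be sketched end to end as follows. The device name /dev/rbd0 and the mount directory are illustrative assumptions, not from the original steps; check `lsblk` on the node for the actual device backing your PVC, and note the format/mount steps need root:

```shell
# Format and mount the raw block device backing the PVC (needs root).
# /dev/rbd0 is an assumed device name -- find yours with `lsblk`.
mkfs.ext4 /dev/rbd0
mkdir -p ./my-rootfs-pvc-mount-dir
mount /dev/rbd0 ./my-rootfs-pvc-mount-dir

# Export the container image's filesystem straight onto the mounted device.
docker create --name ubuntu-rootfs ubuntu-rootfs
docker export ubuntu-rootfs | tar -xf - -C ./my-rootfs-pvc-mount-dir

# Unmount before handing the PVC to the VM.
umount ./my-rootfs-pvc-mount-dir
```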

Could I open a PR for adding it into the docs?

On the other hand, the rootfs disk does not seem to support xfs. With ext4 I could create the VM successfully, but with xfs I got the error below:

[    0.915391]  driver: virtio_blk
[    0.917481] No filesystem could mount root, tried:
[    0.917481]  ext3
[    0.918974]  ext2
[    0.919595]  ext4
[    0.920285]  vfat
[    0.920910]  msdos
[    0.921529]  iso9660
[    0.922170]  fuseblk
[    0.922860]
[    0.924047] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(254,0)
[    0.926612] Kernel Offset: disabled
[    0.927685] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(254,0) ]---

Could virtink or cloud-hypervisor support xfs rootfs disk type?

The kernel smartxworks/virtink-kernel-5.15.12 we provide doesn't include the xfs module. We try to keep it minimal; you can include any kernel modules you like and rebuild it yourself.

As for using a PVC (either filesystem mode or block mode) as the root disk, I'd recommend packing and publishing your rootfs as a QCOW2 image and then using a CDI DataVolume to import it. This is a more general and automated approach.
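As a hedged sketch of such an import (assuming CDI is installed in the cluster; the URL, name, and storage class are placeholders, not from this thread):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-rootfs            # placeholder name
spec:
  source:
    http:
      url: https://example.com/rootfs.qcow2   # your published QCOW2 image
  pvc:
    accessModes:
      - ReadWriteOnce
    volumeMode: Block            # import into a raw block PVC
    storageClassName: csi-rbd-storageclass    # placeholder storage class
    resources:
      requests:
        storage: 50Gi
```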

But I could not use a CDI DataVolume with direct kernel boot. I only succeeded with the block PVC imported via

docker create --name ubuntu-rootfs ubuntu-rootfs
docker export ubuntu-rootfs | tar -xvf - -C ./my-rootfs-pvc-mount-dir

I think this method could be added to the docs.

Direct kernel boot should work with CDI data volumes. Could you share steps to reproduce it if it didn't work? @weixiao-huang

I tried to start a VM with a PVC, but I got an error:

(error screenshot)

My steps:

  1. Use hostPath to create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  hostPath:
    path: "/mnt/local_pv"
  2. Create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hostpath
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      type: local
  volumeMode: Block
  volumeName: pv-hostpath
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  phase: Bound

  3. Build a rootfs:
    Dockerfile:
FROM ubuntu:jammy AS rootfs
RUN apt-get update -y && \
   apt-get install -y --no-install-recommends systemd-sysv udev lsb-release cloud-init sudo openssh-server && \
   rm -rf /var/lib/apt/lists/*
RUN apt-get install python3

FROM smartxworks/virtink-container-rootfs-base
COPY --from=rootfs / /rootfs
RUN ln -sf ../run/systemd/resolve/stub-resolv.conf /rootfs/etc/resolv.conf

run the commands:

docker build -t ubuntu-rootfs .
docker create --name ubuntu-rootfs ubuntu-rootfs
docker export ubuntu-rootfs | tar -xvf - -C /mnt/local_pv

(screenshot)

  4. Tried to start the VM:
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: test-vm
spec:
  instance:
    memory:
      size: 4Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      persistentVolumeClaim:
        claimName: pvc-hostpath
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}

Thanks for your reply.

@wavemomo Although you specified volumeMode: Block in the PV spec, I don't think a hostPath PV can be a block PV (correct me if I'm wrong). To use direct kernel boot with a block PV, you need to:

  1. Have a raw block mode PV. You can verify if it's a raw block mode PV by using it in a pod's volumeDevices. And it should appear as a raw block device (not a directory) inside the pod.
  2. Import your rootfs content using CDI or manually. To manually import your rootfs into a raw block device, first format the raw block device and mount it; then copy your rootfs into it. Please note that for the image built by the rootfs Dockerfile in our sample, the rootfs content is inside the /rootfs path, not the image root itself.
  3. Use the imported PV in the VM.
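To illustrate the raw-block check in step 1, a minimal verification pod might look like this (the names are illustrative). Inside the container, the devicePath should show up as a block device, not a directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: block-check               # placeholder name
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:              # block-mode attachment, not volumeMounts
        - name: data
          devicePath: /dev/xvda   # check with `ls -l /dev/xvda` in the pod
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-hostpath
```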

Importing a rootfs into a PV manually is a non-trivial task. That's why we recommend that users first pack the rootfs as a QCOW2 image and then use CDI to import it. To pack a rootfs into a QCOW2 image, you need to:

  1. Build the rootfs image as you would using the Dockerfile.
  2. Truncate a large enough empty file to be the raw disk file.
  3. Format and mount that raw disk file.
  4. Copy your rootfs content into it.
  5. Unmount it.
  6. Convert the raw disk file to a QCOW2 file using the qemu-img command.
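The six steps above can be sketched as a script. The size, file names, and mount point are illustrative assumptions; the mount and umount steps need root, and qemu-img must be installed:

```shell
docker build -t ubuntu-rootfs .                      # 1. build the rootfs image
truncate -s 8G disk.raw                              # 2. create an empty raw disk file
mkfs.ext4 -F disk.raw                                # 3. format it ...
mkdir -p /mnt/rootfs-disk
mount -o loop disk.raw /mnt/rootfs-disk              #    ... and loop-mount it (needs root)
docker create --name ubuntu-rootfs ubuntu-rootfs     # 4. copy the rootfs content in
docker export ubuntu-rootfs | tar -xf - -C /mnt/rootfs-disk
umount /mnt/rootfs-disk                              # 5. unmount
qemu-img convert -f raw -O qcow2 disk.raw rootfs.qcow2   # 6. convert raw -> QCOW2
```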

I used Ceph RBD for the block PVC. With that, it worked.

@wavemomo Your Dockerfile is not correct. The Dockerfile I used is

FROM ubuntu:jammy AS rootfs
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends systemd-sysv udev lsb-release cloud-init sudo openssh-server && \
    rm -rf /var/lib/apt/lists/*
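For comparison with wavemomo's Dockerfile above, one likely problem (my assumption, not stated in the thread) is the separate `RUN apt-get install python3` step: it runs after the apt lists were removed by the previous layer and lacks the -y flag, so it will fail. If python3 is also needed, a sketch that installs it in the same RUN layer:

```dockerfile
FROM ubuntu:jammy AS rootfs
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends systemd-sysv udev lsb-release cloud-init sudo openssh-server python3 && \
    rm -rf /var/lib/apt/lists/*
```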