runv start failed
jovizhangwei opened this issue
Hi,
I followed the instructions in the README: https://github.com/hyperhq/runv#run.
I built the latest runv code and installed the kernel and initrd from the hyperstart rpm:
sudo rpm -ivh https://hypercontainer-download.s3-us-west-1.amazonaws.com/0.8/centos/hyperstart-0.8.1-1.el7.centos.x86_64.rpm
But runv does not start as the README demonstrates:
$sudo ./runv --kernel /var/lib/hyper/kernel --initrd /var/lib/hyper/hyper-initrd.img
NAME:
runv - Open Container Initiative hypervisor-based runtime
runv is a command line client for running applications packaged according to
the Open Container Format (OCF) and is a compliant implementation of the
Open Container Initiative specification. However, due to the difference
between hypervisors and containers, the following sections of OCF don't
apply to runV:
Namespace
Capability
Device
"linux" and "mount" fields in OCI specs are ignored
...
runv didn't even give an error message, just the usage text above. Did I miss something, or is it a binary mismatch issue?
It's also odd that neither the hyperstart rpm nor the hyper-container rpm includes the runv binary. Why not include runv in the hyperstart rpm?
Great thanks.
We will add the runv binary to the hyper-container rpm at the 1.0 release, or create a new rpm package for it. Thank you for reporting it.
I think you may need to use runv built from source before that.
And it is recommended to use the newest hyperstart with the current runv, since runv has changed substantially since 0.8.0.
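For reference, a rough build sketch (this assumes both repositories still use the autotools flow and that hyperstart drops its artifacts under build/; adjust if your checkouts differ):
# build and install the latest runv (installs to /usr/local/bin/runv by default)
git clone https://github.com/hyperhq/runv.git
cd runv && ./autogen.sh && ./configure && make && sudo make install
cd ..
# build a matching hyperstart to get the guest kernel and initrd
git clone https://github.com/hyperhq/hyperstart.git
cd hyperstart && ./autogen.sh && ./configure && make
# the kernel and hyper-initrd.img are expected under ./build/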
Thanks @laijs for quick reply.
I built the newest hyperstart and runv as you said, but it still fails to start, with no error message.
$./runv --version
runv version 0.8.1, commit: v1.0.0-rc2-14-g99936bd
$file /var/lib/hyper/hyper-initrd.img
/var/lib/hyper/hyper-initrd.img: gzip compressed data, from Unix, last modified: Thu Aug 24 17:42:51 2017, max compression
$./runv --kernel /var/lib/hyper/kernel --initrd /var/lib/hyper/hyper-initrd.img
NAME:
runv - Open Container Initiative hypervisor-based runtime
...
Would you mind sharing a verified runv binary and a kernel/initrd of the same version somewhere on the internet? I can run a quick test on my machine. Thanks.
Oh, we are sorry: runv --kernel kernel --initrd initrd.img on its own is outdated. Please use runv run container_name or other subcommands.
COMMANDS:
create create a container
exec exec a new program in runv container
kill kill sends the specified signal (default: SIGTERM) to the container's init process
list lists containers started by runv with the given root
ps ps displays the processes running inside a container
run run a container
spec create a new specification file
start executes the user defined process in a created container
state output the state of a container
manage manage VMs, network, defaults ....
pause suspend all processes in the container
resume resume all processes in the container
delete delete any resources held by the container often used with detached container
proxy [internal command] proxy hyperstart API into vm and watch vm
shim [internal command] proxy operations(io, signal ...) to the container/process
network-nslisten [internal command] collection net namespace's network configuration
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--debug enable debug output for logging, saved on the dir specified by log_dir via glog style
--log_dir value the directory for the logging (glog style) (default: "/var/log/hyper")
--log value [ignored on runv] set the log file path where internal debug information is written
--log-format value [ignored on runv] set the format used by logs ('text' (default), or 'json')
--root value root directory for storage of container state (this should be located in tmpfs) (default: "/run/runv")
--driver value hypervisor driver (supports: kvm xen vbox)
--default_cpus value default number of vcpus to assign pod (default: 1)
--default_memory value default memory to assign pod (mb) (default: 128)
--kernel value kernel for the container
--initrd value runv-compatible initrd for the container
--bios value bios for the container
--cbfs value cbfs for the container
--template value path to the template vm state directory
--vbox value runv-compatible boot ISO for the container for vbox driver
--help, -h show help
--version, -v print the version
It is recommended to add the --debug argument when you try it, so that you can get logs from /var/log/hyper in case of any problem. There will be multiple log files for each container. You can also combine it with --log_dir /path/to/container/name/log/dir if you want to find the log files for a specific container more conveniently.
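For example (kernel/initrd paths as used earlier in this thread; the container name and log directory are just illustrations):
sudo runv --debug --log_dir /var/log/hyper/mycontainer \
    --kernel /var/lib/hyper/kernel --initrd /var/lib/hyper/hyper-initrd.img \
    run mycontainer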
Another error is reported. My machine can run clear-container successfully; is something wrong with my qemu version (1.5.3)?
$sudo runv --kernel /var/lib/hyper/kernel --initrd /var/lib/hyper/hyper-initrd.img run ubuntu
E0825 09:59:39.162677 20412 qemu_process.go:153]
(process:20425): GLib-WARNING **: gmem.c:482: custom memory allocation vtable not supported
qemu-system-x86_64: -machine pc-i440fx-2.1,accel=kvm,usb=off: Unsupported machine type
Use -machine help to list supported machines!
E0825 09:59:39.163159 20412 qemu_process.go:157] exit status 1
E0825 09:59:49.131108 20412 qmp_handler.go:371] QMP initialize timeout
E0825 09:59:49.177194 20412 qmp_handler.go:164] failed to connected to /var/run/hyper/vm-VCOFpKbSNh/qmp.sock: dial unix /var/run/hyper/vm-VCOFpKbSNh/qmp.sock: connect: no such file or directory
E0825 09:59:49.177285 20412 qmp_handler.go:364] QMP initialize failed
E0825 09:59:49.177591 20412 vm_states.go:226] SB[vm-VCOFpKbSNh] Start POD failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
E0825 09:59:49.177618 20412 sandbox.go:104] StartPod fail, response: &api.ResultBase{Id:"vm-VCOFpKbSNh", Success:false, ResultMessage:"got failed event when wait init message"}
Failed to load the container after created, err: &os.PathError{Op:"open", Path:"/run/runv/ubuntu/state.json", Err:0x2}
qemu version 1.5.3 is too old; could you try to install a newer one?
You can also use the one provided by us: https://s3-us-west-1.amazonaws.com/hypercontainer-download/qemu-hyper/qemu-hyper-2.4.1-3.el7.centos.x86_64.rpm
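Installation follows the same rpm -ivh pattern as the hyperstart package earlier in this thread:
sudo rpm -ivh https://s3-us-west-1.amazonaws.com/hypercontainer-download/qemu-hyper/qemu-hyper-2.4.1-3.el7.centos.x86_64.rpm
# then check which qemu-system-x86_64 version is now on PATH
qemu-system-x86_64 --version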
The new qemu works, but runv fails again when I follow the steps in the README. I believe the instructions in the README are incomplete and will mislead new users.
$runv spec
$sudo runv --kernel /var/lib/hyper/kernel --initrd /var/lib/hyper/hyper-initrd.img run ubuntu
E0825 13:42:19.947592 4722 filesystem.go:158] mount /home/xianwei/rootfs to /var/run/hyper/vm-nDOTnpQfFu/share_dir/ubuntu/rootfs failed: no such file or directory
E0825 13:42:19.953188 4722 qmp_handler.go:141] QMP exit as got error: read unix @->/var/run/hyper/vm-nDOTnpQfFu/qmp.sock: use of closed network connection
Run Container error: failed to create container: no such file or directory
Would you please list the full command lines to start a simple container with runv?
A bundle is needed here.
# create the top most bundle directory
mkdir /mycontainer
cd /mycontainer
# create the rootfs directory
mkdir rootfs
# export busybox via Docker into the rootfs directory
docker export $(docker create busybox) | tar -C rootfs -xvf -
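Then, from the bundle directory, a config.json can be generated and the container started. A sketch (kernel/initrd paths as earlier in this thread; the trailing argument is just a container name):
# generate a default config.json in the bundle
runv spec
# start the container from the bundle directory
sudo runv --kernel /var/lib/hyper/kernel --initrd /var/lib/hyper/hyper-initrd.img run mycontainer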
It works now.
I suggest putting the sample below into the "Run" section of README.md (https://github.com/hyperhq/runv#run).
mkdir mycontainer; cd mycontainer
mkdir rootfs
runv spec
sudo docker export $(sudo docker create busybox) | tar -C rootfs -xvf -
sudo runv --kernel /var/lib/hyper/kernel --initrd /var/lib/hyper/hyper-initrd.img run mycontainer
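Once it is running, the container can be checked from another shell with the subcommands listed in the help above, e.g.:
sudo runv list
sudo runv state mycontainer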
Thanks for your patient help. @laijs
Hi @laijs, I used runv as my dockerd runtime, but it failed to start a container, and no clear error is logged.
My docker version is 17.03.2-ee-5; it was tested with clear container successfully.
$sudo /usr/bin/dockerd -D --add-runtime cor=/usr/bin/runv --default-runtime=cor
...
ERRO[0014] containerd: start container error=containerd: container not started id=aabafca5dd32fc70391534b4cda557541c581aa6c5c2fc75b296c11f41276074
ERRO[0014] Create container failed with error: containerd: container not started
...
$sudo docker run -it --rm ubuntu bash
docker: Error response from daemon: containerd: container not started.
Is there any log file I can check for runv?
The example is being completed via #579. Thanks.
We documented a slightly different way to use runv with docker: https://github.com/hyperhq/runv/blob/master/docs/configure-runv-with-containerd-docker.md#work-with-docker. How about that way?
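Separately from that doc, if you stay with the --add-runtime approach from your command line, the same registration can also live in /etc/docker/daemon.json (a sketch; the runtime name and binary path just mirror your command above):
{
    "default-runtime": "cor",
    "runtimes": {
        "cor": {
            "path": "/usr/bin/runv"
        }
    }
}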
In any case, logs can be found in /var/log/hyper/.
Could you check the /usr/bin/runv in your environment, please? It might be old or missing.
The default install location for runv is /usr/local/bin/runv, so when you do make install, /usr/bin/runv won't be overwritten with the new one.
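A quick way to check which binary docker is actually picking up and whether it is stale (assuming the usual install locations):
which -a runv
/usr/bin/runv --version
/usr/local/bin/runv --version
# if docker is configured to use /usr/bin/runv, point it at the freshly built binary, e.g.:
sudo ln -sf /usr/local/bin/runv /usr/bin/runv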
$/usr/bin/runv --version
runv version 0.8.1, commit: v1.0.0-rc2-14-g99936bd
$ls /var/log/runv
ls: cannot access /var/log/runv: No such file or directory
Could you try to run /usr/bin/runv --version, please?
runv is an OCI runtime, but it is also the runtime of hyper-container, which is under development by hyperhq, so the logs are in /var/log/hyper/. You can specify the log_dir via --log_dir if you use runv directly.
You can also use a wrapper shell script to specify the log_dir when you use runv with docker. I could give an example later.
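As a sketch, a wrapper along these lines (the script path and log directory are only examples) could be registered with --add-runtime instead of runv itself:
#!/bin/sh
# hypothetical /usr/local/bin/runv-docker: force debug logging and a fixed log_dir,
# then pass everything docker hands us straight through to runv
exec /usr/local/bin/runv --debug --log_dir /var/log/hyper "$@"
Then start dockerd with --add-runtime cor=/usr/local/bin/runv-docker --default-runtime=cor as before.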
Please see my last post.
runv and the kernel/initrd are all the newest versions, compiled by myself.
Could you update the docker in your environment to the newest version, please?
We tried both 17.05.0-ce and 17.03.2-ce; both tests got:
"Run Container error: load config failed: json: cannot unmarshal array into Go struct field Process.capabilities of type specs.LinuxCapabilities"
It seems that docker 17.05.0-ce, 17.03.2-ce, and 17.03.2-ee-5 use an old version of the runtime-spec. We are sorry, but runv won't go back to the old version of the runtime-spec.
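Concretely, the error above is a config.json format mismatch: those docker versions still emit Process.capabilities as a plain string array, while the runtime-spec version that runv follows expects a structured object. Roughly (the capability lists are only illustrative):
old format written by those dockers:
    "capabilities": ["CAP_AUDIT_WRITE", "CAP_KILL"]
format expected by current runv:
    "capabilities": {
        "bounding": ["CAP_AUDIT_WRITE", "CAP_KILL"],
        "effective": ["CAP_AUDIT_WRITE", "CAP_KILL"],
        "permitted": ["CAP_AUDIT_WRITE", "CAP_KILL"]
    }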
Thanks @laijs , it works with docker 17.06.1-ee.
I tried limiting the CPU and memory; the memory limit seems correct, but --cpuset-cpus doesn't work. Is this normal?
My test command: "sudo docker run -it --rm --cpuset-cpus=0-3 -m=1G ubuntu bash"
--cpuset-cpus is not supported yet. Since this is a hypervisor-based runtime, what "cpuset" should mean is still unclear. We may apply the cpuset to the qemu process, while the container processes can access all of their VCPUs. What do you think?