rdbreak / rhcsa8env

This is a RHCSA8 study environment built with Vagrant/Ansible

Home Page: https://join.slack.com/t/redhat-certs/shared_invite/zt-7ju3rz7b-_G3Njp3PDwdBG_81SwPeLA


No space left on device (28)

drioton opened this issue · comments

Describe the bug
Error: rsync: write failed on "/vagrant/disk-0-1.vdi": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]

server1: VM not created

To Reproduce
Steps to reproduce the behavior:

  1. Run `cd .\RHCSA\rhcsa8env-master`
  2. Run `vagrant up`
  3. Scroll down through the output
  4. See the error

PS C:\WINDOWS\system32> cd d:
PS D:\> cd .\RHCSA
PS D:\RHCSA>
PS D:\RHCSA> cd .\rhcsa8env-master
PS D:\RHCSA\rhcsa8env-master> vagrant up
Bringing machine 'server2' up with 'virtualbox' provider...
Bringing machine 'repo' up with 'virtualbox' provider...
Bringing machine 'server1' up with 'virtualbox' provider...
==> server2: Box 'rdbreak/rhel8node' could not be found. Attempting to find and install...
server2: Box Provider: virtualbox
server2: Box Version: >= 0
==> server2: Loading metadata for box 'rdbreak/rhel8node'
server2: URL: https://vagrantcloud.com/rdbreak/rhel8node
==> server2: Adding box 'rdbreak/rhel8node' (v1.0) for provider: virtualbox
server2: Downloading: https://vagrantcloud.com/rdbreak/boxes/rhel8node/versions/1.0/providers/virtualbox.box
==> server2: Box download is resuming from prior download progress
Download redirected to host: vagrantcloud-files-production.s3-accelerate.amazonaws.com
server2:
==> server2: Successfully added box 'rdbreak/rhel8node' (v1.0) for 'virtualbox'!
==> server2: Importing base box 'rdbreak/rhel8node'...
==> server2: Matching MAC address for NAT networking...
==> server2: Setting the name of the VM: rhcsa8env-master_server2_1616926384513_43393
==> server2: Clearing any previously set network interfaces...
==> server2: Preparing network interfaces based on configuration...
server2: Adapter 1: nat
server2: Adapter 2: hostonly
server2: Adapter 3: hostonly
server2: Adapter 4: hostonly
==> server2: Forwarding ports...
server2: 22 (guest) => 2222 (host) (adapter 1)
==> server2: Running 'pre-boot' VM customizations...
==> server2: Booting VM...
==> server2: Waiting for machine to boot. This may take a few minutes...
server2: SSH address: 127.0.0.1:2222
server2: SSH username: vagrant
server2: SSH auth method: private key
==> server2: Machine booted and ready!
==> server2: Checking for guest additions in VM...
server2: The guest additions on this VM do not match the installed version of
server2: VirtualBox! In most cases this is fine, but in rare cases it can
server2: prevent things such as shared folders from working properly. If you see
server2: shared folder errors, please make sure the guest additions within the
server2: virtual machine match the version of VirtualBox you have installed on
server2: your host and reload your VM.
server2:
server2: Guest Additions Version: 5.2.30 r130521
server2: VirtualBox Version: 6.1
==> server2: Configuring and enabling network interfaces...
==> server2: Rsyncing folder: /cygdrive/d/RHCSA/rhcsa8env-master/ => /vagrant
==> server2: - Exclude: [".vagrant/", ".git/"]
==> server2: Running provisioner: shell...
server2: Running: inline script
server2: mke2fs 1.44.3 (10-July-2018)
server2: Creating filesystem with 2097152 4k blocks and 524288 inodes
server2: Filesystem UUID: 033feaba-606e-4e03-a2fc-a9900a1964d7
server2: Superblock backups stored on blocks:
server2: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
server2:
server2: Allocating group tables: 0/64
server2: done
server2: Writing inode tables: done
server2: Creating journal (16384 blocks):
server2: done
server2: Writing superblocks and filesystem accounting information:
server2: 0/64
server2:
server2: done
==> server2: Running provisioner: shell...
server2: Running: inline script
==> server2: Running provisioner: shell...
server2: Running: inline script
server2: mke2fs 1.44.3 (10-July-2018)
server2: Creating filesystem with 2097152 4k blocks and 524288 inodes
server2: Filesystem UUID: 12e4264c-dabd-42ab-a18a-a99990964e60
server2: Superblock backups stored on blocks:
server2: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
server2:
server2: Allocating group tables: 0/64
server2:
server2: done
server2: Writing inode tables: done
server2: Creating journal (16384 blocks):
server2: done
server2: Writing superblocks and filesystem accounting information:
server2: 0/64
server2: 26/64
server2:
server2: done
==> server2: Running provisioner: shell...
server2: Running: inline script
==> server2: Running provisioner: ansible_local...
server2: Running ansible-playbook...

PLAY [Setting Up Server 2] *****************************************************

TASK [Setting Up Python] *******************************************************
changed: [server2.eight.example.com]

TASK [Setting Hostname] ********************************************************
changed: [server2.eight.example.com]

TASK [Configuring network] *****************************************************
changed: [server2.eight.example.com]

TASK [Reloading Network] *******************************************************
changed: [server2.eight.example.com]

TASK [Building Host File] ******************************************************
changed: [server2.eight.example.com]

TASK [Erasing Repos] ***********************************************************
changed: [server2.eight.example.com]

TASK [Creating Temporary Repo File] ********************************************
changed: [server2.eight.example.com]

TASK [Building Repo File] ******************************************************
changed: [server2.eight.example.com]

TASK [Environment Packages Installed.] *****************************************
ok: [server2.eight.example.com]

TASK [Starting services] *******************************************************
ok: [server2.eight.example.com] => (item=firewalld)
changed: [server2.eight.example.com] => (item=httpd)

TASK [Erasing Repos] ***********************************************************
changed: [server2.eight.example.com]

TASK [Changing User Password] **************************************************
changed: [server2.eight.example.com]

TASK [Changing Root Password] **************************************************
changed: [server2.eight.example.com]

TASK [Creating Welcome Message] ************************************************
changed: [server2.eight.example.com]

TASK [Building Welcome Message then rebooting] *********************************
changed: [server2.eight.example.com]

TASK [Adjusting Services and Rebooting] ****************************************
changed: [server2.eight.example.com]

PLAY RECAP *********************************************************************
server2.eight.example.com : ok=16 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

==> server2: Running provisioner: shell...
server2: Running: inline script
==> repo: Box 'rdbreak/rhel8repo' could not be found. Attempting to find and install...
repo: Box Provider: virtualbox
repo: Box Version: >= 0
==> repo: Loading metadata for box 'rdbreak/rhel8repo'
repo: URL: https://vagrantcloud.com/rdbreak/rhel8repo
==> repo: Adding box 'rdbreak/rhel8repo' (v1.1) for provider: virtualbox
repo: Downloading: https://vagrantcloud.com/rdbreak/boxes/rhel8repo/versions/1.1/providers/virtualbox.box
Download redirected to host: vagrantcloud-files-production.s3-accelerate.amazonaws.com
repo:
==> repo: Successfully added box 'rdbreak/rhel8repo' (v1.1) for 'virtualbox'!
==> repo: Importing base box 'rdbreak/rhel8repo'...
==> repo: Matching MAC address for NAT networking...
==> repo: Setting the name of the VM: rhcsa8env-master_repo_1616928884347_82773
==> repo: Fixed port collision for 22 => 2222. Now on port 2200.
==> repo: Clearing any previously set network interfaces...
==> repo: Preparing network interfaces based on configuration...
repo: Adapter 1: nat
repo: Adapter 2: hostonly
==> repo: Forwarding ports...
repo: 22 (guest) => 2200 (host) (adapter 1)
==> repo: Running 'pre-boot' VM customizations...
==> repo: Booting VM...
==> repo: Waiting for machine to boot. This may take a few minutes...
repo: SSH address: 127.0.0.1:2200
repo: SSH username: vagrant
repo: SSH auth method: private key
==> repo: Machine booted and ready!
==> repo: Checking for guest additions in VM...
==> repo: Configuring and enabling network interfaces...
==> repo: Installing rsync to the VM...
==> repo: Rsyncing folder: /cygdrive/d/RHCSA/rhcsa8env-master/ => /vagrant
==> repo: - Exclude: [".vagrant/", ".git/"]
There was an error when attempting to rsync a synced folder.
Please inspect the error message below for more info.

Host path: /cygdrive/d/RHCSA/rhcsa8env-master/
Guest path: /vagrant
Command: "rsync" "--verbose" "--archive" "--delete" "-z" "--copy-links" "--chmod=ugo=rwX" "--no-perms" "--no-owner" "--no-group" "--rsync-path" "sudo rsync" "-e" "ssh -p 2200 -o LogLevel=FATAL -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/Marek/.vagrant.d/boxes/rdbreak-VAGRANTSLASH-rhel8repo/1.1/virtualbox/vagrant_private_key'" "--exclude" ".vagrant/" "--exclude" ".git/" "/cygdrive/d/RHCSA/rhcsa8env-master/" "vagrant@127.0.0.1:/vagrant"
Error: rsync: write failed on "/vagrant/disk-0-1.vdi": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]
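A quick way to confirm the guest really ran out of space before retrying (a sketch; `repo` is the machine name this project uses):

vagrant ssh repo -c 'df -h /'   # if / shows 100% used, the guest disk is full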

Desktop:

  • OS: Windows 10 Pro, Version 20H2, OS build 19042.867, Windows Feature Experience Pack 120.2212.551.0, installed on 23. 7. 2020
  • Hardware: AMD Ryzen 7 3700X 8-Core Processor, 3.60 GHz, 16.0 GB RAM
  • Browser: Firefox

Additional context
The same problem also occurs on another PC, likewise with VirtualBox.

Thanks

Same issue. If at this point you SSH into the repo VM, you'll see the / filesystem is 100% used. I had to resize the VM's disk with some help from internet searches:

  1. Power off the repo VM, either with the VirtualBox GUI or from the command line.
  2. Clone the repo VMDK file to a VDI: vboxmanage clonemedium "/<path to VirtualBox VMs>/rhcsa8env_repo_1616939719413_56415/box-disk001.vmdk" "Newfile.vdi" --format vdi
  3. Resize the VDI: vboxmanage modifyhd "Newfile.vdi" --resize 40960
  4. Use clonemedium again to copy the VDI back to a VMDK: vboxmanage clonemedium "Newfile.vdi" "/<path to VirtualBox VMs>/rhcsa8env_repo_1616939719413_56415/box-disk001-40.vmdk"
  5. Download a GParted ISO and use the VirtualBox GUI to:
    • Attach the newly created, resized VMDK as the primary hard disk of the repo VM
    • Attach the GParted ISO to the optical drive of the repo VM
    • Go into Settings -> System and change the boot order so the VM boots from the optical drive
    • Boot the VM and connect to the console; it should boot from the GParted image
  6. Use GParted to add the unallocated space (in my case 8 GB) to the /dev/sda2 partition. Apply the changes and power down the VM.
  7. Use VirtualBox to detach the GParted ISO from the optical drive and power the VM back on.
  8. SSH to the VM or connect to the console from VirtualBox. Become the root user.
  9. Resize the root filesystem:
  • lvextend -L +8G /dev/rhel_rhel8/root
  • xfs_growfs -d /
  • Verify the new size of / (e.g., with df -h)
  10. Reboot the VM and continue the install with vagrant up.

Hope I didn't miss anything. Basically, the repo VM's disk was too small.
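For convenience, here are the commands from the steps above condensed into one sketch (untested end to end; the VM folder name and sizes are just the examples from this thread, adjust them to your setup):

# On the host, with the repo VM powered off:
VM_DIR="$HOME/VirtualBox VMs/rhcsa8env_repo_1616939719413_56415"            # example path, adjust
vboxmanage clonemedium "$VM_DIR/box-disk001.vmdk" Newfile.vdi --format vdi  # VMDK -> resizable VDI
vboxmanage modifyhd Newfile.vdi --resize 40960                              # grow to 40 GB
vboxmanage clonemedium Newfile.vdi "$VM_DIR/box-disk001-40.vmdk"            # convert back to VMDK

# Inside the repo VM as root, after growing /dev/sda2 with GParted:
lvextend -L +8G /dev/rhel_rhel8/root   # extend the root LV into the new space
xfs_growfs /                           # grow the XFS filesystem to fill the LV
df -h /                                # verify the new size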

This is working!!! Thanks.

@glengib Thank you SO MUCH for this write-up. I had been struggling for far too long trying to provision my environment for the RHCE. Will link to your workaround in the issue I raised.

I faced the same issue and was able to get around the problem by removing the huge EMPTY file within the repo system:

[vagrant@rhel8 ~]$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     389M     0  389M   0% /dev
tmpfs                        406M     0  406M   0% /dev/shm
tmpfs                        406M   11M  395M   3% /run
tmpfs                        406M     0  406M   0% /sys/fs/cgroup
/dev/mapper/rhel_rhel8-root   29G   29G   51M 100% /
/dev/sda1                   1014M  282M  733M  28% /boot
tmpfs                         82M     0   82M   0% /run/user/1000

[root@rhel8 /]# ll
total 21431316
..
-rw-r--r--.   1 root root 21945647104 Mar 20 15:11 EMPTY

[root@rhel8 /]# rm EMPTY
rm: remove regular file 'EMPTY'? yes

[root@rhel8 /]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     389M     0  389M   0% /dev
tmpfs                        406M     0  406M   0% /dev/shm
tmpfs                        406M   11M  395M   3% /run
tmpfs                        406M     0  406M   0% /sys/fs/cgroup
/dev/mapper/rhel_rhel8-root   29G  8.5G   21G  30% /
/dev/sda1                   1014M  282M  733M  28% /boot
tmpfs                         82M     0   82M   0% /run/user/1000

I am not saying this is the correct approach (and maybe it was wrong to remove the file), but at least I was able to get the lab up to a running state. To reach that point, I removed the file as described above (access the system with `ssh repo`) and ran `vagrant up` several times until I got the success message.
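Scripted, that recovery loop looks roughly like this (a sketch; it assumes the file sits at /EMPTY, as in the listing above, and that `vagrant ssh repo` reaches the machine):

# Free the space inside the repo guest, then re-run provisioning until it succeeds.
vagrant ssh repo -c 'sudo rm -f /EMPTY && df -h /'
until vagrant up; do
  echo 'vagrant up failed, retrying...'
done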

TASK [Welcome to the RHCSA 8 Study/Test Environment!] **************************
ok: [server1.eight.example.com] =>
  msg:
  - ' The repo server, Server 1, and Server 2 have been set up successfully!'
  - '------------------------------------------------------------------------'
  - ' Server 1 is rebooting.  If you are unable to access it right away,'
  - ' wait a couple moments, then try again.'
  - '------------------------------------------------------------------------'
  - ' Accessing The Systems:'
  - '- Server 1 - 192.168.55.150'
  - '- Server 2 - 192.168.55.151'
  - '- Username/Password - vagrant/vagrant or root/password'
  - '- Access example - `ssh root@192.168.55.150` or `vagrant ssh system1`'
  - ' -----------------------------------------------------------------------'
  - '- Two additional interfaces and drives are on Server 2.'
  - '- Happy Studying!'

PLAY RECAP *********************************************************************
repo.eight.example.com     : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
server1.eight.example.com  : ok=16   changed=14   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

==> server1: Running provisioner: shell...
    server1: Running: inline script

As said, this might not be a real solution, but perhaps you can use this information as a pointer to the real issue.

A little OT but still related: how can I get access to the "Red Hat Certs Slack workspace practice exam channel"? I couldn't find a working link or any other way to join. I'd like to use the lab now but am a little unsure how to do that.

Well, that seems like a much simpler solution. I never checked; I assumed everything on the images was as intended.
No idea about the Slack channel.

A little OT but still related: How can I get access to the "Red Hat Certs Slack workspace practice exam channel"?
(...)

https://join.slack.com/t/redhat-certs/shared_invite/zt-k6ew2jxh-fnBYw2Pw4~PQQE~Nse~wCQ

# I rewrote the repo server config, adding one extra disk ("/dev/sdb") to increase the size of the volume group "rhel_rhel8" and of the logical volume "/dev/rhel_rhel8/root":
  # Repo Configuration
  file_to_disk3 = './disk-1-3.vdi'

  config.vm.define "repo" do |repo|
    repo.vm.box = "rdbreak/rhel8repo"
    repo.vm.provider "virtualbox" do |vb|
      vb.memory = "1024"
      unless File.exist?(file_to_disk3)
        # Create the extra 2 GB disk and attach it on a new SATA controller
        vb.customize ['createhd', '--filename', file_to_disk3, '--variant', 'Standard', '--size', 2 * 1024]
        vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata', '--portcount', 1]
        vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', file_to_disk3]
      end
    end
    # Grow the root VG/LV onto /dev/sdb, skipping if it was already expanded
    repo.vm.provision :shell, :inline => "pvs | grep '/dev/sdb' && echo 'The disk was already expanded!' || (pvcreate /dev/sdb; vgextend rhel_rhel8 /dev/sdb; lvextend -l +100%FREE /dev/rhel_rhel8/root; xfs_growfs /dev/rhel_rhel8/root)"

    repo.vm.provision :shell, :inline => "sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config; sudo systemctl restart sshd;", run: "always"
    repo.vm.provision :shell, :inline => "yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y; sudo yum install -y sshpass python3-pip python3-devel httpd sshpass vsftpd createrepo", run: "always"
    repo.vm.provision :shell, :inline => "python3 -m pip install -U pip; python3 -m pip install pexpect; python3 -m pip install ansible", run: "always"
    repo.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/"
    repo.vm.network "private_network", ip: "192.168.55.149"
  end
# Server 1 Configuration
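After `vagrant up repo` with this change, the extra disk should have been folded into the root volume; a quick way to check (the device and VG names come from the snippet above):

vagrant ssh repo -c 'lsblk; sudo vgs rhel_rhel8; df -h /'   # expect /dev/sdb in the VG and a larger /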

Same issue. If at this point you SSH into the repo VM, you'll see the / filesystem is 100% used. I had to resize the VM's disk with some help from internet searches:
(...)

Hope I didn't miss anything. Basically, the repo VM's disk was too small.

Thank you! I am using Windows 10 as my host and I had to run this command before following the steps you described. Via PowerShell, I ran:

$env:PATH = $env:PATH + ";C:\Program Files\Oracle\VirtualBox"
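In case it helps, the PATH change can also be made permanent for the current user (a sketch using the standard .NET API; new PowerShell sessions pick it up):

# Append VirtualBox's folder to the user-level PATH permanently
$userPath = [Environment]::GetEnvironmentVariable("PATH", "User")
[Environment]::SetEnvironmentVariable("PATH", "$userPath;C:\Program Files\Oracle\VirtualBox", "User")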

Many thanks indeed. It worked like a charm.

THX

I faced the same issue and was able to get around the problem by removing the huge EMPTY file within the repo system:
(...)

I removed the EMPTY file, but it still fails. Any tips?