ceph can't use all disk space, only uses 100 GB
kiddingl opened this issue
Is this a bug report or feature request?
- Bug Report
What happened:
I have 156 GB of disk space:
What you expected to happen:
I expected Ceph to use all of the available space.
How to reproduce it (minimal and precise):
Environment:
- OS (e.g. from `/etc/os-release`):

```
[root@ansible1 star]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
- Kernel (e.g. `uname -a`):

```
[root@ansible1 star]# uname -a
Linux ansible1 5.4.143-1.el7.elrepo.x86_64 #1 SMP Wed Aug 25 18:15:50 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
```
- Docker version (e.g. `docker version`):

```
[root@ansible1 star]# docker version
Client: Docker Engine - Community
 Version:           20.10.10
 API version:       1.41
 Go version:        go1.16.9
 Git commit:        b485636
 Built:             Mon Oct 25 07:44:50 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.10
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       e2f740d
  Built:            Mon Oct 25 07:43:13 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.11
  GitCommit:        5b46e404f6b9f661a205e28d59c982d3634148f8
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
- Ceph version (e.g. `ceph -v`):

```
[root@ansible1 star]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
```
ceph status:

```
[root@ansible1 star]# ceph -s
  cluster:
    id:     ab1c0469-5885-4465-be40-e814d3876a1a
    health: HEALTH_WARN
            clients are using insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ansible1,ansible2,ansible3 (age 6m)
    mgr: ansible1(active, since 6m)
    mds: cephfs:1 {0=ansible1=up:active}
    osd: 3 osds: 3 up (since 6m), 3 in (since 6m)
    rgw: 1 daemon active (07ac582d3171)

  task status:

  data:
    pools:   7 pools, 208 pgs
    objects: 209 objects, 3.9 KiB
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     208 active+clean
```
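For what it's worth, the numbers in the `ceph -s` output above are internally consistent: a quick sanity check (assuming the three OSDs are equally sized, which `ceph -s` alone does not confirm) implies 100 GB per OSD, matching the complaint in the title:

```python
# Sanity check on the `ceph -s` figures: does the reported cluster
# total match three equally sized OSDs? (Assumption: equal OSD sizes;
# `ceph osd df` would show the real per-OSD breakdown.)
total_gib = 300   # from "297 GiB / 300 GiB avail"
num_osds = 3      # from "osd: 3 osds: 3 up"

per_osd_gib = total_gib / num_osds
print(f"implied size per OSD: {per_osd_gib:.0f} GiB")  # -> implied size per OSD: 100 GiB
```

So each OSD contributes only ~100 GB even though the underlying disk is 156 GB, which suggests the gap lies in how the OSDs were provisioned (e.g. a partition or logical volume smaller than the whole disk), not in Ceph's accounting.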
Hi @kiddingl,

This repository provides a way to run Ceph within containers. If you have general issues or questions about Ceph usage, please look at more usage-oriented Ceph community forums. There are plenty; search and you shall find :-)

I would also suggest providing more details about your specific setup when you ask at your forum of choice, as it's not obvious what you've told Ceph to do, or how that differs from what actually happened.

As I see it, this is not a ceph-container issue. Good luck.
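As a sketch of the details worth gathering before asking elsewhere, the following commands (run on a cluster node; output will vary with the setup) show where the 100 GB limit is coming from:

```shell
# Per-OSD capacity as Ceph sees it -- confirms whether each OSD
# really is 100 GB rather than the full 156 GB disk.
ceph osd df

# Layout of the underlying block devices; if the OSD sits on a
# partition or LVM volume smaller than the disk, it shows up here.
lsblk

# If an OSD's backing device was grown after deployment, BlueStore
# does not pick up the extra space automatically. With that OSD
# stopped, it can be expanded -- the path below is the usual default
# mount point; adjust the OSD id to your setup:
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
```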
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.