
MicroCeph


MicroCeph is snap-deployed Ceph with built-in clustering.

Get it from the Snap Store: https://snapcraft.io/microceph

Table of Contents

  1. 💡 Philosophy
  2. 🎯 Features
  3. 📖 Documentation
  4. ⚡️ Quickstart
  5. 👍 How Can I Contribute?

💡 Philosophy

Deploying and operating a Ceph cluster is complex because Ceph is designed to be a general-purpose storage solution. This is a significant overhead for small Ceph clusters. MicroCeph solves this by being opinionated and focused on the small scale. With MicroCeph, deploying and operating a Ceph cluster is as easy as a Snap!

🎯 Features

  1. Quick and consistent deployment with minimal overhead.
  2. Single-command operations (for bootstrapping, adding OSDs, service enablement, etc.).
  3. Isolated from the host and upgrade-friendly.
  4. Built-in clustering so you don't have to worry about it (see the sketch after this list)!
  5. Tailored for small scale (or just your laptop).
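
The built-in clustering mentioned in item 4 boils down to a token exchange between nodes. A minimal sketch, assuming two machines named node01 and node02 that can reach each other, both with the snap installed, and with node01 already bootstrapped:

# On node01: register node02 and print a one-time join token
$ sudo microceph cluster add node02

# On node02: join the existing cluster using the token printed above
$ sudo microceph cluster join <token printed on node01>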

📖 Documentation

Refer to the Quickstart section below for your first setup. If you want to read the official documentation, please visit our hosted Docs.

⚑️ Quickstart

⚙️ Installation and Bootstrapping the Ceph Cluster

# Install MicroCeph
$ sudo snap install microceph

# Bootstrapping the Ceph Cluster
$ sudo microceph cluster bootstrap
$ sudo microceph.ceph status
    cluster:
        id:     c8d120af-d7dc-45db-a216-4340e88e5a0e
        health: HEALTH_WARN
                OSD count 0 < osd_pool_default_size 3
    
    services:
        mon: 1 daemons, quorum host (age 1m)
        mgr: host(active, since 1m)
        osd: 0 osds: 0 up, 0 in
    
    data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs: 

(Image: Dashboard)

NOTE: You might have noticed that the Ceph cluster is not functional yet; we need OSDs!
But before that, if you are only interested in deploying on a single node, it is worthwhile to change the CRUSH rules. With the commands below, we re-create the default rule to use a failure domain of osd (instead of the default host failure domain).

# Change Ceph failure domain to OSD
$ sudo microceph.ceph osd crush rule rm replicated_rule
$ sudo microceph.ceph osd crush rule create-replicated single default osd
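
As an optional check (an addition to the upstream steps), listing the CRUSH rules should now show only the newly created rule:

# List CRUSH rules; only the new "single" rule should remain
$ sudo microceph.ceph osd crush rule ls
    single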

⚙️ Adding OSDs and RGW

# Adding OSD Disks
$ sudo microceph disk list
    Disks configured in MicroCeph:
    +-----+----------+------+
    | OSD | LOCATION | PATH |
    +-----+----------+------+

    Available unpartitioned disks on this system:
    +-------+----------+--------+---------------------------------------------+
    | MODEL | CAPACITY |  TYPE  |                    PATH                     |
    +-------+----------+--------+---------------------------------------------+
    |       | 10.00GiB | virtio | /dev/disk/by-id/virtio-46c76c00-48fd-4f8d-9 |
    +-------+----------+--------+---------------------------------------------+
    |       | 10.00GiB | virtio | /dev/disk/by-id/virtio-2171ea8f-e8a9-44c7-8 |
    +-------+----------+--------+---------------------------------------------+
    |       | 10.00GiB | virtio | /dev/disk/by-id/virtio-cf9c6e20-306f-4296-b |
    +-------+----------+--------+---------------------------------------------+

$ sudo microceph disk add --wipe /dev/disk/by-id/virtio-46c76c00-48fd-4f8d-9
$ sudo microceph disk add --wipe /dev/disk/by-id/virtio-2171ea8f-e8a9-44c7-8
$ sudo microceph disk add --wipe /dev/disk/by-id/virtio-cf9c6e20-306f-4296-b
$ sudo microceph disk list
    Disks configured in MicroCeph:
    +-----+---------------+---------------------------------------------+
    | OSD |   LOCATION    |                    PATH                     |
    +-----+---------------+---------------------------------------------+
    | 0   | host          | /dev/disk/by-id/virtio-46c76c00-48fd-4f8d-9 |
    +-----+---------------+---------------------------------------------+
    | 1   | host          | /dev/disk/by-id/virtio-2171ea8f-e8a9-44c7-8 |
    +-----+---------------+---------------------------------------------+
    | 2   | host          | /dev/disk/by-id/virtio-cf9c6e20-306f-4296-b |
    +-----+---------------+---------------------------------------------+

    Available unpartitioned disks on this system:
    +-------+----------+--------+------------------+
    | MODEL | CAPACITY |  TYPE  |       PATH       |
    +-------+----------+--------+------------------+
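
As another optional sanity check (not part of the upstream quickstart), the standard Ceph CLI can confirm that the three OSDs are up and placed in the CRUSH tree:

# Optional: confirm the OSDs appear under the host in the CRUSH tree
$ sudo microceph.ceph osd tree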

(Image: Dashboard)

# Adding RGW Service
$ sudo microceph enable rgw
# Perform IO and Check cluster status
$ sudo microceph.ceph status
    cluster:
        id:     a8f9b673-f3f3-4e3f-b427-a9cf0d2f2323
        health: HEALTH_OK
    
    services:
        mon: 1 daemons, quorum host (age 12m)
        mgr: host(active, since 12m)
        osd: 3 osds: 3 up (since 5m), 3 in (since 5m)
        rgw: 1 daemon active (1 hosts, 1 zones)
    
    data:
        pools:   7 pools, 193 pgs
        objects: 341 objects, 504 MiB
        usage:   1.6 GiB used, 28 GiB / 30 GiB avail
        pgs:     193 active+clean
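
The "Perform IO" step above is left open; one possible way to exercise the RGW service is through its S3 endpoint. A minimal sketch, assuming the snap exposes radosgw-admin as microceph.radosgw-admin (mirroring the microceph.ceph alias used above), that RGW listens on localhost port 80, and that s3cmd is installed; the user name, keys, bucket, and file are placeholders:

# Create an S3 user with a known set of keys (example values only)
$ sudo microceph.radosgw-admin user create --uid=demo --display-name=demo
$ sudo microceph.radosgw-admin key create --uid=demo --key-type=s3 \
      --access-key=demoaccess --secret-key=demosecret

# Create a bucket and upload an object through the S3 endpoint
$ s3cmd --host=localhost --host-bucket="localhost/%(bucket)" --no-ssl \
      --access_key=demoaccess --secret_key=demosecret mb s3://demo-bucket
$ s3cmd --host=localhost --host-bucket="localhost/%(bucket)" --no-ssl \
      --access_key=demoaccess --secret_key=demosecret put /etc/hostname s3://demo-bucket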

(Image: Dashboard)

👍 How Can I Contribute?

  1. Check out the MicroCeph Hacking Guide to start building and contributing to the codebase.
  2. Excited about MicroCeph? Join our Stargazers!
  3. Write reviews or tutorials to help spread the knowledge 📖
  4. Participate in Pull Requests and help fix Issues

You can also find us on Matrix @Ubuntu Ceph

About

Ceph for a one-rack cluster and appliances

https://snapcraft.io/microceph

License: GNU Affero General Public License v3.0


Languages

Go 98.2%, Shell 1.2%, Makefile 0.5%