Ceph is distributed object storage, not a raw block device. RBD (Ceph's RADOS Block Device) and CephFS are the two native ways Ceph serves data; through gateways it can also provide S3, NFS, SMB and iSCSI. Data is spread across the cluster in placement groups, and the CRUSH algorithm deterministically computes where each chunk lives, so a client that has the cluster map does not need to ask a server where a block or object is: it goes directly to the node that holds it.
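To see this placement calculation in action, ceph osd map reports which placement group and OSDs an object maps to; the client works this out from the cluster map instead of asking a data node. A minimal sketch, assuming client access is configured as in the section below and using a made-up object name:

<source lang=bash>
# Show where an object in pool DEV_block would be placed.
# 'some-object' is a placeholder name; the mapping is computed locally via CRUSH,
# no OSD has to be asked where the data lives.
ceph -c ~/.ceph/ceph.conf osd map DEV_block some-object
</source>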
= Ceph =

== Configure access to ceph cluster ==
<source lang=bash>
# Ubuntu 20.04
sudo apt install ceph-common
# Config file (the ~/.ceph directory has to exist first)
mkdir -p ~/.ceph
cat > ~/.ceph/ceph.conf <<EOF
[global]
mon_host = XXXXXXX
keyring = /home/myuser/.ceph/ceph.client.admin.keyring # requires full absolute path
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
EOF
cat > ~/.ceph/ceph.client.admin.keyring <<EOF
[client.admin]
key = XXXXXXXXXXXX==
EOF
# Test
ceph -c ~/.ceph/ceph.conf status
  cluster:
    id:     aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeeee
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum dc1ceph11-2222,dc2ceph21-3333,dc3ceph31-4444 (age 4h)
    mgr: dc1ceph11-2222.dddddd(active, since 51m), standbys: dc2ceph21-3333.eeeeee
    mds: devcephfs:1 {0=devcephfs.dc3ceph31-4444.nmngty=up:active} 2 up:standby
    osd: 20 osds: 19 up (since 4d), 19 in (since 4d)
    rgw: 1 daemon active (admin)

  task status:
    scrub status:
        mds.devcephfs.dc3ceph31-4444.nmngty: idle

  data:
    pools:   22 pools, 449 pgs
    objects: 10.77M objects, 25 TiB
    usage:   54 TiB used, 85 TiB / 139 TiB avail
    pgs:     447 active+clean
             2   active+clean+scrubbing+deep

  io:
    client: 27 MiB/s rd, 5.6 MiB/s wr, 3.88k op/s rd, 191 op/s wr
# Aliases
alias ceph="ceph -c ~/.ceph/ceph.conf"
alias rbd="rbd -c ~/.ceph/ceph.conf"
</source>
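Instead of (or besides) the aliases, the Ceph CLI tools also read the CEPH_CONF environment variable to locate the config file. A minimal sketch of that approach (the keyring path is already absolute inside ceph.conf above):

<source lang=bash>
# Point every ceph/rbd invocation at the custom config via the environment,
# e.g. from ~/.bashrc, so -c does not have to be passed each time
export CEPH_CONF=~/.ceph/ceph.conf

ceph status
</source>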

== Raw Block Device [https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/ceph_block_device/get-a-list-of-images Operations] ==
<source lang=bash>
# List block device images
rbd ls {poolname} -c ~/.ceph/ceph.conf
kubernetes-dynamic-pvc-aaa9e0ff-14d9-479e-a425-aaaaaaaaaaaa
kubernetes-dynamic-pvc-aaa194fb-cdc3-4cb4-85e9-aaaaaaaaaaaa
myapp-postgresql-pv-0
myapp-postgresql-pv-1
otheapp-pg-data-0-volume
# Create a block device image
rbd create {image-name} --size {megabytes} --pool {pool-name}
rbd create dev-prometheus-data-0-vol --size 3072 --pool DEV_block # --size is in MiB, 3072 = 3 GiB
rbd info --image dev-prometheus-data-0-vol --pool DEV_block -c ~/.ceph/ceph.conf
rbd image 'dev-prometheus-data-0-vol':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 0f582d6be65aaa
        block_name_prefix: rbd_data.0f582d6be65aaa
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Tue Nov 11 13:30:53 2021
        access_timestamp: Tue Nov 11 13:30:53 2021
        modify_timestamp: Tue Nov 11 13:30:53 2021
# Resize the image
rbd resize --image dev-prometheus-data-0-vol --size 4096 --pool DEV_block -c ~/.ceph/ceph.conf
Resizing image: 100% complete...done.
</source>
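rbd resize only grows the image; the filesystem inside it still has to be expanded. For volumes provisioned by Kubernetes this is normally done by the CSI driver when the PersistentVolumeClaim is resized. The sketch below is for an image used directly on a host; the device name and the ext4 filesystem are assumptions:

<source lang=bash>
# Map the image on the client host (kernel RBD; older kernels may need
# features such as object-map/fast-diff/deep-flatten disabled first)
sudo rbd map DEV_block/dev-prometheus-data-0-vol -c ~/.ceph/ceph.conf
# rbd map prints the device it created, e.g. /dev/rbd0 (example name)

# Grow the ext4 filesystem to fill the resized image
sudo resize2fs /dev/rbd0

# Unmap when finished
sudo rbd unmap /dev/rbd0
</source>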