{"id":408,"date":"2014-09-02T09:45:26","date_gmt":"2014-09-01T23:45:26","guid":{"rendered":"https:\/\/icicimov.com\/blog\/?p=408"},"modified":"2017-02-23T23:57:06","modified_gmt":"2017-02-23T12:57:06","slug":"ceph-cluster-on-ubuntu-14-04","status":"publish","type":"post","link":"https:\/\/icicimov.com\/blog\/?p=408","title":{"rendered":"Ceph cluster on Ubuntu-14.04"},"content":{"rendered":"<p>As stated on its home page, <a href=\"https:\/\/ceph.com\/\">Ceph<\/a> is a unified, distributed storage system designed for performance, reliability and scalability. It provides seamless access to objects using native language bindings or radosgw (RGW), a REST interface that&#8217;s compatible with applications written for S3 and Swift. Ceph&#8217;s RADOS Block Device (RBD) provides access to block device images that are striped and replicated across the entire storage cluster. It also provides a POSIX-compliant network file system (CephFS) that aims for high performance, large data storage, and maximum compatibility with legacy applications.<\/p>\n<p>I&#8217;m setting up a Ceph cluster on three VMs, ostack-ceph1, ostack-ceph2 and ostack-ceph3, using the first one as the deployment node as well.<\/p>\n<p>First we make sure the nodes can resolve each other&#8217;s names by adding the following to <code>\/etc\/hosts<\/code> on each server:<\/p>\n<pre><code>192.168.122.211 ostack-ceph1.virtual.local  ostack-ceph1\n192.168.122.212 ostack-ceph2.virtual.local  ostack-ceph2\n192.168.122.213 ostack-ceph3.virtual.local  ostack-ceph3\n<\/code><\/pre>\n<p>Then we set up password-less SSH login for my user from ostack-ceph1 to ostack-ceph2 and ostack-ceph3. 
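<\/p>\n<p>Before creating the keys, it is worth confirming that each node resolves its peers; a quick sanity check against the <code>\/etc\/hosts<\/code> entries above (run on each node in turn):<\/p>\n<pre><code>igorc@ostack-ceph1:~$ getent hosts ostack-ceph2 ostack-ceph3\n<\/code><\/pre>\n<p>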
Create an SSH public-private key pair:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ ssh-keygen -t rsa -f \/home\/igorc\/.ssh\/id_rsa -N ''\n<\/code><\/pre>\n<p>and copy the public key over to the other nodes:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ cat \/home\/igorc\/.ssh\/id_rsa.pub | ssh igorc@ostack-ceph2 \"cat &gt;&gt; ~\/.ssh\/authorized_keys\"\nigorc@ostack-ceph1:~$ cat \/home\/igorc\/.ssh\/id_rsa.pub | ssh igorc@ostack-ceph3 \"cat &gt;&gt; ~\/.ssh\/authorized_keys\"\nigorc@ostack-ceph1:~$ ssh igorc@ostack-ceph2 \"chmod 600 ~\/.ssh\/authorized_keys\"\nigorc@ostack-ceph1:~$ ssh igorc@ostack-ceph3 \"chmod 600 ~\/.ssh\/authorized_keys\"\n<\/code><\/pre>\n<p>Next, add:<\/p>\n<pre><code>%sudo   ALL=(ALL:ALL) NOPASSWD:ALL\n<\/code><\/pre>\n<p>to the <code>\/etc\/sudoers<\/code> file on each server. Make sure the user is part of the <code>sudo<\/code> group on each node.<\/p>\n<pre><code>$ sudo usermod -a -G sudo igorc\n<\/code><\/pre>\n<p>Then we can install <code>ceph-deploy<\/code> on ostack-ceph1:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ wget -q -O- 'https:\/\/ceph.com\/git\/?p=ceph.git;a=blob_plain;f=keys\/release.asc' | sudo apt-key add -\nigorc@ostack-ceph1:~$ echo deb http:\/\/ceph.com\/debian-dumpling\/ $(lsb_release -sc) main | sudo tee \/etc\/apt\/sources.list.d\/ceph.list\nigorc@ostack-ceph1:~$ sudo aptitude update &amp;&amp; sudo aptitude install ceph-deploy\n<\/code><\/pre>\n<p>Now we can prepare the deployment directory, install Ceph on all nodes and initialise the cluster:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ mkdir ceph-cluster &amp;&amp; cd ceph-cluster\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy install ostack-ceph1 ostack-ceph2 ostack-ceph3\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --cluster ceph new ostack-ceph{1,2,3}\n<\/code><\/pre>\n<p>Then I modify the <code>ceph.conf<\/code> file as shown below:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ vi ceph.conf \n[global]\nfsid = ed8d8819-e05b-48d4-ba9f-f0bc8493f18f\nmon_initial_members = 
ostack-ceph1, ostack-ceph2, ostack-ceph3\nmon_host = 192.168.122.211, 192.168.122.212, 192.168.122.213\nauth_cluster_required = cephx\nauth_service_required = cephx\nauth_client_required = cephx\nfilestore_xattr_use_omap = true\npublic_network = 192.168.122.0\/24\n\n[mon.ostack-ceph1]\n     host = ostack-ceph1 \n     mon addr = 192.168.122.211:6789\n\n[mon.ostack-ceph2]\n     host = ostack-ceph2 \n     mon addr = 192.168.122.212:6789\n\n[mon.ostack-ceph3]\n     host = ostack-ceph3 \n     mon addr = 192.168.122.213:6789\n\n# added below config\n[osd]\nosd_journal_size = 512 \nosd_pool_default_size = 3\nosd_pool_default_min_size = 1\nosd_pool_default_pg_num = 64 \nosd_pool_default_pgp_num = 64\n<\/code><\/pre>\n<p>and continue with the Monitor installation:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy mon create ostack-ceph1 ostack-ceph2 ostack-ceph3\n<\/code><\/pre>\n<p>Also collect the admin keyring on the local node and set read permissions:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy gatherkeys ostack-ceph1\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo chmod +r \/etc\/ceph\/ceph.client.admin.keyring\n<\/code><\/pre>\n<p>Now we can check the quorum status of the cluster:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph quorum_status --format json-pretty\n\n{ \"election_epoch\": 6,\n  \"quorum\": [\n        0,\n        1,\n        2],\n  \"quorum_names\": [\n        \"ostack-ceph1\",\n        \"ostack-ceph2\",\n        \"ostack-ceph3\"],\n  \"quorum_leader_name\": \"ostack-ceph1\",\n  \"monmap\": { \"epoch\": 1,\n      \"fsid\": \"ed8d8819-e05b-48d4-ba9f-f0bc8493f18f\",\n      \"modified\": \"0.000000\",\n      \"created\": \"0.000000\",\n      \"mons\": [\n            { \"rank\": 0,\n              \"name\": \"ostack-ceph1\",\n              \"addr\": \"192.168.122.211:6789\\\/0\"},\n            { \"rank\": 1,\n              \"name\": \"ostack-ceph2\",\n              \"addr\": \"192.168.122.212:6789\\\/0\"},\n            { 
\"rank\": 2,\n              \"name\": \"ostack-ceph3\",\n              \"addr\": \"192.168.122.213:6789\\\/0\"}]}}\n<\/code><\/pre>\n<p>We also deploy the MDS component on all 3 nodes for redundancy:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf mds create ostack-ceph1 ostack-ceph2 ostack-ceph3\n<\/code><\/pre>\n<p>Next we set up the OSDs:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf osd --zap-disk create ostack-ceph1:\/dev\/sda ostack-ceph2:\/dev\/sda ostack-ceph3:\/dev\/sda\n<\/code><\/pre>\n<p>after which we can create our first pool:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create datastore 100\npool 'datastore' created\n<\/code><\/pre>\n<p>The number of placement groups (PGs) is based on <code>100 x the number of OSDs \/ the number of replicas we want to maintain<\/code>. I want 3 copies of the data (so if a server fails no data is lost), so <code>100 x 3 \/ 3 = 100<\/code>.<\/p>\n<p>Since I want to use this cluster as backend storage for OpenStack Cinder and Glance, I need to create some users with permissions to access specific pools. First is the <code>client.datastore<\/code> user for Cinder with access to the <code>datastore<\/code> pool we just created. 
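<\/p>\n<p>The resulting PG count of a pool can be double-checked at any time; for example, for the <code>datastore<\/code> pool created above:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool get datastore pg_num\npg_num: 100\n<\/code><\/pre>\n<p>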
We need to create a keyring, add it to Ceph and set the appropriate permissions for the user on the pool:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool --create-keyring \/etc\/ceph\/ceph.client.datastore.keyring\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo chmod +r \/etc\/ceph\/ceph.client.datastore.keyring\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool \/etc\/ceph\/ceph.client.datastore.keyring -n client.datastore --gen-key\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool -n client.datastore --cap mon 'allow r' --cap osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore' \/etc\/ceph\/ceph.client.datastore.keyring\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph auth add client.datastore -i \/etc\/ceph\/ceph.client.datastore.keyring\n<\/code><\/pre>\n<p>Now, we add the <code>client.datastore<\/code> user settings to the <code>ceph.conf<\/code> file:<\/p>\n<pre><code>...\n[client.datastore]\n     keyring = \/etc\/ceph\/ceph.client.datastore.keyring\n<\/code><\/pre>\n<p>and push that to all cluster members:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf config push ostack-ceph1 ostack-ceph2 ostack-ceph3\n<\/code><\/pre>\n<p>Since we have a MON service running on each host, we want to be able to mount from each host too, so we need to copy the new key we created:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.datastore.keyring ostack-ceph2:~ &amp;&amp; ssh ostack-ceph2 sudo cp ceph.client.datastore.keyring \/etc\/ceph\/  \nigorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.datastore.keyring ostack-ceph3:~ &amp;&amp; ssh ostack-ceph3 sudo cp ceph.client.datastore.keyring \/etc\/ceph\/\n<\/code><\/pre>\n<p>We repeat the same procedure for the Glance user and pool:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create images 64\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool --create-keyring 
\/etc\/ceph\/ceph.client.images.keyring\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo chmod +r \/etc\/ceph\/ceph.client.images.keyring\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool \/etc\/ceph\/ceph.client.images.keyring -n client.images --gen-key\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool -n client.images --cap mon 'allow r' --cap osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \/etc\/ceph\/ceph.client.images.keyring \nigorc@ostack-ceph1:~\/ceph-cluster$ ceph auth add client.images -i \/etc\/ceph\/ceph.client.images.keyring \n<\/code><\/pre>\n<p>Now, we add the <code>client.images<\/code> user settings to the <code>ceph.conf<\/code> file:<\/p>\n<pre><code>...\n[client.images]\n     keyring = \/etc\/ceph\/ceph.client.images.keyring\n<\/code><\/pre>\n<p>and push that to all cluster members:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf config push ostack-ceph1 ostack-ceph2 ostack-ceph3\n<\/code><\/pre>\n<p>As previously done, we need to copy the new key we created to all nodes:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.images.keyring ostack-ceph2:~ &amp;&amp; ssh ostack-ceph2 sudo cp ceph.client.images.keyring \/etc\/ceph\/\nigorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.images.keyring ostack-ceph3:~ &amp;&amp; ssh ostack-ceph3 sudo cp ceph.client.images.keyring \/etc\/ceph\/\n<\/code><\/pre>\n<p><strong>UPDATE: 25\/08\/2015<\/strong><\/p>\n<p>The <code>ceph fs new<\/code> command was introduced in Ceph 0.84. Prior to this release, no manual steps were required to create a file system, and pools named <code>data<\/code> and <code>metadata<\/code> existed by default. 
The Ceph command line now includes commands for creating and removing file systems, but at present only one file system may exist at a time.<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create cephfs_metadata 64\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create cephfs_data 64\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph fs new cephfs cephfs_metadata cephfs_data\nnew fs with metadata pool 2 and data pool 1\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd lspools\n0 rbd,1 cephfs_data,2 cephfs_metadata,3 datastore,4 images,\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph fs ls\nname: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph mds stat\ne5: 1\/1\/1 up {0=ostack-ceph1=up:active}\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph status\n    cluster 5f1b2264-ab6d-43c3-af6c-3062e707a623\n     health HEALTH_WARN\n            too many PGs per OSD (320 &gt; max 300)\n     monmap e1: 3 mons at {ostack-ceph1=192.168.122.211:6789\/0,ostack-ceph2=192.168.122.212:6789\/0,ostack-ceph3=192.168.122.213:6789\/0}\n            election epoch 4, quorum 0,1,2 ostack-ceph1,ostack-ceph2,ostack-ceph3\n     mdsmap e5: 1\/1\/1 up {0=ostack-ceph1=up:active}\n     osdmap e25: 3 osds: 3 up, 3 in\n      pgmap v114: 320 pgs, 5 pools, 1962 bytes data, 20 objects\n            107 MB used, 22899 MB \/ 23006 MB avail\n                 320 active+clean\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd tree\nID WEIGHT  TYPE NAME             UP\/DOWN REWEIGHT PRIMARY-AFFINITY \n-1 0.02998 root default                                            \n-2 0.00999     host ostack-ceph1                                   \n 0 0.00999         osd.0              up  1.00000          1.00000 \n-3 0.00999     host ostack-ceph2                                   \n 1 0.00999         osd.1              up  1.00000          1.00000 \n-4 0.00999     host ostack-ceph3                                   \n 2 0.00999         osd.2              up  
1.00000          1.00000\n<\/code><\/pre>\n<h2>Remove Ceph<\/h2>\n<p>To completely remove Ceph follow the procedure below:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ sudo service ceph-all stop\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy purge ostack-ceph1 ostack-ceph2 ostack-ceph3\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy purgedata ostack-ceph1 ostack-ceph2 ostack-ceph3\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy forgetkeys\n<\/code><\/pre>\n<h2>Calamari Ceph GUI<\/h2>\n<p>There is a great guide at <a href=\"http:\/\/www.cirgan.net\/how-to-install-calamari-for-ceph-cluster-on-ubuntu-14-04\/\">How to install Calamari for Ceph Cluster on Ubuntu 14.04<\/a>. With some tweaks I was able to setup Calamari on one of my VM&#8217;s and integrate with the Ceph cluster created above.<\/p>\n\n\t\t\t<div id='gallery-408-1' class='gallery gallery-408'>\n\t\t\t\t<div class='gallery-row gallery-col-3 gallery-clear'>\n\t\t\t\t\t<figure class='gallery-item col-3'>\n\t\t\t\t\t\t<div class='gallery-icon '><a href='https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_node_registration.png'><img loading=\"lazy\" decoding=\"async\" width=\"420\" height=\"161\" src=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_node_registration-420x161.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" srcset=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_node_registration-420x161.png 420w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_node_registration-744x285.png 744w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_node_registration-768x294.png 768w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_node_registration-1200x460.png 1200w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_node_registration.png 1719w\" sizes=\"auto, (max-width: 420px) 100vw, 420px\" 
\/><\/a><\/div>\n\t\t\t\t\t<\/figure>\n\t\t\t\t\t<figure class='gallery-item col-3'>\n\t\t\t\t\t\t<div class='gallery-icon '><a href='https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_dashboard.png'><img loading=\"lazy\" decoding=\"async\" width=\"420\" height=\"187\" src=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_dashboard-420x187.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" srcset=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_dashboard-420x187.png 420w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_dashboard-744x332.png 744w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_dashboard-768x343.png 768w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_dashboard-1200x535.png 1200w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_dashboard.png 1710w\" sizes=\"auto, (max-width: 420px) 100vw, 420px\" \/><\/a><\/div>\n\t\t\t\t\t<\/figure>\n\t\t\t\t\t<figure class='gallery-item col-3'>\n\t\t\t\t\t\t<div class='gallery-icon '><a href='https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_workbench.png'><img loading=\"lazy\" decoding=\"async\" width=\"420\" height=\"211\" src=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_workbench-420x211.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" srcset=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_workbench-420x211.png 420w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_workbench-744x373.png 744w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_workbench-768x385.png 768w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_workbench-1200x602.png 1200w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_workbench.png 1718w\" sizes=\"auto, (max-width: 420px) 100vw, 420px\" 
\/><\/a><\/div>\n\t\t\t\t\t<\/figure>\n\t\t\t\t<\/div>\n\t\t\t\t<div class='gallery-row gallery-col-3 gallery-clear'>\n\t\t\t\t\t<figure class='gallery-item col-3'>\n\t\t\t\t\t\t<div class='gallery-icon '><a href='https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_cluster.png'><img loading=\"lazy\" decoding=\"async\" width=\"420\" height=\"115\" src=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_cluster-420x115.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" srcset=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_cluster-420x115.png 420w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_cluster-744x203.png 744w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_cluster-768x209.png 768w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_cluster-1200x327.png 1200w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_cluster.png 1705w\" sizes=\"auto, (max-width: 420px) 100vw, 420px\" \/><\/a><\/div>\n\t\t\t\t\t<\/figure>\n\t\t\t\t\t<figure class='gallery-item col-3'>\n\t\t\t\t\t\t<div class='gallery-icon '><a href='https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_osd.png'><img loading=\"lazy\" decoding=\"async\" width=\"420\" height=\"110\" src=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_osd-420x110.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" srcset=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_osd-420x110.png 420w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_osd-744x194.png 744w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_osd-768x201.png 768w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_osd-1200x314.png 1200w, 
https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_osd.png 1710w\" sizes=\"auto, (max-width: 420px) 100vw, 420px\" \/><\/a><\/div>\n\t\t\t\t\t<\/figure>\n\t\t\t\t\t<figure class='gallery-item col-3'>\n\t\t\t\t\t\t<div class='gallery-icon '><a href='https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_pools.png'><img loading=\"lazy\" decoding=\"async\" width=\"420\" height=\"139\" src=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_pools-420x139.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" srcset=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_pools-420x139.png 420w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_pools-744x246.png 744w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_pools-768x254.png 768w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_pools-1200x396.png 1200w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/02\/Calamari_manage_pools.png 1707w\" sizes=\"auto, (max-width: 420px) 100vw, 420px\" \/><\/a><\/div>\n\t\t\t\t\t<\/figure>\n\t\t\t\t<\/div>\n\t\t\t<\/div><!-- .gallery -->\n\n","protected":false},"excerpt":{"rendered":"<p>As pointed on its home page, Ceph is a unified, distributed storage system designed for performance, reliability and scalability. 
It provides seamless access to objects using native language bindings or radosgw (RGW), a REST interface that&#8217;s compatible with applications written&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[29,26,25],"class_list":["post-408","post","type-post","status-publish","format-standard","hentry","category-storage","tag-ceph","tag-cluster","tag-high-availability"],"_links":{"self":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/408","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=408"}],"version-history":[{"count":7,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/408\/revisions"}],"predecessor-version":[{"id":424,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/408\/revisions\/424"}],"wp:attachment":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=408"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=408"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=408"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}