{"id":271,"date":"2016-09-17T22:14:21","date_gmt":"2016-09-17T12:14:21","guid":{"rendered":"https:\/\/icicimov.com\/blog\/?p=271"},"modified":"2017-01-02T22:17:50","modified_gmt":"2017-01-02T11:17:50","slug":"adding-glusterfs-shared-storage-to-proxmox-to-support-live-migration","status":"publish","type":"post","link":"https:\/\/icicimov.com\/blog\/?p=271","title":{"rendered":"Adding GlusterFS shared storage to Proxmox to support Live Migration"},"content":{"rendered":"<p>To be able to move VMs from one cluster member to another, their root disk, and in fact any other attached disk, needs to be created on shared storage. PVE has built-in support for the native GlusterFS client, among other storage types including LVM, NFS, iSCSI, RBD, ZFS and ZFS over iSCSI.<\/p>\n<h2>Prepare the volumes<\/h2>\n<p>The whole procedure, shown here for proxmox01, needs to be executed on both nodes:<\/p>\n<pre><code>root@proxmox01:~# fdisk -l \/dev\/vdb\nDisk \/dev\/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nSector size (logical\/physical): 512 bytes \/ 512 bytes\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes\n\nroot@proxmox01:~# pvcreate \/dev\/vdb\n  Physical volume \"\/dev\/vdb\" successfully created\n\nroot@proxmox01:~# vgcreate vg_proxmox \/dev\/vdb\n  Volume group \"vg_proxmox\" successfully created\n\nroot@proxmox01:~# lvcreate --name lv_proxmox -l 100%vg vg_proxmox\n  Logical volume \"lv_proxmox\" created.\n\nroot@proxmox01:~# mkfs -t xfs -f -i size=512 -n size=8192 -L PROXMOX \/dev\/vg_proxmox\/lv_proxmox\nmeta-data=\/dev\/vg_proxmox\/lv_proxmox isize=512    agcount=4, agsize=1310464 blks\n         =                       sectsz=512   attr=2, projid32bit=1\n         =                       crc=0        finobt=0\ndata     =                       bsize=4096   blocks=5241856, imaxpct=25\n         =                       sunit=0      swidth=0 blks\nnaming   =version 2              bsize=8192   ascii-ci=0 ftype=0\nlog      =internal log         
  bsize=4096   blocks=2560, version=2\n         =                       sectsz=512   sunit=0 blks, lazy-count=1\nrealtime =none                   extsz=4096   blocks=0, rtextents=0\n\nroot@proxmox01:~# mkdir -p \/data\/proxmox\nroot@proxmox01:~# vi \/etc\/fstab\n[...]\n\/dev\/mapper\/vg_proxmox-lv_proxmox       \/data\/proxmox xfs       defaults        0 0\n\nroot@proxmox01:~# mount -a\nroot@proxmox01:~# mount | grep proxmox\n\/dev\/mapper\/vg_proxmox-lv_proxmox on \/data\/proxmox type xfs (rw,relatime,attr2,inode64,noquota)\n<\/code><\/pre>\n<p>This created an LVM volume out of the <code>\/dev\/vdb<\/code> disk and formatted it with XFS.<\/p>\n<h3>Install, set up and configure the GlusterFS volume<\/h3>\n<p>Both nodes (proxmox01 and proxmox02) will run the GlusterFS server and client. The step-by-step procedure is given below; the <code>10.10.1.0\/24<\/code> network is used for the cluster communication:<\/p>\n<pre><code>root@proxmox01:~# apt-get install glusterfs-server glusterfs-client\n\nroot@proxmox01:~# gluster peer probe 10.10.1.186\npeer probe: success.\nroot@proxmox01:~# gluster peer status\nNumber of Peers: 1\nHostname: 10.10.1.186\nUuid: 516154fa-84c4-437e-b745-97ed7505700e\nState: Peer in Cluster (Connected)\n\nroot@proxmox01:~# gluster volume create gfs-volume-proxmox transport tcp replica 2 10.10.1.185:\/data\/proxmox 10.10.1.186:\/data\/proxmox force\nvolume create: gfs-volume-proxmox: success: please start the volume to access data\n\nroot@proxmox01:~# gluster volume start gfs-volume-proxmox\nvolume start: gfs-volume-proxmox: success\n\nroot@proxmox01:~# gluster volume info\n\nVolume Name: gfs-volume-proxmox\nType: Replicate\nVolume ID: a8350bda-6e9a-4ccf-ade7-34c98c2197c3\nStatus: Started\nNumber of Bricks: 1 x 2 = 2\nTransport-type: tcp\nBricks:\nBrick1: 10.10.1.185:\/data\/proxmox\nBrick2: 10.10.1.186:\/data\/proxmox\nroot@proxmox01:~# gluster volume status\nStatus of volume: gfs-volume-proxmox\nGluster process                        Port    Online   
 Pid\n------------------------------------------------------------------------------\nBrick 10.10.1.185:\/data\/proxmox                49152    Y    18029\nBrick 10.10.1.186:\/data\/proxmox                49152    Y    6669\nNFS Server on localhost                         2049    Y    18043\nSelf-heal Daemon on localhost                    N\/A    Y    18048\nNFS Server on 10.10.1.186                       2049    Y    6683\nSelf-heal Daemon on 10.10.1.186                  N\/A    Y    6688\n\nTask Status of Volume gfs-volume-proxmox\n------------------------------------------------------------------------------\nThere are no active volume tasks\n\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox performance.cache-size 256MB\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox network.ping-timeout 5\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox cluster.server-quorum-type server\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox cluster.quorum-type fixed\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox cluster.quorum-count 1\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox cluster.eager-lock on\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox network.remote-dio enable\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox cluster.eager-lock enable\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox performance.stat-prefetch off\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox performance.io-cache off\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox performance.read-ahead off\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox performance.quick-read off\nvolume set: success\nroot@proxmox01:~# gluster volume set gfs-volume-proxmox performance.readdir-ahead 
on\nvolume set: success\n\nroot@proxmox01:~# gluster volume info\n\nVolume Name: gfs-volume-proxmox\nType: Replicate\nVolume ID: a8350bda-6e9a-4ccf-ade7-34c98c2197c3\nStatus: Started\nNumber of Bricks: 1 x 2 = 2\nTransport-type: tcp\nBricks:\nBrick1: 10.10.1.185:\/data\/proxmox\nBrick2: 10.10.1.186:\/data\/proxmox\nOptions Reconfigured:\nperformance.readdir-ahead: on\nperformance.quick-read: off\nperformance.read-ahead: off\nperformance.io-cache: off\nperformance.stat-prefetch: off\nnetwork.remote-dio: enable\ncluster.eager-lock: enable\ncluster.quorum-count: 1\ncluster.quorum-type: fixed\ncluster.server-quorum-type: server\nnetwork.ping-timeout: 5\nperformance.cache-size: 256MB\n\nroot@proxmox01:~# gluster volume status\nStatus of volume: gfs-volume-proxmox\nGluster process                             TCP Port  RDMA Port  Online  Pid\n------------------------------------------------------------------------------\nBrick proxmox01:\/data\/proxmox               49152     0          Y       4155 \nBrick proxmox02:\/data\/proxmox               49152     0          Y       3762 \nNFS Server on localhost                     2049      0          Y       4140 \nSelf-heal Daemon on localhost               N\/A       N\/A        Y       4146 \nNFS Server on proxmox02                     2049      0          Y       3746 \nSelf-heal Daemon on proxmox02               N\/A       N\/A        Y       3756 \n\nTask Status of Volume gfs-volume-proxmox\n------------------------------------------------------------------------------\nThere are no active volume tasks\n<\/code><\/pre>\n<h2>Configure the client<\/h2>\n<p>Now we go to the Proxmox GUI and add a GlusterFS type of storage to the Datacenter. 
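<\/p>\n<p>As an alternative to the GUI, the same storage definition can be made by editing <code>\/etc\/pve\/storage.cfg<\/code> directly. The sketch below is an assumption to be checked against your PVE version; the storage ID <code>proxmox<\/code> is chosen to match the <code>\/mnt\/pve\/proxmox<\/code> mount point shown next:<\/p>\n<pre><code>root@proxmox01:~# cat \/etc\/pve\/storage.cfg\n[...]\nglusterfs: proxmox\n        server 10.10.1.185\n        server2 10.10.1.186\n        volume gfs-volume-proxmox\n        content images\n<\/code><\/pre>\n<p>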
Proxmox has built-in support for the GlusterFS native client, and this action results in the following mount point being created by PVE on both servers:<\/p>\n<pre><code># mount | grep proxmox\n10.10.1.185:gfs-volume-proxmox on \/mnt\/pve\/proxmox type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)\n<\/code><\/pre>\n<blockquote><p>\n  <strong>NOTE:<\/strong> Launching LXC containers on shared storage is not supported\n<\/p><\/blockquote>\n<p>[serialposts]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>To be able to move VMs from one cluster member to another, their root disk, and in fact any other attached disk, needs to be created on shared storage. PVE has built-in support for the native GlusterFS client among&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[17,9,22,13],"tags":[26,27,25,24,23],"class_list":["post-271","post","type-post","status-publish","format-standard","hentry","category-cluster","category-high-availability","category-kvm","category-virtualization","tag-cluster","tag-glusterfs","tag-high-availability","tag-kvm","tag-proxmox"],"_links":{"self":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/271","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=271"}],"version-history":[{"count":1,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/271\/revisions"}],"predecessor-version":[{"id":272,"href":"https:\/\/ic
icimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/271\/revisions\/272"}],"wp:attachment":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=271"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=271"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=271"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}