{"id":254,"date":"2016-03-12T12:28:37","date_gmt":"2016-03-12T01:28:37","guid":{"rendered":"https:\/\/icicimov.com\/blog\/?p=254"},"modified":"2017-01-02T19:07:48","modified_gmt":"2017-01-02T08:07:48","slug":"highly-available-iscsi-alua-asymetric-logical-unit-access-storage-with-pacemaker-and-drbd-in-dual-primary-mode-part1","status":"publish","type":"post","link":"https:\/\/icicimov.com\/blog\/?p=254","title":{"rendered":"Highly Available iSCSI ALUA (Asymmetric Logical Unit Access) Storage with Pacemaker and DRBD in Dual-Primary mode &#8211; Part1"},"content":{"rendered":"<p><div class=\"fx-toc fx-toc-id-254\"><h2 class=\"fx-toc-title\">Table of contents<\/h2><ul class='fx-toc-list level-1'>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#iscsi-target-servers-setup\">iSCSI Target Servers Setup<\/a>\n\t\t<ul class='toc-even level-2'>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#scst\">SCST<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#drbd\">DRBD<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#corosync-and-pacemaker\">Corosync and Pacemaker<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#fencing\">Fencing<\/a>\n\t\t\t\t<ul class='toc-odd level-3'>\n\t\t\t\t\t<li>\n\t\t\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#amazon-ec2-fencing\">Amazon EC2 fencing<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t<\/ul>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#dlm-clvm-lvm\">DLM, CLVM, LVM<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=254#alua\">ALUA<\/a>\n\t\t\t<\/li>\n<\/ul>\n<\/ul>\n<\/div>\n<br \/>\nI already wrote a post on this topic, so this is a kind of extension of, or variation on, the setup described in <a href=\"https:\/\/icicimov.com\/blog\/?p=242\">Highly Available iSCSI Storage with SCST, Pacemaker, DRBD and OCFS2<\/a>.<\/p>\n<p>The main and most important 
difference is that, thanks to <code>ALUA<\/code> (Asymmetric Logical Unit Access), the back-end iSCSI storage can work in an <code>Active\/Active<\/code> setup, providing faster fail-over since the resources do not need to be moved around. Instead, the initiator, which now has paths to the same target available on both back-end servers, can detect when the current active path has failed and quickly switch to the spare one.<\/p>\n<p>The layout described in the above-mentioned post is still valid. The only differences are that this time the back-end iSCSI servers are running Ubuntu-14.04.4 LTS, the front-end initiator servers Debian-8.3 Jessie, and different subnets are being used.<\/p>\n<h1><span id=\"iscsi-target-servers-setup\">iSCSI Target Servers Setup<\/span><\/h1>\n<p>What was said in the previous post about iSCSI and the choice of SCST for the task over other solutions applies here as well, especially since we want to use ALUA, which is mature and well documented in SCST. The only difference is that I&#8217;ll be using the <code>vdisk_blockio<\/code> handler for the LUNs instead of <code>vdisk_fileio<\/code> this time, since I want to test its performance too. 
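<\/p>\n<p>For orientation before we start: on the initiator side it is <code>dm-multipath<\/code> that will merge the two paths to the LUN and use their ALUA states to pick the active one. Purely as a hedged sketch (the vendor and product strings below are the SCST defaults that show up later in the generated <code>scst.conf<\/code>; the remaining settings are a plausible starting point, not the final initiator config), a matching <code>multipath.conf<\/code> device section could look like this:<\/p>\n<pre><code>devices {\n    device {\n        vendor                \"SCST_BIO\"\n        product               \"vg1\"\n        path_grouping_policy  group_by_prio\n        prio                  alua\n        hardware_handler      \"1 alua\"\n        failback              immediate\n    }\n}\n<\/code><\/pre>\n<p>With <code>group_by_prio<\/code> and the <code>alua<\/code> prioritizer, the path through the active target group gets its own higher-priority group and the nonoptimized one becomes the spare.<\/p>\n<p>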
This is the network configuration on the iSCSI hosts, hpms01:<\/p>\n<pre><code>root@hpms01:~# ifconfig\neth0      Link encap:Ethernet  HWaddr 52:54:00:c5:a7:94 \n          inet addr:192.168.122.99  Bcast:192.168.122.255  Mask:255.255.255.0\n          inet6 addr: fe80::5054:ff:fec5:a794\/64 Scope:Link\n          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1\n          RX packets:889537 errors:0 dropped:10 overruns:0 frame:0\n          TX packets:271329 errors:0 dropped:0 overruns:0 carrier:0\n          collisions:0 txqueuelen:1000\n          RX bytes:117094884 (117.0 MB)  TX bytes:43494633 (43.4 MB)\n\neth1      Link encap:Ethernet  HWaddr 52:54:00:da:f7:ae \n          inet addr:192.168.152.99  Bcast:192.168.152.255  Mask:255.255.255.0\n          inet6 addr: fe80::5054:ff:feda:f7ae\/64 Scope:Link\n          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1\n          RX packets:512956 errors:0 dropped:10 overruns:0 frame:0\n          TX packets:270358 errors:0 dropped:0 overruns:0 carrier:0\n          collisions:0 txqueuelen:1000\n          RX bytes:60843899 (60.8 MB)  TX bytes:38435981 (38.4 MB)\n<\/code><\/pre>\n<p>and on hpms02:<\/p>\n<pre><code>eth0      Link encap:Ethernet  HWaddr 52:54:00:da:95:17 \n          inet addr:192.168.122.98  Bcast:192.168.122.255  Mask:255.255.255.0\n          inet6 addr: fe80::5054:ff:feda:9517\/64 Scope:Link\n          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1\n          RX packets:455133 errors:0 dropped:12 overruns:0 frame:0\n          TX packets:697089 errors:0 dropped:0 overruns:0 carrier:0\n          collisions:0 txqueuelen:1000\n          RX bytes:59913508 (59.9 MB)  TX bytes:93174506 (93.1 MB)\n\neth1      Link encap:Ethernet  HWaddr 52:54:00:6b:56:12 \n          inet addr:192.168.152.98  Bcast:192.168.152.255  Mask:255.255.255.0\n          inet6 addr: fe80::5054:ff:fe6b:5612\/64 Scope:Link\n          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1\n          RX packets:296660 errors:0 dropped:12 overruns:0 
frame:0\n          TX packets:485093 errors:0 dropped:0 overruns:0 carrier:0\n          collisions:0 txqueuelen:1000\n          RX bytes:32516767 (32.5 MB)  TX bytes:63285013 (63.2 MB)\n<\/code><\/pre>\n<h2><span id=\"scst\">SCST<\/span><\/h2>\n<p>I will not go into details this time. I started by installing some prerequisites:<\/p>\n<pre><code># aptitude install fakeroot kernel-wedge build-essential makedumpfile kernel-package libncurses5 libncurses5-dev gcc linux-headers-$(uname -r) lsscsi patch subversion lldpad\n<\/code><\/pre>\n<p>then fetched the SCST source code from SVN and built it as usual on both nodes:<\/p>\n<pre><code># svn checkout svn:\/\/svn.code.sf.net\/p\/scst\/svn\/trunk scst-trunk\n# cd scst-trunk\n# make scst scst_install iscsi iscsi_install scstadm scstadm_install srpt srpt_install\n<\/code><\/pre>\n<p>I didn&#8217;t bother re-compiling the kernel to gain some additional speed this time, since in one of my tests on Ubuntu it failed after 2 hours of compiling, so I decided it&#8217;s not worth the effort. 
Plus I&#8217;m sure this step is not even needed for the latest kernels.<\/p>\n<h2><span id=\"drbd\">DRBD<\/span><\/h2>\n<p>Nothing new here; I will just show the resource configuration file <code>\/etc\/drbd.d\/vg1.res<\/code> for <code>vg1<\/code>, after we install the <code>drbd8-utils<\/code> package first of course:<\/p>\n<pre><code>resource vg1 {\n    startup {\n        wfc-timeout 300;\n        degr-wfc-timeout 120;\n        outdated-wfc-timeout 120;\n        become-primary-on both;\n    }\n    syncer {\n        rate 40M;\n    }\n    disk {\n        on-io-error detach;\n        fencing resource-only;\n        al-extents 3389;\n        c-plan-ahead 0;\n    }\n    handlers {\n        fence-peer              \"\/usr\/lib\/drbd\/crm-fence-peer.sh\";\n        after-resync-target     \"\/usr\/lib\/drbd\/crm-unfence-peer.sh\";\n        outdate-peer            \"\/usr\/lib\/heartbeat\/drbd-peer-outdater\";\n    }\n    options {\n        on-no-data-accessible io-error;\n        #on-no-data-accessible suspend-io;\n    }\n    net {\n        allow-two-primaries;\n        timeout 60;\n        ping-timeout 30;\n        ping-int 30;\n        cram-hmac-alg \"sha1\";\n        shared-secret \"secret\";\n        max-epoch-size 8192;\n        max-buffers 8912;\n        sndbuf-size 512k;\n        rr-conflict disconnect;\n        after-sb-0pri discard-zero-changes;\n        after-sb-1pri discard-secondary;\n        after-sb-2pri disconnect;\n    }\n    volume 0 {\n       device      \/dev\/drbd0;\n       disk        \/dev\/sda;\n       meta-disk   internal;\n    }\n    on hpms01 {\n       address     192.168.152.99:7788;\n    }\n    on hpms02 {\n       address     192.168.152.98:7788;\n    }\n}\n<\/code><\/pre>\n<p>where the only noticeable difference is <code>allow-two-primaries<\/code> in the net section, which allows DRBD to become Active on both nodes. 
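<\/p>\n<p>With <code>allow-two-primaries<\/code> in place, a quick way to confirm later that the resource really is dual-primary and in sync is to check <code>\/proc\/drbd<\/code>. A small hedged helper along these lines (the function name and the sample file are illustrative only, not part of the setup) captures what to look for:<\/p>

```shell
# Illustrative helper: succeed only when the DRBD status line shows
# both peers Primary and both disks UpToDate.
check_dual_primary() {
    grep -q 'ro:Primary/Primary ds:UpToDate/UpToDate' "$1"
}

# A captured sample of /proc/drbd from a healthy dual-primary node;
# on a live node you would pass /proc/drbd itself.
cat > /tmp/drbd.sample <<'EOF'
version: 8.4.3 (api:1/proto:86-101)
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
EOF

if check_dual_primary /tmp/drbd.sample; then
    echo "vg1 is dual-primary and in sync"
fi
```

\n<p>On a live node we would simply call it as <code>check_dual_primary \/proc\/drbd<\/code>.<\/p>\n<p>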
The rest is the same: we create the meta-data on both nodes and activate the resource:<\/p>\n<pre><code># drbdadm create-md vg1\n# drbdadm up vg1\n<\/code><\/pre>\n<p>and then perform the initial sync on one of them, selecting it as <code>Master<\/code>:<\/p>\n<pre><code># drbdadm primary --force vg1\n<\/code><\/pre>\n<p>When the sync is complete we just promote the other node to primary too:<\/p>\n<pre><code># drbdadm primary vg1\n<\/code><\/pre>\n<p>after which we have:<\/p>\n<pre><code>root@hpms01:~# cat \/proc\/drbd\nversion: 8.4.3 (api:1\/proto:86-101)\nsrcversion: 6551AD2C98F533733BE558C\n 0: cs:Connected ro:Primary\/Primary ds:UpToDate\/UpToDate C r-----\n    ns:0 nr:0 dw:0 dr:20372 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0\n<\/code><\/pre>\n<p>In a production environment we also need to change the fencing mechanism to <code>fencing resource-and-stonith;<\/code> so DRBD hands this task over to <code>Pacemaker<\/code>, once we have <code>STONITH<\/code> tested and working on the bare-metal servers.<\/p>\n<h2><span id=\"corosync-and-pacemaker\">Corosync and Pacemaker<\/span><\/h2>\n<p>We install the needed packages as usual. Since I&#8217;m using the Ubuntu cloud image for the VM&#8217;s, I first need to install the linux-image-extra-virtual package that provides the clustering goodies:<\/p>\n<pre><code># aptitude install linux-image-extra-virtual\n# shutdown -r now\n<\/code><\/pre>\n<p>Then the rest of the software:<\/p>\n<pre><code># aptitude install heartbeat pacemaker corosync fence-agents openais cluster-glue resource-agents dlm lvm2 clvm drbd8-utils sg3-utils\n<\/code><\/pre>\n<p>Then comes the Corosync configuration file <code>\/etc\/corosync\/corosync.conf<\/code> with a double-ring setup:<\/p>\n<pre><code>totem {\n    version: 2\n\n    # How long before declaring a token lost (ms)\n    token: 3000\n\n    # How many token retransmits before forming a new configuration\n    token_retransmits_before_loss_const: 10\n\n    # How long to wait for join messages in the membership 
protocol (ms)\n    join: 60\n\n    # How long to wait for consensus to be achieved before starting a new round of membership configuration (ms)\n    consensus: 3600\n\n    # Turn off the virtual synchrony filter\n    vsftype: none\n\n    # Number of messages that may be sent by one processor on receipt of the token\n    max_messages: 20\n\n    # Stagger sending the node join messages by 1..send_join ms\n    send_join: 45\n\n    # Limit generated nodeids to 31-bits (positive signed integers)\n    clear_node_high_bit: yes\n\n    # Disable encryption\n    secauth: off\n\n    # How many threads to use for encryption\/decryption\n    threads: 0\n\n    # Optionally assign a fixed node id (integer)\n    # nodeid: 1234\n\n    # Cluster name, needed for DLM or DLM wouldn't start\n    cluster_name: iscsi\n\n    # This specifies the mode of redundant ring, which may be none, active, or passive.\n    rrp_mode: active\n\n    interface {\n        ringnumber: 0\n        bindnetaddr: 192.168.152.99\n        mcastaddr: 226.94.1.1\n        mcastport: 5404\n    }\n    interface {\n        ringnumber: 1\n        bindnetaddr: 192.168.122.99\n        mcastaddr: 226.94.41.1\n        mcastport: 5405\n    }\n    transport: udpu\n}\nnodelist {\n    node {\n        ring0_addr: 192.168.152.99\n        ring1_addr: 192.168.122.99\n        nodeid: 1\n    }\n    node {\n        ring0_addr: 192.168.152.98\n        ring1_addr: 192.168.122.98\n        nodeid: 2\n    }\n}\nquorum {\n    provider: corosync_votequorum\n    expected_votes: 2\n    two_node: 1\n    wait_for_all: 1\n}\namf {\n    mode: disabled\n}\nservice {\n     # Load the Pacemaker Cluster Resource Manager\n     # if 0: start pacemaker\n     # if 1: don't start pacemaker\n     ver:       1\n     name:      pacemaker\n}\naisexec {\n        user:   root\n        group:  root\n}\nlogging {\n        fileline: off\n        to_stderr: yes\n        to_logfile: no\n        to_syslog: yes\n        syslog_facility: daemon\n        debug: off\n    
    timestamp: on\n        logger_subsys {\n                subsys: QUORUM\n                debug: off\n                tags: enter|leave|trace1|trace2|trace3|trace4|trace6\n        }\n}\n<\/code><\/pre>\n<p>Starting with Corosync-2.0 a new <code>quorum<\/code> section has been introduced. For a <code>2-node<\/code> cluster it looks like the one in the setup above and is very important for proper operation. The option <code>two_node: 1<\/code> tells Corosync this is a 2-node cluster and enables the cluster to remain operational when one node powers down or crashes. It implies <code>expected_votes: 2<\/code> to be set too. The option <code>wait_for_all: 1<\/code> means though that <strong>BOTH<\/strong> nodes need to be running in order for the cluster to become operational. This is to prevent a split-brain situation in case of a partitioned cluster on startup.<\/p>\n<p>Then we set the Pacemaker parameters for a 2-node cluster (on one node only):<\/p>\n<pre><code>root@hpms02:~# crm configure property stonith-enabled=false\nroot@hpms02:~# crm configure property no-quorum-policy=ignore\n\nroot@hpms02:~# crm status\nLast updated: Sat Mar  5 11:53:20 2016\nLast change: Sat Mar  5 09:53:00 2016 via cibadmin on hpms02\nStack: corosync\nCurrent DC: hpms01 (1) - partition with quorum\nVersion: 1.1.10-42f2063\n2 Nodes configured\n0 Resources configured\n\nOnline: [ hpms01 hpms02 ]\n<\/code><\/pre>\n<p>And finally we take care of the auto start:<\/p>\n<pre><code>root@hpms02:~# update-rc.d corosync enable\nroot@hpms01:~# update-rc.d -f pacemaker remove\nroot@hpms01:~# update-rc.d pacemaker start 50 1 2 3 4 5 . 
stop 01 0 6 .\nroot@hpms01:~# update-rc.d pacemaker enable\n<\/code><\/pre>\n<p>Now we can set DRBD under Pacemaker control (on one of the nodes):<\/p>\n<pre><code>root@hpms01:~# crm configure\ncrm(live)configure# primitive p_drbd_vg1 ocf:linbit:drbd \\\n    params drbd_resource=\"vg1\" \\\n    op monitor interval=\"10\" role=\"Master\" \\\n    op monitor interval=\"20\" role=\"Slave\" \\\n    op start interval=\"0\" timeout=\"240\" \\\n    op stop interval=\"0\" timeout=\"100\"\nms ms_drbd p_drbd_vg1 \\\n    meta master-max=\"2\" master-node-max=\"1\" clone-max=\"2\" clone-node-max=\"1\" notify=\"true\" interleave=\"true\"\ncrm(live)configure# commit\ncrm(live)configure# quit\nbye\nroot@hpms01:~#\n<\/code><\/pre>\n<p>after which the cluster state will be:<\/p>\n<pre><code>root@hpms01:~# crm status\nLast updated: Wed Mar  9 04:09:38 2016\nLast change: Tue Mar  8 12:24:13 2016 via crmd on hpms01\nStack: corosync\nCurrent DC: hpms02 (2) - partition with quorum\nVersion: 1.1.10-42f2063\n2 Nodes configured\n10 Resources configured\n\n\nOnline: [ hpms01 hpms02 ]\n\n Master\/Slave Set: ms_drbd [p_drbd_vg1]\n     Masters: [ hpms01 hpms02 ]\n<\/code><\/pre>\n<h2><span id=\"fencing\">Fencing<\/span><\/h2>\n<p>Since our servers are VM&#8217;s running in <code>Libvirt\/KVM<\/code> we can use the <code>fence_virsh<\/code> STONITH device in this case and we enable the STONITH feature in Pacemaker:<\/p>\n<pre><code>primitive p_fence_hpms01 stonith:fence_virsh \\\n   params action=\"reboot\" ipaddr=\"vm-host\" \\\n          login=\"root\" identity_file=\"\/root\/.ssh\/id_rsa\" \\\n          port=\"hpms01\"\nprimitive p_fence_hpms02 stonith:fence_virsh \\\n   params action=\"reboot\" ipaddr=\"vm-host\" \\\n          login=\"root\" identity_file=\"\/root\/.ssh\/id_rsa\" \\\n          port=\"hpms02\"\nlocation l_fence_hpms01 p_fence_hpms01 -inf: hpms01.virtual.local\nlocation l_fence_hpms02 p_fence_hpms02 -inf: hpms02.virtual.local\nproperty 
stonith-enabled=\"true\"\ncommit\n<\/code><\/pre>\n<p>The <code>location<\/code> constraints take care that the fencing device for hpms01 never ends up on hpms01, and the same for hpms02; a node fencing itself does not make any sense. The <code>port<\/code> parameter tells libvirt which VM needs rebooting.<\/p>\n<p>We also need to install the libvirt-bin package to have the virsh utility available in the VM&#8217;s:<\/p>\n<pre><code>root@hpms01:~# aptitude install libvirt-bin\n<\/code><\/pre>\n<p>Then, to enable VM fencing and support live VM migration, we first edit the hypervisor host's libvirtd config <code>\/etc\/libvirt\/libvirtd.conf<\/code> as shown below:<\/p>\n<pre><code>...\nlisten_tls = 0\nlisten_tcp = 1\ntcp_port = \"16509\"\nauth_tcp = \"none\"\n...\n<\/code><\/pre>\n<p>and restart:<\/p>\n<pre><code># service libvirt-bin restart\n<\/code><\/pre>\n<p>After all this has been set up we should be able to access the hypervisor from within our VM&#8217;s:<\/p>\n<pre><code>root@hpms01:~# virsh --connect=qemu+tcp:\/\/192.168.1.210\/system list --all\n Id    Name                           State\n----------------------------------------------------\n 15    hpms01                         running\n 16    hpms02                         running\n 17    proxmox01                      running\n 18    proxmox02                      running\n<\/code><\/pre>\n<p>meaning the fencing should now work. 
To test it:<\/p>\n<pre><code>root@hpms01:~# fence_virsh -a 192.168.1.210 -l root -k ~\/.ssh\/id_rsa -n hpms02 -o status\nStatus: ON\n\nroot@hpms02:~# fence_virsh -a 192.168.1.210 -l root -k ~\/.ssh\/id_rsa -n hpms01 -o status\nStatus: ON\n<\/code><\/pre>\n<p>but first we need to add the hpms01 and hpms02 public ssh keys to the hypervisor&#8217;s <code>\/root\/.ssh\/authorized_keys<\/code> file for password-less login.<\/p>\n<p>Another option is using the external\/libvirt device, which works over TCP, in which case we don&#8217;t need to fiddle with ssh:<\/p>\n<pre><code>primitive p_fence_hpms01 stonith:external\/libvirt \\\n  params hostlist=\"hpms01\" \\\n         hypervisor_uri=\"qemu+tcp:\/\/192.168.1.210\/system\" \\\n  op monitor interval=\"60s\"\nprimitive p_fence_hpms02 stonith:external\/libvirt \\\n  params hostlist=\"hpms02\" \\\n         hypervisor_uri=\"qemu+tcp:\/\/192.168.1.210\/system\" \\\n  op monitor interval=\"60s\"\nlocation l_fence_hpms01 p_fence_hpms01 -inf: hpms01.virtual.local\nlocation l_fence_hpms02 p_fence_hpms02 -inf: hpms02.virtual.local\nproperty stonith-enabled=\"true\"\ncommit\n<\/code><\/pre>\n<p>We can confirm it&#8217;s been started:<\/p>\n<pre><code>root@hpms01:~# crm status\nLast updated: Fri Mar 18 01:53:38 2016\nLast change: Fri Mar 18 01:51:01 2016 via cibadmin on hpms01\nStack: corosync\nCurrent DC: hpms02 (2) - partition with quorum\nVersion: 1.1.10-42f2063\n2 Nodes configured\n12 Resources configured\n\nOnline: [ hpms01 hpms02 ]\n\n Master\/Slave Set: ms_drbd [p_drbd_vg1]\n     Masters: [ hpms01 hpms02 ]\n Clone Set: cl_lvm [p_lvm_vg1]\n     Started: [ hpms01 hpms02 ]\n Master\/Slave Set: ms_scst [p_scst]\n     Masters: [ hpms02 ]\n     Slaves: [ hpms01 ]\n Clone Set: cl_lock [g_lock]\n     Started: [ hpms01 hpms02 ]\n p_fence_hpms01    (stonith:external\/libvirt):    Started hpms02\n p_fence_hpms02    (stonith:external\/libvirt):    Started hpms01\n<\/code><\/pre>\n<p>In a production bare-metal setup this would be a real remote 
management device\/card like iLO, iDRAC or IPMI (depending on the server brand), or a network-managed UPS unit. Example for an IPMI LAN device:<\/p>\n<pre><code>primitive p_fence_hpms01 stonith:fence_ipmilan \\\n   params pcmk_host_list=\"pcmk-1\" ipaddr=\"&lt;hpms01_ipmi_ip_address&gt;\" \\\n      action=\"reboot\" login=\"admin\" passwd=\"secret\" delay=\"15\" \\\n   op monitor interval=\"60s\"\nprimitive p_fence_hpms02 stonith:fence_ipmilan \\\n   params pcmk_host_list=\"pcmk-2\" ipaddr=\"&lt;hpms02_ipmi_ip_address&gt;\" \\\n      action=\"reboot\" login=\"admin\" passwd=\"secret\" delay=\"5\" \\\n   op monitor interval=\"60s\"\nlocation l_fence_hpms01 p_fence_hpms01 -inf: hpms01.virtual.local\nlocation l_fence_hpms02 p_fence_hpms02 -inf: hpms02.virtual.local\nproperty stonith-enabled=\"true\"\ncommit\n<\/code><\/pre>\n<p>The <code>delay<\/code> parameter is needed to avoid dual-fencing in two-node clusters and to prevent an infinite fencing loop. The node with <code>delay=\"15\"<\/code> gets a 15-second head start, so in a fence triggered by a network partition the node with the delay should always survive, while the node without the delay gets fenced immediately.<\/p>\n<h3><span id=\"amazon-ec2-fencing\">Amazon EC2 fencing<\/span><\/h3>\n<p>This is a special case. 
If the VM&#8217;s are running on AWS we need the fencing agent available at <a href=\"https:\/\/github.com\/beekhof\/fence_ec2\/blob\/392a146b232fbf2bf2f75605b1e92baef4be4a01\/fence_ec2\">fence_ec2<\/a>.<\/p>\n<pre><code># wget -O \/usr\/sbin\/fence_ec2 https:\/\/raw.githubusercontent.com\/beekhof\/fence_ec2\/392a146b232fbf2bf2f75605b1e92baef4be4a01\/fence_ec2\n# chmod 755 \/usr\/sbin\/fence_ec2\n<\/code><\/pre>\n<p>Then the fence primitive would look something like this:<\/p>\n<pre><code>primitive stonith_my-ec2-nodes stonith:fence_ec2 \\\n   params ec2-home=\"\/root\/ec2\" action=\"reboot\" \\\n      pcmk_host_check=\"static-list\" \\\n      pcmk_host_list=\"ec2-iscsi-01 ec2-iscsi-02\" \\\n   op monitor interval=\"600s\" timeout=\"300s\" \\\n   op start start-delay=\"30s\" interval=\"0\"\n<\/code><\/pre>\n<p>So we need to point the resource to our AWS environment and the API keys to use.<\/p>\n<h2><span id=\"dlm-clvm-lvm\">DLM, CLVM, LVM<\/span><\/h2>\n<p>What was said in Highly Available iSCSI Storage with Pacemaker and DRBD about DLM problems in Ubuntu applies here too. After sorting out the issues as described in that article, we proceed to creating the vg1 Volume Group, which thanks to CLVM will be clustered, and the Logical Volume (on one node only), which is described in more detail in Highly Available Replicated Storage with Pacemaker and DRBD in Dual-Primary mode. First we set some LVM parameters in <code>\/etc\/lvm\/lvm.conf<\/code>:<\/p>\n<pre><code>...\n    filter = [ \"a|drbd.*|\", \"r|.*|\" ]\n    write_cache_state = 0\n    locking_type = 3\n...\n<\/code><\/pre>\n<p>to tell LVM to look for VGs on the DRBD devices only and to switch the locking type to clustered locking. 
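<\/p>\n<p>The filter is an ordered list where the first matching pattern wins: <code>\"a|drbd.*|\"<\/code> accepts any device whose path contains drbd and <code>\"r|.*|\"<\/code> rejects everything else. Purely as an illustration (this toy function is not part of LVM), the evaluation logic amounts to:<\/p>

```shell
# Toy re-implementation of the lvm.conf filter semantics above:
# patterns are tried in order and the first match decides, so only
# DRBD devices get scanned for PV signatures.
lvm_filter() {
    case "$1" in
        *drbd*) echo accept ;;  # matches "a|drbd.*|"
        *)      echo reject ;;  # falls through to "r|.*|"
    esac
}

lvm_filter /dev/drbd0   # accept
lvm_filter /dev/sda     # reject
```

\n<p>This matters here because without the reject rule LVM would also see the PV signature through the backing device <code>\/dev\/sda<\/code> and could activate the VG behind DRBD&#8217;s back.<\/p>\n<p>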
Then we can create the volume:<\/p>\n<pre><code>root@hpms01:~# pvcreate \/dev\/drbd0\nroot@hpms01:~# vgcreate -c y vg1 \/dev\/drbd0\nroot@hpms01:~# lvcreate --name lun1 -l 100%vg vg1\n<\/code><\/pre>\n<p>Although we created the VG on the first node, if we run vgdisplay on the other node we will be able to see the Volume Group there as well. Then we configure the resources in Pacemaker:<\/p>\n<pre><code>primitive p_clvm ocf:lvm2:clvmd \\\n    params daemon_timeout=\"30\" \\\n    op monitor interval=\"60\" timeout=\"30\" \\\n    op start interval=\"0\" timeout=\"90\" \\\n    op stop interval=\"0\" timeout=\"100\"\nprimitive p_controld ocf:pacemaker:controld \\\n    op monitor interval=\"60\" timeout=\"60\" \\\n    op start interval=\"0\" timeout=\"90\" \\\n    op stop interval=\"0\" timeout=\"100\" \\\n    params daemon=\"dlm_controld\" \\\n    meta target-role=\"Started\"\nprimitive p_lvm_vg1 ocf:heartbeat:LVM \\\n    params volgrpname=\"vg1\" \\\n    op start interval=\"0\" timeout=\"30\" \\\n    op stop interval=\"0\" timeout=\"30\" \\\n    op monitor interval=\"0\" timeout=\"30\"\ngroup g_lock p_controld p_clvm\nclone cl_lock g_lock \\\n        meta globally-unique=\"false\" interleave=\"true\"\nclone cl_lvm p_lvm_vg1 \\\n        meta interleave=\"true\" target-role=\"Started\" globally-unique=\"false\"\ncolocation co_drbd_lock inf: cl_lock ms_drbd:Master\ncolocation co_lock_lvm inf: cl_lvm cl_lock\norder o_drbd_lock inf: ms_drbd:promote cl_lock\norder o_lock_lvm inf: cl_lock cl_lvm\norder o_vg1 inf: ms_drbd:promote cl_lvm:start ms_scst:start\ncommit\n<\/code><\/pre>\n<p>after which the state is:<\/p>\n<pre><code>root@hpms01:~# crm status\nLast updated: Wed Mar  9 04:20:39 2016\nLast change: Tue Mar  8 12:24:13 2016 via crmd on hpms01\nStack: corosync\nCurrent DC: hpms02 (2) - partition with quorum\nVersion: 1.1.10-42f2063\n2 Nodes configured\n10 Resources configured\n\nOnline: [ hpms01 hpms02 ]\n\n Master\/Slave Set: ms_drbd [p_drbd_vg1]\n     Masters: [ hpms01 hpms02 ]\n Clone Set: cl_lvm 
[p_lvm_vg1]\n     Started: [ hpms01 hpms02 ]\n Clone Set: cl_lock [g_lock]\n     Started: [ hpms01 hpms02 ]\n<\/code><\/pre>\n<p>We created some <code>colocation<\/code> and <code>order<\/code> constraints so the resources start properly and in a specific order.<\/p>\n<h2><span id=\"alua\">ALUA<\/span><\/h2>\n<p>This is the last and crucial step, where the SCST configuration is done. I used this excellent article from <a href=\"http:\/\/marcitland.blogspot.com.au\/2013\/04\/building-using-highly-available-esos.html\">Marc&#8217;s Adventures in IT Land<\/a> as a reference for the ALUA setup, which Marc describes in detail.<\/p>\n<p>First we load the kernel module and prepare the Target on each node:<\/p>\n<pre><code>root@hpms01:~# modprobe scst_vdisk\nroot@hpms01:~# scstadmin -add_target iqn.2016-02.local.virtual:hpms01.vg1 -driver iscsi\n\nroot@hpms02:~# modprobe scst_vdisk\nroot@hpms02:~# scstadmin -add_target iqn.2016-02.local.virtual:hpms02.vg1 -driver iscsi\n<\/code><\/pre>\n<p>Then we configure ALUA, i.e. the local and remote target port groups. 
Instead of running all this manually line by line we can put the commands in a file <code>alua_setup.sh<\/code>:<\/p>\n<pre><code>scstadmin -add_target iqn.2016-02.local.virtual:hpms01.vg1 -driver iscsi || true\n#scstadmin -set_tgt_attr iqn.2016-02.local.virtual:hpms01.vg1 -driver iscsi -attributes allowed_portal=\"192.168.122.99 192.168.152.99\"\nscstadmin -enable_target iqn.2016-02.local.virtual:hpms01.vg1 -driver iscsi\nscstadmin -set_drv_attr iscsi -attributes enabled=1\nscstadmin -add_dgrp esos\nscstadmin -add_tgrp local -dev_group esos\nscstadmin -set_tgrp_attr local -dev_group esos -attributes group_id=1\nscstadmin -add_tgrp_tgt iqn.2016-02.local.virtual:hpms01.vg1 -dev_group esos -tgt_group local\nscstadmin -set_tgt_attr iqn.2016-02.local.virtual:hpms01.vg1 -driver iscsi -attributes rel_tgt_id=1\nscstadmin -add_tgrp remote -dev_group esos\nscstadmin -set_tgrp_attr remote -dev_group esos -attributes group_id=2\nscstadmin -add_tgrp_tgt iqn.2016-02.local.virtual:hpms02.vg1 -dev_group esos -tgt_group remote\nscstadmin -set_ttgt_attr iqn.2016-02.local.virtual:hpms02.vg1 -dev_group esos -tgt_group remote -attributes rel_tgt_id=2\nscstadmin -open_dev vg1 -handler vdisk_blockio -attributes filename=\/dev\/vg1\/lun1,write_through=1,nv_cache=0\nscstadmin -add_lun 0 -driver iscsi -target iqn.2016-02.local.virtual:hpms01.vg1 -device vg1\nscstadmin -add_dgrp_dev vg1 -dev_group esos\n<\/code><\/pre>\n<p>(the <code>|| true<\/code> on the first line just tolerates the failure when the target has already been added manually above) and run it:<\/p>\n<pre><code>root@hpms01:~# \/bin\/bash alua_setup.sh\n<\/code><\/pre>\n<p>On hpms02 <code>alua_setup.sh<\/code> looks like this:<\/p>\n<pre><code>scstadmin -add_target iqn.2016-02.local.virtual:hpms02.vg1 -driver iscsi || true\n#scstadmin -set_tgt_attr iqn.2016-02.local.virtual:hpms02.vg1 -driver iscsi -attributes allowed_portal=\"192.168.122.98 192.168.152.98\"\nscstadmin -enable_target iqn.2016-02.local.virtual:hpms02.vg1 -driver iscsi\nscstadmin -set_drv_attr iscsi -attributes enabled=1\nscstadmin -add_dgrp esos\nscstadmin -add_tgrp local -dev_group esos\nscstadmin 
-set_tgrp_attr local -dev_group esos -attributes group_id=2\nscstadmin -add_tgrp_tgt iqn.2016-02.local.virtual:hpms02.vg1 -dev_group esos -tgt_group local\nscstadmin -set_tgt_attr iqn.2016-02.local.virtual:hpms02.vg1 -driver iscsi -attributes rel_tgt_id=2\nscstadmin -add_tgrp remote -dev_group esos\nscstadmin -set_tgrp_attr remote -dev_group esos -attributes group_id=1\nscstadmin -add_tgrp_tgt iqn.2016-02.local.virtual:hpms01.vg1 -dev_group esos -tgt_group remote\nscstadmin -set_ttgt_attr iqn.2016-02.local.virtual:hpms01.vg1 -dev_group esos -tgt_group remote -attributes rel_tgt_id=1\nscstadmin -open_dev vg1 -handler vdisk_blockio -attributes filename=\/dev\/vg1\/lun1,write_through=1,nv_cache=0\nscstadmin -add_lun 0 -driver iscsi -target iqn.2016-02.local.virtual:hpms02.vg1 -device vg1\nscstadmin -add_dgrp_dev vg1 -dev_group esos\n<\/code><\/pre>\n<p>and execute:<\/p>\n<pre><code>root@hpms02:~# \/bin\/bash alua_setup.sh\n<\/code><\/pre>\n<p>What this does is create local and remote target groups with IDs of 1 and 2 on each node, put them in a device group called esos, and map the device vg1 to the target LUN. Each target group will be presented to the initiators as a different path to the LUN.<\/p>\n<p>Now the crucial point: integrating this into Pacemaker. I decided to try the SCST ALUA OCF agent from the open source ESOS (Enterprise Storage OS) project (for lack of any other option really, apart from the SCST OCF agent SCSTLunMS which does NOT have the required features). 
I downloaded it from the project&#8217;s GitHub repo:<\/p>\n<pre><code># wget https:\/\/raw.githubusercontent.com\/astersmith\/esos\/master\/misc\/ocf\/scst\n# mkdir \/usr\/lib\/ocf\/resource.d\/esos\n# mv scst \/usr\/lib\/ocf\/resource.d\/esos\/\n# chmod +x \/usr\/lib\/ocf\/resource.d\/esos\/scst\n<\/code><\/pre>\n<p>However, this agent is prepared for the <code>ESOS<\/code> distribution, which installs on bare metal by the way, so it looks for specific software that we don&#8217;t need and don&#8217;t install for our iSCSI usage. Thus it needs some modifications in order to work (a fact which I discovered after hours of debugging using the ocf-tester utility). So here are the changes to <code>\/usr\/lib\/ocf\/resource.d\/esos\/scst<\/code>:<\/p>\n<pre><code>...\nPATH=$PATH:\/usr\/local\/sbin\n...\n#MODULES=\"scst qla2x00tgt iscsi_scst ib_srpt \\\n#scst_disk scst_vdisk scst_tape scst_changer fcst\"\nMODULES=\"scst iscsi_scst scst_disk scst_vdisk\"\n...\n    #if pidof fcoemon &gt; \/dev\/null 2&gt;&amp;1; then\n    #    ocf_log warn \"The fcoemon daemon is already running!\"\n    #else\n    #    ocf_run fcoemon -s || exit ${OCF_ERR_GENERIC}\n    #fi\n...\n    #for i in \"fcoemon lldpad iscsi-scstd\"; do\n    for i in \"lldpad iscsi-scstd\"; do\n...\n<\/code><\/pre>\n<p>So basically, we need to point it to the <code>scstadmin<\/code> binary under <code>\/usr\/local\/sbin<\/code>, which it was not able to find, remove the <code>fcoemon<\/code> daemon test since we don&#8217;t need it (Fibre Channel over Ethernet), and drop some modules we are not going to be using and have not installed, like <code>InfiniBand<\/code>, <code>tape<\/code> and media <code>changer<\/code> support. 
You can download the file from <a href=\"https:\/\/icicimov.github.io\/blog\/download\/scst\">here<\/a>.<\/p>\n<p>Now when I ran the OCF test again providing the agent with all needed input parameters:<\/p>\n<pre><code>root@hpms01:~\/scst-ocf# ocf-tester -v -n ms_scst -o OCF_ROOT=\/usr\/lib\/ocf -o alua=true -o device_group=esos -o local_tgt_grp=local -o remote_tgt_grp=remote -o m_alua_state=active -o s_alua_state=nonoptimized \/usr\/lib\/ocf\/resource.d\/esos\/scst\n<\/code><\/pre>\n<p>the test passed and the following SCST config file <code>\/etc\/scst.conf<\/code> was created:<\/p>\n<pre><code># Automatically generated by SCST Configurator v3.1.0-pre1.\n\n# Non-key attributes\nmax_tasklet_cmd 10\nsetup_id 0x0\nsuspend 0\nthreads 2\n\nHANDLER vdisk_blockio {\n    DEVICE vg1 {\n        filename \/dev\/vg1\/lun1\n        write_through 1\n\n        # Non-key attributes\n        block \"0 0\"\n        blocksize 512\n        cluster_mode 0\n        expl_alua 0\n        nv_cache 0\n        pr_file_name \/var\/lib\/scst\/pr\/vg1\n        prod_id vg1\n        prod_rev_lvl \" 320\"\n        read_only 0\n        removable 0\n        rotational 1\n        size 21470642176\n        size_mb 20476\n        t10_dev_id 509f7d73-vg1\n        t10_vend_id SCST_BIO\n        thin_provisioned 0\n        threads_num 1\n        threads_pool_type per_initiator\n        tst 1\n        usn 509f7d73\n        vend_specific_id 509f7d73-vg1\n    }\n}\n\nTARGET_DRIVER copy_manager {\n    # Non-key attributes\n    allow_not_connected_copy 0\n\n    TARGET copy_manager_tgt {\n        # Non-key attributes\n        addr_method PERIPHERAL\n        black_hole 0\n        cpu_mask ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff\n        forwarding 0\n        io_grouping_type auto\n        rel_tgt_id 0\n\n        LUN 0 vg1 {\n            # Non-key attributes\n            read_only 0\n        }\n    }\n}\n\nTARGET_DRIVER iscsi {\n    enabled 1\n\n    TARGET 
iqn.2016-02.local.virtual:hpms01.vg1 {\n        enabled 0\n\n        # Non-key attributes\n        DataDigest None\n        FirstBurstLength 65536\n        HeaderDigest None\n        ImmediateData Yes\n        InitialR2T No\n        MaxBurstLength 1048576\n        MaxOutstandingR2T 32\n        MaxRecvDataSegmentLength 1048576\n        MaxSessions 0\n        MaxXmitDataSegmentLength 1048576\n        NopInInterval 30\n        NopInTimeout 30\n        QueuedCommands 32\n        RDMAExtensions Yes\n        RspTimeout 90\n        addr_method PERIPHERAL\n        black_hole 0\n        cpu_mask ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff\n        forwarding 0\n        io_grouping_type auto\n        per_portal_acl 0\n        rel_tgt_id 0\n\n        LUN 0 vg1 {\n            # Non-key attributes\n            read_only 0\n        }\n    }\n}\n\nDEVICE_GROUP esos {\n    DEVICE vg1\n\n    TARGET_GROUP local {\n        group_id 1\n        state nonoptimized\n\n        # Non-key attributes\n        preferred 0\n\n        TARGET iqn.2016-02.local.virtual:hpms01.vg1\n    }\n\n    TARGET_GROUP remote {\n        group_id 2\n        state active\n\n        # Non-key attributes\n        preferred 0\n\n        TARGET iqn.2016-02.local.virtual:hpms02.vg1 {\n            rel_tgt_id 2\n        }\n    }\n}\n<\/code><\/pre>\n<p>which matches our ALUA setup.<\/p>\n<p>After that I could finalize the cluster configuration:<\/p>\n<pre><code>primitive p_scst ocf:esos:scst \\\n    params alua=\"true\" device_group=\"esos\" local_tgt_grp=\"local\" remote_tgt_grp=\"remote\" m_alua_state=\"active\" s_alua_state=\"nonoptimized\" \\\n    op monitor interval=\"10\" role=\"Master\" \\\n    op monitor interval=\"20\" role=\"Slave\" \\\n    op start interval=\"0\" timeout=\"120\" \\\n    op stop interval=\"0\" timeout=\"60\"\nms ms_scst p_scst \\\n    meta master-max=\"1\" master-node-max=\"1\" clone-max=\"2\" clone-node-max=\"1\" notify=\"true\" interleave=\"true\" 
target-role=\"Started\"\ncolocation co_vg1 inf: cl_lvm:Started ms_scst:Started ms_drbd:Master\norder o_vg1 inf: ms_drbd:promote cl_lvm:start ms_scst:start\ncommit\n<\/code><\/pre>\n<p>So, we create SCST as a Multi-State resource, passing the ALUA parameters to it. The cluster will put the target port group on the Master node into <code>active<\/code> state and the one on the Slave into <code>nonoptimized<\/code> state. We let the cluster decide which node becomes Master, since both nodes are replicated via DRBD in <code>Active\/Active<\/code> mode with clustered LVM on top, so from a data point of view it doesn&#8217;t really matter.<\/p>\n<p>After that we have this state in Pacemaker:<\/p>\n<pre><code>root@hpms01:~# crm_mon -1 -rfQ\nStack: corosync\nCurrent DC: hpms01 (1) - partition with quorum\nVersion: 1.1.10-42f2063\n2 Nodes configured\n10 Resources configured\n\nOnline: [ hpms01 hpms02 ]\n\nFull list of resources:\n\n Master\/Slave Set: ms_drbd [p_drbd_vg1]\n     Masters: [ hpms01 hpms02 ]\n Clone Set: cl_lvm [p_lvm_vg1]\n     Started: [ hpms01 hpms02 ]\n Master\/Slave Set: ms_scst [p_scst]\n     Masters: [ hpms01 ]\n     Slaves: [ hpms02 ]\n Clone Set: cl_lock [g_lock]\n     Started: [ hpms01 hpms02 ]\n\nMigration summary:\n* Node hpms02:\n* Node hpms01:\n<\/code><\/pre>\n<p>As we can see, everything is up and running. 
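<\/p>\n<p>The ALUA state that the agent manages can also be inspected directly on each node through the SCST sysfs tree (a quick check; the paths follow the device group and target group names we configured above):<\/p>\n<pre><code>root@hpms01:~# cat \/sys\/kernel\/scst_tgt\/device_groups\/esos\/target_groups\/local\/state\nroot@hpms01:~# cat \/sys\/kernel\/scst_tgt\/device_groups\/esos\/target_groups\/remote\/state\n<\/code><\/pre>\n<p>On the Master node the <code>local<\/code> target group should report <code>active<\/code> and the <code>remote<\/code> one <code>nonoptimized<\/code>, mirroring the <code>m_alua_state<\/code> and <code>s_alua_state<\/code> parameters we passed to the resource.<\/p>\n<p>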
Finally, to prevent resource migration due to a server flapping up and down (a bad network, let&#8217;s say) and to speed up resource failover when a node goes offline, we set some resource defaults:<\/p>\n<pre><code>root@hpms02:~# crm configure rsc_defaults resource-stickiness=100\nroot@hpms02:~# crm configure rsc_defaults migration-threshold=3\n<\/code><\/pre>\n<p>We can now test the connectivity from one of the clients:<\/p>\n<pre><code>root@proxmox01:~# iscsiadm -m discovery -t st -p 192.168.122.98\n192.168.122.98:3260,1 iqn.2016-02.local.virtual:hpms02.vg1\n192.168.152.98:3260,1 iqn.2016-02.local.virtual:hpms02.vg1\n\nroot@proxmox01:~# iscsiadm -m discovery -t st -p 192.168.122.99\n192.168.122.99:3260,1 iqn.2016-02.local.virtual:hpms01.vg1\n192.168.152.99:3260,1 iqn.2016-02.local.virtual:hpms01.vg1\n<\/code><\/pre>\n<p>As we can see, it discovers the targets on both portals, one target per IP that SCST is listening on. To wrap up, our complete cluster configuration looks like this:<\/p>\n<pre><code>root@hpms01:~# crm configure show | cat\nnode $id=\"1\" hpms01 \\\n    attributes standby=\"off\"\nnode $id=\"2\" hpms02\nprimitive p_clvm ocf:lvm2:clvmd \\\n    params daemon_timeout=\"30\" \\\n    op monitor interval=\"60\" timeout=\"30\" \\\n    op start interval=\"0\" timeout=\"90\" \\\n    op stop interval=\"0\" timeout=\"100\"\nprimitive p_controld ocf:pacemaker:controld \\\n    op monitor interval=\"60\" timeout=\"60\" \\\n    op start interval=\"0\" timeout=\"90\" \\\n    op stop interval=\"0\" timeout=\"100\" \\\n    params daemon=\"dlm_controld\" \\\n    meta target-role=\"Started\"\nprimitive p_drbd_vg1 ocf:linbit:drbd \\\n    params drbd_resource=\"vg1\" \\\n    op monitor interval=\"10\" role=\"Master\" \\\n    op monitor interval=\"20\" role=\"Slave\" \\\n    op start interval=\"0\" timeout=\"240\" \\\n    op stop interval=\"0\" timeout=\"100\"\nprimitive p_fence_hpms01 stonith:external\/libvirt \\\n    params hostlist=\"hpms01\" hypervisor_uri=\"qemu+tcp:\/\/192.168.1.210\/system\" \\\n    op 
monitor interval=\"60s\"\nprimitive p_fence_hpms02 stonith:external\/libvirt \\\n    params hostlist=\"hpms02\" hypervisor_uri=\"qemu+tcp:\/\/192.168.1.210\/system\" \\\n    op monitor interval=\"60s\"\nprimitive p_lvm_vg1 ocf:heartbeat:LVM \\\n    params volgrpname=\"vg1\" \\\n    op start interval=\"0\" timeout=\"30\" \\\n    op stop interval=\"0\" timeout=\"30\" \\\n    op monitor interval=\"0\" timeout=\"30\"\nprimitive p_scst ocf:esos:scst \\\n    params alua=\"true\" device_group=\"esos\" local_tgt_grp=\"local\" remote_tgt_grp=\"remote\" m_alua_state=\"active\" s_alua_state=\"nonoptimized\" \\\n    op monitor interval=\"10\" role=\"Master\" \\\n    op monitor interval=\"20\" role=\"Slave\" \\\n    op start interval=\"0\" timeout=\"120\" \\\n    op stop interval=\"0\" timeout=\"60\"\ngroup g_lock p_controld p_clvm\nms ms_drbd p_drbd_vg1 \\\n    meta master-max=\"2\" master-node-max=\"1\" clone-max=\"2\" clone-node-max=\"1\" notify=\"true\" interleave=\"true\"\nms ms_scst p_scst \\\n    meta master-max=\"1\" master-node-max=\"1\" clone-max=\"2\" clone-node-max=\"1\" notify=\"true\" interleave=\"true\" target-role=\"Started\"\nclone cl_lock g_lock \\\n    meta globally-unique=\"false\" interleave=\"true\"\nclone cl_lvm p_lvm_vg1 \\\n    meta interleave=\"true\" target-role=\"Started\" globally-unique=\"false\"\nlocation l_fence_hpms01 p_fence_hpms01 -inf: hpms01\nlocation l_fence_hpms02 p_fence_hpms02 -inf: hpms02\ncolocation co_drbd_lock inf: cl_lock ms_drbd:Master\ncolocation co_lock_lvm inf: cl_lvm cl_lock\ncolocation co_vg1 inf: cl_lvm:Started ms_scst:Started ms_drbd:Master\norder o_drbd_lock inf: ms_drbd:promote cl_lock\norder o_lock_lvm inf: cl_lock cl_lvm\norder o_vg1 inf: ms_drbd:promote cl_lvm:start ms_scst:start\nproperty $id=\"cib-bootstrap-options\" \\\n    dc-version=\"1.1.10-42f2063\" \\\n    cluster-infrastructure=\"corosync\" \\\n    stonith-enabled=\"true\" \\\n    no-quorum-policy=\"ignore\" \\\n    last-lrm-refresh=\"1458176609\"\nrsc_defaults 
$id=\"rsc-options\" \\\n    resource-stickiness=\"100\" \\\n    migration-threshold=\"3\"\n<\/code><\/pre>\n<p>[serialposts]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I already wrote a post on this topic so this is kind of extension or variation of the setup described here Highly Available iSCSI Storage with SCST, Pacemaker, DRBD and OCFS2. The main and most important difference is that thanks&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[17,9,16],"tags":[21,18,20],"class_list":["post-254","post","type-post","status-publish","format-standard","hentry","category-cluster","category-high-availability","category-storage","tag-drbd","tag-iscsi","tag-pacemaker"],"_links":{"self":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=254"}],"version-history":[{"count":4,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/254\/revisions"}],"predecessor-version":[{"id":265,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/254\/revisions\/265"}],"wp:attachment":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2F
v2%2Ftags&post=254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}