{"id":257,"date":"2016-03-13T10:34:18","date_gmt":"2016-03-12T23:34:18","guid":{"rendered":"https:\/\/icicimov.com\/blog\/?p=257"},"modified":"2017-01-02T19:25:49","modified_gmt":"2017-01-02T08:25:49","slug":"257","status":"publish","type":"post","link":"https:\/\/icicimov.com\/blog\/?p=257","title":{"rendered":"Highly Available iSCSI ALUA (Asymetric Logical Unit Access) Storage with Pacemaker and DRBD in Dual-Primary mode &#8211; Part2"},"content":{"rendered":"<p><div class=\"fx-toc fx-toc-id-257\"><h2 class=\"fx-toc-title\">Table of contents<\/h2><ul class='fx-toc-list level-1'>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=257#iscsi-client-initiator-servers-setup\">iSCSI Client (Initiator) Servers Setup<\/a>\n\t<\/li>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=257#testing\">TESTING<\/a>\n\t\t<ul class='toc-even level-2'>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=257#multipath-and-cluster-failover\">Multipath and Cluster failover<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=257#raw-disk-testing\">Raw disk testing<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=257#file-system-load-testing\">File system load testing<\/a>\n\t\t\t<\/li>\n<\/ul>\n<\/ul>\n<\/div>\n<br \/>\nThis is continuation of the <a href=\"https:\/\/icicimov.com\/blog\/?p=254\">Highly Available iSCSI ALUA Storage with Pacemaker and DRBD in Dual-Primary mode<\/a> series. We have setup the HA backing iSCSI storage and now we are going to setup a HA shared storage on the client side.<\/p>\n<h1><span id=\"iscsi-client-initiator-servers-setup\">iSCSI Client (Initiator) Servers Setup<\/span><\/h1>\n<p>As mentioned before these servers are running the latest Debian Jessie release:<\/p>\n<pre><code>root@proxmox01:~# lsb_release -a\nNo LSB modules are available.\nDistributor ID:    Debian\nDescription:    Debian GNU\/Linux 8.3 (jessie)\nRelease:    8.3\nCodename:    jessie\n<\/code><\/pre>\n<p>Same as in our previous setup we will use Multipathing for our Targets. 
<p>As in our previous setup, we will use multipathing for our targets. Our client servers are proxmox01 and proxmox02, with the following network configuration:</p>
<pre><code>root@proxmox01:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:70:2a:f7
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1246893 errors:0 dropped:0 overruns:0 frame:0
          TX packets:119352 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:222250289 (211.9 MiB)  TX bytes:25971272 (24.7 MiB)

eth1      Link encap:Ethernet  HWaddr 52:54:00:5d:8f:fc
          inet addr:192.168.152.52  Bcast:192.168.152.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe5d:8ffc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:962 errors:0 dropped:0 overruns:0 frame:0
          TX packets:208 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:386042 (376.9 KiB)  TX bytes:17556 (17.1 KiB)

vmbr0     Link encap:Ethernet  HWaddr 52:54:00:70:2a:f7
          inet addr:192.168.122.160  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe70:2af7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1246848 errors:0 dropped:0 overruns:0 frame:0
          TX packets:119353 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:204787619 (195.3 MiB)  TX bytes:25971378 (24.7 MiB)

root@proxmox02:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:51:6e:74
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1190402 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1567653 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:516871378 (492.9 MiB)  TX bytes:610374910 (582.0 MiB)

eth1      Link encap:Ethernet  HWaddr 52:54:00:f7:df:df
          inet addr:192.168.152.62  Bcast:192.168.152.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fef7:dfdf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:160786 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1214 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11285786 (10.7 MiB)  TX bytes:106168 (103.6 KiB)

vmbr0     Link encap:Ethernet  HWaddr 52:54:00:51:6e:74
          inet addr:192.168.122.170  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe51:6e74/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1190387 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1567654 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:500204984 (477.0 MiB)  TX bytes:610375016 (582.0 MiB)
</code></pre>
<p>Install the needed software:</p>
<pre><code>root@proxmox01:~# aptitude install sg3-utils lsscsi open-iscsi
</code></pre>
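<p>Before running discovery it is worth confirming that each portal is reachable over the intended interface. A minimal sanity check, assuming the portal addresses from Part 1 and the interface layout shown above (vmbr0 on the 192.168.122.0/24 network, eth1 on 192.168.152.0/24):</p>
<pre><code># run on each client node; every portal should answer over its own network
for p in 192.168.122.98 192.168.122.99; do ping -c1 -I vmbr0 $p; done
for p in 192.168.152.98 192.168.152.99; do ping -c1 -I eth1  $p; done
</code></pre>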
<p>We do the next steps on both nodes, although I show the process on proxmox01 only. Discover the targets:</p>
<pre><code>root@proxmox01:~# iscsiadm -m discovery -t st -p 192.168.122.98
192.168.122.98:3260,1 iqn.2016-02.local.virtual:hpms02.vg1
192.168.152.98:3260,1 iqn.2016-02.local.virtual:hpms02.vg1

root@proxmox01:~# iscsiadm -m discovery -t st -p 192.168.122.99
192.168.122.99:3260,1 iqn.2016-02.local.virtual:hpms01.vg1
192.168.152.99:3260,1 iqn.2016-02.local.virtual:hpms01.vg1
</code></pre>
<p>and log in:</p>
<pre><code>root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms01.vg1 -p 192.168.122.99:3260 --login
root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms01.vg1 -p 192.168.152.99:3260 --login

root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms02.vg1 -p 192.168.122.98:3260 --login
root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms02.vg1 -p 192.168.152.98:3260 --login
</code></pre>
<p>Check the sessions:</p>
<pre>
root@proxmox01:~# iscsiadm -m session -P 1
Target: iqn.2016-02.local.virtual:hpms01.vg1 (non-flash)
    Current Portal: 192.168.122.99:3260,1
    Persistent Portal: 192.168.122.99:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.122.160
        Iface HWaddress:
        Iface Netdev:
        SID: 1
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
    Current Portal: 192.168.152.99:3260,1
    Persistent Portal: 192.168.152.99:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.122.160
        Iface HWaddress:
        Iface Netdev:
        SID: 7
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
Target: iqn.2016-02.local.virtual:hpms02.vg1 (non-flash)
    Current Portal: 192.168.122.98:3260,1
    Persistent Portal: 192.168.122.98:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.122.160
        Iface HWaddress:
        Iface Netdev:
        SID: 2
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
    Current Portal: 192.168.152.98:3260,1
    Persistent Portal: 192.168.152.98:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.122.160
        Iface HWaddress:
        Iface Netdev:
        SID: 8
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
</pre>
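<p>Logging in portal by portal quickly gets tedious. Since discovery already created a node record for every portal, a single command should be equivalent to the four logins above:</p>
<pre><code>root@proxmox01:~# iscsiadm -m node -L all
</code></pre>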
<p>To find which device belongs to which portal connection we can run the same command with <code>-P 3</code> to get even more details:</p>
<pre>
root@proxmox01:~# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 2.0-873
Target: iqn.2016-02.local.virtual:hpms01.vg1 (non-flash)
    Current Portal: 192.168.122.99:3260,1
    Persistent Portal: 192.168.122.99:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.122.160
        Iface HWaddress:
        Iface Netdev:
        SID: 17
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username:
        password: ********
        username_in:
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 1048576
        FirstBurstLength: 65536
        MaxBurstLength: 1048576
        ImmediateData: Yes
        InitialR2T: No
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 18    State: running
        scsi18 Channel 00 Id 0 Lun: 0
            Attached scsi disk sda        State: running
    Current Portal: 192.168.152.99:3260,1
    Persistent Portal: 192.168.152.99:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.152.52
        Iface HWaddress:
        Iface Netdev:
        SID: 18
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username:
        password: ********
        username_in:
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 1048576
        FirstBurstLength: 65536
        MaxBurstLength: 1048576
        ImmediateData: Yes
        InitialR2T: No
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 19    State: running
        scsi19 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdb        State: running
Target: iqn.2016-02.local.virtual:hpms02.vg1 (non-flash)
    Current Portal: 192.168.122.98:3260,1
    Persistent Portal: 192.168.122.98:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.122.160
        Iface HWaddress:
        Iface Netdev:
        SID: 19
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username:
        password: ********
        username_in:
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 1048576
        FirstBurstLength: 65536
        MaxBurstLength: 1048576
        ImmediateData: Yes
        InitialR2T: No
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 20    State: running
        scsi20 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdc        State: running
    Current Portal: 192.168.152.98:3260,1
    Persistent Portal: 192.168.152.98:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:f1da7239b69
        Iface IPaddress: 192.168.152.52
        Iface HWaddress:
        Iface Netdev:
        SID: 20
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username:
        password: ********
        username_in:
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 1048576
        FirstBurstLength: 65536
        MaxBurstLength: 1048576
        ImmediateData: Yes
        InitialR2T: No
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 21    State: running
        scsi21 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdd        State: running
</pre>
<p>We can see that four new block devices, <code>sda</code>, <code>sdb</code>, <code>sdc</code> and <code>sdd</code>, have been created upon login to the targets. The device names depend on the login order, so it is important that we use <code>disk-by-id</code> paths or the disk <code>WWID</code> in our further configuration, since the disk order and names can change. The LUNs from hpms01 have been attached locally as <code>sda</code> and <code>sdb</code>, whereas the LUNs from hpms02 appear as <code>sdc</code> and <code>sdd</code>. These disks have to match the multipath connections further down on this page, grouped into the appropriate path groups, of which the path group leading to the current SCST ALUA Master (and its disks) should be marked as <code>status=active</code> and the other one as <code>status=enabled</code>.</p>
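<p>A quick way to map each <code>sdX</code> device back to its target and portal, without wading through the full <code>-P 3</code> output, is via the persistent udev symlinks, or via <code>lsscsi</code> which we installed earlier:</p>
<pre><code>root@proxmox01:~# ls -l /dev/disk/by-path/ | grep iscsi
root@proxmox01:~# lsscsi --transport
</code></pre>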
<p>We can query one of the devices to discover the features offered by the iSCSI backend:</p>
<pre><code>root@proxmox01:~# sg_inq /dev/sda
standard INQUIRY:
  PQual=0  Device_type=0  RMB=0  LU_CONG=0  version=0x06  [SPC-4]
  [AERC=0]  [TrmTsk=0]  NormACA=0  HiSUP=0  Resp_data_format=2
  SCCS=0  ACC=0  TPGS=1  3PC=1  Protect=0  [BQue=0]
  EncServ=0  MultiP=1 (VS=0)  [MChngr=0]  [ACKREQQ=0]  Addr16=0
  [RelAdr=0]  WBus16=0  Sync=0  [Linked=0]  [TranDis=0]  CmdQue=1
  [SPI: Clocking=0x0  QAS=0  IUS=0]
    length=66 (0x42)   Peripheral device type: disk
 Vendor identification: SCST_BIO
 Product identification: vg1
 Product revision level:  320
 Unit serial number: 509f7d73
</code></pre>
<p>The most important value here is <code>TPGS=1</code>, which tells us that target port groups are enabled on the target. Now to read the TPG settings:</p>
<pre><code>root@proxmox01:~# sg_rtpg -vvd /dev/sda
open /dev/sda with flags=0x802
    report target port groups cdb: a3 0a 00 00 00 00 00 00 04 00 00 00
    report target port group: pass-through requested 1024 bytes (data-in) but got 28 bytes
Report list length = 28
Report target port groups:
  target port group id : 0x1 , Pref=0, Rtpg_fmt=0
    target port group asymmetric access state : 0x00 (active/optimized)
    T_SUP : 1, O_SUP : 1, LBD_SUP : 0, U_SUP : 1, S_SUP : 1, AN_SUP : 1, AO_SUP : 1
    status code : 0x02 (target port asym. state changed by implicit lu behaviour)
    vendor unique status : 0x00
    target port count : 01
    Relative target port ids:
      0x01
  target port group id : 0x2 , Pref=0, Rtpg_fmt=0
    target port group asymmetric access state : 0x01 (active/non optimized)
    T_SUP : 1, O_SUP : 1, LBD_SUP : 0, U_SUP : 1, S_SUP : 1, AN_SUP : 1, AO_SUP : 1
    status code : 0x02 (target port asym. state changed by implicit lu behaviour)
    vendor unique status : 0x00
    target port count : 01
    Relative target port ids:
      0x02
</code></pre>
<p>Here we can see both TPGs (Target Port Groups) we created on the server, with IDs 1 and 2. The output also tells us that the target implements implicit ALUA, and that TPG 1 is in <code>active/optimized</code> state while TPG 2 is in <code>active/non optimized</code>, exactly as we want them and as we configured them on the server.</p>
<p>To learn more about the device we can run:</p>
<pre><code>root@proxmox01:~# sg_vpd -p 0x83 --hex /dev/sda
Device Identification VPD page:
 00     00 83 00 34 02 01 00 14  53 43 53 54 5f 42 49 4f    ...4....SCST_BIO
 10     35 30 39 66 37 64 37 33  2d 76 67 31 01 14 00 04    509f7d73-vg1....
 20     00 00 00 01 01 15 00 04  00 00 00 01 01 02 00 08    ................
 30     35 30 39 66 37 64 37 33                             509f7d73
</code></pre>
<p>which gives us some details from the device's VPD (Vital Product Data) pages, in case they were not clear enough in the previous outputs. All the other devices will show the same output, since all of them are mapped to the same LUN on the iSCSI server.</p>
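<p>Before configuring Multipath it does not hurt to confirm that every path reports the ALUA state we expect. A small loop over the four devices discovered above (the device names are from this particular login order):</p>
<pre><code>root@proxmox01:~# for d in sda sdb sdc sdd; do
>   echo "== /dev/$d =="
>   sg_rtpg -d /dev/$d | grep 'asymmetric access state'
> done
</code></pre>
<p>Each device should report one <code>active/optimized</code> and one <code>active/non optimized</code> target port group, matching the SCST configuration from Part 1.</p>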
<p>Now armed with this knowledge, we can install and configure Multipath (the <code>multipath-tools</code> package on Debian). First find the WWID of the new device:</p>
<pre><code>root@proxmox01:~# /lib/udev/scsi_id -g -u -d /dev/sda
23530396637643733
</code></pre>
<p>and then create the Multipath config file. This is the config that worked for me, <code>/etc/multipath.conf</code>:</p>
<pre><code>defaults {
    user_friendly_names         yes
    polling_interval            2
    path_selector               "round-robin 0"
    path_grouping_policy        group_by_prio
    path_checker                readsector0
    #getuid_callout             "/lib/udev/scsi_id -g -u -d /dev/%n"
    rr_min_io                   100
    failback                    immediate
    prio                        "alua"
    features                    "0"
    no_path_retry               1
    detect_prio                 yes
    retain_attached_hw_handler  yes
}

devices {
  device {
    vendor              "SCST_BIO"
    product             "vg1"
    hardware_handler    "1 alua"
  }
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(hd|xvd|vd)[a-z]*"
    devnode "ofsctl"
    devnode "^asm/*"
}

blacklist_exceptions {
        wwid "23238363932313833"
        property "(ID_SCSI_VPD|ID_WWN|ID_SERIAL)"
}

multipaths {
  multipath {
    wwid    23238363932313833
    alias    mylun
  }
}
</code></pre>
<p>(Note that the <code>wwid</code> in the <code>blacklist_exceptions</code> and <code>multipaths</code> sections has to match the WWID reported by <code>scsi_id</code> above. Here it is left over from a different device, which is why the alias is not applied and the multipath device shows up below under the auto-generated name <code>mpatha</code> rather than <code>mylun</code>.)</p>
<p>Set up this way, Multipath will use both links of the active path group in <code>round-robin</code> fashion, sending a minimum of 100 I/Os down one link before it switches to the other one. This way we try to avoid, or at least minimize, problems in case one of the links in the active path suffers from congestion.</p>
<p>Restart the service:</p>
<pre><code>root@proxmox01:~# systemctl restart multipath-tools.service
</code></pre>
<p>and check multipath:</p>
<pre><code>root@proxmox01:~# multipath -ll
mpatha (23530396637643733) dm-3 SCST_BIO,vg1
size=20G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 2:0:0:0 sda 8:0  active ready running
| `- 8:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:0 sdb 8:16 active ready running
  `- 9:0:0:0 sdd 8:48 active ready running
</code></pre>
<p>We can see that Multipath grouped the four paths into two path groups, matching the two ALUA target port groups: the first one is the primary, with a priority of 50 and status <code>active</code>, and the second one the secondary, with a priority of 10 and status <code>enabled</code>.</p>
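<p>If you want to see how these priorities and path groups end up in the kernel, the device-mapper table for the multipath device can be dumped directly (just a sanity check; the exact format varies between kernel and multipath-tools versions):</p>
<pre><code>root@proxmox01:~# dmsetup table mpatha
</code></pre>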
<p>Multipath also created our new multipath device:</p>
<pre><code>root@proxmox01:~# ls -l /dev/mapper/mpatha
lrwxrwxrwx 1 root root 7 Mar  9 12:54 /dev/mapper/mpatha -> ../dm-3
</code></pre>
<p>which we can mount and start using like any other block device.</p>
<p>What is left is to set the path failure timeout to 10 seconds, down from the default of 120 seconds, which is far too high for fast failover:</p>
<pre><code>root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms01.vg1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 120
root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms01.vg1 -o update -n node.session.timeo.replacement_timeout -v 10
root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms01.vg1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 10
node.session.timeo.replacement_timeout = 10

root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms02.vg1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 120
root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms02.vg1 -o update -n node.session.timeo.replacement_timeout -v 10
root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms02.vg1 | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 10
node.session.timeo.replacement_timeout = 10
</code></pre>
<p>and set the client to log in to the targets on startup:</p>
<pre><code>root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms01.vg1 -o update -n node.startup -v automatic
root@proxmox01:~# iscsiadm -m node -T iqn.2016-02.local.virtual:hpms02.vg1 -o update -n node.startup -v automatic
</code></pre>
<p>so the device is available to Multipath after a reboot.</p>
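<p>These two settings have to be applied for every target, so with more targets a small loop saves typing and mistakes (a sketch; the target names are the ones used above):</p>
<pre><code>for t in iqn.2016-02.local.virtual:hpms01.vg1 iqn.2016-02.local.virtual:hpms02.vg1; do
    iscsiadm -m node -T $t -o update -n node.session.timeo.replacement_timeout -v 10
    iscsiadm -m node -T $t -o update -n node.startup -v automatic
done
</code></pre>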
<p>Finally, we set Multipath to auto-start:</p>
<pre><code>root@proxmox01:~# systemctl enable multipath-tools
Synchronizing state for multipath-tools.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d multipath-tools defaults
Executing /usr/sbin/update-rc.d multipath-tools enable
</code></pre>
<h1><span id="testing">TESTING</span></h1>
<h2><span id="multipath-and-cluster-failover">Multipath and Cluster failover</span></h2>
<p>First, a basic Multipath test with link failure detection. We bring down <code>eth1</code>, which is one of the links in the active path group:</p>
<pre><code>root@proxmox01:~# ifdown eth1
root@proxmox01:~# multipath -ll
mpatha (23530396637643733) dm-3 SCST_BIO,vg1
size=20G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 2:0:0:0 sda 8:0  active ready  running
| `- 8:0:0:0 sdc 8:32 active faulty running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:0 sdb 8:16 active ready  running
  `- 9:0:0:0 sdd 8:48 active ready  running
</code></pre>
<p>and we can see that Multipath noticed the failure and marked the path as faulty. On bringing it back up:</p>
<pre><code>root@proxmox01:~# ifup eth1
root@proxmox01:~# multipath -ll
mpatha (23530396637643733) dm-3 SCST_BIO,vg1
size=20G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 8:0:0:0 sdc 8:32 active ready running
| `- 2:0:0:0 sda 8:0  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:0 sdb 8:16 active ready running
  `- 9:0:0:0 sdd 8:48 active ready running
</code></pre>
<p>it puts the path back into the active state.</p>
<p>Next we test failover of the iSCSI backend servers. Take note of the Multipath state above and of the cluster resource state:</p>
<pre><code>root@hpms01:~# crm status
Last updated: Thu Mar 17 00:56:50 2016
Last change: Thu Mar 17 00:47:41 2016 via cibadmin on hpms01
Stack: corosync
Current DC: hpms01 (1) - partition with quorum
Version: 1.1.10-42f2063
2 Nodes configured
10 Resources configured

Online: [ hpms01 hpms02 ]

 Master/Slave Set: ms_drbd [p_drbd_vg1]
     Masters: [ hpms01 hpms02 ]
 Clone Set: cl_lvm [p_lvm_vg1]
     Started: [ hpms01 hpms02 ]
 Master/Slave Set: ms_scst [p_scst]
     Masters: [ hpms01 ]
     Slaves: [ hpms02 ]
 Clone Set: cl_lock [g_lock]
     Started: [ hpms01 hpms02 ]
</code></pre>
<p>Now reboot the hpms01 node, which has the iSCSI target active (the Master role of the ms_scst resource):</p>
<pre><code>root@hpms01:~# reboot
Broadcast message from ubuntu@hpms01
    (/dev/pts/0) at 1:01 ...

The system is going down for reboot NOW!
</code></pre>
<p>and monitor the Pacemaker state on the second node, hpms02:</p>
<pre><code>root@hpms02:~# crm_mon -Qrf
Stack: corosync
Current DC: hpms02 (2) - partition with quorum
Version: 1.1.10-42f2063
2 Nodes configured
10 Resources configured

Online: [ hpms02 ]
Offline: [ hpms01 ]

Full list of resources:

 Master/Slave Set: ms_drbd [p_drbd_vg1]
     Masters: [ hpms02 ]
     Stopped: [ hpms01 ]
 Clone Set: cl_lvm [p_lvm_vg1]
     Started: [ hpms02 ]
     Stopped: [ hpms01 ]
 Master/Slave Set: ms_scst [p_scst]
     Masters: [ hpms02 ]
     Stopped: [ hpms01 ]
 Clone Set: cl_lock [g_lock]
     Started: [ hpms02 ]
     Stopped: [ hpms01 ]

Migration summary:
* Node hpms02:
* Node hpms01:
</code></pre>
<p>We can see that the cluster detected that node hpms01 went offline and promoted the ms_scst resource on the other node into Master state.</p>
<p>On the client, we can also see that Multipath switched to the secondary path group and marked the paths of the primary one as faulty:</p>
<pre><code>root@proxmox01:~# multipath -ll
mpatha (23530396637643733) dm-3 SCST_BIO,vg1
size=20G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 6:0:0:0 sda 8:0  failed faulty offline
  |- 7:0:0:0 sdd 8:48 failed faulty offline
  |- 8:0:0:0 sdc 8:32 active ready  running
  `- 9:0:0:0 sdb 8:16 active ready  running
</code></pre>
<p>and the shared drive is still mounted and the file system available:</p>
<pre><code>root@proxmox01:~# ls -l  /share/
total 1536000
-rw-r--r-- 1 root root 1572864000 Mar 11 11:36 test.img
</code></pre>
<p>When hpms01 comes back online:</p>
<pre><code>root@hpms02:~# crm_mon -Qrf1
Stack: corosync
Current DC: hpms02 (2) - partition with quorum
Version: 1.1.10-42f2063
2 Nodes configured
10 Resources configured

Online: [ hpms01 hpms02 ]

Full list of resources:

 Master/Slave Set: ms_drbd [p_drbd_vg1]
     Masters: [ hpms01 hpms02 ]
 Clone Set: cl_lvm [p_lvm_vg1]
     Started: [ hpms01 hpms02 ]
 Master/Slave Set: ms_scst [p_scst]
     Masters: [ hpms02 ]
     Slaves: [ hpms01 ]
 Clone Set: cl_lock [g_lock]
     Started: [ hpms01 hpms02 ]

Migration summary:
* Node hpms02:
* Node hpms01:
</code></pre>
<p>we can see that it joins the cluster with no errors, and Multipath on the client detects this:</p>
<pre><code>root@proxmox01:~# multipath -ll
mpatha (23530396637643733) dm-3 SCST_BIO,vg1
size=20G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 8:0:0:0 sdc 8:32 active ready running
| `- 9:0:0:0 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:0:0 sdd 8:48 active ready running
  `- 6:0:0:0 sda 8:0  active ready running
</code></pre>
<p>Compared to the state before the reboot, however, the former secondary path group is now the primary and the previous primary has become the backup, since the SCST Master role has moved to the other node.</p>
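<p>That concludes the failover testing; next, some performance numbers. The benchmarks below use <code>fio</code>, which was not among the packages we installed earlier; assuming the stock Debian package:</p>
<pre><code>root@proxmox01:~# aptitude install fio
</code></pre>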
<h2><span id="raw-disk-testing">Raw disk testing</span></h2>
<p>Sequential reads and writes:</p>
<pre><code>root@proxmox01:~# fio --bs=4M --direct=1 --rw=read --ioengine=libaio --iodepth=64 --name=/dev/mapper/mpatha --runtime=60
/dev/mapper/mpatha: (g=0): rw=read, bs=4M-4M/4M-4M/4M-4M, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [R(1)] [36.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 01m:50s]
/dev/mapper/mpatha: (groupid=0, jobs=1): err= 0: pid=6920: Thu Mar 10 16:29:40 2016
  read : io=7612.0MB, bw=126628KB/s, iops=30, runt= 61556msec
    slat (usec): min=285, max=51033, avg=1581.12, stdev=4362.51
    clat (msec): min=28, max=5324, avg=2048.51, stdev=1145.08
     lat (msec): min=29, max=5325, avg=2050.09, stdev=1145.58
    clat percentiles (msec):
     |  1.00th=[   95],  5.00th=[  273], 10.00th=[  515], 20.00th=[  979],
     | 30.00th=[ 1139], 40.00th=[ 1516], 50.00th=[ 2212], 60.00th=[ 2606],
     | 70.00th=[ 2769], 80.00th=[ 2999], 90.00th=[ 3490], 95.00th=[ 4015],
     | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 5342],
     | 99.99th=[ 5342]
    bw (KB  /s): min= 2946, max=172298, per=100.00%, avg=127106.71, stdev=32315.94
    lat (msec) : 50=0.32%, 100=1.00%, 250=3.31%, 500=4.94%, 750=4.57%
    lat (msec) : 1000=7.30%, 2000=24.44%, >=2000=54.13%
  cpu          : usr=0.14%, sys=5.39%, ctx=2840, majf=0, minf=65543
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=1903/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
   READ: io=7612.0MB, aggrb=126627KB/s, minb=126627KB/s, maxb=126627KB/s, mint=61556msec, maxt=61556msec

root@proxmox01:~# fio --bs=4K --direct=1 --rw=write --ioengine=libaio --iodepth=64 --name=/dev/mapper/mpatha --runtime=60
/dev/mapper/mpatha: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [W(1)] [0.2% done] [0KB/1013KB/0KB /s] [0/253/0 iops] [eta 10h:06m:05s]
/dev/mapper/mpatha: (groupid=0, jobs=1): err= 0: pid=7535: Thu Mar 10 16:42:42 2016
  write: io=35368KB, bw=601738B/s, iops=146, runt= 60187msec
    slat (usec): min=7, max=82441, avg=122.22, stdev=1017.52
    clat (msec): min=49, max=1506, avg=435.47, stdev=171.66
     lat (msec): min=49, max=1506, avg=435.60, stdev=171.64
    clat percentiles (msec):
     |  1.00th=[  130],  5.00th=[  196], 10.00th=[  237], 20.00th=[  302],
     | 30.00th=[  338], 40.00th=[  371], 50.00th=[  408], 60.00th=[  445],
     | 70.00th=[  506], 80.00th=[  570], 90.00th=[  652], 95.00th=[  750],
     | 99.00th=[  963], 99.50th=[ 1012], 99.90th=[ 1303], 99.95th=[ 1352],
     | 99.99th=[ 1500]
    bw (KB  /s): min=  115, max= 1226, per=100.00%, avg=588.72, stdev=190.46
    lat (msec) : 50=0.01%, 100=0.24%, 250=11.72%, 500=57.17%, 750=25.84%
    lat (msec) : 1000=4.48%, 2000=0.54%
  cpu          : usr=0.39%, sys=1.69%, ctx=6620, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=8842/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
  WRITE: io=35368KB, aggrb=587KB/s, minb=587KB/s, maxb=587KB/s, mint=60187msec, maxt=60187msec
</code></pre>
<p>The device shows a throughput of around 126 MB/s for sequential reads with a 4 MB block size, but only about 590 KB/s (146 IOPS) for sequential writes with a 4 KB block size.</p>
<p>Random reads and writes:</p>
<pre><code>root@proxmox01:~# fio --bs=4k --direct=1 --rw=randread --ioengine=libaio --iodepth=64 --name=/dev/mapper/mpatha --runtime=60
/dev/mapper/mpatha: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [r(1)] [100.0% done] [10450KB/0KB/0KB /s] [2612/0/0 iops] [eta 00m:00s]
/dev/mapper/mpatha: (groupid=0, jobs=1): err= 0: pid=7246: Thu Mar 10 16:36:43 2016
  read : io=571136KB, bw=9516.5KB/s, iops=2379, runt= 60016msec
    slat (usec): min=6, max=20178, avg=57.84, stdev=451.54
    clat (usec): min=795, max=612854, avg=26832.35, stdev=24196.91
     lat (msec): min=1, max=612, avg=26.89, stdev=24.21
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[   12], 10.00th=[   14], 20.00th=[   17],
     | 30.00th=[   19], 40.00th=[   21], 50.00th=[   23], 60.00th=[   25],
     | 70.00th=[   28], 80.00th=[   32], 90.00th=[   39], 95.00th=[   49],
     | 99.00th=[  116], 99.50th=[  165], 99.90th=[  367], 99.95th=[  424],
     | 99.99th=[  611]
    bw (KB  /s): min= 1080, max=12392, per=100.00%, avg=9573.66, stdev=2173.80
    lat (usec) : 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.05%, 10=2.16%, 20=34.38%, 50=58.86%
    lat (msec) : 100=3.34%, 250=1.01%, 500=0.17%, 750=0.04%
  cpu          : usr=3.45%, sys=10.35%, ctx=99945, majf=0, minf=70
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=142784/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
   READ: io=571136KB, aggrb=9516KB/s, minb=9516KB/s, maxb=9516KB/s, mint=60016msec, maxt=60016msec

root@proxmox01:~# fio --bs=4k --direct=1 --rw=randwrite --ioengine=libaio --iodepth=64 --name=/dev/mapper/mpatha --runtime=60
/dev/mapper/mpatha: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [w(1)] [0.1% done] [0KB/163KB/0KB /s] [0/40/0 iops] [eta 01d:06h:34m:44s]
/dev/mapper/mpatha: (groupid=0, jobs=1): err= 0: pid=7400: Thu Mar 10 16:40:10 2016
  write: io=11864KB, bw=199525B/s, iops=48, runt= 60888msec
    slat (usec): min=8, max=13359, avg=143.63, stdev=655.08
    clat (msec): min=63, max=3869, avg=1313.34, stdev=593.31
     lat (msec): min=63, max=3869, avg=1313.48, stdev=593.34
    clat percentiles (msec):
     |  1.00th=[  219],  5.00th=[  424], 10.00th=[  578], 20.00th=[  816],
     | 30.00th=[  979], 40.00th=[ 1123], 50.00th=[ 1270], 60.00th=[ 1401],
     | 70.00th=[ 1565], 80.00th=[ 1762], 90.00th=[ 2089], 95.00th=[ 2442],
     | 99.00th=[ 3032], 99.50th=[ 3228], 99.90th=[ 3720], 99.95th=[ 3851],
     | 99.99th=[ 3884]
    bw (KB  /s): min=    4, max=  410, per=99.26%, avg=192.56, stdev=69.40
    lat (msec) : 100=0.07%, 250=1.38%, 500=5.77%, 750=8.93%, 1000=14.46%
    lat (msec) : 2000=57.48%, >=2000=11.90%
  cpu          : usr=0.23%, sys=0.78%, ctx=2792, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=2966/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
  WRITE: io=11864KB, aggrb=194KB/s, minb=194KB/s, maxb=194KB/s, mint=60888msec, maxt=60888msec
</code></pre>
<p>In random I/O the device sustains around 2,380 IOPS for reads but only about 48 IOPS for writes, so reading is much faster than writing in this case. That gap is expected: every write has to be synchronously replicated between the DRBD peers (dual-primary mode requires protocol C) before it is acknowledged, while reads are served straight from the active node's local disk.</p>
<h2><span id="file-system-load-testing">File system load testing</span></h2>
<p>I will use XFS for the test:</p>
<pre><code>root@proxmox01:~# mkfs -t xfs /dev/mapper/mpatha
meta-data=/dev/mapper/mpatha     isize=256    agcount=16, agsize=327616 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5241856, imaxpct=25
         =                       sunit=1      swidth=128 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@proxmox01:~# mkdir -p /share
root@proxmox01:~# mount /dev/mapper/mpatha /share -o _netdev,noatime,nodiratime,rw
root@proxmox01:~# cat /proc/mounts | grep share
/dev/mapper/mpatha /share xfs rw,noatime,nodiratime,attr2,inode64,sunit=8,swidth=1024,noquota 0 0
</code></pre>
<p>Now a simple dd test, bypassing the file system caches and disk buffers:</p>
<pre><code>root@proxmox01:~# echo 3 > /proc/sys/vm/drop_caches
root@proxmox01:~# dd if=/dev/zero of=/share/test.img bs=1024K count=1500 oflag=direct conv=fsync && sync;sync
1500+0 records in
1500+0 records out
1572864000 bytes (1.6 GB) copied, 198.26 s, 7.9 MB/s

root@proxmox01:~# echo 3 > /proc/sys/vm/drop_caches
root@proxmox01:~# dd if=/share/test.img of=/dev/null iflag=nocache oflag=nocache,sync
3072000+0 records in
3072000+0 records out
1572864000 bytes (1.6 GB) copied, 41.4182 s, 38.0 MB/s
</code></pre>
<p>So, without any help from caches, we get about 8 MB/s for writes with a 1 MB block size, and 38 MB/s for reads (with dd's default 512-byte block size).</p>
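<p>To have the file system come back automatically after a reboot, once open-iscsi and multipath-tools are enabled as shown above, an entry along these lines in <code>/etc/fstab</code> should do (a sketch, untested here; <code>_netdev</code> defers the mount until the network is up):</p>
<pre><code># /etc/fstab - the multipath device, mounted once iSCSI/multipath are up
/dev/mapper/mpatha  /share  xfs  _netdev,noatime,nodiratime  0  0
</code></pre>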