{"id":227,"date":"2014-09-24T17:14:55","date_gmt":"2014-09-24T07:14:55","guid":{"rendered":"https:\/\/icicimov.com\/blog\/?p=227"},"modified":"2017-01-02T17:40:19","modified_gmt":"2017-01-02T06:40:19","slug":"openstack-icehouse-multi-node-installation-with-ceph-backend-for-cinder-and-glance","status":"publish","type":"post","link":"https:\/\/icicimov.com\/blog\/?p=227","title":{"rendered":"OpenStack Icehouse Multi-node Installation with Ceph backend for Cinder and Glance"},"content":{"rendered":"<p><div class=\"fx-toc fx-toc-id-227\"><h2 class=\"fx-toc-title\">Table of contents<\/h2><ul class='fx-toc-list level-1'>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#preparation\">Preparation<\/a>\n\t\t<ul class='toc-even level-2'>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#networking\">Networking<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#mysql-and-openstack-services-db-setup\">MySQL and OpenStack services DB setup<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#rabbitmq\">RabbitMQ<\/a>\n\t\t\t<\/li>\n\t\t<\/ul>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#keystone\">Keystone<\/a>\n\t<\/li>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#glance\">Glance<\/a>\n\t<\/li>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#nova\">Nova<\/a>\n\t\t<ul class='toc-even level-2'>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#controller-node\">Controller node<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#compute-node\">Compute node<\/a>\n\t\t\t<\/li>\n\t\t<\/ul>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#neutron\">Neutron<\/a>\n\t\t<ul class='toc-even level-2'>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#controller-node_1\">Controller node<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a 
href=\"https:\/\/icicimov.com\/blog\/?p=227#networking-node\">Networking node<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#compute-node_1\">Compute node<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#create-the-initial-networks\">Create the initial networks<\/a>\n\t\t\t<\/li>\n\t\t<\/ul>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#horizon\">Horizon<\/a>\n\t<\/li>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#cinder\">Cinder<\/a>\n\t\t<ul class='toc-even level-2'>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#ceph-cluster-setup\">Ceph cluster setup<\/a>\n\t\t\t<\/li>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#cinder-setup\">Cinder setup<\/a>\n\t\t\t\t<ul class='toc-odd level-3'>\n\t\t\t\t\t<li>\n\t\t\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#controller-node_2\">Controller node<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t<li>\n\t\t\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#volume-nodes\">Volume nodes<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t<\/ul>\n\t\t\t<li>\n\t\t\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#create-the-first-volume\">Create the first volume<\/a>\n\t\t\t<\/li>\n\t\t<\/ul>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#launch-an-instance\">Launch an instance<\/a>\n\t<\/li>\n\t<li>\n\t\t<a href=\"https:\/\/icicimov.com\/blog\/?p=227#booting-from-image-volumes-stored-in-ceph\">Booting from image volumes stored in CEPH<\/a>\n\t<\/li>\n<\/ul>\n<\/div>\n<br \/>\nThis is a standard installation of OpenStack Icehouse on three VM nodes: Controller, Compute and Networking. 
Later I decided to create two separate storage nodes for the <code>Cinder<\/code> service, which will be using a <code>CEPH\/RADOS<\/code> cluster as the storage backend, since I wanted to test this functionality as well.<\/p>\n<p>These are the VM instances comprising the OpenStack setup, including the three for the Ceph cluster:<\/p>\n<pre><code>root@aywun:~# virsh list\n Id    Name                           State\n----------------------------------------------------\n 2     ostack-controller              running\n 3     ostack-ceph1                   running\n 4     ostack-ceph2                   running\n 5     ostack-ceph3                   running\n 6     ostack-network                 running\n 7     ostack-compute                 running\n 8     ostack-cinder-volume1          running\n 9     ostack-cinder-volume2          running\n<\/code><\/pre>\n<p><a href=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/01\/openstack-multinode.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/01\/openstack-multinode.png\" alt=\"\" width=\"778\" height=\"569\" class=\"aligncenter size-full wp-image-228\" srcset=\"https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/01\/openstack-multinode.png 778w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/01\/openstack-multinode-420x307.png 420w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/01\/openstack-multinode-744x544.png 744w, https:\/\/icicimov.com\/blog\/wp-content\/uploads\/2017\/01\/openstack-multinode-768x562.png 768w\" sizes=\"auto, (max-width: 778px) 100vw, 778px\" \/><\/a><br \/>\n<em><strong>Picture 1:<\/strong> OpenStack test environment<\/em><\/p>\n<h1><span id=\"preparation\">Preparation<\/span><\/h1>\n<h2><span id=\"networking\">Networking<\/span><\/h2>\n<p>Network node interface setup:<\/p>\n<pre><code># The primary network interface\nauto eth0\niface eth0 inet static\n    address 192.168.122.113\n    netmask 255.255.255.0\n    network 
192.168.122.0\n    broadcast 192.168.122.255\n    gateway 192.168.122.1\n    # dns-* options are implemented by the resolvconf package, if installed\n    dns-nameservers 192.168.122.1\n    dns-search virtual.local\n\n# The Data network interface\nauto eth1\niface eth1 inet static\n    address 192.168.133.113\n    netmask 255.255.255.0\n\n# The External network interface\nauto eth2\niface eth2 inet static\n    address 192.168.144.113\n    netmask 255.255.255.128\n<\/code><\/pre>\n<p>Compute node interface setup:<\/p>\n<pre><code># The primary network interface\nauto eth0\niface eth0 inet static\n    address 192.168.122.112\n    netmask 255.255.255.0\n    network 192.168.122.0\n    broadcast 192.168.122.255\n    gateway 192.168.122.1\n    # dns-* options are implemented by the resolvconf package, if installed\n    dns-nameservers 192.168.122.1\n    dns-search virtual.local\n\n# The Data network interface\nauto eth1\niface eth1 inet static\n    address 192.168.133.112\n    netmask 255.255.255.0\n<\/code><\/pre>\n<p>Controller node interface setup:<\/p>\n<pre><code># The primary network interface\nauto eth0\niface eth0 inet static\n    address 192.168.122.111\n    netmask 255.255.255.0\n    network 192.168.122.0\n    broadcast 192.168.122.255\n    gateway 192.168.122.1\n    # dns-* options are implemented by the resolvconf package, if installed\n    dns-nameservers 192.168.122.1\n    dns-search virtual.local\n\n# The API external network interface\nauto eth2\niface eth2 inet static\n    address 192.168.144.144\n    netmask 255.255.255.128\n<\/code><\/pre>\n<p>This means the External and API networks share the same <code>\/24<\/code> segment, with the External network taking the lower half <code>192.168.144.0\/25<\/code> and the API network the upper half <code>192.168.144.128\/25<\/code> of the range. 
The <code>192.168.122.0\/24<\/code> is the Management network and the <code>192.168.133.0\/24<\/code> is the VM data network.<\/p>\n<p>The hosts file on the servers:<\/p>\n<pre><code>192.168.122.111 ostack-controller.virtual.local ostack-controller\n192.168.122.112 ostack-compute.virtual.local    ostack-compute\n192.168.122.113 ostack-network.virtual.local    ostack-network\n<\/code><\/pre>\n<h2><span id=\"mysql-and-openstack-services-db-setup\">MySQL and OpenStack services DB setup<\/span><\/h2>\n<p>On the Controller node install the <code>mysql-server<\/code> package and change the settings in <code>\/etc\/mysql\/my.cnf<\/code>. First, set the bind address under the <code>[mysqld]<\/code> section:<\/p>\n<pre><code>[mysqld]\n...\nbind-address = 0.0.0.0\n<\/code><\/pre>\n<p>Then set the following keys to enable InnoDB, the UTF-8 character set, and UTF-8 collation by default:<\/p>\n<pre><code>[mysqld]\n...\ndefault-storage-engine = innodb\ninnodb_file_per_table\ncollation-server = utf8_general_ci\ninit-connect = 'SET NAMES utf8'\ncharacter-set-server = utf8\n<\/code><\/pre>\n<p>Restart and finish off the installation:<\/p>\n<pre><code># service mysql restart\n# mysql_install_db\n# mysql_secure_installation\n<\/code><\/pre>\n<p>Create the needed databases:<\/p>\n<pre><code>mysql -u root -ppassword&lt;&lt;EOF\nCREATE DATABASE nova;\nGRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%' \n  IDENTIFIED BY 'dieD9Mie';\nEOF\nmysql -v -u root -ppassword&lt;&lt;EOF\nCREATE DATABASE glance;\nGRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'%' \n  IDENTIFIED BY 'ohC3teiv';\nEOF\nmysql -v -u root -ppassword&lt;&lt;EOF\nCREATE DATABASE keystone;\nGRANT ALL PRIVILEGES ON keystone.* TO 'keystonedbadmin'@'%'\n  IDENTIFIED BY 'Ue0Ud7ra';\nEOF\nmysql -v -u root -ppassword&lt;&lt;EOF\nCREATE DATABASE cinder;\nGRANT ALL PRIVILEGES ON cinder.* TO 'cinderdbadmin'@'%'\n  IDENTIFIED BY 'Ue8Ud8re';\nEOF\nmysql -v -u root -ppassword&lt;&lt;EOF\nCREATE DATABASE neutron;\nGRANT ALL PRIVILEGES ON neutron.* TO 
'neutrondbadmin'@'%'\n  IDENTIFIED BY 'wozohB8g';\nEOF\n<\/code><\/pre>\n<p>Enable some recommended kernel parameters via <code>sysctl<\/code>:<\/p>\n<pre><code>net.ipv4.conf.default.rp_filter = 1\nnet.ipv4.conf.all.rp_filter = 1\nnet.ipv4.tcp_syncookies = 1\nnet.ipv4.ip_forward = 1\nnet.ipv4.conf.all.log_martians = 1\n<\/code><\/pre>\n<h2><span id=\"rabbitmq\">RabbitMQ<\/span><\/h2>\n<p>Install the RabbitMQ package on the Controller node and change the RabbitMQ <code>guest<\/code> user's password:<\/p>\n<pre><code># rabbitmqctl change_password guest password\n<\/code><\/pre>\n<h1><span id=\"keystone\">Keystone<\/span><\/h1>\n<p>Install the needed packages:<\/p>\n<pre><code>root@ostack-controller:~# aptitude install keystone python-keystone python-keystoneclient qemu-utils\n<\/code><\/pre>\n<p>Edit the Keystone config file:<\/p>\n<pre><code>root@ostack-controller:~# vi \/etc\/keystone\/keystone.conf\n[DEFAULT]\nadmin_token=ADMIN\nrabbit_host=localhost\nrabbit_port=5672\nrabbit_userid=guest\nrabbit_password=password\nlog_dir=\/var\/log\/keystone\n...\n[catalog]\ndriver=keystone.catalog.backends.sql.Catalog\n...\n[database]\nconnection = mysql:\/\/keystonedbadmin:Ue0Ud7ra@192.168.122.111\/keystone\nidle_timeout=200\n...\n[identity]\ndriver=keystone.identity.backends.sql.Identity\n<\/code><\/pre>\n<p>Populate the database schema and remove the SQLite database file:<\/p>\n<pre><code>root@ostack-controller:~# su -s \/bin\/sh -c \"keystone-manage db_sync\" keystone\nroot@ostack-controller:~# rm \/var\/lib\/keystone\/keystone.db\n<\/code><\/pre>\n<p>Run the following command to purge expired tokens every hour and log the output to the <code>\/var\/log\/keystone\/keystone-tokenflush.log<\/code> file:<\/p>\n<pre><code>root@ostack-controller:~# (crontab -l -u keystone 2&gt;&amp;1 | grep -q token_flush) || \\\necho '@hourly \/usr\/bin\/keystone-manage token_flush &gt;\/var\/log\/keystone\/keystone-tokenflush.log 2&gt;&amp;1' \\\n&gt;&gt; \/var\/spool\/cron\/crontabs\/keystone\n<\/code><\/pre>\n<p>This creates the following cron job for the keystone 
user:<\/p>\n<pre><code>root@ostack-controller:~# crontab -l -u keystone\n@hourly \/usr\/bin\/keystone-manage token_flush &gt;\/var\/log\/keystone\/keystone-tokenflush.log 2&gt;&amp;1\n<\/code><\/pre>\n<p>Create tenants, users and roles; the script is available for download from <a href=\"https:\/\/icicimov.github.io\/blog\/download\/keystone_data.sh\">here<\/a>:<\/p>\n<pre><code>root@aywun:~# .\/keystone_data.sh\n<\/code><\/pre>\n<p>Create the endpoints (the API address of each service); the script is available for download from <a href=\"https:\/\/icicimov.github.io\/blog\/download\/endpoints.sh\">here<\/a>:<\/p>\n<pre><code>root@ostack-controller:~# .\/endpoints.sh -m 192.168.122.111 -u keystonedbadmin -D keystone -p Ue0Ud7ra -K 192.168.122.111 -R RegionOne -E \"http:\/\/192.168.122.111:35357\/v2.0\" -S 192.168.122.113 -T ADMIN\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n| description |    OpenStack Compute Service     |\n|   enabled   |               True               |\n|      id     | ee52b3f268f84e43849f40418328c3c8 |\n|     name    |               nova               |\n|     type    |             compute              |\n+-------------+----------------------------------+\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n| description |     OpenStack Volume Service     |\n|   enabled   |               True               |\n|      id     | d1c5d9e2435146668c3a18238ba8b0fb |\n|     name    |              volume              |\n|     type    |              volume              |\n+-------------+----------------------------------+\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n| description |     OpenStack Image Service      |\n|   enabled   
|               True               |\n|      id     | 12dc6eea2b094ede93df56c466ddb0b4 |\n|     name    |              glance              |\n|     type    |              image               |\n+-------------+----------------------------------+\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n| description |    OpenStack Storage Service     |\n|   enabled   |               True               |\n|      id     | f33af098d51c42b0a8e736f7aea6ba75 |\n|     name    |              swift               |\n|     type    |           object-store           |\n+-------------+----------------------------------+\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n| description |        OpenStack Identity        |\n|   enabled   |               True               |\n|      id     | 42f85e2e1e714efda3f856a92fbf0f9f |\n|     name    |             keystone             |\n|     type    |             identity             |\n+-------------+----------------------------------+\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n| description |      OpenStack EC2 service       |\n|   enabled   |               True               |\n|      id     | a9c2088d883849679c28db9d3bef0dc6 |\n|     name    |               ec2                |\n|     type    |               ec2                |\n+-------------+----------------------------------+\n+-------------+----------------------------------------------+\n|   Property  |                    Value                     |\n+-------------+----------------------------------------------+\n|   adminurl  | http:\/\/192.168.122.111:8774\/v2\/%(tenant_id)s |\n|      id     |       6c0e8f3a3f384b63a2229772637f4699       |\n| internalurl | 
http:\/\/192.168.122.111:8774\/v2\/%(tenant_id)s |\n|  publicurl  | http:\/\/192.168.122.111:8774\/v2\/%(tenant_id)s |\n|    region   |                  RegionOne                   |\n|  service_id |       ee52b3f268f84e43849f40418328c3c8       |\n+-------------+----------------------------------------------+\n+-------------+----------------------------------------------+\n|   Property  |                    Value                     |\n+-------------+----------------------------------------------+\n|   adminurl  | http:\/\/192.168.122.111:8776\/v1\/%(tenant_id)s |\n|      id     |       f4814fca1c1a414d85403407350650b5       |\n| internalurl | http:\/\/192.168.122.111:8776\/v1\/%(tenant_id)s |\n|  publicurl  | http:\/\/192.168.122.111:8776\/v1\/%(tenant_id)s |\n|    region   |                  RegionOne                   |\n|  service_id |       d1c5d9e2435146668c3a18238ba8b0fb       |\n+-------------+----------------------------------------------+\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n|   adminurl  |  http:\/\/192.168.122.111:9292\/v1  |\n|      id     | 08ab9db2295f4f89acfb31737ad1c354 |\n| internalurl |  http:\/\/192.168.122.111:9292\/v1  |\n|  publicurl  |  http:\/\/192.168.122.111:9292\/v1  |\n|    region   |            RegionOne             |\n|  service_id | 12dc6eea2b094ede93df56c466ddb0b4 |\n+-------------+----------------------------------+\n+-------------+---------------------------------------------------+\n|   Property  |                       Value                       |\n+-------------+---------------------------------------------------+\n|   adminurl  |           http:\/\/192.168.122.113:8080\/v1          |\n|      id     |          cf06c05b36a448809e843864a78db2bc         |\n| internalurl | http:\/\/192.168.122.113:8080\/v1\/AUTH_%(tenant_id)s |\n|  publicurl  | http:\/\/192.168.122.113:8080\/v1\/AUTH_%(tenant_id)s |\n|    region  
 |                     RegionOne                     |\n|  service_id |          f33af098d51c42b0a8e736f7aea6ba75         |\n+-------------+---------------------------------------------------+\n+-------------+-----------------------------------+\n|   Property  |               Value               |\n+-------------+-----------------------------------+\n|   adminurl  | http:\/\/192.168.122.111:35357\/v2.0 |\n|      id     |  f1d9056f50b942c085c095c092e5d86e |\n| internalurl |  http:\/\/192.168.122.111:5000\/v2.0 |\n|  publicurl  |  http:\/\/192.168.122.111:5000\/v2.0 |\n|    region   |             RegionOne             |\n|  service_id |  42f85e2e1e714efda3f856a92fbf0f9f |\n+-------------+-----------------------------------+\n+-------------+--------------------------------------------+\n|   Property  |                   Value                    |\n+-------------+--------------------------------------------+\n|   adminurl  | http:\/\/192.168.122.111:8773\/services\/Admin |\n|      id     |      54a672b19ea74b8fa04548147ef66f2e      |\n| internalurl | http:\/\/192.168.122.111:8773\/services\/Cloud |\n|  publicurl  | http:\/\/192.168.122.111:8773\/services\/Cloud |\n|    region   |                 RegionOne                  |\n|  service_id |      a9c2088d883849679c28db9d3bef0dc6      |\n+-------------+--------------------------------------------+\n<\/code><\/pre>\n<p>The <code>-m<\/code> option specifies the address MySQL is listening on; <code>-u<\/code>, <code>-D<\/code> and <code>-p<\/code> supply the access credentials for the MySQL keystone DB; <code>-K<\/code> sets the Keystone host; <code>-R<\/code> sets the OpenStack region; <code>-E<\/code> gives the Keystone service endpoint; <code>-S<\/code> supplies the address of the (future) Swift service; and finally <code>-T<\/code> gives the admin token.<\/p>\n<p>Create the <code>keystonerc<\/code> file:<\/p>\n<pre><code>root@ostack-controller:~# vi keystonerc_admin\nexport OS_USERNAME=admin\nexport OS_PASSWORD=password\nexport 
OS_TENANT_NAME=admin\nexport OS_AUTH_URL=http:\/\/localhost:5000\/v2.0\/\nexport OS_VERSION=1.1\nexport OS_NO_CACHE=1\n<\/code><\/pre>\n<p>and source it to load the credentials:<\/p>\n<pre><code>root@ostack-controller:~# . .\/keystonerc_admin\n<\/code><\/pre>\n<p>Now we can access the Keystone service:<\/p>\n<pre><code>root@ostack-controller:~# keystone role-list\n+----------------------------------+----------------------+\n|                id                |         name         |\n+----------------------------------+----------------------+\n| 785bc0f9516243a2bef5edfebc074538 |    KeystoneAdmin     |\n| ae31856bc9904017b16e2b8a1fd8990e | KeystoneServiceAdmin |\n| 26f88fee2fa64aa3bc0fc2bf2fb43d45 |        Member        |\n| c0542595bfaf43748b861c752012a75f |    ResellerAdmin     |\n| 9fe2ff9ee4384b1894a90878d3e92bab |       _member_       |\n| 09be25b0a1474cc9abbd29bdcd3b738b |        admin         |\n| dc8bbb1a9a1041ab88667729fbae0ded |     anotherrole      |\n+----------------------------------+----------------------+\nroot@ostack-controller:~# keystone tenant-list\n+----------------------------------+--------------------+---------+\n|                id                |        name        | enabled |\n+----------------------------------+--------------------+---------+\n| 4b53dc514f0a4f6bbfd89eac63f7b206 |       admin        |   True  |\n| 9371007854e24ecd9a0fa87bd7426ac0 |        demo        |   True  |\n| 35d820528ea3473191e0ffb16b55a84b | invisible_to_admin |   True  |\n| d38657485ad24b9fb2e216dadc612f92 |      service       |   True  |\n+----------------------------------+--------------------+---------+\nroot@ostack-controller:~# keystone user-list\n+----------------------------------+---------+---------+-------------------------+\n|                id                |   name  | enabled |          email          |\n+----------------------------------+---------+---------+-------------------------+\n| d6145ea56cc54bb4aa2b2b4a1c7ae6bb |  admin  |   True  |  
admin@icicimov.com  |\n| 156bd8b8193045c89b72c4bf8454dfb9 |   demo  |   True  |   demo@icicimov.com  |\n| dacb282128df44f0be63b96bbf5382b5 |  glance |   True  |  glance@icicimov.com |\n| effad9646b524c43b3aec467be48132c | neutron |   True  | neutron@icicimov.com |\n| b52bf10633934e2eb1ed8f06df1fd033 |   nova  |   True  |   nova@icicimov.com  |\n| 155fdfddc69545d5bc0e43a76f3c20f0 |  swift  |   True  |  swift@icicimov.com  |\n+----------------------------------+---------+---------+-------------------------+\nroot@ostack-controller:~# keystone service-list\n+----------------------------------+----------+--------------+---------------------------+\n|                id                |   name   |     type     |        description        |\n+----------------------------------+----------+--------------+---------------------------+\n| a9c2088d883849679c28db9d3bef0dc6 |   ec2    |     ec2      |   OpenStack EC2 service   |\n| 12dc6eea2b094ede93df56c466ddb0b4 |  glance  |    image     |  OpenStack Image Service  |\n| 42f85e2e1e714efda3f856a92fbf0f9f | keystone |   identity   |     OpenStack Identity    |\n| c1bf491d743b4d5ab874acd6365555b3 | neutron  |   network    |    OpenStack Networking   |\n| ee52b3f268f84e43849f40418328c3c8 |   nova   |   compute    | OpenStack Compute Service |\n| f33af098d51c42b0a8e736f7aea6ba75 |  swift   | object-store | OpenStack Storage Service |\n| d1c5d9e2435146668c3a18238ba8b0fb |  volume  |    volume    |  OpenStack Volume Service |\n+----------------------------------+----------+--------------+---------------------------+\nroot@ostack-controller:~# keystone endpoint-list\n+----------------------------------+-----------+---------------------------------------------------+---------------------------------------------------+----------------------------------------------+----------------------------------+\n|                id                |   region  |                     publicurl                     |                    internalurl          
          |                   adminurl                   |            service_id            |\n+----------------------------------+-----------+---------------------------------------------------+---------------------------------------------------+----------------------------------------------+----------------------------------+\n| 08ab9db2295f4f89acfb31737ad1c354 | RegionOne |           http:\/\/192.168.122.111:9292\/v1          |           http:\/\/192.168.122.111:9292\/v1          |        http:\/\/192.168.122.111:9292\/v1        | 12dc6eea2b094ede93df56c466ddb0b4 |\n| 54a672b19ea74b8fa04548147ef66f2e | RegionOne |     http:\/\/192.168.122.111:8773\/services\/Cloud    |     http:\/\/192.168.122.111:8773\/services\/Cloud    |  http:\/\/192.168.122.111:8773\/services\/Admin  | a9c2088d883849679c28db9d3bef0dc6 |\n| 6c0e8f3a3f384b63a2229772637f4699 | RegionOne |    http:\/\/192.168.122.111:8774\/v2\/%(tenant_id)s   |    http:\/\/192.168.122.111:8774\/v2\/%(tenant_id)s   | http:\/\/192.168.122.111:8774\/v2\/%(tenant_id)s | ee52b3f268f84e43849f40418328c3c8 |\n| a5c435797a774bacb1b634d8b6f31d56 | regionOne |            http:\/\/192.168.122.111:9696            |            http:\/\/192.168.122.111:9696            |         http:\/\/192.168.122.111:9696          | c1bf491d743b4d5ab874acd6365555b3 |\n| cf06c05b36a448809e843864a78db2bc | RegionOne | http:\/\/192.168.122.113:8080\/v1\/AUTH_%(tenant_id)s | http:\/\/192.168.122.113:8080\/v1\/AUTH_%(tenant_id)s |        http:\/\/192.168.122.113:8080\/v1        | f33af098d51c42b0a8e736f7aea6ba75 |\n| f1d9056f50b942c085c095c092e5d86e | RegionOne |          http:\/\/192.168.122.111:5000\/v2.0         |          http:\/\/192.168.122.111:5000\/v2.0         |      http:\/\/192.168.122.111:35357\/v2.0       | 42f85e2e1e714efda3f856a92fbf0f9f |\n| f4814fca1c1a414d85403407350650b5 | RegionOne |    http:\/\/192.168.122.111:8776\/v1\/%(tenant_id)s   |    http:\/\/192.168.122.111:8776\/v1\/%(tenant_id)s   | 
http:\/\/192.168.122.111:8776\/v1\/%(tenant_id)s | d1c5d9e2435146668c3a18238ba8b0fb |\n+----------------------------------+-----------+---------------------------------------------------+---------------------------------------------------+----------------------------------------------+----------------------------------+\n<\/code><\/pre>\n<h1><span id=\"glance\">Glance<\/span><\/h1>\n<p>Installation:<\/p>\n<pre><code>root@ostack-controller:~# aptitude install glance python-glance\n<\/code><\/pre>\n<p>Edit the Glance API config file:<\/p>\n<pre><code>root@ostack-controller:~# vi \/etc\/glance\/glance-api.conf\n[DEFAULT]\n...\nrabbit_host = localhost\nrabbit_port = 5672\nrabbit_use_ssl = false\nrabbit_userid = guest\nrabbit_password = password\nrabbit_virtual_host = \/\nrabbit_notification_exchange = glance\nrabbit_notification_topic = notifications\nrabbit_durable_queues = False\n...\n[database]\nconnection = mysql:\/\/glancedbadmin:ohC3teiv@192.168.122.111\/glance\n...\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\/v2.0\nauth_host = 192.168.122.111\nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service\nadmin_user = glance\nadmin_password = password\n...\n[paste_deploy]\nconfig_file = \/etc\/glance\/glance-api-paste.ini\nflavor=keystone\n<\/code><\/pre>\n<p>Then edit the Glance registry config file:<\/p>\n<pre><code>root@ostack-controller:~# vi \/etc\/glance\/glance-registry.conf\n...\n[database]\nconnection = mysql:\/\/glancedbadmin:ohC3teiv@192.168.122.111\/glance\n...\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\/v2.0\nauth_host = 192.168.122.111\nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service\nadmin_user = glance\nadmin_password = password\n...\n[paste_deploy]\nconfig_file = \/etc\/glance\/glance-registry-paste.ini\nflavor=keystone\n<\/code><\/pre>\n<p>Populate the DB schema:<\/p>\n<pre><code>root@ostack-controller:~# su -s \/bin\/sh -c \"glance-manage db_sync\" 
glance\n<\/code><\/pre>\n<p>then restart the services and remove the SQLite database file:<\/p>\n<pre><code>root@ostack-controller:~# service glance-registry restart\nroot@ostack-controller:~# service glance-api restart\nroot@ostack-controller:~# rm -f \/var\/lib\/glance\/glance.sqlite\n<\/code><\/pre>\n<p>Create our first images:<\/p>\n<pre><code>root@ostack-controller:~# glance image-create --copy-from http:\/\/uec-images.ubuntu.com\/releases\/12.04\/release\/ubuntu-12.04-server-cloudimg-amd64-disk1.img --name=\"Ubuntu 12.04 cloudimg amd64\" --is-public true --container-format ovf --disk-format qcow2\n+------------------+--------------------------------------+\n| Property         | Value                                |\n+------------------+--------------------------------------+\n| checksum         | None                                 |\n| container_format | ovf                                  |\n| created_at       | 2014-09-13T09:53:18                  |\n| deleted          | False                                |\n| deleted_at       | None                                 |\n| disk_format      | qcow2                                |\n| id               | e871958c-8bbd-42ec-ad16-31959949a43c |\n| is_public        | True                                 |\n| min_disk         | 0                                    |\n| min_ram          | 0                                    |\n| name             | Ubuntu 12.04 cloudimg amd64          |\n| owner            | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n| protected        | False                                |\n| size             | 261095936                            |\n| status           | queued                               |\n| updated_at       | 2014-09-13T09:53:18                  |\n| virtual_size     | None                                 |\n+------------------+--------------------------------------+\n\nroot@ostack-controller:~# glance image-create --copy-from http:\/\/download.cirros-cloud.net\/0.3.1\/cirros-0.3.1-x86_64-disk.img 
--name=\"CirrOS-0.3.1-x86_64\" --is-public true --container-format bare --disk-format qcow2\n+------------------+--------------------------------------+\n| Property         | Value                                |\n+------------------+--------------------------------------+\n| checksum         | None                                 |\n| container_format | bare                                 |\n| created_at       | 2014-09-13T09:54:33                  |\n| deleted          | False                                |\n| deleted_at       | None                                 |\n| disk_format      | qcow2                                |\n| id               | a25d69b3-623a-40c6-aca3-00f1233295ea |\n| is_public        | True                                 |\n| min_disk         | 0                                    |\n| min_ram          | 0                                    |\n| name             | CirrOS-0.3.1-x86_64                  |\n| owner            | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n| protected        | False                                |\n| size             | 13147648                             |\n| status           | queued                               |\n| updated_at       | 2014-09-13T09:54:33                  |\n| virtual_size     | None                                 |\n+------------------+--------------------------------------+\n<\/code><\/pre>\n<p>and list the result:<\/p>\n<pre><code>root@ostack-controller:~# glance image-list\n+--------------------------------------+-----------------------------+-------------+------------------+-----------+--------+\n| ID                                   | Name                        | Disk Format | Container Format | Size      | Status |\n+--------------------------------------+-----------------------------+-------------+------------------+-----------+--------+\n| a25d69b3-623a-40c6-aca3-00f1233295ea | CirrOS-0.3.1-x86_64         | qcow2       | bare             | 13147648  | active |\n| 
e871958c-8bbd-42ec-ad16-31959949a43c | Ubuntu 12.04 cloudimg amd64 | qcow2       | ovf              | 261095936 | saving |\n+--------------------------------------+-----------------------------+-------------+------------------+-----------+--------+\n<\/code><\/pre>\n<h1><span id=\"nova\">Nova<\/span><\/h1>\n<h2><span id=\"controller-node\">Controller node<\/span><\/h2>\n<p>Install packages:<\/p>\n<pre><code>root@ostack-controller:~# aptitude install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient\n<\/code><\/pre>\n<p>Edit the Nova config file as follows:<\/p>\n<pre><code>root@ostack-controller:~# cat \/etc\/nova\/nova.conf \n[DEFAULT]\ndhcpbridge_flagfile=\/etc\/nova\/nova.conf\ndhcpbridge=\/usr\/bin\/nova-dhcpbridge\nlogdir=\/var\/log\/nova\nstate_path=\/var\/lib\/nova\nlock_path=\/var\/lock\/nova\nforce_dhcp_release=True\niscsi_helper=tgtadm\nlibvirt_use_virtio_for_bridges=True\nconnection_type=libvirt\nroot_helper=sudo nova-rootwrap \/etc\/nova\/rootwrap.conf\nverbose=True\nec2_private_dns_show_ip=True\napi_paste_config=\/etc\/nova\/api-paste.ini\nvolumes_path=\/var\/lib\/nova\/volumes\nenabled_apis=ec2,osapi_compute,metadata\nmy_ip = 192.168.122.111 \nauth_strategy=keystone\nsql_connection = mysql:\/\/novadbadmin:dieD9Mie@192.168.122.111\/nova\nrpc_backend = rabbit\nrabbit_host = 192.168.122.111 \nrabbit_password = password\nvncserver_listen = 192.168.122.111 \nvncserver_proxyclient_address = 192.168.122.111\nglance_host = 192.168.122.111\n## NETWORKING (NEUTRON) ##\nnetwork_api_class = nova.network.neutronv2.api.API\nneutron_url = http:\/\/192.168.122.111:9696\nneutron_auth_strategy = keystone\nneutron_admin_tenant_name = service\nneutron_admin_username = neutron\nneutron_admin_password = password \nneutron_admin_auth_url = http:\/\/192.168.122.111:35357\/v2.0\nlinuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver\nfirewall_driver = nova.virt.firewall.NoopFirewallDriver\nsecurity_group_api = 
neutron\n# metadata proxy (running on the networking node)\n# note: add these 2 lines after we have set Neutron service\nservice_neutron_metadata_proxy = true\nneutron_metadata_proxy_shared_secret = password\n\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\/v2.0\nauth_host = 192.168.122.111\nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service\nadmin_user = nova\nadmin_password = password\n<\/code><\/pre>\n<p>and restart all Nova services:<\/p>\n<pre><code>root@ostack-controller:~# for i in nova-api nova-cert nova-consoleauth nova-scheduler nova-conductor nova-novncproxy; do service $i restart; done\n<\/code><\/pre>\n<p>Get list of images and services:<\/p>\n<pre><code>root@ostack-controller:~# nova image-list\n+--------------------------------------+-----------------------------+--------+--------+\n| ID                                   | Name                        | Status | Server |\n+--------------------------------------+-----------------------------+--------+--------+\n| a25d69b3-623a-40c6-aca3-00f1233295ea | CirrOS-0.3.1-x86_64         | ACTIVE |        |\n| e871958c-8bbd-42ec-ad16-31959949a43c | Ubuntu 12.04 cloudimg amd64 | ACTIVE |        |\n+--------------------------------------+-----------------------------+--------+--------+ \n\nroot@ostack-controller:~# nova service-list\n+------------------+-------------------+----------+---------+-------+----------------------------+-----------------+\n| Binary           | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |\n+------------------+-------------------+----------+---------+-------+----------------------------+-----------------+\n| nova-cert        | ostack-controller | internal | enabled | up    | 2014-09-14T06:19:24.000000 | -               |\n| nova-consoleauth | ostack-controller | internal | enabled | up    | 2014-09-14T06:19:24.000000 | -               |\n| nova-scheduler   | ostack-controller | internal | enabled | up    
| 2014-09-14T06:19:24.000000 | -               |\n| nova-conductor   | ostack-controller | internal | enabled | up    | 2014-09-14T06:19:24.000000 | -               |\n| nova-compute     | ostack-compute    | nova     | enabled | up    | 2014-09-14T06:19:24.000000 | -               |\n+------------------+-------------------+----------+---------+-------+----------------------------+-----------------+\n<\/code><\/pre>\n<h2><span id=\"compute-node\">Compute node<\/span><\/h2>\n<pre><code>root@ostack-compute:~# aptitude install nova-compute\n\nroot@ostack-compute:~# vi \/etc\/nova\/nova.conf\n[DEFAULT]\ndhcpbridge_flagfile=\/etc\/nova\/nova.conf\ndhcpbridge=\/usr\/bin\/nova-dhcpbridge\nlogdir=\/var\/log\/nova\nstate_path=\/var\/lib\/nova\nlock_path=\/var\/lock\/nova\nforce_dhcp_release=True\niscsi_helper=tgtadm\nlibvirt_use_virtio_for_bridges=True\nconnection_type=libvirt\nroot_helper=sudo nova-rootwrap \/etc\/nova\/rootwrap.conf\nverbose=True\nec2_private_dns_show_ip=True\napi_paste_config=\/etc\/nova\/api-paste.ini\nvolumes_path=\/var\/lib\/nova\/volumes\nenabled_apis=ec2,osapi_compute,metadata\nmy_ip = 192.168.122.112\nauth_strategy=keystone\nsql_connection = mysql:\/\/novadbadmin:dieD9Mie@192.168.122.111\/nova\nrpc_backend = rabbit\nrabbit_host = 192.168.122.111\nrabbit_password = password\nglance_host = 192.168.122.111\n## VNC ##\nvnc_enabled = True\nvncserver_listen = 0.0.0.0\nvncserver_proxyclient_address = 192.168.122.112\nnovncproxy_base_url = http:\/\/192.168.122.111:6080\/vnc_auto.html\n\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\/v2.0\nauth_host = 192.168.122.111\nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service\nadmin_user = nova\nadmin_password = password\n<\/code><\/pre>\n<p>Since I&#8217;m running on VMs I can&#8217;t use hardware acceleration:<\/p>\n<pre><code>root@ostack-compute:~# egrep -c '(vmx|svm)' \/proc\/cpuinfo\n0\n<\/code><\/pre>\n<p>and have to switch from KVM to the QEMU 
hypervisor:<\/p>\n<pre><code>root@ostack-compute:~# cat \/etc\/nova\/nova-compute.conf\n[DEFAULT]\ncompute_driver=libvirt.LibvirtDriver\n[libvirt]\n#virt_type=kvm\nvirt_type=qemu\n<\/code><\/pre>\n<p>Restart the service and remove the sqlite db file:<\/p>\n<pre><code>root@ostack-compute:~# service nova-compute restart\nroot@ostack-compute:~# rm -f \/var\/lib\/nova\/nova.sqlite\n<\/code><\/pre>\n<h1><span id=\"neutron\">Neutron<\/span><\/h1>\n<h2><span id=\"controller-node_1\">Controller node<\/span><\/h2>\n<p>Create the Neutron keystone service and endpoint:<\/p>\n<pre><code>root@ostack-controller:~# keystone service-create --name neutron --type network --description \"OpenStack Networking\"\n\nroot@ostack-controller:~# keystone endpoint-create \\\n  --region RegionOne \\\n  --service-id \\\n    $(keystone service-list | awk '\/ network \/ {print $2}') \\\n  --publicurl http:\/\/192.168.122.111:9696 \\\n  --adminurl http:\/\/192.168.122.111:9696 \\\n  --internalurl http:\/\/192.168.122.111:9696\n<\/code><\/pre>\n<p>Install the ML2 plug-in:<\/p>\n<pre><code>root@ostack-controller:~# aptitude install neutron-server neutron-plugin-ml2\n<\/code><\/pre>\n<p>Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services, plus add the DB connection and Keystone authentication settings:<\/p>\n<pre><code>root@ostack-controller:~# vi \/etc\/neutron\/neutron.conf\n[DEFAULT]\nverbose = True \nstate_path = \/var\/lib\/neutron\nlock_path = $state_path\/lock\n...\ncore_plugin = ml2\nservice_plugins = router\nauth_strategy = keystone\nallow_overlapping_ips = True\n...\nrpc_backend = neutron.openstack.common.rpc.impl_kombu\nrabbit_host = 192.168.122.111 \nrabbit_password = password \nrabbit_port = 5672\nrabbit_userid = guest\n...\nnotification_driver = neutron.openstack.common.notifier.rpc_notifier\nnotify_nova_on_port_status_changes = True\nnotify_nova_on_port_data_changes = True\n...\nnova_url = http:\/\/192.168.122.111:8774\/v2\nnova_admin_username = 
nova\nnova_admin_tenant_id = d38657485ad24b9fb2e216dadc612f92\nnova_admin_password = password\nnova_admin_auth_url = http:\/\/192.168.122.111:35357\/v2.0\n...\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\nauth_host = 192.168.122.111\nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service\nadmin_user = neutron\nadmin_password = password\n...\n[database]\nconnection = mysql:\/\/neutrondbadmin:wozohB8g@192.168.122.111\/neutron\n<\/code><\/pre>\n<p>To obtain the value for <code>nova_admin_tenant_id<\/code> we run:<\/p>\n<pre><code>root@ostack-controller:~# keystone tenant-get service\n+-------------+----------------------------------+\n|   Property  |              Value               |\n+-------------+----------------------------------+\n| description |                                  |\n|   enabled   |               True               |\n|      id     | d38657485ad24b9fb2e216dadc612f92 |\n|     name    |             service              |\n+-------------+----------------------------------+\n<\/code><\/pre>\n<p>Then we configure the <code>ML2<\/code> plugin:<\/p>\n<pre><code>root@ostack-controller:~# vi \/etc\/neutron\/plugins\/ml2\/ml2_conf.ini\n[ml2]\ntype_drivers = gre\ntenant_network_types = gre\nmechanism_drivers = openvswitch\n...\n[ml2_type_vlan]\nnetwork_vlan_ranges = 1:1000\n...\n[ml2_type_gre]\ntunnel_id_ranges = 1:1000\n...\n[securitygroup]\nenable_security_group = True\nfirewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver\n<\/code><\/pre>\n<p>Finally we tell Nova-Compute to use Neutron for networking by adding:<\/p>\n<pre><code>## NETWORKING (NEUTRON) ##\nnetwork_api_class = nova.network.neutronv2.api.API\nneutron_url = http:\/\/192.168.122.111:9696\nneutron_auth_strategy = keystone\nneutron_admin_tenant_name = service\nneutron_admin_username = neutron\nneutron_admin_password = password \nneutron_admin_auth_url = http:\/\/192.168.122.111:35357\/v2.0\nlinuxnet_interface_driver = 
nova.network.linux_net.LinuxOVSInterfaceDriver\nfirewall_driver = nova.virt.firewall.NoopFirewallDriver\nsecurity_group_api = neutron\n<\/code><\/pre>\n<p>under the <code>[DEFAULT]<\/code> section in <code>\/etc\/nova\/nova.conf<\/code> file and restarting the services.<\/p>\n<h2><span id=\"networking-node\">Networking node<\/span><\/h2>\n<p>Install packages:<\/p>\n<pre><code>root@ostack-network:~# aptitude install neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms neutron-l3-agent neutron-dhcp-agent\n<\/code><\/pre>\n<p>Edit the Neutron config file:<\/p>\n<pre><code>root@ostack-network:~# vi \/etc\/neutron\/neutron.conf\n[DEFAULT]\nverbose = True \n...\ncore_plugin = ml2\nservice_plugins = router\nauth_strategy = keystone\nallow_overlapping_ips = True\n...\nrpc_backend = neutron.openstack.common.rpc.impl_kombu\nrabbit_host = 192.168.122.111 \nrabbit_password = password \nrabbit_port = 5672\nrabbit_userid = guest\n...\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\nauth_host = 192.168.122.111 \nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service \nadmin_user = neutron \nadmin_password = password\nsigning_dir = $state_path\/keystone-signing\n<\/code><\/pre>\n<p>the L3 agent config file:<\/p>\n<pre><code>root@ostack-network:~# vi \/etc\/neutron\/l3_agent.ini\n[DEFAULT]\nverbose = True\ninterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver\nuse_namespaces = True\n<\/code><\/pre>\n<p>and the DHCP agent config file:<\/p>\n<pre><code>root@ostack-network:~# vi \/etc\/neutron\/dhcp_agent.ini\n[DEFAULT]\nverbose = True\ninterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver\novs_integration_bridge = br-int\ndhcp_driver = neutron.agent.linux.dhcp.Dnsmasq\nuse_namespaces = True\ndnsmasq_config_file = \/etc\/neutron\/dnsmasq-neutron.conf\n<\/code><\/pre>\n<p>Then setup and restart <code>dnsmasq<\/code> that actually provides the DHCP services for the 
VM&#8217;s:<\/p>\n<pre><code>root@ostack-network:~# vi \/etc\/neutron\/dnsmasq-neutron.conf\ndhcp-option-force=26,1454\n\nroot@ostack-network:~# pkill dnsmasq\n<\/code><\/pre>\n<p>Configure the metadata agent:<\/p>\n<pre><code>root@ostack-network:~# vi \/etc\/neutron\/metadata_agent.ini\n[DEFAULT]\nverbose = True\nauth_url = http:\/\/192.168.122.111:5000\/v2.0\nauth_region = RegionOne\nadmin_tenant_name = service \nadmin_user = neutron \nadmin_password = password \nnova_metadata_ip = 192.168.122.111\nnova_metadata_port = 8775\nmetadata_proxy_shared_secret = password\n<\/code><\/pre>\n<p>On the Controller node add at the end of the Neutron section:<\/p>\n<pre><code>root@ostack-controller:~# vi \/etc\/nova\/nova.conf\n[DEFAULT]\n...\n# metadata proxy (running on the networking node)\nservice_neutron_metadata_proxy = true\nneutron_metadata_proxy_shared_secret = password\n<\/code><\/pre>\n<p>and restart the api service:<\/p>\n<pre><code>root@ostack-controller:~# service nova-api restart \n<\/code><\/pre>\n<p>Back on the Networking node configure the ML2 plug-in with <code>GRE<\/code> tunneling:<\/p>\n<pre><code>root@ostack-network:~# vi \/etc\/neutron\/plugins\/ml2\/ml2_conf.ini\n...\n[ml2]\ntype_drivers = gre\ntenant_network_types = gre\nmechanism_drivers = openvswitch\n...\n[ml2_type_gre]\ntunnel_id_ranges = 1:1000\n...\n[ovs]\nlocal_ip = 192.168.133.113 \ntunnel_type = gre\nenable_tunneling = True\n\n[securitygroup]\nfirewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver\nenable_security_group = True\n<\/code><\/pre>\n<p>The OVS service provides the underlying virtual networking framework for instances. The integration bridge <code>br-int<\/code> handles internal instance network traffic within OVS. The external bridge <code>br-ex<\/code> handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. 
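<\/p>
<p>As an aside, the <code>dhcp-option-force=26,1454<\/code> value configured earlier lowers the MTU that instances receive via DHCP (option 26 is the interface MTU) so that GRE-encapsulated packets still fit the 1500-byte physical MTU. A quick sketch of the arithmetic, assuming a plain 20-byte outer IPv4 header and the 4-byte base GRE header; the extra margin below 1476 leaves room for optional GRE fields and VLAN tags:<\/p>

```python
# MTU advertised to instances via DHCP option 26 (dhcp-option-force=26,1454)
PHYS_MTU = 1500    # MTU of the underlying tunnel interface
OUTER_IPV4 = 20    # outer IPv4 header added by GRE encapsulation (no IP options)
GRE_BASE = 4       # base GRE header, no key/checksum/sequence fields

# largest inner packet that avoids fragmentation on the physical network
max_inner = PHYS_MTU - OUTER_IPV4 - GRE_BASE
print(max_inner)            # 1476
print(1454 <= max_inner)    # True: the advertised MTU fits with margin
```

<p>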
In essence, this port bridges the virtual and physical external networks in your environment.<\/p>\n<pre><code>root@ostack-network:~# ovs-vsctl add-br br-ex\nroot@ostack-network:~# ovs-vsctl add-port br-ex eth2\nroot@ostack-network:~# ovs-vsctl show\ne6ef64d8-e27e-472b-89b7-2d0fcb590d9c\n    Bridge br-int\n        fail_mode: secure\n        Port br-int\n            Interface br-int\n                type: internal\n    Bridge br-ex\n        Port br-ex\n            Interface br-ex\n                type: internal\n        Port \"eth2\"\n            Interface \"eth2\"\n    ovs_version: \"2.0.2\"\n<\/code><\/pre>\n<p>Restart the Neutron services:<\/p>\n<pre><code>root@ostack-network:~# service neutron-plugin-openvswitch-agent restart\nroot@ostack-network:~# service neutron-l3-agent restart\nroot@ostack-network:~# service neutron-dhcp-agent restart\nroot@ostack-network:~# service neutron-metadata-agent restart\n<\/code><\/pre>\n<p>and check for the created OVS ports and interfaces:<\/p>\n<pre><code>root@ostack-network:~# ip a | grep state\n1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default \n2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000\n3: eth1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000\n4: eth2: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000\n5: ovs-system: &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state DOWN group default \n6: br-ex: &lt;BROADCAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UNKNOWN group default \n8: br-int: &lt;BROADCAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UNKNOWN group default \n12: br-tun: &lt;BROADCAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UNKNOWN group default\n\nroot@ostack-network:~# ovs-vsctl list-ports br-ex\neth2\nqg-3c95c6ae-1c\n\nroot@ostack-network:~# ovs-vsctl list-ports br-int\npatch-tun\nqr-7db3920b-bb\n\nroot@ostack-network:~# ovs-vsctl list-ports br-tun\npatch-int\n<\/code><\/pre>\n<p>On the Controller node we can see the following Neutron agents running:<\/p>\n<pre><code>root@ostack-controller:~# neutron agent-list \n+--------------------------------------+--------------------+----------------+-------+----------------+\n| id                                   | agent_type         | host           | alive | admin_state_up |\n+--------------------------------------+--------------------+----------------+-------+----------------+\n| 3f01bd6e-99e7-4a28-bec7-2edba4df479d | Open vSwitch agent | ostack-compute | :-)   | True           |\n| 5534539d-68b8-40f1-9e44-52795cfa0cc8 | Open vSwitch agent | ostack-network | :-)   | True           |\n| 698b412a-948a-4a12-901f-e92363b41dd6 | L3 agent           | ostack-network | :-)   | True           |\n| bd3678a8-9537-4631-8c57-6e3f1eb872f8 | Metadata agent     | ostack-network | :-)   | True           |\n| faeb4bb6-4449-4381-8ab1-0d02425dc29c | DHCP agent         | ostack-network | :-)   | True           |\n+--------------------------------------+--------------------+----------------+-------+----------------+\n<\/code><\/pre>\n<h2><span id=\"compute-node_1\">Compute node<\/span><\/h2>\n<p>Install Neutron packages needed:<\/p>\n<pre><code>root@ostack-compute:~# aptitude install neutron-plugin-ml2 neutron-plugin-openvswitch-agent\n<\/code><\/pre>\n<h2><span id=\"create-the-initial-networks\">Create the initial networks<\/span><\/h2>\n<p>We run this on the Controller node.<\/p>\n<p>First External network:<\/p>\n<pre><code>root@ostack-controller:~# neutron net-create ext-net --shared --router:external True\nCreated a new network:\n+---------------------------+--------------------------------------+\n| Field                     | Value                                |\n+---------------------------+--------------------------------------+\n| admin_state_up            | 
True                                 |\n| id                        | 4d584b71-1b3a-46a5-b32a-7fd2ba3e2535 |\n| name                      | ext-net                              |\n| provider:network_type     | gre                                  |\n| provider:physical_network |                                      |\n| provider:segmentation_id  | 1                                    |\n| router:external           | True                                 |\n| shared                    | True                                 |\n| status                    | ACTIVE                               |\n| subnets                   |                                      |\n| tenant_id                 | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n+---------------------------+--------------------------------------+\n<\/code><\/pre>\n<p>and first external pseudo subnet:<\/p>\n<pre><code>root@ostack-controller:~# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.144.2,end=192.168.144.120 --disable-dhcp --gateway 192.168.144.1 192.168.144.0\/25\nCreated a new subnet:\n+------------------+------------------------------------------------------+\n| Field            | Value                                                |\n+------------------+------------------------------------------------------+\n| allocation_pools | {\"start\": \"192.168.144.2\", \"end\": \"192.168.144.120\"} |\n| cidr             | 192.168.144.0\/25                                     |\n| dns_nameservers  |                                                      |\n| enable_dhcp      | False                                                |\n| gateway_ip       | 192.168.144.1                                        |\n| host_routes      |                                                      |\n| id               | e796143e-1ad0-4d7d-8967-6b47191e284f                 |\n| ip_version       | 4                                                    |\n| name             | ext-subnet                   
                        |\n| network_id       | 4d584b71-1b3a-46a5-b32a-7fd2ba3e2535                 |\n| tenant_id        | 4b53dc514f0a4f6bbfd89eac63f7b206                     |\n+------------------+------------------------------------------------------+\n<\/code><\/pre>\n<p>Then Internal one for VM&#8217;s intercommunication:<\/p>\n<pre><code>root@ostack-controller:~# neutron net-create demo-net\nCreated a new network:\n+---------------------------+--------------------------------------+\n| Field                     | Value                                |\n+---------------------------+--------------------------------------+\n| admin_state_up            | True                                 |\n| id                        | 2322ae02-88a9-4daa-898d-1c4c0b2653ca |\n| name                      | demo-net                             |\n| provider:network_type     | gre                                  |\n| provider:physical_network |                                      |\n| provider:segmentation_id  | 2                                    |\n| shared                    | False                                |\n| status                    | ACTIVE                               |\n| subnets                   |                                      |\n| tenant_id                 | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n+---------------------------+--------------------------------------+\n\nroot@ostack-controller:~# neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0\/24\nCreated a new subnet:\n+------------------+--------------------------------------------+\n| Field            | Value                                      |\n+------------------+--------------------------------------------+\n| allocation_pools | {\"start\": \"10.0.0.2\", \"end\": \"10.0.0.254\"} |\n| cidr             | 10.0.0.0\/24                                |\n| dns_nameservers  |                                            |\n| enable_dhcp      | True                       
                |\n| gateway_ip       | 10.0.0.1                                   |\n| host_routes      |                                            |\n| id               | a55ce25e-21fe-4619-b12e-8573664e6a36       |\n| ip_version       | 4                                          |\n| name             | demo-subnet                                |\n| network_id       | 2322ae02-88a9-4daa-898d-1c4c0b2653ca       |\n| tenant_id        | 4b53dc514f0a4f6bbfd89eac63f7b206           |\n+------------------+--------------------------------------------+\n<\/code><\/pre>\n<p>A virtual router passes network traffic between two or more virtual networks. Each router requires one or more interfaces and\/or gateways that provide access to specific networks. In this case, you will create a router and attach your tenant and external networks to it.<\/p>\n<pre><code>root@ostack-controller:~# neutron router-create demo-router\nCreated a new router:\n+-----------------------+--------------------------------------+\n| Field                 | Value                                |\n+-----------------------+--------------------------------------+\n| admin_state_up        | True                                 |\n| external_gateway_info |                                      |\n| id                    | a81c303a-b1a8-4817-906a-42b863817d1d |\n| name                  | demo-router                          |\n| status                | ACTIVE                               |\n| tenant_id             | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n+-----------------------+--------------------------------------+\n\nroot@ostack-controller:~# neutron router-interface-add demo-router demo-subnet\nAdded interface 7db3920b-bb78-4ce4-9f9b-dafff1d5271c to router demo-router.\n\nroot@ostack-controller:~# neutron router-gateway-set demo-router ext-net\nSet gateway for router demo-router\n<\/code><\/pre>\n<p>What we did here is created a router, attached it to the demo tenant subnet AND to the external 
network by setting it as the gateway.<\/p>\n<p>This is the result we can see:<\/p>\n<pre><code>root@ostack-controller:~# neutron net-list\n+--------------------------------------+----------+-------------------------------------------------------+\n| id                                   | name     | subnets                                               |\n+--------------------------------------+----------+-------------------------------------------------------+\n| 2322ae02-88a9-4daa-898d-1c4c0b2653ca | demo-net | a55ce25e-21fe-4619-b12e-8573664e6a36 10.0.0.0\/24      |\n| 4d584b71-1b3a-46a5-b32a-7fd2ba3e2535 | ext-net  | e796143e-1ad0-4d7d-8967-6b47191e284f 192.168.144.0\/25 |\n+--------------------------------------+----------+-------------------------------------------------------+\n\nroot@ostack-controller:~# neutron router-list\n+--------------------------------------+-------------+-----------------------------------------------------------------------------+\n| id                                   | name        | external_gateway_info                                                       |\n+--------------------------------------+-------------+-----------------------------------------------------------------------------+\n| a81c303a-b1a8-4817-906a-42b863817d1d | demo-router | {\"network_id\": \"4d584b71-1b3a-46a5-b32a-7fd2ba3e2535\", \"enable_snat\": true} |\n+--------------------------------------+-------------+-----------------------------------------------------------------------------+\n<\/code><\/pre>\n<p>Now the router we created for the external network should be reachable from the outside. 
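<\/p>
<p>The <code>ext-subnet<\/code> values can also be double-checked offline. A small sketch using Python&#8217;s stdlib <code>ipaddress<\/code> module, with the values copied from the <code>subnet-create<\/code> call above, confirming the gateway and allocation pool actually sit inside the \/25 CIDR:<\/p>

```python
import ipaddress

# values from the "neutron subnet-create ext-net ..." call above
ext = ipaddress.ip_network("192.168.144.0/25")
gateway = ipaddress.ip_address("192.168.144.1")
pool_start = ipaddress.ip_address("192.168.144.2")
pool_end = ipaddress.ip_address("192.168.144.120")

# gateway and both pool boundaries must fall inside the subnet
assert gateway in ext
assert pool_start in ext and pool_end in ext
assert pool_start < pool_end

# a /25 holds 128 addresses, 126 of them usable for hosts
print(ext.num_addresses)   # 128
```

<p>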
We test by pinging it from the host itself:<\/p>\n<pre><code>igorc@silverstone:~\/Downloads$ ping -c 4 192.168.144.1\nPING 192.168.144.1 (192.168.144.1) 56(84) bytes of data.\n64 bytes from 192.168.144.1: icmp_seq=1 ttl=64 time=0.094 ms\n64 bytes from 192.168.144.1: icmp_seq=2 ttl=64 time=0.089 ms\n64 bytes from 192.168.144.1: icmp_seq=3 ttl=64 time=0.054 ms\n64 bytes from 192.168.144.1: icmp_seq=4 ttl=64 time=0.046 ms\n\n--- 192.168.144.1 ping statistics ---\n4 packets transmitted, 4 received, 0% packet loss, time 2997ms\nrtt min\/avg\/max\/mdev = 0.046\/0.070\/0.094\/0.023 ms\nigorc@silverstone:~\/Downloads$\n<\/code><\/pre>\n<p>All good here.<\/p>\n<h1><span id=\"horizon\">Horizon<\/span><\/h1>\n<p>Simply install the Horizon packages on the Controller node:<\/p>\n<pre><code>root@ostack-controller:~# aptitude install apache2 memcached libapache2-mod-wsgi openstack-dashboard\n<\/code><\/pre>\n<p>and connect to <code>http:\/\/192.168.122.111\/horizon<\/code> to access the dashboard.<\/p>\n<h1><span id=\"cinder\">Cinder<\/span><\/h1>\n<p>In this example I have set up 2 Cinder Volume nodes with a Ceph backend of 3 clustered nodes.<\/p>\n<pre><code>        192.168.122.214             |             192.168.122.216\n     +------------------+           |           +-----------------+\n     |[ Cinder Volume ] |           |           |[ Cinder Volume ]|\n     |      node1       |-----------+-----------|      node2      |\n     +------------------+           |           +-----------------+\n                                    |\n        +---------------------------+--------------------------+\n        |                           |                          |\n        |192.168.122.211            |192.168.122.212           |192.168.122.213 \n+-------+----------+       +--------+---------+       +--------+---------+\n| [ Ceph Node #1 ] |       | [ Ceph Node #2 ] |       | [ Ceph Node #3 ] |\n|  Monitor Daemon  +-------+  Monitor Daemon  +-------+  Monitor Daemon  |\n|  Object 
Storage  |       |  Object Storage  |       |  Object Storage  |\n| Meta Data Server |       | Meta Data Server |       | Meta Data Server |\n|   Ceph-Deploy    |       |                  |       |                  |\n+------------------+       +------------------+       +------------------+\n<\/code><\/pre>\n<h2><span id=\"ceph-cluster-setup\">Ceph cluster setup<\/span><\/h2>\n<p>First setup a password-less login for <code>igorc<\/code> user from <code>ostack-ceph1<\/code> to <code>ostack-ceph2<\/code> and <code>ostack-ceph3<\/code>:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ ssh-keygen -t rsa -f \/home\/igorc\/.ssh\/id_rsa -N ''\n<\/code><\/pre>\n<p>on ostack-ceph1 only:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ cat \/home\/igorc\/.ssh\/id_rsa.pub | ssh igorc@ostack-ceph2 \"cat &gt;&gt; ~\/.ssh\/authorized_keys\"\nigorc@ostack-ceph1:~$ cat \/home\/igorc\/.ssh\/id_rsa.pub | ssh igorc@ostack-ceph3 \"cat &gt;&gt; ~\/.ssh\/authorized_keys\"\nigorc@ostack-ceph1:~$ ssh igorc@ostack-ceph2 \"chmod 600 ~\/.ssh\/authorized_keys\"\nigorc@ostack-ceph1:~$ ssh igorc@ostack-ceph3 \"chmod 600 ~\/.ssh\/authorized_keys\"\n<\/code><\/pre>\n<p>and set:<\/p>\n<pre><code>%sudo   ALL=(ALL:ALL) NOPASSWD:ALL\n<\/code><\/pre>\n<p>in <code>\/etc\/sudoers<\/code> file on each server.<\/p>\n<p>Prepare the installation on <code>ostack-ceph1<\/code>:<\/p>\n<pre><code>$ wget -q -O- 'https:\/\/ceph.com\/git\/?p=ceph.git;a=blob_plain;f=keys\/release.asc' | sudo apt-key add -\n$ echo deb http:\/\/ceph.com\/debian-dumpling\/ $(lsb_release -sc) main | sudo tee \/etc\/apt\/sources.list.d\/ceph.list\n$ sudo aptitude update &amp;&amp; sudo aptitude install ceph-deploy\n<\/code><\/pre>\n<p>Then initiate the new cluster using <code>ceph-deploy<\/code>:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ mkdir ceph-cluster &amp;&amp; cd ceph-cluster\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy install ostack-ceph1 ostack-ceph2 ostack-ceph3\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --cluster ceph new 
ostack-ceph{1,2,3}\n<\/code><\/pre>\n<p>Then we need to modify the <code>ceph.conf<\/code> file:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ vi ceph.conf \n[global]\nfsid = ed8d8819-e05b-48d4-ba9f-f0bc8493f18f\nmon_initial_members = ostack-ceph1, ostack-ceph2, ostack-ceph3\nmon_host = 192.168.122.211, 192.168.122.212, 192.168.122.213\nauth_cluster_required = cephx\nauth_service_required = cephx\nauth_client_required = cephx\nfilestore_xattr_use_omap = true\npublic_network = 192.168.122.0\/24\n\n[mon.ostack-ceph1]\n     host = ostack-ceph1 \n     mon addr = 192.168.122.211:6789\n\n[mon.ostack-ceph2]\n     host = ostack-ceph2 \n     mon addr = 192.168.122.212:6789\n\n[mon.ostack-ceph3]\n     host = ostack-ceph3 \n     mon addr = 192.168.122.213:6789\n\n[osd]\nosd_journal_size = 512 \nosd_pool_default_size = 3\nosd_pool_default_min_size = 1\nosd_pool_default_pg_num = 64 \nosd_pool_default_pgp_num = 64\n<\/code><\/pre>\n<p>and continue with Monitors installation:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy mon create ostack-ceph1 ostack-ceph2 ostack-ceph3\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy gatherkeys ostack-ceph1\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo chmod +r \/etc\/ceph\/ceph.client.admin.keyring\n<\/code><\/pre>\n<p>and check for cluster status:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph quorum_status --format json-pretty\n\n{ \"election_epoch\": 6,\n  \"quorum\": [\n        0,\n        1,\n        2],\n  \"quorum_names\": [\n        \"ostack-ceph1\",\n        \"ostack-ceph2\",\n        \"ostack-ceph3\"],\n  \"quorum_leader_name\": \"ostack-ceph1\",\n  \"monmap\": { \"epoch\": 1,\n      \"fsid\": \"ed8d8819-e05b-48d4-ba9f-f0bc8493f18f\",\n      \"modified\": \"0.000000\",\n      \"created\": \"0.000000\",\n      \"mons\": [\n            { \"rank\": 0,\n              \"name\": \"ostack-ceph1\",\n              \"addr\": \"192.168.122.211:6789\\\/0\"},\n            { \"rank\": 1,\n              \"name\": 
\"ostack-ceph2\",\n              \"addr\": \"192.168.122.212:6789\\\/0\"},\n            { \"rank\": 2,\n              \"name\": \"ostack-ceph3\",\n              \"addr\": \"192.168.122.213:6789\\\/0\"}]}}\n<\/code><\/pre>\n<p>Then we set the OSD&#8217;s:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf osd --zap-disk create ostack-ceph1:\/dev\/sda ostack-ceph2:\/dev\/sda ostack-ceph3:\/dev\/sda\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create datastore 100\npool 'datastore' created\n<\/code><\/pre>\n<p>The number of placement groups (pgp) is based on 100 x the number of OSD\u2019s \/ the number of replicas we want to maintain. I want 3 copies of the data (so if a server fails no data is lost), so 3 x 100 \/ 3 = 100.<\/p>\n<p>Setup the MDS service:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf mds create ostack-ceph1 ostack-ceph2 ostack-ceph3\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create cephfs_metadata 64\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create cephfs_data 64\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph fs new cephfs cephfs_metadata cephfs_data\nnew fs with metadata pool 2 and data pool 1\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd lspools\n0 rbd,1 cephfs_data,2 cephfs_metadata,3 datastore,4 images,\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph fs ls\nname: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]\n<\/code><\/pre>\n<p>Now our MDS will be up and active:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph mds stat\ne5: 1\/1\/1 up {0=ostack-ceph1=up:active}\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph status\n    cluster 5f1b2264-ab6d-43c3-af6c-3062e707a623\n     health HEALTH_WARN\n            too many PGs per OSD (320 &gt; max 300)\n     monmap e1: 3 mons at {ostack-ceph1=192.168.122.211:6789\/0,ostack-ceph2=192.168.122.212:6789\/0,ostack-ceph3=192.168.122.213:6789\/0}\n            election epoch 4, quorum 0,1,2 
ostack-ceph1,ostack-ceph2,ostack-ceph3\n     mdsmap e5: 1\/1\/1 up {0=ostack-ceph1=up:active}\n     osdmap e25: 3 osds: 3 up, 3 in\n      pgmap v114: 320 pgs, 5 pools, 1962 bytes data, 20 objects\n            107 MB used, 22899 MB \/ 23006 MB avail\n                 320 active+clean\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph osd tree\nID WEIGHT  TYPE NAME             UP\/DOWN REWEIGHT PRIMARY-AFFINITY \n-1 0.02998 root default                                            \n-2 0.00999     host ostack-ceph1                                   \n 0 0.00999         osd.0              up  1.00000          1.00000 \n-3 0.00999     host ostack-ceph2                                   \n 1 0.00999         osd.1              up  1.00000          1.00000 \n-4 0.00999     host ostack-ceph3                                   \n 2 0.00999         osd.2              up  1.00000          1.00000\n<\/code><\/pre>\n<p>Next we create the keyring for the <code>datastore<\/code> pool we created:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool --create-keyring \/etc\/ceph\/ceph.client.datastore.keyring\ncreating \/etc\/ceph\/ceph.client.datastore.keyring\n\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo chmod +r \/etc\/ceph\/ceph.client.datastore.keyring\n<\/code><\/pre>\n<p>add new key to the keyring and set proper permissions for the <code>datastore<\/code> client on the <code>datastore<\/code> pool:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool \/etc\/ceph\/ceph.client.datastore.keyring -n client.datastore --gen-key\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool -n client.datastore --cap mon 'allow r' --cap osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore' \/etc\/ceph\/ceph.client.datastore.keyring\n\nigorc@ostack-ceph1:~\/ceph-cluster$ ceph auth add client.datastore -i \/etc\/ceph\/ceph.client.datastore.keyring\nadded key for client.datastore\n<\/code><\/pre>\n<p>Now, we add the <code>client.datastore<\/code> 
user settings to the local <code>ceph.conf<\/code> file:<\/p>\n<pre><code>...\n[client.datastore]\n     keyring = \/etc\/ceph\/ceph.client.datastore.keyring\n<\/code><\/pre>\n<p>and push that to all cluster members:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf config push ostack-ceph1 ostack-ceph2 ostack-ceph3\n<\/code><\/pre>\n<p>Since we have a MON service running on each host, we want to be able to mount from each host too, so we copy the new key to the other nodes:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.datastore.keyring ostack-ceph2:~ &amp;&amp; ssh ostack-ceph2 sudo cp ceph.client.datastore.keyring \/etc\/ceph\/  \nigorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.datastore.keyring ostack-ceph3:~ &amp;&amp; ssh ostack-ceph3 sudo cp ceph.client.datastore.keyring \/etc\/ceph\/\n<\/code><\/pre>\n<p>Next we create a separate pool for the Glance images, repeating the above procedure for the keyring and the user:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph osd pool create images 64\npool 'images' created\n\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool --create-keyring \/etc\/ceph\/ceph.client.images.keyring\ncreating \/etc\/ceph\/ceph.client.images.keyring\n\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo chmod +r \/etc\/ceph\/ceph.client.images.keyring\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool \/etc\/ceph\/ceph.client.images.keyring -n client.images --gen-key\nigorc@ostack-ceph1:~\/ceph-cluster$ sudo ceph-authtool -n client.images --cap mon 'allow r' --cap osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \/etc\/ceph\/ceph.client.images.keyring \nigorc@ostack-ceph1:~\/ceph-cluster$ ceph auth add client.images -i \/etc\/ceph\/ceph.client.images.keyring \nadded key for client.images\n<\/code><\/pre>\n<p>Now, we add the <code>client.images<\/code> user settings to the local <code>ceph.conf<\/code> 
file:<\/p>\n<pre><code>...\n[client.images]\n     keyring = \/etc\/ceph\/ceph.client.images.keyring\n<\/code><\/pre>\n<p>and push that to all cluster members:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ ceph-deploy --overwrite-conf config push ostack-ceph1 ostack-ceph2 ostack-ceph3\n<\/code><\/pre>\n<p>Since we have a MON service running on each host, we want to be able to mount from each host too, so we copy the new key to the other nodes:<\/p>\n<pre><code>igorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.images.keyring ostack-ceph2:~ &amp;&amp; ssh ostack-ceph2 sudo cp ceph.client.images.keyring \/etc\/ceph\/\nigorc@ostack-ceph1:~\/ceph-cluster$ scp \/etc\/ceph\/ceph.client.images.keyring ostack-ceph3:~ &amp;&amp; ssh ostack-ceph3 sudo cp ceph.client.images.keyring \/etc\/ceph\/\n<\/code><\/pre>\n<p>Finally, copy the keyring over to the Controller node where Glance is running (here by pasting its contents via <code>vi<\/code>) and make it readable:<\/p>\n<pre><code>root@ostack-controller:~# vi \/etc\/ceph\/ceph.client.images.keyring \nroot@ostack-controller:~# chmod +r \/etc\/ceph\/ceph.client.images.keyring\n<\/code><\/pre>\n<h2><span id=\"cinder-setup\">Cinder setup<\/span><\/h2>\n<h3><span id=\"controller-node_2\">Controller node<\/span><\/h3>\n<p>Create the Cinder user and assign it the admin role:<\/p>\n<pre><code>root@ostack-controller:~# keystone user-create --name=cinder --pass=password --tenant_id d38657485ad24b9fb2e216dadc612f92 --email=cinder@icicimov.com\n+----------+----------------------------------+\n| Property |              Value               |\n+----------+----------------------------------+\n|  email   |       cinder@icicimov.com        |\n| enabled  |               True               |\n|    id    | 30754a3c623f4ea2a4563d0092dd74f1 |\n|   name   |              cinder              |\n| tenantId | d38657485ad24b9fb2e216dadc612f92 |\n| username |              cinder              |\n+----------+----------------------------------+\nroot@ostack-controller:~# keystone user-role-add --tenant_id d38657485ad24b9fb2e216dadc612f92 
--user 30754a3c623f4ea2a4563d0092dd74f1 --role admin\n<\/code><\/pre>\n<p>Install Cinder packages:<\/p>\n<pre><code>root@ostack-controller:~# aptitude install cinder-api cinder-scheduler python-cinderclient\n<\/code><\/pre>\n<p>and configure Cinder to use Ceph as a storage backend via the <code>rbd<\/code> driver, with <code>rbd_user<\/code> set to the <code>datastore<\/code> Ceph user we created:<\/p>\n<pre><code>root@ostack-controller:~# cat \/etc\/cinder\/cinder.conf \n[DEFAULT]\nrootwrap_config = \/etc\/cinder\/rootwrap.conf\napi_paste_confg = \/etc\/cinder\/api-paste.ini\niscsi_helper = tgtadm\nvolume_name_template = volume-%s\nvolume_group = cinder-volumes\nverbose = True\nauth_strategy = keystone\nstate_path = \/var\/lib\/cinder\nlock_path = \/var\/lock\/cinder\nvolumes_dir = \/var\/lib\/cinder\/volumes\nrpc_backend = rabbit\nrabbit_host = 192.168.122.111 \nrabbit_password = password\nrabbit_userid = guest\n## Ceph backend ##\nvolume_driver=cinder.volume.drivers.rbd.RBDDriver\nrbd_pool=datastore\nrbd_ceph_conf=\/etc\/ceph\/ceph.conf\nrbd_flatten_volume_from_snapshot=false\nrbd_max_clone_depth=5\nrbd_user=datastore\nglance_api_version=2\n\n[database]\nconnection = mysql:\/\/cinderdbadmin:Ue8Ud8re@192.168.122.111\/cinder\n\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\/v2.0\nauth_host = 192.168.122.111 \nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service\nadmin_user = cinder\nadmin_password = password\n<\/code><\/pre>\n<p>Populate the db schema:<\/p>\n<pre><code>root@ostack-controller:~# su -s \/bin\/sh -c \"cinder-manage db sync\" cinder\n<\/code><\/pre>\n<p>and restart Cinder services:<\/p>\n<pre><code>root@ostack-controller:~# service cinder-scheduler restart\nroot@ostack-controller:~# service cinder-api restart\n\nroot@ostack-controller:~# rm -f \/var\/lib\/cinder\/cinder.sqlite\n<\/code><\/pre>\n<p>Now prepare the ceph configuration:<\/p>\n<pre><code>root@ostack-controller:~# aptitude install ceph-common python-ceph\nroot@ostack-controller:~# mkdir \/etc\/ceph\n<\/code><\/pre>\n<p>and copy the 
<code>\/etc\/ceph\/ceph.conf<\/code> and <code>\/etc\/ceph\/ceph.client.datastore.keyring<\/code> from the ceph cluster and set the keyring permission to <code>read<\/code> so Cinder can open the file:<\/p>\n<pre><code>root@ostack-controller:~# chmod +r \/etc\/ceph\/ceph.client.datastore.keyring\nroot@ostack-controller:~# service cinder-api restart\n<\/code><\/pre>\n<h3><span id=\"volume-nodes\">Volume nodes<\/span><\/h3>\n<p>Install Cinder packages:<\/p>\n<pre><code>root@ostack-cinder-volume1:~# aptitude install cinder-volume python-mysqldb sysfsutils\n<\/code><\/pre>\n<p>and configure Cinder:<\/p>\n<pre><code>root@ostack-cinder-volume1:~# cat \/etc\/cinder\/cinder.conf \n[DEFAULT]\nrootwrap_config = \/etc\/cinder\/rootwrap.conf\napi_paste_confg = \/etc\/cinder\/api-paste.ini\niscsi_helper = tgtadm\nvolume_name_template = volume-%s\nvolume_group = cinder-volumes\nverbose = True\nauth_strategy = keystone\nstate_path = \/var\/lib\/cinder\nlock_path = \/var\/lock\/cinder\nvolumes_dir = \/var\/lib\/cinder\/volumes\nrpc_backend = rabbit\nrabbit_host = 192.168.122.111 \nrabbit_password = password\nrabbit_userid = guest\nglance_host = 192.168.122.111\n## Ceph backend ##\nvolume_driver=cinder.volume.drivers.rbd.RBDDriver\nrbd_pool=datastore\nrbd_ceph_conf=\/etc\/ceph\/ceph.conf\nrbd_flatten_volume_from_snapshot=false\nrbd_max_clone_depth=5\nrbd_user=datastore\nglance_api_version=2\n#rbd_secret_uuid=e1915277-e3a5-4547-bc9e-xxxxxxx\nquota_volumes=20\nquota_snapshots=20\n\n[database]\nconnection = mysql:\/\/cinderdbadmin:Ue8Ud8re@192.168.122.111\/cinder\n\n[keystone_authtoken]\nauth_uri = http:\/\/192.168.122.111:5000\/v2.0\nauth_host = 192.168.122.111 \nauth_port = 35357\nauth_protocol = http\nadmin_tenant_name = service\nadmin_user = cinder\nadmin_password = password\n<\/code><\/pre>\n<p>Now prepare the ceph configuration:<\/p>\n<pre><code>root@ostack-cinder-volume1:~# aptitude install ceph-common python-ceph ceph-fuse ceph-fs-common\nroot@ostack-cinder-volume1:~# mkdir 
\/etc\/ceph\n<\/code><\/pre>\n<p>and copy the <code>\/etc\/ceph\/ceph.conf<\/code> and <code>\/etc\/ceph\/ceph.client.datastore.keyring<\/code> from the ceph cluster and set the keyring permission to read so Cinder can open the file:<\/p>\n<pre><code>root@ostack-cinder-volume1:~# chmod +r \/etc\/ceph\/ceph.client.datastore.keyring\nroot@ostack-cinder-volume1:~# service cinder-volume restart\n<\/code><\/pre>\n<h2><span id=\"create-the-first-volume\">Create the first volume<\/span><\/h2>\n<p>Finally we go and create our first Ceph backed volume:<\/p>\n<pre><code>root@ostack-controller:~# nova volume-create --display_name \"volume1\" 1\n+---------------------+--------------------------------------+\n| Property            | Value                                |\n+---------------------+--------------------------------------+\n| attachments         | []                                   |\n| availability_zone   | nova                                 |\n| bootable            | false                                |\n| created_at          | 2014-09-17T02:45:06.999692           |\n| display_description | -                                    |\n| display_name        | volume1                              |\n| encrypted           | False                                |\n| id                  | d137be6f-7c40-447c-8106-30d0ff8d9a20 |\n| metadata            | {}                                   |\n| size                | 1                                    |\n| snapshot_id         | -                                    |\n| source_volid        | -                                    |\n| status              | creating                             |\n| volume_type         | None                                 |\n+---------------------+--------------------------------------+\n\nroot@ostack-controller:~# cinder list\n+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+\n|                  ID                  |   Status 
 | Display Name | Size | Volume Type | Bootable | Attached to |\n+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+\n| d137be6f-7c40-447c-8106-30d0ff8d9a20 | available |   volume1    |  1   |     None    |  false   |             |\n+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+\n\nroot@ostack-controller:~# nova volume-show volume1\n+--------------------------------+--------------------------------------+\n| Property                       | Value                                |\n+--------------------------------+--------------------------------------+\n| attachments                    | []                                   |\n| availability_zone              | nova                                 |\n| bootable                       | false                                |\n| created_at                     | 2014-09-17T02:45:06.000000           |\n| display_description            | -                                    |\n| display_name                   | volume1                              |\n| encrypted                      | False                                |\n| id                             | d137be6f-7c40-447c-8106-30d0ff8d9a20 |\n| metadata                       | {}                                   |\n| os-vol-host-attr:host          | ostack-cinder-volume1                |\n| os-vol-mig-status-attr:migstat | -                                    |\n| os-vol-mig-status-attr:name_id | -                                    |\n| os-vol-tenant-attr:tenant_id   | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n| size                           | 1                                    |\n| snapshot_id                    | -                                    |\n| source_volid                   | -                                    |\n| status                         | available                            |\n| volume_type                    | 
None                                 |\n+--------------------------------+--------------------------------------+\n<\/code><\/pre>\n<h1><span id=\"launch-an-instance\">Launch an instance<\/span><\/h1>\n<p>First, create a keypair we are going to use to login to the instance:<\/p>\n<pre><code>root@ostack-controller:~# ssh-keygen -t rsa -b 2048 -f ~\/.ssh\/id_rsa -N ''\nGenerating public\/private rsa key pair.\nCreated directory '\/root\/.ssh'.\nYour identification has been saved in \/root\/.ssh\/id_rsa.\nYour public key has been saved in \/root\/.ssh\/id_rsa.pub.\nThe key fingerprint is:\n01:7e:8d:38:f7:cf:5f:22:f6:ea:b4:71:c3:2a:76:b5 root@ostack-controller\nThe key's randomart image is:\n+--[ RSA 2048]----+\n|      .          |\n|     . o o       |\n|      + = .      |\n|       + o       |\n|        S .      |\n|           o ..  |\n|            B.=..|\n|          oo.OE+ |\n|         . +=.o  |\n+-----------------+\n\nroot@ostack-controller:~# nova keypair-add --pub_key ~\/.ssh\/id_rsa.pub key1\nroot@ostack-controller:~# nova keypair-list\n+------+-------------------------------------------------+\n| Name | Fingerprint                                     |\n+------+-------------------------------------------------+\n| key1 | 01:7e:8d:38:f7:cf:5f:22:f6:ea:b4:71:c3:2a:76:b5 |\n+------+-------------------------------------------------+\n<\/code><\/pre>\n<p>Next, create and launch the instance:<\/p>\n<pre><code>root@ostack-controller:~# nova boot --poll --flavor 1 --image a25d69b3-623a-40c6-aca3-00f1233295ea --security-groups default --key-name key1 --nic net-id=2322ae02-88a9-4daa-898d-1c4c0b2653ca Cirros01\n+--------------------------------------+------------------------------------------------------------+\n| Property                             | Value                                                      |\n+--------------------------------------+------------------------------------------------------------+\n| OS-DCF:diskConfig                    | MANUAL              
                                       |\n| OS-EXT-AZ:availability_zone          | nova                                                       |\n| OS-EXT-SRV-ATTR:host                 | -                                                          |\n| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |\n| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                                          |\n| OS-EXT-STS:power_state               | 0                                                          |\n| OS-EXT-STS:task_state                | scheduling                                                 |\n| OS-EXT-STS:vm_state                  | building                                                   |\n| OS-SRV-USG:launched_at               | -                                                          |\n| OS-SRV-USG:terminated_at             | -                                                          |\n| accessIPv4                           |                                                            |\n| accessIPv6                           |                                                            |\n| adminPass                            | pRiYGsBiTR9s                                               |\n| config_drive                         |                                                            |\n| created                              | 2014-09-17T11:37:18Z                                       |\n| flavor                               | m1.tiny (1)                                                |\n| hostId                               |                                                            |\n| id                                   | e4703509-eab2-45d0-9ab9-f3362448da21                       |\n| image                                | CirrOS-0.3.1-x86_64 (a25d69b3-623a-40c6-aca3-00f1233295ea) |\n| key_name                             | key1                                                       
|\n| metadata                             | {}                                                         |\n| name                                 | Cirros01                                                   |\n| os-extended-volumes:volumes_attached | []                                                         |\n| progress                             | 0                                                          |\n| security_groups                      | default                                                    |\n| status                               | BUILD                                                      |\n| tenant_id                            | 4b53dc514f0a4f6bbfd89eac63f7b206                           |\n| updated                              | 2014-09-17T11:37:19Z                                       |\n| user_id                              | d6145ea56cc54bb4aa2b2b4a1c7ae6bb                           |\n+--------------------------------------+------------------------------------------------------------+\nServer building... 
100% complete\nFinished\n\nroot@ostack-controller:~# nova list\n+--------------------------------------+----------+--------+------------+-------------+-------------------+\n| ID                                   | Name     | Status | Task State | Power State | Networks          |\n+--------------------------------------+----------+--------+------------+-------------+-------------------+\n| e4703509-eab2-45d0-9ab9-f3362448da21 | Cirros01 | ACTIVE | -          | Running     | demo-net=10.0.0.3 |\n+--------------------------------------+----------+--------+------------+-------------+-------------------+\n<\/code><\/pre>\n<p>To give the instance a specific IP we can do:<\/p>\n<pre><code>$ neutron port-create --fixed-ip subnet_id=SUBNET_ID,ip_address=IP_ADDRESS NET_ID\n$ nova boot --image IMAGE --flavor FLAVOR --nic port-id=PORT_ID VM_NAME\n<\/code><\/pre>\n<p>Add rules to the <code>default<\/code> security group so SSH and ICMP traffic can reach the instance:<\/p>\n<pre><code>root@ostack-controller:~# nova secgroup-add-rule default tcp 22 22 0.0.0.0\/0\n+-------------+-----------+---------+-----------+--------------+\n| IP Protocol | From Port | To Port | IP Range  | Source Group |\n+-------------+-----------+---------+-----------+--------------+\n| tcp         | 22        | 22      | 0.0.0.0\/0 |              |\n+-------------+-----------+---------+-----------+--------------+\n\nroot@ostack-controller:~# nova secgroup-add-rule default icmp -1 -1 0.0.0.0\/0\n+-------------+-----------+---------+-----------+--------------+\n| IP Protocol | From Port | To Port | IP Range  | Source Group |\n+-------------+-----------+---------+-----------+--------------+\n| icmp        | -1        | -1      | 0.0.0.0\/0 |              |\n+-------------+-----------+---------+-----------+--------------+\n<\/code><\/pre>\n<p>Give the instance a public IP so we can connect to it. 
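<\/p>\n<p>Before that, we can double-check that both rules landed in the <code>default<\/code> group with <code>nova secgroup-list-rules<\/code> (the table below is reconstructed from the two rules added above):<\/p>\n<pre><code>root@ostack-controller:~# nova secgroup-list-rules default\n+-------------+-----------+---------+-----------+--------------+\n| IP Protocol | From Port | To Port | IP Range  | Source Group |\n+-------------+-----------+---------+-----------+--------------+\n| icmp        | -1        | -1      | 0.0.0.0\/0 |              |\n| tcp         | 22        | 22      | 0.0.0.0\/0 |              |\n+-------------+-----------+---------+-----------+--------------+\n<\/code><\/pre>\n<p>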
First create a <code>floating<\/code> ip:<\/p>\n<pre><code>root@ostack-controller:~# neutron floatingip-create ext-net\nCreated a new floatingip:\n+---------------------+--------------------------------------+\n| Field               | Value                                |\n+---------------------+--------------------------------------+\n| fixed_ip_address    |                                      |\n| floating_ip_address | 192.168.144.3                        |\n| floating_network_id | 4d584b71-1b3a-46a5-b32a-7fd2ba3e2535 |\n| id                  | 44a4b23c-1345-4dcb-b286-a2759246cdb4 |\n| port_id             |                                      |\n| router_id           |                                      |\n| status              | DOWN                                 |\n| tenant_id           | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n+---------------------+--------------------------------------+\n\nroot@ostack-controller:~# nova list\n+--------------------------------------+----------+--------+------------+-------------+-------------------+\n| ID                                   | Name     | Status | Task State | Power State | Networks          |\n+--------------------------------------+----------+--------+------------+-------------+-------------------+\n| e4703509-eab2-45d0-9ab9-f3362448da21 | Cirros01 | ACTIVE | -          | Running     | demo-net=10.0.0.3 |\n+--------------------------------------+----------+--------+------------+-------------+-------------------+\n<\/code><\/pre>\n<p>and then associate the ip with the instance:<\/p>\n<pre><code>root@ostack-controller:~# nova floating-ip-associate Cirros01 192.168.144.3\n\nroot@ostack-controller:~# nova list\n+--------------------------------------+----------+--------+------------+-------------+----------------------------------+\n| ID                                   | Name     | Status | Task State | Power State | Networks                         
|\n+--------------------------------------+----------+--------+------------+-------------+----------------------------------+\n| e4703509-eab2-45d0-9ab9-f3362448da21 | Cirros01 | ACTIVE | -          | Running     | demo-net=10.0.0.3, 192.168.144.3 |\n+--------------------------------------+----------+--------+------------+-------------+----------------------------------+\n<\/code><\/pre>\n<p>Now, using the SSH key we created before and the public (floating) IP we attached, we can connect to the instance from outside (the hypervisor):<\/p>\n<pre><code>root@ostack-controller:~# ssh cirros@192.168.144.3\n<\/code><\/pre>\n<h1><span id=\"booting-from-image-volumes-stored-in-ceph\">Booting from image volumes stored in CEPH<\/span><\/h1>\n<p>First, the image needs to be stored in RAW format:<\/p>\n<pre><code>root@ostack-controller:~# wget http:\/\/download.cirros-cloud.net\/0.3.4\/cirros-0.3.4-x86_64-disk.img\nroot@ostack-controller:~# qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.raw\nroot@ostack-controller:~# glance image-create --name CirrOS-0.3.4-x86_64_raw --is-public=true --disk-format=raw --container-format=bare &lt; cirros-0.3.4-x86_64-disk.raw \n+------------------+--------------------------------------+\n| Property         | Value                                |\n+------------------+--------------------------------------+\n| checksum         | 56730d3091a764d5f8b38feeef0bfcef     |\n| container_format | bare                                 |\n| created_at       | 2016-02-16T01:18:00                  |\n| deleted          | False                                |\n| deleted_at       | None                                 |\n| disk_format      | raw                                  |\n| id               | 147c22d8-2d32-4042-8f74-740f40112052 |\n| is_public        | True                                 |\n| min_disk         | 0                                    |\n| min_ram          | 0                                    |\n| name       
      | CirrOS-0.3.4-x86_64_raw              |\n| owner            | 4b53dc514f0a4f6bbfd89eac63f7b206     |\n| protected        | False                                |\n| size             | 41126400                             |\n| status           | active                               |\n| updated_at       | 2016-02-16T01:18:16                  |\n| virtual_size     | None                                 |\n+------------------+--------------------------------------+\nroot@ostack-controller:~# glance image-list\n+--------------------------------------+-----------------------------+-------------+------------------+-----------+--------+\n| ID                                   | Name                        | Disk Format | Container Format | Size      | Status |\n+--------------------------------------+-----------------------------+-------------+------------------+-----------+--------+\n| a25d69b3-623a-40c6-aca3-00f1233295ea | CirrOS-0.3.1-x86_64         | qcow2       | bare             | 13147648  | active |\n| 398ecc61-2b38-47e9-972b-1b2a760aa3c7 | CirrOS-0.3.2-x86_64         | qcow2       | bare             | 13167616  | active |\n| df438372-414c-46fe-910f-22fdb78cecb8 | CirrOS-0.3.3-x86_64         | qcow2       | bare             | 13200896  | active |\n| 147c22d8-2d32-4042-8f74-740f40112052 | CirrOS-0.3.4-x86_64_raw     | raw         | bare             | 41126400  | active |\n| e871958c-8bbd-42ec-ad16-31959949a43c | Ubuntu 12.04 cloudimg amd64 | qcow2       | ovf              | 261095936 | active |\n+--------------------------------------+-----------------------------+-------------+------------------+-----------+--------+\n<\/code><\/pre>\n<p>We can also see the used store size has increased in Ceph:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ ceph -s\n    cluster 5f1b2264-ab6d-43c3-af6c-3062e707a623\n     health HEALTH_WARN\n            too many PGs per OSD (320 &gt; max 300)\n     monmap e1: 3 mons at 
{ostack-ceph1=192.168.122.211:6789\/0,ostack-ceph2=192.168.122.212:6789\/0,ostack-ceph3=192.168.122.213:6789\/0}\n            election epoch 38, quorum 0,1,2 ostack-ceph1,ostack-ceph2,ostack-ceph3\n     mdsmap e23: 1\/1\/1 up {0=ostack-ceph1=up:active}\n     osdmap e55: 3 osds: 3 up, 3 in\n      pgmap v10010: 320 pgs, 5 pools, 40164 kB data, 31 objects\n            228 MB used, 22778 MB \/ 23006 MB avail\n                 320 active+clean\n<\/code><\/pre>\n<p>and both the <code>datastore<\/code> (cinder) and <code>images<\/code> (glance) pools have objects inside:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ rbd -p images ls\n147c22d8-2d32-4042-8f74-740f40112052\n\nigorc@ostack-ceph1:~$ rbd -p datastore ls\nvolume-4ca5327e-e839-4742-81db-77f8fe9ba5a0\n<\/code><\/pre>\n<p>Now if we create a volume from this image:<\/p>\n<pre><code>root@ostack-controller:~# cinder create --image-id 147c22d8-2d32-4042-8f74-740f40112052 --display-name cephVolume1 4\n+---------------------+--------------------------------------+\n|       Property      |                Value                 |\n+---------------------+--------------------------------------+\n|     attachments     |                  []                  |\n|  availability_zone  |                 nova                 |\n|       bootable      |                false                 |\n|      created_at     |      2016-02-16T01:25:28.514010      |\n| display_description |                 None                 |\n|     display_name    |             cephVolume1              |\n|      encrypted      |                False                 |\n|          id         | 1e8dd895-6987-4ca0-aab1-f583a6e0740c |\n|       image_id      | 147c22d8-2d32-4042-8f74-740f40112052 |\n|       metadata      |                  {}                  |\n|         size        |                  4                   |\n|     snapshot_id     |                 None                 |\n|     source_volid    |                 None                 |\n|        status       
|               creating               |\n|     volume_type     |                 None                 |\n+---------------------+--------------------------------------+\n<\/code><\/pre>\n<p>Then we need to enable Nova and <code>libvirt<\/code> to work with Ceph storage. Since Ceph authentication is enabled, we need to create an auth secret in libvirt on the compute node, using the existing <code>datastore<\/code> Ceph user we created earlier.<\/p>\n<pre><code>root@ostack-compute:~# uuidgen\n1c5a669e-980f-4721-9f31-8103551c917c\n\nroot@ostack-compute:~# vi secret.xml\n&lt;secret ephemeral='no' private='no'&gt;\n  &lt;uuid&gt;1c5a669e-980f-4721-9f31-8103551c917c&lt;\/uuid&gt;\n  &lt;usage type='ceph'&gt;\n    &lt;name&gt;client.datastore secret&lt;\/name&gt;\n  &lt;\/usage&gt;\n&lt;\/secret&gt;\n\nroot@ostack-compute:~# virsh secret-define --file secret.xml\nSecret 1c5a669e-980f-4721-9f31-8103551c917c created\n<\/code><\/pre>\n<p>We get the <code>datastore<\/code> user&#8217;s key from one of the Ceph cluster nodes:<\/p>\n<pre><code>igorc@ostack-ceph1:~$ ceph auth get-key client.datastore\nAQA3SuRVuaeGAxAAPHAFDfT2gX8iNIj1QWfQkA==\n<\/code><\/pre>\n<p>and create the libvirt secret:<\/p>\n<pre><code>root@ostack-compute:~# virsh secret-set-value --secret 1c5a669e-980f-4721-9f31-8103551c917c --base64 AQA3SuRVuaeGAxAAPHAFDfT2gX8iNIj1QWfQkA==\nSecret value set\n<\/code><\/pre>\n<p>Now we enable Nova to work with Ceph volumes (rbd storage driver):<\/p>\n<pre><code>root@ostack-compute:~# vi \/etc\/nova\/nova.conf\n[DEFAULT]\n...\n## CEPH VOLUMES ##\nlibvirt_images_type=rbd\nlibvirt_images_rbd_pool=datastore\nlibvirt_images_rbd_ceph_conf=\/etc\/ceph\/ceph.conf\nrbd_user=datastore\nrbd_secret_uuid=1c5a669e-980f-4721-9f31-8103551c917c\nlibvirt_inject_password=false\nlibvirt_inject_key=false\nlibvirt_inject_partition=-2\n<\/code><\/pre>\n<p>Confirm the file <code>\/etc\/ceph\/ceph.conf<\/code> exists and restart the compute service:<\/p>\n<pre><code>root@ostack-compute:~# service nova-compute restart\n<\/code><\/pre>\n<p>After this we can go to the GUI and 
launch new Cirros <code>m1.small<\/code> instance (we can&#8217;t use m1.tiny since this flavor supports 1GB volumes only and ours is 4GB) and<br \/>\nchoose boot from volume option:<\/p>\n<pre><code>Instance boot source: Boot from volume\nVolume: cephVolume1 - 4GB (volume)\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>This is a standard Installation of OpenStack Icehouse on 3 x VM nodes: Controller, Compute and Networking. Later I decided to create 2 separate storage nodes for the Cinder service that will be using CEPH\/RADOS cluster as object storage since&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[15,13],"tags":[],"class_list":["post-227","post","type-post","status-publish","format-standard","hentry","category-openstack","category-virtualization"],"_links":{"self":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/227","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=227"}],"version-history":[{"count":4,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/227\/revisions"}],"predecessor-version":[{"id":237,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/227\/revisions\/237"}],"wp:attachment":[{"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=227"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=227"},{
"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/icicimov.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}