OpenStack Liberty: Remove elements like duplicated hypervisors or unwanted ports

When you are experimenting with OpenStack, failures are common. This post shows how to remove leftovers like unwanted ports or duplicated compute nodes.

Removing Duplicated Compute Nodes

I’ve used this trick several times: my nested OpenStack Nuage lab (and its several reinstallations) forced me to remove duplicated nova-compute entries using this procedure.

First, let’s check out our hypervisors.


[root@ocs01 ~]# . keystonerc_admin 
[root@ocs01 ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | nova01              | down  | enabled |
| 3  | nova01              | up    | enabled |
+----+---------------------+-------+---------+

Now, we’ll check the database and see what we have:


[root@ocs01 ~(keystone_admin)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 17
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

MariaDB [nova]> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;
+----+---------------------+---------------------+---------------------+
| id | created_at          | updated_at          | hypervisor_hostname |
+----+---------------------+---------------------+---------------------+
|  1 | 2016-05-19 14:23:52 | 2016-05-19 19:16:56 | nova01              |
|  2 | 2016-05-19 19:30:00 | 2016-05-19 20:52:29 | nova01.novalocal    |
|  3 | 2016-05-23 17:27:07 | 2016-05-23 18:15:51 | nova01              |
+----+---------------------+---------------------+---------------------+
3 rows in set (0.00 sec)
MariaDB [nova]> exit
Bye

Let’s check the service list.


[root@ocs01 ~(keystone_admin)]# nova service-list
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 2  | nova-scheduler   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 3  | nova-conductor   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:54.000000 | -               |
| 4  | nova-cert        | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 5  | nova-compute     | nova01           | nova     | enabled | down  | 2016-05-19T19:17:52.000000 | -               |
| 6  | nova-cert        | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 7  | nova-conductor   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 8  | nova-consoleauth | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 9  | nova-scheduler   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 10 | nova-compute     | nova01.novalocal | nova     | enabled | up    | 2016-05-23T18:17:01.000000 | -               |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+

We’ll remove the stale entries from the compute_nodes and services tables as follows:


[root@ocs01 ~(keystone_admin)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 18
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

MariaDB [nova]> DELETE FROM compute_nodes WHERE id='1';
Query OK, 1 row affected (0.05 sec)

MariaDB [nova]> DELETE FROM compute_nodes WHERE id='2';
Query OK, 1 row affected (0.07 sec)

MariaDB [nova]> DELETE FROM services WHERE host='nova01';
Query OK, 1 row affected (0.01 sec)

MariaDB [nova]> exit
Bye

Let’s check if we’ve got this fixed.


[root@ocs01 ~(keystone_admin)]# nova service-list
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 2  | nova-scheduler   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 3  | nova-conductor   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:54.000000 | -               |
| 4  | nova-cert        | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 6  | nova-cert        | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 7  | nova-conductor   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 8  | nova-consoleauth | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 9  | nova-scheduler   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 10 | nova-compute     | nova01.novalocal | nova     | enabled | up    | 2016-05-23T18:19:41.000000 | -               |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
[root@ocs01 ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 3  | nova01              | up    | enabled |
+----+---------------------+-------+---------+
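
By the way, there are gentler alternatives worth trying before raw SQL. A sketch, not something I’ve verified on every release: recent novaclient versions expose "nova service-delete", and nova soft-deletes rows, so you can mark the duplicates as deleted instead of dropping them.

nova service-delete 5    # the Id of the down nova-compute in "nova service-list"
# Soft-delete the duplicated compute_nodes rows (nova's convention: deleted = id)
mysql -u root nova -e "UPDATE compute_nodes SET deleted=id, deleted_at=NOW() WHERE id IN (1,2);"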

Removing unwanted ports

Sometimes you can run into issues with your vports. It happened to me when I had problems with the configuration of my Nuage plugin. After you fix the plugin, some stale ports may remain, and you will have to remove them from the VSD and also from the Neutron database.

Here is a way to do it on the Neutron side. Let’s check which ports we need to remove:


[root@ocs01 neutron(keystone_chain)]# neutron port-list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                         |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| 10c38a65-954c-4b74-92d3-83a2fc63306a |      | fa:16:3e:ec:62:41 | {"subnet_id": "889bba29-bcb1-4e0a-9219-0785e76c95bb", "ip_address": "10.31.31.2"} |
| 538479f2-e715-4687-aa88-b4c7626015ea |      | fa:16:3e:f9:e2:7c | {"subnet_id": "889bba29-bcb1-4e0a-9219-0785e76c95bb", "ip_address": "10.31.31.3"} |
| 70466c99-8abd-4ed9-9fcc-2800d4417698 |      | fa:16:3e:78:7a:eb | {"subnet_id": "9d80cebb-cb07-436e-8620-a8277a30ce4a", "ip_address": "10.41.41.2"} |
| 842ae886-2ade-466a-9e1d-3321f26928b0 |      | fa:16:3e:f9:d7:97 | {"subnet_id": "9d80cebb-cb07-436e-8620-a8277a30ce4a", "ip_address": "10.41.41.1"} |
| 8dd2d824-eb70-46c9-b3fa-494aec382bd8 |      | fa:16:3e:1c:01:a7 | {"subnet_id": "889bba29-bcb1-4e0a-9219-0785e76c95bb", "ip_address": "10.31.31.1"} |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+

Now, let’s go to the neutron database and remove these unwanted ports.


[root@ocs01 neutron(keystone_chain)]# mysql -u root 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 600
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use neutron
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [neutron]> delete from ports where id='10c38a65-954c-4b74-92d3-83a2fc63306a';
Query OK, 1 row affected (0.09 sec)

MariaDB [neutron]> delete from ports where id='538479f2-e715-4687-aa88-b4c7626015ea';
Query OK, 1 row affected (0.07 sec)

MariaDB [neutron]> delete from ports where id='70466c99-8abd-4ed9-9fcc-2800d4417698';
Query OK, 1 row affected (0.00 sec)

Send me a comment if you get stuck.
See you.

Building a Nuage/PackStack Lab at home Part 3

Update – August 1st, 2016: I’ve made an update thanks to @karkull’s feedback: some changes to the neutron.conf and nova.conf files. I’ve also made important changes to the way the info is presented in this post.

Hi there,

In this post, I will install a nested PackStack Liberty setup with a controller/network node and a nova compute node. Then, I will install the Nuage plugin for Neutron plus the metadata, Heat and Horizon packages. I will also install our VRS (Virtualized Routing and Switching), replacing the OVS instance.

I’ve made some changes since my last post. I’ve created a couple of new flavors: nuage.osc.2 and nuage.nova.2. The reason: I ran into memory capacity issues on the OpenStack controller. From now on, replace the flavors nuage.osc and nuage.nova with these:

[root@box01 ~(keystone_admin)]# openstack flavor create --ram 10240 --disk 250 --vcpus 4 --public nuage.nova.2
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 250                                  |
| id                         | 4e191554-25f9-4ce7-bb1b-80941d6ab839 |
| name                       | nuage.nova.2                         |
| os-flavor-access:is_public | True                                 |
| ram                        | 10240                                |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 8192 --disk 50 --vcpus 4 --public nuage.osc.2
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 50                                   |
| id                         | a98464a5-1008-45bb-972d-7997cc2f0df3 |
| name                       | nuage.osc.2                          |
| os-flavor-access:is_public | True                                 |
| ram                        | 8192                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+

Our new list of instances will now be:

[Image: pinrojas - packstack lab nuage new layout]

OpenStack Controller

I will install an OpenStack controller/network node with the services neutron, horizon, heat, nova, keystone and glance, and a nova compute server with KVM.

Let’s start by creating the server:

[root@box01 ~]# . keystonerc_nuage
[root@box01 ~(keystone_nuage)]# openstack server create --image centos7-image  --flavor nuage.osc.2 --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.6 osc01
+--------------------------------------+----------------------------------------------------------+
| Field                                | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          |                                                          |
| OS-EXT-STS:power_state               | 0                                                        |
| OS-EXT-STS:task_state                | scheduling                                               |
| OS-EXT-STS:vm_state                  | building                                                 |
| OS-SRV-USG:launched_at               | None                                                     |
| OS-SRV-USG:terminated_at             | None                                                     |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| addresses                            |                                                          |
| adminPass                            | fdqWisumw9tB                                             |
| config_drive                         |                                                          |
| created                              | 2016-05-23T17:15:20Z                                     |
| flavor                               | nuage.osc.2 (a98464a5-1008-45bb-972d-7997cc2f0df3)       |
| hostId                               |                                                          |
| id                                   | 859bfab9-6547-471f-b83f-73b7997a2b7f                     |
| image                                | snap-160519-osc01 (6082c049-a98d-4fa3-87be-241e08ea0b19) |
| key_name                             | pin-laptop                                               |
| name                                 | ocs01                                                    |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| project_id                           | 39e2f35bc10f4047b1ea77f79559807d                         |
| properties                           |                                                          |
| security_groups                      | [{u'name': u'default'}]                                  |
| status                               | BUILD                                                    |
| updated                              | 2016-05-23T17:15:20Z                                     |
| user_id                              | c91cd992e79149209c41416a55a661b1                         |
+--------------------------------------+----------------------------------------------------------+

I will add a floating IP 192.168.1.30 to get access from my home network to our osc01.


openstack ip floating create external_network
openstack ip floating add 192.168.1.30 ocs01

Let’s proceed with preparing our controller and installing PackStack.

OpenStack Controller: disable selinux and update

Let’s disable selinux to save resources.

[root@ocs01 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=DISABLED
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@ocs01 ~]# vi /etc/grub2.cfg

Change /etc/grub2.cfg and reboot. See an extract of the file below:


### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux (3.10.0-327.13.1.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-327.13.1.el7.x86_64-advanced-8a9d38ed-14e7-462a-be6c-e385d6b1906d' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1' 8a9d38ed-14e7-462a-be6c-e385d6b1906d
else
search --no-floppy --fs-uuid --set=root 8a9d38ed-14e7-462a-be6c-e385d6b1906d
fi
linux16 /boot/vmlinuz-3.10.0-327.13.1.el7.x86_64 root=UUID=8a9d38ed-14e7-462a-be6c-e385d6b1906d ro console=tty0 console=ttyS0,115200n8 crashkernel=auto console=ttyS0,115200 LANG=en_US.UTF-8 selinux=0

Update your system via “yum -y update”. Set your timezone (in my case US/Central): sudo ln -s /usr/share/zoneinfo/US/Central /etc/localtime. You may need to delete /etc/localtime first.
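
In one shot (adjust the zone to yours):

yum -y update
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/US/Central /etc/localtime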

OpenStack Controller: Configure NTP Server

Add your jumpbox server to the /etc/ntp.conf file as follows (just showing an extract of the file):


[root@ocs01 ~]# yum -y install ntp
Loaded plugins: fastestmirror
#
# some boring lines
# more boring lines
#
Installed:
ntp.x86_64 0:4.2.6p5-22.el7.centos.1

Dependency Installed:
autogen-libopts.x86_64 0:5.18-5.el7 ntpdate.x86_64 0:4.2.6p5-22.el7.centos.1

Complete!
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server jumpbox.nuage.lab iburst
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

Synchronize the time services as follows:


[root@ocs01 ~]# service ntpd stop
Redirecting to /bin/systemctl stop ntpd.service
[root@ocs01 ~]# ntpdate -u jumpbox.nuage.lab
16 May 19:49:30 ntpdate[11914]: adjust time server 192.168.101.3 offset 0.018515 sec
[root@ocs01 ~]# service ntpd start
Redirecting to /bin/systemctl start ntpd.service
[root@ocs01 ~]# ntpstat
synchronised to NTP server (107.161.29.207) at stratum 3
time correct to within 7972 ms
polling server every 64 s

OpenStack Controller: pre-tasks to packstack installation

Install PackStack by running “yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm” and then “yum install -y openstack-packstack”.
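
That is:

yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm
yum install -y openstack-packstack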

I’ve created a snapshot of this server to reuse later.

Now use “ssh-keygen” to generate your key pair on the controller:


[root@ocs01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
f8:d1:79:50:3d:4d:e6:2c:6c:13:e4:86:65:21:0e:c4 root@ocs01.novalocal
The key's randomart image is:
+--[ RSA 2048]----+
| oo oo*+o|
| E+ Bo=.|
| . o B.o|
| . . o o o |
| . S o . |
| . . . |
| . |
| |
| |
+-----------------+
[root@ocs01 ~]# cd .ssh/
[root@ocs01 .ssh]# cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDL/k1URNcPeTG3NZJENPloueh/orIiDuzFRfMbgFuUVJrVVoAWHjAsHYu8N3pzDtZAQSxGK7AcpuHjCveNY+kk1cVI5nzmvguHRce8OeGpXxp1AWAVDOia5ipTPEmdOSk+RP496v64bZR2uInZXMaS97SsXwqXULLLtTxWMjS5evdynNCmAsfmJ+Z2mNrE3l2rZcECJj4uKlNhWAhTN7BlO8soPvE+oX+yjfXqOsTZW+Rtz5tg7ZSDOftNR3HVa859dJxqu6FgOhEELOtP/B5T/NAoSMhpR9VcJmJEZA5iQtTSORIdylHnw+kkGg0ks1/j4TfCzFcm8RvcJ4YKSg6H root@ocs01.novalocal

Create a new key pair for your OpenStack controller by importing the public key, as shown below:

[Image: pinrojas - packstack import keypair openstack controller.png]
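
If you’d rather use the CLI than Horizon, something like this should work too (a sketch: run it on box01 with the nuage credentials, after copying the controller’s public key down via its floating IP; the key name matches what we use for nova01 later):

scp root@192.168.1.30:.ssh/id_rsa.pub /tmp/osc01_id_rsa.pub
openstack keypair create --public-key /tmp/osc01_id_rsa.pub osc01-kpair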

Compute Node

We’ll use our snapshot from the controller as follows (don’t forget to use the keystone_nuage credentials). Switch to box01 to create the server.


[root@box01 ~(keystone_nuage)]# openstack server create --image snap-osc01-160516-packstack-pkg --flavor nuage.nova.2 --key-name osc01-kpair --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.7 nova01
+--------------------------------------+-----------------------------------------------------------+
| Field                                | Value                                                     |
+--------------------------------------+-----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                    |
| OS-EXT-AZ:availability_zone          |                                                           |
| OS-EXT-STS:power_state               | 0                                                         |
| OS-EXT-STS:task_state                | scheduling                                                |
| OS-EXT-STS:vm_state                  | building                                                  |
| OS-SRV-USG:launched_at               | None                                                      |
| OS-SRV-USG:terminated_at             | None                                                      |
| accessIPv4                           |                                                           |
| accessIPv6                           |                                                           |
| addresses                            |                                                           |
| adminPass                            | GTbBa5A6JxzS                                              |
| config_drive                         |                                                           |
| created                              | 2016-05-23T17:23:55Z                                      |
| flavor                               | nuage.nova.2 (4e191554-25f9-4ce7-bb1b-80941d6ab839)       |
| hostId                               |                                                           |
| id                                   | c0f78a72-e304-4292-8620-c0581a9e6aa8                      |
| image                                | snap-160519-nova01 (958f0ed6-b186-4a72-a662-df78c3ab78b8) |
| key_name                             | osc01-kpair                                               |
| name                                 | nova01                                                    |
| os-extended-volumes:volumes_attached | []                                                        |
| progress                             | 0                                                         |
| project_id                           | 39e2f35bc10f4047b1ea77f79559807d                          |
| properties                           |                                                           |
| security_groups                      | [{u'name': u'default'}]                                   |
| status                               | BUILD                                                     |
| updated                              | 2016-05-23T17:23:56Z                                      |
| user_id                              | c91cd992e79149209c41416a55a661b1                          |
+--------------------------------------+-----------------------------------------------------------+

A few minutes later, go back to ocs01. Check the connection to the nova server from your OpenStack controller and add the controller’s public key to /root/.ssh/authorized_keys on nova01:


[root@ocs01 ~]# ssh centos@192.168.101.7
The authenticity of host '192.168.101.7 (192.168.101.7)' can't be established.
ECDSA key fingerprint is aa:31:dd:ab:9a:08:3d:7a:23:93:71:97:e1:fb:15:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.101.7' (ECDSA) to the list of known hosts.
Last login: Mon May 16 19:38:42 2016 from 192.168.1.66
[centos@nova01 ~]$
[centos@nova01 ~]$ sudo vi /root/.ssh/authorized_keys
#
# add public OCS's public key
#
[centos@nova01 ~]$ exit
logout
Connection to 192.168.101.7 closed.
[root@ocs01 ~]# ssh 192.168.101.7
Last login: Tue May 17 18:12:23 2016
[root@nova01 ~]#

IMPORTANT: Add this public key to /root/.ssh/authorized_keys on the ocs01 server as well.
Sync NTP after you get clean access to nova01 as the root user:


[root@nova01 ~]# ntpdate -u jumpbox.nuage.lab
17 May 18:17:38 ntpdate[9205]: adjust time server 192.168.101.3 offset 0.018297 sec
[root@nova01 ~]# service ntpd start
Redirecting to /bin/systemctl start ntpd.service
[root@nova01 ~]# ntpstat
synchronised to NTP server (192.168.101.3) at stratum 4
time correct to within 8139 ms
polling server every 64 s

PackStack Installation: Using answer file to install both servers

Now install PackStack from the controller (ocs01), pointing the compute role at nova01 in the answer file. First, create the answer file:


[root@ocs01 ~]# packstack --gen-answer-file=/root/answer.txt
[root@ocs01 ~]# vi answer.txt

Change the following parameters:


CONFIG_CONTROLLER_HOST=192.168.101.6
CONFIG_COMPUTE_HOSTS=192.168.101.7
CONFIG_NETWORK_HOSTS=192.168.101.6
CONFIG_PROVISION_DEMO=n
CONFIG_CINDER_INSTALL=n
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_NAGIOS_INSTALL=n
CONFIG_NTP_SERVERS=192.168.101.3
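
If you’d rather script those edits than open vi, a sed sketch along these lines does the same (values as above):

sed -i \
  -e 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.168.101.6/' \
  -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.101.7/' \
  -e 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.168.101.6/' \
  -e 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' \
  -e 's/^CONFIG_CINDER_INSTALL=.*/CONFIG_CINDER_INSTALL=n/' \
  -e 's/^CONFIG_SWIFT_INSTALL=.*/CONFIG_SWIFT_INSTALL=n/' \
  -e 's/^CONFIG_CEILOMETER_INSTALL=.*/CONFIG_CEILOMETER_INSTALL=n/' \
  -e 's/^CONFIG_NAGIOS_INSTALL=.*/CONFIG_NAGIOS_INSTALL=n/' \
  -e 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=192.168.101.3/' \
  /root/answer.txt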

Now, execute “packstack --answer-file=/root/answer.txt”:


[root@ocs01 ~]# packstack --answer-file=/root/answer.txt
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20160517-184422-KxwSmh/openstack-setup.log

Installing:
Clean Up [ DONE ]
Discovering ip protocol version [ DONE ]
Setting up ssh keys [ DONE ]
Preparing servers [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries [ DONE ]
Installing time synchronization via NTP [ DONE ]
Setting up CACERT [ DONE ]
Adding AMQP manifest entries [ DONE ]
Adding MariaDB manifest entries [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries [ DONE ]
Adding Glance Keystone manifest entries [ DONE ]
Adding Glance manifest entries [ DONE ]
Adding Nova API manifest entries [ DONE ]
Adding Nova Keystone manifest entries [ DONE ]
Adding Nova Cert manifest entries [ DONE ]
Adding Nova Conductor manifest entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Gathering ssh host keys for Nova migration [ DONE ]
Adding Nova Compute manifest entries [ DONE ]
Adding Nova Scheduler manifest entries [ DONE ]
Adding Nova VNC Proxy manifest entries [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries [ DONE ]
Adding Neutron VPNaaS Agent manifest entries [ DONE ]
Adding Neutron FWaaS Agent manifest entries [ DONE ]
Adding Neutron LBaaS Agent manifest entries [ DONE ]
Adding Neutron API manifest entries [ DONE ]
Adding Neutron Keystone manifest entries [ DONE ]
Adding Neutron L3 manifest entries [ DONE ]
Adding Neutron L2 Agent manifest entries [ DONE ]
Adding Neutron DHCP Agent manifest entries [ DONE ]
Adding Neutron Metering Agent manifest entries [ DONE ]
Adding Neutron Metadata Agent manifest entries [ DONE ]
Adding Neutron SR-IOV Switch Agent manifest entries [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Adding OpenStack Client manifest entries [ DONE ]
Adding Horizon manifest entries [ DONE ]
Adding post install manifest entries [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.101.6_prescript.pp
Applying 192.168.101.7_prescript.pp
192.168.101.7_prescript.pp: [ DONE ]
192.168.101.6_prescript.pp: [ DONE ]
Applying 192.168.101.6_chrony.pp
Applying 192.168.101.7_chrony.pp
192.168.101.7_chrony.pp: [ DONE ]
192.168.101.6_chrony.pp: [ DONE ]
Applying 192.168.101.6_amqp.pp
Applying 192.168.101.6_mariadb.pp
192.168.101.6_amqp.pp: [ DONE ]
192.168.101.6_mariadb.pp: [ DONE ]
Applying 192.168.101.6_keystone.pp
Applying 192.168.101.6_glance.pp
192.168.101.6_keystone.pp: [ DONE ]
192.168.101.6_glance.pp: [ DONE ]
Applying 192.168.101.6_api_nova.pp
192.168.101.6_api_nova.pp: [ DONE ]
Applying 192.168.101.6_nova.pp
Applying 192.168.101.7_nova.pp
192.168.101.6_nova.pp: [ DONE ]
192.168.101.7_nova.pp: [ DONE ]
Applying 192.168.101.6_neutron.pp
Applying 192.168.101.7_neutron.pp
192.168.101.7_neutron.pp: [ DONE ]
192.168.101.6_neutron.pp: [ DONE ]
Applying 192.168.101.6_osclient.pp
Applying 192.168.101.6_horizon.pp
192.168.101.6_osclient.pp: [ DONE ]
192.168.101.6_horizon.pp: [ DONE ]
Applying 192.168.101.6_postscript.pp
Applying 192.168.101.7_postscript.pp
192.168.101.7_postscript.pp: [ DONE ]
192.168.101.6_postscript.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]

**** Installation completed successfully ******

Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.101.6. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.101.6/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* Because of the kernel update the host 192.168.101.6 requires reboot.
* The installation log file is available at: /var/tmp/packstack/20160517-184422-KxwSmh/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20160517-184422-KxwSmh/manifests

Reboot the controller

OpenStack Controller: Installing Nuage Plugin for Liberty

First, stop and disable the Neutron agents on the controller/network node osc01.


[root@osc01 ~]# systemctl stop neutron-dhcp-agent.service
[root@osc01 ~]# systemctl stop neutron-l3-agent.service
[root@osc01 ~]# systemctl stop neutron-metadata-agent.service
[root@osc01 ~]# systemctl stop neutron-openvswitch-agent.service
[root@osc01 ~]# systemctl stop neutron-netns-cleanup.service
[root@osc01 ~]# systemctl stop neutron-ovs-cleanup.service
[root@osc01 ~]# systemctl disable neutron-dhcp-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service.
[root@osc01 ~]# systemctl disable neutron-l3-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service.
[root@osc01 ~]# systemctl disable neutron-metadata-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service.
[root@osc01 ~]# systemctl disable neutron-openvswitch-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service.
[root@osc01 ~]# systemctl disable neutron-netns-cleanup.service
[root@osc01 ~]# systemctl disable neutron-ovs-cleanup.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-ovs-cleanup.service.

Get the RPM files for OpenStack Liberty (el7) from Nokia’s support site (send me a comment if you need help with that).


[root@osc01 ~]# ls
answer.txt nuage-openstack-heat-5.0.0.1818-nuage.noarch.rpm
keystonerc_admin nuage-openstack-horizon-8.0.0.1818-nuage.noarch.rpm
nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm nuage-openstack-neutron-7.0.0.1818-nuage.noarch.rpm
nuagenetlib-2015.1.3.2.6_228-nuage.noarch.rpm nuage-openstack-neutronclient-3.1.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuagenetlib-2015.1.3.2.6_228-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-neutron-7.0.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-neutronclient-3.1.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-horizon-8.0.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-heat-5.0.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm

Configuring Nuage plugin

Now create the Nuage plugin configuration file as follows:

[root@osc01 ~]# mkdir /etc/neutron/plugins/nuage/
[root@osc01 ~]# vi /etc/neutron/plugins/nuage/nuage_plugin.ini
[root@osc01 ~]# cat /etc/neutron/plugins/nuage/nuage_plugin.ini
[RESTPROXY]
default_net_partition_name = OpenStack_Lab
auth_resource = /me
server = 192.168.101.4:8443
organization = csp
serverauth = csproot:csproot
serverssl = True
base_uri = /nuage/api/v3_2

Now, let’s modify /etc/nova/nova.conf. Change the following lines (they have to be changed on all compute and controller/network nodes):

use_forwarded_for = False
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret=NuageNetworksSharedSecret
ovs_bridge=alubr0
security_group_api=neutron

Configuring Neutron

Edit/add the following line to /etc/neutron/neutron.conf, and don’t forget to comment out “service_plugins = router”:

core_plugin = neutron.plugins.nuage.plugin.NuagePlugin
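
So the relevant extract of /etc/neutron/neutron.conf should end up looking like this:

[DEFAULT]
core_plugin = neutron.plugins.nuage.plugin.NuagePlugin
#service_plugins = router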

Required installation tasks in PackStack Controller

More changes: copy “nuage-openstack-upgrade-1818.tar.gz” to the PackStack controller.


[root@osc01 ~]# mkdir /tmp/nuage
[root@osc01 ~]# mkdir /tmp/nuage/upgrade
[root@osc01 ~]# cd /tmp/nuage/upgrade
[root@osc01 upgrade]# mv /root/nuage-openstack-upgrade-1818.tar.gz .
[root@osc01 upgrade]# tar -xzf nuage-openstack-upgrade-1818.tar.gz
[root@osc01 upgrade]# python set_and_audit_cms.py --neutron-config-file /etc/neutron/neutron.conf --plugin-config-file /etc/neutron/plugins/nuage/nuage_plugin.ini
WARNING:oslo_config.cfg:Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
INFO:VPort_Sync:Starting Vports Sync.
WARNING:neutron.notifiers.nova:Authenticating to nova using nova_admin_* options is deprecated. This should be done using an auth plugin, like password
WARNING:oslo_config.cfg:Option "nova_region_name" from group "DEFAULT" is deprecated. Use option "region_name" from group "nova".
INFO:VPort_Sync:Vports Sync on VSD is now complete.
INFO:generate_cms_id:created CMS 031b436e-3181-4705-8285-e74816d9f5b9
WARNING:neutron.notifiers.nova:Authenticating to nova using nova_admin_* options is deprecated. This should be done using an auth plugin, like password
WARNING:oslo_config.cfg:Option "nova_region_name" from group "DEFAULT" is deprecated. Use option "region_name" from group "nova".
INFO:Upgrade_Logger:Audit begins.
INFO:Upgrade_Logger:Checking subnets.
INFO:Upgrade_Logger:Subnets done.
INFO:Upgrade_Logger:Checking domains.
INFO:Upgrade_Logger:Domains done.
INFO:Upgrade_Logger:Checking static routes.
INFO:Upgrade_Logger:Static routes done.
INFO:Upgrade_Logger:Checking acl entry templates.
INFO:Upgrade_Logger:Acl entry templates done.
INFO:Upgrade_Logger:Checking policy groups.
INFO:Upgrade_Logger:Policy groups done.
INFO:Upgrade_Logger:Checking floating ips.
INFO:Upgrade_Logger:Floating ips done.
INFO:Upgrade_Logger:Checking vports.
INFO:Upgrade_Logger:Vports done.
INFO:Upgrade_Logger:Checking shared network resources.
INFO:Upgrade_Logger:Shared network resources done.
INFO:Upgrade_Logger:Checking application domains.
INFO:Upgrade_Logger:Application domains done.
INFO:Upgrade_Logger:File "audit.yaml" created.
INFO:Upgrade_Logger:Audit Finished.
INFO:Upgrade_Logger:Processing CMS ID discrepancies in the audit file...
INFO:Upgrade_Logger:Processed all the CMS ID discrepancies in the audit file
[root@osc01 upgrade]# systemctl restart neutron-server
[root@osc01 upgrade]# cd
[root@osc01 ~]# . keystonerc_admin
[root@osc01 ~(keystone_admin)]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[root@osc01 ~]# systemctl restart neutron-server
[root@osc01 ~]# rm -rf /etc/neutron/plugin.ini
[root@osc01 ~]# ln -s /etc/neutron/plugins/nuage/nuage_plugin.ini /etc/neutron/plugin.ini
[root@osc01 ~]# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/nuage/nuage_plugin.ini upgrade head
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
#
# Some boring lines
# More boring lines
#
INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
OK
[root@osc01 ~]# systemctl restart openstack-nova-api
[root@osc01 ~]# systemctl restart openstack-nova-scheduler
[root@osc01 ~]# systemctl restart openstack-nova-conductor
[root@osc01 ~]# systemctl restart neutron-server

Now, let’s check whether we have access to Horizon (don’t log in yet!).

[Image: pinrojas - nuage lab packstack home horizon access.png]

Compute Node: Configuring nova.conf and installing VRS

It’s time to make some changes to our compute node nova01.


[root@nova01 ~]# rpm -Uvh http://mirror.pnl.gov/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
Retrieving http://mirror.pnl.gov/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
warning: /var/tmp/rpm-tmp.VNThyF: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:epel-release-7-6 ################################# [100%]
[root@nova01 ~]# vi /etc/yum.repos.d/CentOS-Base.repo
[root@nova01 ~]# yum -y update
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
centosplus | 3.4 kB 00:00:00
epel/x86_64/metalink | 12 kB 00:00:00
epel | 4.3 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/4): centosplus/7/x86_64/primary_db | 2.3 MB 00:00:00
(2/4): epel/x86_64/updateinfo | 555 kB 00:00:01
(3/4): epel/x86_64/group_gz | 170 kB 00:00:01
(4/4): epel/x86_64/primary_db | 4.1 MB 00:00:00
Loading mirror speeds from cached hostfile
* base: mirror.rackspace.com
* centosplus: pubmirrors.dal.corespace.com
* epel: mirror.compevo.com
* extras: mirror.team-cymru.org
* updates: mirror.steadfast.net
Resolving Dependencies
#
# Boring lines
# more boring lines
#

Installed:
python2-boto.noarch 0:2.39.0-1.el7 python2-crypto.x86_64 0:2.6.1-9.el7 python2-ecdsa.noarch 0:0.13-4.el7 python2-msgpack.x86_64 0:0.4.7-3.el7

Dependency Installed:
libtomcrypt.x86_64 0:1.17-23.el7 libtommath.x86_64 0:0.42.0-4.el7 postgresql-libs.x86_64 0:9.2.15-1.el7_2 python2-rsa.noarch 0:3.4.1-1.el7

Updated:
hiera.noarch 1:1.3.4-5.el7 libndp.x86_64 0:1.2-6.el7_2 postfix.x86_64 2:2.10.1-6.0.1.el7.centos
python-contextlib2.noarch 0:0.5.1-1.el7 python-mimeparse.noarch 0:0.1.4-2.el7 python-perf.x86_64 0:3.10.0-327.18.2.el7.centos.plus
python-psutil.x86_64 0:2.2.1-1.el7 python-pygments.noarch 0:2.0.2-4.el7 python-qpid.noarch 0:0.32-13.el7
python-qpid-common.noarch 0:0.32-13.el7 python-requests.noarch 0:2.9.1-2.el7 python-unicodecsv.noarch 0:0.14.1-4.el7
python-unittest2.noarch 0:1.1.0-4.el7 python-urllib3.noarch 0:1.13.1-3.el7 python2-eventlet.noarch 0:0.18.4-1.el7

Replaced:
python-boto.noarch 0:2.25.0-2.el7.centos python-crypto.x86_64 0:2.6.1-1.el7.centos python-ecdsa.noarch 0:0.11-3.el7.centos
python-msgpack.x86_64 0:0.4.6-3.el7

Complete!


Nova/KVM: solving dependencies

Solve some dependencies in KVM.


[root@nova01 ~]# yum install libvirt -y
#
# Boring lines
#
Installed:
libvirt.x86_64 0:1.2.17-13.el7_2.4

Dependency Installed:
libvirt-daemon-config-network.x86_64 0:1.2.17-13.el7_2.4 libvirt-daemon-driver-lxc.x86_64 0:1.2.17-13.el7_2.4

Complete!
[root@nova01 ~]# yum install python-twisted-core -y
#
# Boring lines
#

Installed:
python-twisted.x86_64 0:15.4.0-3.el7

Dependency Installed:
libXft.x86_64 0:2.3.2-2.el7 libXrender.x86_64 0:0.9.8-2.1.el7 pyserial.noarch 0:2.6-5.el7
python-characteristic.noarch 0:14.3.0-4.el7 python-service-identity.noarch 0:14.0.0-4.el7 python-zope-interface.x86_64 0:4.0.5-4.el7
python2-pyasn1-modules.noarch 0:0.1.9-6.el7.1 tcl.x86_64 1:8.5.13-8.el7 tix.x86_64 1:8.4.3-12.el7
tk.x86_64 1:8.5.13-6.el7 tkinter.x86_64 0:2.7.5-34.el7

Complete!
[root@nova01 ~]# yum install perl-JSON -y
#
# Boring lines
#

Installed:
perl-JSON.noarch 0:2.59-2.el7

Complete!
[root@nova01 ~]# yum install vconfig -y
#
# Boring lines
#

Installed:
vconfig.x86_64 0:1.9-16.el7

Complete!

Compute Node: Configure the metadata agent

Delete the current nuage-metadata-agent file and create a new one with the following information:

[root@nova01 ~]# vi /etc/nova/nova.conf
[root@nova01 ~]# rm -rf /etc/default/nuage-metadata-agent
[root@nova01 ~]# vi /etc/default/nuage-metadata-agent
[centos@nova01 ~]$ cat /etc/default/nuage-metadata-agent
METADATA_PORT=9697
NOVA_METADATA_IP=127.0.0.1
NOVA_METADATA_PORT=8775
METADATA_PROXY_SHARED_SECRET="NuageNetworksSharedSecret"
NOVA_CLIENT_VERSION=2
NOVA_OS_USERNAME=nova
NOVA_OS_PASSWORD=2b12874fcf3c43ff
NOVA_OS_TENANT_NAME=services
NOVA_OS_AUTH_URL=http://192.168.101.6:5000/v2.0
NOVA_REGION_NAME=RegionOne
NUAGE_METADATA_AGENT_START_WITH_OVS=true
NOVA_API_ENDPOINT_TYPE=publicURL


Installing Nuage VRS

We’ll install the VRS on the nova node, replacing the OVS instance.


[root@nova01 ~]# cd /tmp/nuage/
[root@nova01 nuage]# mv /root/nuage-openvswitch-* .
[root@nova01 nuage]# yum -y remove openvswitch
#
# Some boring lines
# More boring lines
#
Removed:
openvswitch.x86_64 0:2.4.0-1.el7

Dependency Removed:
openstack-neutron-openvswitch.noarch 1:7.0.4-1.el7

Complete!
[root@nova01 nuage]# yum -y remove python-openvswitch
#
# Some boring lines
# More boring lines
#

Removed:
python-openvswitch.noarch 0:2.4.0-1.el7

Complete!
[root@nova01 nuage]# yum -y install nuage-openvswitch-3.2.6-232.el7.x86_64.rpm
#
# Some boring lines
# More boring lines
#
Installed:
nuage-openvswitch.x86_64 0:3.2.6-232.el7

Dependency Installed:
perl-Sys-Syslog.x86_64 0:0.33-3.el7 protobuf-c.x86_64 0:1.0.2-2.el7 python-setproctitle.x86_64 0:1.1.6-5.el7

Complete!
[root@nova01 nuage]# vi /etc/default/openvswitch
[root@nova01 nuage]# cat /etc/default/openvswitch | grep 101.5
ACTIVE_CONTROLLER=192.168.101.5
[root@nova01 nuage]# mv /root/nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm .
[root@nova01 nuage]# rpm -i nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm
[root@nova01 nuage]# vi /etc/nova/nova.conf

Configure nova.conf

We’ll modify /etc/nova/nova.conf as follows:

ovs_bridge=alubr0

Restart the services as follows:


[root@nova01 nuage]# systemctl restart openstack-nova-compute
[root@nova01 nuage]# systemctl restart openvswitch

Checking service status and connections.


[root@nova01 ~]# systemctl status openvswitch
● openvswitch.service - Nuage Openvswitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
   Active: active (exited) since Mon 2016-05-23 12:26:19 CDT; 9h ago
 Main PID: 508 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/openvswitch.service
           ├─ 601 ovsdb-server: monitoring pid 602 (healthy)
           ├─ 602 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:warn --remote=punix:/var/run/openvswitch/db.sock --private-key=db:O...
           ├─ 694 ovs-vswitchd: monitoring pid 695 (healthy)
           ├─ 695 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:warn --mlockall --no-chdir --log-file=/var/log/openvswitch...
           ├─1069 nuage-SysMon: monitoring pid 1070 healthy
           ├─1070 /usr/bin/python /sbin/nuage-SysMon -vany:console:emer -vany:syslog:err -vany:file:info --no-chdir --log-file=/var/log/openvswitch/nuage-SysM...
           ├─1121 monitor(vm-monitor): vm-monitor: monitoring pid 1122 (healthy)
           ├─1122 vm-monitor --no-chdir --log-file=/var/log/openvswitch/vm-monitor.log --pidfile=/var/run/openvswitch/vm-monitor.pid --detach --monitor
           ├─1144 nuage-rpc: monitoring pid 1145 (healthy)
           └─1145 nuage-rpc unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --tcp 7406 --ssl 7407 --no-chdir --log-file=/var/log/ope...

May 23 12:26:13 nova01.novalocal openvswitch.init[508]: iptables: No chain/target/match by that name.
May 23 12:26:13 nova01.novalocal openvswitch.init[508]: iptables: No chain/target/match by that name.
May 23 12:26:13 nova01.novalocal openvswitch.init[508]: iptables: Bad rule (does a matching rule exist in that chain?).
May 23 12:26:16 nova01.novalocal openvswitch.init[508]: Starting nuage system monitor:Starting nuage-SysMon[  OK  ]
May 23 12:26:19 nova01.novalocal openvswitch.init[508]: Starting vm-monitor:Starting vm-monitor:Starting vm-monitor[  OK  ]
May 23 12:26:19 nova01.novalocal openvswitch.init[508]: Starting nuage rpc server:Starting nuage-rpc[  OK  ]
May 23 12:26:19 nova01.novalocal systemd[1]: Started Nuage Openvswitch.
May 23 12:26:20 nova01.novalocal ovs-vsctl[1154]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait --timeout=5 set Open_vSwitch . other_config:acl-...-port=514
May 23 12:26:22 nova01.novalocal ovs-vsctl[1185]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait --timeout=5 set Open_vSwitch . other_config:stat...1.4:39090
May 23 12:29:24 nova01 systemd[1]: [/usr/lib/systemd/system/openvswitch.service:10] Unknown lvalue 'ExecRestart' in section 'Service'
Hint: Some lines were ellipsized, use -l to show in full.
[root@nova01 ~]# ovs-vsctl show
2df2c5a3-5f96-4186-bf54-4836d73e3b39
    Bridge "alubr0"
        Controller "ctrl1"
            target: "tcp:192.168.101.5:6633"
            role: master
            is_connected: true
        Port "svc-rl-tap1"
            Interface "svc-rl-tap1"
        Port "svc-rl-tap2"
            Interface "svc-rl-tap2"
        Port svc-pat-tap
            Interface svc-pat-tap
                type: internal
        Port "alubr0"
            Interface "alubr0"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a86506"
            Interface "vxlan-c0a86506"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.101.7", out_key=flow, remote_ip="192.168.101.6"}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "3.2.6-232-nuage"

The next image shows what you will get on the console:

[Image: pinrojas - nuage packstack console monitoring openstack demo lab.png]

Last step: add the csproot user to the CMS Group, as shown below:

[Image: pinrojas - packstack openstack plugin nuage neutron.png]

And we are done with our lab. Thanks very much for reading!
see you.

Sentinel.la | LATAM startups play global | OpenStack Monitoring & Healthcheck

Sentinel.la is not just another startup resting on the open source community mattress. It’s living proof that LATAM talent is playing globally. Founders Memo and Paco have been close to the OpenStack Foundation over the last four years, and they have HUGE experience operating it.

Sentinel.la launched its beta last week. You can identify the value of the solution just by looking at their posts and videos. I will point out some of the key bullets of their offering below.

One step ahead of users

OpenStack is an amazing platform. However, most users are just getting started with this bunch of projects.

Liberty offers a really nice dashboard experience. Horizon took what it uses to display orchestration in progress (Heat) and reproduced it in the “network topology” view. Just let the next picture get you hooked.

[Image: networktopology sentinel.la openstack healthcheck monitoring nova neutron liberty]

source: https://www.openstack.org/assets/software/liberty/networktopology.png

Most tenant users don’t have enough insight into OpenStack to understand simple issues, like the server flavor not being big enough to fit the selected image (check the next image).

[Image: sentinel.la openstack healthcheck monitoring nova neutron]

source: sentinel.la video https://vimeo.com/154817235

[Image: sentinel.la openstack healthcheck monitoring nova neutron 02.png]

source: sentinel.la video https://vimeo.com/154817235

Those events can be managed quickly, even before users start calling the help desk to complain. Understanding what is happening with your users is priceless. Installing OpenStack is complex, even for experts; imagine how hard operating it can be. Getting eyes on what is happening behind the scenes will make your service more responsive and agile. If you don’t believe me, just check this talk from the last OpenStack summit in Tokyo: No Valid Host Was Found: Translating Tracebacks by Rackspace (James Denton, Wade Lewis, Sam Yaple). Then tell me how many people on your team can trace back an issue like this.

https://www.openstack.org/summit/tokyo-2015/videos/presentation/no-valid-host-was-found-translating-tracebacks

These guys state, “Deciphering a traceback is a bit like reading the matrix”. A good way to size up the challenge of dealing with it. The example shows the error message “No valid host was found. There are not enough hosts available”. What does that mean? It means you need to dig deep into different OpenStack logs to find the root cause of the problem. Sentinel.la’s server view brings tools to check log messages from different services in a single panel.

Check Paco’s post: Mastering the Openstack logs

Geographically distributed monitoring

I love this dashboard! You know what is happening at all your sites at a glance. Features I can point out:

  • Push notifications that keep you aware of any error on any server.
  • The last alerts received from all your servers.
  • Arrange your servers into different clouds and OpenStack versions and see how they are displayed on a map.
  • Global counters for your availability and services.

Also, there is a server view that helps you dig deep into logs and performance. I love the way services are classified into the different OpenStack projects. Features to point out about this view:

  • The last-alerts panel and service status are really useful for digging into a server’s issues.
  • Search log events by keyword and correlate them.
  • A snapshot of the services running on every server.

[Image: sentinel.la-openstack-monitoring-healthcheck-service-nova-neutron-heat-cinder-ceilometer-monasca]

Unlimited scalability and agility

On-demand resources will help your business scale forever. Sentinel.la doesn’t own any infrastructure; 100% of their business runs in the cloud, and I am not talking only about compute. I am sure it took some time to study the state of the art in databases, platforms and agents, and to move everything onto a PaaS strategy.

It starts with an agent based on the open source project tourbillon, uses InfluxDB to handle metric storage, and leverages scalability and cost through PaaS offerings for MQSeries and MongoDB.

Check the following posts at their site:

  • JSON Web Tokens for dummies: if you offer a service in the cloud, you have to give users confidence that the service is safe. I think JWT is used perfectly in this case to keep data, like your identity, secure.
  • OpenStack services on a Time-Series database: a post that describes why it is important to choose a specific database technology to manage time-series data. I would have thought NoSQL could be a nice fit, but Paco shows why NoSQL isn’t a good fit for this case and why InfluxDB makes the perfect match. Something nice that’s been borrowed from the OpenStack Monasca project architecture.

Final Words

Launching the beta version last week was a great step. There are still many things to improve: make the agent identify OpenStack services by itself, and extend capabilities to other projects like Keystone and Cinder besides Nova and Neutron. No doubt you will get those done. You are on the right path to succeed, for sure.

Paco/Memo, congrats on the courage to do something different. I hope more people in LATAM take this as an example and dream BIG!

See you!

The heart of Nova: Libvirt | CHANGE as a Service

“The Libvirt driver for Nova is one of the most comprehensive implementations across all supported hypervisors. Do you know how Nova can spawn, suspend or terminate an instance? well, those are API calls to get interactions between Nova, Libvirt and KVM”

Thanks Memo for such a nice post about NOVA… Could you post something about Cinder ;)?

via The heart of Nova: Libvirt | CHANGE as a Service.

There’s Real Magic behind OpenStack Neutron

It’s really easy to create a new post on any blog – the most complex part is thinking about the content – you can do it with a couple of clicks on your laptop, from your phone/tablet, or just by reblogging interesting posts from your mates. The same thing happens with OpenStack: you can create server instances, routers, subnets, firewalls, VPNs and load balancers with just a couple of clicks. Well, first of all you need to plan and figure out why you need to create them; it’s not about creating things just because you can. Depending on the application this virtual infrastructure will support, you need to define what your virtual IT architecture will be.

In my previous note about Neutron I mentioned how Neutron brings Layer 3 capabilities and how easy they are to create and manage. Now I will describe the magic behind it; I’m sure you will love it as I do. This whole Software Defined Networking idea is fully applied in OpenStack Neutron and its most-used third-party plug-in: Open vSwitch.

First of all, I want to mention that there’s a lot of information out there from vendors and community contributors.

Let’s start by describing what happens inside a Nova compute host with Neutron and Open vSwitch. You have the instances, where all the traffic starts or ends. Each instance has virtual ethernet interfaces attached, called tap devices, on which the instance defines logical interfaces such as “eth0”. These tap devices are directly connected to a Linux bridge where OpenStack security groups are enforced through iptables. This Linux bridge acts like a transparent firewall in front of each instance – every instance gets its own bridge, so its security-group rules are applied individually. A Linux bridge is a set of processes, including iptables, running inside the host’s Linux network stack – a network stack includes the minimal network resources needed to connect at layer 2 or 3 to any other system.

[Image: neutron openstack openvswitch router dnsmasq gre kionetworks plug-in tap devices instances nova compute software defined networks sdn cloud layer 1]

Open vSwitch bridges are not compatible with this Linux bridge, so the architecture defines a pair of veth interfaces called qvb and qvo: qvb plugs into the Linux bridge (named qbr) and qvo plugs into the integration bridge. The integration bridge is part of the Open vSwitch solution, and it is responsible for tagging and un-tagging all network traffic coming in and out of the instances with VLAN IDs. It is also responsible for connecting instances that belong to the same tenant, or for leading traffic out to the tunnel bridge and then to external network components.

[Image: neutron openstack openvswitch router dnsmasq gre kionetworks plug-in tap devices instances nova compute software defined networks sdn cloud layer 2]
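
You can see this plumbing on any compute node running this hybrid setup. A sketch, assuming a hypothetical port whose ID starts with 10c38a65 (the qbr/qvb/qvo names are derived from the first characters of the Neutron port ID):

brctl show                     # qbr10c38a65-95 bridges tap10c38a65-95 and qvb10c38a65-95
ovs-vsctl show                 # qvo10c38a65-95 hangs off br-int with a local VLAN tag
iptables -S | grep 10c38a65    # the security-group rules enforced on the Linux bridge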

The tunnel bridge (br-tun) is connected to the integration bridge (br-int) through patch interfaces. OpenFlow rules on br-tun define how traffic is placed into the tunnels; the tunnels are built with GRE, and there is an association between every VLAN ID and a GRE tunnel ID.

[Image: neutron openstack openvswitch router dnsmasq gre kionetworks plug-in tap devices instances nova compute software defined networks sdn cloud layer 3]
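
That VLAN-to-tunnel mapping lives in the OpenFlow tables of br-tun. A quick way to peek at it (a sketch for a stock OVS/GRE deployment):

# rules rewriting local VLAN IDs into GRE tunnel IDs and back
ovs-ofctl dump-flows br-tun | grep -E 'set_tunnel|mod_vlan_vid'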

Top of Rack and Core Cloud Switching

All hosts (controller, compute, network, etc.) are connected to a core cloud network composed of high-performance switches. The good thing about this architecture, where all the data is controlled through virtual resources, is that these switches only need to provide Layer 2 transport. You don’t need more VLANs than the main ones that split the traffic between internal and external frames; probably no more than a couple of them.

You don’t need advanced licenses for routing, GRE, security and so on: all these components are provided by Neutron and OpenVSwitch. There is no lock-in with any vendor, at least until you start using vendor-specific plug-ins to handle some control features in hardware or dedicated appliances. You are therefore free to use products from different well-known vendors that can offer a powerful data backplane, transporting a big amount of data with the lowest latency.

[Figure: Neutron/OpenVSwitch architecture, layer 4]

Neutron Network Host

The Neutron host, or hosts, are independent servers where all the more advanced network services and components run (routers, DHCP, external firewalls, etc.). All the traffic heading to the external network (or the Internet) through the GRE tunnels from the Nova servers ends at the br-tun defined on the Neutron hosts. br-tun holds the OpenFlow rules that define the match between VLAN IDs and GRE tunnel IDs, and all the conversion between VLAN and GRE happens at this bridge. This bridge is connected to the integration bridge (br-int) inside the Neutron host through patch interfaces.

[Figure: Neutron/OpenVSwitch architecture, layer 5]
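On the network node you can verify the link between both bridges by listing their patch ports. patch-tun and patch-int are the default names used by the OVS agent; yours may differ if you changed the agent configuration.

# The patch ports that join br-int and br-tun
ovs-vsctl list-ports br-int | grep patch
ovs-vsctl list-ports br-tun | grep patch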

As on the compute node, br-int tags and un-tags the traffic, but in this case to and from the router and dnsmasq resources. Routers and dnsmasq instances are independent namespaces with their own Linux network stacks. Namespaces make it possible to handle overlapping private IPs between routers that belong to different tenants or projects. Routers route traffic between a tenant’s subnets and to and from the external world; they also use Linux iptables to filter traffic and to make floating IPs work, through NAT (Network Address Translation), toward the chosen instances. dnsmasq also runs on its own Linux network stack, with DHCP and DNS processes serving its associated tenant exclusively.

[Figure: Neutron/OpenVSwitch architecture, layer 6]
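Those router and dnsmasq contexts are easy to spot on the Neutron host. A quick, illustrative look; replace <router-uuid> with a real UUID from the first command. Floating IPs show up as DNAT/SNAT entries in the NAT table.

# List the router and DHCP namespaces Neutron created
ip netns | grep -E 'qrouter|qdhcp'

# Look inside one router namespace and dump its NAT rules
ip netns exec qrouter-<router-uuid> ip addr
ip netns exec qrouter-<router-uuid> iptables -t nat -S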

Important tips for designing Neutron+OVS architectures

Neutron hosts contain the routers and the basic and advanced network and security services, so almost all the data goes through these hosts. Size these servers with as much memory as you can; memory is the resource these processes consume the most. Also consider having at least two servers with an identical configuration, so you can keep one of them as a stand-by in case of any issue on the main one.

It’s also really important to use high-performance switches, since a big amount of data is transported between nodes. Use at least a two-level design for the cloud switching, and try to keep instances of the same tenant in the same cabinet; that way almost all of their traffic stays inside the top-of-rack switch.

Well, this is it, see you next time!

Neutron makes it easier to get Layer 3 at your service

What a nice surprise to notice on Twitter today that SAP is now officially part of the OpenStack Foundation. I am really glad that giants like them want to be part of this beautiful tech trend. Also, we have just started a new blog called CHANGEaaS.com to share our experiences and research with the community; we have a lot of talent on our team, and they are very excited to communicate and eat the world. You can see it directly at this new site.

In my previous note I explained how OpenVSwitch and OpenStack Neutron bring cool opportunities to expand your network services portfolio, and how they could be a nice first approach to getting SDN (Software Defined Networking) into your operation.

Now, let’s talk a bit more deeply about Neutron and its Layer 3 contribution. I strongly suggest stopping by the OpenStack docs and reading the “Layer 3 Networking in Neutron” section for more details.

Neutron offers you the option to create virtual routers yourself, directly from your tenant. If you try this cool component you will find out how simple it is to create, delete and manage them from the OpenStack Dashboard. Also, the new version of OpenStack Horizon (the dashboard) shows you an image of your network topology (you can see a picture in Cloud Actual‘s post) that helps you visualize how your “virtual” network is transformed by every change.
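If you prefer the command line over the dashboard, the same thing takes three commands. The names and IDs below are illustrative, of course.

# Create a router, plug a tenant subnet into it and set its external gateway
neutron router-create demo-router
neutron router-interface-add demo-router <subnet-id>
neutron router-gateway-set demo-router <external-net-id>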

Sounds simple, but the magic behind is really a great achievement.

There are “network nodes” that are part of something we privately call “control servers”. The network nodes are responsible for orchestrating all the network operations inside every tenant through the Neutron/OVS plug-ins. Every Nova compute node (the nodes that contain the users’ virtual servers and the hypervisors that support them) handles its own virtual switches and the associated local IP addressing contexts, but all the traffic between tenants’ subnets and/or public networks is processed and managed by these network nodes. The network nodes have a direct physical connection to the public network and define their own Linux namespaces in order to provide a separate forwarding context per tenant and avoid overlapping IP addresses. Data forwarding is supported by the Linux IP stack, and iptables is used to bring NAT (Network Address Translation) functionality at your service.

You will then have as many Linux namespaces on these network nodes as there are virtual routers in the overall cloud platform. That puts an important responsibility on your shoulders: you must size these servers according to the overall capacity you expect to grant to your users. Just think about this: if you have 10,000 tenants, you could be managing more than 10,000 Linux IP stacks on your Neutron servers.
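A quick and dirty way to check how loaded a network node already is; purely illustrative, it simply counts the router namespaces, one Linux IP stack each.

# How many virtual routers is this host carrying?
ip netns | grep -c qrouter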

There is a high chance that some questions start crossing your mind right now:

How do you expect to bring this capacity?

How do you expect to create a scale-out solution for your Neutron nodes?

How should I size these servers if I also plan to provide virtual load balancers and other security and network products inside every tenant?

Well, there isn’t just one way to solve it. Let me see if I can cover some of these options in my next post, based on our experience of course 🙂

See you next time!

#OpenStack speeds the adoption of Loosely Coupled Scale-Out storage

In my previous notes I presented the definition of the Loosely Coupled Scale-Out storage architecture and mentioned some products and vendors as references. #Ceph, the open source project, with its block storage presentation, is one example of this type of solution. I also showed how #Ceph integrates with #OpenStack as a block and object storage system, and I have to say that I personally prefer to work with #Swift instead of #Ceph for object storage, to simplify the management and the architecture. However, we got cool storage performance results directly from a virtual machine on a #Ceph block storage system; the block device under test was provisioned through #OpenStack Cinder. You can see these results in my note called “#Ceph: The most affordable flash storage to speed your Cloud (our labs results with #OpenStack)”. #Ceph helps you lower your storage costs to the level where you can work exclusively with SSD drives.

[Cartoon: Ceph, ViPR/ScaleIO and loosely coupled scale-out storage]
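To give you an idea of how simple the #Ceph side of that integration looks, here is a minimal sketch of a Cinder backend section and the matching volume type. The pool, user and backend names are assumptions from a typical setup, not a recipe.

# /etc/cinder/cinder.conf (backend section)
[rbd-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
volume_backend_name = rbd-ssd

# Map a volume type to that backend and create a 10 GB volume on SSDs
cinder type-create ssd
cinder type-key ssd set volume_backend_name=rbd-ssd
cinder create --volume-type ssd --display-name test-vol 10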

On the other hand, there are other storage solutions of this type that can give you the features I’ve just described with #Ceph: solutions like EMC ViPR and, specifically, its block storage offering, which came through #ScaleIO, a recent EMC acquisition.

As I wrote in my note called “#EMCWorld14: ScaleIO or ViPR Block – A brutal paradigm change to innovation“, #ScaleIO helps you lower your storage cost (for an additional software license cost based on your raw capacity), bringing the flexibility to work with commodity hardware. You can choose to work exclusively with SSD disks, as we did with #Ceph, to get valuable performance to speed up your VMWare cluster’s I/O. You also get awesome capabilities to scale out your capacity, and the ability to upgrade your hardware without any disruption at all to your cloud service. ScaleIO’s dashboard is remarkable: it’s simple and helps you understand the health of your storage system at a glance.

Something is missing from this story: does #ScaleIO really work with #OpenStack #Cinder?

The answer is apparently “yes”. Some days ago, one of my mates showed me a note called “Preview of #ScaleIO and #OpenStack demo at EMC World” by Nikhil Sharma (@NikhilS2000). There you can find info about this important demo that, like you, I missed at EMC World. Nikhil shows perfectly how the integration between these two technologies works with Nova. I am really excited to start running tests on our own cloud stack based on #OpenStack #IceHouse.

As you can see in the Cinder Support Matrix, it’s a long list of vendors and products, but ScaleIO is missing, like other solutions of this type: Nutanix, Pivot3… However, you can find products like VMWare, Nexenta, SolidFire…

Anyway, we are sure storage products of this kind will join Cinder’s supported list in the near future (unofficially, we know they are already working on the Cinder integration as soon as possible). Finally, you don’t have to be a genius to figure out that #OpenStack’s abstraction will speed their adoption.

See you around!