#ansible and #python pexpect for unconventional ssh interfaces

I’ve been eating up Ansible over the last few months. Love the way it works. However, when you need to deal with appliances that use unconventional SSH interfaces, it can be quite a challenge. So I started to create my own Python tool for those cases.

The victim is the VSC (Nuage Virtualized Services Controller), the secret weapon of Nuage SDN. It’s a pretty nice piece of software encapsulating the famous Alcatel-Lucent SROS. Of course I tried the Ansible SROS module first; however, I couldn’t make it work, maybe because VSC changes some part of the interaction.

pexpect is the answer to your prayers

Straight to the point: I’m using the pexpect Python library. In this case, the script below creates a TLS profile in the VSC for NSGs (the gateways used at branches in an SD-WAN solution), so the OpenFlow communications between VSC and NSG will be encrypted. Check the following file.

import pexpect
import sys
import argparse

# Positional arguments keep the script re-usable from any Ansible task
parser = argparse.ArgumentParser()
parser.add_argument('vsc_host', type=str)
parser.add_argument('vsc_ip', type=str)
parser.add_argument('passwd', type=str)
parser.add_argument('cert_name', type=str)
args = parser.parse_args()

try:
  child = pexpect.spawn('ssh admin@%s' % args.vsc_ip)
#  child.logfile = sys.stdout  # uncomment to debug
  child.expect('password:')
  child.sendline(args.passwd)
  # The SROS prompt contains the hostname, so it doubles as our expect marker
  child.expect(args.vsc_host)
  child.sendline('configure system security tls-profile "ex-tls-profile" create')
  child.expect(args.vsc_host)
  child.sendline(r'own-key "cf1:\%s-Key.pem"' % args.cert_name)
  child.expect(args.vsc_host)
  child.sendline(r'own-certificate "cf1:\%s.pem"' % args.cert_name)
  child.expect(args.vsc_host)
  child.sendline(r'ca-certificate "cf1:\%s-CA.pem"' % args.cert_name)
  child.expect(args.vsc_host)
  child.sendline('no shutdown')
  child.expect(args.vsc_host)
  child.sendline('exit all')
  child.expect(args.vsc_host)
  child.sendline('configure vswitch-controller open-flow tls-profile "ex-tls-profile"')
  child.expect(args.vsc_host)
  child.sendline('configure vswitch-controller xmpp tls-profile "ex-tls-profile"')
  child.expect(args.vsc_host)
  # The Ansible task below checks stdout for this exact string
  print("success")

except Exception as e:
  print("Exception is:\n %s \n" % e)

As you can see, I’m using arguments so I can re-use this script as many times as I want.
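By the way, all those sendline/expect pairs beg for a little helper. A minimal sketch of how the script could be compacted (the run_cmd function is my own illustration, not part of the file above):

def run_cmd(child, prompt, cmd):
  # send one CLI command and wait for the prompt to come back
  child.sendline(cmd)
  child.expect(prompt)
  return child.before  # whatever the command printed before the prompt

# e.g. part of the TLS profile section becomes a loop
for cmd in ['configure system security tls-profile "ex-tls-profile" create',
            r'own-key "cf1:\%s-Key.pem"' % args.cert_name,
            'no shutdown',
            'exit all']:
  run_cmd(child, args.vsc_host, cmd)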

Call it from your role tasks

I’m storing this Python file in the role’s “files” folder so I can call it later from a task. The way I manage this is shown below.

- name: "Create TLS profile at VSC"
  local_action: command python {{playbook_dir}}/roles/util-deploy/files/nuage_tls_profile.py {{ vsc1_fqdn }} {{ vsc1_host }} {{ vsc1_passwd }} {{ vsc1_cert_name }}
  register: output

- name: Verification Result Failure Status
  fail: msg={{ output.stdout }}
  when: output.stdout != "success"

If everything goes well, then the output will be “success”. Otherwise, the task will fail and you will get the output.
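An alternative to matching stdout is making the script exit non-zero on failure, so the command task fails on its own. A sketch of the except block in that style (my variation, not what the role above does):

except Exception as e:
  print("Exception is:\n %s \n" % e)
  sys.exit(1)  # a non-zero exit code makes the Ansible command task fail by itself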

What do you need to run this? You can get the required libraries and applications using my Dockerfile; just check out my last post.

See ya!

Useful Nuage VRS openvswitch and VSC commands

We are moving out! An updated post can be found here

https://www.cloud-native-everything.com/useful-nuage-vrs-and-vsc-commands/

Sometimes it’s important to check out what is actually happening behind the scenes on the VRSs or VSCs. Here are some useful commands that will help you out.

Useful Nuage VRS (OpenVSwitch) Commands

Check out Nuage VRS ports and VXLAN tunnel information

[root@compute02 ~]# ovs-dpctl show
system@ovs-system:
	lookups: hit:48 missed:37 lost:0
	flows: 0
	masks: hit:112 total:0 hit/pkt:1.32
	port 0: ovs-system (internal)
	port 1: svc-pat-tap (internal)
	port 2: alubr0 (internal)
	port 3: svc-rl-tap2
	port 4: svc-rl-tap1
	port 5: eth-lxc-14546
	port 6: vxlan_sys_4789 (vxlan: df_default=false, ttl=0)
	port 7: eth-lxc-14666
	port 8: eth-lxc-14775
	port 9: eth-lxc-14897
	port 10: eth-lxc-15034
	port 11: eth-lxc-15164
	port 12: eth-lxc-15281
	port 13: eth-lxc-15406

To get more details, try this one:

[root@compute02 ~]# ovs-appctl dpif/show
system@ovs-system: hit:48 missed:37
	alubr0:
		alubr0 65534/2: (internal)
		eltep-b2e041 6/6: (vxlan: df_default=false, in_key=11722817, local_ip=10.0.0.12)
		eth-lxc-14546 4/5: (system)
		eth-lxc-14666 9/7: (system)
		eth-lxc-14775 10/8: (system)
		eth-lxc-14897 11/9: (system)
		eth-lxc-15034 12/10: (system)
		eth-lxc-15164 13/11: (system)
		eth-lxc-15281 14/12: (system)
		eth-lxc-15406 15/13: (system)
		svc-pat-tap 1/1: (internal)
		svc-rl-tap1 3/4: (system)
		svc-rl-tap2 2/3: (system)
		ta00b8329de 7/6: (vxlan: df_default=false, key=8595934, remote_ip=10.0.0.11)
		ta00bb2e041 8/6: (vxlan: df_default=false, key=11722817, remote_ip=10.0.0.11)
		vltep-8329de 5/6: (vxlan: df_default=false, in_key=8595934, local_ip=10.0.0.12)

Find out the VRF service ID for our alubr0 bridge in the VRS

[root@compute02 ~]# ovs-appctl vrf/list alubr0
vrfs: 20029

Get the route table for this VRF service ID. We’ll use the ID 20029 that we just got, for instance.

[root@compute02 ~]# ovs-appctl vrf/route-table 20029
-----------------+----------+--------+------------+------------+-------------------------------
          Routes | Duration | Cookie |  Pkt Count |  Pkt Bytes |  EVPN-Id or Local/remote Out port
-----------------+----------+--------+------------+------------+-------------------------------
    10.37.120.66 |    1357s |    0x6 |          0 |          0 | 20030
    10.37.129.42 |    1369s |    0x6 |          0 |          0 | 20030
   10.37.159.184 |    1353s |    0x6 |          0 |          0 | 20030
   10.37.133.221 |    1362s |    0x6 |          0 |          0 | 20030
    10.37.76.239 |    1366s |    0x6 |          0 |          0 | 20030
   10.37.100.116 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.82.136 |    1376s |    0x6 |          0 |          0 | 20030
    10.37.186.42 |    1373s |    0x6 |          0 |          0 | 20030
     10.37.62.63 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.234.60 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.36.162 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.0.0/16 |     119s |    0x6 |          0 |          0 | 20030
       0.0.0.0/0 |    1380s |    0x6 |          0 |          0 |
-----------------+----------+--------+------------+------------+-------------------------------
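If you want to script that two-step lookup (grab the VRF IDs, then dump each route table), here is a minimal Python sketch using only the standard library; it assumes you run it on the VRS node itself and that ovs-appctl is in the PATH:

import re
import subprocess

# ask the VRS for the VRF service ids behind alubr0, e.g. "vrfs: 20029"
out = subprocess.check_output(['ovs-appctl', 'vrf/list', 'alubr0'])
vrf_ids = re.findall(r'\d+', out.decode())

# dump the route table of every VRF we found
for vrf_id in vrf_ids:
    print(subprocess.check_output(['ovs-appctl', 'vrf/route-table', vrf_id]).decode())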

Get the MAC table for the associated EVPN service ID.

[root@compute02 ~]# ovs-appctl evpn/mac-table 20030

evpn_id: 20030	gen_id: 0x6	vni_id: 0xb2e041	ref_cnt: 10	ltep_port: 6
mode: L3_MODE	arp_proxy: DISABLED	aging_period: 300
pat_enabled: DISABLED	default_action: drop	dhcp_enabled: ENABLED	dhcp_relay: DISABLED	dhcp_pool: DISABLED
resiliency: DISABLED 	l2_encryption:DISABLED
subnet: 10.37.0.0	mask: 255.255.0.0	gw: 10.37.0.1	gw_mac: 68:54:ed:00:00:01

dhcp servers: mac_count: 11	cookie: 455606272

------------------+------+----------+----------+--------+------------+------------+-------------
              Mac | Port | Duration |   Expiry | Cookie |  Pkt Count |  Pkt Bytes |  VM Port name
------------------+------+----------+----------+--------+------------+------------+-------------
ff:ff:ff:ff:ff:ff |    - |    1486s |       0s |    0x6 |          0 |          0 | flood
02:ff:1e:3f:70:09 |    8 |    1486s |       0s |    0x6 |          0 |          0 | Vxlan: key=11722817 remote_ip=10.0.0.11
f2:87:87:aa:3b:a4 |   14 |    1463s |       0s |    0x6 |          0 |          0 | eth-lxc-15281 (grave_euler)
7e:ff:1e:10:18:55 |    8 |    1486s |       0s |    0x6 |          0 |          0 | Vxlan: key=11722817 remote_ip=10.0.0.11
66:87:87:06:c5:4d |   12 |    1472s |       0s |    0x6 |          0 |          0 | eth-lxc-15034 (desperate_archimedes)
4a:87:87:1f:75:d6 |    4 |    1486s |       0s |    0x6 |          0 |          0 | eth-lxc-14546 (suspicious_mirzakhani)
6e:87:87:ca:8d:40 |   11 |    1475s |       0s |    0x6 |          0 |          0 | eth-lxc-14897 (gloomy_liskov)
a2:87:87:37:d6:68 |   13 |    1468s |       0s |    0x6 |          0 |          0 | eth-lxc-15164 (modest_keller)
36:87:87:71:5b:9e |    9 |    1482s |       0s |    0x6 |          0 |          0 | eth-lxc-14666 (hopeful_nobel)
c2:ff:1e:82:1c:e9 |    8 |    1486s |       0s |    0x6 |          0 |          0 | Vxlan: key=11722817 remote_ip=10.0.0.11
16:87:87:5e:42:dc |   10 |    1478s |       0s |    0x6 |          0 |          0 | eth-lxc-14775 (backstabbing_thompson)
76:87:87:1e:c2:6b |   15 |    1459s |       0s |    0x6 |          0 |          0 | eth-lxc-15406 (fervent_goldstine)
------------------+------+----------+----------+--------+------------+------------+-------------
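This table is easy to post-process too. A minimal sketch that maps each MAC to its VM port name, assuming the column layout shown above:

import re
import subprocess

out = subprocess.check_output(['ovs-appctl', 'evpn/mac-table', '20030']).decode()
macs = {}
for line in out.splitlines():
    cols = [c.strip() for c in line.split('|')]
    # data rows start with a unicast MAC; skip headers, separators and the flood entry
    if re.match(r'^([0-9a-f]{2}:){5}[0-9a-f]{2}$', cols[0]) and cols[0] != 'ff:ff:ff:ff:ff:ff':
        macs[cols[0]] = cols[-1]  # the last column is the VM port name
print(macs)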

Useful Nuage VSC Commands

Check out which vswitches (VRSs) are being managed by the VSC

*A:vsc01# show vswitch-controller vswitches detail 

===============================================================================
Virtual Switch Table
===============================================================================
vswitch-instance           : va-10.0.0.4/1
Personality                : VRS_G
Uptime                     : 2d 12:00:26    VM Count                 : 0
Num of hostIf              : 0              Num of bridgeIf          : 1
Num of multiVMs            : 0
OF version                 : 1              OF nego. version         : 1
OF Conn. port              : 6633
Cntrl. role                : primary        Cntrl. Conn. type        : none
Cntrl. crl lookup          : false
Cntrl. Conn. mode          : secure
Cntrl. Conn. state         : ready
Cntrl. client verification : false
Cntrl. client IP verification : false
Peer IP for resiliency     : -
Received Role from VRS_G   : master         Elected Role for VRS_G   : master
Gateway Hold Time(sec)     : 3              Gateway Echo Time(sec)   : 1
Gateway Topic              : nuage_gateway_id_10.0.0.4
Gateway Retry/Audit Time   : 915
XMPP error code            : 0
XMPP error text            : (Not Specified)
JSON Conn. State           : Up
JSON Sess. Uptime          : 2d 11:59:57
Static Peer                : False
XMPP Tls Profile           : n/a
OF Tls Profile             : n/a
Ovsdb Tls Profile          : n/a
Ovsdb Conn Type            : none

vswitch-instance           : va-10.0.0.11/1
Personality                : VRS
Uptime                     : 1d 03:17:15    VM Count                 : 17
Num of hostIf              : 0              Num of bridgeIf          : 0
Num of multiVMs            : 0
OF version                 : 1              OF nego. version         : 1
OF Conn. port              : 6633
Cntrl. role                : primary        Cntrl. Conn. type        : none
Cntrl. crl lookup          : false
Cntrl. Conn. mode          : secure
Cntrl. Conn. state         : ready
Cntrl. client verification : false
Cntrl. client IP verification : false
Hold Time(sec)             : 15             Echo Time(sec)           : 5
JSON Conn. State           : Up
JSON Sess. Uptime          : 1d 03:17:13
Static Peer                : False
XMPP Tls Profile           : n/a
OF Tls Profile             : n/a
Ovsdb Tls Profile          : n/a
Ovsdb Conn Type            : none

vswitch-instance           : va-10.0.0.12/1
Personality                : VRS
Uptime                     : 2d 12:00:44    VM Count                 : 8
Num of hostIf              : 0              Num of bridgeIf          : 0
Num of multiVMs            : 0
OF version                 : 1              OF nego. version         : 1
OF Conn. port              : 6633
Cntrl. role                : primary        Cntrl. Conn. type        : none
Cntrl. crl lookup          : false
Cntrl. Conn. mode          : secure
Cntrl. Conn. state         : ready
Cntrl. client verification : false
Cntrl. client IP verification : false
Hold Time(sec)             : 15             Echo Time(sec)           : 5
JSON Conn. State           : Up
JSON Sess. Uptime          : 2d 12:00:28
Static Peer                : False
XMPP Tls Profile           : n/a
OF Tls Profile             : n/a
Ovsdb Tls Profile          : n/a
Ovsdb Conn Type            : none

-------------------------------------------------------------------------------
No. virtual switches: 3
===============================================================================
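By the way, the VSC only speaks this SROS-style CLI, so the same pexpect trick from the previous section works fine to collect these show commands remotely. A minimal sketch (the IP, password and prompt are placeholders for your own values):

import pexpect

child = pexpect.spawn('ssh admin@10.0.0.2')  # VSC management IP (placeholder)
child.expect('password:')
child.sendline('secret')                     # placeholder password
child.expect('vsc01')                        # the prompt contains the hostname
child.sendline('environment no more')        # disable paging so we get the full output
child.expect('vsc01')
child.sendline('show vswitch-controller vswitches detail')
child.expect('vsc01')
print(child.before)                          # raw output of the show command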

Check out which virtual instances are being managed for a specific enterprise like “ACME Corp”. In this case we are showing container (Docker) names.

*A:vsc01# show vswitch-controller virtual-machines enterprise "ACME Corp"  

===============================================================================
Virtual Machine Table
===============================================================================
vswitch-instance        VM Name          UUID
-------------------------------------------------------------------------------
va-10.0.0.11/1          tender_meitner   35c5fcc9-11f1-b809-19ae-6d0167702e2c
va-10.0.0.11/1          hungry_mclean    497c0ee3-0696-fe66-35f7-cad3ecebc72b
va-10.0.0.11/1          boring_ardinghe* 57e8e917-974f-2b63-ba35-72d7e3752f01
va-10.0.0.11/1          prickly_northcu* 65e57580-a51b-58fc-b783-1744b2dc477d
va-10.0.0.11/1          berserk_visvesv* 732e4e99-d689-146d-fd51-681d8b80946a
va-10.0.0.11/1          trusting_keller  76850489-155d-b55d-dda6-dc8f2d729956
va-10.0.0.11/1          sick_leakey      77e628c9-3939-f620-51d3-35cacbd90f5c
va-10.0.0.11/1          sleepy_roentgen  80142e10-dc97-071a-6207-28e33b2a2166
va-10.0.0.11/1          modest_chandras* 8646cf04-9d24-f317-ebcf-7901c2b4590f
va-10.0.0.11/1          gigantic_wescoff 8d14224d-eaed-a50c-182c-8a5154e96516
va-10.0.0.11/1          prickly_mahavira 91233ace-7512-fa0b-f674-da68ba71c470
va-10.0.0.11/1          jovial_franklin  9258a63d-05f0-bf04-babe-ee107b97e961
va-10.0.0.11/1          lonely_keller    93446d20-3343-4c22-e6d4-e68f91d15818
va-10.0.0.11/1          admiring_murdock b325e587-5e6c-1c3d-b8a4-0bff402e6745
va-10.0.0.11/1          insane_kare      c0bca010-43c5-5078-c23d-fbec4ee97361
va-10.0.0.11/1          silly_feynman    ed4b8fe3-a4b2-2b3a-10d2-c4b5868dd939
va-10.0.0.11/1          jovial_blackwell f3899140-a0d4-921e-f89a-00c5e0cc6f0a
va-10.0.0.12/1          modest_keller    357a8ad9-bd16-c93e-2bd4-c02c69fc0b07
va-10.0.0.12/1          grave_euler      3a248af3-89b8-0f1f-1626-5f271682a746
va-10.0.0.12/1          suspicious_mirz* 3ae74fc1-4ac8-9b66-4ee8-4e1178f68b5c
va-10.0.0.12/1          desperate_archi* 52a98dcd-6628-690c-bb6b-ec5734d5ce77
va-10.0.0.12/1          backstabbing_th* aa46363a-2726-b10c-5a9a-95c6a2509752
va-10.0.0.12/1          gloomy_liskov    abd4d701-ec2f-8238-9a79-3e863638c203
va-10.0.0.12/1          fervent_goldsti* b6378095-5092-3643-33c0-8ae40da4f073
va-10.0.0.12/1          hopeful_nobel    fb43ddbd-81ee-6d6f-a87a-5dbe0bbb1774
-------------------------------------------------------------------------------
No. of virtual machines: 25
===============================================================================

We can get even more details about the instances: MAC address, IP address, VPRN and EVPN IDs.

*A:vsc01#  show vswitch-controller vports type vm enterprise "ACME Corp"            

===============================================================================
Virtual Port Table
===============================================================================
VP Name                    VM Name                    VPRN    EVPN    Multicast
  VP IP Address              MacAddress                               Channel
                                                                      Map
-------------------------------------------------------------------------------
va-10.0.0.11/1/26          tender_meitner             20024   20026   Disabled
  10.10.10.35/24             1a:ff:1e:9b:b7:03
va-10.0.0.11/1/25          hungry_mclean              20024   20026   Disabled
  10.10.10.137/24            62:ff:1e:bc:3b:34
va-10.0.0.11/1/16          boring_ardinghelli         20024   20025   Disabled
  10.37.39.216/16            f6:ff:1e:55:a8:43
va-10.0.0.11/1/29          prickly_northcutt          20029   20030   Disabled
  10.37.62.63/16             c2:ff:1e:82:1c:e9
va-10.0.0.11/1/24          berserk_visvesvaraya       20024   20025   Disabled
  10.37.116.223/16           be:ff:1e:0a:09:a0
va-10.0.0.11/1/20          trusting_keller            20024   20025   Disabled
  10.37.168.238/16           8e:ff:1e:62:d8:09
va-10.0.0.11/1/23          sick_leakey                20024   20025   Disabled
  10.37.165.46/16            3a:ff:1e:2a:7a:79
va-10.0.0.11/1/21          sleepy_roentgen            20024   20025   Disabled
  10.37.123.69/16            86:ff:1e:7a:d6:6e
va-10.0.0.11/1/18          modest_chandrasekhar       20024   20025   Disabled
  10.37.119.92/16            f2:ff:1e:4a:1f:63
va-10.0.0.11/1/15          gigantic_wescoff           20024   20025   Disabled
  10.37.83.53/16             82:ff:1e:6a:a0:66
va-10.0.0.11/1/27          prickly_mahavira           20024   20026   Disabled
  10.10.10.6/24              2e:ff:1e:15:01:02
va-10.0.0.11/1/22          jovial_franklin            20024   20025   Disabled
  10.37.134.38/16            7a:ff:1e:a5:e1:2a
va-10.0.0.11/1/30          lonely_keller              20029   20030   Disabled
  10.37.100.116/16           7e:ff:1e:10:18:55
va-10.0.0.11/1/28          admiring_murdock           20024   20026   Disabled
  10.10.10.105/24            ee:ff:1e:06:3c:cc
va-10.0.0.11/1/17          insane_kare                20024   20025   Disabled
  10.37.202.88/16            16:ff:1e:0b:d1:c7
va-10.0.0.11/1/19          silly_feynman              20024   20025   Disabled
  10.37.105.245/16           1a:ff:1e:d3:b0:e7
va-10.0.0.11/1/31          jovial_blackwell           20029   20030   Disabled
  10.37.36.162/16            02:ff:1e:3f:70:09
va-10.0.0.12/1/6           modest_keller              20029   20030   Disabled
  10.37.133.221/16           a2:87:87:37:d6:68
va-10.0.0.12/1/7           grave_euler                20029   20030   Disabled
  10.37.120.66/16            f2:87:87:aa:3b:a4
va-10.0.0.12/1/1           suspicious_mirzakhani      20029   20030   Disabled
  10.37.234.60/16            4a:87:87:1f:75:d6
va-10.0.0.12/1/5           desperate_archimedes       20029   20030   Disabled
  10.37.76.239/16            66:87:87:06:c5:4d
va-10.0.0.12/1/3           backstabbing_thompson      20029   20030   Disabled
  10.37.186.42/16            16:87:87:5e:42:dc
va-10.0.0.12/1/4           gloomy_liskov              20029   20030   Disabled
  10.37.129.42/16            6e:87:87:ca:8d:40
va-10.0.0.12/1/8           fervent_goldstine          20029   20030   Disabled
  10.37.159.184/16           76:87:87:1e:c2:6b
va-10.0.0.12/1/2           hopeful_nobel              20029   20030   Disabled
  10.37.82.136/16            36:87:87:71:5b:9e
-------------------------------------------------------------------------------
No. of virtual ports: 25
===============================================================================

Show the service details; the ID can be a VPRN or EVPN service ID. For a VPRN you can see the VRF target configuration, which is important to interconnect with VRFs on the PE. We’ll take VPRN 20024 as an example.

*A:vsc01# show service id 20024 base

===============================================================================
Service Basic Information
===============================================================================
Service Id        : 20024               Vpn Id            : 0
Service Type      : VPRN
Name              : (Not Specified)
Description       : (Not Specified)
Customer Id       : 10006
Last Status Change: 06/27/2016 17:55:03
Last Mgmt Change  : 06/27/2016 17:55:03
Admin State       : Up                  Oper State        : Up

Route Dist.       : 65534:13842         VPRN Type         : regular
AS Number         : None                Router Id         : 255.0.0.0
ECMP              : Enabled             ECMP Max Routes   : 1
Max IPv4 Routes   : No Limit            Auto Bind         : GRE
Max IPv6 Routes   : No Limit
Ignore NH Metric  : Disabled
Hash Label        : Disabled
Vrf Target        : target:65534:499
Vrf Import        : None
Vrf Export        : None
MVPN Vrf Target   : None
MVPN Vrf Import   : None
MVPN Vrf Export   : None
Car. Sup C-VPN    : Disabled
Label mode        : vrf
BGP VPN Backup    : Disabled            

SAP Count         : 0                   SDP Bind Count    : 0

-------------------------------------------------------------------------------
Service Access & Destination Points
-------------------------------------------------------------------------------
Identifier                               Type         AdmMTU  OprMTU  Adm  Opr
-------------------------------------------------------------------------------
vpls:backhaul-evpn20028                  rvpls        0       1500    Up   Up
vpls:evpn20025                           rvpls        0       1500    Up   Up
vpls:evpn20026                           rvpls        0       1500    Up   Up
===============================================================================
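If you need that VRF target programmatically (say, to template the matching VRF on the PE side), a quick regex over the captured output does it. A minimal sketch with a sample line from the show above:

import re

show_output = 'Vrf Target        : target:65534:499'  # sample line from the show above
match = re.search(r'Vrf Target\s*:\s*(\S+)', show_output)
if match:
    print(match.group(1))  # -> target:65534:499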

Now check EVPN 20025, for instance.

*A:vsc01# show service id 20025 base

===============================================================================
Service Basic Information
===============================================================================
Service Id        : 20025               Vpn Id            : 0
Service Type      : VPLS
Name              : evpn20025
Description       : (Not Specified)
Customer Id       : 10006
Last Status Change: 06/27/2016 17:55:03
Last Mgmt Change  : 06/27/2016 17:55:03
Admin State       : Up                  Oper State        : Up
MTU               : 1514                Def. Mesh VC Id   : 20025
SAP Count         : 10                  SDP Bind Count    : 1
Snd Flush on Fail : Disabled            Host Conn Verify  : Disabled
Propagate MacFlush: Disabled            Per Svc Hashing   : Disabled
Allow IP Intf Bind: Enabled
InterConnect vlan*: 0                   InterConnect vlan*: 0
Def. Gateway IP   : None
Def. Gateway MAC  : None
Temp Flood Time   : Disabled            Temp Flood        : Inactive
Temp Flood Chg Cnt: 0
BGP-EVPN Encap    : vxlan
Vxlan Tenant ID   : 368626              

-------------------------------------------------------------------------------
Service Access & Destination Points
-------------------------------------------------------------------------------
Identifier                               Type         AdmMTU  OprMTU  Adm  Opr
-------------------------------------------------------------------------------
sap:va-10.0.0.11/1/15:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/16:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/17:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/18:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/19:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/20:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/21:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/22:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/23:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/24:0                  q-tag        1578    1578    Up   Up
sdp:17406:368626 SB(10.0.0.11)           EvpnPmsi     0       0       Up   Down
===============================================================================
* indicates that the corresponding row element may have been truncated.

Check out the routes in the vswitch-controller for a specific enterprise and domain (e.g. “dom2”)

*A:vsc01# show vswitch-controller ip-routes enterprise "ACME Corp" domain "dom2"     

===============================================================================
VPRN Routes
===============================================================================

-------------------------------------------------------------------------------
Legend:
Flag : P = Primary, S = Secondary, V = Virtual Next Hop on NAT, I = IPSEC
-------------------------------------------------------------------------------
Flag Prefix/                       NextHop                       Owner
     Prefix Length
-------------------------------------------------------------------------------
---  10.37.0.0/16                  10.0.0.11                     NVC_LOCAL
---  10.37.36.162/32               va-10.0.0.11/1/31             NVC
---  10.37.62.63/32                va-10.0.0.11/1/29             NVC
---  10.37.76.239/32               va-10.0.0.12/1/5              NVC
---  10.37.82.136/32               va-10.0.0.12/1/2              NVC
---  10.37.100.116/32              va-10.0.0.11/1/30             NVC
---  10.37.120.66/32               va-10.0.0.12/1/7              NVC
---  10.37.129.42/32               va-10.0.0.12/1/4              NVC
---  10.37.133.221/32              va-10.0.0.12/1/6              NVC
---  10.37.159.184/32              va-10.0.0.12/1/8              NVC
---  10.37.186.42/32               va-10.0.0.12/1/3              NVC
---  10.37.234.60/32               va-10.0.0.12/1/1              NVC
-------------------------------------------------------------------------------
No. of IP routes: 12
-------------------------------------------------------------------------------
===============================================================================

Find out the associated ingress ACLs for a specific port

*A:vsc01# show vswitch-controller vports vport-name va-10.0.0.11/1/29 acl ingress-security    

===============================================================================
Virtual Port Ingress ACL Table
===============================================================================
Pri  ACL UUID                                E-Type         Action
     SrcIP               DestIP              S-Prt[Min-Max] D-Prt[Min-Max]
     Proto               Match DSCP          FC override    Flow log/Stats log
     Reflexive ACL       Redirect Tgt                       PGID/Type
     PolicyGroupTag
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
VP Name:  va-10.0.0.11/1/29    VLAN ID:  0
-------------------------------------------------------------------------------
0    00000000-0000-0000-0000-000000000000    0x800          Drop
     0.0.0.0/0           0.0.0.0/0           0-0            0-0
     0                   0xff                n/a            False/False
     False               -                                  -
     0:0
1    00000000-0000-0000-0000-000000000000    0x806          Fwd
     0.0.0.0/0           0.0.0.0/0           0-0            0-0
     0                   0xff                n/a            False/False
     False               -                                  -
     0:0
2    00000000-0000-0000-0000-000000000000    0x0            Drop
     0.0.0.0/0           0.0.0.0/0           0-0            0-0
     0                   0xff                n/a            False/False
     False               -                                  -
     0:0
-------------------------------------------------------------------------------
No. of ACL's: 3
-------------------------------------------------------------------------------
Total No. of Ingress ACL's: 3
===============================================================================

See you!

Building a Nuage/PackStack Demo at home – Part 2

First of all, just a reminder that I’m using a fanless server with 8 cores / 32 GB RAM at home (details at Diego’s post). You will also need to read my previous post: BUILDING A NUAGE/OPENSTACK DEMO AT HOME: GIVING PACKSTACK A CHANCE – CENTOS7.

Also, I want to say thanks to Scott Irwin for his scripts and Remi Vichery for his prompt support with my VSC.

In this second part I will show you how to install Nuage VSP on PackStack. Most of the time these components are installed on plain KVM; however, I think you’ll have more fun doing it on OpenStack. It will also help us make this demo portable to any other OpenStack instance.

Next, you will see how I am configuring this lab:

pinrojas - nuage packstack lab diagram

These are the instances that I am planning to have when I finish this journey

pinrojas - nuage packstack lab table

Important Note: before uploading the VSC image, be sure to have read NUAGE VSC – MODIFY QCOW2 IMAGES WITH GUESTFISH

Check your lab settings before starting anything

We’ll check what we have so far. PackStack is already installed; we did that in the first part.

Checking the networks and subnets:


[root@box01 ~(keystone_admin)]# openstack network list
+--------------------------------------+------------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------------------+--------------------------------------+
| 9eec420a-eb76-4ebc-a814-3ce935b9bca2 | external_network | 407b139d-70b6-49c9-9056-e9211a41b7fb |
| 05235f6d-95fc-4455-a6a6-3d4077cab245 | nuage-lab | 60724bd0-8606-4c7a-bae1-7c31410dd456 |
+--------------------------------------+------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack network show 9eec420a-eb76-4ebc-a814-3ce935b9bca2
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| id | 9eec420a-eb76-4ebc-a814-3ce935b9bca2 |
| mtu | 0 |
| name | external_network |
| project_id | da64bceb671e4719b41de08c15e1eebe |
| provider:network_type | flat |
| provider:physical_network | extnet |
| provider:segmentation_id | None |
| router_type | External |
| shared | False |
| state | UP |
| status | ACTIVE |
| subnets | 407b139d-70b6-49c9-9056-e9211a41b7fb |
+---------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# neutron subnet-show 407b139d-70b6-49c9-9056-e9211a41b7fb
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.1.27", "end": "192.168.1.33"} |
| cidr | 192.168.1.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.1.254 |
| host_routes | |
| id | 407b139d-70b6-49c9-9056-e9211a41b7fb |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | public_subnet |
| network_id | 9eec420a-eb76-4ebc-a814-3ce935b9bca2 |
| subnetpool_id | |
| tenant_id | da64bceb671e4719b41de08c15e1eebe |
+-------------------+--------------------------------------------------+

Checking the router in the lab’s tenant.


[root@box01 ~(keystone_nuage)]# neutron router-list
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| b9d31b63-99c7-4d84-89e4-6c716210fb20 | nuage-router | {"network_id": "9eec420a-eb76-4ebc-a814-3ce935b9bca2", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "407b139d-70b6-49c9-9056-e9211a41b7fb", "ip_address": "192.168.1.27"}]} |
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@box01 ~(keystone_nuage)]# neutron subnet-list
+--------------------------------------+--------------+------------------+------------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+--------------+------------------+------------------------------------------------------+
| 60724bd0-8606-4c7a-bae1-7c31410dd456 | nuage-subnet | 192.168.101.0/24 | {"start": "192.168.101.2", "end": "192.168.101.254"} |
+--------------------------------------+--------------+------------------+------------------------------------------------------+

We’ll update the subnet to define a DHCP address pool to avoid any conflicts with our instances.


[root@box01 ~(keystone_nuage)]# neutron subnet-update --allocation-pool start=192.168.101.50,end=192.168.101.254 nuage-subnet
Updated subnet: nuage-subnet

Preparing our images and flavors

Create the flavors and upload the images to Glance for our jumpbox (local NTP/DNS server), VSD, VSC, and our nested PackStack (controller and Nova servers).


[root@box01 ~(keystone_admin)]# openstack flavor create --ram 1024 --disk 10 --vcpus 1 --public nuage.tiny
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 10 |
| id | a9559f30-3914-4227-8201-5fd7e1262b3d |
| name | nuage.tiny |
| os-flavor-access:is_public | True |
| ram | 1024 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 4096 --disk 10 --vcpus 4 --public nuage.vsc
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 10 |
| id | 6a17cd1c-ee29-4f29-a4c9-14852a1e0394 |
| name | nuage.vsc |
| os-flavor-access:is_public | True |
| ram | 4096 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+

[root@box01 ~(keystone_admin)]# openstack flavor create --ram 8192 --disk 108 --vcpus 4 --public nuage.vsd
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 108 |
| id | d4a3eda0-b2e2-4d86-b28a-357e8b94166c |
| name | nuage.vsd |
| os-flavor-access:is_public | True |
| ram | 8192 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 2048 --disk 20 --vcpus 2 --public nuage.osc
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 20 |
| id | ba637f8a-aff4-4e53-b758-d946c2242b6d |
| name | nuage.osc |
| os-flavor-access:is_public | True |
| ram | 2048 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 5120 --disk 50 --vcpus 4 --public nuage.nova
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 50 |
| id | 88c0cc7c-8aca-4374-aad1-c54c955ab754 |
| name | nuage.nova |
| os-flavor-access:is_public | True |
| ram | 5120 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+
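Five nearly identical flavor commands beg for a loop. If you have the openstacksdk Python package at hand, a rough sketch could look like this (the cloud name and the sdk usage are my assumptions; the CLI above is what I actually ran):

import openstack

# assumes a matching "packstack" entry in clouds.yaml (hypothetical name)
conn = openstack.connect(cloud='packstack')

flavors = [
    ('nuage.tiny', 1024, 1, 10),
    ('nuage.vsc',  4096, 4, 10),
    ('nuage.vsd',  8192, 4, 108),
    ('nuage.osc',  2048, 2, 20),
    ('nuage.nova', 5120, 4, 50),
]
for name, ram, vcpus, disk in flavors:
    conn.compute.create_flavor(name=name, ram=ram, vcpus=vcpus, disk=disk)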

Let’s install wget to download our CentOS7 image


[root@box01 ~(keystone_admin)]# yum -y install wget
Loaded plugins: fastestmirror
#
# some boring lines
# more boring lines
#
Installed:
wget.x86_64 0:1.14-10.el7_0.1

Complete!
[root@box01 ~(keystone_admin)]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
--2016-05-05 18:18:14-- http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
Resolving cloud.centos.org (cloud.centos.org)... 162.252.80.138
Connecting to cloud.centos.org (cloud.centos.org)|162.252.80.138|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 912654336 (870M)
Saving to: ‘CentOS-7-x86_64-GenericCloud.qcow2’

100%[================================================================================================================>] 912,654,336 5.66MB/s in 43s

2016-05-05 18:18:57 (20.3 MB/s) - ‘CentOS-7-x86_64-GenericCloud.qcow2’ saved [912654336/912654336]

Let’s create our jumpbox image:


[root@box01 ~(keystone_admin)]# openstack image create --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --public --container-format bare centos7-image
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 6008a645f61baffe0d19dfe992def8a6 |
| container_format | bare |
| created_at | 2016-05-05T23:19:33Z |
| disk_format | qcow2 |
| id | e9ee4c2a-006b-4d53-a158-47ec6bb6c422 |
| min_disk | 0 |
| min_ram | 0 |
| name | centos7-image |
| owner | da64bceb671e4719b41de08c15e1eebe |
| protected | False |
| size | 912654336 |
| status | active |
| tags | [] |
| updated_at | 2016-05-05T23:19:43Z |
| virtual_size | None |
| visibility | private |
+------------------+--------------------------------------+

Create your VSD and VSC images. I’ve got them on my laptop, so I have to copy them over via scp.


[root@box01 ~(keystone_nuage)]# ls *.qcow2
CentOS-7-x86_64-GenericCloud.qcow2 vsc_singledisk.qcow2 VSD-3.2.6_230.qcow2
[root@box01 ~(keystone_nuage)]# . keystonerc_admin
[root@box01 ~(keystone_admin)]# openstack image create --file VSD-3.2.6_230.qcow2 --disk-format qcow2 --public --container-format bare VSD32R6
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | a1419434721c53bf3c848896c48de7d5 |
| container_format | bare |
| created_at | 2016-05-06T13:14:31Z |
| disk_format | qcow2 |
| id | aff1535d-570b-4e19-98de-9c27cde94784 |
| min_disk | 0 |
| min_ram | 0 |
| name | VSD32R6 |
| owner | da64bceb671e4719b41de08c15e1eebe |
| protected | False |
| size | 5573574656 |
| status | active |
| tags | [] |
| updated_at | 2016-05-06T13:15:22Z |
| virtual_size | None |
| visibility | private |
+------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack image create --file vsc_singledisk-dhcp.qcow2 --disk-format qcow2 --public --container-format bare VSC32R6
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 95a481632192ad8ea3f8701846b0c5ff |
| container_format | bare |
| created_at | 2016-05-06T13:31:55Z |
| disk_format | qcow2 |
| id | abcb1b0b-0389-4f07-b3a3-36bc2d0c0507 |
| min_disk | 0 |
| min_ram | 0 |
| name | VSC32R6 |
| owner | da64bceb671e4719b41de08c15e1eebe |
| protected | False |
| size | 45613056 |
| status | active |
| tags | [] |
| updated_at | 2016-05-06T13:31:56Z |
| virtual_size | None |
| visibility | private |
+------------------+--------------------------------------+

We need to create our keypair. I will use my laptop’s public key and copy it over as follows:


usmovnmroja001:~ mroja001$ scp .ssh/id_rsa.pub root@192.168.1.25:/root
root@192.168.1.25's password:
id_rsa.pub 100% 414 0.4KB/s 00:00

Now, create your keypair using this public key file as follows:


[root@box01 ~(keystone_admin)]# . keystonerc_nuage
[root@box01 ~(keystone_nuage)]# openstack keypair create --public-key id_rsa.pub pin-laptop
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | b6:01:9c:76:a6:e6:d8:04:38:27:5d:8f:92:20:f3:32 |
| name | pin-laptop |
| user_id | c91cd992e79149209c41416a55a661b1 |
+-------------+-------------------------------------------------+

Creating your servers

Time to create your servers: jumpbox (local NTP/DNS server), VSC and VSD.


[root@box01 ~(keystone_nuage)]# openstack server create --image centos7-image --flavor nuage.tiny --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.3 jumpbox
+--------------------------------------+------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | id9AYj3o7WqE |
| config_drive | |
| created | 2016-05-06T13:56:02Z |
| flavor | nuage.tiny (a9559f30-3914-4227-8201-5fd7e1262b3d) |
| hostId | |
| id | f71bb396-47a8-477f-8f6b-8390769cfa73 |
| image | centos7-image (e9ee4c2a-006b-4d53-a158-47ec6bb6c422) |
| key_name | pin-laptop |
| name | jumpbox |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 39e2f35bc10f4047b1ea77f79559807d |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-05-06T13:56:02Z |
| user_id | c91cd992e79149209c41416a55a661b1 |
+--------------------------------------+------------------------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack server create --image VSD32R6 --flavor nuage.vsd --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.4 vsd01
+--------------------------------------+--------------------------------------------------+
| Field | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | SGsdF4DvkPVo |
| config_drive | |
| created | 2016-05-06T13:57:34Z |
| flavor | nuage.vsd (d4a3eda0-b2e2-4d86-b28a-357e8b94166c) |
| hostId | |
| id | 5befd9f3-98d5-404a-a1a7-ce1fa03127e8 |
| image | VSD32R6 (aff1535d-570b-4e19-98de-9c27cde94784) |
| key_name | pin-laptop |
| name | vsd01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 39e2f35bc10f4047b1ea77f79559807d |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-05-06T13:57:34Z |
| user_id | c91cd992e79149209c41416a55a661b1 |
+--------------------------------------+--------------------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack server create --image VSC32R6 --flavor nuage.vsc --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.5 vsc01
+--------------------------------------+--------------------------------------------------+
| Field | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | wCM4DzJijau9 |
| config_drive | |
| created | 2016-05-06T13:58:10Z |
| flavor | nuage.vsc (6a17cd1c-ee29-4f29-a4c9-14852a1e0394) |
| hostId | |
| id | 77a75f63-4615-4479-ace2-e0b21e70a038 |
| image | VSC32R6 (abcb1b0b-0389-4f07-b3a3-36bc2d0c0507) |
| key_name | pin-laptop |
| name | vsc01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 39e2f35bc10f4047b1ea77f79559807d |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-05-06T13:58:10Z |
| user_id | c91cd992e79149209c41416a55a661b1 |
+--------------------------------------+--------------------------------------------------+
[root@box01 ~(keystone_nuage)]# nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| f71bb396-47a8-477f-8f6b-8390769cfa73 | jumpbox | ACTIVE | - | Running | nuage-lab=192.168.101.3 |
| 77a75f63-4615-4479-ace2-e0b21e70a038 | vsc01 | ACTIVE | - | Running | nuage-lab=192.168.101.5 |
| 0f572cb6-d4a4-4b8a-b277-eb55fc859c68 | vsd01 | ACTIVE | - | Running | nuage-lab=192.168.101.4 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+

Lab topology so far (remember to use the nuage/foo credentials to access your Horizon at http://192.168.1.25/dashboard)

pinrojas - nuage lab topology packstack 01.png


Jumpbox: Creating your DNS and NTP local server

We’ll start by configuring the NTP and DNS services on the jumpbox. Assign a floating IP to your jumpbox to get access from outside.


[root@box01 ~(keystone_nuage)]# openstack ip floating pool list
+------------------+
| Name |
+------------------+
| external_network |
+------------------+
[root@box01 ~(keystone_nuage)]# openstack ip floating create external_network
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| fixed_ip | None |
| id | ca767cc0-fc65-4d74-8e4a-d2ef555c6b0d |
| instance_id | None |
| ip | 192.168.1.28 |
| pool | external_network |
+-------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack ip floating add 192.168.1.28 jumpbox

Add security rules to the default group to open the SSH port and allow ping.

pinrojas - nuage lab packstack adding rules to security group.png

Let’s start with network settings…


usmovnmroja001:~ mroja001$ ssh centos@192.168.1.28
The authenticity of host '192.168.1.28 (192.168.1.28)' can't be established.
RSA key fingerprint is d9:f2:5e:95:96:94:48:a2:4a:63:2e:6b:e0:31:fa:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.28' (RSA) to the list of known hosts.
[centos@jumpbox ~]$ su -
password: #you need to change the password before
[root@jumpbox ~]# cat /etc/hosts
127.0.0.1 localhost
192.168.101.3 jumpbox jumpbox.nuage.lab
[root@jumpbox ~]# cat /etc/hostname
jumpbox.nuage.lab
[root@jumpbox ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
[root@jumpbox ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search nuage.lab
nameserver 192.168.1.254
[root@jumpbox ~]# ping www.google.com
PING www.google.com (64.233.176.99) 56(84) bytes of data.
64 bytes from yw-in-f99.1e100.net (64.233.176.99): icmp_seq=1 ttl=43 time=23.3 ms
64 bytes from yw-in-f99.1e100.net (64.233.176.99): icmp_seq=2 ttl=43 time=22.9 ms

Jumpbox: Install your DNS local server

Time to install bind and get our DNS.


[root@jumpbox ~]# yum -y install bind bind-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
#
#some boring lines....
#more boring lines....
Installed:
bind.x86_64 32:9.9.4-29.el7_2.3 bind-utils.x86_64 32:9.9.4-29.el7_2.3 

Dependency Installed:
bind-libs.x86_64 32:9.9.4-29.el7_2.3

Complete!

We have to create the DNS zones and entries as follows:


[root@jumpbox ~]# cat /etc/named.conf
acl "trusted" {
192.168.101.3; # ns1 - can be set to localhost
192.168.101.4;
192.168.101.5;
192.168.101.6;
192.168.101.7;
192.168.101.8;
};

options {
directory "/var/cache/bind";

recursion yes; # enables resursive queries
allow-recursion { trusted; }; # allows recursive queries from "trusted" clients
listen-on { 192.168.101.3; }; # ns1 private IP address - listen on private network only
allow-transfer { none; }; # disable zone transfers by default

forwarders {
8.8.8.8;
8.8.4.4;
};

};
include "/etc/named/named.conf.local";
[root@jumpbox ~]# cat /etc/named/named.conf.local
zone "nuage.lab" {
type master;
file "/etc/named/zones/db.nuage.lab"; # zone file path
};

zone "101.168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.101.168.192"; # 192.168.101/24 subnet
};

[root@jumpbox ~]# cat /etc/named/zones/db.nuage.lab
;
; BIND data file for local loopback interface
;
$TTL 604800
@ IN SOA jumpbox.nuage.lab. admin.nuage.lab (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
; name servers - NS records
IN NS jumpbox.nuage.lab.

; name servers - A records
jumpbox.nuage.lab. IN A 192.168.101.3

; 192.168.101.0/16 - A records
vsd01.nuage.lab. IN A 192.168.101.4
xmpp IN CNAME vsd01
vsc01.nuage.lab. IN A 192.168.101.5
osc01.nuage.lab. IN A 192.168.101.6
nova01.nuage.lab. IN A 192.168.101.7
nova02.nuage.lab. IN A 192.168.101.7

; SRV records
_xmpp-client._tcp IN SRV 10 0 5222 vsd01.nuage.lab.
[root@jumpbox ~]# cat /etc/named/zones/db.101.168.192
;
; BIND reverse data file for local loopback interface
;
$TTL 604800
@ IN SOA jumpbox.nuage.lab. admin.nuage.lab. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers
IN NS jumpbox.nuage.lab.

; PTR Records
3 IN PTR jumpbox.nuage.lab. ; 192.168.101.3
4 IN PTR vsd01.nuage.lab. ; 192.168.101.4
5 IN PTR vsc01.nuage.lab. ; 192.168.101.5
6 IN PTR osc01.nuage.lab. ; 192.168.101.6
7 IN PTR nova01.nuage.lab. ; 192.168.101.7
8 IN PTR nova02.nuage.lab. ; 192.168.101.8
Last settings to make our DNS work:


[root@jumpbox ~]# mkdir /var/cache/bind
[root@jumpbox ~]# systemctl start named
[root@jumpbox ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search nuage.lab
nameserver 192.168.101.3

Test your local DNS


[root@jumpbox ~]# nslookup vsd01
Server: 192.168.101.3
Address: 192.168.101.3#53

Name: vsd01.nuage.lab
Address: 192.168.101.4

[root@jumpbox ~]# nslookup vsd01.nuage.lab
Server: 192.168.101.3
Address: 192.168.101.3#53

Name: vsd01.nuage.lab
Address: 192.168.101.4

[root@jumpbox ~]# nslookup 192.168.101.4
Server: 192.168.101.3
Address: 192.168.101.3#53

4.101.168.192.in-addr.arpa name = vsd01.nuage.lab.
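You can run the same sanity check from Python with nothing but the standard library, which is handy if you want to bake it into a bigger validation script. A minimal sketch, run from any host that uses the jumpbox as its resolver:

import socket

# forward lookup: should resolve through the jumpbox (192.168.101.3)
print(socket.gethostbyname('vsd01.nuage.lab'))    # -> 192.168.101.4

# reverse lookup: the PTR record we defined in db.101.168.192
print(socket.gethostbyaddr('192.168.101.4')[0])   # -> vsd01.nuage.lab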

Jumpbox: Install your NTP local server

We’ll install ntp as follows:


[root@jumpbox ~]# yum -y install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
# boring lines
# more boring lines
# more boring lines...
Installed:
ntp.x86_64 0:4.2.6p5-22.el7.centos.1 

Dependency Installed:
autogen-libopts.x86_64 0:5.18-5.el7 ntpdate.x86_64 0:4.2.6p5-22.el7.centos.1

Complete!

Modify your ntp.conf file


[root@jumpbox ~]# cat /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1

restrict 192.168.101.0 mask 255.255.255.0 nomodify notrap
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

server ntp1.jst.mfeed.ad.jp iburst
server ntp2.jst.mfeed.ad.jp iburst
server ntp3.jst.mfeed.ad.jp iburst

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

disable monitor

Let’s speed up the sync as follows:


[root@jumpbox ~]# ntpdate -u ntp3.jst.mfeed.ad.jp
6 May 15:08:52 ntpdate[16769]: adjust time server 210.173.160.87 offset 0.037419 sec
[root@jumpbox ~]# ntpdate -u ntp2.jst.mfeed.ad.jp
6 May 15:09:14 ntpdate[16770]: adjust time server 210.173.160.57 offset 0.020899 sec
[root@jumpbox ~]# systemctl start ntpd
[root@jumpbox ~]# ntpstat
synchronised to NTP server (210.173.160.27) at stratum 3
time correct to within 8132 ms
polling server every 64 s
[root@jumpbox ~]# ntpq -cpe -cas
remote refid st t when poll reach delay offset jitter
==============================================================================
+ntp1.jst.mfeed. 133.243.236.17 2 u 17 64 1 190.149 26.285 3.164
*ntp2.jst.mfeed. 133.243.236.17 2 u 16 64 1 169.770 18.778 2.302
+ntp3.jst.mfeed. 133.243.236.17 2 u 15 64 1 168.504 12.655 2.307

ind assid status conf reach auth condition last_event cnt
===========================================================
1 55973 943a yes yes none candidate sys_peer 3
2 55974 963a yes yes none sys.peer sys_peer 3
3 55975 9424 yes yes none candidate reachable 2
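If you want to verify the jumpbox NTP service from a client before touching every ntp.conf, the third-party ntplib package gives you a quick check. A minimal sketch (assumes pip install ntplib):

import ntplib

client = ntplib.NTPClient()
resp = client.request('192.168.101.3', version=3)  # our jumpbox NTP server
print('offset: %.6f s, stratum: %d' % (resp.offset, resp.stratum))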

VSD: Configuring Virtualized Services Director v3.2.R6

Attach a floating IP to the VSD instance and update the nameserver on the subnet. Don’t forget to add another rule to the default security group to allow access to the local NTP server.


[root@box01 ~]# . keystonerc_nuage
[root@box01 ~(keystone_nuage)]# openstack ip floating create external_network
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| fixed_ip | None |
| id | 91903e82-362b-4ab0-9bfb-437b443fa9ed |
| instance_id | None |
| ip | 192.168.1.29 |
| pool | external_network |
+-------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack ip floating add 192.168.1.29 vsd01
[root@box01 ~(keystone_nuage)]# neutron subnet-update --dns-nameserver 192.168.101.3 nuage-subnet
Updated subnet: nuage-subnet
[root@box01 ~(keystone_nuage)]# openstack security group rule create --proto udp --dst-port 123 default

Reboot your vsd01 instance to pick up the last change to your subnet.
We’ll prepare the server before the VSD installation: change the network settings, add the NTP server to ntp.conf, change the timezone, and sync up the time.


usmovnmroja001:~ mroja001$ ssh root@192.168.1.29
The authenticity of host '192.168.1.29 (192.168.1.29)' can't be established.
RSA key fingerprint is 7d:60:cd:5e:2e:08:6e:e1:f2:1d:28:a8:55:ae:23:7c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.29' (RSA) to the list of known hosts.
root@192.168.1.29's password:
Last login: Fri May 8 21:09:15 2015

Welcome to VSD. (3.2.6_230)

[root@host-192-168-101-4 ~]# hostname vsd01.nuage.lab
[root@host-192-168-101-4 ~]# hostname -f
vsd01.nuage.lab
[root@host-192-168-101-4 ~]# hostname
vsd01.nuage.lab
[root@host-192-168-101-4 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
search nuage.lab
nameserver 192.168.101.3
[root@host-192-168-101-4 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="dhcp"
[root@host-192-168-101-4 ~]# cat /etc/hosts
127.0.0.1 localhost
192.168.101.4 vsd01.nuage.lab vsd01
[root@host-192-168-101-4 ~]# cat /etc/ntp.conf
driftfile /var/lib/ntp/drift

restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

restrict 127.0.0.1
restrict -6 ::1

server jumpbox.nuage.lab iburst
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

[root@vsd01 ~]# ntpdate -u jumpbox.nuage.lab
6 May 20:57:18 ntpdate[1363]: adjust time server 192.168.101.3 offset 0.001624 sec
[root@vsd01 ~]# service ntpd start
Starting ntpd:
[root@vsd01 ~]# ntpstat
synchronised to NTP server (216.218.254.202) at stratum 2
time correct to within 51 ms
polling server every 64 s
[root@vsd01 ~]# rm /etc/localtime
rm: remove regular file `/etc/localtime'? y
[root@vsd01 ~]# sudo ln -s /usr/share/zoneinfo/US/Central /etc/localtime

All set to start our installation. Execute /opt/vsd/install.sh (a pexpect sketch to automate its prompts follows the transcript):


[root@vsd01 ~]# /opt/vsd/install.sh
-------------------------------------------------------------
V I R T U A L I Z E D S E R V I C E S D I R E C T O R Y
version 3.2.6_230
(c) 2015 Nuage Networks
-------------------------------------------------------------
Error: FQDN vsd01 missing the domain part
[root@vsd01 ~]# vi /etc/hosts
[root@vsd01 ~]# hostname -f
vsd01.nuage.lab
[root@vsd01 ~]# /opt/vsd/install.sh
-------------------------------------------------------------
V I R T U A L I Z E D S E R V I C E S D I R E C T O R Y
version 3.2.6_230
(c) 2015 Nuage Networks
-------------------------------------------------------------
VSD supports two configurations:
1) HA, consisting of 3 redundant installs of VSD with a cluster name node server.
2) Standalone, where all services are installed on a single machine.
Is this a redundant (r) or standalone (s) installation [r|s]? (default=s): s
WARN: Memory is at 7872 MB; 16GB is preferred
Deploy VSD on single host vsd01.nuage.lab ...
Continue [y|n]? (default=y): y
Starting VSD deployment. This may take as long as 20 minutes in some situations ...
VSD package deployment and configuration DONE. Please initialize VSD.
DONE: VSD deployed.
Starting VSD initialization. This may take as long as 20 minutes in some situations ...
A self-signed certificate has been generated to get you started using VSD.
VSD installed and the services have started.
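
By the way, if you rebuild this lab often, those two prompts can be answered with pexpect, in the same spirit as my other posts. A minimal sketch, with the expect patterns taken from the transcript above (your VSD build may word them differently):


import pexpect

# Drive /opt/vsd/install.sh non-interactively. Run locally on the VSD as root.
# Deploy plus initialization can take ~40 minutes, hence the long timeout.
child = pexpect.spawn('/opt/vsd/install.sh', timeout=3600)
child.expect(r'redundant \(r\) or standalone \(s\)')
child.sendline('s')                      # standalone installation
child.expect(r'Continue \[y\|n\]')
child.sendline('y')
child.expect('VSD installed and the services have started')
print('success')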

Wait a few minutes… then you can check your services:


[root@vsd01 ~]#
[root@vsd01 ~]#
[root@vsd01 ~]# monit summary
The Monit daemon 5.15 uptime: 3m 

Program 'vsd-stats-status' Status failed
Program 'vsd-core-status' Status failed
Program 'vsd-common-status' Status ok
Process 'tca-daemon' Initializing
Program 'tca-daemon-status' Initializing
Process 'stats-collector' Initializing
Program 'stats-collector-status' Initializing
Process 'opentsdb' Running
Program 'opentsdb-status' Status failed
Program 'ntp-status' Status ok
Process 'mysql' Running
Program 'mysql-status' Status ok
Process 'mediator' Running
Program 'mediator-status' Initializing
File 'jboss-console-log' Accessible
File 'monit-log' Accessible
File 'mediator-out' Does not exist
File 'stats-out' Does not exist
File 'tca-daemon-out' Does not exist
Program 'keyserver-status' Status ok
Process 'jboss' Running
Program 'jboss-status' Status ok
Process 'hbase' Running
Program 'hbase-status' Status ok
Program 'ejbca-status' Status ok
Process 'ejabberd' Running
Program 'ejabberd-status' Status ok
System 'vsd01.nuage.lab' Running

It’s important to understand how to gracefully restart these services, as shown below (you need to wait some time between commands until the services come up ‘ok’). We’ll keep stats down to avoid annoying messages later; that’s a matter for another post. A small polling helper is sketched after the monit summary below.


[root@vsd01 ~]# monit -g vsd-stats stop
# Wait for all the vsd-stats services to show as “Not Monitored”.
[root@vsd01 ~]# monit -g vsd-core stop
# Wait for all the vsd-core services to show as “Not Monitored”.
[root@vsd01 ~]# monit -g vsd-common stop
# Wait for all the vsd-common services to show as “Not Monitored”.
[root@vsd01 ~]# monit -g vsd-common start
# Wait for all the vsd-common services to show as “status ok”.
[root@vsd01 ~]# monit -g vsd-core start
# Wait for all the vsd-core services to show as “status ok”.
# I will keep vsd-stats down
[root@vsd01 ~]# monit summary
The Monit daemon 5.15 uptime: 17m

Program 'vsd-stats-status' Not monitored
Program 'vsd-core-status' Status ok
Program 'vsd-common-status' Status ok
Process 'tca-daemon' Not monitored
Program 'tca-daemon-status' Not monitored
Process 'stats-collector' Not monitored
Program 'stats-collector-status' Not monitored
Process 'opentsdb' Not monitored
Program 'opentsdb-status' Not monitored
Program 'ntp-status' Status ok
Process 'mysql' Running
Program 'mysql-status' Status ok
Process 'mediator' Running
Program 'mediator-status' Status ok
File 'jboss-console-log' Accessible
File 'monit-log' Accessible
File 'mediator-out' Accessible
File 'stats-out' Accessible
File 'tca-daemon-out' Accessible
Program 'keyserver-status' Status failed
Process 'jboss' Running
Program 'jboss-status' Status ok
Process 'hbase' Not monitored
Program 'hbase-status' Not monitored
Program 'ejbca-status' Status ok
Process 'ejabberd' Running
Program 'ejabberd-status' Status ok
System 'vsd01.nuage.lab' Running
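
Rather than eyeballing monit summary between commands, a small helper can poll it for you. A rough sketch, assuming Python 2.7+ on the VSD (it just greps the summary text shown above):


import subprocess
import time

def wait_for(needle, state, timeout=600):
    # Poll 'monit summary' until every line containing needle shows state.
    deadline = time.time() + timeout
    while time.time() < deadline:
        summary = subprocess.check_output(['monit', 'summary']).decode()
        lines = [l for l in summary.splitlines() if needle in l]
        if lines and all(state in l for l in lines):
            return
        time.sleep(10)
    raise RuntimeError('%s never reached %s' % (needle, state))

# Example: bounce vsd-common and wait until its status check is ok again
subprocess.check_call(['monit', '-g', 'vsd-common', 'stop'])
wait_for('vsd-common', 'Not monitored')
subprocess.check_call(['monit', '-g', 'vsd-common', 'start'])
wait_for('vsd-common-status', 'Status ok')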

We need to open TCP port 8443 to access the VSD console.
Switch back to your OpenStack controller and add the required security group.


[root@box01 ~(keystone_nuage)]# openstack security group create vsd
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | vsd                                  |
| id          | 7ff1256c-aeec-4dac-9cf8-ff6ae9c7ab04 |
| name        | vsd                                  |
| rules       | []                                   |
| tenant_id   | 39e2f35bc10f4047b1ea77f79559807d     |
+-------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack security group rule create --proto tcp --dst-port 8443 vsd
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| group           | {}                                   |
| id              | 5a82cacf-b846-4307-a059-7640154ff24b |
| ip_protocol     | tcp                                  |
| ip_range        | 0.0.0.0/0                            |
| parent_group_id | 7ff1256c-aeec-4dac-9cf8-ff6ae9c7ab04 |
| port_range      | 8443:8443                            |
+-----------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack server add security group vsd01 vsd

It’s time to access your VSD server. Use the cpsroot/cpsroot credentials.

[Screenshot: VSD console login]

You’ll need a license; leave a comment on this post to figure that out.

[Screenshot: VSD console license page]

VSC: Installing SDN Controller

First of all, you will have to modify the qcow2 image. Please check my post: NUAGE VSC – MODIFY QCOW2 IMAGES WITH GUESTFISH.
This is what you will see on your console if everything went OK (don’t get confused if the screen seems stuck at the booting… state).

[Screenshot: VSC boot console]

Ping your brand new VSC (vsc01 / 192.168.101.5) from vsd01 to check your installation, as follows:


[root@vsd01 ~]# ping 192.168.101.5
PING 192.168.101.5 (192.168.101.5) 56(84) bytes of data.
64 bytes from 192.168.101.5: icmp_seq=1 ttl=64 time=2.70 ms
64 bytes from 192.168.101.5: icmp_seq=2 ttl=64 time=0.621 ms
^C
--- 192.168.101.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1244ms
rtt min/avg/max/mdev = 0.621/1.661/2.702/1.041 ms
[root@vsd01 ~]# ssh admin@192.168.101.5
The authenticity of host '192.168.101.5 (192.168.101.5)' can't be established.
RSA key fingerprint is 47:e6:d6:33:9f:d7:cb:fa:ab:83:89:28:28:02:8c:56.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.101.5' (RSA) to the list of known hosts.
TiMOS-DC-C-3.2.6-228 cpm/i386 NUAGE VSC Copyright (c) 2000-2016 Alcatel-Lucent.
All rights reserved. All use subject to applicable license agreements.
Built on Tue Jan 26 21:42:10 PST 2016 [d6274a] by builder in /rel3.2-DC/b1/6-228/panos/main

admin@192.168.101.5's password:

A:vm1#
A:vm1#
A:vm1#
A:vm1#
A:vm1# show bof
===============================================================================
BOF (Memory)
===============================================================================
primary-image cf1:\timos\cpm.tim
primary-config cf1:\config.cfg
ip-address-dhcp
address 192.168.101.5/24 active
primary-dns 192.168.101.3
dns-domain openstacklocal
autonegotiate
duplex full
speed 100
wait 3
persist off
no li-local-save
no li-separate
console-speed 115200
===============================================================================
A:vm1#

We’ll add a static route and our domain to the bof.cfg file:


A:vm1# bof
A:vm1>bof# dns-domain nuage.lab
*A:vm1>bof# static-route 0.0.0.0/1 next-hop 192.168.101.1
*A:vm1>bof# save
Writing BOF to cf1:/bof.cfg ... OK
Completed.
A:vm1>bof# exit
A:vm1# show bof
===============================================================================
BOF (Memory)
===============================================================================
primary-image cf1:\timos\cpm.tim
primary-config cf1:\config.cfg
ip-address-dhcp
address 192.168.101.5/24 active
primary-dns 192.168.101.3
dns-domain nuage.lab
static-route 0.0.0.0/1 next-hop 192.168.101.1
autonegotiate
duplex full
speed 100
wait 3
persist off
no li-local-save
no li-separate
console-speed 115200
===============================================================================

Now, we’ll configure NTP and time zone.


A:vm1# configure system
A:vm1>config>system# name vsd01
*A:vsd01>config>system# snmp
*A:vsd01>config>system>snmp# exit
*A:vsd01>config>system# time
*A:vsd01>config>system>time# ntp
*A:vsd01>config>system>time>ntp# server 192.168.101.3
*A:vsd01>config>system>time>ntp# no shutdown
*A:vsd01>config>system>time>ntp# exit
*A:vsd01>config>system>time# sntp
*A:vsd01>config>system>time>sntp# shutdown
*A:vsd01>config>system>time>sntp# exit
*A:vsd01>config>system>time# dst-zone
*A:vsd01>config>system>time# dst-zone CST
*A:vsd01>config>system>time>dst-zone# start second sunday march 02:00
*A:vsd01>config>system>time>dst-zone# end first sunday november 02:00
*A:vsd01>config>system>time>dst-zone# exit
*A:vsd01>config>system>time# zone CST
*A:vsd01>config>system>time# exit
*A:vsd01>config>system# thresholds
*A:vsd01>config>system>thresholds# rmon
*A:vsd01>config>system>thresh>rmon# exit
*A:vsd01>config>system>thresholds# exit
*A:vsd01>config>system# exit
*A:vsd01#

Before saving our configuration, we’ll set up the VSD connection thru XMPP as follows:


*A:vsd01#
*A:vsd01# exit all
*A:vsd01# configure vswitch-controller
*A:vsd01>config>vswitch-controller# xmpp-server vsd01:password@vsd01.nuage.lab
*A:vsd01>config>vswitch-controller# open-flow
*A:vsd01>config>vswitch-controller>open-flow# exit
*A:vsd01>config>vswitch-controller# xmpp
*A:vsd01>config>vswitch-controller>xmpp# exit
*A:vsd01>config>vswitch-controller# ovsdb
*A:vsd01>config>vswitch-controller>ovsdb# exit
*A:vsd01>config>vswitch-controller# exit
*A:vsd01#
*A:vsd01# admin save

Now, let’s see if everything is OK and your VSC is connected to your VSD:


A:vsd01# show vswitch-controller vsd detail 

===============================================================================
VSD Server Table
===============================================================================
VSD User Name : cna@vsd01.nuage.lab/nuage
Uptime : 0d 02:31:27
Status : available
Nuage Msg Tx. : 8 Nuage Msg Rx. : 8
Nuage Msg Ack. Rx. : 8 Nuage Msg Error : 0
Nuage Msg TimedOut : 0 Nuage Msg MinRtt : 50
Nuage Msg MaxRtt : 60

===============================================================================
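
And since we end up typing the same CLI into every new VSC, this is another spot where pexpect pays off. A rough sketch that replays the xmpp-server configuration above; matching the prompt on the system name is an assumption, so verify it against your own session (the default admin password is also an assumption here):


import pexpect

VSC_IP = '192.168.101.5'   # lab address used above
PASSWD = 'admin'           # assumed default VSC admin password; change for your setup
PROMPT = 'vsd01'           # the system name we set earlier, present in every prompt

child = pexpect.spawn('ssh admin@%s' % VSC_IP)
child.expect('password:')
child.sendline(PASSWD)
for cmd in ('exit all',
            'configure vswitch-controller xmpp-server vsd01:password@vsd01.nuage.lab',
            'admin save'):
    child.expect(PROMPT)
    child.sendline(cmd)
child.expect(PROMPT)
print('success')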

OK guys, next post we’ll install our plugin into a brand new OpenStack installation.

See you around!

Nuage VSC – Modify QCOW2 images with guestfish

Hi there,

This post is useful to anyone planning to use guestfish to make changes to any qcow2 disk image file. You can solve issues like changing user settings, defining static IP addresses, or changing GRUB settings.

I am using guestfish to change some configuration in my vsc_singledisk.qcow2 image. Why? Because all changes to VSC must be done thru a console. However, if you are thinking of running this on OpenStack Liberty/KVM, that’s an issue: most instances are managed thru VNC (graphics). So I’ve added network settings to the bof.cfg file to make this instance boot with a specific IP address. That way I can follow the next steps of its setup thru ssh 😉

Install your guestfish and libvirtd packages

I’ve downloaded a CentOS 7 minimal OVA file to my Mac, imported it into my VBox and got it running. I used to keep SELinux disabled; maybe you will need to change it to permissive.

First of all you have to install KVM and guestfish. Then you will have to disable NetworkManager and firewalld, restart libvirtd and set LIBGUESTFS_BACKEND=direct. I’d copied my qcow2 file over thru scp previously. You will have to change the ownership of this file to qemu:qemu to make it work.


[root@jumbox ~]# yum install -y qemu-kvm libvirt libvirt-python libguestfs-tools virt-install
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * extras: centos-distro.cavecreek.net
 * updates: centos.mia.host-engine.com
#
# many boring lines
# many boring lines
#

Installed:
  libguestfs-tools.noarch 1:1.28.1-1.55.el7.centos.2        libvirt.x86_64 0:1.2.17-13.el7_2.4        qemu-kvm.x86_64 10:1.5.3-105.el7_2.4       
  virt-install.noarch 0:1.2.1-8.el7                        

Dependency Installed:
  libguestfs.x86_64 1:1.28.1-1.55.el7.centos.2                        libguestfs-tools-c.x86_64 1:1.28.1-1.55.el7.centos.2                      
  libvirt-daemon-kvm.x86_64 0:1.2.17-13.el7_2.4                       perl-Sys-Guestfs.x86_64 1:1.28.1-1.55.el7.centos.2                        
  perl-Sys-Virt.x86_64 0:1.2.17-2.el7                                 perl-libintl.x86_64 0:1.20-12.el7                                         

Complete!
[root@jumbox ~]# yum -y install guestfish
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * extras: centos-distro.cavecreek.net
 * updates: centos.mia.host-engine.com
#
# many boring lines
# many boring lines
#
Installed:
  libguestfs-tools-c.x86_64 1:1.28.1-1.55.el7.centos.2                                                                                           

Complete!
[root@jumbox ~]# systemctl stop NetworkManager
[root@jumbox ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
[root@jumbox ~]# systemctl start network
[root@jumbox ~]# systemctl stop firewalld
[root@jumbox ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@jumbox ~]# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service
[root@jumbox ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id    Name                           State
----------------------------------------------------

virsh # exit
[root@jumbox ~]# chown qemu:qemu vsc_singledisk.qcow2 
[root@jumbox ~]# export LIBGUESTFS_BACKEND=direct

Modify the files inside your qcow2 image thru guestfish

The next lines will show you how to modify the qcow2 image file. In this case I am modifying the bof.cfg file at the root folder.


[root@jumbox ~]# mv vsc_singledisk.qcow2 vsc_singledisk_dhcp.qcow2 
[root@jumbox ~]# guestfish --rw -a vsc_singledisk_dhcp.qcow2 

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

> run
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
> mount /dev/sda1 /
> ls /
bof.cfg
config.cfg
nvram.dat
syslinux
timos
> vi /bof.cfg
> exit

The bof.cfg file that I’ve found is this:


primary-image        cf1:/timos/cpm.tim
primary-config       cf1:/config.cfg
autonegotiate
duplex               full
speed                100
wait                 3
persist              off
console-speed        115200

And this is how I’ve left it.


primary-image    cf1:\timos\cpm.tim
primary-config   cf1:\config.cfg
ip-address-dhcp
primary-dns      192.168.101.3
dns-domain       nuage.lab
static-route     0.0.0.0/1 next-hop 192.168.101.1
autonegotiate
duplex           full
speed            100
wait             3
persist          off
no li-local-save
no li-separate
console-speed    115200
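
If you have several images to patch, the interactive vi step can be skipped: the Python bindings for libguestfs (the python-libguestfs package) let you rewrite bof.cfg unattended. A minimal sketch, assuming the flash partition is /dev/sda1 as in the guestfish session above:


import guestfs

# The full bof.cfg we want inside the image (same content as above)
BOF = """primary-image    cf1:\\timos\\cpm.tim
primary-config   cf1:\\config.cfg
ip-address-dhcp
primary-dns      192.168.101.3
dns-domain       nuage.lab
static-route     0.0.0.0/1 next-hop 192.168.101.1
autonegotiate
duplex           full
speed            100
wait             3
persist          off
no li-local-save
no li-separate
console-speed    115200
"""

g = guestfs.GuestFS()
g.add_drive_opts('vsc_singledisk.qcow2', readonly=0)
g.launch()
g.mount('/dev/sda1', '/')   # same partition we mounted in guestfish
g.write('/bof.cfg', BOF)    # replace the file wholesale
g.shutdown()
g.close()

Remember that the LIBGUESTFS_BACKEND=direct setting from earlier still applies when you run this.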

See ya!

Building a Nuage/OpenStack demo at home – Part1

update 05/05/2016: I’ve made some important changes over the next posts, like switching from devstack to packstack. Anyway, you are invited to check out both and form your own opinion. Also, I’ve changed the jumpbox from Ubuntu to CentOS 7. I will fold some of my notes here into the next posts.

The next posts will take you thru a step-by-step guide to create your on-premises Proof of Concept of Nuage 3.2R6 and OpenStack Liberty. I’m considering installing this demo on just one server.

The next picture shows you the components I’m considering for this demo:

[Diagram: demo components]

This will help you to understand how Nuage works with OpenStack. You will be able to try different use cases like forwarding policies (chaining), ACLs, managing L3/L2 domains, creating an LBaaS based on HAProxy, etc.

Minimal Capacity Requirements

In order to try some of the mentioned use cases, we need to meet some minimal capacity requirements:

1.- VSD requires at least 8GB of memory and 100GB of disk for a demo (24GB is required in production), and 3 instances are needed for High Availability. I’ve tried 4GB on my laptop; however, the services then take a long time of your life to come up.

2.- VSC requires 4GB of memory (you need at least 2x that in production)

3.- The jumpbox requires a minimum of 2GB of memory

4.- The OpenStack controller and computes depend on what you want to do. Controller nodes can fit in 4GB of memory perfectly. Computes depend on how much memory you will provide to your instances. In my case I will consider 5GB to have a minimum of 3 instances to play around with.

Now, if we install everything on just one server using devstack, and we add 4GB to support QEMU and a few of the projects (neutron, nova, keystone, glance), we need at least 27GB of memory (8 + 4 + 2 + 4 + 5 + 4). Let’s make it 32GB (just in case you’re motivated to also install a VRS-G or an additional compute node).

Creating Base Image with DevStack

OK guys, we need to create our small private stack to build our demo. At first I was thinking of just using KVM, but what the hell! Let’s do it with DevStack.

My suggestion is to run this OS base for DevStack on a physical server with at least 32GB of memory, 4 CPUs, 2 NICs and 300GB of disk. Otherwise, you can download my VirtualBox Ubuntu image that contains my DevStack already downloaded and ready to install via ./stack.sh.

We’ll go thru the process of installing DevStack on a server. Most of the info I’ll show you comes from a VirtualBox venture; however, you can use the same procedure to implement it on your own hardware. I’m actually buying a fanless box from CappuccinoPC and disks/memory from Amazon (I’ve got this configuration from Jerrod). You can also see Diego’s option.

Meanwhile, I’ll carry on with my VBox configuration.

Creating your demo in VirtualBox

VBox: Preparing interfaces

I have to define two network adapters, NAT-based in my case (I am running this on my laptop and I need to take it with me on my trips, OK?). If your server will stay connected to your home network, you’d better define both as bridged, and reserve a pool of IP addresses outside your DHCP range. Four IPs would be enough.

Then you will have to define forwarding rules to connect to your VM, as in the following picture.

[Screenshot: VirtualBox port-forwarding rules]

Then connect to your instance with something such as “ssh -l ubuntu -p 2222 127.0.0.1”.

For the second interface, it would be nice to create ssh access to your jumpbox at port 2223:

[Screenshot: VirtualBox port-forwarding rule for the jumpbox]

After you have created your jumpbox server you will be able to connect via “ssh -l ubuntu -p 2223 127.0.0.1”. Don’t forget to add your public key, among others, to authorized_keys on the jumpbox server.

VBox: Creating and preparing your server

I am using VirtualBox 4.2.34 with ubuntu-14.04-server-amd64.ova, downloaded from http://virtualboxes.org/images/ubuntu-server/ to my laptop. BTW, it’s a Mac with 16GB of RAM ☺.

I’ve changed the memory configuration of the OVA to 12GB and 4 CPUs, and removed useless things like USB ports.

As soon as you have your brand new Ubuntu running, I suggest you upgrade and reboot afterwards:


sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
reboot

Now let’s make more disk space on our brand new server. First add a disk as in the following picture; I’ve defined 200GB for the glance cache and nova images (/opt/stack).

[Screenshot: VirtualBox disk settings]

After creating this virtual device, we have to partition and mount it the following way:


ubuntu@ubuntu-amd64:/var/lib$ sudo fdisk -l

Disk /dev/sda: 19.3 GB, 19327352832 bytes
255 heads, 63 sectors/track, 2349 cylinders, total 37748736 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c95b1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    36702207    18350080   83  Linux
/dev/sda2        36704254    37746687      521217    5  Extended
/dev/sda5        36704256    37746687      521216   82  Linux swap / Solaris

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
ubuntu@ubuntu-amd64:/var/lib$ sudo fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xa3859b8e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
Using default value 1
First sector (2048-419430399, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-419430399, default 419430399): 
Using default value 419430399

Command (m for help): p

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8c9832c1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   419430399   209714176   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
ubuntu@ubuntu-amd64:/var/lib$ sudo mkfs -t ext4 /dev/sdb1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
13107200 inodes, 52428544 blocks
2621427 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1600 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done        

ubuntu@ubuntu-amd64:/var/lib$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            5.9G  4.0K  5.9G   1% /dev
tmpfs           1.2G  420K  1.2G   1% /run
/dev/sda1        18G  3.1G   14G  20% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            5.9G     0  5.9G   0% /run/shm
none            100M     0  100M   0% /run/user
ubuntu@ubuntu-amd64:/var/lib$ sudo mkdir /opt/stack
ubuntu@ubuntu-amd64:/var/lib$ sudo mount /dev/sdb1 /opt/stack
ubuntu@ubuntu-amd64:/var/lib$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            5.9G  4.0K  5.9G   1% /dev
tmpfs           1.2G  420K  1.2G   1% /run
/dev/sda1        18G  3.1G   14G  20% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            5.9G     0  5.9G   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/sdb1       197G   52M  197G   1% /opt/stack
ubuntu@ubuntu-amd64:~$ sudo tune2fs -m 0 /dev/sdb1
tune2fs 1.42.9 (4-Feb-2014)
Setting reserved blocks percentage to 0% (0 blocks)
ubuntu@ubuntu-amd64:~$ sudo tune2fs -o journal_data_writeback /dev/sdb1
tune2fs 1.42.9 (4-Feb-2014)

Remember to add the following line for /dev/sdb1 to your /etc/fstab and reboot (the noatime/writeback options trade some crash safety for speed, which is fine for disposable lab data):


# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=7d4c7424-d351-4b70-8bc2-37f5e37d778b /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=8ceef34f-b47a-4ce3-b3b4-5d93f24667e6 none            swap    sw              0       0
/dev/sdb1 /opt/stack ext4 noatime,nodiratime,data=writeback,barrier=0,nobh,errors=remount-ro 0 1

OS Base DevStack: Install and Configuration

Let’s get our devstack files:


ubuntu@ubuntu-amd64:~$ git clone https://git.openstack.org/openstack-dev/devstack
Cloning into 'devstack'...
remote: Counting objects: 33096, done.
remote: Compressing objects: 100% (15655/15655), done.
remote: Total 33096 (delta 23512), reused 26231 (delta 17015)
Receiving objects: 100% (33096/33096), 6.48 MiB | 747.00 KiB/s, done.
Resolving deltas: 100% (23512/23512), done.
Checking connectivity... done.

Let’s set up our local.conf file for this setup. We define the main directory to store our project and data files (“/opt/stack”), among others. Also, eth0 will be our main network interface connecting all services like the databases and MQ. I’ve disabled Horizon and Cinder to save memory for instances. The secondary port eth1 will be our external interface attached to our public bridge br-ex (check out my post as a reference for the OpenVSwitch ports at the network node).

I’ve also added the serial console option (n-sproxy). VSD requires access thru the console to start doing anything.

Locate this file in the ~/devstack folder. Here you have the file that I’ve used:


[[local|localrc]]
DEST=/opt/stack
SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log

HOST_IP=10.0.2.15
SERVICE_HOST=10.0.2.15
MYSQL_HOST=10.0.2.15
RABBIT_HOST=10.0.2.15
GLANCE_HOSTPORT=10.0.2.15:9292

ADMIN_PASSWORD=demonuage
DATABASE_PASSWORD=demonuage
RABBIT_PASSWORD=demonuage
SERVICE_PASSWORD=demonuage

# Do not use Nova-Network
disable_service n-net
# Do not use Horizon & Cinder
disable_service horizon
disable_service c-api c-sch c-vol
# Enable Neutron
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3
# Enable-Console
enable_service n-sproxy


## Neutron options
Q_USE_SECGROUP=True
FIXED_RANGE="192.168.1.0/24"
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=192.168.1.1
PRIVATE_SUBNET_NAME=Nuage-Priv01

PUBLIC_SUBNET_NAME=Nuage-Public
FLOATING_RANGE="10.0.3.15/27"
Q_FLOATING_ALLOCATION_POOL=start=10.0.3.20,end=10.0.3.30
PUBLIC_NETWORK_GATEWAY="10.0.3.2"
Q_L3_ENABLED=True
PUBLIC_INTERFACE=eth1

# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

Another important thing is to define just one default gateway; devstack resets the interfaces during the process and your connection to the outside could get messy. I’ve changed eth1 to static and removed its default gw in the /etc/network/interfaces file as follows:


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
    dns-nameservers 8.8.8.8

# The sec interface
auto eth1
iface eth1 inet static
    address 10.0.3.15
    netmask 255.255.255.0
    dns-nameservers 8.8.8.8

Your session’s user must have sudo privileges. Now run ./stack.sh and wait, wait.. wait… until you get this message:


========================
DevStack Components Timed
========================

run_process - 69 secs
test_with_retry - 4 secs
apt-get-update - 19 secs
pip_install - 100 secs
restart_apache_server - 5 secs
wait_for_service - 18 secs
apt-get - 41 secs


This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Keystone is serving at http://10.0.2.15:5000/
The default users are: admin and demo
The password: demonuage

You will get this network interface configuration:


ubuntu@ubuntu-amd64:~/devstack$ ifconfig -a
br-ex     Link encap:Ethernet  HWaddr 08:00:27:ea:81:23  
          inet addr:10.0.3.15  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::34cf:80ff:fe38:387a/64 Scope:Link
          inet6 addr: 2001:db8::2/64 Scope:Global
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1284 (1.2 KB)  TX bytes:1166 (1.1 KB)

br-int    Link encap:Ethernet  HWaddr 02:59:41:8a:01:44  
          inet6 addr: fe80::105f:7bff:fef7:813d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:72 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:6416 (6.4 KB)  TX bytes:828 (828.0 B)

br-tun    Link encap:Ethernet  HWaddr d2:c5:95:1f:b2:41  
          inet6 addr: fe80::2429:aff:fe60:2f8e/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:96:dd:d0  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe96:ddd0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33395 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20053 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:18895091 (18.8 MB)  TX bytes:3757597 (3.7 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:ea:81:23  
          inet6 addr: fe80::a00:27ff:feea:8123/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:187 errors:0 dropped:0 overruns:0 frame:0
          TX packets:210 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:23744 (23.7 KB)  TX bytes:29124 (29.1 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:122326 errors:0 dropped:0 overruns:0 frame:0
          TX packets:122326 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:64243895 (64.2 MB)  TX bytes:64243895 (64.2 MB)

ovs-system Link encap:Ethernet  HWaddr ee:ca:dd:82:73:83  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

virbr0    Link encap:Ethernet  HWaddr ee:43:7c:62:b1:2d  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Let’s set up our env file to make our job easier. You can copy ~/devstack/userrc_early to your home dir and run “source userrc_early”.


ubuntu@ubuntu-amd64:~$ cat userrc 
# Use this for debugging issues before files in accrc are created

# Set up password auth credentials now that Keystone is bootstrapped
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://10.0.2.15:35357
export OS_USERNAME=demo
export OS_USER_DOMAIN_ID=default
export OS_PASSWORD=demonuage
export OS_PROJECT_NAME=demo
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne

DevStack: Building some resources for our lab

We’ll create some elements in our lab to test our devstack. First of all, let’s create some key pairs in the demo project. I used my own key in this VM; if you don’t have your keys in the .ssh folder you can create them with “ssh-keygen -t rsa”. Then let’s download an Ubuntu cloud image to load into glance. I will also create a flavor called pin.1 to save some resources.


openstack keypair create --public-key ~/.ssh/id_rsa.pub my-keypair
wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
glance image-create --name ubuntu-trusty-image --file trusty-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare
# need admin access to add flavors. use ~/devstack/userrc_early credentials
openstack flavor create --ram 1024 --vcpus 1 --disk 5 --public pin.1

I’ve created a couple of networks based on the demo architecture that I’ve shown. Now we’re ready to create our first server, called jumpbox, and add a secondary interface on our private network:


openstack network create public-demo
openstack network create private-demo
neutron subnet-create --dns-nameserver 8.8.8.8 --name public-demo public-demo 10.101.0.0/24
# preparing our subnet to use jumpbox as gateway/dns
neutron subnet-create --dns-nameserver 192.168.101.3 --gateway 192.168.101.3 --name private-demo private-demo 192.168.101.0/24
nova boot --image ubuntu-trusty-image --nic net-name=public-demo --flavor pin.1 --key-name my-keypair jumpbox
# use "nova list" to check how it's going
neutron port-create private-demo
# use "neutron port-list" to check out the ID to use over the next command
nova interface-attach --port-id e11de213-3141-465d-85a6-5957261ca395 jumpbox

Don’t forget to create your security group to allow ssh access to your instance.


openstack security group create ssh-access
openstack security group rule create --proto tcp --src-ip 0.0.0.0/0 --dst-port 22 ssh-access
openstack server add security group jumpbox ssh-access

I’ve added a route to reach this new server (route add -net 10.101.0.0/24 gw 10.0.3.20) thru router1 (use “neutron router-list” to check it). If you use a physical server you will be able to add floating IPs to access it from your laptop (don’t forget to add your public key to the authorized_keys file in .ssh). To add a floating IP you need the IDs of your port and of the floating IP; maybe you will have to create the floating IP first. Here you have an example:


# before to add a floatingip, you need to create interface at router1 to subnet public-demo using ID of the subnet over the next command
neutron router-interface-add router1 971db454-91e2-4a04-af6c-75591a2b758b
neutron floatingip-create public
# use "neutron floatingip-list" to check available ips
neutron floatingip-associate ab73e086-0c70-4d67-80ab-a2c740d25b62 32144a6c-2d47-4e96-97b9-b144855b6a5e

Connect to your jumpbox instance via its IP address (i.e. 10.101.0.3). Don’t forget to add a route on your server thru the router (i.e. sudo route add -net 10.101.0.0/24 gw 10.0.3.20), or access it thru the console.

DevStack-Nova: Enable console access

To get access to the console you need to install “novaconsole” via:

pip install git+http://github.com/larsks/novaconsole.git

More details at GitHub. Connect the following way to test your console connection.


ubuntu@ubuntu-amd64:~$ nova get-serial-console jumpbox
+--------+-----------------------------------------------------------------+
| Type   | Url                                                             |
+--------+-----------------------------------------------------------------+
| serial | ws://127.0.0.1:6083/?token=5c48b7ef-84dc-476c-a02e-7cd4a500ab68 |
+--------+-----------------------------------------------------------------+
ubuntu@ubuntu-amd64:~$ novaconsole  --url ws://127.0.0.1:6083/?token=5c48b7ef-84dc-476c-a02e-7cd4a500ab68
WARNING:novaconsole.client:connected to: ws://127.0.0.1:6083/?token=5c48b7ef-84dc-476c-a02e-7cd4a500ab68
WARNING:novaconsole.client:type "~." to disconnect

Ubuntu 14.04.4 LTS jumpbox ttyS0

jumpbox login: 
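
novaconsole is interactive; if you ever want to script against that serial console instead (say, to bootstrap a VSC’s bof.cfg at first boot), the same websocket URL can be driven with the websocket-client package. A rough sketch only; the ‘binary’ subprotocol is an assumption on my part, so check it against your nova serial proxy:


import websocket  # pip install websocket-client

# URL comes from "nova get-serial-console jumpbox" as shown above
URL = 'ws://127.0.0.1:6083/?token=5c48b7ef-84dc-476c-a02e-7cd4a500ab68'

ws = websocket.create_connection(URL, subprotocols=['binary'])
ws.send('\r\n')      # wake the console up
print(ws.recv())     # should include the login banner
ws.close()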

See you in the next part!

Some Nuage Labs’ resources for NUTS

Hi there. I am just putting some tools over here which I normally use with Nuage’s labs (template: “Nuage VSP 3.2R4 with Red Hat OSP7 – blank”). Most of these are being tested at Nuts (a limited-access lab we normally use to amaze customers). However, you can use them and modify them for your own purposes.

Use these scripts at your own risk. If you don’t know what they do, don’t use them (don’t make me tell you later: “I told you so!”).

neutron-lbaasv2-agent

A script file has been created to easily set up everything I explain in my previous post “INSTALLING OPENSTACK KILO (RED HAT OSP7) LBAAS WITH @NUAGENETWORKS VSP 3.2R4 (HAPROXY)”. It’s funny to see something that took me days be done in just 15 min now.

It requires some files related to the neutron, openvswitch and lbaas-agent configurations. Also, you will have to download the VRS setup files from ALU’s support site: Nuage-VRS-3.2.4-133-el7.tar.gz.

All the files can be downloaded from my Bitbucket repo nuage-nuts-lbaas-install, or grab the tar.gz file.

I’ve added some additional steps to my previous posts: an update of our neutron’s Nuage part from v3.2R4 to R5. The files must be downloaded from our support site:

  • nuagenetlib-2015.1.3.2.6_198-nuage.noarch.rpm
  • nuage-openstack-neutron-2015.1.1785-nuage.noarch.rpm
  • nuage-openstack-neutronclient-2015.1.1785-nuage.noarch.rpm

Get Nuage VSD domain details from the command line

A small Python app that I built from some examples in Philippe Dellaert’s repo at GitHub. I personally hate switching over to the GUI many times just to get a couple of values. This app helps you get details about your L3 domains, subnets and instances. You will also get the expiration date of your Nuage VSP license.

Before running any of these applications you have to install the following packages (if you’ve installed neutron-lbaasv2-agent as I showed in the previous section, you don’t need to add more repos to yum):


yum -y install python-pip
pip install bamboo
pip install vspk

Download list-domains-enterprise.py from here, and you’re done!

[root@os-controller python-files(kyst_adm)]# python list-domains-enterprise.py

License expiration date: 2016-12-31 15:59:59

Domains inside Enterprise OpenStack_Nuage_Lab
|- Domain: d24798fb-173d-483b-a6c8-c0949992584b
    |- Zone: def_zone-f4eac814-7543-4b5b-878a-cc95169d9762
        |- Subnets: 0240310e-d0da-4b78-9d50-fe67354123ac - 192.168.51.0 - 255.255.255.0
            |- Instance: instance-00000009
        |- Subnets: 02814ccd-e9ce-4415-9814-c0dcb71ec0f1 - 192.168.53.0 - 255.255.255.0
        |- Subnets: 1eaa4236-9c3a-4a83-9234-e5386fbeebf6 - 192.168.52.0 - 255.255.255.0
        |- Subnets: 45916c43-0f29-48bf-9fdd-332a2c99be5f - 172.16.1.0 - 255.255.255.0
        |- Subnets: d745c011-0573-4c00-b805-63d10dd397c3 - 192.168.54.0 - 255.255.255.0
    |- Zone: def_zone-pub-f4eac814-7543-4b5b-878a-cc95169d9762
--------------------------------------------------------------------------------
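
In case that link ever goes away, the heart of such a script is only a few lines of vspk. A sketch of the traversal, assuming the v3.2 API, the stock csproot credentials and your own VSD endpoint (the import path changes between vspk releases):


from vspk import v3_2 as vsdk  # import path varies per vspk release

session = vsdk.NUVSDSession(username='csproot', password='csproot',
                            enterprise='csp',
                            api_url='https://vsd01.nuage.lab:8443')
session.start()

for enterprise in session.user.enterprises.get():
    print('Domains inside Enterprise %s' % enterprise.name)
    for domain in enterprise.domains.get():
        print('|- Domain: %s' % domain.name)
        for zone in domain.zones.get():
            print('    |- Zone: %s' % zone.name)
            for subnet in zone.subnets.get():
                print('        |- Subnets: %s - %s - %s'
                      % (subnet.name, subnet.address, subnet.netmask))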

Source your OpenStack credentials

Sourcing your OS credentials will save you time when managing your resources thru the CLI. It’s a trivial thing; however, in case you didn’t know it, I’m giving you my personal file (admin.source):


export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=sag81-sled
export OS_AUTH_URL=http://10.0.0.10:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(kyst_adm)]\$ '

Create this file and use it as follows:


[root@os-controller ~]# neutron --os-username admin --os-password sag81-sled --os-tenant-name admin --os-auth-url http://10.0.0.10:5000/v2.0/ net-list 
+--------------------------------------+------------------+----------------------------------------------------+
| id                                   | name             | subnets                                            |
+--------------------------------------+------------------+----------------------------------------------------+
| b1a4897a-d6e8-4a0f-ae13-41a6bc40cea5 | private          | 45916c43-0f29-48bf-9fdd-332a2c99be5f 172.16.1.0/24 |
| 2f61f543-214f-462f-afb7-182ec816abe9 | external_network | 8f73aa92-e8af-454b-bffe-55c72257453b 10.0.1.0/24   |
+--------------------------------------+------------------+----------------------------------------------------+

[root@os-controller ~]# source admin.source 
[root@os-controller ~(kyst_adm)]# neutron net-list
+--------------------------------------+------------------+----------------------------------------------------+
| id                                   | name             | subnets                                            |
+--------------------------------------+------------------+----------------------------------------------------+
| b1a4897a-d6e8-4a0f-ae13-41a6bc40cea5 | private          | 45916c43-0f29-48bf-9fdd-332a2c99be5f 172.16.1.0/24 |
| 2f61f543-214f-462f-afb7-182ec816abe9 | external_network | 8f73aa92-e8af-454b-bffe-55c72257453b 10.0.1.0/24   |
+--------------------------------------+------------------+----------------------------------------------------+

Populate your admin tenant

I wrote this script (add-things-to-admin.py) as soon as I created my second lab at Nuts. I didn’t want to create instances and networks manually every time. It helps me save time and start showing the awesomeness of Nuage just minutes after I’ve got the lab running.

The app isn’t perfect; you can take it from where I left it. You could argue the same thing can be done thru heat, and in fact I have some nice yaml files that I will gather and share later.

Anyway, when you run the script it should show the following (don’t forget to allow ssh access thru security groups or VSD if you want to access any instance by its floating IP).


[root@os-controller python-files(kyst_adm)]# python nuts.adm.v3.py 
Creating keypair: mykey...
mykey done
Network b9e6f9a3-fba4-4d3f-8b37-8e0c4d6e8178 created
Sub-Network e83eab12-a231-4d4a-a334-9fded03052f5 created
Port {u'subnet_id': u'e83eab12-a231-4d4a-a334-9fded03052f5', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'e83eab12-a231-4d4a-a334-9fded03052f5'], u'port_id': u'e7172f8f-0458-49e8-aa1c-a27a70bcc006', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Network ff207aff-08b7-40a9-9ce0-1b03fda1b1f9 created
Sub-Network 5ced7285-974c-4a1e-83c7-8f8c809a1de4 created
Port {u'subnet_id': u'5ced7285-974c-4a1e-83c7-8f8c809a1de4', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'5ced7285-974c-4a1e-83c7-8f8c809a1de4'], u'port_id': u'149b327f-396b-4825-838f-a94f60fdd3bb', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Network 7e643f6d-1979-4b1a-aae0-f5330dc791cc created
Sub-Network da6afb58-2d40-4572-b0e0-60a0a828d836 created
Port {u'subnet_id': u'da6afb58-2d40-4572-b0e0-60a0a828d836', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'da6afb58-2d40-4572-b0e0-60a0a828d836'], u'port_id': u'7c5cc123-dd64-486c-95ac-81563edec87e', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Network 98d5b2ad-c8c0-4558-bdc2-617d4ad2fffa created
Sub-Network 9ceb8391-e525-4c30-a0bf-d4551e77814f created
Port {u'subnet_id': u'9ceb8391-e525-4c30-a0bf-d4551e77814f', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'9ceb8391-e525-4c30-a0bf-d4551e77814f'], u'port_id': u'2e759115-c594-4196-8059-e27dde410395', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Port cb285691-2681-4911-86fe-e413d8d7d0a3 created
Booting instance...Creating floating ip...Port abb19e89-9cf7-4113-8646-30e9e4c64ee0 created
Booting instance...Creating floating ip...Port f546cecb-ba5f-4c6b-a368-105257754fdc created
Booting instance...Creating floating ip...Port da35424f-3e59-4e87-a961-f94c0022e43b created
Booting instance...Creating floating ip...done
[root@os-controller python-files(kyst_adm)]# ping 10.0.1.5
PING 10.0.1.5 (10.0.1.5) 56(84) bytes of data.
64 bytes from 10.0.1.5: icmp_seq=1 ttl=61 time=4.61 ms
64 bytes from 10.0.1.5: icmp_seq=2 ttl=61 time=1.24 ms
64 bytes from 10.0.1.5: icmp_seq=3 ttl=61 time=1.42 ms

The OS image is fetched from the internet (http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1508.qcow2), and nothing more is required for this Python app than the basic stuff already loaded on the controller. Just be aware that the VSD license is not expired. Once you run this app, you can use the os-controller’s root private key to access any server thru its floating IP (starting at 10.0.1.4 if you haven’t used it before).

You will get four OpenStack-managed private networks/subnets already connected to the router. All components can be managed thru neutron afterwards.
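
Under the hood, add-things-to-admin.py is mostly a loop around the neutron (and nova) client bindings. A minimal sketch of the network part, assuming python-neutronclient on the controller and reusing the admin.source credentials from above (the network name and CIDR are just illustrative):


from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='sag81-sled',
                                tenant_name='admin',
                                auth_url='http://10.0.0.10:5000/v2.0/')

# Create a network plus its subnet and report the IDs, like the script does
net = neutron.create_network({'network': {'name': 'web-tier'}})['network']
print('Network %s created' % net['id'])
subnet = neutron.create_subnet({'subnet': {'network_id': net['id'],
                                           'ip_version': 4,
                                           'cidr': '192.168.51.0/24'}})['subnet']
print('Sub-Network %s created' % subnet['id'])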

[Screenshot: resulting networks and instances]

Enjoy and see you next time!