CNF/VNF Under-the-hood: EVPN saves the day for Layer-2 services

We are moving out! An updated post can be found here:

https://www.cloud-native-everything.com/cnf-vnf-under-the-hood-evpn-saves-the-day-for-layer-2-services/

Let’s say you need a more scalable Layer-2 solution for your VMs and containers. You can build some Linux bridges, or maybe use OVS and try some VXLAN/GRE encapsulation between hypervisors. Containers are a different kind of animal though: there are way more endpoints in every server, and we spread them across the datacenter. It could turn into your worst nightmare if you need to troubleshoot it.

DevOps and IT folks normally treat the network as a black box that connects endpoints, and still think hardware appliances can be replaced by kernel modules (I used to be one of them). To be fair, that line of thought can fit Enterprise use cases, but a Telco cloud with VNFs and CNFs plays in another league.

Calico is a great solution that works on a pure Layer-3 approach. Why? Well, they say that “Layer 2 networking is based on broadcast/flooding. The cost of broadcast scales exponentially with the number of hosts”. And yes, they are absolutely right… if you are not putting something like EVPN (RFC 7432) in place to help you build Layer-2 services, of course.

Calico Kubernetes Architecture

EVPN doesn’t rely on flooding and learning like other technologies; it actually uses a control plane (MP-BGP) to advertise MAC addresses across locations. You might say: “Well, you’ll be using encapsulation, which also has an important cost”… Well, yes and no. EVPN can actually use different data planes (i.e. MPLS). Nuage happens to use VXLAN, but nothing in EVPN limits you to one sort of transport.

Also, scale can kill any good idea, like relying on Linux kernel modules and services to route all communications in the datacenter. Treat every server as a router? Create security policies for every endpoint in the DC? Uff! Calico, for example, took its idea from the biggest network ever known: yes, the Internet. But that doesn’t mean this network is optimal. Many companies still need to rely on MPLS or other sorts of private WAN services to meet higher demands in security and performance.

Nuage and Kubernetes Architecture

Layer-2 services help simplify network design and security. Some Layer-2 domains don’t need to connect to any Layer-3 domain for security reasons. Also, Layer-2 domains in the datacenter can be directly attached to a Layer-2 service in the WAN (i.e. VPLS over MPLS). We could add many more things to the list, like private LTE, network slicing…

EVPN benefits

EVPN came as an improved model based on what was learnt from MPLS/VPLS operations. It’s also the joint work of many vendors, such as Nokia (former ALU), Juniper and Cisco. MPLS/VPLS relies on flooding and learning to build the Layer-2 forwarding database (FDB). EVPN introduced a new model for Ethernet services: it uses Layer-3, through MP-BGP, as a new delivery model to distribute MAC and IP routing information instead of flooding and learning.

In summary, these are the key benefits we can export to CNF/VNF use cases:

  • Scalability: Suppresses unknown unicast flooding, since all active MACs and IPs are advertised by the leaf or by the software router installed in the hypervisor.
  • Peace of mind: The network admin gets better control over how the cloud scales, avoiding issues with flooding, loops, or MAC mobility/duplication. And the cloud admin can keep provisioning and moving instances around with minimal concern about the impact on the network, with less overhead setting up Layer-2 services.
  • Workload mobility: With plain local learning, software routers can’t detect that a MAC address has moved to another hypervisor or host. EVPN uses a MAC mobility sequence number to always select the highest value and rapidly advertise any change (see the sketch after this list). Also, the software router local to the hypervisor always answers ARP requests for the default gateway, which avoids tromboning traffic across remote servers after a MAC moves.
  • Ready for IPv6: EVPN can manage IPv4 and IPv6 in both the control and data plane.
  • Industry standard: The software router can be directly integrated with Layer-2 services in the WAN through datacenter gateways, efficiently advertising MAC and IP routes from VMs and containers. Some VNFs/CNFs are very complex communication instances that need to work directly with Ethernet services, and it’s hard to find a more seamless, standard solution than EVPN. Some of you will tell me Segment Routing can be a better fit… Agreed. Great material for a later post though.
  • Resiliency: Multi-homing with all-active forwarding and load balancing between PEs. Don’t waste bandwidth on active/standby links: you can create a BGP multi-homed subnet to the datacenter gateway (DCGW) from any virtual workload.
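
To make the MAC mobility point above concrete, here’s a minimal Python sketch (my own illustration, not any vendor’s implementation or a real BGP stack): each MAC advertisement carries a mobility sequence number, and the highest one wins, so the latest move propagates instead of stale, locally learned state.

from collections import namedtuple

# Illustrative EVPN-style MAC mobility selection: the advertisement with
# the highest mobility sequence number wins (the idea behind RFC 7432).
MacRoute = namedtuple("MacRoute", ["mac", "next_hop", "seq"])

def best_route(advertisements):
    """Return the advertisement with the highest mobility sequence number."""
    return max(advertisements, key=lambda r: r.seq)

ads = [
    MacRoute("00:50:56:aa:bb:cc", "hypervisor-1", seq=3),  # stale location
    MacRoute("00:50:56:aa:bb:cc", "hypervisor-7", seq=4),  # the MAC just moved
]
print(best_route(ads).next_hop)  # -> hypervisor-7, traffic follows the move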


Reference and Source

Info from Calico: https://www.projectcalico.org/why-calico/

Ethernet VPN (EVPN) – Overlay Networks for Ethernet Services:

CNF/VNF Under-the-hood: Quality of Service (QoS) in SDN

We are moving out! An updated post can be found here:
https://www.cloud-native-everything.com/qos-in-cloud-networking/


Relevant technologies like DPDK and SmartNICs have been coming up lately to bring better performance options to CNF/VNF instances. Sharing a Linux kernel with other applications is not good enough for some network functions. However, there are still other options before considering replacing your hardware.


Quality of Service (QoS) could be the cheapest and quickest way to solve some scale issues. At the very least, it will give you a better understanding of what sort of traffic you’re dealing with, and help you avoid fights over resources with traffic managed by the IP stack for KVM, management or storage.


QoS sounds like the natural path for a network professional, but unnatural for IT folks and DevOps. Combining those skills gives you an easy and fast way to deploy QoS policies at big scale, and the power to adapt them much faster to changing conditions (i.e. market, covid, weather…).

CNF/VNF vs Network Infrastructure

OpenStack has become popular in Telco to orchestrate network functions, and along with it, different sorts of requirements have come up from the network. A VNF can be a bunch of VMs with different demands. Network traffic to/from those VMs can be managed through an OVS in the kernel, fighting other applications for resources between kernel interrupts.
VNF traffic can be diverse, and some packets should be treated and forwarded faster. If you don’t do anything, small critical packets (i.e. network control) will be treated like any other traffic, and they could be waiting their turn behind some big non-critical file transfer that maybe has no one urgently waiting on the other side.
There are hardware options like DPDK, offload NICs and SR-IOV. They are more expensive and require specific pieces of software to manage them. SR-IOV bypasses the OVS in the kernel, which brings a bigger orchestration challenge if you want to keep the much-demanded agility. And none of this saves you the fuss of designing appropriate QoS policies: resources are limited, no matter what magic you do.
5G and CNFs won’t make things easier. The density of endpoints per server has grown exponentially. VMs could still be managed through VLANs, command line and QoS policies over a traditional fabric; CNFs will force you to automate and orchestrate those policies through APIs and group policies based on metadata. Containers are stateless and change their IP address on every reboot. And sorry, you can’t skip QoS: the best-effort/FIFO approach of the kernel modules is not good enough anymore.

Let’s check some QoS concepts in the following section.

Common QoS Mechanisms

Quality of Service is more a collection of tools than just one thing. The following are the categories of those tools.

  • Classification and Marking (i.e. DSCP): Before entering the network, packets are classified and marked so any later switch/router or virtual router involved in forwarding can identify them just by reading the header. That makes things more efficient in time and resources. At marking time you don’t know if the packet will stay in the DC or will be sent through MPLS to another one; apps should be able to flow anywhere.
  • Queueing: Every router manages different “buckets” in its buffers for the packets identified by the marks mentioned before, where they wait to be forwarded. Packets classified with higher priority are, of course, forwarded according to that priority (see the sketch after this list). SDN usually relies on queues defined in the NIC and mapped back to the VM’s tap interface, where packets are marked before being forwarded anywhere.
  • Congestion avoidance: If a buffer at the network card fills up, packets start to get dropped. Tools like RED mitigate that sort of issue by dropping random packets early.
  • Policing (drop) and shaping (hold): Traffic conditioning; set a speed limit, a maximum of bandwidth. Policing drops the excess traffic, while shaping holds it until packets can be sent within the speed limit.
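
As a toy illustration of the queueing bullet above (a sketch of the concept, not how any particular virtual router implements it): packets are classified into per-priority queues by their mark, and the scheduler always drains the highest-priority non-empty queue first.

from collections import deque

# Toy strict-priority scheduler: one queue per priority, 7 = highest.
queues = {priority: deque() for priority in range(8)}

def enqueue(packet, dscp_to_priority):
    # Classify by DSCP mark; unknown marks fall into best-effort queue 0.
    queues[dscp_to_priority.get(packet["dscp"], 0)].append(packet)

def dequeue():
    # Always forward from the highest-priority queue with packets waiting.
    for priority in sorted(queues, reverse=True):
        if queues[priority]:
            return queues[priority].popleft()
    return None

mapping = {46: 7, 0: 0}  # EF -> top queue, default -> bottom
enqueue({"dscp": 0, "data": "bulk transfer"}, mapping)
enqueue({"dscp": 46, "data": "network control"}, mapping)
print(dequeue()["data"])  # -> network control, despite arriving later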

Classification and Marking (DSCP)

Class of Service (CoS)

The 802.1Q header of an Ethernet frame carries three bits you can use to define a CoS, giving you 8 values (0 to 7). Vendors like Cisco reserve the values 6 and 7 for other purposes, so it’s better to stick to values 0-5. Cisco marks some packets from their VoIP products with the value 5 by default.

Class of Service – CoS

Because CoS lives in the Ethernet frame, you will run into issues when packets go through a router: the router rewrites the Layer-2 header, and you might lose what was defined.

Type of Service (ToS) Byte

A byte in the IP packet header carries the Type of Service (IPv6 calls it Traffic Class). You may use the three leftmost bits of this byte to identify your type of service; those bits are used for IP precedence marking. As before, you should avoid the values 6 and 7, because those are still reserved, just like we showed for the CoS bits in frames.

If we add the next three bits of the byte, we expand the number of classification values from 8 to 64. A more realistic number nowadays, don’t you think? Those six leftmost bits of the byte are what we call DSCP (Differentiated Services Code Point).

IPv4 Type of Service / IPv6 Traffic Class Byte

We have values between 0-63 (64 possible values, like we said). To make things easier, the IETF set a standard for which values you should use to stay in sync with the rest of the world. They defined 21 values, gave them names, and called them Per-Hop Behaviors (PHB). The two most important values to remember are:

  • Default: 000000
  • Expedited Forwarding: 101110

The next table shows the Assured Forwarding (AF) values. It’s important to mention that the drop precedence is independent of the class. So if the buffer gets full and packets need to be dropped, AF22 will be dropped before AF11, even though class 2 has higher forwarding priority. Be careful selecting your values.

DSCP Assured Forwarding Table
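
The bit arithmetic is easy to check yourself. Here’s a quick sketch: an AF codepoint encodes the class in the upper three bits and the drop precedence in the next two, and the whole DSCP sits to the left of the two remaining bits of the ToS byte:

# DSCP bit arithmetic: EF and the AF (class + drop precedence) encoding.
def af(cls, drop):
    """AFxy codepoint: class x in the top 3 bits, drop precedence y next."""
    return (cls << 3) | (drop << 1)

EF = 0b101110                  # Expedited Forwarding = decimal 46
print(EF, af(1, 1), af(2, 2))  # -> 46 10 20  (EF, AF11, AF22)

# The full ToS byte is the DSCP shifted left past the two ECN bits:
print(EF << 2)                 # -> 184, the classic ToS value for EF traffic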

Random Early Detection (RED)

Now that we are talking about packet dropping, it’s important to talk about RED, the industry standard for dropping packets. Eventually your router will have to drop, and the question is how that’s managed. RED defines a maximum threshold: once the average queue depth hits it, the queue drops incoming traffic with 100% probability.

RED packet dropping

A better way to see it is with the next picture. As an example, take a queue with a maximum threshold of 45 packets of average queue depth: below 25 packets there’s no dropping; from 25 onwards the drop probability ramps up smoothly; and at 45, the probability jumps from 20% to 100%.

You can set those thresholds and the MPD (mark probability denominator). The MPD defines the drop probability reached just before hitting the maximum threshold (with an MPD of 5, that’s 1/5 = 20%). And for your information, almost no one ever touches those values, unless they have a very specific use case.

RED Drop probability
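
Here’s a hedged sketch of the RED math using the thresholds from the example (minimum 25, maximum 45, 20% drop probability just below the maximum):

# RED drop probability for a given average queue depth.
def red_drop_probability(avg_depth, min_th=25, max_th=45, max_prob=0.20):
    if avg_depth < min_th:
        return 0.0          # below the minimum threshold: no drops
    if avg_depth >= max_th:
        return 1.0          # past the maximum threshold: drop everything
    # Linear ramp between the thresholds, topping out at max_prob (1/MPD).
    return max_prob * (avg_depth - min_th) / (max_th - min_th)

for depth in (20, 30, 40, 44, 45):
    print(depth, round(red_drop_probability(depth), 3))
# -> 0.0, 0.05, 0.15, 0.19, then the jump to 1.0 at the maximum threshold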

Cisco goes further and varies that probability depending on the class of service, defining WRED profiles for each AF value.

WRED drop profile depending on Class of Service

Explicit Congestion Notification (ECN)

If you remember, we talked about the ToS byte and the bits used for DSCP. So what about the last 2 bits in the ToS byte? Those bits are used to notify the sender that a queue along the path is full or congested, asking it to slow down. If a router sets these bits to 11, it means it’s experiencing congestion and the sender should back off.

This is more informative; I didn’t want to leave that gap in this overview. We won’t use it in the CNF/VNF use cases.

Explicit Congestion Notification (ECN) – RFC 3168
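
A tiny sketch of reading those two bits out of the ToS byte:

# The two low-order bits of the ToS byte carry ECN (RFC 3168).
ECN_CODES = {0b00: "Not ECN-capable", 0b01: "ECT(1)",
             0b10: "ECT(0)", 0b11: "Congestion Experienced"}

tos_byte = 0b10111011              # example: EF DSCP with ECN bits set to 11
print(ECN_CODES[tos_byte & 0b11])  # -> Congestion Experienced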

Traffic Policing vs. Traffic Shaping

Well, a decent SDN solution should also offer traffic policing and traffic shaping. The idea is to understand the difference between the two.

Some SDN software applies them only to traffic going from the VM to the network, around the VM’s tap interface. Others must enforce them directly in an external hardware switch/router, identifying somehow which VM is sending each packet. But again, it depends on the technology/vendor.

Traffic Policing

Traffic policing is about limiting bytes over a certain period of time. We have two important values:

  • Committed Information Rate (CIR): the average speed the network guarantees to the VM’s or container’s port.
  • Committed Burst (Bc): the amount of data (usually in bytes) that can be sent as a group without violating or exceeding the CIR.

Let me illustrate this with the following example:

Let’s say the average speed, or CIR, is 200 Mbps and we allow a Bc of 20 MB. The VM sends an initial bunch of packets totaling 9.6 MB, so we have 10.4 MB, or 10.4M tokens, left in the bucket. If we send another burst 0.1s (100 ms) later, what is the most we can send without any packet delay or drop? The answer is 12.9 MB: the 10.4 MB left, plus the 2.5 MB of tokens added after 0.1s, per the formula 0.1s × 200 Mbps / 8 bits-per-byte = 2.5 MB:

traffic policing
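
You can verify the arithmetic with a few lines (same numbers as above: CIR of 200 Mbps, Bc of 20 MB, a first burst of 9.6 MB, and a 100 ms gap):

# Token-bucket policing arithmetic from the example above.
cir_mbps = 200        # committed information rate, in megabits per second
bc_mb    = 20.0       # committed burst (bucket size), in megabytes
burst_mb = 9.6        # first burst already sent, in megabytes
gap_s    = 0.1        # time before the next burst, in seconds

tokens_left = bc_mb - burst_mb        # 10.4 MB still in the bucket
refill      = gap_s * cir_mbps / 8    # 2.5 MB of tokens added in 100 ms
print(tokens_left + refill)           # -> 12.9 MB allowed in the next burst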

Traffic Shaping

Now, let’s do the same with traffic shaping:

  • We now have another value, called the Committed Time Interval (Tc)

We have 200 Mbps on a 1 Gbps virtual interface, and we want a Tc of 125 ms. That means we can send 25 Mbits every 125 ms. The enforcement is: you can fill a bucket with up to 25 Mbits every 125 ms; the rest of the packets may be dropped or delayed. However, those 25 Mbits will be sent at line speed, so the bucket can be emptied long before the 125 ms interval ends. You can then get a behavior like the one shown in the next picture.

traffic shaping
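
Again, the shaping numbers are quick to check (CIR of 200 Mbps, Tc of 125 ms, on a 1 Gbps interface):

# Traffic shaping: how much fits in each interval, and how fast it drains.
cir_mbps  = 200
tc_s      = 0.125
line_mbps = 1000

bc_mbits = cir_mbps * tc_s              # 25 Mbits allowed per 125 ms slice
drain_ms = bc_mbits / line_mbps * 1000  # the bucket empties at line speed
print(bc_mbits, drain_ms)               # -> 25.0 Mbits, gone in 25 ms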

QoS in a CNF/VNF Telco Cloud Setup

Instances like VNFs and CNFs commonly run on KVM and Docker Engine respectively. Either way, KVM or Docker will use the Linux IP stack to send traffic and let the kernel decide which network resources are used to ensure priority for its common tasks. For example, KVM uses specific queues in the NIC for some traffic like storage or management. The commands to configure the egress queue scheduling, kernel mappings, etc. depend heavily on the Linux distribution used on the hypervisor as well as the specific physical NICs and driver version.

Nuage QoS Classification, NIC queues, Forwarding Class and Kernel

Nuage uses a management plane (VSD) to manage multiple control planes over XMPP. VSD can be accessed through a GUI or APIs to define QoS policies at the vport, subnet, zone or domain (i.e. VRF) level.

Nuage forwarding class and traffic shaping in VSD


For VMs that use OVS with a VirtIO driver, Nuage uses 8 classes and binds every class to a different queue in the server NIC. You can mark traffic going into the network. Traffic arriving at the VM, however, is not affected; it’s basically not worth it, since the packet is already on the server.

In the previous image you can also see that traffic shaping policies can be defined at different levels in the domain.

Depending on the SDN software, you will also find different ways to deal with DSCP marking or traffic shaping. For example, you can define a specific policy based on TCP or UDP traffic.

DSCP marking in Nuage through a QoS policy based on specific TCP/UDP traffic or a former DSCP mark (only for traffic from VMs to the network)


Hope that explanation worked for you. You can check the references for more details though.

See ya!

References

We used info from Traffic Policing vs. Traffic Shaping

This post took notes from this tutorial from Kevin Wallace:

How to build an art painting canvas with a cheap drop cloth – getting ready

I’ve been taking a break from technology and doing some art painting. You can check some of my artwork on my Instagram. I’ve been working on paper blocks and murals, but now it’s time to start doing some canvases. And I want to go big: 7×5 feet.

You need tools like a saw to make 90- and 45-degree cuts in the 2in×4in timbers, and a planer to level off one edge of the exterior timbers used as the main frame.

Then, I will start with the following materials:

  • 9ft x 12ft canvas drop cloth ($32 at Home Depot)
  • 8 units of 4in×2in×8ft timbers ($4 × 8 = $48)
  • 1 gallon of primer in any color you want for your background (mine will be gray; it’s about $30)
  • Wood glue ($5)
  • Screws #8: 1 1/3in (40mm), 3in (75mm) & 4in (100mm) ($20 for boxes you can re-use for more canvases)

For building the frame you can use this video as reference.

After you build this canvas, I’d use the primer to cover it up and give it the required protection; I’d try at least one layer first. That way the canvas will last longer, and the painting process will be much easier. In my case, I use flat interior latex; that way I save some money versus using exterior paint.

I will buy the materials soon and let you know how the process goes.

I believe using art-quality versus house paint doesn’t make a difference if your design is good. Well, let’s see.

See ya!


Blindfolded lady in City Council


Mike and Troy, while hanging out at my place, asked me to do some artwork like the pieces I have at home, for their just-acquired restaurant: The City Council Restaurant & Bar. My first question was about the topic. It seems they will change the food menu to something related to Asian food – hopefully the drinks will stay, I love the Moscow mule there. The second was about doing a mural. Well, it was my first time doing something with flat latex paint; I’ve always worked with soft pastel or pencil. These are some of the things I have at my place:


Then I started working on the design. I thought of a geisha, but not the classic version of it; something modern. Then I found this:

I love the hairstyle and the makeup. However, I had to do something with the lips: they looked dry, lifeless. The eyes needed something too. Then something came up, something that had been stuck in my mind for some time.

For those who don’t know me: my native country is Chile, which has been struggling for some time with protests and a massive discomfort with the current government. I am not here to discuss that, though. Along with this, a feminist performance protesting violence against women started to come up in the media. The performance was created by the Valparaíso feminist collective #LasTesis, and it has been adopted in many other countries, countries with important rates of physical/emotional violence or discrimination against women (i.e. Mexico, Turkey…).


As part of the performance, women are blindfolded and wear green cloths on the neck. I thought of using that in this artwork, as my humble, small contribution to acknowledge the movement. I got a blindfold online and used GIMP to create the final design, which I used as the model for my painting.

Below you can see the process. Thanks for reading and see ya!

Enterprise cloud managed services: Daring moves to boost your offer

Here are some moves I think you should have in mind if you want a strong portfolio of cloud managed services for the Enterprise segment. It’s a market growing almost exponentially every year, so it’s worth the shot.

Here you have a summary of the options you should consider to boost your portfolio – enough to decide if you want to stay on this reading:

  • Databases: One of my savviest friends told me: “data has gravity”. As soon as you move a database to the public cloud, many other apps that depend on it will follow. This is a trend you should embrace, and I’ll tell you how and why later.
  • VMware on AWS: A great move from AWS. VMware has been the preferred virtualization platform ever. However, many organizations want to stop dealing with hardware and its life cycle. A good opportunity to dust off those ESX certificates and start supporting that transition.
  • SAP HANA: SAP is probably the most used ERP worldwide. SAP customers now must adopt HANA as a DB if they want to take full advantage of their data and real-time analytics. Also, it’s uncertain whether SAP will keep supporting other DB vendors after 2025. It’s the perfect time to evolve those BASIS services and innovate through the many options brought by the public cloud. More later.
  • CI/CD: PaaS has gotten an important boost thanks to containers and Kubernetes. Adoption of DevOps practices is hard to achieve though. What are you waiting for to help your customers? It’s also a good opportunity to resell software tool subscriptions to support this code-to-production pipeline.

Cloud Managed Services Market Size, Share & Trend Analysis Report By Service Type (Business, Network), By Deployment, By End-user, By Vertical, By Region, And Segment Forecasts, 2018 – 2025

Figure source: https://www.grandviewresearch.com/industry-analysis/cloud-managed-services-market

Next, I will bring more details on every bullet, don’t tear away from me yet.

Data has gravity

Migrating databases to the cloud is a very sensitive topic for any company. It’s complex, risky and resource-consuming.

That makes it a huge opportunity for any provider to sell a massive offer of high-rated managed services.

It’s a matter of time before any DB is moved to the public cloud anyway. It’s hard to hide the huge cost benefits of that change from top management – it’s not like market analysts aren’t rubbing it in our faces all the time.

So now is the time to seize the opportunity and bring in those experts.

As soon as a database is properly assessed, discussions transition to identifying which applications will have to move with it. This is why we say: “data has gravity”.

DB-Engines Ranking – Trend Popularity – July 2019

Figure source: https://db-engines.com/en/ranking_trend

Databases like MySQL and PostgreSQL can be moved to almost any public cloud right away. They are most typically used in web apps, and you can find tons of tools and tutorials online.

Legacy DBs: The classic dilemma

More legacy stuff like MS SQL and Oracle is also available in as-a-service mode. However, pricing is more convenient from the original vendor: you can get important savings, or even no additional cloud-service costs, using your current enterprise agreements or licenses.

On the other hand, you have vendors like GCP with Cloud SQL, and AWS with Aurora, a very convenient relational database service. There are programs with important savings in credits for your migration project, to encourage customers to move off Oracle/SQL. Of course, decisions like this depend on the complexity of your apps.

Analyze case by case.

NoSQL: New stuff in the air

New-generation databases, like NoSQL, can even be born for your organization directly in the public cloud.

Start by asking about high volumes of unstructured data, or performance and scalability issues with relational databases.

Cloud providers use different NoSQL technologies, and it can be a little overwhelming to choose the right one. Sometimes it depends on the use case (sort of queries, costs, throughput, app code, scalability), or simply on which cloud vendor hosts most of your applications. I’d say the most popular are MongoDB, AWS DynamoDB, and lately GCP Bigtable.

Finally, big data analytics for massive info can also be an interesting angle. GCP BigQuery has many public datasets ready to use for different needs like climate, sports, flights…

VMWare: A great move from AWS

Six years ago, I thought OpenStack/KVM would rule and take a big part of the market from VMware. Well, it seems it was too much risk and complexity for the Enterprise market – you can’t blame the customers, they have a life! VMware is still the preferred solution for compute virtualization; OpenStack seems more appealing for Telcos and their VNFs.

It seems the worn-out motivation for compute virtualization – optimizing compute resources – is long gone. That’s no longer enough to beat public cloud benefits. Companies want more savings, like getting rid of hardware refresh cycles.

Enterprise adoption of public cloud 2018

Figure source: https://www.forbes.com/sites/louiscolumbus/2018/09/23/roundup-of-cloud-computing-forecasts-and-market-estimates-2018/#6ab322f5507b

It’s no surprise, then, that AWS is having a blast taking ESX instances to the public cloud lately. From the operations point of view, the transition is smooth: you can keep your vCenter managing instances almost the same way you used to on-premises.

Where, then, is the opportunity for managed services? Well, change is hard for any organization. Migrating instances has some complexity, plus additional challenges in operations. Customers will also want to take advantage of mixing in other AWS services, like load balancers and S3.

Providers can support customers in that transition: analyze benefits – build a business case to justify the change based on mid-term savings –, design, and even take over an important part of the operation.

SAP HANA

An important trend in big data analytics is SAP HANA. Plus, SAP is pushing all customers to move to HANA from traditional DB instances like Oracle or SQL.

SAP HANA brings valuable benefits for real-time analysis of massive amounts of information collected from many sources. That’s something relational databases struggle with, due to their nature and structure, and mostly because they still work on disk.

The opportunity

This is also an important opportunity to develop advanced SAP managed services: from advanced SAP BASIS to more specialized skills such as industry-specific data scientists.

As soon as SAP ECC/BW starts working on HANA, you can either deploy what-if scenarios directly on relational data with no concerns about database performance, or tackle more complex use cases, like getting suggested actions in real time to reduce churn after pulling data from many sources (IoT, call center logs…).

Building SAP on HANA on-premises is neither easy nor cheap. You must acquire expensive, specifically certified hardware, and troubleshooting can be hard if you don’t have the skills in-house. Most cloud providers offer a specific IaaS for HANA. You can check them here: https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html.

Customers can apply BYOL and externalize SAP BASIS services to providers more experienced in dealing with cloud resources. Unless you expect those cloud vendors to add traditional tape-based backup services to their portfolio… I wouldn’t hold my breath.

CI/CD and the magic of containers

If you are familiar with CI/CD (Continuous Integration / Continuous Delivery), then you know it’s just a set of practices to go from code to production in the cheapest and fastest way. Containers are a key component of it.

There are so many tools on the market – and no doubt more coming – to create this appealing pipeline from dev to production.

All public clouds have developed their own version of Kubernetes (K8s) services, which is the most used container orchestration software so far.

Dev teams are not just buying K8s; they are buying the agility to release apps faster – or fail faster. So you have to understand their process from code to production. That pipeline depends on many things, like the language of preference, how micro-services interact, how they work with databases, and the CI/CD apps of choice (GitHub, GitLab, Jenkins)…

Also, monitoring is an under-rated topic that should be covered – another opportunity to sell subscription services along with managed services.

Final thoughts

I appreciate it if you made it to this point – even if you skipped some, or maybe all, of my notes.

The biggest challenge is building that sort of cutting-edge talent in-house – as if it weren’t hard enough to find and hire the right people with those highly demanded skills. Retaining them is even harder.

There’s no perfect way to build what’s required when you’re dealing with talented people. You may start with certified contractors or partners.

Also, you need to start developing partnerships with vendors like Google, AWS, Microsoft, SAP, VMware… That will take time though, and some of them will ask for some level of commitment or investment up front.

Start simple: maybe pick CI/CD, then add tools, and then some others. Look into your organization; I’m sure you can find some certified professionals in some of the techs I’ve mentioned.

And always remember that the hardest part is yet to come: working on the distribution and go-to-market strategy.

If you have doubts, ping me.

See ya, and happy selling!

Signed URLs #AWS #ansible filters

Based on “Sharing #AWS S3 files for ansible thru Signed URLs” and URL split filter for ansible

I was editing my config file to download images through ansible once or twice a week. Time to automate that.

The filter

Please check my previous post to see what you need to prepare for this. Actually, I have everything in a container. In summary, you just need “aws” and “boto” installed through python-pip.

# {% set credentials = [ aws_key, aws_pass, aws_bucket] %}
# {% set aws_vars = [ release_base_dir , '.*Nuage-elastic-.*\.qcow2$'] | join() | aws_s3_list(credentials) | signed_url_7d(credentials) %}
#
import re

import boto
from boto.s3.key import Key

class FilterModule (object):
    def filters(self):
        return {
            "signed_url_7d"   :   self.signed_url_7d,
            "aws_s3_list"   :   self.aws_s3_list,
        }

    # Generate a signed URL for a given key (path), valid for 7 days
    def signed_url_7d(self, path , credentials ):
        s3_session   =   boto.connect_s3(credentials[0], credentials[1])
        s3_bucket    =   s3_session.get_bucket(credentials[2], validate=False)
        s3_key       =   Key(s3_bucket)
        s3_key.key   =   path
        result = {
            'url'    : s3_key.generate_url( expires_in=604800 )  # 7 days in seconds
        }
        return result

    # Return the name of the first key in the bucket matching the regex
    def aws_s3_list(self, reg_value, credentials ):
        s3_session   =   boto.connect_s3(credentials[0], credentials[1])
        s3_bucket    =   s3_session.get_bucket(credentials[2], validate=False)
        regexp       =   re.compile(reg_value)
        for key in s3_bucket.list():
             if regexp.search(key.name.encode('utf-8')):
                   return key.name.encode('utf-8')
        return 'ERROR: regex not found'

What do we want?

This is the file we want to generate, to download my tar and qcow2 images:

url_base_location: "https://nuage-secure-files.s3.amazonaws.com/5.2.2-files/"
#files release dependant
url_elasticsearch_qcow2: "Nuage-elastic-5.2.2-22.qcow2?Signature=Z7g3C%2BOcAM2zPEjZ17Smyqbtt4E%3D&Expires=1521557331&AWSAccessKeyId={{ aws_access_key }}"
url_openvswitch_rpm: "nuage-openvswitch-5.2.2-25.el7.x86_64.rpm?Signature=ZdLX2sCZHKUcLDbx177V0Ghiing%3D&Expires=1521557332&AWSAccessKeyId={{ aws_access_key }}"
url_vsc_qcow2: "vsc_singledisk.qcow2?Signature=2C3oIIz0Kx%2BV0RO32I8SB7bg5JQ%3D&Expires=1521557333&AWSAccessKeyId={{ aws_access_key }}"
url_vsd_qcow2: "VSD-5.2.2_24.qcow2?Signature=U171si2fTXBqeXe9C%2B26H7dN4gw%3D&Expires=1521557333&AWSAccessKeyId={{ aws_access_key }}"

What template we’ll use

Here you have the template, in the templates folder in the playbook dir:

{% set credentials = [ aws_key, aws_pass, aws_bucket] %}
{% set aws_vars = [ release_base_dir , '.*Nuage-elastic-.*\.qcow2$'] | join() | aws_s3_list(credentials) | signed_url_7d(credentials) %}

url_base_location: "{{ 'https://' + aws_vars.url | urlsplit('hostname') + '/' + release_base_dir + '/' }}"

{% set aws_vars = [ release_base_dir , '.*Nuage-elastic-.*\.qcow2$'] | join() | aws_s3_list(credentials) | signed_url_7d(credentials) %}
url_elasticsearch_qcow2: "{{ aws_vars.url | replace( 'https://' + aws_vars.url | urlsplit('hostname') + '/' + release_base_dir + '/' , '') | replace( aws_key , '{'+'{'+' aws_access_key '+'}'+'}')  }}"
{% set aws_vars = [ release_base_dir , '.*nuage-openvswitch-.*\.rpm$'] | join() | aws_s3_list(credentials) | signed_url_7d(credentials) %}
url_openvswitch_rpm: "{{ aws_vars.url | replace( 'https://' + aws_vars.url | urlsplit('hostname') + '/' + release_base_dir + '/' , '') | replace( aws_key , '{'+'{'+' aws_access_key '+'}'+'}')  }}"
{% set aws_vars = [ release_base_dir , '.*vsc_singledisk.*\.qcow2$'] | join() | aws_s3_list(credentials) | signed_url_7d(credentials) %}
url_vsc_qcow2: "{{ aws_vars.url | replace( 'https://' + aws_vars.url | urlsplit('hostname') + '/' + release_base_dir + '/' , '') | replace( aws_key , '{'+'{'+' aws_access_key '+'}'+'}')  }}"
{% set aws_vars = [ release_base_dir , '.*VSD-.*\.qcow2$'] | join() | aws_s3_list(credentials) | signed_url_7d(credentials) %}
url_vsd_qcow2: "{{ aws_vars.url | replace( 'https://' + aws_vars.url | urlsplit('hostname') + '/' + release_base_dir + '/' , '') | replace( aws_key , '{'+'{'+' aws_access_key '+'}'+'}')  }}"

The playbook and how it works

This is the playbook

- hosts: localhost
  remote_user: root
  tasks:
    - name: test jinja2
      template: src=files_location.j2 dest="{{ playbook_dir }}/cfg/{{ cfg_dir }}/files_location.yml" backup=no

Let’s run it this way (replace aws_key and aws_pass with your own):

ansible-playbook -e aws_key="ABCDILUSFSBXLXMIQRSA" -e aws_pass="ABCD*T2oRghq47TCIKxoH2lhk4XXMibLkfilih/" -e aws_bucket="nuage-secure-files" -e release_base_dir="5.2.2-files" -e cfg_dir=5.2.2-docker-sdwan-elastic-ip  files-location.yml

See ya!

ipcalc #ansible filter – manage floating ip ranges for packet.net

Post based on https://github.com/digineo/ansible-ipcalc

I’ve been working many times with floating IPs for my SDWAN demos and deployments. It’s been somewhat annoying to change the configuration files so many times: figuring out which IPs I should assign to every server depending on the segment and netmask that Packet gave me. Those changes are also prone to typos that waste precious time.

In this post I’m going to show you the ansible filters around ipcalc in a real use case.

What do you need to start?

A CentOS/RedHat server with pip. I used a container image in my case:

docker run -d --name brickle-lab-data-only pinrojas/nuage-ansible:v1.2 true
docker run -d -i -t --volumes-from brickle-lab-data-only --name brickle-lab pinrojas/nuage-ansible:v1.2
docker exec -ti brickle-lab /bin/bash 

You will create a container called brickle-lab and a data container called brickle-lab-data-only.
For more details, check my post “data-only containers for ansible automation”.

Finally, you will be inside the container with all you need to run your playbooks and use your ipcalc filters.

The filter

Create or choose a folder to run your playbook in.
Then, create a subfolder called “filter_plugins”.
This is the filter I’ve created in that folder:

# https://github.com/digineo/ansible-ipcalc
#
# save this file in $ansible/filter_plugins/
#
# example usage in a jinja2 template:
# {% set network = "172.16.0.1/24" | ipcalc %}
#
# {{ "192.168.0.1" | ipadd(3) }} == "192.168.0.4"
# {{ "fe80::" | ipadd("::3") }} == "fe80::3"
#

import ipcalc

class FilterModule (object):
    def filters(self):
        return {
            "ipcalc": self.ipcalc,
            "ipadd":  self.ipadd,
        }

    def ipcalc(self, value):
        net    = ipcalc.Network(value)
        result = {
            'version'   : net.version(),
            'netmask'   : str(net.netmask()),
            'subnet'    : net.subnet(),
            'size'      : net.size(),
            'prefix'    : sum([bin(int(x)).count("1") for x in str(net.netmask()).split(".")])
        }

        if net.version() == 6:
            result['network']  = net.network().to_compressed()
            result['host_min'] = net.host_first().to_compressed()
            result['host_max'] = net.host_last().to_compressed()
        if net.version() == 4:
            result['network']   = str(net.network())
            result['host_min']  = str(net.host_first())
            result['host_max']  = str(net.host_last())
            result['broadcast'] = str(net.broadcast())

        return result

    # Add two addresses
    # works for IPv4 and IPv6
    def ipadd(self, one, another):
        version = 6 if (':' in one) else 4
        addr    = ipcalc.IP(ipcalc.IP(one).ip + ipcalc.IP(another).ip, version=version)
        if version == 6:
            return addr.to_compressed()
        else:
            return str(addr)

What do I want to get from it?

This is the final file I intend to get as a result:

public_dns: 8.8.8.8

nuage_core_location: ewr1
prefix_elastic_ip: 29
netmask_elastic_ip: 255.255.255.248
gateway_elastic_ip: 147.75.38.9
util_elastic_ip: 147.75.38.10
vsc1_elastic_ip: 147.75.38.11
vsc2_elastic_ip: 147.75.38.12
dhcp_min_elastic_ip: 147.75.38.13
dhcp_max_elastic_ip: 147.75.38.14
dhcp_net_elastic_ip: 147.75.38.8
dhcp_bcst_elastic_ip: 147.75.38.15
dhcp_mask_elastic_ip: 255.255.255.248

branchA_location: sjc1


branchB_location: sjc1
# first uplink
bB1_gateway: 147.75.88.13
bB1_prefix: 30
bB1_dhcp_min: 147.75.88.14
bB1_dhcp_max: 147.75.88.14
bB1_dhcp_net: 147.75.88.12
bB1_dhcp_mask: 255.255.255.252
bB1_dhcp_bcst: 147.75.88.15
# second uplink
bB2_gateway: 147.75.88.117
bB2_prefix: 30
bB2_dhcp_min: 147.75.88.118
bB2_dhcp_max: 147.75.88.118
bB2_dhcp_net: 147.75.88.116
bB2_dhcp_mask: 255.255.255.252
bB2_dhcp_bcst: 147.75.88.119

How am I gonna get it?

I’ve created the templates folder in my playbook dir, and in it this file called elastic_ip.j2:

public_dns: 8.8.8.8

{% set nuage_core_net = nuage_core_net | ipcalc %}
nuage_core_location: {{ nuage_core_loc }}
prefix_elastic_ip: {{ nuage_core_net.prefix }}
netmask_elastic_ip: {{ nuage_core_net.netmask }}
gateway_elastic_ip: {{ nuage_core_net.host_min }}
util_elastic_ip: {{ nuage_core_net.host_min | ipadd(1) }}
vsc1_elastic_ip: {{ nuage_core_net.host_min | ipadd(2) }}
vsc2_elastic_ip: {{ nuage_core_net.host_min | ipadd(3) }}
dhcp_min_elastic_ip: {{ nuage_core_net.host_min | ipadd(4) }}
dhcp_max_elastic_ip: {{ nuage_core_net.host_max }}
dhcp_net_elastic_ip: {{ nuage_core_net.network }}
dhcp_bcst_elastic_ip: {{ nuage_core_net.broadcast }}
dhcp_mask_elastic_ip: {{ nuage_core_net.netmask }}

branchA_location: {{ bA_loc }}


branchB_location: {{ bB_loc }}
# first uplink
{% set bB1_net = bB1_net | ipcalc %}
bB1_gateway: {{ bB1_net.host_min }}
bB1_prefix: {{ bB1_net.prefix }}
bB1_dhcp_min: {{ bB1_net.host_min | ipadd(1) }}
bB1_dhcp_max: {{ bB1_net.host_min | ipadd(1) }}
bB1_dhcp_net: {{ bB1_net.network }}
bB1_dhcp_mask: {{ bB1_net.netmask }}
bB1_dhcp_bcst: {{ bB1_net.broadcast }}
# second uplink
{% set bB2_net = bB2_net | ipcalc %}
bB2_gateway: {{ bB2_net.host_min }}
bB2_prefix: {{ bB2_net.prefix }}
bB2_dhcp_min: {{ bB2_net.host_min | ipadd(1) }}
bB2_dhcp_max: {{ bB2_net.host_min | ipadd(1) }}
bB2_dhcp_net: {{ bB2_net.network }}
bB2_dhcp_mask: {{ bB2_net.netmask }}
bB2_dhcp_bcst: {{ bB2_net.broadcast }}

Time to make it work

This is the playbook called “elastic_ip_cfg.yml”:

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Create files in extras
      template: src=elastic_ip.j2 dest="{{ playbook_dir }}/cfg/{{ nuage_release }}/elastic_ip.yml" backup=no

You should see something like this:

[dev@cc9816a6e5f8 pk-elastic-ip]$ ansible-playbook -e nuage_release=5.2.2-docker-sdwan-elastic-ip -e nuage_core_loc=ewr1 -e nuage_core_net=147.75.72.133/29 -e bB_loc=sjc1 -e bA_loc=sjc1 -e bB1_net=147.75.88.18/30 -e bB2_net=147.75.88.116/30  elastic_ip_cfg.yml
 [WARNING]: Could not match supplied host pattern, ignoring: all

 [WARNING]: provided hosts list is empty, only localhost is available


PLAY [localhost] *************************************************************************************************************************************************************

TASK [Create files in extras] ************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP *******************************************************************************************************************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0

And we’re done! See ya!

Happy 2018, and by the way, SDWAN is growing fast…

I’ve been working for more than two years at Nuage Networks, traveling all around the Americas (North, Central and South. And BTW, Mexicans are North Americans. Please don’t tell me otherwise, ok?). Now we have SDWAN initiatives through local operators, service providers and startups all over.

In the beginning, almost all my actions were related to promoting and showing how SDN can make things simpler and faster at the datacenter. Techs like OpenStack and Docker were my entry point into any opportunity, even though most companies are still stuck with VMware. Customers want to lower costs and take advantage of orchestration. However, there’s a big thing to sort out first… “the network”. Despite projects like Neutron and Docker, companies still have so much to do to make them work widely.

Nuage can manage any kind of workload at the datacenter, either public or private ones. It can help to make that transition much faster and smoother.

Over the last year, though, I’ve been putting more and more hours into SDWAN. Why? Let’s blame instant gratification… or smartphones, or Moore’s law.

An important factor is that Telcos (a.k.a. service providers) lag badly on innovation (i.e. orchestration). On the other hand, customers are hungrier than ever for more reliable, agile, affordable services.

Another reason is that the Internet has many times more coverage, better pricing and better reliability than a few years ago. Services like Spotify, Amazon Video and Netflix are showing us that downloading an app is much cheaper and faster than waiting for the cable TV technician to show up. Or are you still using a separate cable for each TV, phone and Internet service at home?

Service providers should seize the opportunity instead of figuring out how to protect their current network.

Service providers are a big pay-check opportunity for companies like ours, and we’re helping some of them develop new services on top of SDWAN. But where’s the rush for most of them? They made a big investment in their MPLS networks at the end of the “last century”, and they’re getting more profit by over-subscribing them.

Companies like banks and retailers are tired of that: waiting months to get services configured, paying fortunes every month for an MPLS service that actually sucks. How much revenue do you lose while waiting to get your branch connected? They just hadn’t known any other option until SDWAN came up.

SPs still think of SDWAN as the cheap CPE box they used to install at branches, when SDWAN actually puts all the intelligence at the endpoints, making the network devices irrelevant once the VPN is established. Customers get “absolute” visibility into every uplink’s performance, better redundancy management between two links attached to “different” networks (i.e. MPLS and Internet), and they can even switch paths for their apps based on jitter information and thresholds.

Some local operators have seen it, even though they have a “huge” disadvantage versus the known service providers because of their tiny or non-existent network infrastructure. They don’t have a huge investment to protect, so they can be more creative about what services to offer on top: bringing “premium” private networks to enterprises without owning a piece of cable, just as Uber brings transportation services without owning the cars.

Just ping me and I’ll let you know who is actually offering that in your location 😉 or if you want to become one of those visionaries.

See ya!

Sharing #AWS S3 files for ansible thru Signed URLs

Hi guys, I love automation. However, so much automation should go along with a secure way to share files like qcow2 images and RPMs. I am doing that on my own and I would like to share what I’ve learnt.
First of all, thanks to Remi (@rvichery), Guillermo Alvarado (@galvarado89) and Boris Cortes (@boriscortes) for helping me out to find the best method here.

Prepare your system

It’d be great if you create a temporary user/group. In my case, I created a group called “read_only”, like in the next picture, with the “AmazonS3ReadOnlyAccess” permission attached.

Then I created a user, getting the AccessKeyID and the SecretKey. Install the aws client. Guess what OS in my case. Nope! It isn’t Mac OSX this time; I am doing it on CentOS 🙂

yum -y install python-pip
pip install aws
pip install boto #checking if it was installed already

Configure your access with: “aws configure” and enter the required info (AccessKeyID and SecretKey)
Now you can check your files in the bucket as follows:

[root@ansible-sdnlab test]# aws s3 ls --recursive nuage-secure-files/5.1.1u1-files
2017-09-22 14:13:12          0 5.1.1u1-files/
2017-09-22 14:13:12    2672108 5.1.1u1-files/libnetwork5.1.1.rpm

Ok, now let’s filter that to use it later, as follows:

[root@ansible-sdnlab test]# aws s3 ls --recursive nuage-secure-files/5.1.1u1-files | awk '{ print $4 }'
5.1.1u1-files/
5.1.1u1-files/libnetwork5.1.1.rpm

Create your python app to generate URLs

Ok, this is a basic app I’ve made to generate my signed URL with an expiration date (I’m using 45 days as the expiration deadline, in seconds):

import sys
import boto
from boto.s3.key import Key
import requests

#setup the bucket
c = boto.connect_s3('AFSGSHDKDKLAHDJDA', 'yeGe/dz9nxxOOxxOOxxxOOxxOOxxO/TI')
b = c.get_bucket('nuage-secure-files', validate=False)

#download the file
k = Key(b)
k.key = sys.argv[1]
#url_prt = k.generate_url( expires_in=259200, force_http=True )
#url_prt = k.generate_url( expires_in=259200, )
url_prt = k.generate_url( expires_in=3888000, )

print url_prt

Ok, now let’s create our first signed URL:

[root@ansible-sdnlab test]# python url_keygen.py 5.1.1u1-files/libnetwork5.1.1.rpm
https://nuage-secure-files.s3.amazonaws.com/5.1.1u1-files/libnetwork5.1.1.rpm?Signature=IggFmPRRJlWFvyMHJcYKgYWbXOQ%3D&Expires=1510005892&AWSAccessKeyId=AFAGAFADAFAHGAJKA

And basically you are done.

Playing with ansible

Ok, now let’s create a list of signed URLs for all the files (a.k.a. keys, in AWS’s words):

for i in `aws s3 ls --recursive nuage-secure-files/5.1.1u1-files | awk '{ print $4 }'` ; do python url_keygen.py $i >> signed_url_list.txt; done

After that, I threw that list into a YAML file like this:

rpm_lib_location: "https://nuage-secure-files.s3.amazonaws.com/5.1.1u1-files/libnetwork5.1.1.rpm?Signature=IggFmPRRJlWFvyMHJcYKgYWbXOQ%3D&Expires=1510005892&AWSAccessKeyId={{ AWS_Access_Key }}"

You can add “-e AWS_Access_Key=AFAGAFADAFAHGAJKA” when you’re invoking your playbook.
And I call the library as follows:

    - name: install library
      yum:
        name: '{{ rpm_lib_location }}'
        state: present
      remote_user: root

That’s all… See ya!

SDN/SDWAN lab on-premises within 60min #Nuage

Creating your SDWAN lab on-premises with ansible is a nicer experience.
I have this new playbook that will help you. So far, it works on just one KVM host. Nuage instances will be accessed via ssh through the KVM host’s ports with DNAT (iptables). Just check the “hosts” inventory file after building it.

We’ll do the following:
* yum update of your KVM host
* Install libvirt and plain openvswitch (using OVS instead of a Linux bridge, just in case you need to use more than one server)
* Create PAT rules on your KVM host (VMs will be accessed through NAT ports. You can change that if you want… be my guest 🙂
* Create libvirt domains (remember to modify the image locations if your internet access is not one of the best)
* Configure your NTP/DNS server (using 8.8.8.8 as forwarder; you can change that in the playbook)
* Configure every VM (VSD, VSC…)

NOTE: Before going ahead, you need to get an eval license from your Nuage rep and ask me for the AWS access key for the files. You can also register yourself at nuagex.io and get the Nuage experience off-premises, much faster.

You will get something like this

More details about the architecture at: https://pinrojas.com/2017/04/03/your-own-sdwan-demo-at-packet-net-60min/

Elements in the inventory are configurable. You have many options for releases and use cases. Servers will need unrestricted direct internet access.

The KVM host should have at least 64GB of RAM (using over-subscription) and CentOS 7.

Build and run your nuage-ansible container

Prepare your ansible container to run your playbooks. It can’t be the same KVM host where you want to run all the VMs.

Install docker
If you have CentOS, it should be done as follows.
NOTE: Use any other device besides the KVM server, with docker and full remote access to those servers.

yum -y update
yum -y install docker
systemctl start docker
systemctl enable docker

Build your nuage-ansible container

Create a Folder and download Dockerfile and other files

mkdir ~/nuage-ansible
cd ~/nuage-ansible
curl -o Dockerfile https://raw.githubusercontent.com/p1nrojas/pk-nuage-multi/master/install/onpremises/Dockerfile
curl -o bash_profile https://raw.githubusercontent.com/p1nrojas/pk-nuage-multi/master/install/onpremises/bash_profile
curl -o vimrc https://raw.githubusercontent.com/p1nrojas/pk-nuage-multi/master/install/onpremises/vimrc
curl -o setup.sh https://raw.githubusercontent.com/p1nrojas/pk-nuage-multi/master/install/onpremises/setup.sh

Get your Nuage license

Contact your Nuage/Nokia rep to get your eval license and copy the key to ~/nuage-ansible/.nuage_license_key

Build your docker image

Do the following and wait about 10 min, depending on your internet access. Check at the bottom to see what sort of output you should expect:

cd ~/nuage-ansible
docker build -t pinrojas/nuage-ansible:v1.0 .

Create your data and app container

I’m using a data container to avoid issues with permissions on the host system. Run the following:

docker run -d --name nuage-ansible01-data-only01 pinrojas/nuage-ansible:v1.0 true
docker run -d -i -t --volumes-from nuage-ansible01-data-only01 --name nuage-ansible-app01 pinrojas/nuage-ansible:v1.0

Getting access to your app container

Now get access through the following command and check that the following was created:

docker exec -ti nuage-ansible-app01 /bin/bash

You should see something like this:

mroja001@usmovnmroja001:~/$ docker exec -ti nuage-ansible-app01 /bin/bash
[dev@f1fb5a3d07da ~]$ ls
ansible  images  pk-nuage-multi  setup.sh
[dev@f1fb5a3d07da ~]$ cd pk-nuage-multi/
[dev@f1fb5a3d07da pk-nuage-multi]$ ls
LICENSE      build-packet-reset.yml  cheat_sheet.md  library                   packet-bmetal-reset.yml      pk-create-40r10.sh  roles
README.md    build-packet.yml        files           onprem-create.yml         packet-inventory-create.yml  pk-create-40r9.sh   templates
ansible.cfg  cfg                     install         packet-bmetal-create.yml  packet-inventory-reset.yml   plugins
[dev@f1fb5a3d07da pk-nuage-multi]$ ls ~/.ssh/
id_rsa  id_rsa.pub

Preparing and running your playbook to install Nuage VCS/VNS

If your internet access is not good enough, you should download all your VM images and files in advance and change the locations in the YML files.

Prepare your installation

You’re inside the container now. Go to the ~/pk-nuage-multi/cfg folder.
You will see many options to set up your env depending on the use case.
We’ll take the simplest one, ~/pk-nuage-multi/cfg/5.1.1u1-docker-sdwan.
You will install only the VSD, VSC, ES and NTP/DNS servers.

Check the location of the files in ~/pk-nuage-multi/cfg/5.1.1u1-docker-sdwan/files_location.yml
You will see something like:

#files release dependant
url_base_location: "https://nuage-secure-files.s3.amazonaws.com/5.1.1u1-files/"
url_elasticsearch_qcow2: "elastic5.1.1.qcow2?Signature=6fsjqVo22rB66PS8i3j59Lg%2Fx7E%3D&Expires=1510002581&AWSAccessKeyId={{ aws_access_key }}"
url_libnetwork_rpm: "libnetwork5.1.1.rpm?Signature=5%2Ftbd8p7RmKgcDEYeMjBprF123g%3D&Expires=1510002582&AWSAccessKeyId={{ aws_access_key }}"
url_newton_openstack_nova_esxi_rpm: "newton-el7/nuage-nova-esxi-14.0.0-5.1.1_21_nuage.noarch.rpm?Signature=gQtUTiDNjBhv4V1Ci32PdevCOg4%3D&Expires=1510002582&AWSAccessKeyId={{ aws_access_key }}"

Instead, you would like something like the following. Your KVM host has to have access to that URI.

url_base_location: "https://intranet.sdn/"
url_elasticsearch_qcow2: "elastic5.1.1.qcow2"
url_libnetwork_rpm: "libnetwork5.1.1.rpm"
url_newton_openstack_nova_esxi_rpm: "newton-el7/nuage-nova-esxi-14.0.0-5.1.1_21_nuage.noarch.rpm"

Now, get the public key at /home/dev/.ssh/id_rsa.pub and copy it into the /root/.ssh/authorized_keys file on the KVM host.

Building your inventory

When you are ok with the preparation, you can run the playbook as follows:

cd ~/pk-nuage-multi
ansible-playbook -e nuage_release=40r9-core -e lab_domain=nuage.lab -e lab_network_prefix=172.16.1 -e host_ip=192.168.2.24 onprem-create-inventory.yml

The variables are the following:
* nuage_release: the nuage release and configuration option inside the cfg folder
* lab_domain: the domain to be set on the DNS and for all servers
* lab_network_prefix: the /24 network to use in the lab (use one that won’t conflict with your site)
* host_ip: the KVM host where the servers will be running

If everything goes well, you should have a file called hosts in the playbook folder like this:

[dev@c4af69b09e6a pk-nuage-multi]$ cat hosts
---
# *** WARNING ***
# This file is automatically generated by build.yml.
# Changes made to this file may be overwritten.
#

[bmetal]
core ansible_host=192.168.2.24 ansible_user=root


[nserver]
ns1.nuage.lab ansible_host=192.168.2.24 ansible_user=centos ansible_port=2012

[vsd]
vsd1.nuage.lab ansible_host=192.168.2.24 ansible_user=root ansible_port=2020

[vsc]
vsc1.nuage.lab ansible_host=192.168.2.24 ansible_user=root ansible_port=2031

[stat]
stat.nuage.lab ansible_host=192.168.2.24 ansible_user=root ansible_port=2024

[webvirtmgr]
virt.nuage.lab ansible_host=192.168.2.24 ansible_user=centos ansible_port=2030

Building your lab

The next playbook will do the following:
* yum update of your KVM host
* Install libvirt and plain openvswitch
* Create pat rules at your KVM Hosts
* Create libvirt domains (VMs)
* Configure your NTP/DNS server
* Configure every VM (VSD, VSC…)

Run the installation as follows:

cd ~/pk-nuage-multi
ansible-playbook -i hosts deploy-all.yml

deploy-all.yml will run all the playbooks.

Optionally you can use ovs-reset.yml and reset-all-domains.yml to reset your installation in case of any mistake.

See ya!

ADDITIONAL INFO:

Docker image building process

The following is the output you should get when you build your container image:

root@box01:~/nuage-ansible$ docker build -t pinrojas/nuage-ansible:v1.0 .
Sending build context to Docker daemon 10.24 kB
Step 1/38 : FROM centos:7.3.1611
 ---> 262f7381844c
Step 2/38 : RUN yum -y install epel-release && yum clean all
 ---> Using cache
 ---> 644fd82dae24
Step 3/38 : RUN curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
 ---> Using cache
 ---> b7e768df6f6c
Step 4/38 : RUN python get-pip.py
 ---> Using cache
 ---> 9b719fdf621b
Step 5/38 : RUN yum -y install ansible && yum clean all
 ---> Using cache
 ---> f24e0adf79f3
Step 6/38 : RUN yum -y install vim && yum clean all
 ---> Using cache
 ---> 1bb819ba75bd
Step 7/38 : RUN yum -y install curl && yum clean all
 ---> Using cache
 ---> d27b86305924
Step 8/38 : RUN yum -y install git && yum clean all
 ---> Using cache
 ---> 8a72a851696d
Step 9/38 : RUN rm -rf /usr/lib/python2.7/site-packages/chardet*
 ---> Using cache
 ---> 365efd7fa825
Step 10/38 : RUN pip install vspk
 ---> Using cache
 ---> 97e702309870
Step 11/38 : RUN pip install pexpect
 ---> Using cache
 ---> a141c3b9185d
Step 12/38 : RUN pip install packet-python
 ---> Using cache
 ---> 17dcb0ff39e9
Step 13/38 : RUN pip install django
 ---> Using cache
 ---> 68c70d14a5a7
Step 14/38 : RUN useradd dev
 ---> Using cache
 ---> 9eea619e87d5
Step 15/38 : RUN mkdir -p /home/dev/.ssh
 ---> Using cache
 ---> cf9c95c6f843
Step 16/38 : RUN mkdir -p /home/dev/images
 ---> Using cache
 ---> d9a997faf4e6
Step 17/38 : RUN mkdir -p /home/dev/pk-nuage-multi
 ---> Using cache
 ---> ab79724a79bb
Step 18/38 : RUN mkdir -p /var/log/ansible
 ---> Using cache
 ---> 0032fa2f2b1c
Step 19/38 : RUN chown -R dev:dev /home/dev
 ---> Using cache
 ---> 32856383b5d9
Step 20/38 : RUN chown -R dev:dev /var/log/ansible
 ---> Using cache
 ---> 530ea3da4591
Step 21/38 : VOLUME /home/dev/pk-nuage-multi
 ---> Using cache
 ---> 32c035a8b8ae
Step 22/38 : VOLUME /home/dev/.ssh
 ---> Using cache
 ---> f61f1c9170a0
Step 23/38 : VOLUME /var/log/ansible
 ---> Using cache
 ---> 45bb5df13ed1
Step 24/38 : WORKDIR /home/dev
 ---> Using cache
 ---> 87d1081701a5
Step 25/38 : ENV HOME /home/dev
 ---> Using cache
 ---> ea7f117a31a2
Step 26/38 : ADD vimrc /home/dev/.vimrc
 ---> Using cache
 ---> cf729acf4c10
Step 27/38 : ADD setup.sh /home/dev/setup.sh
 ---> Using cache
 ---> 176d7267eab5
Step 28/38 : ADD .nuage_license_key /home/dev/.nuage_license_key
 ---> Using cache
 ---> 78e9575c438f
Step 29/38 : RUN chmod 755 /home/dev/setup.sh
 ---> Using cache
 ---> 2701ffc9a9f0
Step 30/38 : ADD bash_profile /home/dev/.bash_profile
 ---> Using cache
 ---> 830c0edf3b3f
Step 31/38 : RUN mkdir -p /home/dev/.vim/autoload /home/dev/.vim/bundle
 ---> Using cache
 ---> ccebbfc38fba
Step 32/38 : RUN curl -LSso /home/dev/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
 ---> Using cache
 ---> 1a2d1987d675
Step 33/38 : RUN git clone git://github.com/chase/vim-ansible-yaml.git  /home/dev/.vim/bundle/vim-ansible-yaml
 ---> Using cache
 ---> 0d9641c4088b
Step 34/38 : RUN git clone https://github.com/lepture/vim-jinja.git /home/dev/.vim/bundle/vim-jinja
 ---> Using cache
 ---> 36940194aa83
Step 35/38 : RUN git clone git://github.com/ansible/ansible.git --recursive /home/dev/ansible
 ---> Using cache
 ---> fffa0dc891c2
Step 36/38 : RUN chown -R dev: /home/dev
 ---> Using cache
 ---> 8d0e89c1848e
Step 37/38 : USER dev
 ---> Using cache
 ---> c5f4b462152f
Step 38/38 : CMD /home/dev/setup.sh
 ---> Using cache
 ---> 8c179cd27888
Successfully built 8c179cd27888