CNF/VNF Under-the-hood: EVPN saves the day for Layer-2 services

We are moving out! An updated post can be found here:

https://www.cloud-native-everything.com/cnf-vnf-under-the-hood-evpn-saves-the-day-for-layer-2-services/

Let’s say you need a more scalable Layer-2 solution for your VMs and containers. You can build some Linux bridges, or maybe use OVS and try some VXLAN/GRE encapsulation between hypervisors. Containers are a different kind of animal though: there are way more endpoints in every server, and we spread them across the datacenter. It can turn into the worst nightmare when you need to troubleshoot.

DevOps and IT folks normally treat the network as a black box that connects endpoints, and still think that hardware appliances can be replaced by modules in the kernel (I used to be one of them). To be fair, that sort of thinking can fit Enterprise use cases, but a Telco cloud, with VNFs and CNFs, is in another league.

Calico is a great solution that works over a pure Layer-3 approach. Why? Well, they say that “Layer 2 networking is based on broadcast/flooding. The cost of broadcast scales exponentially with the number of hosts”. And yes, they are absolutely right… if you are not putting something like EVPN (RFC 7209) in place to help you build Layer-2 services, of course.

Calico Kubernetes Architecture

EVPN doesn’t rely on flooding and learning like other technologies; it uses a control plane to advertise MAC addresses across locations. They could say: “Well, you will be using encapsulation, which also has an important cost”… Well, yes and no. EVPN can actually use different data planes (i.e. MPLS). Nuage happens to use VXLAN, but nothing in the technology limits it to one sort of transport.
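To make that contrast concrete, here is a minimal sketch on a Linux hypervisor (interface name, VNI and addresses are made up for illustration): the VXLAN device is created with data-plane learning disabled, and remote MACs are pushed into the FDB the same way an EVPN control plane would after receiving a MAC/IP route.

# create a VXLAN device with flood-and-learn disabled; the FDB will only
# contain what the control plane installs
ip link add vxlan10 type vxlan id 10 dstport 4789 local 10.0.0.1 nolearning

# what an EVPN speaker effectively does with a received MAC/IP route:
# point the remote MAC at the remote VTEP
bridge fdb append 00:11:22:33:44:55 dev vxlan10 dst 10.0.0.2

# verify the control-plane-programmed entries
bridge fdb show dev vxlan10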

Also, scale can kill any good idea – like relying on Linux kernel modules and services to route all communications in the datacenter. Treat every server as a router? Create security policies for every endpoint in the DC? Uff! Calico, for example, took its idea from the biggest network known today: yes, the Internet. But that doesn’t mean this network is optimal. Many companies need to rely on MPLS or other sorts of private WAN services to meet higher security and performance demands.

Nuage and Kubernetes Architecture

Layer-2 services help simplify network design and security. Some Layer-2 domains don’t need to connect to any Layer-3 domain at all, for security reasons. Also, Layer-2 domains in the datacenter can be directly attached to a Layer-2 service in the WAN (i.e. VPLS, MPLS). We could add many more things to the list, like private LTE, network slicing…

EVPN benefits

EVPN came as an improved model built on what was learnt operating MPLS/VPLS. It is also joint work by several vendors, such as Nokia (formerly Alcatel-Lucent), Juniper and Cisco. MPLS/VPLS relies on flooding and learning to build the Layer-2 forwarding database (FDB). EVPN introduced a new model for Ethernet services: it uses Layer-3, through MP-BGP, as a new delivery model to distribute MAC and IP routing information instead of flooding and learning.

In summary, these are the key benefits we can export to CNF/VNF use cases:

  • Scalability: Suppresses unknown-unicast flooding, since all active MACs and IPs are advertised by the leaf or by the software router installed in the hypervisor.
  • Peace of mind: The network admin gets better control over how cloud instances scale, avoiding issues with flooding, loops, and MAC mobility/duplication. And the cloud admin can keep provisioning and moving instances around with minimal concern about the impact on the network, with less overhead setting up Layer-2 services.
  • Workload mobility: With local learning, software routers cannot detect that a MAC address has moved to another hypervisor or host. EVPN uses a MAC mobility sequence number to always select the highest value and rapidly advertise any change. Also, the software router local to the hypervisor always answers ARP requests for the default gateway, which avoids tromboning traffic across remote servers after a MAC moves.
  • IPv6-ready: EVPN can manage both IPv4 and IPv6 in the control and data plane.
  • Industry standard: Software routers can be integrated directly with Layer-2 services in the WAN through datacenter gateways, efficiently advertising MAC and IP routes from VMs and containers. Some VNFs/CNFs are very complex communication instances that need to work directly with Ethernet services, and it’s hard to find a more seamless and standard solution for that than EVPN. Some of you will tell me Segment Routing can be a better fit… Agreed. Great material for a later post though.
  • Resiliency: Multi-homing with all-active forwarding and load balancing between PEs. Don’t waste bandwidth on active/standby links. You can create a BGP multi-homed subnet to the datacenter gateway (DCGW) from any virtual workload.

 

Reference and Source

Info from Calico: https://www.projectcalico.org/why-calico/

Ethernet VPN (EVPN) – Overlay Networks for Ethernet Services

Enterprise cloud managed services: Daring moves to boost your offer

Here are some moves I think you should have in mind if you want a strong portfolio of Cloud Managed Services for the Enterprise segment: a market that is growing almost exponentially every year, so it’s worth the shot.

Here is a summary of the options you should consider to boost your portfolio – enough to decide if you want to stay on this reading:

  • Databases: One of my savviest friends told me: “data has gravity”. As soon as you move a database to the public cloud, many other apps that depend on it will follow. This is a trend you should embrace, and I’ll tell you how and why later.
  • VMware in AWS: A great move from AWS. VMware has been the preferred virtualization platform ever. However, many organizations want to stop dealing with hardware and its life cycle. A good opportunity to dust off your ESX certificates and start supporting that transition.
  • SAP HANA: SAP is probably the most used ERP worldwide. Now SAP customers must adopt HANA as the DB if they want to take full advantage of their data and real-time analytics. Also, it’s uncertain whether SAP will keep supporting other DB vendors after 2025. It’s the perfect time to evolve those BASIS services and innovate through the many options brought by the public cloud. More later.
  • CI/CD: PaaS has gotten an important boost thanks to containers and Kubernetes. Adopting DevOps practices is hard though. What are you waiting for to help your customers? It’s also a good opportunity to resell software tool subscriptions to support this code-to-production pipe.
Cloud Managed Services Market Size, Share & Trend Analysis Report By Service Type (Business, Network), By Deployment, By End-user, By Vertical, By Region, And Segment Forecasts, 2018–2025

Figure source: https://www.grandviewresearch.com/industry-analysis/cloud-managed-services-market

Next, I will bring more details on every bullet. Don’t tear yourself away just yet.

Data has gravity

Migrating databases to the cloud is a very sensitive topic for any company. It’s complex, risky and resource consuming.

That makes it a huge opportunity for any provider to sell a massive offer of high-rated managed services.

It’s a matter of time before any DB is moved to the public cloud anyway. It’s hard to hide from top management the huge cost benefits that change can bring – as if market analysts weren’t rubbing it in our faces all the time.

So now is the time to seize the opportunity and bring in those experts.

As soon as a database is properly assessed, discussions will move to identifying which applications will have to move with it. This is why we say: “data has gravity”.

DB-Engines Ranking – Trend Popularity – July 2019

Figure source: https://db-engines.com/en/ranking_trend

Databases like MySQL and PostgreSQL can be moved to almost any public cloud right away. They are most typically used in web apps, and you can find tons of tools and tutorials online.

Legacy DBs: The classic dilemma

More legacy stuff like MS SQL and Oracle is also available in as-a-service mode. However, pricing is more convenient with the original vendor. You can opt for important savings, or even no additional cost on cloud services, using your current enterprise agreements or licenses.

On the other hand, you have vendors like GCP with Cloud SQL, and AWS with Aurora, a very convenient relational database service. There are programs offering important savings in credits on your migration project, to encourage customers to migrate off Oracle/SQL. Of course, decisions like this depend on the complexity of your apps.

Analyze case by case.

NoSQL: New stuff on the air

New-generation databases, like NoSQL, can even be born for your organization directly in the public cloud.

Start asking about high volumes of unstructured data, or performance and scalability issues with relational databases.

Cloud providers use different NoSQL technologies, and it can be a little overwhelming to choose the right one. Sometimes it depends on the use case (sort of queries, costs, throughput, app code, scalability), or simply on where most of your applications reside. I’d say the most popular are MongoDB, AWS DynamoDB, and lately GCP Bigtable.

Finally, big data analytics for massive info can also be an interesting point to attack. GCP BigQuery has many public datasets ready to use for different needs like climate, sports, flights…
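For a quick taste, one of those public datasets is a single CLI command away once the Google Cloud SDK is installed (the dataset below is a well-known public one I’m using as an example, not one from the post):

bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name ORDER BY total DESC LIMIT 5'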

VMware: A great move from AWS

Six years ago, I thought OpenStack/KVM would rule and take a big part of the market from VMware. Well, it seems it was too much risk and complexity for the Enterprise market – you can’t blame it on the customers, they have a life! VMware is still the preferred solution for compute virtualization. OpenStack seems more appealing for Telcos and their VNFs.

It seems the worn-out motivation for compute virtualization – optimizing compute resources – is long gone. It’s not enough to beat public cloud benefits. Companies want more savings, like getting rid of hardware refresh cycles.

Enterprise adoption of public cloud 2018

Figure source: https://www.forbes.com/sites/louiscolumbus/2018/09/23/roundup-of-cloud-computing-forecasts-and-market-estimates-2018/#6ab322f5507b

It’s no surprise then that AWS is having a blast taking ESX instances to the public cloud lately. From the operations point of view, the transition is smooth: you can keep your vCenter managing instances almost the same way you used to on-premises.

So where is the opportunity for managed services? Well, change is hard for any organization. The migration of instances has some complexity, plus some additional challenges in operations. Also, customers will want to take advantage of mixing in other AWS services like load balancers and S3.

Providers can support customers in that transition: analyze the benefits (build a business case to justify the change based on mid-term savings), design the target setup, and even take over an important part of the operation.

SAP HANA

An important trend in big data analytics is SAP HANA. Plus, SAP is pushing all customers to move to HANA from traditional DB instances like Oracle or SQL.

SAP HANA brings valuable benefits for real-time analysis of massive amounts of information collected from many sources – something relational databases struggle with due to their nature and structure, and mostly because they still work on disk.

The opportunity

This is also an important opportunity to develop advanced SAP managed services: from advanced SAP BASIS to more specialized skills such as industry-specific data scientists.

As soon as SAP ECC/BW starts working on HANA, you can either deploy what-if scenarios directly on relational data with no concerns about database performance, or tackle more complex use cases, like getting suggested actions in real time to reduce churn after pulling data from many sources (IoT, call center logs…).

Building SAP on HANA on-premises is neither easy nor cheap. You must acquire expensive, specifically certified hardware, and troubleshooting can be hard if you don’t have the skills in house. Most of the cloud providers offer specific IaaS for HANA. You can check them over here: https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html.

Customers can apply BYOL and externalize SAP BASIS services to providers more experienced in dealing with cloud resources. Unless you expect those cloud vendors to add traditional tape-based backup services to their portfolio. I wouldn’t hold my breath.

CI/CD and the magic of containers

If you are familiar with CI/CD (Continuous Integration / Continuous Delivery), then you know it’s just a set of practices to go from code to production in the cheapest and fastest way. Containers are a key component of it.

There are so many tools on the market – and no doubt more coming – to create this appealing pipe from dev to production.

All public clouds have developed their own Kubernetes (K8s) service, which is the most used container orchestration software so far.

Dev teams are not just buying K8s, they are buying the agility to release apps faster – or fail faster. So you have to understand their process from code to production. That pipe depends on many things, like the language of preference, how microservices interact, how they work with databases, the CI/CD apps of choice (GitHub, GitLab, Jenkins)…

Also, monitoring is an under-rated topic that should be covered – another opportunity to sell subscription services along with managed services.

Final thoughts

I appreciate you making it to this point – even if you skipped some, or maybe all, of my notes.

The biggest challenge is to build that sort of cutting-edge talent in house – as if it weren’t hard enough to find and hire the right people with those highly demanded skills. And it’s even harder to retain them.

There’s no perfect way to build what’s required when you have to deal with talented people. You may start with certified contractors or partners.

Also, you need to start developing partnerships with vendors like Google, AWS, Microsoft, SAP, VMware… That will take time though, and some of them will ask for some level of commitment or investment up front.

Start simple, maybe picking CI/CD, and then add tools, and then some others. Look into your organization; I am sure you can find certified professionals in some of the techs I have mentioned.

And always remember that the hardest part is yet to come: working on the distribution and go-to-market strategy.

If you have doubts, ping me.

See ya, and happy selling!

eBay, a massive k8s and OpenStack ref case, releasing tessmaster to opensource #os17

In their words: “K8s is app-centric as OpenStack is infra-centric”. eBay faced many challenges to run K8s; it was hard to make the two work seamlessly. Challenges like multi-tenancy, app security and the container registry had to be addressed to make this work (check the slides for more of those).

TessMaster is their own deployment, running on K8s to manage K8s. They showed a live demo of how TessMaster, through OpenStack and K8s, could heal all the K8s clusters running company-wide.

They are releasing the code soon. TessMaster can run over any kind of cloud platform, such as OpenStack, Google or Azure.

Also, I am attaching some interesting figures of what they have today.

Kubernetes openstack summit boston ebay tessmaster

Second Generation Cloud #os17

At the OpenStack Summit Boston, and in Bryce’s words, we are facing the second-generation cloud: virtualizing everything, not just compute. There is also a new trend called “remotely managed private cloud”, where you own the infrastructure but rely on a remote managed service from OpenStack experts. Kubernetes (including OpenShift, of course) is by far the most used tool.

Public, hosted private and internal private clouds are almost even in usage. And you can find companies from small to huge, across different industries, adopting private OpenStack clouds.

The big adoption challenges are about culture and processes. That is why remote managed services are emerging so fast.

Second cloud generation features openstack summit Boston 2016 2017

Openstack survey 2016 2017 summit boston users

Openstack most used tools summit boston 2017 2016

Primary cloud platform survey forrester boston openstack 2017 2016

Second cloud generation challenges openstack summit 2016 2017 Boston

Docker with Nuage Integration

You saved me last night with this post. I couldn’t have made it work without it. Thanks buddy.

scotts tech blog

The Nuage Virtual Router and Switch (VRS) can be utilized on a number of different platforms including bare-metal servers, ESXi, KVM and Hyper-V. In this post we’re going to utilize the VRS as part of a KVM Docker server running on a CentOS 7.3 minimal server. At the time of this post the version of Nuage being utilized is 4.0R8, but the process should be similar on future releases.

Prerequisites:

  • VSD (standalone or cluster) and VSC (at least 1) must be deployed and operational
  • CentOS Server with at least 1 NIC that has reachability to both the VSD and VSC/s
  • CentOS Server must have internet access for updates
  • Access to the nuage-openvswitch and nuage-docker-monitor rpm files

Server preparation

1. Stop and disable NetworkManager, start network and disable firewalld
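(The commands for this step are not in the excerpt; on CentOS 7 they would look roughly like this.)

systemctl stop NetworkManager && systemctl disable NetworkManager
systemctl enable network && systemctl start network
systemctl stop firewalld && systemctl disable firewalld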

2. Disable SELINUX
vi /etc/selinux/config
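Set SELINUX=disabled in that file; a non-interactive equivalent (my addition, not from the original post) would be:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0   # also stop enforcing for the running session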


3. Clean the cache and update the system
rm -fr /var/cache/yum/*
yum clean all
yum -y update

4…

View original post 298 more words

WebVirtMGR

Great tool to manage your KVM instances. Thanks Scott!

scotts tech blog

Need a simple and easy to use Free KVM Manager? While there are many tools out there that offer similar functionality I found that with only a few hosts WebVirtMgr offers a simple and easy UI to create and manage Hypervisors and VM’s.

Outline of setting up WebVirtMgr

  • Prepare CentOS 7.3 Server
  • Prepare CentOS 7.3 Client
  • Install and configure WebVirtMgr on Server
  • Install and configure WebVirtMgr on each Client
  • Test Connectivity between Client and Server

Server Setup

Prepare the CentOS 7 server
https://github.com/retspen/webvirtmgr/wiki/Install-WebVirtMgr

Start from a new CentOS 7.3 server with security disabled.

You may need to update current epel-release
yum -y install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
yum -y install git python-pip libvirt-python libxml2-python python-websockify supervisor nginx
yum -y install gcc python-devel

After running the above commands, update pip and install numpy
pip install --upgrade pip
pip install numpy

Clone webvirtmgr from GitHub and install it
git clone git://github.com/retspen/webvirtmgr.git
cd webvirtmgr
pip install -r requirements.txt

View original post 265 more words

OpenStack NFS Storage

This is a nice start from Scott. I am sure he has tons of info to share. A very powerful and simple post on how to use a shared NFS disk pool among nova computes.

scotts tech blog

Having an OpenStack NFS storage will allow for the testing of VM migration between Nova Compute nodes. The following highlights a simple way of building an NFS storage for your OpenStack VMs.

Prerequisites

  • NFS Server with nova access
  • CentOS 7.3 as the NFS Client
  • OpenStack compute nodes deployed (tested on Mitaka and Newton)

Server Side

Create a directory on your NFS Server
mkdir -p /share/nfs/nova_instances

Change permissions to r/w
chmod -R 777 /share/nfs/nova_instances
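The export itself is not shown in the excerpt; on a typical Linux NFS server you would also add the directory to /etc/exports and reload (the client subnet below is hypothetical, adjust to your compute nodes):

echo '/share/nfs/nova_instances 10.10.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra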

Client Side

Before beginning make sure /var/lib/nova/instances is empty of any existing VMs
ls /var/lib/nova/instances/

Make the following changes to /etc/libvirt/qemu.conf within each compute/nova node
vi /etc/libvirt/qemu.conf
Uncomment and change dynamic_ownership to…

Uncomment the following

Restart libvirtd
systemctl restart libvirtd

Edit fstab to add an NFS mount point to the bottom of the file
vi /etc/fstab
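The entry itself is elided in the excerpt; a typical line would look like this (server IP and options are illustrative only):

# hypothetical example entry for /etc/fstab, assuming 10.10.10.2 is the NFS server
10.10.10.2:/share/nfs/nova_instances  /var/lib/nova/instances  nfs  defaults  0 0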

Reload the mount point
mount -a

Note1: Make sure file permissions on the NFS server allow the ‘nova’ user to write

View original post 35 more words

Highlight ansible YAML and Jinja syntax in “vim” Editor

We are moving out! An updated post can be found here:

https://www.cloud-native-everything.com/highlight-ansible-yaml-and-jinja-syntax-in-vim-editor/

Hi Guys,

Sometimes I get a bit dizzy with all those lines while playing with Ansible.
This will help you have more fun with your coding: activate syntax colors in your YAML and Jinja files.

yaml-syntax-ansible-highlight-vi-editor

Use this: https://github.com/tpope/vim-pathogen

mkdir -p ~/.vim/autoload ~/.vim/bundle && \
curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim

Now add the following lines to your ~/.vimrc to activate this and start autoloading bundles.

execute pathogen#infect()
syntax on
filetype plugin indent on

Now, download the bundles.

cd ~/.vim/bundle
git clone git://github.com/chase/vim-ansible-yaml.git
git clone https://github.com/lepture/vim-jinja.git

You’re done!

Now prepare to be amazed: “vim” any yml or j2 file (remember to use “vim” instead of “vi” – well, everything depends on your profile).

jinja-j2-ansible-file-syntax-on-vi

See ya!

Scale out #Kubernetes with #Nuage: Installation notes

Howdy,

Kubernetes is the dream orchestration tool for any developer. However, it can be a nightmare for the security folks (pulling online images makes them cringe). After all, K8s manages containers with apps and databases connected to an overlay network. Containers can hold sensitive information, and they can be exploited and used to attack internal/external assets.

Nuage helps to microsegment containers, and even to put security instances in the middle to analyze traffic. It gives you overall management of the network and its settings: forwarding and control rules, and the public/WAN segment used to publish your apps and control bandwidth utilization.

The best thing? All those settings can be managed through automated policies. You do not lose any agility.

Scalability is also a known issue: there are customers managing 10000s of instances using a single VSD instance as a global console. The usage of a lightweight protocol like OpenFlow is part of the evidence (most “SDN” solutions use XMPP to program vswitches or vrouters).

Containers scale in seconds. Do your security and network do so?

Next you will find a step-by-step guide to installing your Kubernetes cluster with Nuage.

Nuage Core solution

Nuage VSD and VSC have to be installed in advance. In my case, VSD is served at 10.10.10.5:8443 and VSC is installed at 10.10.10.6.

Also, I have a dns/ntp server for my lab; even the K8s nodes have their CNAMEs and reverses. I am using the dns server as a web server to publish the rpm files as well.

Prepare your installation

I’ve done a fresh install of my two servers (check my previous post).
Then we need to publish the Nuage rpm files on any http server, and prepare our ansible/git server (it will be the same K8s master in this case).

Don’t forget that both servers have to be accessible through ssh with no password at all (you will have to add the ansible node’s public key to the authorized_keys file on both servers).
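If you haven’t done that yet, something like this from the ansible node takes care of it (hostnames are the ones from my lab):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@k8scluster.nuage.lab
ssh-copy-id root@k8snode01.nuage.lab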

rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum -y update
yum -y install ansible
yum -y install git
yum -y install python-netaddr

Synchronize to your ntp server and change the local timezone to your current one.
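One way to do that on CentOS 7 (10.10.10.2 is my lab dns/ntp server; the timezone is just an example, pick your own):

ntpdate 10.10.10.2
timedatectl set-timezone America/New_York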
Normalize your hostname vars

[root@k8scluster ansible]# HOSTNAME=k8scluster.nuage.lab
[root@k8scluster ansible]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
10.10.10.17 k8scluster.nuage.lab k8scluster
[root@k8scluster ansible]# cat /etc/hostname
k8scluster.nuage.lab
[root@k8scluster ansible]# hostname k8scluster.nuage.lab

Clone Nuage files:

git clone https://github.com/vishpat/contrib
cd contrib
git checkout origin/nuage -b nuage

You will get something like this:

[root@k8snode01 ~]# git clone https://github.com/vishpat/contrib
Cloning into 'contrib'...
remote: Counting objects: 23650, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 23650 (delta 0), reused 0 (delta 0), pack-reused 23647
Receiving objects: 100% (23650/23650), 27.79 MiB | 12.87 MiB/s, done.
Resolving deltas: 100% (10400/10400), done.
Checking out files:  81% (10289/12702)   
Checking out files: 100% (12702/12702), done.
[root@k8snode01 ~]# cd contrib
[root@k8snode01 contrib]# git checkout origin/nuage -b nuage
Branch nuage set up to track remote branch nuage from origin.
Switched to a new branch 'nuage'

Installing K8s with Nuage in a few steps

Create ansible.cfg inside ~/contrib/ansible

# this ~/contrib/ansible/ansible.cfg file
[defaults]

# Set the log_path
log_path = /var/log/ansible.log

[ssh_connection]
pipelining = True

Create your inventory nodes file ~/contrib/ansible/nodes

# Inventory file ~/contrib/ansible/nodes
# Create an k8s group that contains the masters and nodes groups
[k8s:children]
masters
nodes

[k8s:vars]
ansible_ssh_user=root

vsd_api_url=https://10.10.10.5:8443
vsp_version=v4_0
enterprise=K8s_Lab
domain=Kubernetes03

vsc_active_ip=10.10.10.6
uplink_interface=eth0

nuage_monitor_rpm=http://10.10.10.2/Kubernetes/RPMS/x86_64/nuagekubemon-4.0-3.20.el7.centos.x86_64.rpm
vrs_rpm=http://10.10.10.2/Kubernetes/RPMS/x86_64/nuage-openvswitch-4.0.3-25.el7.x86_64.rpm
plugin_rpm=http://10.10.10.2/Kubernetes/RPMS/x86_64/nuage-k8s-plugin-4.0-3.20.el7.centos.x86_64.rpm

# host group for masters
[masters]
k8scluster.nuage.lab

[etcd]
k8scluster.nuage.lab

# host group for nodes, includes region info
[nodes]
k8scluster.nuage.lab
k8snode01.nuage.lab

Execute playbook

cd contrib/ansible
ansible-playbook -vvvv -i nodes cluster.yml

This is the final result I’ve got:

PLAY RECAP *********************************************************************
k8scluster.nuage.lab       : ok=303  changed=51   unreachable=0    failed=0   
k8snode01.nuage.lab        : ok=102  changed=36   unreachable=0    failed=0 

Check K8s cluster settings:

[root@k8scluster ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

Start your Nuage Kubernetes Monitor service:

service nuagekubemon start


================== Begin update Sept 22, 2016 ================
If you want to use services through kube-proxy (like service type=LoadBalancer), you have to change the /etc/kubernetes/proxy settings on all nodes as follows. The reason? Nuage, instead of iptables, is managing the security settings now.

#/etc/kubernetes/proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/proxy.kubeconfig --proxy-mode=userspace"
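After changing it, restart the proxy on every node so the new mode takes effect (assuming the systemd unit this install creates):

systemctl restart kube-proxy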

================== End update Sept 22, 2016 ==================

Launching an App

Finally, let’s launch something:

[root@k8scluster ansible]# kubectl run my-nginx --image=nginx --replicas=2 --port=80
deployment "my-nginx" created
[root@k8scluster ansible]# kubectl get po
NAME                        READY     STATUS              RESTARTS   AGE
my-nginx-3800858182-r0om7   0/1       ContainerCreating   0          2m
my-nginx-3800858182-xtffj   0/1       ContainerCreating   0          2m
[root@k8scluster ansible]# kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
my-nginx-3800858182-r0om7   1/1       Running   0          6m
my-nginx-3800858182-xtffj   1/1       Running   0          6m

This is what we have in the VSD GUI: you will see the containers from the K8s cluster and nginx.

kubernetes nuage 01

Now you’ll see the security policies in a more human-friendly interface.

kubernetes nuage 02

See ya!

#Kubernetes in a few steps

Howdy,

There’s a lot of info online about how to set up your Kubernetes cluster. This is what I found to be the simplest way to install your own local one.

I am using my own lab server with CentOS 7 and qemu-kvm. I’ve created two servers called k8scluster and k8snode01 using CentOS 7 cloud images (check the script here).

./virt-create-centos7.sh k8scluster01 8192 4 20 10.10.10.17
./virt-create-centos7.sh k8snode01 8192 4 20 10.10.10.18

I have a local DNS server based on bind. I’ve created CNAME records for both servers, and their reverses, under the domain nuage.lab, and I’ve ensured that k8snode01 can be accessed from k8scluster through ssh with no password.
I will use k8scluster as my ansible node. Before going further, let’s update both servers and install git on k8scluster.

yum -y update
yum -y install git

Install Ansible

rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum -y update
yum -y install ansible

Install python-netaddr

yum install -y python-netaddr

Get the kubernetes scripts for installation

git clone https://github.com/kubernetes/contrib

You will see something like this

[root@k8scluster ~]# git clone https://github.com/kubernetes/contrib
Cloning into 'contrib'...
remote: Counting objects: 30900, done.
remote: Total 30900 (delta 0), reused 0 (delta 0), pack-reused 30900
Receiving objects: 100% (30900/30900), 32.66 MiB | 12.22 MiB/s, done.
Resolving deltas: 100% (14903/14903), done.
Checking out files: 100% (16112/16112), done.

Create the ~/contrib/ansible/inventory/inventory file with the following content:

[masters]
k8scluster.nuage.lab

[etcd:children]
masters

[nodes]
k8snode01.nuage.lab
k8scluster.nuage.lab

And make changes to ~/contrib/ansible/inventory/group_vars/all.yml depending on what you want. In my case I’ve kept everything as it is. However, you may want to change kube_service_addresses to something that can be routed; that way you will access your apps through this external network.

#this is all.yml file
source_type: packageManager 
cluster_name: cluster.local 
insecure_registrys:
kube_service_addresses: 10.254.0.0/16
networking: flannel
flannel_subnet: 172.16.0.0
flannel_prefix: 12
flannel_host_prefix: 24
cluster_logging: true
cluster_monitoring: true
kube-ui: false
kube-dash: false
dns_setup: true
dns_replicas: 1
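The deployment itself runs the same way as in my Nuage install notes above, with the playbook shipped in the repo (paths assume the layout cloned here):

cd ~/contrib/ansible
ansible-playbook -i inventory/inventory cluster.yml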

Check if you are done:

[root@k8scluster ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

Let’s launch our first app to see if everything is ok

[root@k8scluster ~]# kubectl run my-nginx --image=nginx --replicas=2 --port=80
deployment "my-nginx" created
[root@k8scluster ~]# kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
service "my-nginx" exposed
[root@k8scluster ~]# kubectl get po
NAME                        READY     STATUS              RESTARTS   AGE
my-nginx-3800858182-8yckb   0/1       ContainerCreating   0          23s
my-nginx-3800858182-u08tn   0/1       ContainerCreating   0          23s
[root@k8scluster ~]# kubectl get service my-nginx
NAME       CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
my-nginx   10.254.173.92                 80/TCP    16s
[root@k8scluster ~]# kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
my-nginx-3800858182-8yckb   1/1       Running   0          1m
my-nginx-3800858182-u08tn   1/1       Running   0          1m

Nice! Let’s check our workloads on both servers. This is what we have on k8scluster:

[root@k8scluster ~]# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
477f0b39ae0d        nginx                                                 "nginx -g 'daemon off"   2 minutes ago       Up 2 minutes                            k8s_my-nginx.94312989_my-nginx-3800858182-8yckb_default_653df6f4-6ec7-11e6-946c-5254005e6a02_d0f8e009
89fce9ac474d        gcr.io/google_containers/pause:2.0                    "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD.cf58006d_my-nginx-3800858182-8yckb_default_653df6f4-6ec7-11e6-946c-5254005e6a02_dafc30db
439976e37027        gcr.io/google_containers/fluentd-elasticsearch:1.19   "td-agent"               About an hour ago   Up About an hour                        k8s_fluentd-elasticsearch.51dac75b_fluentd-elasticsearch-k8scluster.nuage.lab_kube-system_376c1ce61440747f2227a035bf5bb17b_eb7745cc
98f48c58c474        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_fluentd-elasticsearch-k8scluster.nuage.lab_kube-system_376c1ce61440747f2227a035bf5bb17b_08185552

And this is what we have on k8snode01 now:

[root@k8snode01 ~]# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
bc608aa29e2e        nginx                                                 "nginx -g 'daemon off"   2 minutes ago       Up 2 minutes                            k8s_my-nginx.94312989_my-nginx-3800858182-u08tn_default_6542ee6a-6ec7-11e6-946c-5254005e6a02_31e8f6a5
c658b776e20f        gcr.io/google_containers/pause:2.0                    "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD.cf58006d_my-nginx-3800858182-u08tn_default_6542ee6a-6ec7-11e6-946c-5254005e6a02_ddeadd38
77e8b8fdcb41        gcr.io/google_containers/kibana:1.3                   "/run.sh"                57 minutes ago      Up 57 minutes                           k8s_kibana-logging.80b4be80_kibana-logging-v1-tr7gz_kube-system_cf01c929-6ebd-11e6-946c-5254005e6a02_5d93fb99
de16f0337cc7        gcr.io/google_containers/exechealthz:1.0              "/exechealthz '-cmd=n"   59 minutes ago      Up 59 minutes                           k8s_healthz.ea531445_kube-dns-v11-rlt2r_kube-system_cf1fec49-6ebd-11e6-946c-5254005e6a02_9628ffa7
4f444be782a1        gcr.io/google_containers/skydns:2015-10-13-8c72f8c    "/skydns -machines=ht"   About an hour ago   Up About an hour                        k8s_skydns.24fb3a98_kube-dns-v11-rlt2r_kube-system_cf1fec49-6ebd-11e6-946c-5254005e6a02_4bd8d957
3a09f30dd267        gcr.io/google_containers/kube2sky:1.14                "/kube2sky --domain=c"   About an hour ago   Up About an hour                        k8s_kube2sky.7d240133_kube-dns-v11-rlt2r_kube-system_cf1fec49-6ebd-11e6-946c-5254005e6a02_7c38394d
96807c8d4455        gcr.io/google_containers/heapster_grafana:v2.6.0-2    "/bin/sh -c /run.sh"     About an hour ago   Up About an hour                        k8s_grafana.1848ba5c_monitoring-influxdb-grafana-v3-7en7m_kube-system_ceec7cba-6ebd-11e6-946c-5254005e6a02_0a28dbca
00d14bd0e4c6        gcr.io/google_containers/heapster:v1.0.2              "/heapster --source=k"   About an hour ago   Up About an hour                        k8s_heapster.a26e01fd_heapster-v1.0.2-1708347466-lf034_kube-system_cd39989b-6ebd-11e6-946c-5254005e6a02_a5351837
f6216bf7a16f        gcr.io/google_containers/addon-resizer:1.0            "/pod_nanny --cpu=100"   About an hour ago   Up About an hour                        k8s_eventer-nanny.8f2cc828_heapster-v1.0.2-1708347466-lf034_kube-system_cd39989b-6ebd-11e6-946c-5254005e6a02_3a71361e
8fb90fba2478        gcr.io/google_containers/addon-resizer:1.0            "/pod_nanny --cpu=100"   About an hour ago   Up About an hour                        k8s_heapster-nanny.9bd7c896_heapster-v1.0.2-1708347466-lf034_kube-system_cd39989b-6ebd-11e6-946c-5254005e6a02_d31a75d1
652a39c4b6c5        gcr.io/google_containers/elasticsearch:1.8            "/run.sh"                About an hour ago   Up About an hour                        k8s_elasticsearch-logging.99ece1ea_elasticsearch-logging-v1-ze2qu_kube-system_cf06f4d7-6ebd-11e6-946c-5254005e6a02_d38a078b
aa244cf22bed        gcr.io/google_containers/elasticsearch:1.8            "/run.sh"                About an hour ago   Up About an hour                        k8s_elasticsearch-logging.99ece1ea_elasticsearch-logging-v1-dvlpp_kube-system_cf09c6e7-6ebd-11e6-946c-5254005e6a02_a3a35e15
3416beba5e4f        gcr.io/google_containers/etcd-amd64:2.2.1             "/usr/local/bin/etcd "   About an hour ago   Up About an hour                        k8s_etcd.2bcc34f7_kube-dns-v11-rlt2r_kube-system_cf1fec49-6ebd-11e6-946c-5254005e6a02_9a144f9c
931a1e1536f1        gcr.io/google_containers/heapster_influxdb:v0.5       "influxd --config /et"   About an hour ago   Up About an hour                        k8s_influxdb.a62d4047_monitoring-influxdb-grafana-v3-7en7m_kube-system_ceec7cba-6ebd-11e6-946c-5254005e6a02_d82b2ded
a1877c5ffd04        gcr.io/google_containers/fluentd-elasticsearch:1.19   "td-agent"               About an hour ago   Up About an hour                        k8s_fluentd-elasticsearch.51dac75b_fluentd-elasticsearch-k8snode01.nuage.lab_kube-system_376c1ce61440747f2227a035bf5bb17b_a691a8f2
a50553d32497        gcr.io/google_containers/heapster:v1.0.2              "/eventer --source=ku"   About an hour ago   Up About an hour                        k8s_eventer.90caf32a_heapster-v1.0.2-1708347466-lf034_kube-system_cd39989b-6ebd-11e6-946c-5254005e6a02_f7742136
efc7d81a02bd        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.558027c8_elasticsearch-logging-v1-ze2qu_kube-system_cf06f4d7-6ebd-11e6-946c-5254005e6a02_16e51798
941a3f1de554        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.558027c8_elasticsearch-logging-v1-dvlpp_kube-system_cf09c6e7-6ebd-11e6-946c-5254005e6a02_b1d84cd6
108a66b1700f        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.7e0801af_kibana-logging-v1-tr7gz_kube-system_cf01c929-6ebd-11e6-946c-5254005e6a02_31b6ed7c
5cfa71d299a0        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.55c42327_monitoring-influxdb-grafana-v3-7en7m_kube-system_ceec7cba-6ebd-11e6-946c-5254005e6a02_7da9e87e
4001bac015e8        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.e2764897_kube-dns-v11-rlt2r_kube-system_cf1fec49-6ebd-11e6-946c-5254005e6a02_0470156f
5173e4b465b5        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_heapster-v1.0.2-1708347466-lf034_kube-system_cd39989b-6ebd-11e6-946c-5254005e6a02_4cfc7a00
23b2330f25ad        gcr.io/google_containers/pause:2.0                    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_fluentd-elasticsearch-k8snode01.nuage.lab_kube-system_376c1ce61440747f2227a035bf5bb17b_b5e949e6

Let’s clean this app up.

[root@k8scluster ~]# kubectl delete deployment,service my-nginx
deployment "my-nginx" deleted
service "my-nginx" deleted
[root@k8scluster ~]# kubectl get po
[root@k8scluster ~]# 

See ya!