How to set up Juniper's OpenStack FWaaS Plugin

I have written a tech wiki article on how to install Juniper's OpenStack FWaaS plugin at http://forums.juniper.net/t5/Data-Center/How-to...

Friday, February 27, 2015

Customising the Link attribute in Horizon dashboard's DataTable

I recently took a quick stab at the Django and OpenStack Horizon frameworks in order to build a dashboard for one of our OpenStack projects.

Our requirement was to show a table in the UI along with the corresponding CRUD functions. I used a DataTable for this purpose, wherein the table structure is defined in tables.py and the view rendering in views.py.

If a column is defined to be a link, DataTable by default uses the object id of each row to generate the corresponding link. In my scenario, I needed the link to point to a different object. OpenStack's documentation was not very helpful, but a quick grep through the OpenStack code gave me the idea.

Inside a DataTable, a custom link can be generated as follows:

from django.core.urlresolvers import reverse
from django.utils.translation import ugettext_lazy as _
from horizon import tables


def get_custom_link(datum):
    # datum is the data object backing the row
    return reverse('horizon:myproject:mydashboard:detail',
                   kwargs={'key': datum.value})

The value of the key will be used to generate the URL. 

class MyDataTable(tables.DataTable):
    myvar = tables.Column('myvar_name',
                          verbose_name=_("My Variable"),
                          link=get_custom_link)

Tuesday, February 10, 2015

QuickBite: Tap Vs Veth

Linux supports virtual networking via various artifacts such as:
  • Soft Switches (Linux bridge, Open vSwitch)
  • Virtual Network Adapters (tun, tap, veth and a few more)
In this blog post, we will look at the virtual network adapters tap and veth. From a practical viewpoint, both seem to provide the same functionality, and it is a bit confusing as to where to use which.

A quick definition of tap/veth is as follows:

TAP

A TAP is a simulated interface which exists only in the kernel and has no physical component associated with it. It can be viewed as a simple point-to-point or Ethernet device which, instead of receiving packets from a physical medium, receives them from a user-space program, and instead of sending packets out over a physical medium, writes them to the user-space program.

When a user-space program (in our case QEMU/KVM, on behalf of the VM) attaches to the tap interface, it gets hold of a file descriptor; reading from it returns the data being sent on the tap interface, and writing to it sends the data out on the tap interface.
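
For example, a tap interface can be created directly from user space with the ip tool (a quick sketch; tap0 is just an illustrative name):

# create a persistent tap interface owned by the current user and bring it up
ip tuntap add dev tap0 mode tap user $(whoami)
ip link set tap0 up
# delete it when done
ip tuntap del dev tap0 mode tap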

Veth (Virtual Ethernet)

Veth interfaces always come in pairs and are like two Ethernet adapters connected together by an RJ45 cable. Data sent on one interface exits the other and vice versa.
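
For example, a pair can be created like this (a quick sketch; veth0/veth1 are illustrative names):

# create a veth pair and bring both ends up; frames entering veth0 exit veth1
ip link add veth0 type veth peer name veth1
ip link set veth0 up
ip link set veth1 up
# one end can then be attached to a bridge, e.g. brctl addif br0 veth0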

OpenStack uses these artifacts heavily, and for someone newly introduced to these concepts, things can become quite fuzzy. One area that can cause confusion is understanding how a tap interface differs from a veth pair, as both of them seem to do the same thing, i.e. carry Ethernet frames.

To illustrate, when a VM (vm01) is spawned in Openstack, the artifacts used in the background are shown below:

(Figure: compute-node network layout, pic obtained from the OpenStack Admin Guide)


The network connectivity of vm01 is as follows:

vm01:eth0 <=== connected to ===> vnet0 (tap interface) on qbrXXX (Linux bridge)

qbrXXX (Linux bridge) <=== connected to ===> br-int (OVS bridge) via the veth pair qvbXXX---qvoXXX.
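
These artifacts can be inspected on the compute node along these lines (a rough sketch; the qbr/qvb/qvo names carry a per-port suffix, shown here as XXX):

brctl show          # lists qbrXXX with its tap (vnet0) and qvbXXX members
ovs-vsctl show      # shows br-int with the qvoXXX end of the veth pair
ethtool -S qvbXXX   # "peer_ifindex" identifies the other end of the veth pair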

As can be seen, the tap connects vm01 to the first bridge, and the veth pair connects the first bridge to the next. So both of them look like an RJ45 cable connecting devices. So what's the big deal? Why can't we use only one type? Why do we need this mix?

The answer is: because of the legacy technologies in play. When a VM is spawned on KVM, KVM expects a tap interface to be connected to the VM's Ethernet port (eth0). Through this tap, KVM gets a file descriptor on which it can read/write Ethernet frames.

Veth, on the other hand, is a newer construct and is supported by newer artifacts such as Linux bridges, network namespaces and Open vSwitch.

In summary, both tap and veth do the same job but interface with technologies from different eras.

Note: I have derived this conclusion based on my current understanding of these artifacts; I could not find a cross-reference for it. Until I come across further insights or someone points out additional use cases, I believe this conclusion holds true.

    Thursday, February 5, 2015

    OpenStack: Unable to connect to instance console at port 6080

    I have a VM on VirtualBox which acts as an all-in-one OpenStack setup. When I spawn an instance on it, the instance boots up fine, but I am not able to access its console from the browser.

    There are various ways to solve this issue:

    1. In the latest version of devstack (as of February 2015), n-novnc is no longer a default service and needs to be enabled in local.conf:
    enable_service n-novnc  (see https://ask.openstack.org/en/question/57993/dashboard-vnc-console-doesnt-work-on-devstack/)

    2. See if this helps you tweak things manually: http://docs.openstack.org/admin-guide-cloud/content/nova-vncproxy-replaced-with-nova-novncproxy.html

    3. Another geeky solution is that you can grep the KVM process to figure out the port on which VNC is being served and access the console directly.

    openstack@Openstack-Server:~/devstack$ ps aux|grep qemu|grep vnc
    libvirt+  9167  0.6  2.4 1504044 97488 ?       Sl   14:25   0:49 /usr/bin/qemu-system-x86_64 -name instance-00000001 -S -machine pc-i440fx-trusty,accel=tcg,usb=off -m 64 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid c030bf15-b374-46a8-ad8b-2518679a750a -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=2015.1,serial=7275c80a-e2db-4386-99c8-982121bbaeec,uuid=c030bf15-b374-46a8-ad8b-2518679a750a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000001.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -kernel /opt/stack/data/nova/instances/c030bf15-b374-46a8-ad8b-2518679a750a/kernel -initrd /opt/stack/data/nova/instances/c030bf15-b374-46a8-ad8b-2518679a750a/ramdisk -append root=/dev/vda console=tty0 console=ttyS0 no_timer_check -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/c030bf15-b374-46a8-ad8b-2518679a750a/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/opt/stack/data/nova/instances/c030bf15-b374-46a8-ad8b-2518679a750a/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:02:bf:2c,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/c030bf15-b374-46a8-ad8b-2518679a750a/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

    Now use a VNC client and connect to 127.0.0.1:0 (display :0, i.e. TCP port 5900) with the credentials cirros/cubswin:) and you should be able to see the console.
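
    If you prefer to script this, something along these lines pulls the VNC display out of the qemu command line (a rough sketch; vncviewer is whatever VNC client you have installed):

    ps aux | grep [q]emu | grep -o -e '-vnc [^ ]*'   # e.g. -vnc 127.0.0.1:0
    vncviewer 127.0.0.1:0                            # display :0 corresponds to TCP port 5900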



    Extra Info:
    If you want to connect to the console from your host rather than from the guest, you can use the port forwarding feature of VirtualBox. This YouTube video https://www.youtube.com/watch?v=1GgODv34E08 shows a solution at around the 12-minute mark.

    Wednesday, February 4, 2015

    QuickBite: Avoid Fedora qcow download during Devstack installation

    While installing the latest devstack, I found that it downloads a Fedora qcow image as part of the Heat installation: https://download.fedoraproject.org/pub/alt/openstack/20/x86_64/Fedora-x86_64-20-20140618-sda.qcow2. It's a ~200 MB file and is definitely not needed for Neutron-related development. I see that there is an option in local.conf which will make devstack stick to the default CirrOS image. You can do it by setting the below config in local.conf:

    IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"

    This downloads the 0.3.3 series CirrOS UEC image. At a later point, you can visit http://download.cirros-cloud.net/ and pick up the latest version listed there.

    Tuesday, February 3, 2015

    How to run Juniper Firefly (vSRX) on KVM -- SRX in a box setup

    Juniper has released a virtual form factor of the SRX called Firefly Perimeter (vSRX). It provides the security and networking features of the SRX Series gateways in a virtual machine format. It can be spawned as a VM on a KVM+QEMU or VMware hypervisor running on an x86 server.


    This post will give details on how to set it up as a standalone SRX box which can be used in any of your network deployments just like a normal SRX.

    Pre-requisites

    1. Have an x86 server with at least 4 GB RAM, 4 GB of hard disk space and two Ethernet ports.
    2. Install Ubuntu 14.04 on it (CentOS should also work, provided the KVM-related changes are taken care of).
    3. Assumption: You have logged into the system as the root user.

    Get the Software

    Firefly Perimeter can be downloaded as part of Juniper's software evaluation program and can be tried out for 60 days. You will need a Juniper account to download it here. For the purpose of this post I will be using the appliance listed as "Firefly KVM Appliance - FOR EVALUATION".

    Configure the Server

    Firefly needs the following software to be installed in order to work properly:
    • qemu-kvm
    • Libvirt
    • OpenvSwitch
    • Virtual Machine Manager
    • Bridge utils
    You can install all of the above by running the command:
     
    apt-get install qemu-kvm libvirt-bin bridge-utils \
                    virt-manager openvswitch-switch
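
    To sanity-check that everything is in place, a quick look like the following should do:

    lsmod | grep kvm      # kvm and kvm_intel/kvm_amd modules should be loaded
    virsh version         # libvirt should respond
    ovs-vsctl show        # the Open vSwitch database should be reachable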
    

    Firefly Perimeter requires a storage pool configured on the KVM host and virtual networks defined before it can be spawned.

    Creating a Storage Pool on KVM

    I am using a directory-based storage pool for my example. If you want to try out other options, you can check them out here.


    mkdir /guest_images
    chown root:root /guest_images
    chmod 700 /guest_images
    virsh pool-define-as guest_images dir - - - - "/guest_images"
    virsh pool-build guest_images
    virsh pool-autostart guest_images
    virsh pool-start guest_images
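
    A quick check that the pool came up fine:

    virsh pool-list --all          # guest_images should be active and autostarted
    virsh pool-info guest_images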
    

    Creating the virtual Networks

    As shown in the figure above, I will be creating two virtual networks for this deployment and assigning them to Firefly. For this purpose, we will create two XML files with the corresponding network descriptions and then execute virsh commands to create these networks.


    dut.xml
    <network>
      <name>data</name>
      <bridge name="br_data" />
      <forward mode="bridge" />
      <virtualport type='openvswitch'/>
    </network>
    

    mgmt.xml
    <network>
      <name>mgmt</name>
      <bridge name="br_mgmt" />
      <forward mode="bridge" />
    </network>
    

    After creating the XML files, execute the following commands:
    bash# virsh
    virsh# net-define mgmt.xml
    virsh# net-autostart mgmt
    virsh# net-start mgmt
    
    virsh# net-define dut.xml
    virsh# net-autostart data
    virsh# net-start data
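
    A quick check that both networks are up:

    virsh net-list --all           # mgmt and data should be active and autostarted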
    

    Create the bridges

    We need to create two bridges, br_mgmt and br_data, and add eth0 and eth1 to them respectively, as shown in the figure above.

    br_mgmt (Linux bridge)
    brctl addbr br_mgmt
    brctl addif br_mgmt eth0

    br_data (OVS bridge)
    ovs-vsctl add-br br_data
    ovs-vsctl add-port br_data eth1

    Now we need to move the server's host IP from eth0 to br_mgmt:

    vi /etc/network/interfaces
    auto eth0
    iface eth0 inet manual
    
    auto eth1
    iface eth1 inet manual
    
    auto br_mgmt
    iface br_mgmt inet static
    address xx.xx.xx.xx
    netmask 255.255.xxx.0
    gateway xx.xx.xx.xx
    dns-nameservers xx.xx.xx.xx
    #pre-up ip link set eth0 down
    pre-up brctl addbr br_mgmt
    pre-up brctl addif br_mgmt eth0
    post-down ip link set eth0 down
    post-down brctl delif br_mgmt eth0
    post-down brctl delbr br_mgmt
    
    Restart the networking service by calling /etc/init.d/networking restart
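
    To verify that the IP has moved over, something like:

    ip addr show br_mgmt      # the host IP should now sit on br_mgmt, not on eth0
    brctl show br_mgmt        # eth0 should be listed as a member port
    ping -c 3 xx.xx.xx.xx     # the gateway configured above, to confirm connectivity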

    Spawn the VM

    Once the storage pool and necessary virtual networks are ready, we can spawn the Firefly VM on the hypervisor using the command:

    bash -x junos-vsrx-12.1X47-D10.4-domestic.jva MySRX -i 2::mgmt,data -s guest_images
    virsh# start MySRX
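
    A quick check that the VM is up and wired to both networks:

    virsh list --all                                    # MySRX should be listed as running
    virsh dumpxml MySRX | grep -A2 "interface type"     # should show the mgmt and data networks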
    

    You can also use Virtual Machine Manager to start the VM, like so:


    In the next post, I will continue from here and give details on the initial SRX setup and on testing it out.

    Monday, February 2, 2015

    QuickBite: Verifying VLAN Tags using the Wireshark CLI (tshark)

    If you want to verify the flow of packets from a VM (which is connected to an OVS bridge) to a switch and ensure that they are getting tagged properly, you can follow the process mentioned below:




    Step 1: Are packets getting sent over eth1?
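
    For example, something along these lines on the host (eth1 being the uplink carrying the VM's traffic):

    ip -s link show eth1      # the TX packet counter should keep increasing
    tshark -i eth1 -c 10      # or capture a few frames directly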


    Looks good. We can see that packets are getting sent over eth1.

    Step 2: Check if packets are being received on ge-0/0/10
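
    For example, on the Junos switch (operational mode) the interface counters can be watched with something like:

    monitor interface ge-0/0/10
    show interfaces ge-0/0/10 statistics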


    Looks good. If you see that the packet count is not increasing on the interface, it may be because the corresponding VLAN is not associated with that interface (or the packets are being sent without the VLAN tag).

    Check ge-0/0/10 configuration:
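
    For example, something like the following shows the port mode and VLAN membership of ge-0/0/10, and which interfaces belong to the VLAN:

    show configuration interfaces ge-0/0/10
    show vlans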



    Looks good.

    Step 3: Let's check if packets are getting tagged when sent over eth1

    [I have Wireshark installed on CentOS. I am using the Wireshark CLI as my server does not have a GUI installed on it.]
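
    For example ("vlan" at the end is a capture filter, so only 802.1Q-tagged frames are captured):

    tshark -i eth1 -c 10 vlan
    tshark -i eth1 -c 10 -T fields -e vlan.id -e eth.src -e eth.dst vlan   # print just the VLAN id and MACs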



    Done. You are now ready to troubleshoot basic packet flow.

    A few other commands that come in handy on a VM are:
    • To check the routes known to the system: route -n
    • To check the ARP table: cat /proc/net/arp
    • To ping a system over a specific interface: ping -I eth1 ip-address