Stretching your cloud with Citrix NetScaler 10.5 and VXLANs

The latest Citrix NetScaler 10.5 firmware release came with a diverse range of new features, and we’ve spent some time at cloudDNA digging around the enhancements behind the headlines to look at the potential benefits for some of the organisations we support. Our Lead Architect, Andy Gravett, who’s been heavily involved with NetScaler since before the Citrix acquisition in 2005, has some great insight which I thought was worth sharing…

One feature that caught his eye was the support for VXLANs, firstly because it is a cloud- and virtualisation-related network overlay framework (you wouldn’t expect anything less from a cloud and Citrix networking consultancy blog, right? ☺) and secondly, from the technical angle, because of how it is designed to solve the current limitations of traditional IP/VLAN (with Spanning Tree Protocol) implementations in a multi-tenant cloud environment.

Back in August 2011 at VMworld, VMware and Cisco proposed a new standard for the migration of virtual networks from one environment to another, coining the term “Virtual eXtensible Local Area Networks”, or VXLAN for short. Collaboration with various other leaders in networking and hardware (including Citrix) led to the formulation of the following IETF framework:

The VXLAN Framework is a method for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks.

VXLANs work across any normal Layer 2 or Layer 3 physical or virtual network in a similar fashion to their local area little brother, the VLAN. A VXLAN Tunnel End Point (VTEP) is effectively a gateway entity which originates and/or terminates VXLAN tunnels; it encapsulates Layer 2 Ethernet frames (including their MAC address information) inside UDP/IP headers so they can be tunnelled over Layer 3 and read by other VXLAN-enabled routers and VXLAN gateways across the network. This ultimately allows VXLAN virtual machines to communicate in isolation as if they were on the same Layer 2 VLAN.*
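To make the encapsulation a little more concrete, here is a minimal Python sketch of the 8-byte VXLAN header that a VTEP prepends to the inner Ethernet frame, as defined in the IETF documents mentioned below (this is purely an illustration, not NetScaler code; the function names are our own):

```python
import struct

VXLAN_FLAGS_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame.

    Layout: 1 flags byte, 3 reserved bytes, then a 24-bit VXLAN Network
    Identifier (VNI) followed by 1 reserved byte.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Pack as two big-endian 32-bit words: flags in the top byte of the
    # first word, VNI in the top three bytes of the second word.
    return struct.pack("!II", VXLAN_FLAGS_VNI_VALID << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _flags_word, vni_word = struct.unpack("!II", header[:8])
    return vni_word >> 8

hdr = build_vxlan_header(5000)
print(len(hdr), parse_vni(hdr))  # 8 5000
```

The full VTEP job is then to wrap this header (plus outer UDP, IP and Ethernet headers) around the original frame, which is where the 50-byte overhead discussed further down comes from.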

So where does the Citrix NetScaler ADC 10.5 fit into the VXLAN mix? Information in the 10.5 release notes is limited (there’s an awful lot of other stuff to squeeze in there!) but upon further investigation in the Citrix eDocs and testing with the new firmware, we can see that NetScaler 10.5 introduces a VXLAN Gateway function allowing the NetScaler ADC to act as an entity which forwards traffic between VXLAN and non-VXLAN environments. A seriously useful tool for those who are considering scaling, securing and/or stretching virtual enterprise clouds, or for those migrating to a Hybrid Cloud model. Early adopters of the ever-present SDN, take note.

The diagram from Citrix eDocs depicts how the NetScaler ADC may be deployed in a real world example as a VXLAN gateway.

[Diagram from Citrix eDocs: NetScaler 10.5 as a VXLAN gateway]


While it’s not for everyone, integrating NetScaler’s advanced feature set with the efficient, dynamic, on-demand network provisioning capabilities offered by VXLANs will help organisations mobilise workloads and simplify orchestration mechanisms.** We think this is a great new feature and we would be interested to hear any feedback or comments from early 10.5 adopters who are using this functionality to extend their current cloud environments further…

Thanks again to Andy.

© Al Taylor

24th July 2014

*This is just an overview of the VXLAN Framework and its usage. For a more detailed description I would thoroughly recommend the IETF Internet Draft memo dated April 10th 2014, authored by:

M. Mahalingam (Storvisor) and D. Dutt (Cumulus Networks), with support from K. Duda (Arista), P. Agarwal (Broadcom), L. Kreeger (Cisco), T. Sridhar (VMware), M. Bursell (Citrix) and C. Wright (Red Hat).

**Citrix provides some points for consideration prior to implementing VXLANs on your NetScaler ADC 10.5 systems including:

• A maximum of 2048 VXLANs can be configured on a NetScaler ADC.
• VXLANs are not supported in a NetScaler cluster.
• Link-local IPv6 addresses cannot be configured for each VXLAN.
• NetScaler ADCs do not support the Internet Group Management Protocol (IGMP) for forming a multicast group. Instead, a NetScaler ADC relies on the IGMP support of its upstream router to join a multicast group, whose members share a common multicast group IP address. You can specify a multicast group IP address while creating a VXLAN tunnel, but the multicast group itself must be configured on the upstream router. The NetScaler ADC sends broadcast, multicast, and unknown unicast frames over Layer 3 to the multicast group IP address of the VXLAN, and the upstream router then forwards the packet to all the VTEPs that are part of the multicast group.
• VXLAN encapsulation adds an overhead of 50 bytes to each packet:
Outer Ethernet Header (14) + UDP header (8) + IP header (20) + VXLAN header (8) = 50 bytes
To avoid fragmentation and performance degradation, you must adjust the MTU settings of all network devices in a VXLAN pathway, including the VXLAN VTEP devices, to handle the 50 bytes of overhead in the VXLAN packets.
Important: Jumbo frames are not supported on the NetScaler VPX virtual appliances, NetScaler SDX appliances, and NetScaler MPX 15000/17000 appliances. These appliances support an MTU size of only 1500 bytes, which cannot be adjusted to accommodate the 50 bytes of VXLAN overhead. VXLAN traffic might be fragmented or suffer performance degradation if one of these appliances is in the VXLAN pathway or acts as a VXLAN VTEP device.
• On NetScaler SDX appliances, VLAN filtering does not work for VXLAN packets.
• IPv6 Dynamic Routing is not supported on VXLAN.
• You cannot set an MTU value on a VXLAN.
• You cannot bind interfaces to a VXLAN.
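The 50-byte overhead point above is the one that catches people out in practice, so here is a small Python sketch (an illustration only, not NetScaler tooling) that applies the arithmetic from the Citrix figures to work out MTU requirements either way round:

```python
# Overhead added by VXLAN encapsulation, per the Citrix figures above.
OUTER_ETHERNET = 14
OUTER_IP = 20
OUTER_UDP = 8
VXLAN_HEADER = 8
VXLAN_OVERHEAD = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER  # 50

def required_underlay_mtu(inner_mtu: int = 1500) -> int:
    """MTU the transit/VTEP devices must carry so that an inner frame of
    inner_mtu bytes is never fragmented after VXLAN encapsulation."""
    return inner_mtu + VXLAN_OVERHEAD

def max_inner_mtu(underlay_mtu: int = 1500) -> int:
    """Largest inner-frame MTU that fits through an underlay of the given
    MTU without fragmentation (e.g. on appliances fixed at 1500 bytes)."""
    return underlay_mtu - VXLAN_OVERHEAD

print(required_underlay_mtu())  # 1550
print(max_inner_mtu())          # 1450
```

In other words: either raise the underlay to 1550 bytes everywhere in the VXLAN pathway, or (on the appliances above that are fixed at 1500) lower the guest MTU to 1450.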



About netscalertaylor

Co-founder at cloudDNA - a team of like-minded Citrix NetScaler specialists
