
Solaris Project Crossbow offers virtualized network management

With Project Crossbow, Solaris has introduced a way to fully virtualize data center networking and has simplified network management in the process.

Virtualization is a continuing theme in the development of Solaris. Its already rich set of CPU and memory controls has been refined in Solaris Containers (or Zones) to fully virtualize workloads, ZFS introduced the concept of pooled storage to virtualize data management, and now Project Crossbow has introduced a new way to virtualize networking.

Project Crossbow fully virtualizes the Solaris network stack. Rather than a one-size-fits-all approach, network interface cards (NICs) can now be virtualized into one or more virtual NICs (VNICs). These VNICs can then be individually configured and tuned to take advantage of the physical NIC's in-hardware capabilities and workload needs.

The stack is virtualized even further by Etherstubs, in-software switches to which VNICs can be assigned. Branded "vWire," or "network in a box," this capability lets you create networks that act like real physical networks but exist entirely in software. You could, for instance, create 100 Solaris Containers on a system, each with a VNIC connected to an Etherstub, to form a complete, functioning network of virtual servers that looks and feels like a real network but lives entirely in software on one box.

Beyond just virtualizing network components, Crossbow has re-envisioned IP Quality of Service (IPQoS). For any interface, you can define "flows," which describe some type of traffic. A flow might cover an entire interface or perhaps only HTTP and HTTPS traffic, for example. Resource controls can be applied to these flows, such as traffic priority (low, medium, high), CPU binding and, most importantly, bandwidth limits. With Crossbow you can limit an interface to only, say, 10 Mbps, or limit only SMTP traffic to 40 Mbps so it doesn't overwhelm a gigabit link. Moreover, these flows can be audited (logged) for monitoring and reporting purposes.
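As a quick sketch of what those two styles of limit look like (the interface name e1000g0 and the flow name "smtpflow" are our own illustrative choices, not from any particular system):

```shell
# Cap an entire link at 10 Mbps via a link property:
dladm set-linkprop -p maxbw=10 e1000g0

# Or cap only SMTP traffic at 40 Mbps with a flow
# ("smtpflow" is an arbitrary name we chose for the flow):
flowadm add-flow -l e1000g0 -a transport=tcp,local_port=25 -p maxbw=40 smtpflow
```

Both limits take effect immediately and can be removed or adjusted at any time.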

Crossbow introduces an important new distinction between interfaces and data links. Traditionally, ifconfig has been the end-all, be-all of networking commands. But the command has been so completely over-burdened with new functionality that Sun's development team decided to introduce a new command that would handle data link administration, appropriately named "dladm." This new command is used for managing physical interfaces, creating VNICs and Etherstubs, creating and managing WiFi links or port aggregations ("trunking" or "teaming"), etc. The idea is that you create and manage data links with dladm and then interact with them as usual via ifconfig. Therefore, to use a VNIC, you use dladm to create a new VNIC from a physical NIC, then use ifconfig to plumb and assign IP information to the VNIC, just as you would any traditional NIC.

Simple Crossbow use case
The most basic use of Crossbow's capabilities is to replace traditional virtual interfaces with VNICs. Most Unix operating systems let you associate multiple IP addresses with a single physical link (NIC) in the following way (we'll use example addresses from the 192.168.1.0/24 network throughout):

root@quadra ~$ ifconfig e1000g0 plumb 192.168.1.10 netmask 255.255.255.0 up
root@quadra ~$ ifconfig e1000g0:1 plumb 192.168.1.11 netmask 255.255.255.0 up
root@quadra ~$ ifconfig e1000g0:2 plumb 192.168.1.12 netmask 255.255.255.0 up
root@quadra ~$ ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
e1000g0: flags=1000803<UP,BROADCAST,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.1.10 netmask ffffff00 broadcast 192.168.1.255
        ether 0:1c:c0:8c:2e:bf 
e1000g0:1: flags=1000803<UP,BROADCAST,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.1.11 netmask ffffff00 broadcast 192.168.1.255
e1000g0:2: flags=1000803<UP,BROADCAST,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.1.12 netmask ffffff00 broadcast 192.168.1.255

This might be done in order to support different virtual servers. The problem is that these virtual interfaces (:1, :2) share the same attributes as the parent link. They will appear to come from the same MAC address, they use the same tunings, they are on the same VLAN, etc.

If we instead use VNICs, we could simplify the configuration and provide far more flexibility.

root@quadra ~$ ifconfig e1000g0 unplumb
root@quadra ~$ dladm show-phys    
LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
e1000g0      Ethernet             up         1000   full      e1000g0
root@quadra ~$ dladm create-vnic -l e1000g0 vnic0
root@quadra ~$ dladm create-vnic -l e1000g0 vnic1
root@quadra ~$ dladm create-vnic -l e1000g0 vnic2
root@quadra ~$ dladm show-vnic
LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
vnic0        e1000g0      1000   2:8:20:4d:c3:34      random              0
vnic1        e1000g0      1000   2:8:20:49:52:87      random              0
vnic2        e1000g0      1000   2:8:20:ef:5e:49      random              0
root@quadra ~$ ifconfig vnic0 plumb 192.168.1.10 netmask 255.255.255.0 up
root@quadra ~$ ifconfig vnic1 plumb 192.168.1.11 netmask 255.255.255.0 up
root@quadra ~$ ifconfig vnic2 plumb 192.168.1.12 netmask 255.255.255.0 up
root@quadra ~$ ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
vnic0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.1.10 netmask ffffff00 broadcast 192.168.1.255
        ether 2:8:20:4d:c3:34 
vnic1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
        inet 192.168.1.11 netmask ffffff00 broadcast 192.168.1.255
        ether 2:8:20:49:52:87 
vnic2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
        inet 192.168.1.12 netmask ffffff00 broadcast 192.168.1.255
        ether 2:8:20:ef:5e:49 

In this example, dladm show-phys lists our physical interfaces (NICs), we create VNICs over one of them, and we then configure those VNICs using ifconfig just as though they were traditional network interfaces. Even though more commands are involved, consider how much cleaner the end result is. Notice, too, that each VNIC has its own MAC address and can be assigned an individual VLAN ID ("VID" above).

New levels of control
Crossbow introduces the concept of link properties. Using properties, we can set a link's maximum bandwidth (in megabits per second), bind its processing to specific CPUs, change its processing priority or modify its VLAN tagging behavior.

root@quadra ~$ dladm set-linkprop -p priority=medium vnic0
root@quadra ~$ dladm set-linkprop -p cpus=0,1 vnic0
root@quadra ~$ dladm set-linkprop -p maxbw=10 vnic0
root@quadra ~$ dladm show-linkprop vnic0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vnic0        autopush        --   --             --             -- 
vnic0        zone            rw   --             --             -- 
vnic0        state           r-   up             up             up,down 
vnic0        mtu             r-   1500           1500           1500 
vnic0        maxbw           rw      10          --             -- 
vnic0        cpus            rw   0,1            --             -- 
vnic0        priority        rw   medium         high           low,medium,high 
vnic0        tagmode         rw   vlanonly       vlanonly       normal,vlanonly 

In this example, we've downgraded vnic0's priority to medium, limited its processing to CPUs 0 and 1, and capped its bandwidth at 10 Mbps. All of these parameters are dynamic and can be changed at any time.

While per-data-link control is great, Crossbow lets us get even more granular with the flowadm command. A flow is a particular classification of traffic, typically identified by IP address, port, transport protocol (TCP, UDP, etc.) or RFC 2474 DS field. We can attach priority and bandwidth properties to these flows, which gives us simple IPQoS-like functionality without the painful configuration.

root@quadra ~$ flowadm add-flow -l vnic0 -a transport=tcp,local_port=80 -p priority=medium,maxbw=200 httpflow
root@quadra ~$ flowadm add-flow -l vnic0 -a transport=tcp,local_port=3306 -p priority=high mysqlflow

root@quadra ~$ flowadm show-flow
FLOW        LINK        IPADDR                         PROTO  PORT    DSFLD
httpflow    vnic0       --                             tcp    80      --
mysqlflow   vnic0       --                             tcp    3306    --
root@quadra ~$ flowadm show-flowprop 
FLOW         PROPERTY        VALUE          DEFAULT        POSSIBLE
httpflow     maxbw             200          --             200 
httpflow     priority        medium         --             medium 
mysqlflow    maxbw           --             --             ?
mysqlflow    priority        high           --             high 

In this example, we've created two flows. The first is for the vnic0 link we created earlier, and it defines a flow for HTTP traffic, which has a medium priority and is limited to 200 Mbps. The second flow is for MySQL traffic, which isn't rate limited and is given high priority.

Together, these features give us multiple levels of control over how our data links are used and a fine-grained ability to partition network capacity.

Additionally, dladm and flowadm can work in harmony with the Solaris Extended Accounting facility to provide auditing data; however, that topic exceeds the scope of this article. See the command man pages for details.

Empowering virtualization
Crossbow's ability to virtualize network interfaces and control how those data links are used reaches its full potential when combined with virtualization technologies such as Solaris Containers, xVM (aka Xen) or VirtualBox. Unlike on other operating systems, VNICs provide a single, uniform way to manage network virtualization for all three technologies. Because VNICs act like real network interfaces, you can provide full network capabilities to each virtual environment yet retain full auditing and resource control capabilities independent of the individual implementation.
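With Solaris Containers, for example, a VNIC can be dedicated to a zone by giving the zone an exclusive IP stack. A minimal sketch, assuming a zone named "web1" already exists (the zone name is hypothetical; the zonecfg properties shown are the standard ones):

```shell
# Dedicate vnic0 to the hypothetical zone "web1" by giving the
# zone its own exclusive IP stack and attaching the VNIC to it:
zonecfg -z web1 "set ip-type=exclusive; add net; set physical=vnic0; end"
```

Once the zone boots, it plumbs and manages vnic0 itself, while the global zone retains control of the VNIC's link properties and flows.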

Another of Crossbow's features is Etherstubs, virtual switches that can be used to form internal private networks entirely in software. This is extremely powerful because we can simulate complete network topologies on a single box. To use them, we simply create a new Etherstub, then create VNICs that bind to the Etherstub instead of a physical interface.

root@quadra ~$ dladm create-etherstub vswitch1
root@quadra ~$ dladm show-etherstub
LINK
vswitch1
root@quadra ~$ dladm create-vnic -l vswitch1 vnic3
root@quadra ~$ dladm create-vnic -l vswitch1 vnic4
root@quadra ~$ dladm create-vnic -l vswitch1 vnic5
root@quadra ~$ dladm show-vnic
LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
vnic0        e1000g0      0      2:8:20:4d:c3:34      random              0
vnic1        e1000g0      0      2:8:20:49:52:87      random              0
vnic2        e1000g0      0      2:8:20:ef:5e:49      random              0
vnic3        vswitch1     0      2:8:20:1c:53:33      random              0
vnic4        vswitch1     0      2:8:20:8f:26:c1      random              0
vnic5        vswitch1     0      2:8:20:ff:c3:61      random              0

Here we've created an Etherstub named "vswitch1" and then created new VNICs attached to it. We could then hand these VNICs to our virtual environments as a private network.

Things can get really interesting when you create multiple Etherstubs and route between them. In the above example, for instance, we could assign both vnic2 and vnic3 to an xVM instance and then enable routing to allow a Solaris Zone on vnic5 access to the public network. The possibilities are endless. I've personally used this capability as a way to prototype new network topologies using a variety of experimental routing protocols.
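The routing step in that scenario is a single switch flip: enable IP forwarding in the instance that holds a VNIC on each network. A minimal sketch using the standard routeadm facility (the VNIC/zone layout is the one described above):

```shell
# Turn this instance into an IPv4 router so packets can pass between
# the physical network (vnic2) and the Etherstub network (vnic3):
routeadm -u -e ipv4-forwarding

# Display the current routing configuration to confirm the change:
routeadm
```

The -e flag enables the ipv4-forwarding option persistently, and -u applies it to the running system immediately.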

I hope this article has given you a glimpse into the amazing capabilities Crossbow has to offer. With just a few easy commands, you can create all the virtual network interfaces you could ever want, mimic complex network topologies, audit link activity, assert fine-grained bandwidth controls, and much more. No more messing around with TUN/TAP drivers, no more fussing with complex IPQoS configurations -- just a powerful and generic tool that allows you to rethink modern network design.

ABOUT THE AUTHOR: Ben Rockwood is the director of systems at cloud computing infrastructure company Joyent Inc. A Solaris expert and Sun evangelist, he lives just outside of Silicon Valley, Calif., with his smokin' hot wife Tamarah and their three children. Read his blog at

