Thursday, August 23, 2012

Nicira Network Virtualization Platform (NVP)

Yesterday, I had a short technical discussion with a Singapore-based Nicira staff member. They have a niche solution that does exactly what the Cisco Nexus and L2 MPLS (i.e. Ethernet over MPLS or Any Transport over MPLS) are trying to achieve: multi-tenant Data Center Interconnect (DCI) by creating multiple layer 2 virtual networks (or pseudo-wires) across an IP network. Layer 2 networks are essential for many Data Center applications, especially Cloud Virtualization. Imagine performing server clustering, VMware vMotion or Hyper-V Live Migration across multiple physical sites, which would otherwise be broken up by IP subnet boundaries. It should also be seriously considered as part of IT Disaster Recovery plans. As for multi-tenancy, you may have multiple network customers or tenants sharing the same underlying physical infrastructure. Each tenant should only see its respective overlay network, with no visibility into other virtual networks - a similar concept to host virtualization and cloud computing.

How does Nicira NVP work? From what I understand at a high level, a data-path STT (Stateless Transport Tunneling) tunnel is established between 2 or more Open vSwitches (OVS) across an IP network. This MAC-over-IP tunnel encapsulates all MAC layer traffic and transports it over the IP network, effectively connecting 2 servers (whether virtual or physical) on different sites as if they were on the same subnet or VLAN.

As of now, the OVS can be integrated into the ESX, KVM and XenServer hypervisors. There is also a near-term plan for Hyper-V support (not sure if that plan will be cancelled, as Nicira has now been acquired by its arch-rival VMware). Alternatively, an ISO-based image can be deployed as a virtual or physical server acting as an OVS gateway, connecting legacy systems to the Nicira network virtualization platform (NVP).

As for OVS management and control, the NVP Controller Cluster (deployed as a server cluster) centrally manages and controls all OVS instances over the control paths. I was told that even if the control paths were disconnected, the OVS would continue to forward traffic (though its configuration could not be modified during that time).


From what I observed, the new Cisco Overlay Transport Virtualization (OTV) is probably Nicira's closest rival at this point. True enough, traditional MPLS and L2 pseudo-wires can be employed to perform the same tricks, but they are often limited by performance, a lack of MPLS-aware devices, or simply staff knowledge of IP/MPLS networking. Both Cisco OTV and Nicira NVP, on the other hand, can be easily established across any traditional IP-based network.

And price also matters. I was told that one would need at least USD 150K for a small POC setup just to "try out" Nicira NVP. For the same price tag, I could have purchased at least half a dozen Cisco ASR 1000 routers (OTV is now supported on the ASR platform from IOS XE 3.5S onward). A princely sum like that for a software purchase would be pretty hard to justify to management, I suppose.


Friday, August 10, 2012

Database Availability Group on Exchange 2010

In Exchange 2010, high availability for mailbox servers no longer requires you to set up Windows clustering yourself. It has been replaced with a new concept called the "Database Availability Group (DAG)". Think of it as a new clustering method for Exchange 2010 Mailbox servers. Up to 16 nodes can be grouped under a DAG. Even though it is no longer traditional Windows clustering, it still retains many similar concepts. Creating a DAG is pretty straightforward with the New-DatabaseAvailabilityGroup cmdlet or in the graphical EMC.
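
Below is a minimal Exchange Management Shell sketch of creating a DAG and adding member servers. The names (DAG01, the witness server FS01, MBX01/MBX02) and the IP address are hypothetical placeholders, not values from my actual setup.

  # Create the DAG, specifying a witness server and the group IP address.
  # All names and the IP below are hypothetical placeholders.
  New-DatabaseAvailabilityGroup -Name "DAG01" -WitnessServer "FS01" `
    -WitnessDirectory "C:\DAG01Witness" -DatabaseAvailabilityGroupIpAddresses 192.168.1.50

  # Add mailbox servers (up to 16) as DAG members.
  Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "MBX01"
  Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "MBX02"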

The network connection concepts largely remain the same. You have to define the Group IP address (similar to a Cluster IP address) on the public network. Replication is not recommended over this network. You should also leave the iSCSI networks alone. DAG replication is recommended over private networks. The quorum model remains largely similar, although it has become more transparent by employing a Witness Server to determine which subset of a split cluster should remain functioning.
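
As a rough sketch, the DAG networks can be tuned from the EMS so that replication stays off the public network and the iSCSI network is ignored. The network names below (DAGNetwork01/02/03) are the auto-generated defaults and are assumptions on my part.

  # Keep the public (MAPI) network for client traffic only.
  Set-DatabaseAvailabilityGroupNetwork -Identity "DAG01\DAGNetwork01" -ReplicationEnabled:$false
  # Dedicate the private network to replication.
  Set-DatabaseAvailabilityGroupNetwork -Identity "DAG01\DAGNetwork02" -ReplicationEnabled:$true
  # Exclude the iSCSI network from DAG use altogether.
  Set-DatabaseAvailabilityGroupNetwork -Identity "DAG01\DAGNetwork03" -IgnoreNetwork:$true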

One major difference is the mailbox database. Unlike Windows clustering, shared storage on a SAN or file share is no longer required. Instead, each node hosts its own individual database copy, and the same database is replicated across the nodes in a DAG - only one copy is active at a time while the rest remain passive. This can be easily achieved by using the Add-MailboxDatabaseCopy cmdlet. Do take note that the replicated copies use exactly the same path as on the original hosting mailbox server. For example, if you host the original mailbox database on "D:\MailDB", the rest will follow. If you realise that your hosts have differing paths, you can still move the database path. However, the path can't be moved once the database has been replicated. Hence, you have to remove the database copy, move the path, and add the database copy again. For more info on moving the Mailbox Database Path for a Mailbox Database Copy, refer to this TechNet article.
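
The sketch below shows the add-copy step and the remove/move/re-add sequence described above. The database and server names (MailDB01, MBX02) and the paths are hypothetical placeholders.

  # Replicate the database to a second DAG member.
  Add-MailboxDatabaseCopy -Identity "MailDB01" -MailboxServer "MBX02" -ActivationPreference 2

  # To change the path after a copy exists: remove the copy, move the path, re-add the copy.
  Remove-MailboxDatabaseCopy -Identity "MailDB01\MBX02" -Confirm:$false
  Move-DatabasePath -Identity "MailDB01" -EdbFilePath "D:\MailDB\MailDB01.edb" -LogFolderPath "D:\MailDB"
  Add-MailboxDatabaseCopy -Identity "MailDB01" -MailboxServer "MBX02" -ActivationPreference 2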


Tuesday, August 7, 2012

Setting up Edge Transport Server in Exchange 2010

I was recently tasked to set up a new Exchange 2010 environment for my organization. There is a special Exchange role called the "Edge Transport Server (ETS)" that is meant for transporting messages to and from external networks, such as the Internet. According to Microsoft, the Edge Transport Server in Exchange 2010 is secured by default, hence there is no need for additional hardening, such as applying a Security Configuration Wizard (SCW) template.

As the ETS is typically placed on the network perimeter, it should not be joined to any Active Directory domain, so as to reduce the attack surface. Ironically, the role is not supported on Server Core. To link the ETS to the Exchange organization through Edge Synchronization, there is a process called "Edge Subscription".

The process can be summarized as follows:
  1. Install the Edge Transport server role.
  2. Verify that the Hub Transport servers and the Edge Transport server can locate one another by using Domain Name System (DNS) name resolution. 
  3. Configure the objects and settings to be replicated to the Edge Transport server.
  4. On the Edge Transport server, create and export an Edge Subscription file by using the "New-EdgeSubscription" cmdlet.
  5. Copy the Edge Subscription file to a Hub Transport server or a file share that's accessible from the Active Directory site that has your Hub Transport servers.
  6. Import the Edge Subscription file on a Hub Transport server in the Active Directory site to which you want to subscribe your Edge Transport server. Use "Get-Help New-EdgeSubscription -examples" for reference, or see the sketch after this list.
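
A rough EMS sketch of steps 4 and 6 is shown below. The file path and Active Directory site name are hypothetical placeholders.

  # Step 4 - on the Edge Transport server: export the subscription file.
  New-EdgeSubscription -FileName "C:\EdgeSubscription.xml"

  # Step 6 - on a Hub Transport server: import the file into the target AD site.
  New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\EdgeSubscription.xml" -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"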

Typically, the ETS should be dual-homed, with the internal network interface connecting to the Hub Transport Server and the external interface to the Internet. You may test the edge subscription by running "Test-EdgeSynchronization" in the Exchange Management Shell (EMS) on a Hub Transport Server. Once configured successfully, you can point the MX record of your domain name to the external network address of the ETS. For high availability, set up two or more ETS servers on your network perimeter.
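
If you prefer not to wait for the next scheduled synchronization, a quick check from the EMS on a Hub Transport server might look like this (a minimal sketch):

  # Trigger a synchronization to the subscribed Edge Transport server.
  Start-EdgeSynchronization
  # Verify the synchronization status.
  Test-EdgeSynchronization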

For further information, refer to this Technet article. For information on limiting message size and file attachment limits, refer to this link.