Wednesday, December 26, 2012

Hyper-V Network Virtualization Test

To extend my earlier basic testing of Hyper-V network virtualization, I have added a cross-site connection to a physical network that is not on Hyper-V. This is my test setup:

Both the HQ-HyperV-N1 and HQ-HyperV-N2 Hyper-V hosts sit on the HQ site, and DR-HyperV-N3 is on the DR site. The necessary static routes (e.g. route add) should be configured on each host to reach the different IP subnets. All the VMs are prefixed with "Test*". The TestVM01-3 VMs are hosted across IP subnets, but they can reach each other on the same L2 virtual subnet of 10.0.0.0/24 (denoted by blue VSID: 5001). Each VM is assigned a pair of virtual IP/MAC addresses that is mapped to a physical host address. Test-GW is the virtual gateway (red VSID: 5002) hosted on HQ-HyperV-N1 that routes traffic between the virtual network and the physical network (and vice versa). You may download my test PowerShell script and refer to the network diagram above.

Besides static routes, do take note that forwarding must be enabled on the network adapters of the gateway (e.g. Test-GW) for routing to work. Forwarding can be enabled using the cmdlet "Set-NetIPInterface -InterfaceIndex [int-index] -AddressFamily IPv4 -Forwarding Enabled" for each adapter. However, the forwarding setting reverts to disabled upon host reboot. To make things easier, I added the "Remote Access" feature and enabled routing, so that the network adapters always stay in forwarding mode.
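If you want to flip all adapters at once, here is a minimal PowerShell sketch (assuming it is run on the gateway, e.g. Test-GW; remember the setting does not persist across reboots on its own):

# Enable IPv4 forwarding on every network adapter of the gateway
Get-NetIPInterface -AddressFamily IPv4 |
    Set-NetIPInterface -Forwarding Enabled

# Verify the forwarding state
Get-NetIPInterface -AddressFamily IPv4 | Select-Object InterfaceAlias, Forwarding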

Wednesday, December 19, 2012

Mount Mailbox Database in Exchange 2010

My test mailbox servers abruptly lost all iSCSI connections. After the connections were restored, the mailbox database (in a DAG) remained dismounted.

According to this TechNet article for Exchange 2010, I should execute this cmdlet to get MailDB01 mounted:
Mount-Database -Identity Mailbox01\MailDB01 

But it didn't work. Instead, I had this error:
Cannot process argument transformation on parameter 'Identity'. Cannot convert value "mailbox01\maildb01" to type "Microsoft.Exchange.Configuration.Tasks.DatabaseIdParameter". Error: "'mailbox01\maildb01' is not a valid value for the identity.

Parameter name: Identity"

+ CategoryInfo : InvalidData: (:) [Mount-Database], ParameterBindin...mationException

+ FullyQualifiedErrorId : ParameterArgumentTransformationError,Mount-Database  

It must be another straight 'copy-and-paste' from the Exchange 2007 documentation. The correct cmdlet for Exchange 2010 should instead be:
Get-MailboxDatabase -Identity MailDB01 | Mount-Database -force 

The "-force" option may be needed to ignore other previous errors. Once the database is mounted, I have to reseed the second failed copy. For further information, see "How to Reseed a Failed Mailbox Database Copy in Exchange Server 2010".

Friday, December 14, 2012

Enabling Hyper-V Network Virtualization to route across IP subnets in Provider Address Space

I've just tested out the new Microsoft SDN (a.k.a. network virtualization) in Hyper-V 3.0. In summary, Hyper-V network virtualization enables different VMs to be connected on the same L2 virtual subnet, even though the underlying Hyper-V hosts are running across various IP topologies. If you're looking for a GUI to enable this new feature, you'll be in for disappointment, as SCVMM 2012 has yet to incorporate it at this point (except SCVMM SP1 Beta). Only PowerShell cmdlets can be used at the moment. To make things easier, there is a demo script available for you to download and modify. The script can be used without modification provided you've set up the Hyper-V infrastructure according to the given instructions. The demo setup should look like this:
According to the above diagram, the blue VMs belong to the blue tenant and the red VMs belong to the red tenant. Each VM can only connect to a same-colour VM residing on another host on the same virtual subnet of "10.0.0.0/24". Despite the overlapping IP address range, both coloured networks are virtually segregated by different VSIDs. Each pair of virtual IP and MAC addresses has to be "registered" against the underlying host IP address (i.e. the Provider Address) using the New-NetVirtualizationLookupRecord cmdlet.

Notably, both Hyper-V hosts are connected on the same L2 subnet, i.e. 192.168.4.0/24 (known as the "Provider Address" in the demo), which is rather unrealistic. In the real world, it's far more likely for provider hosts to be connected across different IP topologies and subnets. Let's assume Host 2 is placed on "192.168.6.0/24". Using the same script won't suffice; you'll have to use the "New-NetVirtualizationProviderRoute" cmdlet to inform the underlying hosts how to reach the other subnet. After changing Host 2's provider address and adding a gateway router in between both hosts, the new network setup should look like this:
Let's modify the original script (replace with actual Host and Gateway addresses accordingly):

1) On line 287, change New-NetVirtualizationProviderAddress -ProviderAddress "192.168.4.22" to New-NetVirtualizationProviderAddress -ProviderAddress "192.168.6.22" to reflect the change in Host 2's physical IP address.

2) After line 274, add this for Host 1:
New-NetVirtualizationProviderRoute -InterfaceIndex $iface.InterfaceIndex -DestinationPrefix "192.168.6.0/24" -NextHop "192.168.4.1" -CimSession $Hosts[0]

3) After line 287, add this for Host 2:
New-NetVirtualizationProviderRoute -InterfaceIndex $iface.InterfaceIndex -DestinationPrefix "192.168.4.0/24" -NextHop "192.168.6.1" -CimSession $Hosts[1]

4) Clear the existing records by running "Remove-NetVirtualizationLookupRecord" on both hosts (see the sketch below), re-run the script, and both VMs should be able to ping each other again.
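For step 4, a minimal sketch assuming $Hosts is the same array of host names used in the demo script (calling the cmdlet without filter parameters clears all lookup records on that host):

# Clear all existing NVGRE lookup records on both hosts before re-running the script
foreach ($h in $Hosts) {
    Remove-NetVirtualizationLookupRecord -CimSession $h
}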

Wednesday, December 12, 2012

Calculation of Hyper-V Resource Utilization

Someone asked me about Hyper-V resource utilization. One can assign up to 4 virtual processors (VPs), or vCPUs, per VM in Hyper-V 2008. His VM suffered from unacceptably slow performance due to high CPU utilization, and there were only a couple of VMs running per host. If he were to add more CPU cores to the parent host, would performance improve?

The answer is most probably not, unless you've already over-packed the host with many VMs. Even if the parent host has many more CPU cores, it can't assign more than one physical CPU core to one VP. Although Microsoft uses the term "Logical Processor" (LP) instead of CPU core, it's an open secret that one LP represents one CPU core.

Let's take an example. We have a Hyper-V cluster consisting of 3 Hyper-V host nodes. This setup mainly runs RemoteApp (terminal server) applications. Each node runs Windows Server 2008 R2, and the hardware profile of each node consists of 4 x 6 CPU cores.

Each node runs 3 VMs (terminal servers), or 9 VMs in total for the entire Hyper-V cluster. The CPU utilization of each VM averages 95-100% and many users have complained of slow performance. However, the CPU utilization of each parent host is only 50%. In other words, the VMs are only utilizing half of the 4 x 6 CPU cores per host, even though each VM is nearly saturated.

This is the “math”:

1)      Total no. of nodes in cluster = 3

2)      No. of logical processors (LP) per node = 4 x 6 CPU cores = 24

3)      Ratio of virtual CPU (vCPU) to LP (for best performance) = 1:1

4)      Max no. of vCPU per VM = 4

5)      No. of terminal server VMs per node = 3

6)      Current CPU resource utilized by all VMs per node = 4 x 3 = 12 LPs

Based on the above, the CPU capacity utilized by Hyper-V is just 50% per node.

Unused capacity = max no. of LPs per node – utilized LPs per node = 24 - 12 = 12 (or 50%). When we checked the resource monitor on each parent host, it was indeed that number. A further check with Microsoft support confirmed this understanding.
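The same arithmetic as a quick PowerShell sketch (the figures are simply the hypothetical numbers from the example above):

# Hypothetical figures from the example
$lpPerNode = 4 * 6                      # 24 logical processors per node
$usedLPs   = 4 * 3                      # 4 vCPUs per VM x 3 VMs per node = 12
$percent   = $usedLPs / $lpPerNode * 100
"{0} of {1} LPs in use ({2}%)" -f $usedLPs, $lpPerNode, $percent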

Hence, my recommendation to that someone is to
1) assign more VMs per host, so that more CPU resource can be utilized; or
2) upgrade the Hyper-V hosts to Windows Server 2012, where up to 64 vCPUs can be assigned to each VM

Tuesday, December 11, 2012

Basic Networking of SCVMM 2012 SP1 Beta

Windows Server 2012 supports NIC teaming natively. After configuring Hyper-V NIC teaming on two hosts, I clustered them up and everything worked fine. SCVMM 2012 SP1 Beta is required to manage the newly minted Hyper-V cluster.

After setting up the new SCVMM server successfully, adding an existing cluster is easy. I also noticed that more networking profiles have to be created before I can roll out a new VM.
See the differences? It does look confusing. Fortunately, the System Center team has updated this blog to explain the relationship between various physical and virtual components: Networking in VMM 2012 SP1 – Logical Networks.

Overall SCVMM 2012 SP1 Network Architecture

From Hyper-V Host Perspective
In essence, the steps to create networking in SCVMM 2012 SP1:
  1. Start from creating new Logical Networks. Each Logical Network may contain multiple Network Sites. Each site may contain multiple VLANs and IP Subnets.
  2. On each Hyper-V host, associate the Virtual Switches (each virtual switch represents a physical NIC on the host) to the respective Logical Networks. 
  3. Create VM Network and associate it to a Logical Network.
  4. Under the hardware profile of a new VM, connect the new virtual NIC to a VM network. Select the appropriate Virtual Switch available to the parent host.

Example 1: The datacenter network contains VLAN 10 and VLAN 20 on Site A and VLAN 30 and VLAN 40 on Site B, using VLAN trunking on the underlying physical switches. Create a new logical network named "Data Center Network". In it, add network site A, then add both VLAN 10 and VLAN 20 and assign an IP subnet to each VLAN. Add another network site B and add both VLAN 30 and VLAN 40 to it.

Example 2: Storage network contains 2 physical storage switches without VLAN trunking in both sites. Create a new logical network named "iSCSI Storage Network". In it, add both network site A and network site B. Assign VLAN tag 0 to each network site and assign the relevant IP subnet.
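If you prefer to script these steps, here is a minimal sketch of Example 1 using the VMM PowerShell module (the subnet values and host group name are illustrative assumptions, and cmdlet details may differ slightly in SP1):

# Logical network with one site containing two tagged VLANs
$ln = New-SCLogicalNetwork -Name "Data Center Network"
$siteAVlans = @(
    New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 10
    New-SCSubnetVLan -Subnet "192.168.20.0/24" -VLanID 20
)
New-SCLogicalNetworkDefinition -Name "Site A" -LogicalNetwork $ln -SubnetVLan $siteAVlans -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")

# VM network bound to the logical network, to be referenced in VM hardware profiles
New-SCVMNetwork -Name "Data Center VM Network" -LogicalNetwork $ln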

Logical switches and port profiles, on the other hand, act as containers to consistently configure identical capabilities (e.g. SR-IOV, VMQ, DHCP guard, etc.) for network adapters across multiple hosts. Instead of configuring individual ports on each host, you can specify the capabilities in port profiles and logical switches and apply them to the appropriate adapters.

Wednesday, December 5, 2012

Concept of Cisco Bridge Domain Interfaces (BDI)

Today, I came across a strange configuration on a Cisco ASR router called "Bridge Domain Interfaces (BDI)". I did a search on the Cisco website and the configuration looked simple, but it was short on conceptual explanation, which simply mentioned:

"Bridge domain interface is a logical interface that allows bidirectional flow of traffic between a Layer 2 bridged network and a Layer 3 routed network traffic. Bridge domain interfaces are identified by the same index as the bridge domain. Each bridge domain represents a Layer 2 broadcast domain."

What is it used for? Why do we need it? After some thought and experimentation, it seems to me that BDI is used to "bundle" one or more physical L2 interfaces and link them to an L3 logical interface for routing; this L3 logical interface is the BDI. As Cisco routers won't allow you to configure IP addresses belonging to the same L2 subnet/domain on more than one routed interface, BDI is probably a workaround to overcome that limitation. It also reminds me of a routed port-channel.

Consider the following diagram:

Both physical ports (Gi0/0/0 and Gi0/0/1) are linked to the same L2 domain (e.g. VLAN 100). 

According to Cisco, 
"An Ethernet Virtual Circuit (EVC) is an end-to-end representation of a single instance of a Layer 2 service being offered by a provider to a customer. It embodies the different parameters on which the service is being offered. In the Cisco EVC Framework, the bridge domains are made up of one or more Layer 2 interfaces known as service instances. A service instance is the instantiation of an EVC on a given port on a given router. Service instance is associated with a bridge domain based on the configuration."

I would interpret this as: a service instance is used to represent one L2 domain, and more than one port can belong to the same service instance.

Config mode:
interface range Gi0/0/0-1
  service instance 100 ethernet
    encapsulation dot1q 100 # get VLAN 100 tagged traffic
    rewrite ingress tag pop 1 symmetric #pop out all ingress VLAN 100 tags from switch
    bridge-domain 100 # identified as interface BDI 100 in below example config

The above config creates a service instance 100 that is linked to the VLAN 100 L2 domain. Standard L3 config can then be performed on interface BDI 100 for routing.

interface BDI100
  vrf forwarding VPNA
  ip address 1.1.1.1 255.255.255.0
  ip ospf 1 area 0

A physical interface can even join more than one bridge domain (up to 4096 per router). For example, connecting to VLAN 200 (also bridge domain 200) as well:

interface range Gi0/0/0-1
  service instance 100 ethernet
    encapsulation dot1q 100
    rewrite ingress tag pop 1 symmetric #pop out all ingress VLAN 100 tags from switch
    bridge-domain 100 # identify as BDI 100 in below example config
!
  service instance 200 ethernet
    encapsulation dot1q 200
    bridge-domain 200 # identified as BDI 200 

Monday, December 3, 2012

Host Cluster Over-committed with spare memory?

I've encountered an issue on SCVMM 2012. When I attempted to place a new VM on a Hyper-V cluster, there was an error: "This configuration causes the host cluster to become overcommitted". I checked all the node properties and realized that there was still more than enough available RAM on each node. So why the overcommitment problem?

I came across a TechNet forum page where Mike Briggs explained SCVMM's memory calculation. First, SCVMM sums up a new total memory requirement by adding the memory used by all existing VMs to the requirements of the new VM to be deployed. It then calculates whether the host cluster is able to sustain this new requirement if the specified number of nodes were to fail. The number of node failures to tolerate is configured in the cluster reserve.

If you are confident that the cluster is not really overcommitted, simply adjust the cluster reserve number downward and the VM placement will proceed successfully. The reserve number can be found on the General tab of the host cluster properties in the SCVMM console.
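For reference, a hedged sketch of checking and adjusting the reserve from the VMM command shell (the cluster name is a placeholder):

# Inspect the current cluster reserve (number of node failures the cluster should tolerate)
$cluster = Get-SCVMHostCluster -Name "HV-Cluster01"
$cluster.ClusterReserve

# Lower the reserve if you are confident the remaining capacity is sufficient
Set-SCVMHostCluster -VMHostCluster $cluster -ClusterReserve 1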

Friday, November 30, 2012

Hyper-V 3.0 with SOFS

I've tested Hyper-V clusters on WS2012 using Scale-Out File Server (SOFS) as a SAN alternative for application server clusters like Hyper-V. My setup is as follows:
New VMs are created using Failover Cluster manager and attached to the SMB share on the SOFS cluster. I've also tested Quick and Live Migration over SMB3.0.
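For reference, a hedged sketch of creating such a VM from PowerShell instead of Failover Cluster Manager (the SMB share path is a placeholder for my SOFS share):

# Create a VM whose configuration files and VHDX both live on the SOFS SMB share
New-VM -Name "TestVM01" -MemoryStartupBytes 2GB -Path "\\SOFS\VMs" `
       -NewVHDPath "\\SOFS\VMs\TestVM01\TestVM01.vhdx" -NewVHDSizeBytes 60GB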

Here is the link with all the necessary step-by-step instructions. Please take note that SOFS is not suitable for all situations, especially workloads with frequent small metadata changes to files, e.g. end-user file sharing (see "When to use Scale-Out File Server").

You might ask why I don't attach the iSCSI LUN directly to the Hyper-V cluster, i.e. 2 nodes instead of 4. Yes, I could have, but I am trying to learn more about using SOFS. In future, I could buy cheaper non-RAID SAS disk arrays (e.g. Dell MD12xx) and attach them directly to the SOFS nodes using simple PCIe non-RAID SAS HBAs to replace SAN storage for virtualization. See the TechNet Dell-Windows Server 2012 slide below:



Thursday, November 29, 2012

EFS Recovery

There are 2 types of recovery for the Encrypting File System (EFS): Key Recovery and Data Recovery. When there is a designated Key Recovery Agent (KRA) on a CA server, the KRA is authorized to retrieve the user's certificate and private key from the CA database. The user can then use the recovered key to decrypt EFS files. "Archive subject's encryption private key" in the template's "Request Handling" tab should be enabled for archival. In addition, the CA server must be prepared for key archival before any rollout, as archived keys are encrypted with the KRA key. As the KRA can retrieve any archived key, the CA administrator and KRA roles should be held by at least two different persons. See "Understanding User Key Recovery".

Extract:
The recovery of a private key is a manual process that requires the user(s) to contact an administrative authority to perform the necessary processes. It should be a best practice of any organization to separate the roles of CA Officer and KRA as a minimum of two physical persons.
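For reference, the retrieval itself is typically done with certutil on the CA; a minimal sketch (the serial number and file names are placeholders):

# Locate the archived key by the user's certificate serial number and export a recovery blob
certutil -getkey <CertSerialNumber> recovery.blob
# As the KRA, decrypt the blob into a password-protected PFX that is returned to the user
certutil -recoverkey recovery.blob recovered_key.pfx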

On the other hand, the Data Recovery Agent (DRA) is authorized to recover and decrypt all encrypted files. The DRA must be enrolled and added to the AD Group Policy to allow it to decrypt files. The DRA can also be updated subsequently using Group Policy if there are any changes.

For further comparison (pros and cons) and the details on both recovery methods, refer to "Key Recovery vs Data Recovery Differences".

Wednesday, November 14, 2012

WSUS Installation on Windows Server 2012 Failed

I was trying to install Windows Server Update Services (WSUS) on a fresh Windows Server 2012. I wasn't expecting any errors, as it was a fresh installation. To my surprise, the error "Fatal Error: Failed to start and configure the WSUS service" appeared just as the installation was about to complete. So far, deploying the new Windows Server 2012 hasn't been an entirely pleasant experience, as there are minor annoying bugs around. When a service isn't running properly, you'll probably do better to uninstall and reinstall it, especially after an in-place OS upgrade.

I tried the same trick again, but the problem persisted. When I opened the temp log file, I saw:

2012-11-14 11:25:12  StartServer encountered errors. Exception=The request failed with HTTP status 503: Service Unavailable.
2012-11-14 11:25:12  Microsoft.UpdateServices.Administration.CommandException: Failed to start and configure the WSUS service
   at Microsoft.UpdateServices.Administration.PostInstall.Run()
   at Microsoft.UpdateServices.Administration.PostInstall.Execute(String[] arguments)
Fatal Error: Failed to start and configure the WSUS service

It must have something to do with the IIS service. I checked the service and it was running fine; restarting IIS didn't help either. In the IIS Manager console, I stopped and deleted the "WSUS Administration" site, then re-ran the WSUS installation process. Finally, the installation completed!

Tip: In Windows Server 2008, TCP port 80 is used by default. In Windows Server 2012, TCP 8530 is used for HTTP and TCP 8531 for HTTPS. Be sure to open the necessary firewall ports and direct WSUS clients to the correct ports, e.g. http://wsus-server:8530 for HTTP updates.
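For example, a minimal sketch of opening the WSUS ports with the built-in firewall cmdlets on the server:

# Allow inbound HTTP/HTTPS to the default WSUS ports on Windows Server 2012
New-NetFirewallRule -DisplayName "WSUS (HTTP/HTTPS)" -Direction Inbound `
    -Protocol TCP -LocalPort 8530,8531 -Action Allow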

Tuesday, November 13, 2012

Activating Windows 8 and Windows Server 2012 on existing AD environment

As we put new Windows 8 and Windows Server 2012 machines into an existing AD environment, there are 2 things that need to be done. First, if you're still using Windows Server 2008 as the KMS host, download and install update 2757817. Otherwise, you'll see the following error when you activate KMS with the new key:

Error: 0xC004F050 The Software Licensing Service reported that the product key is invalid

Next, upgrade the existing key on the KMS host by running:
  1. "slmgr /upk" to uninstall existing key, 
  2. "slmgr /ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx" to install new KMS key and;
  3. "slmgr /ato" to activate the new key.
  4. "slmgr /dlv" to verify the key has been successfully upgraded to support the new Windows 8. You should see "VOLUME_KMS_2012_C_channel" on the description.
For detailed step-by-step, check out this blog post.

Thursday, November 8, 2012

Hyper-V Network Virtualization

In one of my earlier posts, I talked about a software-based network virtualization product called "Nicira NVP". The key feature is multi-tenancy Data Center Interconnect (DCI) by creating multiple layer 2 virtual networks (or pseudo-wires) across an IP network. Layer 2 networks are essential for many data center applications, especially for the "free" movement of Virtual Machines (VMs) across sites and IP topologies.

In the new Windows Server 2012, Hyper-V offers similar network virtualization capability using NVGRE, another standard L2-over-L3 tunnel. In short, Hyper-V in Windows Server 2012 includes a "Nicira-like" software component for network virtualization that allows the same virtual subnet addressing across sites and IP topologies. For the full story and presentations, please visit this TechEd 2012 site.

Here, I would just extract a single slide that tells all:

As for joining the network virtualized environment to the non-network virtualized environment, Hyper-V Network Virtualization gateways are required to bridge the two environments. See "Hyper-V Network Virtualization Gateway Architectural Guide".


Gateways can come in different form factors. They can be built upon Windows Server 2012, incorporated into a Top of Rack (TOR) switch, put into an existing network appliance, or can be a stand-alone network appliance. F5 has announced one such network appliance (F5 To Deliver Microsoft Network Virtualization Gateway).

If you are looking for Technet reference,  click on "Network Virtualization technical details".

Tuesday, October 30, 2012

Remote EFS for Cross-Forest AD Users

I was working on a project that allows encrypting files on a remote file share, which is not enabled by default. I had to set up the prerequisite infrastructure, including a fully functional Active Directory, a Public Key Infrastructure, roaming user profiles (RUP) and the file shares. Group policies have to be configured to make everything happen. In addition, users from a trusted forest must also be allowed to log in to the resource forest and be enrolled with an EFS certificate to perform the same remote file encryption. I find there is a lack of clear and cohesive documentation on Microsoft TechNet regarding cross-forest support for remote EFS. Here, I attempt to document what I did to make cross-forest remote EFS operations work. I would also like to thank the excellent Microsoft Premier support team for providing the necessary insights over multiple emails to get it up and running.

Let's get started. First things first: before performing cross-forest operations, you have to ensure that same-forest EFS operations work.

Understanding Remote EFS
Local EFS is pretty straightforward: encrypting files on local file systems. Remote EFS comes in 2 flavours: (1) remote encryption/decryption with an SMB file share and (2) remote encryption/decryption with WebDAV (a.k.a. Web folders).

For (1) SMB mode, roaming user profiles for the EFS certificate must be enabled, so that the file share server may impersonate the users and load the EFS keys from the roaming user profiles to perform encryption/decryption on the users' behalf. To enable impersonation, the file server must be trusted for delegation. As the file is decrypted before it leaves the server, the data travels across the network in plain text. Hence, you'll have to deploy additional IPsec or SSL to protect the transport path.

As for (2) WebDAV mode, the file is automatically copied from the Web folder to the user’s computer, encrypted on the user’s computer, and then returned to the Web folder over HTTPS. The advantage is that the computer hosting the Web folder does not need to be trusted for delegation and does not require roaming or remote user profiles. However, it comes with a serious limitation. I've tested it to be rather "slow" in access performance and Microsoft recommends that the file size should be kept below 60MB.

Refer to this Technet link on Remote EFS Operations for further information. In this post, I would focus solely on (1) Remote EFS with SMB file share. 

Phase 1: Remote EFS on SMB file share within Forest
  • Provision network share and enable roaming user profiles for users. See "Deploy Roaming User Profiles"
  • Setup Active Directory Certificate Services and configure EFS certificate auto-enrollment for users. See "AD CS and PKI Step-by-Steps, Labs, Walkthroughs, HowTo, and Examples"
  • Login with a test user account on a client machine. Verify that an EFS certificate has been auto-enrolled using "certmgr.msc". Logout the user, so that the roaming cert can be synced to the RUP.
  • Login again with the same user account. Verify that the EFS cert has been synced to the RUP folders. See this roaming cert troubleshooting guide
  • Setup a file server and provision a network share for remote EFS. Delegate the file server to be trusted (a minimal PowerShell sketch follows this list). See "Remote Storage of Encrypted Files Using SMB File Shares and WebDAV"
  • Map the network drive on the client machine. Try to create some files and encrypt them. Verify that the files are encrypted with the same certs in your profile. Otherwise, the file server might be requesting a new EFS cert on your behalf instead of loading it from the RUP. If the files fail to encrypt, it could be due to different roaming profile folder structures created by different Windows client versions. See this troubleshooting guide.
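A minimal sketch of the delegation step mentioned above, assuming the ActiveDirectory module is available and "FS01" is a placeholder for the file server's computer account:

# Trust the file server's computer account for delegation (required so it can impersonate users for EFS)
Import-Module ActiveDirectory
Set-ADComputer -Identity "FS01" -TrustedForDelegation $true

# Verify the setting
Get-ADComputer -Identity "FS01" -Properties TrustedForDelegation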
Phase 2: Supporting cross-forest users for certificate auto-enrollment and remote EFS
  • Setup two-way forest trust. In this context, this forest (hosting resources, such as file servers etc)  is known as "resource forest". The forest where users come from is known as "account forest". Create new security group with "Domain Local" scope and add cross-forest users to this group.
  • Allow "Read" and "Auto-enrollment" for the cross-forest group to the EFS user template
  • Configure Cross forest certificate enrollment. Run the "PKIsync.ps1" script to sync PKI objects, which is critical to facilitate cross-forest enrollment. Run the command on the account forest with "Enterprise Admin" rights specifying the source and target forests. 
  • The Technet link for cross-forest enrollment above missed out the step of copying the pkiEnrollmentService object over (i.e. certutil -config can't be used). Use this command instead:
  • PKISync.ps1 -sourceforest source.local -targetforest target.local -type CA -cn "CA-Name"
  • To verify the cert templates have been copied over, use the "AD Sites and Services" on the target DC, click "View" and tick "Show Service Node". Look under the folder "Services\Public Key Services\Certificate Template" to check the templates.
  • Add the Issuing CA server to the "Cert Publishers" Domain Local group on the account forest. 
  • Enable LDAP referral support on enterprise CAs: certutil -setreg Policy\EditFlags +EDITF_ENABLELDAPREFERRALS
  • Deploy group policy to enable "Allow Cross-Forest User Policy and Roaming User Profiles" and set "roaming profile path for all users logging onto this computer" on the client machines for cross-forest users, as well as the file servers as shown below:
  • User group policy does not take effect when a cross-forest user logs on to a domain computer. To configure user policy, configure loopback policy to facilitate auto-enrollment on client computers that cross-forest users log on to. Even if you don't enable it explicitly, loopback with replace mode is enabled by default for cross-forest user logins. Enable auto-enrollment under user configuration and leave the "Certificate Enrollment Policy" as "Not configured".

  • Test domain logon using a cross-forest user account. Perform the same verification procedures as earlier mentioned for single forest operations.
  • If remote EFS operations fail even though auto-enrollment works and the RUP certs are synced, it might be because the group policy failed to take effect on the file server. The server attempts to impersonate a cross-forest user but is disallowed by default, as shown in the event viewer below. Furthermore, the server is not aware of the location of the RUP folders for cross-forest users.

  • Apply gpupdate and gpresult /h gpresult.html on the file server. Ensure "Allow Cross-Forest User Policy and Roaming User Profiles" and "roaming profile path for all users logging onto this computer" are applied 
  • Cross-forest users should be able to perform remote EFS operations on this resource forest.

Thursday, October 25, 2012

Modifying Built-in AD Objects

While working on cross-forest cert enrollment, I noticed I couldn't add a cross-forest object to a built-in AD group, e.g. the "Cert Publishers" group. The built-in group is of "Global" security scope, which cannot contain objects from outside the forest. Rightfully, it should be of "Domain Local" scope so that cross-forest objects can be added. If it were not a built-in object, we could simply change it to "Universal" and then "Domain Local" using the "AD Users and Computers" console.

Nevertheless, we can use the SYSTEM account to modify the built-in group scope.
  1. Download pstools from link: http://technet.microsoft.com/en-us/sysinternals/bb896649.aspx.
  2. Extract it and get psexec.exe and copy it to a Domain Controller (DC).
  3. Open a new command prompt and run psexec.exe -s "cmd", then click Accept/Yes.
  4. This opens a new command prompt running under the SYSTEM account.
  5. In the new command prompt, run the commands below:
                dsmod group "CN=Cert Publishers,CN=Users,DC=contoso,DC=com" -scope U
                dsmod group "CN=Cert Publishers,CN=Users,DC=contoso,DC=com" -scope L

This would convert built-in group scope to Domain Local.

Note: If the AD was first created on Windows 2000 Server, the group was created with "Global" scope and remains that way, even after subsequent AD preps and upgrades.

Tuesday, October 23, 2012

Verifying SIDHistory of user accounts

When you're performing user and resource migration between forests with a trust, it's important to enable SIDHistory and disable SID quarantine during the migration process. As migrated users in the target forest get new domain SIDs, it's important for them to retain the old SIDs from their originating forest. This ensures that the migrated users in the new target forest can still access resources in the source forest until the migration is complete. This blog post sums it up using the Active Directory Migration Tool (ADMT).

But how do you verify that the SID history of a migrated user matches the SID in the source forest?

On new target forest:
dsquery * -Filter "(samaccountname=userid)" -attr sidHistory

On source forest:
dsquery * -Filter "(samaccountname=userid)" -attr objectSid

Compare the SID values of both output and ensure they are the same.
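Alternatively, a minimal sketch of the same check with the AD PowerShell module (the DC names are placeholders):

# On the target forest: read the migrated user's sidHistory
Get-ADUser -Filter "samAccountName -eq 'userid'" -Properties SIDHistory -Server "dc1.target.local" |
    Select-Object -ExpandProperty SIDHistory

# On the source forest: read the original objectSid
(Get-ADUser -Filter "samAccountName -eq 'userid'" -Server "dc1.source.local").SID.Value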

Thursday, October 18, 2012

Setting the ACL for Home and Roaming Profiles

Wondering how you should set the share and NTFS ACLs on the network shares for users' home folders and roaming profiles? Check out this TechNet blog: Automatic creation of user folders for home, roaming profile and redirected folders

By default, all newly created folders are set with inheritable permissions that include Read permission for all users. As a result, users would be able to see all other users' home folders. Access Based Enumeration (ABE) is designed to prevent users from viewing folders to which they have no read access. It can be easily enabled in the "Share and Storage Management" console. However, inheritable permissions get in the way because they permit all users to have "Read" access to all folders.

For ABE to work, you'll have to remove those inheritable permissions after the users' home folders are automatically created. You can have a PowerShell script that takes in a CSV file (exported by csvde) and removes all inheritable permissions on the user home folders. This is my script:

import-csv C:\temp\users.csv | foreach-object {
  # individual user name
  $user = $_.sAMAccountName
  # user home folder
  $newPath = Join-Path "\\FileShare\Home$" -ChildPath $user
  $acl = Get-Acl $newPath
  # this would remove inheritable permission
  $acl.SetAccessRuleProtection($true,$false)
  # additional custom permission added (optional)
  $permission = "MyDomain\$user","Modify","Allow"
  $accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule $permission
  $acl.SetAccessRule($accessRule)
  $acl | Set-Acl $newPath
}

If you encounter a situation where you can't move or remove the user profile folders, you'll have to take ownership of the folders recursively. Here are the commands:
takeown /F folder-name /R /D y 
icacls folder-name /grant administrators:F /T

Saturday, September 15, 2012

New Storage Spaces, SMB3.0 and SOFS in Windows Server 2012

One big game-changing difference in Windows Server 2012 is the new Storage Spaces feature together with SMB 3.0. Storage Spaces organizes a bunch of hard disks neatly into virtual storage pools that can be easily expanded by simply adding more hard disks. A virtual storage pool can also be added to support Clustered Storage Spaces for failover clustering (see: "How to Configure a Clustered Storage Space in Windows Server 2012"). At a minimum, each node needs direct access to shared disk resources (at least 3 SAS drives), i.e. a SAS JBOD with no RAID sub-system, such as a PCIe non-RAID HBA connection.
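For reference, a minimal sketch of building a pool and a mirrored virtual disk with the new storage cmdlets (the friendly names are placeholders):

# Pool all disks that are eligible for pooling, then carve out a mirrored virtual disk
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "*Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" -ResiliencySettingName Mirror -UseMaximumSize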

SMB3.0 provides many enhancements for improved performance, resiliency and security, including:
  1. SMB Scale-Out: transparently redirect SMB client connections to a different file server cluster node.  
  2. SMB Direct (SMB over RDMA): enables direct memory-to-memory data transfers between servers, with minimal CPU utilization and low latency, using standard RDMA-capable network adapters (iWARP, InfiniBand, and RoCE). Any application which accesses files over SMB can transparently benefit from SMB Direct.
  3. SMB Multichannel – takes advantage of multiple network interfaces to provide both high performance through bandwidth aggregation, and network fault tolerance through the use of multiple network paths to data on an SMB share.  Fast data transfers and network fault tolerance.
  4. Transparent Failover and node fault tolerance – Supporting business critical server application workloads requires the connection to the storage back end to be continuously available. The new SMB server and client cooperate to make failover of file server cluster nodes transparent to applications, for all file operations, and for both planned cluster resource moves and unplanned node failures.
  5. Secure data transfer with SMB encryption – protects data in-transit from eavesdropping and tampering attacks. Encrypting File System (EFS) is still required to protect data at rest though.
Reference: "SMB 2.2 is now SMB 3.0".

You can enable a cluster disk as a Cluster Shared Volume (CSV). CSV enables all cluster nodes to "own" a "shared" volume at the same time, i.e. an Active-Active configuration. When CSV 1.0 was first introduced in W2K8 R2, it was only meant for Hyper-V storage to support Live Migration. Matched with SMB 3.0, the new CSV 2.0 storage provides a solid NAS-based alternative to SAN for performance and resiliency at better value, known as Scale-Out File Server (SOFS). Meanwhile, Microsoft is working with hardware partners to create a cluster-in-a-box (CiB) architecture if you prefer an appliance-based SOFS solution.

SOFS can be used as file-based storage spaces for Hyper-V and MS SQL clusters over SMB 3.0. Without expensive SAN storage (replaced by SOFS using shared SAS JBOD) in the picture, the new Hyper-V and SQL cluster would look like below (taken from TechEd 2012):


Subsequently, I did a quick test on SOFS using iSCSI storage. Even without failover clustering, "Shared Nothing Live Migration" is also possible for non-clustered Hyper-V hosts using Hyper-V Replica.

Having sung much praise for SOFS, do note that it is still not meant for every situation. Microsoft recommends that SOFS should not be used if your workload generates a high number of metadata operations, such as opening files, closing files, creating new files, or renaming existing files, which is typical for end-user file shares. Microsoft publishes the following chart to help you decide when to use a traditional file share versus SOFS (taken from "When to use Scale-Out File Server"):


If you're already running 10 Gigabit Ethernet or higher in your data center, you should further optimise your existing investment and leverage the full performance benefits of SMB Direct (i.e. SMB over RDMA). Do note that the servers should have Network Interface Cards (NICs) that support RDMA (iWARP or RoCE). Here is a link on RDMA-enabled NICs that support Windows Server 2012.

I've also come across this informative MVP blog about the new SOFS that can potentially replace SAN-based solution for server clustering. Here's the extract:

Scale Out File Server (SOFS)
Normally we want our storage to be fault tolerant. That’s because all of our VMs are probably on that single SAN (yes, some have the scale and budget for spanning SANs but that’s a whole different breed of organisation).  Normally we would need a SAN made up fault tolerant disk tray$, switche$, controller$, hot $pare disk$, and $o on. I think you get the point. Thanks to the innovations of Windows Server 2012, we’re going to get a whole new type of fault tolerant storage called a SOFS.

When I’ve talked about SOFS many have jumped immediately to think that it was only for small businesses.  Oh you fools!  Never assume!  Yes, SOFS can be for the small business (more later).  But where this really adds value is that larger business that feels like they are held hostage by their SAN vendors.  Organisations are facing a real storage challenge today.  SANs are not getting cheaper, and the storage scale requirements are rocketing.  SOFS offers a new alternative.  For a company that requires certain hardware functions of a SAN (such as replication) then SOFS offers an alternative tier of storage.  For a hosting company where every penny spent is a penny that makes them more expensive in the yes of their customers, SOFS is a fantastic way to provide economic, highly performing, scalable, fault tolerant storage for virtual machine hosting.

Monday, September 10, 2012

Authenticating SMTP users on Exchange Edge

I thought of authenticating all POP3/SMTP external users. POP3 access is provided by the Client Access Server (CAS), which is joined to the domain, so there's no problem authenticating POP3 access.

As for the SMTP service, it is only provided by the Hub Transport or Edge Transport role. Hence, it made much security sense to create a new Receive Connector on the Edge server and enable "Basic Authentication". But when I configured the Outlook client for SMTP authentication, the Edge server rejected the authentication. Initially, I thought it could be due to an Exchange ACL error or AD LDS faults within the Edge server. I came to realize that this was the wrong concept when I came across this TechNet blog. Remember that AD LDS is for extending the AD partition to the perimeter network; it's not meant for authentication (only a full Domain Controller or RODC does authentication, and Exchange doesn't support the latter).
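For reference, a hedged sketch of the kind of authenticated client connector involved, created on a Hub Transport server instead (server name, connector name, bindings and IP ranges are illustrative):

# Authenticated client SMTP connector on a domain-joined Hub Transport server, where Basic auth can actually be validated
New-ReceiveConnector -Server "HUB01" -Name "Authenticated Client SMTP" -Usage Custom `
    -Bindings "0.0.0.0:587" -RemoteIPRanges "0.0.0.0-255.255.255.255" `
    -AuthMechanism Tls,BasicAuth,BasicAuthRequireTLS -PermissionGroups ExchangeUsers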

Important note:
Configuring SMTP
Most commonly, however, your clients will be authenticating for the purposes of identifying themselves (sender permission checks) and proving that they are allowed to relay. This authorization can be done by Edge only if it is in the domain. Since that is not the most common configuration, the Hub role may be more suited for this purpose.


Sunday, September 9, 2012

Hyper-V vs. vSphere: Understanding the Differences

SolarWinds did a very good job of comparing Hyper-V and vSphere. The views are unbiased and independent, and the upcoming Hyper-V 3.0 in Windows Server 2012 is also briefly covered. Here are the links:
  1. Webcast
  2. WhitePaper 
Microsoft also did a comparison (of course - from Microsoft's perspectives): 
Both Microsoft and VMware did their comparisons by highlighting their own "strengths" and their competitor's "weaknesses". Hence, you can get a balanced view by reading both whitepapers side by side. In my personal opinion, it's true that vSphere is still way ahead of Hyper-V R2. Hyper-V 3.0 will narrow the gap significantly and offer better-than-"good-enough" features for most enterprises. Coupled with the "irresistible" unlimited "free" VM rights of the Hyper-V Datacenter edition, and with no further revolutionary announcements from VMware, it seems to me that VMware might be fighting a losing head-to-head battle against the Redmond software giant.

Thursday, August 23, 2012

Nicira Network Virtualization Platform (NVP)

Yesterday, I had a short technical discussion with a Singapore-based Nicira staff member. They have a niche solution that does exactly what Cisco Nexus and L2 MPLS (i.e. Ethernet over MPLS or Any Transport over MPLS) are trying to achieve: multi-tenancy Data Center Interconnect (DCI) by creating multiple layer 2 virtual networks (or pseudo-wires) across an IP network. Layer 2 networks are essential for many data center applications, especially cloud virtualization. Imagine performing server clustering, VMware vMotion, or Hyper-V Live Migration across multiple physical sites, which would otherwise be broken by IP subnets. It should also be seriously considered as part of IT disaster recovery plans. As for multi-tenancy, you may have multiple network customers or tenants sharing the same underlying physical infrastructure. Each tenant should only see its respective overlay network without visibility into other virtual networks - a similar concept to host virtualization and cloud computing.

How does Nicira NVP work? From what I understand at a high level, a data-path STT (Stateless Transport Tunneling) tunnel is established between 2 or more Open vSwitches (OVS) across an IP network. This MAC-over-IP tunnel is used to encapsulate all MAC layer traffic and transport it over an IP network, which effectively connects 2 servers (whether virtual or physical) on different sites as if they were on the same subnet or VLAN.

As of now, the OVS can be integrated into the ESX, KVM and XenServer hypervisors. There is also a near-future plan for Hyper-V support (not sure whether the plan will be cancelled, as Nicira has now been acquired by its arch-rival VMware). Alternatively, an ISO-based image can be run as a virtual or physical server acting as an OVS gateway, connecting legacy systems to the Nicira network virtualization platform (NVP).

As for OVS management and control, the NVP Controller Cluster (housed on a server cluster) is used to centrally manage and control all OVSes over the control paths. I was told that even if there were a disconnection on the control paths, the OVSes would continue to operate (although they are not modifiable in that state).


From what I observed, the new Cisco Overlay Transport Virtualization (OTV) is probably Nicira's arch-rival at this point. True, traditional MPLS and L2 pseudo-wires can be employed to perform the same tricks, but they are limited by performance, a lack of MPLS-aware devices, or simply staff knowledge of IP networks. Both Cisco OTV and Nicira NVP, on the other hand, can be easily established across any traditional IP-based network.

And price also matters. I was told that one would need at least USD 150K for a small POC setup to "try out" Nicira NVP. For the same price tag, I could have purchased at least half a dozen Cisco ASR 1000 routers (OTV is now supported on the ASR platform from version XE 3.5S onward). A princely sum for a software purchase that is pretty hard to justify to management, I suppose.


Friday, August 10, 2012

Database Availability Group on Exchange 2010

In Exchange 2010, high availability for mailbox servers no longer requires Windows clustering. It's replaced with a new concept called the "Database Availability Group (DAG)". Think of it as a new clustering method for Exchange 2010 Mailbox servers. Up to 16 nodes can be grouped under a DAG. Even though it's no longer traditional Windows clustering, it still retains many similar concepts. Creating a DAG is pretty straightforward with the New-DatabaseAvailabilityGroup cmdlet or in the graphical EMC.

The network connection concept largely remains the same. You have to define the group IP address (similar to a cluster IP address) on the public network; replication is not recommended over this network. You should also leave the iSCSI networks alone. DAG replication is recommended over private networks. The quorum model remains largely similar, although it has become more transparent by employing a witness server to determine which subset of a split cluster should remain functioning.

One major difference is the mailbox database. Unlike Windows clustering, shared storage on a SAN or file share is no longer required; each node hosts its own individual database copies. The same copy can be replicated throughout all nodes in a DAG - only one copy is activated, with the rest passive. This is easily achieved using the Add-MailboxDatabaseCopy cmdlet. Do take note that the replicated path follows exactly that of the original hosting mailbox server. For example, if you host the original mailbox database on "D:\MailDB", the rest will follow. If you realise you have hosts with differing paths, you can still move the database path. However, the path can't be moved once it is replicated; you have to remove the database copy, move the path, and add the database copy again. For more info on moving the mailbox database path for a mailbox database copy, refer to this TechNet article.
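A hedged sketch of the overall DAG workflow in the Exchange Management Shell (server, witness and database names are placeholders):

# Create the DAG, add two mailbox servers, then replicate an existing database to the second node
New-DatabaseAvailabilityGroup -Name "DAG01" -WitnessServer "HUB01" -WitnessDirectory "C:\DAG01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "MBX01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "MBX02"
Add-MailboxDatabaseCopy -Identity "MailDB01" -MailboxServer "MBX02" -ActivationPreference 2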


Tuesday, August 7, 2012

Setting up Edge Transport Server in Exchange 2010

I was recently tasked to set up a new Exchange 2010 environment for my organization. There is a special Exchange role called the "Edge Transport Server (ETS)" that is meant for transporting messages to and from external networks, such as the Internet. According to Microsoft, the Edge Transport Server in Exchange 2010 is secure by default, and hence there is no need for additional hardening, such as applying a Security Configuration Wizard (SCW) template.

As ETS is typically placed on the network perimeter, it should not be joined to any Active Directory Domain to reduce attack surface. Ironically, this is not supported on Server Core. To link the ETS to the Exchange Organization through Edge Synchronization, there is a process called "Edge Subscription".

The process can be summarized as follows:
  1. Install the Edge Transport server role.
  2. Verify that the Hub Transport servers and the Edge Transport server can locate one another by using Domain Name System (DNS) name resolution. 
  3. Configure the objects and settings to be replicated to the Edge Transport server.
  4. On the Edge Transport server, create and export an Edge Subscription file by using "New-EdgeSubscription" cmdlet.
  5. Copy the Edge Subscription file to a Hub Transport server or a file share that's accessible from the Active Directory site that has your Hub Transport servers.
  6. Import the Edge Subscription file to your Active Directory site to which you want to subscribe your Edge Transport server. Use "Get-Help New-EdgeSubscription -examples" for reference. 

Typically, the ETS should be dual-homed, with the internal network interface connecting to the Hub Transport server and the external interface to the Internet. You may test the edge subscription by running "Test-EdgeSynchronization" in the Exchange Management Shell (EMS) on the Hub Transport server. Once configured successfully, you can point the "MX" record of your domain name to the external network address of the ETS. For high availability, set up two or more ETSes on your network perimeter.
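A hedged sketch of the subscription steps in EMS (the file path and AD site name are placeholders):

# On the Edge Transport server: export the subscription file
New-EdgeSubscription -FileName "C:\EdgeSubscription.xml"

# On a Hub Transport server: import the file into the AD site you want to subscribe to
New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\EdgeSubscription.xml" -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"

# Verify the synchronization from the Hub Transport server
Test-EdgeSynchronization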

For further information, refer to this Technet article. For information on limiting message size and file attachment limits, refer to this link.

Monday, July 23, 2012

WebDAV EFS doesn't work on Windows Server 2008 (R2)?

You may have attempted to follow this TechNet guide to test out remote EFS on a WebDAV folder, only to find that the encrypted file is either corrupted or not encrypted at all!

In order for EFS to work on WebDAV, you'll need to enable Custom Properties on IIS7. Follow this guide and it works like a charm!

If you're implementing Credential Roaming for EFS certificates, do make sure there is sufficient storage on the Domain Controllers. Roaming certificates and keys are stored on DCs and are replicated. Refer to this TechNet link for considerations.

Wednesday, July 18, 2012

How to share EFS encrypted file

It's pretty straightforward to encrypt a file on local drives. All you need to do is right-click the file, select "Properties" -> "General" -> "Advanced" and check "Encrypt contents to secure data". This is provided you have enrolled an EFS certificate in your user certificate store. Credential roaming works great if you are going to log in to multiple machines.

To share the encrypted file with other users, you have to add their EFS certs to the file before they can access it. On the file that you intend to share, right-click and select "Properties" -> "General" -> "Advanced" -> "Details" -> "Add".
Click on "Find User".
Even though you have selected the user, you won't be able to add them yet. You have to first install their EFS cert into the "Other People" store of your personal certificate store. Click on "View Certificate" and install the cert into your "Other People" store. Click the "Add" user button again and you'll now be able to add the cert to the encrypted file.
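If you prefer the command line, cipher.exe can do the same; a minimal sketch (file and certificate paths are placeholders):

# Show who can currently decrypt the file
cipher /c D:\Docs\secret.docx
# Add another user's EFS certificate (an exported .cer file) to the encrypted file
cipher /adduser /certfile:C:\certs\colleague.cer D:\Docs\secret.docx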

Wednesday, June 20, 2012

Implementing NAP with 802.1x enforcement

In my earlier post, I configured 802.1x with EAP-TLS. Now, I'm expanding the effort to Network Access Protection (NAP) with 802.1x enforcement. Machines that are validated as compliant with the policy can access the authorized network or VLAN; otherwise, they go into a guest VLAN for further remediation. In NAP with 802.1x enforcement, clients send a Statement of Health (SoH) to the Windows NPS server for System Health Validation against the health policies, on top of 802.1x authentication. The SoH contains information pertaining to the Security Center of the Windows client.

In this example, I would just configure the Health Policies to check the status of Windows Firewall. The Windows 7 client and the NPS (Windows Server 2008 R2) have been setup in a full AD environment with AD Certificate Services. All certificates have been issued and the network switch is configured with 802.1x settings.

On the NPS server, click "Configure NAP" to start the wizard and follow the instructions carefully. Go to "Connection Request Policies" after completing the wizard. Right-click the NAP policy and click "Properties". Click the "Settings" tab and edit "Microsoft Protected EAP (PEAP)". Note from the underlined description below that you must configure the PEAP properties here.

Choose the correct server cert that was generated from the "RAS and IAS Server" template, as mentioned in my previous post. In addition, ensure that the highlighted items below are added and enabled. Edit "Smart Card or other certificate" to choose the correct cert and CA if you're using certificate authentication.

On the client configuration, it would be more efficient to use Group Policy to configure and enable the NAP setting. On the computer configuration, create the "Wired Network (IEEE802.3) Policies" as shown below:

Ensure that the clients' PEAP authentication settings match the NPS server's. In addition, under "Security Settings", set the startup type of "Wired AutoConfig" and "Network Access Protection Agent" to "Automatic". Next, go to "Network Access Protection" to enable the "EAP Quarantine Enforcement Client". You may also want to configure other optional settings like "User Interface Settings".
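If you need to test a client outside of Group Policy, here is a hedged sketch of the equivalent local configuration (verify the enforcement client ID with "netsh nap client show config" first; 79623 is commonly listed as the EAP quarantine enforcement client):

# Start the wired 802.1x and NAP agent services automatically
Set-Service -Name dot3svc -StartupType Automatic; Start-Service dot3svc
Set-Service -Name napagent -StartupType Automatic; Start-Service napagent

# List the enforcement clients, then enable the EAP quarantine enforcement client
netsh nap client show config
netsh nap client set enforcement id = 79623 admin = "enable"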

Once the GPO is created, link it to the client OUs and run "gpupdate" on the Windows 7 client. Check the status in the event viewer. If everything runs well, try disabling the Windows firewall: it will be re-enabled automatically to bring the client back into compliance. For more details and troubleshooting, refer to this NAP with 802.1x enforcement step-by-step guide.

Tuesday, June 12, 2012

A certificate could not be found that can be used with this EAP when configuring 802.1x on NPS

I was running the default 802.1x wizard to configure a new RADIUS server on Windows Server 2008 R2. I got an error saying "A certificate could not be found that can be used with this Extensible Authentication Protocol", as shown below:

But when I ran the cert manager, I saw a computer certificate! So what's wrong?! It's the template. Most of the time, we configure auto-enrollment for machines based on the Computer template. This time, you'll need the "RAS and IAS Server" template. Rather than auto-enrollment, you may want to perform a manual cert enrollment for the NPS server. Hence, I duplicated a new NPS server template from "RAS and IAS Server". And yes, you'll also need to register the NPS server in AD using the "netsh ras add registeredserver" command. Ensure that the NPS server is a member of the "RAS and IAS Servers" security group in AD.

To further ensure that the NPS server is using the "correct" cert, click "edit" on the PEAP or EAP-TLS authentication method and verify the cert as follows:


In summary (click for detailed step-by-step guide):
  1. Register the NPS server 
  2. Enroll a new cert based on "RAS and IAS Server" template
  3. Excellent link for NAP with 802.1x troubleshooting
  4. Setting up & verifying NAP CA to issue health certificates

Saturday, June 9, 2012

Re-Learning the Basic of Relational Database - Normalization, Primary Key and Foreign Key

The last time I worked on a SQL-based relational database was more than 10 years ago, when I started my first job as a C/C++ and Java programmer. It's time for me to revise that knowledge. Recently, I picked up a book called "Microsoft SQL Server 2012: A Beginner's Guide" by a German professor named Dusan Petkovic. I've started chapter 1, which is all about the basics of relational databases: the normalization process and various key concepts. How I wish the author could explain it in even simpler language.

Normal Forms
Normalization is the process of efficiently organizing data in a database by reducing data redundancy while ensuring the integrity of data dependencies. Normal forms are used to describe the stages of this process. In theory, they run from stage one (the lowest form of normalization, referred to as first normal form or 1NF) through stage five (fifth normal form or 5NF). In practical applications, you'll mostly see the first three NFs (1NF, 2NF, and 3NF); the last two (4NF and 5NF) are seldom used.

First Normal Form (1NF)
1NF means that the value of any column in a row must be single-valued (i.e. atomic). Imagine a table with the following fields: Employee No (emp_no) and Project No (project_no), where the relationship is one-to-many, i.e. one employee may take up multiple projects. The table may look like this:
emp_no         project_no
10102           (p1, p3)    


The table is not in 1NF, as project_no contains more than one value. Once we've enforced atomicity, the table is in 1NF as follows:
emp_no         project_no
10102               p1         
10102               p3         


Second Normal Form (2NF)
A primary key is the column (or columns) of a table that identifies each row uniquely. In the earlier table, emp_no and project_no together form a composite primary key (i.e. a primary key made up of more than one column). Let's expand the same example with more columns indicating which department the employees belong to: department ID (dept_id), department name (dept_name) and department location (dept_loc). The table may look like this:
emp_no         project_no       dept_id      dept_name       dept_loc
10102               p1                 d1             Sales                  L1    
10102               p3                 d1             Sales                  L1    
25348               p1                 d2             Marketing           L2     


Here, there is some redundancy in dept_name and dept_loc. Not only does the redundant info take up more storage space, there is also a chance of update errors whenever an employee changes department or a department relocates. To be in 2NF, remove subsets of data that apply to multiple rows of a table and place them in separate tables, then create relationships between these new tables and their predecessors through foreign keys (i.e. primary keys of another table). The resultant tables may look like this:
emp_no         project_no  
10102               p1           
10102               p3           
25348               p1           


emp_no           dept_id        dept_name       dept_loc
10102                  d1                Sales                  L1  
25348                  d2                Marketing           L2  

In the newly separated table, the emp_no is both the primary key and foreign key (i.e. reference key to a primary key of another table). Note: a table with a one-column primary key is always in 2NF.


Third Normal Form (3NF)
To be in 3NF, the table must first satisfy the requirements of 1NF and 2NF. Next, there must be no functional dependencies between the non-key columns. The earlier separated table is not in 3NF because dept_name depends on dept_id, which is another non-key column. To reach 3NF, another new table is split out, and the final resultant tables may look like this:
emp_no         project_no  
10102               p1           
10102               p3           
25348               p1           


emp_no           dept_id 
10102                  d1   
25348                  d2   


dept_id        dept_name       dept_loc
d1                   Sales                  L1   
d2                   Marketing           L2   


In the last table, dept_id becomes the primary key and all the three tables are now in 3NF.

Sunday, June 3, 2012

BIOS upgrade using bootable USB to DOS

Recently, I bought a new Acer Aspire 5560G notebook. It came with Win7 Home Premium. I wanted to start installing the new MS SQL 2012 on a virtual machine. Since VMware is no longer giving away VMware Workstation for free, the natural choice was to install Windows Server 2008 R2 on it, which comes with free Hyper-V.

Upon successful installation of the new OS, I noticed that rebooting and shutting down this new notebook was not seamless: I had to hold down the power button for it to shut down completely. I thought it was a BIOS error and downloaded the latest BIOS update. Only then did I realise that the update can only be run in DOS mode. Hey, it's not Win98, and the newer MS OSes no longer come with DOS! (MS now has something called WinPE, but that's still not DOS.)

After searching the Internet high and low, I came across this good article that explains how to boot a machine into DOS using a USB stick. It requires a simple free HP utility called the "HP USB Disk Storage Format Tool". After formatting the USB stick with MS-DOS system files, I copied the BIOS update DOS utilities onto it.

Rebooting the notebook from the USB stick, I finally managed to upgrade the BIOS firmware. It still didn't solve the shutdown problem, but at least I now know of this easy method to boot any machine into DOS mode quickly.

Thursday, May 10, 2012

Making Changes to MST Config with Care

Multiple Spanning Tree (MST) is great for VLAN spanning tree management and load-balancing. In concept, you can take multiple VLANs and group them into two MST instances. Instead of managing every individual VLAN's spanning tree, you just need to manage the two instances regardless of the number of VLANs.

However, for two or more switches to be in the same MST region, they must have an identical MST name, VLAN-to-instance mapping, and MST revision number. Any change applied to the existing MST configuration on one switch but not on the others would cause network disruption. To minimize such disruptions, learn to stage MST changes and only commit them once all switches are 'standardized' on the new MST configuration.

This is how to enter MST configuration sub-mode on the switch:
switch# configure terminal
switch(config)# spanning-tree mst configuration

This shows how to leave MST configuration sub-mode on the switch without committing the changes:
switch(config-mst)# abort

This shows how to commit the changes and leave MST configuration sub-mode on the switch:
switch(config-mst)# exit

For more information, download this Cisco "Configuring MST" doc.

Tuesday, May 8, 2012

How to add new interfaces on Juniper SRX chassis cluster

There are many good JUNOS articles on setting up Juniper SRX chassis clusters, but I just want to summarize the steps for adding new interfaces to an existing chassis cluster. In other words, the following prerequisites are already complete:
  1. Configuring Chassis Cluster information on both nodes e.g. set chassis cluster-id 1 node 0 
  2. Configuring Redundancy Groups (RG) and specifying which node should be the primary node for each RG, e.g. set chassis cluster redundancy-group 1 node 0 priority 200. This is also where you determine whether it is an Active-Passive or Active-Active setup
  3. Configuring Out-of-Band management interface for fxp0 - optional
  4. Configuring Virtual Routing instances (a.k.a VRF-lite in Cisco networking) - optional 
  5. Configure the number of Redundant Interfaces using "set chassis cluster reth-count n" where n is the number of reth.
  6. Configuring Redundant Interface (reth) using at least one interface from each node
  7. Configuring control link using fxp1 interface where configuration synchronization takes place between 2 nodes 
  8. Configuring fabric interface (fabn where n denotes the node id) consisting of at least one ethernet interface from each node
  9. Successful cluster setup!
After you have established the cluster successfully, you may wish to add more interfaces to it. The additional steps are as follows:

Step1: Increase the reth count by using
  • set chassis cluster reth-count n where n is the new number of reth interfaces
Step 2: Identify 2 similar interfaces (one from each node e.g. ge-0/0/2 and ge-8/0/2) to form a new reth. e.g. 
  • set interfaces ge-0/0/2 gigether-options redundant-parent reth2
  • set interfaces ge-8/0/2 gigether-options redundant-parent reth2
Step 3: Configure new reth2 by heading to "edit interfaces reth2"
  • Enable VLAN tagging if you intend to use VLAN: "set vlan-tagging"
  • Create new sub-interface: "set unit nnn vlan-id " where nnn is any sub-interface number.
  • Assign IP address to sub-interface: "set unit nnn family inet address 1.1.1.1/24" 
  • Return to top level edit: "top"
Step 4: Assign this interface to the virtual routing instance
  • set routing-instances interface reth2.nnn
Step 5: Assign this interface to the appropriate security zone
  • set security zones security-zone interfaces reth2.nnn
Step 6: Check new configurations and commit
  • top
  • show | compare rollback 0
  • commit