"WHY BROCADE VDX IN THE DATA CENTER OVER CISCO NEXUS?"
I've already started this series with THIS POST about why Brocade VDX in the data center over the Cisco Nexus. I'll continue as time goes on explaining why I believe Brocade is the better answer than Cisco.
I want to pass on to you guys just how easy it is to set up the VDX environment. I have three VDX6740s in a lab right now, and I'm working with them to provide you some good information. Let's get going and walk through setting up a VCS (Virtual Cluster Switching) fabric with the VDX product. Keep in mind, the VDX is geared for data centers, and I'll get into the reasons why as I post more on this subject.
Below, I'm pasting in the config I did to get the cluster formed and ready. I did a firmware upgrade to 6.0.2 first, as you saw in a post earlier this week. All I have to do to get the cluster formed is ONE command. Yes, that's right: only one command in the CLI on each VDX6740 to tie all the VDXs together into the VCS cluster.
First, I want to show you what you should expect to see when you run the "show vcs" command, after you put a management IP address on the box. I did that when I upgraded the firmware over FTP, as you saw in an earlier post this week.
sw0#
sh vcs
Config Mode : Local-Only
VCS Mode : Fabric Cluster
VCS ID : 1
Total Number of Nodes : 1
Rbridge-Id WWN Management IP VCS Status Fabric Status HostName
--------------------------------------------------------------------------------------------------------------
1 >10:00:00:27:F8:C7:D2:56* 10.10.10.10 Online Online sw0
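By the way, putting a management IP on a box from the console only takes a few lines. Here's a rough sketch, assuming rbridge-id 1, the Management 1/0 interface, and a /24 mask (adjust the address and mask for your own network):

sw0# configure terminal
sw0(config)# interface Management 1/0
sw0(config-Management-1/0)# ip address 10.10.10.10/24
sw0(config-Management-1/0)# end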
Keep in mind, this is VDX number 1, the first in the cluster. I'll now set the VCS ID and rbridge-id for box number 1. The VCS ID will be the same across the VCS cluster, while the rbridge-id will be different for each box, just FYI.
sw0#
vcs vcsid 10 rbridge-id 1 logical-chassis enable
This operation will perform a VCS cluster mode transition for this local node with new parameter settings. This will change the configuration to default and reboot the switch. Do you want to continue? [y/n]:
y
The VDX reboots. When it comes back up, I'll run "show vcs" again, and this is what you can expect to see:
sw0#
sh vcs
Config Mode : Distributed
VCS Mode : Logical Chassis
VCS ID : 10
VCS GUID : c35843f9-d60d-4949-b27d-93338d51f692
Total Number of Nodes : 1
Rbridge-Id WWN Management IP VCS Status Fabric Status HostName
--------------------------------------------------------------------------------------------------------------
1 >10:00:00:27:F8:C7:D2:56* 10.10.10.10 Online Online sw0
Notice that instead of "Fabric Cluster", we now have "Logical Chassis". Logical chassis mode is what lets us manage all of the VDXs from the principal VDX alone; the whole fabric looks like one box, no matter how many switches we add in. There's more going on technically behind logical-chassis mode, but I'll save those details for another post.
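To give you an idea of what single-point-of-management looks like in practice: once the other nodes join, you can configure a port on any rbridge from this one console. A quick sketch (the port number and description are just placeholders, not from my lab):

sw0# configure terminal
sw0(config)# interface TenGigabitEthernet 2/0/10
sw0(conf-if-te-2/0/10)# description server-uplink
sw0(conf-if-te-2/0/10)# no shutdown
sw0(conf-if-te-2/0/10)# end

The change is distributed from the principal to rbridge 2 automatically; there's no separate login to the second switch.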
Now, let's add the second box. I've consoled into VDX number 2, and I type in the following:
sw0#
vcs vcsid 10 rbridge-id 2 logical-chassis enable
This operation will perform a VCS cluster mode transition for this local node with new parameter settings. This will change the configuration to default and reboot the switch. Do you want to continue? [y/n]:
y
This unit reboots and comes back up. Next, I physically tie the first and second VDX together with a 10Gig twinax cable, then run the commands below to verify the fabric has formed.
sw0#
sh vcs
Config Mode : Distributed
VCS Mode : Logical Chassis
VCS ID : 10
VCS GUID : c35843f9-d60d-4949-b27d-93338d51f692
Total Number of Nodes : 2
Rbridge-Id WWN Management IP VCS Status Fabric Status HostName
--------------------------------------------------------------------------------------------------------------
1 >10:00:00:27:F8:C7:D2:56* 10.10.10.10 Online Online sw0
2 10:00:50:EB:1A:38:D7:DF 10.10.10.11 Online Online sw0
sw0#
sw0#
sh fabric isl
Rbridge-id: 1 #ISLs: 1
Src Src Nbr Nbr
Index Interface Index Interface Nbr-WWN BW Trunk Nbr-Name
----------------------------------------------------------------------------------------------
65    Te 1/0/2       65    Te 2/0/2       10:00:50:EB:1A:38:D7:DF   10G   Yes   "sw0"
Notice above that "show fabric isl" shows the physical connection, and that it's 10Gig. I also ran "show vcs" again, which now shows both VDXs in the cluster. Keep in mind, I did this with ONE command on each VDX. Now, let's add the third VDX that I have.
sw0#
vcs vcsid 10 rbridge-id 3 logical-chassis enable
This operation will perform a VCS cluster mode transition for this local node with new parameter settings. This will change the configuration to default and reboot the switch. Do you want to continue? [y/n]:
y
VDX number 3 reboots, and when I physically connect its 10Gig twinax cable into the first VDX and type "show vcs" on the principal VDX, I get the following:
sw0#
sh vcs
Config Mode : Distributed
VCS Mode : Logical Chassis
VCS ID : 10
VCS GUID : c35843f9-d60d-4949-b27d-93338d51f692
Total Number of Nodes : 3
Rbridge-Id WWN Management IP VCS Status Fabric Status HostName
--------------------------------------------------------------------------------------------------------------
1 >10:00:00:27:F8:C7:D2:56* 10.10.10.10 Online Online sw0
2 10:00:50:EB:1A:38:D7:DF 10.10.10.11 Online Online sw0
3 10:00:50:EB:1A:1D:8B:0B 10.10.10.12 Online Online sw0
Here are some other good commands for verifying connections and getting information:
sw0#
sho fabric isl
Rbridge-id: 1 #ISLs: 2
Src Src Nbr Nbr
Index Interface Index Interface Nbr-WWN BW Trunk Nbr-Name
----------------------------------------------------------------------------------------------
64    Te 1/0/1       64    Te 2/0/1       10:00:50:EB:1A:38:D7:DF   20G   Yes   "sw0"
79    Te 1/0/16      69    Te 3/0/6       10:00:50:EB:1A:1D:8B:0B   20G   Yes   "sw0"
Notice above that I now have 20Gig for each connection between the VDXs. I ran two 10Gig twinax cables to each VDX, giving me a 20Gig trunk to each. I could run eight (80Gig) to each VDX if I wanted, but I didn't have the cables to do that in my lab.
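If you want to see the individual members of those trunks rather than just the total bandwidth, "show fabric trunk" is another command worth knowing. I'll leave the output out here since it depends entirely on your cabling:

sw0# show fabric trunk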
Now, let's look at exactly which ports are connected.
sw0#
sho fabric islports
Name: sw0
Type: 131.7
State: Online
Role: Fabric Principal
VCS Id: 10
Config Mode: Distributed
Rbridge-id: 1
WWN: 10:00:00:27:f8:c7:d2:56
FCF MAC:
Index Interface State Operational State
===================================================================
64 Te 1/0/1 Up ISL 10:00:50:eb:1a:38:d7:df "sw0" (downstream)(Trunk Primary)
65 Te 1/0/2 Up ISL (Trunk port, Primary is 1/0/1 )
66 Te 1/0/3 Down
67 Te 1/0/4 Down
68 Te 1/0/5 Down
69 Te 1/0/6 Down
70 Te 1/0/7 Down
71 Te 1/0/8 Down
72 Te 1/0/9 Down
73 Te 1/0/10 Down
74 Te 1/0/11 Down
75 Te 1/0/12 Down
76 Te 1/0/13 Down
77 Te 1/0/14 Down
78 Te 1/0/15 Up ISL (Trunk port, Primary is 1/0/16 )
79 Te 1/0/16 Up ISL 10:00:50:eb:1a:1d:8b:0b "sw0" (downstream)(Trunk Primary)
80 Te 1/0/17 Down
... (cut for brevity)
You can do any topology you like that makes sense for your customer. In this example, I have two VDX6740s hanging off of the first VDX6740. You can get as redundant as you like.
That is literally all there is to forming your VCS fabric. Let's recap. The CLI command on each VDX to form the cluster is "vcs vcsid (#) rbridge-id (#) logical-chassis enable". The VCS ID must be the same for all VDXs in the VCS cluster, and the rbridge-id must be different for each VDX in the cluster. Also keep in mind that the firmware version on each VDX6740 must be the same.
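To put the whole recipe in one place, here's roughly the sequence I'd run on each new node, using the values from this lab as examples:

sw0# show version
(confirm the firmware matches the rest of the cluster before going any further)

sw0# vcs vcsid 10 rbridge-id 3 logical-chassis enable
(the node defaults its config, reboots, and joins the logical chassis)

sw0# show vcs
(run this on the principal after cabling the new node in, and make sure it shows Online)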
When I compare this to the Cisco Nexus, the Brocade solution makes forming a data center fabric much easier. You can refer to my Cisco Nexus posts on configuring 5Ks and FEXes and getting redundancy set up in the data center.
Post 1 Post 2
Also, here is another config post for the Cisco Nexus.
Post 3