Cisco Switching/Routing :: Datacenter Design With 2 Nexus 5K And 4 Nexus 2K?
Nov 13, 2012
We are planning a Nexus datacenter project with this layout. Our experience with Nexus switches is limited so far, and the manuals are very extensive. Both N5Ks should be connected directly to all 4 N2K switches. I did not find a layout like this in the manuals, only a design where just 2 N2Ks are connected to one N5K, with this FEX config. Now I'm not sure whether it is right to build a config like this with the same slots and FEX numbers, or with different slots and FEX numbers.
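In case a concrete example helps, here is a minimal sketch of a dual-homed (active-active) FEX; as far as I understand it, each FEX keeps the same FEX number on both N5Ks and the fabric port-channel carries the same vPC number on both parents, repeated once per FEX (so 101-104 in a four-FEX layout). Interface numbers, FEX IDs and keepalive addresses below are placeholders:

feature fex
feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
!
! identical on both N5Ks, one block per FEX (FEX 101 shown)
interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101
interface Ethernet1/1-2
  switchport mode fex-fabric
  fex associate 101
  channel-group 101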
We have two Catalyst 6506 switches with 10 Gb uplinks and around 120 edge switches (Cat 3750-X). The module on the core where the servers are connected is still a 1000 Mbps port. If we introduce a Nexus switch into the datacenter, what kinds of benefits can we reap in a virtualised environment as well as a physical one? The following are some of the queries: Can we reduce the number of edge switches (through virtualisation)? How is the interoperability between Catalyst IOS and Nexus NX-OS, and how will this affect the environment? What will be the overall benefits? What are the cons of this induction?
At this past Networkers I was at the Cisco booth discussing how the 2248 can connect to the 5548 and provide server connectivity. I was told that now, as of a fairly recent NX-OS release, you can have the 2248 dual-homed to both 5548s via vPC and then have a server connected to both 2248s in active-active mode. Is this correct?
When we first deployed our 5548s and 2248s we had to put each 2248 in straight-through mode, where it only had connections to one 5548, and then the server would dual-connect to the 2248s and be in active-active mode. I was told this changed with an NX-OS release; however, the documentation still seems fragmented on what exactly is the case.
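For what it's worth, on the 5500 platform this combination (FEX dual-homed to both 5548s plus a server port-channel spanning two FEXes) is what the documentation calls enhanced vPC, and as far as I recall it arrived around NX-OS 5.1(3)N1(1). A rough sketch, assuming FEX 101 and 102 are already dual-homed to both 5548s; interface and VLAN numbers are made up and the same host config goes on both 5548s:

interface Ethernet101/1/10
  switchport access vlan 100
  channel-group 20 mode active
interface Ethernet102/1/10
  switchport access vlan 100
  channel-group 20 mode active
interface port-channel20
  switchport access vlan 100
! no explicit "vpc" statement is needed on the host port-channel -
! behind dual-homed FEXes it is treated as a vPC automatically;
! "mode active" assumes the server runs LACP (use "mode on" for static)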
Small datacenter design. My requirements and setup will be as follows:
Dell PowerEdge M1000E Blade Chassis (initially one full chassis)
Dell PowerConnect 10GbE Blade Switches
Dell Compellent Storage Array, 10Gb iSCSI with redundant controllers
Dell PowerConnect 7024 dedicated to external storage
Virtual host blade servers
2 x Cisco ASA for firewall (5525-X or similar in active-active configuration)
2 x redundant routers or switches as gateway to the public internet
I am looking to segregate customers (approximately 100) into separate VLANs at the access layer and route them up to the Cisco ASA firewalls using Dot1Q trunking for segregation. The Cisco ASAs will perform NAT and route to the redundant gateways. I then need to police each customer's traffic at the gateway to limit bandwidth and perform specific traffic marking, along with simply routing out to the internet.
Budget is somewhat restrictive, so I am looking for the most "cost-effective" devices I can use at the gateway to perform the traffic policing/marking/routing for each customer.
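One common way to do the per-customer part at the gateway is plain IOS MQC policing on the customer subinterface; whichever box you end up choosing, the shape of it would be roughly this (router model, rate, VLAN and addressing below are all made up):

! police customer 101 to 20 Mbps inbound and remark conforming traffic
policy-map POLICE-CUST-101
 class class-default
  police cir 20000000 conform-action set-dscp-transmit af21 exceed-action drop
!
! one subinterface per customer VLAN coming up from the ASAs
interface GigabitEthernet0/0.101
 encapsulation dot1Q 101
 ip address 192.0.2.1 255.255.255.252
 service-policy input POLICE-CUST-101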
I would like to make a design with 4 Nexus 5596UPs: 2 of them equipped with the Layer 3 Expansion Module so they can serve as the core layer, and the other 2 used as Layer 2 for the server aggregation layer. The 2 Nexus in the core layer will run HSRP and will peer with the ISP via BGP for the Internet connection. The 2 Nexus in the aggregation layer will be configured as Layer 2 devices and have FEXes and switches connected to them. What I am unsure of is how the vPC and port-channel configuration should look between the 4 Nexus. What I was thinking is to run vPC between the 2 Nexus in the aggregation layer and between the 2 Nexus in the core layer. Then I was thinking of connecting each Nexus in the aggregation layer to both Nexus in the core layer using a port-channel, and vice versa.
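That matches what is usually called a double-sided (back-to-back) vPC: each pair is its own vPC domain, and a single port-channel, defined as a vPC on both pairs, carries all four links between the layers. A rough sketch with placeholder numbering (the vPC peer-link and peer-keepalive inside each pair are assumed to be in place already):

! On both core 5596s (vPC domain 1)
interface port-channel10
  switchport mode trunk
  vpc 10
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active
!
! On both aggregation 5596s (vPC domain 2) - mirror configuration
interface port-channel10
  switchport mode trunk
  vpc 10
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active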
We are facing an issue of continuous packet discards on the Nexus 4001L link (int po2) to a Nexus 5020 switch. The Nexus 4001L is installed in an IBM BladeCenter chassis and we have FCoE enabled in this setup. [code]
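If it helps narrow things down, the first thing I would check is whether the discards are input or output and whether they line up with a particular queue/class on the FCoE-enabled link; the interface and queuing counters should show that (port-channel 2 as in the post):

show interface port-channel 2
show interface port-channel 2 counters errors
show queuing interface port-channel 2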
I have been tasked to replace the existing Cat 6500 and 3750 switches with Nexus 7000 and Nexus 2000. I was told my boss initially plans to get 2 x Nexus 7000 and then eventually grow to 4 x Nexus 7000s. For Nexus, is there a list of tasks/points that I need to consider for building the initial design?
Can I just link the Nexus 7000s like the following?
N7k-A ========= N7k-B
  |                |
lots of N2ks    lots of N2ks
This is regarding the Nexus 7010 core switch. We are already running two Nexus 7Ks with ten Nexus 5Ks. Currently we are going to add two new Nexus 5Ks to our DC. On the 7Ks we are already running two VDCs.
Fans 1 & 2 in Module 1 on the Nexus 5K are still experiencing the very high RPM/speed issue.
I have replaced the fan with one from another operational Nexus 5K, and the original fans work fine in the other Nexus. The replacement fans also show the same issue here, so it is not a fan hardware issue.
There are no threshold alarms. The only log entry that is related to this is as follows:
%NOHMS-2-NOHMS_ENV_ERR_FAN_SPEED: System minor alarm in fan tray 1: fan speed is out of range on fan 1. 7950 to 12500 rpm expected.
I have provided the output for both the fan detail and the temperature.
N5K-01# sh environment fan detail
Fan:
---------------------------------------------------
Module  Fan  Airflow  Speed(%)  Speed(RPM)  Direction
---------------------------------------------------
1       1
I was reading a QoS walkthrough earlier to try to solve my problem, and I noticed that in IOS you can specify "match vlan" in a class map. This is not available in NX-OS. I'm not doing any routing on the 5K, so I cannot match on an ACL, and the port where the traffic is received is a trunk carrying other types of traffic that I'd like to classify differently.
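For reference, the type qos class-maps on the 5K can still classify on CoS or DSCP on a pure Layer 2 trunk, which is sometimes a usable stand-in for match vlan if the traffic is marked consistently at the source. A minimal sketch with made-up class names, CoS value and interface:

class-map type qos match-all VOICE-IN
  match cos 5
policy-map type qos CLASSIFY-IN
  class VOICE-IN
    set qos-group 2
interface Ethernet1/1
  service-policy type qos input CLASSIFY-IN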
Just upgraded a Nexus 7K from 5.2.1 to 5.2.7 (just the system and kickstart images, NOT the EPLD image), but after the upgrade one of the FEXes (N2K) doesn't seem to come online (this Nexus 7K has two N2Ks; one of them came online and is working fine).
I have a couple of Nexus 5Ks that I want to put QoS on for the servers running behind them, but I also have voice running across them. Voice doesn't play well with jumbo frames, so I'd like to apply QoS only to the voice VLAN.
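A rough sketch of how that tends to be done on the 5K, assuming voice can be identified by CoS 5 at ingress: classify voice into its own qos-group, then give that class a 1500-byte MTU in the network-qos policy while everything else gets jumbo. Class names, CoS value and qos-group are assumptions, and if FCoE is in use its class would need to be preserved as well:

class-map type qos match-all VOICE
  match cos 5
policy-map type qos CLASSIFY
  class VOICE
    set qos-group 2
!
class-map type network-qos VOICE-NQ
  match qos-group 2
policy-map type network-qos JUMBO-EXCEPT-VOICE
  class type network-qos VOICE-NQ
    mtu 1500
  class type network-qos class-default
    mtu 9216
!
system qos
  service-policy type qos input CLASSIFY
  service-policy type network-qos JUMBO-EXCEPT-VOICE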
I am working in my lab and I was adding a new L2/3 vlan
vlan 555
  name test
int vlan 555
  ip address 1.1.1.1/24
  no shut
I have also ensured that this VLAN is added to the port-channel going to my Nexus 5Ks. I added the VLAN to the 5Ks and also ensured that VLAN 555 is traversing the peer-link. All is good there. I have also placed a device on an interface on the 2K as an access switchport on VLAN 555.
Here is my problem: the L3 interface will NOT come up on the 7K.
LAB-DSW01# sh ip int brie
IP Interface Status for VRF "default"(1)
Interface    IP Address    Interface Status
Vlan555      1.1.1.1       protocol-down/link-down/admin-up
I have gone through just about everything I can think of and I am still unable to get this L3 interface to come up. I have other L3 interfaces on this device that are configured exactly the same way without any issue at all. All the existing interfaces are working properly; it's just this new interface that I am trying to add.
I am running version 6.0.1 on the 7K and 5.2.1.N1.4 on the 5K.
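In case it helps anyone hitting the same thing: in my experience an SVI stays protocol-down/link-down until the VLAN is active on that switch and at least one Layer 2 port carrying it (an access port, a trunk, or the vPC peer-link) is up and in STP forwarding state for that VLAN, so these are the first things worth checking on the 7K:

show vlan id 555
show interface vlan 555
show spanning-tree vlan 555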
Does the Nexus 7010 support virtual switching yet? All of the posts I have found from about a year ago say that it is going to be supported, but there were no dates listed. I heard the same thing from Cisco a while back, but haven't followed up on it. If it is finally supported, are there any configuration guides available for it?
Lucien is a customer support engineer at the Cisco Technical Assistance Center. He currently works in the data center switching team supporting customers on the Cisco Nexus 5000 and 2000. He was previously a technical leader within the network management team. Lucien holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales. He also holds the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183
We are thinking of the following classic design: can a Nexus 5K have 2 separate connections, one to each VDC? Nexus 7K with different VDCs (Internal / DMZ). Can the Nexus 5K have a vPC connection to the Nexus 7K Internal VDC as well as to the DMZ VDC, and keep the traffic separate?
I currently have a Nexus 5010 connected to a core 3750-X switch stack in a vPC trunk using 2 x 1 Gbps links. I want to move this link to 2 x 10 Gbps links without losing connectivity. So I want to remove one 1G link, move it to 10G, and then once that's up move the other 1G link to 10G, hopefully without losing connectivity. So the question is: can I have a 1G and a 10G link between the Nexus and the 3750s in the same virtual port channel without causing problems?
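As far as I know, the members of a single port-channel all have to run at the same speed, so a 1G and a 10G link will not both be active in the same vPC; the mismatched member just gets suspended. A staged swap along these lines is what I would try, though a short maintenance window is still safer in case LACP has to re-converge (interface numbers are made up):

! Po100 currently has two 1G members, Eth1/3 and Eth1/4
! 1) remove one 1G member - traffic keeps flowing on the other
interface Ethernet1/3
  shutdown
  no channel-group 100
! 2) cable both 10G links and add them to the same group
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 100 mode active
! 3) shut the last 1G member so the 10G members become the active bundle
interface Ethernet1/4
  shutdown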
I'm having a little trouble setting up NTP on our new Nexus 3064s. We are using a local Meinberg M300 as our server and the Nexus 3064 as a client. Before I submit a TAC case I was wondering if the community would mind double-checking what I have. One major issue I've come across is that the Nexus 3064 will only take an 8-character NTP passphrase; we normally use a 32-character MD5 string, so I set up a new 8-character passphrase on our Meinberg M300. I am not using fabric extenders or distribution to other Nexuses. I am using an interface VLAN as our management interface per our current network setup. I am using a VRF. We use some public IPs, so all IPs are xxx'ed out. [code]
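For comparison, this is the minimal authenticated-NTP client config I would expect to need on the 3064; the key number, key string, server address, VRF name and source interface below are all placeholders:

ntp authentication-key 1 md5 MYNTPKEY 0
ntp trusted-key 1
ntp authenticate
ntp server 203.0.113.10 key 1 use-vrf MGMT-VRF
ntp source-interface Vlan100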
After trying to downgrade a Nexus 7K from 5.2.1 to 5.1.5 by updating the system and kickstart boot statements and reloading, I'm now stuck in an endless reload cycle. See below:
Is there a break sequence which will allow me to modify the boot statement back to the original via ROMMON or something similar?
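In case it is useful to someone in the same loop: my understanding is that the 7K equivalent of ROMMON is the loader prompt; if the configured image cannot boot, the supervisor usually drops there on its own, and Ctrl-C during the boot countdown on the console should also get you there. From loader you can boot a known-good kickstart manually and then load the matching system image, after which the boot statements can be corrected (the filenames below just follow the usual naming pattern, not verified):

loader> dir bootflash:
loader> boot bootflash:n7000-s1-kickstart.5.2.1.bin
switch(boot)# load bootflash:n7000-s1-system.5.2.1.bin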
How do we configure a subinterface on a Nexus 7K? Do we have to issue the mac-address command under the physical interface and then configure the subinterface? If yes, what MAC address do we have to enter for the "mac-address" command? I did that and then configured the subinterface, but the interface/subinterface didn't come up. Do we have to bounce it a couple of times to bring it up?
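For comparison, here is a minimal sketch of a routed subinterface on a 7K as far as I understand it; no mac-address command should be needed, just Layer 3 mode on the parent port and dot1q encapsulation on the subinterface (numbers and addressing are placeholders):

interface Ethernet1/1
  no switchport
  no shutdown
interface Ethernet1/1.100
  encapsulation dot1q 100
  ip address 192.0.2.1/24
  no shutdown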
In our LAN design, we have two Nexus 7010 switches at the core connected via vPC. The LAN access switches are directly connected to the core Nexus switches via regular port-channels on the 3750s and vPC on the Nexus. The core Nexus switches will be linked to an existing LAN network, and the applications will be progressively migrated from the old network to the new one. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
How do you set up IP routing on a Nexus 5500? I want to do VLAN routing between a Nexus 5500 and a Catalyst 3750. Nothing clever, just have the 2 switches talk and route VLANs between the two.
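Assuming the 5500 has the Layer 3 expansion module and the corresponding license installed, the minimal sketch is just enabling SVIs and giving each VLAN an interface; VLANs and addressing below are made up, and the 3750 (or the hosts) would then point at these SVI addresses. A routing protocol would need its own feature command; plain static routes do not:

feature interface-vlan
interface Vlan10
  ip address 10.0.10.1/24
  no shutdown
interface Vlan20
  ip address 10.0.20.1/24
  no shutdown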
I have a pair of N5Ks; downstream from them are Fabric Interconnects and a UCS chassis. Upstream is a stack of 3750s, then ASA 5510s.
I am trying to back up the config to our TFTP server and I am getting 'no route to host'. I tried to add a route, and found that the N5K uses VRFs for routing. After some looking I see there are two base VRFs, 'management' and 'default'. The management VRF has a default gateway entry and a single interface member (mgmt0). When I look at the default VRF, there are no interface members or routing entries. OK, I can handle that: just add some interfaces and a default gateway. Then I get lost:
I'm able to access UCS Manager, so how is that even possible if there's no gateway defined anywhere (or maybe I'm missing something)? My theory was: add all other ports but mgmt0 to the default VRF, and have the default gateway point out of the uplinks (a vPC). But I wasn't sure how that would affect anything, and mainly I just want to know how I was able to access UCS Manager given that there is no default gateway anywhere that I could see.
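In case it saves someone the same hunt: the copy command can be told which VRF to use, so backing up over the mgmt0 path works without touching the default VRF at all. Server address, filename and gateway below are placeholders:

copy running-config tftp://192.0.2.50/n5k-backup.cfg vrf management
!
! the management VRF's default gateway lives here:
vrf context management
  ip route 0.0.0.0/0 192.0.2.1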
Are the Nexus 5020 NX-OS version below and the type/revision of my GLC-T compatible with each other? I also noticed "Transceiver calibration is invalid" when I do show int e1/8 transceiver details. What does that mean?
I need to upgrade my core switches at one of our locations (two 7009s with dual sups) from 6.0.1 to 6.1.2. After looking through the release notes it appears that this will be a disruptive upgrade. How long should I expect the disruption to last? Are we talking a 7009 boot cycle (10 - 15 minutes) or something longer? How many disruptions can I expect? I suspect one per chassis during the failover to the standby, but I'd like to validate that. Is there any compelling reason to upgrade the EPLD? From what I can see, again from the release notes, this is only necessary with F2 cards if I were to upgrade to Sup2s. I'm in a healthcare environment and this upgrade will be affecting one of our major campuses, so the more info I can give the managers, the more accepting they will be of the disruptions.
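One thing that may help with planning: as far as I know you can ask the switch itself for the per-module impact before committing to anything, and install all prints the same compatibility/impact table again and asks for confirmation before it touches anything (the image filenames below just follow the usual naming pattern):

show install all impact kickstart bootflash:n7000-s1-kickstart.6.1.2.bin system bootflash:n7000-s1-system.6.1.2.bin
install all kickstart bootflash:n7000-s1-kickstart.6.1.2.bin system bootflash:n7000-s1-system.6.1.2.bin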
I created new VDCs. Since I have done so, there is no switchport command under the interface configuration.
The interesting thing is that it is available in the admin VDC, but not in the new VDC I created. I cannot create a vPC peer-link between my 2 Nexus switches. I did allocate ports to the new VDC and I did verify that the enabled features are the same.
I have 2 Nexus 5596UPs with Layer 3 cards that are exhibiting some very peculiar behavior. The systems are running 5.1(3)N1(1). I have configured 2 VRF contexts, each running its own OSPF process. There is a static gateway of last resort configured in each VRF, pointing to an upstream pair of 5585-Xs in Active/Active. Each OSPF process has the "default-information originate always" command configured; however, backbone neighbors are not receiving a gateway of last resort from the 5596UPs. The applicable configurations are shown below. All other routing information is passing correctly between devices in the network. This network is not production; it is a proof of concept for a larger implementation.
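For reference, this is roughly the shape I would expect that config to take per VRF; the VRF name, process ID, next hop and addressing are placeholders, and with the always keyword the default should be advertised even if the static route were absent:

feature ospf
vrf context CUST-A
  ip route 0.0.0.0/0 198.51.100.1
!
router ospf 10
  vrf CUST-A
    default-information originate always
!
interface Vlan100
  vrf member CUST-A
  ip address 10.1.100.1/24
  ip router ospf 10 area 0.0.0.0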
We have Nexus 7Ks in production. The 7K chassis is not load balancing with non-Cisco devices over EtherChannel/LACP. I have tried all the load-balancing algorithms, but in vain. [code]
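Two things that are easy to overlook here, in case they apply: the hash is computed independently on each side (so the non-Cisco box decides the distribution for the traffic it transmits, regardless of what the 7K is set to), and a single large flow will always land on one member. If your release supports it, the 7K can also show which member a given flow should hash to, which helps prove whether the algorithm or the traffic mix is the issue (the port-channel number and addresses below are made up):

show port-channel load-balance
show port-channel load-balance forwarding-path interface port-channel 10 src-ip 10.1.1.1 dst-ip 10.2.2.2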