Cisco Switching/Routing :: Full Switch Outage After Enabling Jumbo Frames On Nexus 5010
Nov 22, 2011
I attempted to enable jumbo frames on a Nexus 5010 (NX-OS version 4.2(1)N1(1)). I created the policy map below and lost access to the switch.
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
After recovery I can see from the logs that all VLANs and interfaces were suspended. I've tried to find the reason for the compatibility issue, but I am unable to determine what is checked and what could have been incompatible. The other troubling thing is that the adjacent switch suspended its interfaces too, even though no change was made there. What do I need to look out for so that this does not happen again?
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 1,10,601 on Interface port-channel1 are being suspended. (Reason: QoSMgr Network QoS configuration incompatible)
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-5-IF_TRUNK_DOWN: Interface port-channel1, vlan 1,10,601 down
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 10 on Interface port-channel508 are being suspended.
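For reference, the commonly documented way to enable jumbo frames globally on the Nexus 5000 series is to attach the network-qos policy under system qos; a minimal sketch of that procedure:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

On vPC pairs the network-qos configuration is a consistency-checked parameter between the peers, which may explain why the adjacent switch suspended its VLANs even though nothing was changed there.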
I have a pair of Catalyst 3560 GB switches that are trunked with two of the standard ports, and that have trunk ports connecting to a failover pair of PIX 515e's. We're considering adding a pair of cluster database nodes and an iSCSI SAN, both of which would need a dedicated interconnect VLAN that I'd like to employ Jumbo frames on. I don't necessarily need the VLANs to traverse the firewall trunks since they're private interconnects, but I need each host to traverse the switch trunks.
Since it seems I can only enable jumbo frames on the entire switch (the current standard frame size is 1500 and the jumbo size is also 1500), what kind of negative impact could enabling it have on my trunked ports as well as my host connections? I've read mixed reviews from users with iSCSI SAN devices seeing terrible performance after enabling jumbo frames, so I'm apprehensive about enabling them on an existing network.
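On the 3560 platform, jumbo support is a global setting and takes effect only after a reload; a sketch (the 9000-byte value is just an example):

system mtu jumbo 9000
! reload required, then verify:
show system mtu

Note that a larger MTU only raises the ceiling: hosts that continue to negotiate and send 1500-byte frames are unaffected by the switch setting itself.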
I have to enable it on 3750 and Nexus 7K switches. What are the steps involved? Can we enable jumbo frames per port instead of enabling them globally? That is, only a few ports will use jumbo frames; the rest will keep the default 1500-byte MTU.
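The two platforms behave differently here: the 3750 only takes a global jumbo MTU (reload required), while on the Nexus 7000 the MTU is configurable per interface, up to the system jumbomtu. A sketch, with example interface numbers:

! 3750 - global only, needs a reload:
system mtu jumbo 9000

! Nexus 7000 - per interface:
system jumbomtu 9216
interface ethernet 1/1
  mtu 9216

So per-port jumbo frames are possible on the 7K but not on the 3750.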
We have a requirement to send SPAN traffic to a destination port for monitoring purposes on two 5000s with some 2000 FEX boxes attached. Some of the servers are using frames larger than 1500 bytes. We have not changed any MTU configuration on the 5000s since installation, and I can see the policy maps are still at 1500.
My first assumption was that frames larger than 1500 would be dropped, but it seems they are not (see below). Is there a reason why the switch would forward jumbo frames? Also, is there a limitation on MTU for SPAN traffic? There is an MTU command under the SPAN session, but the maximum is 1518. From what I can read, the frame will be truncated if it exceeds this. Does that mean the fragments will be dropped?
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre, with system jumbomtu 9216 and also mtu 9216 on the interfaces. I can see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port-channel between them. Yet when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header, 100% loss.
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
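To narrow down which hop drops the large frames, the NX-OS ping command accepts a packet size and a DF-bit option; a sketch using the address from the output above:

ping 10.200.12.2 packet-size 9000 df-bit

If this fails while smaller sizes succeed, some device in the path likely still has a 1500-byte MTU on an interface or SVI.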
I have a Cisco Nexus 3064 that I am using as part of a flat network for the lab. I have 30 virtualization servers (MS Hyper-V and VMware vSphere) connected to this switch and I want to enable jumbo frames. The virtualization servers are able to ping their local VMs with 8K bytes; however, I am unable to ping from server to server with 8K bytes. My configuration is below (abbreviated). All the servers are in the same network, which I configured as L2 ports with the "switchport" command. However, the interface "mtu" command is unavailable in L2 mode; I can only get it in L3 mode, after "no switchport" on the interface.
# int eth1/2-45
# no switchport
# mtu 9216
# no shut
I can ping the servers with less than 1500 bytes, but anything larger fails.
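On the Nexus 3000/5000 platforms the L2 MTU is not taken from the interface but from the network-qos policy applied under system qos, so the ports can stay in switchport mode. The effective per-class MTU can then be checked with a show command; a sketch (interface number from the post):

show queuing interface ethernet 1/2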
The server team has asked me to implement jumbo frames on a single VLAN, the one they use for vMotion. We have two pairs of 5548s, each pair running vPC for most connections. I am aware of many postings that describe how to enable jumbo frames globally, like this:
policy-map type network-qos jumbo
  class type network-qos class-default
  [code].....
I am not clear how I can extend this principle to one VLAN only.
Also, I am aware of a posting [URL] that shows some pitfalls of implementing jumbo frames in a vPC configuration. Pretty well all my connections are vPC, including all the FEXes, which are all dual-homed. In many cases, the vPC extends through to the servers, so the servers run port-channels across two FEXes. I am unclear whether the pitfalls are still valid, or whether I have to wait until my next maintenance slot (6 months away) to implement jumbo frames. Can jumbo frames be implemented safely on the fly? How does enabling jumbo frames fit in with "conf sync" mode?
We are currently using two Nexus 5548UPs as our data center network core. I have a pretty simple objective: I would like to enable jumbo frames on a single VLAN only (VLAN 65). This VLAN is used strictly for backups. I do not want to enable jumbo frames on the other VLANs (VLANs 1-10). I'm not sure what the best way to do this is, or if it is even possible, but I am hoping to get some configuration examples.
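On the 5548 the MTU is applied per QoS class rather than per VLAN, so one commonly suggested approach is to steer the backup traffic into its own qos-group and raise the MTU only for that class. A rough sketch only: the class names and qos-group number below are made up for illustration, and a separate type qos classification policy to place the backup traffic into that qos-group is also required (not shown):

class-map type network-qos backup-jumbo
  match qos-group 2
policy-map type network-qos jumbo-backup
  class type network-qos backup-jumbo
    mtu 9216
  class type network-qos class-default
    mtu 1500
system qos
  service-policy type network-qos jumbo-backup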
I currently have four 3560 switches connected in a mesh topology. These are all set to use jumbo frames, and so are all the servers connected to them. I now need to connect a 2950 switch to two of the 3560s; it will have only desktop computers connected to it, and I do not want to configure jumbo frames on it or on any of the desktops.
Basically I am trying to use Wireshark to do a packet capture on a Nexus 5010. I want to set up a monitor session on the switch so I can capture from a source port to a destination port on the same switch. I can configure the source port, but when I go to configure the destination port I get "ERROR: Eth102/1/4: Configuration not allowed on fex interface". I have tried to reconfigure this port as a switchport, but the "switchport mode access" command does not take. I don't want to make any changes to any ports other than this one.
I understand that jumbo frames need to be enabled end to end. I have two ESX hosts connected at each site. I want to enable jumbo frames for those ports, but what if not all hosts on the ESX are using jumbo frames? Will I have drops and connection failures? So if I have two sites, each with a 6509, connected via a trunk, and I need to enable jumbo frames for a VLAN between the sites, how do I accomplish this? If I enable jumbo frames on the trunk link, how does that impact other traffic between the sites?
I have a Cisco Catalyst 3100 blade in a Dell server chassis that is trunked to a 6509.
When doing a protocol capture, I see large frames being sent from one of the servers in the chassis.
Example:
TCP: [Continuation to #1701] [Bad CheckSum] Flags=...AP..., SrcPort=HTTP(80), DstPort=2667, PayloadLen=2831, Seq=1489611217 - 1489614048, Ack=1719592331, Win=65535
I see lengths up to 6900+ bytes coming from the server.
The switch has the global MTU set to 1500
system mtu routing 1500
and I can't seem to set this at the interface level. The server is configured to send 1500-byte frames. Why am I seeing these jumbos? (The server is Windows 2003.)
We have a number of sites with high-speed L2 links which terminate on our L3 switches at each site. The ports between the sites are placed in routed mode.
I would like to use jumbo frames between two of the networks which will communicate across sites, and a 1500-byte MTU on the rest. Is this possible?
My understanding is that the MTU is set on the interface, so if I set the MTU on the L2 link ports at both sites to 9000, would this cause a problem for the 1500-byte traffic?
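On platforms that support a per-interface MTU on routed ports (e.g. Nexus or Catalyst 6500; some Catalyst models only offer a global system MTU), this is a per-link setting, and hosts that keep sending 1500-byte frames across the link are unaffected by the higher ceiling. A sketch with an example interface name:

interface tengigabitethernet 1/1
  no switchport
  mtu 9000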
I have a pair of N7Ks in a vPC topology with some FEXes attached. I am looking into enabling jumbo frames on the N7Ks as well as the FEXes. I understand jumbo frames are enabled globally by default.
My question: I have some interfaces in a port-channel that need jumbo frames enabled. Do I enable it on the port-channel interface or on the physical interfaces? And is the change disruptive to the network? I am running NX-OS 6.0.2.
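For port-channels, the usual convention on NX-OS is to configure the MTU on the port-channel interface itself, from which it is inherited by the member ports; a sketch (the port-channel number is an example):

interface port-channel10
  mtu 9216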
I currently have a Nexus 5010 connected to a core 3750X switch stack in a vPC trunk using two 1 Gbps links. I want to move this link to two 10 Gbps links without losing connectivity: remove one 1G link and move it to 10G, then once that's up, move the other 1G link to 10G. So the question is, can I have a 1G and a 10G link between the Nexus and the 3750s in the same virtual port channel without causing problems?
Our Data Center Switch (5010) rebooted itself today, underneath the captured screen
NX5010-1(config-vlan)# Broadcast message from root (console) (Sun Feb 10 14:22:41 2013):
The system is going down for reboot NOW!
NX5010-1# sh system reset-reason
----- reset reason for Supervisor-module 1 (from Supervisor in slot 1) ---
1) At 740938 usecs after Sun Feb 10 14:22:41 2013
   Reason: Reset triggered due to HA policy of Reset
   Service: nohms hap reset
   Version: 5.0(2)N1(1)
We have an old Nexus 5010 running version 5.0(3)N1(1). It is in a "franckenblock" (like Frankenstein), i.e. we bought the parts and created our own system design before VCE was created. We have since bought VCE Vblock for our production sites, and we use the franckenblock to test before we migrate to the production Vblock 300.
The issue we have is that the 5010 will only see two 1 Gb/s SFP fiber modules in the first 12 slots. All these slots are dual 1G/10G. If we add more than two, it claims not to see them. We tried swapping the SFPs and using both sfg-GE-MM and GLC-SX-MM modules; no difference. The 1G SFPs are in the first 5 slots, and only the first two slots come up; the others say "Link not connected" or "SFP not inserted". All five links use the same SFP type and are plugged into other switches. The green link lights are on for the plugged-in SFPs, even when the CLI states they are not plugged in. I tried both types of 1 Gb/s SFP (sfg-GE-MM and GLC-SX-MM) and moved the SFPs between slots 1, 2 and 3, 4, 5; nothing changed. From "sh int brief" you can see that it detects the SFPs, i.e. they are all 1G. From "sh int status" it sees 1G, but why does it say type 10G? And why, when I go to "int e 1/5" and try to set the switchport mode, won't it accept trunk mode there?
Also, how do I stop or clear "EU51 %SYSMGR-2-TMP_DIR_FULL: System temporary directory usage is unexpectedly high at 87%."? I put as much info in the attached file as I could.
We are chasing some latency in our network and I am trying to check whether our STP implementation is running correctly. We have a simple flat network here with no blocked ports: just two Nexus 5010s, interconnected with two uplinks. A remote DR site with a mirrored setup is connected over two 10G dark fiber links, one to each Nexus 5010. I have split the two sites into two STP domains by enabling bpdu-filter on the vPC between the sites.
I have been running Wireshark on the local segment for some time and see the STP RST root announcement every 2 seconds; this is probably normal? I was looking for alternate root negotiation packets which would cause the MAC tables to be flushed.
I have a Nexus 7000 plus six NX2000 boxes on the backbone. I have configured on the 7000:
conf t
system jumbomtu 9000
exit
ERROR: Ethernet111/1/1: requested config change not allowed
...
ERROR: Ethernet122/1/48: requested config change not allowed
1/111/14 is a NX2000 port:
conf t
interface ethernet 1/111/14
switchport mtu 9000
exit
I have gotten this message: "Error: MTU cannot be configured on satellite port(s) - Eth122/1/11". I have also tried on an NX7000 TP port: "ERROR: Ethernet10/45: MTU on L2 interfaces can only be set to default or system-jumboMTU". Can the JUMBOMTU configuration only be done when there are no NX2000s configured?
On some of our ports on a Nexus 5000, and on the connected FEX, we can see a lot of jumbo packets even though jumbo frames are not enabled on the switch; all interface and system MTUs are set to 1500.
DBE-LINZ-XX41# sh int Eth113/1/27
Ethernet113/1/27 is up
  Hardware: 100/1000 Ethernet, address: d0d0.fd1b.b69c (bia d0d0.fd1b.b69c)
I am not able to create more than 256 VLANs on a Cisco Nexus 5010 switch. While creating them I get "No VLAN resources available for VLAN creation". Details below -
Switch model: 5010
Software: NX-OS 4.0(1a)
Error message:
Nexus_5010(config)# vlan 417
ERROR: No VLAN resource available for VLAN creation.
I'm trying to create a vPC between a Nexus 5010 and a Nexus 5020 switch. I recently upgraded the software so they are running the same version, but I cannot get a vPC link up. Is there something wrong with my setup? Is a vPC between a 5010 and a 5020 even possible? They are connected using a pair of Intel X520s in 802.3ad teaming mode. [code]
My monitoring tool is reporting alerts for high CPU utilization on a Nexus 5010. The image is 4.1(3)N1(1). The only command supported on this code is "sh proc cpu", whose output does not really show the current CPU utilization. How do I troubleshoot the cause of high CPU on Nexus switches?
I am trying to get the QLogic 8240 card to work properly in ESXi 5.0. I want to send the iSCSI traffic down the iSCSI portion of the card and use the Ethernet portion of the card to do NFS.
Here are my vlans I am working with..
vlan 420 = FCoE
vlan 500 = NFS
vlan 1000 = iSCSI
I have my interface currently set as follows on the Nexus 5000.
I am trying to determine whether jumbo frames are enabled on our Nexus 7000, and I am getting mixed info back from the switch. It looks like the system jumbo MTU size is 9216 by default, but the interfaces all say the interface MTU is 1500 bytes. According to this article, the interface MTU should read 9216 if jumbo frames are enabled globally. Is this correct? Is there a way to verify whether jumbo frame support is turned on? [code]
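Two show commands are commonly used for this check; a sketch (the interface number is an example):

show run all | include jumbomtu
show interface ethernet 1/1 | include MTU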
I found intermittent link-down events (20-40 seconds on average) occurring about 1-10 times every month. SAP reported that a lot of active connections were disconnected, and using a batch ping I saw "request timed out" for about 30 seconds. Windows, SQL Server, and the Nexus 5010 do not show any errors. We run a cluster, and the cluster does not fail over. I don't know which cables or NICs cause this issue. When it happens, almost all servers are unreachable, for example SQL server 1 -> SQL server 2, or IBM HS22-1 -> SQL server 1. However, some connections are sometimes not dropped; it varies each time. PS: I ran this topology last year without any problems, but the intermittent link downs started on 2011/1/7. Because there are no errors on the Nexus 5010, it is difficult to troubleshoot. Cisco TAC recommended yesterday that we implement virtual port channels. Could I use "errdisable detect cause" to detect what caused the intermittent link down? Are there any error logs or switch parameters/status I can use to troubleshoot?
Here is an example of what each switch logs when a server drops offline. Sample logs taken between 5:32am and 5:35am on Feb 20. This particular one was having problems all weekend. Switch #1 encountered over 2000 interface resets. The corresponding VPC port on Switch #2 only had 13 resets.
NEXUS SWITCH #1
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel10: first operational port changed from Ethernet1/10 to none
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel10: Ethernet1/10 is down
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel10: port-channel10 is down
2012 Feb 20 05:32:09 q91-sw01-5010 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel10 is down (No operational members)
I'm not sure if I'm missing something basic here, but I thought I'd ask the question. I received a request from a client who is trying to separate traffic out of an IBM P780: one set of VIO servers/clients (Prod) is tagged with VLAN x going out LAG 1, and another set of VIO servers/clients (Test) is tagged with VLANs y and z going out LAG 2. The problem is that the management subnet for these devices is on one subnet.
The host device is trunked via an LACP etherchannel to a Nexus 2148TP (5010), which then connects to the distribution layer, a Catalyst 6504 VSS. I have tried many things today, but I feel that the correct solution is to use an isolated trunk (as the host device does not have private VLAN functionality), even though there is no requirement for the hosts to be segregated. I have configured:
1. Private VLAN mapping on the SVI;
2. Primary VLAN and association, and isolated VLAN, on the distribution (6504 VSS) and access layer (5010/2148);
3. All VLANs trunked between switches;
4. Private VLAN isolated trunk and host mappings on the port-channel interface to the host (P780).
I haven't had any luck. What I am seeing is that as soon as I configure the primary VLAN on the Nexus 5010 (v5.2) (vlan y | private-vlan primary), this VLAN (y) stops forwarding on any trunk on the Nexus 5010, even without any other private VLAN configuration. I believe this may be the cause of most of the issues I am having. Has anyone else experienced this behaviour? I haven't had a lot of experience with private VLANs, so I might be missing some fundamentals with this configuration.