Cisco Switching/Routing :: Jumbo Frames Dropped On Nexus 7010?
Jul 5, 2012
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre:
system jumbomtu 9216, plus mtu 9216 on the interfaces. I can also see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port-channel between them. However, when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header, 100% loss.
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
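For reference, a minimal sketch of the configuration being described (the port-channel number is a placeholder, not taken from the poster's setup):

```
! Nexus 7000 - global ceiling for L2 jumbo frames
system jumbomtu 9216
! per-interface MTU on the inter-site port-channel
interface port-channel10
  mtu 9216
! verify the effective MTU
show interface port-channel10 | include MTU
```

With the DF bit set, 100% loss at large sizes usually points to a hop somewhere in the path that still has a 1500-byte MTU, so it is worth checking every interface in the path at both sites, not just the port-channel.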
I have to enable it on 3750 and Nexus 7K switches. What are the steps involved? Can we enable jumbo frames per port instead of enabling them globally? I.e., only a few ports will be using jumbo frames; the rest will use the default 1500-byte MTU.
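As a hedged sketch of the usual approach: on the Catalyst 3750 the jumbo MTU is a global setting (it cannot be set per port, and it only takes effect after a reload), whereas on the Nexus 7000 the MTU can be set per interface, bounded by the system jumbomtu. The interface name below is a placeholder:

```
! Catalyst 3750 - global only, requires a reload
system mtu jumbo 9000

! Nexus 7000 - per-interface, up to the system jumbomtu
system jumbomtu 9216
interface Ethernet1/1
  mtu 9216
```

So on the Nexus side, leaving the other interfaces at their default keeps them at a 1500-byte MTU; on the 3750 side, a per-port split is not possible.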
We have a requirement to send SPAN traffic to a destination port for monitoring purposes on two 5000s with some 2000 FEX boxes attached. Some of the servers are making use of frames larger than 1500. We have not changed any MTU configuration on the 5000s since installation, and I can see the policy map is still at 1500.
My first assumption was that frames larger than 1500 would be dropped, but seemingly they are not (see below). Is there a reason why the switch would forward jumbo frames? Also, is there a limitation on MTU for SPAN traffic? There is an MTU command under the SPAN session, but the maximum is 1518. From what I can read, the frame will be truncated if it exceeds this. Does that mean the fragments will be dropped?
I have a Cisco Nexus 3064 that I am using as part of a flat network for the lab. I have 30 virtualization servers (MS Hyper-V and VMware vSphere) connected to this switch and I want to enable jumbo frames. The virtualization servers are able to ping the local VMs using 8K bytes; however, I am unable to ping from server to server using 8K bytes. My configuration (abbreviated) is below. All the servers are in the same network, which I configured as L2 ports with the "switchport" command. However, the interface "mtu" command is unavailable in L2 mode; I can only get it in L3 mode, after "no switchport" on the interface.
# int eth1/2-45
# no switchport
# mtu 9216
# no shut
I can ping the servers with less than 1500 bytes, but anything larger fails.
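On the Nexus 3000 (and 5000) platforms, the L2 MTU is generally controlled by a network-qos policy rather than a per-interface command, so a sketch along these lines is the usual way to get jumbo frames on switchport interfaces (the policy name is arbitrary):

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```

After applying it, "show queuing interface ethernet 1/2" should reflect the MTU actually in effect on the port, which avoids reverting the ports to L3 just to set an MTU.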
The server team has asked me to implement jumbo frames on a single VLAN, the one they use for vMotion. We have two pairs of 5548s, each pair running vPC for most connections. I am aware of many postings that describe how to enable jumbo frames globally, like this:
policy-map type network-qos jumbo
  class type network-qos class-default
[code] .....
I am not clear how I can extend this principle to one VLAN only.
Also, I am aware of a posting [URL] that shows some pitfalls of implementing jumbo frames in a vPC configuration. Pretty well all my connections are vPC, including all the FEXes, which are all dual-homed. In many cases, the vPC extends through to the servers, so that the servers run port-channels across two FEXes. I am unclear whether the pitfalls are still valid, or whether I have to wait until my next maintenance slot (6 months away) to implement jumbo frames. Can jumbo frames be implemented safely on the fly? How does enabling jumbo frames fit in with "conf sync" mode?
We are currently using two Nexus 5548UPs as our datacenter network core. I have a pretty simple objective: I would like to enable jumbo frames on a single VLAN only (VLAN 65). This VLAN is used strictly for backups. I do not want to enable jumbo frames on the other VLANs (VLANs 1-10). I'm not sure what the best way to do this is, or if it is even possible, but I am hoping to get some configuration examples.
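For what it's worth, on the 5500 platform the network-qos MTU is set per system class, not per VLAN, so the closest approximation I know of is to classify the backup traffic into its own qos-group and give only that class a jumbo MTU. A hedged sketch, with arbitrary class names and an assumed CoS marking on the backup traffic:

```
class-map type qos match-any BACKUP
  match cos 4
policy-map type qos CLASSIFY
  class BACKUP
    set qos-group 2
class-map type network-qos BACKUP-NQ
  match qos-group 2
policy-map type network-qos JUMBO-NQ
  class type network-qos BACKUP-NQ
    mtu 9216
  class type network-qos class-default
system qos
  service-policy type qos input CLASSIFY
  service-policy type network-qos JUMBO-NQ
```

This only approximates a per-VLAN MTU, and it depends on the backup traffic being distinguishable (here by CoS), so treat it as a starting point rather than a drop-in answer.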
I attempted to enable jumbo frames on a Nexus 5010 (NX-OS version 4.2(1)N1(1)). I created the policy map below and lost access to the switch.
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
After recovery I can see from the logs that all VLANs and interfaces were suspended. I've attempted to look for reasons for a compatibility issue, but I am unable to find what is checked and what could have been incompatible. The other troubling thing is that the adjacent switch suspended its interfaces too, even though no change was made there. What do I need to look out for so that this does not happen again?
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 1,10,601 on Interface port-channel1 are being suspended. (Reason: QoSMgr Network QoS configuration incompatible)
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-5-IF_TRUNK_DOWN: Interface port-channel1, vlan 1,10,601 down
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 10 on Interface port-channel508 are being suspended.
I currently have 4 3560 switches connected in a mesh topology. These are all set to use jumbo frames, and so are all the servers connected to them. I now need to connect a 2950 switch to two of the 3560s; it will have only desktop computers connected to it, and I do not want to configure jumbo frames on this switch or any of the desktops.
I understand that jumbo frames need to be enabled end-to-end. I have two ESX hosts connected at each site. I want to enable jumbo frames for those ports, but what if not all hosts on the ESX are using jumbo frames - will I have drops and connection failures? So if I have two sites, each with a 6509, connected via a trunk, and I need to enable jumbo frames for a VLAN between the sites, how do I accomplish this? If I enable jumbo frames on the trunk link, how does that impact other traffic between the sites?
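A hedged sketch of the usual Catalyst 6500 approach (interface name is a placeholder): raise the MTU on the trunk interfaces at both ends. Hosts that keep a 1500-byte MTU should be unaffected, since a larger interface MTU only raises the maximum accepted frame size; it does not make endpoints send bigger frames.

```
interface TenGigabitEthernet1/1
  description inter-site trunk
  mtu 9216
```

The main risk is transient: changing the MTU can briefly bounce the interface on some platforms, so a maintenance window is still prudent.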
I have a Cisco Catalyst 3100 blade in a Dell server chassis that is trunked to a 6509.
When doing a protocol capture, I see large frames being sent from one of the servers in the chassis.
Example:
TCP:[Continuation to #1701] [Bad CheckSum] Flags=...AP..., SrcPort=HTTP(80), DstPort=2667, PayloadLen=2831, Seq=1489611217 - 1489614048, Ack=1719592331, Win=65535
I see lengths up to 6900+ bytes coming from the server.
The switch has the global MTU set to 1500
system mtu routing 1500
and I can't seem to set this at the interface level. The server is configured to send 1500-byte frames. Why am I seeing these jumbos? (The server is Windows 2003.)
I have a pair of Catalyst 3560G switches that are trunked with two of the standard ports, and that have trunk ports connecting to a failover pair of PIX 515Es. We're considering adding a pair of cluster database nodes and an iSCSI SAN, both of which would need a dedicated interconnect VLAN that I'd like to employ jumbo frames on. I don't necessarily need those VLANs to traverse the firewall trunks, since they're private interconnects, but I do need each host to traverse the switch trunks.
Since it seems I can only enable jumbo frames on the entire switch (the current standard frame size is 1500, and jumbo is also 1500), what kind of possible negative impact could enabling them have on my trunked ports as well as my host connections? I've read mixed reviews from users with iSCSI SAN devices seeing terrible performance when enabling jumbo frames, so I'm apprehensive about enabling them on an existing network.
We have a number of sites with high-speed L2 links which terminate on our L3 switches at each site. The ports between the sites are placed in routed mode.
I would like to use jumbo frames between two of the networks which communicate across sites, and a 1500 MTU on the rest. Is this possible?
From my understanding, the MTU is set on the interface, so if I set the MTU on the L2 link ports at both sites to 9000, would this cause a problem for the 1500-byte traffic?
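A hedged sketch of what that could look like on a routed inter-site port (address and interface name are invented): raising the link MTU to 9000 still permits 1500-byte packets, since the MTU is a maximum, not a fixed frame size. The caveat is that both ends of the routed link should agree, or routing adjacencies can fail (OSPF, for example, checks MTU during adjacency formation).

```
interface Ethernet1/10
  no switchport
  mtu 9000
  ip address 192.0.2.1/30
```

Networks that stay at 1500 end-to-end simply never generate frames larger than the link can carry, so they coexist with the jumbo-capable networks on the same link.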
I have a Nexus 7000 plus 6 NX2000 boxes on the backbone. I have configured on the 7000:
conf t
system jumbomtu 9000
exit
ERROR: Ethernet111/1/1: requested config change not allowed
...
ERROR: Ethernet122/1/48: requested config change not allowed
(1/111/14 is an NX2000 port)
conf t
interface ethernet 1/111/14
switchport mtu 9000
exit
I got this message: "Error: MTU cannot be configured on satellite port(s) - Eth122/1/11". I have also tried on an NX7000 TP port: "ERROR: Ethernet10/45: MTU on L2 interfaces can only be set to default or system-jumboMTU". Can the jumbomtu configuration only be done when there are no NX2000s configured?
On some of our ports on a Nexus 5000, and on the connected FEX, we can see a lot of jumbo packets even though jumbo frames are not enabled anywhere on the switch; all interface and system MTUs are set to 1500.
DBE-LINZ-XX41# sh int Eth113/1/27
Ethernet113/1/27 is up
  Hardware: 100/1000 Ethernet, address: d0d0.fd1b.b69c (bia d0d0.fd1b.b69c)
This is regarding the Nexus 7010 core switch. We are already running two Nexus 7Ks with ten Nexus 5Ks, and we are now going to add two new Nexus 5Ks in our DC. On the 7Ks we are already running two VDCs.
I am trying to determine if jumbo frames are enabled on our Nexus 7000, and I am getting mixed info back from the switch. It looks like the system jumbo MTU size is 9216 by default, but the interfaces all say the MTU of the interface is 1500 bytes. According to this article, the interface MTU should read 9216 if jumbo frames are enabled globally. Is this correct? Is there a way to verify if jumbo frame support is turned on? [code]
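A hedged sketch of the checks I would run (interpretation assumed, not guaranteed): the system jumbomtu only sets the ceiling for what interfaces may be configured to; each interface still reports its own MTU, so an interface showing 1500 will not pass jumbo frames even when jumbomtu is 9216.

```
! show the configured system ceiling
show running-config all | include jumbomtu
! show the MTU actually in effect on a given interface
show interface ethernet 1/1 | include MTU
```

In other words, both the global jumbomtu and a per-interface "mtu 9216" are needed before an interface accepts jumbo frames.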
In our LAN network design, we have two Nexus 7010 switches in the core connected via vPC. The LAN access switches are directly connected to the core Nexus switches, via regular port channels on the 3750s and vPC on the Nexus. The core Nexus switches will be linked to an existing LAN network, and the applications will be progressively migrated from the old to the new network. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
We recently replaced our core switch from a non-Cisco vendor with a Nexus 7010. With our old core switch, I had the ability to log changes to the ARP table, so if there was a DHCP conflict or a vMotion event, it would show up in the "show log" output. I've not found a way to do that with the Nexus switch - or at least no way to view the log. I have the command: logging level arp 6
I have to upgrade a Nexus 7010 with dual sup engines from 4.2(4) to 5.2 and am hoping it could be an ISSU. We are fine with an outage window. To upgrade from 4.2(4) to 5.2(5) I'll have to do a multi-hop upgrade, 4.2(4) - 4.2(6) - 5.2(5), and each hop would take 40-60 minutes. Do I spend 40-60 minutes on each hop, or just do a disruptive upgrade straight from 4.2(4) to 5.2(5)? Like I said, we are fine with an outage window.
I just deployed a Nexus 7010 switch at a server farm. After deployment, it was noticed that there are surges in latency across the network. The default gateway was then moved to the Nexus; with this, pinging from a host on the same subnet shows intermittent bursts in latency.
Nexus -> server: pings of about 80 ms, and sometimes even timeouts. To me, this is strange. The NX-OS version is 5.0(2a).
I'm looking to see if it is possible to run a vPC between two VDCs on a single 7010. We have a production setup that runs dual 7010s with vPCs between the chassis, but in our lab we only have a single 7010 with a 32-port 10-Gig module. I was thinking that maybe we could create 4 VDCs on the 7010 and run a vPC between the VDCs.
How do I get a summary of NetFlow statistics on NX-OS? On IOS you could do "sh ip cache flow", which would show what I need. I can't find a similar command on the Nexus platform.
We will install a new supervisor engine in our Nexus 7010. One supervisor engine is already installed and is a year old. The problem is that the two supervisor engines may have different NX-OS versions. Could this lead to a problem? Does the installed supervisor engine "update" the new supervisor engine?
I have a big problem: I configured VLANs with VRFs and HSRP, but when I do "show hsrp brief" these interfaces don't show up, and I can't ping the virtual IP. It seems HSRP doesn't work.
SWSERVSCAMILO_N7010_A#
interface Vlan405
  description smsc-fwatlas1
  no shutdown
[Code] ....
We have a couple of Nexus 7010s split into Core and Distribution VDCs. The mgmt0 interfaces on each of the Nexus VDCs (including the Admin VDC) are configured with different IP addresses, but on the same subnet, i.e. 10.10.10.1/24 for Admin, 10.10.10.2/24 for Core and 10.10.10.3/24 for Distribution. The mgmt0 physical port on each Nexus is connected to a physical gig port on a 3750-X switch, and the 3750-X has uplinks back to the Nexus configured for vPC.
When I ssh to the VDC mgmt0 IPs from the 3750-X, I can access each of these VDCs without any problems. But if I enable routing on each of these links (OSPF) and advertise it to the WAN, I cannot see these routes advertised, and I also cannot see any of these routes in the local routing table. I'm wondering if I have to put these links in a VLAN and then advertise it to the WAN - but if that's the case, VLANs cannot be created in the Admin (default) VDC.
We have, for nearly 4 years, used EIGRP on our 6513 to make use of two unequal links to our branch offices. This worked because we could use the variance command to cause EIGRP to insert two routes into the table, one from each carrier, so we could balance the load across them in a ratio similar to the ratio of the bandwidth of Link A to Link B.
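For context, a sketch of the classic IOS configuration being described (the AS number and multiplier are illustrative, not from the poster's setup): "variance N" tells EIGRP to also install feasible-successor routes whose metric is up to N times the best metric, and traffic is then shared in proportion to the metrics.

```
router eigrp 100
 network 10.0.0.0
 ! also install feasible successors with metric up to 2x the best
 variance 2
 ! share traffic in proportion to each path's metric
 traffic-share balanced
```

This is the behavior that has to be replaced if the platform's EIGRP implementation lacks variance.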
We just purchased 2 Nexus 7010s to replace our single 6513 core. After much consternation we have found from our Cisco SE that the Nexus 6.0.2 rendition of EIGRP does not support variance.
Why would Cisco take their own proprietary protocol and then gut it by removing features? I'm quite ready to send these Nexus boxes back in favor of a newer 6500 series. MEC doesn't work like it is supposed to, and the show-tech runs for over 24 hours without ever finishing (and we can repeat this on both boxes, multiple times).
We've opened a TAC case, but I wondered if there is any workaround for the 'variance' command.
I am facing an issue with Nexus 7010 login authentication by a RADIUS server. I have two Nexus 7010s; one of them works perfectly, while the other takes a long time to authenticate. If I use the local database to log in, it works perfectly. It also works fine if I log in from the console using RADIUS for authentication.
Does the Nexus 7010 support virtual switching yet? All of the posts I have found from about a year ago say that it is going to be supported, but there were no dates listed. I heard the same thing from Cisco a while back, but haven't followed up on it. If it is finally supported, are there any configuration guides available for it?
Attached you'll find my network architecture, with 2 Nexus 7010s in the core layer and 2 Nexus 5020s in the distribution layer, each one with 1 N2148T fabric extender switch. PC-A1 and PC-A2 are connected to one N2148T; PC-B1 is connected to the other N2148T. Nexus-7000-1 is HSRP active for all VLANs, Nexus-7000-2 is HSRP standby. PC-A1 and PC-A2 are connected to VLAN A, PC-B1 to VLAN B. PC-A1 and PC-A2 have the same default gateway, the HSRP virtual IP on VLAN A. It happens that PC-A1 is able to ping PC-B1 while PC-A2 is not. If I issue a traceroute from PC-A2, I see Nexus-7000-2's physical IP address as the first hop, even though Nexus-7000-2 is HSRP standby. After the first hop the traceroute is lost. If I shut down port-channel 20 on Nexus-5000-2, PC-A2 starts to ping PC-B1. I can't understand what's wrong in this architecture.