Cisco Switching/Routing :: Enable Jumbo Frames On Single VLAN (Nexus 5548UP)
Sep 17, 2012
We are currently using two Nexus 5548UPs as our datacenter network core. I have a pretty simple objective: I would like to enable jumbo frames on a single VLAN only (VLAN 65). This VLAN is used strictly for backups. I do not want to enable jumbo frames on the other VLANs (VLANs 1-10). I'm not sure what the best way to do this is, or if it is even possible, but I am hoping to get some configuration examples.
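For context, the commonly documented way to raise the MTU on a 5548 is system-wide, through a network-qos policy; whether that can be scoped to one VLAN is exactly the open question here. A rough sketch of the global approach (the policy name is arbitrary):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo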
I have to enable it on 3750 and Nexus 7K switches. What are the steps involved? Can we enable jumbo frames per port instead of enabling them globally? That is, only a few ports will be using jumbo frames; the rest of the ports will use the default 1500-byte MTU.
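For reference, a sketch of how the two platforms are usually configured: on the 3750 the MTU setting is global (and only takes effect after a reload), while on the Nexus 7000 a global ceiling is set and the MTU can then be raised per interface. Interface numbers are only examples.

! Catalyst 3750 - global only, requires a reload
conf t
 system mtu jumbo 9198
 end
reload

! Nexus 7000 - global ceiling plus per-interface MTU
conf t
 system jumbomtu 9216
 interface ethernet 1/1
  mtu 9216
 end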
We have a requirement to send SPAN traffic to a destination port for monitoring purposes on two 5000s with some 2000 FEX boxes attached. Some of the servers are making use of frames larger than 1500 bytes. We have not changed any MTU configuration on the 5000s since installation, and I can see the policy maps are still at 1500.
My first assumption would be that frames larger than 1500 bytes would be dropped, but seemingly that is not the case (see below). Is there a reason why the switch would forward jumbo frames? Also, is there a limitation on MTU for SPAN traffic? There is an MTU command under the SPAN session, but the maximum is 1518. From what I can read, the frame will be truncated if it exceeds this. Does that mean the fragments will be dropped?
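For what it's worth, the MTU option mentioned above sits under the monitor session itself; a sketch of that kind of configuration (interface numbers are examples, and the exact options depend on platform and release):

monitor session 1
  source interface ethernet 1/10 both
  destination interface ethernet 1/20
  mtu 1518
  no shut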
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre.
I have configured system jumbomtu 9216 globally, and mtu 9216 on the interfaces, and I can see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port-channel between them. However, when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header to yes, 100% loss.
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
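Since the pings are between VLANs (i.e. routed), the SVI MTU at each hop matters as well as the physical and port-channel MTU, and any device in the path still at 1500 would produce exactly this kind of DF-bit failure. A sketch of the checks, with the VLAN numbers and ping options as assumptions:

! SVIs involved in the routed path
interface vlan 200
  mtu 9216
interface vlan 201
  mtu 9216

! verification from exec mode
show interface port-channel 1 | include MTU
ping 10.200.12.2 packet-size 8972 count 20 df-bit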
I have a Cisco Nexus 3064 that I am using as part of a flat network for the lab. I have 30 virtualization servers (MS Hyper-V and VMware vSphere) connected to this switch and I want to enable jumbo frames. The virtualization servers are able to ping the local VMs using 8K bytes; however, I am unable to ping from server to server using 8K bytes. My configuration is below (abbreviated). All the servers are in the same network, which I configured as L2 ports with the "switchport" command. However, the interface "mtu" command is unavailable in L2 mode; I am only able to get the interface "mtu" command in L3 mode, with the "no switchport" command on the interface.
int eth1/2-45
  no switchport
  mtu 9216
  no shut
I can ping the servers with less than 1500 bytes, but anything larger fails.
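In case it is relevant, on this class of switch the L2 (switchport) MTU is generally not set per interface at all but through a system-wide network-qos policy, similar to the sketch earlier on this page, with the ports left in switchport mode. A sketch of reverting the ports and verifying (the show command may vary by release):

interface ethernet 1/2-45
  switchport
  no shutdown

show queuing interface ethernet 1/2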
The server team has asked me to implement jumbo frames on a single VLAN, the one they use for vMotion. We have two pairs of 5548s, each pair running vPC for most connections. I am aware of many postings that describe how to enable jumbo frames globally, like this:
policy-map type network-qos jumbo
  class type network-qos class-default
[code].....
I am not clear how I can extend this principle to one VLAN only.
Also, I am aware of a posting [URL] that shows some pitfalls of implementing jumbo frames in a vPC configuration. Pretty well all my connections are vPC, including all the FEXes, which are all dual-homed. In many cases, the vPC extends through to the servers, so that the servers run port-channels across two FEXes. I am unclear whether the pitfalls are still valid, or whether I have to wait until my next maintenance slot (6 months away) to implement jumbo frames. Can jumbo frames be implemented safely on the fly? How does enabling jumbo frames fit in with "conf sync" mode?
I attempted to enable jumbo frames on a Nexus 5010 (NX-OS version 4.2(1)N1(1)). I created the policy map below and lost access to the switch.
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
After recovery I can see from the logs that all VLANs and interfaces were suspended. I've attempted to look for reasons for a compatibility issue, but I am unable to find what is checked and what could have been incompatible. The other troubling thing is that the adjacent switch suspended its interfaces too, but no change was made there. What do I need to look out for so that this does not happen again?
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 1,10,601 on Interface port-channel1 are being suspended. (Reason: QoSMgr Network QoS configuration incompatible)
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-5-IF_TRUNK_DOWN: Interface port-channel1, vlan 1,10,601 down
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 10 on Interface port-channel508 are being suspended.
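The "QoSMgr Network QoS configuration incompatible" reason in those logs points at a vPC consistency check: the network-qos policy (including its MTU) has to match on both vPC peers, and a mismatch suspends the vPC VLANs on both switches, which would also explain the adjacent switch being affected. A sketch of applying the identical policy on both peers and then verifying (same policy name as above):

! on BOTH vPC peers, identically
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

! then verify
show vpc consistency-parameters global
show vpc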
I currently have four 3560 switches connected in a mesh topology. These are all set to use jumbo frames, and so are all the servers connected to them. I now need to connect a 2950 switch to two of the 3560s; it will have only desktop computers connected to it, but I do not want to configure jumbo frames on this switch or on any of the desktops.
I understand that jumbo frames need to be enabled end to end. I have two ESX hosts connected at each site. I want to enable jumbo frames for those ports, but what if not all hosts on the ESX are using jumbo frames? Will I have drops and connection failures? So if I have two sites, each with a 6509, connected via a trunk, and I need to enable jumbo frames for a VLAN between the sites, how do I accomplish this? If I enable jumbo frames on the trunk link, how does that impact other traffic between the sites?
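For reference, on the 6500 the MTU is raised per physical interface, so a sketch of this would be to set it on the inter-site trunk ports and on the ESX-facing ports at both ends; hosts that stay at 1500 are unaffected, because the interface MTU is only an upper bound (the interface number is an example):

! on each 6509, on the trunk to the other site and on the ESX-facing ports
interface TenGigabitEthernet1/1
 mtu 9216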
I have a Cisco Catalyst 3100 blade in a Dell server chassis that is trunked to a 6509.
When doing a protocol capture, I see large frames being sent from one of the servers in the chassis.
Example:
TCP:[Continuation to #1701] [Bad CheckSum]Flags=...AP..., SrcPort=HTTP(80), DstPort=2667, PayloadLen=2831, Seq=1489611217 - 1489614048, Ack=1719592331, Win=65535
I see lengths up to 6900+ bytes coming from the server.
The switch has the global MTU set to 1500
system mtu routing 1500
and I can't seem to set this at the interface level. The server is configured to send 1500-byte frames. Why am I seeing these jumbos? (The server is Windows 2003.)
I have a pair of Catalyst 3560 GB switches that are trunked with two of the standard ports, and that have trunk ports connecting to a failover pair of PIX 515e's. We're considering adding a pair of cluster database nodes and an iSCSI SAN, both of which would need a dedicated interconnect VLAN that I'd like to employ Jumbo frames on. I don't necessarily need the VLANs to traverse the firewall trunks since they're private interconnects, but I need each host to traverse the switch trunks.
Since it seems I can only enable Jumbo frames on the entire switch (current standard frame size is 1500 and jumbo is also 1500), when I enable it what kind of possible negative impact could this have on my trunked ports as well as my host connections? I've read mixed reviews of users with iSCSI SAN devices seeing terrible performance when enabling jumbo frames so I'm apprehensive about enabling them on an existing network.
We have a number of sites which have high-speed L2 links which terminate on our L3 switches at each site. The ports between the sites are placed in routed mode.
I would like to use jumbo frames between two of the networks which communicate across the sites, and a 1500 MTU on the rest. Is this something which is possible?
My understanding is that the MTU is set on the interface; therefore, if I set the MTU on the L2 link ports at both sites to 9000, would this cause a problem for the networks at 1500?
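A sketch of what this might look like, assuming the inter-site ports are routed ports on a platform that supports a per-interface MTU (the interface name and addressing are placeholders). Raising the MTU on the routed link does not break hosts at 1500, since the interface MTU is only an upper bound; only the two networks that actually exchange jumbo frames need the larger MTU configured end to end.

interface TenGigabitEthernet1/1
 description Inter-site routed link
 no switchport
 mtu 9000
 ip address 172.16.1.1 255.255.255.252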
I have followed every piece of Cisco documentation I could find on this and I still can't get vPC configured to actually work. The VLANs stay in a suspended state so no traffic flows across. Below is my configuration:

vrf context management
  ip route 0.0.0.0/0 10.86.0.1
vlan 1
vlan 86
  name I.S_Infrastructure
vpc domain 1
  role priority 1000
  peer-keepalive destination 10.86.0.4
interface Vlan1
interface Vlan86
  no shutdown
  description I.S._Infrastructure
  ip address 10.86.0.1/24
interface port-channel1
  switchport mode trunk
  vpc peer-link
  spanning-tree port type normal
interface Ethernet1/1
  switchport mode trunk
  channel-group 1 mode active
interface Ethernet1/2
  switchport mode trunk
  channel-group 1 mode active
interface Ethernet1/3
  description Connection to Mgmt0
  switchport access vlan 86
  speed 1000
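For suspended vPC VLANs, the usual suspects are the peer-keepalive not coming up and consistency mismatches between the peers, so a sketch of where to look (the source address shown is a placeholder; the keepalive is commonly pointed at the peer's mgmt0 in the management VRF):

show vpc
show vpc peer-keepalive
show vpc consistency-parameters global

! keepalive with an explicit source and VRF, on each peer with its own addresses
vpc domain 1
  peer-keepalive destination 10.86.0.4 source 10.86.0.3 vrf management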
I have a Nexus 5548UP that would be managed by two organizations. Is it possible to set IP addresses for mgmt0 and an SVI (or an L3 interface) without using the L3 daughter card? I don't want to route between VLANs, just to separate management traffic.
I'm trying to get a node on SVI1 in VRF1 and another node on SVI2 in VRF2 to reach each other. After hours of failure, I went to outside resources. Everything I read on the internet says it's not possible on this platform, and at least one TAC engineer seems to agree.
I just can't believe such a high-end data center switch is not capable of handling such a basic feature.
We currently have an environment with a 4507 as the core switch, connected to four stacks of 3750Es in the wiring closets. A pair of Nexus 5548UPs also hangs off the 4507, but at the moment it is more or less dedicated to a certain purpose. The 5548UPs have the L3 daughter card installed.
My question is: can a pair of Nexus 5548UPs do the C4507's job? Would we be able to decommission the 4507 and replace it with the existing 5548UPs plus FEXes?
At this past Networkers I was at the Cisco booth, discussing how the 2248 can connect to the 5548 and provide server connectivity. I was told that now, as of a fairly recent NX-OS release, you can have the 2248 dual-homed to both 5548s via vPC and then have a server connected to both 2248s and be in active-active mode. Is this correct?
When we first deployed our 5548s and 2248s we had to put the 2248 in a straight-pinned mode, where it only had connections to one 5548, and the server would then dual-connect to the 2248s and be in active-active mode. I was told that this changed with an NX-OS release; however, the documentation still seems fragmented on what exactly is the case.
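For reference, the dual-homed (active-active) FEX setup described here is built by defining the fabric links to the 2248 as a vPC on both 5548s; a sketch, with the FEX and interface numbers as examples:

! on BOTH 5548s
feature vpc
feature fex
fex 101
  pinning max-links 1
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
  vpc 101

Whether a server port-channel can then span two such dual-homed FEXes (the active-active server part of the question) depends on the NX-OS release, as noted above; the sketch only covers the FEX side.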
How separate is the management interface on a Nexus 5548?
In context - what's the risk of having a Layer 2 only N5K in a DMZ and running the management ports down into an internal management VLAN, to form peer-keepalive links and to do software upgrades?
I am looking to implement a QoS policy on a pair of Nexus 5548UPs. FCoE is a factor here. I have created the following configuration and would like a few pairs of eyes to take a look at this for a quick sanity check.
How can I make sure this config is valid? Also, I realize I'm applying an MTU of 9216 to all classes right now; this will be phased out incrementally.
class-map type qos match-all class-platinum
  match cos 5
class-map type qos match-all class-gold
  match cos 4
class-map type qos class-fcoe
  match cos 3
[code]....
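For comparison, a sketch of how the remaining pieces of an FCoE-aware policy often look on a 5548; the policy names and qos-group numbers are illustrative assumptions, and the key detail is that class-fcoe keeps no-drop and an MTU of 2158:

policy-map type qos classify-input
  class class-platinum
    set qos-group 5
  class class-gold
    set qos-group 4
  class class-fcoe
    set qos-group 1

class-map type network-qos class-platinum
  match qos-group 5
class-map type network-qos class-gold
  match qos-group 4

policy-map type network-qos nq-policy
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-platinum
    mtu 9216
  class type network-qos class-gold
    mtu 9216
  class type network-qos class-default
    mtu 9216

system qos
  service-policy type qos input classify-input
  service-policy type network-qos nq-policy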
I have two sites located approximately 30 kilometers apart; I will call them site 1 and site 2. The sites are connected by a Layer 2 1 Gb fibre connection. I would like to add two Cisco Nexus 5548UP switches at site 1 and connect those two switches via GLBP.
I would then like to add two Cisco Nexus 5548UP switches at site 2 and connect those two switches via GLBP. I would then like to connect the two Nexus 5548UP switches at site 1 and the two Nexus 5548UP switches at site 2 via GLBP.
I just received a Nexus 5548 to configure as the core of the datacenter LAN. Is it true that the VRFs created cannot talk to each other? I can't seem to find any documentation on how to do this, and at least one TAC engineer half-heartedly believes it's not possible, either.
Basically, I'm trying to get an SVI in VRF1 to be able to talk to a device on another SVI in VRF2.
I can't believe this high-end switch, that is so capable in every regard, cannot handle this feature.
What is the best option for load balancing between two Cisco Nexus 5548UP switches located at one site and two Cisco Nexus 5548UP switches located at another site?
The sites are connected via a 1 Gb fibre connection. I am unable to use GLBP until it is supported in a future software release.
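In the meantime, the usual substitute on this platform is HSRP combined with vPC, since with vPC both peers forward traffic sent to the HSRP virtual MAC, which already gives a degree of load sharing. A sketch, assuming the Layer 3 daughter card is present (the VLAN, group number and addresses are placeholders):

feature hsrp
feature interface-vlan
interface vlan 10
  no shutdown
  ip address 10.0.10.2/24
  hsrp 10
    ip 10.0.10.1
    priority 110
    preempt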
I'm seeing several error messages like these in the logs of my Nexus 5548UP switches.
2012 Apr 24 16:39:41.470 SSV_5K_SW2 %LLDP-5-SERVER_ADDED: Server with Chassis ID aaaa.bbbb.cccc Port ID mgmt0 management address X.X.X.X discovered on local port mgmt0 in vlan 0 with enabled capability Bridge
2012 May 2 05:05:00.627 COR_CCO-NX-5548-UP_01 %LLDP-5-SERVER_REMOVED: Server with Chassis ID aaaa.bbbb.cccd Port ID aaaa.bbbb.cccc on local port Eth1/1 has been removed
2012 May 2 05:06:40.328 COR_CCO-NX-5548-UP_01 %LLDP-5-SERVER_ADDED: Server with Chassis ID aaaa.bbbb.cccd Port ID aaaa/bbbb.cccc management address NIL discovered on local port Eth1/1 in vlan 0 with enabled capability None
I will say that these 5548s are serving as the distribution layer for a UCS chassis (2x 6120 FIs), but I didn't know what kind of visibility the Nexus would have into that. The "Chassis ID" keyword is what's alluding to this in my mind, and I'm seeing these messages whenever interfaces that connect downstream to the fabric interconnects are brought up or down.
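A quick way to see what is actually on the other end of those LLDP entries is the neighbor table; a small sketch (the interface is an example):

show lldp neighbors
show lldp neighbors interface ethernet 1/1 detail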
In the existing network we have a Cisco 2811 router connected to the corporate MPLS cloud. The Cisco 2811 is connected to a Catalyst 6509 switch (set-based IOS with an MSFC card). Along with that we have two Catalyst 5509s. We are upgrading the access layer by replacing the Catalyst switches with Nexus 5548s and 2248s.
For testing purposes I have connected a 5548 and a 2248, created vPCs and EtherChannels between the two, and configured SVIs and HSRP on the 5548. I am terminating a 2651 (test router) on 2248 port 101/1/1. On the 5548 I have enabled EIGRP on the VLANs. I am unable to ping the 2651 from the Nexus 5548 and vice versa. I can see both devices via CDP, but I do not see an EIGRP neighborship formed.
What configuration should go on the 2248 and the 2651 in order to establish a connection between the two? If the test is successful, I will connect the 2811 to the 2248 during the actual migration. I assume that if it works for the 2651 in testing, it will also work for the 2811.
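For reference, a sketch of the pieces that usually need to line up on each side: the FEX port is a plain L2 access port, and the router then peers with the SVI on the 5548. The VLAN, AS number and addressing are illustrative assumptions.

! Nexus 5548
feature eigrp
router eigrp 100
vlan 10
interface ethernet 101/1/1
  switchport access vlan 10
  spanning-tree port type edge
interface vlan 10
  no shutdown
  ip address 10.0.10.2/24
  ip router eigrp 100

! Cisco 2651 (IOS)
interface FastEthernet0/0
 ip address 10.0.10.1 255.255.255.0
 no shutdown
router eigrp 100
 network 10.0.10.0 0.0.0.255
 no auto-summary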
I have a Nexus 7000 plus six Nexus 2000 boxes on the backbone. I have configured the following on the 7000:
conf t
 system jumbomtu 9000
 exit
ERROR: Ethernet111/1/1: requested config change not allowed
...
ERROR: Ethernet122/1/48: requested config change not allowed

1/111/14 is a NX2000 port:

conf t
 interface ethernet 1/111/14
 switchport
 mtu 9000
 exit
I have gotten this message: "Error: MTU cannot be configured on satellite port(s) - Eth122/1/11". I have also tried on an NX7000 TP port and got: "ERROR: Ethernet10/45: MTU on L2 interfaces can only be set to default or system-jumboMTU". Can the jumbo MTU configuration only be done when there are no NX2000s configured?
On some of our ports on a Nexus 5000, and on the connected FEX, we can see a lot of jumbo packets even though jumbo frames are not enabled anywhere on the switch; all interface and system MTUs are set to 1500.
DBE-LINZ-XX41# sh int Eth113/1/27
Ethernet113/1/27 is up
  Hardware: 100/1000 Ethernet, address: d0d0.fd1b.b69c (bia d0d0.fd1b.b69c)
I have a pair of Nexus 5548UPs that have some high-priority servers running on them. The servers are ESX hosts running Nexus 1000vs. Each host has multiple connections in a vPC to both 5548s. We have been having intermittent ping loss and slowness of traffic to the VMs on these hosts. I was poking around trying to figure out what the issue could be and found that the peer-keepalive command was not set to send the heartbeat across the mgmt0 interface. I would like to change this to point it across the mgmt0 interface. Any tips or advice on making this change with production servers running on the switches? I do not want to cause any loss to any systems when I make this change. [Code] ..........
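For reference, a sketch of what pointing the keepalive at mgmt0 typically looks like; the addresses are placeholders, each peer uses its own mgmt0 address as the source, and the result is worth confirming with the show commands afterwards:

vpc domain 1
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management

show vpc peer-keepalive
show vpc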
We have recently upgraded our LAN and we are using a couple of Nexus 5548UP switches in the core, with 2960 stacks as access switches. Each access switch stack is connected to both core switches, with the links being port-channels and vPCs. All is working fine, but our SolarWinds management platform (NPM) is being flooded with "Physical Address changed" events. Here is an example of the messages:
NSW_Core_2 - Ethernet1/7 Physical Address changed from 000000003811 to 73616D653811
NSW_Core_2 - Ethernet1/7 Physical Address changed from 200B82B43811 to 000000003811
For each interface I have messages like these repeating. I am not sure what those messages mean or if there is actually anything wrong. Performance of the network is good, there are no errors on any interfaces, and I do not see anything related in the switch logs.
I need clarification on vPC with the 5k and 2248 fabric extenders. My question is: can each fabric extender uplink to two different 5ks and, at the same time, have servers connected to both fabric extenders with a vPC? So basically, the server NIC will team across two different fabric extenders, and each fabric extender will connect to two different 5ks.
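On recent NX-OS releases for the 5500 this topology (dual-homed FEXes plus a host port-channel across two FEXes, often called enhanced vPC) is supported; a sketch of the host-side piece, assuming both FEXes are already dual-homed to both 5548s as shown earlier on this page, with the FEX, port and VLAN numbers as placeholders. The same host port-channel is configured identically on both 5548s.

interface ethernet 101/1/5, ethernet 102/1/5
  channel-group 200 mode active
interface port-channel 200
  switchport access vlan 10
  spanning-tree port type edge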