We have a requirement to send SPAN traffic to a destination port for monitoring purposes on two Nexus 5000s with some Nexus 2000 FEX boxes attached.
Some of the servers are making use of frames larger than 1500 bytes. We have not changed any MTU configuration on the 5000s since installation, and I can see the policy maps are still at 1500.
My first assumption was that frames larger than 1500 bytes would be dropped, but seemingly they are not (see below). Is there a reason why the switch would forward jumbo frames? Also, is there a limitation on MTU for SPAN traffic? There is an MTU command under the SPAN session, but the maximum is 1518. From what I can read, the frame will be truncated if it exceeds this. Does that mean the fragments will be dropped?
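For reference, a minimal sketch of the kind of session being described, assuming Nexus 5000 SPAN syntax with MTU truncation (interface numbers are hypothetical):

interface ethernet 1/6
  switchport monitor
monitor session 1
  source interface ethernet 1/5 both
  destination interface ethernet 1/6
  mtu 1518
  no shut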
I have to enable jumbo frames on 3750 and Nexus 7K switches. What are the steps involved? Can we enable jumbo frames per port instead of enabling them globally? I.e., we will only have a few ports using jumbo frames; the rest of the ports will keep the default 1500-byte MTU.
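For comparison, a sketch of the commonly documented commands on each platform (interface numbers are hypothetical). On the Catalyst 3750 the jumbo MTU is global to all Gigabit ports and only takes effect after a reload, so per-port enabling is not an option there; on the Nexus 7000 the mtu command is accepted per interface:

! Catalyst 3750 - global only, reload required
conf t
 system mtu jumbo 9000
end
reload

! Nexus 7000 - system jumbo MTU plus per-interface mtu
conf t
 system jumbomtu 9216
 interface ethernet 1/1
  mtu 9216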
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre.
I configured system jumbomtu 9216, and mtu 9216 on the interfaces, and I can see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port-channel between them. However, when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header, 100% loss.
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
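One thing worth checking (an assumption on my part, since the config above only mentions the physical interfaces and the port-channel): pings between VLANs are routed, so the SVIs also need the larger MTU, for example:

interface vlan 100
  mtu 9216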
I have a Cisco Nexus 3064 that I am using as part of a flat network for the lab. I have 30 virtualization servers (MS Hyper-V and VMware vSphere) connected to this switch and I want to enable jumbo frames. The virtualization servers are able to ping the local VMs using 8K bytes; however, I am unable to ping from server to server using 8K bytes. My configuration is below (abbreviated). All the servers are in the same network, which I configured as L2 ports with the "switchport" command. However, the interface "mtu" command is unavailable in L2 mode; I am only able to get it in L3 mode, with the "no switchport" command on the interface.
# int eth1/2-45
# no switchport
# mtu 9216
# no shut
I can ping the servers with less than 1500 bytes, but anything larger fails.
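For what it's worth, on the Nexus 3000/5000 families the jumbo MTU for L2 switchports is normally applied through a network-qos policy rather than the per-interface mtu command; a minimal sketch of that approach, on the assumption it applies to the 3064 here as well:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo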
The server team has asked me to implement jumbo frames on a single VLAN, the one they use for vMotion. We have two pairs of 5548s, each pair running vPC for most connections. I am aware of many postings that describe how to enable jumbo frames globally, like this:
policy-map type network-qos jumbo
 class type network-qos class-default
 [code].....
I am not clear how I can extend this principle to one VLAN only.
Also, I am aware of a posting [URL] that shows some pitfalls of implementing jumbo frames in a vPC configuration. Pretty well all my connections are vPC, including all the FEXes, which are all dual-homed. In many cases the vPC extends through to the servers, so the servers run port-channels across two FEXes. I am unclear whether the pitfalls are still valid, or whether I have to wait until my next maintenance slot (6 months away) to implement jumbo frames. Can jumbo frames be implemented safely on the fly? How does enabling jumbo frames fit in with "conf sync" mode?
We are currently using two Nexus 5548UPs as our datacenter network core. I have a pretty simple objective: I would like to enable jumbo frames on a single VLAN only (VLAN 65). This VLAN is used strictly for backups. I do not want to enable jumbo frames on the other VLANs (VLANs 1-10). I'm not sure what the best way to do this is, or if it is even possible, but I am hoping to get some configuration examples.
On some of our ports on a Nexus 5000, and on the connected FEX, we can see a lot of jumbo packets even though jumbo frames are not enabled anywhere on the switch; all interface and system MTUs are set to 1500.
DBE-LINZ-XX41# sh int Eth113/1/27
Ethernet113/1/27 is up
  Hardware: 100/1000 Ethernet, address: d0d0.fd1b.b69c (bia d0d0.fd1b.b69c)
I attempted to enable jumbo frames on a Nexus 5010 (NX-OS version 4.2(1)N1(1)). I created the policy map below and lost access to the switch.
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
After recovery, I can see from the logs that all VLANs and interfaces were suspended. I've tried to find the reason for the compatibility issue, but I am unable to find what is checked and what could have been incompatible. The other troubling thing is that the adjacent switch suspended its interfaces too, even though no change was made there. What do I need to look out for so that this does not happen again?
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 1,10,601 on Interface port-channel1 are being suspended. (Reason: QoSMgr Network QoS configuration incompatible)
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-5-IF_TRUNK_DOWN: Interface port-channel1, vlan 1,10,601 down
2011 Nov 22 23:43:09 phx-ipcg1dwfcma %ETHPORT-3-IF_ERROR_VLANS_SUSPENDED: VLANs 10 on Interface port-channel508 are being suspended.
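Judging by the "QoSMgr Network QoS configuration incompatible" reason, this looks like a vPC type-1 consistency failure: the network-qos policy has to match on both vPC peers, or the vPC VLANs are suspended on both switches, which would also explain the adjacent switch being affected. Two commands commonly used to compare the peers (worth verifying against your NX-OS release):

show vpc consistency-parameters global
show policy-map system type network-qos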
I currently have 4 3560 switches connected in a mesh topology. These are all set to use jumbo frames, as are all the servers connected to them. I now need to connect a 2950 switch to two of the 3560s; it will have only desktop computers connected to it, and I do not want to configure jumbo frames on it or on any of the desktops.
I understand that jumbo frames need to be enabled end to end. I have two ESX hosts connected at each site. I want to enable jumbo frames for those ports, but what if not all hosts on the ESX are using jumbo frames - will I have drops and connection failures? So if I have two sites, each with a 6509 connected via a trunk, and I need to enable jumbo frames for a VLAN between the sites, how do I accomplish this? If I enable jumbo frames on the trunk link, how does that impact other traffic between the sites?
I have a Cisco Catalyst 3100 blade in a Dell server chassis that is trunked to a 6509.
When doing a protocol capture, I see large frames being sent from one of the servers in the chassis.
Example:
TCP:[Continuation to #1701] [Bad CheckSum] Flags=...AP..., SrcPort=HTTP(80), DstPort=2667, PayloadLen=2831, Seq=1489611217 - 1489614048, Ack=1719592331, Win=65535

I see lengths up to 6900+ bytes coming from the server.
The switch has the global MTU set to 1500
system mtu routing 1500
and I can't seem to set this at the interface level. The server is configured to send 1500-byte frames. Why am I seeing these jumbos? (The server is Windows 2003.)
I have a pair of Catalyst 3560G switches that are trunked via two of the standard ports, and that have trunk ports connecting to a failover pair of PIX 515Es. We're considering adding a pair of cluster database nodes and an iSCSI SAN, both of which would need a dedicated interconnect VLAN on which I'd like to employ jumbo frames. I don't necessarily need the VLANs to traverse the firewall trunks, since they're private interconnects, but I do need each host to traverse the switch trunks.
Since it seems I can only enable jumbo frames for the entire switch (the current standard frame size is 1500, and jumbo is also 1500), what kind of negative impact could enabling it have on my trunk ports as well as my host connections? I've read mixed reviews from users with iSCSI SAN devices seeing terrible performance after enabling jumbo frames, so I'm apprehensive about enabling them on an existing network.
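For context, a sketch of the usual 3560 commands and the verification step; the new size only takes effect after a reload, which is presumably why the output still shows jumbo at 1500:

conf t
 system mtu jumbo 9000
end
show system mtu
reload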
We have a number of sites with high-speed L2 links which terminate on our L3 switches at each site. The ports between the sites are placed in routed mode.
I would like to use jumbo frames between two of the networks that communicate across sites, and a 1500-byte MTU on the rest. Is this something that is possible?
From my understanding, the MTU is set on the interface; therefore, if I set the MTU on the inter-site link ports at both sites to 9000, would this cause a problem for the hosts still using 1500?
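What is being described would look roughly like this on each site's routed port; a sketch with a hypothetical interface and addressing, assuming Catalyst-style IOS on the L3 switches. A 9000-byte link MTU still carries 1500-byte packets unchanged, so raising it on the link alone should not break the 1500-byte hosts:

interface TenGigabitEthernet1/1
 no switchport
 mtu 9000
 ip address 10.0.0.1 255.255.255.252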
I have a Nexus 7000 plus 6 Nexus 2000 boxes on the backbone. I have configured the following on the 7000:
conf t
 system jumbomtu 9000
exit
ERROR: Ethernet111/1/1: requested config change not allowed
...
ERROR: Ethernet122/1/48: requested config change not allowed

1/111/14 is an NX2000 port:

conf t
 interface ethernet 1/111/14
  switchport
  mtu 9000
exit
I got this message: "Error: MTU cannot be configured on satellite port(s) - Eth122/1/11". I also tried on an N7K TP port and got: "ERROR: Ethernet10/45: MTU on L2 interfaces can only be set to default or system-jumboMTU". Can the jumbomtu configuration only be done when there are no NX2000 FEXes configured?
I am trying to determine whether jumbo frames are enabled on our Nexus 7000, and I am getting mixed information back from the switch. It looks like the system jumbo MTU size is 9216 by default, but the interfaces all say the MTU is 1500 bytes. According to this article, the interface MTU should read 9216 if jumbo frames are enabled globally. Is this correct? Is there a way to verify whether jumbo frame support is turned on? [code]
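A couple of verification commands that are commonly suggested for this (interface number hypothetical):

show run all | include jumbomtu
show interface ethernet 1/1 | include MTU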
I have a little problem. My customer is using TACP-PLUS ALPHA (F4.0.3.alpha.v9). The same user that has access to other Cisco equipment (user test1, for example) can configure anything on that equipment, but on the Nexus 5000 the command "show user-account" shows just the "network-operator" role. I patched this situation with the following commands:
aaa authorization config-commands default group TACSERVER local
aaa authorization commands default group TACSERVER local
Now, when I telnet into the Nexus, I can shut interfaces, configure, and do anything. But when I log in via the console, I cannot configure the interfaces. I understood that on the Nexus 5000 the TACACS configuration is global for both VTY and console (unlike on Cisco routers, for example).
I have the following configured on my Nexus switches and it works successfully.
The problem I have is that once I switch off the ACS server, I can still log on to the Nexus because I have an admin user configured locally, but unfortunately I cannot run commands: the switch tries to point to the ACS server for authorization, and the ACS server is turned off. Is it possible for the Nexus to ignore command authorization if it cannot see the ACS server?
feature tacacs+
ip tacacs source-interface vlan 705
tacacs-server host x.x.x.x key 7 "xxxxxx"
aaa group server tacacs+ Test-switch
(Test-switch is a group configured on ACS 5.2)
[Code]...
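If it helps, NX-OS accepts local as a fallback method at the end of the authorization method list, which should let command authorization fall back to the local database when the TACACS+ group is unreachable; a sketch using the group name from the post (verify the exact behaviour on your release):

aaa authorization commands default group Test-switch local
aaa authorization config-commands default group Test-switch local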
I'm planning to upgrade an N5K from 5.1(3)N2(1b) to 5.2(1)N1(4). Running:

sh install all impact kickstart bootflash:n5000-uk9-kickstart.5.2.1.N1.4.bin system bootflash:n5000-uk9.5.2.1.N1.4.bin

reports:
The diagram below shows the configuration we are looking to deploy. It is set up that way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s. The blades inserted in the UCS chassis have Intel dual-port cards, so they do not support full failover.
The questions I have are:

- Is this my best deployment choice?
- vPC depends heavily on the management interface on the Nexus 5000 for keepalive peer monitoring, so what is going to happen if the vPC breaks because:
  - one of the 6500s goes down (STP?) - what is going to happen with the EtherChannels on the remaining 6500?
  - the management interface goes down for any other reason - which one is going to be the primary Nexus?
Below is the list of devices involved and the configuration for the Nexus 5000s and 6500s.
Devices
· 2 Cisco Catalyst 6500 with two WS-SUP720-3B each (no VSS)
· 2 Cisco Nexus 5010
· 2 Cisco UCS 6120XP
· 2 UCS chassis
  - 4 Cisco B200-M1 blades (2 per chassis)
  - Dual-port 10Gb Intel card (1 per blade)
I am having an issue trying to configure snmp-server context vrf XXX. For some reason, even when I put in my VRF name, I can't see anything about this VRF. This is the command I added:
We have our Nexus as our default gateway (101.1), and the default VLAN 1 is set up with two subnets, 101.X and 102.X. The DHCP server is using a superscope to accommodate the overflow of devices requesting IPs on 101, so when 101 is exhausted, hosts are able to obtain a 102.X IP address. The superscope setup is basic. The issue is that routing to the firewall from a 102.X address is not always reliable. Some days all goes well and the 102 subnet is routed out to the firewall; however, today a 102.X address is not routing as it did 24 hours ago. I am perplexed as to why this behaves unpredictably. Here is the running config for VLAN 1, showing 102 as the secondary address:
I have two switches: a Nexus 3064 (ver 5.0(3)U1(2)) and a Cisco 6509. The 6509 has:
1 x WS-X6708-10GE 8-port Ten Gig module, version 3.5, firmware 12.2(18r)S1
2 x Sup720, with PFC3A
2 x WS-X6348-RJ-45
4 x WS-X6748-GE-TX
The IOS is ipservicesk9-mz 12.2(33)SXH8b. Both switches had been running fine for quite a while (not connected to each other). I then ran a fiber connection between Nexus port 1/48 and 6509 port Te9/1. When I ping (with any packet size) from the Nexus to the 6509 at 172.19.4.254, the 6509's CPU goes to 100%; on occasion we get 1 out of 20 packets back in reply. I reduced the MTU size on the three 6509 parameters until the CPU stopped hitting 100%. The magic number is 4175 bytes; 4176 and higher means 100% CPU. I am willing to set the fiber link to 1500, but how does one change that for just one port on the Nexus? I tried, and it refuses to set the MTU. I also tried to set up a new service-policy, but that didn't seem to work either. The Nexus users are all jumbo frame users. The users on the 6509 are all 1500-byte frame users, except for one user on a 1-gig port (WS-X6748-GE-TX line card); it is this user who would like jumbos.
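As in the 3064 post earlier in this thread, the per-interface mtu command on that platform is only accepted on routed (no switchport) ports; if the inter-switch link can be made L3, a sketch would be (port number from the post, addressing omitted):

interface ethernet 1/48
  no switchport
  mtu 1500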
Is it possible to connect one Cisco Nexus 2000 fabric extender to two Cisco Nexus 5000s, using one link on the first side and two links on the other side?
We do not have an out-of-band management network, and setting one up at this point is not planned. We are mainly a switched environment; the only devices using L3 are the core switch (for WAN purposes) and the lab, because it mimics the production environment. I have two Nexus switches sitting on the other side of a 3750, which is currently acting as an L3 device because this is a pre-production environment for a new project. We had an issue with management of the devices before, but our workaround was to put them on the management VLAN directly off the core, allowing only management traffic to pass via mgmt0 on each device. The problem I'm having now is that I've set up the mgmt0 interfaces on both switches as the keepalive link for vPC only (vPC traffic is going across 2x10Gb connections, and the link to the 3750 is 1Gb each, trunked), and I have lost the ability to use the mgmt0 connections for management. How do I connect my management connection through either the 3750 or directly off the core switch (as that's what will happen once it's put into production)?
When will the command "default interface x/x" be available on the Nexus 5000 platform? Even with the latest software version (5.1(3)N2(1a)) it is not possible. On the Nexus 7000 it works fine with the 5.2 train. Is there a feature request for it? If not, here it is! It's horrible to deconfigure many interfaces, especially in N5K environments with many FEXes.
My network consists of the following devices: a Cisco Catalyst 3750 with StackWise, 2 x Nexus 5000 series, and servers. The servers connect to the Nexus switches; the Nexus switches connect to the 3750.
Each server has two links, one to Nexus 1 and the other to Nexus 2 (carrying the same traffic), and each Nexus has one link to the 3750. On the 3750, the Nexus-facing links are configured as an EtherChannel, but MAC flapping occurs on the 3750.
I understand that the two Nexus links present the same server source MAC address, which is why the flapping occurs on the 3750. How do I solve this problem?
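One common approach, assuming your NX-OS release supports it, is to configure the two Nexus switches as vPC peers so the 3750 legitimately sees one port-channel spanning both switches. A minimal sketch with hypothetical domain, addresses, and interface numbers, applied on each Nexus:

feature vpc
vpc domain 1
  peer-keepalive destination 10.1.1.2
interface ethernet 1/1-2
  channel-group 10 mode active
interface port-channel 10
  switchport mode trunk
  vpc peer-link
interface ethernet 1/20
  channel-group 20 mode active
interface port-channel 20
  switchport
  vpc 20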
What is the purpose of these default configuration lines? What do they mean? I can't find an explanation of them anywhere. I believe some are written to the config when FCoE is enabled.
I would like to know exactly what they are doing.
class-map type qos class-fcoe
class-map type queuing class-fcoe
  match qos-group 1
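My understanding, worth verifying in the QoS configuration guide for your release, is that these are the predefined FCoE system classes on the 5000, roughly:

class-map type qos class-fcoe          ! classification class reserved for FCoE traffic
class-map type queuing class-fcoe      ! queuing counterpart of the same system class
  match qos-group 1                    ! FCoE is mapped internally to qos-group 1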
I am just wondering how mismatched MTU sizes are handled in Layer 2 networks, and also inside a particular switch's internal architecture. Layer 2 devices do not perform fragmentation in the event of an MTU mismatch. Is this because Layer 2 devices do not rewrite header information (like inserting the destination IP and next-hop MAC into a newly created frame)? I believe this is what they call per-hop behaviour. If that is not the reason, then what is?

Assuming this is the reason, let me proceed to my next question. When we set the MTU on an interface, there is no mention of direction (ingress or egress), so I take this to mean both directions. So if a jumbo frame comes in on an interface that is set to receive jumbo frames, a forwarding decision is made, and the frame is scheduled to egress via an interface whose MTU is not set for jumbo frames, will the switch drop the frame at the egress buffer? If not, this implies MTU is an ingress-only property (applying only to incoming packets). But if it does drop the packet, then MTU should have been a system-wide or global configuration, as opposed to an interface-level configuration (just like on the Nexus 5000).