I am looking to implement a QoS policy on a pair of Nexus 5548UPs. FCoE is a factor here. I have created the following configuration and would like a few pairs of eyes to give it a quick sanity check.
How can I make sure this config is valid? Also, I realize I'm applying an MTU of 9216 to all classes right now; this will be phased out incrementally.
class-map type qos match-all class-platinum
match cos 5
class-map type qos match-all class-gold
match cos 4
class-map type qos class-fcoe
match cos 3
[code]....
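Since the rest of the policy is elided above, here is a minimal sketch of how the remaining pieces usually fit together on a 5548: the type qos policy maps each class to a qos-group, and the type network-qos policy sets MTU and no-drop behavior per group. The policy names and qos-group numbers are mine, not from the original config:

policy-map type qos system-qos-in
  class class-platinum
    set qos-group 2
  class class-gold
    set qos-group 3
  class class-fcoe
    set qos-group 1
class-map type network-qos class-platinum
  match qos-group 2
class-map type network-qos class-gold
  match qos-group 3
policy-map type network-qos system-nq
  class type network-qos class-platinum
    mtu 9216
  class type network-qos class-gold
    mtu 9216
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type qos input system-qos-in
  service-policy type network-qos system-nq

Note the FCoE class keeps the standard 2158-byte MTU and pause no-drop rather than inheriting the jumbo setting; a type queuing policy for bandwidth allocation would normally round this out.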
I have followed every piece of Cisco documentation I could find on this and I still can't get vPC to actually work. The VLANs stay in a suspended state, so no traffic flows across. Below is my configuration:

vrf context management
  ip route 0.0.0.0/0 10.86.0.1
vlan 1
vlan 86
  name I.S_Infrastructure
vpc domain 1
  role priority 1000
  peer-keepalive destination 10.86.0.4
interface Vlan1
interface Vlan86
  no shutdown
  description I.S._Infrastructure
  ip address 10.86.0.1/24
interface port-channel1
  switchport mode trunk
  vpc peer-link
  spanning-tree port type normal
interface Ethernet1/1
  switchport mode trunk
  channel-group 1 mode active
interface Ethernet1/2
  switchport mode trunk
  channel-group 1 mode active
interface Ethernet1/3
  description Connection to Mgmt0
  switchport access vlan 86
  speed 1000
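One thing that stands out in the config above: the peer-keepalive line has no source address or VRF, and 10.86.0.4 looks like it would be reached over the Vlan86 SVI, which itself depends on the peer link being up, a chicken-and-egg that can leave the vPC down and the VLANs suspended. A common arrangement, sketched here on the assumption that each switch's mgmt0 carries an address in 10.86.0.0/24 (the source address below is a placeholder for this switch's own mgmt0 IP), is to run the keepalive out-of-band:

vpc domain 1
  peer-keepalive destination 10.86.0.4 source 10.86.0.3 vrf management
! verify with:
! show vpc peer-keepalive
! show vpc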
We are looking to deploy two Nexus 7009 cores at our two datacenters, which are approximately 2 miles apart. We are hoping to have 10G dark fiber between the buildings and therefore dedicate a pair of strands to FCoE between the cores using 10G long-range SFPs. I read that the Nexus 5000 series had a limit of ~3 km for FCoE. Does the same hold true for the 7000 series? I thought I read somewhere that the buffers are larger on the 7000 series and it would therefore be able to do ~30 km.
I have a Nexus 5548UP that will be managed by two organizations. Is it possible to set IP addresses on both mgmt0 and an SVI (or an L3 interface) without using the L3 daughter card? I don't want to route between VLANs, just to separate management traffic.
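For what it's worth, I believe a Layer 2-only 5548 does allow a single in-band management SVI alongside mgmt0, without any routing between VLANs. A minimal sketch, with the VLAN number and address as placeholders:

feature interface-vlan
interface Vlan100
  description in-band management for second org
  ip address 192.0.2.10/24
  no shutdown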
I'm trying to get a node on SVI1 in VRF1 and a node on SVI2 in VRF2 to reach each other. After hours of failure, I went to outside resources. Everything I read on the internet says it's not possible on this platform, and at least one TAC engineer seems to agree.
I just can't believe such a high-end data center switch is not capable of handling such a basic feature.
We currently have an environment with a 4507 as the core switch connected to four stacks of 3750Es in the wiring closets. A pair of Nexus 5548UPs also hangs off the 4507, but at the moment they are more or less dedicated to a certain purpose. The 5548UPs have the L3 daughter card installed.
My question is: can a pair of Nexus 5548UPs do a C4507's job? Would we be able to decommission the 4507 and replace it with the existing 5548UPs + FEXes?
This past Networkers I was at the Cisco booth discussing how the 2248 can connect to the 5548 and provide server connectivity. I was told that now, as of a fairly recent NX-OS release, you can have the 2248 dual-homed to both 5548s via vPC and then have a server connected to both 2248s in active-active mode. Is this correct?
When we first deployed our 5548s and 2248s we had to put the 2248s in a straight-pinned mode, where each had connections to only one 5548, and then the server would dual-connect to the 2248s and be in active-active mode. I was told that this changed with an NX-OS release; however, documentation still seems to be fragmented on what exactly is the case.
How separate is the management interface on a Nexus 5548?
In context: what's the risk of having a Layer 2-only Nx5K in a DMZ and running the management ports down into an internal management VLAN, to form peer-keepalive links and do software upgrades?
I have 2 sites located approximately 30 kilometers apart. I will call them site 1 and site 2. The sites are connected by a Layer 2 1Gb fibre connection. I would like to add 2 x Cisco Nexus 5548UP switches at site 1 and connect them via GLBP.
I would then like to add 2 x Cisco Nexus 5548UP switches at site 2 and connect them via GLBP. I would then like to connect the 2 x Nexus 5548UP switches at site 1 to the 2 x Nexus 5548UP switches at site 2 via GLBP.
I just received a Nexus 5548 to configure as the core of the datacenter LAN. Is it true that the VRFs created cannot talk to each other? I can't seem to find any documentation on how to do this, and at least one TAC engineer half-heartedly believes it's not possible, either.
Basically, I'm trying to get an SVI in VRF1 to be able to talk to a device on another SVI in VRF2.
I can't believe this high-end switch, that is so capable in every regard, cannot handle this feature.
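The workaround I've most often seen suggested for this platform, since it doesn't leak routes between VRFs, is a physical loop: one L3 port in each VRF, cabled together, with static routes pointing across. A sketch, with all interface numbers and subnets hypothetical (10.1.0.0/24 stands in for the SVI subnet in VRF1, 10.2.0.0/24 for the one in VRF2):

interface Ethernet1/31
  no switchport
  vrf member VRF1
  ip address 10.99.99.1/30
interface Ethernet1/32
  no switchport
  vrf member VRF2
  ip address 10.99.99.2/30
vrf context VRF1
  ip route 10.2.0.0/24 10.99.99.2
vrf context VRF2
  ip route 10.1.0.0/24 10.99.99.1

This burns two ports and a cable, but it keeps each VRF's routing table self-contained.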
What is the best option for load balancing between 2 x Cisco Nexus 5548UP switches located at one site and 2 x Cisco Nexus 5548UP switches located at another site?
The sites are connected via a 1Gb fibre connection. I am unable to use GLBP until it is supported in a future software release.
I'm seeing several error messages like these in the logs of my Nexus 5548UP switches.
2012 Apr 24 16:39:41.470 SSV_5K_SW2 %LLDP-5-SERVER_ADDED: Server with Chassis ID aaaa.bbbb.cccc Port ID mgmt0 management address X.X.X.X discovered on local port mgmt0 in vlan 0 with enabled capability Bridge
2012 May 2 05:05:00.627 COR_CCO-NX-5548-UP_01 %LLDP-5-SERVER_REMOVED: Server with Chassis ID aaaa.bbbb.cccd Port ID aaaa.bbbb.cccc on local port Eth1/1 has been removed
2012 May 2 05:06:40.328 COR_CCO-NX-5548-UP_01 %LLDP-5-SERVER_ADDED: Server with Chassis ID aaaa.bbbb.cccd Port ID aaaa/bbbb.cccc management address NIL discovered on local port Eth1/1 in vlan 0 with enabled capability None
I will say that these 5548s are serving as the distribution layer for a UCS chassis (2x 6120 FIs), but I didn't know what kind of visibility the Nexus would have into that. The Chassis ID field is what suggests this to me, and I'm seeing these messages whenever interfaces that connect downstream to the fabric interconnects are brought up or down.
In the existing network we have a Cisco 2811 router connected to the corporate MPLS cloud. The Cisco 2811 is connected to a Catalyst 6509 switch (set-based CatOS, with an MSFC card). Along with that we have two Catalyst 5509s. We are upgrading the access layer by replacing the Catalyst switches with Nexus 5548s & 2248s.
For the purpose of testing I have connected a 5548 & 2248, created vPCs and EtherChannels between the two, and configured SVIs and HSRP on the 5548. I am terminating a 2651 (test router) on 2248 port Eth101/1/1. On the 5548 I have enabled EIGRP on the VLANs. I am unable to ping the 2651 from the 5548 and vice versa. I can see both devices in CDP, but I do not see an EIGRP neighborship formed.
What configuration should go on the 2248 and 2651 in order to establish a connection between the two? If the test is successful, I will connect the 2811 to the 2248 during the actual migration. I assume that if it works for the 2651 in testing, it will work for the 2811.
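A minimal sketch of the pieces that need to line up, with VLAN 10 and the addressing purely illustrative: the FEX port is just an access port, the SVI on the 5548 runs EIGRP, and the 2651 speaks EIGRP on its interface in the same subnet.

! Nexus 5548 side
feature eigrp
router eigrp 100
interface Ethernet101/1/1
  switchport access vlan 10
interface Vlan10
  no shutdown
  ip address 10.10.10.1/24
  ip router eigrp 100

! 2651 (IOS) side
interface FastEthernet0/0
 ip address 10.10.10.2 255.255.255.0
 no shutdown
router eigrp 100
 network 10.10.10.0 0.0.0.255

If the AS numbers, subnet, and VLAN all match and the neighborship still doesn't form, checking whether the SVI is actually up (show ip eigrp interfaces) would be my next step.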
I have a pair of Nexus 5548UPs that have some high-priority servers running on them. The servers are ESX hosts running Nexus 1000vs. Each host has multiple connections in a vPC to both 5548s. We have been having intermittent ping loss and slowness of traffic to the VMs on these hosts. I was poking around trying to figure out what the issue could be and found that the peer-keepalive command was not set to send the heartbeat across the mgmt0 interface. I would like to change this to point it across the mgmt0 interface. Any tips or advice on making this change with production servers running on the switches? I do not want to cause any loss to any systems when I make this change. [Code] ..........
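For what it's worth, the keepalive change itself is one line under the vPC domain, entered on both switches. Since the keepalive is only a UDP heartbeat, briefly re-pointing it should not, by itself, bounce the vPC member ports, but I would still do it in a maintenance window and verify before and after. A sketch of the sequence, with placeholder mgmt0 addresses:

show vpc peer-keepalive
conf t
vpc domain 1
  peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management
end
show vpc peer-keepalive
show vpc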
We have recently upgraded our LAN and are using a couple of Nexus 5548UP switches in the core with 2960 stacks as access switches. Each access switch stack is connected to both core switches, with the links being port-channels and vPCs. All is working fine, but our SolarWinds management platform (NPM) is being flooded with "Physical Address changed" events. Here is an example of the messages:
NSW_Core_2 - Ethernet1/7 Physical Address changed from 000000003811 to 73616D653811 NSW_Core_2 - Ethernet1/7 Physical Address changed from 200B82B43811 to 000000003811
For each interface I have messages like these repeating. I am not sure what those messages mean or if there is actually anything wrong. Performance of the network is good, there are no errors on any interfaces, and I do not see anything related in the switch logs.
I need clarification on vPC with the 5K and 2248 Fabric Extenders. My question is: can each fabric extender uplink to two different 5Ks and, at the same time, have servers connected to both fabric extenders with a vPC? So basically, the server NIC will team across two different fabric extenders, and each fabric extender will connect to two different 5Ks.
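From what I understand this is the "enhanced vPC" topology, which I believe the 5500 series supports as of NX-OS 5.1(3)N1(1). A sketch of the shape of it, with all port and channel numbers hypothetical; the same fabric config goes on both 5548s, and the host port-channel spans both FEXes without needing its own vpc keyword:

! on both 5548s: FEX 101 dual-homed via a vPC fabric port-channel
interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101
! server NIC team lands on one FEX port from each FEX
interface Ethernet101/1/1, Ethernet102/1/1
  channel-group 200 mode active
interface port-channel200
  switchport access vlan 10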
We are currently using two Nexus 5548UPs as our datacenter network core. I have a pretty simple objective: I would like to enable jumbo frames on a single VLAN only (VLAN 65). This VLAN is used strictly for backups. I do not want to enable jumbo frames on the other VLANs (VLANs 1-10). I'm not sure what the best way to do this is, or if it is even possible, but I am hoping to get some configuration examples.
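As far as I know the 5548 sets MTU per QoS system class rather than per VLAN or per interface, so the closest equivalent is to classify the backup traffic into its own class and give only that class a jumbo MTU. A sketch, where matching on CoS 2 is purely an assumption about how the backup traffic could be marked:

class-map type qos match-all BACKUP
  match cos 2
policy-map type qos CLASSIFY
  class BACKUP
    set qos-group 2
class-map type network-qos BACKUP-NQ
  match qos-group 2
policy-map type network-qos JUMBO-BACKUP
  class type network-qos BACKUP-NQ
    mtu 9216
  class type network-qos class-default
    mtu 1500
system qos
  service-policy type qos input CLASSIFY
  service-policy type network-qos JUMBO-BACKUP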
I am seeing an issue where, after deleting and recreating one of the VDCs on a Nexus 7K, a VLAN can no longer be configured within the VDC, although it is not actually a reserved VLAN. Could anything be missing in the license installation? The image version is NX-OS 6.1(2).
I would like to know if the power the Nexus 7K allocates per module is configurable. For example, we are only using the 8 dedicated ports on our N7K-M132XP-12 card. The Nexus budgets 750W for the module, but given that we will only ever use 8 of the 32 ports, we would like to allocate the remaining power elsewhere.
We have two Nexus switches in our network: one is a Nexus 5020, the other a Nexus 5596UP. The system image is identical on both switches, 5.2(1)N1(4). When we try to set up vPC between these switches, we see that all configured VLANs on the vPC peer link are blocked by spanning tree with the message "Bridge Assurance Inconsistent, VPC Peer-link Inconsistent". We still can't solve this problem.
Topology:
NEXUS_5020 --- Peer_link (Po2) --- NEXUS_5596UP
        \                             /
 Member_link (Po100)      Member_link (Po100)
          \                         /
                    SERVER
Configuration:
NEXUS_5020:
speed 1000
interface Vlan2000
  no shutdown
  description VPC_keepalive_link
  vrf member VPC_kepalive
  ip address 10.55.55.2/30
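One thing worth checking for the "Bridge Assurance Inconsistent" message: Bridge Assurance only runs on ports of spanning-tree type "network", and it has to be active on both ends of the peer link. A sketch of what both switches' peer-link port-channels would normally carry (Po2 per the topology above; the rest is the usual baseline, not taken from the post):

interface port-channel2
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
! verify from both sides:
! show spanning-tree summary
! show vpc consistency-parameters global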
We have a Nexus 7009 switch and want to configure a SPAN session.
We are using an F2 and an M2 card, each in a separate VDC. Our server is connected to the M2 card on Eth4/6, and we want to monitor traffic from VLAN 161, which is configured on the F2 card.
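For reference, a basic VLAN-source SPAN session on NX-OS looks like the sketch below. Note, though, that as far as I know the SPAN source and destination must live in the same VDC, which may be the real obstacle here given that the F2 and M2 are in different VDCs:

interface Ethernet4/6
  switchport
  switchport monitor
monitor session 1
  source vlan 161 rx
  destination interface ethernet 4/6
  no shut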
I have a Cisco Nexus 3064 that I am using as part of a flat network for the lab. I have 30 virtualization servers (MS Hyper-V and VMware vSphere) connected to this switch and I want to enable jumbo frames. The virtualization servers are able to ping their local VMs using 8K bytes; however, I am unable to ping from server to server using 8K bytes. My configuration is below (abbreviated). All the servers are in the same network, which I configured as L2 ports with the "switchport" command. However, the interface "mtu" command is unavailable in L2 mode; I am only able to get it in L3 mode, with "no switchport" on the interface.
# int eth1/2-45
# no switchport
# mtu 9216
# no shut
I can ping the servers with less than 1500 bytes, but anything larger fails.
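On the Nexus 3000 (as on the 5000), L2 MTU isn't set per interface; it comes from the network-qos policy. A sketch of the usual system-wide jumbo setting, which should let the ports stay as L2 "switchport" ports:

policy-map type network-qos JUMBO
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos JUMBO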
What is the purpose of these default configuration lines? What do they mean? I can't find an explanation of them anywhere. I believe some are written to the config when FCoE is enabled.
I would like to know exactly what they are doing.
class-map type qos class-fcoe
class-map type queuing class-fcoe
  match qos-group 1
I have two 5548s as core. 8 FEXes are multihomed (advanced vPC topology?) to both cores. Suppose I have to configure a bunch of ports on the FEXes, say Eth101/1/10-20. I would log in to the first core and apply the configs.
My question is: do I have to do the same on the second core also, or would the first core replicate the config to the second? I know about port-profiles/CFS and such. But without that, would it automatically sync to the second core?
For testing purposes, I went to core 1's Eth101/1/10, put a description of "TEST", and wrote the config. After 5 minutes I logged into the second core and did a show run for Eth101/1/10, but the description "TEST" didn't show up there.
Also, doing a sh run on any FEX port is fast on one of the cores and very slow on the second core. All the FEXes have 20 Gb of uplink to cores 1 & 2 (so 40 Gb total in the vPC, pinning max-links 1).
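My understanding is that nothing syncs automatically; for dual-homed FEX ports, the supported way to keep both 5548s aligned is config-sync with a switch-profile. A sketch, where the profile name and peer address are placeholders:

config sync
switch-profile FEX-PORTS
  sync-peers destination 192.0.2.2
  interface Ethernet101/1/10
    description TEST
  commit

Anything configured inside the switch-profile is pushed to the peer on commit; config entered outside the profile stays local, which would explain the "TEST" description never appearing on the second core.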
This is regarding Cisco logging configuration. We planned to enable logging on all the Cisco Nexus switches. We are running HP ArcSight in our DC, and this device monitors all the Cisco devices. We want to send logging to the ArcSight device. I would just like to know the config commands for a Nexus device: what is the command to enable logs that include who logged in and out, interface-down information, who entered conf t, and every other log?
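A minimal starting point, with the collector IP as a placeholder and severity 6 (informational) as an assumption; link up/down and login/logout events are logged at informational level by default:

logging server 192.0.2.50 6 use-vrf management
logging timestamp milliseconds
! who ran which commands is kept in the local accounting log:
! show accounting log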
We've been using IOS for a long time but are relatively new to NX-OS. We've got a central syslog server that all our devices log to. No matter what we do, we can't get our Nexus switches to log there. Here's my current attempt:
Nexus 7009, NX-OS 6.0(1)
# sh logging server
Logging server: enabled
{redacted}
server severity: debugging
server facility: local7
server VRF: default
[code].....
The default VRF is working. I see log entries in the local logfile, but nothing arrives at the syslog server. It's not a config issue on the server, because tcpdump there shows that no packets arrive from loopback 0's IP.
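One thing I'd try: make the VRF explicit and pin the source to loopback 0, in case the syslog traffic is actually leaving via a different VRF or source address than tcpdump is filtering on (the server IP below is a placeholder, and severity 7 matches the "debugging" severity shown above):

no logging server 192.0.2.50
logging server 192.0.2.50 7 use-vrf default
logging source-interface loopback 0
! confirm with: show logging server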
Got an odd problem with trunking: all VLANs except VLAN 1 trunk perfectly. The link is from a pair of dual-homed FEX 2248TPs to some 3560G switches. Nexus running 5.1(3)N2(1); 3560s running 12.2(53)SE2 & 15.0(1)SE2.
We have a Nx5548UP pair connected to FI 6248s via a vPC. We had to reboot an FI (in order to configure more FC ports); following that reboot, we hit many issues. The first log shows the vPC going down when the FI rebooted:
2013 May 23 12:31:45 sw-n5kup-fr-eqx-01 %ETH_PORT_CHANNEL-5-PORT_INDIVIDUAL_DOWN: individual port Ethernet1/2 is down