Cisco Switching/Routing :: Nexus 5548UP To 2248 Design
Aug 10, 2012
At this past Networkers I was at the Cisco booth discussing how the 2248 can connect to the 5548 and still provide server connectivity. I was told that, as of a fairly recent NX-OS release, you can dual-home the 2248 to both 5548s via vPC and then connect a server to both 2248s in active-active mode. Is this correct?
When we first deployed our 5548s and 2248s we had to use static pinning, where each 2248 connected to only one 5548; the server would then dual-connect to the 2248s and run in active-active mode. I was told this changed with an NX-OS release, but documentation still seems fragmented on what exactly is the case.
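For reference, a dual-homed (active-active) FEX on this platform is normally configured by putting the fabric port-channel into a vPC on both parents; the sketch below uses assumed FEX and interface numbers, not our production config:

  ! On both N5548s (vPC domain already up): dual-home FEX 101
  feature fex
  interface Ethernet1/1
    switchport mode fex-fabric
    fex associate 101
    channel-group 101
  interface port-channel101
    switchport mode fex-fabric
    fex associate 101
    vpc 101
  ! The server-facing ports (e.g. Eth101/1/10) are then configured
  ! identically on both parents
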
I am looking to see whether the Nexus 5596UP & Nexus 2248TP GE are compatible with the SFP-10G-SR. The reason is that a consultant was hired to "design" the network layout and decided to purchase Cisco SFP+ copper Twinax cables, which have a 10m limit. A small handful of the data center racks are 10-15m away... just out of reach of the Twinax. I would prefer NOT to move the LAN row so that it is more centered in the room. Can I use the SFP-10G-SR to connect the 2 switches (5596 & 2248) together? This SFP has a 26m reach on standard 10-gig fiber, and the small cost increase per connection is of no concern.
I have a pair of 2248 FEXes where I'm currently terminating several vPCs to the servers, each with one port per FEX. Is it possible to run a vPC to a server with 4 ports, i.e. 2 ports per FEX? I saw some discussions indicating it wasn't possible on a 2148 FEX, but would be on the 2248. The ports will just be dot1q trunks.
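If your release supports it, a 4-port host channel in a straight-through FEX design would presumably be configured symmetrically, two member ports per FEX (port-channel and interface numbers below are assumptions):

  ! On the N5548 pair: 2 host ports on each FEX in the same channel
  interface Ethernet101/1/1-2
    switchport mode trunk
    channel-group 50 mode active
  interface Ethernet102/1/1-2
    switchport mode trunk
    channel-group 50 mode active
  interface port-channel50
    switchport mode trunk
    vpc 50
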
I have a pair of 5596s running in a vPC, with a Nexus 2248 connected to each N5596. When I run the command "show fex" I get the following output on the 2nd 5596:
  Number  Description  State                Model              Serial
  ------------------------------------------------------------------------
  101     FEX101       AA Version Mismatch  N2K-C2248TP-E-1GE  SSI16390705
  102     FEX102       AA Version Mismatch  N2K-C2248TP-E-1GE  SSI163704AD
  122     FEX122       Online               N2K-C2232PP-10GE   SSI16370195
I'm running version 5.1(3)N1(1) on both of the 5Ks. I have looked through all the configuration and I do not understand why I am getting this error. I have tried to look it up on [URL], but am not having much luck.
I have a dual-homed fabric (Nexus 2248 dual-attached to two Nexus 5020s via vPC). On this Nexus 2248 is a server that has a four-port LACP EtherChannel. The ports do not appear to be load-balancing correctly. The output below shows the four ports in use, and it clearly shows port e138/1/10 getting the most use. When I use the "show port-channel load-balance forwarding-path..." command on either of the vPC switches for various source and destination IPs that use this link, it shows them correctly load-balancing across the four ports. But we do not see this when looking at stats on either the server side or the switch side.
We are planning a Nexus datacenter project with this layout. Our experience with Nexus switches is not extensive so far, and the manuals are very long. Both N5Ks should be connected directly to all 4 N2K switches. I did not find a layout like this in the manuals, only a design where just 2 N2Ks are connected to one N5K, with this FEX config. Now I'm not sure whether it is right to build a config like this with the same slots and FEX numbers, or with different slots and FEX numbers.
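For a dual-homed design like this, each extender would presumably keep the same FEX number on both N5Ks, since both parents manage the same device, while the four FEXes each get a unique number (101-104 and the interfaces below are assumptions):

  ! Identical on both N5Ks; repeat per extender with its own FEX number
  fex 101
    description FEX-rack1
  interface Ethernet1/1-2
    switchport mode fex-fabric
    fex associate 101
    channel-group 101
  interface port-channel101
    switchport mode fex-fabric
    fex associate 101
    vpc 101
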
I have followed every piece of Cisco documentation I could find on this and I still can't get vPC to actually work. The VLANs stay in a suspended state, so no traffic flows across. Below is my configuration:

  vrf context management
    ip route 0.0.0.0/0 10.86.0.1
  vlan 1
  vlan 86
    name I.S_Infrastructure
  vpc domain 1
    role priority 1000
    peer-keepalive destination 10.86.0.4
  interface Vlan1
  interface Vlan86
    no shutdown
    description I.S._Infrastructure
    ip address 10.86.0.1/24
  interface port-channel1
    switchport mode trunk
    vpc peer-link
    spanning-tree port type normal
  interface Ethernet1/1
    switchport mode trunk
    channel-group 1 mode active
  interface Ethernet1/2
    switchport mode trunk
    channel-group 1 mode active
  interface Ethernet1/3
    description Connection to Mgmt0
    switchport access vlan 86
    speed 1000
I have a Nexus 5548UP that would be managed by two organizations. Is it possible to set IP addresses for mgmt0 and an SVI (or an L3 interface) without using the L3 daughter card? I don't want to route between VLANs, just to separate management traffic.
I'm trying to get a node on SVI1 in VRF1 to reach another node on SVI2 in VRF2. After hours of failure, I turned to outside resources. Everything I read on the internet says it's not possible on this platform, and at least one TAC engineer seems to agree.
I just can't believe such a high-end data center switch is not capable of handling such a basic feature.
We currently have an environment with a 4507 as the core switch, connected to four stacks of 3750Es in the wiring closets. A pair of Nexus 5548UPs also hangs off the 4507, but at the moment they are more or less dedicated to a certain purpose. The 5548UPs have the L3 daughter card installed.
My question is: can a pair of Nexus 5548UPs do a C4507's job? Would we be able to decommission the 4507 and replace it with the existing 5548UPs + FEXes?
I am looking to implement a QoS policy on a pair of Nexus 5548UPs. FCoE is a factor here. I have created the following configuration and would like a few pairs of eyes to take a look at it for a quick sanity check.
How can I make sure this config is valid? Also, I realize I'm applying an MTU of 9216 to all classes right now; this will be phased out incrementally.
  class-map type qos match-all class-platinum
    match cos 5
  class-map type qos match-all class-gold
    match cos 4
  class-map type qos class-fcoe
    match cos 3
  [code]....
I have 2 sites located approximately 30 kilometers apart; I will call them site 1 and site 2. The sites are connected by a Layer 2 1Gb fibre connection. I would like to add 2 x Cisco Nexus 5548UP switches at site 1 and connect those two switches via GLBP.
I would then like to add 2 x Cisco Nexus 5548UP switches at site 2 and connect those two switches via GLBP. I would then like to connect the site 1 pair and the site 2 pair via GLBP as well.
I just received a Nexus 5548 to configure as the core of the datacenter LAN. Is it true that VRFs created on it cannot talk to each other? I can't seem to find any documentation on how to do this, and at least one TAC engineer half-heartedly believes it's not possible, either.
Basically, I'm trying to get an SVI in VRF1 to be able to talk to a device on another SVI in VRF2.
I can't believe this high-end switch, that is so capable in every regard, cannot handle this feature.
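Without native route leaking, one workaround sometimes used (a sketch under the assumption that the L3 module is installed; addresses and port numbers are made up) is to join the two VRFs externally: one routed port per VRF, physically cabled together, with static routes pointing across:

  ! One routed port in each VRF, Eth1/31 cabled directly to Eth1/32
  interface Ethernet1/31
    no switchport
    vrf member VRF1
    ip address 192.0.2.1/30
  interface Ethernet1/32
    no switchport
    vrf member VRF2
    ip address 192.0.2.2/30
  vrf context VRF1
    ip route <VRF2-subnet> 192.0.2.2
  vrf context VRF2
    ip route <VRF1-subnet> 192.0.2.1

This burns two ports and a cable, but keeps the VRFs administratively separate while allowing the two SVIs to reach each other.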
I'm seeing several error messages like these in the logs of my Nexus 5548UP switches.
2012 Apr 24 16:39:41.470 SSV_5K_SW2 %LLDP-5-SERVER_ADDED: Server with Chassis ID aaaa.bbbb.cccc Port ID mgmt0 management address X.X.X.X discovered on local port mgmt0 in vlan 0 with enabled capability Bridge
2012 May 2 05:05:00.627 COR_CCO-NX-5548-UP_01 %LLDP-5-SERVER_REMOVED: Server with Chassis ID aaaa.bbbb.cccd Port ID aaaa.bbbb.cccc on local port Eth1/1 has been removed
2012 May 2 05:06:40.328 COR_CCO-NX-5548-UP_01 %LLDP-5-SERVER_ADDED: Server with Chassis ID aaaa.bbbb.cccd Port ID aaaa/bbbb.cccc management address NIL discovered on local port Eth1/1 in vlan 0 with enabled capability None
I will say that these 5548s are serving as the distribution layer for a UCS chassis (2x 6120 FIs), but I didn't know what kind of visibility the Nexus would have into that. The "Chassis ID" keyword is what alludes to this in my mind, and I'm seeing these messages whenever the interfaces that connect downstream to the fabric interconnects are brought up or down.
In the existing network we have a Cisco 2811 router connected to the corporate MPLS cloud. The Cisco 2811 is connected to a Catalyst 6509 switch (set-based IOS with an MSFC card). Along with that we have two Catalyst 5509s. We are upgrading the access layer by replacing the Catalyst switches with Nexus 5548s & 2248s.
For testing purposes I have connected a 5548 & 2248, created a vPC and EtherChannels between the two, and configured SVIs and HSRP on the 5548. I am terminating a 2651 (test router) on 2248 port 101/1/1. On the 5548 I have enabled EIGRP on the VLANs. I am unable to ping the 2651 from the 5548 and vice-versa. I can see both devices via CDP, but I do not see an EIGRP neighborship formed.
What configuration should go on the 2248 and 2651 in order to establish a connection between the two? If the test is successful, I will connect the 2811 to the 2248 during the actual migration. I assume that if it works for the 2651 in testing, it must also work for the 2811 router.
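As a sanity check, a minimal pairing might look like the sketch below (VLAN, AS number, addresses, and interfaces are assumptions); the 2248 port itself only needs to be an access port in the test VLAN, since the FEX is purely Layer 2:

  ! N5548 (requires the L3 module)
  feature eigrp
  router eigrp 10
  vlan 100
  interface Ethernet101/1/1
    switchport access vlan 100
  interface Vlan100
    no shutdown
    ip address 10.1.100.1/24
    ip router eigrp 10

  ! 2651 (IOS)
  interface FastEthernet0/0
    ip address 10.1.100.2 255.255.255.0
    no shutdown
  router eigrp 10
    network 10.1.100.0 0.0.0.255
    no auto-summary
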
I have a pair of Nexus 5548UPs that have some high-priority servers running on them. The servers are ESX hosts running Nexus 1000vs. Each host has multiple connections in a vPC to both 5548s. We have been having intermittent ping loss and slowness of traffic to the VMs on these hosts. I was poking around trying to figure out what the issue could be and found that the peer-keepalive command was not set to send the heartbeat across the mgmt0 interface. I would like to change this to point it across the mgmt0 interface. Any tips or advice on making this change with production servers running on the switches? I do not want to cause any loss to any systems when I make this change. [Code] ..........
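For reference, pointing the keepalive at mgmt0 is normally done under the vPC domain using the management VRF (the addresses below are assumptions; swap source and destination on the peer). Since the keepalive is an out-of-band control-plane heartbeat, changing it should not disturb the data plane as long as the peer-link stays up, though a maintenance window is still prudent:

  ! On each N5548
  vpc domain 1
    peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
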
We have recently upgraded our LAN and are using a couple of Nexus 5548UP switches in the core with 2960 stacks as access switches. Each access switch stack is connected to both core switches, with the links being port-channels and vPCs. All is working fine, but our SolarWinds management platform (NPM) is being flooded with "Physical Address changed" events. Here is an example of the messages:
  NSW_Core_2 - Ethernet1/7 Physical Address changed from 000000003811 to 73616D653811
  NSW_Core_2 - Ethernet1/7 Physical Address changed from 200B82B43811 to 000000003811
For each interface I have messages like these repeating. I am not sure what those messages mean or whether anything is actually wrong. Performance of the network is good, there are no errors on any interface, and I do not see anything related in the switch logs.
I need clarification on vPC with the 5K and 2248 Fabric Extenders. My question is: can each fabric extender uplink to two different 5Ks and, at the same time, have servers connected to both fabric extenders with a vPC? So basically, the server NIC will team to two different fabric extenders, and each fabric extender will connect to two different 5Ks.
We are currently using two Nexus 5548UPs as our datacenter network core. I have a pretty simple objective: I would like to enable jumbo frames on a single VLAN only (VLAN 65). This VLAN is used strictly for backups. I do not want to enable jumbo frames on the other VLANs (VLANs 1-10). I'm not sure what the best way to do this is, or whether it is even possible, but I am hoping to get some configuration examples.
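For context, on the 5500 platform MTU is set through a system-wide network-qos policy per QoS class rather than per interface or per VLAN, so restricting jumbo frames to one VLAN would presumably mean classifying the backup traffic into its own class first. A minimal system-wide sketch (policy name is an assumption) looks like this:

  policy-map type network-qos jumbo
    class type network-qos class-default
      mtu 9216
  system qos
    service-policy type network-qos jumbo
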
I would like to make a design with 4 Nexus 5596UPs: 2 of them equipped with the Layer 3 expansion module so they can serve as the core layer, and the other 2 used as Layer 2 for the server aggregation layer. The 2 Nexus in the core layer will run HSRP and will peer with the ISP via BGP for Internet connectivity. The 2 Nexus in the aggregation layer will be configured as Layer 2 devices and have FEXes and switches connected to them. What I am unsure of is how the vPC and port-channel configuration should look between the 4 Nexus. What I was thinking is to run vPC between the 2 Nexus in the aggregation layer and between the 2 Nexus in the core layer. Then I was thinking of connecting each Nexus in the aggregation layer to both Nexus in the core layer using a port-channel, and vice-versa.
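What you describe sounds like a double-sided vPC: each pair runs its own vPC domain (the domains must use different domain IDs), and a single port-channel spans both members of the opposite pair. A sketch with assumed port-channel and interface numbers:

  ! Core pair (vPC domain 10): one PO toward both aggregation switches
  interface port-channel100
    switchport mode trunk
    vpc 100
  interface Ethernet1/1-2
    switchport mode trunk
    channel-group 100 mode active

  ! Aggregation pair (vPC domain 20): matching PO toward both core switches
  interface port-channel100
    switchport mode trunk
    vpc 100
  interface Ethernet1/1-2
    switchport mode trunk
    channel-group 100 mode active
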
1. We would like to pre-provision a 2248TP FEX on our 5596UP (Nexus 5596 running 5.1(3)N2(1a)). The problem is that I can't choose this FEX model: I have the choice of a 2248T or a 2248TP-E, but no 2248TP. [code]
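For reference, pre-provisioning on this platform is done with the slot/provision commands; the sketch below assumes FEX number 110 and that the N2K-C2248T entry is the one intended to cover the TP variant, which is worth confirming for your release:

  slot 110
    provision model N2K-C2248T
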
2. On a pair of Nexus 5596s running 5.1(3)N2(1a), with a Layer-3 module installed in both: when doing Enhanced vPC (connecting all FEXes dual-homed to both 5596s), how many FEXes can I have in total?
I've got an odd problem with trunking: all VLANs except VLAN 1 trunk perfectly. The link is from a pair of dual-homed FEX 2248TPs to some 3560G switches. Nexus running version 5.1(3)N2(1); 3560s running 12.2(53)SE2 & 15.0(1)SE2.
We have a Nexus 5548UP pair connected to FI 6248s via a vPC. We had to reboot an FI (in order to configure more FC ports); following that reboot, we hit many issues. The first log shows the vPC going down at the FI reboot:

  2013 May 23 12:31:45 sw-n5kup-fr-eqx-01 %ETH_PORT_CHANNEL-5-PORT_INDIVIDUAL_DOWN: individual port Ethernet1/2 is down
We have 2 sites, each with 2 x 4506 switches which will be connected together using an EtherChannel. The switches will provide access ports for client devices and will be configured with HSRP to provide gateway redundancy; SW1 will be HSRP active. 2 metro Ethernet links will be installed in each site, connecting back to our HQ sites. OSPF will be used over the backbone to provide resiliency, allow shortest-path routing to each HQ, and prevent traffic over the HQ-to-HQ link.
The 4506s will be trunked together with an SVI for OSPF adjacency. For the traffic flow from SW2 to HQ2, traffic will hit SW1 and then route back to SW2 and then to HQ2. Is this the best way to do this? Should a second link be connected between the switches just for routing, or should something like GLBP be used?
I have a typical LAN environment that spans a large warehouse. I have done a lot of redesigning of the environment to satisfy the need for a disaster recovery plan. I now have a LAN with multiple VLANs and must also connect all the access layer switches back to the core switch where the servers are.
I was thinking of something simple such as a 2 Gbps port channel across the backbone and simple floating static routes. I have since moved my WAN access link to a 3750 and implemented routing and CEF at each of the 3 core switches (blue). My question is more one of design.
We have a remote office where we have a 2921 router with 6 Layer 2 switches. We have a few servers which need to be in specific VLANs.
The 2921 router does not have a switching engine; we are using it to support VoIP.
So on the 2921 router I created 6 subinterfaces, one per VLAN, and assigned each to its specific VLAN. Then I have a trunk connection to switch 1, and switch 1 connects to all the other switches in the network. Per our company design, all Layer 2 switches should be in transparent mode. I tested them, and I can ping from one switch to all the other switches.
The router's VTP mode is set to transparent, and from all switches I can ping the router subinterfaces.
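For reference, each subinterface in a setup like this follows the usual router-on-a-stick pattern; the VLAN IDs and addressing below are assumptions, not the office's actual plan:

  ! 2921: one dot1q subinterface per VLAN on the trunk toward switch 1
  interface GigabitEthernet0/1
    no shutdown
  interface GigabitEthernet0/1.10
    encapsulation dot1Q 10
    ip address 192.168.10.1 255.255.255.0
  interface GigabitEthernet0/1.20
    encapsulation dot1Q 20
    ip address 192.168.20.1 255.255.255.0
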
I have been tasked with replacing the existing Cat 6500 and 3750 switches with Nexus 7000s and Nexus 2000s. I was told my boss initially plans to get 2 x Nexus 7000 and then eventually grow to 4 x Nexus 7000s. For Nexus, is there a list of tasks / points that I need to consider for building the initial design?
Can I just link the Nexus 7000s like the following?
  N7k-A ========= N7k-B
    |               |
  lots of N2ks    lots of N2ks