I am looking to see if the Nexus 5596UP and Nexus 2248TP GE are compatible with the SFP-10G-SR. The reason is that a consultant was hired to "design" the network layout and decided to purchase Cisco SFP+ copper Twinax cables, which have a 10 m limit. A small handful of the data center racks are 10-15 m away... just out of reach of the Twinax, and I would prefer NOT to move the LAN row so that it is more centered in the room. Can I use the SFP-10G-SR to connect the two switches (5596 & 2248) together? This SFP has a 26 m reach on standard 10-Gig fiber, and the small cost increase per connection is of no concern.
I have a pair of 5596s running in a vPC, with a Nexus 2248 connected to each N5596. When I run the command "show fex", I get the following output on the second 5596:
Number  Description  State                Model              Serial
------------------------------------------------------------------------
101     FEX101       AA Version Mismatch  N2K-C2248TP-E-1GE  SSI16390705
102     FEX102       AA Version Mismatch  N2K-C2248TP-E-1GE  SSI163704AD
122     FEX122       Online               N2K-C2232PP-10GE   SSI16370195
I'm running version 5.1(3)N1(1) on both of the 5Ks. I have looked through all of the configuration and I don't understand why I am getting this error. I have tried to look it up on [URL], but I'm not having a ton of luck.
At this past Networkers I was at the Cisco booth discussing how the 2248 can connect to the 5548 and provide server connectivity. I was told that, as of a fairly recent NX-OS release, you can now have the 2248 dual-homed to both 5548s via vPC and then have a server connected to both 2248s and be in active-active mode. Is this correct?
When we first deployed our 5548s and 2248s, we had to put each 2248 in a straight-through design, where it only had connections to one 5548, and then the server would dual-connect to the 2248s and be in active-active mode. I was told that this changed with an NX-OS release; however, the documentation still seems fragmented on what exactly the case is.
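For reference, this dual-homed topology (Enhanced vPC) is generally described as supported on the 5500 platform from NX-OS 5.1(3)N1(1) onward. A minimal sketch, assuming FEX 101 is dual-homed via Ethernet1/1 on each 5548 and using hypothetical port-channel and VLAN numbers:

```
! On BOTH 5548s: dual-home FEX 101 over a fabric port-channel
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101

! On BOTH 5548s: server dual-attached to FEX host ports, LACP active-active
interface Ethernet101/1/1
  switchport access vlan 10
  channel-group 20 mode active
interface port-channel20
  switchport access vlan 10
```

Note that in this dual-homed-FEX design the host port-channel does not take an explicit `vpc` keyword; verify the exact behavior against the configuration guide for your release.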
I have a pair of 2248 FEXes where I'm currently terminating several vPCs to servers, each with one port per FEX. Is it possible to run a vPC to a server with 4 ports, i.e. 2 ports per FEX? I saw some discussions indicating it wasn't possible on a 2148 FEX but would be on the 2248. The ports will just be dot1q trunks.
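The 2148 was limited to one link per host port-channel, which is why those discussions ruled it out; the 2248 supports multi-port host port-channels. A hedged sketch, assuming straight-through FEXes (FEX 101 behind switch A, FEX 102 behind switch B) and hypothetical port and vPC numbers:

```
! Switch A: two ports on FEX 101 into the server port-channel
interface Ethernet101/1/1-2
  switchport mode trunk
  channel-group 30 mode active
interface port-channel30
  switchport mode trunk
  vpc 30

! Switch B: mirror config on FEX 102, same vpc number
interface Ethernet102/1/1-2
  switchport mode trunk
  channel-group 30 mode active
interface port-channel30
  switchport mode trunk
  vpc 30
```

Check the verified scalability limits for your NX-OS release for the maximum links per host port-channel on the 2248.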
I have a dual-homed fabric (a Nexus 2248 dual-attached to two Nexus 5020s via vPC). On this Nexus 2248 is a server with a four-port LACP EtherChannel. The ports do not appear to be load balancing correctly: the output below shows the four ports in use, and it clearly shows port e138/1/10 getting the most use. When I use the "show port-channel load-balance forwarding-path..." command on either of the vPC switches for various source and destination IPs that use this link, it shows them correctly load balancing across the four ports. But we do not see this when looking at stats on either the server side or the switch side.
Config info below. This is a vPC pair and the port configs are identical on both switches, so I'm only showing the configs for one switch to keep it simple.
dc5020-3g# sh port-channel load-balance
Port Channel Load-Balancing Configuration:
System: source-dest-ip
Port Channel Load-Balancing Addresses Used Per-Protocol:
Non-IP: source-dest-mac
IP: source-dest-ip source-dest-mac
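One thing worth checking: with only source/destination IP (and MAC) in the hash, a small number of heavy flows between few address pairs will always land on the same member link, which can look exactly like this. Adding Layer 4 ports to the hash often spreads such traffic better. A sketch of the change (the hostname is from the output above; verify the available algorithms on your release with `port-channel load-balance ethernet ?`):

```
dc5020-3g(config)# port-channel load-balance ethernet source-dest-port
dc5020-3g(config)# exit
dc5020-3g# show port-channel load-balance
```

Apply the same setting on both vPC peers so the pair hashes consistently.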
I have 2 Nexus 5596UPs with Layer 3 cards that are exhibiting some very peculiar behavior. The systems are running 5.1(3)N1(1). I have configured 2 VRF contexts, each running its own OSPF process. There is a static gateway of last resort configured in each VRF, pointing to an upstream pair of 5585Xs in Active/Active. Each OSPF process has the "default-information originate always" command configured; however, backbone neighbors are not receiving a gateway of last resort from the 5596UPs. The applicable configurations are shown below. All other routing information is passing correctly between devices in the network. This network is not production; it is a proof of concept for a larger implementation.
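For comparison, a minimal per-VRF sketch of what the relevant pieces usually look like on NX-OS (VRF name, process tag, and next hop are hypothetical):

```
feature ospf

vrf context RED
  ip route 0.0.0.0/0 10.1.1.1

router ospf 1
  vrf RED
    default-information originate always
```

With "always", the default should be originated even if the static route is missing, so it may also be worth confirming with `show ip ospf vrf RED` that the process is actually running under the VRF and that the neighbors in question are in that same VRF.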
We have set up a pair of Nexus 5596 L3 switches with a 2 x 10 Gbps LACP link between them to act as the vPC peer link. We also have another 2 x 10 Gbps LACP link between the 5596 switches to carry non-vPC VLANs; this is required to provide EIGRP routing between the switches and an upstream router. I have read that it is possible to set up the vPC keepalive link over an SVI instead of the management interfaces. Is it OK to run the keepalive SVI over the second (non-vPC VLAN) LACP trunk, or is it recommended to keep this separate?
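The main rule is that the keepalive must never ride the peer link itself; running it over the separate non-vPC trunk satisfies that, though a dedicated link and VRF is usually recommended so the keepalive survives independently of the trunk. A hedged sketch with hypothetical VLAN, VRF, and addressing (make sure the keepalive VLAN is allowed on the non-vPC trunk and excluded from the peer link):

```
feature interface-vlan

vrf context vpc-keepalive
vlan 99

interface Vlan99
  vrf member vpc-keepalive
  ip address 10.99.99.1/30
  no shutdown

vpc domain 10
  peer-keepalive destination 10.99.99.2 source 10.99.99.1 vrf vpc-keepalive
```

The second switch mirrors this with .2 as the local address.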
I have:
- two Nexus 5596 connected to each other; mgmt0 is NOT in use
- an SVI for keepalives with an IP address and a /30 netmask; vPC keepalives run over fiber on e1/1, and this works well
- uplinks to the data center distribution switch (Cat 6500 VSS) over fiber on port-channel 1 (e1/2 and e1/10), also carrying the management VLAN (VLAN 14)
- an SVI with an IP address for management purposes
I can't get this to work. I can ping my whole network from the Nexus, but not the Nexus from my network. Also, pinging inside the mgmt VLAN is not possible.
We have configured vPC between two Nexus 5596s. For the vPC keepalive link we configured an L3 interface with a 1G transceiver (GLC-T); it shows the status message "L3 not ready" below, and the interface LED glows yellow. Is this a physical-layer problem?
Ethernet  VLAN  Type  Mode    Status  Reason            Speed    Po Ch
Eth1/17   --    eth   routed  down    L3 not ready      1000(D)  --
Eth1/18   1     eth   access  down    SFP not inserted  10G(D)   --
Eth1/19   1     eth   access  down    SFP not inserted  10G(D)   --
Eth1/20   1     eth   access  down    SFP not inserted  10G(D)   --
Eth1/21   1     eth   access  down    SFP not inserted  10G(D)   --
We have HSRP between NexusA and NexusB, with access-layer switches connecting to the core using vPC. We are trying to set up a VAM voice-recording server for Siemens phones. We need to SPAN all voice VLANs and point them at the VAM server. The VAM server currently connects to a 3750 stack; considering the amount of traffic multiple SPAN sessions can generate, I plan to move the server to the Nexus directly and run a local SPAN session.
1. Since we have two Nexus switches running HSRP and the VAM server connects physically only to NexusA (where I can run a local SPAN), NexusB is not directly connected to the VAM server, so I plan to run ERSPAN there. Is this the best design? And which path will the SPAN traffic take from NexusB to NexusA: will it go through the access-layer switches, depending on the VLANs allowed on the uplinks, or through the 20 Gig link between the two Nexus switches that allows all VLANs (the vPC peer link)? We have approximately 10 voice VLANs. Is there an example config for an ERSPAN session where the sources are VLANs (I am more familiar with RSPAN)?
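A hedged sketch of an ERSPAN source session on NexusB, assuming your NX-OS release supports VLAN sources for ERSPAN; the origin/destination IPs, session and ERSPAN IDs, and VLAN range are all hypothetical:

```
! NexusB: encapsulate mirrored traffic toward an IP reachable on NexusA
monitor erspan origin ip-address 10.0.0.2 global

monitor session 10 type erspan-source
  erspan-id 100
  vrf default
  destination ip 10.0.0.1
  source vlan 100-109 rx
  no shut
```

Because ERSPAN is IP-encapsulated, the mirrored traffic simply follows the routed/forwarding path toward the destination IP rather than depending on which VLANs are trunked. On NexusA a separate local SPAN session would then deliver traffic to the VAM server port.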
It looks like the deny statement is not working, as I can still see all the routes I am redistributing. I even added a deny for a specific route and still see it in the routing table on another router in the autonomous system. The same config below works fine on the IOS platform. [code]
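One common cause of exactly this symptom on NX-OS: in a route-map applied to redistribution, `match ip address <ACL>` does not filter routes the way it does on IOS; route filtering generally requires `match ip address prefix-list`. A hedged sketch with hypothetical names, prefixes, and protocols:

```
ip prefix-list DENY-NETS seq 5 permit 192.168.50.0/24

route-map REDIST deny 10
  match ip address prefix-list DENY-NETS
route-map REDIST permit 20

router ospf 1
  redistribute eigrp 1 route-map REDIST
```

If your config matches an access-list instead of a prefix-list, that alone could explain why the deny appears to be ignored.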
I have 2 data centers running the same equipment (two Nexus 5596s with FEXes). I just took a look at the logs to see if everything is OK, and I saw the same error message (many times) at both locations:
%SYSMGR-FEX100-5-HEARTBEAT_LOSS: Service "satctrl" heartbeat loss 2, max 7. I thought it was a problem with my peer-keepalive connection, but I see the word FEX... so I'm not sure.
Note that at both locations my Nexus switches are connected back to back through the management port using transceivers: a copper cable from the first Nexus goes into a transceiver, runs over fiber to another transceiver, and then back to copper to the other Nexus.
I am deploying a pair of Nexus 5596s with 3750 PoE switches in the closets. I'm looking for a best practice on how to configure the Nexus 5596 to support proper QoS for EF at the core.
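As a starting point, QoS on the 5500 is done in three stages: classify into a qos-group, instantiate that group system-wide with a network-qos policy, and then give it priority treatment in a queuing policy. A rough sketch only, with hypothetical class/policy names and a hypothetical bandwidth split; the exact class requirements and limits vary by release, so verify against the QoS configuration guide for your NX-OS version:

```
! Classify EF (DSCP 46) into qos-group 5
class-map type qos match-all VOICE
  match dscp 46
policy-map type qos CLASSIFY-IN
  class VOICE
    set qos-group 5

! Instantiate qos-group 5 as a system class
class-map type network-qos VOICE-NQ
  match qos-group 5
policy-map type network-qos NQ-POL
  class type network-qos VOICE-NQ
  class type network-qos class-default

! Give the voice class strict priority on egress
class-map type queuing VOICE-Q
  match qos-group 5
policy-map type queuing OUT-Q
  class type queuing VOICE-Q
    priority
  class type queuing class-default
    bandwidth percent 70

system qos
  service-policy type qos input CLASSIFY-IN
  service-policy type network-qos NQ-POL
  service-policy type queuing output OUT-Q
```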
I currently have a Nexus 5596 pair with vPC peer link Po1 between them. My goal is to connect our new Nexus 7Ks to the 5Ks using FabricPath. My question is about the initial setup with the 7Ks: can I use the same port-channel number on the 7Ks as I did on the 5Ks? Is the port-channel number locally significant?
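Port-channel numbers are locally significant: the two ends of a link do not need to match, and each switch only needs a number that is free locally. A sketch with hypothetical numbers:

```
! N7K side of the FabricPath link - Po1 is free here, so it can be reused
interface port-channel1
  switchport mode fabricpath

! N5K side of the same link - Po1 is already the vPC peer link locally,
! so pick any unused number; it does not need to match the 7K side
interface port-channel100
  switchport mode fabricpath
```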
I want to remove the sync-profile on each of two synchronized Nexus 5596UPs without losing the config stored in the switch profile - that is, without connectivity interruption and without re-configuring the interfaces in "conf t" mode, for example. Since NX-OS Release 5.2(1)N1(1) there is a new command:
switch(config-sync)# no switch-profile abc profile-only

profile-only: deletes the switch profile without deleting the local configuration.
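Assuming that is the command you mean, the sequence would look like this on each peer (run independently on both switches; the profile name "abc" is from the example above):

```
switch# config sync
switch(config-sync)# no switch-profile abc profile-only
switch(config-sync)# exit

! Verify the profile is gone but the interface config survived
switch# show switch-profile
switch# show running-config interface
```

It would be prudent to back up the running configuration first and test on a lab pair, since behavior around profile deletion has varied between releases.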
The phone connects to the 3750-A access-layer switch (VTP mode client), which connects to the Nexus 5596 (the Nexus is the Layer 3 device, set to VTP server mode), and finally we have a voice recorder that connects to another access-layer switch, 3750-B (VTP mode client).
For voice recording I need to set up RSPAN, but the Nexus 5596 does not support RSPAN. Will the following have any impact on the Nexus?
If I move the 3750-B to VTP server mode and run the remote-span command on the VLAN I need to RSPAN, it is going to update the VTP database; in short, it will update the vlan.dat file for every switch in that VTP domain.
As the VTP update reaches the Nexus saying there is a change (keeping in mind the Nexus does not support RSPAN), I'm not sure how it's going to handle that request and how it's going to update its vlan.dat file.
Is it going to increment the VTP revision number? Can it corrupt the vlan.dat file on the Nexus? Or will it do nothing, ignore the update, and stop the update from propagating to the 3750-A switch?
Are the Nexus 5020 NX-OS version below and the type/revision of my GLC-T compatible with each other? I also noticed "Transceiver calibration is invalid" when I do "show int e1/8 transceiver details"; what does that mean?
I wanted to know whether, in a Nexus 7009, I can use a mix of F2/M1/M2 series line cards. Will they work with each other? Let's say I have an F2 line card and an M2 line card: will servers attached to them be able to communicate with each other?
1. We would like to pre-provision a 2248TP FEX on my 5596UP (Nexus 5596 running 5.1(3)N2(1a)). The problem is that I can't choose this FEX model: I have the choice of a 2248T or a 2248TP-E, but no 2248TP. [code]
2. On a pair of Nexus 5596s running 5.1(3)N2(1a), with a Layer 3 module installed in both: when doing Enhanced vPC - connecting all FEXes dual-homed to both 5596s - how many FEXes can I have in total?
Does ACS v4.2 support the addition of Nexus switches? We have a few new Nexus devices that have been added to ACS but cannot be accessed successfully; a message about role-based authentication is received. Do I have to do something special in ACS to support this?
I am running LMS 3.2 and cannot see the Nexus 5596 / ME-3600X-24FS-M switches in CiscoWorks LMS 3.2. Where I need them most is DFM, where the devices come up as unknown. In the example below, 10.125.202.1 is the Nexus 5596 and the rest are ME3600s.
I was trying to set up a Nexus (5596 running NX-OS 5.1(3)N2(1)) to use the "ip ospf name-lookup" command that I use on IOS-based routers. Unfortunately, this command does not appear to be supported on NX-OS, and I cannot find a replacement. Is this another feature that's been left out of NX-OS?
I need to implement LACP for HP servers, mostly DL380 G7s with Intel-based dual-port NICs, against two types of Cisco equipment. In the first scenario the server is connected to a 3750-X stack of four switches; in the second scenario the same server type is connected to two Cisco Nexus 5596s. My questions regarding the two types of connection: Is it possible to do active/active? Would it give fault tolerance? With the HP LACP implementation, are there known issues, or should I expect latency with such a configuration? What is the maximum LAG/channel-group size possible per type?
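Both platforms can terminate an active/active LACP bond from the server with link-level fault tolerance: the 3750-X via cross-stack EtherChannel, and the 5596 pair via vPC. A sketch with hypothetical interface, channel-group, and VLAN numbers:

```
! 3750-X stack: members on different stack units for redundancy
interface range GigabitEthernet1/0/10, GigabitEthernet2/0/10
 channel-group 5 mode active

! Nexus 5596 pair: same config on each switch, tied together by vpc
interface Ethernet1/10
  channel-group 5 mode active
interface port-channel5
  switchport access vlan 10
  vpc 5
```

With a dual-port NIC you would have one member per stack unit (or per Nexus), so the bond survives the loss of either switch.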
For a simple 4-site design for backup consolidation (3 sites connected to one central site), does the design in the attached JPEG make sense, and is the configuration achievable using the specified parts?
Brief explanation:
- Each site is to have servers with 2x 10 Gb/s Ethernet (both teamed/bonded/etc.) and one NIC on each of the 2x switches in the site (if possible). Each site will also have 2x Cisco Nexus 5596UP switches (though the P version may be used instead of the UP).
- Two of the sites are within 10 km of the central site; the last site is about 35 km from the central site.
- A minimum bandwidth of 10 Gb/s is required between any site and the central site.
- I have specified 10 Gb/s LW SFP+ optics (for the 2x 10 km and the 1x 35 km sites) - are there any special licenses needed?
- As asked before, can such a "simple" design work as-is?
- What sort of single-mode fiber would fit the bill (especially for the 40 km link) - 9 µm?
I have a pair of 5596 switches in vPC. One host, say "HOST A", is connected to the primary vPC peer, and the other, "HOST B", to the secondary vPC peer. Both are in the same VLAN 10. Both hosts are on vPC orphan ports, as their NICs are configured in active/standby mode. I have configured a SPAN session on both vPC peers with the SPAN source as VLAN 10 in rx mode. The SPAN destination is connected to the secondary vPC peer. The issue is that I am not able to capture traffic originating from HOST A destined to HOST B, which traverses the vPC peer link. The same issue occurs for traffic in the reverse direction with the SPAN destination on the primary vPC peer. In a nutshell, any traffic that crosses the vPC peer link is not getting captured.
What could be the issue, and is there any solution for it? The SPAN config and relevant interfaces are shown below. [code]
We are facing an issue of continuous packet discards on the Nexus 4001L link (int po2) to a Nexus 5020 switch. The Nexus 4001L is installed in an IBM BladeCenter chassis, and we have FCoE enabled in this setup. [code]
I have been tasked with replacing the existing Cat 6500 and 3750 switches with Nexus 7000s and Nexus 2000s. I was told my boss initially plans to get 2 x Nexus 7000 and then eventually grow to 4 x Nexus 7000s. For Nexus, is there a list of tasks/points that I need to consider for building the initial design?
Can I just link the Nexus 7000s like the following?
N7k-A ========= N7k-B
  |               |
lots of N2ks   lots of N2ks