Cisco Switching/Routing :: Dual Nexus 5596UP To 6509-VSS
Jun 10, 2013
I have two Nexus 5596UPs that will be connected together via a vPC peer-link. From there I want to connect both 5596UPs to a 6509-VSS via vPC. The Nexus 5596UPs will be essentially layer 2 switches; all routing will be done on the 6509-VSS.
I would like to make a design with 4 Nexus 5596UPs: 2 of them equipped with the Layer 3 Expansion Module so they can serve as the core layer, and the other 2 used as the layer 2 server aggregation layer. The 2 Nexus in the core layer will run HSRP and will peer with the ISP via BGP for Internet connectivity. The 2 Nexus in the aggregation layer will be configured as layer 2 devices and will have FEXes and switches connected to them. What I am unsure of is how the vPC and port-channel configuration should look between the 4 Nexus. What I was thinking is to run vPC between the 2 Nexus in the aggregation layer and between the 2 Nexus in the core layer. Then I was thinking of connecting each Nexus in the aggregation layer to both Nexus in the core layer using a port-channel, and vice versa.
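For what it's worth, a minimal sketch of the per-pair vPC building blocks (domain ID, port-channel numbers, and keepalive addressing are placeholders, not taken from this post); each pair would get its own vPC domain with a distinct domain ID:

```
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.55.55.1 source 10.55.55.2 vrf VPC_keepalive

interface port-channel1
  switchport mode trunk
  vpc peer-link

! uplink port-channel toward the other layer, with members to both peers
interface port-channel20
  switchport mode trunk
  vpc 20
```

With a vPC from each aggregation switch landing on both core switches (and vice versa), spanning tree sees a single logical uplink, so no links are blocked.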
I am trying to work up a config based on equipment that was ordered before I joined my current employer.
I will be deploying N2Ks at the top of each rack. Each N2K will be dual-homed to two different N5Ks. Being new to the Nexus, I understand that the N2Ks have no brains and are dependent on the N5Ks they are connected to. I wasn't sure how to tell each 5K that the dual-connected 2K needed to be able to move between N5Ks based on failure/availability. I haven't been able to find a sample config of what this will look like anywhere in the Nexus section of the Cisco site.
The next step after this will be to connect N5K_1 to a blade on a 6509 and N5K_2 to a different blade on the same 6509. I will be installing two of the 10-Gig blades in the 6509. I haven't been able to find any sample configs on what this would look like either. We are upgrading the sup engines in the 6509 to the new 2T version.
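For reference, the usual pattern for a dual-homed (active/active) FEX is to define the same FEX ID on both N5Ks and put a vPC on the fex-fabric port-channel; a sketch under assumed numbering (FEX 101, Po101, and Eth1/1 are placeholders):

```
feature fex

fex 101
  pinning max-links 1
  description Rack1-TOR

interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 101
  channel-group 101

interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101
```

Configured identically on both N5Ks; failover between N5Ks is then handled by vPC, so there is nothing extra to tell each 5K about moving the 2K.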
When I read the Nexus 5K install guide, I found the following. The Cisco Nexus 5596UP switch has these features: "48 fixed 1- and 10-Gigabit Ethernet server connection ports on the back of the switch", and also "The 48 fixed ports support 8-, 4-, 2-, or 1-Gbps Fibre Channel transceivers and 1- or 10-Gigabit Ethernet transceivers." Is this a conflict? Do the 48 fixed ports on this switch support only 1- and 10-Gigabit Ethernet, or both 8-/4-/2-/1-Gbps Fibre Channel and 1-/10-Gigabit Ethernet simultaneously?
We have two Nexus switches in our network: one is a Nexus 5020, the other a Nexus 5596UP. The system image is identical on both switches, 5.2(1)N1(4). When we try to set up vPC between these switches, we see that all configured VLANs on the vPC peer-link are blocked by spanning tree with the message "Bridge Assurance Inconsistent, VPC Peer-link Inconsistent". We still can't solve this problem.
Topology:
NEXUS_5020 --- Peer_link (Po2) --- NEXUS_5596UP
      \                                /
  Member_link (Po100)       Member_link (Po100)
        \                            /
                   SERVER
Configuration:
NEXUS_5020:

speed 1000

interface Vlan2000
  description VPC_keepalive_link
  vrf member VPC_kepalive
  ip address 10.55.55.2/30
  no shutdown
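A common cause of the "Bridge Assurance Inconsistent" block is a `spanning-tree port type network` mismatch between the two ends of the peer-link. As a sketch, both switches would carry matching settings under the peer-link port-channel (Po2 here, per the topology above):

```
interface port-channel2
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
```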
I'm trying to get a VFC up on a B22 FEX blade in a Dell chassis, which connects to a Nexus 5596UP.
The message I get is
# sh int vfc1033
vfc1033 is down (Error Disabled - VLAN L2 down on Eth interface)
    Bound interface is port-channel3
    Hardware is Ethernet
    Port WWN is 24:08:00:2a:6a:0d:db:3f
    Admin port mode is F, trunk mode is on
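That error state usually means the FCoE VLAN is not up or not allowed on the bound Ethernet interface. A hedged sketch of the pieces involved, assuming VSAN/VLAN 1033 maps to this VFC (the numbers are placeholders):

```
feature fcoe

vlan 1033
  fcoe vsan 1033

interface port-channel3
  switchport mode trunk
  switchport trunk allowed vlan add 1033

interface vfc1033
  bind interface port-channel3
  no shutdown
```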
One of the two supervisors in a 6509-E running IOS did not come back up after a power outage. The failed supervisor in slot 5 was replaced and the replacement booted successfully. However, the supervisor in slot 5 only came up in a "Cold" redundancy state. I did notice the Hw version of the replacement module in slot 5 is 4.9, while the Hw version of the supervisor in slot 6 is 4.8. What command do I need to issue to bring the supervisor module in slot 5 from "Cold" to "Hot"?
I have the above setup and I need to set up multicast between the two servers. The Nexus 7K is a layer 2 switch and the 6500 is layer 3. Both servers would be sending/receiving traffic.
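Since the 6500 is the only layer 3 device here, multicast routing and the IGMP querier function would typically live there, while the Nexus 7K just does IGMP snooping (enabled by default). A sketch on the 6500 side, with the VLAN number and RP address as assumptions:

```
ip multicast-routing

interface Vlan100
 ip pim sparse-mode

ip pim rp-address 10.0.0.1
```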
I have to upgrade a Nexus 7010 with dual sup engines from 4.2(4) to 5.2, and am hoping it could be an ISSU. We are fine with an outage window. To get from 4.2(4) to 5.2(5) I'll have to do a multi-hop upgrade, 4.2(4) - 4.2(6) - 5.2(5), and each hop would take 40-60 minutes. Do I spend 40-60 minutes per hop, or just do a disruptive upgrade straight from 4.2(4) to 5.2(5)? Like I said, we are fine with an outage window.
We have a new IBM BladeCenter chassis with two Cisco Nexus 4001i switches. I have configured one external port on each Nexus and connected them to a Cisco 6509 with 1G Cisco SFP modules and MM fibre. Both the Nexus and 6509 ports are configured as trunk ports, with speed set to 1000. I see light in the SFP modules on both devices and through the fibre. When I connect the devices, the link doesn't come up: no light on the ports, the Nexus says "link not connected", and the 6509 says "notconnect". I have tried reconfiguring the ports in many ways, even as access ports; nothing seems to work. If I move the SFP and fibre from the 6509 over to a trunk port on a Cisco C2960-24TC-L, the link comes up and everything works fine. Why does this work on a 2960 and not on my 6509 core switch? One of the configs I've tried on the 6509:

interface GigabitEthernet2/20
 description *IBM Bladechassie 2 NW1*
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 34
 switchport trunk allowed vlan 34
 switchport mode trunk
end
We have two 5548 switches connected to a pair of 6509 running in VSS mode. I am trying to understand the benefit of having bridge assurance on the uplink ports.
If we have the command spanning-tree port type network enabled, we cannot do a non-disruptive upgrade; with bridge assurance on the uplink, the upgrade check warns you of this. Yet if I do not run bridge assurance on the uplinks, I can upgrade without any disruption.
Why on earth would I enable bridge assurance on this vPC link if I then cannot do a non-disruptive upgrade?
I am in the early planning stages for a 6509 to Nexus 7K migration. Based on my experience with the 7K's at a previous company where we ran into a lot of issues, I am trying to be very careful.
I am more at home with the 6500 chassis and know what I can do with them. I remember running into a limitation on the Nexus where it did not support SPAN sessions the way the 6500s do. Is that still the case?
If that isn't an option in the short term, I will need to look at a substantial investment in Ethernet taps to replace the lost SPAN functionality, because of the security group's heavy use of SPAN sessions.
I have a Cisco VSS consisting of two 6509 chassis, with dual-active detection via Enhanced PAgP through one neighbor, a standalone Cisco 3750. All works well. I want to add one more neighbor, a Cisco 3750-X stack with 3 members. Will this scheme work? And what is the danger if the stack splits into two parts?
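For comparison, enhanced-PAgP dual-active detection on the VSS is enabled per trusted MEC, something like the following (the domain and channel-group numbers are placeholders):

```
switch virtual domain 100
 dual-active detection pagp
 dual-active detection pagp trust channel-group 10
```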
I am trying to interconnect a pair of Nexus 5548s at adjacent sites using 2x 2960-S switches at each site, the reason being that the multimode fiber between the sites will only support 100 Mb, and I need this link up while I finish having SMF laid.
I have attached a diagram; I'm just debating whether to use EtherChannel or vPC, and would like to hear some opinions...
Assume the interconnect between the 5548s needs to be an 802.1Q trunk.
Currently the Nexus 2000 and Nexus 7000 do not support dual-homed connections: you cannot connect one Nexus 2000 to two Nexus 7000 chassis. But with the Nexus 5000, you can. What is the problem with supporting this feature on the Nexus 7000s? The 5000s and 7000s run the same software.
We are planning a server-attach topology with Nexus 5548s using vPC. Let me know if this is possible. I want to configure a dual-NIC Linux server using LACP active mode to connect to two 5548s in a vPC, for redundancy as well as use of the full access-layer bandwidth. On the Nexus side this will be an access port in a single port-channel in a single vPC link.
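A minimal sketch of that server-facing vPC, applied identically on both 5548s (the port, channel, and VLAN numbers are assumptions):

```
interface Ethernet1/10
  description Linux-server-NIC
  channel-group 50 mode active

interface port-channel50
  switchport mode access
  switchport access vlan 100
  vpc 50
```

On the Linux side this pairs with an 802.3ad (LACP) bond across the two NICs.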
I have the following connectivity: Nexus 7004 (M1 8-port card with X2-10GB-SR) <----------> 6509 (6704 card with XENPAK-10GB-SR). The fiber link is not coming up.
I am trying to understand what load balancing method is used on a port-channel on a Nexus switch. I have a server connected by a vPC to two Nexus switches. The Nexus switches are only acting as layer 2 switches; a 6509 connected via an upstream link does all of the routing for my VLANs. If a server connected to the Nexus switches talks to a server behind my 6509, what load balancing happens on the Nexus across vPC 27, which is a layer 2 trunk going up to my 6509? Is it done on layer 2 or layer 3 flows?
My Nexus shows the default load balancing configuration:

Port Channel Load-Balancing Configuration:
System: source-dest-ip

Port Channel Load-Balancing Addresses Used Per-Protocol:
Non-IP: source-dest-mac
IP: source-dest-ip source-dest-mac
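The hash can be checked and changed with the following (the second line simply re-applies the default):

```
show port-channel load-balance
port-channel load-balance ethernet source-dest-ip
```

With source-dest-ip, IP traffic is hashed per flow on the layer 3 addresses even though the switch is only bridging; non-IP frames fall back to the MAC-based hash.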
I wanted to know if anyone has a Nexus 7009 chassis installed in a 600 mm-wide rack with the sides fitted, and whether they are experiencing heat issues.
My client will be replacing their aging 6509 chassis with 7009 devices, but the physical dimensions don't tally with the install guidelines for the 7009 series chassis. The current install of the 6509s does not tally with the recommended install guidelines for those either, but they have not experienced any heat issues...
The 7009 will be fitted with 2x Sup2E, 3x 48-port SFP F2E cards, and 2x 10G SFP M2 cards, with 2x 6 kW PSUs. I am genuinely concerned they may cook these devices, but space restrictions look like vetoing the upgrade to 800 mm-wide racks. Likewise, moving to the 7010 chassis may prove tricky, since other existing installs within the racks limit vertical space.
I have a two-fiber connection from our central office (6513) to a remote office (6509). The requirement is that if one of the fibers at the remote office goes down, the second fiber should take over. I am planning to use Sup720-3B SFP ports to connect to the CO.
Can I connect one fiber to Sup720-3B G5/1 and the other to G5/2? Or can I connect one fiber to G5/1 and the other to G6/2? I am running EIGRP between sites. Any sample config?
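As a sketch (the addressing, AS number, and delay tweak are assumptions, not from the original post): two routed links on the supervisor ports, with EIGRP preferring one and failing over to the other:

```
interface GigabitEthernet5/1
 description Primary fiber to CO
 ip address 10.0.0.1 255.255.255.252

interface GigabitEthernet5/2
 description Backup fiber to CO
 ip address 10.0.0.5 255.255.255.252
 delay 2000

router eigrp 100
 network 10.0.0.0 0.0.0.255
```

Without the delay change, EIGRP would simply load-share across both equal-cost links, which may also be acceptable.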
Our customer wants a Cisco Nexus 5020 to provide server connectivity, and this Nexus would be connected to their 6509 core switch. They are concerned about spanning-tree compatibility between the Nexus and the 6509. Are they fully compatible for spanning tree?
1. We would like to pre-provision a 2248TP FEX on my 5596UP (Nexus 5596 running 5.1(3)N2(1a)). The problem is that I can't choose this FEX model: I have the choice of a 2248T or a 2248TP-E, but no 2248TP.
2. On a pair of Nexus 5596 running 5.1(3)N2(1a) with a Layer 3 module installed in both: when doing Enhanced vPC, connecting all FEXs dual-homed to both 5596s, how many FEXs can I have in total?
I currently have a couple of 6509 chassis (router/switches) with the following hardware blades:
3x 48-port, 1x NAM, 2x Sup720, running 12.2(18)SXF3
I am keeping the four Sup720 modules and have purchased new versions of the other blades, including two new 6509-E chassis. Can I take my standby Sup720 out of the production machine and insert it into the new chassis?
We are facing an issue of continuous packet discards on the Nexus 4001L link (int Po2) to a Nexus 5020 switch. The Nexus 4001L is installed in an IBM BladeCenter, and we have FCoE enabled in this setup.
I have been tasked with replacing the existing Cat 6500 and 3750 switches with Nexus 7000 and Nexus 2000. I was told my boss initially plans to get 2x Nexus 7000 and eventually grow to 4x Nexus 7000s. For Nexus, is there a list of tasks/points I need to consider for the initial design?
Can I just link the Nexus 7000s like the following?

N7k-A ========= N7k-B
  |               |
lots of N2ks   lots of N2ks
We are planning a Nexus datacenter project with this layout. Our experience with Nexus switches is not very broad so far, and the manuals are very extensive. Both N5Ks should be connected directly to all 4 N2K switches. I did not find a layout like this in the manuals, only a design where just 2 N2Ks are connected to one N5K, with this FEX config. Now I'm not sure whether a config like this should use the same slots and FEX IDs on both switches, or different ones.
We've had an issue with our network. We have two 6509s connected with redundancy, which connect to 2x 4900 switches, which in turn connect to an ESX chassis for virtualization. The ESX stopped working, and the 4900 switches and the main core were suffering from overload; they held up well, but to stop the overload one of the links from the ESX chassis was disconnected from one of the 4900 switches. CPU usage on the 4900s and the core (6509) went down below 40%, and the virtual servers were then migrated from that chassis to another two chassis that were added right after. Everything was working well, but then the 6509 suddenly failed over to the other supervisor after everything was OK.

We were wondering what could have caused this: maybe the virtual server migrations, maybe the overload from the ESX? We also had a few questions. Is there any need to reload the cores every few months as a planned task? The cores have been up for more than a year. And is there any kind of tool to monitor CPU status, or the overall status of the cores and switches?
They have around 80 staff, and I think the current infrastructure is overkill for the size of the company. The current kit is old and has no GB Ethernet ports. They currently have:
Core switch: 1x Cisco C6509 with a 48-port Fast Ethernet module (WS-X6248-RJ-45) and an 8-port fibre module (WS-X6408A-GBIC).
I'm looking to replace this with something with 72 Ethernet ports and 8 fibre ports.
Access switches: 2x 3500; replacement needs at least 48 ports and 2 fibre modules each,
and 2x 5500; replacement needs at least 72 ports and 2 fibre modules each.
I have two ISPs, each on its own subnet connected to the 6509 MSFC/switch. FW1 is on the 100.1.100.0/30 subnet and FW2 is on 200.1.200.0/30. My goal is to route all Internet-bound traffic from subnet 10.133.3.0/24 to FW1, and traffic from all other subnets across the organization to FW2. I am not sure if I need an ACL/static-route combo, just static routes, or ACLs.
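One common approach, sketched with assumed next-hop addresses and SVI numbering: a default route toward FW2, plus policy-based routing that steers 10.133.3.0/24 toward FW1 on its ingress interface:

```
ip access-list extended TO-FW1
 permit ip 10.133.3.0 0.0.0.255 any

route-map INTERNET-PBR permit 10
 match ip address TO-FW1
 set ip next-hop 100.1.100.2

interface Vlan133
 description Gateway SVI for 10.133.3.0/24
 ip policy route-map INTERNET-PBR

ip route 0.0.0.0 0.0.0.0 200.1.200.2
```

Plain static routes alone cannot split traffic by source subnet, which is why PBR (or an equivalent) enters the picture here.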