Cisco Switching/Routing :: Connecting Nexus 7k VPC And Orphan Ports?
Mar 20, 2013
We have two Nexus 7Ks connected via a vPC peer link, with an edge switch connected to the core via vPC and HSRP running on the core. We also have one orphan port on each Nexus, each connected to a WLC. The problem is that I can't connect to or ping one of the WLCs (only one of them) on an orphan port, and I think it is probably because the packet arrives at the HSRP standby, traverses the peer link, and gets dropped.
Now, what is the best practice for HSRP with vPC when there are orphan ports? From a given machine I can only ping one WLC. Doing a traceroute, I find the packets reach N7k1 and the WLC connected to its own orphan port, but not the WLC connected to N7k2, seemingly because the traffic crossing the peer link is dropped. What is the best practice to sort this out and reach both WLCs at the same time? Do I move WLC 2 to N7k1?
Anyone know if "vpc orphan-port suspend" works if I put it on an N2K interface rather than on the FEX fabric link? For example, I have FEX 101 and I put it on Eth101/1/10: will it suspend the port on the N2K connected to the secondary N5K when the peer link is down?
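For reference, the command is normally entered under the host interface itself; a minimal sketch using the FEX and port numbers from the question (whether it behaves identically on FEX host ports in every release is exactly what is being asked, so treat this as syntax only):

```
interface Ethernet101/1/10
  vpc orphan-port suspend    ! shuts this orphan port when the vPC peer link fails
```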
The Nexus 7K or 5K can be divided into four virtual devices (using VDCs), and making eight uplink ports on the 2K would allow us to use the extender for all four VDCs, with two uplinks (for redundancy) from each VDC.
We have several uplink ports on a variety of Cisco switches connecting to the Nexus 7000 that are recording CRC errors. Most are trunked ports with the following configuration. [code]
I have two 5548s as the core. Eight FEXs are multihomed (advanced vPC topology?) to both cores. Suppose I have to configure a bunch of ports on the FEXs, say Eth101/1/10 - 20. I would log in to the first core and apply the configs.
My question is: do I have to do the same on the second core as well, or does the first core replicate the config to the second? I know about port-profiles/CFS and such. But without that, would it automatically sync to the second core?
For testing purposes, I went to Core 1, put the description "TEST" on Eth101/1/10, and wrote the config. After 5 minutes I logged into the second core and did a show run on Eth101/1/10, but the description "TEST" didn't show up there.
Also, doing sh run on any FEX port is fast on one of the cores and very slow on the second. All the FEXs have 20 Gb of uplink to each of core 1 and 2 (so 40 Gb total in the vPC, pinning max-links 1).
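For what it's worth, on the Nexus 5500 family the dual-homed FEX port config is not replicated automatically; the config-sync feature with a switch-profile is the usual way to keep both cores in step. A minimal sketch (the profile name, peer address, and port settings below are made-up examples, not anything from the question):

```
! run on both 5548s; assumes the config-sync feature is available on this release
feature config-sync

switch-profile FEX-PORTS
  sync-peers destination 192.168.1.2   ! mgmt0 address of the peer (example)
  interface Ethernet101/1/10-20
    description TEST
    switchport access vlan 100         ! example VLAN
  commit                               ! verifies and applies on both peers
```

Without a switch-profile (or port-profiles), each 5548 keeps its own copy of the FEX port config, which matches the behaviour observed in the test above.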
I would like to monitor traffic between multiple source ports and multiple destination ports on a Nexus 7K. I know that when you set up a monitor session it is between a source and a destination (laptop or traffic analyser), but is there a way I can set it up with multiple source ports and multiple destination ports and capture that traffic?
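A SPAN session on NX-OS can take several sources (port ranges or VLANs) and more than one destination port; a sketch with example interface numbers (not taken from the question):

```
monitor session 1
  source interface ethernet 1/1-4 both    ! multiple source ports, tx and rx
  source vlan 10 rx                       ! a VLAN source is also allowed
  destination interface ethernet 1/47     ! analyser #1
  destination interface ethernet 1/48     ! analyser #2
  no shut
```

On the 7K the destination ports also need `switchport monitor` under their interface config before the session will come up.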
In our LAN network design, we have two Nexus 7010 switches at the core connected via vPC. The LAN access switches are directly connected to the core Nexus switches via regular port channels on the 3750s and vPC on the Nexus. The core Nexus switches will be linked to an existing LAN network, and the applications will be progressively migrated from the old network to the new one. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
We recently had two Nexus 5Ks installed in our data centre, and we are connecting three new servers to them. Each server has two 10 Gb ports: one port of server A is connected to 5K1 and the other port to 5K2 (the other two servers are connected to 5K1 and 5K2 the same way). So do we need to create a separate vPC with a port channel (like vPC 1, 2 and 3) for each server connection?
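Assuming the server NICs form an LACP bond, the usual pattern is indeed one port channel plus one vPC per server, with matching vPC numbers on both 5Ks. A sketch for one server (the interface and vPC numbers are examples only):

```
! configured the same on 5K1 and 5K2 (only the member interface may differ)
interface port-channel11
  switchport mode trunk
  vpc 11                          ! same vPC number on both switches

interface Ethernet1/11            ! the port facing server A on this 5K
  channel-group 11 mode active    ! LACP; use "mode on" for a static bond
```

Each additional server gets its own port-channel/vPC pair (e.g. 12 and 13), since a vPC can only bundle the links belonging to one downstream device.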
Does the Nexus 7K support multiple VDCs sharing ports on a single line card? One of our Cisco partner engineers stated that Cisco doesn't recommend using the same line card for multiple VDCs. The second (non-default) VDC will be used for our outside and DMZ segments, and to physically segregate our firewall from our internal/inside core switch without using a physical DMZ switch. I know Cisco used the Nexus this way in their PCI DSS 2.0 compliance document. The module is an N7K-M148GT-11L:
Mod  Ports  Module-Type                       Model            Status
---  -----  --------------------------------  ---------------  ------
1    48     10/100/1000 Mbps Ethernet XL Mod  N7K-M148GT-11L
I'm trying to get the VFC up on a Dell B22 FEX blade that is connecting to a Nexus 5596UP.
The message I get is:
# sh int vfc1033
vfc1033 is down (Error Disabled - VLAN L2 down on Eth interface)
  Bound interface is port-channel3
  Hardware is Ethernet
  Port WWN is 24:08:00:2a:6a:0d:db:3f
  Admin port mode is F, trunk mode is on
I have a Cisco Catalyst 2960 series switch. When I connect another switch or a VMware ESX server to one of the ports, the port LED blinks for a few seconds and then goes off. When I connect a desktop, laptop or Windows server, the port stays lit and I can access resources on the network. However, when I connect another switch or a VMware ESX server, the port is disabled.
What config on the switch can I check to make sure the ports stay up regardless of whether I connect a desktop/laptop, a switch or a VMware server?
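This pattern (ports going err-disabled only when the attached device sends BPDUs, as switches and ESX vSwitches can) usually points at PortFast with BPDU guard. A sketch of IOS commands to check and, where appropriate, relax it; the interface number is an example:

```
! see whether and why ports were err-disabled
show interfaces status err-disabled
show spanning-tree summary              ! look for "Portfast BPDU Guard" enabled

! on a port that should accept another switch or an ESX host
interface GigabitEthernet0/10
  no spanning-tree bpduguard enable
  no spanning-tree portfast             ! let the port run normal STP
```

Disabling BPDU guard is best done per-port rather than globally, so access ports for PCs keep their protection.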
I am in the process of migrating our existing server farm subnets to our new Nexus server farm and I discovered something I wasn't expecting. My intention is to migrate our existing legacy server farm, which is comprised of four paired 3750 switches, off of our core 6509s and onto the Nexus, connecting them to the 2232s via multi-gig port-channel connections, two port channels per switch stack.
NOTE: this is expected to be a temporary move, as next year we intend to install additional N2Ks and move the servers over to those directly. But to minimize outage/downtime it will be better to move the subnets and switches all at once.
These connections would be 1 Gb links grouped as port channels, one from each switch into one of the two 2232s.
The problem I discovered is that Cisco does not intend for switches to be connected behind the Nexus 2232s: the ports are immediately err-disabled when they see BPDUs.
I found a config that does work, and it does fail over from one port-channel connection to the other, with the limitation that when the original port channel comes back online traffic does not fail back to it, which is acceptable for us. But I am wondering whether Cisco would support this design if we did experience issues down the road.
The only issue I really see is that, to get it to work, the config is different on the two N5Ks; see the pertinent config below for the connections. Both are running the same OS.
augs1-ba-ar17# sh ver
Cisco Nexus Operating System (NX-OS) Software
TAC support: [URL]
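For context, FEX host ports behave as spanning-tree edge ports with BPDU guard, which is why the downstream 3750s get err-disabled. A commonly used (though generally unsupported) workaround is to keep BPDUs off those links with BPDU filtering; a sketch with example interface numbers:

```
! on the 3750 uplink toward the 2232 (stops BPDUs reaching the FEX port)
interface Port-channel10
  spanning-tree bpdufilter enable

! or on the N5K, under the FEX host interface
interface Ethernet101/1/1
  spanning-tree bpdufilter enable
```

The trade-off is real: with BPDUs filtered, spanning tree can no longer detect a loop through those links, so the physical topology has to be loop-free by design.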
We are facing an issue of continuous packet discards on the Nexus 4001L link (int po2) to a Nexus 5020 switch. The Nexus 4001L is installed in an IBM BladeCenter and we have FCoE enabled in this setup. [code]
I have been tasked to replace the existing Cat 6500 and 3750 switches with Nexus 7000s and Nexus 2000s. I was told my boss initially plans to get 2 x Nexus 7000 and then eventually grow to 4 x Nexus 7000s. For the Nexus, is there a list of tasks/points that I need to consider for building the initial design?
Can I just link the Nexus 7000s like the following?
N7k-A ========= N7k-B
  |               |
lots of N2ks   lots of N2ks
We are planning a Nexus datacenter project with this layout. Our experience with Nexus switches is not that large so far, and the manuals are very extensive. Both N5Ks should be connected directly to all 4 N2K switches. I did not find a layout like this in the manuals, only a design where just 2 N2Ks are connected to one N5K, with this FEX config. Now I'm not sure whether it is right to make a config like this with the same slots and FEX numbers, or with different slots and FEX numbers.
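For a FEX dual-homed to both N5Ks, the usual convention is to use the same FEX number and fabric port-channel number on both switches and tie the fabric port channel into the vPC. A sketch for one N2K (all numbers are examples):

```
! configured identically on both N5Ks for the same physical FEX
feature fex

interface port-channel101
  switchport mode fex-fabric
  fex associate 101            ! same FEX number on both peers
  vpc 101

interface Ethernet1/1          ! this 5K's fabric uplink(s) to the FEX
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
```

Each additional N2K would repeat the pattern with its own FEX number (102, 103, ...), kept consistent across both N5Ks.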
Does the Nexus 7010 support virtual switching yet? All the posts I have found from about a year ago say it was going to be supported, but no dates were listed. I heard the same thing from Cisco a while back but haven't followed up on it. If it is finally supported, are there any configuration guides available for it?
Lucien is a customer support engineer at the Cisco Technical Assistance Center. He currently works in the data center switching team supporting customers on the Cisco Nexus 5000 and 2000. He was previously a technical leader within the network management team. Lucien holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales. He also holds the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183
We have our Nexus as the default gateway (101.1), and the default VLAN 1 is set up with two subnets, 101.x and 102.x. The DHCP server uses a superscope to accommodate the overflow of devices requesting IPs on 101, so when 101 is exhausted, devices can obtain a 102.x address. The superscope setup is basic. The issue is that routing to the firewall from a 102.x address is not always reliable. Some days all goes well, the 102 subnet is routed out to the firewall, and it's a good day; but then, such as today, a 102.x address is not routing as it did 24 hours ago. I am perplexed as to why this behaves unpredictably. Here is the running config for VLAN 1 showing 102 as the secondary address on VLAN 1.
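On NX-OS, the two subnets on one SVI would normally look like the sketch below (the addresses are placeholders, not the poster's real config):

```
interface Vlan1
  ip address 10.10.101.1/24
  ip address 10.10.102.1/24 secondary   ! second subnet on the same SVI
  no shutdown
```

With a setup like this, intermittent reachability for the secondary subnet is often a return-path problem: it's worth confirming the firewall has a route for the 102.x subnet pointing back at the Nexus, rather than only knowing about 101.x.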
You will find attached my network architecture, with 2 Nexus 7010s at the core layer and 2 Nexus 5020s at the distribution layer, each with one N2148T fabric extender. PC-A1 and PC-A2 are connected to one N2148T; PC-B1 is connected to the other N2148T. Nexus-7000-1 is HSRP active for all VLANs, Nexus-7000-2 is HSRP standby. PC-A1 and PC-A2 are in VLAN A, PC-B1 is in VLAN B, and PC-A1 and PC-A2 have the same default gateway, the HSRP IP on VLAN A. It happens that PC-A1 is able to ping PC-B1 while PC-A2 is not. If I issue a traceroute from PC-A2, I see Nexus-7000-2's physical IP address as the first hop even though Nexus-7000-2 is HSRP standby; after the first hop the traceroute is lost. If I shut down port-channel 20 on Nexus-5000-2, PC-A2 starts to ping PC-B1. I can't understand what's wrong in this architecture.
I have an environment with two Nexus 7010 switches, along with two Nexus 5510s. I need to run OSPF as the layer 3 routing protocol across the vPC peers. I have 1 link used as the keepalive link, and 3 other links used as the vPC peer link.
1) Is it best to configure a separate vPC VLAN, e.g. 1010?
2) Is it best to configure a VRF context for the keepalive?
3) Or just use the management addresses as the peer IPs?
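A common pattern for option 2 is a dedicated VRF for the peer keepalive, so it never shares a path with data traffic or the peer link. A sketch with example interface and addressing (the .1/.2 pair is hypothetical):

```
vrf context vpc-keepalive        ! isolates the keepalive from the default VRF

interface Ethernet1/48           ! the dedicated keepalive link
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30     ! use .2/30 on the peer switch

vpc domain 1
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive
```

Using the mgmt0 addresses (option 3) also works and is common, since mgmt0 already sits in its own management VRF; the dedicated-VRF approach just avoids depending on the out-of-band management network.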
I have a pair of 5548P switches with the L3 daughter cards installed, using the base license as I just need RIP routing. I have the two switches set up and the RIP routing feature enabled. When I "turn on" routing using RIP, I do not get any routes from my existing L3 switch (a 3750). I probably don't have routing set up correctly. On the 3750's IOS, I just turned on RIP with the router rip command and added a couple of network statements. On the Nexus I have run router rip {instance} and left it at that, and I am not getting any routes from my 3750. The 5548s are using the management ports and are connected to my existing network with L2 trunks. Does anyone know of a setup guide for RIP? I have used the Nexus 7000 RIP guide but still can't get it to work.
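One difference worth noting: unlike IOS, NX-OS RIP has no network statements; the instance has to be enabled under each L3 interface that should run RIP. A sketch (the instance tag and interface are examples):

```
feature rip

router rip Enterprise            ! the instance tag is an arbitrary string

interface Vlan100                ! repeat on every L3 interface facing the 3750
  ip router rip Enterprise
```

Also, mgmt0 lives in the separate management VRF, so a RIP adjacency is not going to form over the management ports into the default routing table; the RIP-enabled interface needs to be an SVI or routed port on the data plane.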
This is regarding the Nexus 7010 core switch. We are already running two Nexus 7Ks with ten Nexus 5Ks, and we are now going to add two new Nexus 5Ks in our DC. On the 7Ks we are already running two VDCs.
Fans 1 & 2 in Module 1 on the Nexus 5K are still experiencing the very high RPM and speed issue. I have swapped in a fan from another operational Nexus 5K, and that fan runs fine in the other Nexus, while the replacement fans show the same issue here, so it is not a fan hardware problem.
There are no threshold alarms. The only related log entry is the following:
%NOHMS-2-NOHMS_ENV_ERR_FAN_SPEED: System minor alarm in fan tray 1: fan speed is out of range on fan 1. 7950 to 12500 rpm expected.
I have provided the output for both the fan detail and the temperature.
N5K-01# sh environment fan detail
Fan:
---------------------------------------------------
Module  Fan  Airflow  Speed(%)  Speed(RPM)  Direction
---------------------------------------------------
1       1
I was reading a QoS walkthrough earlier to try to solve my problem, and I noticed that in IOS you can specify "match vlan" in a class map. This is not available in NX-OS. I'm not doing any routing on the 5K, so I cannot match on an ACL, and the port where the traffic arrives is a trunk shared with other types of traffic that I'd like to classify differently.
Just upgraded a Nexus 7K from 5.2.1 to 5.2.7 (system and kickstart images only, NOT the EPLD image), but after the upgrade one of the FEXs (N2K) doesn't seem to come online (this 7K has two N2Ks, and the other one came online and is working fine).
I have a couple of Nexus 5Ks that I want to put QoS on for the servers running behind them, but they also carry voice. Voice doesn't play well with jumbo frames, so I'd like to apply the QoS treatment to everything except the voice VLAN.
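Since qos class-maps on the 5K cannot match on VLAN, a common workaround is to classify voice by its markings instead (this assumes the phones or an upstream switch set CoS/DSCP; names and numbers below are examples):

```
class-map type qos match-any VOICE
  match dscp 46                  ! EF-marked voice traffic
  match cos 5

policy-map type qos PM-CLASSIFY
  class VOICE
    set qos-group 2              ! steer voice into its own system class

interface Ethernet1/1            ! example trunk carrying the voice VLAN
  service-policy type qos input PM-CLASSIFY
```

The voice system class can then be left at a standard 1500-byte MTU while the server classes get jumbo frames in the network-qos policy.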
I am working in my lab and was adding a new L2/L3 VLAN:
vlan 555
  name test

interface vlan 555
  ip address 1.1.1.1/24
  no shut
I have also ensured that this VLAN is added to the port channel going to my Nexus 5Ks. I added the VLAN on the 5Ks and made sure VLAN 555 traverses the peer link; all is good there. I have also placed a device on an interface on the 2K as an access switchport in VLAN 555.
Here is my problem: the L3 interface will NOT come up on the 7K.
LAB-DSW01# sh ip int brie
IP Interface Status for VRF "default"(1)
Interface   IP Address   Interface Status
Vlan555     1.1.1.1      protocol-down/link-down/admin-up
I have gone through just about everything I can think of and am still unable to get this L3 interface to come up. I have other L3 interfaces on this device configured exactly the same way without any issue at all. All the existing interfaces work properly; it's just this new interface I am trying to add.
I am running version 6.0.1 on the 7K and 5.2.1.N1.4 on the 5K.
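As a general rule an SVI only comes up once its VLAN exists, is in the active state, and has at least one forwarding L2 port on that switch, so a few checks worth running on the 7K (VLAN number from the question):

```
show vlan id 555                     ! VLAN must exist and show "active", not suspended
show spanning-tree vlan 555          ! at least one port should be forwarding
show interface trunk                 ! confirm 555 is allowed (and not pruned) on the trunks
show interface vlan 555              ! any "VLAN is down" reason shown here
```

A VLAN created but left in the suspended state, or allowed on the peer link but missing from the trunk's allowed list toward the 5Ks, would produce exactly the protocol-down/link-down state shown above.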