Cisco Switching/Routing :: Nexus 7010 OSPF Passive-interface Default Not Showing
Nov 6, 2011
I'm currently working on a plan to migrate our 6500s over to our new 7010s. As part of the migration I want to tighten up our OSPF design: configure OSPF with "passive-interface default", then allow only those interfaces that should have OSPF neighbors to send hellos. The issue is that the command is not showing up under the OSPF process. What's even more interesting is that the Nexus 5.x Unicast Routing Configuration Guide shows "passive-interface default" as a valid option to enter.
I'm currently running version 5.1(4) (though looking to upgrade to 5.2 during my migration testing). I would rather configure the passive-interface via the routing process versus having to enter it on every interface.
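For reference, here is the router-process form I'm after next to the per-interface form that does exist. This is only a sketch: the process tag is illustrative, and the first command is what the 5.x guide documents rather than what my release accepts.
[CODE]
router ospf 1
  passive-interface default        ! documented in the 5.x guide, not accepted on 5.1(4)

interface Ethernet1/1
  ip ospf passive-interface        ! per-interface form that is available
[/CODE]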
OK, I didn't set up the OSPF on my 7010. Today I found out that any static route I put into my 7010 gets sent into my MPLS network. On my 6509s you have to tag the static route for this to happen. I was under the impression the same was necessary for the 7010, or at least that the route had to match an access list. How can I fix the below so that by default no static routes are redistributed into OSPF? [CODE]...
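Here is a hedged sketch of the direction I'm thinking. NX-OS requires a route-map on the redistribute statement, so one that only permits tagged statics should keep everything else out; the route-map name, tag value, and addresses below are illustrative.
[CODE]
route-map STATIC-TO-OSPF permit 10
  match tag 100                            ! only statics tagged 100 get in
router ospf 1
  redistribute static route-map STATIC-TO-OSPF
!
ip route 10.50.0.0/16 10.1.1.2 tag 100     ! a tagged static that would be redistributed
[/CODE]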
I have an environment with two Nexus 7010 switches, along with two Nexus 5510s. I need to run OSPF as a Layer 3 routing protocol across the vPC peer links. I have one link being used as a keepalive link and three other links being used as the vPC peer link.
1) Is it best to configure a separate vPC VLAN, e.g. 1010?
2) Is it best to configure a VRF context for the keepalive (see the sketch after this list)?
3) Or just use the management addresses as the peer IPs?
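On option 2, this is roughly what a dedicated keepalive VRF looks like; the interface, addresses, and domain ID are illustrative.
[CODE]
vrf context vpc-keepalive
interface Ethernet1/48
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive
[/CODE]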
We have a couple of Nexus 7010s split into Core and Distribution VDCs. The mgmt0 interfaces on each of the Nexus VDCs (including the Admin VDC) are configured with different IP addresses on the same subnet, e.g. 10.10.10.1/24 for Admin, 10.10.10.2/24 for Core, and 10.10.10.3/24 for Distribution. The mgmt0 physical port on each Nexus is connected to a physical gig port on a 3750-X switch, and the 3750-X has uplinks back to the Nexus configured for vPC.
When I SSH to the VDC mgmt0 IPs from the 3750-X, I can access each of these VDCs without any problems. But if I enable routing (OSPF) on each of these links and advertise it to the WAN, I cannot see these routes advertised, and I also cannot see any of them in the local routing table. Just wondering if I have to put these links on a VLAN and then advertise that to the WAN. But if that's the case, VLANs cannot be created on the Admin (default) VDC.
I have configured the "ip telnet source-interface loopback 0" command on a Nexus 7010, but when I telnet to another device and do a "show users" there, the IP address shown is that of the interface closest to the device I telnet to, not the IP address of the loopback. All interfaces are in VRF default. I am running NX-OS 5.1(6).
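For clarity, this is the setup as described; the loopback address is illustrative.
[CODE]
interface loopback0
  ip address 192.0.2.1/32
ip telnet source-interface loopback 0
! Expected in "show users" on the far device: 192.0.2.1
! Observed instead: the address of the egress interface
[/CODE]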
I understand VLANs on the Catalyst side of the house, on 2900 through 6500 Catalyst switches.
This 7010 running NX-OS 5.1(3) I did not set up, but I have to manage it. That hasn't really been a problem till now.
My Nexus 7010 has a Layer 2-only VLAN 11. The VLAN is "active" but its interface is "shutdown". Yet it is passing traffic across the directly connected ports on the Nexus 7010 and to other switches in my network. VLAN 11 is being sent out via VTP to all my switches and things are running fine.
I need to create another L2-only VLAN. I can't find any docs indicating that a Layer 2 VLAN's interface on NX-OS should be in "shutdown" mode as part of the setup. I do see in the docs that the VLAN has to be set "active" as part of the process.
Is this the correct way to set up an L2-only VLAN on NX-OS: leave the interface in "shutdown" but make the VLAN "active"?
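For what it's worth, a Layer 2-only VLAN doesn't need an SVI at all; the SVI only matters for Layer 3. A minimal sketch, with VLAN ID and name illustrative:
[CODE]
vlan 12
  name L2-ONLY-TEST
  state active
! No "interface Vlan12" is needed for pure Layer 2 forwarding; an SVI that
! exists but is shut down does not stop L2 traffic on the VLAN.
[/CODE]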
Mystery VLANs 4 and 6: the mystery deepens. I have other L2 VLANs, VLAN 4 and VLAN 6, that are NOT defined as "interface Vlan4" in the Nexus config, yet they are applied to GigE ports on the Nexus, and these VLANs are also being sent out via VTP to all switches. Even weirder, these VLANs have names associated with the numbers. They are valid VLANs that were configured on the old 6509 before the Nexus was installed.
I have checked all switches; NONE are running in VTP server mode, all are in client mode. The Nexus 7010 is the only device running as a VTP server.
I've been having a problem with my Cisco routers (7600s) where sub-interfaces we create for LDP tunnels are automatically added to the main OSPF process as non-passive when created. Here is how to reproduce the issue, in order:
- Configure the OSPF process with "passive-interface default".
- Configure the interfaces that have to be active with "no passive-interface blah".
- OSPF works as expected.
- Create a new sub-interface somewhere with encapsulation on a certain VLAN for xconnect.
- The new sub-interface gets added as "no passive-interface" in the main OSPF process.
- When adding a new port-channel interface, the behavior is the same.
Is that normal for Cisco? Should I continue manually removing sub-interfaces from the OSPF process every time? (A sketch of the repro is below.)
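Here is a hedged IOS sketch of those steps; the interface numbers, VLAN, and xconnect peer/VC ID are illustrative.
[CODE]
router ospf 1
 passive-interface default
 no passive-interface GigabitEthernet1/1      ! intentionally active uplink
!
interface GigabitEthernet1/2.100              ! new sub-interface for the xconnect
 encapsulation dot1Q 100
 xconnect 10.0.0.2 100 encapsulation mpls
!
! Observed afterwards under "router ospf 1":
!  no passive-interface GigabitEthernet1/2.100
[/CODE]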
When will the "default interface x/x" command be available on the Nexus 5000 platform? Even with the latest software version (5.1(3)N2(1a)) it is not possible. On the Nexus 7000 it works fine with the 5.2 train. Is there a feature request for it? If not, here it is!! It's horrible to deconfigure many interfaces, especially in N5K environments with many FEXes.
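For anyone unfamiliar, this is what the command does where it is supported (N7K on the 5.2 train); the range form is from memory, so treat it as an assumption.
[CODE]
default interface Ethernet1/10        ! wipes the port config back to defaults
default interface Ethernet1/10-20     ! assumed range form
[/CODE]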
I'd like to use IP SLAs and object tracking to define static routes for specific source/destination traffic across some WAN links we have. I've done this in IOS and it's worked fantastically, but I've not found where/how to do this on the Nexus 7010 platform (or any Nexus platform) as of yet. I could have sworn that this was going to be introduced in the 6.x code? Below is an example of how we do this in the IOS world:
[CODE]
track 11 ip sla 1 reachability
 delay down 15 up 15
ip sla 1
[/CODE]
Essentially this gives us the option of using a "failover" default route. I've attached a basic diagram to explain what we are trying to do with IP SLAs and object tracking. The tracking should be configured against an SLA that uses ICMP, and the static routes should be configured against the tracking.
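To make the IOS pattern concrete, here is a hedged sketch with illustrative addresses; the SLA definition in my snippet above is truncated, so the probe details here are assumptions.
[CODE]
ip sla 1
 icmp-echo 203.0.113.1 source-interface GigabitEthernet0/0
 frequency 10
ip sla schedule 1 life forever start-time now
!
track 11 ip sla 1 reachability
 delay down 15 up 15
!
ip route 0.0.0.0 0.0.0.0 203.0.113.1 track 11    ! withdrawn if the probe fails
ip route 0.0.0.0 0.0.0.0 198.51.100.1 250        ! floating fallback route
[/CODE]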
I have a customer who is having some issues with 5 m passive HP twinax cables (537965-001) and a Chelsio 10G NIC. NIC driver issues aside, if NX-OS recognizes this SFP+, should it be expected to work in a 5020 running 4.2(1)N1(1)? Has Cisco certified passive HP twinax cables? I have included "transceiver details" output for a Cisco twinax and the HP (W.L. Gore) cables.
[CODE]
Nexus5020# show interface eth 1/7 transceiver details
Ethernet1/7
    sfp is present
    name is CISCO-MOLEX
    type is SFP-H10GB-CU3M
[/CODE]
In our LAN network design, we have two Nexus 7010 switches at the core connected via vPC. LAN access switches are directly connected to the core Nexus switches, via regular port channels on the 3750s and vPC on the Nexus. The core Nexus switches will be linked to an existing LAN network, and the applications will be progressively migrated from the old network to the new one. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
We recently replaced our core switch from a non-Cisco vendor with a Nexus 7010. With our old core switch, I had the ability to log changes to the ARP table, so if there was a DHCP conflict or a vMotion event it would show up in the "show log" output. I've not found a way to do that with the Nexus switch, or at least no way to view the log. I have the command: logging level arp 6
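For comparison, this is how I'd check whether that severity change produces anything in the on-box log; the filter string is a guess.
[CODE]
logging level arp 6
show logging logfile | include ARP
[/CODE]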
I've got two Nexus 7010s running HSRP northbound to a pair of ASAs, and BGP southbound to four 6509s. Is it possible to advertise a default route to a BGP neighbor (or prefer it via MED) only if the node is HSRP active?
Essentially the goal is to create symmetry for inbound/outbound traffic. The only way I can think of so far is an EEM script, so that when it sees HSRP go active via syslog it kicks off an action to remove the ASN prepend or reduce MED, and the opposite if HSRP goes standby.
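To illustrate the EEM idea in IOS-style syntax (NX-OS EEM syntax and its syslog-event support vary by release, and the ASN, neighbor, and route-map names are all made up):
[CODE]
event manager applet HSRP-GOES-ACTIVE
 event syslog pattern "HSRP-5-STATECHANGE.*-> Active"
 action 1.0 cli command "enable"
 action 2.0 cli command "conf t"
 action 3.0 cli command "router bgp 65000"
 action 4.0 cli command "neighbor 10.0.0.2 route-map PREFER-ME out"
 action 5.0 cli command "end"
! A matching applet keyed on "-> Standby" would reverse the change.
[/CODE]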
I have to upgrade a Nexus 7010 with dual Sup engines from 4.2(4) to 5.2, and I'm hoping it can be an ISSU. We are fine with an outage window. To go from 4.2(4) to 5.2(5) I'll have to do a multi-hop upgrade, 4.2(4) to 4.2(6) to 5.2(5), and each hop would take 40-60 minutes. Do I spend 40-60 minutes on each hop, or just do a disruptive upgrade straight from 4.2(4) to 5.2(5)? Like I said, we are fine with an outage window.
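The disruptive single-step option would be an "install all" straight to the target images; the filenames here are assumptions based on the usual naming convention.
[CODE]
install all kickstart bootflash:n7000-s1-kickstart.5.2.5.bin system bootflash:n7000-s1-dk9.5.2.5.bin
[/CODE]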
I just deployed a Nexus 7010 switch at a server farm. After deployment, it was noticed that there are surges in latency across the network. The default gateway was then moved to the Nexus; with this, pinging from a host on the same subnet shows intermittent bursts of latency.
Nexus-to-server pings run about 80 ms and sometimes even time out. To me, this is strange. The NX-OS version is 5.0(2a).
I'm looking to see if it is possible to run a vPC between two VDCs on a single 7010. We have a production setup that runs dual 7010s with vPCs between the chassis, but in our lab we only have a single 7010 with a 32-port 10 Gig module. I was thinking that maybe we could create four VDCs on the 7010 and run a vPC between the VDCs.
We will install a new Supervisor Engine in our Nexus 7010. One Supervisor Engine is already installed and a year old, so the problem is that the two Supervisor Engines may have different NX-OS versions. Could this lead to a problem? Does the installed Supervisor Engine "update" the newer Supervisor Engine?
We have, for nearly 4 years, used EIGRP on our 6513 to make use of two unequal links to our branch offices. This worked because we could use the variance command to make EIGRP insert two routes into the table, one from each carrier. Thus we could balance the load across the links in a ratio similar to the ratio of Link A's bandwidth to Link B's.
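For those unfamiliar, the IOS side was essentially this (AS number illustrative): with variance 2, any feasible successor whose metric is within twice the best metric is also installed, and traffic is shared roughly in inverse proportion to the metrics.
[CODE]
router eigrp 100
 variance 2
[/CODE]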
We just purchased two Nexus 7010s to replace our single 6513 core. After much consternation, we have learned from our Cisco SE that the Nexus 6.0.2 rendition of EIGRP does not support variance.
Why would Cisco take their own proprietary protocol and then gut it by removing features? I'm quite ready to send these Nexus boxes back in favor of a newer 6500 series. MEC doesn't work like it is supposed to, and the show tech runs for over 24 hours without ever finishing (and this we can repeat on both boxes, multiple times).
We've opened a TAC case, but I just wondered if there is any workaround for the variance command?
I am facing an issue with Nexus 7010 login authentication via a RADIUS server. I have two Nexus 7010s; one of them works perfectly, the other takes a long time to authenticate. If I use the local database to log in, it works perfectly. It also works fine if I log in from the console using RADIUS authentication.
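Two checks worth running from the slow box; the username, password, and server address are placeholders.
[CODE]
test aaa group radius testuser testpassword      ! exercise RADIUS auth directly
show radius-server statistics 10.1.1.1           ! look for timeouts/retries
[/CODE]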
Does the Nexus 7010 support virtual switching yet? All of the posts I have found, from about a year ago, say it is going to be supported, but no dates were listed. I heard the same thing from Cisco a while back but haven't followed up on it. If it is finally supported, are there any configuration guides available for it?
Attached you'll find my network architecture, with two Nexus 7010s at the core layer and two Nexus 5020s at the distribution layer, each with one N2148T fabric extender. PC-A1 and PC-A2 are connected to one N2148T; PC-B1 is connected to the other. Nexus-7000-1 is HSRP active for all VLANs; Nexus-7000-2 is HSRP standby. PC-A1 and PC-A2 are in VLAN A, PC-B1 in VLAN B. PC-A1 and PC-A2 have the same default gateway, corresponding to the HSRP IP on VLAN A. It happens that PC-A1 is able to ping PC-B1 while PC-A2 is not. If I issue a traceroute from PC-A2, I see Nexus-7000-2's physical IP address as the first hop, even though Nexus-7000-2 is HSRP standby. After the first hop the traceroute is lost. If I shut down Port-channel 20 on Nexus-5000-2, PC-A2 starts to ping PC-B1. I can't understand what's wrong in this architecture.
I use a Nexus 7010 as our Layer 3 router. I have the SSH feature turned on so I can manage it from the management interface. I just found out that users can use PuTTY to SSH to a local SVI interface of the Nexus. Although they still need a username and password to log in, we don't want them to even be able to bring up the welcome screen. For example, a user whose IP is 172.16.25.100 can SSH to 172.16.25.1, which is the Nexus SVI interface.
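One hedged way to block this is an ACL on the SVI that drops SSH aimed at the SVI address itself while letting transit traffic through; the ACL name and VLAN number are illustrative.
[CODE]
ip access-list BLOCK-SSH-TO-SVI
  deny tcp any 172.16.25.1/32 eq 22    ! no SSH to the gateway address
  permit ip any any                    ! everything else, including transit, passes
interface Vlan25
  ip access-group BLOCK-SSH-TO-SVI in
[/CODE]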
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre:
"system jumbomtu 9216" globally, plus "mtu 9216" on the interfaces, and I can see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port-channel between them. Yet when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header, 100% loss.
[CODE]
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
[/CODE]
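Since the failing pings are between VLANs, it may be worth confirming the Layer 3 MTU on every hop, including the SVIs, which stay at 1500 unless raised explicitly; the VLAN and port-channel numbers below are illustrative.
[CODE]
show interface port-channel 10 | include MTU
show interface vlan 200 | include MTU
interface Vlan200
  mtu 9216        ! SVIs do not pick up jumbo MTU automatically
[/CODE]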
I have a scenario where I'm going to have two Nexus 7010s connected as a core, and four 5510s connected in a star formation. Each Nexus 5510 will connect to the Nexus core via two 10 Gb links, with the two links attached to the core switches in vPCs.
I've got a pair of Nexus 7010s running vPC. I am having a multicast issue with a cluster of Linux servers that need to talk multicast for cluster/high-availability operation. All the servers need to talk to a single multicast address, and I am having trouble getting them to communicate. I believe I need to enable the IGMP snooping querier on the N7Ks, on the VLAN where the servers reside. How do I enable the IGMP snooping querier on a VLAN?
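From what I recall the querier is enabled per VLAN; depending on the release it sits under "vlan <id>" or "vlan configuration <id>" mode, and the VLAN and querier address below are illustrative.
[CODE]
vlan configuration 100
  ip igmp snooping querier 10.1.100.2   ! an unused address in the VLAN's subnet
[/CODE]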
Yesterday I configured the 7010 Nexus switch. I created a VDC, allocated a few ports, and configured a VLAN for testing. After enabling the interface-vlan feature I was able to configure an L3 interface for the VLAN. I assigned an IP address and connected a few servers to check reachability, but it says "Destination Host Unreachable".
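A hedged sketch of the pieces that need to line up, with illustrative IDs and addresses; note that NX-OS creates SVIs administratively down, and a still-shut SVI is a common reason for exactly this symptom.
[CODE]
feature interface-vlan
vlan 100
interface Vlan100
  ip address 10.1.100.1/24
  no shutdown                    ! SVIs come up shut down by default
interface Ethernet3/1
  switchport
  switchport access vlan 100
  no shutdown
[/CODE]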