Cisco WAN :: OSPF Route Between Nexus 7010 And ASR1002?
Sep 16, 2012
I cannot receive any OSPF routes from the Nexus on the ASR1002 even though they are OSPF neighbors. I have attached the config for both; both the Nexus and the ASR are part of Area 0.
Config
ASR1002#sh ip ospf neighbor
Neighbor ID     Pri   State      Dead Time   Address          Interface
10.165.117.12   1     FULL/BDR   00:00:35    10.231.175.226   GigabitEthernet0/0/0
OK, I didn't set up OSPF on my 7010 myself. Today I found out that any static route I put into my 7010 gets redistributed into my MPLS network. On my 6509s you have to "tag" the static route for this to happen. I was under the impression the same was necessary for the 7010, or at least that it had to "match" an access list. How can I fix the config below so that, by default, static routes are not redistributed into OSPF? [CODE]...
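On NX-OS a redistribute statement takes a route-map, so one approach is to permit only tagged statics. A sketch, with hypothetical process ID, tag value, and prefix:

```
! Hypothetical sketch: only statics carrying tag 100 get redistributed
route-map STATIC-TO-OSPF permit 10
  match tag 100

router ospf 1
  redistribute static route-map STATIC-TO-OSPF

! Tag the statics you DO want in OSPF; untagged ones stay out:
ip route 10.99.0.0/16 10.1.1.1 tag 100
```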
I'm currently working on a plan to migrate our 6500's over to our new 7010's. At the time of the migration I want to tighten up our OSPF design and configure OSPF for "passive-interface default" then allow only those interfaces that should have OSPF neighbors to send the hellos. The issue is that the command is not showing up under the OSPF process. What's even more interesting is that the Nexus 5.x Unicast Routing Configuration Guide shows that the "passive-interface default" command should be an option to enter.
I'm currently running version 5.1(4) (though looking to upgrade to 5.2 during my migration testing). I would rather configure the passive-interface via the routing process versus having to enter it on every interface.
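Until the process-level command is available in the running release, the per-interface form is the fallback. A sketch, with hypothetical interface names:

```
! NX-OS 5.1: suppress hellos interface-by-interface
interface Ethernet1/1
  ip ospf passive-interface

! On releases whose OSPF process exposes it (per the config guide):
router ospf 1
  passive-interface default
```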
I'd like to use IP SLAs and object tracking to define static routes for specific source/destination traffic across some WAN links we have. I've done this in IOS and it's worked fantastically, but I've not yet found where/how to do it on the Nexus 7010 platform (or any Nexus platform). I could have sworn this was going to be introduced in the 6.x code. Below is an example of how we do this in the IOS world:
track 11 ip sla 1 reachability
 delay down 15 up 15
ip sla 1
[Code]....
Essentially this gives us the option of using a "failover" default route. I've attached a basic diagram to explain what we are trying to do with IP SLAs and object tracking. The tracking should be configured against an SLA that uses ICMP, and the static routes should be configured against the tracking.
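Assuming IP SLA support lands in the 6.x train, an NX-OS equivalent might look like the following sketch (probe target, next hops, and object numbers are all hypothetical):

```
feature sla sender

ip sla 1
  icmp-echo 192.0.2.1
ip sla schedule 1 life forever start-time now

track 11 ip sla 1 reachability
  delay down 15 up 15

! Primary default route, withdrawn when the probe fails:
ip route 0.0.0.0/0 192.0.2.1 track 11
! Floating static (higher distance) as the failover path:
ip route 0.0.0.0/0 198.51.100.1 250
```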
I've got two Nexus 7010s running HSRP northbound to a pair of ASAs, and BGP southbound to four 6509s. Is it possible to advertise a default route to a BGP neighbor (or prefer it via MED) only if the node is HSRP-active?
Essentially the goal is to create symmetry for inbound/outbound traffic. The only way I can think of so far is an EEM script: when it sees HSRP go active via syslog, it kicks off an action to remove the AS-path prepend or reduce the MED, and does the opposite when HSRP goes standby.
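A rough EEM sketch along those lines; the syslog pattern, AS number, neighbor address, and route-map name are assumptions, and the exact `action ... cli` syntax varies by NX-OS release:

```
event manager applet HSRP-GONE-ACTIVE
  event syslog pattern "HSRP-5-STATECHANGE.*Active"
  ! Swap in an outbound policy that stops prepending / lowers MED
  ! (router/neighbor/route-map names are hypothetical):
  action 1.0 cli conf t
  action 2.0 cli router bgp 65000
  action 3.0 cli neighbor 10.0.0.1 route-map PREFER-ME out
  action 4.0 cli clear ip bgp 10.0.0.1 soft out
```

A mirror applet matching the standby transition would reapply the prepend/MED.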
I am working on a Nexus 7010 with NX-OS 5.1.5. I have to delete the static route 10.10.0.0/16 via 10.16.0.21. [code] I try to remove the route with the command "no ip route 10.10.0.0/16 10.16.0.21" and I get the message below: % Route not deleted, it does not exist. I don't understand why I get this message, because the static route exists.
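One thing worth checking: if the route was configured inside a VRF, or with extra options (tag, name, distance), the `no` form has to match how it was entered. A hedged sketch, with a hypothetical VRF name and tag:

```
! See exactly how the static was entered:
show running-config | include 10.10.0.0

! If it lives in a VRF, remove it from that context:
vrf context PROD
  no ip route 10.10.0.0/16 10.16.0.21

! If it was entered with options, repeat them in the no form:
no ip route 10.10.0.0/16 10.16.0.21 tag 100
```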
I have an environment with two Nexus 7010 switches, along with two Nexus 5510s. I need to run OSPF as a Layer 3 routing protocol across the vPC peer link. I have one link being used as a keepalive link, and three other links being used as the vPC peer link.
1) Is it best to configure a separate vPC VLAN, i.e. 1010?
2) Is it best to configure a VRF context for the keepalive?
3) Or just use the management addresses as the peer IPs?
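A common pattern is a dedicated VRF for the peer-keepalive so it never mixes with data-plane routing. A sketch with hypothetical interface, addressing, and domain number:

```
vrf context vpc-keepalive

! The dedicated keepalive link lives only in that VRF:
interface Ethernet1/48
  vrf member vpc-keepalive
  ip address 10.255.255.1/30

vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive
```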
I have a Catalyst switch that is redistributing some static routes into OSPF. These are received on a Nexus 7K and appear in the database; however, one of the routes is ignored and never added to the routing table. I haven't got a clue why this is happening.
The routes on the Catalyst are as follows with ID of 172.30.255.22:
ip route 172.24.59.0 255.255.255.0 10.56.7.46
ip route 192.168.168.0 255.255.255.0 10.56.7.62
sh ip ro 172.24.59.0/24
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
172.24.59.0/24, ubest/mbest: 1/0
    *via 172.30.253.10, Po7, [110/20], 20w4d, ospf-NCC, type-2

sh ip ro 192.168.168.0/24
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
Route not found
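Some hedged checks on the 7K for the missing prefix (assuming the default VRF; exact keyword support varies by NX-OS release):

```
! Confirm the external LSA really is in the database:
show ip ospf database external 192.168.168.0

! Look for a filter, table-map, or summary suppressing installation:
show running-config ospf

! Ask URIB why the best path was (or wasn't) chosen:
show ip route 192.168.168.0/24 detail
```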
I have 2 ASBR routers, AGFR01RTR03 and AGFR02RTR03, performing OSPF-to-OSPF redistribution in both directions for the same ***. They also perform summarization for our private addressing scheme. That part is all working just fine (neighbors, summarization, redistribution).
Let's focus on AGDC01RTR01 with a specific entry here (the IP subnet is fake):
Routing entry for 1.1.1.0/25
  Known via "ospf 1000", distance 110, metric 300, type inter area
  Last update from 10.2.244.76 on GigabitEthernet5/1, 1d03h ago
  Routing Descriptor Blocks:
  * 10.2.244.76, from 10.2.1.249, 1d03h ago, via GigabitEthernet5/1
      Route metric is 300, traffic share count is 1
I am running an ASR1002 with the latest IOS XE version, asr1000rp1-adventerprisek9.03.02.01.S.151-1.S1.bin; configuration below:
router bgp 65000
 bgp router-id 1.1.1.1
 bgp log-neighbor-changes
 timers bgp 5 15
 !
 address-family ipv4 vrf LABR01-VRF
  bgp router-id 1.1.1.1
  neighbor bgprrclient peer-group
  neighbor bgprrclient remote-as 65001
  neighbor bgprrclient password 7 1234
  neighbor bgprrclient update-source Loopback0
  neighbor bgprrclient version 4
  neighbor bgprrclient route-reflector-client
  neighbor bgprrclient route-map set_weight in

I then tried to create a new route-map and got an error that next-hop matching cannot be used inbound:
route-map set_weight permit 10
 match ip next-hop prefix-list thirdparty
 match as-path 1
 set weight 1000
LAB-ASR1002(config)#route-map set_weight permit 10
LAB-ASR1002(config-route-map)# match ip next-hop prefix-list thirdparty
% "set_weight" used as BGP inbound route-map, nexthop match not supported
% not supported match will behave as route-map with no match
(the same message pair repeats several times)

Not sure why Cisco doesn't support such a basic feature for BGP route-maps. I tried matching other attributes, but I cannot get the same result, because the same routes appear in the BGP table from multiple inbound peers.
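Since inbound next-hop matching is rejected, one hedged workaround is to set weight per neighbor, or to match only on attributes that are supported inbound, such as AS-path (the neighbor address below is hypothetical):

```
! Per-neighbor weight instead of a next-hop match:
router bgp 65000
 address-family ipv4 vrf LABR01-VRF
  neighbor 10.0.0.1 weight 1000

! Or keep the inbound route-map but match only on as-path:
route-map set_weight permit 10
 match as-path 1
 set weight 1000
```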
I also get this message when configuring TACACS. I looked for the "new" CLI but had no luck:
LAB-ASR1002(config)#tacacs-server host 2.2.2.2
This cli will be deprecated soon. Use new server cli
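The replacement on IOS XE is the named-server form; a sketch with a hypothetical server name, key, and group:

```
! Named TACACS+ server definition:
tacacs server LAB-TAC
 address ipv4 2.2.2.2
 key MySharedSecret

! Reference it from a AAA server group:
aaa group server tacacs+ LAB-TAC-GROUP
 server name LAB-TAC
```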
I'm trying to implement PBR on the N7K. I found that there is no track mechanism that can be used with "set ip next-hop", so if the next hop is unreachable, traffic may be black-holed.
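For comparison, this is the IOS form of what's being looked for; whether a given NX-OS release supports `verify-availability` with tracking would need checking against its PBR configuration guide (ACL number, next hop, and track object are hypothetical):

```
! IOS: only use this next hop while track 1 is up
route-map PBR-OUT permit 10
 match ip address 101
 set ip next-hop verify-availability 10.1.1.1 10 track 1
```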
I have a couple of Nexus 7010s (5.2(3a)) connected to a Check Point firewall in HA (active/active). I now have to configure a multicast MAC; I found this command: [code]
There are times the Nexus CPU goes to around 70%, but it's not a constant occurrence. Is this something to worry about? It's quite hard to find out which process caused it, as it happens very briefly. [code]
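Two hedged commands that can help catch short-lived spikes on NX-OS:

```
! ASCII graph of CPU over the last seconds/minutes/hours:
show processes cpu history

! Snapshot sorted by usage, hiding idle processes:
show processes cpu sort | exclude 0.0
```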
This is regarding the Nexus 7010 core switch. We are already running two Nexus 7Ks with ten Nexus 5Ks, and we are now going to add two new Nexus 5Ks to our DC. On the 7Ks we are already running two VDCs.
Why would a subnet not be passed on to just one participating OSPF device?
I have two routers and an ASA, all of which are in area 0; it's a pretty simple config. The two routers are connected to some other devices (also in area 0) that pass an external route to a particular subnet, let's call it 192.168.4.0. The routers are getting it just fine, but the ASA is not:
In our LAN network design, we have two Nexus 7010 switches at the core connected via vPC. The LAN access switches are directly connected to the core Nexus switches, via regular port channels on the 3750s and vPC on the Nexus. The core Nexus switches will be linked to an existing LAN network, and the applications will be progressively migrated from the old to the new network. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
We recently replaced our core switch from a non-Cisco vendor with a Nexus 7010. With our old core switch, I had the ability to log changes to the ARP table. So if there was a DHCP conflict or a vMotion event, it would show up in the "show log" output. I've not found a way to do that with the Nexus switch, or at least no way to view the log. I have the command: logging level arp 6
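With the ARP facility raised to informational, the messages should land in the on-box logfile if it captures that severity; a hedged sketch (the logfile name is the common default but may differ):

```
logging level arp 6
! Make sure the logfile accepts severity-6 (informational) messages:
logging logfile messages 6
! Then search it for ARP events:
show logging logfile | include ARP
```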
I have to upgrade a Nexus 7010 with dual sup engines from 4.2(4) to 5.2 and am hoping it could be an ISSU. We are fine with an outage window. To upgrade from 4.2(4) to 5.2(5) I'll have to do a multi-hop upgrade, 4.2(4) - 4.2(6) - 5.2(5), and each hop would take 40-60 minutes. Do I spend 40-60 minutes on each hop, or just do a disruptive upgrade straight from 4.2(4) to 5.2(5)? Like I said, we are fine with an outage window.
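Either way, each hop is driven with `install all`; a sketch with hypothetical image filenames for the first hop:

```
! Dry-run: report whether this hop is hitless (ISSU) or disruptive:
show install all impact kickstart bootflash:n7000-s1-kickstart.4.2.6.bin system bootflash:n7000-s1-dk9.4.2.6.bin

! Then run the actual upgrade:
install all kickstart bootflash:n7000-s1-kickstart.4.2.6.bin system bootflash:n7000-s1-dk9.4.2.6.bin
```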
I have two Nexus 7010s in the data center. I'm unable to poll SNMP data from one of them; the other is working fine. I have compared the SNMP configurations, and they are identical. When I do "show snmp" on the non-working switch, I see SNMP packets counted under "Unknown context name", and I'm not sure why. "show vdc" matches the working switch's output. Here is the output of show snmp from the non-working switch:

133 SNMP packets input
    0 Bad SNMP versions
    0 Unknown community name
    0 Illegal operation for community name supplied
    0 Encoding errors
    0 Number of requested variables
    0 Number of altered variables
    0 Get-request PDUs
    0 Get-next PDUs
    0 Set-request PDUs
    0 No such name PDU
    0 Bad value PDU
    0 Read Only PDU
    0 General errors
    0 Get Responses
  133 Unknown Context name
0 SNMP packets output
    0 Trap PDU
    0 Too big errors
    0 No such name errors
    0 Bad values errors
    0 General errors
    0 Get Requests
    0 Get Next Requests
    0 Set Requests
    0 Get Responses
    0 Silent drops
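On NX-OS, a climbing "Unknown context name" counter may point at a community-to-context mapping issue; a hedged sketch (community and context names hypothetical):

```
! Define the context and map the polled community onto it:
snmp-server context MYCONTEXT
snmp-server mib community-map public context MYCONTEXT
```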
I just deployed a Nexus 7010 switch at a server farm. After deployment, it was noticed that there are surges in latency across the network. The default gateway was then moved to the Nexus; with this, pinging from a host on the same subnet shows intermittent bursts of latency.
NEXUS >>>> Server: pings of about 80 ms, and sometimes even timeouts. To me, this is strange. The NX-OS version is 5-02a.
I'm looking to see if it is possible to run a vPC between two VDCs on a single 7010. We have a production setup that runs dual 7010s with vPCs between the chassis, but in our lab we only have a single 7010 with a 32-port 10 Gig module. I was thinking that maybe we could create 4 VDCs on the 7010 and run a vPC between the VDCs.
How do I get a summary of NetFlow statistics on NX-OS? On IOS you could do "sh ip cache flow", which would show what I need. I can't find a similar command on the Nexus platform.
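On the N7K, flow caches live on the I/O modules rather than centrally, so the closest equivalents are module-scoped (the module number below is hypothetical):

```
! Flow cache as seen by a specific I/O module:
show hardware flow ip module 1

! Exporter / monitor state for the configured NetFlow setup:
show flow exporter
show flow monitor
```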
We will install a new supervisor engine in our Nexus 7010. One supervisor engine is already installed and is a year old. So the problem is that the two supervisor engines may have different NX-OS versions. Could this lead to a problem? Does the already-installed supervisor engine "update" the new supervisor engine?
I have a big problem: I configured VLANs with VRFs and HSRP, but when I do "show hsrp brief" these interfaces don't show up, and I cannot ping the virtual IP. It seems HSRP isn't working.
SWSERVSCAMILO_N7010_A# interface Vlan405
  description smsc-fwatlas1
  no shutdown
[Code] ....
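A couple of hedged things to verify, since both SVIs and HSRP are feature-gated on NX-OS (the VRF name, addresses, and group number below are hypothetical):

```
feature interface-vlan
feature hsrp

interface Vlan405
  no shutdown
  vrf member FW-VRF
  ip address 10.20.5.2/24
  hsrp 1
    preempt
    ip 10.20.5.1
```

Note that entering `vrf member` clears any IP address already on the SVI, so the address and HSRP group have to be reapplied afterwards.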
We have a couple of Nexus 7010s split into Core and Distribution VDCs. The mgmt0 interfaces on each of the Nexus VDCs (including the admin VDC) are configured with different IP addresses on the same subnet, i.e. 10.10.10.1/24 for admin, 10.10.10.2/24 for Core, and 10.10.10.3/24 for Distribution. The mgmt0 physical port on each Nexus is connected to a physical Gig port on a 3750-X switch, and the 3750-X has uplinks back to the Nexus configured for vPC.
When I SSH to the VDC mgmt0 IPs from the 3750-X, I can access each of these VDCs without any problems. But if I enable routing (OSPF) on each of these links and advertise it to the WAN, I cannot see the routes advertised, and I also cannot see any of them in the local routing table. I'm wondering if I have to put these links in a VLAN and then advertise that to the WAN; but if that's the case, VLANs cannot be created on the admin (default) VDC.
We have, for nearly 4 years, used EIGRP on our 6513 to make use of two unequal links to our branch offices. This worked because we could use the variance command to cause EIGRP to insert two routes into the table, one from each carrier. Thus we could balance the load across them in a ratio similar to the ratio of the bandwidth of Link A to Link B.
We just purchased two Nexus 7010s to replace our single 6513 core. After much consternation, we have found from our Cisco SE that the Nexus 6.0.2 rendition of EIGRP does not support variance.
Why would Cisco take their own proprietary protocol and then gut it by removing features? I'm quite ready to send these Nexus boxes back in favor of a newer 6500 series. MEC doesn't work like it is supposed to, and the show tech runs for over 24 hours without ever finishing (and we can repeat this on both boxes, multiple times).
We've opened a TAC case, but is there any workaround for the 'variance' command?
As a senior network engineer, I have entered into a bit of a debate with our architect about the use of the mgmt0 interfaces on the Nexus 7010 switch (dual sups, M2 and F2 line cards). I would like to know the opinion of the Cisco support community.
I believe the mgmt0 interface should be left alone for control-plane traffic only and out-of-band management access (i.e. SSH). At the moment I have put the mgmt0 (vrf management) of every VDC in a common subnet. The physical mgmt0 interfaces from both sups are connected to a management hand-off switch. The mgmt0s also serve as the peer-keepalive path for our vPCs; the vPC peer link, however, uses front-panel interfaces on the line cards.
The opinions:
- The architect thinks we should use the mgmt0 interfaces for SNMP, NTP, TACACS, NetFlow analysis, and switch management.
- However, I think we should use a traditional loopback, reachable via the line-card interfaces, for these functions. The mgmt0 should only be used when normal switch access has failed.
My basis:
The loopback never goes down and is reachable over multiple paths, whereas the OOB hand-off switch could fail and close off switch management access completely. The mgmt0 should be a last resort for management access, alongside the CMP.
I am facing an issue with Nexus 7010 login authentication via a RADIUS server. I have two Nexus 7010s; one of them works perfectly, while the other takes a long time to authenticate. If I use the local database to log in, it works perfectly. It also works fine if I log in from the console using RADIUS for authentication.
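A hedged way to exercise and time the RADIUS exchange on the slow box (server IP and credentials below are hypothetical):

```
! Exercise the AAA path directly against one server:
test aaa server radius 10.1.1.5 testuser testpass

! Compare timeout/retransmit settings and per-server counters
! against the working switch:
show radius-server
show radius-server statistics 10.1.1.5
```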