Cisco Switching/Routing :: Nexus 7010 - How To Block SSH Access On SVI Interfaces
Jun 4, 2012
We use a Nexus 7010 as our layer 3 router. I have the SSH feature turned on so I can manage it from the management interface. I just found out that users can use PuTTY to SSH to the local SVI interfaces of the Nexus. Although they still need a username and password to log in, we don't want them to even be able to bring up the login prompt. For example, a user with IP 172.16.25.100 can SSH to 172.16.25.1, which is the Nexus SVI address.
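One common approach (a hedged sketch, not from the post above; the ACL name and VLAN number are assumptions) is a router ACL on the SVI that drops SSH aimed at the switch's own address while still permitting transit traffic:

! placeholder names: block SSH destined to the SVI itself, allow everything else
ip access-list BLOCK-SSH-TO-SVI
  deny tcp 172.16.25.0/24 host 172.16.25.1 eq 22
  permit ip any any

interface Vlan25
  ip access-group BLOCK-SSH-TO-SVI in

The same idea can be repeated per SVI, or the deny entry widened to cover all of the switch's SVI addresses.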
This is regarding the Nexus 7010 core switch. We are already running two Nexus 7Ks with ten Nexus 5Ks, and we are about to add two new Nexus 5Ks in our DC. On the 7Ks we are already running two VDCs.
In our LAN network design, we have two Nexus 7010 switches at the core connected via vPC. LAN access switches are directly connected to the core Nexus switches, using regular port channels on the 3750s and vPCs on the Nexus side. The core Nexus switches will be linked to an existing LAN network, and applications will be progressively migrated from the old network to the new one. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
We recently replaced our core switch from a non-Cisco vendor with a Nexus 7010. With our old core switch, I had the ability to log changes to the ARP table, so if there was a DHCP conflict or a vMotion event, it would show up in the "show log" output. I've not found a way to do that with the Nexus switch, or at least no way to view the log. I have the command: logging level arp 6
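For reference, a minimal sketch of the logging side on NX-OS (the logfile name and severity are assumptions on my part, not from the post):

logging level arp 6
! make sure the local logfile keeps informational-level messages
logging logfile messages 6
! then review what was captured
show logging logfile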
I have to upgrade a Nexus 7010 with dual supervisor engines from 4.2(4) to 5.2 and am hoping it can be an ISSU, although we are fine with an outage window. To go from 4.2(4) to 5.2(5) I'll have to do a multi-hop upgrade, 4.2(4) - 4.2(6) - 5.2(5), and each hop would take 40-60 minutes. Do I spend 40-60 minutes on each hop, or just do a disruptive upgrade straight from 4.2(4) to 5.2(5)? Like I said, we are fine with an outage window.
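For context, each hop (ISSU or disruptive) is driven by the same install command; a sketch with placeholder image filenames, running the impact check first:

! preview compatibility/impact for the hop (image filenames are placeholders)
show install all impact kickstart bootflash:n7000-s1-kickstart.4.2.6.bin system bootflash:n7000-s1-dk9.4.2.6.bin
! then perform the upgrade for that hop
install all kickstart bootflash:n7000-s1-kickstart.4.2.6.bin system bootflash:n7000-s1-dk9.4.2.6.bin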
I just deployed a Nexus 7010 switch at a server farm. After deployment, it was noticed that there are surges in latency across the network. The default gateway was then moved to the Nexus; with this, pinging from a host on the same subnet shows intermittent bursts of latency.
Nexus-to-server pings come in at about 80 ms and sometimes even time out. To me, this is strange. The NX-OS version is 5-02a.
I'm looking to see if it is possible to run a vPC between two VDCs on a single 7010. We have a production setup that runs dual 7010s with vPCs between the chassis, but in our lab we only have a single 7010 with a 32-port 10 Gig module. I was thinking that maybe we could create four VDCs on the 7010 and run a vPC between the VDCs.
How do I get a summary of NetFlow statistics on NX-OS? On IOS you could do "sh ip cache flow", which shows what I need, but I can't find a similar command on the Nexus platform.
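For context, NetFlow on NX-OS is built from flow records, exporters and monitors rather than a single cache view; a minimal sketch with placeholder names and addresses:

feature netflow
flow record BASIC-REC
  match ipv4 source address
  match ipv4 destination address
  collect counter bytes
  collect counter packets
flow exporter TO-COLLECTOR
  destination 10.0.0.50
  transport udp 2055
  source mgmt0
flow monitor MON1
  record BASIC-REC
  exporter TO-COLLECTOR
interface Vlan100
  ip flow monitor MON1 input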
We will install a new supervisor engine in our Nexus 7010. One supervisor engine is already installed and is 1 year old. The problem is that the two supervisor engines may have different NX-OS versions. Could this lead to a problem? Does the already installed supervisor engine "update" the new supervisor engine?
I have a big problem: I configured VLANs with a VRF and HSRP, but when I do "show hsrp brief" these interfaces don't show up, and I can ping the virtual IP. It seems HSRP doesn't work.
SWSERVSCAMILO_N7010_A#
interface Vlan405
  description smsc-fwatlas1
  no shutdown
[Code] ....
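For comparison, a minimal sketch of an SVI running HSRP inside a VRF on NX-OS (the VRF name, group number and addresses are assumptions; note that the vrf member line wipes any existing IP configuration, so it goes in before the addresses):

feature interface-vlan
feature hsrp

vrf context SMSC

interface Vlan405
  description smsc-fwatlas1
  no shutdown
  vrf member SMSC
  ip address 10.40.5.2/24
  hsrp 1
    priority 110
    preempt
    ip 10.40.5.1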
We have a couple of Nexus 7010s split into Core and Distribution VDCs. The mgmt0 interface on each of the Nexus VDCs (including the admin VDC) is configured with a different IP address, but on the same subnet, i.e. 10.10.10.1/24 for admin, 10.10.10.2/24 for Core and 10.10.10.3/24 for Distribution. The mgmt0 physical port on each Nexus is connected to a physical gig port on a 3750X switch, and the 3750X has uplinks back to the Nexus configured for vPC.
When I SSH to the VDC mgmt0 IPs from the 3750X, I can access each of these VDCs without any problems. But if I enable routing (OSPF) on each of these links and advertise it to the WAN, I cannot see these routes advertised and also cannot see any of them in the local routing table. I'm wondering if I have to put these links in a VLAN and then advertise that to the WAN, but if that is the case, VLANs cannot be created on the admin (default) VDC.
We have, for nearly 4 years, used EIGRP on our 6513 to make use of two unequal links to our branch offices. This worked because we could use the variance command to cause EIGRP to insert two routes into the table, one from each carrier, so we could balance the load across them in a ratio similar to the ratio of the bandwidth of link A to link B.
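For reference, that setup on the 6513 in classic IOS looks roughly like this (AS number and multiplier are placeholders): any feasible-successor route whose metric is within the variance multiple of the best path's metric gets installed, and traffic is shared in proportion to the metrics.

! classic IOS on the 6513; AS number and multiplier are placeholders
router eigrp 100
 ! install routes with metrics up to 2x the best path
 variance 2
 ! cap on the number of parallel routes installed
 maximum-paths 4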
We just purchased 2 Nexus 7010s to replace our single 6513 core. After much consternation we have found from our Cisco SE that the Nexus 6.0.2 software rendition of EIGRP does not support variance.
Why would Cisco take their own proprietary protocol and then gut it by removing features? I'm quite ready to send these Nexus boxes back in favor of a newer 6500 series. MEC doesn't work like it is supposed to, and the show tech-support runs for over 24 hours without ever finishing (and we can repeat this on both boxes, multiple times).
We've opened a TAC case, but I just wondered if there is any workaround for the 'variance' command.
I am facing an issue with Nexus 7010 login authentication via a RADIUS server. I have two Nexus 7010s; one of them works perfectly, while the other takes a long time to authenticate. If I use the local database to log in, it works perfectly. It also works fine if I log in from the console using RADIUS for authentication.
Does the Nexus 7010 support virtual switching yet? All of the posts I have found from about a year ago say that it is going to be supported, but there were no dates listed. I heard the same thing from Cisco a while back, but haven't followed up on it. If it is finally supported, are there any configuration guides available for it?
Attached you will find my network architecture with 2 Nexus 7010s at the core layer and 2 Nexus 5020s at the distribution layer, each with one N2148T fabric extender switch. PC-A1 and PC-A2 are connected to one N2148T; PC-B1 is connected to the other N2148T. Nexus-7000-1 is HSRP active for all VLANs and Nexus-7000-2 is HSRP standby. PC-A1 and PC-A2 are connected to VLAN A, PC-B1 is connected to VLAN B, and PC-A1 and PC-A2 have the same default gateway, corresponding to the HSRP IP on VLAN A. It happens that PC-A1 is able to ping PC-B1 while PC-A2 is unable to ping PC-B1. If I issue a traceroute from PC-A2, I see Nexus-7000-2's physical IP address as the first hop even though Nexus-7000-2 is HSRP standby; after the first hop the traceroute is lost. If I shut down Port-channel 20 on Nexus-5000-2, PC-A2 starts to ping PC-B1. I can't understand what's wrong in this architecture.
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre.
I have "system jumbomtu 9216" configured, plus "mtu 9216" on the interfaces, and I can see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port-channel between them. Yet when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header to yes, 100% loss.
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
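For comparison, a minimal sketch of the pieces that typically all need to line up for end-to-end jumbo frames on a 7010 (interface numbers are placeholders); the routed hops, i.e. the SVIs, need their own MTU in addition to the L2 path:

! global L2 jumbo MTU
system jumbomtu 9216

! the port-channel / L2 path between the sites
interface port-channel10
  mtu 9216

! the SVIs doing the inter-VLAN routing
interface Vlan200
  mtu 9216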
I have a scenario where I'm going to have 2 Nexus 7010s connected as a core and 4 5510s connected in a star formation. Each Nexus 5510 will connect to the Nexus core via two 10 Gb links, with the 2 links attached to the core switches in vPCs.
Is the way I intend to configure the vPCs the best way? If I get a vPC dual-active scenario, what would happen? All ports will be forwarding all VLAN traffic; this is how I intend to have it work.
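For reference, a minimal sketch of the 7010-side vPC building blocks being described (domain ID, port-channel numbers and keepalive addresses are placeholders, not from the post):

feature vpc
feature lacp

vpc domain 10
  ! keepalive over a dedicated link or the mgmt path
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

! peer link between the two 7010s
interface port-channel1
  switchport
  switchport mode trunk
  vpc peer-link

! downlink port-channel to one 5510, one member leg on each 7010
interface port-channel101
  switchport
  switchport mode trunk
  vpc 101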
I've got a pair of Nexus 7010s running vPC. I am having a multicast issue with a cluster of Linux servers that need to talk multicast for cluster/high-availability operation. All the servers need to talk to a single multicast address, and I am having trouble getting them to communicate. I believe I need to enable the IGMP snooping querier on the N7Ks, and it needs to be enabled on the VLAN where the servers reside. How do I enable the IGMP snooping querier on a VLAN?
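A minimal sketch of turning the querier on for one VLAN (the VLAN ID and querier source address are placeholders):

! applies only to VLAN 100; querier source address is a placeholder
vlan configuration 100
  ip igmp snooping querier 10.1.100.3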
Yesterday I configured the 7010 Nexus switch. I created a VDC, allocated a few ports, and configured a VLAN for testing. After enabling the interface-vlan feature I was able to configure an L3 interface for the VLAN. I assigned an IP address and connected a few servers to check reachability, but it reports Destination Host Unreachable.
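For comparison, a minimal sketch of that bring-up (VDC name, ports, VLAN and addressing are placeholders); one gap that would match these symptoms, purely an assumption on my part, is the access port not actually carrying the new VLAN:

! from the default VDC: create the VDC and allocate ports (placeholders)
vdc TEST
  allocate interface Ethernet1/1-4

! inside the TEST VDC
feature interface-vlan
vlan 100
interface Ethernet1/1
  switchport
  switchport access vlan 100
  no shutdown
interface Vlan100
  ip address 192.168.100.1/24
  no shutdown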
I have an environment with two Nexus 7010 switches, along with 2 Nexus 5510s. I need to run OSPF as the layer 3 routing protocol across the vPC peers. I have 1 link being used as a keepalive link and 3 other links being used as the vPC peer link.
1) Is it best to configure a separate vPC VLAN, i.e. 1010?
2) Is it best to configure a VRF context for the keepalive (see the sketch after this list)?
3) Or just use the management addresses as the peer IPs?
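Regarding option 2, a minimal sketch of carrying the peer-keepalive in its own VRF over the dedicated link (VRF name, interface and addresses are placeholders, not a recommendation from the post):

vrf context VPC-KEEPALIVE

interface Ethernet1/48
  ! dedicated keepalive link, kept out of the global routing table
  no switchport
  vrf member VPC-KEEPALIVE
  ip address 10.255.255.1/30
  no shutdown

vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf VPC-KEEPALIVE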
I am working on a Nexus 7010 with NX-OS 5.1.5. I have to delete the static route 10.10.0.0/16 via 10.16.0.21. [code] I try to remove the route with the command "no ip route 10.10.0.0/16 10.16.0.21" and I get the message "% Route not deleted, it does not exist". I don't understand why I get this message, because the static route exists.
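A quick sketch of how I'd verify the exact form the route was entered in before removing it; the removal has to match the configured line (VRF, next-hop interface, name, preference), and the VRF below is only a placeholder for one possible cause:

! see exactly how the static route is stored
show running-config | include 10.10.0.0
show ip route static

! if it was configured under a VRF, remove it there (placeholder VRF name)
vrf context SERVERS
  no ip route 10.10.0.0/16 10.16.0.21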
I am sure that F1 linecards on the Nexus weren't able to support L3 functionality, so my query is: does the F2 linecard (N7K-F248XP-25) on the Nexus 7010 support Layer 3?
We have 2 Nexus 7010 switches configured with HSRP in the network. For all the VLANs, core1 is master and core2 is standby. In the current setup we have an external DHCP server, and DHCP relay is configured for all the VLANs on both the master and standby switch. The setup is running NX-OS 5.2.
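For reference, a minimal sketch of the relay configuration on each core (the server address and VLAN are placeholders):

feature dhcp
ip dhcp relay

interface Vlan100
  ip dhcp relay address 10.50.1.10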
Activity done: during the maintenance activity, we isolated the core1 switch from the network by disabling the vPC/keepalive and all the uplinks from the access switches. The core2 switch became master for all the VLANs.
Issue observed: new users were not getting IP addresses from the DHCP server. An Ethereal capture showed that the DHCP server was not receiving DHCP requests from the core2 switch. We disabled the DHCP feature on core2 and enabled it again, with DHCP relay reconfigured on the VLAN interfaces; even after doing this, no change in behaviour was observed. Finally we brought core1 back into the network by enabling all the links.
Observation: the moment the vPC link came up between the core switches, users started getting IPs from DHCP. Then we started enabling all the uplinks on core1; core1 again became master for all the VLANs and users continued getting IPs. The network is running fine.
Further Testing
1. For one of the VLANs, the core2 switch was made primary, and DHCP functionality was checked for new users; it worked fine. The aim was to identify whether anything was wrong on core2 related to DHCP relay.
2. We then changed the priority for this VLAN again and made core1 master for it. This time we disabled this VLAN on core1 and tried a new user; core2 became master and DHCP worked fine for the new user. In this case we effectively simulated the same behaviour as during the issue, the only difference being that the vPC was not available at the time of the issue, since core1 was isolated from the network. Inputs needed:
Is there any known behaviour of DHCP relay when the vPC is unavailable? If we look at test scenario 2 (wherein core1 was master for the VLAN, we disabled this VLAN on core1, and core2 was able to relay DHCP requests for new users in that VLAN), it was effectively the same as the scenario we observed at issue time.
OK, I didn't set up the OSPF on my 7010. Today I found out that any static route I put into my 7010 gets sent into my MPLS network. On my 6509s you have to "tag" the static route for this to happen; I was under the impression the same was necessary for the 7010, or at least that it had to "match" an access list. How can I fix the config below so that by default all static routes are not redistributed into OSPF? [CODE]...
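Without the original config I can only sketch the usual shape of the fix (route-map name, tag value and OSPF process ID are placeholders): on NX-OS the redistribution statement takes a route-map, so the filtering goes there, permitting only statics that carry an explicit tag:

! only statics tagged 100 are redistributed; everything else is dropped by the implicit deny
route-map STATIC-TO-OSPF permit 10
  match tag 100

router ospf 1
  redistribute static route-map STATIC-TO-OSPF

! a static route that should be advertised gets the tag when it is created
ip route 192.0.2.0/24 10.1.1.1 tag 100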
I have configured the "ip telnet source-interface loopback 0" command on a Nexus 7010, but when I telnet to another device and do a "show users", the IP address shown is that of the interface closest to the device I telnet to, not the IP address of the loopback. All interfaces are in the default VRF. I am running NX-OS 5.1(6).
I have a Nexus 7010 switch installed in my DC. I've connected a Cisco router to one of the ports. On that port I'm getting the following error message, and hence I am unable to form an EIGRP neighborship.
DR-CORE-SW-S01-NEXUS7K %MODULE-2-MOD_SOMEPORTS_FAILED: Module 1 (serial: JAF*******NGK) reported failure on ports 1/2-1/2 (Ethernet) due to R2D2 : Speed patch failed - no frames transmitted in device 143 (error 0xc8f0 1273)
I'm currently working on a plan to migrate our 6500s over to our new 7010s. At the time of the migration I want to tighten up our OSPF design and configure OSPF with "passive-interface default", then allow only those interfaces that should have OSPF neighbors to send hellos. The issue is that the command is not showing up under the OSPF process. What's even more interesting is that the Nexus 5.x Unicast Routing Configuration Guide shows that the "passive-interface default" command should be available to enter.
I'm currently running version 5.1(4) (though looking to upgrade to 5.2 during my migration testing). I would rather configure the passive-interface via the routing process versus having to enter it on every interface.
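For reference, the two shapes this takes on NX-OS (process ID and interface are placeholders); whether the process-level form is accepted depends on the release, as noted above, and the per-interface command is the fallback:

! process-level, as shown in the Unicast Routing Configuration Guide
router ospf 1
  passive-interface default

! per-interface fallback where the above is not available
interface Ethernet1/1
  ip ospf passive-interface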
All the other switches/routers in our network return their FQDN when queried for SNMP sysName.0. The Nexus 7010 and 5020 switches return only their hostname. "hostname xx" and "ip domain-name xx" are defined on all the devices, the SNMP MIB matches, and there are no other SNMP-related issues. How can I get the FQDN from these devices?
$ snmpget -v 2c -c public m-65k-00.core sysName.0
SNMPv2-MIB::sysName.0 = STRING: m-65k-00.core.abcd.com
$ snmpget -v 2c -c public m-N7K-00.core sysName.0
The company I work for has finally decided to enter the 21st century and invest in a new telephone system (Interactive Intelligence) to replace the legacy system which has served us well for the past 10 years. The project has only just started and involves upgrading sections of CAT3 cabling to CAT6, replacing Cisco 3550 switches in one area of the building with Cisco 4507 switches, and upgrading our core switches to Cisco Nexus 7010s. The area that concerns me most is enabling the network for QoS, as I have very little experience with it. At the moment I'm trying to read as much documentation as I can on QoS to bring myself up to speed.
The access layer will consist of a mixture of Cisco 3750 and 4507 switches connected to Cisco Nexus 7010 switches, which will form a collapsed aggregation and core layer.
Basically, how should I approach this daunting task of making sure the network will support VoIP?