Cisco :: Nexus 7009 / Cannot Connect To It Via SSH
Feb 21, 2013
I have a Nexus 7009 that I used to be able to connect to via SSH. However, now I cannot connect to it via SSH. It appears that the SSH session connects, but doing a 'show users' from the console shows nothing connected other than the console session.
We have a Nexus 7009 at a client network, but due to a limitation of Nexus switches (they cannot be integrated directly with RSA) the client has purchased Cisco ACS for AAA. We are able to do authentication and authorization via ACS. However, the client wants to further integrate ACS with RSA so that authentication happens via RSA and authorization happens via ACS. Is that possible? If yes, how can I configure the ACS?
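For what it's worth, the RSA piece is normally handled inside ACS itself (pointing the access policy's identity source at the RSA SecurID store), so the switch only needs to talk to ACS. A minimal NX-OS AAA sketch, assuming TACACS+ towards ACS; the server address, key and group name are hypothetical:
feature tacacs+
tacacs-server host 10.1.1.10 key MyS3cret      ! hypothetical ACS address and key
aaa group server tacacs+ ACS-SERVERS
  server 10.1.1.10
  use-vrf management
  source-interface mgmt0
aaa authentication login default group ACS-SERVERS
aaa authorization commands default group ACS-SERVERS local
aaa authorization config-commands default group ACS-SERVERS local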
I have set up my RADIUS server access on the Nexus but am unable to authenticate through PuTTY. If I do a radius-server test on the Nexus it says I authenticate. Here is the log I am getting:
2012 Mar 14 16:03:21 switch-a %AUTHPRIV-4-SYSTEM_MSG: pam_unix(aaa:auth): check pass; user unknown - aaad
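That "user unknown - aaad" message suggests the login is being checked against the local user database rather than RADIUS, so it is worth verifying that the default login method actually points at the RADIUS group and that the group has the server, VRF and source interface bound to it. A minimal sketch, with a hypothetical server address and key:
radius-server host 10.1.1.20 key MyS3cret authentication accounting   ! hypothetical server
aaa group server radius RADIUS-SERVERS
  server 10.1.1.20
  use-vrf management            ! or whichever VRF actually reaches the server
  source-interface mgmt0
aaa authentication login default group RADIUS-SERVERS local
aaa authentication login console local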
I need to upgrade my core switches at one of our locations (two 7009s with dual sups) from 6.0.1 to 6.1.2. After looking through the release notes it appears that this will be a disruptive upgrade. How long should I expect the disruption to last? Are we talking a 7009 boot cycle (10-15 minutes) or something longer? How many disruptions can I expect? I suspect one per chassis during the failover to the standby, but I'd like to validate that. Is there any compelling reason to upgrade the EPLDs? From what I can see, again from the release notes, this is only necessary with F2 cards if I were to upgrade to Sup2s. I'm in a healthcare environment and this upgrade will be affecting one of our major campuses, so the more info I can get to the managers the more accepting they will be of the disruptions.
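Before scheduling the window, the switch itself will report whether the upgrade is disruptive and what each module will do. A sketch of the pre-check and the upgrade command; the image filenames are illustrative and depend on your supervisor type:
show install all impact kickstart bootflash:n7000-s1-kickstart.6.1.2.bin system bootflash:n7000-s1-dk9.6.1.2.bin
install all kickstart bootflash:n7000-s1-kickstart.6.1.2.bin system bootflash:n7000-s1-dk9.6.1.2.bin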
I need to figure out the max power consumption of a 7009. The issue is that at this point I am not sure what modules will be used, so just to give an estimate, how do we calculate the max power consumption of a Nexus 7009?
We are migrating from Catalyst 6509 IOS platforms to the Nexus 7009. There are the normal differences in commands, which are well documented. We do have some quite large files containing ACLs, varying from tens of lines to several thousands of lines. Our normal upload would be done using TFTP and then issuing the command 'conf net' on the 6509. This is no longer the way to do this on NX-OS. I've tried 'copy ftp: running-config', which works fine for small files, but for big ones it takes a long time; in some cases I've seen it take 20-30 minutes. The initial TFTP upload to the 7009 seems OK, but the copy into the running-config is the bit that takes time, and initially I thought I'd killed the 7009!! It did finally come back to the prompt. Are the 7009s simply not designed for large ACLs? I did try the configure session (Session Manager) but I couldn't see a way of uploading a file. I tried creating a new session and then exiting it, copying in a file of the same format and then committing it, but it didn't seem to acknowledge the file (checksum?).
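As far as I know, Session Manager only takes commands typed or pasted into the session, not a file copied in behind its back, so the workflow looks roughly like this (session and ACL names, and the entries, are hypothetical):
switch# configure session BIG-ACL-IMPORT
switch(config-s)# ip access-list BIG-ACL
switch(config-s-acl)# permit ip 10.0.0.0/8 any
switch(config-s-acl)# exit
switch(config-s)# verify
switch(config-s)# commit
The verify step does a dry-run check, and commit applies the whole session at once.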
I wanted to know whether, in a Nexus 7009, I can use a mix of F2/M1/M2 series line cards. Will they work with each other? Let's say I have an F2 line card and an M2 line card; will servers attached to them be able to communicate with each other?
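My understanding (worth confirming against the release notes for your NX-OS version) is that F2 cards cannot share a VDC with M1/M2 cards, so the usual approach is to put them in separate VDCs and trunk or route between the VDCs over physical links. A VDC resource sketch; the VDC names and IDs are hypothetical, and I recall the keyword for M2 cards being m2xl:
vdc CORE id 2
  limit-resource module-type m1 m2xl      ! M-series cards only in this VDC
vdc AGGREGATION id 3
  limit-resource module-type f2           ! F2 cards isolated in their own VDC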
We have a Nexus 7009 switch and want to configure a SPAN session.
We are using F2 and M2 cards, each in a separate VDC. Our server is connected to the M2 card on eth 4/6, and we want to monitor the traffic from VLAN 161, which is configured on the F2 card.
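As far as I know a SPAN session cannot have its source in one VDC and its destination in another, so the destination interface would need to be allocated to the same VDC where VLAN 161 lives. A minimal sketch of the session itself (interface and session numbers follow your description, but treat it as illustrative):
interface ethernet 4/6
  switchport
  switchport monitor                 ! destination port must be in monitor mode
monitor session 1
  source vlan 161 both
  destination interface ethernet 4/6
  no shut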
We are looking to deploy two Nexus 7009 cores at our two datacenters. They are approximately 2 miles apart. We are hoping to have 10G dark fiber between the buildings and therefore dedicate a pair for FCoE between the cores using 10G long-range SFPs. I read that the Nexus 5000 series had a limit of ~3 km for FCoE. Does the same hold true for the 7000 series? I thought I read somewhere that the buffers were larger on the 7000 series and it would therefore be able to do ~30 km.
I wanted to know if anyone has the Nexus 7009 chassis installed in a 600-wide rack with the sides fitted, and whether they are experiencing heat issues?
My client will be replacing their aging 6509 chassis with 7009 devices, but the physicals don't tally with the install guidelines for the 7009 series chassis. The current install of the 6509s does not tally with the recommended install guidelines for those either, but they have not experienced any heat issues...
The 7009 will be fitted with 2x Sup2E, 3x 48-port SFP F2e cards and 2x 10G SFP M2 cards, with 2x 6kW PSUs. I am genuinely concerned they may cook these devices, but space restrictions look likely to veto the upgrade to 800-wide racks. Likewise, moving to a 7010 chassis may prove tricky because other existing installs within the racks limit the vertical space.
We've got two Nexus 7009s in, and while starting to configure them I found I couldn't add VDCs. There was no license installed, and the only licenses I found that came with them are "Cisco DCNM for LAN Enterprise Lic for one Nexus 7000 Chassis". So my question is this: do I need to configure a DCNM server to get the license pushed to these two 7009s, or should there be another PAK for each chassis that I can register to get my enterprise services?
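For what it's worth, the DCNM PAK is a separate, server-side license; as far as I recall, creating VDCs requires the Advanced Services package installed on each chassis, which would be its own PAK registered against the chassis serial number. Installing it is done locally on the switch, roughly like this (the filename is hypothetical; it is whatever the licensing portal emails you):
switch# show license usage
switch# install license bootflash:N7K-chassis1-advanced-services.lic
switch# show license usage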
We've been using IOS for a long time, but are relatively new to NX-OS. We've got a central syslog server that all our devices log to. No matter what we do, we can't get our Nexus switches to log to it. Here's my current attempt:
Nexus 7009, NX-OS 6.0(1)
# sh logging server
Logging server:         enabled
{redacted}
server severity:        debugging
server facility:        local7
server VRF:             default
[code].....
The default VRF is working. I see log entries in the local logfile, but nothing arrives at the syslog server. It's not a config issue on the server, because tcpdump shows that no packets arrive from the IP of loopback 0.
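If the switch is meant to source syslog from loopback 0 in the default VRF, the two knobs I'd double-check are the VRF bound to the logging server and the logging source interface; a minimal sketch with a hypothetical server address (syslog is UDP/514, so also confirm nothing in the path filters it):
logging server 192.0.2.50 6 use-vrf default     ! hypothetical syslog server IP
logging source-interface loopback 0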
Is there any challenge in upgrading the core from the 6500 series to the Nexus 7009, which runs NX-OS, given that I have 3750-X series switches connected at the distribution and access layers in my network topology?
Is there any challenge if we place NX-OS in the core and IOS in the distribution and access layers? And how can we map the 'sh run' config on the existing 6500 switch to the Nexus 7009 NX-OS?
We have a vPC cluster of two Nexus 7009s that needs to be connected to a VSS cluster of two Catalyst 6509s. The VSS has been working fine for a while; the vPC cluster is new equipment.
Attached is a detailed diagram of the connections; the VSS cluster connects interfaces Ten1/2/8 and Ten2/2/8, using PortChannel 28, to interfaces Eth4/18 of each switch in the vPC cluster.
Both the vPC and the VSS are well configured; last night we tried to bring up the connection between the two clusters, but only the first interface comes up within the EtherChannel; the second one did not come up and shows (not receiving LACP packets).
We know Layer 1 is fine because if we remove the interface from the EtherChannel it does come up, but that causes an STP loop and brings the network down; thus the solution is to form an EtherChannel.
On the VSS cluster we see LACP packets being sent with 'sh lacp counters', but we DO NOT see LACP packets being received on the interface of the secondary Nexus.
Right now it is not possible to troubleshoot, since it is a production environment, so I'm looking for problems with the configuration, or recommendations to follow, in order to apply them tomorrow night during a new maintenance window.
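For comparison, this is the shape of config I'd expect on each Nexus (interface and channel numbers follow your description; treat the rest as a sketch):
interface port-channel28
  switchport
  switchport mode trunk
  vpc 28
interface ethernet 4/18
  switchport
  switchport mode trunk
  channel-group 28 mode active
  no shutdown
'show vpc consistency-parameters interface port-channel 28' on both peers should show no mismatches; a missing 'vpc 28' under the port-channel on one peer, or the VSS hashing all LACP PDUs onto the one working link, are common culprits for the secondary not seeing LACP.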
We are running 4x N5Ks and have started with the vPC feature. So my question is: can I connect a vPC pair to another vPC pair? In the Cisco docs I can find examples for connecting a vPC pair to a single switch or a server (with and without FEX), but there is nothing about how to connect 4 N5Ks via the vPC feature.
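My understanding is that this is the "double-sided" (back-to-back) vPC topology: each pair presents itself as a single LACP partner to the other pair over one port-channel whose member links are spread across both switches of the pair. A sketch of what goes on each of the four N5Ks, assuming the vPC domains and peer-links already exist (port-channel/vPC numbers and interfaces are hypothetical):
interface ethernet 1/31
  switchport mode trunk
  channel-group 100 mode active
interface port-channel 100
  switchport mode trunk
  vpc 100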
We are facing an issue of continuous packet discards on the Nexus 4001L link (int po2) to a Nexus 5020 switch. The Nexus 4001L is installed in an IBM BladeCenter chassis and we have FCoE enabled in this setup. [code]
I have been tasked to replace the existing Cat 6500 and 3750 switches with Nexus 7000 and Nexus 2000. I was told my boss initially plans to get 2x Nexus 7000 and then eventually grow to 4x Nexus 7000s. For the Nexus, is there a list of tasks/points that I need to consider for building the initial design?
Can I just link the Nexus 7000s like the following?
N7k-A ========= N7k-B
  |                |
lots of N2ks   lots of N2ks
We are planning a Nexus datacenter project with this layout. Our experience with Nexus switches is not large so far, and the manuals are very extensive. Both N5Ks should be connected directly to all 4 N2K switches. I did not find a layout like this in the manuals, only a design where just 2 N2Ks are connected to one N5K, with this FEX config. Now I'm not sure if it is right to make a config like this with the same slots and FEXs, or with different slots and FEXs.
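For reference, the per-FEX shape on an N5K looks roughly like this; for a FEX dual-homed to both N5Ks (active-active), my understanding is that the same FEX number is configured on both parents, while FEXs homed to only one parent are visible only to that N5K. FEX and interface numbers here are hypothetical:
feature fex
fex 101
  description RACK1-FEX
interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101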
We have a new IBM BladeCenter chassis with two Cisco Nexus 4001i switches. I have configured one external port on each Nexus and connected them to a Cisco 6509 with 1G Cisco SFP modules and MM fibre. Both the Nexus and 6509 ports are configured as trunk ports, with speed set to 1000. I see light in the SFP modules on both devices, and through the fibre. When I connect the devices, the link doesn't come up. No light on the ports; the Nexus says "link not connected" and the 6509 says "notconnect". I have tried reconfiguring the ports in many ways, even as access ports; nothing seems to work. If I move the SFP and fibre from the 6509 over to a trunk port on a Cisco C2960-24TC-L, the link comes up and everything works fine. Why does this work on a 2960 and not my 6509 core switch? One of the configs I've tried on the 6509:
interface GigabitEthernet2/20
 description *IBM Bladechassie 2 NW1*
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 34
 switchport trunk allowed vlan 34
 switchport mode trunk
end
Is it possible to connect one Cisco Nexus 2000 fabric extender to two Cisco Nexus 5000s and use one link on the first side and two links on the other side?
We have a setup of 2 Nexus 7000 chassis and several FEXes (N2K-C2248TP-1GE). The FEXes are connected through a port-channel to a single Nexus 7000 (no vPC): FEX 1 to Nexus 1, FEX 2 to Nexus 2, FEX 3 to Nexus 1, etc. Are there guidelines on how to connect a server to those FEXes?
I can see several possible scenarios at our site. I have drawn some scenarios on a design. I can't find detailed information on which setup is possible and which is not. The goal is to have as much redundancy as we can. When using scenario 1, do I configure an orphan port on the uplink to this server?
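Just on the orphan-port part: an orphan port isn't something you configure as such; it's what a non-vPC (single-attached) port on a vPC switch is called. The related knob, if your release supports it, suspends such a port when the vPC peer-link goes down so the server fails over to its other NIC. A sketch with a hypothetical FEX host interface and VLAN:
interface ethernet 101/1/10
  switchport access vlan 20
  vpc orphan-port suspend        ! suspend this port if the vPC peer-link fails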
We are looking to implement IBM BladeCenter switch connectivity with a Nexus 2K module. I would like to brief you about my network as follows:
Current solution: we want to connect the IBM Blade switches (4), as shown in the attached diagram, to the Nexus 2K module as EtherChannel access ports.
Initially this design was proposed to us by our vendor, and now the vendor is recommending that we connect the Blade switches either to a Nexus 5K switch or directly to the 6513 core switches instead of the Nexus 2K modules, as they say Nexus 2K modules are only for connecting edge devices.
We do not have ports available on the Nexus 5010 to connect the cables from the IBM BladeCenter switches. Other than that, in case we go ahead and connect the IBM Blade switches as EtherChannel access ports to the Nexus 2K module, what consequences would we face related to spanning tree or anything else?
As per my understanding, the Cisco Nexus 2232 can only connect to an HP c7000 chassis if we are using a pass-through switch in the HP c7000. The Cisco Nexus 2232 can only connect to end hosts and not to a switch. Is there a new feature added to the Nexus 2232 which enables it to connect to a switch like HP FlexFabric?
I am trying the connection above using 10G SFP connectors. Is there any special configuration needed in order to have a trunk (with 2 distinct VLANs) between the NX7000 and the Dell 6224?
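On the Nexus side there is nothing exotic; note that NX-OS has no 'switchport trunk encapsulation dot1q' command, since 802.1Q is the only encapsulation. A sketch of the 7000-side port (interface and VLAN IDs hypothetical), to be matched by a tagged trunk carrying the same VLANs on the Dell 6224:
interface ethernet 1/1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20     ! the two VLANs to be carried
  no shutdown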
I have one UCS C210 M2 server and two Nexus 7Ks. We want to configure a port-channel (EtherChannel) over the server's two NICs and connect one to each Nexus 7K. Is this design supported, connecting a UCS port-channel to two Nexus 7Ks? We don't have 5K or 2K switches. What configuration is required on the Nexus 7Ks to achieve this vPC directly with the UCS C210 M2 server?
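A host-facing vPC like this is supported as long as the server's NIC team speaks LACP (or you use static "on" mode on both ends); from the 7Ks' point of view the rack server is just an LACP partner. A sketch of what goes on each Nexus 7K, assuming the vPC domain and peer-link already exist (interface, VLAN and channel numbers are hypothetical):
feature lacp
interface port-channel 50
  switchport
  switchport access vlan 100
  vpc 50
interface ethernet 3/1
  switchport
  switchport access vlan 100
  channel-group 50 mode active
  no shutdown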
I'm trying to implement PBR on an N7K. I found that there is no track mechanism that can be used with 'set ip next-hop', so if the next hop becomes unreachable the policy-routed traffic might be black-holed.
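I can't say whether your release adds any next-hop verification, so that part is worth checking in the PBR configuration guide for your NX-OS version. For reference, the basic shape of PBR on the N7K (names, addresses and the SVI are hypothetical):
feature pbr
feature interface-vlan
ip access-list PBR-TRAFFIC
  permit ip 10.1.1.0/24 any
route-map PBR-MAP permit 10
  match ip address PBR-TRAFFIC
  set ip next-hop 192.0.2.1
interface vlan 100
  ip policy route-map PBR-MAP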
I have multicast routing set up between two Nexus 7Ks. These Nexus 7Ks act as the distribution layer, and they connect to 48 different 4507 access-layer switches (vPC) and 3 6509 core-layer switches. The 4507 switches have two supervisor modules, one acting as active and the other as hot standby. Let's assume that multicast is working on the active module. If I pull out the active module, the hot standby takes its place. This is where multicast stops working on the switch. How can I get both Nexus 7Ks to work with multicast at the same time? Here is an example of 'show ip mroute' from both Nexus 7Ks; note that DR02 has a lot more entries in the routing table. Is this working as designed?
DR01-C7018# sh ip mroute
IP Multicast Routing Table for VRF "default"

(*, 224.0.0.0/4), bidir, uptime: 1y11w, pim ip
  Incoming interface: Ethernet3/1, RPF nbr: 172.18.254.109
  Outgoing interface list: (count: 1)
    Ethernet3/1, uptime: 29w3d, pim, (RPF)