Cisco Switching/Routing :: Nexus 7K Out Of Band Management Interface?
Dec 8, 2012
Regarding the out-of-band management interface: if I configure a VLAN interface (SVI) as the management interface for one VDC (the default VDC), and I connect to that VDC via Telnet, can I then switch to any other VDC? (Suppose I have the admin role, which allows me to enter and configure all the VDCs.) If that is possible, I wouldn't have to make a dedicated management IP for each VDC. I would only need to do that if I wanted VDC admin accounts that allow some users to access specific VLANs only. Is that true?
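For reference, hopping between VDCs from a session on the default VDC looks roughly like this (the VDC name here is made up); note that `switchto vdc` only works from the default/admin VDC, not between two nondefault VDCs:

```
! From the default (admin) VDC, with a role that permits it:
N7K# switchto vdc CORE        ! open a session in the CORE VDC
N7K-CORE# switchback          ! return to the default VDC
```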
I have: two Nexus 5596s connected to each other; mgmt0 NOT in use; an SVI for keepalives with an IP address and /30 netmask; vPC keepalives running over fiber on e1/1 (this works well); uplinks to the datacenter distribution switch (Cat 6500 VSS) over fiber on port-channel 1 (e1/2 and e1/10), also carrying the management VLAN (VLAN 14); and an SVI with an IP address for management purposes.
I can't get this to work. I can ping my whole network from the Nexus, but not the Nexus from my network. Pinging inside the mgmt VLAN is not possible either.
How separate is the management interface on a Nexus 5548?
In context: what's the risk of having a layer-2-only N5K in a DMZ and running the management ports down into an internal management VLAN, to form peer-keepalive links and do software upgrades?
We have a couple of Nexus 7010s split into Core and Distribution VDCs. The mgmt0 interfaces on each of the Nexus VDCs (including the admin VDC) are configured with different IP addresses on the same subnet, i.e. 10.10.10.1/24 for Admin, 10.10.10.2/24 for Core and 10.10.10.3/24 for Distribution. The mgmt0 physical port on each Nexus is connected to a physical gig port on a 3750-X switch, and the 3750-X has uplinks back to the Nexus configured for vPC.
When I SSH to the VDC mgmt0 IPs from the 3750-X, I can access each of these VDCs without any problems. But if I enable routing (OSPF) on each of these links and advertise it to the WAN, I cannot see these routes advertised, and I also cannot see any of these routes in the local routing table. I'm wondering if I have to put these links on a VLAN and then advertise it to the WAN, but if that is the case, VLANs cannot be created in the admin (default) VDC.
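One thing worth checking here (an aside, not a confirmed diagnosis): on the Nexus 7000 the mgmt0 interface lives in the dedicated management VRF, which does not participate in dynamic routing protocols, so its subnet will never appear in the default routing table or in OSPF. A few commands that make this visible:

```
N7K# show vrf                        ! lists the management VRF
N7K# show ip interface mgmt 0        ! mgmt0 and its VRF membership
N7K# show ip route vrf management    ! routes live here, not in vrf default
N7K# ping 10.10.10.2 vrf management  ! reachability test inside the mgmt VRF
```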
What is the point of it? It is not a remote console: if I reboot the switch, I cannot get back in via the out-of-band management port until the switch is fully running. Is this only for security purposes, so that all Telnet/SSH comes from an out-of-band network?
I've got a 3750X, IOS 15.0, IP Base license, reset to factory defaults, and I want to use the FastEthernet0 out-of-band management port on the back next to the console port. The idea is that this port should provide a management interface that does not participate in the routing table. The problem is that, contrary to the documentation, configuring an IP address on the interface does make it show up in 'show ip route'. So it's still part of the routing table. I'm also unable to find the commands to change this and set a default gateway for just the management interface. I'm pretty sure this has to be possible; I remember seeing something similar on an ASA once. The 3750 configuration guide on Cisco.com does not seem to mention it. I considered using VRF, but it's an IP Base license, so no VRF.
I want to configure management for some Nexus 5548s. I wanted to manage the switches via an SVI. I have read the following document, which gives details about the management SVI but doesn't answer all my questions. [URL] I am not running any layer 3 functionality on the switch and have no layer 3 license (which the link above mentions). Will I still be able to create a management SVI? I know I will need to enable the 'interface-vlan' feature to set up a management SVI; does that require a license?
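For what it's worth, a minimal sketch of a management SVI on a 5548 (the VLAN number, address and gateway are made up; whether this works without an L3 license on a given NX-OS release is exactly the open question above):

```
feature interface-vlan           ! enable SVIs

vlan 14
  name MGMT

interface Vlan14
  ip address 10.14.0.5/24        ! hypothetical management address
  no shutdown

! With no L3 module there is no routing, so reachability off-subnet
! needs a gateway; on some releases this is done with a static route:
ip route 0.0.0.0/0 10.14.0.1
```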
I have a Nexus 5548UP that would be managed by two organizations. Is it possible to set IP addresses for mgmt0 and an SVI (or an L3 interface) without using the L3 daughter card? I don't want to route between VLANs, just to separate management traffic.
We do not have an out-of-band management network, and setting one up at this point is not planned. We are mainly a switched environment; the only devices using L3 are the core switch (for WAN purposes) and the lab, because it mimics the production environment. I have two Nexus switches sitting on the other side of a 3750 switch, which is currently acting as an L3 device because this is a pre-production environment for a new project. We had an issue with management of the devices before, but our workaround was to put them on the management VLAN directly off the core, allowing only management traffic to pass via mgmt0 on each device. The problem I'm having now is that I've set up the mgmt0 interfaces on both switches as the vPC keepalive link only (vPC traffic is going across 2x 10 Gb connections, and the links to the 3750 are 1 Gb each, trunked), and I have lost the ability to use the mgmt0 connections for management. How can I connect my management connection through either the 3750 or directly off the core switch (as that's what will happen once it's put into production)?
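As an aside, mgmt0 can normally carry both the vPC peer-keepalive and ordinary management sessions at the same time, since the keepalive is just IP traffic in the management VRF. A sketch (addresses hypothetical):

```
interface mgmt 0
  ip address 10.99.0.11/24       ! hypothetical mgmt address on switch 1

vpc domain 10
  peer-keepalive destination 10.99.0.12 source 10.99.0.11 vrf management

! SSH to 10.99.0.11 should still work, as long as the path through
! the 3750/core reaches the management subnet.
```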
I am setting up a link between buildings that uses wireless links. I'm using Layer 3 routed ports on 2 3560 switches to handle the routing between sites. Normally I would just put these in a /30 and then the switches handle the rest. However, the wireless access points have a web interface for managing them that I want to be able to access, but it's only available on the single NIC that also carries traffic. What would be the best way of making this work? Should I make the link a /29 and give the access points an IP in the same range? If this is the case what do I use for the default gateway for the access points?
I have included a diagram to try to explain the issue more clearly. The IP addresses in black are what I would use if this were a standard cable (and indeed this will work, but I won't be able to access the admin interface of the wireless AP), and the red IP addresses are the alternative if I use a /29 (but, as I said, I'm not sure what to use for the default gateways).
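One way this is sometimes done (a sketch, not a recommendation; all addresses hypothetical): widen the link to a /29, give each AP an address in it, and have each AP use its local switch's routed-port address as its default gateway:

```
! Site A 3560 routed port
interface GigabitEthernet0/1
  no switchport
  ip address 192.168.100.1 255.255.255.248

! Site B 3560 routed port
interface GigabitEthernet0/1
  no switchport
  ip address 192.168.100.2 255.255.255.248

! Site A AP: 192.168.100.3, default gateway 192.168.100.1
! Site B AP: 192.168.100.4, default gateway 192.168.100.2
```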
How do we configure a subinterface on a Nexus 7K? Do we have to issue the mac-address command under the physical interface and then configure the subinterface? If yes, what MAC address do we type for the "mac-address" command? I can do that and then configure the subinterface, but the interface/subinterface didn't come up. Do we have to bounce it a couple of times to bring it up?
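For comparison, the usual minimal subinterface configuration on NX-OS does not require a manually set MAC address at all (the VLAN tag and address here are hypothetical):

```
interface Ethernet2/1
  no switchport                  ! make the parent a routed port
  no shutdown

interface Ethernet2/1.100
  encapsulation dot1q 100        ! dot1q tag for this subinterface
  ip address 10.100.0.1/24
  no shutdown                    ! subinterfaces must be enabled too
```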
I want to bring up a 40G interface between two Nexus 3064s over fiber, but it's not coming up. I have configured the switches for 48x10G and 4x40G. I'm using QSFPs on both switches and an OM3 straight fiber cable with MPO connectors. The interfaces do not come up. Notably, the link does come up with a 3 m coax cable, so it's fine with coax but not with fiber.
I'm facing a problem with loss of ping packets when I run a ping test from one Nexus 3K to another Nexus 3K connected directly; however, there are no error counters on the interfaces of either device. The ping failure occurs only when I run the test with a large number of ping packets. I don't see the loss with the default ping test (which is 5 packets).
H/W: N3K-C3548P-10G
S/W: 5.0(3)A1(1)

nexus3k# ping 1.1.1.2
PING 1.1.1.2 (1.1.1.2): 56 data bytes
64 bytes from 1.1.1.2: icmp_seq=0 ttl=254 time=2.732 ms
64 bytes from 1.1.1.2: icmp_seq=0 ttl=254 time=2.732 ms
When will the command "default interface x/x" be available on the Nexus 5000 platform? Even with the latest software version (5.1.3.N2.1a) it is not possible. On the Nexus 7000 it works fine with the 5.2 train. Is there a feature request for it? If not, here it is!! It's horrible to deconfigure many interfaces, especially in N5K environments with many FEXes.
A customer production environment has a Nexus 5000 using 4x 1G interfaces configured as a port-channel (LACP) uplink to a C3560. The port-channel link is an 802.1Q trunk, but data transfer is slow; 'sh int' displays the following.
Why is transfer performance poor, and how can it be fixed?
N-5548UP# sh int ethernet 1/30
Ethernet1/30 is up
  Hardware: 1000/10000 Ethernet, address: 547f.ee14.ed25 (bia 547f.ee14.ed25)
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is trunk
  full-duplex, 1000 Mb/s, media type is 10G
  Beacon is turned off
  Input flow-control is off, output flow-control is off
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 9week(s) 6day(s)
  Last clearing of "show interface" counters 20w2d
  30 seconds input rate 152 bits/sec, 19 bytes/sec, 0 packets/sec
  30 [Code]...
I am trying to determine if jumbo frames are enabled on our Nexus 7000, and I am getting mixed info back from the switch. It looks like the system jumbo MTU size is 9216 by default, but the interfaces all say the MTU of the interface is 1500 bytes. According to this article, the interface MTU should read 9216 if jumbo frames are enabled globally. Is this correct? Is there a way to verify whether jumbo frame support is turned on? [code]
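A sketch of how this is commonly checked and set (hedged; behavior varies by NX-OS release): the system jumbo MTU is only the ceiling you are allowed to configure, while each interface still defaults to 1500 until its own MTU is raised, which would explain the "mixed" readings above.

```
N7K# show running-config all | include jumbomtu   ! system-wide ceiling
N7K# show interface ethernet 1/1 | include MTU    ! per-interface value

N7K(config)# interface ethernet 1/1
N7K(config-if)# mtu 9216                          ! raise this interface to jumbo
```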
I have configured the "ip telnet source-interface loopback 0" command on a Nexus 7010, but when I telnet to another device and do a 'show users' there, the IP address shown is that of the interface closest to the device I telnet to, not the IP address of the loopback. All interfaces are in VRF default. I am running NX-OS 5.1(6).
I'm currently working on a plan to migrate our 6500s over to our new 7010s. At the time of the migration I want to tighten up our OSPF design and configure OSPF with "passive-interface default", then allow only those interfaces that should have OSPF neighbors to send hellos. The issue is that the command does not show up under the OSPF process. What's even more interesting is that the Nexus 5.x Unicast Routing Configuration Guide shows "passive-interface default" as a valid option.
I'm currently running version 5.1(4) (though I'm looking to upgrade to 5.2 during my migration testing). I would rather configure passive-interface via the routing process than have to enter it on every interface.
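For reference, the intended end state would look something like this (process tag, interface and area are hypothetical; whether "passive-interface default" is accepted under the process depends on the NX-OS release, which matches the 5.1(4)-vs-5.2 observation above):

```
router ospf 1
  passive-interface default        ! all OSPF interfaces passive by default

interface Ethernet1/1
  ip router ospf 1 area 0.0.0.0
  no ip ospf passive-interface     ! re-enable hellos toward real neighbors
```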
Here is an example of what each switch logs when a server drops offline. Sample logs taken between 5:32am and 5:35am on Feb 20. This particular one was having problems all weekend. Switch #1 encountered over 2000 interface resets. The corresponding VPC port on Switch #2 only had 13 resets.
NEXUS SWITCH #1
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel10: first operational port changed from Ethernet1/10 to none
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel10: Ethernet1/10 is down
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel10: port-channel10 is down
2012 Feb 20 05:32:09 q91-sw01-5010 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel10 is down (No operational members)
I understand the vlans on the catalyst side of the house on 2900 to 6500 Catalyst switches.
This 7010, running NX-OS 5.1(3), I did not set up, but I have to manage it. It hasn't really been a problem till now.
My Nexus 7010 has a layer-2-only VLAN 11. It is "active", but the VLAN interface is "shutdown". Yet it is passing traffic across the directly connected ports on the Nexus 7010 and to other switches in my network. VLAN 11 is being sent out via VTP to all my switches, and things are running fine.
I need to create another L2-only VLAN. I can't seem to find any docs indicating that a layer 2 VLAN interface on NX-OS should be in "shutdown" mode as part of the setup. I do see in the docs that it has to be set "active" as part of the process.
Is this the correct way to set up an L2-only VLAN on NX-OS? Leave the interface in "shutdown" but make the VLAN "active"?
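For what it's worth, an L2-only VLAN does not need an SVI at all; an "interface Vlan11" in shutdown only disables routing into the VLAN, not L2 forwarding, which would be consistent with the behavior described above. A minimal sketch (VLAN number and name hypothetical):

```
vlan 20
  name SERVERS-L2
  state active          ! 'active' is also the default state

! No 'interface Vlan20' is created; L2 switching works without an SVI.
! An SVI left in shutdown only blocks routed (L3) traffic for the VLAN.
```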
Mystery VLANs 4 and 6: the mystery deepens. I have other L2 VLANs, VLANs 4 and 6, that are NOT defined as "interface Vlan4" in the Nexus config, yet they are applied to GigE ports on the Nexus, and these VLANs are also being sent out via VTP to all switches. Even weirder is that these VLANs have names associated with the numbers. These are valid VLANs that were configured on the old 6509 before the Nexus was installed.
I have checked all switches; NONE are running in server mode for VTP, all are in client mode. The Nexus 7010 is the only device running in VTP server mode.
We are facing an issue of continuous packet discards on the Nexus 4001L link (int po2) to a Nexus 5020 switch. The Nexus 4001L is installed in an IBM BladeCenter, and we have FCoE enabled in this setup. [code]
I have been tasked to replace the existing Cat 6500 and 3750 switches with Nexus 7000s and Nexus 2000s. I was told my boss initially plans to get 2x Nexus 7000 and eventually grow to 4x Nexus 7000s. For the Nexus, is there a list of tasks/points that I need to consider for the initial design?
Can i just link the Nexus 7000 like the following?
N7k-A ========= N7k-B
  |               |
lots of N2ks   lots of N2ks
We are planning a Nexus datacenter project with this layout. Our experience with Nexus switches is not large so far, and the manuals are very extensive. Both N5Ks should be connected directly to all 4 N2K switches. I did not find a layout like this in the manuals, only a design where just 2 N2Ks are connected to one N5K, with this FEX config. Now I'm not sure whether it is right to build a config like this with the same slots and FEXes, or with different slots and FEXes.
I have a Cisco ASA 5505, and I have my internal and external interfaces configured, but I currently cannot ping from the inside to an IP address on the outside. I had this set up and working, and I have another set of equipment that I am replacing that works with my service provider, so I know it is a configuration issue. When I ping 4.2.2.2, for example, I get:
Destination host unreachable
Do I need to add a static route from my inside interface to my outside interface?
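Two things commonly checked in this situation (a sketch, not a diagnosis; the next-hop address is made up): a default route out the outside interface, and ICMP inspection so echo replies are allowed back in through the ASA:

```
! Default route toward the provider's next hop (address hypothetical)
route outside 0.0.0.0 0.0.0.0 203.0.113.1

! Allow return ICMP by inspecting it in the default policy
policy-map global_policy
 class inspection_default
  inspect icmp
```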
In our organisation we have multiple Nexus 5000 switches from which Cisco LMS 4.2.2 cannot get the running-config and startup-config with the Archive Management process. When it tries to get them, I get an error as follows:
*** Device Details for SF-DERA-01 ***
Protocol ==> Unknown / Not Applicable
Selected Protocols with order ==> TFTP, SSH, SCP
What Cisco LAN Management Solution version is required to support Cisco Nexus 5548P and Nexus 5596UP switches? These new Cisco switches are being implemented on a customer network, and the customer requires that this equipment be supported by an LMS solution (the customer is currently using LMS 3.2.1).
I am just about to deploy some new 4900Ms for a customer. I want to know whether configuring management for the 4900 (everything like NTP, AAA, SNMP, DNS) is doable through the management interface in the management VRF, and whether there are caveats to be aware of.
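As a hedged sketch only (the VRF name, server addresses and keys are assumptions; check what the 4900M's software actually creates for its OOB port and which services are VRF-aware on that release), VRF-aware variants of the usual management services on IOS look like:

```
! Hypothetical management VRF name and server addresses throughout
ntp server vrf mgmt-vrf 10.0.0.10
snmp-server host 10.0.0.20 vrf mgmt-vrf version 2c COMMUNITY
ip name-server vrf mgmt-vrf 10.0.0.30    ! if the release supports VRF-aware DNS
aaa group server tacacs+ MGMT-TACACS
 server-private 10.0.0.40 key SECRET
 ip vrf forwarding mgmt-vrf
```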