Cisco Switching/Routing :: Configure Sub Interface For Nexus 7k?
Jan 11, 2013
How do we configure a subinterface on a Nexus 7k? Do we have to issue the mac-address command under the physical interface and then configure the subinterface? If so, what MAC address do we type for the "mac-address" command? I did that and then configured the subinterface, but the interface/subinterface didn't come up. Do we have to bounce it a couple of times to bring it up?
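For what it's worth, an NX-OS subinterface normally doesn't need a manual mac-address at all; the usual requirement is just that the parent port be routed ("no switchport") before the subinterface is created. A minimal sketch, assuming a routed parent port (the interface number, VLAN, and addressing here are made up):

interface Ethernet1/1
  no switchport
  no shutdown
interface Ethernet1/1.100
  encapsulation dot1q 100
  ip address 192.0.2.1/24
  no shutdown

If the subinterface still stays down, check the parent interface itself; a subinterface cannot come up while its parent is down.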
I want to bring up a 40G interface between two Nexus 3064s over fiber, but it's not coming up. I have configured the switches for 48x10G + 4x40G. I'm using QSFPs on both switches and an OM3 straight fiber cable with MPO connectors. The interfaces are not coming up. Notably, the link does come up with the 3m coax (direct-attach) cable, so it's fine with copper but not with fiber.
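In case it helps, on the 3064 the 48x10G + 4x40G split is normally set with a hardware profile that only takes effect after a reload, and the optics can be inspected per port. A hedged sketch (the QSFP ports on a 3064 are Ethernet 1/49-52; exact syntax may vary by release):

configure terminal
hardware profile portmode 48x10g+4x40g
copy running-config startup-config
reload

show interface ethernet 1/49 transceiver

If the profile and optics check out, the usual remaining suspects are the polarity of the MPO run and unsupported third-party QSFPs.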
Regarding the out-of-band management interface: if I configure an interface VLAN (SVI) as the management interface for one VDC (the default VDC), then connect to that VDC via telnet, can I switch to any other VDC? (Assume I have an admin role that allows me to enter and configure all the VDCs.) If that is possible, I wouldn't have to make a dedicated management IP for each VDC; I would only need to do that if I want VDC-admin accounts that allow some users to access specific VLANs only. Is that true?
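For reference, from the default VDC a user with the network-admin role can usually hop between VDCs without a separate management IP per VDC, along these lines (the VDC name "Core" is hypothetical):

n7k# switchto vdc Core
n7k-Core# configure terminal
n7k-Core# switchback

Note that switchto is only available from the default (admin) VDC, not between non-default VDCs.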
How separate is the management interface on a Nexus 5548?
In context: what's the risk of having a Layer 2-only N5k in a DMZ and running the management ports down into an internal management VLAN to form peer-keepalive links and do software upgrades?
We have a couple of Nexus 7010s split into Core and Distribution VDCs. The mgmt0 interfaces on each of the Nexus VDCs (including the Admin VDC) are configured with different IP addresses on the same subnet, e.g. 10.10.10.1/24 for Admin, 10.10.10.2/24 for Core, and 10.10.10.3/24 for Distribution. The mgmt0 physical port on each Nexus is connected to a physical gig port on a 3750-X switch, and the 3750-X has uplinks back to the Nexus configured for vPC.
When I SSH to the VDC mgmt0 IPs from the 3750-X, I can access each of these VDCs without any problems. But if I enable routing (OSPF) on each of these links and advertise them to the WAN, I cannot see these routes advertised, and I also cannot see any of them in the local routing table. I'm wondering if I have to put these links in a VLAN and then advertise that to the WAN; but if that's the case, VLANs cannot be created in the Admin (default) VDC.
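One thing worth checking: on NX-OS, mgmt0 is permanently bound to the dedicated "management" VRF, which does not participate in OSPF, so its subnet will never appear in the default routing table or be advertised to the WAN. A sketch of how to verify this and give the management VRF its own static default route (the 10.10.10.254 gateway is hypothetical):

show vrf
show ip route vrf management
configure terminal
vrf context management
  ip route 0.0.0.0/0 10.10.10.254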
I'm facing a problem with loss of ping packets when I run a ping test from one Nexus 3k to another Nexus 3k that is directly connected. However, there are no error counters on the interfaces of either device. The ping failure occurs only when I run the test with a large number of ping packets; I don't see any loss with the default ping test (the default is 5 packets).
H/W: N3K-C3548P-10G   S/W: 5.0(3)A1(1)
nexus3k# ping 1.1.1.2
PING 1.1.1.2 (1.1.1.2): 56 data bytes
64 bytes from 1.1.1.2: icmp_seq=0 ttl=254 time=2.732 ms
64 bytes from 1.1.1.2: icmp_seq=1 ttl=254 time=2.732 ms
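To reproduce the loss, an extended ping with a larger count and payload can be used; NX-OS ping accepts count, packet-size, and timeout options (the values below are just illustrative):

nexus3k# ping 1.1.1.2 count 1000 packet-size 1472 timeout 1

Comparing loss at different packet sizes helps separate an MTU problem from a control-plane rate limit (CoPP) policing ICMP that is destined to the switch itself.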
When will the "default interface x/x" command be available on the Nexus 5000 platform? Even with the latest software version (5.1(3)N2(1a)) it is not possible. On the Nexus 7000 it works fine with the 5.2 train. Is there a feature request for it? If not, here it is! It's horrible to deconfigure many interfaces by hand, especially in N5k environments with many FEXes.
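For reference, the N7k syntax being requested, which wipes the configuration of one port or a range back to defaults:

switch(config)# default interface ethernet 1/1
switch(config)# default interface ethernet 1/1-12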
Customer's production environment has a Nexus 5000 using 4 x 1G interfaces configured as a port-channel (LACP) uplink to a C3560. The port-channel link is an 802.1Q trunk, but data transfer is slow.
Why is transfer performance poor, and how can it be fixed? The show interface output is as follows:
N-5548UP# sh int ethernet 1/30
Ethernet1/30 is up
 Hardware: 1000/10000 Ethernet, address: 547f.ee14.ed25 (bia 547f.ee14.ed25)
 MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
 reliability 255/255, txload 1/255, rxload 1/255
 Encapsulation ARPA
 Port mode is trunk
 full-duplex, 1000 Mb/s, media type is 10G
 Beacon is turned off
 Input flow-control is off, output flow-control is off
 Rate mode is dedicated
 Switchport monitor is off
 EtherType is 0x8100
 Last link flapped 9week(s) 6day(s)
 Last clearing of "show interface" counters 20w2d
 30 seconds input rate 152 bits/sec, 19 bytes/sec, 0 packets/sec
 30 [Code]...
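One common cause of poor throughput on a 4 x 1G LACP bundle is a load-balancing hash that polarizes most flows onto a single member link. A hedged sketch of checking and changing the hash on both sides (exact keywords vary by platform and release):

On the Nexus 5000:
show port-channel load-balance
configure terminal
port-channel load-balance ethernet source-dest-ip

On the Catalyst 3560:
show etherchannel load-balance
configure terminal
port-channel load-balance src-dst-ip

Also compare per-member counters (show port-channel traffic on the Nexus) to confirm whether one link is carrying nearly everything.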
I am working for an Air Force client and am adding a handful of 5548s into their network. My question is how TACACS+ is configured. My hands are tied with regard to testing in the operational environment, so I want to ensure the configs are correct prior to the deployment/maintenance window and avoid any remote-access issues.
I have read the "Cisco Press - TACACS+" config guide, and it was somewhat vague with regard to operational deployment.
When I try to enter the command string "aaa authentication login default group tacacs+ local", NX-OS asks me to input a "server group name". There are no server groups configured. Do I need them? Can I get by without configuring a group name? The client probably will not have one.
The Cisco IOS devices are configured with the normal aaa authentication/authorization parameters. Also, do the VTY ports default to SSHv2 and the correct TACACS+ parameters, given that the "transport input ssh" command is not available on NX-OS?
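In answer to the server-group question: yes, NX-OS wants a named TACACS+ server group; the raw "group tacacs+" keyword from IOS doesn't exist there. A minimal sketch, with the server IP, key, and group name all made up:

feature tacacs+
tacacs-server host 10.1.1.10 key MyTacacsKey
aaa group server tacacs+ TAC-GROUP
  server 10.1.1.10
  use-vrf management
  source-interface mgmt0
aaa authentication login default group TAC-GROUP

The use-vrf line assumes the TACACS+ server is reached via mgmt0; drop it for inband reachability. On the SSH question, NX-OS enables SSH (and not telnet) by default, so there is no "transport input" equivalent to set.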
With most of my Layer 2/Layer 3 switches, I'm accustomed to giving them an SVI on my management VLAN and calling it a day. I can't find in the Cisco Nexus guides how to do something similar; everything points to the mgmt0 physical interface, which seems to require uplinking it to an access port on another switch. Can somebody point me in the right direction for how to give the Nexus an IP that I can SSH/SNMP to across a trunk for management? I must just be missing the keyword; NX-OS is still quite a different beast. I see the manual says: "SSH has the following prerequisites: You have configured IP on a Layer 3 interface, out-of-band on the mgmt 0 interface or inband on an Ethernet interface." (Cisco Nexus 5000 Series Switch CLI Software Configuration Guide, page 284.) So how do I configure an IP on a Layer 3 interface on a Nexus?
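Based on the prerequisite quoted above, the inband option is an SVI, which first needs the interface-vlan feature. A sketch, assuming the platform and license allow SVIs, with made-up VLAN and addressing:

feature interface-vlan
vlan 100
  name MGMT
interface Vlan100
  ip address 10.1.100.5/24
  no shutdown

The VLAN then just needs to be allowed on the trunk toward the rest of the management network.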
I am trying to determine whether jumbo frames are enabled on our Nexus 7000, and I am getting mixed information back from the switch. It looks like the system jumbo MTU size is 9216 by default, but the interfaces all say their MTU is 1500 bytes. According to this article, the interface MTU should read 9216 if jumbo frames are enabled globally. Is this correct? Is there a way to verify whether jumbo frame support is turned on? [code]
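What you are describing is expected behavior on the 7000: "system jumbomtu" only defines the maximum MTU the system will accept, while every interface keeps its own MTU of 1500 until it is raised explicitly. A sketch of verifying and enabling jumbo on one port (the interface number is hypothetical):

show running-config all | include jumbomtu
configure terminal
interface ethernet 1/1
  mtu 9216
show interface ethernet 1/1 | include MTU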
According to Cisco, the Nexus 1010 can host up to six virtual service blades (VSBs). I can't find out how many Virtual Supervisor Modules and Virtual Ethernet Modules, which together make up one Nexus 1000v switch, can be supported by each VSB; in other words, how many Nexus 1000v switches can be created on one Nexus 1010 appliance? I also want to know how to configure Nexus 1000v switches with VMware. Without a Nexus 1010, the standalone Nexus 1000v was deployed from vCenter as an OVF; but how do you configure Nexus 1000v switches with VMware when the switches are hosted on a Nexus 1010 appliance?
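On the deployment question, a VSM on the 1010 is created as a virtual service blade from the appliance CLI rather than as an OVF from vCenter; the connection to vCenter is then configured inside the VSM exactly as on a standalone 1000v. A rough sketch, with the blade name and ISO filename made up:

virtual-service-blade VSM-1
  virtual-service-blade-type new nexus-1000v.iso
  enable

This is only an outline; the exact prompts for management IP, domain ID, and VLANs vary by release.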
I have configured the "ip telnet source-interface Loopback 0" command on a Nexus 7010, but when I telnet to another device and do a show users there, the IP address shown is that of the interface closest to the device I telnet to, not the IP address of the loopback. All interfaces are in the default VRF. I am running NX-OS 5.1(6).
I'm currently working on a plan to migrate our 6500s over to our new 7010s. At the time of the migration I want to tighten up our OSPF design and configure OSPF with "passive-interface default", then allow only those interfaces that should have OSPF neighbors to send hellos. The issue is that the command is not showing up under the OSPF process. What's even more interesting is that the Nexus 5.x Unicast Routing Configuration Guide shows that the "passive-interface default" command should be available there.
I'm currently running version 5.1(4) (though I'm looking to upgrade to 5.2 during my migration testing). I would rather configure passive-interface via the routing process than have to enter it on every interface.
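In the meantime, the per-interface form does exist and can stand in until the process-level command shows up after the upgrade (the interface number is hypothetical):

configure terminal
interface ethernet 1/1
  ip ospf passive-interface

show ip ospf interface should then confirm the passive state.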
Here is an example of what each switch logs when a server drops offline. Sample logs were taken between 5:32am and 5:35am on Feb 20. This particular server was having problems all weekend. Switch #1 encountered over 2000 interface resets; the corresponding vPC port on Switch #2 had only 13 resets.
NEXUS SWITCH #1
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel10: first operational port changed from Ethernet1/10 to none
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel10: Ethernet1/10 is down
2012 Feb 20 05:32:09 q91-sw01-5010 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel10: port-channel10 is down
2012 Feb 20 05:32:09 q91-sw01-5010 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel10 is down ( No operational members)
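A few places to look when chasing resets like these (syntax as on Nexus 5000; the port and port-channel numbers follow the logs above):

show interface ethernet 1/10 counters errors
show port-channel summary
show vpc consistency-parameters interface port-channel 10

Physical errors on the member port versus clean counters helps separate server-side cabling/NIC flaps from a switch-side configuration problem.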
I am deploying a pair of Nexus 5596s with 3750 PoE switches in the closets. I'm looking for a best practice for how to configure the Nexus 5596 to support proper QoS for EF at the core.
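Not an authoritative best practice, but a minimal classification sketch for honoring EF on a 5596 would look something like the following; qos-group 5 and all names are made up, and a matching network-qos/queuing policy is still needed to give that group priority treatment:

class-map type qos match-any CM-VOICE
  match dscp 46
policy-map type qos PM-CLASSIFY
  class CM-VOICE
    set qos-group 5
system qos
  service-policy type qos input PM-CLASSIFY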
We've gotten two Nexus 7009s in, and as I started to configure them I found I couldn't add VDCs. There was no license installed, and the only licenses that came with them are "Cisco DCNM for LAN Enterprise Lic for one Nexus 7000 Chassis". So my question is this: do I need to set up a DCNM server to get the license pushed to these two 7009s, or should there be another PAK for each chassis that I can register to get my enterprise services?
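For what it's worth, VDCs on the 7000 are unlocked by the Advanced Services package, which is a separate license from DCNM, so a DCNM PAK alone won't do it. Once the proper PAK is registered and the .lic file retrieved, installation is local to each chassis (the filename and server path below are placeholders):

copy scp://user@server/path/n7k-advsvcs.lic bootflash:
install license bootflash:n7k-advsvcs.lic
show license usage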
I understand VLANs on the Catalyst side of the house, on 2900 through 6500 Catalyst switches.
This 7010, running NX-OS 5.1(3), is one I did not set up but have to manage. That hasn't really been a problem till now.
My Nexus 7010 has a Layer 2-only VLAN 11. It is "Active", but the interface is "shutdown". Yet it is passing traffic across the directly connected ports on the Nexus 7010 and to other switches in my network. VLAN 11 is being sent out via VTP to all my switches, and things are running fine.
I need to create another L2-only VLAN. I can't find any docs indicating that a Layer 2 VLAN interface on NX-OS should be in "shutdown" mode as part of the setup, though I do see in the docs that it has to be set "Active" as part of the process.
Is this the correct way to set up an L2-only VLAN on NX-OS: leave the interface in "shutdown" but make the VLAN "Active"?
Mystery VLANs 4 and 6: the mystery deepens. I have other L2 VLANs, 4 and 6, that are NOT defined as "interface Vlan4" (or Vlan6) in the Nexus config, yet they are applied to GigE ports on the Nexus, and VLANs 4/6 are also being sent out via VTP to all switches. Even weirder, these VLANs have names associated with their numbers. They are valid VLANs that were configured on the old 6509 before the Nexus was installed.
I have checked all the switches: NONE are running in VTP server mode; all are clients. The Nexus 7010 is the only device running in VTP server mode.
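For what it's worth, this behavior is consistent with how NX-OS separates the two concepts: the L2 VLAN and the SVI ("interface VlanX") are independent objects, and a shut-down (or entirely absent) SVI only disables Layer 3 on that VLAN while Layer 2 forwarding and VTP advertisement carry on, which would explain both VLAN 11 and VLANs 4/6. A pure L2 VLAN needs nothing more than this (VLAN number and name made up):

vlan 12
  name SERVERS-L2
  state active

No SVI is required, and there is nothing to "no shutdown" unless Layer 3 is wanted.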
I am trying to configure EIGRP on my ASA DMZ interface; topology as follows: [code] The ASA is currently configured for EIGRP with the inside 3560-X switch and is passing routing updates properly. However, the ASA will not send/receive routing updates to/from the DMZ 3560-X switch, even though the two devices do establish an EIGRP neighbor relationship. [code]
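When the adjacency forms but routes don't flow, the usual ASA-side suspects are a missing network statement covering the DMZ subnet, a distribute-list filtering updates, or the interface left passive. A hedged sketch of the relevant ASA config (the AS number, interface name, and subnet are hypothetical):

router eigrp 100
 network 192.168.50.0 255.255.255.0
 no passive-interface DMZ

show eigrp neighbors and show eigrp topology on the ASA help confirm what is actually being learned.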
How do I configure this? I did it in the past but have kind of forgotten how. I have a stacked 3750 (two physical switches) connecting to a 2960.
I am creating trunk ports with access limited to VLANs 300, 600, and 700.
There are two interfaces connected from the 3750s (one on each physical stack member) to the 2960. I have the physical interfaces configured exactly the same.
Should I keep the configuration on the physical ports and not configure the port-channel interfaces? Do I need to configure port-channel load balancing? Is the channel-group mode sufficient? The goal is basically to create two links to the 2960 to double the bandwidth and provide redundancy.
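A sketch of the usual way to bundle this with LACP, assuming the two stack-member ports are Gi1/0/1 and Gi2/0/1 and the 2960 side uses Gi0/1-2 (adjust numbering); the port-channel interface is created automatically by the channel-group command:

On the 3750 stack:
interface range GigabitEthernet1/0/1 , GigabitEthernet2/0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 300,600,700
 channel-group 1 mode active

On the 2960 (no encapsulation command needed; it only speaks 802.1Q):
interface range GigabitEthernet0/1 - 2
 switchport mode trunk
 switchport trunk allowed vlan 300,600,700
 channel-group 1 mode active

"mode active" on both sides is enough to negotiate LACP; load balancing (port-channel load-balance src-dst-ip, set globally) is optional but often worthwhile.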
I am trying to configure a loopback interface as shown here: [URL], on the following device:
C3550 Software (C3550-IPSERVICESK9-M), Version 12.2(50)SE, RELEASE SOFTWARE (fc1), on port Gi0/1, which is using a 1000BASE-SX adapter. This is for troubleshooting purposes, and it does not appear to be a feasible option. Is there another way to accomplish this in IOS?
We are facing an issue of continuous packet discards on the Nexus 4001L link (int po2) to a Nexus 5020 switch. The Nexus 4001L is installed in an IBM BladeCenter, and we have FCoE enabled in this setup. [code]
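With FCoE in the mix, discards on the uplink often come down to congestion in one queue or a priority-flow-control mismatch between the blade switch and the 5020. A hedged starting point for narrowing it down (syntax as on Nexus 5000; the 4001L's NX-OS build may differ slightly):

show interface port-channel 2
show queuing interface port-channel 2
show interface priority-flow-control

Matching the drop counters to a specific class/queue tells you whether it is the FCoE class or best-effort traffic being discarded.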