I have a scenario where I'm going to have two Nexus 7010s connected as a core, and four Nexus 5510s connected in a star formation. Each Nexus 5510 will connect to the Nexus core via two 10 Gb links, attached to the core switches in vPCs.
I want to monitor our backup server (CommVault) because it is reporting that its library (Data Domain) is going offline.[code] The issue is that I am seeing a lot of unicast traffic (in Wireshark) that has nothing to do with the server on E2/11. Some of it is from different VLANs. There is far too much data (multiple Mbps) to keep Wireshark running long enough to capture our intermittent problem.
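If the goal is to capture only what the backup server actually sends and receives, a local SPAN session scoped to E2/11 filters out the unrelated traffic before it ever reaches Wireshark. A minimal sketch, assuming Eth2/12 is a free port for the capture PC (session number and port numbers are illustrative):

```
interface ethernet 2/12
  switchport
  switchport monitor

monitor session 1
  source interface ethernet 2/11 both
  destination interface ethernet 2/12
  no shut
```

Separately, stray unicast from other VLANs showing up on a non-SPAN port is often unknown-unicast flooding (for example, asymmetric paths causing MAC table misses), which is worth investigating in its own right.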
We have two Catalyst 6506 switches with 10 Gb uplinks and around 120 Catalyst 3750-X edge switches. However, the module on the core where the servers are connected still has 1000 Mbps ports. If we introduce a Nexus switch into the datacenter, what kinds of benefits can we reap, in a virtualised environment as well as a physical one? Here are some of my queries: Can we reduce the number of edge switches (through the virtual environment)? What is the interoperability between Catalyst IOS and NX-OS, and how will this affect the environment? What are the overall benefits? What are the cons of this introduction?
I have defined a trunk between a Nexus 5K and a Catalyst 3750 as a PVLAN trunk. Now I would like to add redundancy and performance, so I tried to establish a vPC between my pair of Nexuses and the 3750 stack, but the Nexus tells me that the port channel doesn't support PVLANs. In that case, do I get any benefit from running the trunk as a PVLAN trunk at all?
In our LAN network design, we have two Nexus 7010 switches at the core connected via vPC. The LAN access switches are directly connected to the core Nexus switches, via regular port channels on the 3750s and vPCs on the Nexuses. The core Nexus switches will be linked to an existing LAN network, and the applications will be progressively migrated from the old network to the new one. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
We recently replaced our core switch from a non-Cisco vendor with a Nexus 7010. With our old core switch, I had the ability to log changes to the ARP table, so if there was a DHCP conflict or a vMotion event, it would show up in the "show log" output. I've not found a way to do that with the Nexus switch, or at least no way to view the log. I have the command: logging level arp 6
I have to upgrade a Nexus 7010 with dual Sup engines from 4.2(4) to 5.2 and am hoping it could be an ISSU, although we are fine with an outage window. To upgrade from 4.2(4) to 5.2(5) I'll have to do a multi-hop upgrade, 4.2(4) -> 4.2(6) -> 5.2(5), and each hop would take 40-60 minutes. Do I spend 40-60 minutes on each hop, or just do a disruptive upgrade straight from 4.2(4) to 5.2(5)? Like I said, we are fine with an outage window.
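Before committing either way, `install all` can report up front whether a given image combination would be hitless or disruptive. A sketch of the disruptive single-jump path, with image filenames illustrative:

```
switch# show install all impact kickstart bootflash:n7000-s1-kickstart.5.2.5.bin system bootflash:n7000-s1-dk9.5.2.5.bin
switch# install all kickstart bootflash:n7000-s1-kickstart.5.2.5.bin system bootflash:n7000-s1-dk9.5.2.5.bin
```

The impact check lists, per module, whether the upgrade would be non-disruptive (ISSU) or require a reload, which makes the 40-60-minutes-per-hop versus one-reload trade-off concrete.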
I just deployed a Nexus 7010 switch at a server farm. After deployment, it was noticed that there were surges in latency across the network. The default gateway was then moved to the Nexus; with this, when pinging from a host on the same subnet there are intermittent bursts of latency.
Nexus-to-server pings are about 80 ms and sometimes even time out. To me, this is strange. The NX-OS version is 5-02a.
I'm looking to see if it is possible to run a vPC between two VDCs on a single 7010. We have a production setup that runs dual 7010s with vPCs between the chassis, but in our lab we only have a single 7010 with a 32-port 10-Gig module. I was thinking that maybe we could create four VDCs on the 7010 and run a vPC between the VDCs.
We will install a new Supervisor Engine in our Nexus 7010. One Supervisor Engine is already installed and is a year old, so the problem is that the two Supervisor Engines may have different NX-OS versions. Could this lead to a problem? Does the already-installed Supervisor Engine "update" the newer Supervisor Engine?
We have a couple of Nexus 7010s split into Core and Distribution VDCs. The mgmt0 interface on each of the Nexus VDCs (including the admin VDC) is configured with a different IP address, but on the same subnet, i.e., 10.10.10.1/24 for admin, 10.10.10.2/24 for Core, and 10.10.10.3/24 for Distribution. The mgmt0 physical port on each Nexus is connected to a physical Gig port on a 3750-X switch, and the 3750-X has uplinks back to the Nexus configured for vPC.
When I SSH to the VDC mgmt0 IPs from the 3750-X, I can access each of these VDCs without any problems. But if I enable routing on each of these links (OSPF) and advertise it to the WAN, I cannot see these routes advertised and also cannot see any of these routes in the local routing table. I'm wondering if I have to put these links in a VLAN and then advertise it to the WAN, but if that's the case, VLANs cannot be created on the admin (default) VDC.
We have, for nearly four years, used EIGRP on our 6513 to make use of two unequal links to our branch offices. This worked because we could use the variance command to make EIGRP insert two routes into the table, one from each carrier. Thus we could balance the load across them with a ratio similar to the ratio of the bandwidth of link A to link B.
We just purchased two Nexus 7010s to replace our single 6513 core. After much consternation, we have found from our Cisco SE that the Nexus 6.0.2 rendition of EIGRP does not support variance.
Why would Cisco take their own proprietary protocol and then gut it by removing features? I'm quite ready to send these Nexus boxes back in favor of a newer 6500 series. MEC doesn't work like it is supposed to, and show tech-support runs for over 24 hours without ever finishing (and we can repeat this on both boxes, multiple times).
We've opened a TAC case, but I just wondered if there is any workaround for the 'variance' command.
I am facing an issue with Nexus 7010 login authentication via a RADIUS server. I have two Nexus 7010s; one of them is working perfectly, but the other takes a long time to authenticate. If I use the local database to log in, it works perfectly. It also works fine if I log in from the console using RADIUS for authentication.
Does the Nexus 7010 support virtual switching yet? All of the posts I have found from about a year ago say that it is going to be supported, but there were no dates listed. I heard the same thing from Cisco a while back, but haven't followed up on it. If it is finally supported, are there any configuration guides available for it?
Attached you will find my network architecture, with two Nexus 7010s at the core layer and two Nexus 5020s at the distribution layer, each one with one N2148T fabric extender. PC-A1 and PC-A2 are connected to one N2148T; PC-B1 is connected to the other N2148T. Nexus-7000-1 is HSRP active for all VLANs, and Nexus-7000-2 is HSRP standby. PC-A1 and PC-A2 are connected to VLAN A; PC-B1 is connected to VLAN B. PC-A1 and PC-A2 have the same default gateway, corresponding to the HSRP IP on VLAN A. It happens that PC-A1 is able to ping PC-B1 while PC-A2 is unable to ping PC-B1. If I issue a traceroute from PC-A2, I see Nexus-7000-2's physical IP address as the first hop even though Nexus-7000-2 is HSRP standby. After the first hop the traceroute is lost. If I shut down port-channel 20 on Nexus-5000-2, PC-A2 starts to ping PC-B1. I can't understand what's wrong in this architecture.
I use a Nexus 7010 as our layer 3 router. I have the SSH feature turned on so I can manage it from the management interface. I just found out that users can use PuTTY to SSH to the local SVI interfaces of the Nexus. Although they still need a username and password to log in, we don't want them even to be able to bring up the welcome screen. For example, if a user's IP is 172.16.25.100, they can SSH to 172.16.25.1, which is the Nexus SVI interface.
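One common way to block this on NX-OS is an access-class on the vty lines, permitting SSH only from a management subnet. A minimal sketch, assuming the management stations live in 172.16.99.0/24 (the subnet and ACL name are illustrative); on releases without vty access-class support, a CoPP policy can achieve a similar effect:

```
ip access-list VTY-MGMT-ONLY
  permit tcp 172.16.99.0/24 any eq 22
  deny tcp any any eq 22

line vty
  access-class VTY-MGMT-ONLY in
```

With this in place, SSH attempts to any of the box's addresses (including SVIs) from outside the permitted subnet are dropped before the login banner appears.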
I believe I've enabled jumbo frames on our Nexus 7010s, one in each data centre:
system jumbomtu 9216, and mtu 9216 on the interfaces. I can also see "MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec" on the port channel between them. Yet when I ping between VLANs at each site with large packets I get 30% drops, and if I set the DF bit in the IP header to yes, 100% loss.
8798 bytes from 10.200.12.2: icmp_seq=19 ttl=254 time=8.024 ms
--- 10.200.12.2 ping statistics ---
20 packets transmitted, 14 packets received, 30.00% packet loss
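One thing worth checking: on the Nexus 7000, the Layer 3 MTU of an SVI is independent of `system jumbomtu` and defaults to 1500, so routed pings between VLANs can drop large packets even when the physical and port-channel interfaces show 9216. The 100% loss with DF set fits that picture, since oversized packets can then neither be fragmented nor forwarded. A sketch, with the VLAN number illustrative:

```
interface Vlan200
  mtu 9216
```

Every SVI along the routed path between the two sites would need the larger MTU for jumbo pings to survive end to end.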
I've got a pair of Nexus 7010s running vPC. I am having a multicast issue with a cluster of Linux servers that need to use multicast for cluster/high-availability operation. All the servers need to talk to a single multicast address, and I am having trouble getting them to communicate. I believe I need to enable the IGMP snooping querier on the N7Ks, on the VLAN where the servers reside. How do I enable the IGMP snooping querier on a VLAN?
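Assuming no PIM-enabled router exists on that VLAN (in which case the PIM router itself acts as querier), the snooping querier can be enabled per VLAN. A sketch, with the VLAN number and querier address illustrative; depending on the NX-OS release, the command sits under `vlan <id>` or `vlan configuration <id>`:

```
vlan configuration 100
  ip igmp snooping querier 10.1.100.254
```

The querier address just needs to be a valid address in the servers' subnet; an unused address or the SVI address are both common choices. Without an active querier, switches eventually age out group memberships and stop forwarding the multicast traffic, which matches the symptom described.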
Yesterday I configured the Nexus 7010 switch. I created a VDC, allocated a few ports, and configured a VLAN for testing. After enabling the interface-vlan feature, I was allowed to configure an L3 interface for the VLAN. I assigned an IP address and connected a few servers to check reachability, but it says "Destination Host Unreachable".
I have an environment with two Nexus 7010 switches along with two Nexus 5510s. I need to run OSPF as the layer 3 routing protocol across the vPC peer links. I have one link being used as a keepalive link and three other links being used as the vPC peer link.
1) Is it best to configure a separate vPC VLAN, e.g., 1010?
2) Is it best to configure a VRF context for the keepalive?
3) Or just use the management addresses as the peer IPs?
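On question 2, a dedicated VRF for the peer keepalive keeps it out of the global routing table and off the vPC peer link. A minimal sketch, with the interface, VRF name, domain ID, and addresses all illustrative:

```
vrf context vpc-keepalive

interface Ethernet1/48
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
  no shutdown

vpc domain 1
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive
```

Using the mgmt0 addresses (question 3) also works and is common, since mgmt0 already sits in its own management VRF; the main requirement either way is that the keepalive never rides the peer link itself.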
I am working on a Nexus 7010 with NX-OS 5.1.5. I have to delete the static route 10.10.0.0/16 via 10.16.0.21. [code] I try to remove the route with the command "no ip route 10.10.0.0/16 10.16.0.21" and I get the message below: "% Route not deleted, it does not exist." I don't understand why I get this message, because the static route exists.
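One possible cause is that the route was originally configured with extra parameters (a name, tag, or administrative distance) or inside a VRF, in which case the `no` form has to match the original line exactly. A sketch of how to check, with the distance value and VRF name illustrative:

```
show running-config | include "ip route 10.10.0.0"

! if the route was created with a distance, repeat it in the no form
no ip route 10.10.0.0/16 10.16.0.21 250

! if the route lives in a VRF, remove it from within that VRF context
vrf context SOMEVRF
  no ip route 10.10.0.0/16 10.16.0.21
```

The `include` filter shows the exact form the route was entered in, which is the form the `no` command must repeat.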
We have two Nexus 7010 switches configured with HSRP in the network. For all the VLANs, Core1 is active and Core2 is standby. In the current setup we have an external DHCP server, and DHCP relay is configured for all the VLANs on the active and standby switches. The setup is running NX-OS 5.2.
Activity done: during the maintenance activity, we isolated the Core1 switch from the network by disabling the vPC/keepalive links and all the uplinks from the access switches. The Core2 switch became active for all the VLANs.
Issue observed: new users were not getting IP addresses from the DHCP server. The Ethereal capture showed that the DHCP server was not receiving the DHCP requests from the Core2 switch. We disabled the DHCP feature on Core2 and enabled it again, with DHCP relay reconfigured on the VLAN interfaces; even after doing this, no change in behaviour was observed. Finally we brought Core1 back into the network by enabling all the links.
Observation: the moment the vPC link came up between the core switches, users started getting IPs from DHCP. Then we started enabling all the uplinks on Core1. Core1 again became active for all the VLANs, users continued getting IPs, and the network is running fine.
1. For one of the VLANs, the Core2 switch was made primary, and we checked DHCP functionality for new users; it worked fine. The aim was to identify whether anything was wrong on Core2 related to DHCP relay.
2. We then changed the priority for this VLAN again and made Core1 active for it. This time we disabled this VLAN on Core1, so Core2 became active, and tried a new user; DHCP functionality worked fine for the new user. In this case we actually simulated the same behaviour as when we observed the issue, with the only difference being that vPC was unavailable at the time of the issue, since Core1 was isolated from the network. Inputs needed:
Is there any known behaviour for DHCP functionality when vPC is unavailable? If we look at test scenario 2 (wherein Core1 was active for the VLAN, we disabled this VLAN on Core1, and Core2 was able to relay DHCP requests for new users in this VLAN), it was essentially the same as the scenario we observed at issue time.
OK, I didn't set up OSPF on my 7010. Today I found out that any static route I put into my 7010 gets sent into my MPLS network. On my 6509s you have to tag the static route for this to happen. I was under the impression the same was necessary for the 7010, or at least that the route had to match an access list. How can I fix the configuration below so that, by default, static routes are not redistributed into OSPF? [CODE]...
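On NX-OS, `redistribute static` requires a route-map, so the fix is to make that route-map match only tagged routes, mirroring the 6509 behaviour. A sketch, with the OSPF instance, tag value, route-map name, and example route all illustrative:

```
route-map STATIC-TO-OSPF permit 10
  match tag 100

router ospf 1
  redistribute static route-map STATIC-TO-OSPF

! only static routes carrying the tag are redistributed
ip route 192.0.2.0/24 10.1.1.1 tag 100
```

Untagged static routes fall off the end of the route-map and are denied, so nothing leaks into OSPF unless it is explicitly tagged.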
I have configured the "ip telnet source-interface loopback 0" command on a Nexus 7010, but when I telnet to another device and do a "show users" there, the IP address shown is that of the interface closest to the device I telnet to, not the IP address of the loopback. All interfaces are in VRF default. I am running NX-OS 5.1(6).