Cisco Switching/Routing :: Connecting 3750 Switches To Nexus 2232 To Migrate Server Subnets
Nov 17, 2011
I am in the process of migrating our existing server farm subnets to our new Nexus server farm, and I discovered something I wasn't expecting. My intention is to migrate our existing legacy server farm, which is composed of four paired 3750 switches, off of our core 6509s and onto the Nexus, connecting them to the 2232s via multi-gigabit port-channel connections, two port channels per switch stack.
NOTE: this is expected to be a temporary move, as next year we intend to install additional N2Ks and move servers over to these directly. But to minimize the outage/downtime it will be better to move the subnets and switches all at once.
These connections would be 1 Gb links grouped into port channels, one from each switch into one of the two 2232s.
The problem I discovered is that Cisco does not intend for switches to be connected to the Nexus: the 2232 immediately disables the ports when they see BPDUs.
I found a config that does work, and it does fail over from one port-channel connection to the other, but with the limitation that when the original port channel comes back online traffic does not fail back to it, an acceptable situation for us. But I am wondering whether Cisco would support this design if we did experience issues down the road.
The only issue I really see is that to get it to work the config is different on the two N5Ks; see the pertinent config below for the connections. Both are running the same OS.
augs1-ba-ar17# sh ver
Cisco Nexus Operating System (NX-OS) Software
TAC support: [URL]
sh fex
  FEX      FEX             FEX              FEX
Number   Description     State            Model                Serial
------------------------------------------------------------------------
105      FEX0105         Image Download   N2K-C2232TM-E-10GE

sh fex
  FEX      FEX             FEX              FEX
Number   Description     State            Model                Serial
------------------------------------------------------------------------
105      FEX0105         Offline          N2K-C2232TM-E-10GE
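For reference, a minimal sketch of the kind of workaround that usually makes a switch-behind-FEX topology come up; this assumes (my assumption, not necessarily the poster's actual config) that the 3750 side suppresses BPDUs toward the FEX host ports, since those ports error-disable on any BPDU:

    ! 3750 side (interface numbers are assumptions)
    interface Port-channel10
     description Uplink to N2K-2232 host ports
     switchport mode trunk
     spanning-tree bpdufilter enable   ! keep BPDUs off the FEX host ports

Note that filtering BPDUs removes the loop protection spanning tree would otherwise provide, which is presumably why Cisco treats switches behind a FEX as unsupported.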
As per my understanding, the Cisco Nexus 2232 can only connect to an HP c7000 chassis if we are using a pass-through module in the c7000; the Nexus 2232 can only connect to end hosts and not to a switch. Is there a new feature in the Nexus 2232 that enables it to connect to a switch like HP FlexFabric?
I am trying to connect a Cisco Blade Switch 3020 to a Nexus 2232 with EtherChannel, but when I connect the second link I get flapping on the VLANs. [code] Why are the VLANs flapping? Is something wrong in the config? [code]
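In case it helps frame the question, here is a minimal sketch (FEX, interface, and channel numbers are assumptions) of what the 2232 side normally needs: both FEX host ports must be members of one port channel matching the 3020's EtherChannel, otherwise each link forwards independently and MAC addresses flap between the two:

    ! N5K parent, FEX 105 host ports (assumed numbering)
    interface ethernet 105/1/1-2
      switchport mode trunk
      channel-group 20 mode on   ! static channel, matching the 3020 side
    interface port-channel 20
      switchport mode trunk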
I want to be sure of this before ordering: can the enterprise-class stackable switch model WS-C3750X-48T-S be stacked with the WS-C3750G-48TS model?
I have two separate companies, both with staff at two locations and their own networks, connected with a wireless antenna which provides a high-speed LAN connection between offices. I only have a single path through this antenna bridge. I have an SG200-08 switch at each end. What I am attempting to do is utilise the switches to take the two subnets at one office, combine them onto one link for transfer through the antenna bridge, and then resolve them into the two separate networks again at the other end.
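For what it's worth, the usual way to carry two subnets over one link is 802.1Q tagging: put each subnet in its own VLAN and trunk both VLANs across the bridge. The SG200 is configured through its web GUI rather than a CLI, but in IOS-style notation the concept looks like this (VLAN and port numbers are assumptions):

    ! Office-side ports: one access VLAN per company subnet
    interface GigabitEthernet0/1
     switchport mode access
     switchport access vlan 10
    interface GigabitEthernet0/2
     switchport mode access
     switchport access vlan 20
    ! Port facing the antenna bridge: trunk carrying both VLANs
    interface GigabitEthernet0/8
     switchport mode trunk
     switchport trunk allowed vlan 10,20

Mirror the same setup at the other end and the two subnets separate out again.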
I have a video feed coming into my 3750. It comes in at "5 minute input rate 18777000 bits/sec, 1695 packets/sec". However, the uplink to the router is much different: "5 minute output rate 130000 bits/sec, 28 packets/sec". I am in a lab and about ready to go into the testing phase for a project when we discovered this problem, as the video feed is not viewable on the other end.
Below is the config and capture from the switch.
BLOSSw1#sh int g1/0/6
GigabitEthernet1/0/6 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is a44c.112f.3506 (bia a44c.112f.3506)
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 4/255
  Encapsulation ARPA, loopback not set
  Keepalive not set
  Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000BaseTX SFP
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:00, output hang never
  Last clearing of "show interface" counters 15:16:25
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute
I've got a stack of 3750s configured and everything is working fine, but the web interface for the stack will only present the web-based Express Setup page. I can't get it to go away. What needs to be done to clear whatever flag or register causes this behavior? The CLI is fine; it's not trying to force Express Setup there, only in the web interface. Will reloading the software image fix this?
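One thing that may be worth trying before a reload; this is only a sketch and assumes the IOS release on the stack supports the "setup express" global command:

    conf t
     no setup express    ! disable the Express Setup behavior
     no ip http server   ! or simply turn the web UI off entirely
    end
    write memory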
I have a 2232 dual-homed to two 5548s via a port-channel/vPC. On 5548A I configure the port for the 2232 with a VLAN, plug into that port, and it doesn't come up (inactive). I go to 5548B (the primary), configure the port, and it comes up.
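For context, with a dual-homed (active/active) FEX the host-interface configuration is supposed to be applied identically on both parents; a host port typically stays down on the peer where the config is missing or mismatched. A minimal sketch (FEX, port, and VLAN numbers are assumptions), entered the same on both 5548s:

    ! Identical on 5548A and 5548B
    interface ethernet 105/1/10
      switchport access vlan 50
      no shutdown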
I have two OSPF processes running on a single 3750 edge router that has a dedicated transport circuit back to our network core. We are adding an additional transport-only circuit into a new location that is also part of the second OSPF process backbone, which will connect back to our core. There will also be a 3750 for this new circuit termination. Currently we are only redistributing OSPF process 2 into OSPF process 1 (1 = core backbone).
router ospf 1
 redistribute ospf 2 subnets
We have no need to have OSPF process 1 redistributed into the process 2 tables. That being said, when we add an additional transport circuit, or path back to our core backbone, will this configuration present any issues with the redistribution process and failover?
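For illustration, a common safeguard once a second redistribution point exists is to tag redistributed routes and refuse to re-import anything already carrying the tag; a minimal sketch (the tag value and route-map name are assumptions, not from the original post):

    route-map O2-INTO-O1 deny 5
     match tag 100           ! drop anything we already injected elsewhere
    route-map O2-INTO-O1 permit 10
     set tag 100             ! mark everything we inject
    router ospf 1
     redistribute ospf 2 subnets route-map O2-INTO-O1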
In our LAN network design, we have two Nexus 7010 switches at the core connected via vPC. The LAN access switches are directly connected to the core Nexus switches via regular port channels on the 3750s and vPC on the Nexus. The core Nexus switches will be linked to an existing LAN network, and the applications will be progressively migrated from the old to the new network. In this scenario, three VDCs are planned on the Nexus: one for the interconnect (and WAN at a later stage), one for the LAN/local services, and one for the building facilities/local services.
We recently had two Nexus 5Ks installed in our data centre, and we are connecting three new servers to the Nexus switches. Each server has two 10Gb ports: one port of server A is connected to 5K1 and the other port is connected to 5K2 (the other two servers are connected to 5K1 and 5K2 the same way). So do we need to create a vPC with a port-channel (like vPC 1, 2, and 3) for each server connection?
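As a point of reference, this is the shape such a setup usually takes: one port channel and one vPC number per server, defined identically on both 5Ks. A minimal sketch for server A (all numbers are assumptions):

    ! Same config on 5K1 and 5K2 (the member interface differs per switch)
    interface ethernet 1/1
      switchport access vlan 100
      channel-group 101 mode active   ! LACP toward the server NIC team
    interface port-channel 101
      switchport access vlan 100
      vpc 101

Servers B and C would each get their own port-channel/vPC pair (e.g. 102 and 103).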
We have two Nexus 7Ks connected via a vPC peer-link, and an edge switch connected to the core using HSRP via vPC. Now we have one orphan port connected to each Nexus (a WLC on each). The problem is I can't seem to connect to / ping one of the WLCs connected to an orphan port, and I think it is probably due to the packet arriving at the secondary HSRP peer, traversing the peer-link, and being dropped.
Now what is the best practice for HSRP with vPC for orphan ports? The problem is I can only ping one WLC from a machine. On doing a traceroute I find the packets reach N7k1 and the WLC connected to its own port, but not the WLC connected to N7k2, apparently because the packet traverses the peer-link and is dropped there. What is the best practice to sort this out and reach both WLCs at the same time? Do I move WLC 2 to N7k1?
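For what it's worth, the common recommendations for singly attached devices like these WLCs are to home them both to the same vPC peer or to dual-home them. One related knob that does exist is per-interface orphan-port handling; the sketch below (interface is an assumption) suspends an orphan port if the peer-link fails, which addresses peer-link failure behavior rather than the specific ping problem described:

    interface ethernet 1/20
      description WLC orphan port
      vpc orphan-port suspend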
I'm trying to get the VFC up on a B22 FEX blade in a Dell chassis which is connecting to a Nexus 5596UP.
The message I get is:
# sh int vfc1033
vfc1033 is down (Error Disabled - VLAN L2 down on Eth interface)
    Bound interface is port-channel3
    Hardware is Ethernet
    Port WWN is 24:08:00:2a:6a:0d:db:3f
    Admin port mode is F, trunk mode is on
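For context, that particular error usually means the FCoE VLAN bound to the vFC is not up or not allowed on the Ethernet side. A minimal sketch (VLAN and VSAN numbers are assumptions) of the pieces that have to line up:

    vlan 1033
      fcoe vsan 33                             ! FCoE VLAN mapped to the VSAN
    interface port-channel 3
      switchport mode trunk
      switchport trunk allowed vlan add 1033   ! FCoE VLAN must be allowed here
    interface vfc1033
      bind interface port-channel 3
      no shutdown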
I've been asked whether we can use HP-branded 10G SFP+ modules (P/N 455885-001) in Nexus 2Ks to provide 10G connections to HP c-Class enclosures. We've used HP-branded twinax, and Cisco-branded SFP+ modules and twinax, but we have a raft of HP 10G SFP+ modules sitting in a store room gathering dust, and now we want to save some money by not having to buy the Cisco parts to match.
Configuring OSPFv2 on Nexus 5K switches: after configuring area 0 or area 10, it shows as 0.0.0.0 or 0.0.0.10 instead. I'm planning to uplink a couple of ASAs with OSPF enabled, and I'm wondering if the area format shown will be a problem. Is this how it is supposed to look on the Nexus 5K? And will the 5K be able to form adjacencies with other, non-Nexus devices that have areas 0 and 10?
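For comparison, NX-OS simply normalizes the area ID to dotted-decimal notation: "area 10" and "area 0.0.0.10" name the same area, and the value carried in the OSPF packets is identical either way, so adjacencies with IOS or ASA neighbours configured with "area 10" are unaffected. A minimal sketch (process tag and interface are assumptions):

    feature ospf
    router ospf 1
    interface ethernet 1/1
      ip router ospf 1 area 10    ! accepted as typed, displayed as 0.0.0.10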
I have two sites located approximately 30 kilometres apart; I will call them site 1 and site 2. The sites are connected by a Layer 2 1Gb fibre connection. I would like to add 2 x Cisco Nexus 5548UP switches at site 1 and connect these two switches via GLBP.
I would then like to add 2 x Cisco Nexus 5548UP switches at site 2 and connect those two via GLBP as well. Finally, I would like to connect the two 5548UP switches at site 1 and the two 5548UP switches at site 2 via GLBP.
I have a small doubt about the Nexus 7K, 5K, 2K & 1K. We want to back up the running config to my desktop through TFTP. When I tried to back it up from the Nexus switches, it showed the choices below. Nexus 7K: [code]
It shows two choices; which one do I follow: "copy running-config startup-config" or "copy running-config startup-config vdc-all"? [code]
It shows two choices; which one do I follow: "copy running-config startup-config" or "copy running-config startup-config fabric"?
It shows three choices; which one do I follow: "copy running-config startup-config", "copy running-config startup-config fabric", or "copy running-config startup-config vdc-all"? [code]
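For reference, a copy straight to a TFTP server (rather than to startup-config) looks like this on NX-OS; the server address and filename are assumptions, and on most Nexus boxes the management VRF has to be named explicitly:

    copy running-config tftp://192.0.2.10/switch-backup.cfg vrf management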
We are planning to implement a new 10Gig network. For this we have chosen the Nexus 5596 and 2232 pair. What transceivers/accessories should I order for connecting 10Gig servers with NC552SFP dual-port NICs to the 2232s in high availability (NIC 1 to 2232PP 1 and NIC 2 to 2232PP 2)? Should I also order the FEX SFP+ uplink connectors to connect the 5K and the 2232, or are they included with the Nexus box itself? Also, is it fine to have the Nexus 2232 on floor 2 and the parent Nexus on floor 3 (say, separated by a maximum of 400 metres)?
What is the best option for load balancing between 2 x Cisco Nexus 5548UP switches located at one site and 2 x Cisco Nexus 5548UP switches located at another site?
The sites are connected via a 1Gb fibre connection. I am unable to use GLBP until GLBP is supported in future software releases.
We have a Nexus 7009 switch and want to configure a SPAN session.
We are using F2 and M2 cards, each in a separate VDC. Our server is connected to the M2 card on Eth4/6, and we want to monitor traffic from VLAN 161, which lives on the F2 card.
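As a sketch of the session itself (the session number is an assumption): note that a classic SPAN session expects the source VLAN and the destination port to live in the same VDC, which is the complication in this design.

    interface ethernet 4/6
      switchport monitor           ! destination port must be in monitor mode
    monitor session 1
      source vlan 161 both
      destination interface ethernet 4/6
      no shut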
I have two Nexus switches in vPC. When I create/delete a VLAN it gets updated on the same switch, but the VLAN information does not get propagated to the neighbouring switch. The revision number remains the same even if I modify a VLAN. I have also tried changing the VTP status to transparent mode and reverting it to client mode, but no luck. I have double-checked the VTP domain names and password. However, the same setup in a different location works perfectly fine.
According to Cisco, the Nexus 1010 can host up to six virtual service blades. I can't find out how many Virtual Supervisor Modules and Virtual Ethernet Modules, which together make up one Nexus 1000v switch, can be supported by each virtual service blade. In other words, how many Nexus 1000v switches can be created on a Nexus 1010 appliance? And how do you configure Nexus 1000v switches with VMware when the switches are hosted on a Nexus 1010 appliance? Without the Nexus 1010, a standalone Nexus 1000v switch was configured from vCenter as an OVF.
I am trying to understand what load-balancing method is used on a port channel on a Nexus switch. I have a server connected by a vPC to two Nexus switches; the Nexus switches are only acting as Layer 2 switches. I have a 6509 connected via an upstream link that does all of the routing for my VLANs. If the server connected to the Nexus switches talks to a server on my 6509, what load balancing happens on the Nexus going across vPC 27, which is a Layer 2 trunk going up to my 6509? Is it done on Layer 2 or Layer 3 flows?
My Nexus shows the default load-balancing configuration:
Port Channel Load-Balancing Configuration:
System: source-dest-ip

Port Channel Load-Balancing Addresses Used Per-Protocol:
Non-IP: source-dest-mac
IP: source-dest-ip source-dest-mac
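To read that output: with the system hash set to source-dest-ip, IP traffic (including server-to-6509 flows) is hashed on the Layer 3 source/destination address pair, and only non-IP traffic falls back to MAC addresses. The method is set globally rather than per port channel; a minimal sketch of changing and verifying it (the alternative hash shown is just an example):

    port-channel load-balance ethernet source-dest-port   ! add L4 ports to the hash
    show port-channel load-balance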
We are looking to implement IBM BladeCenter switch connectivity with a Nexus 2K module. I would like to brief you on my network as follows:
Current solution: we want to connect the IBM blade switches (4), as demonstrated in the attached diagram, to a Nexus 2K module as EtherChannel access ports.
Our vendor initially proposed this design, and now the vendor is recommending that we connect the blade switches either to a Nexus 5K switch or directly to the 6513 core switches instead of the Nexus 2K modules, as they say the Nexus 2K modules are only for connecting edge devices.
We do not have ports available on the Nexus 5010 to connect the cables from the IBM BladeCenter switches. Other than that, in case we go ahead and connect the IBM blade switches as EtherChannel access ports on the Nexus 2K module, what consequences will we face related to spanning tree or anything else?
Connecting a legacy Nortel switch (425/450/470/BPS) to a Nexus 7000 via gigabit fiber? I have a customer trying to do it, and they say the connection never comes up. The support on the Nortel gear is long since expired, so Avaya is not being particularly useful. Apparently Cisco says the issue is "fast link pulse to the BayStack to determine the capabilities of the uplink, and the BayStack is returning all zeros." I have not verified this and have not yet gotten my hands on the Nexus side of things.
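Given that the reported failure is in auto-negotiation, the usual first test is to hard-code the Nexus side of the link; a minimal sketch, with the interface assumed:

    interface ethernet 1/1
      speed 1000
      no negotiate auto    ! stop negotiating toward the BayStack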
We are planning a dual-attached topology with Nexus 5548s using vPC. Let me know if this is possible: I want to configure a dual-NIC Linux server using LACP active mode to connect to two 5548s in vPC, for redundancy as well as use of the full access-layer bandwidth. On the Nexus side this will be an access port in a single port channel in a single vPC link.
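For reference, a minimal sketch of that arrangement (all numbers are assumptions), applied identically on both 5548s, with the Linux bond in 802.3ad (LACP) mode on the server side:

    interface ethernet 1/10
      switchport access vlan 100
      channel-group 30 mode active   ! LACP, matching the server bond
    interface port-channel 30
      switchport access vlan 100
      vpc 30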
I have a lot of microbursts in my network and am looking for 10G switches with big buffers. Which models have the biggest buffers? I am thinking of a 1-2U (max 4U) switch with up to 60-100 10G ports, something like the Nexus 3064 (which has only 9MB of shared buffers, AFAIK). Besides deep buffers I also need:
- TRILL or another Ethernet ring topology like ERPS or EAPSv2
- multi-chassis LAG
- virtual routers, policy routing
- DCB
- 40G interfaces would be a plus