I have two 6504s running HSRP as my core. They are each EtherChanneled to my datacenter switch (a 3750 stack); see the image below. The problem I am having is with the EtherChannel status:
Core1 Po11: status w (waiting)
Core2 Po11: status P (bundled)
DC11 Po48: status P, but only toward Core2; the interfaces to Core1 are suspended. (See the attached configuration documents.) None of the devices have any information in their logs. I run this same configuration at my central location, but there we are running Nexus 7000s. With the 6500s, do I need to split the port channel on the 3750 so the EtherChannels can negotiate? If I split the port channel, are there any concerns? Should I expect to see the EtherChannel status as P (bundled) or H (hot-standby)?
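Because the two 6504s are independent chassis (HSRP, not VSS), they present two different EtherChannel endpoints, so the 3750 stack generally needs one port channel per core rather than a single Po48 spanning both. A minimal sketch of the split on the 3750 side, with hypothetical interface and group numbers:

! 3750 stack: one channel per core (interface and group numbers are hypothetical)
interface range GigabitEthernet1/0/47, GigabitEthernet2/0/47
 description Uplinks to Core1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 47 mode active     ! LACP toward Core1
!
interface range GigabitEthernet1/0/48, GigabitEthernet2/0/48
 description Uplinks to Core2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 48 mode active     ! LACP toward Core2

With the channel split this way, both channels should come up as P (bundled); H (hot-standby) normally only appears when more member ports are configured than the channel can bundle.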
I came across the Multichassis EtherChannel feature while reading about the Cisco Smart Business Architecture. After checking further, I know that the Catalyst 6500 switch supports this feature. Besides the Catalyst 6500, is there any other model of Catalyst switch that supports it?
I have a stack of two 3750 switches, and I configured an EtherChannel between them.
Here is the result:
2 Po2(SD) LACP Fa1/0/15(I) Fa2/0/15(I)
Both ports are up/up but standalone, and interface Port-channel 2 is down/down. I need to know whether this is the default behaviour when we configure an EtherChannel between switches in the same stack.
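For context, both Fa1/0/15 and Fa2/0/15 belong to the same stack, so the LACP negotiation is effectively with itself; LACP generally will not bundle such a channel and leaves the ports standalone, which matches the (I) flags above. Cross-stack EtherChannel is normally built from two stack members toward a separate switch, roughly like this sketch (remote device and port numbers are hypothetical):

! On the 3750 stack: one member port from each stack member toward another switch
interface range FastEthernet1/0/20, FastEthernet2/0/20
 channel-group 2 mode active      ! LACP; the far-end switch needs a matching channel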
I am configuring a cross-stack EtherChannel between a 3750-X stack and a 6509E switch. I use two ports on the 3750s and two ports on the 6509. I just need it as a trunk. For some elusive reason, one port on the 3750 keeps getting err-disabled, and one on the 6509 shows notconnect.
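Err-disable on channel member ports is often the EtherChannel misconfig guard reacting to mismatched channel modes on the two ends. A minimal sketch of a matched LACP trunk, with hypothetical port and group numbers:

! 3750-X stack side
interface range GigabitEthernet1/0/49, GigabitEthernet2/0/49
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 10 mode active
!
! 6509E side
interface range GigabitEthernet3/1 - 2
 switchport                              ! needed if the port is in its default routed mode
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 10 mode active

After both ends match, a shut/no shut on the err-disabled port clears it so the bundle can negotiate.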
We have Microsoft servers and other application servers (around 12 in number) which should have gigabit connections to the access switch. In turn, this access switch will be connected to our distribution switch, a 4503. Which access switch model fits best from the three models below? It should be cost effective as well.
We have two Catalyst 6506 switches with 10 Gb uplinks and around 120 edge switches (Catalyst 3750-X). The module on the core where the servers are connected is still a 1000 Mbps card. If we introduce a Nexus switch into the datacenter, what kinds of benefits can we reap, in a virtualised environment as well as a physical one? The following are some of my queries: Can we reduce the number of edge switches (through virtualisation)? How is the interoperability between Catalyst IOS and NX-OS, and how will this affect the environment? What will be the overall benefits? What are the cons of this move?
I am planning to migrate the core switch from a Cisco 3750 to a Cisco Catalyst 6513. What would be the best approach to minimize downtime and avoid disrupting production? I have a couple of thoughts: one method is to build the new core and then replace the existing core; another option is to build the new switch as a second VTP server and, once it receives all the VTP information, disconnect the old server.
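If you go the VTP route, the new 6513 only needs to share the domain, be trunked to the existing core, and then show the same VLAN database. A rough sketch, assuming a hypothetical domain name:

! On the new 6513
vtp domain MYDOMAIN            ! hypothetical; must match the existing VTP domain (and password, if set)
vtp mode server
! Trunk it to the existing core, then verify it has learned the VLANs:
! show vtp status               (the configuration revision should match the old server)
! show vlan brief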
I sort of messed up: I upgraded the IOS on one supervisor of a 6500 without doing the second, then saved and reloaded. How do I get the second one working again? When I issue show module, the "normal" active sup shows Active, and the standby just shows as Supervisor.
I know that a 6500 with a Sup720 reserves power for a redundant Sup720. If there is no plan to install that redundant sup, is there a way to release the reserved power? I know one approach would be to insert a card into that slot to cut the reservation down, but I need to reclaim all of that power.
I am trying to review the port-channel configuration on a 6500 series switch. When I issue the "show etherchannel summary" command, the output shows the Group, Port-channel, and Ports columns, but it does not show the protocol in use, such as PAgP or LACP. Does this have to do with the EtherChannel being in "on" mode rather than "active" or "auto"?
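Most likely yes: a channel built with mode on runs no negotiation protocol, so there is nothing for the Protocol column to report; with active/passive it shows LACP, and with desirable/auto it shows PAgP. A minimal sketch of converting a member port to LACP (interface and group numbers are hypothetical):

interface GigabitEthernet3/1
 no channel-group 1               ! remove the static ("on") membership first
 channel-group 1 mode active      ! LACP; use "desirable" instead for PAgP

Repeat on the far end so both sides run the same protocol.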
I have a Cisco 6500 series setup with VS-C6509E-S720-10G: two redundant supervisors across two chassis on the LAN, with no add-on line cards.
I need to know whether I can use the redundant supervisors' 10 Gb uplinks to form a Layer 2 port channel between the two 6500 switches, as I do not want to keep those ports idle; additionally, I need more bandwidth between the two switches for my server farm.
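Assuming the supervisor 10 Gb ports are not already reserved for a VSS virtual switch link, a Layer 2 channel across them would look roughly like this on each chassis (slot/port and group numbers are hypothetical):

interface range TenGigabitEthernet5/4 - 5     ! supervisor uplink ports, hypothetical numbering
 switchport                                   ! needed if the ports are in their default routed mode
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 20 mode active                 ! LACP; mirror this on the other chassis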
We bought Cisco supervisor engines (WS-SUP32-GE-3B) for our 6500 switches, two units for redundancy. I have connected six systems to the ports on each sup engine. How can I verify whether both sup engines will forward data while one is master and the other is standby?
We have two 6500s configured as a VSS and it is working fine. For each floor we have a 3750 stack of four switches. From the first 3750 in each stack we have a fiber link to the first 6500, and from the fourth switch we have another fiber link to the other 6500, with both links configured as an EtherChannel. We installed each stack in a different maintenance window, and in every installation we had problems configuring the EtherChannel. After we configure the EtherChannel on the 3750 stack and on the VSS, the channel does not come up/up; we see it up/down. After a lot of work and trial and error (shut/no shut, adding and removing links from the channel), at some arbitrary moment the EtherChannel comes up/up and works fine. Is there any bug related to these IOS versions, or is there a correct procedure to configure an EtherChannel from a VSS to a 3750 stack?
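One commonly used bring-up sequence, sketched below with hypothetical interface and group numbers, is to shut the member links, configure LACP on both ends, and only then un-shut them, so neither side tries to bundle against a half-configured peer:

! 3750 stack side: one fiber port on member 1 and one on member 4
interface range GigabitEthernet1/0/28, GigabitEthernet4/0/28
 shutdown
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 30 mode active
! ...configure the matching multichassis channel on the VSS side, then:
interface range GigabitEthernet1/0/28, GigabitEthernet4/0/28
 no shutdown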
Please let me know the exact meaning of "attach" (the yellow-marked item). The contractor was saying, "It doesn't mean two switches, as there is built-in redundancy in the Cisco switch." I don't think he is correct, as I have never heard of built-in redundancy in a Cisco router/switch. Any comments? This will affect the numbers, from 55 (3750 v2) to 110...
I was configuring link aggregation between a Cisco 3750 and a Cisco SG200, and the switched network went down just a few minutes after the port channel came up. I rebooted the SG200 and all hosts came back up for a minute before I lost them again. The EtherChannel was between two trunking ports. I never set up link aggregation on the SG200; could that be the reason? All machines are connected to the SG200. The 3750 is only being used as a Layer 3 device for inter-VLAN traffic.
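That is the likely cause: two parallel links bundled on only one side commonly produce a forwarding loop or MAC flapping on the side without a LAG. A minimal sketch of the 3750 end using LACP rather than mode on, so the channel only forms once a matching LACP LAG is also enabled on the SG200 (ports and group number are hypothetical):

interface range GigabitEthernet1/0/1 - 2      ! the two links toward the SG200
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 5 mode active                  ! LACP; set the SG200 LAG to LACP as well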
We have a new building where we added three network closets with 3750-X stacks. We have two fiber gig ports connected to two microwave units and EtherChanneled the two ports.
We have the same setup at the corporate office, except that the connection is two fiber gig ports on a 6509. The 6509 is doing the routing.
Now the problem is that, after a few days of running, we lost the connection to the management interface of only one of the three closets. Some workstations at the corporate office could reach it; others could not. The workstations were on the same VLAN. We solved the problem for a few days by shutting one of the fiber ports in the EtherChannel, but it started happening again, now with PCs being installed at the new location.
A PC that was not working on the 3750-X side could ping across to the 6509 but no further than that. It seems like an ARP issue rather than a routing issue, because some devices are reachable and fully operational throughout the network.
Here is the configuration:
3750-X: Cisco IOS Software, Version 12.2(55)SE1
Current configuration : 118 bytes
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no snmp trap link-status
end
!
interface GigabitEthernet1/1/1
 description 2gig to STB-6509
 switchport trunk
We've installed Cisco devices across our site in the last year or so and are slowly getting on top of it now. However, our old unmanaged kit seems to be outperforming it. It's most likely down to my misconfiguration, which has led me here. Below are the details of the hardware and configuration between devices. The 3750 core consists of the following, stacked: [code] I've got MRTG monitoring traffic, and the throughput seems to max out at around 24 Mb/s.
We are configuring a new EMC VNX and plan to use EtherChannel with our 3750-X stack. We would like to configure it for both additional bandwidth and redundancy. What is the best configuration to use? Should we trunk the channel or use switchport mode access, and should the channel-group mode be on or LACP (and if LACP, active or passive)?
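As a sketch only, assuming the VNX ports support LACP and live in a single storage VLAN (interface, VLAN, and group numbers are hypothetical), one common approach is an access-mode LACP channel spread across both stack members:

interface range GigabitEthernet1/0/10, GigabitEthernet2/0/10
 switchport mode access
 switchport access vlan 100        ! hypothetical storage VLAN
 channel-group 12 mode active      ! LACP; "active" initiates negotiation with the array

Trunking the channel is only needed if the array has to see more than one VLAN on the same ports.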
For an EtherChannel of two links on a 3750 switch, if one link ever gets saturated, will the other link be used as well when the excess traffic is part of the same flow, or will that excess traffic simply be dropped?
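For background, the EtherChannel hash pins each flow to one member link based on addresses, so a single flow cannot spill over onto the second link; changing the load-balance method only changes how different flows are spread. A quick sketch of checking and adjusting it (the keyword shown is one of several valid options):

! Exec mode: see the current hashing method
show etherchannel load-balance
! Global configuration: hash on the source/destination IP pair
configure terminal
 port-channel load-balance src-dst-ip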
I want to EtherChannel from a 3750 stack to my core Layer 3 switches (also 3750s), with one cable going to each core switch. I have put both core switches and the stack under a 28-bit subnet mask, but I don't seem to be load balancing across both links.
We have a pair of WS-C3750X-24T-S in a stack and four WS-C2960S-48TS-L in a stack of their own. There is not really anything fancy configured (no special VLAN configuration, trunks, etc.), but the 3750s do have two ports configured as L3 for routing; we are not trying to use those ports for EtherChannel. These devices are running IOS 12.2(55)SE3. Essentially, we are attempting to make an EtherChannel group using port 48 on all four of the 2960s in their stack (four ports). On the 3750s we will configure an EtherChannel group using ports 23 and 24 on both switches (four ports). We then connect them up to form a four-member EtherChannel. The ports on both ends are configured as mode on, and they are all 1 Gb ports. I elected mode on because I understood that at least one of the EtherChannel protocols will not work cross-stack. What I would like to ask is whether the above configuration is possible, or whether we are hitting some sort of limitation of cross-stack EtherChannel. I cannot find anything to suggest this configuration is invalid, but I thought I would ask in case I missed something in the EtherChannel articles.
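For what it's worth, mode on is valid cross-stack, and so is LACP; it is PAgP that does not run across stack members. A sketch of the same four-port channel using LACP instead, so each end verifies compatibility before bundling (the group number is hypothetical, the ports are the ones from the question):

! 3750-X stack side
interface range GigabitEthernet1/0/23 - 24, GigabitEthernet2/0/23 - 24
 channel-group 1 mode active
!
! 2960-S stack side
interface range GigabitEthernet1/0/48, GigabitEthernet2/0/48, GigabitEthernet3/0/48, GigabitEthernet4/0/48
 channel-group 1 mode active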
As we know, there are three layers in the Cisco network model: Core, Distribution, and Access.
My question is: at the Core/Distribution layer, should I use EtherChannel between switches or use stacking, if the switches are stackable? For example, suppose I have two Cisco 3750 switches: should I EtherChannel them together or stack them at the core layer? What are the advantages and disadvantages of each approach?
According to the following Cisco webpage, cross-stack 10-Gigabit EtherChannel is possible on the 3750 with up to two 10-Gigabit modules, using LACP: url... However, the webpage doesn't confirm whether this functionality is restricted to a particular 3750 model, such as the 3750E, or whether it applies to all models. It also doesn't specify any particular IOS feature set. I've tried Cisco Software Advisor, but it doesn't list this feature for either the 3750E or the 3750-X. Is this also possible on the 2960-S switches (i.e. cross-stack 10-Gigabit EtherChannel)?
I have a 3750 as my core switch and am adding two stacks of 2960S switches to connect to it. I want to establish an EtherChannel between the 3750 and each additional 2960S stack. Do the channel-group numbers on the 3750 and the new 2960S stacks have to match? The 3750 already has two channel groups (1 and 2) configured. Should I create two additional channel groups (3 and 4) for the EtherChannels between the 2960S stacks and the 3750, or is the channel-group number local to each device?
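The channel-group number is locally significant, so it only has to be unique within each switch, not matched end to end. A sketch with hypothetical ports, where the 3750 uses groups 3 and 4 while each 2960S stack simply uses group 1:

! 3750 core, toward 2960S stack A
interface range GigabitEthernet1/0/25 - 26
 channel-group 3 mode active
! 3750 core, toward 2960S stack B
interface range GigabitEthernet1/0/27 - 28
 channel-group 4 mode active
!
! On each 2960S stack, the uplinks can reuse group 1
interface range GigabitEthernet1/0/49 - 50
 channel-group 1 mode active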
We would like to add another Nexus 5K to this topology. However, it has to be a zero-downtime infrastructure add-on. When setting up the keepalive, peer link, vPC, and vPC domain, will there be any disruption to network traffic on the current N5K? Also, are the Nexus 5K configurations synchronized, or are they independent of one another? Before bringing in the new 5K, should I configure its connections to the 6509s and the vPCs to the Nexus 2Ks before setting up the peer link?
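For orientation, the vPC-related pieces on the new 5K would look roughly like this in NX-OS; the domain number, keepalive addresses, and port-channel numbers are hypothetical and must mirror what the existing N5K already uses:

feature vpc
feature lacp
!
vpc domain 10                                              ! must match the existing peer's domain
  peer-keepalive destination 192.0.2.1 source 192.0.2.2    ! hypothetical mgmt0 addresses
!
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
interface port-channel20
  switchport mode trunk
  vpc 20                                                   ! vPC toward a downstream device

Afterwards, show vpc and show vpc consistency-parameters global are useful to confirm the pair agrees before traffic is moved.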
If I dual-connect my access switch to my 6509s running VSS, what will happen? Will spanning tree still block one of the ports if I don't set up an EtherChannel?
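If the two uplinks are left as individual links, spanning tree sees two paths to the same logical switch and would be expected to block one; bundling them into a multichassis EtherChannel presents a single logical link instead. A sketch of the VSS-side MEC, using the switch/module/port numbering of a VSS and hypothetical values:

interface range GigabitEthernet1/1/1, GigabitEthernet2/1/1    ! one port on each VSS chassis
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 15 mode active                                 ! match with an LACP channel on the access switch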
I have two stackable switches. One of them uplinks to one 6509 core switch and has another connection uplinking to the other 6509 core switch; the second stackable switch does not connect to the core switches at all. Because I am using HSRP and we are not using VSS, only one uplink to the core is in use at a time. So how would creating an EtherChannel across those two uplinks to the two core switches benefit me in any way, for example with more bandwidth by using both uplinks at the same time, or am I wrong?
Small datacenter design. My requirements and setup will be as follows:
Dell PowerEdge M1000E blade chassis (initially one full chassis)
Dell PowerConnect 10GbE blade switches
Dell Compellent storage array, 10 Gb iSCSI with redundant controllers
Dell PowerConnect 7024 dedicated to external storage
Virtual host blade servers
2 x Cisco ASA for firewall (5525-X or similar, in active/active configuration)
2 x redundant routers or switches as gateway to the public internet
I am looking to segregate customers (approximately 100) into separate VLANs at the access layer and route them up to the Cisco ASA firewalls using dot1q trunking for segregation. The Cisco ASAs will perform NAT and route to the redundant gateways. I then need to police each customer's traffic at the gateway to limit bandwidth and perform specific traffic marking, along with simply routing out to the internet.
Budget is somewhat restrictive, so I am looking for the most cost-effective devices I can use at the gateway to perform the traffic policing/marking/routing for each customer.
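On an IOS-based gateway, per-customer policing and marking is typically done with an MQC policy applied to a per-customer dot1q subinterface. A minimal sketch, with hypothetical VLAN, rate, DSCP value, and addressing:

policy-map CUSTOMER-10MB
 class class-default
  set dscp af21                  ! hypothetical marking
  police cir 10000000            ! roughly 10 Mb/s, hypothetical per-customer rate
   conform-action transmit
   exceed-action drop
!
interface GigabitEthernet0/1.101                 ! one dot1q subinterface per customer VLAN
 encapsulation dot1Q 101
 ip address 203.0.113.1 255.255.255.252          ! hypothetical addressing
 service-policy input CUSTOMER-10MB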
For inter-VLAN routing, is the "ip routing" command enabled by default on 6500 series switches, or does it depend on the IOS? And on 3750 switches, do we need to enable "ip routing" manually for inter-VLAN routing?
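On the 3750, ip routing is disabled by default and has to be turned on before the SVIs will route between VLANs. A minimal sketch with hypothetical VLANs and addressing:

ip routing
!
interface Vlan10
 ip address 10.0.10.1 255.255.255.0
!
interface Vlan20
 ip address 10.0.20.1 255.255.255.0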