Cisco Switching/Routing :: LACP Over Dot1q Tunnel With 4506-E And IOS 15
Mar 14, 2013
I'm desperately trying to get LACP working over a dot1q tunnel. The "service provider" switches are two 4506-E switches with SUP7-E, connected via a 10G link and running cat4500e-universalk9.SPA.03.03.00.SG.151-1.SG.
sample config:
vlan dot1q tag native
interface GigabitEthernet3/1
switchport access vlan 2001
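Since the goal is LACP across the tunnel, the tunnel port normally also needs Layer 2 protocol tunneling for the LACP frames. On top of the sample above, a minimal sketch of the extra lines, assuming 3560/3750-style syntax (I have not verified command availability on the 4506-E/Sup7-E):
! Hypothetical additions to the tunnel port so point-to-point LACP frames are carried through
interface GigabitEthernet3/1
 switchport mode dot1q-tunnel
 l2protocol-tunnel point-to-point lacp
 no cdp enable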
In fact I receive traffic on a one-client-per-VLAN basis (the traffic is PPPoE). I receive all this traffic on a router, collecting all these VLANs onto a bridge where the PPPoE packets are handled. When I use a transceiver to convert the operator's fiber handoff to my router's copper media interface, I have no problem.
When I use dot1q-tunnel to do the same on my 3750, packets seem to be corrupted. I get PPPoE timeouts and packet loss, not regularly, just randomly.
I made dozens of tests with different settings, without success. I first thought of MTU issues. [code] I tested with the system MTU and/or the system jumbo MTU above 1500, without success. I didn't find any known caveats related to dot1q-tunnel on a 3750 running Version 12.2(25r)SEE4.
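For context, the MTU settings tried on a 3750 would have been along these lines; a hedged sketch (values are examples only, and both settings only take effect after a reload):
! Raising the system MTU to absorb the extra 4-byte tag added by the tunnel
system mtu 1504
! Jumbo MTU applies to the Gigabit ports
system mtu jumbo 9000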
Is it possible to do dot1q tunneling on the new Cisco Catalyst 2960 Compact series switches? I know that the 3560 series supports it, but I'm unable to find any information about the 2960C series. Personally I doubt it, as the standard 2960 series doesn't support it.
If you have a router with multiple direct vanilla FE (non-trunked) interfaces on a switch, trying to send QoS-tagged packets to a WiFi bridge several switches away, does the trunking in the switched infrastructure mess with the QoS tags if no QoS is configured on the switches?
Does it depend on the switch? We have new 2960s running 12.2 and a few older 2950s running 12.1.
I want to enable dot1q encapsulation on two Ethernet ports on a 1721 router. I am able to configure it on the built-in FastEthernet port, but not on any interface provided by a WIC-1ENET or a WIC-4ESW. I have an application that requires two physical Ethernet ports that support dot1q encapsulation.
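For reference, this is the kind of dot1q configuration that works on the built-in port; a minimal sketch (the subinterface number, VLAN ID, and address are made up for illustration):
! dot1q subinterface on the 1721's onboard FastEthernet port
interface FastEthernet0.10
 encapsulation dot1Q 10
 ip address 192.0.2.1 255.255.255.0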
I have a Catalyst 4500 L3 Switch Software (cat4500e UNIVERSAL-M), Version 03.02.00.XO RELEASE SOFTWARE (fc2). So I just wanted to verify that the switch only does dot1q encapsulation, because the "switchport trunk encapsulation dot1q" command does not work.
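Assuming the switch really only supports 802.1Q, the trunk would simply be configured without any encapsulation line; a sketch (the interface number is an example):
interface GigabitEthernet1/1
 switchport mode trunk
!
! Verify the encapsulation actually in use
show interfaces gigabitEthernet 1/1 trunk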
I'm setting up a new 4900M running cat4500e-ipbase-mz.122-53.SG5.bin. I'm attempting to create port-channels as a trunk for the uplink to a 4503 running cat4500-ipbase-mz.122-37.SG1.bin. When I attempt the command "switchport trunk encapsulation dot1q", it errors out.
I am trying to configure a 4507R chassis with dual SUPs, but I cannot see the "switchport trunk encapsulation dot1q" command.
I have typed:
interface GigabitEthernet5/1
 description DOWNLINK to xxxxxx
 switchport mode trunk
 channel-group 11 mode on
!
I have searched all other commands and sub-commands but could only find dot1q-tunnel, which I believe is for QinQ or some QoS features and not for L2 encapsulation?
The puzzling part is:
XXX-Core4507#sh int gi5/1 trunk
Port        Mode    Encapsulation  Status        Native vlan
Gi5/1       off     802.1q         notrnk-bndl   1 (Po11)
When I connect the distribution switch, a 3507, to this int gi5/1, both interfaces do come up.
I've got LACP-enabled port-channels between a Cisco 3750 stack and a few different switches (some Cisco 3750s and some Juniper EX2200s). The Ciscos are all sending slow LACP updates, while the Junipers are sending fast LACP updates (but the Cisco they connect with is responding with slow LACP updates).
I have a couple of questions:
1) What are the pros and cons of slow vs. fast updates? My research has led me to the conclusion that fast updates are better for network resiliency as long as you have plenty of bandwidth overhead (which I do at the moment). Is there anything to add to this conclusion?
2) Is there any way to configure the Cisco 3750s for fast updates?
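For question 2, this is roughly how the fast rate is set on platforms whose software exposes it; a hedged sketch (the interface and channel numbers are examples, and many 3750 IOS releases do not have the command at all):
interface GigabitEthernet1/0/5
 channel-group 1 mode active
 ! Request fast (1-second) LACP PDUs from the partner, where supported
 lacp rate fast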
We're trying to configure our Cisco 4507 (Supervisor Engine IV) to allow a new Dell server with a pair of Broadcom 5708 GigE NICs to aggregate its NICs and give us a 2Gbps link to the switch.
So far we seem to have got the team and LACP up and enabled, but the adapter that the Broadcom Admin Util creates for the team is only showing a 1Gbps connection, where I would have expected it to show as 2Gbps.
The individual NICs show as connected at 1Gbps. We're not Cisco experts, so we are struggling with how to get the two NICs to aggregate.
On the server side we've done nothing other than create a team using 802.3ad Link Aggregation with LACP.
This is what I think the relevant output from "sho conf" is; more available if needed.
version 12.2
boot system flash bootflash:cat4000-i9s-mz.122-18.EW1.bin
!
interface Port-channel2
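For comparison, a hedged sketch of what the switch side of a two-NIC LACP team generally looks like on a 4500 (the interface, channel-group, and VLAN numbers here are examples, not taken from the real config):
! Logical channel interface for the server team
interface Port-channel2
 switchport
 switchport access vlan 10
!
! Physical server-facing ports, bundled with LACP
interface GigabitEthernet2/1
 switchport access vlan 10
 channel-group 2 mode active
!
interface GigabitEthernet2/2
 switchport access vlan 10
 channel-group 2 mode active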
I am trying to configure a 6509 as the passive receiver from a Dell Force10 10GE switch, with 2 SFPs going to 2 gig ports on our 6509. I see LACP is up on both sides, but I cannot pass traffic. I have only 2 VLANs that will be carried across the aggregate link from our VMware boxes; this is just temporary until I get 10GE in our 6509 chassis.
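A rough sketch of the passive side on the 6509 (interface, channel, and VLAN numbers are examples); note that LACP will not form a bundle if both ends are passive, so the Force10 end would need to run active mode:
interface Port-channel5
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
!
interface range GigabitEthernet3/1 - 2
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
 ! Passive: respond to the Force10's LACP PDUs but do not initiate
 channel-group 5 mode passive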
I have connected a server with LACP on a Nexus extender. I am starting different file copies from different sources to this server, but it does not load balance.
I have two N5K (5020) switches with NX-OS 5.0(3)N2(1). These two switches form a vPC domain: the peer link is 2x10Gb ports (1/17-18) and the peer-keepalive link runs over the management ports. I also have two HP servers with two 10Gb ports on each server. Each server is connected by one link to each N5K switch (1/9-10). The N5K downlinks are configured as access ports with LACP active mode. There is only one VLAN (1). When the "no shut" command is entered on the N5K access ports, the ports go into "not connected" status, begin to flap, and then go into the "linkFlapErrDisabled" state. Attached is the "sh run" from the N5K.
We have 2x WS-C3560X-48 with the 10Gb SFP C3KX-NM-10G module. I want to use the 10Gb SFP (with a redundant 1Gb link) between the two switches. Below is the configuration I think I should use. Is this correct?
SWITCH1
int TE1/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
[code]....
We also want to utilise link aggregation for the servers that attach to the switch. Would this config be correct?
port-channel load-balance src-ip
int range x
 switchport mode access
 switchport access vlan 1  (they are all on the default VLAN)
 spanning-tree portfast
 channel-group x mode active
(Then I would configure the LACP config on the server) Is there anything I am missing?
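One thing worth double-checking is the logical Port-channel interface that gets created when channel-group is applied; its switchport settings should match the members. A hedged sketch (the channel number is an example):
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk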
I need to implement LACP for HP servers, mostly DL380 G7s with Intel-based dual-port NICs, with two types of Cisco equipment. In the first scenario the server is connected to a 3750X stack of 4 switches; in the second scenario the same server type is connected to two Cisco Nexus 5596s. My questions regarding the two types of connection: Is it possible to do active/active? Would it give fault tolerance? With the HP LACP implementation, are there known issues, or should I expect latency with such a configuration? What is the maximum LAG/channel-group size that is possible per type?
I am running the latest version, 12.2(55)SE6, on the Catalyst and I am looking for the command "lacp rate fast", but it is not there:
lab-c3750#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
lab-c3750(config)#interface g1/0/5
lab-c3750(config-if)#lacp ?
  port-priority  LACP priority on this interface
lab-c3750(config-if)#lacp
According to this link, it is supposed to work: [URL] Am I missing something?
If, after having an active peer-link port-channel between a couple of Nexus 7010s, I change the LACP system-priority on both boxes and then add a new member interface to that port-channel, that interface is aligned with the new LACP system-priority value, but the old members are not, so I have to reload the modules which contain the old member interfaces.
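For reference, the commands involved look roughly like this; a sketch (the priority value is an example, and the exact show command name may vary by NX-OS release):
! Global LACP system priority (lower wins the system-ID comparison)
lacp system-priority 100
! Check the system ID currently advertised to partners
show lacp system-identifier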
We have 6500 series switches; by default all port-channel load balancing is src-dst-ip. Now we have a requirement to change the load-balance method to src-dst-port.
1) If we change the load-balance method, what effects are we going to have on our core network and all the existing servers?
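For reference, the change itself is a single global command and should only alter which member link each flow hashes to, rather than touching the port-channels themselves; a sketch:
port-channel load-balance src-dst-port
! Confirm the active hashing method
show etherchannel load-balance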
We are planning a dual-attached topology with Nexus 5548s using vPC. Let me know if this is possible. I want to configure a dual-NIC Linux server using LACP active mode to connect to two 5548s in vPC, for redundancy as well as use of the full access-layer bandwidth. On the Nexus side this will be an access port in a single port-channel on a single vPC link.
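A hedged sketch of what the Nexus side of such a vPC access port-channel typically looks like, mirrored on both 5548s (the VLAN, port-channel, and vPC numbers are examples):
feature lacp
!
interface port-channel10
 switchport mode access
 switchport access vlan 10
 vpc 10
!
! Server-facing member port, one per 5548
interface Ethernet1/10
 switchport mode access
 switchport access vlan 10
 channel-group 10 mode active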
I have a Cisco 3750G-48PS running IOS 12.2(40) and was wondering how many physical ports are supported in an LACP configuration. Is it 4 or 8 ports in a single switch?
If I have two 3750s stacked together and I want to configure an 8-port LACP bundle, can I take 4 ports from each switch in the stack and LACP them together?
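Assuming a software release that supports cross-stack LACP (older 3750 releases only allowed cross-stack bundles with channel-group mode on), the member selection would simply span the stack members; a sketch with example port numbers:
! Four ports from stack member 1 and four from stack member 2 in one bundle
interface range GigabitEthernet1/0/1 - 4 , GigabitEthernet2/0/1 - 4
 channel-group 5 mode active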
I have a Cisco 6513 switch connected to an HP VC Flex-10 module. The two 10Gb ports on the Cisco switch are connected to the VC Flex-10 in LACP mode.
I need to move those two 10Gb ports from the Cisco switch's 10Gb module to a different 10Gb module on the same switch, without bringing the ports down, since it is a live environment.
What I would do is configure the same port-channel ID on the new 10Gb module and then move the ports one by one: unplug one port and connect it to the new port on the module. While I am unplugging the first port, the other active port will keep sending traffic, and as soon as I plug into the other port, both ports will be active.
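A hedged sketch of pre-staging the new module's ports with the same channel-group before moving the cables one at a time (module, interface, and channel numbers are examples):
! New-module ports, configured identically to the existing members
interface range TenGigabitEthernet2/1 - 2
 switchport
 switchport mode trunk
 channel-group 20 mode active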
I have a situation where a site-to-site tunnel is already established using a PPTP IPSec VPN, with non-Cisco gateways terminating the link on each end. These non-Cisco gateways do not support L2TP tunneling, and there is no plan to change them. Beyond the gateways on both ends, we have a Cisco 4500 series switch. We need to forward the 802.1Q-tagged VLANs between the two sites. Is it possible to use 802.1Q tunneling in this case, going via a PPTP tunnel?
Cisco's setup uses dot1q-tunnel with l2protocol-tunnel to preserve the original client VLAN tagging, so does this mean that the only option we have is to set up an L2TP tunnel at the Cisco device endpoints and have that tunnel go through the existing PPTP tunnel (established between the two non-Cisco VPN gateways)?
I have configured a port-channel with 3 ports in LACP active mode. I want to PXE boot an ESX server on an HP ProLiant DL360 G7.
During PXE boot the physical interfaces must be in the (I) state - standalone.
The server receives its IP via DHCP, but after getting the IP the link goes down for a short time; the switchport goes down and then enters the (w) state - waiting to be aggregated. Now the server tries to load the image via TFTP, but it gets a timeout because the switchport is in state (w). It takes a long time until the switchport comes back to standalone mode (I).
Is it possible to modify how long the switchport stays in the waiting-to-be-aggregated state on a 3750X?
Would the right command be "lacp rate fast"? But this command is not available on the 3750X, only on the 6000 series.
I am having an issue on a Cisco 3750 stack where, when the stack master is rebooted, all my LACP port-channels drop and then come back up again. After doing some investigation, it seems this is happening because LACP uses the stack master's MAC address as part of the system ID, so when the stack master reboots, the stack MAC changes. I see that there is the command: stack-mac persistent timer 0
There is this warning about using this command:
When you configure this feature, a warning message displays the consequences of your configuration. You should use this feature cautiously. Using the old master MAC address elsewhere in the domain could result in lost traffic.
My questions are:
1) Are there any other consequences to using this command (apart from moving the switch/MAC to another location in the network)?
2) It mentions 'If the entire switch stack reloads, it acquires the MAC address of the master as the stack MAC address.' Is this still the case if you have the stack-mac persistent timer set to 0?
3) Does using channel-group mode on for the port-channels still use the same mechanism of having a system ID? (Will the channels flap using 'mode on' when rebooting the stack master?)
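For reference, a minimal sketch of applying the command and then checking the stack MAC and the LACP system ID it feeds into (purely illustrative):
conf t
 stack-mac persistent timer 0
end
! Stack membership and MAC persistency
show switch
! LACP system ID derived from the stack MAC
show lacp sys-id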
My management has tasked me with giving them a high-level overview of the different switching options we can choose for our new building.
This is what I know so far: 4 closets, each closet has 450 ports, and one MDF room that will contain one UCS chassis and a Nimble iSCSI SAN.
I am working on the spreadsheet and it looks like this (Not totally filled):
                 2960S            3560X           3750X           4506       4510
Approx cost      6K               7K              8K              45K        75K
(each, 48-port, PoE+, 10G uplink, dual PS, IP Base)
Max capacity     192              432             432             192        384
Backplane speed  20               64              64              520        520
Pros             Least expensive  Stackable to 9  Stackable to 9
                 Dual PS          Dual PS         Dual PS         Dual PS    Dual PS
                                  Layer 3 opt     Layer 3 opt     Dual Sups  Dual Sups
Cons             No dual PS                                       Expensive  Expensive
                 Layer 2 only
                 Cannot stack more than 4

For the MDF I would like to use 2 Nexus 5548's with FEX's, and the layer 3 daughter board. For the IDF's I was thinking of two 4010's.
I am implementing a guest wireless network to work alongside my internal network. The guest network will use the existing switching network and will be separated by VLANs. I have the ASA set so that traffic can get to it and out to the Internet. I can set up a workstation on the same VLAN as my guest network and can route inside my network (strictly doing this for testing purposes). Where I am having problems is with the Catalyst 4506 switches and the ip routing. I had two separate "ip route" statements defined on my switches.
ip route 10.200.2.0 255.255.255.0 10.200.2.254
ip route 0.0.0.0 0.0.0.0 10.100.100.254
I have discovered that the traffic is always following the default route, despite the fact that the IP address of my test workstation falls in the 10.200.2.x network. I was looking at documentation and found that it is possible to set up policy-based routing on the core switches. Can you have two "ip route" statements defined like this to segregate traffic, or do I have to use PBR for routing (or a combination) in this case? If I define PBR, how does that impact my existing routing? I need to make sure that I can still route the existing traffic while I'm configuring this change.
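If PBR does turn out to be necessary, a hedged sketch of what it could look like for the guest subnet, reusing the addresses from the routes above (the ACL number, route-map name, and SVI number are made up):
! Match traffic sourced from the guest subnet
access-list 120 permit ip 10.200.2.0 0.0.0.255 any
!
route-map GUEST-TRAFFIC permit 10
 match ip address 120
 set ip next-hop 10.200.2.254
!
! Apply the policy on the guest SVI
interface Vlan200
 ip policy route-map GUEST-TRAFFIC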
We have 2 sites, each with 2 x 4506 switches, which will be connected together using an EtherChannel. The switches will provide access ports for client devices and will be configured with HSRP to provide gateway redundancy; SW1 will be HSRP active. 2 Metro Ethernet links will be installed in each site, connecting back to our HQ sites. OSPF will be used over the backbone to provide resiliency, to allow shortest-path routing to each HQ, and to prevent traffic over the HQ-to-HQ link.
The 4506s will be trunked together with an SVI for providing OSPF adjacency. For the traffic flow from SW2 to HQ2, traffic will hit SW1, then route back to SW2 and then to HQ2. Is this the best way to do this? Should a second link be connected between the switches just for routing, or should something like GLBP be used?
Cisco Switch1 (4506) has 3 VLANs (12, 13, 14) and Switch2 (4948) has 3 different VLANs (22, 23, 24); IP routing has been enabled on both switches with SVI interfaces for each VLAN, and inter-VLAN routing works fine. Now there is a requirement to connect these switches together. VLAN 12 on the Cisco 4506 has to be made reachable from VLAN 22 on Switch2 (4948); basically VLAN 12 has a multicast source (225.0.0.0 & 226.0.0.0) which should be accessible from VLAN 22 of the Cisco 4948. I have 2 ideas:
1) Create a trunk between these switches and configure L2 VLAN 12 on the Cisco 4948. I know theoretically it should work, but my concern is: with IP routing enabled on both switches, will it create any issues? Is it a good solution to this requirement?
2) Create a separate IP network on the ports connecting the two switches and set up routes to the networks, e.g. console(config)#ip route 192.168.10.0 255.255.255.0 192.168.20.1
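Note that for the multicast groups to cross between the routed VLANs, multicast routing would also typically have to be enabled on whichever switch routes each VLAN; a hedged sketch (the PIM mode chosen here is just an example):
ip multicast-routing
!
interface Vlan12
 ip pim sparse-dense-mode
!
interface Vlan22
 ip pim sparse-dense-mode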