Is it possible to rate limit on a L2 trunk port on a 3750?
Current port config and IOS are as follows:

interface GigabitEthernet1/0/50
 description *** Connection to Fiber Link ***
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,172
 switchport mode trunk
end

flash:c3750-advipservicesk9-mz.122-46.SE.bin
I was wondering if the "srr-queue bandwidth limit 10" command would work to limit the output from this interface to 10% of the port bandwidth, and then the same command could be applied on the other side.
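For reference, a minimal sketch of that approach (assuming QoS is enabled globally with mls qos; on this platform the limit range is 10-90 percent and it caps egress only, which is why the far end needs the same command for the reverse direction):

mls qos
interface GigabitEthernet1/0/50
 ! cap egress on this trunk to ~10% of line rate (~100 Mbps on a gig port)
 srr-queue bandwidth limit 10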
I want to limit the bandwidth on my Catalyst 3750 series switch. I read the Cisco documentation and applied the commands, but I didn't get the results I wanted.
For the outbound traffic it's OK, but for the inbound traffic I used policing and I get unstable traffic. I used an access list and a class-map to classify the traffic, and then a policy-map. (I followed the steps mentioned on this site: [URL]
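For comparison, here is a minimal ingress-policing sketch of the kind those steps usually describe (the ACL, names, and rates are placeholders, not the poster's actual config). One common cause of "unstable" throughput with a policer is a burst size that is too small for TCP; raising the burst parameter often smooths it out:

mls qos
access-list 100 permit ip any any
class-map match-all LIMIT-IN
 match access-group 100
policy-map POLICE-IN
 class LIMIT-IN
  ! 10 Mbps with a generous 200000-byte burst; tiny bursts make TCP oscillate
  police 10000000 200000 exceed-action drop
interface GigabitEthernet0/1
 service-policy input POLICE-IN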
I have a 2921 where I am shaping some traffic based on subnet on my LAN. I have applied the shaping policy to the LAN interface in the outgoing direction.
Topology is as follows: ISP - ASA - ROUTER - LAN
Policy map: Policy Map shape-lan [code]....
I am seeing a lot of no-buffer drops on the policy and I am wondering what the best solution is to solve this:

Class-map: tc-class (match-any)
  8730680 packets, 10803689863 bytes
  5 minute offered rate 4453000 bps, drop rate 0 bps
[code]....
Should I just be increasing the queue-limit or should I be changing something else?
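If the drops are no-buffer drops on the shaping queue, deepening the queue is the usual first step. A minimal sketch, with the class name taken from the truncated output above and the shape rate as a placeholder:

policy-map shape-lan
 class tc-class
  ! keep your existing shape rate; deepen the queue so bursts are buffered, not dropped
  shape average 8000000
  queue-limit 512 packets

The trade-off is latency: a deeper queue absorbs more of a burst but delays the packets it holds, so increase it gradually rather than jumping straight to the maximum.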
I apologize in advance if this is a novice inquiry, but our company just switched from point-to-point T1s to Metro Ethernet.
On one point-to-point link, from our main office to one of our high-profile locations, we had two bonded T1s. Now this site has a 3 Mbps Metro-E link, but it's being over-saturated. I don't know what type of QoS implementation our T1 provider had, but it prevented flooding. Now I'm getting horrendous latency as office peak hours approach, since there is no QoS on the mesh from our Metro-E provider.
Ultimately, my question is: what's the best way to set a Fast Ethernet port on a Cisco 1800 series router to limit all bandwidth to 3 Mbps? At the moment, I don't have a preference as to which traffic takes priority. I tried the rate-limit command, along with a speed calculator I found online, but that slowed the network down immensely.
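Shaping, rather than rate-limit (policing), is usually the better fit here: a shaper buffers bursts down to the contracted rate instead of dropping them outright, which is much friendlier to TCP. A minimal sketch for the WAN-facing port (the interface name is a placeholder):

policy-map SHAPE-3MB
 class class-default
  ! shape all outbound traffic to the 3 Mbps Metro-E contract rate
  shape average 3000000
interface FastEthernet0/0
 service-policy output SHAPE-3MB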
I bought an RV120W router, one of the reasons being its bandwidth limiting (QoS). I set up a profile with a 1-256 kbps limit and applied it to the only VLAN that is configured, but it does not work and I can still use the full bandwidth of the Internet connection. My firmware version is 1.0.2.6.
We have an ASA 5525 running version 8.6(1)2 and a 10 Mb pipe. I have execs who want to limit bandwidth for users on things like YouTube, streaming media, and downloads. I found the article on 'Bandwidth Management (Rate Limit) Using QoS Policies', so it appears our firewall can do what we want. I'm not a Cisco person; my knowledge is limited when it comes to configuration, which is why we have SmartNet.
Can bandwidth be limited for end users, and/or can the bandwidth rate limit be applied to just YouTube, streaming media, and downloads? If so, what should the limit be? And I assume this would be for incoming traffic only? We're running into some bandwidth hogs, usually YouTube and/or streaming media. We have a Barracuda web filter which we've used to block and monitor activity, but I simply do not have time to babysit this all day. I should also mention we do have critical data running up and down the pipe, such as credit card processing, DB replication between the in-house DB and hosted website, TPCx and EDI, FTP, and such, that we don't want restricted.
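As a rough illustration of what that article configures, ASA policing looks like the following sketch (the ACL, class/policy names, and the 2 Mbps rate are placeholders). One caveat: identifying YouTube or streaming purely by ACL is unreliable since the server IP ranges change constantly, so in practice the Barracuda may remain the better classifier, with the ASA enforcing the rate:

access-list RATE-LIMITED extended permit tcp any any eq www
class-map RATE-LIMITED-CLASS
 match access-list RATE-LIMITED
policy-map OUTSIDE-POLICY
 class RATE-LIMITED-CLASS
  ! police matched traffic in both directions to 2 Mbps, 16000-byte burst
  police input 2000000 16000
  police output 2000000 16000
service-policy OUTSIDE-POLICY interface outside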
How do you rate limit a 3560 inbound and outbound using different QoS methods? I've read about VLAN class maps/policy maps, using the rate-limit command on the physical interface, using the srr-queue bandwidth command (it's a gig switch, so I'm not sure that would work), and marking all packets and then applying QoS. I'm just learning QoS, so I'm trying to figure all of this out and find the best way to do things.
Also, I was told to do this because it's not advisable to have a connection to your ISP on a switch at a speed other than 10 Mb or 100 Mb, since rates not divisible by 10 can cause issues?
I am using a Cisco 3560 as a distribution switch and want to limit port 445 traffic to 1 Mb. I applied a rate-limit statement on the Gi0/1 port, but the switch is unable to limit said traffic:

rate-limit output access-group 120 1024000 128000 128000 conform-action transmit exceed-action drop
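One thing worth checking: the legacy rate-limit (CAR) command is generally not supported in hardware on the 3560, and MQC service policies on this platform can only be applied in the input direction. A hedged sketch of the equivalent ingress policer, applied where the traffic enters the switch rather than where it exits:

mls qos
access-list 120 permit tcp any any eq 445
class-map match-all SMB-TRAFFIC
 match access-group 120
policy-map LIMIT-SMB
 class SMB-TRAFFIC
  ! 1024 kbps with a 128000-byte burst, matching the original CAR numbers
  police 1024000 128000 exceed-action drop
interface GigabitEthernet0/1
 service-policy input LIMIT-SMB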
I am having an issue with VoIP phones giving me an insufficient bandwidth message. I have three remote locations connected to our main building using 2 Mb point-to-point Ethernet solutions through TWC. Each remote location has a Cisco WS-C3560-24PS running IOS C3560-IPBASE-M, version 12.2(25), and has its cable modem plugged into port 1. The remote buildings are labeled 192.168.101.xxx, 192.168.102.xxx, and 192.168.103.xxx. There are 14-16 VoIP phones in each remote building. The main building is on the subnet 192.168.100.xxx. I have the 3560s connecting to a single port on a 2801 in the main building, all using the subnet 192.168.253.xxx. The phone server sits in our network at 192.168.100.203. I have created the ACLs, class maps, and policy maps on all of the equipment.
For the remote buildings I have the following:
ACL
===========
Extended IP access list VOIP
    permit tcp any host 192.168.100.203 dscp ef
    permit tcp any host 192.168.100.203 eq 5566
[Code]....
I have put a hub in to capture traffic via Wireshark to see if DSCP flags are being appropriately marked, and I do see that all VoIP packets are getting marked as EF. However, I have been receiving phone calls from people in the remote buildings stating that their phones will cut out, flash "Insufficient Bandwidth" on the LCD displays, and then the call will cut back in. I am wondering if the 2801 is not applying QoS with the rate limits in mind since it is set to 100 Mb, or is it an issue with trying to take 3 remote locations and bring them down into 1 port on the 2801?
How is it that I can enter the command 'ip multicast rate-limit out group-list <access-list>' but I get the error that the "ip multicast rate-limit" command is not supported on the 6509?
Is it an IOS limitation or a limitation of the switch series, such that it can't be used at all?
How is it possible (and is it possible) to rate limit pps on an interface (physical/logical) on a 6509-E? The purpose is to protect against attacks which generate very high pps, bypass traffic rate limits, and affect the device's performance.
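For protecting the supervisor CPU specifically, two hedged options exist on this platform: the hardware rate limiters, which do operate in packets per second, and control-plane policing. A sketch of each, with illustrative values only; as far as I know, per-interface pps policing of transit traffic is not available, since the PFC polices in bits per second:

! hardware rate limiters (global, pps-based): e.g. cap CEF glean and TTL-failure punts
mls rate-limit unicast cef glean 100 10
mls rate-limit all ttl-failure 100 10
!
! control-plane policing for traffic addressed to the switch itself (requires mls qos)
mls qos
access-list 120 permit ip any any
class-map match-all COPP-DEFAULT
 match access-group 120
policy-map COPP
 class COPP-DEFAULT
  police 1000000 31250 conform-action transmit exceed-action drop
control-plane
 service-policy input COPP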
I am working on a QoS design which I hope to test at some point, but at this stage it's from the books. My question is how to decide which queue and threshold to use for video traffic, and then for lower priority traffic. I understand the shaping and sharing commands; it's the queuing and threshold bit I'm not clear on. The plan is to use the priority queue for EF-marked voice, which will be policed on ingress to provide an upper limit on EF traffic levels; my second-priority traffic will be video. Which queue will get serviced first once the priority queue is empty, and how do I decide which threshold to allocate my video traffic to? The documentation is not at all clear. I want to prioritize my traffic in the following order:
1. Voice: use the priority queue
2. Video: to get serviced ahead of data, after voice
3. Interactive data
4. Bulk data
5. Best effort
So Q1 settings are ignored due to the priority queue; Q2 gets 70%, Q3 25%, etc. Is it as simple as putting video into Q2T1 and interactive data into Q2T2? Will Q2T1 get a higher priority over Q2T2 once the PQ is serviced?
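One point the documentation states but buries: thresholds within a queue are weighted tail-drop (WTD) levels, not service priorities. Q2T1 and Q2T2 are emptied by exactly the same SRR servicing; the threshold only decides how full the queue can get before that class's packets start being dropped. So video and interactive data in the same queue share its bandwidth, and the class you want to survive congestion longer goes on the higher threshold. A hedged sketch, assuming AF41 video and AF21 interactive data:

! video (AF41 = DSCP 34) to queue 2, threshold 3 (the preset queue-full level, drops last)
mls qos srr-queue output dscp-map queue 2 threshold 3 34
! interactive data (AF21 = DSCP 18) to queue 2, threshold 1 (drops first under congestion)
mls qos srr-queue output dscp-map queue 2 threshold 1 18
! queue-set 1, queue 2: WTD thresholds 70% and 80%, 100% reserved, 400% maximum
mls qos queue-set output 1 threshold 2 70 80 100 400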
We currently have a site with a very simple topology that uses a 3750X switch stack for a collapsed core. Every day the users have a conference call and experience poor voice quality. It's not bad when users call from several conference phones, but when everyone calls in on individual phones, the voice quality is choppy and almost inaudible. The voice traffic flow would be as follows: Phone <-> 3750 switch <-> Voice GW

We have packet captures showing that RTP packet loss is occurring from the phone to the voice gateway, but none from the voice gateway to the phones. We also have drops in the output queues that match drops on the ASICs. I can reset the counters and they will stay clear until the call, then increment significantly during the call. The voice gateway and phones are non-Cisco. The switch stack has 6 switches. We are trusting the DSCP settings on the phones. All the queue drops from the phones are usually in queues 0-3, but all drops toward the voice gateway are in queue 0. Below are the QoS settings; they are mostly default and we have not changed any queuing, thresholds, or buffers. Should we specify larger buffers and thresholds for a designated queue and send EF traffic to that queue?
MySwitch#sh mls qos
QoS is enabled
QoS ip packet dscp rewrite is disabled

Typical port (GigabitEthernet1/0/4) trust state: trust dscp
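If the drops line up with queue 0 in the statistics display (queue 1 in configuration, which becomes the egress priority queue when priority-queue out is set), one hedged direction is to give that queue more buffer, relax its thresholds, and make sure EF lands there. The numbers below are illustrative only; buffer tuning on this platform should be tested carefully:

! send EF (DSCP 46) to queue 1
mls qos srr-queue output dscp-map queue 1 threshold 3 46
! give queue 1 a larger share of the buffer pool: 30/30/20/20 across queues 1-4
mls qos queue-set output 1 buffers 30 30 20 20
! queue 1: raise WTD thresholds and allow borrowing from the common pool up to 400%
mls qos queue-set output 1 threshold 1 100 100 100 400
interface GigabitEthernet1/0/4
 priority-queue out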
I've been working on breaking down and understanding the default auto qos configuration on a Cisco 3750 in the hopes of putting together a QoS strategy that will fit our environment. I'm having some difficulty understanding how the "mls qos queue-set output" syntax works.
From another post, at [URL], the author offers the following example and explanation:
How come there is syntax stating "threshold 2" when in the succeeding part the 400 refers to threshold 1 and threshold 2 again? The syntax "400 400" is apparently already referring to thresholds 1 and 2, no?
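The confusion is understandable: the number immediately after the threshold keyword is not a threshold value at all, it is the queue ID. Per the 3750 command reference, the parameters break down as follows:

mls qos queue-set output 1 threshold 2 400 400 100 400
! 1   = queue-set ID
! 2   = queue ID ("threshold 2" means "the thresholds of queue 2")
! 400 = WTD drop threshold 1, as a percentage of the queue's allocated memory
! 400 = WTD drop threshold 2
! 100 = reserved (guaranteed) buffer percentage
! 400 = maximum threshold, the overflow limit the queue can borrow up to

So yes, the "400 400" pair are thresholds 1 and 2; the "2" right after the keyword just selects which queue they apply to.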
I am looking for a step-by-step configuration on how to enable rate limiting and traffic shaping on Cisco 6513 VLAN interfaces. I am not able to find this particular document on CCO.
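In the absence of that document, here is a hedged sketch of VLAN-based policing on a 6500-class chassis (the ACL, names, and rates are placeholders; note the PFC polices rather than shapes, and shaping generally requires WAN-type modules):

mls qos
access-list 150 permit ip any any
class-map match-all VLAN100-TRAFFIC
 match access-group 150
policy-map LIMIT-VLAN100
 class VLAN100-TRAFFIC
  ! 10 Mbps with a 31250-byte burst
  police 10000000 31250 conform-action transmit exceed-action drop
! member ports must classify at the VLAN level
interface GigabitEthernet1/1
 mls qos vlan-based
interface Vlan100
 service-policy input LIMIT-VLAN100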
I am configuring a 3560 to provide Internet access for our customers, and I need to make sure they don't use more bandwidth than they have contracted for. I see that the 3560 supports the rate-limit command, but I was told that I should use traffic shaping and policing along with access lists to manage the bandwidth. Is there a reason I should avoid using the rate-limit command? It looks much simpler.
I want to limit the bandwidth going to a remote site on the switch connecting to our NetApp. We have a 4-port channel group set up on our 3750X switch going to our NetApp storage. We have a 100 Mb WAN link to our remote site, and we want only 60 Mb of that link to be used for NetApp traffic; all other local traffic needs to be able to use the full bandwidth to the NetApp.
Is it possible to allocate bandwidth in this way, and how would I go about it? We don't have access to the routers for the link; they plug directly into a port on our Cisco.
After opening up SolarWinds NPM, I noticed that a few of my interfaces had lots of discards (who knows how long it's been since the counters were reset).
interface GigabitEthernet1/0/25
 description Etherchannel to MamaCass
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 priority-queue out
 channel-group 4 mode on

interface GigabitEthernet2/0/25
 description Etherchannel to MamaCass
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 priority-queue out
 channel-group 4 mode on

interface Port-channel4
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate

It looks as if priority-queue was configured outbound on these interfaces. Could this be the cause of the transmit discards, which are now up to 79,835? I just reset the counters on the interfaces a little while ago.
I'm not the best in the world when it comes to QoS. We do have some VoIP phones, but they are only on a specific network and do not travel outside it, since they are used mainly for VoIP training. I do know both interfaces are running the default of FIFO.
I feel that the 3560 and 3750 perform differently with the following two commands:
srr-queue bandwidth shape 5 0 0 0
srr-queue bandwidth limit 50

On the 3750, the bandwidth for queue 1 is limited to 100 Mbps x 50% / 5 = 10 Mbps. On the 3560, the bandwidth for queue 1 is limited to the smaller of BW / shape weight and BW x limit%.
Does that sound about right? Is there a way to check for mls qos input queue drops? The "show mls qos interface xxx statistics" command only shows the output queue drops. Maybe for some reason the input queue never drops?
When configuring QoS on 3750s/3560s, we're mapping packets to particular interface output queues with commands such as: [code]

The command to see what's actually being enqueued, dropped, etc. is: [code]
Note that these queues are numbered 0-3, and not 1-4. We've been assuming that the first queue number (i.e., 1) in the "mls qos" command maps to the first queue (i.e., 0) in the "show mls qos" command.
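That assumption matches the platform documentation, and a side-by-side look makes the off-by-one labeling concrete (the DSCP 46 mapping is just illustrative):

! configuration numbers the four egress queues 1-4 ...
mls qos srr-queue output dscp-map queue 1 threshold 3 46
! ... but the statistics display labels the same queues 0-3,
! so traffic mapped to "queue 1" above shows up under "queue 0" here
show mls qos interface gigabitEthernet 1/0/1 statistics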
I've been fighting what seems to be an increased number of output queue drops on our core stack and edge switches for the last 3 or 4 weeks. (The core consists of a stack of five 3750s in 32-gig stack mode. The workgroup switches are 3560s; all are at 12.2.52.) The workgroup switches are directly connected to users. We use Nortel IP phones with the phone inline with the user PC, auto-negotiated to 100/full. [code] However, I have tried turning off QoS on a couple of workgroup switches (no mls qos, but left the individual port configurations the same) and am still seeing drops. Since I have disabled QoS on the switches in question (no mls qos) (not the core, though), I presume these commands have no effect on the switch operation and therefore cannot be related to the problem. With QoS turned off, one would presume that it is general congestion, especially at the user edge where busy PC issues might contribute. So I wanted to see if I could catch any instances of packets building up in the output queues.
I wrote some scripts and macros that essentially did a snapshot of 'show int' every 20 seconds or so, and looked for instances of 'Queue: x/' where x was greater than zero. What I found after several days of watching the core stack, and a few of the workgroup switches that most often display the behavior, was that I NEVER saw ANY packets in output queues. I often saw packets in input queues for VLAN1, and once in a great while I would see packets on input queues for Fa or Gi interfaces, but NEVER on output queues. [code] Additionally, when I look (via SNMP) at interface utilization on interfaces showing queue drops (both core and workgroup), the drops are occurring at ridiculously low utilization levels (as low as 4 to 8%). I've tried to look for microbursts between the core and a workgroup switch where the core interface was experiencing drops, but haven't seen any (using Observer suite). [code] While the queue-drop counts aren't critically high at this point, they are happening more frequently than in the past, and I would like to understand what is going on. In most cases, no error counters are incrementing on these interfaces. Is there some mechanism besides congestion that could cause output queue drops?
We have a guest wireless setup, but I need to rate limit the users so no one hogs all the bandwidth. The WLC is connected to a 3750 which is doing all the routing between the VLANs. I know I cannot shape the traffic on the 3750.
I would like to limit the bandwidth available to a different target machine. I have been trying to do this on a Cisco 3750, but first I came across this message:
% QoS: policy-map with police action at parent level not supported ...
Then this:
% QoS: policy-map child ... ClassMap BackLimit only support MATCH-INPUT INTERFACE
Is there any documentation? I searched the forum, but all I see are complex solutions. I just want to limit the bandwidth for a machine that is in a different site, and I wanted to apply the policy on an SVI interface.
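Those two error messages describe the shape the 3750 actually expects for SVI policing: a hierarchical policy where the parent class on the SVI classifies the traffic (police is not allowed at that level) and the child class must match input interfaces and carries the policer. A hedged sketch along those lines (the host address, port range, names, and rate are all placeholders):

mls qos
ip access-list extended REMOTE-HOST
 permit ip any host 192.0.2.10
class-map match-all REMOTE-HOST
 match access-group name REMOTE-HOST
class-map match-all EDGE-PORTS
 match input-interface GigabitEthernet1/0/1 - GigabitEthernet1/0/24
policy-map CHILD-POLICE
 class EDGE-PORTS
  police 2000000 16000 exceed-action drop
policy-map SVI-LIMIT
 class REMOTE-HOST
  ! the parent only classifies; the child does the policing
  service-policy CHILD-POLICE
! member ports must be in VLAN-based QoS mode
interface range GigabitEthernet1/0/1 - 24
 mls qos vlan-based
interface Vlan100
 service-policy input SVI-LIMIT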
I have a requirement to assign an asymmetric bandwidth limit to each port on a switch (example: 4 Mbps downlink, 1 Mbps uplink). I've been searching and found the option to apply policers or the srr-queue mechanism to achieve this; however, as far as I know, each of these only applies in one direction. The Catalyst 2960 family is preferred; however, if this is not possible, we will possibly jump to the 3560X family.
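A hedged way to get the asymmetry on the 2960 is to treat each direction with a different tool: srr-queue bandwidth limit for the downlink (egress toward the user) and an MQC ingress policer for the uplink. Since the limit command only accepts 10-90 percent of line rate, a 4 Mbps cap on a FastEthernet port needs the port forced to 10 Mb first. Values below are illustrative:

mls qos
access-list 100 permit ip any any
class-map match-all ANY-TRAFFIC
 match access-group 100
policy-map POLICE-1M
 class ANY-TRAFFIC
  ! uplink (user -> network): 1 Mbps
  police 1000000 16000 exceed-action drop
interface FastEthernet0/1
 speed 10
 ! downlink (network -> user): 40% of 10 Mb = 4 Mbps egress
 srr-queue bandwidth limit 40
 service-policy input POLICE-1M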
We run a workers' camp here, and we currently have around 2500-3000 people using our 100 Mb Internet pipe. We are upgrading the pipe to 200 Mb soon, but I would still like to limit how much bandwidth everyone is using.
We allow streaming media such as Netflix, YouTube, Apple TV, and of course . So it gets full pretty fast. We have QoS implemented, although I wasn't here when it was done, so I don't know a lot about it. I would like to limit IPs to a certain amount of bandwidth. [code]
I am running the latest version, 12.2(55)SE6, on the Catalyst, and I am looking for the "lacp rate fast" command, but it is not there:
lab-c3750#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
lab-c3750(config)#interface g1/0/5
lab-c3750(config-if)#lacp ?
  port-priority  LACP priority on this interface
lab-c3750(config-if)#lacp
According to this link, it is supposed to work: [URL] Am I missing something?