We are upgrading from 3550 to 3560 switches. On the 3550s we have this on each interface: [code] The 3560s won't accept the wrr-queue commands. How do we set these on the 3560s?
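On the 3560 the wrr-queue interface commands are replaced by SRR scheduling. A rough equivalent might look like the sketch below; the weights are placeholders that would need to be mapped from the old wrr-queue values:

```
interface FastEthernet0/1
 ! four share weights, one per egress queue (placeholder values)
 srr-queue bandwidth share 10 10 60 20
 ! optional shaping; 0 means "shared, not shaped"
 srr-queue bandwidth shape 10 0 0 0
 ! make queue 1 a strict-priority (expedite) queue
 priority-queue out
```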
I am working on a QoS design which I hope to test at some point, but at this stage it's from the books. My question is how to decide which queue and threshold to use for video traffic, and then for lower-priority traffic. I understand the shaping and sharing commands; it's the queuing and threshold part I'm not clear on. The plan is to use the priority queue for EF-marked voice, which will be policed on ingress to provide an upper limit on EF traffic levels; my second-priority traffic will be video. Which queue will get serviced first once the priority queue is empty, and how do I decide which threshold to allocate my video traffic to? The documentation is not at all clear. I want to prioritize my traffic in the following order:
1. Voice: use the priority queue
2. Video: to get serviced ahead of data, after voice
3. Interactive data
4. Bulk data
5. Best effort
So Q1 settings are ignored due to the priority queue; Q2 gets 70%, Q3 25%, etc. Is it as simple as putting video into Q2T1 and interactive data into Q2T2? Will Q2T1 get higher priority over Q2T2 once the PQ is serviced?
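For what it's worth, on the 3560/3750 the thresholds within a queue are weighted tail-drop (WTD) drop thresholds, not scheduling priorities: T1 vs. T2 only changes when packets start being dropped as the queue fills, so the preferred class belongs on the *higher* threshold. A sketch of mapping both classes into queue 2, assuming AF41/AF31 markings (placeholders):

```
! video (AF41 assumed) -> queue 2, threshold 3 (drops last)
mls qos srr-queue output dscp-map queue 2 threshold 3 34
! interactive data (AF31 assumed) -> queue 2, threshold 1 (drops first)
mls qos srr-queue output dscp-map queue 2 threshold 1 26
```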
We currently have a site with a very simple topology that uses a 3750X switch stack as a collapsed core. Every day the users have a conference call and experience poor voice quality. It's not bad when users call from several conference phones, but when everyone calls in on individual phones, the voice quality is choppy and almost inaudible. The voice traffic flow is as follows: Phone <-> 3750 switch <-> Voice GW. We have packet captures showing that RTP packet loss is occurring from the phones to the voice gateway, but none from the voice gateway to the phones. We also have drops in the output queues that match drops on the ASICs. I can reset the counters and they will stay clear until the call, then increment significantly during the call. The voice gateway and phones are non-Cisco. The switch stack has 6 switches. We are trusting the DSCP settings on the phones. The queue drops toward the phones are usually in queues 0-3, but all drops toward the voice gateway are in queue 0. Below are the QoS settings; they are mostly default and we have not changed any queuing, thresholds, or buffers. Should we specify larger buffers and thresholds for a designated queue and send EF traffic to that queue?
MySwitch#sh mls qos
QoS is enabled
QoS ip packet dscp rewrite is disabled

Typical port (GigabitEthernet1/0/4) trust state: trust dscp
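If the drops turn out to be buffer starvation in the queue carrying EF, one option is to assign the voice ports a queue-set with larger buffers and thresholds. A sketch; all the percentages are placeholder tuning values, not recommendations:

```
! steer EF (DSCP 46) into egress queue 1
mls qos srr-queue output dscp-map queue 1 threshold 3 46
! queue-set 2: give queue 1 a larger share of the buffer pool (placeholders)
mls qos queue-set output 2 buffers 40 20 20 20
! queue 1 thresholds: drop-th1, drop-th2, reserved, maximum (placeholders)
mls qos queue-set output 2 threshold 1 100 100 100 400
!
interface range GigabitEthernet1/0/1 - 48
 queue-set 2
 priority-queue out
```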
We're having some issues with a 3560 CPE. Its uplink is a GE fiber link; the customer port is FE RJ45. We see a lot of TX frames being dropped at the FE port, but none at the GE port, even when the customer is only at ~50 Mbit/s downstream. When the customer sends ~50 Mbit/s upstream, there are no TX drops at the GE link. Is this normal behaviour? From what I know, the physical medium shouldn't have any impact on this, since drops occur in the port ASIC and not in physical transmission. Do the buffer sizes between GE and FE differ? What could we do to optimize the flow and reduce drops? QoS is set to off and no modifications have been made to the queues on the interfaces.
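Assuming this is the classic speed-mismatch burst problem (GE in, FE out), the usual lever on a 3560 is the egress queue-set buffers and thresholds, which only become tunable once QoS is enabled. A sketch with placeholder values; note that simply turning on "mls qos" changes the buffering model and can initially make drops worse until tuned:

```
mls qos
! queue-set 2: weight buffers toward the queue carrying the customer
! traffic (queue 4 here is a placeholder; verify with the cos/dscp maps)
mls qos queue-set output 2 buffers 10 10 10 70
! raise that queue's drop/maximum thresholds (placeholder values)
mls qos queue-set output 2 threshold 4 100 100 50 400
!
interface FastEthernet0/1
 queue-set 2
```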
I have a 2921 where I am shaping some traffic based on subnet on my LAN. I have applied the shaping policy to the LAN interface in the outgoing direction.
Topology is as follows: ISP - ASA - ROUTER - LAN Policy map: Policy Map shape-lan [code]....
I am seeing a lot of no-buffer drops on the policy and I am wondering what the best solution is:

Class-map: tc-class (match-any)
  8730680 packets, 10803689863 bytes
  5 minute offered rate 4453000 bps, drop rate 0 bps
[code]....
Should I just be increasing the queue-limit or should I be changing something else?
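Given no-buffer drops under a shaper, the per-class queue depth is usually the first knob. A sketch against the class name from the output above; the shape rate and queue depth are placeholders:

```
policy-map shape-lan
 class tc-class
  ! existing shape rate (placeholder value)
  shape average 50000000
  ! deepen the queue from the default (64 packets on many HQF releases)
  queue-limit 512 packets
```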
I've been working on breaking down and understanding the default auto qos configuration on a Cisco 3750 in the hopes of putting together a QoS strategy that will fit our environment. I'm having some difficulty understanding how the "mls qos queue-set output" syntax works.
From another post, at [URL], the author offers the following example and explanation;
How come there is syntax stating "threshold 2" when in the succeeding part the 400 refers to threshold 1 and threshold 2 again? The "400 400" is, apparently, already referring to thresholds 1 and 2, no?
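The confusing part is that the "2" immediately after the threshold keyword is the queue number, not a threshold index; the four numbers that follow are drop threshold 1, drop threshold 2, the reserved allocation, and the maximum threshold. Annotated:

```
mls qos queue-set output 1 threshold 2 400 400 100 400
! queue-set id ......... 1
! queue number ......... 2   (this is the "2" after "threshold")
! drop threshold 1 ..... 400 (% of the queue's allocated buffers)
! drop threshold 2 ..... 400
! reserved buffers ..... 100 (%)
! maximum threshold .... 400 (%)
```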
On one of our Cisco 6509s I've globally enabled QoS and set a port to "trust cos". However, when I look at the queueing for that interface, I notice that the receive queue thresholds have not changed to the default.
I'm kind of new to QoS so I'm not sure if I'm missing something.
We are using a Cisco 6509 running 12.2(33)SXI3 with the WS-X6724-SFP card. I thought I'd be seeing the QoS-enabled default tail-drop thresholds, but instead I still see the same values as if QoS were not enabled:
Queueing Mode In Rx direction: mode-cos
Receive queues [type = 1q8t]:
Queue Id    Scheduling    Num of thresholds
I can see drops on the 6509 queue for interface Gi1/6. QoS is disabled globally; with QoS disabled, all packets are in one queue using best effort. My question is: if I can see drops using the "sh queueing int Gi1/6" command, why am I not seeing any drops when I run the "sh int <interface>" command? [code]
We tested QoS on a Cisco 3750E, IOS 12.2(58)SE2. Voice traffic lands in the correct queue without any problem, but all the other traffic goes to the default queue (0). I captured the traffic and the TCP/UDP ports are correct. Is anything wrong with my ACL or DSCP-to-CoS mapping? (The same ACL works fine on a 4500 and 6500.) [code]
I have a Cisco 2960G switch and one of the ports was configured with "srr-queue bandwidth limit 90". I need to remove this bandwidth limit from the interface. [code]
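Removing it should just be the "no" form on the interface (interface number assumed):

```
interface GigabitEthernet0/1
 no srr-queue bandwidth limit
```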
I can see drops in the input queue on one of our busy L3 VLANs, and they are going up very frequently. System image file is "sup-bootflash:s72033-psv-mz.122-18.SXD3.bin". Hardware: 6509.
After opening up Solarwinds NPM, I noticed that a few of my interfaces had lots of discards (who knows how long it's been since the counters were reset).
interface GigabitEthernet1/0/25
 description Etherchannel to MamaCass
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 priority-queue out
 channel-group 4 mode on

interface GigabitEthernet2/0/25
 description Etherchannel to MamaCass
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 priority-queue out
 channel-group 4 mode on

interface Port-channel4
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate

It looks as if priority-queue out was configured on these interfaces. Could this be the cause of the transmit discards, which are now up to 79,835? I just reset the counters on the interfaces a little while ago.
I'm not the best in the world when it comes to QoS. We do have some VoIP phones, but they are only on a specific network and do not travel outside it, since they are used mainly for VoIP training. I do know both interfaces are running the default of FIFO.
I have issues logging into one of our core switches. It's a 6509, but I cannot log in remotely. When I try to connect on the console port, I cannot log in either; instead I get the above error message. I haven't rebooted yet, but would that solve the problem? This switch is a production switch.
I have a Cisco Catalyst 2960 with IOS Release12.2(53)SE (because of a contract I can not update it) -> the release notes for this version describe the following:
When auto-QoS is enabled on the switch, priority queuing is not enabled. Instead, the switch uses shaped round robin (SRR) as the queuing mechanism. The auto-QoS feature is designed on each platform based on the feature set and hardware limitations, and the queuing mechanism supported on each platform might be different. There is no workaround. (CSCee22591)
My config is as follows:
interface FastEthernet0/1
 switchport access vlan 200
 switchport mode access
 srr-queue bandwidth share 10 10 60 20
 priority-queue out
 mls qos trust dscp
 auto qos voip trust
 no cdp enable
 network-policy 1
 spanning-tree portfast

My question now is: when the priority queue is not enabled by auto-qos because of the software bug, is it nevertheless enabled by the additional "priority-queue out" command?
I feel that 3560 and 3750 perform differently with the following two commands:
srr-queue bandwidth shape 5 0 0 0
srr-queue bandwidth limit 50

On the 3750, the bandwidth for queue 1 is limited to 100 Mb/s x 50% / 5 = 10 Mb/s. On the 3560, the bandwidth for queue 1 is limited to the smaller of BW / shape weight and BW x limit%.
Does that sound about right? Also, is there a way to check for mls qos input queue drops? "show mls qos interface xxx statistics" only shows the output queue drops. Maybe for some reason the input queues never drop?
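One way to look for drops beyond the per-queue output counters is the port-ASIC drop statistics; the command exists on the 3560/3750 family, though the exact output varies by release (interface number is a placeholder):

```
show mls qos interface gigabitethernet1/0/1 statistics
show platform port-asic stats drop gigabitethernet1/0/1
```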
We have two 6509Es as our core switches. Recently I noticed that on some connections I have a high output queue drop rate.
These 4 x 2 (gigabit) interfaces are connected to our blade enclosure, consisting of 4 x WS-CBS3120X-S. The utilization of the links is really quite low (~60 Mbps) when I see the drops increase. All the links are fiber (SFP) and the distance between the core switches and the enclosure is about 15-20 m.
I am not aware of any service degradation on the part of the servers. No CRCs, collisions etc, on the interfaces, apart from the drops.
The line card is a WS-X6748-SFP, but other interfaces don't seem to be experiencing any problems.
I have a customer seeing output drops on a 2960S with mls qos not enabled. It appears that bursts of traffic on the switch are filling up the buffers and causing the drops. I have a couple of questions:
1. What are the default queue/buffer settings when mls qos is NOT enabled on the switch?
2. Is there any good documentation out there regarding the buffer sizes of the different switch models?
The customer is looking for an answer as to whether or not replacing the 2960S with a higher model would eliminate the output drops WITHOUT having to mess with QoS/buffer/drop-threshold settings on the switch, and Cisco doesn't seem to make the buffer sizes readily available for the smaller Catalyst switches.
I have a 3560 connecting to an SP with limited bandwidth. I have one interface on the switch whose traffic I do not want to drop; I want this traffic to go into the high-priority queue. I am not sure how this should be configured, but here is my best guess and my current QoS configuration on the switch:
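One hedged way to sketch this: force a high CoS on the protected ingress port, map that marking to egress queue 1, and enable the priority queue on the uplink. All interface numbers and the CoS value are placeholders:

```
mls qos
!
interface FastEthernet0/10
 description protected customer port (placeholder)
 mls qos cos 5
 mls qos cos override
!
! send CoS 5 to egress queue 1
mls qos srr-queue output cos-map queue 1 threshold 3 5
!
interface GigabitEthernet0/1
 description uplink to SP (placeholder)
 priority-queue out
```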
When configuring QoS on 3750s/3560s, we're mapping packets to particular interface output queues with commands such as: [code] The command to see what's actually being enqueued, dropped, etc. is: [code]
Note that these queues are numbered 0-3, not 1-4. We've been assuming that the first queue number (i.e., 1) in the "mls qos" command maps to the first queue (i.e., 0) in the "show mls qos" command.
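That assumption matches how the platform presents things: the configuration side numbers the egress queues 1-4, while the statistics display labels the same queues 0-3. For example (interface number assumed):

```
! config side: queues are 1-4
mls qos srr-queue output dscp-map queue 1 threshold 3 46
! display side: the same queue shows up as "queue 0" in
show mls qos interface gigabitethernet1/0/1 statistics
```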
I previously put this in the 'Video over IP' discussion group, where it is not getting any response. I have a 6509 and need to configure QoS on gig line cards that have a 1P3Q8T queue structure. I've already got mls qos configured and have the correct class maps built for voice, video, and signalling (I'm not doing auto-qos). I have only four classes: voice, video, signalling, and default (best effort). I need to configure the commands on the individual gig ports for the appropriate bandwidths as shown below:
1. 300 VoIP G.711 calls x 100 kb/s/call = 30 Mb/s of priority queue; DSCP = EF (46), CoS = 5
2. 150 video conference calls x 1.5 Mb/s/call = 225 Mb/s; DSCP = CS4 (32), CoS = 4
3. 450 signalling flows x 13 kb/s/call = 5.8 Mb/s (round to 6 Mb/s); DSCP = CS3 (24), CoS = 3
4. Default class is not marked; DSCP & CoS = 0
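A hedged sketch of the per-port commands for a 1P3Q8T card follows; the WRR weights are rough translations of the bandwidth numbers above (video ~225 Mb/s of a gig, signalling ~6 Mb/s, the rest best effort) and would need validation against the actual link speed and IOS release:

```
interface GigabitEthernet1/1
 ! WRR weights for queues 1-3 (placeholders: q1 best effort, q2 signalling, q3 video)
 wrr-queue bandwidth 70 5 25
 ! CoS 5 (voice) into the strict-priority queue
 priority-queue cos-map 1 5
 ! CoS 4 (video) into queue 3, CoS 3 (signalling) into queue 2
 wrr-queue cos-map 3 1 4
 wrr-queue cos-map 2 1 3
 ! CoS 0 (best effort) into queue 1
 wrr-queue cos-map 1 1 0
```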
We're having kind of a problem with our Catalyst 4507R switches. If we do a "show interface" we see a lot of "Total output drops" on some of our interfaces, most of the time on the same VLAN. I was wondering whether it has something to do with QoS or queue selection. As we don't have any QoS markings configured, is it possible that all traffic is using only one of the four TX queues?
I am trying to implement priority queuing (LLQ) on a pair of 10GE links between a 4507 with Sup6E and a 4948 which are configured as an etherchannel. I am unable to configure a priority queue on the 4507. I am running into the following issues:
I want to have a priority queue for voice traffic and specify minimum bandwidth for a critical application. If I configure a class with the priority command it will not let me use the bandwidth command on another class unless the priority class is policed. If I try it without the police command I get the message "bandwidth kbps/percent command cannot co-exist with strict priority in the same policy-map ". If I add a police statement to the priority class then I don't get this error.
When I try to apply the resulting service-policy to the physical interface it says "% A service-policy with non-queuing actions should be attached to the port-channel associated with this physical port" and does not add the command to the config.
If I try to associate the same policy-map to the port-channel rather than the physical interface it says "% A service-policy with queuing actions can be attached in output direction only on physical ports" and does not add the command to the config.
All of the other interfaces on the 4500 are working OK. The trunks have auto qos voip trust configured and access ports are marking the critical application traffic.
The 4507 is running 12.2(44)SG1 EnterpriseK9. I don't have the luxury to upgrade blindly to fix the problem unless I can identify a specific bug that is causing the problem.
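For reference, the general shape of a Sup6E policy consistent with those messages is below: the priority class must carry an explicit policer before "bandwidth" is accepted in sibling classes. Class names, match values, and the policer rate are all placeholders:

```
class-map match-any VOICE
 match dscp ef
class-map match-any CRITICAL-APP
 match dscp af31
!
policy-map UPLINK-OUT
 class VOICE
  priority
  police cir 100000000 conform-action transmit exceed-action drop
 class CRITICAL-APP
  bandwidth percent 20
 class class-default
```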
We are replacing two 3750E switches with 4500X using cat4500e-universalk9.SPA.03.03.02.SG.15111.GS2
1. Is there a command reference available for this IOS? I can't seem to find one.
2. I'm using GLC-T GBICs, and we would normally set the speed to either 100 or 1000; that option does not seem to be available now.
3. When I entered username etc., I got a message: "CLI deprecated soon".
I do not have the option to run "sh mls qos" commands. I am trying to look at the cos-map on my 7200 router. The code I am running is c7200-p-mz.122-25.s9.bin. I also do not see the mls qos command listed globally, and it is not an available command in config t mode.
SSH commands are not available in IOS cat4500e-universalk9.SPA.03.02.00.XO.150-2.XO.bin. I just recently upgraded to a universal k9 image, as the k9 versions usually include the crypto/SSH commands; however, I still do not have access to these commands. Is there anything I must do to enable them?
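Assuming the image does include crypto support, SSH on IOS typically won't come up until a hostname, domain name, and RSA key exist. A minimal sketch (hostname and domain are placeholders):

```
hostname Switch4500X
ip domain-name example.com
crypto key generate rsa modulus 2048
ip ssh version 2
!
line vty 0 4
 login local
 transport input ssh
```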
I'm in the process of upgrading the IOS on our 6500 switch and, unfortunately, the images were messed up by other users. Here's the output of show redundancy. [code] Would I have any issues if I reload this slave supervisor engine to load the SXI8 IOS?
I've been fighting what seems to be an increased number of output-queue drops on our core stack and edge switches for the last 3 or 4 weeks. (The core consists of a stack of five 3750s in 32-gig stack mode; the workgroup switches are 3560s; all are at 12.2(52).) The workgroup switches are directly connected to users. We use Nortel IP phones with the phone inline with the user PC, auto-negotiated to 100/full. [code] I have tried turning off QoS on a couple of workgroup switches ("no mls qos", leaving the individual port configurations the same) but am still seeing drops. Since I have disabled QoS on the switches in question (not the core, though), I presume those commands have no effect on switch operation and therefore cannot be related to the problem. With QoS turned off, one would presume it is general congestion, especially at the user edge where busy-PC issues might contribute. So I wanted to see if I could catch packets building up in the output queues.
I wrote some scripts and macros that essentially took a snapshot of 'show int' every 20 seconds or so and looked for instances of 'Queue: x/' where x was greater than zero. What I found after several days of watching the core stack, and a few of the workgroup switches that most often display the behavior, was that I NEVER saw ANY packets in output queues. I often saw packets in the input queue for VLAN1, and once in a great while on input queues for Fa or Gi interfaces, but NEVER on output queues. [code] Additionally, when I look (via SNMP) at utilization on interfaces showing queue drops (both core and workgroup), the drops are occurring at ridiculously low utilization levels (as low as 4 to 8%). I've tried to look for microbursts between the core and a workgroup switch where the core interface was experiencing drops, but haven't seen any (using Observer suite). [code] While the queue-drop counts aren't critically high at this point, they are happening more frequently than in the past and I would like to understand what is going on. In most cases, no error counters are incrementing on these interfaces. Is there some mechanism besides congestion that could cause output queue drops?
I am looking for some best-practice and useful logging commands on the 6500 and 3750 platforms. Some of them I have listed below. Are there any important ones I am missing? Also, what is the recommended logging level for the buffer, and what logging level for the syslog server?
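A common baseline on both platforms, as a sketch: buffered logging is often kept at debugging while the syslog trap level stays at informational. The host address, buffer size, and source interface below are placeholders:

```
service timestamps log datetime msec localtime show-timezone
logging buffered 64000 debugging
logging host 192.0.2.10
logging trap informational
logging source-interface Loopback0
no logging console
```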
I've got two routers, Cisco 2911's with 15.1(4)M1 on one and 15.0(1)M5 on another.
I'm trying to set up IP SLA for VRRP tracking, but the commands seem gimped. I don't even have an option for "ip sla <operation-number>"; all I've got is ip sla responder/server/key-chain.
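On the 2900 series, the full "ip sla <n>" operation set is commonly gated behind the data technology-package license, so checking "show license" is worth doing first. Assuming the feature is available, SLA-tracked VRRP might be sketched as follows; addresses, interface names, and numbers are placeholders:

```
ip sla 10
 icmp-echo 192.0.2.1 source-interface GigabitEthernet0/0
 frequency 5
ip sla schedule 10 life forever start-time now
!
track 10 ip sla 10 reachability
!
interface GigabitEthernet0/1
 vrrp 1 ip 10.0.0.1
 vrrp 1 priority 110
 vrrp 1 track 10 decrement 30
```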