Today I witnessed a Cisco N5K that stopped playing fair. For an as-yet-unknown reason, several interfaces started showing output errors, all beginning within the same second. While I instantly thought this would be a wiring issue, I began to ask myself what an output error actually means. Google usually brings up output drops, not regular output errors. So what is it, and how can a 10G fiber interface even detect that there is a problem without receiving what it was sending?
I have an AIR-AP1121G-A-K9 running c1100-k9w7-tar.123-7.JA2 (autonomous). We have monitoring set up with Orion NPM, and we consistently see output errors, transmit discards, and big buffer errors. The users at the site have not reported any issues, but I was wondering how to prevent these, or are they normal? What causes the output errors on the wireless radio? How do I troubleshoot further?
Radio0-802.11G   Total Output Errors    0          47749
Radio0-802.11G   Small Buffer Misses    4 misses   139 misses
I have a 3750X-24T in our production environment that is showing a very high number of OQDs in the 'show int sum' output for four of the Gigabit interfaces. The interfaces are each in a separate port channel, there are no OQDs for the relevant port channels, and there are no output drops in the 'sh int' output for each interface.
The following are the OQDs for the relevant interfaces:
Gi1/1/1: 0  Gi1/1/2: 0  Gi1/1/3: 0  Gi1/1/4: 0
Gi2/1/1: 4252879251  Gi2/1/2: 4251090833  Gi2/1/3: 4251754140  Gi2/1/4: 4294942102
Po1: 0  Po2: 0  Po3: 0  Po4: 0
Gi1/1/1 and Gi2/1/2 are assigned to Po1, and so on. IOS version: C3750E-IPBASEK9-M 12.2(58)SE2.
I've been working on breaking down and understanding the default auto qos configuration on a Cisco 3750 in the hopes of putting together a QoS strategy that will fit our environment. I'm having some difficulty understanding how the "mls qos queue-set output" syntax works.
From another post, at [URL], the author offers the following example and explanation:
How come the syntax states "threshold 2" when, in the part that follows, the 400 refers to threshold 1 and threshold 2 again? The "400 400" is, apparently, already referring to thresholds 1 and 2, no?
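My current reading of the command, which may itself be the source of my confusion, is that the number immediately after the threshold keyword is the queue ID rather than a threshold value, and that the four numbers which follow are drop threshold 1, drop threshold 2, the reserved buffer percentage, and the maximum threshold, i.e.:

! mls qos queue-set output <queue-set> threshold <queue-id> <drop-thr1> <drop-thr2> <reserved> <maximum>
! e.g. queue-set 1, queue 2: both WTD drop thresholds at 400%, 100% reserved, maximum 400%
mls qos queue-set output 1 threshold 2 400 400 100 400

Is that the right way to read it?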
We have two 6509Es as our core switches. Recently I noticed that some connections have a high output queue drop rate.
These 4 x 2 Gigabit interfaces are connected to our blade enclosure, which consists of 4 x WS-CBS3120X-S. The utilization of the links is really quite low (~60 Mbps) when I see the drops increasing. All the links are fiber (SFP), and the distance between the core switches and the enclosure is about 15-20 m.
I am not aware of any service degradation on the part of the servers. There are no CRCs, collisions, etc. on the interfaces, apart from the drops.
The line card is a WS-X6748-SFP, but other interfaces don't seem to be experiencing any problems.
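Would something like the following be the right way to see which transmit queue on the WS-X6748 is actually taking the drops? (The interface below is just an example, not necessarily one of the affected ports.)

show queueing interface gigabitethernet 2/1
show counters interface gigabitethernet 2/1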
When configuring QoS on 3750s/3560s, we're mapping packets to particular interface output queues with commands such as: [code] The command to see what's actually being enqueued, dropped, etc. is: [code]
Note that these queues are numbered 0-3, not 1-4. We've been assuming that the first queue number (i.e., 1) in the "mls qos" command maps to the first queue (i.e., 0) in the "show mls qos" output.
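To make the question concrete, this is the pairing we have been assuming (the DSCP value is only an example): the config command below addresses queue 1, and we expect its counters to show up as queue 0 in the statistics output.

! configuration side: queues are addressed as 1-4
mls qos srr-queue output dscp-map queue 1 threshold 3 46
! verification side: queues are displayed as 0-3
show mls qos interface gigabitethernet 1/0/1 statistics

Is that assumption correct?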
We're having a bit of a problem with our Catalyst 4507R switches. If we do a "show interface" command, we're getting a lot of "Total output drops" on some of our interfaces, most of the time on the same VLAN. I was wondering if it has something to do with QoS or queue selection. As we don't have any QoS markings configured, is it possible that all traffic is using only one of the four TX queues?
I am looking at the interface stats of port Fa1/0/2 and see something strange. Output drops are 42 billion after 16 minutes, then 21249 a few seconds later, then 42 billion again, then 21444, and so forth. I keep getting an entirely different output drops reading every time I refresh, within seconds of each refresh!
sh int fa1/0/2
FastEthernet1/0/2 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is ecc8.8266.d604 (bia ecc8.8266.d604)
  Description: MSGMERGF1
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 12/255, rxload 11/255
I've been fighting what seems to be an increased number of output queue drops on our core stack and edge switches for the last 3 or 4 weeks. (The core is a stack of five 3750s in 32-gig stack mode; the workgroup switches are 3560s; all are at 12.2.52.) The workgroup switches are directly connected to users. We use Nortel IP phones with the phone inline with the user PC, auto-negotiating to 100/full. [code] However, I have tried turning off QoS on a couple of workgroup switches (no mls qos, but leaving the individual port configurations the same) and am still seeing drops. Since I have disabled QoS on the switches in question (no mls qos) (not the core, though), I am presuming these commands have no effect on switch operation and therefore cannot be related to the problem. With QoS turned off, one would presume that it is general congestion, especially at the user edge where busy-PC issues might contribute. So I wanted to see if I could catch any instances of packets building up in the output queues.
I wrote some scripts and macros that essentially took a snapshot of 'show int' every 20 seconds or so and looked for instances of 'Queue: x/' where x was greater than zero. What I found after several days of watching the core stack, and a few of the workgroup switches that most often display the behavior, was that I NEVER saw ANY packets in output queues. I often saw packets in the input queue for VLAN1, and once in a great while I would see packets in the input queues of Fa or Gi interfaces, but NEVER in output queues. [code] Additionally, when I look (via SNMP) at interface utilization on interfaces showing queue drops (both core and workgroup), the drops are occurring at ridiculously low utilization levels (as low as 4 to 8%). I've tried to look for microbursts between the core and a workgroup switch where the core interface was experiencing drops, but haven't seen any (using Observer Suite). [code] While the queue-drop counts aren't critically high at this point, they are happening more frequently than in the past and I would like to understand what is going on. In most cases, no error counters are incrementing on these interfaces. Is there some mechanism besides congestion that could cause output queue drops?
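For what it's worth, rather than driving this from external scripts, I'm considering an on-box snapshot along these lines, assuming the image supports EEM (the applet name and 20-second interval are just placeholders, and the syslog message will truncate long output):

event manager applet QUEUE-SNAPSHOT
 event timer watchdog time 20
 action 1.0 cli command "enable"
 action 2.0 cli command "show interfaces | include Queue:"
 action 3.0 syslog msg "$_cli_result"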
My router, a Cisco 7204 with an NPE-300, is experiencing output drops and input errors on a FastEthernet interface. I have a 100 Mbps connection with less than 15 Mbps utilization at peak times.
FastEthernet1/0 is up, line protocol is up
  Hardware is DEC21140, address is 0014.a985.1a1c (bia 0014.a985.1a1c)
  Internet address is 38.102.66.134/30
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 3/255, rxload 1/255
I've got a router acting as a VPN concentrator, which terminates site-to-site VPN connections from 10 branches with Cisco 881s and Cisco 1941s. I started Cacti monitoring and found out that there are a lot of errors on the interfaces. [URL]
I have seen an error in a GRE tunnel configured between two routers over the WAN. I am monitoring the WAN link and the GRE tunnel via the WhatsUp Gold NMS, and it reports that the GRE tunnel sometimes has packet loss; when it does, it affects the services and traffic passing over the tunnel. 'sh int t101' shows output drops; is that the problem? I have read that I have to adjust the MTU size, and I tried to change the tunnel MTU to 1400, but 'sh int t101' still shows the MTU as 1514. What could be the cause of the output drops on my tunnel link? [code]
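To clarify what I mean by changing the tunnel MTU, the command I used was along these lines (the adjust-mss line is only something I am considering, not something already applied), and my understanding, which may be wrong, is that 'sh int' keeps reporting the interface MTU of 1514 while the configured value only appears under 'show ip interface':

interface Tunnel101
 ip mtu 1400
 ! considering also: ip tcp adjust-mss 1360
!
show ip interface tunnel 101 | include MTU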
In my Cisco 3845 router I can see output packet drops on some of the interfaces. I suspect that the router is processing packets beyond its maximum throughput limit. Moreover, when I run the 'show int fa x/y switching' command, I can see packets dropped by the RP process.
We are seeing a large number of output drops on 4 interfaces that connect to microwave wireless links. This is playing up with our management system. I saw a bug report, but that was for the 3750 and not this IOS version, although it was a similar issue. [code] The output errors tend to happen at the same time. All the ports are trunks; however, there are other trunks on the switch that are not having these issues. [code]
Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi0/40    100M  Pair A     2    +/- 4 meters  Pair A      Normal
                Pair B     2    +/- 4 meters  Pair B      Normal
                Pair C     2    +/- 4 meters  Pair C      Short
                Pair D     2    +/- 4 meters  Pair D      Short
From the command
test cable-diagnostics tdr int gi 0/40
Is this normal? If not, is the problem on the cable or on one of the interfaces? The port is connected to a FastEthernet interface on a 2811 router from the 3560-48 switch. The cable is a straight-through Cat 5e cable. (I have changed several cables with the same result.)
I am trying to get my Cisco 2811 to authenticate four DSL connections and load balance them by attaching the four DSL lines to a switch (each DSL going to a separate VLAN) and then trunking all four VLANs to an Ethernet interface on the 2811. My issue is that I cannot get more than one DSL connection to authenticate at a time; for example, Dialer1 will connect, then it will disconnect and Dialer2 will connect, and so on. I have the modems in bridge mode, I am using a separate username/password for each DSL account, and I have verified that the credentials are correct.
Below is my config from the router and a couple of messages that came across the console.
*Apr 27 01:12:11.915: %DIALER-6-UNBIND: Interface Vi2 unbound from profile Di1
*Apr 27 01:12:11.927: %LINK-3-UPDOWN: Interface Virtual-Access2, changed state to down
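For reference, the pattern I am aiming for is one PPPoE client per VLAN subinterface, each tied to its own dialer pool. This is a trimmed sketch rather than my full config, and the VLAN, pool, and credential values here are placeholders:

interface FastEthernet0/1.101
 encapsulation dot1Q 101
 pppoe enable group global
 pppoe-client dial-pool-number 1
!
interface Dialer1
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp chap hostname user1@isp
 ppp chap password 0 secret1
!
! Dialer2 / dial-pool-number 2 sit on Fa0/1.102, and so on for the other two lines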
I have a Cisco 2811 router with 2 ADSL interfaces, and we have 2 Internet lines associated with it: one with Telstra and one with iiNet. All Internet traffic from inside the office is routed through the Telstra line. The iiNet line is used for incoming email, establishing VPNs, etc. The problem is that we are not able to ping the iiNet IP address from the outside world; however, ping works for the Telstra IP. What do I need to configure on the router if I want to be able to ping and PuTTY into the router using both the Telstra and iiNet IPs from the outside world?
I had a question regarding the Cisco 2811. I tried fitting an NM-2FE-2W card, and the Fast Ethernet interfaces of the module are never recognized. Is there a special command to enable them, or is the module simply not supported by the router? If not, is there a way to give the router more than the two Fast Ethernet interfaces it already has? Let me know if any more info is needed from my end.
Another question, out of curiosity: is it possible to make the controller T1 port of a VWIC-1MFT-T1 or VWIC-2MFT-T1 act as Fast Ethernet?
I have been given the task of setting up bandwidth limits on the 2811 router's FastEthernet interfaces. The scenario is: we have a 4 MB Internet connection and would like to allocate bandwidth usage to users.
FastEthernet0/0 needs to be set to 256k output and 2048k input; this is going to be connected to a wireless router. FastEthernet0/1 needs to be configured with 2048k output. I could also use SDM if that's easier than using the CLI.
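I assume the CLI equivalent would be something along these lines, using simple policers under class-default (the policy names are placeholders, the choice of policing rather than shaping is my own assumption, and the rates are in bps):

policy-map POLICE-256K
 class class-default
  police 256000
policy-map POLICE-2048K
 class class-default
  police 2048000
!
interface FastEthernet0/0
 service-policy output POLICE-256K
 service-policy input POLICE-2048K
interface FastEthernet0/1
 service-policy output POLICE-2048K

Would that be a sensible way to do it, or is there a better approach via SDM?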
I've got an issue with SNMP and NetFlow tools: they are displaying different data for the same (sub)interfaces. I've got a Metro Ethernet link which connects router A (Cisco 7606, 12.2(18)SXF8) and router B (Cisco 2811, 12.3(11)TS). MPLS is configured on the link (behind router B there is no more MPLS). I'm attaching the router configurations (I've omitted parts of the config). The interfaces are:
Router A - Gi2/6.2144
Router B - Fa0/1
I've configured SNMP and NetFlow on both devices. I'm using two SNMP tools (CA Spectrum and eHealth) and two NetFlow tools (CA NetQoS Reporter Analyzer and Fluke Networks NetFlow Tracker) to collect the data. The SNMP tools show the same info for the defined (sub)interface, and the NetFlow tools also show the same info for the defined (sub)interface. I'm attaching reports from one SNMP tool and one NetFlow tool for the same time period.
1. Looking at the SNMP tool, a fair amount of data can be seen in both the in and out directions.
2. Looking at the NetFlow tool, a fair amount of data can be seen in the out direction, while the in direction shows only a small amount of data.
I'm aware that Cisco has difficulties with SNMP counters on subinterfaces. I'm also aware that MPLS NetFlow has its own difficulties. The NetFlow configuration on router B is quite simple, as it has just two interfaces to configure NetFlow on (Fa0/0 and Fa0/1), so I would guess the SNMP and NetFlow data should match, but they don't. When you look at the SNMP tool reports for routers A and B, you can see that the traffic volume is practically mirrored.
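For reference, by "quite simple" I mean essentially just flow accounting on the two interfaces plus an exporter, something like the sketch below (the export destination and port are placeholders, and whether ingress-only accounting is enough here may well be part of the discrepancy):

interface FastEthernet0/0
 ip flow ingress
interface FastEthernet0/1
 ip flow ingress
!
ip flow-export source FastEthernet0/0
ip flow-export version 5
ip flow-export destination 192.0.2.10 9996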
The WCS hasn't had any new events since the event count reached 40000, and wcs-3-0.log shows "INFO[stspoll] Event Queue seems full". The FAQ says: "The WCS keeps the last 40,000 events in the system and clears them up after seven days. An event or alarm can have 1000 bytes on average." Shouldn't it clear them up after seven days? How can I clean up the events manually?
We're testing the reference system shown in the figure below.
System description: Four 2960 switches are used for transport. Equipment 1 and Equipment 2 exchange packets for synchronization; to reach synchronization, Equipment 1 and 2 must exchange data with very low jitter.
2960 configuration details: For our test purposes, we're using 100 Mbit/s ports (22 and 23) as trunks. In order to obtain minimum jitter we performed these configurations: we enabled QoS; we marked synchronization packets with CoS 7 and DSCP 63; we marked other kinds of traffic (inserted on different ports) with CoS 0; we set "trust DSCP" on the trunk ports; on the trunk ports we mapped traffic with CoS 7/DSCP 63 (and only this) to output queue 1; and we enabled the expedite queue (priority-queue out).
Question: With these settings we aim to force our synchronization packets to precede other kinds of traffic and go from Equipment 1 to Equipment 2 with minimum jitter. Unfortunately, we experience high jitter when both synchronization packets and other traffic are sent through the system.
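Concretely, the configuration we applied is along these lines; this is trimmed down and the interface naming is assumed, so take it as approximate (port 22 shown, port 23 is identical):

mls qos
mls qos srr-queue output cos-map queue 1 threshold 3 7
mls qos srr-queue output dscp-map queue 1 threshold 3 63
!
interface FastEthernet0/22
 switchport mode trunk
 mls qos trust dscp
 priority-queue out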
I am working on a QoS design which I hope to test at some point, but at this stage it's from the books. My question is how to decide which queue and threshold to use for video traffic, and then for lower-priority traffic. I understand the shaping and sharing commands; it's the queuing and threshold part I'm not clear on. The plan is to use the priority queue for EF-marked voice, which will be policed on ingress to provide an upper limit on EF traffic levels; my second-priority traffic will then be video. Which queue will get serviced first once the priority queue is empty, and how do I decide which threshold to allocate my video traffic to? The documentation is not at all clear. I want to prioritise my traffic in the following order:
1. Voice - use the priority queue
2. Video - to be serviced ahead of data, after voice
3. Interactive data
4. Bulk data
5. Best effort
So the Q1 settings are ignored because of the priority queue, Q2 gets 70%, Q3 25%, etc. Is it as simple as putting video into Q2T1 and interactive data into Q2T2, and will Q2T1 get higher priority than Q2T2 once the PQ has been serviced?
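To make the question concrete, this is the kind of mapping I have in mind (the DSCP values, AF41 for video and AF21 for interactive data, are my own assumptions); what I am unsure about is whether the two thresholds within queue 2 change the service order at all, or only the drop behaviour:

! video (AF41 = DSCP 34) to queue 2 threshold 1, interactive data (AF21 = DSCP 18) to queue 2 threshold 2
mls qos srr-queue output dscp-map queue 2 threshold 1 34
mls qos srr-queue output dscp-map queue 2 threshold 2 18
!
interface GigabitEthernet0/1
 ! queue 1 weight is ignored once priority-queue out is enabled; Q2 70%, Q3 25%, Q4 5%
 srr-queue bandwidth share 1 70 25 5
 priority-queue out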
We currently have a site with a very simple topology that uses a 3750X switch stack as a collapsed core. Every day, the users have a conference call and experience poor voice quality. It's not bad when users call from a few conference phones, but when everyone calls in on individual phones, the voice quality is choppy and almost inaudible. The voice traffic flow is as follows: Phone <-> 3750 switch <-> Voice GW. We have packet captures showing that RTP packet loss is occurring from the phones to the voice gateway, but none from the voice gateway to the phones. We also have drops in the output queues that match drops on the ASICs. I can reset the counters and they will stay clear until the call, and then they increment significantly during the call. The voice gateway and phones are non-Cisco. The switch stack has six switches. We are trusting the DSCP settings on the phones. All the queue drops from the phones are usually in queues 0-3, but all the drops on the voice gateway are in queue 0. Below are the QoS settings; they are mostly default, and we have not changed any queuing, thresholds, or buffers. Should we specify larger buffers and thresholds for a designated queue and send EF traffic to that queue?
MySwitch#sh mls qos
QoS is enabled
QoS ip packet dscp rewrite is disabled

Typical port: GigabitEthernet1/0/4, trust state: trust dscp
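In other words, is the right move something along these lines: map EF (DSCP 46) into one egress queue, enable the egress priority queue, and give that queue a larger share of the buffer pool? This is only a sketch of what we are considering, and the buffer percentages are guesses:

! map EF voice (DSCP 46) to egress queue 1
mls qos srr-queue output dscp-map queue 1 threshold 3 46
! give queue 1 a larger share of the queue-set 1 buffer pool (values are guesses)
mls qos queue-set output 1 buffers 40 30 20 10
!
interface range GigabitEthernet1/0/1 - 48
 priority-queue out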
We're having some issues with a 3560 CPE. Its uplink is a GE fiber link, and the customer port is FE RJ45. We see a lot of TX frames being dropped at the FE port, but none at the GE port, even when the customer is only doing ~50 Mbit/s of downstream traffic. When the customer sends ~50 Mbit/s upstream, there are no TX drops on the GE link. Is this normal behaviour? From what I know, the physical medium shouldn't have any impact on this, since the drops occur in the port ASIC and not in physical transmission. Do the buffer sizes differ between GE and FE? What could we do to optimize the flow and reduce the drops? QoS is off, and no modifications have been made to the queues on the interfaces.
We are upgrading from 3550 to 3560 switches. On the 3550s we have this on each interface: [code] The 3560s won't accept the wrr-queue commands. How do I set the equivalent on the 3560s?
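If it helps frame the question: my understanding is that the 3560 replaces the per-interface wrr-queue bandwidth and CoS-map commands with srr-queue and global cos-map commands, roughly like the sketch below. The weights and CoS value here are placeholders, not a translation of our actual 3550 config:

! global: assign CoS values to egress queues (example: CoS 5 -> queue 1)
mls qos srr-queue output cos-map queue 1 threshold 3 5
!
interface FastEthernet0/1
 ! shared weights for egress queues 1-4 (placeholder values)
 srr-queue bandwidth share 10 10 60 20
 priority-queue out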
Why would a VLAN interface be dropping packets on the input queue? Refer to the drops/flushes below. This is from a 6500 with a Sup720; there are a number of VLANs on it. This 6500 and its HSRP partner are exhibiting the same symptoms on all the VLANs I bothered to check. This particular VLAN is quite lightly used; there are only about fifteen user PCs (each with a 100 Mb interface) on it.
There is a bit of information on input queue drops on Cisco's site, but it is focused on physical interfaces, where I can understand some packets being dropped; I would think that VLAN interfaces would have different issues. I note the "no buffer" errors as well, which also concern me, especially as that counter is quite close to the "flushes" count.
Vlan123 is up, line protocol is up
  Hardware is EtherSVI, address is 00d0.04fd.6000 (bia 00d0.04fd.6000)
  Description: Vlan123
  Internet address is 10.123.123.7/24
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive not supported
  ARP type: ARPA, ARP Timeout 04:00:00
[code] .......
I have a 2921 where I am shaping some traffic based on subnet on my LAN. I have applied the shaping policy to the LAN interface in the outgoing direction.
The topology is as follows: ISP - ASA - ROUTER - LAN. The policy map:
Policy Map shape-lan
[code]....
I am seeing a lot of no-buffer drops on the policy, and I am wondering what the best way to solve this is:
  Class-map: tc-class (match-any)
    8730680 packets, 10803689863 bytes
    5 minute offered rate 4453000 bps, drop rate 0 bps
[code]....
Should I just be increasing the queue-limit or should I be changing something else?
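If increasing the queue depth is the right direction, I assume the change would look something like this under the shaping class (the 512-packet value is just a guess, and the existing shape statement stays as it is):

policy-map shape-lan
 class tc-class
  ! existing shape command unchanged
  queue-limit 512 packets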