I have an ME3400 deployed in the following design: 8 100 Mb ports connect to Cisco 2955s, and the 1 Gig port uplinks to a Cisco 3560. My CDP neighbor table only shows an entry for the uplink Gig port. If I look at the CDP stats with show cdp interface FastEthernet 0/1, I see CDP packets being sent every 60 seconds, but nothing returning.
We have also tried enabling the Loop Guard feature on all ports, but after some period the same problem repeats.
Version information is below:
Cisco IOS Software, ME340x Software (ME340x-METROIPACCESSK9-M), Version 12.2(25)SEG3, RELEASE SOFTWARE (fc2)
Copyright (c) 1986-2007 by Cisco Systems, Inc.
Compiled Wed 25-Jul-07 22:56 by amvarma
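One thing worth checking before suspecting hardware: on the ME3400, access ports default to the UNI port type, and UNI ports drop Layer 2 control protocols such as CDP, which would explain why only the NNI Gig uplink shows a neighbor. A hedged sketch of changing a port to NNI so it processes CDP (the interface name is an example):

configure terminal
 interface FastEthernet0/1
  port-type nni
 end
show cdp neighbors

If the ports must stay UNI for demarcation/security reasons, the missing CDP entries may simply be expected behavior.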
What does this crash mean? This is a 6509 with a single Sup720:
Cisco IOS Software, s72033_rp Software (s72033_rp-ADVIPSERVICESK9_WAN-M), Version 12.2(33)SXI, RELEASE SOFTWARE (fc2)
System returned to ROM by s/w reset at 04:42:07 PST8PDT Mon Jan 2 2012 (SP by bus error at PC 0x40C6681C, address 0x424B).
I have 3560X-24T-S switches with IP Services loaded at remote sites that I have been having trouble with. Originally, they had IOS 12.2(58)SE1 on them. I have up to four diverse paths via point-to-point microwave at the remote sites. The microwave equipment is by Microwave Networks and is a Prodeus M series, which supports Ethernet. The original issue manifested itself as hardware loopback errors on some of the ports that were connected to the microwave links.
My experience in the past has been that when a hardware loopback error occurred, it was usually a bad switch port. In this case, however, if I disconnected all of the microwave Ethernet links, rebooted the switch, and then reconnected the Ethernet connections to the microwave links, everything worked fine. No hardware loopback errors. That is, until the next switch reboot; then the hardware loopback error would return. Interestingly, it would come back on different ports connected to the microwave links every time. So if a reboot was done without disconnecting the microwave Ethernet links, the hardware loopback error would move from one microwave link to another after each reboot.
I then went through and read the lengthy release notes for IOS version 15.0(2)SE and found several fixes that I thought could fix my issue. So I downloaded it and updated a couple of the offending switches (not all of them were having this problem). After going through the second update required to resolve the 'open file error' that happens going between 12.2(58)SE1 and 15.0(2)SE the problem seemed to be resolved on the offending switches. So, I went ahead and updated the IOS on all of the switches with point-to-point microwave connections.
I now have one updated switch that is crashing and rebooting continuously when the Ethernet links for the point-to-point microwave are connected. Again, if I disconnect all the microwave links and reboot, it comes up fine and stays fine when the microwave links are connected back up. It will work fine until the next reboot, and then the crash-and-reboot loop starts over again. Below is a portion of the PuTTY log from when the crash occurs:
----------------------------------------------------------------------
previous memory block, bp = 0x59BF838, memorypool type is Processor
data check, ptr = 0x59BF860
========= Dump bp = 0x59BF87C ======================
59BF77C: 0 0 0 FD0110DF AB1234CD FFFE0000 56 383FDB4
59BF79C: 212AC68 59BF838 59BF6F4 80000042 1 0 A504F53 543A2050
59BF7BC: 6F727441 53494320 506F7274 204C6F6F 70626163 6B205465 73747320 3A20456E
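Until the root cause is found, one possible stopgap for the original loopback-error symptom (an assumption on my part, not a known fix for this crash) is to let the switch automatically recover ports err-disabled by the loopback test instead of requiring a manual shut/no shut after each reboot:

configure terminal
 errdisable recovery cause loopback
 errdisable recovery interval 300
 end
show errdisable recovery

The 300-second interval is just an example value.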
We had an "Event" on our process network at the mill yesterday. I connected a new WS-C3560V2-48TS-S to our network and we lost communications with all of the other switches.
The core is a stack of 3 WS-C3750 switches; one is a -24TS-1U and the other two are -12S-S. These connect to 10 WS-C3560G-24TS and 5 Rockwell Stratix 8000 (IE3000) over fiber. I am planning on replacing two 24-port switches with the new 48-port. I had the switch configured and running at my workbench. It was connected to the network with one SFP module and ran all weekend with no issues. Yesterday afternoon I took it to the network cabinet and installed it. I powered it up, connected two SFP modules to the fiber patch panel, and made the connection at the core stack.
Everything looked OK. I had communication link lights working on everything. Within minutes, we lost communication with every switch connected to the core stack. I shut down the new 48 port switch and the network slowly came back up.
The new 48 port is configured with Flex Links for the fiber redundancy protocol and was connected to each of the 3750G-12S-S stack members.
I thought it might have been a power issue, but the stack is UPS-protected and shows it has been up for over 10 weeks. I'm not even sure "crash" is the best description for what happened. The new switch has a high enough IP address that it would not take over as the IGMP querier.
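For comparison, a minimal Flex Links pair is a single command on the active uplink (interface names here are examples, not your configuration):

configure terminal
 interface GigabitEthernet0/1
  switchport backup interface GigabitEthernet0/2
 end
show interfaces switchport backup

One thing to double-check: interfaces configured as Flex Links do not run spanning tree, so if the backup link came up while the active was still forwarding, or the same VLANs reached the core over any third path, a Layer 2 loop and broadcast storm could produce exactly the network-wide outage described. This is a guess, not a diagnosis.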
I have a Cisco 2821 router stuck in ROMmon, displaying the messages "software forced crash" and "checksum error". I tried a ROMmon tftpdnld, but as the image self-decompresses into RAM it crashes again with the same error. I have tried this with various valid IOS images, but in vain.
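For reference, the usual tftpdnld sequence from ROMmon looks like this (addresses and the filename are examples only):

rommon 1 > IP_ADDRESS=10.0.0.2
rommon 2 > IP_SUBNET_MASK=255.255.255.0
rommon 3 > DEFAULT_GATEWAY=10.0.0.1
rommon 4 > TFTP_SERVER=10.0.0.1
rommon 5 > TFTP_FILE=c2800nm-advipservicesk9-mz.bin
rommon 6 > tftpdnld

If several known-good images all crash at the self-decompression stage, the images are probably fine and the DRAM is suspect; reseating or swapping the memory modules would be a reasonable next test.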
The 6500-E switch is crashing frequently whenever the "pm scp" process reaches 100% CPU. I found this with the command "remote command switch show process cpu". How do I solve this problem? I opened a TAC case, and the engineer says to upgrade the image to 12.2(18)SXF. Is there any other workaround that would avoid reimaging and reloading the switch?
I got some unexpected system crashes, and this happens to 2 different routers on the same network. (We suffer a crash and change the router, and the same thing happens to the new router after some time, maybe 40 minutes!)
Here is the last console report from the new router:
Preparing to dump core... 4w1d: %SYS-2-WATCHDOG: Process aborted on watchdog timeout, process = IP NAT Age
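A watchdog timeout in the "IP NAT Age" process often means the NAT translation table has grown so large that the ageing run cannot complete in time. As a hedged mitigation while the root cause is investigated, shorter translation timeouts keep the table smaller (the values below are examples, not recommendations):

show ip nat statistics
configure terminal
 ip nat translation udp-timeout 120
 ip nat translation tcp-timeout 3600
 end
clear ip nat translation *

If the table is being filled by a single misbehaving or infected host, show ip nat translations should make that obvious.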
About an hour ago, the master switch on one of my 3750X (WS-C3750X-48PF-S) stacks crashed. The only two items we've found that could have caused this issue are the roughly 1.3 million big buffer misses and several of the following in the syslog:
SLT:WARN:No exporter configured for smartlog!
I do not have smart logging turned on, nor is there a NetFlow exporter configured.
sh logging smartlog
smartlog is disabled
smartlog exporter:
smartlog pkt length: 64
Total pkts processed: 0
Total DHCP Snooping pkts processed: 0
Total DAI pkts processed: 0
Total IPSG pkts processed: 0
Total ACL pkts processed: 0
I did not see any traffic spikes prior to the crash.
This stack has been stable since its last IOS upgrade, from 12.2(58)SE1 to 12.2(58)SE2 back in October 2011, so this has me a little worried.
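On the buffer question: misses by themselves only mean a buffer pool had to grow, which is routine; the counters that indicate real trouble are "failures" and "no memory". A quick way to check (an assumption about where to look, not a known fix for this crash):

show buffers
show memory statistics
show processes cpu history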
I am not able to find the exact bug for a Cisco 3750E stack "Debug Exception (Could be NULL pointer dereference) Exception (0x2000)" error. The closest I can find is CSCsa72400, which only affects version 12.2(20)SE4. All the stacks (3 switches) are running 12.2(50)SE3. It appears that switch 1 crashed and reloaded. My hunch is that it's software, but I can't find any related bugs. Could it be a hardware issue as well?
I have encountered a problem when upgrading a Cisco 6509 chassis with a new supervisor card, going from a Sup2 to a Sup720B. The 6509 loads and then crashes completely, and when rebooted it reloads into ROMmon. The same upgrade was performed on a similar switch with no apparent problems.
I've got a problem with my 1760 router. I bought it from eBay, booted it up today, and got this error. It has 180224K/16384K bytes of memory and 2 partitions of 32768K flash. I erased both partitions and put a different version of IOS on (still 12.4), but there is no difference; I still get the errors. These don't appear on any of my other 1760 routers, so I assume they are linked to the problem.
Our switch had a little crash-fest this morning at 2:30 AM. I did find a web page about diagnosing Software Forced Crash Exceptions, but it did not look like ours was one of the more easily-identifiable ones.
It may be worth noting that we've only used this switch for about a month, everything seemed fine until now. When we got the switch it did not have any GigE modules, and this week we put 2 into it and have been using them for 2 servers.
It looks like the switch was crashing repeatedly over a period of 20 minutes, and then it stopped and normalized. In the logs of the router that this switch uplinks into, we could see the ethernet port flapping during the time that the switch wasn't reachable.
Here's the show stack output from the switch:
Sfld_3550# show stack
Minimum process stacks:
Free/Size  Name
4404/6000  vegas_flash init
3352/6000  SaveCrashBuffer
5716/6000  CDP BLOB
8512/9000  IP Background
5596/6000  vqpc_shim_create_addr_tbl
5584/6000  SPAN Subsystem
5552/6000  SASL MAIN
4944/6000  vegas IPC process
8704/9000  cdp init process
5404/6000  RADIUS INITCONFIG
4928/6000  Vegas CrashBuffer
5664/6000  URPF stats
2536/3000  Rom Random Update
I have a WS-C3560X-24, and attached to it are some 9 access switches. For some weeks now my 3560 reboots from time to time, which causes the other 9 switches to be down for some minutes as well, and I don't want that, of course. The reboots happen at random times; sometimes a week passes without one, and then, like yesterday afternoon, it rebooted again.
When I check the flash directory there is no crash file, and when I look at the logging it's clean and just shows the startup. It's not the power supply; it's redundant, and more L3 switches are attached to this power source and they don't reboot.
L3_AIM# sh version
Cisco IOS Software, C3560E Software (C3560E-UNIVERSALK9-M), Version 12.2(55)SE3, RELEASE SOFTWARE (fc1)
Technical Support: [URL]
Copyright (c) 1986-2011 by Cisco Systems,
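Since there is no crashinfo file, it may help to make sure the next event gets recorded. A minimal logging setup for that (the buffer size is an example value), plus checking what the switch itself reports as the last restart reason:

show version | include returned|reason
configure terminal
 service timestamps log datetime msec localtime
 logging buffered 65536 informational
 end

After the next reboot, it is worth checking dir flash: again for a crashinfo file and show logging for the last messages before the restart.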
I have one Catalyst 6509E chassis and two Sup720s. The bootup sequence on the standby hot Sup720 failed. The messages that appear on the Sup720's console port indicate a software crash. I don't have a flash card in the Sup720.
This is the bootup process:
System Bootstrap, Version 8.5(3)
Copyright (c) 1994-2008 by cisco Systems, Inc.
Cat6k-Sup720/SP processor with 1048576 Kbytes of main memory
I have many WiSM WLCs running 126.96.36.199. One WLC rebooted a few days ago, but there was no crash file and nothing in the logs says why this happened. There was a power problem at the same time the WLC rebooted (some switches and PEs rebooted as well), but if it is a power issue, why did only one WLC inside the WiSM reboot while the other WLC is still working fine with no reboot? I have 5 WiSM modules connected to the same 6500 box, and only one WLC rebooted, which suggests a crash, but no crash file was registered for it. Is there any way I can find the reason why that WLC rebooted?
I have a Cisco ME3400-24TS-A switch which is not behaving normally.
I have already erased its flash and uploaded a new IOS, but that did not fix the issue. It boots normally and passes all the tests shown in the boot process. The issue is that I can't access or ping the computers attached to its ports from one port to another.
However, I can ping the switch's VLAN 1 IP from all computers attached to it.
When I try the debug all command, it shows the following:
debug all
This may severely impact network performance. Continue? (yes/[no]): yes
All possible debugging has been turned on
Switch#
*Mar 1 00:03:41.467: special_oce_change_vectors: select debug vectors
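Before replacing anything: this looks like the ME3400's default port behavior rather than a fault. On the ME3400, the copper ports default to the UNI port type, and UNI ports in the same VLAN are isolated from one another (local switching between them is disabled), which matches the symptom exactly: every host can reach the switch's VLAN 1 SVI, but hosts cannot reach each other. Two hedged fixes (interface range and VLAN number are examples):

configure terminal
 interface range FastEthernet0/1 - 24
  port-type nni
 end

or, to keep the ports as UNI but allow them to talk within the VLAN:

configure terminal
 vlan 1
  uni-vlan community
 end

The uni-vlan community option depends on the image's feature set, so verify it is available on this software before relying on it.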
A few days ago I was trying a tutorial on SSH tunneling, but I got the error "server unexpectedly closed network connection", although I did everything as in the video tutorial and port 433 was not closed.
I bought a new Linksys WRT120N router and configured it. I have Internet access and the connection is good, but I am experiencing a very annoying problem: the router restarts from time to time. It works for a while and then restarts again. I updated the firmware expecting that this would solve the problem, but it seems it didn't. The router continues to restart unexpectedly.
We have multiple sites linked via MPLS (L3) circuits. We have good-sized Internet circuits at two main sites (HQ and QC), and the smaller sites come to the HQ site to reach the Internet. We are running OSPF (Cisco L3 switches) with the service provider (ME3400) at these two main sites; the service provider then redistributes the routes back into MPLS via BGP, and the smaller sites' ME3400s learn these routes.

I am injecting default routes from both HQ and QC, but the telco is only redistributing the default from HQ, so the large Internet pipe at QC is not being used efficiently. Also, if the MPLS at HQ fails, we are told we need to call the telco and they will make a change in their network to start distributing the default from QC.

It was my understanding that the telco can use BGP communities and advertise one default as preferred and the second with a higher cost, so that failover can occur automatically, and that they can also arrange for west coast sites to use HQ and east coast sites to use QC for Internet access, but they say it is not possible.

At the least, can I do something at my end for Internet failover in case the MPLS at HQ goes down? (Soon we will be setting up a point-to-point VPN tunnel between HQ and QC, so that an MPLS failure at HQ will trigger advertisement of HQ routes over the tunnel via QC into MPLS, and the other sites can then reach HQ through QC over this tunnel.) On the QC Cisco router (to detect loss of the default route from HQ and then start advertising the default from QC):
router ospf 1
 default-information originate always route-map From_HQ
 exit
ip access-list standard From_HQ
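On the failover question: inside your own OSPF domain, one pattern that does not require the telco's cooperation is for both HQ and QC to originate a default, but with different metrics, so QC's default is only used when HQ's disappears. A sketch under the assumption that both defaults are visible to the same OSPF speakers (metric values are examples; this cannot influence what the telco's BGP redistribution prefers):

At HQ:
router ospf 1
 default-information originate always metric 10 metric-type 1

At QC:
router ospf 1
 default-information originate always metric 100 metric-type 1

For the telco side, what you describe (one default preferred, the other as backup) is normally done with BGP local-preference or MED on their PEs, so it may be worth pushing back on "not possible".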
I replaced a similar router from a competing company with the AC900 N900 router. I open Live Mail (IMAP) and keep it open during the day. Since switching over, when I periodically check email in Live Mail, I frequently get a Windows Live Mail message saying "Your server unexpectedly terminated the connection. Possible causes for this include server problems, network problems, or a long period of inactivity." I never received this message with either of the previous routers (Linksys, Netgear).