Cisco 4500 output drops

Gngogh Member Posts: 165 ■■■□□□□□□□
Hi, I have an access switch with a 10Gb upstream interface and a 1Gb downstream interface. From show interfaces [int] I can see that traffic is being dropped in the upstream-to-downstream direction (output drops).

TenGigabitEthernet1/1 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet Port, address is f4cf.e213.7e30 (bia f4cf.e213.7e30)
Description: MATE.A01
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 5/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX
input flow-control is on, output flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:45, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 7556661
Queueing strategy: Class-based queueing
Output queue: 0/40 (size/max)
5 minute input rate 5364000 bits/sec, 1098 packets/sec
5 minute output rate 22402000 bits/sec, 2980 packets/sec
863690200 packets input, 401065551603 bytes, 0 no buffer
Received 6568826 broadcasts (1428513 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
2787171938 packets output, 1962145408543 bytes, 0 underruns
0 output errors, 0 collisions, 4 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
Most probably these drops are caused by traffic coming from a higher-bandwidth interface into a lower-bandwidth one: the output buffer fills up quickly and the switch has to discard some packets.
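
For scale, going by the counters above: 7,556,661 total output drops against 2,787,171,938 packets output is roughly 0.27% of egress traffic, and the 5-minute output rate is only about 22 Mb/s on a 1 Gb/s link, so the drops presumably come in short bursts rather than from sustained congestion.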

Is there a command where I can see the packet drops at buffer level? Are there other causes of output drops? How can I troubleshoot these output drops in more detail?

Comments

  • Legacy User Unregistered / Not Logged In Posts: 0 ■□□□□□□□□□
    Is it showing drops on both sides? I know on the Nexus, show queuing int e<mod/port> would show it, but on the 4500 I don't think there is an equivalent. The drops could be from dirty fiber or a bad SFP. Even though it's not showing CRC errors, it could be due to low laser light from a bad fiber or something similar. If there is a redundant link, I would Fluke it and check the dB loss. You could swap out the cable/SFP first, then possibly move to another position on the fiber panel if there's space, clear the counters and monitor that interface. A few commands I'd start with are below.
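
    Something along these lines should confirm or rule out the optic side before swapping hardware (interface name taken from the output above; the transceiver readings only show up if the SFP supports DOM):

    show interfaces TenGigabitEthernet1/1 transceiver   ! tx/rx light levels
    show interfaces TenGigabitEthernet1/1 counters errors   ! per-port error counters
    clear counters TenGigabitEthernet1/1   ! reset, then watch the drops re-accumulate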
  • Gngogh Member Posts: 165 ■■■□□□□□□□
    Hi, thanks for the reply. I have ruled out any problems with the fiber, since I have 20 more sites with the same topology, all with loads of drops downstream. Since I have applied class-based queueing on these interfaces, show policy-map interface [int] lets me see whether the policy-map is dropping traffic:

    Class-map: class-default (match-any)
    19731402087 packets
    Match: any
    Queueing
    queue limit 1504 packets
    (queue depth/total drops) 0/0
    (bytes output) 6794488568175
    bandwidth remaining 25%
    dbl
    Probabilistic Drops: 117901 Packets
    Belligerent Flow Drops: 7785763 Packets

    The class-default is dropping belligerent flows (aggressive, non-adaptive flows such as UDP) which consume a large amount of buffer space; DBL (Dynamic Buffer Limiting, the dbl line above) discards data from these aggressive flows while trying to preserve as much as possible of the adaptive flows, such as TCP, and of fragile flows. This is a congestion-avoidance technique used on the Supervisor Engine of the 4500.
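
    For reference, the egress policy on these ports is roughly shaped like this (the policy-map name is just a placeholder, not the real one):

    policy-map EGRESS-QUEUE
     class class-default
      bandwidth remaining percent 25   ! shows up as "bandwidth remaining 25%" above
      dbl                              ! Dynamic Buffer Limiting on the Sup
    !
    interface TenGigabitEthernet1/1
     service-policy output EGRESS-QUEUE

    The dbl keyword is what produces the Probabilistic and Belligerent Flow Drops counters shown above.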
  • Legacy UserLegacy User Unregistered / Not Logged In Posts: 0 ■□□□□□□□□□
    Well, you didn't mention it was a problem at other sites ;P

    Are there any plans to start upgrading to 10Gb for the ALs?
  • Gngogh Member Posts: 165 ■■■□□□□□□□
    Well, I have no idea if my client will upgrade the downstream to 10Gb. I know that the downstream switches support that bandwidth, but I guess my client doesn't want to spend money on 10Gb SFP modules.