Category Archives: Cisco

Cisco 887 router LAN or Cisco switch port shown as down, down?

Yet another ‘unusual’ Cisco IOS Ethernet port issue – you’d almost think I bring this on myself by having a complicated home network!

I was configuring a Cisco 887VA-M on my home network. When I had finished, I plugged the FastEthernet0 port into my under-desk Cisco 2960X so it became part of the LAN.

On the console connected to the 887 I noticed:
*Nov  9 14:38:23.303: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 14:38:56.947: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 14:39:29.127: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 14:40:01.179: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 14:40:33.307: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 14:41:05.595: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 14:41:37.479: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down

eh?

Down messages but no up messages. Let’s look at the interfaces on the box:

887VAM_RR:#sh ip int br
*Nov  9 14:42:09.687: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
Any interface listed with OK? value “NO” does not have a valid configuration

Interface                  IP-Address      OK? Method Status                Protocol
ATM0                       unassigned      YES manual down                  down
Dialer1                    unassigned      YES manual up                    up
Ethernet0                  unassigned      YES unset  administratively down down
FastEthernet0              unassigned      YES unset  down                  down
FastEthernet1              unassigned      YES unset  down                  down
FastEthernet2              unassigned      YES unset  down                  down
FastEthernet3              unassigned      YES unset  down                  down
NVI0                       unassigned      NO  unset  up                    up
Virtual-Access1            unassigned      YES unset  up                    up
Vlan1                      192.168.70.253  YES manual down                  down

Change the cable. No difference.

Try another port on the 887. No difference.

Try another port on the 2960X. No difference.

Nothing special listed when I look at the FastEthernet interface on the router:

887VAM_RR:#sh int faste0
FastEthernet0 is down, line protocol is down

or the VLAN interface:

887VAM_RR:#sh int vlan1
Vlan1 is down, line protocol is down

Is the router damaged? – No

Is there an issue with the FastEthernet controller? – No

Dodgy VLAN.DAT file in the flash? – No

Is the FastEthernet0 interface not part of VLAN1?

887VAM_RR:#sh vlan-switch

VLAN Name                             Status    Ports
---- -------------------------------- --------- ----------------------
1    default                          active    Fa0, Fa1, Fa2, Fa3

No issue there then.

Reload, that will work. No difference.

Speed, Duplex – I am clutching at straws now. Nothing seems amiss and nothing works regardless of what values I set.

Nothing relevant found in Google (hence why I am writing this article: to help you, the reader, out if it happens to you).

Reload again.

Hey what’s this?

*Nov  9 15:25:03.383: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0, changed state to down
*Nov  9 15:25:03.383: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet1, changed state to down
*Nov  9 15:25:03.383: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet2, changed state to down
*Nov  9 15:25:03.383: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet3, changed state to down
*Nov  9 15:25:05.915: %CDP-4-DUPLEX_MISMATCH: duplex mismatch discovered on FastEthernet0 (not half duplex), with Switch GigabitEthernet1/0/4 (half duplex).
*Nov  9 15:25:07.143: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 15:25:42.375: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down

So my router can see across the Ethernet link to the other side. What is going on?!

887VAM_RR:>sh cdp ne
Capability Codes: R – Router, T – Trans Bridge, B – Source Route Bridge
                  S – Switch, H – Host, I – IGMP, r – Repeater, P – Phone,
                  D – Remote, C – CVTA, M – Two-port Mac Relay

Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID
Switch           Fas 0              139            R S I  WS-C2960X Gig 1/0/4

So I can see the other side. Yet the messages continue:

*Nov  9 15:25:07.143: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down
*Nov  9 15:25:42.375: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to down

Where has the other side gone now?!

887VAM_RR:#sh cdp nei
Capability Codes: R – Router, T – Trans Bridge, B – Source Route Bridge
                  S – Switch, H – Host, I – IGMP, r – Repeater, P – Phone,
                  D – Remote, C – CVTA, M – Two-port Mac Relay

Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID

Right let’s plug in a laptop to the Ethernet port instead:

*Nov  9 16:06:42.543: %LINK-3-UPDOWN: Interface FastEthernet0, changed state to up
*Nov  9 16:06:43.543: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0, changed state to up
*Nov  9 16:07:11.563: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to up

Boom!

So it must be the other end causing the issue. Then I noticed that the other end never had a link light, something I had missed because the 2960X is under the desk.

Then it dawned on me. I had set the ports on the 2960X to bpduguard to make sure that plugging in a switch could not cause a spanning tree loop. That is why the port does not come up when facing the 887 router: its LAN ports are actually 4 switch ports, so they emit BPDUs. Doh!

Checking the log on the 2960X shows:

000723: Nov  9 15:53:07.620: %PM-4-ERR_DISABLE: bpduguard error detected on Gi1/0/6, putting Gi1/0/6 in err-disable state
000724: Nov  9 15:53:37.616: %PM-4-ERR_RECOVER: Attempting to recover from bpduguard err-disable state on Gi1/0/6
000725: Nov  9 15:53:39.629: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port Gi1/0/6 with BPDU Guard enabled. Disabling port.
000726: Nov  9 15:53:39.629: %PM-4-ERR_DISABLE: bpduguard error detected on Gi1/0/6, putting Gi1/0/6 in err-disable state
000727: Nov  9 15:54:09.618: %PM-4-ERR_RECOVER: Attempting to recover from bpduguard err-disable state on Gi1/0/6
000728: Nov  9 15:54:11.646: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port Gi1/0/6 with BPDU Guard enabled. Disabling port.

So that explains why the port on the router goes down every 30 seconds or so: I had auto-recovery set on the 2960X, so it would bring the port back up, see the BPDU from the router’s LAN switch ports, and disable the Ethernet port facing the router again.
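For reference, that auto-recovery behaviour on the 2960X comes from configuration along these lines (a sketch; the 30-second interval is an assumption based on the log timestamps):

errdisable recovery cause bpduguard
errdisable recovery interval 30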

interface GigabitEthernet1/0/6
 switchport mode access
 spanning-tree portfast
 spanning-tree bpduguard enable

easily fixed with

Switch(config-if)#no spanning-tree bpduguard enable

3, 2, 1… fix the duplex issue, and we are back in action. I can get on and configure the ATM/Dialer1 interface.
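For the duplex mismatch reported by CDP earlier, the usual fix is to make both ends match: either hard-set speed and duplex on both sides, or leave both on auto-negotiation (the mismatch typically arises when only one side is hard-set). A sketch of the hard-set version on the switch side, using the port number from the CDP message:

Switch(config)#interface GigabitEthernet1/0/4
Switch(config-if)#speed 100
Switch(config-if)#duplex full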

So next time you see

%LINK-3-UPDOWN: Interface FastEthernet0, changed state to down

maybe it will be the other end that needs to be sorted.

Cisco switch interface Up but Line Protocol Down?

It is pretty unusual to find an Ethernet interface on a Cisco device that looks like it is working at Layer 1 (you get a green link light on the Cisco device) but is not working at Layer 2, so you see no incoming Ethernet packets.

Of course I had just such an instance yesterday, when VoIP phones were not picking up an IP address from the DHCP server running on a Cisco switch. Other devices clearly were, including same make/model VoIP phones in other parts of the network.

What was common was that all the phones with problems ultimately connected back to port G1/0/12 on the Cisco switch that held the DHCP server. This port had a link light…

I looked at the interface:

Switch#sh int g1/0/12
GigabitEthernet1/0/12 is up, line protocol is down (monitoring)
  Hardware is Gigabit Ethernet, address is 5017.ff29.9c0c (bia 5017.ff29.9c0c)
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 04:41:16, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     0 packets input, 0 bytes, 0 no buffer
     Received 0 broadcasts (0 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected

So this interface has line protocol down. Why?

I didn’t really focus on the word ‘monitoring’. Maybe a duplex or speed issue was causing the non-passage of packets, but the negotiated values (Full-duplex, 100Mb/s) were right.

Maybe it was the cable. I decided to do a TDR test, because this was a modern day IOS and I could!

Switch#test cable-diagnostics tdr int g1/0/12
TDR test started on interface Gi1/0/12
A TDR test can take a few seconds to run on an interface
Use 'show cable-diagnostics tdr' to read the TDR results.

Switch#show cable-diagnostics tdr int g1/0/12
TDR test last run on: September 22 15:28:12

Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi1/0/12  100M  Pair A     N/A                N/A         Not Completed
                Pair B     N/A                N/A         Not Completed
                Pair C     N/A                N/A         Not Completed
                Pair D     N/A                N/A         Not Completed

Okay, err, so no results. So I wondered whether I had used this switch for something else and forgotten to reset it – sometimes I do this when I need a couple of ports to monitor something. So I did a search in the config:


Switch#sh run | inc moni
monitor session 1 source interface Gi1/0/1
monitor session 1 destination interface Gi1/0/12

Bingo!
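As an aside, the dedicated SPAN status command would also have revealed the leftover session directly (assuming a reasonably recent Catalyst IOS):

Switch#show monitor session all

This lists the source and destination ports for every configured monitor session, without grepping the running config.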

So I switched this off:

Switch#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#no monitor session 1
Switch(config)#exit

Switch#sh run | inc moni
Switch#

and immediately the line protocol came up.

Sep 22 15:38:33.746: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/12, changed state to up
Sep 22 15:38:33.750: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/12, line protocol is up (connected)

which was easily confirmed by looking at the interface again:


Switch#sh int g1/0/12
GigabitEthernet1/0/12 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 5017.ff29.9c0c (bia 5017.ff29.9c0c)
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:05, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 3000 bits/sec, 3 packets/sec
  5 minute output rate 4000 bits/sec, 3 packets/sec
     268 packets input, 33790 bytes, 0 no buffer
     Received 44 broadcasts (4 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 4 multicast, 0 pause input
     0 input packets with dribble condition detected
     94229 packets output, 75993256 bytes, 0 underruns
     0 output errors, 0 collisions, 1 interface resets
     1 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

Lesson – always factory reset switches before you use them for some other purpose.

Shrewsoft VPN on Windows, Cisco ASA access, and the curious ACL order problem

I was working with a Cisco ASA customer that wished to remain with classic IPSEC IKEv1 access from Windows 8 clients (rather than SSL or AnyConnect client access). Cisco no longer makes a VPN client that will load onto Windows 8 (there is no 64-bit support), so I recommended they use the Shrewsoft VPN client.

All seemed to be going well, until the customer reported an issue with some of the v2.2.2 clients: they could not access privately addressed hosts over the VPN connection from the Shrewsoft client, yet using another login account they could.

We collectively scratched our heads over this until it was realised that if the Split-Tunnel ACL had two or more lines AND the first line gave access to a single host (rather than a subnet), then the entire ACL failed to provide any access. If the Split-Tunnel ACL listed the entries with a subnet first, then the subsequent lines could be single hosts without any issue.

So this would fail because the host entry is listed first:

access-list example_fails extended permit ip host 192.0.0.225 172.16.200.0 255.255.255.0
access-list example_fails extended permit ip 192.9.1.0 255.255.255.0 172.16.200.0 255.255.255.0
access-list example_fails extended permit ip 192.9.215.0 255.255.255.0 172.16.200.0 255.255.255.0

but reorder it and it will work:

access-list example_works extended permit ip 192.9.1.0 255.255.255.0 172.16.200.0 255.255.255.0
access-list example_works extended permit ip 192.9.215.0 255.255.255.0 172.16.200.0 255.255.255.0
access-list example_works extended permit ip host 192.0.0.225 172.16.200.0 255.255.255.0
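For context, a split-tunnel ACL like these is normally attached to the VPN group policy, roughly as follows (the group-policy name here is hypothetical):

group-policy ExamplePolicy attributes
 split-tunnel-policy tunnelspecified
 split-tunnel-network-list value example_works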

 

The same ACL in the order that would not work with the Shrewsoft client (example_fails), would work with a 32 bit Cisco IPSEC VPN client, and with a native OSX VPN client. So it was a bug.

Beware Children and HomePlug networks if you want to avoid Self-Looped problems!

As you might expect, I have a rather complicated home network. At the ‘core’ is a Cisco Catalyst switch which makes problem debugging very easy – normally.

Alongside the normal UTP-connected devices across the house, I also use HomePlug to push Ethernet over my electrical ring mains. This connects a couple of wireless access points and things like a Microsoft Xbox 360. In fact there are two HomePlug networks connected together using UTP (I have two different electrical supplies, so I use different HomePlug network names to avoid spanning tree loops caused by the signal leaking out of the house and back in on the other supply – yes, that does happen, and yes, it does cause a loop!).

Now two evenings ago my middle son moved the Xbox to a different place in the house. Yesterday my eldest son reported Internet access problems from a desktop PC that is connected to the HomePlug network (though it also has wireless). When I got round to looking at it, I noticed that the connection from the HomePlug network to the Catalyst switch was down. There was no green LED on the switch port.

(ignore times in these log snippets as some of this is recreated for the benefit of this article)

So I logged into the switch and looked at the log. It said I had a DTP flap:

Apr 23 10:20:47.501: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/12, changed state to down
Apr 23 10:20:52.573: %PM-4-ERR_DISABLE: dtp-flap error detected on Fa0/12, putting Fa0/12 in err-disable state
Apr 23 10:20:54.577: %LINK-3-UPDOWN: Interface FastEthernet0/12, changed state to down

and had disabled the port because of the error (err-disabled status):

Cat3550# sh int faste0/12
FastEthernet0/12 is down, line protocol is down (err-disabled)
Hardware is Fast Ethernet, address is 000b.465b.000c (bia 000b.465b.000c)
Description: Connection to EthernetOverPower network
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Auto-duplex, Auto-speed, media type is 10/100BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:13, output 00:00:14, output hang never
Last clearing of “show interface” counters 14:51:06
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 250000 bits/sec, 431 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
4566009 packets input, 813554637 bytes, 0 no buffer
Received 4566005 broadcasts (0 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 1872430 multicast, 0 pause input
0 input packets with dribble condition detected
8038 packets output, 634788 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out

Note there was masses of input traffic but no output traffic.

I just assumed that this was some temporary blip and cleared the interface, but the error kept coming back. So to save time I thought I would just put in automatic recovery from a dtp-flap:

errdisable recovery cause dtp-flap
errdisable recovery interval 30

Sure enough recovery kicked in:

Cat3550#sh errdisable recovery
ErrDisable Reason            Timer Status
-----------------            --------------
arp-inspection               Disabled
bpduguard                    Disabled
channel-misconfig            Disabled
dhcp-rate-limit              Disabled
dtp-flap                     Enabled
gbic-invalid                 Disabled
l2ptguard                    Disabled
link-flap                    Disabled
mac-limit                    Disabled
link-monitor-failure         Disabled
loopback                     Disabled
oam-remote-failure           Disabled
pagp-flap                    Disabled
port-mode-failure            Disabled
psecure-violation            Disabled
security-violation           Disabled
sfp-config-mismatch          Disabled
storm-control                Disabled
udld                         Disabled
unicast-flood                Disabled
vmps                         Disabled

Timer interval: 30 seconds

Interfaces that will be enabled at the next timeout:

Interface       Errdisable reason       Time left(sec)
---------       -----------------       --------------
Fa0/12                  loopback            14

Looking at the port config I realised that I had DTP trunking desirable:

interface FastEthernet0/12
description Connection to EthernetOverPower network
switchport mode dynamic desirable

Aha, easily fixed: I would just change the port to an access port, since it was not connecting to a DTP-capable device – that HomePlug only has one port!

interface FastEthernet0/12
description Connection to EthernetOverPower network
switchport mode access

(portfast is switched off because this is connecting to a spanning-tree capable device)
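As an aside, where a port should never negotiate a trunk you can also stop DTP frames being sent at all once a static mode is configured (a sketch; switchport nonegotiate requires the port to be in static access or trunk mode):

interface FastEthernet0/12
 description Connection to EthernetOverPower network
 switchport mode access
 switchport nonegotiate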

but then I saw loopback errors.

Apr 23 10:32:41.202: %ETHCNTR-3-LOOP_BACK_DETECTED: Keepalive packet loop-back detected on FastEthernet0/12.
Apr 23 10:32:41.202: %PM-4-ERR_DISABLE: loopback error detected on Fa0/12, putting Fa0/12 in err-disable state
Apr 23 10:32:43.218: %LINK-3-UPDOWN: Interface FastEthernet0/12, changed state to down

This was becoming annoying. The Catalyst switch port was showing a flashing Orange/Amber LED. Masses of traffic was coming into the port but there was no output traffic.

Cat3550# sh int faste0/12
FastEthernet0/12 is up, line protocol is up (connected)
Hardware is Fast Ethernet, address is 000b.465b.000c (bia 000b.465b.000c)
Description: Connection to EthernetOverPower network
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 3/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, media type is 10/100BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:03, output hang never
Last clearing of “show interface” counters 15:04:59
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1546000 bits/sec, 2057 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
5581549 packets input, 908408095 bytes, 0 no buffer
Received 5581487 broadcasts (0 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 2887912 multicast, 0 pause input
0 input packets with dribble condition detected
8198 packets output, 648628 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out

Switching on loopback recovery just brought the port back up long enough for another loopback packet to disable it again. No real traffic was passing, though there was no impact on the rest of the ‘real’ network.
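For completeness, loopback auto-recovery uses the same errdisable mechanism as the dtp-flap recovery earlier:

errdisable recovery cause loopback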

Plugging into a different switch port disabled the link because of BPDU protection:

Apr 22 20:03:53.808: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port Fa0/10 with BPDU Guard enabled. Disabling port.
Apr 22 20:03:53.808: %PM-4-ERR_DISABLE: bpduguard error detected on Fa0/10, putting Fa0/10 in err-disable state
Apr 22 20:03:53.816: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port Fa0/10 with BPDU Guard enabled. Disabling port.

This is expected:

interface FastEthernet0/10
switchport mode access
spanning-tree portfast
spanning-tree bpduguard enable

I looked at what Spanning Tree thought was going on:

Cat3550#sh span

VLAN0001
Spanning tree enabled protocol ieee
Root ID    Priority    32769
Address     000b.465b.0000
This bridge is the root
Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
Address     000b.465b.0000
Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
Aging Time 300

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Fa0/12              Desg BLK 19        128.12   P2p self-looped
Fa0/18              Desg FWD 19        128.18   P2p Edge
Fa0/20              Desg FWD 19        128.20   P2p Edge
Fa0/22              Desg FWD 19        128.22   P2p
Fa0/24              Desg FWD 19        128.24   P2p
Gi0/1               Desg FWD 4         128.25   P2p

(Obviously I should get round to moving out of VLAN1…)

So the port is seen as self-looped and STP Blocked. Why?

I switched off all the other HomePlug devices. No difference.

At this point, I figured that I must have some sort of weird interference from outside, given that this HomePlug network was using the default HomePlug network name (yes, I know I should have changed it) and I knew the signal can leak outside the house to the sub-station (the cause of another hair-pulling debug session a few weeks ago, when I bridged two default-named HomePlug networks using UTP).

I resolved to change the HomePlug network name on all the original HomePlug devices. And that is when I found the issue. Whilst changing the first HomePlug, I noticed a remote HomePlug device and realised I had missed the multi-port HomePlug unit that connects to the Xbox. The Xbox had been removed, and my middle son had (uncharacteristically) tidied up the end of the Ethernet cable that had been plugged into the Xbox by plugging it into one of the other spare Ethernet ports on the same unit!

That was the cause of the problem and the reason why loopback packets were seen. Removing the cable fixed the issue – the Catalyst switch port LED went green, Spanning Tree saw the loop go away,

Cat3550#sh span

VLAN0001
Spanning tree enabled protocol ieee
Root ID    Priority    32769
Address     000b.465b.0000
This bridge is the root
Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
Address     000b.465b.0000
Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
Aging Time 300

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Fa0/12              Desg LIS 19        128.12   P2p
Fa0/18              Desg FWD 19        128.18   P2p Edge
Fa0/20              Desg FWD 19        128.20   P2p Edge
Fa0/22              Desg FWD 19        128.22   P2p
Fa0/24              Desg FWD 19        128.24   P2p
Gi0/1               Desg FWD 4         128.25   P2p

and the interface started to pass packets properly. (LIS means listening; the port then moves to FWD if there are no STP issues.)

No relevant causes for the ‘self-looped’ message appear on Google, so perhaps this will help someone. I will also make sure I explain to my son why looping back an Ethernet interface is bad.

Easily done, not always easy to diagnose. At least I have a reason to change that default HomePlug network name now!

Update (Jan 2013):
I did change the HomePlug network name.

I also had a recurrence of a self-looped network seen by Spanning Tree on the port connected to the HomePlug network. This occurred after I replaced a Cisco Catalyst 2950 that was connecting my two (differently named) HomePlug networks with a Cisco Catalyst 3550. So the network now looked like

Cisco 3550_1 Fa0/12 >> HomePlug Network 1 >> Cisco 3550_2 >> HomePlug Network 2

I also had a couple of VLANs on the intermediate 3550_2, and perhaps something went wobbly, because I saw intermittent connectivity when looking at it from the intermediate 3550_2. Debugging the issue from 3550_1 showed the port was going up and down (because I have auto-recovery). I switched off all the HomePlug devices except the one connected to 3550_1 and the one connected to 3550_2. The HomePlug utility on a laptop connected directly to 3550_1’s HomePlug showed 3 remote devices: the HomePlug connected to 3550_2, something unknown (though it looked like a Cisco MAC address to me), and a device with ffffffff as its MAC address! Only the 3550_2 HomePlug should have been visible, of course. Powering down the 3550_2 HomePlug cured the issue, so it must have tickled a bug – possibly because 3550_2 was sending Switchport Mode Dynamic Desirable (DTP) packets, thereby saying it was happy to become a trunk port?

Cisco Network Assistant and “Could not create Java machine”

The Cisco Network Assistant (CNA) is software that provides a more richly featured GUI for managing low-to-medium range Cisco IOS switches. You can use the CLI of course, and there is a web interface (if you have not upgraded your switch and blown away the HTML files).

I was running version 5.4, though I had not used it for ages. When I tried to start it, I got a “Could not create a Java machine” error on my XP machine:

Image

I figured I had installed so much software in between that I had broken something or it didn’t like the newer versions of Java, or just that I really shouldn’t still be running XP.

So I decided to upgrade to 5.8.2, but afterwards I got the same error.

Poking around, I saw that the real issue was that I didn’t have the free memory that Java wanted when it started the application. It greedily expects to grab a gig for itself! However, you can change this quite easily using a text editor by modifying the relevant entry in the properties file for CNA (normal risks apply when editing system startup files – only do this if you have a clue).

The file exists in C:\Program Files\Cisco Systems\Cisco Network Assistant\startup\startup.properties

Image

Change the value of
JVM_MAXIMUM_HEAP=1024m
to
JVM_MAXIMUM_HEAP=512m

as shown here:

Image

You should now be able to start the application. On my small network, I didn’t see an issue by reducing the startup memory size.

Debugging DHCP and the IP HELPER in Cisco IOS

When deploying VLANs in enterprise networks, you often find that you have to provide access to a central DHCP server. In IOS you can do this by using the IP HELPER-ADDRESS command to define the address of the DHCP server within the VLAN interface configuration, as I show below, where 192.0.0.3 is the DHCP server:


interface Vlan10
description HQ Staff Network
ip address 192.0.1.254 255.255.255.0
ip helper-address 192.0.0.3
!
interface Vlan50
description Wireless Vlan
ip address 192.0.5.254 255.255.255.0
ip helper-address 192.0.0.3

Using the DEBUG IP DHCP SERVER PACKET command, we can see what happens when a client device makes a DHCP request for an IP address:


002121: *Apr 29 17:37:55.549: DHCPD: Reload workspace interface Vlan10 tableid 0.
002122: *Apr 29 17:37:55.549: DHCPD: tableid for 192.0.1.254 on Vlan10 is 0
002123: *Apr 29 17:37:55.549: DHCPD: client's VPN is .
002124: *Apr 29 17:37:55.549: DHCPD: using received relay info.
002125: *Apr 29 17:37:55.549: DHCPD: Looking up binding using address 192.0.1.254
002126: *Apr 29 17:37:55.549: DHCPD: setting giaddr to 192.0.1.254.
002127: *Apr 29 17:37:55.549: DHCPD: BOOTREQUEST from 0100.248c.6e62.52 forwarded to 192.0.0.3.

The switch (in this case) sees the incoming DHCP broadcast and knows it has to help the device get an IP address. It forwards the packet as a unicast to the DHCP server at the helper address, inserting its own address on the device’s VLAN into the packet. This is placed in the giaddr (gateway IP address) field, and the DHCP server uses the value in that field to determine which scope on the server should handle the request.
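As an aside, ip helper-address relays more than just DHCP: by default IOS forwards several UDP services (TFTP, DNS, time, NetBIOS name and datagram services, TACACS, and BOOTP/DHCP itself). If you want the helper to relay only BOOTP/DHCP, you can disable the others, for example:

no ip forward-protocol udp tftp
no ip forward-protocol udp domain
no ip forward-protocol udp netbios-ns
no ip forward-protocol udp netbios-dgm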

At one recent site, devices in a newly defined VLAN were not getting an IP address assigned. The helper address was correct, and the administrator said the DHCP server was set up in the same way as the scopes for the other VLANs. I could see the BOOTREQUESTs being forwarded, but I could see no replies. Since the DHCP server was in a VLAN directly served off the same switch, and there were no access control lists in the way, it had to be an issue with the server. The scope looked identical to all the other scopes on the server – until I noticed that it had been set up to serve only BOOTP requests. BOOTP is not DHCP. The default service setting in the Windows 2008 R2 DHCP server is Automatic, which serves both BOOTP and DHCP; fat fingers had mistakenly selected BOOTP only. Once changed, the reply from the server was seen:


003119: *Apr 29 18:08:50.052: DHCPD: Reload workspace interface Vlan10 tableid 0.
003120: *Apr 29 18:08:50.052: DHCPD: tableid for 192.0.1.254 on Vlan10 is 0
003121: *Apr 29 18:08:50.052: DHCPD: client's VPN is .
003122: *Apr 29 18:08:50.052: DHCPD: using received relay info.
003123: *Apr 29 18:08:50.052: DHCPD: Looking up binding using address 192.0.1.254
003124: *Apr 29 18:08:50.052: DHCPD: setting giaddr to 192.0.1.254.
003125: *Apr 29 18:08:50.052: DHCPD: BOOTREQUEST from 0100.248c.6e62.52 forwarded to 192.0.0.3.

003129: *Apr 29 18:08:52.972: DHCPD: forwarding BOOTREPLY to client 0024.8c6e.6252.
003130: *Apr 29 18:08:52.972: DHCPD: no option 125
003131: *Apr 29 18:08:52.972: DHCPD: Check for IPe on Vlan10
003132: *Apr 29 18:08:52.972: DHCPD: creating ARP entry (192.0.1.100, 0024.8c6e.6252).
003133: *Apr 29 18:08:52.972: DHCPD: unicasting BOOTREPLY to client 0024.8c6e.6252 (192.0.1.100).

DEL_REASON_PEER_NOT_RESPONDING with Cisco VPN client and ASA

I recently came across an issue while converting a Cisco PIX 6.3.3 firewall to a Cisco ASA firewall running 7.2(5).

It was a simple problem, caused by a simple oversight, but it took quite a while for the cause to become apparent.

If a VPN client attempted to connect (using IPSEC/UDP), it would fail, and the session log would show DEL_REASON_PEER_NOT_RESPONDING as the cause. The ASA never showed any relevant debug output – in fact, it seemed oblivious to the fact that a client was trying to connect at all.
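For anyone chasing the same symptom, these are the sort of checks I was running on the ASA (a sketch from memory – the capture name and ACL name here are illustrative, and exact debug levels vary by version):

```
ASA# debug crypto isakmp 127
ASA# show crypto isakmp sa

! Confirm whether UDP/500 is even arriving on the outside interface:
ASA(config)# access-list capacl permit udp any any eq isakmp
ASA(config)# exit
ASA# capture vpncap access-list capacl interface outside
ASA# show capture vpncap
```

In this case the debugs stayed silent, which was the clue that ISAKMP processing was never being engaged for the inbound packets.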

Here is the full client log (in this case from an OSX machine), with the peer address changed to 1.1.1.1:

Cisco Systems VPN Client Version 4.9.01.0230

Copyright (C) 1998-2009 Cisco Systems, Inc. All Rights Reserved.

Client Type(s): Mac OS X

Running on: Darwin 10.4.0 Darwin Kernel Version 10.4.0: Fri Apr 23 18:28:53 PDT 2010; root:xnu-1504.7.4~1/RELEASE_I386 i386

Config file directory: /etc/opt/cisco-vpnclient

1              17:40:35.421 08/11/2010 Sev=Info/4     CM/0x43100002 Begin connection process

2              17:40:35.422 08/11/2010 Sev=Info/4     CM/0x43100004 Establish secure connection using Ethernet

3              17:40:35.422 08/11/2010 Sev=Info/4     CM/0x43100024 Attempt connection with server “1.1.1.1”

4              17:40:35.422 08/11/2010 Sev=Info/4     CVPND/0x43400019 Privilege Separation: binding to port: (500).

5              17:40:35.422 08/11/2010 Sev=Info/4     CVPND/0x43400019 Privilege Separation: binding to port: (4500).

6              17:40:35.422 08/11/2010 Sev=Info/6     IKE/0x4300003B Attempting to establish a connection with 1.1.1.1.

7              17:40:35.510 08/11/2010 Sev=Info/4     IKE/0x43000013 SENDING >>> ISAKMP OAK AG (SA, KE, NON, ID, VID(Xauth), VID(dpd), VID(Frag), VID(Nat-T), VID(Unity)) to 1.1.1.1

8              17:40:35.552 08/11/2010 Sev=Info/4     IPSEC/0x43700008 IPSec driver successfully started

9              17:40:35.552 08/11/2010 Sev=Info/4     IPSEC/0x43700014 Deleted all keys

10             17:40:40.552 08/11/2010 Sev=Info/4     IKE/0x43000021 Retransmitting last packet!

11             17:40:40.552 08/11/2010 Sev=Info/4     IKE/0x43000021 SENDING >>> ISAKMP OAK AG (Retransmission) to 1.1.1.1

12             17:40:45.552 08/11/2010 Sev=Info/4     IKE/0x43000021 Retransmitting last packet!

13             17:40:45.552 08/11/2010 Sev=Info/4     IKE/0x43000021 SENDING >>> ISAKMP OAK AG (Retransmission) to 1.1.1.1

14             17:40:50.552 08/11/2010 Sev=Info/4     IKE/0x43000021 Retransmitting last packet!

15             17:40:50.552 08/11/2010 Sev=Info/4     IKE/0x43000013 SENDING >>> ISAKMP OAK AG (Retransmission) to 1.1.1.1

16             17:40:55.552 08/11/2010 Sev=Info/4     IKE/0x43000017 Marking IKE SA for deletion (I_Cookie=15E3249D9DD68494 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING

17             17:40:56.052 08/11/2010 Sev=Info/4     IKE/0x4300004B Discarding IKE SA negotiation (I_Cookie=15E3249D9DD68494 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING

18             17:40:56.052 08/11/2010 Sev=Info/4     CM/0x43100014 Unable to establish Phase 1 SA with server “1.1.1.1” because of “DEL_REASON_PEER_NOT_RESPONDING”

19             17:40:56.052 08/11/2010 Sev=Info/5     CM/0x43100025 Initializing CVPNDrv

20             17:40:56.053 08/11/2010 Sev=Info/4     CVPND/0x4340001F Privilege Separation: restoring MTU on primary interface.

21             17:40:56.053 08/11/2010 Sev=Info/4     IKE/0x43000001 IKE received signal to terminate VPN connection

22             17:40:56.552 08/11/2010 Sev=Info/4     IPSEC/0x43700014 Deleted all keys

23             17:40:56.552 08/11/2010 Sev=Info/4     IPSEC/0x43700014 Deleted all keys

24             17:40:56.552 08/11/2010 Sev=Info/4     IPSEC/0x43700014 Deleted all keys

25             17:40:56.552 08/11/2010 Sev=Info/4     IPSEC/0x4370000A IPSec driver successfully stopped

There are lots of other articles explaining what DEL_REASON_PEER_NOT_RESPONDING might mean, but none say what turned out to be the (in hindsight) obvious answer… no crypto map had been applied to the outside interface.

The PIX had the single line:

crypto map remote interface outside

(where remote was the name of the crypto map whose entries matched the incoming LAN-to-LAN and dynamic VPN client connections)

This had been missed from the ASA configuration.
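Everything around that line had made it across – dynamic map, transform sets, tunnel groups – it was only the final binding to the interface that was absent. Sketched out, the relevant fragment looks something like this (the map and transform-set names here are illustrative, not from the real config):

```
crypto dynamic-map dynmap 10 set transform-set ESP-3DES-SHA
crypto map remote 65535 ipsec-isakmp dynamic dynmap

! The single missing line that binds the map to the interface:
crypto map remote interface outside

! And ISAKMP must also be listening on that interface:
crypto isakmp enable outside
```

Without the crypto map ... interface line the ASA simply never processes the inbound IKE traffic, which is consistent with the total silence in the debugs.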

Once the line was entered, everything sprang into life. I don't know whether the error was introduced by me while tweaking parts of the configuration, or whether the line was dropped by the Cisco PIX to ASA migration tool (which seems unlikely), but as usual, it is obvious once you know what is missing.