CCIE Routing & Switching v5 Workbook - Advanced Technology Labs - IP Routing
Routing to NBMA Interfaces
A Note On Section Initial Configuration Files: You must load the initial configuration files for the section, named Basic IP Addressing, which can be found in . Reference the Advanced Technology Labs With Addressing Diagram to complete this task.
Task
Configure R1 and R2 with IPv4 default routes through the DMVPN cloud with a next-hop of R5.
Ensure that the route is valid as long as the Tunnel interface is in the UP state.
Configure R5 with IPv4 static routes for R1’s and R2’s Loopback0 prefixes through the DMVPN cloud.
Ensure that R1, R2, and R5 can all ping each other’s Loopback0 interfaces.
Configuration
R1:
ip route 0.0.0.0 0.0.0.0 Tunnel0 155.1.0.5
R2:
ip route 0.0.0.0 0.0.0.0 Tunnel0 155.1.0.5
R5:
ip route 150.1.1.1 255.255.255.255 155.1.0.1
ip route 150.1.2.2 255.255.255.255 155.1.0.2
Verification
When configuring a static route, the following options are available:
Specify only the next-hop value; the route is valid as long as a route exists for the next-hop value.
Specify only the local outgoing interface; the route is valid as long as the interface is in the UP/UP state.
Specify both the next-hop value and the local outgoing interface.
When the third option is selected, the local outgoing interface acts as a condition for the next-hop value and should be read as: this static route is valid only if the configured next-hop value is reachable over the configured interface. In practice this means the route is valid as long as the interface is in the UP/UP state; it has nothing to do with IP/ARP/NHRP reachability of the next hop.
Check connectivity between the Loopback0 interfaces of the routers:
R1#ping 150.1.5.5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.5.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/64/88 ms

R1#ping 150.1.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.2.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

R2#ping 150.1.5.5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.5.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/59/60 ms

R2#ping 150.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

R5#ping 150.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/58/60 ms

R5#ping 150.1.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.2.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms
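The static-route validity rules outlined above can be sketched as a small check. This is an illustrative model only; the `StaticRoute` class and function names are invented and do not represent IOS internals:

```python
# Illustrative sketch of when a static route is considered valid,
# per the three options above. All names here are invented; not IOS code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StaticRoute:
    prefix: str
    next_hop: Optional[str] = None       # e.g. "155.1.0.5"
    out_interface: Optional[str] = None  # e.g. "Tunnel0"

def route_is_valid(route, interface_up, next_hop_reachable):
    """interface_up: dict of interface -> bool (UP/UP state).
    next_hop_reachable: dict of next hop -> bool (a route to it exists)."""
    if route.out_interface is not None:
        # Options 2 and 3: with an outgoing interface configured, the route
        # is valid as long as that interface is UP/UP; the next-hop value
        # (if present) only sets where the traffic is sent, not validity.
        return interface_up[route.out_interface]
    # Option 1: next hop only; valid while a route to the next hop exists.
    return next_hop_reachable[route.next_hop]

# R1's default route from the solution: interface + next hop (option 3).
r1_default = StaticRoute("0.0.0.0/0", next_hop="155.1.0.5", out_interface="Tunnel0")
print(route_is_valid(r1_default, {"Tunnel0": True}, {}))   # True while Tunnel0 is UP/UP
print(route_is_valid(r1_default, {"Tunnel0": False}, {}))  # False once Tunnel0 goes down
```

This mirrors the task requirement: the default routes on R1 and R2 stay valid exactly as long as Tunnel0 is up, independent of next-hop reachability.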
DMVPN Phase 2 and Phase 3 use a multipoint GRE (mGRE) interface on both hubs and spokes; read more on this in the DMVPN section of the workbook. When configuring static routes over mGRE interfaces in DMVPN, the following restrictions apply:
On spokes, all three options outlined above are available: next-hop value, local outgoing interface, or both.
On hubs, you must always specify the next-hop value, so using only the local outgoing interface is not a functional solution.
When traffic is routed over the mGRE interface of the DMVPN cloud, the router must perform GRE encapsulation. To do so, it needs to know the source IP address (taken from the configured tunnel source command) and the destination IP address (the NBMA IP address of the remote spoke/hub, determined through NHRP unless statically configured); ARP plays no role in this process. Given the dynamic design of DMVPN, the only static NHRP mappings are configured on the spokes for the hub, allowing the spokes to dynamically register their NBMA and tunnel IP addresses with the hub. When a DMVPN member needs to resolve an NBMA address dynamically, it sends an NHRP Resolution Request to the NHS (Next Hop Server), which is always the hub. Because the hub is the only NHS in the cloud and it cannot query itself, static routing with only the local outgoing interface on the hub is not functional. To better understand the process, let’s first configure static routing on the spokes with only the outgoing interface; make sure to first remove the routes provided by the solution on the spokes.
R1:
ip route 0.0.0.0 0.0.0.0 Tunnel0
R2:
ip route 0.0.0.0 0.0.0.0 Tunnel0
Before generating any traffic, note that on the spokes, only the static NHRP mapping for the hub exists.
R1#show ip nhrp
155.1.0.5/32 via 155.1.0.5
   Tunnel0 created 00:42:45, never expire
   Type: static, Flags: used
   NBMA address: 169.254.100.5

R2#show ip nhrp
155.1.0.5/32 via 155.1.0.5
   Tunnel0 created 00:43:15, never expire
   Type: static, Flags: used
   NBMA address: 169.254.100.5
Generate traffic between the Loopback0 interfaces of the spokes and notice the newly created NHRP entries; the first packet is dropped until the NHRP Resolution Request process completes.
R1#ping 150.1.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.2.2, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/1/2 ms

R2#ping 150.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.1.1, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/1/2 ms

R1#show ip nhrp
155.1.0.1/32 via 155.1.0.1
   Tunnel0 created 00:00:04, expire 00:04:55
   Type: dynamic, Flags: router unique local
   NBMA address: 169.254.100.1
    (no-socket)
155.1.0.2/32 via 155.1.0.2
   Tunnel0 created 00:00:04, expire 00:04:55
   Type: dynamic, Flags: router implicit used nhop
   NBMA address: 169.254.100.2
155.1.0.5/32 via 155.1.0.5
   Tunnel0 created 00:29:30, never expire
   Type: static, Flags: used
   NBMA address: 169.254.100.5
155.1.2.2/32 via 155.1.2.2
   Tunnel0 created 00:00:11, expire 00:02:53
   Type: dynamic, Flags: used temporary
   NBMA address: 169.254.100.5
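The registration process that populates the hub's NHRP cache can be sketched as follows. The `Hub` and `Spoke` classes and their method names are invented for illustration; the sketch only mimics the tunnel-to-NBMA bindings seen in the show ip nhrp output:

```python
# Toy model of NHRP registration: each spoke registers its
# (tunnel IP -> NBMA IP) binding with the hub, which is the NHS.
# All names are invented for illustration; this is not real NHRP code.

class Hub:
    def __init__(self):
        self.nhrp_cache = {}  # tunnel IP -> NBMA IP, learned dynamically

    def receive_registration(self, tunnel_ip, nbma_ip):
        # Produces a "Type: dynamic, Flags: unique registered" entry.
        self.nhrp_cache[tunnel_ip] = nbma_ip

class Spoke:
    def __init__(self, tunnel_ip, nbma_ip, hub_tunnel_ip, hub_nbma_ip):
        self.tunnel_ip, self.nbma_ip = tunnel_ip, nbma_ip
        # The only static NHRP mapping on a spoke points at the hub.
        self.nhrp_cache = {hub_tunnel_ip: hub_nbma_ip}

    def register(self, hub):
        # NHRP Registration Request toward the statically mapped NHS.
        hub.receive_registration(self.tunnel_ip, self.nbma_ip)

hub = Hub()
for tun, nbma in [("155.1.0.1", "169.254.100.1"), ("155.1.0.2", "169.254.100.2"),
                  ("155.1.0.3", "169.254.100.3"), ("155.1.0.4", "169.254.100.4")]:
    Spoke(tun, nbma, "155.1.0.5", "169.254.100.5").register(hub)

print(hub.nhrp_cache["155.1.0.1"])  # 169.254.100.1, as in "show ip nhrp" on R5
```

This is why, before any traffic is generated, the hub already holds dynamic entries for every spoke while each spoke holds only the static entry for the hub.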
Now configure the static routes on the hub using only the outgoing interface to see that packets are dropped due to NHRP failure; make sure to first remove the routes provided by the solution on the hub.
R5:
ip route 150.1.1.1 255.255.255.255 Tunnel0
ip route 150.1.2.2 255.255.255.255 Tunnel0
Before generating any traffic, note that on the hub, dynamic NHRP entries exist, as all spokes have registered with the hub.
R5#show ip nhrp
155.1.0.1/32 via 155.1.0.1
   Tunnel0 created 00:35:29, expire 00:04:31
   Type: dynamic, Flags: unique registered used nhop
   NBMA address: 169.254.100.1
155.1.0.2/32 via 155.1.0.2
   Tunnel0 created 00:05:17, expire 00:04:42
   Type: dynamic, Flags: unique registered used nhop
   NBMA address: 169.254.100.2
155.1.0.3/32 via 155.1.0.3
   Tunnel0 created 01:28:56, expire 00:04:01
   Type: dynamic, Flags: unique registered used nhop
   NBMA address: 169.254.100.3
155.1.0.4/32 via 155.1.0.4
   Tunnel0 created 01:28:56, expire 00:03:21
   Type: dynamic, Flags: unique registered used nhop
   NBMA address: 169.254.100.4
Generate traffic to the Loopback0 interfaces of R1 or R2, and note in the debug output that traffic is not functional because the tunnel destination cannot be resolved through NHRP (there is no NHS to query).
R5#debug nhrp
NHRP protocol debugging is on

R5#debug ip packet detail
IP packet debugging is on (detailed)

R5#ping 150.1.1.1 repeat 1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 150.1.1.1, timeout is 2 seconds:
.
Success rate is 0 percent (0/1)

NHRP: nhrp_ifcache: Avl Root:7F498830D308
NHRP: NHRP could not map 150.1.1.1 to NBMA, cache entry not found
NHRP: MACADDR: if_in null netid-in 0 if_out Tunnel0 netid-out 1
NHRP: Checking for delayed event NULL/150.1.1.1 on list (Tunnel0).
NHRP-MPLS: tableid: 0 vrf:
NHRP: No delayed event node found.
NHRP: nhrp_ifcache: Avl Root:7F498830D308

IP: s=155.1.0.5 (local), d=150.1.1.1, len 100, local feature
FIBipv4-packet-proc: route packet from (local) src 155.1.0.5 dst 150.1.1.1
FIBfwd-proc: Default:150.1.1.0/24 process level forwarding
FIBfwd-proc: depth 0 first_idx 0 paths 1 long 0(0)
FIBfwd-proc: try path 0 (of 1) v4-ap-Tunnel0 first short ext 0(-1)
FIBfwd-proc: v4-ap-Tunnel0 valid
FIBfwd-proc: Tunnel0 no nh type 3 - deag
ICMP type=8, code=0, feature skipped, Logical MN local(14), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk
FIBfwd-proc: ip_pak_table 0 ip_nh_table 65535 if Tunnel0 nh none deag 1 chg_if 0 via fib 0 path type attached prefix
FIBfwd-proc: Default:150.1.1.0/24 not enough info to forward via fib (Tunnel0 none)
FIBipv4-packet-proc: packet routing failed
IP: tableid=0, s=155.1.0.5 (local), d=150.1.1.1 (Tunnel0), routed via RIB
IP: s=155.1.0.5 (local), d=150.1.1.1 (Tunnel0), len 100, sending
ICMP type=8, code=0
IP: s=155.1.0.5 (local), d=150.1.1.1 (Tunnel0), len 100, output feature
ICMP type=8, code=0, feature skipped, TCP Adjust MSS(56), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
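The resolution behavior just demonstrated can be sketched as a small model. The `Router` class and its fields are invented for illustration; the sketch only mirrors the spoke-succeeds/hub-fails asymmetry shown above:

```python
# Toy model of NHRP resolution during forwarding over an mGRE interface.
# All names are invented for illustration; this is not IOS/NHRP code.

class Router:
    def __init__(self, name, nhrp_cache=None, nhs=None):
        self.name = name
        self.nhrp_cache = dict(nhrp_cache or {})  # dest IP -> NBMA IP
        self.nhs = nhs  # the hub; None on the hub itself (it cannot query itself)

    def resolve_nbma(self, dest_ip):
        if dest_ip in self.nhrp_cache:
            return self.nhrp_cache[dest_ip]
        if self.nhs is None:
            # Hub case: "NHRP could not map ... to NBMA, cache entry not found";
            # the packet is dropped.
            return None
        # Spoke case: send an NHRP Resolution Request to the NHS
        # (the first packet is dropped while resolution is in flight).
        nbma = self.nhs.nhrp_cache.get(dest_ip)
        if nbma is not None:
            self.nhrp_cache[dest_ip] = nbma  # install a dynamic entry
        return nbma

hub = Router("R5", nhrp_cache={"155.1.0.1": "169.254.100.1"})
spoke = Router("R1", nhs=hub)

print(spoke.resolve_nbma("155.1.0.1"))  # resolved via the NHS: 169.254.100.1
print(hub.resolve_nbma("150.1.1.1"))    # None: no NHS to query, packet dropped

# Effect of "ip nhrp map 150.1.1.1 169.254.100.1" on the hub:
hub.nhrp_cache["150.1.1.1"] = "169.254.100.1"
print(hub.resolve_nbma("150.1.1.1"))    # now resolvable from the static mapping
```

The model makes the asymmetry explicit: a spoke with a cache miss can fall back to the NHS, while the hub has nothing to fall back to unless the mapping is supplied statically or the route carries a resolvable next hop.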
Although the technically correct solution is to fix the static routing by using a next-hop value, a static NHRP mapping can alternatively be configured on the hub for both R1’s and R2’s Loopback0 prefixes.
R5:
interface Tunnel0
ip nhrp map 150.1.2.2 169.254.100.2
ip nhrp map 150.1.1.1 169.254.100.1
Test IPv4 connectivity again and note the added static NHRP mappings on the hub.
R5#ping 150.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/6 ms

R5#ping 150.1.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 150.1.2.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/3 ms

R5#show ip nhrp static
150.1.1.1/32 via 150.1.1.1
   Tunnel0 created 00:01:02, never expire
   Type: static, Flags:
   NBMA address: 169.254.100.1
150.1.2.2/32 via 150.1.2.2
   Tunnel0 created 00:01:32, never expire
   Type: static, Flags:
   NBMA address: 169.254.100.2
For a better understanding, verify the CEF entries on the hub.
R5#show ip cef 150.1.1.1 internal
150.1.1.1/32, epoch 2, flags attached, refcount 5, per-destination sharing
  sources: Adj
  subblocks:
    Adj source: IP midchain out of Tunnel0, addr 150.1.1.1 7F4987C7E3B8
    Dependent covered prefix type adjfib, cover 150.1.1.0/24
  ifnums:
    Tunnel0(14): 150.1.1.1
  path 7F4990A5C010, path list 7F498E385E00, share 1/1, type adjacency prefix, for IPv4
  attached to Tunnel0, adjacency IP midchain out of Tunnel0, addr 150.1.1.1 7F4987C7E3B8
  output chain: IP midchain out of Tunnel0, addr 150.1.1.1 7F4987C7E3B8
                IP adj out of GigabitEthernet1.100, addr 169.2

R5#show ip cef 150.1.2.2 internal
150.1.2.2/32, epoch 2, flags attached, refcount 5, per-destination sharing
  sources: Adj
  subblocks:
    Adj source: IP midchain out of Tunnel0, addr 150.1.2.2 7F4987C7E1D8
    Dependent covered prefix type adjfib, cover 150.1.2.0/24
  ifnums:
    Tunnel0(14): 150.1.2.2
  path 7F4990A5C220, path list 7F498E385FE0, share 1/1, type adjacency prefix, for IPv4
  attached to Tunnel0, adjacency IP midchain out of Tunnel0, addr 150.1.2.2 7F4987C7E1D8
  output chain: IP midchain out of Tunnel0, addr 150.1.2.2 7F4987C7E1D8
                IP adj out of GigabitEthernet1.100, addr 169.2
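The output chain in the CEF entries can be read as two stacked adjacencies: a midchain adjacency on Tunnel0 that performs GRE encapsulation toward the NBMA address taken from the NHRP mapping, which then hands off to the physical adjacency on GigabitEthernet1.100. A toy sketch of that chaining; the class names and the physical next-hop address are invented (the real address is truncated to "169.2" in the output above):

```python
# Toy model of the chained CEF adjacencies shown above: the midchain
# adjacency GRE-encapsulates toward the NBMA destination, then hands the
# packet to the physical adjacency. All names here are invented.

class PhysicalAdjacency:
    def __init__(self, interface, next_hop):
        self.interface, self.next_hop = interface, next_hop

    def forward(self, packet):
        return f"{packet} -> out {self.interface} via {self.next_hop}"

class MidchainAdjacency:
    def __init__(self, tunnel_dest_nbma, lower):
        self.tunnel_dest_nbma = tunnel_dest_nbma  # from the NHRP mapping
        self.lower = lower  # the resolved physical adjacency

    def forward(self, packet):
        gre = f"GRE[dst={self.tunnel_dest_nbma}]({packet})"
        return self.lower.forward(gre)

# Hypothetical physical next hop; the real value is truncated in the output.
phys = PhysicalAdjacency("GigabitEthernet1.100", "169.254.100.254")
mid = MidchainAdjacency("169.254.100.1", phys)
print(mid.forward("ICMP to 150.1.1.1"))
```

The key point the sketch captures is that the /32 prefix is sourced from the adjacency itself (flags attached, "Adj source"): once the NHRP mapping exists, CEF can complete the chain, which is exactly what was missing when the hub's resolution failed.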