
6PE and 6VPE

In a nutshell, both 6PE and 6VPE enable you to multiplex IPv6 as a service across an IPv4-only
MPLS core using dual-stack PE routers. BGP is used to distribute the IPv6 routing information, and
IPv4-signaled MPLS LSPs are used to forward the IPv6 traffic across the IPv6-free core.
Originally, the technology was developed to enable a network for IPv6 without upgrading all the
routers at once. And even though most routers support IPv6 by now, 6PE and 6VPE can still offer
some benefits. For instance, a service provider with a comprehensive RSVP-TE scheme in place can
use a single set of LSPs for all IPv4 and IPv6 traffic (as well as all the other MPLS applications). This
way, all traffic can use the same RSVP-signaled tunnels with the corresponding backup facilities.
Apart from the whys, you’ll find that when you’re working on a Juniper device, the configuration and
verification of either of these technologies is pretty straightforward. Let’s go over the following
scenario:

[Topology: CE1 -- PE1 -- IPv4/MPLS core (P router and route reflector) -- PE2 -- CE2]
We’ll start the 6PE part with the configuration of a BGP session between PE1 and CE1:

PE1:

set protocols bgp group ce family inet6 unicast
set protocols bgp group ce neighbor 2001:db8:1::0 peer-as 65789
set interfaces xe-0/2/1 unit 95 vlan-id 95
set interfaces xe-0/2/1 unit 95 family inet6 address 2001:db8:1::1/127

By configuring the BGP session with the ‘inet6 unicast’ address family, we enable the exchange of
IPv6 routing information between the PE and CE routers. Keep in mind that, by default, a BGP session
configured on a Juniper router is enabled for the ‘inet’ family only. When another address family is
configured, the inet family is no longer automatically enabled. This means that the session we just
configured can only be used for IPv6 (see the sketch after the output below):
inetzero@pe1> show bgp neighbor 2001:db8:1::
Peer: 2001:db8:1::+179 AS 65789 Local: 2001:db8:1::1+62997 AS 65000
Type: External State: Established Flags:
Last State: OpenConfirm Last Event: RecvKeepAlive
Last Error: Open Message Error
Options:
Address families configured: inet6-unicast
Holdtime: 90 Preference: 170
Number of flaps: 0
Error: 'Open Message Error' Sent: 6 Recv: 0
Peer ID: 10.0.0.1 Local ID: 172.16.1.1 Active Holdtime: 90
Keepalive Interval: 30 Group index: 1 Peer index: 0
BFD: disabled, down
Local Interface: xe-0/2/1.95
NLRI for restart configured on peer: inet6-unicast
NLRI advertised by peer: inet6-unicast
NLRI for this session: inet6-unicast
Peer supports Refresh capability (2)
Stale routes from peer are kept for: 300
Peer does not support Restarter functionality
NLRI that restart is negotiated for: inet6-unicast
NLRI of received end-of-rib markers: inet6-unicast
NLRI of all end-of-rib markers sent: inet6-unicast
Peer supports 4 byte AS extension (peer-as 65789)
Peer does not support Addpath
Table inet6.0 Bit: 10000
RIB State: BGP restart is complete
Send state: in sync
Active prefixes: 4
Received prefixes: 5
Accepted prefixes: 5
Suppressed due to damping: 0
Advertised prefixes: 4
Last traffic (seconds): Received 2 Sent 11 Checked 18
Input messages: Total 118 Updates 2 Refreshes 0 Octets 2372
Output messages: Total 121 Updates 3 Refreshes 0 Octets 2636
Output Queue[0]: 0 (inet6.0, inet6-unicast)
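As a side note: if the same session were also to carry IPv4 routes, both families would have to be
listed explicitly. A minimal sketch, reusing the ‘ce’ group from above:
set protocols bgp group ce family inet unicast
set protocols bgp group ce family inet6 unicast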

After establishing the BGP session, we can verify that the PE router is receiving routes, and we can
check loopback IPv6 connectivity by issuing a ping from the PE to the CE device:
inetzero@pe1> show route receive-protocol bgp 2001:db8:1:: table inet6.0

inet6.0: 17 destinations, 21 routes (17 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
2001:db8:1::/127 2001:db8:1:: 65789 I
* 2001:db8:1017::/64 2001:db8:1:: 65789 I
* 2001:db8:1018::/64 2001:db8:1:: 65789 I
* 2001:db8:1019::/64 2001:db8:1:: 65789 I
2001:db8:1111::1/128
* 2001:db8:1:: 65789 I

inetzero@pe1> ping 2001:db8:1111::1 rapid
PING6(56=40+8+8 bytes) 2001:db8:1::1 --> 2001:db8:1111::1
!!!!!
--- 2001:db8:1111::1 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 0.722/1.888/4.999/1.620 ms
But what we really want is to establish IPv6 connectivity between CE1 and CE2 across our IPv4-based
MPLS core network. After taking care of the BGP session between CE2 and PE2, we are left with two
more steps to accomplish this: signaling the routing information across the network and enabling the
IPv4-based MPLS core to forward IPv6 traffic.
First, the 6PE signaling:

The PE routers need to advertise the IPv6 information to the RR and the RR needs to reflect this
information to the PE routers. The configuration for this could be as follows:
PE1:

set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 172.16.1.1
set protocols bgp group ibgp family inet6 labeled-unicast explicit-null
set protocols bgp group ibgp neighbor 172.16.1.4

PE2:

set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 172.16.1.2
set protocols bgp group ibgp family inet6 labeled-unicast explicit-null
set protocols bgp group ibgp neighbor 172.16.1.4

RR:

set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 172.16.1.4
set protocols bgp group ibgp family inet6 labeled-unicast explicit-null
set protocols bgp group ibgp cluster 0.0.0.1
set protocols bgp group ibgp neighbor 172.16.1.2
set protocols bgp group ibgp neighbor 172.16.1.1

We can verify this configuration on the RR by issuing the following command:


inetzero@rr> show bgp summary
Groups: 1 Peers: 2 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet6.0 8 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.1.1 65000 7 5 0 0 1:42 Establ inet6.0: 0/4/4/0
172.16.1.2 65000 7 6 0 0 1:42 Establ inet6.0: 0/4/4/0

We can see that the RR is receiving and accepting IPv6 routes. The routes are not active, though;
let’s see why that is:
inetzero@rr> show route receive-protocol bgp 172.16.1.1 table inet6.0 hidden detail 2001:db8:1111::1/128

inet6.0: 8 destinations, 8 routes (0 active, 0 holddown, 8 hidden)


2001:db8:1111::1/128 (1 entry, 0 announced)
Accepted
Route Label: 2
Nexthop: ::ffff:172.16.1.1
Localpref: 100
AS path: 65789 I

The next-hop attribute is an IPv4-mapped IPv6 address that is automatically derived from the
loopback IP address of the advertising PE. In order for the RR to reflect this route, it needs to be able
to resolve that next-hop. We can accomplish this by configuring a static route on the RR; the /125
below is enough because ::ffff:172.16.1.0/125 covers the mapped equivalents of 172.16.1.0 through
172.16.1.7, and with that all of the PE loopbacks in this topology:
set routing-options rib inet6.3 static route ::ffff:172.16.1.0/125 receive
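A more specific alternative sketch would be one receive route per PE loopback instead of the /125
aggregate:
set routing-options rib inet6.3 static route ::ffff:172.16.1.1/128 receive
set routing-options rib inet6.3 static route ::ffff:172.16.1.2/128 receive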

Let’s issue the ‘show bgp summary’ command again:


inetzero@rr> show bgp summary
Groups: 1 Peers: 2 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet6.0 8 8 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.1.1 65000 9 10 0 0 2:45 Establ
inet6.0: 4/4/4/0
172.16.1.2 65000 10 9 0 0 2:45 Establ
inet6.0: 4/4/4/0

This printout tells us that the routes are now active, received and accepted. Let’s look at the same
route on PE2:
inetzero@pe2> show route receive-protocol bgp 172.16.1.4 table inet6.0 2001:db8:1111::1/128 extensive hidden

inet6.0: 14 destinations, 17 routes (10 active, 0 holddown, 4 hidden)


2001:db8:1111::1/128 (1 entry, 0 announced)
Accepted
Route Label: 2
Nexthop: ::ffff:172.16.1.1
Localpref: 100
AS path: 65789 I (Originator)
Cluster list: 0.0.0.1
Originator ID: 172.16.1.1

The route was received by PE2. Just as on the RR, the route is hidden because the PE cannot
resolve its next-hop. The solution we used to make the RR resolve the route will not work on the PE
routers, though. This is because the PE routers not only need to resolve the route, they also need to
forward the IPv6 traffic across the already existing IPv4 MPLS core. Let’s have a look at the LDP
routes that the PE2 router currently has:
inetzero@pe2> show route protocol ldp table inet.3
inet.3: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.16.1.1/32 *[LDP/9] 00:12:34, metric 1
> to 192.168.1.37 via xe-0/2/1.11, Push 301936
to 192.168.1.33 via xe-0/2/2.10, Push 300928
172.16.1.3/32 *[LDP/9] 00:12:34, metric 1
> to 192.168.1.33 via xe-0/2/2.10, Push 300944
172.16.1.4/32 *[LDP/9] 00:12:08, metric 1
> to 192.168.1.33 via xe-0/2/2.10, Push 301040

On the PE, the inet6.3 table is completely empty:


inetzero@pe2> show route protocol ldp table inet6.3
inetzero@pe2>

There is a knob that will make the router install IPv4-mapped IPv6 addresses in the inet6.3 table for
all of the individual LDP and/or RSVP entries in the inet.3 table:
set protocols mpls ipv6-tunneling

This command needs to be configured on all of the PE routers that will be handling the IPv6 traffic.
After configuring it on PE2, we can observe several new entries in the inet6.3 table:
inetzero@pe2> show route protocol ldp table inet6.3

inet6.3: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)


+ = Active Route, - = Last Active, * = Both

::ffff:172.16.1.1/128
*[LDP/9] 00:00:03, metric 1
to 192.168.1.37 via xe-0/2/1.11, Push 301936
> to 192.168.1.33 via xe-0/2/2.10, Push 300928
::ffff:172.16.1.3/128
*[LDP/9] 00:00:03, metric 1
> to 192.168.1.33 via xe-0/2/2.10, Push 300944
::ffff:172.16.1.4/128
*[LDP/9] 00:00:03, metric 1
> to 192.168.1.33 via xe-0/2/2.10, Push 301040

The ipv6-tunneling knob copies routing information from the inet.3 table to the inet6.3 table for both
LDP and RSVP routes. In doing so, the destinations are converted into IPv4-mapped IPv6
addresses. To provide connectivity between CE1 and CE2, we need to configure this on both
PE1 and PE2.
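Once the knob is active, the previously hidden 6PE routes should resolve their next-hops via inet6.3
and become active. A quick sanity check on a PE could look like this (output omitted here):
inetzero@pe2> show route 2001:db8:1111::1 table inet6.0 detail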
After configuring ipv6-tunneling on both PE routers, let’s verify connectivity by issuing a ping from
CE1 to CE2:
inetzero@ce1> ping routing-instance ce1 source 2001:db8:1111::1 2001:db8:2222::2 rapid
PING6(56=40+8+8 bytes) 2001:db8:1111::1 --> 2001:db8:2222::2
.....
--- 2001:db8:2222::2 ping6 statistics ---
5 packets transmitted, 0 packets received, 100% packet loss

Whoops! We still have one thing left to do. We need to enable the PE routers’ uplinks for IPv6
processing. The routers are advertising the IPv6 routes with an ‘explicit-null’ label (label 2). The
penultimate router pops the outer label that is used to reach the egress LSR and forwards the IPv6
packets with only the explicit-null label to the last router. Since label 2 packets are treated as native
IPv6 packets, the egress LSR will not process them if the incoming interface is not configured for
IPv6. To have the egress LSR process the IPv6 traffic towards the CE, we need to add the inet6
family to all of the uplinks that both PE1 and PE2 have towards the core network:
PE1:
set interfaces xe-0/2/1 unit 2 family inet6
set interfaces xe-0/2/0 unit 2 family inet6
PE2:
set interfaces xe-0/2/1 unit 11 family inet6
set interfaces xe-0/2/2 unit 10 family inet6
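To double-check that the core-facing interfaces are now enabled for IPv6, you could look for the
inet6 family in the terse interface output, for example (output omitted):
inetzero@pe2> show interfaces xe-0/2/1.11 terse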

Now, after issuing the same ping from CE1 to CE2, we can observe the following:
inetzero@ce1> ping routing-instance ce1 source 2001:db8:1111::1 2001:db8:2222::2 rapid
PING6(56=40+8+8 bytes) 2001:db8:1111::1 --> 2001:db8:2222::2
!!!!!
--- 2001:db8:2222::2 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 0.830/0.882/1.040/0.079 ms

Another thing worth noting: older Junos OS releases would raise a commit error when the
‘explicit-null’ part of ‘family inet6 labeled-unicast explicit-null’ was omitted, but this error has
disappeared in newer releases. In recent Junos OS releases, without the ‘explicit-null’ statement, the
PE allocates a label for every individual directly connected IPv6 next-hop and advertises that label
with all of the IPv6 routes reachable via that next-hop.
With ‘explicit-null’ enabled:
inetzero@pe1> show route advertising-protocol bgp 172.16.1.4 extensive

inet6.0: 17 destinations, 21 routes (17 active, 0 holddown, 0 hidden)


* 2001:db8:1017::/64 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 2
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I
* 2001:db8:1018::/64 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 2
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I
* 2001:db8:1019::/64 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 2
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I
* 2001:db8:1111::1/128 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 2
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I

Without explicit-null enabled:


inetzero@pe1> show route advertising-protocol bgp 172.16.1.4 extensive
inet6.0: 17 destinations, 21 routes (17 active, 0 holddown, 0 hidden)
* 2001:db8:1017::/64 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 299968
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I
* 2001:db8:1018::/64 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 299968
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I
* 2001:db8:1019::/64 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 299968
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I
* 2001:db8:1111::1/128 (1 entry, 1 announced)
BGP group ibgp type Internal
Route Label: 299968
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] 65789 I

If the PE had several BGP neighbors, every individual neighbor would be allocated a different label.

Adding 6VPE to a network that is already enabled for 6PE requires little more than the activation of a
new address family. We’ll start out with the configuration of the VPN. Both CE sites use the same AS
number (65003), which is why ‘as-override’ is configured on the PE-CE sessions:

PE1:

set routing-instances c1 instance-type vrf
set routing-instances c1 interface xe-0/2/1.98
set routing-instances c1 vrf-target target:65000:1
set routing-instances c1 protocols bgp group ce as-override
set routing-instances c1 protocols bgp group ce neighbor 2001:db8:1::5 peer-as 65003
PE2:
set routing-instances c1 instance-type vrf
set routing-instances c1 interface xe-0/2/1.99
set routing-instances c1 vrf-target target:65000:1
set routing-instances c1 protocols bgp group ce as-override
set routing-instances c1 protocols bgp group ce neighbor 2001:db8:1::7 peer-as 65003

These VRF snippets assume that a route distinguisher is derived automatically via
‘route-distinguisher-id’ under routing-options; otherwise, each VRF also needs an explicit
‘route-distinguisher’. We also need to enable the PE routers and the RR to distribute this routing
information. We can do this by configuring the following on PE1, PE2 and the RR:
set protocols bgp group ibgp family inet6-vpn unicast
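Keep in mind that adding an address family will bounce the iBGP sessions while the capabilities are
renegotiated. Once the sessions are re-established, the new family should show up in the NLRI lines
of the neighbor output; a quick check from PE1 could look like this (output omitted):
inetzero@pe1> show bgp neighbor 172.16.1.4 | match NLRI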

The 6VPE routes are advertised with an IPv4-mapped IPv6 next-hop. Because we already enabled
the PE routers for ipv6-tunneling earlier, this is not a problem. The 6PE routes had the explicit-null
label allocated to them, which is why the uplinks on the PE routers had to be enabled for IPv6. With
6VPE, this would not have been necessary. Just as with IPv4 VPN routes, every 6VPE route uses
two labels: one label is the VPN label and the other is used to reach the egress LSR. When the
penultimate router pops the outer label, the traffic that is forwarded is still encapsulated by the VPN
label, so the egress PE never has to treat it as native IPv6 on the core-facing interface.
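To actually see both labels, the extensive view of a VPN route on the ingress PE should reveal the
VPN label together with the transport label push. A sketch of such a check, using one of the prefixes
from the outputs below (output omitted):
inetzero@pe1> show route table c1.inet6.0 2001:db8:4444::4 extensive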
The verification of the 6VPE setup is pretty straightforward. Check the BGP sessions with the CE
devices, check the routes sent to and received from the RR, and finish up with a ping:
inetzero@pe1> show bgp summary instance c1
Groups: 1 Peers: 1 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
c1.inet6.0 10 9 0 0 0 0
c1.mdt.0 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
2001:db8:1::5 65003 64 67 0 0 28:02 Establ
c1.inet6.0: 4/5/5/0
inetzero@pe1> show route receive-protocol bgp 172.16.1.4 table c1.inet6.0

c1.inet6.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)


Prefix Nexthop MED Lclpref AS path
* 2001:db8:1::6/127 ::ffff:172.16.1.2 100 I
* 2001:db8:1013::/64 ::ffff:172.16.1.2 100 65003 I
* 2001:db8:1014::/64 ::ffff:172.16.1.2 100 65003 I
* 2001:db8:1015::/64 ::ffff:172.16.1.2 100 65003 I
2001:db8:4444::4/128
* ::ffff:172.16.1.2 100 65003 I
inetzero@pe1> show route advertising-protocol bgp 172.16.1.4 table c1.inet6.0
c1.inet6.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 2001:db8:1::4/127 Self 100 I
* 2001:db8:1009::/64 Self 100 65003 I
* 2001:db8:1010::/64 Self 100 65003 I
* 2001:db8:1011::/64 Self 100 65003 I
2001:db8:3333::3/128
* Self 100 65003 I

From 6VPE-CE1:
inetzero@6vpe-ce1> ping routing-instance 6vpe-1 source 2001:db8:3333::3 2001:db8:4444::4 rapid
PING6(56=40+8+8 bytes) 2001:db8:3333::3 --> 2001:db8:4444::4
!!!!!
--- 2001:db8:4444::4 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 0.866/1.312/1.930/0.495 ms
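As an optional extra check, a traceroute between the CE loopbacks could complement the ping
(output omitted):
inetzero@6vpe-ce1> traceroute 2001:db8:4444::4 routing-instance 6vpe-1 source 2001:db8:3333::3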

This article covered the basics of 6PE and 6VPE configuration and verification.
