System Architecture
This chapter describes the FAN architecture for AMI with the IOK in the head-end, the system components and their specific roles in the architecture, and the design specifications.
This chapter includes the following major topics:
• System Topology, page 3-1
• System Components, page 3-3
• System Design Specifications, page 3-7
• FAN—Network Infrastructure and Routing, page 3-8
• FAN Network Management, page 3-23
• FAN—Security, page 3-32
• FAN—High Availability, page 3-41
• FAN—Sizing and Scaling, page 3-42
• Architecture Summary and Implementation Guidelines, page 3-42
System Topology
Figure 3-1 shows the overview of the system topology.
[Figure 3-1: System topology — smart meters on the RF mesh, WAN backhaul over an IPsec tunnel, ASA 5545-X firewall with IPS/IDS fronting the head-end, and the Utility Data Center with ECC CA and Directory Server]
The smart meters or the Connected Grid endpoints interconnect with the CGR 1000 Series routers over
the sub-GHz RF mesh network, forming the Neighborhood Area Network (NAN). The CGRs, or the field
area routers, connect to the WAN backhaul, which may be public or private (i.e., utility owned). The
choices for WAN backhaul include Ethernet/Fiber, Cellular 3G/4G, WiMAX, and others. The solution
is validated using an Ethernet connection as a WAN backhaul, hence the Ethernet ports of the CGR are
configured and enabled.
The Energy Operations Center houses all the application head-end components, such as collection
engine, OMS, DMS, MDMS, etc. and communication head-end components, such as network
management services, directory services, AAA services, etc. The communications head-end in the
current solution is the virtualized head-end in a box, namely the Cisco Industrial Operations Kit (IOK).
The Energy Operations Center may be co-located with the Data Center in some deployments or may be
distinct. The solution leverages the functionality of certain components housed in the utility data center.
They should be reachable from the head-end systems by IP.
The Cisco ASA firewall secures the WAN link to the head-end containing the Cisco Industrial
Operations Kit (IOK). The IOK exposes the components, which are internally wired, through the vmnic0
and vmnic1 network interfaces.
The vmnic0 interface interconnects the TPS, Registration Authority, and head-end routers and must be
connected to the DMZ subnet.
The vmnic1 interface interconnects the head-end routers, FND, orchestrator, registration authority, and
the RSA based Certificate Authority and must be connected to the data center subnet.
System Components
This section describes the components used in the solution and their functional roles.
Active Directory
The Active Directory is part of the Utility Data Center and provides directory services. It can act as a user identity data store for FreeRADIUS when a large number of meters must be authenticated; for a smaller number of devices, the FreeRADIUS local database may be used. The directory stores identity information for the CGR 1000 Series routers and meters and supports authentication of devices in the Field Area Network.
Collection Engine
The collection engine is a centralized AMI application management tool which receives meter data from
the NAN and processes or forwards it to other AMI applications. It provides the interface between the
metering system and utility processes such as meter data management, billing, and outage management.
Firewall
A high performance, application-aware firewall with IPS/IDS capability should be installed between the
WAN and the head-end infrastructure at the EOC. The firewall performs inspection of IPv4 and IPv6
traffic from/to the FAN. Its throughput capacity must match the volume of traffic flowing between the
application servers and the FANs.
The Cisco Adaptive Security Appliance (ASA) 5585-X running Cisco ASA Software Release 9.3 should be used. The Cisco ASA 5585-X is a high-performance data center security solution. For smaller deployments, low- and mid-range firewalls such as the ASA 5525-X and the ASA 5545-X may be used.
The ASA FirePOWER module may be added for next generation firewall services such as Intrusion
Prevention (IPS), Application Visibility Control (AVC), URL filtering, and Advanced Malware
Protection (AMP).
Firewalls can be configured for multiple (virtual) security contexts. For instance, FAR provisioning
network servers can be on a different context from infrastructure servers for segmentation. Firewalls are
best deployed in pairs to permit failover in case of malfunction.
FAN—Cisco Products
Table 3-1 lists the Cisco products used in the implementation of the Field Area Network.
FAN—Third-Party Products
Table 3-2 lists the third-party products used in the Field Area Network.
FAN—Network Infrastructure and Routing
Smart Meters
A smart meter is different from a legacy meter in that it is capable of two-way communication and
typically has an IP address. In the context of the FAN, the smart meters form the Connected Grid
endpoints. These smart meters are IP-enabled grid devices with an embedded IPv6-based
communication stack powered by the Cisco SDK library.
Refer to the Cisco Developer Network (CDN) to learn more about IP enablement for partner technologies.
[Figure: CGE communication stack — CSMP management and applications over CoAP and UDP/TCP, with an 802.1X/EAP-TLS-based access control solution and an MR-FSK PHY]
The Connected Grid Endpoints (CGEs), or smart meters, form an RF-based mesh network. The endpoints support the IEEE 802.15.4g Smart Utility Networks (SUN) PHYs and the IEEE 802.15.4e MAC sub-layer enhancements, which are physical layer communication standards for low-rate personal area networks. The CGEs are certified for the Wi-SUN 1.0 PHY profile. The current implementation supports frequencies in the range of 902-928 MHz, with 64 non-overlapping channels and 400 kHz spacing for North America. Subsets of the North American frequency band for Brazil, Australia, Hong Kong, Japan, etc. require modification of the endpoints at the time of manufacturing. Support for the European sub-900 MHz band (863-870 MHz, or the new 870-876 MHz band allocated by CEPT) is expected in the near future.
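As a quick sanity check on the channel plan above, the 64 channel centers at 400 kHz spacing can be enumerated. This is a minimal sketch; the anchor frequency of 902.4 MHz is an assumption taken from the IEEE 802.15.4g SUN FSK channel-numbering convention, not stated in this document.

```python
# Sketch: channel centers for the 64-channel, 400 kHz plan in 902-928 MHz.
# FIRST_CENTER_MHZ = 902.4 is an assumed anchor (IEEE 802.15.4g-style
# numbering); the endpoint firmware may use a different value.
FIRST_CENTER_MHZ = 902.4
SPACING_MHZ = 0.4
NUM_CHANNELS = 64

def channel_center_mhz(channel: int) -> float:
    """Return the center frequency (MHz) of a channel index 0..63."""
    if not 0 <= channel < NUM_CHANNELS:
        raise ValueError("channel out of range")
    return FIRST_CENTER_MHZ + SPACING_MHZ * channel

# All 64 centers fit inside the 902-928 MHz ISM band.
centers = [channel_center_mhz(c) for c in range(NUM_CHANNELS)]
assert 902.0 < centers[0] and centers[-1] < 928.0
```

The 64 channels at 400 kHz spacing occupy 25.6 MHz, which fits within the 26 MHz band with a small guard margin.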
The 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks) interface acts as an adaptation
layer over the IEEE 802.15.4 layer to enable IPv6 communication on the IEEE 802.15.4g/e RF mesh. It
provides header compression, IPv6 datagram fragmentation, and optimized IPv6 neighbor discovery,
thus enabling efficient IPv6 communication over the low-power and lossy links such as the ones defined
by IEEE 802.15.4.
The smart meters are provisioned with IPv6 addresses by the Field Area Routers (the CGR 1000 Series routers) through DHCPv6. They also receive additional parameters, such as the IoT FND and Collection Engine IPv6 addresses, through DHCPv6.
The CGR is provisioned with a Wireless Personal Area Network (WPAN) module and each PAN in the
NAN maps to a specific WPAN module in the CGR.
The WPAN module in the CGR provides the following functionality:
• 902-to-928 MHz ISM band frequency hopping technology
• Dynamic network discovery and self-healing network capabilities based on IPv6, IEEE 802.15.4e/g
(IETF 6LoWPAN [RFC 6282]), and IETF RPL
• Robust security functionality including Advanced Encryption Standard (AES) 128-bit encryption,
IEEE 802.1X, and IEEE 802.11i based-mesh security
• WPAN module firmware upgrade functionality
[Figure: PAN-to-WPAN mapping — each RF mesh is associated with a specific PAN (PAN1, PAN2), and each PAN maps to a WPAN module in its CGR (CGR1, CGR2)]
CGEs implement standard IPv6 services. The IPv6 layer also uses the mesh interface to forward IPv6
datagrams across other communication modules.
RFC 768 User Datagram Protocol (UDP) is the recommended transport layer over 6LoWPAN.
Table 3-3 summarizes the protocols applied at each layer of the neighborhood area network.
CG-Mesh
CG-Mesh is the embedded firmware for Smart Grid assets within a Neighborhood Area Network that supports an end-to-end IPv6 communication network using mesh networking technology. CG-Mesh is embedded in Smart Grid endpoints, such as residential electric meters, which use IP Layer 3 mesh networking and perform end-to-end IPv6 networking functions on the communication module. Connected Grid Endpoints (CGEs) support an IEEE 802.15.4e/g interface and a standards-based IPv6 communication stack, including security and network management.
CG-Mesh supports a frequency-hopping radio link, network discovery, link-layer network access
control, network-layer auto configuration, IPv6 routing and forwarding, firmware upgrade, and power
outage notification.
CG-Mesh Deployment
CG-Mesh Formation
When meters join the network on booting for the first time, the process is referred to as cold boot. A cold
boot is when the meter has not yet been authenticated because it is the first time the meter is joining the
network or the meter key has expired.
The process is referred to as warm boot when the meter has a working key, in which case authentication
has already been established and the meter joins the mesh quickly.
The steps followed for the initial connected grid mesh formation (cold boot) are outlined in Figure 3-5.
[Figure 3-5: Cold boot mesh formation — the joining meter proceeds through IEEE 802.15.4, IEEE 802.1X/802.11i access control, RPL, DHCPv6, and CSMP registration via the Connected Grid Router to the RADIUS, DHCP, and FND servers over the WAN]
In case of a node reboot or PAN migration (warm start), the node has cached data and the last two steps
may be omitted.
The steps involved in the warm boot of meters are shown in Figure 3-6.
[Figure 3-6: Warm boot — a meter with cached credentials rejoins the mesh through IEEE 802.15.4, 802.1X/802.11i, RPL, DHCPv6, and CSMP via the Connected Grid Router to the RADIUS, DHCP, and FND servers over the WAN]
Figure 3-7 Meters Joining the Network—New Meters Joining a Multi-Hop Mesh
[Figure 3-7 content: a new meter joins a multi-hop mesh through a meter already in the mesh, using IEEE 802.15.4, 802.1X/802.11i, RPL, DHCPv6, and CSMP over UDP across the PAN wireless mesh; join phases: network discovery, mesh access control, route discovery, IPv6 address assignment, route registration, and FND registration]
[Figure: End-to-end AMI topology — smart meters connect through the CGR 1000 over the IP WAN to the Utility Energy Operations Center]
Frequency Hopping
CGEs implement frequency hopping across 64 channels with 400-kHz spacing in the 902-to-928 MHz
ISM band. The frequency hopping protocol maximizes the use of the available spectrum by allowing
multiple sender-receiver pairs to communicate simultaneously on different channels. The frequency
hopping protocol also mitigates the negative effects of narrowband interferers.
CGEs allow each communication module to follow its own channel-hopping schedule for unicast
communication and synchronize with neighboring nodes to periodically listen to the same channel for
broadcast communication. This enables all nodes within a CGE PAN to use different parts of the
spectrum simultaneously for unicast communication when nodes are not listening for a broadcast
message. Using this model, broadcast transmissions can experience higher latency than with unicast
transmissions.
When a communication module has a message destined for multiple receivers, it waits until its neighbors
are listening on the same channel for a transmission. The size of a broadcast listening window and the
period of such listening windows determine how often nodes listen for broadcast messages together
rather than listening on their own channels for unicast messages.
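The trade-off described above reduces to a simple ratio: the fraction of airtime a node dedicates to the broadcast schedule is the listening window divided by the window period, with the remainder available for unicast on the node's own hopping schedule. A minimal sketch, with hypothetical window and period values:

```python
# Illustration of the broadcast/unicast airtime split. The specific
# window and period numbers below are hypothetical, chosen only to
# make the ratio concrete.
def broadcast_duty_cycle(window_ms: float, period_ms: float) -> float:
    """Fraction of airtime spent listening on the broadcast channel."""
    if not 0 < window_ms <= period_ms:
        raise ValueError("window must be positive and no larger than the period")
    return window_ms / period_ms

# e.g. a 25 ms listening window every 250 ms -> 10% broadcast airtime,
# leaving 90% for unicast on per-node hopping schedules.
assert broadcast_duty_cycle(25.0, 250.0) == 0.1
```

A larger window or shorter period lowers broadcast latency but leaves less airtime for unicast traffic.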
CG-Mesh uses the communication module hardware in a way that is compliant with the IEEE
802.15.4e/g MAC/PHY specification. CG-Mesh uses the following PHY parameters:
• Operating Band: 902 to 928 MHz
• Number of Channels: 64
• Channel Spacing: 400 kHz
• Modulation Method: Binary FSK
• Data rate: 150 kbaud (75 kbps effective bit rate due to FEC)
• Maximum output Power: 28 dBm
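The rate figures in the list above are consistent with a simple calculation: binary FSK carries one bit per symbol, so 150 kbaud yields 150 kbps raw, and halving to 75 kbps implies a rate-1/2 forward error correction code. The code rate is our inference from the two numbers, not stated explicitly in the text.

```python
# Sketch of the PHY rate arithmetic. FEC_CODE_RATE = 0.5 is an
# assumption inferred from the 150 kbaud -> 75 kbps figures above.
BAUD_RATE_K = 150      # ksymbols/s
BITS_PER_SYMBOL = 1    # binary FSK: one bit per symbol
FEC_CODE_RATE = 0.5    # assumed rate-1/2 forward error correction

raw_kbps = BAUD_RATE_K * BITS_PER_SYMBOL
effective_kbps = raw_kbps * FEC_CODE_RATE
assert raw_kbps == 150 and effective_kbps == 75
```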
Enhanced Beacon (EB) messages allow communication modules to discover PANs that they can join.
CGEs also use EB messages that disseminate useful PAN information to devices that are in the process
of joining the PAN. Joining nodes are nodes that have not yet been granted access to the PAN. As such,
joining nodes cannot communicate IPv6 datagrams with neighboring devices. The EB message is the
only message sent in the clear that can provide useful information to joining nodes. CGRs drive the
dissemination process for all PAN-wide information.
Joining devices also use the RSSI (Received Signal Strength Indication) value of the received EB
message to determine if a neighbor is likely to provide a good link. The transceiver hardware provides
the RSSI value. Neighbors that have an RSSI value below the minimum threshold during the course of
receiving EB messages are not considered for PAN access requests.
CGEs support the following performance-enhancing parameters:
• Network discovery time—To assist field installations, CGEs support mechanisms that allow a node
to determine whether or not it has good connectivity to a valid mesh network.
• Network formation time—To assist field installations, CGEs use mechanisms that allow up to 5,000
nodes in a single WPAN to go through the complete network-discovery, access-control, network
configuration, route formation, and application registration process.
• Network restoration time—The mechanism that aids the rerouting of traffic during a link failure.
• Power outage notification (PON)—CG-Mesh supports timely and efficient reporting of power
outages and conserves energy by notifying the communication module and neighboring nodes of the
outage. Communication modules unaffected by the power outage gather and forward the
information to a CGR. See Figure 3-9.
[Figure 3-9: Power outage notification — nodes unaffected by the outage forward the notification over UDP to the CGR]
NAN Routing
Routing in the 6LoWPAN NAN subnet employs the Layer 3 RPL protocol (IPv6 Routing Protocol for Low-Power and Lossy Networks). Smart meters act as RPL nodes, while the CGR 1000 acts as the RPL Directed Acyclic Graph (DAG) root and stores information reported in Destination Advertisement Object (DAO) messages to forward datagrams to individual nodes within the mesh network. RPL constructs the routing tree of the meters, forming a Destination Oriented Directed Acyclic Graph (DODAG) rooted at a single point, namely the CGR.
In the context of AMI, meters act as forwarding nodes. Hence, their default mode should be RPL non-storing mode.
When a routable IPv6 address is assigned to its CG-Mesh interface, the CGE completes the RPL Tree
formation by sending DAO messages informing the DODAG root of its IPv6 address and the IPv6
addresses of its parents.
Upon receiving the DAO messages, which collect route information through the upstream CGEs, the CGR 1000 (DODAG root) builds the downstream RPL route to each node. At this stage, a CGE is fully operational, having completed its authentication and CG-Mesh network registration. The CGR 1000 constructs a source route to a node when external devices, such as the FND, try to reach it.
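The non-storing-mode behavior described above can be sketched compactly: each DAO contributes a (node, parent) pair, and the root stitches those pairs into a hop-by-hop source route on demand. This is an illustrative model only; names and the data structure are hypothetical, not the CGR implementation.

```python
# Minimal sketch of non-storing-mode RPL at the DODAG root: DAO
# messages report (node -> parent) links, and the root composes a
# source route when a head-end system needs to reach a node.
def build_source_route(dao_parents: dict, target: str, root: str) -> list:
    """Return the hop list root -> ... -> target, or raise if unreachable."""
    path = [target]
    while path[-1] != root:
        parent = dao_parents.get(path[-1])
        if parent is None or parent in path:   # unknown node or routing loop
            raise ValueError(f"no route to {target}")
        path.append(parent)
    return list(reversed(path))

# DAO state learned from a hypothetical three-hop mesh
daos = {"meterC": "meterB", "meterB": "meterA", "meterA": "cgr1"}
assert build_source_route(daos, "meterC", "cgr1") == ["cgr1", "meterA", "meterB", "meterC"]
```

Because only the root keeps this state, the meters themselves need no per-destination routing tables, which is why non-storing mode suits constrained endpoints.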
[Figure: RPL instance — upward and downward routes within the DODAG, with node rank increasing away from the root]
IP Addressing
For most FAN deployments, address planning is required up front. The IPv4 addressing plan must be derived from the utility's existing scheme, while the IPv6 addressing plan will most likely be new. In all cases, it is assumed that the network will be dual-stack.
Table 3-4 shows FAN devices with their IPv4 and IPv6 capabilities.
IPv4 Addressing
IPv4 prefixes assigned to FANs might be either public or private. A private IPv4 prefix, as documented
in RFC 1918, must never be advertised outside the private domain of the utility.
The following FAN devices are expected to require an IPv4 address, depending on utility policy:
• FARs: CGR 1000 Series router
– Loopback
– Tunnel endpoint
– Layer 3 Ethernet and Wi-Fi interfaces
• Head-end routers
• Application head-end servers
• Communication head-end servers
IPv6 Addressing
IPv6 prefixes assigned to FANs can be either global or private (Unique Local IPv6 Unicast Addresses
(ULA)).
• Global IPv6 prefix: Obtained through one of the five Regional Internet Registries (RIR): AFRINIC,
APNIC, ARIN, LACNIC, or RIPE. The entity requesting the prefix from the RIR must be registered
with the RIR as either a Local Internet Registry (LIR) or end-user organization. A global prefix
might alternately be obtained from an ISP.
A utility should consider registering as a LIR to obtain its own IPv6 prefix and therefore be fully
independent from any churn in the ISP addressing architecture.
RIRs define policies regarding allocation of an IPv6 prefix and the prefix size. A RIR prefix
allocation is by default ::/32 prefix for a LIR, and ::/48 for an end-user organization. The RIR
policies also define how larger or smaller prefixes can be allocated to a LIR and an end-user
organization.
A justification, based on the number of sites and hosts, must be given for the non-default allocation.
The number of FAN sites and subnets drive the decision to register as a LIR or as an end-user
organization and further justify the requests made for prefix allocation and size.
• ULA IPv6 prefix: A Unique Local Address (ULA) IPv6 prefix, documented in RFC 4193, is allowed to be "nearly unique". It starts with the FC00::/7 value, but the following L bit and 40-bit Global ID allow an addressing space far greater than the three private IPv4 prefixes (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) documented in RFC 1918. The size of the Global ID effectively produces a pseudo "uniqueness." Note, however, that there is currently no central registration of ULA prefixes.
The main differences between selecting a global or ULA IPv6 prefix are the following:
• A global prefix requires registration with the RIR either as a LIR or an end-user organization, involving paperwork and fees, before an IPv6 prefix allocation can be obtained and justified. A ULA does not require this registration.
• Filtering at the border of the utility routing domain.
– A ULA IPv6 prefix must NEVER be advertised to the Internet routing table.
– A global IPv6 prefix or portions of its address space might be advertised to the Internet routing
table and incoming traffic MUST be properly filtered to block any undesirable traffic.
• Internet access: A ULA-based addressing architecture requires the IPv6-to-IPv6 Network Prefix
Translation (IPv6 NPT, RFC 6296) device(s) to be located at the Internet border. Remote workforce
management use cases might require Internet access, such as third-party technicians connecting to
their corporate network from a FAN site, an FND operator using the Google map features, etc. For
web access, web proxies can be a solution.
Once an IPv6 prefix has been allocated for the FAN, a hierarchy numbering the regions, districts, sites, subnets, and devices must be properly structured. IPv6 addressing is classless, but the 128-bit address can be split between a routing prefix (the upper 64 bits) and the Interface Identifier (IID, the lower 64 bits). A hierarchical structure eases the configuration of route summarization and filtering.
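The hierarchical split can be sketched with the standard library's ipaddress module. The /32 allocation and the region/site boundary sizes below are hypothetical examples; only the /64 subnet plus 64-bit IID split is fixed by the text above.

```python
# Sketch: carving a hierarchy out of an allocated prefix. The /32
# allocation (RFC 3849 documentation prefix) and the /40 region and
# /48 site boundaries are illustrative choices, not a recommendation.
import ipaddress

allocation = ipaddress.ip_network("2001:db8::/32")    # hypothetical LIR allocation
regions = list(allocation.subnets(new_prefix=40))     # 256 regions
sites = list(regions[0].subnets(new_prefix=48))       # 256 sites per region
subnets = list(sites[0].subnets(new_prefix=64))       # 65536 /64 subnets per site

assert len(regions) == 256 and len(sites) == 256
assert subnets[0].prefixlen == 64                     # leaves 64 bits for the IID
```

Keeping each tier on a nibble or byte boundary, as here, makes route summarization and filter rules straightforward to express.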
[Figure: IOK virtual machines — TPS, RA cluster (ESR 5921), HER cluster (CSR 1000v), FND + Oracle DB, FreeRADIUS, RSA CA, and Collection Engine across the DMZ and data center subnets, reached from the smart meters over an IPsec tunnel across the WAN]
DHCP Services
DHCPv6 is the preferred address allocation mechanism for the AMI. In the small-scale FAN head-end
solution with IOK, Cisco IOS running on the CGR 1000 is leveraged as the DHCP server. The pool of
addresses may be provided in the configuration template of the IOK. Optionally, a centralized IPv6
DHCP server like the Cisco Network Registrar (CNR) may be used.
CGEs implement a DHCPv6 client for IPv6 address auto-configuration. CG-Mesh uses the DHCPv6
Rapid Commit option to reduce the traffic to only “Solicit” and “Reply” messages; therefore the
DHCPv6 server (namely the CGR 1000) must support this option.
CGEs implement a DHCPv6 client, while the CGR acts as a DHCPv6 server. A joining node might not
be within range of a CGR and must use a neighboring communication module to make DHCPv6
requests. No DHCPv6 server address needs to be configured on a CGE.
The Cisco IOS on the CGR acts as a DHCP server and accepts address assignment requests and renewals
and assigns the addresses from predefined groups of addresses contained within DHCP address pools.
In IPv6 networking, prefix delegation is used to assign a network address prefix to a user site such as a
PAN, by configuring the CGR with the prefix to be used for each PAN.
Each 6LoWPAN subnet gets assigned an IPv6 multicast group compliant with the unicast-prefix-based
multicast address (RFC 3306). For instance, a PAN rooted at the IPv6 address of
2001:dead:beef:240::/64 has a corresponding multicast address of ff38:0040:2001:dead:beef:240::1.
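The RFC 3306 mapping used above packs the fields ff | flags | scope | reserved | plen | 64-bit prefix | 32-bit group ID into the 128-bit address, with flags 3 (prefix-based), scope 8 (organization-local), and plen 64. A minimal sketch of the derivation; the function name is ours:

```python
# Sketch of the RFC 3306 unicast-prefix-based multicast mapping:
# ff3S:00PL:<64-bit unicast prefix>:<32-bit group ID>.
import ipaddress

def pan_multicast_group(prefix: str, group_id: int = 1, scope: int = 8) -> ipaddress.IPv6Address:
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen == 64
    flags = 0x3                                            # P=1 (prefix-based), T=1
    top32 = 0xFF000000 | (flags << 20) | (scope << 16) | net.prefixlen
    value = (top32 << 96) | ((int(net.network_address) >> 64) << 32) | group_id
    return ipaddress.IPv6Address(value)

# Reproduces the example in the text.
group = pan_multicast_group("2001:dead:beef:240::/64")
assert group == ipaddress.IPv6Address("ff38:40:2001:dead:beef:240:0:1")
```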
DHCP services of the IOK are used to provide IP addresses to the IoT FND and other virtual machines within the IOK.
The user should provide the IPv6 prefix for each PAN during ZTD staging by the IOK.
IP Unicast Forwarding
CGEs or smart meters implement a route-over architecture where forwarding occurs at the network layer.
CGEs examine every IPv6 datagram that they receive and determine the next-hop destination based on
information contained in the IPv6 header. CGEs do not use any information from the link-layer header
to perform next-hop determination.
CGEs implement the options for carrying RPL information in data plane datagrams. The routing header
allows a node to specify each hop that a datagram must follow to reach its destination.
The CGE communication stack offers four priority queues for QoS and supports differentiated classes
of service when forwarding IPv6 datagrams to manage interactions between different application traffic
flows as well as control-plane traffic. CGEs implement a strict-priority queuing policy, where
higher-priority traffic always takes priority over lower-priority traffic.
The traffic on CGEs is marked by the vendor implementation (configuration functionality is not
available). If required, traffic can be re-marked on the CGR.
IP Multicast
IPv6 multicast is required between the FND or Collection Engine (head-end system) and the CG-Mesh
endpoints when performing the following:
• Software upgrades of the endpoints by the FND
• Demand reset messages from the Collection Engine
• Demand response messages from the Collection Engine
• Targeted applications pings (a group of meters on a given feeder, for example) by the FND
• Messaging a group of meters with the same read time/cycle by the collection engine
There is no IPv6 multicast requirement between the FND and the CGR 1000 Series router when
performing a Cisco IOS software upgrade.
PIM is the protocol of choice for multicast traffic in the Field Area Network. The PIM-SSM is a data
delivery model that best supports one-to-many broadcast applications. PIM-SSM builds trees that are
rooted in just one source, offering a more secure and scalable model for a limited amount of applications
(mostly broadcasting of content). In SSM, an IP datagram is transmitted by an IP unicast source S to an
SSM destination address G, which is the multicast group IP address, and receivers can receive this
datagram by subscribing to channel (S,G).
This model suits the FAN because the smart meters or CGEs do not themselves participate in IPv6 multicast routing; the CGR handles multicast group membership on behalf of its PAN.
Thus, each 6LoWPAN subnet associated with the CGR acts as a multicast group compliant with the
unicast-prefix-based multicast address as per RFC 3306, as mentioned earlier. For instance, a PAN
rooted at the IPv6 address of 2001:dead:beef:240::/64 has a corresponding multicast address of
ff38:0040:2001:dead:beef:240::1.
CGEs deliver IPv6 multicast messages that have an IPv6 destination address scope larger than link-local
when using a Layer 2 broadcast. When CGEs receive a global-scope IPv6 multicast message, the node
delivers the message to higher layers if the node is subscribed to the multicast address. CGEs then
forward the message to other nodes by transmitting the same IPv6 multicast message over the mesh
interface. CGEs use an IPv6 hop-by-hop option containing a sequence number to ensure that a message
is not received and forwarded more than once.
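The hop-by-hop sequence-number check described above amounts to forwarding a flooded message only the first time a given (source, sequence) pair is seen. A minimal sketch of that duplicate-suppression idea; the class and its unbounded set are illustrative simplifications (a real endpoint would use a bounded window):

```python
# Sketch: suppress re-forwarding of a flooded multicast message by
# remembering (source, sequence) pairs from the hop-by-hop option.
class MulticastDeduper:
    def __init__(self):
        self._seen = set()

    def should_forward(self, source: str, seq: int) -> bool:
        key = (source, seq)
        if key in self._seen:
            return False          # duplicate: already received and forwarded
        self._seen.add(key)
        return True               # first copy: deliver up and rebroadcast

dedup = MulticastDeduper()
assert dedup.should_forward("cgr1", 42) is True    # first copy: forward
assert dedup.should_forward("cgr1", 42) is False   # rebroadcast copy: drop
assert dedup.should_forward("cgr1", 43) is True    # new sequence number
```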
[Figure: IP multicast data flow — multicast traffic from the FND and Collection Engine passes through the CSR 1000v HER, acting as the rendezvous point (RP), across the IPsec tunnel and into the mesh; IOK virtual machines span the DMZ and data center subnets]
The following are the steps involved in the IP multicast data flow:
Step 1 Each 6LoWPAN subnet gets assigned an IPv6 multicast group compliant with the unicast-prefix-based multicast address format (RFC 3306). That is, each PAN is a multicast group.
Step 2 The IoT FND must be configured to enable IPv6 multicast.
Step 3 The collection engine must be configured to map the application multicast address to a single IPv6
multicast address per 6LoWPAN subnet.
Step 4 The CGR is configured with Multicast Listener Discovery v2 (MLDv2) to join the MLD group and communicate with the HER. An IPv6 multicast agent should be set up for communication with the FND and Collection Engine. For a FAR communicating with redundant HERs within an IOK, the MLDv2 join must be configured on a loopback interface instead of the GRE tunnel interface. The FAR is then configured with the PIM6 feature, which enables the active tunnel to listen to the multicast traffic, making the FAR act as a PIM6 router. For this particular solution with IOK, the loopback option is preferred: the MLDv2 join is configured on the loopback interface.
Step 5 Each HER is configured with PIM6 SSM, forwarding the appropriate multicast traffic to the
unicast-prefix-based multicast address of the CGR 1000.
Step 6 IoT FND and Collection Engine are the sources of multicast traffic. FND sends a message to the
appropriate IPv6 address to target a PAN.
Step 7 The Layer 2 switch in the EOC, if any, must have MLD snooping enabled.
Step 8 The CSR, which is the head-end router, acts as the RP for PIM6 sparse mode. The multicast traffic is
forwarded towards the CGR.
Step 9 The multicast traffic is encapsulated and transmitted through the IPsec tunnel.
Step 10 The CGR 1000 receives the IPv6 multicast traffic and forwards it to the meters as a Layer 2 broadcast
over the CG mesh. The individual meters can forward the Layer 3 multicast packets after they are
mapped to a Layer 2 broadcast.
Note Changing the routing protocol after pre-ZTD configuration by the IOK is not a recommended practice.
The Field Area Router, namely the CGR 1000 Series router, allows redistribution of the RPL routes, including the WPAN prefix as well as external RPL routes.
Before redistributing RPL in OSPFv3, OSPFv3 must be configured on the uplink tunnel interface. This
is orchestrated by the IOK as a part of the ZTD staging process.
FAN Network Management
[Figure: FAN network management overview — Cisco IoT FND provisions CGR 1000 routers, CGEs, and HER tunnels at the appropriate network places of the Field Area Network across the public or private WAN, using push and pull models, NETCONF, CSMP/CoAP, a DHCPv4/v6 provisioning server, IPAM (DNS/DHCP), AAA, CA and RA servers, RADIUS, directory services, a syslog server, and SIEM, with a northbound API toward head-end systems such as outage reporting and meter data management]
NAN devices are managed through the Field Network Director (FND); a FAR can also be managed locally with the CG Device Manager.
The CG-Mesh endpoints have no physical user interface, such as buttons or a display; therefore, all configuration and management occur through the Constrained Application Protocol (CoAP) Simple Management Protocol (CSMP) from the Cisco IoT Field Network Director.
IOK Orchestration
The orchestrator of the IOK performs the following tasks, which can be managed from its web-portal:
• Monitors VM status and provides VM restart functionality.
• Provides license import capability for HER, Certificate Authority and FND.
• Displays system topology with IP information.
• Displays the user XML configuration file utilized for deployment.
• Tracks and displays the event log.
• Provides IOK system backup and restore.
• Provides IOK upgrade with patch file.
• Facilitates the deployment of the CGR 1000 routers. This is also referred to as pre-ZTD
configuration or ZTD staging.
[Figure: IOK components for ZTD — TPS, RA (ESR 5921 cluster), HER (CSR 1000v), FND + Oracle DB, FreeRADIUS, and RSA CA as VMs within the IOK; the CGR 1000 reaches the IOK over HTTPS and an IPv6-over-IPv4 GRE tunnel protected by IPsec across the WAN]
Figure 3-14 provides an overview of Zero Touch deployment performed by the FND with the interaction
between the IOK components.
The following are the detailed steps involved in the ZTD for CGR 1000 series routers:
Step 1 The CGR 1000 routers are pre-configured with a unique IDevID certificate. Uplink network credentials (cellular, Ethernet, etc.), the WPAN SSID, and the address/port of the tunnel provisioning service in the FND are configured by the IOK at the time of router ZTD staging.
Step 2 The CGR 1000 series routers are deployed on-site so that they can join the uplink network(s).
Step 3 Simple Certificate Enrollment Protocol (SCEP) phase—The CGR 1000 communicates with the Registration Authority to procure the utility-signed LDevID certificate through SCEP. Each CGR permitted to enroll should have a valid entry in the Active Directory or the FreeRADIUS AAA database. Each entry must have the CGR's serialNumber as the username and a user-defined string (the default is cisco) as the password. FreeRADIUS must be configured to return the RADIUS attribute cisco-av-pair=pki:cert-application=all on successful authorization. This RADIUS attribute permits the Registration Authority's PKI infrastructure to grant requests by the SCEP requestor for any application.
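The authorization decision described above can be mocked in a few lines: the username is the CGR serial number, the password is cisco (or a user-defined string), and a match returns the cisco-av-pair attribute that lets the RA's PKI grant the SCEP request. The dict-backed store and the serial numbers below are hypothetical stand-ins for AD/FreeRADIUS, purely for illustration:

```python
# Mock of the AAA authorization logic for SCEP enrollment. The serial
# numbers and the dict-backed store are hypothetical; real deployments
# query FreeRADIUS backed by its local DB or Active Directory.
from typing import Optional

AAA_DB = {"JAF1623BQHP": "cisco", "JAF1702ABCD": "cisco"}

def authorize_scep(serial: str, password: str) -> Optional[dict]:
    """Return the RADIUS attributes on success, or None (Access-Reject)."""
    if AAA_DB.get(serial) != password:
        return None
    return {"cisco-av-pair": "pki:cert-application=all"}

assert authorize_scep("JAF1623BQHP", "cisco") == {"cisco-av-pair": "pki:cert-application=all"}
assert authorize_scep("JAF1623BQHP", "wrong") is None
```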
The SCEP process is controlled by an embedded event manager (EEM) policy in the firmware, written as a Tcl script. It is triggered by one of the following three events:
• Periodic (600 seconds by default)
• Certificate Enrollment completion
• Manually triggered by executing event manager run rm_ztd_scep.tcl
Once the certificates have been successfully enrolled the script activates a CGNA profile to initiate the
next stage of ZTD.
[Figure 3-15: SCEP enrollment — (1) the CGR 1000 presents its IDevID to the RA, (2-3) FreeRADIUS checks the AD entry (CGR serial number, password "cisco"), (4) the RA forwards the SCEP request to the RSA CA, (5-6) the RSA CA issues the LDevID, which the RA relays to the CGR]
The steps involved in the SCEP process are as follows (see Figure 3-15):
Step 1 The CGR 1000 series router is factory configured with the IDevID X.509 RSA certificate. The CGR
sends an SCEP request to the Registration Authority (RA) with its IDevID.
Step 2 The Registration Authority forwards the request to the FreeRADIUS server with the CGR’s serial
number as the username and the password cisco.
Step 3 The FreeRADIUS server checks the local/Active Directory for a corresponding entry.
Step 4 If a valid entry is present, the FreeRADIUS server passes the authorization message to the Registration
Authority, which forwards the SCEP request to the RSA based Certificate Authority.
Step 5 The RSA based Certificate Authority issues an LDevID X.509 RSA certificate and sends it to the
Registration Authority.
Step 6 The Registration Authority relays the LDevID certificate to the CGR.
Step 7 Tunnel provisioning phase (see Figure 3-16)—The CGR 1000 Series router contacts the TPS with a tunnel-provisioning request using HTTPS on port 9120. The TPS forwards the request to the FND on port 9120. The FND sends the tunnel configuration to be deployed on the CGR to the TPS using port 9122, and configures the HER with the necessary tunnel configuration using NETCONF. The TPS forwards the tunnel configuration to the CGR using port 8443, and the CGR configures itself with the obtained tunnel configuration. A tunnel is then established between the HER and the CGR, after which the CGR can communicate directly with the HER.
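The port choreography of this phase can be captured as data, making the sequence easy to check at a glance. The directions and ports are taken from the text; the tuple format is illustrative, and the NETCONF port 830 is our assumption (the standard NETCONF-over-SSH port, not stated in the text):

```python
# The tunnel-provisioning message flow as (source, destination,
# protocol, port) tuples. Port 830 for NETCONF is an assumed default;
# the other ports come from the procedure described above.
TUNNEL_PROVISIONING_FLOW = [
    ("CGR", "TPS", "HTTPS", 9120),    # tunnel-provisioning request
    ("TPS", "FND", "HTTPS", 9120),    # request forwarded to FND
    ("FND", "TPS", "HTTPS", 9122),    # tunnel configuration for the CGR
    ("FND", "HER", "NETCONF", 830),   # HER-side tunnel configuration (port assumed)
    ("TPS", "CGR", "HTTPS", 8443),    # configuration delivered to the CGR
]

ports = [port for (_src, _dst, _proto, port) in TUNNEL_PROVISIONING_FLOW]
assert ports == [9120, 9120, 9122, 830, 8443]
```

Note that the CGR only ever talks to the TPS during this phase; direct CGR-to-FND communication begins after the tunnel is up.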
[Figure 3-16: Tunnel provisioning — (1) CGNA on the CGR sends the tunnel-provisioning request to the TPS, (2-3) the TPS obtains the information of the HER corresponding to the CGR, (4) the tunnel parameters are communicated to the CGR, (5) the IPsec tunnel between the HER and the CGR is established and the CGR can communicate with the FND directly; IOK VMs span the DMZ and data center subnets]
Step 8 The CGR 1000 Series router opens a mutual-authentication HTTPS connection to the registration service
in FND and sends discovery information.
Figure 3-17 Meter Registration and Data Flow (once the meters are authenticated, they can
communicate with the FND directly; responses from the meters have any QoS policy applied at the
CGR 1000, are encrypted, and are sent upstream over the IPsec tunnel to the virtual machines within
the IOK)
FAN—Security
Figure 3-18 shows an overview of security management in the FAN.
Figure 3-18 Security Management in the FAN (IP security services in the EOC, including the FND,
AAA, DNS, DHCPv6, NTP, directory services, firewall, SCEP, and Certificate Authority, protect
services such as the OMS, DMS, SCADA, scheduler, subscriber and historian data services, and grid
state data management over the public or private IP infrastructure)
Security across the layers is a critical aspect of the AMI architecture. Cisco Connected Grid security
solutions provide critical infrastructure-grade security to control access to critical utility assets, monitor
the network, mitigate threats, and protect grid facilities. The solutions enhance overall network
functionality while simultaneously making security easier and less costly to manage.
The following are some of the security principles governing the architecture:
• Prevent unauthorized users and devices from accessing the head-end systems.
• Protect the FAN from cyber attacks.
• Identify and address relevant security threats.
• Meet relevant regulatory requirements.
• Maximize visibility into the network environment, devices, and events.
• Apply security policies to both IPv4 and IPv6, since the FAN is dual stack.
• Prevent intruders from gaining access to field area router configuration or tampering with data.
• Contain malware proliferation that can impair the FAN.
• Segregate network traffic so that mission-critical data in a multi-services FAN is not
compromised.
• Assure QoS for critical data flows while policing possible denial-of-service (DoS) attack traffic.
• Monitor the FAN in real time for immediate response to threats.
• Provision network security services that allow utilities to efficiently deploy and manage FANs.
• Improve risk management and satisfy compliance and regulatory requirements such as NERC-CIP
with assessment, design, and deployment services.
There are four categories of security design topics, explained in detail in the following sections.
Access Control
Utility facilities, assets, and data should be secured with user authentication and access control. The
fundamental element of access control is to have strong identity mechanisms for all grid elements:
users, devices, and applications. It is equally important that both nodes involved in a communication
mutually authenticate each other for the communication to be considered secure.
Certificate-Based Authentication
Elliptic curve cryptography (ECC) is the cryptography algorithm of choice for smart meters, since it
is suitable for low-power and lossy networks. Hence, the smart meters must carry an ECDSA P-256
certificate.
To summarize, the RSA algorithm is used for authentication of CGRs and the ECC algorithm is used
for authentication of meters. It is recommended to install certificates with a lifetime of 5 years.
Table 3-5 illustrates the FAN devices that support RSA and ECC cryptography in the context of AMI.
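The per-device-class certificate policy described above can be sketched as a small lookup. The policy table, helper name, and the RSA key size are illustrative assumptions; only the algorithm split (RSA for CGRs, ECDSA P-256 for meters) and the 5-year lifetime come from the text.

```python
from datetime import datetime, timedelta

# Sketch of the certificate policy described above: RSA for CGRs
# (IDevID/LDevID), ECDSA P-256 for meters, 5-year recommended lifetime.
# The table and helper names are illustrative.

CERT_POLICY = {
    "cgr1000": {"algorithm": "RSA", "key_bits": 2048},       # key size is an assumption
    "smart_meter": {"algorithm": "ECDSA", "curve": "P-256"},
}

LIFETIME = timedelta(days=5 * 365)  # recommended 5-year certificate lifetime

def cert_request(device_class, not_before):
    """Build the parameters for a certificate request for a device class."""
    policy = CERT_POLICY[device_class]
    return {**policy, "not_before": not_before, "not_after": not_before + LIFETIME}

req = cert_request("smart_meter", datetime(2025, 1, 1))
print(req["algorithm"], req["curve"])  # ECDSA P-256
```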
CG Mesh Authentication
Figure 3-19 CG-Mesh Authentication Message Flow (IEEE 802.1X EAP-TLS exchange between the
supplicant meter, the Split Authenticator relay, the CGR 1000 acting as Authenticator, and the AAA
server backed by the Certificate Authority and Active Directory; EAPOL frames from the supplicant
are relayed over UDP as EAPoUDP, X.509 certificates are exchanged during the EAP-TLS handshake,
and both the client and the AAA server derive the PMK)
CG-Mesh WPAN Network Access Control (WNAC) authenticates a node before the node gets an IPv6
address. CG-Mesh WNAC uses standard, widely-deployed security protocols that support Network
Access Control, in particular IEEE 802.1X using EAP-TLS, to perform mutual authentication between
a joining Low-Power and Lossy Network (LLN) device and an AAA server. In addition, CG-Mesh
uses the secure key management mechanisms introduced in IEEE 802.11i to allow the CGR 1000 to
securely manage the link keys within a PAN for all CG-Mesh devices.
LLNs are typically composed of multiple hops, and CG-Mesh supports EAPOL over multi-hop
networks. In particular, the Supplicant (LLN device) might not be within direct link connectivity of the
Authenticator (CGR 1000). CG-Mesh uses the Split Authenticator as a communication relay for the
Authenticator. All devices that have successfully joined the network also serve as Split Authenticators,
accepting EAPOL frames from devices that are attempting to join the network. Because CG-Mesh
performs IP-layer routing, the Split Authenticator relays EAPOL frames between a joining device and
an Authenticator using UDP. By introducing a Split Authenticator, the authentication and key
management protocol is identical from the LLN device's perspective, whether it is a single hop from
the CGR 1000 or multiple hops away.
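The relay behavior described above can be sketched as follows. The class and method names, payloads, and the UDP port are illustrative; the point is only that an already-joined node wraps link-local EAPOL frames and forwards them to the CGR over routed UDP.

```python
# Sketch of the Split Authenticator relay: an already-joined node accepts
# link-local EAPOL frames from a joining neighbor and relays them to the
# CGR 1000 (Authenticator) over UDP, so the exchange looks the same whether
# the supplicant is one hop or many hops away. All names are illustrative.

class Authenticator:
    """Stands in for the CGR 1000; sees the same exchange at any hop count."""
    def on_eapou(self, datagram):
        return {"to": datagram["supplicant"], "payload": b"EAP-Request/Identity"}

class SplitAuthenticator:
    def __init__(self, authenticator):
        self.authenticator = authenticator

    def on_eapol_from_neighbor(self, supplicant_id, eapol_frame):
        # Wrap the link-local EAPOL frame and forward it over IP routing.
        datagram = {"supplicant": supplicant_id, "payload": eapol_frame}
        return self.authenticator.on_eapou(datagram)

cgr = Authenticator()
relay = SplitAuthenticator(cgr)
reply = relay.on_eapol_from_neighbor("meter-42", b"EAPOL-Start")
print(reply["payload"])
```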
The CGR 1000 and CG-Mesh devices use the IEEE 802.11 key hierarchy in persistent state to minimize
the overhead of maintaining and distributing group keys. In particular, an LLN device first checks if it
has a valid Group Temporal Key (GTK) by verifying the key with one of its neighbors. If the GTK is
valid, the node can begin communicating in the network immediately. Otherwise, the device then checks
if it has a valid Pairwise Transient Key (PTK) with the CGR 1000. The PTK is used to securely distribute
the GTK. The same handshake messages might be used to refresh the GTK. If the PTK is valid, the CGR
1000 initiates a two-way handshake to communicate the current GTK. Otherwise, the device checks if it
has a valid Pairwise Master Key (PMK) with the CGR 1000. If the PMK is valid, the CGR 1000 initiates
a two-way handshake to establish a new PTK and communicate the current GTK. Otherwise, the device
will request a full EAP-TLS authentication exchange. This hierarchical decision process minimizes the
security overhead in the normal case, where devices might migrate from network-to-network due to
environmental changes or network formation after a power outage.
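The hierarchical key-check described above is a simple decision cascade. A minimal sketch, with illustrative function and return-string names:

```python
# Sketch of the GTK -> PTK -> PMK decision hierarchy described above.
# Given which keys a (re)joining node still holds, return the cheapest action.

def join_action(has_valid_gtk, has_valid_ptk, has_valid_pmk):
    if has_valid_gtk:
        return "communicate immediately"
    if has_valid_ptk:
        return "two-way handshake: CGR delivers current GTK"
    if has_valid_pmk:
        return "two-way handshake: establish new PTK, deliver current GTK"
    return "full EAP-TLS authentication"

# A node rejoining after a brief outage usually still holds a valid GTK,
# so the common case costs nothing:
print(join_action(True, True, True))     # communicate immediately
print(join_action(False, False, False))  # full EAP-TLS authentication
```

This mirrors why the overhead stays minimal in the normal case: the full EAP-TLS exchange is only the last resort.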
The CG-Mesh meter must go through the following five stages of authentication before it connects with
the CGR 1000:
• Stage 1: Key information exchange.
• Stage 2: 8021X/EAP-TLS authentication (ECC cipher suite certificate).
• Stage 3: 802.11i four-way handshake to establish a Pairwise Temporal Key (PTK) between a device
and a CGR -Pairwise Master Key (PMK) confirmation, Pairwise Transient Key (PTK) derivation,
and Group Temporal Key (GTK) distribution.
• Stage 4: Group key handshake.
• Stage 5: Secure data communication.
A compromised node is one where the device can no longer be trusted by the network. To evict
compromised nodes from a network, the CGR must communicate a new Group Temporal Key (GTK) to
all nodes in the PAN except those being evicted. The new GTK has a valid lifetime that begins
immediately. After the new GTK is distributed to all allowed nodes, the CGR invalidates the old GTK.
After the old GTK is invalidated, those nodes that did not receive the new GTK can no longer participate
in the network and are considered evicted.
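The eviction mechanism above amounts to a selective group rekey. A minimal sketch, assuming an in-memory keystore and a 128-bit GTK (the data structures and key length are illustrative):

```python
import os

# Sketch of node eviction via group rekey, as described above: distribute a
# fresh GTK to every PAN node except the evicted set, then invalidate the
# old key so evicted nodes can no longer participate.

def evict(pan_nodes, evicted, keystore):
    new_gtk = os.urandom(16)           # new group key, lifetime begins immediately
    for node in pan_nodes - evicted:
        keystore[node] = new_gtk       # delivered securely under each node's PTK
    keystore["old_gtk_valid"] = False  # old GTK invalidated after distribution
    return new_gtk

keystore = {}
gtk = evict({"m1", "m2", "m3"}, {"m3"}, keystore)
print("m3" in keystore)  # False: the evicted node never receives the new GTK
```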
One of the critical security requirements in the FAN is to ensure data integrity and confidentiality for
data from smart meters and distribution automation devices when it traverses any public or private
network. Data confidentiality uses encryption mechanisms available at various layers of the
communication stack. For example, an IPv6 node in the last mile, namely, a smart meter, can encrypt
data using Advanced Encryption Standard (AES) at the following layers:
• Layer 2 (IEEE 802.15.4g or IEEE P1901.2)
• Layer 3 (IP Security [IPsec])
IP Tunnels
If AMI traffic traverses a public WAN of any kind, data should be encrypted with standards-based IPsec.
This approach is advisable even if the WAN backhaul is a private network. A site-to-site IPsec VPN can
be built between the FAR and the WAN aggregation router in the control center. The Cisco Connected
Grid solution implements a sophisticated key generation and exchange mechanism for both link-layer
and network-layer encryption. This significantly simplifies cryptographic key management and ensures
that the hub-and-spoke encryption domain not only scales across thousands of field area routers but also
across millions of meters and grid endpoints.
IP tunnels are a key capability for all FAN use cases forwarding various traffic types over the backhaul
WAN infrastructure. Various tunneling techniques may be used, but it is important to evaluate each
technique's OS support, performance, and scalability on the CGR 1000 and head-end router platforms.
The following are tunneling considerations:
• IPsec tunnel—To protect the data integrity of all traffic over the WAN. This could be an IPv4 IPsec
or IPv6 IPsec tunnel in case of a WAN infrastructure that supports IPv4 as well as IPv6.
• IPv6 over GRE within an IPv4 IPsec tunnel—To transport the AMI IPv6 meter traffic over a WAN
infrastructure that does not support native IPv6 traffic. A network configuration with an outer IPv4
IPsec tunnel carrying an inner IPv6-over-IPv4 GRE tunnel should be used.
IOK orchestration facilitates the latter with the implementation of FlexVPN tunnels.
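The resulting header nesting for the second option can be sketched as a wrapper chain, outermost header first (the tuple representation is illustrative; field contents are omitted):

```python
# Sketch of IPv6-over-GRE inside an IPv4 IPsec tunnel, outermost first.

def encapsulate(ipv6_payload):
    gre = ("GRE", ("IPv6", ipv6_payload))  # IPv6 carried in an IPv4 GRE tunnel
    esp = ("IPsec ESP", gre)               # GRE packet protected by IPsec
    return ("IPv4", esp)                   # outer IPv4 delivery header

def header_order(pkt):
    """Walk the wrapper chain and list the headers outermost-first."""
    acc = []
    while isinstance(pkt, tuple):
        acc.append(pkt[0])
        pkt = pkt[1]
    return acc

print(header_order(encapsulate(b"meter data")))
# ['IPv4', 'IPsec ESP', 'GRE', 'IPv6']
```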
Figure 3-20 shows a tunnel between the CGR and the HER.
Figure 3-20 Tunnel between the CGR and the HER (an IPsec tunnel from the CGR 1000 to the
head-end router, a CSR 1000v, carrying an IPv6 over IPv4 GRE tunnel)
Figure 3-21 shows the structure of the packet header transmitted through the VPN Tunnel.
Figure 3-21 Structure of the Packet Header Transmitted through the VPN Tunnel
IPv4 header | IPsec header | GRE header | IPv6 header
FlexVPN
FlexVPN is a flexible and scalable Virtual Private Network (VPN) solution based on IPsec and IKEv2.
To secure meter data communication with the head-end across the WAN, FlexVPN is recommended. The
IoT FND establishes FlexVPN tunnels between the head-end routers and the CGRs as a part of the ZTD
process.
FlexVPN integrates various topologies and VPN functions under one framework. The Static Virtual
Tunnel Interface (SVTI) VPN model that the CGR 1000 used previously required explicit management
of tunnel endpoints and associated routes, which is not scalable. FlexVPN simplifies the deployment of
VPNs by providing a unified VPN framework that is compatible with legacy VPN technologies.
The following are some of the benefits of FlexVPN:
• Allows use of a single tunnel for both IPv4 and IPv6, when the medium supports it.
• Supports NAT/PAT traversal.
• Supports QoS in both directions—hub-to-spoke and spoke-to-hub.
• Supports Virtual Routing and Forwarding (VRF).
• Reduces control plane traffic on costly links, with support for tuning of parameters.
• IKEv2 requires fewer round trips in a negotiation than IKEv1: two round trips versus five for a
basic exchange.
• Has built-in dead peer detection (DPD).
• Has built-in configuration payload and user authentication mode.
• Has built-in NAT traversal (NAT-T). IKEv2 uses ports 500 and 4500 for NAT-T.
• Improved re-keying and collision handling.
• A single security association (SA) can protect multiple subnets, which improves scalability, with
support for Multi-SA DVTI on the hub.
• Asymmetric authentication in site-to-site VPNs, where each side of a tunnel can have different
preshared keys, different certificates, or one side a key and the other side a certificate.
In this solution, IPsec is configured in tunnel mode.
In the FlexVPN VPN model, the head-end router acts as the FlexVPN hub and CGRs act as the FlexVPN
spokes. The tunnel interfaces on the CGR acquire their IP addresses from address pools configured
during IOK installation. These addresses only have local significance between the HER and the CGR.
Since the CGR’s tunnel addresses are both dynamic and private to the HER, NMS must address the CGRs
by their loopback interface in this network architecture. Conversely, the CGR sources its traffic using its
loopback interface.
Before the FlexVPN tunnel is established, the CGR can only communicate to the HER in the head-end
network. This is done over the WAN backhaul via a low priority default route. During FlexVPN
handshake, route information is exchanged between the HER and the CGR. The CGR learns the
head-end routes (IPv4 and IPv6) through FlexVPN. The head-end router learns the neighborhood subnet
information as an external route redistributed into the OSPF domain and as reachable through the tunnel
interface.
Note It is very important that the PAN associated with the CGR is defined prior to the FlexVPN
handshake, as subsequent PAN configuration changes cannot be exchanged.
The IOK includes five head-end routers and provides a load balancing solution; the number of HERs
enabled can be configured by the user. One of them functions as the master. The load balancing solution
for IKEv2 redirect requests treats a single HSRP group of IKEv2 gateways (HERs) within the LAN
as one cluster. The solution does the following:
• Runs HSRP to elect a master from among the gateways of the HSRP group or cluster. The Virtual
IP address (VIP) of the HSRP group does not change across elections, so no configuration change
is required at the remote clients.
• All other gateways within the same HSRP group periodically send load updates to the master.
• The IKEv2 redirect API queries the load-balancing solution for the least-loaded IKEv2 gateway
(within the same HSRP group), to which the IKEv2 client is then redirected.
Figure 3-22 shows the load balancing of the FlexVPN tunnels.
Figure 3-22 Load Balancing of the FlexVPN Tunnels (HER cluster with VIP 10.1.1.100):
1. The active router, or the master HER, continuously receives load information from the other HERs.
2. CGR 1 connects to VIP 10.1.1.100. The master HER checks the load table and redirects it to HER 1.
3. Similarly, CGR 2 connects to VIP 10.1.1.100. The master HER checks the load table and redirects
it to HER 2.
4. If the master HER goes down, HSRP elects HER 1 or HER 2 as the active router, which then
maintains the load information and redirects clients accordingly.
The head-end routers are pre-configured to support FlexVPN, hence no additional configuration on the
HER is required to create the tunnel. Configurations only need to be made on the CGR and this is done
as a part of the ZTD process. Registration of the CGR does not occur until the IPsec tunnel is
successfully brought up.
Data Segmentation
A simple but powerful network security technique is to logically separate different functional elements
that should never be communicating with each other. For example, in the distribution grid, smart meters
should not be communicating to Distribution Automation devices and vice versa. Similarly, traffic
originating from field technicians should be logically separated from AMI and DA traffic. The Cisco
Connected Grid security architecture supports tools such as VLANs and Generic Routing Encapsulation
(GRE) to achieve network segmentation. To build on top of that, access lists and firewall features can be
configured on field area routers to filter and control access in the distribution and substation part of the
grid.
VLAN Segmentation
In order to segregate traffic, the VLAN design shown in Table 3-6 should be used.
VLAN ID                                            Description
DMZ subnet VLAN (VLAN 30)                          For DMZ components within the IOK.
Data center subnet VLAN (VLAN 20)                  For data center components in the IOK.
Collection Engine/AMI operations VLAN (VLAN 40)    For application head-end components.
Utility data center VLAN (VLAN 50)                 For components residing in the utility data center.
Black hole VLAN (VLAN 90)                          All unused ports are assigned to this VLAN as a
                                                   security measure.
Figure 3-1 on page 3-2, which illustrates the FAN system topology, shows VLAN segmentation.
Firewall
All traffic originating from the FAN is aggregated at the control-center tier and needs to pass through
a high-performance firewall, especially if it has traversed a public network. This firewall
should implement zone-based policies to detect and mitigate threats. The Cisco ASA with FirePOWER
Services, which brings distinctive, threat-focused, next-generation security services to the ASA 5585-X
firewall products, is recommended for the solution. It provides comprehensive protection from known
and advanced threats, including protection against targeted and persistent malware attacks.
The firewall must be configured in transparent mode. The interface connecting to the IOK must be
configured as the inside interface and the interface connecting to the WAN link must be configured as
outside, as shown in Figure 3-23.
Figure 3-23 Firewall Placement (the ASA firewall connects to the Cisco IOK on vmnic 1 via its inside
interface and to the WAN link toward the CGR via its outside interface)
Firewalls are best deployed in pairs to avoid a single point of failure in the network.
The guidelines for configuring the firewall are as follows:
• NAN to head-end and vice versa: ACLs should be configured on the ASA to permit traffic between
the CGRs and the head-end router.
• Security levels are defined as follows:
– NAN-facing interface (outside): 0
– Head-end-facing interface (inside): 100
Based on Table 3-7, ACLs may be configured on the firewall.
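The effect of these security levels follows the usual ASA convention: traffic from a higher-security interface to a lower one is permitted by default, while the reverse direction needs an explicit ACL entry. A minimal sketch of that rule (the function and table are illustrative, not ASA configuration):

```python
# Sketch of ASA-style security-level behavior with the levels given above:
# outside = 0, inside = 100.

LEVELS = {"inside": 100, "outside": 0}

def permitted(src_if, dst_if, acl_permits=False):
    if LEVELS[src_if] > LEVELS[dst_if]:
        return True      # high -> low is allowed by default
    return acl_permits   # low -> high requires an explicit ACL permit

print(permitted("inside", "outside"))                    # True
print(permitted("outside", "inside"))                    # False
print(permitted("outside", "inside", acl_permits=True))  # True
```

This is why the NAN-to-head-end direction (outside to inside) is the one that must be explicitly opened for AMI traffic.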
NERC-CIP Compliance
NERC-CIP compliance is also considered, as the FAN may fall under regulation as the smart grid
evolves. The NERC-CIP requirements, which take a holistic defense-in-depth approach, are considered
good security practices applicable to the smart grid beyond today's regulated electricity generation and
transmission.
The AMI endpoints, namely the smart meters, do not fall within the classification of High or Medium
impact assets. However, physical security of the endpoints remains a concern. Physical compromise may
lead to tampering with meters, and the cryptographic key may be retrieved from the microprocessor.
becomes especially dangerous if several meters share the same key. This can be mitigated by device
authentication for meters, encryption of key exchange, VPN from aggregation points, etc. Cisco’s
implementation of Zero Touch deployment capability and security from the meter communications to
the Meter Data Management System provides an additional layer to the application layer security
implemented by the meter vendor.
A basic tenet of security design is to ensure that devices, endpoints, and applications cannot be
compromised easily and are resistant to cyber attacks. With that goal in mind, the Cisco 1000 Series
Connected Grid Routers are built with tamper-resistant mechanical designs. The Cisco 1240 Connected
Grid Router (CGR 1240), which is an outdoor model, is equipped with a physical lock and key
mechanism. This makes it extremely difficult for any rogue entity to open or uninstall the device from
the pole-top mounting. The device generates NMS alerts if the router door or chassis is opened.
Additionally, each router motherboard is equipped with a dedicated security chip that provides the
following:
• Secure unique device identifier (802.1AR)
• Immutable identity and certifiable cryptography
• Entropy source with true randomization
• Memory protection and image signing and validation
• Tamper-proof secure storage of configuration and data
CGR 1000 series router images are digitally signed to validate the authenticity and integrity of the
software. For AMI deployments using the Cisco Connected Grid architecture, meters also have a
tamper-resistant design, generate an alert on tampering, and maintain local audit trails for all sensitive
events. Firmware images for meters are digitally signed. Similarly, to help ensure authenticity and
integrity of commands delivered from the AMI head-end system (HES) to meters, the commands are
digitally signed.
Further, the following is recommended:
• Shut down unused ports on the switches.
• Enable BPDU guard on the switches.
• Enable First Hop Security (FHS) on the FAN devices.
FAN—High Availability
IOK does not support high availability of head-end systems such as the IoT FND. However, high
availability is factored into the design at the network layer at key junctures and is cross-referenced below.
• PAN migration of endpoints in case of CGR failure (see NAN Network Connectivity, page 3-8)
• RPL is the choice of routing protocol, which supports low power and lossy networks (see NAN
Routing, page 3-16)
• Load balancing of HERs to support FlexVPN tunnels (see IP Tunnels, page 3-36)
Additionally, RAID (Redundant Array of Independent Discs) may be set up with ESXi to provide
resiliency and high availability for storage. Refer to VMware’s guide for VMware storage best practices
at:
• https://www.vmware.com/files/pdf/support/landing_pages/Virtual-Support-Day-Storage-Best-Practices-June-2012.pdf
Figure 3-24 Data Flow between Smart Meters and Collection Engine
(The figure shows the virtual machines within the Industrial Operations Kit: the TPS and RA cluster
in the DMZ subnet, and the HER cluster of CSR 1000v routers, the ESR 5921, FND with Oracle DB,
FreeRADIUS, and the RSA CA in the data center subnet. The IOK connects over the WAN and the
IPsec tunnel to the smart meters, and meter data flows to the Collection Engine.)
Step 1 The Cisco Industrial Operations Kit software bundle is installed on a server meeting the specified
requirements, such as the Cisco UCS C-460. All the components of the head-end should be brought up
and the number of HERs to be enabled can be decided by the user at this stage. Each HER can support
up to 300 FARs. The Cisco IOK is deployed in the Energy Operations Center of the utility facility.
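The HER sizing in Step 1 reduces to simple arithmetic: with up to 300 FARs per HER and five HERs in the IOK, a deployment's HER count can be sketched as follows (the helper name and error message are illustrative):

```python
import math

# Sketch of HER sizing from the figures above: each HER supports up to
# 300 FARs, and the IOK includes five HERs.

MAX_FARS_PER_HER = 300
MAX_HERS = 5

def hers_needed(far_count):
    n = math.ceil(far_count / MAX_FARS_PER_HER)
    if n > MAX_HERS:
        raise ValueError("deployment exceeds the IOK capacity of 1500 FARs")
    return n

print(hers_needed(750))  # 3 HERs for 750 field area routers
```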
Step 2 The dual-stack CGR 1000 series routers to be deployed as field area routers are bootstrapped by the IOK
by connecting them to the UCS server through their router console(s). The CGRs must be provisioned
with the WPAN module. The bootstrap configuration is done using the ZTD staging feature of the IOK
after providing a few parameters as prompted by the GUI.
Step 3 The data center components, such as the ECC based Certificate Authority, Active Directory, and NTP
are installed and brought up or, if they already exist in the utility data center, connectivity to the
components from the IOK’s head-end router should be ensured.
Step 4 The firewall in the head-end is configured in the transparent mode in order to support multicast traffic
and appropriate ports are enabled to allow AMI traffic.
Step 5 The Collection Engine is installed in the energy operations center.
Step 6 The CGRs are deployed in the field and are typically pole mounted. The IP address of the CGR
interfaces is configured at this stage.
Step 7 The CGR attempts to enroll its utility-provided LDevID certificate via the SCEP process. This process is
automatically triggered by periodically attempting communication with the Registration Authority. It
can also be manually triggered as described in Zero-Touch Deployment Staging by IOK, page 3-26.
Step 8 Once the CGR procures the utility-provided LDevID certificate, it proceeds to attempt communication
with the tunnel provisioning server in order to establish an IPsec-based VPN tunnel with the head-end
system. This process is detailed in Zero-Touch Deployment Staging by IOK, page 3-26. The tunnel is an
IPv4 IPsec tunnel, within which is an IPv6 over IPv4 GRE tunnel to help transmit IPv6 data. Once the
tunnel to the head-end router is successfully established, the CGR can communicate with the head-end
systems provided the firewall is appropriately configured to pass AMI traffic.
Step 9 The smart meters can now be deployed in the field, with RF planning considerations, using tools such
as the ATDI ICS Designer or ASSET by TEOCO. The meters form an RF-based connected grid mesh and
join the CGR by creating an RPL tree. The meters are authenticated by the process outlined in CG