Revision History

Revision   Comments
1.2        Document prepared for release 1.2 of the Intel Open Network Platform Server.
1.1.1      Changed two links to the following:
           https://01.org/sites/default/files/page/vbng-scripts.tgz
           https://01.org/sites/default/files/page/qat_patches_netkeyshim.zip
1.1
1.0
Contents

1.0 Audience and Purpose
2.0 Summary
3.0 Hardware Components
4.0 Software Versions
5.2.1 OpenStack (Juno)
5.4 vIPS
6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
6.2 Using OpenDaylight
6.3 Border Network Gateway
Appendix A
Appendix B
Appendix C Glossary
Appendix D References
1.0 Audience and Purpose
The primary audiences for this document are architects and engineers implementing the Intel Open
Network Platform Server Reference Architecture using Open Source software. Software ingredients
include:
DevStack*
OpenStack*
OpenDaylight*
Data Plane Development Kit (DPDK)*
Intel DPDK Accelerated vSwitch
Open vSwitch*
Fedora 20*
This document provides a guide for integration and performance characterization using the Intel Open
Network Platform Server (Intel ONP Server). Content includes high-level architecture, setup and
configuration procedures, integration learnings, and a set of baseline performance data. This
information is intended to help architects and engineers evaluate Network Function Virtualization (NFV)
and Software Defined Network (SDN) solutions.
An understanding of system performance is required to develop solutions that meet the demanding
requirements of the telecom industry and transform telecom networks. Workload examples are
described and are useful for evaluating other NFV workloads.
Ingredient versions, integration procedures, configuration parameters, and test methodologies all
influence performance. The performance data provided here does not represent best possible
performance, but rather provides a baseline of what is possible using out-of-box open source software
ingredients.
The purpose of documenting configurations is not to imply any preferred methods. However, a baseline configuration of well-tested procedures can help achieve optimal system performance when developing an NFV/SDN solution.
2.0 Summary
The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization
with the latest Intel Architecture Communications Platform.
This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFV/SDN solutions using the Intel Open Network Platform ingredients.
Platform hardware is based on an Intel Xeon DP server with the following:
Intel Xeon Processor E5-2697 v3
Intel 82599 10 GbE Controller
The host operating system is Fedora* 20 with Qemu-kvm virtualization technology. Software
ingredients include Data Plane Development Kit (DPDK), Open vSwitch, Intel DPDK Accelerated
vSwitch, OpenStack, and OpenDaylight.
Figure 2-1
Figure 2-2 shows a generic SDN/NFV setup. In this configuration, Orchestrator and Controller
(management and control plane) and compute node (data plane) run on different server nodes. Note
that many variations of this setup can be deployed.
Figure 2-2
The test cases described in this document were designed to illustrate certain baseline performance and
functionality using the specified ingredients, configurations, and specific test methodology. A simple
network topology was used, as shown in Figure 2-2.
Test cases are designed to:
Baseline packet processing (data plane) performance with host and VM configurations.
Verify communication between controller and compute nodes.
Validate basic controller functionality.
2.1
The following examples of network services are included as use-cases that have been tested with the
Intel Open Network Platform Server Reference Architecture.
2.1.1
Suricata is a high performance Network IDS, IPS, and Network Security Monitoring engine developed
by the OISF, its supporting vendors, and the community.
http://suricata-ids.org/
2.1.2
Intel Data Plane Performance Demonstrators Border Network Gateway (BNG) using DPDK.
https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013
A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server
(BRAS) and routes traffic to and from broadband remote access devices, such as digital subscriber line
access multiplexers (DSLAM). This network function is included as an example of a workload that can
be virtualized on the Intel ONP Server.
Additional information on the performance characterization of this vBNG implementation can be found
at:
http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf
Refer to Section 6.3, Border Network Gateway, for information on setting up and testing the vBNG application with the Intel DPDK Accelerated vSwitch, or to Appendix B for more information on running the BNG as an appliance.
3.0 Hardware Components
Table 3-1

Item           Description                              Notes
Platform       Intel
Processors
Cores                                                   10 physical cores/CPU
Memory
NICs (82599)
BIOS           SE5C600.86B.02.01.0002.082220131453      Release Date: 08/22/2013; BIOS Revision: 4.6

Table 3-2 (Hyper-threading enabled)

Item           Description                              Notes
Platform
Processors
Cores                                                   14 physical cores/CPU
Memory
NICs (82599)   2x Intel
BIOS
4.0 Software Versions
Table 4-1 Software Versions

Software Component               Function                                  Version/Configuration
Fedora 20 x86_64                 Host OS                                   3.15.6-200.fc20.x86_64
Qemu-kvm                         Virtualization technology                 1.7.1
Intel DPDK Accelerated vSwitch   vSwitch                                   v1.2.0, commit id 6210bb0a6139b20283de115f87aa7a381b04670f
Open vSwitch                     vSwitch                                   Commit id b35839f3855e3b812709c6ad1c9278f498aa9935
OpenStack (DevStack)             SDN Orchestrator                          https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e
OpenDaylight                     SDN Controller                            Helium-SR1
Suricata                         IPS application                           Package from Fedora 20
BNG DPPD                         Broadband Network Gateway DPDK            DPPD v013, https://01.org/intel-data-plane-performance-demonstrators/downloads
                                 Performance Demonstrator application
PktGen                           Software network packet generator         v2.7.7
4.1
Table 4-2 Software Ingredients

Fedora 20
  Location: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

Data Plane Development Kit (DPDK)
  Location: http://dpdk.org/git/dpdk

Intel DPDK Accelerated vSwitch (OVDK)
  Location: https://github.com/01org/dpdk-ovs.git
  Version: commit id 99213f3827bad956d74e2259d06844012ba287a4; v1.2.0, commit id 6210bb0a6139b20283de115f87aa7a381b04670f

Open vSwitch
  Location: https://github.com/openvswitch/ovs.git
  Version: commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack (DevStack), with patches for DevStack and Nova
  Location: https://github.com/openstack-dev/devstack.git
  Version: commit id d6f700db33aeab68916156a98971aef8cfa53a2e
  Comments: Three patches downloaded as one tarball; then follow the instructions to deploy the nodes.

OpenDaylight
  Location: http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz
  Comments: Two patches downloaded as one tarball; then follow the instructions to deploy.

Intel ONP Server Release 1.2 Script
  Location: https://download.01.org/packet-processing/ONPS1.2/onps_server_1_2.tar.gz

BNG DPPD (Broadband Network Gateway DPDK Performance Demonstrator)
  Location: https://01.org/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

PktGen (software network packet generator)
  Location: https://github.com/Pktgen/Pktgen-DPDK.git
  Version: commit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts
  Location: https://01.org/sites/default/files/page/vbng-scripts.tgz

Suricata
  Comments: Package from Fedora 20.
5.0
This section describes the installation and configuration instructions to prepare the controller and
compute nodes.
5.1
This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 20, although it should be relatively easy to apply this solutions guide to other Linux distributions.
5.1.1 BIOS Settings
Table 5-1 BIOS Settings

Setting          Controller Node   Compute Node
…                Enabled           Disabled
Processor C3     Disabled          Disabled
Processor C6     Disabled          Disabled
Intel VT-d       Disabled          Enabled (OpenStack NUMA Placement only)
…                Enabled           Disabled
MLC Streamer     Enabled           Enabled
…                Enabled           Enabled
…                Enabled           Enabled
…                Enabled           Enabled
…                Performance       Performance
…                Enabled           Off
…                Enabled           Enabled
5.1.2
Following are generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.
5.1.2.1
1. Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site:
http://fedoraproject.org/en/get-fedora#formats
or from direct URL:
http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso
2. Burn the ISO file to DVD and create an installation disk.
5.1.2.2 Fedora 20 Installation
Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the
following:
1. C Development Tool and Libraries
2. Development Tools
Also create a user stack and check the box Make this user administrator during the installation. The
user stack is used in OpenStack installation.
Note: Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; you'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time. When using Intel's scripts, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.1.
5.1.2.3
Some packages are not installed with the standard Fedora 20 installation, but are required by Intel
Open Network Platform Software (ONPS) components. These packages should be installed by the user.
git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster
python-cliff
ONPS supports Fedora kernel 3.15.6, which is newer than native Fedora 20 kernel 3.11.10. To upgrade
to 3.15.6, follow these steps:
1. Download kernel packages.
wget https://kojipkgs.fedoraproject.org//packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm
wget https://kojipkgs.fedoraproject.org//packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm
wget https://kojipkgs.fedoraproject.org//packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm
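The install step for the downloaded RPMs is not shown in this extract; a minimal sketch, assuming the three RPMs were downloaded into the current directory:

yum -y localinstall kernel-3.15.6-200.fc20.x86_64.rpm kernel-devel-3.15.6-200.fc20.x86_64.rpm kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm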
After installing the required packages, the operating system should be updated. By default, yum update upgrades to the latest kernel that Fedora supports. To keep kernel version 3.15.6, first exclude kernel packages in the yum configuration:
echo "exclude=kernel*" >> /etc/yum.conf
Then run the update:
yum update -y
After the update completes, the system needs to be rebooted.
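After the reboot, a quick check (not part of the original text) confirms the pinned kernel is running:

uname -r
# expected output: 3.15.6-200.fc20.x86_64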
5.1.2.4
For OpenStack, the following services were disabled: SELinux, firewalld, and NetworkManager. Run the following commands:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service
The following services should be enabled: ntp, sshd, and network. Run the following commands:
systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on
It is important to keep the time synchronized between all nodes, and it is also necessary to use a known NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove the default servers. The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:
sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
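To confirm that the node is synchronizing against the configured server (a quick check, not from the original document):

systemctl restart ntpd.service
ntpq -p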
5.2
This section describes the controller node setup. It is assumed that the user successfully followed the
operating system installation and configuration sections.
Note: Make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; you'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time.

5.2.1 OpenStack (Juno)
This section documents features and limitations that are supported with the Intel DPDK Accelerated
vSwitch and OpenStack Juno.
5.2.1.1 Network Requirements
General
At least two networks are required to build OpenStack infrastructure in a lab environment. One network
is used to connect all nodes for OpenStack management (management network), and the other one is
a private network, exclusively for an OpenStack internal connection (tenant network) between
instances (or virtual machines).
One additional network is required for Internet connectivity, as installing OpenStack requires pulling
packages from various sources/repositories on the Internet.
Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual
machines). In this case, an optional network can be used.
The assumption is that the target OpenStack infrastructure contains multiple nodes: one controller node and one or more compute nodes.
Network Configuration Example
The following is an example of how to configure networks for OpenStack infrastructure. The example
uses four network interfaces as follows:
ens2f1: For Internet network - Used to pull all necessary packages/patches from repositories on the
Internet; configured to obtain a DHCP address.
ens2f0: For Management network - Used to connect all nodes for OpenStack management;
configured to use network 10.11.0.0/16.
p1p1: For Tenant network - Used for OpenStack internal connections for virtual machines;
configured with no IP address.
p1p2: For Optional External network - Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is present only in the controller node, and only if an external network is configured; it is not needed on compute nodes.
Note that, among these interfaces, the interface for the tenant (virtual) network (p1p1 in this example) must be an 82599 port, because it is used by DPDK and the Intel DPDK Accelerated vSwitch. Also note that a static IP address should be used for the management network interface.
In Fedora 20, the network configuration files are located at:
/etc/sysconfig/network-scripts/
To configure a network on the host system, edit the following network configuration files:
ifcfg-ens2f1
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
ifcfg-ens2f0
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0
ifcfg-p1p1
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
ifcfg-p1p2
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
Note:
Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise, DPDK does not work when binding the driver during OpenStack Neutron installation.
Note:
10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. A static IP address is required on this subnet; 10.11.12.11 is just an example.
5.2.1.2 Storage Requirements
By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a single volume.
The following example shows how to use spare local disks, /dev/sdb and /dev/sdc, to form stack-volumes on a controller node by running the following commands:
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
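To confirm the volume group was created and spans both disks (a quick check, not part of the original procedure):

vgs stack-volumes
pvs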
5.2.1.3
General
DevStack is used to deploy OpenStack in this example. The following procedure uses an actual example
of an installation performed in an Intel test lab, consisting of one controller node (controller) and one
compute node (compute).
Controller Node Installation Procedures
The following example uses a host for controller node installation with the following:
Hostname: sdnlab-k01
Internet network IP address: Obtained from DHCP server
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
9. Install DevStack
cd /home/stack/devstack/
./stack.sh
10. For a successful installation, the following appears at the end of the screen output:
stack.sh completed in XXX seconds
where XXX is the number of seconds.
11. (Controller node only) Add physical port(s) to the bridge(s) created by the DevStack installation. The following example configures the two bridges: br-p1p1 (for the virtual network) and br-ex (for the external network).
sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2
12. Make sure the proper VLANs are created in the switch connected to physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in that switch.
5.3
This section describes how to complete the setup of the compute nodes. It is assumed that the user has
successfully completed the BIOS settings and operating system installation and configuration sections.
Note:
Please make sure to download and use the onps_server_1_2.tar.gz tarball. Start with the README file; you'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section, which saves time.
5.3.1 Host Configuration
5.3.1.1
General
Deploying OpenStack and Intel DPDK Accelerated vSwitch using DevStack on a compute node follows
the same procedures as on the controller node. Differences include:
Required services are nova compute, neutron agent, and Rabbit.
Intel DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent.
Compute Node Installation Example
The following example uses a host for compute node installation with the following:
Hostname: sdnlab-k02
Lab network IP address: Obtained from DHCP server
OpenStack Management IP address: 10.11.12.2
User/password: stack/stack
Note the following:
No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition,
hostname and IP address of the controller node should also be included. For example:
export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1
Differences in the local.conf file:
The service host is the controller, which also hosts the other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore, these should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:
SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1
The only OpenStack services required in compute nodes are messaging, nova compute, and
neutron agent, so the local.conf might look like:
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt
The user has the option to use ovdk or openvswitch for the neutron agent:
Q_AGENT=ovdk
or
Q_AGENT=openvswitch
Note: If both are specified in the same local.conf file, the latter one overwrites the previous one.

Note: For the OVDK and accelerated OVS huge pages setting, specify the number of huge pages to be allocated and the mounting point (default is /mnt/huge/).
OVDK_NUM_HUGEPAGES=8192
or
OVS_NUM_HUGEPAGES=8192
For this version, Intel uses specific versions for OVDK or Accelerated OVS from their respective
repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935
Binding the physical port to the bridge is through the following line in local.conf. For example,
to bind port p1p1 to bridge br-p1p1, use:
OVS_PHYSICAL_BRIDGE=br-p1p1
A sample local.conf file for compute node with ovdk agent follows:
# Compute node
[[local|localrc]]
FORCE=yes
MULTI_HOST=True
HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt
DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
A sample local.conf file for compute node with accelerated ovs agent follows.
# Compute node
#
[[local|localrc]]
FORCE=yes
MULTI_HOST=True
HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt
DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
5.4 vIPS
The vIPS used is Suricata, which should be installed in a VM as an RPM package, as previously described. To configure it to run in inline mode (IPS), use the following:
1. Turn on IP forwarding.
# sysctl -w net.ipv4.ip_forward=1
2. Mangle all traffic from one vPort to the other using a netfilter queue.
# iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
# iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
3. Have Suricata run in inline mode using the netfilter queue.
# suricata -c /etc/suricata/suricata.yaml -q 0
4. Enable ARP proxying.
# echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
# echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
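To confirm that traffic is actually being diverted to the queue (a quick check, not in the original text), watch the packet counters on the NFQUEUE rules while sending traffic:

# iptables -L FORWARD -v -n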
5.4.1
1. Turn on IP forwarding.
# sysctl -w net.ipv4.ip_forward=1
2. In the source, add the route to the sink.
# route add -net 192.168.200.0/24 eth1
3. At the sink, add the route to the source.
# route add -net 192.168.100.0/24 eth1
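A quick end-to-end check (the address is illustrative, assuming the sink owns 192.168.200.10 on its subnet): ping the sink from the source and confirm the traffic traverses the vIPS path:

# ping -c 4 192.168.200.10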
6.0
This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify their functionality.

Note: Currently, it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single-compute-node setup.
6.1
6.1.1
6.1.1.1 Default Settings
6.1.1.2 Custom Settings
The following examples describe how to create a custom VM image, flavor, and aggregate/availability
zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.
1. Create a credential file, admin-cred, for the admin user. The file contains the following lines:
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0/
2. Source admin-cred into the shell environment before creating the glance image, aggregate/availability zone, and flavor.
source admin-cred
3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by
OpenStack.
glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>
In the following example, the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images/ on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image).
glance image-create --name fedora-basic --is-public=true --container-format=bare
--disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2
4. Create host aggregate and availability zone:
First, find out the available hypervisors and then use the information for creating aggregate/
availability zone.
nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>
The following example creates an aggregate named aggr-g06 with one availability zone named
zone-g06, and the aggregate contains one hypervisor named sdnlab-g06.
nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06
5. Create a flavor. A flavor is a virtual hardware configuration for VMs; it defines, among other things, the number of virtual CPUs and the sizes of virtual memory and disk space.
The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU.
nova flavor-create onps-flavor 1001 1024 4 1
6.1.1.3 Example VM Deployment
The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for a demo tenant, using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.
1. Create a credential file, demo-cred, for a demo user. The file contains the following lines:
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0/
2. Source demo-cred into the shell environment before creating the tenant network and instance (VM).
source demo-cred
3. Create a network for tenant demo. Take the following steps:
a. Get the ID of tenant demo.
keystone tenant-list | grep -Fw demo
The following creates a network with a name of net-demo for tenant with ID
10618268adb64f17b266fd8fb83c960d:
neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo
b. Create a subnet.
neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet-name> <network-name> <net-ip-range>
The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for network net-demo.
neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d
--name sub-demo net-demo 192.168.2.0/24
4. Create an instance (VM) for tenant demo. Take the following steps:
a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance.
glance image-list
nova flavor-list
nova aggregate-list
neutron net-list
b. Launch an instance (VM) using information obtained from the previous step.
nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>
c. The new VM should be up and running in a few minutes (a CLI status check is shown after this list).
5. Log in to the OpenStack dashboard using the demo user credentials; click Instances under Project in the left pane, and the new VM should appear in the right pane. Click the instance name to open the Instance Details view, then click Console in the top menu to access the VM.
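As noted in step 4, the instance status can also be confirmed from the CLI before opening the dashboard (a quick check that is not part of the original steps; <instance-name> is whatever name was passed to nova boot):

nova list
nova show <instance-name>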
6.1.1.4 Local vIPS

Figure 6-1 Local vIPS
Configuration
1. OpenStack brings up the VMs and connects them to the vSwitch.
2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3
to a different one. VM2 has ports on both subnets.
3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).
Data Path (Numbers Matching Red Circles)
1. VM1 sends a flow to VM3 through the vSwitch.
2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).
3. The IPS receives the flow, inspects it and (if not malicious) sends it out through its second vPort.
4. The vSwitch forwards it to VM3.
6.1.1.5 Remote vIPS

Figure 6-2 Remote vIPS
Configuration
1. OpenStack brings up the VMs and connects them to the vSwitch.
2. The IP addresses of the VMs get configured using the DHCP server.
Data Path (Numbers Matching Red Circles)
1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.
2. The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2.
3. The vSwitch of compute node 2 forwards the flow to the first vHost port, where the traffic gets consumed by VM2 (the IPS).
4. The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its second vHost port into the vSwitch of compute node 2.
5. The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port
of the 82599 in compute node 1.
6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow
gets terminated.
6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a Virtual Function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.
6.1.2.1
To enable the previous features, follow these steps to configure the compute node:
1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries.
dmesg | grep -e IOMMU
Note: IOMMU can be enabled/disabled through a BIOS setting, under Advanced and then Processor.
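In addition to the BIOS setting, the kernel typically needs intel_iommu=on on its boot command line before VT-d pass-through works. A minimal sketch using the standard Fedora GRUB2 workflow (not part of the original text):

# add intel_iommu=on to the kernel command line and regenerate the GRUB config
sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="intel_iommu=on /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot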
6.1.2.2 DevStack Configurations
In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function, 10ed for the VF) can be obtained from the output of:
lspci -nn | grep 82599
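For illustration only (the PCI addresses are assumed, not taken from the original document; they vary by system), the output looks similar to:

08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb]
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed]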
On Controller node:
1. Edit the controller local.conf. The same local.conf file from Section 5.2.1.3 is used here, with the following additions:
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch
[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed
2. Run ./stack.sh
On Compute node:
1. Edit /opt/stack/nova/requirements.txt, add libvirt-python>=1.2.8.
echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt
2. Edit the compute local.conf for accelerated OVS. The same local.conf file from Section 5.3.1.1 is used here (a sketch of the typical SR-IOV additions follows).
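The compute-side additions themselves fall outside this extract. As a rough sketch only (assuming the standard Juno-era Nova PCI pass-through options; the device name p1p1 and network physnet1 come from the earlier sections), the additions typically whitelist the VF-capable port:

[[post-config|$NOVA_CONF]]
[DEFAULT]
# hypothetical addition: expose p1p1's VFs for PCI pass-through on physnet1
pci_passthrough_whitelist={"devname":"p1p1","physical_network":"physnet1"}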
6.1.2.3
1. After stacking succeeds on both the controller and compute nodes, verify that the PCI pass-through device(s) appear in the OpenStack database.
mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'
2. The output should show one or more PCI device entries similar to the following:
| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |
Access the VM from the OpenStack Horizon dashboard; the new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.
To verify network connectivity through a VF, set up two compute hosts and create a VM on each node. After the VMs obtain IP addresses, they should communicate with each other as on a normal network.
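A minimal connectivity check between the two VMs (the address 10.11.13.6 is purely illustrative; use the address the peer's VF actually obtained):

ping -c 4 10.11.13.6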
6.2 Using OpenDaylight
This section describes how to download, install, and set up an OpenDaylight controller.
6.2.1
Karaf might take a long time to start, and feature installs might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
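One way to configure the proxy (a sketch, not from the original document; the proxy host/port are placeholders, and the path assumes the Helium-SR1 distribution was unpacked to /opt/distribution-karaf-0.2.1-Helium-SR1) is to point Karaf's Maven resolver at a settings.xml that defines the proxy, then restart Karaf:

KARAF_HOME=/opt/distribution-karaf-0.2.1-Helium-SR1
# create a Maven settings file that defines the proxy (adjust host/port)
cat > $KARAF_HOME/etc/settings.xml <<'EOF'
<settings>
  <proxies>
    <proxy>
      <id>default</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
    </proxy>
  </proxies>
</settings>
EOF
# point pax-url-aether (Karaf's artifact resolver) at that settings file
echo "org.ops4j.pax.url.mvn.settings=$KARAF_HOME/etc/settings.xml" >> $KARAF_HOME/etc/org.ops4j.pax.url.mvn.cfg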
6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway (BNG) on a compute node that is prepared as described in Section 5.1 and Section 5.3. The example interface names from those sections are kept here as well. For simplicity, the BNG uses the handle_none configuration mode, which makes it work as an L2 forwarding engine. The BNG is more capable than this; users interested in exploring its other capabilities should read https://01.org/intel-data-plane-performance-demonstrators/quick-overview.
The setup to test the functionality of the vBNG follows:
6.3.1
11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh.
12. Download the BNG package.
wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip
13. Extract DPPD BNG sources.
unzip dppd-bng-v013.zip
14. Build BNG DPPD application.
yum -y install ncurses-devel
cd dppd-BNG-v013
make
15. Refer to Section 6.3.3, Extra Preparations on the Compute Node, before running the BNG application in the VM inside the compute node.
16. Make sure that the application starts.
./build/dppd -f config/handle_none.cfg
The handle_none configuration passes all traffic straight through between ports, which is essentially similar to the L2 forwarding test. The config directory contains additional, more complex BNG configurations and Pktgen scripts. Additional BNG-specific workloads can be found in the dppd-BNG-v013/pktgen-scripts directory.
Following is a sample graphic of the BNG running in a VM with 2 ports:
6.3.2
The back-to-back host can be any Intel Xeon processor-based system, or any compute node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git/.
1. In the git directory get the source from Github.
git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK
2. An extra package must be installed for Pktgen to compile correctly.
yum -y install libpcap-devel
Pktgen comes with its own distribution of the DPDK sources, and this bundled version of DPDK must be used: it contains some WindRiver*-specific helper libraries, which Pktgen depends on, that are not in the default DPDK distribution.
3. The $RTE_TARGET variable must be set to a specific value. Otherwise, these libraries will not build.
cd
vi .bashrc
Add the following three lines to the end:
export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK
4. Re-login, or execute the following command:
. .bashrc
5. Build the basic DPDK libraries and extra helpers.
cd $RTE_SDK
make install T=$RTE_TARGET
6. Build Pktgen.
cd examples/pktgen
make
7. Adapt the dpdk_nic_bind.py script usage according to the actual NICs in use, so that both interfaces are bound to igb_uio and DPDK can use them. Check the current binding with the command that follows:
tools/dpdk_nic_bind.py --status
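For example, to bind both ports (the PCI addresses 08:00.0 and 08:00.1 are illustrative; use the addresses reported by --status):

tools/dpdk_nic_bind.py --bind=igb_uio 08:00.0 08:00.1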
8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.
9. Now run the script as root, after the compute node has been set up as in Section 6.3.3, the BNG VM has been prepared as in Section 6.3.1, and the BNG is running inside the VM.
6.3.3 Extra Preparations on the Compute Node
type: internal
4. Move the p1p2 physical port under the same bridge as p1p1.
#ovs-vsctl del-port p1p2
#ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy option:port=1
#ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
#ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
#ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
#ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
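To verify that the four flows were installed (standard Open vSwitch tooling; this check is not in the original text):

#ovs-ofctl dump-flows br-p1p1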
q-svc
q-agt
q-dhcp
q-l3
q-meta
neutron
horizon
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
#ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
#Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan
ENABLE_TENANT_TUNNELS=True
#ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
#OVS_PHYSICAL_BRIDGE=br-
MULTI_HOST=True
[[post-config|$NOVA_CONF]]
#disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
Here is a sample local.conf for Compute Node.
[[local|localrc]]
FORCE=yes
MULTI_HOST=True
HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute
DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
A.1
After starting the OpenDaylight controller, as described in Section 6.2, run a stack on the controller and compute nodes.
Log in to http://<control node ip address>:8080 to start the Horizon GUI.
Verify that the node shows up in the GUI.
Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is
possible to disable the bundle from the OSGi console. However, there does not appear to be a way to
make this persistent, so it must be done each time the controller restarts.
Once the controller is up and running, connect to the OSGi console. The ss command displays all of the
bundles that are installed and their status. Adding a string(s) filters the list of bundles. List the OVSDB
bundles:
osgi> ss ovs
"Framework is launched."

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0
Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in
this case).
Disable the OVSDB neutron bundle and then list the OVSDB bundles again:
osgi> stop 262
osgi> ss ovs
"Framework is launched."

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0
Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
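To re-enable the bundle later without restarting the controller, the standard OSGi start command works:

osgi> start 262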
Appendix C Glossary

Acronym   Description
ATR       Application Targeted Routing
COTS      Commercial Off-The-Shelf
DPI       Deep Packet Inspection
FCS       Frame Check Sequence
GRE       Generic Routing Encapsulation
GRO       Generic Receive Offload
IOMMU     Input/Output Memory Management Unit
Kpps      Thousand packets per second
KVM       Kernel-based Virtual Machine
LRO       Large Receive Offload
MSI       Message Signaled Interrupts
MPLS      Multiprotocol Label Switching
Mpps      Million packets per second
NIC       Network Interface Card
pps       Packets per second
QAT       Intel QuickAssist Technology
QinQ      IEEE 802.1ad stacked VLAN tagging
RA        Reference Architecture
RSC       Receive Side Coalescing
RSS       Receive Side Scaling
SP        Service Provider
SR-IOV    Single Root I/O Virtualization
TCO       Total Cost of Ownership
TSO       TCP Segmentation Offload
Appendix D References

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt
Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt
Intel 82599 10 GbE Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html
Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html
Intel Ethernet Flexible Port Partitioning: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html
http://download.intel.com/design/intarch/papers/324176.pdf
OpenFlow workshop: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop2011-03-31/Openflow_1103031.pdf
Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf
IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing
Intel Network Builders RA, packet processing: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf
Intel Network Builders RA, NFV: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf
DPDK: http://www.intel.com/go/dpdk and https://01.org/packet-processing
LEGAL
By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel
products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which
includes subject matter disclosed herein.
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY
ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN
INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL
DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR
WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT,
COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software,
operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and
performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when
combined with other products.
The products described in this document may contain design defects or errors known as errata which may cause the product to
deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or
your distributor to obtain the latest specifications and before placing your product order.
Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or
retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or
configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your
purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change
without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and
provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your
actual performance.
No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages
resulting from such losses.
Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm
whether referenced data are accurate.
Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that
relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any
license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property
rights.
© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon and others are trademarks of Intel Corporation in the
U.S. and/or other countries. *Other names and brands may be claimed as the property of others.