CORE: A REAL-TIME NETWORK EMULATOR

Jeff Ahrenholz, Claudiu Danilov, Thomas R. Henderson, Jae H. Kim
Boeing Phantom Works
P.O. Box 3707, MC 7L-49, Seattle, WA 98124-2207
{jeffrey.m.ahrenholz; claudiu.b.danilov; thomas.r.henderson; jae.h.kim}@boeing.com

ABSTRACT
We present CORE (Common Open Research Emulator), a
real-time network emulator that allows rapid instantiation
of hybrid topologies composed of both real hardware and
virtual network nodes. CORE uses FreeBSD network stack
virtualization to extend physical networks for planning,
testing and development, without the need for expensive
hardware deployments.

We evaluate CORE in wired and wireless settings, and
compare performance results with those obtained on
physical network deployments. We show that CORE scales
to network topologies consisting of over a hundred virtual
nodes emulated on a typical server computer, sending and
receiving traffic totaling over 300,000 packets per second.
We demonstrate the practical usability of CORE in a
hybrid wired-wireless scenario composed of both physical
and emulated nodes, carrying live audio and video
streams.

Keywords: Network emulation, virtualization, routing,
wireless, MANET
1. INTRODUCTION
The Common Open Research Emulator, or CORE, is a
framework for emulating networks on one or more PCs.
CORE emulates routers, PCs, and other hosts and
simulates the network links between them. Because it is a
live-running emulation, these networks can be connected
in real-time to physical networks and routers. The acronym
stems from the initial use of this emulator to study open
source routing protocols, but as we describe below, we've
extended the capabilities of CORE to support wireless
networks.

CORE is based on the open source Integrated Multi-
protocol Network Emulator/Simulator (IMUNES) from the
University of Zagreb [1]. IMUNES provides a patch to the
FreeBSD 4.11 or 7.0 operating system kernel to allow
multiple, lightweight virtual network stack instances
[2][3][4]. These virtual stacks are interconnected using
FreeBSD's Netgraph kernel subsystem. The emulation is
controlled by an easy-to-use Tcl/Tk GUI. CORE forked
from IMUNES in 2004. Certain pieces were contributed
back in 2006, and the entire system will soon be released
under an open source license. In addition to the IMUNES
basic network emulation features, CORE adds support for
wireless networks, mobility scripting, IPsec, distributed
emulation over multiple machines, control of external
Linux routers, a remote API, graphical widgets, and
several other improvements. In this paper we present and
evaluate some of these features that make CORE a
practical tool for realistic network emulation and
experimentation.

The remainder of this paper is organized as follows: we
present related work in Section 2. Then we provide an
overview of COREs features in Section 3, and highlight
the implementation of wireless networking in Section 4
and distributed emulation in Section 5. We then examine
the performance of the CORE emulator for both wired and
wireless networks in Section 6 and present a typical hybrid
emulation scenario in Section 7, and end the paper with
our conclusions.
2. RELATED WORK
In surveying the available software that allows users to run
real applications over emulated networks, we believe that
CORE stands out in the following areas: scalability, ease
of use, application support, and network emulation
features.

Simulation tools, such as ns-2 [6], ns-3 [7], OPNET [8],
and QualNet [9], typically run on a single computer and
abstract the operating system and protocols into a
simulation model for producing statistical analysis of a
network system. In contrast, network emulation tools, such
as PlanetLab [10], NetBed [11], and MNE [12] often
involve a dedicated testbed or connect real systems
under test to specialized hardware devices. CORE is a
hybrid of the two types of tools, emulating the network
stack of routers or hosts through virtualization, and
simulating the links that connect them together. This way
it can provide the realism of running live applications on
an emulated network while requiring relatively
inexpensive hardware.

Machine virtualization tools, such as VMware [13],
Virtual PC [14], or Parallels [15], have become
increasingly popular, mainly due to the availability of
hardware that can drive multiple operating systems at the
same time with reasonable performance.

Figure 1. CORE Graphical User Interface

Operating system
virtualization tools, such as Xen [16], UML [17], KVM
[18], and OpenVZ [19], are mainly used for isolating
multiple Linux server environments driven by the same
hardware machine. CORE belongs to the class of
paravirtualization techniques, where only part of the
operating system is made virtual. In this case, only the
isolation of processes and network stacks is employed,
resulting in virtual machine instances that are as
lightweight as possible. Machine hardware such as disks,
video cards, timers, and other devices is not emulated
but is shared among these nodes. This lightweight
virtualization allows CORE to scale to over a hundred
virtual machines running on a single emulation server.

From a network layering perspective, CORE provides
high-fidelity emulation for the network layer and above,
but uses a simplified simulation of the link and physical
layers. The actual operating system code implements the
TCP/IP network stack, and user or system applications that
run in real environments can run inside the emulated
machine. This is in contrast to simulation techniques,
where abstract models represent the network stack, and
protocols and applications need to be ported to the
simulation environment.

Because CORE emulation runs in real time, real machines
and network equipment can connect and interact with the
virtual networks. Unlike some network emulations, CORE
runs on commodity PCs.
3. CORE OVERVIEW
A complete CORE system consists of a Tcl/Tk GUI,
FreeBSD 4.11 or 7.0 with a patched kernel, custom kernel
modules, and a pair of user-space daemons. See Figure 2
for an overview of the different components.

3.1. CORE GUI
The graphical user interface is scripted in the Tcl/Tk
language, which allows for rapid development of X11 user
interfaces. The user is presented with an empty drawing
canvas where nodes of various types can easily be placed
and linked together. An example of a running CORE GUI
is shown in Figure 1. Routers, PCs, hubs, switches, INEs
(inline network encryptors) and other nodes are available
directly from the GUI. Effects such as bandwidth limits,
delay, loss, and packet duplication can be dynamically
assigned to links. Addressing and routing protocols can be
configured, and the entire setup can be saved to a text-based
configuration file. A start button allows the user to enter an
Execute mode, which instantiates the topology in the
kernel. Once running, the user may double-click on any
node icon to get a standard Unix shell on that virtual node
for invoking commands in real-time. In addition, several
other tools and widgets can be used to interact with and
inspect the live-running emulation.

Figure 2. Overview of CORE Components

3.2. Network stack virtualization
CORE uses the FreeBSD network stack virtualization
provided by the VirtNet project [4], which allows for
multiple virtual instances of the OS network stack to be
run concurrently. The existing networking algorithms and
code paths in FreeBSD are intact, but operate on this
virtualized state. All global network variables such as
counters, protocol state, socket information, etc. have their
own private instance [5].

Each virtual network stack is assigned its own process
space, using the FreeBSD jail mechanism, to form a
lightweight virtual machine. These are named virtual
images (or vimages) by the VirtNet project and are created
using a new vimage command. Unlike traditional virtual
machines, vimages do not feature an entire operating
system running on emulated hardware. All vimages run the
same kernel and share the same file system, processor,
memory, clock, and other resources. Network packets can
be passed between virtual images simply by reference
through the in-kernel Netgraph system, without the need
for a memory copy of the payload. Because of this
lightweight emulation support, a single host system can
accommodate numerous (over 100) vimage instances, and
the maximum throughput supported by the emulation
system does not depend on the size of the packet payload, as
we will demonstrate in Section 6.

3.3. Network link simulation
Netgraph is a modular networking system provided by the
FreeBSD kernel, and a Netgraph instantiation consists of a
number of nodes arranged into graphs. Nodes can
implement protocols or devices, or may process data.
CORE utilizes this system at the kernel level to connect
multiple vimages to each other, or to other Netgraph nodes
such as hubs, switches, or RJ45 jacks connecting to the
outside world. Each wired link in CORE is implemented as
an underlying Netgraph pipe node. The pipe was originally
introduced by IMUNES as a means to apply link effects
such as bandwidth shaping, delay, loss, and
duplicates. One could, for example, create a link between
two routers having 512 kbps bandwidth, 37 ms
propagation delay, and a bit error rate of 1/1,000,000. These
parameters can be adjusted on the fly, as the emulation
runs. CORE modifies this pipe node slightly for
implementing wireless networks and also adds a random
jitter delay option.
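As a rough illustration of how such parameters can be pushed into the kernel from user space, the sketch below uses the standard libnetgraph(3) calls NgMkSockNode() and NgSendMsg(). The pipe node name, control-message cookie, command number, and configuration structure are placeholders of our own, since the paper does not specify the control interface of CORE's modified pipe node.

```c
/*
 * Sketch only: pushing link parameters to an emulated link from user space.
 * NgMkSockNode() and NgSendMsg() are standard libnetgraph(3) calls (link
 * with -lnetgraph); the node path "pipe_n1_n2:", the cookie, the command
 * number, and struct link_cfg are placeholders, since the control-message
 * layout of CORE's modified pipe node is not given in the paper.
 */
#include <sys/types.h>
#include <stdio.h>
#include <stdint.h>
#include <netgraph.h>

struct link_cfg {                 /* hypothetical configuration block     */
    uint64_t bandwidth_bps;       /* e.g. 512000 for 512 kbps             */
    uint32_t delay_us;            /* e.g. 37000 for 37 ms                 */
    uint32_t ber_exp;             /* one bit error per 10^ber_exp bits    */
    uint32_t duplicate_pct;       /* percentage of duplicated packets     */
};

#define PIPE_COOKIE   1001        /* placeholder node-type cookie         */
#define PIPE_SET_CFG  1           /* placeholder command number           */

int main(void)
{
    int csock, dsock;

    /* Create a temporary Netgraph socket node to reach the kernel graph. */
    if (NgMkSockNode(NULL, &csock, &dsock) < 0) {
        perror("NgMkSockNode");
        return 1;
    }

    struct link_cfg cfg = {
        .bandwidth_bps = 512000,  /* 512 kbps                  */
        .delay_us      = 37000,   /* 37 ms propagation delay   */
        .ber_exp       = 6,       /* bit error rate of 1/10^6  */
        .duplicate_pct = 0,
    };

    /* Send the configuration to the (hypothetically named) pipe node. */
    if (NgSendMsg(csock, "pipe_n1_n2:", PIPE_COOKIE, PIPE_SET_CFG,
                  &cfg, sizeof(cfg)) < 0) {
        perror("NgSendMsg");
        return 1;
    }
    return 0;
}
```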

3.4. External connectivity
CORE provides users with an RJ45 node that directly maps
to an Ethernet interface on the host machine, allowing
direct connectivity between the virtual images inside a
running emulation and external physical networks. Each
RJ45 node is assigned to one of the Ethernet interfaces on
the FreeBSD host; CORE takes over the settings of
that interface, such as its IP address, and transfers
all traffic passing through that physical port to the
emulation environment. This way, the user may physically
attach any network device to that port and packets will
travel between the real and emulated worlds in real time.
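One way to picture the Netgraph plumbing behind such a mapping is the generic connect message: ng_ether(4) exposes every physical interface as a node with a "lower" hook that carries raw frames, which can be joined to an emulated node's interface hook. In the sketch below, the peer node name "n5_eth0:" and its "ether" hook are illustrative assumptions rather than CORE's actual naming.

```c
/*
 * Sketch only: the kind of Netgraph plumbing behind an RJ45 mapping.
 * ng_ether(4) exposes each physical interface as a node (e.g. "em0:")
 * with a "lower" hook carrying raw frames; NGM_CONNECT is the generic
 * connect message.  The peer path "n5_eth0:" and its "ether" hook stand
 * in for an emulated node's interface and are purely illustrative.
 */
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <netgraph.h>
#include <netgraph/ng_message.h>

int main(void)
{
    int csock, dsock;
    struct ngm_connect con;

    if (NgMkSockNode(NULL, &csock, &dsock) < 0) {
        perror("NgMkSockNode");
        return 1;
    }

    memset(&con, 0, sizeof(con));
    strlcpy(con.ourhook,  "lower",    sizeof(con.ourhook));  /* NIC side  */
    strlcpy(con.path,     "n5_eth0:", sizeof(con.path));     /* emulated  */
    strlcpy(con.peerhook, "ether",    sizeof(con.peerhook)); /* node side */

    /* Ask the em0 Ethernet node to connect its lower hook to the peer. */
    if (NgSendMsg(csock, "em0:", NGM_GENERIC_COOKIE, NGM_CONNECT,
                  &con, sizeof(con)) < 0) {
        perror("NgSendMsg(connect)");
        return 1;
    }
    return 0;
}
```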
4. WIRELESS NETWORKS
CORE provides two modes of wireless network emulation:
a simple on-off mode, where links are established and
broken abruptly based on the distance between nodes, and a
more advanced model that allows for custom wireless link
effects. Nodes, each corresponding to a separate vimage,
may be manually moved around on the GUI canvas while
the emulation is running, or mobility patterns may be
scripted. In the current version, CORE wireless emulation
does not perform detailed layer 1 and 2 modeling of a
wireless medium, such as 802.11, and does not model
channel contention and interference. Instead, it focuses on
realistic emulation of layers 3 and above, while relying on
the adoption of external RF models by providing a
standard link model API.

The implementation of the on-off wireless mode is based
on the Netgraph hub node native to FreeBSD, which
simply forwards all incoming packets to every node that is
connected to it (Figure 3, left). We added a hash table to
the Netgraph hub and created a new wlan node, where a
hash of the source and destination node IDs determines
connectivity between any two nodes connected to the
wlan. The hash table is controlled by the position of the
nodes on the CORE GUI. We represent this wlan node as a
small wireless cloud on the CORE canvas. Vimage nodes
can be joined to a wireless network by drawing a link
between the vimage and this cloud. Nodes that are moved
a certain distance away from each other fall out of range
and can no longer communicate through the wlan node
(Figure 3, center).
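Conceptually, the on-off mode reduces to a per-pair connectivity lookup consulted on every forwarded frame. The sketch below illustrates the idea in user space with a plain bitmap; the actual ng_wlan module (not reproduced in the paper) keys a hash table on the source and destination node IDs, and every name here is invented for the example.

```c
/*
 * Sketch only: the on-off connectivity check described above.  The real
 * ng_wlan module keys a hash table on (source, destination) node IDs;
 * this user-space illustration uses a plain bitmap, which plays the same
 * role, and all names are invented for the example.
 */
#include <stdbool.h>
#include <stdint.h>

#define WLAN_MAX_NODES 256

/* One bit per (src, dst) pair: 1 = in range, 0 = out of range. */
static uint8_t link_map[WLAN_MAX_NODES][WLAN_MAX_NODES / 8];

/* Updated when the GUI or mobility script moves nodes on the canvas. */
static void wlan_set_link(unsigned src, unsigned dst, bool up)
{
    if (up)
        link_map[src][dst / 8] |=  (uint8_t)(1u << (dst % 8));
    else
        link_map[src][dst / 8] &= ~(uint8_t)(1u << (dst % 8));
}

/* Consulted in the forwarding path: deliver the frame only if in range. */
static bool wlan_linked(unsigned src, unsigned dst)
{
    return (link_map[src][dst / 8] >> (dst % 8)) & 1u;
}

int main(void)
{
    wlan_set_link(1, 2, true);          /* nodes 1 and 2 move into range  */
    return wlan_linked(1, 2) ? 0 : 1;   /* a frame from 1 would reach 2   */
}
```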

Figure 3. WLAN Kernel Module (left: hub forwarding to all nodes; center: wlan with on/off hash lookup; right: wlan tagging packets with per-link effects)

The advanced wireless model allows CORE to apply
different link effects between each pair of wireless nodes
(Figure 3, right). Each wireless vimage is connected to the
wlan node with a pipe that is capable of applying different
per-packet effects depending on the source and destination
of each packet. The wlan kernel module hash table stores,
in addition to node connectivity, the parameters that
should be applied between each pair of nodes. A tag is
added to each packet as it passes through the wlan node and
is read by the pipe, which then applies the link effects
contained in the tag instead of its globally configured effects.

To determine more complex link effects between nodes,
we use a modular C daemon, rather than the Tcl/Tk GUI, to
perform the distance and link-effects calculations.
This allows for swapping out different wireless link
models, depending on the configuration. Wireless link
effects models can set the statistical link parameters of
bandwidth, delay, loss, duplicates, and jitter. The CORE
GUI and the link effects daemon communicate through an
API. When the topology is executed, the GUI sends node
information to the daemon, which then calculates link
effects depending on the configured model. For example, a
simple link effects model available in the CORE default
setup is an increasing delay and loss model as the distance
between two nodes increases. Different link models can
use the same API to interact with the CORE GUI for
emulating various layer 1 and 2 wireless settings.

Once the link effects wireless daemon computes the
appropriate statistical parameters, it configures the wlan
kernel module directly through the libnetgraph C library
available in FreeBSD, and informs the GUI of links
between nodes and their parameters for display purposes.
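A minimal sketch of such a distance-based model, under assumptions of our own, is shown below; the range and the delay and loss constants are invented for illustration and only show the shape of a model the daemon could evaluate before installing the results in the wlan node and reporting the links to the GUI, as described above.

```c
/*
 * Sketch only: a distance-based link-effects model of the kind the default
 * setup is described as using.  The range, delay, and loss constants are
 * invented for illustration; the daemon's actual model API is not
 * specified in the paper.  Compile with -lm for hypot().
 */
#include <math.h>
#include <stdbool.h>

struct link_effects {
    bool     connected;
    unsigned delay_us;      /* one-way delay to apply          */
    unsigned loss_pct;      /* packet loss percentage to apply */
};

#define RANGE_M        275.0    /* beyond this distance, no link      */
#define BASE_DELAY_US  2000.0   /* delay when nodes are adjacent      */
#define MAX_EXTRA_US   50000.0  /* extra delay at the edge of range   */
#define MAX_LOSS_PCT   20.0     /* loss at the edge of range          */

struct link_effects compute_effects(double x1, double y1, double x2, double y2)
{
    struct link_effects fx = { 0 };
    double d = hypot(x2 - x1, y2 - y1);

    if (d > RANGE_M)            /* out of range: link stays down */
        return fx;

    double frac = d / RANGE_M;  /* 0.0 adjacent .. 1.0 at edge of range */
    fx.connected = true;
    fx.delay_us  = (unsigned)(BASE_DELAY_US + frac * MAX_EXTRA_US);
    fx.loss_pct  = (unsigned)(frac * MAX_LOSS_PCT);
    return fx;
}

int main(void)
{
    /* Example: nodes 200 m apart get proportionally higher delay and loss. */
    struct link_effects fx = compute_effects(0, 0, 200, 0);
    return fx.connected ? 0 : 1;
}
```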
5. DISTRIBUTED EMULATION
The in-kernel emulation mechanism ensures that each
CORE virtual image is very lightweight and efficient;
however, the applications that are potentially running on
the virtual nodes, such as routing daemons, traffic
generators, or traffic loggers, even though running
independently, need to share the memory and CPU of the
host computer. For example, in an emulation environment
with all nodes running the OSPF routing daemon available
in the Quagga open source package, we were able to
instantiate 120 emulated routers on a regular computer. To
increase the scalability of the system, we developed the
capability to distribute an emulation scenario across
multiple FreeBSD host systems, each of them
independently emulating part of the larger topology. When
using a distributed emulation, each emulated node needs to
be configured with the physical machine that will be used
to emulate that node. The controller GUI uses this
information to compute partial topologies composed of
nodes running at individual emulation hosts. The control
GUI then distributes these partial topologies to the
emulation hosts, which in turn emulate the partial
topologies independently. When a link connects two nodes
that are emulated on different FreeBSD hosts, a tunnel is
created between the two physical machines to allow data
packets to flow between the two emulated nodes. We use a
separate C daemon, named Span, to instantiate these
tunnels.

5.1. Connecting emulation hosts
The CORE Span tool uses the Netgraph socket facility to
bridge emulations running on different machines using a
physical network. One way to connect two CORE
emulations would be to use the RJ45 jack described earlier
in this paper. However, this limits the number of
connections to the number of Ethernet devices available on
the FreeBSD machine, and requires the emulation hosts to
be physically collocated in order to be directly connected.
Span allows any number of Netgraph sockets to be created
and tunnels data using normal TCP/IP sockets between
machines. Each Netgraph socket appears as a node in the
Netgraph system, which can be connected to any emulated
virtual image, and a user-space socket on the other end.
Span operates by managing the mapping between these
various sockets.
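The core of such a forwarding tool can be sketched with the libnetgraph(3) data-socket calls: NgMkSockNode() creates the socket node, and NgRecvData() delivers raw frames arriving from the emulated side, which are then written to an ordinary TCP socket. The node name, peer address, port, and framing in the sketch below are placeholders, only one direction of the pump is shown, and this is not Span's actual implementation.

```c
/*
 * Sketch only: the forwarding idea behind Span.  NgMkSockNode(),
 * NgRecvData(), and send() are real calls; the node name, TCP port,
 * peer address, and framing are placeholders, and only one direction
 * of the pump is shown.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <netgraph.h>

#define FRAME_MAX 2048

/* Copy frames arriving from the local Netgraph graph into the TCP tunnel. */
static void span_pump(int dsock, int tcp_fd)
{
    u_char buf[FRAME_MAX];
    char   hook[64];                       /* ample room for a hook name */

    for (;;) {
        int n = NgRecvData(dsock, buf, sizeof(buf), hook);
        if (n <= 0)
            break;
        if (send(tcp_fd, buf, (size_t)n, 0) < 0)
            break;
    }
}

int main(void)
{
    int csock, dsock;

    /* Socket node that emulated nodes (or a wlan/pipe node) connect to. */
    if (NgMkSockNode("span0", &csock, &dsock) < 0) {
        perror("NgMkSockNode");
        return 1;
    }

    /* TCP tunnel to the peer emulation host (address/port are examples). */
    int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port   = htons(5000) };
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);
    if (tcp_fd < 0 ||
        connect(tcp_fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        return 1;
    }

    span_pump(dsock, tcp_fd);
    return 0;
}
```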

Span also runs on Linux and Windows systems, and sets
up a TAP virtual interface as the tunnel endpoint. This
allows Linux or Windows machines to participate in the
emulated network, as any data sent out the virtual interface
goes across a tunnel and into the CORE emulated network.

A different way to connect CORE machines together is by
using the Netgraph kernel socket or ksocket. This allows
opening a socket in the kernel that connects directly to
another machine's kernel. The sockets appear as Netgraph
nodes that can be connected to any emulated node. CORE
uses the ksocket to connect together WLAN nodes that
belong to the same wireless network, but are emulated on
different machines. The WLAN node forwards all data to a
connected ksocket without performing the hash table
lookup. It also prepends the packet with the source ID of
the originating Netgraph node. When receiving data from a
ksocket, the remote wlan node uses the source ID tag from
the packet for the hash table lookup. This allows emulation
of a large wireless network with some wireless nodes
distributed over multiple emulation machines.
6. PERFORMANCE
The performance of CORE is largely hardware and
scenario dependent. Most questions concern the number of
nodes that it can handle. This depends on what processes
each of the nodes is running and how many packets are
sent around the virtual networks. The processor speed
appears to be the principal bottleneck.

Here we consider a typical single-CPU Intel Xeon 3.0 GHz
server with 2.5 GB of RAM running CORE 3.1 for FreeBSD
4.11. We have found it reasonable to run 30-40 nodes, each
running Quagga with OSPFv2 and OSPFv3 routing. On
this hardware CORE can instantiate 100 or more nodes,
but at that point performance depends heavily on what each
node is doing.

Because this software is primarily a network emulator, the
more appropriate question is how much network traffic it
can handle. In order to test the scalability of the system,
we created CORE scenarios consisting of an increasing
number of routers linked together, one after the other, to
form a chain. This represents a worst-case routing scenario
where each packet traverses every hop. At each end of the
chain of routers we connected CORE to a Linux machine
using an RJ45 node. One of the Linux machines ran the
iperf benchmarking utility in server mode, and the other
ran the iperf client, which connected through the chain of
emulated routers. TCP packets were sent as fast as possible
to measure the maximum throughput available for a TCP
application.

For this test, the links between routers were configured
with no bandwidth, delay, or other link restrictions, so
these tests did not exercise the packet queuing of the
system. The two Ethernet interfaces connected the Linux
machines at 100 Mbps full duplex. Only emulated wired links
were used inside of CORE, and by default each emulated
router was running the Quagga 0.99.9 routing suite
configured with OSPFv2 and OSPFv3 routing.

The iperf utility transmitted data for 10 seconds and
printed the throughput measured for each test run. We
varied the TCP maximum segment size (MSS), which
governs the size of the packets transmitted, over four
values: 1500, 1000, 500, and 50 bytes. The
number of router hops in the emulated network was
increased from 1 to 120. The resulting iperf measurements
are shown in Figure 4. In Figure 5, we plot the total
number of packets per second handled by the entire
system. This is the measured end-to-end throughput
multiplied by the number of hops and divided by the
packet size. This value represents the number of times the
CORE system as a whole needed to deal with sending or
receiving packets.
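As a rough cross-check of these numbers (our own arithmetic from the figures reported in this section, ignoring header overhead), an end-to-end rate R over H hops with packets of approximately MSS bytes corresponds to

\[
  \mathrm{pps} \;\approx\; \frac{R \cdot H}{8 \cdot \mathrm{MSS}}
  \;=\; \frac{30 \times 10^{6} \times 120}{8 \times 1500}
  \;=\; 300{,}000 \ \text{packets/s},
\]

which matches the plateau near 300,000 pps that Figure 5 shows for the largest packets at 120 hops.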

Figure 4. iperf Measured Throughput (throughput in Mbps versus number of hops, for MSS values of 1500, 1000, 500, and 50)

Figure 5. Total Packets per Second (total packets per second versus number of hops, for MSS values of 1500, 1000, 500, and 50)

The measured throughput in Figure 4 shows that the
CORE system can sustain the maximum transfer rate (for the
100 Mbps link) up to about 30 nodes. At this point, CPU
usage reaches 100%. Even when
emulating 120 nodes, the network was able to forward
about 30 Mbps of data.

Figure 5 shows the number of packets per second increasing
linearly with the number of hops while the link remains
saturated; the rate then levels off at about 300,000 pps, the
point at which CPU usage hits 100%. This suggests that the
performance of the CORE
system is bounded by the number of packet operations per
second; other factors such as the size of the packets and
the number of emulated hops are not the limiting
performance factor, as send or receive operations are
implemented in the kernel simply as reference transfers.

These tests consider only the performance of a single
system. The FreeBSD 7.0 version of CORE supports
symmetric multiprocessing (SMP) systems, and with CPU
usage being the main bottleneck, a multiprocessor system
should perform even better. The current version has
somewhat limited SMP support, but development of the
kernel virtualization continues, with a focus on adopting
virtual network stacks in the CURRENT FreeBSD
development branch so that a separate patch will no longer be
required. As described in Section 5, we have also added
support to distribute the emulation across multiple physical
machines, allowing for greater performance; but this
introduces a new performance bottleneck: the available
resources of the physical networks that tunnel data
between emulation hosts.
7. HYBRID SCENARIO
CORE has been used for demonstrations, research and
experimentation. One frequent use of CORE is to extend a
network of physical nodes when a limited amount of
hardware is available. In this section we show a typical use
case of CORE. The scenario includes eight Linux routers
communicating with 802.11a wireless radios, shown as
rectangular black systems in Figure 6. Each Linux router
also features an Ethernet port used by CORE as a control
channel. CORE has been extended to remotely control
these Linux routers, and can govern the actual connectivity
of the wireless interfaces by inserting and removing
iptables firewall rules in Linux as links are created and
broken from the GUI. The identical Quagga OSPFv3-
MANET [20] routing protocol code is run on the Linux
routers and on the FreeBSD emulated routers. The network
is expanded by including six emulated wired routers and
ten additional emulated wireless routers, for a total of 24
routing nodes. The wired routers are shown near the top
left of Figure 6, and the wireless routers appear in the
bottom left of Figure 6. Span is used in this scenario to
link together the two physical CORE servers (CORE 1 and
CORE 2), each responsible for emulating portions of the
network.

Figure 6. Hybrid Scenario

A laptop, labeled Video Client in Figure 6, is used to
display a video stream transmitted by one of the Linux
routers labeled Video Server. The video stream first
traverses the physical 802.11 network and then into a Span
tunnel that sends the data into one of the CORE machines.
The packets are forwarded through the emulated network,
first in an OSPFv2 wired network and then into an
OSPFv3-MANET wireless network. Finally, the video
stream enters another Span tunnel that connects to the
virtual interface of the Windows laptop where the video
client displays the video. This path is depicted with a green
line in Figure 6.

Performance of the video stream can be viewed on the
laptop screen as the wireless nodes are moved around, in
either the real wireless network or the emulated one. In
this scenario we observed that the OSPFv3-MANET
routing protocol behaves similarly between the real Linux
systems and the emulated FreeBSD nodes, as we would
expect from the same code running on both platforms.
8. CONCLUSION AND FUTURE DIRECTIONS
The CORE network emulator was introduced and briefly
compared with other emulation, simulation, and
virtualization tools. The CORE GUI and FreeBSD kernel
components were described, along with two modes of
wireless network emulation and support for distributing the
emulation across multiple FreeBSD systems. The performance of the system
was characterized with a series of throughput tests. Finally,
the practical usability of the system was demonstrated by
presenting a hybrid wired-wireless scenario that combined
physical and emulated nodes.

The key features of CORE include scalability, ease of use,
the potential for running real applications on a real TCP/IP
network stack, and the ability to connect the live running
emulation with physical systems.

Future work continues on the CORE tool to make it more
modular. The wireless daemon is being improved with
better support for pluggable wireless models. Experiments
are being performed to merge CORE emulation with
existing, validated simulation models for layers 1 and 2.
Management of instantiating and running the emulation is
being moved to a daemon, away from the monolithic
Tcl/Tk GUI. Components of this daemon are being
developed to take advantage of Linux virtualization
techniques in addition to the existing FreeBSD vimages.
The CORE system will be released as open source in the
near future.

REFERENCES


[1] Integrated Multi-protocol Network Emulator/Simulator (IMUNES),
http://www.tel.fer.hr/imunes/

[2] Zec, M. Implementing a Clonable Network Stack in the FreeBSD Kernel,
USENIX 2003 Proceedings, November 2003.

[3] Zec, M., and Mikuc, M. Operating System Support for Integrated Network
Emulation in IMUNES, ACM ASPLOS XI, October 2004.

[4] The FreeBSD Network Stack Virtualization Project,
http://imunes.net/virtnet/

[5] Zec, M. Network Stack Virtualization, EuroBSDCon 2007, September
2007.

[6] The Network Simulator - ns-2, http://www.isi.edu/nsnam/ns/

[7] ns-3 Project, http://www.nsnam.org/

[8] OPNET Modeler: Scalable Network Simulation,
http://www.opnet.com/solutions/network_rd/modeler.html

[9] Scalable Network Technologies: QualNet Developer,
http://www.scalable-networks.com/products/developer.php

[10] PlanetLab: an open platform for deploying, http://www.planet-lab.org/

[11] Emulab - Network Emulation Testbed Home,
http://boss.netbed.icics.ubc.ca/

[12] Mobile Network Emulator (MNE),
http://cs.itd.nrl.navy.mil/work/proteantools/mne.php

[13] VMware Server, http://www.vmware.com/products/server/

[14] Microsoft Virtual PC,
http://www.microsoft.com/windows/products/winfamily/virtualpc/default.mspx

[15] Parallels Workstation, http://www.parallels.com/en/workstation/

[16] Xen Hypervisor, http://www.xen.org/xen/

[17] The User-Mode Linux Kernel, http://user-mode-linux.sourceforge.net/

[18] Kernel Based Virtual Machine, http://kvm.qumranet.com/kvmwiki

[19] OpenVZ Wiki, http://wiki.openvz.org/Main_Page

[20] P. Spagnolo and T. Henderson, Comparison of Proposed OSPF MANET
Extensions, in Proceedings of the IEEE Military Communications Conference
(MILCOM), vol. 2, Oct. 2006.
