
Hyper-V Networking

Aidan Finn

About Aidan Finn

Technical Sales Lead at MicroWarehouse (Dublin)


Working in IT since 1996
MVP (Virtual Machine)
Experienced with Windows Server/Desktop, System Center,
virtualisation, and IT infrastructure
@joe_elway
http://www.aidanfinn.com
http://www.petri.co.il/author/aidan-finn
Published author/contributor of several books

Books

System Center 2012 VMM
Windows Server 2012 Hyper-V

Networking Basics

Hyper-V Networking Basics


[Diagram: the management OS virtual NIC uses VLAN ID 101, the virtual machine virtual NICs use VLAN ID 102, and both share a VLAN trunk to the physical network.]
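A minimal sketch of assigning those VLAN IDs with the Hyper-V PowerShell module; the vNIC and VM names are illustrative assumptions:

# Tag the management OS virtual NIC with VLAN 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 101

# Tag a virtual machine's vNIC with VLAN 102
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 102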

Virtual NICs
Generation 1 VMs can have:
(Synthetic) network adapter
Requires drivers (Hyper-V integration components/services)
Does not do PXE boot
Best performance
Legacy network adapter
Emulated: does not require Hyper-V drivers
Does offer PXE boot
Poor performance
Generation 2 VMs have synthetic network adapters with PXE support
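As an illustration, either adapter type can be added to a Generation 1 VM with the Hyper-V module; the VM, adapter, and switch names below are hypothetical:

# Synthetic adapter: best performance, needs the integration components in the guest
Add-VMNetworkAdapter -VMName "VM01" -Name "Synthetic1" -SwitchName "External1"

# Legacy (emulated) adapter: no Hyper-V drivers needed, supports PXE boot, slower
Add-VMNetworkAdapter -VMName "VM01" -Name "Legacy1" -SwitchName "External1" -IsLegacy $true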

Hyper-V Extensible Switch

Replaces the Virtual Network
Handles network traffic between:
Virtual machines
The physical network
The management OS
NIC = network adapter
Layer-2 virtual interface
Programmatically managed
Extensible

Virtual Switch Types


External:
Allows VMs to talk to each other, the physical network, and the host
Normally used
Internal:
Allows VMs to talk to each other and the host
VMs cannot communicate with VMs on another host
Normally only ever seen in a lab
Private:
Allows VMs to talk to each other
VMs cannot communicate with VMs on another host
Sometimes seen, but replaced by Hyper-V Network Virtualization or VLANs
(Each type can be created as shown in the sketch after this list.)
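A minimal sketch of creating each switch type with the Hyper-V PowerShell module; the switch and adapter names are illustrative assumptions:

# External switch bound to a physical NIC, keeping a vNIC for the management OS
New-VMSwitch -Name "External1" -NetAdapterName "pNIC1" -AllowManagementOS $true

# Internal switch: VMs and the management OS on this host only
New-VMSwitch -Name "Internal1" -SwitchType Internal

# Private switch: VM-to-VM traffic on this host only
New-VMSwitch -Name "Private1" -SwitchType Private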

Switch Extensibility
Extension Types
Capturing
Monitoring
Example: InMon sFlow

Filtering
Packet monitoring/security
Example: 5nine Security

Forwarding
Does all the above & more
Example: Cisco Nexus 1000V

NIC Teaming

NIC Teaming

Provides load balancing and failover (LBFO)

Load balancing:
Spreads traffic across multiple physical NICs.
This provides link aggregation, not necessarily a single virtual pipe.

Failover:
If one physical path (NIC or top-of-rack switch) fails, traffic is automatically moved to another NIC in the team.

Built-in and fully supported for Hyper-V and Failover Clustering since WS2012

NIC Teaming Features

Microsoft supported: no more calls to NIC vendors for teaming support, or being told to turn off teaming
Vendor agnostic: can mix NIC manufacturers in a single team
Up to:
32 NICs at the same speed in physical machines
2 virtual NICs at the same speed in a VM
Configure teams to meet server needs
Team management is easy:
Server Manager, LBFOADMIN.EXE, VMM, or PowerShell (see the sketch below)
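For example, a team could be created and inspected like this; the team and NIC names are hypothetical:

# Create a switch-independent team with dynamic load distribution
New-NetLbfoTeam -Name "Team1" -TeamMembers "pNIC1","pNIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Review the team and its members
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"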

Terminology

[Diagram: a team is made up of team members (the physical network adapters) and exposes one or more team interfaces, also called team NICs or tNICs.]

Connection Modes

Switch independent mode:
Doesn't require any configuration of a switch
Protects against adjacent switch failures
Allows a standby NIC

[Diagram: a switch-independent team]

Switch dependent modes:
1. Static teaming
Configured on the switch
2. LACP teaming
Also known as IEEE 802.1AX or 802.3ad
Both require configuration of the adjacent switch

[Diagram: a switch-dependent team]

Load Distribution Modes


1. Address Hash comes in 3 flavors:
4-tuple hash (the default distribution mode): uses the RSS hash if available, otherwise hashes the TCP/UDP ports and the IP addresses. If ports are not available, uses the 2-tuple hash instead.
2-tuple hash: hashes the IP addresses. If the traffic is not IP, uses the MAC address hash instead.
MAC address hash: hashes the MAC addresses.

2. Hyper-V port
Hashes the port number on the Hyper-V switch that the traffic is coming from. Normally this equates to per-VM traffic. Best if using DVMQ.

3. Dynamic (added in WS2012 R2)
Spreads a single stream of data across team members using flowlets. The default option in WS2012 R2.

(Both the connection mode and the load distribution mode can be changed later; see the sketch after this list.)
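As a sketch, an existing team's modes can be changed with the NetLbfo module; the team name is hypothetical:

# LACP with Hyper-V port distribution (requires matching configuration on the adjacent switch)
Set-NetLbfoTeam -Name "Team1" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

# Back to switch independent with Dynamic, the WS2012 R2 default
Set-NetLbfoTeam -Name "Team1" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic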

NIC Teaming: Virtual Switch

Choose the team connection mode that is required by your switches
Choose either Hyper-V Port or Dynamic (WS2012 R2) load distribution:
Hyper-V Port provides predictable incoming paths and DVMQ acceleration.
Dynamic enables a single virtual NIC to spread traffic across multiple team members at once.

[Diagram: virtual switch on top of a NIC team]

NIC Teaming: Physical NICs

Choose the team connection mode that is required by your switches
Choose either Address Hash or Dynamic load distribution:
Address Hash will isolate a single stream of traffic on one physical NIC.
Dynamic enables a single stream to spread across multiple team members at once.

[Diagram: networking stack on top of a NIC team]

NIC Teaming: Virtual Machines

Can be configured in the guest OS of a WS2012 or later VM.
Teams the VM's virtual NICs.
Configuration is locked: you must allow NIC teaming in the advanced properties of the virtual NIC in the VM settings, or with PowerShell:
Set-VMNetworkAdapter -VMName VM01 -AllowTeaming On
(A fuller sketch follows below.)

[Diagram: a team of virtual NICs inside the virtual machine]
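A minimal sketch of the two halves, assuming a VM named VM01 with two virtual NICs and a WS2012 or later guest OS; all names are illustrative:

# On the host: allow teaming on every virtual NIC of the VM
Get-VMNetworkAdapter -VMName "VM01" | Set-VMNetworkAdapter -AllowTeaming On

# Inside the guest OS: team the virtual NICs (guest teams use switch-independent mode with address hashing)
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts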

Demo: NIC Teaming

Hardware Offloads

RSS

[Diagram: a host with 2 CPUs, 12 cores, and 24 logical processors (hyperthreading). The management OS vNICs (Management, Live Migration, Cluster, SMB 3.0, Backup) and a virtual machine sit on a team of rNIC1 and rNIC2; without RSS, inbound processing lands on one 100% utilized logical processor, while RSS spreads it across additional cores.]

DVMQ

[Diagram: the same host; DVMQ spreads the inbound processing for virtual machine traffic arriving through the teamed NICs across additional cores, instead of one 100% utilized logical processor.]

RSS and DVMQ


Consult your network card/server manufacturer
Can use Get-/Set-NetAdapterRss to configure (see the sketch below)
Don't change anything unless you need to
RSS and DVMQ are incompatible on the same NIC, so design hosts accordingly
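For instance, a sketch of viewing and adjusting which processors a NIC may use; the adapter names and processor numbers are illustrative assumptions:

# View current RSS settings for a physical NIC
Get-NetAdapterRss -Name "pNIC1"

# Restrict RSS on this NIC to a range of logical processors (only if you need to)
Set-NetAdapterRss -Name "pNIC1" -BaseProcessorNumber 2 -MaxProcessorNumber 16

# The VMQ equivalents for NICs bound to the virtual switch
Get-NetAdapterVmq -Name "pNIC2"
Set-NetAdapterVmq -Name "pNIC2" -BaseProcessorNumber 2 -MaxProcessors 8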

vRSS
Added in WS2012 R2
RSS provides extra processing capacity for inbound traffic to a physical server:
Using cores beyond Core 0.
vRSS does the same thing in the guest OS of a VM:
Using additional virtual processors.
Allows inbound networking to a VM to scale out.
Obviously requires VMs with additional virtual processors.
The physical NICs used by the virtual switch must support DVMQ.
Enable RSS in the advanced NIC properties in the VM's guest OS (see the sketch below).
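A sketch of enabling RSS inside the guest OS with PowerShell rather than the NIC's advanced properties dialog; the adapter name is an assumption:

# Run inside the VM's guest OS (WS2012 R2 or later)
Enable-NetAdapterRss -Name "Ethernet"

# Confirm that RSS is now enabled on the virtual NIC
Get-NetAdapterRss -Name "Ethernet"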

vRSS

[Diagram: with vRSS, inbound traffic for the virtual machine NIC is spread across several processors (CPU 0 to CPU 7) instead of leaving a single one 100% utilized; the management OS vNICs (Management, Live Migration, Cluster, SMB 3.0, Backup) and teamed rNICs are shown as before.]

Demo: vRSS

Single-Root I/O Virtualization (SR-IOV)

A virtual function on a capable NIC is presented directly to the VM
Bypasses in the management OS:
The networking stack
The virtual switch (a logical connection is still present)
Cannot team NICs in the management OS; can team NICs in the VM
Very low latency virtual networking, less hardware usage
Requires SR-IOV-ready:
Motherboard
BIOS
NIC
Windows Server 2012/Hyper-V Server 2012 (or later) host
Can Live Migrate to/from capable/incapable hosts

SR-IOV Illustrated
[Diagram: two hosts compared. Without SR-IOV, the network I/O path runs from the VM's virtual NIC through the Hyper-V switch in the root partition (routing, VLAN filtering, data copy) down to the physical NIC. With SR-IOV, a virtual function on the SR-IOV physical NIC is presented directly to the VM, bypassing the Hyper-V switch in the root partition.]

Implementing SR-IOV

All management OS networking features are bypassed
You must create SR-IOV virtual switches to begin with:
New-VMSwitch -Name "IOVSwitch1" -NetAdapterName "pNIC1" -EnableIov $true
Install the Virtual Function driver in the guest OS
To get teaming (see the diagram and sketch below):
Create 2 virtual switches
Enable guest OS teaming in the vNIC advanced settings
Team in the guest OS

[Diagram: inside the VM, Virtual NIC 1 and Virtual NIC 2 form a NIC team; Virtual NIC 1 connects to SR-IOV enabled Virtual Switch 1 on Physical NIC 1, and Virtual NIC 2 connects to SR-IOV enabled Virtual Switch 2 on Physical NIC 2.]
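A minimal host-side sketch of that layout, with hypothetical switch, NIC, and VM names; the team itself is then created inside the guest OS:

# Two SR-IOV enabled virtual switches, one per physical NIC
New-VMSwitch -Name "IOVSwitch1" -NetAdapterName "pNIC1" -EnableIov $true
New-VMSwitch -Name "IOVSwitch2" -NetAdapterName "pNIC2" -EnableIov $true

# Give the VM one virtual NIC per switch, then allow guest teaming and assign an IOV weight
Add-VMNetworkAdapter -VMName "VM01" -Name "IOV1" -SwitchName "IOVSwitch1"
Add-VMNetworkAdapter -VMName "VM01" -Name "IOV2" -SwitchName "IOVSwitch2"
Get-VMNetworkAdapter -VMName "VM01" | Set-VMNetworkAdapter -AllowTeaming On -IovWeight 100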

The Real World: SR-IOV


Not cloud or admin friendly:
Requires customization in the guest OS
How many hosting or end users can you trust with admin rights over in-guest NIC teams?
In reality:
SR-IOV is intended for huge hosts or a few VMs with low latency requirements
You might never implement SR-IOV outside of a lab

IPsec Task Offload (IPSecTO)


IPsec encrypts/decrypts traffic between a client and server.
Done automatically based on rules/policies.
Can be implemented by a tenant independently of the cloud administrators.
It uses processor resources; in a cloud this could have a significant impact.
Using IPSecOffloadV2-enabled NICs, Hyper-V can offload IPsec processing from VMs to the host's NIC(s) (see the sketch below).
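As an illustration, the offload can be inspected on the host NIC and capped per virtual NIC; the NIC and VM names are assumptions, and the parameter set should be checked against your module version:

# Check whether the physical NIC supports and enables IPsec task offload
Get-NetAdapterIPsecOffload -Name "pNIC1"

# Allow a VM's virtual NIC to offload up to 512 IPsec security associations to the host NIC
Set-VMNetworkAdapter -VMName "VM01" -IPsecOffloadMaximumSecurityAssociation 512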

Consistent Device Naming (CDN)


Every Windows admin hates "Local Area Connection", "Local Area Connection 2", etc.
Network devices are randomly named based on the order of PnP discovery
Modern servers (Dell 12th gen, HP Gen8) can store network port device names
WS2012 and later can detect these names
Uses the device name to name network connections:
Port 1
Port 2
Slot 1 1
Slot 1 2

Converging Networks
Not a new concept from hardware vendors
Introduced as a software solution in WS2012
Will cover this topic in the High Availability session

SMB 3.0
No longer just a file & print protocol
Learn more in the SMB 3.0 and Scale-Out File Server session

Thank You!

Aidan Finn
@joe_elway
www.aidanfinn.com
Petri IT Knowledgebase
