
VMware vSphere: Optimize and Scale

Student Manual Volume 1 ESXi 5.0 and vCenter Server 5.0

VMware Education Services VMware, Inc. www.vmware.com/education


VMware vSphere: Optimize and Scale
ESXi 5.0 and vCenter Server 5.0
Part Number EDU-EN-VSOS5-LECT1-STU
Student Manual Volume 1
Revision A

Copyright/Trademark
Copyright © 2012 VMware, Inc. All rights reserved. This manual and its accompanying materials are protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

The training material is provided "as is," and all express or implied conditions, representations, and warranties, including any implied warranty of merchantability, fitness for a particular purpose or noninfringement, are disclaimed, even if VMware, Inc., has been advised of the possibility of such claims. This training material is designed to support an instructor-led training course and is intended to be used for reference purposes in conjunction with the instructor-led training course. The training material is not a standalone training tool. Use of the training material for self-study without class attendance is not recommended.

These materials and the computer programs to which it relates are the property of, and embody trade secrets and confidential information proprietary to, VMware, Inc., and may not be reproduced, copied, disclosed, transferred, adapted or modified without the express written approval of VMware, Inc.

Course development: Carla Guerwitz, Mike Sutton, John Tuffin
Technical review: Jonathan Loux, Brian Watrous, Linus Bourque, Undeleeb Din
Technical editing: Jeffrey Gardiner
Production and publishing: Ruth Christian, Regina Aboud


TABLE OF CONTENTS

MODULE 1: Course Introduction
  Importance
  Learner Objectives
  You Are Here
  Typographical Conventions
  VMware Certification
  VMware Online Resources
  vSphere Production Documentation

MODULE 2: VMware Management Resources
  You Are Here
  Importance
  Learner Objectives
  Methods to Run Commands
  ESXi Shell
  Accessing ESXi Shell Locally
  Accessing ESXi Shell Remotely
  vCLI
  vMA
  vMA Hardware and Software Requirements
  Configuring vMA
  Connecting to the Infrastructure
  Deploying vMA
  Configuring vMA
  Adding a Target Server
  vMA Authentication
  Joining vMA to Active Directory
  Command Structure
  vMA Commands
  esxcfg Commands
  esxcfg Equivalent vicfg Commands Examples
  Managing Hosts with vMA
  Common Connection Options for vCLI Execution
  Common Connection Options for vCLI Execution (2)
  vicfg Command Example
  Entering and Exiting Host Maintenance Mode
  vmware-cmd Overview
  Format for Specifying Virtual Machines with vmware-cmd
  Connection Options for vmware-cmd
  Managing Virtual Machines Using vmware-cmd
  Listing and Registering Virtual Machines with vmware-cmd
  Retrieving Virtual Machine Attributes with vmware-cmd
  Managing Virtual Machine Snapshots with vmware-cmd
  Powering Virtual Machines On and Off with vmware-cmd
  Connecting Devices with vmware-cmd
  Working with the AnswerVM API
  esxcli Command Hierarchies
  Example esxcli command
  Lab 1
  Review of Learner Objectives
  Key Points

MODULE 3: Performance in a Virtualized Environment
  You Are Here
  Importance
  Module Lessons
  Lesson 1: Performance Overview
  Learner Objectives
  Virtualization Architectures
  VMware ESXi Architecture
  Virtualization Performance: First Dimension
  Virtualization Performance: Second Dimension
  Virtualization Performance: Third Dimension
  Performance Factors in a vSphere Environment
  Traditional Best Practices for Performance
  What Is a Performance Problem?
  Performance Troubleshooting Methodology
  Basic Troubleshooting Flow for ESXi Hosts
  Review of Learner Objectives
  Lesson 2: Virtual Machine Monitor
  Learner Objectives
  What Is the Virtual Machine Monitor?
  What Is Monitor Mode?
  Challenges of x86 Hardware Virtualization
  CPU Software Virtualization: Binary Translation
  CPU Hardware Virtualization
  Intel VT-x and AMD-V (First Generation)
  CPU Virtualization Overhead
  Memory Management Concepts
  MMU Virtualization
  Software MMU: Shadow Page Tables
  Hardware MMU Virtualization
  Memory Virtualization Overhead
  Choosing the Default Monitor Mode
  Overriding the Default Monitor Mode
  Application Examples
  Displaying the Default Monitor Mode
  Review of Learner Objectives
  Lesson 3: Monitoring Tools
  Learner Objectives
  Commonly Used Performance Monitoring Tools
  Overview Performance Charts
  Advanced Performance Charts
  Chart Options: Real-Time and Historical
  Chart Types
  Objects and Counters
  Statistics Type
  Rollup
  Saving Charts
  resxtop Utility
  Using resxtop Interactively
  Navigating resxtop
  Sample Output from resxtop
  Using resxtop in Batch and Replay Modes
  Analyzing Batch Mode Output with Perfmon
  Guest Operating System-Based Performance Tools
  Perfmon DLL in VMware Tools
  Choosing the Right Tool (1)
  Choosing the Right Tool (2)
  Lab 2
  Review of Learner Objectives
  Key Points

MODULE 4: Network Scalability
  You Are Here
  Importance
  Module Lessons
  Lesson 1: Introduction to vSphere Distributed Switch
  Learner Objectives
  Distributed Switch
  Benefits of Distributed Switches
  Distributed Switch Example
  Viewing Distributed Switches
  Managing Virtual Adapters
  Managing Physical Adapters
  Enabling IPv6 on the ESXi Host
  Connecting a Virtual Machine to a Distributed Port Group
  Distributed Switch Architecture
  Editing General Distributed Switch Properties
  Editing Advanced Distributed Switch Properties
  Editing Distributed Port Group Properties
  Distributed Switch Configuration: .dvsData Folder
  Standard Switch and Distributed Switch Feature Comparison
  Lab 3
  Review of Learner Objectives
  Lesson 2: Distributed Switch Features
  Learner Objectives
  Distributed Switch Port Binding
  Port-Binding Examples
  VLAN Policies for Distributed Port Groups
  What Is a Private VLAN?
  Types of Secondary Private VLANs
  Promiscuous Private VLANs
  Isolated Private VLANs
  Community Private VLANs
  Private VLAN Implementation
  Private VLANs and Physical Switches
  Physical Switch PVLAN-Aware
  Configuring and Assigning Private VLANs
  Discovery Protocols
  Configuring CDP or LLDP
  Viewing CDP Information
  Viewing LLDP Information
  What Is Network I/O Control?
  Configuring System-Defined Network Resource Pools
  User-Defined Network Resource Pools
  Configuring a User-Defined Network Resource Pool
  What Is NetFlow?
  Network Flows
  Network Flow Analysis
  Configuring NetFlow on a Distributed Switch
  What Is Port Mirroring?
  Creating a Port Mirroring Session: General Properties
  Creating a Port Mirroring Session: Source and Destination
  Lab 4
  Lab 5 (Optional)
  Review of Learner Objectives
  Key Points

MODULE 5: Networking Optimization
  You Are Here
  Importance
  Module Lessons
  Lesson 1: Networking Virtualization Concepts
  Learner Objectives
  Network I/O Virtualization Overhead
  vmxnet Network Adapter
  Virtual Network Adapters
  Network Performance Features
  TCP Checksum Offload
  TCP Segmentation Offload
  Jumbo Frames
  Using DMA to Access High Memory
  10 Gigabit Ethernet (10GigE)
  NetQueue
  vSphere DirectPath I/O
  SplitRx Mode
  VMCI
  Review of Learner Objectives
  Lesson 2: Monitoring Networking I/O Activity
  Learner Objectives
  Network Capacity Metrics
  vSphere Client Networking Statistics
  vSphere Client Network Performance Chart
  resxtop Networking Statistics
  resxtop Network Output
  vSphere Client or resxtop?
  Lab 6 Introduction: Test Cases 1 and 2
  Lab 6 Introduction: Test Case 3
  Lab 6
  Lab 6 Review
  Review of Learner Objectives
  Lesson 3: Command-Line Network Management
  Learner Objectives
  Command-Line Overview for Network Management
  Using the DCUI to Configure the Management Network
  Managing Virtual Networks
  Retrieving Network Port Information
  Setting Virtual Switch Attributes
  Listing, Creating, and Deleting Standard Switches
  Managing a VMkernel Port
  Listing, Adding, and Removing Port Groups
  Configuring a Port Group with a VLAN ID
  Linking and Unlinking Uplink Adapters
  Configuring the SNMP Agent
  Configuring DNS and Routing
  Managing Uplinks in Distributed Switches
  Using the net-dvs Command
  net-dvs Output (1)
  net-dvs Output (2)
  Lab 7
  Review of Learner Objectives
  Lesson 4: Troubleshooting Network Performance Problems
  Learner Objectives
  Review: Basic Troubleshooting Flow for ESXi Hosts
  Dropped Network Packets
  Dropped Receive Packets
  Dropped Transmit Packets
  Random Increase in Data Transfer Rate
  Networking Best Practices
  Review of Learner Objectives
  Key Points

MODULE 6: Storage Scalability
  You Are Here
  Importance
  Module Lessons
  Lesson 1: Storage APIs and Profile-Driven Storage
  Learner Objectives
  VMware vSphere Storage APIs - Array Integration
  VMware vSphere Storage APIs - Storage Awareness
  Benefits Provided by Storage Vendor Providers
  Configuring a Storage Vendor Provider
  Profile-Driven Storage
  Storage Capabilities
  Virtual Machine Storage Profiles
  Overview of Steps for Configuring Profile-Driven Storage
  Using the Virtual Machine Storage Profile
  Checking Virtual Machine Storage Compliance
  Identifying Advanced Storage Options
  N_Port ID Virtualization
  N_Port ID Virtualization Requirements
  vCenter Server Storage Filters
  Identifying and Tagging SSD Devices
  Configuring Software iSCSI Port Binding
  VMFS Resignaturing
  Pluggable Storage Architecture
  VMware Default Multipathing Plug-in
  Overview of the MPP Tasks
  Path Selection Example
  Lab 8
  Review of Learner Objectives
  Lesson 2: Storage I/O Control
  Learner Objectives
  What Is Storage I/O Control?
  Storage I/O Control Requirements
  Configuring Storage I/O Control
  Review of Learner Objectives
  Lesson 3: Datastore Clusters and Storage DRS
  Learner Objectives
  What Is a Datastore Cluster?
  Datastore Cluster Rules
  Relationship of Host Cluster to Datastore Cluster
  Storage DRS Overview
  Initial Disk Placement
  Migration Recommendations
  Configuration of Storage DRS Migration Thresholds
  Storage DRS Affinity Rules
  Adding Hosts to a Datastore Cluster
  Adding Datastores to the Datastore Cluster
  Storage DRS Summary Information
  Storage DRS Migration Recommendations
  Storage DRS Maintenance Mode
  Backups and Storage DRS
  Storage DRS Compatibility
  Storage DRS and Storage I/O Control
  Lab 9
  Review of Learner Objectives
  Key Points


MODULE 1

Course Introduction
Slide 1-1



Importance
Slide 1-2

This course will equip administrators with advanced skills for configuring and maintaining a highly available and scalable VMware vSphere environment. This course focuses on the optimization and scalability of VMware vSphere ESXi hosts and VMware vCenter Server. This course will help prepare IT professionals for the VMware Certified Advanced Professional Data Center Administration [V5] certification (VCAP5-DCA).


Learner Objectives
Slide 1-3


After this course, you should be able to do the following:

Configure and manage ESXi networking and storage for the large and sophisticated enterprise.
Manage changes to the vSphere environment.
Optimize the performance of all vSphere components.
Troubleshoot operational faults and identify their root causes.
Use the ESXi Shell and the VMware vSphere Management Assistant to manage vSphere.
Use VMware vSphere Auto Deploy to provision ESXi hosts.


You Are Here


Slide 1-4

Course Introduction
VMware Management Resources
Performance in a Virtualized Environment
Network Scalability
Network Optimization
Storage Scalability
Storage Optimization
CPU Optimization
Memory Performance
VM and Cluster Optimization
Host and Management Scalability


Typographical Conventions
Slide 1-5


The following typographical conventions are used in this course:

Monospace: Filenames, folder names, path names, command names (the bin directory)
Monospace bold: What the user types (Type ipconfig and press Enter.)
Boldface: Graphical user interface items (the Configuration tab)
Italic: Book titles and emphasis (vSphere Datacenter Administration Guide)
<filename>: Placeholders (<ESXi_host_name>)


VMware Certification
Slide 1-6

The VMware Certified Advanced Professional Data Center Administration [V5] certification (VCAP5-DCA) program is for technical professionals who can demonstrate their skills in vSphere and vCenter technologies in relation to the datacenter.

Certification levels:
VCDX: Expert
VCAP-DCA and VCAP-DCD: Advanced Professional
VCP: Professional

For details on VMware certifications, go to: http://mylearn.vmware.com/portals/certification

Your first level of certification is VMware Certified Professional on vSphere 5 (VCP5). After you achieve VCP5 status, you are eligible to pursue the next level of certification: VMware Certified Advanced Professional on vSphere 5 (VCAP5). This program is available in Datacenter Administration (DCA) or Datacenter Design (DCD) or both. This program is appropriate for VCPs who are ready to enhance their skills with the virtual infrastructure and add industry-recognized credentials to their list of accomplishments.

DCA is for system administrators, consultants, and technical support engineers who can demonstrate their skills in VMware vSphere and VMware vCenter datacenter technologies. These participants should also be able to demonstrate their knowledge of application and physical infrastructure services and their integration with the virtual infrastructure.

DCD is for IT architects and consulting architects who are capable of designing VMware solutions in a large, multisite enterprise environment. These architects have a deep understanding both of VMware core components and of the relationships of the core components to storage and networking. They also have a deep understanding of datacenter design methodologies. They know applications and physical infrastructure, and the relationships of applications and physical infrastructure to the virtual infrastructure.

This course gives you most of the information that you need for the VCAP5-DCA exam, but it does not give you everything. To best prepare for this exam, use the VCAP5-DCA exam blueprint as a study guide. The blueprint includes the list of topics in the exam and references for these topics. Hands-on experience is a key component to passing the exam.
To obtain the exam blueprint:


1. Click the VCAP5-DCA link on the VMware Certification Web page at http://mylearn.vmware.com/portals/certification.
2. Click the Exam Blueprint link near the bottom of the page.

After you achieve DCA and DCD status, you are eligible to pursue the highest level of certification: VMware Certified Design Expert (VCDX). This elite group includes design architects who are highly skilled in VMware enterprise deployments. The program is designed for veteran professionals who want to validate and demonstrate their expertise in VMware virtual infrastructure. For more about certifications, go to the certification Web page at http://mylearn.vmware.com/portals/certification.


VMware Online Resources


Slide 1-7

VMware Communities: http://communities.vmware.com
- Start a discussion, and access communities and user groups.

VMware Support: http://www.vmware.com/support
- Access the knowledge base, documentation, technical papers, and compatibility guides.

VMware Education: http://www.vmware.com/education
- Access the course catalog and worldwide course schedule.
- Access information about advanced courses to continue on your virtualization training path.

For easy access to online resources, install the VMware toolbar.

Making full use of VMware technical resources saves you time and money. Go first to the extensive VMware Web-based resources. The VMware Communities Web page provides tools and knowledge to help users maximize their investment in VMware products. VMware Communities provides information about virtualization technology in technical papers, documentation, a knowledge base, discussion forums, user groups, and technical newsletters. The VMware Support page provides a central point from which you can view support offerings, create a support request, and download products, updates, drivers and tools, and patches. The VMware Education page enables you to view the course catalog and the latest schedule of courses offered worldwide. This page also provides access information about the latest advanced courses offered worldwide. For quick access to communities, documentation, downloads, support information, and more, install the VMware Support Toolbar, a free download available at http://vmwaresupport.toolbar.fm. Periodically check the VMware Education Services Web page for the current list of released courses.
Since this course uses the command line quite a bit, you might point out a handy poster available at http://blogs.vmware.com/esxi/2011/09/vsphere-50-cli-reference-poster.html.


vSphere Production Documentation


Slide 1-8

For vSphere product documentation, go to:


http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html


See the current vSphere manuals at this Web site for information about configuration maximums and installation requirements.

vSphere documentation is available on the VMware Web site at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html. From this page, you have access to all the vSphere guides, including guides for optional modules or products. On this page, the vSphere 5.0 Documentation Center link provides access to the online version of all vSphere documentation.


MODULE 2

VMware Management Resources


Slide 2-1



You Are Here


Slide 2-2

Course Introduction
VMware Management Resources
Performance in a Virtualized Environment
Network Scalability
Network Optimization
Storage Scalability
Storage Optimization
CPU Optimization
Memory Performance
VM and Cluster Optimization
Host and Management Scalability


Importance
Slide 2-3

Performing configuration and troubleshooting tasks from the command line is a very useful skill that a VMware vSphere administrator should have. VMware vSphere Command-Line Interface (vCLI), which is available with VMware vSphere Management Assistant (vMA), provides the administrator with these command-line capabilities.



Learner Objectives
Slide 2-4

After this module, you should be able to do the following:

Understand the purpose of the vCLI commands.
Discuss the options for running commands.
Deploy and configure vMA.
Use vmware-cmd for virtual machine operations.


Methods to Run Commands


Slide 2-5

Ways to get command-line access on a VMware vSphere ESXi host:

VMware vSphere ESXi Shell



vMA, which includes the vCLI package

A few methods exist for accessing the command prompt on a VMware vSphere ESXi host. VMware recommends that you use VMware vSphere Command-Line Interface (vCLI) or VMware vSphere Management Assistant (vMA) to run commands against your ESXi hosts. Run commands directly in VMware vSphere ESXi Shell only in troubleshooting situations.
Be brief on the slide. The next few slides describe ESXi Shell, vCLI, and vMA in more detail.


ESXi Shell
Slide 2-6

ESXi Shell includes a set of fully supported ESXCLI commands and a set of commands for diagnosing and repairing ESXi hosts. Use ESXi Shell only at the request of VMware technical support.

You should be familiar with how ESXi Shell works in case VMware technical support directs you to use it.

ESXi Shell can be accessed:

- Locally, from the direct console user interface (DCUI)
- Remotely, from a Secure Shell (SSH) session

An ESXi system includes a direct console user interface (DCUI) that enables you to start and stop the system and perform a limited set of maintenance and troubleshooting tasks. The DCUI includes ESXi Shell, which is disabled by default. You can enable ESXi Shell in the DCUI or by using the VMware vSphere Client. You can enable local shell access or remote shell access:
- Local shell access enables you to log in to the shell directly from the DCUI.
- Secure Shell (SSH) is a remote shell that enables you to connect to the host with an SSH client, such as PuTTY.
ESXi Shell includes all ESXCLI commands, a set of deprecated esxcfg- commands, and a set of commands for troubleshooting and remediation.


Accessing ESXi Shell Locally


Slide 2-7

To access ESXi Shell locally, you require physical access to the DCUI and root privileges. By default, the local ESXi Shell is disabled.


Enable the local ESXi Shell from the DCUI or from the VMware vSphere Client. In the main DCUI screen, press Alt+F1 to open a virtual console window to the host.

After you enable ESXi Shell access, you can access the local shell.

If you have access to the DCUI, you can enable the ESXi Shell from there.
To enable the ESXi Shell in the DCUI:
1. In the DCUI of the ESXi host, press F2 and provide credentials when prompted.
2. Scroll to Troubleshooting Options and press Enter.
3. Select Enable ESXi Shell and press Enter. On the left, Enable ESXi Shell changes to Disable ESXi Shell. On the right, ESXi Shell is Disabled changes to ESXi Shell is Enabled.
4. Press Esc until you return to the main DCUI screen.


Accessing ESXi Shell Remotely


Slide 2-8

You can access ESXi Shell remotely with a Secure Shell (SSH) client, such as OpenSSH or PuTTY.

The SSH service must be enabled first.

This service is disabled by default.

Disable SSH access when you are done using it.

Enable SSH on an ESXi host only as a last resort for troubleshooting. Enabling SSH creates a major security vulnerability and consumes ESXi resources.

If you enable SSH access, do so only for a limited time. SSH should never be left open on an ESXi host in a production environment. If SSH is enabled for the ESXi Shell, you can run shell commands by using an SSH client, such as OpenSSH or PuTTY.

To enable SSH from the vSphere Client:
1. Select the host and click the Configuration tab.
2. Click Security Profile in the Software panel.
3. In Services, click Properties.
4. Select SSH and click Options.
5. Change the SSH options. To change the Startup policy across reboots, click Start and stop with host and reboot the host.
6. Click OK.
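Once SSH is enabled, a remote session might look like the following sketch (the host name is a placeholder from this course's lab environment):

  ssh root@esxi01.vclass.local      # log in to the host as root
  esxcli system version get         # run a command, for example, to display the ESXi version
  exit                              # log out, then disable SSH when you are done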


To enable the local or remote ESXi Shell from the vSphere Client:
1. Select the host and click the Configuration tab.
2. Click Security Profile in the Software panel.
3. In Services, click Properties.
4. Select ESXi Shell and click Options.
5. Change the ESXi Shell options. To change the Startup policy across reboots, click Start and stop with host and reboot the host.
6. Click OK.

The ESXi Shell timeout setting specifies how long, in minutes, you can leave an unused session open. By default, the timeout for the ESXi Shell is 0, which means the session remains open even if it is unused. If you change the timeout, for example, to 30 minutes, you have to log in again after the timeout period has elapsed.

To modify the ESXi Shell timeout in the Direct Console:
1. Select Modify ESXi Shell timeout and press Enter.
2. Enter the timeout value in minutes and press Enter.

To modify the timeout in the vSphere Client:
1. In the Configuration tab's Software panel, click Advanced Settings.
2. In the left panel, click UserVars.
3. Find UserVars.ESXiShellTimeOut and enter the timeout value in minutes.
4. Click OK.
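The UserVars.ESXiShellTimeOut advanced setting can also be changed remotely with vCLI. A minimal sketch, assuming a vMA session already initialized with vifptarget (the host name is a placeholder; verify the options against vicfg-advcfg --help for your vCLI version):

  vicfg-advcfg --vihost esxi01.vclass.local -s 30 UserVars.ESXiShellTimeOut   # set a 30-minute timeout
  vicfg-advcfg --vihost esxi01.vclass.local -g UserVars.ESXiShellTimeOut      # read the value back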


vCLI
Slide 2-9

The vCLI command set enables you to run common system administration commands against ESXi hosts. You can run most vCLI commands against a VMware vCenter Server system and target the ESXi hosts that it manages. vCLI commands normally require the following options to connect and log in to a server:

--server <name>
--username <user>
--password <string>

vCLI commands run on top of the VMware vSphere SDK for Perl. vCLI commands are available as a standalone installation package for Linux or Windows systems, or packaged with vMA.

vCLI provides a command-line interface for ESXi hosts. Multiple ESXi hosts can be managed from a central system on which vCLI is installed. Normally, vCLI commands require you to enter options that specify the server name, the user name, and the password for the server that you want to run the command against. Methods exist that enable you to bypass entering the user name and password options, and, sometimes, the server name option. Two of these methods are described later in this module. For details about vCLI, see Getting Started with vSphere Command-Line Interfaces and vSphere Command-Line Interface Concepts and Examples at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.


vMA
Slide 2-10

vMA is a virtual appliance that includes the following:
- SUSE Linux Enterprise Server 11 SP1
- VMware Tools
- vCLI
- vSphere SDK for Perl
- Java JRE version 1.6
- vi-fastpass, an authentication component for the appliance

vMA is a downloadable appliance that includes several components, including vCLI. vMA enables administrators to run scripts or agents that interact with ESXi hosts and VMware vCenter Server systems without having to authenticate each time. vMA is easy to download, install, and configure through the vSphere Client.


vMA Hardware and Software Requirements


Slide 2-11

Hardware requirements:
- AMD Opteron, rev E or later
- Intel processors with EM64T and VT enabled

Software requirements (vMA can be deployed on the following):
- vSphere 4.0 Update 2 or later
- vSphere 4.1 and 5.0
- vCenter Server 4.0 Update 2 or later
- vCenter Server 4.1 and 5.0

By default, vMA uses the following:
- One virtual processor
- 600MB of RAM
- 3GB virtual disk

To set up vMA, you must have an ESXi host. Because vMA runs a 64-bit Linux guest operating system, the ESXi host on which it runs must support 64-bit virtual machines. The 3GB virtual disk size requirement might increase, depending on the extent of centralized logging enabled on the vMA appliance. The recommended memory for vMA is 600MB.


Configuring vMA
Slide 2-12

Deploy → Configure → Add Targets → Authenticate

To set up a vMA appliance:
1. Deploy vMA from a URL or a downloaded file.
2. Configure vMA virtual machine network and time-zone settings.
3. Add target servers to vMA. Target servers include the vCenter Server system or ESXi hosts or both.
4. Initialize vi-fastpass authentication.

You have to initialize vi-fastpass only if you want to enter vCLI commands without specifying a user name and password for the vCenter Server system or an ESXi host.
Be brief on the slide. The next few slides discuss these tasks (deploy, configure, add targets, and authenticate) in more detail.


Connecting to the Infrastructure


Slide 2-13

(Diagram: vMA command paths. vMA sends commands to the vCenter Server system and to ESXi hosts through the vSphere SDK for Perl API; vCenter Server forwards commands to ESXi hosts over a private vCenter protocol.)

vMA commands directly targeted at the hosts are sent using the VMware vSphere SDK for Perl API. Commands sent to the host through the vCenter Server system are first sent to the vCenter Server system, using the vSphere SDK for Perl API. Using a private protocol that is internal to vCenter Server, commands are sent from the vCenter Server system to the host.


Deploying vMA
Slide 2-14

Deploy vMA like any other virtual appliance.


vMA is deployed like any other virtual appliance. After the appliance is deployed to the infrastructure, the user can power it on and start configuring vMA. The vMA appliance is available from the download page on the VMware Web site.


Configuring vMA
Slide 2-15

Configure vMA at the command prompt or through the Web interface:
- https://<appliance_name_or_IP_address>:5480 (for example, https://vma.vclass.local:5480)
- Log in as vi-admin.

From the Web interface, you can do the following:
- Configure time-zone settings
- Configure network and proxy server settings
- Update vMA to the latest version

After vMA is deployed, your next step is to configure the appliance. When you start the vMA virtual machine the first time, you can configure it. The appliance can be configured either by opening a console to the appliance or by pointing a Web browser to the appliance. The vi-admin account is the administrative account on the vMA appliance and exists by default. During the initial power-on, you are prompted to choose a password for this user account. Although the vMA appliance is Linux-based, logging in as root has been disabled.


Adding a Target Server


Slide 2-16

A target server is a server that you access from vMA: either a vCenter Server system or an ESXi host.

To add a vCenter Server system as a target server:
1. Log in as vi-admin.
2. Run vifp addserver <vCenter_Server_system>.
   a. Enter a vCenter Server user name with administrator privilege.
   b. Enter the user's password.
   c. Agree to store this information in the credential store.
3. Run vifp listservers to verify that the vCenter Server system has been added as a target.
4. Run vifptarget -s <vCenter_Server_system> to set the target as the default for the current vMA session.
5. Test operation by running vicfg-nics -l --vihost <ESXi_host>.
After you configure vMA, you can add target servers that run the supported vCenter Server or ESXi versions. The vifp interface enables administrators to add, list, and remove target servers and to manage the vi-admin user's password. After a server is added as a vMA target, you must run the vifptarget command. This command enables seamless authentication for remote vCLI and vSphere SDK for Perl API commands. Run vifptarget -s <server> before you run vCLI commands or vSphere SDK for Perl scripts against that system. The system remains a vMA target across vMA reboots, but running vifptarget again is required after each logout. You can establish multiple servers as target servers and then call vifptarget once to initialize all servers for vi-fastpass authentication. You can then run commands against any target server without additional authentication. You can use the --server option to specify the server on which to run commands.
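Putting these commands together, a first session on a newly configured vMA appliance might look like the following (server names are placeholders from this course's lab environment):

  vifp addserver vc01.vclass.local             # add the vCenter Server system as a target
  vifp listservers                             # verify that the target was added
  vifptarget -s vc01.vclass.local              # initialize vi-fastpass for this session
  vicfg-nics -l --vihost esxi01.vclass.local   # test: list physical NICs on a managed host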


vMA Authentication
Slide 2-17

The vi-fastpass authentication component supports unattended authentication to vCenter Server system or ESXi host targets:
- Prevents the user from having to continually add login credentials to every command being executed
- Facilitates unattended scripted operations

(Diagram: vMA sends authenticated commands to, and collects logging from, ESXi hosts and vCenter Server systems.)

The vMA authentication interface enables users and applications to authenticate with the target servers by using vi-fastpass or Active Directory (AD). While adding a server as a target, the administrator can determine whether the target must use vi-fastpass or AD authentication. For vi-fastpass authentication, the credentials that a user has on the vCenter Server system or ESXi host are stored in a local credential store. For AD authentication, the user is authenticated with an AD server.
When you add an ESXi host as a fastpass target server, vi-fastpass creates two users with obfuscated passwords on the target server and stores the password information on vMA:
- vi-admin, with administrator privileges
- vi-user, with read-only privileges
The creation of vi-admin and vi-user does not apply for AD authentication targets. When you add a system as an AD target, vMA does not store information about the credentials. To use AD authentication, the administrator must configure vMA for AD.


Joining vMA to Active Directory


Slide 2-18

vMA can be configured for Active Directory (AD), so the ESXi hosts and vCenter Server systems can be added to vMA without having to store passwords in the vMA credential store.

(Diagram: vMA, joined to Active Directory, manages vCenter Server systems and ESXi hosts that are also joined to the domain.)

Configure vMA for Active Directory authentication so that ESXi hosts and vCenter Server systems added to Active Directory can be added to vMA. Joining vMA to Active Directory prevents you from having to store the passwords in the vMA credential store. This approach is a more secure way of adding targets to vMA. Ensure that the DNS server configured for vMA is the same as the DNS server of the domain. You can change the DNS server by using the vMA console or the Web UI. Ensure that the domain is accessible from vMA. Ensure that you can ping the ESXi and vCenter Server systems that you want to add to vMA, and that pinging resolves the IP address to the target server's domain.
To add vMA to a domain:
1. From the vMA console, run the following command:
   sudo domainjoin-cli join <domain_name> <domain_admin_user>
2. When prompted, provide the Active Directory administrator's password.
3. Restart vMA.
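For example, with a hypothetical lab domain and domain administrator account, the command in step 1 might be:

  sudo domainjoin-cli join vclass.local Administrator   # supply the AD password when prompted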


Command Structure
Slide 2-19

vCLI syntax on a vMA appliance:


<command> <conn_options> <target_option> <command_options>

A vCLI command targeted directly at an ESXi host:


vicfg-nics --server ESXa --username root --password vmware -l

A vCLI command targeted at an ESXi host through a vCenter Server instance:


vicfg-nics --server vC1 --username vcadmin --password vmware --vihost ESXa -l

The slide shows syntax and vCLI command examples. The <target_option> is necessary only if you are sending the command to an ESXi host through the vCenter Server system. In this case, the <target_option> specifies to which host the vCenter Server system should forward the command. The example command (vicfg-nics -l) displays information about the physical network interface cards on an ESXi host.


vMA Commands
Slide 2-20

vMA includes the following commands:
- esxcli
- resxtop
- svmotion
- vicfg-* commands
- esxcfg-* commands (deprecated)
- vifs
- vihostupdate
- vmkfstools
- vmware-cmd

The vCLI command set is part of vMA. For more information about the commands included in vMA, see Getting Started with vSphere Command-Line Interfaces and vSphere Command-Line Interface Concepts and Examples at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.


esxcfg Commands
Slide 2-21

Many scripts used esxcfg commands to manage VMware ESX/ESXi 3.x and 4.x hosts: in vCLI, many vicfg commands are equivalent to those esxcfg commands.

Commands that use esxcfg are still available for compatibility reasons and might become obsolete. Use vicfg commands when developing new scripts.

For many of the vCLI commands, you might have used scripts with corresponding service console commands with an esxcfg prefix to manage ESX 3.x hosts. To facilitate easy migration from ESX/ESXi 3.x to later versions of ESXi, a copy of each vicfg- command that uses an esxcfg- prefix is included in the vCLI package. Commands with the esxcfg prefix are available mainly for compatibility reasons and might become obsolete.
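As a short sketch (the host name is hypothetical), the following two invocations are equivalent; the first uses the compatibility esxcfg prefix and the second the preferred vicfg prefix. Both prompt for the password because it is omitted:
esxcfg-nics --server esxi01 --username root -l
vicfg-nics --server esxi01 --username root -l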


esxcfg Equivalent vicfg Commands Examples


Slide 2-22

esxcfg Command       Equivalent vicfg Command
esxcfg-advcfg        vicfg-advcfg
esxcfg-cfgbackup     vicfg-cfgbackup
esxcfg-nics          vicfg-nics
esxcfg-vswitch       vicfg-vswitch

The slide lists some examples of vicfg commands for which an esxcfg prefix is available.
A complete list of commands is available in the vSphere 5 Documentation Center.


Managing Hosts with vMA


Slide 2-23

Host management task                                  vMA command
Reboot and shut down hosts.                           vicfg-hostops
Enter and exit maintenance mode.                      vicfg-hostops
Back up and restore host configuration settings.      vicfg-cfgbackup
Add ESXi hosts to an Active Directory domain.         vicfg-authconfig

Host management commands can stop and reboot ESXi hosts, back up configuration information, and manage host updates. You can also use a host management command to make your host join an AD domain or exit from a domain.
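As a minimal sketch of the two tasks that are not illustrated elsewhere in this module (the host name, file path, domain, and account names are hypothetical), configuration backup and AD join might look like the following:
Back up the host configuration to a file:
vicfg-cfgbackup --server esxi01 --username root -s /tmp/esxi01.cfg
Restore the host configuration from that file:
vicfg-cfgbackup --server esxi01 --username root -l /tmp/esxi01.cfg
Join the host to an Active Directory domain:
vicfg-authconfig --server esxi01 --username root --authscheme AD --joindomain vclass.local --adusername administrator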


Common Connection Options for vCLI Execution


Slide 2-24

Connection Option           Description
--cacertsfile               Specifies the CA certificate file
--config                    Path to a configuration file
--credstore                 Name of a credential store file
--encoding                  Specifies the encoding to use
--passthroughauth           Use Microsoft Windows Security SSPI
--passthroughauthpackage    Specifies the domain-level authentication protocol to be used
--password                  Login password
--portnumber                Uses the specified port to connect

The slide lists options that are available for all vCLI commands.
--cacertsfile
Used to specify the Certificate Authority file, in PEM format, to verify the identity of the vCenter Server system or the ESXi host to run the command on. Can be used, for example, to prevent man-in-the-middle attacks.
--config
Uses the configuration file at the specified location. Specify a path that is readable from the current directory.
--credstore
Name of a credential store file. Defaults to <HOME>/.vmware/credstore/vicredentials.xml on Linux and <APPDATA>/VMware/credstore/vicredentials.xml on Windows. Commands for setting up the credential store are included in the vSphere SDK for Perl.
--encoding
Specifies the encoding to be used. Use --encoding to specify the encoding vCLI should map to when it is running on a foreign language system.
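As an illustrative sketch (the file path, server name, and credentials are hypothetical), a configuration file passed with --config can hold the common connection variables so that they do not have to be typed with every command. Because the file stores a password, protect it with restrictive file permissions:
Contents of a configuration file, for example /home/vi-admin/visdkrc:
VI_SERVER = vc01.vclass.local
VI_USERNAME = administrator
VI_PASSWORD = vmware1!
Using the configuration file:
vicfg-nics --config /home/vi-admin/visdkrc --vihost esxi01 -l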

--passthroughauth
This option specifies that the system should use the Microsoft Windows Security Support Provider Interface (SSPI) for authentication. Trusted users are not prompted for a user name and password.
--passthroughauthpackage
Use this option with --passthroughauth to specify a domain-level authentication protocol to be used by Windows. By default, SSPI uses the Negotiate protocol, which means that the client and server negotiate a protocol that both support.
--password
Uses the specified password when used with --username to log in to the server.
--portnumber
Uses the specified port to connect to the system specified by --server. Default port is 443.


Common Connection Options for vCLI Execution (2)


Slide 2-25

Connection Option    Description
--protocol           Uses the specified protocol to connect
--savesessionfile    Saves the session to the specified file
--server             The ESXi host or vCenter Server system to connect to
--sessionfile        Uses the specified file to load a saved session
--url                Connects to the specified vSphere Web Services SDK URL
--username           User name to log in to the system
--vihost             Name of the ESXi host to run the command against

--protocol
Uses the specified protocol to connect to the system specified by --server. Default is HTTPS.
--savesessionfile
Saves a session to the specified file. The session expires if it has been unused for 30 minutes.
--server
Uses the specified ESXi or vCenter Server system. Default is localhost.
--sessionfile
Uses the specified session file to load a previously saved session. The session must be unexpired.
--url
Connects to the specified vSphere Web Services SDK URL.


--username
Uses the specified user name. If you do not specify a user name and password on the command line, the system prompts you and does not echo your input to the screen.
--vihost
When you run a vCLI command with the --server option pointing to a vCenter Server system, use --vihost to specify the ESXi host to run the command against.


vicfg Command Example


Slide 2-26

Use the vicfg-hostops command with the shutdown or reboot operation:

vicfg-hostops <conn_options> --operation shutdown <cmd_options>


vicfg-hostops <conn_options> --operation reboot <cmd_options>

Examples:
vicfg-hostops --server esxi01 --username root --password vmware1! --operation shutdown
vicfg-hostops --server esxi01 --username root --password vmware1! --operation reboot --force
vicfg-hostops --server esxi01 --username root --operation shutdown --cluster LabCluster

The command prompts for user names and passwords if you do not specify them.

An ESXi host can be shut down and restarted by using the vicfg-hostops command options. If a host managed by vCenter Server is shut down by this command, the host is disconnected from vCenter Server but not removed from the inventory. No equivalent ESXCLI command is available. You can shut down or reboot all hosts in a cluster or datacenter by using the --cluster or
--datacenter option.

In the first and second examples, the connection options (<conn_options>) used are --server, --username, and --password. But in the third example, the --password option is omitted. In this case, you are prompted to enter the password when you run this command.
NOTE

vicfg- commands will be deprecated in future releases. Use esxcli commands instead where possible.


Entering and Exiting Host Maintenance Mode


Slide 2-27

Use the vicfg-hostops command with the enter, exit, or info operations:

vicfg-hostops <conn_options> --operation enter <cmd_options>
vicfg-hostops <conn_options> --operation exit <cmd_options>
vicfg-hostops <conn_options> --operation info <cmd_options>

Examples:
vicfg-hostops --server vc01 --username administrator --operation info --cluster LabCluster
vicfg-hostops --server vc01 --username administrator --operation enter --action poweroff

vicfg-hostops:
Does not work with VMware vSphere Distributed Resource Scheduler
Suspends the virtual machines by default; use the --action poweroff option to power off virtual machines instead

A host can be placed in maintenance mode by using the vicfg-hostops command. When the command is run, the host does not enter maintenance mode until all of the virtual machines running on the host are either shut down, migrated, or suspended.
vicfg-hostops does not work with VMware vSphere Distributed Resource Scheduler.

You can put all hosts in a cluster or datacenter in maintenance mode by using the --cluster or
--datacenter option.

The --operation info option can be used to check whether the host is in maintenance mode or in the Entering Maintenance Mode state.
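As a short sketch (the server and host names are hypothetical), a typical maintenance sequence checks the state, enters maintenance mode, and later exits it:
vicfg-hostops --server vc01 --username administrator --vihost esxi01 --operation info
vicfg-hostops --server vc01 --username administrator --vihost esxi01 --operation enter
vicfg-hostops --server vc01 --username administrator --vihost esxi01 --operation exit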


vmware-cmd Overview
Slide 2-28

The vmware-cmd command can be used to perform virtual machine operations. vmware-cmd provides an interface to perform operations on a virtual machine. Examples of virtual machine operations are the following:
Retrieve power state
Register a virtual machine
Unregister a virtual machine
Set virtual machine configuration variables
Manage virtual machine snapshots

vmware-cmd was included in earlier versions of the ESX Service Console. A vmware-cmd command has been available in the vCLI package since ESXi version 3.0. vmware-cmd is a legacy tool and supports the usage of VMware vSphere VMFS (VMFS) paths for virtual machine configuration files. As a rule, use datastore paths to access virtual machine configuration files. You can manage virtual machines with the vSphere Client or the vmware-cmd vCLI command. Using vmware-cmd, you can do the following:
Register and unregister virtual machines
Retrieve virtual machine information
Manage snapshots
Turn the virtual machine on and off
Add and remove virtual devices
Prompt for user input


Format for Specifying Virtual Machines with vmware-cmd


Slide 2-29

A virtual machine path is usually required when you run vmware-cmd. You can specify the virtual machine path by using one of the following formats:
[mystorage] testvms/VM1/VM1.vmx
/vmfs/volumes/mystorage1/testvms/VM1/VM1.vmx
Examples:
Register a virtual machine:
vmware-cmd <connection_options> -s register /vmfs/volumes/mystorage/MyVM/MyVM.vmx
Power on a virtual machine:
vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esxi_host> /vmfs/volumes/mystorage/testvm/testvm.vmx start soft

When you run vmware-cmd, the virtual machine path is usually required. You can specify the virtual machine by using one of the following formats:
Datastore prefix style: '[ds_name] relative_path', for example:
'[myStorage1] testvms/VM1/VM1.vmx' (Linux)
"[myStorage1] testvms/VM1/VM1.vmx" (Windows)
UUID-based path: folder/subfolder/file, for example:
'/vmfs/volumes/mystorage/testvms/VM1/VM1.vmx' (Linux)
"/vmfs/volumes/mystorage/testvms/VM1/VM1.vmx" (Windows)


Connection Options for vmware-cmd


Slide 2-30

Connection Option    Description
-H                   Specifies an ESXi host or a vCenter Server system
-h | --vihost        Specifies the target ESXi host if the -H option specifies a vCenter Server system
-O                   Specifies an alternate port. The default is 902.
-U                   Specifies the name of the user who connects to the target
-P                   Specifies the password of the user
--config             Specifies the location of a configuration file that specifies connection information
--credstore          Specifies the name of a credential store file
--sessionfile        Specifies the name of a session file that was saved earlier

The vmware-cmd vCLI command supports only the following connection options. Other vCLI connection options are not supported; for example, you cannot use variables because the corresponding option is not supported. The connection options for vmware-cmd are different from the connection options that are used for vicfg- commands.
-H <host>
Target ESXi or vCenter Server system.
-h <target>
When you run vmware-cmd with the -H option pointing to a vCenter Server system, use -h to specify the ESXi host to run the command against.
-O <port>
Alternative connection port. The default port number is 902.
-U <username>
User who is authorized to log in to the host that is specified by -H or -h.
-P <password>
Password of the user who is specified by -U. Required if a user is specified.
--config <connection_config_file>
Location of a configuration file that specifies connection information.


--credstore <cred_store>
Name of a credential store file.
--sessionfile <session_file>
Name of a session file that was saved earlier by using the vSphere SDK for Perl session/save_session.pl script.


Managing Virtual Machines Using vmware-cmd


Slide 2-31

Virtual machine operations that can be performed with vmware-cmd:
Listing and registering virtual machines
Retrieving virtual machine attributes
Managing virtual machine snapshots
Powering virtual machines on and off
Connecting and disconnecting virtual devices
Working with the AnswerVM API


Listing and Registering Virtual Machines with vmware-cmd


Slide 2-32

Registering a virtual machine means adding the virtual machine to the inventory of the vCenter Server system or the ESXi host. Examples:
List all registered virtual machines:
vmware-cmd <connection_options> -l
Register a virtual machine:
vmware-cmd <connection_options> -s register /vmfs/volumes/Storage1/testvm/testvm.vmx

Registering or unregistering a virtual machine means adding the virtual machine to the vCenter Server or ESXi inventory or removing the virtual machine. If you register a virtual machine with a vCenter Server system, and then remove it from the ESXi host, an orphaned virtual machine results. Call vmware-cmd -s unregister with the vCenter Server system as the target to resolve the issue.
To list, unregister, and register virtual machines:
1. Run vmware-cmd -l to list all registered virtual machines on a server.
vmware-cmd -H <vc_server> -U <login_user> -P <login_password> --vihost <esx_host> -l
2. Run vmware-cmd -s unregister to remove a virtual machine from the inventory. The system returns 0 to indicate success and 1 to indicate failure.
vmware-cmd -H <vc_server> -U <login_user> -P <login_password> --vihost <esx_host> -s unregister /vmfs/volumes/Storage2/testvm/testvm.vmx
3. Run vmware-cmd -s register to add a virtual machine to the inventory.
vmware-cmd -H <vc_server> -U <login_user> -P <login_password> --vihost <esx_host> -s register /vmfs/volumes/Storage2/testvm/testvm.vmx


Retrieving Virtual Machine Attributes with vmware-cmd


Slide 2-33

You can use vmware-cmd to retrieve a number of different virtual machine attributes. Examples:
The getuptime option retrieves the uptime of the guest operating system:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx getuptime
The getproductinfo options list the VMware product information:
getproductinfo product
getproductinfo platform
getproductinfo build, majorversion, minorversion
The gettoolslastactive option indicates whether VMware Tools is installed and responding:
gettoolslastactive

vmware-cmd includes options for retrieving information about a virtual machine. Each option requires that you specify the virtual machine path. You must also specify connection options, which differ from those used by other vCLI commands.

You can use vmware-cmd options to retrieve several different virtual machine attributes: The getuptime option retrieves the uptime of the guest operating system on the virtual machine, in seconds.
vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esx_host> /vmfs/volumes/Storage2/testvm/testvm.vmx getuptime

The getproductinfo product option lists the VMware product that the virtual machine runs on.
vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esx_host> /vmfs/volumes/Storage2/testvm/testvm.vmx getproductinfo product

The getproductinfo platform option lists the platform that the virtual machine runs on.
vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esx_host> /vmfs/volumes/Storage2/testvm/testvm.vmx getproductinfo platform


The getproductinfo build, getproductinfo majorversion, or getproductinfo minorversion options retrieve version information. The getstate option retrieves the execution state of the virtual machine, which can be on, off, suspended, or unknown.
vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esx_host> /vmfs/volumes/Storage2/testvm/testvm.vmx getstate

The gettoolslastactive option indicates whether VMware Tools is installed and whether the guest operating system is responding normally.
vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esx_host> /vmfs/volumes/Storage2/testvm/testvm.vmx gettoolslastactive


Managing Virtual Machine Snapshots with vmware-cmd


Slide 2-34

You can take a snapshot while a virtual machine is running, shut down, or suspended. Examples:
To take a snapshot, you must specify the name, the description, the quiesce flag (0 or 1), and the memory flag (0 or 1):
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx createsnapshot VMSnap01 "Test Snapshot Apr 12 2012" 0 0
Check whether a virtual machine has a snapshot:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx hassnapshot
Revert to the current snapshot:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx revertsnapshot
Remove all snapshots:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx removesnapshots

A snapshot captures the entire state of the virtual machine at the time you take the snapshot. Virtual machine state includes the following aspects of the virtual machine:
Memory state. Contents of the virtual machine's memory.
Settings state. Virtual machine settings.
Disk state. State of all the virtual machine's virtual disks.
When you revert to a snapshot, you return these items to the state they were in at the time that you took the snapshot. If you want the virtual machine to be running or to be shut down when you start it, make sure that it is in that state when you take the snapshot. You cannot use vmware-cmd to revert to a named snapshot. To revert to a named snapshot, use the vSphere Client. Use the hassnapshot option to check whether a virtual machine already has a snapshot. The command returns 1 if the virtual machine already has a snapshot. Otherwise, the command returns 0. The removesnapshots option removes all snapshots associated with a virtual machine. You can use snapshots as restoration points when you install update packages or during a branching process, such as installing different versions of a program. Taking snapshots ensures that each installation begins from an identical baseline.
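As a small sketch (the snapshot name and description are illustrative), a snapshot that quiesces the guest file system and also captures memory sets both flags to 1:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx createsnapshot PrePatch "Before applying updates" 1 1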

Powering Virtual Machines On and Off with vmware-cmd


Slide 2-35

You can start, stop, and suspend virtual machines by using vmware-cmd. With hard power operations, vmware-cmd immediately and unconditionally shuts down, resets, or suspends the virtual machine. Examples:
Power on a virtual machine:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx start soft
Suspend a virtual machine gracefully:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx suspend soft

You can start, reboot, stop, and suspend virtual machines by using vmware-cmd. You must supply a value for the powerop_mode flag, which can be soft or hard. You must have the current version of VMware Tools installed and running in the guest operating system to use a soft power operation. When you specify soft as the powerop_mode value, the result of the call depends on the operation:
Stop: vmware-cmd attempts to shut down the guest operating system and powers off the virtual machine.
Reset: vmware-cmd attempts to shut down the guest operating system and reboots the virtual machine.
Suspend: vmware-cmd attempts to run a script in the guest operating system before suspending the virtual machine.
When you specify hard power operations, vmware-cmd immediately and unconditionally shuts down, resets, or suspends the virtual machine.
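For example (connection options elided as on the slide), the difference between a graceful power-off and a forced power-off is only the powerop_mode argument:
Graceful guest shutdown, which requires VMware Tools:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx stop soft
Immediate, unconditional power-off:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx stop hard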


Connecting Devices with vmware-cmd


Slide 2-36

You can connect and disconnect devices by using vmware-cmd. Examples:
Connect an IDE CD-ROM to a virtual machine:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx connectdevice "CD/DVD Drive 2"
Disconnect an IDE CD-ROM from a virtual machine:
vmware-cmd <connection_options> /vmfs/volumes/Storage1/testvm/testvm.vmx disconnectdevice "CD/DVD Drive 2"

You can connect and disconnect virtual devices by using the connectdevice and disconnectdevice options. The selected guest operating system determines which of the available devices you can add to a given virtual machine. The virtual hardware that you connect is displayed in the hardware list in the Virtual Machine Properties wizard. You can reconfigure virtual machine hardware while the virtual machine is running, if the following conditions are met:
The virtual machine has a guest operating system that supports hot-plug functionality. See the operating system installation documentation.
The virtual machine is using hardware version 7.
The following examples illustrate connecting and disconnecting a virtual device:
The connectdevice option connects the virtual IDE device CD/DVD Drive 2 to the specified virtual machine.
vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esx_host> /vmfs/volumes/Storage2/testvm/testvm.vmx connectdevice "CD/DVD Drive 2"


The disconnectdevice option disconnects the virtual device.


vmware-cmd -H <vc_system> -U <user> -P <password> --vihost <esx_host> /vmfs/volumes/Storage2/testvm/testvm.vmx disconnectdevice "CD/DVD Drive 2"


Working with the AnswerVM API


Slide 2-37

The AnswerVM API allows users to provide input to questions.

Some virtual machine operations can be blocked until a question is answered. vmware-cmd --answer allows you to access the input. This option can be used when you want to configure a virtual machine based on a user's input.


The AnswerVM API allows users to provide input to questions, thereby allowing blocked virtual machine operations to complete. The vmware-cmd --answer option allows you to access the input. You might use this option when you want to configure a virtual machine based on a user's input. For example:
The user clones a virtual machine and provides the default virtual disk type.
When the user powers on the virtual machine, it prompts for the desired virtual disk type.


esxcli Command Hierarchies


Slide 2-38

The esxcli command hierarchy contains the following namespaces:
esxcli fcoe namespace
esxcli hardware namespace
esxcli iscsi namespace
esxcli license namespace
esxcli network namespace
esxcli software namespace
esxcli storage namespace
esxcli system namespace
esxcli vm namespace

You can manage many aspects of an ESXi host with the ESXCLI command set. You can run ESXCLI commands as vCLI commands or run them in the ESXi Shell in troubleshooting situations. The slide lists the hierarchy of namespaces and commands for each ESXCLI namespace.
NOTE

vicfg- commands will be deprecated in future releases. Use esxcli commands instead where possible.
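As a brief sketch (connection options elided), each namespace contains nested namespaces and commands, and appending --help at any level lists what is available:
List storage devices in the storage namespace:
esxcli <conn_options> storage core device list
List VMkernel network interfaces in the network namespace:
esxcli <conn_options> network ip interface list
Show the namespaces and commands available under system:
esxcli <conn_options> system --help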


Example esxcli command


Slide 2-39

Use the esxcli command with the vm namespace to list all the virtual machine processes.

esxcli <conn_options> vm process list


With the esxcli vm command, you can display all the virtual machine processes on the ESXi system. This command lists only running virtual machines on the system.
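A common troubleshooting use of this namespace is to terminate an unresponsive virtual machine. The world ID below is hypothetical; take the real value from the list output:
esxcli <conn_options> vm process list
esxcli <conn_options> vm process kill --type soft --world-id 123456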


Lab 1
Slide 2-40

In this lab, you will use vMA to manage ESXi hosts and virtual machines.
1. Access your student desktop system.
2. Verify that your vSphere licenses are valid.
3. Power on a virtual machine.
4. Enable the ESXi Shell and SSH services.
5. Log in to vMA and connect to your vCenter Server and ESXi host.
6. Use esxcli commands to retrieve information about an ESXi host.
7. Use vicfg commands to configure NTP.
8. Use vmware-cmd to manage virtual machines.


Review of Learner Objectives


Slide 2-41

You should be able to do the following:
Understand the purpose of the vCLI commands.
Discuss the options for running commands.
Deploy and configure vMA.
Use vmware-cmd for virtual machine operations.


Key Points
Slide 2-42

vCLI is a command-line interface to manage the infrastructure, either by running commands or by executing scripts.
vMA is a virtual appliance that is used to manage the infrastructure at the command prompt. vMA includes vCLI.
vMA can be used to monitor, configure, and manage hosts, storage, and virtual networking.

Questions?


MODULE 3

Performance in a Virtualized Environment


Slide 3-1



You Are Here


Slide 3-2

Course Introduction
VMware Management Resources
Performance in a Virtualized Environment
Network Scalability
Network Optimization
Storage Scalability
Storage Optimization
CPU Optimization
Memory Performance
VM and Cluster Optimization
Host and Management Scalability


Importance
Slide 3-3

Virtual machines and physical computers have many important differences regarding performance. These differences include the monitoring tools available and the overall performance troubleshooting methodology. The virtual machine monitor (VMM) is a key component in the VMware vSphere environment. The mode in which the VMM runs can significantly affect application performance. You should understand the VMM modes and when to override the default.



Module Lessons
Slide 3-4

Lesson 1: Performance Overview
Lesson 2: Virtual Machine Monitor
Lesson 3: Monitoring Tools


Lesson 1: Performance Overview


Slide 3-5



Learner Objectives
Slide 3-6

After this module, you should be able to do the following:
Review the components of the VMware vSphere ESXi architecture.
Describe the fundamentals of performance.
Define a performance problem.
Discuss the vSphere performance troubleshooting methodology.


Virtualization Architectures
Slide 3-7

Hosted architecture:
VMware Workstation
VMware Fusion
Bare-metal architecture (hypervisor):
ESXi


VMware offers two types of virtualization architectures: hosted and bare-metal. In a hosted architecture, the virtualization layer runs as an application on top of an operating system and supports the broadest range of hardware configurations. Examples of hosted architecture are VMware Workstation and VMware Fusion. In contrast, bare-metal (or hypervisor) architecture installs the virtualization layer directly on a clean x86-based system. Because a hypervisor has direct access to the hardware resources, rather than going through an operating system, it is more efficient than a hosted architecture. As a result, a hypervisor delivers greater performance and scalability.


VMware ESXi Architecture


Slide 3-8

(Slide diagram: two guests, each with TCP/IP and file system stacks, run on virtual machine monitors that provide virtual NICs and virtual SCSI; the VMkernel beneath them contains the scheduler, memory allocator, virtual switch, file system, and NIC and I/O drivers, all running on the physical hardware.)
Key callouts:
CPU is controlled by the scheduler and virtualized by the VMM.
Memory is allocated by the VMkernel and virtualized by the VMM.
Network and I/O devices are proxied through native device drivers.

VMware vSphere ESXi is a platform for running many virtual machines on a single physical machine. The VMkernel does not run virtual machines directly. Instead, it runs a virtual machine monitor (VMM) that in turn is responsible for execution of the virtual machine. The VMM is a thin layer that provides virtual x86 hardware to the overlying operating system. It is through the VMM that the virtual machine leverages key technologies in the VMkernel, such as memory management, scheduling, and the network and storage stacks. Each VMM is devoted to one virtual machine. To run multiple virtual machines, the VMkernel starts multiple VMM instances. The VMM handles CPU and memory virtualization as well as some aspects of device virtualization. The VMkernel storage and network stacks manage I/O requests to host devices through their device drivers. The VMkernel resource management component partitions the physical resources by using a proportional share scheduler to allocate CPU and memory, network traffic shaping to allocate network bandwidth, and a mechanism to allocate disk I/O bandwidth.


Virtualization Performance: First Dimension


Slide 3-9

First dimension: a single virtual machine on a single physical machine.
The hypervisor sits between the virtual machine and the physical hardware.
Factor that affects performance: VMM overhead.
(Slide diagram: a virtual machine with a vCPU runs on the hypervisor, which runs on the host's physical CPU.)

A virtualized environment has various factors that affect performance. These factors depend on the dimension of virtualization. The first dimension of virtualization is running a single virtual machine on a physical machine. A physical environment has a one-to-one correspondence between the guest operating system and the physical hardware. The guest operating system and its applications run directly on the hardware of the physical machine. However, in a virtualized environment, the guest operating system and its applications do not run directly on the physical hardware. Instead, they run in a virtual machine. Sitting between the virtual machine and the physical machine is the hypervisor, which is where the VMM resides. When running a single virtual machine on a physical machine, VMM overhead is the main factor that affects performance.


Virtualization Performance: Second Dimension


Slide 3-10

Second dimension: multiple virtual machines on a single physical machine.
The hypervisor sits between the virtual machines and the physical hardware.
Factors that affect performance: scheduling overhead and insufficient resources, such as network and storage bandwidth.
(Slide diagram: several virtual machines run on the hypervisor, which runs on the host's physical CPUs.)

Because server consolidation is a main goal of virtualization, your environment will most likely be at the second dimension of virtualization: running multiple virtual machines on a physical machine. At this level, virtual machines must share the resources. The hypervisor schedules virtual machines to run and it coordinates access to memory, networks, and storage. If the performance of your virtual machine is degraded, several factors can affect the performance at this dimension: scheduling overhead, insufficient network bandwidth, or slow or overloaded storage.


Virtualization Performance: Third Dimension


Slide 3-11

Third dimension: multiple virtual machines in a VMware vSphere Distributed Resource Scheduler (DRS) cluster.
DRS reduces the impact of second-dimension performance issues.
Factor that affects performance: aggressive VMware vSphere vMotion migration.
(Slide diagram: virtual machines are migrated between two hosts with vMotion.)

The third dimension of virtualization is running multiple virtual machines in a VMware vSphere Distributed Resource Scheduler (DRS) cluster. With a DRS cluster, you can pool the CPU and memory resources of multiple hosts, so a virtual machine is no longer tied to a single host. However, a cost is associated with running virtual machines in a DRS cluster. One cost, for example, is the amount of VMware vSphere vMotion migration that the DRS cluster performs to balance virtual machines across hosts.


Performance Factors in a vSphere Environment


Slide 3-12

Hardware:
CPU
Memory
Storage
Network
Software:
VMM
Virtual machine settings
Applications

ESXi and virtual machine performance tuning is complicated because virtual machines share underlying physical resources: CPU, memory, storage, and networks. If you overcommit any of these resources, you might see performance bottlenecks. For example, if too many virtual machines are CPU-intensive, you might see slow performance because all of the virtual machines must share the underlying physical CPU. In addition, performance can suffer due to inherent imbalances in the system. For example, disks are much slower than CPU, so if you have a disk-intensive application, you might see slower-than-expected performance. This possible effect applies to both physical and virtual environments. Expectations must be set appropriately. Finally, configuration issues or inadvertent user errors might lead to poor performance. For example, you might configure LUNs improperly so that two virtual machines are accessing different LUNs that span the same set of underlying disk drives. In this case, you might see worse disk performance than expected because of contention for the disks or because of abnormal access patterns. Or a user might use a symmetric multiprocessing (SMP) virtual machine when a single-processor virtual machine would work well. You might also see a situation where a user sets shares but then forgets about resetting them. Doing so might result in poor performance because of the changing characteristics of other virtual machines in the system.

Traditional Best Practices for Performance


Slide 3-13

Understand and account for the target application's profile.
Optimally configure virtual machines.
Improve virtualization performance by using newer platforms when available.
Monitor consolidated workloads closely to discover new bottlenecks quickly.

To get the best performance out of your VMware vSphere environment, you should follow these traditional best practices:
Understand your application's profile and workload characteristics. Ultimately, it is the application that benefits from your performance-tuning efforts.
Optimally configure your virtual machine. Configure your virtual machine to provide the best possible environment for your application to run.
If possible, use newer hardware to improve virtualization performance. Newer hardware platforms have the latest features for improving system performance.
Monitor your workloads closely and periodically to discover new bottlenecks in your dynamic vSphere environment. vSphere monitoring tools enable you to do this monitoring.


What Is a Performance Problem?


Slide 3-14

A performance problem should be defined in the context of an ongoing performance management and capacity planning process. Depending on the methodology in place, a performance problem might be defined as:
When an application fails to meet its predetermined service-level agreement (SLA)
When application response falls outside a predetermined range of baseline performance
When users report slow response time or poor throughput

Ideally, a performance problem should be defined in the context of an ongoing performance management process. Performance management refers to the process of establishing performance requirements for applications, in the form of service-level agreements (SLAs). Performance management then also tracks and analyzes the achieved performance to ensure that those requirements are met. A complete performance management methodology includes collecting and maintaining baseline performance data for applications, systems, and subsystems, for example, storage and network. In the context of performance management, a performance problem exists when an application fails to meet its predetermined SLA. Depending on the specific SLA, the failure might be in the form of excessively long response times or throughput below some defined threshold. If no SLAs exist, then the perceived performance problem can be quantified by comparing current performance metrics to baseline performance data. If you decide that the difference is severe enough, troubleshoot to bring performance back within a predetermined range of baseline. In environments where no SLAs have been established and where no baselines have been maintained, user complaints about slow response time or poor throughput might lead to the declaration that a performance problem exists. In this situation, you must define a clear statement of the problem and set performance targets before undertaking performance troubleshooting.

Performance Troubleshooting Methodology


Slide 3-15

A performance troubleshooting methodology must provide guidance on how to find the root cause of performance problems and how to fix the cause when determined. To accomplish this, you have to address the following questions:
What are the success criteria for this problem?
Where do you start looking for problems?
How do you know what to look for to identify a problem?
How do you find the root cause of a problem that you have identified?
What do you change to fix the root cause?
Where do you look next if no problem is found?


The first step of a performance troubleshooting methodology is to decide on the criteria for successful completion. SLAs, baseline data, or an understanding of user expectations can be used. The process of setting SLAs or selecting appropriate performance goals is beyond the scope of this course, but it is critical to the success of the troubleshooting process. Without defined goals, performance troubleshooting can turn into endless performance tuning. Having decided on an end goal, the next step in the process is to decide where to start looking for observable problems. There can be many different places to start an investigation and many different ways to proceed. The goal of the performance troubleshooting methodology presented in this course is to select, at each step, the component that is most likely to be responsible for the observed problems. If a problem is identified, then it is important to determine the cause of the problem and to understand possible solutions for resolving the problem. After a problem has been identified and resolved, the performance of the environment should be reevaluated in the context of the completion criteria. If the criteria are satisfied, then the troubleshooting process is complete. Otherwise, the problem checks should be followed again to identify additional problems. As in all performance tuning and troubleshooting, it is important to fix only one problem at a time so that the effect of each change can be properly understood.

Basic Troubleshooting Flow for ESXi Hosts


Slide 3-16
1. Check for VMware Tools status.
2. Check for resource pool CPU saturation.
3. Check for host CPU saturation.
4. Check for guest CPU saturation.
5. Check for active virtual machine memory swapping.
6. Check for virtual machine swap wait.
7. Check for active virtual machine memory compression.
8. Check for an overloaded storage device.
9. Check for dropped receive packets.
10. Check for dropped transmit packets.
11. Check for using only one vCPU in an SMP virtual machine.
12. Check for high CPU ready time on virtual machines running in under-utilized hosts.
13. Check for slow storage device.
14. Check for random increase in I/O latency on a shared storage device.
15. Check for random increase in data transfer rate on network controllers.
16. Check for low guest CPU utilization.
17. Check for past virtual machine memory swapping.
18. Check for high memory demand in a resource pool.
19. Check for high memory demand in a host.
20. Check for high guest memory demand.
(In the flow, the checks are grouped into definite problems, likely problems, and possible problems.)

A large percentage of performance problems in a vSphere environment are caused by a few common causes. As a result, the vSphere troubleshooting methodology is to follow a fixed-order flow through the most common observable performance problems. For each problem, checks are specified on specific performance metrics made available by the vSphere performance monitoring tools. The graphic represents the basic troubleshooting flow for an ESXi host. The problem checks in this flow cover the most common or serious problems that cause performance issues on ESXi hosts. The three categories of observable problems are the following:
Definite problems: conditions that have a direct effect on observed performance and that should be corrected.
Likely problems: conditions that in most cases lead to performance problems but that in some circumstances might not require correction.
Possible problems: conditions that might be indicators of the causes of performance problems but that might also reflect normal operating conditions.


Review of Learner Objectives


Slide 3-17

You should be able to do the following:
Review the components of the ESXi architecture.
Describe the fundamentals of performance.
Define a performance problem.
Discuss the vSphere performance troubleshooting methodology.



Lesson 2: Virtual Machine Monitor


Slide 3-18



Learner Objectives
Slide 3-19

After this lesson, you should be able to do the following:
Define the VMM.
Discuss the monitor modes.
Discuss when to override the default monitor mode.



What Is the Virtual Machine Monitor?


Slide 3-20

(Slide diagram: two guests each run on a virtual machine monitor that provides a virtual NIC and virtual SCSI; the VMkernel beneath contains the scheduler, memory allocator, virtual switch, file system, I/O drivers, and NIC drivers, all on the physical hardware.)
The VMM is a thin layer that provides virtual x86 hardware to a virtual machine's guest operating system.
VMM modes:
Software virtualization
Hardware virtualization
Paravirtualization

The VMM is a thin layer that provides virtual x86 hardware to the guest operating system in a virtual machine. This hardware includes a virtual CPU, virtual I/O devices, and timers. The VMM leverages key technologies in the VMkernel, such as scheduling, memory management, and the network and storage stacks. Each VMM is devoted to one virtual machine. To run multiple virtual machines, the VMkernel starts multiple VMM instances. Each VMM instance partitions and shares the CPU, memory, and I/O devices to successfully virtualize the system. The VMM can be implemented using hardware virtualization, software virtualization, or paravirtualization techniques. Paravirtualization refers to the communication between the guest operating system and the hypervisor to improve performance and efficiency. The value proposition of paravirtualization is in lower virtualization overhead, but the performance advantage of paravirtualization over hardware or software virtualization can vary greatly, depending on the workload. Because paravirtualization cannot support unmodified operating systems (for example, Windows 2000/XP), its compatibility and portability is poor. Paravirtualization can also introduce significant support and maintainability issues in production environments because it requires deep modifications to the operating system kernel. The rest of this lesson focuses on hardware virtualization and software virtualization techniques.

What Is Monitor Mode?


Slide 3-21

The virtual CPU comprises the following:
The virtual instruction set
The virtual memory management unit (MMU)
The combination of techniques used to virtualize the instruction set and memory is called the monitor execution mode, or monitor mode. The VMM can implement the monitor mode with software techniques, hardware techniques, or a combination of both.


The virtual CPU consists of the virtual instruction set and the virtual memory management unit (MMU). An instruction set is a list of instructions that a CPU executes. The MMU is the hardware that maintains a mapping between virtual addresses and physical addresses in memory. The combination of techniques used to virtualize the instruction set and memory determines a monitor execution mode (also called a monitor mode). The VMM identifies the ESXi host hardware platform and its available CPU features, then chooses a monitor mode for a particular guest operating system on that hardware platform. The VMM might choose a monitor mode that uses hardware virtualization techniques, software virtualization techniques, or a combination of hardware and software techniques.
Although there are three ways to implement the monitor mode (paravirtualization, hardware virtualization, and software virtualization), only hardware and software virtualization techniques are discussed in detail.


Challenges of x86 Hardware Virtualization


Slide 3-22

x86 operating systems run directly on the hardware and assume that they own the hardware.
The operating system must execute its privileged instructions in ring 0.
Virtualizing the x86 architecture requires placing a virtualization layer under the operating system.
(Slide diagram: protection rings 3 through 0, with direct execution of user and OS requests on the hardware.)

x86 operating systems are designed to run directly on the bare-metal hardware, so they assume that they fully control the computer hardware. The x86 architecture offers four levels of privilege to operating systems and applications to manage access to the computer hardware: ring 0, ring 1, ring 2, and ring 3. Though user-level applications typically run in ring 3, the operating system needs to have direct access to the memory and hardware and must execute its privileged instructions in ring 0. To create and manage the virtual machines that deliver shared resources, virtualizing the x86 architecture requires placing a virtualization layer under the operating system (which expects to be in the most privileged ring, ring 0). Further complicating the situation, some sensitive instructions cannot effectively be virtualized because they have different semantics when they are not executed in ring 0. The difficulty in trapping and translating these sensitive and privileged instruction requests at runtime was the challenge that originally made x86 architecture virtualization look impossible. VMware resolved the challenge in 1998 by developing a software virtualization technique called binary translation.


CPU Software Virtualization: Binary Translation


Slide 3-23

Binary translation is the original approach to virtualizing the (32-bit) x86 instruction set. With binary translation:
The VMM runs in ring 0 for isolation and performance.
The guest operating system runs in ring 1.
Applications continue to run in ring 3.
(Slide diagram: user requests in ring 3 execute directly, while OS requests in ring 1 are binary translated by the VMM in ring 0.)

Binary translation allows the VMM to run in ring 0 for isolation and performance while moving the guest operating system to ring 1. Ring 1 is a higher privilege level than ring 3 and a lower privilege level than ring 0. VMware can virtualize any x86 operating system by using a combination of binary translation and direct execution techniques. With binary translation, the VMM dynamically translates all guest operating system instructions and caches the results for future use. The translator in the VMM does not perform a mapping from one architecture to another. Instead, it translates from the full unrestricted x86 instruction set to a subset that is safe to execute. In particular, the binary translator replaces privileged instructions with sequences of instructions that perform the privileged operations in the virtual machine rather than on the physical machine. This translation enforces encapsulation of the virtual machine while preserving the x86 semantics as seen from the perspective of the virtual machine. Meanwhile, user-level code is directly executed on the processor for high-performance virtualization. Each VMM provides each virtual machine with all the services of the physical system, including a virtual BIOS, virtual devices, and virtualized memory management.


CPU Hardware Virtualization


Slide 3-24

In addition to software virtualization, the following hardware virtualization features are available:
Intel VT-x
AMD-V
Both designs have similar goals:
Design the hardware to make running virtual machines easier for a VMM.
Allow a VMM to do away with binary translation while still being able to fully control the execution of a virtual machine.

In addition to software virtualization, hardware virtualization is supported. Intel has the Intel Virtualization Technology (Intel VT-x) feature. AMD has the AMD Virtualization (AMD-V) feature. Intel VT-x and AMD-V are similar in aim but different in detail. Both designs aim to simplify virtualization techniques. Both designs allow a VMM to do away with binary translation while still being able to fully control the execution of a virtual machine. This accomplishment is realized by restricting which kinds of privileged instructions the virtual machine can execute without intervention by the VMM.


Intel VT-x and AMD-V (First Generation)


Slide 3-25

Both Intel VT-x and AMD-V have a CPU execution mode feature that does the following:
Allows the VMM to run in a root mode privilege level below ring 0
Automatically traps privileged and sensitive calls to the hypervisor
Stores the guest operating system state in virtual machine control structures (Intel VT-x) or virtual machine control blocks (AMD-V)
(Slide diagram: user requests in rings 3 through 1 execute directly, while OS requests trap to the VMM in root mode without binary translation.)

With the first generation of Intel VT-x and AMD-V, a CPU execution mode feature was introduced that allows the VMM to run in a root mode below ring 0. Privileged and sensitive calls are set to automatically trap to the VMM. The guest state is stored in virtual machine control structures (Intel VT-x) or virtual machine control blocks (AMD-V). The VMM gives the CPU to a virtual machine for direct execution (an action called a VM entry) up until the point when the virtual machine tries to execute a privileged instruction. At that point, the virtual machine execution is suspended and the CPU is given back to the VMM (an action called a VM exit). The VMM then inspects the virtual machine instruction that caused the exit as well as other information provided by the hardware in response to the exit. With the relevant information collected, the VMM emulates the virtual machine instruction against the virtual machine state and then resumes execution of the virtual machine with another VM entry.


CPU Virtualization Overhead


Slide 3-26

Time spent by the VMM does the following:
Increases host CPU utilization
Increases latency (for example, virtual machine exit latency)
Throughput is unaffected when CPU headroom is available.
(Slide diagram: sensitive operations add CPU time on top of direct execution.)

The VMM incurs a CPU cost of correctly virtualizing sensitive instructions. Examples of sensitive instructions are privileged execution, processor fault handling, I/O port access, and interrupts. The CPU time spent by the VMM increases host CPU utilization and can also increase latency, specifically virtual machine exit latency. However, when there are enough CPU resources, performance is unaffected. In general, the overhead associated with the VMM is minimal and typically does not affect the overall system performance. In addition, the amount of virtual machine exit latency has decreased because hardware virtualization support improves with every CPU generation.
For example, on the Intel architecture, the VM exit latency for Prescott is about 1,400 CPU cycles, for Cedar Mill about 1,200 cycles, for Merom about 600 cycles, for Penryn about 500 cycles, and for Nehalem about 200 cycles.


Memory Management Concepts


Slide 3-27

Processes see virtual memory, not physical memory.
The guest operating system uses page tables to map virtual memory addresses to physical memory addresses.
The MMU and TLB quickly translate virtual addresses to physical addresses. If an address is not found in the TLB, then the page table is consulted.
(Slide diagram: the virtual memory of two processes maps to physical memory through page tables, with the TLB caching virtual-to-physical address mappings.)

Beyond CPU virtualization, the next critical component is memory virtualization. Memory virtualization involves sharing the physical system memory and dynamically allocating it to virtual machines. Virtual machine memory virtualization is very similar to the virtual memory support provided by modern operating systems. Applications see a contiguous address space that is not necessarily tied to the underlying physical memory in the system. The operating system keeps mappings of virtual memory addresses to physical memory addresses in a page table. However, all modern x86 CPUs include an MMU and a translation look-aside buffer (TLB) to optimize virtual memory performance. The MMU translates virtual addresses to physical addresses. The TLB is a cache that the MMU uses to speed up these translations. If the requested address is present in the TLB, then the physical address is quickly located and accessed. This activity is called a TLB hit. If the requested address is not in the TLB (a TLB miss), then the page table is consulted instead. The page table walker receives the virtual address and traverses the page table tree to produce the corresponding physical address. When the page table walk is complete, the virtual/physical address pair is inserted into the TLB to accelerate future access to the same address.


MMU Virtualization
Slide 3-28

(Slide diagram: in VM 1 and VM 2, each process's virtual memory maps to guest physical memory, which maps to host physical memory.)

To run multiple virtual machines on a single system, another level of memory virtualization must be used: host physical memory. The VMM maps guest physical addresses (GA) to host physical addresses (HA). To support the guest operating system, the MMU must be virtualized by using one of the following:

Software technique: shadow page tables
Hardware technique: Intel EPT and AMD RVI

To run multiple virtual machines on a single system, another level of memory virtualization is required. This level is host physical memory (also known as machine memory). The guest operating system continues to control the mapping of virtual addresses to its physical addresses, but the guest operating system cannot have direct access to host physical memory. The VMM is responsible for mapping guest physical memory to host physical memory. To accomplish this, the MMU must be virtualized. The software technique for virtualizing the MMU uses shadow page tables. Hardware support for MMU virtualization is found on both the Intel and the AMD platforms. On the Intel platform, the feature is called Extended Page Tables (EPT). On the AMD platform, the feature is called Rapid Virtualization Indexing (RVI).


Software MMU: Shadow Page Tables


Slide 3-29

(Slide diagram: for each virtual machine, virtual memory (VA) maps to guest physical memory (GA), which maps to host physical memory (HA).)

Shadow page tables are created for each primary page table. They consist of two mappings, VA > GA and GA > HA, and they accelerate memory access.

The VMM is responsible for mapping guest physical memory to host physical memory. To virtualize the MMU in software, the VMM creates a shadow page table for each primary page table that the virtual machine is using. The VMM populates the shadow page table with the composition of two mappings:

VA > GA: Virtual memory addresses to guest physical memory addresses. This mapping is specified by the guest operating system and is obtained from the primary page table.
GA > HA: Guest physical memory addresses to host physical memory addresses. This mapping is defined by the VMM and the VMkernel.

By building shadow page tables that capture this composite mapping, the VMM can point the hardware MMU directly at the shadow page tables. This direct pointing allows the virtual machine's memory accesses to run at native speed while ensuring that the virtual machine cannot access host physical memory that does not belong to it.
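For example (the addresses are hypothetical): if the guest operating system maps VA 0x1000 to GA 0x5000, and the VMM maps GA 0x5000 to HA 0x9000, the shadow page table entry maps VA 0x1000 directly to HA 0x9000, so the hardware MMU resolves the access in a single walk.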


Hardware MMU Virtualization


Slide 3-30

AMD RVI and Intel EPT permit two levels of address mapping in the hardware:

VA > GA mapping, through the guest page table pointer (maintained by the guest)
GA > HA mapping, through the nested page table pointer (maintained by the VMM)

These two levels eliminate the need for the VMM to synchronize shadow page tables with guest page tables.

(Slide diagram: the TLB caches the resulting address translations.)

With software MMU virtualization, the VMM maps guest physical pages to host physical pages in the shadow page tables, which are exposed to the hardware. The VMM also synchronizes shadow page tables to guest page tables (the mapping of virtual addresses to guest physical addresses, also known as logical-to-physical mapping). With hardware MMU virtualization, the guest operating system still does the logical-to-physical mapping. The VMM maintains the mapping of guest physical addresses to host physical addresses (also known as physical-to-machine mappings) in an additional level of page tables called nested page tables. The guest page tables and nested page tables are exposed to the hardware. When a virtual address is accessed, the hardware walks the guest page tables, as in the case of native execution. But for every guest physical page accessed during the guest page table walk, the hardware also walks the nested page tables to determine the corresponding host physical page. This translation eliminates the need for the VMM to synchronize shadow page tables with guest page tables. However, the extra operation also increases the cost of a page walk, thereby affecting the performance of applications that stress the TLB. This cost can be reduced by using large pages, which reduce the stress on the TLB for applications with good spatial locality. For optimal performance, the ESXi VMM and VMkernel aggressively try to use large pages for their own memory when hardware MMU is used.


Memory Virtualization Overhead


Slide 3-31

Software MMU virtualization incurs CPU overhead when the following occur:

New processes are created: new address spaces need to be created.
Context switches are made: address spaces are switched.
A large number of processes are running: shadow page tables need to be maintained.
Pages are being allocated and deallocated.

Hardware MMU virtualization incurs CPU overhead when a TLB miss occurs.

With MMU software virtualization, shadow page tables are used to accelerate memory access and thereby improve memory performance. However, shadow page tables consume additional memory and incur CPU overhead in certain situations:

When new processes are created, the virtual machine updates a primary page table. The VMM must trap the update and propagate the change into the corresponding shadow page table or tables. This slows down memory mapping operations and the creation of new processes in virtual machines.
When the virtual machine switches context from one process to another, the VMM must intervene to switch the physical MMU to the shadow page table root of the new process.
When a large number of processes are running, shadow page tables need to be maintained.
When pages are allocated and the virtual machine touches memory for the first time, the shadow page table entry mapping this memory must be created on demand, slowing down the first access to memory. (The native equivalent is a TLB miss.)

For most workloads, hardware MMU virtualization provides an overall performance win over shadow page tables, but there are exceptions to this rule: workloads that suffer frequent TLB misses or that perform few context switches or page table updates.

Choosing the Default Monitor Mode


Slide 3-32

The VMM can choose from three monitor modes:

BT: Binary translation and shadow page tables
HV: AMD-V or Intel VT-x and shadow page tables
HWMMU: AMD-V with RVI, or Intel VT-x with EPT

The VMM determines a set of possible monitor modes to use, then picks one (the default monitor mode). The decision is based on the following:

The physical CPU's features and guest operating system type
Configuration file settings

The monitor mode has three valid combinations:

BT: Binary translation and shadow page tables
HV: AMD-V or Intel VT-x and shadow page tables
HWMMU: AMD-V with RVI, or Intel VT-x with EPT (RVI is inseparable from AMD-V, and EPT is inseparable from Intel VT-x)

BT, HV, and HWMMU are abbreviations used by the ESXi host to identify each combination. When a virtual machine is powering on, the VMM inspects the physical CPU's features and the guest operating system type to determine the set of possible execution modes. The VMM first finds the set of modes allowed. Then it restricts the allowed modes by configuration file settings. Finally, among the remaining candidates, it chooses the preferred mode.


Overriding the Default Monitor Mode


Slide 3-33

For most workloads, the default monitor mode works well.

Use the VMware vSphere Client to override the default monitor mode.


For the majority of workloads, the default monitor mode chosen by the VMM works best. The default monitor mode for each guest operating system on each CPU has been carefully selected after a performance evaluation of available choices. However, some applications have special characteristics that can result in better performance when using a nondefault monitor mode. These should be treated as exceptions, not the rule. The chosen settings are honored by the VMM only if the settings are supported on the intended hardware. For example, if you select Use software for instruction set and MMU virtualization for a 64-bit guest operating system running on a 64-bit Intel processor, the VMM will choose Intel VT-x for CPU virtualization instead of BT. This choice is made because BT is not supported for 64-bit guest operating systems on this processor.


Application Examples
Slide 3-34

Application example          Memory-intensive?  Sensitive to TLB miss costs?  Virtualization: SW or HW?
Financial software           No                 No                            Both perform equally well.
Citrix or Apache Web server  Yes                No                            HW virtualization performs better.
Java applications            No                 Yes                           With large pages, HW virtualization is better. With small pages, SW virtualization is better.
Databases                    Yes                Yes                           Depends on which cost is higher: memory virtualization overhead or TLB cost. Benchmark.
The table shows a sample list of applications and which memory management virtualization technique (hardware or software) provides better performance for them. The applications are characterized as memory-intensive or non-memory-intensive and whether or not they are sensitive to TLB misses. To review, a TLB miss occurs when a memory address requested by the application is not present in the TLB cache and the page table must be consulted instead.


Displaying the Default Monitor Mode


Slide 3-35

From the ESXi console, search for the keyword MONITOR in vmware.log.
From the ESXi console:

You can detect the monitor setting by looking at the virtual machine's log file, vmware.log. If the virtual machine is on an ESXi host, log in to the local or remote console of the host and use the grep command to search for the keyword MONITOR in the virtual machine's log file. The search finds any line in the output that contains the word MONITOR. The -i option tells grep to ignore case.

Allowed modes: The monitor modes that can be set for the virtual machine, based on the host's hardware:

BT: Software instruction set and software MMU
HV: Hardware instruction set and software MMU
HWMMU: Hardware instruction set and hardware MMU

User requested modes: The monitor modes selected by the user (for example, using the VMware vSphere Client interface).

GuestOS preferred modes: The monitor modes supported by the guest operating system.

Filtered list: The list of monitor modes that will work for this virtual machine. The first item in the list is the monitor mode that is currently set for the virtual machine.
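As an example of the grep search described above (the datastore and virtual machine names are hypothetical; substitute the path to your virtual machine's directory):

# grep -i MONITOR /vmfs/volumes/datastore1/MyVM/vmware.log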

If the virtual machine is running on an ESXi host, you can download vmware.log to your desktop by using, for example, the datastore browser in the vSphere Client. Use an editor to look at the log file and search for the keyword MONITOR.


Review of Learner Objectives


Slide 3-36

You should be able to do the following:

Define the VMM. Discuss the monitor modes. Discuss when to override the default monitor mode.



Lesson 3: Monitoring Tools


Slide 3-37

Lesson 3: Monitoring Tools


Learner Objectives
Slide 3-38

After this lesson, you should be able to do the following:

Discuss the available monitoring tools. Use VMware vCenter Server performance charts. Use the resxtop commands. Discuss guest operating system-based performance tools. Describe how to choose the right tool for a given situation.


Commonly Used Performance Monitoring Tools


Slide 3-39

VMware performance charts:

Viewed with the vSphere Client
Available when connected either to the ESXi host directly or to vCenter Server
Two types of charts: overview and advanced

resxtop

Guest operating system-based tools:

Perfmon, Iometer, and so on
Some performance tools and benchmarking tools might not work accurately in a virtual machine.

Several monitoring tools are available for you to use in a vSphere environment:

VMware performance charts: You display performance charts in a vSphere Client connected either to the ESXi host directly or to a VMware vCenter Server system. Performance charts provide a lot of useful information, even if they do not provide all the performance counters that you need to analyze performance. The two types of charts are overview and advanced.

resxtop: The resxtop commands are an excellent means of collecting every performance statistic needed and making them available in a way that is useful to analysis.

Guest-based performance monitoring is an inaccurate means of evaluating performance in virtual deployments. Because VMware products provide a virtual interface to the hardware, traditional performance tools based on measuring hardware resources might not be accurate. As a result, tools like Windows Perfmon or the Linux top command do not provide accurate measurements of CPU use. Performance analysis on virtual deployments should always use host-based tools, for example, vCenter Server performance charts or resxtop.


Overview Performance Charts


Slide 3-40

VMware performance charts, viewed with the vSphere Client.

(Slide shows a partial Overview panel for a host's performance charts.)

The two types of VMware performance charts are overview charts and advanced charts. The overview performance charts show the performance statistics that VMware considers most useful for monitoring performance and diagnosing problems. Depending on the object that you select in the inventory, the performance charts in the Overview panel provide a quick view of how your host or virtual machine is doing. The example in the slide shows a partial view of the overview performance charts for an ESXi host. In the Overview panel, you can view a total of ten different performance charts for a host, four of which are shown in the slide.


Advanced Performance Charts


Slide 3-41

Chart options: chart type, objects, counters, statistics type, and rollup

The advanced performance charts provide a graphical display of statistical data for an ESXi host or objects in vCenter Server. You can examine data about clusters, datacenters, datastores, hosts, resource pools, virtual machines, and vApps. To easily compare different metrics, you can view charts for an object side by side, for example, CPU usage and memory usage. Such comparisons enable you to troubleshoot performance issues, monitor usage trends, do capacity planning, and determine which resources to increase and decrease on your hosts and virtual machines. To help you sort the data and display it in useful formats, the vSphere Client allows you to customize the advanced performance charts with the following parameters: real-time or historical statistics, chart type, objects and counters, statistics type, and rollup.


Chart Options: Real-Time and Historical


Slide 3-42

vCenter Server stores statistics at different granularities.

Time interval          Data frequency  Number of samples
Real-Time (past hour)  20 seconds      180
Past Day               5 minutes       288
Past Week              30 minutes      336
Past Month             2 hours         360
Past Year              1 day           365

The vSphere Client can show both real-time information and historical information. Real-time information is information generated for the past hour at a 20-second granularity. Historical information is information generated for the past day, week, month, or year, at varying granularities. By default, vCenter Server has four collection intervals: day, week, month, and year. Each interval specifies a length of time that statistics are archived in the vCenter Server database. You can configure which intervals are enabled and for what period of time. You can also configure the number of data counters used during a collection interval by setting the collection level. Together, the collection interval and the collection level determine how much statistical data is collected and stored in your vCenter Server database. For example, using the table in the slide, past-day statistics show one data point every 5 minutes, for a total of 288 samples. Past-year statistics show 1 data point per day, or 365 samples. Real-time statistics are not stored in the database. They are stored in a flat file on ESXi hosts and in memory on vCenter Server systems. ESXi hosts collect real-time statistics for the host or the virtual machines available on the host. Real-time statistics are collected directly on an ESXi host every 20 seconds. If you query for real-time statistics in the vSphere Client, vCenter Server queries each host directly for the data. It does not process the data at this point. vCenter Server only passes the data to the vSphere Client. The processing occurs in a separate operation, depending on the host type.
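As a quick check of the table above: the past-week interval stores one sample every 30 minutes, so 7 days x 48 samples per day = 336 samples, and the past-year interval stores one sample per day, giving 365 samples.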


On ESXi hosts, the statistics are kept for 30 minutes, after which 90 data points have been collected. The data points are aggregated, processed, and returned to vCenter Server. Now, vCenter Server archives the data in the database as a data point for the day collection interval. To ensure that performance is not impaired when collecting and writing the data to the database, cyclical queries are used to collect data counter statistics. The queries occur for a specified collection interval. At the end of each interval, the data calculation occurs.


Chart Types
Slide 3-43

Line graph:

Each instance is shown separately.

Stacked graph:

Graphs are stacked on top of one another.
Use stacked charts to compare data across virtual machines.
Example shows breakdown of CPU use: per virtual machine, and per vCPU by each virtual machine.

Stacked graph (per virtual machine):

Available to ESXi hosts only; used to compare metrics between virtual machines.

The performance charts graphically display CPU, memory, disk, network, and storage metrics for devices and entities that are managed by vCenter Server. Chart types include line charts, pie charts, bar charts, and stacked charts. You view the performance charts for an object that is selected in the inventory. From the vSphere Client Performance tab, you can view overview charts and advanced charts for an object. Both the overview charts and the advanced charts use the following chart types to display statistics:

Line charts: Display metrics for a single inventory object. The data for each performance counter is plotted on a separate line in the chart. For example, a network chart for a host can contain two lines: one showing the number of packets received, and one showing the number of packets transmitted.

Stacked charts: Display metrics for children of the selected parent object. For example, a host's stacked CPU usage chart displays CPU usage metrics for each virtual machine on the host. The metrics for the host itself are displayed in separate line charts. Stacked charts are useful in comparing resource allocation and usage across multiple hosts or virtual machines. Each metric group is displayed on a separate chart for a managed entity. For example, hosts have one chart that displays CPU metrics and one that displays memory metrics. It is important to consider that not all metrics are normalized to 100 percent, so stacked charts might exceed the y-axis of the chart.

Bar charts: Display storage metrics for datastores in a selected datacenter. Each datastore is represented as a bar in the chart, and each bar displays metrics based on file type (virtual disks, snapshots, swap files, and other files).

Pie charts: Display storage metrics for a single datastore or virtual machine. Storage information is based on file type or virtual machine. For example, a pie chart for a datastore displays the amount of storage space occupied by the five largest virtual machines on that datastore. A pie chart for a virtual machine displays the amount of storage space that is occupied by virtual machine files.


Objects and Counters


Slide 3-44

Objects are instances or aggregations of devices.

Examples:

vCPU0, vCPU1, vmhba1:1:2, aggregate over all NICs

Counters identify which statistics to collect. Examples:

CPU: used time, ready time, usage (%)
NIC: network packets received
Memory: memory swapped


vCenter Server enables the user to determine how much or how little information about a specific device type is displayed. You can control the amount of information that a chart displays by selecting one or more objects and counters. An object refers to an instance for which a statistic is collected. For example, you might collect statistics for an individual CPU (for example, vCPU0, vCPU1), all CPUs, a host, or a specific network device. A counter represents the actual statistic that you are collecting. For example, the amount of CPU used or the number of network packets per second for a given device.


Statistics Type
Slide 3-45

The statistics type is the unit of measurement used during the statistics interval.

Statistics type  Description                                Example
Rate             Value over the current interval            CPU usage (MHz)
Delta            Change from previous interval              CPU ready time
Absolute         Absolute value (independent of interval)   Memory active
The statistics type refers to the measurement used during the statistics interval and is related to the unit of measurement. The measurement is one of the following:

Rate: Value over the current statistics interval
Delta: Change from previous statistics interval
Absolute: Absolute value (independent of the statistics interval)

For example, CPU Usage is a rate, CPU Ready is a delta, and Memory Active is an absolute value.


Rollup
Slide 3-46

Rollup is the conversion function between statistics intervals.

5 minutes of past-hour statistics are converted to 1 past-day value: 15 twenty-second statistics are rolled up into a single value.
30 minutes of past-day statistics are converted to 1 past-week value: 6 five-minute statistics are rolled up into a single value.

Rollup type  Conversion function     Sample statistic
Average      Average of data points  CPU usage (average)
Summation    Sum of data points      CPU ready time (milliseconds)
Latest       Last data point         Uptime (days)

Other rollup types: Minimum, Maximum

When looking at different historical intervals, data is displayed at different granularities. Past-hour statistics are shown at a 20-second granularity, and past-day statistics are shown at a 5-minute granularity. The averaging that is done to convert from one time interval to another is called rollup. The rollup type determines the type of statistical values returned for the counter:

Average: The data collected during the interval is aggregated and averaged.
Minimum: The minimum value is rolled up.
Maximum: The maximum value is rolled up.

The minimum and maximum values are collected and displayed only in collection level 4. Minimum and maximum rollup types are used to capture peaks in data during the interval. For real-time data, the value is the current minimum or current maximum. For historical data, the value is the average minimum or average maximum.


For example, the following information for the CPU usage chart shows that the average is collected at collection level 1 and the minimum and maximum values are collected at collection level 4.

Counter: usage
Unit: Percentage (%)
Rollup Type: Average (Minimum/Maximum)
Collection Level: 1 (4)

To view or modify the statistics level (or collection level), in the vSphere Client menu bar, select Administration > vCenter Server Settings. Select Statistics in the left pane to view or modify the statistics parameters, which includes the statistics level.

Summation: The data collected is summed. The measurement displayed in the performance chart represents the sum of data collected during the interval.
Latest: The data collected during the interval is a set value. The value displayed in the performance chart represents the current value.

For example, if you look at the CPU Used counter in a CPU performance chart, the rollup type is summation. So, for a given 5-minute interval, the sum of all of the 20-second samples in that interval is represented.
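As an illustrative calculation (the sample values are hypothetical): if each of the 15 twenty-second CPU Used samples in a 5-minute interval reports 200 milliseconds, the summation rollup displays 15 x 200 = 3,000 milliseconds of CPU used for that interval.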


Saving Charts
Slide 3-47

Click the Save Chart icon above the graph to save performance chart information. Charts can be saved in these formats:

JPEG
BMP
GIF
PNG
Microsoft Office Excel Workbook


You can save data from the advanced performance charts to a file in various graphics formats or in Microsoft Excel format. When you save a chart, you select the file type, then save the chart to the location of your choice.


resxtop Utility
Slide 3-48

Use the resxtop utility to examine real-time resource usage for ESXi hosts. resxtop can be run in these modes:

Interactive mode
Batch mode
Replay mode

The resxtop commands enable command-line monitoring and collection of data for all system resources: CPU, memory, disk, and network. When used interactively, this data can be viewed on different types of screens, one each for CPU statistics, memory statistics, network statistics, and disk adapter statistics. This data includes some metrics and views that cannot be accessed using the overview or advanced performance charts. The three modes of execution for resxtop are the following:

Interactive mode (the default mode): All statistics are displayed as they are collected, showing how the ESXi host is running in real time.
Batch mode: Statistics are collected so that the output can be saved in a file and processed later.
Replay mode: Data that was collected by the vm-support command is interpreted and played back as resxtop statistics. This mode does not process the output of batch mode.

For more details on resxtop, see vSphere Resource Management Guide at http://www.vmware.com/support/pubs.


Using resxtop Interactively


Slide 3-49

To start resxtop interactively:

Log in to a system installed with VMware vSphere Command-Line Interface (vCLI).
Run resxtop with one or more connection parameters. Examples:

# resxtop --server vc01.vmeduc.com --username administrator --vihost esxi01.vmeduc.com
# resxtop --server esxi01.vmeduc.com --username root


To run resxtop interactively, you must first log in to a system with VMware vSphere Command-Line Interface (vCLI) installed. To do this, you must either download and install a vCLI package on a Linux host or deploy VMware vSphere Management Assistant (vMA) to your ESXi host. vMA is a preconfigured Linux appliance. Versions of the vCLI package are available for Linux and Windows systems. However, because resxtop is based on a Linux tool, it is only available in the Linux version of vCLI. After vCLI is set up and you have logged in to the vCLI system, start resxtop from the command prompt. For remote connections, you can connect to an ESXi host either directly or through vCenter Server.
resxtop has the following connection parameters:

--server [server]
--username [username]
--password [password]
--vihost [vihost]

[server]: A required field that refers to the name of the remote host to connect to. If connecting directly to the ESXi host, use the name of that host. If your connection to the ESXi host is indirect (that is, through vCenter Server), use the name of the vCenter Server system for this option.


[vihost]: If connecting indirectly (through vCenter Server), this option refers to the name of the ESXi host that you want to monitor. You must use the name of the ESXi host as shown in the vCenter Server inventory. If connecting directly to the ESXi host, this option is not used.

[portnumber]: Port number to connect to on the remote server. The default port is 443, and unless this is changed on the server, this option is not needed.

[username]: User name to be authenticated when connecting to the remote host. The remote server prompts you for a password.

The following command line is an example of running resxtop to monitor the ESXi host named esxi01.vmeduc.com. Instead of logging in to the ESXi host, the user logs in to the vCenter Server system named vc01.vmeduc.com as user administrator to access the ESXi host:
# resxtop --server vc01.vmeduc.com --username administrator --vihost esxi01.vmeduc.com

The following command line is another example of running resxtop to monitor the ESXi host named esxi01.vmeduc.com. However, this time the user logs directly in to the ESXi host as user root:
# resxtop --server esxi01.vmeduc.com --username root

In both examples, you are prompted to enter the password of the user that you are logging in as, for example, administrator or root.


Navigating resxtop
Slide 3-50

When using resxtop in interactive mode, type a character to change the screen or behavior. Commands are case-sensitive.

c: CPU view (default)
m: Memory view
d: Disk (adapter) view
u: Disk (device) view
v: Virtual disk view
n: Network view
f/F: Add or remove statistic columns
V: Virtual machine view
h: Help
q: Quit


resxtop supports several single-key commands when run in interactive mode. Type these characters to change the screen or behavior:

c: Switch to the CPU resource utilization screen (this is the default screen).
m: Switch to the memory resource utilization screen.
d: Switch to the storage (disk) adapter resource utilization screen.
u: Switch to the storage (disk) device resource utilization screen.
v: Switch to the virtual disk resource utilization screen.
n: Switch to the network resource utilization screen.
f/F: Display a panel for adding or removing statistics columns on the current panel.
V: Display only virtual machines in the screen.
h: Display the help screen.
q: Quit interactive mode.

The single-key commands are case-sensitive. Using the wrong case can produce unexpected results.

Sample Output from resxtop


Slide 3-51

host statistics

Per world statistics (CPU screen)

Type V (uppercase V):

Per virtual machine statistics

Here is an example of the output generated from resxtop. You can view several screens. The CPU screen is the default. resxtop refreshes the screen every 5 seconds by default.
resxtop displays statistics based on worlds. A world is equivalent to a process in other operating systems. A world can represent a virtual machine or a VMkernel component. The following column headings help you understand worlds:

ID: World ID. In some contexts, resource pool ID or virtual machine ID.
GID: Resource pool ID of the running world's resource pool or virtual machine.
NAME: Name of the running world. In some contexts, resource pool name or virtual machine name.

To filter the output so that only virtual machines are shown, type V (uppercase V) in the resxtop window. This command hides the system worlds so that you can concentrate on the virtual machine worlds.


Using resxtop in Batch and Replay Modes


Slide 3-52

To run resxtop in batch mode and print all performance counters:

resxtop -a -b > analysis.csv
The -a option shows all statistics.

Always start your virtual machines before running resxtop in batch mode. resxtop produces virtual machine data based only on the virtual machines that were running at the time the command was launched.


To run resxtop in replay mode: Use vm-support and resxtop to create a file with sampled performance data and replay the file. For example: vm-support -S -d 300 -l 30, then resxtop -r <filename>.

resxtop can also be run in batch mode. In batch mode, the output is stored in a file, and the data can be read using the Windows Perfmon utility. You must prepare for running resxtop in batch mode.
To prepare to run resxtop in batch mode:
1. Run resxtop in interactive mode.
2. In each screen, select the columns you want.
3. Type W (uppercase) to save this configuration to a file (by default ~/.esxtop4rc).

To run resxtop in batch mode:
1. Start resxtop to redirect the output to a file, as shown in the slide. The filename must have a .csv extension. The utility does not enforce this, but the postprocessing tools require it.
2. Use tools like Microsoft Excel and Perfmon to process the statistics collected.

In batch mode, resxtop rejects interactive commands. In batch mode, the utility runs until it produces the number of iterations requested (by using the -n option) or until you end the process by pressing Ctrl+C.
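For example, a batch collection might look like the following (the host and user names reuse the earlier examples; the -d sampling delay of 10 seconds and -n count of 60 iterations are illustrative values):

# resxtop --server esxi01.vmeduc.com --username root -b -a -d 10 -n 60 > analysis.csv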


To run resxtop in replay mode:
1. Use the vm-support command to capture sampled performance data in a file. For example, the command vm-support -S -d 300 -l 30 runs vm-support in snapshot mode. The -S restricts the collection of diagnostic data, the -d 300 collects data for 300 seconds (five minutes), and the -l 30 sets up a 30-second sampling interval.
2. Replay the file with the resxtop command. For example, resxtop -r <filename> replays

the captured performance data in a resxtop window. vm-support can be run from a remote command line. For details, see:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1010705
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1967&sliceId=1&docTypeID=DT_KB_1_1&dialogID=343976180&stateId=1%200%20343980403


Analyzing Batch Mode Output with Perfmon


Slide 3-53


Running resxtop in batch mode with all the counters enabled results in a large CSV file that cannot easily be parsed. resxtop is constructed so that the batch output file can be readily consumed by Perfmon. Perfmon can be used for:

Quickly analyzing results
Generating smaller CSV files of a subset of the data that can be more easily consumed by other analysis tools, such as Microsoft Excel

To open and view the batch output CSV file:
1. Transfer the .csv file to a Windows system and start Perfmon.
2. Right-click the graph and select Properties.
3. Click the Source tab. Under Data source, select the Log files radio button.
4. Click the Add button. Browse to and select the .csv file created by resxtop. Click OK.
5. Click the Apply button.
6. Click OK.


Guest Operating System-Based Performance Tools


Slide 3-54

In general, guest operating system-based performance tools are not aware of the virtual nature of the hardware that they are running on and do not provide accurate metrics.

Guest tools that do the following are safe to use:

Measure non-time-based metrics (such as free or swapped memory)
Strictly generate load (such as Iometer)

Often, application-specific counters are available only through guest-based tools:

For example, for Microsoft server applications such as SQL Server or Exchange

VMware Tools in vSphere includes a Perfmon DLL that provides additional counters that give the guest visibility into host CPU and memory usage.

Because VMware products provide a virtual interface to the hardware, traditional performance instrumentation that is based on measuring hardware resources might not be accurate. The problems seen as a result of using traditional in-guest performance measurements come from three areas:

The measurements are unaware of work being performed by the virtualization software. Guest operating systems do not have complete information on the resources that are being used by the virtualization software. This includes memory management, scheduling, and other support processes.
The way in which guest operating systems keep time is different and ineffective in a virtual machine.
The measurements' visibility into available CPU resources is based on the fraction of the CPU that they have been provided by the virtualization software.

Often, counters that measure an application's performance are available only through tools running in the guest operating system. For example, monitoring Microsoft server applications, such as SQL Server or Exchange Server, requires using guest-based tools. VMware Tools in vSphere includes a Perfmon DLL that provides additional counters that give the guest visibility into host CPU and memory usage. The original Perfmon counters are inaccurate in a guest because the counters are unaware of work that is being performed by the virtualization software.


Perfmon DLL in VMware Tools


Slide 3-55

The Perfmon DLL provides processor and memory counters to access host statistics from inside a virtual machine.

VMware Tools includes a Perfmon DLL that lets you monitor key host statistics from inside a virtual machine running a Windows operating system. Perfmon is a performance monitoring tool for Windows operating systems. Perfmon measures performance statistics on a regular interval and saves the statistics in a file. Administrators choose the time interval, the file format, and which statistics are monitored. The ability to choose which statistics to monitor is based on the available counters. Installing VMware Tools provides the Perfmon performance counters VM Processor and VM Memory. Using these counters in Perfmon enables you to view actual use so that you can compare it with the statistics viewed from the vSphere Client. Third-party developers can instrument their agents to access these counters by using Windows Management Instrumentation. Because the Perfmon DLL enables a user in a guest operating system to view sensitive ESXi host metrics, the feature is disabled by default. To enable it, change the tools.guestlib.enableHostInfo parameter to TRUE in the .vmx file of the virtual machine.
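For example, the following entry in the virtual machine's .vmx configuration file enables the feature (the parameter name is taken from the text above; edit the file while the virtual machine is powered off):

tools.guestlib.enableHostInfo = "TRUE"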


Choosing the Right Tool (1)


Slide 3-56

vSphere Client:

Is the primary tool for observing performance and configuration data for one or more ESXi hosts
Provides access to the most important configuration and performance information
Does not require high levels of privilege to access the data

resxtop:

Gives access to detailed performance data of a single ESXi host
Provides fast access to a large number of performance metrics
Requires root-level access

Checking for observable performance problems requires looking at values of specific performance metrics, such as CPU use or disk response time. vSphere provides two main tools for observing and collecting performance data. The vSphere Client, when connected directly to an ESXi host, can display real-time performance data about the host and virtual machines. When connected to a vCenter Server system, the vSphere Client can also display historical performance data about all of the hosts and virtual machines managed by that server. For more detailed performance monitoring, resxtop can be used to observe and collect additional performance statistics. The vSphere Client is usually your primary tool for observing performance and configuration data for ESXi hosts. For complete documentation on obtaining and using the vSphere Client, see vSphere Installation and Setup Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html. The advantages of the vSphere Client are that it is easy to use, provides access to the most important configuration and performance information, and does not require high levels of privilege to access the performance data. The resxtop utility provides access to detailed performance data from a single ESXi host. In addition to the performance metrics available through the vSphere Client, it provides access to advanced performance metrics that are not available elsewhere. The advantages of resxtop are the amount of available data and the speed with which the command makes it possible to observe a large number of performance metrics. The disadvantage of resxtop is that it requires root-level
access privileges to the ESXi host. For documentation on accessing and using resxtop, see vSphere Resource Management Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.


Choosing the Right Tool (2)


Slide 3-57

Guest operating system-based monitoring tools can lead to inaccurate metrics in some situations (for example, overcommitted CPU resources). Accuracy of in-guest tools is dependent on the guest operating system and kernel version being used.


When troubleshooting performance problems in a vSphere environment, you should use the performance monitoring tools provided by vSphere rather than tools provided by the guest operating system. The vSphere tools are more accurate than tools running in a guest operating system. Though guest operating system monitoring tools are more accurate in vSphere than in previous releases of ESXi, situations such as overcommitted CPU resources can still lead to inaccuracies in in-guest reporting. The accuracy of in-guest tools also depends on the guest operating system and kernel version being used. Thus, you should use the tools provided by vSphere when actively investigating performance issues.


Lab 2
Slide 3-58

In this lab, you will use the vSphere Client performance charts and the resxtop command.
1. Start database activity in your test virtual machine. 2. Display custom vSphere Client performance charts. 3. Start resxtop. 4. Explore the resxtop screens. 5. Run resxtop in batch mode. 6. Use Windows Perfmon to display batch mode output. 7. Clean up for the next lab.


Review of Learner Objectives


Slide 3-59

You should be able to do the following:

Discuss the available monitoring tools. Use vCenter Server performance charts. Use resxtop commands. Discuss guest operating system-based performance tools. Describe how to choose the right tool for a given situation.


Key Points
Slide 3-60

The vSphere troubleshooting methodology is to follow a fixed-order flow through the most common observable performance problems. For most workloads, using the default monitor mode is recommended. vSphere tools for monitoring performance are vCenter Server performance charts and resxtop. Guest-based performance monitoring is an inaccurate means of evaluating performance in virtual deployments.

Questions?


MODULE 4

Network Scalability
Slide 4-1

Module 4


You Are Here


Slide 4-2

Course Introduction VMware Management Resources Performance in a Virtualized Environment

Storage Optimization CPU Optimization Memory Optimization VM and Cluster Optimization Host and Management Scalability

Network Scalability Network Optimization Storage Scalability


Importance
Slide 4-3

As you scale your VMware vSphere environment, you must be aware of the vSphere features and functions that help you manage networking in your environment.


Module Lessons
Slide 4-4

Lesson 1: Introduction to VMware vSphere Distributed Switch
Lesson 2: Distributed Switch Features


Lesson 1: Introduction to vSphere Distributed Switch


Slide 4-5

Lesson 1: Introduction to vSphere Distributed Switch



Learner Objectives
Slide 4-6

After this lesson, you should be able to do the following:

List the benefits of using vSphere distributed switches. Create a distributed switch. Manage the distributed switch. Describe the distributed switch architecture. Describe the properties of a distributed switch.


Distributed Switch
Slide 4-7

A distributed switch provides similar functionality to a vSphere standard switch, but it functions as a single virtual switch across all associated hosts.

VMware vCenter Server owns the configuration of the distributed switch. The configuration is consistent across all the hosts that use it. A distributed switch can support up to 350 hosts. A distributed switch can benefit from the performance of 10 Gigabit Ethernet physical network interface cards (NICs).

The behavior of distributed switches is consistent with standard switches. You can configure virtual machine port groups and VMkernel ports.


A VMware vSphere distributed switch (VDS) provides similar functionality to a vSphere standard switch: virtual machine and VMkernel interfaces connect to port groups. But you configure a distributed switch in VMware vCenter Server instead of on an individual host. Still, some configuration is specific to the host. A host's uplinks are allocated to the distributed switch and are managed in the host's network configuration. Similarly, the VMkernel ports are managed in the host's network configuration. A distributed switch supports Gigabit Ethernet and 10 Gigabit Ethernet physical network interface cards (NICs).


Benefits of Distributed Switches


Slide 4-8

Benefits of distributed switches over standard switches:

Simplify datacenter administration
Provide support for private VLANs
Enable networking statistics and policies to migrate with virtual machines during a migration with VMware vSphere vMotion
Provide for customization and third-party development

(Slide diagram compares per-host standard switches with a single distributed switch spanning hosts.)

Having the network configuration at the datacenter level (distributed switches), not at the host level (standard switches), offers several advantages:

Datacenter setup and administration are simplified by centralizing network configuration. For example, adding a host to a cluster and making it compatible with VMware vSphere vMotion is much easier.
Distributed switches support private VLANs. With private VLANs, you can use VLAN IDs in a private network without having to worry about duplicating VLAN IDs across a wider network.
Distributed ports migrate with their clients. For example, when you migrate a virtual machine with vMotion, the distributed port statistics and policies move with the virtual machine, thus simplifying debugging and troubleshooting.
Enterprise networking vendors can provide proprietary networking interfaces to monitor, control, and manage virtual networks. The VMware vSphere Network Appliance API enables third-party developers to create distributed switch solutions.


Distributed Switch Example


Slide 4-9

Example:

Create a distributed switch named vDS01.
Create a port group named Production, which will be used for virtual machine networking.
Assign uplinks vmnic1 on host ESXi01 and vmnic1 on host ESXi02 to the distributed switch.

(Slide diagram: distributed switch vDS01 with the Production port group and an uplink port group; vmnic1 on ESXi01 and vmnic1 on ESXi02 are the uplinks.)

On the slide, a distributed switch named vDS01 is created. A port group named Production is defined on this switch. vmnic1 on host ESXi01 is assigned to the distributed switch as is vmnic1 on ESXi02. When the distributed switch is created, an uplink port group is also created to include the uplinks of the hosts.


Viewing Distributed Switches


Slide 4-10

View distributed switches in the Networking inventory view.

Give port groups descriptive names. For example, change the name dvPortGroup (default) to Production.
To view a distributed switch:

Go to the Networking inventory view. On the slide, the distributed switch vDS-01 is displayed. The Production port group is shown, as are the two virtual machines connected to it: VM12 and VM11. The uplink port group vDS-01DVUplinks-47 is also shown. 47 is an identifier that is chosen by vCenter Server.
Note that Cisco and IBM also have distributed switches.
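To cross-check from the command line, ESXi 5.x provides an esxcli namespace for the VMware distributed switch. As a sketch (assuming vCLI or ESXi Shell access to the host), the following lists the distributed switches that the host participates in:

# esxcli network vswitch dvs vmware list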


Managing Virtual Adapters


Slide 4-11

Click the Manage Virtual Adapters link to add a virtual adapter or to migrate an existing virtual adapter to a distributed switch.

Managing virtual adapters is performed at the host level.


Virtual network adapters handle host network services over a distributed switch. You can configure VMkernel network adapters and VMkernel ports for a VMware vSphere ESXi host. For example, you might want to create a VMkernel network adapter for use as a vMotion interface, for accessing IP storage, or for VMware vSphere Fault Tolerance logging. The Manage Virtual Adapters dialog box provides the means to add, edit, and remove the VMkernel virtual adapters that are used by the selected host.
To add a virtual adapter:
1. Select a host in the Hosts and Clusters inventory view and click the Configuration tab.
2. In the Hardware list, select Networking and click Manage Virtual Adapters.
3. In the Manage Virtual Adapters dialog box, select Add to create a virtual adapter.

You also have the option to migrate virtual adapters to a different virtual switch.


Managing Physical Adapters


Slide 4-12

network configuration for host esxi02.vclass.local

Modify physical adapter configuration at the host level, for example, to create a NIC team.

The association of physical adapters to distributed switch uplink groups must be done at the host level, not the distributed switch level. In the distributed switch view, click the Manage Physical Adapters link to add or remove physical adapters from the uplink port group. You can also team NICs in your distributed switch. The slide shows a NIC team on the host named esxi02.vclass.local.


Enabling IPv6 on the ESXi Host


Slide 4-13

network configuration for host esxi02.vclass.local

Enable IPv6 support for this host. You must restart the system for changes to take effect.

Internet Protocol version 6 (IPv6) is designated by the Internet Engineering Task Force as the successor to IPv4. The most obvious difference is address length. IPv6 uses 128-bit addresses rather than the 32-bit addresses used by IPv4. This increase resolves the problem of address exhaustion and eliminates the need for network address translation. Other differences include link-local addresses that appear as the interface is initialized, addresses that are set by router advertisements, and the ability to have multiple IPv6 addresses on an interface. IPv6 support in ESXi provides the ability to use features such as NFS in an IPv6 environment. Use the Networking Properties dialog box to enable or disable IPv6 support on the host.
To enable IPv6:
1. Click the Networking link in the Configuration tab of your ESXi host.
2. Click either vSphere Standard Switch or vSphere Distributed Switch. (Either view allows you to set IPv6 for the entire system.)
3. Click the Properties link in the upper right corner. The Networking Properties dialog box is displayed.
4. Select the Enable IPv6 support on this host system check box and click OK. The changes do not take effect until the system is restarted.
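After the restart, you can confirm the result from the command line. A hedged sketch (the ipv6 esxcli namespace varies across 5.x builds, so treat the exact syntax as an assumption and check esxcli network ip --help on your host):

esxcli network ip interface ipv6 address list

If IPv6 is enabled, each VMkernel adapter reports at least a link-local address.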



Connecting a Virtual Machine to a Distributed Port Group


Slide 4-14

Connect a virtual machine to a distributed port group by:

Modifying the NIC configuration in the virtual machine properties
Migrating virtual machines to a distributed switch

second page of the Migrate Virtual Machine Networking wizard

Connect virtual machines to distributed switches by connecting their associated virtual network adapters to distributed port groups. You can make this connection for an individual virtual machine by modifying the virtual machine's network adapter configuration. You can also make this connection for a group of virtual machines by migrating virtual machines from an existing virtual network to a distributed switch.
To migrate virtual machines to a distributed switch:
1. In the Networking inventory view, right-click the datacenter and select Migrate Virtual Machine Networking. The Migrate Virtual Machine Networking wizard starts.
2. Select a source network to migrate adapters from. Select a destination network to migrate adapters to. Click Next.
3. Select the virtual machines and adapters to migrate to the destination network. Click Next.
4. Verify that the source network, destination network, and number of virtual machines to migrate are correct. Click OK.


Distributed Switch Architecture


Slide 4-15

(Slide diagram: the distributed switch control plane, with its distributed ports and port groups, uplink port groups, and management and vMotion ports, resides in vCenter Server; hidden virtual switches (the I/O plane) on host 1 and host 2 connect the virtual side to the physical NICs (uplinks).)

The distributed switch components move network management to the datacenter level. A distributed switch is a managed entity configured in vCenter Server. It abstracts a set of distributed switches that are configured on each associated host. vCenter Server owns the configuration of distributed switches, and the configuration is consistent across all hosts. Consider a distributed switch as a template for the network configuration on each ESXi host.

Each distributed switch includes distributed ports. A distributed port represents a port to which you can connect any networking entity, such as a virtual machine or a VMkernel interface. vCenter Server stores the state of distributed ports in the vCenter Server database. So networking statistics and policies migrate with virtual machines when the virtual machines are moved from host to host. A distributed port group provides a way to logically group distributed ports to simplify configuration. A distributed port group specifies port configuration options for each member port on a distributed switch. Distributed port groups define how a connection is made through a distributed switch to a network. Ports can also exist without port groups.

An uplink is an abstraction to associate the vmnics from multiple hosts to a single distributed switch. An uplink is to a distributed switch what a vmnic is to a standard switch. Two virtual machines on different hosts can communicate with each other only if both virtual machines have uplinks in the same broadcast domain.

The distributed switch architecture consists of two planes: the control plane and the I/O plane. The control plane resides in vCenter Server. The control plane is responsible for configuring distributed switches, distributed port groups, distributed ports, uplinks, NIC teaming, and so on. The control plane also coordinates the migration of the ports and is responsible for the switch configuration. For example, in the case of a conflict in the assignment of a distributed port, the control plane is responsible for deciding what to do. The I/O plane is implemented as a hidden virtual switch in the VMkernel of each ESXi host. The I/O plane manages the I/O hardware on the host and is responsible for forwarding packets.
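You can see the I/O plane on a host directly, because each distributed switch appears as a hidden proxy switch on every associated ESXi host. A sketch, using the esxcli syntax available in ESXi 5.x:

esxcli network vswitch dvs vmware list    # host-local view of each distributed switch

The output shows the distributed switch name, its uplinks on that host, and the client ports, which is the host-local copy of the configuration that vCenter Server pushed down.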


Editing General Distributed Switch Properties


Slide 4-16

General properties include the distributed switch name, number of uplink ports and optional uplink names, the number of ports, and notes.

Distributed ports and port groups inherit property settings defined at the distributed switch level.


If necessary, you can edit the properties of a distributed switch. Settings on the Properties tab are grouped into the categories General and Advanced. In the General panel, you can name your uplink ports. Naming uplinks is a good way to help administrators understand which uplinks to associate with port groups for policy settings. The Settings dialog box has several tabs: Properties, Network Adapters, Private VLAN, NetFlow, and Port Mirroring. These tabs are available only for distributed switches, not for individual distributed ports or distributed port groups. The Network Adapters tab is a read-only form that enables you to verify which physical adapters are connected to the distributed switch. The Private VLAN, NetFlow, and Port Mirroring tabs are discussed later in the module.
To display general distributed switch properties:
1. In the Networking inventory view, right-click the distributed switch.
2. Select Edit Settings. The distributed switch Settings dialog box is displayed, as shown on the slide.


Editing Advanced Distributed Switch Properties


Slide 4-17

The Properties tab also has the following settings for Advanced properties:

Maximum MTU (maximum transmission unit)
Discovery Protocol
Administrator Contact Information

Advanced properties on the distributed switch enable you to define the maximum transmission unit (MTU), the discovery protocol status and operation type, and administrator contact details. MTU determines the maximum size of frames in this distributed switch. The distributed switch drops frames bigger than the specified size. If your environment supports jumbo frames, use this option to enable or disable jumbo frames on the distributed switch.
To enable jumbo frames on a distributed switch:

Set the Maximum MTU field to 9000. To use jumbo frames, the network must support it end to end. That is, jumbo frame support must be enabled on the physical switch, on the distributed switch, and in the guest operating system of the virtual machine. To enable jumbo frames in the guest operating system of a virtual machine, first ensure that the latest version of VMware Tools is installed. Then, for the virtual network adapter, use either the vmxnet3, vmxnet2, e1000, or e1000e virtual device. ESXi supports jumbo frames in the guest operating system and on VMkernel ports. Distributed switches support two types of discovery protocols: Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP). Discovery protocols are discussed later in this module.
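To confirm that jumbo frames really do pass end to end, a common check is a large, nonfragmented ping between VMkernel ports. A sketch, assuming a reachable peer VMkernel address of 10.10.10.12 (a hypothetical address for illustration):

vmkping -d -s 8972 10.10.10.12

The -d option sets the do-not-fragment bit and -s sets the payload size; 8972 bytes of payload plus 20 bytes of IP header and 8 bytes of ICMP header equals the 9000-byte MTU. If the ping fails, some device in the path is not passing jumbo frames.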

Editing Distributed Port Group Properties


Slide 4-18

Most of the port group properties are available for both distributed port groups and standard port groups.

Some exceptions exist:

A distributed port group has the following load balancing policy: Route based on physical NIC load.


Distributed switches have a load balancing policy that is not available on standard switches. The load balancing policy is called Route based on physical NIC load. Also known as load-based teaming, this policy takes virtual machine network I/O load into account. This policy tries to avoid congestion by dynamically reassigning and balancing the virtual switch port to physical NIC mappings. Load-based teaming maps virtual NICs to physical NICs and remaps the virtual NIC-to-physical NIC affiliation if the load exceeds specific thresholds on an uplink. Load-based teaming uses the same initial port assignment as the originating virtual port ID load balancing policy. The first virtual NIC is affiliated to the first physical NIC, the second virtual NIC to the second physical NIC, and so on. After initial placement, load-based teaming examines both ingress and egress load of each uplink in the team. Load-based teaming then adjusts the virtual NIC-to-physical NIC mapping if an uplink is congested. The NIC team load balancer flags a congestion condition if an uplink experiences a mean utilization of 75% or more over a 30-second period.


Distributed Switch Configuration: .dvsData Folder


Slide 4-19

When a virtual machine uses a distributed port, a folder named .dvsData is created on the datastore on which the virtual machine resides:

A subfolder exists named after the UUID of the distributed switch:

The UUID folder contains one or more files, each file corresponding to a port ID used by a virtual machine.

When a virtual machine is connected to a port on a distributed switch, a folder named .dvsData is created on the datastore on which the virtual machine resides. The .dvsData folder is created only if you have a virtual machine that is attached to a distributed switch and that is located on that datastore. If virtual machines are attached only to standard switches, then the .dvsData folder does not exist. Also, only the datastore that holds the virtual machine's .vmx configuration file has the .dvsData folder.

In the .dvsData folder is a subfolder whose name matches the UUID of the distributed switch. Each distributed switch has a UUID in a format such as 31 4c 2a 50 cf 99 c3 bf-d0 9a ba 8d ef 6f fe 71. In the UUID folder, you might find one or more files. Each file corresponds to the port ID that the virtual machine is associated with. This number corresponds to the parameter ethernet#.dvs.portId, where # is 0, 1, and so on. This parameter is located in the virtual machine's .vmx configuration file. The ESXi host periodically synchronizes the virtual machine's port state into this file. The host does this every five minutes.

The .dvsData folder and its subfolders are primarily used for VMware vSphere High Availability. When a vSphere HA event occurs and the virtual machine is restarted on a different ESXi host, the destination host reads the distributed port state from the .dvsData subfolder and starts the virtual machine on that datastore.


As a general rule, do not delete the .dvsData folder. However, if you must delete the folder (because you are performing VMFS maintenance for instance), ensure that no virtual machines on the datastore are registered in vCenter Server. After determining that no virtual machines are registered, then you can safely remove the .dvsData folder and its subfolders.
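From ESXi Shell you can inspect the folder directly. A sketch with a hypothetical datastore name and the example UUID from above (the UUID folder name contains spaces, so it must be quoted):

ls /vmfs/volumes/Datastore01/.dvsData/
ls /vmfs/volumes/Datastore01/.dvsData/'31 4c 2a 50 cf 99 c3 bf-d0 9a ba 8d ef 6f fe 71'/

The first listing shows one subfolder per distributed switch UUID; the second shows one small file per distributed port ID in use by virtual machines on that datastore.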


Standard Switch and Distributed Switch Feature Comparison


Slide 4-20
The slide table compares the following features across standard and distributed switches: Layer 2 switch, VLAN segmentation, IPv6 support, 802.1Q tagging, NIC teaming, and outbound traffic shaping (available on both switch types); and inbound traffic shaping, VM network port block, private VLANs, load-based teaming, datacenter-level management, network vMotion, vNetwork switch APIs, per-port policy settings, port state monitoring, NetFlow, and port mirroring (available on distributed switches only).

The table provides a summary of the capabilities present in standard and distributed switches. Distributed switches have more features than standard switches, including:
Inbound traffic shaping
Support for private VLANs
Load-based teaming
Port mirroring
Load-based teaming enables you to balance the load across the team of NICs, based on the current loads on the physical NICs. Network vMotion is the ability of distributed ports to migrate with their virtual machine. When you migrate a virtual machine with vMotion, the distributed port statistics and policies move with the virtual machine.


Lab 3
Slide 4-21

In this lab, you will create a distributed switch.


1. Configure infrastructure to use a single vCenter Server - Part I.
2. Configure infrastructure to use a single vCenter Server - Part II.
3. Create a distributed switch for the virtual machine network.
4. Create a distributed switch port group.
5. Migrate virtual machines to a distributed switch port group.
6. Verify that your virtual machine has proper access to the Production network.


Review of Learner Objectives


Slide 4-22

You should be able to do the following:

List the benefits of using vSphere distributed switches.
Create a distributed switch.
Manage the distributed switch.
Describe the distributed switch architecture.
Describe the properties of a distributed switch.


Lesson 2: Distributed Switch Features


Slide 4-23

Lesson 2: Distributed Switch Features



Learner Objectives
Slide 4-24

After this lesson, you should be able to do the following:

Describe distributed switch port binding.
Explain how private VLANs work.
Describe the types of discovery protocols.
Configure network resource pools.
Configure NetFlow.
Configure port mirroring on a distributed switch.


Distributed Switch Port Binding


Slide 4-25

Port binding is configured at the port group level. Port binding determines when and how a virtual machine's virtual NIC is assigned to a virtual switch port. Three port binding options exist:

Static binding
Dynamic binding (deprecated in VMware vSphere ESXi 5.0)
Ephemeral (no binding)


Three types of port binding are available: static, dynamic, and ephemeral. All three have different effects on network connections.

When you connect a virtual machine to a port group configured with static binding, a port is immediately assigned and reserved for it, guaranteeing connectivity at all times. The port is disconnected only when the virtual machine is removed from the port group. You can connect a virtual machine to a static-binding port group only through vCenter Server. Static binding is the default setting and is recommended for general use.

In a port group configured with dynamic binding, a port is assigned to a virtual machine only when the virtual machine is powered on and its NIC is in a connected state. The port is disconnected when the virtual machine is powered off or the virtual machine's NIC is disconnected. Virtual machines connected to a port group configured with dynamic binding must be powered on and off through vCenter Server. Dynamic binding can be used in environments where you have more virtual machines than available ports, but you do not plan to have a greater number of virtual machines active than you have available ports. For example, if you have 300 virtual machines and 100 ports, but you never have more than 90 virtual machines active at one time, dynamic binding would be appropriate for your port group.

In a port group configured with ephemeral binding, a port is created and assigned to a virtual machine when the virtual machine is powered on and its NIC is in a connected state. The port is deleted when the virtual machine is powered off or the virtual machine's NIC is disconnected. Ephemeral port assignments can be made through ESXi as well as vCenter Server, giving you the flexibility to manage virtual machine connections through the host when vCenter Server is down. Although only an ephemeral binding allows you to modify virtual machine network connections when vCenter Server is down, network traffic is unaffected by a vCenter Server failure regardless of port binding type.

Ephemeral port groups should be used only for recovery purposes when you want to provision ports directly on an ESXi host, bypassing vCenter Server, not for any other case. The reasons for this recommendation are the following:

Scalability: An ESXi 5.x host can support up to 256 ephemeral port groups. Since ephemeral port groups are always pushed to hosts, this limit is effectively also the vCenter Server limit.

Performance: Every operation, including add-host and virtual machine power operations, is slower comparatively because ports are created and destroyed in the operation code path. Virtual machine operations are far more frequent than add-host or switch operations, so ephemeral ports are more demanding in general.

Non-persistency: Port-level permissions and controls are lost across power cycles, so no historical context is saved.


Port-Binding Examples
Slide 4-26
Static port-binding example:
Three ports in the distributed switch. The first three virtual machines to connect get the ports. Those ports are permanently locked to those virtual machines. The power state of the virtual machine does not matter.

Dynamic port-binding example:
Three ports. Ports are assigned when the virtual machine is powered on. Only three out of the four virtual machines are connected.

Ephemeral port-binding example:
As many ports as you need. The power state of virtual machines does not matter. Ports are created as you connect, limited only by the maximum for vSphere on your hardware.

Each type of port binding (static, dynamic, or ephemeral) has different effects on network connections. The first example shows static port binding. Three ports exist on the distributed switch. The first three virtual machines to connect acquire these ports. These ports are permanently bound to these virtual machines. The fourth virtual machine cannot connect. The second example shows dynamic port binding. Three ports exist on the distributed switch. But four virtual machines are powered on. With dynamic port binding, only three of the virtual machines can be connected. The third example shows ephemeral port binding. As many ports as you need are available. The power state of the virtual machine does not matter. Ports are created as virtual machines are connected. The number of ports is limited only by the maximum values for vSphere on your physical hardware.


VLAN Policies for Distributed Port Groups


Slide 4-27

VLAN policies for distributed port groups:

None
VLAN
VLAN Trunking
Private VLAN


Right-click a distributed port group and select Edit Settings.

The VLAN policy enables virtual networks to join physical VLANs. The VLAN type provides the following options:

None: Do not use VLAN. The distributed switch performs no tagging or untagging, and the traffic travels untagged between virtual machines and the physical switch. Use this option for external switch tagging or for cases where you are not using VLANs at all.

VLAN: Also known as virtual switch tagging, this option enables you to specify which VLAN to tag or untag. All traffic from the distributed switch and the physical switch is tagged to the specified VLAN, and all traffic between the distributed switch and the virtual machines is untagged. The accepted value is a number between 1 and 4094.

VLAN Trunking: This option is used for VLAN trunking and enables you to specify which VLANs to allow in the trunk. You can enter one or more VLAN trunk values, including a range of values. All VLAN tagging of packets is performed by the virtual switch before leaving the host. Host network adapters must be connected to trunk ports on the physical switch. Port groups that are connected to the virtual switch must have an appropriate VLAN ID specified.

Private VLAN: This option enables you to specify which private VLAN to use. Select this option after you define private VLANs on the PVLAN tab for the distributed switch.
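There is no vicfg equivalent for the distributed port group VLAN policy (it is set through vCenter Server as shown on the slide), but for comparison you can list the VLAN assignments of standard port groups on a host from ESXi Shell. A sketch using the esxcli namespace available in ESXi 5.x:

esxcli network vswitch standard portgroup list    # shows each port group and its VLAN ID

In the output, a VLAN ID of 0 means no tagging (external switch tagging), and 4095 on a standard switch corresponds to passing all VLANs through to the guest.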

What Is a Private VLAN?


Slide 4-28

A VLAN is:

A firmware or software construct used to create separate networks in physical switches:

VLANs divide a single broadcast domain into several logical broadcast domains, each with its own IP network address.

A private VLAN is:

An extension to the VLAN standard
Further segmentation of a single VLAN into secondary private VLANs

A secondary private VLAN:
Exists only in the primary VLAN
Shares the same IP network address
Is identified on the physical and distributed switches by a unique VLAN ID


A private VLAN enables you to isolate traffic between virtual machines in the same isolated VLAN. A private VLAN provides additional security between virtual machines on the same subnet without exhausting VLAN number space. Private VLANs are useful on a DMZ where the server must be available to external connections and possibly internal connections, but rarely must communicate with the other servers on the DMZ. The basic concept behind private VLANs is to divide an existing VLAN, called the primary VLAN, into one or more separate VLANs, called secondary VLANs. Private VLANs have existed in the networking industry for several years. But now you can create private VLANs for virtual machines within distributed switches.


Types of Secondary Private VLANs


Slide 4-29

Three types of secondary private VLANs:

Promiscuous
Isolated
Community

The type of secondary private VLAN determines packet forwarding rules.


Primary   Secondary   Type
5         5           promiscuous
5         155         isolated
5         17          community

Three types of secondary VLANs are available: promiscuous, isolated, and community. Virtual machines in a promiscuous private VLAN are reachable by and can reach any machine in the same primary VLAN. Virtual machines in an isolated private VLAN can communicate with no virtual machines except those in the promiscuous private VLAN. Virtual machines in a community private VLAN can communicate with one another and with the virtual machines in the promiscuous private VLAN, but not with any other virtual machine. Traffic in both community and isolated private VLANs travels tagged as the associated secondary private VLAN.

Consider these two observations about how vSphere implements private VLANs:

vSphere does not encapsulate traffic in private VLANs. In other words, no secondary private VLAN is encapsulated in a primary private VLAN packet.

Traffic between virtual machines on the same private VLAN, but on different hosts, moves through the physical switch. Thus the physical switch must be private VLAN-aware and configured appropriately so that traffic in the secondary private VLANs can reach its destination.

Promiscuous Private VLANs


Slide 4-30

Primary   Secondary   Type
5         5           promiscuous
5         155         isolated
5         17          community

A node attached to a port in a promiscuous secondary private VLAN can send and receive packets to any node in any other secondary private VLAN associated with the same primary. Routers are typically attached to promiscuous ports.

(Slide diagram: VM 5 and VM 6 are on promiscuous PVLAN 5, VM 1 and VM 2 are on isolated PVLAN 155, and VM 3 and VM 4 are on community PVLAN 17.)

In the example, virtual machines VM 5 and VM 6 are attached to promiscuous PVLAN 5. VM 5 and VM 6 can communicate with each other and they can communicate with the other virtual machines (VM 1, VM 2, VM 3, and VM 4).


Isolated Private VLANs


Slide 4-31

Primary   Secondary   Type
5         5           promiscuous
5         155         isolated
5         17          community

A node attached to a port in an isolated secondary private VLAN can send to and receive packets only from the promiscuous private VLAN.

In the example, virtual machines VM 1 and VM 2 are attached to isolated PVLAN 155. VM 1 and VM 2 can communicate only with VM 5 and VM 6. They cannot communicate with each other. They also cannot communicate with VM 3 and VM 4 in the community private VLAN. The only reason that they can communicate with VM 5 and VM 6 is that both of those virtual machines are connected to a promiscuous PVLAN.


Community Private VLANs


Slide 4-32

Primary   Secondary   Type
5         5           promiscuous
5         155         isolated
5         17          community

A node attached to a port in a community secondary private VLAN can send to and receive packets from other ports in the same secondary private VLAN as well as ports in the promiscuous private VLAN.

In the example, virtual machines VM 3 and VM 4 are attached to community PVLAN 17. They can communicate with each other because they are both in the same community. VM 3 and VM 4 can also communicate with VM 5 and VM 6 because both of those virtual machines are connected to a promiscuous PVLAN. They cannot communicate with VM 1 and VM 2 in the isolated PVLAN.


Private VLAN Implementation


Slide 4-33

Standard 802.1Q tagging. No double encapsulation. Switch software decides which ports to forward the frame to, based on the tag and the private VLAN tables. For private VLANs, the VLAN ID is the secondary ID.

Primary   Secondary   Type
5         5           promiscuous
5         155         isolated
5         17          community

(Slide diagram: a distributed switch trunks VLAN 5 to the physical network; PVLAN 5 is promiscuous, PVLAN 155 is isolated, and PVLAN 17 is community.)

How are private VLANs implemented in a physical switch? There is no encapsulation of a private VLAN in a VLAN. Everything is done with one tag per packet. Each packet has an additional field that identifies a tag. The packets are tagged according to the switch port configuration, or they arrive already tagged if the port is a trunk. A VLAN would behave the same way on a physical switch.

The switch internally holds a table that associates some tags to some private VLANs, defining the behavior or field of existence of each packet. A broadcast packet might enter the switch with a VLAN tag that belongs to an isolated VLAN. The switch forwards the broadcast packet only to ports associated with the promiscuous VLAN or to trunks that allow the promiscuous VLAN. The equivalent applies for the other types.

Promiscuous private VLANs have the same VLAN ID for both the primary VLAN and the secondary VLAN. Traffic in community and isolated private VLANs travels tagged as the associated secondary private VLAN. Traffic in private VLANs is not encapsulated. That is, no secondary private VLAN is encapsulated in a primary private VLAN packet.

Traffic between virtual machines on the same private VLAN but on different ESXi hosts must go through the physical switch.


Private VLANs and Physical Switches


Slide 4-34

Packets travel tagged with the secondary ID. Each virtual machine can send to and receive from different secondary private VLANs.

Example: community and promiscuous

A physical switch can be confused by the fact that each MAC address is visible in more than one VLAN tag.
A physical switch must trunk to the ESXi host and not be in a secondary private VLAN.

The physical switch must be private VLAN-aware and configured appropriately to enable the secondary private VLANs to reach their destination. Some physical switches discover MAC addresses per VLAN. This discovery can be a problem for private VLANs because each virtual machine appears to the physical switch to be in more than one VLAN. The physical switch can also be confused by the fact that each MAC address is visible in more than one VLAN tag. For these reasons, each physical switch to which ESXi hosts with private VLANs are connected must be private VLAN-aware. Being private VLAN-aware enables the physical switch to merge the secondaries into one single broadcast domain.

Because of the private VLAN implementation, packets travel tagged with the secondary ID, and each virtual machine can receive and send to different secondary private VLANs (for example, community and promiscuous). The physical switch must trunk to the ESXi host and not be in a secondary private VLAN. Private VLANs in the distributed switch work even with physical switches that are not private VLAN-aware and are not discovering MAC addresses per VLAN, because the MAC address is associated to the single port.

Most PVLAN problems are caused by physical switches that are configured incorrectly. Compare the PVLAN maps in the physical switch to the PVLAN configuration in the distributed switch. All physical network switches have commands that allow the network administrator to examine the PVLAN and VLAN configuration. If you believe a PVLAN problem is being caused by a physical switch configuration, look at your physical PVLAN configuration. Double-check the configuration of the physical switches that the ESXi hosts are running traffic through.


Physical Switch PVLAN-Aware


Slide 4-35

A virtual machine in a promiscuous PVLAN sends an ARP request for a virtual machine in an isolated PVLAN. The target virtual machine is on a different ESXi host. The physical switch is PVLAN-aware.

Switch ports that see the same MAC address through different VLAN tags

Primary   Secondary   Type
5         5           promiscuous
5         155         isolated
5         17          community

(Slide animation: the ARP request leaves the promiscuous virtual machine untagged, crosses the physical switch tagged with primary PVLAN 5, and is delivered untagged to the isolated virtual machine. The PVLAN logic on the destination VDS detects that the destination is isolated, so it acts as if the tag were 155, and the ARP reply returns tagged with secondary PVLAN 155.)

In this brief animation, you can see possible problems with PVLANs and physical switches. A virtual machine in a promiscuous PVLAN attempts to exchange ARP information with a virtual machine in an isolated PVLAN. This exchange is completely legal inside the PVLAN logic, so the promiscuous PVLAN sends a broadcast ARP request that asks whose MAC address corresponds to the IP address of the isolated virtual machine. This packet travels through the physical switch under the promiscuous PVLAN ID (5, in this example). The packet reaches the destination distributed switch. There the PVLAN logic forwards the broadcast to the isolated virtual machine, which sits in the secondary (isolated) PVLAN 155. The virtual machine replies with the ARP reply, and this packet travels back using PVLAN ID 155. Both physical switch ports in the diagram see the same MAC address through different VLAN IDs. The incoming ARP request had the tag of PVLAN 5. The response packet has the tag PVLAN 155. If these were VLANs and not related PVLANs, this exchange would not be allowed. This exchange would confuse the physical switch and it would drop the packet. This exchange would also not be allowed if these were PVLANs and both of these were isolated PVLANs.

The solution is to manually configure the physical switch. You must enter the PVLAN tables that are being used on the connected distributed switches into the physical switch. Then the physical switch will be aware of which PVLANs are being used. It will also be aware that PVLAN 5 is a promiscuous PVLAN and that PVLAN 155 is an isolated secondary PVLAN that is related to PVLAN 5. This awareness means that traffic between PVLAN 5 and PVLAN 155 is considered legal and that the packet will be forwarded.
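As an illustration only, on a Cisco IOS switch the PVLAN table from the slides might be entered as follows. Treat this as a hedged sketch of the idea rather than a tested configuration; the exact commands vary by platform and software release:

vlan 155
 private-vlan isolated
vlan 17
 private-vlan community
vlan 5
 private-vlan primary
 private-vlan association 155,17

With this map in place, the switch knows that traffic tagged 5 and 155 belongs to the same private VLAN domain, so the ARP exchange shown in the animation is forwarded instead of dropped.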


Configuring and Assigning Private VLANs


Slide 4-36

Right-click the distributed switch and select Edit Settings. Click the Private VLAN tab.

Configure.

Right-click the distributed port group and select Edit Settings. Select VLAN.

Assign.

The first step in setting up a private VLAN is to create the private VLAN primary and secondary associations. Three types of secondary VLANs are available: promiscuous, isolated, and community. Virtual machines in a promiscuous private VLAN are reachable by and can reach any machine in the same primary VLAN. Virtual machines in an isolated private VLAN can communicate with no virtual machines except those in the promiscuous private VLAN. Virtual machines in a community private VLAN can communicate with one another and with the virtual machines in the promiscuous private VLAN, but not with any other virtual machine.
To configure a primary VLAN and secondary VLANs:
1. Right-click the distributed switch in the networking inventory view and select Edit Settings.
2. Select the Private VLAN tab.
3. Under Primary Private VLAN ID, click Enter a Private VLAN ID here and enter the number of the primary private VLAN.
4. Click anywhere in the dialog box and select the primary private VLAN that you added. The primary private VLAN that you added is displayed under Secondary Private VLAN ID.
5. For each new secondary private VLAN, click Enter a Private VLAN ID here under Secondary Private VLAN ID and enter the number of the secondary private VLAN.
6. Click anywhere in the dialog box, select the secondary private VLAN that you added, and select either Isolated or Community for the port type.
7. Click OK.

After the primary and secondary private VLANs are associated for the distributed switch, use the association to configure the VLAN policy for the distributed port group.
To configure the VLAN policy for a distributed port group:
1. Right-click the distributed port group in the networking inventory view and select Edit Settings.
2. Select Policies.
3. Select the VLAN type to use and click OK.


Discovery Protocols
Slide 4-37

Switch discovery protocols help network administrators determine the capabilities of a network device.

vSphere supports two discovery protocols: Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP).

You can use CDP and LLDP to gather configuration and connection information about the physical or virtual switch. Such information might aid troubleshooting network problems.
CDP:
Introduced in vSphere 4.0
Available on a standard switch or a distributed switch
Specific to Cisco

LLDP:
New in vSphere 5.0
Available only on a distributed switch
Vendor-neutral protocol

Discovery protocols are available in vSphere virtual switches. CDP was developed by Cisco Systems to broadcast information at network layer 2 about connected devices. CDP is available for standard switches and distributed switches, both of which must be connected to Cisco physical switches. CDP has been supported in vSphere since version 4.0. LLDP is new in vSphere 5.0 and is available for distributed switches only. LLDP supports the standards-based discovery protocol IEEE 802.1AB. CDP is specific to Cisco. LLDP is vendor-neutral. CDP or LLDP is used by network devices for advertising their identity, capabilities, and neighbors on a network. When CDP or LLDP is enabled for a virtual switch, you can use the VMware vSphere Client to view properties of the peer physical switch. These properties include the device ID, the software version, and timeout.


Configuring CDP or LLDP


Slide 4-38

With CDP or LLDP enabled, the virtual switch can be configured for three different modes of operation:

Listen: Information is received from the physical switches.
Advertise: Information is sent to the physical switches.
Both: Information is sent to and received from the physical switches.


The CDP and LLDP discovery protocols enable vCenter Server and the vSphere Client to identify properties of a physical switch, such as switch name, port number, and port speed/duplex settings. You can also configure CDP or LLDP so that information about physical adapters and ESXi host names is passed to the CDP- or LLDP-compatible switches. After the discovery protocol is enabled, it has three Operation options:

Listen (default): The ESXi host detects and displays information about the associated physical switch port, but information about the virtual switch is not available to the physical switch administrator.

Advertise: The ESXi host makes information about the virtual switch available to the physical switch administrator but does not detect and display information about the physical switch.

Both: The ESXi host detects and displays information about the associated physical switch and makes information about the virtual switch available to the physical switch administrator.
To enable a discovery protocol on a distributed switch:
1. Right-click a distributed switch in the Networking inventory view and select Edit Settings.
2. On the Properties tab, select Advanced.
3. Select Enabled from the Status drop-down menu.
4. Select the discovery protocol type from the Type drop-down menu.
5. Select the operation mode from the Operation drop-down menu.
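For comparison, CDP on a standard switch can be queried and set from vCLI; distributed switch discovery settings are managed only through vCenter Server as described above. A sketch, run from vMA with the usual connection options:

vicfg-vswitch -b vSwitch0          # show the current CDP status of the standard switch
vicfg-vswitch -B both vSwitch0     # set CDP to both listen and advertise

Valid values for the -B option are down, listen, advertise, and both.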


Viewing CDP Information


Slide 4-39

Different icons are used to view CDP information on standard and distributed switches.

For you to view Cisco information from the vSphere Client, the CDP mode for the distributed switch must be either Listen or Both. The information icon enables you to view the Cisco information for the distributed switch. Because the CDP advertisements of Cisco equipment typically occur once a minute, you might notice a delay between enabling CDP and the availability of CDP data from the vSphere Client.


Viewing LLDP Information


Slide 4-40

Example of LLDP output from a physical switch

When LLDP is enabled for a distributed switch, you can use the vSphere Client to view properties of the physical switch. Information like the following is displayed:
Chassis ID
Timeout value
System name and description
VLAN ID
Peer device capabilities
LLDP must be enabled on the physical switch. Some physical switches have LLDP enabled by default.


What Is Network I/O Control?


Slide 4-41

Network I/O Control enables distributed switch traffic to be divided into different network resource pools. You can use shares and limits to control traffic priority.

(Slide diagram: system-defined resource pools (vMotion, FT, NFS, iSCSI, VR, and management traffic) and user-defined resource pools map to port groups on a distributed switch that shares 10GigE uplinks.)

Network resource pools determine the priority that different network traffic types are given on a distributed switch. Network I/O Control is disabled by default. When Network I/O Control is enabled, distributed switch traffic is divided into the following system-defined network resource pools:
Fault Tolerance traffic
iSCSI traffic
vMotion traffic
Management traffic
NFS traffic
Virtual machine traffic
vSphere Replication traffic
vSphere Replication (VR) is a new alternative for the replication of virtual machines. VR is introduced in VMware vCenter Site Recovery Manager 5.0. VR is an engine that provides replication of virtual machine disk files. VR tracks changes to virtual machines and ensures that blocks that differ in a specified recovery point objective are replicated to a remote site.

You can also create custom, user-defined network resource pools for virtual machine traffic. You can control the bandwidth that each network resource pool is given by setting the physical adapter shares and host limit for each network resource pool.


Configuring System-Defined Network Resource Pools


Slide 4-42

When Network I/O Control is enabled, distributed switch traffic is divided into predefined network resource pools.

Traffic is controlled with physical adapter shares and host limits.


Networking inventory view > select distributed switch > Resource Allocation tab.

You can control the priority given to the traffic from each of these network resource pools by setting the physical adapter shares and host limits for each network resource pool. Network shares and limits work in the same way as CPU, memory, and storage I/O shares and limits.

The physical adapter shares assigned to a network resource pool determine the share of the total available bandwidth guaranteed to the traffic associated with that network resource pool. The share of transmit bandwidth available to a network resource pool is determined by the network resource pool's shares and what other network resource pools are actively transmitting. For example, you set your Fault Tolerance traffic and iSCSI traffic resource pools to 100 shares, and all other resource pools to 50 shares. The Fault Tolerance traffic and iSCSI traffic resource pools each receive 22 percent of the available bandwidth. The remaining resource pools each receive 11 percent of the available bandwidth. These reservations apply only when the physical adapter is saturated.

Network shares and limits apply to a host's outbound network I/O traffic only. The iSCSI traffic resource pool shares do not apply to iSCSI traffic on a dependent hardware iSCSI adapter.
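The percentages in the example follow directly from the share totals. Assuming only the seven system-defined pools listed earlier are active, two pools at 100 shares and five at 50 shares give a total of 100 + 100 + (5 x 50) = 450 shares. Each 100-share pool is then guaranteed 100/450, about 22 percent, and each 50-share pool 50/450, about 11 percent, of the transmit bandwidth while the adapter is saturated.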


To enable Network I/O Control:
1. Select Home > Inventory > Networking.
2. Select the distributed switch in the inventory and click the Resource Allocation tab.
3. Click the Properties link and select Enable network I/O control on this vDS.

To enable network resource pool settings:
1. In the Networking inventory view, select the distributed switch.
2. On the Resource Allocation tab, right-click the network resource pool to edit and select Edit Settings.
3. Modify the physical adapter shares value and host limit for the network resource pool.
4. (Optional) Select the QoS priority tag from the drop-down menu. The QoS priority tag specifies an IEEE 802.1p tag, enabling quality of service at the media access control level.
5. Click OK.


User-Defined Network Resource Pools


Slide 4-43

User-defined network resource pools are used for customized network resource management.


Limits, shares, and a QoS priority tag can be assigned to each network resource pool, whether the pool is system-defined or user-defined. In the example, two new network resource pools have been created: Virtual Machine Normal Traffic and Virtual Machine Test Traffic. Virtual Machine Normal Traffic has 50 shares. Virtual Machine Test Traffic has 25 shares. The system network resource pool named Virtual Machine Traffic has 100 shares. So you might do the following:
Put your highly critical virtual machines in the Virtual Machine Traffic pool.
Put your less critical virtual machines in the Virtual Machine Normal Traffic pool.
Put all other virtual machines in the Virtual Machine Test Traffic pool.


To configure a network resource pool:
1. Click the New Network Resource Pool link to create the network resource pool.
2. Click the Manage Port Groups link to associate one or more port groups with a network resource pool.

In this release, Network I/O Control does not participate in the placement of virtual machines through VMware vSphere Distributed Resource Scheduler (DRS).


Configuring a User-Defined Network Resource Pool


Slide 4-44

Select distributed switch > Resource Allocation tab > New Network Resource Pool link.

Select distributed switch > Resource Allocation tab > Manage Port Groups link.

QoS priority tag   Network priority   Traffic characteristics
1                  0 (lowest)         Background
None (0)           1                  Best effort
2                  2                  Excellent effort
3                  3                  Critical applications
4                  4                  Video, < 100 ms latency
5                  5                  Voice, < 10 ms latency
6                  6                  Internetwork control
7                  7 (highest)        Network control

To create a network resource pool:
1. Select your distributed switch in the inventory and click the Resource Allocation tab.
2. Click the New Network Resource Pool link. The Network Resource Pool Settings dialog box is displayed.
3. Enter information about the resource pool by modifying the various fields.

With the QoS priority tag menu, you can select a priority code, from 1 to 7, or no code at all. Each code is given a network priority and is associated with a network traffic characteristic. The QoS priority tag gives you a way of identifying which network resource pool has a higher priority than the others. The priority tag determines the service level that the packet receives when crossing an 802.1p-enabled network. vSphere does nothing with the priority tag. The physical switch prioritizes packets based on the QoS tag. 802.1p priority tagging is available only with distributed switches, not standard switches. After the network resource pool is created, you can assign one or more distributed port groups to a user-defined network resource pool.


To assign multiple port groups to the same network resource pool:
1. In the Resource Allocation tab, click the Manage Port Groups link. The Manage Port Groups dialog box is displayed.
2. Select a network resource pool to associate with each port group. The Assign multiple button enables you to assign multiple port groups to the same network resource pool.


What Is NetFlow?
Slide 4-45

NetFlow:

A network analysis tool for monitoring the network and for gaining visibility into virtual machine traffic
A tool that can be used for profiling, intrusion detection, networking forensics, and compliance
Supported on distributed switches only

(Slide diagram: ESXi hosts on a distributed switch enabled for NetFlow send network flow data to a NetFlow collector.)

NetFlow is a protocol that Cisco Systems developed for analyzing network traffic. NetFlow has become an industry standard specification for collecting types of network data for monitoring and reporting. The data sources are network devices, such as switches and routers. For ESXi deployments, NetFlow enables visibility into virtual machine traffic. NetFlow can be used for:
Network profiling
Intrusion detection and prevention
Networking forensics
Sarbanes-Oxley compliance
NetFlow version 5 is implemented in distributed switches. Standard switches do not support NetFlow.
Since VMware ESX 3.5, VMware has provided experimental support for NetFlow. A distributed switch implements NetFlow version 5. However, newer versions of NetFlow enable additional capabilities, such as aggregation and caching, that are not available in a distributed switch. Some customers might find this confusing, especially customers who are familiar with later versions of NetFlow on physical switches.


Network Flows
Slide 4-46

A network flow is a unidirectional sequence of packets, with each packet sharing a common set of properties. NetFlow captures two types of flows:

Internal flow: Represents intrahost virtual machine traffic
External flow: Represents interhost virtual machine traffic and physical machine-to-virtual machine traffic

Flow records are sent to a NetFlow collector for analysis.

(Slide diagram: an internal flow stays between virtual machines on one ESXi host; external flows cross between ESXi hosts and between a physical host and a virtual machine; network flow records are sent to the NetFlow collector.)

Network flows give you full visibility into virtual machine traffic, which can be collected for historical views and used for multiple purposes. NetFlow captures two types of flows: internal and external. Internal flows are generated from intrahost virtual machine traffic, that is, traffic between virtual machines on the same host. External flows are generated from interhost virtual machine traffic, traffic between virtual machines located on different hosts, or virtual machines on different distributed switches. External flows are also generated from physical machine-to-virtual machine traffic.

A flow is a sequence of packets that share the same seven properties:
Source IP address
Destination IP address
Source port
Destination port
Input interface ID
Output interface ID
Protocol

A flow is unidirectional. Flows are processed and stored as flow records by supported network devices, such as a distributed switch. The flow records are then sent to a NetFlow collector for additional analysis. Although flow processing is an efficient method, NetFlow can put additional strain on the distributed switch. NetFlow requires additional processing and additional storage on the host for the flow records to be processed and exported.


Network Flow Analysis


Slide 4-47

Network flow data is sent to a third-party NetFlow collector, which:

Accepts and stores network flow records
Includes a storage system for long-term storage of flow-based data
Mines, aggregates, and reports on the collected data

With the collected data:
You can investigate and isolate excessive network bandwidth utilization, bottlenecks, and unexpected application traffic.
You can view historical records to diagnose the cause of these outages or breaches.
You can analyze network traffic by rate, volume, and utilization.
You can analyze trends in virtual machine and host traffic.

(Slide diagram: the VDS, with IP address 192.168.10.24, sends network flow records to a NetFlow collector at IP address 172.20.10.100.)

NetFlow sends aggregated network flow data to a NetFlow collector. Third-party vendors have NetFlow collector products. A NetFlow collector accepts and stores the completed network flow records. NetFlow collectors vary in functionality by vendor. Some features that a NetFlow collector might provide include:
Analysis software to mine, aggregate, and report on the collected data
A storage system to allow for long-term storage so that you can archive the network flow data
A customized user interface, often Web-based

The NetFlow collector reports on various kinds of networking information, including:
The current top network flows consuming the most bandwidth in a particular (virtual) switch
The IP addresses that are behaving irregularly
The number of bytes that a particular virtual machine has sent and received in the past 24 hours

With NetFlow data, you can investigate the causes of excessive use of network bandwidth, bottlenecks, and unexpected application traffic. The historical records that you stored in long-term storage can help you diagnose what might have caused these outages or breaches.


NetFlow data comes from NetFlow-enabled network devices, so additional network probes to collect the flow-based data are not needed. NetFlow collectors and analyzers can provide a detailed set of network performance data. Given enough storage on the NetFlow collector, flow data can be archived for a long time, providing a long-term record of network behavior.


Configuring NetFlow on a Distributed Switch


Slide 4-48

Networking inventory view > select distributed switch > Configuration tab > Edit Settings link.

1. Configure NetFlow on the distributed switch.
2. Enable or disable NetFlow on a distributed port group, a specific port, or at the uplink.


To configure NetFlow on a distributed switch, specify the following NetFlow settings:

Collector IP address and Port: The IP address and port number used to communicate with the NetFlow collector system. These fields must be set for NetFlow monitoring to be enabled for the distributed switch or for any port or port group on the distributed switch.

VDS IP address: An optional IP address that is used to identify the source of the network flow to the NetFlow collector. The IP address is not associated with a network port, and it does not have to be pingable. This IP address is used to fill the source IP field of NetFlow packets. This IP address enables the NetFlow collector to interact with the distributed switch as a single switch, rather than seeing a separate, unrelated switch for each associated host. If this IP address is not configured, the host's management IP address is used instead.

Active flow export timeout: The number of seconds after which active flows (flows where packets are being sent) are forced to be exported to the NetFlow collector. The default is 300. The value range is 0 to 3600.

Idle flow export timeout: The number of seconds after which idle flows (flows where no packets have been seen for x number of seconds) are forced to be exported to the collector. The default is 15. The value range is 0 to 300.


Sampling rate: The value that is used to determine what portion of data NetFlow collects. For example, if the sampling rate is 2, the data is collected from every second packet. If the sampling rate is 5, the data is collected from every fifth packet. Although increasing the sampling rate reduces the load on the distributed switch, a higher sampling rate also makes the NetFlow reporting less accurate. If the value is 0, sampling is disabled. The default value is 0. The value range is 0 to 1000.

Process internal flows only: Indicates whether to limit analysis to traffic that has both the source virtual machine and the destination virtual machine on the same host. By default, the check box is not selected, which means that both internal and external flows are processed. You might select this check box if you already have NetFlow deployed in your datacenter and you want to see only the flows that cannot be seen by your existing NetFlow collector.

After configuring NetFlow on the distributed switch, you can enable NetFlow monitoring on a distributed port group, a specific port, or an uplink.


What Is Port Mirroring?


Slide 4-49

Port mirroring is a technology that duplicates network packets of a switch port (source) to another port (destination).
The source's traffic is monitored at the destination.
Port mirroring is used:
- To assist in troubleshooting
- As input for network analysis appliances
Many network switch vendors implement port mirroring in their products. vSphere supports port mirroring on a distributed switch:
- Used to monitor virtual machine traffic
(Slide diagram: virtual machines VM A, VM B, and VM C on a host connected to a VDS, with normal traffic flowing through a source port and mirrored traffic duplicated to a destination port.)
Port mirroring is a technology that duplicates network packets of a switch port (the source) to another port (the destination). The source's network traffic is monitored at the destination. Port mirroring is used to assist in troubleshooting. On physical switches, administrators are accustomed to being able to mirror traffic to special ports to troubleshoot network-related problems. Port mirroring is commonly used for network appliances that require monitoring of network traffic, such as an intrusion detection system. Many network switch vendors implement port mirroring in their products. For example, port mirroring on a Cisco Systems switch is usually called Switched Port Analyzer (SPAN). In vSphere 5.0, port mirroring functionality is supported on a distributed switch. Port mirroring is used to monitor virtual machine traffic. Port mirroring overcomes the vSphere 4.x limitation of enabling promiscuous mode on a distributed port. If you enable promiscuous mode on a distributed port, that port sees all the network traffic going through the distributed switch. You cannot select which traffic from a particular port or port group a promiscuous port is allowed to see; the promiscuous port sees all the traffic on the same broadcast domain. Port mirroring overcomes this limitation by enabling the administrator to control which traffic can be seen by the distributed port that is enabled for port mirroring.


Creating a Port Mirroring Session: General Properties


Slide 4-50

To create a port mirroring session, specify general properties and optional session details.


To mirror distributed switch traffic to specific destination ports, you must create a port mirroring session. You can create port mirroring sessions on vSphere 5.0 distributed switches.
To create a port mirroring session:
1. Go to the Networking inventory view.
2. Right-click the distributed switch and click Edit Settings.
3. Click the Port Mirroring tab.
4. In the General Properties page, enter the following information:

- Name – The session name, which cannot contain more than 80 characters.
- Description – The description of your choice.
Port Mirroring Session Details:
- (Optional) Allow normal I/O on destination ports – If you do not select this check box, mirrored traffic is allowed out on destination ports, but no traffic is allowed in.
- (Optional) Encapsulation VLAN – Create a VLAN ID that encapsulates all frames at the destination ports. If packets already have a VLAN, the VLAN is replaced with the new VLAN ID specified here. Use of the new VLAN enables the captured traffic to be sent to a different host on a different VLAN. This VLAN shares a trunk port that has the original traffic with the original VLAN ID passing through it.
- (Optional) Preserve original VLAN – This check box is available only if you select Encapsulation VLAN. The original VLAN tag is kept, and the encapsulation VLAN tag specified here is added to the packet.
- (Optional) Mirrored packet length – Puts a limit on the size of mirrored frames. If this check box is selected, all mirrored frames are truncated to the specified length if their length is greater than this value. Increasing the mirrored packet length increases the amount of time required to process packets and effectively decreases the amount of packet buffering. You can limit the packet length so that the mirror captures only the protocol information that you are interested in.
Port mirroring also works with jumbo frames.


Creating a Port Mirroring Session: Source and Destination


Slide 4-51
Traffic direction for the source can be one of the following:
- Ingress – Traffic from the source virtual machine to the VDS is mirrored.
- Egress – Traffic from the VDS to the source virtual machine is mirrored.
- Both – Both ingress and egress traffic are mirrored.

For the port mirroring session, select a destination type:
- Port – One or more port IDs
- Uplink – One or more uplinks


After specifying the general properties for the port mirroring session, select the source and the destination. For the source, select a traffic direction to mirror: Ingress/Egress, Ingress, or Egress. The traffic direction is from the perspective of the distributed switch. Ingress traffic direction refers to traffic flowing from the source virtual machine into the distributed switch. Egress traffic direction refers to traffic flowing out from the distributed switch into the source virtual machine. Also for the source, add one or more port IDs. The Ports tab of a distributed switch enables you to map a virtual machine to its corresponding port ID on the distributed switch. The port mirroring destination type can be Port or Uplink. If you select Port, specify one or more port IDs. If you select Uplink, specify one or more uplinks. Port mirroring has the following restrictions:
- In a session, a port cannot be both a source and a destination.
- A port cannot be a destination for more than one session.
- A promiscuous port cannot be an egress source or destination.
- An egress source cannot be a destination of any session, to avoid cycles of mirroring paths.

These restrictions are in place to avoid flooding the network with mirrored traffic. Source and destination ports must be on the same ESXi host. If a source and a destination are not on the same host, the mirroring path between them does not take effect, although the session can still be added. Finally (not shown here), you must select the Enable This Port Mirroring Session check box to enable the port mirroring session. By default, the session is disabled.


Lab 4
Slide 4-52

In this lab, you will configure and use port mirroring.


1. Configure port mirroring on a distributed switch.
2. Verify that port mirroring works properly.


Lab 5 (Optional)
Slide 4-53

In this lab, based on a set of requirements, you will design a network configuration for an ESXi environment.
1. Analyze the requirements.
2. Design distributed switches and physical connections.
3. Add vMotion traffic to your design.


Review of Learner Objectives


Slide 4-54

You should be able to do the following:

- Describe distributed switch port binding.
- Explain how private VLANs work.
- Describe the types of discovery protocols.
- Configure network resource pools.
- Configure NetFlow.
- Configure port mirroring on a distributed switch.


Key Points
Slide 4-55

- A distributed switch provides similar functionality to a standard switch, but it defines a single configuration that is shared across all associated hosts.
- Port binding determines when and how a virtual machine's virtual NIC is assigned to a virtual switch port.
- A private VLAN is an extension to the VLAN standard that segments a single VLAN into secondary private VLANs.
- vSphere supports two discovery protocols: CDP and LLDP.
- Network I/O Control enables distributed switch traffic to be divided into different network resource pools.
- Distributed switches support the use of network analysis and troubleshooting tools, specifically NetFlow and port mirroring.

Questions?


MODULE 5

Networking Optimization
Slide 5-1


You Are Here


Slide 5-2

- Course Introduction
- VMware Management Resources
- Performance in a Virtualized Environment
- Storage Optimization
- CPU Optimization
- Memory Performance
- VM and Cluster Optimization
- Host and Management Scalability
- Network Scalability
- Network Optimization
- Storage Scalability


Importance
Slide 5-3

Network performance can be measured in terms of how many packets have been dropped when transmitting or receiving data. You should know how to monitor for dropped packets and to troubleshoot network performance problems.


Module Lessons
Slide 5-4

Lesson 1: Networking Virtualization Concepts
Lesson 2: Monitoring Networking I/O Activity
Lesson 3: Command-Line Network Management
Lesson 4: Troubleshooting Network Performance Problems


Lesson 1: Networking Virtualization Concepts


Slide 5-5


Learner Objectives
Slide 5-6

After this lesson, you should be able to do the following:

- Describe network virtualization overhead.
- Describe network adapter features that affect performance.
- Describe VMware vSphere networking features that affect performance.


Network I/O Virtualization Overhead


Slide 5-7

Network I/O virtualization overhead can come from several different sources:
- Emulation overhead
- Packet processing
- Scheduling
- Virtual interrupt coalescing
- Halting (and waking up) the physical CPU
- Halting (and waking up) the virtual CPU
Network I/O latency can increase due to this virtualization overhead.
(Slide diagram: the network I/O path from the guest OS I/O stack and virtual NIC driver, through the virtual NIC device and the VMkernel virtual I/O stack, to the physical NIC driver.)

Packets sent or received from a virtual machine experience some amount of overhead compared to a bare-metal (native) environment. The overhead experienced by virtual machines occurs because packets must traverse an extra layer of virtualization stack. This overhead can also increase network I/O latency. The additional latency due to the virtualization stack can come from several sources. The most common ones include the following:
- Emulation overhead – Certain privileged instructions and some operations to access I/O devices that are executed by the virtual machine are intercepted by the hypervisor. This activity adds some overhead and contributes to network I/O latency as well.
- Packet processing – The network virtualization stack forwards a network packet from the physical NIC to the virtual machine and the reverse. This activity requires some computation and processing, such as switching decisions at the virtual switch, inserting and stripping the VLAN tag, and copying packets if necessary. This processing adds some latency on both the transmit and receive paths.
- Scheduling – Packet transmission and reception involves multiple hypervisor threads and virtual CPUs. The VMkernel scheduler has to schedule and then execute these threads (if they are not already running) upon receipt and transmission of a packet. On an idle system, this activity takes a couple of microseconds. But on a busy system, if high CPU contention exists, this activity can take tens of microseconds and occasionally milliseconds.
- Virtual interrupt coalescing – Similar to physical NIC interrupt coalescing, the virtual machine does virtual interrupt coalescing. That is, a virtual machine might not be interrupted immediately after receiving or transmitting a packet. Instead, the VMkernel might wait to receive or transmit more than one packet before an interrupt is posted. Also, on the transmit path, the virtual machine might wait until a few packets are queued up before sending the packets down to the hypervisor. Sometimes interrupt coalescing has a noticeable impact on average latency.
- Halting (and waking up) the physical CPU – Modern hardware and operating systems are power efficient. That is, operating systems put the physical CPU in sleep mode (halt) when the CPU is idle. Waking up the CPU from halt mode adds some latency, and the amount of latency varies from CPU to CPU. Depending on the load of the system, the CPU might go into deeper sleep states from which it takes even longer to wake up. This overhead is generally common to nonvirtualized and virtualized platforms. However, the side effect can sometimes be worse in virtualized environments than in physical environments. For example, a CPU might flush its cache on entering halt mode. In this case, the halt operation might affect a virtualized system more than a physical system because the cache footprint is larger in a virtual environment.
- Halting (and waking up) the virtual CPU – When the guest operating system issues a halt instruction, the VMkernel scheduler might deschedule the virtual CPU. Rescheduling the virtual CPU (putting the virtual CPU back in the run queue) can take a couple of microseconds.
Latency introduced due to the virtual or physical CPU halt function is more prominent when the CPU is lightly loaded, that is, when the CPU halts frequently.


vmxnet Network Adapter


Slide 5-8

vmxnet is the VMware paravirtualized device driver that does the following:
- Shares a ring buffer between the virtual machine and the VMkernel
- Supports transmission packet coalescing
- Supports interrupt coalescing
- Offloads TCP checksum calculation to the hardware
(Slide diagram: the vmxnet driver in the guest OS TCP/IP stack paired with the vmxnet device, which connects through the VMkernel virtual I/O stack to the physical NIC driver.)


vmxnet is the VMware paravirtualized device driver for virtual networking. The vmxnet driver implements an idealized network interface that passes network traffic between the virtual machine and the physical NICs with minimal overhead. The three versions of vmxnet are vmxnet, vmxnet2 (Enhanced vmxnet), and vmxnet3. The driver improves performance through a number of optimizations:
- Shares a ring buffer between the virtual machine and the VMkernel, and uses zero-copy, which in turn saves CPU cycles. Traditional networking uses a series of buffers to process incoming network data and deliver it efficiently to users. However, higher-speed modern networks have turned this approach into a performance bottleneck, because the amount of data received from the network often exceeds the size of the kernel buffers. Zero-copy improves performance by having the virtual machines and the VMkernel share a buffer, reducing the internal copy operations between buffers and freeing up CPU cycles.
- Takes advantage of transmission packet coalescing to reduce address space switching.
- Batches packets and issues a single interrupt, rather than issuing multiple interrupts.
- Offloads TCP checksum calculation to the network hardware rather than using the CPU resources of the virtual machine monitor.

Virtual Network Adapters


Slide 5-9

VMware vSphere ESXi supports a number of virtual network adapters:

- vlance*
- vmxnet*
- e1000 and e1000e
- vmxnet2 (Enhanced vmxnet)
- vmxnet3
*Configurable using the Flexible adapter type

When you configure a virtual machine, you can add NICs and specify the adapter type. The types of network adapters that are available depend on the following factors:
- The version of the virtual machine, which depends on which host created it or most recently updated it
- Whether the virtual machine has been updated to the latest version for the current host
- The guest operating system
The following virtual NIC types are supported:
- vlance – An emulated version of the AMD 79C970 PCnet32 LANCE NIC. vlance is an older 10Mbps NIC with drivers available in most 32-bit guest operating systems (except Windows Vista and later). A virtual machine configured with this network adapter can use its network immediately. This type cannot be selected directly. It can only be configured using the Flexible adapter type.
- vmxnet – A virtual NIC with no physical counterpart. vmxnet is optimized for performance in a virtual machine. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the vmxnet network adapter available. This type cannot be selected directly. It can only be configured using the Flexible adapter type.
- Flexible – This virtual NIC identifies itself as a vlance adapter when a virtual machine boots, but initializes itself and functions as either a vlance or a vmxnet adapter. Whether the adapter used is vlance or vmxnet depends on which driver initializes it. With VMware Tools installed, the vmxnet driver changes the vlance adapter to the higher-performance vmxnet adapter.
- e1000, e1000e – The e1000 virtual NIC is an emulated version of the Intel 82545EM Gigabit Ethernet NIC. This driver is available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later. The e1000e is an emulated version of the Intel 82574L Gigabit Ethernet NIC. This driver is available for operating systems that use this NIC, such as the Windows 8 64-bit operating system. Whether the e1000 driver or the e1000e driver is loaded depends on the physical NIC adapters that are used.
- Enhanced vmxnet (vmxnet2) – The Enhanced vmxnet adapter is based on the vmxnet adapter but provides some high-performance features, such as jumbo frames and hardware offloads. This virtual network adapter is supported only for a limited set of guest operating systems.


- vmxnet3 – The vmxnet3 adapter is the next generation of paravirtualized NIC designed for performance. It is not related to vmxnet or vmxnet2. The vmxnet3 adapter offers all the features available in vmxnet2 and adds several more, such as multiqueue support (called Receive-Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. vmxnet3 devices support fault tolerance and record/replay. This virtual network adapter is supported only for a limited set of guest operating systems and is available only on virtual machines with hardware version 7.
For more information about choosing a network adapter, see VMware knowledge base article 1001805 at http://kb.vmware.com/kb/1001805.
Here is some background on the e1000 and e1000e drivers: Intel supplied a driver for the 82545EM 1Gbps NIC that the e1000 driver emulated. Intel's driver had several bugs, which the VMware e1000 driver tried to fix. Intel then decided to end-of-life the 82545EM NIC. As a result, the driver for this NIC was no longer shipped in new operating systems, so VMware released the e1000e driver. This driver emulates the Intel 82574L NIC. The driver for the Intel 82574L NIC is shipped on the install media of new operating systems. To use the e1000e driver, you must edit the virtual machine's properties and add a network adapter, choosing e1000e as the network adapter type.


Network Performance Features


Slide 5-10

vSphere takes advantage of many of the performance features of modern network adapters, including the following:

- TCP checksum offload
- TCP segmentation offload
- Jumbo frames
- Use of DMA to access high memory
Other features that can improve vSphere network performance are the following:
- 10 Gigabit Ethernet
- NetQueue
- VMware vSphere DirectPath I/O
- SplitRx mode
- Virtual Machine Communication Interface (VMCI)

VMware vSphere includes support for many of the performance features of modern network adapters. The list of network adapter features that are enabled on your NIC can be found in the ESXi file named /etc/vmware/esx.conf. Look for the lines that start with /net/vswitch. As a general rule, do not change the default NIC driver settings unless you have a valid reason to do so. A good practice is to follow any configuration recommendations that are specified by the hardware vendor.
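For example, from the ESXi Shell (or an SSH session to the host) you can list those lines with a quick search. This is a minimal sketch, and it assumes that shell access is enabled on the host:

grep "/net/vswitch" /etc/vmware/esx.conf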


TCP Checksum Offload


Slide 5-11

TCP checksum offload is a feature on physical NICs that does the following:

- Allows the adapter to perform checksum operations on network packets
- Reduces the load on the physical CPU
- Provides a performance savings, depending on packet size


Modern NIC adapters have the ability to perform checksum calculations natively. TCP checksums are used to determine the validity of transmitted or received network packets based on error-correcting code. These calculations are traditionally performed by the host's CPU. By offloading these calculations to the network adapters, the CPU is freed to perform other tasks. As a result, the system as a whole runs better.


TCP Segmentation Offload


Slide 5-12

TCP Segmentation Offload (TSO) improves networking performance by reducing the CPU overhead involved with sending large amounts of TCP traffic.

The adapter divides the packet into MTU-sized frames.

TCP/IP headers of the frame are adjusted to match the new frame sizes.

TSO is enabled on VMkernel interfaces by default if the hardware supports this feature. TSO can be manually enabled at the virtual machine level.

TCP Segmentation Offload (TSO) allows a TCP/IP stack to emit large frames (up to 64KB) even though the maximum transmission unit (MTU) of the interface is smaller. A TCP message must be broken down into Ethernet frames. The size of each frame is the maximum transmission unit. The default MTU is 1,500 bytes, defined by the Ethernet specification. The process of breaking messages into frames is called segmentation. Historically, the operating system used the CPU to perform segmentation. Modern NICs try to optimize this TCP segmentation by using a larger segment size as well as offloading work from the CPU to the NIC hardware. ESXi utilizes this concept to provide a virtual NIC with TSO support, without requiring specialized network hardware. With TSO, instead of processing many small MTU-sized frames during transmit, the system can send fewer, larger virtual MTU-sized frames. TSO improves performance for TCP network traffic coming from a virtual machine and for network traffic (such as VMware vSphere vMotion traffic) sent out of the server. TSO is supported at the virtual machine level and in the VMkernel TCP/IP stack.


TSO is enabled on the VMkernel interface by default. If TSO becomes disabled for a particular VMkernel interface, the only way to enable TSO is to delete that VMkernel interface and re-create it with TSO enabled. TSO is used in the guest when the vmxnet2 (or later) network adapter is installed. To enable TSO at the virtual machine level, you must replace the existing vmxnet or Flexible virtual network adapter with a vmxnet2 (or later) adapter. This replacement might result in a change in the MAC address of the virtual network adapter. TSO support through the Enhanced vmxnet network adapter is available for virtual machines that run the following guest operating systems:
- Microsoft Windows 2003 Enterprise Edition with Service Pack 2 (32-bit and 64-bit)
- Red Hat Enterprise Linux 4 (64-bit)
- Red Hat Enterprise Linux 5 (32-bit and 64-bit)
- SUSE Linux Enterprise Server 10 (32-bit and 64-bit)
When the physical NICs provide TSO functionality, the ESXi host can leverage the specialized NIC hardware to improve performance. TSO is used in the host when the physical NIC supports it. However, performance improvements related to TSO do not require NIC hardware support for TSO. Virtual machines that use TSO on an ESXi host show lower CPU utilization than virtual machines that lack TSO support when performing the same network activities. For details on how to enable TSO support for a virtual machine or to check whether TSO is enabled on a VMkernel interface, see vSphere Networking Guide at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
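As a quick way to check whether hardware TSO is in use on a host, you can query the host's advanced settings from vCLI. This is a minimal sketch: the option name /Net/UseHwTSO is an assumption based on the ESXi 5.x advanced settings, and the host name esxi01 is an example, so verify both on your build before changing anything:

esxcli --server esxi01 system settings advanced list -o /Net/UseHwTSO
esxcli --server esxi01 system settings advanced set -o /Net/UseHwTSO -i 1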


Jumbo Frames
Slide 5-13

Before transmitting packets, the IP layer fragments data into MTU-sized frames:
- The Ethernet MTU is 1,500 bytes.
- The receive side reassembles the data.
A jumbo frame:
- Is an Ethernet frame with a bigger MTU, up to 9,000 bytes
- Reduces the number of frames transmitted
- Reduces the CPU utilization on the transmit and receive sides
Virtual machines must be configured with e1000, vmxnet2, or vmxnet3 adapters. The network must support jumbo frames end-to-end.

The Ethernet MTU (packet size) is 1,500 bytes. For each packet, the system has to perform a nontrivial amount of work to package and transmit the packet. As Ethernet speed increased, so did the amount of work necessary, which resulted in a greater burden on the system. A jumbo frame has an Ethernet MTU of more than 1,500 bytes and can have an Ethernet packet size of up to 9,000 bytes. Jumbo frames decrease the number of packets that require packaging, compared to standard-sized packets. That decrease results in less work for network transactions, which frees up resources for other activities. The network must support jumbo frames end-to-end. That is, the physical NICs at both ends and all the intermediate hops, routers, and switches must support jumbo frames. Jumbo frames must be enabled at the virtual switch level, at the virtual machine, and at the VMkernel interface. ESXi hosts using jumbo frames realize a decrease in load due to network processing. Before enabling jumbo frames, check with your hardware vendor to ensure that your network adapter supports jumbo frames. For details on how to enable jumbo frames, see vSphere Networking Guide at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
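As a sketch of raising the MTU from the command line, the following vCLI commands set a 9,000-byte MTU on a standard switch and on a VMkernel interface. The host name esxi01, switch name vSwitch1, and interface name vmk1 are example assumptions; the first command form appears again in the command-line lesson later in this module:

esxcli --server esxi01 network vswitch standard set --mtu=9000 --vswitch-name=vSwitch1
esxcli --server esxi01 network ip interface set --mtu=9000 --interface-name=vmk1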


Using DMA to Access High Memory


Slide 5-14

To speed up packet handling, network adapters can be configured for direct memory access (DMA) to high memory.
DMA:
- Bypasses the CPU and allows the NIC direct access to memory
- Avoids memory space that would need to be shared with the VMkernel
- Is commonly implemented with scatter-gather elements
Scatter-gather elements:
- Are used with DMA to allow data to be written to memory in noncontiguous blocks
- Allow more flexible use of available memory, resulting in better performance

To facilitate the checksum calculations, the physical NIC requires access to physical memory. The use of direct memory access (DMA) and high memory reduces the amount of contention between the physical NIC and the physical CPU. This reduction improves the performance of both the NIC and the CPU. Systems that support DMA are able to transfer data to and from the NIC with less CPU overhead than systems that do not support DMA. The ability to support multiple scatter-gather elements in a DMA operation is standard on many of the higher-end networking interfaces. Scatter-gather elements allow the transfer of data to and from multiple memory areas in a single DMA transaction, which is equivalent to chaining together multiple simple DMA requests. This functionality is managed by a buffer chain list that is maintained by the adapter's device driver. No vSphere configuration tasks are required to use DMA or scatter-gather functionality.


10 Gigabit Ethernet (10GigE)


Slide 5-15

10GigE is a natural complement to vSphere:

- High-consolidation ratios and high-bandwidth applications can quickly saturate Gigabit Ethernet network devices.
- Smaller servers need to do more with fewer slots.
- 10GigE works in conjunction with other network performance features such as jumbo frames and TSO.

With high consolidation ratios combined with the high-bandwidth requirements of today's applications, the total I/O load on a server is substantial. Single Gigabit Ethernet adapters are increasingly unable to support the demands of these applications in large enterprise environments. And multiple NICs are often impractical because of the number of ports used on the host and on the network switches. 10 Gigabit Ethernet adapters offer a solution that provides much higher bandwidth while using fewer ports on the hosts and switches. Support for 10 Gigabit Ethernet adapters was introduced in ESXi 3.5. Since then, features such as TSO, jumbo frames, and multiple receive queues have been introduced to use these adapters efficiently.


NetQueue
Slide 5-16

NetQueue is a performance technology that does the following:

- Significantly improves performance in virtualized environments that use 10GigE adapters
- Uses multiple transmit and receive queues to allow I/O processing across multiple CPUs
- Is limited to systems that specifically support the MSI-X interrupt mechanism

VMware supports NetQueue, a performance technology that improves performance in virtualized environments that use 10GigE adapters. NetQueue takes advantage of the multiple-queue capability of newer physical network adapters. Multiple queues allow I/O processing to be spread across multiple CPUs in a multiprocessor system, so while one packet is queued up on one CPU, another packet can be queued up on another CPU at the same time. NetQueue can also use multiple transmit queues to parallelize access that is normally serialized by the device driver. Multiple transmit queues can also be used to get some measure of guaranteed isolation from the hardware: a separate, prioritized queue can be used for different types of network traffic. NetQueue monitors the load of the virtual machines as they receive packets and can assign queues to critical virtual machines. All other virtual machines use the default queue. NetQueue is enabled by default. Disabling or enabling NetQueue on a host is done by using the VMware vSphere Command-Line Interface (vCLI). For details on how to disable and enable NetQueue, see vSphere Networking Guide at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
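A minimal sketch of toggling NetQueue with vCLI follows. The kernel setting name netNetqueueEnabled is an assumption based on VMware documentation for ESXi 5.x, and the change takes effect only after a host reboot; confirm the exact procedure in vSphere Networking Guide before using it:

esxcli --server esxi01 system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"

Setting the value back to TRUE (followed by a reboot) re-enables the feature.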


vSphere DirectPath I/O


Slide 5-17

vSphere DirectPath I/O allows a virtual machine to directly access the physical NIC instead of using an emulated or paravirtualized device:
- The virtual machine controls the physical hardware.
- Performance is improved by avoiding emulated I/O.
vSphere DirectPath I/O:
- Requires I/O MMU
- Leverages Intel VT-d and AMD-Vi hardware support
(Slide diagram: the virtual machine's device driver bypassing the virtualization layer to access the I/O device directly.)

VMware vSphere DirectPath I/O leverages Intel VT-d and AMD-Vi hardware support to allow guest operating systems to directly access hardware devices. In the case of networking, vSphere DirectPath I/O allows the virtual machine to access a physical NIC directly rather than using an emulated device or a paravirtualized device. An example of an emulated device is the e1000 virtual NIC, and examples of paravirtualized devices are the vmxnet and vmxnet3 virtual network adapters. vSphere DirectPath I/O provides limited increases in throughput, but it reduces the CPU cost for networking-intensive workloads. vSphere DirectPath I/O is not compatible with certain core virtualization features. However, when ESXi is running on certain configurations of the Cisco Unified Computing System (UCS) platform, vSphere DirectPath I/O for networking is compatible with the following:
- vSphere vMotion
- Hot adding and removing of virtual devices
- Suspend and resume
- VMware vSphere High Availability
- VMware vSphere Distributed Resource Scheduler (DRS)
- Snapshots

For server hardware other than the Cisco UCS platform, vSphere DirectPath I/O is not compatible with hot adding and removing of virtual devices, suspend and resume, VMware vSphere Fault Tolerance, vSphere HA, DRS, and snapshots. For DRS, limited availability is provided. The virtual machine can be part of a cluster, but cannot migrate across hosts. Typical virtual machines and their workloads do not require the use of vSphere DirectPath I/O. However, for workloads that are networking intensive and do not need the core virtualization features just mentioned, vSphere DirectPath I/O might be useful to reduce CPU usage. For performance and use cases of vSphere DirectPath I/O for networking, see http://blogs.vmware.com/performance/2010/12/performance-and-use-cases-of-vmware-directpathio-for-networking.html.


SplitRx Mode
Slide 5-18

SplitRx mode uses multiple physical CPUs to process network packets received in a single network queue. Enable SplitRx mode in situations where you have multiple virtual machines on an ESXi host receiving multicast traffic from the same source. SplitRx mode is supported only for vmxnet3 network adapters. (This feature is disabled by default.)

Multicast is an efficient way of disseminating information and communicating over the network. A single sender can connect to multiple receivers and exchange information while conserving network bandwidth. Financial stock exchanges, multimedia content delivery networks, and commercial enterprises often use multicast as a communication mechanism. Multiple receivers can be enabled on a single ESXi host. Because the receivers are on the same host, the physical network does not have to transfer multiple copies of the same packet. Packet replication is carried out in the hypervisor instead. SplitRx mode is an ESXi feature that uses multiple physical CPUs to process network packets received in a single network queue. This feature provides a scalable and efficient platform for multicast receivers. SplitRx mode typically improves throughput and CPU efficiency for multicast traffic workloads. SplitRx mode is supported only on vmxnet3 network adapters. This feature is disabled by default. VMware recommends enabling splitRxMode in situations where multiple virtual machines share a single physical NIC and receive a lot of multicast or broadcast packets. SplitRx mode is individually configured for each virtual NIC. For information about how to enable this feature, see Performance Best Practices for VMware vSphere 5.0 at http://www.vmware.com/ pdf/Perf_Best_Practices_vSphere5.0.pdf.
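As a sketch of how SplitRx mode is typically enabled for an individual virtual NIC, the following advanced configuration parameter can be added to the virtual machine (through the vSphere Client or by editing the .vmx file while the virtual machine is powered off). The parameter name ethernetX.emuRxMode comes from the best-practices guide cited above; ethernet0 is an example assumption for the first vmxnet3 adapter:

ethernet0.emuRxMode = "1"

A value of "0" disables SplitRx mode for that virtual NIC.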


VMCI
Slide 5-19

VMCI is a virtual device that provides high-speed communication between the following:

- A virtual machine and the ESXi host on which it resides
- Two virtual machines on the same ESXi host

The VMCI device is made available to application programmers through the VMCI Sockets library.

Performance: VMCI is a high-performance alternative to regular TCP/IP sockets. VMCI bypasses the guest or VMkernel networking stack to allow for fast communication among virtual machines on the same host.


Virtual Machine Communication Interface (VMCI) provides a high-speed communication channel between a virtual machine and the ESXi host on which it runs. VMCI can also be enabled for communication between virtual machines that run on the same host, without using the guest networking stack. The VMCI device is made available to application programmers through the VMCI Sockets library. The VMware VMCI Sockets library offers an API that is similar to the Berkeley UNIX socket interface and the Windows socket interface, two industry standards. VMCI sockets support fast and efficient communication between guest virtual machines and their host. For information about how to enable VMCI between virtual machines, see vSphere Virtual Machine Administration Guide at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html. For details on how to program VMCI sockets, see VMCI Sockets Programming Guide at http://www.vmware.com/support/developer/vmci-sdk. For a performance study using VMCI sockets, see VMCI Socket Performance (for VMware vSphere 4.0) at http://www.vmware.com/pdf/vsp_4_VMCI_socket_perf.pdf.


Review of Learner Objectives


Slide 5-20

You should be able to do the following:

- Describe network virtualization overhead.
- Describe network adapter features that affect performance.
- Describe vSphere networking features that affect performance.


Lesson 2: Monitoring Networking I/O Activity


Slide 5-21


Learner Objectives
Slide 5-22

After this lesson, you should be able to do the following:

- Determine which network metrics to monitor.
- View metrics in vCenter Server and resxtop.
- Demonstrate how to monitor network throughput.


Network Capacity Metrics


Slide 5-23

To identify network problems, determine available bandwidth and compare it with expectations. What do I do? Check key metrics. Significant network statistics in a vSphere environment are the following:
- Network usage
- Network packets received
- Network packets transmitted
- Received packets dropped
- Transmitted packets dropped


Network performance depends on application workload and network configuration. Dropped network packets indicate a bottleneck in the network. To determine whether packets are being dropped, use the advanced performance charts in the VMware vSphere Client or use the resxtop command. If received packets are being dropped, adjust the virtual machine CPU shares. If packets are not being dropped, check the size of the network packets and the data received rate and the data transmitted rate. In general, the larger the network packets, the faster the network speed. When the packet size is large, fewer packets are transferred, which reduces the amount of CPU required to process the data. When network packets are small, more packets are transferred, but the network speed is slower because more CPU is required to process the data. In some instances, large packets can result in high latency. To rule out this issue, check network latency. If packets are not being dropped and the data receive rate is slow, the host probably lacks the CPU resources required to handle the load. Check the number of virtual machines assigned to each physical NIC. If necessary, perform load balancing by moving virtual machines to different virtual switches or by adding more NICs to the host. You can also move virtual machines to another host or increase the CPU resources of the host or virtual machines.


vSphere Client Networking Statistics


Slide 5-24

Useful network counters (available per adapter or as aggregated statistics):
- Data transmit rate
- Data receive rate
- Packets transmitted
- Packets received
- Transmit packets dropped
- Receive packets dropped

You can use the advanced performance charts in the vSphere Client to track network statistics per host, per virtual machine, or per NIC (virtual or physical). However, a single chart can display either physical objects (host and vmnic#) or virtual objects (virtual machine and virtual NIC). Track these counters to determine network performance:
- Data transmit rate – Average amount of data transmitted (in Kbps) in the sampling interval
- Data receive rate – Average amount of data received (in Kbps) in the sampling interval
- Packets transmitted – Number of packets transmitted in the sampling interval
- Packets received – Number of packets received in the sampling interval
- Transmit packets dropped – Number of outbound packets dropped in the sampling interval
- Receive packets dropped – Number of inbound packets dropped in the sampling interval


vSphere Client Network Performance Chart


Slide 5-25

This example shows the advanced network performance chart of an ESXi host. Specific details for a point in time can be displayed by hovering the cursor over a point in the graph. The graph has been configured to show the network statistics for all of the physical network interfaces in the host. However, the interfaces are displayed only by the vmnic names. You must know the specific interfaces connected to the VMkernel port groups in order to identify those categories of traffic. To see network statistics for a specific virtual machine, select the virtual machine in the inventory and display its advanced network performance chart.


resxtop Networking Statistics


Slide 5-26

Useful network counters:

- MbTX/s – Data transmit rate
- MbRX/s – Data receive rate
- PKTTX/s – Packets transmitted
- PKTRX/s – Packets received
- %DRPTX – Percentage of transmit packets dropped
- %DRPRX – Percentage of receive packets dropped

The specific counters to examine in resxtop are:
- MbTX/s – Amount of data transmitted in Mbps
- MbRX/s – Amount of data received in Mbps
- PKTTX/s – Average number of packets transmitted per second in the sampling interval
- PKTRX/s – Average number of packets received per second in the sampling interval
- %DRPTX – Percentage of outbound packets dropped in the sampling interval
- %DRPRX – Percentage of inbound packets dropped in the sampling interval
NOTE

The units for the statistics differ between the vSphere Client and resxtop. Make sure you normalize the units before comparing output from these monitoring tools.


resxtop Network Output


Slide 5-27

The virtual machine, TestVM01, is connected to a distributed switch. vmnic1 is connected to the distributed switch.


To display network statistics in resxtop, type n in the window. Configuration information about the objects is listed first, followed by the performance metrics. The USED-BY column identifies the network connections by:
- Physical adapter – An example is vmnic0.
- vSphere network object – One example is a VMkernel port, such as vmk0. Another example is a virtual machine's NIC, identified as 357663:TestVM01.eth0. The value 357663 is an internal virtual machine ID, TestVM01 is the virtual machine name, and eth0 identifies the network interface.
By default, the command shows the fields PKTTX/s, MbTX/s, PKTRX/s, MbRX/s, %DRPTX, and %DRPRX. If you do not see all of the fields, resize the width of the window.


vSphere Client or resxtop?


Slide 5-28

resxtop can run in batch mode, which allows you to store output in a file. The VMware vSphere Client provides user-friendly displays, such as performance charts and various tab displays.

Ports tab for the distributed switch, vDS-01

Both vSphere Client and resxtop are good ways to gather performance data for your vSphere environment. However, in terms of networking, resxtop does have one advantage. Unlike vSphere Client, resxtop displays both physical and virtual network users on the same screen. resxtop allows administrators to determine, using a single screen, how much of the network load on a physical NIC is due to a particular virtual machine. The vSphere Client, on the other hand, has user-friendly displays, such as its performance charts and various tab displays. For example, in the Networking view, a distributed switch has a Ports tab. This tab provides you with port configuration information as well as performance information such as dropped inbound and dropped outbound packets per port.
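As a sketch of the batch mode mentioned on the slide, the following command captures a fixed number of resxtop samples to a file that you can later open in a spreadsheet or in Windows Perfmon. The host name, iteration count, and file name are example assumptions (add -a if you want all counters):

resxtop --server esxi01 -b -n 60 > netstats.csv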


Lab 6 Introduction: Test Cases 1 and 2


Slide 5-29

Test case 1: TestVM01 (e1000 adapter, connected to the Production port group on vDS-01) runs # ./nptest1.sh, and TestVM02 (e1000 adapter, connected to the VM Network port group on vSwitch0) runs # ./netserver.
Test case 2: The same setup, except that both TestVM01 and TestVM02 use vmxnet3 adapters.
In this lab, you generate network traffic and compare the network performance of different network adapters connected to different networks or the same network. You use a client-side script named nptest1.sh to generate network traffic. nptest1.sh generates a large amount of network traffic. You are instructed to run this script on the network client virtual machine, TestVM01. You also use a server-side script called netserver. This script runs on the network server virtual machine, TestVM02. netserver receives the data sent by nptest1.sh. You perform three test cases, two of which are described here: Test case 1 Both test virtual machines are configured with e1000 network adapters. The first test virtual machine is connected to the Production port group on the distributed switch named vDS-01. The second test virtual machine is connected to the port group named VM Network, located on the standard switch, vSwitch0. Network traffic flows between two different networks. Test case 2 This test case is similar to test case 1, except you modify both TestVM01 and TestVM02 to use the vmxnet3 adapter instead of the e1000 adapter.


Lab 6 Introduction: Test Case 3


Slide 5-30

Test case 3: TestVM01 runs # ./nptest1.sh and TestVM02 runs # ./netserver. Both virtual machines use vmxnet3 adapters and are connected to the Production port group on vDS-01.
Test case 3 is similar to test case 2 because both TestVM01 and TestVM02 are configured with a vmxnet3 network adapter. However, in test case 3, both test virtual machines are connected to the Production port group on the distributed switch, vDS-01.


Lab 6
Slide 5-31

In this lab, you will use performance charts and resxtop to monitor network performance.
1. Start the network client and network server virtual machines.
2. Configure resxtop.
3. Perform test case 1: Measure network activity over the e1000 network adapter.
4. Modify test virtual machines to use vmxnet3.
5. Perform test case 2: Measure network activity over the vmxnet3 network adapter.
6. Perform test case 3: Measure network activity over the same network.
7. Summarize your findings.
8. Clean up for the next lab.


Lab 6 Review
Slide 5-32

Counters for the Production vmnic, recorded for each test case (running nptest1.sh):
- Test Case 1 – Virtual machines using e1000, on different networks
- Test Case 2 – Virtual machines using vmxnet3, on different networks
- Test Case 3 – Virtual machines using vmxnet3, on the same network
For each test case, record MbTX/s, MbRX/s, %DRPTX, and %DRPRX.

Your instructor will review the lab with you.


- Case 1 versus case 2: The packets transmitted and received should be higher for case 2 because you are using a higher-performance network adapter.
- Case 2 versus case 3: The transmit and receive rates should exceed 1Gbps because network traffic is internal to the ESXi host. Network traffic does not touch the physical NICs.


Review of Learner Objectives


Slide 5-33

You should be able to do the following:

- Determine which network metrics to monitor.
- View metrics in vCenter Server and resxtop.
- Demonstrate how to monitor network throughput.


Lesson 3: Command-Line Network Management


Slide 5-34


Learner Objectives
Slide 5-35

After this lesson, you should be able to do the following:

- Describe command-line tools to manage virtual networking.
- Use VMware vSphere Management Assistant (vMA) to manage standard switches and distributed switches.


Command-Line Overview for Network Management


Slide 5-36

Command-line tools are available for you to view, configure, and troubleshoot standard switches and distributed switches:

VMware vSphere ESXi Shell:
- esxcli network
- esxtop
VMware vSphere Command-Line Interface (vCLI):
- esxcli network
- The vicfg- network-related commands
- resxtop
The direct console user interface (DCUI) can be used to view and configure the management network.

Several command-line tools are available for you to view, configure, monitor, and troubleshoot your virtual network configuration. Tools are available from vCLI or from the VMware vSphere ESXi Shell. The esxcli network namespace includes the ip, vswitch, and nic namespaces. The commands in these namespaces allow you to view and configure IP address, virtual switch, and virtual NIC information. The esxcli command is available with the ESXi Shell and vCLI.
vicfg- commands exist to view and configure virtual NICs, routing information, Simple Network Management Protocol (SNMP), VMkernel ports, and virtual switches. These commands are vicfg-nics, vicfg-route, vicfg-snmp, vicfg-vmknic, and vicfg-vswitch. The vicfg- commands are available with the ESXi Shell and vCLI. The esxtop and resxtop commands are used to monitor real-time networking activity. esxtop is available with the ESXi Shell, and resxtop is available with vCLI. VMware recommends that you use vCLI to run commands against your ESXi hosts. Run commands directly in the ESXi Shell only in troubleshooting situations. In addition to these tools, the direct console user interface (DCUI) can be used to manage your host's management network configuration.


Using the DCUI to Configure the Management Network


Slide 5-37

The DCUI allows you to do the following:

- Configure the management network settings:
  - VLAN
  - IP information
  - IPv6 support
  - DNS server addresses
  - Custom DNS suffixes
- Restart the management network
- Test the management network
- Restore network settings
- Restore the standard switch (this selection is available if the management network is connected to a distributed switch)


The DCUI provides the following options from the System Customization window:
- Configure Management Network – This option allows you to view and modify the network adapters that provide the default network connection to and from this host. You can also view and modify the host's VLAN and whether to use a DHCP-assigned or static IP address. From this option, you can configure the host to support IPv6. You can also configure the host's primary and alternate DNS server addresses and any custom DNS suffixes.
- Restart Management Network – This option allows you to restart the management network interface, which might be required to restore networking or to renew a DHCP lease. Restarting the management network results in a brief network outage that might temporarily affect running virtual machines. If a renewed DHCP lease results in a new network identity (such as an IP address or host name), remote management software is disconnected.
- Test Management Network – This option allows you to perform a brief network test. The test attempts to ping the configured default gateway, ping the configured primary and alternate DNS servers, and resolve the configured host name.
- Restore Network Settings – This option allows you to restore the management network settings by automatically configuring the network with the system defaults. Restoring the network settings stops all running virtual machines on the host.


- Restore Standard Switch – This option is available only if your management network is connected to a distributed switch. When you restore the standard switch, a new virtual adapter is created, and the management network uplink that is connected to the distributed switch is migrated to the new virtual switch. You might need to restore the standard switch for the following reasons:
  - The distributed switch is not needed or is not functioning.
  - The distributed switch needs to be repaired to restore connectivity to the VMware vCenter Server system, and the hosts need to remain accessible.
  - You do not want vCenter Server to manage the host. When the host is not connected to vCenter Server, most distributed switch features are unavailable to the host.
The DCUI can be accessed in two ways:
- Directly, from the ESXi host's console
- Remotely, from an SSH session, with the dcui command


Managing Virtual Networks


Slide 5-38

Network management task – vMA command
- Retrieve network port information – esxcli
- Set virtual switch attributes – esxcli
- List, create, and delete standard switches – esxcli, vicfg-vswitch
- Manage a VMkernel port – esxcli
- List, add, and remove port groups – esxcli, vicfg-vswitch
- Configure a port group with a VLAN ID – esxcli, vicfg-vswitch
- Link and unlink uplink adapters to or from a standard switch – esxcli, vicfg-vswitch
- Manage uplinks in distributed switches – vicfg-vswitch
- Configure the SNMP agent – vicfg-snmp
- Configure DNS and routing – vicfg-dns, vicfg-route


The vCLI networking commands enable you to manage vSphere network services. You can connect virtual machines to the physical network and to one another, and you can configure standard switches. Limited configuration of distributed switches is also supported. You can also set up your vSphere environment to work with external network services such as SNMP or NTP.
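As a sketch of two of the tasks in the table (adding a port group to a standard switch and assigning it a VLAN ID), the following esxcli commands can be used. The host, switch, port group, and VLAN values are example assumptions:

esxcli --server esxi01 network vswitch standard portgroup add --portgroup-name=Prod --vswitch-name=vSwitch1
esxcli --server esxi01 network vswitch standard portgroup set --portgroup-name=Prod --vlan-id=105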


Retrieving Network Port Information


Slide 5-39

Use the esxcli command with the network ip namespace:

For the --server option, you must specify an ESXi host, not a VMware vCenter Server system.

Examples:

To get a list of VMkernel ports configured on a host:

esxcli --server esxi01 network ip interface list

To retrieve IPv4 information for a VMkernel port:

esxcli --server esxi01 network ip interface ipv4 get -i vmk1

To view port connections (corresponding to the Linux netstat command):

esxcli --server esxi01 network ip connection list

In the examples, the connection option contains only the --server option. In this case, you are prompted for the user name and password when you run this command. This authentication step can be bypassed by adding the server as a target server (vifp addserver) and initializing the server for vi-fastpass authentication (vifptarget -s <server>).
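A brief sketch of that setup, using the illustrative host name esxi01: run the following on vMA, after which vCLI commands against the target no longer prompt for credentials.

vifp addserver esxi01
vifptarget -s esxi01
esxcli network ip interface list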


Setting Virtual Switch Attributes


Slide 5-40

Use the esxcli command with the network vswitch standard namespace:

esxcli <conn_options> network vswitch standard <cmd_options>

Examples:

To set the maximum transmission unit size:

esxcli --server esxi02 network vswitch standard set --mtu=9000 --vswitch-name=vSwitch5

To set the Cisco Discovery Protocol status:

esxcli --server esxi02 network vswitch standard set --cdp-status=advertise --vswitch-name=vSwitch5

MTU and Cisco Discovery Protocol (CDP) settings can be viewed and configured with the esxcli command. You usually set MTUs when you are configuring the virtual network to support jumbo frames. Ensure that the physical network supports jumbo frames before you configure jumbo frames on the virtual network. CDP is a Cisco technology that enables switches to broadcast information to, and receive information from, one another. Switching information is passed between the switches, which aids in troubleshooting network issues. On a standard switch, CDP is supported. On distributed switches, CDP and Link Layer Discovery Protocol are the supported protocols.
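To verify the values that were just set, the list command displays the MTU and CDP status of a standard switch (a sketch, reusing the illustrative host and switch names from the examples above):

esxcli --server esxi02 network vswitch standard list --vswitch-name=vSwitch5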


Listing, Creating, and Deleting Standard Switches


Slide 5-41

Use the esxcli command with the network vswitch standard namespace:

esxcli <conn_options> network vswitch standard <cmd_options>

Examples of using esxcli:

To create a standard switch:

esxcli --server esxi02 network vswitch standard add --vswitch-name=vSwitch5

To list information about a standard switch:

esxcli --server esxi02 network vswitch standard list --vswitch-name=vSwitch5

To delete a standard switch:

esxcli --server esxi02 network vswitch standard remove --vswitch-name=vSwitch5

You can also use the vicfg-vswitch command. See examples in the notes.

By default, each ESXi host has one virtual switch, named vSwitch0. More virtual switches can be added and managed in the vSphere Client or at the command prompt. A virtual switch cannot be deleted if any of the ports are still in use. All virtual switch connections must be severed before the switch can be removed. You can also use the vicfg-vswitch command to perform similar functions. To create a standard switch:
vicfg-vswitch --server esxi02 --add vSwitch5

To list information about all of your standard switches:


vicfg-vswitch --server esxi02 --list

To delete a standard switch:


vicfg-vswitch --server esxi02 --delete vSwitch5


Managing a VMkernel Port


Slide 5-42

Use the esxcli command with the network ip interface namespace:

esxcli <conn_options> network ip interface <cmd_options>

Examples:

To list existing VMkernel ports:

esxcli --server esxi02 network ip interface list

To add a VMkernel port:


esxcli --server esxi02 network ip interface add --interface-name=vmk6 --portgroup-name="FT Logging"

The FT Logging port group is an existing port group.

To configure a VMkernel port:

esxcli --server esxi02 network ip interface ipv4 set --ipv4=172.20.18.22 --netmask=255.255.255.0 --type=static --interface-name=vmk6

To remove a VMkernel port:

esxcli --server esxi02 network ip interface remove --interface-name=vmk6


VMkernel ports are used primarily for management traffic, which can include vMotion, IP storage, and Fault Tolerance. You can also bind newly created VMkernel ports for use by software and dependent hardware iSCSI initiators. You do the binding with the esxcli iscsi commands. Creating a VMkernel port involves creating the port, then configuring the port with a network identity. When you create the VMkernel port, you specify a VMkernel interface name. The name uses the convention vmk<x>, where <x> is a number. You can determine the number to use from the results of the list option. The list option shows the existing VMkernel interfaces, for example, vmk1, vmk2, and vmk3. If vmk3 was the last VMkernel interface defined, then a new VMkernel interface would be named vmk4. Run the following command to get IPv4 settings for the VMkernel port:
esxcli --server esxi02 network ip interface ipv4 get -i vmk6

where vmk6 is the name of the VMkernel port. These commands are used to manage VMkernel ports on standard switches only. These commands do not apply to distributed switches.
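As a hedged sketch of the iSCSI binding mentioned above (the adapter name vmhba33 is an assumption; list the host's iSCSI adapters first to confirm the real name):

esxcli --server esxi02 iscsi adapter list
esxcli --server esxi02 iscsi networkportal add --nic vmk6 --adapter vmhba33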


Listing, Adding, and Removing Port Groups


Slide 5-43

Use the esxcli command with the network vswitch standard portgroup namespace:

esxcli <conn_options> network vswitch standard portgroup <cmd_options>

Examples:

To list port groups on all standard switches:

esxcli --server esxi02 network vswitch standard portgroup list

To add a port group to a standard switch:


esxcli --server esxi02 network vswitch standard portgroup add --portgroup-name=TestDev --vswitch-name=vSwitch5

To remove a port group from a standard switch:

esxcli --server esxi02 network vswitch standard portgroup remove --portgroup-name="VM Network" --vswitch-name=vSwitch0

You can also use the vicfg-vswitch command. See examples in the notes.

Network services connect to virtual switches through port groups. A port group enables you to group traffic and specify configuration options like bandwidth limitations and VLAN tagging policies for each port in the port group. A virtual switch must have one port group assigned to it. You can assign additional port groups as necessary. You can also use the vicfg-vswitch command to perform similar functions. To list port groups on all standard switches:
vicfg-vswitch --server esxi02 --list

To add a port group to a standard switch:


vicfg-vswitch --server esxi02 --add-pg TestDev vSwitch5

To remove a port group from a standard switch:


vicfg-vswitch --server esxi02 --del-pg TestDev vSwitch5


Configuring a Port Group with a VLAN ID


Slide 5-44

Use the esxcli command with the network vswitch standard portgroup namespace:

esxcli <conn_options> network vswitch standard portgroup <cmd_options>

Examples:

To set the VLAN ID on a port group:

esxcli --server esxi02 network vswitch standard portgroup set -p "VM Network" --vlan-id 49

To list port groups and their VLAN IDs:

esxcli --server esxi02 network vswitch standard portgroup list

You can also use the vicfg-vswitch command. See examples in the notes.

You can set the port group VLAN ID on a standard switch with either the esxcli or vicfg-vswitch command. A VLAN ID restricts port group traffic to a logical Ethernet segment in the physical network. Set the VLAN ID to 0 to disable the VLAN for this port group:
esxcli --server esxi02 network vswitch standard portgroup set -p <port_group_name> --vlan-id 0

You can also use the vicfg-vswitch command to perform similar functions. To set the VLAN ID on the port group named Production (located on vSwitch2):
vicfg-vswitch --server esxi02 --vlan 49 --pg Production vSwitch2

To disable VLAN for the port group named Production:


vicfg-vswitch --server esxi02 --vlan 0 --pg Production vSwitch2


Linking and Unlinking Uplink Adapters


Slide 5-45

Use the vicfg-vswitch command:

vicfg-vswitch <conn_options> <cmd_options>

Examples:

To add a new uplink adapter to a standard switch:

vicfg-vswitch --server esxi02 --link vmnic3 vSwitch5

To remove an uplink adapter from a standard switch:

vicfg-vswitch --server esxi02 --unlink vmnic3 vSwitch5

A virtual switch that is not connected to the network can be very useful. You can use such virtual switches to group virtual machines that want to communicate with one another but not with virtual machines on other hosts. In most cases, you set up the virtual switch to transfer data to external networks by attaching one or more uplink adapters to the virtual switch.
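The slide table earlier in this lesson also lists esxcli for this task. A minimal sketch of the equivalent esxcli commands, reusing the same illustrative host, NIC, and switch names:

esxcli --server esxi02 network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch5
esxcli --server esxi02 network vswitch standard uplink remove --uplink-name=vmnic3 --vswitch-name=vSwitch5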


Configuring the SNMP Agent


Slide 5-46

Use the vicfg-snmp command:

vicfg-snmp <conn_options> --communities community1,community2
vicfg-snmp <conn_options> --targets hostname[@port][/community]

Examples:

To specify three SNMP communities:

vicfg-snmp --server esxi01 --communities public,internal01,internal02

To configure an SNMP trap destination:

vicfg-snmp --server esxi01 --targets snmpdest.vclass.local@163/public

To display the current SNMP agent settings:

vicfg-snmp --server esxi01 --show


SNMP allows management programs to monitor and control networked devices. You can use the SNMP agent embedded in ESXi to send virtual machine and environmental traps to management systems. This host-based embedded SNMP agent is disabled by default. You can configure and enable this agent with the vicfg-snmp command. To configure the agent to send traps, you must specify a target (receiver) address, the community, and an optional port. If you do not specify a port, the SNMP agent sends traps to User Datagram Protocol (UDP) port 162 on the target management system by default. Two additional uses of the vicfg-snmp command are the following: To enable the SNMP agent:
vicfg-snmp --server esxi01 --enable

To send a test trap to verify that the agent is configured correctly:


vicfg-snmp --server esxi01 --test

An SNMP agent is also included with vCenter Server. This agent can send traps when the vCenter Server system is started or when an alarm is triggered on vCenter Server. You can manage the vCenter Server agent with the vSphere Client, but not with the vicfg-snmp command.
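Putting these commands together, a typical configuration sequence for the host-based agent (reusing the illustrative host and trap destination from the examples above) might be:

vicfg-snmp --server esxi01 --communities public
vicfg-snmp --server esxi01 --targets snmpdest.vclass.local@163/public
vicfg-snmp --server esxi01 --enable
vicfg-snmp --server esxi01 --test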


Configuring DNS and Routing


Slide 5-47

Use the vicfg-dns and vicfg-route commands.

Examples of vicfg-dns:

To display DNS properties for the specified server:

vicfg-dns --server esxi02

To add a DNS server:

vicfg-dns --server esxi02 --dns 172.20.10.10

Examples of vicfg-route:

To list route entries for the specified server:

vicfg-route --server esxi02

To add a route:

vicfg-route --server esxi02 --add 172.20.16.0 255.255.255.0 172.20.16.10

The vicfg-dns command lists and specifies the DNS configuration of your ESXi host. Call the command without command-specific options to list the existing DNS configuration. You can also use esxcli network ip dns for DNS management. If you move your ESXi host to a new physical location, you might have to change the default IP gateway. You can use the vicfg-route command to manage the default gateway for the VMkernel IP stack. vicfg-route supports a subset of the options available with the Linux route command. No esxcli command exists to manage the default gateway.
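A short sketch of the esxcli equivalent mentioned above (same illustrative host and DNS server address):

esxcli --server esxi02 network ip dns server list
esxcli --server esxi02 network ip dns server add --server=172.20.10.10

Note that in the add command, the first --server is the vCLI connection option and the second --server is the command option naming the DNS server to add.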


Managing Uplinks in Distributed Switches


Slide 5-48

Distributed switches are typically created and modified by using the VMware vSphere Client. Use the vicfg-vswitch command:

vicfg-vswitch <conn_options> --add-dvp-uplink <adapter_name> --dvp <dvPort_ID> <dvSwitch_name>
vicfg-vswitch <conn_options> --del-dvp-uplink <adapter> --dvp <dvPort_ID> <dvSwitch_name>

Examples:

To add an uplink to a distributed switch:

vicfg-vswitch --server vc01 --vihost esxi01 --add-dvp-uplink vmnic3 --dvp dvPortGroup vDS01

To remove an uplink from a distributed switch:

vicfg-vswitch --server vc01 --vihost esxi01 --del-dvp-uplink vmnic3 --dvp dvPortGroup vDS01


You can use the vSphere Client to create distributed switches. After you have created a distributed switch, you can use the vSphere Client to add hosts to the distributed switch as well as to create distributed port groups. You can also use the vSphere Client to edit distributed switch properties and policies. You can add and remove uplink ports by using the vicfg-vswitch command.


Using the net-dvs Command


Slide 5-49

The net-dvs command lets you view information about your distributed switch configuration. The output shown on the slide highlights the following items:
switch identifier (UUID)
VDS name
uplink and PVLAN information

net-dvs is a command, available in the ESXi Shell, that can provide detailed information about your distributed switch configuration, which can be useful when troubleshooting problems. net-dvs is available only in the ESXi Shell. To run net-dvs, log in to the ESXi Shell as user root.

Distributed switch information is cached in a file named /etc/vmware/dvsdata.db. The ESXi host updates this file every five minutes. This file can be collected by the vm-support command if necessary. The file is binary, but you can view its contents by running the net-dvs command with the -f option and the filename. The slide shows sections of the net-dvs output that might be of interest to you. The net-dvs output includes all the distributed switches. Each distributed switch has a switch identifier. The switch identifier allows you to associate a given distributed switch with a particular virtual machine. The virtual machine's .vmx file references the switch identifier. The output also displays the name of the distributed switch, which you defined when you created the switch. The list of uplinks uses the uplink names as specified in vCenter Server. If private VLANs are defined, you are able to see this information in the PVLAN map section.
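For example, from the ESXi Shell (logged in as root):

net-dvs
net-dvs -f /etc/vmware/dvsdata.db

The first form reads the live configuration; the second displays the contents of the cached dvsdata.db file described above.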
CAUTION: Using the net-dvs command with options other than listing information is not supported.

net-dvs Output (1)


Slide 5-50

The output shown on the slide highlights the following items:
MTU and CDP/LLDP information
port group attributes
network resource pool information


The slide shows more sections from the net-dvs output. You can view MTU information. You can also identify which discovery protocol has been enabled, CDP or LLDP. The command output reports on specific uplinks and specific port groups. Load balancing, link speed, link selection, security policy, and VLAN tagging also are displayed in this part of the output.


net-dvs Output (2)


Slide 5-51

The output shown on the slide highlights the following items:
indicators of inbound and outbound traffic
indicators of errors
whether the port is in use and the link is up

This section of the net-dvs output lists statistics for a specific port. This information can be extremely useful in troubleshooting. Pay careful attention to fields like pktsInDropped and pktsOutDropped. These fields can be used to indicate some type of hardware problem or physical network overload. This section of the output also reports on inbound and outbound traffic. In general, high outbound traffic might indicate that the virtual machines on this port are good candidates for outbound traffic shaping. High inbound traffic levels might reveal a system that needs special treatment to give it high bandwidth or more resources in general (CPU, RAM, and so on).


Lab 7
Slide 5-52

In this lab, use vMA and other commands to manage networking.


1. View your virtual switch configuration.
2. Examine .vmx file entries related to the network configuration.
3. Examine net-dvs output.



Review of Learner Objectives


Slide 5-53

You should be able to do the following:

Describe command-line tools to manage virtual networking.
Use vMA to manage standard switches and distributed switches.


Lesson 4: Troubleshooting Network Performance Problems


Slide 5-54



Learner Objectives
Slide 5-55

After this lesson, you should be able to do the following:

Describe various network performance problems.
Discuss the causes of network performance problems.
Propose solutions to correct network performance problems.
Discuss an example of troubleshooting a network performance problem.

The network performance that can be achieved by an application depends on many factors. These factors can affect network-related performance metrics, such as bandwidth or end-to-end latency. These factors can also affect metrics such as CPU overhead and the achieved performance of applications. Among these factors are the network protocol, the guest operating system network stack, NIC capabilities and offload features, CPU resources, and link bandwidth. In addition, less obvious factors, such as congestion due to other network traffic and buffering along the source-destination route, can lead to network-related performance problems. These factors are identical in virtualized and nonvirtualized environments. Networking in a virtualized environment does add factors that must be considered when troubleshooting performance problems. Virtual switches are used to combine the network traffic of multiple virtual machines onto a shared set of physical uplinks. As a result, virtual switches place greater demands on a host's physical NICs and the associated network infrastructure than in most nonvirtualized environments. For network-related problems, the best indicator of problems is dropped packets.


Review: Basic Troubleshooting Flow for ESXi Hosts


Slide 5-56
The basic troubleshooting flow checks for the following conditions, in order:
1. Check for VMware Tools status.
2. Check for resource pool CPU saturation.
3. Check for host CPU saturation.
4. Check for guest CPU saturation.
5. Check for active VM memory swapping.
6. Check for VM swap wait.
7. Check for active VM memory compression.
8. Check for an overloaded storage device.
9. Check for dropped receive packets.
10. Check for dropped transmit packets.
11. Check for using only one vCPU in an SMP VM.
12. Check for high CPU ready time on VMs running in under-utilized hosts.
13. Check for slow storage device.
14. Check for random increase in I/O latency on a shared storage device.
15. Check for random increase in data transfer rate on network controllers.
16. Check for low guest CPU utilization.
17. Check for past VM memory swapping.
18. Check for high memory demand in a resource pool.
19. Check for high memory demand in a host.
20. Check for high guest memory demand.
Checks 1 through 10 identify definite problems, checks 11 through 15 identify likely problems, and checks 16 through 20 identify possible problems.


Network packets might get stored (buffered) in queues at multiple points along their route from the source to the destination. Network switches, physical NICs, device drivers, and network stacks all contain queues where packet data or headers might get buffered. Packet data might have to be buffered before being passed to the next step in the delivery process. These queues are finite in size. When these queues fill up, no more packets can be received at that point on the route, causing additional arriving packets to be dropped. TCP/IP networks use congestion-control algorithms that limit, but do not eliminate, dropped packets. When a packet is dropped, TCP/IP's recovery mechanisms work to maintain in-order delivery of packets to applications. However, these mechanisms operate at a cost to both networking performance and CPU overhead, a penalty that becomes more severe as the physical network speed increases. vSphere presents virtual NICs, such as the vmxnet or virtual e1000 devices, to the guest operating system running in a virtual machine. For received packets, the virtual NIC buffers packet data coming from a virtual switch until it is retrieved by the device driver running in the guest operating system. The virtual switch contains queues for packets sent to the virtual NIC.


If the guest operating system does not retrieve packets from the virtual NIC rapidly enough, the queues in the virtual NIC device can fill up. This condition can in turn cause the queues in the corresponding virtual switch port to fill up. If a virtual switch port receives a packet bound for a virtual machine when its packet queue is full, the port must drop the packet.
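One practical way to spot this condition (a sketch; the column names follow the resxtop network panel) is to run resxtop against the host and watch the dropped-packet counters:

resxtop --server esxi01

Press n to switch to the network panel; sustained nonzero values in the %DRPRX and %DRPTX columns indicate dropped receive and transmit packets, respectively.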


Dropped Network Packets


Slide 5-57

Network packets are buffered in queues if the following are the case:
The destination is not ready to receive them.
The network is too busy to send them.
These queues are finite in size:
Virtual NIC devices buffer packets when they cannot be handled immediately.
If the queue in the virtual NIC fills, packets are buffered by the virtual switch port.
Packets are dropped if the virtual switch port queue fills.

If a guest operating system fails to retrieve packets quickly enough from the virtual NIC, a network throughput issue might exist. Two possible causes of this issue are:

High CPU use: When the applications and guest operating system are driving the virtual machine to high CPU utilization, there might be extended delays from the time the guest operating system receives notification that packets are available until those packets are retrieved from the virtual NIC. Sometimes, the high CPU use might in fact be caused by high network traffic, because the processing of network packets can place a significant demand on CPU resources.

Improper guest operating system driver configuration: Device drivers for networking devices often have parameters that are tunable from within the guest operating system. These parameters control such behavior as whether to use interrupts or perform polling, the number of packets retrieved on each interrupt, and the number of packets that a device should buffer before interrupting the operating system. Improper configuration of these parameters can cause poor network performance and dropped packets in the networking infrastructure.


To resolve high CPU use:
Increase the CPU resources provided to the virtual machine. When the virtual machine is dropping receive packets due to high CPU use, it might be necessary to add vCPUs in order to provide sufficient CPU resources. If the high CPU use is due to network processing, ensure that the guest operating system can use multiple CPUs when processing network traffic. See the operating system documentation for the appropriate CPU requirements.
Increase the efficiency with which the virtual machine uses CPU resources. Applications that have high CPU use can often be tuned to improve their use of CPU resources.
To resolve improper guest operating system driver configuration:
Tune the network stack in the guest operating system. It might be possible to tune the networking stack within the guest operating system to improve the speed and efficiency with which it handles network packets. See the documentation for the operating system.
Add virtual NICs to the virtual machine and spread network load across them. In some guest operating systems, all of the interrupts for each NIC are directed to a single processor core. As a result, the single processor can become a bottleneck, leading to dropped receive packets. Adding more virtual NICs to these virtual machines allows the processing of network interrupts to be spread across multiple processor cores.


Dropped Receive Packets


Slide 5-58

If the number of dropped receive packets > 0, then a network throughput issue might exist.
Cause: High CPU utilization
Solutions:
Increase CPU resources provided to the virtual machine.
Increase the efficiency with which the virtual machine uses CPU resources.

Cause: Improper guest operating system driver configuration
Solutions:
Tune the network stack in the guest operating system.
Add virtual NICs to the virtual machine and spread the network load across them.


When a virtual machine transmits packets on a virtual NIC, those packets are buffered in the associated virtual switch port until being transmitted on the physical uplink devices. The traffic from a virtual machine, or set of virtual machines sharing the virtual switch, might exceed the physical capabilities of the uplink NICs or the networking infrastructure. In this case, the virtual switch buffers can fill up, and additional transmit packets arriving from the virtual machine are dropped. To prevent transmit packets from being dropped, take one of the following steps:
Add uplink capacity to the virtual switch. Adding more physical uplink NICs to the virtual switch might alleviate the conditions that are causing transmit packets to be dropped. However, traffic should be monitored to ensure that the NIC teaming policies selected for the virtual switch lead to proper load distribution over the available uplinks.
Move some virtual machines with high network demand to a different virtual switch. If the virtual switch uplinks are overloaded, moving some of the virtual machines to different virtual switches can help to rebalance the load.
Enhance the networking infrastructure. Sometimes the bottleneck might be in the networking infrastructure (for example, the network switches or interswitch links). You might have to add additional capacity in the network to handle the load.


Reduce network traffic. Reducing the network traffic generated by a virtual machine can help to alleviate bottlenecks in the networking infrastructure. The implementation of this solution depends on the application and guest operating system being used. Techniques such as using caches for network data or tuning the network stack to use larger packets (for example, jumbo frames) might reduce the load on the network.


Dropped Transmit Packets


Slide 5-59

If the number of dropped transmit packets > 0, then a network throughput issue might exist.
Cause: Traffic from a virtual machine, or a set of virtual machines sharing a virtual switch, exceeds the physical capabilities of the uplink NICs or the networking infrastructure.
Solutions:
Add uplink capacity to the virtual switch.
Move some virtual machines with high network demand to a different virtual switch.
Enhance the networking infrastructure.
Reduce network traffic.

High-speed Ethernet solutions (such as 10 Gigabit Ethernet) provide a consolidated networking infrastructure to support different traffic flows through a single physical link. Examples of these traffic flows are flows from applications running in a virtual machine, vMotion, and Fault Tolerance. All these traffic flows can coexist and share a single link. Each application can use the full bandwidth in the absence of contention for the shared network link. However, a situation might arise when the traffic flows contend for the shared network bandwidth, which might affect the performance of the applications. Two possible causes of this issue are the following:

Resource contention: The total demand from all the users of the network can exceed the total capacity of the network link. In this case, users of the network resources might experience an unexpected impact on performance. These instances, though infrequent, can still cause fluctuating performance behavior, which can be frustrating.

A few users dominating the resource usage: vMotion or VMware vSphere Storage vMotion, when triggered, can hog the network bandwidth. In this case, the performance of virtual machines that share network resources with these traffic flows can be affected. The performance impact might be more significant if the applications running in these virtual machines are latency-sensitive or business-critical.

To overcome the problems listed here, use a resource-control mechanism. Resource control allows applications to freely use shared network resources when the resources are underused. Access to resources can be controlled when the network is congested. Network I/O Control addresses these issues by distributing the network bandwidth among the different types of network traffic flows. Network bandwidth can be distributed among different traffic flows using shares. Shares provide greater flexibility for redistributing unused bandwidth capacity. When the underlying shared network link is not saturated, applications are allowed to freely use the shared link. When the network is congested, Network I/O Control restricts the traffic flows of different applications according to their shares. Traffic flows can be restricted to use only a certain amount of bandwidth when limits are set on these flows. However, a limit should be used with caution. A limit imposes a hard limit on the bandwidth that a traffic flow is allowed to use, irrespective of the traffic conditions on the link. Bandwidth is restricted even when the link is underused.


Random Increase in Data Transfer Rate


Slide 5-60

Different traffic flows using the same physical link contend for the shared network bandwidth, which impacts performance.
Causes:
Resource contention
Few users dominating the resource usage
Solution:
Use Network I/O Control (shares and limits) to distribute the network bandwidth among the different types of network traffic flows.


When configuring your network, consider the best practices mentioned in the slide. The default virtual network adapter emulated in a virtual machine is either an AMD PCnet32 device (vlance) or an Intel E1000 device (e1000). The vmxnet family of paravirtualized network adapters, however, provides better performance than these default adapters. vmxnet adapters should be used for optimal performance in any guest operating system for which they are available. For the best networking performance, use network adapters that support the following hardware features:
TCP checksum offload
TCP segmentation offload
Ability to handle high-memory DMA
Ability to handle multiple scatter gather elements per transmission frame
Jumbo frames
Multiple physical network adapters between a single virtual switch and the physical network constitute a NIC team. NIC teams can provide passive failover in the event of hardware failure or


network outage. In some configurations, NIC teaming can increase performance by distributing the traffic across those physical network adapters.

Use separate virtual switches, each connected to its own physical network adapter, to avoid contention between the VMkernel and virtual machines. This guideline applies especially to virtual machines running heavy networking workloads.

To establish a network connection between two virtual machines that reside on the same ESXi host, connect both virtual machines to the same virtual switch. If the virtual machines are connected to different virtual switches, traffic goes through the wire and incurs unnecessary CPU and network overhead. Virtual machines that communicate with each other on the same ESXi host can also use the VMCI device.

For networks, such as 10 Gigabit Ethernet networks, that support different types of traffic flow, use Network I/O Control to allocate and control network bandwidth. Network I/O Control can guarantee bandwidth for specific needs and can prevent any one resource pool from impacting the others.

In a native environment, CPU use plays a significant role in network throughput. To process higher levels of throughput, more CPU resources are needed. The effect of CPU resource availability on the network throughput of virtualized applications is even more significant. Because insufficient CPU resources limit maximum throughput, monitoring the CPU use of high-throughput workloads is essential.


Networking Best Practices


Slide 5-61

Use the vmxnet3 network adapter where possible:

If using vmxnet3 is not possible, then use vmxnet or vmxnet2.

Use a physical network adapter that supports high-performance features.
Team NICs for load balancing.
Use separate virtual switches, each connected to separate NICs, to avoid workload contention.
Where possible, run co-dependent virtual machines on the same host to take advantage of VMCI.
For networks that support different types of traffic flow, take advantage of Network I/O Control to allocate and control network bandwidth.
Ensure that sufficient CPU resources exist for workloads that have high network throughput.



Review of Learner Objectives


Slide 5-62

You should be able to do the following:

Describe various network performance problems.
Discuss the causes of network performance problems.
Propose solutions to correct network performance problems.
Discuss an example of troubleshooting a network performance problem.


Key Points
Slide 5-63

vSphere takes advantage of many of the performance features of modern network adapters, such as TSO offloading and jumbo frames.
Virtual networking can be managed from the command line with esxcli and vicfg- commands.
resxtop displays both physical and virtual network data on the same screen.
For network-related problems, the best indicator is dropped packets.

Questions?



MODULE 6

Storage Scalability
Slide 6-1


You Are Here


Slide 6-2

Course Introduction
VMware Management Resources
Performance in a Virtualized Environment
Network Scalability
Network Optimization
Storage Scalability (you are here)
Storage Optimization
CPU Optimization
Memory Performance
VM and Cluster Optimization
Host and Management Scalability


Importance
Slide 6-3

As the enterprise grows, new scalability features in VMware vSphere enable the infrastructure to handle the growth efficiently. Datastore growth and balancing issues can be addressed automatically with VMware vSphere Storage DRS.


Module Lessons
Slide 6-4

Lesson 1: Storage APIs and Profile-Driven Storage
Lesson 2: VMware vSphere Storage I/O Control
Lesson 3: Datastore Clusters and Storage DRS


Lesson 1: Storage APIs and Profile-Driven Storage


Slide 6-5


Learner Objectives
Slide 6-6

After this lesson, you should be able to do the following:

Describe VMware vSphere Storage APIs - Array Integration (VAAI).
Describe VMware vSphere Storage APIs - Storage Awareness.
Configure and use profile-driven storage.


VMware vSphere Storage APIs - Array Integration


Slide 6-7

VAAI helps storage vendors provide hardware assistance to accelerate VMware I/O operations that are more efficiently accomplished in the storage hardware. VAAI includes the following API subsets:

Hardware Acceleration APIs:

Allows arrays to integrate with vSphere to transparently offload certain storage operations to the array:

This integration significantly reduces the CPU overhead on the host. Support for NAS plug-ins for array integration exists.

Array Thin Provisioning APIs:

Allows the monitoring of space on thin-provisioned storage arrays:

This functionality helps to prevent out-of-space conditions and to perform space reclamation.

Storage APIs is a family of APIs used by third-party hardware, software, and storage providers to develop components that enhance several vSphere features and solutions. This module describes two sets of Storage APIs: Array Integration and Storage Awareness. For a description of other APIs from this family, see http://www.vmware.com/technical-resources/virtualization-topics/virtual-storage/storage-apis.html.

VMware vSphere Storage APIs - Array Integration (VAAI) is a set of protocol interfaces and VMkernel APIs between VMware vSphere ESXi and storage arrays. In a virtualized environment, virtual disks are files located on a VMware vSphere VMFS datastore. Disk arrays cannot interpret the VMFS datastore's on-disk data layout, so the VMFS datastore cannot leverage hardware functions per virtual machine or per virtual disk file. The goal of VAAI is to help storage vendors provide hardware assistance to accelerate VMware I/O operations that are more efficiently accomplished in the storage hardware. VAAI plug-ins can improve data transfer performance and are transparent to the end user. Storage vendors can take advantage of the following features:

Hardware Acceleration for NAS: This plug-in enables NAS arrays to integrate with VMware vSphere to transparently offload certain storage operations to the array, such as offline cloning (cold migrations, cloning from templates). This integration reduces CPU overhead on the host.

Hardware Acceleration for NAS is deployed as a plug-in that is not shipped with ESXi. This plug-in is developed and distributed by the storage vendor but signed by the VMware certification program. Array/device firmware must be enabled for Hardware Acceleration for NAS to use the Hardware Acceleration for NAS features. The storage vendor is responsible for the support of the plug-in.

Array Thin Provisioning: This extension assists in monitoring disk space usage on thin-provisioned storage arrays. Monitoring this usage helps prevent the condition where the disk is out of space. Monitoring usage also helps when reclaiming disk space. No installation steps are required for the Array Thin Provisioning extensions. Array Thin Provisioning works on all VMFS-3 and VMFS-5 volumes. Device firmware enabled for this API is required to take advantage of the Array Thin Provisioning features. ESXi continuously checks for firmware that is compatible with Array Thin Provisioning. After the firmware is upgraded, ESXi starts using the Array Thin Provisioning features.
In vSphere 5.0, another enhancement to VAAI is that all of the fundamental operations, or primitives, are T10-compliant. These primitives are Atomic Test and Set (ATS), Clone Blocks/Full Copy/XCOPY, and Zero Blocks/Write Same. Arrays that are T10-compliant can use these primitives immediately with a default VAAI plug-in. In addition, the ATS primitive has been extended in vSphere 5.0 and VMFS-5 to cover even more operations, resulting in even better performance. Additional ATS operations include acquire heartbeat, clear heartbeat, mark a heartbeat, and reclaim a heartbeat.


VMware vSphere Storage APIs - Storage Awareness


Slide 6-8

VMware vSphere Storage APIs Storage Awareness (VASA) enables a storage vendor to develop a software component (known as a storage vendor provider) for its storage arrays.

A storage vendor provider gets information from the storage array about available storage topology, capabilities, and state.

VMware vCenter Server connects to a storage vendor provider.

Information from the storage vendor provider is displayed in the VMware vSphere Client.

Today, vSphere administrators do not have visibility in VMware vCenter Server into the storage capabilities of the storage array on which their virtual machines are stored. Virtual machines are provisioned to a storage black box. All the vSphere administrator sees of the storage is a logical unit number (LUN) identifier, such as a Network Address Authority ID (NAA ID) or a T10 identifier. VMware vSphere Storage APIs Storage Awareness (VASA) is a set of software APIs that a storage vendor can use to provide information about their storage array to vCenter Server. Information includes storage topology, capabilities, and the state of the physical storage devices. Administrators now have visibility into the storage on which their virtual machines are located because storage vendors can make this information available. vCenter Server gets the information from a storage array by using a software component called a VASA provider. A VASA provider is written by the storage array vendor. The VASA provider can exist on either the storage array processor or on a standalone host. This decision is made by the storage vendor. Storage devices are identified to vCenter Server with a T10 identifier or an NAA ID. VMware recommends that vendors use these types of identifiers so that devices can be matched between the VASA provider and vCenter Server. The VASA provider acts as a server in the vSphere environment. vCenter Server connects to the VASA provider to obtain information about available storage topology, capabilities, and state. The information is viewed in the VMware vSphere Client. A VASA provider can report information

about one or more storage devices. A VASA provider can support connections to a single or multiple vCenter Server instances. For information about the concepts of VASA and developing a VASA provider, see VASA Programming Guide at http://www.vmware.com/support/pubs.


Benefits Provided by Storage Vendor Providers


Slide 6-9

Storage vendor providers benefit vSphere administrators by:
Allowing administrators to be aware of the topology, capabilities, and state of the physical storage devices on which their virtual machines are located
Allowing them to monitor the health and usage of their physical storage devices
Assisting administrators in choosing the right storage in terms of space, performance, and service-level agreement requirements:
Done by using virtual machine storage profiles

A VASA provider supplies capability information in the form of descriptions of specific storage attributes. Types of capability information include the following:
Performance capabilities, such as the number and type of spindles for a volume or the I/O operations or megabytes/second
Disaster recovery information, such as recovery point objective and recovery time objective metrics for disaster recovery
Space efficiency, such as type of compression used or if thick-provisioned format is used
This information allows vSphere administrators:
To be more aware of the topology, capabilities, and state of the physical storage devices on which their virtual machines are located
To monitor the health and usage of their physical storage devices
To choose the right storage in terms of space, performance, and service-level agreement requirements
Storage capabilities can be displayed in the vSphere Client. Virtual machine storage profiles can be created to make sure that the storage being used for virtual machines complies with the required levels of service.

Configuring a Storage Vendor Provider


Slide 6-10

Select Home > Administration > Storage Providers.

After adding a storage provider, the storage vendor provider is listed in the Vendor Providers pane.

If your storage supports a VASA provider, use the vSphere Client to register and manage the VASA provider. The Storage Providers icon on the vSphere Client Home page allows you to configure the VASA provider. All system storage capabilities that are presented by the VASA provider are displayed in the vSphere Client. The new Storage Capabilities panel appears in a datastore's Summary tab. To register a VASA provider, the storage vendor provides a URL, a login account, and a password. Users log in to the VASA provider to get array information. vCenter Server must trust the VASA provider host, so a security certificate from the VASA provider must be installed on the vCenter Server system. For procedures, see the VASA provider documentation.


Profile-Driven Storage
Slide 6-11

Profile-driven storage enables the creation of datastores that provide different levels of service. Profile-driven storage can be used to do the following:

Categorize datastores based on system-defined or user-defined levels of service. For example, user-defined levels might be gold, silver, and bronze.

Provision a virtual machine's disks on correct storage.

Check that virtual machines comply with user-defined storage requirements.

Profile-driven storage enables the creation of datastores that provide varying levels of service. With profile-driven storage, you can use storage capabilities and virtual machine storage profiles to ensure that virtual machines use storage that provides a certain level of capacity, performance, availability, redundancy, and so on. Profile-driven storage minimizes the amount of storage planning that the administrator must do for each virtual machine. For example, the administrator can use profile-driven storage to create basic storage tiers. Datastores with similar capabilities are tagged to form a gold, silver, and bronze tier. Redundant, high-performance storage might be tagged as the gold tier, while nonredundant, medium-performance storage might be tagged as the bronze tier.

Profile-driven storage can be used during the provisioning of a virtual machine to ensure that a virtual machine's disks are placed on the storage that is best for its situation. For example, profile-driven storage can help you ensure that the virtual machine running a critical I/O-intensive database is placed in the gold tier. Ideally, the administrator wants to create the best match of predefined virtual machine storage requirements with available physical storage properties.

Finally, profile-driven storage can be used during the ongoing management of the virtual machines. An administrator can periodically check whether a virtual machine has been migrated to or created on inappropriate storage, potentially making it noncompliant. Storage information can also be used

to monitor the health and usage of the storage and report to the administrator if the virtual machine's storage is not compliant.


Storage Capabilities
Slide 6-12

Storage capabilities:
System-defined, from storage vendor providers
User-defined

Profile-driven storage is achieved by using two key components: storage capabilities and virtual machine storage profiles.

A storage capability outlines the quality of service that a storage system can deliver. It is a guarantee that the storage system can provide a specific set of characteristics. The two types of storage capabilities are system-defined and user-defined.

A system-defined storage capability is one that comes from a storage system that uses a VASA vendor provider. The vendor provider informs vCenter Server that it can guarantee a specific set of storage features by presenting them as a storage capability. vCenter Server recognizes the capability and adds it to the list of storage capabilities for that storage vendor. vCenter Server assigns the system-defined storage capability to each datastore that you create from that storage system.

A user-defined storage capability is one that you can define and associate with datastores. Examples of user-defined capabilities are:
Storage array type
Replication status
Storage tiers, such as gold, silver, and bronze datastores
A user-defined capability can be associated with multiple datastores. You can associate a user-defined capability with a datastore that already has a system-defined capability.

Virtual Machine Storage Profiles


Slide 6-13

Virtual machine storage profiles:
Contain one or more storage capabilities
Are associated with one or more virtual machines
Can be used to test that virtual machines reside on compliant storage

Storage capabilities are used to define a virtual machine storage profile. A virtual machine storage profile lists the storage capabilities that virtual machine home files and virtual disks require to run the applications in the virtual machine. A virtual machine storage profile is created by an administrator, who can create different storage profiles to define different levels of storage requirements. The virtual machine home files (.vmx, .vmsd, .nvram, .log, and so on) and the virtual disks (.vmdk) can have separate virtual machine storage profiles. With a virtual machine storage profile, a virtual machine can be checked for storage compliance. If the virtual machine is placed on storage that has the same capabilities as those defined in the virtual machine storage profile, the virtual machine is storage-compliant.
Many of the ideas for virtual machine storage profiles came from the Host Profiles feature.


Overview of Steps for Configuring Profile-Driven Storage


Slide 6-14

To configure profile-driven storage:


1. View existing storage capabilities.
2. (Optional) Create user-defined storage capabilities.
3. Associate user-defined storage capabilities with a datastore or datastore cluster.
4. Enable the VM Storage Profiles function on a host or cluster.
5. Create a virtual machine storage profile.
6. Associate a virtual machine storage profile with a virtual machine.

An overview of the steps to configure profile-driven storage:


1. Before you add your own user-defined storage capabilities, view the system-defined storage capabilities that your storage system defines. You are checking to see whether any of the system-defined storage capabilities match your virtual machines' storage requirements. For you to view system-defined storage capabilities, your storage system must use a VASA provider.

2. Create necessary user-defined storage capabilities based on the storage requirements of your virtual machines.

3. After you create user-defined storage capabilities, associate these capabilities with datastores. Whether or not a datastore has a system-defined storage capability, you can assign a user-defined storage capability to it. A datastore can have only one user-defined and only one system-defined storage capability at a time.

4. Virtual machine storage profiles are disabled by default. Before you can use virtual machine storage profiles, you must enable them on a host or a cluster.

5. Create a virtual machine storage profile to define storage requirements for a virtual machine and its virtual disks. Assign user-defined or system-defined storage capabilities or both to the virtual machine storage profile.

6. Associate a virtual machine storage profile with a virtual machine to define the storage capabilities that are required by the applications running on the virtual machine. You can associate a virtual machine storage profile with a powered-off or powered-on virtual machine.


Using the Virtual Machine Storage Profile


Slide 6-15

Use the virtual machine storage profile when you create, clone, or migrate a virtual machine.

When you create, clone, or migrate a virtual machine, you can associate the virtual machine with a virtual machine storage profile. When you select a virtual machine storage profile, the vSphere Client displays the datastores that are compatible with the capabilities of the profile. You can then select a datastore or a datastore cluster. If you select a datastore that does not match the virtual machine storage profile, the vSphere Client shows that the virtual machine is using noncompliant storage. When a virtual machine storage profile is selected, datastores are now divided into two categories: compatible and incompatible. You can still choose other datastores outside of the virtual machine storage profile, but these datastores put the virtual machine into a noncompliant state. By using virtual machine storage profiles, you can easily see which storage is compatible and incompatible. You can eliminate the need to ask the SAN administrator, or refer to a spreadsheet of NAA IDs, each time that you deploy a virtual machine.


Checking Virtual Machine Storage Compliance


Slide 6-16

After clicking the Check Compliance Now link

You can associate a virtual machine storage profile with a virtual machine or individual virtual disks. When you select the datastore on which a virtual machine should be located, you can check whether the selected datastore is compliant with the virtual machine storage profile.
To check the storage compliance of a virtual machine:

In the Virtual Machines tab of the virtual machine storage profile, click the Check Compliance Now link. If you check the compliance of a virtual machine whose host or cluster has virtual machine storage profiles disabled, the virtual machine will be noncompliant because the feature is disabled. Virtual machine storage compliance can also be viewed from the virtual machine's Summary tab.


Identifying Advanced Storage Options


Slide 6-17

Some advanced storage options include the following:

N_Port ID virtualization (NPIV)
vCenter Server storage filters
Identifying and tagging solid-state drive (SSD) devices
Software iSCSI port binding
VMware vSphere VMFS (VMFS) resignaturing
Pluggable storage architecture (PSA)


N_Port ID Virtualization
Slide 6-18

NPIV assigns a virtual World Wide Name and virtual N_Port ID to an individual virtual machine.

NPIV benefits:
NPIV gives each virtual machine an identity on the SAN.
Track storage traffic per virtual machine.
Zone and mask LUNs per virtual machine.
Leverage SAN quality-of-service per virtual machine.
Improve I/O performance through per virtual machine array-level caching.

Configure NPIV if you have the following requirements:
A management requirement to monitor SAN LUN usage at the virtual machine level
A security requirement to be able to zone a specific LUN to a specific virtual machine

In normal ESXi operation, only the Fibre Channel HBA has a World Wide Name (WWN) and N_Port ID. N_Port ID Virtualization (NPIV) is used to assign a virtual WWN and virtual N_Port ID to a virtual machine. NPIV is most useful in two situations:
Configure NPIV if there is a management requirement to be able to monitor SAN LUN usage down to the virtual machine level. Because a WWN is assigned to an individual virtual machine, the virtual machine's LUN usage can be tracked by SAN management software.
NPIV is also useful for access control. Because Fibre Channel zoning and array-based LUN masking use WWNs, access control can be configured down to the individual virtual machine level.


N_Port ID Virtualization Requirements


Slide 6-19

NPIV requires the following:

Virtual machines use RDMs.
Fibre Channel HBAs support NPIV.
Fibre Channel switches support NPIV.
ESXi hosts have access to all LUNs used by their virtual machines.

NPIV cannot be used with virtual machines configured with VMware vSphere Fault Tolerance (FT).

The requirements to configure NPIV are listed on the slide. For more about NPIV, see vSphere Storage Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.


vCenter Server Storage Filters


Slide 6-20

VMFS Filter (key: config.vpxd.filter.vmfsFilter) - Filters out storage devices, or LUNs, that are already used by a VMware vSphere VMFS datastore on any host managed by vCenter Server.

RDM Filter (key: config.vpxd.filter.rdmFilter) - Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM.

Same Host and Transports Filter (key: config.vpxd.filter.SameHostAndTransportsFilter) - Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility.

Host Rescan Filter (key: config.vpxd.filter.hostRescanFilter) - Automatically rescans and updates VMFS datastores after you perform datastore management operations. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by vCenter Server.

vCenter Server provides storage filters to help you avoid storage device corruption or performance degradation that can be caused by an unsupported use of storage devices. The following filters are available by default:

VMFS Filter - Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with another VMFS datastore or to be used as an RDM.

RDM Filter - Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM. If you need virtual machines to access the same LUN, the virtual machines must share the same RDM mapping file. For information about this type of configuration, see the vSphere Resource Management documentation.

Same Host and Transports Filter - Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility. Prevents you from adding the following LUNs as extents:
- LUNs not exposed to all hosts that share the original VMFS datastore.
- LUNs that use a storage type different from the one the original VMFS datastore uses. For example, you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device.

Host Rescan Filter - Automatically rescans and updates VMFS datastores after you perform datastore management operations. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by vCenter Server.
To change the filter behavior:
1. In the vSphere Client, select Administration > vCenter Server Settings.
2. In the settings list, select Advanced Settings.
3. In the Key text box, type the key you want to change.
4. To disable the setting, type False for the key.
5. Click Add.
6. Click OK.
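These keys correspond to entries in the vCenter Server vpxd.cfg file. As an illustration only, based on how vCenter Server stores advanced settings (verify the exact syntax against VMware documentation before editing the file; a manual edit also requires a restart of the vCenter Server service), an entry that disables the Host Rescan Filter would look similar to the following:

<config>
  <vpxd>
    <filter>
      <hostRescanFilter>false</hostRescanFilter>
    </filter>
  </vpxd>
</config>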

Before making any changes to the device filters, consult with the VMware support team.


Identifying and Tagging SSD Devices


Slide 6-21

The VMkernel can automatically detect, tag, and enable an SSD.
Use the vSphere Client to identify an SSD: check the Storage panel on the ESXi host's Summary tab.
By knowing which storage is SSD, you can use that storage for the following:
Quicker VMware vSphere Storage vMotion migrations among hosts that share the same SSD
Improving a virtual machine's performance by placing its swap file on it

The VMkernel can now automatically detect, tag, and enable an SSD. ESXi detects SSD devices through an inquiry mechanism based on the T10 standard. This mechanism allows ESXi to discover SSD devices on many storage arrays. Devices that cannot be autodetected (that is, arrays that are not T10-compliant) can be tagged as SSD by setting up new Pluggable Storage Architecture Storage Array Type Plug-in claim rules.

You can use the vSphere Client to identify your SSD storage. The storage section in the ESXi host's Summary tab identifies the drive type. The drive type shows you whether a storage device is SSD.

The benefits to using SSD include:
Quicker VMware vSphere Storage vMotion migrations can occur among hosts that share the same SSD.
SSD can be used as swap space for improved system performance when under memory contention.
For information about the T10 standard, go to http://www.t10.org.
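For a non-T10-compliant array, the tagging can be done from the ESXi Shell or vCLI with an esxcli SATP claim rule. A sketch; the device identifier and SATP name below are placeholders, so substitute the values that your host reports:

# Check the current device type (look for the "Is SSD" field).
esxcli storage core device list --device naa.6006016015301d00167ce6e2ddb3de11
# Add a claim rule that tags the device as SSD.
esxcli storage nmp satp rule add --satp VMW_SATP_CX --device naa.6006016015301d00167ce6e2ddb3de11 --option=enable_ssd
# Reclaim the device so that the new rule takes effect.
esxcli storage core claiming reclaim --device naa.6006016015301d00167ce6e2ddb3de11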


Configuring Software iSCSI Port Binding


Slide 6-22

From the Configuration tab, select the Storage Adapters link. Highlight the software initiator and select Properties.

iSCSI port binding enables a software or a dependent hardware iSCSI initiator to be bound to a specific VMkernel adapter. If you are using dependent hardware iSCSI adapters, you must bind each adapter to a VMkernel port for it to function properly. By default, all network adapters appear as active. If you are using multiple VMkernel ports on a single switch, you must override this setup so that each VMkernel interface maps to only one corresponding active NIC. For example, vmk1 maps to vmnic1 and vmk2 maps to vmnic2. To bind an iSCSI adapter to a VMkernel port, create a virtual VMkernel adapter for each physical network adapter on your host. If you use multiple VMkernel adapters, set up the correct network policy:
1. Click the Configuration tab, and click Storage Adapters in the Hardware panel.
2. The list of available storage adapters appears.
3. Select the software or dependent iSCSI adapter to configure and click Properties.
4. In the iSCSI Initiator Properties dialog box, click the Network Configuration tab.
5. Click Add and select a VMkernel adapter to bind with the iSCSI adapter. You can bind the software iSCSI adapter to one or more VMkernel adapters. For a dependent hardware iSCSI adapter, only one VMkernel interface associated with the correct physical NIC is available.

VS5OS_LectGuideVo11.book Page 300 Monday, June 25, 2012 10:07 PM

6. Click OK.
7. The network connection appears on the list of VMkernel port bindings for the iSCSI adapter.
8. Verify that the network policy for the connection is compliant with the binding requirements.
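The same binding can be performed from the ESXi Shell or vCLI. A sketch, assuming a software iSCSI adapter named vmhba33 and VMkernel adapters vmk1 and vmk2 (substitute the names on your host):

# Bind each VMkernel adapter to the software iSCSI adapter.
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
# Verify the bindings.
esxcli iscsi networkportal list --adapter vmhba33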


VMFS Resignaturing
Slide 6-23

(Diagram: a datastore labeled VMFS_1 with UUID 4e26f26a-9fe2664c-c9c7-000c2988e4dd is replicated by storage array replication from a protected site to a recovery site. After resignaturing, the copy receives a new UUID and the label snap-snapID#-VMFS_1.)

Datastore resignaturing overwrites the original VMFS UUID:

The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
The LUN appears as an independent datastore with no relation to the source of the copy.
A spanned datastore can be resignatured only if all its extents are online.

When a LUN is replicated or a copy is made, the resulting LUN copy is identical, byte-for-byte, with the original LUN. As a result, the original LUN contains a VMFS datastore with UUID X, and the LUN copy appears to contain an identical copy of a VMFS datastore (a VMFS datastore with the same UUID). ESXi can determine whether a LUN contains a VMFS datastore copy and does not mount it automatically. The LUN copy must be resignatured before it is mounted.

When a datastore resignature is performed, consider the following points:
Datastore resignaturing is irreversible because it overwrites the original VMFS UUID.
The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy. Instead it appears as an independent datastore with no relation to the source of the copy.
A spanned datastore can be resignatured only if all its extents are online.
The resignaturing process is crash-and-fault tolerant. If the process is interrupted, you can resume it later.
The default format of the new label assigned to the datastore is snap-snapID-oldLabel (where snapID is an integer and oldLabel is the label of the original datastore).
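Resignaturing can also be performed from the ESXi Shell or vCLI. A sketch, assuming the original datastore label is VMFS_1:

# List unresolved VMFS snapshot volumes that the host has detected.
esxcli storage vmfs snapshot list
# Resignature the copy, identified by the label of the original volume.
esxcli storage vmfs snapshot resignature --volume-label=VMFS_1
# Alternatively, mount the copy with its existing signature (possible only if the
# original datastore is not online on the same host).
esxcli storage vmfs snapshot mount --volume-label=VMFS_1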

Pluggable Storage Architecture


Slide 6-24

PSA is a collection of vStorage APIs that allow third-party hardware vendors to insert code directly into the SCSI middle layer.

The PSA allows third-party software developers to design their own load-balancing techniques and failover mechanisms for particular storage array types. Third-party vendors can add support for new arrays into the SCSI middle layer without having to provide internal information or intellectual property about the array to VMware.

VMware provides a generic multipathing plug-in (MPP) called the Native Multipathing Plug-in (NMP). PSA coordinates the operation of the NMP and third-party MPPs or the built-in Storage Array Type Plug-in (SATP).

The Pluggable Storage Architecture (PSA) sits in the SCSI middle layer of the VMkernel I/O stack. The VMware Native Multipathing Plug-in (NMP) supports all storage arrays on the VMware storage hardware compatibility list. The NMP also manages sub-plug-ins for handling multipathing and load balancing.

The PSA discovers available storage paths and, based on a set of predefined rules, determines which multipathing plug-in (MPP) should be given ownership of the path. The MPP associates a set of physical paths with a specific storage device or LUN.

The details of handling path failover for a given storage array are delegated to a sub-plug-in called the Storage Array Type Plug-in (SATP). The SATP is associated with paths. The details for determining which physical path is used to issue an I/O request (load balancing) to a storage device are handled by a sub-plug-in called the Path Selection Plug-in (PSP). The PSP is associated with logical devices.

PSA tasks:
Load and unload multipathing plug-ins
Handle physical path discovery and removal (through scanning)
Route I/O requests for a specific logical device to an appropriate multipathing plug-in
Handle I/O queuing to the physical storage HBAs and to the logical devices
Implement logical device bandwidth sharing between virtual machines
Provide logical device and physical path I/O statistics

NMP tasks:
Manage physical path claiming and unclaiming
Create, register, and deregister logical devices
Associate physical paths with logical devices
Process I/O requests to logical devices
Select an optimal physical path for the request (load balancing)
Perform actions necessary to handle failures and request retries
Support management tasks like abort or reset of logical devices
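The plug-ins that are loaded, and the SATP and PSP assigned to each device, can be inspected from the ESXi Shell or vCLI. For example (output varies by host):

# List the SATPs loaded on the host and their default PSPs.
esxcli storage nmp satp list
# List the available PSPs.
esxcli storage nmp psp list
# Show the SATP and PSP assigned to each device claimed by the NMP.
esxcli storage nmp device list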


VMware Default Multipathing Plug-in


Slide 6-25

The top-level plug-in is the MPP. The VMware default MPP is the NMP, which includes SATPs and Path Selection Plug-ins (PSPs).
(Diagram: the PSA contains the VMware NMP, which in turn contains multiple VMware SATPs and VMware PSPs.)

The PSA uses plug-ins to manage and access storage. The top-level plug-in is the MPP. All storage is accessed through an MPP. MPPs can be supplied by storage vendors or by VMware. The VMware default MPP is the NMP. The NMP includes SATPs and PSPs.


Overview of the MPP Tasks


Slide 6-26

The PSA:

Discovers available storage (physical paths)
Uses predefined claim rules to assign each device to an MPP

An MPP claims a physical path and exports a logical device.
Details of path failover for a specific path are delegated to the SATP.
Details for determining which physical path is used to a storage device (load balancing) are handled by the PSP.

The PSA has two major tasks. The first task is to discover what storage devices are available on a system. Once storage is detected, the second task is to apply predefined claim rules to control the storage device. Each device should be claimed by only one claim rule.

Claim rules come from and are used by MPPs. So when a device is claimed by a rule, it is being claimed by the MPP associated with that rule. The MPP is actually claiming a physical path to a storage device. Once the path has been claimed, the MPP exports a logical device. Only an MPP can associate a physical path with a logical device.

Within each MPP there are two sub-plug-in types: SATPs and PSPs. The SATP is associated with physical paths and controls path failover. The PSP handles which physical path is used to issue an I/O request; this activity is load balancing. PSPs are associated with logical devices. SATPs and PSPs are covered in detail later.

A single MPP can support multiple SATPs and PSPs. The modular nature of the PSA allows for the possibility of SATPs and PSPs from different third-party vendors. For example, a storage device could be configured to be managed by an MPP written by one vendor while using a VMware SATP and a PSP from some other vendor.
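The claim rules and the plug-in that owns each rule can be displayed with esxcli, for example:

# Show the PSA claim rules and the plug-in associated with each rule.
esxcli storage core claimrule list
# Show which multipath plug-in has claimed each device.
esxcli storage core device list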

If the storage vendor has not supplied an MPP, SATP, or PSP, a VMware MPP, SATP, or PSP is assigned by default. This modularity can also cause problems. An MPP, SATP, or PSP might be assigned to a storage device incorrectly, and the physical hardware might not correctly support the feature set that has been assigned. Troubleshooting might involve switching a device to a different MPP, SATP, or PSP.
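As part of such troubleshooting, the PSP assigned to a device can be changed with esxcli. A sketch; the device identifier is a placeholder, and any change should follow the array vendor's guidance:

# Show the current PSP for the device.
esxcli storage nmp device list --device naa.6006016015301d00167ce6e2ddb3de11
# Assign the round-robin PSP to the device.
esxcli storage nmp device set --device naa.6006016015301d00167ce6e2ddb3de11 --psp VMW_PSP_RR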


Path Selection Example


Slide 6-27

Information about all functional paths is forwarded by the SATP to the PSP. The PSP chooses which path to use.

(Diagram: an I/O request enters the VMkernel storage stack and is handled by the PSA, where the NMP invokes the PSP and the SATP to select a path through HBA 1 or HBA 2; the numbered steps correspond to the sequence described below.)

When a virtual machine issues an I/O request to a logical device managed by the NMP, the following takes place:
1. The NMP calls the PSP assigned to this logical device.
2. The PSP selects an appropriate physical path to send the I/O, load-balancing the I/O if necessary.
3. If the I/O operation is successful, the NMP reports its completion. If the I/O operation reports an error, the NMP calls an appropriate SATP.
4. The SATP interprets the error codes and, when appropriate, activates inactive paths and fails over to the new active path.
5. The PSP is then called to select a new active path from the available paths to send the I/O.
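The paths that the PSP chooses from, along with their states (active, standby, dead), can be listed per device. The device identifier below is a placeholder:

# List all paths for a specific device.
esxcli storage core path list --device naa.6006016015301d00167ce6e2ddb3de11
# Or list every path on the host.
esxcli storage core path list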


Lab 8
Slide 6-28

In this lab, you will work with profile-driven storage.


1. Create a VMFS datastore configuration for this lab.
2. Create a user-defined storage capability.
3. Create a virtual machine storage profile.
4. Enable your host to use virtual machine storage profiles.
5. Associate storage profiles with virtual machines.


Review of Learner Objectives


Slide 6-29

You should be able to do the following:

Describe vSphere Storage APIs - Array Integration.
Describe vSphere Storage APIs - Storage Awareness.
Configure and use profile-driven storage.


Lesson 2: Storage I/O Control


Slide 6-30

Lesson 2: Storage I/O Control


Learner Objectives
Slide 6-31

After this lesson, you should be able to do the following:

Describe Storage I/O Control.
Configure Storage I/O Control.


What Is Storage I/O Control?


Slide 6-32

Storage I/O Control allows cluster-wide storage I/O prioritization:

Allows for better workload consolidation
Helps reduce extra costs associated with overprovisioning
Is used to balance I/O load in a datastore cluster enabled for Storage DRS

(Diagram: without Storage I/O Control, during high I/O from a noncritical application, data mining and print server workloads crowd out the online store and mail server; with Storage I/O Control, the online store and mail server retain their share of I/O.)

Storage I/O Control extends the constructs of shares and limits to handle storage I/O resources. Storage I/O Control is a proportional-share IOPS scheduler that, under contention, throttles IOPS. You can control the amount of storage I/O that is allocated to virtual machines during periods of I/O congestion. Controlling storage I/O ensures that more important virtual machines get preference over less important virtual machines for I/O resource allocation.

When VMware vSphere Storage DRS is enabled with I/O metrics, Storage I/O Control is automatically enabled on the datastores in the datastore cluster. You can use Storage I/O Control with or without Storage DRS. There are two thresholds: one for standalone Storage I/O Control and one for Storage DRS. For Storage DRS, latency statistics are gathered by Storage I/O Control for an ESXi host, sent to vCenter Server, and stored in the vCenter Server database. With these statistics, Storage DRS can decide whether a virtual machine should be migrated to another datastore.


Storage I/O Control Requirements


Slide 6-33

Datastores that are enabled for Storage I/O Control must be managed by a single vCenter Server system.
Storage I/O Control is supported for Fibre Channel, iSCSI, and NFS storage.
Storage I/O Control does not support datastores with multiple extents.
Verify whether your automated tiered storage array is certified as compatible with Storage I/O Control.

Storage I/O Control has several requirements and limitations. Storage I/O Control is not supported for raw device mappings.


Before using Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, verify that your automated tiered storage array has been certified to be compatible with Storage I/O Control. See the online VMware Compatibility Guide at http://www.vmware.com/resources/compatibility. Automated storage tiering is the ability of an array (or group of arrays) to migrate LUNs/volumes or parts of LUNs/volumes to different types of storage media (solid-state drive, Fibre Channel, SAS, SATA) based on user-set policies and current I/O patterns. No special certification is required for arrays that do not have these automatic migration/tiering features, including those that provide the ability to manually migrate data between different types of storage media.


Configuring Storage I/O Control


Slide 6-34

To configure Storage I/O Control:


1. Enable Storage I/O Control for the datastore.
2. Set the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed for each virtual machine.

Example: two virtual machines running Iometer (VM1: 1,000 shares; VM2: 2,000 shares)

          Without shares/limits         With shares/limits
          IOPS    Iometer latency       IOPS    Iometer latency
VM1       1,500   20ms                  1,080   31ms
VM2       1,500   21ms                  1,900   16ms

Storage I/O Control provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion.

When you enable Storage I/O Control on a datastore, ESXi begins to monitor the device latency that hosts observe when communicating with that datastore. When device latency exceeds a threshold, the datastore is considered to be congested, and each virtual machine that accesses that datastore is allocated I/O resources in proportion to its shares.

When you allocate storage I/O resources, you can limit the IOPS that are allowed for a virtual machine. By default, the number of IOPS allowed for a virtual machine is unlimited. If the limit that you want to set for a virtual machine is in terms of megabytes per second instead of IOPS, you can convert megabytes per second into IOPS based on the typical I/O size for that virtual machine. For example, a backup application has a typical I/O size of 64KB. To restrict a backup application to 10MB per second, set a limit of 160 IOPS (10MB per second / 64KB I/O size = 160 I/Os per second).

On the slide, virtual machines VM1 and VM2 are running an I/O load generator called Iometer. Each virtual machine is running on a different host, but they are running the same type of workload: 16KB random reads. The shares of VM2 are set to twice as many shares as VM1, which implies that VM2 is more important than VM1. With Storage I/O Control disabled, the IOPS that each virtual machine achieves, as well as their I/O latency, is identical. But with Storage I/O Control enabled, the IOPS achieved by the virtual machine with more shares (VM2) are greater than the IOPS of VM1. The example assumes that each virtual machine is running enough load to cause a bottleneck on the datastore.
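Device latency as seen by a host can be observed with resxtop (or esxtop in the ESXi Shell). A sketch, assuming a host named esxi01.example.com:

resxtop --server esxi01.example.com
# In the interactive display, press u for the disk device view.
# The DAVG/cmd column shows the average device latency per command, in milliseconds.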
To enable Storage I/O Control on a datastore:
1. In the Datastores and Datastore Clusters inventory view, select a datastore and click the Configuration tab.
2. Click the Properties link.
3. Under Storage I/O Control, select the Enabled check box.
4. Click Close.

To set the storage I/O shares and limits:
1. Right-click the virtual machine in the inventory and select Edit Settings.
2. In the Virtual Machine Properties dialog box, click the Resources tab.

By default, all virtual machine shares are set to Normal (1000), with unlimited IOPS. For more about Storage I/O Control, see vSphere Resource Management Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.


Review of Learner Objectives


Slide 6-35

You should be able to do the following:

Describe Storage I/O Control.
Configure Storage I/O Control.


Lesson 3: Datastore Clusters and Storage DRS


Slide 6-36

Lesson 3: Datastore Clusters and Storage DRS


Learner Objectives
Slide 6-37

After this lesson, you should be able to do the following:

Create a datastore cluster.
Configure Storage DRS.
Explain how Storage I/O Control and Storage DRS complement each other.


What Is a Datastore Cluster?


Slide 6-38

A datastore cluster is a collection of datastores that are grouped together without functioning together.
A datastore cluster enabled for Storage DRS is a collection of datastores working together to balance:
Capacity
I/O latency
(Diagram: four 500GB datastores grouped into a 2TB datastore cluster.)

The datastore cluster serves as a container or folder. The user can store datastores in the container, but the datastores work as separate entities. A datastore cluster that is enabled for Storage DRS is a collection of datastores designed to work as a single unit. In this type of datastore cluster, Storage DRS balances datastore use and I/O latency.


Datastore Cluster Rules


Slide 6-39

General rules for datastore clusters (with or without Storage DRS):

Datastores from different arrays can be added to the same datastore cluster.

LUNs from arrays of different types can adversely affect performance if they are not equally performing LUNs.

Datastore clusters must contain similar or interchangeable datastores.
Datastore clusters support only VMware vSphere ESXi 5.0 hosts.

Rules specific to datastore clusters enabled for Storage DRS:


Do not mix VMFS and NFS datastores in the same datastore cluster.
Do not mix replicated datastores with nonreplicated datastores.
You can mix VMFS-3 and VMFS-5 datastores in the same datastore cluster.

Datastores and hosts that are associated with a datastore cluster must meet certain requirements to use datastore cluster features successfully. A datastore cluster can contain a mix of datastores with different sizes and I/O capacities, from different arrays and vendors. However, LUNs with different performance characteristics can cause performance problems. All hosts attached to the datastores in a datastore cluster must be ESXi 5.0 or later. ESXi 4.x and earlier hosts cannot be included in a datastore cluster. NFS and VMFS datastores cannot be combined in the same datastore cluster enabled for Storage DRS. Storage DRS cannot move virtual machines between NFS and VMFS datastores. VMFS-3 and VMFS-5 datastores can be added to the same Storage DRS cluster, but the performance of these datastores should be similar.


Relationship of Host Cluster to Datastore Cluster


Slide 6-40

The relationship between a VMware vSphere High Availability / VMware vSphere Distributed Resource Scheduler cluster and a datastore cluster can be one to one, one to many, or many to many.

Host clusters and datastore clusters can coexist in the virtual infrastructure. A host cluster refers to a VMware vSphere Distributed Resource Scheduler (DRS)/VMware vSphere High Availability (vSphere HA) cluster. Load balancing by DRS and Storage DRS can occur at the same time. DRS balances virtual machines across hosts based on CPU and memory usage. Storage DRS load-balances virtual machines across storage, based on storage capacity and IOPS. A host that is not part of a host cluster can also use a datastore cluster.


Storage DRS Overview


Slide 6-41

Storage DRS provides the following functions:

Initial placement of virtual machines based on storage capacity and, optionally, on I/O latency
Use of Storage vMotion to migrate virtual machines based on storage capacity
Use of Storage vMotion to migrate virtual machines based on I/O latency
Configuration in either manual or fully automated modes
Use of affinity and anti-affinity rules to govern virtual disk location
Use of fully automated, storage maintenance mode to clear a LUN of virtual machine files

Storage DRS manages the placement of virtual machines in a datastore cluster, based on the space usage of the datastores. It attempts to keep usage as even as possible across the datastores in the datastore cluster. Storage vMotion migration of virtual machines can also be a way of keeping the datastores balanced. Optionally, the user can configure Storage DRS to balance I/O latency across the members of the datastore cluster as a way to help mitigate performance issues that are caused by I/O latency.

Storage DRS can be set up to work in either manual or fully automated mode:
Manual mode presents migration and placement recommendations to the user, but nothing is executed until the user accepts the recommendation.
Fully automated mode automatically handles initial placement and migrations based on runtime rules.


Initial Disk Placement


Slide 6-42

When virtual machines are created, cloned, or migrated:

You select a datastore cluster, rather than a single datastore.

Storage DRS selects a member datastore based on capacity and optionally on IOPS load.

By default, a virtual machine's files are placed on the same datastore in the datastore cluster.

Storage DRS affinity and anti-affinity rules can be created to change this behavior.

When a virtual machine is created, cloned, or migrated, the user has the option of selecting a datastore cluster on which to place the virtual machine files. When the datastore cluster is selected, Storage DRS chooses a member datastore (a datastore in the datastore cluster) based on storage use. Storage DRS attempts to keep the member datastores evenly used. By default, Storage DRS locates all the files that make up a virtual machine on the same datastore. However, Storage DRS anti-affinity rules can be created so that virtual machine disk files can be placed on different datastores in the cluster.


Migration Recommendations
Slide 6-43

Migration recommendations are executed:

When the IOPS response time is exceeded
When the space utilization threshold is exceeded
Space utilization is checked every five minutes.
IOPS load history is checked every eight hours.
Storage DRS selects a datastore based on utilization and IOPS load.
Load balancing is based on IOPS workload, which ensures that no datastore exceeds a particular VMkernel I/O latency level.

Storage DRS provides as many recommendations as necessary to balance the space and, optionally, the IOPS resources of the datastore cluster. Reasons for migration recommendations include:
Balancing space usage in the datastore
Reducing datastore I/O latency
Balancing datastore IOPS load

Storage DRS can also make mandatory recommendations based on whether:
A datastore is out of space
Storage DRS anti-affinity or affinity rules are being violated
A datastore is entering maintenance mode

Storage DRS also considers moving powered-off virtual machines to balance datastores.


Configuration of Storage DRS Migration Thresholds


Slide 6-44

Datastores and Datastore Clusters inventory view > right-click datacenter > New Datastore Cluster.

Option for including I/O latency in balancing

Configuration settings for utilized space and latency thresholds

Advanced settings for latency thresholds

In the SDRS Runtime Rules page of the wizard, select or deselect the Enable I/O metric for SDRS recommendations check box to enable or disable IOPS metric inclusion. When I/O load balancing is enabled, Storage I/O Control is enabled for all the datastores in the datastore cluster if it is not already enabled. When this option is deselected, you disable:
IOPS load balancing among datastores in the datastore cluster
Initial placement for virtual disks based on the IOPS metric

Space is then the only consideration when placement and balancing recommendations are made.

Storage DRS thresholds can be configured to determine when Storage DRS recommends or performs initial placement or migration recommendations:
Utilized Space - Determines the maximum percentage of consumed space allowed before Storage DRS recommends or performs an action.
I/O Latency - Indicates the maximum latency allowed before recommendations or migrations are performed. This setting is applicable only if the Enable I/O metric for SDRS recommendations check box is selected.


Click the Show Advanced Options link to view advanced options:
No recommendations until utilization difference between source and destination is - Defines the space utilization threshold. For example, datastore A is 80 percent full and datastore B is 83 percent full. If you set the threshold to 5, no recommendations are made. If you set the threshold to 2, a recommendation is made or a migration occurs.
Evaluate I/O load every - Defines how often Storage DRS checks space or IOPS load balancing or both.
I/O imbalance threshold - Defines the aggressiveness of IOPS load balancing if the Enable I/O metric for SDRS recommendations check box is selected.


Storage DRS Affinity Rules


Slide 6-45

Intra-VM VMDK affinity:
Keep a virtual machine's VMDKs together on the same datastore.
Maximize virtual machine availability when all disks are needed in order to run.
Rule is on by default for all virtual machines.

Intra-VM VMDK anti-affinity:
Keep a virtual machine's VMDKs on different datastores.
Rule can be applied to all or a subset of a virtual machine's disks.

VM anti-affinity:
Keep virtual machines on different datastores.
Rule is similar to the DRS anti-affinity rule.
Maximize availability of a set of redundant virtual machines.

By default, all of a virtual machine's disks can be on the same datastore. A user might want the virtual disks on different datastores. For example, a user can place a system disk on one datastore and place the data disks on another. In this case, the user can set up a virtual machine disk (VMDK) anti-affinity rule, which keeps a virtual machine's virtual disks on separate datastores.

Virtual machine anti-affinity rules keep virtual machines on separate datastores. This rule is useful when redundant virtual machines must always be available.


Adding Hosts to a Datastore Cluster


Slide 6-46

Select the host cluster that will use the datastore cluster.

If no host clusters are created, the user can select individual ESXi hosts to use the datastore cluster.
Datastores and Datastore Clusters inventory view > right-click datacenter > New Datastore Cluster.

You can configure a host cluster or individual hosts to use the datastore cluster enabled for Storage DRS.


Adding Datastores to the Datastore Cluster


Slide 6-47

Select the datastores to add to the datastore cluster.

VMware recommends selecting datastores that all hosts can access.

You can select one or more datastores in the Available Datastores pane. The Show Datastores drop-down menu enables you to filter the list of datastores to display. VMware recommends that all hosts have access to the datastores that you select. In the example, all datastores accessed by all hosts in the vCenter Server inventory are displayed. All datastores are accessible by all hosts, except for the datastores Local01 and Local02.


Storage DRS Summary Information


Slide 6-48

A panel on the datastore cluster's Summary tab displays the Storage DRS settings.

The vSphere Storage DRS panel on the Summary tab of the datastore cluster displays the Storage DRS settings:
I/O metrics - Displays whether or not the I/O metric inclusion option is enabled
Storage DRS - Indicates whether Storage DRS is enabled or disabled
Automation level - Indicates either manual or fully automated mode
Utilized Space threshold - Displays the space threshold setting
I/O latency threshold - Displays the latency threshold setting


Storage DRS Migration Recommendations


Slide 6-49

Use the Storage DRS tab to monitor for migration recommendations.

The Storage DRS tab displays the Recommendations view by default. In this view, datastore cluster properties are displayed. Also displayed are the migration recommendations and the reasons for the recommendations. To refresh recommendations, click the Run Storage DRS link. To apply recommendations, click Apply Recommendations. The Storage DRS tab has two other views. The Faults view displays issues that occurred when applying recommendations. The History view maintains a migration history.


Storage DRS Maintenance Mode


Slide 6-50

Storage DRS maintenance mode allows you to take a datastore out of use in order to service it.
Storage DRS maintenance mode:

Evacuates virtual machines from a datastore placed in maintenance mode.

Registered virtual machines (on or off) are moved.
Templates and unregistered virtual machines are not moved.

Storage DRS allows you to place a datastore in maintenance mode. A datastore enters or leaves maintenance mode only as the result of your performing the task. Storage DRS maintenance mode is available to datastores in a datastore cluster enabled for Storage DRS. Standalone datastores cannot be placed in maintenance mode.

When a datastore enters Storage DRS maintenance mode, only registered virtual machines are moved to other datastores in the datastore cluster. Unregistered virtual machines, templates, ISO images, and other nonvirtual machine files are not moved. The datastore does not enter maintenance mode until all files on the datastore are moved, so you must manually move these files off the datastore in order for the datastore to enter Storage DRS maintenance mode.

If the datastore cluster is set to fully automated mode, virtual machines are automatically migrated to other datastores. If the datastore cluster is set to manual mode, migration recommendations are displayed in the Storage DRS tab. The virtual disks cannot be moved until the recommendations are accepted.


To place a datastore into Storage DRS maintenance mode:
1. Go to the Datastores and Datastore Clusters inventory view.
2. Right-click the datastore in the datastore cluster enabled for Storage DRS and select Enter SDRS Maintenance Mode.


Backups and Storage DRS


Slide 6-51

Backing up virtual machines can add latency to a datastore.
You can schedule a task to disable Storage DRS behavior for the duration of the backup.

Scheduled tasks can be configured to change Storage DRS behavior. Scheduled tasks can be used to change the Storage DRS configuration of the datastore cluster to match enterprise activity. For example, if the datastore cluster is configured to perform migrations based on I/O latency, you might disable the use of I/O metrics by Storage DRS during the backup window. You can reenable I/O metrics use after the backup window ends.
To set up a Storage DRS scheduled task for a datastore cluster:
1. In the Datastores and Datastore Clusters inventory view, right-click the datastore cluster and select Edit Settings.
2. In the left pane, select SDRS Scheduling and click Add.
3. In the Set Time page, enter the start time, end time, and days that the task should run. Click Next.
4. In the Start Settings page, enter a description and modify the Storage DRS settings as you want them to be when the task starts. Click Next.
5. In the End Settings page, enter a description and modify the Storage DRS settings as you want them to be when the task ends. Click Next.
6. Click Finish to save the scheduled task.

Storage DRS Compatibility


Slide 6-52

Feature or product                    Supported/Not supported
VMware snapshots                      Supported
Raw device mapping pointer files      Supported
NFS                                   Supported
ESXi 3.x and 4.x hosts                Not supported

The table shows some features that are supported with Storage DRS. For information about Storage DRS features and requirements, see vSphere Resource Management Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.


Storage DRS and Storage I/O Control


Slide 6-53

Storage DRS and Storage I/O Control are complementary solutions:

Storage I/O Control is enabled by default on datastore clusters enabled for Storage DRS.

Storage DRS works to avoid I/O bottlenecks.
Storage I/O Control manages unavoidable I/O bottlenecks.

Storage I/O Control works in real time. Storage DRS does not use real-time latency to calculate load balancing.
Storage DRS and Storage I/O Control provide you with the performance that you need in a shared environment, without having to massively overprovision storage.

Both Storage DRS and Storage I/O Control work with IOPS and should be used together. Storage DRS works to avoid IOPS bottlenecks. Storage I/O Control is enabled when you enable Storage DRS. Storage I/O Control is used to manage unavoidable IOPS bottlenecks, such as short, intermittent bottlenecks, and congestion on every datastore in the datastore cluster.

Storage I/O Control runs in real time. It continuously checks for latency and controls I/O accordingly. Storage DRS uses IOPS load history to determine migrations. Storage DRS runs infrequently and does analysis to determine long-term load balancing.

Storage I/O Control monitors the I/O metrics of the datastores. Storage DRS uses this information to determine whether a virtual machine should be moved from one datastore to another.


Lab 9
Slide 6-54

In this lab, you will create a datastore cluster and configure Storage DRS.
1. Create a datastore cluster enabled for Storage DRS.
2. Perform a datastore evacuation with datastore maintenance mode.
3. Manually run Storage DRS and apply migration recommendations.
4. Acknowledge Storage DRS alarms.
5. Clean up for the next lab.


Review of Learner Objectives


Slide 6-55

You should be able to do the following:

Create a datastore cluster.
Configure Storage DRS.
Explain how Storage I/O Control and Storage DRS complement each other.


Key Points
Slide 6-56

vSphere Storage APIs - Array Integration consists of APIs for hardware acceleration and array thin provisioning.
vSphere Storage APIs - Storage Awareness allows storage vendors to provide information about the capabilities of their storage arrays to vCenter Server.
Profile-driven storage is a feature that introduces storage compliance to vCenter Server.
Storage I/O Control allows clusterwide storage I/O prioritization.
Storage DRS gives an organization an easy way to balance its storage utilization and minimize the effect of I/O latency.
A datastore cluster enabled for Storage DRS is a collection of datastores working together to balance storage capacity and I/O latency.

Questions?

