Performance Characterization
of VMFS and RDM Using a SAN
ESX Server 3.5
VMware ESX Server offers two choices for managing disk access in a virtual machine: VMware Virtual Machine File System (VMFS) and raw device mapping (RDM). It is very important to understand the I/O characteristics of these disk access management systems in order to choose the right access type for a particular application. Choosing the right disk access management method can be a key factor in achieving high system performance for enterprise-class applications.
This paper is a follow-on to a previous performance study that compares the performance of VMFS and RDM in ESX Server 3.0.1 (Performance Characteristics of VMFS and RDM: VMware ESX Server 3.0.1; see Resources on page 11 for a link). The experiments described in this paper compare the performance of VMFS and RDM in VMware ESX Server 3.5. The goal is to provide data on performance and system resource utilization at various load levels for different types of workloads. This information offers you an idea of relative throughput, I/O rate, and CPU cost for each of the options so you can select the appropriate disk access method for your application.
A direct comparison of the results in this paper to those reported in the previous paper would be inaccurate. The test setup we used to conduct tests for this paper is different from the one used for the tests described in the previous paper with ESX Server 3.0.1. Previously, we created the test disks on local 10K RPM SAS disks. In this paper we used Fibre Channel disks in an EMC CLARiiON CX3-40 to create the test disk. Because of the different protocols used to access the disks, the I/O path in the ESX Server software stack changes significantly, and thus the I/O latency experienced by the workloads in Iometer also changes.
This study covers the following topics:
Technology Overview on page 2
Executive Summary on page 2
Test Environment on page 2
Performance Results on page 5
Conclusion on page 10
Configuration on page 10
Resources on page 11
Appendix: Effect of Cache Page Size on Sequential Read I/O Patterns with I/O Block Size Less than Cache Page Size on page 12
Technology Overview
VMFS is a special high-performance file system offered by VMware to store ESX Server virtual machines. Available as part of ESX Server, it is a clustered file system that allows concurrent access by multiple hosts to files on a shared VMFS volume. VMFS offers high I/O capabilities for virtual machines. It is optimized for storing and accessing large files such as virtual disks and the memory images of suspended virtual machines.

RDM is a mapping file in a VMFS volume that acts as a proxy for a raw physical device. The RDM file contains metadata used to manage and redirect disk accesses to the physical device. This technique provides the advantages of direct access to a physical device in addition to some of the advantages of a virtual disk on VMFS storage. In brief, it offers VMFS manageability with the raw device access required by certain applications.
You can configure RDM in two ways:
Virtual compatibility mode: This mode fully virtualizes the mapped device, which appears to the guest operating system as a virtual disk file on a VMFS volume. Virtual mode provides such benefits of VMFS as advanced file locking for data protection and use of snapshots.
Physical compatibility mode: This mode provides access to most hardware characteristics of the mapped device. The VMkernel passes all SCSI commands to the device, with one exception, thereby exposing all the physical characteristics of the underlying hardware.
Both VMFS and RDM provide such clustered file system features as file locking, permissions, persistent naming, and VMotion capabilities. VMFS is the preferred option for most enterprise applications, including databases, ERP, CRM, VMware Consolidated Backup, Web servers, and file servers. Although VMFS is recommended for most virtual disk storage, raw disk access is needed in a few cases. RDM is recommended for those cases. Some of the common uses of RDM are in cluster data and quorum disks for configurations using clustering between virtual machines or between physical and virtual machines, or for running SAN snapshot or other layered applications in a virtual machine.

For more information on VMFS and RDM, see the Server Configuration Guide mentioned in Resources on page 11.
Executive Summary
The main conclusions that can be drawn from the tests described in this study are:
For random reads and writes, VMFS and RDM yield a similar number of I/O operations per second.
For sequential reads and writes, the performance of VMFS is very close to that of RDM (except on sequential reads with an I/O block size of 4K). Both RDM and VMFS yield very high throughput, in excess of 300 megabytes per second depending on the I/O block size.
For random reads and writes, VMFS requires 5 percent more CPU cycles per I/O operation compared to RDM.
For sequential reads and writes, VMFS requires about 8 percent more CPU cycles per I/O operation compared to RDM.
Test Environment
The tests described in this study characterize the performance of VMFS and RDM in ESX Server 3.5. We ran the tests with a uniprocessor virtual machine using Windows Server 2003 Enterprise Edition with SP2 as the guest operating system. The virtual machine ran on an ESX Server system installed on a local SCSI disk. We attached two disks to the virtual machine: one virtual disk for the operating system and a separate test disk, which was the target for the I/O operations. We generated I/O load using Iometer, a very popular tool for evaluating I/O performance (see Resources on page 11 for a link to more information). See Configuration on page 10 for a detailed list of the hardware and software configuration we used for the tests.
For all workloads except 4K sequential read, we used the default cache page size (8K) in the storage. However, for 4K sequential read workloads, the default cache page setting resulted in lower performance for both VMFS and RDM. EMC recommends setting the cache page size in storage to the application block size (4K in this case) for a stable workload with a constant I/O block size. Hence, for 4K sequential read workloads, we set the cache page size to 4K. See Figure 12 and Figure 13 for a performance comparison of 4K sequential read I/O operations with 4K and 8K cache page settings.

For more information on cache page settings, see the EMC CLARiiON Best Practices for Fibre Channel Storage white paper (see Resources on page 11 for a link).
Disk Layout
In this study, disk layout refers to the configuration, location, and type of disk used for tests. We used a single server attached to a Fibre Channel array through a host bus adapter for the tests.

We created a logical drive on a single physical disk. We installed ESX Server on this drive. On the same drive, we also created a VMFS partition and used it to store the virtual disk on which we installed the Windows Server 2003 guest operating system.

The test disk was located on a metaLUN on the CLARiiON CX3-40, which was created as follows: We created two RAID 0 groups on the CLARiiON CX3-40, one containing 15 Fibre Channel disks and the other containing 10 Fibre Channel disks. We configured a 10GB LUN on each RAID group. We then created a 20GB metaLUN using the two 10GB LUNs in a striped configuration (see Resources on page 11 for a link to the EMC Navisphere Manager Administrator's Guide for more information). We used a metaLUN for the test disk because it allowed us to use more than 16 spindles to stripe a LUN in a RAID 0 configuration. We used this test disk only for the I/O stress test. We created a virtual disk on the test disk and attached that virtual disk to the Windows virtual machine. To the guest operating system, the virtual disk appears as a physical drive.

Figure 1 shows the disk configuration used in our tests. In the VMFS tests, we implemented the virtual disk (seen as a physical disk by the guest operating system) as a .vmdk file stored on a VMFS partition created on the test disk. In the RDM tests, we created an RDM file on the VMFS volume (the volume that held the virtual machine configuration files and the virtual disk where we installed the guest operating system) and mapped the RDM file to the test disk. We configured the test virtual disk so it was connected to an LSI SCSI host bus adapter.
Figure 1. Disk layout for VMFS and RDM tests
[Diagram: In the VMFS test, the virtual machine's .vmdk files, including the test disk, reside on VMFS volumes. In the RDM test, an RDM mapping file on a VMFS volume points to the raw test disk.]
From the perspective of the guest operating system, the test disks were raw disks with no partition or file system (such as NTFS) created on them. Iometer can read and write raw, unformatted disks directly. We used this capability so we could compare the performance of the underlying storage implementation without involving any operating system file system.
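The striped metaLUN described above distributes consecutive stripe elements round-robin across its component LUNs. The sketch below illustrates the addressing idea only; the 64 KB stripe element size is an assumed value for illustration, not a configuration reported in this study.

```python
# Illustrative sketch (not EMC's implementation) of striped metaLUN
# addressing: map a logical byte offset onto a component LUN.
# The 64 KB stripe element size is a hypothetical example value.
STRIPE_ELEMENT = 64 * 1024
NUM_LUNS = 2  # the two 10GB LUNs that form the 20GB metaLUN

def stripe_target(offset):
    """Return (lun_index, offset_within_lun) for a logical byte offset."""
    element = offset // STRIPE_ELEMENT    # which stripe element overall
    lun = element % NUM_LUNS              # round-robin across the LUNs
    row = element // NUM_LUNS             # stripe row on that LUN
    return lun, row * STRIPE_ELEMENT + offset % STRIPE_ELEMENT
```

Because consecutive elements land on different LUNs, a large sequential transfer engages the spindles of both RAID groups at once, which is why the metaLUN lets more than 16 spindles serve a single LUN.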
Software Configuration
We configured the guest operating system to use the LSI Logic SCSI driver. On VMFS volumes, we created virtual disks with the thick option. This option provides the best-performing disk allocation scheme for a virtual disk. All the space allocated during disk creation is available for the guest operating system immediately after the creation. Any old data that might be present on the allocated space is not zeroed out during virtual machine write operations.
Unless stated otherwise, we left all ESX Server and guest operating system parameters at their default settings. In each test case, we zeroed the virtual disks before starting the experiment using the command-line program vmkfstools (with the -w option).
NOTE: ESX Server 3.5 offers four options for creating virtual disks: zeroedthick, eagerzeroedthick, thick, and thin. When a virtual disk is created using the VI Client, the zeroedthick option is used by default. Virtual disks with eagerzeroedthick, thick, or thin formats can be created only with vmkfstools, a command-line program. The zeroedthick and thin formats have characteristics similar to the thick format after the initial write operation to the disk. In our tests, we used the thick option to prevent warm-up anomalies. For details on the supported virtual disk formats, refer to Chapter 5 of the VMware Infrastructure 3 Server Configuration Guide. For details on using vmkfstools, see the appendixes of the VMware Infrastructure 3 Server Configuration Guide.
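The disk creation and zeroing steps above can be sketched with command lines like the following; the datastore path and disk name are hypothetical placeholders, not the paths used in the study:

```shell
# Create a 20GB thick-format virtual disk, then write zeros over it before
# a test run (the -w option). Datastore and file names are examples only.
vmkfstools -c 20g -d thick /vmfs/volumes/storage1/testvm/testdisk.vmdk
vmkfstools -w /vmfs/volumes/storage1/testvm/testdisk.vmdk
```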
Test Cases
In this study, we characterize the performance of VMFS and RDM for a range of data transfer sizes across various access patterns. The data transfer sizes we selected were 4KB, 8KB, 16KB, 32KB, and 64KB. The access patterns we chose were random reads, random writes, sequential reads, sequential writes, and a mix of random reads and writes. The test cases are summarized in Table 1.
Table 1. Test cases
                      100% Sequential               100% Random
100% Read             4KB, 8KB, 16KB, 32KB, 64KB    4KB, 8KB, 16KB, 32KB, 64KB
100% Write            4KB, 8KB, 16KB, 32KB, 64KB    4KB, 8KB, 16KB, 32KB, 64KB
50% Read/50% Write    not tested                    4KB, 8KB, 16KB, 32KB, 64KB
Load Generation
We used the Iometer benchmarking tool, originally developed at Intel and widely used in I/O subsystem performance testing, to generate I/O load and measure the I/O performance. For a link to more information, see Resources on page 11. Iometer provides options to create and execute a well-designed set of I/O workloads. Because we designed our tests to characterize the relative performance of virtual disks on raw devices and VMFS, we used only basic load emulation features in the tests.
Iometer configuration options used as variables in the tests:
Transfer request sizes: 4KB, 8KB, 16KB, 32KB, and 64KB.
Percent random or sequential distribution: for each transfer request size, we selected 0 percent random access (equivalent to 100 percent sequential access) and 100 percent random access.
Percent read or write distribution: for each transfer request size, we selected 0 percent read access (equivalent to 100 percent write access), 100 percent read access, and 50 percent read access/50 percent write access (only for random access, which is referred to as the random mixed workload in the rest of the paper).
Iometer parameters constant for all test cases:
Number of outstanding I/O operations: 64
Run time: 5 minutes
Ramp-up time: 60 seconds
Number of workers to spawn automatically: 1
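The access specifications above can be sketched as a small generator; this is an illustration of how the random/sequential and read/write percentages combine, not Iometer's actual implementation, and the disk size and seed are arbitrary:

```python
# Minimal sketch (not Iometer itself) of an access specification:
# each request picks an offset either sequentially or at random, and an
# operation (read or write) according to the read percentage.
import random

def make_workload(disk_size, block_size, pct_random, pct_read, count, seed=42):
    rng = random.Random(seed)
    offset = 0
    requests = []
    for _ in range(count):
        if rng.uniform(0, 100) < pct_random:
            # Random access: any block-aligned offset on the disk.
            offset = rng.randrange(0, disk_size // block_size) * block_size
        # Otherwise sequential: continue from the previous offset.
        op = "read" if rng.uniform(0, 100) < pct_read else "write"
        requests.append((offset, op))
        offset = (offset + block_size) % disk_size
    return requests
```

For example, `make_workload(disk, 4096, 0, 100, n)` emulates the 100 percent sequential, 100 percent read case, while `pct_random=100, pct_read=50` emulates the random mixed workload.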
Performance Results
This section presents data and analysis of storage subsystem performance in a uniprocessor virtual machine.
Metrics
The metrics we used to compare the performance of VMFS and RDM are I/O rate (measured as number of I/O operations per second), throughput rate (measured as MB per second), and CPU cost (measured in MHz per I/O operation per second, or MHz/IOps).

In this study, we report the I/O rate and throughput rate as measured by Iometer. We use a cost metric measured in MHz/IOps to compare the efficiencies of VMFS and RDM. This metric is defined as the CPU cost (in processor cycles) per unit of I/O and is calculated from the following measurements:
Iometer collected I/O operations per second and throughput in MBps.
esxtop collected the average CPU utilization of physical CPUs.

For links to additional information on how to collect I/O statistics using Iometer and how to collect CPU statistics using esxtop, see Resources on page 11.
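The cost metric above can be sketched as follows; the utilization, clock speed, and IOps numbers in the example are placeholder values for illustration, not measurements from this study:

```python
# Sketch of the MHz/IOps cost metric defined above: total CPU megahertz
# consumed across all cores, divided by the measured I/O rate.
def cost_mhz_per_iops(avg_cpu_util_pct, cpu_mhz, num_cores, iops):
    """CPU cost per unit of I/O operations per second, in MHz/IOps."""
    used_mhz = (avg_cpu_util_pct / 100.0) * cpu_mhz * num_cores
    return used_mhz / iops
```

For instance, a 3.00GHz four-core server at 50 percent average utilization driving 6,000 IOps would cost 6000 MHz / 6000 IOps = 1.0 MHz/IOps.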
Performance
This section compares the performance characteristics of each type of disk access management. The metrics used are I/O rate, throughput, and CPU cost.
Random Workload
In our tests for random workloads, VMFS and RDM produced similar I/O performance, as is evident from Figure 2, Figure 3, and Figure 4.
Figure 2. Random mixed (50 percent read access/50 percent write access) I/O operations per second (higher
is better)
[Chart data for Figures 2 through 4: I/O operations per second for the random workloads, comparing VMFS, RDM (virtual), and RDM (physical) across data sizes of 4KB to 64KB.]
Sequential Workload
For 4K sequential read, we changed the cache page size on the CLARiiON CX3-40 to 4K. We used the default size of 8K for all other workloads.

In ESX Server 3.5, for sequential workloads, the performance of VMFS is very close to that of RDM for all I/O block sizes except 4K sequential read. Most applications with a sequential read I/O pattern use a block size greater than 4K. Both VMFS and RDM provide similar performance in those cases, as shown in Figure 5 and Figure 6. Both VMFS and RDM deliver very high throughput (in excess of 300 megabytes per second, depending on the I/O block size).
Figure 5. Sequential read I/O operations per second (higher is better)
[Chart data for Figures 5 and 6: sequential read and sequential write I/O operations per second, comparing VMFS, RDM (virtual), and RDM (physical) across the tested data sizes.]
Table 2 and Table 3 show the throughput rates in megabytes per second corresponding to the above I/O operations per second for VMFS and RDM. The throughput rates (I/O operations per second * data size) are consistent with the I/O rates shown above and display behavior similar to that explained for the I/O rates.
Table 2. Throughput rates for random workloads in megabytes per second
Data Size   Random Mix                     Random Read                    Random Write
(KB)        VMFS    RDM (V)  RDM (P)      VMFS    RDM (V)  RDM (P)      VMFS    RDM (V)  RDM (P)
4           30.96   30.98    31.15        23.41   23.27    23.27        46.38   46.1     45.87
8           58.8    58.68    58.68        44.67   44.43    44.35        90.24   89.22    88.74
16          105.22  106.03   105.09       80.14   80.72    80.58        161.2   158.25   159.42
32          173.72  176.1    176.69       135.91  138.53   138.11       241.4   241.12   242.05
64          256.14  262.92   263.98       215.1   215.49   214.59       309.6   311.13   310.15
Table 3. Throughput rates for sequential workloads in megabytes per second

Data Size   Sequential Read                Sequential Write
(KB)        VMFS    RDM (V)  RDM (P)      VMFS    RDM (V)  RDM (P)
4           137.21  142.13   153          93.76   93.8     93.36
8           272.61  276.35   284.41       188.56  188.45   187.84
16          341.08  342.41   338.75       280.95  281.17   278.91
32          363.86  365.26   364.23       352.17  354.86   352.89
64          377.35  377.6    377.09       384.02  385.36   386.81
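The consistency relation stated above (throughput equals I/O operations per second multiplied by data size) can be sketched directly; using the VMFS random-mix entry from Table 2 (30.96 MB/s at a 4KB block), the implied I/O rate works out to about 7,926 operations per second:

```python
# Sketch of the relation throughput (MB/s) = IOps * data size, using
# 1 KB = 1024 bytes. The numeric check below uses the VMFS random-mix
# value from Table 2 (30.96 MB/s at a 4KB block size).
def throughput_mb_per_s(iops, block_kb):
    return iops * block_kb / 1024.0

def implied_iops(mb_per_s, block_kb):
    return mb_per_s * 1024.0 / block_kb
```

This is how the throughput tables can be cross-checked against the I/O rate charts: the two presentations encode the same measurements.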
CPU Cost
CPU cost can be computed in terms of CPU cycles required per unit of I/O or unit of throughput (byte). We obtained a figure for CPU cycles used by the virtual machine for managing the workload, including the virtualization overhead, by multiplying the average CPU utilization of all the processors seen by ESX Server, the CPU rating in MHz, and the total number of cores in the system (four in our test server). In this study, we measured CPU cost as CPU cycles per unit of I/O operations per second.

Normalized CPU cost for various workloads is shown in figures 7 through 11. We used the CPU cost for RDM (physical mapping) as the baseline for each workload and plotted the CPU costs of VMFS and RDM (virtual mapping) as a fraction of the baseline value. For random workloads, the CPU cost of VMFS is on average 5 percent more than the CPU cost of RDM. For sequential workloads, the CPU cost of VMFS is 8 percent more than the CPU cost of RDM.
As with any file system, VMFS maintains data structures that map file names to physical blocks on the disk. Each file I/O requires accessing the metadata to resolve file names to actual data blocks before reading data from or writing data to a file. The address resolution requires a few extra CPU cycles every time there is an I/O access. In addition, maintaining the metadata also requires additional CPU cycles. RDM does not require any underlying file system to manage its data. Data is accessed directly from the disk, without any file system overhead, resulting in lower CPU cycle consumption and a better CPU cost for RDM access.
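The normalization used in figures 7 through 11 can be sketched as follows; the sample costs in the check are placeholder values, not measured results:

```python
# Sketch of the normalization described above: each workload's CPU cost is
# divided by the RDM (physical mapping) cost for that workload, so the
# baseline always plots at 1.0 and the others plot as fractions of it.
def normalize_costs(costs, baseline_key="rdm_physical"):
    base = costs[baseline_key]
    return {name: cost / base for name, cost in costs.items()}
```

Normalizing this way makes the relative overhead visible even though the absolute MHz/IOps values differ widely across block sizes and access patterns.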
[Charts for Figures 7 through 9: normalized CPU cost (MHz/IOps) for the random workloads, with VMFS and RDM (virtual) plotted relative to the RDM (physical) baseline across the tested data sizes.]
Figure 10. Normalized CPU cost for sequential read (lower is better)
[Chart: normalized CPU cost (MHz/IOps) for VMFS, RDM (virtual), and RDM (physical) across the tested data sizes.]
Figure 11. Normalized CPU cost for sequential write (lower is better)
[Chart: normalized CPU cost (MHz/IOps) for VMFS, RDM (virtual), and RDM (physical) across the tested data sizes.]
Conclusion
VMware ESX Server offers two options for disk access management: VMFS and RDM. Both options provide clustered file system features such as user-friendly persistent names, distributed file locking, and file permissions. Both VMFS and RDM allow you to migrate a virtual machine using VMotion. This study compares the performance characteristics of both options and finds only minor differences in performance. For random workloads, VMFS and RDM produce similar I/O throughput. For sequential workloads with small I/O block sizes, RDM provides a small increase in throughput compared to VMFS. However, the performance gap decreases as the I/O block size increases. For all workloads, RDM has slightly better CPU cost.
The test results described in this study show that VMFS and RDM provide similar I/O throughput for most of the workloads we tested. The small differences in I/O performance we observed were with the virtual machine running CPU-saturated. The differences seen in these studies would therefore be minimized in real-life workloads because most applications do not usually drive virtual machines to their full capacity. Most enterprise applications can, therefore, use either VMFS or RDM for configuring virtual disks when run in a virtual machine.

However, there are a few cases that require the use of raw disks. Backup applications that use such inherent SAN features as snapshots, or clustering applications (for both data and quorum disks), require raw disks. RDM is recommended for these cases, not for performance reasons but because these applications require lower-level disk control.
Configuration
This section describes the hardware and software configurations we used in the tests described in this study.
Server Hardware
Server: Dell PowerEdge 2950
Processors: 2 dual-core Intel Xeon 5160 processors, 3.00GHz, 4MB L2 cache (4 cores total)
Memory: 8GB
Local disks: 1 Seagate 146GB 10K RPM SAS (for ESX Server and the guest operating system)
Storage Hardware
Storage: CLARiiON CX3-40 (4Gbps)
Memory: 4GB per storage processor
Fibre Channel disks: 25 Seagate 146GB 15K RPM in RAID 0 configuration (the first five disks, containing the Flare OS, were not used)
HBA: QLA2460 (4Gbps)
Software
Virtualization software: ESX Server 3.5 (build 64607)
Operating system: Windows Server 2003 R2 Enterprise Edition 32-bit, Service Pack 2, 512MB of RAM, 1 CPU
Test disk: 20GB unformatted disk
Storage Configuration
Read cache: 1GB per storage processor
Write cache: 1.9GB
RAID 0 group 1: 10 disks (10GB LUN)
RAID 0 group 2: 15 disks (10GB LUN)
MetaLUN: 20GB
Iometer Configuration
Number of outstanding I/Os: 64
Ramp-up time: 60 seconds
Run time: 5 minutes
Number of workers (threads): 1
Access patterns: random/mix, random/read, random/write, sequential/read, sequential/write
Transfer request sizes: 4KB, 8KB, 16KB, 32KB, 64KB
Resources
Performance Characteristics of VMFS and RDM: VMware ESX Server 3.0.1
http://www.vmware.com/resources/techresources/1019
To obtain Iometer, go to
http://www.iometer.org/
For more information on how to gather I/O statistics using Iometer, see the Iometer user's guide at
http://www.iometer.org/doc/documents.html
To learn more about how to collect CPU statistics using esxtop, see "Using the esxtop Utility" in the VMware Infrastructure 3 Resource Management Guide at
http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_resource_mgmt.pdf
For a detailed description of VMFS and RDM and how to configure them, see chapters 5 and 8 of the VMware Infrastructure 3 Server Configuration Guide at
http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_3_server_config.pdf
EMC CLARiiON Best Practices for Fibre Channel Storage
http://powerlink.emc.com
EMC Navisphere Manager Administrator's Guide
http://powerlink.emc.com
Appendix: Effect of Cache Page Size on Sequential Read I/O Patterns with I/O Block Size Less than Cache Page Size

[Figure 12 chart data: 4K sequential read I/O operations per second for VMFS, RDM (virtual), and RDM (physical) with 4K and 8K cache page sizes.]
Figure 13 shows the normalized CPU costs per I/O operation for VMFS and RDM with 4K and 8K cache page sizes. We used the CPU cost of RDM (physical mapping) with the 4K cache page size as the baseline value and plotted the CPU costs of VMFS and RDM (both physical and virtual mapping) with both 4K and 8K cache page sizes as a fraction of the baseline value. The CPU cost improved by an average value of 70 percent with the 4K cache page size for both VMFS and RDM.
Figure 13. Normalized CPU cost for 4K sequential read with different cache page size (lower is better)
[Chart: normalized efficiency (MHz/IOps) for VMFS, RDM (virtual), and RDM (physical) with 4K and 8K cache page sizes, relative to the RDM (physical) 4K cache page size baseline.]