
When you try to Power On a virtual machine, you will receive the below error:

'Failed to start the virtual machine.
Module DevicePowerOn power on failed.
Unable to create virtual SCSI device for scsi'

Cause
You might have tried to Power On a VM whose vmdk file is not compatible with ESXi. Such a VMDK file is compatible only with hosted products like VMware Workstation or VMware Server, and not with ESXi.
Resolution
Make the vmdk file compatible with ESXi.
Steps:
- Remove the vmdk file from the VM. (Only from the VM, do not delete it from the datastore!)
- Connect to the ESXi box using PuTTY
- Execute the below command to convert the existing vmdk file to a new, ESXi-compatible vmdk (see the example after these steps):
vmkfstools -i existing_vmdk_location new_vmdk_location

- Point the VM to the newly converted vmdk file and Power On. That's it.
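For example, assuming the disk lives under /vmfs/volumes/datastore1/MyVM (the datastore and file names here are hypothetical), the conversion would look like:

vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk /vmfs/volumes/datastore1/MyVM/MyVM-esxi.vmdk   # -i clones the source disk; the original is left untouched

Once the VM boots from the new disk, the old vmdk can be removed.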

Focus: How can we optimize the performance of a virtual machine?


Below are a few points you can consider during your virtual machine deployment:

Ensure that the proper guest OS is selected for the VM
o Selecting the correct OS during VM provisioning helps in installing the correct drivers for the VM
o Incorrect selection of the guest OS could result in VM boot failure

Use the latest hardware version supported by your vSphere version
o Upgrading vSphere alone to a new version will not give you all the new features. For that, the VM should be upgraded to the latest hardware version.

Install the latest VMware Tools
o VMware Tools helps the VM with memory overcommitment; it uses ballooning for this feature.
o VMware Tools helps the VM sync time with the ESXi server
o VMware Tools enhances the user experience for the VM

Assign only the required number of vCPUs to a VM
o CPU overcommitment can cause high ready time
o CPU overcommitment is more critical than memory overcommitment

Set cores and sockets cautiously
o Try to make the core count a divisor of the physical core count
o I've often seen VMs with 9 or 7 cores on 8- or 16-core physical machines, but this will not help performance. VMware suggests choosing sockets and cores in a more intelligent manner. For example, if you want a VM with 8 vCPUs, you can consider using 2 sockets and 4 cores.

Do not use CPU affinity unless it is absolutely required
o CPU affinity can result in unbalanced load and scheduler complexities

Consider putting swap files on a local disk or an SSD cache
o This will help in enhancing the performance of the VM

Choose memory intelligently
o Ensure enough memory for the VM's workload
o Check the number of deployed and powered-on VMs

Use the correct virtual SCSI adapter
o An incorrect SCSI controller will cause the guest OS to fail to boot

Always prefer vmxnet3 network adapters
o vmxnet3 is the best-suited adapter for VMs. It supports jumbo frames.
o Next preference goes to 'Enhanced' (vmxnet2) and finally 'Flexible'

Enable jumbo frames to improve network performance (see the example command below)
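As a concrete example of the last point, jumbo frames can be enabled on a standard vSwitch from the ESXi shell. A minimal sketch; vSwitch0 is an assumed switch name, and the guest OS and the physical switches must be configured for the larger MTU as well:

esxcli network vswitch standard set -v vSwitch0 -m 9000   # vSwitch0 is an assumed vSwitch name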

If you take care of the above points, I can assure you that your VMs will get extra wings
to fly!!!
Symptoms
VMs will be shown as inaccessible in the vCenter

Reason
A VM can become inaccessible due to any of the below reasons:

Issue with the ESXi servers

Issue with the vCenter

Issue with the datastore

Resolution
In all three cases, try the below three troubleshooting steps.
The first step is to restart the management agents on the ESXi host.
Log in to the ESXi host using SSH

Run any of the below commands to restart the management agents

/etc/init.d/hostd restart
/etc/init.d/vpxa restart
or
services.sh restart
If this step did not resolve the issue for you, try the second step
The second step is to remove the VM from the inventory and add it back using the vmx file

Right click on the affected VM

Choose the option 'Remove from the Inventory' (Be cautious about this action...Do not
delete the VM)

After this step, browse to the vmx file location of the VM in the datastore

Right click on the vmx file and choose 'Add to Inventory'

This step will most likely resolve your issue. But it works well only when we know the vmx location of the VM. If you are not sure about the vmx location, you may end up adding the wrong VM.
Keep in mind that you cannot browse to the vmdk location to find the vmx path while the VM is inaccessible.

In these kinds of situations, the best method is to use the command line, the third step!!!

Log in to the ESXi host holding the inaccessible VM using SSH

Run the below command to list the vmids of the VMs on the host

vim-cmd vmsvc/getallvms

You will receive a message "Skipping invalid VM '144'" along with the details of the valid VMs.

The skipped VM is the invalid one. The value '144' represents the vmid of the VM.

Now run the below command to reload the invalid VM

vim-cmd vmsvc/reload <vmid>

where <vmid> is the vmid of the invalid VM
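For instance, with the invalid VM id '144' reported above, the reload would be:

vim-cmd vmsvc/reload 144

After the reload, the VM should become accessible again in the inventory.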

ESXi is a hypervisor which is intended to work on a server platform. But that doesn't mean that we cannot install the hypervisor on a workstation. Here, I'm trying to install VMware ESXi 5.5 on a Dell Optiplex 990 machine.
I will not be discussing the installation steps as I believe they are pretty straightforward and most of you are experts in it. I will just focus on the tips that would help us install ESXi on a workstation.
I booted my Dell Optiplex 990 using the VMware ESXi 5.5 bootable disk and everything was working fine until I was stopped by the below error message:
'No network adapters were detected. Either no network adapters are physically connected to the
system.............................
Ensure that there is at least one network adapter........................'

This is an issue with the network card in the Optiplex 990 machine. The driver for the machine's network card is not available on the VMware ESXi CD.
How to resolve it?
Just add the network card driver in VIB format and integrate it with the ESXi ISO. :)
I've downloaded the network driver for the Optiplex 990 machine in VIB format from this site.
Use ESXi-Customizer to integrate the downloaded VIB with the ISO.
Once this is done, burn the new ISO and try to install ESXi again!!!
Issue

When you Power On a VM, you will be greeted with the below error message:
'Failed to start the virtual machine.
Module DevicePowerOn power on failed.
Unable to create virtual SCSI device for scsi0:0, '/vmfs/volumes/...........vmdk'
Failed to open disk scsi0:0: Unsupported or invalid disk type 7. Ensure that the disk has been imported.'

Cause
This will occur when you
import a vmdk from one VMware product (eg: Workstation) and try to Power On the VM
in another VMware product (eg: VMware ESXi).
Resolution
Use VMware Converter to convert this VM to a compatible version.
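Alternatively, as described at the top of this page, you can convert just the disk from the ESXi shell with vmkfstools (the paths below are hypothetical) and then point the VM at the converted vmdk:

vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk /vmfs/volumes/datastore1/MyVM/MyVM-esxi.vmdk   # hypothetical paths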
Issue
You will find the software iSCSI adapter missing in ESXi
Resolution

1. Log in to the vSphere Client
2. Select the Configuration tab
3. Select Storage Adapters
4. Click on the Add option
5. You will be given the option to add a Software iSCSI adapter
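If you prefer the command line, the software iSCSI adapter can also be enabled from an SSH session on ESXi 5.x; a quick sketch, with the second command confirming the result:

esxcli iscsi software set --enabled=true   # enable the software iSCSI initiator
esxcli iscsi software get                  # confirm that software iSCSI is now enabled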

I don't know if this is a subject for a blog. But since the resolution of this issue
appeared to be so silly and simple, thought of sharing that with you.
Issue
A VM in ESXi appears to be unresponsive at times. We receive a ping response but are not able to connect via RDP or the console.
The ultimate method of resolving this issue was a VM reset, until I found the root cause.
Resolution
After many hours of troubleshooting, the issue got resolved.
The CD/DVD drive was connected to the host device. Ever since I changed the CD/DVD drive to the client device, the issue has not recurred.
The resolution appears to be silly, but it worked for me. This is only one of many reasons that could impact VM performance.
Explanation
When the CD/DVD drive of a VM is connected to the host device and that device is unavailable, VMware will block the actions of the VM while waiting for a response from the host CD/DVD device.
Symptom:
The host and its VMs in a vCenter Server appear to be disconnected.
Issue:
The host and its corresponding VMs could appear as disconnected due to an issue with any of the following services:

vpxa - a service which runs on the ESXi host and communicates with the vCenter Server

hostd - the core service which runs on the ESXi host

vpxd - a service which runs on the vCenter Server and communicates with vpxa

Resolution:
1. Try to ping the ESXi host. This ensures that the host is reachable.
2. Ensure you can log in to the VMs remotely. This ensures that the VMs on the host are working fine.
3. Try to log in to the ESXi host using the vSphere Client. This ensures that the hostd service is running perfectly. If this step fails, restart the hostd service using PuTTY.
4. If the above three steps worked fine for you, the issue would be with the vpxa or vpxd service. In that case, first try to restart the vpxa service on the host and, if that did not resolve the issue, try restarting the vCenter Server service on the vCenter (see the note below).
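The vpxa and hostd restarts use the same commands shown earlier (/etc/init.d/vpxa restart, /etc/init.d/hostd restart). For vpxd the method depends on the vCenter platform: on a Windows vCenter, restart the 'VMware VirtualCenter Server' service from the Services console; on the vCenter Server Appliance, a restart from the appliance shell would look like this (assuming a 5.x appliance):

service vmware-vpxd restart   # run on the vCenter Server Appliance, not on the ESXi host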

Issue
When VMware vSphere Update Manager 5 tries to remediate a host, the remediation
fails at 25% with the below error:
fault.com.vmware.vcIntegrity.VcIntegrityFault.summary
Root Cause
You may have VMs with RDM disks. Shut down these VMs to remediate successfully.

You may have DRS rules configured for that host. Disable the rules for successful remediation.

You may have VMs with shared-storage clustering. A SCSI controller in bus-sharing mode will not allow vMotion or Storage vMotion operations. Shut down these VMs to remediate successfully.

You may have VMs with a CD-ROM or other removable devices attached. Detach the removable devices from the VMs and remediate.

Issue

We faced an issue this morning with one of our VMs hosted in VMware. The VM-related options were greyed out.

Root cause

There was a snapshot task running in the background (not visible from vCenter), which prevented any administration task on the VM. This task was stuck at 0%. The activity could not be cancelled from vCenter or from the console as it was initiated by a system user called vpxuser.

Workaround

Login to the SSH console of the ESXi host holding the VM using PuTTY.

Identify the vmid of the affected VM (in our case the vmid was 391) using the command:
vim-cmd vmsvc/getallvms

Check the tasks running in the background for this particular VM using the command:
vim-cmd vmsvc/get.tasklist 391

See if you can cancel the task using the command:
vim-cmd vimsvc/task_cancel <taskname>
[The task name will be something like haTask-391-vim.VirtualMachine.createSnapshot-1234567.] In our case this did not work as the task was initiated by a system user. But in scenarios where a snapshot or any other VM-related task initiated by a user hangs, this command would help.

Identify the PID assigned to the VM using the command:
ps | grep vmx | grep <VMname>

Kill the process using the command:
kill <pid>

The parent process will be killed and the VM will now be in a powered-off state. After powering the VM back on, the options will be available (see the esxcli alternative below).
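On ESXi 5.x there is also an esxcli route for finding and killing the VM's running world; a sketch, starting with the soft kill type and escalating only if necessary:

esxcli vm process list                                      # note the World ID of the affected VM
esxcli vm process kill --type=soft --world-id=<world-id>    # try soft first; 'hard' and 'force' exist as last resorts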

Issue
While performing a vMotion, the operation fails at 14% with the below error:
vMotion migration [-1062731490:1419235061251156] failed to create a connection
with remote host <Destination vMotion IP>: The ESX hosts failed to connect over the
VMotion network
Migration [-1062731490:1419235061251156] failed to connect to remote host
<Destination vMotion IP> from host <source IP>: Network unreachable
The vMotion failed because the destination host did not receive data from the source
host on the vMotion network. Please check your vMotion network settings and physical
network configuration and ensure they are correct.

Resolution
I've already penned a post on VMware vMotion failure at 14%. This blog is an extended version of that post. If none of the steps mentioned in my previous post helped you, then you are on the right page.
Check whether vMotion is enabled on multiple VMkernel NICs on the ESXi host. If yes, make only one NIC available for vMotion, and verify connectivity as shown below.
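Basic connectivity over the vMotion network can be verified from the ESXi shell by pinging the destination host's vMotion IP through the vMotion VMkernel interface. A sketch; vmk1 is an assumed interface name, and the -I option needs a reasonably recent ESXi build:

vmkping -I vmk1 <Destination vMotion IP>   # vmk1 is an assumed vMotion VMkernel interface

If the ping fails, recheck the VMkernel port settings and the physical network path between the hosts.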
Symptom:
The eth0 interface will not be present on a CentOS VM after cloning. Only the loopback networking interface will be available. If you try to bring the interface up manually (using the command ifup eth0), you will receive the below error:
Device eth0 does not seem to be present, delaying initialisation
Root Cause:
When you clone a CentOS VM from a template, a new NIC is created for the cloned VM. In other words, a new MAC address is generated for the NIC of the cloned machine. This change happens only from the VMware perspective; no modification is made inside CentOS. Therefore the kernel will still be searching for the NIC with the old MAC address, and hence networking fails.
Resolution:
1. Update the existing Ethernet configuration file to reflect the new MAC address (see the example line below).
Check the new MAC address using the vSphere client and modify the ifcfg-eth0 interface configuration using the command:
vi /etc/sysconfig/networking/devices/ifcfg-eth0
Replace the HWADDR value with the new MAC address
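For illustration, the line to update would look like the below. The MAC here is a placeholder (VMware-assigned MACs typically start with the 00:50:56 OUI), so use the address shown in the vSphere client:

# placeholder MAC; replace with the address shown in the vSphere client
HWADDR=00:50:56:AA:BB:CC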
2. Remove the kernel's networking interface rules file
rm -f /etc/udev/rules.d/70-persistent-net.rules
3. Reboot the VM
Steps for cloning a VM using SSH:

SSH to the ESXi host

Identify the path of the source VM

(say, /vmfs/volumes/datastore1/SourceVM/, where SourceVM is the name of the source VM)

Create a new folder in the desired datastore

mkdir /vmfs/volumes/datastore1/DestinationVM

where DestinationVM is the name of the new VM

Clone the SourceVM vmdk to the newly created folder DestinationVM

vmkfstools -i /vmfs/volumes/datastore1/SourceVM/SourceVM.vmdk /vmfs/volumes/datastore1/DestinationVM/DestinationVM.vmdk
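vmkfstools also accepts a disk format option here if you want the clone thin provisioned (same hypothetical paths as above):

vmkfstools -i /vmfs/volumes/datastore1/SourceVM/SourceVM.vmdk -d thin /vmfs/volumes/datastore1/DestinationVM/DestinationVM.vmdk   # -d thin makes the destination thin provisioned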

Once cloning is completed, proceed with the creation of the new VM using the vSphere client

In the option where you need to provision the hard disk for the new VM, choose 'Use an existing virtual disk'

Browse to and select the newly cloned vmdk file


Scenario
When we try to reduce or shrink the size of an existing vmdk file, the operation fails.
Resolution
There is no option to reduce the size using the vSphere client. For this you may need to use PuTTY or the CLI. Please remember to delete unwanted data from the OS first and to shrink the partition inside the guest using the diskmgmt.msc tool. After shrinking, perform the below steps:

Login to the ESXi using PuTTY

Browse to the vmdk location (eg: cd /vmfs/volumes/datastore1/VMname)

Take a backup of the existing vmname.vmdk and vmname-flat.vmdk files using the cp command in Linux (cp filename backup_filename)

Open the vmdk descriptor file using the vi editor

vi vmname.vmdk

Modify the value corresponding to RW to the required disk size. The RW value is a sector count (512-byte sectors), so if you need to shrink the file to x GB, use the value x*1024*1024*2. For eg: if you want to shrink the disk to 25GB, give the value 25*1024*1024*2 = 52428800 (see the example below)
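For illustration, assuming the disk was originally a (hypothetical) 50GB, the extent line in vmname.vmdk would change from:

RW 104857600 VMFS "vmname-flat.vmdk"

to the 25GB value computed above:

RW 52428800 VMFS "vmname-flat.vmdk"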

Once finished, save the file and use the vmkfstools command to clone a disk using the new settings:

vmkfstools -i vmname.vmdk vmname-new.vmdk

Remove the old vmdk files:

rm vmname.vmdk
rm vmname-flat.vmdk

Once removed, again use vmkfstools to clone the vmdk back to the same old name:

vmkfstools -i vmname-new.vmdk vmname.vmdk

Using the vSphere client, remove the hard disk from the virtual machine and add it again.
