VMware ESXi 5 feature comparison
The Nexus 1000V is developed in co-operation between Cisco and VMware and uses the API of the dvS.

Third-party management tools
Because VMware ESX is a leader in the server-virtualisation market, software and hardware vendors offer a range of tools to integrate their products or services with ESX.

The overhead of each virtual machine depends primarily on its number of vCPUs and the amount of its memory. RDM disks in virtual compatibility mode can also be of the same size.
You will receive license keys that must be entered after installing ESXi in order to use the product free of charge. This release lets you view virtual disk performance charts for virtual machine objects as well. This issue is resolved in this release. Uniformity in the hardware configuration of the cluster is also desirable in general: it makes Host Profiles easier to manage and problems easier to diagnose. For example, in Citrix's XenServer platform, virtual machine traffic is filtered at the virtual switch level using simple rules for a specific virtual interface, VM, or the entire cluster.
This list of issues pertains to this release of ESXi 5. VMware dropped development of ESX at version 4.
vSphere: The Efficient and Secure Platform for Your Hybrid Cloud. The capabilities of the "thick" vSphere Client are frozen at version 5.
Earlier Releases of ESXi 5. To view release notes for earlier releases of ESXi 5.

You can set this configuration for the duration of a single session by supplying a command-line switch. This configuration applies to the interface text and does not affect other locale-related settings such as date and time or numeric formatting. In addition, check this site for information about supported management and backup agents before installing ESXi or vCenter Server. The vSphere Web Client and the vSphere Client are packaged with the vCenter Server and modules ZIP file.

ESXi, vCenter Server, and VDDK Compatibility
Virtual Disk Development Kit (VDDK) 5. For more information about VDDK, see.

During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5. For CPU support, see the.

Guest Operating System Compatibility for ESXi
To determine which guest operating systems are compatible with vSphere 5. Hardware version 3 is no longer supported. To use hardware version 3 virtual machines on ESXi 5.

Installation Notes for This Release
Read the documentation for step-by-step guidance on installing and configuring ESXi and vCenter Server. After successful installation, you must perform some licensing, networking, and security configuration. For information about these configuration tasks, see the following guides in the vSphere documentation.

When you upgrade a 4. To upgrade or migrate such hosts successfully, you must use Image Builder to create a custom ESXi ISO image that includes the missing VIBs. To upgrade without including the third-party software, use the ForceMigrate option or select the option to remove third-party software modules during the remediation process in vSphere Update Manager. For information about how to use Image Builder to make a custom ISO, see the documentation. For information about upgrading with third-party customizations, see the and documentation. For information about upgrading with vSphere Update Manager, see the and documentation.

L3-routed NFS Storage Access
vSphere 5. If you are using a non-Cisco router, be sure to use Virtual Router Redundancy Protocol (VRRP) instead. See your router documentation for details. Contact your storage vendor for details. Do not use other storage protocols such as FCoE over the same physical network. Other environments such as WAN are not supported.

Upgrades for This Release
For instructions about how to upgrade vCenter Server and ESXi hosts, see the documentation.

Upgrading VMware Tools
VMware ESXi 5. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the for a list of issues resolved in this release of ESX related to VMware Tools. To determine an installed VMware Tools version, see KB 1003947.

See the instructions in the documentation, or for complete documentation about vSphere Update Manager, see the documentation. You can run the ESXi 5. This method is appropriate for upgrading a small number of hosts. Scripted upgrades also provide an efficient way to deploy multiple hosts. You can use a script to upgrade ESXi from a CD-ROM or DVD drive, or by PXE-booting the installer.

Supported Upgrade Paths for Upgrade to ESXi 5.
Instead, you must upgrade using update-from-esxi5. (A command-line sketch follows at the end of this section.)

VMware vSphere SDKs
VMware vSphere provides a set of SDKs for vSphere server and guest operating system environments. A collection of software development kits for the vSphere management programming environment. Includes support for new features available in ESXi 5.
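Applying an update-from-esxi5 offline bundle like the one mentioned above is typically done from the ESXi Shell with esxcli. The following is a minimal sketch, not taken from these release notes: the datastore path, bundle file name, and image profile name are placeholders for your own values, and the host should be in maintenance mode first.

# List the image profiles contained in the offline bundle (path is a placeholder)
esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi5.x.zip
# Apply one of the listed profiles (profile name is a placeholder), then reboot
esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi5.x.zip -p ESXi-5.x-standard
reboot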
For more information, see the Documentation. For more information, see. For more information, see. The VMware vSphere Guest SDK 4. The SDK for Perl 5. For more information, see the.

Open Source Components for VMware vSphere
The copyright statements and licenses applicable to the open source software components distributed in vSphere 5. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent generally available release of vSphere.

Patches Contained in this Release
This release contains all bulletins for ESXi that were released earlier to the release date of this product. See the VMware page for more information about the individual bulletins. Patch Release contains the following individual bulletins: Patch Release contains the following individual bulletins: Patch Release contains the following image profiles: Patch Release contains the following image profiles: For information on patch and update classification, see.

This issue is resolved in this release. This might cause sfcb-hhrc to stop and sfcbd to restart. As a result, the WS-Management GetInstance action issues a wsa:DestinationUnreachable fault on the ESXi server. This issue is resolved in this release.

Failed with reason: No space left on device. The vmkernel. LOCK for process python because the visorfs inode table is full. LOCK for process hostd because the visorfs inode table is full. LOCK for process python because the visorfs inode table is full. LOCK for process hostd because the visorfs inode table is full. This issue is resolved in this release.

Correcting the sensor type and the PerceivedSeverity return value solves this issue. This issue is resolved in this release. Any third-party tool might not be able to monitor the ESXi host's hardware status. This issue is resolved in this release. Error: Timeout or other socket error waiting for response from provider. This issue is resolved in this release. As a result, Netlogond might fail and the ESXi host might lose Active Directory functionality. This issue is resolved in this release. This issue is resolved in this release. This issue is observed on a vSphere Client connected to a vCenter Server. This issue is resolved in this release. This issue is resolved in this release. This issue is resolved in this release. Including a check for the configuration and core dump partition at the start of the hostd service solves this issue. This issue is resolved in this release.

As a result, for a remote syslog collector, all ESXi hosts appear to have the same host name (see the configuration sketch after this section). This issue is resolved in this release. When you re-enable the port group, the default gateway is still left blank. This issue is resolved in this release. The default gateway is updated and not left blank.

The ESXi hosts experience a divide-by-zero exception during the TSO split, which finally results in the failure of the host. This issue occurs when the bnx2x driver sends an LRO packet with a TCP Segmentation Offload (TSO) MSS value set to zero. This issue occurs when you shut down the guest operating system during the DVFilter cleanup process. This issue is resolved in this release. This issue is resolved in this release. This issue is resolved in this release. For an IOPS value of 1 or less, the GAVG value for NFS datastores might be as high as 40ms. This issue is resolved in this release. If the Linux kernel version is later than 2. This issue is resolved in this release. This issue is resolved in this release.
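As promised above, here is how the remote syslog collector target is configured per host. This is a hedged sketch with esxcli; the collector URL is a placeholder and your protocol and port may differ.

# Point the host at a remote syslog collector (placeholder address), then reload
esxcli system syslog config set --loghost='udp://syslog.example.com:514'
esxcli system syslog reload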
This issue is resolved in this release. This issue occurs when the default link state for the adapters is set as carrier ok mode, and as a result, the operstate is not updated. This release resolves the issue by setting the default link state for the adapters to no carrier mode. This might cause virtual machines to disconnect from the network after they are restarted or migrated using vMotion. The log files might display messages similar to the following: 2012-11-30T11:29:06. This issue is resolved in this release.

The Common Vulnerabilities and Exposures project cve. The Common Vulnerabilities and Exposures project cve. Exploitation of the issue may lead to a Denial of Service of the hostd-vmdb service. The Common Vulnerabilities and Exposures project cve.

This issue occurs due to a change in the IPMI sensor IDs used by IBM System x iDataPlex dx360 M3 servers. This issue is resolved in this release. This issue is resolved in this release. This issue occurs when a high volume of HTTP URL requests are sent to hostd and the hostd service fails (see the restart sketch after this section). This issue is resolved in this release. This issue is resolved for ESXi 5. This issue is resolved in this release. You can apply host profiles even if there is an invalid indication in the host profile. This issue is resolved in this release.

Because applyHostConfig is not a progressive task, the hostd service is unable to distinguish between a failed task and a long-running task during a hostd timeout. As a result, the hostd service reports that applyHostConfig has failed. This issue is resolved in this release by installing a 30-minute timeout as part of the HostProfileManager managed object. However, this issue might still occur when you attempt to apply a large host profile and the task exceeds the 30-minute timeout limit. To work around this issue, re-apply the host profile. This might have caused connections to the host to be dropped.

This issue occurs when a check is performed to ensure correct cache configuration. This issue is resolved in this release. This issue is resolved in this release. You are unable to view the domain to which you have joined the host in the drop-down menu for adding permissions to AD users and groups. This issue occurs because the lsassd service on the host stops. This happens because of a buffer overflow and data truncation in the hpsc handler. This issue is resolved in this release. Passing a rescan timeout parameter to the ESX command line before the boot process allows the user to configure the timeout value; this resolves the issue. This issue is resolved in this release.

Messages similar to the following are displayed in the log files: vmsyslog. The logging memory pool limit is increased to 48MB. This issue is resolved in this release. This issue occurs when this failure of memory allocation is not handled properly. This issue is resolved in this release. This issue is resolved in this release. This happens because a datastore refresh call is processed for every vdiskupdate message. Modifying the datastore refresh logic resolves this issue. This issue is resolved in this release. As a result, the ESXi host displays storage-related Unknown messages in syslog. In this release, the line buffer limit is increased to resolve this issue. This issue is resolved in this release.

This issue occurs when memory pages for the file system buffer cache fall in the first 2MB region of memory. As a result, migration rates become slow for ESXi. This issue is resolved in this release. This issue is resolved in this release.
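When hostd fails as described above, a common recovery step is restarting the management agents from the ESXi Shell. This is a general troubleshooting sketch rather than a fix documented in these notes; running it briefly disconnects the host from vCenter Server.

# Restart the host agent and the vCenter agent on the ESXi host
/etc/init.d/hostd restart
/etc/init.d/vpxa restart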
Metadata corruption of LUNs will now result in an error message. The updated SATP claim rule uses the reset option to clear the reservation from the LUN and allows other users to set the reservation option. This issue is resolved in this release. This issue is resolved in this release by correctly pointing to Host Bus Busy (H:0x2) status messages for issues in the device drivers, similar to the following: 2013-04-04T13:16:27.

When you create a thick virtual machine disk file (VMDK) with a size of 2TB, the datastore browser incorrectly reports the VMDK disk size as 0. This issue is resolved in this release. This issue occurs when the vpxa process exceeds the limit of memory allocation during cold migration. As a result, the ESXi host loses the connection to the vCenter Server and the migration fails. This issue is resolved in this release. This issue is resolved in this release. This issue is resolved in this release.

This issue occurs when two failover cluster virtual machines are placed on two different ESXi hosts and the storage array is running in an ALUA configuration. This issue is resolved in this release. An error message similar to the following is displayed: Unable to get Console path for Mount. An ESXi host maintains a NAS volume as a combination of the NFS server IP address and the complete path name of the exported share (see the mount sketch after this section). This issue occurs when this combination exceeds 128 characters. This issue is resolved in this release by increasing the NAS volume name limit to 1024 characters. This issue is resolved in this release. This issue is resolved in this release.

However, do not configure USB controllers as passthrough if you boot ESXi hosts from USB devices such as an SD card. For more information, see. As a result, the host cannot enter maintenance mode, and the remediation cannot be completed. This issue is resolved in bulletins created in this release and later. This issue occurs if the state. As a result, a No such a file exception error message is displayed when you reset the ESXi license. This issue is resolved in this release. This issue is observed when the user attempts an ESXi upgrade using the esxcli command. This issue is resolved in this release. However, in ESXi 5. Hence, while upgrading multiple ESXi 4. The value of the logdir attribute is the directory name of the old logfile attribute value. The migration happens only when the directory is still a valid directory on the upgraded system. This issue is resolved in this release. This issue is resolved in this release. This issue is resolved in this release.

This release lets you view virtual disk performance charts for virtual machine objects as well. This is useful when you need to trigger alarms based on virtual disk usage by virtual machines. The following error message is displayed: The system cannot find the file specified. This issue is resolved in this release.

The vmx configuration is outdated and points to the original VMDK. If the virtual machine fails between the snapshot operation and the next power-on, data loss occurs and the VMDK is left in an orphaned state. This issue is resolved in this release. If the file resides on removable media, reattach the media. Followed by: vmx MsgQuestion: msg. This issue is resolved in this release. This issue is resolved in this release. This issue is resolved in this release. This issue is resolved in this release. This issue occurs when the virtual machine swap files. This issue is resolved in this release. Removing some Linux drivers from the kernel solves this issue.
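For reference, the mount sketch promised above. The NFS server address, export path, and volume name are placeholders; the string whose length was previously capped at 128 characters is the combination of the host and share values.

# Mount an NFS export; host + share together form the NAS volume string
esxcli storage nfs add --host 192.0.2.10 --share /exports/vm-datastore --volume-name nfs-ds01
# Verify the mount
esxcli storage nfs list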
This issue is resolved in this release. This issue happens because of kernel incompatibility. This issue is resolved in this release. This issue occurs with Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 virtual machines. The issue does not occur in Windows 2003 virtual machines. This issue is resolved in this release.

For some virtual machines, the prefixLength property of the NetIpConfigInfoIpAddress data object, which is used to denote the length of a generic Internet network address prefix, might display incorrect subnet mask information. This issue occurs when the IP address endianness attribute, which determines how bytes are ordered within computer memory, is not correct in the subnet mask calculation. This issue is observed in Windows Server 2008 R2 64-bit and Windows Server 2003 virtual machines. This issue is resolved in this release.

This issue occurs when the default settings of the SVGA drivers that are installed with VMware Tools are incorrect. The virtual machines might also stop responding if you move the mouse and press any key during the restart process. This issue is resolved in this release. This issue is observed on SUSE Linux Enterprise Server 11 Service Pack 2 and Red Hat Enterprise Linux Server 6. This issue is resolved in this release. This might cause attempts to install VMware Tools to fail. This issue is resolved in this release. This happens because the uninstall process starts without waiting for the vmusr process to finish. More specifically, the uninstall process deletes a registry key that the vmusr process later tries to read, leading to a VMware Tools service failure. This issue is resolved in this release. Including the file in signed form resolves this issue. This issue is resolved in this release. This issue occurs with VMware Tools version 8. This issue is resolved in this release. This issue is resolved in this release.

This list of issues pertains to this release of ESXi 5. Some known issues from previous releases might also apply to this release. If you encounter an issue that is not listed in this known issues list, you can review the known issues from previous releases, search the VMware Knowledge Base, or let us know by providing feedback.

Known Issues List
The issues are grouped as follows.

You may be unable to access this system until you customize its network configuration. However, the host acquires a DHCP IP address and can ping other hosts. Workaround: This error message is benign and can be ignored. The error message disappears if you press Enter on the keyboard.

One common possible failure is signature verification, which can only be checked after the VIB is downloaded. The transaction fails with a VibDownloadError message. Workaround: Perform the following steps to resolve the problem.

If the names change after reboot in a scripted upgrade, the upgrade might be interrupted. Workaround: When possible, use Network Address Authority Identifiers (NAA IDs) for disk devices. For machines that do not have disks with NAA IDs, such as Hewlett Packard machines with CCISS controllers, perform the upgrade from a CD or DVD containing the ESXi installer ISO. Installation and upgrade script commands are documented in the vSphere Installation and Setup and vSphere Upgrade documentation.

In either case, when the ESX system is upgraded, the system will pause for up to one minute attempting to fetch an IPv4 address from a DHCP server.
After the system pauses for up to one minute, it will continue to the successful completion of the upgrade. The system might display a prompt to press Enter to continue. You can either press Enter or ignore the prompt. In either case, the system will proceed with the upgrade after the pause.

As a result, these hosts lose network connectivity and will not be automatically added to vCenter Server 5. Workaround: To use a 4. Use the newly created host profile with vSphere Auto Deploy to boot up stateless ESX 5. To boot up ESX 5. You can also disable or remove vDS settings from a 4. Typically, the MAC address, IP address, and Subnet Mask should appear here. Make a note of the name of the adapter, which appears in the VSS diagram.

Because this port is the port through which the ESXi Dump Collector server receives core dumps from ESXi hosts, these silent exits prevent ESXi host core dumps from being collected. Because error messages are not sent by ESXi Dump Collector to vCenter Server, the vSphere administrator is unaware of this problem. If not resolved, this affects supportability when failures occur on an ESXi host. Workaround: Select a port only from within the recommended port range to configure the ESXi Dump Collector server to avoid this failure. Using the default port is recommended.

Workaround: Use other NICs for gPXE boot.

If you enable more than 16 VMkernel network adapters for vMotion on a given host, vMotion to or from that host might fail. The error message says refusing request to initialize 17 stream ip entries, where the number indicates how many VMkernel network adapters you have enabled for vMotion. Workaround: Disable vMotion VMkernel network adapters until only 16, at most, are enabled for vMotion.

A side effect of this functionality is that each resource pool is tagged with a default 802. IPv6-only settings, however, are not supported in host profiles. If you configure VMkernel network adapters with IPv6-only settings, you are asked to provide IPv4 configurations in the host profile answer file. However, some Cisco switches (4948 and 6509) drop the packets if the tagged packets are sent on the native VLAN (VLAN 0). The length of the delay increases with the number of BE2 and BE3 interfaces on the host and can last for several minutes.

The maximum number of network resource pools allowed on a host is 56. LimitExceeded. This error message indicates that the distributed switch already has the maximum number of network resource pools. The maximum number for network resource pools on a vSphere Distributed Switch is 56.

Unless a system name is explicitly set to advertise on the Extreme switch, LLDP cannot display this information. Workaround: Run the configure lldp ports advertise system-name command to advertise the system name on the Extreme switch.

Other operations that truncate packets might also cause ESXi to fail. Workaround: Do not set a mirrored packet length for a port mirroring session.

However, a link to enter Storage DRS maintenance mode appears on the Summary page for a datastore that is not in a datastore cluster. When you click Enter SDRS maintenance mode for a standalone datastore, the datastore attempts to enter maintenance mode and the task appears to be pending indefinitely. Workaround: Cancel the Enter SDRS Maintenance Mode task in the Recent Tasks pane of the vSphere Client.

This condition persists until the background Storage vMotion operation completes. This action could take a few minutes or hours depending on the Storage vMotion operation time.
During this time, no other operation can be performed for that particular host from vCenter Server. After the Storage vMotion operation completes, vCenter Server reconnects the host back to the inventory. None of the running virtual machines on non-APD datastores are affected by this failure.

Symbolic links referencing incorrect files and folders might cause this problem. Workaround: Remove the symbolic links. Do not use symbolic links in datastores.

If you select a device that does not support ATS to extend the ATS-capable datastore, the operation fails. The vSphere Client displays the An error occurred during host configuration message. In the log file, you might also find the following error message: Operation failed, unable to add extent to filesystem.

Workaround: When you test Storage DRS load balancing, use real data to populate at least 20 percent of the storage space on the datastore. The dialog box displays Place new virtual machine hard disk on. When virtual machines are being created, hard disk names are not assigned until the disks are placed. If the virtual machine hard disks are of different sizes and are placed on different datastores, you can use the Space Utilization before and after statistics to estimate which disk is placed on which datastore.

You cannot deselect the Disable Storage DRS check box for the virtual machine in the Scheduled Task wizard. The Disable Storage DRS check box is always selected in the Scheduled Task wizard. However, after the Scheduled Task runs and the virtual machine is created, the automation level of the virtual machine is the same as the default automation level of the datastore cluster.

The following error message appears: The resource is in use. However, when you check the disk type on the Virtual Machine Properties dialog box, the Disk Provisioning section always shows Thick Provision Eager Zeroed as the disk format, no matter which format you selected during the disk creation. ESXi does not distinguish between lazy zeroed and eager zeroed virtual disks on NFS datastores.

Workaround: After migration, use the vSphere Client to change the disk's mode to Independent Persistent. The vSphere Client displays the following error message: Reconfigure failed: vim. Workaround: Remove a child disk to be able to add a virtual compatibility RDM.

After you enable software FCoE adapters on these hosts, attempts to display Storage Maps in the vSphere Client fail. The following error message appears: An internal error has occurred: Failed to serialize response. Workaround: Configure software FCoE on the ESXi host first, and then add the host to vCenter Server.

In this configuration, when one of the NFS volumes runs out of space, other NFS volumes that share the same RPC client might also report no space errors. MaxConnPerIP on the right.

For ESXi hosts not provisioned with Auto Deploy, the SATP PSP change is correctly updated in the host. However, a compliance check of the ESXi host against the host profile fails. Workaround: After applying the host profile to the ESXi host, delete the host profile and extract a new host profile from the ESXi host, then attach it to the host before rebooting. To do this, use the Update from Reference Host feature in the Host Profiles UI. This task deletes the host profile and extracts a new profile from the host while maintaining all the current attachments. Use the esxcli command to edit the SATP PSP on the host itself before you extract the host profile.
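A sketch of that esxcli edit, with an ALUA SATP and a round-robin PSP chosen purely for illustration; substitute the SATP and path selection policy that apply to your storage array.

# Change the default path selection policy for a SATP, then verify
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR
esxcli storage nmp satp list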
Do not use the host profile editor to edit the SATP PSP.

The host profile application process does not disable the services on the target ESXi host. This situation is commonly encountered by users who have enabled the ESXi Shell or SSH services on the target ESXi hosts through the Security Profile in the vSphere Client or Troubleshooting Options in the DCUI. Workaround: The reboot process disables the services. You can also manually stop the services in the vSphere Client by configuring the host. Perform this procedure for each service.

If the answer file status is Completed, after attaching another host profile to the host, the answer file status in the host profile view still appears as Completed. The actual status, however, might be changed to Incomplete. Workaround: Manually update the answer file status after attaching a host profile. The host profile answer file status is updated.

In such cases, the user sees the Cannot apply the host configuration error message in the vSphere Client, although the underlying process on ESXi that is applying the configuration might continue to run. LOCK, to be released. Another process has kept this file locked for more than 20 seconds. The process currently holding the lock is hostd-worker 5055. This is likely a temporary condition. Please try your operation again. This error is caused by contention on the system while multiple operations attempt to gather system configuration information while the host profiles apply operation sets the configuration. Because of these errors and other timeout-related errors, even after the host profiles apply operation completes on the system, the configuration captured in the host profile might not be fully applied. Check the host for compliance to see which parts of the configuration failed to apply, and perform an Apply operation to fix those remaining non-compliance issues.

By default, the apply operation times out in 10 minutes. This entry lets you set a longer timeout. For example, a value of 3600 increases the timeout to 1 hour. The value you enter might vary depending on the specific host profile configuration. After you set a high enough value, the apply operation timeout error no longer appears and the task is visible in the vSphere Client until it is complete.

The configuration in the host profile and answer file is applied on the system during initialization. Large configurations might take longer to boot, but this can be significantly faster than manually applying the host profile through the vSphere Client.

Workaround: Update the answer file for the profile before performing a compliance check. This can occur even if the host is compliant with the host profile. This occurs when the portgroupprofile policy options are set to use the default values. This setting leads to an issue where the comparison between the profile and the host configuration might incorrectly fail when the profile is applied. At this time, the compliance check passes. The comparison failure causes the apply profile action to recreate the vSwitches and portgroups. This affects all subprofiles in portgroupprofile. Workaround: Change the profile settings to match the desired settings instead of selecting to use the default.

Workaround: To retrieve the correct installation date, use the esxcli command esxcli software vib list. The unknown status is not critical and the list of PCI IDs is updated regularly in major vSphere releases.
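For example, the command mentioned above can be run in the ESXi Shell or through the vCLI; the Install Date column carries the date that the UI misreports.

# List installed VIBs with name, version, vendor, acceptance level, and install date
esxcli software vib list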
Workaround: When you run the snmpwalk command, use the -t option to specify the timeout interval and the -r option to set the number of retries. For example: snmpwalk -m all -c public -v1 host-name -r 2 -t 10 variable-name.

This might result in a port conflict when you enable the agent, producing an Address already in use error message. Workaround: Enable the embedded SNMP agent before configuring the port.

This issue occurs when you configure USB 2. Workaround: Set the following configuration option in the. However, an xHCI driver might not be available for many operating systems. Without a driver installed in the guest operating system, you cannot use USB 3. No drivers are known to be available for the Windows operating system at this time. Contact the operating system vendor for availability of a driver. When you create or upgrade virtual machines with Windows guest operating systems, you can continue using the existing EHCI+UHCI controller, which supports USB 1. If your Windows virtual machine has xHCI and EHCI+UHCI USB controllers, newly added USB 1. Workaround: Remove the xHCI controller from the virtual machine's configuration to connect USB devices to EHCI+UHCI.

Linux kernels earlier than 2. If the powered-on virtual machine has more than 3GB of memory, you can increase the virtual machine memory to 16 times the initial virtual machine power-on size or to the hardware version limit, whichever is smaller. The hardware version limit is 255GB for hardware version 7 and 1011GB for hardware version 8. Linux 64-bit and 32-bit Windows 7 and Windows 8 guest operating systems freeze when memory grows from less than or equal to 3GB to greater than 3GB while the virtual machine is powered on. This vSphere restriction ensures that you do not trigger this bug in the guest operating system.

For hardware version 7 virtual machines with cores per socket greater than 1, when you enable CPU hot add in the Virtual Machine Properties dialog box and try to hot-add virtual CPUs, the operation fails and a CPU hot plug not supported for this virtual machine error message appears. Workaround: To use the CPU hot-add feature with hardware version 7 virtual machines, power off the virtual machine and set the number of cores per socket to 1. For best results, use hardware version 8 virtual machines.

When you hot-add memory to a system that has less than 3GB of memory before the hot-add, but more than 3GB of memory after the hot-add, the Windows state is corrupted and eventually causes Windows to stop responding. Workaround: Use the latest LSI SAS driver available from the LSI website. Do not use the LSISAS1068 virtual adapter for Windows 2003 virtual machines.

You see the incorrect address when you run the ifconfig command and compare the output of the command with the list of addresses in the vSphere Client. This incorrect information also appears when you run the vim-cmd command to get the GuestInfo data.

A CannotAccessFile error message appears and the create virtual machine operation fails. Workaround: Create additional virtual machines in smaller batches of, for example, 64, or try creating virtual machines in different datastores or within different directories on the same datastore.

The devices can also disconnect if DRS triggers a migration. When the devices disconnect, they revert to the host and are no longer connected to the virtual machine.
This problem occurs more often when you migrate virtual machines that have multiple USB devices connected, but occasionally happens when one or a small number of devices are connected. Workaround: Migrate the virtual machine back to the ESXi host to which the USB devices are physically attached and reconnect the devices to the virtual machine.

In the vSphere 5. This version is supported on the existing host, but upgrade if new functionality does not work. VMware Tools installed with vSphere 4. Workarounds: In vSphere 5. This setting also disables the exclamation point icon in the guest, which indicates that VMware Tools is unsupported. Neither of these settings affects the behavior of VMware Tools when the Summary tab shows the status as Unsupported or Error. In these situations, the exclamation mark icon appears and VMware Tools is automatically upgraded (if configured to do so), even when the advanced configuration settings are set to FALSE. You can set advanced configuration parameters by editing the virtual machine's configuration file. This feature does not work in ESXi 5. Ignore any documentation procedures related to this feature. Workaround: Install VMware Tools manually.

In such cases, the Mac OS X guest operating system stops responding and a variant of one of the following messages is written to the vmware. The first line of the panic report is: Panic CPU 0 : Unresponsive processor The guest OS panicked.

Cloning of ESXi 5. In such cases, you cannot set custom VMware Tools scripts. Workaround: Set the DNS suffix manually in Windows XP and Windows 2003.

The SNMP agent reports the processor status as Unknown for the hrDeviceStatus object in HOST-RESOURCES-MIB. Workaround: Use either CIM APIs or SMBIOS data to check the processor status.

The result is a nonlinear snapshot tree hierarchy. This situation occurs if you change the snapshotDirectory settings to point to different datastores more than once and take snapshots of the virtual machine between the snapshotDirectory changes. For example, you take snapshots of a virtual machine with snapshotDirectory set to Datastore A, revert to a previous snapshot, then change the snapshotDirectory settings to Datastore B and take additional snapshots. Now you migrate the virtual disk from Datastore B to Datastore A. The best practice is to retain the default setting, which stores the parent and child snapshots together in the snapshot directory. Avoid changing the snapshotDirectory settings or taking snapshots between datastore changes. If you set snapshot. Workaround: Manually update the disk path references to the correct datastore path in the snapshot database file and disk descriptor file.

For example, local clocks in areas that observe DST were set forward 1 hour on Sunday, March 27, 2011 at 3am. The tick markers on the time axis of performance charts should have been labeled... The labels actually displayed are...

If the same operation is performed on a virtual machine on an ESXi 5. While a virtual machine is being migrated from one host to another, the original host might fail, become unresponsive, or lose access to the datastore containing the configuration file of the virtual machine. If such a failure occurs and the vMotion subsequently also fails, vSphere HA might not restart the virtual machine and might unprotect it. Workaround: If the virtual machine fails and vSphere HA does not power it back on, power the virtual machine back on manually.

This problem is also seen if you remove the splashscreen directive from the grub.
Workaround: Verify that the splashscreen directive is present and references a file that is accessible to the boot loader.

Workaround: Manually activate and deactivate the adapters with the ifconfig utility (see the sketch at the end of this section). The manual activation and deactivation will not persist through the next reboot.

This option is not compatible with ESXi 5. The log file does not contain links to rotated log files. In this situation, you cannot install OSPs.

Under certain conditions in which the proxy server is enabled for the machine being used, PowerCLI fails to add the online depot to its session. This failure can occur only if all of the following conditions exist.
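Finally, a minimal illustration of the ifconfig workaround noted earlier in this section. The adapter name eth0 is a placeholder for the affected guest adapter, and the commands must be repeated after each reboot because the change does not persist.

# Deactivate and reactivate the adapter inside the guest
ifconfig eth0 down
ifconfig eth0 up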