vCSA Upgrade to 6.7 Failure
Yesterday, while upgrading a vCSA 6.0 to 6.7 Update 2, I ran into a strange failure.
I was using a JSON template for the CLI upgrade, because it lets me pick the deployment size more easily, and it makes repeated attempts much simpler since I don't have to go through the wizard every time.
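For reference, a CLI upgrade run with such a template looks roughly like this; the template path below is a placeholder and the option names are from the 6.7 installer, so treat it as a sketch rather than a copy-paste recipe:

# Run only the upgrade prechecks against my filled-in template
vcsa-deploy upgrade --accept-eula --precheck-only /path/to/my-upgrade.json
# When the prechecks pass, run the actual upgrade with the same template
vcsa-deploy upgrade --accept-eula /path/to/my-upgrade.json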
When I ran the upgrade, it failed in the precheck on the source with this strange error:
Cannot detect upgrade runner version on source vc, proceed to cleanup and upload
Uninstalling vmware-upgrade-requirements-* on vcenter.domain.local.
('Failed to uninstall %(rpm)s on %(sourcevc_hostname)s: %(error)s', {'error': CommandError("Failed to run and wait for command in guest with error 'Command '/bin/rpm' exited with non-zero status 1'",), 'sourcevc_hostname': 'vcenter.domain.local', 'rpm': 'vmware-upgrade-requirements-*'})
Failed to run and wait for command in guest with error 'Command '['ls', '/var/tmp/vmware-upgrade-requirements/checker.sh']' exited with non-zero status 2'
Uploading rpm to /var/tmp/upgrade-requirements.rpm, on VM vcenter.domain.local.
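The message points at leftovers of the upgrade runner on the source appliance (the vmware-upgrade-requirements RPM and its files under /var/tmp). A quick way to check for those on the source vCSA, assuming SSH and the Bash shell are enabled, is with the paths taken straight from the error above:

# On the source vCSA 6.0 (SLES-based), check for a stale upgrade-runner package and its files
rpm -qa | grep upgrade-requirements
ls -l /var/tmp/upgrade-requirements.rpm /var/tmp/vmware-upgrade-requirements/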
I looked through the logs but did not find any useful clue about the failure.
After some investigation, though, I found that on the ESXi host where the source vCenter was running, the /tmp ramdisk was full:
[root@esxi01:/tmp] vdf -h
……
Ramdisk                   Size      Used Available Use% Mounted on
root                       32M        2M       29M   9% --
etc                        28M      300K       27M   1% --
opt                        32M      544K       31M   1% --
var                        48M        1M       46M   2% --
tmp                       256M      256M        0B 100% --
iofilters                  32M        0B       32M   0% --
hostdstats               1803M        9M     1793M   0% --
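To see what was actually eating the tmp ramdisk, a couple of basic BusyBox commands on the host are enough (a sketch; du sizes are in kilobytes with -k):

[root@esxi01:~] ls -lh /tmp                       # list the files in /tmp with their sizes
[root@esxi01:~] du -ak /tmp | sort -n | tail -5   # the five largest entries, in KB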
It turned out to be an HPE module (AMS, the Agentless Management Service) that was filling the /tmp folder; there is an HPE advisory on this here: https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00073323en_us&docLocale=en_US.
After deleting the /tmp/ams-bbUsg.txt file, the upgrade worked fine.
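For completeness, the cleanup on the host was nothing more than this (the HPE advisory above also covers restarting or updating the AMS provider so the file does not grow back; the exact service name depends on the HPE bundle, so I'm leaving that step out here):

[root@esxi01:~] rm /tmp/ams-bbUsg.txt   # remove the runaway AMS usage file
[root@esxi01:~] vdf -h                  # confirm the tmp ramdisk has free space again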
Update, July 31 2019: I just ran into the same failure message again. This time I could see some log files, but nothing useful in them:
cd /storage/log/vmware/upgrade
find | grep 2019
./CollectRequirements_com.vmware.vmon_2019_07_31_09_41.log
./CollectRequirements_com.vmware.is_2019_07_31_09_41.log
./CollectRequirements_com.vmware.vpxd_2019_07_31_09_41.log
./CollectRequirements_com.vmware.vsan-health_2019_07_31_09_41.log
./CollectRequirements_com.vmware.sps_2019_07_31_09_41.log
./CollectRequirements_com.vmware.license_2019_07_31_09_41.log
./CollectRequirements_com.vmware.vcdb_2019_07_31_09_41.log
./CollectRequirements_com.vmware.rbd_2019_07_31_09_41.log
./CollectRequirements_com.vmware.rhttpproxy_2019_07_31_09_41.log
./CollectRequirements_com.vmware.applmgmt_2019_07_31_09_41.log
./CollectRequirements_com.vmware.netdump_2019_07_31_09_41.log
./CollectRequirements_com.vmware.sso_2019_07_31_09_41.log
./CollectRequirements_com.vmware.vcha_2019_07_31_09_41.log
./CollectRequirements_com.vmware.syslog_2019_07_31_09_41.log
./CollectRequirements_com.vmware.cls_2019_07_31_09_41.log
./CollectRequirements_com.vmware.vcIntegrity_2019_07_31_09_41.log
./CollectRequirements_com.vmware.vmafd_2019_07_31_09_41.log
./CollectRequirements_com.vmware.common_upgrade_2019_07_31_09_41.log
./CollectRequirements_com.vmware.ngc_2019_07_31_09_41.log
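If you do have these CollectRequirements logs, grepping across them for errors is usually faster than opening each file; a minimal example (the date/time part of the file names will of course differ per run):

cd /storage/log/vmware/upgrade
grep -l -i -E "error|fail" CollectRequirements_*.log                                     # which collectors logged an error at all
grep -i -E "error|fail" CollectRequirements_com.vmware.common_upgrade_*.log | tail -20   # last errors from the common upgrade log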
This time, the solution was to reboot the source vCenter, and the upgrade worked.