I had a 300GB partition on a Windows Server VM that was used for log files, 200GB of which would never be used. I needed to shrink this drive to 100GB. This cannot be achieved via the vSphere client, so here are the steps I took.
First open up Disk Management in Computer Management in your guest Windows environment.
Right-click the volume on the disk you want to shrink and select Shrink Volume.
Windows will inform you the maximum amount it can shrink the disk by. Choose an amount that you wish to actually shrink it by and click Shrink.
Once it is done and you are satisfied that the volume is the size you want, shut down the VM.
Enable SSH access. This can be done via vSphere or by logging in to the iLO, depending on your infrastructure. I did it via vSphere as it was quicker and easier.
Use PuTTY to SSH to the ESXi server itself and log in as root.
Navigate to the datastore path where the VMware virtual machine disk (.vmdk) is located.
cd /vmfs/volumes/<datastore name>/<vm name>
To be safe, make a backup copy of the .vmdk file (just the descriptor file):
cp vmdiskname.vmdk vmdiskname-bak.vmdk
Next, edit the *.vmdk descriptor file, which contains the variables defining the size of the *-flat.vmdk file. Using cat, you can see what the descriptor file contains.
The number under the # Extent description heading defines the size, in 512-byte sectors.
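The original output isn't reproduced here, but a VMFS descriptor file typically looks something like this (the disk name, CID, and geometry values are illustrative, not from the post):

```
# Disk DescriptorFile
version=1
CID=fb183c20
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 629145600 VMFS "vmdiskname-flat.vmdk"

# The Disk Data Base
#DDB
ddb.geometry.cylinders = "39162"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
```

The RW number on the extent line is the one to change.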
e.g. 300GB = 300 x 1024 x 1024 x 1024 / 512 = 629145600
I wanted 100GB, so: 100 x 1024 x 1024 x 1024 / 512 = 209715200
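The arithmetic above can be scripted so you don't fat-finger it; a minimal sketch (the function name is mine, not from the post):

```shell
# Convert a target size in GB to the 512-byte sector count the
# descriptor's extent line expects: GB * 1024^3 / 512.
gb_to_sectors() {
  echo $(( $1 * 1024 * 1024 * 1024 / 512 ))
}

gb_to_sectors 300   # 629145600
gb_to_sectors 100   # 209715200
```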
Using vi, edit the descriptor file, change the number, and save the file.
Exit SSH and start the VM.
Log in to the Windows guest machine and open Disk Management... voila.
I was tasked with adding additional users to receive emails from a particular Exchange group. Easy enough! This group had a type of ‘Security Group’ and had several email addresses associated with it, as it basically replaced a public folder setup.
After adding the users and clicking on ‘Save’ I got the following error;
“You don’t have sufficient permissions. This operation can only be performed by a manager of the group”
But I was logged into the ECP as the Domain Administrator, who is also a member of the Exchange Administrators group.
This issue occurs if you’re not a manager of the group. In this situation, you’re not listed in the ManagedBy attribute.
Running the following PowerShell command confirmed that there was no entry in the ‘ManagedBy’ attribute.
Get-DistributionGroup -Identity FrontOfficeGroup | Select-Object ManagedBy
I ran the following PowerShell script to add the appropriate person to the ‘ManagedBy’ attribute.
Set-DistributionGroup -Identity FrontOfficeGroup -ManagedBy "Joe.Blow@company.com" -BypassSecurityGroupManagerCheck
In SQL Server 2012 there is a default job called ‘syspolicy_purge_history’ which was continually failing.
The job has 3 steps; the last one runs a PowerShell script, which is…
This was the step that was failing.
The PowerShell execution policy on the server is set to Restricted, which is good, as this prevents any scripts from being run. However, it is also why the job was failing. I was not comfortable with changing the execution policy to a blanket Unrestricted. I discovered that you can allow the SQL PowerShell engine (SQLPS) to run under its own, less restrictive execution policy without relaxing the system-wide setting.
Check if the following registry keys exist; if not, add them. If they do, change the values appropriately:
For SQL 2012
ExecutionPolicy REG_SZ RemoteSigned
Path REG_SZ C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\SQLPS.exe
For SQL 2014
ExecutionPolicy REG_SZ RemoteSigned
Path REG_SZ C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\SQLPS.exe
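The post lists the value names but not the key they live under. On the systems where this fix is commonly documented, the values sit under the SQLPS ShellIds key; the key path below is an assumption, so verify it against your own installation before applying (SQL 2012 shown, use sqlps120 and the 120 path for SQL 2014):

```
Windows Registry Editor Version 5.00

; Assumed key path for the SQL 2012 SQLPS host -- verify before applying.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps110]
"ExecutionPolicy"="RemoteSigned"
"Path"="C:\\Program Files (x86)\\Microsoft SQL Server\\110\\Tools\\Binn\\SQLPS.exe"
```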
Open Task Manager, click ‘Show processes from all users’ and find the process that is causing the high CPU percentage. Assuming it is an IIS worker process (w3wp.exe), do the following to identify which application pool it belongs to.
Open an Administrative Command Prompt, and change to the %windir%\System32\inetsrv directory (cd %windir%\System32\inetsrv) and run appcmd list wp. This will show the process identifier (PID) of the process in quotes. You can match that PID with the PID available in Task Manager.
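For reference, the output looks something like this (the PIDs and pool names here are illustrative):

```
C:\Windows\System32\inetsrv>appcmd list wp
WP "4528" (applicationPool:DefaultAppPool)
WP "6140" (applicationPool:MyWebAppPool)
```

Match the quoted PID against the PID column in Task Manager to find the offending pool.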
Problem: When running the Tor Browser, it does not open; Windows 10 displays a message saying the application has ‘Stopped Working’, with the options ‘Close’ or ‘Debug’.
Explanation: Webroot Antivirus has blocked the firefox.exe in the Tor Browser directory.
Resolution: In Webroot Antivirus, under Identity Protection -> Application Protection, you will find the application listed as ‘Protected’. Change this to ‘Allow’ and start the browser.
Problem: Attempted to upgrade VMware ESXi 5.5 to 5.5 Update 1 and got an error about conflicting VIBs:
The upgrade contains the following set of conflicting VIBs:
Remove the conflicting VIBs or use Image Builder to create a custom upgrade ISO image that contains the newer versions of the conflicting VIBs, and try to upgrade again.
Actually the problem is NOT the net-bnx2 package, but the net-cnic package it depends on. Or, to be more precise, it looks like the ESXi update procedure fails to interpret version numbers correctly.
You can see that if you check the esxupdate.log or manually try to update net-bnx2 with the --update option:
# esxcli software vib update --vibname net-bnx2 --dry-run --depot /tmp/BCM-NetXtremeII-4.0-offline_bundle-1796156.zip
VIB Broadcom_bootbank_net-bnx2_2.2.5d.v55.2-1OEM.522.214.171.1241820 requires misc-cnic-register = 1.7a.02.v55.1-1OEM.5126.96.36.1991820, but the requirement cannot be satisfied within the ImageProfile.
Please refer to the log file for more details.
So our net-bnx2 driver needs at least misc-cnic-register = 1.7a.02.v55.1-1OEM.5126.96.36.1991820, but that package is right there in the bundle. Hm.
Next step, what net-cnic version do I currently have on my host?
# esxcli software vib list | grep -i net-cnic
net-cnic 1.72.52.v55.1-1vmw.5184.108.40.2061820 VMware VMwareCertified 2013-11-21
Ok, we got 1.72.52.v55.1-1vmw.5184.108.40.2061820 here. So the battle boils down to version string 1.7a vs. 1.72. Might be a bit ambiguous.
Now let’s ask our ESXi host what it considers to be the newer driver (make sure you scroll right to the Status column):
# esxcli software sources vib list -d /tmp/BCM-NetXtremeII-4.0-offline_bundle-1796156.zip
Name                Version                                 Vendor    Creation Date  Acceptance Level  Status
------------------  --------------------------------------  --------  -------------  ----------------  ---------
net-cnic            1.7a.05.v55.3-1OEM.518.104.22.1681820   Broadcom  2014-03-04     VMwareCertified   Downgrade
scsi-bnx2i          2.7a.03.v55.2-1OEM.522.214.171.1241820  Broadcom  2014-01-31     VMwareCertified   Downgrade
net-bnx2x           1.7a.10.v55.1-1OEM.5126.96.36.1991820   Broadcom  2014-03-04     VMwareCertified   Downgrade
scsi-bnx2fc         1.7a.08.v55.1-1OEM.5188.8.131.521820    Broadcom  2014-04-17     VMwareCertified   Downgrade
misc-cnic-register  1.7a.02.v55.1-1OEM.5184.108.40.2061820  Broadcom  2013-12-21     VMwareCertified   Downgrade
net-bnx2            2.2.5d.v55.2-1OEM.5220.127.116.111820   Broadcom  2014-03-27     VMwareCertified   Update
Oops! All drivers contained in the bundle except net-bnx2 are being interpreted as version downgrades, which is not quite correct. Consequently, the host doesn't see any need to update these packages with the --update switch:
# esxcli software vib update --vibname net-cnic --dry-run -d /tmp/BCM-NetXtremeII-4.0-offline_bundle-1796156.zip
Message: Dryrun only, host not changed. The following installers will be applied: 
Reboot Required: false
VIBs Skipped: Broadcom_bootbank_net-cnic_1.7a.05.v55.3-1OEM.518.104.22.1681820
So what’s basically happening to cause the errors everybody is seeing is:
The host looks through the metadata version information and compares it with its existing packages.
It assumes it can only update net-bnx2 while everything else is a downgrade, so only net-bnx2 is considered for the update process.
It checks the dependencies of the newer net-bnx2 driver, which say it needs at least net-cnic version 1.7a.
The host reverses the comparison approach it just used when deciding the other packages, including net-cnic, were downgrades, and thinks the existing net-cnic version 1.72 is too old to satisfy this requirement.
It throws an error that it can’t update net-bnx2 (because of this dependency).
Apparently the ESXi updater gets confused by the ambiguous 1.7a vs. 1.72 version strings. I don't know what the general industry standard is on versioning when it involves letters, so either VMware/Broadcom/HP messed up when naming these new versions, or the ESXi updater is buggy when interpreting them.
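The ambiguity is easy to demonstrate. The sketch below is illustrative only, not VMware's actual comparison algorithm: the same two strings order differently depending on whether you compare them lexicographically or strip the letter and compare numerically.

```shell
old="1.72"   # version installed on the host
new="1.7a"   # version shipped in the Broadcom bundle

# Lexicographic comparison: 'a' (0x61) sorts after '2' (0x32),
# so 1.7a looks newer than 1.72.
[ "$new" \> "$old" ] && echo "lexicographically: 1.7a is newer"

# Numeric comparison of the second field with the letter stripped:
# 7 < 72, so 1.7a suddenly looks older than 1.72.
n=$(echo "$new" | cut -d. -f2 | tr -d 'a-z')   # 7
o=$(echo "$old" | cut -d. -f2)                 # 72
[ "$n" -lt "$o" ] && echo "numerically: 1.7a is older"
```

If the updater mixes the two approaches, you get exactly the contradictory Downgrade/dependency behaviour shown above.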
1) Log into the host (we did this from the console via ESXi Shell)
2) Remove all Broadcom components. The order of removal was important due to dependencies; we tried removing just bnx2 and bnx2x but got the error again, so we took them all out:
esxcli software vib remove --vibname=net-bnx2
esxcli software vib remove --vibname=net-bnx2x
esxcli software vib remove --vibname=net-tg3
esxcli software vib remove --vibname=scsi-bnx2fc
esxcli software vib remove --vibname=scsi-bnx2i
esxcli software vib remove --vibname=net-cnic
esxcli software vib remove --vibname=misc-cnic-register
3) Reboot the host to the upgrade ISO
4) Run the upgrade, choosing to preserve the datastore, then reboot the host
5) Remediate via the vSphere client if necessary (we did)
6) Exit maintenance mode and start the VMs; the upgrade is complete
Problem: Restarting the Management agents on ESXi
Solution: To restart the management agents on ESXi:
From the Direct Console User Interface (DCUI):
Connect to the console of your ESXi host.
Press F2 to customize the system.
Log in as root.
Use the Up/Down arrows to navigate to Restart Management Agents.
Note: In ESXi 4.1 and ESXi 5.0, 5.1 and 5.5, this option is available under Troubleshooting Options.
Press F11 to restart the services.
When the service has been restarted, press Enter.
Press Esc to log out of the system.
From the Local Console or SSH:
Log in to SSH or Local console as root.
Run these commands:
Note: In ESXi 4.x, run this command to restart the vpxa agent:
service vmware-vpxa restart
To reset the management network on a specific VMkernel interface, by default vmk0, run the command:
esxcli network ip interface set -e false -i vmk0; esxcli network ip interface set -e true -i vmk0
Note: Using a semicolon (;) between the two commands ensures the VMkernel interface is disabled and then re-enabled in succession. If the management interface is not running on vmk0, change the above command according to the VMkernel interface used.
To restart all management agents on the host, run the command: