
I've been working with a customer getting the NVIDIA Tesla M60 cards working in their environment and compiled some great information for those of you looking into this. So the official installation guide for the VIB is pretty much this KB article:

I found that you can also easily and successfully use Update Manager to push the entire installation of the VIBs to your hosts. The advantage is scale, consistency, and the ability to see the VIB installation (baseline) in vCenter.

To start, make sure you are getting the correct enterprise versions of the VIBs and drivers (note that the Kepler one is the consumer version and should NOT be used). Once you have your offline bundle, you can head over to your Update Manager screen, choose the patch repository, and Import Patches. Once you upload the VIB offline bundle, you should see it in the list of patches. If you know how to remove it from the patch repository, drop me a note on Twitter or in the comments. From there, you can add it to a host extension baseline.
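
For reference, the manual route from the KB article boils down to a couple of esxcli commands per host, which is exactly the repetition Update Manager saves you from. A rough sketch, assuming the offline bundle has been copied to a datastore (the bundle filename here is a placeholder for whatever your NVIDIA download is actually called):

    # Put the host into maintenance mode before touching VIBs
    esxcli system maintenanceMode set --enable true

    # Install the NVIDIA offline bundle (esxcli wants an absolute path)
    esxcli software vib install -d /vmfs/volumes/datastore1/NVIDIA-GRID-vSphere-bundle.zip

    # Reboot the host, then take it back out of maintenance mode:
    # esxcli system maintenanceMode set --enable false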

Now you can easily scan and remediate individual hosts or groups of hosts to install the VIB for Shared vGPUs. To verify that the VIB was installed correctly, you can PuTTY over to a host and run the command nvidia-smi. If the VIB is installed correctly and you have your GPU cards in the host, you should see each GPU on the card listed in the output.
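
From that SSH session, both checks are quick one-liners; something along these lines (the exact VIB name and driver version will vary with your bundle):

    # Confirm the NVIDIA VIB actually landed on the host
    esxcli software vib list | grep -i nvidia

    # Query the GPUs through the driver; each physical GPU on the
    # M60 board should show up here
    nvidia-smi
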
Once the VIB is installed, using the HTML vSphere Client, you should be able to add the shared PCI device to the Desktop VM (or image) and see the appropriate vGPU profiles. Note that shared PCI is a feature of VMware's Enterprise Plus licensing.
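
Behind the scenes, picking a profile on that shared PCI device amounts to a couple of entries in the VM's .vmx file. A sketch, assuming one of the M60's 1 GB Quadro profiles (the profile string depends on the card and the frame buffer size you select):

    pciPassthru0.virtualDev = "vmiop"
    pciPassthru0.vgpu = "grid_m60-1q"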