Proxmox VE 5.3

The new version of Proxmox was released a few days ago, and brings several very nice new features with it. For me the most interesting is that they've added PCI passthrough support to the Web UI (previously you had to manually edit your VM configuration files), but there are several other noteworthy additions too. You can check the official press release here, or keep reading to see my take on things, along with screenshots that show the features in action.

Updating your own box is easy: just use the usual update method to get the latest version (Node » Updates » Refresh, then Upgrade), or run sudo apt-get update && sudo apt-get dist-upgrade in a shell (Proxmox recommends dist-upgrade over plain upgrade so that new dependencies are pulled in correctly).

There's now a "PCI Device" option in the Hardware Add menu.

PCI Passthrough

I have only recently started playing with PCI (or USB) passthrough, so having a UI for this is especially handy for me. While the VM configuration file isn't all that complicated, it's nice to have a UI that lists out all devices it can find so you don't have to run separate commands for that. Adding a PCI device is now roughly as easy as adding a USB device. After initial setup, that is.

The PCI Device add modal.
The list of hardware devices I can pass through to my VM. This of course varies depending on what hardware you have installed.

A note on initial setup

You may get an error message the first time you try to use this feature. This is because you apparently need to manually load the kernel modules and settings that hardware passthrough relies on. I am not sure if this is done automatically on a clean install of Proxmox 5.3, but after upgrading I had to do it by hand. Below is what I had to do to make the feature work.

Hardware passthrough will only work if your hardware (specifically your CPU) supports it. Fortunately, virtually all Intel Xeon CPUs do, so if you're running Proxmox on an actual server you'll most likely be fine. The important abbreviations to look for are VT-d and IOMMU. If you have an AMD CPU, look for AMD-Vi instead of VT-d, although as I only have Intel CPUs I won't be able to confirm this on my own.

For the remainder of this document I'll assume you have an Intel CPU, but feel free to replace Intel-specific parts with the appropriate AMD equivalent.

1) Enable VT-d and/or SR-IOV

Depending on your motherboard/vendor, different names or abbreviations may be used for the options we're looking for. Some may simply call it VT-d (or bundle it under a general VT-x/virtualization setting), whereas others may use different names entirely. I recommend you consult your specific motherboard and/or server documentation to find the appropriate options, and to ensure the hardware supports this feature in the first place.

In my case, on a Dell R510, I had to enable Virtualization Technology as well as SR-IOV (Single Root I/O Virtualization). The former can be found under Settings » Virtualization Technology, and the latter under Settings » Integrated Devices » SR-IOV Global Enable. I believe the options are named similarly on the R610, R620 and R720, though I don't have access to these servers to verify this.

2) Modify Grub config

With that done, you need to modify a few settings in your Proxmox Debian installation. After booting the server back up, SSH in or use the Dashboard's Shell option. We'll start with updating the Grub configuration file.

Use your favorite text editor to open /etc/default/grub (use sudo if you're not logged in as root). Find the line that sets GRUB_CMDLINE_LINUX_DEFAULT and replace the default "quiet" with "quiet intel_iommu=on". If you're using an AMD CPU, use "quiet amd_iommu=on" instead. Save the changes, and then run update-grub.
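For reference, after the edit the relevant line in /etc/default/grub should look like this (Intel shown; on AMD you'd have amd_iommu=on instead):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
```

If your distribution shipped extra flags in that variable, keep them and just append the iommu option.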

3) Modify /etc/modules

Open /etc/modules using your favorite text editor, and add the following lines to it (note: the file may be empty to begin with, which is fine; just add the lines):
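The lines in question are the VFIO kernel modules; per the Proxmox PCI passthrough documentation, these are:

```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

These make sure the VFIO framework (which handles handing a physical device over to a VM) is loaded at boot.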


Save the changes, and reboot your server.

That should be it.

After rebooting you should be able to add PCI passthrough devices to your virtual machines. You can check whether things work via the Proxmox UI by heading over to the Hardware section of any of your VMs and clicking the Add » PCI Device option. You can also use the lspci command to get a list of all PCI devices and their respective identifiers, in case you want to manually assign these via the VM config file, or just to quickly check whether a certain device actually shows up.
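If you do want to assign a device by hand, the VM config files live under /etc/pve/qemu-server/, and a passthrough entry looks roughly like this (01:00.0 is a placeholder address here; substitute the identifier lspci reports for your device):

```
hostpci0: 01:00.0,pcie=1
```

Note that, if I recall correctly, the pcie=1 flag only applies when the VM uses the q35 machine type; leave it off otherwise.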


New storage options

With 5.3 it is now also easier to add additional storage options to your setup. I haven't looked too much into this yet, but I plan to play around with it later, as something like CIFS might be a good choice for my specific setup. CephFS seems to be the highlight addition, but I am not familiar with it just yet.

For now, here are some screenshots that at least show off the new UI features in action:

There are now multiple new storage options you can choose from.
The CIFS (SMB) features exposed. It defaults to SMB3.
You're able to store pretty much anything on CIFS storage, which is nice.

Hosts file editing

This is both a big and small addition, as it can make it slightly easier to update your server's IP address and/or hostname, something I've found to be more complicated than it ought to be.

While it's fairly easy to just edit the hosts file directly from a prompt, it's kind of nice they exposed this in the UI too. It's likely still (too) challenging to update your host's IP address if it's already part of a cluster, and Proxmox's official recommendation for changing the IP address is to a) not do it, or b) completely re-install Proxmox. Yikes.

Overall I think it's a nice update, especially the PCI passthrough UI, and I'll certainly be playing with this feature some more.