1. Intro

These are my first steps into serious virtualization, necessitated by the fact that my current, and only, physical server runs CentOS 6, which reaches end-of-life on 2020-11-30.

VMware ESXi refused to play nicely with my lab server, so I looked to Proxmox for further endeavours into virtualization at home.


2. Notes

Notes are in no particular order, but usually noted down as they arise.


2.1. Proxmox main host

2x Intel Xeon X5560 @ 2.80 GHz (16 cores)
Perc 6i
6x 2 TB SAS (mixed Seagate Constellations and IBM ESXS), RAID5; 12 TB total, 9 TB usable

  • The filesystem used upon Proxmox install was XFS.
  • Proxmox was installed on the RAID array, as I wasn't able to install it to the internal USB stick.
  • The swap partition was 2 GB.


2.2. Extra configs on proxmox host

  • # cat /etc/netplan/00-installer-config.yaml
    # This is the network config written by 'subiquity'
    network:
      ethernets:
        ens18:
          addresses:
            -
          gateway4:
          nameservers:
            addresses:
              -
            search:
              - skynet-tng.internal
      version: 2
  • # apt install ntp htop ncdu lsb-release sudo iotop iftop qemu-guest-agent
  • # nano /etc/ntp.conf
  • NTP config:
    • server iburst
      pool 0.se.pool.ntp.org iburst
      pool 1.se.pool.ntp.org iburst
      pool 2.se.pool.ntp.org iburst
      pool 3.se.pool.ntp.org iburst
    • See reference [8].
  • Monitoring Dell PERC raids; see Using perccli with Dell PE R710 and Perc 6/i.
    Monitoring HP Smart Array raids; see Using ssacli with HP Proliant ML150 G6 and HP Smart Array 410i Controller.


2.3. No guest agent configured

See references [4] and [5].
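As a quick sketch of the fix (the vmid 999 below is a placeholder for your own VM): enable the agent option on the Proxmox host, then install and start the agent inside the guest.

```shell
# On the Proxmox host: enable the QEMU guest agent option for the VM
qm set 999 --agent 1

# Inside a Debian/Ubuntu guest: install and start the agent
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent
```

The VM needs a full power cycle (not just a reboot) for the agent option to take effect.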


2.4. Check Proxmox repos

Specifically that the no-subscription repo is active.

See reference [7].
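For reference, on Proxmox VE 6 (Debian buster, current at the time of writing) the no-subscription repo line from reference [7] looks like the below; adjust the suite name for other releases, and comment out the enterprise repo in /etc/apt/sources.list.d/pve-enterprise.list if you have no subscription.

```
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
```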


2.5. Upload Ubuntu Server-ISO to Proxmox repository

Upload via the web GUI is possible too, but I encountered a problem with ISOs uploaded this way: for some reason the ISO wasn't bootable after upload.

The ISO is visible in the web-GUI; Folder view/Storage/local/Content.

root@dragonborn:~# cd /var/lib/vz/template/iso/
root@dragonborn:/var/lib/vz/template/iso# wget http://cdimage.ubuntu.com/releases/18.04.4/release/ubuntu-18.04.4-server-amd64.iso



2.6. Installed stuff and general settings

  • LAMP (Ubuntu Server 18.04.4 LTS, Apache 2.4.29, MySQL 5.7.29, PHP 7.2)
  • Tasksel at install: choose LAMP server and OpenSSH server.
  • After install: apt install qemu-guest-agent htop ncdu
  • UFW config:
    • # ufw default deny incoming
      # ufw default allow outgoing
      # ufw allow ssh
      # ufw allow http
      # ufw allow https
      # ufw allow from
      # ufw enable
      # ufw status
  •  ...


2.7. Uninstall ceph

Follow the instructions very carefully, or you may end up with a broken system!

See reference [6].


2.8. Config for root-user on Proxmox host

  • Edit /root/.bashrc and enable aliases as needed.


2.9. Cloning and migrating virtual machines

Migrating a VM to another node in a cluster seems to be a more stable affair when done on the command line, in separate steps.

  1. Shut down the VM.
  2. Clone the VM to the same node.
  3. Migrate the clone to the new node.
  4. Don't forget to power on the clone or original as needed!


2.9.1. Clone a vm

# qm clone vmid newid options

# qm clone 116 101 -name dns2-clone


2.9.2. Migrate a virtual machine

# qm migrate vmid target-node options

# qm migrate 116 cyndane4  
# qm migrate 103 cyndane4 -migration_type insecure

Usable options below.

-migration_type insecure
If on a private network, the insecure option disables ssh-tunneling for a performance increase.

-online 1
Use an online migration if the VM is running. Might be iffy, and is not always entirely stable, but the option to do it is available.



3. Errors and problems and how to resolve them 

3.1. Can't shutdown or stop a VM
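A sketch of the escalation path that has worked for me (999 is a placeholder vmid): try a clean shutdown first, then force a stop, and as a last resort kill the KVM process for that vmid by hand.

```shell
# Ask the guest to shut down cleanly, give up after 60 seconds
qm shutdown 999 --timeout 60

# Force-stop the VM if the shutdown hangs
qm stop 999

# Last resort: find the KVM process started for this vmid and kill it
ps aux | grep "/usr/bin/kvm -id 999"
kill -9 <pid>
```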



3.2. VM is locked (snapshot-delete)
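When a snapshot-delete leaves a VM locked, removing the lock from the host is usually enough; a sketch (999 is a placeholder vmid):

```shell
# Check which lock is set in the VM config
qm config 999 | grep lock

# Remove the lock
qm unlock 999
```

A leftover half-deleted snapshot may then need to be removed again from the GUI or with qm delsnapshot.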



4. Cluster nodes

I'm creating a cluster from my three nodes.

 Node name   IP   Comment
 Dragonborn       Main node
 Cyndane3         Secondary node
 Smaug            Witness node; prevents "split brain" in the cluster. This is a mini-PC with no storage pools; it just sits there witnessing cluster-stuff.


4.1. Main node 

Login to the main node and create the cluster.

# pvecm create skynet
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.


Check the status.

 # pvecm status
Quorum information
Date:             Sat Nov 28 17:45:13 2020
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1
Quorate:          Yes

Votequorum information
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1   
Flags:            Quorate  

Membership information
   Nodeid      Votes Name
0x00000001          1 (local)


4.2. Secondary node

Login to the secondary node and connect to the main node.

# pvecm add

If you see "successfully added node 'cyndane3' to cluster" all went well.

In my case I had a test VM already on there, and the cluster connection failed. After stopping and removing it, I redid the cluster connection and it worked as expected.


4.3. Witness node

# pvecm add


4.4. Finalization

Check the cluster status on the main node.

# pvecm status
Cluster information
Name:             skynet
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
Date:             Sat Nov 28 20:55:27 2020
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1.d
Quorate:          Yes

Votequorum information
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2   
Flags:            Quorate  

Membership information
   Nodeid      Votes Name
0x00000001          1 (local)
0x00000002          1
0x00000003          1

Lovely, it's working!


4.5. Listing the nodes

Run the command below to list the nodes only.

# pvecm nodes 

Membership information
   Nodeid      Votes Name
        1          1 dragonborn (local)
        2          1 smaug
        3          1 cyndane3



5. Shared cluster storage

So I have three Proxmox nodes. One is a witness node and doesn't do much besides, well, watching the cluster...
It does have a small 120 GB SSD though, of which about 60 GB is assigned to local-lvm and the rest to local, so why not use it for something worthwhile, like a cluster-shared NFS for e.g. ISO images?

Said and done!


5.1. Howto

The default ISO folder on Proxmox is located at /var/lib/vz/template/iso, so that's what we'll use for the NFS-share.


5.1.1. Install the NFS-server

root@smaug# apt install nfs-kernel-server


5.1.2. Configure the share

We'll share this to all clients on the local subnet.

root@smaug# nano /etc/exports
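The export line itself is elided above; a minimal sketch of the /etc/exports entry, assuming the local subnet is 192.168.1.0/24:

```
/var/lib/vz/template/iso 192.168.1.0/24(rw,sync,no_subtree_check)
```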


Make sure the shared folder isn't accessible by root only.

root@smaug# chown -Rv nobody.nogroup /var/lib/vz/template/iso
root@smaug# chmod -Rv 777 /var/lib/vz/template/iso


5.1.3. Activate the NFS share

root@smaug# exportfs -a
root@smaug# systemctl restart nfs-kernel-server
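Before touching the clients, the export can be verified with the standard tools:

```shell
# List what the server is currently exporting, with options
root@smaug# exportfs -v

# Query the export list from any client
root@cyndane3# showmount -e smaug
```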


5.1.4. Connecting the clients

I chose to add a line in /etc/fstab to automount the share.

root@cyndane3# nano /etc/fstab

 /var/lib/vz/template/iso             nfs4          defaults,user,exec


And finally, mount the share and check the contents.

root@cyndane3# mount -a
root@cyndane3# ll /var/lib/vz/template/iso
total 835596
drwxrwxrwx 2 nobody nogroup      4096 jan 11 17:27 .
drwxr-xr-x 5 root   root         4096 dec 12 11:52 ..
-rw-r--r-- 1 root   root            0 jan 11 17:27 000.this.folder.is.on.smaug.txt
-rwxrwxrwx 1 nobody nogroup 855638016 jan 11 17:17 focal-legacy-server-amd64.iso


That's all there is to it.

Don't forget: the ISO images only need to be uploaded to the NFS share on Smaug in this case. They will then be visible and accessible to all the nodes.



6. Proxmox containers

6.1. General container management

Create a container with ID 999 on the local node's local-lvm storage, using an Ubuntu 20.04 template, a 4 GB disk, 2 CPU cores, a NIC called eth0 with IP, gateway and DNS set, and with nesting enabled.

# pct create 999 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz --rootfs local-lvm:4 --cores 2 --net0 name=eth0,bridge=vmbr0,ip=,gw= --unprivileged 1 --password p@ssw0rd --features nesting=1 --nameserver


Delete and purge a container with id 999

# pct destroy 999 --purge 1


Enter the container console on the node you're logged in to.

# pct enter 999


Start and stop a container with id 999

# pct start 999
# pct stop 999


Show a container's configuration.

# pct config 999


All the settings below can be changed live, except for the disk reduction process, which must be done offline.

Add a second NIC to a container:
# pct set 999 -net1 name=eth1,bridge=vmbr0,ip=,gw=

Increase a container's RAM.
# pct set 999 -memory 1024

Increase a container's disk size.
# pct resize 999 rootfs 6G

Decreasing the container's disk size is not supported from the GUI, but is possible to do offline.
Be sure to back up the container first, or at least take a snapshot, as this is a sensitive operation!
Go to the node containing the container and run the below commands as root.

List the containers:
# pct list

Stop the particular container you want to resize:
# pct stop 999

Find out its path on the node:
# lvdisplay | grep "LV Path\|LV Size"

For good measure, run a file system check:
# e2fsck -fy /dev/pve/vm-999-disk-0

Resize the file system:
# resize2fs /dev/pve/vm-999-disk-0 10G

Resize the logical volume:
# lvreduce -L 10G /dev/pve/vm-999-disk-0

Edit the container's conf; look for the rootfs line and change the size accordingly:
# nano /etc/pve/lxc/999.conf
rootfs: local-lvm:vm-999-disk-0,size=32G
rootfs: local-lvm:vm-999-disk-0,size=10G

Start it:
# pct start 999

Enter it and check the new size:
# pct enter 999
# df -h


For convenience, you can create a simple bash script with the below contents to create containers with a given ID.

#!/bin/bash
echo "Run with /path/to/script/create-ct.sh <vmid>"
/usr/sbin/pct create $1 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz --rootfs backup:8 --cores 2 --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1 --password p@ssw0rd --features nesting=1

/usr/sbin/pct start $1

/usr/sbin/pct list


6.2. Using docker in a Proxmox container

Installing docker in a Proxmox container isn't too dissimilar to the regular way.

Install docker as outlined in Docker notes for use with Ubuntu, then continue with installing docker-composer and pull the containers you need or run a docker-compose script.

# apt install docker-compose

# docker pull portainer/portainer-ce

# docker run -d --name=Portainer --hostname=Portainer --network=host --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data -e TZ='Europe/Stockholm' portainer/portainer-ce
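The same Portainer container as a docker-compose script, equivalent to the docker run line above (a sketch; save as docker-compose.yml and start with docker-compose up -d):

```
version: "3"
services:
  portainer:
    image: portainer/portainer-ce
    container_name: Portainer
    hostname: Portainer
    network_mode: host
    restart: always
    environment:
      - TZ=Europe/Stockholm
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:
```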



7. Sources

  1. https://endoflife.software/operating-systems/linux/centos
  2. https://www.howtoforge.com/tutorial/how-to-install-proxmox-ve-4-on-debian-8-jessie/
  3. http://cdimage.ubuntu.com/releases/18.04.4/release/
  4. https://jonspraggins.com/the-idiot-installs-the-qemu-agent-on-a-windows-10-vm-on-proxmox/
  5. https://pve.proxmox.com/wiki/Qemu-guest-agent#Linux
  6. https://forum.proxmox.com/threads/remove-ceph.59576/
  7. https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
  8. https://www.pool.ntp.org/zone/se
  9. https://www.howtoforge.com/tutorial/how-to-configure-a-proxmox-ve-4-multi-node-cluster/
  10. https://en.wikipedia.org/wiki/Split-brain_(computing)
  11. https://linuxconfig.org/how-to-set-up-a-nfs-server-on-debian-10-buster
  12. https://pve.proxmox.com/wiki/Linux_Container
  13. https://pve.proxmox.com/pve-docs/pct.1.html
  14. https://www.prado.lt/proxmox-shrinking-disk-of-an-lvm-backed-container#comment-117405
  15. https://pve.proxmox.com/pve-docs/pve-admin-guide.html