Intro

My first steps into serious virtualization, necessitated by my current, and only, physical server running CentOS 6, which reaches end-of-life on 2020-11-30.

VMware ESXi refused to play nicely with my lab-server, so I looked to Proxmox for further endeavours into virtualization at home.

 

Notes

Notes are in no particular order, but usually noted down as they arise.

 

Proxmox main host

2x Intel Xeon X5560 @ 2.80 GHz (16 cores)
Perc 6i
6x 2 TB SAS (mixed Seagate Constellations and IBM ESXS), RAID5, 12 TB total; 9 TB usable
1x CDROM
24 GB RAM

  • The filesystem used upon Proxmox install was XFS.
  • Proxmox was installed on the raid array, as I wasn't able to install it to the internal USB-stick.
  • The swap partition was 2 GB.

 

Extra configs on the Proxmox host

  • # cat /etc/netplan/00-installer-config.yaml
    
    # This is the network config written by 'subiquity'
    network:
      version: 2
      ethernets:
        ens18:
          addresses: [192.168.0.23/24]
          nameservers:
            addresses: [192.168.0.3,192.168.0.12,192.168.0.21,192.168.0.1]
            search: [skynet-tng.internal]
          routes:
            - to: default
              via: 192.168.0.1
  • # apt install ntp htop ncdu lsb-release sudo iotop iftop qemu-guest-agent
  • # nano /etc/ntp.conf
    • NTP servers:
      server 192.168.0.1 iburst
      pool 0.se.pool.ntp.org iburst
      pool 1.se.pool.ntp.org iburst
      pool 2.se.pool.ntp.org iburst
      pool 3.se.pool.ntp.org iburst
    • See reference [8]. Applying the changes is shown right after this list.
  • Monitoring Dell PERC raids; see Using perccli with Dell PE R710 and Perc 6/i.
    Monitoring HP Smart Array raids; see Using ssacli with HP Proliant ML150 G6 and HP Smart Array 410i Controller.
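
After editing the netplan and NTP configs above, the changes can be applied without a reboot; a minimal sketch, assuming the ntp service shipped by the ntp package:

# netplan apply
# systemctl restart ntp
# ntpq -p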

 

No guest agent configured

See references [4, 5].
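
In short, for a Linux guest: install and start the agent inside the VM, then enable the agent option for the VM on the Proxmox host (vmid 121 below is just an example); the option takes effect the next time the VM is started.

Inside the guest:
# apt install qemu-guest-agent
# systemctl enable --now qemu-guest-agent

On the Proxmox host:
# qm set 121 --agent enabled=1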

 

Check Proxmox repos

Specifically that the no-subscription repo is active.

See reference [7].
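
For reference, on a Proxmox VE 6 / Debian buster install (which matches the late-2020 timeframe of these notes) the repo line looks roughly like this; adjust the release name if the node runs something else:

# cat /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription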

  

Upload Ubuntu Server-ISO to Proxmox repository

Upload via the web-GUI is possible too, but I encountered a problem with ISOs uploaded that way - the ISO wasn't bootable for some reason after the upload.

The ISO is visible in the web-GUI; Folder view/Storage/local/Content.

root@dragonborn:~# cd /var/lib/vz/template/iso/
root@dragonborn:/var/lib/vz/template/iso# wget http://cdimage.ubuntu.com/releases/18.04.4/release/ubuntu-18.04.4-server-amd64.iso
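
To make sure the download isn't corrupt, the ISO can be checked against the SHA256SUMS file published in the same release directory:

root@dragonborn:/var/lib/vz/template/iso# wget http://cdimage.ubuntu.com/releases/18.04.4/release/SHA256SUMS
root@dragonborn:/var/lib/vz/template/iso# sha256sum -c --ignore-missing SHA256SUMS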

 

 

Installed stuff and general settings

  • LAMP (Ubuntu Server 18.04.4 LTS, Apache 2.4.29, MySQL 5.7.29, PHP 7.2)
  • Tasksel at install: choose LAMP server and OpenSSH server.
  • After install: apt install qemu-guest-agent htop ncdu
  • UFW config:
    • # ufw default deny incoming
      # ufw default allow outgoing
      # ufw allow ssh
      # ufw allow http
      # ufw allow https
      # ufw allow from 192.168.0.0/24
      # ufw enable
      # ufw status
  •  ...

 

Uninstall ceph

Follow the instructions very carefully, or you may end up with a broken system!

See reference [6].

 

Config for root-user on Proxmox host

  • Edit /root/.bashrc and enable aliases as needed.
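
On a stock Debian root account the relevant lines are already present in /root/.bashrc, just commented out; they look roughly like this:

# nano /root/.bashrc
export LS_OPTIONS='--color=auto'
eval "$(dircolors)"
alias ls='ls $LS_OPTIONS'
alias ll='ls $LS_OPTIONS -l'
alias l='ls $LS_OPTIONS -lA'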

 

Cloning and migrating virtual machines

Migrating a vm to another node in a cluster seems to be a more stable affair if done on the command line, splitting it into a few distinct steps.

  1. Shut down the vm.
  2. Clone the vm to the same node.
  3. Migrate the clone to the new node.
  4. Don't forget to power on the clone or original as needed!

 

Clone a vm

# qm clone vmid newid options

# qm clone 116 101 -name dns2-clone

 

Migrate a virtual machine

# qm migrate vmid target-node options

# qm migrate 116 cyndane4  
# qm migrate 103 cyndane4 -migration_type insecure

Options

Usable options below.

-migration_type insecure
If on a private network, the insecure option disables ssh-tunneling for a performance increase.

-online 1
Use an online migration if the vm is running. It is not always entirely stable, but the option is available.

 

Resize a VM's LVM

So the VM's hard disk image in Proxmox is getting too small?

Proxmox has a built-in mechanism for resizing disk images that seems to work better than first creating a separate empty disk image and then attaching it to the VM.

See the guide at https://pve.proxmox.com/wiki/Resize_disks.
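
A minimal sketch of the process, assuming VM 116 with its first disk on scsi0 and a default Ubuntu LVM layout (adjust the disk, device and volume names to your setup). On the Proxmox host:

# qm resize 116 scsi0 +10G

Then inside the guest (growpart comes from the cloud-guest-utils package):

# growpart /dev/sda 3
# pvresize /dev/sda3
# lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv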

 

 

Errors and problems and how to resolve them 

Can't shut down or stop a VM

https://forum.proxmox.com/threads/bug-vm-dont-stop-shutdown.9020/post-51154
https://forum.proxmox.com/threads/how-to-unlock-vm-if-it-doesnt-start.7792/

In short:

# qm unlock <vmid>

# qm unlock 121
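
If the VM still refuses to die after unlocking, the threads above boil down to killing the KVM process as a last resort; a sketch, assuming vmid 121:

# cat /var/run/qemu-server/121.pid
# kill -9 <pid from the file above>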

 

 

 

VM is locked (snapshot-delete)

https://forum.proxmox.com/threads/cannot-remove-snapshot-vm-is-locked-snapshot-delete.37567/
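
In short: unlock the VM and then force-remove the snapshot so the configuration gets cleaned up (the vmid and snapshot name below are just examples):

# qm unlock 121
# qm delsnapshot 121 before-upgrade --force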

 

Cluster nodes

Creating a cluster of nodes. I have three:

 Node name    IP            Comment
 Dragonborn   192.168.0.9   Main node
 Cyndane3     192.168.0.8   Secondary node
 Smaug        192.168.0.7   Witness node; to prevent "split brain" in the cluster. This is a minipc with no storage pools, it just sits there witnessing cluster-stuff.

 

Main node 

Login to the main node and create the cluster.

# pvecm create skynet
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
...

 

Check the status.

 # pvecm status
Quorum information
------------------
Date:             Sat Nov 28 17:45:13 2020
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1   
Flags:            Quorate  

Membership information
----------------------
   Nodeid      Votes Name
0x00000001          1 192.168.0.9 (local)

 

Secondary node

Login to the secondary node and connect to the main node.

# pvecm add 192.168.0.9

If you see "successfully added node 'cyndane3' to cluster" all went well.

In my case I had a test-vm already on that node, and the cluster join failed. After stopping and removing it, I redid the join and it worked as expected.

 

Witness node

# pvecm add 192.168.0.9

 

Finalization

Check the cluster status on the main node.

# pvecm status
Cluster information
-------------------
Name:             skynet
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Sat Nov 28 20:55:27 2020
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1.d
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2   
Flags:            Quorate  

Membership information
----------------------
   Nodeid      Votes Name
0x00000001          1 192.168.0.9 (local)
0x00000002          1 192.168.0.7
0x00000003          1 192.168.0.8


Lovely, it's working!

 

Listing the nodes

Run the below command to list the nodes only.

# pvecm nodes 

Membership information
----------------------
   Nodeid      Votes Name
        1          1 dragonborn (local)
        2          1 smaug
        3          1 cyndane3

 

 

Shared cluster storage

So I have three Proxmox nodes. One is a witness node and doesn't do much else except for, well, watching the cluster...
It does have a small 120 GB SSD though, of which about 60 GB are assigned to local-lvm and the rest to local, so why not use it for something worthwhile, like a cluster-shared NFS for e.g. ISO images?

Said and done!

 

Howto

The default ISO folder on Proxmox is located at /var/lib/vz/template/iso, so that's what we'll use for the NFS-share.

 

Install the NFS-server

root@smaug# apt install nfs-kernel-server

 

Configure the share

We'll share this to all clients on the local subnet.

root@smaug# nano /etc/exports
/var/lib/vz/template/iso 192.168.0.0/24(rw,async,fsid=root,no_root_squash,no_subtree_check,insecure)

 

Make sure the shared folder isn't accessible by root only.

root@smaug# chown -Rv nobody.nogroup /var/lib/vz/template/iso
root@smaug# chmod -Rv 777 /var/lib/vz/template/iso

 

Activate the NFS share

root@smaug# exportfs -a
root@smaug# systemctl restart nfs-kernel-server

 

Connecting the clients

If the nfs-client software isn't installed yet, you need to do that first.

root@cyndane3# apt install nfs-common

Then I added a line in /etc/fstab to automount the share at boot.

root@cyndane3# nano /etc/fstab
# Alternative options; auto,nofail,relatime,nolock,intr,tcp,actimeo=1800
192.168.0.7:/var/lib/vz/template/iso /var/lib/vz/template/iso   nfs4   defaults,user,exec   0   0

 

And finally, mount the share and check the contents.

root@cyndane3# mount -a
root@cyndane3# ll /var/lib/vz/template/iso
total 835596
drwxrwxrwx 2 nobody nogroup      4096 jan 11 17:27 .
drwxr-xr-x 5 root   root         4096 dec 12 11:52 ..
-rw-r--r-- 1 root   root            0 jan 11 17:27 000.this.folder.is.on.smaug.txt
-rwxrwxrwx 1 nobody nogroup 855638016 jan 11 17:17 focal-legacy-server-amd64.iso

 

That's all there is to it.

Don't forget that the ISO images only need to be uploaded to the NFS share on Smaug in this case; they will then be visible and accessible to all the nodes.
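
As an alternative to the fstab mounts, the export could also be added as a cluster-wide NFS storage in Proxmox itself (Datacenter/Storage in the web-GUI, or pvesm on the CLI); a sketch, with "iso-nfs" as a made-up storage ID:

# pvesm add nfs iso-nfs --server 192.168.0.7 --export /var/lib/vz/template/iso --content iso
# pvesm status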

 

 

Proxmox containers

General container management

Create a container with id 999 on the local node's local-lvm storage, using an Ubuntu 20.04 template, a 4 GB disk, 2 CPU cores, a NIC called eth0 with IP, gateway and DNS set, and with nesting enabled.

# pct create 999 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz --rootfs local-lvm:4 --cores 2 --net0 name=eth0,bridge=vmbr0,ip=192.168.0.30/24,gw=192.168.0.1 --unprivileged 1 --password p@ssw0rd --features nesting=1 --nameserver 192.168.0.12

 

Delete and purge a container with id 999

# pct destroy 999 --purge 1

 

Enter the container console on the node you're logged in to.

# pct enter 999

 

Start and stop a container with id 999

# pct start 999
# pct stop 999

 

Show a container's configuration.

# pct config 999

 

All the below settings can be set live, except for the disk reduction process that must be done offline.

Add a second NIC to a container:
# pct set 999 -net1 name=eth1,bridge=vmbr0,ip=192.168.0.31/24,gw=192.168.0.1

Increase a container's RAM.
# pct set 999 -memory 1024

Increase a container's disk size.
# pct resize 999 rootfs 6G

Decreasing the container's disk size is not supported from the GUI, but is possible to do offline.
Be sure to back up the container first, or at least take a snapshot, as this is a sensitive operation!
Go to the node containing the container and run the below commands as root.

List the containers:
# pct list

Stop the particular container you want to resize:
# pct stop 999

Find out its path on the node:
# lvdisplay | grep "LV Path\|LV Size"

For good measure, one can run a file system check:
# e2fsck -fy /dev/pve/vm-999-disk-0

Resize the file system:
# resize2fs /dev/pve/vm-999-disk-0 10G

Resize the local volume:
# lvreduce -L 10G /dev/pve/vm-999-disk-0

Edit the container's conf, look for the rootfs line and change it accordingly:
# nano /etc/pve/lxc/999.conf
rootfs: local-lvm:vm-999-disk-0,size=32G
>>
rootfs: local-lvm:vm-999-disk-0,size=10G

Start it:
# pct start 999

Enter it and check the new size:
# pct enter 999
# df -h

 

For convenience, you can create a simple bash-script with the below contents to create several containers. 

create-ct.sh:
#!/bin/bash
# Run with: /path/to/script/create-ct.sh <vmid>
[ -z "$1" ] && { echo "Usage: $0 <vmid>"; exit 1; }

/usr/sbin/pct create "$1" local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz --rootfs backup:8 --cores 2 --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1 --password p@ssw0rd --features nesting=1

/usr/sbin/pct start $1

/usr/sbin/pct list

 

Using docker in a Proxmox container

Installing docker in a Proxmox container isn't too dissimilar to the regular way.

Install docker as outlined in Docker notes for use with Ubuntu, then continue with installing docker-compose and pull the images you need or run a docker-compose script.

# apt install docker-compose

# docker pull portainer/portainer-ce

# docker run -d --name=Portainer --hostname=Portainer --network=host --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data -e TZ='Europe/Stockholm' portainer/portainer-ce
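
If docker refuses to start inside an unprivileged container, the usual suspects are the container features; nesting (and often keyctl) need to be enabled, then the container restarted, for example:

# pct set 999 -features nesting=1,keyctl=1
# pct stop 999
# pct start 999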

 

 

Sources

  1. https://endoflife.software/operating-systems/linux/centos
  2. https://www.howtoforge.com/tutorial/how-to-install-proxmox-ve-4-on-debian-8-jessie/
  3. http://cdimage.ubuntu.com/releases/18.04.4/release/
  4. https://jonspraggins.com/the-idiot-installs-the-qemu-agent-on-a-windows-10-vm-on-proxmox/
  5. https://pve.proxmox.com/wiki/Qemu-guest-agent#Linux
  6. https://forum.proxmox.com/threads/remove-ceph.59576/
  7. https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
  8. https://www.pool.ntp.org/zone/se
  9. https://www.howtoforge.com/tutorial/how-to-configure-a-proxmox-ve-4-multi-node-cluster/
  10. https://en.wikipedia.org/wiki/Split-brain_(computing)
  11. https://linuxconfig.org/how-to-set-up-a-nfs-server-on-debian-10-buster
  12. https://pve.proxmox.com/wiki/Linux_Container
  13. https://pve.proxmox.com/pve-docs/pct.1.html
  14. https://www.prado.lt/proxmox-shrinking-disk-of-an-lvm-backed-container#comment-117405
  15. https://pve.proxmox.com/pve-docs/pve-admin-guide.html
  16. https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-22-04

 

 

 

 

 

 

 

 

 

 

 

 
