Proxmox Offsite Backups

I was looking for a cheap but durable solution where I can store Proxmox offsite backups so that I can follow the 3-2-1 principle of backups, which means:

  • 3 backup copies
  • across 2 different types of media (e.g. disk, tape)
  • and 1 offsite

(The 2 different media one is a bit tricky nowadays, especially for small businesses or home users, so let's not worry about that one.)

RClone supports pretty much every cloud storage solution imaginable so I decided to go with it.


Create an LXC Container in Proxmox

First, we need to create an LXC container. You can download, for example, the latest LTS Ubuntu template directly from Proxmox. Just click the template button and then download.

LXC Templates

Next, right click your node (pve01) and click on “Create CT”. The only thing you need to change is the following: untick the “Unprivileged Container” checkbox.

Untick the "Unprivileged Container" checkbox

You can keep the defaults for CPU (just 1), storage (8GB), and memory (512MB). For network you can select DHCP for IPv4 and Static for IPv6. If you select DHCP for IPv6 and you don’t have an IPv6 DHCP server, your container might hang for about 5 minutes before booting up because it will be waiting for an IPv6 IP address.


Proxmox LXC Container no console and no network issue

Once your LXC container is created, start it up and click on the “Console” button. If you don’t see anything, press the Enter key. You should now see your console. If you don’t, read further.

LXC Container Console

There seems to be a bug with newer versions of Ubuntu, or newer versions of Proxmox; I'm not sure which. But essentially it looks like AppArmor is messing about.

To determine if this is the case, click on pve01 (your node), then on shell, and enter the command

pct enter 100

Replace 100 with your LXC container number.


You will see the terminal prompt change to your LXC container. We now have console access on that LXC container.

Terminal changed to container

Now run the command below. You should see output as in the screenshot below.

systemctl status systemd-networkd.service
systemctl good output

If you see a bunch of red errors and a message such as “Process: 664 ExecStart=/usr/lib/systemd/systemd-networkd (code=exited, status=226/NAMESPACE)” it means your network could not start up.
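If you'd rather check for this failure from a script, here is a minimal sketch. The helper function and the example line are my own; the "Process:" line format is taken from the systemd output above:

```shell
#!/bin/sh
# Pull the "status=..." field out of a systemd "Process:" line,
# so we can detect the 226/NAMESPACE failure programmatically.
parse_exit_status() {
  printf '%s\n' "$1" | sed -n 's/.*status=\([^)]*\).*/\1/p'
}

line='Process: 664 ExecStart=/usr/lib/systemd/systemd-networkd (code=exited, status=226/NAMESPACE)'
parse_exit_status "$line"   # prints: 226/NAMESPACE
```

You could feed it the output of `systemctl status systemd-networkd.service` and alert if the status is 226/NAMESPACE.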


Proxmox LXC Container no console and no network Fix

What you need to do is the following:

Still in the same shell, type in “exit” to get back to pve01. Or just close the pve01 shell and open it again.

Next, cd to this directory:

cd /etc/pve/lxc/

In here you will see your LXC container config files. When you create an LXC container using the Proxmox web UI, Proxmox creates this file that contains all the options you selected.

lxc folder

Now you need to edit the 100.conf file (replace 100 with your LXC container number)

nano 100.conf

And add this line to the end of the file:

lxc.apparmor.profile: unconfined
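For reference, after the edit the end of 100.conf will look something like this. The hostname and MAC address here are placeholders; the other values match the defaults we chose earlier, and the last line is the one we added:

```
arch: amd64
cores: 1
hostname: rclone-backup
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:xx:xx:xx,ip=dhcp,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-100-disk-0,size=8G
swap: 512
lxc.apparmor.profile: unconfined
```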

Now shut down the container (just right click it and shut it down) and start it up again. The console will now work.

Big up to this guy who found the solution: https://forum.proxmox.com/threads/ubuntu-24-04-lxc-containers-fail-to-boot-after-upgrade.145848/page-2


Where does Proxmox store its backups?

Next we need to figure out where Proxmox stores its backup files so that we can mount that location into the LXC container we just created. The easiest way to find this out is by clicking on “Datacenter”, just above the node (pve01), and click on Storage.

Proxmox backup location

If you want to, you can open a shell on pve01 and cd into this directory to check out the contents.


Mount the backup folder in the LXC Container

Next, we need to edit that 100.conf file again, just like we did above, so that we can mount this folder into our LXC container. So to recap, here are the 2 commands. Remember to replace 100 with your LXC container number.

cd /etc/pve/lxc/
nano 100.conf

Now we are going to add this line to the 100.conf file. You can add it just above the network config line:

mp0: /var/lib/vz/dump,mp=/mnt/proxmox-backups

Here is what this line does:

  • mp0 = mount point 0
  • /var/lib/vz/dump = the path on pve01 (the Proxmox node) where our backups are stored
  • mp = mount point on the LXC container
  • /mnt/proxmox-backups = the folder on the LXC container where we can find the backups. This can be anything, but using /mnt is good practice.
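Putting it together, the relevant part of 100.conf now looks something like this (the values other than the mp0 line will match whatever you chose when creating the container):

```
memory: 512
mp0: /var/lib/vz/dump,mp=/mnt/proxmox-backups
net0: name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,type=veth
```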

So now your config should look something like this:

Mount local proxmox backup folder

Shut down the LXC container again and start it up. Then type:

cd /mnt/proxmox-backups

Now you will be able to see your backup files

Mount folder in LXC container

This guy has an excellent video explaining the above in much more detail: https://www.youtube.com/watch?v=TxwHxW62S6Y


Where should you backup your backup files to?

I opted to use Cloudflare R2 as it is quite a lot cheaper than the other options (AWS S3, Azure Blob, etc), it offers 10GB of storage free per month, and 1 million free Class A operations. Class A operations are your write, list, etc. operations (reads fall under the cheaper Class B).

In order to use Cloudflare R2 storage, you need a Cloudflare account. You can then go directly to R2 and create a bucket:

Cloudflare R2 bucket

As a test, upload a file via the Cloudflare website. You will see after a while the bucket size and operations change as well:

Bucket size and operations


Setting up the Cloudflare R2 Bucket

To read more about how Cloudflare bills for its R2 buckets, check here: https://developers.cloudflare.com/r2/pricing/#storage-usage

You need to get your bucket URL. Cloudflare R2 buckets are AWS S3 compatible, which is why your Cloudflare R2 bucket will show an “S3 API” URL. Store this URL in a text file for now.

As a side note, these buckets are not publicly accessible; a bucket only becomes accessible once you connect a domain under the “Public Access” section.

You can read more here: https://developers.cloudflare.com/r2/buckets/public-buckets/

Cloudflare R2 bucket settings


Get an API key

Next, click on R2 Storage, then click on API, and then click on “Manage API Tokens” in the dropdown.

Click on API

Then click on “Create API Token”

Create API Token

Give your token a descriptive name, set the permissions and the R2 bucket it will have access to, and set up IP address filtering. If you have a static IP, it is a good idea to allow only that IP address. Then click on Create Token.

Token details


Save all this information to a text file:

Save all this API information


Installing RClone

Back in your LXC Container, run the following commands one by one to update all packages:

apt update
apt upgrade 

And then just run:

apt install rclone


Configuring RClone for Cloudflare R2 Buckets

Most of the below is taken from these sites, but I will still take you through it step by step and explain some of the concepts along the way:

To check Cloudflare R2 Bucket costs, check this link: https://developers.cloudflare.com/r2/pricing/

(As we progress, keep in mind that Cloudflare’s R2 buckets are compatible with AWS S3)


RClone Commands

The first command allows you to see where your rclone config file will be located, or is located if you already added a provider:

rclone config file

Next, run the below command to create a new remote and make the following selections

rclone config
  • Select “n” for a new provider
  • Then enter a name for the remote. I chose “CloudflareProxmoxBackups“.
    • You can have spaces in the name, but then you will need to enclose the remote name in double quotes when you refer to it in subsequent commands, so rather don’t use spaces.

Next, all the storage providers will be printed. You need to select the one that says “Amazon S3 Compliant”. In my case, it is option 5. So press 5 and hit enter.

Proxmox offsite backup config. Select Amazon S3 compliant

Then select “Cloudflare” from the next list. In my case, it was option 5 again.

Select Cloudflare

Then select option 1 to enter your Cloudflare credentials (RClone calls it AWS, but it means Cloudflare). Environment variables will most likely be more secure, but that won’t be covered here.

Credential selection

After selecting 1, you will be asked for your access key id, secret key, and endpoint. Enter those details.

Keys and secrets

Once done, rclone will show you your config. Just confirm it is correct and follow the prompts to exit the config.

As a final step, enter the below command:

rclone config file

And then enter the following:

nano <path to your config file here>

In my case I entered nano /root/.config/rclone/rclone.conf

Add this line to the bottom of the config file:

no_check_bucket = true

This seems to only be required if the API key is set to “Object Level Permissions”

Object level permissions
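For reference, the finished remote stanza in rclone.conf will look something like this (keys redacted, and the account ID in the endpoint is a placeholder; yours comes from the S3 API URL you saved earlier):

```
[CloudflareProxmoxBackups]
type = s3
provider = Cloudflare
access_key_id = <your access key id>
secret_access_key = <your secret access key>
endpoint = https://<account-id>.r2.cloudflarestorage.com
no_check_bucket = true
```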


Testing RClone to your Cloudflare R2 Bucket

Now that we have our remote added, let's run some tests. Always replace the remote name and bucket name with your own.

rclone lsf "CloudflareProxmoxBackups":proxmox-backups
rclone tree "CloudflareProxmoxBackups":proxmox-backups

The 2 commands above will both list folder contents, but the second command lists the contents in a tree-like structure. Here is the output of the “rclone tree” command.

Tree command output


Take Note: Even though you can have “folders” in Cloudflare R2, they are not treated as folders, but as prefixes. I will illustrate this below:

rclone copy myfile.txt "CloudflareProxmoxBackups":proxmox-backups/randomdata

In the above command “randomdata” is the “folder”. In the Cloudflare R2 GUI you will see this option:

View prefixes as directories selected

Disabling this checkbox flattens it out:

View prefixes as directories not selected


Copying data using RClone

The maximum file size you can upload to a Cloudflare R2 bucket is 5GB, unless you chunk the file, then you can upload much larger files. Chunking just breaks the file up into smaller parts and then transfers those parts.
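To get a feel for what chunking means in practice, here is a quick back-of-the-envelope calculation in shell. The 18GB figure is just an example size (it matches the backup uploaded later in this post), and 32MB is the chunk size we use below:

```shell
#!/bin/sh
# Rough number of 32MB chunks needed to upload an 18GB file
file_gb=18
chunk_mb=32
size_mb=$(( file_gb * 1024 ))
chunks=$(( (size_mb + chunk_mb - 1) / chunk_mb ))   # round up
echo "${chunks} chunks"   # prints: 576 chunks
```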

Our final command will look something like this, but you need to play around with the options, especially if you plan to transfer many small files later on, as these options are not suitable for that:

rclone sync /mnt/proxmox-backups/ "CloudflareProxmoxBackups":proxmox-backups/weeklybackups --s3-upload-cutoff=32M --s3-chunk-size=32M --transfers 1 --s3-upload-concurrency 4 --max-age 48h -P

Let's break down the command:

  • /mnt/proxmox-backups/ = The local folder that must be synced to Cloudflare R2
  • CloudflareProxmoxBackups = The remote that we set up just now
  • proxmox-backups = The name of the Cloudflare R2 bucket
  • weeklybackups = The name of the “folder” our backups must go into in our bucket
  • --s3-upload-cutoff = In mebibytes (just call it megabytes, not getting into this now)
    • Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size
    • Minimum is 5M. For large files, use something like 64MB or 256MB
  • --s3-chunk-size = Chunk size in which files will be uploaded
  • --transfers = The maximum number of concurrent file transfers. 1 is fine for our use case. For many small files, increase this
  • --s3-upload-concurrency = The number of chunks of the same file that are uploaded concurrently for multipart uploads and copies
  • --max-age = RClone will only copy files that are not older than 48 hours
    • In my case, I will sync my backups to Cloudflare R2 about 24 hours after my local backup is done, and only once per week, so 48 hours is plenty. I’m just adding another 24 hours in case. You need to modify this for your needs.
  • -P = Shows progress while transferring, otherwise it looks like nothing is happening. Useful for testing.


RClone copy vs Sync

For most of us, “rclone sync” will probably be fine, but keep in mind that the sync command will delete files from the destination (our Cloudflare R2 bucket) if they don’t exist on the source anymore. If this is not your intention, then use “rclone copy”.

In rclone, “sync” will synchronize a source directory with a destination: it will copy new files, update modified ones, and delete files that exist in the destination but not in the source. “copy” simply copies files from source to destination, leaving any existing files in the destination untouched. This makes “sync” ideal for keeping two directories completely identical, while “copy” better suited for one-way data transfer without deleting files on the destination.
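To make the difference concrete, here is a pure-shell simulation of the two behaviours on local directories. This is not rclone itself, just an illustration of the semantics (the file names are made up; rclone also has a --dry-run flag you can use to preview what a sync would delete):

```shell
#!/bin/sh
# Simulate "copy" vs "sync" semantics with two local directories
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/new-backup.vma.zst" "$dst/old-backup.vma.zst"

# "copy" semantics: transfer source files, leave extra destination files alone
cp "$src"/* "$dst"/
after_copy=$(ls "$dst" | sort | tr '\n' ' ')
echo "after copy: $after_copy"   # both files remain

# "sync" semantics: additionally delete destination files missing from source
for f in "$dst"/*; do
  [ -e "$src/$(basename "$f")" ] || rm -f "$f"
done
after_sync=$(ls "$dst")
echo "after sync: $after_sync"   # only new-backup.vma.zst remains

rm -rf "$src" "$dst"
```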


A note on RClone memory usage

According to the RClone documentation:

Multipart uploads will use --transfers * --s3-upload-concurrency * --s3-chunk-size extra memory. Single part uploads do not use extra memory.

So in our example, we will need at least this much memory:

1 (transfers) * 4 (s3-upload-concurrency) * 32 (s3-chunk-size) = 128MB memory
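The same arithmetic as a quick sanity check in shell, using the values from the sync command we built earlier:

```shell
#!/bin/sh
# Extra memory rclone needs for multipart uploads:
# transfers * upload concurrency * chunk size
transfers=1
concurrency=4
chunk_mb=32
echo "$(( transfers * concurrency * chunk_mb ))MB"   # prints: 128MB
```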


Prevent Cloudflare R2 Bucket costs from getting out of hand

One way to prevent your R2 bucket costs from getting out of hand is to add a policy to your bucket to delete objects older than X number of days. To do this, go to your bucket’s settings, scroll down to “Object Lifecycle Rules”, and click on “Add Rule”:

Object Lifecycle Rules

I already added a rule to delete objects after 8 days. The rule will also terminate incomplete multipart uploads after 1 day.

Delete after 8 days

You can also check bucket size and operations:

Bucket size and operations


Automate the RClone command

Timezones (only if you want to)

First, run the command “timedatectl” to check your timezone. Keep this in mind when you schedule your rclone command.

timedatectl

Check the timezones:

timedatectl list-timezones

This will list all the timezones. Pick your timezone, then set it:

timedatectl set-timezone <Country/City>

Replace “<Country/City>” with your timezone.

Crontab

Seeing as our command is quite long, let's first create an executable file that our crontab can execute. This is much cleaner. Make sure you are in the /root directory, or any directory of your choice.

nano /root/rclone_copy.sh

Paste this command into this file. Remember to modify it for your needs:

#!/bin/bash
rclone sync /mnt/proxmox-backups/ "CloudflareProxmoxBackups":proxmox-backups/weeklybackups --s3-upload-cutoff=32M --s3-chunk-size=32M --transfers 1 --s3-upload-concurrency 4 --max-age 48h -P

Next, make this new .sh file executable

chmod +x rclone_copy.sh

Run “crontab -e“. If asked for an option, choose option 1 for nano.

My Proxmox backs up every Friday at 10pm, so I will schedule my rclone to run every Saturday at 10pm. So my cron job looks like this:

0 22 * * 6 /root/rclone_copy.sh

You can use this site to test out your cron schedule: https://crontab.guru/

cron schedule

Now wait for the time to come and check your Cloudflare R2 Bucket!


Cloudflare R2 Bucket metrics after a successful upload

After a successful upload of about 18GB, we can see Cloudflare’s metrics are updated:

File list in R2 bucket

Because Cloudflare bills based on average storage usage over a time period, it will always show average usage in the metrics tab.

Average storage metrics tab

Categories: Proxmox

necrolingus

Tech enthusiast and home labber