Scheduled backups / ssh / strftime substitution

#1 Wed, 06/14/2017 - 02:56
konamacphee


Hello all,

I'm using Webmin 1.831 and Virtualmin 5.07, with Authentic theme 18.49-7, on CentOS 7.

I run scheduled Virtualmin backups, with the destination being a remote filesystem accessed over ssh.

The backup destination ("File on server") has the form /home/user/Backups/%Y-%m-%d, with directory auto-creation turned on and one backup file per server. So, for example, tomorrow morning's Virtualmin virtual server backups will appear in a directory called /home/user/Backups/2017-06-14/ as a collection of individual server backups, each consisting of three files (where "domain" is the main domain for the virtual server; see the quick illustration of the date expansion after the list):

domain.tar.gz
domain.tar.gz.dom
domain.tar.gz.info
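
For anyone unfamiliar with the % codes: they are standard strftime codes, so you can preview how a given pattern expands using the date command (just an illustration of the substitution, not how Virtualmin expands it internally):

date '+/home/user/Backups/%Y-%m-%d'
/home/user/Backups/2017-06-14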

This is all working fine and dandy; however, "Delete old backups" is not working in this configuration. The scheduled backup is certainly checking for old backups, but it isn't finding them:

Deleting backups from /home/user/Backups/%Y-%m-%d on SSH server xyz.com older than 32 days ..
.. no backups to delete were found

Presumably this is because the old backups purge step isn't doing the necessary strftime substitution. Is this a known issue, or is there an extra configuration step I'm missing in terms of fully enabling the strftime functionality?

Cheers!

Wed, 06/14/2017 - 23:30
unborn

Hi, from the help text... this is working for me on a local USB HDD and on my home server (ssh connection; both boxes running Debian).

Delete old backups

If you are backing up to a local directory or S3 bucket whose path is based on the backup time, this option can be used to delete backups older than some number of days. This can be useful to reduce the amount of disk space needed to store old data.

However, this is only usable when the backup destination directory is named like /backup/%d-%m-%Y, or the filename is named like /backup/virtualmin-%d-%m-%Y.tar.gz. This is required for Virtualmin to be able to work out which files and directories were created by the backup process.

You must also check the box Do strftime-style time substitutions on file or directory name, or else date and time codes in the backup path will not be converted when the backup is run.

The maximum backup age is entered in days, but you can use fractions like 0.25 for six hours.
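
(Just to spell out the arithmetic behind the fractional ages: 0.25 of a day is 0.25 x 24 = 6 hours, and 0.5 of a day is 12 hours.)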

Configuring/troubleshooting Debian servers is always great fun

Thu, 06/22/2017 - 03:30
konamacphee

Thanks for the reply!

In terms of the strftime substitution, are you putting the datestamp in the parent directory (/backup/%d-%m-%Y) or in the actual filename (/backup/virtualmin-%d-%m-%Y.tar.gz)?

I'm using the directory approach (because my VPSs are saved as a collection of individual files within that directory). The directory is successfully created (so yes, the strftime tickbox is ticked); it's just the cleanup of old files that's failing.

When I get time, I'll go and dig into the Virtualmin backup script to find out what's going awry. Any trailing slash on the directory name is stripped away when you save the scheduled backup settings; perhaps the cleanup side of things is not reinstating it where required.

Thu, 06/29/2017 - 15:58
scotwnw

Here are my paths, which are working just fine for incremental backups. Note the 'incr' in the folder name; a full backup will have 'full' there instead.

Deleting backups from /home/ript/md0/hlz-sva/client-bu/hlz/incr-%m-%d-%Y on SSH server nas-1.domain.com older than 30 days ..
Deleting file /home/ript/md0/hlz-sva/client-bu/hlz/incr-05-29-2017 via SSH, which is 30 days old ..
.. deleted 4 kB.
.. deleted 1 old backups, and skipped 26 that were not old enough

So it's creating this folder 'incr-05-29-2017' and putting 3 files in it for that domain. Then the folder gets deleted when it's too old. Sounds like yours should be working; it looks identical to mine.

Fri, 06/30/2017 - 03:00
konamacphee

Interesting that this is working for you - the only difference I can see is that your folder names have a prefix as well as the auto-generated datestamp. (Mine just have the datestamp). Haven't had time to dig into this yet - I'll try changing the folder name to include a prefix and see what happens!

Sat, 07/01/2017 - 04:46
konamacphee

Ok, mystery solved - this fix will help anybody who is backing up via ssh to directories that are symlinks on the remote machine.

My destination server is running Slackware, with the backup ssh account using bash as its default shell, but things should be pretty similar on other distros. The notes below assume that USER is the user account on the destination machine.

I back up to directories of the form /home/USER/backup/%d-%m-%Y, where /home/USER/backup is a symlink to another directory (one on an external USB disk, as it happens).

The script /usr/libexec/webmin/virtual-server/backups-lib.pl uses the command "ls -l" to list the directory contents:

       # Build an "ls -l" command for the remote backup directory and run it over SSH
       local $lscmd = $sshcmd." LANG=C ls -l ".quotemeta($base);
       local $lsout = &run_ssh_command($lscmd, $pass, \$err, $asuser);

(see lines 4394-4395). It does not add a trailing slash to the directory name, and there is no way to store the directory name with a trailing slash in the backup configuration in Virtualmin (that was one of the first things I'd tried; any trailing slash is stripped off when the settings are saved).
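
As far as I can tell, the directory being listed here ($base) is the parent of the dated directories, i.e. exactly the symlinked path in my setup, so the command that ends up running is roughly the following (reconstructed by hand with a placeholder hostname, so treat it as an approximation):

ssh USER@backuphost LANG=C ls -l /home/USER/backup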

The backup script uses that ls -l command to get a listing of all the files in the directory. Unfortunately, when you run "ls -l" on a symlinked directory without a trailing slash on the directory name, you don't get a listing of the contents, but rather the details of the symlink itself:

ls -l /home/USER/backup
lrwxrwxrwx 1 root root 36 Apr 29 16:46 /home/USER/backup -> /mnt/vpsbackup/

This explains why the backup script was not finding any candidate backups for deletion.
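
For comparison, either of the following makes a plain ls follow the symlink and list the directory contents (standard GNU coreutils behaviour, nothing Virtualmin-specific):

ls -l /home/USER/backup/     # trailing slash forces the symlink to be followed
ls -lL /home/USER/backup     # -L (--dereference) lists what command-line symlinks point to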

In an interactive shell, you can add the following to .bashrc so that symlinks are properly expanded and directory contents listed instead of symlink details:

alias ls='ls --dereference-command-line-symlink-to-dir'

The command "ls -l" will then follow the symlink and produce the expected list of the contents of the symlinked directory, even if you don't add the trailing slash.

However, by default, your .bashrc won't be loaded when a command is run non-interactively via ssh. To fix this on my distro I had to do three things:

1 - Allow user environments to be accessed for ssh commands. Make sure the following is set in /etc/ssh/sshd_config (or wherever your sshd_config is kept):

PermitUserEnvironment yes

(Note: apparently some distros may have this option disabled in the source, so you will need to recompile your own ssh packages if so.)

2 - Add the following line to USER's .bashrc BEFORE the alias line, to enable alias expansion in ssh non-interactive shells:

shopt -s expand_aliases

3 - Create a file called /home/USER/.ssh/environment that contains the following line to configure the environment during ssh access:

BASH_ENV=/home/USER/.bashrc
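
If you want to sanity-check the setup, something like the following should work (the hostname is just a placeholder; sshd -T needs to run as root, and remember to restart sshd after editing sshd_config):

sshd -T | grep -i permituserenvironment
ssh USER@backuphost 'ls -l /home/USER/backup'

The first should report "permituserenvironment yes"; the second, being a non-interactive ssh command, should now list the directory contents rather than the symlink itself.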

Once all that's in place, old backups are successfully cleaned up.

Or you could just hack backups-lib.pl to add a trailing slash, of course! ;)

As a development request, it would be great if the backup script could explicitly check for and correctly handle this situation for ssh backups, by adding the trailing slash where required (e.g. as an extra check if "ls -l" didn't produce the expected list of files).
