Error during backup: file changed as we read it

I'm backing up /var/log

I set the option "Ignore read errors on files?" to YES.

The "Command to run before backup" RUNS CORRECTLY: find /home/USER/backup/*logs-backup.tar.gz -mtime +6 -exec rm {} \;

The "Command to run after backup" FAILS: find /home/USER/backup/*logs-backup.tar.gz -mtime -1 -exec aws s3 --profile BackupVirtualmin cp {} s3://MYBUCKET/backup/logs/ \;

I also set up:

- CHECKED - Report failure if command fails
- UNCHECKED - Skip command even if backup fails

The archives are created correctly, but I receive this mail and the "Command to run after backup" does not run:

Backup of /var/log on

tar: Removing leading `/' from member names
tar: /var/log/virtualmin/mydomain.com_access_log: file changed as we read it

Backup failed!



Howdy -- Webmin/Virtualmin use tar to back things up. Tar has an option to ignore read errors, but those are typically issues like not having permission to access a file.

If the file changes while reading it, tar returns an error status.

It sounds like that's the issue you're seeing here.

That means that backing up /var/log might be a bit tricky since those files are changing so frequently.

The best suggestion I can offer there would be to make a copy of /var/log to another directory, and then back up the newly created directory. That's the only way to ensure that the files aren't being changed during the backup process.
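A minimal sketch of that snapshot-then-archive approach (a scratch directory stands in for /var/log so this is safe to run unprivileged; all paths here are placeholders, not your real backup locations):

```shell
# Snapshot the logs first, then archive the copy, so tar never
# reads a file that is still being written to.
SRC=$(mktemp -d)                  # stand-in for /var/log
echo "log line" > "$SRC/app.log"  # pretend log file
STAGING=$(mktemp -d)              # directory the backup will read
cp -a "$SRC/." "$STAGING/"        # snapshot: these files no longer change
tar -czf /tmp/logs-backup.tar.gz -C "$STAGING" .
```

In practice you would rsync or cp -a /var/log into a fixed staging directory from the "Command to run before backup", and point the Virtualmin backup at that staging directory instead of /var/log.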

I see this on my own system as well sometimes - sadly, there appears to be no flag to tar to avoid it :-(

Thank you Andrey, I had already solved it exactly as in that SO question.

Using "--warning=no-file-changed" the error disappears.
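For anyone following along: the flag silences the message, but GNU tar still exits with status 1 ("some files differ") when a file changed mid-read, so a wrapper that accepts exit codes 0 and 1 is needed if the caller treats any non-zero status as failure. A sketch, with a scratch directory standing in for /var/log:

```shell
SRC=$(mktemp -d)
echo "entry" > "$SRC/access_log"   # pretend live log file
tar --warning=no-file-changed -czf /tmp/varlog-demo.tar.gz -C "$SRC" .
rc=$?
# 0 = clean run, 1 = "some files differ" (acceptable for live logs),
# 2 = fatal error (still a real failure)
[ "$rc" -le 1 ] && echo "backup ok" || echo "backup failed ($rc)"
```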

But the "Command to run after the backup" still does NOT get launched.

Ok... I can see the problem now. It was a problem with the command itself:

find /home/USER/backup/*logs-backup.tar.gz -mtime -1 -exec aws s3 --profile BackupVirtualmin cp {} s3://MYBUCKET/backup/logs/ \;

If I run it as ROOT, the profile BackupVirtualmin is not found and nothing happens!
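For anyone hitting the same thing: AWS CLI profiles are per-user, so a profile created under /home/USER/.aws is invisible when the post-backup command runs as root. Either re-run `aws configure --profile BackupVirtualmin` as root, or point the CLI at the user's files via the standard environment variables (paths below are the placeholder ones from this thread):

```shell
# Make the user's AWS profile visible to a root-run command.
export AWS_SHARED_CREDENTIALS_FILE=/home/USER/.aws/credentials
export AWS_CONFIG_FILE=/home/USER/.aws/config

# With those set, the original post-backup command can find the profile:
# find /home/USER/backup/*logs-backup.tar.gz -mtime -1 \
#   -exec aws s3 --profile BackupVirtualmin cp {} s3://MYBUCKET/backup/logs/ \;
```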

Ah I'm glad you were able to figure that out! Thanks for letting us know what the issue was.