Need help understanding best backup practice

#1 Wed, 07/31/2013 - 17:30
rapidwebs


Okay, so I have three servers, and one "backup" server.

All three servers are running Virtualmin, and the backup server is running just Webmin.

I want all virtual hosts to be backed up to the local server/disk at the beginning of the month.

At the end of that month, I would like to upload these backups to the remote host (the backup server).

I would like this done using as little memory, CPU time, disk space, and I/O activity as possible.

I started by slightly lowering the backup cron job's CPU (nice) and I/O priority. Next, I created a monthly incremental backup to /var/backups/all_virtualhosts. Lastly, I used Cluster File Copy to copy these files to the remote server (using Webmin's built-in fast RPC).
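(For reference, if the backup were driven straight from cron, lowering its priority would look roughly like the /etc/cron.d-style line below; the schedule and flags are illustrative, not my exact setup:)

    # /etc/cron.d style entry: run the monthly Virtualmin backup with lowered CPU (nice) and idle-class I/O priority
    0 3 1 * * root nice -n 10 ionice -c3 virtualmin backup-domain --all-domains --dest /var/backups/all_virtualhosts --incremental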

Everything seems to go swimmingly, except that sometimes when uploading the backup I first have to delete the existing copy; it was having a problem finishing the upload on some large archives.

MY QUESTION: Is this the preferred and best practice for doing what I need? Should I be making a full backup instead and uploading that? Should I even be using incremental backups the way I am?

THANKS!

Wed, 07/31/2013 - 22:55
andreychek

Howdy,

Well, what you're describing should work, in theory.

How large are the backups that are failing? And are you receiving any kind of error message when they fail to copy?

However, you could always try performing the backups locally, and then just using a tool such as rsync in order to copy the backups to your backup server.
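For example, something along these lines, where the hostname and paths are just placeholders:

    # Push the local backups to the backup server; --partial lets interrupted large transfers resume
    rsync -az --partial --delete /var/backups/all_virtualhosts/ backupserver:/var/backups/all_virtualhosts/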

-Eric

Thu, 08/01/2013 - 10:19
rapidwebs

Well, okay, so generally speaking, I want to have one large backup on the local disk. I want it to back up only files changed since the previous month (and to pick up new virtual hosts when they are created).

I want to be able to use this to restore the entire server and its settings in the event that something happens to the server,

and I want to be able to use it to restore a virtual host's files to what they were earlier in the month, when the incremental backup was last performed.

Is this how I should be doing it (a monthly incremental backup to the same folder each time)?

The reason I ask is that I have read that some people need two backup schedules, and that incremental backups are only to be restored after the full backup... does this mean that the way I am performing my incremental backup is wrong, or does it still work the way I am trying to use it?

Also, in response to the questions about the Cluster File Copy procedure:

There was never really any error, just three vague words trailing the name of the file that did not upload. Sadly, I cannot recall them at the moment, but as soon as I see the error again I will post it here.

I could simply add a cron job to delete the backup before the new one is uploaded, but then that kind of defeats the purpose of the Cluster File Copy (assuming it also only uploads files that have changed, in an effort to keep bandwidth costs down), and that would make the whole procedure ... redundant. Therefore, if this does not start working 100%, rsync does sound like the proper alternative, but I would like to try to get the built-in methods to work first :)
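(If I did go that route, I imagine the pre-upload cleanup would just be something like the line below, assuming SSH access to the backup box; the hostname and path are placeholders:)

    # Remove last month's copies on the backup server before the scheduled transfer runs
    ssh backupserver 'rm -f /var/backups/all_virtualhosts/*.tar.gz'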

The file sizes were anywhere from 90 MB to 1 GB; however, most of them are very small. Currently, the biggest archive is about 100 MB, so we are not experiencing as many problems, and deleting the culprit file does not waste much bandwidth, which means Cluster File Copy is working for me (for now...).

Thank you for your help!

Thu, 08/01/2013 - 11:07
andreychek

Howdy,

Well, the way incremental backups work is that they look at your last full backup, and back up all files that have changed since that full backup.

When performing a restore -- what you'd need to do is first restore the full backup, then restore the incremental.
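From the command line, that ordering would look roughly like this (the source paths are just examples, and the same order applies when restoring through the GUI):

    # Restore the most recent full backup first...
    virtualmin restore-domain --source /var/backups/full --all-domains --all-features
    # ...then apply the incremental backup taken after it
    virtualmin restore-domain --source /var/backups/incremental --all-domains --all-features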

I'd be curious to hear the exact wording that you received when it didn't work -- it shouldn't be a problem to perform a backup of a 1GB file, and it shouldn't be necessary to first delete the backups.

-Eric

Thu, 08/01/2013 - 11:36
rapidwebs

I understand that much. What I don't understand is this:

Can I just supply a path (i.e. /var/backups/all_virtualhosts) and have it append to the existing archives any new files and/or files that have changed,

or am I doing it wrong? Should I be adding date (and possibly time) variables to the backup path, so that I actually have a differential set of backup archives?
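(For example, if dated directories are the way to go, I gather the destination can use strftime-style substitutions when that option is enabled, something like the path below, which would give one directory per month:)

    /var/backups/all_virtualhosts/%Y-%m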

I have one set of archives under /var/backups/all_virtualhosts, I believe.

One thing that came to mind: if I truly am doing this wrong, could a corrupt archive be throwing off the routine that checks for consistency between files, and could that be what is causing the Cluster File Copy to occasionally fail? Just an idea.

Thu, 08/01/2013 - 13:14
rapidwebs

I did a little test (renamed a file in a virtual host's public_html) and performed the incremental backup.

I looked inside the tar and found that public_html in the existing archive had been updated to contain the current files, with no leftovers.

Everything seems to be working fine.

I will let you know if I get that error again during Cluster File Copy.

Thank you for your prompt support!

Fri, 08/02/2013 - 21:22
rapidwebs

Okay, strangely enough, I think my backups are all still running fine.

But I tried adding a second schedule. This one uploads the files in /var/backups/all_webmin, which is just one tar containing the backup created by Backup Webmin + All Modules.

Anyway, it just fails, and does so on all servers, even my remote VPS:

Copying /var/backups/all_webmin/super.tar.gz to selected servers (under directory /var/backups/all_webmin/rwjimbo) ..

Failed to copy to RapidWebs Cluster Server 3 : Unknown reason

Fri, 08/02/2013 - 21:33
rapidwebs

It seems that I had to make the files group-readable to get it working. A temporary fix might be to run a command before the scheduled Cluster File Copy; if I remember correctly, there is a field to do just that. A chmod might be all I need pre-transfer.
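Something like this as the pre-transfer command should do it (using the path from my setup):

    # Make the Webmin backup group-readable before the cluster copy runs
    chmod -R g+r /var/backups/all_webmin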

Topic locked