Backup to Amazon S3 bucket fails

Hello,

I have just updated to Virtualmin 3.91.gpl to try out backing up to an Amazon S3 bucket, a feature I was in need of and am very grateful for. Unfortunately, the full backup to an S3 bucket keeps failing with the following error:

Uploading archive to Amazon's S3 service ..
HTTP/1.0 500 Perl execution failed
Server: MiniServ/1.585
Date: Sun, 15 Apr 2012 22:31:28 GMT
Content-type: text/html; Charset=iso-8859-1
Connection: close
Error - Perl execution failed

Can't use an undefined value as a symbol reference at ../web-lib-funcs.pl line 7049.

I have tried the full backup several times and each time it fails for a different domain, i.e. this problem looks to be "domain independent".

This error doesn't generate any log messages in /var/log, and the backup log is not generated (as this is not a scheduled backup, I suppose).

  • My perl version is: `This is perl, v5.8.8 built for x86_64-linux-thread-multi`
  • Virtualmin version: 3.91.gpl GPL
  • Webmin version: 1.585
  • Operating system: CentOS Linux 5.7

Does anyone have any ideas on why this is happening and how to resolve it?

Thanks a lot!

Greetings, tomiskou

Status: 
Active

Comments

Could you try adding the line error_stack=1 to /etc/webmin/config, re-try the backup, and let us know the full error message that is logged now?
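For example, something along these lines should do it (this assumes Webmin is restarted via its usual init script; adjust for your setup):

echo 'error_stack=1' >> /etc/webmin/config
/etc/init.d/webmin restart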

Dear Jamie,

Thanks a lot for the quick answer! I have added error_stack=1 to /etc/webmin/config, restarted Webmin and re-tried the backup. In which log should I be looking for the error message?

Thanks a lot!

Tomas

Jamie could answer this for certain, but I think it's going to appear in the Webmin error log, which is located in /var/webmin/miniserv.error.

If not, see if there's a new log file located in /var/webmin with debug information in it.
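For example, something like this should show the most recent error entries and any newly created debug files (paths assumed to be the defaults):

tail -50 /var/webmin/miniserv.error
ls -lt /var/webmin/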

Unfortunately, the error message in /var/webmin/miniserv.error is the same as in my original post:

[17/Apr/2012:15:37:44 +0200] [<myipaddress>] /virtual-server/backup.cgi/backup.tgz : Perl execution failed : Can't use an undefined value as a symbol reference at ../web-lib-funcs.pl line 7049.

I also attach an extract of /var/webmin/webmin.debug concerning the backup script. The last output from the backup script is:

1799 [17/Apr/2012 15:37:44.490556] root <myipaddress> - STOP "runtime=430"

Thanks.

In your /etc/webmin/virtual-server/config file, is the s3_endpoint= line set to anything?
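A quick way to check (assuming the default config location) is:

grep s3_endpoint /etc/webmin/virtual-server/config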

I don't have a s3_endpoint= line in my /etc/webmin/virtual-server/config.

Thanks!

Just to make sure we are testing the same kind of backup, could you try one to S3 using the Virtualmin shell API, with a command like :

virtualmin backup-domain --domain yourdomain.com --all-features --dest s3://access-key:secret-key@s3-backup-test-bucket/yourdomain.com.tar.gz

Replace yourdomain.com, access-key and secret-key as appropriate.

I am sorry for not being clear enough in my first post. The backup fails when I run it for all the domains at once. It works for a couple of domains and then suddenly fails for the next one in the list. What is interesting, though, is that it does not fail for the same domain each time.

This might mean that the problem is caused by a misconfiguration of a given domain (or a couple of domains in the list). This is not the case, however, as I have successfully backed up each of the domains on which the script got stuck when run separately.

By now I have tried the backup of all my domains at once approximately 10 times and not once has it finished without an error (even though I explicitly pick the option that the backup shouldn't halt on errors).

I am starting to have a feeling that the error might have something to do with the connection time or the number of connections to S3.

In any case, to answer your original question, the command you asked me to run works. The command virtualmin backup-domain --all-domains --newformat --all-features --dest s3://access-key:secret-key@s3-backup-test-bucket/yourdomain.com.tar.gz, however, fails after a couple of successful domains.

Thanks again!

How long does it run for before your S3 backup fails? I suspect the cause could be some kind of session timeout..

I did three tests: once it got stuck on the 17th domain (after ~9 minutes), and the other two times on the 15th domain (after ~7 minutes 45 s). Just for completeness, the sum of all the archives created by the backup script doesn't go over 4 MB.

If you like, I could send you a Virtualmin update that will report a more useful error when this problem happens - that would make it easier to debug. If you're interested, let me know whether you are running the GPL or Pro version, and which Linux distribution you are on.

I would like that very much. I am running the GPL version on CentOS 5.8 x86_64.

Thanks a lot!

Ok, I have emailed you an updated Virtualmin GPL RPM ..

Thanks a lot! I have installed the RPM and now the error reads:

Uploading archive to Amazon's S3 service ..
.. upload failed! HTTP connection to mybucket.s3.amazonaws.com:443 for /mydomain.tar.gz failed : Failed to connect to mybucket.s3.amazonaws.com:443 : No buffer space available

That is, while running the backup for all the domains at once, it succeeds for a couple of domains and then suddenly fails for the next one in the list with the error above. This time the backup continues instead of stopping immediately; however, it fails for all the remaining domains after the first failure (with the same error).

Thanks!

It appears as if that error is a networking error of some sort.

Are you using a dedicated server, or a VPS? And if it's a VPS, do you know what kind of VPS it is (ie, Xen, OpenVZ, etc).

If it's OpenVZ, can you attach your /proc/user_beancounters file? Thanks!

It's an OpenVZ VPS. See the /proc/user_beancounters in the attachment.

Thanks!

Ah, there we go -- you're running up against OpenVZ resource limits, unfortunately... it looks like your provider has a number of limits set up for your VPS, and sending the backups to Amazon is causing your VPS to exceed those limits.

When those limits are hit, it then throws that error -- "No buffer space available".

You can see the limits you're exceeding by looking at the failcnt field on the right of your user_beancounters output.
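For example, a rough way to list only the resources whose limits have been hit (this assumes the usual user_beancounters layout, with failcnt as the last column) is:

awk 'NR > 2 && $NF > 0' /proc/user_beancounters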

That's something you would need to work with your provider on -- in order for this to work, they would need to increase your limits.

Either that, or you may just need to spread out the backups of your domains over a period of time, maybe backing up a few domains every hour over the course of a day.
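If you go that route, one rough sketch would be a shell loop over your domains with a pause between each backup, using the same shell API command mentioned above (the bucket, keys and sleep interval here are placeholders, and this assumes list-domains --name-only is available in your Virtualmin version):

for d in $(virtualmin list-domains --name-only); do
  # back up one domain to S3, then wait before starting the next one
  virtualmin backup-domain --domain $d --all-features --dest s3://access-key:secret-key@s3-backup-test-bucket/$d.tar.gz
  sleep 600
done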