Perl dies with an error message during backup when using the 'S3-compatible server hostname' option to back up to Google Cloud Storage instead of Amazon S3.

Note: I'm using the latest version of Virtualmin GPL, as I just ran a package update hoping a new package would fix this. (Not Pro, but since I get a Perl error message I thought I'd report it as a bug.)

My client hates Amazon for some reason, so instead of Amazon S3 we're going to use Google's S3-compatible equivalent, Google Cloud Storage. Because of S3's popularity, just about every cloud provider offers an S3-compatible API; for Google that's Google Cloud Storage. Virtualmin's backup supports Google Cloud Storage and other S3-compatible services through the "S3-compatible server hostname" option. I've set this option to "storage.googleapis.com", which is what it needs to be. I've also visited that address in my browser, and it works fine with no network problems. On my server, I wrote a simple Perl test script to check whether the problem is Perl, my server's firewall, or something like that. The script works fine: it returns a usable IO::Socket object without any error in $!, so Perl should be able to connect to Google's S3 endpoint just fine.

Perl script:

use strict;
use warnings;
use IO::Socket;
use Data::Dumper;

my $sock = new IO::Socket::INET (
    PeerAddr => 'www.ora.com',
    PeerPort => 80,
    Proto    => 'tcp',
);
die "$!" unless $sock;

print Dumper($sock);

Perl script output:

[root@web5 ~]# perl test.pl
$VAR1 = bless( *Symbol::GEN0, 'IO::Socket::INET' );

Steps to reproduce:

  1. Go to Backup and Restore -> Cloud Storage Providers.
  2. Click Amazon S3.
  3. At the 'S3-compatible server hostname' option, select the radio button next to the text box.
  4. Enter "storage.googleapis.com" in the text box.
  5. Click Save.
  6. Create interoperability keys for Amazon S3-style connections if needed:
     1. Inside the Google Cloud web GUI, click the menu ("three lines") -> Storage -> Settings.
     2. Click the Interoperability tab.
     3. Turn on interoperability if needed.
     4. Click "Create a new key" to create a new key.
  7. Back in Virtualmin, click Backup and Restore -> Scheduled Backups -> Add a new backup schedule.
  8. Fill in whatever backup settings you want for testing. My settings are: Destination and Format = Amazon S3 bucket; Access Key and Secret Key copied and pasted from step 6; bucket and path = "btu-test-backup-%d-%m-%Y"; Delete old backups = yes, after 1 day; "Do strftime style time substitutions on file or directory name" checked; "Create destination directory?" checked; and I set it to run every day.
  9. Click Save to save it.
  10. Manually test it: go to Backup and Restore -> Scheduled Backups, and click "Backup" under Actions to the right of the scheduled backup you just created.
  11. Double-check the settings, scroll down and click "Backup Now".
  12. Get this error message:

Starting backup of 3 domains to Amazon S3 bucket btu-test-backup-%d-%m-%Y ..

HTTP/1.0 500 Perl execution failed
Server: MiniServ/1.890
Date: Thu, 9 Aug 2018 03:50:41 GMT
Content-type: text/html; Charset=iso-8859-1
Connection: close

Error - Perl execution failed

File does not exist: 500 Can't connect to storage.googleapis.com:443 (connect: Network is unreachable) at S3/ListBucketResponse.pm line 26

I know some Perl, and the error isn't really at that line: that line just calls some function, XMLin() or something like it, which is presumably where the problem actually surfaces.
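If I had to guess, the "File does not exist" prefix comes from XML::Simple: XMLin() treats a scalar argument with no angle brackets in it as a file name, so when the HTTP request fails and the error string gets handed to XMLin(), you end up with exactly this kind of message. A minimal illustration (this assumes the S3 module uses XML::Simple, which the XMLin() call suggests, but I haven't checked its source):

use strict;
use warnings;
use XML::Simple;

# An LWP-style error string contains no '<' or '>', so XMLin() assumes it is
# a file name and dies with "File does not exist: ...".
my $not_xml = q{500 Can't connect to storage.googleapis.com:443 (connect: Network is unreachable)};
my $data = XMLin($not_xml);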

Again, there shouldn't be any problem with my server connecting to Google's S3 endpoint, because I've tested it manually with nc, with wget, and with the crappy bit of Perl code above. So the problem shouldn't be a firewall or anything else preventing my server from reaching Google's S3.

Thanks, Dave.

Status: 
Active

Comments

The Perl script above is wrong: it tests the wrong server and port. The corrected code is below:

use strict;
use warnings;
use IO::Socket;
use Data::Dumper;

my $sock = new IO::Socket::INET (
    PeerAddr => 'storage.googleapis.com',
    PeerPort => 443,
    Proto    => 'tcp',
);
die "$!" unless $sock;

print Dumper($sock);

Perl script output:

[root@web5 ~]# perl test.pl
$VAR1 = bless( *Symbol::GEN0, 'IO::Socket::INET' );

Thanks, Dave.
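P.S. This still only checks a raw TCP connection. A variant that also does the TLS handshake (a sketch that assumes IO::Socket::SSL is installed; it's not necessarily what Virtualmin's S3 code does) would be:

use strict;
use warnings;
use IO::Socket::SSL;

# Connect and complete the TLS handshake, then issue a bare HTTP request.
my $sock = IO::Socket::SSL->new(
    PeerHost => 'storage.googleapis.com',
    PeerPort => 443,
) or die "connect/handshake failed: $!, $SSL_ERROR";

print $sock "GET / HTTP/1.0\r\nHost: storage.googleapis.com\r\n\r\n";
my $status = <$sock>;
print $status;   # any HTTP status line back from Google proves the TLS path works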

Why use S3 compatibility mode when Virtualmin natively supports Google Cloud Storage as a backup destination, though?

As mentioned, I'm currently using Virtualmin GPL, and Google Cloud Storage support is currently a premium-only feature, so I suppose that's the only reason I'm not using it. Virtualmin seems to support other clouds' S3-compatible APIs with this option, so I thought I'd give it a shot.

I decided to report this as a bug instead of a forum post because I was getting a Perl error message.

Oh right, I forgot about that. Coming back to the original error: can you try connecting to storage.googleapis.com:443 from the command line with something like:

telnet storage.googleapis.com 443

First I used telnet, and the connection itself worked fine, but the server dropped it as soon as I sent a plain HTTP request, because port 443 is HTTPS and expects a TLS handshake before any HTTP.

[root@web5 ~]# telnet storage.googleapis.com 443
Trying 172.217.9.16...
Connected to storage.googleapis.com.
Escape character is '^]'.
GET /
Connection closed by foreign host.
[root@web5 ~]# telnet storage.googleapis.com 443
Trying 216.58.218.208...
Connected to storage.googleapis.com.
Escape character is '^]'.
GET / HTTP/1.0
Connection closed by foreign host.

Below I'm using openssl's s_client, which is like a telnet that supports SSL, and it manages to get an error message back from Google's API servers. So I'm pretty confident that there is no connection, networking, or firewall problem between my server and Google's.

I think there's a real bug here somewhere, but it's strange, because I'm sure loads of people are using the S3 backup option, so maybe the problem is just in the code path where the different hostname is used?

[root@web5 ~]# openssl s_client -connect storage.googleapis.com:443      
CONNECTED(00000003)
depth=2 OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
verify return:1
depth=1 C = US, O = Google Trust Services, CN = Google Internet Authority G3
verify return:1
depth=0 C = US, ST = California, L = Mountain View, O = Google LLC, CN = *.storage.googleapis.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=*.storage.googleapis.com
   i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
1 s:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
   i:/OU=GlobalSign Root CA - R2/O=GlobalSign/CN=GlobalSign
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGajCCBVKgAwIBAgIIMzk8G0heOvQwDQYJKoZIhvcNAQELBQAwVDELMAkGA1UE
BhMCVVMxHjAcBgNVBAoTFUdvb2dsZSBUcnVzdCBTZXJ2aWNlczElMCMGA1UEAxMc
R29vZ2xlIEludGVybmV0IEF1dGhvcml0eSBHMzAeFw0xODA3MjQxNjAxMjJaFw0x
ODEwMDIxNTU5MDBaMHIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlh
MRYwFAYDVQQHDA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKDApHb29nbGUgTExDMSEw
HwYDVQQDDBgqLnN0b3JhZ2UuZ29vZ2xlYXBpcy5jb20wggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC0I6Q8GiDPpBV+/rOZLcpGFURQWTQnqRa/YSHJCOHy
FSZSV6dn5bSn7j4PirMKRc/rDBrladLipGyMkLialH44pIFCfDuiCRNRAlwTq9os
7BzYwVx5BEUYg+Rmq22NNjAQs2McGeajqZHJNBPwqfQj1keBaUKxoQKm6joob5wx
cJpL36lgs5yc2VrQ/yO/qf3rXzy7pBJBc+70mY0e+4085ATk3+C0ZHEyZ5zwju35
Gy9DJDrTLkRGjnQrIe0aKeNJfeMemWExVPZ9mq/D0NEYfsCJ8apw2uoOKWW/XHk2
O8bwoHyzsFU2+TwA2qrhefh9xomAh9hg4830qv/JNkSjAgMBAAGjggMgMIIDHDAT
BgNVHSUEDDAKBggrBgEFBQcDATCCAfUGA1UdEQSCAewwggHoghgqLnN0b3JhZ2Uu
Z29vZ2xlYXBpcy5jb22CJCouYXBwc3BvdC5jb20uc3RvcmFnZS5nb29nbGVhcGlz
LmNvbYIiKi5jb21tb25kYXRhc3RvcmFnZS5nb29nbGVhcGlzLmNvbYIpKi5jb250
ZW50LXN0b3JhZ2UtZG93bmxvYWQuZ29vZ2xlYXBpcy5jb22CJyouY29udGVudC1z
dG9yYWdlLXVwbG9hZC5nb29nbGVhcGlzLmNvbYIgKi5jb250ZW50LXN0b3JhZ2Uu
Z29vZ2xlYXBpcy5jb22CECouZ29vZ2xlYXBpcy5jb22CISouc3RvcmFnZS1kb3du
bG9hZC5nb29nbGVhcGlzLmNvbYIfKi5zdG9yYWdlLXVwbG9hZC5nb29nbGVhcGlz
LmNvbYIfKi5zdG9yYWdlLnNlbGVjdC5nb29nbGVhcGlzLmNvbYIgY29tbW9uZGF0
YXN0b3JhZ2UuZ29vZ2xlYXBpcy5jb22CK3N0YXRpYy5wYW5vcmFtaW8uY29tLnN0
b3JhZ2UuZ29vZ2xlYXBpcy5jb22CFnN0b3JhZ2UuZ29vZ2xlYXBpcy5jb22CHXN0
b3JhZ2Uuc2VsZWN0Lmdvb2dsZWFwaXMuY29tgg91bmZpbHRlcmVkLm5ld3MwaAYI
KwYBBQUHAQEEXDBaMC0GCCsGAQUFBzAChiFodHRwOi8vcGtpLmdvb2cvZ3NyMi9H
VFNHSUFHMy5jcnQwKQYIKwYBBQUHMAGGHWh0dHA6Ly9vY3NwLnBraS5nb29nL0dU
U0dJQUczMB0GA1UdDgQWBBSvGkYhlmVduXzG1hM0lY46FkkWYjAMBgNVHRMBAf8E
AjAAMB8GA1UdIwQYMBaAFHfCuFCaZ3Z2sS3ChtCDoH6mfrpLMCEGA1UdIAQaMBgw
DAYKKwYBBAHWeQIFAzAIBgZngQwBAgIwMQYDVR0fBCowKDAmoCSgIoYgaHR0cDov
L2NybC5wa2kuZ29vZy9HVFNHSUFHMy5jcmwwDQYJKoZIhvcNAQELBQADggEBADju
SRvT2EaIrSgQ7KT9TbnNEz6nsLMQvH+KFaptXljVm3uEI8CzBMarIL3s9K+G94RU
109omAoe/SdH6fSljuG1QFFZStuojDt1IRagmpO3q3kJhD4m36iu7kOBXAqPvLVU
t1twG7HBDSSSm5yRX10o9xKvbidvVV5mRDdVq1WeCnuR/jjkiFpDUMzxT0J/swN8
UNVp2xGObP6Z+wetQ45wPHxuesGP6OLghKtRElNk7JaBdA8CNK5pAPwIlsJ9eF5S
mYPWu8jC1txkeRNpwNpz2oLZ8hEl59QdmurxGOJ9UOQVkVB84jgACmcbX5gLR7ri
orAZmY5pFtkZonGKXPk=
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Mountain View/O=Google LLC/CN=*.storage.googleapis.com
issuer=/C=US/O=Google Trust Services/CN=Google Internet Authority G3
---
No client certificate CA names sent
Server Temp Key: ECDH, prime256v1, 256 bits
---
SSL handshake has read 3474 bytes and written 373 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: 35FA09D774FAF7785141F0CF7D2ABBECD78EE1259E90D224A09C0C7605296DD9
    Session-ID-ctx:
    Master-Key: EA2D45E00BE61E53F59885069E7F9618897E9C3CAD42642C139A06F1520C2E1494B31C6CB4A97BB4F27EAC9847D99EDE
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    TLS session ticket lifetime hint: 100800 (seconds)
    TLS session ticket:
    0000 - 00 b8 86 ac 77 f4 0c 87-5e d7 51 9b 57 e2 7b d4   ....w...^.Q.W.{.
    0010 - 6a 55 ae e0 3c c3 f6 5c-14 a8 e4 5a 00 ec 88 e9   jU..<..\...Z....
    0020 - 3f ec 54 00 68 0c b2 56-d3 18 df f4 c5 8e 07 75   ?.T.h..V.......u
    0030 - cb 56 c1 ec ba 4f 60 80-1f 67 f2 d5 0e 1f 18 f3   .V...O`..g......
    0040 - d4 5a 12 28 4b 3a 3b ba-83 45 3c 63 dd 60 1b 7b   .Z.(K:;..E<c.`.{
    0050 - 2f 80 6a d0 40 d2 0b 34-6b 39 f0 1e 09 20 83 8e   /.j.@..4k9... ..
    0060 - b9 0d 3e 8d 3f 54 49 b7-0a 56 a6 5e 31 5f 87 e7   ..>.?TI..V.^1_..
    0070 - 58 71 74 b8 a6 9a a3 99-e2 40 d5 5c 8f bd da 1f   Xqt......@.\....
    0080 - a0 43 9d c8 19 16 6b 3f-c9 07 f0 b1 b5 bc c3 27   .C....k?.......'
    0090 - c5 a8 c3 eb 41 ae cf 6b-da 5b 00 0a ca 4e 3b 01   ....A..k.[...N;.
    00a0 - a1 c4 75 52 74 6f b4 5f-39 4d a3 f6 09 9e 5e 68   ..uRto._9M....^h
    00b0 - 16 a5 27 42 d0 9b bf 0f-ed 96 81 a0 e3 d6 c5 ff   ..'B............
    00c0 - c3 00 3f 88 76 a4 c1 d3-67 5f de 54 e5 c0 84 dd   ..?.v...g_.T....
    00d0 - ea 6e 33 d3 65                                    .n3.e

    Start Time: 1534040060
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---
GET / http/1.0
Host: storage.googleapis.com                                                                  

HTTP/1.0 400 Bad Request
Content-Type: text/html; charset=UTF-8
Referrer-Policy: no-referrer
Content-Length: 1555
Date: Sun, 12 Aug 2018 02:15:08 GMT

<!DOCTYPE html>
<html lang=en>
  <meta charset=utf-8>
  <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
  <title>Error 400 (Bad Request)!!1</title>
  <style>
    *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
  </style>
  <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
  <p><b>400.</b> <ins>That’s an error.</ins>
  <p>Your client has issued a malformed or illegal request.  <ins>That’s all we know.</ins>
read:errno=0
[root@web5 ~]#

Google's server is telling me my request is no good, so this means I can connect to storage.googleapis.com:443 just fine. I think there might be a real bug here.
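If it helps narrow things down, the failing error string looks like the kind LWP::UserAgent produces (that's an assumption on my part; I don't know for sure that the S3 module uses LWP). A quick check at that level would be something like:

use strict;
use warnings;
use LWP::UserAgent;

# Needs LWP::Protocol::https installed for the https:// URL to work.
my $ua  = LWP::UserAgent->new(timeout => 10);
my $res = $ua->get('https://storage.googleapis.com/');
print $res->status_line, "\n";   # a 400/403 from Google here still proves connectivity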

Please try my steps to reproduce the bug. It's very easy to reproduce as it happens every time.

Thanks.

Here are faster and easier steps to reproduce this problem:

  1. In Backup and Restore -> Cloud Storage Providers -> Amazon S3, change the 'S3-compatible server hostname' from the default Amazon one to Google's "storage.googleapis.com". Don't forget to select the radio button next to the text box!
  2. Now just go to Backup and Restore -> Amazon S3 Buckets
  3. Below is the error message you get:
Amazon S3 Buckets
HTTP/1.0 500 Perl execution failed
Server: MiniServ/1.890
Date: Sun, 12 Aug 2018 02:35:46 GMT
Content-type: text/html; Charset=iso-8859-1
Connection: close
Error - Perl execution failed

File does not exist: 500 Can't connect to storage.googleapis.com:443 (connect: Network is unreachable) at S3/ListAllMyBucketsResponse.pm line 27

Try this way of reproducing the problem, as it's faster and you won't have to set up a backup. However, the source file and line number are different from the ones in the backup error, so both places (or perhaps just the shared function they both call) will need fixing.

Thanks.

I can't reproduce this, but I noticed that storage.googleapis.com also has an IPv6 address.

If your system has IPv6 enabled but doesn't have proper IPv6 routing set up to the Internet, this could happen.
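One way to check whether that's what is happening (just a diagnostic sketch using IO::Socket::IP, which may need to be installed separately; it isn't what Virtualmin does internally):

use strict;
use warnings;
use IO::Socket::IP;
use Socket qw(AF_INET AF_INET6);

# Try the same host over IPv4 only and then IPv6 only, to see which path is broken.
for my $family (AF_INET, AF_INET6) {
    my $sock = IO::Socket::IP->new(
        PeerHost => 'storage.googleapis.com',
        PeerPort => 443,
        Family   => $family,
        Timeout  => 5,
    );
    printf "family %d: %s\n", $family, $sock ? "connected" : "failed ($!)";
}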

This could actually be the cause of the problem.

Our server does have an IPv6 configuration, but it's from quite a few years ago, before IPv6 was really needed: it has IPv6 addresses for our host's internal network, but no publicly routable IPv6 address that can reach the internet.

If Virtualmin is set up to prefer IPv6 whenever it's available, then this could be the cause. I don't know whether it makes sense to add support for falling back to IPv4 when IPv6 gives a "network unreachable" error, or whether that makes no sense at all.
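Just to illustrate what I mean by falling back (a rough sketch of the idea, not Virtualmin's actual code):

use strict;
use warnings;
use IO::Socket::IP;
use Socket qw(AF_INET);

# Try whatever address family the resolver prefers first, then retry IPv4-only.
sub connect_with_v4_fallback {
    my ($host, $port) = @_;
    my $sock = IO::Socket::IP->new(PeerHost => $host, PeerPort => $port, Timeout => 10);
    return $sock if $sock;
    return IO::Socket::IP->new(PeerHost => $host, PeerPort => $port,
                               Family   => AF_INET, Timeout => 10);
}

my $sock = connect_with_v4_fallback('storage.googleapis.com', 443)
    or die "both attempts failed: $!";
print "connected via ", $sock->sockdomain == AF_INET ? "IPv4" : "IPv6", "\n";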

I've created a support ticket with our host to get them to add a public IPv6 address to our server. Once that's done, if the cause really is IPv6 being configured privately but not publicly, that should fix this problem.

I'll update this ticket after the server gets a IPv6 address, and I retest it.

Thanks.

Thanks for figuring out the IPv6 problem. It turns out the server already had a globally routable IPv6 address allocated, but for whatever reason it was never configured on the server.

I configured it, and the backup works great. It uploads to the Amazon S3 bucket just fine!

But it only works when you specify a specific S3 bucket that has already been created. If you use the %d-%m-%Y substitution together with the "create destination directory" option and the "delete after 1 day" option, it fails.

Is this option supported in S3? It seems like it should work just fine.

When you use "Do strftime-style time substitutions on file or directory name", you get the error message below.

Starting backup of 3 domains to Amazon S3 bucket btu-%d-%m-%Y ..

Failed to create S3 bucket :

Backup failed! See the progress output above for the reason why.
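Note that the bucket name after "Failed to create S3 bucket :" appears to be empty. For reference, this is roughly what I'd expect the strftime substitution to produce as the bucket name (a sketch using POSIX::strftime; I don't know whether Virtualmin does it exactly this way):

use strict;
use warnings;
use POSIX qw(strftime);

# Expand the configured pattern with today's date, e.g. "btu-14-08-2018".
my $pattern = 'btu-%d-%m-%Y';
my $bucket  = strftime($pattern, localtime);
print "$bucket\n";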

Also, when you use the Amazon S3 Buckets page, the buckets are listed just fine, but when you click on one you get this weird Perl error:

HTTP/1.0 500 Perl execution failed
Server: MiniServ/1.890
Date: Tue, 14 Aug 2018 07:39:55 GMT
Content-type: text/html; Charset=iso-8859-1
Connection: close
Error - Perl execution failed

Can't use string ("US") as a HASH ref while "strict refs" in use at S3/AWSAuthConnection.pm line 283.
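For what it's worth, that error usually means the code expected a hash reference (e.g. a parsed XML structure) but got a plain string like "US" back instead, perhaps because Google returns the bucket location in a slightly different shape than Amazon does (that last part is just my guess). A minimal reproduction of the message:

use strict;
use warnings;

# Dereferencing a plain string as a hash under 'strict refs' dies with
# exactly this kind of error.
my $location = "US";
print $location->{LocationConstraint};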

IPv6 works great now, but I'm still getting weird Perl errors when using the strftime-style backup option.

To reproduce: configure Google's S3 endpoint as described above, then try listing a bucket's contents on the Amazon S3 Buckets page, and then configure an S3 backup using the strftime-style substitution option together with the "create destination directory" and "delete after 1 day" options. Then run the backup manually, and you should see the error message.

Please look into these weird errors.

Thanks.

Looks like a subtle problem with the way Google's implementation of the Amazon API works versus how Virtualmin expects it to work.

When you get either of these errors, is anything more detailed logged to /var/webmin/miniserv.error ?