Let's Encrypt: Failed to request certificate

After requesting a certificate for the server I get this error. I have tried several times this week, before and after the latest Webmin update to 1.791.

Parsing account key...
Parsing CSR...
Registering account...
Already registered!
Verifying server11.example.com...
Traceback (most recent call last):
  File "/usr/libexec/webmin/webmin/acme_tiny.py", line 203, in <module>
    main(sys.argv[1:])
  File "/usr/libexec/webmin/webmin/acme_tiny.py", line 199, in main
    signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca)
  File "/usr/libexec/webmin/webmin/acme_tiny.py", line 128, in get_crt
    wellknown_path, wellknown_url))
ValueError: Wrote file to /home/server11/public_html/.well-known/acme-challenge/XByAXj3GF8pYzovO-c8XbVgh7wQYtM6izgrJZT0vBkE, but couldn't download http://server11.example.com/.well-known/acme-challenge/XByAXj3GF8pYzovO-c8XbVgh7wQYtM6izgrJZT0vBkE

I can confirm the directory .well-known/acme-challenge is created but empty.

Status: 
Active

Comments

ATM server11.example.com must be a vhost. Bit stupid IMHO.

Yes, you need to have an Apache virtual host matching the domain name you want to request the cert for.

I wish Let's Encrypt didn't work that way, but unfortunately that's the way they've implemented the protocol.

I replaced the real name with example.com for this report. In reality it has a valid domain name and a vhost set up. All seems to work except writing the file.

Same problem

I originally tried entering "example.com *.example.com" (name changed from the actual name; the vhost is already up and running in Virtualmin), but got this error:

Failed to request certificate : A domain name cannot start with *. To create a wildcard domain, first create it without the * and then go to Server Configuration -> Website Options.

When I went back and reran with "example.com www.example.com", I got the same error message as Gerard.

FWIW, I originally had a self-signed cert that I wanted to replace with a Let's Encrypt cert. That process works; I have done it before, and the new cert replaced the old one, judging by the Organization, Issuer Organization, and Certificate Type fields on the "Manage SSL Certificate" page.

If the domain is in fact already registered, is there any way to re-download registered certs, or find out what is actually registered?

In searching for how to find out what certs were issued, on page https://community.letsencrypt.org/t/public-beta-rate-limits/4772/99 on Feb 11 serverco responded to another post with:

My guess is you will need to wait a few more hours .... you can check the time of day that your first certificate was issued at https://crt.sh/22 If you created the first certificate at 4pm, then you will have to wait until after 4pm on the 7th day.

So I went to crt.sh/22 and entered my domain name root (example.com), expecting an empty search result. Instead, what I got was Issuer Name C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO ECC Domain Validation Secure Server CA 2, followed by Certificates (22) in table column 1, and then:

Not Before Not After Subject Name
2016-02-10 2016-08-14 OU=Domain Control Validated, OU=PositiveSSL Multi-Domain, CN=sni79918.cdnname.com
2016-02-05 2016-08-07 OU=Domain Control Validated, OU=PositiveSSL Multi-Domain, CN=sni79918.cdnname.com
2016-02-05 2016-08-07 OU=Domain Control Validated, OU=PositiveSSL Multi-Domain, CN=sni79918.cdnname.com
dot dot dot
2015-01-15 2015-11-11 OU=Domain Control Validated, OU=PositiveSSL Multi-Domain, CN=sni79918.cdnname.com
2014-11-29 2015-11-11 OU=Domain Control Validated, OU=PositiveSSL Multi-Domain, CN=sni79918.cdnname.com

(actual CDN domain name replaced by cdnname.com above to protect the guilty).

Could it be that the Let's Encrypt "error" is not an error, but rather an actual good-faith effort on the part of Let's Encrypt to tell me that the CDN has already generated a cert for my domain?

Wait, do you have a CDN in front of your webserver?

Failed to request certificate : Parsing account key...
Parsing CSR...
Registering account...
Already registered!
Verifying tex.hype.ro...
Traceback (most recent call last):
  File "/usr/share/webmin/webmin/acme_tiny.py", line 203, in <module>
    main(sys.argv[1:])
  File "/usr/share/webmin/webmin/acme_tiny.py", line 199, in main
    signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca)
  File "/usr/share/webmin/webmin/acme_tiny.py", line 128, in get_crt
    wellknown_path, wellknown_url))
ValueError: Wrote file to /home/tex/public_html/.well-known/acme-challenge/NU91722mYTQesgUc5gWiohd_Yj1alcs5SVmsjKnQ1UQ, but couldn't download http://tex.hype.ro/.well-known/acme-challenge/NU91722mYTQesgUc5gWiohd_Yj1alcs5SVmsjKnQ1UQ

The folder .well-known/acme-challenge/ is created but is empty; there are no files in it.

I have no CDN in front of the server.

Do you perhaps have any redirects or aliases set up?

Nope, it's a clean virtual host, no strange stuff. I have just tried with a completely clean one and I get the same error.

For anyone who is seeing this - try creating a directory like /home/example/public_html/.well-known/acme-challenge, putting a small file called test.txt in it, then accessing http://example.com/.well-known/acme-challenge/test.txt
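For example, roughly (a sketch; substitute your own domain and home directory):

# create the challenge directory and a test file
mkdir -p /home/example/public_html/.well-known/acme-challenge
echo "hello" > /home/example/public_html/.well-known/acme-challenge/test.txt
# this should return HTTP 200 and the file contents, not a 403/404 or a redirect
curl -i http://example.com/.well-known/acme-challenge/test.txt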

I did, no problem

Let me know if there is anything else I can do.

[26/Mar/2016:10:11:12 +0100] "GET /.well-known/acme-challenge/test.txt HTTP/1.1" 200 24 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36"

Getting the same issue on a brand-new installation under Debian 8.

To test, I commented out the call to os.remove(wellknown_path) in /usr/share/webmin/webmin/acme_tiny.py. Using the reportedly problematic URL in a browser shows that it correctly generated and placed the file on the filesystem, and is successfully requestable via HTTP. Seems like the call to resp = urlopen(wellknown_url) has failed, possibly due to environmental issues?

EDIT: Printed out the exception details: Exception: <class 'urllib2.HTTPError'>: HTTP Error 403: Forbidden. Puzzling that it works fine with a browser.

EDIT2: Tracked it down, Cloudflare's CDN disallows the python script from requesting the URL. Annoying.

EDIT3: Created a Cloudflare page rule for *mydomain.com/.well-known/*, disabling all security checks. Works fine afterwards.

Interesting - although even if acme_tiny.py wasn't requesting that URL, the Let's Encrypt servers would still need to and would presumably get tripped up by the same Cloudflare restriction?

Seems likely.

One thing I also noted was that if your shell's umask is set to 0077 (mine is by default) and you restart webmin's service, then any subsequent request through the web interface to generate certificates bombs out, because the resulting challenge files get 0600 permissions, regardless of the CDN rules. Don't make the same mistake!
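To check for this before restarting, something along these lines should show it (a sketch; the path is an example):

umask        # prints 0077 if the restrictive mask is in effect
umask 0022   # relax it for this shell, then restart so webmin inherits it
/etc/init.d/webmin restart
# challenge files written afterwards should come out world-readable (e.g. 0644)
ls -l /home/example/public_html/.well-known/acme-challenge/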

Guys, my Virtualmin SSL websites are HOSED.

I use Webmin 1.791 on Ubuntu 14.04.1, and I keep it updated daily. I host 4 domains with 3 subdomains on one of the domains. Each of them has "SSL website enabled", and I'd previously set up individual self-signed certs for each one of them, using the Virtualmin "manage certs" facility. They worked; browsers throwing the expected "warnings".

domain1 - Wordpress
domain2 - Wordpress
domain3 - Wordpress
domain4 - static
-subdomain1 - Wordpress
-subdomain2 - static
-subdomain3 - static

A few weeks ago, when I learned that support for Let's Encrypt had been added, I requested LE certs for two of the non-subdomain WP sites, which replaced the self-signed certs as it says. I configured Apache redirects from / to https:// so that all traffic would be encrypted. I did need to make a bunch of changes to turn hard-coded http:// and https:// URLs into just //, and to iron out some problems with embedded mixed media and such. Everything was working great by Sunday night.

Then, I requested an LE cert for that parked WP blog on subdomain1.domain4.tld. The request failed, and the output is identical to Gerard's first post. However, something did happen: all of my HTTPS websites were now throwing up an error that the wrong cert was being used. Every one of them was now serving subdomain1.domain4.tld's self-signed cert! Clicking "I understand/visit anyway" brought me to the subdomain1.domain4.tld site, no matter which website I started from. I disabled SSL for all of the sites except the most important website. "Manage certs" still showed that LE certs were installed for the two which were working before, but the subdomain blog's self-signed cert was still the one being offered! Disabling SSL for the neighborhood website and then turning it back on again got it to offer the LE cert, but browsers still thought it was the wrong domain!?!?

Virtualmin seems to be holding onto the SSL configuration even after disabling and re-enabling SSL. Requesting a self-signed cert to overwrite the LE cert fails. There does not appear to be an "erase SSL config" or "recheck cert" function, so I'm looking at manually hacking at the config files, probably resulting in a conflict with Virtualmin's database-stored state. The sites continue to work fine over HTTP, but obviously I would prefer to un-cripple SSL.

Is there a tutorial on stripping out Virtualmin SSL configs and starting over? Are there other issues I'll need to worry about? Does anyone have an idea about why my environment got screwed up? I'm super-wary of making manual changes and have always let Virtualmin manage every aspect of my websites other than to insert a custom log definition in http and https containers.

I was experiencing a similar issue ("ValueError: Wrote file to ..., but couldn't download") and I had an .htaccess file with large numbers of "Deny from" IP blocks (basically attempting to block entire countries - you can guess which ones). I temporarily removed the .htaccess file and the certificate was then created successfully.

Actually in my case it appears to be related to .htaccess but not to the Deny from blocks. I had the same issue with another site and solved it again by temporarily removing .htaccess, but this time there were no denied IPs in it. These are both Drupal sites.

Well, hang me if I know what's going on, but the one really important WP blog's SSL cert simply started working right again once I turned the "SSL website" option back on. Then I turned it on for the 6 others and refreshed their self-signed certs, and everything works as it did when I'd only had the first LE cert installed. I've now lost the LE key and cert I had for my personal website, though the "ssl.ca" file remains.

I still cannot acquire a Let's Encrypt cert for any of the remaining domains or subdomains, for exactly the same reason as before, as in Gerard's first post. I tailed my website's access and error logs and there was no attempt to access the ".well-known" directory, although it was created, with no contents. One main difference starting today is that it takes 25 minutes for the failure message to appear! Virtualmin just shows a blank page until then. I thought perhaps LE was just under a heavy load, but I've tried from the afternoon until now in the very early morning. LE's website also says on its performance page that there are no issues going on.

I tried requesting the LE cert with the .htaccess file renamed to htaccess.old, and this made no difference. I do have a big block-by-country list set up, but I log these requests as 403s, and nothing showed up in the log, so I don't think I have LE's IP banned somehow. I also have some "ban" plugins (Wordfence) and web server redirects on my WP blogs to disable the login page due to brute-force attacks. Note that I was able to request LE certs with these things in place last weekend, and I have not changed the config since.

Requesting a certificate for domain.net, www.domain.net from Let's Encrypt ..
.. request failed :

Failed to request certificate : Parsing account key...
Parsing CSR...
Registering account...
Already registered!
Verifying domain.net...
Traceback (most recent call last):
  File "/usr/share/webmin/webmin/acme_tiny.py", line 203, in <module>
    main(sys.argv[1:])
  File "/usr/share/webmin/webmin/acme_tiny.py", line 199, in main
    signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca)
  File "/usr/share/webmin/webmin/acme_tiny.py", line 128, in get_crt
    wellknown_path, wellknown_url))
ValueError: Wrote file to /home/domain/www/.well-known/acme-challenge/B5dvwp_LjAKBtiOwnkSq3venKgWi0dYhQRZfD312zNE, but couldn't download http://domain.net/.well-known/acme-challenge/B5dvwp_LjAKBtiOwnkSq3venKgWi0dYhQRZfD312zNE

So, this thing still just doesn't work. Does someone more familiar with Virtualmin + Let's Encrypt have some ideas?

If you're seeing that error about "couldn't download ...", one way to debug what's happening is to create the directory /home/domain/www/.well-known/acme-challenge/ manually and create a small file named test.txt in it. Then run:

curl http://domain.net/.well-known/acme-challenge/test.txt

and see what output you get. If that doesn't download the file, it means that some .htaccess rule is blocking or redirecting access away from the directory, and so the Let's Encrypt client won't work.

@JamieCameron: Good suggestion, but it didn't pan out.

I located four /home/%username%/www/.well-known/acme-challenge/ directories which had been left behind by virtualmin. They were all empty and didn't have any further sub-directories. I placed an index.html file and a %username%.txt file in each and then used TOR as a VPN to see if I could fetch them remotely. They were all accessible, and these efforts were logged properly. Attempts to fetch nonexistent files were also logged properly.

My log showed four successful connections from LetsEncrypt last week, two hits each for the two domains which did get an LE cert, from outbound1.letsencrypt.org for

/home/%username1%/www/.well-known/acme-challenge/EGRDkjljug-DEwwJkOoJFWwqxVHYykjjgrEchgJkKJH
/home/%username2%/www/.well-known/acme-challenge/hgJkKJHEG-RDkckOoJFeWwjljugDEwwqxVHYykjjgrE
(filenames changed for this example)

Letsencrypt checked each of these files twice, within a couple seconds of each other. It never checked for similar files when I made later cert requests. I have fail2ban running, but the logs show it never blocked an IP similar to letsencrypt's [66.133.109.36].

I wonder if perhaps I've been blacklisted by LetsEncrypt? The request process was failing quickly last week, but is now taking 25 minutes to timeout. It could be that they've changed their firewall's termination from REJECT to DROP in order to stall automated scripts that are abusing them. My Virtualmin server is on Google Fiber. Some blacklists include all residential networks just to be bastards.

Update: the 25 min delay is gone, but still getting the identical error message.

The error message you got isn't from the Let's Encrypt service - it's from a command Virtualmin runs on your own machine.

If you SSH into your system and try to fetch a test file in that .well-known directory, does it work?

@JamieCameron I find that a /home/%username%/www/.well-known/acme-challenge/ directory has been created, but the verification file has not been placed within it. Watching the web logs, I see LetsEncrypt does not even request it. If I place a file within it, it is accessible to the public.

I suspect the Virtualmin script, which worked on March 20th and 23rd, no longer works for unknown reasons. It's hard to tell if the output is just from the script or if some of it is relayed from LetsEncrypt. It sort of looks like it does. Perhaps a more verbose script can be provided by the Virtualmin designers? hint-hint :)

What if you try to fetch a file from that directory on your system, with the wget command? For example, put a file called test.txt in the directory, and then run wget http://example.com/.well-known/acme-challenge/test.txt

I have the same problem.

NO FILE IS CREATED in .well-known/acme-challenge

If I create a test file

/home/erinmed/public_html/.well-known/acme-challenge/test.html

I can access it OK from a browser:

http://www.erinmed.co.uk/.well-known/acme-challenge/test.html

It is clear to me that the subfolders are created correctly under public_html, but the randomly named file is not written to the acme-challenge folder so the subsequent access check by Lets Encrypt inevitably fails

@JamieCameron: I could NOT wget the test.txt file until I added the domain name to /etc/hosts on the VM server...
I gave VM another try at requesting a LE certificate, but as we speak it's been sitting there at a white screen for over 20 minutes.
A new www/.well-known/acme-challenge/ directory was created at the moment I clicked the button, but there's no verification file inside it.
I can wget a test file out of the acme directory, though.
Lord, I wish the cert-request script were more verbose and/or logged activity somewhere I could watch in real time. :/

OK, I was able to request another LE cert! I repaired an issue with my router where NAT loopback wasn't working. This is where I should be able to use my Internet/WAN/external IP to reach services within my LAN from another PC on the same LAN. When NAT loopback isn't working, the external WAN IP behaves as though it were the Internet gateway IP (you'll get your router's web GUI as accessed from the LAN side). So scripts running on my VM server which requested files via a domain-name URL would 404, since the router's web GUI would be substituted for the intended domain web page.
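For anyone wanting to check for this, a rough test from a shell on the server itself (example.com standing in for the real domain):

dig +short example.com           # the public IP that Let's Encrypt will resolve
curl -sI http://example.com/ | head -1
# if the second command returns the router's admin page (or nothing) instead of
# your site's headers, NAT loopback / hairpinning is likely the problem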

It's a complete mystery to me how I was able to get the one cert I did get last week, since my router configuration was still sub-par then.

I'll try getting more certs for the rest of my domains tomorrow.

Ok, that would explain it - if your system cannot connect to its own external IP, you'll get this error.

@JamieCameron: well, it could connect to the network's public/external IP, but the router was redirecting traffic which originated inside the LAN away from the desired content. Perhaps most VM admins have their servers on a public IP? Mine is on a LAN IP behind a NAT. Everything was set up with 192.168.*.* IPs, and the router worried about forwarding ports from the public IP. The VM server didn't even know the public IP. ** That configuration worked for getting a cert! **

So for the record, the Virtualmin script for Let's Encrypt will request a file from the web server by domain name. So when we see the issue Gerard, Daniel and I had, then we likely have an issue with the LE script requesting content - from the server which it is running on - by domain name.

Most virtualmin systems are either directly on a public IP, or at least can contact their own public IP from the virtualmin system.

@JamieCameron I don't know how you arrived at that statistic, but I've used Linux and Virtualmin for years, behind a NAT and with forwarded ports, and they had no idea what the dynamic public IP was. They worked perfectly without modification. Most Linux systems think of themselves as 127.0.0.1 :D

I had the exact error when I tried this as well.

Actually the problem is Debian 7/8 --- you need to have the backports repo, and then you need to use 'aptitude install letsencrypt' -- I am sure this is the reason why other Debian users are having an issue with it. If you try to use just 'apt-get install letsencrypt' you will get an unmet dependencies problem.
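A rough sketch of what that looks like on Debian 8 (jessie), assuming the backports repo isn't already enabled:

# enable jessie-backports, then install the client from it
echo "deb http://ftp.debian.org/debian jessie-backports main" >> /etc/apt/sources.list
apt-get update
apt-get -t jessie-backports install letsencrypt   # or: aptitude install letsencrypt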

Drupal comes with an .htaccess file containing the following rule, meant to 'Block access to "hidden" directories whose names begin with a period':

RewriteRule "(^|/)\." - [F]

so I added the following condition to the Drupal htaccess file for Let's Encrypt to work:

  RewriteCond %{REQUEST_URI} !^/.well-known/*
  RewriteRule "(^|/)\." - [F]

The above applies to Drupal 7 while interestingly in Drupal 8 the rule is already:

RewriteRule "(^|/)\.(?!well-known)" - [F]

Just vi /etc/hosts and comment out your domain, e.g. "#127.0.0.1 makebypolarr.com".
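In other words, something like this in /etc/hosts (a sketch, using the domain above):

# before: local lookups send the domain to 127.0.0.1, which may not match the intended vhost
127.0.0.1   makebypolarr.com
# after: commented out, so wget/curl run on the box resolve the public IP via DNS instead
#127.0.0.1   makebypolarr.com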

My /etc/hosts has the domain I am requesting the certificate for; apparently the file is not getting created. I am able to download a text file from the .well-known folder via wget, but not with curl.

I get a feeling it's an issue with the .htaccess file, but this was not a problem before. Also need to mention that the Check Connectivity option in Virtualmin Pro gives an error about DNS not found. I am waiting for the nameservers to get updated and might try again.

If your domain cannot be resolved externally via DNS, that would also prevent a Let's Encrypt cert from being issued, as the service that issues the cert also needs to request that test URL from your website.

Hi Jamie,

I am installing the server again, as I was using the nameserver and trying to request a certificate for the nameserver subdomain, e.g. ns1.domain.com; apparently that's a conflict with the local host or something. I am still learning Linux. What works is that if I keep the domains and the nameserver FQDN different, there is no problem with Let's Encrypt. So difficult to get everything correct at the initial stages. :(

Thanks

You should make sure that the Let's Encrypt cert is requested for your domain (ie. domain.com) and not a hostname (like ns1.domain.com)

Ok, I'm back, as I was unaware that my LE certs would expire after only 3 months. I'm a bit shocked. Fortunately, I found out about this only an hour after the cert expired! I think the Virtualmin "Manage SSL Certificate" page should say that, right? And AUTOMATIC 3-MO RENEWAL SHOULD BE THE DEFAULT!

Another thing this module should clearly warn about is that if you click the "Copy to Webmin/Usermin/etc" buttons, that cert becomes the one-and-only cert those services answer with, and if you've copied another domain's cert to them before, this overwrites it. This is important because one would assume that the buttons add the current domain's cert to a pool of them. That would make sense, since all of your domain admins would probably like to access Virtualmin using their own domain address. Instead, the master admin needs to select a single domain whose address all of the other domains' admins will visit to work on their own domains. Not very slick. And a nasty surprise when this undocumented condition is discovered. I'd originally visited each of my subdomains' control panels to click that button, thinking that I was enabling this for each of my domains!!! This stopped my mail from working! Lastly, it needs a "remove cert" button, since I have no idea how the cert is being "copied" and the possibility is there to mess up Virtualmin trying to undo it manually.

I was then actually unable to renew my cert for another 3 months, because the Virtualmin LE script doesn't work if, like me, you have a redirect from http to https. It's a Website Redirect entry from "/" to "https://domainname/$1". I had to manually disable it and then re-enable it later. This sort of redirect is becoming common as privacy-minded admins improve the safety of their visitors. Perhaps Virtualmin should have a discrete setting to turn on http->https redirection, so an updated LE script can temporarily turn it off and then on again. Without that, the renewal is a manual process which I will basically only learn about once I notice my website is no longer getting visits, because folks are panicking at the dire cert warning in their browser.

Yes, Let's Encrypt only issues certs valid for 3 months. I suppose we could make automatic renewal the default, as that's likely what most users want.

Regarding the HTTP redirect, this shouldn't prevent requesting of a cert. What error message did you get with this enabled?

@JamieCameron The HTTP->HTTPS redirect caused the same error as Gerard, in the first post of this topic. It was the reason I joined the convo a few months back.

The server redirect prevents the request/update of a cert because outbound1.letsencrypt.org requests the acme-challenge as an http URL. The web server will then redirect to https - which isn't working anyway because it uses a currently-invalid cert or self-signed cert. LE won't follow a redirect, and even if it did, it would have an expired/insecure cert to deal with. In summary, it makes LE think that the acme-challenge isn't found at the authorized domain.

A solution would be to add an option to the VM Website Redirect module: "Select all. | Invert selection. | Add a new website redirect. | Redirect http to https". Since this would be a VM-managed redirect, the LE script could ask VM to temporarily turn it off until the cert was updated. Otherwise, it would have to have AI to read the list of redirects and figure out if/how a particular admin had implemented a redirect.

A more elegant solution would be for LE to check the acme-challenge using https, and the applicant would be required to have at least a self-signed cert in place to do this. Otherwise, spooks could sniff the new cert as it passes by via http. But that's outside VM's control, of course.

Ok, it sounds like what we really want is to not redirect the /.well-known URL path. I'll work on implementing this.

@JamieCameron: Well, yes, sorta. But I don't approach it mentally as "circumvent server redirects for the /.well-known URL path". That seems tricky, and you can't predict exactly what the various admin's approaches to a server redirect will look like.

Bear in mind, we all should be using https for everything now! There should be a checkbox next to "enable https website" to "enable http to https redirect" and it should be checked by default. We might even consider an option to turn the http server off entirely.

It would be much more elegant if Virtualmin had a built-in option to enable this redirect, so it could also know its state and toggle it as needed for things like LE. VM would certainly be the hero in that case! :)

My redirect looks like this:

Website redirect or alias details
  Source URL path:               /
  Destination:                   [x] URL at other website: https://www.mydomain.com/$1
                                 [ ] Directory on this system
  Include sub-paths in redirect? [x] Yes   [ ] No
  Enable redirect for:           [x] Non-SSL website   [ ] SSL website

It's actually possible to setup an exclusion to the redirect, so that /.well-known is still served via HTTP. I'm kind of surprised that Let's Encrypt doesn't accept redirects though..
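For example, an http -> https redirect with such an exclusion can look roughly like this in the non-SSL virtual host or .htaccess (a sketch, not the exact rule Virtualmin generates):

RewriteEngine On
RewriteCond %{HTTPS} off
# skip the redirect for ACME challenge requests
RewriteCond %{REQUEST_URI} !^/\.well-known/
RewriteRule ^(.*)$ https://www.mydomain.com/$1 [R=301,L]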

As to whether HTTP -> HTTPS redirects should happen by default, I can add an option for that.

I'm not surprised that LE won't follow 302 or other server redirects. That's man-in-the-middle type behavior. I am surprised they don't make us set up a self-signed cert so the /.well-known handshake happens entirely over https.

I guess there's zero security either way - without a signed cert, an attacker could still MITM traffic.

It's another layer on the onion though, and it eliminates all but the most powerful attackers from slurping up those signed certs.

Thanks, nobody, for the tip in post #32 for Drupal 7.

I had the same issues with some subdomains, and the following steps fixed it for me. Since it's the first result for this issue in Google, I'll post my solution here; maybe it'll be useful to someone.

  1. Disabled the SSL website feature (Edit Virtual Server -> Enabled features) for the main domain and each subdomain.
  2. Re-enabled it for the main domain and requested a certificate (when Server Configuration -> Manage SSL Certificates stated that a shared certificate was in use, I hit the break shared certificates button).
  3. Re-enabled the SSL website feature and requested a certificate for each subdomain.

As I have multiple SSL websites on the same IP with SNI, when I re-enabled SSL the website's Apache conf also specified the listening IP address, which I replaced with *, e.g.

instead of

<VirtualHost A.B.C.D:443>

I changed to

<VirtualHost *:443>

Problem solved!

Submitted by Joe on Sun, 03/26/2017 - 13:12 Pro Licensee

Most users should go the other direction. Instead of changing all instances of VirtualHost x.x.x.x:443 to *, you should change all instances of * to a fully specified IP address. The only time * works as expected is if you only have one IP address and never plan to add more IP addresses.

I had a similar issue to this today.

On creating a new Virtual Server, I tried to create a LetsEncrypt SSL and it kept failing.

The LetsEncrypt module was placing the file in the .well-known directory but was trying to access it via the mail subdomain.

Rather than ask for the SSL cert to be created initially for the TLD, www and mail domains, i.e. 'example.com', 'www.example.com' and 'mail.example.com' (which is the default), I chose 'Domains Listed Here' with 'example.com' and 'www.example.com' being the domains.

LetsEncrypt could then create the SSL without issue.

Once created I could add the 'mail.example.com' to the listed domain and re-request the certificate....

Seems the default list of domains creates the issue in VirtualMin...

The issue with the "mail." sub-domain will be fixed in the next Virtualmin version.

instead of <VirtualHost A.B.C.D:443> I changed to <VirtualHost *:443> Problem solved!

Can someone help me please understand the implications of doing this? I read Joe's comment that:

Most users should go the other direction. Instead of changing all instances of VirtualHost x.x.x.x:443 to *, you should change all instances of * to a fully specified IP address. The only time * works as expected is if you only have one IP address and never plan to add more IP addresses.

But I'm a bit beside myself at this point and crying uncle with Let's Encrypt in Virtualmin. Been searching and trying different solutions for 2 days straight. Sometimes certification works and sometimes it doesn't, yet all domains seem to be setup the same.

The key problem shows itself when I curl the file that Let's Encrypt just failed to access (e.g., curl -i http://autodiscover.mydomain.com/.well-known/acme-challenge/1CZdP2nUAmoT...). It always comes back as 404 Not Found, even when I can access it via a browser. The way to make it consistently succeed is changing <VirtualHost xx.xx.xx.xx:80> to <VirtualHost *:80> in /etc/apache2/sites-available/mydomain.com.conf. I just don't understand why this works and what problems it may cause.

Additionally, when I make this change for a virtual server (myserver.com) that includes my fully qualified domain name (host.myserver.com) so I can have a Let's Encrypt cert for Webmin/Virtualmin, bringing up the domain (www.myserver.com) actually brings up a different virtual host's site. Is this a bug? Why would changing the specific IP to an * redirect one site to another?

Debian Linux 9, Virtualmin 6.00, Webmin 1.852. Any thoughts or direction? Thanks.

One possible cause for the different behavior between when you use an IP vs * is if your system is behind some kind of proxy or NAT that is insulating it from the real external IP address. This can be the case for some cloud VM providers, for example EC2.

You can check if this is the case by SSHing in as root and running the ip addr command, and see if your xx.xx.xx.xx IP appears in the list.
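Roughly (a sketch, with mydomain.com standing in for the real domain):

ip addr | grep 'inet '     # addresses actually configured on the system
dig +short mydomain.com    # the address the outside world (and Let's Encrypt) sees
# if the DNS address is missing from the first list, the box is behind NAT or a proxy,
# and VirtualHost entries pinned to that external IP will never be matched
apache2ctl -S              # shows which VirtualHost handles each name and address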

One possible cause for the different behavior between when you use an IP vs * is if your system is behind some kind of proxy or NAT that is insulating it from the real external IP address. This can be the case for some cloud VM providers, for example EC2. You can check if this is the case by SSHing in as root and running the ip addr command, and see if your xx.xx.xx.xx IP appears in the list.

Thanks Jamie. My box isn't behind a proxy or NAT (which I just confirmed) so it doesn't appear that's contributing to the problems. Any other thoughts?

In my case the problem was caused by the .htaccess in the document root. The Typo3 installation had an .htaccess which prohibits access to hidden folders.

# ------------------------------------------------------------------------------
# | Block access to hidden files and directories.                              |
# ------------------------------------------------------------------------------
#<IfModule mod_rewrite.c>
#    RewriteCond %{SCRIPT_FILENAME} -d [OR]
#    RewriteCond %{SCRIPT_FILENAME} -f
#    RewriteRule "(^|/)\." - [F]
#</IfModule>

Yeah, I'd recommend checking for any .htaccess files that could be intercepting requests to /.well-known

I'm running into this same problem on my server, and have attempted to add an exception for the .well-known directory. At this point, I've even tried to disable https entirely for the domain, but it's still redirecting. Yes, I have cleared cache, and have tried multiple browsers. Not sure what to do at this point. The cert will expire the middle of next week, so I have a few days to work on it.

The verification file is being written to the directory, but is not accessible by http. I can access it by https. I have to agree with AustinTX that a much cleaner solution would be for LE to use https for the verification.

Any ideas?

For others having this issue, I hope this helps. Under Server Configuration > Manage SSL Certificate, under the "Create Signing Request" tab, check that the "Other domain names" input does not include a different domain, then press "Generate CSR Now". After this is finished, try generating an SSL cert using the Let's Encrypt tab.

Suddenly I get errors again.

Domain: domain.com (In the log there is an actual domain name)
   Type:   unauthorized
   Detail: Invalid response from
   http://domain.com/.well-known/acme-challenge/V7jfiXPJbDPierazn3H6SnvGkshAO2RaGUUm9ah07bo:

Two days ago this update was installed: /usr/bin/yum -y install apr.i686 python27-setuptools.noarch

Could this have to do with it? There is no .htaccess in the root directory.

You may want to try updating to the latest Virtualmin version, which just came out; that may help with the issue you're seeing.

If that doesn't help, could you share the URL that isn't working and caused the error? We'll need to be able to access the URL in order to see why it's not working properly.

Updating to the latest version of Virtualmin did not solve it. I registered a domain name for testing and created a virtual domain. This is the error when requesting a certificate:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for letsssl.space
http-01 challenge for www.letsssl.space
Using the webroot path /home/letsssl/public_html for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Unable to clean up challenge directory /home/letsssl/public_html/.well-known/acme-challenge
Failed authorization procedure. letsssl.space (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://letsssl.space/.well-known/acme-challenge/JAYIorr0UggBLQZwcYyGEGGJRXAD83JUvbpH7Bv79l4: "<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p", www.letsssl.space (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.letsssl.space/.well-known/acme-challenge/Xdzqq3sg7B9CyxeC7Z7Dkg7N5dz-4h1k3C0X_zdMPt0: "<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p"
IMPORTANT NOTES:
- The following errors were reported by the server:

   Domain: letsssl.space
   Type:   unauthorized
   Detail: Invalid response from
   http://letsssl.space/.well-known/acme-challenge/JAYIorr0UggBLQZwcYyGEGGJRXAD83JUvbpH7Bv79l4:
   "<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
   <html><head>
   <title>404 Not Found</title>
   </head><body>
   <h1>Not Found</h1>
   <p"

   Domain: www.letsssl.space
   Type:   unauthorized
   Detail: Invalid response from
   http://www.letsssl.space/.well-known/acme-challenge/Xdzqq3sg7B9CyxeC7Z7Dkg7N5dz-4h1k3C0X_zdMPt0:
   "<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
   <html><head>
   <title>404 Not Found</title>
   </head><body>
   <h1>Not Found</h1>
   <p"

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.


And this is in the apache error log

[Thu Dec 07 08:18:23 2017] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
[Thu Dec 07 08:18:23 2017] [warn] RSA server certificate wildcard CommonName (CN) `*.letsssl.space' does NOT match server name!?


The directory /home/letsssl/public_html/.well-known/acme-challenge is empty apart from a .htaccess.
I uploaded the letsencrypt.log to this comment. Let me know if you need more information.

Looks like you have something redirecting requests to /.well-known that is breaking the Let's Encrypt validation. Virtualmin does attempt to turn this off, but there are so many ways Apache can be configured to perform redirects that we can't catch them all.

There is something strange in Virtualmin, updated to the most recent version. Under Server Configuration > Website Options I no longer see the options for executing PHP as apache or as the domain owner. Does anyone else experience this?

I am reporting here since it could have to do with this issue.

@Gerard - that's a bug. We're going to do a new minor release to fix it..

I updated to version 1.870 but Website Options has not changed; the options are still not available. And the Let's Encrypt cert doesn't work yet. I don't see why; nothing has changed in the Apache conf lately.

I get
Not Found The requested URL /.well-known/acme-challenge/JAYIorr0UggBLQZwcYyGEGGJRXAD83JUvbpH7Bv79l4 was not found on this server.

error_log: [Thu Dec 14 09:40:18 2017] [error] [client deleted-my-ip] File does not exist: /home/letsssl/public_html/.well-known/acme-challenge/JAYIorr0UggBLQZwcYyGEGGJRXAD83JUvbpH7Bv79l4

Ok, that looks like what should happen. But if you re-try requesting an SSL cert, does it still fail with the same error message?

I retried which resulted in the same errors. Then deleted domain, recreated domain and tried again. Same errors. If you want I can give you access to the server.

Thanks for the login - I fixed this on your system. The problem is that your domain letsssl.space has an IPv6 address in DNS, but Apache wasn't configured to accept requests on that address. The Let's Encrypt service will use IPv6 in preference, so it was always failing the request!
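For reference, the kind of change involved is making the virtual host answer on the IPv6 address as well - roughly like this, with documentation-reserved placeholder addresses (the exact edit depends on how the existing VirtualHost lines are written):

<VirtualHost 203.0.113.10:80 [2001:db8::10]:80>
    ServerName letsssl.space
    ServerAlias www.letsssl.space
    DocumentRoot /home/letsssl/public_html
</VirtualHost>
# or simply <VirtualHost *:80> if IP-specific virtual hosts aren't needed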

Thanks very much! I did a search and replace in apache conf and everything works just fine now.

For me it was a server config issue!

Make sure you test your setup by putting a test.html with random text in it into the /.well-known/acme-challenge directory, and trying to access it in a browser.

I am using Drupal, and I had to add an .htaccess to the .well-known folder with these lines

# Override overly protective .htaccess in webroot
RewriteEngine On
Satisfy Any