Let’s encrypt SSL certificates on cPanel automatically and without native support (for example, at Namecheap)

You may already be aware that StartSSL free certificates are no longer an option: after Google and Mozilla decided to distrust StartCom, only certificates issued before October 21st will remain valid, while any issued after that date will trigger warnings in Google Chrome, Mozilla Firefox, and potentially other software.

Fortunately, we still have Let’s Encrypt free certificates, provided you are able to automate renewals (the certificates expire after just three months).

Usually this is not a problem if you control your systems: there are plenty of solutions to automate renewal and installation, written in Python, Bash, and many other languages, with good integration for services such as Apache HTTP Server or nginx.

However, shared hosting, which I use for some of my webpages, is a different matter.

If your provider supports Let’s Encrypt out of the box, it is a piece of cake. But if it does not (and my provider does not), configuring everything may not be so easy.

In my case, one of the providers is Namecheap, and they decided not to integrate Let’s Encrypt support into cPanel because (they say) it would require significant changes to their infrastructure.

I am happy enough with their service, given the features I get for the money I pay. So although my first idea was to switch to another hosting provider, I decided to see whether I could automate all the needed parts to renew the certificate and upload it to cPanel.

After some time I was able to put together a set of scripts that use acme.sh to renew the certificates and install them when needed, so here is my small guide on how to configure everything end to end.

1. Get SSH access to your shared hosting

Usually you just need to contact tech support and request it. Then, in cPanel, add your SSH public key and connect with your preferred client.
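If you have never done this before, here is a minimal sketch, assuming an Ed25519 key and placeholder hostname, port and user (your provider may use a non-standard SSH port, so check their documentation):

# Generate a key pair locally (skip this if you already have one)
ssh-keygen -t ed25519 -C "cpanel-ssh"

# After authorizing the public key in cPanel (SSH Access), connect;
# replace user, host and port with the values your hosting gives you
ssh -p 22 cpaneluser@yourdomain.com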

2. Get and configure acme.sh

acme.sh is a Bash script that uses the ACME protocol to verify domain and subdomain ownership and issue Let’s Encrypt certificates.

The main benefit is that it has no external dependencies and, at least for me, it works at Namecheap without any special setup.

I suggest you create a git folder outside your public_html to hold the repositories we are going to clone, so…

cd ~
mkdir git
cd git
git clone https://github.com/Neilpang/acme.sh.git
cd acme.sh

Then you need to configure acme.sh so it can verify ownership of the domains and subdomains that you will include in your certificate(s).

For my scripts to work, you should generate one certificate for each domain and its subdomains: for example, one certificate for domain1.com, www.domain1.com and www2.domain1.com, and another certificate for domain2.com, www.domain2.com and www2.domain2.com.

Make sure that the bare domain is the first -d parameter when you call acme.sh.

Which validation method you use is up to you.

Personally I use the DNS method, but keep in mind that you won’t be able to use it if any of the subdomains is a CNAME (see RFC 1912, section 2.4, for reference). If you run into this problem, try the webroot method instead.

At this point, check the acme.sh help and documentation to make sure you know how to configure it.
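Just as an illustration, issuing a certificate for the first example set of domains with the webroot method could look like this (the webroot path is a placeholder; with the DNS method you would use the --dns options described in the acme.sh documentation instead):

cd ~/git/acme.sh

# One certificate for the bare domain (first -d) plus its subdomains,
# validating each name by placing a token under the site's document root
./acme.sh --issue -d domain1.com -d www.domain1.com -d www2.domain1.com -w ~/public_html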

3. Get and configure cp-installssl

Now it is time to clone my scripts to automate everything.

cd ~
cd git
git clone https://github.com/juliogonzalez/cp-installssl.git
cd cp-installssl

Configure cp-installssl by copying the file .cp-installssl.ini to your home directory and adjusting its content:

cp .cp-installssl.ini ~/
nano ~/.cp-installssl.ini  # or use your preferred editor

Finally open the CRON configuration:

crontab -e

And add a new line:

0 0 * * * ${HOME}/git/cp-installssl/wrapper/cp-installssl-wrapper -d '[@.,www.,www2.]domain1.com [@.,www.,www2.]domain2.com' -a ~/git/acme.sh/ -c ~/git/cp-installssl/ > /dev/null

Note that you will need to adjust the schedule, and the content of the -d parameter, according to the domains and subdomains you have.

To see the format of the -d parameter, have a look at the cp-installssl-wrapper help:

~/git/cp-installssl/wrapper/cp-installssl-wrapper -h

Now, for the most tech-savvy, notice that there are two scripts in my git repository:

  • cp-installssl is a PHP script which takes care of installing SSL certificates into cPanel using its JSON API. It’s based on code by Rob Parham.
  • cp-installssl-wrapper is a Bash script that takes care of calling acme.sh to renew the certificates when needed and, if a renewal happened, installs them using cp-installssl.

4. Test

Just run the command you added to CRON, without the schedule and without '> /dev/null', so you can see the output.
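In other words, something like this (same placeholder domains as in the crontab entry above):

~/git/cp-installssl/wrapper/cp-installssl-wrapper -d '[@.,www.,www2.]domain1.com [@.,www.,www2.]domain2.com' -a ~/git/acme.sh/ -c ~/git/cp-installssl/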

5. Enjoy!

From this point on, my scripts will take care of renewing the SSL certificates on the schedule you configured in CRON.

Note that the certificates are only renewed when needed.

Sonatype Nexus 3.x RPMs on OpenShift

Categories: Uncategorized
Published on: 05/06/2016

My sonatype-nexus-openshift repository is now able to deploy Nexus 3.x. You can check the README.md file to see how to do it.

As the scripts will change the JVM memory configuration, it is possible to run Nexus 3.x on small gears.

Please keep in mind that at this moment it is not possible to migrate from Nexus 2.x to Nexus 3.x, so I suggest you deploy to a new cartridge.

Scripts to generate Sonatype Nexus 3.x RPMs are now available

Categories: Uncategorized
Published on: 10/04/2016

I just updated my nexus-oss-rpm GitHub repository to support RPMs for the new 3.x releases.

Please be advised that, according to the official documentation, it’s not possible to migrate from Nexus 2.x at this point, and that you’ll need at least 1GB of RAM (the minimum for Nexus 2.x was 512MB).

Nagios/Icinga email alerts using AWS Simple Email Service (SES)

Published on: 26/02/2016

The biggest problem with sending email from AWS EC2 instances is that, sooner or later, your instance’s IP will be added to a blacklist. It doesn’t matter how secure your MTA is. It doesn’t matter if it’s not reachable from the internet. And it doesn’t matter if you are not sending spam at all.

Sooner or later, one of your neighbors using an IP in the same range as your instance will send spam. Then one of the blacklists (such as Trend Micro’s) will add the whole range as a spam sender.

In theory you could try setting up rDNS, but that’s not always a guarantee of staying off the lists. And obviously what AWS recommends is using (and paying for) Simple Email Service, or SES. It’s pretty easy to set up and pretty easy to use (you can set up IAM accounts to use SMTP).

For some services the configuration is easy: add the SES endpoint as the SMTP server hostname, enable SSL, select TCP port 465, add your credentials, and you are ready to go.
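As a quick sanity check, assuming the us-east-1 endpoint (adjust the region to yours), you can verify that the SES SMTP endpoint answers over implicit TLS on port 465:

# Open a TLS connection to the SES SMTP endpoint; a 220 banner means it is reachable
openssl s_client -connect email-smtp.us-east-1.amazonaws.com:465 -crlf -quiet
# Type QUIT to close the session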

But how to do it for Nagios or Icinga (version 1)? (more…)

Automating the Tibco Spotfire Webplayer installer

Published on: 21/02/2016

Tibco Spotfire is one of the commercial options for Business Intelligence Analytics, with several different components available to be installed.

Some of the components, such as Tibco Spotfire Server (the core of the Tibco Spotfire platform), are more or less easy to automate if you decide to go for the GNU/Linux version, as it works on Red Hat and you can use Oracle as the database, even as AWS RDS (I will write another post about this).

But some other components can be tricky, especially those running on Microsoft Windows.

(more…)

Nexus on OpenShift using the official jetty bundle

Categories: Cloud, DevOps, OpenShift
Published on: 14/02/2016

For the last two years, I was using Shekhar Gulati’s GitHub repository to manage my personal Sonatype Nexus service. Deploying to OpenShift was easy, and I could get Nexus running in less than five minutes.

However, this repository was deploying Nexus as a WAR file on Tomcat, something which is now deprecated by Sonatype and won’t be provided for the upcoming 3.0 release. Besides that, the WAR doesn’t include support for npm or Ruby gems, and uploading new WAR files to the repository was slow and inefficient.

(more…)

Easy and fast backup (and recovery!) for huge PostgreSQL instances at AWS

Published on: 01/02/2016

Usually you have several options when you want to back up PostgreSQL instances. The first stop would be the official documentation. But at some point dumping the data, or stopping the instance, will not be enough. Of course, you could also go for incremental backups using the Write Ahead Log (WAL).
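For small databases the first option is as simple as a dump and restore; a minimal sketch with a placeholder database name:

# Dump a single database in the custom (compressed) format
pg_dump -Fc -f mydb.dump mydb

# Restore it into an existing database elsewhere
pg_restore -d mydb mydb.dump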

But what if you want fast backups, with fast recovery, and your instance is really huge? (more…)

Removing unwanted files from GIT repositories

Published on: 13/01/2014

A lot of times you inherit a repository with several binaries which cause exponential size growth whenever they’re modified. But a Git repository should not host binaries. In fact, no VCS should host those files.

Other times a developer (or you yourself) uploads something by mistake that shouldn’t be in the repository.

In either case you’ll want to delete those files.

I tried several guides, but they never worked as expected because they don’t update all the branches of the repository, and therefore the file is never really deleted.

So let’s go: (more…)
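The full walkthrough is behind the cut, but to illustrate the idea (a sketch with a placeholder path, not necessarily the exact commands from the post), purging a file from every branch means rewriting history across all refs and then discarding the old objects:

# Rewrite every branch and tag so the unwanted file disappears from history
git filter-branch --index-filter 'git rm --cached --ignore-unmatch path/to/big-binary.zip' --prune-empty --tag-name-filter cat -- --all

# Expire reflogs and repack so the old objects are really removed
git reflog expire --expire=now --all
git gc --prune=now --aggressive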

ebs-changer: How to change between EBS volume types (or number of PIOPs) in a fast and reliable way

Published on: 07/11/2013

NOTE: ebs-changer is no longer maintained, as it is now included in the EBS-Tools suite.

Did you ever want to change standard EBS volumes to io1 volumes on Amazon Web Services? Maybe io1 to standard? Did you want to increase the number of PIOPs your volumes are using? Did you perform this tedious job by hand?

In my current project (SmartSteps) at Telefonica R&D we needed to do this a lot of times, on several environments using MongoDB replica sets and RAID0 on each mongo server.

So each time we needed to stop the mongo services (or the instances), snapshot all the volumes, detach and delete the old volumes, create new volumes from the snapshots, and then reattach them using the same devices. That was the same set of operations more than 15 times.

And of course, it was quite possible to make mistakes in the process.

So, why should we do this by hand when it’s possible to automate and run the changes in parallel? (more…)
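To give an idea of the manual procedure for a single volume, here is a sketch with the AWS CLI and placeholder IDs (the tool automates and parallelizes this for all the volumes of an environment):

# 1. Snapshot the existing volume (with writes stopped)
aws ec2 create-snapshot --volume-id vol-11111111 --description "pre io1 migration"

# 2. Create a new io1 volume from the snapshot, in the same availability zone
aws ec2 create-volume --snapshot-id snap-22222222 --volume-type io1 --iops 2000 --availability-zone eu-west-1a

# 3. Swap the volumes, keeping the same device name
aws ec2 detach-volume --volume-id vol-11111111
aws ec2 attach-volume --volume-id vol-33333333 --instance-id i-44444444 --device /dev/sdf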

How to check if a database exists on MongoDB

Categories: Databases
Published on: 01/05/2013

Why do you need this?

Just suppose you want to back up a Mongo database using mongodump.

As backups of individual databases don’t support --oplog, you may want to make sure you get a consistent database, and therefore you decide to use fsyncLock() to flush any pending write operations and block new writes until your task is finished (do this only on secondary nodes of replica sets).

But then you launch mongodump against the MongoDB node… and the task keeps waiting for something…

(more…)
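The answer is behind the cut, but for context, one straightforward way to check for a database from the shell, with a placeholder database name (an illustration, not necessarily the approach the post ends up using), is:

# List the database names known to the node and look for the one we care about
mongo --quiet --eval 'print(db.getMongo().getDBNames().join("\n"))' | grep -qx 'mydatabase' && echo 'mydatabase exists' || echo 'mydatabase does not exist'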
