Maybe you are already aware that StartSSL free certificates are no longer an option: after Google and Mozilla decided to distrust StartCom, only certificates issued before October 21st remain valid, while any issued after that date will trigger warnings in Google Chrome, Mozilla Firefox, and potentially any other software.
Fortunately, we have Let’s Encrypt free certificates, provided you are able to automate renewals (the certificates expire after just 90 days).
Usually this is not a problem if you control your systems. There are many tools to automate renewal and installation, written in Python, Bash, and other languages, with good integration with services such as Apache HTTP Server or nginx.
However, shared hosting, which I use for some of my web pages, is a different matter. (more…)
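When you do control the system, the automation can be as small as a cron entry around certbot. This is only a sketch: the webroot path, domain, and reload hook are assumptions you would adapt to your own setup.

```shell
# Obtain the certificate once (webroot path and domain are placeholders):
certbot certonly --webroot -w /var/www/example -d example.com

# Then let cron handle renewals; certbot only renews certificates that
# are close to expiry, so running it daily is safe. E.g. in /etc/cron.d/certbot:
#   0 3 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```

The `--post-hook` makes the web server pick up the renewed certificate without a manual restart.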
My sonatype-nexus-openshift repository is now able to deploy Nexus 3.x. You can check the README.md file to see how to do it.
As the scripts change the JVM memory configuration, it is possible to run Nexus 3.x on small gears.
Please keep in mind that at this moment it is not possible to migrate from Nexus 2.x to Nexus 3.x, so I suggest you deploy to a new cartridge.
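For reference, Nexus 3.x reads its JVM settings from a `nexus.vmoptions` file, so trimming memory for a small gear comes down to lowering the heap and direct memory flags there. The values below are illustrative assumptions, not the exact ones the repository scripts set:

```
# nexus.vmoptions (illustrative values for a memory-constrained gear)
-Xms256m
-Xmx512m
-XX:MaxDirectMemorySize=512m
```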
I just updated my nexus-oss-rpm GitHub repository to support RPMs for the new 3.x releases.
Please be advised that, according to the official documentation, it’s not possible to migrate from Nexus 2.x at this point, and that you’ll need at least 1GB of RAM (the minimum for Nexus 2.x was 512MB).
The biggest problem with sending email from AWS EC2 instances is that, sooner or later, your instance’s IP will be added to a blacklist. It doesn’t matter how secure your MTA is. It doesn’t matter if it’s not reachable from the internet. And it doesn’t matter if you are not sending spam at all.
Sooner or later one of your neighbors on the same IP range as your instance will send spam. Then one of the blacklists (such as Trend Micro’s) will add the whole range as a spam sender.
In theory you could try setting up rDNS, but that’s not always a guarantee of staying out of the lists. And obviously what AWS recommends is using (and paying for) Simple Email Service, or SES. It’s pretty easy to set up and pretty easy to use (you can create IAM accounts for SMTP access).
For some services the configuration is easy: add the SES endpoint as the SMTP server hostname, enable SSL, select TCP port 465, add your credentials, and you are ready to go.
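Before wiring SES into any service, it’s worth checking the endpoint by hand. This is a hedged sketch: the endpoint below is the us-east-1 one, so substitute the region shown in your own SES console.

```shell
# Open an SSL-wrapped SMTP session against the SES endpoint on port 465:
openssl s_client -connect email-smtp.us-east-1.amazonaws.com:465 -quiet
# After the TLS handshake you can talk SMTP manually: EHLO, then
# AUTH LOGIN with your IAM SMTP credentials (base64-encoded), then
# MAIL FROM / RCPT TO / DATA to send a test message.
```

If the handshake or AUTH fails here, no service-side configuration will work either, so this isolates credential and firewall problems early.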
But how to do it for Nagios or Icinga (version 1)? (more…)
For the last two years, I was using Shekhar Gulati’s GitHub repository to manage my personal Sonatype Nexus service. Deploying to OpenShift was easy, and I could get Nexus running in less than five minutes.
However, that repository ran Nexus as a WAR file inside Tomcat, something which is now deprecated by Sonatype and won’t be provided for the upcoming 3.0 release. Besides that, the WAR doesn’t include support for npm or Ruby gems, and uploading new WAR files to the repository was slow and inefficient.
Usually you have several options when you want to back up PostgreSQL instances. The first stop would be the official documentation. But at some point dumping the data, or stopping the instance, is not enough. Of course, you could also go for incremental backups using the Write-Ahead Log (WAL).
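The usual options can be sketched like this. Paths and database names are placeholders, and the `wal_level` value depends on your PostgreSQL version (`archive` on older releases, `replica` on newer ones):

```shell
# 1) Logical dump of a single database, in custom format for pg_restore:
pg_dump -Fc -f /backups/mydb.dump mydb

# 2) Physical base backup plus WAL archiving, for point-in-time recovery.
#    First enable archiving in postgresql.conf:
#      wal_level = replica        # or 'archive' on older versions
#      archive_mode = on
#      archive_command = 'cp %p /backups/wal/%f'
#    Then take the base backup:
pg_basebackup -D /backups/base -X fetch -P
```

The logical dump is simple but slow to restore for very large instances, which is exactly the limitation the rest of the post is about.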
But what if you want fast backups, with fast recovery, and your instance is really huge? (more…)
NOTE: ebs-changer is no longer maintained, as it is now included in the EBS-Tools suite
Did you ever want to change standard EBS volumes to io1 volumes on Amazon Web Services? Maybe io1 to standard? Did you want to increase the number of PIOPS your volumes are using? Did you perform this tedious job by hand?
In my current project (SmartSteps) at Telefonica R&D we needed to do this many times across several environments using MongoDB replica sets and RAID0 on each mongo server.
So each time we needed to stop the mongo services (or the instances), snapshot all the volumes, detach and delete the old volumes, create new volumes from the snapshots, and then reattach them using the same devices. That was the same set of operations more than 15 times.
And of course, it was easy to make mistakes in the process.
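One iteration of that manual sequence looks roughly like this with the AWS CLI. All IDs, the IOPS value, the availability zone, and the device name are placeholders; this is a sketch of the procedure, not the tool itself:

```shell
VOL=vol-0123456789abcdef0
INSTANCE=i-0123456789abcdef0

# Snapshot the volume and wait until the snapshot is complete:
SNAP=$(aws ec2 create-snapshot --volume-id "$VOL" --query SnapshotId --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$SNAP"

# Detach the old volume (services or the instance must be stopped first):
aws ec2 detach-volume --volume-id "$VOL"
aws ec2 wait volume-available --volume-ids "$VOL"

# Create an io1 volume from the snapshot and reattach it on the same device:
NEW=$(aws ec2 create-volume --snapshot-id "$SNAP" --volume-type io1 --iops 1000 \
      --availability-zone eu-west-1a --query VolumeId --output text)
aws ec2 wait volume-available --volume-ids "$NEW"
aws ec2 attach-volume --volume-id "$NEW" --instance-id "$INSTANCE" --device /dev/sdf

# Only then delete the old volume:
aws ec2 delete-volume --volume-id "$VOL"
```

Multiply this by every volume in every RAID0 set in every environment, and the case for automating it becomes obvious.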
So, why should we do this by hand when it’s possible to automate and run the changes in parallel? (more…)
Why do you need this?
Just suppose you want to back up a Mongo database using mongodump.
As dumps of individual databases don’t support --oplog, you may want to make sure you get a consistent database, so you decide to use fsyncLock() to flush any pending write operations and block new writes until your task is finished (do this only on secondary nodes of a replica set).
But then you launch mongodump against the MongoDB node… and the task keeps waiting for something…
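The sequence you might try looks like this (host and database names are placeholders, using the legacy mongo shell); the middle step is the one that ends up waiting:

```shell
# Flush pending writes and block new ones on the secondary:
mongo --host secondary.example.com --eval 'db.fsyncLock()'

# Dump a single database... this is the command that hangs:
mongodump --host secondary.example.com --db mydb --out /backups/mydb

# Release the lock afterwards:
mongo --host secondary.example.com --eval 'db.fsyncUnlock()'
```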