by Michael Cropper | Apr 3, 2016 | Developer, Retrospective |
Rarely a day goes by without a new project bringing a new set of requirements, legacy systems and challenges to overcome. This was one of those projects, where the popular solutions we usually work through simply weren’t going to meet the requirements on this occasion. I’ll leave the full reasons behind this story for a case study once we write it up. For now, to keep things simple: we had to migrate around 250,000 emails from an old web server running IMAP and split them between two new homes, with the most used email accounts moving to the recommended Microsoft Exchange platform and the other email addresses staying on the older IMAP technology. There are many reasons why this had to be done, but I’ll skip over those for now.
So how exactly do you go about migrating 250,000 emails without losing access to anyone’s account, without any emails going astray during the migration and without losing any of the emails from the old system? Well, it requires quite a lot of planning, clear communication and, most importantly, a strategy that is going to work within the situation. This guide is for anyone else facing a large email migration project where it is essential everything keeps running at all times. Did we get it 100% perfect? Not quite. I’ve yet to see any IT or software project go 100% perfectly, but I’d like to think we managed 99.5% given the restrictions we were working with. This guide is going to cut straight to the chase and explain the key stages of the process you need to go through to get ever closer to the perfect migration.
Technologies
If you aren’t familiar with the differences between IMAP and Microsoft Exchange, then have a good read through our guide on the topic to get familiar. In short, when needing full traceability and efficient working with email technologies from anywhere in the world and on any device, quite simply you need to be using Microsoft Exchange. IMAP is not suitable for running a large organisation for a multitude of reasons.
With IMAP being the less advanced technology, there are things it doesn’t do as efficiently as Microsoft Exchange and, most importantly, IMAP isn’t actually a piece of software per se. It is a protocol, a set of rules for how things are handled. Because the protocol is implemented by software running on a web server, this in itself can lead to a whole host of problems depending on how well, or how badly, the web server has been set up and configured. As such, it is difficult to predict with 100% accuracy exactly what is going to be needed without jumping in and getting on with it. That being said, you can certainly improve your odds by working through a structured process to maximise the efficiencies within the process.
Requirements Gathering
This has to be one of the most important aspects of any project. Before you jump in and start recommending any solutions, you need to fully understand the situation that is in place at the minute, the issues with this set up and the ideal functionality that is required within the organisation. Spend time talking to as many people as possible to understand the challenges they face and most importantly explaining the variety of options and their benefits. Once you have a full list of requirements you can then begin to plan the best solution.
Planning
With your set of requirements in place, you can review the options available to you and decide upon the best solution. There is no one-size-fits-all when it comes to technology, so make sure you are taking advice from a reputable company; it will save you a fortune in the long term. Or, if you are the company implementing this process for someone else, make sure you don’t bite off more than you can chew; projects like this can go really bad really quickly if you aren’t 100% sure what you’re doing.
In this specific example, our planning stage was a little more complex due to splitting all of the old IMAP emails between a new IMAP destination and Microsoft Exchange. This in itself required a change of email address for some of the users, as you cannot run emails on a single domain such as @example.com across multiple technologies. In this case we had to set up a sub-domain, something.example.com, which meant the @example.com email addresses could be powered by the superior Microsoft Exchange technology while the @something.example.com email addresses could be powered by the older IMAP technology.
To achieve this requires good web hosting and DNS management in the background. On cheaper budget systems, you often cannot set up sub-domain delegation for MX records, which is required to do this. If you aren’t sure what an MX record is, it is essentially a single piece of DNS information: when an email is sent to hello@example.com, the sending server checks the MX record associated with example.com to see where it should deliver the email. That destination is generally either Microsoft Exchange or an IMAP server somewhere, often the one running on your web server. As we were using two different technologies in the background, we needed the MX records for each set of addresses to point at different mail systems, which a single domain’s MX records cannot do, hence needing to set up MX records at the sub-domain level too.
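To make that split concrete, the end result looks something like the following BIND-style zone fragment. The hostnames, priorities and IP address here are illustrative placeholders, not values from the real project:

```dns
; example.com - mail handled by Microsoft Exchange (Office 365)
example.com.                IN  MX  0   example-com.mail.protection.outlook.com.

; something.example.com - mail handled by the IMAP server on the web host
something.example.com.      IN  MX  10  mail.something.example.com.
mail.something.example.com. IN  A       203.0.113.10
```

The point is simply that the two sets of addresses resolve to two different mail systems, which is only possible because the sub-domain has its own MX record.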
A key part of the planning process is communication which brings me on to the next section, before we look at the nitty gritty of how to actually migrate 250,000 emails with zero downtime.
Communication
Throughout any project, success or failure generally comes down to communication in one way or another. Migrating 250,000 emails is no different. Since email is generally the most efficient way of speaking to people, you cannot rely on it should anything go wrong. In other words, you need another form of communication as a backup should things go pear-shaped for whatever reason, whether that’s another email account on a third-party platform or the good old telephone.
As the technical person implementing this, you need to communicate the process behind migrating the large number of emails to the key stakeholders in a non-technical and easily understandable way, which can always be a challenge when working with naturally very technical aspects. So whatever you do, make sure you are over-communicating about everything you are doing.
Implementation
So, let’s get down to the nitty gritty for how to migrate 250,000 emails with zero downtime.
Old Web Server Review
You need to know what you are starting with. Generally speaking, when you are migrating web servers for whatever reason, it is usually because the old web server is a bit rubbish in some form, often coupled with a lack of the flexibility you are used to when working with industry-leading technology. You must fully review every aspect of the old web server and speak to the key stakeholders who have had anything to do with managing it in the past, to uncover any interesting configurations that may have been set up over the years and can easily be missed.
We’ve already covered how to migrate a web server seamlessly with zero downtime, so have a read of that if you need more information on the topic. For now we’re going to look specifically at the email side of things. Remember, we wanted to migrate 250,000 emails from IMAP, half to Microsoft Exchange and half to the new web server which was also running IMAP. This required two significantly different approaches.
Prepare the DNS
First things first, we would always recommend migrating things in stages. So let’s assume that you are first going to migrate the contents of the web server, namely your websites, then secondly migrate your emails. Or vice versa. To do this requires you to prepare your DNS settings accordingly.
As DNS propagation can take up to 24 hours to complete fully, you need to plan this step in advance. With good DNS providers it is generally extremely quick, but do not count on this; plan ahead.
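One way to plan the timing is to work backwards from the record’s TTL: resolvers that cached the old value just before your change can keep serving it until the TTL runs out. A rough planning sketch, where the TTL and change time are assumptions you would replace with your own values:

```python
from datetime import datetime, timedelta

ttl_seconds = 14400  # assumed current TTL (4 hours) - check your own zone
change_time = datetime(2016, 4, 1, 22, 0)  # when you plan to flip the record

# A resolver that cached the old record just before the change can serve it
# until the TTL expires, so treat this as the earliest moment everyone
# should be seeing the new value.
fully_propagated = change_time + timedelta(seconds=ttl_seconds)
print(fully_propagated.isoformat())
```

A common trick is to lower the TTL on the relevant records a day or two before the migration, which shrinks this window considerably when the real cutover happens.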
In our example, we were also migrating the DNS settings, which were previously held at the old web hosting company. We moved these to an independent setup, meaning we could prepare them in advance so propagation could take place throughout the system prior to us changing nameservers at the registrar.
Registrar – Changing Nameservers
Once we had successfully added the DNS settings at the new DNS provider, so that the website would run off the new web server and the emails would continue running off the old web server, we were set to go with changing the nameservers. Once we made this change, we could be confident that any downtime was kept to a minimum.
Ok, so now the first process had been completed, this meant that we could move onto the emails. To do so requires more planning and careful implementation.
Preparing Microsoft Exchange
If you have ever set up Microsoft Exchange before you will know that there are a few settings to configure at the DNS level. To configure Microsoft Exchange in preparation for when we needed it, we actually had to add some of the DNS settings at the old web hosting company prior to switching the nameservers. This gives you the time to authorise your account, configure all of the relevant mailboxes and communicate with the individual users for the project.
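For reference, the Exchange Online records in question typically take a shape along these lines. The exact values come from your own Office 365 admin portal, so treat these as the shape of the records rather than values to copy:

```dns
example.com.              IN  MX     0  example-com.mail.protection.outlook.com.
autodiscover.example.com. IN  CNAME     autodiscover.outlook.com.
example.com.              IN  TXT       "v=spf1 include:spf.protection.outlook.com -all"
```

Adding these at the old DNS host ahead of time is what lets Exchange verify the domain and start accepting mail the moment the MX record points at it.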
This guide is not designed to cover how to configure Microsoft Exchange, so if you don’t know how to do that, get in touch and I’m sure we can help in some way with what you’re working on. For now, assume that a significant amount of time was taken to get everything configured correctly. Interestingly, as part of this specific project, we had to hunt down a rogue Microsoft Exchange account that had been created many moons ago by someone within the organisation, with zero information about what had been set up, why, and most importantly who owned it. As the Microsoft Exchange system is extremely secure, there are safeguards in place to prevent someone else taking control of an Exchange domain if one has already been set up, meaning we first had to track this account down and regain control of it. I certainly hope you never have to do this, as I can’t tell you how time consuming it is.
Ok, so now you have Exchange all prepared for your email migration project. Next is preparing the IMAP email accounts on your new web server.
Preparing the new IMAP email addresses
First things first, you need to create accounts on the new web server for all of the email addresses which are going to be using IMAP. Make sure you do an audit up front on the old web server to understand how large the individual email accounts are. If you create email accounts with a default size of, say, 1GB, then for certain users this may not be enough space. You want to plan for this in advance, as you do not want to find it out when you are halfway through transferring emails, only to have to start again.
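The audit itself boils down to comparing each mailbox’s size on the old server against the quota you plan to create on the new one. A minimal sketch, with the sizes hard-coded as stand-ins for whatever your control panel (or a du over the mail directories) actually reports:

```python
# Mailbox sizes in MB as reported by the old server - illustrative figures,
# not real data from the project.
mailbox_sizes_mb = {
    "alice@example.com": 250,
    "bob@example.com": 1800,
    "info@example.com": 920,
}

default_quota_mb = 1024  # the 1GB default mentioned above

# Flag any account that won't fit in the default quota so it can be
# created with a larger allowance before the transfer starts.
needs_bigger_quota = sorted(
    addr for addr, size in mailbox_sizes_mb.items() if size > default_quota_mb
)
print(needs_bigger_quota)
```

Running an audit like this before creating any accounts means the oversized mailboxes get the right quota from day one.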
Thankfully, on good web server control panels such as cPanel, doing this initial research is simple and relatively quick. Unfortunately, on the poor control panels which often come with budget web hosting packages, finding this out beforehand can be a little challenging and in some cases simply not possible, which is a little annoying.
Switching the New Email Addresses On
Now everything has been prepared with regards to your new email technologies, it’s time to switch the DNS settings to point all email-related activity to the correct destinations. Make sure you don’t make any mistakes when doing this, as things will start to go wrong and emails could go astray.
Migrating 250,000 Emails
Ok, so now we have everything prepared and we’ve switched the new emails on, but there is nothing in the mailboxes other than emails sent and received from this point forward. Jumping back a little, it is actually possible to migrate the emails to Microsoft Exchange while configuring all of the accounts. There is a handy tool within Microsoft Exchange which isn’t 100% reliable but can often connect to your old mail server and import the IMAP emails into Exchange, which is a different protocol. When this works it is perfect, as you end up with all of the sent and received emails, along with the folder structures people use, successfully ported over to Exchange. I’d recommend doing this prior to switching your new emails on, because it can take a while, as in days. So set this running and be patient. The larger the number of mailboxes you are dealing with, the longer this will take. To put this into perspective, when migrating in batches, moving around 20 mailboxes from IMAP to Exchange took around 8 hours.
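Using that figure (roughly 20 mailboxes in around 8 hours), you can sketch a rough schedule for the whole job. A back-of-the-envelope estimator, assuming throughput stays linear, which in practice it won’t exactly:

```python
import math

# Observed on this project: ~20 mailboxes migrated in ~8 hours per batch.
MAILBOXES_PER_BATCH = 20
HOURS_PER_BATCH = 8

def estimated_hours(total_mailboxes):
    """Rough wall-clock estimate for migrating IMAP mailboxes to Exchange
    in batches, based on the throughput observed above."""
    batches = math.ceil(total_mailboxes / MAILBOXES_PER_BATCH)
    return batches * HOURS_PER_BATCH

print(estimated_hours(100))  # 5 batches of 8 hours each
```

Even a crude estimate like this is useful for setting expectations with stakeholders about how many evenings or weekends the migration window needs to cover.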
Now, as mentioned before, IMAP isn’t as good as Exchange when it comes to the underlying technology, meaning that while there is a handy little tool that comes with Microsoft Exchange to do this task, the same is not true for IMAP. As IMAP runs off your web server, there are endless different configurations, setups and options which come into play, meaning you’re going to have to figure this bit out for yourself to a certain extent, with a lot of testing of various options.
Thankfully there is a seriously awesome tool called imapsync, which is designed to help with a lot of this process. At only £100 it is worth its weight in gold, and anyone attempting to migrate any amount of emails between web servers should purchase this tool. The developer behind the tool is also extremely helpful with the usual configuration niggles that you face with things like this.
I’m not going to cover how to use this tool in detail, as this will differ depending on your individual requirements. Suffice to say that with a bit of testing of various configurations you will get there in the end. Make sure you test things first on a test account on your new IMAP web server, so you can be confident it is working as expected. The last thing you want is to go all gung-ho and start everything running, only to find you did something wrong.
As a quick overview of imapsync: the tool requires you to install the supporting software from the Yum repositories, upload a tarball to the new web server under the root account via SSH, extract it, then test a few configurations to make sure the scripts can actually access the internet, and you’re good to go. Something to bear in mind is that if you are having trouble getting the script to connect to the internet, it’s likely due to the firewall running on your web server, either physical or software based, blocking specific ports on the outbound side, generally port 143 for IMAP.
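Once the binary is working, you typically end up driving imapsync from a small wrapper that loops over your accounts. A sketch that only prints the commands rather than running them, so you can eyeball the output before anything touches a live mailbox. The hostnames, account list and password file paths are made up for the example; the --host1/--user1/--host2/--user2 flags and the --dry option (report without copying) are real imapsync options:

```python
# Each tuple: (username on the old server, username on the new server).
# Illustrative accounts only.
accounts = [
    ("alice@example.com", "alice@something.example.com"),
    ("bob@example.com", "bob@something.example.com"),
]

OLD_HOST = "mail.old-host.example"   # hypothetical old IMAP server
NEW_HOST = "mail.new-host.example"   # hypothetical new IMAP server

def build_command(user1, user2, dry_run=True):
    """Assemble an imapsync invocation. Passwords are read from files
    (--passfile1/--passfile2) so they never appear in the process list."""
    cmd = [
        "imapsync",
        "--host1", OLD_HOST, "--user1", user1, "--passfile1", "/root/pass1",
        "--host2", NEW_HOST, "--user2", user2, "--passfile2", "/root/pass2",
    ]
    if dry_run:
        cmd.append("--dry")  # report what would be copied without copying
    return cmd

for old_user, new_user in accounts:
    print(" ".join(build_command(old_user, new_user)))
```

Start with the dry run against your test account, check the report, then drop the flag and run the batch for real.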
So there you go, once you’ve managed to run through this entire process, you’ll be in a situation where all of your IMAP emails are now also migrated to the new web server.
Sender Policy Framework
Something to bear in mind when setting up IMAP email is that you will likely be blocked as spam if you don’t have SPF records set up at the DNS level. Leading technologies such as Microsoft Exchange will, by default, block emails from domains that don’t authenticate themselves through SPF records, as many such emails are automatically generated spam. Keep this in mind if you start getting deliverability issues with certain email accounts or domains.
Summary
We deal with email migration on a regular basis, and you often can’t quantify the project before you get involved; it is extremely difficult to predict. Overall, though, what I can say is that if you are going through a similar project, you need to plan things effectively and implement the right technologies to make sure everything works. For larger organisations, losing access to their emails for even an hour is a lifetime. With a well-thought-out strategy, detailed requirements gathering and planning, your implementation will be much more straightforward. Follow a similar process when you are migrating emails and you too will be able to migrate a significant number of emails with virtually zero downtime.
The only other thing I would finish with is that it is best if you can implement some of the critical items, such as DNS changes, at times which are less likely to have an impact, for example late in the evenings or at weekends. It’s not ideal for the person implementing this, but the benefits to the organisation you are working with are enormous: emails don’t end up getting lost in the ether while DNS propagation is taking place. It is also recommended to keep your users informed at all stages and, most importantly, to tell them when not to send emails while you are making DNS changes. Users often work at all hours of the day when you may be expecting them not to, so keep in touch with people throughout.
by Michael Cropper | Mar 23, 2016 | Developer, Security, Technical, Thoughts |
If it ain’t broke, don’t fix it, right? Wrong. This is the belief system of inexperienced software developers and business owners who are, in some cases rightfully so, worried about what problems any changes may cause. The reality, though, is that as a business owner, development manager or otherwise, you need to make sure every aspect of the software you are running is up to date at all times. Any outdated technologies in use can cause problems, likely have known vulnerabilities and security issues, and will ultimately result in a situation whereby you are afraid of making any changes for fear of the entire system imploding on itself.
Ok, now I’ve got that out of the way, I assume you are now also of the belief that all software needs to be kept up to date at all times, without exception. If you are not convinced, then as a developer you need to come to terms with this quickly; as a business owner, you simply have not yet been in a situation where a laissez-faire attitude to software updates has cost you tens of thousands of pounds to remedy, which is far more expensive than pro-active updates and regular maintenance.
The biggest challenge software developers face at the coal face of the build process is the inherent unpredictability of software development. As a business owner or user of a software application, whether that is an online portal of some form, a web application, a mobile app, a website or something in between, what you see as a user is the absolute tip of the iceberg, the icing on the cake, and this can paint a completely false picture of the underlying technology.
Anyone who has heard me speak on software development and technologies in the past will have no doubt heard my usual passionate exchange of words along the lines of using the right technology, investing seriously and stop trying to scrimp on costs. The reason I am always talking to businesses about this is not as part of a sales process designed to fleece someone of their hard earned money. No. The reason I talk passionately about this is because when businesses take on board the advice, this saves them a significant amount of money in the long run.
To build leading software products that your business relies on for revenue you need to be using the best underlying technology possible. And here lies the challenge. Any software product or online platform is built using a plethora of individual technologies that are working together seamlessly – at least in well-built systems. Below is just a very small sample of the individual technologies, frameworks, methodologies and systems that are often working together in the background to make a software product function as you experience as the end user.

What this means though is that when something goes wrong, this often starts a chain reaction which impacts the entire system. This is the point when a user often likes to point out that “It doesn’t work” or “This {insert feature here} is broke”. And the reason something doesn’t work is often related to either a poor choice of technology in the first place or some form of incompatibility or conflict between different technologies.
To put this into a metaphor that is easier to digest, imagine an Olympic relay team, except that instead of having 4 people in the team, there are 30-50. The team in this analogy is the software, with the individual people being the pieces of software and technologies that make it work. Now picture this: an outdated piece of technology in the team is the equivalent of having an athlete from 20 years ago who was once top of their game, but hasn’t trained in the last 20 years. They have put on a lot of weight, their fitness level is virtually zero, they can’t integrate as part of the team and they generally have no idea what they should be doing. This is the same thing that happens with technologies. Ask yourself: would you really want this person on the team if you were relying on them to make yours the winning team at the Olympics? The answer is no. The same applies with software development.
Now to take the analogy one step further. Imagine that every member of the relay team is at a completely different level of fitness and experience. Each member of the team interacts with different parts of the system, but often not all of the system at once. What this means is that when one of the athletes decides to seriously up their game and improve their performance, i.e. a piece of technology gets updated as part of the system, this impacts other parts of the system in different ways. The reality of software development is that nothing is as linear as a relay team, where person 1 passes the baton to person 2 and so on. In reality, individual pieces of technology impact many other pieces and vice versa, often in ways you cannot predict until an issue crops up.
So, bringing this analogy back to software development in the real world: when a new update comes out, for example an update to the core Apple iOS operating system on which all mobile applications rely, this update can cause problems if a key piece of technology is no longer supported for whatever reason. A seemingly small update from version 9.0 to 9.2, for example, could actually result in a catastrophic failure which needs to be rectified for the mobile application to continue to work.
Here lies the challenge. As a business owner, IT manager or software developer, you have a choice. To update or not to update. To update leads you down the path of short term pain and costs to enhance the application with the long term benefits of a completely up to date piece of software. To hold off on updating leads you down the path of short term gain of not having to update anything with the long term implications being that over time as more and more technologies are updated your application is getting left in the digital dark ages, meaning that what would have been a simple upgrade previously has now resulted in a situation whereby a full or major rebuild of the application is the only way to go forward to bring the application back into the modern world. Remember the film Demolition Man with Wesley Snipes and Sylvester Stallone, when they stayed frozen in time for 36 years and how much the world had changed around them? If you haven’t seen the film, go and watch the 1993 classic, it will be a well spent 1 hr 55 mins of your time.
The number of interconnecting pieces powering any software product in the background is enormous, and without serious planned maintenance and improvements things will start to go wrong, seemingly randomly, but in fact caused by some form of automatic update somewhere along the line. Just imagine a children’s playground at school if it was left unattended for 12 months with no teacher around to keep everything under control. The result would be utter chaos in no time. This is your software project. As a business owner, IT manager or software developer, you need a conductor behind your software projects to ensure they are continually maintained and continue to function as you expect. It is a system and, like all systems in nature, it tends towards chaos rather than order if left alone. There is a special branch of mathematics called Chaos Theory which talks about this in great depth, should you wish to read into the topic.
As a final summary on the inherent unpredictability of software development: everything needs to be kept up to date, and a continual improvement process and development plan is essential so that your software doesn’t get left behind. A stagnant software product in an ever-changing digital world soon becomes out of date and needs a significant overhaul. This also leads to the highly unpredictable topic of timelines and deliverables, when dealing with so many unknown, unplanned and unpredictable changes that will be required as a continual series of improvements is worked through. What I can say is that any form of continual improvement is always far better than sitting back and leaving a system to tick away. For any business owner who is reading this, when a software project is delayed, this is generally why. The world of software development is an ever-moving and unpredictable goal post which requires your understanding. Good things come to those who wait.
by Michael Cropper | Mar 6, 2016 | Future, News |

My primary role at Contrado Digital is talking to businesses about their ambitions, goals, plans and naturally the many problems they face within their businesses. While we can help out with most problems businesses face when it comes to digital, growth, online marketing and technology, one of the problems that we kept hearing was around recruitment. Not just for digital roles, but for many industries and organisations of all sizes struggling to recruit their ideal candidate.
Employers are struggling to find the right skills for the job, not getting enough applicants for their jobs and, in many cases, having to turn down work as they don’t have the right skills to fulfil it, even after paying thousands of pounds in job advertisements and recruitment-related fees. This clearly isn’t a good situation for anyone involved. So we thought we could do something about that.
Introducing Tendo Jobs. The revolutionary recruitment platform is designed to link employers directly with job hunters in a fully transparent, seamless and highly efficient way. The risk free platform allows you to promote your job to active job hunters for free. Never worry about the traditional high cost of recruitment ever again.
Register now as an employer to promote your job vacancies to job hunters looking to find their next role. The platform has been designed to allow you as an employer to be fully in control of your recruitment process. With our sophisticated ApplicantRank algorithm working in the background, the perfect applicants are automatically prioritised for you so you don’t have to go sifting through hundreds of CVs.
If all of this wasn’t enough, while we are in the early stages of launching, everything is completely free to use for early adopters! So if you are recruiting currently, or know of a company that is, share this information.
by Michael Cropper | Feb 13, 2016 | Developer |
To set the scene: firstly, you should never be hosting emails on your web server; it is extremely bad practice these days for a variety of reasons. Read the following for why: Really Simple Guide to Business Email Addresses, Really Simple Guide to Web Servers, Really Simple Guide to Web Server Security, The Importance of Decoupling Your Digital Services and, most importantly, just use Microsoft Exchange for your email system.
Ok, so now we’ve got that out of the way, let’s look at some of the other practicalities of web application development. Most, if not all, web applications require you to send emails in some form to a user. Whether this is a subscriber, a member, an administrator or someone else. Which is why it is important to make sure these emails are being received in the best possible way by those people and not getting missed in the junk folder, or worse, automatically deleted which some email systems actually do.
Sending emails from your web application is not always the most straightforward task and hugely depends on your web server configuration, the underlying technology and more. So we aren’t going to cover how to do this within this blog post; instead we are going to assume that you have managed to implement this and are now stuck wondering why your emails are ending up in your and your customers’ junk folders. The answer is often simple: your email looks like spam, even though it isn’t.
Introducing Sender Policy Framework (SPF Records)
The Sender Policy Framework is an open standard which specifies a technical method to prevent sender address forgery. To put this into perspective, it is unbelievably simple to send a spoof email from your.name@your-website.com. Sending spoof emails was something I was playing around with while still in high school, aged 14, sending them to friends for a bit of fun and a joke.
Because it is so simple to send spoof emails, most email providers (Microsoft Exchange, Hotmail, Outlook, Gmail, Yahoo etc.) will often automatically classify emails as spam if they aren’t identified as legitimate. A legitimate email in this sense is one that has been sent from a server that is authorised to handle mail for the domain.
To step back a little: the Mail Exchanger record, referred to as the MX record, is set at the DNS level. Whenever someone sends an email to your.name@your-website.com, the underlying technologies of the internet route this email to the server which is handling your emails, the server identified within the MX record in your DNS. That server then does whatever it needs to do when receiving the email so you can view it.
For example, when hosting your email on your web server with IMAP (as explained before, you shouldn’t be doing this), your MX record will likely be set to something along the lines of ‘mail.your-domain.com’. When using Microsoft Exchange, your MX record will be something along the lines of ‘your-domain-com.mail.protection.outlook.com’.
What this means is that if an email is sent from @your-domain.com which hasn’t come from one of the IP addresses associated with your-domain-com.mail.protection.outlook.com, then it looks to have been sent by someone who is not authorised to send emails from your domain name, and hence the email ends up in the spam folder.
With Microsoft Exchange specifically to use this as an example, you will also have an additional SPF record attached to your DNS as a TXT type which will look something along the lines of, ‘v=spf1 include:spf.protection.outlook.com -all’, which translates as, the IP addresses associated with the domain spf.protection.outlook.com are allowed to send emails from the @your-domain.com email address, and deny everything else.
This is perfect for standard use, but doesn’t work so well when trying to send emails from a web application. Which is why we need to configure the DNS records to make the web server which is sending emails from your web application a valid sender too.
How to Configure your DNS with SPF Records
To do this, we simply add the IP address of your web server into the DNS TXT SPF record as follows;
v=spf1 include:spf.protection.outlook.com ip4:203.0.113.10 -all
Your records will likely be different, so please don’t just copy and paste the above into your DNS. Make sure you adapt this to your individual needs. That is it. Now when your web application sends emails to your users, they will arrive in their main inbox instead of their junk folder. Simple.
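As a quick sanity check before and after updating DNS, you can confirm the record actually contains your web server's address. A minimal sketch, using the documentation IP 203.0.113.10 as a stand-in for your web server and a hard-coded string in place of a live DNS lookup:

```shell
# Stand-in for the published TXT record; on a live domain you would fetch it
# with something like:  spf=$(dig +short TXT your-domain.com | grep 'v=spf1')
spf='v=spf1 include:spf.protection.outlook.com ip4:203.0.113.10 -all'

# Check the web server's IP (203.0.113.10 is an example) is an authorised sender
if printf '%s\n' "$spf" | grep -q 'ip4:203.0.113.10'; then
  echo "web server IP is authorised to send"
else
  echo "web server IP missing from SPF record" >&2
fi
```

Swap in your own domain and server IP; the grep is just a crude string check, not a full SPF evaluation.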
A few handy tools for diagnosing DNS propagation along with SPF testing include;
by Michael Cropper | Feb 7, 2016 | Developer |
Quick reference guide for how to implement this. Let's Encrypt is a new free certificate authority, allowing anyone and everyone to encrypt communications between users and the web server with ease. For many businesses cost is always a concern, and the several hundred pounds for even a basic SSL certificate often means that most websites aren't encrypted. This no longer needs to be the case, and we'd recommend implementing SSL certificates on every website. Yes…we're working on getting around to it on ours 🙂
We recently implemented Let's Encrypt on a new project, Tendo Jobs, and I was quite surprised how relatively straightforward this was to do. It wasn't a completely painless experience, but it was reasonably straightforward. For someone who manages a good number of websites, the annual cost savings from implementing Let's Encrypt on all the websites we manage and are involved with are enormous. Looking forward to getting this implemented on more websites.
Disclaimer as always, make sure you know what you’re doing before jumping in and just following these guidelines below. Every web server setup and configuration is completely different. So what is outlined below may or may not work for you, but hopefully either way this will give you a guide to be able to adjust accordingly for your own web server.
How to Set up Let's Encrypt
So, let’s get straight into this.
- Reference: https://community.letsencrypt.org/t/quick-start-guide/1631
- Run command, yum install epel-release, to install the EPEL Package, http://fedoraproject.org/wiki/EPEL. Extra Packages for Enterprise Linux, lots of extra goodies, some of which are required.
- Run command, sudo yum install git-all, to install GIT, https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
- Clone the GIT repository for Let’s Encrypt with the command, git clone https://github.com/letsencrypt/letsencrypt, http://letsencrypt.readthedocs.org/en/latest/using.html#id22
- For cPanel servers, need to run a separate script, hence the next few steps
- Install Mercurial with the command, yum install mercurial, http://webplay.pro/linux/how-to-install-mercurial-on-centos.html. This is Mercurial, https://www.mercurial-scm.org/
- Run the install script command, hg clone https://bitbucket.org/webstandardcss/lets-encrypt-for-cpanel-centos-6.x /usr/local/sbin/letsencrypt && ln -s /usr/local/sbin/letsencrypt/letsencrypt-cpanel* /usr/local/sbin/ && /usr/local/sbin/letsencrypt/letsencrypt-cpanel-install.sh, https://bitbucket.org/webstandardcss/lets-encrypt-for-cpanel-centos-6.x
- Run the command to verify the details have been installed correctly, ls -ald /usr/local/sbin/letsencrypt* /root/{installssl.pl,letsencrypt} /etc/letsencrypt/live/bundle.txt /usr/local/sbin/userdomains && head -n12 /etc/letsencrypt/live/bundle.txt /root/installssl.pl /usr/local/sbin/userdomains && echo "You can check these files and directory listings to ensure that Let's Encrypt is successfully installed."
- Generate an SSL certificate with the commands;
- cd /root/letsencrypt
- ./letsencrypt-auto --text --agree-tos --email michael.cropper@contradodigital.com certonly --renew-by-default --webroot --webroot-path /home/{YOUR ACCOUNT HERE}/public_html/ -d tendojobs.com -d www.tendojobs.com
- Note: Make sure you change the domains and the email address in the above, and {YOUR ACCOUNT HERE} should be replaced with your cPanel username, without the brackets.
- Reference: https://forums.cpanel.net/threads/how-to-installing-ssl-from-lets-encrypt.513621/
- Run the script with the commands;
- cd /root/
- chmod +x installssl.pl
- ./installssl.pl tendojobs.com
- Again, change your domain name above
- Set up a CRON Job within cPanel as follows, which runs every 2 months;
- 0 0 1 */2 * /root/.local/share/letsencrypt/bin/letsencrypt --text certonly --renew-by-default --webroot --webroot-path /home/{YOUR ACCOUNT HERE}/public_html/ -d tendojobs.com -d www.tendojobs.com; /root/installssl.pl tendojobs.com
- For reference, The SSL certificate is placed in /etc/letsencrypt/live/bundle.txt when installing Let’s Encrypt.
- Done!
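For reference, a crontab schedule meaning "every 2 months" breaks down field by field as follows (a config fragment; pair it with the renewal command above):

```shell
# ┌ minute (0)
# │ ┌ hour (0, i.e. midnight)
# │ │ ┌ day of month (1st)
# │ │ │ ┌ month (*/2, every 2nd month)
# │ │ │ │   ┌ day of week (* = any)
  0 0 1 */2 *  /path/to/renewal-command
```

Note that step values like */2 only make sense in the month field here; putting a large step in the day-of-month field would not give you a multi-month interval.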
A note on adding the CRON job to cPanel: this is done within cPanel WHM, not a cPanel user account. cPanel user accounts don't have root privileges, so a CRON job created from within there won't work. To edit the CRON job at the root level, first SSH into your server, then run the following command to edit the main CRON job file;
crontab -e
Add the CRON job details to this file at the bottom. Save the file. Then restart the CRON daemon with the following command;
service crond restart
A 2 month renewal interval is recommended at first because Let's Encrypt certificates are valid for 90 days, which gives you around 4 weeks to sort things out before the certificate expires if a renewal fails. Thankfully you should receive an email from your CRON service if the job errors, and you will also receive an email from Let's Encrypt when the certificate is about to expire, so there are double safeguards in place.
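The maths behind that buffer is simple enough to sanity-check in the shell, assuming the standard 90-day Let's Encrypt certificate lifetime:

```shell
# Certificate lifetime vs renewal interval, in days
cert_lifetime=90
renewal_interval=60   # roughly "every 2 months"
buffer=$((cert_lifetime - renewal_interval))
echo "buffer before expiry: ${buffer} days"   # 30 days, i.e. roughly 4 weeks
```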
On-Going Automatic Renewal & Manual SSL Certificate Installation
It is important to note that when your Let's Encrypt certificates automatically renew, they won't be automatically installed; the installssl.pl script doesn't seem to handle the installation of the renewed certificate. Instead, you may need to update the renewed certificate within the user's cPanel account for the domain manually. To do this, open cPanel, view the SSL/TLS settings page, and update the currently installed (and about to expire) SSL certificate with the new details. The details for the new certificate can be obtained by logging into the server as root via SSH and viewing the updated certificate files in the folder /etc/letsencrypt/live/yourdomain.com, where you can use the commands pico cert.pem and pico privkey.pem to view the details you need to copy over to cPanel. To make sure the dates have actually been updated, decode the certificate in these files with a tool such as an SSL Certificate Decoder. If the certificate is still showing the old details, you may need to run the command letsencrypt-auto renew, which will update the certificates.
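Rather than an online decoder, you can decode the certificate dates locally with openssl, which is installed on most servers. A sketch, generating a throwaway self-signed certificate as a stand-in for /etc/letsencrypt/live/yourdomain.com/cert.pem so the commands can be tried safely:

```shell
# Generate a throwaway self-signed cert as a stand-in for the real cert.pem
# (on a live server, skip this step and point at the real file instead)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 90 -nodes -subj "/CN=yourdomain.com" 2>/dev/null

# Decode the subject and validity dates; check notAfter has moved forward
# after a renewal
openssl x509 -in /tmp/demo-cert.pem -noout -subject -dates
```

This prints the certificate's subject plus its notBefore and notAfter dates, which is exactly what you need to confirm a renewal actually took.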
Hope this is useful for your setup. Any questions, leave a comment.