Move Let's Encrypt certificate to a new server

I have a website on server A and I want to move it to server B. I know I can move the certs or simply issue new ones; my question is, is there a way to do this without downtime? I would prefer to issue a new one, but in that case I need to change the DNS to point to the new server, and from that moment the website will be untrusted. Once the DNS has propagated I can issue the new cert and everything else, but there will be a minute or so where the website is down...
My server runs Docker: each website is a container, and when I need to create or renew a cert I spin up a container with certbot... any suggestions?
many thanks

Step 1: Leave the original server running.
Step 2: Build a new server. Copy the certificates from the old server and install/set them up on the new server. Duplicate content, database connections, etc. Verify that the new server is working.
Step 3: Leave the old server running. Edit your DNS settings to point to the new server.
Step 4: About a week later, decommission the old server. The old server will continue to serve traffic for cached DNS entries until the old DNS resource records' TTLs expire.
These steps should minimize downtime provided that you build the new server correctly.
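While you wait for resolvers to pick up the change in step 3, you can watch what your own resolver returns for the domain; a minimal sketch (example.com is a placeholder for your domain):

import socket

# Ask the system resolver for the domain's current addresses. This reflects
# your local resolver's cache, so other networks may still see the old IP.
addrs = {info[4][0] for info in socket.getaddrinfo("example.com", 443)}
print(addrs)  # should settle on only the new server's IP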
Note: You will want the web server on the new server to serve your SSL certificate as the default. That way you can use a tool like openssl to verify that your SSL certificate is served for connections made by IP address alone. Temporarily disable any redirects that send the IP address to the domain name.
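If you'd rather script that openssl check, a rough Python equivalent is below; 203.0.113.10 stands in for the new server's IP, and because the connection is made by IP alone, you see whichever certificate the server serves by default:

import socket
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False        # connecting by IP, so no hostname to match
ctx.verify_mode = ssl.CERT_NONE   # we only want to inspect the served cert

with socket.create_connection(("203.0.113.10", 443)) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)

print(ssl.DER_cert_to_PEM_cert(der))

Comparing the printed PEM with the certificate copied from the old server confirms the new box is serving the right cert before you flip DNS.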

Issues sending email through Google's SMTP Relay

My Ubuntu-based web server occasionally needs to send emails. My Python code is:
import smtplib

# third argument is local_hostname - the name smtplib sends in EHLO
with smtplib.SMTP('smtp-relay.gmail.com', 587, 'mydomain.com') as s:
    s.sendmail(fromaddr, toaddr, msg.as_string())
    # no explicit s.quit() needed; the with block closes the connection
I have:
a Google Workspace account
IP authentication in use (not SMTP auth)
my staging and production servers added as trusted IPs (staging is local, production is in the cloud)
This setup had been working fine for 6+ months.
Two days ago I upgraded Ubuntu from 20.04 LTS to 22.04 LTS and Python from 3.8 to 3.10. Now email is working fine on the staging server, but production keeps throwing:
Invalid credentials for relay [...]. The IP\n5.7.1 address you've registered in your G Suite SMTP Relay
service doesn't\n5.7.7 match domain of the account this email is being sent from. If you are\n5.7.1 trying to
relay mail from a domain that isn't registered under your G\n5.7.1 Suite account or has empty envelope-from,
you must configure your\n5.7.1 mail server either to use SMTP AUTH to identify the sending domain or\n5.7.1 to
present one of your domain names in the HELO or EHLO command. For\n5.7.1 more information, please visit
https://support.google.com/a/answer/6140680#invalidcred ...
Any suggestions?
Edit 1:
I fired up my old Ubuntu server in the cloud. I added its new IP as trusted on Google. The email worked fine. I can think of only three possibilities:
Google somehow recognizes and trusts requests coming from the old device (even though it now has a different IP)
Linode is somehow not sending the correct IP address from my new server
Something broke during the Ubuntu upgrade
I find each of the 3 possibilities quite bizarre and unbelievable at this point, but I'll keep researching.
PS: Three factoids that may or may not be relevant:
I upgraded the staging server in place. For production I spun up a new instance, made sure everything else was working fine (except email), and then transferred the IP from the existing instance to the new one.
When I log in to my Google admin account to edit the trusted IP list, my IP is the same as the staging server's. I don't think I have the same option for production, since it's an Ubuntu server I manage through SSH.
I found some comments online (none in official documentation) saying that reverse DNS needs to be set up before Google will relay anything. I set up the entry for production about 20 hours ago but am still getting the same error. And my staging server doesn't have rDNS yet still sends emails (it's reachable from the internet, but I don't have a static IP).
PPS:
The sender email is someuser@mydomain.com (not @gmail.com)
The production server is hosted on linode.com
This post comes close to discussing a similar situation, but it is focused more on signing in. My setup uses IP authentication, not SMTP auth. Plus, it was working fine until Friday (8/12).
It turned out to be a really frustrating issue. My best guess is that Linode's Ubuntu 22.04 repository has issues. We were thinking of migrating to AWS anyway; this gave us a strong impetus.
Anyway, here are some tips from my experience that a future reader might benefit from:
When you're using IP authentication for Google's SMTP relay, the updates are fairly quick. I ended up spinning up at least 5 instances with 5 different IPs, and each time Google was able to trust my IP within 2-3 seconds of my updating it in the Workspace console.
Google didn't care about my reverse DNS entries. I had read comments online that Google wouldn't relay without rDNS, but I never hit any such problem (at least not with any rDNS I was setting; the ISP or cloud provider sets a default entry, and that was good enough, if Google was even looking for it). This one was particularly problematic because rDNS changes can take hours to propagate, and I kept thinking my code might start to work tomorrow.
The error message I received from Google was pretty uninformative. I contacted Google support to see if they had access to anything more meaningful on the server side. They didn't; it was a waste of time.
It was somewhat helpful to run a fake SMTP server to see what my client was sending. I got it from this post. I ran it against a setup that was working and one that wasn't; in my case the traffic received was identical. Though in hindsight, I might have seen some differences had I run it on a remote server.
python -m smtpd -n -d -c DebuggingServer localhost:2500
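(Note: the smtpd module used above still ships with Python 3.10 but is deprecated, and it was removed in Python 3.12; aiosmtpd is the usual replacement.) To capture exactly what your client sends, you can point the same sending code at the local debugging server; a rough sketch, with placeholder addresses:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "someuser@mydomain.com"    # placeholder sender
msg["To"] = "recipient@example.com"      # placeholder recipient
msg["Subject"] = "relay test"
msg.set_content("test body")

with smtplib.SMTP("localhost", 2500, "mydomain.com") as s:
    s.set_debuglevel(1)  # echoes the full SMTP dialogue to stderr
    s.send_message(msg)

The debug output includes the name presented in EHLO, which is one of the things Google's error message complains about.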

Mongo Meteor AWS EC2 Multiple Deploy

I was using Galaxy to host my Meteor app and recently decided to host it with Amazon CloudFront serving a static web page (Angular client) connected to my Meteor app running on an EC2 instance.
I have the static page working, and the Meteor app on the EC2 instance, which points to a remote Mongo server, is working as well. I am using the meteor-client-bundler package to try to connect the client (static CloudFront) to the Meteor server via a DDP URL. Here is where I am stuck.
The DDP URL should be my Meteor server, correct (hosted at ec2....amazonaws.com)? I feel like it has to be, because I have publications and methods on the server that I will need to hit constantly. If that is correct, then what if I also want to have two EC2 instances running the same Meteor app? Just like in Galaxy, in case one is getting maintenance work done or goes down, I want the backup to take over. How can I set up two different DDP URLs?
You should use a custom domain for the server, and use that custom domain in the DDP URL. While using the EC2 address will work, it's better to use a different address, especially if you ever want to move to another provider.
You can use NGINX as a reverse proxy to have 2 or more Meteor apps on the one box. It's not too difficult to set up.
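As a rough illustration of that reverse-proxy idea (a sketch, not a drop-in config - the hostname and port are placeholders, and it assumes one Meteor app listening on local port 3000):

# sketch of /etc/nginx/conf.d/meteor-one.conf
server {
    listen 80;
    server_name meteor-one.example.com;    # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:3000;  # the Meteor app's local port
        proxy_http_version 1.1;
        # Meteor's DDP runs over websockets, so the upgrade headers matter
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

A second server block with a different server_name and proxy_pass port routes the second app.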
You can also use Meteor Up (aka mup) to do multiple deployments to the same box: http://meteor-up.com/ Meteor Up gives you a very simple way to deploy, and it will even revert to the previous version automatically if something goes wrong. You can also configure it to run Let's Encrypt to give you HTTPS security and renew the certs automatically.
For anyone who is new to this stuff like I am: I figured it out. Buy another domain name, use DNS (Route 53) to point to a load balancer (Elastic Beanstalk), which handles multiple EC2 instances for one domain, and then point your DDP from the client to that domain. Boom. Thanks for the help @Mikkel

What to write instead of CMS Address?

I want to set up digital signage via the Xibo software on some TV screens. I've downloaded xibo-server-170-rc1.tar and xibo-client-1.7.4-win32-x86 and installed Xibo. However, I entered nothing in CMS Address and Key when the Xibo Player Options window opened, although I gave an admin user key during installation. Then I tried to add a display IP and MAC address. However, as that is only possible from the client side, I started to install the Xibo client on another computer. But I don't know what I must enter as the CMS address and key on the client side. (I've not done it on the server side either.)
I'm new to Xibo.
The actual server setup (which is well documented) and adding a screen are covered in the links below. Do take some time to review the documentation.
For your question: you will need the key created during server setup for the client configuration. Additionally, you will need to go into the CMS and 'approve' the new client before it will start to work.
Server Setup and config http://xibo.org.uk/manual/en/install_cms.html
Connecting a Windows Client http://xibo.org.uk/manual/en/install_windows_client.html
Approving the new display can be found here http://xibo.org.uk/manual/en/displays.html

Using capistrano with a load balancer

We have a site on Rackspace with 2 servers and a load balancer, deployed with Capistrano (actually Capifony). I would like to:
Disable server 1 on the load balancer
Upgrade server 1 to the new code
Pause and let me test server 1 by logging in to its IP address
Re-enable server 1; disable server 2 on the load balancer (users will now get the new version of the site)
Upgrade server 2 to the new code
Pause and let me test server 2
Re-enable server 2 on the load balancer.
The database is hosted elsewhere and is not affected by this upgrade.
Capistrano seems very good at deploying to multiple servers at once (although I'd like to see an answer to this question), but it's not clear how to do the above. It seems like a safe way to do an upgrade in what is a pretty common scenario.
I guess if I add tasks to control the load balancer, I might be able to use this answer to make the deployments run consecutively rather than all at once.
One option that would be nice is if Capistrano could do all of the deployment but not change the current symlink on either server. Then I could do the load balancing and update the symlinks myself.
This question is similar, but the answer given won't work with PHP as there is no need to restart the server - the new code will start executing as soon as you upload it.

Using any/fake domains with ejabberd

I've recently purchased a cloud server with a public IP, and I am using it to host an XMPP server.
My first task was to ensure my users connect using my subdomain - as an example, m.chat.com.
In my configuration I have the following:
%% Hostname
{hosts, ["m.chat.com"]}.
I then created an admin user with that domain.
In parallel, I created the following DNS record with my hosting provider, HostGator, for my subdomain m.chat.com:
Name TTL Class Type Record
m.chat.com 14400 IN A [IP of the server]
One thing that puzzled me was access to the ejabberd web admin console: it worked via [IP of the server]:5280/admin, but I could not access it via m.chat.com:5280/admin.
That aside, inside the web console, under "Virtual Hosts", I could see the host "m.chat.com". I created a user "user@m.chat.com" and tried to connect via Adium.
Inside Adium, simply typing in user@m.chat.com with the password did not work. Instead, I also had to specify the "Connect server", which in this case was the [IP of the server].
It has connected fine and I have registered other users to check everything is working and it is.
Then I thought I'd go back to the ejabberd configuration and start messing around. I changed the hostname to the following:
%% Hostname
{hosts, ["m.chat.com", "facebook.com"]}.
I registered a user with that domain and restarted ejabberd. Upon checking the web console, to my surprise, I could see the Virtual host "facebook.com". I tested this user in Adium with the [IP of the server] defined in the "Connect server" section and it connected fine. I asked other people with their own internet connections to use this account on their PCs and they were able to connect too.
Story over - my question to everyone is: how is this possible? Am I missing something? Is there no domain authentication? After searching online, it seems you can even use fake domains.
If I am to operate my own service in the future (iOS chat app) I do not want anyone using my domain names with their own public servers.
Can someone shed some light?
Thanks!
Edit: A second question - preferably, I do not want to have to define the "Connect Server" when using a client. I would like the client to recognise the @m.chat.com domain and establish a connection to the server's IP automatically. Have I configured my DNS record correctly? For anyone else using HostGator, is there an additional task I must do?
Edit: I can now access the web console via m.chat.com:5280/admin, and I no longer have to specify the Connect server when using a client. I didn't do anything; I think it was a case of HostGator updating the DNS or something - they say it usually takes 4 hours. However, I am still slightly puzzled as to why I can create accounts with the facebook.com domain. I understand that because I cannot access the DNS admin for that domain I cannot create any records, but that does not prevent me from using the domain and just specifying a Connect server.
Your initial problems (unable to access the server by using m.chat.com) were almost certainly DNS issues, and it seems you have isolated that down to the time taken to update the record.
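On the "Connect server" point: XMPP clients locate the server for a domain via a DNS SRV record (_xmpp-client._tcp, standard client port 5222), falling back to a plain A lookup when no SRV record exists - which is why things started working once your A record propagated. If you want discovery to be explicit, a record along these lines (same format as your record above) would do it:

Name TTL Class Type Record
_xmpp-client._tcp.m.chat.com. 14400 IN SRV 5 0 5222 m.chat.com.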
Your second question - about the fact that you can name virtual hosts without restriction, is simple but interesting. What makes you think there should be any kind of restriction? It would be like you dictating that I can't save "m.chat.com" in a file on my disk, or that I can't send "m.chat.com" in a message across the internet.
This is why DNS exists and is structured the way it is. Although I can tell my server that it hosts facebook.com, nobody will connect to it because the DNS record for facebook.com does not point at my server (users generally don't set the "connect host" manually). Which begs the question... why would I want to tell my server it hosts facebook.com, and if I did, why should Facebook care?
An additional, but relevant, identity layer on top of DNS is certificates - clients should validate the certificate against the virtual host name regardless of any "connect host" setting. Since it's not possible to obtain a valid certificate for facebook.com, clients should generally pop up warnings or fail to connect at all. If they don't, they're probably not validating the certificate correctly.