Using Capistrano with a load balancer

We have a site on Rackspace with 2 servers and a load balancer, deployed with Capistrano (actually Capifony). I would like to:
Disable server 1 on the load balancer
Upgrade server 1 to the new code
Pause and let me test server 1 by logging in to its IP address
Reenable server 1; Disable server 2 on the load balancer (users will now get the new version of the site)
Upgrade server 2 to the new code
Pause and let me test server 2
Reenable server 2 on the load balancer.
The database is hosted elsewhere and is not affected by this upgrade.
Capistrano seems very good at deploying to multiple servers at once (although I'd like to see an answer to this question), but it's not clear how to do the above. It seems like a safe way to do an upgrade in what is a pretty common scenario.
I guess if I add tasks to handle the load balancer, I might be able to use this answer to make the deployments run consecutively rather than all at once.
A nice option would be for Capistrano to do all of the deployment but not change the current symlink on either server. Then I could handle the load balancing and update the symlinks myself.
This question is similar, but the answer given won't work with PHP as there is no need to restart the server - the new code will start executing as soon as you upload it.
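Roughly the flow I have in mind, as a minimal sketch in Capistrano 3 syntax; the :web role and the lb_disable/lb_enable commands are placeholders for whatever the Rackspace load balancer's API or CLI actually provides:

    # Hypothetical rolling deploy: handle one web server at a time,
    # pulling it out of the load balancer and pausing for a manual test.
    namespace :deploy do
      desc "Deploy to the web servers one at a time, pausing for manual tests"
      task :rolling do
        roles(:web).each do |host|
          run_locally { execute :lb_disable, host.hostname }  # take the server out of rotation
          # ...run the normal deploy tasks limited to this host here...
          puts "Test #{host.hostname} now, then press enter to re-enable it"
          $stdin.gets
          run_locally { execute :lb_enable, host.hostname }   # put it back in rotation
        end
      end
    end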

Related

Deploy a WebApp and always keep it running

I have a web application spread across multiple servers, with incoming traffic balanced by HAProxy. We deploy at night because there are far fewer users, so the impact on the service is smaller. To deploy we use the following strategy:
we shut down half of the servers
we deploy to the servers that are shut down
we bring those servers back up
we repeat the same procedure on the remaining servers
The problem is that whenever I shut servers down, we drop users' connections. Is there a better strategy for doing this? How could I improve it, avoid outages, and maybe even be able to deploy during the day?
I hope I was clear. Thanks
I strongly suggest using health checks for the servers.
Using HAProxy as an API Gateway, Part 3 [Health Checks]
You should have a URL ("/health") that you can use as a health check for each backend server, and add option redispatch to the config.
Now when you want to do maintenance on a backend server, just "remove" its "/health" URL (have it stop returning a success response) and HAProxy automatically routes users to the other available servers.
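A minimal haproxy.cfg sketch of that setup; the backend name, addresses, ports and check intervals here are made up:

    backend app_servers
        option httpchk GET /health            # poll each server's /health URL
        option redispatch                     # resend a request elsewhere if its server goes down
        default-server inter 3s fall 2 rise 2
        server web1 10.0.0.11:8080 check
        server web2 10.0.0.12:8080 check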

Play Framework as reverse proxy with ScalaWS

I am trying to document a server and replicate a setup done by another person. The server runs Play Framework, which also acts as a reverse proxy to MediaWiki running on Apache on the same machine, on a port that is not open externally.
Play routes requests to the MediaWiki server using ScalaWS. When I inspect the request, it is built from the server's domain, the Apache port, and the MediaWiki file path.
On the real server this works fine, but in the test deployment it fails to reach MediaWiki. It works in the test deployment only if I open the Apache port externally.
So somehow the request to the server running locally on the machine needs to stay internal rather than being routed externally. How can this be done? Any quick tips, things I can check, or even an explanation of how this may be working would really help save me some time.
The /etc/hosts file had the wrong domain defined. Fixing that fixed the problem.
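For reference, the kind of /etc/hosts entry involved; the wiki hostname here is made up, the point being that the name ScalaWS requests must resolve to the local machine so the call never leaves the server:

    # /etc/hosts on the Play/Apache server
    127.0.0.1   localhost wiki.example.com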

Mongo Meteor AWS EC2 Multiple Deploy

I was using Galaxy to host my Meteor app and recently decided to serve a static page (the Angular client) from Amazon CloudFront and connect it to my Meteor app running on an EC2 container.
I have the static page working, and I have the Meteor app on the EC2 container, pointing at a remote Mongo server, working as well. I am using the meteor-client-bundler package to try to connect the client (static CloudFront) to the Meteor server via a DDP URL. Here is where I am stuck.
The DDP URL should be my Meteor server, correct (hosted at ec2....amazonaws.com)? I feel like it has to be, because I have publications and methods on the server that I will need to hit constantly. If that is correct, what if I also want to have two EC2 containers running the same Meteor app? Just like in Galaxy, in case one is getting maintenance work done or goes down, I want the backup to take over. How can I set up two different DDP URLs?
You should use a custom domain for the server, and use that custom domain in the DDP URL. While using the EC2 address will work, it's better to use a different address, especially if you ever want to move to another provider.
You can use NGINX as a reverse proxy to have 2 or more Meteor apps on the one box. It's not too difficult to set up.
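A minimal nginx sketch of such a reverse proxy, assuming the Meteor app listens on port 3000 and using a made-up domain; the WebSocket upgrade headers matter because DDP runs over WebSockets:

    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;    # needed for DDP's WebSocket connection
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }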
You can also use Meteor Up (aka mup, http://meteor-up.com/) to do multiple deployments to the same box. Meteor Up gives you a very simple way to deploy, and it will even automatically revert to the previous version if something goes wrong. You can also configure it to run Let's Encrypt to give you HTTPS and automatically renew the certs.
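A minimal mup.js sketch of that kind of setup; the host, domain, email and Mongo URL are all placeholders:

    module.exports = {
      servers: {
        one: { host: '1.2.3.4', username: 'ubuntu', pem: '~/.ssh/id_rsa' }
      },
      app: {
        name: 'myapp',
        path: '../',
        servers: { one: {} },
        env: {
          ROOT_URL: 'https://app.example.com',
          MONGO_URL: 'mongodb://user:pass@mongo-host:27017/myapp'
        }
      },
      // built-in reverse proxy with automatic Let's Encrypt certificates
      proxy: {
        domains: 'app.example.com',
        ssl: { letsEncryptEmail: 'admin@example.com', forceSSL: true }
      }
    };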
For anyone who is new to this stuff like I am: I figured out that you can buy another domain name, use DNS (Route 53) to point it at a load balancer (Elastic Beanstalk) which handles multiple EC2s for one domain, and then point your DDP URL from the client at that domain. Boom. Thanks for the help @Mikkel

Remotely pulling configuration information from a BIND9 nameserver

How do I remotely pull configuration information from a running bind name server without logging in as root on the server where it is running?
I searched a lot and read many materials about BIND9 but still no answers.
I know there are some commands to conduct zone transfer or update zone resource data, but I didn't find any way to pull configuration info from a name server.
In short: you cannot. There is no provision in the DNS protocol for sending server configuration, so whatever technology you use, it will NOT be DNS. And since BIND9 is designed only to serve DNS requests and send DNS replies, BIND9 cannot be coerced into sending its configuration the way you'd expect.
You have to install and configure some other piece of software to be able to access the configuration. SSH is one of the most widespread technologies used for managing server configurations.
You could use "rndc -s dns-server dumpdb".
In named's configuration, point dump-file at a shared folder that is accessible from the system running rndc.
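Roughly, assuming /var/shared is a folder exported to the other system and that the server's controls statement already trusts your rndc key:

    # named.conf: write the dump where the other machine can read it
    options {
        dump-file "/var/shared/named_dump.db";
    };

    # then, from the remote system:
    # rndc -s dns-server -k /etc/rndc.key dumpdb -all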

How does MOSS Front End (IIS) load balancing work?

I would like to know how MOSS Front End load balancing works, just an overview or a link to a site that contains this type of information.
In other words, I have 2 front end servers in the farm; how does MOSS distribute the work load?
Sorry to disappoint, but I've just been informed that MOSS does not do any load balancing on its own; you need to set this up yourself outside of MOSS.
The MOSS front-end farms only sync IIS content between each other - that much is provided by MOSS.
MOSS lives on Windows 2003 or 2008 servers. You can enable the NLB services within the OS on the web front ends. I don't recall the OS versions that support that but certainly Enterprise and DataCenter editions...
All server versions support NLB (network load balancing). There are really three ways to accomplish load balancing.
You can use DNS to point users to different WFEs by handing out different IP addresses for the same FQDN (round-robin DNS). This is the five-minute load-balancing solution.
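For example, a zone-file sketch of that round-robin approach, with made-up names and addresses; each lookup returns both records and clients rotate between them:

    ; both web front ends answer for the same portal name
    portal.example.com.   3600  IN  A  192.168.1.10
    portal.example.com.   3600  IN  A  192.168.1.11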
The second solution is to use the Windows version of network load balancing (NLB). This is the more robust form of load balancing, as it takes into account the actual load on the WFEs: if one WFE is processing a large number of requests, traffic will go to the other box. This solution also accommodates failover if one box goes down; the DNS solution does not.
The third solution is to use a load balancer in front of your WFEs, such as a Cisco or F5 device. This is the solution for farms with many WFEs.
The next question is how you know whether load balancing is occurring. I wrote a web part for SharePoint that you can add to any page; it tells you which server is serving the page. If your load balancing is working, you should see the server name change as you make repeated requests to the same page.
You can get the webpart here: Sharepoint Server Info Web Part