How can I deploy code 'on-demand' using the Chef client-server architecture?

Here's the scenario:
I can SSH into my Chef server, but I can't SSH into any of the Chef clients. So this is how I work: I have a workstation where I change or create roles. All the Chef clients run as daemons, so when they wake up they notice state changes and start updating themselves.
Now I need to configure code deployments on these clients. I was thinking I could use the application cookbook for that and add its recipes to the roles from my workstation. But won't that result in a deployment every time a Chef client wakes up and finds revision changes? I want an on-demand kind of deployment: I want to deploy only when the code is deployment-ready, not for every commit up to that point.
How do I achieve this?

A couple of questions:
When is your code deployment-ready? How would you know? If it's a repeatable process, could you not code that into a recipe? If it's not a repeatable process, you need to make it one so that it can be automated.
I.e. run your Cucumber tests and, if they all pass, deploy; otherwise do nothing?
We feed from Artifactory and use its web API to check the latest installer available to us. If it's the same as the previously installed one (determined by checking/creating a registry key), we tell the user the build is already installed and skip it. If it's not the same, we install. I know this isn't exactly the same scenario, but it feels to me like some custom code is going to be needed here.
Either that, or leverage data bag values to say install=true or false depending on the state of the code. You would set project A's install item in the data bag to true when you want to deploy, and leave it false the rest of the time. The recipe would only proceed if the value was true.
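A minimal sketch of that data bag gate, using Chef's core git resource for the actual deployment step (the data bag, item, repository, and path names below are placeholders, not from the question):

# recipes/deploy.rb - only deploy when the data bag flag says so
flag = data_bag_item('deploy', 'project_a')   # hypothetical data bag and item

if flag['install']
  git '/var/www/project_a' do                 # placeholder deploy path
    repository 'https://github.com/example/project_a.git'  # placeholder repo
    revision flag['revision'] || 'master'     # optionally pin the exact revision in the data bag
    action :sync
  end
else
  Chef::Log.info('deploy flag for project_a is false; skipping deployment')
end

Flipping install to true (and, if you like, setting revision) from your workstation and uploading the data bag then becomes the "deploy button"; every other Chef run is a no-op for this recipe.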

Why not have a branch where HEAD is always ready to be deployed? Only push to this branch when your code is ready to go out into the world. Then you don't have to worry about intermediate, unstable states of your repository being synced by Chef. Of course, you still have to wait for a client to wake up and sync before you see your changes, so if latency is a problem this won't work.
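Promoting a release then becomes a plain git operation on your workstation, something like the following (the production branch name is a placeholder):

git checkout production
git merge --ff-only master     # only when master is actually release-ready
git push origin production

and the recipe (or application cookbook attribute) pins its revision to production, so clients only ever see deployment-ready commits.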

Related

Deploy code to multiple production servers under the load balancer without continuous deployments

I am the only (full-stack) developer in my company and I have too much other work to be automating deployments right now. In the future we may hire a DevOps person for that.
Problem: We have 3 servers under a load balancer. I don't want to block the 2nd and 3rd servers until the 1st server has updated, and then repeat the same with the 2nd and 3rd, because one server might take huge traffic in the meantime and fail at some specific point before the other servers go live.
                                 Server 1
Users ----> Load Balancer ---->  Server 2  ----> Database
                                 Server 3
Personal opinion: Is there a way to pull the code by running a script on the load balancer? I could replace the standard DigitalOcean load balancer with an Nginx server acting as a reverse proxy.
NOTE: I know there are plenty of other questions on Stack Overflow about this, but none of them solves my problem.
Solutions I know:
Git hooks - I know a bit about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake it must not sync to production and wreak havoc on the live server and live users.
Open multiple tabs to the servers and do it manually (current scenario). Believe me, it's a pain :)
Any suggestions or pointers to solutions would be really helpful. Thanks in advance.
One solution is to write an Ansible playbook for this. With Ansible you can run the play against one host at a time, and as the last step you can include a verification check that confirms your application responds with a 200, or queries some endpoint that reports your application's status. If the check fails, Ansible stops the execution. For example, in your case server 1 deploys fine but it fails on server 2: the playbook stops and you still have servers 1 and 3 running.
I have done this myself; it works fine in environments without continuous deployments.
Here is one example.
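A minimal sketch of such a playbook; the webservers group, repository, service name, and /health endpoint are placeholders, not from the original answer:

---
# rolling-deploy.yml - update one server at a time, stop at the first failure
- hosts: webservers            # placeholder inventory group
  serial: 1                    # one host at a time, so the others keep serving traffic
  any_errors_fatal: true
  tasks:
    - name: Pull the release
      git:
        repo: https://github.com/example/myapp.git   # placeholder repository
        dest: /var/www/myapp
        version: production

    - name: Restart the application
      service:
        name: myapp            # placeholder service name
        state: restarted

    - name: Verify the application answers with HTTP 200
      uri:
        url: http://localhost/health                 # placeholder health endpoint
        status_code: 200
      register: health
      retries: 5
      delay: 3
      until: health.status == 200

If the verify task never succeeds on a host, the run aborts there and the remaining servers are left on the old, working release.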

Hyperledger composer rest server not updating

Can someone help me with deploying the REST server? When I add or edit participants and assets in my business model and I use
composer archive create -t dir -n .
and deploy it with composer-rest-server, my http://localhost:3000/explorer does not show the things I changed in my business model; it is still the same as before I made the change.
Thank you to anyone who can help.
This doc explains how to update a network definition with a new .bna file and shows you how to change the version number:
https://hyperledger.github.io/composer/latest/business-network/upgrading-bna
Your problem is the version number, which you more than likely left unchanged.
Once you manage to update your network definition, don't forget to regenerate the REST service.
Your REST service probably runs on the default port 3000. Kill the process using something like:
sudo kill $(sudo lsof -t -i:3000)
where 3000 is the port number it runs on, then run the composer-rest-server command again. It will see the new definition and recreate the endpoints correctly.
You can also update your network definition using the Playground if you prefer; you can upload your .bna that way and update it through the UI, which makes it easier if you run a development setup.
Any time you change your model or .js files, remember to go into your package.json and bump the version number, then deploy the new .bna file (this file will carry the new version number).
When you start the Composer REST server, the first thing it does is "discover" the network and build the endpoints. It does this only when the REST server starts, so if you change your model and upgrade the network, you need to stop the REST server and start it again so it performs a new discovery and builds the new endpoints. (You also need to refresh the page in the browser if you are using the Explorer through a browser window.)
You have to install the network again after updating your .bna file.
Follow these steps:
1) install the network again
2) start the network
3) ping the network with your card
Then start the composer-rest-server.
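For Composer 0.19 or later, the command sequence might look roughly like this, following the upgrading-bna doc linked above (the network name my-network, version 0.0.2, and the card names are placeholders, and the exact flags can vary between Composer releases):

composer archive create -t dir -n .
# produces my-network@0.0.2.bna, assuming package.json was bumped to 0.0.2
composer network install --card PeerAdmin@hlfv1 --archiveFile my-network@0.0.2.bna
composer network upgrade --card PeerAdmin@hlfv1 --networkName my-network --networkVersion 0.0.2
composer network ping --card admin@my-network
composer-rest-server --card admin@my-network
# the REST server rediscovers the network and rebuilds the endpoints on startup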

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run, so how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model: for example, at boot, pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The scripts set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client if you want to pull your code over SSH, although you could also do it over HTTPS.
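As an illustration, a minimal EC2 user-data script for this pull model could look like the following; the repository URL, install path, bootstrap script, and service name are all placeholders:

#!/bin/bash
# Runs once at first boot via cloud-init user data - no SSH involved.
set -euo pipefail

APP_DIR=/opt/myapp                                  # placeholder install path
REPO=https://github.com/example/myapp.git           # placeholder repo, cloned over HTTPS

yum install -y git                                  # or apt-get install -y git, depending on the AMI
git clone --branch production "$REPO" "$APP_DIR"

"$APP_DIR/scripts/setup.sh"                         # hypothetical bootstrap/monitoring setup script
systemctl enable --now myapp                        # hypothetical service unit that runs the binary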
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially your node/server pulls all of your application and server configuration from the Puppet master or the Chef server, and the Puppet agent or Chef client then performs all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether something is wrong with one of your servers, and when something does go wrong, discard that server and spin up a new one (even better if this whole process is automated).
Hope this helps.

Rack: Bundler::GemNotFound errors during `bundle install --deployment`

So I have a few machines in production running a Sinatra app on top of Rack. Usually everything is hunky dory until Puppet - which we're using to sync changes to our servers - notices that the project's Gemfile.lock has changed and, as a result, issues the bundle install --binstubs --deployment command so we get the new gems. When this happens, ANY HTTP request causes a 500 error when it calls into Bundler to require our gems, because the new gems haven't been installed yet.
We usually have at least one Rack process hanging around, because another process periodically makes an HTTP request to check that the server is alive, but when this happens there are no Rack processes alive at all. The PassengerMinInstances directive seems like it would only help if the problem were with new instances; given the periodic health-check requests, there should still be at least one existing Rack process alive to handle them.
I should probably note that Puppet doesn't actually restart Rack (by touching the restart.txt file) until after the bundle install has completed, so it doesn't make any sense why our Rack processes would go away at this point. Has anyone encountered anything like this? Is there some Rack option I've overlooked to not reload the entire environment on every request?
I know this doesn't directly answer your question, but what I've done in the past to get around this kind of thing is to deploy the app into version-numbered directories with a soft link pointing to the current one, and an (Nginx) proxy server routing requests through the link. At the end of the deployment, the deploy script points the link at the new version.
It seemed to work well enough for me, and if things really go wrong you can always manually repoint the link to the previous version.
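A rough sketch of that layout, assuming releases live under /var/www/releases and the proxy's document root is the /var/www/current symlink (paths, repository, and the Passenger restart step are placeholder assumptions):

#!/bin/bash
set -e
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)

git clone --depth 1 https://github.com/example/myapp.git "$RELEASE"   # placeholder repo
cd "$RELEASE" && bundle install --binstubs --deployment                # gems are ready before any traffic arrives

ln -sfn "$RELEASE" /var/www/current                                    # switch the docroot to the new release
touch /var/www/current/tmp/restart.txt                                 # ask Passenger to restart on the next request

Because bundle install runs inside the new release directory before the symlink flips, the running app never sees a half-installed gem set.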
For posterity's sake, I'll answer this question. As part of the deployment, all of the files were touched with chown -R, which updates the ctime (but not the mtime) of each file. There is also an interesting bug/feature in Passenger: it restarts the app whenever the mtime or ctime of the tmp/restart.txt file changes.
Solution: stop chowning the directory during a deployment.

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking for a better deployment strategy, such as one where we SSH into one of the servers on the private network and run a command-line script to deploy whatever we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't this be a single server that would allow all the other servers to be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with RightScale anymore.
I think that approach is generally fine, and I'd be interested to learn what you think is wrong with it.
If you want to do your SSH thing, I'd go about it the following way:
Lock down SSH using security groups, e.g. open SSH only to a specific IP or to servers in a deploy security group, or similar. The disadvantage here is that you might lock yourself out when those other servers are down, etc.
I'd put public keys on each instance to allow password-less login. If you're security conscious, rotate those keys on a monthly basis or, for example, whenever employees leave, etc.
Use Fabric or Capistrano to log into your servers (from the deploy master) over SSH and do your deployment.
Again, I think RightScale's approach is not unique to them; a lot of services do it like that. The reason is that when you symlink and keep the previous version around, it's easier to roll back, and so on.
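For instance, a minimal Capistrano 3 config for that releases-plus-symlink layout might look like this (application name, repository, branch, and deploy path are placeholders):

# config/deploy.rb
set :application, "myapp"                              # placeholder app name
set :repo_url,    "git@github.com:example/myapp.git"   # placeholder repository
set :branch,      "production"                         # deploy-ready branch
set :deploy_to,   "/var/www/myapp"                     # releases/ and the current symlink live under here
set :keep_releases, 5                                  # keep old releases around for easy rollback

On each deploy, Capistrano checks the code out into /var/www/myapp/releases/TIMESTAMP and repoints the /var/www/myapp/current symlink; cap production deploy:rollback repoints it at the previous release, which is essentially the same pattern the RightScale script follows.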