Server-specific details with Capistrano during deploy

I have a lot of servers running the same code with different configurations that I would like to deploy with Capistrano. The server configuration looks like this:
role(:server) { ["127.0.0.1", {:name => "mymachine1"}] }
role(:server) { ["127.0.0.2", {:name => "mymachine2"}] }
role(:server) { ["127.0.0.3", {:name => "mymachine3"}] }
The problem is that I would like to create a symlink depending on the server name, e.g.:
task :setup_all_server do
  find_servers(:roles => "server").each do |server|
    server_name = server.options[:name]
    run "mkdir -p #{deploy_to}/releases"
    run "ln -s #{deploy_to}/current/scripts /home/#{user}/scripts"
    run "ln -s #{deploy_to}/current/configuration/#{server_name} /home/#{user}/configuration"
  end
end
The setup and the deploy work pretty well for all servers, but is there a simple way to deploy or set up only one server, selected by name? Something like:
cap deploy [:name=>"mymachine1337"]
Or something like:
How to deploy to a single specific server using Capistrano
but with a filter for :name, without losing the server.options[:name] when running the setup task.

This is more of a workaround than a solution for the specific problem. I ended up using the multistage extension for Capistrano https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension with a specific configuration for each installation instead of a configuration file for each environment, e.g.:
/deploy/myinstallation.rb
set :config_folder, "myinstallation"
role(:server) { ["myuser#127.0.0.1"] }
This makes it possible to run:
cap myinstallation deploy:setup
cap myinstallation deploy
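For completeness, the multistage wiring itself is small. Here is a rough sketch of config/deploy.rb under this approach (a sketch only; the second stage name, application, user and deploy path are illustrative, assuming the capistrano-ext gem is installed and the stage files live under config/deploy/):

# config/deploy.rb
require 'capistrano/ext/multistage'

# each stage name maps to a file under config/deploy/, e.g. config/deploy/myinstallation.rb shown above
set :stages, %w(myinstallation anotherinstallation)
set :default_stage, "myinstallation"

set :application, "myapp"    # illustrative
set :user, "myuser"          # illustrative
set :deploy_to, "/home/#{user}/#{application}"

Each stage file then only carries what differs per installation, such as the roles and :config_folder shown above.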

Related

docker-compose run multiple commands

As other answers have noted, I'm trying to run multiple commands in docker-compose, but my container exits without any errors being logged. I've tried numerous variations of this in my docker-compose file:
command: echo "the_string_for_the_file" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend
The Dockerfile command is:
CMD ["datahub-frontend/bin/datahub-frontend"]
My Real Goal
Before the application starts, I need to create a file named user.props in a location ./datahub-frontend/conf/ and add some text to that file.
Annoying Constraints
I cannot edit the Dockerfile
I cannot use a volume + some init file to do my bidding
Why? DataHub is an open source project for documenting data. I'm trying to create a very easy way for non-developers to get an instance of DataHub hosted in the cloud. The hosting we're using (AWS Elastic Beanstalk) is cool in that it will accept a docker-compose file to create a web application, but it cannot take other files (e.g. an init script). Even if it could, I want to make it really simple for folks to spin up the container: just a single docker-compose file.
Reference:
The container image is located here:
https://registry.hub.docker.com/layers/datahub-frontend-react/linkedin/datahub-frontend-react/465e2c6/images/sha256-e043edfab9526b6e7c73628fb240ca13b170fabc85ad62d9d29d800400ba9fa5?context=explore
Thanks!
You can use bash -c if your docker image has bash.
Something like this should work:
command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"
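In context, that command line sits under the service definition. A minimal sketch of the relevant part of the docker-compose file (the service name, tag and port mapping are illustrative; the image is the one linked in the question):

services:
  datahub-frontend-react:
    image: linkedin/datahub-frontend-react:latest    # illustrative tag
    ports:
      - "9002:9002"                                  # illustrative port mapping
    # overrides the image's CMD; bash runs both steps in one shell
    command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"

If the image only ships /bin/sh rather than bash, sh -c works the same way.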

A way to use puppet module (postgres example)

I'm using Vagrant to build a virtual environment. I have some questions about provisioning with Puppet. I understand that I can create my own modules or use existing modules (for example, Puppet Forge ones). To use existing modules I follow this approach:
In the Vagrantfile I install the modules I need:
config.vm.provision :shell do |shell|
  shell.inline = "mkdir -p /etc/puppet/modules;
                  puppet module install puppetlabs-postgresql"
end
and then in /puppet/manifest/site.pp
node 'db' {
  class { 'postgresql::server':
    listen_addresses  => '*',
    postgres_password => 'postgres',
  }

  postgresql::server::db { 'music':
    user     => 'post',
    password => postgresql_password('post', 'post'),
  }

  postgresql::server::pg_hba_rule { 'allow application network to access database':
    description => ....
  }
}
I have many VMs, so I have to declare in this file the configuration I need for each of them. Is this a valid way to use existing Puppet modules, or is there a different pattern to follow?
If you have different VMs to set up with different configurations, you should look at Hiera to extract the configuration into YAML files and reference each node's configuration from the Puppet manifest.
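A minimal sketch of what that could look like (the file names, datadir and hierarchy are assumptions; it relies on Puppet 3+ automatic parameter lookup and the parameter names of the postgresql class used above):

# /etc/puppet/hiera.yaml
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "%{::hostname}"
  - common

# /etc/puppet/hieradata/db.yaml -- per-node values, file name matching the node
postgresql::server::listen_addresses: '*'
postgresql::server::postgres_password: 'postgres'

With the data moved out, the node block in site.pp shrinks to little more than include postgresql::server, while resources such as postgresql::server::db stay in the manifest but can read their values from Hiera lookups.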

Ansible playbook: pipeline local cmd output (e,g. git archive) to server?

My project has a special infrastructure: the server has only an SSH connection, so I have to upload my project code to the server using SSH/SFTP every time, manually. The server cannot fetch.
Basically I need something like git archive master | ssh user@host 'tar -zxvf -' done automatically from a playbook.
I looked at the docs; local_action seems to work, but it requires a local SSH setup. Are there other ways around this?
How about something like this? You may have to tweak it to suit your needs.
tasks:
  # build the archive on the control machine (git archive needs an explicit output option)
  - shell: git archive --format=tar.gz --output=/tmp/master.tar.gz master
    delegate_to: localhost
  # unarchive copies the local archive to the remote host and unpacks it there
  - unarchive: src=/tmp/master.tar.gz dest={{ dir_to_untar }}
I still do not understand the "it requires a local ssh setup" part of your question.

How to deploy a meteor application to my own server?

How to deploy a meteor application to my own server?
flavour 1: the development and deployment server are the same;
flavour 2: the development server is one (maybe my localhost) and the deployment server is another (maybe a VPS in the cloud);
flavour 3: I want to make a "meteor hosting" domain, just like "meteor.com". Is it possible? How?
Update:
I'm running Ubuntu and I don't want to "demeteorize" the application. Thank you.
Meteor documentation currently says:
"[...] you need to provide Node.js 0.8 and a MongoDB server. You can
then run the application by invoking node, specifying the HTTP port
for the application to listen on, and the MongoDB endpoint."
So, among the several ways to install Node.js, I got it up and running by following the best advice I found, which is basically unpacking the latest version available directly from the official Node.js website, already compiled for Linux (64-bit, in my case):
# Does NOT need to be root user:
# create directory
mkdir -p ~/.nodes && cd ~/.nodes
# download latest Node.js distribution
curl -O http://nodejs.org/dist/v0.10.13/node-v0.10.13-linux-x64.tar.gz
# unpack it
tar -xzf node-v0.10.13-linux-x64.tar.gz
# discard it
rm node-v0.10.13-linux-x64.tar.gz
# rename unpacked folder
mv node-v0.10.13-linux-x64 0.10.13
# create symlink
ln -s 0.10.13 current
# add path to PATH
export PATH="$HOME/.nodes/current/bin:$PATH"
# check
node --version
npm --version
And to install MongoDB, I simply followed the instructions in the MongoDB manual available in the Documentation section of its official website:
# Needs to be root user (apply "sudo" if not at root shell)
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
apt-get update
apt-get install mongodb-10gen
The server is ready to run Meteor applications! For deployment, the main "issue" is where the "bundle" operation happens. We need to run the meteor bundle command from inside the application's source tree. For example:
cd ~/leaderboard
meteor bundle leaderboard.tar.gz
If the deployment will happen on another server (flavour 2), we need to upload the bundle tar.gz file to it, using sftp, ftp, or any other file transfer method. Once the file is there, we follow both the Meteor documentation and the README file which is magically included in the root of the bundle tree:
# unpack the bundle
tar -xvzf leaderboard.tar.gz
# discard tar.gz file
rm leaderboard.tar.gz
# rebuild native packages
pushd bundle/programs/server/node_modules
rm -r fibers
npm install fibers@1.0.1
popd
# setup environment variables
export MONGO_URL='mongodb://localhost'
export ROOT_URL='http://example.com'
export PORT=3000
# start the server
node main.js
If the deployment will be on the same server (flavour 1), the bundle tar.gz file is already there, and we don't need to recompile the native packages. (Just skip the corresponding step above.)
Cool! With these steps, I've got the "Leaderboard" example deployed to my custom server, not "meteor.com"... (only to learn and value their services!)
I still have to make it run on port 80 (I plan to use nginx for this), persist the environment variables, start Node.js detached from the terminal, et cetera... I am aware this setup is a "barely naked" one... just the base, the first step, the basic foundation stones.
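For the port 80 part, here is a minimal nginx reverse-proxy sketch (an assumption-laden example, not part of the original setup: it supposes the app listens on port 3000 as exported above and that the domain is example.com; the upgrade headers keep Meteor's websocket/DDP connections working):

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # pass websocket upgrade headers for Meteor's DDP connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}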
The application has been "manually" deployed, without taking advantage of all the meteor deploy command's magic features... I've seen people publish their "meteor.sh" and "meteoric.sh" scripts and I am following the same path... creating a script to emulate the "single command deploy" feature... aware that in the near future all this will matter only to the pioneer Meteor explorers, as it grows into a whole Galaxy, and most of these issues become an archaic thing of the past.
Anyway, I am very happy to see how fast the deployed application runs on the cheapest VPS ever, with surprisingly low latency and almost instant simultaneous updates in several distinct browsers. Fantastic!
Thank you!!!
Try Meteor Up too
With it you can deploy to any Ubuntu server. It uses the meteor build command internally and is used by many for deploying production apps.
I created Meteor Up to allow developers to deploy production-quality Meteor apps until Galaxy comes.
I would recommend flavour two with a separate deployment server. Separation of concerns leads to a more stable environment for your code and it's easier to debug.
To do it, there's the excellent Meteoric bash script that helps you deploy to Amazon's EC2 or your own server.
As for how to roll your own meteor.com, I suggest you break that out into its own Stack Overflow question as it's not related. Plus, I can't answer it :)
I did this a few days ago. I deployed my Meteor application to my own server on DigitalOcean. I used the Meteor Up tool for managing deploys and nginx on the server to serve the app.
It's very simple to use. You should install Meteor Up with the command:
npm install -g mup
Then create a folder for the deployment configuration and go to that directory. Then run the mup init command. It will create two configuration files. We are interested in the mup.json file, which holds the configuration for the deployment process. It looks like this:
{
  // Server authentication info
  "servers": [
    {
      "host": "hostname",
      "username": "root",
      "password": "password",
      // or pem file (ssh based authentication)
      //"pem": "~/.ssh/id_rsa",
      // Also, for non-standard ssh port use this
      //"sshOptions": { "port" : 49154 },
      // server specific environment variables
      "env": {}
    }
  ],

  // Install MongoDB on the server. Does not destroy the local MongoDB on future setups
  "setupMongo": true,

  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,

  // WARNING: nodeVersion defaults to 0.10.36 if omitted. Do not use v, just the version number.
  "nodeVersion": "0.10.36",

  // Install PhantomJS on the server
  "setupPhantom": true,

  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,

  // Application name (no spaces).
  "appName": "meteor",

  // Location of app (local directory). This can reference '~' as the users home directory.
  // i.e., "app": "~/Meteor/my-app",
  // This is the same as the line below.
  "app": "/Users/arunoda/Meteor/my-app",

  // Configure environment
  // ROOT_URL must be set to https://YOURDOMAIN.com when using the spiderable package & force SSL
  // your NGINX proxy or Cloudflare. When using just Meteor on SSL without spiderable this is not necessary
  "env": {
    "PORT": 80,
    "ROOT_URL": "http://myapp.com",
    "MONGO_URL": "mongodb://arunoda:fd8dsjsfh7@hanso.mongohq.com:10023/MyApp",
    "MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s@smtp.mailgun.org:587/"
  },

  // Meteor Up checks if the app comes online just after the deployment.
  // Before mup checks that, it will wait for the number of seconds configured below.
  "deployCheckWaitTime": 15
}
After you fill in all the data fields you can start the setup process with the command mup setup. It will set up your server.
After a successful setup you can deploy your app. Just type mup deploy in the console.
Another alternative is to just develop on your own server to start with.
I just created a Digital Ocean box and then connected my Cloud9 IDE account.
Now, I can develop right on the machine in a cloud IDE, and deployment is easy: just copy the files.
I created a tutorial that shows exactly how my setup works.
I had a lot of trouble with Meteor Up, so I decided to write my own deploy script. I also added additional info on how to set up nginx and mongodb. Hope it helps!
See the /sh folder in the repository.
What the script meteor-deploy.sh does (a rough sketch of the idea appears after the tested configurations below):
Select environment (./meteor-deploy.sh for staging, ./meteor-deploy.sh prod for production)
Build and bundle production version of the meteor app
Copy bundle to server
SSH into server
Do a mongodump to backup database
Stop the running app
Unpack bundle
Overwrite app files
Re-install app node package dependencies
Start the app (uses forever)
Tested for the following server configurations:
Ubuntu 14.04.4 LTS
meteor --version 1.3.2.4
node --version v0.10.41
npm --version 3.10.3
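For readers who want the gist without opening the repository, here is a rough, simplified sketch of what such a script does (not the actual meteor-deploy.sh; the app name, host, paths and the use of forever are illustrative, and the server is assumed to already export MONGO_URL, ROOT_URL and PORT):

#!/usr/bin/env bash
set -e

APP=myapp                      # illustrative
SERVER=deploy@example.com      # illustrative
APP_DIR=/opt/$APP              # illustrative

# build and bundle the production version of the meteor app (run from the app directory)
meteor build ../build --directory --server-only
tar -czf /tmp/$APP.tar.gz -C ../build bundle

# copy the bundle to the server
scp /tmp/$APP.tar.gz $SERVER:/tmp/

# on the server: back up the database, stop the app, unpack, reinstall deps, restart
ssh $SERVER "set -e
  mongodump --out /tmp/db-backup
  forever stop $APP_DIR/bundle/main.js || true
  rm -rf $APP_DIR/bundle
  tar -xzf /tmp/$APP.tar.gz -C $APP_DIR
  cd $APP_DIR/bundle/programs/server && npm install --production
  forever start $APP_DIR/bundle/main.js"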

Vagrant Box Breaking After Knife Cookbook Installs

I'm a Vagrant n00b who's having issues getting Vagrant and Chef's knife command to play nice together as I'm setting up a pretty simple CentOS LAMP box using chef-solo.
Here's a quick rundown of the issue:
I've created a basic Vagrantfile using the CentOS 6.3 w/ Chef base box on vagrantbox.es. You can see the basics in this gist.
I've downloaded all the cookbooks via knife cookbook site install nameofcookbook using a configuration that puts them in ./chef/cookbooks.
I've successfully run vagrant up to provision the box.
I've tested apache, php, etc. All good.
Now comes the trick: with the VM running, I run knife to add another package (in this case i3).
From here on, Vagrant fails to perform various tasks in the VM:
When I run vagrant provision I get an error like this
The chef binary (either `chef-solo` or `chef-client`) was not found on
the VM and is required for chef provisioning. Please verify that chef
is installed and that the binary is available on the PATH.
When I run vagrant halt I get an error that the ssh command exited with a non-zero error code.
I am able to run vagrant ssh however, and confirm that (a) chef-solo does, in fact, exist in the box and (b) I can shut down via the command line in the box.
When I run vagrant up I get an error like this:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p /vagrant
I'm stumped. I've had this happen on two boxes already, and I know that Knife and Vagrant should be able to play well together.
What am I doing wrong?
Any help much appreciated, I'm very excited about digging into Vagrant!
The sudo cookbook pulled in by chef.add_recipe "sudo" nuked your sudoers file after the first run.
Add the appropriate JSON to your Vagrantfile for the vagrant user.
Something like:
config.vm.provision :chef_solo do |chef|
  # add your recipes
  # chef.add_recipe "foo"
  # chef.add_role "bar"

  chef.json = {
    "authorization" => {
      "sudo" => {
        "users"        => [ "vagrant" ],
        "passwordless" => true,
      }
    }
  }
end