I'm using Capistrano to deploy code to my hosts. If I want to run cap roll HOSTS=server1, server2, ..., what delimiter do I use between the server names?
You have it about right: it's the comma. The whitespace is your problem. Try quoting:
cap roll HOSTS="server1, server2, server3"
or just don't use whitespace, and you won't need to quote.
cap roll HOSTS=server1,server2,server3
Alternatively, if the set of servers you're deploying to is defined as a role (and it probably should be), you can use the ROLES environment variable.
cap roll ROLES=myrole
or if you want to invoke on multiple roles at once, they can also be specified in the same style as the HOSTS variable:
cap roll ROLES=myrole1,myrole2,myrole3
(Assuming you're using Bash here; I've never had to run cap from another shell, so this may not apply if you're using something unusual like the Windows shell.)
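For reference, a role like myrole above is declared in your Capistrano config. Here's a minimal sketch, assuming Capistrano 2-style syntax and hypothetical server names:
# config/deploy.rb (Capistrano 2 syntax; server names are made up)
role :myrole, "server1", "server2", "server3"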
I need my Docker containers to connect to a different PostgreSQL server depending on the environment (test & production). What I want is to test my application locally against a local database instance, and push the fixes afterwards. From what I've read, PostgreSQL's default connection parameters can be set via environment variables, so I think writing two different environment-variable files for test/production and passing the desired one in with the --env-file option of docker run would do the trick.
Is this a suitable way to test & deploy a web application? If not, what would be a better solution?
Yes, in general this is the approach you should take when using Docker. Store your DB connection parameters (URL, username, password) in environment variables. There is no real need to use an environment file unless you have a ton of environment variables; you could also pass an arbitrary number of "-e" parameters to docker instead. This is closer to how services like Amazon's ECS will expect you to pass parameters.
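For example, here is a minimal sketch of passing libpq's standard connection variables with -e flags (the image name and all values are hypothetical):
# libpq-based clients pick up these standard variables at connection time
docker run \
  -e PGHOST=db.example.com \
  -e PGPORT=5432 \
  -e PGUSER=myapp \
  -e PGPASSWORD=secret \
  -e PGDATABASE=myapp_production \
  my-app-image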
If you're going to write those to a file, make sure the file is encrypted or otherwise protected: storing database passwords in a file in plaintext is not a great security practice.
I have a PowerShell script on the Chef server that needs to run on a remote Windows server. How can I run this PowerShell script from the Chef server on the remote Windows server?
Chef doesn't do anything like this. First, the Chef Server never remotely accesses servers directly; all it does is store data. Second, Chef doesn't really do "run a thing in a place right now". We offer workstation tools like knife ssh and knife winrm as simplistic wrappers, but they aren't made for anything complex. The Chef-y way to do this would be to make a recipe and run your script using the powershell_script resource.
Does that mean Chef is also running on the Windows server?
If so, why not use PsExec from the Windows Sysinternals PsTools?
https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
Here is my understanding of what you are trying to achieve. If I'm wrong then please correct me in a comment and I will update my answer.
You have a PowerShell script that you need to run on a specific server or set of servers.
It would be convenient to have a central management solution for running this script instead of logging into each server and running it manually.
Ergo you either need to run this script in many places when a condition isn't met (such as a file being missing), or you need to run this script often, or you need this script to run with a certain timing relative to other processes you have going on.
Without knowing precisely what you're trying to achieve with your script, the best solution I know of is to write a cookbook and do one of the following:
If your script is complex, place it in your cookbook's files folder (assuming the script will be identical on all computers it runs on) or in your cookbook's templates folder (if you will need to inject information into it at write time). You can then write the .ps1 file to the local computer during a Chef converge with one of the following code snippets. After you write it to disk you will also have to call it with one of the commands in the next bullet.
Monomorphic file:
cookbook_file '<destination>' do
  source '<filename.ps1>'
  # other options
end
Options can be found at https://docs.chef.io/resource_cookbook_file.html
Polymorphic file:
template '<destination>' do
  source '<template.ps1.erb>'
  variables(<hash of variables and values>)
  # other options
end
Options can be found at https://docs.chef.io/resource_template.html
If your script is a simple one-liner you can instead use powershell_script, powershell_out! or execute. powershell_out! has all the same options and features as the shell_out! command, plus the added advantage that your converge will pause until it receives an exit status for the command, if that is desirable. The documentation on it is a bit spotty, though, so spend some time experimenting with it; a minimal powershell_script sketch follows the links below.
https://docs.chef.io/resource_powershell_script.html
https://docs.chef.io/resource_execute.html
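Here is that minimal powershell_script sketch; the resource name and the inline PowerShell are hypothetical:
# Runs the inline PowerShell code during the converge
powershell_script 'create scratch directory' do
  code "New-Item -ItemType Directory -Path 'C:\\scratch'"
end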
Whichever option you end up going with, you will probably want to guard your resource with conditions for when it should not run: when a file already exists, a registry key is set, or whatever else your script changes that you can check for. If you truly want the script to execute on every single converge then you can skip this step, but that is a code smell and I urge you to reconsider your plans. A guard sketch follows the link below.
https://docs.chef.io/resource_common.html#guards
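For instance, a not_if guard would keep the sketch above from re-running once its directory exists (paths again hypothetical):
# Skip the resource entirely when the directory is already present
powershell_script 'create scratch directory' do
  code "New-Item -ItemType Directory -Path 'C:\\scratch'"
  not_if { ::File.directory?('C:\\scratch') }
end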
It's important to note that this is not an exhaustive list of ways to run a PowerShell script on your nodes, just a collection of common patterns I've seen.
Hope this helped.
We are dockerizing an application (written in Node.js) that will need to access some sensitive data at run time (API tokens for different services), and I can't find any recommended approach for dealing with that.
Some information:
The sensitive information is not in our codebase; it's kept in another repository in encrypted form.
On our current deployment, without Docker, we update the codebase with git, and then we manually copy the sensitive information via SSH.
The Docker images will be stored in a private, self-hosted registry.
I can think of some different approaches, but all of them have some drawbacks:
Include the sensitive information in the Docker images at build time. This is certainly the easiest one; however, it makes them available to anyone with access to the image (I don't know if we should trust the registry that much).
Like 1, but having the credentials in a data-only image.
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now. This is very convenient too, but then we can't spin up new servers easily (maybe we could use something like etcd to synchronize them?)
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
PS: I've done some research but couldn't find anything similar to my problem. Other questions (like this one) were about sensitive information needed at build time; in our case, we need the information at run time.
I've used your options 3 and 4 to solve this in the past. To rephrase/elaborate:
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now.
I use config management (Chef or Ansible) to set up the credentials on the host. If the app takes a config file needing API tokens or database credentials, I use config management to create that file from a template. Chef can read the credentials from encrypted data bag or attributes, set up the files on the host, then start the container with a volume just like you describe.
Note that in the container you may need a wrapper to run the app. The wrapper copies the config file from wherever the volume is mounted to wherever the application expects it, then starts the app.
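A minimal sketch of such a wrapper, assuming the volume is mounted at /secrets, the app reads /app/config.json, and the app is started with node (all names here are hypothetical):
#!/bin/sh
# Copy the config from the mounted volume to where the app expects it,
# then hand the process over to the app.
cp /secrets/config.json /app/config.json
exec node /app/server.js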
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
Yes, it's cumbersome to pass a bunch of env variables using -e key=value syntax, but this is how I prefer to do it. Remember the variables are still exposed to anyone with access to the Docker daemon. If your docker run command is composed programmatically it's easier.
If not, use the --env-file flag as discussed here in the Docker docs. You create a file with key=value pairs, then run a container using that file.
$ cat >> myenv << END
FOO=BAR
BAR=BAZ
END
$ docker run --env-file myenv <image>
That myenv file can be created using chef/config management as described above.
If you're hosting on AWS you can leverage KMS here. Keep either the env file or the config file (the one passed to the container in a volume) encrypted via KMS. In the container, use a wrapper script to call out to KMS, decrypt the file, move it into place, and start the app. This way the config data is never sitting on disk unencrypted.
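As a sketch, the wrapper's decrypt step might use the AWS CLI like this (the file paths are hypothetical, and the container needs IAM permission to use the key):
#!/bin/sh
# The ciphertext blob embeds the key metadata, so no key ID is needed on decrypt
aws kms decrypt --ciphertext-blob fileb:///secrets/config.json.enc \
  --output text --query Plaintext | base64 -d > /app/config.json
exec node /app/server.js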
I recently upgraded from Capistrano 2.x to 3.x and the process was fairly straightforward. But what bugs me is the loss of command-line functionality that used to exist in Capistrano 2.x. Specifically, if I wanted to deploy to a remote server via a different user than my system's currently logged-in user, I could run a command like this:
cap -s user=remote_user staging deploy
Capistrano 3.x allows one to set a specific user in the deployment scripts themselves via a configuration setting like this; note the user: option:
server 'example.com', user: 'remote_user', roles: %w{app db web}, my_property: :my_value
Which is very nice! But I do system administration on setups where multiple users (not just a generic deploy user) can deploy code that is managed by group permissions on the server. Meaning users linus, snoopy, and lucy all work on the same code, then deploy under their own unique usernames, and everything works. And no, switching this setup to use a generic deploy user is not an easy option for now, to say the least. The codebase is fairly straightforward PHP code (from a deployment standpoint) and Capistrano helps simplify the deployment workflow.
I have no desire to roll back to Capistrano 2.x. So how can I keep these users and Capistrano scripts in line with Capistrano 3.x settings but still allow everyone to independently deploy code via their own individual user accounts?
So how can I keep these users and Capistrano scripts in line with Capistrano 3.x settings but still allow users to independently deploy code via their own individual user accounts?
While not as clean as the Capistrano 2.x method of using -s user=remote_user, a similar goal of setting custom usernames can be achieved by using environment variables in Capistrano 3.x. This has been tested and works well in Capistrano 3.4.0.
The first thing to do is adjust the server settings to allow for an environment variable. So the server line that reads like this above:
server 'example.com', user: 'remote_user', roles: %w{app db web}, my_property: :my_value
Now gets changed to add an ENV["CAP_USER"] variable like this:
server 'example.com', user: ENV["CAP_USER"] || 'remote_user', roles: %w{app db web}, my_property: :my_value
The logic basically boils down to: If ENV["CAP_USER"] is set, use that. If not, just use remote_user.
And now to take advantage of that from the command line, one would just deploy like this:
export CAP_USER=remote_user && cap staging deploy
Note the export CAP_USER=remote_user is what sets the environment variable named CAP_USER, which then gets read as ENV["CAP_USER"] in the Capistrano 3.x script. The && is the Bash operator that chains it to the next command, so cap staging deploy runs only if the export succeeds. (You could also scope the variable to a single command: CAP_USER=remote_user cap staging deploy.)
If one wanted to, one could adjust ~/.bash_profile to permanently set CAP_USER to remote_user so that a plain cap staging deploy can be used as-is, without having to remember the syntax of a compound command.
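For example, a one-line sketch (substitute whatever remote username you actually deploy as):
# ~/.bash_profile: set CAP_USER for every new login shell
export CAP_USER=remote_user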
I'm running a Pylons app using FastCGI and Apache 2. There are two versions (different revisions from my SVN repo), one for staging and one for production. I'd like them to use different Paste config files.
Right now, my dispatch.fcgi inside htdocs in the pylons app just uses one config file (so both stage and live use the same configuration). I'd like to be able to have debugging enabled on the stage server but not on the live server, for example. Any suggestions?
One approach would be to have more than one dispatch.fcgi prepared (referencing different INI files), then run a script on deployment to copy the correct one into the active position.
Another approach would be to have two .fcgi files, then use an IfDefine directive to select the proper rules in your main httpd.conf.
In other words, on the staging server you start httpd with httpd -D staging, then put the staging config inside <IfDefine staging></IfDefine> and the other config inside <IfDefine !staging></IfDefine>.
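A sketch of what that could look like in httpd.conf, assuming mod_fastcgi and hypothetical dispatcher paths:
# Pick the dispatcher based on the -D flag httpd was started with
<IfDefine staging>
    FastCgiServer /var/www/app/dispatch-staging.fcgi
</IfDefine>
<IfDefine !staging>
    FastCgiServer /var/www/app/dispatch-production.fcgi
</IfDefine>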
The limitation of this approach is that since IfDefine is binary, going past two options while still having a "default" option requires a bunch of extra lines. It's not the end of the world, and if you require a parameter to be given on all deployments it stays clean.
Still, I would use option #1.